[
  {
    "path": ".gitignore",
    "content": "# Ignore macOS metadata files\n.DS_Store\n"
  },
  {
    "path": "ANNOUNCEMENT.md",
    "content": "# The Future of Ægir 3 is Bryght!\n*Announcement from Omega8.cc*\n\nOmega8.cc is now the lead development team for Ægir 3 running on BOA (Barracuda-Octopus-Ægir stack). We want to thank all past contributors who brought Ægir to life – your work makes today’s progress possible. Because of you, there is still a Bryght Future for Ægir.\n\n<img width=\"1201\" height=\"975\" alt=\"Ægir-Sector\" src=\"https://github.com/user-attachments/assets/8d444437-6931-481c-b0f7-66c09a953413\" />\n\n## What to Expect\n\n- **Active maintenance and development**: Ægir 3 on BOA is alive and under active development. [See Adam’s comments here](https://www.drupal.org/project/hostmaster/issues/3517915).\n- **Migration made easier**: We are working to make it simple to migrate entire legacy Ægir instances (Apache or Nginx) into self-hosted BOA as Octopus Ægir.\n- **Standalone Ægir with BOA features**: We are doing our best to enable many BOA-derived features within standalone Ægir, so users can pick their flavour without adopting the full BOA stack.\n- **Modern compatibility**: The BOA-based fork already supports **Drupal 11** and **PHP 8.4**, using vanilla **Drush 13** for site installs and updates while still relying on a forked and improved **Drush 8** for daily operations.\n\n## Development & Community\n\nDevelopment of BOA stack components continues here on GitHub:\n👉 https://github.com/omega8cc/boa\n\nAt the same time, we continue to leverage the Drupal.org issue queues so **the community effort continues there** too, and all BOA improvements can be systematically **backported**.\n\nÆgir is still very much alive, and together we can keep its **Bryght** Future shining!\n"
  },
  {
    "path": "BARRACUDA.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Barracuda Ægir Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport SHELL=/bin/bash\n\n###\n### Software 
versions\n###\n_ADMINER_VRN=4.8.1\n_BZR_VRN=2.6.0\n_CGP_VRN=master-22-07-2020\n_CHIVE_VRN=1.3\n_COMPOSER_VRN=2.8.2\n_CSF_VRN=15.00\n_CURL_VRN=8.20.0\n_DB_SRC=repo.percona.com\n###\n###\n_DRUSH_ELEVEN_VRN=11.6.0.9\n_DRUSH_TEN_VRN=10.6.2.9\n_DRUSH_EIGHT_VRN=8.5.0.5\n_DRUSH_EIGHT_TEST_VRN=8.5.0-force\n###\n###\n_GEOS_VRN=3.7.1\n_GIT_VRN=2.51.0\n_GOACCESS_VRN=1.9.4\n_ICU_LEGACY_VRN=52_2\n_ICU_MODERN_VRN=73-1\n_IMAGE_MAGICK_VRN=7.1.1-7\n_IMAGICK_OLD_VRN=3.1.2\n_IMAGICK_VRN=3.8.1\n_IONCUBE_VRN=15.0.0\n_JETTY_7_VRN=7.6.17.v20150415\n_JETTY_8_VRN=8.1.17.v20150415\n_JETTY_9_VRN=9.2.16.v20160414\n_JSMIN_PHP_LEGACY_VRN=2.0.1\n_JSMIN_PHP_MODERN_VRN=3.1.0\n_LIB_TIDY_VRN=5.2.0\n_LIB_YAML_VRN=0.2.5\n_LOGJ4_VRN=1.2.17\n_LSHELL_VRN=0.10\n_MAILPARSE_VRN=2.1.6\n_NEW_RELIC_VRN=12.6.0.34\n_NODE_VRN=v22.21.0\n_MONGO_VRN=1.6.14\n_MONGODB_VRN=1.2.5\n_MSS_VRN=master-29-06-2024\n_MYQUICK_VRN_ONE=0.19.3-3\n_MYQUICK_VRN_TWO=0.21.3-2\n_MYSQLTUNER_VRN=1.9.4\n_NGINX_VRN=1.29.8\n_OPENSSH_VRN=10.3p1\n_OPENSSL_LEGACY_VRN=1.0.2u\n_OPENSSL_EOL_VRN=1.1.1w\n_OPENSSL_MODERN_VRN=3.5.6\n_PERCONA_5_7_VRN=5.7\n_PERCONA_8_0_VRN=8.0\n_PERCONA_8_4_VRN=8.4\n_PHP56_API=20131226\n_PHP56_VRN=5.6.40\n_PHP70_API=20151012\n_PHP70_VRN=7.0.33\n_PHP71_API=20160303\n_PHP71_VRN=7.1.33\n_PHP72_API=20170718\n_PHP72_VRN=7.2.34\n_PHP73_API=20180731\n_PHP73_VRN=7.3.33\n_PHP74_API=20190902\n_PHP74_VRN=7.4.33\n_PHP80_API=20200930\n_PHP80_VRN=8.0.30\n_PHP81_API=20210902\n_PHP81_VRN=8.1.34\n_PHP82_API=20220829\n_PHP82_VRN=8.2.31\n_PHP83_API=20230831\n_PHP83_VRN=8.3.31\n_PHP84_API=20240924\n_PHP84_VRN=8.4.21\n_PHP85_API=20250925\n_PHP85_VRN=8.5.6\n_PHP_APCU=5.1.27\n_PHP_IGBINARY_EIGHT_FIVE=3.2.17\n_PHP_IGBINARY_THREE=3.2.16\n_PHP_IGBINARY_TWO=2.0.8\n_PHP_MCRYPT=1.0.9\n_PHPREDIS_SIX_LATEST_VRN=6.3.0\n_PHPREDIS_SIX_MODERN_VRN=6.3.0\n_PHPREDIS_SIX_LEGACY_VRN=6.0.2\n_PHPREDIS_FIVE_VRN=5.3.7\n_PHPREDIS_FOUR_VRN=4.3.0\n_PHPREDIS_THREE_VRN=3.1.6\n_PURE_FTPD_VRN=1.0.52\n_PXC_VRN=1.4.16\n_VALKEY_NINE_VRN=9.0.3\n_VALKEY_EIGHT_VRN=8.1.4\n_VALK
EY_SEVEN_VRN=7.2.11\n_REDIS_FOUR_VRN=4.0.14\n_REDIS_FIVE_VRN=5.0.9\n_REDIS_SIX_VRN=6.2.7\n_REDIS_SEVEN_VRN=7.0.15\n_RUBY_VRN=3.3.4\n_SLF4J_VRN=1.7.21\n_SOLR_1_VRN=1.4.1\n_SOLR_3_VRN=3.6.2\n_SOLR_4_VRN=4.9.1\n_SOLR_7_VRN=7.7.3\n_SOLR_9_VRN=9.8.1\n_TWIGC_VRN=1.24.0\n_UNBOUND_VRN=1.24.2\n_UPROGRESS_LEGACY_VRN=1.0.3.1\n_UPROGRESS_SEVEN_VRN=2.0.1.6\n_UPROGRESS_EIGHT_VRN=2.0.2\n_VNSTAT_VRN=2.13\n_WKHTMLTOX_VRN=12.6-1\n_YAML_PHP_LEGACY_VRN=1.3.2\n_YAML_PHP_SEVENO_VRN=2.1.0\n_YAML_PHP_MODERN_VRN=2.2.5\n_ZLIB_VRN=1.3.1\n\n\n###\n### Default variables\n###\n_CUSTOM_NAME=\"nginx\"\n_DRUSH_VERSION=\"${_DRUSH_EIGHT_VRN}\"\n_DRUSH_VERSION_TEST=\"${_DRUSH_EIGHT_TEST_VRN}\"\n_FORCE_REDIS_RESTART=NO\n_LOC_OS_CODE=\"\"\n_PURGE_ALL_THISHTIP=NO\nexport _SMALLCORE7_V=7.105.1\nexport _DRUPAL7=\"drupal-${_SMALLCORE7_V}\"\n_SPINNER=NO\n_THIS_DB_PORT=3306\nif [ -n \"${STY+x}\" ]; then\n  _SPINNER=NO\nfi\n\n\n###\n### Helper variables\n###\n_aptLiSys=\"/etc/apt/sources.list\"\n_barCnf=\"/root/.barracuda.cnf\"\n_bldPth=\"/opt/tmp/boa\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_filIncB=\"barracuda.sh.cnf\"\n_gitHub=\"https://github.com/omega8cc\"\n_gitLab=\"https://gitlab.com/omega8cc\"\n_libFnc=\"${_bldPth}/lib/functions\"\n_locCnf=\"${_bldPth}/aegir/conf\"\n_mtrInc=\"/var/aegir/config/includes\"\n_mtrNgx=\"/var/aegir/config/server_master/nginx\"\n_mtrTpl=\"/var/aegir/.drush/sys/provision/http/Provision/Config/Nginx\"\n_pthLog=\"/var/log/boa\"\n_vBs=\"/var/backups\"\n\n\n###\n### SA variables\n###\n_saCoreN=\"SA-CORE-2014-005\"\n_saCoreS=\"${_saCoreN}-D7\"\n_saIncDb=\"includes/database/database.inc\"\n_saPatch=\"/var/xdrago/conf/${_saCoreS}.patch\"\n\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; 
then\n  export TERM=vt100\nfi\n\n\n###\n### Clean pid files on exit\n###\n_clean_pid_exit() {\n  if [ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.barracuda.sh.exit.exceptions.log\n    [ -e \"/opt/tmp/boa\" ] && rm -rf /opt/tmp/*\n  fi\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  service cron start &> /dev/null\n  _CNT=$(pgrep -fc 'tee -a /var/backups/barracuda-')\n  if (( _CNT > 1 )); then\n    pkill -f 'tee -a /var/backups/barracuda-'\n  fi\n  exit 1\n}\n\n\n###\n### Panic on missing include\n###\n_panic_exit() {\n  echo\n  echo \" EXIT: Required lib file not available?\"\n  echo \" EXIT: $1\"\n  echo \" EXIT: Cannot continue\"\n  echo \" EXIT: Bye (0)\"\n  echo\n  _clean_pid_exit _panic_exit_a\n}\n\n\n###\n### Include default settings and basic functions\n###\n[ -r \"${_vBs}/${_filIncB}\" ] || _panic_exit \"${_vBs}/${_filIncB}\"\n  source \"${_vBs}/${_filIncB}\"\n\n\n###\n### Download helpers and libs\n###\nif [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n  _DB_SERVER=Percona\nelse\n  _DB_SERVER=Percona\nfi\nif [ \"$(boa info | grep -c ${_DB_SERVER})\" -lt 3 ] || [ ! -e \"/usr/sbin/csf\" ]; then\n  if [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! 
-e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    _download_helpers_libs\n  fi\nelse\n  _download_helpers_libs\nfi\n\n\n###\n### Include shared functions\n###\n_FL=\"helper dns system sql valkey redis nginx php solr master xtra firewall hotfix\"\nfor f in ${_FL}; do\n  [ -r \"${_libFnc}/${f}.sh.inc\" ] || _panic_exit \"${f}\"\n  source \"${_libFnc}/${f}.sh.inc\"\ndone\n\n\n###\n### Make sure we are running as root\n###\n_if_running_as_root_barracuda\n\n\n###\n### Welcome msg\n###\necho \" \"\n_msg \"Skynet Agent v.${_X_VERSION} on $(dmidecode -s system-manufacturer 2>&1) welcomes you aboard!\"\necho \" \"\nsleep 3\n\n\n###\n### Early procedures\n###\n_normalize_ip_name_variables\n_mode_detection\n_check_exception_mycnf\n_virt_detection\n_os_detection\n_os_detection_minimal\n_if_rebuild_src_on_major_os_upgrade\n_if_long_generate_on_major_os_upgrade\n\n\n###\n### Quick php-idle ON/OFF procedure only\n###\n_if_php_idle_on_off\n\n\n###\n### Packages install/update on init\n###\n_sources_list_update\n_basic_packages_install_on_init\n_more_packages_install_on_init\n_run_aptitude_full_upgrade\n\n\n###\n### Misc checks\n###\n_check_boa_php_compatibility\n_check_boa_version\nif [ \"${_CHECKS_REMOTE_REPOS}\" = \"YES\" ]; then\n  _check_github_for_aegir_head_mode\n  _check_db_src\n  _check_git_repos\nfi\n_check_ip_hostname\n_check_prepare_dirs_permissions\n\n\n###\n### Turn Off AppArmor temporarily while running barracuda\n###\nif [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n  [ ! 
-e \"/root/.turn_off_apparmor_in_octopus.cnf\" ] && touch /root/.turn_off_apparmor_in_octopus.cnf\nelse\n  _turn_off_apparmor_temporarily\nfi\n\n\n###\n### Optional major system upgrades\n###\n_early_sys_ctrl_mark\n_if_post_major_os_upgrade\n_if_major_os_upgrade\n_normal_sys_ctrl_mark\n\n\n###\n### Upgrade only Ægir Master Instance (obsolete mode)\n###\nif [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ]; then\n  _if_upgrade_only_aegir_master\nfi\n\n###\n### System packages install and update\n###\n_sys_packages_update\n_if_proxysql_update\n_sys_packages_install\n_java_check_fix\n_locales_check_fix\n\n\n###\n### Do not allow strong passwords until locales work properly\n###\nif [ \"${_LOCALE_TEST}\" = \"BROKEN\" ]; then\n  _STRONG_PASSWORDS=NO\nfi\n\n\n###\n### Install key packages first\n###\n_run_aptitude_full_upgrade\n_run_aptitude_deps_install\n_kill_nash\n\n\n###\n### OpenSSL modern and legacy support\n###\n_LC_SSL_CTRL=\"/root/.install.legacy.openssl.cnf\"\n_MD_SSL_CTRL=\"/root/.install.modern.openssl.cnf\"\nif [ \"${_STATUS}\" = \"INIT\" ] || [ ! -x \"/usr/local/ssl/bin/openssl\" ]; then\n  if [ -e \"${_MD_SSL_CTRL}\" ]; then\n    chattr -i ${_MD_SSL_CTRL}\n    rm -f ${_MD_SSL_CTRL}\n  fi\n  touch ${_LC_SSL_CTRL}\n  _if_ssl_install_src\n  _sync_system_ssl_certs\n  _ssl_paths_sync\n  _ssl_crypto_lib_fix\n  _curl_install_src\nfi\nif [ -x \"/usr/local/ssl/bin/openssl\" ]; then\n  [ -e \"${_LC_SSL_CTRL}\" ] && rm -f ${_LC_SSL_CTRL}\nfi\nif [ \"${_STATUS}\" = \"INIT\" ] || [ ! -x \"/usr/local/ssl3/bin/openssl\" ]; then\n  if [ ! -e \"${_LC_SSL_CTRL}\" ]; then\n    if [ ! -e \"${_MD_SSL_CTRL}\" ]; then\n      touch ${_MD_SSL_CTRL}\n      chattr +i ${_MD_SSL_CTRL}\n    fi\n  fi\nelif [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n  if [ ! -e \"/opt/php73/bin/php\" ] \\\n    && [ ! -e \"/opt/php72/bin/php\" ] \\\n    && [ ! -e \"/opt/php71/bin/php\" ] \\\n    && [ ! -e \"/opt/php70/bin/php\" ] \\\n    && [ ! -e \"/opt/php56/bin/php\" ]; then\n    if [ ! 
-e \"${_MD_SSL_CTRL}\" ] \\\n      && [ ! -e \"${_LC_SSL_CTRL}\" ]; then\n      touch ${_MD_SSL_CTRL}\n      chattr +i ${_MD_SSL_CTRL}\n    fi\n  fi\n  if [ ! -x \"/usr/local/ssl/bin/openssl\" ] \\\n    && [ -e \"${_LC_SSL_CTRL}\" ]; then\n    if [ -e \"${_MD_SSL_CTRL}\" ]; then\n      chattr -i ${_MD_SSL_CTRL}\n      rm -f ${_MD_SSL_CTRL}\n    fi\n  fi\nfi\nif [ -x \"/usr/local/ssl/bin/openssl\" ] \\\n  && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n  if [ ! -e \"${_MD_SSL_CTRL}\" ]; then\n    touch ${_MD_SSL_CTRL}\n    chattr +i ${_MD_SSL_CTRL}\n  fi\nfi\n\n\n###\n### Install OpenSSL and cURL from sources\n###\nif [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ] || [ ! -x \"/usr/local/ssl3/bin/openssl\" ]; then\n  _if_ssl_install_src\n  _sync_system_ssl_certs\n  _ssl_paths_sync\n  _ssl_crypto_lib_fix\n  _curl_install_src\nfi\n\n\n###\n### Install OpenSSH from sources\n###\nif [ \"${_SSH_FROM_SOURCES}\" = \"YES\" ] && [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ]; then\n  if [ \"${_STATUS}\" = \"INIT\" ] || [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    if [ \"${_OS_DIST}\" = \"Debian\" ] || [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      _sshd_install_src\n      _sshd_armour\n    fi\n  fi\nfi\n\n\n###\n### Install Percona server\n###\n_db_server_install\n\n\n###\n### Finalize initial Percona server and tools setup\n###\n_init_sql_root_credentials\n_sql_root_credentials_update\n_myquick_install_upgrade\n\n\n###\n### Install other services\n###\nif [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ]; then\n  _nginx_install_upgrade\n  _nginx_initd_check\n  _nginx_mime_check_fix\n  if [ \"${_VALKEY_MAJOR_RELEASE}\" = \"7\" ] \\\n    || [ \"${_VALKEY_MAJOR_RELEASE}\" = \"8\" ] \\\n    || [ \"${_VALKEY_MAJOR_RELEASE}\" = \"9\" ]; then\n    _valkey_install_upgrade\n  else\n    _redis_install_upgrade\n  fi\n  _lshell_install_upgrade\n  _magick_install_upgrade\n  _php_install_deps\n  _php_libs_fix\n  _php_if_versions_cleanup_cnf\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    
_php_ioncube_check_if_update\n    _php_check_if_rebuild\n    _mytop_install\n  fi\n  _php_install_upgrade\n  _php_config_check_update\n  _php_upgrade_all\n  _if_install_php_newrelic\n  _newrelic_check_fix\nfi\n_smtp_check\n_xdrago_install_upgrade\n_if_drupal_patches_update\n_mc_panels_ini_update\n\n\n###\n### Download system-wide Drush versions\n###\n_drush_system_install_update\n\n\n###\n### Install or upgrade Ægir Master Instance\n###\nif [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ]; then\n  _aegir_master_install_upgrade\n  _aegir_bin_extra_check_fix\n  _nginx_wildcard_ssl_install\n  _nginx_config_update_fix\n  _aegir_master_display_login_link\nfi\n\n\n###\n### Install or upgrade DNS cache server\n###\nif [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n  _dns_unbound_install_upgrade\nfi\n\n\n###\n### Install or upgrade csf/lfd monitoring\n###\nif [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n  _csf_lfd_install_upgrade\nfi\n\n\n###\n### Optional add-on services\n###\nif [ \"${_STATUS}\" = \"INIT\" ] || [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n  if [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ]; then\n    _if_install_ftpd\n    _if_install_vnstat\n    _if_install_wkhtmltox\n    _if_install_chromium\n    _if_install_git_src\n    _if_install_ffmpeg\n    _if_install_bzr\n    [ ! -e \"/root/.deny.java.cnf\" ] && _if_install_upgrade_solr\n    _if_install_adminer\n    _if_install_chive\n    _if_install_sqlbuddy\n    _if_install_collectd\n    _if_install_webmin\n    _if_install_bind\n    _if_install_ruby\n    _if_install_node\n    _sftp_ftps_modern_fix\n  fi\nfi\n\n\n###\n### Update rsyslog configuration\n###\n_rsyslog_config_update\n\n\n###\n### Install or uninstall AppArmor after barracuda install and upgrade\n###\nif [ -e \"/root/.keep_apparmor_on.cnf\" ] && [ ! -e \"/root/.deny.apparmor.cnf\" ]; then\n  [ ! -e \"/root/.allow.apparmor.cnf\" ] && touch /root/.allow.apparmor.cnf\n  if [ ! -e \"/root/.run-to-excalibur.cnf\" ] \\\n    && [ ! -e \"/root/.run-to-daedalus.cnf\" ] \\\n    && [ ! 
-e \"/root/.run-to-chimaera.cnf\" ] \\\n    && [ ! -e \"/root/.run-to-beowulf.cnf\" ]; then\n    if [ \"${_OS_CODE}\" != \"stretch\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n      _if_install_apparmor\n    fi\n  fi\nelse\n  [ -e \"/root/.allow.apparmor.cnf\" ] && rm -f /root/.allow.apparmor.cnf\n  [ ! -e \"/root/.deny.apparmor.cnf\" ] && touch /root/.deny.apparmor.cnf\n  if [ \"${_OS_CODE}\" != \"stretch\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n    _if_remove_apparmor\n  fi\nfi\n\n\n###\n### Update barracuda log, tools and system settings\n###\n_pam_umask_check_fix\n_pam_many_check_fix\n_avatars_check_fix\n_sysctl_update\n_initd_update\n_apticron_update\n_barracuda_log_update\n_find_server_city\n\n\n###\n### Complete system checks and cleanup\n###\n_complete\nexit 0\n\n\n###----------------------------------------###\n###\n###  Barracuda Ægir Installer\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "BOA.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  BOA Meta Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: run it with bash, not with sh  ###\n###----------------------------------------###\n###\n###  $ wget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n###\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\nexport _rLsn=\"BOA-5.9.1\"\nexport _bTs=591v02\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_NOW=$(date +%y%m%d-%H%M%S)\nexport _NOW=${_NOW//[^0-9-]/}\n_TODAY=$(date +%y%m%d)\nexport _TODAY=${_TODAY//[^0-9]/}\n#\n_barCnf=\"/root/.barracuda.cnf\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 
--user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_optBin=\"/opt/local/bin\"\n_usrBin=\"/usr/local/bin\"\n_xpthLog=\"/var/xdrago/log\"\n_pthLog=\"/var/log/boa\"\n_tBn=\"tools/bin\"\n_vBs=\"/var/backups\"\n_boaToolsPid=\"${_pthLog}/updateBOAtools.${_bTs}.ctrl.${_tRee}.${_xSrl}.pid\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n#\nif [ ! -e \"${_pthLog}/.migrated.txt\" ] && [ -d \"${_xpthLog}\" ]; then\n  mkdir -p \"${_pthLog}\"\n  cp -a ${_xpthLog}/*.pid ${_pthLog}/\n  cp -a ${_xpthLog}/.*pid ${_pthLog}/\n  cp -a ${_xpthLog}/*.log ${_pthLog}/\n  cp -a ${_xpthLog}/*.txt ${_pthLog}/\n  cp -a ${_xpthLog}/usage ${_pthLog}/\n  cp -a ${_xpthLog}/daily ${_pthLog}/\n  cp -a ${_xpthLog}/core  ${_pthLog}/\n  cp -a ${_xpthLog}/le    ${_pthLog}/\n  touch \"${_pthLog}/.migrated.txt\"\nfi\n[ ! -d \"${_pthLog}/usage\" ] && mkdir -p \"${_pthLog}/usage\"\n[ ! -d \"${_pthLog}/daily\" ] && mkdir -p \"${_pthLog}/daily\"\n[ ! -d \"${_pthLog}/core\" ] && mkdir -p \"${_pthLog}/core\"\n[ ! 
-d \"${_pthLog}/le\" ] && mkdir -p \"${_pthLog}/le\"\n#\n_eldirF=\"0001-Print-site_footer-if-defined.patch\"\n_eldirP=\"/var/xdrago/conf/${_eldirF}\"\n#\n_tenCorePatchFname=\"drupal-ten-aegir-core-01.patch\"\n_tenCorePatchPath=\"/data/conf/patches/${_tenCorePatchFname}\"\n#\n_tenConsolePatchFname=\"drupal-ten-aegir-console-02.patch\"\n_tenConsolePatchPath=\"/data/conf/patches/${_tenConsolePatchFname}\"\n#\n_elevenCorePatchFname=\"drupal-eleven-aegir-core-01.patch\"\n_elevenCorePatchPath=\"/data/conf/patches/${_elevenCorePatchFname}\"\n#\n_elevenConsolePatchFname=\"drupal-eleven-aegir-console-02.patch\"\n_elevenConsolePatchPath=\"/data/conf/patches/${_elevenConsolePatchFname}\"\n#\n_elevenValidatorPatchFname=\"drupal-eleven-aegir-validator-03.patch\"\n_elevenValidatorPatchPath=\"/data/conf/patches/${_elevenValidatorPatchFname}\"\n#\n_provLeInc=\"provision_hosting_le.drush.inc\"\n_provLeIncFull=\"/var/xdrago/conf/${_provLeInc}\"\n#\n_hoLeInc=\"hosting_le_vhost.drush.inc\"\n_hoLeIncFull=\"/var/xdrago/conf/${_hoLeInc}\"\n#\n_dehydName=\"dehydrated\"\n_dehydSrcPath=\"/var/xdrago/conf/${_dehydName}\"\n_legacyLeSh=\"/var/xdrago/conf/letsencrypt.sh\"\n\n_DEBUG_MODE=$([ -e \"/root/.debug-barracuda-installer.cnf\" ] && echo \"YES\" || echo \"NO\")\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_if_hosted_sys() {\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n  
if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n#\n# Find server city.\n_find_server_city() {\n  if [ -e \"/root/.found_correct_city.cnf\" ]; then\n    _LOC_CITY=$(cat /root/.found_correct_city.cnf 2>/dev/null | tr -d '\\n')\n  else\n    if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n      _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n      _LOC_CITY=$(curl ${_crlGet} ipinfo.io/${_LOC_IP}/city 2>&1)\n      _LOC_CITY=$(echo -n ${_LOC_CITY} | tr -d \"\\n\" 2>&1)\n    fi\n    if [ ! -z \"${_LOC_CITY}\" ]; then\n      _LOC_CITY=$(echo \"${_LOC_CITY}\" | tr ' ' '+' 2>&1)\n      echo ${_LOC_CITY} > /root/.found_correct_city.cnf\n    fi\n  fi\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! -z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n  if [ -n \"${_LOC_IP}\" ] && grep -qE \"${_LOC_IP}\\s\" /etc/hosts; then\n    cp -af /etc/hosts /etc/.was.hosts\n    sed -i \"s/^${_LOC_IP}.*//g\" /etc/hosts\n    [ -x \"/etc/init.d/unbound\" ] && [ ! -e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n    [ -x \"/etc/init.d/unbound\" ] && service unbound restart &> /dev/null\n  fi\n}\n\n_fix_dns_settings() {\n  [ ! -d \"${_vBs}\" ] && mkdir -p ${_vBs}\n  rm -f ${_vBs}/resolv.conf.tmp\n  if ! grep -q \"nameserver 127.0.0.1\" /etc/resolv.conf; then\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      _FORCE_RESOLV_UPDATE=YES\n    else\n      _FORCE_RESOLV_UPDATE=NO\n    fi\n  fi\n  if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf || [ \"${_FORCE_RESOLV_UPDATE}\" = \"YES\" ]; then\n    echo \"### BOA-DNS-Config ###\" > ${_vBs}/resolv.conf.tmp\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      echo \"nameserver 127.0.0.1\" >> ${_vBs}/resolv.conf.tmp\n    fi\n    echo \"nameserver 1.1.1.1\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 8.8.8.8\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 9.9.9.9\" >> ${_vBs}/resolv.conf.tmp\n  fi\n  if [ -e \"${_vBs}/resolv.conf.tmp\" ]; then\n    chattr -i /etc/resolv.conf\n    rm -f /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp /etc/resolv.conf\n    chmod 0644 /etc/resolv.conf\n    chattr +i /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp ${_vBs}/resolv.conf.vanilla\n  fi\n  if [ -x \"/usr/sbin/unbound-control\" ] \\\n    && [ -e \"/etc/resolvconf/run/interface/lo.unbound\" ]; then\n    unbound-control reload &> /dev/null\n  fi\n}\n\n_check_dns_settings() {\n  _EHU=NO\n  if ! grep -q \"127.0.0.1 localhost\" /etc/hosts; then\n    sed -i \"s/^127.0.0.1.*//g\" /etc/hosts\n    echo \"\" >> /etc/hosts\n    echo \"127.0.0.1 localhost\" >> /etc/hosts\n    _EHU=YES\n  fi\n  if grep -q \"files.aegir.cc\" /etc/hosts; then\n    sed -i \"s/.*files.aegir.cc.*//g\" /etc/hosts\n    _EHU=YES\n  fi\n  if grep -q \"github\" /etc/hosts; then\n    sed -i \"s/.*github.*//g\" /etc/hosts\n    _EHU=YES\n  fi\n  if [ \"${_EHU}\" = \"YES\" ]; then\n    echo >>/etc/hosts\n    sed -i \"/^$/d\" /etc/hosts\n  fi\n  if [ -L \"/etc/resolv.conf\" ]; then\n    _fix_dns_settings\n    return 1  # Exit the function but continue the script\n  fi\n  if [ -e \"/root/.use.default.nameservers.cnf\" ]; then\n    if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n      rm -f /root/.use.local.nameservers.cnf\n    fi\n    _USE_DEFAULT_DNS=YES\n    if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n    _USE_PROVIDER_DNS=YES\n  else\n    _REMOTE_DNS_TEST=$(host files.aegir.cc 1.1.1.1 -w 10 2>&1)\n    if ! grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [[ \"${_REMOTE_DNS_TEST}\" =~ \"no servers could be reached\" ]] \\\n    || [[ \"${_REMOTE_DNS_TEST}\" =~ \"Host files.aegir.cc not found\" ]] \\\n    || [ \"${_USE_PROVIDER_DNS}\" = \"YES\" ]; then\n    _fix_dns_settings\n  fi\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} &> /dev/null\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_extract_archive() {\n  if [ ! -z \"$1\" ]; then\n    case $1 in\n      *.tar.bz2)   tar xjf $1    ;;\n      *.tar.gz)    tar xzf $1    ;;\n      *.tar.xz)    tar xvf $1    ;;\n      *.bz2)       bunzip2 $1    ;;\n      *.rar)       unrar x $1    ;;\n      *.gz)        gunzip -q $1  ;;\n      *.tar)       tar xf $1     ;;\n      *.tbz2)      tar xjf $1    ;;\n      *.tgz)       tar xzf $1    ;;\n      *.zip)       unzip -qq $1  ;;\n      *.Z)         uncompress $1 ;;\n      *.7z)        7z x $1       ;;\n      *)           echo \"'$1' cannot be extracted via >extract<\" ;;\n    esac\n    rm -f $1\n  fi\n}\n\n#\n# Download and extract from dev/contrib mirror.\n_get_dev_contrib() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/${_tRee}/contrib/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      echo \"OOPS: Failed to download ${_urlDev}/${_tRee}/contrib/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Download and extract archive from dev/src mirror.\n_get_dev_src() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/src/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      echo \"OOPS: Failed to download ${_urlDev}/src/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n_if_clean_boa_env() {\n  if [ ! -x \"/etc/init.d/clean-boa-env\" ]; then\n    curl ${_crlGet} \"${_urlHmr}/conf/var/clean-boa-env\" -o /etc/init.d/clean-boa-env\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      chmod 700 /etc/init.d/clean-boa-env\n      chown root:root /etc/init.d/clean-boa-env\n      update-rc.d clean-boa-env defaults &> /dev/null\n    fi\n  fi\n}\n\n###\n### Function to verify BOA keys\n###\n_verify_boa_keys() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _verify_boa_keys in BOA.sh.txt\"\n  fi\n  if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n    _if_hosted_sys\n    _allw=NO\n    _urlEnc=\"http://${_USE_MIR}/enc/2024\"\n    _encName=$(echo ${_hName} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".o8.io\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".boa.io\"($) ]]; then\n      _allw=YES\n    fi\n    mkdir -p /var/opt\n    rm -f /var/opt/_encN*\n   
 curl ${_crlGet} \"${_urlEnc}/${_encName}\" -o /var/opt/_encN.${_encName}.tmp\n    wait\n    echo \"${_hName}.${_encName}\" > /var/opt/_encN_local.${_encName}.tmp\n    wait\n    if [ -e \"/var/opt/_encN.${_encName}.tmp\" ] && [ -e \"/var/opt/_encN_local.${_encName}.tmp\" ]; then\n      _diffTestIf=$(diff -w -B /var/opt/_encN.${_encName}.tmp /var/opt/_encN_local.${_encName}.tmp 2>&1)\n      if [ ! -z \"${_diffTestIf}\" ] && [ \"${_allw}\" = \"NO\" ]; then\n        echo\n        echo \"Your system requires valid license for access to ${_rLsn}-${_tRee}\"\n        echo \"Please visit https://omega8.cc/licenses to purchase your own\"\n        echo\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n        rm -f /var/opt/_encN*\n        exit 0\n      else\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n      fi\n    else\n      echo\n      echo \"Your system requires valid license to use this BOA version (${_tRee})\"\n      echo \"Unfortunately it was not possible to verify your system status\"\n      echo \"Please contact our support but visit https://omega8.cc/licenses first\"\n      echo\n      exit 0\n    fi\n  fi\n}\n\n_locales_check_fix_early() {\n  _isLoc=\"$(which locale)\"\n  if [ ! 
-x \"${_isLoc}\" ] || [ -z \"${_isLoc}\" ]; then\n    apt-get update -qq &> /dev/null\n    ${_INITINS} locales locales-all &> /dev/null\n  fi\n  _LOC_TEST=$(locale 2>&1)\n  if [[ \"${_LOC_TEST}\" =~ LANG=.*UTF-8 ]]; then\n    _LOCALE_TEST=OK\n  fi\n  if [[ \"${_LOC_TEST}\" =~ \"Cannot\" ]]; then\n    _LOCALE_TEST=BROKEN\n  fi\n  if [ \"${_LOCALE_TEST}\" = \"BROKEN\" ]; then\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n    if [[ ! \"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen &> /dev/null\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce all locale settings\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_TIME=en_US.UTF-8 \\\n      LC_MONETARY=en_US.UTF-8 \\\n      LC_MESSAGES=en_US.UTF-8 \\\n      LC_PAPER=en_US.UTF-8 \\\n      LC_NAME=en_US.UTF-8 \\\n      LC_ADDRESS=en_US.UTF-8 \\\n      LC_TELEPHONE=en_US.UTF-8 \\\n      LC_MEASUREMENT=en_US.UTF-8 \\\n      LC_IDENTIFICATION=en_US.UTF-8 \\\n      LC_ALL= &> /dev/null\n    # Define all locale settings on the fly to prevent unnecessary\n    # warnings during installation of packages.\n    export LANG=en_US.UTF-8 &> /dev/null\n    export LC_CTYPE=en_US.UTF-8 &> /dev/null\n    export LC_COLLATE=POSIX &> /dev/null\n    export LC_NUMERIC=POSIX &> /dev/null\n    export LC_TIME=en_US.UTF-8 &> /dev/null\n    export LC_MONETARY=en_US.UTF-8 &> /dev/null\n    export LC_MESSAGES=en_US.UTF-8 &> /dev/null\n    export LC_PAPER=en_US.UTF-8 &> /dev/null\n    export LC_NAME=en_US.UTF-8 &> /dev/null\n    export LC_ADDRESS=en_US.UTF-8 &> /dev/null\n    export LC_TELEPHONE=en_US.UTF-8 &> /dev/null\n    export LC_MEASUREMENT=en_US.UTF-8 &> /dev/null\n    export LC_IDENTIFICATION=en_US.UTF-8 &> /dev/null\n    export LC_ALL= &> /dev/null\n  else\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" 
/etc/locale.gen 2>&1)\n    if [[ ! \"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen &> /dev/null\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce locale settings required for consistency\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_ALL= &> /dev/null\n    # Define locale settings required for consistency also on the fly\n    export LC_COLLATE=POSIX &> /dev/null\n    export LC_NUMERIC=POSIX &> /dev/null\n    export LC_ALL= &> /dev/null\n  fi\n  _LOCALES_BASHRC_TEST=$(grep LC_COLLATE /root/.bashrc 2>&1)\n  if [[ ! \"${_LOCALES_BASHRC_TEST}\" =~ \"LC_COLLATE\" ]]; then\n    printf \"\\n\" >> /root/.bashrc\n    echo \"export LANG=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_CTYPE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_COLLATE=POSIX\" >> /root/.bashrc\n    echo \"export LC_NUMERIC=POSIX\" >> /root/.bashrc\n    echo \"export LC_TIME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MONETARY=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MESSAGES=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_PAPER=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_NAME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ADDRESS=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_TELEPHONE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MEASUREMENT=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_IDENTIFICATION=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ALL=\" >> /root/.bashrc\n    printf \"\\n\" >> /root/.bashrc\n  fi\n}\n\n_if_fix_iptables_symlinks() {\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n  if [ -x \"/sbin/iptables\" ] && [ ! -e \"/usr/sbin/iptables\" ]; then\n    ln -sfn /sbin/iptables /usr/sbin/iptables\n  fi\n  if [ -x \"/usr/sbin/iptables\" ] && [ ! 
-e \"/sbin/iptables\" ]; then\n    ln -sfn /usr/sbin/iptables /sbin/iptables\n  fi\n  if [ -x \"/sbin/iptables-save\" ] && [ ! -e \"/usr/sbin/iptables-save\" ]; then\n    ln -sfn /sbin/iptables-save /usr/sbin/iptables-save\n  fi\n  if [ -x \"/usr/sbin/iptables-save\" ] && [ ! -e \"/sbin/iptables-save\" ]; then\n    ln -sfn /usr/sbin/iptables-save /sbin/iptables-save\n  fi\n  if [ -x \"/sbin/iptables-restore\" ] && [ ! -e \"/usr/sbin/iptables-restore\" ]; then\n    ln -sfn /sbin/iptables-restore /usr/sbin/iptables-restore\n  fi\n  if [ -x \"/usr/sbin/iptables-restore\" ] && [ ! -e \"/sbin/iptables-restore\" ]; then\n    ln -sfn /usr/sbin/iptables-restore /sbin/iptables-restore\n  fi\n  if [ -x \"/sbin/ip6tables\" ] && [ ! -e \"/usr/sbin/ip6tables\" ]; then\n    ln -sfn /sbin/ip6tables /usr/sbin/ip6tables\n  fi\n  if [ -x \"/usr/sbin/ip6tables\" ] && [ ! -e \"/sbin/ip6tables\" ]; then\n    ln -sfn /usr/sbin/ip6tables /sbin/ip6tables\n  fi\n  if [ -x \"/sbin/ip6tables-save\" ] && [ ! -e \"/usr/sbin/ip6tables-save\" ]; then\n    ln -sfn /sbin/ip6tables-save /usr/sbin/ip6tables-save\n  fi\n  if [ -x \"/usr/sbin/ip6tables-save\" ] && [ ! -e \"/sbin/ip6tables-save\" ]; then\n    ln -sfn /usr/sbin/ip6tables-save /sbin/ip6tables-save\n  fi\n  if [ -x \"/sbin/ip6tables-restore\" ] && [ ! -e \"/usr/sbin/ip6tables-restore\" ]; then\n    ln -sfn /sbin/ip6tables-restore /usr/sbin/ip6tables-restore\n  fi\n  if [ -x \"/usr/sbin/ip6tables-restore\" ] && [ ! 
-e \"/sbin/ip6tables-restore\" ]; then\n    ln -sfn /usr/sbin/ip6tables-restore /sbin/ip6tables-restore\n  fi\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n}\n\n###\n### Prefer Devuan APT sources\n###\n_prefer_devuan_repositories() {\n  # Prefer Devuan; force base-files from Devuan (handles lower version vs Debian).\n  mkdir -p /etc/apt/preferences.d\n  cat >/etc/apt/preferences.d/99-prefer-devuan <<'EOF'\nPackage: *\nPin: release o=Devuan\nPin-Priority: 700\n\nPackage: base-files\nPin: release o=Devuan\nPin-Priority: 1001\nEOF\n  _apt_clean_update\n}\n\n###\n### Display unsupported VM or bare metal info\n###\n_not_supported_virt() {\n  echo\n  echo \"=== OOPS! ===\"\n  echo\n  echo \"You are running an unsupported virtualization system:\"\n  echo \"  $1\"\n  echo\n  echo \"If you wish to try BOA on this system anyway,\"\n  echo \"please create an empty control file:\"\n  echo \"  /root/.allow.any.virt.cnf\"\n  echo\n  echo \"Please be aware that it may not work at all,\"\n  echo \"or you may experience errors breaking BOA.\"\n  echo\n  echo \"WARNING! BOA IS NOT DESIGNED TO RUN DIRECTLY ON BARE METAL.\"\n  echo \"WARNING! IT IS VERY DANGEROUS AND THUS AN EXTREMELY BAD IDEA!\"\n  echo \"WARNING! 
You are free to experiment but don't expect *ANY* support.\"\n  echo\n  echo \"BOA is known to work well on:\"\n  echo\n  echo \" * Linux Containers (LXC)\"\n  echo \" * Linux KVM guest\"\n  echo \" * Microsoft Hyper-V\"\n  echo \" * OpenVZ Containers\"\n  echo \" * Parallels guest\"\n  echo \" * Red Hat KVM guest\"\n  echo \" * VirtualBox guest\"\n  echo \" * VMware ESXi guest (but excluding vCloud Air)\"\n  echo \" * VServer guest\"\n  echo \" * Xen guest fully virtualized (HVM)\"\n  echo \" * Xen guest\"\n  echo \" * Xen paravirtualized guest domain\"\n  echo\n  echo \"Bye\"\n  echo\n  exit 1\n}\n\n# --- internal: print message only in debug mode\n_msg() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"[virt-what-fix] $*\"\n  fi\n}\n\n# --- internal: run virt-what under strace and parse the helper's exec path\n_discover_with_strace() {\n  local _path_found=\"\"\n  if ! command -v strace >/dev/null 2>&1; then\n    _msg \"strace not available, skipping strace-based discovery\"\n    echo \"\"\n    return 0\n  fi\n  # Temporarily extend PATH so virt-what can exec the helper for strace to see.\n  PATH=\"${PATH}:${_CANDIDATE_PATHS}\" strace -f -qq -e trace=execve -o \"${_TRACE}\" virt-what >/dev/null 2>&1\n\n  # mawk-safe parsing: pull the first quoted arg from execve(\"…\") and check suffix\n  if [ -s \"${_TRACE}\" ]; then\n    _path_found=$(\n      awk -v n=\"${_HELPER_NAME}\" '\n        /execve\\(\"/ {\n          # Find start of execve(\" then extract up to next quote\n          i = index($0, \"execve(\\\"\")\n          if (i) {\n            s = substr($0, i + 8)         # after execve(\"\n            j = index(s, \"\\\"\")\n            if (j) {\n              p = substr(s, 1, j - 1)     # the path inside quotes\n              if (p ~ (\"/\" n \"$\")) { print p; exit }\n            }\n          }\n        }\n      ' \"${_TRACE}\"\n    )\n  fi\n  rm -f \"${_TRACE}\"\n\n  if [ -n \"${_path_found}\" ] && [ -x \"${_path_found}\" ]; then\n    _msg \"strace 
discovered helper at: ${_path_found}\"\n    echo \"${_path_found}\"\n    return 0\n  fi\n  _msg \"strace discovery failed\"\n  echo \"\"\n  return 0\n}\n\n# --- internal: dpkg-based discovery (Debian/Devuan)\n_discover_with_dpkg() {\n  local _p=\"\"\n  if command -v dpkg >/dev/null 2>&1; then\n    _p=$(dpkg -L virt-what 2>/dev/null | grep -E \"/${_HELPER_NAME}$\" | head -n1)\n    if [ -n \"${_p}\" ] && [ -x \"${_p}\" ]; then\n      _msg \"dpkg discovered helper at: ${_p}\"\n      echo \"${_p}\"\n      return 0\n    fi\n  fi\n  echo \"\"\n  return 0\n}\n\n# --- internal: filesystem search fallback (bounded)\n_discover_with_find() {\n  local _p=\"\"\n  # Keep it bounded to /usr to stay fast/noisy-free.\n  _p=$(find /usr -maxdepth 4 -type f -name \"${_HELPER_NAME}\" 2>/dev/null | head -n1)\n  if [ -n \"${_p}\" ] && [ -x \"${_p}\" ]; then\n    _msg \"find discovered helper at: ${_p}\"\n    echo \"${_p}\"\n    return 0\n  fi\n  echo \"\"\n  return 0\n}\n\n# --- main: ensure symlink\n_ensure_virt_what_helper_symlink() {\n  # If the symlink already exists and is working, nothing to do.\n  if [ -L \"${_SYMLINK}\" ] && [ -x \"${_SYMLINK}\" ] && [ -e \"$(readlink -f \"${_SYMLINK}\")\" ]; then\n    _msg \"Symlink already present and valid: ${_SYMLINK} -> $(readlink -f \"${_SYMLINK}\")\"\n    return 0\n  fi\n\n  local _helper_path=\"\"\n  _helper_path=\"$(_discover_with_strace)\"\n  if [ -z \"${_helper_path}\" ]; then\n    _helper_path=\"$(_discover_with_dpkg)\"\n  fi\n  if [ -z \"${_helper_path}\" ]; then\n    _helper_path=\"$(_discover_with_find)\"\n  fi\n\n  if [ -z \"${_helper_path}\" ]; then\n    echo \"ERROR: Could not locate ${_HELPER_NAME} anywhere under /usr.\" 1>&2\n    return 1\n  fi\n\n  # Safety: if a non-symlink file already exists at the target, back it up once.\n  if [ -e \"${_SYMLINK}\" ] && [ ! 
-L \"${_SYMLINK}\" ]; then\n    _msg \"Backing up existing non-symlink at ${_SYMLINK} to ${_SYMLINK}.orig\"\n    mv -f \"${_SYMLINK}\" \"${_SYMLINK}.orig\"\n  fi\n\n  ln -sfn \"${_helper_path}\" \"${_SYMLINK}\"\n  if [ -x \"${_SYMLINK}\" ]; then\n    _msg \"Symlink created: ${_SYMLINK} -> ${_helper_path}\"\n    return 0\n  else\n    echo \"ERROR: Failed to create working symlink ${_SYMLINK} -> ${_helper_path}\" 1>&2\n    return 2\n  fi\n}\n\n###\n### Fix VM system detection\n###\n_fix_virt_what() {\n  _VIRT_TEST=\"$(which virt-what)\"\n  if [ -n \"${_VIRT_TEST}\" ] && [ -x \"${_VIRT_TEST}\" ]; then\n    _SHELL_TEST_A=$(grep -I -o \"\\#\\!.*/usr/bin/sh\" ${_VIRT_TEST} 2>&1)\n    _SHELL_TEST_B=$(grep -I -o \"\\#\\!.*/bin/sh\" ${_VIRT_TEST} 2>&1)\n    if [[ \"${_SHELL_TEST_A}\" =~ \"/usr/bin/sh\" ]]; then\n      sed -i \"s/\\/usr\\/bin\\/sh/\\/bin\\/dash/g\" ${_VIRT_TEST}\n    fi\n    if [[ \"${_SHELL_TEST_B}\" =~ \"/bin/sh\" ]]; then\n      sed -i \"s/\\/bin\\/sh/\\/bin\\/dash/g\" ${_VIRT_TEST}\n    fi\n    _HELPER_NAME=\"virt-what-cpuid-helper\"\n    _SYMLINK=\"/usr/sbin/${_HELPER_NAME}\"\n    _TRACE=\"/tmp/virtwhat.$$.strace\"\n    # Extra dirs we temporarily expose to PATH so virt-what can exec the helper for strace discovery\n    _CANDIDATE_PATHS=\"/usr/libexec:/usr/lib/x86_64-linux-gnu:/usr/lib64/virt-what:/usr/lib/virt-what\"\n    if [ ! -e \"${_SYMLINK}\" ]; then\n      echo \"INFO: The virt-what tool requires a small update; fixing...\"\n      if ! command -v strace &> /dev/null; then\n        _apt_clean_update\n        apt-get install strace ${_aptYesUnth}\n      fi\n      _ensure_virt_what_helper_symlink\n    fi\n  fi\n}\n\n###\n### Fix or install VM system detection\n###\n_fix_or_install_virt_what() {\n  _VIRT_TEST=\"$(which virt-what)\"\n  if [ -n \"${_VIRT_TEST}\" ] && [ -x \"${_VIRT_TEST}\" ]; then\n    _fix_virt_what\n  else\n    echo \"INFO: Installing the required virt-what tool...\"\n    if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install virt-what ${_aptYesUnth}\n    wait\n    _fix_virt_what\n  fi\n}\n\n_check_virt() {\n  _fix_or_install_virt_what\n  _VIRT_TOOL=\"$(which virt-what)\"\n  if [ -x \"${_VIRT_TOOL}\" ]; then\n    _VIRT_TEST=$(virt-what)\n    _VIRT_TEST=$(echo -n ${_VIRT_TEST} | fmt -su -w 2500 2>&1)\n    if [[ \"${_VIRT_TEST}\" =~ \"program not found\" ]]; then\n      echo \"ERROR: virt-what says: ${_VIRT_TEST}\"\n      echo \"ERROR: virt-what detection fails for unknown reason\"\n    fi\n    if [ ! -e \"/root/.allow.any.virt.cnf\" ]; then\n      if [ -e \"/proc/self/status\" ]; then\n        _VS_GUEST_TEST=$(grep -E \"VxID:[[:space:]]*[0-9]{2,}$\" /proc/self/status 2> /dev/null)\n        _VS_HOST_TEST=$(grep -E \"VxID:[[:space:]]*0$\" /proc/self/status 2> /dev/null)\n      fi\n      if [ ! -z \"${_VS_HOST_TEST}\" ] || [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n        if [ -z \"${_VS_HOST_TEST}\" ] && [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n          _VIRT_IS=\"Linux VServer guest\"\n        else\n          if [ ! 
-z \"${_VS_HOST_TEST}\" ]; then\n            _not_supported_virt \"Linux VServer host\"\n          else\n            _not_supported_virt \"unknown / not a virtual machine\"\n          fi\n        fi\n      else\n        if [ -z \"${_VIRT_TEST}\" ] || [ \"${_VIRT_TEST}\" = \"0\" ]; then\n          _not_supported_virt \"unknown / not a virtual machine\"\n        elif [[ \"${_VIRT_TEST}\" =~ \"xen-dom0\" ]]; then\n          _not_supported_virt \"Xen privileged domain\"\n        elif [[ \"${_VIRT_TEST}\" =~ \"linux_vserver-host\" ]]; then\n          _not_supported_virt \"Linux VServer host\"\n        else\n          if [[ \"${_VIRT_TEST}\" =~ \"xen xen-hvm\" ]]; then\n            _VIRT_TEST=\"xen-hvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"xen xen-domU\" ]]; then\n            _VIRT_TEST=\"xen-domU\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"virtualbox kvm\" ]]; then\n            _VIRT_TEST=\"virtualbox\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"hyperv qemu\" ]]; then\n            _VIRT_TEST=\"hyperv\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"kvm aws\" ]]; then\n            _VIRT_TEST=\"kvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"redhat kvm\" ]]; then\n            _VIRT_TEST=\"redhat-kvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"openvz lxc\" ]]; then\n            _VIRT_TEST=\"openvz\"\n          fi\n          case \"${_VIRT_TEST}\" in\n            hyperv)      _VIRT_IS=\"Microsoft Hyper-V\" ;;\n            kvm)         _VIRT_IS=\"Linux KVM guest\" ;;\n            lxc)         _VIRT_IS=\"Linux Containers (LXC)\" ;;\n            openvz)      _VIRT_IS=\"OpenVZ Containers\" ;;\n            parallels)   _VIRT_IS=\"Parallels guest\" ;;\n            redhat-kvm)  _VIRT_IS=\"Red Hat KVM guest\" ;;\n            virtualbox)  _VIRT_IS=\"VirtualBox guest\" ;;\n            vmware)      _VIRT_IS=\"VMware ESXi guest\" ;;\n            xen-domU)    _VIRT_IS=\"Xen paravirtualized guest domain\" ;;\n            xen-hvm)     _VIRT_IS=\"Xen guest fully virtualized (HVM)\" ;;\n    
        xen)         _VIRT_IS=\"Xen guest\" ;;\n            *)  _not_supported_virt \"${_VIRT_TEST}\"\n            ;;\n          esac\n        fi\n      fi\n    else\n      if [ -z \"${_VIRT_TEST}\" ] || [ \"${_VIRT_TEST}\" = \"0\" ]; then\n        _VIRT_TEST=\"unknown / not a virtual machine\"\n      fi\n    fi\n  fi\n}\n\n###\n### Make local OpenSSL new/legacy ssl/certs symlinked to system ssl/certs\n###\n_fix_sync_system_ssl_certs() {\n  if [ -e \"/etc/ssl/certs/ca-certificates.crt\" ] \\\n    && [ ! -e \"/usr/local/ssl3/.old-certs\" ] \\\n    && [ -d \"/usr/local/ssl3/certs\" ] \\\n    && [ ! -L \"/usr/local/ssl3/certs\" ]; then\n    mv -f /usr/local/ssl3/certs /usr/local/ssl3/.old-certs\n    ln -sfn /etc/ssl/certs /usr/local/ssl3/certs\n  fi\n  if [ -e \"/etc/ssl/certs/ca-certificates.crt\" ] \\\n    && [ ! -e \"/usr/local/ssl/.old-certs\" ] \\\n    && [ -d \"/usr/local/ssl/certs\" ] \\\n    && [ ! -L \"/usr/local/ssl/certs\" ]; then\n    mv -f /usr/local/ssl/certs /usr/local/ssl/.old-certs\n    ln -sfn /etc/ssl/certs /usr/local/ssl/certs\n  fi\n}\n\n_update_agents() {\n\n  _if_hosted_sys\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ ! -e \"/root/.extended.firewall.exceptions.cnf\" ]; then\n      echo host8 > /root/.extended.firewall.exceptions.cnf\n    fi\n  fi\n\n  if [ \"${_VMFAMILY}\" = \"HOSTED\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -d \"/data/u\" ] \\\n    && [ -e \"/var/xdrago\" ]; then\n    [ ! 
-e \"/root/.fast.cron.cnf\" ] && echo ON > /root/.fast.cron.cnf\n    _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    _InTest=$(ls /data/disk/*/static/control/cli.info 2>/dev/null | wc -l)\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    if [ \"${_InTest}\" -lt 9 ] \\\n      && [[ ! \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n      && [[ ! \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      && [[ ! \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      && [[ ! \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      && [[ ! \"${_PrTestMonster}\" =~ \"MONSTER\" ]]; then\n      [ ! -e \"/root/.fast.cron.cnf\" ] && echo ${_InTest} > /root/.fast.cron.cnf\n      [ -e \"/root/.hr.monitor.cnf\" ] && rm -f /root/.hr.monitor.cnf\n      [ -e \"/root/.slow.cron.cnf\" ] && [ ! -e \"/root/.slow.cron.cnf.protected\" ] && rm -f /root/.slow.cron.cnf\n      [ -e \"/root/.tg.cnf\" ] && rm -f /root/.tg.cnf\n      mysql -u root -e \"SET GLOBAL max_connect_errors = 555;\"\n      mysql -u root -e \"SET GLOBAL max_connections = 111;\"\n      mysql -u root -e \"SET GLOBAL max_user_connections = 111;\"\n      mysql -u root -e \"SET GLOBAL group_concat_max_len = 10000;\"\n    fi\n    if [ \"${_InTest}\" -ge 9 ] && [ \"${_InTest}\" -le 50 ]; then\n      [ ! -e \"/root/.fast.cron.cnf\" ] && echo ${_InTest} > /root/.fast.cron.cnf\n      [ -e \"/root/.hr.monitor.cnf\" ] && rm -f /root/.hr.monitor.cnf\n      [ -e \"/root/.slow.cron.cnf\" ] && [ ! 
-e \"/root/.slow.cron.cnf.protected\" ] && rm -f /root/.slow.cron.cnf\n      [ -e \"/root/.tg.cnf\" ] && rm -f /root/.tg.cnf\n      mysql -u root -e \"SET GLOBAL max_connect_errors = 777;\"\n      mysql -u root -e \"SET GLOBAL max_connections = 555;\"\n      mysql -u root -e \"SET GLOBAL max_user_connections = 111;\"\n      mysql -u root -e \"SET GLOBAL group_concat_max_len = 10000;\"\n    fi\n    if [ \"${_InTest}\" -gt 50 ]; then\n      [ -e \"/root/.fast.cron.cnf\" ] && rm -f /root/.fast.cron.cnf\n      [ ! -e \"/root/.tg.cnf\" ] && echo ${_InTest} > /root/.tg.cnf\n      [ ! -e \"/root/.hr.monitor.cnf\" ] && echo ${_InTest} > /root/.hr.monitor.cnf\n      [ ! -e \"/root/.slow.cron.cnf\" ] && echo ${_InTest} > /root/.slow.cron.cnf\n      mysql -u root -e \"SET GLOBAL max_connect_errors = 999;\"\n      mysql -u root -e \"SET GLOBAL max_connections = 777;\"\n      mysql -u root -e \"SET GLOBAL max_user_connections = 111;\"\n      mysql -u root -e \"SET GLOBAL group_concat_max_len = 10000;\"\n    fi\n    if [[ \"${_PrTestPower}\" =~ \"POWER\" ]]; then\n      [ ! -e \"/root/.tg.cnf\" ] && echo ${_InTest} > /root/.tg.cnf\n      [ ! -e \"/root/.fast.cron.cnf\" ] && echo ${_InTest} > /root/.fast.cron.cnf\n      [ -e \"/root/.hr.monitor.cnf\" ] && rm -f /root/.hr.monitor.cnf\n      [ -e \"/root/.slow.cron.cnf\" ] && [ ! -e \"/root/.slow.cron.cnf.protected\" ] && rm -f /root/.slow.cron.cnf\n      mysql -u root -e \"SET GLOBAL max_connect_errors = 555;\"\n      mysql -u root -e \"SET GLOBAL max_connections = 333;\"\n      mysql -u root -e \"SET GLOBAL max_user_connections = 111;\"\n      mysql -u root -e \"SET GLOBAL group_concat_max_len = 10000;\"\n    fi\n    if [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]]; then\n      [ ! -e \"/root/.tg.cnf\" ] && echo ${_InTest} > /root/.tg.cnf\n      [ ! 
-e \"/root/.fast.cron.cnf\" ] && echo ${_InTest} > /root/.fast.cron.cnf\n      [ -e \"/root/.hr.monitor.cnf\" ] && rm -f /root/.hr.monitor.cnf\n      [ -e \"/root/.slow.cron.cnf\" ] && [ ! -e \"/root/.slow.cron.cnf.protected\" ] && rm -f /root/.slow.cron.cnf\n      mysql -u root -e \"SET GLOBAL max_connect_errors = 777;\"\n      mysql -u root -e \"SET GLOBAL max_connections = 555;\"\n      mysql -u root -e \"SET GLOBAL max_user_connections = 333;\"\n      mysql -u root -e \"SET GLOBAL group_concat_max_len = 10000;\"\n    fi\n    if [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]]; then\n      [ ! -e \"/root/.tg.cnf\" ] && echo ${_InTest} > /root/.tg.cnf\n      [ ! -e \"/root/.fast.cron.cnf\" ] && echo ${_InTest} > /root/.fast.cron.cnf\n      [ -e \"/root/.hr.monitor.cnf\" ] && rm -f /root/.hr.monitor.cnf\n      [ -e \"/root/.slow.cron.cnf\" ] && [ ! -e \"/root/.slow.cron.cnf.protected\" ] && rm -f /root/.slow.cron.cnf\n      mysql -u root -e \"SET GLOBAL max_connect_errors = 999;\"\n      mysql -u root -e \"SET GLOBAL max_connections = 777;\"\n      mysql -u root -e \"SET GLOBAL max_user_connections = 555;\"\n      mysql -u root -e \"SET GLOBAL group_concat_max_len = 10000;\"\n    fi\n    mysql -u root -e \"SET GLOBAL optimizer_switch='derived_merge=off';\"\n    mysql -u root -e \"SET GLOBAL sort_buffer_size = 262144;\"\n    if [ -e \"/root/.tg.cnf\" ]; then\n      if [ ! -e \"/root/.fixed_fpm_workers.cnf\" ]; then\n        sed -i \"s/^_PHP_FPM_WORKERS=.*/_PHP_FPM_WORKERS=100/g\" ${_barCnf}\n        touch /root/.fixed_fpm_workers.cnf\n      fi\n    fi\n    if [ ! 
-e \"/root/.high_traffic.cnf\" ]; then\n      echo ${_InTest} > /root/.high_traffic.cnf\n      echo ${_InTest} > /root/.no.swap.clear.cnf\n    fi\n    [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ] && rm -f /root/.randomize_duplicity_full_backup_day.cnf\n    [ -e \"/root/.skip_duplicity_monthly_cleanup.cnf\" ] && rm -f /root/.skip_duplicity_monthly_cleanup.cnf\n    [ -e \"/root/.my.batch_innodb.cnf\" ] && rm -f /root/.my.batch_innodb.cnf\n    [ -e \"/root/.batch_innodb.cnf\" ] && rm -f /root/.batch_innodb.cnf\n    [ -e \"/root/.force.drupalgeddon.cnf\" ] && rm -f /root/.force.drupalgeddon.cnf\n    [ -e \"/root/.skip_cleanup.cnf\" ] && rm -f /root/.skip_cleanup.cnf\n    [ -e \"/root/.giant_traffic.cnf\" ] && rm -f /root/.giant_traffic.cnf\n    [ -e \"/root/.default.cnf\" ] && rm -f /root/.default.cnf\n    [ -e \"/root/.debug.cnf\" ] && rm -f /root/.debug.cnf\n    if [ -e \"/data/conf/override.global.inc\" ] \\\n      && [ ! -e \"/data/conf/.prev6.override.global.inc.off\" ]; then\n      mv -f /data/conf/override.global.inc /data/conf/.prev6.override.global.inc.off\n    fi\n#     if [ ! -e \"/data/conf/override.global.inc\" ]; then\n#       echo \"<?php\" > /data/conf/override.global.inc.tmp\n#       echo \"\" >> /data/conf/override.global.inc.tmp\n#       echo \"\\$use_valkey = TRUE;\" >> /data/conf/override.global.inc.tmp\n#       chmod 644 /data/conf/override.global.inc.tmp\n#       mv -f /data/conf/override.global.inc.tmp /data/conf/override.global.inc\n#     fi\n  fi\n\n\n  if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n\n    _pthCtrl=\"/root/.remote_backups/ctrl\"\n    if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n      [ ! -e \"${_pthCtrl}\" ] && mkdir -p ${_pthCtrl}\n      [ ! -e \"/root/.remote_backups/run\" ] && mkdir -p /root/.remote_backups/run\n    else\n      rm -rf /root/.remote_backups\n    fi\n\n    [ ! -e \"/var/xdrago/monitor/check\" ] && mkdir -p /var/xdrago/monitor/check\n    [ ! 
-e \"/var/xdrago/monitor/log\" ] && mkdir -p /var/xdrago/monitor/log\n\n    if [ ! -e \"${_pthLog}/.force.f89.${_tRee}.${_xSrl}.ctrl\" ]; then\n      rm -f ${_pthLog}/*.ctrl.*.pid\n      touch ${_pthLog}/.force.f89.${_tRee}.${_xSrl}.ctrl\n    fi\n\n    [ ! -e \"/var/xdrago/checksql.pl\" ] && rm -f ${_pthLog}/checksql.pl.ctrl.*.pid\n    [ ! -e \"/var/xdrago/clear.sh\" ] && rm -f ${_pthLog}/clear.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/daily.sh\" ] && rm -f ${_pthLog}/daily.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/graceful.sh\" ] && rm -f ${_pthLog}/graceful.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/guest-fire.sh\" ] && rm -f ${_pthLog}/guest-fire.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/guest-water.sh\" ] && rm -f ${_pthLog}/guest-water.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/ip_access.sh\" ] && rm -f ${_pthLog}/ip_access.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/manage_ltd_users.sh\" ] && rm -f ${_pthLog}/manage_ltd_users.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/manage_solr_config.sh\" ] && rm -f ${_pthLog}/manage_solr_config.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/minute.sh\" ] && rm -f ${_pthLog}/minute.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/move_sql.sh\" ] && rm -f ${_pthLog}/move_sql.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/mysql_backup.sh\" ] && rm -f ${_pthLog}/mysql_backup.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/mysql_cleanup.sh\" ] && rm -f ${_pthLog}/mysql_cleanup.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/mysql_cluster_backup.sh\" ] && rm -f ${_pthLog}/mysql_cluster_backup.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/mysql_repair.sh\" ] && rm -f ${_pthLog}/mysql_repair.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/proc_num_ctrl.pl\" ] && rm -f ${_pthLog}/proc_num_ctrl.pl.ctrl.*.pid\n    [ ! -e \"/var/xdrago/purge_binlogs.sh\" ] && rm -f ${_pthLog}/purge_binlogs.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/runner.sh\" ] && rm -f ${_pthLog}/runner.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/second.sh\" ] && rm -f ${_pthLog}/second.sh.ctrl.*.pid\n    [ ! 
-e \"/var/xdrago/usage.sh\" ] && rm -f ${_pthLog}/usage.sh.ctrl.*.pid\n\n    [ ! -e \"/var/xdrago/monitor/check/java.sh\" ] && rm -f ${_pthLog}/java.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/mysql.sh\" ] && rm -f ${_pthLog}/mysql.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/nginx.sh\" ] && rm -f ${_pthLog}/nginx.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/php.sh\" ] && rm -f ${_pthLog}/php.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/valkey.sh\" ] && rm -f ${_pthLog}/valkey.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/redis.sh\" ] && rm -f ${_pthLog}/redis.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/scan_nginx.sh\" ] && rm -f ${_pthLog}/scan_nginx.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/system.sh\" ] && rm -f ${_pthLog}/system.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/unbound.sh\" ] && rm -f ${_pthLog}/unbound.sh.ctrl.*.pid\n\n    [ ! -e \"/var/xdrago/monitor/check/escapecheck.sh\" ] && rm -f ${_pthLog}/escapecheck.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/hackcheck.sh\" ] && rm -f ${_pthLog}/hackcheck.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/hackftp.sh\" ] && rm -f ${_pthLog}/hackftp.sh.ctrl.*.pid\n    [ ! -e \"/var/xdrago/monitor/check/segfault_alert.pl\" ] && rm -f ${_pthLog}/segfault_alert.pl.ctrl.*.pid\n    [ ! 
-e \"/var/xdrago/monitor/check/sqlcheck.pl\" ] && rm -f ${_pthLog}/sqlcheck.pl.ctrl.*.pid\n\n    [ -e \"/var/xdrago/proc_num_ctrl.cgi\" ] && rm -f /var/xdrago/proc_num_ctrl.cgi\n    [ -e \"/var/xdrago/checksql.cgi\" ] && rm -f /var/xdrago/checksql.cgi\n    [ -e \"/var/xdrago/mysql_hourly.sh\" ] && rm -f /var/xdrago/mysql_hourly.sh\n\n    [ -e \"/var/xdrago/monitor/check/sqlcheck\" ] && rm -f ${_pthLog}/*.ctrl.*.pid\n    [ -e \"/var/xdrago/monitor/check/sqlcheck\" ] && rm -f /var/xdrago/monitor/check/*\n\n    [ -e \"/var/xdrago/monitor/hackcheck.archive.log\" ] && rm -f /var/xdrago/monitor/.scan_nginx_arch*\n    [ -e \"/var/xdrago/monitor/hackcheck.archive.log\" ] && mv -f /var/xdrago/monitor/*.log /var/xdrago/monitor/log/\n  fi\n\n  if [ -e \"/root/.remote_backups/schedule/backup_schedule.txt\" ] \\\n    && [ -d \"/var/aegir/drush\" ]; then\n    if grep -q \"Out of memory: Killed process.*duplicity\" /var/log/iptables.log; then\n     if [ ! -e \"/root/.remote_backups/schedule/backup_schedule.txt-off\" ]; then\n       cp -a /root/.remote_backups/schedule/backup_schedule.txt /root/.remote_backups/schedule/backup_schedule.txt-off\n       echo \"# Backup schedule (service user) OFF\" > /root/.remote_backups/schedule/backup_schedule.txt\n       chattr +i /root/.remote_backups/schedule/backup_schedule.txt\n     fi\n    else\n      if [ -e \"/root/.remote_backups/schedule/backup_schedule.txt-off\" ]; then\n        chattr -i /root/.remote_backups/schedule/backup_schedule.txt\n        rm -f /root/.remote_backups/schedule/backup_schedule.txt\n        mv /root/.remote_backups/schedule/backup_schedule.txt-off /root/.remote_backups/schedule/backup_schedule.txt\n      fi\n    fi\n    if [ \"$(pgrep -fc duplicity)\" -gt 0 ] \\\n      && [ \"$(pgrep -fc dcysetup)\" -lt 1 ] \\\n      && [ \"$(pgrep -fc mybackup)\" -lt 1 ] \\\n      && [ \"$(pgrep -fc multiback)\" -lt 1 ]; then\n      pkill -9 -f duplicity\n      rm -rf /tmp/duplicity*\n      rm -rf 
/root/.cache/duplicity/*/duplicity-*tempdir\n      rm -f /root/.cache/duplicity/*/lockfile\n      echo \"$(date) Orphaned duplicity processes killed\" >> /var/log/duplicity-cleanup.log\n    fi\n  fi\n\n  if ! grep -q \"OFF\" ${_optBin}/lock.inc 2> /dev/null; then\n    rm -f ${_pthLog}/lock.inc.sh.ctrl.*\n  fi\n  if [ ! -e \"${_optBin}/lock.inc\" ] \\\n    || [ ! -e \"${_pthLog}/lock.inc.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc lock.inc)\n    if (( _CNT > 0 )); then\n      echo \"The lock.inc is running!\"\n    else\n      if [ -e \"${_optBin}/lock.inc\" ]; then\n        mv -f ${_optBin}/lock.inc ${_optBin}/lock.inc.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/lock.inc\" -o ${_optBin}/lock.inc\n      if [ -e \"${_optBin}/lock.inc\" ]; then\n        chmod 700 ${_optBin}/lock.inc\n        chown root:root ${_optBin}/lock.inc\n        touch ${_pthLog}/lock.inc.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/lock.inc.old\" ]; then\n          mv -f ${_optBin}/lock.inc.old ${_optBin}/lock.inc\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/vmnetfix\" ] \\\n    || [ ! -e \"${_pthLog}/vmnetfix.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc vmnetfix)\n    if (( _CNT > 0 )); then\n      echo \"The vmnetfix is running!\"\n    else\n      if [ ! 
-e \"/etc/init.d/networking\" ]; then\n        mkdir -p /etc/init.d\n        curl ${_crlGet} \"${_urlHmr}/conf/network/networking\" -o /etc/init.d/networking\n        chmod 0755 /etc/init.d/networking\n        chown root:root /etc/init.d/networking\n        update-rc.d networking defaults >/dev/null 2>&1 || true\n      fi\n      if [ -e \"${_optBin}/vmnetfix\" ]; then\n        mv -f ${_optBin}/vmnetfix ${_optBin}/vmnetfix.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/vmnetfix\" -o ${_optBin}/vmnetfix\n      if [ -e \"${_optBin}/vmnetfix\" ]; then\n        chmod 700 ${_optBin}/vmnetfix\n        chown root:root ${_optBin}/vmnetfix\n        touch ${_pthLog}/vmnetfix.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/vmnetfix.old\" ]; then\n          mv -f ${_optBin}/vmnetfix.old ${_optBin}/vmnetfix\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/screenfetch\" ] \\\n    || [ ! -e \"${_pthLog}/screenfetch.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc screenfetch)\n    if (( _CNT > 0 )); then\n      echo \"The screenfetch is running!\"\n    else\n      if [ -e \"${_optBin}/screenfetch\" ]; then\n        mv -f ${_optBin}/screenfetch ${_optBin}/screenfetch.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/screenfetch\" -o ${_optBin}/screenfetch\n      if [ -e \"${_optBin}/screenfetch\" ]; then\n        chmod 700 ${_optBin}/screenfetch\n        chown root:root ${_optBin}/screenfetch\n        touch ${_pthLog}/screenfetch.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/screenfetch.old\" ]; then\n          mv -f ${_optBin}/screenfetch.old ${_optBin}/screenfetch\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/fixrepo\" ] \\\n    || [ ! 
-e \"${_pthLog}/fixrepo.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc fixrepo)\n    if (( _CNT > 0 )); then\n      echo \"The fixrepo is running!\"\n    else\n      if [ -e \"${_optBin}/fixrepo\" ]; then\n        mv -f ${_optBin}/fixrepo ${_optBin}/fixrepo.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/fixrepo\" -o ${_optBin}/fixrepo\n      if [ -e \"${_optBin}/fixrepo\" ]; then\n        chmod 700 ${_optBin}/fixrepo\n        chown root:root ${_optBin}/fixrepo\n        touch ${_pthLog}/fixrepo.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/fixrepo.old\" ]; then\n          mv -f ${_optBin}/fixrepo.old ${_optBin}/fixrepo\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/renameaegirhost\" ] \\\n    || [ ! -e \"${_pthLog}/renameaegirhost.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc renameaegirhost)\n    if (( _CNT > 0 )); then\n      echo \"The renameaegirhost is running!\"\n    else\n      if [ -e \"${_optBin}/renameaegirhost\" ]; then\n        mv -f ${_optBin}/renameaegirhost ${_optBin}/renameaegirhost.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/renameaegirhost\" -o ${_optBin}/renameaegirhost\n      if [ -e \"${_optBin}/renameaegirhost\" ]; then\n        chmod 700 ${_optBin}/renameaegirhost\n        chown root:root ${_optBin}/renameaegirhost\n        touch ${_pthLog}/renameaegirhost.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/renameaegirhost.old\" ]; then\n          mv -f ${_optBin}/renameaegirhost.old ${_optBin}/renameaegirhost\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/autosymlink\" ] \\\n    || [ ! 
-e \"${_pthLog}/autosymlink.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc autosymlink)\n    if (( _CNT > 0 )); then\n      echo \"The autosymlink is running!\"\n    else\n      if [ -e \"${_optBin}/autosymlink\" ]; then\n        mv -f ${_optBin}/autosymlink ${_optBin}/autosymlink.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/autosymlink\" -o ${_optBin}/autosymlink\n      if [ -e \"${_optBin}/autosymlink\" ]; then\n        chmod 700 ${_optBin}/autosymlink\n        chown root:root ${_optBin}/autosymlink\n        touch ${_pthLog}/autosymlink.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/autosymlink.old\" ]; then\n          mv -f ${_optBin}/autosymlink.old ${_optBin}/autosymlink\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/updatesymlinks\" ] \\\n    || [ ! -e \"${_pthLog}/updatesymlinks.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc updatesymlinks)\n    if (( _CNT > 0 )); then\n      echo \"The updatesymlinks is running!\"\n    else\n      if [ -e \"${_optBin}/updatesymlinks\" ]; then\n        mv -f ${_optBin}/updatesymlinks ${_optBin}/updatesymlinks.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/updatesymlinks\" -o ${_optBin}/updatesymlinks\n      if [ -e \"${_optBin}/updatesymlinks\" ]; then\n        chmod 700 ${_optBin}/updatesymlinks\n        chown root:root ${_optBin}/updatesymlinks\n        touch ${_pthLog}/updatesymlinks.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/updatesymlinks.old\" ]; then\n          mv -f ${_optBin}/updatesymlinks.old ${_optBin}/updatesymlinks\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/aptcleanup\" ] \\\n    || [ ! 
-e \"${_pthLog}/aptcleanup.sh.ctrl.f97.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc aptcleanup)\n    if (( _CNT > 0 )); then\n      echo \"The aptcleanup is running!\"\n    else\n      if [ -e \"${_optBin}/aptcleanup\" ]; then\n        mv -f ${_optBin}/aptcleanup ${_optBin}/aptcleanup.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/aptcleanup\" -o ${_optBin}/aptcleanup\n      if [ -e \"${_optBin}/aptcleanup\" ]; then\n        chmod 700 ${_optBin}/aptcleanup\n        chown root:root ${_optBin}/aptcleanup\n        touch ${_pthLog}/aptcleanup.sh.ctrl.f97.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/aptcleanup.old\" ]; then\n          mv -f ${_optBin}/aptcleanup.old ${_optBin}/aptcleanup\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/loadguard\" ] \\\n    || [ ! -e \"${_pthLog}/loadguard.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc xloadguard)\n    if (( _CNT > 0 )); then\n      echo \"The xloadguard is running!\"\n    else\n      if [ -e \"${_optBin}/loadguard\" ]; then\n        mv -f ${_optBin}/loadguard ${_optBin}/loadguard.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/loadguard\" -o ${_optBin}/loadguard\n      if [ -e \"${_optBin}/loadguard\" ]; then\n        chmod 700 ${_optBin}/loadguard\n        chown root:root ${_optBin}/loadguard\n        touch ${_pthLog}/loadguard.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/loadguard.old\" ]; then\n          mv -f ${_optBin}/loadguard.old ${_optBin}/loadguard\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/ffmirror\" ] \\\n    || [ ! 
-e \"${_pthLog}/ffmirror.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc xffmirror)\n    if (( _CNT > 0 )); then\n      echo \"The xffmirror is running!\"\n    else\n      if [ -e \"${_optBin}/ffmirror\" ]; then\n        mv -f ${_optBin}/ffmirror ${_optBin}/ffmirror.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/ffmirror\" -o ${_optBin}/ffmirror\n      if [ -e \"${_optBin}/ffmirror\" ]; then\n        chmod 700 ${_optBin}/ffmirror\n        chown root:root ${_optBin}/ffmirror\n        touch ${_pthLog}/ffmirror.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/ffmirror.old\" ]; then\n          mv -f ${_optBin}/ffmirror.old ${_optBin}/ffmirror\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/ffdevuan\" ] \\\n    || [ ! -e \"${_pthLog}/ffdevuan.sh.ctrl.f95.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc xffdevuan)\n    if (( _CNT > 0 )); then\n      echo \"The xffdevuan is running!\"\n    else\n      if [ -e \"${_optBin}/ffdevuan\" ]; then\n        mv -f ${_optBin}/ffdevuan ${_optBin}/ffdevuan.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/ffdevuan\" -o ${_optBin}/ffdevuan\n      if [ -e \"${_optBin}/ffdevuan\" ]; then\n        chmod 700 ${_optBin}/ffdevuan\n        chown root:root ${_optBin}/ffdevuan\n        touch ${_pthLog}/ffdevuan.sh.ctrl.f95.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/ffdevuan.old\" ]; then\n          mv -f ${_optBin}/ffdevuan.old ${_optBin}/ffdevuan\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/webserver\" ] \\\n    || [ ! 
-e \"${_pthLog}/webserver.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc xwebserver)\n    if (( _CNT > 0 )); then\n      echo \"The xwebserver is running!\"\n    else\n      if [ -e \"${_optBin}/webserver\" ]; then\n        mv -f ${_optBin}/webserver ${_optBin}/webserver.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/webserver\" -o ${_optBin}/webserver\n      if [ -e \"${_optBin}/webserver\" ]; then\n        chmod 700 ${_optBin}/webserver\n        chown root:root ${_optBin}/webserver\n        touch ${_pthLog}/webserver.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/webserver.old\" ]; then\n          mv -f ${_optBin}/webserver.old ${_optBin}/webserver\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/xboa\" ] \\\n    || [ ! -e \"${_pthLog}/xboa.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc /local/bin/xboa)\n    if (( _CNT > 0 )); then\n      echo \"The xboa is running!\"\n    else\n      if [ -e \"${_optBin}/xboa\" ]; then\n        mv -f ${_optBin}/xboa ${_optBin}/xboa.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/xboa\" -o ${_optBin}/xboa\n      if [ -e \"${_optBin}/xboa\" ]; then\n        chmod 700 ${_optBin}/xboa\n        chown root:root ${_optBin}/xboa\n        touch ${_pthLog}/xboa.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/xboa.old\" ]; then\n          mv -f ${_optBin}/xboa.old ${_optBin}/xboa\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/boa\" ] \\\n    || [ ! 
-e \"${_pthLog}/boa.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc /local/bin/boa)\n    if (( _CNT > 0 )); then\n      echo \"The boa is running!\"\n    else\n      if [ -e \"${_optBin}/boa\" ]; then\n        mv -f ${_optBin}/boa ${_optBin}/boa.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/boa\" -o ${_optBin}/boa\n      if [ -e \"${_optBin}/boa\" ]; then\n        chmod 700 ${_optBin}/boa\n        chown root:root ${_optBin}/boa\n        touch ${_pthLog}/boa.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/boa.old\" ]; then\n          mv -f ${_optBin}/boa.old ${_optBin}/boa\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/barracuda\" ] \\\n    || [ ! -e \"${_pthLog}/barracuda.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc /local/bin/barracuda)\n    if (( _CNT > 0 )); then\n      echo \"The barracuda is running!\"\n    else\n      if [ -e \"${_optBin}/barracuda\" ]; then\n        mv -f ${_optBin}/barracuda ${_optBin}/barracuda.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/barracuda\" -o ${_optBin}/barracuda\n      if [ -e \"${_optBin}/barracuda\" ]; then\n        chmod 700 ${_optBin}/barracuda\n        chown root:root ${_optBin}/barracuda\n        touch ${_pthLog}/barracuda.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/barracuda.old\" ]; then\n          mv -f ${_optBin}/barracuda.old ${_optBin}/barracuda\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/octopus\" ] \\\n    || [ ! 
-e \"${_pthLog}/octopus.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc /local/bin/octopus)\n    if (( _CNT > 0 )); then\n      echo \"The octopus is running!\"\n    else\n      if [ -e \"${_optBin}/octopus\" ]; then\n        mv -f ${_optBin}/octopus ${_optBin}/octopus.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/octopus\" -o ${_optBin}/octopus\n      if [ -e \"${_optBin}/octopus\" ]; then\n        chmod 700 ${_optBin}/octopus\n        chown root:root ${_optBin}/octopus\n        touch ${_pthLog}/octopus.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/octopus.old\" ]; then\n          mv -f ${_optBin}/octopus.old ${_optBin}/octopus\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/perftest\" ] \\\n    || [ ! -L \"${_usrBin}/perftest\" ] \\\n    || [ ! -e \"${_pthLog}/perftest.sh.ctrl.f97.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc /local/bin/perftest)\n    if (( _CNT > 0 )); then\n      echo \"The perftest is running!\"\n    else\n      if [ -e \"${_optBin}/perftest\" ]; then\n        mv -f ${_optBin}/perftest ${_optBin}/perftest.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/perftest\" -o ${_optBin}/perftest\n      if [ -e \"${_optBin}/perftest\" ]; then\n        chmod 700 ${_optBin}/perftest\n        chown root:root ${_optBin}/perftest\n        ln -sf ${_optBin}/perftest ${_usrBin}/perftest\n        rm -f ${_pthLog}/perftest.sh.ctrl.*\n        touch ${_pthLog}/perftest.sh.ctrl.f97.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/perftest.old\" ]; then\n          mv -f ${_optBin}/perftest.old ${_optBin}/perftest\n        fi\n      fi\n    fi\n  fi\n\n  if [ ! -e \"${_optBin}/aptfast\" ] \\\n    || [ ! 
-e \"${_pthLog}/aptfast.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc /local/bin/aptfast)\n    if (( _CNT > 0 )); then\n      echo \"The aptfast is running!\"\n    else\n      if [ -e \"${_optBin}/aptfast\" ]; then\n        mv -f ${_optBin}/aptfast ${_optBin}/aptfast.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/aptfast\" -o ${_optBin}/aptfast\n      if [ -e \"${_optBin}/aptfast\" ]; then\n        chmod 700 ${_optBin}/aptfast\n        chown root:root ${_optBin}/aptfast\n        touch ${_pthLog}/aptfast.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/aptfast.old\" ]; then\n          mv -f ${_optBin}/aptfast.old ${_optBin}/aptfast\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! -e \"${_pthLog}/backboa.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc duplicity)\n    if (( _CNT > 0 )); then\n      echo \"The duplicity backup is running!\"\n    else\n      if [ -e \"${_optBin}/backboa\" ]; then\n        mv -f ${_optBin}/backboa ${_optBin}/backboa.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/backboa\" -o ${_optBin}/backboa\n      if [ -e \"${_optBin}/backboa\" ]; then\n        chmod 700 ${_optBin}/backboa\n        chown root:root ${_optBin}/backboa\n        touch ${_pthLog}/backboa.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/backboa.old\" ]; then\n          mv -f ${_optBin}/backboa.old ${_optBin}/backboa\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthLog}/duobackboa.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc duplicity)\n    if (( _CNT > 0 )); then\n      echo \"The duplicity backup is running!\"\n    else\n      if [ -e \"${_optBin}/duobackboa\" ]; then\n        mv -f ${_optBin}/duobackboa ${_optBin}/duobackboa.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/duobackboa\" -o ${_optBin}/duobackboa\n      if [ -e \"${_optBin}/duobackboa\" ]; then\n        chmod 700 ${_optBin}/duobackboa\n        chown root:root ${_optBin}/duobackboa\n        touch ${_pthLog}/duobackboa.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/duobackboa.old\" ]; then\n          mv -f ${_optBin}/duobackboa.old ${_optBin}/duobackboa\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"/root/.remote_backups/schedule/backup_schedule.txt\" ]; then\n    _BROKEN_UPDATE_TEST=$(grep \"Under Construction\" /root/.remote_backups/run/*.sh 2>/dev/null)\n    if [ -n \"${_BROKEN_UPDATE_TEST}\" ]; then\n      rm -f ${_pthCtrl}/*.pid\n    fi\n    _BROKEN_UPDATE_TEST=$(grep \"404 Not Found\" /root/.remote_backups/run/*.sh 2>/dev/null)\n    if [ -n \"${_BROKEN_UPDATE_TEST}\" ]; then\n      rm -f ${_pthCtrl}/*.pid\n    fi\n  fi\n\n  if [ -e \"/root/.remote_backups/schedule/backup_schedule.txt\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthCtrl}/dcysetup.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc duplicity)\n    if (( _CNT > 0 )); then\n      echo \"The duplicity backup is running!\"\n    else\n      if [ -e \"${_optBin}/dcysetup\" ]; then\n        mv -f ${_optBin}/dcysetup ${_optBin}/dcysetup.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/dcysetup\" -o ${_optBin}/dcysetup\n      if [ -e \"${_optBin}/dcysetup\" ]; then\n        chmod 700 ${_optBin}/dcysetup\n        chown root:root ${_optBin}/dcysetup\n        touch ${_pthCtrl}/dcysetup.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/dcysetup.old\" ]; then\n          mv -f ${_optBin}/dcysetup.old ${_optBin}/dcysetup\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"/root/.remote_backups/schedule/backup_schedule.txt\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! -e \"${_pthCtrl}/multiback.sh.ctrl.f37.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc duplicity)\n    if (( _CNT > 0 )); then\n      echo \"The duplicity backup is running!\"\n    else\n      if [ -e \"${_optBin}/multiback\" ]; then\n        mv -f ${_optBin}/multiback ${_optBin}/multiback.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/multiback\" -o ${_optBin}/multiback\n      if [ -e \"${_optBin}/multiback\" ]; then\n        chmod 700 ${_optBin}/multiback\n        chown root:root ${_optBin}/multiback\n        touch ${_pthCtrl}/multiback.sh.ctrl.f37.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/multiback.old\" ]; then\n          mv -f ${_optBin}/multiback.old ${_optBin}/multiback\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"/root/.remote_backups/schedule/backup_schedule.txt\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthCtrl}/mybackup.sh.ctrl.f37.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc duplicity)\n    if (( _CNT > 0 )); then\n      echo \"The duplicity backup is running!\"\n    else\n      if [ -e \"${_optBin}/mybackup\" ]; then\n        mv -f ${_optBin}/mybackup ${_optBin}/mybackup.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/mybackup\" -o ${_optBin}/mybackup\n      if [ -e \"${_optBin}/mybackup\" ]; then\n        chmod 700 ${_optBin}/mybackup\n        chown root:root ${_optBin}/mybackup\n        touch ${_pthCtrl}/mybackup.sh.ctrl.f37.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/mybackup.old\" ]; then\n          mv -f ${_optBin}/mybackup.old ${_optBin}/mybackup\n        fi\n      fi\n    fi\n  fi\n\n  if [ -d \"/root/.remote_backups/run\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! -e \"${_pthCtrl}/install_dependencies.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -e \"/root/.remote_backups/run/install_dependencies.sh\" ]; then\n      mv -f /root/.remote_backups/run/install_dependencies.sh /root/.remote_backups/run/install_dependencies.sh.old\n    fi\n    curl ${_crlGet} \"${_urlHmr}/tools/backup/run/install_dependencies.sh\" -o /root/.remote_backups/run/install_dependencies.sh\n    if [ -e \"/root/.remote_backups/run/install_dependencies.sh\" ]; then\n      chmod 700 /root/.remote_backups/run/install_dependencies.sh\n      chown root:root /root/.remote_backups/run/install_dependencies.sh\n      touch ${_pthCtrl}/install_dependencies.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\n    else\n      if [ -e \"/root/.remote_backups/run/install_dependencies.sh.old\" ]; then\n        mv -f /root/.remote_backups/run/install_dependencies.sh.old /root/.remote_backups/run/install_dependencies.sh\n      fi\n    fi\n  fi\n\n  if [ -d \"/root/.remote_backups/run\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthCtrl}/create_credentials_templates.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\" ]; then\n    rm -f /.backboa*\n    if [ -e \"/root/.remote_backups/run/create_credentials_templates.sh\" ]; then\n      mv -f /root/.remote_backups/run/create_credentials_templates.sh /root/.remote_backups/run/create_credentials_templates.sh.old\n    fi\n    curl ${_crlGet} \"${_urlHmr}/tools/backup/run/create_credentials_templates.sh\" -o /root/.remote_backups/run/create_credentials_templates.sh\n    if [ -e \"/root/.remote_backups/run/create_credentials_templates.sh\" ]; then\n      chmod 700 /root/.remote_backups/run/create_credentials_templates.sh\n      chown root:root /root/.remote_backups/run/create_credentials_templates.sh\n      touch ${_pthCtrl}/create_credentials_templates.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\n    else\n      if [ -e \"/root/.remote_backups/run/create_credentials_templates.sh.old\" ]; then\n        mv -f /root/.remote_backups/run/create_credentials_templates.sh.old /root/.remote_backups/run/create_credentials_templates.sh\n      fi\n    fi\n  fi\n\n  if [ -d \"/root/.remote_backups/run\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthCtrl}/create_global_paths_config.sh.ctrl.f44.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -e \"/root/.remote_backups/run/create_global_paths_config.sh\" ]; then\n      mv -f /root/.remote_backups/run/create_global_paths_config.sh /root/.remote_backups/run/create_global_paths_config.sh.old\n    fi\n    curl ${_crlGet} \"${_urlHmr}/tools/backup/run/create_global_paths_config.sh\" -o /root/.remote_backups/run/create_global_paths_config.sh\n    if [ -e \"/root/.remote_backups/run/create_global_paths_config.sh\" ]; then\n      chmod 700 /root/.remote_backups/run/create_global_paths_config.sh\n      chown root:root /root/.remote_backups/run/create_global_paths_config.sh\n      touch ${_pthCtrl}/create_global_paths_config.sh.ctrl.f44.${_tRee}.${_xSrl}.pid\n    else\n      if [ -e \"/root/.remote_backups/run/create_global_paths_config.sh.old\" ]; then\n        mv -f /root/.remote_backups/run/create_global_paths_config.sh.old /root/.remote_backups/run/create_global_paths_config.sh\n      fi\n    fi\n  fi\n\n  if [ -d \"/root/.remote_backups/run\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthCtrl}/create_user_paths_config.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\" ]; then\n    rm -f /.backboa*\n    if [ -e \"/root/.remote_backups/run/create_user_paths_config.sh\" ]; then\n      mv -f /root/.remote_backups/run/create_user_paths_config.sh /root/.remote_backups/run/create_user_paths_config.sh.old\n    fi\n    curl ${_crlGet} \"${_urlHmr}/tools/backup/run/create_user_paths_config.sh\" -o /root/.remote_backups/run/create_user_paths_config.sh\n    if [ -e \"/root/.remote_backups/run/create_user_paths_config.sh\" ]; then\n      chmod 700 /root/.remote_backups/run/create_user_paths_config.sh\n      chown root:root /root/.remote_backups/run/create_user_paths_config.sh\n      touch ${_pthCtrl}/create_user_paths_config.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\n    else\n      if [ -e \"/root/.remote_backups/run/create_user_paths_config.sh.old\" ]; then\n        mv -f /root/.remote_backups/run/create_user_paths_config.sh.old /root/.remote_backups/run/create_user_paths_config.sh\n      fi\n    fi\n  fi\n\n  if [ -d \"/root/.remote_backups/run\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthCtrl}/create_cron_entries.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -e \"/root/.remote_backups/run/create_cron_entries.sh\" ]; then\n      mv -f /root/.remote_backups/run/create_cron_entries.sh /root/.remote_backups/run/create_cron_entries.sh.old\n    fi\n    curl ${_crlGet} \"${_urlHmr}/tools/backup/run/create_cron_entries.sh\" -o /root/.remote_backups/run/create_cron_entries.sh\n    if [ -e \"/root/.remote_backups/run/create_cron_entries.sh\" ]; then\n      chmod 700 /root/.remote_backups/run/create_cron_entries.sh\n      chown root:root /root/.remote_backups/run/create_cron_entries.sh\n      touch ${_pthCtrl}/create_cron_entries.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\n    else\n      if [ -e \"/root/.remote_backups/run/create_cron_entries.sh.old\" ]; then\n        mv -f /root/.remote_backups/run/create_cron_entries.sh.old /root/.remote_backups/run/create_cron_entries.sh\n      fi\n    fi\n  fi\n\n  if [ -d \"/root/.remote_backups/run\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! -e \"${_pthCtrl}/create_readme.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -e \"/root/.remote_backups/run/create_readme.sh\" ]; then\n      mv -f /root/.remote_backups/run/create_readme.sh /root/.remote_backups/run/create_readme.sh.old\n    fi\n    curl ${_crlGet} \"${_urlHmr}/tools/backup/run/create_readme.sh\" -o /root/.remote_backups/run/create_readme.sh\n    if [ -e \"/root/.remote_backups/run/create_readme.sh\" ]; then\n      chmod 700 /root/.remote_backups/run/create_readme.sh\n      chown root:root /root/.remote_backups/run/create_readme.sh\n      touch ${_pthCtrl}/create_readme.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\n    else\n      if [ -e \"/root/.remote_backups/run/create_readme.sh.old\" ]; then\n        mv -f /root/.remote_backups/run/create_readme.sh.old /root/.remote_backups/run/create_readme.sh\n      fi\n    fi\n  fi\n\n  if [ -d \"/root/.remote_backups/run\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthCtrl}/create_config_readme.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -e \"/root/.remote_backups/run/create_config_readme.sh\" ]; then\n      mv -f /root/.remote_backups/run/create_config_readme.sh /root/.remote_backups/run/create_config_readme.sh.old\n    fi\n    curl ${_crlGet} \"${_urlHmr}/tools/backup/run/create_config_readme.sh\" -o /root/.remote_backups/run/create_config_readme.sh\n    if [ -e \"/root/.remote_backups/run/create_config_readme.sh\" ]; then\n      chmod 700 /root/.remote_backups/run/create_config_readme.sh\n      chown root:root /root/.remote_backups/run/create_config_readme.sh\n      touch ${_pthCtrl}/create_config_readme.sh.ctrl.f48.${_tRee}.${_xSrl}.pid\n    else\n      if [ -e \"/root/.remote_backups/run/create_config_readme.sh.old\" ]; then\n        mv -f /root/.remote_backups/run/create_config_readme.sh.old /root/.remote_backups/run/create_config_readme.sh\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/scan_nginx.sh.ctrl.f81.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/scan_nginx.sh /var/xdrago/monitor/check/scan_nginx.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/scan_nginx.sh\" -o /var/xdrago/monitor/check/scan_nginx.sh\n    if [ -e \"/var/xdrago/monitor/check/scan_nginx.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/scan_nginx.sh\n      chown root:root /var/xdrago/monitor/check/scan_nginx.sh\n      touch ${_pthLog}/scan_nginx.sh.ctrl.f81.${_tRee}.${_xSrl}.pid\n      if [ ! 
-e \"/var/xdrago/monitor/log/.scan_nginx_arch.${_xSrl}.pid\" ]; then\n        if [ -e \"/var/xdrago/monitor/scan_nginx.archive.log\" ]; then\n          mv -f /var/xdrago/monitor/scan_nginx.archive.log /var/xdrago/monitor/log/.scan_nginx_legacy.archive.f81.${_tRee}.${_xSrl}.log\n        fi\n        if [ -e \"/var/xdrago/monitor/log/scan_nginx.archive.log\" ]; then\n          mv -f /var/xdrago/monitor/log/scan_nginx.archive.log /var/xdrago/monitor/log/scan_nginx.archive.f81.${_tRee}.${_xSrl}.log\n        fi\n        rm -f /var/xdrago/monitor/log/.scan_nginx_arch*.pid\n        touch /var/xdrago/monitor/log/.scan_nginx_arch.${_xSrl}.pid\n        csf -df\n        wait\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n      if [ ! -e \"/var/xdrago/monitor/log/.hackcheck.arch.${_xSrl}.pid\" ]; then\n        if [ -e \"/var/xdrago/monitor/hackcheck.archive.log\" ]; then\n          mv -f /var/xdrago/monitor/hackcheck.archive.log /var/xdrago/monitor/log/.hackcheck_legacy.archive.f81.${_tRee}.${_xSrl}.log\n        fi\n        if [ -e \"/var/xdrago/monitor/log/hackcheck.archive.log\" ]; then\n          mv -f /var/xdrago/monitor/log/hackcheck.archive.log /var/xdrago/monitor/log/hackcheck.archive.f81.${_tRee}.${_xSrl}.log\n        fi\n        rm -f /var/xdrago/monitor/log/.hackcheck.arch*.pid\n        touch /var/xdrago/monitor/log/.hackcheck.arch.${_xSrl}.pid\n        csf -df\n        wait\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    else\n      mv -f /var/xdrago/monitor/check/scan_nginx.sh.old /var/xdrago/monitor/check/scan_nginx.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/java.sh.ctrl.f90.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/java.sh /var/xdrago/monitor/check/java.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/java.sh\" -o /var/xdrago/monitor/check/java.sh\n    if [ -e \"/var/xdrago/monitor/check/java.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/java.sh\n      chown root:root /var/xdrago/monitor/check/java.sh\n      touch ${_pthLog}/java.sh.ctrl.f90.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/java.sh.old /var/xdrago/monitor/check/java.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/mysql.sh.ctrl.f82.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/mysql.sh /var/xdrago/monitor/check/mysql.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/mysql.sh\" -o /var/xdrago/monitor/check/mysql.sh\n    if [ -e \"/var/xdrago/monitor/check/mysql.sh\" ]; then\n      if [ -e \"/root/.debug.cnf\" ] && [ ! -e \"/root/.default.cnf\" ]; then\n        _DO_NOTHING=YES\n      else\n        if [ -e \"/root/.high_load.cnf\" ] \\\n          && [ ! -e \"/root/.big_db.cnf\" ] \\\n          && [ ! -e \"/root/.tg.cnf\" ]; then\n          sed -i \"s/3600/300/g\" /var/xdrago/monitor/check/mysql.sh\n        elif [ -e \"/root/.big_db.cnf\" ] || [ -e \"/root/.tg.cnf\" ]; then\n          _DO_NOTHING=YES\n        else\n          sed -i \"s/3600/1800/g\" /var/xdrago/monitor/check/mysql.sh\n        fi\n      fi\n      chmod 700 /var/xdrago/monitor/check/mysql.sh\n      chown root:root /var/xdrago/monitor/check/mysql.sh\n      touch ${_pthLog}/mysql.sh.ctrl.f82.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/mysql.sh.old /var/xdrago/monitor/check/mysql.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/nginx.sh.ctrl.f92.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/nginx.sh /var/xdrago/monitor/check/nginx.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/nginx.sh\" -o /var/xdrago/monitor/check/nginx.sh\n    if [ -e \"/var/xdrago/monitor/check/nginx.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/nginx.sh\n      chown root:root /var/xdrago/monitor/check/nginx.sh\n      touch ${_pthLog}/nginx.sh.ctrl.f92.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/nginx.sh.old /var/xdrago/monitor/check/nginx.sh\n    fi\n  fi\n\n  if [ ! -e \"/var/xdrago/monitor/check/nginx_guard.sh\" ]; then\n    rm -f ${_pthLog}/nginx_guard.sh.ctrl.*\n  fi\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/nginx_guard.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/nginx_guard.sh /var/xdrago/monitor/check/nginx_guard.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/nginx_guard.sh\" -o /var/xdrago/monitor/check/nginx_guard.sh\n    if [ -e \"/var/xdrago/monitor/check/nginx_guard.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/nginx_guard.sh\n      chown root:root /var/xdrago/monitor/check/nginx_guard.sh\n      touch ${_pthLog}/nginx_guard.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/nginx_guard.sh.old /var/xdrago/monitor/check/nginx_guard.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/php.sh.ctrl.f85.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/php.sh /var/xdrago/monitor/check/php.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/php.sh\" -o /var/xdrago/monitor/check/php.sh\n    if [ -e \"/var/xdrago/monitor/check/php.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/php.sh\n      chown root:root /var/xdrago/monitor/check/php.sh\n      touch ${_pthLog}/php.sh.ctrl.f85.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/php.sh.old /var/xdrago/monitor/check/php.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/valkey.sh.ctrl.f87.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/valkey.sh /var/xdrago/monitor/check/valkey.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/valkey.sh\" -o /var/xdrago/monitor/check/valkey.sh\n    if [ -e \"/var/xdrago/monitor/check/valkey.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/valkey.sh\n      chown root:root /var/xdrago/monitor/check/valkey.sh\n      touch ${_pthLog}/valkey.sh.ctrl.f87.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/valkey.sh.old /var/xdrago/monitor/check/valkey.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/redis.sh.ctrl.f90.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/redis.sh /var/xdrago/monitor/check/redis.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/redis.sh\" -o /var/xdrago/monitor/check/redis.sh\n    if [ -e \"/var/xdrago/monitor/check/redis.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/redis.sh\n      chown root:root /var/xdrago/monitor/check/redis.sh\n      touch ${_pthLog}/redis.sh.ctrl.f90.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/redis.sh.old /var/xdrago/monitor/check/redis.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/system.sh.ctrl.f83.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/system.sh /var/xdrago/monitor/check/system.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/system.sh\" -o /var/xdrago/monitor/check/system.sh\n    if [ -e \"/var/xdrago/monitor/check/system.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/system.sh\n      chown root:root /var/xdrago/monitor/check/system.sh\n      touch ${_pthLog}/system.sh.ctrl.f83.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/system.sh.old /var/xdrago/monitor/check/system.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/unbound.sh.ctrl.f86.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/unbound.sh /var/xdrago/monitor/check/unbound.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/unbound.sh\" -o /var/xdrago/monitor/check/unbound.sh\n    if [ -e \"/var/xdrago/monitor/check/unbound.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/unbound.sh\n      chown root:root /var/xdrago/monitor/check/unbound.sh\n      touch ${_pthLog}/unbound.sh.ctrl.f86.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/unbound.sh.old /var/xdrago/monitor/check/unbound.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/escapecheck.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/escapecheck.sh /var/xdrago/monitor/check/escapecheck.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/escapecheck.sh\" -o /var/xdrago/monitor/check/escapecheck.sh\n    if [ -e \"/var/xdrago/monitor/check/escapecheck.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/escapecheck.sh\n      chown root:root /var/xdrago/monitor/check/escapecheck.sh\n      touch ${_pthLog}/escapecheck.sh.ctrl.f99.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/escapecheck.sh.old /var/xdrago/monitor/check/escapecheck.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/hackcheck.sh.ctrl.f95.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/hackcheck.sh /var/xdrago/monitor/check/hackcheck.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/hackcheck.sh\" -o /var/xdrago/monitor/check/hackcheck.sh\n    if [ -e \"/var/xdrago/monitor/check/hackcheck.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/hackcheck.sh\n      chown root:root /var/xdrago/monitor/check/hackcheck.sh\n      touch ${_pthLog}/hackcheck.sh.ctrl.f95.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/hackcheck.sh.old /var/xdrago/monitor/check/hackcheck.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/hackftp.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/hackftp.sh /var/xdrago/monitor/check/hackftp.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/hackftp.sh\" -o /var/xdrago/monitor/check/hackftp.sh\n    if [ -e \"/var/xdrago/monitor/check/hackftp.sh\" ]; then\n      chmod 700 /var/xdrago/monitor/check/hackftp.sh\n      chown root:root /var/xdrago/monitor/check/hackftp.sh\n      touch ${_pthLog}/hackftp.sh.ctrl.f98.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/hackftp.sh.old /var/xdrago/monitor/check/hackftp.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/segfault_alert.pl.ctrl.f94.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/segfault_alert.pl /var/xdrago/monitor/check/segfault_alert.pl.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/segfault_alert.pl\" -o /var/xdrago/monitor/check/segfault_alert.pl\n    if [ -e \"/var/xdrago/monitor/check/segfault_alert.pl\" ]; then\n      chmod 700 /var/xdrago/monitor/check/segfault_alert.pl\n      chown root:root /var/xdrago/monitor/check/segfault_alert.pl\n      touch ${_pthLog}/segfault_alert.pl.ctrl.f94.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/segfault_alert.pl.old /var/xdrago/monitor/check/segfault_alert.pl\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/sqlcheck.pl.ctrl.f94.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/monitor/check/sqlcheck.pl /var/xdrago/monitor/check/sqlcheck.pl.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/monitor/check/sqlcheck.pl\" -o /var/xdrago/monitor/check/sqlcheck.pl\n    if [ -e \"/var/xdrago/monitor/check/sqlcheck.pl\" ]; then\n      chmod 700 /var/xdrago/monitor/check/sqlcheck.pl\n      chown root:root /var/xdrago/monitor/check/sqlcheck.pl\n      touch ${_pthLog}/sqlcheck.pl.ctrl.f94.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/monitor/check/sqlcheck.pl.old /var/xdrago/monitor/check/sqlcheck.pl\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ ! 
-e \"${_pthLog}/cv-phar-symlink.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -x \"/usr/local/bin/cv.phar\" ] \\\n      && [ -L \"/usr/bin/cv\" ]; then\n      _CV_SYMLINK=$(readlink -n /usr/bin/cv 2>&1)\n      _CV_SYMLINK=$(echo -n ${_CV_SYMLINK} | tr -d \"\\n\" 2>&1)\n      if [ \"${_CV_SYMLINK}\" != \"/usr/local/bin/cv.phar\" ]; then\n        rm -f /usr/bin/cv\n        ln -sfn /usr/local/bin/cv.phar /usr/bin/cv\n        touch ${_pthLog}/cv-phar-symlink.ctrl.${_tRee}.${_xSrl}.pid\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ ! -e \"${_pthLog}/drush8-classic-symlink.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n      && [ -L \"/usr/bin/drush8\" ]; then\n      _DRUSH_SYMLINK=$(readlink -n /usr/bin/drush8 2>&1)\n      _DRUSH_SYMLINK=$(echo -n ${_DRUSH_SYMLINK} | tr -d \"\\n\" 2>&1)\n      if [ \"${_DRUSH_SYMLINK}\" != \"/opt/tools/drush/8/drush/drush.php\" ]; then\n        rm -f /usr/bin/drush8\n        rm -f /usr/bin/drush\n        ln -sfn /opt/tools/drush/8/drush/drush.php /usr/bin/drush8\n        ln -sfn /opt/tools/drush/8/drush/drush.php /usr/bin/drush\n        touch ${_pthLog}/drush8-classic-symlink.ctrl.${_tRee}.${_xSrl}.pid\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/clean-boa-env.ctrl.f97.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /etc/init.d/clean-boa-env /var/xdrago/clean-boa-env.old\n    curl ${_crlGet} \"${_urlHmr}/conf/var/clean-boa-env\" -o /etc/init.d/clean-boa-env\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      chmod 700 /etc/init.d/clean-boa-env\n      chown root:root /etc/init.d/clean-boa-env\n      touch ${_pthLog}/clean-boa-env.ctrl.f97.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/clean-boa-env.old /etc/init.d/clean-boa-env\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/mysql_backup.sh.ctrl.f88.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc mysql_backup.sh)\n    if (( _CNT > 0 )); then\n      echo \"The mysql_backup.sh is running!\"\n    else\n      mv -f /var/xdrago/mysql_backup.sh /var/xdrago/mysql_backup.sh.old\n      curl ${_crlGet} \"${_urlHmr}/tools/system/mysql_backup.sh\" -o /var/xdrago/mysql_backup.sh\n      if [ -e \"/var/xdrago/mysql_backup.sh\" ]; then\n        chmod 700 /var/xdrago/mysql_backup.sh\n        chown root:root /var/xdrago/mysql_backup.sh\n        touch ${_pthLog}/mysql_backup.sh.ctrl.f88.${_xSrl}.pid\n      else\n        mv -f /var/xdrago/mysql_backup.sh.old /var/xdrago/mysql_backup.sh\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/unbound-helper.ctrl.f95.${_xSrl}.pid\" ]; then\n    mv -f /usr/libexec/unbound-helper /usr/libexec/unbound-helper.old\n    curl ${_crlGet} \"${_urlHmr}/conf/dns/unbound-helper\" -o /usr/libexec/unbound-helper\n    if [ -e \"/usr/libexec/unbound-helper\" ]; then\n      chmod 755 /usr/libexec/unbound-helper\n      chown root:root /usr/libexec/unbound-helper\n      touch ${_pthLog}/unbound-helper.ctrl.f95.${_xSrl}.pid\n    else\n      mv -f /usr/libexec/unbound-helper.old /usr/libexec/unbound-helper\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/mysql_cleanup.sh.ctrl.f92.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/mysql_cleanup.sh /var/xdrago/mysql_cleanup.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/mysql_cleanup.sh\" -o /var/xdrago/mysql_cleanup.sh\n    if [ -e \"/var/xdrago/mysql_cleanup.sh\" ]; then\n      chmod 700 /var/xdrago/mysql_cleanup.sh\n      chown root:root /var/xdrago/mysql_cleanup.sh\n      touch ${_pthLog}/mysql_cleanup.sh.ctrl.f92.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/mysql_cleanup.sh.old /var/xdrago/mysql_cleanup.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/mysql_cluster_backup.sh.ctrl.f91.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc mysql_cluster_backup.sh)\n    if (( _CNT > 0 )); then\n      echo \"The mysql_cluster_backup.sh is running!\"\n    else\n      mv -f /var/xdrago/mysql_cluster_backup.sh /var/xdrago/mysql_cluster_backup.sh.old\n      curl ${_crlGet} \"${_urlHmr}/tools/system/mysql_cluster_backup.sh\" -o /var/xdrago/mysql_cluster_backup.sh\n      if [ -e \"/var/xdrago/mysql_cluster_backup.sh\" ]; then\n        chmod 700 /var/xdrago/mysql_cluster_backup.sh\n        chown root:root /var/xdrago/mysql_cluster_backup.sh\n        touch ${_pthLog}/mysql_cluster_backup.sh.ctrl.f91.${_tRee}.${_xSrl}.pid\n      else\n        mv -f /var/xdrago/mysql_cluster_backup.sh.old /var/xdrago/mysql_cluster_backup.sh\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/runner.sh.ctrl.f86.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/runner.sh /var/xdrago/runner.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/runner.sh\" -o /var/xdrago/runner.sh\n    if [ -e \"/var/xdrago/runner.sh\" ]; then\n      chmod 700 /var/xdrago/runner.sh\n      chown root:root /var/xdrago/runner.sh\n      touch ${_pthLog}/runner.sh.ctrl.f86.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/runner.sh.old /var/xdrago/runner.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/minute.sh.ctrl.f91.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/minute.sh /var/xdrago/minute.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/minute.sh\" -o /var/xdrago/minute.sh\n    if [ -e \"/var/xdrago/minute.sh\" ]; then\n      chmod 700 /var/xdrago/minute.sh\n      chown root:root /var/xdrago/minute.sh\n      touch ${_pthLog}/minute.sh.ctrl.f91.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/minute.sh.old /var/xdrago/minute.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/second.sh.ctrl.f88.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/second.sh /var/xdrago/second.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/second.sh\" -o /var/xdrago/second.sh\n    if [ -e \"/var/xdrago/second.sh\" ]; then\n      chmod 700 /var/xdrago/second.sh\n      chown root:root /var/xdrago/second.sh\n      touch ${_pthLog}/second.sh.ctrl.f88.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/second.sh.old /var/xdrago/second.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/ip_access.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/ip_access.sh /var/xdrago/ip_access.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/ip_access.sh\" -o /var/xdrago/ip_access.sh\n    if [ -e \"/var/xdrago/ip_access.sh\" ]; then\n      chmod 700 /var/xdrago/ip_access.sh\n      chown root:root /var/xdrago/ip_access.sh\n      touch ${_pthLog}/ip_access.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/ip_access.sh.old /var/xdrago/ip_access.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/move_sql.sh.ctrl.f90.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/move_sql.sh /var/xdrago/move_sql.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/move_sql.sh\" -o /var/xdrago/move_sql.sh\n    if [ -e \"/var/xdrago/move_sql.sh\" ]; then\n      chmod 700 /var/xdrago/move_sql.sh\n      chown root:root /var/xdrago/move_sql.sh\n      touch ${_pthLog}/move_sql.sh.ctrl.f90.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/move_sql.sh.old /var/xdrago/move_sql.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/mysql_repair.sh.ctrl.f95.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/mysql_repair.sh /var/xdrago/mysql_repair.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/mysql_repair.sh\" -o /var/xdrago/mysql_repair.sh\n    if [ -e \"/var/xdrago/mysql_repair.sh\" ]; then\n      chmod 700 /var/xdrago/mysql_repair.sh\n      chown root:root /var/xdrago/mysql_repair.sh\n      touch ${_pthLog}/mysql_repair.sh.ctrl.f95.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/mysql_repair.sh.old /var/xdrago/mysql_repair.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/purge_binlogs.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/purge_binlogs.sh /var/xdrago/purge_binlogs.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/purge_binlogs.sh\" -o /var/xdrago/purge_binlogs.sh\n    if [ -e \"/var/xdrago/purge_binlogs.sh\" ]; then\n      chmod 700 /var/xdrago/purge_binlogs.sh\n      chown root:root /var/xdrago/purge_binlogs.sh\n      touch ${_pthLog}/purge_binlogs.sh.ctrl.f93.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/purge_binlogs.sh.old /var/xdrago/purge_binlogs.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/checksql.pl.ctrl.f95.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/checksql.pl /var/xdrago/checksql.pl.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/checksql.pl\" -o /var/xdrago/checksql.pl\n    if [ -e \"/var/xdrago/checksql.pl\" ]; then\n      chmod 700 /var/xdrago/checksql.pl\n      chown root:root /var/xdrago/checksql.pl\n      touch ${_pthLog}/checksql.pl.ctrl.f95.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/checksql.pl.old /var/xdrago/checksql.pl\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/clear.sh.ctrl.f85.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/clear.sh /var/xdrago/clear.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/clear.sh\" -o /var/xdrago/clear.sh\n    if [ -e \"/var/xdrago/clear.sh\" ]; then\n      chmod 700 /var/xdrago/clear.sh\n      chown root:root /var/xdrago/clear.sh\n      touch ${_pthLog}/clear.sh.ctrl.f85.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/clear.sh.old /var/xdrago/clear.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthLog}/autoupboa.ctrl.f76.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc autoupboa)\n    if (( _CNT > 0 )); then\n      echo \"The autoupboa is running!\"\n    else\n      if [ -e \"${_optBin}/autoupboa\" ]; then\n        mv -f ${_optBin}/autoupboa ${_optBin}/autoupboa.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/autoupboa\" -o ${_optBin}/autoupboa\n      if [ -e \"${_optBin}/autoupboa\" ]; then\n        chmod 700 ${_optBin}/autoupboa\n        chown root:root ${_optBin}/autoupboa\n        touch ${_pthLog}/autoupboa.ctrl.f76.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/autoupboa.old\" ]; then\n          mv -f ${_optBin}/autoupboa.old ${_optBin}/autoupboa\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! -e \"${_pthLog}/fixmounts.ctrl.f98.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc fixmounts)\n    if (( _CNT > 0 )); then\n      echo \"The fixmounts is running!\"\n    else\n      if [ -e \"${_optBin}/fixmounts\" ]; then\n        mv -f ${_optBin}/fixmounts ${_optBin}/fixmounts.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/fixmounts\" -o ${_optBin}/fixmounts\n      if [ -e \"${_optBin}/fixmounts\" ]; then\n        chmod 700 ${_optBin}/fixmounts\n        chown root:root ${_optBin}/fixmounts\n        touch ${_pthLog}/fixmounts.ctrl.f98.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/fixmounts.old\" ]; then\n          mv -f ${_optBin}/fixmounts.old ${_optBin}/fixmounts\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/daily.sh.ctrl.f73.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/daily.sh /var/xdrago/daily.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/daily.sh\" -o /var/xdrago/daily.sh\n    if [ -e \"/var/xdrago/daily.sh\" ]; then\n      chmod 700 /var/xdrago/daily.sh\n      chown root:root /var/xdrago/daily.sh\n      touch ${_pthLog}/daily.sh.ctrl.f73.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/daily.sh.old /var/xdrago/daily.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/graceful.sh.ctrl.f86.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/graceful.sh /var/xdrago/graceful.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/graceful.sh\" -o /var/xdrago/graceful.sh\n    if [ -e \"/var/xdrago/graceful.sh\" ]; then\n      chmod 700 /var/xdrago/graceful.sh\n      chown root:root /var/xdrago/graceful.sh\n      touch ${_pthLog}/graceful.sh.ctrl.f86.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/graceful.sh.old /var/xdrago/graceful.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/usage.sh.ctrl.f80.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/usage.sh /var/xdrago/usage.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/usage.sh\" -o /var/xdrago/usage.sh\n    if [ -e \"/var/xdrago/usage.sh\" ]; then\n      chmod 700 /var/xdrago/usage.sh\n      chown root:root /var/xdrago/usage.sh\n      touch ${_pthLog}/usage.sh.ctrl.f80.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/usage.sh.old /var/xdrago/usage.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/manage_ltd_users.sh.ctrl.f67.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/manage_ltd_users.sh /var/xdrago/manage_ltd_users.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/manage_ltd_users.sh\" \\\n      -o /var/xdrago/manage_ltd_users.sh\n    if [ -e \"/var/xdrago/manage_ltd_users.sh\" ]; then\n      chmod 700 /var/xdrago/manage_ltd_users.sh\n      chown root:root /var/xdrago/manage_ltd_users.sh\n      touch ${_pthLog}/manage_ltd_users.sh.ctrl.f67.${_tRee}.${_xSrl}.pid\n      [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n      [ -d \"/var/backups/ltd/log\" ] && rm -rf /var/backups/ltd/log\n      mkdir -p /var/backups/ltd/{conf,log,old}\n    else\n      mv -f /var/xdrago/manage_ltd_users.sh.old /var/xdrago/manage_ltd_users.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/manage_solr_config.sh.ctrl.f85.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/manage_solr_config.sh /var/xdrago/manage_solr_config.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/manage_solr_config.sh\" \\\n      -o /var/xdrago/manage_solr_config.sh\n    if [ -e \"/var/xdrago/manage_solr_config.sh\" ]; then\n      chmod 700 /var/xdrago/manage_solr_config.sh\n      chown root:root /var/xdrago/manage_solr_config.sh\n      touch ${_pthLog}/manage_solr_config.sh.ctrl.f85.${_tRee}.${_xSrl}.pid\n      rm -f /run/manage_solr_config.pid\n    else\n      mv -f /var/xdrago/manage_solr_config.sh.old /var/xdrago/manage_solr_config.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/proc_num_ctrl.pl.ctrl.f83.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/proc_num_ctrl.pl /var/xdrago/proc_num_ctrl.pl.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/proc_num_ctrl.pl\" \\\n      -o /var/xdrago/proc_num_ctrl.pl\n    if [ -e \"/var/xdrago/proc_num_ctrl.pl\" ]; then\n      chmod 700 /var/xdrago/proc_num_ctrl.pl\n      chown root:root /var/xdrago/proc_num_ctrl.pl\n      touch ${_pthLog}/proc_num_ctrl.pl.ctrl.f83.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/proc_num_ctrl.pl.old /var/xdrago/proc_num_ctrl.pl\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/fast_shutdown.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    sed -i \"s/.*opcache.fast_shutdown.*//g\" /opt/etc/fpm/fpm-pool-commo*.conf\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n        service \"php${e}-fpm\" reload &> /dev/null\n      fi\n    done\n    _PHP_V=\"55 54 53\"\n    for e in ${_PHP_V}; do\n      if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n        service \"php${e}-fpm\" force-quit &> /dev/null\n      fi\n    done\n    touch ${_pthLog}/fast_shutdown.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -x \"/usr/sbin/csf\" ] \\\n    && [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ ! 
-e \"${_pthLog}/guest-fire.sh.ctrl.f92.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/guest-fire.sh /var/xdrago/guest-fire.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/guest-fire.sh\" \\\n      -o /var/xdrago/guest-fire.sh\n    if [ -e \"/var/xdrago/guest-fire.sh\" ]; then\n      chmod 700 /var/xdrago/guest-fire.sh\n      chown root:root /var/xdrago/guest-fire.sh\n      touch ${_pthLog}/guest-fire.sh.ctrl.f92.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/guest-fire.sh.old /var/xdrago/guest-fire.sh\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -x \"/usr/sbin/csf\" ] \\\n    && [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ ! -e \"${_pthLog}/guest-water.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/guest-water.sh /var/xdrago/guest-water.sh.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/guest-water.sh\" \\\n      -o /var/xdrago/guest-water.sh\n    if [ -e \"/var/xdrago/guest-water.sh\" ]; then\n      chmod 700 /var/xdrago/guest-water.sh\n      chown root:root /var/xdrago/guest-water.sh\n      touch ${_pthLog}/guest-water.sh.ctrl.f89.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/guest-water.sh.old /var/xdrago/guest-water.sh\n    fi\n  fi\n\n  if ! grep -q \"whoami\" /var/xdrago/conf/lshell.conf; then\n    rm -f ${_pthLog}/lshell.ctrl.*\n  fi\n  if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/lshell.ctrl.f91.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -z \"${_CUSTOM_CONFIG_LSHELL}\" ] \\\n      || [ \"${_CUSTOM_CONFIG_LSHELL}\" = \"NO\" ]; then\n      mv -f /var/xdrago/conf/lshell.conf /var/xdrago/conf/lshell.conf.old\n      curl ${_crlGet} \"${_urlHmr}/tools/system/conf/lshell.conf\" \\\n        -o /var/xdrago/conf/lshell.conf\n      if [ -e \"/var/xdrago/conf/lshell.conf\" ]; then\n        chmod 644 /var/xdrago/conf/lshell.conf\n        chown root:root /var/xdrago/conf/lshell.conf\n        touch ${_pthLog}/lshell.ctrl.f91.${_tRee}.${_xSrl}.pid\n      else\n        mv -f /var/xdrago/conf/lshell.conf.old /var/xdrago/conf/lshell.conf\n      fi\n    fi\n  fi\n\n  if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    _BROKEN_UPDATE_TEST=$(grep \"Under Construction\" /var/xdrago/conf/fpm-pool* 2>&1)\n    if [ ! -z \"${_BROKEN_UPDATE_TEST}\" ]; then\n      rm -f /var/xdrago/conf/fpm-pool*\n      rm ${_pthLog}/multi.ctrl.*\n      rm ${_pthLog}/legacy.ctrl.*\n      rm ${_pthLog}/modern.ctrl.*\n      rm ${_pthLog}/single.ctrl.*\n      rm ${_pthLog}/common.ctrl.*\n    fi\n    _BROKEN_UPDATE_TEST=$(grep \"404 Not Found\" /var/xdrago/conf/fpm-pool* 2>&1)\n    if [ ! 
-z \"${_BROKEN_UPDATE_TEST}\" ]; then\n      rm -f /var/xdrago/conf/fpm-pool*\n      rm ${_pthLog}/multi.ctrl.*\n      rm ${_pthLog}/legacy.ctrl.*\n      rm ${_pthLog}/modern.ctrl.*\n      rm ${_pthLog}/single.ctrl.*\n      rm ${_pthLog}/common.ctrl.*\n    fi\n    _BROKEN_UPDATE_TEST=$(grep \"max_execution_time\" /var/xdrago/conf/fpm-pool* 2>&1)\n    if [ -z \"${_BROKEN_UPDATE_TEST}\" ]; then\n      rm -f /var/xdrago/conf/fpm-pool*\n      rm ${_pthLog}/multi.ctrl.*\n      rm ${_pthLog}/legacy.ctrl.*\n      rm ${_pthLog}/modern.ctrl.*\n      rm ${_pthLog}/single.ctrl.*\n      rm ${_pthLog}/common.ctrl.*\n    fi\n    _BROKEN_UPDATE_TEST=$(grep \"max_accelerated_files\" /var/xdrago/conf/fpm-pool* 2>&1)\n    if [ ! -z \"${_BROKEN_UPDATE_TEST}\" ]; then\n      rm -f /var/xdrago/conf/fpm-pool*\n      rm ${_pthLog}/multi.ctrl.*\n      rm ${_pthLog}/legacy.ctrl.*\n      rm ${_pthLog}/modern.ctrl.*\n      rm ${_pthLog}/single.ctrl.*\n      rm ${_pthLog}/common.ctrl.*\n    fi\n  fi\n\n  if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/common.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/conf/fpm-pool-common.conf /var/xdrago/conf/fpm-pool-common.conf.old\n    curl ${_crlGet} \"${_urlHmr}/conf/php/fpm-pool-common.conf\" \\\n      -o /var/xdrago/conf/fpm-pool-common.conf\n    if [ -e \"/var/xdrago/conf/fpm-pool-common.conf\" ]; then\n      sed -i \"s/127.0.0.1/127.0.0.1,${_LOC_IP}/g\" /var/xdrago/conf/fpm-pool-common.conf\n      chmod 644 /var/xdrago/conf/fpm-pool-common.conf\n      chown root:root /var/xdrago/conf/fpm-pool-common.conf\n      touch ${_pthLog}/common.ctrl.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/conf/fpm-pool-common.conf.old /var/xdrago/conf/fpm-pool-common.conf\n    fi\n  fi\n\n  if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/legacy.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/conf/fpm-pool-common-legacy.conf /var/xdrago/conf/fpm-pool-common-legacy.conf.old\n    curl ${_crlGet} \"${_urlHmr}/conf/php/fpm-pool-common-legacy.conf\" \\\n      -o /var/xdrago/conf/fpm-pool-common-legacy.conf\n    if [ -e \"/var/xdrago/conf/fpm-pool-common-legacy.conf\" ]; then\n      sed -i \"s/127.0.0.1/127.0.0.1,${_LOC_IP}/g\" /var/xdrago/conf/fpm-pool-common-legacy.conf\n      chmod 644 /var/xdrago/conf/fpm-pool-common-legacy.conf\n      chown root:root /var/xdrago/conf/fpm-pool-common-legacy.conf\n      touch ${_pthLog}/legacy.ctrl.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/conf/fpm-pool-common-legacy.conf.old /var/xdrago/conf/fpm-pool-common-legacy.conf\n    fi\n  fi\n\n  if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/modern.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/conf/fpm-pool-common-modern.conf /var/xdrago/conf/fpm-pool-common-modern.conf.old\n    curl ${_crlGet} \"${_urlHmr}/conf/php/fpm-pool-common-modern.conf\" \\\n      -o /var/xdrago/conf/fpm-pool-common-modern.conf\n    if [ -e \"/var/xdrago/conf/fpm-pool-common-modern.conf\" ]; then\n      sed -i \"s/127.0.0.1/127.0.0.1,${_LOC_IP}/g\" /var/xdrago/conf/fpm-pool-common-modern.conf\n      chmod 644 /var/xdrago/conf/fpm-pool-common-modern.conf\n      chown root:root /var/xdrago/conf/fpm-pool-common-modern.conf\n      touch ${_pthLog}/modern.ctrl.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/conf/fpm-pool-common-modern.conf.old /var/xdrago/conf/fpm-pool-common-modern.conf\n    fi\n  fi\n\n  if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_pthLog}/multi.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/conf/fpm-pool-foo-multi.conf /var/xdrago/conf/fpm-pool-foo-multi.conf.old\n    curl ${_crlGet} \"${_urlHmr}/conf/php/fpm-pool-foo-multi.conf\" \\\n      -o /var/xdrago/conf/fpm-pool-foo-multi.conf\n    if [ -e \"/var/xdrago/conf/fpm-pool-foo-multi.conf\" ]; then\n      chmod 644 /var/xdrago/conf/fpm-pool-foo-multi.conf\n      chown root:root /var/xdrago/conf/fpm-pool-foo-multi.conf\n      touch ${_pthLog}/multi.ctrl.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/conf/fpm-pool-foo-multi.conf.old /var/xdrago/conf/fpm-pool-foo-multi.conf\n    fi\n  fi\n\n  if [ -e \"/opt/tools/drush\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/single.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/conf/fpm-pool-foo.conf /var/xdrago/conf/fpm-pool-foo.conf.old\n    curl ${_crlGet} \"${_urlHmr}/conf/php/fpm-pool-foo.conf\" \\\n      -o /var/xdrago/conf/fpm-pool-foo.conf\n    if [ -e \"/var/xdrago/conf/fpm-pool-foo.conf\" ]; then\n      chmod 644 /var/xdrago/conf/fpm-pool-foo.conf\n      chown root:root /var/xdrago/conf/fpm-pool-foo.conf\n      touch ${_pthLog}/single.ctrl.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/conf/fpm-pool-foo.conf.old /var/xdrago/conf/fpm-pool-foo.conf\n    fi\n  fi\n\n  if [ -e \"/etc/ImageMagick-6/policy.xml\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ ! -e \"${_pthLog}/policymap-hf-06.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    _isCurlBin=\"$(which curl)\"\n    chmod 755 ${_isCurlBin} &> /dev/null\n    chgrp root ${_isCurlBin} &> /dev/null\n    cp -af /etc/ImageMagick-6/policy.xml /var/xdrago/conf/etc-ImageMagick-6-policy.xml.hf-06.old\n    rm -f /var/xdrago/conf/etc-ImageMagick-6-policy.xml\n    curl ${_crlGet} \"${_urlHmr}/conf/etc/etc-ImageMagick-6-policy.xml\" \\\n      -o /var/xdrago/conf/etc-ImageMagick-6-policy.xml\n    if [ -e \"/var/xdrago/conf/etc-ImageMagick-6-policy.xml\" ]; then\n      cp -af /var/xdrago/conf/etc-ImageMagick-6-policy.xml /etc/ImageMagick-6/policy.xml\n      chmod 644 /etc/ImageMagick-6/policy.xml\n      chown root:root /etc/ImageMagick-6/policy.xml\n      touch ${_pthLog}/policymap-hf-06.ctrl.${_tRee}.${_xSrl}.pid\n      _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ]; then\n          service \"php${e}-fpm\" reload &> /dev/null\n        fi\n      done\n    else\n      if [ -e \"/var/xdrago/conf/etc-ImageMagick-6-policy.xml.hf-06.old\" ]; then\n        cp -af /var/xdrago/conf/etc-ImageMagick-6-policy.xml.hf-06.old /etc/ImageMagick-6/policy.xml\n      fi\n    fi\n  fi\n\n  if [ -e \"/opt/tools/drush\" ] \\\n    && [ -e \"/var/xdrago\" 
] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -d \"/data/u\" ] \\\n    && [ ! -e \"${_pthLog}/dispatch.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    sed -i \"s/.*cache.*//g; s/.*cc drush.*//g; s/ *$//g; /^$/d\" /data/disk/*/aegir.sh\n    touch ${_pthLog}/dispatch.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/xdrago/conf/control-readme.txt\" ] \\\n    && [ ! -e \"${_pthLog}/control-readme.txt.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /var/xdrago/conf/control-readme.txt /var/xdrago/conf/control-readme.txt.old\n    curl ${_crlGet} \"${_urlHmr}/tools/system/conf/control-readme.txt\" -o /var/xdrago/conf/control-readme.txt\n    if [ -e \"/var/xdrago/conf/control-readme.txt\" ]; then\n      chmod 644 /var/xdrago/conf/control-readme.txt\n      chown root:root /var/xdrago/conf/control-readme.txt\n      touch ${_pthLog}/control-readme.txt.ctrl.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/conf/control-readme.txt.old /var/xdrago/conf/control-readme.txt\n    fi\n  fi\n\n  if [ -d \"/data/u\" ] \\\n    && [ ! -e \"${_pthLog}/hosting.cron.queue.ctrl.f96.${_tRee}.${_xSrl}.pid\" ]; then\n    _hQueueF=\"hosting_cron.module\"\n    _hQueueP=\"/var/xdrago/conf/${_hQueueF}\"\n    [ -e \"${_hQueueP}\" ] && _isPatchedTpl=$(grep \"url_own\" \"${_hQueueP}\")\n    if [ ! -e \"${_hQueueP}\" ] || [[ ! 
\"${_isPatchedTpl}\" =~ \"url_own\" ]]; then\n      curl ${_crlGet} \"${_urlHmr}/patches/${_hQueueF}\" -o ${_hQueueP}\n    fi\n    for _pthSysUsr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n      _tUsr=\n      _tUsr=$(echo ${_pthSysUsr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n      if [ -n \"${_tUsr}\" ] && [ \"${_tUsr}\" != \"arch\" ]; then\n        if [ -e \"${_pthSysUsr}/log/hosting_cron_use_backend.txt\" ]; then\n          rm -f ${_pthSysUsr}/log/hosting_cron_use_backend.txt\n        fi\n        _hmPlr=$(cat ${_pthSysUsr}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"root'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        _hmDir=\"${_hmPlr}/profiles/hostmaster/modules/aegir/hosting\"\n        _hmQmd=\"${_hmDir}/cron/hosting_cron.module\"\n        if [ -e \"${_hmDir}/cron/hosting_cron.module.orig\" ]; then\n          rm -f ${_hmDir}/cron/hosting_cron.module.orig\n        fi\n        if [ -e \"${_hmDir}/cron/hosting_cron.module.rej\" ]; then\n          rm -f ${_hmDir}/cron/hosting_cron.module.rej\n        fi\n        if [ -e \"${_hmQmd}\" ] && [ -e \"${_hQueueP}\" ]; then\n          _isPatched=$(grep \"url_own\" \"${_hmQmd}\")\n          if [[ ! \"${_isPatched}\" =~ \"url_own\" ]]; then\n            cp -a ${_hQueueP} ${_hmDir}/cron/\n            if [ -e \"${_hmDir}/cron/${_hQueueF}\" ]; then\n              sed -i \"s/127.0.0.1/${_LOC_IP}/g\" \"${_hmDir}/cron/${_hQueueF}\"\n            fi\n          fi\n        fi\n      fi\n    done\n    touch ${_pthLog}/hosting.cron.queue.ctrl.f96.${_tRee}.${_xSrl}.pid\n  fi\n\n  if [ -d \"/data/u\" ] \\\n    && [ ! 
-e \"${_pthLog}/hosting.cron.ctrl.f99.${_tRee}.${_xSrl}.pid\" ]; then\n    for _pthSysUsr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n      _tUsr=\n      _tUsr=$(echo ${_pthSysUsr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n      if [ -n \"${_tUsr}\" ] && [ \"${_tUsr}\" != \"arch\" ]; then\n        if [ -e \"${_pthSysUsr}/log/hosting_cron_use_backend.txt\" ]; then\n          rm -f ${_pthSysUsr}/log/hosting_cron_use_backend.txt\n        fi\n      fi\n    done\n    touch ${_pthLog}/hosting.cron.ctrl.f99.${_tRee}.${_xSrl}.pid\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/data/u\" ] \\\n    && [ -e \"/usr/sbin/csf\" ] \\\n    && [ ! -e \"${_pthLog}/fpm-cli.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    _usrGroup=users\n    [ -d \"/var/backups/off-run/\" ] && cp -a /var/backups/off-run/run* /var/xdrago/ &> /dev/null\n    for _pthSysUsr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n      _tUsr=\n      _tUsr=$(echo ${_pthSysUsr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n      if [ \"${_tUsr}\" != \"arch\" ]; then\n        if [ ! -e \"${_pthSysUsr}/static/control/MyQuick.info\" ] \\\n          && [ ! -e \"${_pthSysUsr}/static/control/MyClassic.info\" ]; then\n          echo ON > ${_pthSysUsr}/static/control/MyQuick.info\n        fi\n        if [ ! -e \"${_pthSysUsr}/static/control/.disFastTrack.pid\" ]; then\n          rm -f ${_pthSysUsr}/static/control/FastTrack.info\n          touch ${_pthSysUsr}/static/control/.disFastTrack.pid\n        fi\n        if [ ! -e \"${_pthSysUsr}/static/control/FastTrack.info\" ] \\\n          && [ ! -e \"${_pthSysUsr}/static/control/ClassicTrack.info\" ]; then\n          echo ON > ${_pthSysUsr}/static/control/ClassicTrack.info\n        fi\n        if [ -e \"${_pthSysUsr}/static/control/fpm.info\" ] \\\n          && [ ! 
-e \"${_pthSysUsr}/static/control/cli.info\" ]; then\n          cp ${_pthSysUsr}/static/control/fpm.info ${_pthSysUsr}/static/control/cli.info\n        fi\n        if [ -e \"${_pthSysUsr}/log/CANCELLED\" ] \\\n          || [ -e \"${_pthSysUsr}/log/proxied.pid\" ] \\\n          || [ ! -e \"${_pthSysUsr}/static/control/cli.info\" ]; then\n          if [ -e \"/var/xdrago/run-${_tUsr}\" ] \\\n            && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n            [ -d \"/var/backups/off-run\" ] || mkdir -p /var/backups/off-run\n            mv -f /var/xdrago/run-${_tUsr} /var/backups/off-run/\n          fi\n        else\n          _dscUsr=\"/data/disk/${_tUsr}\"\n          _ngxCnf=\"${_dscUsr}/config/includes/nginx_vhost_common.conf\"\n          _NGINX_CNF_TEST=$(grep \"foobaroff\" ${_ngxCnf} 2>&1)\n          if [[ \"${_NGINX_CNF_TEST}\" =~ \"foobaroff\" ]]; then\n            _DO_NOTHING=YES\n          else\n            sed -i \"s/args.*q=/args ~* \\\"foobaroff=/g\" ${_ngxCnf}\n          fi\n          for _version in 84 85 83 82 81 74 56; do\n            if [ -x \"/opt/php${_version}/bin/php\" ]; then\n              if [ \"${_version}\" = \"74\" ]; then\n                _useCli=\"7.4\"\n                _useFpm=\"7.4\"\n              elif [ \"${_version}\" = \"56\" ]; then\n                _useCli=\"5.6\"\n                _useFpm=\"5.6\"\n              else\n                _useCli=\"8.${_version:1}\"\n                _useFpm=\"8.${_version:1}\"\n              fi\n              break\n            fi\n          done\n          if [ ! -e \"${_dscUsr}/static/control/fpm.info\" ] \\\n            && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n            if [ -n \"${_useFpm}\" ]; then\n              echo ${_useFpm} > ${_dscUsr}/static/control/fpm.info\n              chown ${_tUsr}.ftp:${_usrGroup} ${_dscUsr}/static/control/fpm.info\n              chmod 0644 ${_dscUsr}/static/control/fpm.info\n            fi\n          fi\n          if [ ! 
-e \"${_dscUsr}/static/control/cli.info\" ] \\\n            && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n            if [ -e \"${_dscUsr}/static/control/fpm.info\" ]; then\n              cp -af ${_dscUsr}/static/control/fpm.info ${_dscUsr}/static/control/cli.info\n            else\n              if [ -n \"${_useCli}\" ]; then\n                echo ${_useCli} > ${_dscUsr}/static/control/cli.info\n                chown ${_tUsr}.ftp:${_usrGroup} ${_dscUsr}/static/control/cli.info\n                chmod 0644 ${_dscUsr}/static/control/cli.info\n              fi\n            fi\n          fi\n          if [ ! -e \"${_dscUsr}/static/control/.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n            && [ -e \"/home/${_tUsr}.ftp/clients\" ]; then\n            mkdir -p ${_dscUsr}/static/control\n            chmod 755 ${_dscUsr}/static/control\n            if [ -e \"/var/xdrago/conf/control-readme.txt\" ]; then\n              cp -af /var/xdrago/conf/control-readme.txt \\\n                ${_dscUsr}/static/control/README.txt &> /dev/null\n              chmod 0644 ${_dscUsr}/static/control/README.txt\n            fi\n            chown -R ${_tUsr}.ftp:${_usrGroup} ${_dscUsr}/static/control\n            rm -f ${_dscUsr}/static/control/.ctrl.*\n            echo OK > ${_dscUsr}/static/control/.ctrl.${_tRee}.${_xSrl}.pid\n          fi\n        fi\n      fi\n    done\n    touch ${_pthLog}/fpm-cli.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  # Create the destination directory if it doesn't exist\n  [ -d \"/var/backups/off-run\" ] || mkdir -p /var/backups/off-run\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/data/u\" ] \\\n    && [ -e \"/usr/sbin/csf\" ]; then\n    # Loop through all files matching the pattern /var/xdrago/run-USER\n    for _file in /var/xdrago/run-*; do\n      # Skip iteration if no files match the pattern\n      [ -e \"${_file}\" ] || continue\n\n      # Extract the _USER from the filename\n      _USER=${_file#/var/xdrago/run-}\n\n      # Define the 
paths to check\n      _USER_DIR=\"/data/disk/${_USER}\"\n      _CANCELLED_FILE=\"${_USER_DIR}/log/CANCELLED\"\n      _PROXIED_PID_FILE=\"${_USER_DIR}/log/proxied.pid\"\n      _CLI_INFO_FILE=\"${_USER_DIR}/static/control/cli.info\"\n\n      # Check the conditions\n      if [ ! -d \"${_USER_DIR}\" ] || \\\n         [ -f \"${_CANCELLED_FILE}\" ] || \\\n         [ -f \"${_PROXIED_PID_FILE}\" ] || \\\n         [ ! -f \"${_CLI_INFO_FILE}\" ]; then\n        # Move the file if any condition is met, then skip the renice\n        # check below, since the file no longer exists at this path\n        mv -f \"${_file}\" /var/backups/off-run/\n        continue\n      fi\n\n      if grep -q \"renice 0\" \"${_file}\"; then\n        sed -i \"s/renice 0/renice 9/g\" \"${_file}\"\n      fi\n    done\n  fi\n\n  if [ -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n    && [ -e \"${_provLeIncFull}\" ] \\\n    && [ -e \"${_hoLeIncFull}\" ] \\\n    && [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -d \"/data/u\" ] \\\n    && [ ! -e \"${_pthLog}/le_renewal_days_69.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    _leBasePath=\"profiles/hostmaster/modules/aegir/hosting_le\"\n    _lePath=\"${_leBasePath}/drush/${_provLeInc}\"\n    _leVhPath=\"${_leBasePath}/hosting_le_vhost/drush/${_hoLeInc}\"\n    for _pthSysUsr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n      if [ -e \"${_pthSysUsr}/config/server_master/nginx/vhost.d\" ] \\\n        && [ -e \"${_pthSysUsr}/static/control/cli.info\" ] \\\n        && [ ! -e \"${_pthSysUsr}/log/proxied.pid\" ] \\\n        && [ ! 
-e \"${_pthSysUsr}/log/CANCELLED\" ]; then\n        _tUsr=\n        _validReg=\n        _validIPr=\n        _tUsr=$(echo ${_pthSysUsr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n        _dscUsr=\"/data/disk/${_tUsr}\"\n        _hmPf=$(cat ${_dscUsr}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"root'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        _locFile=\"${_hmPf}/${_lePath}\"\n        if [ -e \"${_locFile}\" ] && [ -e \"${_provLeIncFull}\" ]; then\n          cp -af ${_provLeIncFull} ${_locFile}\n          chown ${_tUsr}:users ${_locFile}\n          chmod 0644 ${_locFile}\n        fi\n        _locVhFile=\"${_hmPf}/${_leVhPath}\"\n        if [ -e \"${_locVhFile}\" ] && [ -e \"${_hoLeIncFull}\" ]; then\n          cp -af ${_hoLeIncFull} ${_locVhFile}\n          chown ${_tUsr}:users ${_locVhFile}\n          chmod 0644 ${_locVhFile}\n        fi\n        _leRoot=\"${_dscUsr}/tools/le\"\n        _exeLe=\"${_leRoot}/dehydrated\"\n        _dehydFull=\"${_leRoot}/${_dehydName}\"\n        _legacyLeShFile=\"${_leRoot}/letsencrypt.sh\"\n        _lockLeFile=\"${_leRoot}/lock\"\n        _configIni=\"${_leRoot}/config\"\n        _acctsDir=\"${_leRoot}/accounts\"\n        _acctsDemoDir=\"${_leRoot}/accounts-demo\"\n        _demoPid=\"${_leRoot}/.ctrl/ssl-demo-mode.pid\"\n        _normalRegPid=\"${_leRoot}/.ctrl/normal-re6-register.pid\"\n        _forcedRegPid=\"${_leRoot}/.ctrl/forced-re6-register.pid\"\n        _onDemandRegPid=\"${_leRoot}/.ctrl/onDemand-register.pid\"\n        _validIdn=$(grep \"letsencrypt\" ${_acctsDir}/*/account_id.json 2>&1)\n        _validReg=$(grep \"valid\" ${_acctsDir}/*/registration_info.json 2>&1)\n        _validIPr=$(grep \"${_LOC_IP}\" ${_acctsDir}/*/registration_info.json 2>&1)\n        _HOUR=$(date +%H 2>&1)\n        _HOUR=${_HOUR//[^0-9-]/}\n        if [ -e \"${_dehydSrcPath}\" ]; then\n          cp -af ${_dehydSrcPath} ${_dehydFull}\n          chown ${_tUsr}:users 
${_dehydFull}\n          chmod 0700 ${_dehydFull}\n        fi\n        if [ -e \"${_dehydFull}\" ] \\\n          && [ ! -e \"${_normalRegPid}\" ]; then\n          if [ \"${_HOUR}\" = \"5\" ] \\\n            || [ \"${_HOUR}\" = \"17\" ] \\\n            || [ -e \"${_onDemandRegPid}\" ]; then\n            su -s /bin/bash - ${_tUsr} -c \"bash ${_exeLe} --register --accept-terms\"\n            wait\n            touch ${_normalRegPid}\n          fi\n        fi\n        if [ -e \"${_lockLeFile}\" ]; then\n          rm -f ${_lockLeFile}\n          sleep 1\n        fi\n        if [ -e \"${_demoPid}\" ]; then\n          rm -f ${_demoPid}\n        fi\n        if [ \"${_HOUR}\" = \"11\" ] \\\n          || [ \"${_HOUR}\" = \"23\" ] \\\n          || [ -e \"${_onDemandRegPid}\" ]; then\n          if [ -e \"${_legacyLeShFile}\" ] \\\n            || [ -e \"${_acctsDemoDir}\" ] \\\n            || [[ ! \"${_validIdn}\" =~ \"letsencrypt\" ]] \\\n            || [[ ! \"${_validReg}\" =~ \"valid\" ]] \\\n            || [[ ! \"${_validIPr}\" =~ \"${_LOC_IP}\" ]] \\\n            || [ ! -e \"${_forcedRegPid}\" ]; then\n            rm -f ${_legacyLeShFile}\n            rm -rf ${_acctsDemoDir}\n            rm -rf ${_acctsDir}\n            rm -f ${_leRoot}/.ctrl/.forced*\n            rm -f ${_leRoot}/.ctrl/.normal*\n            rm -f ${_leRoot}/.ctrl/forced*\n            rm -f ${_leRoot}/.ctrl/normal*\n            if [ -e \"${_exeLe}\" ]; then\n              su -s /bin/bash - ${_tUsr} -c \"bash ${_exeLe} --register --accept-terms\"\n              wait\n              touch ${_forcedRegPid}\n              touch ${_normalRegPid}\n            fi\n          fi\n        fi\n      fi\n    done\n    touch ${_pthLog}/le_renewal_days_69.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  if ! grep -q \"defunct\" /opt/local/bin/websh; then\n    rm -f ${_pthLog}/websh.ctrl.*\n  fi\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_pthLog}/websh.ctrl.f72.${_tRee}.${_xSrl}.pid\" ]; then\n    mv -f /opt/local/bin/websh /var/xdrago/websh.sh.old\n    curl ${_crlGet} \"${_urlHmr}/helpers/websh.sh.txt\" -o /opt/local/bin/websh\n    if [ -e \"/opt/local/bin/websh\" ] \\\n      && grep -i '_forward_to_dash' /opt/local/bin/websh &> /dev/null; then\n      chmod 755 /opt/local/bin/websh\n      chown root:root /opt/local/bin/websh\n      [ -x \"/bin/websh\" ] && [ ! -L \"/bin/websh\" ] && ln -sfn /opt/local/bin/websh /bin/websh\n      touch ${_pthLog}/websh.ctrl.f72.${_tRee}.${_xSrl}.pid\n    else\n      mv -f /var/xdrago/websh.sh.old /opt/local/bin/websh\n    fi\n    _WEB_SH=\"$(readlink -n /bin/sh)\"\n    if [ -x \"/opt/local/bin/websh\" ] \\\n      && grep -i '_forward_to_dash' /opt/local/bin/websh &> /dev/null; then\n      if [ \"${_WEB_SH}\" != \"/opt/local/bin/websh\" ]; then\n        ln -sfn /opt/local/bin/websh /bin/sh\n        if [ -e \"/usr/bin/sh\" ]; then\n          ln -sfn /opt/local/bin/websh /usr/bin/sh\n        fi\n        [ -x \"/bin/websh\" ] && [ ! -L \"/bin/websh\" ] && ln -sfn /opt/local/bin/websh /bin/websh\n      fi\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -x \"/etc/cron.hourly/systemtime\" ] \\\n    && [ ! -e \"${_pthLog}/systemtime.ctrl.f95.${_tRee}.${_xSrl}.pid\" ]; then\n    curl ${_crlGet} \"${_urlHmr}/helpers/systemtime\" -o /etc/cron.hourly/systemtime\n    if [ -e \"/etc/cron.hourly/systemtime\" ]; then\n      chmod 755 /etc/cron.hourly/systemtime\n      chown root:root /etc/cron.hourly/systemtime\n      service cron restart\n      touch ${_pthLog}/systemtime.ctrl.f95.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthLog}/synproxy.ctrl.f93.${_tRee}.${_xSrl}.pid\" ]; then\n    _DO_NOTHING=YES\n  fi\n\n  # Shared helper for the synproxy tool family: fetch a fresh copy,\n  # keep a .old backup and restore it if the download fails; each tool\n  # keeps its original pid tag, so already-updated hosts are skipped.\n  _update_synproxy_tool() {\n    _sTool=\"$1\"\n    _sTag=\"$2\"\n    if [ ! -e \"${_pthLog}/${_sTool}.ctrl.${_sTag}.${_tRee}.${_xSrl}.pid\" ]; then\n      _CNT=$(pgrep -fc synproxy_rollback)\n      if (( _CNT > 0 )); then\n        echo \"The synproxy_rollback is running!\"\n      else\n        if [ -e \"${_optBin}/${_sTool}\" ]; then\n          mv -f ${_optBin}/${_sTool} ${_optBin}/${_sTool}.old\n        fi\n        curl ${_crlGet} \"${_urlHmr}/tools/bin/${_sTool}\" -o ${_optBin}/${_sTool}\n        if [ -e \"${_optBin}/${_sTool}\" ]; then\n          chmod 700 ${_optBin}/${_sTool}\n          chown root:root ${_optBin}/${_sTool}\n          touch ${_pthLog}/${_sTool}.ctrl.${_sTag}.${_tRee}.${_xSrl}.pid\n        else\n          if [ -e \"${_optBin}/${_sTool}.old\" ]; then\n            mv -f ${_optBin}/${_sTool}.old ${_optBin}/${_sTool}\n          fi\n        fi\n      fi\n    fi\n  }\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/var/aegir/drush\" ]; then\n    _update_synproxy_tool synproxy f93\n    _update_synproxy_tool synproxy_rollback f94\n    _update_synproxy_tool synproxy_reassert f88\n    _update_synproxy_tool synproxy_hook_fix f99\n    _update_synproxy_tool synproxy_snapshot f99\n    _update_synproxy_tool synproxy_status f99\n  fi\n\n  if [ -e \"/var/xdrago/monitor/check\" ] \\\n    && [ -d \"/var/aegir/drush\" ] \\\n    && [ ! 
-e \"${_pthLog}/synproxy_monitor.ctrl.f98.${_tRee}.${_xSrl}.pid\" ]; then\n    _CNT=$(pgrep -fc synproxy_rollback)\n    if (( _CNT > 0 )); then\n      echo \"The synproxy_rollback is running!\"\n    else\n      if [ -e \"${_optBin}/synproxy_monitor\" ]; then\n        mv -f ${_optBin}/synproxy_monitor ${_optBin}/synproxy_monitor.old\n      fi\n      curl ${_crlGet} \"${_urlHmr}/tools/bin/synproxy_monitor\" -o ${_optBin}/synproxy_monitor\n      if [ -e \"${_optBin}/synproxy_monitor\" ]; then\n        chmod 700 ${_optBin}/synproxy_monitor\n        chown root:root ${_optBin}/synproxy_monitor\n        touch ${_pthLog}/synproxy_monitor.ctrl.f98.${_tRee}.${_xSrl}.pid\n      else\n        if [ -e \"${_optBin}/synproxy_monitor.old\" ]; then\n          mv -f ${_optBin}/synproxy_monitor.old ${_optBin}/synproxy_monitor\n        fi\n      fi\n    fi\n  fi\n\n  _Dir=\"/data/all/000/modules\"\n  _REDIS_E_VERSION=8.x-1.11.2\n  if [ -e \"/var/xdrago/manage_solr_config.sh\" ]; then\n    if [ ! -e \"${_Dir}/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\" ]; then\n      mkdir -p ${_Dir}\n      cd ${_Dir}\n      rm -rf ${_Dir}/redis_ten_eleven\n      _get_dev_contrib \"redis_ten_eleven-${_REDIS_E_VERSION}.tar.gz\"\n      echo update > ${_Dir}/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\n      find ${_Dir} -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${_Dir} -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${_pthLog}/redis_ten_eleven.ctrl.${_xSrl}.log\n    fi\n  fi\n\n  _Dir=\"/data/all/000/modules\"\n  _REDIS_T_VERSION=8.x-1.8.2\n  if [ -e \"/var/xdrago/manage_solr_config.sh\" ]; then\n    if [ ! 
-e \"${_Dir}/redis_nine_ten/ver-${_REDIS_T_VERSION}.${_xSrl}.info\" ]; then\n      _DO_NOTHING=YES\n    fi\n  fi\n\n  # Shared helper: (re)download a bundled Redis module release into\n  # the modules directory and stamp it with a per-serial info file.\n  _update_redis_module() {\n    _rMod=\"$1\"\n    _rVer=\"$2\"\n    if [ -e \"/var/xdrago/manage_solr_config.sh\" ] \\\n      && [ ! -e \"${_Dir}/${_rMod}/ver-${_rVer}.${_xSrl}.info\" ]; then\n      mkdir -p ${_Dir}\n      cd ${_Dir}\n      rm -rf ${_Dir}/${_rMod}\n      _get_dev_contrib \"${_rMod}-${_rVer}.tar.gz\"\n      echo update > ${_Dir}/${_rMod}/ver-${_rVer}.${_xSrl}.info\n      find ${_Dir} -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${_Dir} -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${_pthLog}/${_rMod}.ctrl.${_xSrl}.log\n    fi\n  }\n\n  _Dir=\"/data/all/000/modules\"\n  _update_redis_module redis_nine_ten \"${_REDIS_T_VERSION}\"\n  _update_redis_module redis_compr com-19-04-2021\n  _update_redis_module redis_edge 7.x-3.19.1\n\n  _Dir=\"/data/all/000/modules\"\n  _REDIS_N_VERSION=com-19-04-2021\n  if [ -e \"/var/xdrago/manage_solr_config.sh\" ]; then\n    if [ ! 
-e \"${_Dir}/redis_eight/ver-${_REDIS_N_VERSION}.${_xSrl}.info\" ]; then\n      mkdir -p ${_Dir}\n      cd ${_Dir}\n      rm -rf ${_Dir}/redis_eight\n      _get_dev_contrib \"redis_eight-${_REDIS_N_VERSION}.tar.gz\"\n      echo update > ${_Dir}/redis_eight/ver-${_REDIS_N_VERSION}.${_xSrl}.info\n      find ${_Dir} -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${_Dir} -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${_pthLog}/redis_eight.ctrl.${_xSrl}.log\n    fi\n  fi\n}\n\n_fix_core_dgd() {\n  # sed -i \"s/^_PERMISSIONS_FIX=.*/_PERMISSIONS_FIX=YES/g\" ${_barCnf}\n\n  _saCoreS=\"${_saCoreN}-D7\"\n  _saIncDb=\"includes/database/database.inc\"\n  _saPatch=\"/var/xdrago/conf/${_saCoreS}.patch\"\n\n  _saQCoreN=\"${_saCoreN}\"\n  _saQCoreS=\"${_saQCoreN}-D8\"\n  _saQIncDb=\"core/includes/database.inc\"\n  _saQPatch=\"/var/xdrago/conf/${_saQCoreS}.patch\"\n\n  _saXCoreN=\"${_saCoreN}\"\n  _saXCoreS=\"${_saXCoreN}-D6\"\n  _saXIncDb=\"includes/database.inc\"\n  _saXPatch=\"/var/xdrago/conf/${_saXCoreS}.patch\"\n\n  _saBCoreP=\"${_saCoreN}-provision\"\n  _saBPatch=\"/var/xdrago/conf/${_saBCoreP}.patch\"\n\n  # SA-CORE D8 patch\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_saQPatch}\" ]; then\n    mkdir -p /var/xdrago/conf\n    curl ${_crlGet} \"${_urlHmr}/patches/8-core/${_saQCoreS}.patch\" -o ${_saQPatch}\n  fi\n\n  # SA-CORE D7 patch\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_saPatch}\" ]; then\n    mkdir -p /var/xdrago/conf\n    curl ${_crlGet} \"${_urlHmr}/patches/7-core/${_saCoreS}.patch\" -o ${_saPatch}\n  fi\n\n  # SA-CORE D6 patch\n  # if [ -e \"/var/xdrago\" ] \\\n  #   && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n\t#   && [ ! 
-e \"${_saXPatch}\" ]; then\n\t#   mkdir -p /var/xdrago/conf\n\t#   curl ${_crlGet} \"${_urlHmr}/patches/6-core/${_saXCoreS}.patch\" -o ${_saXPatch}\n  # fi\n\n  # SA-CORE for Octopus hostmaster platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -d \"/data/u\" ] \\\n    && [ -e \"${_saPatch}\" ]; then\n    if [ -d \"/data/all\" ] \\\n      && [ ! -e \"${_pthLog}/hostmaster-octopus-${_saCoreN}-fixed-d7.log\" ]; then\n      for _File in `find /data/disk/*/aegir/distro/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! -e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n        fi\n      done\n      touch ${_pthLog}/hostmaster-octopus-${_saCoreN}-fixed-d7.log\n    fi\n    cd\n  fi\n\n  # SA-CORE for Barracuda hostmaster platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saPatch}\" ]; then\n    if [ -d \"/data/all\" ] \\\n      && [ ! -e \"${_pthLog}/hostmaster-barracuda-${_saCoreN}-fixed-d7.log\" ]; then\n      for _File in `find /var/aegir/host_master/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! -e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n        fi\n      done\n\n      for _File in `find /var/aegir/hostmaster*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! 
-e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n        fi\n      done\n\n      touch ${_pthLog}/hostmaster-barracuda-${_saCoreN}-fixed-d7.log\n    fi\n    cd\n  fi\n\n  # SA-CORE for built-in D7 platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saPatch}\" ] \\\n    && [ ! -e \"${_pthLog}/${_saCoreN}-fixed-d7.log\" ]; then\n    if [ -d \"/data/all/000/core\" ]; then\n      for _Core in `find /data/all/000/core/drupal-7* \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        cd ${_Core}\n        patch -p1 < ${_saPatch} &> /dev/null\n      done\n    elif [ -d \"/data/disk/all/000/core\" ]; then\n      for _Core in `find /data/disk/all/000/core/drupal-7* \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        cd ${_Core}\n        patch -p1 < ${_saPatch} &> /dev/null\n      done\n    fi\n    touch ${_pthLog}/${_saCoreN}-fixed-d7.log\n    cd\n  fi\n\n  # SA-CORE for ancient D7 platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saPatch}\" ]; then\n    if [ -d \"/data/all\" ] \\\n      && [ ! -e \"${_pthLog}/legacy-${_saCoreN}-fixed-d7.log\" ]; then\n      for _File in `find /data/all/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! -e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n        fi\n      done\n      touch ${_pthLog}/legacy-${_saCoreN}-fixed-d7.log\n    elif [ -d \"/data/disk/all\" ] \\\n      && [ ! 
-e \"${_pthLog}/legacy-${_saCoreN}-fixed-d7eee.log\" ]; then\n      for _File in `find /data/disk/all/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! -e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n        fi\n      done\n      touch ${_pthLog}/legacy-${_saCoreN}-fixed-d7eee.log\n    fi\n    cd\n  fi\n\n  # SA-CORE for custom D7 platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saPatch}\" ]; then\n    if [ -d \"/data/u\" ] \\\n      && [ ! -e \"${_pthLog}/batch-custom-${_saCoreN}-fixed-d7.log\" ]; then\n      for _File in `find /data/disk/*/static/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! 
-e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! 
-e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n    fi\n    cd\n    touch ${_pthLog}/batch-custom-${_saCoreN}-fixed-d7.log\n  fi\n\n  # SA-CORE for D8 platforms in ~/static\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saQPatch}\" ]; then\n    if [ -d \"/data/u\" ] \\\n      && [ ! -e \"${_pthLog}/batch-custom-${_saQCoreN}-fixed-d8.log\" ]; then\n      for _File in `find /data/disk/*/static/*/${_saQIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/core.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saQCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saQPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saQCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/${_saQIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/core.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saQCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saQPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saQCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/${_saQIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/core.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ -e \"${_Core}/core\" ] \\\n          && [ ! 
-e \"${_Core}/profiles/${_saQCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saQPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saQCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/${_saQIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/core.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saQCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saQPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saQCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/*/${_saQIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/core.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saQCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saQPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saQCoreS}-fix.info\n        fi\n      done\n    fi\n    cd\n    touch ${_pthLog}/batch-custom-${_saQCoreN}-fixed-d8.log\n  fi\n\n  # SA-CORE for built-in D6 platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saXPatch}\" ] \\\n    && [ ! 
-e \"${_pthLog}/${_saXCoreN}-finally-fixed-d6.log\" ]; then\n    if [ -d \"/data/all/000/core\" ]; then\n      for _Core in `find /data/all/000/core/pressflow-6* \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        cd ${_Core}\n        patch -p1 < ${_saXPatch} &> /dev/null\n      done\n    elif [ -d \"/data/disk/all/000/core\" ]; then\n      for _Core in `find /data/disk/all/000/core/pressflow-6* \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        cd ${_Core}\n        patch -p1 < ${_saXPatch} &> /dev/null\n      done\n    fi\n    touch ${_pthLog}/${_saXCoreN}-finally-fixed-d6.log\n    cd\n  fi\n\n  # SA-CORE for ancient D6 platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saXPatch}\" ]; then\n    if [ -d \"/data/all\" ] \\\n      && [ ! -e \"${_pthLog}/legacy-${_saXCoreN}-finally-fixed-d6.log\" ]; then\n      for _File in `find /data/all/*/*/${_saXIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! -e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saXPatch} &> /dev/null\n        fi\n      done\n      touch ${_pthLog}/legacy-${_saXCoreN}-finally-fixed-d6.log\n    elif [ -d \"/data/disk/all\" ] \\\n      && [ ! -e \"${_pthLog}/legacy-${_saXCoreN}-finally-fixed-d6eee.log\" ]; then\n      for _File in `find /data/disk/all/*/*/${_saXIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! 
-e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saXPatch} &> /dev/null\n        fi\n      done\n      touch ${_pthLog}/legacy-${_saXCoreN}-finally-fixed-d6eee.log\n    fi\n    cd\n  fi\n\n  # SA-CORE for custom D6 platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saXPatch}\" ]; then\n    if [ -d \"/data/u\" ] \\\n      && [ ! -e \"${_pthLog}/batch-custom-${_saXCoreN}-finally-fixed-d6.log\" ]; then\n      for _File in `find /data/disk/*/static/*/${_saXIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saXCoreS}-fix-finally.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saXPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saXCoreS}-fix-finally.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/${_saXIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saXCoreS}-fix-finally.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saXPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saXCoreS}-fix-finally.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/${_saXIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! 
-e \"${_Core}/profiles/${_saXCoreS}-fix-finally.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saXPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saXCoreS}-fix-finally.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/${_saXIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saXCoreS}-fix-finally.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saXPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saXCoreS}-fix-finally.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/*/${_saXIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saXCoreS}-fix-finally.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saXPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saXCoreS}-fix-finally.info\n        fi\n      done\n    fi\n    cd\n    touch ${_pthLog}/batch-custom-${_saXCoreN}-finally-fixed-d6.log\n  fi\n}\n\n_fix_ping_perms() {\n  if [ -e \"/bin/ping\" ]; then\n    _PING_TEST=$(ls -la /bin/ping | grep rwsr-xr-x 2>&1)\n    if [ -z \"${_PING_TEST}\" ]; then\n      chown root:root /bin/ping\n      chmod 4755 /bin/ping\n    fi\n  fi\n}\n\n_fix_fpm_process_max() {\n  if [ ! -e \"${_pthLog}/process.max.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    sed -i \"s/process.max =.*/process.max = 0/g\" /opt/php*/etc/php*-fpm.conf\n    touch ${_pthLog}/process.max.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n}\n\n_fix_node_in_lshell_access() {\n  if [ ! 
-e \"${_pthLog}/node.lshell-fix-npx.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n    && [ -e \"/etc/lshell.conf\" ]; then\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    if [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [ -e \"/root/.allow.node.lshell.cnf\" ]; then\n      _ALLOW_NODE=YES\n    else\n      _ALLOW_NODE=NO\n      sed -i \\\n        -e \"s/, 'node', 'npm', 'npx',/,/gi\" \\\n        -e \"s/, 'scp',/,/gi\" \\\n        /etc/lshell.conf /var/xdrago/conf/lshell.conf\n    fi\n    touch ${_pthLog}/node.lshell-fix-npx.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n}\n\n_fix_php_in_lshell_access() {\n  if [ ! -e \"${_pthLog}/php.lshell-fix-php.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n    && [ -e \"/etc/lshell.conf\" ]; then\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    if [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [ -e \"/root/.allow.php.lshell.cnf\" ]; then\n      _ALLOW_PHP=YES\n    else\n      _ALLOW_PHP=NO\n      sed -i \\\n        -e \"s/, 'php.*':.*php',/,/gi\" \\\n        -e \"s/, '\\/opt\\/php.*',/,/gi\" \\\n        /etc/lshell.conf /var/xdrago/conf/lshell.conf\n    fi\n    touch ${_pthLog}/php.lshell-fix-php.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n}\n\n_if_fix_lshell() {\n  if [ ! 
-e \"/usr/local/etc/lshell.conf\" ] \\\n    && [ ! -L \"/usr/local/etc/lshell.conf\" ] \\\n    && [ -e \"/etc/lshell.conf\" ]; then\n    [ ! -d \"/usr/local/etc\" ] && mkdir -p /usr/local/etc\n    ln -sfn /etc/lshell.conf /usr/local/etc/lshell.conf\n  fi\n  _LSHELL_VRN=0.10\n  _PATH_LSHELL=\"${_usrBin}/lshell\"\n  _LSHELL_CHK_VRN=0.10\n  _LSHELL_FORCE_REINSTALL=NO\n  _isLshell=\"$(which lshell)\"\n  _LSHELL_ITD=$(${_isLshell} --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\"-\" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  if [ -z \"${_isLshell}\" ] \\\n    || [ -z \"${_PATH_LSHELL}\" ] \\\n    || [ \"${_LSHELL_ITD}\" != \"${_LSHELL_CHK_VRN}\" ] \\\n    || [[ \"${_LSHELL_ITD}\" =~ \"Traceback\" ]] \\\n    || [[ \"${_LSHELL_ITD}\" =~ \"bad interpreter\" ]] \\\n    || [[ \"${_LSHELL_ITD}\" =~ \"ImportError\" ]]; then\n    _LSHELL_FORCE_REINSTALL=YES\n  fi\n  if [ \"${_LSHELL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    [ -f \"/etc/lshell.conf\" ] && cp -af /etc/lshell.conf /etc/lshell.conf-bak-${_LSHELL_VRN}\n    _apt_clean_update\n    apt-get install python3-pip ${_aptYesUnth}\n    if [ -x \"/usr/bin/pip3\" ]; then\n      _usePip=/usr/bin/pip3\n    elif [ -x \"/usr/local/bin/pip3\" ]; then\n      _usePip=/usr/local/bin/pip3\n    fi\n    _PIP_TEST=$(${_usePip} --version 2>&1)\n    if [[ \"${_PIP_TEST}\" =~ \"python 3.11\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.12\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.13\" ]]; then\n      # Python 3.11+ on Debian marks the system interpreter as externally\n      # managed (PEP 668), so upgrading pip itself needs the same\n      # --break-system-packages flag used when installing lshell below.\n      ${_usePip} install --upgrade pip --break-system-packages --root-user-action ignore\n    else\n      ${_usePip} install --upgrade pip\n    fi\n    cd /var/opt\n    rm -rf lshell*\n    _get_dev_src \"lshell-${_LSHELL_VRN}.tar.gz\"\n    for _Files in `find /var/opt/lshell-${_LSHELL_VRN} -type f`; do\n      sed -i \"s/kicked/logged/g\" ${_Files} &> /dev/null\n      wait\n      sed -i \"s/Kicked/Logged/g\" ${_Files} &> /dev/null\n      wait\n    done\n    rm -rf /usr/local/lib/python*/site-packages/lshell*\n    rm -rf 
/usr/local/lib/python*/dist-packages/lshell*\n    cd /var/opt/lshell-${_LSHELL_VRN}\n    _PIP_TEST=$(${_usePip} --version 2>&1)\n    if [[ \"${_PIP_TEST}\" =~ \"python 3.11\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.12\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.13\" ]]; then\n      ${_usePip} install . --break-system-packages --root-user-action ignore\n    else\n      ${_usePip} install .\n    fi\n    [ -f \"/etc/lshell.conf-bak-${_LSHELL_VRN}\" ] && cp -af /etc/lshell.conf-bak-${_LSHELL_VRN} /etc/lshell.conf\n    rm -f /etc/logrotate.d/lshell\n    addgroup --system lshellg &> /dev/null\n    addgroup --system ltd-shell-more &> /dev/null\n    mkdir -p /var/log/lsh\n    chown :lshellg /var/log/lsh\n    chmod 770 /var/log/lsh &> /dev/null\n    # Kill all non-root logged-in users\n    who | awk '$1 !~ /^root$/ { cmd = \"pkill -KILL -u \" $1; system(cmd) }'\n    touch ${_pthLog}/lshell-fix-build-${_LSHELL_VRN}.log\n  fi\n  if [ -e \"${_usrBin}/lshell\" ]; then\n    chown root:users ${_usrBin}/lshell\n    chmod 750 ${_usrBin}/lshell\n    if [ ! -L \"/usr/bin/lshell\" ]; then\n      ln -sfn ${_usrBin}/lshell /usr/bin/lshell &> /dev/null\n    fi\n  fi\n}\n\n_fix_start_stop_ports_solr() {\n  if [ -x \"/etc/init.d/solr9\" ] && [ -e \"/etc/default/solr9.in.sh\" ]; then\n    _SOLR9_STOP_TEST=$(grep \"STOP\\.PORT=19099\" /etc/default/solr9.in.sh 2>&1)\n    _SOLR9_WAIT_TEST=$(grep \"SOLR_START_WAIT=\" /etc/default/solr9.in.sh 2>&1)\n    if [ ! -e \"/var/log/boa/solr9.in.004.fixed.pid\" ] \\\n      || [[ ! \"${_SOLR9_STOP_TEST}\" =~ \"19099\" ]] \\\n      || [[ ! 
\"${_SOLR9_WAIT_TEST}\" =~ \"10\" ]]; then\n      sed -i \"s/^SOLR_STOP_PORT.*//g\" /etc/default/solr9.in.sh\n      sed -i \"s/^SOLR_STOP_KEY.*//g\" /etc/default/solr9.in.sh\n      sed -i \"s/.*mycustomkey9.*//g\" /etc/default/solr9.in.sh\n      sed -i \"s/.*_WAIT.*//g\" /etc/default/solr9.in.sh\n      echo \"SOLR_OPTS=\\\"\\$SOLR_OPTS -DSTOP.PORT=19099 -DSTOP.KEY=mycustomkey9\\\"\" >> /etc/default/solr9.in.sh\n      echo \"SOLR_START_WAIT=\\\"10\\\"\" >> /etc/default/solr9.in.sh\n      echo \"SOLR_STOP_WAIT=\\\"10\\\"\" >> /etc/default/solr9.in.sh\n      echo \"SOLR_WAIT_FOR_ZK=\\\"10\\\"\" >> /etc/default/solr9.in.sh\n      sed -i \"/^$/d\" /etc/default/solr9.in.sh\n      echo \"_restartSolr9 at $(date)\" >> ${_pthLog}/_fix_start_stop_ports_solr.log\n      touch /var/log/boa/solr9.in.004.fixed.pid\n      service solr9 restart\n    fi\n  fi\n  if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/etc/default/solr7.in.sh\" ]; then\n    _SOLR7_STOP_TEST=$(grep \"STOP\\.PORT=17077\" /etc/default/solr7.in.sh 2>&1)\n    _SOLR7_WAIT_TEST=$(grep \"SOLR_START_WAIT=\" /etc/default/solr7.in.sh 2>&1)\n    if [ ! -e \"/var/log/boa/solr7.in.004.fixed.pid\" ] \\\n      || [[ ! \"${_SOLR7_STOP_TEST}\" =~ \"17077\" ]] \\\n      || [[ ! 
\"${_SOLR7_WAIT_TEST}\" =~ \"10\" ]]; then\n      sed -i \"s/^SOLR_STOP_PORT.*//g\" /etc/default/solr7.in.sh\n      sed -i \"s/^SOLR_STOP_KEY.*//g\" /etc/default/solr7.in.sh\n      sed -i \"s/.*mycustomkey7.*//g\" /etc/default/solr7.in.sh\n      sed -i \"s/.*_WAIT.*//g\" /etc/default/solr7.in.sh\n      echo \"SOLR_OPTS=\\\"\\$SOLR_OPTS -DSTOP.PORT=17077 -DSTOP.KEY=mycustomkey7\\\"\" >> /etc/default/solr7.in.sh\n      echo \"SOLR_START_WAIT=\\\"10\\\"\" >> /etc/default/solr7.in.sh\n      echo \"SOLR_STOP_WAIT=\\\"10\\\"\" >> /etc/default/solr7.in.sh\n      echo \"SOLR_WAIT_FOR_ZK=\\\"10\\\"\" >> /etc/default/solr7.in.sh\n      sed -i \"/^$/d\" /etc/default/solr7.in.sh\n      echo \"_restartSolr7 at $(date)\" >> ${_pthLog}/_fix_start_stop_ports_solr.log\n      touch /var/log/boa/solr7.in.004.fixed.pid\n      service solr7 restart\n    fi\n  fi\n  if [ -x \"/etc/init.d/jetty9\" ]; then\n    _restartSolr4=FALSE\n    # Count leftover extracted solr.war copies in /tmp; the stderr redirect\n    # must apply to ls, before the pipe, so wc -l only ever sees file names\n    # and the test below always receives a plain number.\n    _ctrl_jetty_nr=$(ls -la /tmp/jetty-0.0.0.0-8099-solr.war* 2> /dev/null | wc -l)\n    if [ \"${_ctrl_jetty_nr}\" -gt 8 ]; then\n      _restartSolr4=TRUE\n    fi\n    if [ \"${_restartSolr4}\" = \"TRUE\" ]; then\n      if [ ! -x \"/etc/init.d/jenkins\" ] && [ ! -e \"/var/lib/jenkins\" ]; then\n        find /tmp -mindepth 1 -user jetty9 -exec rm -rf {} + 2>/dev/null\n        pkill -9 -f jetty9\n        echo \"_restartSolr4 at $(date)\" >> ${_pthLog}/_fix_start_stop_ports_solr.log\n      fi\n    fi\n  fi\n}\n\n_fix_log4j_solr7() {\n  _LOG4J_VRN=2.17.1\n  _DO_SOLR_RESTART=\n  if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/etc/default/solr7.in.sh\" ]; then\n    if [ -e \"/opt/solr-7.7.3\" ] \\\n      && [ ! 
-e \"/opt/solr-7.7.3/server/lib/ext/log4j-core-${_LOG4J_VRN}.jar\" ]; then\n      cd /var/opt\n      rm -rf apache-log4j*\n      _get_dev_src \"apache-log4j-${_LOG4J_VRN}-bin.tar.gz\"\n      if [ -e \"/var/opt/apache-log4j-${_LOG4J_VRN}-bin/log4j-core-${_LOG4J_VRN}.jar\" ]; then\n        cd /var/opt/apache-log4j-${_LOG4J_VRN}-bin\n        [ -d \"/var/backups/log4j/solr-7.7.3\" ] || mkdir -p /var/backups/log4j/solr-7.7.3\n        mv -f /opt/solr-7.7.3/server/lib/ext/log4j* /var/backups/log4j/solr-7.7.3/\n        rm -f /opt/solr-7.7.3/contrib/prometheus-exporter/lib/log4j*\n        cp -af log4j-1.2-api-${_LOG4J_VRN}.jar    /opt/solr-7.7.3/server/lib/ext/\n        cp -af log4j-core-${_LOG4J_VRN}.jar       /opt/solr-7.7.3/server/lib/ext/\n        cp -af log4j-core-${_LOG4J_VRN}.jar       /opt/solr-7.7.3/contrib/prometheus-exporter/lib/\n        cp -af log4j-slf4j-impl-${_LOG4J_VRN}.jar /opt/solr-7.7.3/server/lib/ext/\n        cp -af log4j-slf4j-impl-${_LOG4J_VRN}.jar /opt/solr-7.7.3/contrib/prometheus-exporter/lib/\n        cp -af log4j-api-${_LOG4J_VRN}.jar        /opt/solr-7.7.3/server/lib/ext/\n        cp -af log4j-api-${_LOG4J_VRN}.jar        /opt/solr-7.7.3/contrib/prometheus-exporter/lib/\n        chown root:root /opt/solr-7.7.3/server/lib/ext/log4j*\n        chown root:root /opt/solr-7.7.3/contrib/prometheus-exporter/lib/log4j*\n        _DO_SOLR_RESTART=YES\n      fi\n    fi\n    if [ -e \"/opt/solr-7.6.0\" ] \\\n      && [ ! 
-e \"/opt/solr-7.6.0/server/lib/ext/log4j-core-${_LOG4J_VRN}.jar\" ]; then\n      cd /var/opt\n      rm -rf apache-log4j*\n      _get_dev_src \"apache-log4j-${_LOG4J_VRN}-bin.tar.gz\"\n      if [ -e \"/var/opt/apache-log4j-${_LOG4J_VRN}-bin/log4j-core-${_LOG4J_VRN}.jar\" ]; then\n        cd /var/opt/apache-log4j-${_LOG4J_VRN}-bin\n        [ -d \"/var/backups/log4j/solr-7.6.0\" ] || mkdir -p /var/backups/log4j/solr-7.6.0\n        mv -f /opt/solr-7.6.0/server/lib/ext/log4j* /var/backups/log4j/solr-7.6.0/\n        rm -f /opt/solr-7.6.0/contrib/prometheus-exporter/lib/log4j*\n        cp -af log4j-1.2-api-${_LOG4J_VRN}.jar    /opt/solr-7.6.0/server/lib/ext/\n        cp -af log4j-core-${_LOG4J_VRN}.jar       /opt/solr-7.6.0/server/lib/ext/\n        cp -af log4j-core-${_LOG4J_VRN}.jar       /opt/solr-7.6.0/contrib/prometheus-exporter/lib/\n        cp -af log4j-slf4j-impl-${_LOG4J_VRN}.jar /opt/solr-7.6.0/server/lib/ext/\n        cp -af log4j-slf4j-impl-${_LOG4J_VRN}.jar /opt/solr-7.6.0/contrib/prometheus-exporter/lib/\n        cp -af log4j-api-${_LOG4J_VRN}.jar        /opt/solr-7.6.0/server/lib/ext/\n        cp -af log4j-api-${_LOG4J_VRN}.jar        /opt/solr-7.6.0/contrib/prometheus-exporter/lib/\n        chown root:root /opt/solr-7.6.0/server/lib/ext/log4j*\n        chown root:root /opt/solr-7.6.0/contrib/prometheus-exporter/lib/log4j*\n        _DO_SOLR_RESTART=YES\n      fi\n    fi\n    _RESULT_LOG4J=$(grep \"LOG4J_FORMAT_MSG_NO_LOOKUPS=true\" /etc/default/solr7.in.sh 2>&1)\n    if [[ ! \"${_RESULT_LOG4J}\" =~ \"LOG4J\" ]]; then\n      echo \"LOG4J_FORMAT_MSG_NO_LOOKUPS=true\" >> /etc/default/solr7.in.sh\n    fi\n    if [[ ! \"${_RESULT_LOG4J}\" =~ \"LOG4J\" ]] || [ ! -z \"${_DO_SOLR_RESTART}\" ]; then\n      #pkill -9 -f solr7\n      service solr7 restart &> /dev/null\n    fi\n  fi\n}\n\n_fix_authorized_keys() {\n  if [ ! 
-e \"${_pthLog}/_fix_authorized_keys.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    chmod 0600 /home/*/.ssh/authorized_keys &> /dev/null\n    chmod 0700 /home/*/.ssh &> /dev/null\n    touch ${_pthLog}/_fix_authorized_keys.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n}\n\n_fix_aio() {\n  _AIO_FIX=$(grep \"fs.aio-max-nr\" /etc/sysctl.conf 2>&1)\n  if [ -z \"${_AIO_FIX}\" ]; then\n    echo \"fs.aio-max-nr = 1048576\" >> /etc/sysctl.conf\n  fi\n}\n\n_fix_console_print() {\n  _PRK_FIX=$(grep \"kernel.printk\" /etc/sysctl.conf 2>&1)\n  if [ -z \"${_PRK_FIX}\" ]; then\n    echo \"kernel.printk = 4 1 1 7\" >> /etc/sysctl.conf\n  fi\n}\n\n_fix_java_symlinks() {\n  if [ \"${_OS_CODE}\" = \"jessie\" ] && [ -x \"/usr/lib/jvm/java-7-openjdk/jre/bin/java\" ]; then\n    if [ ! -e \"/usr/bin/java\" ] || [ ! -e \"/etc/alternatives/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-7-openjdk/jre/bin/java /etc/alternatives/java\n      ln -sfn /etc/alternatives/java /usr/bin/java\n      echo fixed java symlinks for ${_OS_CODE}\n    fi\n  fi\n  if [ \"${_OS_CODE}\" = \"stretch\" ] && [ -x \"/usr/lib/jvm/java-8-openjdk/jre/bin/java\" ]; then\n    if [ ! -e \"/usr/bin/java\" ] || [ ! -e \"/etc/alternatives/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-8-openjdk/jre/bin/java /etc/alternatives/java\n      ln -sfn /etc/alternatives/java /usr/bin/java\n      echo fixed java symlinks for ${_OS_CODE}\n    fi\n  fi\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    if [ ! -e \"/usr/lib/jvm/java-21-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-21-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-21-openjdk-amd64 /usr/lib/jvm/java-21-openjdk\n    fi\n    if [ ! -e \"/usr/bin/java21\" ] \\\n      && [ -e \"/usr/lib/jvm/java-21-openjdk-amd64/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-21-openjdk-amd64/bin/java /usr/bin/java21\n    fi\n    if [ ! 
-e \"/usr/lib/jvm/java-17-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-17-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/java-17-openjdk\n    fi\n    if [ ! -e \"/usr/bin/java17\" ] \\\n      && [ -e \"/usr/lib/jvm/java-17-openjdk-amd64/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-17-openjdk-amd64/bin/java /usr/bin/java17\n    fi\n    if [ ! -e \"/usr/lib/jvm/java-11-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-11-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-11-openjdk-amd64 /usr/lib/jvm/java-11-openjdk\n    fi\n    if [ ! -e \"/usr/bin/java11\" ] \\\n      && [ -e \"/usr/lib/jvm/java-11-openjdk-amd64/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-11-openjdk-amd64/bin/java /usr/bin/java11\n    fi\n    if [ -x \"/etc/init.d/jenkins\" ] && [ -e \"/var/lib/jenkins\" ]; then\n      _LOOK_LIKE_JENKINS=TRUE\n    elif [ -e \"/root/.look.like.jenkins.cnf\" ]; then\n      _LOOK_LIKE_JENKINS=TRUE\n    else\n      _LOOK_LIKE_JENKINS=FALSE\n    fi\n    if [ \"${_LOOK_LIKE_JENKINS}\" = \"TRUE\" ] \\\n      || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n      || [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      if [ -x \"/usr/lib/jvm/java-17-openjdk/bin/java\" ] \\\n        && [ ! -e \"/var/log/boa/.fixed-java17-symlinks.log\" ]; then\n        if [ -e \"/usr/lib/jvm/java-17-openjdk-amd64\" ]; then\n          rm -f /usr/lib/jvm/default-java\n          ln -sfn /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/default-java\n        fi\n        ln -sfn /usr/lib/jvm/java-17-openjdk/bin/java /etc/alternatives/java\n        ln -sfn /etc/alternatives/java /usr/bin/java\n        touch /var/log/boa/.fixed-java17-symlinks.log\n        echo \"Fixed Java 17 symlinks for ${_OS_CODE}\"\n      fi\n      if [ -x \"/usr/lib/jvm/java-21-openjdk/bin/java\" ] \\\n        && [ ! 
-e \"/var/log/boa/.fixed-java21-symlinks.log\" ]; then\n        if [ -e \"/usr/lib/jvm/java-21-openjdk-amd64\" ]; then\n          rm -f /usr/lib/jvm/default-java\n          ln -sfn /usr/lib/jvm/java-21-openjdk-amd64 /usr/lib/jvm/default-java\n        fi\n        ln -sfn /usr/lib/jvm/java-21-openjdk/bin/java /etc/alternatives/java\n        ln -sfn /etc/alternatives/java /usr/bin/java\n        touch /var/log/boa/.fixed-java21-symlinks.log\n        echo \"Fixed Java 21 symlinks for ${_OS_CODE}\"\n      fi\n    else\n      if [ -x \"/usr/lib/jvm/java-11-openjdk/bin/java\" ]; then\n        if [ -e \"/usr/lib/jvm/java-11-openjdk-amd64\" ]; then\n          rm -f /usr/lib/jvm/default-java\n          ln -sfn /usr/lib/jvm/java-11-openjdk-amd64 /usr/lib/jvm/default-java\n        fi\n        if [ ! -e \"/usr/bin/java\" ] || [ ! -e \"/etc/alternatives/java\" ]; then\n          ln -sfn /usr/lib/jvm/java-11-openjdk/bin/java /etc/alternatives/java\n          ln -sfn /etc/alternatives/java /usr/bin/java\n          echo \"Fixed Java 11 symlinks for ${_OS_CODE}\"\n        fi\n      fi\n    fi\n  else\n    if [ -x \"/usr/lib/jvm/java-11-openjdk/bin/java\" ]; then\n      if [ ! -e \"/usr/bin/java\" ] || [ ! 
-e \"/etc/alternatives/java\" ]; then\n        ln -sfn /usr/lib/jvm/java-11-openjdk/bin/java /etc/alternatives/java\n        ln -sfn /etc/alternatives/java /usr/bin/java\n        echo \"Fixed Java 11 symlinks for ${_OS_CODE}\"\n      fi\n    fi\n  fi\n}\n\n_fix_composer_version() {\n  _COMPOSER_VRN=2.8.2\n  if [ -x \"/usr/local/bin/composer\" ]; then\n    _COMPOSER_IS=$(composer --no-interaction --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f35 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_COMPOSER_IS}\" != \"${_COMPOSER_VRN}\" ]; then\n      composer self-update ${_COMPOSER_VRN} &> /dev/null\n    fi\n  fi\n}\n\n_fix_sftp_server() {\n  if [ -e \"/etc/ssh/sshd_config\" ]; then\n    _SFTP_UMASK_TEST=$(grep \"sftp-server -u 0002\" /etc/ssh/sshd_config 2>&1)\n    if [[ ! \"${_SFTP_UMASK_TEST}\" =~ \"sftp-server -u 0002\" ]]; then\n      sed -i \"s/^Subsystem.*//g\" /etc/ssh/sshd_config\n      echo \"Subsystem sftp /usr/lib/openssh/sftp-server -u 0002\" >> /etc/ssh/sshd_config\n      sed -i \"/^$/d\" /etc/ssh/sshd_config\n      service ssh restart 2> /dev/null\n    fi\n  fi\n}\n\n_fix_wkhtml_perms() {\n\n  _WKHTML_ARRAY=\"/usr/local/bin/wkhtmltopdf \\\n                  /usr/bin/wkhtmltopdf \\\n                  /usr/bin/wkhtmltopdf-0.12.4 \\\n                  /usr/local/bin/wkhtmltoimage \\\n                  /usr/bin/wkhtmltoimage \\\n                  /usr/bin/wkhtmltoimage-0.12.4\"\n\n  for _WKHTML_ITEM in ${_WKHTML_ARRAY}; do\n    if [ -x \"${_WKHTML_ITEM}\" ]; then\n      _PERM_TEST=$(ls -la ${_WKHTML_ITEM} | grep rwxr-xr-x 2>&1)\n      if [ -z \"${_PERM_TEST}\" ]; then\n        chgrp root ${_WKHTML_ITEM} &> /dev/null\n        chmod 755 ${_WKHTML_ITEM} &> /dev/null\n      fi\n    fi\n  done\n}\n\n_fix_wkhtml() {\n  if [ -x \"/usr/local/bin/wkhtmltopdf\" ] \\\n    && [ -L \"/usr/bin/wkhtmltopdf\" ]; then\n    rm -f /usr/bin/wkhtmltopdf\n    cp -af /usr/local/bin/wkhtmltopdf /usr/bin/wkhtmltopdf\n    chgrp root /usr/bin/wkhtmltopdf &> 
/dev/null\n    chmod 755 /usr/bin/wkhtmltopdf &> /dev/null\n  fi\n  if [ -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n    && [ -L \"/usr/bin/wkhtmltoimage\" ]; then\n    rm -f /usr/bin/wkhtmltoimage\n    cp -af /usr/local/bin/wkhtmltoimage /usr/bin/wkhtmltoimage\n    chgrp root /usr/bin/wkhtmltoimage &> /dev/null\n    chmod 755 /usr/bin/wkhtmltoimage &> /dev/null\n  fi\n  if [ -x \"/usr/local/bin/wkhtmltopdf\" ] \\\n    && [ ! -e \"/usr/bin/wkhtmltopdf\" ]; then\n    cp -af /usr/local/bin/wkhtmltopdf /usr/bin/wkhtmltopdf\n    chgrp root /usr/bin/wkhtmltopdf &> /dev/null\n    chmod 755 /usr/bin/wkhtmltopdf &> /dev/null\n  fi\n  if [ -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n    && [ ! -e \"/usr/bin/wkhtmltoimage\" ]; then\n    cp -af /usr/local/bin/wkhtmltoimage /usr/bin/wkhtmltoimage\n    chgrp root /usr/bin/wkhtmltoimage &> /dev/null\n    chmod 755 /usr/bin/wkhtmltoimage &> /dev/null\n  fi\n  if [ ! -x \"/usr/local/bin/wkhtmltopdf\" ] \\\n    && [ -x \"/usr/bin/wkhtmltopdf\" ]; then\n    rm -f /usr/local/bin/wkhtmltopdf\n    cp -af /usr/bin/wkhtmltopdf /usr/local/bin/wkhtmltopdf\n    chgrp root /usr/local/bin/wkhtmltopdf &> /dev/null\n    chmod 755 /usr/local/bin/wkhtmltopdf &> /dev/null\n  fi\n  if [ ! -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n    && [ -x \"/usr/bin/wkhtmltoimage\" ]; then\n    rm -f /usr/local/bin/wkhtmltoimage\n    cp -af /usr/bin/wkhtmltoimage /usr/local/bin/wkhtmltoimage\n    chgrp root /usr/local/bin/wkhtmltoimage &> /dev/null\n    chmod 755 /usr/local/bin/wkhtmltoimage &> /dev/null\n  fi\n}\n\n_fix_eldir() {\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! 
-e \"${_eldirP}\" ]; then\n    mkdir -p /var/xdrago/conf\n    curl ${_crlGet} \"${_urlHmr}/patches/${_eldirF}\" -o ${_eldirP}\n  fi\n}\n\n_if_drupal_patches_update() {\n  if [ -e \"/var/xdrago\" ]; then\n    _BROKEN_UPDATE_TEST_A=$(grep \"Under Construction\" /data/conf/patches/* 2>&1)\n    _BROKEN_UPDATE_TEST_B=$(grep \"404 Not Found\" /data/conf/patches/* 2>&1)\n    if [ ! -z \"${_BROKEN_UPDATE_TEST_A}\" ] \\\n      || [ ! -z \"${_BROKEN_UPDATE_TEST_B}\" ] \\\n      || [ ! -e \"/data/conf/patches/ctrl.f96.${_tRee}.${_xSrl}.pid\" ]; then\n      mkdir -p /data/conf/patches\n      rm -f /data/conf/patches/*\n      touch /data/conf/patches/ctrl.f96.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n}\n\n_fix_drupal_core_ten() {\n  if [ -e \"/var/xdrago\" ]; then\n    if [ ! -e \"${_tenCorePatchPath}\" ]; then\n      mkdir -p /data/conf/patches\n      curl ${_crlGet} \"${_urlHmr}/patches/${_tenCorePatchFname}\" -o ${_tenCorePatchPath}\n    fi\n    if [ ! -e \"${_tenConsolePatchPath}\" ]; then\n      mkdir -p /data/conf/patches\n      curl ${_crlGet} \"${_urlHmr}/patches/${_tenConsolePatchFname}\" -o ${_tenConsolePatchPath}\n    fi\n  fi\n}\n\n_fix_drupal_core_eleven() {\n  if [ -e \"/var/xdrago\" ]; then\n    if [ ! -e \"${_elevenCorePatchPath}\" ]; then\n      mkdir -p /data/conf/patches\n      curl ${_crlGet} \"${_urlHmr}/patches/${_elevenCorePatchFname}\" -o ${_elevenCorePatchPath}\n    fi\n    if [ ! -e \"${_elevenConsolePatchPath}\" ]; then\n      mkdir -p /data/conf/patches\n      curl ${_crlGet} \"${_urlHmr}/patches/${_elevenConsolePatchFname}\" -o ${_elevenConsolePatchPath}\n    fi\n    if [ ! -e \"${_elevenValidatorPatchPath}\" ]; then\n      mkdir -p /data/conf/patches\n      curl ${_crlGet} \"${_urlHmr}/patches/${_elevenValidatorPatchFname}\" -o ${_elevenValidatorPatchPath}\n    fi\n  fi\n}\n\n_fix_pure_ftpd() {\n  if [ -e \"/usr/local/etc/pure-ftpd.conf\" ]; then\n    _PAM_AUTH=$(grep \"^PAMAuthentication\" /usr/local/etc/pure-ftpd.conf 2>&1)\n    if [ ! 
-z \"${_PAM_AUTH}\" ]; then\n      sed -i \"s/^PAMAuthentication/# PAMAuthentication/g\" /usr/local/etc/pure-ftpd.conf\n      killall -9 pure-ftpd &> /dev/null\n    fi\n  fi\n}\n\n_fix_hosting_le() {\n  if [ -d \"/var/xdrago/conf\" ]; then\n    if [ ! -e \"${_hoLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n      || [ ! -e \"${_pthLog}/dehydrated-up01.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n      || [ -e \"/var/xdrago/${_provLeInc}\" ] \\\n      || [ -e \"/var/xdrago/${_hoLeInc}\" ] \\\n      || [ -e \"/var/xdrago/${_dehydName}\" ] \\\n      || [ -e \"/root/${_provLeInc}\" ] \\\n      || [ -e \"/root/hosting_le_vhost.drush.inc.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n      || [ -e \"/root/${_hoLeInc}\" ] \\\n      || [ -e \"${_legacyLeSh}\" ] \\\n      || [ ! -e \"${_dehydSrcPath}\" ] \\\n      || [ ! -e \"${_provLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n      mkdir -p /var/xdrago/conf\n      rm -f /var/xdrago/*.drush.inc*\n      rm -f /root/*.drush.inc*\n      rm -f ${_legacyLeSh}\n      rm -f ${_dehydSrcPath}.ctrl.${_tRee}.${_xSrl}.pid\n      rm -f ${_hoLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid\n      rm -f ${_provLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid\n      curl ${_crlGet} \"${_urlHmr}/helpers/${_dehydName}\" -o ${_dehydSrcPath}.ctrl.${_tRee}.${_xSrl}.pid\n      cp -af ${_dehydSrcPath}.ctrl.${_tRee}.${_xSrl}.pid ${_dehydSrcPath}\n      curl ${_crlGet} \"${_urlHmr}/patches/${_hoLeInc}\" -o ${_hoLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid\n      cp -af ${_hoLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid ${_hoLeIncFull}\n      curl ${_crlGet} \"${_urlHmr}/patches/${_provLeInc}\" -o ${_provLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid\n      if [ -e \"${_provLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n        cp -af ${_provLeIncFull}.ctrl.${_tRee}.${_xSrl}.pid ${_provLeIncFull}\n        [ -e \"${_provLeIncFull}\" ] && touch ${_pthLog}/dehydrated-up01.ctrl.${_tRee}.${_xSrl}.pid\n      fi\n    fi\n  fi\n}\n\n_fix_newrelic() {\n  
_PHP_EXT_DIR_84=\"/opt/php84/lib/php/extensions/no-debug-non-zts-20240924\"\n  _NR_SO=\"/usr/lib/newrelic-php5/agent/x64/newrelic-20240924.so\"\n  if [ -e \"${_PHP_EXT_DIR_84}\" ] \\\n    && [ -e \"${_NR_SO}\" ] \\\n    && [ ! -e \"${_PHP_EXT_DIR_84}/newrelic.so\" ]; then\n    ln -sfn ${_NR_SO} ${_PHP_EXT_DIR_84}/newrelic.so\n    service php84-fpm reload\n  fi\n\n  _PHP_EXT_DIR_74=\"/opt/php74/lib/php/extensions/no-debug-non-zts-20190902\"\n  _NR_SO=\"/usr/lib/newrelic-php5/agent/x64/newrelic-20190902.so\"\n  if [ -e \"${_PHP_EXT_DIR_74}\" ] \\\n    && [ -e \"${_NR_SO}\" ] \\\n    && [ ! -e \"${_PHP_EXT_DIR_74}/newrelic.so\" ]; then\n    ln -sfn ${_NR_SO} ${_PHP_EXT_DIR_74}/newrelic.so\n    service php74-fpm reload\n  fi\n\n  _PHP_EXT_DIR_71=\"/opt/php71/lib/php/extensions/no-debug-non-zts-20160303\"\n  _NR_SO=\"/usr/lib/newrelic-php5/agent/x64/newrelic-20160303.so\"\n  if [ -e \"${_PHP_EXT_DIR_71}\" ] \\\n    && [ ! -e \"${_NR_SO}\" ] \\\n    && [ -L \"${_PHP_EXT_DIR_71}/newrelic.so\" ]; then\n    rm -f ${_PHP_EXT_DIR_71}/newrelic.so\n    service php71-fpm reload\n  fi\n\n  _PHP_EXT_DIR_70=\"/opt/php70/lib/php/extensions/no-debug-non-zts-20151012\"\n  _NR_SO=\"/usr/lib/newrelic-php5/agent/x64/newrelic-20151012.so\"\n  if [ -e \"${_PHP_EXT_DIR_70}\" ] \\\n    && [ ! -e \"${_NR_SO}\" ] \\\n    && [ -L \"${_PHP_EXT_DIR_70}/newrelic.so\" ]; then\n    rm -f ${_PHP_EXT_DIR_70}/newrelic.so\n    service php70-fpm reload\n  fi\n\n  _PHP_EXT_DIR_56=\"/opt/php56/lib/php/extensions/no-debug-non-zts-20131226\"\n  _NR_SO=\"/usr/lib/newrelic-php5/agent/x64/newrelic-20131226.so\"\n  if [ -e \"${_PHP_EXT_DIR_56}\" ] \\\n    && [ ! -e \"${_NR_SO}\" ] \\\n    && [ -L \"${_PHP_EXT_DIR_56}/newrelic.so\" ]; then\n    rm -f ${_PHP_EXT_DIR_56}/newrelic.so\n    service php56-fpm reload\n  fi\n}\n\n_fix_leftovers() {\n  if [ -e \"/data/disk/arch/static/control\" ]; then\n    rm -rf /data/disk/arch/static\n  fi\n}\n\n_force_rebuild() {\n  if [ ! 
-e \"${_pthLog}/forced.rebuild.glibc.txt\" ]; then\n    echo \"_GIT_FORCE_REINSTALL=YES\" >> ${_barCnf}\n    echo \"_NGX_FORCE_REINSTALL=YES\" >> ${_barCnf}\n    echo \"_PHP_FORCE_REINSTALL=YES\" >> ${_barCnf}\n    echo \"_SSH_FORCE_REINSTALL=YES\" >> ${_barCnf}\n    echo \"_SSL_FORCE_REINSTALL=YES\" >> ${_barCnf}\n    rm -f ${_pthLog}/pure-ftpd-build*\n    rm -f ${_pthLog}/mss-build*\n    rm -f ${_pthLog}/lshell-build*\n    rm -f ${_pthLog}/redis-*\n    rm -f ${_pthLog}/valkey-*\n    touch ${_pthLog}/forced.rebuild.glibc.txt\n  fi\n}\n\n#\n# Detect, remove, and report broken symlinks\n_check_and_remove_broken_symlinks() {\n  local _dir=$1\n\n  # Find broken symlinks in the directory\n  _broken_symlinks=$(find \"${_dir}\" -maxdepth 1 -type l ! -exec test -e {} \\; -print)\n\n  if [ -n \"${_broken_symlinks}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"CLNP: Removing the following broken symlinks from ${_dir}:\"\n      echo \"CLNP: ${_broken_symlinks}\"\n    fi\n\n    for _symlink in ${_broken_symlinks}; do\n      rm \"${_symlink}\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        echo \"CLNP: Removed broken symlink: ${_symlink}\"\n      fi\n    done\n\n    # Set the _ifAnySymlinksCleaned variable to true since we removed broken symlinks\n    _ifAnySymlinksCleaned=YES\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"CLNP: No broken symlinks found in ${_dir}\"\n    fi\n  fi\n}\n\n#\n# Check and move disallowed versions\n_check_and_move() {\n  local _dir=$1\n\n  # Determine the name of the backup subdirectory based on the source directory\n  local _backup_dir=\"${_backLegBase}$(echo \"${_dir}\" | tr '/' '_')\"\n\n  # Find any libcurl.so files in the directory, excluding the allowed version and those without a complete version number\n  _found_versions=$(find \"${_dir}\" -maxdepth 1 -type f -name \"libcurl.so.*\" ! 
-name \"${_allowedFile}\" | grep -E \"libcurl\\.so\\.[0-9]+\\.[0-9]+\\.[0-9]+$\")\n\n  if [ -n \"${_found_versions}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"CLNP: Moving the following disallowed versions from ${_dir} to ${_backup_dir}:\"\n      echo \"CLNP: ${_found_versions}\"\n    fi\n\n    # Create the backup directory if it doesn't exist\n    mkdir -p \"${_backup_dir}\"\n\n    # Move each found version to the backup directory\n    for _file in ${_found_versions}; do\n      mv -f \"${_file}\" \"${_backup_dir}/\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        echo \"CLNP: Moved ${_file} to ${_backup_dir}/\"\n      fi\n    done\n\n    # Set the _ifAnyFilesCleaned variable to true since we moved files\n    _ifAnyFilesCleaned=YES\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"CLNP: Only the allowed version (${_allowedFile}) is present in ${_dir}\"\n    fi\n  fi\n}\n\n_if_reinstall_curl() {\n  _CURL_VRN=8.20.0\n  _CURL_INSTALL_REQUIRED=NO\n  if ! 
command -v lsb_release &> /dev/null; then\n    apt-get update -qq &> /dev/null\n    apt-get install lsb-release ${_aptYesUnth} -qq &> /dev/null\n  fi\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  [ \"${_OS_CODE}\" = \"wheezy\" ] && _CURL_VRN=7.50.1\n  [ \"${_OS_CODE}\" = \"jessie\" ] && _CURL_VRN=7.71.1\n  [ \"${_OS_CODE}\" = \"stretch\" ] && _CURL_VRN=8.2.1\n\n  if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ \"${_OS_CODE}\" != \"jessie\" ] \\\n    && [ \"${_OS_CODE}\" != \"stretch\" ]; then\n\n    # Target version\n    _allowedFile=\"libcurl.so.4.8.0\"\n\n    # Directories to check\n    _dirsToClean=(\"/usr/lib\" \"/usr/local/lib\" \"/usr/lib/x86_64-linux-gnu\")\n\n    # Backup base directory\n    _backLegBase=\"/var/backups/legacy-libcurl-boa-${_NOW}\"\n\n    # Variable to track if any files were moved\n    _ifAnyFilesCleaned=NO\n\n    # Variable to track if any broken symlinks were found and removed\n    _ifAnySymlinksCleaned=NO\n\n    # Iterate over the directories and apply the _check_and_move function\n    for _dir in \"${_dirsToClean[@]}\"; do\n      _check_and_move \"${_dir}\"\n    done\n\n    # Iterate over the directories and apply the _check_and_remove_broken_symlinks function\n    for _dir in \"${_dirsToClean[@]}\"; do\n      _check_and_remove_broken_symlinks \"${_dir}\"\n    done\n\n    # Export the _ifAnyFilesCleaned variable for later use\n    export _ifAnyFilesCleaned\n\n    # Export the _ifAnySymlinksCleaned variable for later use\n    export _ifAnySymlinksCleaned\n\n  fi\n\n  if [ \"${_ifAnySymlinksCleaned}\" = \"YES\" ] \\\n    || [ \"${_ifAnyFilesCleaned}\" = \"YES\" ]; then\n    ldconfig 2> /dev/null\n    _CURL_INSTALL_REQUIRED=YES\n    _bkLibcurlPre=\"/var/backups/legacy-libcurl-pre-${_CURL_VRN}-${_NOW}\"\n    mkdir -p ${_bkLibcurlPre}\n    mv -f /usr/lib/x86_64-linux-gnu/libcurl.so* ${_bkLibcurlPre}/ &> /dev/null\n    mv -f /usr/lib/x86_64-linux-gnu/libcurl.la ${_bkLibcurlPre}/ &> /dev/null\n 
   mv -f /usr/lib/x86_64-linux-gnu/libcurl.a ${_bkLibcurlPre}/ &> /dev/null\n  fi\n\n  _isCurl=$(curl --version 2>&1)\n  if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] \\\n    || [[ \"${_isCurl}\" =~ \"libcurl.so.4\" ]] \\\n    || [ -z \"${_isCurl}\" ] \\\n    || [ \"${_ifAnySymlinksCleaned}\" = \"YES\" ] \\\n    || [ \"${_ifAnyFilesCleaned}\" = \"YES\" ] \\\n    || [ \"${_CURL_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      echo \"OOPS: cURL is broken! Re-installing..\"\n    fi\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    echo \"curl install\" | dpkg --set-selections 2> /dev/null\n    _apt_clean_update\n    # Check for libssl1.0-dev and remove conditionally\n    if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n      apt-get remove libssl1.0-dev -y --purge --auto-remove -qq 2>/dev/null\n    fi\n    apt-get autoremove -y 2> /dev/null\n    apt-get install libssl-dev ${_aptYesUnth} -qq 2> /dev/null\n    apt-get build-dep curl ${_aptYesUnth} 2> /dev/null\n    if [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      apt-get install curl --reinstall ${_aptYesUnth} -qq 2> /dev/null\n    fi\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      echo \"INFO: Installing curl from sources...\"\n      mkdir -p /var/opt\n      rm -rf /var/opt/curl*\n      cd /var/opt\n      wget ${_wgetGet} http://files.aegir.cc/dev/src/curl-${_CURL_VRN}.tar.gz &> /dev/null\n      tar -xzf curl-${_CURL_VRN}.tar.gz &> /dev/null\n      if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n        && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n        _SSL_BINARY=/usr/local/ssl3/bin/openssl\n      else\n        _SSL_BINARY=/usr/local/ssl/bin/openssl\n      fi\n      if [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n        _SSL_PATH=\"/usr/local/ssl3\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n      else\n        _SSL_PATH=\"/usr/local/ssl\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n      fi\n      _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n\n      if [ -e \"${_PKG_CONFIG_PATH}\" ] \\\n        && [ -e \"/var/opt/curl-${_CURL_VRN}\" ]; then\n        cd /var/opt/curl-${_CURL_VRN}\n        LIBS=\"-ldl -lpthread\" PKG_CONFIG_PATH=\"${_PKG_CONFIG_PATH}\" ./configure \\\n          --with-openssl \\\n          --with-zlib=/usr \\\n          --prefix=/usr/local &> /dev/null\n        make -j $(nproc) --quiet &> /dev/null\n        make --quiet install &> /dev/null\n        ldconfig 2> /dev/null\n      fi\n    fi\n    if [ -x \"/usr/local/bin/curl\" ] && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      _CURL_ITD=$(/usr/local/bin/curl --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f2 \\\n        | awk '{ print $1}' 2>&1)\n      if [[ ! 
\"${_CURL_ITD}\" =~ \"${_CURL_VRN}\" ]]; then\n        echo \"ERRR: /usr/local/bin/curl is broken\"\n        echo \"ERRR: Please install cURL and debug manually\"\n      else\n        echo \"GOOD: /usr/local/bin/curl works\"\n        echo \"curl hold\" | dpkg --set-selections &> /dev/null\n        if [ -x \"/usr/local/bin/curl\" ]; then\n          if [ -x \"/usr/bin/curl\" ] && [ ! -L \"/usr/bin/curl\" ]; then\n            mv -f /usr/bin/curl /usr/bin/old-curl-$(date +%y%m%d-%H%M%S)\n          fi\n          ln -sfn /usr/local/bin/curl /usr/bin/curl\n        fi\n        if [ ! -e \"${_SSL_PATH}/certs/ca-certificates.crt\" ]; then\n          cp -af /etc/ssl/certs/* ${_SSL_PATH}/certs/ &> /dev/null\n        fi\n        if [ -e \"/usr/local/lib/libcurl.so.4.8.0\" ]; then\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/libcurl.so\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/libcurl.so.4\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/libcurl.so.4.8.0\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/x86_64-linux-gnu/libcurl.so\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/x86_64-linux-gnu/libcurl.so.4\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/x86_64-linux-gnu/libcurl.so.4.8.0\n        fi\n        if [ -e \"/usr/local/lib/libcurl.a\" ]; then\n          ln -sfn /usr/local/lib/libcurl.a /usr/lib/x86_64-linux-gnu/libcurl.a\n          ln -sfn /usr/local/lib/libcurl.a /usr/lib/libcurl.a\n        fi\n        if [ -e \"/usr/local/lib/libcurl.la\" ]; then\n          ln -sfn /usr/local/lib/libcurl.la /usr/lib/x86_64-linux-gnu/libcurl.la\n          ln -sfn /usr/local/lib/libcurl.la /usr/lib/libcurl.la\n        fi\n        ldconfig 2> /dev/null\n        if [ -e \"/usr/local/include/curl/curl.h\" ] \\\n          && [ -e \"/usr/local/include/curl/easy.h\" ] \\\n          && [ -d \"/usr/include/x86_64-linux-gnu/curl\" ] \\\n          && [ ! 
-L \"/usr/include/x86_64-linux-gnu/curl\" ]; then\n          _apt_clean_update\n          if dpkg-query -W -f='${Status}' libcurl4-openssl-dev 2>/dev/null | grep -q \"install ok installed\"; then\n            apt-get remove libcurl4-openssl-dev -y --purge --auto-remove -qq 2> /dev/null\n          fi\n          ln -sfn /usr/local/include/curl /usr/include/x86_64-linux-gnu/curl\n          ldconfig 2> /dev/null\n        fi\n      fi\n    fi\n  fi\n}\n\n_if_boa_key_tools_update_allowed() {\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n    _BOA_KEY_TOOLS_UPDATE_ALLOWED=NO\n  else\n    _BOA_KEY_TOOLS_UPDATE_ALLOWED=YES\n  fi\n}\n\n_update_boa_tools() {\n  mkdir -p ${_usrBin}\n  if [ -e \"${_pthLog}\" ] && [ ! -e \"${_pthLog}/updateFx30.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    _fxPp=\"fix-drupal-platform-permissions.sh\"\n    _fxSp=\"fix-drupal-site-permissions.sh\"\n    _fxPo=\"fix-drupal-platform-ownership.sh\"\n    _fxSo=\"fix-drupal-site-ownership.sh\"\n    _fxLo=\"lock-local-drush-permissions.sh\"\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/${_fxPp}\" -o ${_usrBin}/${_fxPp}\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/${_fxSp}\" -o ${_usrBin}/${_fxSp}\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/${_fxPo}\" -o ${_usrBin}/${_fxPo}\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/${_fxSo}\" -o ${_usrBin}/${_fxSo}\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/${_fxLo}\" -o ${_usrBin}/${_fxLo}\n    chmod 700 ${_usrBin}/${_fxPp}\n    chmod 700 ${_usrBin}/${_fxSp}\n    chmod 700 ${_usrBin}/${_fxPo}\n    chmod 700 ${_usrBin}/${_fxSo}\n    chmod 700 ${_usrBin}/${_fxLo}\n    touch ${_pthLog}/updateFx30.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n  mkdir -p ${_optBin}\n  _boaBins=\"aptcleanup \\\n            aptfast \\\n            autobeowulf \\\n            autochimaera \\\n            autodaedalus \\\n            autoexcalibur \\\n            autoinit 
\\\n            automini \\\n            autosymlink \\\n            autoupboa \\\n            backboa \\\n            backchain \\\n            barracuda \\\n            boa \\\n            codebasecheck \\\n            copydbackup \\\n            dcysetup \\\n            dhcpfix \\\n            duobackboa \\\n            fancynow \\\n            ffdevuan \\\n            ffmirror \\\n            fixmounts \\\n            fixrepo \\\n            killer \\\n            loadguard \\\n            lock.inc \\\n            memorytuner \\\n            mergecsf \\\n            multiback \\\n            mybackup \\\n            mycnfup \\\n            mysqltuner5 \\\n            mysqltuner8 \\\n            octopus \\\n            perftest \\\n            randpass \\\n            renameaegirhost \\\n            screenfetch \\\n            setprio \\\n            smtpgapps \\\n            sqlclean \\\n            sqlmagic \\\n            syncpass \\\n            synproxy \\\n            synproxy_hook_fix \\\n            synproxy_monitor \\\n            synproxy_reassert \\\n            synproxy_rollback \\\n            synproxy_snapshot \\\n            synproxy_status \\\n            thinkdifferent \\\n            updatesymlinks \\\n            verifyvhostsdns \\\n            vhostcheck \\\n            vmnetfix \\\n            weblogx \\\n            webserver \\\n            websh \\\n            xboa \\\n            xcopy\"\n  for _cbn in ${_boaBins}; do\n    if [ -e \"${_optBin}/${_cbn}\" ]; then\n      _CNT=$(pgrep -fc /local/bin/${_cbn})\n      if (( _CNT > 0 )); then\n        echo \"The ${_cbn} is running!\"\n      else\n        _CNT=$(pgrep -fc /var/xdrago/daily.sh)\n        if [ \"${_cbn}\" = \"weblogx\" ] && (( _CNT > 0 )); then\n          echo \"The ${_cbn} and daily.sh are running!\"\n        else\n          rm -f ${_optBin}/${_cbn}.new\n          if [ \"${_cbn}\" = \"mysqltuner5\" ] || [ \"${_cbn}\" = \"mysqltuner8\" ]; then\n            curl ${_crlGet} 
\"${_urlHmr}/helpers/${_cbn}\" -o ${_optBin}/${_cbn}.new\n          else\n            curl ${_crlGet} \"${_urlHmr}/${_tBn}/${_cbn}\" -o ${_optBin}/${_cbn}.new\n          fi\n          mv -f ${_optBin}/${_cbn} ${_optBin}/${_cbn}.prev\n          mv -f ${_optBin}/${_cbn}.new ${_optBin}/${_cbn}\n          if [ -e \"${_optBin}/${_cbn}\" ]; then\n            chmod 755 ${_optBin}/${_cbn}\n            rm -f ${_optBin}/${_cbn}.prev\n          else\n            mv -f ${_optBin}/${_cbn}.prev ${_optBin}/${_cbn}\n          fi\n        fi\n      fi\n    else\n      if [ \"${_cbn}\" = \"mysqltuner5\" ] || [ \"${_cbn}\" = \"mysqltuner8\" ]; then\n        curl ${_crlGet} \"${_urlHmr}/helpers/${_cbn}\" -o ${_optBin}/${_cbn}\n      else\n        curl ${_crlGet} \"${_urlHmr}/${_tBn}/${_cbn}\" -o ${_optBin}/${_cbn}\n      fi\n    fi\n  done\n\n  if [ -e \"${_optBin}/fixmounts\" ] && [ ! -e \"${_usrBin}/fixmounts\" ]; then\n    rm -f ${_usrBin}/{aptcleanup*,autoinit*,automini*,backchain*,barracuda*,boa*,dhcpfix*,ffdevuan*,ffmirror*,fixmounts*}\n    rm -f ${_usrBin}/{aptfast*,killer*,loadguard*,perftest*,octopus*,screenfetch*,vmnetfix*,webserver*,websh*}\n    ln -sfn ${_optBin}/aptcleanup  ${_usrBin}/aptcleanup\n    ln -sfn ${_optBin}/autoinit    ${_usrBin}/autoinit\n    ln -sfn ${_optBin}/automini    ${_usrBin}/automini\n    ln -sfn ${_optBin}/backchain   ${_usrBin}/backchain\n    ln -sfn ${_optBin}/barracuda   ${_usrBin}/barracuda\n    ln -sfn ${_optBin}/boa         ${_usrBin}/boa\n    ln -sfn ${_optBin}/dhcpfix     ${_usrBin}/dhcpfix\n    ln -sfn ${_optBin}/ffdevuan    ${_usrBin}/ffdevuan\n    ln -sfn ${_optBin}/ffmirror    ${_usrBin}/ffmirror\n    ln -sfn ${_optBin}/fixmounts   ${_usrBin}/fixmounts\n    ln -sfn ${_optBin}/killer      ${_usrBin}/killer\n    ln -sfn ${_optBin}/loadguard   ${_usrBin}/loadguard\n    ln -sfn ${_optBin}/octopus     ${_usrBin}/octopus\n    ln -sfn ${_optBin}/perftest    ${_usrBin}/perftest\n    ln -sfn ${_optBin}/screenfetch ${_usrBin}/screenfetch\n    ln 
-sfn ${_optBin}/vmnetfix    ${_usrBin}/vmnetfix\n    ln -sfn ${_optBin}/webserver   ${_usrBin}/webserver\n    ln -sfn ${_optBin}/websh       ${_usrBin}/websh\n  fi\n\n  if [ -e \"/data/u\" ]; then\n    if [ ! -e \"${_usrBin}/dcysetup\" ] && [ -e \"${_optBin}/dcysetup\" ]; then\n      ln -sfn ${_optBin}/dcysetup ${_usrBin}/dcysetup\n    fi\n    if [ ! -e \"${_usrBin}/multiback\" ] && [ -e \"${_optBin}/multiback\" ]; then\n      ln -sfn ${_optBin}/multiback ${_usrBin}/multiback\n    fi\n    if [ ! -e \"${_usrBin}/mybackup\" ] && [ -e \"${_optBin}/mybackup\" ]; then\n      ln -sfn ${_optBin}/mybackup ${_usrBin}/mybackup\n    fi\n  fi\n\n  echo \"=== BOA executables permissions setup ===\"\n  echo \"Last updated: $(date)\"\n  echo \"Groups are organized by function.\"\n\n  # _AUTO (700): automatic install/upgrade helpers\n  chmod 700 ${_optBin}/{autobeowulf,autochimaera,autodaedalus,autoexcalibur,autoinit,automini,autoupboa}\n\n  # _BACKUP (700): backup helpers\n  chmod 700 ${_optBin}/{backboa,copydbackup,dcysetup,duobackboa,multiback}\n\n  # _CORE (700): core BOA tools\n  chmod 700 ${_optBin}/{barracuda,boa,ffdevuan,ffmirror,killer,octopus,webserver}\n  chmod 700 ${_optBin}/{loadguard,lock.inc,renameaegirhost,syncpass,weblogx,xboa,xcopy}\n\n  # _DB (700): performance and DB tuners\n  chmod 700 ${_optBin}/{memorytuner,mycnfup,mysqltuner5,mysqltuner8,perftest}\n\n  # _MAIL (700): mail + priority tools\n  chmod 700 ${_optBin}/{setprio,smtpgapps}\n\n  # _NET (700): network protection\n  chmod 700 ${_optBin}/synproxy*\n\n  # _SYS (700): system utilities\n  chmod 700 ${_optBin}/{aptfast,codebasecheck,dhcpfix,fancynow,fixmounts,fixrepo,mergecsf,screenfetch,vmnetfix}\n\n  # _SYS (700): cleanup tools\n  chmod 700 ${_optBin}/{aptcleanup,autosymlink,sqlclean,updatesymlinks,verifyvhostsdns,vhostcheck}\n\n  # _MISC (755): misc user-space utilities\n  chmod 755 ${_optBin}/{backchain,mybackup,randpass,sqlmagic,thinkdifferent,websh}\n\n  echo \"Permissions applied successfully\"\n  
echo \"=== End of BOA executables permissions setup ===\"\n\n}\n\n# Ensure /usr/sbin/ipset and /sbin/ipset both resolve to the actual ipset binary.\n_ensure_ipset_symlinks() {\n  _IPSET_REAL=\"$(command -v ipset 2>/dev/null || true)\"\n  if [ -z \"${_IPSET_REAL}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"ipset not installed; skipping symlink fixes\"\n    fi\n    return 0\n  fi\n\n  # Resolve through any intermediate symlinks.\n  if [ -L \"${_IPSET_REAL}\" ]; then\n    _IPSET_REAL=\"$(readlink -f \"${_IPSET_REAL}\")\"\n  fi\n\n  for _CAND in /usr/sbin/ipset /sbin/ipset; do\n    _PARENT=\"$(dirname \"${_CAND}\")\"\n    [ -d \"${_PARENT}\" ] || mkdir -p \"${_PARENT}\"\n\n    # If the candidate *is* the real file, nothing to do.\n    if [ \"${_CAND}\" = \"${_IPSET_REAL}\" ]; then\n      continue\n    fi\n\n    # If it exists, check whether it already resolves to the right target.\n    if [ -e \"${_CAND}\" ] || [ -L \"${_CAND}\" ]; then\n      _TARGET=\"$(readlink -f \"${_CAND}\" 2>/dev/null || true)\"\n      if [ \"${_TARGET}\" = \"${_IPSET_REAL}\" ]; then\n        continue\n      fi\n    fi\n\n    ln -sfn \"${_IPSET_REAL}\" \"${_CAND}\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"Linked ${_CAND} -> ${_IPSET_REAL}\"\n    fi\n  done\n}\n\n_if_update_boa_key_tools_only() {\n\n  _check_dns_settings\n  _if_reinstall_curl\n\n  _CURL_TEST=$(curl -L -k -s \\\n    --max-redirs 10 \\\n    --retry 3 \\\n    --retry-delay 10 \\\n    -I \"http://${_USE_MIR}\" 2> /dev/null)\n  if [[ ! \"${_CURL_TEST}\" =~ \"200 OK\" ]]; then\n    if [[ \"${_CURL_TEST}\" =~ \"unknown option was passed in to libcurl\" ]]; then\n      echo \"ERROR: cURL libs are out of sync! Re-installing..\"\n      _if_reinstall_curl\n    fi\n    echo \"ERROR: ${_USE_MIR} is not available, please try later\"\n    exit 1\n  else\n    _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n  fi\n  _LSB_TEST=\"$(which lsb_release)\"\n  if [ ! 
-x \"${_LSB_TEST}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install lsb-release ${_aptYesUnth} &> /dev/null\n  fi\n  _IPSET_TEST=\"$(which ipset)\"\n  if [ ! -x \"${_IPSET_TEST}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    if [ -L \"/sbin/ipset\" ]; then\n      rm -f /sbin/ipset\n    fi\n    if [ -L \"/usr/sbin/ipset\" ]; then\n      rm -f /usr/sbin/ipset\n    fi\n    apt-get install ipset ${_aptYesUnth} &> /dev/null\n  fi\n\n  _ensure_ipset_symlinks\n\n  if [ -x \"/usr/sbin/csf\" ] \\\n    && [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ ! -x \"/etc/csf/csfpost.sh\" ]; then\n    echo \"\" > /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A OUTPUT -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    chmod 700 /etc/csf/csfpost.sh\n    _CSF_TEST=\"$(which csf)\"\n    if [ -x \"${_CSF_TEST}\" ]; then\n      service clean-boa-env start &> /dev/null\n      _if_fix_iptables_symlinks\n      ### csf -uf\n      ### wait\n      _NFTABLES_TEST=$(iptables -V)\n      if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n        if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n          update-alternatives --set iptables /usr/sbin/iptables-legacy &> /dev/null\n        fi\n        if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n          update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n        fi\n        if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n          update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n        fi\n        
if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n          update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n        fi\n      fi\n      sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n      wait\n      sed -i \"/^$/d\" /etc/csf/csf.allow\n      if [ -e \"/var/log/daemon.log\" ]; then\n        _DHCP_LOG=\"/var/log/daemon.log\"\n      else\n        _DHCP_LOG=\"/var/log/syslog\"\n      fi\n      grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n        if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n          IFS='.' read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n          if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n            echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n          fi\n        fi\n      done\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n      ### Linux kernel TCP SACK CVEs mitigation\n      ### CVE-2019-11477 SACK Panic\n      ### CVE-2019-11478 SACK Slowness\n      ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n      if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n        _SACK_TEST=$(ip6tables --list | grep tcpmss)\n        if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n          sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n          iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n          ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n          [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n        fi\n      fi\n    fi\n  fi\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.conf\" ]; then\n    _CC_SRC_TEST=$(grep 'CC_SRC\\ =' /etc/csf/csf.conf 2>&1)\n    ### echo _CC_SRC_TEST 1 is \"${_CC_SRC_TEST}\"\n    if [[ ! 
${_CC_SRC_TEST} =~ CC_SRC\ =\ \\\"2\\\" ]]; then\n      ### echo _CC_SRC_TEST 2 is \"${_CC_SRC_TEST}\"\n      service clean-boa-env start &> /dev/null\n      _if_fix_iptables_symlinks\n      ### csf -uf\n      ### wait\n      _NFTABLES_TEST=$(iptables -V)\n      if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n        if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n          update-alternatives --set iptables /usr/sbin/iptables-legacy &> /dev/null\n        fi\n        if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n          update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n        fi\n        if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n          update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n        fi\n        if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n          update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n        fi\n      fi\n      sed -i \"s/^CC_SRC .*/CC_SRC = \\\"2\\\"/g\" /etc/csf/csf.conf\n      wait\n      sed -i \"s/^AUTO_UPDATES .*/AUTO_UPDATES = \\\"0\\\"/g\" /etc/csf/csf.conf\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n    fi\n  fi\n  _BOA_TOOLS_UPDATE=NO\n  if [ -e \"${_pthLog}\" ]; then\n    if [ ! -x \"/opt/local/bin/xcopy\" ] \\\n      || [ ! -e \"${_boaToolsPid}\" ]; then\n      _BOA_TOOLS_UPDATE=YES\n    fi\n  fi\n  [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] && _BOA_TOOLS_UPDATE=YES\n  if [ \"${_BOA_TOOLS_UPDATE}\" = \"YES\" ]; then\n    _update_boa_tools\n    [ -e \"${_pthLog}\" ] && rm -f ${_pthLog}/updateBOAtools*.pid\n    [ -e \"${_pthLog}\" ] && touch ${_boaToolsPid}\n    if [ \"${1}\" = \"verbose\" ] || [ -z \"${1}\" ]; then\n      echo\n      echo \"BOA Meta Installers setup completed\"\n      echo \"Please check INSTALL.md and UPGRADE.md at https://github.com/omega8cc/boa\"\n      echo \"Bye\"\n      echo\n    fi\n  fi\n}\n\n_boa_setup() {\n  _BENG_VS=NO\n  _VMFAMILY=NO\n  _RANDOMIZE=NO\n  _VM_TEST=\"$(uname -a)\"\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _BENG_VS=YES\n    _RANDOMIZE=YES\n  fi\n  _if_hosted_sys\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    _VMFAMILY=HOSTED\n  fi\n\n  _check_dns_settings\n\n  if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    [ -d /run/unbound ] || mkdir -p /run/unbound\n    [ -d /run/unbound ] && chown -R unbound:unbound /run/unbound\n  fi\n\n  if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n\n  _APT_CONFIG_FILE=\"/etc/apt/apt.conf.d/99ignorestrict\"\n\n  # Desired configuration content\n  _DESIRED_APT_CONFIG='Acquire::AllowInsecureRepositories \"true\";\n    APT::Get::AllowUnauthenticated \"true\";\n    Aptitude::CmdLine::Fix-Broken \"true\";'\n\n  # Remove leading whitespace from each line\n  _CLEANED_DESIRED_APT_CONFIG=$(echo \"${_DESIRED_APT_CONFIG}\" | sed 's/^[[:space:]]\\+//')\n\n  # Normalize the existing file content\n  if [[ -f \"${_APT_CONFIG_FILE}\" ]]; then\n    _CURRENT_APT_CONFIG=$(tr -d '[:space:]' < \"${_APT_CONFIG_FILE}\")\n  else\n    _CURRENT_APT_CONFIG=\"\"\n  fi\n\n  # Normalize the cleaned desired configuration content\n  _NORMALIZED_DESIRED_APT_CONFIG=$(echo \"${_CLEANED_DESIRED_APT_CONFIG}\" | tr -d '[:space:]')\n\n  # Compare normalized contents and update if necessary\n  if [[ \"${_CURRENT_APT_CONFIG}\" != \"${_NORMALIZED_DESIRED_APT_CONFIG}\" ]]; then\n    echo \"${_CLEANED_DESIRED_APT_CONFIG}\" | tee \"${_APT_CONFIG_FILE}\" > /dev/null\n  fi\n\n  if [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] && [ ! -e \"/var/xdrago/manage_solr_config.sh\" ]; then\n#     apt-get remove unscd -y --purge --auto-remove -qq &> /dev/null\n#     apt-get remove dbus -y --purge --auto-remove -qq &> /dev/null\n#     if [ -e \"/usr/share/dbus-1\" ]; then\n#       rm -f /usr/share/dbus-1/*/*freedesktop*\n#     fi\n    userdel -r debian &> /dev/null\n    sed -i \"s/^#startup_message off/startup_message off/g\" /etc/screenrc &> /dev/null\n  fi\n\n  _isScreen=$(screen --version 2>&1)\n  if [[ ! 
\"${_isScreen}\" =~ \"GNU\" ]] || [ -z \"${_isScreen}\" ]; then\n    apt-get install screen -y &> /dev/null\n    apt-get install net-tools -y &> /dev/null\n    apt-get install hostname -y &> /dev/null\n    apt-get install ntpsec-ntpdate -y &> /dev/null\n  fi\n\n  _if_reinstall_curl\n\n  _CURL_TEST=$(curl -L -k -s \\\n    --max-redirs 10 \\\n    --retry 3 \\\n    --retry-delay 10 \\\n    -I \"http://${_USE_MIR}\" 2> /dev/null)\n  if [[ ! \"${_CURL_TEST}\" =~ \"200 OK\" ]]; then\n    if [[ \"${_CURL_TEST}\" =~ \"unknown option was passed in to libcurl\" ]]; then\n      echo \"ERROR: cURL libs are out of sync! Re-installing..\"\n      _if_reinstall_curl\n    fi\n    echo \"ERROR: ${_USE_MIR} is not available, please try later\"\n    exit 1\n  else\n    _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n  fi\n\n  _if_clean_boa_env\n\n  _LSB_TEST=\"$(which lsb_release)\"\n  if [ ! -x \"${_LSB_TEST}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install lsb-release ${_aptYesUnth}\n  fi\n\n  ### Fix or install VM system detection\n  _check_virt\n\n  _BOA_TOOLS_UPDATE=NO\n  if [ -e \"${_pthLog}\" ] && [ ! -e \"${_boaToolsPid}\" ]; then\n    _BOA_TOOLS_UPDATE=YES\n  fi\n  [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] && _BOA_TOOLS_UPDATE=YES\n  if [ \"${_BOA_TOOLS_UPDATE}\" = \"YES\" ]; then\n    _update_boa_tools\n    [ -e \"${_pthLog}\" ] && rm -f ${_pthLog}/updateBOAtools*.pid\n    [ -e \"${_pthLog}\" ] && touch ${_boaToolsPid}\n    echo\n    echo \"BOA Meta Installers setup completed\"\n    echo \"Please check INSTALL.md and UPGRADE.md at https://github.com/omega8cc/boa\"\n    echo \"Bye\"\n    echo\n  fi\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n  mkdir -p /data/all\n  chmod 755 /data/all\n  echo ${_CPU_NR} > /data/all/cpuinfo\n  chmod 644 /data/all/cpuinfo\n}\n\n_sysctl_update() {\n  if [ ! -e \"/root/.no.sysctl.update.cnf\" ] \\\n    && [ ! -e \"/var/backups/.sysctl.conf.mod-disable-ipv6-${_xSrl}.log\" ]; then\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    cd /var/backups\n    rm -f /var/backups/sysctl.conf\n    curl ${_crlGet} \"${_urlHmr}/conf/var/sysctl.conf\" -o sysctl.conf\n    if [ -e \"/var/backups/sysctl.conf\" ]; then\n      cp -af /var/backups/sysctl.conf /etc/sysctl.conf\n    fi\n    if [ -e \"/etc/security/limits.conf\" ]; then\n      _IF_NF=$(grep '2097152' /etc/security/limits.conf 2>&1)\n      if [ ! 
-z \"${_IF_NF}\" ]; then\n        sed -i \"s/.*2097152.*//g\" /etc/security/limits.conf\n        wait\n      fi\n      _IF_NF=$(grep '524288' /etc/security/limits.conf 2>&1)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"*         soft    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"root      hard    nofile      1048576\" >> /etc/security/limits.conf\n        echo \"root      soft    nofile      1048576\" >> /etc/security/limits.conf\n      fi\n      _IF_NF=$(grep '65556' /etc/security/limits.conf 2>&1)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nproc       65556\"   >> /etc/security/limits.conf\n        echo \"*         soft    nproc       65556\"   >> /etc/security/limits.conf\n      fi\n    fi\n    if [ -e \"/boot/grub/grub.cfg\" ] || [ -e \"/boot/grub/menu.lst\" ]; then\n      #echo never > /sys/kernel/mm/transparent_hugepage/enabled\n      if [ -e \"/etc/sysctl.conf\" ]; then\n        sysctl -p /etc/sysctl.conf &> /dev/null\n      fi\n    else\n      if [ -e \"/etc/sysctl.conf\" ]; then\n        sysctl -p /etc/sysctl.conf &> /dev/null\n      fi\n    fi\n    if [ -e \"/etc/default/nginx\" ]; then\n      _IF_ULNX=$(grep '524288' /etc/default/nginx 2>&1)\n      if [ -z \"${_IF_ULNX}\" ]; then\n        sed -i \"s/^ULIMIT=.*//gi\" /etc/default/nginx\n        wait\n        echo ULIMIT=\\\"-n 524288\\\" >> /etc/default/nginx\n        ulimit -n 524288 &> /dev/null\n        service nginx restart &> /dev/null\n      fi\n    fi\n    if [ -e \"/etc/security/limits.d\" ] \\\n      && [ ! 
-e \"/etc/security/limits.d/solr9.conf\" ]; then\n      echo \"sshd soft nofile 524288\" > /etc/security/limits.d/sshd.conf\n      echo \"sshd hard nofile 999999\" >> /etc/security/limits.d/sshd.conf\n      echo \"redis soft nofile 65535\" > /etc/security/limits.d/redis.conf\n      echo \"redis hard nofile 524288\" >> /etc/security/limits.d/redis.conf\n      echo \"nginx soft nofile 524288\" > /etc/security/limits.d/nginx.conf\n      echo \"nginx hard nofile 999999\" >> /etc/security/limits.d/nginx.conf\n      echo \"jetty9 soft nofile 65535\" > /etc/security/limits.d/jetty9.conf\n      echo \"jetty9 hard nofile 524288\" >> /etc/security/limits.d/jetty9.conf\n      echo \"solr7 soft nofile 65535\" > /etc/security/limits.d/solr7.conf\n      echo \"solr7 hard nofile 524288\" >> /etc/security/limits.d/solr7.conf\n      echo \"solr9 soft nofile 65535\" > /etc/security/limits.d/solr9.conf\n      echo \"solr9 hard nofile 524288\" >> /etc/security/limits.d/solr9.conf\n      echo \"@www-data soft nofile 65535\" > /etc/security/limits.d/www.conf\n      echo \"@www-data hard nofile 524288\" >> /etc/security/limits.d/www.conf\n      if [ -e \"/etc/init.d/valkey-server\" ]; then\n        service valkey-server restart &> /dev/null\n      elif [ -e \"/etc/init.d/redis-server\" ]; then\n        service redis-server restart &> /dev/null\n      fi\n      service nginx restart &> /dev/null\n      service ssh restart &> /dev/null\n      _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ]; then\n          service \"php${e}-fpm\" reload &> /dev/null\n        fi\n      done\n    fi\n    touch /var/backups/.sysctl.conf.mod-disable-ipv6-${_xSrl}.log\n  fi\n}\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n### 
  MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n\n# Function to notify about a still-running backup\n_backup_waiting_notify() {\n  # Note: a plain \"cat | tr || hostname -f\" fallback never fires, because the\n  # pipeline exit status comes from tr; test the result explicitly instead.\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n')\"\n  [ -n \"${_hName}\" ] || _hName=\"$(hostname -f 2>/dev/null)\"\n  _templog=\"${_bLogB}\"\n  cat /root/.remote_backups/schedule/backup_schedule.txt > ${_templog}\n  ps axf | grep multiback >> ${_templog}\n  ps axf | grep duplicity >> ${_templog}\n  ls -la /tmp/duplicity-*-tempdir >> ${_templog}\n  tree /root/.cache/duplicity >> ${_templog}\n  ls -laR /root/.cache/duplicity >> ${_templog}\n  grep \"Out of memory: Killed process.*duplicity\" /var/log/iptables.log >> ${_templog}\n  boa info >> ${_templog}\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" != \"OFF\" ]; then\n    s-nail -s \"Multiback Waiting Report for [${_hName}] on $(date)\" ${_MY_EMAIL} < ${_templog}\n  fi\n}\n\n# Load kTLS module only if it is not already loaded.\n_load_ktls_module() {\n  if ! lsmod 2>/dev/null | awk '{print $1}' | grep -qx \"tls\"; then\n    if [ -e \"/lib/modules/$(uname -r)/kernel/net/tls/tls.ko\" ] \\\n      || modinfo tls >/dev/null 2>&1; then\n      modprobe -q tls >/dev/null 2>&1 || true\n      grep -qxF \"tls\" /etc/modules 2>/dev/null || printf '%s\\n' \"tls\" >> /etc/modules\n    fi\n  fi\n}\n\n# Fix CSF config only if Quic HTTP3 support is not enabled yet\n_csf_allow_quic_udp_443() {\n  _CSF_CONF=\"/etc/csf/csf.conf\"\n  _CSF_CHANGED=\"NO\"\n\n  if ! 
command -v csf >/dev/null 2>&1; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: csf not found - skipping UDP/443 (QUIC) firewall check\"\n    fi\n    return 0\n  fi\n\n  if [ ! -s \"${_CSF_CONF}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"WARN: ${_CSF_CONF} not found or empty - skipping\"\n    fi\n    return 0\n  fi\n\n  _csf_add_port_to_list_var() {\n    # $1 = VAR (UDP_IN / UDP6_IN), $2 = PORT (443)\n    _VAR=\"${1}\"\n    _PORT=\"${2}\"\n\n    # Extract current value between quotes\n    _CUR=\"$(grep -E \"^${_VAR}[[:space:]]*=\" \"${_CSF_CONF}\" 2>/dev/null | head -n1 | sed -E 's/^[^\"]*\"([^\"]*)\".*$/\\1/')\"\n\n    # If var not present, do nothing\n    if ! grep -q -E \"^${_VAR}[[:space:]]*=\" \"${_CSF_CONF}\" 2>/dev/null; then\n      return 0\n    fi\n\n    # If already present as a whole list item, do nothing\n    if echo \",${_CUR},\" | grep -q \",${_PORT},\"; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        echo \"INFO: CSF ${_VAR} already includes ${_PORT}\"\n      fi\n      return 0\n    fi\n\n    # Append port (handle empty list)\n    if [ -n \"${_CUR}\" ]; then\n      _NEW=\"${_CUR},${_PORT}\"\n    else\n      _NEW=\"${_PORT}\"\n    fi\n\n    # Replace only the FIRST matching line for this var, keep surrounding quotes\n    # Make a backup once (csf.conf.bak) if not already present\n    if [ ! 
-e \"${_CSF_CONF}.bak\" ]; then\n      cp -af \"${_CSF_CONF}\" \"${_CSF_CONF}.bak\"\n    fi\n\n    sed -i -E \"0,/^${_VAR}[[:space:]]*=/{s|^(${_VAR}[[:space:]]*=[[:space:]]*\\\").*(\\\".*)|\\1${_NEW}\\2|}\" \"${_CSF_CONF}\"\n\n    _CSF_CHANGED=\"YES\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: CSF updated: ${_VAR}=\\\"${_NEW}\\\"\"\n    fi\n  }\n\n  # QUIC needs inbound UDP/443\n  _csf_add_port_to_list_var \"UDP_IN\" \"443\"\n\n  # If your CSF has separate IPv6 var, update it too\n  if grep -q -E '^UDP6_IN[[:space:]]*=' \"${_CSF_CONF}\" 2>/dev/null; then\n    _csf_add_port_to_list_var \"UDP6_IN\" \"443\"\n  fi\n\n  if [ \"${_CSF_CHANGED}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: Reloading CSF to apply UDP/443 change\"\n    fi\n    csf -r >/dev/null 2>&1 || csf -ra >/dev/null 2>&1\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: CSF config unchanged\"\n    fi\n  fi\n\n  return 0\n}\n\n###--------------------###\nif [ \"$(id -u)\" -eq 0 ]; then\n\n  # Load kTLS module only if it is not already loaded\n  _load_ktls_module\n\n  # Fix CSF config only if Quic HTTP3 support is not enabled yet\n  _csf_allow_quic_udp_443\n\n  if ! command -v bc &> /dev/null; then\n    apt-get update -qq &> /dev/null\n    ${_INITINS} bc &> /dev/null\n  fi\n\n  if ! command -v curl &> /dev/null; then\n    apt-get update -qq &> /dev/null\n    ${_INITINS} curl &> /dev/null\n  fi\n\n  _find_correct_ip\n  _find_server_city\n\n  [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] && _locales_check_fix_early\n  _os_detection_minimal\n  _find_fast_mirror_early\n  _verify_boa_keys\n\n  ### Prefer Devuan apt sources\n  if [ -d \"/var/aegir\" ] && [ ! 
-e \"/etc/apt/preferences.d/99-prefer-devuan\" ]; then\n    if grep -qi 'ID=devuan' /etc/os-release 2>/dev/null; then\n      _prefer_devuan_repositories\n    fi\n  fi\n\n  ### Fix VM system detection\n  _check_virt\n\n  if [ -e \"${_barCnf}\" ]; then\n    source ${_barCnf}\n    _normalize_incident_report\n  fi\n\n  ### Notify if multiback backups seem to run for too long\n  _DCY=$(pgrep -fc duplicity)\n  _MLT=$(pgrep -fc multiback)\n  if (( _DCY > 0 )) && (( _MLT > 0 )); then\n    _bLogA=\"/var/backups/multiback_waiting_queue.log\"\n    _bLogB=\"/var/backups/tmp_multiback_waiting_queue.log\"\n    if [ ! -e \"${_bLogA}\" ] && [ ! -e \"${_bLogB}\" ]; then\n      _backup_waiting_notify\n    fi\n  fi\n\n  ### Make local OpenSSL new/legacy ssl/certs symlinked to system ssl/certs\n  if [ -d \"/var/aegir\" ]; then\n    _fix_sync_system_ssl_certs\n  fi\n\n  ### Fix Solr 4/7/9 conflicting ports\n  if [ -d \"/var/aegir\" ]; then\n    _fix_start_stop_ports_solr\n  fi\n\n  ### CVE-2021-44228 Log4j 2 Vulnerability\n  ### CVE-2021-45046 Log4j 2 Vulnerability\n  ### CVE-2021-45105 Log4j 2 Vulnerability\n  _fix_log4j_solr7\n\n  ### Linux kernel TCP SACK CVEs mitigation\n  ### CVE-2019-11477 SACK Panic\n  ### CVE-2019-11478 SACK Slowness\n  ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n    _SACK_TEST=$(ip6tables --list | grep tcpmss)\n    if [[ ! 
\"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n      sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n      iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    fi\n  fi\n  ### More aggressive mitigation affecting network performance\n  # if [ -e \"/proc/sys/net/ipv4/tcp_sack\" ]; then\n  #   _SACK_TEST=$(cat /proc/sys/net/ipv4/tcp_sack 2>&1)\n  #   _SACK_TEST=$(echo -n ${_SACK_TEST} | tr -d \"\\n\" 2>&1)\n  #   if [[ \"${_SACK_TEST}\" =~ \"1\" ]]; then\n  #     echo \"0\" > /proc/sys/net/ipv4/tcp_sack\n  #   fi\n  # fi\n\n  ### Block known attackers IPs\n  _CSF_TEST=\"$(which csf)\"\n  if [ -x \"${_CSF_TEST}\" ]; then\n    _IP_BLOCK=\"47.82.0.0/16 47.79.0.0/16 2.57.121.0/24 2.57.122.0/24 45.148.10.0/24 80.94.92.0/24 92.118.39.0/24 185.177.72.0/24\"\n    for _IP in ${_IP_BLOCK}; do\n      _FW_TEST=$(csf -g ${_IP} 2>&1)\n      if [[ \"${_FW_TEST}\" =~ \"DENY Match:${_IP} Setting\" ]] \\\n        && [[ \"${_FW_TEST}\" =~ \"csf.deny: ${_IP}\" ]]; then\n        echo \"${_IP} already denied for Brute force SSH/Web Server attacks\"\n      else\n        csf -d ${_IP} do not delete Brute force SSH/Web Server attacks\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    done\n  fi\n\n  ### Linux kernel CVE-2017-2636 hotfix\n  if [ -e \"/etc/modprobe.d\" ] \\\n    && [ ! -e \"/etc/modprobe.d/blacklist-n_hdlc.conf\" ]; then\n    echo \"install n_hdlc /bin/true\" > /etc/modprobe.d/blacklist-n_hdlc.conf\n    rmmod n_hdlc &> /dev/null\n  fi\n\n  ### Linux kernel CVE-2017-6074 hotfix\n  if [ -e \"/etc/modprobe.d\" ] \\\n    && [ ! 
-e \"/etc/modprobe.d/blacklist-dccp-all.conf\" ]; then\n    echo \"install dccp /bin/true\" > /etc/modprobe.d/blacklist-dccp-all.conf\n    echo \"install dccp_diag /bin/true\" >> /etc/modprobe.d/blacklist-dccp-all.conf\n    echo \"install dccp_ipv4 /bin/true\" >> /etc/modprobe.d/blacklist-dccp-all.conf\n    echo \"install dccp_ipv6 /bin/true\" >> /etc/modprobe.d/blacklist-dccp-all.conf\n    echo \"install dccp_probe /bin/true\" >> /etc/modprobe.d/blacklist-dccp-all.conf\n    rmmod dccp &> /dev/null\n    rmmod dccp_diag &> /dev/null\n    rmmod dccp_ipv4 &> /dev/null\n    rmmod dccp_ipv6 &> /dev/null\n    rmmod dccp_probe &> /dev/null\n  fi\n\n  if [ ! -e \"/data/all/cpuinfo\" ]; then\n    _count_cpu\n  fi\n\n  _if_boa_key_tools_update_allowed\n\n  if [ \"${_BOA_KEY_TOOLS_UPDATE_ALLOWED}\" = \"YES\" ] \\\n    && [ -e \"/opt/etc/fpm/fpm-pool-common.conf\" ] \\\n    && [ -e \"/var/xdrago\" ]; then\n    if [ ! -z \"${_SKYNET_MODE}\" ] && [ \"${_SKYNET_MODE}\" = \"OFF\" ]; then\n      if [ -n \"${SSH_TTY+x}\" ]; then\n        echo\n        echo \"STATUS: BOA Skynet Agent is Inactive!\"\n        echo\n        echo \"HINT: Please remove the _SKYNET_MODE=OFF line from\"\n        echo \"HINT: ${_barCnf} to enable me again.\"\n        echo\n        echo \"NOTE: Critically important BOA tools will be still updated\"\n        echo\n        _if_update_boa_key_tools_only verbose\n        exit 0\n      else\n        _if_update_boa_key_tools_only silent\n        exit 0\n      fi\n    else\n      if [ -n \"${SSH_TTY+x}\" ]; then\n        echo\n        echo \"STATUS: BOA Skynet Agent is Active, OK!\"\n        echo\n        echo \"HINT: You can add the _SKYNET_MODE=OFF line in\"\n        echo \"HINT: ${_barCnf} to disable me, if needed.\"\n        echo\n      fi\n    fi\n  else\n    if [ -z \"$STY\" ]; then\n      _SCREEN_INIT=YES\n    fi\n  fi\n  if [ -d \"/.newrelic\" ]; then\n    rm -rf /.newrelic\n  fi\n  chmod a+w /dev/null\n  if [ ! 
-e \"/dev/fd\" ]; then\n    if [ -e \"/proc/self/fd\" ]; then\n      rm -rf /dev/fd\n      ln -sfn /proc/self/fd /dev/fd\n    fi\n  fi\n  if [ \"${_BOA_KEY_TOOLS_UPDATE_ALLOWED}\" = \"YES\" ]; then\n    _boa_setup\n  fi\n  if [ \"${_BOA_KEY_TOOLS_UPDATE_ALLOWED}\" = \"YES\" ] \\\n    && [ -e \"/var/log/barracuda_log.txt\" ]; then\n    _fix_sftp_server\n    _fix_ping_perms\n    _fix_fpm_process_max\n    _if_fix_lshell\n    _fix_node_in_lshell_access\n    # _fix_php_in_lshell_access\n    _fix_authorized_keys\n    _fix_aio\n    _fix_console_print\n    _fix_java_symlinks\n    _fix_composer_version\n    _fix_wkhtml\n    _fix_wkhtml_perms\n    _fix_eldir\n    _if_drupal_patches_update\n    _fix_drupal_core_ten\n    _fix_drupal_core_eleven\n    _fix_pure_ftpd\n    _fix_hosting_le\n    _fix_newrelic\n    _fix_leftovers\n    _update_agents\n    _sysctl_update\n    # _saCoreN=\"SA-CORE-2018-002\"\n    # _fix_core_dgd\n    # sleep 3\n    # _saCoreN=\"SA-CORE-2018-004\"\n    # _fix_core_dgd\n    # sleep 3\n    # _saCoreN=\"SA-CORE-2018-006\"\n    # _fix_core_dgd\n    # sleep 3\n    # _saCoreN=\"SA-CORE-2019-004\"\n    # _fix_core_dgd\n    # sleep 3\n    # _saCoreN=\"3143016-83\"\n    # _fix_core_dgd\n  fi\n  if [ ! -e \"/etc/ssl/private/4096.dhp\" ] && [ -d \"/var/xdrago\" ]; then\n    echo \"Generating 4096.dhp -- it may take a very long time...\"\n    openssl dhparam -out /etc/ssl/private/4096.dhp 4096 > /dev/null 2>&1 &\n  fi\n  if [ -e \"/etc/ssl/private/4096.dhp\" ]; then\n    chown -R root:ssl-cert /etc/ssl/private\n    chmod 640 /etc/ssl/private/*\n    chmod 710 /etc/ssl/private\n  fi\n  if [ ! 
-e \"/root/.upstart.cnf\" ]; then\n    service cron reload &> /dev/null\n  fi\n  if [ \"${_SCREEN_INIT}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" != \"YES\" ]; then\n      clear\n    fi\n    echo\n    echo \"The system is ready for BOA installation!\"\n    echo\n    echo \"We will start a screen session for you automatically\"\n    echo \"to avoid problems with dropped SSH connections\"\n    echo \"during BOA stack installation, which may take up to\"\n    echo \"45-60 minutes, depending on your server speed.\"\n    echo\n    echo \"If your connection drops, simply log in again\"\n    echo \"and re-attach your session with the 'screen -R' command.\"\n    echo\n    echo \"Enjoy!\"\n    echo\n    if [ -x \"/usr/sbin/aa-teardown\" ]; then\n      aa-teardown &> /dev/null\n    fi\n  else\n    exit 0\n  fi\nelse\n  echo \"ERROR: This script must be run as root\"\n  exit 1\nfi\n"
  },
  {
    "path": "CHANGELOG.txt",
    "content": "\n###\n### Stable BOA-5.9.1-pro/lts - HTTP/3 Edition\n### Date: Sun Feb 8 10:02:06 PM NZDT 2026 in Auckland\n### Welcome Fast Lane HTTPS: HTTP/3 and KTLS\n### 333 commits since BOA-5.8.5-pro\n###\n\n@=> 4 NEW, 12 UPDATED, 32 TOTAL Drupal distros/platforms available\n\n  While most of you typically build your own codebases/platforms with Composer\n  these days, we still deliver a list of 32 platforms ready to use in your Ægir.\n\n  Since these platforms are updated only with BOA releases, they are not really\n  intended for production use per se, because you typically need a faster\n  lifecycle to keep your sites secure.\n\n  However, they provide a wide range of testing playgrounds, because you can\n  install only those you wish to test or use, and reinstall if needed, with\n  the help of our BOA-only feature that allows you to upgrade your Ægir\n  on demand with two simple control files, as described in the built-in docs\n  you can always find in ~/static/control/README.txt.\n\n  The complete list of 32 is further below, in the section \"Drupal platforms\n  available for installation\".\n\n@=> Going Local with Infrastructure\n\n  We’ve expanded our network considerably to meet the growing expectations of\n  the Data Sovereignty movement. This isn’t just about adding more cities\n  to our hosting map — it’s also about going local with infrastructure\n  wherever we can.\n\n  We no longer rely solely on big-name vendors and hyperscalers. Instead, we’re\n  gradually migrating to local providers and data centers in every country where\n  we offer hosted BOA for Drupal.\n\n  For example, in Canada you can now choose not only Toronto, but also Montreal,\n  Calgary, and Vancouver. 
In Australia, it’s no longer just Sydney — we also\n  offer Adelaide, Brisbane, and Perth.\n\n  We’ve also added an excellent facility in New Zealand.\n\n  Of course, we continue to support our original Singapore location and still\n  offer EU, UK, and US options.\n\n@=> Usage disk/sql limits x2 + Aero and Archive plans added to hosted BOA\n\n  It's worth mentioning that our hosted BOA plans have received a huge upgrade:\n  several new locations have been added around the world, our vendors are now\n  local (instead of the previous US-only hyperscalers), and an entirely new\n  Archive Tier has been added for those looking to host collections of\n  low-traffic sites at low cost.\n\n  Take a look if you are interested: https://omega8.cc/hosted\n\n@=> New BOA-5.9.1 PRO/LTS Release\n\n  Yes, we said that BOA-LTS would enter complete code freeze for 2026, but\n  we think that the major new features and many security updates introduced in\n  the last two months must be shared with the entire community before we enter\n  a less rapid feature development cycle for the next few months.\n\n  The future of 100% Open Source Drupal hosting is brighter than ever!\n\n  With BOA-5.9.1 PRO/LTS, we proudly deliver full HTTP/3 and KTLS support —\n  a fundamental change in the way modern browsers communicate with modern\n  HTTPS web servers — along with the latest OpenSSL 3.5 LTS (which made it\n  possible), a clever and very professional tool to diagnose your server\n  hardware performance in the context of BOA-specific requirements and\n  capabilities, and many critical security and bug fixes related to system\n  components.\n\n  This groundbreaking feature not only pushes the boundaries of what BOA\n  can achieve but also reaffirms our commitment to staying ahead of the curve\n  for modern Drupal deployments.\n\n  We are thrilled to introduce BOA-5.9.1 PRO/LTS, our 8th release under the\n  new branch structure and dual licensing model. 
It merges 2 months of\n  intense development from the DEV branch, delivering 333 commits packed\n  with powerful features, critical fixes, and enhancements.\n\n  Thank you to everyone who supports our work by purchasing a BOA PRO license:\n  https://omega8.cc/boapro.\n\n@=> Key Improvements Explained\n\n * HTTP/3 and KTLS support. If you run Drupal sites that should feel fast and\n   responsive (and stay that way during spikes), this is genuinely good news.\n   Why is this a big deal? What should visitors notice?\n\n   Read the full story: https://github.com/omega8cc/boa/tree/5.x-dev/HTTP3.md\n\n * Percona 8.4 comes to Excalibur. We no longer need vanilla MySQL 8.4 now that\n   Percona has released its own build for Debian Trixie, which can be used on\n   Devuan Excalibur. There is no MySQL-to-Percona upgrade option yet, though.\n   Please note that we still recommend Devuan Daedalus as the most versatile\n   system, which can also support Percona 8.0 and 5.7.\n\n * Curious if your VM is good enough to fully benefit from BOA optimisations\n   and deliver a first-class Drupal hosting environment? There’s a deep\n   hardware and network analysis tool available: simply type `perftest` as root.\n\n * From now on, all BOA installers will download their components as packaged\n   batches instead of dozens of separate little modules. They will also no longer\n   rely on fetching complete repositories from GitHub, instead downloading only\n   the latest packaged code from our mirrors. 
You can revert to the old method\n   by changing _DL_MODE=BATCH to _DL_MODE=GIT in /root/.barracuda.cnf\n\n@=> New Features\n\n    * Add chromium as alternative for wkhtmltopdf\n    * Add Java 21 for Solr 9 and Jenkins\n    * Add kTLS support in Nginx config\n    * Add Red Hat KVM guest to supported virtualization systems\n    * Allow access to site-specific well-known/mta-sts.txt file\n    * Enable quic/http3 support in Nginx\n    * Percona 8.4 comes to Excalibur to replace vanilla MySQL 8.4\n\n@=> New Improvements\n\n    * Add _disable_systemd_networkd_for_next_boot\n    * Add _init_debian_networkd_handoff in autoinit\n    * Add _install_net_rollback in vmnetfix\n    * Add aptfast as multi-lane aptitude wrapper for faster downloads\n    * Add ciphers required by kTLS in Nginx\n    * Add triple check for base-files\n    * Improve and modernize Nginx build configuration options\n    * Improve autoinit support for some vendors with _init_sysv_net_repair\n    * Improve autoinit with _init_sysv_insserv_repair\n    * Improve DHCP/NAT support in vmnetfix\n    * Improve ffmirror\n    * Improve reliability of autoinit on 10+ vendors tested\n    * Improve usage info\n    * Improve vmnetfix with _iface_has_dhcp_for_if\n    * Modernize 301 redirects\n    * Nginx: extend log_format to include protocol used\n    * Prefer ifupdown over ifupdown2\n    * Simplify and sync _if_off_apparmor/_turn_off_apparmor\n    * Special-case Java 11/17/21 force install if version is too old\n    * Sync Ægir naming convention\n    * Sync config for GeoLite2\n    * Uninstall cloud-utils if not required -- makes reboot faster\n    * Update networking with vmnetfix early\n    * Update SSL proxy vhosts if KTLS is missing\n    * Use _DL_MODE=BATCH by default\n    * Use Java 21 also on older systems if installed\n\n@=> Changes\n\n    * Add strict locking to /etc/resolv.conf\n    * Enable access_log in proxy so DoS-guard can still work\n    * Prioritize modern ed25519 SSH keys\n    * Run 
/var/xdrago/clear.sh every 3 minutes\n    * Run /var/xdrago/ip_access.sh every 2 minutes\n    * Run /var/xdrago/manage_ltd_users.sh every minute\n    * Run /var/xdrago/manage_solr_config.sh every minute\n    * Switch default PHP to 8.4\n    * Turn off AppArmor in autoinit phase\n    * Use nginx restart instead of quietupgrade\n\n@=> Upgrades\n\n    * cURL 8.17.0\n    * cURL 8.18.0\n    * GoAccess 1.9.4\n    * ionCube 15.0.0 (up to PHP 8.4)\n    * Nginx 1.29.4\n    * Nginx 1.29.5\n    * OpenSSL 3.4.4\n    * OpenSSL 3.5.6 LTS\n    * PHP 8.1.34\n    * PHP 8.2.30\n    * PHP 8.3.29\n    * PHP 8.3.30\n    * PHP 8.4.16\n    * PHP 8.4.17\n    * PHP 8.5.1\n    * PHP 8.5.2\n    * screenFetch 3.9.9\n    * Sync dehydrated updates\n    * Valkey 9.0.1\n    * Valkey 9.0.2\n\n@=> Important Fixes\n\n    * Block language-prefix chain URL mutation crawlers spam\n    * Block node-chain URL mutation crawlers spam\n    * Fix for PHP 5.6 in Block node and language-prefix chain URL mutation spam\n    * Percona 8.4 from Trixie requires percona-telemetry-agent\n    * Remove unconditional _apt_clean_update running every 3 minutes\n    * Removing all .drush.inc files under modules/contrib/webform\n    * Restore less noisy _manage_sec_access_paths\n    * Restore nginx auto-reload after PHP version update\n    * Sync all PHP logs backup/cleanup after upgrade to force PHP-FPM reload\n    * Use _check_virt/_not_supported_virt early\n\n@=> Drupal platforms available for installation -- docs/PLATFORMS.md\n\n\t## Drupal 11\n\n\tCK3 - [Commerce 3.2.0] with core (11.3.3) (UPDATED)\n\tCMS - [Drupal CMS 2.0.0] with core (11.3.3) (NEW)\n\tDE1 - [Drupal 11.1.9] with Drush included -- dev/stage/prod\n\tDE2 - [Drupal 11.2.10] with Drush included -- dev/stage/prod (UPDATED)\n\tDE3 - [Drupal 11.3.3] with Drush included -- dev/stage/prod (NEW)\n\tSCR - [Sector 11.0.x-dev] with core (11.3.3) (UPDATED)\n\tTHR - [Thunder 8.3.1] with core (11.3.3) (UPDATED)\n\tVBX - [Varbase 10.1.0] with core (11.3.1) 
(NEW)\n\n\t## Drupal 10\n\n\tCK2 - [Commerce v.2] with core (10.1.8)\n\tDX0 - [Drupal 10.0.11] with Drush included -- dev/stage/prod\n\tDX1 - [Drupal 10.1.8] with Drush included -- dev/stage/prod\n\tDX2 - [Drupal 10.2.12] with Drush included -- dev/stage/prod\n\tDX3 - [Drupal 10.3.14] with Drush included -- dev/stage/prod\n\tDX4 - [Drupal 10.4.9] with Drush included -- dev/stage/prod\n\tDX5 - [Drupal 10.5.8] with Drush included -- dev/stage/prod (UPDATED)\n\tDX6 - [Drupal 10.6.3] with Drush included -- dev/stage/prod (NEW)\n\tDXP - [DXPR Marketing 10.3.0] with core (10.3.6)\n\tEZC - [EzContent 2.2.15] with core (10.3.6)\n\tFOS - [farmOS 3.5.1] with core (10.6.2) (UPDATED)\n\tLGV - [LocalGov 3.4.0] with core (10.6.3) (UPDATED)\n\tOCS - [OpenCulturas 2.5.4] with core (10.5.8) (UPDATED)\n\tOFD - [OpenFed 12.2.4] with core (10.2.10)\n\tSOC - [Social 12.4.5] with core (10.2.10)\n\tVB9 - [Varbase 9.1.13] with core (10.6.1) (UPDATED)\n\n\t## Drupal 9\n\n\tDL9 - [Drupal 9.5.11] -- dev/stage/prod\n\tOLS - [OpenLucius 2.0.0] with core (9.5.11)\n\tOPG - [Opigno LMS 3.1.0] with core (9.5.11)\n\n\t## Drupal 7\n\n\tCK1 - [Commerce v.1] with core (7.105.1) (UPDATED)\n\tDL7 - [Drupal 7.105.1] -- dev/stage/prod (UPDATED)\n\tUC7 - [Ubercart 3.13] with core (7.105.1) (UPDATED)\n\n\t## Drupal 6\n\n\tDL6 - [Pressflow 6.60.1] -- dev/stage/prod\n\tUC6 - [Ubercart 2.15] with core (6.60.1)\n\n\n###\n### Stable BOA-5.8.5-pro/lts - 30 Years of Heritage Edition\n### Date: Mon Dec 1 09:58:58 AM AEDT 2025 in Sydney\n### Welcome Devuan Excalibur and PHP 8.5\n### 1092 commits since BOA-5.7.12-pro\n###\n\n@=> 30 Years of Heritage -- Why We’re Different\n\n  We are unique within the hosting industry for many important reasons.\n\n  Our 15 years of Ægir-based hosting, plus earlier experience with Adgrafix\n  (the first company to offer a control panel for website management in 1995),\n  have helped shape what makes us different today.\n\n  We take Open Source seriously, it's not a buzzword for us. 
It's about freedom\n  from corporate control. Here's a short look back at our 15-year Ægir journey\n  and 19 years with Drupal.\n\n  Read the full story: https://bit.ly/different30y\n\n@=> The Future of Ægir 3 is Bryght!\n\n  Omega8.cc is now the lead developer team for Ægir 3 running on BOA\n  (Barracuda-Octopus-Ægir stack). We want to thank all past contributors\n  who brought Ægir to life – your work makes today’s progress possible.\n\n  Because of you, there is still a Bryght Future for Ægir. What to Expect?\n\n  Read the full story: https://bit.ly/aegirbryghtfuture\n\n@=> New BOA-5.8.5 PRO/LTS Release\n\n  The future of Drupal hosting is here! With BOA-5.8.5 PRO/LTS, we proudly\n  deliver both full Drupal 11 support integrated with Ægir and the latest\n  PHP 8.5, now available on the latest Devuan Excalibur / Debian Trixie system.\n\n  This groundbreaking feature not only pushes the boundaries of what BOA\n  can achieve but also reaffirms our commitment to staying ahead of the curve\n  for modern Drupal deployments.\n\n  Powered by Percona 8.4 and fine-tuned to leverage the latest innovations\n  across the stack, this update sets a new standard for hosting next-generation\n  Drupal applications while continuing to fully support legacy Drupal versions,\n  ensuring smooth operations for every site in your ecosystem, old or new.\n\n  We are thrilled to introduce BOA-5.8.5 PRO/LTS, our 7th release under the\n  new branch structure and dual licensing model. 
It merges four months of\n  intense development from the DEV branch, delivering 1092 commits packed\n  with powerful features, critical fixes, and enhancements.\n\n  Thank you to everyone who supports our work by purchasing a BOA PRO license:\n  https://omega8.cc/boapro.\n\n  As always, this announcement highlights only the most impactful changes.\n  For a full breakdown, explore the complete commit history.\n\n@=> Key Improvements Explained\n\n * Any expected downtime during barracuda system upgrades has been reduced\n   from 2-3 minutes to 10-14 seconds on average thanks to our improvements\n   across the board in the BOA system logic.\n\n * BOA now consistently pauses Ægir tasks queue if any system-backend tasks\n   are running -- this includes any barracuda/octopus upgrades, the heavy\n   daily.sh script and nightly DB backups, so no Ægir tasks should ever\n   collide with those important system tasks.\n\n * The auto-healing system has been rewritten from scratch and greatly\n   improved for precision, stability and protection from race conditions,\n   with added smart cooldown pause to avoid unnecessary interventions.\n\n * The _SKYNET_MODE=OFF now strictly blocks any updates otherwise applied\n   via the autoupboa tool running every 6 minutes, but also blocks any attempt\n   to run barracuda or octopus upgrades, even if invoked manually.\n\n * Many vendor-specific issues affecting BOA installation on VPS platforms\n   have been addressed for both older and newer Devuan/Debian releases,\n   especially for the autoinit procedure recommended as the first step.\n\n * We no longer hardcode Devuan's own APT sources lottery alias deb.devuan.org\n   and instead test and pick reputable mirrors to use the fastest in the\n   given server's location.\n\n * We limit the messaging noise generated by various parts of the new\n   auto-healing system by switching the _INCIDENT_REPORT to NO by default,\n   so only really critical incidents like service restarts caused by OOM\n 
  (out of memory) incidents are still reported.\n\n * The legacy _XTRAS_LIST logic has been improved, with all changes\n   documented. _XTRAS_LIST is now EMPTY by default and extended only\n   minimally depending on mode, so, unlike before, almost no BOA xtras\n   are installed by default.\n\n * The _CUSTOM_CONFIG_CSF should protect only /etc/csf/csf.conf.\n   Previously it blocked CSF/LFD upgrades completely, even though it should\n   protect only the main config file. If the protected config file becomes\n   incompatible as a result, it’s the system admin’s responsibility to\n   update it manually.\n\n * New control file /root/.dont.touch.permissions.cnf allows blocking any\n   otherwise defined/run actions globally, taking precedence over any other\n   settings in .barracuda.cnf and site/platform-level INI files.\n\n\n@=> New Features\n\n    * Add _UPGRADE_MODE=FAST/FULL mode to speed up barracuda upgrades\n    * Add /data/conf/sites-cron-off.ctrl to turn off all sites wget-cron\n    * Add autosymlink tool to automate symlinking sites files directories\n    * Add cooldown to max/critical load actions in auto-healing\n    * Add dhcpfix tool used to fix vendor-specific forced-dns issues if needed\n    * Add Droplet Agent to auto-healing\n    * Add environment_indicator to Ægir hostmaster control panel\n    * Add ffdevuan tool to update the list of reliable and fastest mirrors\n    * Add instant SQL fallback for Valkey/Redis to global-valkey.inc\n    * Add loadguard as future auto-healing orchestrator for testing\n    * Add new dedicated tools: aptcleanup and vmnetfix in autoinit\n    * Add support for Python 3.13\n    * Add synproxy tool: SYNPROXY (TCP/443/80) + QUIC limiter (UDP/443) for CSF\n    * Add system level /root/.dont.touch.permissions.cnf\n    * Allow _LOCAL_DEVUAN_MIRROR but use _find_fast_devuan_mirror otherwise\n    * Auto-enable/disable slow_query_log with /root/.mysqladmin.monitor.cnf\n    * PHP 8.5 support is ready\n    * Protect from any autoupboa updates when 
_SKYNET_MODE=OFF\n    * Protect from any barracuda updates when _SKYNET_MODE=OFF\n    * Protect from any octopus updates when _SKYNET_MODE=OFF\n    * Use MySQL 8.4 from Trixie on Excalibur until Percona releases its own version\n\n@=> New Improvements\n\n    * Add "How we build newer codebases for testing" in docs/BUILDTESTS.md\n    * Add wait before running octopus upgrade on init\n    * Always check if /etc/hosts update is needed\n    * Always use _spawn_detached procedure for Perl scripts\n    * Always use nohup for detached Bash scripts\n    * Build OpenSSL 3 w/o no-comp, no-hw\n    * Check LE status and run another octopus upgrade if needed on boa install\n    * Do not install Git from sources on BOA install\n    * Do not send email on Spider Protection on/off\n    * Enable APCu by default\n    * Improve Ægir accelerated task queue\n    * Make Solr/Java versions mapping strict\n    * No automatic task queue on CI instance\n    * Notify if multiback backups seem to run for too long\n    * Pause Ægir queue when /run/boa_run.pid is present\n    * Pause Ægir queue when daily.sh runs\n    * Pause Ægir tasks queue during system DB backups\n    * Prevent a flood of alerts on services up/down status if uptime < 15 min\n    * Reload nginx if access log is missing or empty\n    * Run _CHECK_MIRROR twice to prime DNS cache before the final speed test\n    * Run _satellite_download_for_local_build only once on install\n    * Run auto-healing only on fully installed system\n    * Run improved scan_nginx.sh every 5 seconds\n    * Save CPU cycles by disabling the never-used master Ægir cron/task queue\n    * Sync /root/.allow.clamav.cnf and /root/.deny.clamav.cnf logic\n    * Update APT sources on each upgrade\n    * Update Devuan mirrors daily\n    * Upgrade some held packages if needed and then rebuild from sources\n    * Use _find_fast_devuan_mirror by default but static for archived beowulf\n    * Use 180s opcache.revalidate_freq for heavy apps like CiviCRM\n    * Use 60s 
opcache.revalidate_freq by default\n    * Use atomic lock/unlock to prevent TOCTOU race conditions globally\n    * Use separate verbose logs for barracuda/octopus upgrade details and errors\n    * Use special /var/log/boa/reset_no_new_password.pid to allow auto-healing\n\n@=> Changes\n\n    * Add New Relic support for PHP 8.4\n    * Add Pinterest to $is_crawler list in Nginx configuration\n    * Always log all barracuda/octopus upgrades, even self-upgrades\n    * Debian Buster has been archived already\n    * Display BOA Skynet Agent messages only for logged-in root\n    * Do not automate /root/.force.reinstall.cnf\n    * Don't invoke old proc_num_ctrl (replaced by new-generation auto-healing)\n    * Don't run codebasecheck daily unless /root/.allow-codebasecheck.cnf exists\n    * Drop legacy IMAP in PHP on Excalibur\n    * Force _VALKEY_MAJOR_RELEASE=9 unless _CUSTOM_CONFIG_VALKEY=YES\n    * Force slow Ægir tasks cron mode on VM with 4GB RAM or less\n    * Move APCu config to parent INI\n    * Move Zend OPcache config to parent INI\n    * Remove 60s wait on boa reboot\n    * Remove Percona 8.3 support\n    * Restore the _SQLMONITOR feature\n    * Turn _INCIDENT_REPORT globally off by default\n    * Use _BINLOG_KEEP_HOURS 24\n    * Use Ægir install check for /var/aegir/.drush/hm.alias.drushrc.php\n    * Use cache.backend.chainedfast for selected bins\n    * Use slower _iteration in second.sh in auto-healing system\n\n@=> Upgrades\n\n    * Commerce 3.2.0 with core 11.2.8\n    * CSF/LFD 15.00\n    * cURL 8.16.0\n    * Drupal 10.4.9\n    * Drupal 10.5.6\n    * Drupal 11.1.9\n    * Drupal 11.2.8\n    * Drupal 7.103.2 +Extra core\n    * Drupal 7.105.1 +Extra core LTS\n    * Drupal CMS 1.2.8 with core 11.2.8\n    * Duplicity 3.0.6\n    * farmOS 3.4.6 with core 10.4.9\n    * Git 2.51.0\n    * LocalGov 3.3.1 with core 10.5.6\n    * New Relic 12.2.0.27\n    * Nginx 1.29.1\n    * Nginx 1.29.3\n    * Node v22.20.0\n    * Node v22.21.0\n    * OpenCulturas 2.5.4 with core 
10.5.6\n    * Openjdk 11.0.29\n    * OpenSSH 10.2p1\n    * OpenSSL 3.4.3\n    * PHP 8.3.24\n    * PHP 8.3.25\n    * PHP 8.3.26\n    * PHP 8.3.27\n    * PHP 8.3.28\n    * PHP 8.4.11\n    * PHP 8.4.12\n    * PHP 8.4.13\n    * PHP 8.4.14\n    * PHP 8.4.15\n    * PHP APCu 5.1.27\n    * PHP igbinary 3.2.16\n    * PHP igbinary 3.2.17 for 8.5\n    * PHP imagick 3.8.0\n    * PHP Yaml Pecl 2.2.5\n    * PHP_MCRYPT 1.0.9 for 7.3 and newer\n    * PHPREDIS 6.3.0\n    * Pure-FTPd 1.0.52\n    * Python 3.13.9 for Duplicity\n    * REDIS integration module 8.x-1.11.2 for Drupal 11.x\n    * Sector 11.0.x-dev with core 11.2.8\n    * Thunder 8.2.6 with core 11.2.8\n    * Unbound 1.24.1\n    * Unbound 1.24.2\n    * Valkey 7.2.11\n    * Valkey 9.0.0\n    * Varbase 10.0.8 with core 10.5.6\n    * Varbase 9.1.12 with core 10.5.2\n    * vnStat 2.13\n\n@=> Important Fixes\n\n    * Add _fix_stop_solr to fix Solr 7/9 conflicting ports\n    * Add _if_drupal_patches_update — fixes #1892\n    * Allow cron web-based requests even with HTTP Basic enabled in Ægir\n    * Always call updatedb with -y in Ægir Provision backend\n    * Always fix /etc/hosts before checking /etc/hostname\n    * Auto-update hostname if it doesn’t match /etc/hostname\n    * Cron-only PHP entrypoint for Drupal 8+ w/ auth_basic turned off on the fly\n    * Detect and fix broken downloads in /data/conf/patches/ — fixes #1906\n    * Do not add date stamps to scripts — fixes #1891\n    * Do not run duplicate unbound-control reload\n    * Don't execute daily.sh until all installation procedures are finalized\n    * Duplicity backups: replace --file-to-restore w/ --path-to-restore (#1901)\n    * Fix access control for packages view in Ægir control panel\n    * Fix broken _XTRAS_LIST logic and document changes\n    * Hotfix for legacy vnStat installs — fixes #1908\n    * Improve _if_mydumper_is_locked procedure action/reporting in auto-healing\n    * Improve _solr_health_check_fix to detect stale pid files\n    * Install igbinary 
before redis extension to avoid redis ext build failure\n    * Install key DNS tools early with autoinit\n    * Java 17 should be set as default on Daedalus\n    * Limited Shell wrapper for Drush improvements — fixes #1907\n    * Make local OpenSSL new/legacy ssl/certs symlinked to system ssl/certs\n    * Remove duplicate unbound auto-healing\n    * Remove Permissions-Policy headers from Nginx level\n    * Remove server's own IP from /etc/hosts if it exists\n    * Set /etc/hostname before barracuda install\n    * Stop displaying useless expired one-time login link on boa install\n    * The _CUSTOM_CONFIG_CSF should protect only /etc/csf/csf.conf\n    * The maxmemory update for Valkey was missing — fixes #1893\n    * Update Nodejs/NPM install logic to always force upstream — fixes #1910\n    * Use drupal-ten-aegir-core-01.patch for Drupal 11.1.x\n    * Use strict check for virt-what tool\n\n@=> Drupal platforms available for installation -- docs/PLATFORMS.md\n\n\t## Drupal 11\n\n\tCK3 - [Commerce 3.2.0] with core (11.2.8) (NEW)\n\tCMS - [Drupal CMS 1.2.8] with core (11.2.8) (NEW)\n\tDE1 - [Drupal 11.1.9] with Drush included -- dev/stage/prod (NEW)\n\tDE2 - [Drupal 11.2.8] with Drush included -- dev/stage/prod (NEW)\n\tSCR - [Sector 11.0.x-dev] with core (11.2.8) (NEW)\n\tTHR - [Thunder 8.2.6] with core (11.2.8) (NEW)\n\n\t## Drupal 10\n\n\tCK2 - [Commerce v.2] with core (10.1.8)\n\tDX0 - [Drupal 10.0.11] with Drush included -- dev/stage/prod\n\tDX1 - [Drupal 10.1.8] with Drush included -- dev/stage/prod\n\tDX2 - [Drupal 10.2.12] with Drush included -- dev/stage/prod\n\tDX3 - [Drupal 10.3.14] with Drush included -- dev/stage/prod\n\tDX4 - [Drupal 10.4.9] with Drush included -- dev/stage/prod (NEW)\n\tDX5 - [Drupal 10.5.6] with Drush included -- dev/stage/prod (NEW)\n\tDXP - [DXPR Marketing 10.3.0] with core (10.3.6)\n\tEZC - [EzContent 2.2.15] with core (10.3.6)\n\tFOS - [farmOS 3.4.6] with core (10.4.9) (NEW)\n\tLGV - [LocalGov 3.3.1] with core (10.5.6) (NEW)\n\tOCS - 
[OpenCulturas 2.5.4] with core (10.5.6) (NEW)\n\tOFD - [OpenFed 12.2.4] with core (10.2.10)\n\tSOC - [Social 12.4.5] with core (10.2.10)\n\tVB9 - [Varbase 9.1.12] with core (10.5.2) (NEW)\n\tVBX - [Varbase 10.0.8] with core (10.5.6) (NEW)\n\n\t## Drupal 9\n\n\tDL9 - [Drupal 9.5.11] -- dev/stage/prod\n\tOLS - [OpenLucius 2.0.0] with core (9.5.11)\n\tOPG - [Opigno LMS 3.1.0] with core (9.5.11)\n\n\t## Drupal 7\n\n\tCK1 - [Commerce v.1] with core (7.105.1) (NEW)\n\tDL7 - [Drupal 7.105.1] -- dev/stage/prod (NEW)\n\tUC7 - [Ubercart 3.13] with core (7.105.1) (NEW)\n\n\t## Drupal 6\n\n\tDL6 - [Pressflow 6.60.1] -- dev/stage/prod\n\tUC6 - [Ubercart 2.15] with core (6.60.1)\n\n\n###\n### Stable BOA-5.7.12-pro - Full Edition\n### Date: Tue Jul 29 08:44:02 AM AEST 2025 in Sydney\n###\n\n@=> New BOA-5.7.12 PRO Release\n\n  This maintenance release delivers critical hotfixes and essential\n  component upgrades to ensure maximum stability and compatibility across\n  all environments.\n\n  Immediate upgrade of both Barracuda and Octopus is strongly recommended\n  to benefit from these fixes and avoid potential issues.\n\n@=> Important Fixes\n\n    * Add _elevenValidatorPatch to fix Drupal 11 CMS distro fatal error\n    * Fix typo in --with-avif for PHP 8.1+ — fixes #1881\n    * Remove all .drush.inc files, but only in Drupal 11 -- fixes #1885 and #5\n\n@=> Changes\n\n    * Revert to non-phar drush8 to restore PHP-CLI live PHP switch capability\n\n@=> Upgrades\n\n    * Drush 8.5.0.4 classic\n    * Unbound 1.23.1\n\n\n###\n### Stable BOA-5.7.11-pro - Full Edition\n### Date: Fri Jul 25 06:53:03 PM BST 2025 in London\n### Welcome Drupal 11—Ægir Mission Impossible—Again!\n###\n\n@=> New BOA-5.7.11 PRO Release\n\n  Drupal 11 with Ægir 3: They Said It Couldn’t Be Done — We Did It Anyway\n\n  The future of Drupal hosting is here! 
With BOA-5.7.11 PRO, we proudly deliver\n  what many thought impossible—full Drupal 11 support integrated with Ægir 3.\n\n  This groundbreaking feature not only pushes the boundaries of what BOA\n  can achieve but also reaffirms our commitment to staying ahead of the curve\n  for modern Drupal deployments.\n\n  This release marks a major milestone: for the first time, BOA users can\n  seamlessly install, manage, and scale Drupal 11 sites with all the automation,\n  performance, and reliability you’ve come to expect.\n\n  Powered by Percona 8 and fine-tuned to leverage the latest innovations\n  across the stack, this update sets a new standard for hosting next-generation\n  Drupal applications while continuing to fully support legacy Drupal versions,\n  ensuring smooth operations for every site in your ecosystem, old or new.\n\n  We are thrilled to introduce BOA-5.7.11 PRO, our 5th release under the\n  new branch structure and dual licensing model. It merges seven months of\n  intense development from the DEV branch, delivering over 340 commits packed\n  with powerful features, critical fixes, and enhancements.\n\n  Thank you to everyone who supports our work by purchasing a BOA PRO license:\n  https://omega8.cc/boapro.\n\n  As always, this announcement highlights only the most impactful changes.\n  For a full breakdown, explore the complete commit history.\n\n@=> New Features\n\n    * Drupal 11 support (requires Percona 8)\n    * Install or update CiviCRM CLI Tool phar\n    * MultiCore Apache Solr 9 support\n    * Use Valkey and drop Redis support\n    * Write usage reports also to ~/static/usage/\n\n@=> Improvements\n\n    * Add _mysql_high_load procedure\n    * Add --with-avif to compatible PHP versions 8.1+\n    * Add check for downloaded dehydrated script integrity\n    * Add fstab helper for Linode Volumes\n    * Add mysql root password reset helper\n    * Allow forcing symlinks mode with empty /data/conf/force_symlinks.conf\n    * Drupal 10+ core patches 
files are no longer added to codebase\n    * Drupal 11 site installation details including admin pwd in the task log\n    * Drupal 11 site is installed via site-platform-local Drush\n    * Improve _PHP_CLI detection, also from static/control/cli.info\n    * Improve permissions fix to include parent dir if app root != web root\n    * Local Drush locking/unlocking no longer adds control files in codebase\n    * Lock Local Drush and Symfony Console Input/Style in daily.sh\n    * Lock/Unlock Local Drush is more reliable with both Provision and bash\n    * More robust support for real IP detection behind various proxies\n    * Symfony Console Input/Style locking more reliable with live diff/patch\n    * Use faster SQL auto-healing for 'Too many connections' errors\n    * Use older CSF/LFD for legacy systems\n    * Use queue.redis_reliable but only for core D8 to D10\n\n@=> Changes\n\n    * Always use system Drush 8 PHAR instead of Ægir local Drush\n    * Install System Drush 8 as PHAR (self-contained)\n    * Mark Apache Solr 4 with Jetty 9 as (deprecated)\n    * Solr is no longer included via ALL keyword in the _XTRAS_LIST\n\n@=> Upgrades\n\n    * CSF 14.24\n    * cURL 8.14.1\n    * dehydrated 0.7.3\n    * Drush 8.5.0.4\n    * MultiCore Apache Solr 9.8.1\n    * Nginx 1.29.0\n    * OpenSSH 10.0p2\n    * OpenSSL 3.4.2\n    * PHP 8.1.33\n    * PHP 8.2.29\n    * PHP 8.3.23\n    * PHP 8.4.10\n    * Valkey 7.2.10\n\n@=> Important Fixes\n\n    * Allow LE with HTTP Basic Auth fixed by Naurisr in #1790\n    * Remove overly aggressive bot protection on /civicrm URLs\n\n@=> Drupal platforms available for installation -- docs/PLATFORMS.md\n\n\t## Drupal 11\n\n\tCK3 - [Commerce v.3] with core (11.2.2)\n\tDE2 - [Drupal 11.2.2]\n\tSCR - [Sector 11.0.x-dev] with core (11.2.0)\n\tTHR - [Thunder 8.2.1] with core (11.2.2)\n\n\t## Drupal 10\n\n\tCK2 - [Commerce v.2] with core (10.1.8)\n\tDX0 - [Drupal 10.0.11]\n\tDX1 - [Drupal 10.1.8]\n\tDX2 - [Drupal 10.2.12]\n\tDX3 - [Drupal 10.3.14]\n\tDX4 - 
[Drupal 10.4.8]\n\tDX5 - [Drupal 10.5.1]\n\tDXP - [DXPR Marketing 10.3.0] with core (10.3.6)\n\tEZC - [EzContent 2.2.15] with core (10.3.6)\n\tFOS - [farmOS 3.3.1] with core (10.3.6)\n\tLGV - [LocalGov 3.1.3] with core (10.5.1)\n\tOCS - [OpenCulturas 2.2.1] with core (10.3.6)\n\tOFD - [OpenFed 12.2.4] with core (10.2.10)\n\tSOC - [Social 12.4.5] with core (10.2.10)\n\tVBX - [Varbase 10.0.6] with core (10.5.1)\n\tVB9 - [Varbase 9.1.10] with core (10.5.1)\n\n\t## Drupal 9\n\n\tDL9 - [Drupal 9.5.11]\n\tOLS - [OpenLucius 2.0.0] with core (9.5.11)\n\tOPG - [Opigno LMS 3.1.0] with core (9.5.11)\n\n\t## Drupal 7\n\n\tCK1 - [Commerce v.1] with core (7.103.1)\n\tDL7 - [Drupal 7.103.1]\n\tUC7 - [Ubercart 3.13] with core (7.103.1)\n\n\t## Drupal 6\n\n\tDL6 - [Pressflow 6.60.1]\n\tUC6 - [Ubercart 2.15] with core (6.60.1)\n\n\n###\n### Stable BOA-5.6.0-pro - Full Edition\n### Date: Tue Dec 31 06:12:44 AM AEDT 2024 in Sydney\n### Happy New Year!\n###\n\n@=> New BOA-5.6.0 PRO Release – Happy New Year!\n\n  We're thrilled to introduce BOA-5.6.0 PRO, our latest release and the fourth\n  under our new branch structure and dual licensing model.\n\n  This PRO release brings the project fully in sync with the DEV branch,\n  which has been actively developed over the past two months, incorporating\n  over 750 commits since BOA-5.5.0.\n\n  We extend our heartfelt thanks to all of you who support our work\n  by purchasing a BOA Pro license: https://omega8.cc/boapro.\n\n  As always, this announcement covers only the most impactful new features,\n  critical fixes, and enhancements. 
For a comprehensive list of all updates,\n  please refer to the full commit history.\n\n@=> New Features\n\n    * Active site database backups are available in ~/static/files/dbackup/\n    * Add experimental support for Cloudflare R2 Object Storage\n    * Add mergecsf tool to join and de-duplicate legacy csf configuration\n    * Add perftest tool to test hardware performance within VM\n    * Add php-cli access for [grp:ltd-shell-more]\n    * Add smtpgapps tool to install and configure msmtp on Devuan\n    * Add support for all AWS S3 regions, including dual-stack endpoints\n    * Add support for php-rebuild or php-reinstall on barracuda upgrade\n    * Add support for separate /root/.deny.solr7.cnf and /root/.deny.jetty9.cnf\n    * Add verifyvhostsdns tool to check all vhosts for aliases with invalid DNS\n    * New Relic Integration for Drupal with Drush Compatibility (8, 12, 13)\n    * PHP 8.4 is fully supported and installed by default\n    * Remote System Backups use `global`, `data` and optional `custom` buckets\n\n    * Completely New Backups! 
There is too much to cover, so please refer to\n      our extensive new documentation pages for all details.\n\n      This new feature is exclusive to BOA PRO and will not be ported to LTS.\n\n      New PRO Backups for BOA SysAdmin:\n        https://github.com/omega8cc/boa/tree/5.x-pro/docs/BACKUP_ROOT.md\n\n      New PRO Backups for Octopus Lshell User:\n        https://github.com/omega8cc/boa/tree/5.x-pro/docs/BACKUP_USER.md\n\n      New PRO Backups Retention Policy Configuration:\n        https://github.com/omega8cc/boa/tree/5.x-pro/docs/BACKUP_RETENTION.md\n\n      New PRO Backups Supported Regions and Bucket Creation Guidelines:\n        https://github.com/omega8cc/boa/tree/5.x-pro/docs/BACKUP_REGIONS.md\n\n\n@=> Improvements\n\n    * Add _backup_waiting_notify to make admin aware of the backup status\n    * Add _csf_lfd_gateway_allow()\n    * Add _linode_vm_postinstall()\n    * Add _turn_off_apparmor unless /root/.keep_apparmor_on.cnf\n    * Add /etc/cron.hourly/systemtime\n    * Add auto-restore of backup_schedule\n    * Add boa info to all backup reports\n    * Add cleanup for /var/lib/redis/ on OOM incident\n    * Add d7security_client-7.x-1.3 to o_contrib_seven and Hostmaster\n    * Add early aa-teardown on init to make sure that AppArmor is turned off\n    * Add function to auto-repair incomplete backup sets\n    * Add local Ægir Third Party Libraries\n    * Add more checks to make sure that OpenSSL is fully up to date\n    * Add support for /root/.turn.off.auto.update.cnf\n    * Add support for cloudflare-dns-ssl-py.info and cloudflare-dns-ssl-sh.info\n    * Add wkhtmltox_0.12.6.1-3 for Daedalus\n    * Ægir Hostmaster: Log wget cron runs\n    * Ægir Provision: Install local Drush automatically on platform verify and unlock\n    * Always run dist-upgrade twice -- helps with slow access to Devuan servers\n    * Configure backups --concurrency dynamically\n    * Disable _if_start_screen with noscreen in the args\n    * Disable backup_schedule on systems 
with too low free RAM\n    * Do not install CSF until BOA installation is complete on Linode\n    * Do not install csf/lfd on Linode early\n    * Do not reinstall Duplicity unless /root/.force.duplicity.reinstall.cnf exists\n    * Drupal 7 now supports and expects Trusted Host Patterns\n    * Improve all wget/curl downloads with proper retry logic\n    * Improve and simplify _switch_php()\n    * Improve sysctl.conf template\n    * Improve tools/le/hooks/cloudflare logic\n    * Install or upgrade csf/lfd monitoring early\n    * Install wkhtmltopdf from packages first to get all dependencies\n    * Integrate original _SKYNET_MODE docs/history\n    * More fixes in the vdrush wrapper to support Drush 13\n    * Move /data/disk/arch to global and /home to data backups\n    * Move certain log scanners from the slow minute.sh to fast second.sh loop\n    * Replace direct exec with _forward_to_shell in shell wrapper\n    * Report also on newrelic-daemon and monagent versions in boa info\n    * Run guest-water.sh right after CSF install or upgrade\n    * Run orphaned duplicity processes cleanup separately\n    * Sync _sql_busy_detection with max_connect_errors\n    * Sync barracuda.cnf templates and docs\n    * Sync sshd restart procedure across all scripts\n    * Update barracuda config with missing vars if any\n    * Update email template to remove confusing legacy details\n    * Update xboa email templates\n    * Use include/exclude instead of exclude/include logic\n    * Use just one graceful csf restart on upgrade\n    * Use newer, supported --copy-links option in Duplicity\n    * Use noscreen in non-interactive scripts launched by parent scripts or cron\n    * Wait 8 minutes before attempting to run enforced post-install upgrade\n\n@=> Changes\n\n    * Add docs/IPv6.md to explain why BOA disables IPv6 by default\n    * Disable confusing hosting_client_send_welcome with non-working login link\n    * Disable memory swap when running duplicity\n    * Disable no longer 
supported GSSAPIAuthentication in SSH config\n    * Disable performance_schema for Percona 5.7 but enable for 8.0+\n    * Disable ssl_stapling and fix http2 directive\n    * Do not auto-re-enable swap\n    * Enforce Composer 2.8.2 (because 2.8.3+ breaks previously working builds)\n    * Force sysctl.conf.mod-disable-ipv6\n    * Introducing the New BOA Branching Scheme -- see docs/BRANCHES.md\n    * Move /bin/websh to /opt/local/bin/websh\n    * Newest Python should be installed only with barracuda or backup tools\n    * Remove hosting_cron_use_backend.txt support\n    * Remove no longer supported legacy _SCOUT_KEY\n    * Remove no longer supported legacy HHVM\n    * Set SOLR WAIT to 8s to speed up reboot and services restarts\n    * The oldest PHP version supported by New Relic is 7.2\n    * Update and sync nice/renice logic for scripts and services\n    * Update boa-mirrors-2024-12.txt\n    * Updates for lshell.conf template\n    * Use gzip to compress classic mysqldump backups\n    * Use zero tolerance mode for SSH/FTP failed login attempts\n    * Use zstd to compress mydumper sql backups\n    * We have now doubled the disk space in all our hosted plans\n\n@=> Upgrades\n\n    * Composer 2.8.2\n    * cURL 8.11.1\n    * Drupal 7.103.1\n    * Drush 8.5.0.1\n    * Duplicity 3.0.3.2\n    * ionCube 14.0.0 (up to PHP 8.3)\n    * New Relic 11.4.0.17\n    * Nginx 1.27.3\n    * OpenSSL 3.4.0\n    * PHP 8.1.31\n    * PHP 8.2.27\n    * PHP 8.3.15\n    * PHP 8.4.2\n    * PHP APCu 5.1.24\n    * PHP MCRYPT 1.0.7\n    * Unbound 1.23.1\n    * Use phpredis 4.3.0 with PHP 5.6\n    * Use phpredis 6.1.0 for 7.4 and newer\n\n@=> Important Fixes\n\n    * Ægir Hostmaster: Improve hosting_cron_queue reliability\n    * Ægir Hostmaster: Unset variables at the end of the loop\n    * Ægir Provision: Add backup mode ctrl file cleanup on clone and migrate\n    * Ægir Provision: Add more supported compression variants\n    * Ægir Provision: Do not confuse PDO and MySQLi conventions\n    * Ægir 
Provision: Fix Drush 13 support by invoking vendor/drush/drush/drush.php\n    * Ægir Provision: Follow symlinks to include all files in custom backup task only\n    * Ægir Provision: Improve function revoke()\n    * Ægir Provision: Prioritize '.tar.zst' as provision_backup_suffix\n    * Ægir Provision: Use supported localhost in can_grant_privileges()\n    * Allow _php_if_versions_cleanup_cnf if Master Ægir was not upgraded yet\n    * Barracuda downgrade protection should not rely on key/barracuda_key.txt\n    * Constant E_STRICT is deprecated in PHP 8.4\n    * Disable shell wrapper on system stop/start early\n    * Double-check if /etc/init.d/nginx is really updated\n    * Excessive email notifications due to DHCP error checks #1829\n    * Final fixes in shell wrapper make it rock solid again\n    * Fix and sync all apt options\n    * Fix autoinit conflicting functions\n    * Fix for --enable-redis-lzf also in _php_extensions_update()\n    * Fix for counting symlinked files in resources usage monitoring\n    * Fix for duplicate http2 in all vhosts on upgrade\n    * Fix for platforms deployed using Manage with Git method\n    * Fix for SFTP chroot by using external mode in Subsystem sftp\n    * Fix the bug in the shell wrapper when composer is both a command and argument\n    * Fix the logic for Devuan base-files update for Daedalus\n    * Fix the logic in _ifnames_grub_check_sync()\n    * Improve the http2/ssl_stapling logic\n    * Legacy MCRYPT can’t be used with PHP 8.4\n    * Make sure that both web and app root dirs are group writable\n    * Make sure we add keys on a new line in xboa\n    * More capabilities to satisfy complex composer tasks\n    * Octopus downgrade protection should not rely on tools/key/octopus_key.txt\n    * Patch hosting_cron.module automatically to make web cron 100% reliable\n    * Remove duplicate ssl directives in all vhosts templates\n    * Sync include/openssl extended check for latest version\n    * Sync max allowed PHP-FPM 
versions running (11)\n    * Sync maxBooleanClauses for new Solr cores to 4096\n    * Sync PHP 8.3 precedence -- it's still the default version\n    * Use dash by default and limit the use of _forward_to_shell\n\n\n###\n### Stable BOA-5.5.0-pro - Full Edition\n### Date: Sat 26 Oct 2024 09:49:51 AM PDT in Santa Clara\n###\n\n@=> New BOA-5.5.0 PRO Release – Thank You for Your Support!\n\n  We're thrilled to introduce BOA-5.5.0 PRO, our latest release and the third\n  under our new branch structure and dual licensing model.\n\n  This PRO release brings the project fully in sync with the DEV branch,\n  which has been actively developed over the past several months, incorporating\n  nearly 400 commits since BOA-5.4.0.\n\n  BOA-5.5.0 PRO also comes equipped with 26 Ægir-ready platforms, supporting\n  either Drupal core alone or various popular Drupal distributions—seven of\n  which are new! These platforms include options like Commerce, DXPR Marketing,\n  EzContent, farmOS, LocalGov, OpenCulturas, OpenFed, OpenLucius, Opigno LMS,\n  Sector, Social, Thunder, Ubercart, and Varbase.\n\n  We extend our heartfelt thanks to all of you who support our work\n  by purchasing a BOA Pro license: https://omega8.cc/boapro.\n\n  As always, this announcement covers only the most impactful new features,\n  critical fixes, and enhancements. 
For a comprehensive list of all updates,\n  please refer to the full commit history.\n\n@=> New Features\n\n    * Added codebasecheck tool for codebase compatibility check with Percona 8.0\n    * Added Drush 13 support by invoking vendor/drush/drush/drush.php directly\n    * Added dedicated memorytuner (for testing for now)\n    * Added mysqltuner5 and mysqltuner8\n    * Added bash version of scan_nginx.sh -- the Nginx DoS Guard\n    * Added support for more granular load limits like 1.2 2.5 3.\n    * Added support for non-standard /hdd mount point\n    * Added support for /mnt/ paths in Drush\n    * Added sqlclean and vhostcheck tools for root\n    * SQL Adminer access moved to Octopus Ægir HTTPS vhost URL at /sqladmin\n    * Added incident_email_report() feature to all monitor/check/ scripts\n    * Allow SSH-based access authorization to SQL Adminer at new /sqladmin/ URL\n    * Added incident detection and email reporting for LE certs renewal failures\n    * Added screen auto-start in boa, barracuda and octopus\n    * Added support for Percona 8.4 LTS (for testing only, you should use 8.0)\n    * Added support for Percona 8.3 (for testing only, you should use 8.0)\n    * Added support for Percona 8.0 (production ready)\n\n@=> Improvements\n\n    * Added _redis_cold_restart to mysql restart in the monitor/check/ scripts\n    * Rewrite the code used to install many new Drupal distros in Octopus\n    * Added Troubleshooting Docs in docs/FIXME.md (more entries soon)\n    * Faster _sql_busy_detection() in the monitor/check/ scripts\n    * Added _mysql_downgrade_protection() to avoid downgrade from Percona 8.0\n    * Many improvements in the Nginx DoS Guard in the monitor/check/ scripts\n    * Do not use fast firewall block unless /root/.instant.csf.block.cnf\n    * Pause some new monitors sub-tasks during BOA upgrades and backups\n    * Use underscore as prefix for all functions and camelCase vars\n    * Block only relevant ports using the monitor/check/ scripts\n    * 
Added docs on _NGINX_DOS_ variables\n    * Added doc on PHP versions management — fixes #1807\n    * Added separate docs/PHP-FPM.md and docs/DRUSH-CLI.md\n    * Added docs on Importance of Keeping SKYNET Enabled in BOA\n    * Added _CPU_TASK_RATIO to the CPU logic in auto-healing scripts\n    * Display currently used GRUB config in boa info\n    * Make the not_supported_virt() BOLD ENOUGH in boa info\n    * Added WARNING if /root/.allow.any.virt.cnf exists in boa info\n    * Display _DSK Usage for relevant partitions only in boa info\n    * Improved _XSY System Uptime/Load/Kernel/Disk/Memory Report in boa info\n    * Added Lshell version to boa info\n    * Always attach basic boa info report to barracuda upgrade log/email\n    * Improve check_php_rebuild() and add separate check_php_ssl_version()\n    * Explained _INCIDENT_EMAIL_REPORT variable\n    * Explained _SQL_MAX_TTL variable\n    * Explained _SQL_LOW_MAX_TTL variable\n    * Split big minute.sh into smaller auto-healing scripts\n    * Added procedure to fix empty or missing .dhp files\n    * Improved /root/.dont.use.fancy.bash.login.cnf logic\n    * Improved the octopus upgrade email tpl\n    * Added Key Services Uptime Report to boa info\n    * Pretty large defunct code cleanup\n\n@=> Changes\n\n    * Install python3-full packages\n    * Duplicity: Remove Python 2 support and require OpenSSL 3\n    * Remove restrictions for opcache_compile_file (Grav CMS support)\n    * Removed legacy manage_ip_auth_access() for SQL Adminer access\n    * PHP 8.3 is the new default version\n    * Prefer system default Python3 for Lshell and src build for Duplicity\n    * Always run ifnames_grub_check_sync in DEMO mode unless ctrl file exists\n    * Remove chrony if preinstalled\n    * PHP 8.1 is the max version supported on Stretch and Jessie\n    * New Relic removed support for legacy PHP 7.0 and 7.1\n    * Run _update_boa_tools only when new serial or pid key is detected\n    * Redis extension 8.x-1.8.2 (with not needed db 
schema update reverted)\n    * Disabled backboa install in auto mode\n    * Allow all 7.x PHP versions on legacy (Debian) systems\n    * Amazon EC2 No Longer Supported (system crashes, doesn't support Devuan)\n    * Use legacy PHP 7.x by default on legacy Debian systems\n\n@=> Upgrades\n\n    * Lshell 0.10\n    * Composer 2.8.1\n    * Unbound 1.21.1\n    * OpenSSL 3.3.2\n    * PHP 8.3.13\n    * PHP 8.2.25\n    * PHP 8.1.30\n    * OpenSSH 9.9p1\n    * Python 3.12.5 (for Duplicity)\n    * cURL 8.10.1\n    * Nginx 1.27.2\n    * ionCube 13.3.1 (also for PHP 8.3)\n    * MyQuick 0.16.7-3\n    * CSF 14.21\n    * Duplicity 3.0.2\n\n@=> Important Fixes\n\n    * Fix PATH in the websh wrapper (fixes git and OpenSSL issues)\n    * Fix for _PHP_FPM_TIMEOUT logic\n    * Remove apt-listchanges on Debian (for legacy systems with broken debconf)\n    * Improve _if_fix_python() procedure logic\n    * Fix the logic for _update_boa_tools on init\n    * Do not remove usage.sh — fixes #1824\n    * Add cleanup for exclude.tag (could result in no files on clone)\n    * Do not restart sshd every minute\n    * Do not reload nginx every few minutes by default\n    * cURL version upgrade should happen only with barracuda upgrade\n    * Fix for too broad cleanup in /var/xdrago/log/\n    * Ignore all dynamic requests related to css/js while they are generated\n    * Do not log redirects (Nginx)\n    * Inconsistent checks for SSL version in check_php_rebuild — fixes #1815\n    * Use _CURL_VRN=7.50.1 for Wheezy compatibility\n    * Use separate log for mysql notices — fixes #1805\n    * Add built-in /run/unbound setup — fixes #1804\n    * Percona 5.7 still depends on legacy package naming — fixes #1808\n    * Compatibility with legacy Python 3.5\n\n@=> Drupal platforms available for installation -- docs/PLATFORMS.md\n\n    * Commerce Kickstart 2.77 (7.101.1)\n    * Commerce Base 2.40 (10.1.8)\n    * Commerce Kickstart 3.0.0 (10.3.6)\n    * DXPR Marketing 10.3.0 (10.3.6)\n    * EzContent 2.2.15 
(10.3.6)\n    * farmOS 3.3.1 (10.3.6)\n    * LocalGov 3.0.11 (10.3.6)\n    * OpenCulturas 2.2.1 (10.3.6)\n    * OpenFed 12.2.4 (10.2.10)\n    * OpenLucius 2.0.0 (9.5.11)\n    * Opigno LMS 3.1.0 (9.5.11)\n    * Sector 10.0.0-rc5 (10.2.10)\n    * Social 12.4.5 (10.2.10)\n    * Thunder 7.3.7 (10.3.6)\n    * Ubercart 2.15 (6.60.1)\n    * Ubercart 3.13 (7.101.1)\n    * Varbase 9.1.6 (10.3.6)\n    * Varbase 10.0.2 (10.3.6)\n    * Pressflow 6.60.1 (core only)\n    * Drupal 7.101.1 (core only)\n    * Drupal 9.5.11 (core only)\n    * Drupal 10.0.11 (core only)\n    * Drupal 10.1.8 (core only)\n    * Drupal 10.2.10 (core only)\n    * Drupal 10.3.6 (core only)\n    * Drupal 10.4.x-dev (core only)\n\n\n###\n### Stable BOA-5.4.0-pro - Full Edition\n### Date: Wed 14 Aug 2024 06:24:03 AM AEST in Sydney\n###\n\n@=> New BOA PRO Release & Comparison with LTS and DEV Branches\n\n  We are excited to announce the release of BOA-5.4.0 PRO and BOA-5.4.0 LTS,\n  marking the second release under our new branch structure and dual licensing\n  model, which began with BOA-5.2.0.\n\n  These new PRO and LTS versions bring the project fully up to date with the\n  DEV branch, which has been actively developed over the past several months.\n\n  As always, this announcement highlights only the most significant new features,\n  critical fixes, and improvements. 
For a detailed list of all changes,
  please refer to the commit history.

@=> New Features

    * Simplify and speed up BOA install/upgrades -- please check all details in
      the updated and greatly improved documentation:

      docs/INSTALL.md
      docs/UPGRADE.md
      docs/SELFUPGRADE.md
      docs/MAJORUPGRADE.md

    * AppArmor BOA integration for stricter system protection (needs docs)
    * Barracuda install without Octopus is now possible -- docs/INSTALL.md
    * Enable instant php-cli version switch for Ægir backend -- docs/DRUSH.md
    * Improve Ruby Gems and Node/NPM security and speed x3 -- docs/GEM.md
    * Let's Encrypt for Ægir Hostmaster installed automatically -- docs/SSL.md
    * Let's Encrypt Live Mode is enabled by default -- docs/SSL.md
    * Add three manual backup modes in Ægir (incomplete feature at the moment)
    * New Relic support with Octopus/Platform/Site Config -- docs/NEWRELIC.md
    * Restore _AEGIR_UPGRADE_ONLY {aegir} as supported barracuda upgrade mode
    * Restore {aegir|platforms|both} as supported octopus upgrade modes
    * Security Considerations for Multi-Ægir Systems -- docs/SECURITY.md
    * Use /root/.deny.clamav.cnf to auto-disable clamav if installed
    * Use /root/.deny.java.cnf to auto-disable Solr and Jetty if not used

    * Drush 12 in Ægir Tasks: Dynamically Utilize Site-Local Drush for
      the updatedb Operations on Drupal 10+ (needs docs).

      For now, here is a brief explanation of how it works:

    # Both Migrate and Clone tasks in Ægir by default run the updatedb
      with Ægir's own Drush 8 in the final deploy internal procedure.

    # This may cause unexpected issues in Drupal 10 and newer versions, so
      we have added a switch which allows you to tell Ægir to skip running
      `updatedb` on Drupal 10+ -- either globally with the empty control file
      ~/static/control/DisAutoUpDb.info or per site with the empty control file
      
~/static/control/sitename_DisAutoUpDb.info where `sitename` is the site's
      main domain name used in its Drush alias. You can then unlock the
      Site-Local Drush and run it manually with `vdrush` in the platform
      app root (not the web root) to better control what happens on `updatedb`,
      using the command: `vdrush @site-alias updatedb`

    # Automatic mode does it even better for Drupal 10+. Here's how it works,
      given that no control file listed above exists:

      1. The Platform Verify task locks Site-Local Drush and patches Drupal core.

      2. If the site is migrated or cloned to a different platform, Ægir will
         check if **both old and new** platforms have the Site-Local Drush
         in their codebases.

      3. If Site-Local Drush is detected in both platforms, Ægir will unlock
         Drush in both platforms and will also revert the Drupal core patch it
         normally needs to use its own Drush 8.

      4. Now Ægir will run the Site-Local Drush `updatedb` command and
         will report all details in the task log in the admin interface.

      5. Once the `updatedb` is complete, Ægir will automatically apply
         the Drupal core patch again and lock Site-Local Drush, so you
         can run any other tasks in the control panel as usual. 
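

      As a quick sketch, the control files and the manual `updatedb` run
      described above boil down to a few shell commands. This is a hedged
      example: `example.com` and the platform path are hypothetical
      placeholders, not values taken from these notes.

```shell
# Sketch of the manual DisAutoUpDb workflow; 'example.com' and all
# paths below are hypothetical placeholders.
mkdir -p "$HOME/static/control"

# Skip Aegir's automatic updatedb globally for all Drupal 10+ sites:
touch "$HOME/static/control/DisAutoUpDb.info"

# ...or for a single site only, named by the main domain from its Drush alias:
touch "$HOME/static/control/example.com_DisAutoUpDb.info"

# After unlocking the Site-Local Drush, run updatedb manually from the
# platform app root (not the web root), for example:
#   cd /data/disk/o1/static/my-platform && vdrush @example.com updatedb
```

      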
Magic!\n\n@=> Drupal platforms available for installation -- docs/PLATFORMS.md\n\n    * Drupal 10.4.x-dev\n    * Drupal 10.3.1\n    * Drupal 10.2.7\n    * Drupal 10.1.8\n    * Drupal 10.0.11\n    * Social 12.4.2 (10.2.6)\n    * Thunder 7.3.0 (10.3.1)\n    * Varbase 10.0.0 (10.3.1)\n    * Varbase 9.1.3 (10.2.6)\n\n    * Drupal 9.5.11\n    * OpenLucius 2.0.0 (9.5.11)\n    * Opigno LMS 3.1.0 (9.5.11)\n\n    * Commerce 1.72\n    * Commerce 2.77\n    * Drupal 7.101.1\n    * Ubercart 3.13\n\n    * Pressflow 6.60.1\n    * Ubercart 2.15\n\n@=> Improvements\n\n    * Add better protection from duplicate sql tasks\n    * Improve Ægir tasks messages to identify new improvements in the backend\n    * Update Drush 10+ aliases on the fly within Ægir deploy procedure\n    * Add BOA Roadmap & Progress Update in ROADMAP.md\n    * Add bring_all_ram_cpu_online\n    * Add CSF self-update debugging log in /var/backups/csf/water/\n    * Add Dual License and BOA Branches Explained in DUALLICENSE.md\n    * Add INI (platform level) docs in docs/ini/platform/INI.md\n    * Add INI (site level) docs in docs/ini/site/INI.md\n    * Add killer script for hanging apt-get update\n    * Add support for /root/.force.queue.runner.cnf\n    * Add switch_to_bash_in_octopus\n    * Detect and remove stale pid faster\n    * Display also system-manufacturer in the welcome messages and reports\n    * Do not lower proc nice on init and major OS upgrades\n    * Do not restart slow starting services during major OS upgrade\n    * Execute post-install octopus auto-upgrade on boa and octopus install\n    * Explain how upgrades affect BOA special shell wrapper\n    * Improve and simplify is_logged_in early check in global.inc\n    * Improve rsyslog to use separate log files for cron, mail, lfd, iptables\n    * Limit noise printed in the console\n    * Protect csf.allow from removing custom entries\n    * Rewrite and improve all BOA project docs to use Markdown\n    * Rewrite and improve the main README.md\n    * 
Simplify upgrade docs
    * Turn off AppArmor while running octopus
    * Update tests for Amazon EC2 environment detection
    * Use `drush11 aliases` or `drush11 sa` for Drupal 8+ core and PHP 8.2+
    * Use new `fancynow` welcome screen only for interactive root sessions
    * Nginx: Sync js/css aggregation support
    * Nginx: Sync static files regex

@=> Changes and Upgrades

    * Add compatibility with Redis 8.x-1.7.1
    * Add igbinary support to PHP 5.6
    * Add recommended security and privacy HTTP headers in Nginx config
    * Add the now-required $settings['state_cache'] = TRUE; in global.inc
    * Adjust patches and PHP versions
    * AdvAgg is no longer added to D8+ o_contrib
    * Barracuda upgrade after boa install is now automated
    * Build OpenSSH from sources by default
    * cURL 8.9.1
    * Disable man-db/auto-update to also speed up autoinit and boa install
    * Duplicity 3.0.0
    * Force mysql root password update on barracuda upgrade
    * Git 2.45.2
    * Image Optimize toolkit binaries are now included by default
    * Install Python 3.12.4 for Duplicity
    * ionCube 13.0.4
    * Launch daily.sh automatically after barracuda upgrade
    * Lshell 0.9.18.10
    * MySecureShell master-29-06-2024
    * New Relic 19.9.9.93
    * New Relic no longer supports PHP 5.6
    * Nginx 1.27.0
    * Nginx: http2 is now a separate directive
    * OpenSSL 3.0.14 LTS
    * Re-enable cleanup for GHOST distros revisions
    * Remove /etc/apt/preferences
    * Remove cloud-utils if detected
    * Remove legacy i386/x32 support
    * Remove no longer supported MariaDB code
    * Remove unused mysql_hourly.sh
    * Remove old boa-init, no longer needed after introducing fast autoinit
    * Remove systemd cleanup from boa, now handled by the fast autoinit
    * Replace mail with s-nail
    * Replace pdnsd with unbound
    * Restrict also find/scp to prevent lshell escape
    * Upgrade to openjdk 11.0.24
    * 
Use /etc/ssh for OpenSSH built from sources (no new server keys, finally)
    * Use maximum compatible PhpRedis versions for legacy PHP
    * Use PermitRootLogin prohibit-password
    * We no longer allow installing BOA on Debian, to avoid confusion
    * We no longer override server sshd keys, to avoid confusion
    * Nginx: Remove the legacy X-XSS-Protection header
    * Nginx: Block aggressive bytedance and PetalBot crawlers

@=> Important Fixes

    * Add python3.5 compatibility for Stretch
    * Add second cron entry for critically important /var/xdrago/clear.sh
    * Add support for legacy python3.4
    * Always copy hostmaster LE cert to /etc/ssl/private/ if just updated
    * Avoid any AppArmor code on legacy Debian systems
    * Bash 5.2 compatibility
    * Detect broken GIT early and reinstall from sources
    * Do not install PHP 8.2/8.3 with _OPENSSL_EOL_VRN and _OPENSSL_LEGACY_VRN
    * Do not use --with-http_v3_module for Nginx on legacy systems
    * Do not use --with-imap for PHP on Jessie
    * Do not use --with-imap for PHP on major upgrade on any OS
    * Do not use --with-sodium for PHP on Jessie
    * Fix confusing ICU logic
    * Fix for ignored nofile limits
    * Fix for iptables paths backward compatibility
    * Fix for non-blocking ntpdate
    * Fix New Relic APT config
    * Fix Percona apt config logic
    * Fix platforms symlinking in the limited shell account
    * Fix Pure-FTPD install and config
    * Force crontab update on major OS upgrade
    * Improve resolvconf auto-config
    * Let's Encrypt actually supports wildcard names already
    * Make sure that _PHP_SINGLE_INSTALL exists before disabling other versions
    * Modernize Percona keys logic
    * Nginx: Sync http2 in legacy tpl
    * Remove blocking cnf file if php-max is used
    * Show PHP patch results on _DEBUG_MODE=YES
    * Sync for python3.11
    * Sync PHP extensions existence check directly, not just via ctrl files
    * Sync 
PhpRedis build options with versions compatibility\n    * Sync with python3.9\n    * Update wkhtmltopdf versions logic\n    * Use cURL 7.71.1 on Jessie\n    * Use cURL 8.2.1 on Stretch\n    * Use OpenSSH 8.3p1 on Jessie\n    * Use OpenSSH 9.3p1 on Stretch\n    * Use OpenSSL 1.0.2u on Jessie\n    * Use OpenSSL 1.1.1w on Stretch\n    * Fix for composer.json and composer.lock protection\n\n\n###\n### Stable BOA-5.3.0-pro - Full Edition\n### Date: Mon 12 Aug 2024 05:33:46 AM AEST in Sydney\n###\n\n@=> New BOA LTS Release & Comparison with PRO and DEV Branches\n\n    We are excited to announce the release of the latest BOA LTS version,\n    marking the first LTS release since the introduction of our new branch\n    structure and dual licensing model, which began with the BOA-5.2.0 release.\n\n    This LTS version brings the project up to date with BOA-5.3.0-pro, which\n    has been available for several months. Both BOA-5.3.0-pro and BOA-5.3.0-lts\n    are officially released today.\n\n    Looking ahead, BOA-5.4.0-pro will be released within the next 48 hours,\n    incorporating all recent developments from the DEV branch.\n\n    Please note that the project README and documentation displayed on GitHub\n    by default apply primarily to the BOA DEV branch, and shortly to BOA PRO.\n    These do not cover BOA LTS. If you are working with the LTS version, ensure\n    you switch to the appropriate branch to access legacy documentation\n    relevant to BOA LTS.\n\n    As always, we highlight only the most critical fixes and improvements in\n    this announcement. 
For a comprehensive list of changes, please refer to
    the commit history.

@=> New Features

    * PHP 8.3 Support

    * Update the sFTP password and password expiration date with the temporary
      pid file
        ~/static/control/run-sftp-password-update.pid
      Now the main Octopus limited shell user can easily self-update
      password-based access if they still have working SSH keys but have lost
      their working password.
      The new password will be written to ~/static/control/new-USER-password.txt

    * Add boa cleanup {detect|purge} {user|batch} to automate Octopus instance
      cleanup. Requires the existence of the /data/disk/USER/log/CANCELLED file
      and no vhosts in /data/disk/USER/config/server_master/nginx/vhost.d/
      It will archive only config files and delete everything else, but will not
      delete any databases or db users (yet).

@=> Improvements

    * Add ltd-shell account client access to moved sites files in static/files
    * Always install legacy OpenSSL first and force new on upgrade
    * Disable man-db/auto-update to speed up barracuda upgrades
    * MySQL: Disable performance_schema by default
    * MySQL: Do not run mysql_cleanup.sh on servers with >100 dbs
    * Nginx DoS-Guard: Add ignore_admin to protect site admin activity
    * Nginx DoS-Guard: Catch typical hack probe requests early
    * Nginx DoS-Guard: Detect and block ‘unknown’ IP requests
    * Nginx DoS-Guard: Track and block 500/403/404 flood
    * Prepare for but do not enable http3/quic yet
    * Use cold solr7 restart only on barracuda upgrade

@=> Changes and Upgrades

    * Build PHP --with-bz2
    * Build Redis with --enable-redis-lzf --enable-redis-igbinary
    * Composer 2.7.7
    * cURL 8.7.1
    * Drupal 7.101.1
    * Enable ClassicTrack for Ægir tasks by default
    * ionCube 13.0.2
    * Nginx 1.26.0
    * OpenSSH 9.8p1
    * OpenSSL LTS with 3.0.13 (new default version)
    * PHP 8.1.29
    * PHP 8.2.22
    * PHP 8.3.10
   
 * PHP APCu 5.1.23
    * PHP igbinary 3.2.15
    * PHP imagick 3.7.0
    * Ruby 3.3.4
    * Use _USE_FPM=1024 as minimum
    * Use phpredis 6.0.2 for 7.2 and newer

@=> Important Fixes

    * Add clamd/freshclam to auto-healing
    * Add cleanup for ctrl files blocking PHP upgrade
    * Always check if all /var/xdrago/* scripts are present or force update
    * Always install openjdk-11-jre-headless
    * Fix for vdrush @site updb in Drush 12
    * Fix protection from duplicate sql backups
    * Legacy PHP versions require legacy OpenSSL version
    * More protection from race conditions in auto-healing
    * Remove old auto-healing pids if detected
    * Restore ULIMIT in nginx init.d
    * Sync autoupboa cron to not collide with sql backups
    * The adduser no longer automates --home
    * Use only php-fpm reload instead of start on upgrade
    * Use PHP 7.4 in run_drush8_cmd if available


###
### Stable BOA-5.2.0 - Full Edition
### Date: Wed 03 Apr 2024 02:11:56 PM CEST in Warsaw
###

@=> Notes on new available BOA branches and licenses

    BOA is available in three main branches, but only LTS is available for
    direct installation:

  * LTS remains completely free to use, without any kind of license, as it
    was from the beginning (previously named HEAD or STABLE). This branch
    should be considered BOA LTS, with slow updates focused on both security
    and bug fixes, but very limited new feature additions.

  * DEV requires a paid license for both install and upgrade, and includes
    the latest features, security and bug fixes, and installed services
    versions. This branch shouldn't be used in production without extensive
    testing.

  * PRO requires a paid license and is available only as an upgrade from
    either LTS or DEV (or the previous HEAD/STABLE); it is the branch with
    regular monthly or bi-monthly releases, closely following the tested
    DEV branch.

    Once you install BOA LTS and want to upgrade to PRO with 
license obtained
    from https://omega8.cc/licenses you will need to use the up-pro command.

    Once you install BOA LTS or PRO and want to upgrade to DEV with a license
    from https://omega8.cc/licenses you will need to use the up-dev command.

    The old in-head, in-stable, up-head and up-stable commands no longer work;
    to avoid confusion, they have been replaced with in-lts and up-lts in all
    installation and upgrade scripts.

    Please make sure to read the updated docs/INSTALL.txt and docs/UPGRADE.txt

@=> New Features

    * Add autodaedalus tool for easy automated major system upgrades
    * Add Linux Containers (LXC) guest as supported (tested only by others)
    * Add mysql_cleanup running hourly to keep known caches overhead at minimum
    * Add OpenVZ Containers guest as supported (tested only by others)
    * Add support for ~/static/control/disable_user_register_protection.info
    * Add support for du command in limited shell with /root/.allow.du.cnf
    * Debian Bookworm and Devuan Daedalus support (needs further testing)
    * Full Drupal 10.2 support for install and upgrades from Drupal 9 and 10

@=> Improvements

    * Add control/enable-drush-sa.info for native drush sa command
    * Add hyperv qemu and kvm aws as supported
    * Add ltd-shell alias vdrush:vendor/bin/drush
    * Do not enforce newrelic_background_job(FALSE)
    * Document BOA planned features in the ROADMAP.txt
    * Document Drush usage in docs/DRUSH.txt
    * Make it clear that only Devuan Chimaera should be used in production
    * New Relic: Separate Web and Drush stats
    * Purge firewall deny rules before reboot for faster system restart
    * README rewrite and improvements

@=> Changes and Upgrades

    * Ægir D10 Platforms: 3x Drupal core 10.0.11
    * Ægir D10 Platforms: 3x Drupal core 10.1.8
    * Ægir D10 Platforms: 3x Drupal core 10.2.4
    * Ægir D10 Platforms: Social 12.2.2 with core 10.2.4
    * Ægir D10 Platforms: Thunder 
7.2.0 with core 10.2.4\n    * Ægir D10 Platforms: Varbase 9.1.1 with core 10.2.4\n    * Disable support for several built-in legacy D7 distros\n    * Do not enable /root/.fast.cron.cnf by default\n    * Drush 8.4.12.9\n    * Nginx 1.24.0\n    * Nginx: update ssl_ciphers remove 4 weak but leave 2 to support Safari 6-8\n    * OpenSSH 9.7p1\n    * OpenSSL LTS with 3.0.13 (prepare, optional)\n    * PHP 8.1.27\n    * PHP 8.2.17\n    * Redis 7.0.15\n    * Remove legacy Ubuntu support\n\n@=> Important Fixes\n\n    * Always revert to iptables-legacy from nf_tables\n    * Fix for broken cURL self-healing\n    * Fix for cURL/libcurl version conflict\n    * Force Nginx cold restart if status is locked\n    * Improve auto-healing for duplicate move_sql and mysql_backup\n    * Improve downgrade_protection\n    * Revert \"Sync /etc/security/limits.conf\"\n    * Update Drush yml sites aliases also for Ægir system user\n\n\n###\n### Stable BOA-5.1.0 - Full Edition\n### Date: Sat 04 Nov 2023 03:26:41 PM CET in Warsaw\n###\n### Documenting details in progress...\n###\n\n@=> New Features\n\n    * Automatically detect and add known web-root dir names on Add New Platform\n    * Lock Drush in any platform with Ægir task: Verify + Lock Drush\n    * Manage pid files in platforms web-root for Drush Lock/Unlock status\n    * Unlock Drush in any platform with new Ægir task: Unlock Local Drush\n\n@=> Improvements\n\n    * Document ~/static/control/FastTrack.info in docs/FASTTRACK.txt\n    * Improve BOA forks compatibility with standalone Ægir paths\n    * Improve tasks labels in the Ægir control panel\n    * Use Ægir backend built-in chmod for Unlock Drush w/o external scripts\n\n@=> Changes and Upgrades\n\n    * Ægir D10 Platforms: 3x Drupal core 10.1.6\n    * Ægir D10 Platforms: Social 12.0.0-rc3 with core 10.0.11\n    * Ægir D10 Platforms: Thunder 7.1.2 with core 10.1.6\n    * Ægir D10 Platforms: Varbase 9.0.16 with core 10.1.6\n    * Enable hosting_site_backup_manager Ægir extension by 
default again\n    * Fix permissions and ownership on every Platform Verify for Drupal 8/9/10\n    * OpenSSL 3.1.4\n    * PHP 8.1.25\n    * PHP 8.2.12\n\n@=> Important Fixes\n\n    * Added missing web-root paths in built-in platforms for Drupal 9/10\n    * Fix the ability to rename existing platforms in the Ægir control panel\n    * Multiple fixes for built-in permissions and ownership Ægir scripts\n\n\n###\n### Stable BOA-5.0.0 - Full Edition\n### Date: Thu 26 Oct 2023 09:55:22 PM CEST in Warsaw\n###\n### Documenting details in progress...\n###\n\n@=> New Features\n\n    * Add support for verbose Drush like 'drush -vvv @site status'\n    * Ægir in BOA is now fully compatible with PHP 8.1 and 8.2\n    * Do not purge cache tables listed in /root/.my.cache.exceptions.cnf\n    * Drupal 10 is fully supported (needs docs)\n    * Drupal 10 platforms available: Thunder, Varbase, Drupal 10.1 and 10.0\n    * Make system reboot much faster, also with 'boa reboot' command\n    * OpenSSL 3.x optional/test support with /root/.install.modern.openssl.cnf\n\n@=> Improvements\n\n    * Always install latest Composer on barracuda upgrade\n    * Enable ~/static/control/FastTrack.info by default (needs docs)\n    * Minimize services downtime on upgrade using soft reload only if possible\n    * Site Local Drush is no longer removed on platform Verify (only locked)\n    * Use 'barracuda php-idle disable' to speed up major upgrades\n\n@=> Changes and Upgrades\n\n    * Ægir D10 Platforms: 3x Drupal core 10.0.11\n    * Ægir D10 Platforms: 3x Drupal core 10.1.5\n    * Ægir D10 Platforms: Thunder 7.1.2 with core 10.1.5\n    * Ægir D10 Platforms: Varbase 9.0.16 with core 10.1.5\n    * Ægir D7 Platforms: Commerce 1.72 with core 7.98.1\n    * Ægir D7 Platforms: Commerce 2.77 with core 7.98.1\n    * Ægir D7 Platforms: Guardr 2.57 with core 7.98.1\n    * Ægir D7 Platforms: OpenOutreach 1.69 with core 7.98.1\n    * Ægir D7 Platforms: Opigno LMS 1.59 with core 7.98.1\n    * Ægir D7 Platforms: 
Panopoly 1.92 with core 7.98.1
    * Ægir D7 Platforms: Ubercart 3.13 with core 7.98.1
    * Ægir D9 Platforms: 3x Drupal 9.5.11
    * Ægir D9 Platforms: OpenLucius 2.0.0 with core 9.5.11
    * Ægir D9 Platforms: Opigno LMS 3.1.0 with core 9.5.11
    * Ægir D9 Platforms: Social 11.9.14 with core 9.5.11
    * BOA requires PHP 7.4 or newer as the default version
    * Change redis_perm_ttl from 6h to 24h
    * Do not include advagg/cdn in o_contrib_eight
    * Drupal 10: add minimum patch for core
    * Drupal 10: disable the not-yet-working welcome email on install
    * Drupal 10: fix compatibility and add missing code in Drush 8
    * Drupal 10: lock vendor/drush
    * Drupal 10: lock vendor/symfony/console/Input
    * Drupal 10: replace psr/log in core with Drush 8 version
    * Drush Launcher is no longer supported, so it has been removed
    * Enable /root/.fast.cron.cnf by default (needs docs)
    * Remove confusing -bin suffix from Drush 10+ (needs docs)
    * Set _PURGE_BACKUPS default to 14 or 7 on hosted BOA
    * Set Composer Install Support in Ægir Backend as disabled by default
    * The redis_use_modern is no longer optional in the INI files
    * Update vendor code in the Ægir backend / Provision
    * Use _STRONG_PASSWORDS=YES by default
    * Use _USE_MYSQLTUNER=NO by default

@=> Important Fixes

    * Do not enable redis on D7/D6 automatically; it works anyway
    * Fast DNS Cache Server (pdnsd) install is no longer optional since 2014 (!)
    * Fix for hosting_cron_queue() with ADV_CRON_MAX_PLL logic
    * Make sure that an expired password will not hang a backend task
    * Nginx: Add missing no-cache checks from @cache to @drupal
    * Nginx: Move exceptions to the /index.php location
    * Nginx: The css/js aggregation logic has changed in Drupal 10.1


###
### Cutting Edge BOA-5.0.0-dev - Initial Edition
### Date: Sat 06 May 2023 08:42:31 AM EEST in Kyiv
### Слава Україні!
###
### Documenting details in 
progress...\n###\n\n@=> New Features\n\n    * Add 'barracuda php-idle disable/enable' (needs docs)\n    * Automatic BOA System Major Upgrade Tool -- see docs/UPGRADE.txt\n    * Debian Bullseye and Buster support\n    * Devuan Chimaera and Beowulf support (systemd-free Debian alternative)\n    * Make Composer running with PHP defined in ~/static/control/cli.info\n    * Make PHP-CLI for Composer and Drush configurable on the fly (needs docs)\n    * New multi-step BOA install procedure -- see docs/INSTALL.txt\n    * PHP 8.2 support\n\n@=> Major Improvements\n\n    * Barracuda first upgrade after boa install no longer requires reboot\n    * Use all available CPU cores for much faster PHP, Nginx, OpenSSL etc builds\n\n@=> Important Changes\n\n    * BOA requires the classic network interface naming convention (needs docs)\n    * Disable all nightly codebase cleanup procedures\n    * Nginx: Add PATCH to allowed $request_method list\n    * Nginx: Remove deprecated upload_progress support\n    * Remove AdvAgg and CDN from D9+ o_contrib\n    * Rewrite the _PHP_MULTI_INSTALL cleanup to make it optional (needs docs)\n    * Stop running any Drush operations on Drupal 8+ in daily.sh\n    * Switch to Redis Server 7.x by default\n    * The php-all should no longer include 7.3 and older versions (needs docs)\n    * Ubuntu support is deprecated\n    * Use php-max to install ALL nine (9) PHP versions (needs docs)\n\n@=> Important Fixes\n\n    * Discover the system IPv4 once and store in a file\n    * Fix several issues with ~/static/control/MyQuick.info logic\n    * Maintain csf.allow/ignore backup on serial update in /var/backups/csf/\n    * Nginx: Fix protected access to /update.php\n    * Nginx: Protect composer.json if exists in the Drupal web-root\n\n\n###\n### NEW BOA-4.2.0-stable - Full Edition\n### Date: Sat 06 May 2023 07:42:19 AM EEST in Ivano-Frankivsk\n### Слава Україні!\n###\n### Documenting details in progress...\n###\n\n@=> New Features\n\n    * Add 'barracuda php-idle 
disable/enable' (needs docs)\n    * Automatic BOA System Major Upgrade Tool -- see docs/UPGRADE.txt\n    * Debian Bullseye and Buster support\n    * Devuan Chimaera and Beowulf support (systemd-free Debian alternative)\n    * Make Composer running with PHP defined in ~/static/control/cli.info\n    * Make PHP-CLI for Composer and Drush configurable on the fly (needs docs)\n    * New multi-step BOA install procedure -- see docs/INSTALL.txt\n    * PHP 8.2 support\n\n@=> Major Improvements\n\n    * Barracuda first upgrade after boa install no longer requires reboot\n    * Use all available CPU cores for much faster PHP, Nginx, OpenSSL etc builds\n\n@=> Important Changes\n\n    * BOA requires the classic network interface naming convention (needs docs)\n    * Disable all nightly codebase cleanup procedures\n    * Remove AdvAgg and CDN from D9+ o_contrib\n    * Rewrite the _PHP_MULTI_INSTALL cleanup to make it optional (needs docs)\n    * Stop running any Drush operations on Drupal 8+ in daily.sh\n    * Switch to Redis Server 7.x by default\n    * The php-all should no longer include 7.3 and older versions (needs docs)\n    * Ubuntu support is deprecated\n    * Use php-max to install ALL nine (9) PHP versions (needs docs)\n\n@=> Important Fixes\n\n    * Discover the system IPv4 once and store in a file\n    * Maintain csf.allow/ignore backup on serial update in /var/backups/csf/\n\n\n###\n### Stable BOA-4.1.4-rel - Full Edition\n### Date: Fri Dec 10 22:30:49 CET 2021 in Warsaw\n###\n### Documenting details in progress...\n###\n\n@=> New Features\n\n    *\n    *\n    *\n\n@=> Major Improvements\n\n    *\n    *\n    *\n\n@=> Important Changes\n\n    *\n    *\n    *\n\n@=> Important Fixes\n\n    *\n    *\n    *\n\n\n### Stable BOA-4.1.3 Release - Full Edition\n### Date: Thu Sep 24 18:51:49 CEST 2020\n### Milestone URL: https://github.com/omega8cc/boa/milestones/4.1.3\n\n# Release Notes:\n\n  This BOA release is a second transitional release before switching to rolling\n  
release policy. Detailed changelog will follow.

  This BOA update provides the latest PHP versions, system updates including
  security fixes, many bug fixes, and the latest Ægir version, but no Ægir
  platforms are installed by default anymore, unless their keywords are listed
  in the file

    ~/static/control/platforms.info (please read further below for details)

  TL;DR
  * Yes, blazing fast site clone/migrate mode is available even for giant sites!
  * Yes, BOA still supports Pressflow 6 (LTS version only!)
  * No, we no longer install any supported distros as platforms by default.

@=> Super fast site cloning and migration mode (NEW!)

  It is now possible to enable blazing fast migrations and cloning, even for
  sites with complex and giant databases, with this empty control file:

    ~/static/control/MyQuick.info

  By the way, how fast is the super-fast? It's faster than you would expect!
  We have seen it speed up the clone and migrate tasks normally taking
  1-2 hours to... even 3-6 minutes! 
Yes, that's how fast it is!

  If this file exists, it enables a super fast per-table and parallel DB
  dump and import, but without leaving a conventional complete database
  dump file in the site archive normally created by Ægir when you run
  not only the backup task, but also the clone, migrate and delete tasks;
  as a result, the restore task will no longer work.

  We need to emphasise this again: with this control file present, all
  normally super slow tasks become blazing fast, but at the cost of not
  keeping an archived complete database dump file in the archive of the
  site directory where it would otherwise be included.

  Of course, the system still maintains nightly backups of all your sites
  using the new split sql dump archives, but with this control file present
  you won't be able to use the restore task in Ægir, because the site archive
  won't include the database dump. You can still find that sql dump, split
  into per-table files, in the backups directory, in the subdirectory with
  a timestamp added, so you can still access it manually if needed.

@=> Drupal platforms and Composer support

  We no longer install any supported Drupal distros as platforms by default,
  but you can customize the Octopus platform list via a control file, which
  will be used on the next Octopus upgrade (you can request it individually
  if you are on the hosted Ægir service):

    ~/static/control/platforms.info

  If this file exists and contains a list of symbols used to define supported
  platforms, it allows you to control/override the value of the _PLATFORMS_LIST
  variable normally defined in the /root/.${_USER}.octopus.cnf file, which
  can't be modified by the Ægir instance owner without system root access.

  IMPORTANT: If used, it will replace/override the value defined on initial
  instance install and all previous upgrades. 
It takes effect on every future
  Octopus instance upgrade, which means that you will miss all newly added
  distributions if they are not also listed in this control file.

  Supported values which can be written in this file, either on a single line
  or one per line:

  Drupal 9 based

    THR ----------- Thunder

  Drupal 8 based

    LHG ----------- Lightning
    OPG ----------- Opigno LMS
    SOC ----------- Social
    VBE ----------- Varbase

  Drupal 7 based

    D7P D7S D7D --- Drupal 7 prod/stage/dev
    AGV ----------- aGov
    CME ----------- Commerce v.2
    CS7 ----------- Commons
    DCE ----------- Commerce v.1
    GDR ----------- Guardr
    OA7 ----------- OpenAtrium
    OAD ----------- OpenAid
    OLS ----------- OpenLucius
    OOH ----------- OpenOutreach
    OPC ----------- OpenPublic
    OPO ----------- Opigno LMS
    PPY ----------- Panopoly
    RST ----------- Restaurant
    UC7 ----------- Ubercart

  Drupal 6 based

    D6P D6S D6D --- Pressflow (LTS) prod/stage/dev
    DCS ----------- Commons
    UCT ----------- Ubercart

  You can also use the special keyword 'ALL' instead of any other symbols to
  have all available platforms installed, including those newly added in all
  future BOA system releases.

  Examples:

    ALL
    LHG VBE D7P D7S D7D

  Composer will now use PHP 7.3 by default, and you can find many useful
  hints at:
    https://github.com/omega8cc/boa/blob/master/docs/COMPOSER.txt

  IMPORTANT: You must switch your ~/static/control/cli.info to PHP 7.2 or
  newer (BOA hosted on Omega8.cc comes with 7.4, 7.3 and 7.2), because
  D8 based distros require at least PHP 7.2. This also means that to run
  the sites installed after switching cli.info to 7.2 or newer, you will
  also need to either switch your ~/static/control/fpm.info to 7.2 or newer,
  or, more likely, to not break any existing sites incompatible with
  PHP 7.2+, you will need to list these D8 site 
names in ~/static/control/multi-fpm.info

  Please check for more information:
    https://learn.omega8.cc/how-to-quickly-switch-php-to-newer-version-330

  BOA supports Drupal 8 codebases both with a classic directory structure,
  like in Drupal 7, and as Drupal 8 distros you can download from Drupal.org,
  but if you use a Composer based codebase with a different structure, the
  platform path is not the codebase root directory, but the subdirectory where
  you see Drupal's own index.php and "core" subdirectory. It can be
  platform-name/web or platform-name/docroot or something similar, depending
  on the distro design.


### Stable BOA-4.1.2 Release - Full Edition
### Date: Tue Sep 22 05:30:08 CEST 2020
### Milestone URL: https://github.com/omega8cc/boa/milestones/4.1.2

# Release Notes:

  This BOA release is a transitional release before switching to a rolling
  release policy. A detailed changelog will follow.


### Stable BOA-4.0.1 Release - Full Edition
### Date: Mon May  6 01:14:59 CEST 2019
### Milestone URL: https://github.com/omega8cc/boa/milestones/4.0.1

# Release Notes:

  This BOA release provides three new PHP versions and system updates,
  including security fixes, many bug fixes, the latest Ægir version, plus all
  included Drupal distributions updated to their latest versions and supplied
  with the latest Drupal 7 or Drupal 8 core, if possible. Yes, BOA still
  supports Pressflow 6. Yes, Debian Stretch is supported. No newer Ubuntu
  releases are supported yet. Yes, we have added Solr 7 support and updates
  every 5 minutes!

  Four popular Drupal 8 based distributions have been included by default,
  plus much improved Composer support and automatic permissions-fix-magic
  on Platform and Site Verify tasks.
No more manual fixes!

  By the way, Composer will now use PHP 7.3 by default, and you can find
  many useful hints at:
    https://github.com/omega8cc/boa/blob/master/docs/COMPOSER.txt

  Big improvements and changes are coming to (auto)managing Solr cores too!
  Solr cores are now created every 5 minutes if needed, instead of only
  during the nightly procedure, and Solr 7 is used by default. Existing Solr 4
  cores will continue to work as before, but the system will create new Solr 7
  cores for all compatible sites, and will update the sites/foo.com/solr.php
  accordingly. For existing Solr 4 cores there can be namespace conflicts,
  so please make sure to check the updated sites/foo.com/solr.php file and
  adjust your site configuration if needed.

  Note: If you are using WinSCP and/or Putty on Windows, or Transmit/Coda
  by Panic on a Mac, please check the Known Issues section at the bottom of
  these BOA-4.0.1 release notes.

@=> Solr 7 and Solr 4 support changes and improvements

  Both Solr 7 and Solr 4, powered by the Jetty 9 server, are available.
Supported
  integration modules are limited to the latest versions of either
  search_api_solr (D8/Solr7 and D7/Solr7) or apachesolr (D7/Solr4 and
  D6/Solr4).

  Currently supported versions are listed below:

    https://ftp.drupal.org/files/projects/search_api_solr-8.x-2.7.tar.gz
    https://ftp.drupal.org/files/projects/search_api_solr-7.x-1.14.tar.gz
    https://ftp.drupal.org/files/projects/apachesolr-7.x-1.11.tar.gz
    https://ftp.drupal.org/files/projects/apachesolr-6.x-3.1.tar.gz

  Note that you still need to add the preferred integration module, along with
  any of its dependencies, to your codebase, since this feature doesn't modify
  your platform or site - it only creates a Solr core with the configuration
  files provided by the integration module: schema.xml, solrconfig.xml, etc.

  Important: search_api_solr-8.x-2.x is different from all previous versions,
  as it requires Composer to install the module and its dependencies. You will
  then need to configure it, and only then will you be able to generate
  customized Solr core config files, which you should upload to the path:
  sites/foo.com/files/solr/ and wait 5-10 minutes to have them activated
  on the Solr 7 core the system will create for you.

  This is handled by the auto-installer running every 5 minutes, hence there
  is no need to wait until the next morning to be able to use the new Solr
  core. Win!

  Once the Solr core is ready to use, you will find a special file in your
  site directory: sites/foo.com/solr.php with details on how to access
  your new Solr core with the correct credentials.

  Side note: the sites/foo.com/solr.php will be automatically deleted on every
  site Verify task in Ægir, to prevent copying it across with incorrect
  access credentials when you clone the site.
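
  As a minimal sketch of the upload step described above (run from the
  platform docroot; "foo.com" and the file contents are placeholders for
  the config set generated by search_api_solr):

```shell
# Sketch: stage customized Solr core config files where the
# every-5-minutes auto-installer looks for them.
# "sites/foo.com" is a placeholder site directory.
mkdir -p sites/foo.com/files/solr
# Stand-ins for the real exported config set (schema.xml, solrconfig.xml):
echo '<!-- exported schema stand-in -->' > sites/foo.com/files/solr/schema.xml
echo '<!-- exported config stand-in -->' > sites/foo.com/files/solr/solrconfig.xml
ls sites/foo.com/files/solr
# Within 5-10 minutes the system should create the Solr 7 core and
# write access details to sites/foo.com/solr.php
```
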
As soon as the site is verified,
  its sites/foo.com/solr.php will get re-created automatically within 5-10
  minutes, and the cloned site will also get its own Solr core created.

  For more details please check the docs at:
    https://github.com/omega8cc/boa/blob/master/docs/SOLR.txt

@=> Drupal 8.7.0 platforms and Composer support

  Since BOA-4.0.1, new Drupal 8.7.0 based platforms are included:

  Lightning 3.3.0 -------------- https://drupal.org/project/lightning
  Thunder 8.2.39 --------------- https://drupal.org/project/thunder
  Varbase 8.6.8 ---------------- https://drupal.org/project/varbase
  Social 8.5.1 (8.6.15 core) --- https://drupal.org/project/social

  IMPORTANT: You must switch your ~/static/control/cli.info to PHP version 7.1
  or newer (BOA hosted on Omega8.cc comes with 7.1, 7.2 and 7.3), because
  D8 based distros require at least PHP 7.1. This also means that to run
  the sites installed after switching cli.info to 7.1 or newer, you will
  also need to either switch your ~/static/control/fpm.info to 7.1 or newer
  or, more probably (to avoid breaking any existing sites not compatible
  with PHP 7.1+), to list these D8 site names in
  ~/static/control/multi-fpm.info

  Please check for more information:
    https://learn.omega8.cc/how-to-quickly-switch-php-to-newer-version-330

  BOA supports Drupal 8 codebases both with a classic directory structure,
  like in Drupal 7, and as Drupal 8 distros you can download from Drupal.org,
  but if you use a Composer based codebase with a different structure, the
  platform path is not the codebase root directory, but the subdirectory
  where you see Drupal's own index.php and "core" subdirectory.
It can be platform-name/web or
  platform-name/docroot or something similar, depending on the distro design.

  As you may have discovered if you have already tried, the path you should
  use in Ægir when adding a Composer based codebase as a platform is the
  directory where index.php resides, so effectively anything above that
  directory is not available for web requests and thus safely protected.

  The information from the Ægir project docs saying "When verifying a
  platform, Ægir runs composer install if a composer.json file is found."
  doesn't apply to BOA. We have disabled this. There are several reasons,
  most importantly:

  a/ having this feature enabled actually works against the codebase
  management workflow in Ægir, because it may modify the codebase of a live
  site,

  b/ some tasks launch verify many times during clone and migrate, which
  results in giant overhead and conflicts if we allowed it to run composer
  install many times in parallel,

  c/ from our experience, having this poorly implemented feature enabled
  breaks clone and migration tasks between platforms when both have the
  composer.json file. It just doesn't make any sense in our opinion. The
  implementation should be improved to make it actually work similarly to
  Drush Makefiles.

  You should think about Composer as a Drush Make replacement, and you
  should not re-build or upgrade the codebase on a platform with sites
  already hosted. Just use it to build new codebases and then add them as
  platforms once the build works without errors.

@=> Important PHP versions availability changes

  Still on PHP 5.6?
You should switch to PHP 7.3 -- it's twice as fast as 5.6!
  But don't switch blindly -- even sites that were already running on PHP 7.0
  are most probably not ready for PHP 7.2 or 7.3 without proper fixes.

  Note: the BOA-4.0.1 release removes PHP 5.3, 5.4 and 5.5, if installed.
  In addition to the still supported, even if officially deprecated, 5.6 and
  7.0 versions, this release adds support for PHP 7.3, 7.2 and 7.1.

  Please check the PHP officially supported versions list at:
    http://php.net/supported-versions.php

  In our limited testing, the Drupal 7 core version included in this release
  works without noticeable issues with both PHP 7.2 and 7.3, although many
  contrib modules may not be ready for switching your instance to 7.3 or 7.2
  just yet, especially if you have not already used PHP 7.0.

  We recommend testing clones of your sites with newer PHP versions, using
  the BOA multi-PHP-version support via ~/static/control/multi-fpm.info,
  before switching your instance to use 7.3 or 7.2 by default.

  Please check for more information:
    https://learn.omega8.cc/how-to-quickly-switch-php-to-newer-version-330

  We still include Pressflow 6 platforms, because in the meantime the LTS
  community support made the latest Pressflow 6 version compatible with
  PHP 7.2.

  If you still have a reason to use Drupal 6 core, we recommend using
  our version: https://github.com/omega8cc/pressflow6/tree/pressflow-plus

@=> BOA release policy changes

  In the over 15 months since the BOA-3.2.2 release, we have tested a more
  agile approach, with a Rolling Release policy for the BOA system part known
  as Barracuda.

  We have implemented many changes and updates only in BOA HEAD and used
  the carefully tested HEAD in production.
This worked flawlessly and allowed us
  to keep all BOA hosted and maintained systems continuously updated without
  waiting for a stable release.

  The BOA project is very complex, built atop many packages and components
  individually built from sources, plus other projects like Ægir, Drush and
  Drupal core and distributions -- each of them with their own release policy.

  After years of effort to keep a healthy balance between providing necessary
  upgrades and avoiding BOA users' maintenance fatigue due to frequent
  releases -- which usually results in skipped releases and has many adverse
  effects, including the requirement to keep new versions backward compatible
  with 2-3 year old releases -- we have decided that it's time to introduce
  a Rolling Release policy for Barracuda, while still using a standard point
  release policy for the Octopus installer, which covers Ægir, Drush and
  Drupal platform updates.

  We will still use point releases for Barracuda when major changes are
  introduced, like deprecating old PHP versions or changing components
  like Let's Encrypt integration agents or methods.

  The BOA project docs will be updated to reflect these changes once another
  standard point release is made either for Octopus, Barracuda or both.
  The docs will explain how to run Barracuda system continuous updates
  properly.

  # New features:

    * Add auto-cleanup for empty old platforms in /var/aegir
    * Add experimental support for autoslave and cache_consistent
    * Add initial Composer docs
    * Add jsmin support for PHP7 #1250
    * Add mongodb extension for PHP 7 and Drupal 8.2.x support #1127
    * Add redis_oom_check() to monitoring
    * Add set_composer_manager_vendor_dir INI variable
    * Add support for include/exclude filelist for duplicity.
#1159\n    * Add support for Percona 5.7 and use MariaDB 10.1 by default\n    * Add UTF8MB4 Convert Drush extension #1047\n    * Automatically check and remove drush from codebase\n    * Debian Stretch support #1176\n    * Do not run Verify daily if ~/static/control/noverify.info exists\n    * Install ClamAV daemon by default\n    * PHP 7.3, 7.2 and 7.1 Support #1126\n    * Run manage_solr_config.sh every 5 minutes\n    * Update Solr with BOA #1305\n    * Use _DB_BACKUPS_TTL variable for local and cluster db backups rotation\n\n  # Changes:\n\n    * Add innodb_default_row_format = dynamic — fixes #1366\n    * Advanced Nginx microcaching to improve cache HITs #1271\n    * Change to dashes in bucket names and upgrade boto/duplicity #1247\n    * Create fpm.info and cli.info ctrl files on Octopus install\n    * Deprecate MariaDB 5.5 and force 10.1 instead\n    * Enable uploadprogress.so for testing on PHP 7+\n    * Force Composer to use PHP 7.2 if available #1213\n    * Higher PHP CLI limits to make Composer happy\n    * Increase default TTLs to make BOA more friendly for big sites\n    * Make DNS Cache Server pdnsd optional -- needs DCS keyword in _XTRAS_LIST\n    * Minimum 4 GB RAM and 2 CPU (with Solr minimum 8 GB RAM and 4+ CPU rec.)\n    * Re-verify LE enabled sites daily\n    * Refresh the tasks list more frequently\n    * Remove deprecated PHP versions #801\n    * Remove problematic opcache.fast_shutdown\n    * Remove ultimate_cron and background_process from the blacklist\n    * Replace Google DNS servers for Cloudflare DNS servers #1317\n    * Replace the complex public IP detection with an external API #1089\n    * Set PHP CLI to FPM version if only FPM is defined\n    * SQL: disable innodb_adaptive_hash_index by default\n    * Upgrade imagick to 3.4.3 for PHP7 support #1253\n    * Use /root/.backboa.autoupdate by default\n    * Use utf8mb4/utf8mb4_general_ci by default\n\n  # System upgrades:\n\n    * Adminer 4.7.0\n    * CSF/LFD 12.10\n    * Drush 8.2.3.1\n 
   * Galera 10.0.37\n    * Lshell 0.9.18.9\n    * MariaDB Server 10.1.39\n    * MariaDB Server 10.2.19\n    * MariaDB Server 10.3.14\n    * MySQLTuner 1.7.15\n    * Nginx 1.16.0\n    * Node.js v10.x LTS\n    * OpenSSH Server 8.0p1\n    * OpenSSL 1.0.2r for Nginx\n    * PHP 7.3.5, 7.2.18, 7.1.29, 7.0.33, 5.6.40\n    * PHP Redis extension 4.2.0\n    * Pure-FTPd 1.0.49\n    * Redis Module 8.x mod-05-02-2019\n    * Redis Server 4.0.14\n    * Ruby 2.6.0\n    * Use latest Duplicity and dependencies\n\n  # Fixes:\n\n    * Add fix_ping_perms()\n    * Add libzip-dev to satisfy PHP 7.3 requirements\n    * Add nginx config to mitigate SA-CORE-2018-002\n    * Add patches for CORE-2018-004 and SA-CORE-2018-002\n    * Add procedure satellite_fix_broken_entity_module()\n    * Add re_set_default_php_cli() procedure\n    * Add redis_slow_check()\n    * Ajax 200 parsererror on every Drupal site #1344\n    * Avoid potentially problematic --force-yes for apt-get\n    * Backboa AWS S3 backup integration no longer working #1138\n    * Backboa not installed #1310\n    * Backboa: Certificate error #1141\n    * Cannot switch php-cli, cannot create varbase composer project. 
#1308
    * Check sshd not ssh version
    * CiviCRM 4.7 not working under BOA #1223
    * Crawlers see 403 on public path #1329
    * Debian 9 (Stretch) _apt user + _STRICT_BIN_PERMISSIONS errors #1352
    * Do not lock old/all hostmaster platforms automatically
    * Downgrade MySecureShell until we can figure out compatibility issues
    * Errors using site with CiviCRM #1304
    * Extra cleanup for any codebase level drush copy
    * Fix for empty old hostmaster platforms cleanup
    * Fix for incomplete logic in multi-fpm mode
    * Fix for jessie-backports
    * Fix SA-CORE-2018-006 for D8 and D7
    * Fix the site specific composer_manager dir also for D8
    * Fix to include gitlab.com in ~/.ssh/known_hosts
    * Improve gpg keys handling
    * Improve pdnsd self-repair procedures
    * Infinite loop on INFO: Retrieving F1656F24C74CD1D8 key.. #1323
    * Known issues with contrib module Redirect in Drupal 8 and BOA #1239
    * Make sure redis-server is up immediately after upgrade
    * Make sure that ~/.rvmrc is fixed
    * Make sure that composer permissions are fixed
    * Make sure that the ownership on static/control is correct
    * Make sure to fix Redis permissions
    * Nginx: the "ssl" directive is deprecated since 1.15.0
    * No live certificates from Let's Encrypt #1255
    * No Web Server is added when BOA is installed locally #1306
    * PSA-2018-003: Drupal core security release #1283
    * Remove deprecated option UsePrivilegeSeparation if exists
    * Restore Jessie default apt mode on Stretch+
    * Solr dir is not defined in setup_solr() #1370
    * SSHD - use without-password for backward compatibility
    * Switching out DNS servers caused breakage #1318
    * Sync permissions fix on platform verify for D8, D7, D6
    * Sync Solr 7 memory management logic
    * The _PERMISSIONS_FIX var gets overridden to YES on a daily basis #1311
    * The innodb_lazy_drop_table has been deprecated in Percona
    * 
Update ~/static/control/README.txt if needed
    * Update boa info [more] for current years #1248
    * Update lshell.config to not break valid D8 specific Drush commands
    * Update, sync and de-duplicate Zend OPcache config directives
    * Updated default robots.txt #1172
    * Use gpg2 directly instead of deprecated apt-key
    * Use IP directly as a last fallback
    * xboa migrate Solr 7 data #1376

  # Known issues:

    * SSH/SFTP WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
      In short, nothing to worry about, but please read on how to fix this:
        https://learn.omega8.cc/2019-remote-host-identification-ssh-388

    * PHP 7.1+ can't be installed w/ MariaDB 10.2+ until compatibility is fixed:
        https://jira.mariadb.org/browse/MDEV-14555

    * Existing Solr 4 cores may experience namespace conflicts.
      Please make sure to check the updated sites/foo.com/solr.php file
      and adjust your site configuration if needed.

    * Error decoding SFTP packet -- affects WinSCP/Putty
      We recommend using Cyberduck for reliable SFTP access.
      For a known fix please check https://bit.ly/2HMGd6u -- quote below:
      >>>>>
        Basically we need to set the ‘Preferred SFTP protocol version’ to 3.
        How to do this:
          Edit the connection in WinSCP
          Open the Advanced menu
          Choose Advanced
          This will bring up a new popup.
          Under Environment click on SFTP
          Change ‘Preferred SFTP protocol version’ to 3
          Save the changes.
      >>>>>

      * SFTP connection doesn't work with either Transmit or Coda by Panic.
        We have not figured out a workaround yet, so we recommend using
        working alternatives on a Mac, like Cyberduck or ForkLift.

      * The filefield_nginx_progress module, which has been deprecated for
        years, no longer works and breaks upload fields.
The module has been removed
        from the supported modules list, and will be automatically disabled
        daily if active in any D7 site, so we recommend using the similar
        current alternative (even if not as fancy) now included by default:
          https://www.drupal.org/project/file_resup


### Stable BOA-3.2.2 Release - Full Edition
### Date: Sat Jan 20 11:03:34 PST 2018
### Milestone URL: https://github.com/omega8cc/boa/milestones/3.2.2

# Release Notes:

  This BOA release provides system security upgrades, many bug fixes, the
  latest Ægir version, plus all supported Drupal distributions updated
  to their latest versions and supplied with the latest Drupal 7 core, if
  possible. Thanks to Drush 8.1.15-dev we also support the newest Drupal
  8.4.4 core.

@=> Important changes planned in the next BOA feature release

  BOA-3.2.2 is the last release still supporting PHP 5.3, 5.4 and 5.5.
  These versions will be *removed* in the next release, and support for
  PHP 7.1 and 7.2 will be added instead.

  Future releases will no longer include Pressflow 6 platforms, but Pressflow 6
  will be fully supported, and can still use PHP 5.6 -- we recommend using
  our version: https://github.com/omega8cc/pressflow6/tree/pressflow-plus

  # Changes:

    * Add support for WOFF 2.0
    * Commerce 2.51
    * Guardr 2.40
    * OpenAtrium 2.624
    * Panopoly 1.49

  # System upgrades:

    * Adminer 4.3.1
    * Galera 10.0.33
    * MariaDB 10.1.30
    * MariaDB 10.2.12
    * MariaDB 5.5.59
    * Nginx 1.13.8
    * OpenSSL 1.0.2n (used only in Nginx)
    * PHP 5.6.33
    * PHP 7.0.27
    * PHP extension for Redis 3.1.6
    * Pure-FTPd 1.0.49
    * Redis Server 4.0.6
    * Ruby 2.4.2
    * Use Redis integration mod-30-12-2017 (D7)

  # Fixes:

    * Add mongo to the list of permissions exceptions, if installed
    * Do not delete empty platforms if ~/static/control/platforms.info is used
    * Do not 
restart Redis daily if /root/.high_traffic.cnf exists
    * Fix Drupal 8 detection for distros with vendor dir moved out of docroot
    * Fix requirements for the latest compass version
    * Hints config update
    * LE not renewing expired certificates due to IPv6 DNS entries -- #1179
    * Notifications about new BOA editions are sent to notify@omega8.cc -- #1219
    * Override fastcgi_params to make geoip headers work again
    * Redirect module conflict with manual cron execution in D8 -- #1215
    * Remove hmac-ripemd160 MAC, deprecated in OpenSSH 7.6 -- #1217
    * The _SSH_ARMOUR=YES not compatible with OpenSSH 7.6 -- #1218
    * Update keys for rvm.io
    * Update LE License to LE-SA-v1.2-November-15-2017.pdf
    * Use advagg-7.x-2.30
    * Use modified rvm-installer.sh for user-level installations
    * Use reroute_email-7.x-1.3
    * Use rvm_silence_path_mismatch_check_flag=1


### Stable BOA-3.2.1 Release - Full Edition
### Date: Sat Oct  7 19:58:53 PDT 2017
### Milestone URL: https://github.com/omega8cc/boa/milestones/3.2.1

# Release Notes:

  This BOA release provides system security upgrades, many bug fixes, the
  latest Ægir version, plus all supported Drupal distributions updated
  to their latest versions and supplied with the latest Drupal 7 core, if
  possible. Thanks to Drush 8.1.15-dev we also support the newest Drupal
  8.4 core.

@=> Important changes planned in the next BOA release

  BOA-3.2.1 is the last release still supporting PHP 5.3, 5.4 and 5.5.
  These versions will be *removed* in the next release, and support for
  PHP 7.1 and 7.2 will be added instead.

  Future releases will no longer include Pressflow 6 platforms, but Pressflow 6
  will be fully supported, and can still use PHP 5.6 -- we recommend using
  our version: https://github.com/omega8cc/pressflow6/tree/pressflow-plus

@=> Drupal 6 vanilla core is deprecated starting with BOA-3.2.1

  Drupal 6 vanilla core is no longer 
supported. It was never really supported,
  but it could still work. Those running Drupal 6 instead of the supported
  Pressflow 6 will notice that their site displays only the homepage, and all
  links/menus no longer display the expected content. This change is a result
  of a new rewrite in the Nginx configuration, required to properly support
  both Drupal 8 and Drupal 7. Time to migrate to the latest Pressflow 6,
  included in this release!

  # Changes:

    * Add chained commands to forbidden list in lshell
    * Add Nginx Headers More module support
    * Add support for --include/exclude-filelist for duplicity -- #1158
    * Add support for upcoming MariaDB 10.2
    * Auto-update duplicity if installed
    * Deny bots on non-prod domains, not only on aliases -- #1178
    * Do not pause the tasks queue during mysql backup
    * Do not truncate queue and accesslog tables by default
    * Enable New Relic integration for PHP 7.0
    * Install ipset to improve CSF performance
    * mongodb.so for D8.2 and PHP7.0 -- #1128
    * Run 3 queue tasks in parallel by default
    * Use redis_scan_enable = FALSE by default

  # System upgrades:

    * CSF 10.22
    * Drush micro-8-07-10-2017
    * Galera 10.0.32
    * MariaDB 10.1.28
    * MariaDB 10.2.9
    * MariaDB 5.5.57
    * Nginx 1.13.5
    * Node 6.x version bump -- #1129
    * OpenSSH 7.6p1
    * OpenSSL 1.0.2l (used only in Nginx)
    * PHP 5.6.31
    * PHP 7.0.24
    * PHP extension for Redis 3.1.4
    * Pure-FTPd 1.0.46
    * Redis Server 4.0.2
    * Update Redis module for Drupal 8
    * Upgrade drush to support Drupal 8.4 -- #1206
    * Upgrade wkhtmltopdf and wkhtmltoimage to 0.12.4

  # Fixes:

    * Add SSH (RSA) keys how-to
    * Add support for tar.xz archives
    * Add symlink suggested in #999
    * Allow a bit higher load limits for queue runner
    * Barracuda is not installing ipset so csf doesn't work -- #1203
    * Deprecate no longer working distros
    * 
Disable innodb_corrupt_table_action in 10.2\n    * Do not enable entitycache in the Commons distro\n    * Exclude special https.* proxy vhosts from daily cleanup\n    * Fix permissions on password files for HTTP Basic Auth -- #1187\n    * Fix syntax and race conditions in fire/water\n    * Galera compatibility: do not edit mysql.user directly\n    * Improve CSF race conditions protection\n    * Improve default system cron queue\n    * Improve repo.psand.net/pubkey update\n    * Improved PHP OPCache default configuration\n    * Linux kernel CVE-2017-2636 hotfix\n    * Linux kernel CVE-2017-6074 hotfix\n    * Make sure that not supported tools are not re-installed on VServer\n    * Move excludes first as they are more specific than includes -- #1168\n    * PHP not installed after Wheezy to Jessie upgrade -- #999\n    * Redirect module breaks Drupal 8 sites in BOA if present -- #1061\n    * Remove --numeric-ids option from xboa -- #1146\n    * Restart DB server on upgrade only if config has changed\n    * Run fast enough fire.sh again\n    * Silence mysql cleanup output -- #1180\n    * Site in subdirectory cookie is not set correctly -- #1211\n    * Sync PHP disable_functions across all versions\n    * Update default robots.txt -- #1172\n    * Use --skip-add-locks — Galera Cluster compatibility\n    * Use absolutely graceful MySQLD restart procedure\n    * VServer 4.1.42-vs2.3.8.6-beng compatibility\n    * Wait for MySQLD availability before running DB backup\n    * Whitelist known search engines bots IPs\n\n\n### Stable BOA-3.2.0 Release - Full Edition\n### Date: Sun Feb 26 09:11:39 PST 2017\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.2.0\n\n# Release Notes:\n\n  This BOA release provides many new features, system security upgrades, many\n  improvements and bug fixes, latest Ægir version, plus all supported Drupal\n  platforms updated to latest versions, and supplied with latest Drupal 7 core,\n  if possible.\n\n  The reason we list here also new 
features and changes already listed in
  the previous BOA-3.1.4 version is that they were supposed to be included in
  this (3.2.0) release, since we normally don't include new features in bugfix
  releases, but we had to publish more bugfix/security releases in the 3.1.x
  series than initially expected, while new features were already pushed to
  HEAD in anticipation of the delayed 3.2.0 release.

  We have also moved some new features originally intended for the (3.2.0)
  release to the next 3.3.0 milestone, which is expected about one month
  after the 3.2.0 release.

@=> Magic permissions fix now happens on-the-fly

  The most interesting new Ægir feature is probably the ability to fix file
  permissions and ownership on any site and platform, without waiting for
  the daily magic fix to run. Now it happens on-the-fly, when you run normal
  platform and site Verify tasks.

@=> MariaDB 10.1 is now the new default version

  If you are already running 10.0, BOA will upgrade it to _DB_SERIES=10.1,
  but if you still run _DB_SERIES=5.5 it will continue to use MariaDB 5.5
  on your system (not recommended).

# New features and enhancements:

  * Add Microsoft Hyper-V to supported virtualization systems
  * Add support for _HOURLY_DB_BACKUPS=YES via Percona XtraBackup
  * Add support for ‘boa version’ command
  * Add support for /root/.my.batch_innodb.cnf weekly procedure
  * Add support for /root/.my.restart_after_optimize.cnf procedure
  * Add support for fix_ownership and fix_permissions on-the-fly
  * Add support for latest 3.18.44-vs2.3.7.5-beng VS kernel
  * Add support for latest 4.1.33-vs2.3.8.5.2-beng VS kernel
  * Add support for the Open Lucius Distribution to Ægir -- #888
  * Add support for the Opigno LMS Distribution to Ægir -- #953
  * Automatically whitelist CloudFlare and Sucuri IPs (faster version)
  * Bundle Opigno LMS dependencies: TinCanPHP and pdf.js
  * Configure _INNODB_LOG_FILE_SIZE automatically
 * Docs for Twig Debugging in Drupal 8.2.x and BOA #1085
  * Improve InnoDB performance
  * Improve Let's Encrypt docs
  * Include advagg, cdn, and robotstxt in o_contrib_eight -- #1096
  * Install ClamAV and RKhunter by default -- #1019
  * Make boost cache clearing configurable via _CLEAR_BOOST variable -- #1115
  * MariaDB 10.1 support (new default version) -- #866
  * Open LDAP ports 389 and 3268 for outgoing TCP connections
  * Speed up mysql stop/start
  * Update S3 regions list for backboa backups
  * Use blazing fast Redis (SCAN) method on wildcard cache delete
  * Use Redis_CacheCompressed mode, if available (saves a ton of RAM)

# Changes:

  * Allow running global OPTIMIZE only once per month, on the last Sunday
  * Always update barracuda, boa and octopus wrappers, ignore _SKYNET_MODE=OFF
  * Enable ARCHIVE Storage Engine in MariaDB 10.1
  * Force _CUSTOM_CONFIG_SQL=NO on MariaDB major upgrade/reinstall
  * Remove exception for cache_form bin in Redis configuration
  * Remove no longer supported textile module
  * Run db OPTIMIZE only weekly, if configured
  * Use bzip2 also for standard db backups
  * Use lower system load limit for queue runner
  * Use MySQLTuner to configure SQL limits -- enabled by default

# System upgrades:

  * CSF/LFD 9.30
  * Drupal 7.54.2
  * Drush micro-8-07-02-2017
  * Duplicity 0.7.11 (please run 'backboa install' to upgrade)
  * MariaDB 10.1.21
  * MariaDB 5.5.54
  * MariaDB Galera Cluster 10.0.29
  * Nginx 1.11.10
  * OpenSSL 1.0.2k (used only in Nginx)
  * PHP 5.6.30
  * PHP 7.0.16
  * Pure-FTPd 1.0.45
  * Redis 3.2.8
  * Redis D8/D7 integration mod-09-02-2017
  * Use ImageMagick 7.0.4-6 if built from sources
  * Use Redis integration mod-14-02-2017 (D7)

# Fixes:

  * Can't add clients on BOA3 -- #926
  * Do not add newer InnoDB settings when old server version is in use -- #1122
  * Do not disable site_readonly daily on migrated instances
  * Fix the not working 
hostmaster LE cert auto-update (typo)\n  * Force vnstat restart on version upgrade\n  * Improve disable_chattr() and enable_chattr() logic\n  * Improve docs/FAQ.txt as suggested in #1119\n  * Improve userprotect initial-only setup -- #926\n  * MariaDB server not running properly alert -- #1122\n  * Migration should re-use Let's Encrypt certs in HTTPS proxy vhosts -- #1106\n  * Randomize SQL backup schedule\n  * Rebuild hosting_custom_settings feature after enabling Redis on install\n  * Sync db server (optional) restart with optimize\n  * Sync max_execution_time for PHP-FPM\n  * Sync max_input_time for PHP-FPM\n  * Update docs/SSL.txt -- #1109\n  * Whitelist /dev/urandom in open_basedir\n\n\n### Stable BOA-3.1.4 Release - Full Edition\n### Date: Tue Dec 20 14:09:21 PST 2016\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.1.4\n### Latest hotfix added on: Wed Dec 21 12:44:58 PST 2016\n\n# Release Notes:\n\n  This BOA release provides system security upgrades, many improvements\n  and bug fixes, latest Ægir version, plus all supported Drupal platforms\n  updated to latest versions, and supplied with latest Drupal 7 core,\n  if possible.\n\n@=> Magic permissions fix now happens on-the-fly\n\n  The most interesting new Ægir feature included in this release is probably\n  the ability to fix files permissions and ownership on any site and platform,\n  without waiting for the running daily magic fix. 
Now it happens on-the-fly,\n  when you run normal platform and site Verify tasks.\n\n@=> MariaDB 10.1 is now the new default version\n\n  If you are already running _DB_SERIES=10.0, this BOA release will upgrade it\n  to _DB_SERIES=10.1 -- but if you still run _DB_SERIES=5.5 it will continue\n  to use MariaDB 5.5 on your system.\n\n# New features and enhancements:\n\n  * Add Microsoft Hyper-V to supported virtualization systems\n  * Add support for 'boa version' command\n  * Add support for fix_ownership and fix_permissions on-the-fly\n  * Add support for latest 3.18.44-vs2.3.7.5-beng VS kernel\n  * Add support for latest 4.1.33-vs2.3.8.5.2-beng VS kernel\n  * Automatically whitelist CloudFlare and Sucuri IPs (faster version)\n  * Configure _INNODB_LOG_FILE_SIZE automatically\n  * MariaDB 10.1 support (new default version) -- #866\n  * Use Redis_CacheCompressed mode, if available (saves a ton of RAM)\n\n# Changes:\n\n  * Always update barracuda, boa and octopus wrappers, ignore _SKYNET_MODE=OFF\n  * Enable ARCHIVE Storage Engine in MariaDB 10.1\n  * Force _CUSTOM_CONFIG_SQL=NO on MariaDB major upgrade/reinstall\n  * Remove no longer supported textile module\n  * Run db OPTIMIZE only weekly, if configured\n  * Use MySQLTuner to configure SQL limits -- enabled by default\n\n# System upgrades:\n\n  * CSF 9.28\n  * Drush micro-8-17-12-2016\n  * MariaDB 10.1.20\n  * MariaDB Galera Cluster 10.0.28\n  * Nginx 1.11.7\n  * OpenSSH 7.4p1 (if installed from sources)\n  * OpenSSL 1.0.2j (used only in Nginx)\n  * PHP 5.6.29\n  * PHP 7.0.14\n  * PHPRedis 3.1.0\n  * Redis 3.2.6\n  * Use mydropwizard-6.x-1.6\n  * Use Redis module mod-20-12-2016\n\n# Fixes:\n\n  * Allow to run downgrade to _DB_SERIES 5.5 (experimental, not recommended!)\n  * Always reinstall cURL from packages if broken\n  * AMP support -- #948\n  * Archive PHP logs in /var/backups/php-logs/\n  * Check if bind should be installed early enough\n  * Do not enable innodb-defragment -- it may crash the server\n  * Fix 
for check_root_keys_pwd()\n  * Fix for disable_chattr()\n  * Fix for missing PHP config regression -- #1105\n  * Fix for VnStat sysconfdir\n  * Fix the check in detect_deprecated_php()\n  * Ignore search lines to avoid breaking pdnsd config -- #1069\n  * Improve SQL defaults\n  * Make sure innodb_buffer_pool_instances is always defined\n  * Migration between installation profiles -- #1076\n  * Monitor more lines when /root/.hr.monitor.cnf exists\n  * Multiply already high opcache.max_accelerated_files\n  * Nginx: Set Access-Control-Allow-Origin header only for static files\n  * Remove duplicate config updates and restarts\n  * Remove various tmp/dot files breaking du command\n  * Sync the new on-the-fly permissions magic with BOA daily.sh logic\n  * The .git/* files are downloadable -- #1091\n  * Triple check that all sql tables are upgraded\n  * Update JS module to 7.x-2.1 -- #586\n  * Update migrate docs to avoid issues with already migrated instances\n  * Use long enough wait times for big SQL server restarts\n  * Use Open Atrium's own patched Drupal core -- #1083\n\n\n### Stable BOA-3.1.3 Release - Barracuda Edition\n### Date: Mon Sep 12 17:54:50 PDT 2016\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.1.3\n\n# Release Notes:\n\n  This BOA release provides important security upgrades and bug fixes.\n  You should upgrade via 'barracuda up-stable system' immediately.\n  Note: Octopus upgrade is **not** included in this BOA release.\n\n  Technically, a normal system update with the previous BOA release would\n  already apply all security upgrades, since they are provided by MariaDB\n  packages and thus enforced whether or not we release a new BOA version.\n  We are issuing this release purely to make sure that all users have been\n  alerted about the situation affecting their systems.\n\n# Changes:\n\n  * Move Nginx cache cleanup to daily cleanup procedure\n  * Use standard hourly schedule for self-update in clear.sh\n\n# System upgrades:\n\n  
* Add all Tika versions from 1.1 to 1.13 in /opt/tika9/\n  * MariaDB 10.0.27 (critical security upgrade)\n  * MariaDB Galera Cluster 10.0.27 (critical security upgrade)\n  * MongoDB database driver 1.6.14 for all PHP versions < 7 -- fixes #981\n  * Pure-FTPd 1.0.43\n\n# Fixes:\n\n  * Check if curl works and re-install if needed before running auto-update\n  * Log LE renewal attempts\n  * Log out all users after lshell em upgrade\n  * Make sure that cURL is always listed in packages\n  * Move permissions fix overrides check to the correct place\n  * Nginx: default FastCGI cache levels value may exhaust all inodes -- #2791885\n\n# Known problems:\n\n  https://github.com/omega8cc/boa/milestones/3.1.x\n\n\n### Stable BOA-3.1.2 Release - Full Edition\n### Date: Sat Aug 20 14:43:43 PDT 2016\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.1.2\n### Latest hotfix added on: Thu Aug 25 09:17:59 PDT 2016\n\n# Release Notes:\n\n  This BOA release provides system security upgrades, improvements\n  and bug fixes, plus all supported Drupal platforms updated to latest\n  versions, and supplied with latest Drupal 7 core.\n\n@=> You can use NPM to install Grunt/Gulp/Bower -- #1028 by @pricejn2 (thanks!)\n\n  Now the same ~/static/control/compass.info file will activate not only\n  RVM, which can be used to install Compass Tools, but also NPM, which\n  can be used to install Grunt/Gulp/Bower.\n\n  To have NPM activated, you will need to re-initialize your account by\n  deleting the control file and adding it again after ~10 minutes.\n\n  More details: https://github.com/omega8cc/boa/blob/master/docs/RVM.txt\n\n@=> Redis integration works with Drupal 8 -- with no effort on your side\n\n  We have added a smart activation procedure to meet the D8 Redis module\n  requirements. 
The system will add Redis integration to your Drupal 8\n  sites automatically, but will keep it inactive until the module is\n  installed properly during the nightly autonomous system maintenance.\n\n  This means that Redis will start working in every existing and newly\n  installed Drupal 8 site after some initial delay, so that everything is\n  installed in the correct order, still without any effort on your side.\n\n# Other enhancements:\n\n  * Add mydropwizard to Drush extensions for Drush Make D6 support\n  * Add support for Drupal 8 specific development.services.yml file\n  * Allow to configure stable/head BOA auto-upgrades via _AUTO_VER variable\n  * Compatibility with Multi-byte UTF-8 support in Drupal 7\n\n# Changes:\n\n  * Add Adminer database manager and deprecate Chive manager -- #1036\n  * Enable Let's Encrypt LIVE mode via ~/static/control/ssl-live-mode.info\n  * Force /root/.use.curl.from.packages.cnf to install cURL from packages\n  * Run db sqlmagic auto conversion also on test/dev sites, if activated\n\n# System upgrades:\n\n  * CSF 9.11\n  * Drush micro-8-23-07-2016\n  * Lshell 0.9.18.8 (security update for shell escalation issues)\n  * MariaDB 10.0.26\n  * MariaDB 5.5.51\n  * Mysqltuner v1.6.15\n  * Nginx 1.11.3\n  * OpenSSH 7.3p1 (if installed from sources)\n  * PHP 5.5.38\n  * PHP 5.6.25\n  * PHP 7.0.10\n  * PHPRedis dev5-11-08-2016\n  * PHPRedis dev7-11-08-2016\n  * Redis 3.2.3\n  * Redis D8 integration mod-12-08-2016\n  * vnStat 1.15\n\n# Fixes:\n\n  * Avoid race conditions on web system user update\n  * Debian Jessie 8.3+ needs grub update -- fixes #912\n  * Detection of Amazon AWS / EC2 instance -- fixes #930\n  * Disable Redis integration until module is installed (D8 only)\n  * Do not force --default-character-set=utf8 -- see #1020\n  * Don't set $MANPATH when npm support is enabled\n  * Fix for openssh-sftp-server status on Jessie\n  * FMG installation hangs on keyring install -- fixes #1050\n  * Force InnoDB in sqlmagic for Drupal 7+ -- 
see #1020\n  * Ignore ~/control/multi-fpm.info on too old Octopus (2.4) instances\n  * Linux Kernel CVE-2016-5696 mitigation\n  * Mitigate httpoxy vulnerability\n  * Nginx: Fix for not working autodiscover flood protection\n  * Nginx: Fix for the add_header inheritance\n  * Nginx: Improve fastcgi_cache_valid TTL settings\n  * Octopus auto-upgrade should set _AUTOPILOT=YES on the fly -- fixes #1041\n  * Remove deprecated MyISAM exceptions in sqlmagic command\n  * Run detect_cdorked_malware() only if /usr/sbin/nginx exists\n  * Run registry-rebuild directly after hostmaster upgrade\n  * Single _tmp_ dir is enough to require forced cleanup (Drush cache)\n  * Sync keyring install command with BOA standard -- #1052\n  * Sync modules auto en/dis for Drupal 8\n  * Update check_boa_php_compatibility()\n  * Upgrade to panels-7.x-3.7 (security) in all distros using the module\n  * Whitelist elFinder requests\n  * Workaround for aegir_backup_export_path\n\n# Known problems:\n\n  https://github.com/omega8cc/boa/milestones/3.1.x\n\n\n### Stable BOA-3.1.1 Release - Full Edition\n### Date: Wed Jun 22 12:24:17 PDT 2016\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.1.1\n### Latest hotfix added on: Fri Jun 24 06:01:07 PDT 2016\n\n# Release Notes:\n\n  This BOA release provides system security upgrades, improvements\n  and bug fixes, plus all supported Drupal platforms updated to latest\n  versions, and supplied with latest Drupal 7 core (security release).\n\n# New features and enhancements:\n\n  * Add _SSH_ARMOUR feature\n  * Add strict check for supported virtualization systems\n  * Allow to install ImageMagick from sources when _MAGICK_FROM_SOURCES=YES\n\n# Changes:\n\n  * Deprecate support for old Solr versions <4\n  * Switch cluster support to 3.x\n\n# System upgrades:\n\n  * Drush micro-8-15-06-2016\n  * MariaDB 5.5.50\n  * Nginx 1.11.1\n  * PHP 5.5.37\n  * PHP 5.6.23\n  * PHP 7.0.8\n  * Redis 3.2.1\n\n# Fixes:\n\n  * Add compatibility with magick src\n  * 
Add ToC (Table of Contents) for the Let's Encrypt section in docs/SSL.txt\n  * Downgrade JSmin from 2.0.1 to 2.0.0 -- fixes #993\n  * Fix for legacy cluster support\n  * Fix for virtualbox detection -- see #972\n  * Fix permissions on sites directories\n  * Fix sites/all/drush permissions compatibility with Drush 8.2\n  * Improve protection for custom solrconfig.xml and schema.xml -- fixes #969\n  * Migration: xboa supports only Ægir 2.x -- #960\n  * Reinstall default-jre on major OS upgrade, if needed -- fixes #986\n  * Remote Drush support regression -- fixes #984\n  * The ~/static/control/README.txt is not updated on octopus upgrade #965\n  * Update docs/SOLR.txt to match currently supported procedures -- fixes #963\n  * Use st_runner() wrapper only for apt-get/aptitude\n\n# Known problems:\n\n  https://github.com/omega8cc/boa/milestones/3.1.x\n\n\n### Stable BOA-3.1.0 Release - Full Edition\n### Date: Thu May 26 16:41:40 PDT 2016\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.1.0\n### Latest hotfix added on: Mon May 30 08:55:03 PDT 2016\n\n  @=> Includes Ægir Hostmaster 3.x-head with improvements\n  @=> Includes Ægir Provision 3.x-head with improvements\n  @=> Includes Drush 8 customized for BOA\n\n# Release Notes:\n\n  This BOA release includes new features, system upgrades, improvements\n  and bug fixes, with most notable features and changes listed below.\n  All supported Drupal platforms have been updated to latest versions.\n\n  @=> Let's Encrypt free SSL certificates are supported directly in Ægir\n  @=> PHP-FPM version can be switched per site hosted on the same instance\n  @=> Both Ægir control panel and its backend are compatible with PHP 7.0.7\n  @=> Support for forced Drush cache clear in the Ægir backend\n  @=> BOA can run Debian Wheezy to Debian Jessie upgrades easily\n\n  More details on new features, enhancements and changes can be found below.\n\n\n  ###\n#-### Let's Encrypt free SSL certificates are supported directly in 
Ægir\n  ###\n\n  You can find these important Let's Encrypt topics discussed below:\n\n  # Introduction\n  # How does it work?\n  # How to add Letsencrypt.org SSL certificate to hosted site?\n  # How to add Letsencrypt.org SSL certificate to the Ægir Hostmaster site?\n  # How to modify/renew Letsencrypt.org SSL certificate for SSL enabled site?\n  # Are there any requirements, limitations or exceptions?\n  # How to enable live mode?\n  # How to replace Let's Encrypt certificate with custom certificate?\n\n    [ Available also at: https://omega8.cc/node/381 ]\n\n  This BOA release opens a new era in SSL support for all hosted Drupal sites.\n  The old method of creating SSL proxy vhosts is officially deprecated,\n  as explained in the docs/SSL.txt how-to:\n\n  NOTE ###===>>>\n\n  The old how-to is still useful if you prefer to use SSL termination separated\n  from your Ægir system, or if you don't want to use built-in Letsencrypt.org\n  SSL certificates support (available since BOA-3.1.0).\n\n  But if you can use Letsencrypt.org SSL certificates, or you are also willing\n  to use the built-in BOA feature which allows you to replace the\n  Letsencrypt.org SSL certificate with any third-party certificate per site,\n  while still managing SSL via the Ægir control panel (for redirects,\n  forced/required SSL mode), we highly recommend using the Ægir built-in SSL\n  support, which is enabled and ready to use in all Octopus instances since\n  the BOA-3.1.0 release.\n\n  NOTE ###===>>>\n\n  * How does it work?\n\n    BOA leverages the letsencrypt.sh utility to talk to Letsencrypt.org\n    servers, and on the Ægir side it uses the new `hosting_le` extension,\n    which replaces self-signed SSL certificates generated by Ægir with\n    Let's Encrypt ones.\n    You can find more information on both at these URLs:\n\n      https://github.com/lukas2511/letsencrypt.sh\n      https://github.com/omega8cc/hosting_le\n\n  * How to add Letsencrypt.org SSL certificate to hosted site?\n\n    In your Ægir control panel please 
go to the site's node Edit tab, then\n    under `SSL Settings > Encryption` choose `Enabled`, or `Required` if you\n    want HTTP->HTTPS redirection enforced on the fly. Now click `Save`\n    and wait until the Verify task completes. Done!\n\n    NOTE: SSL Settings are not available in the Add Site form, only in Edit.\n\n  * How to add Letsencrypt.org SSL certificate to the Ægir Hostmaster site?\n\n    !!! WARNING\n    !!! ###===>>> Don't enable SSL option for the Hostmaster site in Ægir\n    !!! WARNING\n\n    Let's Encrypt SSL for the Ægir control panel is handled in BOA outside of\n    the control panel, and you should never enable it within the control panel.\n\n    During the octopus upgrade you will see this message, explaining what to do:\n\n      BOA [02:44:59] ==> UPGRADE B: Letsencrypt SSL initial mode: DEMO\n      BOA [02:44:59] ==> UPGRADE B: LE -- No real SSL certs will be generated\n      BOA [02:44:59] ==> UPGRADE B: LE -- To enable live SSL mode, please delete file:\n      BOA [02:44:59] ==> UPGRADE B: LE -- /data/disk/o1/tools/le/.ctrl/ssl-demo-mode.pid\n      BOA [02:44:59] ==> UPGRADE B: LE -- Then run octopus forced upgrade\n\n  * How to modify/renew Letsencrypt.org SSL certificate for SSL enabled site?\n\n    When you modify aliases or redirections, Ægir will re-create the SSL\n    certificate on the fly, to match the current settings and list of aliases.\n\n    BOA runs auto-renewal checks for you weekly, and forces renewal if there\n    are fewer than 30 days left before the certificate expiration date (Let's\n    Encrypt certs are valid for up to 90 days before they have to be renewed).\n\n    Also, every Verify task against an SSL enabled site runs this check on\n    the fly.\n\n  * Are there any requirements, limitations or exceptions?\n\n    Yes, there are some:\n\n    * All aliases must have valid DNS names pointing to your server IP address\n    * Even with aliases redirection enabled all aliases are listed as SAN names\n    * Avoid renaming 
SSL-enabled sites; move aliases between the site's clones instead\n    * Before you rename a site, disable SSL first; then re-enable once it's renamed\n\n    NOTE: Subject Alternative Names (SAN) is a feature which allows issuing\n    multi-domain / multi-subdomain SSL certificates -- it is automated in BOA.\n\n    The Let's Encrypt API for live, real certificates has its own requirements\n    and limits you should be aware of. Please visit their website for details:\n\n      https://letsencrypt.org/docs/rate-limits/\n\n    To make this new BOA feature easy to test before you are ready to\n    generate real, live SSL certificates, BOA comes with Let's Encrypt demo\n    mode enabled by default, so it will not hit the limits enforced for live,\n    real Let's Encrypt SSL certificates. It generates \"fake\" certs, similar\n    to the self-signed certificates used in BOA by default.\n\n    NOTE: All sites with one or more keywords (listed below) in the site's\n    main name (this exception rule doesn't apply to aliases) will be ignored,\n    and they will receive only self-signed SSL certificates generated by Ægir\n    once you switch their SSL settings to `Enabled` or `Required`.\n\n      `.(dev|devel|temp|tmp|temporary|test|testing|stage|staging).`\n\n    Examples: `foo.temp.bar.org`, `foo.test.bar.org`, `foo.dev.bar.org`\n\n    NOTE: This exception rule doesn't apply to aliases which are not used\n    as a redirection target. 
Even aliases with the listed special keywords in their\n    names will be listed as SAN entries, as long as they are valid DNS names.\n\n  * How to enable live mode?\n\n    It is enough to delete the `[aegir_root]/tools/le/.ctrl/ssl-demo-mode.pid`\n    control file and run the Verify task on any SSL enabled site again.\n\n    NOTE: If you are on hosted BOA, you don't have access to this location\n    on your system, so please open a ticket at: https://omega8.cc/support\n\n    You can switch back and forth between demo and live mode by adding and\n    deleting the control file, and it will re-register your system via the\n    Let's Encrypt API, but we have not tested how repeated switching may\n    affect already generated live certificates, so please try not to abuse\n    this feature.\n\n    It is important to remember that once you switch the Let's Encrypt mode\n    from live to demo, or from demo to live, by adding or removing the\n    `[aegir_root]/tools/le/.ctrl/ssl-demo-mode.pid` control file, it will not\n    replace all previously issued certificates instantly, because certificates\n    are updated, if needed, only when you (or the BOA system for you during its\n    daily maintenance, if used) run Verify tasks on SSL enabled sites.\n\n    These BOA specific Verify tasks are normally scheduled to run weekly,\n    between Monday and Sunday, depending on the first character in the site's\n    main name, so both live and demo certificates may still work in parallel\n    for SSL enabled sites until it is their turn to run Verify and update\n    the certificate according to the currently set Let's Encrypt mode.\n\n    NOTE: You may find some helpful details in the Verify task log -- look for\n    lines with the `[hosting_le]` prefix.\n\n  * How to replace Let's Encrypt certificate with custom certificate?\n\n    1. 
Create an empty control file (replace `example.com` with your site name):\n\n       `[aegir_root]/tools/le/.ctrl/dont-overwrite-example.com.pid`\n\n    2. Replace the `privkey.pem` symlink with a single file containing your\n       custom certificate key -- use `privkey.pem` as the filename in the\n       directory:\n\n       `[aegir_root]/tools/le/certs/example.com/`\n\n    3. Replace the `fullchain.pem` symlink with a single file containing your\n       custom certificate and all intermediate certificates beneath it -- use\n       `fullchain.pem` as the filename in the same directory:\n\n       `[aegir_root]/tools/le/certs/example.com/`\n\n    4. Run the Verify task for your site in the Ægir control panel. Done!\n\n    NOTE: If you are on hosted BOA, you don't have access to this location\n    on your system, so please open a ticket at: https://omega8.cc/support\n\n\n  ###\n#-### Support for PHP-FPM version switch per Octopus instance (also per site)\n  ###\n\n  ### ~/static/control/fpm.info\n  ###\n  ### This file, if it exists and contains a supported and installed PHP-FPM\n  ### version, will be used by the system agent (which runs every 2-3 minutes)\n  ### to switch the PHP-FPM version used for serving web requests by this\n  ### Octopus instance.\n  ###\n  ### IMPORTANT: If used, it will switch PHP-FPM for all Drupal sites\n  ### hosted on the instance, unless the multi-fpm.info control file also\n  ### exists.\n  ###\n  ### Supported values for single PHP-FPM mode which can be written in this file:\n  ###\n  ### 7.0\n  ### 5.6\n  ### 5.5\n  ### 5.4\n  ### 5.3\n  ###\n  ### NOTE: There must be only one line and one value (like: 7.0) in this file.\n  ### Otherwise it will be ignored.\n  ###\n  ### It is now possible to make all installed PHP-FPM versions available\n  ### simultaneously for sites on the Octopus instance with an additional\n  ### control file:\n  ###\n  ### ~/static/control/multi-fpm.info\n  ###\n  ### This file, if it exists, will switch all hosted sites to the highest\n  ### available PHP-FPM version within the 
5.3-5.6 range, with the ability\n  ### to override the PHP-FPM version per site, if the site's name is listed\n  ### in this additional control file, as shown below:\n  ###\n  ### foo.com 7.0\n  ### bar.com 5.5\n  ### old.com 5.3\n  ###\n  ### NOTE: Each line in the multi-fpm.info file must start with the main site\n  ### name, followed by a single space, and then the PHP-FPM version to use.\n  ###\n\n\n  ###\n#-### Support for PHP-CLI version switch per Octopus instance (all sites)\n  ###\n\n  ### ~/static/control/cli.info\n  ###\n  ### This file works like fpm.info: if it exists and contains a supported\n  ### and installed PHP version, it will be used by the system agent (which\n  ### runs every 2-3 minutes) to switch the PHP-CLI version for this Octopus\n  ### instance, but it will do this for all hosted sites. There is no option\n  ### to switch or override this per hosted site.\n  ###\n  ### NOTE: While the current Ægir 3.x version included in BOA works fine with\n  ### the latest PHP 7.0, many hosted sites -- especially those using the\n  ### Pressflow 6 core or an older Drupal 7 core without the required patch we\n  ### have included since 7.43.2 -- will not work properly, and Ægir tasks run\n  ### against those sites may fail. It is therefore recommended to use PHP-CLI\n  ### 5.6, unless you have verified that all sites on the instance support\n  ### PHP 7.0 without issues.\n  ###\n  ### Supported values which can be written in this file:\n  ###\n  ### 7.0\n  ### 5.6\n  ### 5.5\n  ### 5.4\n  ### 5.3\n  ###\n  ### There must be only one line and one value (like: 5.6) in this control file.\n  ### Otherwise it will be ignored.\n  ###\n\n\n  ###\n#-### Support for forced Drush cache clear in the Ægir backend\n  ###\n\n  ### ~/static/control/clear-drush-cache.info\n  ###\n  ### The Octopus instance will pause all scheduled tasks in its queue if it\n  ### detects a makefile-based platform build in progress, to make sure that\n  ### no other running task could break the build.\n  ###\n  ### This is great, until there is a 
broken build, and Drush fails\n  ### to clean up all leftovers from its .tmp/cache directory, which in turn\n  ### pauses all tasks in the queue for up to 24-48 hours, until the cache\n  ### directory is automatically purged by the daily cleanup tasks, which are\n  ### designed to leave anything newer than 24 hours untouched, so that no\n  ### running builds are broken.\n  ###\n  ### If you need to unlock the tasks queue by forcefully removing everything\n  ### from the Ægir backend Drush cache, you can create an empty control file:\n  ### ~/static/control/clear-drush-cache.info\n  ###\n\n\n  ###\n#-### BOA can run Debian Wheezy to Debian Jessie upgrades easily\n  ###\n\n  This feature works the same way as `_LENNY_TO_SQUEEZE=YES` and then\n  `_SQUEEZE_TO_WHEEZY=YES` did before. But make sure you follow all the steps\n  exactly as listed below:\n\n  1. Upgrade both barracuda and octopus to current stable:\n\n    $ cd;wget -q -U iCab http://files.aegir.cc/BOA.sh.txt;bash BOA.sh.txt\n    $ barracuda up-stable\n    $ octopus up-stable all both\n\n    NOTE: You can upgrade octopus selectively, if you still need one instance\n    running the old stable BOA-2.4.9 version, for example:\n\n    $ octopus up-2.4 o1 force\n    $ octopus up-stable o2 force\n    $ octopus up-stable o3 force\n\n  2. Add to your /root/.barracuda.cnf this line:\n\n    _WHEEZY_TO_JESSIE=YES\n\n  3. Run another barracuda upgrade with command:\n\n    $ barracuda up-stable\n\n  4. If there are no errors reported, try to run manual update:\n\n    $ aptitude update\n    $ aptitude full-upgrade\n\n    It should tell you that there are no packages left to upgrade.\n\n  5. Reboot your system (preferably via remote console)\n\n    $ reboot\n\n  6. Run barracuda upgrade again:\n\n    $ barracuda up-stable\n\n  7. Try to run manual update:\n\n    $ aptitude update\n    $ aptitude full-upgrade\n\n    It should tell you that there are no packages left to upgrade.\n\n  8. Congrats! 
You are running BOA stable on Debian Jessie.\n\n\n# New features and enhancements:\n\n  * Add all aliases as Subject Alternative Names in Let's encrypt certs -- #941\n  * Add auto-renewal procedure for Let's encrypt certs -- #942\n  * Add option to exclude *.tar.gz Drush archives in backboa -- #936\n  * Add Restaurant 1.11\n  * Add support for arbitrarily selected redirection targets as valid SSL names\n  * Allow to define PHP-FPM version per site hosted -- #935\n  * Allow to use drush7 and drush8 on command line directly\n  * Even with redirection enabled all aliases are listed as SAN names -- #964\n  * Feature: _WHEEZY_TO_JESSIE major upgrade procedure -- #870\n  * Let's encrypt support -- #500\n  * New Relic integration compatibility with multi-FPM mode\n  * Support for forced Drush cache clear in the Ægir backend\n  * Use Let's encrypt for Hostmaster site (after Octopus upgrade) -- #940\n\n# Changes:\n\n  * Do not allow XtraDB to crash the server due to single broken cache table\n  * Nginx: Use faster 301/302 redirects\n  * Nginx: Use only TLSv1.1 TLSv1.2\n  * Redis: Exclude cache_form bin again to avoid rare issues with contrib\n  * Use dynamic httpredir.debian.org mirrors\n\n# System upgrades:\n\n  * cURL 7.49.0 (if installed from sources)\n  * Jetty 9.2.16.v20160414\n  * Nginx 1.11.0\n  * PHP 5.5.36\n  * PHP 5.6.22\n  * PHP 7.0.7\n  * Redis 3.2.0\n  * SLF4J 1.7.21\n\n# Fixes:\n\n  * Add compatibility with \"config.sh\" renamed to \"config\" in letsencrypt.sh\n  * Add ssl_trusted_certificate directive required by ssl_stapling\n  * Add warning: \"Don't enable SSL option for the Hostmaster site in Ægir\" -- #962\n  * Check if parent dir exists before touching ctrl file -- #945\n  * Do not clear drush cache on every hosting-dispatch -- #943\n  * Do not create Letsencrypt cert for Hostmaster if still in demo mode\n  * Do not force PHP rebuild on new cURL install from sources\n  * Drush is broken error -- clear drush cache before testing it -- #946\n  * Fix for 
backward compatibility with FPM pool tpl in 2.4\n  * Fix for Chive auth (via SSH) access filtering\n  * Fix for conflicting Jetty libs\n  * Fix ownership and attr on usr home dirs / subdirs\n  * Improve sub-accounts zombie cleanup\n  * Let's Encrypt SSL - switching from demo to live -- #959\n  * Make backboa sub-tasks delays optional and disable them by default -- #919\n  * Nginx: Fix for ssl_dhparam if/else logic\n  * Remove deprecated wildcard HTTPS warning\n  * Run registry-rebuild before updatedb with --no-cache-clear -- #938\n  * Set LE mode to DEMO on initial setup -- both on octopus install and upgrade\n  * Skynet upgrades for limited shell configuration -- #950\n  * Something is stuck after BOA upgrade to 3.0.2 -- #951\n  * The makefile based platform creation fails with permissions error -- #943\n  * The site's files should have Ægir backend user as an owner\n  * Use strict paths checks to avoid running chown/chmod on parent dirs\n\n# Known problems:\n\n  https://github.com/omega8cc/boa/milestones/3.1.x\n\n\n### Stable BOA-3.0.2 Release - Full Edition\n### Date: Tue May  3 22:26:09 PDT 2016\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.0.2\n### Latest hotfix added on: Fri May  6 08:42:13 PDT 2016\n\n  @=> Includes Ægir Hostmaster 3.x-head with improvements\n  @=> Includes Ægir Provision 3.x-head with improvements\n  @=> Includes Drush 8 customized for BOA\n\n# Release Notes:\n\n  This BOA release includes several important system upgrades, improvements\n  and bug fixes. All supported platforms have been updated to latest versions.\n\n  @=> Latest Drupal 7 core version used in BOA in all built-in platforms is\n      compatible with latest PHP 7.0.6 -- you can switch your Octopus instance\n      easily via fpm.info control file: https://omega8.cc/node/330 but please\n      don't use 7.0 in cli.info, because it is not supported in the Ægir\n      backend yet. 
PHP 7.0 can't be used if you have any Pressflow 6 site.\n\n# New features and enhancements:\n\n  * Add idna_convert to hostmaster for IDN domain names auto-conversion -- #916\n  * Allow to disable redis.path.inc feature via INI variable -- #815\n  * Drupal 7.43.2 (with PHP 7 compatibility patch)\n  * PHP 7 compatibility improvements -- #716\n  * Pressflow 6.38.2 (only version update)\n  * Truncate giant watchdog tables\n\n# Changes:\n\n  * Disable (temporarily) support for outdated ERPAL distro\n  * Disable auto-upgrade for legacy Octopus instances\n  * Disable page cache only in hostmaster\n  * Disable PAMAuthentication in pure-ftpd\n  * Force PHP 5.6 or 5.5 cli.info in Octopus 2.4.9\n  * Force Redis SOCKET mode if PORT was used before\n  * Redis module mod-03-05-2016\n  * Redis: Limit methods to define site prefix\n  * Redis: Use maxmemory-policy volatile-ttl\n  * Set redis_client_base\n  * Use Redis in hostmaster\n  * Use standard profile by default\n\n# System upgrades:\n\n  * Drush micro-8-24-04-2016\n  * MariaDB 10.0.25\n  * MariaDB 5.5.49\n  * MariaDB Galera Cluster 10.0.25\n  * Nginx 1.9.15\n  * OpenSSL 1.0.2h (used only in custom built Nginx)\n  * PHP 5.5.35\n  * PHP 5.6.21\n  * PHP 7.0.6\n\n# Fixes:\n\n  * Add check_boa_php_compatibility() procedure -- fixes #906\n  * Add patch for registration error (Commons)\n  * Avoid duplicate entries in hosting_cron on hostmaster install -- #928\n  * Cron not running on cloned sites -- fixes #922\n  * Disable hosting-pause / Provision -- not needed in BOA, may hang upgrade\n  * Do not force TERM\n  * Do not set $conf['redis_eval_enabled'] = TRUE;\n  * Enable _DEBUG_MODE=YES on Octopus upgrade from BOA-2.4.9\n  * Experimental hosting_git error, platform not installed -- fixes #904\n  * Improve the provision_autoload_register_prefix check\n  * Make sure that auto-generated robots.txt is OK -- fixes #925\n  * Make sure that hostmaster cron is never disabled\n  * Make sure to not set PHP 7 as system default\n  * Restart 
php-fpm on upgrade as soon as possible\n  * Run registry-rebuild directly after hostmaster-migrate\n  * Run update_php_cli_cron() twice\n  * Use inetutils-syslogd on VZ systems -- fixes #905\n  * Use syncpass during hostmaster upgrade\n  * Workaround for hostmaster upgrade from 2.x\n\n# Known problems:\n\n  https://github.com/omega8cc/boa/milestones/3.1.1\n\n\n### Stable BOA-3.0.1 Release - Full Edition\n### Date: Mon Apr 11 18:49:43 PDT 2016\n### Milestone URL: https://github.com/omega8cc/boa/milestones/3.0.1\n\n  @=> Includes Ægir Hostmaster 3.x-head with improvements\n  @=> Includes Ægir Provision 3.x-head with improvements\n  @=> Includes Drush 8 customized for BOA\n\n# Release Notes:\n\n  This BOA release includes important fixes and improvements in the upgrade\n  procedure from BOA-2.4.9 and in the initial install procedures, along with\n  support for the latest Drupal 8.0.x and 8.1.x as custom platforms you can\n  create in the ~/static directory tree. We also list here all hot-fixes\n  applied after the initial BOA-3.0.1 release.\n\n  @=> BOA will not include built-in Drupal 8 platforms until Drupal 8\n      supports symlinks in the codebase, like all previous core versions did.\n\n  @=> Octopus Ægir instances hosted on the Power Engine option will *not*\n      receive the upgrade to BOA-3.x unless requested via\n      https://omega8.cc/support, to prevent issues with (often) customized\n      Hostmaster modules not ready for the Drupal 7 based Ægir control panel. 
All hosted BOA systems will still\n      continue to receive the Barracuda system upgrades.\n\n  @=> It is possible to host previous stable BOA-2.4.9 Octopus instances\n      on systems with Barracuda upgraded to BOA-3.0.1\n\n# Known problems:\n\n  https://github.com/omega8cc/boa/milestones/3.0.2\n\n# New features and enhancements:\n\n  * Allow boa in-octopus to specify version {stable|head|2.4}\n\n# Changes:\n\n  * Allow to execute compass over SSH\n  * Allow to upload dot-files via SFTP\n  * Remove/don't install not used blocks in Hostmaster\n\n# System upgrades:\n\n  * Add mydropwizard-6.x-1.4 to all existing D6 platforms\n  * Drush micro-8-08-04-2016\n  * Lshell 0.9.18.3 -- #895\n  * Nginx 1.9.14\n  * PHP 5.5.34\n  * PHP 5.6.20\n  * PHP 7.0.5 (for testing only)\n  * Redis module 7.x-3.12\n\n# Fixes:\n\n  * 3.0.0 clean install is broken -- #899\n  * boa in-2.4 fails to install on Debian Jessie -- #898\n  * Can't git pull -- #890\n  * CiviCRM error on verification D6 site -- #897\n  * D7 API compatibility fix for node_save() in Hostmaster\n  * Do not switch default PHP to 7.0 if installed\n  * Drush issues: no aliases available -- #887\n  * Fix for 3.x to 3.x upgrades\n  * Fix for FPM master proc monitor\n  * Fix for input filters upgrade path\n  * Fix for series test to avoid downgrade attempts\n  * Fix the legacy install mode -- #898\n  * Less and more no longer allowed -- #896\n  * Limit the list of allowed_shell_escape commands\n  * Missing VBO options -- #892\n  * Overlay header title not showing -- #889\n  * Problems installing rvm / compass -- #895\n  * Remove deprecated sftp restriction\n  * Require BOA-2.4.9 before upgrade to BOA-3.x also in barracuda -- #886\n  * Switch octopus upgrade mode automatically to legacy if needed\n  * tar and gunzip fails because of permission denied -- #894\n  * Use Drush 8 on command line -- #887\n  * vi and vim both open nano instead of vim -- #893\n\n\n### Stable BOA-3.0.0 Release - Full Edition\n### Date: Wed Mar 30 
10:48:54 PDT 2016
### Milestone URL: https://github.com/omega8cc/boa/milestones/3.0.0
### Latest hotfix added on: Wed Apr  6 17:40:12 PDT 2016

  @=> Includes Ægir Hostmaster 3.x-head with improvements
  @=> Includes Ægir Provision 3.x-head with improvements
  @=> Includes Drush 8 customized for BOA

# Release Notes:

  This BOA release includes the complete Ægir 3 with Drush 8, and introduces
  full support for the latest Drupal 8.0.5 and Drupal 8.1.0-beta2 as custom
  platforms you can create in the ~/static directory tree.

  @=> BOA will not include built-in Drupal 8 platforms until Drupal 8
      supports symlinks in the codebase, as all previous core versions do.

  @=> All supported Ægir platforms have been updated to their latest releases

  @=> Octopus Ægir instances hosted on the Power Engine option will *not*
      receive the upgrade to BOA-3.x unless requested via
      https://omega8.cc/support to prevent issues with (often) customized
      Hostmaster modules not ready for the Drupal 7 based Ægir control panel.
      All hosted BOA systems will still continue to receive the Barracuda
      system upgrades.

  @=> It is possible to host previous stable BOA-2.4.9 Octopus instances
      on systems with Barracuda upgraded to BOA-3.0.0

# Known problems:

  https://github.com/omega8cc/boa/milestones/3.0.1

  While a clean 3.0.0 install worked in our tests before the release, it does
  not work for others. 
Until this problem is fixed properly without regressions,
  we are switching the boa installer back to 2.4.9. This makes getting 3.0.0
  on initial installation a two-step operation: first run 'boa in-stable' to
  install 2.4.9, then run 'barracuda up-stable' plus 'octopus up-stable' to
  upgrade to 3.0.0; upgrades of barracuda and octopus from 2.4.9 to 3.0.0
  work fine.

  This also means that 'boa in-octopus' will still install the legacy 2.4.9
  octopus extra instances, and you can upgrade them to 3.0.0 with the standard
  'octopus up-stable' mode.

  It is still possible to test/debug boa 3.0.0 clean installs -- just create
  an empty /root/.debug-boa-installer.cnf file before running the installer.

# New features and enhancements:

  * Add Hosting Git optional feature -- fixes #753
  * Add mydropwizard module to D6 o_contrib by default
  * Add support for ap-northeast-2 Asia Pacific (Seoul) S3
  * Add support for PHP 7.0 -- experimental! -- fixes #716
  * Add support for VServer kernel 4.1.19-vs2.3.8.4-beng
  * BOA with Ægir Hostmaster 3.x -- fixes #715
  * Switch to Drush 8 for Drupal 8 -- fixes #729
  * Allow to randomize duplicity full backup schedule
  * Monitor and block SSH connections flood
  * Run registry-rebuild in drush_provision_drupal_post_provision_deploy()

# Changes:

  * Add linkchecker module to Contrib [F]orce[D]isabled
  * Deny sudo/su switch if used for root access - fixes #879
  * Do not install / remove auditd on VServer systems
  * Do not install / remove udev on VServer systems
  * Merge hosting_advanced_cron into Ægir core cron
  * Use Redis 7.x-3.x integration module

# System upgrades:

  * Boto 2.39.0-fix-python-2.7.9 (please run 'backboa install' to upgrade)
  * CSF 8.16
  * Drush mini-8-08-03-2016
  * Duplicity 0.7.06 (please run 'backboa install' to upgrade)
  * Lshell 0.9.18.3
  * MongoDB database driver 1.6.13 for all PHP versions < 7 -- fixes #521
  * Nginx 1.9.14
  * OpenSSH 
7.2p2 (if installed from sources)
  * OpenSSL 1.0.2g (used only in custom built Nginx)
  * PHP 5.5.34
  * PHP 5.6.20
  * PHP 7.0.5 (for testing only)
  * Twig C extension for PHP - v.1.24.0
  * Use PHP jsmin 2.0.1 ext with newer PHP versions - fixes #878

# Fixes:

  * [system] sync fix_locales for root -- fixes #880
  * Add mydropwizard-6.x-1.4 to all existing D6 platforms
  * Auto-Update lshell.conf on all systems
  * Fix for 3.x to 3.x upgrades
  * Fix for entitycache 1.2 to 1.5 upgrade problem #868
  * Fix for FPM master proc monitor
  * Fix for series test to avoid downgrade attempts
  * Numerous lshell problems -- fixes #896 #895 #894 #893 #890
  * Problems installing rvm / compass -- fixes #895
  * Require 2.4.9 before upgrade to 3.0.0 also in barracuda -- fixes #886
  * Restart rsyslog/sysklogd aggressively enough
  * Switch boa meta installer to 2.4.9 until #899 is fixed
  * Switch octopus upgrade mode automatically to legacy if needed
  * Sync max_user_connections
  * Update map $http_user_agent $is_crawler
  * Use Drush 7 on command line until #887 is fixed


### Stable BOA-2.4.9 Release - Full Edition
### Date: Sat Feb 27 15:22:11 GMT 2016
### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.9

  @=> Includes Ægir Hostmaster 2.x-head with improvements
  @=> Includes Ægir Provision 3.x-head with improvements
  @=> Includes Drush 7 customized for BOA

# Release Notes:

  This BOA release includes the latest Drupal 7 and Pressflow 6 security
  updates, along with bug fixes and other system software updates.

  @=> All supported Ægir platforms have been updated to their latest releases

  @=> What are BOA plans for Drupal 6 support after February 24th, 2016?

      We will support Drupal/Pressflow 6 in all new releases, for as long as
      the available PHP versions allow us to use it (we have run our own
      Pressflow 6 based site on PHP 5.6 for many months with zero issues).
      For more details please check:
      https://github.com/omega8cc/boa/issues/824

  @=> Even if deprecated PHP versions are still included in this release,
      any Octopus instance running PHP older than 5.5 will not be able to
      receive the upgrade to BOA-2.4.9, as announced before -- Please switch
      your Octopus to PHP 5.6 or at least 5.5 to be able to upgrade not only
      the Barracuda system part of BOA, but also Octopus Satellite --
      The how-to can be found at: https://omega8.cc/node/330

  @=> Drupal 8 support for custom platforms in the ~/static directory tree
      will be included, along with Drush 8 and Hostmaster 3.x in the upcoming
      BOA-3.0.0 release: https://github.com/omega8cc/boa/milestones/3.0.0
      Note: BOA will not include built-in Drupal 8 platforms until Drupal 8
      supports symlinks in the codebase, as all previous core versions do

# System upgrades:

  * MariaDB Galera Cluster 10.0.24
  * Nginx 1.9.12

# Fixes:

  * Do not force Ruby with RVM for root on every upgrade
  * SQL max_user_connections autoconf value can be too low -- fixes #873


### Stable BOA-2.4.8 Release - Full Edition
### Date: Sat Feb 20 11:28:05 GMT 2016
### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.8
### Latest hotfix added on: Mon Feb 22 18:28:51 GMT 2016

  @=> Includes Ægir Hostmaster 2.x-head with improvements
  @=> Includes Ægir Provision 3.x-head with improvements
  @=> Includes Drush 7 customized for BOA

# Release Notes:

  This BOA release includes several important system upgrades and bug fixes,
  with most notable features and changes listed below.

  @=> Debian 8 Jessie is fully supported, but includes only PHP 5.5 and 5.6

  @=> All supported Ægir platforms have been updated with latest Drupal cores

  @=> Even if deprecated PHP versions are still included in this release,
      any Octopus instance running PHP older than 5.5 will not be able to
      receive the upgrade 
to BOA-2.4.8, as announced before -- Please switch your
      Octopus to PHP 5.6 or at least 5.5 to be able to upgrade not only
      the Barracuda system part of BOA, but also Octopus Satellite --
      The how-to can be found at: https://omega8.cc/node/330

  @=> Drupal 8 support for custom platforms in the ~/static directory tree
      will be included, along with Drush 8 and PHP 7 in the *upcoming*
      BOA-3.0.0 release: https://github.com/omega8cc/boa/milestones/3.0.0
      Note: BOA will not include built-in Drupal 8 platforms until Drupal 8
      supports symlinks in the codebase, as all previous core versions do

  @=> What are BOA plans for Drupal 6 support after February 24th, 2016?

      We will support Drupal/Pressflow 6 in all new releases, for as long as
      the available PHP versions allow us to use it (we have run our own
      Pressflow 6 based site on PHP 5.6 for many months with zero issues).
      For more details please check:
      https://github.com/omega8cc/boa/issues/824

# Changes:

  * Add "boa info" and 'boa info more' helper commands
  * Add branch support in the boa wrapper
  * Allow to force re-install with /root/.force.reinstall.cnf present
  * Allow to run existing Octopus 2.4 on the upcoming Barracuda 3.0
  * Deny Octopus upgrade unless it is running on a compatible PHP version 5.5+
  * Full backboa backups are scheduled on Sunday, unless custom _AWS_FLC is set
  * Full duobackboa backups will run on Saturday, unless custom _AWS_FLC is set
  * Make base nice configurable via _B_NICE variable
  * Nginx: Sync htaccess level protection with Drupal core
  * Nginx: Update map $http_user_agent $is_crawler
  * Only instances already running 2.4.8 can upgrade to the upcoming 3.0.0
  * Remove no longer supported T1lib in PHP
  * Remove support for deprecated OS versions -- fixes #802
  * Replace in-legacy and up-legacy with version specific commands
  * Revert "Issue #2377819: Gzipping backups suppresses file 
permissions errors\"\n  * Run minimal modules en/dis procedure on Wednesday and full on Saturday\n  * Skip legacy PHP 5.3 and 5.4 on Jessie\n  * Support for Debian 8 Jessie -- fixes #702\n  * The _MODULES_FIX variable is set to YES by default\n  * The _PERMISSIONS_FIX variable is set to YES by default -- fixes #593\n\n# System upgrades:\n\n  * Git 2.7.0 (if installed from sources)\n  * MariaDB 10.0.24\n  * MariaDB 5.5.48\n  * Nginx 1.9.11\n  * OpenSSH 7.1p2 (if installed from sources)\n  * OpenSSL 1.0.2f (used only in custom built Nginx)\n  * PHP 5.5.32\n  * PHP 5.6.18\n  * PHP: Imagick 3.3.0\n  * Redis 3.0.7\n  * Ruby 2.3.0\n\n# Fixes:\n\n  * Add duobackboa docs\n  * Add missing libs in Jessie\n  * Allow to install a specific PHP version on a local install -- fixes #848\n  * Allow to run upgrade from not really 3.x HEAD to 2.4.8\n  * Automate /root/.force.reinstall.cnf and improve docs\n  * Disable Octopus 3.x specific version check (tmp) for 2.4.8\n  * Disable spinner on Jessie\n  * Do not force rebuild on systems installed with 2.4.8\n  * Do not kill long running php-fpm childs\n  * Do not run the old D7 core fix on newer BOA versions -- fixes #842\n  * Do not wait for simple sed replacements -- fixes #838\n  * Fix a typo in some locCnf variable calls -- fixes #854\n  * Fix for ignored boa_platform_control.ini\n  * Fix for MariaDB version check\n  * Fix for not working S3 bucket connection test\n  * Fix for process.max and pm.max_children\n  * Fix for undefined locCnf variable in BOND - fixes #748\n  * Fix the logic in mysql_proc_kill()\n  * Fix too aggressive Jetty monitoring\n  * Force clean rsyslog/sysklogd restart if required\n  * Force rebuild for affected services built from sources -- CVE-2015-7547\n  * Improve backup sub-tasks randomized schedule\n  * Improve initial install how-to with screen\n  * Locales check should not be used with screen session -- fixes #871\n  * Nginx: Remove duplicate $args on redirects\n  * Nginx: Workaround for broken 
autocomplete
  * Remove dependency on _MODULES_FIX=YES -- fixes #592
  * Remove no longer used _SSL_FROM_SOURCES logic
  * Remove systemd on Debian Jessie -- fixes #840
  * Restart syslog hourly
  * Run drush cache cleanup only once per account
  * Speed up backup tasks by removing extra conn_test
  * Speed up backup tasks by running extended cleanup and reporting weekly
  * Speed up initial setup procedure
  * Sync wait randomizer max value
  * Upgrade wkhtmltopdf and wkhtmltoimage to 0.12.3 - fixes #858
  * Use date %u day of week (1..7); 1 is Monday
  * Whitelist missing upload progress path


### Stable BOA-2.4.7 Release - Full Edition
### Date: Fri Dec  4 08:09:21 PST 2015
### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.7
### Latest hotfix added on: Thu Dec 10 10:10:26 PST 2015

  @=> Includes Ægir Hostmaster 2.x-head with improvements
  @=> Includes Ægir Provision 3.x-head with improvements
  @=> Includes Drush 7 customized for BOA

# Release Notes:

  This BOA release includes several important system upgrades and bug fixes,
  with most notable features and changes listed below.

  @=> All supported Ægir platforms have been updated with latest Drupal cores

  @=> Drupal 8 support for custom platforms in the ~/static directory tree
      will be included, along with Drush 8 and PHP 7 in the *upcoming*
      BOA-3.0.0 release: https://github.com/omega8cc/boa/milestones/3.0.0

  @=> This BOA release (2.4.7) is the last release which still supports
      deprecated PHP versions: 5.3 and 5.4 -- You should switch to PHP 5.6
      or at least 5.5 as soon as possible, or you will not be able to upgrade
      to newer BOA versions after 2.4.7 -- https://omega8.cc/node/330

  @=> What are BOA plans for Drupal 6 support after February 24th, 2016?

      We will support Drupal/Pressflow 6 in all new releases, for as long as
      the available PHP versions allow us to use it (we have run our own
      Pressflow 6 based site on PHP 5.6 for many months with zero issues).
      For more details please check:
      https://github.com/omega8cc/boa/issues/824

  @=> SSH (RSA) keys for root are required by newer OpenSSH versions used in BOA

      BOA installs SSH from sources by default (Debian only). This means that
      password based access for root will not work once BOA is installed or
      upgraded to the current stable version. This is a result of OpenSSH
      changes in recent releases, not a BOA specific change. BOA will deny the
      initial install, and Barracuda will refuse to run the upgrade, if it
      detects that the system root has no SSH (RSA) keys added and only
      password based access is available. You can still modify this behaviour
      in /usr/etc/sshd_config, but future OpenSSH versions may revert such
      changes, so it is not recommended.

  @=> BOA switched from SPDY to HTTP/2 + PFS on all supported OS versions

# Changes:

  * Allow to disable SQL monitoring with /root/.no.sql.cpu.limit.cnf -- #799
  * Disable page caching on the fly where needed
  * Disable temporarily support for broken Restaurant distro
  * Do not rebuild features and entities on cache clear
  * Document new requirement: SSH (RSA) keys for root -- fixes #786 #833
  * Make ioncube_loader optional and disable by default with _PHP_IONCUBE=NO
  * Nginx SSL: enable OCSP stapling by default
  * Nginx SSL: enable OCSP stapling for existing HTTPS vhosts
  * Nginx: Add ssl_dhparam to existing vhosts, if needed
  * Nginx: HTTP/2 replaces SPDY -- fixes #624
  * PHP: Add YAML extension with LibYAML
  * Preserve customized /etc/sysctl.conf -- fixes #789
  * Run modules ON/OFF only weekly -- requires _MODULES_FIX=YES (default is NO)
  * Run most of crontab, install and upgrade tasks with low priority using
    nice and ionice -- fixes #780

# System upgrades:

  * cURL 7.45.0 (if installed from sources)
  * GEOS 3.5.0 (requires _PHP_GEOS=YES)
  * Git 2.6.1 (if installed from 
sources)\n  * MariaDB 10.0.22\n  * MariaDB 5.5.47\n  * MariaDB Galera Cluster 10.0.22\n  * Nginx 1.9.7\n  * OpenSSL 1.0.2e (used only in custom built Nginx)\n  * PHP 5.5.30\n  * PHP 5.6.16\n  * Redis 3.0.5\n\n# Fixes:\n\n  * Add /root/.skip_cleanup.cnf support\n  * Add feature branch testing in HEAD\n  * Avoid load spikes caused by long running tasks\n  * Avoid race conditions on multi-line sed replacement -- fixes #806\n  * Clean up any remaining procs zombies\n  * Clean up postfix queue to get rid of bounced emails\n  * Disable ioncube and opcache for HHVM\n  * Disable Redis for Hostmaster in the backend\n  * Do not allow to install non-standard OpenSSH on Ubuntu\n  * Do not break /data/all/cpuinfo permissions on Octopus upgrade\n  * Do not run 'apt-get autoremove' automatically\n  * Do not use wrapper for dot-files cleanup\n  * Document better BOA aggressive installation behavior -- fixes #811\n  * Document boa in-octopus command -- fixes #817\n  * Don't strip $args from $request_uri in redirects\n  * Fix cron schedule for upgrades\n  * Fix for /etc/sudoers on _SQUEEZE_TO_WHEEZY\n  * Fix for broken Git on Ubuntu\n  * Fix for DNS on _SQUEEZE_TO_WHEEZY\n  * Fix for not working PHP rebuild check\n  * Fix for not working syncpass tool\n  * Fix for Ruby rebuild on _SQUEEZE_TO_WHEEZY\n  * Fix PHP deprecated warning in D8 -- fixes #804\n  * Ignore 'env COLUMNS' sent by Drush remotely -- fixes #373\n  * Ignore daily.sh in clear.sh\n  * Improve _SQUEEZE_TO_WHEEZY procedure -- #627\n  * Improve cron tasks schedule\n  * Improve daily cleanup performance + support for /root/.giant_traffic.cnf\n  * Improve devpts check -- fixes #788\n  * Improve docs/MIGRATE.txt\n  * Improve resolv.conf auto-recovery procedure\n  * Improve system check -- fixes #811\n  * Move Redis restart procedure to correct script\n  * PHP: Add missing path to open_basedir for CLI\n  * Remove debug code to not kill the initial install\n  * Remove not working /etc/logrotate.d/lshell -- fixes #823\n  * 
Update advagg auto configuration variables -- fixes #792\n  * Update boa/lib/functions/helper.sh.inc with current OS -- fixes #787\n  * Update FPM workers autoconf logic\n  * Update the cache cleanup logic\n  * Use better placeholder for solr_integration_module variable\n  * Use correct DPkg::Options for dist-upgrade -- fixes #627\n  * Use known MySQLTuner version -- fixes #827\n  * Use LibYAML 0.1.6\n  * Use opcache.restrict_api\n  * Use sha256 for self-signed certs\n\n\n### Stable BOA-2.4.6 Release - Full Edition\n### Date: Sat Sep 19 11:09:09 PDT 2015\n### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.6\n### Latest hotfix added on: Mon Sep 21 05:18:33 PDT 2015\n\n  @=> Includes Ægir Hostmaster 2.x-head with improvements\n  @=> Includes Ægir Provision 3.x-head with improvements\n  @=> Includes Drush 7 customized for BOA\n\n# Release Notes:\n\n  This BOA release includes several important system upgrades and bug fixes.\n  All supported Ægir platforms have been updated with latest Drupal cores.\n\n# Changes:\n\n  * Add Twig C extension to PHP - v.1.22.1\n  * Allow to customize auto-upgrades mode\n  * Disable support for broken OpenScholar and Recruiter\n  * Open default Postgres port for outgoing connections\n  * Remove support for deprecated Feature Server distro\n  * Remove support for deprecated OpenAcademy distro\n  * Remove support for deprecated OpenBlog distro\n  * Remove support for deprecated OpenChurch v.1 distro\n  * Remove support for deprecated OpenDeals distro\n  * Use distro specific Drupal core for problematic distros\n\n# System upgrades:\n\n  * cURL 7.44.0 (if installed from sources)\n  * Duplicity 0.7.05 (please run 'backboa install' to upgrade)\n  * Jetty 7.6.17.v20150415\n  * Jetty 8.1.17.v20150415\n  * MariaDB 10.0.21\n  * MariaDB 5.5.45\n  * MariaDB Galera Cluster 10.0.21\n  * Nginx 1.9.4\n  * OpenSSH 7.1p1 (if installed from sources)\n  * PHP 5.6.13, 5.5.29, 5.4.45\n  * PHP: ionCube loader 5.0.18\n  * Pure-FTPd 1.0.42\n  * 
Redis 3.0.4
  * Ruby 2.2.3, 2.0.0-p647
  * Use pecl-jsmin-1.1.0

# Fixes:

  * Allow to re-install deleted D7/D6 platforms when dev doesn't exist
  * Do not install phpunit -- it adds many PHP tools we don't need
  * Drush requires php-eval to run drush_find_tmp() in sql-sync
  * Fix apache cleanup
  * Fix invalid regex in the INI docs
  * Improve auto-healing for SSHd
  * Improve Nginx DoS and DDoS protection
  * Improve pdnsd auto-healing
  * Improve SSL Docs to add more detail about multidomain certificates #757
  * Issue #766 - Fix for broken boa in-octopus procedure
  * Nginx: Fix support for s3/files/styles (s3fs)
  * Restart PHP-FPM if too many running children are detected
  * Sync .htaccess with D7 core
  * Sync keywords for exceptions in daily.sh with global.inc
  * Use short sleep on firewall temp blocks cleanup


### Stable BOA-2.4.5 Release - Full Edition
### Date: Fri Jul 10 11:25:43 PDT 2015
### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.5
### Latest hotfix added on: Fri Jul 10 14:49:11 PDT 2015

  @=> Includes Ægir Hostmaster 2.x-head with improvements
  @=> Includes Ægir Provision 3.x-head with improvements
  @=> Includes Drush 7 customized for BOA

# Release Notes:

  This BOA release includes PHP security upgrades for versions 5.6, 5.5
  and 5.4, plus a security upgrade for the Redis server and four updated
  Octopus platforms.

  Support for Drupal 8 is temporarily removed, because it would now require
  an upgrade to Drush 8, which in turn completely removes support for PHP 5.3.
  It is still more important to support legacy Pressflow 6 sites that are not
  ready to move beyond PHP 5.3 yet than to chase fast-moving targets like
  Drupal 8 beta and Drush 8 head.

# Updated Octopus platforms:

  Commerce 2.26 ---------------- https://drupal.org/project/commerce_kickstart
  Commons 3.28 ----------------- https://drupal.org/project/commons
  OpenAtrium 2.43 
-------------- https://drupal.org/project/openatrium
  Panopoly 1.25 ---------------- https://drupal.org/project/panopoly

# Changes:

  * Drupal 8 is not supported until we can switch to Drush 8 and remove PHP 5.3

# System upgrades:

  * Nginx 1.9.2
  * PHP 5.4.43
  * PHP 5.5.27
  * PHP 5.6.11
  * Redis 3.0.2


### Stable BOA-2.4.4 Release - Full Edition
### Date: Fri Jul  3 12:08:29 PDT 2015
### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.4
### Latest hotfix added on: Thu Jul  9 10:28:42 PDT 2015

  @=> Includes Ægir Hostmaster 2.x-head with improvements
  @=> Includes Ægir Provision 3.x-head with improvements
  @=> Includes Drush 7 customized for BOA

# Release Notes:

  This BOA release includes several important system upgrades and bug fixes.

  All supported Ægir platforms have been updated with latest Drupal cores.

  This version automatically switches all hosted sites to PHP 5.5 on systems
  hosted and managed remotely by the Omega8.cc support team, unless you have
  explicitly switched your Octopus instance to the PHP version you prefer.
  Using PHP older than 5.5 is strongly discouraged, for security, stability
  and performance reasons.

# Changes:

  * Do not change mysql root password by default -- workaround for #642
  * Enable advagg_async_generation by default
  * Logic update for /root/.high_traffic.cnf
  * Redis Integration Module: Update to version mod-26-06-2015
  * Use modern ssl_ciphers in all templates by default

# System upgrades:

  * cURL 7.43.0 (if installed from sources)
  * Drush mini-7-30-06-2015 -- fixes #734
  * MariaDB 5.5.44
  * MariaDB Galera Cluster 10.0.20
  * Nginx 1.9.1
  * OpenSSH 6.9p1 (if installed from sources)
  * OpenSSL 1.0.1p (if installed from sources)
  * PHP 5.4.42
  * PHP 5.5.26
  * PHP 5.6.10
  * PHPRedis master-27-06-2015
  * Pure-FTPd 1.0.41
  * vnStat 1.14

# Fixes:

  * Add 'grep' to overssh -- a list of commands allowed to execute 
over SSH\n  * Broken pdnsd configuration breaks DNS resolver -- fixes #701\n  * Do not force update_agents()\n  * Do not modify rkey/debug args in barracuda log/system upgrade mode\n  * Don't remove Drupal 6 core themes -- fixes #738\n  * Fix for legacy vnStat config\n  * Fixed backboa/duobackboa retrieve from remote host -- fixes #741\n  * Improve system cron tasks queue\n  * Incorrect permissions on /usr/bin/optipng - fixes #722\n  * Mitigate LOGJAM - fixes #723\n  * Restart Postfix after system DNS update -- #701\n  * Skip daily reload on high traffic instances\n  * Sync SQL connection limits with _PHP_FPM_WORKERS variable - fixes #699\n  * Use _AWS_URL to properly handle us-east-1 exception\n  * Use 2048 bit where possible - see #723\n  * Use better default value for advagg_cache_level - fixes #726\n\n\n### Stable BOA-2.4.3 Release - Full Edition\n### Date: Tue May 19 13:40:40 PDT 2015\n### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.3\n### Latest hotfix added on: Fri Jun  5 04:43:50 PDT 2015\n\n  @=> Includes Ægir Hostmaster 2.x-head with improvements\n  @=> Includes Ægir Provision 3.x-head with improvements\n  @=> Includes Drush 7 customized for BOA\n\n# Release Notes:\n\n  This BOA release is focused on Ægir platforms update with latest Drupal core\n  included. 
There are also a few system updates and bug fixes, as listed below.\n\n# Changes:\n\n  * Redis Integration Module: Update to version mod-08-05-2015\n  * Use HTTPS intermediate mode to support legacy systems like XP/IE8 - see #718\n\n# System upgrades:\n\n  * Drush mini-7-08-05-2015\n  * MariaDB 10.0.19\n  * MariaDB Galera Cluster 10.0.19\n  * PHP 5.4.41\n  * PHP 5.5.25\n  * PHP 5.6.9\n  * Redis 3.0.1\n\n# Fixes:\n\n  * CiviCRM known bugs and regressions fixed\n  * Improve drush aliases cleanup\n  * Redis: sync net.core.somaxconn with tcp-backlog\n  * sqlmagic: do not escape backslashes and EOL character - fixes #672\n  * SQL dump definer regexp causes invalid SQL during migrate/clone - #2497091\n  * Fix for backward compatibility with old Galera versions\n\n\n### Stable BOA-2.4.2 Release - Full Edition\n### Date: Mon Apr 27 11:12:09 PDT 2015\n### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.2\n### Latest hotfix added on: Fri May  1 02:07:54 PDT 2015\n\n  @=> Includes Ægir Hostmaster 2.x-head with improvements\n  @=> Includes Ægir Provision 3.x-head with improvements\n  @=> Includes Drush 7 customized for BOA\n\n# Release Notes:\n\n  This BOA release includes 15 updated Ægir platforms with latest Drupal core,\n  2 new features and enhancements, 13 new software versions, 3 other changes,\n  plus over 20 bug fixes.\n\n# Updated Octopus platforms:\n\n  aGov 1.7 --------------------- https://drupal.org/project/agov\n  Commerce 1.36 ---------------- https://drupal.org/project/commerce_kickstart\n  Commerce 2.23 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 2.24 ----------------- https://drupal.org/project/commons\n  Commons 3.25 ----------------- https://drupal.org/project/commons\n  Guardr 2.11 ------------------ https://drupal.org/project/guardr\n  OpenAid 2.1 ------------------ https://drupal.org/project/openaid\n  OpenAtrium 2.33 -------------- https://drupal.org/project/openatrium\n  OpenChurch 1.17-b2 ----------- 
https://drupal.org/project/openchurch
  OpenChurch 2.1-b7 ------------ https://drupal.org/project/openchurch
  OpenOutreach 1.19 ------------ https://drupal.org/project/openoutreach
  OpenPublic 1.5 --------------- https://drupal.org/project/openpublic
  Panopoly 1.21 ---------------- https://drupal.org/project/panopoly
  Recruiter 1.6 ---------------- https://drupal.org/project/recruiter
  Restaurant 1.0-b12 ----------- https://drupal.org/project/restaurant

  @=> NOTE: Drupal 8 support is broken in this release, because the latest
      Drush doesn't support older Drupal 8 beta versions, while a new D8 beta
      is not released and tested yet. We really need the latest Drush to fix
      the broken D6->D7 upgrade path, so we can prepare for full Ægir 3,
      which comes with D7 in the frontend.

# New features and enhancements:

  * Re-create files/robots.txt if older than 7 days
  * Restore default DNS when /root/.use.default.nameservers.cnf exists

# Changes:

  * Enable SPDY and PFS by default - fixes #545
  * Use GitLab as a secondary mirror
  * Whitelist drush pm-updatestatus

# System upgrades:

  * cURL 7.42.1 (if installed from sources)
  * Drush mini-7-25-04-2015
  * Duplicity 0.7.02 (please run 'backboa install' to upgrade)
  * MariaDB 5.5.43
  * MariaDB Galera Cluster 10.0.17
  * MySecureShell master-20-03-2015
  * Nginx 1.8.0
  * OpenSSH 6.8p1 (if installed from sources)
  * OpenSSL 1.0.2a (if installed from sources)
  * PHP 5.6.8, 5.5.24, 5.4.40
  * PHPRedis master-18-03-2015
  * Redis 3.0.0
  * Ruby 2.2.2

# Fixes:

  * Add service cron start to migrate docs - fixes #654
  * BOA.sh.txt should update installers when invoked interactively - fixes #644
  * Do not add Google DNS when custom DNS is expected
  * Do not count requests for image derivatives if private files mode is used
  * Do not create conflicting plain HTTP proxy for single IP mode - fixes #465
  * Force csf/lfd update before and after running 
barracuda upgrade - fixes #685
  * How to enable permanent redirect to HTTPS with single IP - #465
  * Improve DNS self-healing magic - see #674
  * Improve FPM auto-healing to properly detect conflicting instances
  * Make sure that dl mirrors never get blocked
  * Nginx: Stop the POST flood to /autodiscover/autodiscover.xml
  * Nginx: Use dummy db fastcgi_param placeholders if any of them is empty
  * Remove aggressive firewall cleanup - fixes #688
  * Remove one-time fix intended to sync new defaults - fixes #678
  * Update absolute URLs to files for sites cloned/migrated/renamed
  * Update composer on barracuda upgrade
  * Use _TOMCAT_TO_JETTY=NO in cnf template to avoid confusion - see #676
  * Use correct placeholder in the xboa proxy - fixes #655
  * Use MAIN_SITE_NAME instead of possibly fake SERVER_NAME - see #385
  * Where to add the SSL redirect configuration snippet - fixes #681


### Stable BOA-2.4.1 Release - Full Edition
### Date: Sun Mar  8 14:56:51 PDT 2015
### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.1
### Latest hotfix added on: Wed Mar 11 11:58:52 PDT 2015

  @=> Includes Ægir Hostmaster 2.x-head with improvements
  @=> Includes Ægir Provision 3.x-head with improvements
  @=> Includes Drush 7.0.0-alpha9 customized for BOA

# Release Notes:

  This new BOA release includes one new and 12 updated Ægir platforms,
  8 new features and enhancements, 15 new software versions, 10 other changes,
  plus over 38 bug fixes, with most notable features and changes listed below:

  @=> Add duobackboa with /root/.duobackboa.cnf file to run duplicate backups
  @=> Add SSL with TLS/SNI on server with one IP, multiple certificates support
  @=> Add support for Octopus batch migration - see docs/MIGRATE.txt for details
  @=> Allow to use _PHP_GEOS=YES with all PHP versions

# New Octopus platforms:

  OpenAid 2.0 ------------------ https://drupal.org/project/openaid

# Updated Octopus platforms:

  
Commerce 1.33 ---------------- https://drupal.org/project/commerce_kickstart\n  Commerce 2.21 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 2.22 ----------------- https://drupal.org/project/commons\n  Commons 3.22 ----------------- https://drupal.org/project/commons\n  Drupal 8.0.0-b7 -------------- https://drupal.org/drupal-8.0\n  Guardr 2.8 ------------------- https://drupal.org/project/guardr\n  OpenAtrium 2.32 -------------- https://drupal.org/project/openatrium\n  OpenChurch 2.1-b5 ------------ https://drupal.org/project/openchurch\n  OpenOutreach 1.16 ------------ https://drupal.org/project/openoutreach\n  OpenScholar 3.20.0 ----------- http://theopenscholar.org\n  Panopoly 1.18 ---------------- https://drupal.org/project/panopoly\n  Recruiter 1.5 ---------------- https://drupal.org/project/recruiter\n\n# New features and enhancements:\n\n  * Add compatibility with latest VS beng kernel\n  * Add duobackboa with /root/.duobackboa.cnf file to run duplicate backups\n  * Add support for multivalued fields in SOLR 4 - pull request #626\n  * Add support for mysqladmin proc logging\n  * Add support for Octopus batch migration - see docs/MIGRATE.txt for details\n  * Add support for scout/mysql monitoring\n  * CSF: Add popular ports 222 and 2222 to TCP_OUT by default\n  * SSL with TLS/SNI on server with one IP, multiple certificates - fixes #465\n\n# Changes:\n\n  * Allow to run automated SQL conversion only weekly\n  * Allow to use _PHP_GEOS=YES with all PHP versions\n  * Do not send extra nocache cookie on GET requests\n  * Drush mini-7-07-03-2015\n  * Make barracuda wrapper available on initial install to avoid confusion\n  * Nginx: Update for crawlers exceptions list\n  * Redis Integration Module: Update to version mod-05-03-2015\n  * Remove dependency on legacy Drush 4\n  * Use latest Apache Solr Search 6.x-3.x config\n  * Use latest Apache Solr Search 7.x-1.x config\n\n# System upgrades:\n\n  * Apache Solr 4.9.1\n  * cURL 7.41.0 (if 
installed from sources)\n  * Git 2.3.0 (if installed from sources)\n  * Jetty 9.2.7.v20150116\n  * MariaDB 10.0.17\n  * MariaDB 5.5.42\n  * MariaDB Galera Cluster 10.0.17\n  * Nginx 1.7.10\n  * OpenSSL 1.0.2 (if installed from sources)\n  * PHP 5.4.38\n  * PHP 5.5.22\n  * PHP 5.6.6\n  * PHP: ionCube loader 4.7.4\n  * Pure-FTPd 1.0.37\n  * Ruby 2.2.1\n  * Use duplicity 0.7.01 and boto 2.36.0 - fixes #630\n  * Vnstat 1.13\n\n# Fixes:\n\n  * [provision] False \"load on system too heavy\" messages - fixes #619\n  * [provision] Issue #2350695 - Profile is registered twice, also as a module\n  * [provision] Nginx: Remove webform keyword from regex locations - fixes #599\n  * Add also manage_ltd_users to the list - fixes #616\n  * Avoid installing New Relic with no valid license key provided - fixes #608\n  * Do not add no longer used symlink\n  * Do not create conflicting plain HTTP proxy for single IP mode - fixes #465\n  * Do not delete backboa while duplicity is running\n  * Do not replace any contrib in latest OA - fixes #2420131\n  * Do not run D7 core hotfix on already fixed instances\n  * Fix for legacy systems autoupdate logic\n  * Fix for missing chattr -i on web user update\n  * Fix for missing datestamp\n  * Fix for too dangerous pdnsd auto-config logic\n  * Fix pdnsd restarts procedures - fixes #610\n  * Fix permissions for pdnsd if needed\n  * Fix variable in autoupboa - pull request #629\n  * Force php.ini update\n  * Hotfix for cluster instances\n  * Hotfix for OpenSSL/cURL versions out of sync\n  * How to enable permanent redirect to HTTPS with single IP - #465\n  * Issue #2425963 - Broken slider in Commerce Kickstart 2.21\n  * Make sure that @hostmaster alias works after migration\n  * Provide a patch for older civicrm versions to make them Drush 7 compatible\n  * Randomize backups schedule to avoid issues with AWS limits\n  * Reload nginx service automatically - #465\n  * Remove conflicting pdnsd restarts to avoid race conditions - fixes #610\n  * 
Remove deprecated sysctl options\n  * Remove post-install leftovers if needed\n  * Single PHP-version installation fails - fixes #598\n  * Typo - fixes #539\n  * Unable to connect to SOLR on latest head - fixes #623\n  * Update installers as expected, also with _SKYNET_MODE=OFF - fixes #644\n  * Update meta-installers for new stable\n  * Update the upgrade procedure how-to - fixes #616\n  * Use civicrm-4.5.6 compatible with Drush 7\n  * Use correct AWS Endpoint when us-east-1 Region is specified\n  * Use correct open_basedir for lshell user - fixes #603\n  * Use separate loops for symlinks and ghost cleanup\n  * Workaround for EntityMalformedException in Open Outreach - fixes #229\n  * Workaround for missing interface/lo.pdnsd on legacy systems\n  * Workaround for SA-CONTRIB-2015-063 - Webform - Cross Site Scripting\n\n\n### Stable BOA-2.4.0 Release - Full Edition\n### Date: Wed Feb  4 20:30:04 CET 2015\n### Milestone URL: https://github.com/omega8cc/boa/milestones/2.4.0\n### Latest hotfix added on: Sat Feb 21 10:18:15 UTC 2015\n\n  @=> Includes Ægir Hostmaster 2.x-head with improvements\n  @=> Includes Ægir Provision 3.x-head with improvements\n  @=> Includes Drush 7.0.0-alpha8 customized for BOA\n\n# Release Notes:\n\n  This new BOA release includes 7 updated Ægir platforms, over 28 new features\n  and enhancements, 12 new software versions, over 36 important changes, plus\n  over 100 bug fixes, with most notable features and changes listed below:\n\n  @=> Added Support for latest Drupal 8.0.0-beta with D8B platform keyword\n  @=> Added Support for latest Drupal 8.0.0-dev with D8D platform keyword\n  @=> Added Support for latest PHP 5.6\n  @=> BOA can auto-detect its fastest download mirror on install, upgrade etc.\n  @=> BOA Code Refactoring to make it modular and easier to read (in progress)\n  @=> BOA Skynet auto-updates can be turned off with _SKYNET_MODE=OFF\n  @=> Cron is run only for live sites with no tmp, temp, dev, test etc keywords\n  @=> Force single 
PHP version with command keyword on install and upgrade\n  @=> Introducing Support for HHVM -- see docs/HHVM.txt for details.\n  @=> PHP 5.5 is used by default on new installs instead of old 5.3\n  @=> PHP-FPM (and HHVM) runs now as a separate, very limited system user\n  @=> Removed Support for legacy PHP 5.2\n  @=> Sites Names Exceptions and Special Keywords have changed\n  @=> The _MODULES_FIX variable is set to NO by default\n  @=> The _PERMISSIONS_FIX variable is set to NO by default\n  @=> The built-in registry-rebuild on every Verify task is not run by default\n  @=> The Dev-Mode works only for site aliases, no longer for main site name\n\n  Please read further below for more details.\n\n# Caveats for self-hosted BOA:\n\n  We recommend proceeding with the major upgrade procedure as follows:\n\n    $ cd;wget -q -U iCab http://files.aegir.cc/BOA.sh.txt;bash BOA.sh.txt\n    $ barracuda up-stable\n    $ barracuda up-stable system\n    $ octopus up-stable all both\n    $ bash /var/xdrago/manage_ltd_users.sh\n    $ bash /var/xdrago/daily.sh\n\n# Updated Octopus platforms:\n\n  aGov 1.6 --------------------- https://drupal.org/project/agov\n  Commerce 1.32 (with 1.11) ---- https://drupal.org/project/commerce_kickstart\n  Guardr 2.7 ------------------- https://drupal.org/project/guardr\n  OpenAtrium 2.26 -------------- https://drupal.org/project/openatrium\n  OpenChurch 1.17-b1 ----------- https://drupal.org/project/openchurch\n  OpenPublic 1.4 --------------- https://drupal.org/project/openpublic\n  Panopoly 1.15 ---------------- https://drupal.org/project/panopoly\n\n# New features and enhancements:\n\n  * Add backboa variables to configure full backup cycle and log verbosity.\n  * Add Backdrop CMS compatibility in global.inc (experimental)\n  * Add Drupal 8 compatibility in global.inc\n  * Add Drush Make Local - fixes #332\n  * Add safe_cache_form_clear Drush extension by default - fixes #568\n  * Add support for writable .aws directory in the web user home.\n  * 
Allow to set _PHP_SINGLE_INSTALL on command line - on install and upgrade.\n  * Allow to use both platform specific and ALL keyword in _PLATFORMS_LIST.\n  * BOA auto-selects the fastest download mirror on install, upgrade and update.\n  * Detect critically low free RAM and forcefully restart services if needed.\n  * Detect OOM incidents and forcefully restart services if needed.\n  * Improve backboa with AWS connection testing.\n  * Install latest D8-dev with D8D keyword specified.\n  * Monitor and rotate PHP error logs if too big (over 1 GB).\n  * Monitor the number of master PHP-FPM processes and force restart if needed.\n  * New 'nodns' option to skip DNS and SMTP checks on the fly.\n  * Nginx: Add support for images derivatives with URI shortcuts - fixes #481\n  * Nginx: Add support for URI shortcuts for sites in subdirectories.\n  * PHP: Add HHVMinfo.\n  * PHP: Add support for latest 5.6\n  * PHP: Allow to define version to install and use on command line - fixes #536\n  * PHP: Disable not used CLI versions if _PHP_SINGLE_INSTALL is defined.\n  * PHP: Disable not used FPM and CLI versions.\n  * PHP: HHVM experimental support - fixes #443\n  * Provide default value for composer_manager_vendor_dir variable - fixes #385\n  * Redis: Allow to configure remote IP via _REDIS_LISTEN_MODE /cluster support.\n  * Use cron scheduler fast mode (every 10 sec) if /root/.fast.cron.cnf exists.\n  * Use Drush Make Local for Hostmaster with download mirrors auto-detection.\n\n# Changes:\n\n  * Alter the cron_interval for existing sites to match Ægir default.\n  * Change required exceptions keywords to .temporary. 
and .testing.\n  * Dev mode detection and URLs protection - now works only for aliases.\n  * Do not display .cnf files contents if _DEBUG_MODE is not set to YES.\n  * Do not restart Redis daily if /root/.high_traffic.cnf exists - fixes #533\n  * Drush 7 is now used by default instead of Drush 6.\n  * Drush: Upgrade to mini-7-02-02-2015\n  * Force _TOMCAT_TO_JETTY=YES - fixes #570\n  * Hostmaster: Use Drush Make Local instead of downloading contrib with Drush\n  * Limit status messages verbosity if _DEBUG_MODE is not set to YES\n  * Make it possible to opt-out from BOA Skynet auto-updates - fixes #557\n  * Nginx: Block SEOkicks crawler.\n  * PHP: Always use by default version 5.5\n  * PHP: Disable legacy 5.2 version if installed.\n  * PHP: Ignore --with-curlwrappers defined in _PHP_EXTRA_CONF for 5.5 and 5.6\n  * PHP: Rebuild to remove --with-curlwrappers unless added in _PHP_EXTRA_CONF\n  * PHP: Remove no longer working custom config protection - see #559\n  * PHP: Tune FPM defaults for speed and RAM optimization.\n  * PHP: Use built-in Zend OPcache in 5.5\n  * PHP: Use built-in Zend OPcache in 5.6\n  * Redis Integration Module: Update to version mod-14-12-2014\n  * Reload system cron hourly.\n  * Remove deprecated RC4 from ssl_protocols.\n  * Remove the _O_CONTRIB_UP variable/feature.\n  * Run cron for 3 sites at once max.\n  * Set _MODULES_FIX=NO by default\n  * Set _PERMISSIONS_FIX=NO by default\n  * Site mode detection and cron protection - cron works only for live sites\n  * Split huge BARRACUDA script into lib includes.\n  * Switch to special limited system user also in PHP-FPM mode - fixes #551\n  * There is no need to update drupalgeddon every 5 minutes.\n  * Use 86400 as a default cron_interval to sync with Drupal default.\n  * Use MySQLTuner only if _USE_MYSQLTUNER=YES is set in .barracuda.cnf\n  * Use provision_civicrm 6.x-2.x directly.\n  * Use separate versioning for Ægir extensions download URLs.\n  * Run built-in registry-rebuild on Verify only if 
empty ctrl file\n    sites/all/modules/registry-rebuild.ini exists.\n\n# System upgrades:\n\n  * cURL 7.40.0 (if installed from sources)\n  * Git 2.2.1 (if installed from sources)\n  * MariaDB 10.0.16\n  * MariaDB 5.5.42\n  * MariaDB Galera Cluster 10.0.16\n  * Nginx 1.7.9\n  * PHP 5.4.37\n  * PHP 5.5.21\n  * PHP 5.6.5\n  * PHP: ionCube loader 4.7.3\n  * Redis 2.8.19\n  * Ruby 2.2.0\n\n# Fixes:\n\n  * Add CONTRIBUTING.txt guidelines.\n  * Add in docs/HINTS.txt Helper locations to avoid 404 on legacy images paths.\n  * Add still missing updates for migrated instances.\n  * Add warning about vCloud Air incompatibility with Drupal.\n  * Aliases are wiped out after site rename - fixes #542\n  * Allow slower DNS response.\n  * Always disable spinner when running boa in-octopus.\n  * Avoid broken install on D8 core where sites/all doesn't exist by default.\n  * Avoid confusing EXIT: You must specify already installed PHP version.\n  * Avoid sed warnings in old stable and legacy modes.\n  * Backward compatibility with Drush 6.\n  * Block attempts to lookup /etc/passwd via web shell.\n  * Check only LANG environment variable in locale test - fixes #584\n  * Compare $new_uri with d()->name and not d()->uri in the Site Rename Check.\n  * Delete duplicity ghost pid file if older than 2 days.\n  * Do not confuse D7 with D8 or Backdrop CMS.\n  * Do not force cURL reinstall from packages - fixes #565\n  * Do not try to add platforms nodes if no new platform has been installed.\n  * Do not update backboa if duplicity is running.\n  * Document when to use /root/.fast.cron.cnf\n  * Drupal 8 removed drupal_mail()\n  * Drupal 8 requires container_yamls defined.\n  * Drupal 8 requires read permissions in sites/all\n  * Drupal 8 requires trusted_host_patterns defined in settings.php\n  * Drupal 8 with $clean_urls=1 should use /cron/ URI.\n  * Drush 7 requires composer.\n  * Fix and Improve Squeeze to Wheezy upgrade procedure.\n  * Fix for $HOME detection if not set for some reason.\n  
* Fix for Drush aliases protection.\n  * Fix for octopus batch upgrade mode.\n  * Fix for octopus single upgrade mode.\n  * Fix for pdnsd install/update logic.\n  * Fix missing symlinks after broken openjdk-6 upgrade.\n  * Fix path to PHP-CLI if needed.\n  * Fix public IP auto-detection on AWS in Octopus.\n  * Fix the logic for aegir/platforms upgrade mode.\n  * Fix the logic for TMPDIR set on the fly - fixes #552\n  * Fix: LANGUAGE (en_US.UTF-8) is not compatible with LC_ALL (). Disabling it.\n  * Force _PHP_MULTI_INSTALL to match defined _PHP_FPM_VERSION on cluster nodes.\n  * Force _THIS_DB_HOST=localhost on AWS.\n  * HHVM: Add /home/ to open_basedir so access to the .tmp works - fixes #569\n  * HHVM: Add workarounds for potential security issues - fixes #443\n  * Improve Ægir tasks scheduling and load spikes protection.\n  * Improve docs for backboa.\n  * Improve pdnsd configuration update by removing non-IP lines early enough.\n  * Improve procs monitor.\n  * Improve web wrapper.\n  * Increase inotify defaults to improve lsyncd support.\n  * Issue #2372653: Add --no-autocommit when dumping MySQL tables.\n  * Jetty: Detect if running as zombie and force restart if needed.\n  * Make sure that AcceptEnv is set in sshd_config.\n  * Make sure to never run cron on just cloned site.\n  * MariaDB patch is no longer needed.\n  * Monitor lsyncd and xinetd if installed and expected to run.\n  * Never delete tmp dirs to avoid Drush/PHP segfaults and race conditions.\n  * Nginx: Add missing variables in subdirectory config template.\n  * Nginx: Fix for D8-specific /cron/ location regex.\n  * Nginx: Force clean URLs for Drupal 8.\n  * Nginx: Helper locations to avoid 404 on legacy images paths (subdir only)\n  * Nginx: Hide X-Drupal-Cache-Tags header.\n  * Nginx: Use safe fallback for mysteriously empty $db_port\n  * PHP: Avoid version guessing for Octopus when _PHP_SINGLE_INSTALL is used.\n  * PHP: Make sure that _PHP_SINGLE_INSTALL takes precedence.\n  * PHP: OPcache 
configuration for Drupal 8 - fixes #419\n  * PHP: Re-install libmagickwand-dev to avoid broken extension build.\n  * PHP: The fallback version should be detected and not hardcoded.\n  * Prevent 'Could not change permissions' warnings with CiviCRM - fixes #523\n  * Remove Drupal 8 specific code from settings template used in older Drupal.\n  * Remove known sensitive credentials from barracuda upgrade log.\n  * Revert \"Issue #2313327: Fixed Unknown options for provision-verify.\"\n  * Run agents update on cluster nodes.\n  * Run single mirror check - fixes #565\n  * RVM: Install also eventmachine-1.0.3\n  * Set files paths on D8 install to avoid using system default /tmp.\n  * Silence confusing noise - fixes #589\n  * Skip auto-update for agents not compatible with older versions.\n  * Skip extra SQL connection test on AWS.\n  * Standardize platforms version and naming convention.\n  * Support for _NGINX_NAXSI is experimental (don't use)\n  * Symlinks directories expected by Drush/Ægir in D8 root.\n  * Sync defaults for hosting_advanced_cron_default_interval\n  * Syntax error - fixes #587\n  * Syntax error - fixes #588\n  * The _NGINX_FORWARD_SECRECY=YES is ignored on Debian Wheezy - fixes #591\n  * The /login suffix is no longer supported in Drupal 8 and results in a 404.\n  * The backend verify sub-task breaks site import for Drupal 8.\n  * Tomcat is not used anymore - see #570\n  * Use consistent stderr 2 stdout redirects in grep checks.\n  * Use correct _THIS_DB_HOST on master instance.\n  * Use correct pid file in procs monitor.\n  * Use correct user to run drush test commands.\n  * Use extended display mode for messages longer than 200 chars.\n  * Use faster mysqldump mode/flags.\n  * Use mirror to download complete vendor directory for Drush 7.\n  * Use more intuitive PHP keyword naming convention.\n  * Use mutatable interface in install_8.inc - fixes #2409085\n  * Use recommended releases for views404 and views_accelerator - fixes #578\n  * Use release 
specific o_contrib downloads.\n  * Use safe tmp cleanup to avoid race conditions.\n  * Where to set _USE_MYSQLTUNER variable - fixes #594\n\n\n### Stable BOA-2.3.8 Release - Full Edition\n### Date: Sat Nov 29 09:58:45 SGT 2014\n### Includes Ægir 2.x-head with improvements\n\n# Release Notes:\n\n  This new BOA release includes new features, improvements and bug fixes.\n\n#-### Support for optional Drupalgeddon daily checks on all hosted D7 sites\n\n  ~/static/control/drupalgeddon.info\n\n  Previously enabled by default, this check now requires the control file\n  above to keep running daily. It may generate false positives that are not\n  always possible to avoid or silence, so running it daily no longer makes\n  sense, especially since BOA has already run it automatically for a month\n  and has automatically disabled all clearly compromised sites.\n\n  Note that your system administrator may still enable this check with the\n  root level control file /root/.force.drupalgeddon.cnf, so it will still\n  run even if you do not create the Octopus instance level empty control file:\n  ~/static/control/drupalgeddon.info\n\n  Please note that the current version of the Drupalgeddon Drush extension\n  needs the 'update' module to be enabled to avoid even more false positives,\n  so BOA will enable the 'update' module temporarily while running this\n  check, which in turn will result in even more email notices being sent\n  to the site admin email, if these notices are enabled.\n\n#-### Support for automated BOA upgrades: weekly and one-time\n\n  You can configure BOA to run automated upgrades to the latest stable\n  version for both Barracuda and all Octopus instances with three variables,\n  empty by default. 
All three variables must be defined to enable auto-upgrade.\n  You can set _AUTO_UP_MONTH and _AUTO_UP_DAY to any date in the past\n  if you wish to enable only weekly system upgrades.\n\n  Remember that a one-time upgrade includes a complete upgrade to the latest\n  stable BOA for Barracuda and all Octopus instances, while the weekly\n  upgrade runs only 'barracuda up-stable system'.\n\n  _AUTO_UP_WEEKLY= #------ Day of week (1-7) for weekly system upgrades\n  _AUTO_UP_MONTH= #------- Month (1-12) to define date of one-time upgrade\n  _AUTO_UP_DAY= #--------- Day (1-31) to define date of one-time upgrade\n\n  All three variables should be added in your /root/.barracuda.cnf file.\n\n# Updated Octopus platforms:\n\n  ERPAL 2.2 -------------------- https://drupal.org/project/erpal\n\n# New features and enhancements in this release:\n\n  * Support for automated BOA upgrades: weekly and one-time.\n\n# Changes in this release:\n\n  * Drupalgeddon daily checks on all hosted D7 sites are now optional.\n\n# Fixes in this release:\n\n  * Issue #508 - The _EASY_HOSTNAME is not required in local install mode.\n  * Issue #516 - Do not break binaries detection with 'which'.\n\n\n### Stable BOA-2.3.7 Release - Full Edition\n### Date: Tue Nov 25 15:44:48 PST 2014\n### Includes Ægir 2.x-head with improvements\n\n# Release Notes:\n\n  This new BOA release includes updated versions of all supported Drupal\n  platforms to provide the latest Drupal 7.34 and Pressflow 6.34 cores, plus\n  new features, improvements and bug fixes.\n\n  We recommend that you upgrade your D7 sites using this safe workflow:\n\n    https://omega8.cc/your-drupal-site-upgrade-safe-workflow-298\n\n  For up-to-date information on #Drupageddon please check:\n\n    https://omega8.cc/drupageddon-psa-2014-003-342\n\n#-### Support for locking/unlocking web server write access in all codebases\n\n  This new protection, enabled by default, will enhance your system\n  security, especially for sites in 
custom platforms you maintain\n  in the ~/static directory tree.\n\n  It is important to understand that your web server / PHP-FPM runs as your\n  shell/ftps user, although with a different group. This makes it possible\n  to maintain a virtual chroot for Octopus instances, which significantly\n  improves security.\n\n  However, it had a serious drawback: the web server had write access in all\n  your platform codebases located in the ~/static directory tree, because\n  all files you have uploaded there have the same owner.\n\n  While this allows you to use code management workflows which require web\n  hooks, it also opens a door for possible attack vectors, like the infamous\n  #drupageddon disaster, where Drupal allowed attackers to create .php files\n  inside your codebase, intended to be used as backdoors in future attacks.\n\n  This could affect only custom platforms you maintain in the ~/static\n  directory tree, since all built-in Octopus platforms always had Drupal core\n  completely write-protected. Moreover, even if created by an attacking bot,\n  these extra .php files are useless for attackers, because BOA's default\n  restricted configuration doesn't allow execution of unknown, not\n  whitelisted .php files. Still, a codebase writable by your web server is\n  dangerous, because at least theoretically it opens a possibility to\n  overwrite valid .php files, so they could be used as an entry point in a\n  future attack.\n\n  BOA now protects all your codebases by reverting (daily) ownership on all\n  files and directories in your codebase (modules and themes) so they are\n  owned by the Ægir backend user and not your shell/ftps user.\n\n  While this new default procedure protects all your codebases in the ~/static\n  directory tree, and even in the sites/all directory tree, and even in the\n  sites/foo.com/modules|themes tree in all your built-in Octopus platforms,\n  you can still manage the code and themes with your main and extra shell\n  accounts as usual, 
because your codebase is still group writable, and your\n  shell accounts are members of a group not available to the web server.\n\n  You can easily disable this default daily procedure with a single switch:\n\n    ~/static/control/unlock.info\n\n  You can also exclude any custom platform you maintain in the ~/static\n  directory tree from this global procedure by adding an empty skip.info\n  control file in the given platform root directory, so all other platforms\n  are still protected, and only the excluded platform is open for write\n  access by the web server. But normally you should never need this unlock!\n\n  Please note that this procedure will not affect any platform if you have\n  the non-default _PERMISSIONS_FIX=NO setting in your /root/.barracuda.cnf\n  file. It will also skip any platform with the fix_files_permissions_daily\n  variable set to FALSE in the given platform's active INI file.\n\n# Updated Octopus platforms:\n\n  Commerce 1.32 ---------------- https://drupal.org/project/commerce_kickstart\n  Commerce 2.20 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 2.21 ----------------- https://drupal.org/project/commons\n  Commons 3.20 ----------------- https://drupal.org/project/commons\n  Guardr 2.5 ------------------- https://drupal.org/project/guardr\n  Open Atrium 2.25 ------------- https://drupal.org/project/openatrium\n  Open Outreach 1.13 ----------- https://drupal.org/project/openoutreach\n  Panopoly 1.14 ---------------- https://drupal.org/project/panopoly\n\n# New features and enhancements in this release:\n\n  * Support for locking/unlocking web server write access in all codebases.\n\n# Changes in this release:\n\n  * Do not force site_readonly to be disabled on non-dev sites.\n\n# System upgrades in this release:\n\n  * MariaDB 10.0.15\n\n# Fixes in this release:\n\n  * Allow any single site to use 1/2 of available SQL connections max.\n  * Clean up dot files after installing or updating RVM.\n  * Do not run 
extra updates on systems running latest head version.\n  * Improve ghost sites cleanup.\n  * Issue #467 - Centralize control files outside of codebases tree.\n  * Issue #498 - ERPAL: Fatal error: Unsupported operand types.\n  * Issue #499 - RVM: Add oily_png gem version 1.1.1\n  * Issue #504 - Add docs/RVM.txt\n  * Issue #504 - Remove ~/.rvm/scripts/notes script breaking lshell.\n  * Issue #509 - Do not delete anything from hostmaster site level modules.\n  * It is safe to run manage_ltd_users every minute.\n  * Never touch hostmaster aliases and vhosts even if they appear broken.\n  * Nginx: Fix for possible problem with files/imagecache in legacy D6 sites.\n  * Use gnupg2 by default.\n  * Use latest Ruby 2.1.x or 2.0.x available.\n  * Use verbose RVM install mode to improve debugging.\n\n\n### Stable BOA-2.3.6 Release - Full Edition\n### Date: Mon Nov 17 08:11:17 SGT 2014\n### Includes Ægir 2.x-head with improvements\n\n# Release Notes:\n\n  This new BOA release includes updated versions of all supported Drupal\n  platforms to provide the latest Drupal 7.33 core, plus great new features,\n  improvements and bug fixes.\n\n  We recommend that you upgrade your D7 sites using this safe workflow:\n\n    https://omega8.cc/your-drupal-site-upgrade-safe-workflow-298\n\n  For up-to-date information on #Drupageddon please check:\n\n    https://omega8.cc/drupageddon-psa-2014-003-342\n\n#-### Support for automated, encrypted, daily backups to Amazon S3\n\n  * This new feature is available on self-hosted BOA and hosted Power Engines.\n  * Note that the provided 'backboa' tool uses symmetric password-only encryption.\n  * You can configure the AWS Region you prefer and the Backup Rotation policy.\n\n  It will archive all directories required to restore your data (sites files,\n  databases archives, Nginx configuration and more) on a freshly installed BOA:\n\n    /etc /var/aegir /var/www /home /data\n\n  It will start to run nightly at 2:08 AM (server time) only once you add\n  five 
required _AWS_* variables in the /root/.barracuda.cnf file and run the\n  special command 'backboa install' while logged in as root.\n\n  To restore any file from backups created with 'backboa' tool, you can use\n  the same script on the same or any other BOA server.\n\n  Please read docs/BACKUPS.txt at https://github.com/omega8cc/boa for details.\n\n# Updated Octopus platforms:\n\n  Commons 3.19 ----------------- https://drupal.org/project/commons\n  Open Atrium 2.24 ------------- https://drupal.org/project/openatrium\n  Open Deals 1.35 -------------- https://drupal.org/project/opendeals\n  OpenChurch 1.15 -------------- https://drupal.org/project/openchurch\n  OpenChurch 2.0-b2 ------------ https://drupal.org/project/openchurch\n  OpenScholar 3.16.0 ----------- http://theopenscholar.org\n  Panopoly 1.13 ---------------- https://drupal.org/project/panopoly\n  Restaurant 1.0-b10 ----------- https://drupal.org/project/restaurant\n  Ubercart 2.14 ---------------- https://drupal.org/project/ubercart\n  Ubercart 3.8 ----------------- https://drupal.org/project/ubercart\n\n# New features and enhancements in this release:\n\n  * Add support for automated, encrypted, daily backups to Amazon S3.\n  * Automatic shutdown for sites with known #Drupageddon users/roles added.\n  * Drush drupalgeddon extension added in all accounts.\n  * Make _STRONG_PASSWORDS length configurable: 8-128, YES (32), NO (8).\n  * Support for web and db clusters with MariaDB Galera (work in progress).\n  * Apply SA-CORE-2014-005 hot-fix daily everywhere, also on BOA (any version)\n    servers left on the auto-pilot.\n\n# Changes in this release:\n\n  * Do not force site_readonly to be disabled on non-dev sites.\n  * Ignore disabled sites in daily monitoring and healing procedures.\n  * Remove support for abandoned Managing News distro.\n  * Remove support for abandoned Open Atrium 6.x distro.\n  * Remove support for abandoned Spark distro.\n  * Remove support for abandoned Totem distro.\n  * Set 
_PERMISSIONS_FIX=YES by default, so important fixes can be applied.\n  * Update BOA wrappers hourly.\n\n# System upgrades in this release:\n\n  * cURL 7.39.0 (if installed from sources)\n  * Drush: Upgrade command line version 6 to mini-6-30-10-2014\n  * Nginx 1.7.7\n  * PHP 5.4.35\n  * PHP 5.5.19\n  * PHP: Zend OPcache master-08-11-2014\n\n# Fixes in this release:\n\n  * Add scout user if _SCOUT_KEY is not empty or cron entry exists.\n  * Always escape dots in preg_replace() to not truncate www. by mistake.\n  * Check if directory tree exists before running extended checks/fixes.\n  * Clear drush cache directly before running hostmaster-migrate.\n  * Disable scout if installed and enable later.\n  * Do not export LC_CTYPE on initial install.\n  * Do not use Redis on provision-save.\n  * Fix for edge case when incorrect permissions were set in custom platform.\n  * Fix for openatrium-7.x-2.22-7.32.1\n  * Fix for site_readonly mode in migrated instances.\n  * Force setting to avoid issues with not expected to work RVM self-update.\n  * Hint for Apache Solr Attachments and Java path possible confusion.\n  * Improve web wrapper filtering.\n  * Issue #2163979 - Check if field_info_field_map() is available.\n  * Issue #2373923 - HTTPS and aliases redirection problem with Nginx.\n  * Issue #438 - PHP: Remove support for 5.5 built-in Zend OPcache.\n  * Issue #452 - PHP build could be broken also with MariaDB newer than 5.5.40\n  * Issue #456 - Aliases redirection: problems with AdvAgg paths.\n  * Issue #457 - Aliases redirection: 404 file not found for resources.\n  * Issue #461 - Remote Import needs Drush strict=0 mode.\n  * Issue #463 - The yajl-ruby gem needs native binaries building.\n  * Issue #480 - Normalize /etc/hosts to avoid FQDN mapped to 127.0.1.1\n  * Issue #490 - Nginx: Block semalt botnet.\n  * Issue #496 - RVM 1.26.0 introduces signed releases (rvm: not found error).\n  * Make sure that hostmaster site usage is not counted.\n  * Move DB GRANTS setup for 
master instance to the correct level.\n  * Move redis server daily restart to daily.sh agent.\n  * Nginx: Fail if required db creds are empty to never create a broken vhost.\n  * Remove hardcoded DNS for files.aegir.cc\n  * Strict Permissions on All Binaries are default, not optional.\n  * There is no point in running MySQLTuner on initial install.\n  * Whitelist mysql command for overssh in lshell.\n\n\n### Stable BOA-2.3.5 Release - Full Edition\n### Date: Wed Oct 15 16:28:25 PDT 2014\n### Includes Ægir 2.1 with improvements\n### Latest hotfix added on: Thu Oct 16 08:55:02 PDT 2014\n\n# Release Notes:\n\n  This new BOA release includes important updates and bug fixes.\n\n  * All new Drupal 7 platforms received a Drupal core security upgrade.\n    For details please read: https://www.drupal.org/SA-CORE-2014-005\n\n  * All existing Drupal 7 built-in platforms will receive a hot-fix for\n    this known vulnerability: https://www.drupal.org/SA-CORE-2014-005\n    once you run the 'barracuda up-stable' command on your server.\n    This procedure is automated on hosted and managed Ægir at Omega8.cc\n\n  * Your custom D7 platforms created in the ~/static directory tree\n    will be checked within 12 hours after the upgrade, and if you\n    have not applied this patch yet, it will be applied automatically\n    for you - but only if there is at least one active site present\n    in the given custom D7 platform. 
Note that while this procedure is\n    automated on hosted and managed Ægir at Omega8.cc, on self-hosted\n    BOA systems it will work only if you set _PERMISSIONS_FIX=YES\n    in /root/.barracuda.cnf (default is NO)\n\n  We recommend that you upgrade your D7 sites using the safe workflow:\n\n    https://omega8.cc/your-drupal-site-upgrade-safe-workflow-298\n\n# Updated Octopus platforms:\n\n  aGov 1.5 --------------------- https://drupal.org/project/agov\n  Commerce 1.31 ---------------- https://drupal.org/project/commerce_kickstart\n  Commerce 2.19 ---------------- https://drupal.org/project/commerce_kickstart\n  ERPAL 2.1 -------------------- https://drupal.org/project/erpal\n  Guardr 1.14 ------------------ https://drupal.org/project/guardr\n  Open Atrium 2.22 ------------- https://drupal.org/project/openatrium\n  Open Outreach 1.12 ----------- https://drupal.org/project/openoutreach\n  OpenPublic 1.2 --------------- https://drupal.org/project/openpublic\n  Panopoly 1.12 ---------------- https://drupal.org/project/panopoly\n  Recruiter 1.3 ---------------- https://drupal.org/project/recruiter\n\n# New features and enhancements in this release:\n\n  * Explain that Solr self-provisioning works only if _MODULES_FIX=YES is set.\n  * Reverify all sites daily if /root/.force.sites.verify.cnf ctrl file exists\n    and _PERMISSIONS_FIX=YES is set in /root/.barracuda.cnf (default is NO)\n\n# Changes in this release:\n\n  * Security: Remove support for SSLv3 due to POODLE vulnerability.\n  * Disable Redis in Hostmaster until we fix the Views based pages/blocks.\n  * Disable site_readonly for non-dev sites by default.\n  * Drush: Upgrade command line version 6 to mini-6-04-10-2014\n  * Enable AllowUserFXP in Pure-FTPd config by default.\n  * Remove support for already deprecated non-LTS Ubuntu versions.\n  * Run manage_ip_auth_access only once per minute.\n  * The INI variable redis_flush_forced_mode is enabled by default (again).\n  * Use sysklogd instead of rsyslog 
on Ubuntu.\n\n# System upgrades in this release:\n\n  * MariaDB 5.5.40\n  * Nginx 1.7.6\n  * OpenSSH 6.7p1 (if installed from sources)\n  * OpenSSL 1.0.1j (if installed from sources) - security upgrade.\n  * PHP 5.5.18\n  * PHPRedis: master-03-10-2014\n\n# Fixes in this release:\n\n  * Add auto-detection of Legacy Ruby patch level update on old systems.\n  * Add cleanup for ghost/broken sites dirs leftovers.\n  * Add missing cleanup for backup_migrate leftovers.\n  * Always cleanup pid files on exit/abort.\n  * Apply patch for SA-CORE-2014-005 in all shared D7 cores/built-in platforms.\n  * Compass Tools: Install 1.9.3 ffi expected by older themes.\n  * Fix db_port entry in all vhosts hourly.\n  * Fix for broken erpal-7.x-2.0-7.31.1\n  * Fix for broken site level drushrc.php file.\n  * Fix for false alarm caused by ghost sites leftovers.\n  * Fix for incorrect hash filtering on systems with OpenSSL built from sources.\n  * Fix locales: Numerous fixes and improvements -- thanks ar-jan!\n  * Fix typo in REVISIONS.\n  * Force site Verify via frontend if drushrc.php has been fixed.\n  * Issue #435 - SQL: Remove deprecated table_cache +update table_open_cache\n  * Issue #440 - Improve innodb_buffer_pool_size calculation and add 10%\n  * Issue #441 - New Relic is not disabled after removing newrelic.info file.\n  * Issue #442 - Skip locked/fpmcheck if /root/.high_traffic.cnf exists.\n  * Issue #444 - PHP: Remove useless sed replacement in pool.d/www{*}.conf\n  * Issue #445 - Remote Import: update 6.x-2.x branch for Ægir 2.x and Drush 6\n  * Issue #447 - Export LANG, LANGUAGE and all LC_ environment variables.\n  * Issue #447 - Improve locales consistency.\n  * Issue #447 - Set default LC_CTYPE and LC_COLLATE environment variables.\n  * Issue #447 - Simplify locales configuration on Ubuntu.\n  * Issue #448 - Enforce locale settings by configuring defaults.\n  * Issue #452 - PHP build is broken with latest MariaDB 5.5.40\n  * Make sure that db_port is never empty and 
defaults to 3306.\n  * Make sure that firewall monitoring scripts never run simultaneously.\n  * Make sure that standard caching is enabled in hostmaster.\n  * Pause hostmaster tasks when RVM install for any user is running.\n  * PHP: Do not run rebuilds if not needed.\n  * PHP: Fix for broken upgrade logic on libcurl or libssl packages upgrade.\n  * Remove acquia_connector from latest Commons to avoid broken installs.\n  * Remove all legacy gems and re-install RVM/Ruby for root from scratch.\n  * Remove legacy replacement to avoid converting symlinked includes into files.\n  * SQL: Use correct defaults if MySQLTuner test failed.\n  * Workaround for Drupal flood using 127.0.0.1 for all requests behind proxy.\n\n\n### Stable BOA-2.3.4 Release - Full Edition\n### Date: Wed Oct 15 09:51:08 PDT 2014\n### Includes Ægir 2.1 with improvements\n\n  Release Notes and changelog for BOA-2.3.4 have been merged into BOA-2.3.5\n  above after security upgrades related to OpenSSL and SSLv3 have been added\n  shortly after the 2.3.4 release.\n\n\n### Stable BOA-2.3.3 Release - Full Edition\n### Date: Sat Sep 27 01:25:46 PDT 2014\n### Includes Ægir 2.1 with improvements\n\n# Release Notes:\n\n  This BOA Edition includes important fixes to address some issues discovered\n  after the BOA-2.3.1 release. 
Please also read the release notes for BOA-2.3.1\n  further below before running the upgrade!\n\n#-### Important details on CiviCRM versions compatibility and profiles support\n\n  * All BOA-2.3.x Editions fully support the latest CiviCRM 4.5.0 for Drupal 7.\n  * CiviCRM for Drupal 6 is not supported because of known CiviCRM issues.\n  * CiviCRM support for Drupal 7 works great when added in sites/all/modules\n  * CiviCRM support for Drupal 7 also works when added in profiles/foo/modules,\n    but no CiviCRM cron is currently managed until this known issue is fixed.\n    Therefore BOA-2.3.3 will check all platforms on the Octopus instance, and\n    if it detects any with CiviCRM added in the installation profile directory\n    tree, it will refuse to upgrade that instance, to not break things for\n    those using the currently not fully supported CiviCRM codebase structure.\n\n# New Octopus platforms:\n\n  OpenChurch 2.0-b1 ------------ https://drupal.org/project/openchurch\n\n# Updated Octopus platforms:\n\n  ERPAL 2.0 -------------------- https://drupal.org/project/erpal\n  Guardr 1.13 ------------------ https://drupal.org/project/guardr\n  Open Outreach 1.11 ----------- https://drupal.org/project/openoutreach\n  OpenChurch 1.14 -------------- https://drupal.org/project/openchurch\n  OpenPublic 1.0-rc5 ----------- https://drupal.org/project/openpublic\n  OpenScholar 3.15.1 ----------- http://theopenscholar.org\n\n# New features and enhancements in this release:\n\n  * Add makefiles for CiviCRM 4.4.7\n  * Add makefiles for CiviCRM 4.5.0\n\n# Changes in this release:\n\n  * Drush: Upgrade command line version 6 to mini-6-27-09-2014\n  * Restart SSH hourly.\n  * The INI variable redis_flush_forced_mode is now disabled by default.\n  * Use aegir_custom_settings-6.x-3.12\n  * Use Provision CiviCRM boa-2.3.3-dev\n\n# System upgrades in this release:\n\n  * MariaDB 10.0.14\n  * Nginx 1.7.5\n  * PHP 5.4.33\n  * PHP 5.5.17\n  * PHPRedis: master-02-09-2014\n  * Redis 2.8.17\n\n# 
Fixes in this release:\n\n  * Add extra cleanup for Drush related caches.\n  * Always respect _SSH_PORT if set.\n  * Always start cron before aborting on error.\n  * Do not add duplicate cron entry for runner.sh\n  * Do not allow system-only upgrades if the Master Instance is still on 2.2.x\n  * Do not disable _DNS_SETUP_TEST\n  * Enable path_alias_cache by default also in the hostmaster site.\n  * Fix for broken pdnsd configuration if wrong IPs are detected.\n  * Fix for insufficient permissions on files/civicrm/ConfigAndLog\n  * Fix for insufficient permissions on files/civicrm/custom\n  * Fix for insufficient permissions on files/civicrm/dynamic\n  * Fix for missing cron entry for Scout, if _SCOUT_KEY is not empty.\n  * Fix the broken procedure to revert hostmaster features.\n  * Force install of problematic gems to add them on accounts with RVM enabled.\n  * Fix for Java version for Jetty 9 on newer systems.\n  * Hardcode files.aegir.cc DNS entry.\n  * Improve docs/ctrl/system.ctrl readability.\n  * Install openjdk on CI instances by default.\n  * Issue #411 - Unable to update Octopus Instance - Reports PHP on 5.2\n  * Issue #423 - Make sure that innodb_buffer_pool_size is not smaller than 64M\n  * Issue #424 - Update mysqltuner.pl to support MariaDB 10.0\n  * Make sure that lsb-release is installed properly.\n  * Make the check_civicrm_compatibility more reliable to avoid false alarms.\n  * New Relic not enabled if no custom ~/static/control/{fpm|cli}.info exists.\n  * Nginx: Auto-Switch to wildcard all vhosts existing in the Master Instance.\n  * Nginx: Avoid any downtime on upgrade by using www53.fpm.socket temporarily.\n  * Nginx: Convert all config templates to wildcard mode in legacy instances.\n  * Nginx: Convert all Octopus vhosts to wildcard mode on Barracuda upgrade.\n  * Nginx: Convert config to use PHP 5.2 if the instance still depends on it.\n  * Nginx: Delete ghost, outdated or broken config includes in all instances.\n  * Nginx: Delete ghost, 
outdated or broken vhosts in all instances.\n  * Nginx: Force special vhosts access rules rebuild hourly.\n  * Nginx: Improve wildcard conversion procedure on some really old instances.\n  * Purge all ghost delete tasks before running hostmaster-migrate / upgrade.\n  * Purge Drush related caches cleanly when needed.\n  * Recreate possibly broken vhosts.\n  * Remove duplicate cron entry for runner.sh to avoid critical system load.\n  * Remove legacy replacement to not convert config symlinks into regular files.\n  * Run check_civicrm_compatibility only on upgrade.\n  * A single feature revert may not be enough.\n  * Update contrib in Open Atrium D7 to maintain upgrade path.\n  * Update cron defaults and remove legacy code.\n  * Update default SSL Wildcard Nginx Proxy to use wildcard listen mode.\n  * Use strict regex in vhosts listen mode conversion to not break ports.\n\n\n### Stable BOA-2.3.2 Release - Full Edition\n### Date: Thu Sep 18 15:16:33 PDT 2014\n### Includes Ægir 2.1 with improvements\n\n  Release Notes and changelog for BOA-2.3.2 have been merged into BOA-2.3.3\n  above after several hotfixes and various updates have been added shortly\n  after the 2.3.2 release to address all identified post-release issues.\n\n\n### Stable BOA-2.3.1 Release - Full Edition\n### Date: Sun Sep 14 15:53:25 SGT 2014\n### Includes Ægir 2.1 with improvements\n### Latest hotfix added on: Mon Sep 15 19:10:07 SGT 2014\n\n# Release Notes:\n\n  This major BOA Edition introduces many new features, changes and fixes.\n\n  You should carefully read about some caveats further below **before** running\n  this major upgrade on your system. 
Please secure a fresh system backup first.\n\n  If you haven't run a full barracuda+octopus upgrade to the latest BOA Stable\n  Edition yet, don't use any partial/system upgrade modes.\n\n  Once a new BOA Stable is released, you must run *full* upgrades with these\n  commands:\n\n  $ cd;wget -q -U iCab http://files.aegir.cc/BOA.sh.txt;bash BOA.sh.txt\n  $ barracuda up-stable\n  $ octopus up-stable all both\n\n  @=> Key new features:\n\n  * BOA-2.3.1 comes with the new, shiny Ægir 2.1 stable version!\n  * Support for Drupal sites in subdirectories is enabled by default\n  * Solr 4 cores can be added/updated/deleted via site level INI settings\n  * Super-easy to use New Relic support with per Octopus license key\n  * Ability to add new Octopus instances with new, simple command syntax\n\n  @=> Ægir control panel new features:\n\n  * The list of sites is searchable by name or installation profile\n  * Sites have dedicated tabs: Backups, Task log, Edit and Packages\n  * Platforms have tabs: Add site, Clients, Task log, Edit and Packages\n  * You can schedule tasks against filtered sites in batches\n  * Scheduling tasks in batches is available also on the platform view\n  * Scheduling tasks in batches is available also on the profile view\n  * Scheduling tasks in batches is available also on the client view\n  * You can schedule tasks also against platforms in batches\n  * You can safely apply db updates via 'Run db updates' task on any site\n  * The new 'Clients' menu item allows you to list and manage sub-accounts\n  * Profiles are listed with both human-readable and machine names\n  * It is now possible to choose any existing alias or the main site name\n    as a redirect target, but without the need to rename the site --\n    it will just re-verify the site and create a new vhost automatically\n\n  @=> Ægir control panel changes:\n\n  * The hosting/signup form is still available but not included in the menu\n  * The node/add/site form is no longer included in the main menu\n  * The optional 
pseudo-CDN-aliases feature is now disabled by default\n\n  @=> Other important changes:\n\n  * Support for PHP 5.2 has been officially deprecated\n  * The www53 PHP-FPM pool has been switched from port to default socket mode\n  * All existing vhosts must use wildcard in the Nginx 'listen' directive\n  * Legacy mode for Install and Upgrade moves to 2.2.x branch\n  * DB credentials are no longer in settings.php, only in drushrc.php\n  * Latest Drush 6 version is used in the Ægir backend by default\n\n  But what if you are not ready for this major upgrade and you would like\n  to have more time for testing, but still be able to run system upgrades,\n  thus effectively still using the previous version 2.2.9?\n\n#-### Legacy mode for Install and Upgrade moves to 2.2.x branch\n\n  From now on, the 'legacy' install and upgrade mode available in all meta-\n  installers will utilize branch 2.2.x instead of the deprecated 2.1.x series.\n\n  This means that starting with meta installers updated to use BOA-2.3.1\n  version you can use the commands shown below to update Barracuda, Octopus\n  and also to install more Octopus instances, while still using version 2.2.9:\n\n  $ boa in-legacy public server.mydomain.org my@email o1\n  $ barracuda up-legacy system\n  $ octopus up-legacy o1\n  $ boa in-legacy public server.mydomain.org my@email o2 mini\n  etc.\n\n  Remember to update your meta-installers first!\n\n  $ cd;wget -q -U iCab http://files.aegir.cc/BOA.sh.txt;bash BOA.sh.txt\n\n  Note also that if you upgrade to the current 'stable', it is not possible\n  to downgrade back to the 'old stable' with 'legacy' mode, so please proceed\n  with care!\n\n  Remember also that the current legacy version will not receive any further\n  updates, even for security issues (besides those provided as packages by\n  your OS vendor - Debian or Ubuntu, which will still work), because it is\n  already different enough from the current 2.3.1 stable, so we can't reliably\n  maintain both with a working upgrade 
path.\n\n#-### Caveats: This upgrade will force wildcard in the Nginx 'listen' directive\n\n  If you have an old enough BOA system which still uses legacy IP mode and\n  not a wildcard in the Nginx 'listen' directive, which has been both the\n  Ægir and BOA standard for a long time already, this upgrade will fix the\n  problem and update directives only in vhosts known and controlled by BOA.\n\n  If you have any other vhosts, located in standard or non-standard Nginx/BOA\n  directories for vhosts, you have to update them manually after the upgrade\n  to BOA-2.3.0 or newer, or they will take over all other vhosts on the system\n  and cause redirects to /install.php, which results in Nginx error 403 or 404,\n  depending on the prior configuration.\n\n  This happens because an IP based 'listen' directive in Nginx has higher\n  priority, and will mess things up horribly if there are vhosts using\n  wildcard and some using the main system IP address.\n\n  What and how to replace? Here are the commands you need to run as root:\n\n    $ sed -i \"s/.*listen.*:80;/  listen  \\*:80;/g\" /path/to/vhost.file\n    $ service nginx reload\n\n  Note: this **doesn't** affect special vhosts for SSL enabled sites, if used,\n  because they are designed to use IP based 'listen' directives to provide\n  separation between SSL enabled IPs and their associated certificates.\n  Their associated 'upstream' block may even point to either a local or a\n  remote IP address, so there is no wildcard to use in this case, and it will\n  not conflict with the other vhosts managed by Ægir, because all SSL enabled\n  vhosts listen on IP addresses other than the main system IP, which is\n  by default used by all vhosts with wildcard in the 'listen' directive.\n\n  The problem may happen only when you have vhosts using wildcard and also\n  some vhosts using the **main** system IP address in the 'listen' directive,\n  which may also happen unintentionally during the upgrade to BOA-2.3.0 or\n  newer, if there are either vhosts 
BOA doesn't control, or there are ghost vhosts\n  not yet purged if you didn't upgrade to BOA-2.2.9 before, or there are\n  some disabled sites, so their vhosts will not be re-created by Ægir\n  during this major upgrade (because only active sites can be re-verified).\n\n  While BOA will also fix any such ghost vhosts anyway, it will not be able\n  to detect and fix vhosts outside of the standard directories managed by Ægir.\n\n#-### Ability to add new Octopus instances with new, simple command syntax\n\n  It is now possible to add stable Octopus instances w/o forcing a Barracuda\n  upgrade, plus optionally with no platforms added by default -- usage:\n\n    $ boa {in-octopus} {email} {o2} {mini|max|none}\n\n#-### The www53 PHP-FPM pool has been switched from port to default socket mode.\n\n  Note that we are breaking backward compatibility here, so it will cause\n  downtime on upgrade from any too old BOA version, until you also upgrade\n  your Octopus instance(s) and update any other non-standard vhosts or\n  includes still using the legacy port mode for the 'fastcgi_pass' Nginx\n  directive.\n\n  If you have 'fastcgi_pass 127.0.0.1:9090;' in any custom vhost or Nginx\n  include file on the Octopus instance, you should replace it with:\n\n    fastcgi_pass unix:/var/run/o1.fpm.socket;\n\n  where 'o1' is your corresponding Octopus system username.\n\n  Note that if you have custom vhosts or includes in the Ægir Master Instance,\n  you should instead replace 'fastcgi_pass 127.0.0.1:9090;' with:\n\n    fastcgi_pass unix:/var/run/www53.fpm.socket;\n\n  where '53' is related to the PHP version defined via _PHP_FPM_VERSION in\n  your /root/.barracuda.cnf file. 
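A minimal sketch of that replacement, assuming GNU sed and working on a throwaway copy of a vhost file; the vhost content and the 'o1' username below are illustrative examples, not values read from your system:

```shell
# Sketch: convert the legacy port mode to socket mode in a vhost file.
# The vhost body and the 'o1' Octopus username are examples only.
vhost=$(mktemp)
cat > "$vhost" <<'EOF'
location ~ \.php$ {
  fastcgi_pass 127.0.0.1:9090;
}
EOF
# Escape the dots so the pattern matches only the literal IP address.
sed -i "s|fastcgi_pass 127\.0\.0\.1:9090;|fastcgi_pass unix:/var/run/o1.fpm.socket;|g" "$vhost"
grep 'fastcgi_pass' "$vhost"
rm -f "$vhost"
```

On a real system you would run the same sed replacement against the custom vhost or include file itself and then reload Nginx (e.g. 'service nginx reload').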
Note that while the variable value has a dot,\n  the socket name doesn't.\n\n#-### Support for PHP 5.2 has been officially deprecated\n\n  While Barracuda 2.3.1 can continue to run, and if needed even upgrade,\n  the very old PHP 5.2 version, only Octopus instances running PHP 5.3\n  or newer in both FPM and CLI mode can be upgraded to the Octopus 2.3.1\n  Edition.\n\n  If you are still using PHP 5.2 in your Octopus instance, you will receive\n  neither the Ægir nor the Drupal Platforms upgrade, but the Barracuda part\n  of your system will be upgraded to 2.3.1 anyway, so it will be ready to\n  support your outdated Octopus instance upgrade as soon as you switch it to\n  a modern and secure PHP version -- which is easy!\n\n  Let's quote the original how-to for reference:\n\n#-### Support for PHP FPM/CLI version safe switch per Octopus instance\n\n  This allows the instance owner to easily switch the PHP version w/o system\n  admin (root) help. All you need to do is to create ~/static/control/fpm.info\n  and ~/static/control/cli.info files, each with a single line telling the\n  system which available PHP version should be used (if installed): 5.5 or\n  5.4 or 5.3\n\n  Only one version can be set in each file, but you can use separate versions\n  for web access (fpm.info) and the Ægir backend (cli.info). The system will\n  switch versions defined via these control files in 5 minutes or less. 
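For example, the switch could look like the sketch below; a temporary directory stands in for the instance owner's home (in practice something like /data/disk/o1 -- an example path, not taken from this document):

```shell
# Sketch: request PHP 5.5 for web access (FPM) and PHP 5.4 for the
# Aegir backend (CLI). A temp dir stands in for the real instance home.
home=$(mktemp -d)
mkdir -p "$home/static/control"
echo "5.5" > "$home/static/control/fpm.info"   # PHP version for web access
echo "5.4" > "$home/static/control/cli.info"   # PHP version for the backend
cat "$home/static/control/fpm.info" "$home/static/control/cli.info"
rm -rf "$home"
```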
We use external control\n  files and not any option in the Ægir interface to make sure you never lock\n  yourself out by switching to a version which may cause unexpected problems.\n\n#-### Support for New Relic monitoring with per Octopus instance license key\n\n  This new feature will disable global New Relic monitoring by deactivating\n  the server-level license key, so it can safely auto-enable or auto-disable\n  monitoring every 5 minutes, per Octopus instance -- for all sites hosted on\n  the given instance -- when a valid license key is present in the special\n  new ~/static/control/newrelic.info control file.\n\n  Please note that a valid license key is a 40-character hexadecimal string\n  that New Relic provides when you sign up for an account.\n\n  To disable New Relic monitoring for the Octopus instance, simply delete\n  its ~/static/control/newrelic.info control file and wait a few minutes.\n\n  Please note that on a self-hosted BOA you still need to add your valid\n  license key as _NEWRELIC_KEY in the /root/.barracuda.cnf file and run\n  a system upgrade with at least 'barracuda up-stable' first. This step is\n  not required on the Omega8.cc hosted service, where the New Relic agent is\n  already pre-installed for you.\n\n#-### Solr 4 cores can be added/updated/deleted via site level INI settings\n\n;;\n;;  This option allows activating a Solr 4 core configuration for the site.\n;;\n;;  Only Solr 4 powered by Jetty server is available. 
Supported integration\n;;  modules are limited to latest versions of either search_api_solr (D7 only)\n;;  or apachesolr (will use Drupal core specific version automatically).\n;;\n;;  Currently used versions are listed below:\n;;\n;;    https://ftp.drupal.org/files/projects/search_api_solr-7.x-1.6.tar.gz\n;;    https://ftp.drupal.org/files/projects/apachesolr-7.x-1.7.tar.gz\n;;    https://ftp.drupal.org/files/projects/apachesolr-6.x-3.0.tar.gz\n;;\n;;  Note that you still need to add the preferred integration module along\n;;  with any of its dependencies in your codebase, since this feature doesn't\n;;  modify your platform or site - it only creates a Solr core with the\n;;  configuration files provided by the integration module: schema.xml and\n;;  solrconfig.xml\n;;\n;;  This setting affects only the running daily maintenance system behaviour,\n;;  so you need to wait until the next morning to be able to use the new\n;;  Solr 4 core.\n;;\n;;  Once the Solr core is ready to use, you will find a special file in your\n;;  site directory: sites/foo.com/solr.php with details on how to access\n;;  your new Solr core with the correct credentials.\n;;\n;;  A site with an enabled Solr core can be safely migrated between platforms;\n;;  the integration module can be moved within your codebase and even upgraded,\n;;  as long as it is using compatible schema.xml and solrconfig.xml files.\n;;\n;;  Supported values for the solr_integration_module variable:\n;;\n;;    apachesolr\n;;    search_api_solr\n;;\n;;  To delete an existing Solr core, simply comment out this line.\n;;  The system will cleanly delete the existing Solr core the next morning.\n;;\n;;  IMPORTANT if you are using self-hosted BOA: _MODULES_FIX=YES must be set\n;;  in the /root/.barracuda.cnf file (this is the default value) to make this\n;;  feature active.\n;;\n;solr_integration_module = your_module_name_here\n\n;;\n;;  This option allows auto-updating your Solr 4 core configuration files:\n;;\n;;    schema.xml\n;;    solrconfig.xml\n;;\n;;  If there is a new release 
for either apachesolr or search_api_solr, your\n;;  Solr core will not be automatically upgraded to use newer schema.xml and\n;;  solrconfig.xml, unless allowed by switching solr_update_config to YES.\n;;\n;;  This option will be ignored if you set solr_custom_config to YES.\n;;\n;solr_update_config = NO\n\n;;\n;;  This option allows protecting custom Solr 4 core configuration files:\n;;\n;;    schema.xml\n;;    solrconfig.xml\n;;\n;;  To use a customized version of either schema.xml or solrconfig.xml, you\n;;  need to switch solr_custom_config to YES below, and if you are using the\n;;  hosted Ægir service, submit a support ticket to get these files updated\n;;  with your custom versions. On self-hosted BOA simply update these files\n;;  directly.\n;;\n;;  Please remember to use Solr 4 compatible config files.\n;;\n;solr_custom_config = NO\n\n\n# Updated Octopus platforms:\n\n  aGov 1.4 --------------------- https://drupal.org/project/agov\n  Guardr 1.12 ------------------ https://drupal.org/project/guardr\n  Open Academy 1.1 ------------- https://drupal.org/project/openacademy\n  Restaurant 1.0-b9 ------------ https://drupal.org/project/restaurant\n  Ubercart 3.7 ----------------- https://drupal.org/project/ubercart\n\n# New features and enhancements in this release:\n\n  * Ability to add new Octopus instances with new, simple command syntax\n  * Add default aggressive php-fpm monitoring + /root/.no.fpm.cpu.limit.cnf\n  * Allow defining always disabled modules via the _MODULES_FORCE variable.\n  * Better wait limits on connection testing for slow network / long distance.\n  * Issue #1927522 - Add support for easy Solr cores self-management.\n  * Issue #362 - Add imageapi_optimize binaries via IMG in _XTRAS_LIST\n  * Issue #376 - Add New Relic support with per Octopus instance license key.\n  * Make firewall management faster with randomized schedule.\n  * Procs monitor runs every 3 seconds.\n  * Run mysql_proc_control every 5 seconds for better results.\n  * You can safely 
apply db updates via 'Run db updates' task on any site.\n\n# Changes in this release:\n\n  * DB credentials are no longer visible in settings.php, only in drushrc.php\n  * Delete default profiles in the hostmaster platform.\n  * Disable _DEBUG_MODE if not enabled on the fly.\n  * Disable newrelic-sysmond unless /root/.enable.newrelic.sysmond.cnf exists.\n  * Drush: Upgrade command line version 6 to mini-6-14-09-2014\n  * Nginx: Remove deprecated code - _HTTP_WILDCARD is already used by default.\n  * Nginx: Use limit_conn protection only for known dynamic requests.\n  * Redis Integration Module (cache_backport): Update to version 6.x-1.0-rc2\n  * Redis Integration Module: Update to version mod-12-09-2014\n  * Remove the legacy _ALLOW_UNSUPPORTED feature, which no longer works\n    properly.\n  * Remove dependency on Update Manager globally.\n  * Remove deprecated multi-instance labels in the New Relic configuration.\n  * Replace old hosting_civicrm_cron with newer hosting_civicrm module.\n  * Set hosting_default_profile to 'minimal' to improve Ubercart 3 visibility.\n  * The www53 PHP-FPM pool has been switched from port to default socket mode.\n  * Use Provision CiviCRM boa-2.3.1-dev\n\n# System upgrades in this release:\n\n  * cURL 7.38.0 (if installed from sources)\n  * Git 2.1.0 (if installed from sources)\n  * Jetty 7.6.16.v20140903\n  * Jetty 8.1.16.v20140903\n  * Jetty 9.2.3.v20140905\n  * PHP 5.3.29 EOL! 
Please read: http://php.net/archive/2014.php#id2014-08-14-1\n  * PHP 5.4.32\n  * PHP 5.5.16\n  * Redis 2.8.14\n\n# Fixes in this release:\n\n  * Add cleanup for _GIT_FORCE_REINSTALL if added in .barracuda.cnf\n  * Add missing drush cache-clear to improve upgrade path.\n  * Add new features in the README.txt\n  * Add wheezy to the exceptions list where required.\n  * Allow clearing the drush cache without directory restrictions.\n  * Always set correct TMP path for supported users.\n  * Cleanup for cron pid files in user specific .tmp dirs.\n  * Count properly also symlinked files directories (improved).\n  * D6 colorbox module requires old 1.3.18 library.\n  * Delete drush_make leftovers.\n  * Delete duplicate menu items on upgrade.\n  * Do not allow installing SSH from sources on Trusty to avoid problems.\n  * Do not skip daily.sh during barracuda system-only update.\n  * Eldir theme: Use max width for buttons, if possible.\n  * Explain why installing RVM may take longer than expected.\n  * Fix cleanup for drush aliases in sub-accounts.\n  * Fix daily cleanup for user specific .tmp directories.\n  * Fix docs/HINTS.txt\n  * Fix for broken mariadb.list\n  * Fix for broken, way too aggressive PHP-FPM monitoring.\n  * Fix for ghost dirs cleanup.\n  * Fix for ghost vhosts cleanup.\n  * Fix for missing symlinks to existing platforms.\n  * Fix for not working protection from blocking local IPs on multi-IP systems.\n  * Fix for subdirs_support universal check.\n  * Fix for unreliable _IS_OLD check on Octopus instances upgrade.\n  * Fix for warning \"Could not create directory .\" on Hostmaster site Verify.\n  * Fix the fields order in the site edit form.\n  * Fix the regex to not whitelist unexpected IP ranges inadvertently.\n  * Force cURL rebuild if installed with outdated OpenSSL version.\n  * Guard against destructive or insecure tasks run on the hostmaster site.\n  * Improve cleanup for empty platforms directories.\n  * Improve monitoring to protect against convert 
trying to overload the system.\n  * Issue #2330781 - Use Drush dt() wrapper instead of not always available t()\n  * Issue #357 - Fix the logic for Git (re)install from sources.\n  * Issue #360 - Exclude special --CDN vhosts from daily cleanup.\n  * Issue #361 - Update and improve docs/FAQ.txt\n  * Issue #369 - Automatically download and fix /bin/websh if missing.\n  * Issue #369 - Restore classic /bin/sh symlink automatically if needed.\n  * Issue #373 - Set correct TMP, TEMP, TMPDIR env variables in limited shell.\n  * Issue #373 - Too restrictive lshell forbidden list breaks drush sql-sync.\n  * Issue #380 - Nameserver / pdnsd problem -- Fixes also Issue #2007990.\n  * Issue #381 - Zend OPcache forced adds useless noise in the log.\n  * Issue #388 - Version 6.x-2.x of provision_civicrm requires hosting_civicrm\n  * Issue #389 - hosting_civicrm breaks site install form with confusing error.\n  * Issue #390 - Duplicate platforms nodes are created after upgrade to 2.3.0\n  * Issue #395 - Validate username isn't reserved before running install script.\n  * Issue #396 - Locale isn't getting set properly.\n  * Issue #397 - Not actually prompted for platforms during installation.\n  * Issue #398 - Make locales setup/fix for Debian always OS compatible.\n  * Issue #399 - The hitimes gem needs to be pre-installed to support Omega4.\n  * Issue #400 - CiviCRM is not installed on 2.3.0\n  * Issue #401 - Create sites/all/* subdirs in Hostmaster early enough.\n  * Issue #402 - Fix for ghost or disabled vhosts which still listen on IP.\n  * Issue #405 - Installer hangs due to yes/no dialog - \"Untrusted packages\"\n  * Issue #406 - Force keyring reinstall also upon 'GPG error'.\n  * Issue #407 - Fix for 'username is already taken' error on a local VM install\n  * Issue #408 - Fix for multiple funny typos. 
Thanks ar-jan!\n  * Make it clear that subdomain and subdirectory name must be identical.\n  * Make sure that keys subdirectory exists to avoid active platforms cleanup.\n  * Make the PHP-FPM processes monitor less aggressive by default.\n  * New Relic not enabled if no custom ~/static/control/{fpm|cli}.info exists.\n  * Nginx: Add config symlinks only on legacy instances.\n  * Nginx: Add cron access support for subdir sites.\n  * Nginx: Convert all vhosts to wildcard mode on Barracuda upgrade.\n  * Nginx: Disable monitoring for POST requests related to cart/checkout URI.\n  * Nginx: Do not touch nginx_wild_ssl.conf during this upgrade.\n  * Nginx: Improve wildcard conversion procedure on some really old instances.\n  * Nginx: Remove deprecated code and config templates.\n  * Nginx: Sanitize aliases in vhost_disabled.tpl.php to avoid warnings.\n  * Nginx: Update config includes to match optional BOA features improvements.\n  * Nginx: Update unified configuration templates in Provision to unfork BOA.\n  * Nginx: Update vhosts templates to match BOA improvements.\n  * PHP: Avoid unintended duplicate rebuilds.\n  * PHP: Sync disable_functions list.\n  * Protect sites/all/drush\n  * Provision: Backport provision_hosting_feature_enabled()\n  * Provision: Remove legacy subdir code and update checks.\n  * Redis config should sync with PHP-CLI, not PHP-FPM.\n  * Remove legacy procs monitoring code.\n  * Remove no longer needed limreq global fixes.\n  * Remove no longer needed/used contrib updates.\n  * Remove redundant file_exists() if is_readable() is also used.\n  * Replace old hosting_civicrm_cron with newer hosting_civicrm module.\n  * Restart pdnsd before running barracuda upgrade.\n  * Restore BOA formatting for tasks log to improve readability.\n  * Restore BOA naming convention and docs in Hostmaster.\n  * Restore BOA naming convention for Installation profiles in Hostmaster.\n  * Restore BOA strict _hosting_valid_fqdn* testing procedures in Hostmaster.\n  * 
Restore BOA weight defaults in the form in Hostmaster.\n  * Restore punycode in Hostmaster.\n  * Restore tasks sort to always show tasks scheduled and running at the top.\n  * Sanitize cli.info and fpm.info\n  * Set _PLATFORMS_LIST properly.\n  * Silence early sed replacements to avoid confusion.\n  * Simplify colorbox-1.3.18 download.\n  * Simplify colorbox-1.5.13 download.\n  * Switch branch on the fly and add support for Ægir vanilla mode.\n  * Sync /tmp access restrictions.\n  * The hosting_civicrm_cron is now a submodule and should also be auto-enabled.\n  * The wildcard transition **doesn't** affect vhosts for SSL enabled sites.\n  * There is no need to force backend clone from GitHub on initial upgrade.\n  * Update for the Hostmaster welcome page.\n  * Update FPM monitoring settings.\n  * Use labels on the site node that are as short as possible.\n  * Use control files properly to not run redundant Jetty/Solr upgrade.\n  * Use correct paths to platform level drushrc.php file.\n  * Use correct Provision version on initial upgrade to 2.3.0\n  * Use Drush6 with @hostmaster.\n  * Use is_dir() instead of file_exists() when checking directory existence.\n  * Use is_file() and is_link() instead of file_exists() before trying unlink()\n  * Use is_readable() and file_exists() instead of file_exists() for backup.\n  * Use is_readable() check instead of insufficient file_exists() for includes.\n  * Use is_readable() instead of file_exists() when checking alias existence.\n  * Install latest Git even if not specified via _XTRAS_LIST but a previous\n    version built from sources is detected.\n  * Issue #2278847 - Derivatives can't be created on install with Drush and\n    Ægir or when no vhost is available yet (Drupal Commons)\n\n\n### Stable BOA-2.3.0 Release - Full Edition\n### Date: Mon Sep  8 08:42:01 PDT 2014\n### Includes Ægir 2.1 with improvements\n\n  Release Notes and changelog for BOA-2.3.0 have been merged into BOA-2.3.1\n  above after several hotfixes and some great new 
features have been added\n  shortly after 2.3.0 release to address all identified post-release issues.\n\n\n### Stable BOA-2.2.9 Release - Full Edition\n### Date: Wed Aug  6 17:08:10 PDT 2014\n### Includes Ægir 2.x-boa-custom version.\n### Latest hotfix added on: Fri Aug 15 09:37:04 PDT 2014\n\n# Release Notes:\n\n  This release includes updated versions of all supported Drupal platforms to\n  provide latest Drupal 7 and Pressflow 6 core, plus some changes, improvements,\n  bug fixes, and many updated Octopus platforms.\n\n  NOTE: Since the first Edition in the BOA-2.3.x series is not ready for release\n  yet, and new Drupal core has been released to fix security issues, followed\n  by yet another release to fix serious regressions, followed by yet another\n  security release, we have decided to make it available to everyone and release\n  yet another stable BOA-2.2.x Edition.\n\n  IMPORTANT! This is the last Edition in the 2.2.x series, which marks the end\n  of Drupal 5, PHP 5.2 and Drush 4 support. 
Next Edition will open 2.3.x series,\n  which will allow us to provide newer Ægir version with built-in Drush 6\n  support, sites in subdirectories, and many Ægir User Interface improvements.\n\n  If you still host any Drupal 5 sites or you are using PHP 5.2 for D6 sites,\n  you will not be able to upgrade to the next 2.3.x Edition and you will have to\n  stay on the 'legacy' BOA 2.2.x version, which will receive only system\n  security upgrades, but no further feature nor bugfix releases.\n\n  This also means that from now on the 'legacy' 2.2.x version will no longer\n  receive Drupal core upgrades, even if there will be security core releases.\n\n  It is time to upgrade away from Drupal 5 and away from PHP 5.2, if still used.\n\n# Updated Octopus platforms:\n\n  aGov 1.2 --------------------- https://drupal.org/project/agov\n  Commerce 1.29 ---------------- https://drupal.org/project/commerce_kickstart\n  Commerce 2.17 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 2.20 ----------------- https://drupal.org/project/commons\n  Commons 3.17 ----------------- https://drupal.org/project/commons\n  ERPAL 2.0-b5 ----------------- https://drupal.org/project/erpal\n  Guardr 1.11 ------------------ https://drupal.org/project/guardr\n  Open Atrium 2.21 ------------- https://drupal.org/project/openatrium\n  Open Outreach 1.10 ----------- https://drupal.org/project/openoutreach\n  OpenPublic 1.0-rc4 ----------- https://drupal.org/project/openpublic\n  Panopoly 1.11 ---------------- https://drupal.org/project/panopoly\n  Restaurant 1.0-b2 ------------ https://drupal.org/project/restaurant\n\n# New features and enhancements in this release:\n\n  * Allow to define always disabled modules via _MODULES_FORCE variable.\n  * Eldir: Add subtle 3D and round some edges.\n  * Eldir: Improve spacing and hide useless headers.\n  * Fix permissions on sites/all/{modules,libraries,themes} on Platform Verify.\n  * Make firewall management faster with randomized 
schedule.\n  * Merge pull request #362 from pricejn2/imageapi-optimize-binaries\n  * RVM: Add exceptions for gems which can't be installed in Limited Shell.\n  * Shell: Compass Tools: Allow to access guard.\n  * Shell: Improve config to better support advanced Drush commands over SSH.\n\n# Changes in this release:\n\n  * Drush: Upgrade command line version 6 to mini-6-14-08-2014\n  * Nginx: Add DBot to is_crawler list.\n  * Remove no longer supported NodeStream distro.\n  * Run complete modules-dis-list weekly (Saturday) and basic list daily.\n\n# System upgrades in this release:\n\n  * MariaDB 10.0.13\n  * MariaDB 5.5.39\n  * Nginx 1.7.4\n  * OpenSSL 1.0.1i (if installed from sources)\n  * PHP: ionCube loader 4.6.1\n  * PHP: Zend OPcache master-30-07-2014\n\n# Fixes in this release:\n\n  * Add cleanup for .tmp in sub-accounts.\n  * Add cleanup for drush-backups leftovers.\n  * Add cleanup for various /var/backups/* leftovers.\n  * Add daily auto-cleanup for ghost vhosts, platforms and drush aliases.\n  * Add exception for symlinked /data/all\n  * Add hint for HTTPS-only mode forced in local.settings.php\n  * Allow to clear drush cache without directory restrictions.\n  * Avoid \"Is a directory\" noise in the log.\n  * Commons 2.20 has changed its profile name from drupal_commons to commons.\n  * Do not modify site_footer on hostmaster upgrade.\n  * Do not rename the legacy Commons profile name.\n  * Fix -mtime expected values.\n  * Fix cleanup for .restore vhost leftovers.\n  * Fix cleanup for drush aliases in sub-accounts.\n  * Fix for unreliable _IS_OLD check on Octopus instances upgrade.\n  * Fix Nginx monitor to respect all whitelisted POST requests in both modes.\n  * Fix permissions on sites/all/{modules,libraries,themes} globally.\n  * Fix weird typo in global.inc\n  * Improve cleanup for empty platforms directories.\n  * Improve RVM cleanup.\n  * Issue #2278847 - Derivatives (Drupal Commons) can't be created on install.\n  * Issue #334 - Backported 
provision_civicrm #1485920\n  * Issue #334 - Delete the civicrm_class_loader variable after deploying.\n  * Issue #334 - Install civicrm in any location (sites/ profiles + contrib).\n  * Issue #360 - Exclude special --CDN vhosts from daily cleanup.\n  * Make sure that /keys subdirectory exists to avoid active platforms cleanup.\n  * Make sure that local IPs are never blocked by mistake.\n  * Never touch websh wrapper to avoid high load because of redirect loop.\n  * Nginx: Detected $device is not used in Boost config, only in Speed Booster.\n  * Nginx: Fix limreq also for some really old vhosts.\n  * Nginx: Modify only vhosts known as included in the protected mode.\n  * Remove /var/run/daily-fix.pid if exists when it shouldn't.\n  * Remove debugging mode in old codebases cleanup.\n  * Remove no longer needed/used contrib updates.\n  * Restore default websh wrapper symlink as fast as possible.\n  * Run manage_ltd_users every 3 minutes instead of every minute.\n  * Simplify colorbox-1.3.18 download.\n  * Simplify colorbox-1.5.13 download.\n  * Uninstall css_emimage only on hostmaster upgrade.\n  * Update and improve docs/FAQ.txt\n  * Update regex for exceptions in Nginx monitoring.\n\n\n### Stable BOA-2.2.8 Release - Full Edition\n### Date: Sat Jul 26 15:31:29 PDT 2014\n### Includes Ægir 2.x-boa-custom version.\n### Latest hotfix added on: Tue Aug  5 14:47:17 PDT 2014\n\n# Release Notes:\n\n  This release includes updated versions of all supported Drupal platforms to\n  provide latest Drupal 7 and Pressflow 6 core, plus some changes, improvements,\n  bug fixes, and six (6) updated Octopus platforms.\n\n  NOTE: Since the first Edition in the BOA-2.3.x series is not ready for release\n  yet, and new Drupal core has been released to fix security issues, followed by\n  yet another release to fix serious regressions, we have decided to make it\n  available to everyone and release yet another stable BOA-2.2.x Edition.\n\n  IMPORTANT! 
This is the last Edition in the 2.2.x series, which marks the end\n  of Drupal 5, PHP 5.2 and Drush 4 support. Next Edition will open 2.3.x series,\n  which will allow us to provide newer Ægir version with built-in Drush 6\n  support, sites in subdirectories, and many Ægir User Interface improvements.\n\n  If you still host any Drupal 5 sites or you are using PHP 5.2 for D6 sites,\n  you will not be able to upgrade to the next 2.3.x Edition and you will have to\n  stay on the 'legacy' BOA 2.2.x version, which will receive only system\n  security upgrades, but no further feature nor bugfix releases.\n\n  This also means that from now on the 'legacy' 2.2.x version will no longer\n  receive Drupal core upgrades, even if there will be security core releases.\n\n  It is time to upgrade away from Drupal 5 and away from PHP 5.2, if still used.\n\n# Updated Octopus platforms:\n\n  Commerce 1.28 ---------------- https://drupal.org/project/commerce_kickstart\n  Commerce 2.16 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 3.16 ----------------- https://drupal.org/project/commons\n  Open Outreach 1.8 ------------ https://drupal.org/project/openoutreach\n  OpenBlog 1.0-v3 -------------- https://drupal.org/project/openblog\n  Panopoly 1.8 ----------------- https://drupal.org/project/panopoly\n\n# New features and enhancements in this release:\n\n  * Allow to force OpenSSL etc. 
re-install with _SSL_FORCE_REINSTALL=YES\n  * Auto-Move no longer used shared codebases to /var/backups/codebases-cleanup\n\n# Changes in this release:\n\n  * Drush: Upgrade command line version 6 to mini-6-29-07-2014\n  * Issue #334 - Update provision_civicrm version - code by ixiam - thanks!\n  * Redis Integration Module: Update to version mod-21-07-2014\n  * Uninstall css_emimage in hostmaster to avoid broken upgrades.\n  * Update for Contrib [F]orce[D]isabled modules list.\n  * Use more aggressive defaults for _PURGE_BACKUPS and _PURGE_TMP if not set.\n\n# System upgrades in this release:\n\n  * PHP 5.4.31\n  * PHP 5.5.15\n\n# Fixes in this release:\n\n  * Add auto-cleanup for civimail ghost leftovers.\n  * Add cleanup for drush aliases in the main SSH account properly.\n  * Add cleanup for RVM archives and logs.\n  * Fix for default value on hot fix update.\n  * Fix for dev regression - it shouldn't set $conf['cache'] on valid dev URLs.\n  * Fix the logic for custom _DEL_OLD_EMPTY_PLATFORMS defaults.\n  * Issue #333 - Update BOA changelog URL shortcut.\n  * Nginx: Automate SPDY test to determine if OpenSSL re-install is required.\n  * Nginx: Silence access log for already protected /civicrm admin requests.\n  * Remove special one-time variables if set, once used.\n  * RVM: Install OS compatible Ruby version + various related adjustments.\n  * Silence useless noise in the log.\n  * Sync firewall limits.\n\n\n### Stable BOA-2.2.7 Release - Full Edition\n### Date: Thu Jul 17 03:11:47 CEST 2014\n### Includes Ægir 2.x-boa-custom version.\n### Latest hotfix added on: Fri Jul 18 18:21:40 CDT 2014\n\n# Release Notes:\n\n  This release includes some nice new features, improvements, bug fixes, one\n  new Octopus platform, five (5) updated Octopus platforms, along with latest\n  Drupal core security upgrades for all supported platforms.\n\n  NOTE: Since the first Edition in the BOA-2.3.x series is not ready for release\n  yet, and new Drupal core has been released today to 
fix security issues,\n  we have decided to make it available to everyone and release yet another\n  stable BOA-2.2.x series Edition.\n\n  IMPORTANT! This is the last Edition in the 2.2.x series, which marks the end\n  of Drupal 5, PHP 5.2 and Drush 4 support. Next Edition will open 2.3.x series,\n  which will allow us to provide newer Ægir version with built-in Drush 6\n  support, sites in subdirectories, and many Ægir User Interface improvements.\n\n  If you still host any Drupal 5 sites or you are using PHP 5.2 for D6 sites,\n  you will not be able to upgrade to the next 2.3.x Edition and you will have to\n  stay on the 'legacy' BOA 2.2.x version, which will receive only system\n  security upgrades, but no further feature nor bugfix releases.\n\n  This also means that from now on the 'legacy' 2.2.x version will no longer\n  receive Drupal core upgrades, even if there will be security core releases.\n\n  It is time to upgrade away from Drupal 5 and away from PHP 5.2, if still used.\n\n# New Octopus platforms:\n\n  OpenPublic 1.0-b23 ----------- https://drupal.org/project/openpublic\n\n# Updated Octopus platforms:\n\n  Commerce 1.27 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 3.15 ----------------- https://drupal.org/project/commons\n  ERPAL 2.0-b4 ----------------- https://drupal.org/project/erpal\n  Guardr 1.9 ------------------- https://drupal.org/project/guardr\n  Open Deals 1.33 -------------- https://drupal.org/project/opendeals\n\n# New features and enhancements in this release:\n\n  * Add early auto-repair procedure if Provision is missing for any reason.\n  * Add support for Debian Squeeze LTS updates.\n  * Add support for Debian Squeeze Stable Proposed Updates.\n  * Add views_accelerator in all D7 platforms by default via o_contrib bundle.\n  * Issue #307 - Support for Compass Tools via RVM with local user gems.\n  * Make $conf['cache'] configurable via disable_drupal_page_cache INI variable.\n\n# Changes in this release:\n\n 
 * Nginx: Send Boost compatible Cache-Control headers also with Speed Booster.\n\n   This is to mimic Drupal core behaviour when full-page cache is disabled,\n   even if it is not really disabled via disable_drupal_page_cache INI variable.\n   Note that Speed Booster continues to ignore Cache-Control headers sent by\n   Drupal backend, as before, to force its own TTL set via INI variable:\n   speed_booster_anon_cache_ttl or in the custom local.settings.php code.\n\n  * Add css_emimage to hostmaster makefile to remove dependency on o_contrib.\n  * Do not upgrade existing o_contrib, only add new if missing in old platforms.\n  * Drush: Upgrade command line version 6 to mini-6-16-07-2014\n  * Limited Shell configuration update.\n  * Nginx: Do not log HTTPS redirects.\n  * PHP: AutoRemove 5.2 from _PHP_MULTI_INSTALL if no instance is using it.\n  * Prefer dash if available.\n  * Redis Integration Module: Update to version mod-10-07-2014\n  * The ?nocache=1 in the URL should also force $conf['cache'] = 0; on the fly.\n  * Update lfd default configuration.\n\n# System upgrades in this release:\n\n  * cURL 7.37.1 (if installed from sources)\n  * Nginx 1.7.3\n  * PHP 5.4.30\n  * PHP 5.5.14\n  * PHPRedis: master-06-07-2014\n  * Redis 2.8.13\n\n# Fixes in this release:\n\n  * Authorized IPs detection - it should ignore serial/remote console logins.\n  * BND --- Bind9 DNS Server (available on Debian only).\n  * Clear packages cache more aggressively to avoid issues during OS upgrades.\n  * Configure RVM env properly if installed in the user home directory.\n  * Contrib update: filefield-6.x-3.13\n  * Disable redis integration during hostmaster upgrade.\n  * Do not allow known bots to activate nocache and noredis URLs behaviour.\n  * Do not use css_emimage in hostmaster to avoid broken upgrades.\n  * Fix for o_contrib update logic.\n  * Fix for possible permissions problem with redis log file.\n  * Fix incorrect version in the permissions fix.\n  * Fix legacy test logic to 
allow head instances to upgrade to another 2.2.x\n  * Fix regex in procs monitor.\n  * Fix the check for legacy systems on upgrade.\n  * Force keyring reinstall if reported as broken.\n  * Issue #316 - Octopus upgrade fails because of missing cd $_ROOT/.drush/sys\n  * Issue #319 - XTRAS_LIST settings are being overwritten (Ubuntu).\n  * Issue #320 - Compass Tools available on Squeeze, Wheezy, Precise and Trusty.\n  * Issue #324 - HTTPS results in redirect loop on AWS due to ignored _MY_OWNIP.\n  * Issue #328 - The /bin/sh symlink modified daily causes false lfd alarm.\n  * Make it clear that we recommend and support Debian 64bit.\n  * Make sure that redis and cache_backport are available for hostmaster.\n  * Purge no longer used jdk leftovers.\n  * Readme improvements.\n  * Remove no longer needed tmp chown -R\n  * Remove no longer used /data/src directory.\n  * Remove remote_import if found in the wrong directory.\n  * Sanitize logs lines before analyzing them.\n  * The list of platforms symbols can be in a single line or one per line.\n  * There is no need to force SHELL in the websh wrapper.\n  * Update nginx documentation URL.\n  * Use static ftp.debian.org instead of unreliable http.debian.net mirrors.\n\n\n### Stable BOA-2.2.6 Release - Full Edition\n### Date: Sat Jun 21 06:14:18 PDT 2014\n### Includes Ægir 2.x-boa-custom version.\n### Latest hotfix added on: Mon Jul 14 14:54:04 CDT 2014\n\n# Release Notes:\n\n  This release includes great new features, improvements, important changes,\n  many bug fixes, plus 3 new and 7 updated Octopus platforms.\n\n  IMPORTANT! This is the last Edition in the 2.2.x series, which marks the end\n  of Drupal 5, PHP 5.2 and Drush 4 support. 
Next Edition will open 2.3.x series,\n  which will allow us to provide newer Ægir version with built-in Drush 6\n  support, sites in subdirectories, and many Ægir User Interface improvements.\n\n  If you still host any Drupal 5 sites or you are using PHP 5.2 for D6 sites,\n  you will not be able to upgrade to the next 2.3.x Edition and you will have to\n  stay on the 'legacy' BOA 2.2.x version, which will receive only system\n  security upgrades, but no further feature nor bugfix releases.\n\n  This also means that from now on the 'legacy' 2.2.x version will no longer\n  receive Drupal core upgrades, even if there will be security core releases.\n\n  It is time to upgrade away from Drupal 5 and away from PHP 5.2, if still used.\n\n# New Octopus platforms:\n\n  aGov 1.0-rc8 ----------------- https://drupal.org/project/agov\n  ERPAL 2.0-b2 ----------------- https://drupal.org/project/erpal\n  Restaurant 1.0-a5 ------------ https://drupal.org/project/restaurant\n\n# Updated Octopus platforms:\n\n  Commerce 2.15 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 2.18 ----------------- https://drupal.org/project/commons\n  Commons 3.14 ----------------- https://drupal.org/project/commons\n  Guardr 1.5 ------------------- https://drupal.org/project/guardr\n  Open Atrium 2.19 ------------- https://drupal.org/project/openatrium\n  Open Outreach 1.7 ------------ https://drupal.org/project/openoutreach\n  Panopoly 1.6 ----------------- https://drupal.org/project/panopoly\n\n# New features and enhancements in this release:\n\n  * Drush aliases based workflows are now supported also remotely over SSH.\n\n    This is significant improvement since we have added automatically generated\n    and updated Drush aliases for the on-the-server use in BOA-2.2.0\n\n  * Add gems: compass_radix v2 and compass_twitter_bootstrap\n  * Add support for automatic Scout App upgrade on RVM/Ruby/Gems upgrade.\n  * Install headless JRE and only if Solr is expected to run.\n  * 
Issue #2268889 - Allow to whitelist IPs for chive, cgp and sqlbuddy access.\n  * Issues #2248907 #1299526 - Allow to use comments for admin notes.\n  * Nginx: Disable proxy_buffering to avoid useless extra layer in local proxy.\n  * SQL: Allow to change InnoDB log file size via _INNODB_LOG_FILE_SIZE variable\n  * Use better subdirectory tree for Drush extensions.\n  * Add support for disable_user_register_protection INI variable on the\n    platform level - on self-hosted BOA and Power Engines only.\n\n  * Issue #2240277 - Customize Octopus platforms list via control file.\n\n    ~/static/control/platforms.info\n\n    If this file exists and contains a single line with supported platform\n    symbols, it allows you to control/override the value of the\n    _PLATFORMS_LIST variable normally defined in the /root/.${_USER}.octopus.cnf\n    file, which can't be modified by the Ægir instance owner without system\n    root access.\n\n    IMPORTANT: If used, it will replace/override the value defined on initial\n    instance install and all previous upgrades. 
It takes effect on every\n    future Octopus instance upgrade, which means that you will miss any newly\n    added distributions unless they are also listed in this control file.\n\n    Supported values which can be written in this file - remember: all on a\n    single line, space separated, not one per line; they are listed one per\n    line below only for readability:\n\n    # D7P D7S D7D --- Drupal 7 prod/stage/dev\n    # D6P D6S D6D --- Pressflow 6 p/s/d\n    # AGV ----------- aGov\n    # CME ----------- Commerce v.2\n    # CS7 ----------- Commons 7\n    # DCE ----------- Commerce v.1\n    # DCS ----------- Commons 6\n    # ERP ----------- ERPAL\n    # FSR ----------- Feature Server\n    # GDR ----------- Guardr\n    # MNS ----------- Managing News\n    # OA7 ----------- Open Atrium D7\n    # OAM ----------- Open Atrium D6\n    # OAY ----------- Open Academy\n    # OBG ----------- OpenBlog\n    # OCH ----------- OpenChurch\n    # ODS ----------- Open Deals\n    # OOH ----------- Open Outreach\n    # OSR ----------- OpenScholar\n    # PPY ----------- Panopoly\n    # RER ----------- Recruiter\n    # RST ----------- Restaurant\n    # SRK ----------- Spark\n    # TTM ----------- Totem\n    # UC7 ----------- Ubercart D7\n    # UCT ----------- Ubercart D6\n\n    You can also use the special keyword 'ALL' to have all available platforms\n    installed, including those newly added in future BOA system releases.\n\n    Examples:\n\n      ALL\n      D7P D6P OAM MNS OOH RST\n\n  * Issue #314 - Make _BACKEND_ITEMS configurable via _BACKEND_ITEMS_LIST\n\n    You can whitelist extra binaries to make them available for web server\n    requests, in addition to the already whitelisted binaries known to be\n    safe.\n\n    NOTE: This feature is available only on self-hosted BOA systems.\n\n    Please be aware that you could easily open security holes by whitelisting\n    commands which may provide access to otherwise unavailable parts of\n    the system, because exec() in PHP doesn't respect other 
limitations\n    like the open_basedir directive.\n\n    You should list only filenames, not full paths, for example:\n\n      _BACKEND_ITEMS_LIST=\"git foo bar\"\n\n# Changes in this release:\n\n  * Add memcache, memcache_admin to the list of automatically disabled modules.\n  * Add support for Debian Squeeze LTS updates.\n  * Add support for Debian Squeeze Stable Proposed Updates.\n  * Add varnish to the list of automatically disabled modules.\n  * Add watchdog_live to the list of automatically disabled modules.\n  * Disable and remove unused init scripts on known VM systems.\n  * Drush: Upgrade command line version 6 to mini-6-21-06-2014\n  * Fast DNS Cache Server (pdnsd) install is no longer optional.\n  * Install only vanilla core platforms by default (can be overridden)\n  * Nginx: Update default limit_conn settings.\n  * Nginx: Use only newer control file to force DoS monitor aggressive mode.\n  * Sync permissions with new defaults in the hardened setup.\n  * Update files ownership to match defaults in the hardened setup.\n  * Use dynamic mirror selection provided by Debian instead of forced static.\n\n  * The BOA project has moved to GitHub!\n\n    We no longer use repositories and issue queues on drupal.org, in an effort\n    to avoid fragmentation and duplication. 
We have moved all downloads used\n    by Barracuda and Octopus to our mirrors a few months ago, and it helped to\n    make BOA faster and more reliable during both system install and upgrades.\n\n    The next step is to use http://boa.readthedocs.org as a new home for all\n    future documentation efforts - it will build the docs, including printable\n    versions, on the fly, using a dedicated GitHub repository as a backend,\n    where you can help migrate existing docs and improve them, both via the\n    boa-docs project issue queue and pull requests:\n\n      https://github.com/omega8cc/boa-docs\n\n    We also encourage you to use drupal.stackexchange.com for BOA support:\n\n      http://drupal.stackexchange.com/questions/tagged/aegir\n\n    Please use our GitHub project for contributing code, reporting bugs,\n    and also suggesting new features and ideas:\n\n      https://github.com/omega8cc/boa\n\n# System upgrades in this release:\n\n  * cURL 7.37.0 (if installed from sources)\n  * MariaDB 10.0.12\n  * MariaDB 5.5.38\n  * MySecureShell 1.33\n  * Nginx 1.7.2\n  * OpenSSL 1.0.1h (if installed from sources)\n  * PHP 5.4.29\n  * PHP 5.5.13\n  * PHP: Zend OPcache master-28-05-2014\n  * Redis 2.8.11\n  * Ruby 2.1.2\n\n# Fixes in this release:\n\n  * Add caveats to docs/REMOTE.txt\n  * Add explicit whitelisting in websh wrapper to avoid any edge case problems.\n  * Add info about Two-Factor Auth for Chive in the welcome email template.\n  * Add missing exceptions in global.inc and simplify docs/REMOTE.txt\n  * Add missing wrapper exceptions required by daily.sh script.\n  * Clean up packages cache on finale()\n  * Create symlink for boa wrapper on the initial install only.\n  * Delete both files and directories in the ~/static/trash/ daily.\n  * Do not remove bundler in CI instances if /root/.keep.bundler.cnf exists.\n  * Explain that _ALLOW_UNSUPPORTED works only with head.\n  * Fix for _NGINX_DOS_LIMIT logical error in the scan_nginx template.\n  * Fix for already 
installed Open Atrium 2.18 7.28.1\n  * Fix for Postfix configuration.\n  * Fix incorrect version in the permissions fix.\n  * Fix permissions after every upgrade.\n  * Fix permissions and owner/group required for feeds (upload) support.\n  * Fix regex in procs monitor.\n  * Force apticron re-install if apticron.conf is outdated.\n  * Generate /data/all/cpuinfo daily to be used in Provision.\n  * GPL Ghostscript should be available for the web (PHP-FPM) access.\n  * Issue #2248037 - Add Platform and Site INI files Templates on Verify task.\n  * Issue #2262935 - Modules dir must be group writable in custom platforms.\n  * Issue #315 - Upgrading from older versions of BOA fails\n  * Issue #316 - Upgrade fails because of missing cd $_ROOT/.drush/sys line.\n  * Issue #319 - XTRAS_LIST settings are being overwritten (Ubuntu)\n  * Issue #324 - HTTPS results in redirect loop on AWS due to ignored _MY_OWNIP.\n  * PHP: Add protection from switching to not installed CLI or FPM version.\n  * PHP: Do not block getenv function.\n  * Provision: Use /data/all/cpuinfo generated by BOA daily, if exists.\n  * Remove redundant downloads silencer.\n  * Remove remote_import if found in the wrong directory.\n  * Sanitize logs lines before analyzing them.\n  * SQL: Do not run update_innodb_log_file_size() if the size is the same.\n  * Sync BOND with BARRACUDA.\n  * Update for switch_to_bash procedure.\n  * Use already downloaded patches.\n  * Use Debian release specific proposed-updates.\n  * Use full path to sqlmagic in daily.sh to avoid 'command not found' error.\n  * Use static ftp.debian.org instead of unreliable http.debian.net mirrors.\n  * Fix for authorized IPs detection in the protected vhosts logic - it should\n    ignore serial/remote console logins.\n  * Provision: Use higher hardcoded threshold to avoid breaking tasks due to\n    high load on multi-CPU systems when provision can't determine the real load.\n\n\n### Stable BOA-2.2.5 Release - Full Edition\n### Date: Thu May  8 
11:59:23 PDT 2014\n### Includes Ægir 2.x-boa-custom version.\n### Latest hotfix added on: Sat May 10 09:05:19 PDT 2014\n\n# Release Notes:\n\n  This release includes no new features, but does include bug fixes plus latest\n  Drupal 7.28.1 and Pressflow 6.31.2 core in all built-in Octopus platforms.\n  There are also three updated distributions included, as listed below.\n  We also list here all hot-fixes applied to previous stable after its release.\n\n# Important - Read This First! (for self-hosted BOA only)\n\n  If you haven't run a full barracuda+octopus upgrade to the latest BOA Stable\n  Edition yet, don't use any partial upgrade modes explained in docs/UPGRADE.txt\n  Once a new BOA Stable is released, you must run *full* upgrades with commands:\n\n  $ barracuda up-stable\n  $ octopus up-stable all both\n\n  For a silent, logged mode, with an email message sent once the upgrade is\n  complete but no progress displayed in the terminal window, you can\n  alternatively run the commands below, starting with a screen session to\n  avoid an incomplete upgrade if your SSH session is closed for any reason\n  before the upgrade completes:\n\n  $ screen\n  $ barracuda up-stable log\n  $ octopus up-stable all both log\n\n  Note that the silent, non-interactive mode will automatically say Y/Yes\n  to all prompts and is thus useful for auto-upgrades scheduled in cron.\n\n  If you have skipped some recent BOA releases, you have the new default config\n  option _PERMISSIONS_FIX=NO in your /root/.barracuda.cnf configuration file,\n  and you are not sure if you follow best practices for managing permissions\n  as recommended in our docs (https://omega8.cc/node/116), then we recommend\n  changing it to _PERMISSIONS_FIX=YES, temporarily or even permanently\n  if your VPS is fast enough, and then running this powerful script as root:\n\n  $ bash /var/xdrago/daily.sh\n\n  Note that BOA 'legacy' mode is still at version 2.1.3\n\n# Updated Octopus platforms:\n\n  Commons 3.12 ----------------- 
https://drupal.org/project/commons\n  Open Atrium 2.18 ------------- https://drupal.org/project/openatrium\n  Open Outreach 1.6 ------------ https://drupal.org/project/openoutreach\n\n# Changes in this release:\n\n  * Add rsyslog/sysklogd to auto-healing procedures.\n  * Make the aggressive scan_nginx mode optional and use old mode by default.\n  * Nginx: Add HiScan to blocked crawlers list.\n  * Nginx: Add Riddler to blocked crawlers list.\n  * PHP: Use pm.process_idle_timeout = 10s for speed and RAM optimization.\n\n# System upgrades in this release:\n\n  * MySecureShell 1.33\n  * PHP 5.4.28\n  * PHP 5.5.12\n\n# Fixes in this release:\n\n  * Always define _PHP_CN variable properly.\n  * Firewall: Sync CONNLIMIT for web ports with updated limit_conn in Nginx.\n  * Fix for _NGINX_DOS_LIMIT logical error in the scan_nginx template.\n  * Force Pure-FTPd server re-install if key files are missing for any reason.\n  * Issue #2237167 - Improve authorized IPs detection in all protected vhosts.\n  * Issue #2262935 - Modules dir must be group writable in custom platforms.\n  * Nginx: Do not overwrite custom symlinks to the Under Construction template.\n  * Nginx: Update limit_conn in all instances and vhosts on Barracuda upgrade.\n  * PHP: Delete pear in legacy paths, if still exists.\n  * PHP: Fix for CVE-2014-0185 privilege escalation in FPM (doesn't affect BOA)\n  * Postfix: Force re-install if broken permissions detected on upgrade.\n  * Pressflow 6: Fix #GH 84 by using drupal_page_is_cacheable().\n  * Pressflow 6: Merge pull request #GH 85 from pressflow/SA-CORE-2014-002-fix.\n  * Pressflow 6: Remove duplicate openid_update_6001().\n  * Revert \"Force MariaDB 5.5 re-install\".\n  * Set the TERM env variable if missing to avoid errors.\n  * Skip packages set on hold when running apticron.\n  * The ~/static/control must be writeable by lshell user to manage ctrl files.\n  * Add extra cron semaphore to prevent concurrent cron invocations via\n    multiple running 
runner.sh instances.


### Stable BOA-2.2.4 Release - Full Edition
### Date: Wed Apr 30 17:03:36 PDT 2014
### Includes Ægir 2.x-boa-custom version.
### Latest hotfix added on: Fri May  2 04:54:25 PDT 2014

# Release Notes:

  This release includes several bug fixes along with five updated platforms,
  plus some hot-fixes applied to the previous stable after its release. We
  have also added a fix for a known problem in recent Drupal 7.27 [#2245331],
  hence the change from Drupal 7.27.1 to 7.27.2 in all D7 platforms.

# Updated Octopus platforms:

  ### Drupal 7.27.2

  Commerce 1.25 ---------------- https://drupal.org/project/commerce_kickstart
  Commerce 2.14 ---------------- https://drupal.org/project/commerce_kickstart
  Commons 3.11 ----------------- https://drupal.org/project/commons
  Panopoly 1.5 ----------------- https://drupal.org/project/panopoly

  ### Pressflow 6.31.1

  Commons 2.17 ----------------- https://drupal.org/project/commons

  Note: Always read and follow the upgrade procedure if one is explained in
  the distro release notes, like for Panopoly 1.5 at
  https://drupal.org/node/2255133

# New o_contrib modules:

  * print-6.x-1.19 (includes patch to auto-detect /usr/bin/wkhtmltopdf)
  * print-7.x-2.0  (includes patch to auto-detect /usr/bin/wkhtmltopdf)

# New features and enhancements in this release:

  * Support for session.gc_maxlifetime configurable via INI files.

  You can control the session garbage collector (EOL) per site and per
  platform. The value (in seconds) of the session_gc_eol variable is used as
  the session.gc_maxlifetime value and specifies the number of seconds after
  which data will be seen as 'garbage' and potentially cleaned up, resulting
  in the $_SESSION variable being discarded and affected authenticated users
  logged out.

  The BOA default defined in the system level global.inc file is 86400 == 24h.

# Changes in this release:

  * Drush: Upgrade command line version 6 to mini-6-26-04-2014
  * 
Nginx: Use higher defaults for limit_conn to avoid error 503 (CloudFlare)\n  * Nginx: Use more aggressive limits against spambots trying to rgstr accounts.\n  * Redis: Integration module (the modern variant) upgrade to 7.x-2.x-o8-2.6-B\n\n# System upgrades in this release:\n\n  * Nginx 1.7.0\n  * PHP 5.5.12\n  * Redis 2.8.9\n\n# Fixes in this release:\n\n  * Add symlinks in the home directory if missing (every 5 minutes).\n  * Add warning that Compass Tools install and upgrade may take a LONG time.\n  * Always define _PHP_CN variable properly.\n  * Do not delete symlinks to wrappers to avoid false LFD alarms.\n  * Fix for 'Force backward compatible SERVER_SOFTWARE'.\n  * Fix in websh for _IN_PATH logic to not break backend Drush tasks.\n  * Fix the logic for wrappers update and symlinks.\n  * Improve status messages to display when silent mode is used on upgrade.\n  * Improve whitelisting in the websh wrapper.\n  * Issue #2238805 - Command filtering - no word containing *drush* is allowed.\n  * Issue #2241495 - wkhtmltopdf stopped working after upgrade.\n  * Issue #2247997 - Update docs/REMOTE.txt with workaround for websh issue.\n  * Issue #2250397 - Always follow (limited) redirects in cURL requests.\n  * Issue #GH-304  - [rvm] use $_RUBY_VERSION as default.\n  * Issue #GH-305  - Check disk usage before running install/upgrade.\n  * Issue #GH-306  - Allow ruby 1.8 to remain installed.\n  * Nginx: Allow to configure keywords for aggressive requests rate monitoring.\n  * Nginx: Do not overwrite custom symlinks to the Under Construction template.\n  * Nginx: Sync FastCGI timeouts with other Nginx and PHP-FPM defaults.\n  * PHP: Add /opt/local/bin/php tmp symlink on barracuda/octopus upgrade.\n  * PHP: Allow to set custom _PHP_FPM_TIMEOUT but not lower than 60 (in seconds)\n  * PHP: Always respect _PHP_FPM_WORKERS variable if set to numeric value > 0\n  * PHP: Better defaults for realpath_cache_ttl and realpath_cache_size.\n  * PHP: Fix for CVE-2014-0185 privilege 
escalation in FPM (doesn't affect BOA)\n  * PHP: pm.max_children was not properly updated on FPM version self-switch.\n  * PHP: Sync incorrect default_socket_timeout with max_execution_time (180s).\n  * PHP: Use 30s for pm.process_idle_timeout - it prevents too high RAM usage.\n  * PHP: Variable _PROCESS_MAX_FPM is not used on the Satellite Instance level.\n  * Postfix: Force re-install if broken permissions detected on upgrade.\n  * Prevent duplicate cron invocations with more strict delays.\n  * Restart rsyslog once the install or upgrade is complete.\n  * Set the TERM env variable if missing to avoid errors.\n  * Shell: Proper fix for wildcard in the path (cd command only)\n  * Standardize install and upgrade for Chive, SQL Buddy and CGP.\n  * Sync Redis timeout with default FPM timeout (180s).\n  * Sync SQL connect_timeout with default mysql.connect_timeout in PHP (60s).\n  * The ~/static/control must be writeable by lshell user to manage ctrl files.\n  * Update the logic for multi-version PHP support in BOND.\n  * Update the logic for multi-version PHP support in docs/REMOTE.txt\n\n\n### Stable BOA-2.2.3 Release - Full Edition\n### Date: Fri Apr 18 12:57:40 PDT 2014\n### Includes Ægir 2.x-boa-custom version.\n\n# Release Notes:\n\n  This release includes several bug fixes and security upgrades both for the\n  system services and Drupal core, along with three updated platforms and new\n  features, including support for MariaDB 10.0 and Ubuntu 14.04 LTS Trusty.\n\n# Updated Octopus platforms:\n\n  ### Drupal 7.27.1\n\n  Guardr 1.3 ------------------- https://drupal.org/project/guardr\n  Open Atrium 2.17 ------------- https://drupal.org/project/openatrium\n  Recruiter 1.2 ---------------- https://drupal.org/project/recruiter\n\n# New features and enhancements in this release:\n\n  * Add docs/FAQ.txt\n  * Add support for MariaDB 10.0 or 5.5 install via _DB_SERIES variable.\n  * Add support for Ubuntu 14.04 LTS Trusty.\n  * Improve auto-healing for multi-version 
PHP-FPM setup.\n  * Improve docs/UPGRADE.txt\n  * Improve health check for protected vhosts during live SSH-auth update.\n  * Nginx: More aggressive limits against spambots trying to register accounts.\n\n# Changes in this release:\n\n  * Issue #GH-299 - Force disable LESS developer mode on production sites.\n  * Move custom scripts to /opt/local/bin/\n  * Nginx: Use higher defaults for limit_conn to avoid error 503 (CloudFlare)\n  * Normalize localhost entry in /etc/hosts to avoid FQDN mapped to 127.0.0.1\n  * PHP: Do not use separate FPM pool for cron if _PHP_FPM_DENY is empty.\n\n# System upgrades in this release:\n\n  * MariaDB 5.5.37\n\n# Fixes in this release:\n\n  * Add 'exit 0' line if missing.\n  * Add /opt/local/bin to PATH by default.\n  * Add symlinks for wrappers only temporarily.\n  * Add warning that Compass Tools install and upgrade may take a LONG time.\n  * Better gem uninstall options.\n  * Compass: Multiple fixes for various expected gems versions install/upgrades.\n  * Do not override lshell env_path in websh wrapper.\n  * Do not use monitored bin path for custom scripts to avoid LFD false alarms.\n  * Extra db GRANT for 127.0.0.1 not added when migrating site.\n  * Improve auto-healing to create required directories in /var/run/ if missing.\n  * Issue #2230269 - New Jetty 9 version overrides JETTY_PORT=8099 with 8080.\n  * Issue #2235991 - Drush make needs better exceptions in websh wrapper.\n  * Issue #2236475 - Clarify what the Legacy mode really means.\n  * Issue #2238965 - Add missing path to switch_to_bash().\n  * Issue #2241013 - Git commands should be whitelisted in websh wrapper.\n  * Issue #2241495 - wkhtmltopdf stopped working after upgrade.\n  * Issue #GH-301 - Update the list of restricted keywords for Octopus username.\n  * Issue #GH-304 - [rvm] use $_RUBY_VERSION as default.\n  * Make sure that permissions on Chive Manager dir/files are correct.\n  * Note: _SSL_FROM_SOURCES=YES is ignored and not needed on Wheezy and Precise.\n  
* PHP: Add /opt/local/bin/php tmp symlink on barracuda/octopus upgrade.\n  * PHP: Allow to set custom _PHP_FPM_TIMEOUT but not lower than 60 (in seconds)\n  * PHP: Always respect _PHP_FPM_WORKERS variable if set to numeric value > 0\n  * PHP: pm.max_children was not properly updated on FPM version self-switch.\n  * PHP: Variable _PROCESS_MAX_FPM is not used on the Satellite Instance level.\n  * Remove the line with header TABLE_NAME (sqlmagic).\n  * Reset PATH to avoid RVM overrides after Compass Tools install/upgrade.\n  * Shell: Allow to run 'drush cache-clear drush' in any directory.\n  * The _PHP_MODERN_ONLY variable is no longer used.\n  * Ubuntu 14.04 LTS Trusty requires MariaDB 10.0\n  * Use hostname -b instead of deprecated hostname -v.\n\n\n### Stable BOA-2.2.2 Release - Barracuda Edition\n### Date: Tue Apr  8 07:24:18 PDT 2014\n### Includes Ægir 2.x-boa-custom version.\n\n# Release Notes:\n\n  This is a bug-fix only release to address issues discovered after recent\n  major BOA-2.2.0 and subsequent BOA-2.2.1 Releases.\n\n  The most important problem fixed in this Release is related to known OpenSSL\n  security issue, which has been fixed in OpenSSL 1.0.1g\n\n  To learn more please visit: http://heartbleed.com\n\n  @=> Note for those on self-hosted BOA (skip this if you are on a hosted Ægir)\n\n  We recommend that you enable _SSL_FROM_SOURCES=YES option in your system\n  /root/.barracuda.cnf file, to always build latest OpenSSL from sources.\n  Note that it will also trigger OpenSSH and cURL install from sources, plus\n  subsequent PHP rebuild to include latest SSL libraries.\n\n  Note that _SSL_FROM_SOURCES=YES will not force the build from sources on\n  Debian Wheezy and Ubuntu Precise, to avoid confirmed conflicts and because\n  both OS versions already provide custom, patched OpenSSL packages.\n\n  This Release doesn't include any updates to the Octopus installer, so there is\n  no point in running full upgrade. 
It is enough to run the barracuda only,\n  system upgrade in the \"silent mode\" with:\n\n  $ screen\n  $ barracuda up-stable system\n\n  The system will send you an email with results when the upgrade is complete,\n  but there will be no upgrade progress displayed in the console. You can watch\n  it, if you prefer, with command (DATE/TIME are placeholders for real values):\n\n  $ tail -f /var/backups/reports/up/barracuda/DATE/barracuda-up-DATE-TIME.log\n\n# System upgrades in this release:\n\n  * Nginx 1.5.13\n  * OpenSSL 1.0.1g (if installed from sources)\n  * PHP 5.4.27\n  * PHP 5.5.11\n\n# Fixes in this release:\n\n  * Chive Authentication via SSH session may break Nginx due to race conditions.\n  * Drush specific dt() wrapper is required in Provision for custom platforms.\n  * Fix Compass Tools support for Omega (gems dependencies via bundle install).\n  * Fix default shell for system level cron tasks.\n  * Fix for csf firewall compatibility test.\n  * Force better health check on protected vhosts on live SSH-auth update.\n  * Improved health check for protected vhosts during live SSH-auth update.\n  * Issue #2229555 - On fresh boa install link missing durring install.\n  * Issue #2229715 - Tasks queue doesn't work on the Master Instance.\n  * Issue #2231093 - Add new line before 'UseDNS no' in the sshd_config file.\n  * Issue #2235991 - Drush make needs better exceptions in websh wrapper.\n  * Issue #294 - New Relic ext not installed even if _NEWRELIC_KEY is not empty.\n  * Nginx: Backup and re-create default wildcard SSL cert/key with rsa:4096\n  * Nginx: Generate 4096 bit long DH parameters when _NGINX_FORWARD_SECRECY=YES\n  * Normalize localhost entry in /etc/hosts to avoid FQDN mapped to 127.0.0.1\n  * PHP: Better default workers limits for the ondemand mode.\n  * PHP: max_input_time should be set to 180 and not 60, by default.\n  * PHP: Zend OPcache directive opcache.enable=1 must be set in all ini files.\n  * Reset PATH to avoid RVM overrides after 
Compass Tools install/upgrade.\n  * The 'scp' command is broken in limited shell.\n  * Too broad whitelisting breaks commands in limited shell with 'tmp' keyword.\n  * Too restrictive open_basedir defaults break access to valid PEAR paths.\n  * Too restrictive open_basedir defaults break access to valid Tika paths.\n  * Use rsa:4096 by default in self-signed certs for Nginx and FTPS.\n\n\n### Stable BOA-2.2.1 Release - Full Edition\n### Date: Tue Apr  1 10:28:45 SGT 2014\n### Includes Ægir 2.x-boa-custom version.\n\n# Release Notes:\n\n  This is a bug-fix only release to address issues discovered after recent\n  major BOA-2.2.0 Release.\n\n# Fixes in this release:\n\n  * Chive Authentication via SSH session doesn't work on some older instances.\n  * Compass Tools don't use correct paths to Ruby 2.1.1\n  * Cron for sites doesn't work on old instances without Nginx wildcard vhost.\n  * FTPS (FTP over SSL) connections may experience TLS problems.\n  * PHP: Disabled 'assert' may cause warnings on features revert.\n  * PHP: Disabled 'create_function' may break some contrib modules or code.\n  * The 'git pull' command is broken in limited shell.\n  * The 'rsync' command is broken in limited shell.\n  * The 'drush dl foo' command can't be run outside of site directory.\n\n# Known Issues on systems upgraded to BOA-2.2.1 (and 2.2.0) releases\n\n  ==> Updated on Tue Apr  8 01:26:47 PDT 2014\n\n  @=> Issues fixed in BOA head (running the hotfix in stable is enough):\n\n  * Chive Authentication via SSH session may break Nginx due to race conditions.\n  * Drush specific dt() wrapper is required in Provision for custom platforms.\n  * Issue #2229715 - Tasks queue doesn't work on the Master Instance.\n  * PHP: max_input_time should be set to 180 and not 60, by default.\n  * The 'scp' command is broken in limited shell.\n  * Too broad whitelisting breaks commands in limited shell with 'tmp' keyword.\n  * Too restrictive open_basedir defaults break access to valid Tika paths.\n  * 
Zend OPcache directive opcache.enable=1 must be set in all php.ini files.

  To fix all of those problems you can run, as root, on a self-hosted system:

  $ wget -q -U iCab http://files.aegir.cc/update/boa221fix.txt
  $ bash boa221fix.txt

  We have already fixed this on all hosted and remotely managed Ægir
  instances.

  @=> Other issues fixed in BOA head (run 'barracuda up-head system' to apply):

  * PHP: New Relic extension not installed even if _NEWRELIC_KEY is not empty.
  * Too restrictive open_basedir defaults break access to valid PEAR paths.


### Stable BOA-2.2.0 Release - Full Edition
### Date: Mon Mar 31 06:44:08 SGT 2014
### Includes Ægir 2.x-boa-custom version.

# Release Notes:

  There are many important changes and improvements in this release
  you should be aware of *before* running your BOA system upgrade.

  Even if you are on a hosted BOA system with upgrades managed for you,
  it is very important to read at least these extensive release notes.

  Here is a list of topics covered in detail further below:

  * New 'legacy' mode available for installs and upgrades
  * Important Note For Those Using Our Hosted Ægir Service!
  * Custom php.ini protection has changed and will not honor old settings
  * Barracuda no longer supports Percona since 2.2.0 release
  * Support for PHP FPM/CLI version safe switch per Octopus instance
  * All PHP FPM workers in 5.5, 5.4 and 5.3 now use the 'ondemand' mode
  * Drush aliases are now automatically copied to all relevant accounts
  * Drush is now restricted to use only trusted modules installed by default
  * The ~/.drush and other important directories and symlinks are protected
  * Support for safely configurable cache bins exceptions in Redis
  * Two-Factor-like Authentication to protect access to Chive DB Manager
  * Support for session.cookie_lifetime configurable via INI files
  * Support for files permissions-fix exceptions via platform level INI file
  * 
High-performance JavaScript callback handler (js) in all platforms

  And if you are more curious, read also the big changelog further below,
  which covers only a fraction of the over 560 commits since the BOA-2.1.3
  release.

  But what if you are not ready for this major upgrade and would like more
  time for testing, while still being able to run system upgrades, thus
  effectively staying on the previous version 2.1.3 with the standard command
  'barracuda up-stable system', as explained in docs/UPGRADE.txt?

#-### New 'legacy' mode available for installs and upgrades

  We are introducing a special 'legacy' mode for both BOA installs and
  upgrades.

  This means that starting with BOA-2.2.0 you can use commands like:

  $ boa in-legacy public server.mydomain.org my@email o1
  $ barracuda up-legacy system
  $ octopus up-legacy o1
  etc.

  These special 'legacy' commands allow you to install and/or upgrade the 'old
  stable', once the 'new stable' is released. But only until another 'stable'
  is released, of course. Thus you can use it only as an interim solution
  if you are not yet ready for the latest 'stable' BOA Edition, for any
  reason, but you want to update at least the low level system packages,
  kernel, etc.

  Note also that once you upgrade to the current 'stable', it is not possible
  to downgrade back to the 'old stable' with the 'legacy' mode, so please
  proceed with care!

  This option will be particularly important once we release the *next* major
  BOA Edition. It will come with terminated support for Drush 4, Drupal 5
  and, yes, PHP 5.2 (finally). 
This step is required to use the latest Drush 6+ with
  supported Drupal core versions and supported PHP versions, which in fact is
  required to introduce the real Ægir 2.0 in BOA -- we are still using an
  older Ægir 2 HEAD version, customized for backward compatibility, so it is
  time to move on and stay up to date with everything, and get new features
  like the ability to manage Drupal sites in subdirectories, etc.

  Once that *next* major BOA Edition is released, we will freeze the 'legacy'
  mode at the 2.2.x series level, which will receive only security upgrades
  and no further feature or bugfix releases. At that point you will have to
  stick to the 'legacy' BOA version if you need to run PHP 5.2 and Drupal 5
  with Ægir based on Drush 4. It will still be possible, but not recommended
  and not really supported, apart from security related issues outside of
  Drupal. This also means that at that point the 'legacy' version will no
  longer receive Drupal core upgrades, even when there are core security
  releases.

  Note that we don't use the term "major release" in the usual versioning
  convention. This is because the first digit, for historical reasons, refers
  to the supported Ægir version, the second digit refers to the BOA stack
  major release, and the last digit refers to both feature and bugfix BOA
  stack upgrades.

#-### Important Note For Those Using Our Hosted Ægir Service!

  NOW is the time (and last chance) to upgrade all your legacy Drupal 5 sites
  and outdated Drupal 6 sites still not compatible with at least PHP 5.3,
  because once we upgrade to the *next* major BOA Edition, it will no longer
  be possible to run Drupal sites not compatible with PHP 5.3 -- there were
  literally years of this legacy support provided, and this finally comes to
  an end, because we will not use the BOA 'legacy' mode on our own servers. 
It will still be available for the remotely managed 'Ægir on
  Your Own Server' option, though, but only on request:
  https://omega8.cc/support

#-### Custom php.ini protection has changed and will not honor old settings

  If you have custom settings in any of your php.ini files protected with the
  old variable in /root/.barracuda.cnf, make a backup of your ini files
  before running this upgrade. While these files will not get overwritten,
  they will no longer be used, because we have introduced a new, standardized
  directory structure to properly support multi-PHP-version systems.

  The respective php.ini files are now located in /opt/phpXX/etc/phpXX.ini
  for FPM and /opt/phpXX/lib/php.ini for CLI, where XX is 55, 54, 53 or 52,
  depending on the versions listed via the _PHP_MULTI_INSTALL variable in the
  /root/.barracuda.cnf file. The variables used to protect ini files from
  being overwritten have also changed, to _CUSTOM_CONFIG_PHPXX.

  If you need any non-standard settings in any of the active ini files, don't
  overwrite them with the old files; rather, carefully review and apply only
  the differences you need.

#-### Barracuda no longer supports Percona since 2.2.0 release

  If you have used Percona before, Barracuda will force an upgrade to
  MariaDB 5.5 and a PHP rebuild automatically. We plan to add the option to
  install MariaDB 10.0 once it is released as stable and tested. MariaDB has
  already been the default DB server in Barracuda for a long time.

#-### Support for PHP FPM/CLI version safe switch per Octopus instance

  This allows the instance owner to easily switch the PHP version without
  system admin (root) help. 
All you need to do is create a ~/static/control/fpm.info
  and/or ~/static/control/cli.info file with a single line telling the system
  which available PHP version should be used (if installed): 5.5, 5.4 or 5.3

  Only one version can be set per file, but you can use separate versions for
  web access (fpm.info) and the Ægir backend (cli.info). The system will
  switch to the versions defined via these control files in 5 minutes or
  less. We use external control files, and not an option in the Ægir
  interface, to make sure you will never lock yourself out by switching to a
  version which may cause unexpected problems.

  Note that the same version will be used in all platforms and all sites
  hosted on the same Octopus instance. Why not try the latest and greatest
  PHP 5.5 now?

#-### All PHP FPM workers in 5.5, 5.4 and 5.3 now use the 'ondemand' mode

  This change helps to better manage memory use, especially on systems with
  multiple PHP versions running in parallel. It also frees resources and
  allocates them dynamically, only when requests are coming in and only to
  the active FPM pools. Note that the 'ondemand' mode doesn't affect Zend
  OPcache, because it is managed by the parent process(es) which stay(s)
  active.

  The net result is that on a vanilla BOA install, without non-hostmaster
  sites running, the complete stack consumes just ~200 MB of RAM (in total,
  so with MariaDB, Redis, Nginx etc. included) with all three PHP-FPM
  versions running in parallel: 5.5, 5.4 and 5.3:

  CPU[#*                                                       2.0%]
  Mem[|||||||||||||###***********************************209/1002MB]
  Swp[                                                        0/0MB]

  magic:~# ps axf | grep fpm
   8380 ? Ss 0:00 php-fpm: master process (/opt/php55/etc/php55-fpm.conf)
   8391 ? Ss 0:00 php-fpm: master process (/opt/php54/etc/php54-fpm.conf)
   8402 ? 
Ss 0:00 php-fpm: master process (/opt/php53/etc/php53-fpm.conf)
  magic:~#

#-### Drush aliases are now automatically copied to all relevant accounts

  While Ægir manages Drush aliases for its backend needs, they are normally
  available neither to the main nor to the extra shell users on the instance.

  But starting with 2.2.0, BOA automatically manages copies of all Drush
  aliases, adding, updating or removing them every 5 minutes, once it detects
  that changes have been applied, like: the site has been migrated to another
  platform, or the associated client/owner has been updated, etc.

  You no longer need to `cd` to the respective site directory to perform
  some available Drush tasks. Just check the available aliases list with
  `drush aliases` and then enjoy the beauty of `drush @foo.com command`
  syntax.

#-### Drush is now restricted to use only trusted modules installed by default

  Note: this change affects only the Ægir backend/system user, typically o1,
  while all other limited shell accounts are not affected, because they are
  already individually jailed with a protected custom php.ini and special
  Drush wrappers and settings.

  This means that you can skip this section if you are on a hosted Ægir.

  The customized Drush now included in BOA by default will be able to use
  only extensions/commands bundled with contrib modules which are either part
  of the modules added to every platform via the shared
  o_contrib/o_contrib_seven symlink located in the platform core modules
  directory, or are included in the built-in platforms' installation profiles
  space, or in the system account's protected .drush sub-directory.

  This means that any Drush extension/command bundled with a contrib module
  uploaded to the sites/all/modules space in any built-in platform will be
  ignored and not available on the command line for the backend user. 
The same
  applies to the site level contrib space, if used.

  Additionally, any Drush extension/command bundled with custom platforms
  located in the ~/static directory tree will be completely ignored by Drush,
  no matter where uploaded: core, profiles, sites/all or sites/foo.com space.

  This is not a problem in hosted environments, where users should normally
  never have access to the Ægir backend user, anyway.

  If you have any reason to use Drush on the command line as an Ægir
  backend/system user, for example to escape the limited shell restrictions,
  we recommend installing vanilla Drush 6, for example in
  /opt/tools/drush/vanilla/drush/ and then symlinking it into /usr/local/bin/
  with a custom name, so it will be available automatically in your backend
  o1 user's PATH.

  Further improvements to secure sites and instances in completely locked
  virtual jails are planned in the next BOA releases, which will address all
  other known and even potential security issues in Ægir.

#-### The ~/.drush and other important directories and symlinks are protected

  There are directories, files and symlinks which should be protected from
  any changes and managed exclusively by the BOA system. The reasons range
  from security to avoidable support requests, as when a less experienced
  user deletes his sites or platforms symlinks, which can't be easily nor
  automatically recreated. 
It also prevents the sub-account users from using
  their account home directory as a private upload/archive disk space.

#-### Support for safely configurable cache bins exceptions in Redis

  Sometimes you may want to exclude some problematic cache bins from Redis
  so they will use the default SQL engine, at least until the related issue
  is fixed either in your contrib code or in the Redis integration module.

  Previously, you had to edit the local.settings.php file, which is both
  tedious and dangerous because of the extra steps involved:
  https://omega8.cc/node/230 to add a line, for example:
  $conf['cache_class_cache_foo'] = 'DrupalDatabaseCache';
  Plus, it had to be done for every site separately.

  Now you can simply list the cache bins to exclude, comma separated, either
  in the site or platform level active INI file.

  Example: redis_exclude_bins = "cache_views,cache_foo,cache_bar"

#-### Two-Factor-like Authentication to protect access to Chive DB Manager

  We are introducing Two-Factor-like Authentication logic - now extended also
  to protect Chive DB Manager, Collectd Graph Panel and SQL Buddy DB Manager.
  You must be logged in via SSH and run any auto-continuous command, for
  example: `ping -i 30 google.com` to keep the access open for your IP
  address.

  Why is this important?

  While BOA forces an HTTPS connection for Chive, anyone who knows the URL
  can access it and attempt either to run a brute-force attack to get into
  your site's database, or at least to hammer the server and cause DoS-like
  effects, at least until the system blocks his IP on the firewall.

  The other important reason is that your site's DB credentials change only
  when you migrate or rename the site, and otherwise remain intact. Now, what
  if you have an employee or a freelancer whom you no longer want to be able
  to access your site? If you think that deleting his SFTP sub-account is
  enough, think again. 
He can still access your site's database via Chive, if
  he knows the site's DB credentials and the Chive URL.

  But now it's no longer possible. Only a visitor who is able to successfully
  authenticate himself via SSH, and keeps an active SSH session, will be able
  to access the Chive URL. The rest of the world will see just a dummy Nginx
  403 Access Denied error.

  And in case you are using self-hosted BOA, the same protection is also
  applied to Collectd Graph Panel and SQL Buddy DB Manager.

#-### Support for session.cookie_lifetime configurable via INI files

  You can control session cookie expiration (TTL) per site and per platform.
  The value (in seconds) of the session_cookie_ttl variable is used as the
  session.cookie_lifetime value.

  The BOA default defined in the system level global.inc file is 86400 == 24h.

  We also recommend that you enable and configure the built-in session_expire
  module, which allows you to keep the sessions DB table tidy. Make sure that
  the TTL set via the session_cookie_ttl variable is *lower* than the TTL
  configured in the session_expire module, because the module does not care
  about PHP settings and simply deletes old entries from the sessions table
  on cron run.

#-### Support for files permissions-fix exceptions via platform level INI file

  You can opt out of the globally enabled daily-permissions-fix procedure per
  platform with the new fix_files_permissions_daily variable.

  This feature can be useful when you prefer to manage a custom platform in
  a monolithic codebase mode in Git, where forcing permissions could conflict
  with your workflow or development tools. 
Otherwise you should never disable
  this, to avoid issues with Ægir tasks related to sites on the platform.

  Note that the system level option _PERMISSIONS_FIX (introduced in BOA-2.1.0
  and set to NO by default) should also be set to YES in the system level
  /root/.barracuda.cnf file, if you prefer to have permissions fixed in all
  sites on all platforms, except those with fix_files_permissions_daily = FALSE
  set in the platform level, active INI file.

#-### High-performance JavaScript callback handler (js) in all platforms

  All platforms, both built-in and custom in the ~/static directory tree,
  enjoy automatically added High-performance JavaScript callback handler (js)
  support, which requires an extra /js.php file in the platform root and also
  proper Nginx rewrites. The module itself is also included in the built-in
  o_contrib bundle.

  All you need to do is enable the module, if recommended by any other
  module, and enjoy much faster page generation, where possible. 
You can review the\n  full list of modules which will benefit from this great helper module on its\n  project page: https://drupal.org/project/js\n\n\n  Enjoy another super-fast and even more powerful BOA Edition!\n\n\n# New Octopus platforms:\n\n  ### Drupal 7.26.4\n\n  Guardr 1.1 ------------------- https://drupal.org/project/guardr\n\n# Updated Octopus platforms:\n\n  ### Drupal 7.26.4\n\n  Commerce 1.24 ---------------- https://drupal.org/project/commerce_kickstart\n  Commerce 2.13 ---------------- https://drupal.org/project/commerce_kickstart\n  Commons 3.9.1 ---------------- https://drupal.org/project/commons\n  Drupal 7.26.4 ---------------- https://drupal.org/drupal-7.26\n  Open Academy 1.0 ------------- https://drupal.org/project/openacademy\n  Open Atrium 2.15 ------------- https://drupal.org/project/openatrium\n  Open Deals 1.32 -------------- https://drupal.org/project/opendeals\n  Open Outreach 1.5 ------------ https://drupal.org/project/openoutreach\n  OpenBlog 1.0-a3 -------------- https://drupal.org/project/openblog\n  OpenChurch 1.12 -------------- https://drupal.org/project/openchurch\n  OpenScholar 3.12.1 ----------- http://theopenscholar.org\n  Panopoly 1.2 ----------------- https://drupal.org/project/panopoly\n  Recruiter 1.1.2 -------------- https://drupal.org/project/recruiter\n  Spark 1.0-b1 ----------------- https://drupal.org/project/spark\n  Totem 1.1.2 ------------------ https://drupal.org/project/totem\n  Ubercart 3.6 ----------------- https://drupal.org/project/ubercart\n\n  ### Pressflow 6.30.1\n\n  Commons 2.16 ----------------- https://drupal.org/project/commons\n  Feature Server 1.2 ----------- http://bit.ly/fserver\n  Managing News 1.2.4 ---------- https://drupal.org/project/managingnews\n  Open Atrium 1.7.2 ------------ https://drupal.org/project/openatrium\n  Pressflow 6.30.1 ------------- http://pressflow.org\n  Ubercart 2.13 ---------------- https://drupal.org/project/ubercart\n\n# New features and enhancements in this 
release:\n\n  * Add High-performance JavaScript callback handler (js) in all platforms.\n  * Add session_expire module to shared contrib space in all platforms.\n  * Add support for session.cookie_lifetime configurable via INI variable.\n  * Allow to control swap clear with control file /root/.no.swap.clear.cnf\n  * Auto-Update all BOA install and upgrade wrappers daily.\n  * Default system /bin/sh symlink target replaced with /bin/websh wrapper.\n  * Disable tcp_slow_start_after_idle for better SPDY performance.\n  * Improve the logic in the global.inc for faster processing.\n  * Issue #1217486 - Add o_contrib symlinks on platform Verify task.\n  * Issue #1310054 - Add support for drush aliases in all lshell accounts.\n  * Issue #2148335 - Add Default Localhost Vhost.\n  * Issue #2166641 - Make hard-coded load thresholds configurable.\n  * Issue #2170079 - Use _CUSTOM_CONFIG_LSHELL to protect lshell.conf template.\n  * Issue #2226919 - Custom Platforms in Version Control (skip permissions fix).\n  * Lshell: Update /etc/lshell.conf only when required instead of every 5 min.\n  * Manage extra db GRANT for 127.0.0.1 to allow SSH tunneling for SQL access.\n  * New option _REDIS_LISTEN_MODE to configure PORT or SOCKET mode globally.\n  * Nginx: Add support for protected PHP-FPM monitor.\n  * Nginx: Force aggressive no-cache headers for the under construction page.\n  * Nginx: Switch to buffered logging when /root/.high_traffic.cnf exists.\n  * PHP: Add support for FPM/CLI version safe switch per Octopus instance.\n  * PHP: Allow to install and run all supported versions: 5.5, 5.4, 5.3, 5.2\n  * PHP: Extra php.ini files automatically managed per system and shell user.\n  * PHP: FPM workers in 5.5, 5.4 and 5.3 will use 'ondemand' mode by default.\n  * PHP: Use separate FPM pools per Octopus instance.\n  * PHP: Use TCP Socket mode for all FPM pools and Port mode for legacy vhosts.\n  * Protect ~/.drush and other important directories and symlinks from changes.\n  * Redis: 
Allow to exclude cache bins on the fly, per site or per platform.
  * Save 295 seconds on BOA Install and Upgrade.
  * Set and auto-manage strict permissions on some important config files.
  * Set PHP CLI version in the /bin/websh wrapper on the fly.
  * Use Two-Factor-like Authentication logic for Chive DB Manager access.
  * Improve `sqlmagic fix file.sql` to properly replace INSERT INTO with
    INSERT IGNORE INTO (a workaround for duplicate keys in the DB dump)
  * Use the same trick with modules/local-allow.info to temporarily make
    civicrm.settings.php writable, if it exists.

# Changes in this release:

  * Add ~/static/trash/* to automatic daily cleanup.
  * Add coder to auto-disabled modules -- see #2068771
  * Allow 'drush uli' as root, but deny root access to Drush by default.
  * Disable D8 install via _ALLOW_UNSUPPORTED until next release.
  * Do not enable SYNFLOOD protection by default.
  * Do not force old_short_name in any profile file directly.
  * Firewall: Allow to connect to Apple Push Notification service (APNs)
  * Issue #289 - Update lshell env_path for RVM and install/update global gems.
  * Issue #292 - Open standard RTMP port 1935.
  * Lshell: Use latest Drush 6 (master) by default and remove other versions.
  * Nginx and PHP-FPM: Better default timeout limits.
  * Nginx: Add apk, pxl, ipa to known mime types / download extensions.
  * Nginx: Use text/xml mime type for .xml URLs and restore other mime defaults.
  * Open local access for web based sites cron.
  * Open outgoing port 2525 for custom SMTP connections.
  * Percona DB server is no longer supported.
  * PHP: Always build from sources.
  * PHP: Disable 5.2 FPM if installed, but not used.
  * PHP: Only critical errors are enabled by default in the CLI mode.
  * PHP: Reloading FPM hourly no longer makes any sense.
  * PHP: Remove support for deprecated APC and Memcached.
  * PHP: Restore MailParse support - 2.1.6
  * PHP: Use aggressive 
disable_functions defaults (further tuned per FPM pool).
  * Redis: Integration module (the modern variant) upgrade to 7.x-2.x-o8-2.6-A
  * Redis: Use modern version with enabled fast lock and aggressive flush mode.
  * Remove insecure exception for wkhtmltopdf uploaded in the user space.
  * Rename master repository on GitHub from legacy nginx-for-drupal to boa.
  * Set _STRICT_BIN_PERMISSIONS=YES by default.
  * Upgrade Compass Tools on every upgrade, not just on new BOA release.
  * Use 60s opcache.revalidate_freq by default to save disk I/O on live sites.
  * Use Ruby Version Manager (RVM) by default to manage Compass Tools etc.
  * Use RVM for global gem installation and updates.
  * Use search_api_solr-7.x-1.4 for new installs.
  * Use web based cron by default to benefit from Zend OPcache.
  * Do not check existence nor auto-config Purge/Expire unless INI variable
    purge_expire_auto_configuration is set to TRUE (automatically, when
    the module is detected as enabled).
  * New naming convention for Ubercart 3.x platforms: [ud2] to support upgrades
    from uberdrupal profile, and [aq3] to support upgrades from acquia profile.
    Note that you have to choose Vanilla Testing profile to see [ud2] or
    Vanilla Minimal to see [aq3] platform in the Add Site form.
  * GitHub is now our main repository; we have re-opened the issue queue there
    for patches and merge requests, while d.o has code mirror status from now on.
  * Make it crystal clear that Ubuntu is barely supported, rarely tested and
    thus not recommended.
  * The "Run cron" extra task has been removed for security reasons. 
Site cron
    can be run via the standard procedure scheduled in Ægir, which uses a
    local but web based request to the protected /cron.php URL, or on the
    command line, or from the site admin area, as usual.

# System upgrades in this release:

  * Bazaar Version Control System (bzr) 2.6.0
  * Collectd Graph Panel (CGP) master-30-03-2014
  * cURL 7.36.0 (if installed from sources)
  * Git 1.9.1 (if installed from sources)
  * Jetty 7.6.14, 8.1.14, 9.1.3
  * Limited Shell 0.9.16.5-om8
  * MariaDB 5.5.36
  * MySecureShell 1.32
  * Nginx 1.5.12
  * OpenSSH 6.6p1 (if installed from sources)
  * OpenSSL 1.0.1f (if installed from sources)
  * PHP 5.4.26
  * PHP 5.5.10
  * PHP: Imagick 3.1.2
  * PHP: ionCube loader 4.5.3
  * PHP: MongoDB 1.4.5 (optional add-on)
  * PHP: Zend OPcache master-09-03-2014
  * PHPRedis: master-22-03-2014
  * Redis 2.8.8
  * Ruby 2.1.1 (from now on compiled from sources)

# Fixes in this release:

  * Add fix_collectd_nginx for Collectd config update.
  * Add missing panopoly_demo app in the Panopoly distro to fix broken install.
  * Add missing variables to active INI files, if needed.
  * Avoid way too long Speed Booster TTL for bots, especially for rss feeds.
  * Changing old_short_name mapping to: uberdrupal->testing and acquia->minimal
  * Do not force old_short_name if already set in db/drushrc.
  * Do not run swap clean when heavy tasks like cdp backup run.
  * Drush: Simplify and improve access restrictions logic when aliases are used.
  * Excessive and useless Drush internal cache clear in daily.sh removed.
  * Fix default PATH in all sub-scripts.
  * Fix for broken cURL from sources install logic.
  * Fix for drush make broken by websh fix for cd wildcard crash fix.
  * Fix for multi-IP cron access.
  * Fix missing /dev/fd early enough to avoid broken tasks in Ægir.
  * Fix the logic in manage_ip_auth_access()
  * Fix to avoid daily services maintenance/cron freeze if Jetty didn't 
stop.
  * Force backward compatible SERVER_SOFTWARE to silence core warnings.
  * Force OpenSSH rebuild on OpenSSL upgrade (if installed from sources).
  * Issue #1317322 - Filters UI broken.
  * Issue #1991908 - Fix the syslog flood caused by collectd df plugin.
  * Issue #2057213 - Use better SQL GRANT style.
  * Issue #2110589 - Unable to install BOA correctly on Debian 6.0 and OpenVZ
  * Issue #2141283 - Drush aliases like `drush dbup` no longer work properly.
  * Issue #2144801 - Display bug on add site.
  * Issue #2144947 - Install new Ruby for better compatibility with new gems.
  * Issue #2150557 - Make the check and update procedure for UseDNS safe.
  * Issue #2152383 - Fix for [js module] - add js_server_software variable.
  * Issue #2159881 - Drush is broken because Console_Table URL no longer works.
  * Issue #2161115 - AdvAgg: Strictly follow RFC 2616 14.21
  * Issue #2167141 - Do not exclude --with-ldap --with-gmp in the PHP on Wheezy.
  * Issue #2172089 - Fix for syntax error.
  * Issue #2173209 - Do not use legacy (removed) symlink for version check.
  * Issue #2175197 - Regex configuration not matching esi/ssi tags.
  * Issue #2177837 - process.max not set correctly for PHP 5.5 and 5.4
  * Issue #2182671 - Solr 4 with Jetty 8 does not start after upgrade.
  * Issue #2188907 - Update docs criteria for not rebuilding ssh, ssl, and curl.
  * Issue #2199229 - CiviCRM 4.4.4 Requires change in the Nginx configuration.
  * Issue #288 - SMTP Authentication Module depends on fsockopen.
  * Lshell: Fix for crash on wildcard cd.
  * Lshell: Remove symlinks for legacy drush_make.
  * Modules can be incorrectly whitelisted from disabling by installation
    profile.
  * Nginx: Add exceptions for known video players.
  * Nginx: Avoid downtime on upgrade because of too low variables_hash_max_size
  * Nginx: Better gzip defaults.
  * Nginx: Default value of variables_hash_max_size is too low.
  * Nginx: Do not overwrite gzip_types.
  * Nginx: 
Improve fastcgi defaults.\n  * Nginx: Remove too broad regex for 'flag' keyword in the URI.\n  * Nginx: Send Access-Control-Allow-Origin * header also for /favicon.ico\n  * Nginx: Use port 9090 in nginx_octopus_include.conf by default (PHP-FPM 5.3)\n  * Nginx: Use Redirect 301 for legacy paths /sites/default/files/*\n  * Once you have next 2.3.x installed, you can't downgrade to legacy 2.2.x\n  * PHP: Add protection for instance level php.ini files.\n  * PHP: Fix for broken build when --with-ldap is used.\n  * PHP: Fix for broken dependencies in newer Debian and Ubuntu systems.\n  * PHP: Fix for forced rebuild mode if lib curl is broken or updated with apt.\n  * PHP: Fix for GEOS 3.4.2 and multi-version install.\n  * PHP: Fix for legacy 5.2 logic.\n  * PHP: Force 5.5 to use correct SQL drivers so its built-in will not be used.\n  * PHP: Reduce duplicate rebuilds.\n  * PHP: The --with-curlwrappers option has been removed in 5.5\n  * Redis: Auto-Restart if socket is missing only when socket mode is enabled.\n  * Redis: Exclude cache_form bin or it will break modules like ajax_comments.\n  * Redis: Force clean restart daily, with long enough sleep time.\n  * Redis: Restore pwd protection.\n  * Redis: The cache_metatag bin needs aggressive flush mode -- see #2062379\n  * Reduce system load during db backups with short delays between databases.\n  * Remove collectd on major system upgrade even if /var/www/cgp doesn't exist.\n  * Silence AIS (Adaptive Image Styles) module .htaccess requirements.\n  * Sort and group cnf variables to bring some order into this chaos.\n  * Symlink main drush wrapper to shared location outside of Master Instance.\n  * Update for Redis bins exceptions logic.\n  * Update system load check method in all scripts.\n  * Use forced Jetty restart mode.\n  * Use https in the welcome screen image src URL.\n  * Use IPv4-strict hostname and IP checks only.\n\n\n# Known Issues on systems upgraded to BOA-2.2.0 release (all fixed)\n\n  ==> Updated on Tue 
Apr  1 12:20:27 SGT 2014

  @=> Issues hot-fixed in stable (run 'barracuda up-stable system' to apply):

  * Compass Tools don't use correct paths to Ruby 2.1.1
  * Chive Authentication via SSH session doesn't work on some older instances.
  * PHP: Disabled 'create_function' may break some contrib modules or code.
  * PHP: Disabled 'assert' may cause warnings on features revert.
  * Cron for sites doesn't work on old instances without Nginx wildcard vhost.
  * The 'git pull' command is broken in limited shell.
  * FTPS (FTP over SSL) connections may experience TLS problems.
  * The 'rsync' command is broken in limited shell.
  * The `drush dl foo` command can't be run outside of a site directory.


### Stable BOA-2.1.3 Release - Full Edition
### Date: Thu Nov 21 17:55:47 SGT 2013
### Includes Ægir 2.x-boa-custom version.

# Release Notes:

  This release provides Drupal 7.24.1 and Pressflow 6.29.1 core security
  upgrades for all supported distributions. It also includes two updated
  platforms and several fixes for issues discovered since BOA-2.1.2 was
  released 3 days ago, plus some clever improvements to help you automatically
  optimize all tables daily, or even automatically convert tables to-innodb
  or to-myisam, either per site or per platform, or per entire Octopus
  instance. 
There is also the Purge Cruft
  Machine, available to run some daily spring-cleaning with a configurable TTL.

  Enjoy another super-fast and even more clever BOA Edition!

# Updated Octopus platforms:

  ### Drupal 7.24.1

  Open Atrium 2.0.9 ------------ http://drupal.org/project/openatrium
  OpenScholar 3.9.3 ------------ http://openscholar.harvard.edu

# New features and enhancements in this release:

  * Purge Cruft Machine moved to daily.sh agent and made configurable
    with _DEL_OLD_BACKUPS and _DEL_OLD_TMP per Octopus instance.

    If changed to any number greater than "0", it will automatically delete
    backups stored in the /data/disk/U/backups/ directory and in all hosted
    sites' backup_migrate directories, during daily cleanup, if created more
    than X days ago, where X is the number of days defined in _DEL_OLD_BACKUPS.

    If "0", then this feature is disabled. It can't be configured via INI
    files, so you may need to submit a support request if you want to customize
    this option, set to 7 days by default on all hosted instances, as per our
    backups policy: https://omega8.cc/backups

    The same logic applies to _DEL_OLD_TMP, which defines how long the
    temporary files in all hosted sites files/tmp/ and private/temp/ directories
    are kept before being deleted during daily maintenance.

  * Added the sql_conversion_mode variable in the platform and site level INI
    to customize the instance-wide mode optionally set via _SQL_CONVERT.

    This option allows you to activate and/or customize DB table conversion
    per site, per platform and via _SQL_CONVERT per Octopus instance.

    Supported values are: innodb and myisam (lowercase only!)
    Note that this conversion will run daily even if all tables have already
    been converted, so it will run OPTIMIZE on all tables, effectively.

    Related Issue #2126471 - Convert DB engine control files to ini format.

# Changes in this release:

  
* Allow to install unsupported distros only in head, not stable.\n  * Contrib update: advagg-7.x-2.3\n  * Map drush to drush6 on command line. You can still use drush4 and drush5.\n  * New contrib: display_cache\n  * New contrib: panels_content_cache\n  * Nginx 1.5.7 -- security upgrade.\n  * Use dev versions of CDN module with patch for AdvAgg 7 compatibility.\n  * Use Drush 5 and 6 head until next release.\n\n# Fixes in this release:\n\n  * Always cleanup temp downloads to avoid failed builds due to leftovers.\n  * Always fix permissions on contrib on upgrade and in daily.sh agent.\n  * Better auto-recovery when broken libcurl is detected.\n  * Delete any tar/gz/zip files in modules|themes|libraries daily.\n  * Delete dangerous local-allow.info file.\n  * Display all active INI variables in HTTP headers on dev URLs.\n  * Fix for cron auto-correction.\n  * Fix for Feature Server broken due to incorrect context version downloaded.\n  * Fix the logic for cURL install from sources.\n  * Nginx: Add Access-Control-Allow-Origin header also for static .json\n  * Nginx: Protect also .md files in modules|themes|libraries dirs.\n  * Issue #2137583 - Permissions on the site directory are broken after running,\n    how ironically, the Health Check task.\n  * Issue #2138811 - Maintenance agent disables modules from its standard\n    turn-off list, even if they are required by other modules, apps or features.\n\n# Known Issues on systems upgraded to initial BOA-2.1.3 release\n\n  ==> Updated on Thu Nov 28 18:33:58 SGT 2013.\n\n  @=> Issues which will trigger `barracuda up-stable system` if discovered:\n\n  * PHP: Fix for broken cURL from sources install logic.\n  * PHP: Fix for forced rebuild mode if lib curl is broken or updated.\n  * PHP: Fix for legacy 5.2 rebuild required when broken libcurl is detected.\n  * Use dummy variable instead of 'true' to avoid breaking the logic.\n\n  @=> Issues which will NOT trigger `barracuda up-stable system` if discovered:\n\n  * Add coder 
to the auto-disabled modules list -- see #2068771
  * Excessive and useless Drush internal cache clear in daily.sh
  * Issue #2141283 - Drush aliases like `drush dbup` no longer work properly.
  * Issue #8215957 - Invalid version type error in old Drush Make.
  * MariaDB 5.5.34 just released.
  * Redis: Incorrect permissions on the integration module directory.
  * Modules can be incorrectly whitelisted by installation profile and
    never disabled, while they should be.

# HotFix for known post-upgrade issues

  Run the boa-fix-upgrade script when logged in as system root:

  $ cd;rm -f boa-fix-upgrade.sh.txt*
  $ wget -q -U iCab http://files.aegir.cc/update/boa-fix-upgrade.sh.txt
  $ bash boa-fix-upgrade.sh.txt

  This script is updated whenever a new regression or bug is discovered,
  so it is safe and recommended to run it again if the list of known issues
  has been updated. Note that this script will detect and fix all Octopus
  instances on your system at once.


### Stable BOA-2.1.2 Release - Full Edition
### Date: Mon Nov 18 00:03:30 SGT 2013
### Includes Ægir 2.x-boa-custom version.

# Release Notes:

  This is primarily a bug-fix release and you should read the release notes and
  also the changelog for both BOA-2.1.1 and BOA-2.1.0 for context, especially
  if you are upgrading from BOA-2.0.9 or an older release (we have tested
  upgrades from Editions as old as BOA-2.0.1, released on Dec 28 07:00:00 EST
  2011).

  This Edition includes fixes for all Known Issues on systems already upgraded
  to the initial BOA-2.1.1 release, plus some extra improvements and one
  updated platform (Managing News).

  Important new features include the ability to use either the legacy (default)
  or the modern (highly recommended) version of the Redis integration module.

  The reason we don't enable the modern version by default is that it may need
  some testing before using it on complex Drupal sites. 
The modern version\n  of Redis integration module comes with some great new features which allow\n  you to configure flush mode per cache bin, with three modes available.\n\n  Please refer to the module README for more information on all available\n  advanced flush modes: http://bit.ly/1drmi35\n\n  It also comes with super-fast lock backend, which can be enabled only when\n  you are using the modern version, but still needs more improvements, so we\n  auto-configure some exceptions on the fly, when it is used, to avoid known\n  issues, as reported in the queue: https://drupal.org/node/2135545\n\n  Please read also INI docs to understand how it works, and how to improve\n  performance by enabling and tuning these settings: http://bit.ly/1bwfZZj\n\n  Enjoy!\n\n# Updated Octopus platforms:\n\n  ### Pressflow 6.28.3\n\n  Managing News 1.2.4 ---------- http://drupal.org/project/managingnews\n\n# New features and enhancements in this release:\n\n  * Redis: Modern integration module 7.x-2.5 with latest fixes from #2135545\n    is available as an option with new INI variable: redis_use_modern\n\n  * Redis: New option redis_flush_forced_mode to better control flush modes\n    when redis_use_modern = TRUE\n\n  * Add example for custom Speed Booster cache TTL configuration in the optional\n    override.global.inc file. 
It can also be used in the local.settings.php file.

  * Add detection and auto-config for the allow_private_file_downloads variable.
  * Issue #1978066 - Add _RESERVED_RAM variable for "reserved" memory.
  * Map all old_short_name profiles relations in the Ægir Provision directly.

# Updated Ægir modules or extensions:

  * Newer aegir_custom_settings 6.x-2.3 with site clone added for client role.
  * Newer registry_rebuild 7.x-2.1 with fixed critical bug - see: #2130905

# Changes in this release:

  * Auto-Disable views_cache_bully also when Ubercart is enabled.
  * Do not delete testing profile, we need it for acquia->testing upgrade path.
  * Do not map old_short_name on the Octopus level, it is moved to Provision.
  * Make ACTIVE INI files comments-free to never confuse them with templates.
  * Make the fix for known Feeds problem global, not just ManagingNews specific.
  * PHP: 5.4.22 and 5.5.6 as an option (for testing only).
  * PHP: Use latest (master) phpredis_new by default.
  * Redis: Default integration module version reverted to pre-7.x-2.0 release.
  * Redis: Force rebuild on system upgrade to update also Redis config.
  * Redis: Make redis_lock_enable available only when redis_use_modern = TRUE
  * Set opcache.revalidate_freq to 5 sec only on non-dev URLs by default.
  * Switch Ubercart 3 to use D7 Minimal instead of Standard to fix upgrade path.
  * Update prev release notes to explain importance of using latest Pressflow 6.

# Fixes in this release:

  * Always fix permissions on contrib on upgrade and in daily.sh agent.
  * Avoid files checks for Drupal for Facebook and Domain Access by default.
  * Better auto-recovery when broken libcurl is detected.
  * Fix for cron auto-correction.
  * Fix for post-upgrade permissions issues affecting modules|themes|libraries.
  * Fix for too restrictive permissions in /data/all/000/*
  * Fix regression in the logic for dev URLs detection and auto-configuration.
  * Fix the 
forced contrib upgrade logic.\n  * Fix the logic for cURL install from sources.\n  * Improve procs monitoring agent with better whitelisting.\n  * Improve sanitize_string() filtering to avoid issues with strong passwords.\n  * Issue #1860706 - Native, unified support also for D6 lock backend.\n  * Issue #2023895 - Do not kill java, only jetty and tomcat procs when needed.\n  * Issue #2105477 - Allowed gem commands need custom aliases in lshell.\n  * Issue #2134329 - Going from 2.0.9 to 2.1.1 does not update platforms.\n  * Issue #2135545 - Lock Backend freezes the site on cache clear.\n  * Issue #2136413 - Use -H to force correct HOME environment variable.\n  * Issue #2136413 - Use sudo to avoid lshell protection in DB auto-conversion.\n  * Make sure that /usr/local/bin is in the PATH.\n  * Make the check_if_required test in daily.sh six (6) times faster.\n  * Nginx: Fix too restrictive access policy for Ægir specific /hosting URI.\n  * Redis: Add some debugging on dev URLs to make sure permissions are correct.\n  * Redis: Added prefix support for lock backend.\n  * Redis: Disable persistent mode to never use on-disk storage, see #2135545\n  * Redis: Do not enable tcp-keepalive or weird things may happen, see #2135545\n  * Redis: Exclude some bins to avoid issues with lock support, see #2135545\n  * Redis: Missing default values on variable_get() calls causing D6 break.\n  * Redis: Update docs and naming convention for modern integration module.\n  * Silence cURL test in meta-installers.\n  * Sync randpass with sanitize_string().\n  * Set less restrictive permissions on civicrm.settings.php since\n    provision_civicrm does not make the file writable temporarily as it should.\n\n# Known Issues on systems upgraded to initial BOA-2.1.2 release\n\n  ==> Updated on Thu Nov 21 01:28:23 SGT 2013 with all fixes applied to stable.\n\n  * Feature Server platform is broken since BOA-2.1.0 due to incorrect context\n    module version downloaded via makefile. 
This bug affects only some instances
    upgraded to head and not stable, but since in the first 24 hours after
    the BOA-2.1.2 release our static downloads were still out of sync on two of
    our mirrors, it is safe to assume that you should run the HotFix via
    boa-fix-upgrade.sh.txt anyway.

  * There is a regression introduced in the maintenance agent logic, which
    results in the dependency check being effectively ignored. This may cause
    various disastrous effects, like disabling all modules chained via a
    feature or via the apps module, because the apps module requires the update
    module, which is normally disabled. While any feature which requires the
    dblog or update module enabled is considered a serious developer error and
    should be avoided, we have to respect all defined dependencies, to never
    break any site by forcefully disabling modules.

  * Part of the Site Health Check task (the `drush6 status-report` command)
    breaks permissions on the site directory, which blocks any further tasks
    like Clone, Migrate and Backup. This regression was introduced in the
    BOA-2.1.0 release.

# HotFix for known post-upgrade issues

  Run the boa-fix-upgrade script when logged in as system root:

  $ cd;rm -f boa-fix-upgrade.sh.txt*
  $ wget -q -U iCab http://files.aegir.cc/update/boa-fix-upgrade.sh.txt
  $ bash boa-fix-upgrade.sh.txt

  This script is updated whenever a new regression or bug is discovered,
  so it is safe and recommended to run it again if the list of known issues
  has been updated. 
Note that this script will detect and fix all Octopus
  instances on your system at once.


### Stable BOA-2.1.1 Release - Full Edition
### Date: Sat Nov  9 17:00:00 EST 2013
### Includes Ægir 2.x-boa-custom version.

# Release Notes:

  There are some important bug fixes in this release, along with changes
  to the Auto-(En|Dis)able agent, explained in greater detail in the embedded
  docs included in the platform specific INI file template.

  Note that the system agent doesn't modify any existing and active INI file,
  so updated docs are included only in the INI templates updated each morning:
  default.boa_platform_control.ini and default.boa_site_control.ini

  You can find both INI templates also online at: https://omega8.cc/node/293

  We have also added some docs to help you if you experience any issues
  with cached, Views based pages and panels: https://omega8.cc/node/292

  Note also that since BOA-2.1.0 all D6 based sites are forced to use PHP 5.3.27
  on hosted and managed Ægir instances, even if they were previously configured
  to use deprecated, insecure, unstable and outdated PHP 5.2 for D6 based sites.

  This means that if you are using a D6 core that is too old (older than
  6.28.x), some features will stop working, namely imagecache, /update.php and
  any feature which depends on contrib modules not yet compatible with PHP 5.3

  We have allowed the use of PHP 5.2 for too long, to give enough time (in
  years) to upgrade to the latest Pressflow 6.x version, and we can no longer
  extend this allowance, for obvious security and systems stability reasons.

  Furthermore, sticking with PHP 5.2 would not allow us to use the latest
  Ægir 2.x version (BOA still includes a bit older Ægir 2.x for backward
  compatibility), since newer Ægir versions need newer Drush (BOA still uses
  ancient Drush 4.6) and newer Drush requires a newer PHP version.

  It is even more important because Drupal 8 will not run on older PHP nor on
  Drush older 
than 7.x, so there is basically no choice other than to make all your sites
  compatible with PHP 5.3, or you will miss all future BOA system upgrades.

  Now even PHP 5.3 is officially in the EOL (End-of-Life) phase, with only
  security fixes expected, and only until July 2014, when it will be
  completely deprecated, so we will have to switch to modern PHP 5.5, first
  introduced as an option, later this year.

  Upgrading to the latest Pressflow 6.x is *very* easy. Just add all contrib
  modules you are using in your outdated 6.x platform to the latest Pressflow
  6.x platform we provide by default, reverify the new platform, clone the site
  in the old platform, migrate the cloned copy to the new platform and, if
  everything works fine, migrate your live site as well. It will take less than
  15 minutes and there is absolutely no excuse not to upgrade.

  If you experience issues with your site due to the old core used on the now
  forced PHP 5.3, we can temporarily revert it to PHP 5.2 for the last time,
  but it is really a bad idea. 
A much better idea is to find those 15 minutes and upgrade
  your site, so we can continue to provide future upgrades and amazing new
  features also for your Ægir instance.

  Enjoy the new, shiny BOA Edition!

# Updated Octopus platforms:

  ### Drupal 7.23.3

  Open Atrium 2.0.4 ------------ http://drupal.org/project/openatrium
  Open Deals 1.31 -------------- http://drupal.org/project/opendeals
  OpenBlog 1.0-a3 -------------- http://drupal.org/project/openblog
  Recruiter 1.1.2 -------------- http://drupal.org/project/recruiter
  Spark 1.0-a10 ---------------- http://drupal.org/project/spark
  Totem 1.1.2 ------------------ http://drupal.org/project/totem

  ### Pressflow 6.28.3

  Commons 2.13.2 --------------- http://drupal.org/project/commons
  Open Atrium 1.7.2 ------------ http://drupal.org/project/openatrium

# New features and enhancements in this release:

  * Document all system-level control files in docs/ctrl/system.ctrl
  * Fast Redis lock implementation is now enabled by default for D6 and D7.
  * Nginx: Add NAXSI (Nginx Anti XSS & SQL Injection) WAF as an option.
  * Use 100% static downloads in stable to remove dependency on github and d.o
  * Use extended connection check procedure before exit 1.
  * Use reliable Redis UP check via PING/PONG instead of pid file check.

# Updated o_contrib modules:

  * Contrib update: httprl-6.x-1.13
  * Contrib update: httprl-7.x-1.13
  * Contrib update: redis-7.x-2.3
  * Contrib update: views_cache_bully-6.x-3.x
  * Contrib update: views_cache_bully-7.x-3.x
  * Contrib update: views_content_cache-7.x-3.0-alpha3

# Changes in this release:

  * Introducing Pressflow 6.28.3 to include fix for #2130865
  * Updated INI docs for views_cache_bully and views_content_cache.
  * ProsePoint moved to unsupported.

  * Private files mode in D7 requires allow_private_file_downloads = TRUE in
    boa_site_control.ini or boa_platform_control.ini and is disabled by default.

  * 
Do not enable views_cache_bully and views_content_cache, unless special\n    control files exist and related variables in the platform specific INI\n    are not set to TRUE.\n\n  * Auto-Disable views_cache_bully on sites with commerce module enabled, but\n    allow to override it with ~/static/control/enable_views_cache_bully.info\n    and views_cache_bully_dont_enable = FALSE\n\n# Fixes in this release:\n\n  * All-in-One Site Health Check in Ægir not displayed for non-uid=1 users.\n  * Always prepare shared D6 and D7 cores.\n  * Always remove www. from the Redis cache key prefix.\n  * Better check for not yet updated Octopus instances in a batch upgrade mode.\n  * Check if ctools is enabled before attempting to enable views_content_cache.\n  * Do not force HEAD on Precise.\n  * Fix for /root/.upstart.cnf consistency.\n  * Fix for PATH in aegir.sh\n  * Fix still too aggressive procs monitoring.\n  * Fix the check_if_required() logic in the Auto-Disable agent.\n  * Improve all cURL based downloads with auto-continue mode.\n  * Issue #1980250 - Fix for broken cache_page bin in Redis integration module.\n  * Issue #2127237 - New Relic: Unable to initialize module on Debian Wheezy.\n  * Issue #2128233 - Rsyslog is still installed and consumes all CPU on OpenVZ.\n  * Issue #2128819 - Better exceptions in too aggressive process monitoring.\n  * Make sure to never set any HTTP headers or redirects in the backend.\n  * Nginx: Do not use separate location for /images/ URI shortcut.\n  * Nginx: Fix for regression in \"Rewrite for legacy requests with /index.php\".\n  * Nginx: Fix the logic for restricted access to /authorize.php and /update.php\n  * Nginx: Map URI shortcuts early to avoid overrides in other locations.\n  * Remove rsyslog on VZ, if installed.\n  * Restore backward compatibility with IP and not wildcard based vhosts.\n  * Use silent upgrade mode in _LENNY_TO_SQUEEZE and _SQUEEZE_TO_WHEEZY.\n  * Issue #2127329 - AdvAgg (D6 version) presence in o_contrib should 
not
    auto-disable standard aggregation, unless the module is enabled.

# Known Issues on systems upgraded to initial BOA-2.1.1 release

  ==> Updated on Tue Nov 12 14:44:16 EST 2013 with all fixes applied to stable.

  * The fast Redis lock may cause problems on node edit, with a temporary
    error saying that the node was changed by "another user", because the
    current implementation was not multisite-aware enough.

  * The Views Cache Bully module, if enabled after the upgrade to BOA-2.1.0,
    may break the cart and checkout on sites using Ubercart, and should be
    disabled automatically, as is done for Commerce based sites since
    BOA-2.1.1.

  * The included version of the Redis integration module, 7.x-2.3, causes
    warnings for D6 sites, visible either on dev URLs or on the command
    line, and may break some advanced Views configurations if custom caching
    is not yet enabled. It may also break menu updates, because the cache
    clearing policy for the cache_menu bin is not aggressive enough.

  * Permissions set daily on the civicrm.settings.php file are too
    restrictive, and since the provision_civicrm extension does not make
    this file writable before attempting to re-create it, as it should,
    all tasks on CiviCRM enabled sites fail.

  * Permissions on sites/all/{modules,themes,libraries} on newly added,
    empty platforms with no sites created yet (so not covered by the running
    daily permissions fix) are initially not group writable, as they
    should be.

  * The check_if_required procedure in the running daily maintenance agent,
    which detects if a module is required by any other module, feature or
    installation profile, is 6 (six) times slower than it should be and
    never disables the devel module properly.

  * The running daily maintenance agent does not disable files checks for
    the Drupal for Facebook (fb) and Domain Access modules in the platform
    level INI file, as it should, unless those modules are detected.

# HotFix for 
known post-upgrade issues

  Run the boa-fix-upgrade script when logged in as system root:

  $ cd;rm -f boa-fix-upgrade.sh.txt*
  $ wget -q -U iCab http://files.aegir.cc/update/boa-fix-upgrade.sh.txt
  $ bash boa-fix-upgrade.sh.txt

  This script is updated whenever a new regression or bug is discovered,
  so it is safe and recommended to run it again whenever the list of known
  issues has been updated.

  You can also run another upgrade with the "barracuda up-stable system"
  command, followed by "octopus up-stable all both log", since all fixes
  have been applied to current stable as well, but the boa-fix-upgrade
  script is faster than running a complete upgrade again.


### Stable BOA-2.1.0 Release - Full Edition - Now NSA-proof
### Date: Sat Nov  2 18:15:19 EDT 2013
### Includes Ægir 2.x-boa-custom version.

# Release Notes:

  There are some really important changes and improvements in this release
  you should be aware of before running your BOA system upgrade.

  Even if you are on a hosted BOA system with upgrades managed for you,
  it is very important to read at least these release notes.

  And if you are more curious, also read the giant changelog further below.

  Besides all changes, fixes and improvements, all currently supported
  Drupal distributions have been upgraded to use the latest Drupal core
  versions.

  Plus, there are seven (7) NEW platforms included!

#-### Control files to customize your BOA system per platform and per site

  Almost all control files are now replaced with two centralized,
  platform and site specific INI files, using standard PHP INI format.

  The platform specific INI file template, with extensive documentation
  included, is named default.boa_platform_control.ini and is located
  in the sites/all/modules directory.

  The site specific INI file template, with extensive documentation
  included, is named default.boa_site_control.ini and is located
  in the 
sites/foo.com/modules directory.

  Any existing control files, both on the platform and site level, will be
  automatically converted into active INI files and then deleted (also
  automatically, to avoid confusion) on the first run of the special
  maintenance script /var/xdrago/daily.sh, while defaults in the global.inc
  file will allow for a smooth, fully automated transition.

  This change will make your BOA system easier to customize and maintain,
  and will improve overall system performance/load thanks to minimized
  file checks.

#-### Empty and unused platforms auto-cleanup

  BOA finally has the ability to auto-delete all empty and unused platforms
  during daily maintenance, which happens each morning (server time zone).
  While on all hosted instances the TTL (time-to-live) is set to 60 days
  (counted from the date/time of the last verify task on the platform),
  it can be configured per instance in the /root/.USER.octopus.cnf file
  by changing the value of the _DEL_OLD_EMPTY_PLATFORMS variable to anything
  higher than 0 (days), which is the default (and means the feature is OFF).

  Note that every Octopus instance upgrade re-verifies all existing platforms,
  so if you configure the TTL to 90 days but run the upgrade every month
  or every two months, no platforms will ever be deleted.

  If you wish to have this TTL customized on a hosted instance, where
  it is set to 60 (days) by default, please open a support ticket via:
  https://omega8.cc/support

  Remotely managed BOA systems can have this feature enabled and configured
  upon request submitted via https://omega8.cc/support

#-### All-in-One Site Health Check in your Ægir control panel

  You will notice a new Task available on every site page in your Ægir
  Control Panel, named "Run health check". 
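  A task like this is, under the hood, essentially a short chain of Drush
  commands run against the site's alias. As a rough, hypothetical sketch
  only (this is not the actual Ægir task code; the @foo.com alias and the
  wrapper itself are made up for illustration):

```shell
# Hypothetical sketch of an all-in-one site health check -- not the
# actual Ægir task code. SITE is a placeholder Drush site alias.
SITE="${SITE:-@foo.com}"
DRY_RUN="${DRY_RUN:-yes}"   # keep "yes" to only print the commands
LOG=""

run() {
  LOG="$LOG $*"
  if [ "$DRY_RUN" = "yes" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run drush  "$SITE" clean-modules    # drop DB leftovers of missing modules
run drush6 "$SITE" pm-updatestatus  # report (not apply) code updates
run drush6 "$SITE" status-report    # overall site status overview
run drush6 "$SITE" updatedb-status  # report (not apply) pending DB updates
run drush  "$SITE" security-review  # extra checks, Drupal 7 sites only
```

  The real task additionally stores each command's output in the Task Log.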
This new task will run a few
  important tests on your site and will store all results in the Task Log,
  so you can easily review all results by clicking on the "View" button to
  the right of the task once it is complete. Make sure to check all details
  by clicking on the "Expand" links in the log.

  What are the tests included?

  1. The "drush clean-modules" command will be run for you to make sure
     there is no module left in the system table as "enabled" while it
     no longer even exists on the system. This part utilizes (behind
     the scenes) the extension: https://drupal.org/project/clean_missing_modules
     If it finds any such leftover, it cleans it up automatically.

  2. The "drush6 pm-updatestatus" command is a native Drush command which
     tells you if there are any waiting module/code updates in the site.
     Note: it will *not* upgrade anything; it is a check only.
     Of course there should be no updates waiting if you follow Ægir
     site upgrade best practices and your site's code is up to date.
     Yes, this check will automatically enable the "update" module for you,
     but it will not auto-disable it afterwards (so as not to break things
     in case it is required by some other module or feature).

  3. The "drush6 status-report" command is a native Drush command
     which provides you with a complete overview of your site status.
     Instead of logging into the site, you can review it easily here.

  4. The "drush6 updatedb-status" command is a native Drush command
     which tells you if there are any waiting database updates in the site.
     Note: it will *not* apply these updates; it is a check only.
     Of course there should be no updates waiting if you follow Ægir
     site upgrade best practices, but who knows, hence the check.

  5. 
The \"drush security-review\" command will run only on Drupal 7 based sites\n     and provides some additional information by using (behind the scenes)\n     this extension: https://drupal.org/project/security_review\n\n#-### PFS (Perfect Forward Secrecy) support in Nginx\n\n  BOA now fully supports the most secure, yet still compatible with most\n  used systems and browsers SSL configuration.\n\n  All hosted BOA instances have been already upgraded automatically and\n  you don't need to do anything to make it work -- it is already done\n  for you -- both on any SSL enabled site with dedicated certificate\n  and IP address and also on the standard, system-wide SSL proxy level,\n  which is available for every hosted site -- just type HTTPS:// in the URL.\n\n  On self-hosted instances it needs to be enabled by adding a line in your\n  /root/.barracuda.cnf file: _NGINX_FORWARD_SECRECY=YES before the upgrade.\n  Note that depending on the system used, it may auto-install some\n  requirements like latest OpenSSL libraries and packages.\n\n  Remotely managed BOA systems can have this feature enabled upon request\n  submitted via https://omega8.cc/support\n\n#-### SPDY (new networking protocol) support in Nginx\n\n  BOA now fully supports the advanced, new protocol which allows to run\n  sites over HTTPS with much better performance than plain HTTP.\n\n  While not all browsers support this protocol yet, it is already enabled\n  by default on all hosted BOA instances (but obviously works only when you\n  access the site via HTTPS:// in the URL).\n\n  On self-hosted instances it needs to be enabled by adding a line in your\n  /root/.barracuda.cnf file: _NGINX_SPDY=YES before the upgrade.\n  Note that depending on the system used, it may auto-install some\n  requirements like latest OpenSSL libraries and packages.\n\n  Remotely managed BOA systems can have this feature enabled upon request\n  submitted via https://omega8.cc/support\n\n#-### Zend OPcache replaced APC in 
PHP

  Newer versions of PHP already come with the next generation opcode cache
  from Zend, which is now open-sourced and also available as an extension
  for older PHP versions, including 5.2 and 5.3.

  BOA leverages this opportunity and now uses Zend OPcache instead of APC.

  This change is introduced automatically on all systems, both hosted
  and managed for you and also self-hosted.

  Only Debian Squeeze and Ubuntu Precise systems which use PHP installed
  from packages and not from sources, i.e. with _BUILD_FROM_SRC=NO
  set in the /root/.barracuda.cnf file, still use APC by default.
  You can install Zend OPcache by changing it to _BUILD_FROM_SRC=YES
  before running the upgrade.

  Note that the default Zend OPcache configuration caches every script for
  60 seconds, so any changes you introduce will be visible with up to
  a 1 minute delay. However, if there is .dev. or .devel. in the site
  name, this delay is lowered automatically to just 1 second.

  You can change the default per site permanently by adding the preferred
  value in local.settings.php; for example, to set it to 10 seconds:
  ini_set('opcache.revalidate_freq', '10'); -- but remember that using
  this method you will override the default (1 second) for dev URLs.

  Enjoy the most advanced, NSA-proof BOA Edition yet!


# New Octopus platforms:

  ### Drupal 7.23.3

  Open Academy 1.0-rc3 --------- http://drupal.org/project/openacademy
  Open Atrium 2.0 -------------- http://drupal.org/project/openatrium
  OpenBlog 1.0-a2 -------------- http://drupal.org/project/openblog
  OpenScholar 3.8.1 ------------ http://openscholar.harvard.edu
  Recruiter 1.1 ---------------- http://drupal.org/project/recruiter
  Spark 1.0-a9 ----------------- http://drupal.org/project/spark
  Totem 1.1 -------------------- http://drupal.org/project/totem

# Updated Octopus platforms:

  ### Drupal 7.23.3

  Commerce 1.20 ---------------- 
http://drupal.org/project/commerce_kickstart\n  Commerce 2.9 ----------------- http://drupal.org/project/commerce_kickstart\n  Commons 3.4 ------------------ http://drupal.org/project/commons\n  Conference 1.0-a2 ------------ http://drupal.org/project/cod\n  Drupal 7.23.3 ---------------- http://drupal.org/drupal-7.23\n  Open Deals 1.27 -------------- http://drupal.org/project/opendeals\n  Open Outreach 1.2 ------------ http://drupal.org/project/openoutreach\n  OpenChurch 1.11-b14 ---------- http://drupal.org/project/openchurch\n  Panopoly 1.0-rc5 ------------- http://drupal.org/project/panopoly\n  Ubercart 3.5.1 --------------- http://drupal.org/project/ubercart\n\n  ### Pressflow 6.28.2\n\n  Commons 2.13 ----------------- http://drupal.org/project/commons\n  Feature Server 1.2 ----------- http://bit.ly/fserver\n  Managing News 1.2.3 ---------- http://drupal.org/project/managingnews\n  Open Atrium 1.7.1 ------------ http://drupal.org/project/openatrium\n  Pressflow 6.28.2 ------------- http://pressflow.org\n  ProsePoint 0.46 -------------- http://prosepoint.org\n  Ubercart 2.12.1 -------------- http://drupal.org/project/ubercart\n\n# New features and enhancements in this release:\n\n  * Add a workaround for an edge case problem -- a missing /etc/resolv.conf\n  * Add auto-config for AdvAgg on both Drupal 7 and Drupal 6.\n  * Add command to check for available updates: `drushextra check updates`\n  * Add gems for Omega 4 by default.\n  * Add sass-globbing gem by default.\n  * Allow to install latest OpenSSH from sources with _SSH_FROM_SOURCES\n  * Allow to install latest OpenSSL from sources with _SSL_FROM_SOURCES\n  * Anonymize lshell intro message.\n  * Better code sharing with central core dirs for all built-in platforms.\n  * BOA installer wrapper depends on curl instead of wget.\n  * Do not stop/start cron if /root/.upstart.cnf control file exists.\n  * Drush: Add embedded how-to for aliased commands.\n  * Enable views_cache_bully and views_content_cache if 
views is enabled.\n  * Firewall: Disable incoming ping/ICMP.\n  * Firewall: Protect port 80 only with CONNLIMIT and remove it from PORTFLOOD.\n  * Firewall: Update config template and enable port/syn flood protection\n  * FTP: Allow to list/see up to 3000 files/subdirs in a directory.\n  * Improve daily.sh performance.\n  * Improve dist-upgrade procedure.\n  * Improve docs/MODULES.txt\n  * Improve meta-installers auto-update procedures.\n  * Improve SQL limits auto-configuration.\n  * Install pdnsd as a last service.\n  * Issue #2000932 - Add also zen-grids.\n  * Issue #2015553 - Fix the logic for protected registration of new accounts.\n  * Issue #2044589 - SPDY Nginx support.\n  * Issue #2052703 - Conversion from control files to ini includes.\n  * Issue #2092599 - Switch to disable MySQL password reset on upgrades.\n  * Issue #2105477 - Add support for bundler gem.\n  * Issue #2116387 - Nginx and PHP: Improve system hardening.\n  * Issue #2116395 - Nginx: Better protection and 404 instead of 403.\n  * Issue #2118393 - Mark drush/cron as newrelic_background_job\n  * Make Bazaar installation optional with BZR keyword required in _XTRAS_LIST\n  * Nginx: Use forced HTTPS-only access for Chive and SQL Buddy.\n  * PHP: Add experimental support for 5.4 and 5.5\n  * PHP: Install Zend OPcache instead of deprecated APC by default.\n  * PHP: Reload FPM hourly unless /root/.high_traffic.cnf exists.\n  * Restart db server when backup is complete if /root/.my.optimize.cnf exists.\n  * Restore support for Expire and Purge modules.\n  * Shell: Add gunzip to allowed commands.\n  * Shell: Disable mc on the fly unless /root/.allow.mc.cnf control file exists.\n  * Shell: Use MySecureShell 1.31 for SFTP by default.\n  * Try to download wrapper 4 times before it gives up.\n  * Use MySQLTuner to better tune SQL configuration on install and upgrade.\n  * Use sqlmagic to fix errors caused by duplicate keys in the db dump.\n  * Use standard D7 profile for Ubercart 3 and update related 
contrib.\n  * We no longer depend on drupal.org for any downloads.\n  * Add optional, configurable per site, automated and smart (via sqlmagic tool)\n    DB table format/engine conversion, enabled per instance with non-default\n    _SQL_CONVERT=YES option.\n  * Add support for _MODULES_SKIP variable and make the auto-disable agent\n    much smarter to never disable any module defined as required by any other\n    module or feature.\n  * Improve auto-recovery from manual permissions/ownership big mistakes\n    related to critical files and dirs.\n  * Issue #2067193 - PFS (Perfect Forward Secrecy) support in Nginx with\n    _NGINX_FORWARD_SECRECY=YES config option.\n  * Use _DEL_OLD_EMPTY_PLATFORMS to enable and define auto-cleanup for old,\n    empty platforms with no sites hosted, separately per Satellite instance\n    (it does not affect Master instance).\n  * Issue #2000932 - Add more Compass tools/extensions: (compass_radix,\n    zurb-foundation) and make sure the gems are updated on upgrade.\n  * Nginx: Add support for domain specific /robots.txt mapped to static\n    files/$host.robots.txt to make it possible to manage it per domain\n    also when Domain Access module is used.\n  * Improve the logic for daily permissions fix (no longer enabled by default)\n    and make it configurable via _PERMISSIONS_FIX variable.\n  * Improve the logic for daily modules fix (still enabled by default)\n    and make it configurable via _MODULES_FIX variable.\n  * Generate static sites/foo.com/files/robots.txt file per site, which is\n    mapped to /robots.txt\n\n# New and updated Ægir modules or extensions:\n\n  * Add security_review extension\n  * Use registry_rebuild 7.x-2.x\n\n# New o_contrib modules:\n\n  * Add Advagg 6 and 7 to all platforms.\n  * Add force_password_change to all platforms.\n  * Add views_cache_bully to all platforms.\n\n# Changes in this release:\n\n  * All D6 based sites are forced to use latest PHP 5.3.27 version.\n  * Chive 1.3\n  * cURL 7.33.0 as an 
option.\n  * Drush 5.10.0 and 6.1.0 (available as drush5 and drush6)\n  * Git 1.8.4.1\n  * Lshell 0.9.16.4-om8\n  * MariaDB 5.5.33a\n  * Nginx 1.5.6\n  * Nginx: ngx_cache_purge-2.1\n  * OpenSSH 6.3p1 as an option.\n  * Percona 5.5.33\n  * PHP 5.4.21 and 5.5.5 as an option.\n  * Redis 2.6.16\n  * Vnstat 1.11\n  * Deprecate CiviCRM as a separate platform.\n  * Remove obsolete MartPlug distro.\n  * Move OpenPublish to unsupported.\n  * Move NodeStream to unsupported.\n  * Do not include D6 core translations, never included also in D7 platforms.\n  * Do not include notoriously buggy backup_migrate module.\n\n# Fixes in this release:\n\n  * Add all extra, non-standard options in the barracuda.cnf docs template.\n  * Add built-in support for Domain Access also for sites/all/modules/contrib\n  * Add exception to support commerce_multicurrency module properly.\n  * Add info about self-signed SSL certificate in the welcome email (again).\n  * Add support for /usr/etc/sshd_config if exists.\n  * Always force update_newrelic - even if there is no new PHP version.\n  * Better check for GitHub partial downtime.\n  * Better logic for clean resolvconf re-install when needed.\n  * Contrib: Make the list readable.\n  * Delete too old pid files if any exists.\n  * Do not allow to break working DNS cache server with parent system overrides.\n  * Do not allow to install OpenSSL and cURL from sources also on Precise.\n  * Do not install rsyslog on VZ based VM.\n  * Do not set session.cookie_secure on SSL requests for sites < D7\n  * Enable dev mode also when HTTP_HOST begins with dev.\n  * Firewall: Adjust some defaults to improve flood protection,\n  * Firewall: Always upgrade, unless _CUSTOM_CONFIG_CSF is set to YES.\n  * Firewall: Better support for auto-whitelisting multi-IP systems.\n  * Firewall: Fix csf.uidignore file to whitelist important system uids.\n  * Firewall: Fix for csf template on VZ.\n  * Firewall: Improve some flood protection defaults.\n  * Firewall: Improve 
whitelisted IPs msg.\n  * Firewall: Remove deprecated monitoring for now closed port 25 (incoming).\n  * Firewall: Update config template.\n  * Firewall: VZ compatibility.\n  * Fix for /etc/resolv.conf and curl requirement in the BOA Meta Installer.\n  * Fix for cron tasks queue.\n  * Fix for forced pdnsd and resolvconf upgrades.\n  * Fix for incorrect nproc discovery results on some VM systems.\n  * Fix for proper handling mysql connections leftovers.\n  * Fix for selected packages hold status.\n  * Fix for the auto-update logic -- now it is default.\n  * Fix permissions for control files to avoid leftovers on delete task.\n  * Fix permissions on default backup_migrate dirs.\n  * Fix the auto-healing to avoid killing all php-fpm processes at midnight.\n  * Fix the automatic generation of static robots.txt file per site.\n  * Fix the daily enable/disable logic and use faster drush version.\n  * Fix the logic for chained installs from sources on upgrade.\n  * Fix the makefiles to avoid issues after d.o upgrade.\n  * Fix the not really working auto-healing to properly restart mysqld.\n  * Fix the not really working lshell logs monitor.\n  * Force clean pdnsd and resolvconf reinstall when needed.\n  * Force contrib update to include redis module stable release.\n  * Force cURL and OpenSSH re-install from sources when OpenSSL is from src.\n  * Force Git rebuild from sources if SSL/cURL was built from sources.\n  * Force Lshell rebuild when OpenSSL is installed from sources.\n  * Force MSS and FTP rebuild when OpenSSL is installed from sources.\n  * Force Nginx, PHP and Pure-FTPd re-install when OpenSSL is from sources.\n  * Force PHP-FPM restart if 9+ connections with 499 in the last 60 seconds.\n  * Generate 2048 bit long DH parameters when _NGINX_FORWARD_SECRECY=YES\n  * IDS monitor should use lower defaults after introducing last min checks.\n  * Improve gem and bundler allowed/denied restrictions.\n  * Improve procs monitoring and whitelist backend tasks 
properly.\n  * Improvements for Ubercart 2 installation + contrib updates.\n  * Install latest CGP, collectd 5 compatible.\n  * Issue #1751916 - Add Spark 1.0-a9\n  * Issue #1874786 - Fix for GNU Mailutils support.\n  * Issue #1991312 - Fix support and auto-config for AdvAgg 7 and HTTPRL.\n  * Issue #1991658 - Firewall: Close port 25 for incoming connections\n  * Issue #1994346 - DoS protection for not cached URLs doesn't respect $scheme\n  * Issue #1994346 - Fix the logic for SSESS/SESS prefix in the cookie name.\n  * Issue #1995342 - X-Accel-Expires is never send when $expire_in_seconds == 0\n  * Issue #2002678 - barracuda up-stable system adds annoying extra delay.\n  * Issue #2005116 - 403 on every attempt to log in from Hostmaster homepage.\n  * Issue #2015551 - Fix for broken dev mode support switch.\n  * Issue #2015551 - Fix the keyword check used to trigger \"dev\" mode.\n  * Issue #2020043 - Send PUT requests for *.json URI to Drupal.\n  * Issue #2032379 - _AUTOPILOT=YES should be forced also for \"silent\" modes.\n  * Issue #2083373 - drush dl foo --destination=/path/ should be restricted.\n  * Issue #2101193 - Support Drupal for Facebook from sites/all/modules/contrib\n  * Issue #2105259 - All Platforms Installation Fails with Permission Denied.\n  * Issue #2116177 - Use phpredis 2.2.4\n  * Lshell: Better settings for newer Drush versions.\n  * Lshell: Fix for env_path\n  * Lshell: version update and monitoring improvements.\n  * Make sure o_contrib is updated also on head-to-head upgrades.\n  * Make sure to rebuild PHP if cURL is installed from sources.\n  * Make the upgrade email generic.\n  * More compact code for downloads.\n  * Move csf/lfd corrections after pdnsd install.\n  * Move the giant modules list from README.txt to docs/MODULES.txt\n  * Nginx: Add access protection for .txt files in the modules|themes|libraries.\n  * Nginx: Add access protection with fast 404 also for authorize.php\n  * Nginx: Add access protection with fast 404 for extra 
.php known URLs.\n  * Nginx: Add example site specific config for legacy .php URIs 301 redirects.\n  * Nginx: Better support for static and dynamic .json requests/URIs\n  * Nginx: Deny spiders on glossary/* URI, as they are never allowed to crawl.\n  * Nginx: Fix for dynamically generated PDFs.\n  * Nginx: Fix for redirects for legacy URLs with asp/aspx extension.\n  * Nginx: Improve auto-whitelisting in the access log monitor.\n  * Nginx: Improve POST requests monitoring.\n  * Nginx: Move AJAX and webform requests location after civicrm location.\n  * Nginx: Normalize newlines and spacing when fixing proxy config files.\n  * Nginx: Remove 'results' from the bots-protected URI regex.\n  * Nginx: Remove deprecated conf.d directory, if exists.\n  * Nginx: Replace legacy keyword gulag with neutral limreq everywhere.\n  * Nginx: Replace the zone legacy name also in Provision.\n  * Nginx: Rewrite legacy requests with /index.php to extension-free URL.\n  * Nginx: The /admin* URI protection logic has been moved to global.inc\n  * Nginx: Update gzip_types to list all expected mime.types\n  * Nginx: Update headers for AdvAgg compatibility.\n  * Nginx: Update mime.types\n  * Nginx: Use more precise wildcard in paths for replacements.\n  * PHP: 5.4 requires uploadprogress-1.0.3.1\n  * PHP: Disable ionCube Loader for PHP 5.5\n  * PHP: Do not force extensions re-install unless _PHP_FORCE_REINSTALL=YES\n  * PHP: Fix config overrides for 5.4 and 5.5\n  * PHP: Fix possible issues with legacy 5.2 support logic.\n  * PHP: Fix unintended overrides in the ini files.\n  * PHP: Force All Extensions Rebuild when _FROM_SOURCES=NO\n  * PHP: Force APC instead of Zend OPcache on Squeeze/Precise on no-src install.\n  * PHP: Force legacy version rebuild if exists.\n  * PHP: Improve rebuild logic if SSL/cURL was built from sources.\n  * PHP: Make sure that latest version of ionCube loader is installed.\n  * PHP: Rebuild extensions also for 5.2, even if _PHP_MODERN_ONLY=YES\n  * PHP: Set 
opcache.revalidate_freq to 1 second on dev alias/URL on the fly.\n  * PHP: Start more FPM workers by default to avoid Nginx 499 and timeouts.\n  * PHP: Use correct version of ioncube_loader for 5.4\n  * PHP: Use pecl-jsmin-0.1.1 with newer PHP versions.\n  * PHP: Zend OPcache is a zend_extension and needs full path in the php.ini\n  * Redis: Make redis_client_password optional and none by default.\n  * Reload PHP-FPM before auto-healing will force its restart after midnight.\n  * Remove already deprecated platforms.\n  * Remove insecure files from libraries/plupload/examples.\n  * Remove lock files before adding new users.\n  * Security updates for selected contrib on all affected D7 platforms.\n  * Shell: Fix FTPS compatibility after switching to MySecureShell\n  * Shell: Sync IdleTimeOut for MSS with SSH and FTPS default 15m.\n  * Shorten some too long status messages.\n  * Silent Mode Option: aegir == Only stock Ægir forced up-head upgrade.\n  * Simplify vnstat setup.\n  * Split usage monitor into two separate scripts.\n  * SQL auto-healing should always stop-stop-start and not just restart it.\n  * SQL: Allow the engine to manage correct innodb_thread_concurrency value.\n  * SSH: Make sure that 'UseDNS no' is always set.\n  * Sync $cookie_domain validation with Drupal 7 core.\n  * Sync dates with BOA defaults.\n  * Unify apt-get options order.\n  * Update for Redis config template.\n  * Update or create /etc/apt/sources.list early enough.\n  * Update PHP and SQL config early enough to avoid issues during upgrade.\n  * Use --force-yes option if apt-get -y is used.\n  * Use correct version of /etc/apt/preferences\n  * Use drush6 only when required.\n  * Use extended GitHub tests on HEAD and non-stock build only.\n  * Use forced symlinks mode if possible.\n  * Use is_readable() check instead of file_exists() for all includes.\n  * Use mirror downloads for all contrib and patches to make it faster.\n  * Use more restrictive permissions on lshell log files.\n\n\n### 
Stable BOA-2.0.9 Release - Barracuda Edition
### Date: Thu May  9 11:25:59 EDT 2013
### Includes Ægir from BOA-2.0.8 Edition

# This is the first Barracuda-only Edition, released to address an important
  security issue with the Nginx server and to provide system level upgrades.

  This Edition will not upgrade Ægir Master nor Ægir Satellite Instances,
  because no new Drupal core has been released since the BOA-2.0.8 Edition
  and not enough updates to built-in platforms or contrib have accumulated.

  Releasing a Barracuda-only Edition separately from the full Edition allows
  us to address system/services security issues without any extra delay,
  while releasing an Octopus-only Edition will allow us to provide Drupal
  core or Ægir version upgrades without affecting system level services.

  There is also another reason why separate releases will be useful.
  BOA-2.0.9 is the last Edition where Ægir 2.x still uses the old Drush 4.6
  in the backend. We need to sync the BOA specific Ægir 2.x with upstream
  and finally switch to Drush 5, or even Drush 6, if possible.

  This change, however, may cause issues if you still host legacy Drupal 5
  or some old Drupal 6 sites, with either core or contrib not compatible
  with PHP 5.3, which is now used by default.

  That is why we plan to introduce the ability to install an older/previous
  Barracuda and/or Octopus release, if you need more time to upgrade.

# New features and enhancements in this release:

  * Debian 7.0 Wheezy support.
  * Automated upgrade from Squeeze with _SQUEEZE_TO_WHEEZY=YES option.
  * Added config template with inline how-to in docs/cnf/barracuda.cnf
  * Added config template with inline how-to in docs/cnf/octopus.cnf
  * Added passwords encryption how-to in docs/BLOWFISH.txt
  * Added the list of symbols used on install in docs/PLATFORMS.txt
  * Forced mysql restart if there are too many high CPU mysqld processes.
  * Improved docs/NOTES.txt
  * Improved docs/README.txt
  * 
Install libpam-unix2 and libxcrypt1 by default.\n  * Install s3cmd by default.\n  * Issue #1974640 - Allow to use Midnight Commander for limited shell users.\n  * Limited Shell Logs Monitor enabled by default.\n  * Nginx: Check for Linux/Cdorked.A malware and delete if discovered.\n  * Re-generate and sync Ægir passwords before and after instance upgrade.\n  * The silent 'system' mode documented in docs/UPGRADE.txt\n  * Allow to exclude platform from otherwise forced `drush en entitycache -y`\n    if sites/all/modules/entitycache_dont_enable.info control file is present.\n\n# Changes in this release:\n\n  * Nginx 1.5.0 - security upgrade for CVE-2013-2028\n  * PHP 5.3.25\n  * Redis 2.6.13\n  * Do not disable update module in platforms known to include it as required.\n  * Firewall: Open port 1129 for outgoing connections (some gateways need it).\n  * Force syslog module as disabled by default and save some disk I/O.\n  * Tune kernel to always use max RAM and not swap, if possible.\n\n# Fixes in this release:\n\n  * Add outgoing port 25 SMTP to the list of requirements.\n  * Firewall: Add truly permanent block for heavy abusers.\n  * Fix for mytop support, available again on systems with MariaDB.\n  * Fix permissions in the /data/all tree if required.\n  * Fix the order of checks - they scan only the last (current) minute.\n  * Force _STRONG_PASSWORDS=NO if locales still look broken on second check.\n  * Improve detecting no longer running drush.php and/or cron PHP processes.\n  * Improve fix_locales logic.\n  * Improve global.inc symlinking on initial install and upgrade.\n  * Improve messages displayed when fix_locales discovers broken locales.\n  * Improve monitoring to avoid duplicate entries on low traffic systems.\n  * Improve sanitize_string() filtering to avoid issues with strong passwords.\n  * Improve syncpass tool - Update system user passwd and flush privileges.\n  * Issue #1961226 - Warning: Could not change permissions of sites/all to 751.\n  * Issue 
#1962458 - 403 for anonymous users on node/add.\n  * Issue #1963044 - Force UTF-8 locales if not present/configured properly.\n  * Issue #1974542 - Use /root/.home.no.wildcard.chmod.cnf control file.\n  * Issue #1987936 - Restore ability to install PHP 5.2 for FPM and CLI.\n  * Make sure that /dev/null is writable for everyone.\n  * Make sure that all drushrc.php files are owned by Ægir system user.\n  * Make sure that all expected sites/all/{modules,themes,libraries} dirs exist.\n  * Make sure that DB server is restarted on upgrade after config tuning.\n  * Make sure that pdnsd and resolvconf are properly installed.\n  * Nginx: Remove duplicate Vary: Accept-Encoding headers.\n  * Percona no longer supports older Ubuntu non-LTS releases.\n  * PHP: Do not reload FPM every hour - it may cause error 502.\n  * PHP: Fix paths depending on CLI version used.\n  * PHP: Fix the extensions installation and upgrade logic.\n  * PHP: Make sure that the FPM port is set correctly for D6 sites with 5.2\n  * PHP: Properly uninstall all related packages when using source build.\n  * PHP: Start more FPM workers on systems with enough RAM by default.\n  * Purge bin logs before disabling them.\n  * Run New Relic re-install early enough to avoid locking full-upgrade.\n  * Sync the load limits for spiders and backend tasks.\n  * The Java/Jetty monitor should use higher allowed limits by default.\n  * Update apticron message to recommend system mode instead of full upgrade.\n  * Update docs for _BUILD_FROM_SRC option.\n  * Use aggressive enough Jetty restart procedure on nightly services reload.\n  * Use correct status messages on install and upgrade.\n  * Use installer and not Ægir version download on stable install/upgrade.\n\n\n### Stable Edition BOA-2.0.8\n### Date: Mon Apr  8 01:41:36 CEST 2013\n### Installs Ægir 2.x\n\n# Updated Octopus platforms:\n\n  ### Drupal 7.22.1\n\n  Commerce 2.6 ----------------- http://drupal.org/project/commerce_kickstart\n  NodeStream 2.0-rc5 ----------- 
http://drupal.org/project/nodestream\n  Open Deals 1.19 -------------- http://drupal.org/project/opendeals\n\n  All other platforms not listed above are available with the latest\n  D6 or D7 core, even if no new distro version was released.\n\n# Fixes:\n\n  * Critical Issue #1962690 - Fix for broken Percona support.\n\n  * Allow to use [a-z0-9] subdomains and not only [www] for IDN domain names.\n  * Change the interval between platform builds from 5 to 3 seconds.\n  * Forced 1s Speed Booster TTL for vhosts behind local proxy is deprecated.\n  * Move old firewall logs to backups to avoid crazy load after upgrade.\n  * Nginx: Better exceptions handling in the Abuse Guard for js/shs modules.\n  * PHP: CLI is at 5.3 since BOA-2.0.4, so symlink old 5.2 binary path to 5.3\n  * Update _LENNY_TO_SQUEEZE major upgrade procedure.\n  * Update contrib with login_security-7.x-1.2\n  * Use static downloads for all distros in stable edition.\n\n\n### Stable Edition BOA-2.0.7\n### Date: Thu Apr  4 00:00:17 EDT 2013\n### Installs Ægir 2.x\n\n# Updated Octopus platforms:\n\n  ### Drupal 7.22.1\n\n  Commons 3.2 ------------------ http://drupal.org/project/commons\n\n  All other platforms not listed above are available with the latest\n  D6 or D7 core, even if no new distro version was released.\n\n# Fixes:\n\n  * Create dot dirs and keys if not exist, plus known_hosts for system user.\n  * Fix the sqlmagic regex to really convert only expected tables.\n  * Issue #1958502 - Add missing symlinks to the new Drush extensions.\n  * Issue #1960192 - Fix literal path replacement with sites/$new_url in D7.\n  * Issues #1930670 #1958898 #1932616 - Fix for hosting_server_update_6200.\n  * Taxonomy Edge update to 7.x-1.7 and 6.x-1.7\n  * Update contrib in all D7 platforms to ctools-7.x-1.3 - security upgrade.\n\n\n### Stable Edition BOA-2.0.6\n### Date: Mon Apr  1 21:34:04 EDT 2013\n### Installs Ægir 2.x\n\n# New Octopus platforms:\n\n  ### Drupal 7\n\n  Commons 3.1 ------------------ 
http://drupal.org/project/commons\n\n# Updated Octopus platforms:\n\n  ### Drupal 7\n\n  CiviCRM 4.2.8 ---------------- http://civicrm.org\n  Commerce 1.16 ---------------- http://drupal.org/project/commerce_kickstart\n  Commerce 2.5 ----------------- http://drupal.org/project/commerce_kickstart\n  Drupal 7.21.2 ---------------- http://drupal.org/drupal-7.21\n  NodeStream 2.0-rc4 ----------- http://drupal.org/project/nodestream\n  Open Deals 1.18 -------------- http://drupal.org/project/opendeals\n  Open Outreach 1.0-rc10 ------- http://drupal.org/project/openoutreach\n  OpenChurch 1.11-beta9 -------- http://drupal.org/project/openchurch\n  Panopoly 1.0-rc4a ------------ http://drupal.org/project/panopoly\n  Ubercart 3.4.1 --------------- http://drupal.org/project/ubercart\n\n  ### Pressflow 6\n\n  Acquia 6.28.1 ---------------- http://bit.ly/acquiadrupal\n  Commons 2.12 ----------------- http://drupal.org/project/commons\n  Feature Server 1.2 ----------- http://bit.ly/fserver\n  Managing News 1.2.3 ---------- http://drupal.org/project/managingnews\n  Open Atrium 1.7.1 ------------ http://drupal.org/project/openatrium\n  Pressflow 6.28.1 ------------- http://pressflow.org\n  ProsePoint 0.46 -------------- http://prosepoint.org\n  Ubercart 2.11.1 -------------- http://drupal.org/project/ubercart\n\n  All other platforms not listed above are available with the latest\n  D6 or D7 core, even if no new distro version was released.\n\n# No longer supported Octopus platforms:\n\n  The platforms listed below can be re-added once their maintainers\n  fix all critical issues and/or apply required updates:\n\n  ELMS ------------------------- http://drupal.org/project/elms\n  MartPlug --------------------- http://drupal.org/project/martplug\n  Octopus Video ---------------- http://octopusvideo.org\n  Open Academy ----------------- http://drupal.org/project/openacademy\n  Open Enterprise -------------- http://drupal.org/project/openenterprise\n  OpenPublic 
------------------- http://drupal.org/project/openpublic\n  OpenScholar ------------------ http://openscholar.harvard.edu\n  Videola ---------------------- http://videola.tv\n\n# New features:\n\n  * Add an option to allow cron based, unattended system-only upgrades.\n  * Add randpass helper script.\n  * Add support for wkhtmltoimage.\n  * Add syncpass tool to repair broken instances after incomplete upgrade.\n  * Allow to specify extra apt-get packages with _EXTRA_PACKAGES option.\n  * Allow to tune PHP-CLI timeout in the BOND script with separate option.\n  * Install auditd with aureport by default.\n  * Issue #1479300 - Add optional LDAP support in Nginx.\n  * Issue #1876418 - Support for High-performance JavaScript callback handler.\n  * Issue #1916804 - Validated bypass of flood control based on tty.\n  * Jetty: Make migration from Tomcat easy with _TOMCAT_TO_JETTY=YES\n  * PHP: Allow to use _PHP_EXTRA_CONF for custom builds from src.\n  * Redis: Add Lock Backend Support for Drupal 6 and Drupal 7.\n  * Redis: Enable lock support if modules/redis_lock_enable.info exists.\n  * Shell: Add extra Drush versions available as drush4, drush5 and drush6.\n  * SOLR: Support for 1.x / Jetty 7, 3.x / Jetty 8 and 4.x / Jetty 9.\n  * SOLR: Use Jetty 8 for Solr 4 on systems with Java 1.6 available.\n  * SOLR: Use Jetty 9 for Solr 4 on systems with Java 1.7 available.\n  * SQL: Add sqlmagic tool to fix SQL dumps and convert to/from InnoDB/MyISAM.\n  * SQL: Make default_storage_engine configurable with _DB_ENGINE option.\n  * Use Registry Rebuild with Fixed Redis Lock Support aware configuration.\n  * Allow to force SERVER_NAME based $cookie_domain with special\n    modules/cookie_domain.info control file per site.\n\n# New Ægir modules or extensions:\n\n  * Add drush clean-modules command - clean_missing_modules extension.\n  * Add drush_ecl extension - Drush Entity Cache Loader.\n  * Add hosting_site_backup and provision_site_backup enabled by default.\n\n# Changes:\n\n  * 
Git 1.8.2\n  * MariaDB 5.5.30\n  * Nginx 1.3.15\n  * Percona 5.5.30\n  * PHP 5.3.23\n  * Redis 2.6.12\n  * Deprecate CiviCRM 3.4.8 D6 - only available with _ALLOW_UNSUPPORTED=YES.\n  * Do not force filefield_nginx_progress as enabled also for D7.\n  * Drupal 8.0-dev-tested deprecated and moved to unsupported group.\n  * ELMS 1.0-beta1 deprecated and moved to unsupported group.\n  * Enable entitycache module by default.\n  * Master Ægir: Re-create secure db password on every barracuda upgrade.\n  * Master Ægir: Sync generating secure db password also on barracuda install.\n  * Nginx: Set 24h Speed Booster cache TTL for spiders/bots by default.\n  * NodeStream 1.5.1 deprecated and moved to unsupported group.\n  * Open default MongoDB port 27017 for outgoing connections.\n  * OpenScholar deprecated and moved to unsupported group.\n  * PHP: Deprecate 5.2 also on upgrade.\n  * PHP: Install MongoDB driver if MNG keyword is listed in _XTRAS_LIST.\n  * PHP: Set _PHP_CLI_VERSION=5.3 by default.\n  * PHP: Switch to forced CLI 5.3 and FPM 5.3 also in the custom config.\n  * PHP: Switch to FPM 5.3 also for D6 sites by default.\n  * Pressflow 5.23 deprecated and moved to unsupported group.\n  * Redis: Re-create secure password on every barracuda upgrade.\n  * Satellite Ægir: Re-create secure db password on every octopus upgrade.\n  * SQL: Do not run DB OPTIMIZE unless /root/.my.optimize.cnf ctrl file exists.\n  * SQL: Re-generate new secure mysql root password on every barracuda upgrade.\n  * SQL: Use key_buffer = 2M by default.\n  * SQL: Use more safe memory limits after introducing higher key_buffer_size\n  * Use better names for various control files.\n  * Watch crons running > 2 min and kill crons running > 3 min.\n  * Split _XTRAS_LIST into two groups: included via ALL keyword and\n    other which need to be listed explicitly.\n\n# Fixes:\n\n  * Add Ksplice-aware kernel upgrade alert.\n  * Add some delay to avoid race conditions when removing more zombies.\n  * Allow 
higher system load before disabling access for spiders temporarily.\n  * Always send upgrade log when running in the silent mode.\n  * Avoid cron collisions and make sure all maintenance tasks run 0-6 AM.\n  * Better and separate backup rotation on hostmaster upgrade.\n  * Better check if Webmin GnuPG signing key has been added properly.\n  * Better fix for $cookie_domain and DA compatibility.\n  * Better protection for all ports usually targeted in brute force attacks.\n  * Check if nproc is present and fall back to /proc/cpuinfo otherwise.\n  * Clean swap on kernel tuning update.\n  * Delete broken o_contrib symlinks before trying to recreate them.\n  * Do not add and remove bind from /etc/sudoers since it is not supported.\n  * Do not block @ in the limited shell - it breaks git foo git@bar etc.\n  * Do not force _DEBUG_MODE=YES if not required.\n  * Do not force _HTTP_WILDCARD=NO for stock install option.\n  * Do not run extra IP checks for requests below $mininumber threshold.\n  * Do not run initial apticron check in local install.\n  * Do not run two mysql restarts in a row on mysql upgrade.\n  * Downgrade to working wkhtmltopdf-0.10.0_rc2 and wkhtmltoimage-0.10.0_rc2\n  * Drupal 7.x core with Field API memory optimization - see #1915646\n  * Enable image_allow_insecure_derivatives to avoid issues with drupal-7.20\n  * Fix apticron to suggest barracuda up-stable instead of apt-get upgrades.\n  * Fix AWS system auto-discovery and auto-configuration.\n  * Fix Drush 5.x and _USE_STOCK support.\n  * Fix for Bazaar (bzr) 2.6b2 extensions build.\n  * Fix for pdnsd install on Ubuntu Precise.\n  * Fix the 32 long ALNUM password generation for lshell users.\n  * Fix the hint to just display the uptrack command, not run it.\n  * Force logrotate on demand if /var/log/syslog > 1GB\n  * Force mysql tables check and upgrade before hostmaster upgrade.\n  * Force proper pdnsd and resolvconf re-installation if needed.\n  * Force proper resolvconf configuration to support and 
use pdnsd server.\n  * FTPS on all modern systems requires lshell path added in /etc/shells.\n  * Hostmaster/Octopus contrib modules are now added via Ægir makefile.\n  * Improve autonomous IDS auto-cleaning and permanent block mgmt.\n  * Improve compatibility testing with Drush 5 and Drush 6.\n  * Improve kernel default tuning.\n  * Improve Master Instance upgrade logic.\n  * Improve mysqldump performance by default.\n  * Improve the default strict configuration for $cookie_domain.\n  * Improve Tomcat/Jetty self-healing to avoid stuck processes.\n  * Install also hostmaster contrib when stock option is used.\n  * Issue #1782034 - Use fixed version of the message_notify module.\n  * Issue #1825018 - Disable binary logging and make it optional.\n  * Issue #1871060 - CiviCRM 4.2.6 needs separate civicrml10n fix.\n  * Issue #1873478 - Localhost install broken because getent test is used.\n  * Issue #1875348 - Fix for Nginx 1.3.10 bug causing random segfaults.\n  * Issue #1886920 - Fix the unrecognized option [service=system-auth] error.\n  * Issue #1886920 - Pure-FTPd config broken because of deprecated pam_stack.so\n  * Issue #1888380 - Deleted platform cache folder recreated automatically.\n  * Issue #1889322 - Domain Access module breaks sites provisioning.\n  * Issue #1897018 - Set Pin-Priority also in wrappers to fix also stable.\n  * Issue #1897018 - Ubuntu Precise breaks install and upgrade.\n  * Issue #1906760 - Incomplete access_log directive in the purge vhost.\n  * Issue #1906900 - Nginx microcaching not disabled on prefixed admin URIs.\n  * Issue #1909208 - Changed MariaDB GnuPG signing key hangs install/upgrade.\n  * Issue #1913394 - Disable automatic CSF/LFD upgrade.\n  * Issue #1913488 - Do not install GEOS PHP ext. 
unless explicitly listed.\n  * Issue #1914294 - APC 3.1.14 disappeared from PECL - downgrade to 3.1.13\n  * Issue #1918722 - Add diff command as allowed in the limited shell.\n  * Issue #1920972 - Could not change permissions warnings on site verify.\n  * Issue #1932388 - Use correct keyword PPY for Panopoly install.\n  * Issue #1935388 - Use reliable check for Master Instance install path.\n  * Issue #1947082 - Permissions are never fixed on the profile level.\n  * Issue #1949740 - Make sure that cache_prefix for Redis is always set.\n  * Issue #1952042 - Make strong passwords optional and not default.\n  * Issue #1953248 - Extra Drush versions should be added properly.\n  * Issue #1957762 - Upgrade to Bazaar (bzr) 2.6b2\n  * Jetty: Tune memory limits automatically to avoid extra RAM requirements.\n  * Keep all extra modules in the same profiles/hostmaster/modules directory.\n  * Lshell: Allow ping command to help keep session active / auto-whitelist.\n  * Make apticron aware of the BOA version currently running.\n  * Make BOND aware of _CUSTOM_CONFIG_SQL if present.\n  * Make Compass Tools available in the standard path, if installed.\n  * Make sure that all removed zombies use unique dir names.\n  * Make sure that all users home dirs are protected.\n  * Make sure that now redundant hosting_backup_gc module is removed.\n  * Make sure that SERVER_NAME is set to HTTP_HOST early enough, if required.\n  * Make the errors monitor aware of system only upgrade mode.\n  * Make URI filtering regex localization-aware in the global.inc\n  * Nginx Security: BEAST attack protection and fix for PCI compliance.\n  * Nginx: Another fix for broken imagecache paths in some imported sites.\n  * Nginx: Better protection from DoS attempts on never cached uri.\n  * Nginx: Do not block spiders on imagecache/styles URIs.\n  * Nginx: Do not force use epoll - it is set on install properly.\n  * Nginx: Do not force worker_connections. 
It will not work in the VM guest.\n  * Nginx: Do not force worker_rlimit_nofile. It will not work in the VM guest.\n  * Nginx: Force rebuild to include LDAP support if enabled via _NGINX_LDAP=YES\n  * Nginx: Improve Abuse Guard to better protect from imagecache|styles flood.\n  * Nginx: Improve no-cache exceptions for known AJAX and webform requests.\n  * Nginx: Make json compatible with boost caching but dynamic for POST.\n  * Nginx: Restore fast 404 for static json requests.\n  * Nginx: Set workers number to available CPUs x2 with min/max defaults.\n  * Nginx: Use default buffer=32k in the access_log for better performance.\n  * Nginx: Use static /normal/ instead of dynamic /$device/ for Boost cache.\n  * PHP: Enable more FPM workers by default for better performance.\n  * PHP: Force php53-fpm restart if there is no master process running.\n  * PHP: Many Drupal 7 based distros require 196M limit at minimum.\n  * PHP: Never force php53-fpm restart when another script reloads it.\n  * PHP: Use more safe limits on low memory systems.\n  * Prevent turning the feature server site into a spam machine.\n  * Protect also from not supported request types if Nginx server is busy.\n  * Randomize tasks wait/start intervals better to avoid high system load.\n  * Redis: Do not disable it on the fly when there is /nojs/ in the URI.\n  * Redis: Double check if $cache_lock_path exists before using it.\n  * Redis: No need to force exception for cache_menu bin.\n  * Redis: Tune sysctl for better memory management by default.\n  * Remove up to two last zombies on Master Instance upgrade.\n  * Remove up to two last zombies on Satellite Instance upgrade.\n  * Rename profiles to avoid confusion between Commons 2 and Commons 3.\n  * Run drush @hostmaster hosting-dispatch during upgrade to sync things.\n  * Send also OK report when running in the silent mode.\n  * Set correct default DNS entry in /etc/hosts before running local install.\n  * Shell: Fix for too restrictive Drush commands 
filtering.\n  * Shell: Fix the broken Git support over SSH.\n  * Shell: Fix too restrictive permissions on the extra Drush directories.\n  * SQL: Do not run the purge_binlogs script when binary logging is disabled.\n  * SQL: Improve sqlmagic converter and allow it to use control files.\n  * SQL: The sqlmagic_convert should not be available for extra lshell users.\n  * SQL: Tune also key_buffer_size by default.\n  * Sync generating secure passwords also for limited shell users.\n  * Update csf.conf template.\n  * Update self-healing for Tomcat/Jetty support.\n  * Update welcome email template to better explain how to manage databases.\n  * Use Boost with silenced false alarms.\n  * Use Limited Shell branch with fixed tab completion.\n  * Use public DNS during pdnsd (re)installation to avoid issues.\n  * Whitelist /tmp/make_tmp.* in the csf.fignore to avoid false alarms.\n\n\n### Stable Edition BOA-2.0.5\n### Date: Sun Dec 23 15:35:46 EST 2012\n### Installs Ægir 2.0.5 compatible with Ægir 1.9\n\n# Updated Octopus platforms:\n\n  Commerce 1.12.1 -------------- http://drupalcommerce.org\n  Commerce 2.0 ----------------- http://drupalcommerce.org\n  Commons 2.11 ----------------- http://acquia.com/drupalcommons\n  Drupal 7.18.1 ---------------- http://drupal.org/drupal-7.18\n  Open Deals 1.14 -------------- http://opendealsapp.com\n  Open Outreach 1.0-rc7 -------- http://openoutreach.org\n  OpenChurch 1.11-beta7 -------- http://openchurchsite.com\n  Panopoly 1.0-rc3 ------------- http://drupal.org/project/panopoly\n  Pressflow 6.27.1 ------------- http://pressflow.org\n  ProsePoint 0.45 -------------- http://prosepoint.org\n  Ubercart 2.11.1 -------------- http://ubercart.org\n  Ubercart 3.3.1 --------------- http://ubercart.org\n\n  All other platforms not listed above are available with the latest\n  D6 or D7 core, even if no new distro version was released.\n\n# New Ægir modules or extensions:\n\n  * Add drush clean-modules command - clean_missing_modules 
extension.\n\n# New o_contrib modules:\n\n  * Add reroute_email module in both D6 and D7 contrib.\n\n# Changes:\n\n  * Git 1.8.0.2\n  * MariaDB 5.3.11 on Debian Lenny\n  * MariaDB 5.5.28a\n  * Nginx 1.3.9\n  * PHP 5.3.20\n  * Redis 2.6.7\n  * Delete old tmp files in all sites daily.\n  * Disable Expire and Purge modules by default - they are no longer needed.\n  * Redis integration module updated to 7.x-2.0-beta2\n  * There is no need to restart Redis and Tomcat hourly.\n  * Use higher innodb_lock_wait_timeout by default - 120 instead of 50.\n  * Use 1h instead of 30min default timeout for sql and php-cli to avoid\n    breaking some extra long running backend tasks on some really big sites.\n\n# Fixes:\n\n  * Allow more drush commands over SSH.\n  * Always force drupal_http_request_fails to FALSE to avoid false alarm.\n  * Better check for standalone vhosts firewall setup.\n  * Better lshell forbidden list of keywords.\n  * Better regex to deny wildcards with top-level or country level domains.\n  * Check for existence of host_master and not host_master/001 directory.\n  * Compass is not available on older OS versions.\n  * Delete ltd-shell extra user/client if there is no site associated/owned.\n  * Delete old symlinks in the client directory for no longer associated sites.\n  * Fix broken usage.sh script - it does not enable/disable modules.\n  * Fix date formatting also in the sqlcheck script.\n  * Fix for some really old installs without .barracuda.cnf file.\n  * Fix permissions for Boost cache directory with correct chmod.\n  * Fix the hint - it should say to restart mysql.\n  * Issue #1081266 - Avoid re-scanning modules directory.\n  * Issue #1263602 - Force New Relic re-install on every upgrade, if used.\n  * Issue #1460882 - Send .json requests to @drupal instead of =404.\n  * Issue #1837418 - Fix permissions inside ~/.drush directory.\n  * Issue #1837776 - Do not disable httprl module.\n  * Issue #1837910 - Upload progress broken for all D6 sites.\n  * 
Issue #1839122 - Disabling Redis on known AJAX calls breaks UI elements.\n  * Issue #1839544 - Use language neutral checks for users, groups and hosts.\n  * Issue #1841230 - BOA provides Apache Solr 1.4 with Tomcat 6.\n  * Issue #1841246 - Fix csf.fignore file to whitelist /tmp/drush_*\n  * Issue #1842554 - Replace broken links to Skitch screenshots.\n  * Issue #1847682 - Fix extra Nginx config support in the Master Instance.\n  * Issue #1850034 - Disable SYSLOG_CHECK in csf to avoid false alarms.\n  * Issue #1857250 - Domain Access support is broken in the backend cli.\n  * Issue #1857990 - Include reroute_email module in o_contrib by default.\n  * Issue #1860100 - Use provision-backup-delete instead of backup_delete.\n  * Issue #1865112 - Add drush clean-modules command.\n  * Issue #1867264 - Too many Redis caching exceptions cause serious confusion.\n  * Issue #1871060 - CiviCRM l10n should be moved to proper directory.\n  * Lshell: Map drush mup to up instead of upc. Add new drush mupc map for upc.\n  * Max supported version of Search API Solr search is 7.x-1.0-rc2\n  * More complete permissions fix on install and upgrade.\n  * More strict check for _LENNY_TO_SQUEEZE option.\n  * Nginx: Better regex in the Nginx monitor.\n  * Nginx: Exclude also files/progress path in the Nginx monitor.\n  * Nginx: Fix rewrite rules in the CDN Far Future expiration support.\n  * Nginx: Make sure that any older packages are uninstalled on upgrade.\n  * Nginx: Make sure that default Nginx vhosts are deleted also on upgrade.\n  * Nginx: Skip all logged media and download requests in the Nginx monitor.\n  * PHP: Use high enough value for max_input_vars in PHP 5.3 by default.\n  * Really fix the datestamp comparison logic on various systems.\n  * Rebuild registry without --no-cache-clear option to avoid issues.\n  * Redis: Check if Redis binary exists, not symlink.\n  * Redis: Delete redis-server symlink to avoid failed Redis install.\n  * Redis: Do not use all three extra 
exceptions on the hostmaster site.\n  * Redis: Do not use sleep breaks during Redis full restart.\n  * Redis: The cache_menu bin should be still excluded from Redis caching.\n  * Redis: The hostmaster site needs exception for cache_class_cache bin.\n  * Stop and Start CSF only if installed.\n  * The locked auto-healing script needs to kill tomcat more aggressively.\n  * Update csf.conf template.\n  * Upgrade to ctools-6.x-1.10 in the hostmaster platform.\n  * Use aliases in drush commands where possible.\n  * Use better name for non-web New Relic app tracking.\n  * You must remove remote_import extension from the source server.\n\n\n### Stable Edition BOA-2.0.4\n### Date: Thu Nov  8 18:31:01 EST 2012\n### Installs Ægir 2.0.4 compatible with Ægir 1.9\n\n# New Octopus platforms:\n\n  Commerce 2.0-rc4 ------------- http://drupalcommerce.org\n\n# Updated Octopus platforms:\n\n  CiviCRM 4.1.6-d6 ------------- http://civicrm.org\n  CiviCRM 4.2.6-d7 ------------- http://civicrm.org\n  Commerce 1.11.1 -------------- http://drupalcommerce.org\n  Commons 2.10 ----------------- http://acquia.com/drupalcommons\n  Conference 1.0-rc2 ----------- http://usecod.com\n  Drupal 7.17.1 ---------------- http://drupal.org/drupal-7.17\n  Drupal 8.0-dev-tested -------- http://bit.ly/drupal-eight\n  ELMS 1.0-beta1 --------------- http://elms.psu.edu\n  NodeStream 1.5.1 ------------- http://nodestream.org\n  NodeStream 2.0-beta8 --------- http://nodestream.org\n  Open Atrium 1.6.1 ------------ http://openatrium.com\n  Open Deals 1.11 -------------- http://opendealsapp.com\n  Open Outreach 1.0-rc6 -------- http://openoutreach.org\n  OpenChurch 1.11-beta5 -------- http://openchurchsite.com\n  OpenPublish 3.0-beta7 -------- http://openpublishapp.com\n  OpenScholar 2.0-rc1 ---------- http://openscholar.harvard.edu\n  Panopoly 1.0-rc2 ------------- http://drupal.org/project/panopoly\n  Ubercart 2.10.1 -------------- http://ubercart.org\n  Ubercart 3.2.1 --------------- http://ubercart.org\n\n* 
We plan to shorten the BOA system release and upgrade cycle\n  to 1-2 months at most, so we have decided to remove support for some\n  outdated distros. We have tried to manage both security and version\n  updates for some abandoned or semi-abandoned distros to keep them\n  useful for you, but since this involves an ever-growing amount of work\n  due to cascades of no longer compatible patches and various\n  dependencies, we have decided that it is time to stop doing it\n  when the original maintainers no longer care about their users.\n\n  Here is a list of distros we no longer support:\n\n  MartPlug ------------ http://drupal.org/project/martplug\n  Octopus Video ------- http://octopusvideo.org\n  Open Academy -------- http://drupal.org/project/openacademy\n  Open Enterprise ----- http://drupal.org/project/openenterprise\n  OpenPublic ---------- http://openpublicapp.com\n  Videola ------------- http://videola.tv\n\n  The platforms listed above can be re-added once their maintainers\n  fix all critical issues and/or apply required updates.\n\n# New features:\n\n  * Add auto-healing support for Bind9.\n  * Add LOCK/FROZEN check for PHP-FPM and Tomcat in the auto-healing.\n  * Add option to force 15min Speed Booster cache TTL for anonymous visitors.\n  * Add optional easy install of already supported Compass Tools.\n  * Add support for aegir|platforms|both modes on octopus upgrade.\n  * Allow one more upgrade per day, but only to add more platforms.\n  * Allow to install unsupported distros with option _ALLOW_UNSUPPORTED=YES\n  * Allow to install vanilla Ægir 2.x and Drush 5.7 with \"stock\" option.\n  * Improved databases backup with added OPTIMIZE TABLE foo action per table.\n  * New Relic PHP Agent version 3.0 compatibility.\n  * Pseudo-streaming server-side support for Flash Video (FLV) and H.264/AAC.\n  * Support for Wysiwyg Fields module.\n\n# New Ægir modules or extensions:\n\n  * Add hosting_tasks_extra module and provision_tasks_extra extension.\n\n# New 
o_contrib modules:\n\n  * Add login_security module in D7 contrib.\n  * Add cdn module in both D6 and D7 contrib.\n\n# Changes:\n\n  * Allow outgoing mysql connections by default.\n  * APC 3.1.13\n  * Chive 1.2\n  * Do not bundle seckit module in o_contrib.\n  * Do not enable Expire and Purge modules by default.\n  * Enable Syslog module by default.\n  * Git 1.8.0\n  * MariaDB 5.3.9 on Debian Lenny\n  * MariaDB 5.5.28\n  * Nginx 1.3.8\n  * Percona 5.5.28\n  * PHP 5.3.18\n  * Pure-FTPd 1.0.36\n  * Redis 2.6.4\n  * Remove not supported httprl module and disable if enabled.\n  * The filefield_nginx_progress is forced-enabled in all D7 sites, again.\n  * Use PHP-FPM 5.3 for Chive, Collectd and other non-Drupal sites.\n  * Use php-cli 5.3 for drush on command line by default.\n    You can still force 5.2 with --php=/usr/local/bin/php drush option.\n\n# Fixes:\n\n  * Add cache_tax_image bin to no-redis-cache exceptions.\n  * Add support for pdnsd in the VServer guest.\n  * Allow all standard compass/sass commands in limited shell.\n  * Auto-discover _NEWRELIC_KEY if not listed in .barracuda.cnf\n  * Better auto-healing for php-fpm zombies edge case.\n  * Better check for failed login attempts (when user exists).\n  * Better permissions magic repair running daily.\n  * Deny crawlers on search results pages - they may cause very high load.\n  * Disable spinner if screen is used.\n  * Do not force default Debian and Ubuntu mirrors even if _AUTOPILOT=YES.\n  * Do not quote password in .my.cnf - it breaks mytop.\n  * Do not use log/custom_cron for anything.\n  * Do not use resolveip in the localhost mode.\n  * Exclude cache_bootstrap and cache_pulled_tweets from Redis caching.\n  * Fix for broken drush make edge case caused by leftovers.\n  * Fix for broken Tika download URL.\n  * Fix for civicrm_engage in D6.\n  * Fix for Debian Lenny upgrade.\n  * Fix for global.inc logic related to high traffic sites only.\n  * Fix for NGX, PHP and SQL forced reinstall mode.\n  * Fix for 
Pin-Priority in Squeeze.\n  * Fix for sql abuse monitor.\n  * Fix for the selectively forced upgrade mode.\n  * Fix motd for Skynet fun.\n  * Fix too restrictive lshell command filtering.\n  * Force Pure-FTPd rebuild on every upgrade to avoid broken binary.\n  * Force tomcat restart and reload php-fpm hourly.\n  * Improve Domain module support.\n  * Improve mysql crashed tables detection and repair in auto-healing.\n  * Improve Nginx Abuse Guard by stopping those never cached POST DoS attacks.\n  * Improve Nginx guard support for VServer guests.\n  * Improved checkpoint info in Octopus.\n  * Issue #1225380 - Do not truncate sessions table during db daily backup.\n  * Issue #1472786 - SQL check ERROR and too many SQL check CLEAN notices.\n  * Issue #1528726 - Fix for Redis support in all shared directories/code.\n  * Issue #1540242 - Do not install conflicting libavcodec53 or libavcodec52.\n  * Issue #1588060 - Make sure that /var/run is present in open_basedir.\n  * Issue #1589052 - Incomplete PATH breaks standard tasks.\n  * Issue #1590120 - Fix for java path changed in recent Ubuntu releases.\n  * Issue #1591746 - Update GeoIP.dat file automatically.\n  * Issue #1592646 - Enabled old cache backend integration module causes WSOD.\n  * Issue #1592650 - Do not use Hide platforms with non-default profiles.\n  * Issue #1592680 - Upload progress module breaks uploads on all D7 sites.\n  * Issue #1593794 - New redis-only caching backend settings.\n  * Issue #1593810 - Duplicate php-cli 5.3 binaries after upgrade.\n  * Issue #1593980 - Remove invisible characters breaking localhost install.\n  * Issue #1597580 - External/Aggressive caching in D6 breaks path_alias_cache.\n  * Issue #1598676 - Collectd graphs broken.\n  * Issue #1600426 - Cron is run every minute on all sites not yet defined.\n  * Issue #1602142 - Do not use device specific keys for Redis cache entries.\n  * Issue #1606146 - The manage_ltd_users.sh script locks important tasks.\n  * Issue #1614162 - CRON 
Not Running on Octopus Satellites and Sites.\n  * Issue #1643616 - APC is missing in the Ubuntu Precise based install.\n  * Issue #1659452 - Add support for Ægir HTTPS header in the Speed Booster.\n  * Issue #1663262 - Fix FMG install on Ubuntu Precise.\n  * Issue #1679114 - New user name check in Octopus is too restrictive.\n  * Issue #1689656 - Avoid caching /civicrm* and known webform requests.\n  * Issue #1716004 - The zlib.output_compression should be disabled in 5.3\n  * Issue #1728616 - Better CDN Far Future expiration support.\n  * Issue #1777982 - Do not break wordpress_migrate module support.\n  * Issue #1778712 - Better workaround for MariaDB 5.5.27 critical bug.\n  * Issue #1784440 - Cannot stat scan_nginx when using BOND.sh.txt\n  * Issue #1796420 - Do not break write access to the tcpdf cache directory.\n  * Issue #1798288 - Provision-backup_delete could not be found.\n  * Issue #1799116 - Standardize on installation vs. install profile.\n  * Issue #1821866 - Force Nginx rebuild to include pseudo-streaming support.\n  * Issue #1824888 - BOND.sh.txt breaks Nginx, SQL and PHP configuration.\n  * Issue #1825298 - Redis: force rebuild from sources on version mismatch.\n  * Issue #1825420 - Avoid the Use of undefined constant OctopusNoCacheID.\n  * Issue #1825630 - Remove duplicate code causing false alarm.\n  * Issue #1825992 - Redis cache is never cleared via php-cli.\n  * Issue #1825998 - Improved auto-healing for Redis.\n  * Issue #1835796 - Default cache headers break CloudFlare Always Online.\n  * Make sure that path_alias_cache module takes precedence.\n  * Make sure that PHP 5.2 is re-installed if required.\n  * Monitor and kill too long running sites cron tasks.\n  * Move away buagent init script if exists when Barracuda runs.\n  * Nginx: Allow to include high level local configuration override.\n  * Nginx: Better regex for exceptions in the abuse guard monitor.\n  * Nginx: Block stupid spiders/downloaders with 403 error, not CSF.\n  * Nginx: Deny 
known bots on some heavy URLs.\n  * Nginx: FileField Nginx Progress 7.x-2.3 compatibility.\n  * Nginx: Fix for broken images paths in civicrm.\n  * Nginx: Fix for D6 upload progress support.\n  * Nginx: Make the abuse monitor aware of possible lang code prefixes.\n  * Nginx: Monitor and block if required also via-multi-proxy attacks.\n  * Nginx: Remove packages on every upgrade to avoid duplicate re-installs.\n  * Nginx: Remove redundant URL filtering.\n  * Nginx: Send 403 for vbulletin URI to avoid Drupal heavy 404.\n  * Nginx: Support for /contrib/ for wysiwyg helpers exceptions location.\n  * Nginx: Use latest nginx-upload-progress-module v0.9.0\n  * Nginx: Use ngx_cache_purge-1.6\n  * PHP: Allow short_open_tag also in 5.3\n  * PHP: Disable the original php5-fpm init script causing segfaults.\n  * PHP: Fix for _FROM_SOURCES PHP-FPM 5.3 build.\n  * PHP: Fix for the php53-fpm init script.\n  * PHP: Force proper php53-fpm restart if required.\n  * PHP: Install JSMin extension by default.\n  * PHP: Install php-pear by default also in no-src based default install.\n  * PHP: Load extensions in a safe, correct order.\n  * PHP: Log killed php-fpm events.\n  * PHP: Make sure that all builds use correct, fresh downloads.\n  * PHP: Make sure that php53-fpm is disabled during apt-get based upgrade.\n  * PHP: Make sure that suhosin.so is removed and jsmin.so added.\n  * PHP: Remove duplicate and conflicting allow_call_time_pass_reference.\n  * PHP: Remove php5-sasl extension causing segfaults.\n  * PHP: Remove php5-suhosin from the stack - too many weird issues.\n  * PHP: The realpath_cache_ttl should be as low for CLI as possible.\n  * PHP: Use 2x higher limits in the tune_web_server_config logic.\n  * Purge Redis cache hourly.\n  * Randomize runner intervals.\n  * Remove all control files on init to avoid aborted Octopus upgrades.\n  * Remove any extra search directive from resolv.conf when pdnsd is installed.\n  * Remove Dotdeb libmysqld-dev conflicting with Percona 
libmysqlclient-dev.
  * Remove the Boost separate mobile bins, which never worked properly.
  * Remove unsupported MTA only on initial install.
  * Remove old cache module from all old profiles.
  * Segfault monitor should not disable sites by default.
  * Serve .less files as static by default, no log.
  * Set hosting_advanced_cron_default_interval to 3 hours.
  * SQL: Use skip-name-resolve by default.
  * Support both HTTP_X_FORWARDED_PROTO and HTTPS.
  * The dev. should not disable Redis cache.
  * The missing /usr/bin/lshell entry may affect also Lucid.
  * There is no need to force Debian mirror.
  * Tune AdvAgg config - disable async mode and use JSMin by default.
  * Use autoselect for civicrm downloads.
  * Use DrupalDatabaseCache for some Redis bins to avoid confirmed issues.
  * Use higher default timeouts for php-cli and wait_timeout in mysql.
  * Use SERVER_NAME instead of HTTP_HOST header in the Redis cache key.
  * Use version specific directory for static downloads.
  * Yet another umask trick for shell and SFTP.


### Stable Edition BOA-2.0.3
### Date: Thu May 17 18:17:40 EST 2012
### Installs Ægir 2.0.3 compatible with Ægir 1.9

# There are major improvements and new features added in this BOA Edition.

  Here is a description of the most important/most anticipated ones, while
  the complete list of all changes, new features and fixes is available
  further below.

  * The caching backend has been simplified. We no longer use a chained cache
    system with Memcached+Redis+database. 
The new system uses only the Redis cache
    and the same configuration for all Drupal 6 and Drupal 7 platforms.
    This new system doesn't require any extra module to be enabled in any site.
    Complete integration is enabled by default for every platform/site
    installed by default and, as before, for every custom platform - starting
    the day after the first site on the custom platform has been created.
    You can disable this caching layer using the same modules/cache/NO.txt
    control file as before. While there is just one cache engine (Redis) used,
    there is also an automatic, instant failover to standard database caching,
    just in case Redis is not available for some reason. You can also disable
    the Redis cache on the fly for debugging by adding ?noredis=1 to any URL.

  * We have added support for Drupal 8.x while still using a modified
    Drush 4.6-dev version, so we can still support Drupal 5 on the same
    system, but on another Octopus instance.

  * You can choose a different PHP version for PHP-FPM (web access)
    and PHP-CLI, for even greater control over compatibility with
    various Drupal major versions.

  * You can choose both PHP-FPM and PHP-CLI versions per Octopus instance,
    on the same system. And you can change those versions on upgrade.

  * Installing and upgrading the BOA system has been greatly simplified.
    You can still configure and run both installers as before,
    but you can also use the new, shockingly simple command line tools
    to install Barracuda and Octopus at once, to install more Octopus
    instances, to run selective or batch upgrades of all Octopus instances, etc.
    See docs/INSTALL.txt and docs/UPGRADE.txt for details.

  * We have added 'easy install' configuration shortcuts for both
    standard (public) and localhost installs. 
You no longer need
    to read, understand and configure all options, unless you prefer
    to choose some non-default configuration options.

  * Default installs on Debian Squeeze and Ubuntu Precise use
    packages for PHP 5.3, so initial setup takes just 10-15 minutes.

  * You can easily grant limited shell and FTPS access to developers,
    simply by creating "Clients" in the Ægir control panel and defining
    them as 'owners' of one or more sites. Their access will be limited
    to only the sites they can manage, but only if you send them their
    access credentials, which are independent of their Ægir control
    panel credentials and are stored in the ~/users/ directory in your
    main account. There you will find files with passwords for every
    "Client" with at least one site attached. For example, a ~/users/o1.username
    file means that this Client's username for SSH and FTPS access is
    'o1.username', while their password is stored in this file. This means
    that SSH/FTPS access is not granted automatically, but you can decide
    who should receive it. How to change any extra user's password?
    Simply delete their ~/users/o1.username file and wait up to 5 minutes -
    the system will re-create their account with a new password. And how
    to delete the user completely? Simply delete this user's "Client"
    account in the Ægir control panel and allow the system to also delete
    their SSH/FTPS access within the next 5 minutes.

  * We have added a segfault monitor for php-fpm and nginx, enabled by default.
    It is pretty aggressive, because it disables the vhost of any site causing
    segfault errors and sends an email alert to the Octopus instance owner
    and server owner email addresses. A simple site re-verify in Ægir enables
    the site again - but only until the next segfault, so read the info
    included in the email alert message if this happens. 
If you prefer
    not to run this monitor: `rm -f /var/xdrago/monitor/check/segfault_alert`

  * The previously recommended site and platform re-verify on Clone or Migrate
    is now fully automated. Ægir will run these extra tasks as part
    of the Clone or Migrate task, to make sure that there are no errors
    and that Ægir is using up-to-date information collected about
    platforms and sites. It also automatically fixes the known problem
    with domain aliases incorrectly written in the original and cloned sites,
    as reported in the Ægir queue: http://drupal.org/node/1004526

  * Apps are now fully supported. If the App is not downloaded yet, installing
    it via the browser requires write permissions, normally never available
    to the web server user, so you need to create an empty control file, either
    in sites/all/modules/apps-allow.info or sites/domain/modules/apps-allow.info,
    and then run the 'Reset password' task. It will open write access where
    required until the next site 'Verify' task runs. 
After installing the App,
    remember to re-Verify the site to restore the default, safe permissions.

  * Custom local.settings.php file support uses similar logic, with the
    sites/domain/modules/local-allow.info control file and the same
    'Reset password' task.
    After running this task the local.settings.php file will be group writable,
    so you will be able to edit it also when logged in as a limited shell user.
    Remember to run site Verify when done, to restore the standard, safe
    permissions. Note that this file is created automatically, but is not
    open for write access by default.

# Notes on new and updated platforms and new Drupal core:

  All 6.x and 7.x platforms have been updated with the latest core,
  so they are all in fact new in this BOA Edition, but we list here
  only genuinely new platforms or those with a new version released
  since the last BOA Edition, with one exception: we also list the basic
  6.26.2 and 7.14.2 platforms as new.

  NOTE: before you try to upgrade any of your sites,
  please read our important how-to:

  http://omega8.cc/the-best-recipes-for-disaster-139
  http://omega8.cc/are-there-any-specific-good-habits-to-learn-116
  http://omega8.cc/managing-your-code-in-the-aegir-style-110

  REALLY, PLEASE READ IT TO AVOID SOME HEAVY HEADACHES!

# New Octopus platforms:

  CiviCRM 4.1.2-d6 ------------- http://civicrm.org
  CiviCRM 4.1.2-d7 ------------- http://civicrm.org
  Drupal 7.14.2 ---------------- http://drupal.org/drupal-7.14
  Drupal 8.0-dev --------------- http://bit.ly/drupal-eight
  MartPlug 1.0-beta1b ---------- http://drupal.org/project/martplug
  Octopus Video 1.0-alpha6 ----- http://octopusvideo.org
  Panopoly 1.0-beta3 ----------- http://drupal.org/project/panopoly
  Pressflow 6.26.2 ------------- http://pressflow.org

# Updated Octopus platforms:

  Acquia 6.26.2 ---------------- http://bit.ly/acquiadrupal
  CiviCRM 3.4.8-d6 ------------- http://civicrm.org
  CiviCRM 4.0.8-d7 
------------- http://civicrm.org\n  Commerce 1.7.1 --------------- http://drupalcommerce.org\n  Commons 2.6 ------------------ http://acquia.com/drupalcommons\n  Feature Server 1.1 ----------- http://bit.ly/fserver\n  Managing News 1.2.2 ---------- http://managingnews.com\n  NodeStream 1.5 --------------- http://nodestream.org\n  NodeStream 2.0-beta1 --------- http://nodestream.org\n  Open Atrium 1.4.1 ------------ http://openatrium.com\n  Open Deals 1.0-beta7e -------- http://opendealsapp.com\n  Open Outreach 1.0-rc1 -------- http://openoutreach.org\n  OpenChurch 1.10-alpha1 ------- http://openchurchsite.com\n  OpenPublish 3.0-alpha8 ------- http://openpublishapp.com\n  Ubercart 2.9.1 --------------- http://ubercart.org\n  Ubercart 3.1.1 --------------- http://ubercart.org\n  Videola 1.0-alpha3 ----------- http://videola.tv\n\n# New features:\n\n  * Add Adaptive Image Styles support.\n  * Add Compass compatibility in the limited shell (Compass is not installed by default).\n  * Add ssh-copy-id and ssh-add commands as allowed over SSH.\n  * Add X-Speed-Cache-Key header for Speed Booster debugging.\n  * All Clone/Migrate forms in the Ægir control panel have useful inline help added.\n  * Allow to easily re-start BOA failed install, just by running boa installer again.\n  * Allow to install PHP 5.3 only with option _PHP_MODERN_ONLY=YES (default).\n  * Deny HTTPS access on Nginx level for all known bots and crawlers.\n  * Do not force HTTPS for Ægir if /data/conf/no-https-aegir.inc control file exists.\n  * Fix system time hourly via auto-healing.\n  * Install wkhtmltopdf by default - available at /usr/bin/wkhtmltopdf\n  * Issue #1263602 - New Relic Server and Apps Monitor with per Site/Instance reporting.\n  * Issue #1392498 - Use .barracuda.cnf to define YES/NO for some config overrides.\n  * Issue #1428078 - Compatibility with resp_img module.\n  * Issue #1436522 - Add option to set _PHP_CLI_VERSION.\n  * Issue #1438906 - Add Imagick to PHP by default.\n  * Issue 
#1463494 - Add support for radioactivity module.\n  * Issue #1542712 - Automated wildcard DNS for easy localhost mode.\n  * Lock temporarily almost all known crawlers on high load with error 503.\n  * Make _NGINX_DOS_LIMIT configurable and allow higher load by default.\n  * Make both 1 and 5 minute max allowed load configurable in the auto-healing.\n  * Support for automatically managed extra SSH/FTPS accounts per Ægir Client.\n  * The _LOAD_LIMIT used in the auto-healing system is now configurable.\n  * The _SPEED_VALID_MAX used as a Speed Booster cache TTL is now configurable.\n  * Ubuntu Precise 12.04 is fully supported.\n  * Use nice default /root/.bashrc config.\n\n# New Ægir modules or extensions:\n\n  * Add hosting_advanced_cron module - enabled by default.\n  * Add hosting_civicrm_cron module - enabled by default.\n  * Add hosting_task_gc module - enabled by default.\n  * Add provision_cdn module and extension, by default not enabled.\n  * Add remote_import and hosting_remote_import - not enabled by default.\n  * Add revision_deletion module - automatically configured and enabled by default.\n  * Registry Rebuild Drush extension - installed by default.\n\n# New o_contrib modules:\n\n  * entitycache-7.x-1.x-dev\n  * nocurrent_pass-7.x-1.0\n  * speedy-7.x-1.0\n\n# Changes:\n\n  * Acquia 7.x platform has been merged with Ubercart 3.\n  * Always disable css_gzip, javascript_aggregator and performance modules.\n  * Automate database server secure setup on initial install.\n  * Disable /etc/cron.daily/mlocate by default.\n  * Do not disable update module - it may break some features depending on it.\n  * Do not enable filefield_nginx_progress module by default.\n  * Do not remove Testing profile and use better naming convention for D7/D8.\n  * Do not search for mirrors by default.\n  * Drupal 8 compatible Drush 4.6-dev\n  * GitHub availability is required also when another mirror is used by default.\n  * Installing Git from sources is now optional.\n  * Limited 
shell 0.9.15.1-sec-noreload\n  * Lower default APC and Redis memory in VZ to 64MB to avoid/limit known VZ issues.\n  * MariaDB and Percona 5.5\n  * Modify Ubercart platform to include some contrib modules in the D6 version.\n  * Nginx 1.3.0\n  * Open Enterprise 1.0-beta3 is deprecated and not supported.\n  * Plain FTP access disabled with FTPS-only mode available.\n  * Pure-FTPd server install is now optional, but still default.\n  * Send all known bots to $args free URLs.\n  * Use _HTTP_WILDCARD=YES by default to match Ægir standard setup.\n\n# Fixes:\n\n  * Abort all parent installers as soon as any sub-installer fails with fatal error.\n  * Add $http_x_forwarded_proto to the cache key to never mix HTTP and HTTPS entries.\n  * Add a list/chart in the readme for an easy overview of all included modules.\n  * Add volatile updates to /etc/apt/sources.list for Squeeze.\n  * All connection tests should be run after netcat is installed if not yet available.\n  * Allow more than one IP to connect to the same FTPS account at the same time.\n  * Allow some known php files also in profiles - a fix for Nginx config regression.\n  * Always update nginx_speed_purge.conf file on upgrade.\n  * Archive install and upgrade logs in /var/backups/\n  * Avoid double dots in $cookie_domain.\n  * Better detection of real visitor IP in the scan_nginx abuse guard.\n  * Cache 403 response for 5s by default.\n  * Count only valid requests in the scan_nginx abuse guard.\n  * Disable caching in admin_menu module by default.\n  * Disabled allow_url_fopen breaks drush dl.\n  * Do not allow bots to create cache entries with long expire time.\n  * Do not prompt for D6 or D7 vanilla platforms install if not defined in the config.\n  * Explain in the email templates that plain FTP is no longer available.\n  * Fix cart block issue in Ubercart.\n  * Fix for Debian Lenny support - packages have been moved to archives.\n  * Fix for slow networks/DNS in pdnsd cache default config.\n  * Fix for VServer 
on _LENNY_TO_SQUEEZE upgrade.\n  * Fix tune_memory_limits logic to really tune the config on low mem systems.\n  * Follow some symlinks when running chmod/ownership repair daily.\n  * Force global upgrade for Expire and Purge modules.\n  * Force safe default settings for expire module.\n  * Improved Lenny to Squeeze major upgrade support.\n  * Increase allowed limit_conn for local purge requests.\n  * Issue #1216420 - Incorrect lshell path in /etc/passwd breaks FTPS on Squeeze.\n  * Issue #1317264 #1543118 - Uninstall Sendmail if exists to avoid breaking Postfix.\n  * Issue #1377492 - Improve Install / Upgrade mode detection and move away any zombies.\n  * Issue #1398050 - Use our mirror for all downloads on install and upgrade.\n  * Issue #1436522 - Add missing php.ini for PHP-CLI 5.3\n  * Issue #1440796 - Ægir support broken due to duplicate db update in Commons/OG.\n  * Issue #1441366 - The _USE_SPEED_BOOSTER switch is deprecated.\n  * Issue #1443284 - Early start of CSF may lockout the ssh user and break the install.\n  * Issue #1445460 - Broken Git install on Ubuntu Lucid.\n  * Issue #1451262 - Do not lock the access to phpinfo.\n  * Issue #1472460 #1524738 - Nginx denies request methods: PUT, DELETE and OPTIONS.\n  * Issue #1475416 - Unable to install Barracuda due to Ægir failed install.\n  * Issue #1478984 - Add Access-Control-Allow-Origin header with wildcard where required.\n  * Issue #1479188 - Octopus does not respect _DNS_SETUP_TEST setting on upgrade.\n  * Issue #1505370 - Conflict between Mime Type and Document Type in Nginx.\n  * Issue #1515762 - Nginx microcaching should skip all known AJAX requests.\n  * Issue #1526382 - The _PHP_CLI_VERSION set in cnf file is not respected.\n  * Issue #1527852 - Random WSOD on D7 sites with Redis enabled for anonymous visitors.\n  * Issue #1528692 - Both cache_backport and redis modules are never added on upgrade.\n  * Issue #1528726 - Redis caching backend should be unified across all instances.\n  * Issue 
#1528996 - Nginx microcaching should use TTL 1s only for upstream errors.\n  * Issue #1534306 - Duplicate directives break Dotdeb Nginx version.\n  * Issue #1539512 - Keep custom Redis configuration during upgrade.\n  * Issue #1540112 - HEAD install fails on Debian Squeeze 32bit.\n  * Issue #1540242 - Add useful codecs to ffmpeg if enabled.\n  * Issue #1541334 - Add kvm to supported virtualization systems.\n  * Issue #1544144 - Use $server_name instead of $host in all sites/ paths.\n  * Issue #1547878 - Port 11371 should be open for outgoing connections.\n  * Issue #1553150 - Both php.ini and my.cnf config files get overridden upon upgrade.\n  * Issue #1553166 - Disable incompatible mysql config options.\n  * Issue #1554972 - PHP cli downgraded to 5.2 on upgrade with _PHP_MODERN_ONLY=YES\n  * Issue #1556192 - Upgrade Entity API to head to fix issue with Drupal 7.14\n  * Issue #1585348 - Disable openchurch_video_demo_content to avoid fatal error.\n  * Kill nash-hotplug if running.\n  * Lower some my.cnf defaults to better support low mem systems.\n  * Make default myisam_sort_buffer_size big enough to run repair if required.\n  * Make sure that /dev/null has correct permissions.\n  * Pass some expected headers when using local proxy.\n  * Remind people that they should use their own email address or exit early.\n  * Remove deprecated Nginx config includes and use symlinks for backward compatibility.\n  * Sanitize important variables early.\n  * Save 330 seconds with 3x faster spinner.\n  * Set hosting_queue_cron_frequency to 8888 weeks by default to really use schedule\n    defined via hosting_advanced_cron module and never override it.\n  * Share and symlink civicrm code.\n  * Skip _AEGIR_LOGIN_URL in the debug mode - it is empty then.\n  * Update mime.types for Nginx.\n  * Use _FULL_FORCE_REINSTALL when recovering from broken/partial install automatically.\n  * Use faster locations matching where possible in the Nginx config.\n  * Use higher values for limit_conn 
in Nginx to avoid issues when required.
  * Use loglevel warning in Redis config.
  * Use safe placeholders to avoid issues on low-mem machines.


### Stable Edition BOA-2.0.2
### Date: Thu Feb 9 14:00:00 EST 2012
### Installs Ægir 2.0.2

# Note on new and updated platforms and new Drupal core:

  All 6.x and 7.x platforms have been updated with the latest core,
  so they are all in fact new in this BOA Edition, but we list here
  only genuinely new platforms or those with a new version released
  since the last BOA Edition, with one exception: we also list the basic
  6.24.1 and 7.12 platforms as new.

  Please note that instead of waiting for 6.25, we already
  included patches required to fix major issues with 6.24:

  http://drupal.org/node/1425868
  http://drupal.org/node/1425260

  Our Pressflow 6.24.1 +Extra version includes not only the patches
  listed above, but also a few extra, performance related
  patches discussed here:

  http://groups.drupal.org/node/187209

  Note also that we renamed the too-basic Acquia 7.x platform to the
  Ubercart 3.x platform. 
It is based on the same acquia install
  profile, but includes all contrib modules required for any
  basic Ubercart 3.x site.

  NOTE: before you try to upgrade any of your sites,
  please read our important how-to:

  http://omega8.cc/the-best-recipes-for-disaster-139
  http://omega8.cc/are-there-any-specific-good-habits-to-learn-116
  http://omega8.cc/managing-your-code-in-the-aegir-style-110

  REALLY, PLEASE READ IT TO AVOID SOME HEAVY HEADACHES!

# New Octopus platforms:

  Drupal 7.12 ------------------ http://drupal.org/drupal-7.12
  NodeStream 2.0-alpha6 -------- http://nodestream.org
  OpenPublish 3-alpha3 --------- http://openpublishapp.com
  Pressflow 6.24.1 ------------- http://pressflow.org
  Ubercart 3.0.1 --------------- http://ubercart.org

# Updated Octopus platforms:

  Acquia Commons 2.4 ----------- http://acquia.com/drupalcommons
  Commerce Kickstart 1.3 ------- http://drupalcommerce.org
  ELMS 1.0-alpha6 -------------- http://elms.psu.edu
  Open Atrium 1.2.1 ------------ http://openatrium.com
  Open Deals 1.0-beta7 --------- http://opendealsapp.com
  Open Outreach 1.0-beta7a ----- http://openoutreach.org
  ProsePoint 0.43 -------------- http://prosepoint.org
  Videola 1.0-alpha2 ----------- http://videola.tv

# New features:

  * Barracuda now supports the Debian Lenny to Squeeze major upgrade.
    Of course you should create a full backup image before running
    this major system upgrade, just in case, but all the rest
    is fully automated - it is enough to set the advanced configuration
    option _LENNY_TO_SQUEEZE=YES in Barracuda and run Barracuda
    as usual. 
It will upgrade your system to Squeeze and re-build
    everything, with almost no downtime during the upgrade.
    You will still need to reboot the server when it completes
    all upgrades.

    Important: Debian Lenny reached EOL on February 6, 2012.
    Details: http://lists.debian.org/debian-announce/2012/msg00001.html

  * All new 7.x sites now run on the latest PHP-FPM 5.3.10 by default.
    For existing sites it is enough to re-verify them in your
    Ægir control panel to get them on PHP-FPM 5.3.10 automatically.

    All existing and new 5.x sites run on the old PHP-FPM 5.2.17
    version by default and you can't change that.

    You can still choose between PHP-FPM 5.2.17 and 5.3.10 for
    all your 6.x sites - just let us know via http://omega8.cc/support
    that you wish to switch to 5.3.10 - but first make sure that all
    your 6.x sites are fully PHP 5.3 compatible. By default all
    6.x sites still run on PHP-FPM 5.2.17.

    Of course you could choose 5.3.10 for 6.x sites on one Octopus
    instance and 5.2.17 on another - on the same server. Just one more
    reason to use Octopus built-in intelligence :)

    All of this works the same for both the Ægir Master Instance
    and all Ægir Satellite Instances.

  * Speed Booster, Boost and Redis/Memcached all support separate
    caches per mobile device, so it is safe to use separate themes
    or content for mobile devices. We use simple logic to determine
    the kind of device, and there are separate cache bins for
    mobile-tablet, mobile-smart and mobile-other.
    You can review it here: http://bit.ly/wYz6PG

  * The Purge module is now enabled by default in all 6.x and also 7.x
    sites. Now Speed Booster works like Boost - it immediately expires
    the cache for any node/page as soon as it has been edited or
    a comment added. It also automatically expires the cache
    for the homepage and RSS feed at once. 
You no longer need to wait
    up to one hour for Speed Booster cache expiration. Plus, unlike
    Boost, it purges all separate caches for all mobile devices
    along with the non-mobile cache, at once. Now you have a good reason
    to disable Boost and use our crazy fast Speed Booster only.

  * You can use GeoIP data provided by your Nginx server
    in your custom code or modules with the variables
    $_SERVER['GEOIP_COUNTRY_CODE'] and $_SERVER['GEOIP_COUNTRY_NAME']
    to display or block content depending on the visitor's country.
    You can check/review it from your location also on the command line
    with: 'curl -I http://your-domain' - you will see the GeoIP headers.

  * You can safely manage Clients/Users attached to hosted sites
    in your Ægir interface. Make sure that all sites have their
    associated Client! Otherwise the site will be listed as available
    for all Clients/Users you have added. The site can lose its
    association with its Client after a Clone task if there is any
    non-alphanumeric value in the Client name, like &.

  * The CloudFlare specific header 'CF-Connecting-IP' is now supported
    out of the box and available as the standard $_SERVER['REMOTE_ADDR']
    in all 5.x, 6.x and 7.x platforms without any contrib module.

  * You can disable both Boost and Speed Booster on the fly
    by adding ?nocache=1 to any URL. 
Useful for debugging.

  * Speed Booster now also offers ESI microcaching, as explained
    in this article: http://groups.drupal.org/node/197478.
    This may enhance the experience not only of anonymous visitors,
    but also of logged-in users, since it allows you to separate
    the microcache for ESI/SSI includes (valid for just 15 seconds)
    from both the default Speed Booster cache for anonymous visitors
    (valid by default for 3 hours, unless purged on demand via the
    recently introduced Purge/Expire modules) and the
    Speed Booster cache per logged-in user (valid for 60 seconds).
    The ESI module is included in all 6.x platforms but is not
    enabled and not configured automatically, so please consult
    its documentation for details on how to use it properly.

    Now you have three different levels of Speed Booster cache
    to leverage and deliver the 'live content' experience for
    all visitors, and still protect your server from DoS or
    simply high load caused by unexpectedly high traffic etc.

  * Automatic configuration of options required when Barracuda
    detects _VMFAMILY=AWS (Amazon EC2).

  * Both _NGINX_WORKERS and _PHP_FPM_WORKERS are now configurable.

  * You can avoid overwriting /etc/mysql/my.cnf with an empty
    control file: $ touch /etc/mysql/custom.my.cnf

  * You can avoid overwriting /opt/php52/etc/php52.ini and
    /opt/php52/lib/php.ini on upgrade
    with an empty control file: $ touch /opt/etc/custom.php.ini

  * You can avoid overwriting /opt/php53/etc/php53.ini on upgrade
    with an empty control file: $ touch /opt/etc/custom.php53.ini

  * You can avoid overwriting /var/spool/cron/crontabs/root on upgrade
    by adding your extra/custom entries in the extra file:
    $ nano /var/xdrago/cron/custom.txt

  * You can avoid overwriting your CSF configuration on upgrade
    with an empty control file: $ touch /var/log/custom.csf.log

# New o_contrib modules:

  * taxonomy_edge-6.x-1.3 (with core patch)
  * taxonomy_edge-7.x-1.1 (with core patch)
  * purge-6.x-1.x
  * purge-7.x-1.x
  * expire-6.x-1.x
  * expire-7.x-1.x

# Changes:

  * Nginx upgrade to 1.0.12
  * Lshell upgrade to 0.9.15-beta1
  * Percona upgrade to 5.5.19
  * Chive upgrade to 1.0.2
  * Git upgrade to 1.7.9
  * Suhosin upgrade to 0.9.33
  * Textile upgrade to 2.3
  * Mytop is now installed by default.
  * The Drush-based method for sites cron is more reliable and is now set by default.
  * More compact naming for platforms in Octopus.
  * Speed Booster cache per logged in user now valid for only 60 seconds.
  * Speed Booster anonymous cache now valid for 3 hours, unless purged.
  * Extra $_COOKIE[OctopusCacheID] has been removed.
  * We use $cache_uid from parent map (Nginx) in fastcgi_cache_key.
  * Forced external caching only for Pressflow 6 core.
  * Octopus installs by default: D7P D7S D7D D6P D6S D6D OAM.
  * We no longer need to force Percona on Oneiric. 
MariaDB also works.\n  * We no longer need to force MariaDB on Lenny and MariaDB Natty on Oneiric.\n  * We no longer need to use Percona for Maverick on Natty and Oneiric.\n  * We use _THIS_DB_HOST=localhost by default.\n  * Secure/restricted access to manage users/clients is open by default\n    in every Ægir Satellite Instance also for the extra non-uid=1 admin.\n  * Users in every Ægir Satellite Instance are protected with userprotect\n    and protect_critical_users modules.\n  * Some default SQL limits have been increased.\n  * The insecure D7 plugin manager is now forced as disabled by default.\n  * The hosting_platform_pathauto module is now enabled in Ægir by default.\n  * The provision_boost module is now added and enabled in Ægir by default.\n\n# Fixes:\n\n  * Simplified Nginx config with 'modern', 'octopus' and 'legacy' templates.\n  * Removed duplicate code and fixed caching logic for D5, D6 and D7.\n  * Fixed logic for ESI microcache and Boost cache.\n  * Removed imageinfo_cache module. 
It breaks platforms with imagecache module.\n  * Disable deslash in globalredirect to avoid redirect loop.\n  * Load IonCube also in php-cli.\n  * Use core version in paths for all platforms.\n  * Make sure that 301 redirects are only microcached - 5 seconds by default.\n  * Do not run duplicate PHP-FPM rebuild on upgrade when there is\n    no new DB server version installed/available.\n  * Set boost_ignore_htaccess_warning to 1 by default.\n  * Use provision_civicrm 6.x-1.x branch instead of outdated master.\n  * Fix for broken regex on lshell.conf update per user.\n  * All broken symlinks in the clients directory now deleted daily.\n  * All broken symlinks in the lshell user home directory now deleted daily.\n  * Avoid breaking Ægir upgrade because of high load.\n  * Set correct loglevel for Redis to avoid useless I/O noise.\n  * Add curl as allowed command to lshell default config.\n  * Use faster download instead of git for Pressflow core.\n  * Issue #1432668 - Octopus username should never start with a digit.\n  * Issue #1408972 - Make nginx rewrites compatible with audio module.\n  * Issue #1428990 - Load memcache in php-cli.\n  * Issue #1408200 - AgrCache breaks aggregation and should be removed.\n  * Issue #1420758 - Make sure that Nginx config includes are really used\n                     on initial Barracuda install.\n  * Issue #1418608 - Add --with-xmlrpc in the PHP-FPM build by default.\n  * Issue #1396204 - Add GeoIP support in Nginx by default\n  * Issue #1394152 - Build PHP-FPM with --enable-calendar by default.\n  * Issue #1392498 - Do not overwrite CSF configuration on Barracuda upgrade.\n\n# Recommendations:\n\n  * Use _FORCE_GIT_MIRROR=github because it is 10x faster than others.\n\n\n### Stable Edition BOA-2.0.1\n### Date: Wed Dec 28 07:00:00 EST 2011\n### Installs Ægir 2.0.1\n\n# New Octopus platforms:\n\n  ELMS 1.0-alpha5 -------------- http://elms.psu.edu\n  Open Deals 1.0-alpha4 -------- http://opendealsapp.com\n  Open Outreach 1.0-beta6 
------ http://openoutreach.org

# Updated Octopus platforms:

  Acquia 7.10.10 --------------- http://bit.ly/acquiadrupal
  Acquia Commons 2.3 ----------- http://acquia.com/drupalcommons
  CiviCRM 3.4.8 ---------------- http://civicrm.org
  CiviCRM 4.0.8 ---------------- http://civicrm.org
  Commerce Kickstart 1.0-rc7 --- http://drupalcommerce.org
  Drupal 7.10 ------------------ http://drupal.org/drupal-7.10
  Managing News 1.2.1 ---------- http://managingnews.com
  NodeStream 1.1 --------------- http://nodestream.org
  Open Atrium 1.1.1 ------------ http://openatrium.com
  OpenChurch 1.22-a ------------ http://openchurchsite.com
  OpenScholar 2.0-beta13 ------- http://openscholar.harvard.edu
  ProsePoint 0.41 -------------- http://prosepoint.org

# New features:

  * Speed Booster Purge Server for all Drupal 6.x based platforms
    with automatically configured support for all-devices caching.
  * Enhanced Pressflow core for all bundled 6.22 based platforms,
    applied automatically also to already installed platforms:
    https://github.com/omega8cc/pressflow6
  * Added access to the "clients" directory with shortcuts/symlinks
    to all hosted sites per Ægir "client".

# New o_contrib modules:

  * ESI for Nginx SSI - http://drupal.org/sandbox/mikeytown2/1328648
  * Purge for Speed Booster - http://drupal.org/project/purge
  * Expire for Speed Booster - http://drupal.org/project/expire

# Changes:

  * Nginx upgrade to 1.0.11
  * MariaDB upgrade to 5.2.10
  * Percona upgrade to 5.5.18
  * Chive upgrade to 1.0.1
  * Pure-FTPd upgrade to 1.0.35
  * The syslog module is no longer enabled by default and added
    to the list of automatically disabled modules.

# Fixes:

  * Mobile devices detection and caching improved.
  * Many fixes and enhancements for Speed Booster caching logic.
  * Many fixes and enhancements for Boost caching logic.
  * More reliable Nginx auto-healing.
  * Broken symlinks in the 
\"clients\" directory are now purged daily.\n  * The preg_match for dev should check for dev. and devel. only.\n  * Issue #1366564 - Use instance specific .octopus.cnf files.\n  * Issue #1262988 - Use reliable test for upload progress availability.\n  * Issue #1350028 - Make sure that all BOA pid files are removed on reboot.\n  * Issue #1348906 - BOND script outdated _INSTALLER_VERSION variable fixed.\n  * Issue #1321428 - Make sure that _SSH_PORT is written in /etc/ssh/sshd_config.\n\n\n### Stable Edition BOA-1.4S\n### Date: Mon, 24 October 2011 14:00:00 +0200\n### Installs Ægir stable 1.4S\n\n# Updated Octopus platforms:\n\n  Acquia 7.8.7 ----------------- http://bit.ly/acquiadrupal\n  Acquia Commons 2.2 ----------- http://acquia.com/drupalcommons\n  CiviCRM 3.4.7 ---------------- http://civicrm.org\n  CiviCRM 4.0.7 ---------------- http://civicrm.org\n  Commerce Kickstart 1.0-rc4 --- http://drupalcommerce.org\n  OpenPublic 1.0-beta3 --------- http://openpublicapp.com\n  Ubercart 6.x-2.7 ------------- http://ubercart.org\n\n# New features:\n\n  * Mobile devices detection for mobile-tablet, mobile-smart and mobile-other.\n  * Mobile devices detection integrated with Redis/Memcached caches.\n  * Mobile devices detection integrated with Boost cache.\n  * Mobile devices detection integrated with Speed Booster cache.\n  * Responsive Images 7.x module support.\n  * New .barracuda.cnf and .octopus.cnf files for better configuration management.\n  * Ubuntu Oneiric 11.10 is now fully supported.\n  * Issue #1266912 - Support for Apache Solr Attachments - Tika.\n  * Issue #1310082 - Disable XML Sitemap for dev automatically.\n  * Support for fbconnect module.\n  * Support testing->minimal->standard migrations for D7 out-of-the-box.\n  * The Speed Booster $key_uri enhanced logic included in the default Nginx config.\n\n# Changes:\n\n  * Nginx upgrade to 1.0.8\n  * Create mobile cache separate subdirs for Boost by default.\n  * _MODULES_ON and _MODULES_OFF now forced also for 
D7 sites.\n  * Do not force hosting_ignore_default_profiles by default.\n  * Some o_contrib modules received updates - use _O_CONTRIB_UP=YES to apply them.\n  * Allow 'contrib' subdirectory in the modules path for allowed PHP files.\n  * Issue #1309996 - Extended support for common modules locations/paths.\n  * Issue #1305542 - Do not overwrite php.ini and my.cnf if control files exist.\n  * Add collectd to the auto-healing monitor and automated restart.\n  * Disable l10n_update module by default to avoid issues when d.o servers are down.\n  * Updated docs/SOLR.txt to explain how to configure any core to support 7.x.\n  * Duplicate parts of Nginx config moved to maps in the parent server.tpl.php file.\n  * Add 'drush pmi' to the list of displayed/allowed commands.\n  * Issue #1243068 - Allow to override in override.global.inc also Redis/Memcached etc.\n  * Deny known crawlers on the HTTPS proxy level.\n\n# Fixes:\n\n  * The wkhtmltopdf binary should be always executable if exists.\n  * Issue #1238200 - Use custom _SSH_PORT only in TCP_IN.\n  * Make sure the keys for MariaDB or Percona are added to avoid broken install.\n  * Issue #1307664 - Test repo.percona.com and ftp.osuosl.org availability.\n  * Issue #1262988 - Missing upload_progress_test.conf breaks upgrade for older installs.\n  * Issue #1281896 - Add some missing video types to mime.types in the Nginx config.\n  * Do not use path_alias_cache in the Hostmaster site to avoid broken URL aliases.\n  * Issue #1270724 and #1263124 - really use /tmp directory during 'drush dl module'.\n  * Do not break admin/reports/status/rebuild URL in D7.\n\n\n### Stable Edition 1.0-boa-T-8.10\n### Date: Mon, 5 September 2011 16:15:00 +0200.\n### Installs Ægir stable 1.3.1\n\n# New Octopus platforms:\n\n  OpenChurch 1.21 -------------- http://openchurchsite.com\n\n# Updated Octopus platforms:\n\n  Acquia 7.7.6 ----------------- http://bit.ly/acquiadrupal\n  Acquia Commons 2.0 ----------- http://acquia.com/drupalcommons\n  
CiviCRM 3.4.5 ---------------- http://civicrm.org\n  CiviCRM 4.0.5 ---------------- http://civicrm.org\n  Conference 1.0-beta2 --------- http://usecod.com\n  Drupal 7.8 ------------------- http://drupal.org/drupal-7.0\n  Drupal Commerce 1.0 ---------- http://drupalcommerce.org\n  OpenPublic 1.0-beta2 7.8 ----- http://openpublicapp.com\n  Ubercart 2.6 6.22 ------------ http://ubercart.org\n\n# Changes:\n\n  * Drush Make upgrade to 2.3\n  * Drush upgrade to 4.5\n  * Nginx upgrade to 1.0.6\n  * MariaDB upgrade to 5.2.8\n  * Higher limit_conn for AdvAgg to support high async connections rate.\n\n# Fixes:\n\n  * Tomcat runs as a separate 'tomcat' user instead of root.\n  * Issue #1250448 - Textile 7 requires Vars module.\n  * Issue #1248432 - support for CNAME records in the DNS check.\n\n# New features:\n\n  * HTTP/HTTPS redirects example in the override.global.inc file.\n  * Enabled by default HTTPS and HTTP sessions/cookies for D7.\n  * Issue #1243068 - Allow to override $cache_module_path.\n\n\n### Stable Edition 1.0-boa-T-8.9\n### Date: Sat, 30 July 2011 23:50:00 +0200.\n### Installs Ægir HEAD 1.2.1\n\n# Updated Octopus platforms:\n\n  Drupal 7.7 ------------------- http://drupal.org/drupal-7.0\n  Acquia 7.7.5 ----------------- http://bit.ly/acquiadrupal\n  OpenPublic 1.0-beta1 7.7 ----- http://openpublicapp.com\n  Drupal Commerce 1.0-rc1 ------ http://drupalcommerce.org\n  Open Atrium 1.0 6.22 --------- http://openatrium.com\n  ProsePoint 0.40 6.22 --------- http://prosepoint.org\n\n# Fixes:\n\n  * Two critical cache-related bugs fixed in Nginx 1.0.5.\n  * Critical Issue #1222208 - broken web-based cron for sites.\n  * Issue #1223506 - cloning a site loses client site ownership.\n  * Missing jquery.ui symlink in Conference COD breaks install.\n  * Issue #1230420 - do not purge /tmp too aggressively.\n  * Issue #1234470 - SSL proxy didn't respect HTTP wildcard.\n  * Boost's false alarm about permissions silenced.\n  * Permissions for sites/domain/private/* also 
fixed daily.\n\n# Changes:\n\n  * Nginx upgrade to 1.0.5\n  * Chive upgrade to 0.5.1\n  * Web-based method set by default for sites cron in Ægir.\n\n# New features:\n\n  * Speed Booster Purge experimental backend can be installed,\n    but is not used in production yet - see _PURGE_MODE flag\n    and Issue #1048000.\n\n\n### Stable Edition 1.0-boa-T-8.8\n### Date: Thu, 15 July 2011 08:00:00 +0200\n### Installs Ægir stable 1.2\n\n# New Octopus platforms:\n\n  Drupal 7.4 ------------------- http://drupal.org/drupal-7.0\n  CiviCRM 3.4.4 ---------------- http://civicrm.org\n  CiviCRM 4.0.4 ---------------- http://civicrm.org\n  Videola 1.0-alpha1 ----------- http://videola.tv\n\n# Updated Octopus platforms:\n\n  OpenPublic 1.0-beta1 7.4 ----- http://openpublicapp.com\n  Drupal Commerce 1.0-beta4 ---- http://drupalcommerce.org\n  Acquia Commons 1.7 ----------- http://acquia.com/drupalcommons\n  Acquia 7.4.4 ----------------- http://bit.ly/acquiadrupal\n  OpenScholar 2.0-beta11 ------- http://openscholar.harvard.edu\n  Conference 1.0-beta1 --------- http://usecod.com\n\n# New features:\n\n  * Speed Booster can be disabled per site or per platform.\n  * Redis/Memcached can be disabled per site or per platform.\n  * Redis/Memcached chained cache enabled also for anonymous visitors.\n  * Support for private_upload module added.\n  * Support for static sites/domain/files/robots.txt file per site #1173954.\n  * New _HTTP_WILDCARD Barracuda option for Nginx configuration #1152316.\n  * New _XTRAS_LIST Barracuda option to define extras to be used.\n  * Scripts to add extra ftp or lshell standard or lshell master users.\n  * New _PLATFORMS_LIST Octopus option to configure the list of platforms.\n  * You can migrate sites between some installation profiles by default:\n    Drupal/Pressflow -> Acquia\n    Acquia -> Drupal/Pressflow\n    Acquia -> CiviCRM 3\n    Cocomore/CDC/DrupalCenter -> Pressflow\n  * New _O_CONTRIB_UP Octopus option to upgrade last two contrib sets.\n\n# 
Changes:\n\n  * Migration from commercedev to commerce_kickstart profile.\n  * More system info stored in BOA logs to help with debugging.\n  * Nginx config - deny access to /hosting/c/server_master.\n  * Better how-to in the override.global.inc template.\n  * Chive upgrade to 0.4.2\n  * Nginx upgrade to 1.0.4\n\n# Fixes:\n\n  * OpenPublic password policy issue fixed on site install.\n  * OpenScholar missing libraries issue fixed.\n  * Issue #1213094 - FServer platform missing module fixed.\n  * Mollom problem when running via (SSL) proxy fixed.\n  * Issue #1209150 - always use _MY_OWNIP when defined.\n  * Issue #1208386 - fix for broken csf configuration template.\n  * Boost cache write permissions after site migration fixed.\n  * Nginx config - better support for CiviCRM.\n  * Issue #1198572 - do not run SMTP check if _SMTP_RELAY_HOST is set.\n  * Forced PHP-FPM rebuild on MariaDB 5.2.7 upgrade.\n  * Issue #1196006 - fixed Nginx X-Accel-Redirect support.\n  * Security Issue #1197172 - bypass access restrictions to protected files fixed.\n  * Issue #1182680 - fixed support for backup_migrate module.\n  * Issue #1182582 - fixed search paths for node.js, image.jpg etc.\n  * Critical Issue #1183500 #1182660 - fall back to the wildcard * in Nginx.\n  * Issue #962188 - Nginx version check in vhost.tpl.php now works.\n  * Issue #1170498 - Extra config variable was missing in Nginx config templates.\n  * Percona upgrade path fixed.\n  * Broken dev version of the backup_migrate module replaced with stable.\n  * Use correct platforms versions numbers in the ftp symlinks.\n\n\n### Stable Edition 1.0-boa-T-8.7\n### Date: Mon, 30 May 2011 11:40:00 +0200\n### Installs Ægir HEAD 1.1.2\n\n1. Fixed critical issue with MariaDB upgrade from 5.1 to 5.2\n2. Fixed critical issue with Nginx build.\n3. Fixed critical issue with Feature Server platform build.\n4. 
Added upgrade monitor.\n\n\n### Stable Edition 1.0-boa-T-8.6\n### Date: Sun, 29 May 2011 13:30:00 +0200\n### Installs Ægir HEAD 1.1.2\n\n----------------------------------------\n# Added or upgraded since January 2011\n----------------------------------------\n\n* Added support for install and upgrade to Percona Server 5.5\n* MariaDB server upgraded to version 5.2.6.\n* Nginx server upgraded to version Barracuda/1.0.2\n* Added support for Debian Squeeze and Ubuntu Natty.\n* Open Atrium includes extra features:\n  Atrium Folders:  http://bit.ly/oafolders\n  Ideation:        http://bit.ly/oaideation\n\n* Hostmaster platform comes with ready-to-enable extra modules:\n  http://drupal.org/project/hosting_backup_queue\n  http://drupal.org/project/hosting_backup_gc\n  http://drupal.org/project/hosting_upload\n\n* New Octopus platforms:\n\n  OpenPublic 1.0-beta1 --------- http://openpublicapp.com\n  NodeStream 1.0 --------------- http://nodestream.org\n  Drupal Commons 1.6 ----------- http://acquia.com/drupalcommons\n  OpenScholar 2.0-beta10-1 ----- http://openscholar.harvard.edu\n  Conference 1.0-alpha3 -------- http://usecod.com\n  Open Enterprise 1.0-beta3 ---- http://leveltendesign.com/enterprise\n  Acquia 7.2.2 ----------------- http://bit.ly/acquiadrupal\n  Drupal Commerce 1.0-beta3 ---- http://drupalcommerce.org\n\n* Basic Drupal 6 and Drupal 7 platforms now come in three instances,\n  to make your standard workflow easier for: -dev, -stage and -prod,\n  with correct suffix: D.00x, S.00x and P.00x in the platform name.\n\n* Speed Booster cache for 5.x, 6.x and 7.x Drupal platforms.\n  This new feature adds super fast caching for anonymous visitors,\n  and yes! 
- also for logged in users (cache per user) directly on\n  the web server level - no Drupal module required.\n  It works for all platforms, except of Ubercart, Commerce\n  and any platform with ubercart in sites/all/modules/ubercart.\n\n* Support for secure ubercart keys location to use ../keys path.\n* The filefield_nginx_progress now also in every 7.x platform.\n* Drush upgraded to version 4.4\n* Drush Make upgraded to version 2.2\n* Redis cache server upgraded to version 2.0.5\n* PHP-FPM server upgraded to version 5.2.17\n* APC upgraded to version 3.1.9\n* Memcache extension replaced with memcached and libmemcached.\n* Chive database manager upgraded to version 0.4.1\n* Added support for robotstxt module in all new 6.x based platforms.\n* Drush gm / generate-makefile command added as allowed to lshell.\n* Git over ssh added as allowed to lshell.\n\n----------------------------------------\n# Improvements since January 2011\n----------------------------------------\n\n* Speed Booster now works also in the Ægir Master Instance.\n* Full Barracuda install takes only 30 minutes (tested on Linode).\n* Nginx abuse guard is now integrated with csf firewall.\n* Bots/crawlers are now denied on any \"dev\" type subdomain.\n* The pdnsd server install is now optional.\n* The csf/lfd firewall install is now optional.\n* Limited shell configuration is now updated on every upgrade.\n* Auto-tuning in Barracuda leaves more memory for MyISAM etc.\n* Ægir runs cron for D5 and D6 sites using Wget instead of Drush\n  to leverage APC cache, while D7 can use built-in poormanscron.\n* Many improvements in the Speed Booster cache configuration.\n* Improved memcached/redis cache bins configuration.\n* The o_contrib modules now symlinked also in custom platforms.\n* Boost directories created automatically also in custom platforms.\n* Improved web server self-healing monitor.\n* PHP notices no longer displayed for dev subdomains, only errors.\n* Many improvements in the Nginx configuration 
- now it's faster.\n* Permissions on uploaded modules, themes and files are now\n  automatically fixed every morning to help with post-import issues.\n* Almost all 6.x platforms now come with performance related\n  modules already enabled and configured on site install by default.\n* Nginx config - now doesn't use php-fpm to serve fckeditor files.\n* Introduced possibility to add upgrade-safe custom Nginx rewrite\n  rules to support transparent migration of legacy URLs/content.\n* Ægir Hostmaster control panel received extra caching and speed.\n* Better support for securepages 1.9 with forced secure cookies.\n* Better support for dynamically created base_url for http/https.\n* Too generic D7 profile names replaced with unique Drupal 7 names.\n* A few new commands have been added to your Ægir Drush Shell (SSH).\n* You can use git to manage the code and rsync to manage backups.\n* Useful new commands from Drush v.4 are now available.\n* Now it is possible to delete old sites backups created in Ægir.\n* You can access Ægir backups also via SSH or SFTP/FTPS.\n* You can cancel queued task in Ægir before it is started.\n* The \"dev\" anywhere in the subdomain enables all PHP errors.\n* You can use \"dev\" type alias for live site for easier debugging.\n* Added support for imagecache_external module.\n* It is possible to safely delete any not used platforms on request.\n* Access to static files allowed only for currently used domain.\n* Added crossdomain.xml in the root of every new platform.\n* New rewrite introduced to map /files to /sites/domain/files,\n  /images to /sites/domain/files/images and\n  /downloads to /sites/domain/files/downloads.\n* The standard /update.php works again, however using \"drush dbup\"\n  command is recommended.\n* The \"drush mup\" command allows now to upgrade contributed modules.\n\n\n----------------------------------------\n# Fixes since January 2011\n----------------------------------------\n\n* Auto-healing no longer starts concurrent 
servers when InnoDB start\n  takes more time on servers with big or many databases.\n* Hostname is no longer reverted to default on Linode and similar.\n* Barracuda supports now both old and new Mailx behavior.\n* All platforms paths and symlinks include core version numbers.\n* Fixed some memory issues with Virtuozzo family systems.\n* Fixed issue with broken site when non-lowercase domain was used\n  on Migrate or Clone task.\n* Fixed upgrade path for Drupal 5\n* Fixed double slash in the images paths issue in the Pressflow core.\n* Speed Booster cookies shouldn't be sent for imagecache/styles\n  and AdvAgg module dynamic requests.\n* Speed Booster shouldn't cache imagecache/styles and AdvAgg module\n  dynamic requests on the Nginx level.\n* Nginx upgrade to 1.0.0 fixes known issue with random but very high\n  CPU load on Nginx server configuration reload/restart.\n* Fix for critical bug causing sessions issues on older sites without\n  $cookie_domain set in settings.php when speed booster is enabled.\n* The session.cookie_secure is no longer forced in D6 platforms.\n* Security issue #1098304 - domain aliases were not sanitized.\n* Nginx config - proper fix for broken wysiwyg pop-ups.\n* Fixed issue with Nginx configuration for private files access.\n* The authorize.php added to allowed php files - required in D7.\n* Known issue with paths to files not rewritten is now fixed.\n* Known issue with sites cron semaphore in Ægir now resolved.\n* Known issue with PHP notices breaking some Ægir tasks resolved.\n* Fixed web server rewrites to support \"ad\" module.\n* Fixed Ægir issue with .info and .pl domains extensions.\n* Drush make via SSH now works as expected.\n* Fixed Nginx issue with /system/ paths and static files or images.\n* Fixed issue with broken site when non-lowercase domain was used.\n\n\n----------------------------------------\n# Other changes\n----------------------------------------\n\n* Forced public downloads for all 6.x platforms, except of 
ubercart.\n* Boost crawler option is now denied for performance reasons.\n* Forced log-out on browser quit only for Ægir control panel.\n\n\n### Project and issue queue moved to Drupal.org\n### Date: Sat, 7 May 2011 14:00:00 +0200\n### http://drupal.org/project/barracuda\n### http://drupal.org/project/octopus\n\n### Stable Edition 1.0-boa-T-8.5\n### Date: Tue, 3 May 2011 14:30:00 +0200\n### Installs Ægir stable 1.1\n\n### Stable Edition 1.0-boa-T-8.4\n### Date: Sun, 1 May 2011 23:30:00 +0200\n### Installs Ægir stable 1.1\n\n### Stable Edition 1.0-boa-T-8.3\n### Date: Sat, 30 Apr 2011 20:15:00 +0200\n### Installs Ægir stable 1.1\n\n### Stable Edition 1.0-boa-T-8.2\n### Date: Tue, 26 Apr 2011 21:45:00 +0200\n### Installs Ægir stable 1.1\n\n### Stable Edition 1.0-boa-T-8.1\n### Date: Wed, 20 Apr 2011 19:30:00 +0200\n### Installs Ægir stable 1.1\n\n### Stable Edition 1.0-boa-T-8\n### Date: Mon, 18 Apr 2011 20:15:00 +0200\n### Installs Ægir stable 1.0\n\n### Stable Edition 1.0-boa-T-5\n### Date: Fri, 8 Apr 2011 19:15:00 +0200\n### Installs Ægir working HEAD after 1.0-rc6\n\n### Stable Edition 1.0-boa-T-2\n### Date: Wed, 6 Apr 2011 01:34:40 +0200\n### Installs Ægir working HEAD before 1.0-rc3\n\n### Stable Edition 1.0-boa-T\n### Date: Mon, 14 Mar 2011 02:43:15 +0100\n\n### Stable Edition 0.4-boa-C\n### Date: Thu, 10 Feb 2011 04:41:57 +0100\n\n###\nFor changes/improvements between 2010-09-24 and\n2010-12-31 please see comments in the commits history.\n###\n\n### Thu, 2010-09-23 17:30 - Edition 0.4-HEAD-A14.B\n\nAdded/Fixed: (upgrade for all pre-A14.A required)\n\n1.  Introducing default SSL Wildcard Nginx Proxy.\n    Works for all sites/hostmaster instances on\n    the same server and can be used also for\n    encrypted connections to Chive and Collectd.\n    Doesn't interfere even with SSL enabled sites\n    on the same IP (with separate certs).\n\n2.  
The redirects are now back and enhanced.\n    Fully compatible with Nginx in any combination\n    with aliases and SSL settings/modes.\n\n3.  Barracuda and Octopus by default installs still\n    Ægir HEAD, but the latest alpha14 also works.\n\n4.  Octopus can define its separate IP address\n    if available.\n\n5.  Fixed issue with too aggressive Hot Sauce check,\n    causing creating not shared copies of code\n    for platforms on every install or upgrade.\n\n6.  Barracuda and Octopus now allows to skip DNS\n    test, to make it possible to install on any\n    virtualbox with dynamic DNS/IP etc. There is\n    no guarantee it will work, but another switch\n    is now available, if someone needs it.\n\n7.  Octopus can now turn off local Memcache and Redis\n    caches and switch all sites to use defined remote\n    caches.\n\n8.  Forced /etc/apt/sources.list rewrite also before\n    the Barracuda system upgrade.\n\n9.  Fix for the already installed and possibly broken\n    git-core.\n\n10. Fix for Ægir sites with .info domains, the path\n    alias should now work without 403 error.\n\n\n### Fri, 2010-09-17 11:00 - Edition 0.4-HEAD-A14.A\n\nAdded/Fixed: (upgrade required)\n\n1. Barracuda and Octopus by default installs now\n   Ægir HEAD to use the fix for critical issue\n   on sites import. It will be included in alpha14,\n   please don't use alpha13.\n\n2. Debian Lenny on 32bit systems works again.\n   Fix for broken git-core after upgrade\n   to version: 1:1.5.6.5-3+lenny3.1 on Lenny 32bit.\n\n3. Fix and better inline warnings/info about\n   missing locales at Linode and RackSpaceCloud.\n\n4. More details in the installer log for better\n   debugging and version tracking.\n\n5. E-mail address for alerts on database repair\n   started by auto-healing now correctly replaced.\n\n6. Redis for Lenny now built from sources due to\n   apt version moved already to Squeeze.\n\n7. Critical bugfix for failed platforms install\n   when hostmaster is not upgraded.\n\n8. 
Introducing simple edition archive:\n   http://omega8.cc/dev/bo-a14a.tar.gz\n\n9. Octopus now better supports using newer shared\n   code for platforms and introduces new setting:\n   _HOT_SAUCE to allow forced fresh/hot code.\n\n\n### Tue, 2010-09-12 21:50 - Edition 0.4-HEAD-A13.A\n\nAdded/Fixed: (upgrade recommended)\n\n1.  Octopus now creates SSH/FTPS separate, non-aegir\n    account for every Ægir Satellite Instance,\n    with limited shell to avoid using commands\n    like \"drush up\" since they should never be used\n    on sites managed in the Ægir system.\n\n2.  Octopus now by default sends a welcome email\n    with some useful intro information and access\n    details to the address defined as _CLIENT_EMAIL.\n\n3.  When Octopus is used the first time to create\n    an Ægir Satellite Instance, it doesn't allow\n    to skip installing all platforms, since it is\n    recommended to add all available platforms with\n    initial install, for easier re-using the code\n    by next Ægir Satellite Instances.\n\n4.  The second and all future non-core Hostmaster\n    installs allow to choose one or more platforms\n    or to skip adding platforms at all.\n\n5.  Octopus by default honors initial domain used\n    for the Ægir Satellite Instance on every\n    upgrade to avoid mistakes with using different\n    copies of the script for different Ægir\n    Satellite Instances upgrades.\n\n6.  Also Barracuda will always honor initial\n    domain used for the core Hostmaster to avoid\n    mistakes on upgrade when you don't use\n    the original version of the script.\n\n7.  Better checks if the script is running as root.\n\n8.  Removed memcache module since cache is used.\n\n9.  SMTP connection test is now optional.\n\n10. Nginx version set to 0.8.50.\n\n11. By default Ægir 0.4-HEAD instead of alpha13\n    is now installed to fix critical issues with\n    importing sites.\n\n    See also: http://drupal.org/node/907248\n\n12. Solr and Chive are now optional (Yes/no).\n\n13. 
Added optional install of Collectd monitor.\n\n14. Fixed issue with SSL mode.\n\n15. Better compatibility for upgrades from\n    pre-Barracuda Nginx installs.\n\n16. Now it doesn't start cron before completing all\n    install tasks to avoid breaking spinner.\n\n17. Both Barracuda and Octopus now can better\n    support re-starting stopped install/upgrade.\n\n18. Octopus now refuses to run if defined domain\n    doesn't resolve yet to the server IP address.\n\n19. Octopus now refuses to run on system not\n    created initially by Barracuda installer.\n\n20. Custom FQDN hostname is now forced (if defined)\n    in Barracuda before running DNS checks.\n\n21. Fix for some missing mime types in vanilla Nginx.\n\n22. Updated versions of Open Atrium, Drupal Commons\n    and Cocomore Drupal distros installed by Octopus.\n\n23. Lowered memory defaults in the MariaDB configuration.\n\n\n### Tue, 2010-08-31 23:50 - Edition 0.4-HEAD-A12.D\n\nAdded/Fixed: (upgrade recommended because it works!)\n\n1. Upgrade of Ægir Master Instance by Barracuda\n   and upgrade of Ægir Satellite Instances by Octopus\n   finally works as expected.\n\n2. It is now possible to use Barracuda to install\n   environment and Ægir Master Instance, to\n   upgrade only environment, to upgrade only Ægir\n   Master Instance, or both at the same time.\n\n3. Octopus now can separately install and/or upgrade\n   any Ægir Satellite Instance or any platform\n   on any instance, separately, using detailed prompt\n   with version numbers and links to distributions\n   home pages.\n\n4. New platform Cocomore Drupal added in Octopus:\n   http://drupal.cocomore.com\n\n\n### Sat, 2010-08-28 20:15 - Edition 0.4-HEAD-A12.C\n\nAdded/Fixed: (upgrade recommended)\n\n1. By default Ægir 0.4-HEAD with Drush 3.3\n   is now installed to fix critical issues with\n   importing sites. The fix is also available\n   as a patch for alpha12:\n   http://drupal.org/node/882970#comment-3382542\n\n2. 
Both Barracuda and Octopus now allow to choose\n   if the Ægir Hostmaster will be upgraded or not.\n\n3. Added versions numbers and links to all platforms\n   Yes/no prompts.\n\n4. /tmp directory no longer used to avoid problems\n   due to secure noexec mount.\n\n5. Improved readme and docs (in progress).\n\n6. Removed old, no longer supported installer.\n\n\n### Fri, 2010-08-27 04:15 - Edition 0.4-alpha12-A12.B\n\nAdded/Fixed: (upgrade optional)\n\n1. Octopus now allows to install or upgrade only Ægir\n   Satellite Instance without any platforms added.\n\n2. Enabled again early exit on the first error to avoid\n   confusing cascade of errors if something went wrong.\n\n3. Both Barracuda and Octopus runs now faster.\n\n\n### Thu, 2010-08-26 19:30 - Edition 0.4-alpha12-A12.A\n\nAdded/Fixed: (upgrade from previous versions recommended)\n\n1. Barracuda now includes multicore Apache Solr Search,\n   Redis and Memcache.\n\n2. Barracuda now can upgrade packages selectively.\n   Just run it again to upgrade the system and the\n   Ægir Master Instance.\n\n3. Octopus can create many Ægir Satellite Instances\n   on the same server, each with different set of platforms,\n   but with ability to share the code between instances,\n   so you can use this system even on the low end VPS.\n\n4. Chive database manager added by default with db.\n   subdomain (may require dns entry or wildcard).\n\n\n### Thu, 2010-08-26 08:55 - Edition 0.4-alpha12-A12.A\n\nAdded/Fixed: (upgrade from previous versions recommended)\n\n1. By default Ægir 0.4-alpha12 with Drush 3.3\n   is now installed.\n\n2. Introduced new Octopus and Barracuda installers.\n   See README.txt for more information.\n   Both are in pre-alpha debugging phase.\n\n3. All installers code and helpers now hosted on GitHub.\n\n\n### Thu, 2010-08-18 21:30 - Edition 0.4-HEAD-A11.B\n\nAdded/Fixed: (upgrade from previous versions recommended)\n\n1. By default Ægir 0.4-HEAD with Drush 3.3\n   is now installed.\n\n2. 
Introduced support for Virtuozzo/OpenVZ IP address\n   automatic discovery.\n\n\n### Thu, 2010-08-12 22:15 - Edition 0.4-alpha11-A11.A\n\nAdded/Fixed: (upgrade from previous versions recommended)\n\n1. By default Ægir 0.4-alpha11 with Drush 3.3\n   is now installed.\n\n2. PHP-FPM version is now 5.2.14.\n\n3. Improved UX - only interesting status messages\n   are now displayed.\n\n4. Hostmaster root directory now properly named using\n   Ægir version: '-0.4-alpha11' or '-HEAD'.\n\n\n### Thu, 2010-08-12 06:10 - Edition 0.4-alpha10-A10.A\n\nAdded/Fixed: (upgrade from previous versions recommended)\n\n1. By default Ægir 0.4-alpha10 with Drush 3.3\n   is now installed.\n\n2. Nginx version is now 0.8.49, MariaDB is 5.1.49\n   and Drupal is 6.19.\n\n3. Fixed freezing request on the first /admin hit.\n\n4. Better tuned Nginx, PHP-FPM and MariaDB settings.\n\n5. Various small improvements in the code.\n\n\n### Thu, 2010-08-07 06:10 - Edition 0.4-alpha9-A9.F\n\nAdded/Fixed: (upgrade of existing installs not required)\n\n1. By default latest HEAD from git.aegirproject.org\n   is now installed, due to critical bug found,\n   see this for details: http://drupal.org/node/874716\n   The default install will be reverted to 0.4-alpha10\n   when it will be released. You can use 0.4-alpha9 with\n   caution (just don't use remote servers new feature\n   to stay safe).\n\n2. Fixed problem with setting up FQDN hostname on Linode\n   based servers. The fix can help also with other\n   providers probably.\n\n3. Installer now writes date and version used in file:\n   /var/aegir/config/includes/installer_version.txt\n\n\n### Thu, 2010-08-05 22:00\n\nAdded/Fixed: (upgrade of existing installs not required)\n\n1. Fixed critical problem with Drush broken due to\n   change of URL to the required php library:\n   http://drupal.org/node/875196\n\n2. Ægir version is now configurable. 
By default latest\n   0.4-alpha9 will be installed, but it is also possible\n   to install latest HEAD from git.aegirproject.org.\n\n3. Ægir front-end (sub)domain is now configurable and\n   can be different than machine FQDN hostname.\n\n4. Machine FQDN hostname and IP is now configurable.\n\n5. Nginx version updated to 0.8.48.\n\n6. Fixed progress spinner on Ubuntu.\n\n7. Fixed problem with automatic ionCube loader\n   discovery of required version 32/64 bit.\n\n\n### Mon, 2010-08-02 01:08\n\nAdded/Fixed:\n\n1. Added automatic, full support for Ubuntu Lucid and Karmic.\n\n2. If there is no FQDN hostname, we are trying to set it\n   using reverse IP hostname, if exists.\n\n3. Now we are trying both `uname -n` and `hostname -f`\n   to make sure if the FQDN hostname is already set,\n   but not available with `uname -n` test.\n\n4. Added support for ionCube Loader with automatic\n   discovery of required version 32/64 bit.\n\n\n### Sat, 2010-07-31 18:00\n\nAdded/Fixed:\n\n1. Simplified installer by removing unnecessary duplicate\n   prompts in the original embedded install script.\n\n2. Check for SMTP outgoing port 25 now fully automated.\n\n3. Even more fun added :)\n\n\n### Fri, 2010-07-30 19:00\n\nAdded/Removed:\n\n1. New all-in-one installer for Debian 5.0 Lenny\n   Ægir 0.4-alpha9 compatible.\n\n2. Removed deprecated scripts & how-to.\n\n\n### Sat, 2010-02-06 23:55\n\nAdded/Fixed:\n\n1. Missing --with-libevent=shared added in php-fpm-install.txt\n   http://github.com/omega8cc/boa/issues/#issue/2\n\n2. Debian specific stuff added in php-fpm-install.txt to allow\n   easy install on vanilla vps.\n\n3. Xcache replaced with APC and Memcache install added.\n\n\n### Wed, 2010-02-03 06:37\n\nAdded/Fixed:\n\n1. mkdir for required cache dirs added in nginx-install.txt\n   http://github.com/omega8cc/boa/issues#issue/1\n\n\n### Fri, 2010-01-29 06:37\n\nAdded/Fixed:\n\n1. FCKeditor/CKEditor fix for .xml files.\n2. 
Security: deny direct access to backup_migrate directory.\n\n\n### Mon, 2010-01-11 01:46\n\n1. Added custom fix required only when using purl, spaces & og\n   for modules: ajax_comments, watcher and fasttoggle.\n\n2. Simplified rewrite rules for location @drupal\n   resolves also some problems with imagecache.\n\n3. Changed order of try_files for Boost\n   to match newer version of dirs structure first.\n\n\n### Tue, 2009-12-01 16:19\n\nAdded/Fixed:\n\n1. Latest Boost compatibility for /cache/normal & /cache/perm.\n2. Json cache for Boost added.\n3. Fix for xml/feed Boost cache files with .html extension.\n4. Fix for xml/feed Boost cache correct mime type.\n"
  },
  {
    "path": "DIFFERENT30Y.md",
"content": "# 30 Years of Heritage\n\nWe are unique within the hosting industry for many important reasons. Our 15 years of Ægir-based hosting, plus earlier experience with Adgrafix (the first company to offer a control panel for website management in 1995), have helped shape what makes us different today.\n\nFun fact: Robert “Bo” Bennett, the founder of Adgrafix, started as a self-taught programmer and created the world's first web hosting control panel entirely in Perl. We continued to build on that foundation for almost ten years, adding shopping cart and marketing tools and the early BOA version -- everything in Perl. Why not in PHP? Because PHP was still in its infancy, since its early prototype 1.0 was released in June 1995 by Rasmus Lerdorf, while Perl 4 had been out since 1991 and was already a stable, mature, widely adopted language used for server tools as well as website backends and frontends.\n\n<img width=\"906\" height=\"672\" alt=\"boa-on-excalibur\" src=\"https://github.com/user-attachments/assets/cdf4f72b-6d7d-4712-895b-46f612be333f\" />\n\n## Why We’re Different\n\nWe avoid marketing noise. Think of us as your electricity provider — reliable, fast, essential. You know we’re there, running everything **silently and smoothly** in the background. No need for constant attention or distractions.\n\n## Focus, focus, focus — like no one else\n\nOur website has been running on GravCMS for years, but our hosting and associated services are 100% Drupal-focused — and have been for 15 years. Others have tried and failed to stay focused, adding WordPress and a dozen other platforms. That’s not how you become **best-in-class, fastest, and most trusted** by the open-source community.\n\n## Technological sovereignty\n\nWe take **Open Source seriously** — it’s not a buzzword for us. It’s about freedom from corporate control. 
Here's a short look back at our 15-year Ægir journey and 19 years with Drupal.\n\nOur BOA system is the most complete solution of its kind. It has been designed from the ground up to make Drupal hosting **faster, easier, safer, and more efficient** — especially for teams that don’t want to get lost in DevOps. It was always built on Debian OS, but we never accepted systemd, imposed by Red Hat. After years of battling, we switched fully to **Devuan** — the true Debian fork without systemd, built by Debian veterans.\n\nAt the same time, we stayed away from MariaDB, because it drifted away from MySQL compatibility and became vendor-focused. Instead, we adopted **Percona** — a truly open, compatible, and community-aligned database option. The same thing happened with Redis — the fast in-memory cache server. As soon as Redis switched to a restrictive corporate license, we moved to the open-source **Valkey** fork.\n\nBottom line: BOA is **architected around technological freedom and independence**. We work for you — not for Oracle, Red Hat, investors, or proprietary agendas. That matters, especially for universities, NGOs, and businesses that rely on no-compromise sovereignty. Make your demanding, high-traffic Drupal sites **Lightning Fast and Secure** with our [**Pro Hosted Plans &raquo;**](https://omega8.cc/pro)\n\n<img width=\"1215\" height=\"1264\" alt=\"Ægir-BOA\" src=\"https://github.com/user-attachments/assets/b2417cc7-2fb8-422c-96f8-71d12c1c2fd7\" />\n\n## Support for legacy Drupal versions\n\nWhile Drupal 7 users were largely abandoned by core developers without an upgrade path to Drupal 8 and beyond, we still fully support Pressflow 6 and Drupal 7. We even maintain our own improved core forks, provide Drupal 7 LTS, and made Drupal 7 compatible with PHP 8.4 ages ago. 
Make your legacy Drupal sites **Fast and Secure** with our [**Basic Hosted Plans &raquo;**](https://omega8.cc/basic)\n\n## Support for Ægir 3 and Drupal 11+\n\nMany believed Ægir 3 would never support Drupal 9 — until we made it work. Same for Drupal 10. This year, we did it again and added Drupal 11 support to Ægir — a “mission impossible” at first glance. We recently took over the technical stewardship of the Ægir 3 project, ensuring its future is bright. Make your modern Drupal sites **Blazing Fast and Secure** with our [**Advanced Hosted Plans &raquo;**](https://omega8.cc/advanced)\n\n## 90-day backups running every six hours\n\nYour site backups (files + database) are securely stored via Backblaze with a full 90-day retention period — taken automatically every six hours. Far better than the industry-standard daily or weekly backups. You can also configure your own automatic backups, using any of 8 supported storage providers. Because there is no such thing in the universe as \"too many backups\".\n\n## Drupal codebase upgrades powered by Jenkins\n\nWe’ve built highly tailored CI/CD pipelines for Drupal, including custom Composer-based setups and multi-environment workflows. If you’ve never used CI/CD, we understand — it can be complex. The good news is that you can benefit from it without any learning curve: if you’re tired of constant updates, PHP compatibility issues, or security patching, we can help with Jenkins wizardry in the backend, including same-day updates — check our [**Managed Drupal Updates Plans &raquo;**](https://omega8.cc/managed)\n\n**We’d be happy to show you how it all works and talk through how it could fit with your projects** — [**Contact Us Today! &raquo;**](https://omega8.cc/contact) to discuss your needs.\n\n"
  },
  {
    "path": "DUALLICENSE.md",
    "content": "# Dual License and BOA Branches Explained\n\n**BOA** remains a **Free/Libre Open Source Project**. While all of the **BOA** code is **Free/Libre Open Source**, only the **BOA LTS** branch and **Ægir** are available without any cost or restrictions.\n\n- **LTS**: This public branch remains completely free to use without any commercial license, as it has been from the beginning (previously named HEAD or STABLE). This branch should be considered the **BOA Long Term Support** variant, with slow updates focused on security and bug fixes, and limited new features.\n\n- **DEV**: This public branch requires a commercial license for both installation and upgrades. It includes the latest features, security updates, bug fixes, and updated service versions. This branch should not be used in production without extensive testing.\n\n- **PRO**: This public branch requires a commercial license and is available only as an upgrade from either LTS or DEV (or previous HEAD/STABLE). It offers new releases once ready, closely following the tested DEV branch.\n\n- **OMM**: This private branch is managed separately, with some unused components removed and others added. It is generally simplified for easier maintenance and adheres to modern coding standards.\n\nYou can install only **BOA LTS** and then upgrade to **PRO** with a license from [Omega8.cc](https://omega8.cc/licenses).\n\n## **LTS** branch will enter a full code-freeze on December 31, 2025\n\nPlease note that **as of December 31, 2025, the LTS branch will enter a full code-freeze**. No further feature development or regular releases are planned for 2026. A possible re-evaluation may occur in 2027, but this should not be assumed.\n\nAfter the freeze, **only critical functional fixes within BOA itself will be considered**. 
There will be **no updates** for underlying components such as PHP, Percona, Nginx, Valkey, OpenSSL, OpenSSH, or related system libraries, although the Barracuda script will still be able to upgrade your system with newer Devuan packages.\n\nSeveral of the upcoming and most impactful features are planned **exclusively for BOA PRO**, as outlined in the [ROADMAP](https://github.com/omega8cc/boa/tree/5.x-dev/ROADMAP.md).\n\nFor continued access to new features, ongoing improvements, and a future-proof stack, [BOA PRO](https://omega8.cc/licenses) is the recommended upgrade path.\n\n## Practical Differences Between **LTS** and **PRO**\n\nOver time, **PRO** will move ahead of **LTS**, as its name suggests.\n\nThe \`BOA-5.9.1\` release is the last parallel release to include all features developed for **PRO**, so both **PRO** and **LTS** users will enjoy the same improvements, bug fixes, and new features.\n\nIn the future, new features will be regularly added to **PRO**, while **LTS** will receive only security updates and critical fixes for BOA itself. Some new features may still find their way to **LTS**, but only as exceptions.\n\n**PRO** will be available in three main variants, and while all **BOA PRO** licenses will grant access to the same **BOA PRO** branch and features, they will differ in terms of available support levels.\n\n### **PRO** with **Basic Support**\n\nThis license is designed for **BOA** users familiar with managing and monitoring their own systems who don't need extended support, monitoring, or assistance in managing their **BOA** installation and updates. 
Our support is limited to the Issue Queue on GitHub without any kind of SLA or Best Effort guarantee.\n\nIdeal for: Small businesses or developers who need basic support and can handle issues independently or with community help.\n\n### **PRO** with **Advanced Support**\n\nThis license is designed for **BOA** users who are familiar with managing their own server but need assistance in handling their custom needs or fixing individual problems privately via our helpdesk at [Ægir Helpdesk](https://aegir.happyfox.com), without posting details on GitHub. There is no SLA guarantee, only a Best Effort guarantee. System local and remote uptime monitoring with Site24x7 is included.\n\nIdeal for: Medium to large businesses needing reliable support during business hours with quick response times for critical issues.\n\n### **PRO** with **Hands-Off Experience**\n\nThis license is for **BOA** users who prefer to delegate all the work needed to maintain their **BOA** server, including regular upgrades (both **BOA** and major OS upgrades), active monitoring, and responding to DoS incidents. It comes with a fully managed **BOA PRO** installation you can use without worrying about anything else, with our general SLA guarantee applied: [Omega8.cc SLA](https://omega8.cc/sla). System local and remote uptime monitoring with Site24x7 is included.\n\nIdeal for: Enterprises requiring comprehensive, around-the-clock support with quick response times for all issues.\n\nYou can obtain a **BOA PRO** license from [Omega8.cc](https://omega8.cc/licenses).\n\n## Upcoming **PRO-Only** Features\n\nCertain planned features are likely to be exclusive to **BOA PRO**. If these features are added to other **BOA** versions, it will be with a significant delay.\n\nCheck out the details in [ROADMAP](https://github.com/omega8cc/boa/tree/5.x-dev/ROADMAP.md)\n"
  },
  {
    "path": "HTTP3.md",
    "content": "# Strap in, your sites are getting an F1 engine\n\nWe’re rolling out a meaningful upgrade across BOA/Omega8.cc nodes: HTTP/3 and KTLS support.\n\nIf you run Drupal sites that should feel fast and responsive (and stay that way during spikes), this is genuinely good news. It’s not a “new feature in the control panel” kind of update — it’s the kind that improves the *experience* visitors have without you touching a single line of code.\n\n\n## Why this is a big deal\n\nNearly everything on the web is encrypted now (HTTPS). That’s great for security, but it also means every visit involves extra work just to establish and maintain that secure connection.\n\nThe upgrades are all about making secure browsing feel lighter and faster.\n\n### What visitors should notice\n\n* Faster “start” to loading (especially for first-time or returning visits after a while)\n* Smoother browsing on mobile and Wi-Fi (fewer annoying stalls when the connection quality changes)\n* More consistent performance during traffic spikes (less overhead spent on transport/encryption, more resources left for actual Drupal work)\n\n### Why it matters for *your server* too\n\n* More efficient HTTPS handling can mean lower CPU pressure in the busiest parts of the request path.\n* That translates into more headroom for PHP-FPM, caches, and database work when it really counts.\n\nIn short: faster for users, more efficient for servers, and no application changes required.\n\n<img width=\"1050\" height=\"1218\" alt=\"screenshot 2026-02-05 at 10 35 06\" src=\"https://github.com/user-attachments/assets/4e179e7b-97ac-4808-ace6-b0f29e6b66a8\" />\n\n## What’s being enabled (friendly version)\n\n### HTTP/3\n\nHTTP/3 is the newest “dialect” browsers can use to talk to your site. 
It’s designed for today’s reality: phones, roaming, Wi-Fi, variable quality connections.\n\nWhen a visitor’s browser supports HTTP/3, it can:\n\n* connect more quickly,\n* recover better from “internet wobble,”\n* and keep page loads feeling responsive even when conditions aren’t perfect.\n\nAnd the best part: browsers choose it automatically. Nobody needs to configure anything on their device.\n\n### KTLS\n\nKTLS helps the operating system handle part of the secure connection workload more efficiently. The result is simpler to describe than the internals:\n\n* less overhead\n* more throughput\n* better stability under load\n\nIt’s one of those improvements that quietly makes a platform feel “stronger” and less stressed during peak times.\n\n\n## What you need to do (this is the important part)\n\nWe’ll make sure the server side is ready after the upgrades — but to actually *apply* the new capabilities cleanly across your BOA/Ægir-managed stack, you need to run Verify so configurations are regenerated with the updated features.\n\n### After the maintenance completes:\n\n1. Log in to your Ægir Control Panel\n2. Go to Platforms → run Verify on all platforms you use to host sites\n3. 
Go to Sites → run Verify on your hosted sites\n\n   * If you host many sites: use the bulk actions on the Sites list so you don’t have to click site-by-site.\n\nThis ensures your platform and site configs are refreshed and the upgraded stack is applied consistently.\n\n\n## “Will anything break?” (No — it’s designed not to)\n\n* If a browser supports HTTP/3, it will use it automatically.\n* If it doesn’t, it will quietly fall back to HTTP/2 or HTTP/1.1.\n* Your Drupal code stays the same.\n* Your visitors don’t have to do anything.\n\n\n## Want more detail?\n\nWe’ll publish our own concise, friendly explainer that goes deeper into:\n\n* what HTTP/3 changes in real-life browsing,\n* why KTLS improves HTTPS efficiency,\n* and how to confirm your browser is using HTTP/3.\n\nMore details: *(link coming soon — we’ll share it as soon as it’s live)*\n\n\n## Quick checklist (save this)\n\nAfter the system upgrade:\n\n1. Ægir → Platforms → Verify (all platforms)\n2. Ægir → Sites → Verify (use bulk actions if you have many sites)\n\nThat’s it. Once Verify is done, you’re ready to benefit automatically — and your visitors get a faster, smoother ride. Enjoy!\n\n\n"
  },
  {
    "path": "OCTOPUS.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport SHELL=/bin/bash\n\n###\n### Default values for main Octopus instance variables\n###\n\n_USER=o1\n_MY_OCTO_EMAIL=\"noc@omega8.cc\"\n_CLIENT_EMAIL=\"notify@omega8.cc\"\n_CLIENT_OPTION=POWER\n_CLIENT_SUBSCR=M\n_CLIENT_CORES=1\n\n###\n### Required by AegirSetupA script, running in\n### the same env, to avoid chicken/egg race.\n###\nexport _USER=\"${_USER}\"\n\n###\n### Drush and Redis Variables\n###\n\nexport _DRUSH_VERSION=8.5.0.5\nexport _REDIS_C_VERSION=com-19-04-2021\nexport _REDIS_L_VERSION=7.x-3.19.1\nexport _REDIS_N_VERSION=com-19-04-2021\nexport _REDIS_T_VERSION=8.x-1.8.2\nexport _REDIS_E_VERSION=8.x-1.11.2\n\n###\n### Drupal Core Versions\n###\n\nexport _SMALLCORE10_0_V=10.0.11\nexport _SMALLCORE10_1_V=10.1.8\nexport _SMALLCORE10_2_V=10.2.12\nexport _SMALLCORE10_3_V=10.3.14\nexport _SMALLCORE10_4_V=10.4.9\nexport _SMALLCORE10_5_V=10.5.8\nexport _SMALLCORE10_6_V=10.6.3\nexport _SMALLCORE11_1_V=11.1.9\nexport _SMALLCORE11_2_V=11.2.10\nexport _SMALLCORE11_3_V=11.3.3\nexport 
_SMALLCORE6_V=6.60.1\nexport _SMALLCORE7_V=7.105.1\nexport _SMALLCORE9_V=9.5.11\n\n###\n### Drupal Core Variables\n###\n\nexport _DRUPAL10_0=\"drupal-${_SMALLCORE10_0_V}\"\nexport _DRUPAL10_1=\"drupal-${_SMALLCORE10_1_V}\"\nexport _DRUPAL10_2=\"drupal-${_SMALLCORE10_2_V}\"\nexport _DRUPAL10_3=\"drupal-${_SMALLCORE10_3_V}\"\nexport _DRUPAL10_4=\"drupal-${_SMALLCORE10_4_V}\"\nexport _DRUPAL10_5=\"drupal-${_SMALLCORE10_5_V}\"\nexport _DRUPAL10_6=\"drupal-${_SMALLCORE10_6_V}\"\nexport _DRUPAL11_1=\"drupal-${_SMALLCORE11_1_V}\"\nexport _DRUPAL11_2=\"drupal-${_SMALLCORE11_2_V}\"\nexport _DRUPAL11_3=\"drupal-${_SMALLCORE11_3_V}\"\nexport _DRUPAL6=\"pressflow-${_SMALLCORE6_V}\"\nexport _DRUPAL7=\"drupal-${_SMALLCORE7_V}\"\nexport _DRUPAL9=\"drupal-${_SMALLCORE9_V}\"\n\nexport _DRUPAL6_D=\"${_DRUPAL6}-dev\"\nexport _DRUPAL6_P=\"${_DRUPAL6}-prod\"\nexport _DRUPAL6_S=\"${_DRUPAL6}-stage\"\n\nexport _DRUPAL7_D=\"${_DRUPAL7}-dev\"\nexport _DRUPAL7_P=\"${_DRUPAL7}-prod\"\nexport _DRUPAL7_S=\"${_DRUPAL7}-stage\"\n\nexport _DRUPAL9_D=\"${_DRUPAL9}-dev\"\nexport _DRUPAL9_P=\"${_DRUPAL9}-prod\"\nexport _DRUPAL9_S=\"${_DRUPAL9}-stage\"\n\nexport _DRUPAL10_0_D=\"${_DRUPAL10_0}-dev\"\nexport _DRUPAL10_0_P=\"${_DRUPAL10_0}-prod\"\nexport _DRUPAL10_0_S=\"${_DRUPAL10_0}-stage\"\n\nexport _DRUPAL10_1_D=\"${_DRUPAL10_1}-dev\"\nexport _DRUPAL10_1_P=\"${_DRUPAL10_1}-prod\"\nexport _DRUPAL10_1_S=\"${_DRUPAL10_1}-stage\"\n\nexport _DRUPAL10_2_D=\"${_DRUPAL10_2}-dev\"\nexport _DRUPAL10_2_P=\"${_DRUPAL10_2}-prod\"\nexport _DRUPAL10_2_S=\"${_DRUPAL10_2}-stage\"\n\nexport _DRUPAL10_3_D=\"${_DRUPAL10_3}-dev\"\nexport _DRUPAL10_3_P=\"${_DRUPAL10_3}-prod\"\nexport _DRUPAL10_3_S=\"${_DRUPAL10_3}-stage\"\n\nexport _DRUPAL10_4_D=\"${_DRUPAL10_4}-dev\"\nexport _DRUPAL10_4_P=\"${_DRUPAL10_4}-prod\"\nexport _DRUPAL10_4_S=\"${_DRUPAL10_4}-stage\"\n\nexport _DRUPAL10_5_D=\"${_DRUPAL10_5}-dev\"\nexport _DRUPAL10_5_P=\"${_DRUPAL10_5}-prod\"\nexport _DRUPAL10_5_S=\"${_DRUPAL10_5}-stage\"\n\nexport 
_DRUPAL10_6_D=\"${_DRUPAL10_6}-dev\"\nexport _DRUPAL10_6_P=\"${_DRUPAL10_6}-prod\"\nexport _DRUPAL10_6_S=\"${_DRUPAL10_6}-stage\"\n\nexport _DRUPAL11_1_D=\"${_DRUPAL11_1}-dev\"\nexport _DRUPAL11_1_P=\"${_DRUPAL11_1}-prod\"\nexport _DRUPAL11_1_S=\"${_DRUPAL11_1}-stage\"\n\nexport _DRUPAL11_2_D=\"${_DRUPAL11_2}-dev\"\nexport _DRUPAL11_2_P=\"${_DRUPAL11_2}-prod\"\nexport _DRUPAL11_2_S=\"${_DRUPAL11_2}-stage\"\n\nexport _DRUPAL11_3_D=\"${_DRUPAL11_3}-dev\"\nexport _DRUPAL11_3_P=\"${_DRUPAL11_3}-prod\"\nexport _DRUPAL11_3_S=\"${_DRUPAL11_3}-stage\"\n\nexport _SPINNER=NO\nexport _T_BUILD=SRC\nexport _USRG=users\nexport _WEBG=www-data\n\nif [ -n \"${STY+x}\" ]; then\n  export _SPINNER=NO\nfi\n\nexport _F_TIME=\"$(date)\"\n\n\n###\n### Instance specific variables\n###\nexport _WEB=\"${_USER}.web\"\nexport _DOMAIN=\"${_USER}.$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\nexport _ROOT=\"/data/disk/${_USER}\"\nexport _THIS_DB_PORT=3306\nexport _octCnf=\"/root/.${_USER}.octopus.cnf\"\nexport _octInc=\"${_ROOT}/config/includes\"\nexport _octTpl=\"${_ROOT}/.drush/sys/provision/http/Provision/Config/Nginx\"\nexport _octSetTpl=\"${_ROOT}/.drush/sys/provision/Provision/Config/Drupal\"\n\n\n###\n### Helper variables\n###\nexport _bldPth=\"/opt/tmp/boa\"\nexport _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\nexport _wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\nexport _filIncO=\"octopus.sh.cnf\"\nexport _gCb=\"git clone --branch\"\nexport _gitHub=\"https://github.com/omega8cc\"\nexport _gitLab=\"https://gitlab.com/omega8cc\"\nexport _libFnc=\"${_bldPth}/lib/functions\"\nexport _tocIncO=\"${_filIncO}.${_USER}\"\nexport _vBs=\"/var/backups\"\n\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n\n###\n### Clean pid files on exit\n###\n_clean_pid_exit() {\n  if 
[ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.octopus.sh.exit.exceptions.log\n    [ -e \"/opt/tmp/boa\" ] && rm -rf /opt/tmp/*\n  fi\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  service cron start &> /dev/null\n  exit 1\n}\n\n\n###\n### Panic on missing include\n###\n_panic_exit() {\n  echo\n  echo \" EXIT: Required lib file not available?\"\n  echo \" EXIT: $1\"\n  echo \" EXIT: Cannot continue\"\n  echo \" EXIT: Bye (0)\"\n  echo\n  _clean_pid_exit _panic_exit_a\n}\n\n\n###\n### Include default settings and basic functions\n###\nif [ -e \"${_vBs}/${_tocIncO}\" ]; then\n  source \"${_vBs}/${_tocIncO}\"\n  _tInc=\"${_vBs}/${_tocIncO}\"\nelif [ -e \"${_vBs}/${_filIncO}\" ]; then\n  source \"${_vBs}/${_filIncO}\"\n  _tInc=\"${_vBs}/${_filIncO}\"\nelse\n  _panic_exit \"${_tInc}\"\nfi\n\n\n###\n### Download helpers and libs\n###\nif [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n  _DB_SERVER=Percona\nelse\n  _DB_SERVER=Percona\nfi\nif [ \"$(boa info | grep -c ${_DB_SERVER})\" -lt 3 ] || [ ! -e \"/usr/sbin/csf\" ]; then\n  if [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! 
-e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    _download_helpers_libs\n  fi\nelse\n  _download_helpers_libs\nfi\n\n\n###\n### Include shared functions\n###\n_FL=\"helper dns satellite\"\nfor f in ${_FL}; do\n  [ -r \"${_libFnc}/${f}.sh.inc\" ] || _panic_exit \"${f}\"\n  source \"${_libFnc}/${f}.sh.inc\"\ndone\n\n\n###\n### Welcome msg\n###\necho \" \"\n_msg \"Skynet Agent v.${_X_VERSION} on $(dmidecode -s system-manufacturer 2>&1) welcomes you aboard!\"\necho \" \"\nsleep 3\n\n\n###\n### Turn Off AppArmor while running octopus\n###\n_turn_off_apparmor_in_octopus\n\n\n###\n### Unlock sendmail for allow-snail group\n###\n_unlock_sendmail_for_snail\n\n\n###\n### Switch to dash while running octopus\n###\n_switch_to_dash_in_octopus\n\n\n###\n### More local default variables\n###\n_LASTNUM=001\n_LAST_HMR=001\n_LAST_ALL=001\n_DISTRO=001\n_HM_DISTRO=001\n_ALL_DISTRO=001\n_STATUS=INIT\n\n\n###\n### Misc checks\n###\n_satellite_check_php_compatibility\n_satellite_check_octopus_vs_barracuda_ver\n_satellite_if_head_github_connection_test\n_satellite_if_sql_exception_test\n_satellite_if_running_as_root_octopus\n_satellite_check_sanitize_user_name\n_satellite_if_localhost_mode_magic\n_satellite_check_sanitize_domain_name\n_satellite_detect_vm_family\n_check_git_repos\n\n\n###\n### Main procedures\n###\n_satellite_cnf\n_satellite_if_init_or_upgrade\n_satellite_if_major_upgrade\n_satellite_if_check_dns\n_satellite_checkpoint\n_satellite_pre_cleanup\n_satellite_make\n_satellite_post_cleanup\nexit 0\n\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "README.md",
    "content": "# Welcome to BOA!\n\nBOA stands for Barracuda, Octopus, and Ægir—a high-performance LEMP stack supporting Drupal from Pressflow 6 to the latest Drupal 11, as well as Backdrop CMS and Grav CMS (soon).\n\n## Strap in, your sites are getting an F1 engine\n\nWe’re rolling out a meaningful upgrade across BOA/Omega8.cc nodes: HTTP/3 and KTLS support. If you run Drupal sites that should feel fast and responsive (and stay that way during spikes), this is genuinely good news. Why is this a big deal? What should visitors notice? Why does it matter for *your server* too? [**Read the full story!**](https://github.com/omega8cc/boa/tree/5.x-dev/HTTP3.md)\n\n## 30 Years of Heritage\n\nWe are unique within the hosting industry for many important reasons. Our 15 years of Ægir-based hosting, plus earlier experience with Adgrafix (the first company to offer a control panel for website management in 1995), have helped shape what makes us different today. We take **Open Source seriously** — it’s not a buzzword for us. It’s about freedom from corporate control. Here's a short look back at our 15-year Ægir journey and 19 years with Drupal. [**Read the full story!**](https://github.com/omega8cc/boa/tree/5.x-dev/DIFFERENT30Y.md)\n\n## What is Ægir?\n\nÆgir, named after the Norse god of the sea, is an open-source hosting system for managing multiple Drupal sites. The name Ægir was chosen to reflect the relationship between Drupal's water drop logo, symbolizing individual sites, and Ægir's role as the god of the ocean, representing the hosting of many Drupal sites together. 
It automates tasks such as site installation, upgrades, and maintenance, making your life easier.\n\n**Announcement from Omega8.cc team**: [**The Future of Ægir 3 is Bryght!**](https://github.com/omega8cc/boa/tree/5.x-dev/ANNOUNCEMENT.md)\n\n### Key Features of Ægir:\n\n- **Site Management**: Manage multiple Drupal sites from a single interface.\n- **Automation**: Automate code deployment, database updates, and site backups.\n- **Scalability**: Easily scale your Drupal hosting infrastructure.\n- **Multitenancy**: Share a codebase across multiple sites with separate databases.\n- **Open-Source**: Customize and extend Ægir to fit your needs.\n- **Integration with Drush**: Use powerful command-line tools for site administration.\n\n<img width=\"1215\" height=\"1264\" alt=\"Ægir-BOA\" src=\"https://github.com/user-attachments/assets/b2417cc7-2fb8-422c-96f8-71d12c1c2fd7\" />\n\n## Why Barracuda?\n\nBarracuda is a specially tuned hosting environment for Ægir, designed to be lightning fast and agile, just like the barracuda fish known for its incredible speed and agility in the ocean.\n\n## Why Octopus?\n\nOctopus is a smart system designed to manage multiple Ægir instances within Barracuda. Just like the sea creature with eight limbs, Octopus allows you to create and manage many separate but connected Ægir instances, showcasing its intelligence and adaptability in efficiently handling complex hosting environments.\n\n## Dual License\n\n**BOA** remains a **Free/Libre Open Source Project**. 
While all of the **BOA** code is **Free/Libre Open Source**, only the **BOA LTS** branch and **Ægir** are available without any cost or restrictions.\n\nCheck out the details in [**DUALLICENSE.md**](https://github.com/omega8cc/boa/tree/5.x-dev/DUALLICENSE.md).\n\n## BOA Priorities\n\n- **High Performance**: Ensure your sites run fast.\n- **Security**: Keep your sites and system secure.\n- **Automation**: Minimize daily maintenance with automated system and OS upgrades.\n\n## Multi-Ægir Hosting\n\nLeverage one Ægir Master Instance and multiple Satellite Instances. Use Satellite Instances to host your sites, as the Master holds the central Nginx configuration. Note: The 'Master' and 'Satellite' names in the Barracuda/Octopus context are not related to the multi-server Ægir features but to the multi-instance environment with virtual chroot/jail for each Ægir Satellite instance.\n\n## Installation Scripts\n\n- **BOA**: Runs Barracuda and Octopus to install the complete BOA system.\n- **BARRACUDA**: Upgrades the system and the Ægir Master Instance.\n- **OCTOPUS**: Updates Ægir Instances + Drupal platforms.\n\n## Bug Reporting\n\nFollow the guidelines in [**docs/CONTRIBUTING.md**](https://github.com/omega8cc/boa/tree/5.x-dev/docs/CONTRIBUTING.md).\n\n## Requirements\n\n- Basic sysadmin skills and experience.\n- Willingness to accept BOA PI (paranoid idiosyncrasies).\n- Minimum 4 GB RAM and 2 CPUs (8 GB RAM and 4+ CPUs with Solr).\n- SSH (ed25519) keys for root are required by newer OpenSSH versions used in BOA.\n- Wget must be installed.\n- Open outgoing TCP ports: 25, 53, 80, 443.\n- Locales with UTF-8 support; otherwise en_US.UTF-8 (the default) is forced.\n\n## Provided Services and Features\n\nCheck out the details in [**docs/PROVIDES.md**](https://github.com/omega8cc/boa/tree/5.x-dev/docs/PROVIDES.md).\n\n## Supported Virtualization Systems\n\n- Linux Containers (LXC)\n- Linux KVM guest\n- Microsoft Hyper-V\n- OpenVZ Containers\n- Parallels guest\n- Red Hat KVM guest\n- 
VirtualBox guest\n- VMware ESXi guest (but excluding vCloud Air)\n- VServer guest\n- Xen guest\n- Xen guest fully virtualized (HVM)\n- Xen paravirtualized guest domain\n\n## Supported Operating Systems\n\n<img width=\"906\" height=\"672\" alt=\"boa-on-excalibur\" src=\"https://github.com/user-attachments/assets/cdf4f72b-6d7d-4712-895b-46f612be333f\" />\n\n### Devuan (recommended)\n\n- Excalibur (supported, but only with Percona 8.4)\n- Daedalus (default, with Percona 5.7, 8.0 or 8.4)\n- Chimaera (supported but upgrade recommended)\n- Beowulf (supported for upgrades)\n\n### Debian (for migration)\n\n- Trixie (supported only as a base for migration to Devuan)\n- Bookworm (supported only as a base for migration to Devuan)\n- Bullseye (supported only as a base for migration to Devuan)\n- Buster (supported only as a base for migration to Devuan)\n- Stretch (deprecated but still works, please upgrade to Chimaera)\n- Jessie (deprecated but still works, please upgrade to Chimaera)\n\n## Project Roadmap\n\nCheck out the details in [**ROADMAP.md**](https://github.com/omega8cc/boa/tree/5.x-dev/ROADMAP.md)\n\n## Documentation and Templates\n\n- Installation Instructions: [docs/INSTALL.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/INSTALL.md)\n- Upgrade Instructions: [docs/UPGRADE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/UPGRADE.md)\n- Major-Upgrade Instructions: [docs/MAJORUPGRADE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/MAJORUPGRADE.md)\n- Importance of Keeping SKYNET Enabled in BOA: [docs/SKYNET.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SKYNET.md)\n- INI configuration per site: [docs/ini/site/INI.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/ini/site/INI.md)\n- INI configuration per platform: [docs/ini/platform/INI.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/ini/platform/INI.md)\n- Configuration Templates: [docs/cnf/barracuda.cnf](https://github.com/omega8cc/boa/tree/5.x-dev/docs/cnf/barracuda.cnf), 
[docs/cnf/octopus.cnf](https://github.com/omega8cc/boa/tree/5.x-dev/docs/cnf/octopus.cnf)\n- System Control Files Index: [docs/ctrl/system.ctrl](https://github.com/omega8cc/boa/tree/5.x-dev/docs/ctrl/system.ctrl)\n- How we build newer codebases for testing: [docs/BUILDTESTS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BUILDTESTS.md)\n\n## Documentation for BOA PRO\n\n- New Backups for BOA SysAdmin [docs/BACKUP_ROOT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_ROOT.md)\n- New Backups for Octopus Lshell User [docs/BACKUP_USER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md)\n- New Backups Retention Policy Configuration [docs/BACKUP_RETENTION.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_RETENTION.md)\n- Supported Regions and Bucket Creation Guidelines [docs/BACKUP_REGIONS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_REGIONS.md)\n\n## Additional Documentation\n\n- Composer How-To: [docs/COMPOSER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/COMPOSER.md)\n- Dev-Mode Notes: [docs/DEVELOPMENT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/DEVELOPMENT.md)\n- Drupal Contrib Modules: [docs/MODULES.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/MODULES.md)\n- Extra Comments: [docs/CAVEATS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/CAVEATS.md)\n- FAQ: [docs/FAQ.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/FAQ.md)\n- Fast DB Operations: [docs/MYQUICK.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/MYQUICK.md)\n- Fast Migrate/Clone: [docs/FASTTRACK.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/FASTTRACK.md)\n- Included Platforms: [docs/PLATFORMS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/PLATFORMS.md)\n- Let’s Encrypt: [docs/SSL.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SSL.md)\n- Live Disk Resize How-To: [docs/DISK_RESIZE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/DISK_RESIZE.md)\n- Migration (Octopus Instance): 
[docs/MIGRATE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/MIGRATE.md)\n- Migration (Single Site): [docs/REMOTE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/REMOTE.md)\n- New Relic How-To: [docs/NEWRELIC.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/NEWRELIC.md)\n- Nginx Custom Rewrites: [docs/REWRITES.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/REWRITES.md)\n- PHP-CLI and Drush Configuration How-To: [docs/DRUSH-CLI.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/DRUSH-CLI.md)\n- PHP-FPM Configuration How-To: [docs/PHP-FPM.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/PHP-FPM.md)\n- Remote S3 Backups: [docs/BACKUPS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUPS.md)\n- Ruby Gems and NPM: [docs/GEM.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/GEM.md)\n- Security Settings: [docs/SECURITY.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SECURITY.md)\n- Self-Upgrade How-To: [docs/SELFUPGRADE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SELFUPGRADE.md)\n- SMTP SSL Error Debugging: [docs/SMTP_SSL_DEBUG.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SMTP_SSL_DEBUG.md)\n- Solr and Jetty How-To: [docs/SOLR.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SOLR.md)\n- SSH Encryption: [docs/BLOWFISH.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BLOWFISH.md)\n- VServer Cluster: [docs/CLUSTER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/CLUSTER.md) (deprecated)\n\n## Useful Links\n\n- BOA User Handbook (legacy): [**Learn BOA**](https://learn.omega8.cc/library/good-to-know)\n- Ægir Docs (legacy): [**Ægir Project**](https://docs.aegirproject.org)\n\n## Maintainers\n\nBOA is maintained by [**Omega8.cc**](https://omega8.cc/about).\n\n## Credits\n\nThanks to the Ægir Project founders and developers. 
[**Ægir Team**](https://docs.aegirproject.org/community/core-team/).\n\n## Support\n\nSupport BOA development by purchasing a commercial license or using Omega8.cc hosted services. Check out [**Omega8.cc**](https://omega8.cc/compare) for more info.\n\nThank you for supporting BOA!\n"
  },
  {
    "path": "ROADMAP.md",
    "content": "# BOA Roadmap & Progress\n\nDocumenting ongoing, upcoming and completed tasks. Some tasks are relatively simple, while others are major undertakings that take weeks or months. Therefore, we are working on many things simultaneously.\n\nThis document highlights the most complex or important tasks we are working on or planning to undertake. Routine tasks such as debugging, fixing issues, and implementing small improvements are usually documented in the commit history and changelog, which are updated with each new BOA release.\n\nSeveral of the upcoming and most impactful features are planned **exclusively for BOA PRO**, as outlined below.\n\nPlease also note that **as of December 31, 2025, the LTS branch will enter a full code-freeze**. No further feature development or regular releases are planned for 2026. A possible re-evaluation may occur in 2027, but this should not be assumed.\n\nAfter the freeze, **only critical functional fixes within BOA itself will be considered**. 
There will be **no updates** for underlying components such as PHP, Percona, Nginx, Valkey, OpenSSL, OpenSSH, or related system libraries.\n\nFor continued access to new features, ongoing improvements, and a future-proof stack, **BOA PRO is the recommended upgrade path**.\n\n## IN PROGRESS (PRO only)\n\n- **Import from Classic Ægir**: Extend xboa to import from remote classic Ægir servers using Nginx or Apache (PRO)\n- **Backdrop CMS Support**: Implement Backdrop CMS as a supported platform (PRO)\n- **Grav CMS Support**: Introduce support for Grav CMS (command line only) (PRO)\n- **Optional AppArmor Support**: Enhanced security and account privilege separation (PRO)\n- **Tar Pipelines on Clone**: Use Tar Pipelines to create separate symlinked copies during site clone tasks (PRO)\n- **Ægir Admin Interface**: Transition the Ægir admin interface to Backdrop CMS (PRO)\n- **BO4D**: Offer a *BOA For Docker* version tailored for local development (PRO)\n- **DDEV Integration**: Add support for BOA-compatible configurations within DDEV (PRO)\n- **Documentation Consolidation**: Convert legacy and built-in docs into a unified Grav CMS site. 
(PRO)\n\n## RELEASED IN BOA PRO ONLY\n\n- **Amazon S3 Alternatives**: Integrate support for eight (8) AWS S3 alternatives in `multiback` and `mybackup` (PRO)\n\n## MAJOR NEW FEATURES RELEASED IN BOA LTS/PRO\n\n- **HTTP/3 on QUIC with KTLS Magic**: Strap in, your sites are getting an F1 engine (PRO/LTS)\n- **Drupal 11 with Ægir 3**: They Said It Couldn’t Be Done — We Did It Anyway (PRO/LTS)\n- **Debian Trixie and Devuan Excalibur**: Ensure compatibility for installation and automated upgrades (PRO/LTS)\n- **Debian Bookworm and Devuan Daedalus**: Ensure compatibility for installation and automated upgrades (PRO/LTS)\n- **Percona for MySQL 8.4**: Add support for Percona Server 8.4, the new Percona LTS (PRO/LTS)\n- **Original MySQL 8.4**: Add support for original MySQL Server 8.4 on Trixie/Excalibur (PRO/LTS)\n- **Percona for MySQL 8.0**: Add support for Percona Server 8.0, necessary for Drupal 11 (PRO/LTS)\n- **Super Fast System AutoInit**: Facilitate easy upgrades to the latest Devuan before BOA installation (PRO/LTS)\n- **Use OpenSSL 3 by default**: Maintain compatibility with OpenSSL 1.1.1 for legacy PHP versions (PRO/LTS)\n\n## OTHER NEW FEATURES RELEASED IN BOA LTS/PRO\n\n- **PHP 8.5 Support**: Enhancing performance and supporting twelve PHP versions (PRO/LTS)\n- **PHP 8.4 Support**: Enhancing performance and supporting eleven PHP versions (PRO/LTS)\n- **PHP 8.3 Support**: Required for Drupal 11, enhancing performance and supporting ten PHP versions (PRO/LTS)\n- **Add instant SQL fallback for Valkey/Redis**: Zero downtime during upgrades/restarts/etc\n- **Symlink Site Files**: Automatically symlink all site files to expedite migration tasks and conserve disk space (PRO/LTS)\n- **Solr 9 Support**: Add latest Solr Server 9 as supported via BOA automation (PRO/LTS)\n- **Ruby Gems and Node/NPM Support 3x Faster**: From 15 to 5 minutes, with improved security (PRO/LTS)\n- **Ægir Task for SQL Backup**: Enable classic mysqldump backups for individual site downloads 
(PRO/LTS)\n- **Drush 12/13 in Ægir Tasks**: Dynamically utilize site-local Drush for `updatedb` operations on Drupal 10+ (PRO/LTS)\n- **Documentation Conversion to Markdown**: Update all BOA documentation from legacy TXT to Markdown.\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php56.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php56) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php56/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php56/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php56/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php56.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts the PHP-FPM (php56) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php56/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php56/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php56/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php70.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php70) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php70/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php70/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php70/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php70.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts the PHP-FPM (php70) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php70/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php70/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php70/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php71.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php71) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php71/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php71/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php71/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php71.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts the PHP-FPM (php71) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php71/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php71/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php71/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php72.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php72) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php72/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php72/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php72/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php72.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts the PHP-FPM (php72) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php72/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php72/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php72/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php73.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php73) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php73/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php73/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php73/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php73.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts PHP-FPM (php73) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php73/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signals from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php73/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php73/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php74.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php74) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php74/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php74/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php74/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php74.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts PHP-FPM (php74) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php74/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signals from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php74/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php74/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php80.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php80) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php80/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php80/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php80/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php80.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts PHP-FPM (php80) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php80/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signals from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php80/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php80/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php81.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php81) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php81/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php81/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php81/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php81.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts PHP-FPM (php81) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php81/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signals from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php81/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php81/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php82.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php82) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php82/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php82/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php82/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php82.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts PHP-FPM (php82) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php82/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signals from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php82/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php82/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php83.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php83) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php83/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php83/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php83/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php83.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts the PHP-FPM (php83) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php83/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php83/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php83/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php84.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php84) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php84/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php84/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php84/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php84.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts the PHP-FPM (php84) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php84/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php84/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php84/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php85.bin.php",
    "content": "# AppArmor profile for PHP-CLI\n# This profile restricts PHP-CLI (php85) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php85/bin/php flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by PHP-CLI\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability mknod,\n  capability setgid,\n  capability setuid,\n  capability sys_ptrace,\n  capability sys_resource,\n\n  # Allow PHP-CLI to execute its own binary\n  /opt/php85/bin/php mrix,\n\n  # Allow PHP-CLI to signal/ptrace other processes\n  ptrace (read) peer=/opt/php*/bin/php,\n  signal (send) peer=unconfined,\n  signal (send) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/opt/php*/sbin/php-fpm,\n  ptrace (read) peer=/usr/bin/mysqld_safe,\n  ptrace (read) peer=/usr/bin/redis-server,\n  ptrace (read) peer=/usr/local/sbin/pure-ftpd,\n  ptrace (read) peer=/usr/sbin/nginx,\n  ptrace (read) peer=/usr/sbin/rsyslogd,\n  ptrace (read) peer=/usr/sbin/unbound,\n  ptrace (read) peer=unconfined,\n\n  # Allow PHP-CLI to read required configuration files\n  /data/disk/*/.subversion/ r,\n  /data/disk/*/.subversion/* r,\n  /etc/default/nginx r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/ldap/ldap.conf r,\n  /etc/mailname r,\n  /etc/mysql/conf.d/ r,\n  /etc/mysql/conf.d/* r,\n  /etc/mysql/my.cnf r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/main.cf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  
/etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /etc/subversion/ r,\n  /etc/subversion/* r,\n  /etc/wgetrc r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /usr/local/share/git-core/templates/ r,\n  /usr/local/share/git-core/templates/* r,\n  /usr/local/share/git-core/templates/** r,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /opt/php85/** r,\n\n  # Allow PHP-CLI to read required user/access files\n  /etc/login.defs r,\n  /etc/pam.d/* r,\n  /etc/passwd r,\n  /etc/security/capability.conf r,\n  /etc/security/limits.conf r,\n  /etc/security/limits.d/ r,\n  /etc/security/limits.d/* r,\n  /etc/shadow r,\n  /etc/sudo.conf r,\n  /etc/sudoers r,\n  /etc/sudoers.d/ r,\n  /etc/sudoers.d/* r,\n  /run/sudo/ts/ r,\n  /run/sudo/ts/* r,\n\n  # Allow PHP-CLI to execute some other binaries\n  /usr/bin/symlinks mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/chown mrix,\n  /bin/cp mrix,\n  /bin/dash mrix,\n  /bin/date mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/kmod mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/pidof mrix,\n  /bin/rm mrix,\n  /bin/run-parts mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /opt/local/bin/websh mrix,\n  /data/disk/*/**/vendor/drush/drush/drush.php mrix,\n  /etc/init.d/nginx mrix,\n  /sbin/killall5 mrix,\n  /sbin/unix_chkpwd mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/sudo mrix,\n  /usr/bin/svn mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  
/usr/local/bin/fix-drupal-platform-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-platform-permissions.sh mrix,\n  /usr/local/bin/fix-drupal-site-ownership.sh mrix,\n  /usr/local/bin/fix-drupal-site-permissions.sh mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/lock-local-drush-permissions.sh mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/local/libexec/git-core/* mrix,\n  /usr/local/libexec/git-core/** mrix,\n  /usr/sbin/nginx mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-CLI to access some /dev\n  /dev/null rw,\n  /dev/random r,\n  /dev/tty rw,\n  /dev/urandom r,\n\n  # Allow PHP-CLI to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-CLI to read and write its log files\n  /var/log/php/** rw,\n  /var/log/newrelic/php_agent.log rw,\n\n  # Allow PHP-CLI to write to some other log/pid files\n  /run/nginx.pid rw,\n  /var/log/nginx/access.log rw,\n  /var/log/nginx/error.log rw,\n\n  # Allow PHP-CLI to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Allow PHP-CLI to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-CLI to access drush\n  /data/disk/*/tools/drush/ r,\n  /data/disk/*/tools/drush/* mrix,\n  
/data/disk/*/tools/drush/** r,\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n  /var/aegir/drush/ r,\n  /var/aegir/drush/* mrix,\n  /var/aegir/drush/** r,\n\n  # Allow PHP-CLI to access System Default Web Root\n\n  /var/www/** r,\n  owner /var/www/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Master Instance\n\n  owner /var/aegir/.drush/ r,\n  owner /var/aegir/.drush/* rw,\n  owner /var/aegir/.drush/** rw,\n  owner /var/aegir/.tmp/ r,\n  owner /var/aegir/.tmp/* rw,\n  owner /var/aegir/.tmp/** rw,\n  owner /var/aegir/config/ r,\n  owner /var/aegir/config/* rw,\n  owner /var/aegir/config/** rw,\n  owner /var/aegir/host_master-*/ r,\n  owner /var/aegir/host_master-*/* rw,\n  owner /var/aegir/host_master-*/** rw,\n  owner /var/aegir/host_master/ r,\n  owner /var/aegir/host_master/* rw,\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/ r,\n  owner /var/aegir/platforms/* rw,\n  owner /var/aegir/platforms/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Backend on Octopus Instances\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  /data/disk/*/.bashrc r,\n\n  owner /data/disk/*/.cache/**/pack-* l,\n  owner /data/disk/*/static/**/pack-* l,\n\n  owner /data/disk/*/log/ r,\n  owner /data/disk/*/log/* rw,\n  owner /data/disk/*/.*.pass.php r,\n  owner /data/disk/*/.rnd rw,\n\n  owner /data/disk/*/backups/ rwl,\n  owner /data/disk/*/backups/* rwl,\n  owner /data/disk/*/backups/** rwl,\n\n  owner /data/disk/*/backup-exports/ rwl,\n  owner /data/disk/*/backup-exports/* rwl,\n  owner /data/disk/*/backup-exports/** rwl,\n\n  owner /data/disk/*/.config/ rwl,\n  owner /data/disk/*/.config/* rwl,\n  owner /data/disk/*/.config/** rwl,\n\n  owner /data/disk/*/.cache/ rwl,\n  owner /data/disk/*/.cache/* rwl,\n  owner /data/disk/*/.cache/** rwl,\n\n  owner /data/disk/*/.drush/ rwl,\n  owner /data/disk/*/.drush/* rwl,\n  owner /data/disk/*/.drush/** rwl,\n\n  
owner /data/disk/*/.tmp/ rwl,\n  owner /data/disk/*/.tmp/* rwl,\n  owner /data/disk/*/.tmp/** rwl,\n\n  owner /data/disk/*/clients/ rw,\n  owner /data/disk/*/clients/* rw,\n  owner /data/disk/*/clients/** rw,\n\n  owner /data/disk/*/config/ rw,\n  owner /data/disk/*/config/* rw,\n  owner /data/disk/*/config/** rw,\n\n  owner /data/disk/*/tools/le/ rw,\n  owner /data/disk/*/tools/le/* rw,\n  owner /data/disk/*/tools/le/** rw,\n\n  # Allow PHP-CLI to read/write in the Ægir Frontend on Octopus Instances\n\n  owner /data/disk/*/aegir/ rw,\n  owner /data/disk/*/aegir/* rw,\n  owner /data/disk/*/aegir/** rw,\n\n  # Allow PHP-CLI to read/write in the limited shell user home for Drush support\n\n  owner /home/*/.drush/sites/ rw,\n  owner /home/*/.drush/sites/* rw,\n  owner /home/*/.drush/sites/** rw,\n\n  owner /home/*/.drush/cache/ rw,\n  owner /home/*/.drush/cache/* rw,\n  owner /home/*/.drush/cache/** rw,\n\n  owner /home/*/.tmp/ rw,\n  owner /home/*/.tmp/* rw,\n  owner /home/*/.tmp/** rw,\n\n  # Allow PHP-CLI to read/write in the custom web root directories\n\n  /data/disk/*/static/ rw,\n  /data/disk/*/static/* rw,\n  /data/disk/*/static/** rw,\n\n  /data/disk/*/distro/ rw,\n  /data/disk/*/distro/* rw,\n  /data/disk/*/distro/** rw,\n\n  /data/disk/*/platforms/ rw,\n  /data/disk/*/platforms/* rw,\n  /data/disk/*/platforms/** rw,\n\n  # Allow PHP-CLI to read and write in the root tmp\n\n  owner /root/.tmp/ rw,\n  owner /root/.tmp/* rw,\n  owner /root/.tmp/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/opt.php85.sbin.php-fpm",
    "content": "# AppArmor profile for PHP-FPM\n# This profile restricts the PHP-FPM (php85) to essential operations only.\n\n#include <tunables/global>\n\n/opt/php85/sbin/php-fpm flags=(attach_disconnected) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/php>\n  include <abstractions/mysql>\n\n  # Capabilities needed by PHP-FPM\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability kill,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  # Allow PHP-FPM to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow PHP-FPM to execute its own binary\n  /opt/php85/sbin/php-fpm mrix,\n\n  # Allow PHP-FPM to read its configuration files\n  /data/conf/ r,\n  /data/conf/** r,\n  /etc/ImageMagick-6/log.xml r,\n  /etc/ImageMagick-6/policy.xml r,\n  /etc/ld.so.cache r,\n  /etc/mailname r,\n  /etc/newrelic/upgrade_please.key r,\n  /etc/postfix/dynamicmaps.cf r,\n  /etc/postfix/dynamicmaps.cf.d/ r,\n  /etc/postfix/dynamicmaps.cf.d/* r,\n  /etc/postfix/main.cf r,\n  /home/*/.drush/** r,\n  /opt/etc/fpm/** r,\n  /var/spool/postfix/maildrop/ r,\n  /var/spool/postfix/maildrop/* rw,\n  /opt/php85/** r,\n\n  # Allow PHP-FPM to execute some other binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/advdef mrix,\n  /usr/bin/advpng mrix,\n  /usr/bin/chromium mrix,\n  /usr/bin/convert mrix,\n  /usr/bin/id mrix,\n  /usr/bin/jpegoptim mrix,\n  /usr/bin/jpegtran mrix,\n  /usr/bin/magick mrix,\n  /usr/bin/optipng mrix,\n  /usr/bin/pngcrush mrix,\n  /usr/bin/pngquant mrix,\n  /usr/lib/postfix/sbin/smtpd mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /usr/sbin/postdrop mrix,\n  /usr/sbin/sendmail mrix,\n\n  # Allow PHP-FPM to access some /dev\n  
/dev/null rw,\n  /dev/random r,\n  /dev/tty wr,\n  /dev/urandom r,\n\n  # Allow PHP-FPM to access its run directory\n  /run/** rw,\n\n  # Allow PHP-FPM to use tmp files\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow PHP-FPM to read and write its log files\n  /var/log/newrelic/php_agent.log rw,\n  /var/log/php/** rw,\n\n  # Allow PHP-FPM to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow PHP-FPM to use /dev/shm for temporary storage\n  /dev/shm/ r,\n  /dev/shm/** rw,\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow PHP-FPM to read and write in the custom web root directories\n\n  /var/www/** r,\n\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  /data/all/ r,\n  /data/all/* r,\n  /data/all/** r,\n  /data/conf/ r,\n  /data/conf/* r,\n  /data/conf/** r,\n\n  owner /var/aegir/host_master/** rw,\n  owner /var/aegir/platforms/** rw,\n\n  owner /data/disk/*/aegir/** rw,\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n  owner /var/www/** rw,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** 
mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/sbin.dhclient",
    "content": "# AppArmor profile for DHCP dhclient\n# This profile restricts DHCP dhclient (dhclient) to essential operations only.\n\n#include <tunables/global>\n\n/{,usr/}sbin/dhclient flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n\n  # Capabilities needed by DHCP dhclient\n  capability net_bind_service,\n  capability net_raw,\n  capability dac_override,\n  capability net_admin,\n\n  network packet,\n  network raw,\n\n  @{PROC}/[0-9]*/net/ r,\n  @{PROC}/[0-9]*/net/** r,\n\n  # dhclient wants to update its threads with functional names\n  owner @{PROC}/@{pid}/task/[0-9]*/comm rw,\n\n  /{,usr/}sbin/dhclient mrix,\n  /{,usr/}bin/bash mrix,\n\n  /etc/dhclient.conf r,\n  /etc/dhcp/ r,\n  /etc/dhcp/** r,\n\n  /var/lib/dhcp{,3}/dhclient* lrw,\n  /{,var/}run/dhclient*.pid lrw,\n  /{,var/}run/dhclient*.lease* lrw,\n\n  # NetworkManager\n  /{,var/}run/nm*conf r,\n  /{,var/}run/sendsigs.omit.d/network-manager.dhclient*.pid lrw,\n  /{,var/}run/NetworkManager/dhclient*.pid lrw,\n  /var/lib/NetworkManager/dhclient*.conf lrw,\n  /var/lib/NetworkManager/dhclient*.lease* lrw,\n  signal (receive) peer=/usr/sbin/NetworkManager,\n  ptrace (readby) peer=/usr/sbin/NetworkManager,\n\n  # connman\n  /{,var/}run/connman/dhclient*.pid lrw,\n  /{,var/}run/connman/dhclient*.leases lrw,\n\n  # synce-hal\n  /usr/share/synce-hal/dhclient.conf r,\n\n  # if there is a custom script, let it run unconfined\n  /etc/dhcp/dhclient-script Uxr,\n\n  # The dhclient-script shell script sources other shell scripts rather than\n  # executing them, so we can't just use a separate profile for dhclient-script\n  # with 'Uxr' on the hook scripts. However, for the long-running dhclient3\n  # daemon to run arbitrary code via /sbin/dhclient-script, it would need to be\n  # able to subvert dhclient-script or write to the hooks.d directories. 
As\n  # such, if the dhclient3 daemon is subverted, this effectively limits it to\n  # only being able to run the hooks scripts.\n  /{,usr/}sbin/dhclient-script Uxr,\n\n  # Run the ELF executables under their own unrestricted profiles\n  /usr/lib/NetworkManager/nm-dhcp-client.action Pxrm,\n  /usr/lib/connman/scripts/dhclient-script Pxrm,\n\n  # Support the new executable helper from NetworkManager.\n  /usr/lib/NetworkManager/nm-dhcp-helper Pxrm,\n  signal (receive) peer=/usr/lib/NetworkManager/nm-dhcp-helper,\n}\n\n# Profile for NetworkManager action\n/usr/lib/NetworkManager/nm-dhcp-client.action flags=(complain) {\n  include <abstractions/base>\n  include <abstractions/dbus>\n  /usr/lib/NetworkManager/nm-dhcp-client.action mrix,\n\n  /var/lib/NetworkManager/*lease r,\n  signal (receive) peer=/usr/sbin/NetworkManager,\n  ptrace (readby) peer=/usr/sbin/NetworkManager,\n  network inet dgram,\n  network inet6 dgram,\n}\n\n# Profile for NetworkManager helper\n/usr/lib/NetworkManager/nm-dhcp-helper flags=(complain) {\n  include <abstractions/base>\n  include <abstractions/dbus>\n  /usr/lib/NetworkManager/nm-dhcp-helper mrix,\n\n  /run/NetworkManager/private-dhcp rw,\n  signal (send) peer=/sbin/dhclient,\n\n  /var/lib/NetworkManager/*lease r,\n  signal (receive) peer=/usr/sbin/NetworkManager,\n  ptrace (readby) peer=/usr/sbin/NetworkManager,\n  network inet dgram,\n  network inet6 dgram,\n}\n\n# Profile for connman script\n/usr/lib/connman/scripts/dhclient-script flags=(complain) {\n  include <abstractions/base>\n  include <abstractions/dbus>\n  /usr/lib/connman/scripts/dhclient-script mrix,\n  network inet dgram,\n  network inet6 dgram,\n}\n\n\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.chromium",
    "content": "# File: /etc/apparmor.d/usr.bin.chromium\n\n#include <tunables/global>\n\n/usr/bin/chromium flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/openssl>\n  include <abstractions/fonts>\n\n  # Deny capability sys_ptrace\n  deny capability sys_ptrace,\n\n  # System paths chromium needs to operate\n  /usr/bin/chromium mrix,\n  /usr/lib/chromium/** mrix,\n\n  # Temp usage\n  owner /tmp/** rwk,\n  owner /dev/shm/** rwk,\n\n  # Proc/sys reads\n  /proc/** r,\n  /sys/** r,\n\n  # Devices commonly accessed\n  /dev/null rw,\n  /dev/urandom r,\n  /dev/random r,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Fonts + fontconfig\n  /etc/fonts/** r,\n  /usr/share/fonts/** r,\n  /var/cache/fontconfig/** r,\n\n  # Allow to read and write in the web root directories\n  /var/www/** r,\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n\n  owner /data/disk/*/distro/** rwk,\n  owner /data/disk/*/platforms/** rwk,\n  owner /data/disk/*/static/** rwk,\n  owner /var/www/** rwk,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.freshclam",
    "content": "# AppArmor profile for Freshclam service\n# This profile restricts Freshclam service (freshclam) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/freshclam flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n  include <abstractions/ubuntu-browsers.d/multimedia>\n  include <abstractions/user-tmp>\n\n  # Capabilities needed by Freshclam service\n  capability chown,\n  capability dac_override,\n  capability net_admin,\n  capability net_bind_service,\n  capability setgid,\n  capability setuid,\n\n  network inet stream,\n  network inet dgram,\n  network inet6 stream,\n  network inet6 dgram,\n\n  # Allow execution of necessary shells and the freshclam binary\n  /bin/dash mrix,\n  /bin/bash mrix,\n  /bin/sh mrix,\n  /usr/bin/freshclam mrix,\n\n  # Allow access to /dev\n  /dev/log w,\n  /dev/null rw,\n  /dev/random r,\n  /dev/urandom r,\n\n  # Allow access to /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow access to temporary directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Deny access to samba specific directories\n  deny /{,var/}run/samba/{gencache,unexpected}.tdb mrwlk,\n\n  # Allow read access to ClamAV configuration files\n  /etc/clamav/clamd.conf r,\n  /etc/clamav/freshclam.conf r,\n  /etc/clamav/onerrorexecute.d/* mr,\n  /etc/clamav/onupdateexecute.d/* mr,\n  /etc/clamav/virusevent.d/* mr,\n\n  # Allow read access to SSL libraries\n  /usr/local/ssl3/lib64/libcrypto.so.* mr,\n  /usr/local/ssl3/lib64/libssl.so.* mr,\n  /usr/local/ssl3/openssl.cnf r,\n\n  # Allow access to ClamAV directories and files\n  /var/lib/clamav/ r,\n  /var/lib/clamav/** rwk,\n  /var/log/clamav/* rwk,\n  /{,var/}run/clamav/clamd.ctl rw,\n  /{,var/}run/clamav/freshclam.pid w,\n\n  # Allow reading filesystems information\n  @{PROC}/filesystems r,\n\n  # Allow read/write access to ClamAV user 
directories\n  owner /home/*/.clamtk/db/ r,\n  owner /home/*/.clamtk/db/** rwk,\n  owner /home/*/.klamav/database/ r,\n  owner /home/*/.klamav/database/** rwk,\n  owner @{PROC}/[0-9]*/status r,\n\n  # Deny access to sensitive files and directories\n  deny /etc/shadow* rwlx,\n  deny /etc/passwd* rwlx,\n  deny /root/** rwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.man",
    "content": "# AppArmor profile for Man service\n# This profile restricts Man service (man) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/man flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n\n  # Use a special profile when man calls anything groff-related. We only\n  # include the programs that actually parse input data in a non-trivial\n  # way, not wrappers such as groff and nroff, since the latter would need a\n  # broader profile.\n  /usr/bin/eqn mrCx -> &man_groff,\n  /usr/bin/grap mrCx -> &man_groff,\n  /usr/bin/pic mrCx -> &man_groff,\n  /usr/bin/preconv mrCx -> &man_groff,\n  /usr/bin/refer mrCx -> &man_groff,\n  /usr/bin/tbl mrCx -> &man_groff,\n  /usr/bin/troff mrCx -> &man_groff,\n  /usr/bin/vgrind mrCx -> &man_groff,\n\n  # Similarly, use a special profile when man calls decompressors and other\n  # simple filters.\n  /{,usr/}bin/bzip2 mrCx -> &man_filter,\n  /{,usr/}bin/gzip mrCx -> &man_filter,\n  /usr/bin/col mrCx -> &man_filter,\n  /usr/bin/compress mrCx -> &man_filter,\n  /usr/bin/iconv mrCx -> &man_filter,\n  /usr/bin/lzip.lzip mrCx -> &man_filter,\n  /usr/bin/tr mrCx -> &man_filter,\n  /usr/bin/xz mrCx -> &man_filter,\n\n  # Allow basic filesystem access, subject to DAC\n  /** mrixwlk,\n  unix,\n\n  # Capabilities needed by Man service\n  capability setuid,\n  capability setgid,\n\n  # Ordinary permission checks sometimes involve checking whether the\n  # process has this capability, which can produce audit log messages.\n  # Silence them.\n  deny capability dac_override,\n  deny capability dac_read_search,\n\n  signal peer=@{profile_name},\n  signal peer=/usr/bin/man//&man_groff,\n  signal peer=/usr/bin/man//&man_filter,\n}\n\nprofile man_groff flags=(complain) {\n  include <abstractions/base>\n  include <abstractions/consoles>\n\n  /usr/bin/eqn mrix,\n  /usr/bin/grap mrix,\n  /usr/bin/pic mrix,\n  /usr/bin/preconv mrix,\n  /usr/bin/refer mrix,\n  /usr/bin/tbl mrix,\n  
/usr/bin/troff mrix,\n  /usr/bin/vgrind mrix,\n\n  /etc/groff/** r,\n  /etc/papersize r,\n  /usr/lib/groff/site-tmac/** r,\n  /usr/share/groff/** r,\n\n  /tmp/groff* rw,\n\n  signal peer=/usr/bin/man,\n  signal peer=/usr/bin/man//&man_groff,\n}\n\nprofile man_filter flags=(complain) {\n  include <abstractions/base>\n  include <abstractions/consoles>\n\n  /{,usr/}bin/bzip2 mrix,\n  /{,usr/}bin/gzip mrix,\n  /usr/bin/col mrix,\n  /usr/bin/compress mrix,\n  /usr/bin/iconv mrix,\n  /usr/bin/lzip.lzip mrix,\n  /usr/bin/tr mrix,\n  /usr/bin/xz mrix,\n\n  # Manual pages can be more or less anywhere, especially with \"man -l\", and\n  # there's no harm in allowing wide read access here since the worst it can\n  # do is feed data to the invoking man process.\n  /** r,\n\n  # Allow writing cat pages.\n  /var/cache/man/** rw,\n\n  signal peer=/usr/bin/man,\n  signal peer=/usr/bin/man//&man_filter,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.mysecureshell",
    "content": "# AppArmor profile for MySecureShell\n# This profile restricts MySecureShell (mysecureshell) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/mysecureshell flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/authentication>\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n  include <abstractions/python>\n  include <abstractions/wutmp>\n\n  # Read access to its own config and logs\n  /etc/ssh/sftp_config r,\n  /etc/lshell.conf r,\n  /var/log/lsh/ r,\n  /var/log/lsh/* rw,\n  /opt/php*/lib/php.ini r,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow MySecureShell to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Read/write access to user home\n  /home/*/ r,\n  /home/*/** rw,\n\n  # Read/write access to SSH client files\n  /home/*/.ssh/ r,\n  /home/*/.ssh/** rw,\n\n  # Read-only access to Drush aliases and php.ini files\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n\n  # Drush access\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n\n  # Read-only access to Octopus directories\n  /data/disk/*/.drush/ r,\n  /data/disk/*/.drush/** r,\n  /data/disk/*/backups/ r,\n  /data/disk/*/backups/** r,\n  /data/disk/*/clients/ r,\n  /data/disk/*/clients/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/static/ r,\n  /data/disk/*/static/** r,\n\n  # Allow write access to Octopus user directories and files\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/static/ r,\n  owner 
/data/disk/*/static/** rw,\n  owner /opt/user/npm/*/** rw,\n  owner /opt/user/gems/*/** rw,\n  owner /opt/user/gems/*/bin/** k,\n\n  # Read/write access to Drush cache\n  /home/*/.drush/cache/ r,\n  /home/*/.drush/cache/** rw,\n\n  # Deny access to critical system files\n  deny /etc/shadow* rwlx,\n\n  # Allow read access to user information files\n  /etc/passwd r,\n  /etc/group r,\n  /etc/nsswitch.conf r,\n  /etc/hosts r,\n\n  # Allow read-only access to resolv.conf for DNS resolution\n  /etc/resolv.conf r,\n\n  # Temporary files and directories\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n\n  # Deny execution of any shell or command not explicitly allowed\n  deny /bin/bash x,\n  deny /usr/bin/perl x,\n\n  # Allow execution of necessary binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/mysecureshell mrix,\n  /usr/bin/python* mrix,\n  /usr/local/bin/lshell mrix,\n\n  # Additional binaries allowed in Limited Shell\n  /bin/bzip2 mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/cp mrix,\n  /bin/echo mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/gunzip mrix,\n  /bin/gzip mrix,\n  /bin/ls mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/nano mrix,\n  /bin/ping mrix,\n  /bin/pwd mrix,\n  /bin/rm mrix,\n  /bin/rmdir mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /bin/true mrix,\n  /data/disk/*/tools/drush/drush.php mrix,\n  /opt/local/bin/mybackup mrix,\n  /opt/local/bin/sqlmagic mrix,\n  /opt/php*/bin/php mrix,\n  /usr/bin/diff mrix,\n  /usr/bin/du mrix,\n  /usr/bin/env mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/mysqldump mrix,\n  /usr/bin/node mrix,\n  /usr/bin/openssl mrix,\n  /usr/bin/passwd mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/rsync mrix,\n  /usr/bin/rvim mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/bin/zstd mrix,\n  /usr/lib/node_modules/npm/bin/** mrix,\n  
/usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/gem mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/git-receive-pack mrix,\n  /usr/local/bin/git-upload-archive mrix,\n  /usr/local/bin/git-upload-pack mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/scp mrix,\n  /usr/local/bin/sftp mrix,\n  /usr/local/bin/ssh mrix,\n  /usr/local/bin/ssh-keygen mrix,\n  owner /opt/user/gems/*/** mrix,\n  owner /opt/user/npm/*/** mrix,\n\n  # Deny execution of any other binaries\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.mysql",
    "content": "# AppArmor profile for MySQL client\n# This profile restricts MySQL client (mysql) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/mysql flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n  include <abstractions/user-tmp>\n\n  # Capabilities needed by MySQL client\n  capability net_bind_service,\n  capability setgid,\n  capability setuid,\n\n  # Allow execution of the mysql binary\n  /usr/bin/mysql mrix,\n\n  # Allow execution of the mysqld binary\n  /usr/sbin/mysqld mrix,\n\n  # Allow execution of necessary utilities\n  /bin/** mrix,\n  /usr/bin/** mrix,\n  /usr/sbin/** mrix,\n\n  # Allow reading necessary directories\n  /bin/ r,\n  /usr/bin/ r,\n  /usr/sbin/ r,\n  /etc/inputrc r,\n\n  # Allow MySQL to read its configuration files\n  /etc/mysql/** r,\n  /etc/mysql/conf.d/** r,\n  /etc/mysql/mysql.conf.d/** r,\n\n  # Allow MySQL to access its data directory\n  /var/lib/mysql/ rwk,\n  /var/lib/mysql/** rwk,\n\n  # Allow MySQL to access its run directory\n  /run/mysqld/ r,\n  /run/mysqld/** rw,\n\n  # Allow MySQL to access its tmp directory\n  /tmp/ r,\n  /tmp/** rw,\n\n  # Allow MySQL to read system libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/mysql/plugin/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n  /usr/share/mysql/** r,\n  /usr/share/zoneinfo/** r,\n\n  # Allow MySQL to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow MySQL to use /dev/shm for temporary storage\n  /dev/shm/** rw,\n  /dev/shm/ r,\n\n  # Allow MySQL to read network-related configurations\n  /etc/hosts.allow r,\n  /etc/hosts.deny r,\n  /etc/services r,\n\n  # Disallow execution of binaries from /tmp and /var/tmp\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Deny access 
to various sensitive directories\n  deny /boot/** mrwklx,\n\n  # Deny access to various sensitive files\n  deny /etc/shadow* rwlx,\n  deny /etc/shadow- r,\n  deny /etc/gshadow r,\n  deny /etc/gshadow- r,\n\n  # Allow reading the user's .my.cnf file\n  /root/.my.cnf r,\n  /home/*/.my.cnf r,\n\n  # Allow writing to log files in user's home directory\n  /home/*/.mysql_history rw,\n  /root/.mysql_history rw,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.mysqld_safe",
    "content": "# AppArmor profile for MySQLd starter\n# This profile restricts MySQLd starter (mysqld_safe) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/mysqld_safe flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by MySQLd starter\n  capability dac_override,\n  capability dac_read_search,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n  capability sys_nice,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow MySQLd to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  # Allow mysqld_safe to read its configuration files\n  /etc/mysql/** r,\n  /etc/mysql/conf.d/** r,\n  /etc/mysql/mysql.conf.d/** r,\n  /etc/hosts.deny r,\n  /etc/hosts.allow r,\n\n  # Allow mysqld_safe to access its data directory\n  /var/lib/mysql/ rwk,\n  /var/lib/mysql/** rwk,\n\n  # Allow mysqld_safe to access its run directory\n  /run/mysqld/ r,\n  /run/mysqld/** rw,\n\n  # Allow mysqld_safe to write to its log files\n  /var/log/mysql/ r,\n  /var/log/mysql/** rw,\n\n  # Allow mysqld_safe to access tmp directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow mysqld_safe to read system libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/mysql/plugin/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n  /usr/share/mysql/** r,\n  /usr/share/zoneinfo/** r,\n\n  # Allow mysqld_safe to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow mysqld_safe to use /dev/shm for temporary storage\n  /dev/shm/** rw,\n  /dev/shm/ r,\n\n  # Allow execution of mysqld_safe\n  /usr/bin/mysqld_safe mrix,\n\n  # Allow execution of the mysql binary\n  /usr/bin/mysql mrix,\n\n  # 
Allow execution of the mysqld binary\n  /usr/sbin/mysqld mrix,\n\n  # Allow execution of necessary utilities\n  /bin/** mrix,\n  /usr/bin/** mrix,\n  /usr/sbin/** mrix,\n\n  # Allow reading necessary directories\n  /bin/ r,\n  /usr/bin/ r,\n  /usr/sbin/ r,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.newrelic-daemon",
    "content": "# AppArmor profile for New Relic\n# This profile restricts the New Relic (newrelic-daemon) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/newrelic-daemon flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by New Relic\n  capability net_admin,\n  capability setgid,\n  capability setuid,\n\n  network inet stream,\n  network inet dgram,\n  network inet6 stream,\n  network inet6 dgram,\n\n  # Allow execution of the newrelic-daemon binary\n  /usr/bin/newrelic-daemon mrix,\n\n  # Allow newrelic-daemon to read its configuration files\n  /etc/newrelic/** r,\n\n  # Allow newrelic-daemon to read system libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/lib64/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow newrelic-daemon to access /proc for necessary information\n  /proc/** r,\n\n  # Allow newrelic-daemon to access log files\n  /var/log/newrelic/** rw,\n\n  # Allow newrelic-daemon to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow newrelic-daemon to access run directory\n  /run/newrelic/** rw,\n\n  # Allow newrelic-daemon to access shared memory\n  /dev/shm/** rw,\n  /dev/shm/ r,\n\n  # Disallow execution of binaries from /tmp and /var/tmp\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Deny access to various sensitive directories\n  deny /boot/** mrwklx,\n  deny /opt/** mrwklx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.node",
    "content": "# AppArmor profile for Node/NPM\n# This profile restricts Limited Shell (lshell) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/node flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n\n  # Capability permissions\n  capability ipc_lock,\n  capability sys_resource,\n\n  # Network access\n  network inet,\n\n  # Allow read access to necessary libraries\n  /etc/ssl/openssl.cnf r,\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow reading of environment variables\n  /proc/** r,\n  /sys/** r,\n\n  # Specific file permissions\n  owner /home/*/.npmrc r,\n\n  # Temporary files and directories\n  owner /home/*/.tmp/ r,\n  owner /home/*/.tmp/** rw,\n\n  # Miscellaneous\n  /dev/urandom rw,\n  /dev/null rw,\n  /dev/tty rw,\n\n  # Deny execution of any shell or command not explicitly allowed\n  deny /bin/bash x,\n  deny /bin/dash x,\n  deny /bin/websh x,\n  deny /usr/bin/perl x,\n  deny /usr/bin/python* x,\n  deny /usr/local/bin/ruby x,\n\n  # Deny certain capabilities\n  deny capability sys_chroot,  # Deny changing root\n  deny capability sys_admin,   # Deny various system admin privileges\n  deny capability setuid,      # Deny changing user IDs\n  deny capability setgid,      # Deny changing group IDs\n  deny capability kill,        # Deny sending signals to arbitrary processes\n\n  # Deny execution of binaries from these directories\n  deny /home/*/.tmp/** m,\n  deny /home/*/** m,\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Allow execution of npm etc\n  /usr/bin/node mrix,\n  /usr/bin/npm mrix,\n  /opt/user/npm/*/ r,\n  /opt/user/npm/*/** mrix,\n\n  # Allow to read and write in the custom web root directories\n\n  /data/disk/*/distro/** r,\n  
/data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/platforms/** rw,\n  owner /data/disk/*/static/** rw,\n\n  # Deny access to various sensitive directories and files\n  deny /boot/** mrwklx,\n  deny /root/** mrwklx,\n  deny /etc/shadow* rwlx,\n  deny /etc/passwd* rwlx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.redis-server",
    "content": "# AppArmor profile for Redis server\n# This profile restricts the Redis server (redis-server) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/redis-server flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/user-tmp>\n\n  # Allow Redis to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  # Allow reading necessary kernel parameters\n  /proc/sys/** r,\n  /sys/devices/** r,\n  /sys/kernel/** r,\n\n  # Allow execution of redis-server binary\n  /usr/bin/redis-server mrix,\n\n  # Allow Redis to read its configuration file\n  /etc/redis/redis.conf r,\n\n  # Allow Redis to read and write its data files\n  /var/lib/redis/** rwk,\n\n  # Allow Redis to read and write its log files\n  /var/log/redis/** rw,\n\n  # Allow Redis to open TCP sockets on any address\n  network inet stream,\n\n  # Allow Redis to use syslog\n  /dev/log w,\n  /usr/bin/logger ixr,\n\n  # Allow Redis to read system libraries\n  /lib/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/local/sbin/* mrix,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow Redis to use /run for pid/sock files\n  /run/redis/** rw,\n\n  # Allow Redis to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n\n  owner /proc/*/smaps r,\n  owner /proc/*/stat r,\n  owner /var/lib/redis/ r,\n  owner /var/lib/redis/dump.rdb rw,\n  owner /var/lib/redis/temp-*.rdb rw,\n  owner /var/log/redis/redis-server.log rw,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.bin.valkey-server",
    "content": "# AppArmor profile for Valkey server\n# This profile restricts the Valkey server (valkey-server) to essential operations only.\n\n#include <tunables/global>\n\n/usr/bin/valkey-server flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/user-tmp>\n\n  # Allow Valkey to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  # Allow reading necessary kernel parameters\n  /proc/sys/** r,\n  /sys/devices/** r,\n  /sys/kernel/** r,\n\n  # Allow execution of valkey-server binary\n  /usr/bin/valkey-server mrix,\n\n  # Allow Valkey to read its configuration file\n  /etc/valkey/valkey.conf r,\n\n  # Allow Valkey to read and write its data files\n  /var/lib/valkey/** rwk,\n\n  # Allow Valkey to read and write its log files\n  /var/log/valkey/** rw,\n\n  # Allow Valkey to open TCP sockets on any address\n  network inet stream,\n\n  # Allow Valkey to use syslog\n  /dev/log w,\n  /usr/bin/logger ixr,\n\n  # Allow Valkey to read system libraries\n  /lib/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/local/sbin/* mrix,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow Valkey to use /run for pid/sock files\n  /run/valkey/** rw,\n\n  # Allow Valkey to use tmp files\n  /tmp/ r,\n  /tmp/** rw,\n\n  owner /proc/*/smaps r,\n  owner /proc/*/stat r,\n  owner /var/lib/valkey/ r,\n  owner /var/lib/valkey/dump.rdb rw,\n  owner /var/lib/valkey/temp-*.rdb rw,\n  owner /var/log/valkey/valkey-server.log rw,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.local.bin.lshell",
    "content": "# AppArmor profile for Limited Shell\n# This profile restricts Limited Shell (lshell) to essential operations only.\n\n#include <tunables/global>\n\n/usr/local/bin/lshell flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/authentication>\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n  include <abstractions/python>\n  include <abstractions/wutmp>\n\n  # Read access to its own config and logs\n  /etc/ssh/sftp_config r,\n  /etc/lshell.conf r,\n  /var/log/lsh/ r,\n  /var/log/lsh/* rw,\n  /opt/php*/lib/php.ini r,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /opt/php*/lib/php/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/ioncube/ioncube_loader_lin_*.so mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow Limited Shell to access /proc and /sys for necessary information\n  /proc/ r,\n  /proc/** r,\n  /sys/ r,\n  /sys/** r,\n\n  # Read/write access to user home\n  /home/*/ r,\n  /home/*/** rw,\n\n  # Read/write access to SSH client files\n  /home/*/.ssh/ r,\n  /home/*/.ssh/** rw,\n\n  # Read-only access to Drush aliases and php.ini files\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n\n  # Drush access\n  /opt/tools/drush/** mrix,\n  /usr/local/bin/cv.phar mrix,\n\n  # Read-only access to Octopus directories\n  /data/disk/*/.drush/ r,\n  /data/disk/*/.drush/** r,\n  /data/disk/*/backups/ r,\n  /data/disk/*/backups/** r,\n  /data/disk/*/clients/ r,\n  /data/disk/*/clients/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/static/ r,\n  /data/disk/*/static/** r,\n\n  # Allow write access to Octopus user directories and files\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/static/ r,\n  owner 
/data/disk/*/static/** rw,\n  owner /opt/user/npm/*/** rw,\n  owner /opt/user/gems/*/** rw,\n  owner /opt/user/gems/*/bin/** k,\n\n  # Read/write access to Drush cache\n  /home/*/.drush/cache/ r,\n  /home/*/.drush/cache/** rw,\n\n  # Deny access to critical system files\n  deny /etc/shadow* rwlx,\n\n  # Allow read access to user information files\n  /etc/passwd r,\n  /etc/group r,\n  /etc/nsswitch.conf r,\n  /etc/hosts r,\n\n  # Allow read-only access to resolv.conf for DNS resolution\n  /etc/resolv.conf r,\n\n  # Temporary files and directories\n  /home/*/.tmp/ r,\n  /home/*/.tmp/** rw,\n\n  # Deny execution of any shell or command not explicitly allowed\n  deny /bin/bash x,\n  deny /usr/bin/perl x,\n\n  # Allow execution of necessary binaries\n  /bin/dash mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/mysecureshell mrix,\n  /usr/bin/python* mrix,\n  /usr/local/bin/lshell mrix,\n\n  # Additional binaries allowed in Limited Shell\n  /bin/bzip2 mrix,\n  /bin/cat mrix,\n  /bin/chmod mrix,\n  /bin/cp mrix,\n  /bin/echo mrix,\n  /bin/egrep mrix,\n  /bin/grep mrix,\n  /bin/gunzip mrix,\n  /bin/gzip mrix,\n  /bin/ls mrix,\n  /bin/mkdir mrix,\n  /bin/mv mrix,\n  /bin/nano mrix,\n  /bin/ping mrix,\n  /bin/pwd mrix,\n  /bin/rm mrix,\n  /bin/rmdir mrix,\n  /bin/sed mrix,\n  /bin/stty mrix,\n  /bin/tar mrix,\n  /bin/touch mrix,\n  /bin/true mrix,\n  /data/disk/*/tools/drush/drush.php mrix,\n  /opt/local/bin/mybackup mrix,\n  /opt/local/bin/sqlmagic mrix,\n  /opt/php*/bin/php mrix,\n  /usr/bin/diff mrix,\n  /usr/bin/du mrix,\n  /usr/bin/env mrix,\n  /usr/bin/find mrix,\n  /usr/bin/id mrix,\n  /usr/bin/mysql mrix,\n  /usr/bin/mysqldump mrix,\n  /usr/bin/node mrix,\n  /usr/bin/openssl mrix,\n  /usr/bin/passwd mrix,\n  /usr/bin/patch mrix,\n  /usr/bin/rsync mrix,\n  /usr/bin/rvim mrix,\n  /usr/bin/tput mrix,\n  /usr/bin/unzip mrix,\n  /usr/bin/wget mrix,\n  /usr/bin/which.debianutils mrix,\n  /usr/bin/zstd mrix,\n  /usr/lib/node_modules/npm/bin/** mrix,\n  
/usr/local/bin/composer mrix,\n  /usr/local/bin/curl mrix,\n  /usr/local/bin/gem mrix,\n  /usr/local/bin/git mrix,\n  /usr/local/bin/git-receive-pack mrix,\n  /usr/local/bin/git-upload-archive mrix,\n  /usr/local/bin/git-upload-pack mrix,\n  /usr/local/bin/mydumper mrix,\n  /usr/local/bin/myloader mrix,\n  /usr/local/bin/scp mrix,\n  /usr/local/bin/sftp mrix,\n  /usr/local/bin/ssh mrix,\n  /usr/local/bin/ssh-keygen mrix,\n  owner /opt/user/gems/*/** mrix,\n  owner /opt/user/npm/*/** mrix,\n\n  # Deny execution of any other binaries\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.local.bin.ssh",
    "content": "# AppArmor profile for SSH client\n# This profile restricts the SSH client (ssh) to essential operations only.\n\n#include <tunables/global>\n\n/usr/local/bin/ssh flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/authentication>\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n  include <abstractions/python>\n  include <abstractions/wutmp>\n\n  # Allow execution of the ssh binary\n  /usr/local/bin/ssh mrix,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Read access to SSH client configuration files\n  /etc/ssh/ssh_config r,\n  /etc/ssh/ssh_known_hosts r,\n  /home/*/.ssh/** rw,\n\n  # Allow network access for making outbound connections\n  network inet stream,\n  network inet6 stream,\n\n  # Deny access to critical system files\n  deny /etc/shadow* rwlx,\n\n  # Allow read access to user information files\n  /etc/passwd r,\n  /etc/group r,\n  /etc/nsswitch.conf r,\n  /etc/hosts r,\n\n  # Allow read-only access to resolv.conf for DNS resolution\n  /etc/resolv.conf r,\n\n  # Temporary files and directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow execution of necessary binaries\n  /bin/dash mrix,\n  /bin/sh mrix,\n  /opt/local/bin/websh mrix,\n  /usr/bin/id mrix,\n  /usr/bin/mysecureshell mrix,\n  /usr/local/bin/lshell mrix,\n  /usr/local/bin/ssh-agent mrix,\n\n  # Additional binaries used by SSH (e.g., scp, sftp)\n  /usr/local/bin/scp mrix,\n  /usr/local/bin/sftp mrix,\n\n  # Deny execution of any other binaries\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.local.bin.wkhtmltoimage",
    "content": "# File: /etc/apparmor.d/usr.local.bin.wkhtmltoimage\n# Template from https://wkhtmltopdf.org/apparmor.html\n\n#include <tunables/global>\n\n/usr/local/bin/wkhtmltoimage flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/openssl>\n  include <abstractions/fonts>\n\n  # Deny capability sys_ptrace\n  deny capability sys_ptrace,\n\n  # System paths wkhtmltoimage needs to operate\n  /proc/*/maps r,\n  /usr/local/bin/wkhtmltoimage mrix,\n  /var/cache/fontconfig/* r,\n  /tmp/** rwlk,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow read and write access in the web root directories\n  /var/www/** r,\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n\n  owner /data/disk/*/distro/** rwk,\n  owner /data/disk/*/platforms/** rwk,\n  owner /data/disk/*/static/** rwk,\n  owner /var/www/** rwk,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.local.bin.wkhtmltopdf",
    "content": "# File: /etc/apparmor.d/usr.local.bin.wkhtmltopdf\n# Template from https://wkhtmltopdf.org/apparmor.html\n\n#include <tunables/global>\n\n/usr/local/bin/wkhtmltopdf flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/openssl>\n  include <abstractions/fonts>\n\n  # Deny capability sys_ptrace\n  deny capability sys_ptrace,\n\n  # System paths wkhtmltopdf needs to operate\n  /proc/*/maps r,\n  /usr/local/bin/wkhtmltopdf mrix,\n  /var/cache/fontconfig/* r,\n  /tmp/** rwlk,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow read and write access in the web root directories\n  /var/www/** r,\n  /data/disk/*/aegir/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n\n  owner /data/disk/*/distro/** rwk,\n  owner /data/disk/*/platforms/** rwk,\n  owner /data/disk/*/static/** rwk,\n  owner /var/www/** rwk,\n\n  /home/*.web/.aws/ r,\n  /home/*.web/.aws/* rw,\n  /home/*.web/.drush/ r,\n  /home/*.web/.drush/* r,\n  /home/*.web/.tmp/ r,\n  /home/*.web/.tmp/* rw,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.local.sbin.pure-ftpd",
    "content": "# AppArmor profile for Pure-FTPd server\n# This profile restricts Pure-FTPd server (pure-ftpd) to essential operations only.\n\n#include <tunables/global>\n\n/usr/local/sbin/pure-ftpd flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/authentication>\n  include <abstractions/ssl_keys>\n\n  # Capabilities needed by Pure-FTPd server\n  capability net_bind_service,\n  capability setgid,\n  capability setuid,\n  capability mknod,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow Pure-FTPd to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  # Allow execution of /bin/sh\n  /bin/sh mrix,\n  /opt/local/bin/websh mrix,\n\n  # Allow access to /dev\n  /dev/log w,\n  /dev/urandom r,\n\n  # Allow read access to system configuration and password files\n  /etc/hostname r,\n  /etc/hosts r,\n  /etc/pure-ftpd.conf r,\n  /etc/passwd r,\n  /etc/group r,\n  /etc/shadow r,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow read access to SSL certificates\n  /etc/ssl/private/pure-ftpd.pem r,\n  /etc/ssl/private/pure-ftpd-dhparams.pem r,\n\n  # Allow reading necessary kernel parameters\n  /proc/** r,\n  /sys/** r,\n\n  # Allow access to run directory\n  /run/pure-ftpd.pid rw,\n  /run/pure-ftpd/ r,\n  /run/pure-ftpd/** rwk,\n\n  # Allow access to temporary directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow write access to log files\n  /var/log/pureftpd.log rw,\n\n  # Allow execution of the pure-ftpd binary and configuration script\n  /usr/local/sbin/pure-ftpd mrix,\n  /usr/local/sbin/pure-config.pl mrix,\n\n  # Allow read access to Octopus user 
directories and files\n  /data/disk/*/.drush/ r,\n  /data/disk/*/.drush/** r,\n  /data/disk/*/backups/ r,\n  /data/disk/*/backups/** r,\n  /data/disk/*/clients/ r,\n  /data/disk/*/clients/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/static/ r,\n  /data/disk/*/static/** r,\n  /home/*/.drush/ r,\n  /home/*/.drush/** r,\n  /opt/tools/drush/** r,\n\n  # Allow write access to Octopus user directories and files\n  owner /data/disk/*/distro/** rw,\n  owner /data/disk/*/static/ r,\n  owner /data/disk/*/static/** rw,\n  owner /home/*/ r,\n  owner /home/*/.drush/cache/ r,\n  owner /home/*/.drush/cache/** rw,\n  owner /home/*/** rw,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.local.sbin.sshd",
    "content": "# AppArmor profile for SSHd daemon\n# This profile restricts the SSHd daemon (sshd) to essential operations only.\n\n#include <tunables/global>\n\n/usr/local/sbin/sshd flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/authentication>\n  include <abstractions/base>\n  include <abstractions/bash>\n  include <abstractions/consoles>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n  include <abstractions/python>\n  include <abstractions/wutmp>\n\n  # Allow execution of the sshd binary\n  /usr/local/sbin/sshd mrix,\n\n  # Capabilities needed by SSHd daemon\n  capability audit_control,\n  capability audit_write,\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability fowner,\n  capability fsetid,\n  capability kill,\n  capability net_bind_service,\n  capability setgid,\n  capability setuid,\n  capability sys_admin,\n  capability sys_chroot,\n  capability sys_resource,\n  capability sys_tty_config,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Read/Write access\n  /dev/null rw,\n  /dev/ptmx rw,\n  /dev/pts/* rw,\n  /dev/tty rw,\n  /dev/urandom r,\n  /proc/** rw,\n  /run/** rwk,\n  /sys/** r,\n  /tmp/ r,\n  /tmp/** rw,\n  /var/** r,\n  /var/lib/sshd/** rw,\n\n  # Read/Write owner access\n  owner /** rwk,\n  owner /etc/group rw,\n  owner /etc/motd rw,\n  owner /etc/passwd rw,\n  owner /etc/shadow rw,\n  owner /etc/ssh/* rw,\n  owner /proc/*/oom_score_adj rw,\n  owner /root/** rw,\n  owner /run/sshd.pid rw,\n\n  # Exec access\n  /{media,mnt,opt,srv}/** mrix,\n  /bin/* mrix,\n  /opt/local/bin/* mrix,\n  /usr/bin/* mrix,\n  /usr/local/bin/* mrix,\n  
/usr/local/sbin/* mrix,\n  /usr/sbin/* mrix,\n\n  # Read access to SSH daemon configuration files\n  /etc/default/locale r,\n  /etc/environment r,\n  /etc/hosts.allow r,\n  /etc/hosts.deny r,\n  /etc/modules.conf r,\n  /etc/security/** r,\n  /etc/ssh/* r,\n  /etc/ssl/openssl.cnf r,\n\n  # Write access to the PID file\n  /run/sshd.pid rw,\n\n  # Allow network access for accepting inbound connections\n  network inet stream,\n  network inet6 stream,\n\n  # Allow reading user home directories and authorized keys\n  /home/*/*/ r,\n  /home/*/*/.ssh/ r,\n  /home/*/.ssh/authorized_keys{,2} r,\n\n  # Temporary files and directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  /dev/pts/[0-9]* rw,\n  /etc/ssh/moduli r,\n  @{PROC}/@{pid}/mounts r,\n  /etc/motd r,\n  /{,var/}run/motd{,.new} rw,\n  /tmp/ssh-*/agent.[0-9]* rwl,\n  /tmp/ssh-*[0-9]*/ w,\n\n  # Allow execution of various shells\n  /bin/ash rUx,\n  /bin/bash rUx,\n  /bin/bash2 rUx,\n  /bin/bsh rUx,\n  /bin/csh rUx,\n  /bin/dash rUx,\n  /bin/ksh rUx,\n  /bin/sh rUx,\n  /bin/tcsh rUx,\n  /bin/zsh rUx,\n  /bin/zsh4 rUx,\n  /sbin/nologin rUx,\n  /usr/bin/mysecureshell rUx,\n  /usr/local/bin/lshell rUx,\n\n  # Allow ptrace read access for necessary binaries\n  ptrace read peer=unconfined,\n  ptrace read peer=/opt/php*/bin/php,\n  ptrace read peer=/opt/php*/sbin/php-fpm,\n  ptrace read peer=/usr/bin/newrelic-daemon,\n  ptrace read peer=/sbin/dhclient,\n  ptrace read peer=/usr/bin/mysqld_safe,\n  ptrace read peer=/usr/bin/mysqld,\n  ptrace read peer=/usr/bin/redis-server,\n  ptrace read peer=/usr/lib/jvm/java-11-openjdk-amd64/bin/java,\n  ptrace read peer=/usr/lib/jvm/java-17-openjdk-amd64/bin/java,\n  ptrace read peer=/usr/lib/jvm/java-21-openjdk-amd64/bin/java,\n  ptrace read peer=/usr/lib/postfix/sbin/master,\n  ptrace read peer=/usr/lib/postfix/sbin/pickup,\n  ptrace read peer=/usr/lib/postfix/sbin/qmgr,\n  ptrace read peer=/usr/local/sbin/pure-ftpd,\n  ptrace read peer=/usr/sbin/nginx,\n  ptrace read 
peer=/usr/sbin/unbound,\n\n  ^EXEC flags=(complain) {\n    # Include base abstractions\n    include <abstractions/base>\n\n    /bin/ash Ux,\n    /bin/bash Ux,\n    /bin/bash2 Ux,\n    /bin/bsh Ux,\n    /bin/csh Ux,\n    /bin/dash Ux,\n    /bin/ksh Ux,\n    /bin/sh Ux,\n    /bin/tcsh Ux,\n    /bin/zsh Ux,\n    /bin/zsh4 Ux,\n    /sbin/nologin Ux,\n    /usr/bin/mysecureshell Ux,\n    /usr/local/bin/lshell Ux,\n  }\n\n  ^PRIVSEP flags=(complain) {\n    # Include base and nameservice abstractions\n    include <abstractions/base>\n    include <abstractions/nameservice>\n\n    capability sys_chroot,\n    capability setuid,\n    capability setgid,\n  }\n\n  ^PRIVSEP_MONITOR flags=(complain) {\n    # Include authentication, base, nameservice, and wutmp abstractions\n    include <abstractions/authentication>\n    include <abstractions/base>\n    include <abstractions/nameservice>\n    include <abstractions/wutmp>\n\n    capability setuid,\n    capability setgid,\n    capability chown,\n\n    /home/*/.ssh/authorized_keys{,2} r,\n    /dev/ptmx rw,\n    /dev/pts/[0-9]* rw,\n    /dev/urandom r,\n    /etc/hosts.allow r,\n    /etc/hosts.deny r,\n    /etc/ssh/moduli r,\n    @{PROC}/@{pid}/mounts r,\n  }\n\n  ^AUTHENTICATED flags=(complain) {\n    # Include authentication, consoles, nameservice, and wutmp abstractions\n    include <abstractions/authentication>\n    include <abstractions/consoles>\n    include <abstractions/nameservice>\n    include <abstractions/wutmp>\n\n    capability sys_tty_config,\n    capability setgid,\n    capability setuid,\n\n    /dev/log w,\n    /dev/ptmx rw,\n    /etc/default/passwd r,\n    /etc/localtime r,\n    /etc/writable/localtime r,\n    /etc/login.defs r,\n    /etc/motd r,\n    /{,var/}run/motd{,.new} rw,\n    /tmp/ssh-*/agent.[0-9]* rwl,\n    /tmp/ssh-*[0-9]*/ w,\n  }\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.sbin.clamd",
    "content": "# AppArmor profile for Clamd service\n# This profile restricts Clamd service (clamd) to essential operations only.\n\n#include <tunables/global>\n\n/usr/sbin/clamd flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n\n  # Capabilities needed by Clamd service\n  capability chown,\n  capability dac_override,\n  capability dac_read_search,\n  capability setgid,\n  capability setuid,\n  capability sys_resource,\n\n  network inet stream,\n  network inet6 stream,\n  network inet dgram,\n  network inet6 dgram,\n\n  # Allow execution of necessary shells and the clamd binary\n  /bin/dash mrix,\n  /bin/bash mrix,\n  /bin/sh mrix,\n  /usr/sbin/clamd mrix,\n\n  # Allow access to /dev\n  /dev/log w,\n  /dev/null rw,\n  /dev/random r,\n  /dev/urandom r,\n\n  # Allow access to /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow access to temporary directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow access to ClamAV configuration and data\n  /etc/clamav/clamd.conf r,\n  /var/lib/amavis/tmp/** r,\n  /var/lib/clamav/ r,\n  /var/lib/clamav/** rwk,\n  /var/log/clamav/* rwk,\n  /var/spool/MIMEDefang/mdefang-*/Work/ r,\n  /var/spool/MIMEDefang/mdefang-*/Work/** r,\n  /var/spool/clamsmtp/* r,\n  /var/spool/exim4/** r,\n  /var/spool/havp/** r,\n  /var/spool/p3scan/children/** r,\n  /var/spool/qpsmtpd/* r,\n  /{,var/}run/clamav/clamd.ctl w,\n  /{,var/}run/clamav/clamd.pid w,\n\n  # Allow read access to user directories\n  /data/all/** r,\n  /data/conf/* r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  
/data/disk/*/static/** r,\n  /home/*/ r,\n  /home/*/** r,\n\n  # Allow reading filesystems information\n  @{PROC}/[0-9]*/status r,\n  @{PROC}/filesystems r,\n\n  # Deny access to sensitive files and directories\n  deny /etc/shadow* rwlx,\n  deny /etc/passwd* rwlx,\n  deny /root/** rwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.sbin.mysqld",
    "content": "# AppArmor profile for MySQLd server\n# This profile restricts MySQLd server (mysqld) to essential operations only.\n\n#include <tunables/global>\n\n/usr/sbin/mysqld flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/mysql>\n  include <abstractions/nameservice>\n\n  # Capabilities needed by MySQLd server\n  capability dac_override,\n  capability dac_read_search,\n  capability sys_resource,\n  capability setgid,\n  capability setuid,\n  capability sys_nice,\n\n  network inet stream,\n  network inet6 stream,\n\n  # Allow execution of mysqld_safe\n  /usr/bin/mysqld_safe mrix,\n\n  # Allow execution of the mysql binary\n  /usr/bin/mysql mrix,\n\n  # Allow execution of the mysqld binary\n  /usr/sbin/mysqld mrix,\n\n  # Allow execution of necessary utilities\n  /bin/** mrix,\n  /usr/bin/** mrix,\n  /usr/sbin/** mrix,\n\n  # Allow reading necessary directories\n  /bin/ r,\n  /usr/bin/ r,\n  /usr/sbin/ r,\n\n  # Allow MySQL to read its configuration files\n  /etc/mysql/** r,\n  /etc/mysql/conf.d/** r,\n  /etc/mysql/mysql.conf.d/** r,\n\n  # Allow MySQL to access its data directory\n  /var/lib/mysql/ rwk,\n  /var/lib/mysql/** rwk,\n\n  # Allow MySQL to access its run directory\n  /run/mysqld/ r,\n  /run/mysqld/** rw,\n\n  # Allow MySQL to write to its log files\n  /var/log/mysql/ r,\n  /var/log/mysql/** rw,\n\n  # Allow MySQL to access its tmp directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Allow MySQL to read system libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/mysql/plugin/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n  /usr/share/mysql/** r,\n  /usr/share/zoneinfo/** r,\n\n  # Allow MySQL to access /proc and /sys for necessary information\n  /proc/** r,\n  /sys/** r,\n\n  # Allow MySQL to use /dev/shm for temporary storage\n  /dev/shm/** 
rw,\n  /dev/shm/ r,\n\n  # Allow MySQL to read network-related configurations\n  /etc/hosts.allow r,\n  /etc/hosts.deny r,\n  /etc/services r,\n\n  # Disallow execution of binaries from /tmp and /var/tmp\n  deny /tmp/** m,\n  deny /var/tmp/** m,\n\n  # Deny access to various sensitive directories\n  deny /boot/** mrwklx,\n  deny /root/** mrwklx,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n\n  # Site-specific additions and overrides can be added below\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.sbin.nginx",
    "content": "# AppArmor profile for Nginx server\n# This profile restricts Nginx server (nginx) to essential operations only.\n\n#include <tunables/global>\n\n/usr/sbin/nginx flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/consoles>\n  include <abstractions/dovecot-common>\n  include <abstractions/nameservice>\n  include <abstractions/postfix-common>\n  include <abstractions/ssl_keys>\n\n  # Capabilities needed by Nginx server\n  capability dac_override,\n  capability dac_read_search,\n  capability mknod,\n\n  # Allow Nginx to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow execution of the nginx binary\n  /usr/sbin/nginx mrix,\n\n  # Allow read/write access to nginx specific directories and files\n  /etc/default/nginx r,\n  /etc/nginx/ r,\n  /etc/nginx/** r,\n  /etc/nginx/conf.d/ r,\n  /etc/nginx/conf.d/** r,\n  /etc/nginx/fastcgi_params r,\n  /etc/nginx/mime.types r,\n  /etc/nginx/nginx.conf r,\n  /etc/ssl/private/ r,\n  /etc/ssl/private/* r,\n  /etc/ssl/private/nginx-wild-ssl.crt r,\n  /etc/ssl/private/nginx-wild-ssl.dhp r,\n  /etc/ssl/private/nginx-wild-ssl.key r,\n  /var/www/ r,\n  /var/www/** r,\n\n  # Specific directories used by Ægir (if applicable)\n  /var/aegir/.drush/ r,\n  /var/aegir/.drush/** r,\n  /var/aegir/config/ r,\n  /var/aegir/config/** r,\n  /var/aegir/host_master/** r,\n  /var/aegir/platforms/** r,\n\n  /data/disk/*/aegir/** r,\n  /data/disk/*/config/** r,\n  /data/disk/*/distro/** r,\n  /data/disk/*/platforms/** r,\n  /data/disk/*/static/** r,\n  /data/disk/*/tools/le/** r,\n\n  # Additional specific directories\n  /data/all/** r,\n  
/data/conf/ r,\n  /data/conf/* r,\n\n  # Other required directories and files\n  /proc/sys/** r,\n  /run/nginx.pid rw,\n  /run/nginx.pid.oldbin rw,\n  /usr/fastcgi_temp/ r,\n  /usr/fastcgi_temp/** rw,\n  /usr/share/GeoIP/GeoIP.dat r,\n  /usr/share/GeoIP/GeoIPv6.dat r,\n  /usr/share/GeoIP/GeoLite2-ASN.mmdb r,\n  /usr/share/GeoIP/GeoLite2-City.mmdb r,\n  /usr/share/GeoIP/GeoLite2-Country.mmdb r,\n  /var/lib/nginx/ r,\n  /var/lib/nginx/** rw,\n  /var/log/nginx/ r,\n  /var/log/nginx/access.log w,\n  /var/log/nginx/error.log w,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.sbin.rsyslogd",
    "content": "# AppArmor profile for Rsyslogd service\n# This profile restricts Rsyslogd service (rsyslogd) to essential operations only.\n\n#include <tunables/global>\n\n/usr/sbin/rsyslogd flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/nameservice>\n  include <abstractions/ssl_keys>\n\n  # Capabilities needed by Rsyslogd service\n  capability syslog,\n\n  # Allow Rsyslogd to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  # Allow execution of the rsyslogd binary\n  /usr/sbin/rsyslogd mrix,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow read access to system configuration files\n  /etc/rsyslog.conf r,\n  /etc/rsyslog.d/ r,\n  /etc/rsyslog.d/** r,\n  /etc/localtime r,\n  /etc/ssl/certs/** r,\n\n  # Allow read and write access to the log directories and files\n  /var/log/** rwk,\n  /var/spool/rsyslog/** rw,\n  /var/spool/postfix/** rw,\n\n  # Allow network access\n  network inet stream,\n  network inet dgram,\n\n  # Allow access to pid files\n  /run/rsyslogd.pid rw,\n  /run/rsyslogd.pid.tmp rw,\n\n  # Capabilities needed by Rsyslogd service\n  capability net_bind_service,\n  capability setuid,\n  capability setgid,\n  capability chown,\n  capability dac_override,\n\n  # Allow reading necessary kernel parameters\n  /proc/sys/kernel/random/uuid r,\n  /proc/cpuinfo r,\n  /proc/meminfo r,\n  /proc/kmsg r,\n  /proc/stat r,\n\n  # Allow access to /dev for logging\n  /dev/log w,\n  /dev/kmsg w,\n\n  # Catchall to deny everything else\n  #deny /** rwklx,\n}\n"
  },
  {
    "path": "aegir/conf/apparmor/usr.sbin.unbound",
    "content": "# AppArmor profile for Unbound server\n# This profile restricts Unbound server (unbound) to essential operations only.\n\n#include <tunables/global>\n\n/usr/sbin/unbound flags=(complain) {\n\n  # Include common AppArmor abstractions\n  include <abstractions/base>\n  include <abstractions/consoles>\n  include <abstractions/nameservice>\n  include <abstractions/openssl>\n\n  # Capabilities needed by Unbound server\n  capability chown,\n  capability fowner,\n  capability fsetid,\n  capability kill,\n  capability net_bind_service,\n  capability setgid,\n  capability setuid,\n  capability sys_chroot,\n  capability sys_resource,\n  capability net_admin,\n  capability dac_override,\n\n  # Allow to open TCP sockets on any address\n  network inet stream,\n  network inet6 stream,\n\n  # Allow Unbound to accept signal from PHP-CLI processes\n  signal (receive) peer=/opt/php*/bin/php,\n\n  # Allow read access to necessary libraries\n  /lib/** mr,\n  /lib/x86_64-linux-gnu/** mr,\n  /lib64/** mr,\n  /usr/lib/** mr,\n  /usr/lib/x86_64-linux-gnu/** mr,\n  /usr/libexec/** mr,\n  /usr/local/include/** mr,\n  /usr/local/lib/** mr,\n  /usr/local/ssl/** mr,\n  /usr/local/ssl3/** mr,\n\n  # Allow Unbound to access some /dev\n  /dev/log w,\n  /dev/random r,\n  /dev/urandom r,\n\n  # Allow Unbound to access tmp directories\n  /tmp/ r,\n  /tmp/** rw,\n  /var/tmp/** rw,\n\n  # Access root hints from dns-data-root\n  /usr/share/dns/root.* r,\n\n  # Unbound configuration paths\n  /etc/unbound/ r,\n  /etc/unbound/** r,\n  /usr/etc/unbound/ r,\n  /usr/etc/unbound/** r,\n  /var/lib/unbound/ r,\n  /var/lib/unbound/** r,\n\n  # Unbound logs\n  /var/log/unbound/ r,\n  /var/log/unbound/** rw,\n\n  # Unbound keys (if write access is needed)\n  /usr/etc/unbound/keys/** rw,\n\n  # Allow Unbound to execute its own binary\n  /usr/sbin/unbound mrix,\n\n  # Allow Unbound to access its pid and control socket\n  /run/unbound.ctl rw,\n  /run/unbound.pid rw,\n  /run/unbound/ r,\n  
/run/unbound/** r,\n  /run/unbound/unbound.ctl rw,\n  /run/unbound/unbound.pid rw,\n}\n"
  },
  {
    "path": "aegir/conf/dns/unbound",
    "content": "#!/bin/dash\n\n### BEGIN INIT INFO\n# Provides:          unbound\n# Required-Start:    $network $remote_fs $syslog\n# Required-Stop:     $network $remote_fs $syslog\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: Validating, recursive, and caching DNS resolver\n### END INIT INFO\n\nNAME=\"unbound\"\nDESC=\"DNS server\"\nDAEMON=\"/usr/sbin/unbound\"\nPIDFILE=\"/run/unbound/unbound.pid\"\n\nHELPER=\"/usr/libexec/unbound-helper\"\n\ntest -x $DAEMON || exit 0\n\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n. /lib/lsb/init-functions\n\n# Override this variable by editing or creating /etc/default/unbound.\nDAEMON_OPTS=\"\"\n\n[ -d /run/unbound ] || mkdir -p /run/unbound\n[ -d /run/unbound ] && chown -R unbound:unbound /run/unbound\n\nif [ -f /etc/default/unbound ]; then\n    . /etc/default/unbound\nfi\n\n# --- BEGIN CI NoMail hook ---\napply_ci_nomail() {\n    if [ \"${UNBOUND_CI_NOMAIL:-NO}\" = \"YES\" ] && [ -x /usr/local/sbin/unbound_ci_nomail.sh ]; then\n        /usr/bin/env _LOCAL_ZONE_TYPE=\"${UNBOUND_CI_NOMAIL_TYPE:-always_nxdomain}\" \\\n            /usr/local/sbin/unbound_ci_nomail.sh\n    fi\n}\n# --- END CI NoMail hook ---\n\ncase \"$1\" in\n    start)\n        log_daemon_msg \"Starting $DESC\" \"$NAME\"\n        $HELPER chroot_setup\n        $HELPER root_trust_anchor_update 2>&1 | tee /dev/fd/2 | logger -p daemon.info -t unbound\n        if start-stop-daemon --start --quiet --oknodo --pidfile $PIDFILE --name $NAME --startas $DAEMON -- $DAEMON_OPTS; then\n            $HELPER resolvconf_start\n            apply_ci_nomail\n            log_end_msg 0\n        else\n            log_end_msg 1\n        fi\n        ;;\n\n    stop)\n        log_daemon_msg \"Stopping $DESC\" \"$NAME\"\n        if start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE --name $NAME --retry 5; then\n            $HELPER resolvconf_stop\n            $HELPER 
chroot_teardown\n            log_end_msg 0\n        else\n            log_end_msg 1\n        fi\n        ;;\n\n    restart|force-reload)\n        log_daemon_msg \"Restarting $DESC\" \"$NAME\"\n        start-stop-daemon --stop --quiet --remove-pidfile --pidfile $PIDFILE --name $NAME --retry 5\n        $HELPER resolvconf_stop\n        $HELPER chroot_setup\n        if start-stop-daemon --start --quiet --oknodo --pidfile $PIDFILE --name $NAME --startas $DAEMON -- $DAEMON_OPTS; then\n            $HELPER resolvconf_start\n            apply_ci_nomail\n            log_end_msg 0\n        else\n            log_end_msg 1\n        fi\n        ;;\n\n    reload)\n        log_daemon_msg \"Reloading $DESC\" \"$NAME\"\n        $HELPER chroot_setup\n        if start-stop-daemon --stop --pidfile $PIDFILE --name $NAME --signal 1; then\n            apply_ci_nomail\n            log_end_msg 0\n        else\n            log_end_msg 1\n        fi\n        ;;\n\n    status)\n        status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $?\n        ;;\n\n    *)\n        N=/etc/init.d/$NAME\n        echo \"Usage: $N {start|stop|restart|status|reload|force-reload}\" >&2\n        exit 1\n        ;;\nesac\n\nexit 0\n\n"
  },
  {
    "path": "aegir/conf/dns/unbound-helper",
    "content": "#!/bin/dash -e\n\nUNBOUND_CONF=\"/etc/unbound/unbound.conf\"\nUNBOUND_BASE_DIR=\"${UNBOUND_CONF%/*}\"\nCHROOT_DIR=\"$(unbound-checkconf -o chroot)\"\n\nDNS_ROOT_KEY_FILE=\"/usr/share/dns/root.key\"\nROOT_TRUST_ANCHOR_FILE=\"/var/lib/unbound/root.key\"\n\n# Override these variables by editing or creating /etc/default/unbound.\nRESOLVCONF=true\nROOT_TRUST_ANCHOR_UPDATE=true\n\nif [ -f /etc/default/unbound ]; then\n    . /etc/default/unbound\n\n    case \"$RESOLVCONF\" in false|0|no)\n        RESOLVCONF=false ;;\n    esac\n\n    case \"$ROOT_TRUST_ANCHOR_UPDATE\" in false|0|no)\n        ROOT_TRUST_ANCHOR_UPDATE=false ;;\n    esac\nfi\n\ndo_resolvconf_start() {\n    [ false != \"$RESOLVCONF\" -a -x /sbin/resolvconf ] || return 0\n\n    unbound-checkconf $CHROOT_DIR/$UNBOUND_CONF -o interface | {\n        default=yes\n        while read interface; do\n            default=\n            # XXXX here, only localhost and all-zero addresses are handled\n            # in case some other IP is specified it will not work\n            case \"$interface\" in\n              ( 0.0.0.0 | 127.0.0.1 ) echo \"nameserver 127.0.0.1\" ;;\n              ( ::0 | ::1 ) echo \"nameserver ::1\" ;;\n            esac\n        done\n        [ -z \"$default\" ] ||\n            # unbound defaults to listening on localhost\n            echo \"nameserver 127.0.0.1\"\n    } | /sbin/resolvconf -a lo.unbound\n}\n\ndo_resolvconf_stop() {\n    [ false != \"$RESOLVCONF\" -a -x /sbin/resolvconf ] || return 0\n\n    /sbin/resolvconf -d lo.unbound\n}\n\ndo_chroot_setup() {\n    [ -n \"$CHROOT_DIR\" -a -d \"$CHROOT_DIR\" ] || return 0\n    if [ \"$CHROOT_DIR\" != \"$UNBOUND_BASE_DIR\" ]; then\n        # we probably should not do the force-recreate but just a refresh\n        rm -rf   \"$CHROOT_DIR/$UNBOUND_BASE_DIR\"\n        mkdir -p \"$CHROOT_DIR/$UNBOUND_BASE_DIR\"\n        tar -C \"$UNBOUND_BASE_DIR\" -c . 
|\n            tar -C \"$CHROOT_DIR/$UNBOUND_BASE_DIR\" -x\n    fi\n    if [ -S \"/run/systemd/notify\" ]; then\n        if [ ! -e \"$CHROOT_DIR/run/systemd/notify\" ]; then\n            mkdir -p \"$CHROOT_DIR/run/systemd\"\n            touch \"$CHROOT_DIR/run/systemd/notify\"\n        fi\n        if ! mountpoint -q \"$CHROOT_DIR/run/systemd/notify\"; then\n            mount --bind \"/run/systemd/notify\" \"$CHROOT_DIR/run/systemd/notify\"\n        fi\n    fi\n}\n\ndo_chroot_teardown() {\n    if [ -n \"$CHROOT_DIR\" -a -d \"$CHROOT_DIR\" ] &&\n       mountpoint -q \"$CHROOT_DIR/run/systemd/notify\"; then\n        umount \"$CHROOT_DIR/run/systemd/notify\"\n    fi\n}\n\ndo_root_trust_anchor_update() {\n    [ false != \"$ROOT_TRUST_ANCHOR_UPDATE\" -a \\\n      -n \"$ROOT_TRUST_ANCHOR_FILE\"  -a \\\n      -r \"$DNS_ROOT_KEY_FILE\" ] || return\n\n    if [ ! -e \"$ROOT_TRUST_ANCHOR_FILE\" ] ||\n       # we do not want to copy if unbound's file is more recent\n       [ \"$DNS_ROOT_KEY_FILE\" -nt \"$ROOT_TRUST_ANCHOR_FILE\" ]; then\n\n        echo \"Updating $ROOT_TRUST_ANCHOR_FILE from $DNS_ROOT_KEY_FILE\"\n        # Copy to temp first and do mv only when done to ensure the file is in\n        # good condition.  
Can use install(1) here to set correct owner but need\n        # mv anyway, and doing both as root in an untrusted dir seems risky.\n        setpriv --reuid=unbound --regid=unbound --clear-groups \\\n          sh -c \"\\\n            cp --remove-destination --preserve \\\n                 \\\"$DNS_ROOT_KEY_FILE\\\" \\\"$ROOT_TRUST_ANCHOR_FILE.tmp\\\" && \\\n            mv -f \\\"$ROOT_TRUST_ANCHOR_FILE.tmp\\\" \\\"$ROOT_TRUST_ANCHOR_FILE\\\"\"\n    fi\n}\n\ncase \"$1\" in\n    ( resolvconf_start \\\n    | resolvconf_stop \\\n    | chroot_setup \\\n    | chroot_teardown \\\n    | root_trust_anchor_update \\\n    )\n        do_$1\n        ;;\n\n    (*)\n        echo \"Usage: $0 {resolvconf_start|resolvconf_stop|chroot_setup|chroot_teardown|root_trust_anchor_update}\" >&2\n        exit 1\n        ;;\nesac\n"
  },
  {
    "path": "aegir/conf/dns/unbound.conf",
    "content": "###\n### /etc/unbound/unbound.conf.d/unbound.conf\n###\nserver:\n    # Log\n    use-syslog: no\n    logfile: \"/var/log/unbound/unbound.log\"\n    log-time-ascii: yes\n    verbosity: 1\n\n    # Pid\n    pidfile: \"/run/unbound/unbound.pid\"\n\n    # Listen\n    interface: 127.0.0.1\n    port: 53\n    do-tcp: yes\n    do-ip4: yes\n    do-udp: yes\n    do-ip6: no\n    prefer-ip6: no\n\n    # Performance settings\n    num-threads: 2\n    so-rcvbuf: 1m\n    so-sndbuf: 1m\n\n    # Access control\n    access-control: 127.0.0.0/8 allow\n    access-control: ::1 allow\n    access-control: 192.168.1.0/24 allow\n\n    # DNSSEC configuration\n    val-log-level: 2\n    val-permissive-mode: no\n    val-clean-additional: yes\n    harden-dnssec-stripped: yes\n    harden-below-nxdomain: yes\n    harden-glue: yes\n\n    # Prevent DNS rebinding attacks\n    private-address: 192.168.0.0/16\n    private-address: 172.16.0.0/12\n    private-address: 10.0.0.0/8\n    private-address: fd00::/8\n    private-address: fe80::/10\n\n    # Prefetching and caching\n    prefetch: yes\n    prefetch-key: yes\n    cache-max-ttl: 14400\n    cache-min-ttl: 900\n    edns-buffer-size: 1232\n\n    # TLS and DNS-over-TLS configuration (if needed)\n    tls-cert-bundle: \"/etc/ssl/certs/ca-certificates.crt\"\n    tls-port: 853\n    tls-service-key: \"/etc/unbound/unbound_server.key\"\n    tls-service-pem: \"/etc/unbound/unbound_server.pem\"\n\n    # Misc\n    chroot: \"\"\n    hide-identity: yes\n    hide-version: yes\n    minimal-responses: yes\n    qname-minimisation: yes\n    rrset-roundrobin: yes\n    root-hints: \"/var/lib/unbound/root.hints\"\n    auto-trust-anchor-file: \"/var/lib/unbound/root.key\"\n    use-caps-for-id: no\n\nremote-control:\n    # Enable the control interface\n    control-enable: yes\n\n    # Define control interface\n    control-interface: /run/unbound/unbound.ctl\n\n    # Specify server key and certificate\n    server-key-file: \"/etc/unbound/unbound_control.key\"\n 
   server-cert-file: \"/etc/unbound/unbound_control.pem\"\n\n    # Specify control key and certificate\n    control-key-file: \"/etc/unbound/unbound_control.key\"\n    control-cert-file: \"/etc/unbound/unbound_control.pem\"\n\ninclude-toplevel: \"/usr/etc/unbound/unbound.conf.d/*.conf\"\n"
  },
  {
    "path": "aegir/conf/droplet/droplet-agent",
    "content": "#!/usr/bin/env bash\n### BEGIN INIT INFO\n# Provides:          droplet-agent\n# Required-Start:    $remote_fs $syslog $network\n# Required-Stop:     $remote_fs $syslog $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: DigitalOcean Droplet Agent\n### END INIT INFO\n\n_PATHS=\"/usr/bin/droplet-agent /usr/local/bin/droplet-agent /opt/digitalocean/bin/droplet-agent /opt/digitalocean/droplet-agent/droplet-agent\"\n_DAEMON=\"\"\n_NAME=\"droplet-agent\"\n_DESC=\"DigitalOcean Droplet Agent\"\n_PIDFILE=\"/run/${_NAME}.pid\"\n_USER=\"root\"\n_NICE=\"0\"\n\n_for_each_path() {\n  for _P in ${_PATHS}; do\n    if [ -x \"${_P}\" ]; then _DAEMON=\"${_P}\"; return 0; fi\n  done\n  return 1\n}\n\n_do_start() {\n  if [ -z \"${_DAEMON}\" ] && ! _for_each_path; then\n    echo \"${_DESC}: binary not found\"; return 1\n  fi\n  start-stop-daemon --start --quiet --background \\\n    --make-pidfile --pidfile \"${_PIDFILE}\" \\\n    --chuid \"${_USER}\" --nicelevel \"${_NICE}\" \\\n    --exec \"${_DAEMON}\" -- || return 1\n  return 0\n}\n\n_do_stop() {\n  if [ -f \"${_PIDFILE}\" ]; then\n    start-stop-daemon --stop --quiet --pidfile \"${_PIDFILE}\" --retry=TERM/15/KILL/5\n    rm -f \"${_PIDFILE}\" 2>/dev/null || true\n    return 0\n  fi\n  pkill -f \"${_NAME}\" 2>/dev/null || true\n  return 0\n}\n\n_do_status() {\n  if [ -f \"${_PIDFILE}\" ] && ps -p \"$(cat \"${_PIDFILE}\" 2>/dev/null)\" >/dev/null 2>&1; then\n    echo \"${_DESC} is running (pid $(cat \"${_PIDFILE}\"))\"\n    return 0\n  fi\n  pgrep -f \"${_NAME}\" >/dev/null 2>&1 && { echo \"${_DESC} seems running (no pidfile)\"; return 0; }\n  echo \"${_DESC} is not running\"\n  return 3\n}\n\ncase \"$1\" in\n  start)   _do_start ;;\n  stop)    _do_stop ;;\n  restart) _do_stop; _do_start ;;\n  status)  _do_status ;;\n  *) echo \"Usage: $0 {start|stop|restart|status}\"; exit 2 ;;\nesac\n"
  },
  {
    "path": "aegir/conf/etc/etc-ImageMagick-6-policy.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE policymap [\n  <!ELEMENT policymap (policy)*>\n  <!ATTLIST policymap xmlns CDATA #FIXED ''>\n  <!ELEMENT policy EMPTY>\n  <!ATTLIST policy xmlns CDATA #FIXED '' domain NMTOKEN #REQUIRED\n    name NMTOKEN #IMPLIED pattern CDATA #IMPLIED rights NMTOKEN #IMPLIED\n    stealth NMTOKEN #IMPLIED value CDATA #IMPLIED>\n]>\n<!--\n  Configure ImageMagick policies.\n\n  Domains include system, delegate, coder, filter, path, or resource.\n\n  Rights include none, read, write, execute and all.  Use | to combine them,\n  for example: \"read | write\" to permit read from, or write to, a path.\n\n  Use a glob expression as a pattern.\n\n  Suppose we do not want users to process MPEG video images:\n\n    <policy domain=\"delegate\" rights=\"none\" pattern=\"mpeg:decode\" />\n\n  Here we do not want users reading images from HTTP:\n\n    <policy domain=\"coder\" rights=\"none\" pattern=\"HTTP\" />\n\n  The /repository file system is restricted to read only.  We use a glob\n  expression to match all paths that start with /repository:\n\n    <policy domain=\"path\" rights=\"read\" pattern=\"/repository/*\" />\n\n  Lets prevent users from executing any image filters:\n\n    <policy domain=\"filter\" rights=\"none\" pattern=\"*\" />\n\n  Any large image is cached to disk rather than memory:\n\n    <policy domain=\"resource\" name=\"area\" value=\"1GP\"/>\n\n  Use the default system font unless overwridden by the application:\n\n    <policy domain=\"system\" name=\"font\" value=\"/usr/share/fonts/favorite.ttf\"/>\n\n  Define arguments for the memory, map, area, width, height and disk resources\n  with SI prefixes (.e.g 100MB).  In addition, resource policies are maximums\n  for each instance of ImageMagick (e.g. policy memory limit 1GB, -limit 2GB\n  exceeds policy maximum so memory limit is 1GB).\n\n  Rules are processed in order.  
Here we want to restrict ImageMagick to only\n  read or write a small subset of proven web-safe image types:\n\n    <policy domain=\"delegate\" rights=\"none\" pattern=\"*\" />\n    <policy domain=\"filter\" rights=\"none\" pattern=\"*\" />\n    <policy domain=\"coder\" rights=\"none\" pattern=\"*\" />\n    <policy domain=\"coder\" rights=\"read|write\" pattern=\"{GIF,JPEG,PNG,WEBP}\" />\n-->\n<policymap>\n  <!-- <policy domain=\"resource\" name=\"temporary-path\" value=\"/tmp\"/> -->\n  <policy domain=\"resource\" name=\"memory\" value=\"256MiB\"/>\n  <policy domain=\"resource\" name=\"map\" value=\"512MiB\"/>\n  <policy domain=\"resource\" name=\"width\" value=\"16KP\"/>\n  <policy domain=\"resource\" name=\"height\" value=\"16KP\"/>\n  <!-- <policy domain=\"resource\" name=\"list-length\" value=\"128\"/> -->\n  <policy domain=\"resource\" name=\"area\" value=\"128MP\"/>\n  <policy domain=\"resource\" name=\"disk\" value=\"1GiB\"/>\n  <!-- <policy domain=\"resource\" name=\"file\" value=\"768\"/> -->\n  <!-- <policy domain=\"resource\" name=\"thread\" value=\"4\"/> -->\n  <!-- <policy domain=\"resource\" name=\"throttle\" value=\"0\"/> -->\n  <!-- <policy domain=\"resource\" name=\"time\" value=\"3600\"/> -->\n  <!-- <policy domain=\"coder\" rights=\"none\" pattern=\"MVG\" /> -->\n  <!-- <policy domain=\"module\" rights=\"none\" pattern=\"{PS,PDF,XPS}\" /> -->\n  <!-- <policy domain=\"path\" rights=\"none\" pattern=\"@*\" /> -->\n  <!-- <policy domain=\"cache\" name=\"memory-map\" value=\"anonymous\"/> -->\n  <!-- <policy domain=\"cache\" name=\"synchronize\" value=\"True\"/> -->\n  <!-- <policy domain=\"cache\" name=\"shared-secret\" value=\"passphrase\" stealth=\"true\"/> -->\n  <!-- <policy domain=\"system\" name=\"max-memory-request\" value=\"256MiB\"/> -->\n  <!-- <policy domain=\"system\" name=\"shred\" value=\"2\"/> -->\n  <!-- <policy domain=\"system\" name=\"precision\" value=\"6\"/> -->\n  <!-- <policy domain=\"system\" name=\"font\" value=\"/path/to/font.ttf\"/> -->\n  <!-- <policy domain=\"system\" name=\"pixel-cache-memory\" value=\"anonymous\"/> -->\n  <!-- not needed because the MVG coder must be used explicitly: -->\n  <!-- <policy domain=\"delegate\" rights=\"none\" pattern=\"MVG\" /> -->\n  <!-- use curl -->\n  <policy domain=\"delegate\" rights=\"read|write\" pattern=\"URL\" />\n  <policy domain=\"delegate\" rights=\"read|write\" pattern=\"HTTPS\" />\n  <policy domain=\"delegate\" rights=\"read|write\" pattern=\"HTTP\" />\n  <!-- prevent indirect file reads (e.g. files containing passwords) via @-prefixed paths -->\n  <policy domain=\"path\" rights=\"none\" pattern=\"@*\"/>\n  <!-- disable ghostscript format types -->\n<!--\n  <policy domain=\"coder\" rights=\"none\" pattern=\"PS\" />\n  <policy domain=\"coder\" rights=\"none\" pattern=\"PS2\" />\n  <policy domain=\"coder\" rights=\"none\" pattern=\"PS3\" />\n  <policy domain=\"coder\" rights=\"none\" pattern=\"EPS\" />\n  <policy domain=\"coder\" rights=\"none\" pattern=\"PDF\" />\n  <policy domain=\"coder\" rights=\"none\" pattern=\"XPS\" />\n -->\n</policymap>\n"
  },
  {
    "path": "aegir/conf/ftpd/ftpusers",
    "content": "root\ndaemon\nbin\nsys\nsync\ngames\nman\nlp\nmail\nnews\nuucp\nnobody\n"
  },
  {
    "path": "aegir/conf/ftpd/pure-config.pl.txt",
    "content": "#! /usr/bin/perl\n\n# (C) 2001-2009 Aristotle Pagaltzis\n# derived from code (C) 2001-2002 Frank Denis and Matthias Andree\n\nuse strict;\n\nmy ($conffile, @flg) = @ARGV;\n\nmy $PUREFTPD;\n-x && ($PUREFTPD=$_, last) for qw(\n\t${exec_prefix}/sbin/pure-ftpd\n\t/usr/local/pure-ftpd/sbin/pure-ftpd\n\t/usr/local/pureftpd/sbin/pure-ftpd\n\t/usr/local/sbin/pure-ftpd\n\t/usr/sbin/pure-ftpd\n\t/opt/sbin/pure-ftpd\n);\n\nmy %simple_switch_for = (\n\tIPV4Only\t\t\t=> \"-4\",\n\tIPV6Only\t\t\t=> \"-6\",        \n\tChrootEveryone\t\t\t=> \"-A\",\n\tBrokenClientsCompatibility\t=> \"-b\",\n\tDaemonize\t\t\t=> \"-B\",\n\tVerboseLog\t\t\t=> \"-d\",\n\tDisplayDotFiles\t\t\t=> \"-D\",\n\tAnonymousOnly\t\t\t=> \"-e\",\n\tNoAnonymous\t\t\t=> \"-E\",\n\tDontResolve\t\t\t=> \"-H\",\n\tAnonymousCanCreateDirs\t\t=> \"-M\",\n\tNATmode\t\t\t\t=> \"-N\",\n\tCallUploadScript\t\t=> \"-o\",\n\tAntiWarez\t\t\t=> \"-s\",\n\tAllowUserFXP\t\t\t=> \"-w\",\n\tAllowAnonymousFXP\t\t=> \"-W\",\n\tProhibitDotFilesWrite\t\t=> \"-x\",\n\tProhibitDotFilesRead\t\t=> \"-X\",\n\tAllowDotFiles\t\t\t=> \"-z\",\n\tAutoRename\t\t\t=> \"-r\",\n\tAnonymousCantUpload\t\t=> \"-i\",\n\tLogPID\t\t\t\t=> \"-1\",\n\tNoChmod\t\t\t\t=> \"-R\",\n\tKeepAllFiles\t\t\t=> \"-K\",\n\tCreateHomeDir\t\t\t=> \"-j\",\n\tNoRename\t\t\t=> \"-G\",\n\tCustomerProof\t\t\t=> \"-Z\",\n\tNoTruncate\t\t\t=> \"-0\",\n);\n\nmy %string_switch_for = (\n  \tFileSystemCharset\t=> \"-8\",\n\tClientCharset\t\t=> \"-9\",\n\tSyslogFacility\t\t=> \"-f\",\n\tFortunesFile\t\t=> \"-F\",\n\tForcePassiveIP\t\t=> \"-P\",\n\tBind\t\t\t=> \"-S\",\n\tAnonymousBandwidth\t=> \"-t\",\n\tUserBandwidth\t\t=> \"-T\",\n\tTrustedIP\t\t=> \"-V\",\n\tAltLog\t\t\t=> \"-O\",\n\tPIDFile\t\t\t=> \"-g\",\n);\n\nmy %numeric_switch_for = (\n\tMaxIdleTime\t\t=> \"-I\",\n\tMaxDiskUsage\t\t=> \"-k\",\n\tTrustedGID\t\t=> \"-a\",\n\tMaxClientsNumber\t=> \"-c\",\n\tMaxClientsPerIP\t\t=> \"-C\",\n\tMaxLoad\t\t\t=> \"-m\",\n\tMinUID\t\t\t=> \"-u\",\n\tTLS\t\t\t=> 
\"-Y\",\n);\n\nmy %numpairb_switch_for = (\n\tLimitRecursion\t\t=> \"-L\",\n\tPassivePortRange\t=> \"-p\",\n\tAnonymousRatio\t\t=> \"-q\",\n\tUserRatio\t\t=> \"-Q\",\n);\n\nmy %numpairc_switch_for = (\n\tUmask\t\t=> \"-U\",\n\tQuota\t\t=> \"-n\",\n\tPerUserLimits\t=> \"-y\",\n);\n\nmy %auth_method_for = (\n\tLDAPConfigFile\t\t=> \"ldap\",\n\tMySQLConfigFile\t\t=> \"mysql\",\n\tPGSQLConfigFile\t\t=> \"pgsql\",\n\tPureDB\t\t\t=> \"puredb\",\n\tExtAuth\t\t\t=> \"extauth\",\n);\n\nmy $simple_switch = qr/(@{[join \"|\", keys %simple_switch_for ]})\\s+yes/i;\nmy $string_switch = qr/(@{[join \"|\", keys %string_switch_for ]})\\s+(\\S+)/i;\nmy $numeric_switch = qr/(@{[join \"|\", keys %numeric_switch_for ]})\\s+(\\d+)/i;\nmy $numpairb_switch = qr/(@{[join \"|\", keys %numpairb_switch_for ]})\\s+(\\d+)\\s+(\\d+)/i;\nmy $numpairc_switch = qr/(@{[join \"|\", keys %numpairc_switch_for ]})\\s+(\\d+):(\\d+)/i;\nmy $auth_method = qr/(@{[join \"|\", keys %auth_method_for ]})\\s+(\\S+)/i;\n\ndie \"Usage: pure-config.pl <configuration file> [extra options]\\n\"\n\tunless defined $conffile;\n\nopen CONF, \"< $conffile\" or die \"Can't open $conffile: $!\\n\";\n\n!/^\\s*(?:$|#)/ and (chomp, push @flg,\n\t/$simple_switch/i\t\t? ($simple_switch_for{$1}) :\n\t/$string_switch/i\t\t? ($string_switch_for{$1} . $2) :\n\t/$numeric_switch/i\t\t? ($numeric_switch_for{$1} . $2) :\n\t/$numpairb_switch/i\t\t? ($numpairb_switch_for{$1} . \"$2:$3\") :\n\t/$numpairc_switch/i\t\t? ($numpairc_switch_for{$1} . \"$2:$3\") :\n\t/$auth_method/i\t\t\t? (\"-l\" . \"$auth_method_for{$1}:$2\") :\n\t/UnixAuthentication\\s+yes/i\t? (\"-l\" . \"unix\") :\n\t/PAMAuthentication\\s+yes/i\t? (\"-l\" . \"pam\") :\n\t()\n) while <CONF>;\n\nclose CONF;\n\nprint \"Running: $PUREFTPD \", join(\" \", @flg), \"\\n\";\nexec { $PUREFTPD } ($PUREFTPD, @flg) or die \"cannot exec $PUREFTPD: $!\";\n"
  },
  {
    "path": "aegir/conf/ftpd/pure-ftpd.conf",
    "content": "\n############################################################\n#                                                          #\n#             Configuration file for pure-ftpd             #\n#                                                          #\n############################################################\n\n# If you want to run Pure-FTPd with this configuration\n# instead of command-line options, please run the\n# following command :\n#\n# ${exec_prefix}/sbin/sbin/pure-ftpd /etc/pure-ftpd.conf\n#\n# Online documentation:\n# https://www.pureftpd.org/project/pure-ftpd/doc\n\n\n# Restrict users to their home directory\n\nChrootEveryone               yes\n\n\n\n# If the previous option is set to \"no\", members of the following group\n# won't be restricted. Others will be. If you don't want chroot()ing anyone,\n# just comment out ChrootEveryone and TrustedGID.\n\n# TrustedGID                   100\n\n\n\n# Turn on compatibility hacks for broken clients\n\nBrokenClientsCompatibility   yes\n\n\n\n# Maximum number of simultaneous users\n\nMaxClientsNumber             50\n\n\n\n# Run as a background process\n\nDaemonize                    yes\n\n\n\n# Maximum number of simultaneous clients with the same IP address\n\nMaxClientsPerIP              8\n\n\n\n# If you want to log all client commands, set this to \"yes\".\n# This directive can be specified twice to also log server responses.\n\nVerboseLog                   yes\n\n\n\n# List dot-files even when the client doesn't send \"-a\".\n\nDisplayDotFiles              no\n\n\n\n# Disallow authenticated users - Act only as a public FTP server.\n\nAnonymousOnly                no\n\n\n\n# Disallow anonymous connections. Only accept authenticated users.\n\nNoAnonymous                  yes\n\n\n\n# Syslog facility (auth, authpriv, daemon, ftp, security, user, local*)\n# The default facility is \"ftp\". 
\"none\" disables logging.\n\nSyslogFacility               ftp\n\n\n\n# Display fortune cookies\n\n# FortunesFile                 /usr/share/fortune/zippy\n\n\n\n# Don't resolve host names in log files. Recommended unless you trust\n# reverse host names, and don't care about DNS resolution being possibly slow.\n\nDontResolve                  yes\n\n\n\n# Maximum idle time in minutes (default = 15 minutes)\n\nMaxIdleTime                  15\n\n\n\n# LDAP configuration file (see README.LDAP)\n\n# LDAPConfigFile               /etc/pureftpd-ldap.conf\n\n\n\n# MySQL configuration file (see README.MySQL)\n\n# MySQLConfigFile              /etc/pureftpd-mysql.conf\n\n\n# PostgreSQL configuration file (see README.PGSQL)\n\n# PGSQLConfigFile              /etc/pureftpd-pgsql.conf\n\n\n# PureDB user database (see README.Virtual-Users)\n\n# PureDB                       /etc/pureftpd.pdb\n\n\n# Path to pure-authd socket (see README.Authentication-Modules)\n\n# ExtAuth                      /run/ftpd.sock\n\n\n\n# If you want to enable PAM authentication, uncomment the following line\n\n# PAMAuthentication            yes\n\n\n\n# If you want simple Unix (/etc/passwd) authentication, uncomment this\n\nUnixAuthentication           yes\n\n\n\n# Please note that LDAPConfigFile, MySQLConfigFile, PAMAuthentication and\n# UnixAuthentication can each be used only once, but they can be combined\n# together. For instance, if you use MySQLConfigFile, then UnixAuthentication,\n# the SQL server will be used first. If the SQL authentication fails because the\n# user wasn't found, a new attempt will be done using system authentication.\n# If the SQL authentication fails because the password didn't match, the\n# authentication chain stops here. Authentication methods are chained in\n# the order they are given.\n\n\n\n# 'ls' recursion limits. The first argument is the maximum number of\n# files to be displayed. 
The second one is the max subdirectories depth.\n\nLimitRecursion               10000 8\n\n\n\n# Are anonymous users allowed to create new directories?\n\nAnonymousCanCreateDirs       no\n\n\n\n# If the system load is greater than the given value, anonymous users\n# aren't allowed to download.\n\nMaxLoad                      4\n\n\n\n# Port range for passive connections - keep it as broad as possible.\n\nPassivePortRange             30000 50000\n\n\n\n# Force an IP address in PASV/EPSV/SPSV replies. - for NAT.\n# Symbolic host names are also accepted for gateways with dynamic IP\n# addresses.\n\n# ForcePassiveIP               192.168.0.1\n\n\n\n# Upload/download ratio for anonymous users.\n\n# AnonymousRatio               1 10\n\n\n\n# Upload/download ratio for all users.\n# This directive supersedes the previous one.\n\n# UserRatio                    1 10\n\n\n\n# Disallow downloads of files owned by the \"ftp\" system user;\n# files that were uploaded but not validated by a local admin.\n\nAntiWarez                    yes\n\n\n\n# IP address/port to listen to (default=all IP addresses, port 21).\n\n# Bind                         127.0.0.1,21\n\n\n\n# Maximum bandwidth for anonymous users in KB/s\n\n# AnonymousBandwidth           8\n\n\n\n# Maximum bandwidth for *all* users (including anonymous) in KB/s\n# Use AnonymousBandwidth *or* UserBandwidth, not both.\n\n# UserBandwidth                8\n\n\n\n# File creation mask. <umask for files>:<umask for dirs> .\n# 177:077 if you feel paranoid.\n\nUmask                        113:002\n\n\n# Minimum UID for an authenticated user to log in.\n# For example, a value of 100 prevents all users whose user id is below\n# 100 from logging in. 
If you want \"root\" to be able to log in, use 0.\n\nMinUID                       100\n\n\n\n# Allow FXP transfers for authenticated users.\n\nAllowUserFXP                 yes\n\n\n\n# Allow anonymous FXP for anonymous and non-anonymous users.\n\nAllowAnonymousFXP            no\n\n\n\n# Users can't delete/write files starting with a dot ('.')\n# even if they own them. But if TrustedGID is enabled, that group\n# will exceptionally have access to dot-files.\n\nProhibitDotFilesWrite        no\n\n\n\n# Prohibit *reading* of files starting with a dot (.history, .ssh...)\n\nProhibitDotFilesRead         no\n\n\n\n# Don't overwrite files. When a file whose name already exist is uploaded,\n# it gets automatically renamed to file.1, file.2, file.3, ...\n\nAutoRename                   no\n\n\n\n# Prevent anonymous users from uploading new files (no = upload is allowed)\n\nAnonymousCantUpload          yes\n\n\n\n# Only connections to this specific IP address are allowed to be\n# non-anonymous. You can use this directive to open several public IPs for\n# anonymous FTP, and keep a private firewalled IP for remote administration.\n# You can also only allow a non-routable local IP (such as 10.x.x.x) for\n# authenticated users, and run a public anon-only FTP server on another IP.\n\n# TrustedIP                    10.1.1.1\n\n\n\n# To add the PID to log entries, uncomment the following line.\n\n# LogPID                       yes\n\n\n\n# Create an additional log file with transfers logged in a Apache-like format :\n# fw.c9x.org - jedi [13/Apr/2017:19:36:39] \"GET /ftp/linux.tar.bz2\" 200 21809338\n# This log file can then be processed by common HTTP traffic analyzers.\n\n# AltLog                       clf:/var/log/pureftpd.log\n\n\n\n# Create an additional log file with transfers logged in a format optimized\n# for statistic reports.\n\n# AltLog                       stats:/var/log/pureftpd.log\n\n\n\n# Create an additional log file with transfers logged in the standard W3C\n# 
format (compatible with many HTTP log analyzers)\n\nAltLog                       w3c:/var/log/pureftpd.log\n\n\n\n# Disallow the CHMOD command. Users cannot change perms of their own files.\n\n# NoChmod                      yes\n\n\n\n# Allow users to resume/upload files, but *NOT* to delete them.\n\n# KeepAllFiles                 yes\n\n\n\n# Automatically create home directories if they are missing\n\n# CreateHomeDir                yes\n\n\n\n# Enable virtual quotas. The first value is the max number of files.\n# The second value is the maximum size, in megabytes.\n# So 1000:10 limits every user to 1000 files and 10 Mb.\n\n# Quota                        1000:10\n\n\n\n# If your pure-ftpd has been compiled with standalone support, you can change\n# the location of the pid file. The default is /run/pure-ftpd.pid\n\nPIDFile                      /run/pure-ftpd.pid\n\n\n\n# If your pure-ftpd has been compiled with pure-uploadscript support,\n# this will make pure-ftpd write info about new uploads to\n# /run/pure-ftpd.upload.pipe so pure-uploadscript can read it and\n# spawn a script to handle the upload.\n# Don't enable this option if you don't actually use pure-uploadscript.\n\n# CallUploadScript             yes\n\n\n\n# This option is useful on servers where anonymous upload is\n# allowed. When the partition is more than the given percentage full,\n# new uploads are disallowed.\n\nMaxDiskUsage                   90\n\n\n\n# Set to 'yes' to prevent users from renaming files.\n\n# NoRename                     yes\n\n\n\n# Be 'customer proof': forbids common customer mistakes such as\n# 'chmod 0 public_html', that are valid, but can cause customers to\n# unintentionally shoot themselves in the foot.\n\nCustomerProof                yes\n\n\n\n# Per-user concurrency limits. 
Will only work if the FTP server has\n# been compiled with --with-peruserlimits.\n# Format is: <max sessions per user>:<max anonymous sessions>\n# For example, 3:20 means that an authenticated user can have up to 3 active\n# sessions, and that up to 20 anonymous sessions are allowed.\n\n# PerUserLimits                3:20\n\n\n\n# When a file is uploaded and there was already a previous version of the file\n# with the same name, the old file will neither get removed nor truncated.\n# The file will be stored under a temporary name and once the upload is\n# complete, it will be atomically renamed. For example, when a large PHP\n# script is being uploaded, the web server will keep serving the old version and\n# later switch to the new one as soon as the full file has been\n# transferred. This option is incompatible with virtual quotas.\n\n# NoTruncate                   yes\n\n\n\n# This option accepts three values:\n# 0: disable SSL/TLS encryption layer (default).\n# 1: accept both cleartext and encrypted sessions.\n# 2: refuse connections that don't use the TLS security mechanism,\n#    including anonymous sessions.\n# Do _not_ uncomment this blindly. 
Double check that:\n# 1) The server has been compiled with TLS support (--with-tls),\n# 2) A valid certificate is in place,\n# 3) Only compatible clients will log in.\n\nTLS                          2\n\n\n# Cipher suite for TLS sessions.\n# The default suite is secure and setting this property is usually\n# only required to *lower* the security to cope with legacy clients.\n# Prefix with -C: in order to require valid client certificates.\n# If -C: is used, make sure that clients' public keys are present on\n# the server.\n\n# TLSCipherSuite               HIGH\n\n\n\n# Certificate file, for TLS\n# The certificate itself and the keys can be bundled into the same\n# file or split into two files.\n# CertFile is for a cert+key bundle, CertFileAndKey for separate files.\n# Use only one of these.\n\n# CertFile                     /etc/ssl/private/pure-ftpd.pem\n# CertFileAndKey               \"/etc/pure-ftpd.pem\" \"/etc/pure-ftpd.key\"\n\n\n\n# Unix socket of the external certificate handler, for TLS\n\n# ExtCert                      /run/ftpd-certs.sock\n\n\n# Listen only to IPv4 addresses in standalone mode (i.e. disable IPv6)\n# By default, both IPv4 and IPv6 are enabled.\n\n# IPV4Only                     yes\n\n\n\n# Listen only to IPv6 addresses in standalone mode (i.e. disable IPv4)\n# By default, both IPv4 and IPv6 are enabled.\n\n# IPV6Only                     yes\n\n"
  },
  {
    "path": "aegir/conf/global/global-10.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Include ini\n */\nif (is_readable('/data/conf/global/global-ini.inc')) {\n  include_once('/data/conf/global/global-ini.inc');\n}\n\n$drupal_core = '10';\n\n/**\n * Include mode\n */\nif (is_readable('/data/conf/global/global-mode.inc')) {\n  include_once('/data/conf/global/global-mode.inc');\n}\n\n/**\n * Include main\n */\nif (is_readable('/data/conf/global/global-main.inc')) {\n  include_once('/data/conf/global/global-main.inc');\n}\n\n/**\n * Include redirects\n */\nif (is_readable('/data/conf/global/global-settings.inc')) {\n  include_once('/data/conf/global/global-settings.inc');\n}\n\n/**\n * Optional system level early overrides\n */\nif (is_readable('/data/conf/settings.global.inc')) {\n  require_once \"/data/conf/settings.global.inc\";\n}\n\n/**\n * Include front-end\n */\nif (is_readable('/data/conf/global/global-front-end.inc')) {\n  include_once('/data/conf/global/global-front-end.inc');\n}\n\n/**\n * If include valkey/redis\n */\nif (is_readable('/data/conf/global/global-if-valkey.inc')) {\n  include_once('/data/conf/global/global-if-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-if-redis.inc')) {\n  include_once('/data/conf/global/global-if-redis.inc');\n}\n\n/**\n * Optional system level overrides\n */\nif (is_readable('/data/conf/override.global.inc')) {\n  require_once \"/data/conf/override.global.inc\";\n}\n\n/**\n * Include valkey/redis\n */\nif (is_readable('/data/conf/global/global-valkey.inc')) {\n  include_once('/data/conf/global/global-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-redis.inc')) {\n  include_once('/data/conf/global/global-redis.inc');\n}\n\n/**\n * Include newrelic\n */\nif (is_readable('/data/conf/global/global-newrelic.inc')) {\n  include_once('/data/conf/global/global-newrelic.inc');\n}\n\n/**\n * Include extra\n */\nif (is_readable('/data/conf/global/global-extra.inc')) {\n  include_once('/data/conf/global/global-extra.inc');\n}\n\n"
  },
  {
    "path": "aegir/conf/global/global-11.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Include ini\n */\nif (is_readable('/data/conf/global/global-ini.inc')) {\n  include_once('/data/conf/global/global-ini.inc');\n}\n\n$drupal_core = '11';\n\n/**\n * Include mode\n */\nif (is_readable('/data/conf/global/global-mode.inc')) {\n  include_once('/data/conf/global/global-mode.inc');\n}\n\n/**\n * Include main\n */\nif (is_readable('/data/conf/global/global-main.inc')) {\n  include_once('/data/conf/global/global-main.inc');\n}\n\n/**\n * Include redirects\n */\nif (is_readable('/data/conf/global/global-settings.inc')) {\n  include_once('/data/conf/global/global-settings.inc');\n}\n\n/**\n * Optional system level early overrides\n */\nif (is_readable('/data/conf/settings.global.inc')) {\n  require_once \"/data/conf/settings.global.inc\";\n}\n\n/**\n * Include front-end\n */\nif (is_readable('/data/conf/global/global-front-end.inc')) {\n  include_once('/data/conf/global/global-front-end.inc');\n}\n\n/**\n * If include valkey/redis\n */\nif (is_readable('/data/conf/global/global-if-valkey.inc')) {\n  include_once('/data/conf/global/global-if-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-if-redis.inc')) {\n  include_once('/data/conf/global/global-if-redis.inc');\n}\n\n/**\n * Optional system level overrides\n */\nif (is_readable('/data/conf/override.global.inc')) {\n  require_once \"/data/conf/override.global.inc\";\n}\n\n/**\n * Include valkey/redis\n */\nif (is_readable('/data/conf/global/global-valkey.inc')) {\n  include_once('/data/conf/global/global-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-redis.inc')) {\n  include_once('/data/conf/global/global-redis.inc');\n}\n\n/**\n * Include newrelic\n */\nif (is_readable('/data/conf/global/global-newrelic.inc')) {\n  include_once('/data/conf/global/global-newrelic.inc');\n}\n\n/**\n * Include extra\n */\nif (is_readable('/data/conf/global/global-extra.inc')) {\n  include_once('/data/conf/global/global-extra.inc');\n}\n\n"
  },
  {
    "path": "aegir/conf/global/global-6.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Include ini\n */\nif (is_readable('/data/conf/global/global-ini.inc')) {\n  include_once('/data/conf/global/global-ini.inc');\n}\n\n$drupal_core = '6';\n\n/**\n * Include mode\n */\nif (is_readable('/data/conf/global/global-mode.inc')) {\n  include_once('/data/conf/global/global-mode.inc');\n}\n\n/**\n * Include main\n */\nif (is_readable('/data/conf/global/global-main.inc')) {\n  include_once('/data/conf/global/global-main.inc');\n}\n\n/**\n * Include redirects\n */\nif (is_readable('/data/conf/global/global-settings.inc')) {\n  include_once('/data/conf/global/global-settings.inc');\n}\n\n/**\n * Optional system level early overrides\n */\nif (is_readable('/data/conf/settings.global.inc')) {\n  require_once \"/data/conf/settings.global.inc\";\n}\n\n/**\n * Include front-end\n */\nif (is_readable('/data/conf/global/global-front-end.inc')) {\n  include_once('/data/conf/global/global-front-end.inc');\n}\n\n/**\n * If include valkey/redis\n */\nif (is_readable('/data/conf/global/global-if-valkey.inc')) {\n  include_once('/data/conf/global/global-if-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-if-redis.inc')) {\n  include_once('/data/conf/global/global-if-redis.inc');\n}\n\n/**\n * Optional system level overrides\n */\nif (is_readable('/data/conf/override.global.inc')) {\n  require_once \"/data/conf/override.global.inc\";\n}\n\n/**\n * Include valkey/redis\n */\nif (is_readable('/data/conf/global/global-valkey.inc')) {\n  include_once('/data/conf/global/global-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-redis.inc')) {\n  include_once('/data/conf/global/global-redis.inc');\n}\n\n/**\n * Include newrelic\n */\nif (is_readable('/data/conf/global/global-newrelic.inc')) {\n  include_once('/data/conf/global/global-newrelic.inc');\n}\n\n/**\n * Include extra\n */\nif (is_readable('/data/conf/global/global-extra.inc')) {\n  include_once('/data/conf/global/global-extra.inc');\n}\n\n"
  },
  {
    "path": "aegir/conf/global/global-7.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Include ini\n */\nif (is_readable('/data/conf/global/global-ini.inc')) {\n  include_once('/data/conf/global/global-ini.inc');\n}\n\n$drupal_core = '7';\n\n/**\n * Include mode\n */\nif (is_readable('/data/conf/global/global-mode.inc')) {\n  include_once('/data/conf/global/global-mode.inc');\n}\n\n/**\n * Include main\n */\nif (is_readable('/data/conf/global/global-main.inc')) {\n  include_once('/data/conf/global/global-main.inc');\n}\n\n/**\n * Include redirects\n */\nif (is_readable('/data/conf/global/global-settings.inc')) {\n  include_once('/data/conf/global/global-settings.inc');\n}\n\n/**\n * Optional system level early overrides\n */\nif (is_readable('/data/conf/settings.global.inc')) {\n  require_once \"/data/conf/settings.global.inc\";\n}\n\n/**\n * Include front-end\n */\nif (is_readable('/data/conf/global/global-front-end.inc')) {\n  include_once('/data/conf/global/global-front-end.inc');\n}\n\n/**\n * If include valkey/redis\n */\nif (is_readable('/data/conf/global/global-if-valkey.inc')) {\n  include_once('/data/conf/global/global-if-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-if-redis.inc')) {\n  include_once('/data/conf/global/global-if-redis.inc');\n}\n\n/**\n * Optional system level overrides\n */\nif (is_readable('/data/conf/override.global.inc')) {\n  require_once \"/data/conf/override.global.inc\";\n}\n\n/**\n * Include valkey/redis\n */\nif (is_readable('/data/conf/global/global-valkey.inc')) {\n  include_once('/data/conf/global/global-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-redis.inc')) {\n  include_once('/data/conf/global/global-redis.inc');\n}\n\n/**\n * Include newrelic\n */\nif (is_readable('/data/conf/global/global-newrelic.inc')) {\n  include_once('/data/conf/global/global-newrelic.inc');\n}\n\n/**\n * Include extra\n */\nif (is_readable('/data/conf/global/global-extra.inc')) {\n  include_once('/data/conf/global/global-extra.inc');\n}\n\n"
  },
  {
    "path": "aegir/conf/global/global-8.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Include ini\n */\nif (is_readable('/data/conf/global/global-ini.inc')) {\n  include_once('/data/conf/global/global-ini.inc');\n}\n\n$drupal_core = '8';\n\n/**\n * Include mode\n */\nif (is_readable('/data/conf/global/global-mode.inc')) {\n  include_once('/data/conf/global/global-mode.inc');\n}\n\n/**\n * Include main\n */\nif (is_readable('/data/conf/global/global-main.inc')) {\n  include_once('/data/conf/global/global-main.inc');\n}\n\n/**\n * Include redirects\n */\nif (is_readable('/data/conf/global/global-settings.inc')) {\n  include_once('/data/conf/global/global-settings.inc');\n}\n\n/**\n * Optional system level early overrides\n */\nif (is_readable('/data/conf/settings.global.inc')) {\n  require_once \"/data/conf/settings.global.inc\";\n}\n\n/**\n * Include front-end\n */\nif (is_readable('/data/conf/global/global-front-end.inc')) {\n  include_once('/data/conf/global/global-front-end.inc');\n}\n\n/**\n * If include valkey/redis\n */\nif (is_readable('/data/conf/global/global-if-valkey.inc')) {\n  include_once('/data/conf/global/global-if-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-if-redis.inc')) {\n  include_once('/data/conf/global/global-if-redis.inc');\n}\n\n/**\n * Optional system level overrides\n */\nif (is_readable('/data/conf/override.global.inc')) {\n  require_once \"/data/conf/override.global.inc\";\n}\n\n/**\n * Include valkey/redis\n */\nif (is_readable('/data/conf/global/global-valkey.inc')) {\n  include_once('/data/conf/global/global-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-redis.inc')) {\n  include_once('/data/conf/global/global-redis.inc');\n}\n\n/**\n * Include newrelic\n */\nif (is_readable('/data/conf/global/global-newrelic.inc')) {\n  include_once('/data/conf/global/global-newrelic.inc');\n}\n\n/**\n * Include extra\n */\nif (is_readable('/data/conf/global/global-extra.inc')) {\n  include_once('/data/conf/global/global-extra.inc');\n}\n\n"
  },
  {
    "path": "aegir/conf/global/global-9.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Include ini\n */\nif (is_readable('/data/conf/global/global-ini.inc')) {\n  include_once('/data/conf/global/global-ini.inc');\n}\n\n$drupal_core = '9';\n\n/**\n * Include mode\n */\nif (is_readable('/data/conf/global/global-mode.inc')) {\n  include_once('/data/conf/global/global-mode.inc');\n}\n\n/**\n * Include main\n */\nif (is_readable('/data/conf/global/global-main.inc')) {\n  include_once('/data/conf/global/global-main.inc');\n}\n\n/**\n * Include redirects\n */\nif (is_readable('/data/conf/global/global-settings.inc')) {\n  include_once('/data/conf/global/global-settings.inc');\n}\n\n/**\n * Optional system level early overrides\n */\nif (is_readable('/data/conf/settings.global.inc')) {\n  require_once \"/data/conf/settings.global.inc\";\n}\n\n/**\n * Include front-end\n */\nif (is_readable('/data/conf/global/global-front-end.inc')) {\n  include_once('/data/conf/global/global-front-end.inc');\n}\n\n/**\n * If include valkey/redis\n */\nif (is_readable('/data/conf/global/global-if-valkey.inc')) {\n  include_once('/data/conf/global/global-if-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-if-redis.inc')) {\n  include_once('/data/conf/global/global-if-redis.inc');\n}\n\n/**\n * Optional system level overrides\n */\nif (is_readable('/data/conf/override.global.inc')) {\n  require_once \"/data/conf/override.global.inc\";\n}\n\n/**\n * Include valkey/redis\n */\nif (is_readable('/data/conf/global/global-valkey.inc')) {\n  include_once('/data/conf/global/global-valkey.inc');\n}\nelseif (is_readable('/data/conf/global/global-redis.inc')) {\n  include_once('/data/conf/global/global-redis.inc');\n}\n\n/**\n * Include newrelic\n */\nif (is_readable('/data/conf/global/global-newrelic.inc')) {\n  include_once('/data/conf/global/global-newrelic.inc');\n}\n\n/**\n * Include extra\n */\nif (is_readable('/data/conf/global/global-extra.inc')) {\n  include_once('/data/conf/global/global-extra.inc');\n}\n\n"
  },
  {
    "path": "aegir/conf/global/global-extra.inc",
    "content": "<?php # global settings.php\n\n/**\n * Activate mail_safety for sites-cron-off on the fly\n */\nif (is_readable('/data/conf/sites-cron-off.ctrl')) {\n  if ($drupal_core >= 8) {\n    $config['automated_cron.settings']['interval'] = 0;\n    $config['mail_safety.settings']['default_mail_address'] = '';\n    $config['mail_safety.settings']['enabled'] = TRUE;\n    $config['mail_safety.settings']['send_mail_to_dashboard'] = TRUE;\n    $config['mail_safety.settings']['send_mail_to_default_mail'] = FALSE;\n    $config['scheduler.settings']['lightweight_cron_access_key'] = '';\n    $config['simple_cron.settings']['interval'] = 0;\n    $config['system.cron']['key'] = '';\n    $config['system.cron']['last'] = 0;\n    $config['system.cron']['threshold']['auto'] = 0;\n    $config['ultimate_cron.job.cron_queue']['status'] = FALSE;\n    $config['ultimate_cron.settings']['scheduler'] = 'never';\n  }\n  else {\n    $conf['mail_safety_enabled'] = TRUE;\n    $conf['mail_safety_send_mail_to_dashboard'] = TRUE;\n  }\n}\n\n/**\n * Use site specific composer_manager dir\n */\nif ($all_ini['set_composer_manager_vendor_dir'] && !$is_install) {\n  if ($drupal_core >= 8) {\n    $config['composer_manager.settings']['vendor_dir'] = 'sites/' . $_SERVER['SERVER_NAME'] . '/vendor';\n  }\n  else {\n    $conf['composer_manager_vendor_dir'] = 'sites/' . $_SERVER['SERVER_NAME'] . '/vendor';\n  }\n}\n\n/**\n * Domain Access Module Paths Detection\n */\nif ($all_ini['auto_detect_domain_access_integration']) {\n  if (is_readable('sites/all/modules/domain/settings.inc')) {\n    $da_inc = 'sites/all/modules/domain/settings.inc';\n  }\n  elseif (is_readable('sites/all/modules/contrib/domain/settings.inc')) {\n    $da_inc = 'sites/all/modules/contrib/domain/settings.inc';\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/domain/settings.inc')) {\n    $da_inc = 'profiles/' . $conf['install_profile'] . 
'/modules/domain/settings.inc';\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/contrib/domain/settings.inc')) {\n    $da_inc = 'profiles/' . $conf['install_profile'] . '/modules/contrib/domain/settings.inc';\n  }\n}\n\n/**\n * Domain Access Module inc should not be loaded during installation\n */\nif ($is_install) {\n  $da_inc    = FALSE;\n}\n\n/**\n * Domain Access Module inc loading\n */\nif (!$custom_da) {\n  if ($da_inc) {\n    require_once($da_inc);\n  }\n}\n\n/**\n * Drupal for Facebook (fb)\n *\n * Important:\n * Facebook client libraries will not work properly if arg_separator.output is not &\n * The default value is &amp;. Change this in settings.php. Make the value \"&\"\n * https://drupal.org/node/205476\n */\nif (!$custom_fb && $all_ini['auto_detect_facebook_integration']) {\n  if (is_readable('sites/all/modules/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once \"sites/all/modules/fb/fb_settings.inc\";\n    $conf['fb_api_file'] = \"sites/all/modules/fb/facebook-platform/php/facebook.php\";\n  }\n  elseif (is_readable('sites/all/modules/contrib/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once \"sites/all/modules/contrib/fb/fb_settings.inc\";\n    $conf['fb_api_file'] = \"sites/all/modules/contrib/fb/facebook-platform/php/facebook.php\";\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once 'profiles/' . $conf['install_profile'] . '/modules/fb/fb_settings.inc';\n    $conf['fb_api_file'] = 'profiles/' . $conf['install_profile'] . '/modules/fb/facebook-platform/php/facebook.php';\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/contrib/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once 'profiles/' . $conf['install_profile'] . 
'/modules/contrib/fb/fb_settings.inc';\n    $conf['fb_api_file'] = 'profiles/' . $conf['install_profile'] . '/modules/contrib/fb/facebook-platform/php/facebook.php';\n  }\n}\n\n/**\n * Unset config arrays on non-dev URLs\n */\nif (!$is_dev) {\n  unset($boa_ini);\n  unset($usr_plr_ini);\n  unset($usr_loc_ini);\n  unset($all_ini);\n}\n"
  },
  {
    "path": "aegir/conf/global/global-front-end.inc",
"content": "<?php # global settings.php\n\n\n/**\n * More logic for the front-end only\n */\nif (!$is_backend && isset($_SERVER['HTTP_HOST']) &&\n    isset($_SERVER['SERVER_NAME'])) {\n\n  // PHP 5.6-safe defaults (no ??).\n  $request_uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '/';\n  $path = parse_url($request_uri, PHP_URL_PATH);\n\n  if ($path === NULL || $path === FALSE || $path === '') {\n    $path = '/';\n  }\n  // Decode only after the guard above; rawurldecode(NULL) is deprecated in PHP 8.1+.\n  $path = rawurldecode($path);\n\n  // Prevent header injection / weird values in SERVER_PROTOCOL.\n  $proto = 'HTTP/1.1';\n  if (!empty($_SERVER['SERVER_PROTOCOL']) && preg_match('#^HTTP/[0-9]\\.[0-9]$#', $_SERVER['SERVER_PROTOCOL'])) {\n    $proto = $_SERVER['SERVER_PROTOCOL'];\n  }\n\n  /**\n   * Block “node-chain” URL mutation spam\n   * Examples:\n   * /node/1771/pl/node/1771/es/node/1771/...\n   * /pl/node/1771/es/node/1771/...\n   */\n  if (preg_match('#/node/[0-9]+/.*/node/[0-9]+#i', $path)) {\n    header($proto . ' 404 Not Found');\n    exit;\n  }\n\n  /**\n   * Block excessive language-prefix chaining\n   * Examples (3+ prefixes):\n   * /pl/en/fr/de/office/...\n   */\n  if (preg_match('#^/(?:[a-z]{2}(?:-[a-z0-9]+)?/){3,}#i', $path)) {\n    header($proto . ' 404 Not Found');\n    exit;\n  }\n\n  if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) || isset($_SERVER['HTTPS'])) {\n    $conf['https'] = TRUE;\n    $request_type = ((isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') ||\n                     (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on')) ? 'SSL' : 'NONSSL';\n    if ($request_type == \"SSL\") { // we check for secure connection to set correct base_url\n      $base_url = 'https://' . 
$_SERVER['HTTP_HOST'];\n      if ($conf['install_profile'] != 'hostmaster') {\n        $_SERVER['HTTPS'] = 'on';\n        if ($drupal_core >= 7) {\n          ini_set('session.cookie_secure', TRUE);\n          if ($is_dev) {\n            header('X-Cookie-Sec: YES');\n          }\n        }\n      }\n      if ($is_dev) {\n        header('X-Local-Proto: https');\n      }\n    }\n    else {\n      if ($site_subdir && $raw_host) {\n        $base_url = 'http://' . $raw_host . '/' . $site_subdir;\n      }\n      else {\n        $base_url = 'http://' . $_SERVER['HTTP_HOST'];\n      }\n    }\n  }\n  else {\n    if ($site_subdir && $raw_host) {\n      $base_url = 'http://' . $raw_host . '/' . $site_subdir;\n    }\n    else {\n      $base_url = 'http://' . $_SERVER['HTTP_HOST'];\n    }\n  }\n\n  if ($base_url && $is_dev) {\n    header(\"X-Base-Url: \" . $base_url);\n  }\n\n  if ($site_subdir && $is_dev) {\n    header(\"X-Site-Subdir: \" . $site_subdir);\n  }\n\n  if ($all_ini['server_name_cookie_domain']) {\n    $domain = '.' . preg_replace('`^www\\.`', '', $_SERVER['SERVER_NAME']);\n  }\n  elseif ($site_subdir && isset($_SERVER['RAW_HOST'])) {\n    $domain = '.' . preg_replace('`^www\\.`', '', $_SERVER['RAW_HOST']);\n  }\n  else {\n    $domain = '.' . preg_replace('`^www\\.`', '', $_SERVER['HTTP_HOST']);\n  }\n  $domain = str_replace('..', '.', $domain);\n  if (count(explode('.', $domain)) > 2 &&\n      !is_numeric(str_replace('.', '', $domain))) {\n    ini_set('session.cookie_domain', $domain);\n    $cookie_domain = $domain;\n    header(\"X-Cookie-Domain: \" . $cookie_domain);\n  }\n\n  $this_prefix = preg_replace('`^www\\.`', '', $_SERVER['SERVER_NAME']) . '_z_';\n  if ($is_dev) {\n    header(\"X-Valkey-Prefix: \" . 
$this_prefix);\n  }\n\n  if (isset($_SERVER['REQUEST_TIME']) &&\n      isset($_SERVER['REMOTE_ADDR']) &&\n      isset($_SERVER['HTTP_USER_AGENT']) &&\n      !preg_match(\"/^\\/esi\\//\", $_SERVER['REQUEST_URI'])) {\n\n    // Determine if the site is running on HTTPS\n    $request_type = 'NONSSL';\n    if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) || isset($_SERVER['HTTPS'])) {\n      $request_type = ((isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') ||\n                       (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on')) ? 'SSL' : 'NONSSL';\n    }\n    if ($request_type == \"SSL\") {\n      $is_https = TRUE;\n      if ($is_dev) {\n        header('X-Request-Type:' . $request_type);\n      }\n    }\n    else {\n      $is_https = FALSE;\n      if ($is_dev) {\n        header('X-Request-Type:' . $request_type);\n      }\n    }\n\n    // Create a unique identifier for the request\n    $identity = $_SERVER['REQUEST_TIME'] . $_SERVER['REMOTE_ADDR'] . $_SERVER['SERVER_NAME'] . $_SERVER['HTTP_USER_AGENT'];\n    $identity = 'BD' . md5($identity);\n    if ($is_dev) {\n      header('X-Identity:' . $identity);\n    }\n\n    if ($drupal_core >= 8) {\n      // Check if the user is logged in by looking for the session cookie.\n      // The session cookie name starts with \"SESS\" or \"SSESS\" followed by a hash.\n      // This check is not site specific in Drupal 8+ like it is in Drupal 7\n      // or Drupal 6, but should be sufficient for the intended use case below.\n      $cookie_prefix = ini_get('session.cookie_secure') ? 'SSESS' : 'SESS';\n      $is_logged_in = FALSE;\n      foreach ($_COOKIE as $key => $value) {\n        // Strict comparison: strpos() returns FALSE when not found, and FALSE == 0.\n        if (strpos($key, $cookie_prefix) === 0) {\n          $is_logged_in = TRUE;\n          break;\n        }\n      }\n      if ($is_dev) {\n        header('X-Cookie-Prefix-A:' . $cookie_prefix);\n        header('X-Is-Logged-In-A:' . 
$is_logged_in);\n      }\n    }\n    elseif ($drupal_core == 7) {\n      // For Drupal 7 use sha256 hash and cookie prefix based on session.cookie_secure\n      $cookie_prefix = ini_get('session.cookie_secure') ? 'SSESS' : 'SESS';\n      $test_sess_name = $cookie_prefix . substr(hash('sha256', $cookie_domain), 0, 32);\n      if ($is_dev) {\n        header('X-Cookie-Prefix-B:' . $cookie_prefix);\n        header('X-Test-Sess-Name-B:' . $test_sess_name);\n      }\n    }\n    else {\n      // For Drupal 6 use md5 hash and SESS prefix only\n      $cookie_prefix = 'SESS';\n      $test_sess_name = $cookie_prefix . md5($cookie_domain);\n      if ($is_dev) {\n        header('X-Cookie-Prefix-C:' . $cookie_prefix);\n        header('X-Test-Sess-Name-C:' . $test_sess_name);\n      }\n    }\n\n    // Check if the session cookie is present; only one of $test_sess_name\n    // and $is_logged_in is set, depending on the core branch above\n    if ((isset($test_sess_name) && isset($_COOKIE[$test_sess_name])) || !empty($is_logged_in)) {\n      $is_anon = 'LOGGED';\n    }\n    else {\n      $is_anon = 'ANONYMOUS';\n    }\n    if ($is_dev) {\n      header('X-Is-Anon:' . $is_anon);\n    }\n\n    // Redirect not-logged-in visitors to the homepage to protect admin URLs from bots\n    if ($is_anon == 'ANONYMOUS') {\n      if (preg_match(\"/\\/(?:node\\/[0-9]+\\/edit|node\\/add)/\", $_SERVER['REQUEST_URI'])) {\n        if (empty($all_ini['allow_anon_node_add'])) {\n          header(\"Location: \" . $base_url . \"/\", true, 301);\n          exit;\n        }\n      }\n      if (preg_match(\"/^\\/(?:[a-z]{2}\\/)?(?:admin|logout|privatemsg|approve)/\", $_SERVER['REQUEST_URI'])) {\n        if (empty($all_ini['disable_admin_dos_protection'])) {\n          header(\"Location: \" . $base_url . 
\"/\", true, 301);\n          exit;\n        }\n      }\n    }\n\n    // Additional logic for caching or other needs\n    if ($is_anon == 'ANONYMOUS' && !empty($all_ini['speed_booster_anon_cache_ttl']) && preg_match(\"/^[0-9]{2,}$/\", $all_ini['speed_booster_anon_cache_ttl'])) {\n      if ($all_ini['speed_booster_anon_cache_ttl'] > 10) {\n        $expire_in_seconds = $all_ini['speed_booster_anon_cache_ttl'];\n        header('X-Limit-Booster:' . $all_ini['speed_booster_anon_cache_ttl']);\n      }\n    }\n\n    // Prevent turning the feature server site into a spam machine\n    // Disable self-registration also on hostmaster\n    if ($conf['install_profile'] == 'feature_server' ||\n        $conf['install_profile'] == 'hostmaster') {\n      $conf['user_register'] = 0; // Force \"Only site administrators can create new user accounts\"\n    }\n    if (!$is_bot && !$high_traffic) {\n      if (preg_match(\"/^\\/(?:[a-z]{2}\\/)?(?:admin|cart|checkout|logout|privatemsg)/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/\\/(?:node\\/[0-9]+\\/edit|node\\/add|comment\\/reply|approve|ajax_comments|commerce_currency_select)/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/(?:^dev\\.|\\.dev\\.|\\.devel\\.)/\", $_SERVER['HTTP_HOST'])) {\n        $expire_in_seconds = '1';\n        header('X-Limit-Booster: 1');\n      }\n      if (isset($_SERVER['REQUEST_URI']) &&\n          preg_match(\"/(?:x-progress-id|ahah|progress\\/|autocomplete|ajax|batch|js\\/.*)/i\", $_SERVER['REQUEST_URI'])) {\n        $expire_in_seconds = '0';\n        if ($is_dev) {\n          header('X-Skip-Booster: AjaxRU');\n        }\n      }\n      if (isset($_SERVER['QUERY_STRING']) &&\n          preg_match(\"/(?:x-progress-id|ahah|progress\\/|autocomplete|ajax|batch|js\\/.*)/i\", $_SERVER['QUERY_STRING'])) {\n        $expire_in_seconds = '0';\n        if ($is_dev) {\n          header('X-Skip-Booster: AjaxQS');\n        }\n      }\n      if (isset($_SERVER['REQUEST_METHOD']) &&\n          
$_SERVER['REQUEST_METHOD'] == 'POST') {\n        if (!isset($_COOKIE['NoCacheID'])) {\n          $lifetime = '15';\n          setcookie('NoCacheID', 'POST' . $identity, $_SERVER['REQUEST_TIME'] + $lifetime, '/', $cookie_domain);\n        }\n        $expire_in_seconds = '0';\n        if ($is_dev) {\n          header('X-Skip-Booster: PostRM');\n        }\n      }\n    }\n    if ($is_bot) {\n      if (!preg_match(\"/Pingdom/i\", $_SERVER['HTTP_USER_AGENT']) &&\n          !preg_match(\"/(?:rss|feed)/i\", $_SERVER['REQUEST_URI'])) {\n        $expire_in_seconds = '3600';\n        if ($is_dev) {\n          header('X-Bot-Booster: 3600');\n        }\n      }\n    }\n    if ($conf['install_profile'] != 'hostmaster' && isset($expire_in_seconds) && ($expire_in_seconds > -1)) {\n      header(\"X-Accel-Expires: \" . $expire_in_seconds);\n      if ($expire_in_seconds > -1 && $expire_in_seconds < 2) {\n        $conf['cache'] = 0; // Disable page caching on the fly\n      }\n    }\n  }\n}\n\n\n/**\n * Support files/styles with short URIs also for files not generated yet\n */\nif (isset($_SERVER['REQUEST_URI']) && !empty($base_url) &&\n    preg_match(\"/^\\/files\\/styles\\//\", $_SERVER['REQUEST_URI'])) {\n  header(\"Location: \" . $base_url . \"/sites/\" . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'], true, 301);\n  exit;\n}\n"
  },
  {
    "path": "aegir/conf/global/global-if-redis.inc",
    "content": "<?php # global settings.php\n\n\n/* ---------------- Feature switch -------------------------------------- */\nif ($drupal_core >= 6) {\n  $use_valkey = TRUE;\n}\n\nif (isset($_SERVER['SERVER_NAME'])) {\n  if ($all_ini['valkey_cache_disable'] || $all_ini['redis_cache_disable']) {\n    $use_valkey = FALSE;\n  }\n}\n\nif (!$is_bot && isset($_SERVER['REQUEST_URI'])) {\n  if (preg_match(\"/noredis=1/\", $_SERVER['REQUEST_URI'])) {\n    $use_valkey = FALSE;\n  }\n}\n\n/* ---------------- Defaults -------------------------------------------- */\n$valkey_up = FALSE;\n\n/* ---------------- Connection targets ---------------------------------- */\n$valkey_socket_path = '/run/valkey/valkey.sock';\n$valkey_host        = '127.0.0.1';\n$valkey_port        = 6379;\n$valkey_pass_file   = '/data/conf/valkey/pass.inc';\n\n/* ---------------- Timeouts & backoff ---------------------------------- */\n$connect_timeout_s  = 0.2;   // short and non-blocking feel\n$read_timeout_s     = 0.2;   // keep calls snappy\n$backoff_ttl_s      = 60;    // do not retry within this window after a failure\n$flag_dir_run       = '/var/tmp/fpm';\n$flag_file_fallback = '/data/conf/arch/valkey.disabled.flag'; // fallback\n\n/* ---------------- Optional debug log ---------------------------------- */\n// Set to an absolute path to enable lightweight probe logging.\n// Example: '/var/tmp/fpm/valkey-fallback.log'\n$redis_debug_log    = '';\n\n/* ---------------- Helpers (filesystem only) ---------------------------- */\nfunction _valkey_backoff_flag_path($flag_dir_run, $fallback) {\n  $path = $fallback;\n  if (is_dir($flag_dir_run)) {\n    if (is_writable($flag_dir_run)) {\n      $path = rtrim($flag_dir_run, '/').'/valkey.disabled.flag';\n    }\n  }\n  return $path;\n}\n\nfunction _valkey_backoff_is_active($flag_path, $ttl) {\n  $active = FALSE;\n  if (is_file($flag_path)) {\n    $age = time() - @filemtime($flag_path);\n    if ($age >= 0 && $age < $ttl) {\n      $active = TRUE;\n    }\n  }\n 
 return $active;\n}\n\nfunction _valkey_backoff_touch($flag_path) {\n  @touch($flag_path);\n}\n\nfunction _valkey_backoff_clear($flag_path) {\n  if (is_file($flag_path)) {\n    @unlink($flag_path);\n  }\n}\n\nfunction _valkey_dbg_write($log_path, $line) {\n  if (!empty($log_path)) {\n    $msg = date('c').' '.$line.\"\\n\";\n    @file_put_contents($log_path, $msg, FILE_APPEND);\n  }\n}\n\n/* ------------------- Probe Valkey once with guard --------------------- */\n$flag_path = _valkey_backoff_flag_path($flag_dir_run, $flag_file_fallback);\n$skip_probe = _valkey_backoff_is_active($flag_path, $backoff_ttl_s);\n\nif ($use_valkey) {\n  if (!$skip_probe) {\n    if (class_exists('Redis')) {\n      $r = new Redis();\n      $connected = FALSE;\n      $last_reason = 'init';\n\n      // Try socket first.\n      if (!empty($valkey_socket_path) && @is_readable($valkey_socket_path)) {\n        try {\n          $connected = $r->connect($valkey_socket_path);\n        } catch (Exception $e) {\n          $connected = FALSE;\n          $last_reason = 'connect-socket-exception';\n        }\n      }\n\n      // Fallback to TCP.\n      if (!$connected) {\n        try {\n          $connected = $r->connect($valkey_host, $valkey_port, $connect_timeout_s);\n        } catch (Exception $e) {\n          $connected = FALSE;\n          $last_reason = 'connect-tcp-exception';\n        }\n      }\n\n      if ($connected) {\n        if (defined('Redis::OPT_READ_TIMEOUT')) {\n          $r->setOption(Redis::OPT_READ_TIMEOUT, $read_timeout_s);\n        }\n\n        // Authenticate if password file exists.\n        $auth_pass = 'isfoobared';\n        if (is_file($valkey_pass_file)) {\n          $auth_pass = trim((string) @file_get_contents($valkey_pass_file));\n        }\n        if ($auth_pass !== '') {\n          try {\n            if (!$r->auth($auth_pass)) {\n              $connected = FALSE;\n              $last_reason = 'auth-failed';\n            }\n          } catch (Exception $e) {\n          
  $connected = FALSE;\n            $last_reason = 'auth-exception';\n          }\n        }\n\n        // Verify ping.\n        if ($connected) {\n          try {\n            $pong = $r->ping();\n            if ((is_string($pong) && stripos($pong, 'PONG') !== FALSE) || $pong === TRUE) {\n              $valkey_up = TRUE;\n            } else {\n              $valkey_up = FALSE;\n              $last_reason = 'ping-not-ok';\n            }\n          } catch (Exception $e) {\n            $valkey_up = FALSE;\n            $last_reason = 'ping-exception';\n          }\n        }\n\n        if ($valkey_up) {\n          _valkey_backoff_clear($flag_path);\n          _valkey_dbg_write($redis_debug_log, 'VALKEY UP action=clear flag='.$flag_path.' reason=ok');\n        } else {\n          _valkey_backoff_touch($flag_path);\n          _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason='.(string) $last_reason);\n        }\n\n        try {\n          $r->close();\n        } catch (Exception $e) {\n          // ignore\n        }\n      } else {\n        _valkey_backoff_touch($flag_path);\n        _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason='.(string) $last_reason);\n      }\n    } else {\n      // phpredis extension not available.\n      $valkey_up = FALSE;\n      _valkey_backoff_touch($flag_path);\n      _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason=no-phpredis');\n    }\n  } else {\n    $valkey_up = FALSE;\n    _valkey_dbg_write($redis_debug_log, 'VALKEY SKIP reason=backoff-active flag='.$flag_path);\n  }\n}\n\n/* ---------------- Diagnostics & final guard ---------------------------- */\nif (!empty($is_dev)) {\n  if (empty($is_backend)) {\n    if ($use_valkey && $valkey_up) {\n      header('X-Allow-Valkey: YES');\n    } else {\n      header('X-Allow-Valkey: NO');\n    }\n  }\n}\n\nif (!empty($is_install)) {\n  $use_valkey = FALSE;\n}\n"
  },
  {
    "path": "aegir/conf/global/global-if-valkey.inc",
    "content": "<?php # global settings.php\n\n\n/* ---------------- Feature switch -------------------------------------- */\nif ($drupal_core >= 6) {\n  $use_valkey = TRUE;\n}\n\nif (isset($_SERVER['SERVER_NAME'])) {\n  if (!empty($all_ini['valkey_cache_disable']) || !empty($all_ini['redis_cache_disable'])) {\n    $use_valkey = FALSE;\n  }\n}\n\nif (!$is_bot && isset($_SERVER['REQUEST_URI'])) {\n  if (preg_match(\"/noredis=1/\", $_SERVER['REQUEST_URI'])) {\n    $use_valkey = FALSE;\n  }\n}\n\n/* ---------------- Defaults -------------------------------------------- */\n$valkey_up = FALSE;\n\n/* ---------------- Connection targets ---------------------------------- */\n$valkey_socket_path = '/run/valkey/valkey.sock';\n$valkey_host        = '127.0.0.1';\n$valkey_port        = 6379;\n$valkey_pass_file   = '/data/conf/valkey/pass.inc';\n\n/* ---------------- Timeouts & backoff ---------------------------------- */\n$connect_timeout_s  = 0.2;   // keep connect attempts short and non-blocking\n$read_timeout_s     = 0.2;   // keep calls snappy\n$backoff_ttl_s      = 60;    // do not retry within this window after a failure\n$flag_dir_run       = '/var/tmp/fpm';\n$flag_file_fallback = '/data/conf/arch/valkey.disabled.flag'; // fallback\n\n/* ---------------- Optional debug log ---------------------------------- */\n// Set to an absolute path to enable lightweight probe logging.\n// Example: '/var/tmp/fpm/valkey-fallback.log'\n$redis_debug_log    = '';\n\n/* ---------------- Helpers (filesystem only) ---------------------------- */\nfunction _valkey_backoff_flag_path($flag_dir_run, $fallback) {\n  $path = $fallback;\n  if (is_dir($flag_dir_run)) {\n    if (is_writable($flag_dir_run)) {\n      $path = rtrim($flag_dir_run, '/').'/valkey.disabled.flag';\n    }\n  }\n  return $path;\n}\n\nfunction _valkey_backoff_is_active($flag_path, $ttl) {\n  $active = FALSE;\n  if (is_file($flag_path)) {\n    $age = time() - @filemtime($flag_path);\n    if ($age >= 0 && $age < $ttl) {\n      $active = TRUE;\n    }\n  }\n 
 return $active;\n}\n\nfunction _valkey_backoff_touch($flag_path) {\n  @touch($flag_path);\n}\n\nfunction _valkey_backoff_clear($flag_path) {\n  if (is_file($flag_path)) {\n    @unlink($flag_path);\n  }\n}\n\nfunction _valkey_dbg_write($log_path, $line) {\n  if (!empty($log_path)) {\n    $msg = date('c').' '.$line.\"\\n\";\n    @file_put_contents($log_path, $msg, FILE_APPEND);\n  }\n}\n\n/* ------------------- Probe Valkey once with guard --------------------- */\n$flag_path = _valkey_backoff_flag_path($flag_dir_run, $flag_file_fallback);\n$skip_probe = _valkey_backoff_is_active($flag_path, $backoff_ttl_s);\n\nif ($use_valkey) {\n  if (!$skip_probe) {\n    if (class_exists('Redis')) {\n      $r = new Redis();\n      $connected = FALSE;\n      $last_reason = 'init';\n\n      // Try socket first.\n      if (!empty($valkey_socket_path) && @is_readable($valkey_socket_path)) {\n        try {\n          $connected = $r->connect($valkey_socket_path);\n        } catch (Exception $e) {\n          $connected = FALSE;\n          $last_reason = 'connect-socket-exception';\n        }\n      }\n\n      // Fallback to TCP.\n      if (!$connected) {\n        try {\n          $connected = $r->connect($valkey_host, $valkey_port, $connect_timeout_s);\n        } catch (Exception $e) {\n          $connected = FALSE;\n          $last_reason = 'connect-tcp-exception';\n        }\n      }\n\n      if ($connected) {\n        if (defined('Redis::OPT_READ_TIMEOUT')) {\n          $r->setOption(Redis::OPT_READ_TIMEOUT, $read_timeout_s);\n        }\n\n        // Authenticate only when the password file provides a non-empty password.\n        $auth_pass = '';\n        if (is_file($valkey_pass_file)) {\n          $auth_pass = trim((string) @file_get_contents($valkey_pass_file));\n        }\n        if ($auth_pass !== '') {\n          try {\n            if (!$r->auth($auth_pass)) {\n              $connected = FALSE;\n              $last_reason = 'auth-failed';\n            }\n          } catch (Exception $e) {\n          
  $connected = FALSE;\n            $last_reason = 'auth-exception';\n          }\n        }\n\n        // Verify ping.\n        if ($connected) {\n          try {\n            $pong = $r->ping();\n            if ((is_string($pong) && stripos($pong, 'PONG') !== FALSE) || $pong === TRUE) {\n              $valkey_up = TRUE;\n            } else {\n              $valkey_up = FALSE;\n              $last_reason = 'ping-not-ok';\n            }\n          } catch (Exception $e) {\n            $valkey_up = FALSE;\n            $last_reason = 'ping-exception';\n          }\n        }\n\n        if ($valkey_up) {\n          _valkey_backoff_clear($flag_path);\n          _valkey_dbg_write($redis_debug_log, 'VALKEY UP action=clear flag='.$flag_path.' reason=ok');\n        } else {\n          _valkey_backoff_touch($flag_path);\n          _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason='.(string) $last_reason);\n        }\n\n        try {\n          $r->close();\n        } catch (Exception $e) {\n          // ignore\n        }\n      } else {\n        _valkey_backoff_touch($flag_path);\n        _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason='.(string) $last_reason);\n      }\n    } else {\n      // phpredis extension not available.\n      $valkey_up = FALSE;\n      _valkey_backoff_touch($flag_path);\n      _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason=no-phpredis');\n    }\n  } else {\n    $valkey_up = FALSE;\n    _valkey_dbg_write($redis_debug_log, 'VALKEY SKIP reason=backoff-active flag='.$flag_path);\n  }\n}\n\n/* ---------------- Diagnostics & final guard ---------------------------- */\nif (!empty($is_dev)) {\n  if (empty($is_backend)) {\n    if ($use_valkey && $valkey_up) {\n      header('X-Allow-Valkey: YES');\n    } else {\n      header('X-Allow-Valkey: NO');\n    }\n  }\n}\n\nif (!empty($is_install)) {\n  $use_valkey = FALSE;\n}\n"
  },
  {
    "path": "aegir/conf/global/global-ini.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Core versions init\n */\n$backdropcms   = FALSE;\n$drupal_core   = FALSE;\n$drupal_id     = FALSE;\n\n\n/**\n * Vars init\n */\n$custom_cache  = FALSE;\n$custom_da     = FALSE;\n$custom_fb     = FALSE;\n$da_inc        = FALSE;\n$deny_anon     = FALSE;\n$hidden_uri    = FALSE;\n$high_traffic  = FALSE;\n$ini_loc_src   = FALSE;\n$ini_plr_src   = FALSE;\n$is_backend    = FALSE;\n$is_bot        = FALSE;\n$is_dev        = FALSE;\n$is_install    = FALSE;\n$is_tmp        = FALSE;\n$local_req     = FALSE;\n$no_dns        = FALSE;\n$raw_host      = FALSE;\n$redis_comprs  = FALSE;\n$redis_lock    = FALSE;\n$redis_path    = FALSE;\n$site_subdir   = FALSE;\n$use_auto_se   = FALSE;\n$use_cache_ct  = FALSE;\n$use_valkey    = FALSE;\n$usr_loc_ini   = FALSE;\n$usr_plr_ini   = FALSE;\n$valkey_up     = FALSE;\n\n\n/**\n * BOA INI defaults\n */\n$boa_ini = array(\n  'session_cookie_ttl' => '86400',\n  'session_gc_eol' => '86400',\n  'redis_use_modern' => TRUE,\n  'redis_flush_forced_mode' => TRUE,\n  'redis_lock_enable' => TRUE,\n  'redis_path_enable' => TRUE,\n  'redis_scan_enable' => FALSE,\n  'redis_cache_disable' => FALSE,\n  'redis_old_nine_mode' => FALSE,\n  'redis_old_eight_mode' => FALSE,\n  'sql_conversion_mode' => FALSE,\n  'enable_strict_user_register_protection' => FALSE,\n  'entitycache_dont_enable' => FALSE,\n  'views_cache_bully_dont_enable' => FALSE,\n  'views_content_cache_dont_enable' => FALSE,\n  'autoslave_enable' => FALSE,\n  'cache_consistent_enable' => FALSE,\n  'redis_exclude_bins' => FALSE,\n  'speed_booster_anon_cache_ttl' => FALSE,\n  'allow_anon_node_add' => FALSE,\n  'enable_newrelic_integration' => FALSE,\n  'disable_admin_dos_protection' => FALSE,\n  'ignore_user_register_protection' => FALSE,\n  'allow_private_file_downloads' => FALSE,\n  'server_name_cookie_domain' => FALSE,\n  'auto_detect_facebook_integration' => TRUE,      // For backward compatibility until next release, then FALSE\n  
'auto_detect_domain_access_integration' => TRUE, // For backward compatibility until next release, then FALSE\n  'advagg_auto_configuration' => FALSE,            // Will be set to TRUE in boa_site_control.ini if the module is enabled\n  'disable_drupal_page_cache' => FALSE,            // FALSE for backward compatibility and max performance\n  'set_composer_manager_vendor_dir' => FALSE,      // FALSE by default to not break site installation depending on custom value\n  'valkey_cache_disable' => FALSE,                 // Valkey-era alias of redis_cache_disable, checked in global-if-valkey.inc\n);\n\n"
  },
  {
    "path": "aegir/conf/global/global-main.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Bots protection for all tmp/dev sites - works also for aliases\n */\nif ($is_bot) {\n  if ($is_tmp) {\n    // Ignore known bots\n    header('X-Accel-Expires: 300');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * Site cron protection - cron works only for live sites\n */\nif (isset($_SERVER['REQUEST_URI']) &&\n    (preg_match(\"/^\\/cron\\.php/\", $_SERVER['REQUEST_URI']) ||\n     preg_match(\"/^\\/cron\\//\", $_SERVER['REQUEST_URI']))) {\n  if (($is_tmp) || (file_exists('/data/conf/sites-cron-off.ctrl'))) {\n    // Ignore cron requests\n    header('X-Accel-Expires: 300');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * Support real IP detection behind proxies (Cloudflare, Akamai, etc.)\n */\nif (isset($_SERVER['REMOTE_ADDR']) && php_sapi_name() !== 'cli') {\n\n  $real_ip = null;\n  $proxy_header = null;\n  $proxy_ip = null;\n\n  // Handle possible comma-separated proxy chain in REMOTE_ADDR\n  $raw_proxy_ip = $_SERVER['REMOTE_ADDR'];\n  $proxy_ip_list = array_map('trim', explode(',', $raw_proxy_ip));\n  $proxy_ip_list = array_filter($proxy_ip_list, function ($ip) {\n    return filter_var($ip, FILTER_VALIDATE_IP);\n  });\n\n  if (!empty($proxy_ip_list)) {\n    $proxy_ip = end($proxy_ip_list); // Trust the last proxy in the chain\n  }\n\n  // Detect real client IP using known proxy headers\n  if (!empty($_SERVER['HTTP_CF_CONNECTING_IP'])) {\n    // Cloudflare (single IP, trusted)\n    $real_ip = trim($_SERVER['HTTP_CF_CONNECTING_IP']);\n    $proxy_header = 'CF-Connecting-IP';\n  }\n  elseif (!empty($_SERVER['HTTP_TRUE_CLIENT_IP'])) {\n    // Akamai / Fastly (single IP)\n    $real_ip = trim($_SERVER['HTTP_TRUE_CLIENT_IP']);\n    $proxy_header = 'True-Client-IP';\n  }\n  elseif (!empty($_SERVER['HTTP_X_REAL_IP'])) {\n    // NGINX-style (single IP)\n    $real_ip = trim($_SERVER['HTTP_X_REAL_IP']);\n    $proxy_header = 'X-Real-IP';\n  }\n  
elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {\n    // Generic proxy chain (may contain multiple IPs)\n    $forwarded_ips = array_map('trim', explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']));\n    $forwarded_ips = array_filter($forwarded_ips, function ($ip) {\n      return filter_var($ip, FILTER_VALIDATE_IP);\n    });\n    if (!empty($forwarded_ips)) {\n      $real_ip = reset($forwarded_ips); // First valid entry = original client IP (array_filter preserves keys)\n      $proxy_header = 'X-Forwarded-For';\n    }\n  }\n\n  // Apply the final values\n  if (!empty($real_ip) && !empty($proxy_ip) && !empty($proxy_header)) {\n    $_SERVER['REMOTE_ADDR'] = $real_ip;\n\n    if (isset($drupal_core) && $drupal_core >= 8) {\n      $settings['reverse_proxy'] = TRUE;\n      $settings['reverse_proxy_header'] = $proxy_header;\n      $settings['reverse_proxy_addresses'] = array($proxy_ip);\n    }\n  }\n}\n\n\n/**\n * The nodns protection\n */\nif ($no_dns) {\n  if ($local_req) {\n    // Allow local requests\n    if (!$is_backend && isset($_SERVER['REMOTE_ADDR'])) {\n      header(\"X-Local-Y: \" . $_SERVER['REMOTE_ADDR']);\n    }\n  }\n  else {\n    // Ignore remote requests\n    header('X-Accel-Expires: 60');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * The hidden uri protection\n */\nif ($hidden_uri) {\n  if ($local_req) {\n    // Allow local requests to hidden uri\n    if (!$is_backend && isset($_SERVER['REMOTE_ADDR'])) {\n      header(\"X-Local-URI-Y: \" . 
$_SERVER['REMOTE_ADDR']);\n    }\n  }\n  else {\n    // Ignore remote requests\n    header('X-Accel-Expires: 60');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * Use Ægir/BOA specific MAIN_SITE_NAME instead of possibly fake SERVER_NAME\n */\nif (isset($_SERVER['MAIN_SITE_NAME'])) {\n  $_SERVER['SERVER_NAME'] = $_SERVER['MAIN_SITE_NAME'];\n}\n\n\n/**\n * Set MAIN_SITE_NAME to match SERVER_NAME, if MAIN_SITE_NAME is not set\n */\nif (!isset($_SERVER['MAIN_SITE_NAME']) && isset($_SERVER['SERVER_NAME'])) {\n  $_SERVER['MAIN_SITE_NAME'] = $_SERVER['SERVER_NAME'];\n}\n\n\n/**\n * Required for proper Valkey/Redis support on command line / via Drush\n */\nif (isset($_SERVER['HTTP_HOST']) && !isset($_SERVER['SERVER_NAME'])) {\n  $_SERVER['SERVER_NAME'] = $_SERVER['HTTP_HOST'];\n}\n\n\n/**\n * Force backward compatible SERVER_SOFTWARE\n */\nif (!$is_backend) {\n  if (isset($_SERVER['SERVER_SOFTWARE']) &&\n      !preg_match(\"/ApacheSolarisNginx/i\", $_SERVER['SERVER_SOFTWARE'])) {\n    $_SERVER['SERVER_SOFTWARE'] = 'ApacheSolarisNginx/1.29.8';\n  }\n}\n\n\n/**\n * Early bots redirect on protected URLs\n */\nif (!$is_backend) {\n  if (isset($_SERVER['HTTP_HOST']) && $is_bot) {\n    if (preg_match(\"/(?:^tmp\\.|\\.test\\.|\\.tmp\\.)/i\", $_SERVER['HTTP_HOST'])) {\n      // Deny known search bots on ^(tmp|foo.(tmp|test)).domain subdomains\n      header('X-Accel-Expires: 60');\n      header(\"Location: http://www.aegirproject.org/\", true, 301);\n      exit;\n    }\n    elseif (preg_match(\"/\\.(?:host8|boa|aegir|o8)\\.(?:biz|io|cc)$/i\", $_SERVER['HTTP_HOST'])) {\n      // Deny known search bots on some protected CI subdomains\n      header('X-Accel-Expires: 60');\n      header(\"Location: https://omega8.cc/\", true, 301);\n      exit;\n    }\n  }\n}\n\n\n/**\n * Disable reporting errors by default - enable later only for foo.dev.domain\n */\nerror_reporting(0);\n\n\n/**\n * Hostmaster specific settings\n */\nif 
($conf['install_profile'] ?? '') == 'hostmaster') {\n  $conf['hosting_require_disable_before_delete'] = 0;\n  $conf['hosting_task_refresh_timeout'] = 5555;\n  $conf['theme_link'] = FALSE;\n  $conf['cache'] = 0;\n  if (!$is_backend && isset($_SERVER['HTTP_USER_AGENT'])) {\n    $conf['environment_indicator_overwrite'] = TRUE;\n    $conf['environment_indicator_overwritten_position'] = 'top';\n    if (is_readable('/data/conf/development-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Development';\n      $conf['environment_indicator_overwritten_color'] = '#00AA00'; // Green\n    }\n    elseif (is_readable('/data/conf/staging-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Staging';\n      $conf['environment_indicator_overwritten_color'] = '#FFCC00'; // Yellow\n    }\n    elseif (is_readable('/data/conf/production-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Production';\n      $conf['environment_indicator_overwritten_color'] = '#CC0000'; // Red\n    }\n    elseif (is_readable('/data/conf/testing-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Testing';\n      $conf['environment_indicator_overwritten_color'] = '#FF69B4'; // Hot Pink\n      //$conf['environment_indicator_overwritten_color'] = '#FFC0CB'; // Light Pink\n    }\n    else {\n      $conf['environment_indicator_overwritten_name'] = 'Production';\n      $conf['environment_indicator_overwritten_color'] = '#CC0000'; // Red\n    }\n    ini_set('session.cookie_lifetime', 0); // Force log-out on browser quit\n    header('X-Accel-Expires: 1');\n    if (!file_exists('/data/conf/no-https-aegir.inc')) {\n      $request_type = (($_SERVER['HTTP_X_FORWARDED_PROTO'] ?? '') == 'https' ||\n      ($_SERVER['HTTPS'] ?? '') == 'on') ? 
'SSL' : 'NONSSL';\n      if ($request_type != \"SSL\" &&\n          !preg_match(\"/^\\/cron\\.php/\", $_SERVER['REQUEST_URI'])) { // we force secure connection here\n        header('X-Accel-Expires: 5');\n        header(\"Location: https://\" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);\n        exit;\n      }\n    }\n    if (isset($_SERVER['HTTP_HOST']) &&\n        preg_match(\"/\\.(?:host8|boa|aegir|o8)\\.(?:biz|io|cc)$/i\", $_SERVER['HTTP_HOST'])) {\n      if (preg_match(\"/^\\/admin\\/user\\/user\\/create/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/^\\/node\\/add\\/server/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/^\\/node\\/(?:1|2|4|5|7|8|10)\\/(?:edit|delete)/\", $_SERVER['REQUEST_URI'])) {\n        header('X-Accel-Expires: 5');\n        header(\"Location: https://\" . $_SERVER['HTTP_HOST'] . \"/hosting/sites\", true, 301);\n        exit;\n      }\n    }\n  }\n}\n\n\n/**\n * Optional site and platform level settings defined in the ini files\n * Note: the site-level ini file takes precedence over platform level ini\n */\n$all_ini = $boa_ini;\nif (is_readable('sites/all/modules/boa_platform_control.ini')) {\n  $ini_plr_src = 'sites/all/modules/boa_platform_control.ini';\n}\nif ($ini_plr_src) {\n  $usr_plr_ini = array();\n  $usr_plr_ini = parse_ini_file($ini_plr_src);\n}\nif (is_readable('sites/' . $_SERVER['SERVER_NAME'] . '/modules/boa_site_control.ini')) {\n  $ini_loc_src = 'sites/' . $_SERVER['SERVER_NAME'] . '/modules/boa_site_control.ini';\n}\nif ($ini_loc_src) {\n  $usr_loc_ini = array();\n  $usr_loc_ini = parse_ini_file($ini_loc_src);\n}\nif (is_array($usr_plr_ini) && $usr_plr_ini) {\n  $all_ini = array_merge($all_ini, $usr_plr_ini);\n}\nif (is_array($usr_loc_ini) && $usr_loc_ini) {\n  $all_ini = array_merge($all_ini, $usr_loc_ini);\n}\n\n\n/**\n * Display All Active INI Values on .dev. URL\n */\nif (is_array($all_ini) && $is_dev && !$is_backend) {\n  if ($ini_plr_src) {\n    header(\"X-Ini-Plr-Src: \" . 
$ini_plr_src);\n  }\n  if ($ini_loc_src) {\n    header(\"X-Ini-Loc-Src: \" . $ini_loc_src);\n  }\n  if (!$ini_plr_src && !$ini_loc_src) {\n    header(\"X-Ini-Src: BOA-Default\");\n  }\n  header(\"X-Ini-Valkey-Use-Modern: \" . $all_ini['redis_use_modern']);\n  header(\"X-Ini-Valkey-Flush-Forced-Mode: \" . $all_ini['redis_flush_forced_mode']);\n  header(\"X-Ini-Valkey-Lock-Enable: \" . $all_ini['redis_lock_enable']);\n  header(\"X-Ini-Valkey-Path-Enable: \" . $all_ini['redis_path_enable']);\n  header(\"X-Ini-Valkey-Scan-Enable: \" . $all_ini['redis_scan_enable']);\n  header(\"X-Ini-Valkey-Old-Nine-Mode: \" . $all_ini['redis_old_nine_mode']);\n  header(\"X-Ini-Valkey-Old-Eight-Mode: \" . $all_ini['redis_old_eight_mode']);\n  header(\"X-Ini-Valkey-Cache-Disable: \" . $all_ini['redis_cache_disable']);\n  header(\"X-Ini-Valkey-Exclude-Bins: \" . $all_ini['redis_exclude_bins']);\n  header(\"X-Ini-Speed-Booster-Anon-Cache-Ttl: \" . $all_ini['speed_booster_anon_cache_ttl']);\n  header(\"X-Ini-Allow-Anon-Node-Add: \" . $all_ini['allow_anon_node_add']);\n  header(\"X-Ini-Enable-NewRelic-Integration: \" . $all_ini['enable_newrelic_integration']);\n  header(\"X-Ini-Disable-Admin-Dos-Protection: \" . $all_ini['disable_admin_dos_protection']);\n  header(\"X-Ini-Allow-Private-File-Downloads: \" . $all_ini['allow_private_file_downloads']);\n  header(\"X-Ini-Server-Name-Cookie-Domain: \" . $all_ini['server_name_cookie_domain']);\n  header(\"X-Ini-Auto-Detect-Facebook-Integration: \" . $all_ini['auto_detect_facebook_integration']);\n  header(\"X-Ini-Auto-Detect-Domain-Access-Integration: \" . $all_ini['auto_detect_domain_access_integration']);\n  header(\"X-Ini-Advagg-Auto-Configuration: \" . $all_ini['advagg_auto_configuration']);\n  header(\"X-Ini-Sql-Conversion-Mode: \" . $all_ini['sql_conversion_mode']);\n  header(\"X-Ini-Enable-Strict-User-Register-Protection: \" . $all_ini['enable_strict_user_register_protection']);\n  header(\"X-Ini-Entitycache-Dont-Enable: \" . 
$all_ini['entitycache_dont_enable']);\n  header(\"X-Ini-Views-Cache-Bully-Dont-Enable: \" . $all_ini['views_cache_bully_dont_enable']);\n  header(\"X-Ini-Views-Content-Cache-Dont-Enable: \" . $all_ini['views_content_cache_dont_enable']);\n  header(\"X-Ini-Ignore-User-Register-Protection: \" . $all_ini['ignore_user_register_protection']);\n  header(\"X-Ini-Session-Cookie-Ttl: \" . $all_ini['session_cookie_ttl']);\n  header(\"X-Ini-Session-Gc-Eol: \" . $all_ini['session_gc_eol']);\n  header(\"X-Ini-Disable-Drupal-Page-Cache: \" . $all_ini['disable_drupal_page_cache']);\n  header(\"X-Ini-Set-Composer-Manager-Vendor-Dir: \" . $all_ini['set_composer_manager_vendor_dir']);\n  header(\"X-Ini-AutoSlave-Enable: \" . $all_ini['autoslave_enable']);\n  header(\"X-Ini-CacheConsistent-Enable: \" . $all_ini['cache_consistent_enable']);\n}\n"
  },
  {
    "path": "aegir/conf/global/global-mode.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Bots detection\n */\nif (isset($_SERVER['HTTP_USER_AGENT']) &&\n    preg_match(\"/(?:crawl|bot|spider|tracker|click|parser|google|yahoo|yandex|baidu|bing)/i\", $_SERVER['HTTP_USER_AGENT'])) {\n  $is_bot = TRUE;\n}\n\n\n/**\n * Site mode detection - works also for aliases\n */\nif (isset($_SERVER['HTTP_HOST']) &&\n    (preg_match(\"/(?:^dev\\.|\\.dev\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^devel\\.|\\.devel\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^tmp\\.|\\.tmp\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^temp\\.|\\.temp\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^temporary\\.|\\.temporary\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^test\\.|\\.test\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^testing\\.|\\.testing\\.)/i\", $_SERVER['HTTP_HOST']))) {\n  $is_tmp = TRUE;\n}\n\n\n/**\n * Dev mode detection - works only for aliases\n */\nif (isset($_SERVER['HTTP_HOST']) &&\n    isset($_SERVER['MAIN_SITE_NAME']) &&\n    preg_match(\"/(?:^dev\\.|^devel\\.|\\.dev\\.|\\.devel\\.)/i\", $_SERVER['HTTP_HOST']) &&\n    $_SERVER['HTTP_HOST'] != $_SERVER['MAIN_SITE_NAME'] &&\n    $_SERVER['HTTP_HOST'] != $_SERVER['SERVER_NAME'] &&\n    !$is_backend) {\n  $is_dev = TRUE;\n}\n\n\n/**\n * Backend and task detection\n */\nif (function_exists('drush_get_command')) {\n  $command = drush_get_command();\n  if (isset($command['command'])) {\n    $command = explode(\" \", $command['command']);\n    if (isset($command[0])) {\n      if (!preg_match(\"/^help/\", $command[0])) {\n        $is_backend = TRUE;\n      }\n      if (preg_match(\"/^(provision-install|provision-save|provision-backup|php-eval)/\", $command[0])) {\n        if (!is_readable('/data/conf/clstr.cnf')) {\n          $is_install = TRUE;\n        }\n      }\n    }\n  }\n}\nelse {\n  if (php_sapi_name() === 'cli' || PHP_SAPI === 'cli') {\n    $is_backend = TRUE;\n  }\n}\n\n\n/**\n 
* Detecting subdirectory mode\n */\nif (isset($_SERVER['SITE_SUBDIR'])) {\n  $site_subdir = $_SERVER['SITE_SUBDIR'];\n}\nif (isset($_SERVER['RAW_HOST'])) {\n  $raw_host = $_SERVER['RAW_HOST'];\n}\n\n\n/**\n * The nodns mode detection\n */\nif (isset($_SERVER['HTTP_HOST']) &&\n    (preg_match(\"/(?:^nodns\\.|\\.nodns\\.)/i\", $_SERVER['HTTP_HOST']))) {\n  $no_dns = TRUE;\n}\n\n\n/**\n * Local nodns request detection\n */\nif (isset($_SERVER['REMOTE_ADDR']) &&\n    (preg_match(\"/(^127\\.0\\.0\\.1)$/i\", $_SERVER['REMOTE_ADDR']) ||\n     preg_match(\"/(^127\\.0\\.0\\.1\\, 127\\.0\\.0\\.1)$/i\", $_SERVER['REMOTE_ADDR']))) {\n  $local_req = TRUE;\n}\n\n\n/**\n * Local path request check\n */\nif (isset($_SERVER['REQUEST_URI']) &&\n    preg_match(\"/\\/api\\/hidden\\//\", $_SERVER['REQUEST_URI'])) {\n  $hidden_uri = TRUE;\n}\n\n\n/**\n * Drupal core or other apps id\n */\nif ($drupal_core == 6) {\n  $drupal_id = 'DVI';\n}\nelseif ($drupal_core == 7) {\n  $drupal_id = 'DVII';\n}\nelseif ($drupal_core == 8) {\n  $drupal_id = 'DVIII';\n}\nelseif ($drupal_core == 9) {\n  $drupal_id = 'DIX';\n}\nelseif ($drupal_core == 10) {\n  $drupal_id = 'DX';\n}\nelseif ($drupal_core == 11) {\n  $drupal_id = 'DXI';\n}\nelse {\n  $drupal_id = 'ND';\n}\nif ($drupal_id && $is_dev && !$is_backend) {\n  header('X-Backend: ' . $drupal_id);\n}\n\n"
  },
  {
    "path": "aegir/conf/global/global-newrelic.inc",
    "content": "<?php # global settings.php\n\n/**\n * New Relic Integration for Drupal with Drush Compatibility (8, 12, 13)\n *\n * Supports background jobs and sets appropriate New Relic parameters.\n */\nif (extension_loaded('newrelic') && !empty($all_ini['enable_newrelic_integration'])) {\n  $this_instance = FALSE;\n\n  if ($is_backend) {\n    $uri = FALSE;\n\n    // Check if drush_get_context exists (Drush 8)\n    if (function_exists('drush_get_context')) {\n      // Drush 8 context retrieval\n      $context = drush_get_context();\n      if (isset($context['DRUSH_URI'])) {\n        $uri = $context['DRUSH_URI'];\n      }\n      elseif (isset($context['DRUSH_DRUPAL_SITE'])) {\n        $uri = $context['DRUSH_DRUPAL_SITE'];\n      }\n    }\n    else {\n      // Drush 9+ context retrieval\n      // Attempt to retrieve URI from environment variables or Drush services\n      // Drush commands can pass the URI as an environment variable or argument\n\n      // Example: Using environment variable (you might need to set this in Drush commands)\n      if (isset($_SERVER['DRUSH_URI'])) {\n        $uri = $_SERVER['DRUSH_URI'];\n      }\n      elseif (isset($_SERVER['DRUPAL_SITE_URI'])) {\n        $uri = $_SERVER['DRUPAL_SITE_URI'];\n      }\n      else {\n        // Fallback: Attempt to determine URI using Drupal APIs\n        // Note: In Drush context, some Drupal services might not be fully bootstrapped\n        try {\n          $request = \\Drupal::request();\n          $uri = $request->getSchemeAndHttpHost();\n        }\n        catch (\\Exception $e) {\n          // Unable to determine URI; proceed without setting it\n        }\n      }\n    }\n\n    if ($uri) {\n      // Clean the URI by removing the scheme\n      $uri = str_replace(['http://', 'https://'], '', $uri);\n      $this_instance = 'Drush Site: ' . 
$uri;\n\n      // Set New Relic transaction name and parameters if command details are available\n      if (isset($command['command']) && isset($command['arguments'])) {\n        $drush_command = array_merge([$command['command']], $command['arguments']);\n        $command_str = implode(' ', $drush_command);\n\n        // Add custom parameters to New Relic\n        newrelic_add_custom_parameter('Drush command', $command_str);\n        newrelic_name_transaction($command_str);\n\n        // Indicate that this is a background job\n        newrelic_background_job(TRUE);\n      }\n    }\n  }\n  else {\n    // Non-Drush (web request) context\n    if (isset($_SERVER['SERVER_NAME'])) {\n      $this_instance = 'Web Site: ' . $_SERVER['SERVER_NAME'];\n      // Optionally, indicate this is not a background job\n      // newrelic_background_job(FALSE);\n    }\n  }\n\n  // Apply the New Relic app name if determined\n  if ($this_instance) {\n    ini_set('newrelic.appname', $this_instance);\n    newrelic_set_appname($this_instance);\n  }\n}\nelseif (extension_loaded('newrelic') && empty($all_ini['enable_newrelic_integration'])) {\n  // Disable New Relic auto-RUM and ignore transactions if integration is disabled\n  newrelic_disable_autorum();\n  newrelic_ignore_apdex();\n  newrelic_ignore_transaction();\n}\n"
  },
  {
    "path": "aegir/conf/global/global-redis.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Use Valkey/Redis caching and lock support for all supported cores (D6-D11)\n */\nif ($valkey_up && $use_valkey && !$custom_cache) {\n  $cache_backport = FALSE;\n  $cache_valkey = FALSE;\n  $all_ini['redis_use_modern'] = TRUE; // Forced on; overrides any INI value\n  if ($all_ini['redis_use_modern']) {\n    if ($drupal_core >= 8) {\n      $redis_comprs = TRUE;\n      $redis_dirname = 'redis_eight';\n      if (!$all_ini['redis_old_eight_mode']) {\n        $redis_dirname = 'redis_compr';\n      }\n      if ($drupal_core == 10 || $drupal_core == 11) {\n        $redis_new_dirname = 'redis_ten_eleven';\n        $redis_legacy_dirname = 'redis_nine_ten';\n        if (is_readable('modules/o_contrib_ten/' . $redis_new_dirname . '/redis.services.yml')) {\n          $redis_dirname = $redis_new_dirname;\n        }\n        elseif (is_readable('modules/o_contrib_ten/' . $redis_legacy_dirname . '/redis.services.yml')) {\n          $redis_dirname = $redis_legacy_dirname;\n        }\n      }\n      elseif ($drupal_core == 9) {\n        $redis_dirname = 'redis_nine_ten';\n        if ($all_ini['redis_old_nine_mode']) {\n          $redis_dirname = 'redis_compr';\n        }\n      }\n    }\n    else {\n      $redis_dirname = 'redis_edge';\n    }\n    if ($is_dev && !$is_backend) {\n      header(\"X-Redis-Version-Is: Modern\");\n      header(\"X-Redis-Dir-Is: \" . 
$redis_dirname);\n    }\n    if ($all_ini['redis_flush_forced_mode']) {\n      if ($drupal_core >= 8) {\n        $settings['redis_perm_ttl']                 = 86400; // 24 hours max\n        $settings['redis_flush_mode']               = 1; // Redis default is 0\n        $settings['redis_flush_mode_cache_page']    = 2; // Redis default is 1\n        $settings['redis_flush_mode_cache_block']   = 2; // Redis default is 1\n        $settings['redis_flush_mode_cache_menu']    = 2; // Redis default is 0\n        $settings['redis_flush_mode_cache_metatag'] = 2; // Redis default is 0\n      }\n      else {\n        $conf['redis_perm_ttl']                 = 86400; // 24 hours max\n        $conf['redis_flush_mode']               = 1; // Redis default is 0\n        $conf['redis_flush_mode_cache_page']    = 2; // Redis default is 1\n        $conf['redis_flush_mode_cache_block']   = 2; // Redis default is 1\n        $conf['redis_flush_mode_cache_menu']    = 2; // Redis default is 0\n        $conf['redis_flush_mode_cache_metatag'] = 2; // Redis default is 0\n      }\n      // See http://bit.ly/1drmi35 for more information\n      if ($is_dev && !$is_backend) {\n        header(\"X-Redis-Flush-Forced-Mode: Forced\");\n      }\n    }\n  }\n  else {\n    $redis_dirname = 'redis';\n    if ($is_dev && !$is_backend) {\n      header(\"X-Redis-Version-Is: Legacy\");\n      header(\"X-Redis-Dir-Is: \" . $redis_dirname);\n    }\n  }\n  if ($drupal_core >= 8) {\n    if (file_exists('sites/' . $_SERVER['SERVER_NAME'] . '/.redisLegacyOff')) {\n      if ($is_dev && !$is_backend) {\n        header(\"X-Redis-Off-Ctrl-Exists: .redisLegacyOff\");\n      }\n    }\n    else {\n      if (is_readable('sites/' . $_SERVER['SERVER_NAME'] . 
'/files/development.services.yml')) {\n        if ($is_dev && !$is_backend) {\n          header(\"X-Dev-Services-Yml-Is-Readable: development.services.yml\");\n        }\n      }\n      else {\n        if (is_readable('modules/o_contrib_ten')) {\n          if (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_ten/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_ten/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_ten/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Redis-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_eleven')) {\n          if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_eleven/' . $redis_dirname . 
'/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Redis-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_nine')) {\n          if (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_nine/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_nine/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_nine/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Redis-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_eight')) {\n          if (is_readable('modules/o_contrib_eight/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_services_path = 'modules/o_contrib_eight/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_eight/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Example-Services-Is-Readable: \" . $example_services_path);\n            }\n          }\n          if (is_readable('modules/o_contrib_eight/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_eight/' . $redis_dirname . 
'/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Redis-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n      }\n    }\n  }\n  elseif ($drupal_core == 7) {\n    if (is_readable('modules/o_contrib_seven/' . $redis_dirname . '/redis.autoload.inc')) {\n      $cache_valkey = TRUE;\n      $cache_backport = FALSE;\n      $cache_redis_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.autoload.inc';\n      $cache_lock_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.lock.inc';\n      $cache_path_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.path.inc';\n      $cache_gzip_path = 'modules/o_contrib_seven/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Redis-Autoload-Is-Readable: \" . $cache_redis_path);\n      }\n    }\n    if ($all_ini['autoslave_enable']) {\n      if (is_readable('modules/o_contrib_seven/autoslave/autoslave.cache.inc') &&\n        is_readable('includes/database/autoslave/database.inc')) {\n        $use_auto_se = TRUE;\n        $gzip_mode = FALSE;\n        $cache_backport = FALSE;\n        $auto_se_path = 'modules/o_contrib_seven/autoslave/autoslave.cache.inc';\n        if ($is_dev && !$is_backend) {\n          header(\"X-AutoSlave-Cache-Is-Readable: \" . $auto_se_path);\n        }\n      }\n    }\n    if ($all_ini['cache_consistent_enable']) {\n      if (is_readable('modules/o_contrib_seven/cache_consistent/cache_consistent.inc')) {\n        $use_cache_ct = TRUE;\n        $gzip_mode = FALSE;\n        $cache_backport = FALSE;\n        $cache_ct_path = 'modules/o_contrib_seven/cache_consistent/cache_consistent.inc';\n        if ($is_dev && !$is_backend) {\n          header(\"X-CacheConsistent-Is-Readable: \" . 
$cache_ct_path);\n        }\n      }\n    }\n  }\n  elseif ($drupal_core == 6) {\n    if (is_readable('modules/o_contrib/cache_backport/cache.inc')) {\n      $cache_backport = TRUE;\n      $cache_backport_path = 'modules/o_contrib/cache_backport/cache.inc';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Redis-Cache-Backport-Is-Readable: \" . $cache_backport_path);\n      }\n    }\n    if (is_readable('modules/o_contrib/' . $redis_dirname . '/redis.autoload.inc')) {\n      $cache_valkey = TRUE;\n      $cache_redis_path = 'modules/o_contrib/' . $redis_dirname . '/redis.autoload.inc';\n      $cache_lock_path = 'modules/o_contrib/' . $redis_dirname . '/redis.lock.inc';\n      $cache_path_path = 'modules/o_contrib/' . $redis_dirname . '/redis.path.inc';\n      $cache_gzip_path = 'modules/o_contrib/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Redis-Autoload-Is-Readable: \" . $cache_redis_path);\n      }\n    }\n  }\n  if ($cache_valkey) {\n    if ($drupal_core >= 8) {\n      if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_eleven/' . $redis_dirname . '/src');\n      }\n      elseif (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_ten/' . $redis_dirname . '/src');\n      }\n      elseif (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_nine/' . $redis_dirname . '/src');\n      }\n      else {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_eight/' . $redis_dirname . 
'/src');\n      }\n      $settings['redis.connection']['interface'] = 'PhpRedis';\n      $settings['redis.connection']['host']      = '127.0.0.1';\n      $settings['redis.connection']['port']      = '6379';\n      $settings['redis.connection']['password']  = 'isfoobared';\n      $settings['redis.connection']['base']      = '8';\n      $settings['cache_prefix']                  = $this_prefix;\n      $settings['cache']['default']              = 'cache.backend.redis';\n      if (!is_readable('/data/conf/clstr.cnf')) {\n        $settings['cache']['bins']['bootstrap']  = 'cache.backend.chainedfast';\n        $settings['cache']['bins']['discovery']  = 'cache.backend.chainedfast';\n        $settings['cache']['bins']['config']     = 'cache.backend.chainedfast';\n      }\n      if (is_readable($example_failover_path)) {\n        $settings['container_yamls'][]           = $example_failover_path;\n        $settings['redis.failover']              = TRUE;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Redis-Failover-Is-Readable: \" . $example_failover_path);\n        }\n      }\n      elseif (is_readable($example_services_path)) {\n        $settings['container_yamls'][]           = $example_services_path;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Redis-Example-Is-Readable: \" . $example_services_path);\n        }\n      }\n      if (is_readable($redis_services_path)) {\n        $settings['container_yamls'][]           = $redis_services_path;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Redis-Services-Is-Readable: \" . 
$redis_services_path);\n        }\n      }\n      if ($drupal_core <= 10) {\n        $settings['queue_default']               = 'queue.redis_reliable';\n      }\n      if ($redis_comprs) {\n        $settings['redis_compress_length']       = 100;\n        $settings['redis_compress_level']        = 5;\n      }\n      $settings['cache']['bins']['state']        = 'cache.backend.redis';\n      $settings['state_cache']                   = TRUE;\n      $settings['bootstrap_container_definition'] = [\n        'parameters' => [],\n        'services' => [\n          'redis.factory' => [\n            'class' => 'Drupal\\redis\\ClientFactory',\n          ],\n          'cache.backend.redis' => [\n            'class' => 'Drupal\\redis\\Cache\\CacheBackendFactory',\n            'arguments' => ['@redis.factory', '@cache_tags_provider.container', '@serialization.phpserialize'],\n          ],\n          'cache.container' => [\n            'class' => '\\Drupal\\redis\\Cache\\PhpRedis',\n            'factory' => ['@cache.backend.redis', 'get'],\n            'arguments' => ['container'],\n          ],\n          'cache_tags_provider.container' => [\n            'class' => 'Drupal\\redis\\Cache\\RedisCacheTagsChecksum',\n            'arguments' => ['@redis.factory'],\n          ],\n          'serialization.phpserialize' => [\n            'class' => 'Drupal\\Component\\Serialization\\PhpSerialize',\n          ],\n        ],\n      ];\n    }\n    else {\n      if ($cache_backport) {\n        $conf['cache_inc']                      = $cache_backport_path;\n      }\n      if ($all_ini['redis_use_modern']) {\n        if ($all_ini['redis_lock_enable']) {\n          $redis_lock = TRUE;\n        }\n        if ($all_ini['redis_path_enable']) {\n          $redis_path = TRUE;\n        }\n      }\n      if (is_readable($cache_lock_path) && $redis_lock) {\n        $conf['lock_inc']                       = $cache_lock_path;\n        if ($is_dev && !$is_backend) {\n          
header(\"X-Redis-Lock-Is-Readable: \" . $cache_lock_path);\n        }\n      }\n      if (is_readable($cache_path_path) && $redis_path) {\n        $conf['path_inc']                       = $cache_path_path;\n        $conf['path_alias_admin_blacklist']     = FALSE;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Redis-Path-Is-Readable: \" . $cache_path_path);\n        }\n      }\n      if ($all_ini['redis_scan_enable']) {\n        $conf['redis_scan_delete']              = TRUE;\n        $gzip_mode = FALSE;\n      }\n      else {\n        if (is_readable($cache_gzip_path)) {\n          $gzip_mode = TRUE;\n        }\n        else {\n          $gzip_mode = FALSE;\n        }\n      }\n      if ($gzip_mode) {\n        $conf['cache_default_class']            = 'Redis_CacheCompressed';\n      }\n      else {\n        $conf['cache_default_class']            = 'Redis_Cache';\n      }\n      $conf['cache_backends'][]                 = $cache_redis_path;\n      if ($use_auto_se) {\n        $conf['cache_backends'][]               = $auto_se_path;\n        $conf['cache_default_class']            = 'AutoslaveCache';\n        $conf['autoslave_cache_default_class']  = 'Redis_Cache';\n      }\n      if ($use_cache_ct) {\n        $conf['cache_backends'][]               = $cache_ct_path;\n        $conf['cache_default_class']            = 'ConsistentCache';\n        if (!is_readable('/data/conf/clstr.cnf')) {\n          $conf['cache_class_cache_form']       = 'DrupalDatabaseCache';\n          $conf['cache_class_cache_bootstrap']  = 'DrupalDatabaseCache';\n        }\n        $conf['consistent_cache_default_class'] = 'Redis_Cache';\n        $conf['consistent_cache_default_safe']  = TRUE;\n        $conf['consistent_cache_buffer_mechanism'] = 'ConsistentCacheBuffer';\n        $conf['consistent_cache_default_strict'] = FALSE;\n        $conf['consistent_cache_strict_cache_bootstrap'] = TRUE;\n      }\n      if (!is_readable('/data/conf/clstr.cnf')) {\n        
$conf['cache_class_cache_form']         = 'DrupalDatabaseCache';\n        $conf['cache_class_cache_bootstrap']    = 'DrupalDatabaseCache';\n      }\n      $conf['redis_client_interface']           = 'PhpRedis';\n      $conf['redis_client_host']                = '127.0.0.1';\n      $conf['redis_client_port']                = '6379';\n      $conf['redis_client_password']            = 'isfoobared';\n      $conf['redis_client_base']                = '8';\n      $conf['cache_prefix']                     = $this_prefix;\n      $conf['page_cache_invoke_hooks']          = TRUE;  // D7 == Do not use Aggressive Mode\n      $conf['page_cache_without_database']      = FALSE; // D7 == Do not use Aggressive Mode\n      $conf['page_cache_maximum_age']           = 0;     // D7 == max-age in the Cache-Control header (ignored by Speed Booster)\n      $conf['page_cache_max_age']               = 0;     // D6 == max-age in the Cache-Control header (ignored by Speed Booster)\n      $conf['cache_lifetime']                   = 0;     // D7 == BOA uses Speed Booster / Nginx micro-caching instead\n      $conf['page_cache_lifetime']              = 0;     // D6 == BOA uses Speed Booster / Nginx micro-caching instead\n    }\n    if ($all_ini['redis_exclude_bins'] && !is_readable('/data/conf/clstr.cnf')) {\n      $excludes = explode(\",\", $all_ini['redis_exclude_bins']);\n      foreach ($excludes as $exclude) {\n        $exclude = trim($exclude);\n        if ($drupal_core >= 8) {\n          $bin_exclude = $exclude;\n          $settings['cache']['bins'][$bin_exclude] = 'cache.backend.database';\n        }\n        else {\n          $bin_exclude = 'cache_class_' . $exclude;\n          $conf[$bin_exclude] = 'DrupalDatabaseCache';\n        }\n        if ($is_dev && !$is_backend) {\n          header(\"X-Ini-Redis-Exclude-Bin-\" . $exclude . \": \" . $bin_exclude);\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "aegir/conf/global/global-settings.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Forced default settings\n */\nif ($drupal_core >= 11) {\n  // Drupal core Package Manager: prevent Status report \"error\" about early testing.\n  $settings['testing_package_manager'] = TRUE;\n}\nif ($drupal_core >= 8) {\n  //\n  // Drupal 8 behaviour is confusing, because while it is possible\n  // to force settings listed below, they will not be shown in the\n  // site admin area. For example, CSS/JS aggregation checkboxes\n  // will accept on/off changes on form submit, while being silently\n  // overridden here.\n  //\n  $config['image.settings']['allow_insecure_derivatives'] = TRUE;  // Not sure if it's a good idea in D8\n  $config['image.settings']['suppress_itok_output'] = TRUE;        // Not sure if it's a good idea in D8\n  $config['system.cron']['threshold.autorun'] = FALSE;             // Disable poormanscron (legacy)\n  $config['system.cron']['threshold']['auto'] = 0;                 // Disable auto-cron (current)\n  $config['system.logging']['error_level'] = 'hide';               // Disable errors on screen\n  $config['system.performance']['css']['preprocess'] = TRUE;       // Enable hardcoded CSS aggregation\n  $config['system.performance']['js']['preprocess'] = TRUE;        // Enable hardcoded JS aggregation\n  $config['system.performance']['response.gzip'] = FALSE;          // Nginx already compresses everything\n  //$config['system.file']['default_scheme'] = 'public';             // Force public downloads by default\n}\nelse {\n  if ($backdropcms) {\n    $conf['css_gzip_compression'] = FALSE; // Nginx already compresses everything\n    $conf['js_gzip_compression'] = FALSE;  // Nginx already compresses everything\n    $settings['backdrop_drupal_compatibility'] = TRUE; // Enable Drupal backwards compatibility\n  }\n  $conf['page_compression'] = 0;    // Nginx already compresses everything\n  $conf['boost_crawl_on_cron'] = 0; // Deny Boost crawler\n  $conf['cron_safe_threshold'] = 0; // Disable 
poormanscron\n  $conf['preprocess_css'] = 1;      // Enable hardcoded CSS aggregation\n  $conf['preprocess_js'] = 1;       // Enable hardcoded JS aggregation\n  $conf['file_downloads'] = 1;      // Force public downloads by default in D6\n  $conf['file_default_scheme'] = 'public'; // Force public downloads by default in D7\n  $conf['error_level'] = 0;         // Disable errors on screen\n  $conf['statistics_enable_access_log'] = 0;   // Disable access log stats\n  $conf['allow_authorize_operations'] = FALSE; // Disable insecure plugin manager\n  $conf['admin_menu_cache_client'] = FALSE;    // Disable caching in admin_menu #442560\n  $conf['boost_ignore_htaccess_warning'] = 1;  // Silence false alarm in boost\n  $conf['expire_flush_front'] = 1;             // Default settings for expire module\n  $conf['expire_flush_node_terms'] = 1;        // Default settings for expire module\n  $conf['expire_flush_menu_items'] = 0;        // Default settings for expire module\n  $conf['expire_flush_cck_references'] = 0;    // Default settings for expire module\n  $conf['expire_include_base_url'] = 1;        // Default settings for expire module\n  $conf['js_server_software'] = \"other\";       // Set JS Callback handler server software\n  $conf['video_ffmpeg_instances'] = 1;         // Force safe default for ffmpeg\n  $conf['securepages_enable'] = 1;             // Force to avoid issues with ssl proxy\n  $conf['less_devel'] = FALSE;                 // Prevent CSS regeneration on every page load\n  $conf['drupal_http_request_fails'] = FALSE;  // Avoid false alarm\n  $conf['image_allow_insecure_derivatives'] = TRUE; // Enable to avoid known issues: https://drupal.org/drupal-7.20-release-notes\n  $conf['theme_cloudy_settings']['omega_rebuild_aggregates'] = FALSE;     // Do not allow to turn it on by default\n  $conf['theme_cloudy_settings']['omega_rebuild_theme_registry'] = FALSE; // Do not allow to turn it on by default\n  $update_free_access = FALSE;\n  $conf['webform_table'] = 
TRUE; // Workaround for SA-CONTRIB-2015-063 https://www.drupal.org/node/2445935\n  $conf['features_rebuild_on_flush'] = FALSE; // https://michaelshadle.com/2015/04/21/speeding-up-drupal-cache-flushing\n  $conf['entity_rebuild_on_flush'] = FALSE; // http://a-fro.com/speed-up-cache-clearing-on-drupal7\n  $conf['redis_eval_enabled'] = TRUE;\n  // Use EVAL commands to greatly speed up cache clearing\n  // Enable when https://www.drupal.org/node/2487333 is fixed\n}\n\n\n/**\n * Logic for the front-end only\n */\nif (!$is_backend) {\n  if ($is_dev) {\n    // Dev mode switch\n    error_reporting(E_ALL & ~E_NOTICE);\n    ini_set('display_errors', TRUE);\n    ini_set('display_startup_errors', TRUE);\n    ini_set('opcache.revalidate_freq', '0');\n    if (!$is_backend) {\n      header(\"X-Opcache-Revalidate-Freq: 0\");\n    }\n    if ($drupal_core >= 8) {\n      unset($config['system.logging']['error_level']);            // Stop hardcoding no errors on screen\n      unset($config['system.performance']['cache.page.max_age']); // Stop hardcoding internal page cache\n      unset($config['system.performance']['css']['preprocess']);  // Stop hardcoding CSS aggregation\n      unset($config['system.performance']['js']['preprocess']);   // Stop hardcoding JS aggregation\n      if (is_readable('sites/' . $_SERVER['SERVER_NAME'] . '/files/development.services.yml')) {\n        //\n        // This file, if it exists, disables Redis on the fly!\n        //\n        $settings['container_yamls'][] = 'sites/' . $_SERVER['SERVER_NAME'] . 
'/files/development.services.yml';\n        //\n        // The two settings below make sense only if the development.services.yml file\n        // located in the sites/domain/files/ dir contains at least these three lines:\n        //\n        // services:\n        //   cache.backend.null:\n        //     class: Drupal\\Core\\Cache\\NullBackendFactory\n        //\n        $settings['cache']['bins']['render'] = 'cache.backend.null';\n        $settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null';\n        //\n        // Warning: you must clear caches via the Ægir interface or with Drush\n        // before these lines will start working on the .dev. alias without error 500\n        // saying: You have requested a non-existent service \"cache.backend.null\"\n        //\n        // To enable Twig debugging, also add these lines to the development.services.yml file:\n        //\n        // parameters:\n        //   twig.config:\n        //     debug: true\n        //     auto_reload: true\n        //     cache: true\n        //\n        // Note that normally you should not disable Twig cache, since auto_reload\n        // is enough for development and debugging, without slowing down everything;\n        // see also: https://www.drupal.org/node/1903374\n        //\n      }\n    }\n    else {\n      $conf['xmlsitemap_submit'] = 0; // Disable XML Sitemap for foo.dev.domain\n      $conf['xmlsitemap_update'] = 0; // Disable XML Sitemap for foo.dev.domain\n      unset($conf['cache']);          // Stop hardcoding internal page cache\n      unset($conf['error_level']);    // Stop hardcoding no errors on screen\n      unset($conf['less_devel']);     // Stop hardcoding CSS regeneration on every page load\n      unset($conf['preprocess_css']); // Stop hardcoding CSS aggregation\n      unset($conf['preprocess_js']);  // Stop hardcoding JS aggregation\n      unset($conf['theme_cloudy_settings']['omega_rebuild_aggregates']);     // Do not force on dev URLs\n      
unset($conf['theme_cloudy_settings']['omega_rebuild_theme_registry']); // Do not force on dev URLs\n    }\n  }\n  else {\n    if (preg_match(\"/^\\/civicrm/\", $_SERVER['REQUEST_URI'])) {\n      // Force custom opcache TTL for CiviCRM codebase\n      ini_set('opcache.revalidate_freq', '180');\n      if (!$is_backend) {\n        header(\"X-Opcache-Revalidate-Freq: 180\");\n      }\n    }\n    else {\n      // Set sane default opcache TTL on non-dev sites\n      ini_set('opcache.revalidate_freq', '60');\n      if (!$is_backend) {\n        header(\"X-Opcache-Revalidate-Freq: 60\");\n      }\n    }\n  }\n}\n\n\n/**\n * Enable page caching if disable_drupal_page_cache is not set to TRUE,\n * but only on non-dev URLs and only for the front-end.\n */\nif (!$is_backend && !$is_dev) {\n  if (!$is_bot && $all_ini['disable_drupal_page_cache']) {\n    if ($drupal_core >= 8) {\n      $config['system.performance']['cache.page.max_age'] = 0;\n    }\n    else {\n      $conf['cache'] = 0;\n    }\n  }\n  else {\n    if ($drupal_core >= 8) {\n      $config['system.performance']['cache.page.max_age'] = 60;\n    }\n    else {\n      $conf['cache'] = 1;\n    }\n  }\n}\n\n\n/**\n * Disable page caching when Speed Booster is disabled on the fly\n */\nif (!$is_bot && isset($_SERVER['REQUEST_URI']) &&\n    preg_match(\"/nocache=1/\", $_SERVER['REQUEST_URI'])) {\n  if ($drupal_core >= 8) {\n    $config['system.performance']['cache.page.max_age'] = 0;\n  }\n  else {\n    $conf['cache'] = 0;\n  }\n}\n\n\n/**\n * Session Cookie TTL settings\n *\n * Set the session cookie lifetime (in seconds), i.e. the time from when the\n * session is created until the cookie expires, i.e. when the browser is\n * expected to discard the cookie. The value 0 means \"until the browser is\n * closed\".\n */\nif ($all_ini['session_cookie_ttl']) {\n  ini_set('session.cookie_lifetime', $all_ini['session_cookie_ttl']);\n}\n\n\n/**\n * Session Garbage Collector EOL settings\n *\n * Set the session lifetime (in seconds), i.e. 
the time from the user's last visit\n * after which the session garbage collector may delete the active session. When\n * a session is deleted, authenticated users are logged out, and the contents\n * of the user's $_SESSION variable are discarded.\n */\nif ($all_ini['session_gc_eol']) {\n  ini_set('session.gc_maxlifetime', $all_ini['session_gc_eol']);\n}\n\n\n/**\n * Main section starts here\n */\nif (isset($_SERVER['SERVER_NAME']) &&\n    $all_ini['allow_private_file_downloads']) {\n  unset($conf['file_downloads']); // Disable hardcoded public downloads for D6\n  unset($conf['file_default_scheme']); // Disable hardcoded public downloads for D7\n  //unset($config['system.file']['default_scheme']); // Disable hardcoded public downloads for D8+\n  if ($is_dev && !$is_backend) {\n    header('X-Is-Cart: YES');\n  }\n}\n\n\nif (isset($_SERVER['HTTP_USER_AGENT']) && isset($_SERVER['USER_DEVICE'])) {\n  $this_device = $_SERVER['USER_DEVICE'];\n}\nelse {\n  $this_device = 'normal';\n}\n\n\n/**\n * Logic for non-dev URLs only\n */\nif (!$is_dev) {\n  if ($all_ini['advagg_auto_configuration']) {\n\n    if ($drupal_core == 6) {\n      if (is_readable('modules/o_contrib/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('sites/all/modules/advagg/advagg_bundler/advagg_bundler.module')) {\n        $conf['preprocess_css'] = 0; // CSS aggregation disabled\n        $conf['preprocess_js'] = 0;  // JS aggregation disabled\n        $conf['advagg_aggregate_mode'] = 1;\n        $conf['advagg_async_generation'] = 1;\n        $conf['advagg_checksum_mode'] = \"md5\";\n        $conf['advagg_closure'] = 1;\n        $conf['advagg_css_compress_agg_files'] = 1;\n        $conf['advagg_css_compress_compressor_level'] = \"sane\";\n        $conf['advagg_css_compress_inline'] = 1;\n        $conf['advagg_css_compressor'] = 2;\n        $conf['advagg_debug'] = 0;\n        $conf['advagg_dir_htaccess'] = 0;\n        $conf['advagg_enabled'] = 1;\n        $conf['advagg_gzip_compression'] 
= 1;\n        $conf['advagg_js_compress_agg_files'] = 1;\n        $conf['advagg_js_compress_callback'] = 1;\n        $conf['advagg_js_compress_inline'] = 1;\n        $conf['advagg_js_compress_packer_enable'] = 0;\n        $conf['advagg_js_compressor'] = 1;\n        $conf['advagg_page_cache_mode'] = 0;\n        $conf['advagg_rebuild_on_flush'] = 0;\n        $conf['advagg_server_addr'] = \"-1\";\n      }\n    }\n    elseif ($drupal_core == 7) {\n      if (is_readable('modules/o_contrib_seven/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('sites/all/modules/advagg/advagg_bundler/advagg_bundler.module')) {\n        $conf['advagg_bundler_active'] = 1;\n        $conf['advagg_cache_level'] = 3;\n        $conf['advagg_combine_css_media'] = 0;\n        $conf['advagg_core_groups'] = 0;\n        $conf['advagg_css_compressor'] = 2;\n        $conf['advagg_css_compress_inline'] = 2;\n        $conf['advagg_css_compress_inline_if_not_cacheable'] = 1;\n        $conf['advagg_enabled'] = 1;\n        $conf['advagg_gzip'] = 1;\n        $conf['advagg_ie_css_selector_limiter'] = 1;\n        $conf['advagg_js_compressor'] = 3;\n        $conf['advagg_js_compress_packer'] = 0;\n        $conf['advagg_js_compress_inline'] = 3;\n        $conf['advagg_js_compress_inline_if_not_cacheable'] = 1;\n        $conf['preprocess_css'] = 1;\n        $conf['preprocess_js'] = 1;\n      }\n    }\n    elseif ($drupal_core >= 8) {\n      if (is_readable('modules/o_contrib_eight/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/o_contrib_nine/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/o_contrib_ten/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/o_contrib_eleven/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('sites/all/modules/advagg/advagg_bundler/advagg_bundler.module')) {\n       
 $config['advagg.settings']['css']['combine_media'] = false;\n        $config['advagg.settings']['css']['ie']['limit_selectors'] = true;\n        $config['advagg.settings']['cache_level'] = 3;\n        $config['advagg.settings']['core_groups'] = false;\n        $config['advagg.settings']['enabled'] = true;\n        $config['advagg_bundler.settings']['active'] = true;\n        $config['advagg_css_minify.settings']['minifier'] = 2;\n        $config['advagg_js_minify.settings']['minifier'] = 3;\n        $config['system.performance']['css']['preprocess'] = true;\n        $config['system.performance']['js']['preprocess'] = true;\n      }\n    }\n\n    if ($drupal_core == 6 || $drupal_core == 7) {\n      if (is_readable('modules/o_contrib/httprl/httprl.module') ||\n          is_readable('modules/o_contrib_seven/httprl/httprl.module')) {\n        $conf['advagg_use_httprl'] = 1;\n        $conf['httprl_background_callback'] = 1;\n        $conf['httprl_connect_timeout'] = 3;\n        $conf['httprl_dns_timeout'] = 3;\n        $conf['httprl_global_timeout'] = \"60\";\n        $conf['httprl_server_addr'] = \"-1\";\n        $conf['httprl_timeout'] = \"10\";\n        $conf['httprl_ttfb_timeout'] = \"5\";\n        // $conf['drupal_http_request_function'] = \"httprl_override_core\";\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "aegir/conf/global/global-valkey.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Use Valkey caching and lock support only for d6 and d7 profiles\n */\nif ($valkey_up && $use_valkey && !$custom_cache) {\n  $cache_backport = FALSE;\n  $cache_valkey = FALSE;\n  $all_ini['redis_use_modern'] = TRUE;\n  if ($all_ini['redis_use_modern']) {\n    if ($drupal_core >= 8) {\n      $redis_comprs = TRUE;\n      $redis_dirname = 'redis_eight';\n      if (!$all_ini['redis_old_eight_mode']) {\n        $redis_dirname = 'redis_compr';\n      }\n      if ($drupal_core == 10 || $drupal_core == 11) {\n        $redis_new_dirname = 'redis_ten_eleven';\n        $redis_legacy_dirname = 'redis_nine_ten';\n        if (is_readable('modules/o_contrib_ten/' . $redis_new_dirname . '/redis.services.yml')) {\n          $redis_dirname = $redis_new_dirname;\n        }\n        elseif (is_readable('modules/o_contrib_ten/' . $redis_legacy_dirname . '/redis.services.yml')) {\n          $redis_dirname = $redis_legacy_dirname;\n        }\n      }\n      elseif ($drupal_core == 9) {\n        $redis_dirname = 'redis_nine_ten';\n        if ($all_ini['redis_old_nine_mode']) {\n          $redis_dirname = 'redis_compr';\n        }\n      }\n    }\n    else {\n      $redis_dirname = 'redis_edge';\n    }\n    if ($is_dev && !$is_backend) {\n      header(\"X-Valkey-Version-Is: Modern\");\n      header(\"X-Valkey-Dir-Is: \" . 
$redis_dirname);\n    }\n    if ($all_ini['redis_flush_forced_mode']) {\n      if ($drupal_core >= 8) {\n        $settings['redis_perm_ttl']                 = 86400; // 24 hours max\n        $settings['redis_flush_mode']               = 1; // Valkey default is 0\n        $settings['redis_flush_mode_cache_page']    = 2; // Valkey default is 1\n        $settings['redis_flush_mode_cache_block']   = 2; // Valkey default is 1\n        $settings['redis_flush_mode_cache_menu']    = 2; // Valkey default is 0\n        $settings['redis_flush_mode_cache_metatag'] = 2; // Valkey default is 0\n      }\n      else {\n        $conf['redis_perm_ttl']                 = 86400; // 24 hours max\n        $conf['redis_flush_mode']               = 1; // Valkey default is 0\n        $conf['redis_flush_mode_cache_page']    = 2; // Valkey default is 1\n        $conf['redis_flush_mode_cache_block']   = 2; // Valkey default is 1\n        $conf['redis_flush_mode_cache_menu']    = 2; // Valkey default is 0\n        $conf['redis_flush_mode_cache_metatag'] = 2; // Valkey default is 0\n      }\n      // See http://bit.ly/1drmi35 for more information\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Flush-Forced-Mode: Forced\");\n      }\n    }\n  }\n  else {\n    $redis_dirname = 'redis';\n    if ($is_dev && !$is_backend) {\n      header(\"X-Valkey-Version-Is: Legacy\");\n      header(\"X-Valkey-Dir-Is: \" . $redis_dirname);\n    }\n  }\n  if ($drupal_core >= 8) {\n    if (file_exists('sites/' . $_SERVER['SERVER_NAME'] . '/.redisLegacyOff')) {\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Off-Ctrl-Exists: .redisLegacyOff\");\n      }\n    }\n    else {\n      if (is_readable('sites/' . $_SERVER['SERVER_NAME'] . 
'/files/development.services.yml')) {\n        if ($is_dev && !$is_backend) {\n          header(\"X-Dev-Services-Yml-Is-Readable: development.services.yml\");\n        }\n      }\n      else {\n        if (is_readable('modules/o_contrib_ten')) {\n          if (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_ten/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_ten/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_ten/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_eleven')) {\n          if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_eleven/' . $redis_dirname . 
'/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_nine')) {\n          if (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_nine/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_nine/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_nine/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_eight')) {\n          if (is_readable('modules/o_contrib_eight/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_services_path = 'modules/o_contrib_eight/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_eight/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Example-Services-Is-Readable: \" . $example_services_path);\n            }\n          }\n          if (is_readable('modules/o_contrib_eight/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_eight/' . $redis_dirname . 
'/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n      }\n    }\n  }\n  elseif ($drupal_core == 7) {\n    if (is_readable('modules/o_contrib_seven/' . $redis_dirname . '/redis.autoload.inc')) {\n      $cache_valkey = TRUE;\n      $cache_backport = FALSE;\n      $cache_redis_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.autoload.inc';\n      $cache_lock_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.lock.inc';\n      $cache_path_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.path.inc';\n      $cache_gzip_path = 'modules/o_contrib_seven/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Autoload-Is-Readable: \" . $cache_redis_path);\n      }\n    }\n    if ($all_ini['autoslave_enable']) {\n      if (is_readable('modules/o_contrib_seven/autoslave/autoslave.cache.inc') &&\n        is_readable('includes/database/autoslave/database.inc')) {\n        $use_auto_se = TRUE;\n        $gzip_mode = FALSE;\n        $cache_backport = FALSE;\n        $auto_se_path = 'modules/o_contrib_seven/autoslave/autoslave.cache.inc';\n        if ($is_dev && !$is_backend) {\n          header(\"X-AutoSlave-Cache-Is-Readable: \" . $auto_se_path);\n        }\n      }\n    }\n    if ($all_ini['cache_consistent_enable']) {\n      if (is_readable('modules/o_contrib_seven/cache_consistent/cache_consistent.inc')) {\n        $use_cache_ct = TRUE;\n        $gzip_mode = FALSE;\n        $cache_backport = FALSE;\n        $cache_ct_path = 'modules/o_contrib_seven/cache_consistent/cache_consistent.inc';\n        if ($is_dev && !$is_backend) {\n          header(\"X-CacheConsistent-Is-Readable: \" . 
$cache_ct_path);\n        }\n      }\n    }\n  }\n  elseif ($drupal_core == 6) {\n    if (is_readable('modules/o_contrib/cache_backport/cache.inc')) {\n      $cache_backport = TRUE;\n      $cache_backport_path = 'modules/o_contrib/cache_backport/cache.inc';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Cache-Backport-Is-Readable: \" . $cache_backport_path);\n      }\n    }\n    if (is_readable('modules/o_contrib/' . $redis_dirname . '/redis.autoload.inc')) {\n      $cache_valkey = TRUE;\n      $cache_redis_path = 'modules/o_contrib/' . $redis_dirname . '/redis.autoload.inc';\n      $cache_lock_path = 'modules/o_contrib/' . $redis_dirname . '/redis.lock.inc';\n      $cache_path_path = 'modules/o_contrib/' . $redis_dirname . '/redis.path.inc';\n      $cache_gzip_path = 'modules/o_contrib/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Autoload-Is-Readable: \" . $cache_redis_path);\n      }\n    }\n  }\n  if ($cache_valkey) {\n    if ($drupal_core >= 8) {\n      if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_eleven/' . $redis_dirname . '/src');\n      }\n      elseif (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_ten/' . $redis_dirname . '/src');\n      }\n      elseif (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_nine/' . $redis_dirname . '/src');\n      }\n      else {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_eight/' . $redis_dirname . 
'/src');\n      }\n      $settings['redis.connection']['interface'] = 'PhpRedis';\n      $settings['redis.connection']['host']      = '127.0.0.1';\n      $settings['redis.connection']['port']      = '6379';\n      $settings['redis.connection']['password']  = 'isfoobared';\n      $settings['redis.connection']['base']      = '8';\n      $settings['cache_prefix']                  = $this_prefix;\n      $settings['cache']['default']              = 'cache.backend.redis';\n      if (!is_readable('/data/conf/clstr.cnf')) {\n        $settings['cache']['bins']['bootstrap']  = 'cache.backend.chainedfast';\n        $settings['cache']['bins']['discovery']  = 'cache.backend.chainedfast';\n        $settings['cache']['bins']['config']     = 'cache.backend.chainedfast';\n      }\n      if (is_readable($example_failover_path)) {\n        $settings['container_yamls'][]           = $example_failover_path;\n        $settings['redis.failover']              = TRUE;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Valkey-Failover-Is-Readable: \" . $example_failover_path);\n        }\n      }\n      elseif (is_readable($example_services_path)) {\n        $settings['container_yamls'][]           = $example_services_path;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Valkey-Example-Is-Readable: \" . $example_services_path);\n        }\n      }\n      if (is_readable($redis_services_path)) {\n        $settings['container_yamls'][]           = $redis_services_path;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Valkey-Services-Is-Readable: \" . 
$redis_services_path);\n        }\n      }\n      if ($drupal_core <= 10) {\n        $settings['queue_default']               = 'queue.redis_reliable';\n      }\n      if ($redis_comprs) {\n        $settings['redis_compress_length']       = 100;\n        $settings['redis_compress_level']        = 5;\n      }\n      $settings['cache']['bins']['state']        = 'cache.backend.redis';\n      $settings['state_cache']                   = TRUE;\n      $settings['bootstrap_container_definition'] = [\n        'parameters' => [],\n        'services' => [\n          'redis.factory' => [\n            'class' => 'Drupal\\redis\\ClientFactory',\n          ],\n          'cache.backend.redis' => [\n            'class' => 'Drupal\\redis\\Cache\\CacheBackendFactory',\n            'arguments' => ['@redis.factory', '@cache_tags_provider.container', '@serialization.phpserialize'],\n          ],\n          'cache.container' => [\n            'class' => '\\Drupal\\redis\\Cache\\PhpRedis',\n            'factory' => ['@cache.backend.redis', 'get'],\n            'arguments' => ['container'],\n          ],\n          'cache_tags_provider.container' => [\n            'class' => 'Drupal\\redis\\Cache\\RedisCacheTagsChecksum',\n            'arguments' => ['@redis.factory'],\n          ],\n          'serialization.phpserialize' => [\n            'class' => 'Drupal\\Component\\Serialization\\PhpSerialize',\n          ],\n        ],\n      ];\n    }\n    else {\n      if ($cache_backport) {\n        $conf['cache_inc']                      = $cache_backport_path;\n      }\n      if ($all_ini['redis_use_modern']) {\n        if ($all_ini['redis_lock_enable']) {\n          $redis_lock = TRUE;\n        }\n        if ($all_ini['redis_path_enable']) {\n          $redis_path = TRUE;\n        }\n      }\n      if (is_readable($cache_lock_path) && $redis_lock) {\n        $conf['lock_inc']                       = $cache_lock_path;\n        if ($is_dev && !$is_backend) {\n          
header(\"X-Valkey-Lock-Is-Readable: \" . $cache_lock_path);\n        }\n      }\n      if (is_readable($cache_path_path) && $redis_path) {\n        $conf['path_inc']                       = $cache_path_path;\n        $conf['path_alias_admin_blacklist']     = FALSE;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Valkey-Path-Is-Readable: \" . $cache_path_path);\n        }\n      }\n      if ($all_ini['redis_scan_enable']) {\n        $conf['redis_scan_delete']              = TRUE;\n        $gzip_mode = FALSE;\n      }\n      else {\n        if (is_readable($cache_gzip_path)) {\n          $gzip_mode = TRUE;\n        }\n        else {\n          $gzip_mode = FALSE;\n        }\n      }\n      if ($gzip_mode) {\n        $conf['cache_default_class']            = 'Redis_CacheCompressed';\n      }\n      else {\n        $conf['cache_default_class']            = 'Redis_Cache';\n      }\n      $conf['cache_backends'][]                 = $cache_redis_path;\n      if ($use_auto_se) {\n        $conf['cache_backends'][]               = $auto_se_path;\n        $conf['cache_default_class']            = 'AutoslaveCache';\n        $conf['autoslave_cache_default_class']  = 'Redis_Cache';\n      }\n      if ($use_cache_ct) {\n        $conf['cache_backends'][]               = $cache_ct_path;\n        $conf['cache_default_class']            = 'ConsistentCache';\n        if (!is_readable('/data/conf/clstr.cnf')) {\n          $conf['cache_class_cache_form']       = 'DrupalDatabaseCache';\n          $conf['cache_class_cache_bootstrap']  = 'DrupalDatabaseCache';\n        }\n        $conf['consistent_cache_default_class'] = 'Redis_Cache';\n        $conf['consistent_cache_default_safe']  = TRUE;\n        $conf['consistent_cache_buffer_mechanism'] = 'ConsistentCacheBuffer';\n        $conf['consistent_cache_default_strict'] = FALSE;\n        $conf['consistent_cache_strict_cache_bootstrap'] = TRUE;\n      }\n      if (!is_readable('/data/conf/clstr.cnf')) {\n        
$conf['cache_class_cache_form']         = 'DrupalDatabaseCache';\n        $conf['cache_class_cache_bootstrap']    = 'DrupalDatabaseCache';\n      }\n      $conf['redis_client_interface']           = 'PhpRedis';\n      $conf['redis_client_host']                = '127.0.0.1';\n      $conf['redis_client_port']                = '6379';\n      $conf['redis_client_password']            = 'isfoobared';\n      $conf['redis_client_base']                = '8';\n      $conf['cache_prefix']                     = $this_prefix;\n      $conf['page_cache_invoke_hooks']          = TRUE;  // D7 == Do not use Aggressive Mode\n      $conf['page_cache_without_database']      = FALSE; // D7 == Do not use Aggressive Mode\n      $conf['page_cache_maximum_age']           = 0;     // D7 == max-age in the Cache-Control header (ignored by Speed Booster)\n      $conf['page_cache_max_age']               = 0;     // D6 == max-age in the Cache-Control header (ignored by Speed Booster)\n      $conf['cache_lifetime']                   = 0;     // D7 == BOA uses Speed Booster / Nginx micro-caching instead\n      $conf['page_cache_lifetime']              = 0;     // D6 == BOA uses Speed Booster / Nginx micro-caching instead\n    }\n    if ($all_ini['redis_exclude_bins'] && !is_readable('/data/conf/clstr.cnf')) {\n      $excludes = explode(\",\", $all_ini['redis_exclude_bins']);\n      foreach ($excludes as $exclude) {\n        $exclude = trim($exclude);\n        if ($drupal_core >= 8) {\n          $bin_exclude = $exclude;\n          $settings['cache']['bins'][$bin_exclude] = 'cache.backend.database';\n        }\n        else {\n          $bin_exclude = 'cache_class_' . $exclude;\n          $conf[$bin_exclude] = 'DrupalDatabaseCache';\n        }\n        if ($is_dev && !$is_backend) {\n          header(\"X-Ini-Valkey-Exclude-Bin-\" . $exclude . \": \" . $bin_exclude);\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "aegir/conf/global/global.inc",
    "content": "<?php # global settings.php\n\n\n/**\n * Core versions init\n */\n$backdropcms   = FALSE;\n$drupal_core   = FALSE;\n$drupal_id     = FALSE;\n\n\n/**\n * Vars init\n */\n$custom_cache  = FALSE;\n$custom_da     = FALSE;\n$custom_fb     = FALSE;\n$da_inc        = FALSE;\n$deny_anon     = FALSE;\n$hidden_uri    = FALSE;\n$high_traffic  = FALSE;\n$ini_loc_src   = FALSE;\n$ini_plr_src   = FALSE;\n$is_backend    = FALSE;\n$is_bot        = FALSE;\n$is_dev        = FALSE;\n$is_install    = FALSE;\n$is_tmp        = FALSE;\n$local_req     = FALSE;\n$no_dns        = FALSE;\n$raw_host      = FALSE;\n$redis_comprs  = FALSE;\n$redis_lock    = FALSE;\n$redis_path    = FALSE;\n$site_subdir   = FALSE;\n$use_auto_se   = FALSE;\n$use_cache_ct  = FALSE;\n$use_valkey    = FALSE;\n$usr_loc_ini   = FALSE;\n$usr_plr_ini   = FALSE;\n$valkey_up     = FALSE;\n\n\n/**\n * Detecting subdirectory mode\n */\nif (isset($_SERVER['SITE_SUBDIR'])) {\n  $site_subdir = $_SERVER['SITE_SUBDIR'];\n}\nif (isset($_SERVER['RAW_HOST'])) {\n  $raw_host = $_SERVER['RAW_HOST'];\n}\n\n\n/**\n * Backend and task detection\n */\nif (function_exists('drush_get_command')) {\n  $command = drush_get_command();\n  if (isset($command['command'])) {\n    $command = explode(\" \", $command['command']);\n    if (isset($command[0])) {\n      if (!preg_match(\"/^help/\", $command[0])) {\n        $is_backend = TRUE;\n      }\n      if (preg_match(\"/^(provision-install|provision-save|provision-backup|php-eval)/\", $command[0])) {\n        if (!is_readable('/data/conf/clstr.cnf')) {\n          $is_install = TRUE;\n        }\n      }\n    }\n  }\n}\n\n\n/**\n * Force backward compatible SERVER_SOFTWARE\n */\nif (!$is_backend) {\n  if (isset($_SERVER['SERVER_SOFTWARE']) &&\n      !preg_match(\"/ApacheSolarisNginx/i\", $_SERVER['SERVER_SOFTWARE'])) {\n    $_SERVER['SERVER_SOFTWARE'] = 'ApacheSolarisNginx/1.29.8';\n  }\n}\n\n\n/**\n * Bots detection\n */\nif (isset($_SERVER['HTTP_USER_AGENT']) &&\n    
preg_match(\"/(?:crawl|bot|spider|tracker|click|parser|google|yahoo|yandex|baidu|bing)/i\", $_SERVER['HTTP_USER_AGENT'])) {\n  $is_bot = TRUE;\n}\n\n\n/**\n * Use Ægir/BOA specific MAIN_SITE_NAME instead of possibly fake SERVER_NAME\n */\nif (isset($_SERVER['MAIN_SITE_NAME'])) {\n  $_SERVER['SERVER_NAME'] = $_SERVER['MAIN_SITE_NAME'];\n}\n\n\n/**\n * Set MAIN_SITE_NAME to match SERVER_NAME, if MAIN_SITE_NAME is not set\n */\nif (!isset($_SERVER['MAIN_SITE_NAME']) && isset($_SERVER['SERVER_NAME'])) {\n  $_SERVER['MAIN_SITE_NAME'] = $_SERVER['SERVER_NAME'];\n}\n\n\n/**\n * Site mode detection - works also for aliases\n */\nif (isset($_SERVER['HTTP_HOST']) &&\n    (preg_match(\"/(?:^dev\\.|\\.dev\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^devel\\.|\\.devel\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^tmp\\.|\\.tmp\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^temp\\.|\\.temp\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^temporary\\.|\\.temporary\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^test\\.|\\.test\\.)/i\", $_SERVER['HTTP_HOST']) ||\n     preg_match(\"/(?:^testing\\.|\\.testing\\.)/i\", $_SERVER['HTTP_HOST']))) {\n  $is_tmp = TRUE;\n}\n\n\n/**\n * Dev mode detection - works only for aliases\n */\nif (isset($_SERVER['HTTP_HOST']) &&\n    isset($_SERVER['MAIN_SITE_NAME']) &&\n    preg_match(\"/(?:^dev\\.|^devel\\.|\\.dev\\.|\\.devel\\.)/i\", $_SERVER['HTTP_HOST']) &&\n    $_SERVER['HTTP_HOST'] != $_SERVER['MAIN_SITE_NAME'] &&\n    $_SERVER['HTTP_HOST'] != $_SERVER['SERVER_NAME'] &&\n    !$is_backend) {\n  $is_dev = TRUE;\n}\n\n\n/**\n * Core version detection\n */\nif (file_exists('web.config') || file_exists('index.php')) { // Check for the main entry point file\n  if (file_exists('core')) {\n    if (file_exists('vendor/symfony') ||\n      file_exists('core/vendor/symfony') ||\n      file_exists('autoload.php')) {\n      $drupal_core = '8';\n      $drupal_id = 'DVIII';\n      if 
(file_exists('core/themes/olivero') && file_exists('core/themes/classy')) {\n        $drupal_core = '9';\n        $drupal_id = 'DIX';\n      }\n      if (file_exists('core/themes/olivero') && !file_exists('core/themes/classy')) {\n        $drupal_core = '10';\n        $drupal_id = 'DX';\n      }\n      if (file_exists('core/modules/workspaces_ui') && !file_exists('web.config')) {\n        $drupal_core = '11';\n        $drupal_id = 'DXI';\n      }\n      $use_valkey = TRUE;\n    }\n    else {\n      $backdropcms = TRUE;\n      $drupal_id = 'ND';\n    }\n  }\n  elseif (file_exists('modules/path_alias_cache')) {\n    $drupal_core = '6';\n    $drupal_id = 'DVI';\n    $use_valkey = TRUE;\n  }\n  else {\n    $drupal_core = '7';\n    $drupal_id = 'DVII';\n    $use_valkey = TRUE;\n  }\n  if ($drupal_id && $is_dev && !$is_backend) {\n    header('X-Backend: ' . $drupal_id);\n  }\n}\n\n\n/**\n * Support real IP detection behind proxies (Cloudflare, Akamai, etc.)\n */\nif (isset($_SERVER['REMOTE_ADDR']) && php_sapi_name() !== 'cli') {\n\n  $real_ip = null;\n  $proxy_header = null;\n  $proxy_ip = null;\n\n  // Handle possible comma-separated proxy chain in REMOTE_ADDR\n  $raw_proxy_ip = $_SERVER['REMOTE_ADDR'];\n  $proxy_ip_list = array_map('trim', explode(',', $raw_proxy_ip));\n  $proxy_ip_list = array_filter($proxy_ip_list, function ($ip) {\n    return filter_var($ip, FILTER_VALIDATE_IP);\n  });\n\n  if (!empty($proxy_ip_list)) {\n    $proxy_ip = end($proxy_ip_list); // Trust the last proxy in the chain\n  }\n\n  // Detect real client IP using known proxy headers\n  if (!empty($_SERVER['HTTP_CF_CONNECTING_IP'])) {\n    // Cloudflare (single IP, trusted)\n    $real_ip = trim($_SERVER['HTTP_CF_CONNECTING_IP']);\n    $proxy_header = 'CF-Connecting-IP';\n  }\n  elseif (!empty($_SERVER['HTTP_TRUE_CLIENT_IP'])) {\n    // Akamai / Fastly (single IP)\n    $real_ip = trim($_SERVER['HTTP_TRUE_CLIENT_IP']);\n    $proxy_header = 'True-Client-IP';\n  }\n  elseif 
(!empty($_SERVER['HTTP_X_REAL_IP'])) {\n    // NGINX-style (single IP)\n    $real_ip = trim($_SERVER['HTTP_X_REAL_IP']);\n    $proxy_header = 'X-Real-IP';\n  }\n  elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {\n    // Generic proxy chain (may contain multiple IPs)\n    $forwarded_ips = array_map('trim', explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']));\n    $forwarded_ips = array_filter($forwarded_ips, function ($ip) {\n      return filter_var($ip, FILTER_VALIDATE_IP);\n    });\n    if (!empty($forwarded_ips)) {\n      $real_ip = $forwarded_ips[0]; // First = original client IP\n      $proxy_header = 'X-Forwarded-For';\n    }\n  }\n\n  // Apply the final values\n  if (!empty($real_ip) && !empty($proxy_ip) && !empty($proxy_header)) {\n    $_SERVER['REMOTE_ADDR'] = $real_ip;\n\n    if (isset($drupal_core) && $drupal_core >= 8) {\n      $settings['reverse_proxy'] = TRUE;\n      $settings['reverse_proxy_header'] = $proxy_header;\n      $settings['reverse_proxy_addresses'] = array($proxy_ip);\n    }\n  }\n}\n\n\n/**\n * The nodns mode detection\n */\nif (isset($_SERVER['HTTP_HOST']) &&\n    (preg_match(\"/(?:^nodns\\.|\\.nodns\\.)/i\", $_SERVER['HTTP_HOST']))) {\n  $no_dns = TRUE;\n}\n\n\n/**\n * Local nodns request detection\n */\nif (isset($_SERVER['REMOTE_ADDR']) &&\n    (preg_match(\"/(^127\\.0\\.0\\.1)$/i\", $_SERVER['REMOTE_ADDR']) ||\n     preg_match(\"/(^127\\.0\\.0\\.1\\, 127\\.0\\.0\\.1)$/i\", $_SERVER['REMOTE_ADDR']))) {\n  $local_req = TRUE;\n}\n\n\n/**\n * Local path request check\n */\nif (preg_match(\"/\\/api\\/hidden\\//\", $_SERVER['REQUEST_URI'])) {\n  $hidden_uri = TRUE;\n}\n\n\n/**\n * The nodns protection\n */\nif ($no_dns) {\n  if ($local_req) {\n    // Allow local requests\n    if (!$is_backend && isset($_SERVER['REMOTE_ADDR'])) {\n      header(\"X-Local-Y: \" . 
$_SERVER['REMOTE_ADDR']);\n    }\n  }\n  else {\n    // Ignore remote requests\n    header('X-Accel-Expires: 60');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * The hidden uri protection\n */\nif ($hidden_uri) {\n  if ($local_req) {\n    // Allow local requests to hidden uri\n    if (!$is_backend && isset($_SERVER['REMOTE_ADDR'])) {\n      header(\"X-Local-URI-Y: \" . $_SERVER['REMOTE_ADDR']);\n    }\n  }\n  else {\n    // Ignore remote requests\n    header('X-Accel-Expires: 60');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * Bots protection for all tmp/dev sites - works also for aliases\n */\nif ($is_bot) {\n  if ($is_tmp) {\n    // Ignore known bots\n    header('X-Accel-Expires: 300');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * Site cron protection - cron works only for live sites\n */\nif (preg_match(\"/^\\/cron\\.php/\", $_SERVER['REQUEST_URI']) ||\n    preg_match(\"/^\\/cron\\//\", $_SERVER['REQUEST_URI'])) {\n  if (($is_tmp) || (file_exists('/data/conf/sites-cron-off.ctrl'))) {\n    // Ignore cron requests\n    header('X-Accel-Expires: 300');\n    header('HTTP/1.0 404 Not Found');\n    header(\"Connection: close\");\n    exit;\n  }\n}\n\n\n/**\n * Required for proper Valkey support on command line / via Drush\n */\nif (isset($_SERVER['HTTP_HOST']) && !isset($_SERVER['SERVER_NAME'])) {\n  $_SERVER['SERVER_NAME'] = $_SERVER['HTTP_HOST'];\n}\n\n\n/**\n * BOA INI defaults\n */\n$boa_ini = array(\n  'session_cookie_ttl' => '86400',\n  'session_gc_eol' => '86400',\n  'redis_use_modern' => TRUE,\n  'redis_flush_forced_mode' => TRUE,\n  'redis_lock_enable' => TRUE,\n  'redis_path_enable' => TRUE,\n  'redis_scan_enable' => FALSE,\n  'redis_cache_disable' => FALSE,\n  'redis_old_nine_mode' => FALSE,\n  'redis_old_eight_mode' => FALSE,\n  'sql_conversion_mode' => FALSE,\n  
'enable_strict_user_register_protection' => FALSE,\n  'entitycache_dont_enable' => FALSE,\n  'views_cache_bully_dont_enable' => FALSE,\n  'views_content_cache_dont_enable' => FALSE,\n  'autoslave_enable' => FALSE,\n  'cache_consistent_enable' => FALSE,\n  'redis_exclude_bins' => FALSE,\n  'speed_booster_anon_cache_ttl' => FALSE,\n  'allow_anon_node_add' => FALSE,\n  'enable_newrelic_integration' => FALSE,\n  'disable_admin_dos_protection' => FALSE,\n  'ignore_user_register_protection' => FALSE,\n  'allow_private_file_downloads' => FALSE,\n  'server_name_cookie_domain' => FALSE,\n  'auto_detect_facebook_integration' => TRUE,      // For backward compatibility until next release, then FALSE\n  'auto_detect_domain_access_integration' => TRUE, // For backward compatibility until next release, then FALSE\n  'advagg_auto_configuration' => FALSE,            // Will be set to TRUE in boa_site_control.ini if the module is enabled\n  'disable_drupal_page_cache' => FALSE,            // FALSE for backward compatibility and max performance\n  'set_composer_manager_vendor_dir' => FALSE,      // FALSE by default to not break site installation depending on custom value\n);\n\n\n/**\n * Optional system level early overrides\n */\nif (is_readable('/data/conf/settings.global.inc')) {\n  require_once \"/data/conf/settings.global.inc\";\n}\n\n\n/**\n * Optional site and platform level settings defined in the ini files\n * Note: the site-level ini file takes precedence over platform level ini\n */\n$all_ini = $boa_ini;\nif (is_readable('sites/all/modules/boa_platform_control.ini')) {\n  $ini_plr_src = 'sites/all/modules/boa_platform_control.ini';\n}\nif ($ini_plr_src) {\n  $usr_plr_ini = array();\n  $usr_plr_ini = parse_ini_file($ini_plr_src);\n}\nif (is_readable('sites/' . $_SERVER['SERVER_NAME'] . '/modules/boa_site_control.ini')) {\n  $ini_loc_src = 'sites/' . $_SERVER['SERVER_NAME'] . 
'/modules/boa_site_control.ini';\n}\nif ($ini_loc_src) {\n  $usr_loc_ini = array();\n  $usr_loc_ini = parse_ini_file($ini_loc_src);\n}\nif (is_array($usr_plr_ini) && $usr_plr_ini) {\n  $all_ini = array_merge($all_ini, $usr_plr_ini);\n}\nif (is_array($usr_loc_ini) && $usr_loc_ini) {\n  $all_ini = array_merge($all_ini, $usr_loc_ini);\n}\nif (is_array($all_ini) && $is_dev && !$is_backend) {\n  if ($ini_plr_src) {\n    header(\"X-Ini-Plr-Src: \" . $ini_plr_src);\n  }\n  if ($ini_loc_src) {\n    header(\"X-Ini-Loc-Src: \" . $ini_loc_src);\n  }\n  if (!$ini_plr_src && !$ini_loc_src) {\n    header(\"X-Ini-Src: BOA-Default\");\n  }\n  header(\"X-Ini-Valkey-Use-Modern: \" . $all_ini['redis_use_modern']);\n  header(\"X-Ini-Valkey-Flush-Forced-Mode: \" . $all_ini['redis_flush_forced_mode']);\n  header(\"X-Ini-Valkey-Lock-Enable: \" . $all_ini['redis_lock_enable']);\n  header(\"X-Ini-Valkey-Path-Enable: \" . $all_ini['redis_path_enable']);\n  header(\"X-Ini-Valkey-Scan-Enable: \" . $all_ini['redis_scan_enable']);\n  header(\"X-Ini-Valkey-Old-Nine-Mode: \" . $all_ini['redis_old_nine_mode']);\n  header(\"X-Ini-Valkey-Old-Eight-Mode: \" . $all_ini['redis_old_eight_mode']);\n  header(\"X-Ini-Valkey-Cache-Disable: \" . $all_ini['redis_cache_disable']);\n  header(\"X-Ini-Valkey-Exclude-Bins: \" . $all_ini['redis_exclude_bins']);\n  header(\"X-Ini-Speed-Booster-Anon-Cache-Ttl: \" . $all_ini['speed_booster_anon_cache_ttl']);\n  header(\"X-Ini-Allow-Anon-Node-Add: \" . $all_ini['allow_anon_node_add']);\n  header(\"X-Ini-Enable-NewRelic-Integration: \" . $all_ini['enable_newrelic_integration']);\n  header(\"X-Ini-Disable-Admin-Dos-Protection: \" . $all_ini['disable_admin_dos_protection']);\n  header(\"X-Ini-Allow-Private-File-Downloads: \" . $all_ini['allow_private_file_downloads']);\n  header(\"X-Ini-Server-Name-Cookie-Domain: \" . $all_ini['server_name_cookie_domain']);\n  header(\"X-Ini-Auto-Detect-Facebook-Integration: \" . 
$all_ini['auto_detect_facebook_integration']);\n  header(\"X-Ini-Auto-Detect-Domain-Access-Integration: \" . $all_ini['auto_detect_domain_access_integration']);\n  header(\"X-Ini-Advagg-Auto-Configuration: \" . $all_ini['advagg_auto_configuration']);\n  header(\"X-Ini-Sql-Conversion-Mode: \" . $all_ini['sql_conversion_mode']);\n  header(\"X-Ini-Enable-Strict-User-Register-Protection: \" . $all_ini['enable_strict_user_register_protection']);\n  header(\"X-Ini-Entitycache-Dont-Enable: \" . $all_ini['entitycache_dont_enable']);\n  header(\"X-Ini-Views-Cache-Bully-Dont-Enable: \" . $all_ini['views_cache_bully_dont_enable']);\n  header(\"X-Ini-Views-Content-Cache-Dont-Enable: \" . $all_ini['views_content_cache_dont_enable']);\n  header(\"X-Ini-Ignore-User-Register-Protection: \" . $all_ini['ignore_user_register_protection']);\n  header(\"X-Ini-Session-Cookie-Ttl: \" . $all_ini['session_cookie_ttl']);\n  header(\"X-Ini-Session-Gc-Eol: \" . $all_ini['session_gc_eol']);\n  header(\"X-Ini-Disable-Drupal-Page-Cache: \" . $all_ini['disable_drupal_page_cache']);\n  header(\"X-Ini-Set-Composer-Manager-Vendor-Dir: \" . $all_ini['set_composer_manager_vendor_dir']);\n  header(\"X-Ini-AutoSlave-Enable: \" . $all_ini['autoslave_enable']);\n  header(\"X-Ini-CacheConsistent-Enable: \" . $all_ini['cache_consistent_enable']);\n}\n\n\n/**\n * Disable reporting errors by default - enable later only for foo.dev.domain\n */\nerror_reporting(0);\n\n\n/**\n * Forced default settings\n */\nif ($drupal_core >= 11) {\n  // Drupal core Package Manager: prevent Status report \"error\" about early testing.\n  $settings['testing_package_manager'] = TRUE;\n}\nif ($drupal_core >= 8) {\n  //\n  // Drupal 8 behaviour is confusing, because while it is possible\n  // to force settings listed below, they will not be shown in the\n  // site admin area. 
For example, CSS/JS aggregation checkboxes\n  // will accept on/off changes on form submit, while being silently\n  // overridden here.\n  //\n  $config['image.settings']['allow_insecure_derivatives'] = TRUE;  // Not sure if it's a good idea in D8\n  $config['image.settings']['suppress_itok_output'] = TRUE;        // Not sure if it's a good idea in D8\n  $config['system.cron']['threshold.autorun'] = FALSE;             // Disable poormanscron (legacy)\n  $config['system.cron']['threshold']['auto'] = 0;                 // Disable auto-cron (current)\n  $config['system.logging']['error_level'] = 'hide';               // Disable errors on screen\n  $config['system.performance']['css']['preprocess'] = TRUE;       // Enable hardcoded CSS aggregation\n  $config['system.performance']['js']['preprocess'] = TRUE;        // Enable hardcoded JS aggregation\n  $config['system.performance']['response.gzip'] = FALSE;          // Nginx already compresses everything\n  //$config['system.file']['default_scheme'] = 'public';             // Force public downloads by default\n}\nelse {\n  if ($backdropcms) {\n    $conf['css_gzip_compression'] = FALSE; // Nginx already compresses everything\n    $conf['js_gzip_compression'] = FALSE;  // Nginx already compresses everything\n    $settings['backdrop_drupal_compatibility'] = TRUE; // Enable Drupal backwards compatibility\n  }\n  $conf['page_compression'] = 0;    // Nginx already compresses everything\n  $conf['boost_crawl_on_cron'] = 0; // Deny Boost crawler\n  $conf['cron_safe_threshold'] = 0; // Disable poormanscron\n  $conf['preprocess_css'] = 1;      // Enable hardcoded CSS aggregation\n  $conf['preprocess_js'] = 1;       // Enable hardcoded JS aggregation\n  $conf['file_downloads'] = 1;      // Force public downloads by default in D6\n  $conf['file_default_scheme'] = 'public'; // Force public downloads by default in D7\n  $conf['error_level'] = 0;         // Disable errors on screen\n  $conf['statistics_enable_access_log'] = 0;   // 
Disable access log stats\n  $conf['allow_authorize_operations'] = FALSE; // Disable insecure plugin manager\n  $conf['admin_menu_cache_client'] = FALSE;    // Disable caching in admin_menu #442560\n  $conf['boost_ignore_htaccess_warning'] = 1;  // Silence false alarm in boost\n  $conf['expire_flush_front'] = 1;             // Default settings for expire module\n  $conf['expire_flush_node_terms'] = 1;        // Default settings for expire module\n  $conf['expire_flush_menu_items'] = 0;        // Default settings for expire module\n  $conf['expire_flush_cck_references'] = 0;    // Default settings for expire module\n  $conf['expire_include_base_url'] = 1;        // Default settings for expire module\n  $conf['js_server_software'] = \"other\";       // Set JS Callback handler server software\n  $conf['video_ffmpeg_instances'] = 1;         // Force safe default for ffmpeg\n  $conf['securepages_enable'] = 1;             // Force to avoid issues with ssl proxy\n  $conf['less_devel'] = FALSE;                 // Prevent CSS regeneration on every page load\n  $conf['drupal_http_request_fails'] = FALSE;  // Avoid false alarm\n  $conf['image_allow_insecure_derivatives'] = TRUE; // Enable to avoid known issues: https://drupal.org/drupal-7.20-release-notes\n  $conf['theme_cloudy_settings']['omega_rebuild_aggregates'] = FALSE;     // Do not allow to turn it on by default\n  $conf['theme_cloudy_settings']['omega_rebuild_theme_registry'] = FALSE; // Do not allow to turn it on by default\n  $update_free_access = FALSE;\n  $conf['webform_table'] = TRUE; // Workaround for SA-CONTRIB-2015-063 https://www.drupal.org/node/2445935\n  $conf['features_rebuild_on_flush'] = FALSE; // https://michaelshadle.com/2015/04/21/speeding-up-drupal-cache-flushing\n  $conf['entity_rebuild_on_flush'] = FALSE; // http://a-fro.com/speed-up-cache-clearing-on-drupal7\n  $conf['redis_eval_enabled'] = TRUE;\n  // Use EVAL commands to greatly speed up cache clearing\n  // Enable when 
https://www.drupal.org/node/2487333 is fixed\n}\n\n\n/**\n * Logic for the front-end only\n */\nif (!$is_backend) {\n  if (isset($_SERVER['HTTP_HOST']) && $is_bot) {\n    if (preg_match(\"/(?:^tmp\\.|\\.test\\.|\\.tmp\\.)/i\", $_SERVER['HTTP_HOST'])) {\n      // Deny known search bots on ^(tmp|foo.(tmp|test)).domain subdomains\n      header('X-Accel-Expires: 60');\n      header(\"Location: http://www.aegirproject.org/\", true, 301);\n      exit;\n    }\n    elseif (preg_match(\"/\\.(?:host8|boa|aegir|o8)\\.(?:biz|io|cc)$/i\", $_SERVER['HTTP_HOST'])) {\n      // Deny known search bots on some protected CI subdomains\n      header('X-Accel-Expires: 60');\n      header(\"Location: https://omega8.cc/\", true, 301);\n      exit;\n    }\n  }\n\n  if ($is_dev) {\n    // Dev mode switch\n    error_reporting(E_ALL & ~E_NOTICE);\n    ini_set('display_errors', TRUE);\n    ini_set('display_startup_errors', TRUE);\n    ini_set('opcache.revalidate_freq', '0');\n    if (!$is_backend) {\n      header(\"X-Opcache-Revalidate-Freq: 0\");\n    }\n    if ($drupal_core >= 8) {\n      unset($config['system.logging']['error_level']);            // Stop hardcoding no errors on screen\n      unset($config['system.performance']['cache.page.max_age']); // Stop hardcoding internal page cache\n      unset($config['system.performance']['css']['preprocess']);  // Stop hardcoding CSS aggregation\n      unset($config['system.performance']['js']['preprocess']);   // Stop hardcoding JS aggregation\n      if (is_readable('sites/' . $_SERVER['SERVER_NAME'] . '/files/development.services.yml')) {\n        //\n        // This file, if it exists, disables Valkey on the fly!\n        //\n        $settings['container_yamls'][] = 'sites/' . $_SERVER['SERVER_NAME'] . 
'/files/development.services.yml';\n        //\n        // The two settings below make sense only if the development.services.yml file\n        // located in the sites/domain/files/ dir contains at least these three lines:\n        //\n        // services:\n        //   cache.backend.null:\n        //     class: Drupal\\Core\\Cache\\NullBackendFactory\n        //\n        $settings['cache']['bins']['render'] = 'cache.backend.null';\n        $settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.null';\n        //\n        // Warning: you must clear caches via the Ægir interface or with Drush\n        // before these lines start working on the .dev. alias without an error 500\n        // saying: You have requested a non-existent service \"cache.backend.null\"\n        //\n        // To enable Twig debugging, also add these lines to the development.services.yml file:\n        //\n        // parameters:\n        //   twig.config:\n        //     debug: true\n        //     auto_reload: true\n        //     cache: true\n        //\n        // Note that normally you should not disable the Twig cache, since auto_reload\n        // is enough for development and debugging, without slowing everything down;\n        // see also: https://www.drupal.org/node/1903374\n        //\n      }\n    }\n    else {\n      $conf['xmlsitemap_submit'] = 0; // Disable XML Sitemap for foo.dev.domain\n      $conf['xmlsitemap_update'] = 0; // Disable XML Sitemap for foo.dev.domain\n      unset($conf['cache']);          // Stop hardcoding internal page cache\n      unset($conf['error_level']);    // Stop hardcoding no errors on screen\n      unset($conf['less_devel']);     // Stop hardcoding CSS regeneration on every page load\n      unset($conf['preprocess_css']); // Stop hardcoding CSS aggregation\n      unset($conf['preprocess_js']);  // Stop hardcoding JS aggregation\n      unset($conf['theme_cloudy_settings']['omega_rebuild_aggregates']);     // Do not force on dev URLs\n      
unset($conf['theme_cloudy_settings']['omega_rebuild_theme_registry']); // Do not force on dev URLs\n    }\n  }\n  else {\n    if (preg_match(\"/^\\/civicrm/\", $_SERVER['REQUEST_URI'])) {\n      // Force custom opcache TTL for CiviCRM codebase\n      ini_set('opcache.revalidate_freq', '180');\n      if (!$is_backend) {\n        header(\"X-Opcache-Revalidate-Freq: 180\");\n      }\n    }\n    else {\n      // Set sane default opcache TTL on non-dev sites\n      ini_set('opcache.revalidate_freq', '60');\n      if (!$is_backend) {\n        header(\"X-Opcache-Revalidate-Freq: 60\");\n      }\n    }\n  }\n}\n\n\n/**\n * Enable page caching if disable_drupal_page_cache is not set to TRUE,\n * but only on non-dev URLs and only for the front-end.\n */\nif (!$is_backend && !$is_dev) {\n  if (!$is_bot && $all_ini['disable_drupal_page_cache']) {\n    if ($drupal_core >= 8) {\n      $config['system.performance']['cache.page.max_age'] = 0;\n    }\n    else {\n      $conf['cache'] = 0;\n    }\n  }\n  else {\n    if ($drupal_core >= 8) {\n      $config['system.performance']['cache.page.max_age'] = 60;\n    }\n    else {\n      $conf['cache'] = 1;\n    }\n  }\n}\n\n\n/**\n * Disable page caching when Speed Booster is disabled on the fly\n */\nif (!$is_bot && isset($_SERVER['REQUEST_URI']) &&\n    preg_match(\"/nocache=1/\", $_SERVER['REQUEST_URI'])) {\n  if ($drupal_core >= 8) {\n    $config['system.performance']['cache.page.max_age'] = 0;\n  }\n  else {\n    $conf['cache'] = 0;\n  }\n}\n\n\n/**\n * Session Cookie TTL settings\n *\n * Set the session cookie lifetime (in seconds), i.e. the time from when the\n * session is created until the cookie expires and the browser is expected to\n * discard it. The value 0 means \"until the browser is closed\".\n */\nini_set('session.cookie_lifetime', $all_ini['session_cookie_ttl']);\n\n\n/**\n * Session Garbage Collector EOL settings\n *\n * Set the session lifetime (in seconds), i.e. 
the time from the user's last visit\n * after which the session may be deleted by the session garbage collector. When\n * a session is deleted, authenticated users are logged out, and the contents\n * of the user's $_SESSION variable are discarded.\n */\nini_set('session.gc_maxlifetime', $all_ini['session_gc_eol']);\n\n\n/**\n * Hostmaster specific settings\n */\nif ($conf['install_profile'] == 'hostmaster') {\n  $conf['hosting_require_disable_before_delete'] = 0;\n  $conf['hosting_task_refresh_timeout'] = 5555;\n  $conf['theme_link'] = FALSE;\n  $conf['cache'] = 0;\n  if (!$is_backend && isset($_SERVER['HTTP_USER_AGENT'])) {\n    $conf['environment_indicator_overwrite'] = TRUE;\n    $conf['environment_indicator_overwritten_position'] = 'top';\n    if (is_readable('/data/conf/development-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Development';\n      $conf['environment_indicator_overwritten_color'] = '#00AA00'; // Green\n    }\n    elseif (is_readable('/data/conf/staging-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Staging';\n      $conf['environment_indicator_overwritten_color'] = '#FFCC00'; // Yellow\n    }\n    elseif (is_readable('/data/conf/production-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Production';\n      $conf['environment_indicator_overwritten_color'] = '#CC0000'; // Red\n    }\n    elseif (is_readable('/data/conf/testing-env.ctrl')) {\n      $conf['environment_indicator_overwritten_name'] = 'Testing';\n      $conf['environment_indicator_overwritten_color'] = '#FF69B4'; // Hot Pink\n      //$conf['environment_indicator_overwritten_color'] = '#FFC0CB'; // Light Pink\n    }\n    else {\n      $conf['environment_indicator_overwritten_name'] = 'Production';\n      $conf['environment_indicator_overwritten_color'] = '#CC0000'; // Red\n    }\n    ini_set('session.cookie_lifetime', 0); // Force log-out on browser quit\n    header('X-Accel-Expires: 1');\n    if 
(!file_exists('/data/conf/no-https-aegir.inc')) {\n      // Guard with isset() to avoid undefined index notices when the\n      // forwarded-proto header or the HTTPS flag is absent\n      $request_type = ((isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') ||\n      (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on')) ? 'SSL' : 'NONSSL';\n      if ($request_type != \"SSL\" &&\n          !preg_match(\"/^\\/cron\\.php/\", $_SERVER['REQUEST_URI'])) { // we force secure connection here\n        header('X-Accel-Expires: 5');\n        header(\"Location: https://\" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);\n        exit;\n      }\n    }\n    if (isset($_SERVER['HTTP_HOST']) &&\n        preg_match(\"/\\.(?:host8|boa|aegir|o8)\\.(?:biz|io|cc)$/i\", $_SERVER['HTTP_HOST'])) {\n      if (preg_match(\"/^\\/admin\\/user\\/user\\/create/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/^\\/node\\/add\\/server/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/^\\/node\\/(?:1|2|4|5|7|8|10)\\/(?:edit|delete)/\", $_SERVER['REQUEST_URI'])) {\n        header('X-Accel-Expires: 5');\n        header(\"Location: https://\" . $_SERVER['HTTP_HOST'] . 
\"/hosting/sites\", true, 301);\n        exit;\n      }\n    }\n  }\n  else {\n    $use_valkey = TRUE;\n  }\n}\n\n\n/**\n * Main section starts here\n */\nif (isset($_SERVER['SERVER_NAME']) &&\n    $all_ini['allow_private_file_downloads']) {\n  unset($conf['file_downloads']); // Disable hardcoded public downloads for D6\n  unset($conf['file_default_scheme']); // Disable hardcoded public downloads for D7\n  //unset($config['system.file']['default_scheme']); // Disable hardcoded public downloads for D8+\n  if ($is_dev && !$is_backend) {\n    header('X-Is-Cart: YES');\n  }\n}\n\n/* ---------------- Feature switch -------------------------------------- */\nif ($drupal_core >= 6) {\n  $use_valkey = TRUE;\n}\n\nif (isset($_SERVER['SERVER_NAME'])) {\n  if ($all_ini['valkey_cache_disable'] || $all_ini['redis_cache_disable']) {\n    $use_valkey = FALSE;\n  }\n}\n\nif (!$is_bot && isset($_SERVER['REQUEST_URI'])) {\n  if (preg_match(\"/noredis=1/\", $_SERVER['REQUEST_URI'])) {\n    $use_valkey = FALSE;\n  }\n}\n\n/* ---------------- Defaults -------------------------------------------- */\n$valkey_up = FALSE;\n\n/* ---------------- Connection targets ---------------------------------- */\n$valkey_socket_path = '/run/valkey/valkey.sock';\n$valkey_host        = '127.0.0.1';\n$valkey_port        = 6379;\n$valkey_pass_file   = '/data/conf/valkey/pass.inc';\n\n/* ---------------- Timeouts & backoff ---------------------------------- */\n$connect_timeout_s  = 0.2;   // short and non-blocking feel\n$read_timeout_s     = 0.2;   // keep calls snappy\n$backoff_ttl_s      = 60;    // do not retry within this window after a failure\n$flag_dir_run       = '/var/tmp/fpm';\n$flag_file_fallback = '/data/conf/arch/valkey.disabled.flag'; // fallback\n\n/* ---------------- Optional debug log ---------------------------------- */\n// Set to an absolute path to enable lightweight probe logging.\n// Example: '/var/tmp/fpm/valkey-fallback.log'\n$redis_debug_log    = '';\n\n/* ---------------- 
Helpers (filesystem only) ---------------------------- */\nfunction _valkey_backoff_flag_path($flag_dir_run, $fallback) {\n  $path = $fallback;\n  if (is_dir($flag_dir_run)) {\n    if (is_writable($flag_dir_run)) {\n      $path = rtrim($flag_dir_run, '/').'/valkey.disabled.flag';\n    }\n  }\n  return $path;\n}\n\nfunction _valkey_backoff_is_active($flag_path, $ttl) {\n  $active = FALSE;\n  if (is_file($flag_path)) {\n    $age = time() - @filemtime($flag_path);\n    if ($age >= 0 && $age < $ttl) {\n      $active = TRUE;\n    }\n  }\n  return $active;\n}\n\nfunction _valkey_backoff_touch($flag_path) {\n  @touch($flag_path);\n}\n\nfunction _valkey_backoff_clear($flag_path) {\n  if (is_file($flag_path)) {\n    @unlink($flag_path);\n  }\n}\n\nfunction _valkey_dbg_write($log_path, $line) {\n  if (!empty($log_path)) {\n    $msg = date('c').' '.$line.\"\\n\";\n    @file_put_contents($log_path, $msg, FILE_APPEND);\n  }\n}\n\n/* ------------------- Probe Valkey once with guard --------------------- */\n$flag_path = _valkey_backoff_flag_path($flag_dir_run, $flag_file_fallback);\n$skip_probe = _valkey_backoff_is_active($flag_path, $backoff_ttl_s);\n\nif ($use_valkey) {\n  if (!$skip_probe) {\n    if (class_exists('Redis')) {\n      $r = new Redis();\n      $connected = FALSE;\n      $last_reason = 'init';\n\n      // Try socket first.\n      if (!empty($valkey_socket_path) && @is_readable($valkey_socket_path)) {\n        try {\n          $connected = $r->connect($valkey_socket_path);\n        } catch (Exception $e) {\n          $connected = FALSE;\n          $last_reason = 'connect-socket-exception';\n        }\n      }\n\n      // Fallback to TCP.\n      if (!$connected) {\n        try {\n          $connected = $r->connect($valkey_host, $valkey_port, $connect_timeout_s);\n        } catch (Exception $e) {\n          $connected = FALSE;\n          $last_reason = 'connect-tcp-exception';\n        }\n      }\n\n      if ($connected) {\n        if 
(defined('Redis::OPT_READ_TIMEOUT')) {\n          $r->setOption(Redis::OPT_READ_TIMEOUT, $read_timeout_s);\n        }\n\n        // Authenticate if password file exists.\n        $auth_pass = 'isfoobared';\n        if (is_file($valkey_pass_file)) {\n          $auth_pass = trim((string) @file_get_contents($valkey_pass_file));\n        }\n        if ($auth_pass !== '') {\n          try {\n            if (!$r->auth($auth_pass)) {\n              $connected = FALSE;\n              $last_reason = 'auth-failed';\n            }\n          } catch (Exception $e) {\n            $connected = FALSE;\n            $last_reason = 'auth-exception';\n          }\n        }\n\n        // Verify ping.\n        if ($connected) {\n          try {\n            $pong = $r->ping();\n            if ((is_string($pong) && stripos($pong, 'PONG') !== FALSE) || $pong === TRUE) {\n              $valkey_up = TRUE;\n            } else {\n              $valkey_up = FALSE;\n              $last_reason = 'ping-not-ok';\n            }\n          } catch (Exception $e) {\n            $valkey_up = FALSE;\n            $last_reason = 'ping-exception';\n          }\n        }\n\n        if ($valkey_up) {\n          _valkey_backoff_clear($flag_path);\n          _valkey_dbg_write($redis_debug_log, 'VALKEY UP action=clear flag='.$flag_path.' reason=ok');\n        } else {\n          _valkey_backoff_touch($flag_path);\n          _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason='.(string) $last_reason);\n        }\n\n        try {\n          $r->close();\n        } catch (Exception $e) {\n          // ignore\n        }\n      } else {\n        _valkey_backoff_touch($flag_path);\n        _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' 
reason='.(string) $last_reason);\n      }\n    } else {\n      // phpredis extension not available.\n      $valkey_up = FALSE;\n      _valkey_backoff_touch($flag_path);\n      _valkey_dbg_write($redis_debug_log, 'VALKEY DOWN action=touch flag='.$flag_path.' reason=no-phpredis');\n    }\n  } else {\n    $valkey_up = FALSE;\n    _valkey_dbg_write($redis_debug_log, 'VALKEY SKIP reason=backoff-active flag='.$flag_path);\n  }\n}\n\n/* ---------------- Diagnostics & final guard ---------------------------- */\nif (!empty($is_dev)) {\n  if (empty($is_backend)) {\n    if ($use_valkey && $valkey_up) {\n      header('X-Allow-Valkey: YES');\n    } else {\n      header('X-Allow-Valkey: NO');\n    }\n  }\n}\n\nif (!empty($is_install)) {\n  $use_valkey = FALSE;\n}\n\nif ($all_ini['auto_detect_domain_access_integration']) {\n  if (is_readable('sites/all/modules/domain/settings.inc')) {\n    $da_inc = 'sites/all/modules/domain/settings.inc';\n  }\n  elseif (is_readable('sites/all/modules/contrib/domain/settings.inc')) {\n    $da_inc = 'sites/all/modules/contrib/domain/settings.inc';\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/domain/settings.inc')) {\n    $da_inc = 'profiles/' . $conf['install_profile'] . '/modules/domain/settings.inc';\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/contrib/domain/settings.inc')) {\n    $da_inc = 'profiles/' . $conf['install_profile'] . 
'/modules/contrib/domain/settings.inc';\n  }\n}\n\n/**\n * Activate mail_safety for sites-cron-off on the fly\n */\nif (is_readable('/data/conf/sites-cron-off.ctrl')) {\n  if ($drupal_core >= 8) {\n    $config['automated_cron.settings']['interval'] = 0;\n    $config['mail_safety.settings']['default_mail_address'] = '';\n    $config['mail_safety.settings']['enabled'] = TRUE;\n    $config['mail_safety.settings']['send_mail_to_dashboard'] = TRUE;\n    $config['mail_safety.settings']['send_mail_to_default_mail'] = FALSE;\n    $config['scheduler.settings']['lightweight_cron_access_key'] = '';\n    $config['simple_cron.settings']['interval'] = 0;\n    $config['system.cron']['key'] = '';\n    $config['system.cron']['last'] = 0;\n    $config['system.cron']['threshold']['auto'] = 0;\n    $config['ultimate_cron.job.cron_queue']['status'] = FALSE;\n    $config['ultimate_cron.settings']['scheduler'] = 'never';\n  }\n  else {\n    $conf['mail_safety_enabled'] = TRUE;\n    $conf['mail_safety_send_mail_to_dashboard'] = TRUE;\n  }\n}\n\n/**\n * Use site specific composer_manager dir\n */\nif ($all_ini['set_composer_manager_vendor_dir'] && !$is_install) {\n  if ($drupal_core >= 8) {\n    $config['composer_manager.settings']['vendor_dir'] = 'sites/' . $_SERVER['SERVER_NAME'] . '/vendor';\n  }\n  else {\n    $conf['composer_manager_vendor_dir'] = 'sites/' . $_SERVER['SERVER_NAME'] . 
'/vendor';\n  }\n}\n\nif (!empty($is_install)) {\n  $da_inc    = FALSE;\n}\n\nif (isset($_SERVER['HTTP_USER_AGENT']) && isset($_SERVER['USER_DEVICE'])) {\n  $this_device = $_SERVER['USER_DEVICE'];\n}\nelse {\n  $this_device = 'normal';\n}\n\n\n/**\n * Logic for non-dev URLs only\n */\nif (!$is_dev) {\n  if ($all_ini['advagg_auto_configuration']) {\n\n    if ($drupal_core == 6) {\n      if (is_readable('modules/o_contrib/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('sites/all/modules/advagg/advagg_bundler/advagg_bundler.module')) {\n        $conf['preprocess_css'] = 0; // CSS aggregation disabled\n        $conf['preprocess_js'] = 0;  // JS aggregation disabled\n        $conf['advagg_aggregate_mode'] = 1;\n        $conf['advagg_async_generation'] = 1;\n        $conf['advagg_checksum_mode'] = \"md5\";\n        $conf['advagg_closure'] = 1;\n        $conf['advagg_css_compress_agg_files'] = 1;\n        $conf['advagg_css_compress_compressor_level'] = \"sane\";\n        $conf['advagg_css_compress_inline'] = 1;\n        $conf['advagg_css_compressor'] = 2;\n        $conf['advagg_debug'] = 0;\n        $conf['advagg_dir_htaccess'] = 0;\n        $conf['advagg_enabled'] = 1;\n        $conf['advagg_gzip_compression'] = 1;\n        $conf['advagg_js_compress_agg_files'] = 1;\n        $conf['advagg_js_compress_callback'] = 1;\n        $conf['advagg_js_compress_inline'] = 1;\n        $conf['advagg_js_compress_packer_enable'] = 0;\n        $conf['advagg_js_compressor'] = 1;\n        $conf['advagg_page_cache_mode'] = 0;\n        $conf['advagg_rebuild_on_flush'] = 0;\n        $conf['advagg_server_addr'] = \"-1\";\n      }\n    }\n    elseif ($drupal_core == 7) {\n      if (is_readable('modules/o_contrib_seven/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('sites/all/modules/advagg/advagg_bundler/advagg_bundler.module')) {\n        $conf['advagg_bundler_active'] = 1;\n        $conf['advagg_cache_level'] = 3;\n        
$conf['advagg_combine_css_media'] = 0;\n        $conf['advagg_core_groups'] = 0;\n        $conf['advagg_css_compressor'] = 2;\n        $conf['advagg_css_compress_inline'] = 2;\n        $conf['advagg_css_compress_inline_if_not_cacheable'] = 1;\n        $conf['advagg_enabled'] = 1;\n        $conf['advagg_gzip'] = 1;\n        $conf['advagg_ie_css_selector_limiter'] = 1;\n        $conf['advagg_js_compressor'] = 3;\n        $conf['advagg_js_compress_packer'] = 0;\n        $conf['advagg_js_compress_inline'] = 3;\n        $conf['advagg_js_compress_inline_if_not_cacheable'] = 1;\n        $conf['preprocess_css'] = 1;\n        $conf['preprocess_js'] = 1;\n      }\n    }\n    elseif ($drupal_core >= 8) {\n      if (is_readable('modules/o_contrib_eight/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/o_contrib_nine/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/o_contrib_ten/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/o_contrib_eleven/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('modules/advagg/advagg_bundler/advagg_bundler.module') ||\n          is_readable('sites/all/modules/advagg/advagg_bundler/advagg_bundler.module')) {\n        $config['advagg.settings']['css']['combine_media'] = false;\n        $config['advagg.settings']['css']['ie']['limit_selectors'] = true;\n        $config['advagg.settings']['cache_level'] = 3;\n        $config['advagg.settings']['core_groups'] = false;\n        $config['advagg.settings']['enabled'] = true;\n        $config['advagg_bundler.settings']['active'] = true;\n        $config['advagg_css_minify.settings']['minifier'] = 2;\n        $config['advagg_js_minify.settings']['minifier'] = 3;\n        $config['system.performance']['css']['preprocess'] = true;\n        $config['system.performance']['js']['preprocess'] = true;\n      }\n    }\n\n    if ($drupal_core == 6 || $drupal_core == 7) {\n      if 
(is_readable('modules/o_contrib/httprl/httprl.module') ||\n          is_readable('modules/o_contrib_seven/httprl/httprl.module')) {\n        $conf['advagg_use_httprl'] = 1;\n        $conf['httprl_background_callback'] = 1;\n        $conf['httprl_connect_timeout'] = 3;\n        $conf['httprl_dns_timeout'] = 3;\n        $conf['httprl_global_timeout'] = \"60\";\n        $conf['httprl_server_addr'] = \"-1\";\n        $conf['httprl_timeout'] = \"10\";\n        $conf['httprl_ttfb_timeout'] = \"5\";\n        // $conf['drupal_http_request_function'] = \"httprl_override_core\";\n      }\n    }\n  }\n}\n\n\n/**\n * More logic for the front-end only\n */\nif (!$is_backend && isset($_SERVER['HTTP_HOST']) &&\n    isset($_SERVER['SERVER_NAME'])) {\n  if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) || isset($_SERVER['HTTPS'])) {\n    $conf['https'] = TRUE;\n    $request_type = ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https' || $_SERVER['HTTPS'] == 'on') ? 'SSL' : 'NONSSL';\n    if ($request_type == \"SSL\") { // we check for secure connection to set correct base_url\n      $base_url = 'https://' . $_SERVER['HTTP_HOST'];\n      if ($conf['install_profile'] != 'hostmaster') {\n        $_SERVER['HTTPS'] = 'on';\n        if ($drupal_core >= 7) {\n          ini_set('session.cookie_secure', TRUE);\n          if ($is_dev) {\n            header('X-Cookie-Sec: YES');\n          }\n        }\n      }\n      if ($is_dev) {\n        header('X-Local-Proto: https');\n      }\n    }\n    else {\n      if ($site_subdir && $raw_host) {\n        $base_url = 'http://' . $raw_host . '/' . $site_subdir;\n      }\n      else {\n        $base_url = 'http://' . $_SERVER['HTTP_HOST'];\n      }\n    }\n  }\n  else {\n    if ($site_subdir && $raw_host) {\n      $base_url = 'http://' . $raw_host . '/' . $site_subdir;\n    }\n    else {\n      $base_url = 'http://' . $_SERVER['HTTP_HOST'];\n    }\n  }\n\n  if ($base_url && $is_dev) {\n    header(\"X-Base-Url: \" . 
$base_url);\n  }\n\n  if ($site_subdir && $is_dev) {\n    header(\"X-Site-Subdir: \" . $site_subdir);\n  }\n\n  if ($all_ini['server_name_cookie_domain']) {\n    $domain = '.' . preg_replace('`^www\\.`', '', $_SERVER['SERVER_NAME']);\n  }\n  elseif ($site_subdir && isset($_SERVER['RAW_HOST'])) {\n    $domain = '.' . preg_replace('`^www\\.`', '', $_SERVER['RAW_HOST']);\n  }\n  else {\n    $domain = '.' . preg_replace('`^www\\.`', '', $_SERVER['HTTP_HOST']);\n  }\n  $domain = str_replace('..', '.', $domain);\n  if (count(explode('.', $domain)) > 2 &&\n      !is_numeric(str_replace('.', '', $domain))) {\n    ini_set('session.cookie_domain', $domain);\n    $cookie_domain = $domain;\n    header(\"X-Cookie-Domain: \" . $cookie_domain);\n  }\n\n  $this_prefix = preg_replace('`^www\\.`', '', $_SERVER['SERVER_NAME']) . '_z_';\n  if ($is_dev) {\n    header(\"X-Valkey-Prefix: \" . $this_prefix);\n  }\n\n  if (isset($_SERVER['REQUEST_TIME']) &&\n      isset($_SERVER['REMOTE_ADDR']) &&\n      isset($_SERVER['HTTP_USER_AGENT']) &&\n      !preg_match(\"/^\\/esi\\//\", $_SERVER['REQUEST_URI'])) {\n\n    // Determine if the site is running on HTTPS\n    $request_type = 'NONSSL';\n    if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) || isset($_SERVER['HTTPS'])) {\n      $request_type = ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https' || $_SERVER['HTTPS'] == 'on') ? 'SSL' : 'NONSSL';\n    }\n    if ($request_type == \"SSL\") {\n      $is_https = TRUE;\n      if ($is_dev) {\n        header('X-Request-Type:' . $request_type);\n      }\n    }\n    else {\n      $is_https = FALSE;\n      if ($is_dev) {\n        header('X-Request-Type:' . $request_type);\n      }\n    }\n\n    // Create a unique identifier for the request\n    $identity = $_SERVER['REQUEST_TIME'] . $_SERVER['REMOTE_ADDR'] . $_SERVER['SERVER_NAME'] . $_SERVER['HTTP_USER_AGENT'];\n    $identity = 'BD' . md5($identity);\n    if ($is_dev) {\n      header('X-Identity:' . 
$identity);\n    }\n\n    // Defaults for the shared session-cookie check below, so neither\n    // variable can be used while undefined\n    $is_logged_in = FALSE;\n    $test_sess_name = '';\n\n    if ($drupal_core >= 8) {\n      // Check if the user is logged in by looking for the session cookie.\n      // The session cookie name starts with \"SESS\" or \"SSESS\" followed by a hash.\n      // This check is not site specific in Drupal 8+ like it is in Drupal 7\n      // or Drupal 6, but should be sufficient for the intended use case below.\n      $cookie_prefix = ini_get('session.cookie_secure') ? 'SSESS' : 'SESS';\n      foreach ($_COOKIE as $key => $value) {\n        // Strict comparison is required here: strpos() returns FALSE when\n        // there is no match, and FALSE == 0 would mark every visitor\n        // carrying any cookie as logged in\n        if (strpos($key, $cookie_prefix) === 0) {\n          $is_logged_in = TRUE;\n          break;\n        }\n      }\n      if ($is_dev) {\n        header('X-Cookie-Prefix-A:' . $cookie_prefix);\n        header('X-Is-Logged-In-A:' . $is_logged_in);\n      }\n    }\n    elseif ($drupal_core == 7) {\n      // For Drupal 7 use sha256 hash and cookie prefix based on session.cookie_secure\n      $cookie_prefix = ini_get('session.cookie_secure') ? 'SSESS' : 'SESS';\n      $test_sess_name = $cookie_prefix . substr(hash('sha256', $cookie_domain), 0, 32);\n      if ($is_dev) {\n        header('X-Cookie-Prefix-B:' . $cookie_prefix);\n        header('X-Test-Sess-Name-B:' . $test_sess_name);\n      }\n    }\n    else {\n      // For Drupal 6 use md5 hash and SESS prefix only\n      $cookie_prefix = 'SESS';\n      $test_sess_name = $cookie_prefix . md5($cookie_domain);\n      if ($is_dev) {\n        header('X-Cookie-Prefix-C:' . $cookie_prefix);\n        header('X-Test-Sess-Name-C:' . $test_sess_name);\n      }\n    }\n\n    // Check if the session cookie is present\n    if (($test_sess_name !== '' && isset($_COOKIE[$test_sess_name])) || $is_logged_in) {\n      $is_anon = 'LOGGED';\n    }\n    else {\n      $is_anon = 'ANONYMOUS';\n    }\n    if ($is_dev) {\n      header('X-Is-Anon:' . 
$is_anon);\n    }\n\n    // Redirect not logged in visitors to homepage to protect admin URLs from bots\n    if ($is_anon == 'ANONYMOUS') {\n      if (preg_match(\"/\\/(?:node\\/[0-9]+\\/edit|node\\/add)/\", $_SERVER['REQUEST_URI'])) {\n        if (empty($all_ini['allow_anon_node_add'])) {\n          header(\"Location: \" . $base_url . \"/\", true, 301);\n          exit;\n        }\n      }\n      if (preg_match(\"/^\\/(?:[a-z]{2}\\/)?(?:admin|logout|privatemsg|approve)/\", $_SERVER['REQUEST_URI'])) {\n        if (empty($all_ini['disable_admin_dos_protection'])) {\n          header(\"Location: \" . $base_url . \"/\", true, 301);\n          exit;\n        }\n      }\n    }\n\n    // Additional logic for caching or other needs\n    if ($is_anon == 'ANONYMOUS' && !empty($all_ini['speed_booster_anon_cache_ttl']) && preg_match(\"/^[0-9]{2,}$/\", $all_ini['speed_booster_anon_cache_ttl'])) {\n      if ($all_ini['speed_booster_anon_cache_ttl'] > 10) {\n        $expire_in_seconds = $all_ini['speed_booster_anon_cache_ttl'];\n        header('X-Limit-Booster:' . 
$all_ini['speed_booster_anon_cache_ttl']);\n      }\n    }\n\n    // Prevent turning the feature server site into a spam machine\n    // Disable self-registration also on hostmaster\n    if ($conf['install_profile'] == 'feature_server' ||\n        $conf['install_profile'] == 'hostmaster') {\n      $conf['user_register'] = 0; // Force \"Only site administrators can create new user accounts\"\n    }\n    if (!$is_bot && !$high_traffic) {\n      if (preg_match(\"/^\\/(?:[a-z]{2}\\/)?(?:admin|cart|checkout|logout|privatemsg)/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/\\/(?:node\\/[0-9]+\\/edit|node\\/add|comment\\/reply|approve|ajax_comments|commerce_currency_select)/\", $_SERVER['REQUEST_URI']) ||\n          preg_match(\"/(?:^dev\\.|\\.dev\\.|\\.devel\\.)/\", $_SERVER['HTTP_HOST'])) {\n        $expire_in_seconds = '1';\n        header('X-Limit-Booster: 1');\n      }\n      if (isset($_SERVER['REQUEST_URI']) &&\n          preg_match(\"/(?:x-progress-id|ahah|progress\\/|autocomplete|ajax|batch|js\\/.*)/i\", $_SERVER['REQUEST_URI'])) {\n        $expire_in_seconds = '0';\n        if ($is_dev) {\n          header('X-Skip-Booster: AjaxRU');\n        }\n      }\n      if (isset($_SERVER['QUERY_STRING']) &&\n          preg_match(\"/(?:x-progress-id|ahah|progress\\/|autocomplete|ajax|batch|js\\/.*)/i\", $_SERVER['QUERY_STRING'])) {\n        $expire_in_seconds = '0';\n        if ($is_dev) {\n          header('X-Skip-Booster: AjaxQS');\n        }\n      }\n      if (isset($_SERVER['REQUEST_METHOD']) &&\n          $_SERVER['REQUEST_METHOD'] == 'POST') {\n        if (!isset($_COOKIE['NoCacheID'])) {\n          $lifetime = '15';\n          setcookie('NoCacheID', 'POST' . 
$identity, $_SERVER['REQUEST_TIME'] + $lifetime, '/', $cookie_domain);\n        }\n        $expire_in_seconds = '0';\n        if ($is_dev) {\n          header('X-Skip-Booster: PostRM');\n        }\n      }\n    }\n    if ($is_bot) {\n      if (!preg_match(\"/Pingdom/i\", $_SERVER['HTTP_USER_AGENT']) &&\n          !preg_match(\"/(?:rss|feed)/i\", $_SERVER['REQUEST_URI'])) {\n        $expire_in_seconds = '3600';\n        if ($is_dev) {\n          header('X-Bot-Booster: 3600');\n        }\n      }\n    }\n    if ($conf['install_profile'] != 'hostmaster' && ($expire_in_seconds > -1)) {\n      header(\"X-Accel-Expires: \" . $expire_in_seconds);\n      if ($expire_in_seconds > -1 && $expire_in_seconds < 2) {\n        $conf['cache'] = 0; // Disable page caching on the fly\n      }\n    }\n  }\n}\n\n\n/**\n * Support files/styles with short URIs also for files not generated yet\n */\nif (isset($_SERVER['REQUEST_URI']) && !empty($base_url) &&\n    preg_match(\"/^\\/files\\/styles\\//\", $_SERVER['REQUEST_URI'])) {\n  header(\"Location: \" . $base_url . \"/sites/\" . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'], true, 301);\n  exit;\n}\n\n\n/**\n * Optional system level overrides\n */\nif (is_readable('/data/conf/override.global.inc')) {\n  require_once \"/data/conf/override.global.inc\";\n}\n\n/**\n * Use Valkey/Redis caching and lock support when the probe succeeded\n */\nif ($valkey_up && $use_valkey && !$custom_cache) {\n  $cache_backport = FALSE;\n  $cache_valkey = FALSE;\n  $all_ini['redis_use_modern'] = TRUE;\n  if ($all_ini['redis_use_modern']) {\n    if ($drupal_core >= 8) {\n      $redis_comprs = TRUE;\n      $redis_dirname = 'redis_eight';\n      if (!$all_ini['redis_old_eight_mode']) {\n        $redis_dirname = 'redis_compr';\n      }\n      if ($drupal_core == 10 || $drupal_core == 11) {\n        $redis_new_dirname = 'redis_ten_eleven';\n        $redis_legacy_dirname = 'redis_nine_ten';\n        if (is_readable('modules/o_contrib_ten/' . $redis_new_dirname . 
'/redis.services.yml')) {\n          $redis_dirname = $redis_new_dirname;\n        }\n        elseif (is_readable('modules/o_contrib_ten/' . $redis_legacy_dirname . '/redis.services.yml')) {\n          $redis_dirname = $redis_legacy_dirname;\n        }\n      }\n      elseif ($drupal_core == 9) {\n        $redis_dirname = 'redis_nine_ten';\n        if ($all_ini['redis_old_nine_mode']) {\n          $redis_dirname = 'redis_compr';\n        }\n      }\n    }\n    else {\n      $redis_dirname = 'redis_edge';\n    }\n    if ($is_dev && !$is_backend) {\n      header(\"X-Valkey-Version-Is: Modern\");\n      header(\"X-Valkey-Dir-Is: \" . $redis_dirname);\n    }\n    if ($all_ini['redis_flush_forced_mode']) {\n      if ($drupal_core >= 8) {\n        $settings['redis_perm_ttl']                 = 86400; // 24 hours max\n        $settings['redis_flush_mode']               = 1; // Valkey default is 0\n        $settings['redis_flush_mode_cache_page']    = 2; // Valkey default is 1\n        $settings['redis_flush_mode_cache_block']   = 2; // Valkey default is 1\n        $settings['redis_flush_mode_cache_menu']    = 2; // Valkey default is 0\n        $settings['redis_flush_mode_cache_metatag'] = 2; // Valkey default is 0\n      }\n      else {\n        $conf['redis_perm_ttl']                 = 86400; // 24 hours max\n        $conf['redis_flush_mode']               = 1; // Valkey default is 0\n        $conf['redis_flush_mode_cache_page']    = 2; // Valkey default is 1\n        $conf['redis_flush_mode_cache_block']   = 2; // Valkey default is 1\n        $conf['redis_flush_mode_cache_menu']    = 2; // Valkey default is 0\n        $conf['redis_flush_mode_cache_metatag'] = 2; // Valkey default is 0\n      }\n      // See http://bit.ly/1drmi35 for more information\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Flush-Forced-Mode: Forced\");\n      }\n    }\n  }\n  else {\n    $redis_dirname = 'redis';\n    if ($is_dev && !$is_backend) {\n      
header(\"X-Valkey-Version-Is: Legacy\");\n      header(\"X-Valkey-Dir-Is: \" . $redis_dirname);\n    }\n  }\n  if ($drupal_core >= 8) {\n    if (file_exists('sites/' . $_SERVER['SERVER_NAME'] . '/.redisLegacyOff')) {\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Off-Ctrl-Exists: .redisLegacyOff\");\n      }\n    }\n    else {\n      if (is_readable('sites/' . $_SERVER['SERVER_NAME'] . '/files/development.services.yml')) {\n        if ($is_dev && !$is_backend) {\n          header(\"X-Dev-Services-Yml-Is-Readable: development.services.yml\");\n        }\n      }\n      else {\n        if (is_readable('modules/o_contrib_ten')) {\n          if (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_ten/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_ten/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_ten/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_eleven')) {\n          if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_eleven/' . $redis_dirname . 
'/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_eleven/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_nine')) {\n          if (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_failover_path = 'modules/o_contrib_nine/' . $redis_dirname . '/example.failover.services.yml';\n            $example_services_path = 'modules/o_contrib_nine/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_nine/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n          }\n          if (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n        elseif (is_readable('modules/o_contrib_eight')) {\n          if (is_readable('modules/o_contrib_eight/' . $redis_dirname . '/example.services.yml')) {\n            $cache_valkey = TRUE;\n            $example_services_path = 'modules/o_contrib_eight/' . $redis_dirname . '/example.services.yml';\n            $cache_gzip_path = 'modules/o_contrib_eight/' . $redis_dirname . 
'/lib/Redis/CacheCompressed.php';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Example-Services-Is-Readable: \" . $example_services_path);\n            }\n          }\n          if (is_readable('modules/o_contrib_eight/' . $redis_dirname . '/redis.services.yml')) {\n            $cache_valkey = TRUE;\n            $redis_services_path = 'modules/o_contrib_eight/' . $redis_dirname . '/redis.services.yml';\n            if ($is_dev && !$is_backend) {\n              header(\"X-Valkey-Services-Is-Readable: \" . $redis_services_path);\n            }\n          }\n        }\n      }\n    }\n  }\n  elseif ($drupal_core == 7) {\n    if (is_readable('modules/o_contrib_seven/' . $redis_dirname . '/redis.autoload.inc')) {\n      $cache_valkey = TRUE;\n      $cache_backport = FALSE;\n      $cache_redis_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.autoload.inc';\n      $cache_lock_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.lock.inc';\n      $cache_path_path = 'modules/o_contrib_seven/' . $redis_dirname . '/redis.path.inc';\n      $cache_gzip_path = 'modules/o_contrib_seven/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Autoload-Is-Readable: \" . $cache_redis_path);\n      }\n    }\n    if ($all_ini['autoslave_enable']) {\n      if (is_readable('modules/o_contrib_seven/autoslave/autoslave.cache.inc') &&\n        is_readable('includes/database/autoslave/database.inc')) {\n        $use_auto_se = TRUE;\n        $gzip_mode = FALSE;\n        $cache_backport = FALSE;\n        $auto_se_path = 'modules/o_contrib_seven/autoslave/autoslave.cache.inc';\n        if ($is_dev && !$is_backend) {\n          header(\"X-AutoSlave-Cache-Is-Readable: \" . 
$auto_se_path);\n        }\n      }\n    }\n    if ($all_ini['cache_consistent_enable']) {\n      if (is_readable('modules/o_contrib_seven/cache_consistent/cache_consistent.inc')) {\n        $use_cache_ct = TRUE;\n        $gzip_mode = FALSE;\n        $cache_backport = FALSE;\n        $cache_ct_path = 'modules/o_contrib_seven/cache_consistent/cache_consistent.inc';\n        if ($is_dev && !$is_backend) {\n          header(\"X-CacheConsistent-Is-Readable: \" . $cache_ct_path);\n        }\n      }\n    }\n  }\n  elseif ($drupal_core == 6) {\n    if (is_readable('modules/o_contrib/cache_backport/cache.inc')) {\n      $cache_backport = TRUE;\n      $cache_backport_path = 'modules/o_contrib/cache_backport/cache.inc';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Cache-Backport-Is-Readable: \" . $cache_backport_path);\n      }\n    }\n    if (is_readable('modules/o_contrib/' . $redis_dirname . '/redis.autoload.inc')) {\n      $cache_valkey = TRUE;\n      $cache_redis_path = 'modules/o_contrib/' . $redis_dirname . '/redis.autoload.inc';\n      $cache_lock_path = 'modules/o_contrib/' . $redis_dirname . '/redis.lock.inc';\n      $cache_path_path = 'modules/o_contrib/' . $redis_dirname . '/redis.path.inc';\n      $cache_gzip_path = 'modules/o_contrib/' . $redis_dirname . '/lib/Redis/CacheCompressed.php';\n      if ($is_dev && !$is_backend) {\n        header(\"X-Valkey-Autoload-Is-Readable: \" . $cache_redis_path);\n      }\n    }\n  }\n  if ($cache_valkey) {\n    if ($drupal_core >= 8) {\n      if (is_readable('modules/o_contrib_eleven/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_eleven/' . $redis_dirname . '/src');\n      }\n      elseif (is_readable('modules/o_contrib_ten/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_ten/' . $redis_dirname . 
'/src');\n      }\n      elseif (is_readable('modules/o_contrib_nine/' . $redis_dirname . '/redis.services.yml')) {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_nine/' . $redis_dirname . '/src');\n      }\n      else {\n        $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/o_contrib_eight/' . $redis_dirname . '/src');\n      }\n      $settings['redis.connection']['interface'] = 'PhpRedis';\n      $settings['redis.connection']['host']      = '127.0.0.1';\n      $settings['redis.connection']['port']      = '6379';\n      $settings['redis.connection']['password']  = 'isfoobared';\n      $settings['redis.connection']['base']      = '8';\n      $settings['cache_prefix']                  = $this_prefix;\n      $settings['cache']['default']              = 'cache.backend.redis';\n      if (!is_readable('/data/conf/clstr.cnf')) {\n        $settings['cache']['bins']['bootstrap']  = 'cache.backend.chainedfast';\n        $settings['cache']['bins']['discovery']  = 'cache.backend.chainedfast';\n        $settings['cache']['bins']['config']     = 'cache.backend.chainedfast';\n      }\n      $settings['container_yamls'][]             = $example_services_path;\n      $settings['container_yamls'][]             = $redis_services_path;\n      if ($drupal_core <= 10) {\n        $settings['queue_default']               = 'queue.redis_reliable';\n      }\n      if ($redis_comprs) {\n        $settings['redis_compress_length']       = 100;\n        $settings['redis_compress_level']        = 5;\n      }\n      $settings['cache']['bins']['state']        = 'cache.backend.redis';\n      $settings['state_cache']                   = TRUE;\n      $settings['bootstrap_container_definition'] = [\n        'parameters' => [],\n        'services' => [\n          'redis.factory' => [\n            'class' => 'Drupal\\redis\\ClientFactory',\n          ],\n          'cache.backend.redis' => [\n            'class' => 'Drupal\\redis\\Cache\\CacheBackendFactory',\n        
    'arguments' => ['@redis.factory', '@cache_tags_provider.container', '@serialization.phpserialize'],\n          ],\n          'cache.container' => [\n            'class' => '\\Drupal\\redis\\Cache\\PhpRedis',\n            'factory' => ['@cache.backend.redis', 'get'],\n            'arguments' => ['container'],\n          ],\n          'cache_tags_provider.container' => [\n            'class' => 'Drupal\\redis\\Cache\\RedisCacheTagsChecksum',\n            'arguments' => ['@redis.factory'],\n          ],\n          'serialization.phpserialize' => [\n            'class' => 'Drupal\\Component\\Serialization\\PhpSerialize',\n          ],\n        ],\n      ];\n    }\n    else {\n      if ($cache_backport) {\n        $conf['cache_inc']                      = $cache_backport_path;\n      }\n      if ($all_ini['redis_use_modern']) {\n        if ($all_ini['redis_lock_enable']) {\n          $redis_lock = TRUE;\n        }\n        if ($all_ini['redis_path_enable']) {\n          $redis_path = TRUE;\n        }\n      }\n      if (is_readable($cache_lock_path) && $redis_lock) {\n        $conf['lock_inc']                       = $cache_lock_path;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Valkey-Lock-Is-Readable: \" . $cache_lock_path);\n        }\n      }\n      if (is_readable($cache_path_path) && $redis_path) {\n        $conf['path_inc']                       = $cache_path_path;\n        $conf['path_alias_admin_blacklist']     = FALSE;\n        if ($is_dev && !$is_backend) {\n          header(\"X-Valkey-Path-Is-Readable: \" . 
$cache_path_path);\n        }\n      }\n      if ($all_ini['redis_scan_enable']) {\n        $conf['redis_scan_delete']              = TRUE;\n        $gzip_mode = FALSE;\n      }\n      else {\n        if (is_readable($cache_gzip_path)) {\n          $gzip_mode = TRUE;\n        }\n        else {\n          $gzip_mode = FALSE;\n        }\n      }\n      if ($gzip_mode) {\n        $conf['cache_default_class']            = 'Redis_CacheCompressed';\n      }\n      else {\n        $conf['cache_default_class']            = 'Redis_Cache';\n      }\n      $conf['cache_backends'][]                 = $cache_redis_path;\n      if ($use_auto_se) {\n        $conf['cache_backends'][]               = $auto_se_path;\n        $conf['cache_default_class']            = 'AutoslaveCache';\n        $conf['autoslave_cache_default_class']  = 'Redis_Cache';\n      }\n      if ($use_cache_ct) {\n        $conf['cache_backends'][]               = $cache_ct_path;\n        $conf['cache_default_class']            = 'ConsistentCache';\n        if (!is_readable('/data/conf/clstr.cnf')) {\n          $conf['cache_class_cache_form']       = 'DrupalDatabaseCache';\n          $conf['cache_class_cache_bootstrap']  = 'DrupalDatabaseCache';\n        }\n        $conf['consistent_cache_default_class'] = 'Redis_Cache';\n        $conf['consistent_cache_default_safe']  = TRUE;\n        $conf['consistent_cache_buffer_mechanism'] = 'ConsistentCacheBuffer';\n        $conf['consistent_cache_default_strict'] = FALSE;\n        $conf['consistent_cache_strict_cache_bootstrap'] = TRUE;\n      }\n      if (!is_readable('/data/conf/clstr.cnf')) {\n        $conf['cache_class_cache_form']         = 'DrupalDatabaseCache';\n        $conf['cache_class_cache_bootstrap']    = 'DrupalDatabaseCache';\n      }\n      $conf['redis_client_interface']           = 'PhpRedis';\n      $conf['redis_client_host']                = '127.0.0.1';\n      $conf['redis_client_port']                = '6379';\n      $conf['redis_client_password']    
        = 'isfoobared';\n      $conf['redis_client_base']                = '8';\n      $conf['cache_prefix']                     = $this_prefix;\n      $conf['page_cache_invoke_hooks']          = TRUE;  // D7 == Do not use Aggressive Mode\n      $conf['page_cache_without_database']      = FALSE; // D7 == Do not use Aggressive Mode\n      $conf['page_cache_maximum_age']           = 0;     // D7 == max-age in the Cache-Control header (ignored by Speed Booster)\n      $conf['page_cache_max_age']               = 0;     // D6 == max-age in the Cache-Control header (ignored by Speed Booster)\n      $conf['cache_lifetime']                   = 0;     // D7 == BOA uses Speed Booster / Nginx micro-caching instead\n      $conf['page_cache_lifetime']              = 0;     // D6 == BOA uses Speed Booster / Nginx micro-caching instead\n    }\n    if ($all_ini['redis_exclude_bins'] && !is_readable('/data/conf/clstr.cnf')) {\n      $excludes = array();\n      $excludes = explode(\",\", $all_ini['redis_exclude_bins']);\n      foreach ($excludes as $exclude) {\n        $exclude = rtrim($exclude);\n        $exclude = ltrim($exclude);\n        if ($drupal_core >= 8) {\n          $bin_exclude = $exclude;\n          $settings['cache']['bins'][$bin_exclude] = 'cache.backend.database';\n        }\n        else {\n          $bin_exclude = 'cache_class_' . $exclude;\n          $conf[$bin_exclude] = 'DrupalDatabaseCache';\n        }\n        if ($is_dev && !$is_backend) {\n          header(\"X-Ini-Valkey-Exclude-Bin-\" . $exclude . \": \" . $bin_exclude);\n        }\n      }\n    }\n  }\n}\n\n\n/**\n * Drupal for Facebook (fb)\n *\n * Important:\n * Facebook client libraries will not work properly if arg_separator.output is not &\n * The default value is &amp;. Change this in settings.php. 
Make the value \"&\"\n * https://drupal.org/node/205476\n */\nif (!$custom_fb && $all_ini['auto_detect_facebook_integration']) {\n  if (is_readable('sites/all/modules/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once \"sites/all/modules/fb/fb_settings.inc\";\n    $conf['fb_api_file'] = \"sites/all/modules/fb/facebook-platform/php/facebook.php\";\n  }\n  elseif (is_readable('sites/all/modules/contrib/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once \"sites/all/modules/contrib/fb/fb_settings.inc\";\n    $conf['fb_api_file'] = \"sites/all/modules/contrib/fb/facebook-platform/php/facebook.php\";\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once 'profiles/' . $conf['install_profile'] . '/modules/fb/fb_settings.inc';\n    $conf['fb_api_file'] = 'profiles/' . $conf['install_profile'] . '/modules/fb/facebook-platform/php/facebook.php';\n  }\n  elseif (is_readable('profiles/' . $conf['install_profile'] . '/modules/contrib/fb/fb_settings.inc')) {\n    ini_set('arg_separator.output', '&');\n    require_once 'profiles/' . $conf['install_profile'] . '/modules/contrib/fb/fb_settings.inc';\n    $conf['fb_api_file'] = 'profiles/' . $conf['install_profile'] . 
'/modules/contrib/fb/facebook-platform/php/facebook.php';\n  }\n}\n\n\n/**\n * Domain module\n */\nif (!$custom_da) {\n  if ($da_inc) {\n    require_once($da_inc);\n  }\n}\n\n\n/**\n * New Relic Integration for Drupal with Drush Compatibility (8, 12, 13)\n *\n * Supports background jobs and sets appropriate New Relic parameters.\n */\nif (extension_loaded('newrelic') && !empty($all_ini['enable_newrelic_integration'])) {\n  $this_instance = FALSE;\n\n  if ($is_backend) {\n    $uri = FALSE;\n\n    // Check if drush_get_context exists (Drush 8)\n    if (function_exists('drush_get_context')) {\n      // Drush 8 context retrieval\n      $context = drush_get_context();\n      if (isset($context['DRUSH_URI'])) {\n        $uri = $context['DRUSH_URI'];\n      }\n      elseif (isset($context['DRUSH_DRUPAL_SITE'])) {\n        $uri = $context['DRUSH_DRUPAL_SITE'];\n      }\n    }\n    else {\n      // Drush 9+ context retrieval\n      // Attempt to retrieve URI from environment variables or Drush services\n      // Drush commands can pass the URI as an environment variable or argument\n\n      // Example: Using environment variable (you might need to set this in Drush commands)\n      if (isset($_SERVER['DRUSH_URI'])) {\n        $uri = $_SERVER['DRUSH_URI'];\n      }\n      elseif (isset($_SERVER['DRUPAL_SITE_URI'])) {\n        $uri = $_SERVER['DRUPAL_SITE_URI'];\n      }\n      else {\n        // Fallback: Attempt to determine URI using Drupal APIs\n        // Note: In Drush context, some Drupal services might not be fully bootstrapped\n        try {\n          $request = \\Drupal::request();\n          $uri = $request->getSchemeAndHttpHost();\n        }\n        catch (\\Exception $e) {\n          // Unable to determine URI; proceed without setting it\n        }\n      }\n    }\n\n    if ($uri) {\n      // Clean the URI by removing the scheme\n      $uri = str_replace(['http://', 'https://'], '', $uri);\n      $this_instance = 'Drush Site: ' . 
$uri;\n\n      // Set New Relic transaction name and parameters if command details are available\n      if (isset($command['command']) && isset($command['arguments'])) {\n        $drush_command = array_merge([$command['command']], $command['arguments']);\n        $command_str = implode(' ', $drush_command);\n\n        // Add custom parameters to New Relic\n        newrelic_add_custom_parameter('Drush command', $command_str);\n        newrelic_name_transaction($command_str);\n\n        // Indicate that this is a background job\n        newrelic_background_job(TRUE);\n      }\n    }\n  }\n  else {\n    // Non-Drush (web request) context\n    if (isset($_SERVER['SERVER_NAME'])) {\n      $this_instance = 'Web Site: ' . $_SERVER['SERVER_NAME'];\n      // Optionally, indicate this is not a background job\n      // newrelic_background_job(FALSE);\n    }\n  }\n\n  // Apply the New Relic app name if determined\n  if ($this_instance) {\n    ini_set('newrelic.appname', $this_instance);\n    newrelic_set_appname($this_instance);\n  }\n}\nelseif (extension_loaded('newrelic') && empty($all_ini['enable_newrelic_integration'])) {\n  // Disable New Relic auto-RUM and ignore transactions if integration is disabled\n  newrelic_disable_autorum();\n  newrelic_ignore_apdex();\n  newrelic_ignore_transaction();\n}\n\n\n/**\n * Unset config arrays on non-dev URLs\n */\nif (!$is_dev) {\n  unset($boa_ini);\n  unset($usr_plr_ini);\n  unset($usr_loc_ini);\n  unset($all_ini);\n}\n"
  },
  {
    "path": "aegir/conf/global/override.global.inc",
    "content": "<?php # override global settings.php\n\n// This file should be created as /data/conf/override.global.inc.\n\n// Kind of core version agnostic, securepages module\n// for proper HTTP/HTTPS redirects.\nif (isset($_SERVER['HTTP_HOST']) && preg_match(\"/(?:domain\\.com|another-domain\\.com)/\", $_SERVER['HTTP_HOST']) &&\n    isset($_SERVER['REQUEST_URI']) && isset($_SERVER['HTTP_USER_AGENT'])) {\n  $request_type = ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') ? 'SSL' : 'NONSSL';\n  $conf['https'] = TRUE;\n  if (preg_match(\"/^\\/(?:[a-z]{2}\\/)?(?:cart.*|checkout.*|admin.*|donate.*|civicrm.*|node\\/add.*|node\\/.*\\/edit)$/\", $_SERVER['REQUEST_URI']) ||\n      preg_match(\"/^\\/(?:user.*|user\\/.*\\/edit.*|user\\/reset.*|user\\/register.*|user\\/logout|user\\/password|user\\/login)$/\", $_SERVER['REQUEST_URI'])) {\n    $base_url = 'https://' . $_SERVER['HTTP_HOST'];\n    if ($request_type != \"SSL\") {\n      header('X-Accel-Expires: 1');\n      // Note: never use header('X-Accel-Expires: 0'); to disable Speed Booster completely.\n      // You always want that one second or you will be vulnerable to DoS attacks.\n      header(\"Location: https://\" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);\n      exit;\n    }\n  }\n  else {\n    $base_url = 'http://' . $_SERVER['HTTP_HOST'];\n    if ($request_type == \"SSL\" && !preg_match(\"/(?:x-progress-id|ahah|filefield_nginx_progress\\/*|tinybrowser|f?ckeditor|tinymce|flowplayer|jwplayer|videomanager|autocomplete|ajax|batch|js\\/.*)/\", $_SERVER['REQUEST_URI'])) {\n      header('X-Accel-Expires: 1');\n      // Note: never use header('X-Accel-Expires: 0'); to disable Speed Booster completely.\n      // You always want that one second or you will be vulnerable to DoS attacks.\n      header(\"Location: http://\" . $_SERVER['HTTP_HOST'] . 
$_SERVER['REQUEST_URI'], true, 301);\n      exit;\n    }\n  }\n}\n\n$custom_cache = FALSE; // When set to TRUE in the /data/conf/override.global.inc file,\n                       // it will disable the default Redis configuration.\n\n$custom_fb    = FALSE; // When set to TRUE in the /data/conf/override.global.inc file,\n                       // it will disable the default Drupal for Facebook (fb) configuration.\n\n$custom_da    = FALSE; // When set to TRUE in the /data/conf/override.global.inc file,\n                       // it will disable the default Domain Access configuration,\n                       // so you can define your own custom configuration in the\n                       // /data/conf/override.global.inc file included below. NOTE: if set to TRUE,\n                       // you must also set $custom_cache to TRUE and copy all its logic into your\n                       // /data/conf/override.global.inc file, because Domain Access must be included\n                       // *after* any cache-related settings to work properly.\n\n\n/**\n * Custom Speed Booster TTL override, for example to force caching\n * on an HTTPS-only site, where otherwise the default TTL is just 1 second\n */\nif (isset($_COOKIE[$test_sess_name])) {\n  // Custom forced Speed Booster cache for logged in users\n  $expire_in_seconds = 300;\n}\nelse {\n  // Custom forced Speed Booster cache for anonymous visitors\n  $expire_in_seconds = 3600;\n}\nheader(\"X-Accel-Expires: \" . $expire_in_seconds);\n"
  },
  {
    "path": "aegir/conf/global/settings.global.inc",
    "content": "<?php # define custom global settings\n\n// This file should be created as /data/conf/settings.global.inc.\n\n// Settings useful for very high traffic sites\n$high_traffic = TRUE; // no side effects expected\n"
  },
  {
    "path": "aegir/conf/ini/default.boa_platform_control.ini",
"content": "\n;; ## INI (platform level) located in sites/all/modules/\n\n;;\n;;  DO NOT EDIT THIS FILE, it is just a TEMPLATE with documentation included!\n;;\n;;  This is a platform level INI file template which can be used to modify\n;;  default BOA system behaviour for all sites hosted on this platform.\n;;\n;;  Copy this file as boa_platform_control.ini into the sites/all/modules\n;;  directory, then uncomment lines for any settings you want to modify,\n;;  to make it active.\n;;  All settings are initially listed with system defaults, for reference.\n;;\n;;  Note that it takes ~60 seconds to see any modification results in action\n;;  due to opcode caching enabled in PHP-FPM for all non-dev sites.\n;;\n\n\n;; ### INI (platform level) for Session Control\n\n;session_cookie_ttl = 86400\n;;\n;;  You can control session cookie expiration (TTL) per site and per platform.\n;;  The value (in seconds) of the session_cookie_ttl variable is used as the\n;;  session.cookie_lifetime value.\n;;\n;;  The BOA default defined in the system level global.inc file is 86400 == 24h.\n;;\n;;  We also recommend that you enable and configure the built-in session_expire\n;;  module, which allows you to keep the sessions DB table tidy. 
Make sure that the\n;;  TTL set via the session_cookie_ttl variable below is *lower* than the TTL\n;;  configured in the session_expire module, because the module does not care\n;;  about PHP settings and simply deletes old entries from the sessions table\n;;  on cron run.\n\n;session_gc_eol = 86400\n;;\n;;  You can control the session garbage collector (EOL) per site and per platform.\n;;  The value (in seconds) of the session_gc_eol variable is used as the\n;;  session.gc_maxlifetime value and specifies the number of seconds after which\n;;  data will be seen as 'garbage' and potentially cleaned up, resulting in the\n;;  $_SESSION variable being discarded and affected authenticated users logged out.\n;;\n;;  The BOA default defined in the system level global.inc file is 86400 == 24h.\n\n\n;; ### INI (platform level) for Redis Cache Settings Control\n\n;redis_old_nine_mode = FALSE\n;;\n;;  If you are running Drupal 9 older than 9.3, you need to uncomment\n;;  the line above and change it to TRUE to make Redis work again.\n\n;redis_old_eight_mode = FALSE\n;;\n;;  If you are running Drupal 8 older than 8.8, you need to uncomment\n;;  the line above and change it to TRUE to make Redis work again.\n\n;redis_lock_enable = TRUE\n;;\n;;  The blazing fast Redis lock implementation is also enabled by default.\n\n;redis_path_enable = TRUE\n;;\n;;  The blazing fast Redis path cache implementation is also enabled by default.\n\n;redis_scan_enable = FALSE\n;;\n;;  The blazing fast Redis method for wildcard cache deletes. It uses the\n;;  non-atomic, non-blocking, and concurrency-friendly SCAN command instead\n;;  of KEYS to perform wildcard cache key deletions. 
Not enabled by default, because\n;;  it may cause serious yet random problems -- see the comment for details:\n;;  https://www.drupal.org/node/2851625#comment-11963867\n\n\n;; ### INI (platform level) for Redis Cache Advanced Settings Control\n\n;redis_flush_forced_mode = TRUE\n;;\n;;  The more aggressive cache flush mode is now enabled by default, since it\n;;  will further improve your site's performance, but you can still disable it\n;;  with FALSE below, if you wish, after some testing.\n;;\n;;  NOTE: This option, enabled by default, may cause a mysterious and random\n;;        WSOD, depending on the site's dependence on entries in the cache,\n;;        because it limits each cache entry TTL to 6 hours max, hence any\n;;        module using cacheBackendInterface::CACHE_PERMANENT will be surprised\n;;        by suddenly and mysteriously missing entries. If that happens,\n;;        uncomment this line and set it to FALSE below.\n;;\n;;  Remember to uncomment the line above if you want to modify default settings.\n;;\n;;  When enabled, it will automatically set a more aggressive cache flush mode\n;;  in general, and a very aggressive one for selected cache bins, as listed\n;;  below along with the redis integration module defaults, which apply when\n;;  this option is explicitly set to FALSE.\n;;\n;;    $conf['redis_perm_ttl']                 = 86400; // 24 hours max\n;;    $conf['redis_flush_mode']               = 1; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_page']    = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_block']   = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_menu']    = 2; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_metatag'] = 2; // Redis default is 0\n;;\n;;  Note that even with this option enabled, you can easily override these\n;;  values or configure completely custom modes, both for the wildcard option\n;;  redis_flush_mode and per cache bin, in the local.settings.php file.\n;;\n;;  Please refer to 
the module README for more information on all available\n;;  advanced flush modes: http://bit.ly/1drmi35\n\n;redis_exclude_bins = FALSE\n;;\n;;  Sometimes you may want to exclude some problematic cache bins from Redis,\n;;  so they will use the default SQL engine, at least until the related issue\n;;  is fixed either in your contrib code or in the Redis integration module.\n;;\n;;  Normally you had to edit the local.settings.php file, which is both tedious\n;;  and dangerous because of the extra steps (https://omega8.cc/node/230), to\n;;  add a line, for example: $conf['cache_class_cache_foo'] = 'DrupalDatabaseCache';\n;;  Plus, it had to be done for every site separately.\n;;\n;;  Now you can simply list the cache bins to exclude below, comma separated,\n;;  either in the site or platform level active INI file.\n;;\n;;  Example: redis_exclude_bins = \"cache_form,cache_foo,cache_bar\"\n\n;redis_cache_disable = FALSE\n;;\n;;  Normally you should never disable Redis, except for debugging rare issues.\n;;  If you are sure you need to disable Redis for all sites on this platform,\n;;  uncomment the line above and set the value to TRUE.\n\n\n;; ### INI (platform level) for Nginx Microcache Control\n\n;speed_booster_anon_cache_ttl = 10\n;;\n;;  Speed Booster uses Nginx microcaching mode by default, with just 10 seconds\n;;  both for anonymous visitors and logged in users. All known robots/crawlers\n;;  and search engine spiders are forced to accept up to 24 hours cache TTL.\n;;  Below you can modify the (10 seconds) default for human, anonymous visitors.\n;;  Uncomment the line above and set any numeric value you prefer (in seconds)\n;;  to override the system default (10 seconds). 
You may want to enable the Purge and\n;;  Expire modules in all sites on this platform, so that any new or modified\n;;  node, comment, etc. will selectively auto-purge related cache entries, to\n;;  avoid serving stale content for an extended time (depending on the TTL\n;;  configured).\n;;  Note that the value must be higher than 10 or it will be ignored.\n\n;disable_drupal_page_cache = FALSE\n;;\n;;  With the default Speed Booster TTL set to just 10 seconds to achieve the\n;;  microcaching mode, disabling Drupal's own page cache will significantly\n;;  degrade your site's performance on every request not served via the Speed\n;;  Booster front-end cache, because Drupal will have to build the page from\n;;  scratch every 10 seconds, making your site SLOW for every visitor not lucky\n;;  enough to visit an already cached page. This is why BOA enables Drupal's own\n;;  page cache by default, even if Boost, if used, will complain about it.\n;;  It allows Drupal to keep its internal full-page cache in the super-fast\n;;  Redis backend, to make even those requests which miss the Speed Booster\n;;  cache every 10 seconds blazingly fast.\n;;\n;;  If for some reason this default BOA configuration breaks something\n;;  important in your site, like some page which should display non-cached\n;;  results for anonymous visitors, even if they don't have a cookie set in\n;;  the browser, didn't submit any form, etc., so no other method to make the\n;;  displayed page dynamic on the fly could be triggered, you could (very\n;;  carefully) consider changing this variable to TRUE.\n;;\n;;  But please think twice before using this variable. 
While Redis will still\n;;  improve performance for all other cache bins, the cache_page bin will\n;;  not be used, and this will make your site much slower, randomly, even if\n;;  you increase the tiny speed_booster_anon_cache_ttl value above.\n;;\n;;  If you really have to disable this on some problematic URI, to guarantee\n;;  that the page will be as dynamic as possible also for anonymous visitors,\n;;  you may want to use a more granular method, like adding an exception for\n;;  the affected URI in your site's local.settings.php file:\n\n;;    if (preg_match(\"/^\\/(?:foo|bar)/\", $_SERVER['REQUEST_URI'])) {\n;;      header('X-Accel-Expires: 1'); // This disables Speed Booster\n;;      $conf['cache'] = 0; // This disables page caching on the fly\n;;    }\n\n\n;; ### INI (platform level) for Drupal Sites Access Control\n\n;allow_anon_node_add = FALSE\n;;\n;;  When set to TRUE, allows anonymous users to add content. Best practice and\n;;  the default is FALSE, which results in a redirect to the site's homepage.\n;;  Note that this option also opens access to node editing.\n\n;disable_admin_dos_protection = FALSE\n;;\n;;  When set to TRUE, allows anonymous visitors to access the /admin* URL, even\n;;  if only to see the 403 Access Denied message. Best practice and the default\n;;  is FALSE, which results in a redirect to the site's homepage. This allows\n;;  you to protect the site from DoS attempts, since the /admin* requests are\n;;  never cached and always hit Drupal directly. Some sites may experience\n;;  issues when your browser has an expired session/cookie, which redirects you\n;;  to the homepage even if you were logged in. 
If something like this happens,\n;;  you may want to disable this protection by changing it to TRUE below.\n;;  Remember to uncomment the line above if you want to use this feature.\n\n\n;; ### INI (platform level) for User Register Protection\n\n;enable_strict_user_register_protection = FALSE\n;;\n;;  When set to TRUE, allows you to force protection on all sites globally,\n;;  unless the site has its own custom setting for the\n;;  ignore_user_register_protection variable in the site level\n;;  boa_site_control.ini file, located in the sites/foo.com/modules directory.\n;;\n;;  However, this setting will be ignored if you also set the opposite below:\n;;\n;;    ignore_user_register_protection = TRUE\n;;\n;;  It can be set to TRUE automatically on all platforms if there is an empty\n;;  control file:\n;;\n;;    ~/static/control/enable_strict_user_register_protection.info\n;;\n;;  NOTE: The *enable* file will be IGNORED if the *disable* file also exists!\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify related settings in the site admin, and they can be overridden,\n;;  depending on the value defined below, when the maintenance system runs\n;;  again, which happens each morning (in the server time zone).\n\n;ignore_user_register_protection = FALSE\n;;\n;;  Registration settings are now restricted by design to protect your sites\n;;  from being unintentionally turned into spam machines (which is allowed by\n;;  Drupal 6 default settings, sadly). 
Spambots targeting Drupal sites are\n;;  already a plague, so unless you have already set the stricter permission\n;;  'Administrators only', we force a reasonable default policy for new account\n;;  registration: 'Visitors, but administrator approval is required' plus\n;;  'Require email verification when a visitor creates an account' enabled.\n;;  If you wish to disable email verification or set 'Who can register\n;;  accounts' to 'Visitors', you must set it to TRUE below and uncomment\n;;  the line. You will then be able to permanently change these settings\n;;  in the site admin area. Otherwise our default protection will be enabled\n;;  again the next day (early morning in the server time zone). Note that\n;;  we don't force 'Administrators only', because it could immediately break\n;;  essential features of many commerce or community sites. But for other\n;;  sites, 'Administrators only' is strongly suggested.\n;;\n;;  It can be set to TRUE automatically on all platforms AND all sites if\n;;  there is an empty *disable* control file:\n;;\n;;    ~/static/control/ignore_user_register_protection.info\n;;\n;;  NOTE: The *disable* file will make the *enable* file IGNORED if both exist!\n;;\n;;    ~/static/control/enable_strict_user_register_protection.info\n;;\n;;  Note also that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify related settings in the site admin, and they can be overridden,\n;;  depending on the value defined below, when the maintenance system runs\n;;  again -- which happens each morning.\n\n\n;; ### INI (platform level) Clarification on The User Register Protection Logic\n\n;;  Instead of the previously used confusing enable/disable variables and\n;;  control files, we use correct names corresponding to the actual feature\n;;  behaviour: enable_strict/ignore, with non-strict enable being the default\n;;  mode.\n;;\n;;  There are actually three modes available, affecting only Drupal 6 and 7\n;;  sites:\n;;\n
;;  * non-strict protection when no vars/control files are used, which by\n;;    default switches Drupal 6 and Drupal 7 sites to 'Visitors, but\n;;    administrator approval is required' plus 'Require email verification\n;;    when a visitor creates an account'\n;;\n;;  * strict protection when either a var or control file of the\n;;    \"enable_strict\" type is used, which switches Drupal 6 and Drupal 7\n;;    sites to 'Administrators only'\n;;\n;;  * no protection when either a var or control file of the \"ignore\" type\n;;    is used, which simply disables the procedure altogether but doesn't\n;;    modify any settings in the Drupal 6 and Drupal 7 sites.\n\n\n;; ### INI (platform level) for New Relic, Composer, Private Files and Cookie Domain\n\n;enable_newrelic_integration = FALSE\n;;\n;;  When set to TRUE, it will enable New Relic monitoring for all sites on this\n;;  platform, but only if there is a valid New Relic license key present in the\n;;  ~/static/control/newrelic.info control file.\n;;  NOTE: The New Relic license key is a 40-character hexadecimal string.\n\n;set_composer_manager_vendor_dir = FALSE\n;;\n;;  When set to TRUE, it will enforce a site-specific Composer Manager\n;;  composer_manager_vendor_dir path (sites/domain/vendor), but only once the\n;;  site is already installed, so it will not override the variable on install\n;;  if set programmatically.\n\n;allow_private_file_downloads = FALSE\n;;\n;;  When set to TRUE, allows you to use private files mode, which is useful\n;;  only for commerce sites which sell files for download, or for intranet\n;;  sites where you need to enforce strict access control. All other sites\n;;  should never ever use private files mode, for obvious performance reasons.\n\n;server_name_cookie_domain = FALSE\n;;\n;;  When set to TRUE, it forces the cookie_domain to always use the main\n;;  domain, even when the site is accessed via a domain alias. 
For an example use case,\n;;  please read: https://gist.github.com/omega8cc/5724528\n\n\n;; ### INI (platform level) for Files Permissions Daily Fix Logic\n\n;fix_files_permissions_daily = TRUE\n;;\n;;  When set to FALSE, allows you to skip the standard file permissions fix on\n;;  all sites on this platform, even if the global option in the system level\n;;  config file .barracuda.cnf is set to _PERMISSIONS_FIX=YES (default).\n;;\n;;  This feature can be useful when you prefer to manage a custom platform as\n;;  a monolithic codebase in Git, where forcing permissions could conflict\n;;  with your workflow or development tools. Otherwise you should never disable\n;;  this, to avoid issues with Ægir tasks related to sites on this platform.\n;;\n;;  This setting affects only the daily maintenance system behaviour.\n;;\n;;  This option is available only in BOA-2.2.0 or newer.\n\n\n;; ### INI (platform level) for SQL Tables Conversions\n\n;sql_conversion_mode = NO\n;;\n;;  This option allows you to activate DB tables conversion for all sites\n;;  hosted on this platform, unless the site has its own custom setting for\n;;  the sql_conversion_mode variable in the site level boa_site_control.ini\n;;  file, located in the sites/foo.com/modules directory.\n;;\n;;  It can also be set (and forced) automatically for all sites on all\n;;  platforms if there is a special _SQL_CONVERT variable defined for this\n;;  Octopus instance in its .USER.octopus.cnf config file, but it may require\n;;  submitting a support request if you are using the hosted Ægir BOA service\n;;  without root access.\n;;\n;;  Supported values are: innodb and myisam (lowercase only!)\n;;\n;;  Note that this conversion, if enabled, will run daily even if all tables\n;;  have already been converted, so it will effectively run an OPTIMIZE task\n;;  on all tables.\n;;\n;;  This setting affects only the daily maintenance system behaviour.\n;;\n;;  This option is available only in BOA-2.1.3 or newer.\n\n\n;; ### INI (platform level) 
for Domain Access (domain) Module\n\n;auto_detect_domain_access_integration = FALSE\n;;\n;;  When set to TRUE, allows you to enable auto-detection and auto-include for\n;;  the Domain Access module. Supported locations, in the order of precedence:\n;;\n;;    sites/all/modules/domain/\n;;    sites/all/modules/contrib/domain/\n;;    profiles/foo/modules/domain/\n;;    profiles/foo/modules/contrib/domain/\n;;\n;;  IMPORTANT!\n;;\n;;  This setting on the platform level will be automatically set to TRUE\n;;  (but not on the site level) during the daily maintenance procedure\n;;  if the module is detected; however, it will be completely ignored if\n;;  there is a boa_site_control.ini file present in the sites/foo.com/modules\n;;  directory with this setting set to FALSE there, to improve performance\n;;  when the module is not used in that site even if it exists in the platform.\n;;  Remember to uncomment the line above if you want to use this feature.\n\n\n;; ### INI (platform level) for Drupal for Facebook (fb) Module\n\n;auto_detect_facebook_integration = FALSE\n;;\n;;  When set to TRUE, allows you to enable auto-detection and auto-include for\n;;  the Drupal for Facebook (fb) module. 
Supported locations, in the order\n;;  of precedence:\n;;\n;;    sites/all/modules/fb/\n;;    sites/all/modules/contrib/fb/\n;;    profiles/foo/modules/fb/\n;;    profiles/foo/modules/contrib/fb/\n;;\n;;  IMPORTANT!\n;;\n;;  This setting on the platform level will be automatically set to TRUE\n;;  (but not on the site level) during the daily maintenance procedure\n;;  if the module is detected; however, it will be completely ignored if\n;;  there is a boa_site_control.ini file present in the sites/foo.com/modules\n;;  directory with this setting set to FALSE there, to improve performance\n;;  when the module is not used in that site even if it exists in the platform.\n;;  Remember to uncomment the line above if you want to use this feature.\n\n\n;; ### INI (platform level) for Entity Cache (entitycache) Module\n\n;entitycache_dont_enable = FALSE\n;;\n;;  When set to TRUE, allows you to avoid having the entitycache module, which\n;;  is included by default, auto-enabled during daily maintenance.\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify related settings in the site admin, and they can be overridden,\n;;  depending on the value defined below, when the maintenance system runs\n;;  again, which happens each morning (in the server time zone).\n;;\n;;  This setting is available only on the platform level, because if the distro\n;;  or custom installation profile conflicts with entitycache, you don't want\n;;  to have it re-enabled on any site on such a platform, even if it is a great\n;;  performance improvement for any Drupal 7 based site, and thus is highly\n;;  recommended. 
Maybe you could fix your platform to make it compatible?\n\n\n;; ### INI (platform level) for Views Cache Bully (views_cache_bully) Module\n\n;views_cache_bully_dont_enable = FALSE\n;;\n;;  When set to TRUE, allows you to avoid having the views_cache_bully module,\n;;  which is included by default, auto-enabled during daily maintenance,\n;;  but only if there is a special, global, non-default control file present:\n;;\n;;  ~/static/control/enable_views_cache_bully.info\n;;\n;;  If you didn't create this file to auto-enable views_cache_bully in all\n;;  your sites, the views_cache_bully_dont_enable variable below will be\n;;  completely ignored and the module will not be enabled. This feature\n;;  is available since BOA-2.1.1, because BOA-2.1.0 forced this module\n;;  to be enabled by default.\n;;\n;;  But even if you create the special, global control file, you can still\n;;  stop the system from enabling the module per platform, by changing the\n;;  value of views_cache_bully_dont_enable to TRUE and activating the line.\n;;\n;;  This useful module automatically enables some default caching in all your\n;;  views with no caching enabled yet, which is handy for busy webmasters.\n;;\n;;  Since BOA-2.1.1 this module is no longer enabled by default, because\n;;  it may affect commerce based sites, resulting in a broken checkout.\n;;  Because of those possible issues, this module is automatically disabled\n;;  if the maintenance agent discovers that the commerce module is enabled.\n;;\n;;  You can still force views_cache_bully to be enabled if you have the\n;;  aforementioned special control file in place and the\n;;  views_cache_bully_dont_enable variable below is set to FALSE or left\n;;  commented out.\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify related settings in the site admin, and they can be overridden,\n;;  depending on the value defined below, when 
the maintenance system runs\n;;  again, which happens each morning (in the server time zone).\n;;\n;;  This setting is available only on the platform level, because if the distro\n;;  or custom installation profile conflicts with this module, you don't want\n;;  to have it re-enabled on any site on such a platform, even if it is a great\n;;  performance improvement for any Drupal 6/7 site, and thus is highly\n;;  recommended. Maybe you could fix your platform to make it compatible?\n;;  If not, then make sure to configure caching in all your views manually.\n;;  You will make your sites' visitors and users happier!\n\n\n;; ### INI (platform level) for Views Content Cache (views_content_cache) Module\n\n;views_content_cache_dont_enable = FALSE\n;;\n;;  When set to TRUE, allows you to avoid having the views_content_cache\n;;  module, which is included by default, auto-enabled during daily\n;;  maintenance, but only if there is a special, global, non-default control\n;;  file present:\n;;\n;;  ~/static/control/enable_views_content_cache.info\n;;\n;;  If you didn't create this file to auto-enable views_content_cache in all\n;;  your sites, the views_content_cache_dont_enable variable below will be\n;;  completely ignored and the module will not be enabled. This feature\n;;  is available since BOA-2.1.1, because BOA-2.1.0 forced this module\n;;  to be enabled by default.\n;;\n;;  But even if you create the special, global control file, you can still\n;;  stop the system from enabling the module per platform, by changing the\n;;  value of views_content_cache_dont_enable to TRUE and activating the line.\n;;\n;;  This handy module allows you to configure caching for views and\n;;  views-based blocks very precisely, which is great if you need better\n;;  control of the cache per view and per content type.\n;;\n;;  Note that this module requires that the Chaos tools (ctools) module is\n;;  already enabled. 
Otherwise views_content_cache will not be enabled.\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify related settings in the site admin, and they can be overridden,\n;;  depending on the value defined below, when the maintenance system runs\n;;  again, which happens each morning (in the server time zone).\n;;\n;;  This setting is available only on the platform level, because if the distro\n;;  or custom installation profile conflicts with this module, you don't want\n;;  to have it re-enabled on any site on such a platform, even if it is a great\n;;  performance improvement for any Drupal 6/7 site, and thus is highly\n;;  recommended. Maybe you could fix your platform to make it compatible?\n;;  You will make your sites' visitors and users even happier!\n\n"
  },
  {
    "path": "aegir/conf/ini/default.boa_site_control.ini",
    "content": "\n;; ## INI (site level) located in sites/foo.com/modules/\n\n;;\n;;  DO NOT EDIT THIS FILE, it is just a TEMPLATE with documentation included!\n;;\n;;  This is a site level INI file template which can be used to modify\n;;  default BOA system behaviour for this site only.\n;;\n;;  Copy this file as boa_site_control.ini into the sites/foo.com/modules\n;;  directory, then uncomment the lines for any settings you want to modify,\n;;  to make them active. All settings are initially listed with system\n;;  defaults, for reference.\n;;\n;;  Note that it takes ~60 seconds to see any modification results in action,\n;;  due to opcode caching enabled in PHP-FPM for all non-dev sites.\n;;\n\n\n;; ### INI (site level) for Session Control\n\n;session_cookie_ttl = 86400\n;;\n;;  You can control session cookie expiration (TTL) per site and per platform.\n;;  The value (in seconds) of the session_cookie_ttl variable is used as the\n;;  session.cookie_lifetime value.\n;;\n;;  The BOA default defined in the system level global.inc file is 86400 == 24h.\n;;\n;;  We also recommend that you enable and configure the built-in session_expire\n;;  module, which allows you to keep the sessions DB table tidy. 
Make sure that\n;;  the TTL set via the session_cookie_ttl variable below is *lower* than the\n;;  TTL configured in the session_expire module, because the module does not\n;;  care about PHP settings and simply deletes old entries from the sessions\n;;  table on cron run.\n\n;session_gc_eol = 86400\n;;\n;;  You can control the session garbage collector (EOL) per site and per\n;;  platform. The value (in seconds) of the session_gc_eol variable is used as\n;;  the session.gc_maxlifetime value and specifies the number of seconds after\n;;  which data will be seen as 'garbage' and potentially cleaned up, resulting\n;;  in the $_SESSION variable being discarded and affected authenticated users\n;;  being logged out.\n;;\n;;  The BOA default defined in the system level global.inc file is 86400 == 24h.\n\n\n;; ### INI (site level) for Redis Cache Settings Control\n\n;redis_old_nine_mode = FALSE\n;;\n;;  If you are running Drupal 9 older than 9.3, you need to uncomment\n;;  the line above and change it to TRUE to make Redis work again.\n\n;redis_old_eight_mode = FALSE\n;;\n;;  If you are running Drupal 8 older than 8.8, you need to uncomment\n;;  the line above and change it to TRUE to make Redis work again.\n\n;redis_lock_enable = TRUE\n;;\n;;  The blazing fast Redis lock implementation is also enabled by default.\n\n;redis_path_enable = TRUE\n;;\n;;  The blazing fast Redis path cache implementation is also enabled by\n;;  default.\n\n;redis_scan_enable = FALSE\n;;\n;;  The blazing fast Redis method for wildcard cache deletes. Uses the\n;;  non-atomic, non-blocking, and concurrency-friendly SCAN command instead\n;;  of KEYS to perform cache wildcard key deletions. 
Not enabled by default, because\n;;  it may cause serious yet random problems -- see the comment for details:\n;;  https://www.drupal.org/node/2851625#comment-11963867\n\n\n;; ### INI (site level) for Redis Cache Advanced Settings Control\n\n;redis_flush_forced_mode = TRUE\n;;\n;;  The more aggressive cache flush mode is now enabled by default, since it\n;;  will further improve your site's performance, but you can still disable it\n;;  with FALSE below after some testing, if you wish.\n;;\n;;  NOTE: This option, enabled by default, may cause mysterious and random WSOD\n;;        depending on the site's dependence on entries in the cache, because\n;;        it limits each cache entry TTL to 24 hours max, hence any module\n;;        using CacheBackendInterface::CACHE_PERMANENT will be surprised by\n;;        suddenly and mysteriously missing entries. If that happens, uncomment\n;;        this line and set it to FALSE below.\n;;\n;;  Remember to uncomment the line above if you want to modify default\n;;  settings.\n;;\n;;  When enabled, it will automatically set a more aggressive cache flush mode\n;;  in general, and a very aggressive one for selected cache bins, as listed\n;;  below; the redis integration module defaults, noted in the comments, are\n;;  active when this option is explicitly set to FALSE.\n;;\n;;    $conf['redis_perm_ttl']                 = 86400; // 24 hours max\n;;    $conf['redis_flush_mode']               = 1; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_page']    = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_block']   = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_menu']    = 2; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_metatag'] = 2; // Redis default is 0\n;;\n;;  Note that even with this option enabled, you can easily override these\n;;  values or configure completely custom modes, both for the wildcard option\n;;  redis_flush_mode and per cache bin, in the local.settings.php file.\n;;\n;;  Please refer to the 
module README for more information on all available\n;;  advanced flush modes: http://bit.ly/1drmi35\n\n;redis_exclude_bins = FALSE\n;;\n;;  Sometimes you may want to exclude some problematic cache bins from Redis,\n;;  so they will use the default SQL engine, at least until the related issue\n;;  is fixed either in your contrib code or in the Redis integration module.\n;;\n;;  Previously, you had to edit the local.settings.php file, which is both\n;;  tedious and dangerous because of the extra steps (https://omega8.cc/node/230),\n;;  to add a line, for example: $conf['cache_class_cache_foo'] = 'DrupalDatabaseCache';\n;;  Plus, it had to be done for every site separately.\n;;\n;;  Now you can simply list the cache bins to exclude below, comma separated,\n;;  either in the site or platform level active INI file.\n;;\n;;  Example: redis_exclude_bins = \"cache_form,cache_foo,cache_bar\"\n\n;redis_cache_disable = FALSE\n;;\n;;  Normally you should never disable Redis, except for debugging rare issues.\n;;  If you are sure you need to disable Redis for this site, uncomment the\n;;  line above and set the value to TRUE.\n\n\n;; ### INI (site level) for Nginx Microcache Control\n\n;speed_booster_anon_cache_ttl = 10\n;;\n;;  Speed Booster uses Nginx microcaching mode by default, with just 10 seconds\n;;  both for anonymous visitors and logged in users. All known robots/crawlers\n;;  and search engine spiders are forced to accept up to 24 hours cache TTL.\n;;  Below you can modify the (10 seconds) default for human, anonymous\n;;  visitors. Uncomment the line above and set any numeric value you prefer\n;;  (in seconds) to override the system default (10 seconds). 
You may want to enable the Purge and\n;;  Expire modules in this site's modules admin area, so any new or modified\n;;  node, comment, etc. will selectively auto-purge related cache entries to\n;;  avoid serving stale content for an extended time (depending on the TTL\n;;  configured). Note that the value must be higher than 10 or it will be\n;;  ignored.\n\n;disable_drupal_page_cache = FALSE\n;;\n;;  With the default Speed Booster TTL set to just 10 seconds to achieve the\n;;  microcaching mode, disabling Drupal's own page cache will significantly\n;;  degrade your site's performance on every request not served via the Speed\n;;  Booster front-end cache, because Drupal will have to build the page from\n;;  scratch every 10 seconds, making your site SLOW for every visitor not lucky\n;;  enough to hit an already cached page. This is why BOA enables Drupal's own\n;;  page cache by default, even if Boost, if used, will complain about it.\n;;  It allows Drupal to keep its internal full-page cache in the super-fast\n;;  Redis backend, making even those every-10-seconds requests not cached by\n;;  Speed Booster blazingly fast.\n;;\n;;  If for some reason this default BOA configuration breaks something\n;;  important in your site, like a page which should display non-cached\n;;  results for anonymous visitors, even if they don't have a cookie set\n;;  in the browser, didn't submit any form, etc., so no other method to\n;;  make the displayed page dynamic on the fly could be triggered, you\n;;  could (very carefully) consider changing this variable to TRUE.\n;;\n;;  But please think twice before using this variable. 
While Redis will still\n;;  improve performance for all other cache bins, the cache_page bin will\n;;  not be used, and this will make your site much slower, randomly, even if\n;;  you increase the tiny speed_booster_anon_cache_ttl value above.\n;;\n;;  If you really have to disable this on some problematic URI, to guarantee\n;;  that the page will be as dynamic as possible also for anonymous visitors,\n;;  you may want to use a more granular method, like adding an exception for\n;;  the affected URI in your site's local.settings.php file:\n;;\n;;    if (preg_match(\"/^\\/(?:foo|bar)/\", $_SERVER['REQUEST_URI'])) {\n;;      header('X-Accel-Expires: 1'); // This disables Speed Booster\n;;      $conf['cache'] = 0; // This disables page caching on the fly\n;;    }\n\n\n;; ### INI (site level) for Drupal Sites Access Control\n\n;allow_anon_node_add = FALSE\n;;\n;;  When set to TRUE, allows anonymous users to add content. Best practice and\n;;  the default is FALSE, which results in a redirect to the site's homepage.\n;;  Note that this option also opens access to node editing.\n\n;disable_admin_dos_protection = FALSE\n;;\n;;  When set to TRUE, allows anonymous visitors to access the /admin* URL, even\n;;  if only to see the 403 Access Denied message. Best practice and the default\n;;  is FALSE, which results in a redirect to the site's homepage. This allows\n;;  you to protect the site from DoS attempts, since the /admin* requests are\n;;  never cached and always hit Drupal directly. Some sites may experience\n;;  issues when your browser has an expired session/cookie which redirects you\n;;  to the homepage even if you were logged in. 
If something like this happens,\n;;  you may want to disable this protection by changing it to TRUE below.\n;;  Remember to uncomment the line above if you want to use this feature.\n\n\n;; ### INI (site level) for User Register Protection\n\n;ignore_user_register_protection = FALSE\n;;\n;;  Registration settings are now restricted by design to protect your sites\n;;  from being unintentionally turned into spam machines (which is allowed by\n;;  Drupal 6 default settings, sadly). Spambots targeting Drupal sites are\n;;  already a plague, so unless you have already set the stricter permission\n;;  'Administrators only', we force a reasonable default policy for new account\n;;  registration: 'Visitors, but administrator approval is required' plus\n;;  'Require email verification when a visitor creates an account' enabled.\n;;  If you wish to disable email verification or set 'Who can register\n;;  accounts' to 'Visitors', you must set it to TRUE below and uncomment\n;;  the line. You will then be able to permanently change these settings\n;;  in the site admin area. Otherwise our default protection will be enabled\n;;  again the next day (early morning in the server time zone). Note that\n;;  we don't force 'Administrators only', because it could immediately break\n;;  essential features of many commerce or community sites. 
But for other sites,\n;;  'Administrators only' is strongly suggested.\n;;\n;;  It can be set to TRUE automatically on all platforms AND all sites if\n;;  there is an empty *disable* control file:\n;;\n;;    ~/static/control/ignore_user_register_protection.info\n;;\n;;  NOTE: The *disable* file will make the *enable* file IGNORED if both exist!\n;;\n;;    ~/static/control/enable_strict_user_register_protection.info\n;;\n;;  Note also that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify related settings in the site admin, and they can be overridden,\n;;  depending on the value defined below, when the maintenance system runs\n;;  again -- which happens each morning.\n\n\n;; ### INI (site level) Clarification on The User Register Protection Logic\n\n;;  Instead of the previously used confusing enable/disable variables and\n;;  control files, we use correct names corresponding to the actual feature\n;;  behaviour: enable_strict/ignore, with non-strict enable being the default\n;;  mode.\n;;\n;;  There are actually three modes available, affecting only Drupal 6 and 7\n;;  sites:\n;;\n;;  * non-strict protection when no vars/control files are used, which by\n;;    default switches Drupal 6 and Drupal 7 sites to 'Visitors, but\n;;    administrator approval is required' plus 'Require email verification\n;;    when a visitor creates an account'\n;;\n;;  * strict protection when either a var or control file of the\n;;    \"enable_strict\" type is used, which switches Drupal 6 and Drupal 7\n;;    sites to 'Administrators only'\n;;\n;;  * no protection when either a var or control file of the \"ignore\" type\n;;    is used, which simply disables the procedure altogether but doesn't\n;;    modify any settings in the Drupal 6 and Drupal 7 sites.\n\n\n;; ### INI (site level) for New Relic, Composer, Private Files and Cookie Domain\n\n;enable_newrelic_integration = FALSE\n;;\n;;  When set to TRUE, it 
will enable New Relic monitoring for this site only.\n;;  You still need a valid New Relic license key present in the control file:\n;;  ~/static/control/newrelic.info\n;;  NOTE: The New Relic license key is a 40-character hexadecimal string.\n\n;set_composer_manager_vendor_dir = FALSE\n;;\n;;  When set to TRUE, it will enforce a site-specific Composer Manager\n;;  composer_manager_vendor_dir path (sites/domain/vendor), but only once the\n;;  site is already installed, so it will not override the variable on install\n;;  if set programmatically.\n\n;allow_private_file_downloads = FALSE\n;;\n;;  When set to TRUE, allows you to use private files mode, which is useful\n;;  only for commerce sites which sell files for download, or for intranet\n;;  sites where you need to enforce strict access control. All other sites\n;;  should never ever use private files mode, for obvious performance reasons.\n\n;server_name_cookie_domain = FALSE\n;;\n;;  When set to TRUE, it forces the cookie_domain to always use the main\n;;  domain, even when the site is accessed via a domain alias. 
For an example use case,\n;;  please read: https://gist.github.com/omega8cc/5724528\n\n\n;; ### INI (site level) for Solr Configuration\n\n;solr_integration_module = your_module_name_here\n;;\n;;  This option allows you to activate a Solr core configuration for the site.\n;;\n;;  Supported values for the solr_integration_module variable:\n;;\n;;    search_api_solr9 (Activates Solr 9 core if installed)\n;;    search_api_solr7 (Activates Solr 7 core if installed)\n;;    search_api_solr  (Activates Solr 7 core if installed)\n;;    apachesolr       (Activates Solr 4 core if installed) (deprecated)\n;;\n;;  Solr 9 and Solr 7 are available if installed.\n;;\n;;  Supported integration modules are the latest versions of either\n;;  search_api_solr or apachesolr, as listed below:\n;;\n;;   https://ftp.drupal.org/files/projects/search_api_solr-4.3.8.tar.gz (D10.2+)\n;;   https://ftp.drupal.org/files/projects/search_api_solr-4.2.12.tar.gz (D9.3+)\n;;   https://ftp.drupal.org/files/projects/search_api_solr-4.1.12.tar.gz (D8.8+)\n;;   https://ftp.drupal.org/files/projects/search_api_solr-7.x-1.17.tar.gz (D7)\n;;   https://ftp.drupal.org/files/projects/apachesolr-7.x-1.12.tar.gz (D7)\n;;   https://ftp.drupal.org/files/projects/apachesolr-6.x-3.1.tar.gz (D6)\n;;\n;;  Note that you still need to add the preferred integration module along with\n;;  any of its dependencies to your codebase, since this feature doesn't modify\n;;  your platform or site -- it only creates a Solr core with the configuration\n;;  files provided by the integration module: schema.xml and solrconfig.xml\n;;\n;;  Important: search_api_solr for D8+ is different from all previous versions,\n;;  as it requires Composer to install the module and its dependencies; then\n;;  you will need to configure it, and only then will you be able to generate\n;;  customized Solr core config files, which you should upload to the path:\n;;  sites/foo.com/files/solr/ and wait 5-10 minutes to have them activated\n;;  on the Solr 7 core the system will create for 
you.\n;;\n;;  NOTE: You must set 'solr_custom_config = NO' for the changes to take effect.\n;;\n;;  This setting affects the auto-installer, which runs every 5-10 minutes, so\n;;  there is no need to wait until the next morning to be able to use the new\n;;  Solr core. Win!\n;;\n;;  Once the Solr core is ready to use, you will find a special file in your\n;;  site directory: sites/foo.com/solr.php with details on how to access\n;;  your new Solr core with the correct credentials.\n;;\n;;  A site with an enabled Solr core can be safely migrated between platforms;\n;;  the integration module can be moved within your codebase and even upgraded,\n;;  as long as it is using compatible schema.xml and solrconfig.xml files.\n;;\n;;  To delete an existing Solr core, simply comment out this line.\n;;  The system will cleanly delete the existing Solr core in 15 minutes.\n\n;solr_update_config = NO\n;;\n;;  This option allows you to auto-update your Solr core configuration files:\n;;\n;;    schema.xml\n;;    solrconfig.xml\n;;\n;;  If there is a new release of either apachesolr or search_api_solr, your\n;;  Solr core will not be automatically upgraded to use the newer schema.xml\n;;  and solrconfig.xml, unless allowed by switching solr_update_config to YES.\n;;\n;;  This option will be ignored if you set solr_custom_config to YES.\n\n;solr_custom_config = NO\n;;\n;;  This option allows you to protect custom Solr core configuration files:\n;;\n;;    schema.xml\n;;    solrconfig.xml\n;;\n;;  To use a customized version of either schema.xml or solrconfig.xml, you\n;;  need to switch solr_custom_config to YES below and, if you are using the\n;;  hosted Ægir service, submit a support ticket to get these files updated\n;;  with your custom versions. On self-hosted BOA, simply update these files\n;;  directly.\n;;\n;;  Please remember to use Solr-compatible config files.\n;;\n;;  IMPORTANT! 
-- Please note that with this option enabled you won't be able\n;;  to follow the Drupal 8+ specific procedure for search_api_solr with config\n;;  files generated and uploaded to the files/solr/ directory in your site.\n;;  You could still use this option to make your Solr core immutable between\n;;  upgrades, though you must remember to disable this option briefly\n;;  (5-10 minutes) for the changes to take effect.\n\n\n;; ### INI (site level) for SQL Tables Conversions\n\n;sql_conversion_mode = NO\n;;\n;;  This option allows you to activate and/or customize the DB tables\n;;  conversion mode for this site only; the value defined here will override\n;;  the value of sql_conversion_mode set in the platform-level\n;;  boa_platform_control.ini file located in the sites/all/modules directory.\n;;\n;;  It can also be set (and forced) automatically for all sites on all\n;;  platforms if there is a special _SQL_CONVERT variable defined for this\n;;  Octopus instance in its .USER.octopus.cnf config file, but it may require\n;;  submitting a support request if you are using a hosted Ægir BOA service\n;;  without root access.\n;;\n;;  Supported values are: innodb and myisam (lowercase only!)\n;;\n;;  Note that this conversion, if enabled, will run daily even if all tables\n;;  have already been converted, so it will effectively run an OPTIMIZE task\n;;  on all tables.\n;;\n;;  This setting affects only the behaviour of the daily maintenance system.\n;;\n;;  This option is available only in BOA-2.1.3 or newer.\n\n\n;; ### INI (site level) for AdvAgg (advagg) Module\n\n;advagg_auto_configuration = FALSE\n;;\n;;  When set to TRUE, enables auto-configuration for the AdvAgg module\n;;  at the global.inc level. 
Supported locations, in the order of precedence:\n;;\n;;    sites/all/modules/advagg/       (optional override on the platform level)\n;;    modules/o_contrib/advagg/       (included in all D6 platforms)\n;;    modules/o_contrib_seven/advagg/ (included in all D7 platforms)\n;;\n;;  IMPORTANT!\n;;\n;;  This setting will be automatically set to TRUE during the daily\n;;  maintenance procedure if the module is detected as enabled in the site,\n;;  so while you could enable or disable it temporarily below, this setting\n;;  will be overwritten again the next morning, depending on the module's\n;;  actual status. Of course, it will not affect sites with the .dev. or\n;;  .devel. keyword present in the main site name.\n\n\n;; ### INI (site level) for Domain Access (domain) Module\n\n;auto_detect_domain_access_integration = FALSE\n;;\n;;  When set to TRUE, enables auto-detection and auto-include for\n;;  the Domain Access module. Supported locations, in the order of precedence:\n;;\n;;    sites/all/modules/domain/\n;;    sites/all/modules/contrib/domain/\n;;    profiles/foo/modules/domain/\n;;    profiles/foo/modules/contrib/domain/\n;;\n;;  IMPORTANT!\n;;\n;;  While the same setting in the platform-level boa_platform_control.ini\n;;  file located in the sites/all/modules directory will be automatically\n;;  set to TRUE during the daily maintenance procedure if the module\n;;  is detected, it will be completely ignored if a boa_site_control.ini\n;;  file is also present in the sites/foo.com/modules directory and this\n;;  setting is set to FALSE below to improve performance when this module\n;;  is not used on this site. Remember to uncomment the line above if you\n;;  want to use this feature.\n\n\n;; ### INI (site level) for Drupal for Facebook (fb) Module\n\n;auto_detect_facebook_integration = FALSE\n;;\n;;  When set to TRUE, enables auto-detection and auto-include for\n;;  the Drupal for Facebook (fb) module. 
Supported locations, in the order\n;;  of precedence:\n;;\n;;    sites/all/modules/fb/\n;;    sites/all/modules/contrib/fb/\n;;    profiles/foo/modules/fb/\n;;    profiles/foo/modules/contrib/fb/\n;;\n;;  IMPORTANT!\n;;\n;;  While the same setting in the platform-level boa_platform_control.ini\n;;  file located in the sites/all/modules directory will be automatically\n;;  set to TRUE during the daily maintenance procedure if the module\n;;  is detected, it will be completely ignored if a boa_site_control.ini\n;;  file is also present in the sites/foo.com/modules directory and this\n;;  setting is set to FALSE below to improve performance when this module\n;;  is not used on this site. Remember to uncomment the line above if you\n;;  want to use this feature.\n\n"
  },
  {
    "path": "aegir/conf/ini/panels.ini",
    "content": "[New Left Panel]\ndisplay=listing\nsort_order=mtime\nreverse=1\n\n[New Right Panel]\ndisplay=listing\nsort_order=mtime\nreverse=1\n\n"
  },
  {
    "path": "aegir/conf/network/networking",
    "content": "#!/bin/dash -e\n### BEGIN INIT INFO\n# Provides:          networking ifupdown\n# Required-Start:    mountkernfs $local_fs urandom\n# Required-Stop:     $local_fs\n# Default-Start:     S\n# Default-Stop:      0 6\n# Short-Description: Raise network interfaces.\n# Description:       Prepare /run/network directory, ifstate file and raise network interfaces, or take them down.\n### END INIT INFO\n\nPATH=\"/sbin:/bin:/usr/sbin:/usr/bin\"\nRUN_DIR=\"/run/network\"\nIFSTATE=\"$RUN_DIR/ifstate\"\nSTATEDIR=\"$RUN_DIR/state\"\n\n[ -x \"$(command -v ifup)\" ] || exit 0\n[ -x \"$(command -v ifdown)\" ] || exit 0\n\n. /lib/lsb/init-functions\n\nCONFIGURE_INTERFACES=yes\nEXCLUDE_INTERFACES=\nVERBOSE=no\n\n[ -f /etc/default/networking ] && . /etc/default/networking\n\nverbose=\"\"\n[ \"$VERBOSE\" = yes ] && verbose=-v\n\nprocess_exclusions() {\n    set -- $EXCLUDE_INTERFACES\n    exclusions=\"\"\n    for d\n    do\n\texclusions=\"-X $d $exclusions\"\n    done\n    echo $exclusions\n}\n\nprocess_options() {\n    [ -e /etc/network/options ] || return 0\n    log_warning_msg \"/etc/network/options still exists and it will be IGNORED! Please use /etc/sysctl.conf instead.\"\n}\n\ncheck_ifstate() {\n    if [ ! -d \"$RUN_DIR\" ] ; then\n\tif ! mkdir -p \"$RUN_DIR\" ; then\n\t    log_failure_msg \"can't create $RUN_DIR\"\n\t    exit 1\n\tfi\n\tif ! chown root:netdev \"$RUN_DIR\" ; then\n\t    log_warning_msg \"can't chown $RUN_DIR\"\n\tfi\n    fi\n    if [ ! -r \"$IFSTATE\" ] ; then\n\tif ! 
:> \"$IFSTATE\" ; then\n\t    log_failure_msg \"can't initialise $IFSTATE\"\n\t    exit 1\n\tfi\n    fi\n}\n\ncheck_network_file_systems() {\n    [ -e /proc/mounts ] || return 0\n\n    if [ -e /etc/iscsi/iscsi.initramfs ]; then\n\tlog_warning_msg \"not deconfiguring network interfaces: iSCSI root is mounted.\"\n\texit 0\n    fi\n\n    while read DEV MTPT FSTYPE REST; do\n\tcase $DEV in\n\t/dev/nbd*|/dev/nd[a-z]*|/dev/etherd/e*|curlftpfs*)\n\t    log_warning_msg \"not deconfiguring network interfaces: network devices still mounted.\"\n\t    exit 0\n\t    ;;\n\tesac\n\tcase $FSTYPE in\n\tnfs|nfs4|smbfs|ncp|ncpfs|cifs|coda|ocfs2|gfs|pvfs|pvfs2|fuse.httpfs|fuse.curlftpfs)\n\t    log_warning_msg \"not deconfiguring network interfaces: network file systems still mounted.\"\n\t    exit 0\n\t    ;;\n\tesac\n    done < /proc/mounts\n}\n\ncheck_network_swap() {\n    [ -e /proc/swaps ] || return 0\n\n    while read DEV MTPT FSTYPE REST; do\n\tcase $DEV in\n\t/dev/nbd*|/dev/nd[a-z]*|/dev/etherd/e*)\n\t    log_warning_msg \"not deconfiguring network interfaces: network swap still mounted.\"\n\t    exit 0\n\t    ;;\n\tesac\n    done < /proc/swaps\n}\n\nifup_hotplug () {\n    if [ -d /sys/class/net ]\n    then\n\t    ifaces=$(for iface in $(ifquery --list --allow=hotplug)\n\t\t\t    do\n\t\t\t\t    link=${iface%%:*}\n\t\t\t\t    link=${link%%.*}\n\t\t\t\t    if [ -e \"/sys/class/net/$link\" ] && ! 
ifquery --state \"$iface\" >/dev/null\n\t\t\t\t    then\n\t\t\t\t\techo \"$iface\"\n\t\t\t\t    fi\n\t\t\t    done)\n\t    if [ -n \"$ifaces\" ]\n\t    then\n\t\tifup $ifaces \"$@\" || true\n\t    fi\n    fi\n}\n\ncase \"$1\" in\nstart)\n\tprocess_options\n\tcheck_ifstate\n\n\tif [ \"$CONFIGURE_INTERFACES\" = no ]\n\tthen\n\t    log_action_msg \"Not configuring network interfaces, see /etc/default/networking\"\n\t    exit 0\n\tfi\n\tset -f\n\texclusions=$(process_exclusions)\n\tlog_action_begin_msg \"Configuring network interfaces\"\n\tif [ -x \"$(command -v udevadm)\" ]; then\n\t\tif [ -n \"$(ifquery --list --exclude=lo)\" ] || [ -n \"$(ifquery --list --allow=hotplug)\" ]; then\n\t\t\tudevadm settle || true\n\t\tfi\n\tfi\n\tif ifup -a $exclusions $verbose && ifup_hotplug $exclusions $verbose\n\tthen\n\t    log_action_end_msg $?\n\telse\n\t    log_action_end_msg $?\n\tfi\n\t;;\n\nstop)\n\tcheck_network_file_systems\n\tcheck_network_swap\n\n\tlog_action_begin_msg \"Deconfiguring network interfaces\"\n\tif ifdown -a --exclude=lo $verbose; then\n\t    log_action_end_msg $?\n\telse\n\t    log_action_end_msg $?\n\tfi\n\t;;\n\nreload)\n\tprocess_options\n\n\tlog_action_begin_msg \"Reloading network interfaces configuration\"\n\tstate=$(ifquery --state)\n\tifdown -a --exclude=lo $verbose || true\n\tif ifup --exclude=lo $state $verbose ; then\n\t    log_action_end_msg $?\n\telse\n\t    log_action_end_msg $?\n\tfi\n\t;;\n\nforce-reload|restart)\n\tprocess_options\n\n\tlog_warning_msg \"Running $0 $1 is deprecated because it may not re-enable some interfaces\"\n\tlog_action_begin_msg \"Reconfiguring network interfaces\"\n\tifdown -a --exclude=lo $verbose || true\n\tset -f\n\texclusions=$(process_exclusions)\n\tif ifup -a --exclude=lo $exclusions $verbose && ifup_hotplug $exclusions $verbose\n\tthen\n\t    log_action_end_msg $?\n\telse\n\t    log_action_end_msg $?\n\tfi\n\t;;\n\n*)\n\techo \"Usage: /etc/init.d/networking {start|stop|reload|restart|force-reload}\"\n\texit 
1\n\t;;\nesac\n\nexit 0\n\n# vim: noet ts=8\n"
  },
  {
    "path": "aegir/conf/nginx/fastcgi_params.txt",
    "content": "fastcgi_param   QUERY_STRING            $query_string;\nfastcgi_param   REQUEST_METHOD          $request_method;\nfastcgi_param   CONTENT_TYPE            $content_type;\nfastcgi_param   CONTENT_LENGTH          $content_length;\n\nfastcgi_param   SCRIPT_FILENAME         $request_filename;\nfastcgi_param   SCRIPT_NAME             $fastcgi_script_name;\nfastcgi_param   REQUEST_URI             $request_uri;\nfastcgi_param   DOCUMENT_URI            $document_uri;\nfastcgi_param   DOCUMENT_ROOT           $document_root;\nfastcgi_param   SERVER_PROTOCOL         $server_protocol;\n\nfastcgi_param   GATEWAY_INTERFACE       CGI/1.1;\nfastcgi_param   SERVER_SOFTWARE         ApacheSolarisNginx/$nginx_version;\n\nfastcgi_param   REMOTE_ADDR             $remote_addr;\nfastcgi_param   REMOTE_PORT             $remote_port;\nfastcgi_param   SERVER_ADDR             $server_addr;\nfastcgi_param   SERVER_PORT             $server_port;\nfastcgi_param   SERVER_NAME             $server_name;\n\n# BOA specific\nfastcgi_param   USER_DEVICE             $device;\nfastcgi_param   GEOIP_COUNTRY_CODE      $geoip_country_code;\nfastcgi_param   GEOIP_COUNTRY_CODE3     $geoip_country_code3;\nfastcgi_param   GEOIP_COUNTRY_NAME      $geoip_country_name;\n\nfastcgi_param   HTTPS                   $https;\n\n# PHP only, required if PHP was built with --enable-force-cgi-redirect\nfastcgi_param   REDIRECT_STATUS         200;\n\n# Block https://httpoxy.org/ attacks.\nfastcgi_param   HTTP_PROXY              \"\";\n"
  },
  {
    "path": "aegir/conf/nginx/mime.types",
    "content": "types {\n  application/atom+xml                  atom;\n  application/iphone                    pxl ipa;\n  application/java-archive              jar war ear;\n  application/javascript                js;\n  application/json                      json;\n  application/mac-binhex40              hqx;\n  application/msword                    doc;\n  application/octet-stream              bin exe dll;\n  application/octet-stream              deb;\n  application/octet-stream              dmg;\n  application/octet-stream              iso img;\n  application/octet-stream              msi msp msm;\n  application/octet-stream              safariextz;\n  application/ogg                       ogx;\n  application/pdf                       pdf;\n  application/postscript                ps eps ai;\n  application/rss+xml                   rss;\n  application/rtf                       rtf;\n  application/vnd.android.package-archive apk;\n  application/vnd.google-earth.kml+xml  kml;\n  application/vnd.google-earth.kmz      kmz;\n  application/vnd.ms-excel              xls;\n  application/vnd.ms-fontobject         eot;\n  application/vnd.ms-powerpoint         ppt;\n  application/vnd.oasis.opendocument.chart                   odc;\n  application/vnd.oasis.opendocument.chart-template          otc;\n  application/vnd.oasis.opendocument.database                odb;\n  application/vnd.oasis.opendocument.formula                 odf;\n  application/vnd.oasis.opendocument.formula-template       odft;\n  application/vnd.oasis.opendocument.graphics                odg;\n  application/vnd.oasis.opendocument.graphics-template       otg;\n  application/vnd.oasis.opendocument.image                   odi;\n  application/vnd.oasis.opendocument.image-template          oti;\n  application/vnd.oasis.opendocument.presentation            odp;\n  application/vnd.oasis.opendocument.presentation-template   otp;\n  application/vnd.oasis.opendocument.spreadsheet             ods;\n  
application/vnd.oasis.opendocument.spreadsheet-template    ots;\n  application/vnd.oasis.opendocument.text                    odt;\n  application/vnd.oasis.opendocument.text-master             otm;\n  application/vnd.oasis.opendocument.text-template           ott;\n  application/vnd.oasis.opendocument.text-web                oth;\n  application/vnd.openofficeorg.extension                    oxt;\n  application/vnd.openxmlformats-officedocument.presentationml.presentation  pptx;\n  application/vnd.openxmlformats-officedocument.presentationml.slide         sldx;\n  application/vnd.openxmlformats-officedocument.presentationml.slideshow     ppsx;\n  application/vnd.openxmlformats-officedocument.presentationml.template      potx;\n  application/vnd.openxmlformats-officedocument.spreadsheetml.sheet          xlsx;\n  application/vnd.openxmlformats-officedocument.spreadsheetml.template       xltx;\n  application/vnd.openxmlformats-officedocument.wordprocessingml.document    docx;\n  application/vnd.openxmlformats-officedocument.wordprocessingml.template    dotx;\n  application/vnd.sun.xml.calc               sxc;\n  application/vnd.sun.xml.calc.template      stc;\n  application/vnd.sun.xml.draw               sxd;\n  application/vnd.sun.xml.draw.template      std;\n  application/vnd.sun.xml.impress            sxi;\n  application/vnd.sun.xml.impress.template   sti;\n  application/vnd.sun.xml.math               sxm;\n  application/vnd.sun.xml.writer             sxw;\n  application/vnd.sun.xml.writer.global      sxg;\n  application/vnd.sun.xml.writer.template    stw;\n  application/vnd.wap.wmlc              wmlc;\n  application/x-7z-compressed           7z;\n  application/x-bittorrent              torrent;\n  application/x-chrome-extension        crx;\n  application/x-cocoa                   cco;\n  application/x-font-ttf                ttc ttf;\n  application/x-h5p                     h5p;\n  application/x-java-archive-diff       jardiff;\n  application/x-java-jnlp-file        
  jnlp;\n  application/x-makeself                run;\n  application/x-opera-extension         oex;\n  application/x-perl                    pl pm;\n  application/x-pilot                   prc pdb;\n  application/x-rar-compressed          rar;\n  application/x-redhat-package-manager  rpm;\n  application/x-sea                     sea;\n  application/x-shockwave-flash         swf;\n  application/x-stuffit                 sit;\n  application/x-tcl                     tcl tk;\n  application/x-web-app-manifest+json   webapp;\n  application/x-x509-ca-cert            der pem crt;\n  application/x-xpinstall               xpi;\n  application/xhtml+xml                 xhtml;\n  application/xml                       rdf;\n  application/zip                       zip;\n  audio/midi                            mid midi kar;\n  audio/mp4                             aac f4a f4b m4a;\n  audio/mpeg                            mp3;\n  audio/ogg                             oga ogg;\n  audio/x-realaudio                     ra;\n  audio/x-wav                           wav;\n  font/opentype                         otf;\n  font/woff                             woff;\n  font/woff2                            woff2;\n  image/bmp                             bmp;\n  image/gif                             gif;\n  image/jpeg                            jpeg jpg;\n  image/png                             png;\n  image/svg+xml                         svg svgz;\n  image/tiff                            tif tiff;\n  image/vnd.wap.wbmp                    wbmp;\n  image/webp                            webp;\n  image/x-icon                          ico;\n  image/x-jng                           jng;\n  text/cache-manifest                   manifest appcache;\n  text/css                              css;\n  text/html                             html htm shtml;\n  text/mathml                           mml;\n  text/plain                            txt;\n  text/vnd.sun.j2me.app-descriptor      jad;\n  
text/vnd.wap.wml                      wml;\n  text/vtt                              vtt;\n  text/x-component                      htc;\n  text/x-vcard                          vcf;\n  text/xml                              xml;\n  video/3gpp                            3gpp 3gp;\n  video/mp4                             mp4 m4v f4v f4p;\n  video/mpeg                            mpeg mpg;\n  video/ogg                             ogv;\n  video/quicktime                       mov;\n  video/webm                            webm;\n  video/x-flv                           flv;\n  video/x-mng                           mng;\n  video/x-ms-asf                        asx asf;\n  video/x-ms-wmv                        wmv;\n  video/x-msvideo                       avi;\n}\n"
  },
  {
    "path": "aegir/conf/nginx/nginx",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          nginx\n# Required-Start:    $remote_fs $syslog\n# Required-Stop:     $remote_fs $syslog\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: nginx init.d dash script for Debian or other *nix.\n# Description:       nginx init.d dash script for Debian or other *nix.\n### END INIT INFO\n#------------------------------------------------------------------------------\n# nginx - this Debian Almquist shell (dash) script, starts and stops the nginx\n#         daemon for Ubuntu and other *nix releases.\n#\n# description:  Nginx is an HTTP(S) server, HTTP(S) reverse \\\n#               proxy and IMAP/POP3 proxy server.  This \\\n#               script will manage the initiation of the \\\n#               server and its process state.\n#\n# processname: nginx\n# config:      /etc/nginx/nginx.conf\n# pidfile:     /run/nginx.pid\n# Provides:    nginx\n#\n# Author:  Jason Giedymin\n#          <jason.giedymin AT gmail.com>.\n#\n# Version: 3.5.1 11-NOV-2013 jason.giedymin AT gmail.com\n# Notes: nginx init.d dash script for Ubuntu.\n# Tested with: Ubuntu 13.10, nginx-1.4.3\n#\n# This script's project home is:\n#   http://github.com/JasonGiedymin/nginx-init-ubuntu\n#\n# Modified by: Barracuda Team <omega8cc AT gmail.com>\n#\n#------------------------------------------------------------------------------\n#                               MIT X11 License\n#------------------------------------------------------------------------------\n#\n# Copyright (c) 2008-2013 Jason Giedymin, http://jasongiedymin.com\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the 
Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\n# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n#------------------------------------------------------------------------------\n\n#------------------------------------------------------------------------------\n#                               Functions\n#------------------------------------------------------------------------------\nLSB_FUNC=/lib/lsb/init-functions\n\n# Test that init functions exists\ntest -r $LSB_FUNC || {\n    echo \"$0: Cannot find $LSB_FUNC! Script exiting.\" 1>&2\n    exit 5\n}\n\n. 
$LSB_FUNC\n\n#------------------------------------------------------------------------------\n#                               Consts\n#------------------------------------------------------------------------------\nPATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nDAEMON=/usr/sbin/nginx\n\nPS=\"nginx\"\nPIDNAME=\"nginx\"                     #lets you do $PS-slave\nPIDFILE=$PIDNAME.pid                #pid file\nPIDSPATH=/run                       #default pid location, you should change it\n\nDESCRIPTION=\"Nginx Server...\"\n\nRUNAS=root                          #user to run as\n\nSCRIPT_OK=0                         #ala error codes\nSCRIPT_ERROR=1                      #ala error codes\nTRUE=1                              #boolean\nFALSE=0                             #boolean\n\nlockfile=/var/lock/subsys/nginx\nNGINX_CONF_FILE=\"/etc/nginx/nginx.conf\"\n\n#------------------------------------------------------------------------------\n#                               Simple Tests\n#------------------------------------------------------------------------------\n\n# Test if nginx is a file and executable\ntest -x $DAEMON || {\n    echo \"$0: You don't have permissions to execute nginx.\" 1>&2\n    exit 4\n}\n\n# Include nginx defaults if available\nif [ -f /etc/default/nginx ]; then\n    . /etc/default/nginx\nfi\n\n#------------------------------------------------------------------------------\n#                               Functions\n#------------------------------------------------------------------------------\n\n\n# Load kTLS module only if it is not already loaded\n_load_ktls_module() {\n  if ! 
lsmod 2>/dev/null | awk '{print $1}' | grep -qx \"tls\"; then\n    if [ -e \"/lib/modules/$(uname -r)/kernel/net/tls/tls.ko\" ] \\\n      || modinfo tls >/dev/null 2>&1; then\n      modprobe -q tls >/dev/null 2>&1 || true\n      grep -qxF \"tls\" /etc/modules 2>/dev/null || printf '%s\\n' \"tls\" >> /etc/modules\n    fi\n  fi\n}\n\nsetFilePerms(){\n    if [ -f $PIDSPATH/$PIDFILE ]; then\n        chmod 400 $PIDSPATH/$PIDFILE\n    fi\n}\n\nconfigtest() {\n    $DAEMON -t -c $NGINX_CONF_FILE\n}\n\ngetPSCount() {\n    return `pgrep -f $PS | wc -l`\n}\n\nisRunning() {\n    if [ $1 ]; then\n        pidof_daemon $1\n        PID=$?\n\n        if [ $PID -gt 0 ]; then\n            return 1\n        else\n            return 0\n        fi\n    else\n        pidof_daemon\n        PID=$?\n\n        if [ $PID -gt 0 ]; then\n            return 1\n        else\n            return 0\n        fi\n    fi\n}\n\n#courtesy of php-fpm\nwait_for_pid() {\n    try=0\n\n    while test $try -lt 35; do\n        case \"$1\" in\n            'created')\n            if [ -f \"$2\" ]; then\n                try=''\n                break\n            fi\n            ;;\n\n            'removed')\n            if [ ! 
-f \"$2\" ]; then\n                try=''\n                break\n            fi\n            ;;\n        esac\n\n        try=`expr $try + 1`\n        sleep 1\n    done\n}\n\nstatus(){\n    isRunning\n    isAlive=$?\n\n    if [ \"${isAlive}\" -eq $TRUE ]; then\n        log_warning_msg \"$DESCRIPTION found running with processes:  `pidof $PS`\"\n        rc=0\n    else\n        log_warning_msg \"$DESCRIPTION is NOT running.\"\n        rc=3\n    fi\n\n    return\n}\n\nremovePIDFile(){\n    if [ $1 ]; then\n        if [ -f $1 ]; then\n            rm -f $1\n        fi\n    else\n        #Do default removal\n        if [ -f $PIDSPATH/$PIDFILE ]; then\n            rm -f $PIDSPATH/$PIDFILE\n        fi\n    fi\n}\n\nstart() {\n    log_daemon_msg \"Starting $DESCRIPTION\"\n\n    _load_ktls_module\n\n    isRunning\n    isAlive=$?\n\n    if [ \"${isAlive}\" -eq $TRUE ]; then\n        log_end_msg $SCRIPT_ERROR\n        rc=0\n    else\n        # Check if the ULIMIT is set in /etc/default/nginx\n        if [ -n \"$ULIMIT\" ]; then\n            # Set the ulimits\n            ulimit $ULIMIT\n        fi\n        start-stop-daemon --start --quiet --chuid \\\n        $RUNAS --pidfile $PIDSPATH/$PIDFILE --exec $DAEMON \\\n        -- -c $NGINX_CONF_FILE\n        setFilePerms\n        log_end_msg $SCRIPT_OK\n        rc=0\n    fi\n\n    return\n}\n\nstop() {\n    log_daemon_msg \"Stopping $DESCRIPTION\"\n\n    isRunning\n    isAlive=$?\n\n    if [ \"${isAlive}\" -eq $TRUE ]; then\n        start-stop-daemon --stop --quiet --pidfile $PIDSPATH/$PIDFILE\n\n        wait_for_pid 'removed' $PIDSPATH/$PIDFILE\n\n        if [ -n \"$try\" ]; then\n            log_end_msg $SCRIPT_ERROR\n            rc=0 # lsb states 1, but under status it is 2 (which is more prescriptive). 
Deferring to standard.\n        else\n            removePIDFile\n            log_end_msg $SCRIPT_OK\n            rc=0\n        fi\n    else\n        log_end_msg $SCRIPT_ERROR\n        rc=7\n    fi\n\n    return\n}\n\nreload() {\n    configtest || return $?\n\n    log_daemon_msg \"Reloading (via HUP) $DESCRIPTION\"\n\n    isRunning\n\n    if [ $? -eq $TRUE ]; then\n        kill -HUP `cat $PIDSPATH/$PIDFILE`\n        log_end_msg $SCRIPT_OK\n        rc=0\n    else\n        log_end_msg $SCRIPT_ERROR\n        rc=7\n    fi\n\n    return\n}\n\nquietupgrade() {\n    log_daemon_msg \"Performing Quiet Upgrade $DESCRIPTION\"\n\n    isRunning\n    isAlive=$?\n\n    if [ \"${isAlive}\" -eq $TRUE ]; then\n        kill -USR2 `cat $PIDSPATH/$PIDFILE`\n        kill -WINCH `cat $PIDSPATH/$PIDFILE.oldbin`\n\n        isRunning\n        isAlive=$?\n\n        if [ \"${isAlive}\" -eq $TRUE ]; then\n            kill -QUIT `cat $PIDSPATH/$PIDFILE.oldbin`\n            wait_for_pid 'removed' $PIDSPATH/$PIDFILE.oldbin\n            removePIDFile $PIDSPATH/$PIDFILE.oldbin\n\n            log_end_msg $SCRIPT_OK\n            rc=0\n        else\n            log_end_msg $SCRIPT_ERROR\n\n            log_daemon_msg \"ERROR! 
Reverting back to original $DESCRIPTION\"\n\n            kill -HUP `cat $PIDSPATH/$PIDFILE`\n            kill -TERM `cat $PIDSPATH/$PIDFILE.oldbin`\n            kill -QUIT `cat $PIDSPATH/$PIDFILE.oldbin`\n\n            wait_for_pid 'removed' $PIDSPATH/$PIDFILE.oldbin\n            removePIDFile $PIDSPATH/$PIDFILE.oldbin\n\n            log_end_msg $SCRIPT_OK\n            rc=0\n        fi\n    else\n        log_end_msg $SCRIPT_ERROR\n        rc=7\n    fi\n\n    return\n}\n\nterminate() {\n    log_daemon_msg \"Force terminating (via KILL) $DESCRIPTION\"\n\n    PIDS=`pidof $PS` || true\n\n    [ -e $PIDSPATH/$PIDFILE ] && PIDS2=`cat $PIDSPATH/$PIDFILE`\n\n    for i in $PIDS; do\n        if [ \"$i\" = \"$PIDS2\" ]; then\n            kill $i\n            wait_for_pid 'removed' $PIDSPATH/$PIDFILE\n            removePIDFile\n        fi\n    done\n\n    log_end_msg $SCRIPT_OK\n    rc=0\n}\n\ndestroy() {\n    log_daemon_msg \"Force terminating and may include self (via KILLALL) $DESCRIPTION\"\n    killall $PS -q >> /dev/null 2>&1\n    log_end_msg $SCRIPT_OK\n    rc=0\n}\n\npidof_daemon() {\n    PIDS=`pidof $PS` || true\n\n    [ -e $PIDSPATH/$PIDFILE ] && PIDS2=`cat $PIDSPATH/$PIDFILE`\n\n    for i in $PIDS; do\n        if [ \"$i\" = \"$PIDS2\" ]; then\n            return 1\n        fi\n    done\n\n    return 0\n}\n\naction=\"$1\"\ncase \"$1\" in\n    start)\n        start\n        ;;\n    stop)\n        stop\n        ;;\n    restart|force-reload)\n        stop\n        # if [ $rc -ne 0 ]; then\n        #     script_exit\n        # fi\n        sleep 1\n        start\n        ;;\n    reload)\n        $1\n        ;;\n    status)\n        status\n        ;;\n    configtest)\n        $1\n        ;;\n    quietupgrade)\n        $1\n        ;;\n    terminate)\n        $1\n        ;;\n    destroy)\n        $1\n        ;;\n    *)\n        FULLPATH=/etc/init.d/$PS\n        echo \"Usage: $FULLPATH {start|stop|restart|force-reload|status|configtest|quietupgrade|terminate|destroy}\"\n       
 echo \"       The 'destroy' command should only be used as a last resort.\"\n        exit 3\n        ;;\nesac\n\nexit $rc\n"
  },
  {
    "path": "aegir/conf/nginx/nginx-squeeze-init",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          nginx\n# Required-Start:    $local_fs $remote_fs $network $syslog\n# Required-Stop:     $local_fs $remote_fs $network $syslog\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts the nginx web server\n# Description:       starts nginx using start-stop-daemon\n### END INIT INFO\n\nPATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nDAEMON=/usr/sbin/nginx\nNAME=nginx\nDESC=nginx\n\n# Include nginx defaults if available\nif [ -f /etc/default/nginx ]; then\n\t. /etc/default/nginx\nfi\n\ntest -x $DAEMON || exit 0\n\n. /lib/lsb/init-functions\n\ntest_nginx_config() {\n\tif $DAEMON -t $DAEMON_OPTS >/dev/null 2>&1; then\n\t\treturn 0\n\telse\n\t\t$DAEMON -t $DAEMON_OPTS\n\t\treturn $?\n\tfi\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting $DESC: \"\n\t\ttest_nginx_config\n\t\t# Check if the ULIMIT is set in /etc/default/nginx\n\t\tif [ -n \"$ULIMIT\" ]; then\n\t\t\t# Set the ulimits\n\t\t\tulimit $ULIMIT\n\t\tfi\n\t\tstart-stop-daemon --start --quiet --pidfile /run/$NAME.pid \\\n\t\t    --exec $DAEMON -- $DAEMON_OPTS || true\n\t\techo \"$NAME.\"\n\t\t;;\n\n\tstop)\n\t\techo -n \"Stopping $DESC: \"\n\t\tstart-stop-daemon --stop --quiet --pidfile /run/$NAME.pid \\\n\t\t    --exec $DAEMON || true\n\t\techo \"$NAME.\"\n\t\t;;\n\n\trestart|force-reload)\n\t\techo -n \"Restarting $DESC: \"\n\t\tstart-stop-daemon --stop --quiet --pidfile \\\n\t\t    /run/$NAME.pid --exec $DAEMON || true\n\t\tsleep 1\n\t\ttest_nginx_config\n\t\tstart-stop-daemon --start --quiet --pidfile \\\n\t\t    /run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true\n\t\techo \"$NAME.\"\n\t\t;;\n\n\treload)\n\t\techo -n \"Reloading $DESC configuration: \"\n\t\ttest_nginx_config\n\t\tstart-stop-daemon --stop --signal HUP --quiet --pidfile /run/$NAME.pid \\\n\t\t    --exec $DAEMON || true\n\t\techo \"$NAME.\"\n\t\t;;\n\n\tconfigtest|testconfig)\n\t\techo -n 
\"Testing $DESC configuration: \"\n\t\tif test_nginx_config; then\n\t\t\techo \"$NAME.\"\n\t\telse\n\t\t\texit $?\n\t\tfi\n\t\t;;\n\n\tstatus)\n\t\tstatus_of_proc -p /run/$NAME.pid \"$DAEMON\" nginx && exit 0 || exit $?\n\t\t;;\n\t*)\n\t\techo \"Usage: $NAME {start|stop|restart|reload|force-reload|status|configtest}\" >&2\n\t\texit 1\n\t\t;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/conf/nginx/nginx.conf",
    "content": "# Nginx web server main configuration file: /etc/nginx/nginx.conf\n#\nuser www-data;\nworker_processes auto;\npid /run/nginx.pid;\n\nevents {\n  multi_accept on;\n  worker_connections 20000;\n}\n\nhttp {\n  default_type application/octet-stream;\n  gzip on;\n  gzip_disable \"msie6\";\n  keepalive_timeout 70;\n  sendfile on;\n  tcp_nodelay on;\n  tcp_nopush on;\n  keepalive_requests 99999;\n  types_hash_max_size 8192;\n  include /etc/nginx/mime.types;\n  include /etc/nginx/conf.d/*.conf;\n  include /etc/nginx/sites-enabled/*;\n}\n"
  },
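The `events` settings in nginx.conf above cap concurrency at roughly `worker_processes × worker_connections`. A minimal sketch of that arithmetic, assuming `worker_processes auto` resolves to one worker per CPU core (its documented default behavior):

```shell
# Rough concurrency ceiling implied by the nginx.conf above.
# Assumption: 'worker_processes auto' starts one worker per CPU core.
workers=$(nproc)
conns_per_worker=20000   # worker_connections from the events block
max_clients=$((workers * conns_per_worker))
echo "theoretical max simultaneous connections: ${max_clients}"
```

Note this is an upper bound: a proxied request can hold two connections (client side plus upstream side), so the effective ceiling is lower in practice.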
  {
    "path": "aegir/conf/nginx/nginx_compact_include.conf",
    "content": "#######################################################\n###  nginx compact basic configuration start\n#######################################################\n\n###\n### Deny crawlers.\n###\nif ($is_crawler) {\n  return 444;\n}\n\n###\n### Include high load protection config if exists.\n###\ninclude /data/conf/nginx_high_load.c*;\n\n###\n### Catch all unspecified requests.\n###\nlocation / {\n  try_files $uri @dynamic;\n}\n\n###\n### Send all not cached requests to php-fpm with clean URLs support.\n###\nlocation @dynamic {\n  rewrite ^/(.*)$  /index.php last;\n}\n\n###\n### Send all non-static requests to php-fpm.\n###\nlocation ~ \\.php$ {\n  try_files $uri =404;            ### check for existence of php file first\n  fastcgi_pass 127.0.0.1:9090;\n}\n\n###\n### Serve & no-log static files & images directly.\n###\nlocation ~* ^.+\\.(?:css|js|htc|xml|jpe?g|gif|png|ico|webp|bmp|svg|swf|pdf|docx?|xlsx?|tiff?|txt|rtf|vcard|vcf|cgi|bat|pl|dll|aspx?|class|otf|ttf|woff2?|eot|less)$ {\n  access_log off;\n  log_not_found off;\n  expires 30d;\n  try_files $uri =404;\n}\n\n###\n### Serve & log bigger media/static/archive files directly.\n###\nlocation ~* ^.+\\.(?:avi|mpe?g|mov|wmv|mp3|ogg|ogv|wav|midi|zip|tar|t?gz|rar|dmg|exe|apk|pxl|ipa)$ {\n  expires 30d;\n  try_files $uri =404;\n}\n\n###\n### Pseudo-streaming server-side support for Flash Video (FLV) files.\n###\nlocation ~* ^.+\\.flv$ {\n  flv;\n  expires 30d;\n  try_files $uri =404;\n}\n\n###\n### Pseudo-streaming server-side support for H.264/AAC files.\n###\nlocation ~* ^.+\\.(?:mp4|m4a)$ {\n  mp4;\n  mp4_buffer_size 1m;\n  mp4_max_buffer_size 5m;\n  expires 30d;\n  try_files $uri =404;\n}\n\n#######################################################\n###  nginx compact basic configuration end\n#######################################################\n"
  },
  {
    "path": "aegir/conf/nginx/nginx_high_load_off.conf",
    "content": "###\n### Under high load, serve only allowed crawlers; deny the rest with 503.\n###\nif ($deny_on_high_load) {\n  return 503;\n}\n"
  },
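The compact include above loads `/data/conf/nginx_high_load.c*` only when a matching file exists, so high-load protection can be toggled by placing or removing that file and reloading nginx. A hypothetical sketch (the helper names are illustrative; the paths come from the include directive, and it assumes `nginx_high_load_off.conf` is the dormant template kept outside the glob):

```shell
# Hypothetical toggle helpers: the include glob nginx_high_load.c* does NOT
# match nginx_high_load_off.conf, so the "_off" file stays dormant until
# copied into place under the active name.
enable_high_load_protection() {
  cp /data/conf/nginx_high_load_off.conf /data/conf/nginx_high_load.conf \
    && nginx -s reload
}

disable_high_load_protection() {
  rm -f /data/conf/nginx_high_load.conf \
    && nginx -s reload
}
```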
  {
    "path": "aegir/conf/nginx/nginx_speed_purge.conf",
    "content": "###\n### Support for https://drupal.org/project/purge module.\n###\nserver {\n  listen       127.0.0.1:8888;\n  server_name  _;\n  limit_conn   limreq 8888;\n  access_log   /var/log/nginx/speed_purge.log main buffer=32k;\n  allow        127.0.0.1;\n  deny         all;\n  root         /var/www/nginx-default;\n  index        index.html index.htm;\n  server_name_in_redirect off;\n  location / {\n    try_files $uri =404;\n  }\n  location ~ /purge-([a-z\\-]*)(/.*) {\n    fastcgi_cache_purge speed $1$host$request_method$2;\n    log_not_found off;\n  }\n}\n"
  },
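The purge server block above listens only on 127.0.0.1:8888 and maps `/purge-<suffix>/<uri>` onto a `fastcgi_cache_purge` key in the `speed` zone. A hypothetical local helper (the `Host` header and the `normal` key suffix are illustrative, and it requires nginx built with the ngx_cache_purge module):

```shell
# Hypothetical local purge helper for the server block above.
# $1 = Host header of the site, $2 = path to purge, e.g. /node/1
purge_url() {
  curl -s -H "Host: $1" "http://127.0.0.1:8888/purge-normal$2"
}

# Example (assumes nginx is running with the speed cache zone):
# purge_url example.com /node/1
```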
  {
    "path": "aegir/conf/nginx/nginx_sql_adminer.conf",
    "content": "###\n### Adminer SQL Manager Redirect to HTTPS.\n###\nserver {\n  listen                       127.0.0.1:80;\n  server_name                  adminer_name;\n  # Disable access logs for this server block\n  access_log off;\n  log_not_found off;\n  return 301 https://$host$request_uri;\n}\n\n###\n### Adminer SQL Manager HTTPS Only.\n###\nserver {\n  include                      fastcgi_params;\n  fastcgi_param                SCRIPT_FILENAME $document_root$fastcgi_script_name;\n  fastcgi_param                HTTPS on;\n  limit_conn                   limreq 555;\n  listen                       127.0.0.1:443 ssl;\n  http2                        on;\n  server_name                  adminer_name;\n  root                         /var/www/adminer;\n  index                        index.php index.html;\n  ssl_dhparam                  /etc/ssl/private/nginx-wild-ssl.dhp;\n  ssl_certificate              /etc/ssl/private/nginx-wild-ssl.crt;\n  ssl_certificate_key          /etc/ssl/private/nginx-wild-ssl.key;\n  ssl_session_timeout          5m;\n  if ($is_crawler) {\n    return 444;\n  }\n  include                      /var/aegir/config/includes/ip_access/sqladmin*;\n  include                      /var/aegir/config/includes/nginx_compact_include.conf;\n}\n"
  },
  {
    "path": "aegir/conf/nginx/nginx_sql_buddy.conf",
    "content": "###\n### SQL Buddy Manager Redirect to HTTPS.\n###\nserver {\n  listen                       127.0.0.1:80;\n  server_name                  buddy_name;\n  # Disable access logs for this server block\n  access_log off;\n  log_not_found off;\n  return 301 https://$host$request_uri;\n}\n\n###\n### SQL Buddy Manager HTTPS Only.\n###\nserver {\n  include                      fastcgi_params;\n  fastcgi_param                SCRIPT_FILENAME $document_root$fastcgi_script_name;\n  fastcgi_param                HTTPS on;\n  limit_conn                   limreq 555;\n  listen                       127.0.0.1:443 ssl;\n  http2                        on;\n  server_name                  buddy_name;\n  root                         /var/www/sqlbuddy;\n  index                        index.php index.html;\n  ssl_dhparam                  /etc/ssl/private/nginx-wild-ssl.dhp;\n  ssl_certificate              /etc/ssl/private/nginx-wild-ssl.crt;\n  ssl_certificate_key          /etc/ssl/private/nginx-wild-ssl.key;\n  ssl_session_timeout          5m;\n  if ($is_crawler) {\n    return 444;\n  }\n  include                      /var/aegir/config/includes/ip_access/sqladmin*;\n  include                      /var/aegir/config/includes/nginx_compact_include.conf;\n}\n"
  },
  {
    "path": "aegir/conf/nginx/nginx_sql_cgp.conf",
    "content": "###\n### Collectd Graph Panel Redirect to HTTPS.\n###\nserver {\n  listen                       127.0.0.1:80;\n  server_name                  cgp_name;\n  # Disable access logs for this server block\n  access_log off;\n  log_not_found off;\n  return 301 https://$host$request_uri;\n}\n\n###\n### Collectd Graph Panel HTTPS Only.\n###\nserver {\n  include                      fastcgi_params;\n  fastcgi_param                SCRIPT_FILENAME $document_root$fastcgi_script_name;\n  fastcgi_param                HTTPS on;\n  limit_conn                   limreq 555;\n  listen                       127.0.0.1:443 ssl;\n  http2                        on;\n  server_name                  cgp_name;\n  root                         /var/www/cgp;\n  index                        index.php index.html;\n  ssl_dhparam                  /etc/ssl/private/nginx-wild-ssl.dhp;\n  ssl_certificate              /etc/ssl/private/nginx-wild-ssl.crt;\n  ssl_certificate_key          /etc/ssl/private/nginx-wild-ssl.key;\n  ssl_session_timeout          5m;\n  if ($is_crawler) {\n    return 444;\n  }\n  include                      /var/aegir/config/includes/ip_access/sqladmin*;\n  include                      /var/aegir/config/includes/nginx_compact_include.conf;\n}\n"
  },
  {
    "path": "aegir/conf/nginx/nginx_sql_chive.conf",
    "content": "###\n### Chive SQL Manager Redirect to HTTPS.\n###\nserver {\n  listen                       127.0.0.1:80;\n  server_name                  chive_name;\n  # Disable access logs for this server block\n  access_log off;\n  log_not_found off;\n  return 301 https://$host$request_uri;\n}\n\n###\n### Chive SQL Manager HTTPS Only.\n###\nserver {\n  include                      fastcgi_params;\n  fastcgi_param                SCRIPT_FILENAME $document_root$fastcgi_script_name;\n  fastcgi_param                HTTPS on;\n  limit_conn                   limreq 555;\n  listen                       127.0.0.1:443 ssl;\n  http2                        on;\n  server_name                  chive_name;\n  root                         /var/www/chive;\n  index                        index.php index.html;\n  ssl_dhparam                  /etc/ssl/private/nginx-wild-ssl.dhp;\n  ssl_certificate              /etc/ssl/private/nginx-wild-ssl.crt;\n  ssl_certificate_key          /etc/ssl/private/nginx-wild-ssl.key;\n  ssl_session_timeout          5m;\n  if ($is_crawler) {\n    return 444;\n  }\n  include                      /var/aegir/config/includes/ip_access/sqladmin*;\n  include                      /var/aegir/config/includes/nginx_compact_include.conf;\n}\n"
  },
  {
    "path": "aegir/conf/nginx/nginx_wild_ssl.conf",
    "content": "\n### /var/aegir/config/server_master/nginx/pre.d/nginx_wild_ssl.conf\n\nupstream nginx_http {\n  server  127.0.0.1:80;\n}\n\nserver {\n  listen                       127.0.0.1:443 ssl;\n  listen                       127.0.0.1:443 quic reuseport;\n  http2                        on;\n  http3                        on;\n  http3_hq                     on;\n  server_name                  _;\n  ssl_dhparam                  /etc/ssl/private/nginx-wild-ssl.dhp;\n  ssl_certificate              /etc/ssl/private/nginx-wild-ssl.crt;\n  ssl_certificate_key          /etc/ssl/private/nginx-wild-ssl.key;\n  ssl_session_timeout          5m;\n  ssl_conf_command Options     KTLS;\n  access_log                   off;\n  log_not_found                off;\n  ###\n  ### Deny known crawlers.\n  ###\n  if ($is_crawler) {\n    return 444;\n  }\n  location / {\n    proxy_pass                 http://nginx_http;\n    proxy_redirect             off;\n    gzip_vary                  off;\n    proxy_buffering            off;\n    proxy_set_header           Host              $host;\n    proxy_set_header           X-Real-IP         $remote_addr;\n    proxy_set_header           X-Forwarded-By    $server_addr:$server_port;\n    proxy_set_header           X-Forwarded-For   $proxy_add_x_forwarded_for;\n    proxy_set_header           X-Local-Proxy     $scheme;\n    proxy_set_header           X-Forwarded-Proto $scheme;\n    proxy_pass_header          Set-Cookie;\n    proxy_pass_header          Cookie;\n    proxy_pass_header          X-Accel-Expires;\n    proxy_pass_header          X-Accel-Redirect;\n    proxy_pass_header          X-This-Proto;\n    proxy_connect_timeout      180;\n    proxy_send_timeout         180;\n    proxy_read_timeout         180;\n  }\n}\n"
  },
  {
    "path": "aegir/conf/php/fpm-pool-common-legacy.conf",
    "content": "\ngroup = www-data\nlisten.owner = www-data\nlisten.group = www-data\nlisten.mode = 0660\nlisten.allowed_clients = 127.0.0.1\n\npm = ondemand\npm.process_idle_timeout = 10s\npm.max_requests = 5000\npm.status_path = /fpm-status\nping.path = /fpm-ping\nping.response = pong\nslowlog = /var/log/php/fpm-$pool-slow.log\nrequest_slowlog_timeout = 90s\nlisten.backlog = 65535\n\nenv[HOSTNAME] = $HOSTNAME\nenv[PATH] = /usr/local/bin:/usr/bin:/bin\n\nphp_admin_value[default_socket_timeout] = 180\nphp_admin_value[max_execution_time] = 180\nphp_admin_value[max_input_time] = 180\nphp_admin_value[memory_limit] = 395M\n\nphp_admin_value[opcache.error_log] = /var/log/php/opcache-$pool-error.log\nphp_admin_value[opcache.log_verbosity_level] = 1\n\nphp_admin_flag[apc.enabled] = on\n\n"
  },
  {
    "path": "aegir/conf/php/fpm-pool-common-modern.conf",
    "content": "\ngroup = www-data\nlisten.owner = www-data\nlisten.group = www-data\nlisten.mode = 0660\nlisten.allowed_clients = 127.0.0.1\n\npm = ondemand\npm.process_idle_timeout = 10s\npm.max_requests = 5000\npm.status_path = /fpm-status\nping.path = /fpm-ping\nping.response = pong\nslowlog = /var/log/php/fpm-$pool-slow.log\nrequest_slowlog_timeout = 90s\nlisten.backlog = 65535\n\nenv[HOSTNAME] = $HOSTNAME\nenv[PATH] = /usr/local/bin:/usr/bin:/bin\n\nphp_admin_value[default_socket_timeout] = 180\nphp_admin_value[max_execution_time] = 180\nphp_admin_value[max_input_time] = 180\nphp_admin_value[memory_limit] = 395M\n\nphp_admin_value[opcache.error_log] = /var/log/php/opcache-$pool-error.log\nphp_admin_value[opcache.log_verbosity_level] = 1\n\nphp_admin_flag[apc.enabled] = on\n\n"
  },
  {
    "path": "aegir/conf/php/fpm-pool-common.conf",
    "content": "\ngroup = www-data\nlisten.owner = www-data\nlisten.group = www-data\nlisten.mode = 0660\nlisten.allowed_clients = 127.0.0.1\n\npm = ondemand\npm.process_idle_timeout = 10s\npm.max_requests = 5000\npm.status_path = /fpm-status\nping.path = /fpm-ping\nping.response = pong\nslowlog = /var/log/php/fpm-$pool-slow.log\nrequest_slowlog_timeout = 90s\nlisten.backlog = 65535\n\nenv[HOSTNAME] = $HOSTNAME\nenv[PATH] = /usr/local/bin:/usr/bin:/bin\n\nphp_admin_value[default_socket_timeout] = 180\nphp_admin_value[max_execution_time] = 180\nphp_admin_value[max_input_time] = 180\nphp_admin_value[memory_limit] = 395M\n\nphp_admin_value[opcache.error_log] = /var/log/php/opcache-$pool-error.log\nphp_admin_value[opcache.log_verbosity_level] = 1\n\nphp_admin_flag[apc.enabled] = on\n\n"
  },
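A useful sanity check when sizing pools that include fpm-pool-common.conf: in the worst case every child hits `memory_limit` at once, so a pool's ceiling is `pm.max_children × memory_limit`. A quick sketch using the values from the configs above (395M from the common file, 8 children from the per-pool files):

```shell
# Worst-case memory for one pool: every FPM child at memory_limit at once.
memory_limit_mb=395   # php_admin_value[memory_limit] in fpm-pool-common.conf
max_children=8        # pm.max_children in the per-pool configs
worst_case_mb=$((memory_limit_mb * max_children))
echo "worst-case pool footprint: ${worst_case_mb}M"
```

With `pm = ondemand` and a 10s idle timeout, actual usage is usually far below this bound, but the worst case is what matters when several pools share one box.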
  {
    "path": "aegir/conf/php/fpm-pool-foo-multi.conf",
    "content": "[THISPOOL]\n\nprefix = /data/disk/foo\nuser = $pool.web\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/home/foo.web/.tmp\"\nphp_admin_value[upload_tmp_dir] = \"/home/foo.web/.tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/home/foo.web/.tmp\"\nphp_admin_value[session.save_path] = \"/home/foo.web/.tmp\"\nphp_admin_value[uploadprogress.file.contents_template] = \"/home/foo.web/.tmp/upload_contents_%s\"\nphp_admin_value[uploadprogress.file.filename_template] = \"/home/foo.web/.tmp/upt_%s.txt\"\n\nenv[TMP] = /home/foo.web/.tmp\nenv[TMPDIR] = /home/foo.web/.tmp\nenv[TEMP] = /home/foo.web/.tmp\n\nphp_admin_value[open_basedir] = \".:/data/disk/foo/distro:/data/disk/foo/static:/data/disk/foo/aegir:/data/disk/foo/platforms:/data/disk/foo/backup-exports:/home/foo.web/.tmp:/home/foo.web/.aws:/data/all:/data/disk/all:/data/conf:/var/second/foo:/mnt:/srv:/hdd:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php56:/opt/php70:/opt/php71:/opt/php72:/opt/php73:/opt/php74:/opt/php80:/opt/php81:/opt/php82:/opt/php83:/opt/php84:/opt/php85:/dev/urandom:/var/tmp/fpm:/var/www/phpcache/foo\"\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm-pool-foo.conf",
    "content": "[foo]\n\nprefix = /data/disk/foo\nuser = $pool.web\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/home/foo.web/.tmp\"\nphp_admin_value[upload_tmp_dir] = \"/home/foo.web/.tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/home/foo.web/.tmp\"\nphp_admin_value[session.save_path] = \"/home/foo.web/.tmp\"\nphp_admin_value[uploadprogress.file.contents_template] = \"/home/foo.web/.tmp/upload_contents_%s\"\nphp_admin_value[uploadprogress.file.filename_template] = \"/home/foo.web/.tmp/upt_%s.txt\"\n\nenv[TMP] = /home/foo.web/.tmp\nenv[TMPDIR] = /home/foo.web/.tmp\nenv[TEMP] = /home/foo.web/.tmp\n\nphp_admin_value[open_basedir] = \".:/data/disk/foo/distro:/data/disk/foo/static:/data/disk/foo/aegir:/data/disk/foo/platforms:/data/disk/foo/backup-exports:/home/foo.web/.tmp:/home/foo.web/.aws:/data/all:/data/disk/all:/data/conf:/var/second/foo:/mnt:/srv:/hdd:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php56:/opt/php70:/opt/php71:/opt/php72:/opt/php73:/opt/php74:/opt/php80:/opt/php81:/opt/php82:/opt/php83:/opt/php84:/opt/php85:/dev/urandom:/var/tmp/fpm:/var/www/phpcache/foo\"\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm56-pool-www.conf",
    "content": "[www56]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common-legacy.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n"
  },
  {
    "path": "aegir/conf/php/fpm70-pool-www.conf",
    "content": "[www70]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common-legacy.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm71-pool-www.conf",
    "content": "[www71]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common-legacy.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm72-pool-www.conf",
    "content": "[www72]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common-legacy.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm73-pool-www.conf",
    "content": "[www73]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common-legacy.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm74-pool-www.conf",
    "content": "[www74]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common-legacy.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm80-pool-www.conf",
    "content": "[www80]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm81-pool-www.conf",
    "content": "[www81]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm82-pool-www.conf",
    "content": "[www82]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm83-pool-www.conf",
    "content": "[www83]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm84-pool-www.conf",
    "content": "[www84]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/fpm85-pool-www.conf",
    "content": "[www85]\n\nprefix = /var/www/$pool\nuser = $pool\nlisten = /run/$pool.fpm.socket\n\ninclude = /opt/etc/fpm/fpm-pool-common.conf\n\npm.max_children = 8\nrequest_terminate_timeout = 180s\n\nphp_admin_value[sys_temp_dir] = \"/tmp\"\nphp_admin_value[upload_tmp_dir] = \"/tmp\"\nphp_admin_value[soap.wsdl_cache_dir] = \"/tmp\"\nphp_admin_value[session.save_path] = \"/tmp\"\n\nenv[TMP] = /tmp\nenv[TMPDIR] = /tmp\nenv[TEMP] = /tmp\n\nphp_admin_value[disable_functions] = \"passthru,disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\nphp_admin_value[newrelic.license] = \"\"\nphp_admin_value[newrelic.enabled] = \"false\"\n"
  },
  {
    "path": "aegir/conf/php/newrelic.ini",
    "content": "\n; New Relic\nextension=newrelic.so\n[newrelic]\nnewrelic.enabled=true\nnewrelic.license = \"REPLACE_WITH_REAL_KEY\"\nnewrelic.logfile = \"/var/log/newrelic/php_agent.log\"\nnewrelic.loglevel = \"error\"\nnewrelic.appname = \"Other Tasks\"\nnewrelic.daemon.logfile = \"/var/log/newrelic/newrelic-daemon.log\"\nnewrelic.daemon.loglevel = \"error\"\nnewrelic.daemon.pidfile = \"/run/newrelic-daemon.pid\"\nnewrelic.daemon.dont_launch = 3\n"
  },
  {
    "path": "aegir/conf/php/php56-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\n; display_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.hash_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
url_rewriter.tags\n;   Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n;   Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n;   Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; Allow ASP-style <% %> tags.\n; http://php.net/asp-tags\nasp_tags = Off\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. 
If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who write portable scripts should not depend on this ini\n;   directive. 
Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use \"mb_output_handler\" together with \"ob_iconv_handler\",\n;   and you cannot use \"ob_gzhandler\" together with \"zlib.output_compression\".\n; Note: output_handler must be empty if zlib.output_compression is set 'On'!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to the nature of compression. PHP\n;   outputs chunks that are a few hundred bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  
Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\nserialize_precision = 17\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. 
Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  
It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long-running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (395M)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. 
That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except 
for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings to. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. In error_log information about the source is\n; added. The default is 1024; a value of 0 applies no maximum length at all.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on\n; the same line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This only has an effect in a debug compile, and only if\n; error reporting includes E_WARNING in the allowed list.\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/track-errors\ntrack_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_56\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on productions servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\";\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a character encoding using\n; the Content-type: header.  To disable sending of the charset, simply\n; set it to be empty.\n;\n; PHP's built-in default is text/html\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; mbstring or iconv output handler is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n; Always populate the $HTTP_RAW_POST_DATA variable. 
PHP's default behavior is\n; to disable this feature and it will be removed in a future version.\n; If post reading is disabled through enable_post_data_reading,\n; $HTTP_RAW_POST_DATA is *NOT* populated.\n; http://php.net/always-populate-raw-post-data\nalways_populate_raw_post_data = -1\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php56/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php56/lib/php/extensions/no-debug-non-zts-20131226/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  
You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends a Status: header that\n; is supported by Apache. 
When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename.extension\n;\n; For example, on Windows:\n;\n;   extension=msql.dll\n;\n; ... or under UNIX:\n;\n;   extension=msql.so\n;\n; ... or with a path:\n;\n;   extension=/path/to/extension/msql.so\n;\n; If you only provide the name of the extension, PHP will look for it in its\n; default extension directory.\n;\n; Windows Extensions\n; Note that ODBC support is built in, so no dll is needed for it.\n; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5)\n; extension folders as well as the separate PECL DLL download (PHP 5).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=php_bz2.dll\n;extension=php_curl.dll\n;extension=php_fileinfo.dll\n;extension=php_gd2.dll\n;extension=php_gettext.dll\n;extension=php_gmp.dll\n;extension=php_intl.dll\n;extension=php_imap.dll\n;extension=php_interbase.dll\n;extension=php_ldap.dll\n;extension=php_mbstring.dll\n;extension=php_exif.dll      ; Must be after mbstring as it depends on it\n;extension=php_mysql.dll\n;extension=php_mysqli.dll\n;extension=php_oci8_12c.dll  ; Use with Oracle Database 12c Instant 
Client\n;extension=php_openssl.dll\n;extension=php_pdo_firebird.dll\n;extension=php_pdo_mysql.dll\n;extension=php_pdo_oci.dll\n;extension=php_pdo_odbc.dll\n;extension=php_pdo_pgsql.dll\n;extension=php_pdo_sqlite.dll\n;extension=php_pgsql.dll\n;extension=php_pspell.dll\n;extension=php_shmop.dll\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=php_snmp.dll\n\n;extension=php_soap.dll\n;extension=php_sockets.dll\n;extension=php_sqlite3.dll\n;extension=php_sybase_ct.dll\n;extension=php_tidy.dll\n;extension=php_xmlrpc.dll\n;extension=php_xsl.dll\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global 
output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n\n[sqlite]\n; http://php.net/sqlite.assoc-case\n;sqlite.assoc_case = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[SQL]\n; http://php.net/sql.safe-mode\nsql.safe_mode = Off\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQL]\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysql.allow_local_infile\nmysql.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysql.allow-persistent\nmysql.allow_persistent = On\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysql.cache_size\nmysql.cache_size = 2000\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysql.max-persistent\nmysql.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/mysql.max-links\nmysql.max_links = -1\n\n; Default port number for mysql_connect().  If unset, mysql_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysql.default-port\nmysql.default_port =\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysql.default-socket\nmysql.default_socket =\n\n; Default host for mysql_connect() (doesn't apply in safe mode).\n; http://php.net/mysql.default-host\nmysql.default_host =\n\n; Default user for mysql_connect() (doesn't apply in safe mode).\n; http://php.net/mysql.default-user\nmysql.default_user =\n\n; Default password for mysql_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysql.default_password\")\n; and reveal this password!  And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysql.default-password\nmysql.default_password =\n\n; Maximum time (in seconds) for connect timeout. -1 means no limit\n; http://php.net/mysql.connect-timeout\nmysql.connect_timeout = 60\n\n; Trace mode. When trace_mode is active (=On), warnings for table/index scans and\n; SQL-Errors will be displayed.\n; http://php.net/mysql.trace-mode\nmysql.trace_mode = Off\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysqli.cache_size\nmysqli.cache_size = 2000\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. 
Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overheads.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  
-1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backend Notice messages or not.\n; Notice message logging requires a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backend Notice messages or not.\n; Unless pgsql.ignore_notice=0, the module cannot log Notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[Sybase-CT]\n; Allow or prevent persistent links.\n; http://php.net/sybct.allow-persistent\nsybct.allow_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/sybct.max-persistent\nsybct.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/sybct.max-links\nsybct.max_links = -1\n\n; Minimum server message severity to display.\n; http://php.net/sybct.min-server-severity\nsybct.min_server_severity = 10\n\n; Minimum client message severity to display.\n; http://php.net/sybct.min-client-severity\nsybct.min_client_severity = 10\n\n; Set per-context timeout\n; http://php.net/sybct.timeout\n;sybct.timeout=\n\n;sybct.packet_size\n\n; The maximum time in seconds to wait for a connection attempt to succeed before returning failure.\n; Default: one minute\n;sybct.login_timeout=\n\n; The name of the host you claim to be connecting from, for display by sp_who.\n; Default: none\n;sybct.hostname=\n\n; Allows you to define how often deadlocks are to be retried. -1 means \"forever\".\n; Default: 0\n;sybct.deadlock_retry_count=\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. 
Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept uninitialized session ID and regenerate\n; session ID if browser sends uninitialized session ID. Strict mode protects\n; applications from session fixation via session adoption vulnerability. It is\n; disabled by default for maximum compatibility, but enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. 
Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; How many bytes to read from the file.\n; http://php.net/session.entropy-length\nsession.entropy_length = 32\n\n; Specified here to create the session id.\n; http://php.net/session.entropy-file\n; Defaults to /dev/urandom\n; On systems that don't have /dev/urandom but do have /dev/arandom, this will default to /dev/arandom\n; If neither are found at compile time, the default is no entropy file.\n; On windows, setting the entropy_length setting will activate the\n; Windows random source (using the CryptoAPI)\nsession.entropy_file = /dev/urandom\n\n; Set to {nocache,private,public,} to 
determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may access your site with the same session ID,\n;   always using the URL stored in the browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Select a hash function for use in generating session ids.\n; Possible Values\n;   0  (MD5 128 bits)\n;   1  (SHA-1 160 bits)\n; This option may also be set to the name of any hash function supported by\n; the hash extension. A list of available hashes is returned by the hash_algos()\n; function.\n; http://php.net/session.hash-function\nsession.hash_function = 0\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.hash_bits_per_character = 5\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; form/fieldset are special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs.  
If you want XHTML conformity, remove the form entry.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n; Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; http://php.net/url-rewriter.tags\nurl_rewriter.tags = \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production 
Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n[MSSQL]\n; Allow or prevent persistent links.\nmssql.allow_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\nmssql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\nmssql.max_links = -1\n\n; Minimum error severity to display.\nmssql.min_error_severity = 10\n\n; Minimum message severity to display.\nmssql.min_message_severity = 10\n\n; Compatibility mode with old versions of PHP 3.0.\nmssql.compatibility_mode = Off\n\n; Connect timeout\n;mssql.connect_timeout = 5\n\n; Query timeout\n;mssql.timeout = 60\n\n; Valid range 0 - 2147483647.  Default = 4096.\n;mssql.textlimit = 4096\n\n; Valid range 0 - 2147483647.  Default = 4096.\n;mssql.textsize = 4096\n\n; Limits the number of records in each batch.  0 = all records in one batch.\n;mssql.batchsize = 0\n\n; Specify how datetime and datetim4 columns are returned\n; On => Returns data converted to SQL server settings\n; Off => Returns values as YYYY-MM-DD hh:mm:ss\n;mssql.datetimeconvert = On\n\n; Use NT authentication when connecting to the server\nmssql.secure_connection = Off\n\n; Specify max number of processes. 
-1 = library default\n; msdlib defaults to 25\n; FreeTDS defaults to 4096\n;mssql.max_procs = -1\n\n; Specify client character set.\n; If empty or not set the client charset from freetds.conf is used\n; This is only used when compiled with FreeTDS\n;mssql.charset = \"ISO-8859-1\"\n\n[Assertion]\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Issue a PHP warning for each failed assertion.\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case sensitive\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encodings cannot work as internal encoding. 
(e.g. SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. 
Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 0\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables the WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where the SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds a cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[mcrypt]\n; For more information about mcrypt settings see http://php.net/mcrypt-module-open\n\n; Directory where to load mcrypt algorithms\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.algorithms_dir=\n\n; Directory where to load mcrypt modes\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.modes_dir=\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=0\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=64\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=4\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 100000 are allowed.\n;opcache.max_accelerated_files=2000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. 
\"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If disabled, PHPDoc comments are not loaded from SHM, so \"Doc Comments\"\n; may be always stored (save_comments=1), but not loaded by applications\n; that don't need them anyway.\n;opcache.load_comments=1\n\n; If enabled, a fast shutdown sequence is used for the accelerated code\n;opcache.fast_shutdown=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. 
Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. 
If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_5.6.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n\n"
  },
  {
    "path": "aegir/conf/php/php56-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php56-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php56-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php56\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php56/sbin/php-fpm\nphp_fpm_CONF=/opt/php56/etc/php56-fpm.conf\nphp_fpm_PID=/run/php56-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php56/etc/php56.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php56-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php56-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php56-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php56-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php56-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php56-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php56-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php56-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php56-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php56-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php56-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php56). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php56 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php56/var\n; Default Value: none\npid = /run/php56-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php56/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php56-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php56-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater than or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php56/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php56.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.hash_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
url_rewriter.tags\n;   Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n;   Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n;   Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; Allow ASP-style <% %> tags.\n; http://php.net/asp-tags\nasp_tags = Off\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. 
If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. 
Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  
Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\nserialize_precision = 17\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php56:/dev/urandom\"\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (395M here)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether, and where, PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application, such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. 
In error_log, information about the source is\n; added. The default is 1024; a value of 0 disables the length limit entirely.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on\n; the same line unless ignore_repeated_source is set to true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On, you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This only has an effect in a debug compile, and only if\n; error_reporting includes E_WARNING in the allowed list.\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should,\n; however, be disabled on production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/track-errors\ntrack_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML.\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_56\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP-generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These variables are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory-efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a character encoding using\n; the Content-type: header.  To disable sending of the charset, simply\n; set it to be empty.\n;\n; PHP's built-in default is text/html\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; mbstring or iconv output handler is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n; Always populate the $HTTP_RAW_POST_DATA variable. 
PHP's default behavior is\n; to disable this feature and it will be removed in a future version.\n; If post reading is disabled through enable_post_data_reading,\n; $HTTP_RAW_POST_DATA is *NOT* populated.\n; http://php.net/always-populate-raw-post-data\nalways_populate_raw_post_data = -1\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php56/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php56/lib/php/extensions/no-debug-non-zts-20131226/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  
You can\n; turn it off here AT YOUR OWN RISK.\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; If cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; If cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002).\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. 
When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename.extension\n;\n; For example, on Windows:\n;\n;   extension=msql.dll\n;\n; ... or under UNIX:\n;\n;   extension=msql.so\n;\n; ... or with a path:\n;\n;   extension=/path/to/extension/msql.so\n;\n; If you only provide the name of the extension, PHP will look for it in its\n; default extension directory.\n;\n; Windows Extensions\n; Note that ODBC support is built in, so no dll is needed for it.\n; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5)\n; extension folders as well as the separate PECL DLL download (PHP 5).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=php_bz2.dll\n;extension=php_curl.dll\n;extension=php_fileinfo.dll\n;extension=php_gd2.dll\n;extension=php_gettext.dll\n;extension=php_gmp.dll\n;extension=php_intl.dll\n;extension=php_imap.dll\n;extension=php_interbase.dll\n;extension=php_ldap.dll\n;extension=php_mbstring.dll\n;extension=php_exif.dll      ; Must be after mbstring as it depends on it\n;extension=php_mysql.dll\n;extension=php_mysqli.dll\n;extension=php_oci8_12c.dll  ; Use with Oracle Database 12c Instant 
Client\n;extension=php_openssl.dll\n;extension=php_pdo_firebird.dll\n;extension=php_pdo_mysql.dll\n;extension=php_pdo_oci.dll\n;extension=php_pdo_odbc.dll\n;extension=php_pdo_pgsql.dll\n;extension=php_pdo_sqlite.dll\n;extension=php_pgsql.dll\n;extension=php_pspell.dll\n;extension=php_shmop.dll\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=php_snmp.dll\n\n;extension=php_soap.dll\n;extension=php_sockets.dll\n;extension=php_sqlite3.dll\n;extension=php_sybase_ct.dll\n;extension=php_tidy.dll\n;extension=php_xmlrpc.dll\n;extension=php_xsl.dll\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global 
output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n\n[sqlite]\n; http://php.net/sqlite.assoc-case\n;sqlite.assoc_case = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[SQL]\n; http://php.net/sql.safe-mode\nsql.safe_mode = Off\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQL]\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysql.allow_local_infile\nmysql.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysql.allow-persistent\nmysql.allow_persistent = On\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysql.cache_size\nmysql.cache_size = 2000\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysql.max-persistent\nmysql.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/mysql.max-links\nmysql.max_links = -1\n\n; Default port number for mysql_connect().  If unset, mysql_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysql.default-port\nmysql.default_port =\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysql.default-socket\nmysql.default_socket =\n\n; Default host for mysql_connect() (doesn't apply in safe mode).\n; http://php.net/mysql.default-host\nmysql.default_host =\n\n; Default user for mysql_connect() (doesn't apply in safe mode).\n; http://php.net/mysql.default-user\nmysql.default_user =\n\n; Default password for mysql_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysql.default_password\")'\n; and reveal this password!  And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysql.default-password\nmysql.default_password =\n\n; Maximum time (in seconds) for connect timeout. -1 means no limit.\n; http://php.net/mysql.connect-timeout\nmysql.connect_timeout = 60\n\n; Trace mode. When trace_mode is active (=On), warnings for table/index scans and\n; SQL-Errors will be displayed.\n; http://php.net/mysql.trace-mode\nmysql.trace_mode = Off\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysqli.cache_size\nmysqli.cache_size = 2000\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")'\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. 
Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user-chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto reset feature requires a little overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  
-1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend notice messages.\n; Notice message logging requires a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[Sybase-CT]\n; Allow or prevent persistent links.\n; http://php.net/sybct.allow-persistent\nsybct.allow_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/sybct.max-persistent\nsybct.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/sybct.max-links\nsybct.max_links = -1\n\n; Minimum server message severity to display.\n; http://php.net/sybct.min-server-severity\nsybct.min_server_severity = 10\n\n; Minimum client message severity to display.\n; http://php.net/sybct.min-client-severity\nsybct.min_client_severity = 10\n\n; Set per-context timeout\n; http://php.net/sybct.timeout\n;sybct.timeout=\n\n;sybct.packet_size\n\n; The maximum time in seconds to wait for a connection attempt to succeed before returning failure.\n; Default: one minute\n;sybct.login_timeout=\n\n; The name of the host you claim to be connecting from, for display by sp_who.\n; Default: none\n;sybct.hostname=\n\n; Allows you to define how often deadlocks are to be retried. -1 means \"forever\".\n; Default: 0\n;sybct.deadlock_retry_count=\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. 
Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, this will use subdirectories N levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage.\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and regenerates\n; the session ID if the browser sends one. Strict mode protects\n; applications from session fixation via a session adoption vulnerability. It is\n; disabled by default for maximum compatibility, but enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. 
Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; How many bytes to read from the file.\n; http://php.net/session.entropy-length\nsession.entropy_length = 32\n\n; Specified here to create the session id.\n; http://php.net/session.entropy-file\n; Defaults to /dev/urandom\n; On systems that don't have /dev/urandom but do have /dev/arandom, this will default to /dev/arandom\n; If neither are found at compile time, the default is no entropy file.\n; On windows, setting the entropy_length setting will activate the\n; Windows random source (using the CryptoAPI)\nsession.entropy_file = /dev/urandom\n\n; Set to {nocache,private,public,} to 
determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via. email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Select a hash function for use in generating session ids.\n; Possible Values\n;   0  (MD5 128 bits)\n;   1  (SHA-1 160 bits)\n; This option may also be set to the name of any hash function supported by\n; the hash extension. A list of available hashes is returned by the hash_algos()\n; function.\n; http://php.net/session.hash-function\nsession.hash_function = 0\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.hash_bits_per_character = 5\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; form/fieldset are special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs.  
If you want XHTML conformity, remove the form entry.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n; Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; http://php.net/url-rewriter.tags\nurl_rewriter.tags = \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production 
Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n[MSSQL]\n; Allow or prevent persistent links.\nmssql.allow_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\nmssql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\nmssql.max_links = -1\n\n; Minimum error severity to display.\nmssql.min_error_severity = 10\n\n; Minimum message severity to display.\nmssql.min_message_severity = 10\n\n; Compatibility mode with old versions of PHP 3.0.\nmssql.compatibility_mode = Off\n\n; Connect timeout\n;mssql.connect_timeout = 5\n\n; Query timeout\n;mssql.timeout = 60\n\n; Valid range 0 - 2147483647.  Default = 4096.\n;mssql.textlimit = 4096\n\n; Valid range 0 - 2147483647.  Default = 4096.\n;mssql.textsize = 4096\n\n; Limits the number of records in each batch.  0 = all records in one batch.\n;mssql.batchsize = 0\n\n; Specify how datetime and datetim4 columns are returned\n; On => Returns data converted to SQL server settings\n; Off => Returns values as YYYY-MM-DD hh:mm:ss\n;mssql.datetimeconvert = On\n\n; Use NT authentication when connecting to the server\nmssql.secure_connection = Off\n\n; Specify max number of processes. 
-1 = library default\n; msdlib defaults to 25\n; FreeTDS defaults to 4096\n;mssql.max_procs = -1\n\n; Specify client character set.\n; If empty or not set the client charset from freetds.conf is used\n; This is only used when compiled with FreeTDS\n;mssql.charset = \"ISO-8859-1\"\n\n[Assertion]\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Issue a PHP warning for each failed assertion.\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case sensitive\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. 
(e.g. SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. 
Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 0\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of second while cached file will be used\n; instead of original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[mcrypt]\n; For more information about mcrypt settings see http://php.net/mcrypt-module-open\n\n; Directory where to load mcrypt algorithms\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.algorithms_dir=\n\n; Directory where to load mcrypt modes\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.modes_dir=\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=0\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=64\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=4\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 100000 are allowed.\n;opcache.max_accelerated_files=2000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. 
\"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If disabled, PHPDoc comments are not loaded from SHM, so \"Doc Comments\"\n; may be always stored (save_comments=1), but not loaded by applications\n; that don't need them anyway.\n;opcache.load_comments=1\n\n; If enabled, a fast shutdown sequence is used for the accelerated code\n;opcache.fast_shutdown=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. 
Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. 
If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_5.6.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php70-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.hash_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
url_rewriter.tags\n;   Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n;   Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n;   Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. 
If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. 
Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  
Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as a parameter) if the unserializer finds an undefined class\n; which should be instantiated. A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry if you really want to implement such a\n; callback function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\nserialize_precision = 17\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. 
Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds, for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are examples of such\n; encodings.  To use this feature, the mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows setting the default encoding for scripts.  This value will be used\n; unless the \"declare(encoding=...)\" directive appears at the top of the script.\n; Only takes effect if zend.multibyte is enabled.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  
It is not a security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (the PHP default is 128M)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. 
That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except 
for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether or not, and where, PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application, such as database usernames and passwords, or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set the maximum length of log_errors. In error_log, information about the source is\n; added. The default is 1024; setting it to 0 removes the length limit entirely.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on the same\n; line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This only has an effect in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list.\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/track-errors\ntrack_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_70\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP-generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as a separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; mbstring or iconv output handler is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php70/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternative is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username, used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php70/lib/php/extensions/no-debug-non-zts-20151012/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. 
When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename.extension\n;\n; For example, on Windows:\n;\n;   extension=msql.dll\n;\n; ... or under UNIX:\n;\n;   extension=msql.so\n;\n; ... or with a path:\n;\n;   extension=/path/to/extension/msql.so\n;\n; If you only provide the name of the extension, PHP will look for it in its\n; default extension directory.\n;\n; Windows Extensions\n; Note that ODBC support is built in, so no dll is needed for it.\n; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=php_bz2.dll\n;extension=php_curl.dll\n;extension=php_fileinfo.dll\n;extension=php_gd2.dll\n;extension=php_gettext.dll\n;extension=php_gmp.dll\n;extension=php_intl.dll\n;extension=php_imap.dll\n;extension=php_interbase.dll\n;extension=php_ldap.dll\n;extension=php_mbstring.dll\n;extension=php_exif.dll      ; Must be after mbstring as it depends on it\n;extension=php_mysqli.dll\n;extension=php_oci8_12c.dll  ; Use with Oracle Database 12c Instant 
Client\n;extension=php_openssl.dll\n;extension=php_pdo_firebird.dll\n;extension=php_pdo_mysql.dll\n;extension=php_pdo_oci.dll\n;extension=php_pdo_odbc.dll\n;extension=php_pdo_pgsql.dll\n;extension=php_pdo_sqlite.dll\n;extension=php_pgsql.dll\n;extension=php_shmop.dll\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=php_snmp.dll\n\n;extension=php_soap.dll\n;extension=php_sockets.dll\n;extension=php_sqlite3.dll\n;extension=php_tidy.dll\n;extension=php_xmlrpc.dll\n;extension=php_xsl.dll\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or 
output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[SQL]\n; http://php.net/sql.safe-mode\nsql.safe_mode = Off\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  
-1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysqli.cache_size\nmysqli.cache_size = 2000\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. 
Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overheads.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  
-1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backends Notice message or not.\n; Notice message logging require a little overheads.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backends Notice message or not.\n; Unless pgsql.ignore_notice=0, module cannot log notice message.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. 
Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept uninitialized session ID and regenerate\n; session ID if browser sends uninitialized session ID. Strict mode protects\n; applications from session fixation via session adoption vulnerability. It is\n; disabled by default for maximum compatibility, but enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Handler used to serialize data.  
php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; How many bytes to read from the file.\n; http://php.net/session.entropy-length\nsession.entropy_length = 32\n\n; Specified here to create the session id.\n; http://php.net/session.entropy-file\n; Defaults to /dev/urandom\n; On systems that don't have /dev/urandom but do have /dev/arandom, this will default to /dev/arandom\n; If neither are found at compile time, the default is no entropy file.\n; On windows, setting the entropy_length setting will activate the\n; Windows random source (using the CryptoAPI)\nsession.entropy_file = /dev/urandom\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via 
email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Select a hash function for use in generating session ids.\n; Possible Values\n;   0  (MD5 128 bits)\n;   1  (SHA-1 160 bits)\n; This option may also be set to the name of any hash function supported by\n; the hash extension. A list of available hashes is returned by the hash_algos()\n; function.\n; http://php.net/session.hash-function\nsession.hash_function = 0\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.hash_bits_per_character = 5\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; form/fieldset are special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs.  
If you want XHTML conformity, remove the form entry.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n; Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; http://php.net/url-rewriter.tags\nurl_rewriter.tags = \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production 
Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  
Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. 
Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 0\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds during which the cached file will\n; be used instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[mcrypt]\n; For more information about mcrypt settings see http://php.net/mcrypt-module-open\n\n; Directory where to load mcrypt algorithms\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.algorithms_dir=\n\n; Directory where to load mcrypt modes\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.modes_dir=\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=0\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=64\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=4\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 100000 are allowed.\n;opcache.max_accelerated_files=2000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. 
\"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, a fast shutdown sequence is used for the accelerated code\n;opcache.fast_shutdown=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. 
Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.0.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php70-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php70-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php70-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php70\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php70/sbin/php-fpm\nphp_fpm_CONF=/opt/php70/etc/php70-fpm.conf\nphp_fpm_PID=/run/php70-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php70/etc/php70.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php70-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php70-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php70-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php70-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php70-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php70-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php70-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php70-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php70-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php70-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php70-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php70). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php70 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php70/var\n; Default Value: none\npid = /run/php70-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php70/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php70-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php70-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php70/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php70.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.hash_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
url_rewriter.tags\n;   Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n;   Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n;   Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. 
If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. 
Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  
Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\nserialize_precision = 17\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php70:/dev/urandom\"\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmo,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (395MB)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. 
In error_log information about the source is\n; added. The default is 1024 and 0 allows to not apply any maximum length at all.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This has only effect in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/track-errors\ntrack_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_70\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on productions servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\";\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; mbstring or iconv output handler is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php70/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternate is to use the
; cgi.force_redirect configuration below
; http://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username used only
; if nonempty.
; http://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; http://php.net/extension-dir
; extension_dir = "./"
; On windows:
; extension_dir = "ext"
extension_dir = "/opt/php70/lib/php/extensions/no-debug-non-zts-20151012/"

; Directory where the temporary files should be placed.
; Defaults to the system default (see sys_get_temp_dir)
sys_temp_dir = "/tmp"

; Whether or not to enable the dl() function.  The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; http://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers.  Left undefined, PHP turns this on by default.  You can
; turn it off here AT YOUR OWN RISK
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; http://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; if cgi.nph is enabled it will force cgi to always send Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution.  Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; http://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. 
When this option is set to 1, PHP will send
; an RFC2616 compliant header.
; Default is zero.
; http://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

;;;;;;;;;;;;;;;;
; File Uploads ;
;;;;;;;;;;;;;;;;

; Whether to allow HTTP file uploads.
; http://php.net/file-uploads
file_uploads = On

; Temporary directory for HTTP uploaded files (will use system default if not
; specified).
; http://php.net/upload-tmp-dir
upload_tmp_dir = /tmp

; Maximum allowed size for uploaded files.
; http://php.net/upload-max-filesize
upload_max_filesize = 325M

; Maximum number of files that can be uploaded via a single request
max_file_uploads = 50

;;;;;;;;;;;;;;;;;;
; Fopen wrappers ;
;;;;;;;;;;;;;;;;;;

; Whether to allow the treatment of URLs (like http:// or ftp://) as files.
; http://php.net/allow-url-fopen
allow_url_fopen = On

; Whether to allow include/require to open URLs (like http:// or ftp://) as files.
; http://php.net/allow-url-include
allow_url_include = Off

; Define the anonymous ftp password (your email address). PHP's default setting
; for this is empty.
; http://php.net/from
;from="john@doe.com"

; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename.extension\n;\n; For example, on Windows:\n;\n;   extension=msql.dll\n;\n; ... or under UNIX:\n;\n;   extension=msql.so\n;\n; ... or with a path:\n;\n;   extension=/path/to/extension/msql.so\n;\n; If you only provide the name of the extension, PHP will look for it in its\n; default extension directory.\n;\n; Windows Extensions\n; Note that ODBC support is built in, so no dll is needed for it.\n; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=php_bz2.dll\n;extension=php_curl.dll\n;extension=php_fileinfo.dll\n;extension=php_gd2.dll\n;extension=php_gettext.dll\n;extension=php_gmp.dll\n;extension=php_intl.dll\n;extension=php_imap.dll\n;extension=php_interbase.dll\n;extension=php_ldap.dll\n;extension=php_mbstring.dll\n;extension=php_exif.dll      ; Must be after mbstring as it depends on it\n;extension=php_mysqli.dll\n;extension=php_oci8_12c.dll  ; Use with Oracle Database 12c Instant 
Client
;extension=php_openssl.dll
;extension=php_pdo_firebird.dll
;extension=php_pdo_mysql.dll
;extension=php_pdo_oci.dll
;extension=php_pdo_odbc.dll
;extension=php_pdo_pgsql.dll
;extension=php_pdo_sqlite.dll
;extension=php_pgsql.dll
;extension=php_shmop.dll

; The MIBS data available in the PHP distribution must be installed.
; See http://www.php.net/manual/en/snmp.installation.php
;extension=php_snmp.dll

;extension=php_soap.dll
;extension=php_sockets.dll
;extension=php_sqlite3.dll
;extension=php_tidy.dll
;extension=php_xmlrpc.dll
;extension=php_xsl.dll

;;;;;;;;;;;;;;;;;;;
; Module Settings ;
;;;;;;;;;;;;;;;;;;;

[CLI Server]
; Whether the CLI web server uses ANSI color coding in its terminal output.
cli_server.color = On

[Date]
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
date.timezone = "UTC"

; http://php.net/date.default-latitude
;date.default_latitude = 31.7667

; http://php.net/date.default-longitude
;date.default_longitude = 35.2333

; http://php.net/date.sunrise-zenith
;date.sunrise_zenith = 90.583333

; http://php.net/date.sunset-zenith
;date.sunset_zenith = 90.583333

[filter]
; http://php.net/filter.default
;filter.default = unsafe_raw

; http://php.net/filter.default-flags
;filter.default_flags =

[iconv]
; Use of this INI entry is deprecated, use global input_encoding instead.
; If empty, default_charset or input_encoding or iconv.input_encoding is used.
; The precedence is: default_charset < input_encoding < iconv.input_encoding
;iconv.input_encoding =

; Use of this INI entry is deprecated, use global internal_encoding instead.
; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.
; The precedence is: default_charset < internal_encoding < iconv.internal_encoding
;iconv.internal_encoding =

; Use of this INI entry is deprecated, use global output_encoding instead.
; If empty, default_charset or 
output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[SQL]\n; http://php.net/sql.safe-mode\nsql.safe_mode = Off\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  
-1 means no limit.
; http://php.net/mysqli.max-links
mysqli.max_links = -1

; If mysqlnd is used: Number of cache slots for the internal result set cache
; http://php.net/mysqli.cache_size
mysqli.cache_size = 2000

; Default port number for mysqli_connect().  If unset, mysqli_connect() will use
; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the
; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look
; at MYSQL_PORT.
; http://php.net/mysqli.default-port
mysqli.default_port = 3306

; Default socket name for local MySQL connects.  If empty, uses the built-in
; MySQL defaults.
; http://php.net/mysqli.default-socket
mysqli.default_socket =

; Default host for mysqli_connect() (doesn't apply in safe mode).
; http://php.net/mysqli.default-host
mysqli.default_host =

; Default user for mysqli_connect() (doesn't apply in safe mode).
; http://php.net/mysqli.default-user
mysqli.default_user =

; Default password for mysqli_connect() (doesn't apply in safe mode).
; Note that it is generally a *bad* idea to store passwords in this file.
; *Any* user with PHP access can run 'echo get_cfg_var("mysqli.default_pw");'
; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. 
Using -1 disables
; pings completely.
; http://php.net/oci8.ping-interval
;oci8.ping_interval = 60

; Connection: Set this to a user chosen connection class to be used
; for all pooled server requests with Oracle 11g Database Resident
; Connection Pooling (DRCP).  To use DRCP, this value should be set to
; the same string for all web servers running the same application,
; the database pool must be configured, and the connection string must
; specify to use a pooled server.
;oci8.connection_class =

; High Availability: Using On lets PHP receive Fast Application
; Notification (FAN) events generated when a database node fails. The
; database must also be configured to post FAN events.
;oci8.events = Off

; Tuning: This option enables statement caching, and specifies how
; many statements to cache. Using 0 disables statement caching.
; http://php.net/oci8.statement-cache-size
;oci8.statement_cache_size = 20

; Tuning: Enables statement prefetching and sets the default number of
; rows that will be fetched automatically after statement execution.
; http://php.net/oci8.default-prefetch
;oci8.default_prefetch = 100

; Compatibility. Using On means oci_close() will not close
; oci_connect() and oci_new_connect() connections.
; http://php.net/oci8.old-oci-close-semantics
;oci8.old_oci_close_semantics = Off

[PostgreSQL]
; Allow or prevent persistent links.
; http://php.net/pgsql.allow-persistent
pgsql.allow_persistent = On

; Detect broken persistent links always with pg_pconnect().
; The auto reset feature requires a little overhead.
; http://php.net/pgsql.auto-reset-persistent
pgsql.auto_reset_persistent = Off

; Maximum number of persistent links.  -1 means no limit.
; http://php.net/pgsql.max-persistent
pgsql.max_persistent = -1

; Maximum number of links (persistent+non persistent).  
-1 means no limit.
; http://php.net/pgsql.max-links
pgsql.max_links = -1

; Whether to ignore PostgreSQL backend notice messages.
; Notice message logging requires a little overhead.
; http://php.net/pgsql.ignore-notice
pgsql.ignore_notice = 0

; Whether to log PostgreSQL backend notice messages.
; Unless pgsql.ignore_notice=0, the module cannot log notice messages.
; http://php.net/pgsql.log-notice
pgsql.log_notice = 0

[bcmath]
; Number of decimal digits for all bcmath functions.
; http://php.net/bcmath.scale
bcmath.scale = 0

[browscap]
; http://php.net/browscap
;browscap = extra/browscap.ini

[Session]
; Handler used to store/retrieve data.
; http://php.net/session.save-handler
session.save_handler = files

; Argument passed to save_handler.  In the case of files, this is the path
; where data files are stored. Note: Windows users have to change this
; variable in order to use PHP's session functions.
;
; The path can be defined as:
;
;     session.save_path = "N;/path"
;
; where N is an integer.  Instead of storing all the session files in
; /path, what this will do is use subdirectories N-levels deep, and
; store the session data in those directories.  This is useful if
; your OS has problems with many files in one directory, and is
; a more efficient layout for servers that handle many sessions.
;
; NOTE 1: PHP will not create this directory structure automatically.
;         You can use the script in the ext/session dir for that purpose.
; NOTE 2: See the section on garbage collection below if you choose to
;         use subdirectories for session storage
;
; The file storage module creates files using mode 600 by default.
; You can change that by using
;
;     session.save_path = "N;MODE;/path"
;
; where MODE is the octal representation of the mode. 
Note that this
; does not overwrite the process's umask.
; http://php.net/session.save-path
session.save_path = "/opt/tmp"

; Whether to use strict session mode.
; Strict session mode does not accept an uninitialized session ID, and
; regenerates the session ID if the browser sends an uninitialized one.
; Strict mode protects applications from session fixation via a session
; adoption vulnerability. It is disabled by default for maximum
; compatibility, but enabling it is encouraged.
; https://wiki.php.net/rfc/strict_sessions
session.use_strict_mode = 0

; Whether to use cookies.
; http://php.net/session.use-cookies
session.use_cookies = 1

; http://php.net/session.cookie-secure
;session.cookie_secure =

; This option forces PHP to fetch and use a cookie for storing and maintaining
; the session id. We encourage this operation as it's very helpful in combating
; session hijacking when not specifying and managing your own session id. It is
; not the be-all and end-all of session hijacking defense, but it's a good start.
; http://php.net/session.use-only-cookies
session.use_only_cookies = 1

; Name of the session (used as cookie name).
; http://php.net/session.name
session.name = PHPSESSID

; Initialize session on request startup.
; http://php.net/session.auto-start
session.auto_start = 0

; Lifetime in seconds of cookie or, if 0, until browser is restarted.
; http://php.net/session.cookie-lifetime
session.cookie_lifetime = 0

; The path for which the cookie is valid.
; http://php.net/session.cookie-path
session.cookie_path = /

; The domain for which the cookie is valid.
; http://php.net/session.cookie-domain
session.cookie_domain =

; Whether or not to add the httpOnly flag to the cookie, which makes it
; inaccessible to browser scripting languages such as JavaScript.
; http://php.net/session.cookie-httponly
session.cookie_httponly = 1

; Handler used to serialize data.  
php is the standard serializer of PHP.
; http://php.net/session.serialize-handler
session.serialize_handler = php

; Defines the probability that the 'garbage collection' process is started
; on every session initialization. The probability is calculated by using
; gc_probability/gc_divisor. Where session.gc_probability is the numerator
; and gc_divisor is the denominator in the equation. Setting this value to 1
; when the session.gc_divisor value is 100 will give you approximately a 1% chance
; the gc will run on any given request.
; Default Value: 1
; Development Value: 1
; Production Value: 1
; http://php.net/session.gc-probability
session.gc_probability = 1

; Defines the probability that the 'garbage collection' process is started on every
; session initialization. The probability is calculated by using the following equation:
; gc_probability/gc_divisor. Where session.gc_probability is the numerator and
; session.gc_divisor is the denominator in the equation. Setting this value to 1
; when the session.gc_divisor value is 100 will give you approximately a 1% chance
; the gc will run on any given request. Increasing this value to 1000 will give you
; a 0.1% chance the gc will run on any given request. For high volume production servers,
; this is a more efficient approach.
; Default Value: 100
; Development Value: 1000
; Production Value: 1000
; http://php.net/session.gc-divisor
session.gc_divisor = 1000

; After this number of seconds, stored data will be seen as 'garbage' and
; cleaned up by the garbage collection process.
; http://php.net/session.gc-maxlifetime
session.gc_maxlifetime = 1440

; NOTE: If you are using the subdirectory option for storing session files
;       (see session.save_path above), then garbage collection does *not*
;       happen automatically.  
You will need to do your own garbage
;       collection through a shell script, cron entry, or some other method.
;       For example, the following script is the equivalent of
;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):
;          find /path/to/sessions -cmin +24 -type f | xargs rm

; Check HTTP Referer to invalidate externally stored URLs containing ids.
; HTTP_REFERER has to contain this substring for the session to be
; considered as valid.
; http://php.net/session.referer-check
session.referer_check =

; How many bytes to read from the file.
; http://php.net/session.entropy-length
session.entropy_length = 32

; Specified here to create the session id.
; http://php.net/session.entropy-file
; Defaults to /dev/urandom
; On systems that don't have /dev/urandom but do have /dev/arandom, this will default to /dev/arandom
; If neither are found at compile time, the default is no entropy file.
; On windows, setting the entropy_length setting will activate the
; Windows random source (using the CryptoAPI)
session.entropy_file = /dev/urandom

; Set to {nocache,private,public,} to determine HTTP caching aspects
; or leave this empty to avoid sending anti-caching headers.
; http://php.net/session.cache-limiter
session.cache_limiter = nocache

; Document expires after n minutes.
; http://php.net/session.cache-expire
session.cache_expire = 180

; trans sid support is disabled by default.
; Use of trans sid may risk your users' security.
; Use this option with caution.
; - User may send a URL that contains an active session ID
;   to another person via 
email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Select a hash function for use in generating session ids.\n; Possible Values\n;   0  (MD5 128 bits)\n;   1  (SHA-1 160 bits)\n; This option may also be set to the name of any hash function supported by\n; the hash extension. A list of available hashes is returned by the hash_algos()\n; function.\n; http://php.net/session.hash-function\nsession.hash_function = 0\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.hash_bits_per_character = 5\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; form/fieldset are special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs.  
If you want XHTML conformity, remove the form entry.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=,fieldset=\"\n; Development Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; Production Value: \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n; http://php.net/url-rewriter.tags\nurl_rewriter.tags = \"a=href,area=href,frame=src,input=src,form=fakeentry\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production 
Value: 1
; http://php.net/session.upload-progress.min-freq
;session.upload_progress.min_freq = "1"

; Only write session data when session data is changed. Enabled by default.
; http://php.net/session.lazy-write
;session.lazy_write = On

[Assertion]
; Switch whether to compile assertions at all (to have no overhead at run-time)
; -1: Do not compile at all
;  0: Jump over assertion at run-time
;  1: Execute assertions
; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)
; Default Value: 1
; Development Value: 1
; Production Value: -1
; http://php.net/zend.assertions
zend.assertions = -1

; Assert(expr); active by default.
; http://php.net/assert.active
;assert.active = On

; Throw an AssertionError on failed assertions
; http://php.net/assert.exception
;assert.exception = On

; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)
; http://php.net/assert.warning
;assert.warning = On

; Don't bail out by default.
; http://php.net/assert.bail
;assert.bail = Off

; User-function to be called if an assertion fails.
; http://php.net/assert.callback
;assert.callback = 0

; Eval the expression with current error_reporting().  
Set to true if you want
; error_reporting(0) around the eval().
; http://php.net/assert.quiet-eval
;assert.quiet_eval = 0

[COM]
; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs
; http://php.net/com.typelib-file
;com.typelib_file =

; allow Distributed-COM calls
; http://php.net/com.allow-dcom
;com.allow_dcom = true

; autoregister constants of a component's typelib on com_load()
; http://php.net/com.autoregister-typelib
;com.autoregister_typelib = true

; register constants case sensitively
; http://php.net/com.autoregister-casesensitive
;com.autoregister_casesensitive = false

; show warnings on duplicate constant registrations
; http://php.net/com.autoregister-verbose
;com.autoregister_verbose = true

; The default character set code-page to use when passing strings to and from COM objects.
; Default: system ANSI code page
;com.code_page=

[mbstring]
; language for internal character representation.
; This affects mb_send_mail() and mbstring.detect_order.
; http://php.net/mbstring.language
;mbstring.language = Japanese

; Use of this INI entry is deprecated, use global internal_encoding instead.
; internal/script encoding.
; Some encodings cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)
; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.
; The precedence is: default_charset < internal_encoding < iconv.internal_encoding
;mbstring.internal_encoding =

; Use of this INI entry is deprecated, use global input_encoding instead.
; http input encoding.
; mbstring.encoding_translation = On is needed to use this setting.
; If empty, default_charset or input_encoding or mbstring.http_input is used.
; The precedence is: default_charset < input_encoding < mbstring.http_input
; http://php.net/mbstring.http-input
;mbstring.http_input =

; Use of this INI entry is deprecated, use global output_encoding instead.
; http output encoding.
; mb_output_handler must be registered as output buffer to function.
; If empty, default_charset or output_encoding or mbstring.http_output is used.
; The precedence is: default_charset < output_encoding < mbstring.http_output
; To use an output encoding conversion, mbstring's output handler must be set
; otherwise output encoding conversion cannot be performed.
; http://php.net/mbstring.http-output
;mbstring.http_output =

; enable automatic encoding translation according to
; mbstring.internal_encoding setting. Input chars are
; converted to internal encoding by setting this to On.
; Note: Do _not_ use automatic encoding translation for
;       portable libs/applications.
; http://php.net/mbstring.encoding-translation
;mbstring.encoding_translation = Off

; automatic encoding detection order.
; "auto" detect order is changed according to mbstring.language
; http://php.net/mbstring.detect-order
;mbstring.detect_order = auto

; substitute_character used when character cannot be converted
; one from another
; http://php.net/mbstring.substitute-character
;mbstring.substitute_character = none

; overload(replace) single byte functions by mbstring functions.
; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),
; etc. 
Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 0\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds during which the cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[mcrypt]\n; For more information about mcrypt settings see http://php.net/mcrypt-module-open\n\n; Directory where to load mcrypt algorithms\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.algorithms_dir=\n\n; Directory where to load mcrypt modes\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.modes_dir=\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=0\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=64\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=4\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 100000 are allowed.\n;opcache.max_accelerated_files=2000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. 
\"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, a fast shutdown sequence is used for the accelerated code\n;opcache.fast_shutdown=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. 
Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.0.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php71-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrites absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. 
PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (395MB)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether or not, and where, PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. 
In error_log information about the source is\n; added. The default is 1024 and 0 allows to not apply any maximum length at all.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This has only effect in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/track-errors\ntrack_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_71\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php71/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php71/lib/php/extensions/no-debug-non-zts-20160303/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n; http://php.net/cgi.discard-path\n;cgi.discard_path=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename.extension\n;\n; For example, on Windows:\n;\n;   extension=msql.dll\n;\n; ... or under UNIX:\n;\n;   extension=msql.so\n;\n; ... or with a path:\n;\n;   extension=/path/to/extension/msql.so\n;\n; If you only provide the name of the extension, PHP will look for it in its\n; default extension directory.\n;\n; Windows Extensions\n; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=php_bz2.dll\n;extension=php_curl.dll\n;extension=php_fileinfo.dll\n;extension=php_ftp.dll\n;extension=php_gd2.dll\n;extension=php_gettext.dll\n;extension=php_gmp.dll\n;extension=php_intl.dll\n;extension=php_imap.dll\n;extension=php_interbase.dll\n;extension=php_ldap.dll\n;extension=php_mbstring.dll\n;extension=php_exif.dll      ; Must be after mbstring as it depends on it\n;extension=php_mysqli.dll\n;extension=php_oci8_12c.dll  ; Use with Oracle Database 12c Instant 
Client\n;extension=php_odbc.dll\n;extension=php_openssl.dll\n;extension=php_pdo_firebird.dll\n;extension=php_pdo_mysql.dll\n;extension=php_pdo_oci.dll\n;extension=php_pdo_odbc.dll\n;extension=php_pdo_pgsql.dll\n;extension=php_pdo_sqlite.dll\n;extension=php_pgsql.dll\n;extension=php_shmop.dll\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=php_snmp.dll\n\n;extension=php_soap.dll\n;extension=php_sockets.dll\n;extension=php_sqlite3.dll\n;extension=php_tidy.dll\n;extension=php_xmlrpc.dll\n;extension=php_xsl.dll\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, 
default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[SQL]\n; http://php.net/sql.safe-mode\nsql.safe_mode = Off\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  
-1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysqli.cache_size\nmysqli.cache_size = 2000\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\");'\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n; http://php.net/mysqlnd.log_mask\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\n; http://php.net/mysqlnd.mempool_default_size\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n; http://php.net/mysqlnd.net_read_timeout\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n; http://php.net/mysqlnd.sha256_server_public_key\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. 
Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto-reset feature adds a small overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend Notice messages.\n; Notice message logging adds a small overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend Notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log Notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID and regenerates\n; the session ID if the browser sends an uninitialized one. Strict mode protects\n; applications from session fixation via the session adoption vulnerability. It is\n; disabled by default for maximum compatibility, but enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. 
Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via 
email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID\n;   using a URL stored in the browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value can be between 22 and 256.\n; Lengths shorter than the default are supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME, i.e. use ini_set()\n; The <form> tag is special. PHP will check the action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used as the allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; 
Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  
Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encodings cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when a character cannot be converted\n; from one encoding to another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc. are overloaded by mb_send_mail(), mb_ereg(),\n; etc. 
Possible values are 0, 1, 2, 4 or a combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the JPEG decoder to ignore warnings and try to create\n; a gd image. The warnings will then be displayed as notices.\n; Disabled by default.\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by the corresponding encode setting. When empty, mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables the WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where the SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds during which a cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[mcrypt]\n; For more information about mcrypt settings see http://php.net/mcrypt-module-open\n\n; Directory where to load mcrypt algorithms\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.algorithms_dir=\n\n; Directory where to load mcrypt modes\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.modes_dir=\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=0\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. 
\"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, a fast shutdown sequence is used for the accelerated code\n; Depending on the used Memory Manager this may cause some incompatibilities.\n;opcache.fast_shutdown=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. 
Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). 
Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.1.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php71-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php71-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php71-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php71\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php71/sbin/php-fpm\nphp_fpm_CONF=/opt/php71/etc/php71-fpm.conf\nphp_fpm_PID=/run/php71-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php71/etc/php71.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php71-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php71-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php71-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php71-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php71-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php71-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php71-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php71-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php71-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php71-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php71-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php71). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php71 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php71/var\n; Default Value: none\npid = /run/php71-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php71/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php71-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php71-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php71/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php71.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at their core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who write portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; The URL rewriter function rewrites URLs on the fly using the\n; output buffer. You can set target tags with this configuration.\n; The \"form\" tag is special. It will add a hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; The URL rewriter will not rewrite absolute URLs or forms by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry if you really want to implement such a\n; callback function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically selects the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php71:/dev/urandom\"\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds, for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are examples of such\n; encodings.  To use this feature, the mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows setting the default encoding for scripts.  This value will be used\n; unless a \"declare(encoding=...)\" directive appears at the top of the script.\n; Only takes effect if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  
It is not a security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. 
That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except 
for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether, and where, PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set the maximum length of log_errors. In error_log, information about the source is\n; added. The default is 1024; 0 removes the maximum length entirely.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on the same\n; line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore the source of a message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This only has an effect in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/track-errors\ntrack_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_71\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php71/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php71/lib/php/extensions/no-debug-non-zts-20160303/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n; http://php.net/cgi.discard-path\n;cgi.discard_path=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends a Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send an\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for a line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename.extension\n;\n; For example, on Windows:\n;\n;   extension=msql.dll\n;\n; ... or under UNIX:\n;\n;   extension=msql.so\n;\n; ... or with a path:\n;\n;   extension=/path/to/extension/msql.so\n;\n; If you only provide the name of the extension, PHP will look for it in its\n; default extension directory.\n;\n; Windows Extensions\n; Note that many DLL files are located in the extensions/ (PHP 4) ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=php_bz2.dll\n;extension=php_curl.dll\n;extension=php_fileinfo.dll\n;extension=php_ftp.dll\n;extension=php_gd2.dll\n;extension=php_gettext.dll\n;extension=php_gmp.dll\n;extension=php_intl.dll\n;extension=php_imap.dll\n;extension=php_interbase.dll\n;extension=php_ldap.dll\n;extension=php_mbstring.dll\n;extension=php_exif.dll      ; Must be after mbstring as it depends on it\n;extension=php_mysqli.dll\n;extension=php_oci8_12c.dll  ; Use with Oracle Database 12c Instant 
Client\n;extension=php_odbc.dll\n;extension=php_openssl.dll\n;extension=php_pdo_firebird.dll\n;extension=php_pdo_mysql.dll\n;extension=php_pdo_oci.dll\n;extension=php_pdo_odbc.dll\n;extension=php_pdo_pgsql.dll\n;extension=php_pdo_sqlite.dll\n;extension=php_pgsql.dll\n;extension=php_shmop.dll\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=php_snmp.dll\n\n;extension=php_soap.dll\n;extension=php_sockets.dll\n;extension=php_sqlite3.dll\n;extension=php_tidy.dll\n;extension=php_xmlrpc.dll\n;extension=php_xsl.dll\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, 
default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[SQL]\n; http://php.net/sql.safe-mode\nsql.safe_mode = Off\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  
-1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysqli.cache_size\nmysqli.cache_size = 2000\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysql_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysql_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n; http://php.net/mysqlnd.log_mask\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\n; http://php.net/mysqlnd.mempool_default_size\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n; http://php.net/mysqlnd.net_read_timeout\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n; http://php.net/mysqlnd.sha256_server_public_key\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. 
Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backends Notice message or not.\n; Notice message logging requires a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backends Notice message or not.\n; Unless pgsql.ignore_notice=0, module cannot log notice message.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept uninitialized session ID and regenerate\n; session ID if browser sends uninitialized session ID. Strict mode protects\n; applications from session fixation via session adoption vulnerability. It is\n; disabled by default for maximum compatibility, but enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. 
Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via 
email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value could be between 22 and 256.\n; Shorter length than default is supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags are special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; 
Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionException on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  
Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encodings cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. 
Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of second while cached file will be used\n; instead of original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[mcrypt]\n; For more information about mcrypt settings see http://php.net/mcrypt-module-open\n\n; Directory where to load mcrypt algorithms\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.algorithms_dir=\n\n; Directory where to load mcrypt modes\n; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt)\n;mcrypt.modes_dir=\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. 
\"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, a fast shutdown sequence is used for the accelerated code\n; Depending on the used Memory Manager this may cause some incompatibilities.\n;opcache.fast_shutdown=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. 
Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). 
Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. 
If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.1.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php72-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is a special tag. It will add a hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URLs nor forms by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. 
PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (395MB)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. 
In error_log information about the source is\n; added. The default is 1024 and 0 allows to not apply any maximum length at all.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This has only effect in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; This directive is DEPRECATED.\n; Default Value: Off\n; Development Value: Off\n; Production Value: Off\n; http://php.net/track-errors\n;track_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_72\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php72/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php72/lib/php/extensions/no-debug-non-zts-20170718/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n; http://php.net/cgi.discard-path\n;cgi.discard_path=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=fileinfo\n;extension=gd2\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=interbase\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sqlite3\n;extension=tidy\n;extension=xmlrpc\n;extension=xsl\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, 
default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. 
Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. 
Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  
-1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysqli.cache_size\nmysqli.cache_size = 2000\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var("mysqli.default_pw")'\n; and reveal this password!  And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n; http://php.net/mysqlnd.log_mask\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\n; http://php.net/mysqlnd.mempool_default_size\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; 
http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n; http://php.net/mysqlnd.net_read_timeout\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n; http://php.net/mysqlnd.sha256_server_public_key\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. 
The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto-reset feature adds a little overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend Notice messages.\n; Notice message logging adds a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend Notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log Notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  
In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept uninitialized session ID and regenerate\n; session ID if browser sends uninitialized session ID. Strict mode protects\n; applications from session fixation via session adoption vulnerability. It is\n; disabled by default for maximum compatibility, but enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. 
We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following command is the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID\n;   using a URL stored in the browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value can be between 22 and 256.\n; Lengths shorter than the default are supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include it here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. 
The <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a "=", even if no value follows.\n; Default Value: "a=href,area=href,frame=src,form="\n; Development Value: "a=href,area=href,frame=src,form="\n; Production Value: "a=href,area=href,frame=src,form="\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = "a=href,area=href,frame=src,form="\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; The <form> tag is special. PHP will check the action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used as the allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use "," for multiple hosts. No spaces are allowed.\n; Default Value: ""\n; Development Value: ""\n; Production Value: ""\n;session.trans_sid_hosts=""\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, "-", ",")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! 
(For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding 
instead.\n; internal/script encoding.\n; Some encodings cannot work as internal encoding. (e.g. SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds while a cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=0\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; Allow file existence override (file_exists, etc.) 
performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path\n; starts with the specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.2.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php72-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php72-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php72-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php72\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php72/sbin/php-fpm\nphp_fpm_CONF=/opt/php72/etc/php72-fpm.conf\nphp_fpm_PID=/run/php72-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php72/etc/php72.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php72-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php72-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php72-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php72-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php72-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php72-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php72-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php72-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php72-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php72-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php72-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php72). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php72 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php72/var\n; Default Value: none\npid = /run/php72-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php72/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php72-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php72-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless it is specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php72/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php72.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (C:\\windows or C:\\winnt)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; 
variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who write portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URLs or forms by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically selects the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php72:/dev/urandom\"\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  
It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (395M here)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. 
That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except 
for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether and where PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. In error_log information about the source is\n; added. The default is 1024; a value of 0 applies no maximum length at all.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on the\n; same line unless ignore_repeated_source is set to true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This only has an effect in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list.\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; This directive is DEPRECATED.\n; Default Value: Off\n; Development Value: Off\n; Production Value: Off\n; http://php.net/track-errors\n;track_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_72\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. 
It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. 
It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php72/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternative is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php72/lib/php/extensions/no-debug-non-zts-20170718/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n; http://php.net/cgi.discard-path\n;cgi.discard_path=1\n\n; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends a Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send an\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for a line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>') syntax.\n;\n; Notes for Windows environments:\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=fileinfo\n;extension=gd2\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=interbase\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sqlite3\n;extension=tidy\n;extension=xmlrpc\n;extension=xsl\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, 
default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. 
Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/pdo_mysql.cache_size\npdo_mysql.cache_size = 2000\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/pdo_mysql.default-socket\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. 
Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n;birdstep.max_links = -1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  
-1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; If mysqlnd is used: Number of cache slots for the internal result set cache\n; http://php.net/mysqli.cache_size\nmysqli.cache_size = 2000\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")'\n; and reveal this password!  And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_statistics\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; http://php.net/mysqlnd.collect_memory_statistics\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n; http://php.net/mysqlnd.log_mask\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\n; http://php.net/mysqlnd.mempool_default_size\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\n; http://php.net/mysqlnd.net_cmd_buffer_size\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\n; 
http://php.net/mysqlnd.net_read_buffer_size\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n; http://php.net/mysqlnd.net_read_timeout\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n; http://php.net/mysqlnd.sha256_server_public_key\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. 
The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto-reset feature adds a little overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend Notice messages.\n; Notice message logging adds a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend Notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log Notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  
In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept uninitialized session ID and regenerate\n; session ID if browser sends uninitialized session ID. Strict mode protects\n; applications from session fixation via session adoption vulnerability. It is\n; disabled by default for maximum compatibility, but enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. 
We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor, where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using the following equation:\n; gc_probability/gc_divisor, where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script would be the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via email/irc/etc.\n; - A URL containing an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may access your site with the same session ID\n;   by always using a URL stored in the browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value can be between 22 and 256.\n; A shorter length than the default is supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. 
The <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME, i.e. use ini_set().\n; The <form> tag is special. PHP will check the action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used as the allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! 
(For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding 
instead.\n; internal/script encoding.\n; Some encodings cannot work as internal encoding. (e.g. SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set,\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds the cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; Allow file existence override (file_exists, etc.) 
performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0xffffffff\n\n;opcache.inherited_hack=1\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.2.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php73-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: 
Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to the nature of compression. PHP\n;   outputs chunks that are a few hundred bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry if you really want to implement such a\n; callback function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically selects the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. 
PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long-running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (set to 395M here)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether and where PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. 
In error_log, information about the source is\n; added. The default is 1024; 0 disables the length limit entirely.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on the\n; same line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On, you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This only has an effect in a debug compile, and only if\n; error reporting includes E_WARNING in the allowed list.\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; This directive is DEPRECATED.\n; Default Value: Off\n; Development Value: Off\n; Production Value: Off\n; http://php.net/track-errors\n;track_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_73\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. 
Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (only base ASCII characters)\n;   no_ctrl (all characters except control characters)\n;   all (all characters)\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. 
The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php73/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web 
server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n; extension_dir = \"./\"\n; On windows:\n; extension_dir = \"ext\"\nextension_dir = \"/opt/php73/lib/php/extensions/no-debug-non-zts-20180731/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=fileinfo\n;extension=gd2\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=interbase\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xmlrpc\n;extension=xsl\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding 
instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. 
Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. 
Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  
-1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")'\n; and reveal this password!  And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect.\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. 
File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. 
Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; The auto reset feature requires a little overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backend Notice messages or not.\n; Notice message logging requires a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backend Notice messages or not.\n; Unless pgsql.ignore_notice=0, the module cannot log Notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. 
Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Lax\" or \"Strict\"\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. 
Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. Setting this value to 100\n; when the session.gc_probability value is 1 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of\n;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID\n;   using a URL stored in the browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value can be between 22 and 256.\n; A length shorter than the default is supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include it here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. 
<form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME, i.e. use ini_set().\n; The <form> tag is special. PHP will check the action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used as the allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! 
(For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; 
internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when a character cannot be converted\n; from one encoding to another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decoder to ignore warnings and try to create\n; a gd image. The warnings will then be displayed as notices.\n; Disabled by default.\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables the WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where the SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds during which the cached file will\n; be used instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=0\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; Allow file existence override (file_exists, etc.) 
performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path\n; starts with the specified string. The default \"\" means no restriction.\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. 
This\n; directive allows you to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when a script is loaded from the file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.3.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php73-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php73-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php73-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php73\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php73/sbin/php-fpm\nphp_fpm_CONF=/opt/php73/etc/php73-fpm.conf\nphp_fpm_PID=/run/php73-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php73/etc/php73.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php73-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php73-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php73-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php73-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php73-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php73-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php73-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php73-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php73-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php73-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php73-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php73). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php73 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php73/var\n; Default Value: none\npid = /run/php73-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php73/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php73-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php73-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater than or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php73/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php73.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n;  foo =         ; sets foo to an empty string\n;  foo = None    ; sets foo to an empty string\n;  foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; html_errors\n;   Default Value: On\n;   Development Value: On\n;   Production value: On\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; track_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: 
Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically selects the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; http://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php73:/dev/urandom\"\n\n; This directive allows you to disable certain functions for security reasons.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes for security reasons.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  
It is not a security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume (set to 395M here)\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. 
That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except 
for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence are handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. In error_log information about the source is\n; added. The default is 1024; a value of 0 disables the length limit entirely.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on the\n; same line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This only has an effect in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; This directive is DEPRECATED.\n; Default Value: Off\n; Development Value: Off\n; Production Value: Off\n; http://php.net/track-errors\n;track_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: On\n; Development Value: On\n; Production value: On\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_73\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. 
Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (only base ASCII characters)\n;   no_ctrl (all characters except control characters)\n;   all (all characters)\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on productions servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\";\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. 
The values for this directive
; are specified in the same manner as the variables_order directive,
; EXCEPT one. Leaving this value empty will cause PHP to use the value set
; in the variables_order directive. It does not mean it will leave the super
; globals array REQUEST empty.
; Default Value: None
; Development Value: "GP"
; Production Value: "GP"
; http://php.net/request-order
request_order = "GP"

; This directive determines whether PHP registers $argv & $argc each time it
; runs. $argv contains an array of all the arguments passed to PHP when a script
; is invoked. $argc contains an integer representing the number of arguments
; that were passed when the script was invoked. These arrays are extremely
; useful when running scripts from the command line. When this directive is
; enabled, registering these variables consumes CPU cycles and memory each time
; a script is executed. For performance reasons, this feature should be disabled
; on production servers.
; Note: This directive is hardcoded to On for the CLI SAPI
; Default Value: On
; Development Value: Off
; Production Value: Off
; http://php.net/register-argc-argv
register_argc_argv = Off

; When enabled, the ENV, REQUEST and SERVER variables are created when they're
; first used (Just In Time) instead of when the script starts. If these
; variables are not used within a script, having this directive on will result
; in a performance gain. The PHP directive register_argc_argv must be disabled
; for this directive to have any effect.
; http://php.net/auto-globals-jit
auto_globals_jit = On

; Whether PHP will read the POST data.
; This option is enabled by default.
; Most likely, you won't want to disable this option globally. It causes $_POST
; and $_FILES to always be empty; the only way you will be able to read the
; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php73/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web 
server (other than IIS)
; see documentation for security issues.  The alternative is to use the
; cgi.force_redirect configuration below
; http://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username used only
; if nonempty.
; http://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; http://php.net/extension-dir
; extension_dir = "./"
; On windows:
; extension_dir = "ext"
extension_dir = "/opt/php73/lib/php/extensions/no-debug-non-zts-20180731/"

; Directory where the temporary files should be placed.
; Defaults to the system default (see sys_get_temp_dir)
sys_temp_dir = "/tmp"

; Whether or not to enable the dl() function.  The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; http://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers.  Left undefined, PHP turns this on by default.  You can
; turn it off here AT YOUR OWN RISK
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; http://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; if cgi.nph is enabled it will force cgi to always send Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution.  Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; http://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting
; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts
; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.
; http://php.net/cgi.fix-pathinfo
;cgi.fix_pathinfo=1

; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside
; of the web tree and people will not be able to circumvent .htaccess security.
;cgi.discard_path=1

; FastCGI under IIS supports the ability to impersonate
; security tokens of the calling client.  This allows IIS to define the
; security context that the request runs under.  mod_fastcgi under Apache
; does not currently support this feature (03/17/2002)
; Set to 1 if running under IIS.  Default is zero.
; http://php.net/fastcgi.impersonate
;fastcgi.impersonate = 1

; Disable logging through FastCGI connection. PHP's default behavior is to enable
; this feature.
;fastcgi.logging = 0

; cgi.rfc2616_headers configuration option tells PHP what type of headers to
; use when sending HTTP response code. If set to 0, PHP sends Status: header that
; is supported by Apache. When this option is set to 1, PHP will send
; RFC2616 compliant header.
; Default is zero.
; http://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!
; (shebang) at the top of the running script. This line might be needed if the
; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n; extension folders as well as the separate PECL DLL download (PHP 5+).\n; Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=fileinfo\n;extension=gd2\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=interbase\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xmlrpc\n;extension=xsl\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding 
instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n;sqlite3.extension_dir =\n\n[Pcre]\n;PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n;PCRE library recursion limit.\n;Please note that if you set this value to a high number you may consume all\n;the available process stack and eventually crash PHP (due to reaching the\n;stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n;Enables or disables JIT compilation of patterns. This requires the PCRE\n;library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. 
Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. 
Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[Interbase]\n; Allow or prevent persistent links.\nibase.allow_persistent = 1\n\n; Maximum number of persistent links.  -1 means no limit.\nibase.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  
-1 means no limit.\nibase.max_links = -1\n\n; Default database name for ibase_connect().\n;ibase.default_db =\n\n; Default username for ibase_connect().\n;ibase.default_user =\n\n; Default password for ibase_connect().\n;ibase.default_password =\n\n; Default charset for ibase_connect().\n;ibase.default_charset =\n\n; Default timestamp format.\nibase.timestampformat = \"%Y-%m-%d %H:%M:%S\"\n\n; Default date format.\nibase.dateformat = \"%Y-%m-%d\"\n\n; Default time format.\nibase.timeformat = \"%H:%M:%S\"\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in
; MySQL defaults.
; http://php.net/mysqli.default-socket
mysqli.default_socket =

; Default host for mysqli_connect() (doesn't apply in safe mode).
; http://php.net/mysqli.default-host
mysqli.default_host =

; Default user for mysqli_connect() (doesn't apply in safe mode).
; http://php.net/mysqli.default-user
mysqli.default_user =

; Default password for mysqli_connect() (doesn't apply in safe mode).
; Note that it is generally a *bad* idea to store passwords in this file.
; *Any* user with PHP access can run 'echo get_cfg_var("mysqli.default_pw");'
; and reveal this password!  And of course, any users with read access to this
; file will be able to reveal the password as well.
; http://php.net/mysqli.default-pw
mysqli.default_pw =

; Allow or prevent reconnect
mysqli.reconnect = Off

[mysqlnd]
; Enable / Disable collection of general statistics by mysqlnd which can be
; used to tune and monitor MySQL operations.
mysqlnd.collect_statistics = 0

; Enable / Disable collection of memory usage statistics by mysqlnd which can be
; used to tune and monitor MySQL operations.
mysqlnd.collect_memory_statistics = 0

; Records communication from all extensions using mysqlnd to the specified log
; file.
; http://php.net/mysqlnd.debug
;mysqlnd.debug =

; Defines which queries will be logged.
;mysqlnd.log_mask = 0

; Default size of the mysqlnd memory pool, which is used by result sets.
mysqlnd.mempool_default_size = 64000

; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.
mysqlnd.net_cmd_buffer_size = 8192

; Size of a pre-allocated buffer used for reading data sent by the server in
; bytes.
mysqlnd.net_read_buffer_size = 131072

; Timeout for network requests in seconds.
;mysqlnd.net_read_timeout = 31536000

; SHA-256 Authentication Plugin related. 
File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. 
Using 0 disables statement caching.
; http://php.net/oci8.statement-cache-size
;oci8.statement_cache_size = 20

; Tuning: Enables statement prefetching and sets the default number of
; rows that will be fetched automatically after statement execution.
; http://php.net/oci8.default-prefetch
;oci8.default_prefetch = 100

; Compatibility. Using On means oci_close() will not close
; oci_connect() and oci_new_connect() connections.
; http://php.net/oci8.old-oci-close-semantics
;oci8.old_oci_close_semantics = Off

[PostgreSQL]
; Allow or prevent persistent links.
; http://php.net/pgsql.allow-persistent
pgsql.allow_persistent = On

; Always detect broken persistent links with pg_pconnect().
; The auto-reset feature adds a little overhead.
; http://php.net/pgsql.auto-reset-persistent
pgsql.auto_reset_persistent = Off

; Maximum number of persistent links.  -1 means no limit.
; http://php.net/pgsql.max-persistent
pgsql.max_persistent = -1

; Maximum number of links (persistent + non-persistent).  -1 means no limit.
; http://php.net/pgsql.max-links
pgsql.max_links = -1

; Whether to ignore PostgreSQL backend Notice messages.
; Notice message logging adds a little overhead.
; http://php.net/pgsql.ignore-notice
pgsql.ignore_notice = 0

; Whether to log PostgreSQL backend Notice messages.
; Unless pgsql.ignore_notice=0, the module cannot log notice messages.
; http://php.net/pgsql.log-notice
pgsql.log_notice = 0

[bcmath]
; Number of decimal digits for all bcmath functions.
; http://php.net/bcmath.scale
bcmath.scale = 0

[browscap]
; http://php.net/browscap
;browscap = extra/browscap.ini

[Session]
; Handler used to store/retrieve data.
; http://php.net/session.save-handler
session.save_handler = files

; Argument passed to save_handler.  In the case of files, this is the path
; where data files are stored. 
Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Lax\" or \"Strict\"\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data.  php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started\n; on every session initialization. The probability is calculated by using\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator\n; and gc_divisor is the denominator in the equation. 
Setting this value to 1\n; when the session.gc_divisor value is 100 will give you approximately a 1% chance\n; the gc will run on any given request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using the following equation:\n; gc_probability/gc_divisor. Where session.gc_probability is the numerator and\n; session.gc_divisor is the denominator in the equation. Setting this value to 100\n; when the session.gc_probability value is 1 will give you approximately a 1% chance\n; the gc will run on any given request. Increasing this value to 1000 will give you\n; a 0.1% chance the gc will run on any given request. For high volume production servers,\n; this is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage
;       collection through a shell script, cron entry, or some other method.
;       For example, the following script is the equivalent of
;       setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):
;          find /path/to/sessions -cmin +24 -type f | xargs rm

; Check HTTP Referer to invalidate externally stored URLs containing ids.
; HTTP_REFERER has to contain this substring for the session to be
; considered as valid.
; http://php.net/session.referer-check
session.referer_check =

; Set to {nocache,private,public,} to determine HTTP caching aspects
; or leave this empty to avoid sending anti-caching headers.
; http://php.net/session.cache-limiter
session.cache_limiter = nocache

; Document expires after n minutes.
; http://php.net/session.cache-expire
session.cache_expire = 180

; trans sid support is disabled by default.
; Use of trans sid may risk your users' security.
; Use this option with caution.
; - A user may send a URL containing an active session ID
;   to another person via email/irc/etc.
; - A URL containing an active session ID may be stored
;   on a publicly accessible computer.
; - A user may always access your site with the same session ID
;   by using a URL stored in the browser's history or bookmarks.
; http://php.net/session.use-trans-sid
session.use_trans_sid = 0

; Set the session ID character length. This value can be between 22 and 256.
; Lengths shorter than the default are supported only for compatibility reasons.
; Users should use 32 or more chars.
; http://php.net/session.sid-length
; Default Value: 32
; Development Value: 26
; Production Value: 26
session.sid_length = 26

; The URL rewriter will look for URLs in a defined set of HTML tags.
; <form> is special; if you include them here, the rewriter will
; add a hidden <input> field with the info which is otherwise appended
; to URLs. 
<form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags is special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! 
(For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; 
internal/script encoding.\n; Some encodings cannot work as the internal encoding. (e.g. SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as an output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set,\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables the WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where the SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds during which the cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; Allow file existence override (file_exists, etc.) 
performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path\n; starts with the specified string. The default \"\" means no restriction.\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. 
This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.3.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php74-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name 
for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. 
On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; http://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  
This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only takes effect if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n; Allows including or excluding arguments from stack traces generated for exceptions\n; Default: Off\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\nzend.exception_ignore_args = On\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long-running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. 
The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommend error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - 
fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether, and where, PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application, such as database usernames and passwords, or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. 
Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set the maximum length of log_errors. Information about the source is added\n; to error_log. The default is 1024; a value of 0 disables the length limit\n; entirely.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on\n; the same line unless ignore_repeated_source is set to true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list.\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. 
It should\n; however be disabled on production servers.\n; This directive is DEPRECATED.\n; Default Value: Off\n; Development Value: Off\n; Production Value: Off\n; http://php.net/track-errors\n;track_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_74\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; http://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. 
There is a performance penalty\n; paid for the registration of these arrays, and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive, with\n; one exception: leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These variables are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. 
If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. 
To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path = \".:/opt/php74/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternative is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php74/lib/php/extensions/no-debug-non-zts-20190902/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security when running PHP as a CGI\n; under most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK.\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends a Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send an\n; RFC 2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for a line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, you may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note: The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension=php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>') syntax.\n;\n; Notes for Windows environments:\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n;   extension folders as well as the separate PECL DLL download (PHP 5+).\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd2\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xmlrpc\n;extension=xsl\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global 
input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; http://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. 
FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")'\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. 
Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto-reset feature adds a little overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend notice messages.\n; Notice message logging adds a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via. 
email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value could be between 22 to 256.\n; Shorter length than default is supported only for compatibility reason.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags is special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; 
Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  
Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encodings cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. 
Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n; Default: 100000\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n; Default: 1000000\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds a cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; Allow file existence override (file_exists, etc.) 
performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. 
This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; http://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. This directive\n; facilitates to let the preloading to be run as another user.\n; http://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. 
In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.4.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php74-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php74-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php74-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php74\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php74/sbin/php-fpm\nphp_fpm_CONF=/opt/php74/etc/php74-fpm.conf\nphp_fpm_PID=/run/php74-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php74/etc/php74.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php74-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php74-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php74-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php74-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php74-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php74-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php74-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php74-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php74-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php74-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php74-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php74). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php74 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php74/var\n; Default Value: none\npid = /run/php74-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php74/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php74-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php74-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php74/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php74.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name 
for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. 
On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically selects the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; http://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php74:/dev/urandom\"\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n; Default: Off\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n; Default: \"\"\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions\n; Default: Off\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\nzend.exception_ignore_args = On\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  
It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long-running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. 
That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except 
for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence are handled\n; separately from display_errors. PHP's default behavior is to suppress those\n; errors from clients. Turning the display of startup errors on can be useful in\n; debugging configuration problems. We strongly recommend you\n; set this to 'off' for production servers.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers, they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. In error_log information about the source is\n; added. The default is 1024; a value of 0 disables the length limit entirely.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on the\n; same line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is on by default.\n;report_zend_debug = 0\n\n; Store the last error/warning message in $php_errormsg (boolean). Setting this value\n; to On can assist in debugging and is appropriate for development servers. It should\n; however be disabled on production servers.\n; This directive is DEPRECATED.\n; Default Value: Off\n; Development Value: Off\n; Production Value: Off\n; http://php.net/track-errors\n;track_errors = Off\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_74\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. 
Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; http://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. 
If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php74/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web 
server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php74/lib/php/extensions/no-debug-non-zts-20190902/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, where possible, please\n; move to the new ('extension=<ext>') syntax.\n;\n; Notes for Windows environments:\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n;   extension folders as well as the separate PECL DLL download (PHP 5+).\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd2\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xmlrpc\n;extension=xsl\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.583333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.583333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global 
input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; http://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. 
FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n;pdo_odbc.db2_instance_name\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. 
Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overheads.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backends Notice message or not.\n; Notice message logging require a little overheads.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backends Notice message or not.\n; Unless pgsql.ignore_notice=0, module cannot log notice message.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via. 
email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value could be between 22 to 256.\n; Shorter length than default is supported only for compatibility reason.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags is special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; 
Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. (Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n; Eval the expression with current error_reporting().  
Set to true if you want\n; error_reporting(0) around the eval().\n; http://php.net/assert.quiet-eval\n;assert.quiet_eval = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typlib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.http_input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; overload(replace) single byte functions by mbstring functions.\n; mail(), ereg(), etc are overloaded by mb_send_mail(), mb_ereg(),\n; etc. 
Possible values are 0,1,2,4 or combination of them.\n; For example, 7 for overload everything.\n; 0: No overload\n; 1: Overload mail() function\n; 2: Overload str*() functions\n; 4: Overload ereg*() functions\n; http://php.net/mbstring.func-overload\n;mbstring.func_overload = 0\n\n; enable strict encoding detection.\n; Default: Off\n;mbstring.strict_detection = On\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetype=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetype=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n; Default: 100000\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n; Default: 1000000\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of second while cached file will be used\n; instead of original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; Allow file existence override (file_exists, etc.) 
performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes.\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path\n; starts with the specified string. The default \"\" means no restriction.\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. 
This\n; directive allows you to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when a script is loaded from the file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in a chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; http://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. This directive\n; allows the preloading to be run as another user.\n; http://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. 
In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of header files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n; Local Variables:\n; tab-width: 4\n; End:\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_7.4.so\"\n\n; Fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=jsmin.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php80-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\n; display_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who write portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; The URL rewriter function rewrites URLs on the fly by using the\n; output buffer. You can set target tags by this configuration.\n; The \"form\" tag is a special tag. It will add a hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; The URL rewriter will not rewrite absolute URLs or forms by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to the nature of compression. PHP\n;   outputs chunks that are a few hundred bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; http://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds, for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are examples of such\n; encodings.  To use this feature, the mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows setting the default encoding for scripts.  
This value will be used\n; unless a \"declare(encoding=...)\" directive appears at the top of the script.\n; Only takes effect if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows including or excluding arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces.\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long-running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether, and where, PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application, such as database usernames and passwords, or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set the maximum length of log_errors. In error_log information about the source is\n; added. 
The default is 1024; a value of 0 applies no maximum length at all.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in the same file on the same\n; line unless ignore_repeated_source is set to true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore the source of messages when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list.\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. 
PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_80\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. 
If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; http://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on productions servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\";\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. 
Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php80/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web 
server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php80/lib/php/extensions/no-debug-non-zts-20200930/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n;   extension folders as well as the separate PECL DLL download (PHP 5+).\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=oci8_19  ; Use with Oracle Database 19 Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; 
http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; http://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. 
This forbids\n; writing directly to the schema, shadow tables (eg. FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. 
Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overheads.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backends Notice message or not.\n; Notice message logging require a little overheads.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backends Notice message or not.\n; Unless pgsql.ignore_notice=0, module cannot log notice message.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via 
email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may access your site with the same session ID\n;   by always using a URL stored in the browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value could be between 22 and 256.\n; Shorter length than default is supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags are special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; 
Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. 
(Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typlib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of second while cached file will be used\n; instead of original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. 
The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; http://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; facilitates to let the preloading to be run as another user.\n; http://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php80-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php80-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php80-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php80\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php80/sbin/php-fpm\nphp_fpm_CONF=/opt/php80/etc/php80-fpm.conf\nphp_fpm_PID=/run/php80-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php80/etc/php80.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php80-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php80-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php80-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php80-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php80-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php80-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php80-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php80-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php80-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php80-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php80-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php80). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php80 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php80/var\n; Default Value: none\npid = /run/php80-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php80/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php80-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php80-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been design to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lower priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless it specified otherwise\n; Default Value: no set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php80/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php80.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; http://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; http://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscience applications. We\n; recommending using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; http://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; http://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; http://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who write portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; http://php.net/output-handler\n;output_handler =\n\n; The URL rewriter function rewrites URLs on the fly by using the\n; output buffer. You can set target tags by this configuration.\n; The \"form\" tag is special: it adds a hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; http://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; http://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; http://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; http://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically selects the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; http://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php80:/dev/urandom\"\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; http://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; http://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; http://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; http://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; http://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; http://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; http://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. 
by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; http://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; http://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; http://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; http://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; http://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. 
Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & 
~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; http://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings to. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; http://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; http://php.net/log-errors\nlog_errors = On\n\n; Set maximum length of log_errors. In error_log information about the source is\n; added. The default is 1024, and 0 means no maximum length is applied at all.\n; http://php.net/log-errors-max-len\nlog_errors_max_len = 1024\n\n; Do not log repeated messages. Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; http://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; http://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; http://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; http://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; http://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from http://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; http://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; http://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; http://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; http://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_80\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. 
Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; http://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; http://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; http://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; http://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. 
If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; http://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; http://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; http://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; http://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; http://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; http://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; http://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; http://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; http://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; http://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; http://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; http://php.net/include-path\ninclude_path\t=  \".:/opt/php80/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web 
server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; http://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; http://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; http://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php80/lib/php/extensions/no-debug-non-zts-20200930/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; http://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; http://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; http://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; http://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; http://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; http://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; http://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; http://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; http://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; http://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like http:// or ftp://) as files.\n; http://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; http://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; http://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; http://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; http://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, you may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note: The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension=php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>') syntax.\n;\n; Notes for Windows environments:\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n;   extension folders as well as the separate PECL DLL download (PHP 5+).\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=oci8_19  ; Use with Oracle Database 19 Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See http://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; http://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; http://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; http://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; http://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; http://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; http://php.net/filter.default\n;filter.default = unsafe_raw\n\n; 
http://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; http://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. 
This forbids\n; writing directly to the schema, shadow tables (eg. FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; http://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; http://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; http://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; http://php.net/phar.readonly\n;phar.readonly = On\n\n; http://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; http://php.net/smtp\n;SMTP = localhost\n; http://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; http://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; http://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; http://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; http://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; http://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; http://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; http://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; http://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; http://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; http://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; http://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; http://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; Allow or prevent persistent links.\n; http://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; http://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; http://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; http://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; http://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")'\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; http://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; http://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; http://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; http://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. 
Using -1 means idle\n; persistent connections will be maintained forever.\n; http://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; http://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; http://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; http://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; http://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; http://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto-reset feature adds a little overhead.\n; http://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; http://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; http://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend notice messages.\n; Notice message logging adds a little overhead.\n; http://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log notice messages.\n; http://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; http://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; http://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; http://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; http://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; http://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; http://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; http://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; http://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; http://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; http://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; http://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; http://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; http://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; http://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; http://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; http://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered valid.\n; http://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; http://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; http://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via 
email, IRC, etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID\n;   using a URL stored in the browser's history or bookmarks.\n; http://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value can be between 22 and 256.\n; Lengths shorter than the default are supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; http://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include it here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; http://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME, i.e. use ini_set().\n; <form> tags are special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; http://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; http://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; http://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; http://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; http://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; 
Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; http://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; http://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; http://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; http://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; http://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. 
(Overridden by assert.exception if active)\n; http://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; http://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; http://php.net/assert.callback\n;assert.callback = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; http://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; http://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; http://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case-sensitively\n; http://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; http://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting is the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; http://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encodings cannot work as the internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; http://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; http://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; http://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; http://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; http://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; http://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; http://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; http://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; http://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; http://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; http://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; http://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; http://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-HTML content\n; such as dynamic images\n; http://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables the WSDL caching feature.\n; http://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where the SOAP extension will put cache files.\n; http://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds a cached file will be used\n; instead of the original one.\n; http://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes.\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path starts\n; with the specified string. 
The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; http://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; facilitates to let the preloading to be run as another user.\n; http://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php81-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  
This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; https://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; https://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; https://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; https://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; https://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; https://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether and where PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; https://php.net/log-errors\nlog_errors = On\n\n; Do not log repeated messages. 
Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; https://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; https://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; https://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. 
PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_81\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. 
If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on productions servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\";\n; https://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. 
Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; https://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php81/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under 
any web server (other than IIS)\n; see documentation for security issues.  The alternative is to use the\n; cgi.force_redirect configuration below\n; https://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; https://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; https://php.net/extension-dir\n;extension_dir = \"./\"\n; On Windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php81/lib/php/extensions/no-debug-non-zts-20210902/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; https://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK.\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; https://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; https://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; https://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002).\n; Set to 1 if running under IIS.  Default is zero.\n; https://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends a Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send an\n; RFC2616 compliant header.\n; Default is zero.\n; https://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; https://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket-based streams (seconds)\n; https://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; https://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, you may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note: The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension=php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>') syntax.\n;\n; Notes for Windows environments:\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n;   extension folders as well as the separate PECL DLL download (PHP 5+).\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=oci8_19  ; Use with Oracle Database 19 Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See https://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; https://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; https://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; https://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; https://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; https://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; https://php.net/filter.default\n;filter.default = unsafe_raw\n\n; 
https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. 
This forbids\n; writing directly to the schema, shadow tables (eg. FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; https://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. 
Using -1 means no limit.\n; https://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; https://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; https://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; https://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; https://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; https://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; https://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; The auto-reset feature adds a little overhead.\n; https://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend Notice messages.\n; Notice message logging adds a little overhead.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend Notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log Notice messages.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high-volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via 
email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID\n;   by using a URL stored in the browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value can be between 22 and 256.\n; Lengths shorter than the default are supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; https://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME, i.e. use ini_set().\n; <form> tags are special. PHP will check the action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used as the allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; https://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in 
seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; https://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; https://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. 
(Overridden by assert.exception if active)\n; https://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; https://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; https://php.net/assert.callback\n;assert.callback = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds while the cached file will be used\n; instead of the original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. 
The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; facilitates to let the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.1.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php81-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php81-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php81-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php81\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php81/sbin/php-fpm\nphp_fpm_CONF=/opt/php81/etc/php81-fpm.conf\nphp_fpm_PID=/run/php81-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php81/etc/php81.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php81-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php81-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php81-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php81-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php81-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php81-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php81-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php81-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php81-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php81-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php81-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php81). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php81 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php81/var\n; Default Value: none\npid = /run/php81-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php81/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php81-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php81-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless it is specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php81/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php81.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
; This directive makes most sense if used in a per-directory
; or per-virtualhost web server configuration file.
; Note: disables the realpath cache
; https://php.net/open-basedir
open_basedir = ".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php81:/dev/urandom"

; This directive allows you to disable certain functions.
; It receives a comma-delimited list of function names.
; https://php.net/disable-functions
disable_functions = "disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset"

; This directive allows you to disable certain classes.
; It receives a comma-delimited list of class names.
; https://php.net/disable-classes
disable_classes =

; Colors for Syntax Highlighting mode.  Anything that's acceptable in
; <span style="color: ???????"> would work.
; https://php.net/syntax-highlighting
;highlight.string  = #DD0000
;highlight.comment = #FF9900
;highlight.keyword = #007700
;highlight.default = #0000BB
;highlight.html    = #000000

; If enabled, the request will be allowed to complete even if the user aborts
; the request. Consider enabling it if executing long requests, which may end up
; being interrupted by the user or a browser timing out. PHP's default behavior
; is to disable this feature.
; https://php.net/ignore-user-abort
;ignore_user_abort = On

; Determines the size of the realpath cache to be used by PHP.
; This value should
; be increased on systems where PHP opens many files to reflect the quantity of
; the file operations performed.
; Note: if open_basedir is set, the cache is disabled
; https://php.net/realpath-cache-size
realpath_cache_size=64M

; Duration of time, in seconds, for which to cache realpath information for a given
; file or directory. For systems with rarely changing files, consider increasing this
; value.
; https://php.net/realpath-cache-ttl
realpath_cache_ttl=180

; Enables or disables the circular reference collector.
; https://php.net/zend.enable-gc
zend.enable_gc = On

; If enabled, scripts may be written in encodings that are incompatible with
; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such
; encodings.  To use this feature, mbstring extension must be enabled.
;zend.multibyte = Off

; Allows setting the default encoding for the scripts.  This value will be used
; unless "declare(encoding=...)" directive appears at the top of the script.
; Only affects if zend.multibyte is set.
;zend.script_encoding =

; Allows including or excluding arguments from stack traces generated for exceptions.
; In production, it is recommended to turn this setting on to prohibit the output
; of sensitive information in stack traces
; Default Value: Off
; Development Value: Off
; Production Value: On
zend.exception_ignore_args = On

; Allows setting the maximum string length in an argument of a stringified stack trace
; to a value between 0 and 1000000.
; This has no effect when zend.exception_ignore_args is enabled.
; Default Value: 15
; Development Value: 15
; Production Value: 0
; In production, it is recommended to set this to 0 to reduce the output
; of sensitive information in stack traces.
zend.exception_string_param_max_len = 0

;;;;;;;;;;;;;;;;;
; Miscellaneous ;
;;;;;;;;;;;;;;;;;

; Decides whether PHP may expose the fact that it is installed on the server
; (e.g.
; by adding its signature to the Web server header).  It is no security
; threat in any way, but it makes it possible to determine whether you use PHP
; on your server or not.
; https://php.net/expose-php
expose_php = On

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;

; Maximum execution time of each script, in seconds
; https://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 180

; Maximum amount of time each script may spend parsing request data. It's a good
; idea to limit this time on production servers in order to eliminate unexpectedly
; long running scripts.
; Note: This directive is hardcoded to -1 for the CLI SAPI
; Default Value: -1 (Unlimited)
; Development Value: 60 (60 seconds)
; Production Value: 60 (60 seconds)
; https://php.net/max-input-time
max_input_time = 180

; Maximum input variable nesting level
; https://php.net/max-input-nesting-level
;max_input_nesting_level = 64

; How many GET/POST/COOKIE input variables may be accepted
max_input_vars = 9999

; Maximum amount of memory a script may consume
; https://php.net/memory-limit
memory_limit = 395M

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Error handling and logging ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

; This directive informs PHP of which errors, warnings and notices you would like
; it to take action for. The recommended way of setting values for this
; directive is through the use of the error level constants and bitwise
; operators. The error level constants are below here for convenience as well as
; some common settings and their meanings.
; By default, PHP is set to take action on all errors, notices and warnings EXCEPT
; those related to E_NOTICE and E_STRICT, which together cover best practices and
; recommended coding standards in PHP. For performance reasons, this is the
; recommended error reporting setting.
; Your production server shouldn't be wasting
; resources complaining about best practices and coding standards. That's what
; development servers and development settings are for.
; Note: The php.ini-development file has this setting as E_ALL. This
; means it pretty much reports everything which is exactly what you want during
; development and early testing.
;
; Error Level Constants:
; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)
; E_ERROR           - fatal run-time errors
; E_RECOVERABLE_ERROR  - almost fatal run-time errors
; E_WARNING         - run-time warnings (non-fatal errors)
; E_PARSE           - compile-time parse errors
; E_NOTICE          - run-time notices (these are warnings which often result
;                     from a bug in your code, but it's possible that it was
;                     intentional (e.g., using an uninitialized variable and
;                     relying on the fact it is automatically initialized to an
;                     empty string)
; E_STRICT          - run-time notices, enable to have PHP suggest changes
;                     to your code which will ensure the best interoperability
;                     and forward compatibility of your code
; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup
; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's
;                     initial startup
; E_COMPILE_ERROR   - fatal compile-time errors
; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)
; E_USER_ERROR      - user-generated error message
; E_USER_WARNING    - user-generated warning message
; E_USER_NOTICE     - user-generated notice message
; E_DEPRECATED      - warn about code that will not work in future versions
;                     of PHP
; E_USER_DEPRECATED - user-generated deprecation warnings
;
; Common Values:
;   E_ALL (Show all errors, warnings and notices including coding standards.)
;   E_ALL &
;   ~E_NOTICE  (Show all errors, except for notices)
;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)
;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)
; Default Value: E_ALL
; Development Value: E_ALL
; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT
; https://php.net/error-reporting
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT

; This directive controls whether or not and where PHP will output errors,
; notices and warnings to. Error output is very useful during development, but
; it could be very dangerous in production environments. Depending on the code
; which is triggering the error, sensitive information could potentially leak
; out of your application such as database usernames and passwords or worse.
; For production environments, we recommend logging errors rather than
; sending them to STDOUT.
; Possible Values:
;   Off = Do not display any errors
;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)
;   On or stdout = Display errors to STDOUT
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-errors
display_errors = Off

; The display of errors which occur during PHP's startup sequence is handled
; separately from display_errors. We strongly recommend you set this to 'off'
; for production servers to avoid leaking configuration details.
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-startup-errors
display_startup_errors = Off

; Besides displaying errors, PHP can also log errors to locations such as a
; server-specific log, STDERR, or a location specified by the error_log
; directive found below.
; While errors should not be displayed on production
; servers, they should still be monitored, and logging is a great way to do that.
; Default Value: Off
; Development Value: On
; Production Value: On
; https://php.net/log-errors
log_errors = On

; Do not log repeated messages. Repeated errors must occur in the same file on
; the same line unless ignore_repeated_source is set true.
; https://php.net/ignore-repeated-errors
ignore_repeated_errors = Off

; Ignore source of message when ignoring repeated messages. When this setting
; is On you will not log errors with repeated messages from different files or
; source lines.
; https://php.net/ignore-repeated-source
ignore_repeated_source = Off

; If this parameter is set to Off, then memory leaks will not be shown (on
; stdout or in the log). This is only effective in a debug compile, and if
; error reporting includes E_WARNING in the allowed list
; https://php.net/report-memleaks
report_memleaks = On

; This setting is off by default.
;report_zend_debug = 0

; Turn off normal error reporting and emit XML-RPC error XML
; https://php.net/xmlrpc-errors
;xmlrpc_errors = 0

; An XML-RPC faultCode
;xmlrpc_error_number = 0

; When PHP displays or logs an error, it has the capability of formatting the
; error message as HTML for easier reading. This directive controls whether
; the error message is formatted as HTML or not.
; Note: This directive is hardcoded to Off for the CLI SAPI
; https://php.net/html-errors
html_errors = Off

; If html_errors is set to On *and* docref_root is not empty, then PHP
; produces clickable error messages that direct to a page describing the error
; or function causing the error in detail.
; You can download a copy of the PHP manual from https://php.net/docs
; and change docref_root to the base URL of your local copy including the
; leading '/'. You must also specify the file extension being used including
; the dot.
; PHP's default behavior is to leave these settings empty, in which
; case no links to documentation are generated.
; Note: Never use this feature for production boxes.
; https://php.net/docref-root
; Examples
;docref_root = "/phpmanual/"

; https://php.net/docref-ext
;docref_ext = .html

; String to output before an error message. PHP's default behavior is to leave
; this setting blank.
; https://php.net/error-prepend-string
; Example:
;error_prepend_string = "<span style='color: #ff0000'>"

; String to output after an error message. PHP's default behavior is to leave
; this setting blank.
; https://php.net/error-append-string
; Example:
;error_append_string = "</span>"

; Log errors to specified file. PHP's default behavior is to leave this value
; empty.
; https://php.net/error-log
; Example:
;error_log = php_errors.log
; Log errors to syslog (Event Log on Windows).
;error_log = syslog
error_log = /var/log/php/error_log_81

; The syslog ident is a string which is prepended to every message logged
; to syslog. Only used when error_log is set to syslog.
;syslog.ident = php

; The syslog facility is used to specify what type of program is logging
; the message. Only used when error_log is set to syslog.
;syslog.facility = user

; Set this to disable filtering control characters (the default).
; Some loggers only accept NVT-ASCII, others accept anything that's not
; control characters.
; If your logger accepts everything, then no filtering
; is needed at all.
; Allowed values are:
;   ascii (all printable ASCII characters and NL)
;   no-ctrl (all characters except control characters)
;   all (all characters)
;   raw (like "all", but messages are not split at newlines)
; https://php.net/syslog.filter
;syslog.filter = ascii

;windows.show_crt_warning
; Default value: 0
; Development value: 0
; Production value: 0

;;;;;;;;;;;;;;;;;
; Data Handling ;
;;;;;;;;;;;;;;;;;

; The separator used in PHP generated URLs to separate arguments.
; PHP's default setting is "&".
; https://php.net/arg-separator.output
; Example:
;arg_separator.output = "&amp;"

; List of separator(s) used by PHP to parse input URLs into variables.
; PHP's default setting is "&".
; NOTE: Every character in this directive is considered as separator!
; https://php.net/arg-separator.input
; Example:
;arg_separator.input = ";&"

; This directive determines which super global arrays are registered when PHP
; starts up. G,P,C,E & S are abbreviations for the following respective super
; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty
; paid for the registration of these arrays and because ENV is not as commonly
; used as the others, ENV is not recommended on production servers. You
; can still get access to the environment variables through getenv() should you
; need to.
; Default Value: "EGPCS"
; Development Value: "GPCS"
; Production Value: "GPCS"
; https://php.net/variables-order
variables_order = "GPCS"

; This directive determines which super global data (G,P & C) should be
; registered into the super global array REQUEST. If so, it also determines
; the order in which that data is registered. The values for this directive
; are specified in the same manner as the variables_order directive,
; EXCEPT one.
; Leaving this value empty will cause PHP to use the value set
; in the variables_order directive. It does not mean it will leave the super
; globals array REQUEST empty.
; Default Value: None
; Development Value: "GP"
; Production Value: "GP"
; https://php.net/request-order
request_order = "GP"

; This directive determines whether PHP registers $argv & $argc each time it
; runs. $argv contains an array of all the arguments passed to PHP when a script
; is invoked. $argc contains an integer representing the number of arguments
; that were passed when the script was invoked. These arrays are extremely
; useful when running scripts from the command line. When this directive is
; enabled, registering these variables consumes CPU cycles and memory each time
; a script is executed. For performance reasons, this feature should be disabled
; on production servers.
; Note: This directive is hardcoded to On for the CLI SAPI
; Default Value: On
; Development Value: Off
; Production Value: Off
; https://php.net/register-argc-argv
register_argc_argv = Off

; When enabled, the ENV, REQUEST and SERVER variables are created when they're
; first used (Just In Time) instead of when the script starts. If these
; variables are not used within a script, having this directive on will result
; in a performance gain. The PHP directive register_argc_argv must be disabled
; for this directive to have any effect.
; https://php.net/auto-globals-jit
auto_globals_jit = On

; Whether PHP will read the POST data.
; This option is enabled by default.
; Most likely, you won't want to disable this option globally. It causes $_POST
; and $_FILES to always be empty; the only way you will be able to read the
; POST data will be through the php://input stream wrapper.
; This can be useful
; to proxy requests or to process the POST data in a memory efficient fashion.
; https://php.net/enable-post-data-reading
;enable_post_data_reading = Off

; Maximum size of POST data that PHP will accept.
; Its value may be 0 to disable the limit. It is ignored if POST data reading
; is disabled through enable_post_data_reading.
; Note: this must be at least as large as upload_max_filesize (325M further
; below), since a file upload arrives inside a POST request.
; https://php.net/post-max-size
post_max_size = 350M

; Automatically add files before PHP document.
; https://php.net/auto-prepend-file
auto_prepend_file =

; Automatically add files after PHP document.
; https://php.net/auto-append-file
auto_append_file =

; By default, PHP will output a media type using the Content-Type header. To
; disable this, simply set it to be empty.
;
; PHP's built-in default media type is set to text/html.
; https://php.net/default-mimetype
default_mimetype = "text/html"

; PHP's default character set is set to UTF-8.
; https://php.net/default-charset
default_charset = "UTF-8"

; PHP internal character encoding is set to empty.
; If empty, default_charset is used.
; https://php.net/internal-encoding
;internal_encoding =

; PHP input character encoding is set to empty.
; If empty, default_charset is used.
; https://php.net/input-encoding
;input_encoding =

; PHP output character encoding is set to empty.
; If empty, default_charset is used.
; See also output_buffer.
; https://php.net/output-encoding
;output_encoding =

;;;;;;;;;;;;;;;;;;;;;;;;;
; Paths and Directories ;
;;;;;;;;;;;;;;;;;;;;;;;;;

; UNIX: "/path1:/path2"
;include_path = ".:/php/includes"
;
; Windows: "\path1;\path2"
;include_path = ".;c:\php\includes"
;
; PHP's default setting for include_path is ".;/path/to/php/pear"
; https://php.net/include-path
include_path = ".:/opt/php81/lib/php"

; The root of the PHP pages, used only if nonempty.
; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root
; if you are running php as a CGI under
; any web server (other than IIS)
; see documentation for security issues.  The alternate is to use the
; cgi.force_redirect configuration below
; https://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username used only
; if nonempty.
; https://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; https://php.net/extension-dir
;extension_dir = "./"
; On windows:
;extension_dir = "ext"
extension_dir = "/opt/php81/lib/php/extensions/no-debug-non-zts-20210902/"

; Directory where the temporary files should be placed.
; Defaults to the system default (see sys_get_temp_dir)
sys_temp_dir = "/tmp"

; Whether or not to enable the dl() function.  The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; https://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers.  Left undefined, PHP turns this on by default.  You can
; turn it off here AT YOUR OWN RISK
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; https://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; if cgi.nph is enabled it will force cgi to always send Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution.  Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; https://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is.
; For more information on PATH_INFO, see the cgi specs.  Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting
; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts
; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.
; https://php.net/cgi.fix-pathinfo
;cgi.fix_pathinfo=1

; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside
; of the web tree and people will not be able to circumvent .htaccess security.
;cgi.discard_path=1

; FastCGI under IIS supports the ability to impersonate
; security tokens of the calling client.  This allows IIS to define the
; security context that the request runs under.  mod_fastcgi under Apache
; does not currently support this feature (03/17/2002)
; Set to 1 if running under IIS.  Default is zero.
; https://php.net/fastcgi.impersonate
;fastcgi.impersonate = 1

; Disable logging through FastCGI connection. PHP's default behavior is to enable
; this feature.
;fastcgi.logging = 0

; cgi.rfc2616_headers configuration option tells PHP what type of headers to
; use when sending HTTP response code. If set to 0, PHP sends Status: header that
; is supported by Apache. When this option is set to 1, PHP will send
; RFC2616 compliant header.
; Default is zero.
; https://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!
; (shebang) at the top of the running script. This line might be needed if the
; script supports running both as a stand-alone script and via PHP CGI.
; PHP in CGI
; mode skips this line and ignores its content if this directive is turned on.
; https://php.net/cgi.check-shebang-line
;cgi.check_shebang_line=1

;;;;;;;;;;;;;;;;
; File Uploads ;
;;;;;;;;;;;;;;;;

; Whether to allow HTTP file uploads.
; https://php.net/file-uploads
file_uploads = On

; Temporary directory for HTTP uploaded files (will use system default if not
; specified).
; https://php.net/upload-tmp-dir
upload_tmp_dir = /tmp

; Maximum allowed size for uploaded files.
; Note: this should not exceed post_max_size (350M above); a larger value
; would have no effect, since the surrounding POST request is capped first.
; https://php.net/upload-max-filesize
upload_max_filesize = 325M

; Maximum number of files that can be uploaded via a single request
max_file_uploads = 50

;;;;;;;;;;;;;;;;;;
; Fopen wrappers ;
;;;;;;;;;;;;;;;;;;

; Whether to allow the treatment of URLs (like https:// or ftp://) as files.
; https://php.net/allow-url-fopen
allow_url_fopen = On

; Whether to allow include/require to open URLs (like https:// or ftp://) as files.
; https://php.net/allow-url-include
allow_url_include = Off

; Define the anonymous ftp password (your email address). PHP's default setting
; for this is empty.
; https://php.net/from
;from="john@doe.com"

; Define the User-Agent string.
; PHP's default setting for this is empty.
; https://php.net/user-agent
;user_agent="PHP"

; Default timeout for socket based streams (seconds)
; https://php.net/default-socket-timeout
default_socket_timeout = 180

; If your scripts have to deal with files from Macintosh systems,
; or you are running on a Mac and need to deal with files from
; unix or win32 systems, setting this flag will cause PHP to
; automatically detect the EOL character in those files so that
; fgets() and file() will work regardless of the source of the file.
; https://php.net/auto-detect-line-endings
auto_detect_line_endings = On

;;;;;;;;;;;;;;;;;;;;;;
; Dynamic Extensions ;
;;;;;;;;;;;;;;;;;;;;;;

; If you wish to have an extension loaded automatically, use the following
; syntax:
;
;   extension=modulename
;
; For example:
;
;   extension=mysqli
;
; When the extension library to load is not located in the default extension
; directory, you may specify an absolute path to the library file:
;
;   extension=/path/to/extension/mysqli.so
;
; Note: The syntax used in previous PHP versions ('extension=<ext>.so' and
; 'extension='php_<ext>.dll') is supported for legacy reasons and may be
; deprecated in a future PHP major version.
; So, when it is possible, please
; move to the new ('extension=<ext>') syntax.
;
; Notes for Windows environments :
;
; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)
;   extension folders as well as the separate PECL DLL download (PHP 5+).
;   Be sure to appropriately set the extension_dir directive.
;
;extension=bz2
;extension=curl
;extension=ffi
;extension=ftp
;extension=fileinfo
;extension=gd
;extension=gettext
;extension=gmp
;extension=intl
;extension=imap
;extension=ldap
;extension=mbstring
;extension=exif      ; Must be after mbstring as it depends on it
;extension=mysqli
;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client
;extension=oci8_19   ; Use with Oracle Database 19 Instant Client
;extension=odbc
;extension=openssl
;extension=pdo_firebird
;extension=pdo_mysql
;extension=pdo_oci
;extension=pdo_odbc
;extension=pdo_pgsql
;extension=pdo_sqlite
;extension=pgsql
;extension=shmop

; The MIBS data available in the PHP distribution must be installed.
; See https://www.php.net/manual/en/snmp.installation.php
;extension=snmp

;extension=soap
;extension=sockets
;extension=sodium
;extension=sqlite3
;extension=tidy
;extension=xsl

;zend_extension=opcache

;;;;;;;;;;;;;;;;;;;
; Module Settings ;
;;;;;;;;;;;;;;;;;;;

[CLI Server]
; Whether the CLI web server uses ANSI color coding in its terminal output.
cli_server.color = On

[Date]
; Defines the default timezone used by the date functions
; https://php.net/date.timezone
date.timezone = "UTC"

; https://php.net/date.default-latitude
;date.default_latitude = 31.7667

; https://php.net/date.default-longitude
;date.default_longitude = 35.2333

; https://php.net/date.sunrise-zenith
;date.sunrise_zenith = 90.833333

; https://php.net/date.sunset-zenith
;date.sunset_zenith = 90.833333

[filter]
; https://php.net/filter.default
;filter.default = unsafe_raw
; https://php.net/filter.default-flags
;filter.default_flags =

[iconv]
; Use of this INI entry is deprecated, use global input_encoding instead.
; If empty, default_charset or input_encoding or iconv.input_encoding is used.
; The precedence is: default_charset < input_encoding < iconv.input_encoding
;iconv.input_encoding =

; Use of this INI entry is deprecated, use global internal_encoding instead.
; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.
; The precedence is: default_charset < internal_encoding < iconv.internal_encoding
;iconv.internal_encoding =

; Use of this INI entry is deprecated, use global output_encoding instead.
; If empty, default_charset or output_encoding or iconv.output_encoding is used.
; The precedence is: default_charset < output_encoding < iconv.output_encoding
; To use an output encoding conversion, iconv's output handler must be set
; otherwise output encoding conversion cannot be performed.
;iconv.output_encoding =

[imap]
; rsh/ssh logins are disabled by default. Use this INI entry if you want to
; enable them. Note that the IMAP library does not filter mailbox names before
; passing them to rsh/ssh command, thus passing untrusted data to this function
; with rsh/ssh enabled is insecure.
;imap.enable_insecure_rsh=0

[intl]
;intl.default_locale =
; This directive allows you to produce PHP errors when some error
; happens within intl functions. The value is the level of the error produced.
; Default is 0, which does not produce any errors.
;intl.error_level = E_WARNING
;intl.use_exceptions = 0

[sqlite3]
; Directory pointing to SQLite3 extensions
; https://php.net/sqlite3.extension-dir
;sqlite3.extension_dir =

; SQLite defensive mode flag (only available from SQLite 3.26+)
; When the defensive flag is enabled, language features that allow ordinary
; SQL to deliberately corrupt the database file are disabled.
; This forbids
; writing directly to the schema, shadow tables (eg. FTS data tables), or
; the sqlite_dbpage virtual table.
; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html
; (for older SQLite versions, this flag has no use)
;sqlite3.defensive = 1

[Pcre]
; PCRE library backtracking limit.
; https://php.net/pcre.backtrack-limit
;pcre.backtrack_limit=100000

; PCRE library recursion limit.
; Please note that if you set this value to a high number you may consume all
; the available process stack and eventually crash PHP (due to reaching the
; stack size limit imposed by the Operating System).
; https://php.net/pcre.recursion-limit
;pcre.recursion_limit=100000

; Enables or disables JIT compilation of patterns. This requires the PCRE
; library to be compiled with JIT support.
;pcre.jit=1

[Pdo]
; Whether to pool ODBC connections. Can be one of "strict", "relaxed" or "off"
; https://php.net/pdo-odbc.connection-pooling
;pdo_odbc.connection_pooling=strict

[Pdo_mysql]
; Default socket name for local MySQL connects.  If empty, uses the built-in
; MySQL defaults.
pdo_mysql.default_socket=

[Phar]
; https://php.net/phar.readonly
;phar.readonly = On

; https://php.net/phar.require-hash
;phar.require_hash = On

;phar.cache_list =

[mail function]
; For Win32 only.
; https://php.net/smtp
;SMTP = localhost
; https://php.net/smtp-port
;smtp_port = 25

; For Win32 only.
; https://php.net/sendmail-from
;sendmail_from = me@example.com

; For Unix only.  You may supply arguments as well (default: "sendmail -t -i").
; https://php.net/sendmail-path
sendmail_path = /usr/sbin/sendmail -t -i

; Force the addition of the specified parameters to be passed as extra parameters
; to the sendmail binary.
; These parameters will always replace the value of
; the 5th parameter to mail().
;mail.force_extra_parameters =

; Add an X-PHP-Originating-Script header that will include the uid of the
; script followed by the filename
mail.add_x_header = Off

; The path to a log file that will log all mail() calls. Log entries include
; the full path of the script, line number, To address and headers.
;mail.log =
; Log mail to syslog (Event Log on Windows).
;mail.log = syslog

[ODBC]
; https://php.net/odbc.default-db
;odbc.default_db    =  Not yet implemented

; https://php.net/odbc.default-user
;odbc.default_user  =  Not yet implemented

; https://php.net/odbc.default-pw
;odbc.default_pw    =  Not yet implemented

; Controls the ODBC cursor model.
; Default: SQL_CURSOR_STATIC.
;odbc.default_cursortype

; Allow or prevent persistent links.
; https://php.net/odbc.allow-persistent
odbc.allow_persistent = On

; Check that a connection is still valid before reuse.
; https://php.net/odbc.check-persistent
odbc.check_persistent = On

; Maximum number of persistent links.  -1 means no limit.
; https://php.net/odbc.max-persistent
odbc.max_persistent = -1

; Maximum number of links (persistent + non-persistent).  -1 means no limit.
; https://php.net/odbc.max-links
odbc.max_links = -1

; Handling of LONG fields.  Returns number of bytes to variables.  0 means
; passthru.
; https://php.net/odbc.defaultlrl
odbc.defaultlrl = 4096

; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.
; See the documentation on odbc_binmode and odbc_longreadlen for an explanation
; of odbc.defaultlrl and odbc.defaultbinmode
; https://php.net/odbc.defaultbinmode
odbc.defaultbinmode = 1

[MySQLi]

; Maximum number of persistent links.
-1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; https://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. 
Using -1 means no limit.\n; https://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; https://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; https://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; https://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; https://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; https://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; https://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto reset feature requires a little overhead.\n; https://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backend notice messages or not.\n; Notice message logging requires a little overhead.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backend notice messages or not.\n; Unless pgsql.ignore_notice=0, the module cannot log notice messages.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via 
email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID,\n;   using a URL stored in the browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value can be between 22 and 256.\n; Shorter length than default is supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; https://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags are special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; https://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in 
seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; https://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; https://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. 
(Overridden by assert.exception if active)\n; https://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; https://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; https://php.net/assert.callback\n;assert.callback = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds a cached file will be used\n; instead of the original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path\n; starts with the specified string. 
The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; makes it possible to run the preloading as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.1.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php82-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who write portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  
This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; https://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; https://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; https://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; https://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; https://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default 
Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; https://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether or not, and where, PHP will output errors,\n; notices and warnings. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence is handled\n; separately from display_errors. We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; https://php.net/log-errors\nlog_errors = On\n\n; Do not log repeated messages. 
Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; https://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; https://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; https://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. 
PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_82\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. 
If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; https://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. 
Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; https://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php82/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under 
any web server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; https://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; https://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; https://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php82/lib/php/extensions/no-debug-non-zts-20220829/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; https://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; https://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; https://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; https://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; https://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; https://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; https://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; https://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; https://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)\n;   extension folders as well as the separate PECL DLL download (PHP 5+).\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=oci8_19  ; Use with Oracle Database 19 Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See https://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; https://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; https://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; https://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; https://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; https://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; https://php.net/filter.default\n;filter.default = unsafe_raw\n\n; 
https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. 
This forbids\n; writing directly to the schema, shadow tables (eg. FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; https://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. 
Using -1 means no limit.\n; https://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; https://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; https://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; https://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; https://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close
; oci_connect() and oci_new_connect() connections.
; https://php.net/oci8.old-oci-close-semantics
;oci8.old_oci_close_semantics = Off

[PostgreSQL]
; Allow or prevent persistent links.
; https://php.net/pgsql.allow-persistent
pgsql.allow_persistent = On

; Always detect broken persistent links with pg_pconnect().
; The auto-reset feature adds a little overhead.
; https://php.net/pgsql.auto-reset-persistent
pgsql.auto_reset_persistent = Off

; Maximum number of persistent links.  -1 means no limit.
; https://php.net/pgsql.max-persistent
pgsql.max_persistent = -1

; Maximum number of links (persistent+non persistent).  -1 means no limit.
; https://php.net/pgsql.max-links
pgsql.max_links = -1

; Whether to ignore PostgreSQL backend notice messages.
; Notice message logging adds a little overhead.
; https://php.net/pgsql.ignore-notice
pgsql.ignore_notice = 0

; Whether to log PostgreSQL backend notice messages.
; Unless pgsql.ignore_notice=0, the module cannot log notice messages.
; https://php.net/pgsql.log-notice
pgsql.log_notice = 0

[bcmath]
; Number of decimal digits for all bcmath functions.
; https://php.net/bcmath.scale
bcmath.scale = 0

[browscap]
; https://php.net/browscap
;browscap = extra/browscap.ini

[Session]
; Handler used to store/retrieve data.
; https://php.net/session.save-handler
session.save_handler = files

; Argument passed to save_handler.  In the case of files, this is the path
; where data files are stored. Note: Windows users have to change this
; variable in order to use PHP's session functions.
;
; The path can be defined as:
;
;     session.save_path = "N;/path"
;
; where N is an integer.  Instead of storing all the session files in
; /path, what this will do is use subdirectories N-levels deep, and
; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,
; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.
; For high volume production servers, using a value of 1000 is a more efficient approach.
; Default Value: 100
; Development Value: 1000
; Production Value: 1000
; https://php.net/session.gc-divisor
session.gc_divisor = 1000

; After this number of seconds, stored data will be seen as 'garbage' and
; cleaned up by the garbage collection process.
; https://php.net/session.gc-maxlifetime
session.gc_maxlifetime = 1440

; NOTE: If you are using the subdirectory option for storing session files
;       (see session.save_path above), then garbage collection does *not*
;       happen automatically.  You will need to do your own garbage
;       collection through a shell script, cron entry, or some other method.
;       For example, the following script is the equivalent of setting
;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):
;          find /path/to/sessions -cmin +24 -type f | xargs rm

; Check HTTP Referer to invalidate externally stored URLs containing ids.
; HTTP_REFERER has to contain this substring for the session to be
; considered as valid.
; https://php.net/session.referer-check
session.referer_check =

; Set to {nocache,private,public,} to determine HTTP caching aspects
; or leave this empty to avoid sending anti-caching headers.
; https://php.net/session.cache-limiter
session.cache_limiter = nocache

; Document expires after n minutes.
; https://php.net/session.cache-expire
session.cache_expire = 180

; trans sid support is disabled by default.
; Use of trans sid may put your users' security at risk.
; Use this option with caution.
; - A user may send a URL that contains an active session ID
;   to another person via 
email/irc/etc.
; - A URL that contains an active session ID may be stored
;   on a publicly accessible computer.
; - A user may always access your site with the same session ID
;   by using a URL stored in the browser's history or bookmarks.
; https://php.net/session.use-trans-sid
session.use_trans_sid = 0

; Set session ID character length. This value can be between 22 and 256.
; Lengths shorter than the default are supported only for compatibility reasons.
; Users should use 32 or more chars.
; https://php.net/session.sid-length
; Default Value: 32
; Development Value: 26
; Production Value: 26
session.sid_length = 26

; The URL rewriter will look for URLs in a defined set of HTML tags.
; <form> is special; if you include it here, the rewriter will
; add a hidden <input> field with the info which is otherwise appended
; to URLs. <form> tag's action attribute URL will not be modified
; unless it is specified.
; Note that all valid entries require a "=", even if no value follows.
; Default Value: "a=href,area=href,frame=src,form="
; Development Value: "a=href,area=href,frame=src,form="
; Production Value: "a=href,area=href,frame=src,form="
; https://php.net/url-rewriter.tags
session.trans_sid_tags = "a=href,area=href,frame=src,form="

; URL rewriter does not rewrite absolute URLs by default.
; To enable rewrites for absolute paths, target hosts must be specified
; at RUNTIME, i.e. via ini_set().
; The <form> tag is special. PHP will check the action attribute's URL regardless
; of the session.trans_sid_tags setting.
; If no host is defined, HTTP_HOST will be used as the allowed host.
; Example value: php.net,www.php.net,wiki.php.net
; Use "," for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; https://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in 
seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; https://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; https://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. 
(Overridden by assert.exception if active)\n; https://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; https://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; https://php.net/assert.callback\n;assert.callback = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.
; https://php.net/exif.encode-unicode
;exif.encode_unicode = ISO-8859-15

; https://php.net/exif.decode-unicode-motorola
;exif.decode_unicode_motorola = UCS-2BE

; https://php.net/exif.decode-unicode-intel
;exif.decode_unicode_intel    = UCS-2LE

; https://php.net/exif.encode-jis
;exif.encode_jis =

; https://php.net/exif.decode-jis-motorola
;exif.decode_jis_motorola = JIS

; https://php.net/exif.decode-jis-intel
;exif.decode_jis_intel    = JIS

[Tidy]
; The path to a default tidy configuration file to use when using tidy
; https://php.net/tidy.default-config
;tidy.default_config = /usr/local/lib/php/default.tcfg

; Should tidy clean and repair output automatically?
; WARNING: Do not use this option if you are generating non-html content
; such as dynamic images
; https://php.net/tidy.clean-output
tidy.clean_output = Off

[soap]
; Enables or disables the WSDL caching feature.
; https://php.net/soap.wsdl-cache-enabled
soap.wsdl_cache_enabled=1

; Sets the directory name where the SOAP extension will put cache files.
; https://php.net/soap.wsdl-cache-dir
soap.wsdl_cache_dir="/tmp"

; (time to live) Sets the number of seconds during which the cached file
; will be used instead of the original one.
; https://php.net/soap.wsdl-cache-ttl
soap.wsdl_cache_ttl=86400

; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. 
The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; facilitates to let the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.2.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php82-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php82-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php82-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php82\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php82/sbin/php-fpm\nphp_fpm_CONF=/opt/php82/etc/php82-fpm.conf\nphp_fpm_PID=/run/php82-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php82/etc/php82.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php82-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php82-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php82-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php82-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php82-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php82-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php82-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php82-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php82-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php82-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php82-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php82). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php82 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php82/var\n; Default Value: none\npid = /run/php82-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php82/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php82-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php82-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been design to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lower priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless it specified otherwise\n; Default Value: no set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php82/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php82.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable. (As of PHP 5.2.0)\n; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0)\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  
If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security conscience applications. We\n; recommending using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php82:/dev/urandom\"\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. 
by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; https://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; https://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on productions servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; https://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; https://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; Maximum amount of memory a script may consume\n; https://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommend error reporting setting. 
Your production server shouldn't be wasting
; resources complaining about best practices and coding standards. That's what
; development servers and development settings are for.
; Note: The php.ini-development file has this setting as E_ALL. This
; means it pretty much reports everything, which is exactly what you want during
; development and early testing.
;
; Error Level Constants:
; E_ALL             - All errors and warnings (includes E_STRICT as of PHP 5.4.0)
; E_ERROR           - fatal run-time errors
; E_RECOVERABLE_ERROR  - almost fatal run-time errors
; E_WARNING         - run-time warnings (non-fatal errors)
; E_PARSE           - compile-time parse errors
; E_NOTICE          - run-time notices (these are warnings which often result
;                     from a bug in your code, but it's possible that it was
;                     intentional (e.g., using an uninitialized variable and
;                     relying on the fact it is automatically initialized to an
;                     empty string))
; E_STRICT          - run-time notices, enable to have PHP suggest changes
;                     to your code which will ensure the best interoperability
;                     and forward compatibility of your code
; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup
; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's
;                     initial startup
; E_COMPILE_ERROR   - fatal compile-time errors
; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)
; E_USER_ERROR      - user-generated error message
; E_USER_WARNING    - user-generated warning message
; E_USER_NOTICE     - user-generated notice message
; E_DEPRECATED      - warn about code that will not work in future versions
;                     of PHP
; E_USER_DEPRECATED - user-generated deprecation warnings
;
; Common Values:
;   E_ALL (Show all errors, warnings and notices including coding standards.)
;   E_ALL & 
~E_NOTICE  (Show all errors, except for notices)
;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)
;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)
; Default Value: E_ALL
; Development Value: E_ALL
; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT
; https://php.net/error-reporting
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT

; This directive controls whether, and where, PHP will output errors,
; notices and warnings. Error output is very useful during development, but
; it could be very dangerous in production environments. Depending on the code
; which is triggering the error, sensitive information could potentially leak
; out of your application, such as database usernames and passwords or worse.
; For production environments, we recommend logging errors rather than
; sending them to STDOUT.
; Possible Values:
;   Off = Do not display any errors
;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)
;   On or stdout = Display errors to STDOUT
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-errors
display_errors = Off

; The display of errors which occur during PHP's startup sequence is handled
; separately from display_errors. We strongly recommend you set this to 'off'
; for production servers to avoid leaking configuration details.
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-startup-errors
display_startup_errors = Off

; Besides displaying errors, PHP can also log errors to locations such as a
; server-specific log, STDERR, or a location specified by the error_log
; directive found below. 
While errors should not be displayed on production
; servers, they should still be monitored, and logging is a great way to do that.
; Default Value: Off
; Development Value: On
; Production Value: On
; https://php.net/log-errors
log_errors = On

; Do not log repeated messages. Repeated errors must occur in the same file on
; the same line unless ignore_repeated_source is set to true.
; https://php.net/ignore-repeated-errors
ignore_repeated_errors = Off

; Ignore the source of a message when ignoring repeated messages. When this setting
; is On, you will not log errors with repeated messages from different files or
; source lines.
; https://php.net/ignore-repeated-source
ignore_repeated_source = Off

; If this parameter is set to Off, then memory leaks will not be shown (on
; stdout or in the log). This is only effective in a debug compile, and if
; error reporting includes E_WARNING in the allowed list.
; https://php.net/report-memleaks
report_memleaks = On

; This setting is off by default.
;report_zend_debug = 0

; Turn off normal error reporting and emit XML-RPC error XML
; https://php.net/xmlrpc-errors
;xmlrpc_errors = 0

; An XML-RPC faultCode
;xmlrpc_error_number = 0

; When PHP displays or logs an error, it has the capability of formatting the
; error message as HTML for easier reading. This directive controls whether
; the error message is formatted as HTML or not.
; Note: This directive is hardcoded to Off for the CLI SAPI
; https://php.net/html-errors
html_errors = Off

; If html_errors is set to On *and* docref_root is not empty, then PHP
; produces clickable error messages that direct to a page describing the error
; or function causing the error in detail.
; You can download a copy of the PHP manual from https://php.net/docs
; and change docref_root to the base URL of your local copy including the
; leading '/'. You must also specify the file extension being used including
; the dot. 
PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_82\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. 
If your logger accepts everything, then no filtering
; is needed at all.
; Allowed values are:
;   ascii (all printable ASCII characters and NL)
;   no-ctrl (all characters except control characters)
;   all (all characters)
;   raw (like "all", but messages are not split at newlines)
; https://php.net/syslog.filter
;syslog.filter = ascii

;windows.show_crt_warning
; Default value: 0
; Development value: 0
; Production value: 0

;;;;;;;;;;;;;;;;;
; Data Handling ;
;;;;;;;;;;;;;;;;;

; The separator used in PHP generated URLs to separate arguments.
; PHP's default setting is "&".
; https://php.net/arg-separator.output
; Example:
;arg_separator.output = "&amp;"

; List of separator(s) used by PHP to parse input URLs into variables.
; PHP's default setting is "&".
; NOTE: Every character in this directive is considered a separator!
; https://php.net/arg-separator.input
; Example:
;arg_separator.input = ";&"

; This directive determines which super global arrays are registered when PHP
; starts up. G,P,C,E & S are abbreviations for the following respective super
; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty
; paid for the registration of these arrays, and because ENV is not as commonly
; used as the others, ENV is not recommended on production servers. You
; can still get access to the environment variables through getenv() should you
; need to.
; Default Value: "EGPCS"
; Development Value: "GPCS"
; Production Value: "GPCS"
; https://php.net/variables-order
variables_order = "GPCS"

; This directive determines which super global data (G,P & C) should be
; registered into the super global array REQUEST. If so, it also determines
; the order in which that data is registered. The values for this directive
; are specified in the same manner as the variables_order directive,
; EXCEPT one. 
Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; https://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php82/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under 
any web server (other than IIS),
; see the documentation for security issues.  The alternative is to use the
; cgi.force_redirect configuration below
; https://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username, used only
; if nonempty.
; https://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; https://php.net/extension-dir
;extension_dir = "./"
; On windows:
;extension_dir = "ext"
extension_dir = "/opt/php82/lib/php/extensions/no-debug-non-zts-20220829/"

; Directory where the temporary files should be placed.
; Defaults to the system default (see sys_get_temp_dir)
sys_temp_dir = "/tmp"

; Whether or not to enable the dl() function.  The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; https://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers.  Left undefined, PHP turns this on by default.  You can
; turn it off here AT YOUR OWN RISK
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; https://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; If cgi.nph is enabled, it will force CGI to always send Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution.  Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; https://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting
; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts
; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.
; https://php.net/cgi.fix-pathinfo
;cgi.fix_pathinfo=1

; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside
; of the web tree and people will not be able to circumvent .htaccess security.
;cgi.discard_path=1

; FastCGI under IIS supports the ability to impersonate
; security tokens of the calling client.  This allows IIS to define the
; security context that the request runs under.  mod_fastcgi under Apache
; does not currently support this feature (03/17/2002)
; Set to 1 if running under IIS.  Default is zero.
; https://php.net/fastcgi.impersonate
;fastcgi.impersonate = 1

; Disable logging through FastCGI connection. PHP's default behavior is to enable
; this feature.
;fastcgi.logging = 0

; The cgi.rfc2616_headers configuration option tells PHP what type of header to
; use when sending the HTTP response code. If set to 0, PHP sends a Status: header
; that is supported by Apache. When this option is set to 1, PHP will send an
; RFC2616 compliant header.
; Default is zero.
; https://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

; cgi.check_shebang_line controls whether CGI PHP checks for a line starting with #!
; (shebang) at the top of the running script. This line might be needed if the
; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.
; https://php.net/user-agent
;user_agent="PHP"

; Default timeout for socket based streams (seconds)
; https://php.net/default-socket-timeout
default_socket_timeout = 180

; If your scripts have to deal with files from Macintosh systems,
; or you are running on a Mac and need to deal with files from
; unix or win32 systems, setting this flag will cause PHP to
; automatically detect the EOL character in those files so that
; fgets() and file() will work regardless of the source of the file.
; https://php.net/auto-detect-line-endings
auto_detect_line_endings = On

;;;;;;;;;;;;;;;;;;;;;;
; Dynamic Extensions ;
;;;;;;;;;;;;;;;;;;;;;;

; If you wish to have an extension loaded automatically, use the following
; syntax:
;
;   extension=modulename
;
; For example:
;
;   extension=mysqli
;
; When the extension library to load is not located in the default extension
; directory, you may specify an absolute path to the library file:
;
;   extension=/path/to/extension/mysqli.so
;
; Note: The syntax used in previous PHP versions ('extension=<ext>.so' and
; 'extension=php_<ext>.dll') is supported for legacy reasons and may be
; deprecated in a future PHP major version. 
So, when it is possible, please
; move to the new ('extension=<ext>') syntax.
;
; Notes for Windows environments:
;
; - Many DLL files are located in the extensions/ (PHP 4) or ext/ (PHP 5+)
;   extension folders as well as the separate PECL DLL download (PHP 5+).
;   Be sure to appropriately set the extension_dir directive.
;
;extension=bz2
;extension=curl
;extension=ffi
;extension=ftp
;extension=fileinfo
;extension=gd
;extension=gettext
;extension=gmp
;extension=intl
;extension=imap
;extension=ldap
;extension=mbstring
;extension=exif      ; Must be after mbstring as it depends on it
;extension=mysqli
;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client
;extension=oci8_19   ; Use with Oracle Database 19 Instant Client
;extension=odbc
;extension=openssl
;extension=pdo_firebird
;extension=pdo_mysql
;extension=pdo_oci
;extension=pdo_odbc
;extension=pdo_pgsql
;extension=pdo_sqlite
;extension=pgsql
;extension=shmop

; The MIBS data available in the PHP distribution must be installed.
; See https://www.php.net/manual/en/snmp.installation.php
;extension=snmp

;extension=soap
;extension=sockets
;extension=sodium
;extension=sqlite3
;extension=tidy
;extension=xsl

;zend_extension=opcache

;;;;;;;;;;;;;;;;;;;
; Module Settings ;
;;;;;;;;;;;;;;;;;;;

[CLI Server]
; Whether the CLI web server uses ANSI color coding in its terminal output.
cli_server.color = On

[Date]
; Defines the default timezone used by the date functions
; https://php.net/date.timezone
date.timezone = "UTC"

; https://php.net/date.default-latitude
;date.default_latitude = 31.7667

; https://php.net/date.default-longitude
;date.default_longitude = 35.2333

; https://php.net/date.sunrise-zenith
;date.sunrise_zenith = 90.833333

; https://php.net/date.sunset-zenith
;date.sunset_zenith = 90.833333

[filter]
; https://php.net/filter.default
;filter.default = unsafe_raw

; 
https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. 
This forbids\n; writing directly to the schema, shadow tables (eg. FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.
; https://php.net/mysqli.max-persistent
mysqli.max_persistent = -1

; Allow accessing, from PHP's perspective, local files with LOAD DATA statements
; https://php.net/mysqli.allow_local_infile
;mysqli.allow_local_infile = On

; It allows the user to specify a folder where files that can be sent via LOAD DATA
; LOCAL are located. It is ignored if mysqli.allow_local_infile is enabled.
;mysqli.local_infile_directory =

; Allow or prevent persistent links.
; https://php.net/mysqli.allow-persistent
mysqli.allow_persistent = On

; Maximum number of links.  -1 means no limit.
; https://php.net/mysqli.max-links
mysqli.max_links = -1

; Default port number for mysqli_connect().  If unset, mysqli_connect() will use
; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the
; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look
; at MYSQL_PORT.
; https://php.net/mysqli.default-port
mysqli.default_port = 3306

; Default socket name for local MySQL connects.  If empty, uses the built-in
; MySQL defaults.
; https://php.net/mysqli.default-socket
mysqli.default_socket =

; Default host for mysqli_connect() (doesn't apply in safe mode).
; https://php.net/mysqli.default-host
mysqli.default_host =

; Default user for mysqli_connect() (doesn't apply in safe mode).
; https://php.net/mysqli.default-user
mysqli.default_user =

; Default password for mysqli_connect() (doesn't apply in safe mode).
; Note that it is generally a *bad* idea to store passwords in this file.
; *Any* user with PHP access can run 'echo get_cfg_var("mysqli.default_pw")'
; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; https://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. 
Using -1 means no limit.\n; https://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; https://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; https://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle 11g Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. Using 0 disables statement caching.\n; https://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables statement prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; https://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Compatibility. 
Using On means oci_close() will not close
; oci_connect() and oci_new_connect() connections.
; https://php.net/oci8.old-oci-close-semantics
;oci8.old_oci_close_semantics = Off

[PostgreSQL]
; Allow or prevent persistent links.
; https://php.net/pgsql.allow-persistent
pgsql.allow_persistent = On

; Always detect broken persistent links with pg_pconnect().
; The auto reset feature requires a little overhead.
; https://php.net/pgsql.auto-reset-persistent
pgsql.auto_reset_persistent = Off

; Maximum number of persistent links.  -1 means no limit.
; https://php.net/pgsql.max-persistent
pgsql.max_persistent = -1

; Maximum number of links (persistent + non-persistent).  -1 means no limit.
; https://php.net/pgsql.max-links
pgsql.max_links = -1

; Whether or not to ignore PostgreSQL backend notice messages.
; Notice message logging requires a little overhead.
; https://php.net/pgsql.ignore-notice
pgsql.ignore_notice = 0

; Whether or not to log PostgreSQL backend notice messages.
; Unless pgsql.ignore_notice=0, the module cannot log notice messages.
; https://php.net/pgsql.log-notice
pgsql.log_notice = 0

[bcmath]
; Number of decimal digits for all bcmath functions.
; https://php.net/bcmath.scale
bcmath.scale = 0

[browscap]
; https://php.net/browscap
;browscap = extra/browscap.ini

[Session]
; Handler used to store/retrieve data.
; https://php.net/session.save-handler
session.save_handler = files

; Argument passed to save_handler.  In the case of files, this is the path
; where data files are stored. Note: Windows users have to change this
; variable in order to use PHP's session functions.
;
; The path can be defined as:
;
;     session.save_path = "N;/path"
;
; where N is an integer.  Instead of storing all the session files in
; /path, what this will do is use subdirectories N-levels deep, and
; store the session data in those directories.  
This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via. 
email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value could be between 22 to 256.\n; Shorter length than default is supported only for compatibility reason.\n; Users should use 32 or more chars.\n; https://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags is special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. 
No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; https://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in 
seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini! (For turning assertions on and off at run-time, see assert.active, when zend.assertions = 1)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n; Assert(expr); active by default.\n; https://php.net/assert.active\n;assert.active = On\n\n; Throw an AssertionError on failed assertions\n; https://php.net/assert.exception\n;assert.exception = On\n\n; Issue a PHP warning for each failed assertion. 
(Overridden by assert.exception if active)\n; https://php.net/assert.warning\n;assert.warning = On\n\n; Don't bail out by default.\n; https://php.net/assert.bail\n;assert.bail = Off\n\n; User-function to be called if an assertion fails.\n; https://php.net/assert.callback\n;assert.callback = 0\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of second while cached file will be used\n; instead of original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; Check the cache checksum each N requests.\n; The default value of \"0\" means that the checks are disabled.\n;opcache.consistency_checks=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. 
The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; This should improve performance, but requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; facilitates to let the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.2.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php83-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable.\n; 3. A number of predefined registry keys on Windows\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. 
E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommending using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; session.sid_length\n;   Default Value: 32\n;   Development Value: 26\n;   Production Value: 26\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; 
zend.assertions\n;   Default Value: 1\n;   Development Value: 1\n;   Production Value: -1\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. 
If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. 
Instead, explicitly set the output handler using ob_start().
;   Using this ini directive may cause problems unless you know what the script
;   is doing.
; Note: You cannot use both "mb_output_handler" with "ob_iconv_handler"
;   and you cannot use both "ob_gzhandler" and "zlib.output_compression".
; Note: output_handler must be empty if this is set 'On' !!!!
;   Instead you must use zlib.output_handler.
; https://php.net/output-handler
;output_handler =

; URL rewriter function rewrites URL on the fly by using
; output buffer. You can set target tags by this configuration.
; "form" tag is a special tag. It will add a hidden input tag to pass values.
; Refer to session.trans_sid_tags for usage.
; Default Value: "form="
; Development Value: "form="
; Production Value: "form="
;url_rewriter.tags

; URL rewriter will not rewrite absolute URL nor form by default. To enable
; absolute URL rewrite, allowed hosts must be defined at RUNTIME.
; Refer to session.trans_sid_hosts for more details.
; Default Value: ""
; Development Value: ""
; Production Value: ""
;url_rewriter.hosts

; Transparent output compression using the zlib library
; Valid values for this option are 'off', 'on', or a specific buffer size
; to be used for compression (default is 4KB)
; Note: Resulting chunk size may vary due to nature of compression. PHP
;   outputs chunks that are a few hundred bytes each as a result of
;   compression. If you prefer a larger chunk size for better
;   performance, enable output_buffering in addition.
; Note: You need to use zlib.output_handler instead of the standard
;   output_handler, or otherwise the output will be corrupted.
; https://php.net/zlib.output-compression
zlib.output_compression = Off

; https://php.net/zlib.output-compression-level
;zlib.output_compression_level = -1

; You cannot specify additional output handlers if zlib.output_compression
; is activated here. 
This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  
CP936, Big5, CP949 and Shift_JIS are examples of such
; encodings.  To use this feature, the mbstring extension must be enabled.
;zend.multibyte = Off

; Allows setting the default encoding for scripts.  This value will be used
; unless a "declare(encoding=...)" directive appears at the top of the script.
; Only takes effect if zend.multibyte is set.
;zend.script_encoding =

; Allows including or excluding arguments from stack traces generated for exceptions.
; In production, it is recommended to turn this setting on to prohibit the output
; of sensitive information in stack traces.
; Default Value: Off
; Development Value: Off
; Production Value: On
zend.exception_ignore_args = On

; Allows setting the maximum string length in an argument of a stringified stack trace
; to a value between 0 and 1000000.
; This has no effect when zend.exception_ignore_args is enabled.
; Default Value: 15
; Development Value: 15
; Production Value: 0
; In production, it is recommended to set this to 0 to reduce the output
; of sensitive information in stack traces.
zend.exception_string_param_max_len = 0

;;;;;;;;;;;;;;;;;
; Miscellaneous ;
;;;;;;;;;;;;;;;;;

; Decides whether PHP may expose the fact that it is installed on the server
; (e.g. by adding its signature to the Web server header).  It is no security
; threat in any way, but it makes it possible to determine whether you use PHP
; on your server or not.
; https://php.net/expose-php
expose_php = On

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;

; Maximum execution time of each script, in seconds
; https://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 3600

; Maximum amount of time each script may spend parsing request data. 
It's a good
; idea to limit this time on production servers in order to eliminate unexpectedly
; long-running scripts.
; Note: This directive is hardcoded to -1 for the CLI SAPI
; Default Value: -1 (Unlimited)
; Development Value: 60 (60 seconds)
; Production Value: 60 (60 seconds)
; https://php.net/max-input-time
max_input_time = 3600

; Maximum input variable nesting level
; https://php.net/max-input-nesting-level
;max_input_nesting_level = 64

; How many GET/POST/COOKIE input variables may be accepted
max_input_vars = 9999

; How many multipart body parts (combined input variable and file uploads) may
; be accepted.
; Default Value: -1 (Sum of max_input_vars and max_file_uploads)
;max_multipart_body_parts = 1500

; Maximum amount of memory a script may consume
; https://php.net/memory-limit
memory_limit = 395M

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Error handling and logging ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

; This directive informs PHP of which errors, warnings and notices you would like
; it to take action for. The recommended way of setting values for this
; directive is through the use of the error level constants and bitwise
; operators. The error level constants are below here for convenience as well as
; some common settings and their meanings.
; By default, PHP is set to take action on all errors, notices and warnings EXCEPT
; those related to E_NOTICE and E_STRICT, which together cover best practices and
; recommended coding standards in PHP. For performance reasons, this is the
; recommended error reporting setting. Your production server shouldn't be wasting
; resources complaining about best practices and coding standards. That's what
; development servers and development settings are for.
; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL\n; Development Value: 
E_ALL
; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT
; https://php.net/error-reporting
error_reporting = 1

; This directive controls whether, and where, PHP will output errors,
; notices and warnings. Error output is very useful during development, but
; it could be very dangerous in production environments. Depending on the code
; which is triggering the error, sensitive information could potentially leak
; out of your application such as database usernames and passwords or worse.
; For production environments, we recommend logging errors rather than
; sending them to STDOUT.
; Possible Values:
;   Off = Do not display any errors
;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)
;   On or stdout = Display errors to STDOUT
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-errors
display_errors = Off

; The display of errors which occur during PHP's startup sequence is handled
; separately from display_errors. We strongly recommend you set this to 'off'
; for production servers to avoid leaking configuration details.
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-startup-errors
display_startup_errors = Off

; Besides displaying errors, PHP can also log errors to locations such as a
; server-specific log, STDERR, or a location specified by the error_log
; directive found below. While errors should not be displayed on production
; servers, they should still be monitored and logging is a great way to do that.
; Default Value: Off
; Development Value: On
; Production Value: On
; https://php.net/log-errors
log_errors = On

; Do not log repeated messages. Repeated errors must occur in the same file on
; the same line unless ignore_repeated_source is set true.
; https://php.net/ignore-repeated-errors
ignore_repeated_errors = Off

; Ignore source of message when ignoring repeated messages. 
When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; https://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; https://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. 
PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_83\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. 
G,P,C,E & S are abbreviations for the following respective super
; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty
; paid for the registration of these arrays and because ENV is not as commonly
; used as the others, ENV is not recommended on production servers. You
; can still get access to the environment variables through getenv() should you
; need to.
; Default Value: "EGPCS"
; Development Value: "GPCS"
; Production Value: "GPCS"
; https://php.net/variables-order
variables_order = "GPCS"

; This directive determines which super global data (G,P & C) should be
; registered into the super global array REQUEST. If so, it also determines
; the order in which that data is registered. The values for this directive
; are specified in the same manner as the variables_order directive,
; EXCEPT one. Leaving this value empty will cause PHP to use the value set
; in the variables_order directive. It does not mean it will leave the super
; globals array REQUEST empty.
; Default Value: None
; Development Value: "GP"
; Production Value: "GP"
; https://php.net/request-order
request_order = "GP"

; This directive determines whether PHP registers $argv & $argc each time it
; runs. $argv contains an array of all the arguments passed to PHP when a script
; is invoked. $argc contains an integer representing the number of arguments
; that were passed when the script was invoked. These arrays are extremely
; useful when running scripts from the command line. When this directive is
; enabled, registering these variables consumes CPU cycles and memory each time
; a script is executed. 
For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. 
To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php83/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternative is to use the
; cgi.force_redirect configuration below
; https://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username used only
; if nonempty.
; https://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; https://php.net/extension-dir
;extension_dir = "./"
; On windows:
;extension_dir = "ext"
extension_dir = "/opt/php83/lib/php/extensions/no-debug-non-zts-20230831/"

; Directory where the temporary files should be placed.
; Defaults to the system default (see sys_get_temp_dir)
sys_temp_dir = "/tmp"

; Whether or not to enable the dl() function.  The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; https://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers.  Left undefined, PHP turns this on by default.  You can
; turn it off here AT YOUR OWN RISK
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; https://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; if cgi.nph is enabled it will force cgi to always send Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution.  Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; https://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting
; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts
; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.
; https://php.net/cgi.fix-pathinfo
;cgi.fix_pathinfo=1

; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside
; of the web tree and people will not be able to circumvent .htaccess security.
;cgi.discard_path=1

; FastCGI under IIS supports the ability to impersonate
; security tokens of the calling client.  This allows IIS to define the
; security context that the request runs under.  mod_fastcgi under Apache
; does not currently support this feature (03/17/2002).
; Set to 1 if running under IIS.  Default is zero.
; https://php.net/fastcgi.impersonate
;fastcgi.impersonate = 1

; Disable logging through FastCGI connection. PHP's default behavior is to enable
; this feature.
;fastcgi.logging = 0

; cgi.rfc2616_headers configuration option tells PHP what type of headers to
; use when sending HTTP response code. If set to 0, PHP sends Status: header that
; is supported by Apache. When this option is set to 1, PHP will send
; RFC2616 compliant header.
; Default is zero.
; https://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!
; (shebang) at the top of the running script. This line might be needed if the
; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; https://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; https://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; https://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please
; move to the new ('extension=<ext>') syntax.
;
; Notes for Windows environments :
;
; - Many DLL files are located in the ext/
;   extension folders as well as the separate PECL DLL download.
;   Be sure to appropriately set the extension_dir directive.
;
;extension=bz2

; The ldap extension must be before curl if OpenSSL 1.0.2 and OpenLDAP is used
; otherwise it results in segfault when unloading after using SASL.
; See https://github.com/php/php-src/issues/8620 for more info.
;extension=ldap

;extension=curl
;extension=ffi
;extension=ftp
;extension=fileinfo
;extension=gd
;extension=gettext
;extension=gmp
;extension=intl
;extension=imap
;extension=mbstring
;extension=exif      ; Must be after mbstring as it depends on it
;extension=mysqli
;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client
;extension=oci8_19  ; Use with Oracle Database 19 Instant Client
;extension=odbc
;extension=openssl
;extension=pdo_firebird
;extension=pdo_mysql
;extension=pdo_oci
;extension=pdo_odbc
;extension=pdo_pgsql
;extension=pdo_sqlite
;extension=pgsql
;extension=shmop

; The MIBS data available in the PHP distribution must be installed.
; See https://www.php.net/manual/en/snmp.installation.php
;extension=snmp

;extension=soap
;extension=sockets
;extension=sodium
;extension=sqlite3
;extension=tidy
;extension=xsl
;extension=zip

;zend_extension=opcache

;;;;;;;;;;;;;;;;;;;
; Module Settings ;
;;;;;;;;;;;;;;;;;;;

[CLI Server]
; Whether the CLI web server uses ANSI color coding in its terminal output.
cli_server.color = On

[Date]
; Defines the default timezone used by the date functions
; https://php.net/date.timezone
date.timezone = "UTC"

; https://php.net/date.default-latitude
;date.default_latitude = 31.7667

; https://php.net/date.default-longitude
;date.default_longitude = 35.2333

; https://php.net/date.sunrise-zenith
;date.sunrise_zenith = 90.833333

; 
https://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; https://php.net/filter.default\n;filter.default = unsafe_raw\n\n; https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. 
The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; Use mixed LF and CRLF line separators to keep compatibility with some\n; RFC 2822 non conformant MTA.\nmail.mixed_lf_and_crlf = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in 
seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; https://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; https://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; https://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; https://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. 
Using 0 disables statement caching.\n; https://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables row prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; https://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Tuning: Sets the amount of LOB data that is internally returned from\n; Oracle Database when an Oracle LOB locator is initially retrieved as\n; part of a query. Setting this can improve performance by reducing\n; round-trips.\n; https://php.net/oci8.prefetch-lob-size\n; oci8.prefetch_lob_size = 0\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; https://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; https://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Always detect broken persistent links with pg_pconnect().\n; The auto reset feature requires a little overhead.\n; https://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  
-1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether to ignore PostgreSQL backend Notice messages.\n; Notice message logging requires a little overhead.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether to log PostgreSQL backend Notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log notice messages.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. 
Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate 
Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may put your users' security at risk.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via email/irc/etc.\n; - A URL containing an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID\n;   using a URL stored in the browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set the session ID character length. This value can be between 22 and 256.\n; Lengths shorter than the default are supported only for compatibility\n; reasons. Users should use 32 or more chars.\n; https://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include it here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. 
<form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; The URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME, i.e. use ini_set().\n; The <form> tag is special. PHP will check the action attribute's URL\n; regardless of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used as the allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; https://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. 
Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini!\n; (For turning assertions on and off at run-time, toggle zend.assertions between the values 1 and 0)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy.\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-HTML content\n; such as dynamic images.\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables the WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where the SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds during which the cached file\n; will be used instead of the original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; Under certain circumstances (if only a single global PHP process is\n; started from which all others fork), this can increase performance\n; by a tiny amount because TLB misses are reduced.  On the other hand, this\n; delays PHP startup, increases memory usage and degrades performance\n; under memory pressure - use with care.\n; Requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; facilitates to let the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.3.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php83-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php83-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php83-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php83\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php83/sbin/php-fpm\nphp_fpm_CONF=/opt/php83/etc/php83-fpm.conf\nphp_fpm_PID=/run/php83-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php83/etc/php83.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php83-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php83-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php83-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php83-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php83-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php83-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php83-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php83-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php83-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php83-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php83-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php83). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php83 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php83/var\n; Default Value: none\npid = /run/php83-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php83/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php83-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php83-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless it is specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php83/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php83.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable.\n; 3. A number of predefined registry keys on Windows\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. 
E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; session.sid_bits_per_character\n;   Default Value: 4\n;   Development Value: 5\n;   Production Value: 5\n\n; session.sid_length\n;   Default Value: 32\n;   Development Value: 26\n;   Production Value: 26\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; 
zend.assertions\n;   Default Value: 1\n;   Development Value: 1\n;   Production Value: -1\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. 
If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. 
Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. 
This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php83:/dev/urandom\"\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. 
by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; https://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; https://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; https://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; https://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; How many multipart body parts (combined input variable and file uploads) may\n; be accepted.\n; Default Value: -1 (Sum of max_input_vars and max_file_uploads)\n;max_multipart_body_parts = 1500\n\n; Maximum amount of memory a script may consume\n; https://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. 
The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE and E_STRICT, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommend error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_STRICT          - run-time notices, enable to have PHP suggest changes\n;                     to your code which will ensure the best interoperability\n;                     and forward compatibility of your code\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - 
user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_ALL & ~E_NOTICE & ~E_STRICT  (Show all errors, except for notices and coding standards warnings.)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED & ~E_STRICT\n; https://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence are handled\n; separately from display_errors. 
We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on productions\n; servers they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; https://php.net/log-errors\nlog_errors = On\n\n; Do not log repeated messages. Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; https://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; https://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; https://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. 
This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_83\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. 
Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered a separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; https://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST.
If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive, with\n; one exception: leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; https://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper.
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php83/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under 
any web server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; https://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; https://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; https://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php83/lib/php/extensions/no-debug-non-zts-20230831/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; https://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; https://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; https://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; https://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; https://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; https://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI.
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; https://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; https://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; https://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the ext/\n;   extension folders as well as the separate PECL DLL download.\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n\n; The ldap extension must be before curl if OpenSSL 1.0.2 and OpenLDAP is used\n; otherwise it results in segfault when unloading after using SASL.\n; See https://github.com/php/php-src/issues/8620 for more info.\n;extension=ldap\n\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=imap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=oci8_12c  ; Use with Oracle Database 12c Instant Client\n;extension=oci8_19  ; Use with Oracle Database 19 Instant Client\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_oci\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See https://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n;extension=zip\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; https://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; https://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; https://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; https://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; 
https://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; https://php.net/filter.default\n;filter.default = unsafe_raw\n\n; https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[imap]\n; rsh/ssh logins are disabled by default. Use this INI entry if you want to\n; enable them. Note that the IMAP library does not filter mailbox names before\n; passing them to rsh/ssh command, thus passing untrusted data to this function\n; with rsh/ssh enabled is insecure.\n;imap.enable_insecure_rsh=0\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. 
The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; Use mixed LF and CRLF line separators to keep compatibility with some\n; RFC 2822 non conformant MTA.\nmail.mixed_lf_and_crlf = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  
If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; Allow or prevent reconnect\nmysqli.reconnect = Off\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout 
for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[OCI8]\n\n; Connection: Enables privileged connections using external\n; credentials (OCI_SYSOPER, OCI_SYSDBA)\n; https://php.net/oci8.privileged-connect\n;oci8.privileged_connect = Off\n\n; Connection: The maximum number of persistent OCI8 connections per\n; process. Using -1 means no limit.\n; https://php.net/oci8.max-persistent\n;oci8.max_persistent = -1\n\n; Connection: The maximum number of seconds a process is allowed to\n; maintain an idle persistent connection. Using -1 means idle\n; persistent connections will be maintained forever.\n; https://php.net/oci8.persistent-timeout\n;oci8.persistent_timeout = -1\n\n; Connection: The number of seconds that must pass before issuing a\n; ping during oci_pconnect() to check the connection validity. When\n; set to 0, each oci_pconnect() will cause a ping. Using -1 disables\n; pings completely.\n; https://php.net/oci8.ping-interval\n;oci8.ping_interval = 60\n\n; Connection: Set this to a user chosen connection class to be used\n; for all pooled server requests with Oracle Database Resident\n; Connection Pooling (DRCP).  To use DRCP, this value should be set to\n; the same string for all web servers running the same application,\n; the database pool must be configured, and the connection string must\n; specify to use a pooled server.\n;oci8.connection_class =\n\n; High Availability: Using On lets PHP receive Fast Application\n; Notification (FAN) events generated when a database node fails. The\n; database must also be configured to post FAN events.\n;oci8.events = Off\n\n; Tuning: This option enables statement caching, and specifies how\n; many statements to cache. 
Using 0 disables statement caching.\n; https://php.net/oci8.statement-cache-size\n;oci8.statement_cache_size = 20\n\n; Tuning: Enables row prefetching and sets the default number of\n; rows that will be fetched automatically after statement execution.\n; https://php.net/oci8.default-prefetch\n;oci8.default_prefetch = 100\n\n; Tuning: Sets the amount of LOB data that is internally returned from\n; Oracle Database when an Oracle LOB locator is initially retrieved as\n; part of a query. Setting this can improve performance by reducing\n; round-trips.\n; https://php.net/oci8.prefetch-lob-size\n; oci8.prefetch_lob_size = 0\n\n; Compatibility. Using On means oci_close() will not close\n; oci_connect() and oci_new_connect() connections.\n; https://php.net/oci8.old-oci-close-semantics\n;oci8.old_oci_close_semantics = Off\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; https://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overheads.\n; https://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  
-1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backends Notice message or not.\n; Notice message logging require a little overheads.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backends Notice message or not.\n; Unless pgsql.ignore_notice=0, module cannot log notice message.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. 
Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate 
Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may access your site with the same session ID,\n;   always using a URL stored in the browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; Set session ID character length. This value could be between 22 and 256.\n; A shorter length than the default is supported only for compatibility reasons.\n; Users should use 32 or more chars.\n; https://php.net/session.sid-length\n; Default Value: 32\n; Development Value: 26\n; Production Value: 26\nsession.sid_length = 26\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs.
The <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; The URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME, i.e. using ini_set().\n; The <form> tag is special. PHP will check the action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used as the allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Define how many bits are stored in each character when converting\n; the binary hash data to something readable.\n; Possible values:\n;   4  (4 bits: 0-9, a-f)\n;   5  (5 bits: 0-9, a-v)\n;   6  (6 bits: 0-9, a-z, A-Z, \"-\", \",\")\n; Default Value: 4\n; Development Value: 5\n; Production Value: 5\n; https://php.net/session.hash-bits-per-character\nsession.sid_bits_per_character = 5\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. 
Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini!\n; (For turning assertions on and off at run-time, toggle zend.assertions between the values 1 and 0)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when a character cannot be converted\n; from one encoding to another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies the maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies the maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decoder to ignore warnings and try to create\n; a gd image. The warnings will then be displayed as notices.\n; Disabled by default.\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by the corresponding encode setting. When empty, mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting cannot be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables the WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where the SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds during which the cached file will be used\n; instead of the original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path\n; starts with the specified string. The default \"\" means no restriction.\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows you to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when a script is loaded from the file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; Under certain circumstances (if only a single global PHP process is\n; started from which all others fork), this can increase performance\n; by a tiny amount because TLB misses are reduced.  On the other hand, this\n; delays PHP startup, increases memory usage and degrades performance\n; under memory pressure - use with care.\n; Requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in a chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; allows the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.3.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php84-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable.\n; 3. A number of predefined registry keys on Windows\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. 
E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware that these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.assertions\n;   Default Value: 1\n;   Development Value: 1\n;   Production Value: -1\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting fewer packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People writing portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what the script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; The URL rewriter function rewrites URLs on the fly using the\n; output buffer. You can set target tags by this configuration.\n; The \"form\" tag is special. It will add a hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; The URL rewriter will not rewrite absolute URLs or forms by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  
This value will be used
; unless a "declare(encoding=...)" directive appears at the top of the script.
; Only takes effect if zend.multibyte is enabled.
;zend.script_encoding =

; Allows including or excluding arguments from stack traces generated for exceptions.
; In production, it is recommended to turn this setting on to prohibit the output
; of sensitive information in stack traces.
; Default Value: Off
; Development Value: Off
; Production Value: On
zend.exception_ignore_args = On

; Allows setting the maximum string length in an argument of a stringified stack trace
; to a value between 0 and 1000000.
; This has no effect when zend.exception_ignore_args is enabled.
; Default Value: 15
; Development Value: 15
; Production Value: 0
; In production, it is recommended to set this to 0 to reduce the output
; of sensitive information in stack traces.
zend.exception_string_param_max_len = 0

;;;;;;;;;;;;;;;;;
; Miscellaneous ;
;;;;;;;;;;;;;;;;;

; Decides whether PHP may expose the fact that it is installed on the server
; (e.g. by adding its signature to the Web server header).  It is no security
; threat in any way, but it makes it possible to determine whether you use PHP
; on your server or not.
; https://php.net/expose-php
expose_php = On

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;

; Maximum execution time of each script, in seconds
; https://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 3600

; Maximum amount of time each script may spend parsing request data. 
It's a good
; idea to limit this time on production servers in order to eliminate unexpectedly
; long-running scripts.
; Note: This directive is hardcoded to -1 for the CLI SAPI
; Default Value: -1 (Unlimited)
; Development Value: 60 (60 seconds)
; Production Value: 60 (60 seconds)
; https://php.net/max-input-time
max_input_time = 3600

; Maximum input variable nesting level
; https://php.net/max-input-nesting-level
;max_input_nesting_level = 64

; How many GET/POST/COOKIE input variables may be accepted
max_input_vars = 9999

; How many multipart body parts (combined input variable and file uploads) may
; be accepted.
; Default Value: -1 (Sum of max_input_vars and max_file_uploads)
;max_multipart_body_parts = 1500

; Maximum amount of memory a script may consume
; https://php.net/memory-limit
memory_limit = 395M

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Error handling and logging ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

; This directive informs PHP of which errors, warnings and notices you would like
; it to take action for. The recommended way of setting values for this
; directive is through the use of the error level constants and bitwise
; operators. The error level constants are below here for convenience as well as
; some common settings and their meanings.
; By default, PHP is set to take action on all errors, notices and warnings EXCEPT
; those related to E_NOTICE, which cover best practices and
; recommended coding standards in PHP. For performance reasons, this is the
; recommended error reporting setting. Your production server shouldn't be wasting
; resources complaining about best practices and coding standards. That's what
; development servers and development settings are for.
; Note: The php.ini-development file has this setting as E_ALL. 
This
; means it pretty much reports everything, which is exactly what you want during
; development and early testing.
;
; Error Level Constants:
; E_ALL             - All errors and warnings
; E_ERROR           - fatal run-time errors
; E_RECOVERABLE_ERROR  - almost fatal run-time errors
; E_WARNING         - run-time warnings (non-fatal errors)
; E_PARSE           - compile-time parse errors
; E_NOTICE          - run-time notices (these are warnings which often result
;                     from a bug in your code, but it's possible that it was
;                     intentional (e.g., using an uninitialized variable and
;                     relying on the fact it is automatically initialized to an
;                     empty string))
; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup
; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's
;                     initial startup
; E_COMPILE_ERROR   - fatal compile-time errors
; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)
; E_USER_ERROR      - user-generated error message
; E_USER_WARNING    - user-generated warning message
; E_USER_NOTICE     - user-generated notice message
; E_DEPRECATED      - warn about code that will not work in future versions
;                     of PHP
; E_USER_DEPRECATED - user-generated deprecation warnings
;
; Common Values:
;   E_ALL (Show all errors, warnings and notices including coding standards.)
;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)
;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)
; Default Value: E_ALL
; Development Value: E_ALL
; Production Value: E_ALL & ~E_DEPRECATED
; https://php.net/error-reporting
error_reporting = 1

; This directive controls whether or not and where PHP will output errors,
; notices and warnings to. Error output is very useful during development, but
; it could be very dangerous in production environments. 
Depending on the code
; which is triggering the error, sensitive information could potentially leak
; out of your application, such as database usernames and passwords or worse.
; For production environments, we recommend logging errors rather than
; sending them to STDOUT.
; Possible Values:
;   Off = Do not display any errors
;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)
;   On or stdout = Display errors to STDOUT
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-errors
display_errors = Off

; The display of errors which occur during PHP's startup sequence is handled
; separately from display_errors. We strongly recommend you set this to 'off'
; for production servers to avoid leaking configuration details.
; Default Value: On
; Development Value: On
; Production Value: Off
; https://php.net/display-startup-errors
display_startup_errors = Off

; Besides displaying errors, PHP can also log errors to locations such as a
; server-specific log, STDERR, or a location specified by the error_log
; directive found below. While errors should not be displayed on production
; servers, they should still be monitored, and logging is a great way to do that.
; Default Value: Off
; Development Value: On
; Production Value: On
; https://php.net/log-errors
log_errors = On

; Do not log repeated messages. Repeated errors must occur in the same file on
; the same line unless ignore_repeated_source is set to true.
; https://php.net/ignore-repeated-errors
ignore_repeated_errors = Off

; Ignore the source of a message when ignoring repeated messages. When this
; setting is On, you will not log errors with repeated messages from different
; files or source lines.
; https://php.net/ignore-repeated-source
ignore_repeated_source = Off

; If this parameter is set to Off, then memory leaks will not be shown (on
; stdout or in the log). 
This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; https://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_84\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. 
There is a performance penalty
; paid for the registration of these arrays, and because ENV is not as commonly
; used as the others, ENV is not recommended on production servers. You
; can still get access to the environment variables through getenv() should you
; need to.
; Default Value: "EGPCS"
; Development Value: "GPCS"
; Production Value: "GPCS"
; https://php.net/variables-order
variables_order = "GPCS"

; This directive determines which super global data (G,P & C) should be
; registered into the super global array REQUEST. If so, it also determines
; the order in which that data is registered. The values for this directive
; are specified in the same manner as the variables_order directive,
; EXCEPT one. Leaving this value empty will cause PHP to use the value set
; in the variables_order directive. It does not mean it will leave the super
; globals array REQUEST empty.
; Default Value: None
; Development Value: "GP"
; Production Value: "GP"
; https://php.net/request-order
request_order = "GP"

; This directive determines whether PHP registers $argv & $argc each time it
; runs. $argv contains an array of all the arguments passed to PHP when a script
; is invoked. $argc contains an integer representing the number of arguments
; that were passed when the script was invoked. These arrays are extremely
; useful when running scripts from the command line. When this directive is
; enabled, registering these variables consumes CPU cycles and memory each time
; a script is executed. For performance reasons, this feature should be disabled
; on production servers.
; Note: This directive is hardcoded to On for the CLI SAPI
; Default Value: On
; Development Value: Off
; Production Value: Off
; https://php.net/register-argc-argv
register_argc_argv = Off

; When enabled, the ENV, REQUEST and SERVER variables are created when they're
; first used (Just In Time) instead of when the script starts. 
If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. 
To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php84/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternate is to use the
; cgi.force_redirect configuration below
; https://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username used only
; if nonempty.
; https://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; https://php.net/extension-dir
;extension_dir = "./"
; On windows:
;extension_dir = "ext"
extension_dir = "/opt/php84/lib/php/extensions/no-debug-non-zts-20240924/"

; Directory where the temporary files should be placed.
; Defaults to the system default (see sys_get_temp_dir)
sys_temp_dir = "/tmp"

; Whether or not to enable the dl() function.  The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; https://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers.  Left undefined, PHP turns this on by default.  You can
; turn it off here AT YOUR OWN RISK.
; **You CAN safely turn this off for IIS, in fact, you MUST.**
; https://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; If cgi.nph is enabled, it will force CGI to always send Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; If cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution.  Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; https://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =

; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting
; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts
; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.
; https://php.net/cgi.fix-pathinfo
;cgi.fix_pathinfo=1

; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside
; of the web tree and people will not be able to circumvent .htaccess security.
;cgi.discard_path=1

; FastCGI under IIS supports the ability to impersonate
; security tokens of the calling client.  This allows IIS to define the
; security context that the request runs under.  mod_fastcgi under Apache
; does not currently support this feature (03/17/2002)
; Set to 1 if running under IIS.  Default is zero.
; https://php.net/fastcgi.impersonate
;fastcgi.impersonate = 1

; Disable logging through FastCGI connection. PHP's default behavior is to enable
; this feature.
;fastcgi.logging = 0

; cgi.rfc2616_headers configuration option tells PHP what type of headers to
; use when sending HTTP response code. If set to 0, PHP sends a Status: header
; that is supported by Apache. When this option is set to 1, PHP will send an
; RFC 2616 compliant header.
; Default is zero.
; https://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

; cgi.check_shebang_line controls whether CGI PHP checks for a line starting with #!
; (shebang) at the top of the running script. This line might be needed if the
; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.
; https://php.net/user-agent
;user_agent="PHP"

; Default timeout for socket based streams (seconds)
; https://php.net/default-socket-timeout
default_socket_timeout = 3600

; If your scripts have to deal with files from Macintosh systems,
; or you are running on a Mac and need to deal with files from
; unix or win32 systems, setting this flag will cause PHP to
; automatically detect the EOL character in those files so that
; fgets() and file() will work regardless of the source of the file.
; https://php.net/auto-detect-line-endings
auto_detect_line_endings = On

;;;;;;;;;;;;;;;;;;;;;;
; Dynamic Extensions ;
;;;;;;;;;;;;;;;;;;;;;;

; If you wish to have an extension loaded automatically, use the following
; syntax:
;
;   extension=modulename
;
; For example:
;
;   extension=mysqli
;
; When the extension library to load is not located in the default extension
; directory, you may specify an absolute path to the library file:
;
;   extension=/path/to/extension/mysqli.so
;
; Note: The syntax used in previous PHP versions ('extension=<ext>.so' and
; 'extension=php_<ext>.dll') is supported for legacy reasons and may be
; deprecated in a future PHP major version. 
So, when it is possible, please
; move to the new ('extension=<ext>') syntax.
;
; Notes for Windows environments:
;
; - Many DLL files are located in the ext/
;   extension folders as well as the separate PECL DLL download.
;   Be sure to appropriately set the extension_dir directive.
;
;extension=bz2
;extension=curl
;extension=ffi
;extension=ftp
;extension=fileinfo
;extension=gd
;extension=gettext
;extension=gmp
;extension=intl
;extension=ldap
;extension=mbstring
;extension=exif      ; Must be after mbstring as it depends on it
;extension=mysqli
;extension=odbc
;extension=openssl
;extension=pdo_firebird
;extension=pdo_mysql
;extension=pdo_odbc
;extension=pdo_pgsql
;extension=pdo_sqlite
;extension=pgsql
;extension=shmop

; The MIBS data available in the PHP distribution must be installed.
; See https://www.php.net/manual/en/snmp.installation.php
;extension=snmp

;extension=soap
;extension=sockets
;extension=sodium
;extension=sqlite3
;extension=tidy
;extension=xsl
;extension=zip

;zend_extension=opcache

;;;;;;;;;;;;;;;;;;;
; Module Settings ;
;;;;;;;;;;;;;;;;;;;

[CLI Server]
; Whether the CLI web server uses ANSI color coding in its terminal output.
cli_server.color = On

[Date]
; Defines the default timezone used by the date functions
; https://php.net/date.timezone
date.timezone = "UTC"

; https://php.net/date.default-latitude
;date.default_latitude = 31.7667

; https://php.net/date.default-longitude
;date.default_longitude = 35.2333

; https://php.net/date.sunrise-zenith
;date.sunrise_zenith = 90.833333

; https://php.net/date.sunset-zenith
;date.sunset_zenith = 90.833333

[filter]
; https://php.net/filter.default
;filter.default = unsafe_raw

; https://php.net/filter.default-flags
;filter.default_flags =

[iconv]
; Use of this INI entry is deprecated, use global input_encoding instead.
; If empty, default_charset or input_encoding or iconv.input_encoding is used.
; The 
precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. 
FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; Use mixed LF and CRLF line separators to keep compatibility with some\n; RFC 2822 non conformant MTA.\nmail.mixed_lf_and_crlf = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.
; https://php.net/mysqli.max-persistent
mysqli.max_persistent = -1

; Allow accessing, from PHP's perspective, local files with LOAD DATA statements
; https://php.net/mysqli.allow_local_infile
;mysqli.allow_local_infile = On

; It allows the user to specify a folder where files that can be sent via LOAD DATA
; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.
;mysqli.local_infile_directory =

; Allow or prevent persistent links.
; https://php.net/mysqli.allow-persistent
mysqli.allow_persistent = On

; Maximum number of links.  -1 means no limit.
; https://php.net/mysqli.max-links
mysqli.max_links = -1

; Default port number for mysqli_connect().  If unset, mysqli_connect() will use
; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the
; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look
; at MYSQL_PORT.
; https://php.net/mysqli.default-port
mysqli.default_port = 3306

; Default socket name for local MySQL connects.  If empty, uses the built-in
; MySQL defaults.
; https://php.net/mysqli.default-socket
mysqli.default_socket =

; Default host for mysqli_connect() (doesn't apply in safe mode).
; https://php.net/mysqli.default-host
mysqli.default_host =

; Default user for mysqli_connect() (doesn't apply in safe mode).
; https://php.net/mysqli.default-user
mysqli.default_user =

; Default password for mysqli_connect() (doesn't apply in safe mode).
; Note that it is generally a *bad* idea to store passwords in this file.
; *Any* user with PHP access can run 'echo get_cfg_var("mysqli.default_pw")'
; and reveal this password!  
And of course, any users with read access to this
; file will be able to reveal the password as well.
; https://php.net/mysqli.default-pw
mysqli.default_pw =

; If this option is enabled, closing a persistent connection will roll back
; any pending transactions of this connection before it is put back
; into the persistent connection pool.
;mysqli.rollback_on_cached_plink = Off

[mysqlnd]
; Enable / Disable collection of general statistics by mysqlnd which can be
; used to tune and monitor MySQL operations.
mysqlnd.collect_statistics = 0

; Enable / Disable collection of memory usage statistics by mysqlnd which can be
; used to tune and monitor MySQL operations.
mysqlnd.collect_memory_statistics = 0

; Records communication from all extensions using mysqlnd to the specified log
; file.
; https://php.net/mysqlnd.debug
;mysqlnd.debug =

; Defines which queries will be logged.
;mysqlnd.log_mask = 0

; Default size of the mysqlnd memory pool, which is used by result sets.
mysqlnd.mempool_default_size = 64000

; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.
mysqlnd.net_cmd_buffer_size = 8192

; Size of a pre-allocated buffer used for reading data sent by the server in
; bytes.
mysqlnd.net_read_buffer_size = 131072

; Timeout for network requests in seconds.
;mysqlnd.net_read_timeout = 31536000

; SHA-256 Authentication Plugin related. File with the MySQL server public RSA
; key.
;mysqlnd.sha256_server_public_key =

[PostgreSQL]
; Allow or prevent persistent links.
; https://php.net/pgsql.allow-persistent
pgsql.allow_persistent = On

; Always detect broken persistent links with pg_pconnect().
; The auto reset feature requires a little overhead.
; https://php.net/pgsql.auto-reset-persistent
pgsql.auto_reset_persistent = Off

; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backends Notice message or not.\n; Notice message logging require a little overheads.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backends Notice message or not.\n; Unless pgsql.ignore_notice=0, module cannot log notice message.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. 
Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate 
Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via. email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. 
<form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags is special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. 
Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini!\n; (For turning assertions on and off at run-time, toggle zend.assertions between the values 1 and 0)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting must not be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of second while cached file will be used\n; instead of original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; Under certain circumstances (if only a single global PHP process is\n; started from which all others fork), this can increase performance\n; by a tiny amount because TLB misses are reduced.  On the other hand, this\n; delays PHP startup, increases memory usage and degrades performance\n; under memory pressure - use with care.\n; Requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; facilitates to let the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.4.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php84-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php84-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php84-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php84\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php84/sbin/php-fpm\nphp_fpm_CONF=/opt/php84/etc/php84-fpm.conf\nphp_fpm_PID=/run/php84-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php84/etc/php84.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php84-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php84-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php84-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php84-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php84-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php84-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php84-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php84-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php84-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php84-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php84-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php84). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php84 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php84/var\n; Default Value: none\npid = /run/php84-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php84/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php84-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php84-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been design to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lower priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless it specified otherwise\n; Default Value: no set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following is available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php84/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php84.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable.\n; 3. A number of predefined registry keys on Windows\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. 
E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\n; display_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; register_argc_argv\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.assertions\n;   Default Value: 1\n;   Development Value: 1\n;   Production Value: -1\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php84:/dev/urandom\"\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. 
by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; https://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; https://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; https://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; https://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; How many multipart body parts (combined input variable and file uploads) may\n; be accepted.\n; Default Value: -1 (Sum of max_input_vars and max_file_uploads)\n;max_multipart_body_parts = 1500\n\n; Maximum amount of memory a script may consume\n; https://php.net/memory-limit\nmemory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. 
The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_DEPRECATED, which covers best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated 
deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED\n; https://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence are handled\n; separately from display_errors. We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on productions\n; servers they should still be monitored and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; https://php.net/log-errors\nlog_errors = On\n\n; Do not log repeated messages. Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; https://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; https://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; If this parameter is set to Off, then memory leaks will not be shown (on\n; stdout or in the log). This is only effective in a debug compile, and if\n; error reporting includes E_WARNING in the allowed list\n; https://php.net/report-memleaks\nreport_memleaks = On\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. 
PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_84\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. 
If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\"\n; https://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. 
Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; https://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For performance reasons, this feature should be disabled\n; on production servers.\n; Note: This directive is hardcoded to On for the CLI SAPI\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php84/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under 
any web server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; https://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; https://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; https://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php84/lib/php/extensions/no-debug-non-zts-20240924/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; https://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; https://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; https://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; https://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; https://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; https://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; https://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; https://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; https://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>') syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the ext/\n;   extension folders as well as the separate PECL DLL download.\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=ldap\n;extension=mbstring\n;extension=exif      ; Must be after mbstring as it depends on it\n;extension=mysqli\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See https://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n;extension=zip\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; https://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; https://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; https://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; https://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; https://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; https://php.net/filter.default\n;filter.default = unsafe_raw\n\n; https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The 
precedence is: default_charset < input_encoding < iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n;intl.error_level = E_WARNING\n;intl.use_exceptions = 0\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. 
FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; Use mixed LF and CRLF line separators to keep compatibility with some\n; RFC 2822 non conformant MTA.\nmail.mixed_lf_and_crlf = Off\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().  If unset, mysqli_connect() will use\n; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the\n; compile-time value defined MYSQL_PORT (in that order).  Win32 will only look\n; at MYSQL_PORT.\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect() (doesn't apply in safe mode).\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect() (doesn't apply in safe mode).\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")'\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_memory_statistics = 0\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; https://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overheads.\n; https://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backend Notice messages or not.\n; Notice message logging requires a little overhead.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backend Notice messages or not.\n; Unless pgsql.ignore_notice=0, the module cannot log notice messages.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. 
Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate 
Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  
You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL containing an active session ID\n;   to another person via email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may always access your site with the same session ID\n;   using a URL stored in the browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. 
<form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; The <form> tag is special. PHP will check the action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. 
Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini!\n; (For turning assertions on and off at run-time, toggle zend.assertions between the values 1 and 0)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting must not be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds a cached file will be used\n; instead of the original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows you to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when a script is loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; Under certain circumstances (if only a single global PHP process is\n; started from which all others fork), this can increase performance\n; by a tiny amount because TLB misses are reduced.  On the other hand, this\n; delays PHP startup, increases memory usage and degrades performance\n; under memory pressure - use with care.\n; Requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. 
This directive\n; allows the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n[ffi]\n; FFI API restriction. 
Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\nzend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.4.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/php/php85-cli.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable.\n; 3. A number of predefined registry keys on Windows\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. 
E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; mysqlnd.collect_memory_statistics\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.assertions\n;   Default Value: 1\n;   Development Value: 1\n;   Production Value: -1\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically select the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\n;open_basedir =\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions =\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  
Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=5\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  
This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; https://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; https://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 3600\n\n; Maximum amount of time each script may spend parsing request data. 
It's a good\n; idea to limit this time on production servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; https://php.net/max-input-time\nmax_input_time = 3600\n\n; Maximum input variable nesting level\n; https://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; How many multipart body parts (combined input variable and file uploads) may\n; be accepted.\n; Default Value: -1 (Sum of max_input_vars and max_file_uploads)\n;max_multipart_body_parts = 1500\n\n; Maximum amount of memory a script may consume\n; https://php.net/memory-limit\nmemory_limit = 395M\nmax_memory_limit = -1\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_NOTICE, which together cover best practices and\n; recommended coding standards in PHP. For performance reasons, this is the\n; recommended error reporting setting. Your production server shouldn't be wasting\n; resources complaining about best practices and coding standards. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. 
This\n; means it pretty much reports everything which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED\n; https://php.net/error-reporting\nerror_reporting = 1\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. 
Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence are handled\n; separately from display_errors. We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; https://php.net/log-errors\nlog_errors = On\n\n; Do not log repeated messages. Repeated errors must occur in same file on same\n; line unless ignore_repeated_source is set true.\n; https://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. 
When this setting\n; is On you will not log errors with repeated messages from different files or\n; source lines.\n; https://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. 
PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_cli_85\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n; This directive controls whether PHP will output the backtrace of fatal errors.\n; Default Value: On\n; Development Value: On\n; Production Value: On\n;fatal_error_backtraces = On\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. 
G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on production servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\";\n; https://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; https://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. 
For security reasons, this feature should be disabled\n; for non-CLI SAPIs.\n; Note: This directive is ignored for the CLI SAPI\n; This directive is deprecated.\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. 
To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php85/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under any web server (other than IIS)\n; see documentation for security issues.  
The alternate is to use the\n; cgi.force_redirect configuration below\n; https://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; https://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; https://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php85/lib/php/extensions/no-debug-non-zts-20250925/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; https://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; https://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always send Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; https://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  For more information on PATH_INFO, see the cgi specs.  
Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; https://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; https://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Prevent decoding of SCRIPT_FILENAME when using Apache ProxyPass or\n; ProxyPassMatch. This should be used if script file paths are not stored\n; in an encoded format on the file system.\n; Default is 1.\n;fastcgi.script_path_encoded = 0\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send\n; RFC2616 compliant header.\n; Default is zero.\n; https://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; https://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; https://php.net/default-socket-timeout\ndefault_socket_timeout = 3600\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; https://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the ext/\n;   extension folders as well as the separate PECL DLL download.\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=exif\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=ldap\n;extension=mbstring\n;extension=mysqli\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See https://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n;extension=zip\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; https://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; https://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; https://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; https://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; https://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; https://php.net/filter.default\n;filter.default = unsafe_raw\n\n; https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < 
iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n; This directive is deprecated.\n;intl.error_level = E_WARNING\n; If enabled this directive indicates that when an error occurs within an\n; intl function an IntlException should be thrown.\n; Default is Off, which means errors need to be handled manually.\n;intl.use_exceptions = On\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. 
FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; Use mixed LF and CRLF line separators to keep compatibility with some\n; RFC 2822 non conformant MTA.\nmail.mixed_lf_and_crlf = Off\n\n; Control line ending mode for mail messages and headers.\n; Possible values: \"crlf\" (default), \"lf\", \"mixed\", \"os\"\n; - crlf: Use CRLF line endings\n; - lf: Use LF line endings only (converts CRLF in message to LF)\n; - mixed: Same as mail.mixed_lf_and_crlf = On\n; - os: Use CRLF on Windows, LF on other systems\nmail.cr_lf_mode = crlf\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  
0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect().\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect().\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect().\n; Note that it is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")'\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\nmysqlnd.collect_memory_statistics = Off\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; https://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; The auto reset feature requires a little overhead.\n; https://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Whether or not to ignore PostgreSQL backend Notice messages.\n; Notice message logging requires a little overhead.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Whether or not to log PostgreSQL backend Notice messages.\n; Unless pgsql.ignore_notice=0, the module cannot log notice messages.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. 
Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; https://php.net/session.cookie-partitioned\n;session.cookie_partitioned = 0\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - A user may send a URL that contains an active session ID\n;   to another person via 
email/irc/etc.\n; - A URL that contains an active session ID may be stored\n;   on a publicly accessible computer.\n; - A user may access your site with the same session ID\n;   every time, using a URL stored in the browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include it here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; The <form> tag is special. PHP will check its action attribute's URL regardless\n; of the session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. 
Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini!\n; (For turning assertions on and off at run-time, toggle zend.assertions between the values 1 and 0)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants case sensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting is the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encodings cannot work as internal encodings. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when a character cannot be converted\n; from one encoding to another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decoder to ignore warnings and try to create\n; a gd image. The warnings will then be displayed as notices.\n; Disabled by default.\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting must not be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of seconds while the cached file will be used\n; instead of the original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Lines starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts whose path\n; starts with the specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables read-only mode for the second level cache directory.\n; It should improve performance for read-only containers,\n; when the cache is pre-warmed and packaged alongside the application.\n; Best used with `opcache.validate_timestamps=0`, `opcache.enable_file_override=1`\n; and `opcache.file_cache_consistency_checks=0`.\n; Note: A cache generated with a different build of PHP, a different file path,\n; or different settings (including which extensions are loaded), may be ignored.\n;opcache.file_cache_read_only=0\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; Under certain circumstances (if only a single global PHP process is\n; started from which all others fork), this can increase performance\n; by a tiny amount because TLB misses are reduced.  
On the other hand, this\n; delays PHP startup, increases memory usage and degrades performance\n; under memory pressure - use with care.\n; Requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. This directive\n; allows the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; The libctx is an OpenSSL library context. OpenSSL defines a default library\n; context, but PHP OpenSSL also defines its own library context to avoid\n; interference with other libraries using OpenSSL and to provide an independent\n; context for each thread in ZTS. Possible values:\n;  \"custom\"  - use a custom library context (default)\n;  \"default\" - use the default OpenSSL library context\n;openssl.libctx=custom\n\n[ffi]\n; FFI API restriction. Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\n;zend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.5.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n"
  },
  {
    "path": "aegir/conf/php/php85-fpm",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          php85-fpm\n# Required-Start:    $remote_fs $network\n# Required-Stop:     $remote_fs $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: starts php85-fpm\n# Description:       starts the PHP FastCGI Process Manager daemon\n### END INIT INFO\n\nprefix=/opt/php85\nexec_prefix=${prefix}\nphp_fpm_BIN=/opt/php85/sbin/php-fpm\nphp_fpm_CONF=/opt/php85/etc/php85-fpm.conf\nphp_fpm_PID=/run/php85-fpm.pid\nphp_opts=\"--fpm-config $php_fpm_CONF --pid $php_fpm_PID -c /opt/php85/etc/php85.ini\"\n\nwait_for_pid() {\n\ttry=0\n\n\twhile test $try -lt 5; do\n\n\t\tcase \"$1\" in\n\t\t\t'created')\n\t\t\tif [ -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\n\t\t\t'removed')\n\t\t\tif [ ! -f \"$2\" ]; then\n\t\t\t\ttry=''\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\n\t\techo -n .\n\t\ttry=`expr $try + 1`\n\t\tsleep 1\n\n\tdone\n\n}\n\ncase \"$1\" in\n\tstart)\n\t\techo -n \"Starting php85-fpm...\"\n\n\t\t$php_fpm_BIN --daemonize $php_opts\n\n\t\tif [ \"$?\" != 0 ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\tfi\n\n\t\twait_for_pid created $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstop)\n\t\techo -n \"Gracefully shutting down php85-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php85-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -QUIT `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed. Use force-quit\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\tstatus)\n\t\tif [ ! 
-r $php_fpm_PID ]; then\n\t\t\techo \"php85-fpm is stopped\"\n\t\t\texit 0\n\t\tfi\n\n\t\tPID=`cat $php_fpm_PID`\n\t\tif ps -p $PID | grep -q $PID; then\n\t\t\techo \"php85-fpm (pid $PID) is running...\"\n\t\telse\n\t\t\techo \"php85-fpm dead but pid file exists\"\n\t\tfi\n\t;;\n\n\tforce-quit)\n\t\techo -n \"Terminating php85-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php85-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -TERM `cat $php_fpm_PID`\n\n\t\twait_for_pid removed $php_fpm_PID\n\n\t\tif [ -n \"$try\" ]; then\n\t\t\techo \" failed\"\n\t\t\texit 1\n\t\telse\n\t\t\techo \" done\"\n\t\tfi\n\t;;\n\n\trestart)\n\t\t$0 stop\n\t\t$0 start\n\t;;\n\n\treload)\n\n\t\techo -n \"Reloading service php85-fpm...\"\n\n\t\tif [ ! -r $php_fpm_PID ]; then\n\t\t\techo \"warning, no pid file found - php85-fpm is not running ?\"\n\t\t\texit 1\n\t\tfi\n\n\t\tkill -USR2 `cat $php_fpm_PID`\n\n\t\techo \" done\"\n\t;;\n\n\tconfigtest)\n\t\t$php_fpm_BIN -t\n\t;;\n\n\t*)\n\t\techo \"Usage: $0 {start|stop|force-quit|restart|reload|status|configtest}\"\n\t\texit 1\n\t;;\n\nesac\n"
  },
  {
    "path": "aegir/conf/php/php85-fpm.conf",
    "content": ";;;;;;;;;;;;;;;;;;;;;\n; FPM Configuration ;\n;;;;;;;;;;;;;;;;;;;;;\n\n; All relative paths in this configuration file are relative to PHP's install\n; prefix (/opt/php85). This prefix can be dynamically changed by using the\n; '-p' argument from the command line.\n\n; Include one or more files. If glob(3) exists, it is used to include a bunch of\n; files from a glob(3) pattern. This directive can be used everywhere in the\n; file.\n; Relative path can also be used. They will be prefixed by:\n;  - the global prefix if it's been set (-p argument)\n;  - /opt/php85 otherwise\n;include=etc/fpm.d/*.conf\n\n;;;;;;;;;;;;;;;;;;\n; Global Options ;\n;;;;;;;;;;;;;;;;;;\n\n[global]\n; Pid file\n; Note: the default prefix is /opt/php85/var\n; Default Value: none\npid = /run/php85-fpm.pid\n\n; Error log file\n; If it's set to \"syslog\", log is sent to syslogd instead of being written\n; in a local file.\n; Note: the default prefix is /opt/php85/var\n; Default Value: log/php-fpm.log\nerror_log = /var/log/php/php85-fpm-error.log\n\n; syslog_facility is used to specify what type of program is logging the\n; message. This lets syslogd specify that messages from different facilities\n; will be handled differently.\n; See syslog(3) for possible values (ex daemon equiv LOG_DAEMON)\n; Default Value: daemon\nsyslog.facility = daemon\n\n; syslog_ident is prepended to every message. If you have multiple FPM\n; instances running on the same server, you can change the default value\n; which must suit common needs.\n; Default Value: php-fpm\nsyslog.ident = php85-fpm\n\n; Log level\n; Possible Values: alert, error, warning, notice, debug\n; Default Value: notice\nlog_level = warning\n\n; If this number of child processes exit with SIGSEGV or SIGBUS within the time\n; interval set by emergency_restart_interval then FPM will restart. 
A value\n; of '0' means 'Off'.\n; Default Value: 0\nemergency_restart_threshold = 5\n\n; Interval of time used by emergency_restart_interval to determine when\n; a graceful restart will be initiated.  This can be useful to work around\n; accidental corruptions in an accelerator's shared memory.\n; Available Units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nemergency_restart_interval = 1m\n\n; Time limit for child processes to wait for a reaction on signals from master.\n; Available units: s(econds), m(inutes), h(ours), or d(ays)\n; Default Unit: seconds\n; Default Value: 0\nprocess_control_timeout = 5s\n\n; The maximum number of processes FPM will fork. This has been designed to control\n; the global number of processes when using dynamic PM within a lot of pools.\n; Use it with caution.\n; Note: A value of 0 indicates no limit\n; Default Value: 0\nprocess.max = 0\n\n; Specify the nice(2) priority to apply to the master process (only if set)\n; The value can vary from -19 (highest priority) to 20 (lowest priority)\n; Note: - It will only work if the FPM master process is launched as root\n;       - The pool process will inherit the master process priority\n;         unless specified otherwise\n; Default Value: not set\n; process.priority = -19\n\n; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.\n; Default Value: yes\ndaemonize = yes\n\n; Set open file descriptor rlimit for the master process.\n; Default Value: system defined value\n;rlimit_files = 1024\n\n; Set max core size rlimit for the master process.\n; Possible Values: 'unlimited' or an integer greater or equal to 0\n; Default Value: system defined value\n;rlimit_core = 0\n\n; Specify the event mechanism FPM will use. 
The following are available:\n; - select     (any POSIX os)\n; - poll       (any POSIX os)\n; - epoll      (linux >= 2.5.44)\n; - kqueue     (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)\n; - /dev/poll  (Solaris >= 7)\n; - port       (Solaris >= 10)\n; Default Value: not set (auto detection)\n;events.mechanism = epoll\n\n;;;;;;;;;;;;;;;;;;;;\n; Pool Definitions ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Multiple pools of child processes may be started with different listening\n; ports and different management options.  The name of the pool will be\n; used in logs and stats. There is no limitation on the number of pools which\n; FPM can handle. Your system will tell you anyway :)\n\ninclude = /opt/php85/etc/pool.d/*.conf\n"
  },
  {
    "path": "aegir/conf/php/php85.ini",
    "content": "[PHP]\n\n;;;;;;;;;;;;;;;;;;;\n; About php.ini   ;\n;;;;;;;;;;;;;;;;;;;\n; PHP's initialization file, generally called php.ini, is responsible for\n; configuring many of the aspects of PHP's behavior.\n\n; PHP attempts to find and load this configuration from a number of locations.\n; The following is a summary of its search order:\n; 1. SAPI module specific location.\n; 2. The PHPRC environment variable.\n; 3. A number of predefined registry keys on Windows\n; 4. Current working directory (except CLI)\n; 5. The web server's directory (for SAPI modules), or directory of PHP\n; (otherwise in Windows)\n; 6. The directory from the --with-config-file-path compile time option, or the\n; Windows directory (usually C:\\windows)\n; See the PHP docs for more specific information.\n; https://php.net/configuration.file\n\n; The syntax of the file is extremely simple.  Whitespace and lines\n; beginning with a semicolon are silently ignored (as you probably guessed).\n; Section headers (e.g. [Foo]) are also silently ignored, even though\n; they might mean something in the future.\n\n; Directives following the section heading [PATH=/www/mysite] only\n; apply to PHP files in the /www/mysite directory.  Directives\n; following the section heading [HOST=www.example.com] only apply to\n; PHP files served from www.example.com.  Directives set in these\n; special sections cannot be overridden by user-defined INI files or\n; at runtime. Currently, [PATH=] and [HOST=] sections only work under\n; CGI/FastCGI.\n; https://php.net/ini.sections\n\n; Directives are specified using the following syntax:\n; directive = value\n; Directive names are *case sensitive* - foo=bar is different from FOO=bar.\n; Directives are variables used to configure PHP or PHP extensions.\n; There is no name validation.  If PHP can't find an expected\n; directive because it is not set or is mistyped, a default value will be used.\n\n; The value can be a string, a number, a PHP constant (e.g. 
E_ALL or M_PI), one\n; of the INI constants (On, Off, True, False, Yes, No and None) or an expression\n; (e.g. E_ALL & ~E_NOTICE), a quoted string (\"bar\"), or a reference to a\n; previously set variable or directive (e.g. ${foo})\n\n; Expressions in the INI file are limited to bitwise operators and parentheses:\n; |  bitwise OR\n; ^  bitwise XOR\n; &  bitwise AND\n; ~  bitwise NOT\n; !  boolean NOT\n\n; Boolean flags can be turned on using the values 1, On, True or Yes.\n; They can be turned off using the values 0, Off, False or No.\n\n; An empty string can be denoted by simply not writing anything after the equal\n; sign, or by using the None keyword:\n\n; foo =         ; sets foo to an empty string\n; foo = None    ; sets foo to an empty string\n; foo = \"None\"  ; sets foo to the string 'None'\n\n; If you use constants in your value, and these constants belong to a\n; dynamically loaded extension (either a PHP extension or a Zend extension),\n; you may only use these constants *after* the line that loads the extension.\n\n;;;;;;;;;;;;;;;;;;;\n; About this file ;\n;;;;;;;;;;;;;;;;;;;\n; PHP comes packaged with two INI files. One that is recommended to be used\n; in production environments and one that is recommended to be used in\n; development environments.\n\n; php.ini-production contains settings which hold security, performance and\n; best practices at its core. But please be aware, these settings may break\n; compatibility with older or less security-conscious applications. We\n; recommend using the production ini in production and testing environments.\n\n; php.ini-development is very similar to its production variant, except it is\n; much more verbose when it comes to errors. 
We recommend using the\n; development version only in development environments, as errors shown to\n; application users can inadvertently leak otherwise secure information.\n\n; This is the php.ini-production INI file.\n\n;;;;;;;;;;;;;;;;;;;\n; Quick Reference ;\n;;;;;;;;;;;;;;;;;;;\n\n; The following are all the settings which are different in either the production\n; or development versions of the INIs with respect to PHP's default behavior.\n; Please see the actual settings later in the document for more details as to why\n; we recommend these changes in PHP's behavior.\n\ndisplay_errors = Off\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; display_startup_errors\n;   Default Value: On\n;   Development Value: On\n;   Production Value: Off\n\n; error_reporting\n;   Default Value: E_ALL\n;   Development Value: E_ALL\n;   Production Value: E_ALL & ~E_DEPRECATED\n\n; log_errors\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: On\n\n; max_input_time\n;   Default Value: -1 (Unlimited)\n;   Development Value: 60 (60 seconds)\n;   Production Value: 60 (60 seconds)\n\n; mysqlnd.collect_memory_statistics\n;   Default Value: Off\n;   Development Value: On\n;   Production Value: Off\n\n; output_buffering\n;   Default Value: Off\n;   Development Value: 4096\n;   Production Value: 4096\n\n; request_order\n;   Default Value: None\n;   Development Value: \"GP\"\n;   Production Value: \"GP\"\n\n; session.gc_divisor\n;   Default Value: 100\n;   Development Value: 1000\n;   Production Value: 1000\n\n; short_open_tag\n;   Default Value: On\n;   Development Value: Off\n;   Production Value: Off\n\n; variables_order\n;   Default Value: \"EGPCS\"\n;   Development Value: \"GPCS\"\n;   Production Value: \"GPCS\"\n\n; zend.assertions\n;   Default Value: 1\n;   Development Value: 1\n;   Production Value: -1\n\n; zend.exception_ignore_args\n;   Default Value: Off\n;   Development Value: Off\n;   Production Value: On\n\n; 
zend.exception_string_param_max_len\n;   Default Value: 15\n;   Development Value: 15\n;   Production Value: 0\n\n;;;;;;;;;;;;;;;;;;;;\n; php.ini Options  ;\n;;;;;;;;;;;;;;;;;;;;\n; Name for user-defined php.ini (.htaccess) files. Default is \".user.ini\"\n;user_ini.filename = \".user.ini\"\n\n; To disable this feature set this option to an empty value\nuser_ini.filename =\n\n; TTL for user-defined php.ini files (time-to-live) in seconds. Default is 300 seconds (5 minutes)\n;user_ini.cache_ttl = 300\n\n;;;;;;;;;;;;;;;;;;;;\n; Language Options ;\n;;;;;;;;;;;;;;;;;;;;\n\n; Enable the PHP scripting language engine under Apache.\n; https://php.net/engine\nengine = On\n\n; This directive determines whether or not PHP will recognize code between\n; <? and ?> tags as PHP source which should be processed as such. It is\n; generally recommended that <?php and ?> should be used and that this feature\n; should be disabled, as enabling it may result in issues when generating XML\n; documents, however this remains supported for backward compatibility reasons.\n; Note that this directive does not control the <?= shorthand tag, which can be\n; used regardless of this directive.\n; Default Value: On\n; Development Value: Off\n; Production Value: Off\n; https://php.net/short-open-tag\nshort_open_tag = On\n\n; The number of significant digits displayed in floating point numbers.\n; https://php.net/precision\nprecision = 14\n\n; Output buffering is a mechanism for controlling how much output data\n; (excluding headers and cookies) PHP should keep internally before pushing that\n; data to the client. If your application's output exceeds this setting, PHP\n; will send that data in chunks of roughly the size you specify.\n; Turning on this setting and managing its maximum buffer size can yield some\n; interesting side-effects depending on your application and web server.\n; You may be able to send headers and cookies after you've already sent output\n; through print or echo. 
You also may see performance benefits if your server is\n; emitting less packets due to buffered output versus PHP streaming the output\n; as it gets it. On production servers, 4096 bytes is a good setting for performance\n; reasons.\n; Note: Output buffering can also be controlled via Output Buffering Control\n;   functions.\n; Possible Values:\n;   On = Enabled and buffer is unlimited. (Use with caution)\n;   Off = Disabled\n;   Integer = Enables the buffer and sets its maximum size in bytes.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; Default Value: Off\n; Development Value: 4096\n; Production Value: 4096\n; https://php.net/output-buffering\noutput_buffering = 4096\n\n; You can redirect all of the output of your scripts to a function.  For\n; example, if you set output_handler to \"mb_output_handler\", character\n; encoding will be transparently converted to the specified encoding.\n; Setting any output handler automatically turns on output buffering.\n; Note: People who wrote portable scripts should not depend on this ini\n;   directive. Instead, explicitly set the output handler using ob_start().\n;   Using this ini directive may cause problems unless you know what script\n;   is doing.\n; Note: You cannot use both \"mb_output_handler\" with \"ob_iconv_handler\"\n;   and you cannot use both \"ob_gzhandler\" and \"zlib.output_compression\".\n; Note: output_handler must be empty if this is set 'On' !!!!\n;   Instead you must use zlib.output_handler.\n; https://php.net/output-handler\n;output_handler =\n\n; URL rewriter function rewrites URL on the fly by using\n; output buffer. You can set target tags by this configuration.\n; \"form\" tag is special tag. It will add hidden input tag to pass values.\n; Refer to session.trans_sid_tags for usage.\n; Default Value: \"form=\"\n; Development Value: \"form=\"\n; Production Value: \"form=\"\n;url_rewriter.tags\n\n; URL rewriter will not rewrite absolute URL nor form by default. 
To enable\n; absolute URL rewrite, allowed hosts must be defined at RUNTIME.\n; Refer to session.trans_sid_hosts for more details.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;url_rewriter.hosts\n\n; Transparent output compression using the zlib library\n; Valid values for this option are 'off', 'on', or a specific buffer size\n; to be used for compression (default is 4KB)\n; Note: Resulting chunk size may vary due to nature of compression. PHP\n;   outputs chunks that are few hundreds bytes each as a result of\n;   compression. If you prefer a larger chunk size for better\n;   performance, enable output_buffering in addition.\n; Note: You need to use zlib.output_handler instead of the standard\n;   output_handler, or otherwise the output will be corrupted.\n; https://php.net/zlib.output-compression\nzlib.output_compression = Off\n\n; https://php.net/zlib.output-compression-level\n;zlib.output_compression_level = -1\n\n; You cannot specify additional output handlers if zlib.output_compression\n; is activated here. This setting does the same as output_handler but in\n; a different order.\n; https://php.net/zlib.output-handler\n;zlib.output_handler =\n\n; Implicit flush tells PHP to tell the output layer to flush itself\n; automatically after every output block.  This is equivalent to calling the\n; PHP function flush() after each and every call to print() or echo() and each\n; and every HTML block.  Turning this option on has serious performance\n; implications and is generally recommended for debugging purposes only.\n; https://php.net/implicit-flush\n; Note: This directive is hardcoded to On for the CLI SAPI\nimplicit_flush = Off\n\n; The unserialize callback function will be called (with the undefined class'\n; name as parameter), if the unserializer finds an undefined class\n; which should be instantiated. 
A warning appears if the specified function is\n; not defined, or if the function doesn't include/implement the missing class.\n; So only set this entry, if you really want to implement such a\n; callback-function.\nunserialize_callback_func =\n\n; The unserialize_max_depth specifies the default depth limit for unserialized\n; structures. Setting the depth limit too high may result in stack overflows\n; during unserialization. The unserialize_max_depth ini setting can be\n; overridden by the max_depth option on individual unserialize() calls.\n; A value of 0 disables the depth limit.\n;unserialize_max_depth = 4096\n\n; When floats & doubles are serialized, store serialize_precision significant\n; digits after the floating point. The default value ensures that when floats\n; are decoded with unserialize, the data will remain the same.\n; The value is also used for json_encode when encoding double values.\n; If -1 is used, then dtoa mode 0 is used which automatically selects the best\n; precision.\nserialize_precision = -1\n\n; open_basedir, if set, limits all file operations to the defined directory\n; and below.  
This directive makes most sense if used in a per-directory\n; or per-virtualhost web server configuration file.\n; Note: disables the realpath cache\n; https://php.net/open-basedir\nopen_basedir = \".:/data:/mnt:/srv:/hdd:/opt/tmp:/tmp:/usr:/var/aegir:/var/lib/collectd:/var/lib/nginx:/var/www:/var/second:/usr/bin:/usr/local/bin:/opt/tika:/opt/tika7:/opt/tika8:/opt/tika9:/opt/php85:/dev/urandom\"\n\n; This directive allows you to disable certain functions.\n; It receives a comma-delimited list of function names.\n; https://php.net/disable-functions\ndisable_functions = \"disk_free_space,disk_total_space,diskfreespace,dl,get_current_user,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_restore,link,pfsockopen,posix_getlogin,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_ttyname,posix_uname,proc_nice,proc_terminate,show_source,symlink,opcache_reset\"\n\n; This directive allows you to disable certain classes.\n; It receives a comma-delimited list of class names.\n; https://php.net/disable-classes\ndisable_classes =\n\n; Colors for Syntax Highlighting mode.  Anything that's acceptable in\n; <span style=\"color: ???????\"> would work.\n; https://php.net/syntax-highlighting\n;highlight.string  = #DD0000\n;highlight.comment = #FF9900\n;highlight.keyword = #007700\n;highlight.default = #0000BB\n;highlight.html    = #000000\n\n; If enabled, the request will be allowed to complete even if the user aborts\n; the request. Consider enabling it if executing long requests, which may end up\n; being interrupted by the user or a browser timing out. PHP's default behavior\n; is to disable this feature.\n; https://php.net/ignore-user-abort\n;ignore_user_abort = On\n\n; Determines the size of the realpath cache to be used by PHP. 
This value should\n; be increased on systems where PHP opens many files to reflect the quantity of\n; the file operations performed.\n; Note: if open_basedir is set, the cache is disabled\n; https://php.net/realpath-cache-size\nrealpath_cache_size=64M\n\n; Duration of time, in seconds for which to cache realpath information for a given\n; file or directory. For systems with rarely changing files, consider increasing this\n; value.\n; https://php.net/realpath-cache-ttl\nrealpath_cache_ttl=180\n\n; Enables or disables the circular reference collector.\n; https://php.net/zend.enable-gc\nzend.enable_gc = On\n\n; If enabled, scripts may be written in encodings that are incompatible with\n; the scanner.  CP936, Big5, CP949 and Shift_JIS are the examples of such\n; encodings.  To use this feature, mbstring extension must be enabled.\n;zend.multibyte = Off\n\n; Allows to set the default encoding for the scripts.  This value will be used\n; unless \"declare(encoding=...)\" directive appears at the top of the script.\n; Only affects if zend.multibyte is set.\n;zend.script_encoding =\n\n; Allows to include or exclude arguments from stack traces generated for exceptions.\n; In production, it is recommended to turn this setting on to prohibit the output\n; of sensitive information in stack traces\n; Default Value: Off\n; Development Value: Off\n; Production Value: On\nzend.exception_ignore_args = On\n\n; Allows setting the maximum string length in an argument of a stringified stack trace\n; to a value between 0 and 1000000.\n; This has no effect when zend.exception_ignore_args is enabled.\n; Default Value: 15\n; Development Value: 15\n; Production Value: 0\n; In production, it is recommended to set this to 0 to reduce the output\n; of sensitive information in stack traces.\nzend.exception_string_param_max_len = 0\n\n;;;;;;;;;;;;;;;;;\n; Miscellaneous ;\n;;;;;;;;;;;;;;;;;\n\n; Decides whether PHP may expose the fact that it is installed on the server\n; (e.g. 
by adding its signature to the Web server header).  It is no security\n; threat in any way, but it makes it possible to determine whether you use PHP\n; on your server or not.\n; https://php.net/expose-php\nexpose_php = On\n\n;;;;;;;;;;;;;;;;;;;\n; Resource Limits ;\n;;;;;;;;;;;;;;;;;;;\n\n; Maximum execution time of each script, in seconds\n; https://php.net/max-execution-time\n; Note: This directive is hardcoded to 0 for the CLI SAPI\nmax_execution_time = 180\n\n; Maximum amount of time each script may spend parsing request data. It's a good\n; idea to limit this time on productions servers in order to eliminate unexpectedly\n; long running scripts.\n; Note: This directive is hardcoded to -1 for the CLI SAPI\n; Default Value: -1 (Unlimited)\n; Development Value: 60 (60 seconds)\n; Production Value: 60 (60 seconds)\n; https://php.net/max-input-time\nmax_input_time = 180\n\n; Maximum input variable nesting level\n; https://php.net/max-input-nesting-level\n;max_input_nesting_level = 64\n\n; How many GET/POST/COOKIE input variables may be accepted\nmax_input_vars = 9999\n\n; How many multipart body parts (combined input variable and file uploads) may\n; be accepted.\n; Default Value: -1 (Sum of max_input_vars and max_file_uploads)\n;max_multipart_body_parts = 1500\n\n; Maximum amount of memory a script may consume\n; https://php.net/memory-limit\nmemory_limit = 395M\nmax_memory_limit = 395M\n\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n; Error handling and logging ;\n;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; This directive informs PHP of which errors, warnings and notices you would like\n; it to take action for. The recommended way of setting values for this\n; directive is through the use of the error level constants and bitwise\n; operators. 
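For example, constants can be combined with bitwise\n; operators into a single value (an illustrative override, not the setting\n; used in this file):\n;   error_reporting = E_ALL & ~E_DEPRECATED & ~E_NOTICE\n; 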
The error level constants are below here for convenience as well as\n; some common settings and their meanings.\n; By default, PHP is set to take action on all errors, notices and warnings EXCEPT\n; those related to E_DEPRECATED, which covers warnings about deprecated code\n; that still works today but will not work in future versions of PHP. For\n; performance reasons, this is the recommended error reporting setting. Your\n; production server shouldn't be wasting resources complaining about\n; deprecations. That's what\n; development servers and development settings are for.\n; Note: The php.ini-development file has this setting as E_ALL. This\n; means it pretty much reports everything, which is exactly what you want during\n; development and early testing.\n;\n; Error Level Constants:\n; E_ALL             - All errors and warnings\n; E_ERROR           - fatal run-time errors\n; E_RECOVERABLE_ERROR  - almost fatal run-time errors\n; E_WARNING         - run-time warnings (non-fatal errors)\n; E_PARSE           - compile-time parse errors\n; E_NOTICE          - run-time notices (these are warnings which often result\n;                     from a bug in your code, but it's possible that it was\n;                     intentional (e.g., using an uninitialized variable and\n;                     relying on the fact it is automatically initialized to an\n;                     empty string)\n; E_CORE_ERROR      - fatal errors that occur during PHP's initial startup\n; E_CORE_WARNING    - warnings (non-fatal errors) that occur during PHP's\n;                     initial startup\n; E_COMPILE_ERROR   - fatal compile-time errors\n; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)\n; E_USER_ERROR      - user-generated error message\n; E_USER_WARNING    - user-generated warning message\n; E_USER_NOTICE     - user-generated notice message\n; E_DEPRECATED      - warn about code that will not work in future versions\n;                     of PHP\n; E_USER_DEPRECATED - user-generated 
deprecation warnings\n;\n; Common Values:\n;   E_ALL (Show all errors, warnings and notices including coding standards.)\n;   E_ALL & ~E_NOTICE  (Show all errors, except for notices)\n;   E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR  (Show only errors)\n; Default Value: E_ALL\n; Development Value: E_ALL\n; Production Value: E_ALL & ~E_DEPRECATED\n; https://php.net/error-reporting\nerror_reporting = E_ALL & ~E_DEPRECATED\n\n; This directive controls whether or not and where PHP will output errors,\n; notices and warnings too. Error output is very useful during development, but\n; it could be very dangerous in production environments. Depending on the code\n; which is triggering the error, sensitive information could potentially leak\n; out of your application such as database usernames and passwords or worse.\n; For production environments, we recommend logging errors rather than\n; sending them to STDOUT.\n; Possible Values:\n;   Off = Do not display any errors\n;   stderr = Display errors to STDERR (affects only CGI/CLI binaries!)\n;   On or stdout = Display errors to STDOUT\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-errors\ndisplay_errors = Off\n\n; The display of errors which occur during PHP's startup sequence are handled\n; separately from display_errors. We strongly recommend you set this to 'off'\n; for production servers to avoid leaking configuration details.\n; Default Value: On\n; Development Value: On\n; Production Value: Off\n; https://php.net/display-startup-errors\ndisplay_startup_errors = Off\n\n; Besides displaying errors, PHP can also log errors to locations such as a\n; server-specific log, STDERR, or a location specified by the error_log\n; directive found below. 
While errors should not be displayed on production\n; servers, they should still be monitored, and logging is a great way to do that.\n; Default Value: Off\n; Development Value: On\n; Production Value: On\n; https://php.net/log-errors\nlog_errors = On\n\n; Do not log repeated messages. Repeated errors must occur in the same file on\n; the same line unless ignore_repeated_source is set true.\n; https://php.net/ignore-repeated-errors\nignore_repeated_errors = Off\n\n; Ignore source of message when ignoring repeated messages. When this setting\n; is On, you will not log errors with repeated messages from different files or\n; source lines.\n; https://php.net/ignore-repeated-source\nignore_repeated_source = Off\n\n; This setting is off by default.\n;report_zend_debug = 0\n\n; Turn off normal error reporting and emit XML-RPC error XML\n; https://php.net/xmlrpc-errors\n;xmlrpc_errors = 0\n\n; An XML-RPC faultCode\n;xmlrpc_error_number = 0\n\n; When PHP displays or logs an error, it has the capability of formatting the\n; error message as HTML for easier reading. This directive controls whether\n; the error message is formatted as HTML or not.\n; Note: This directive is hardcoded to Off for the CLI SAPI\n; https://php.net/html-errors\nhtml_errors = Off\n\n; If html_errors is set to On *and* docref_root is not empty, then PHP\n; produces clickable error messages that direct to a page describing the error\n; or function causing the error in detail.\n; You can download a copy of the PHP manual from https://php.net/docs\n; and change docref_root to the base URL of your local copy including the\n; leading '/'. You must also specify the file extension being used including\n; the dot. 
PHP's default behavior is to leave these settings empty, in which\n; case no links to documentation are generated.\n; Note: Never use this feature for production boxes.\n; https://php.net/docref-root\n; Examples\n;docref_root = \"/phpmanual/\"\n\n; https://php.net/docref-ext\n;docref_ext = .html\n\n; String to output before an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-prepend-string\n; Example:\n;error_prepend_string = \"<span style='color: #ff0000'>\"\n\n; String to output after an error message. PHP's default behavior is to leave\n; this setting blank.\n; https://php.net/error-append-string\n; Example:\n;error_append_string = \"</span>\"\n\n; Log errors to specified file. PHP's default behavior is to leave this value\n; empty.\n; https://php.net/error-log\n; Example:\n;error_log = php_errors.log\n; Log errors to syslog (Event Log on Windows).\n;error_log = syslog\nerror_log = /var/log/php/error_log_85\n\n; The syslog ident is a string which is prepended to every message logged\n; to syslog. Only used when error_log is set to syslog.\n;syslog.ident = php\n\n; The syslog facility is used to specify what type of program is logging\n; the message. Only used when error_log is set to syslog.\n;syslog.facility = user\n\n; Set this to disable filtering control characters (the default).\n; Some loggers only accept NVT-ASCII, others accept anything that's not\n; control characters. 
If your logger accepts everything, then no filtering\n; is needed at all.\n; Allowed values are:\n;   ascii (all printable ASCII characters and NL)\n;   no-ctrl (all characters except control characters)\n;   all (all characters)\n;   raw (like \"all\", but messages are not split at newlines)\n; https://php.net/syslog.filter\n;syslog.filter = ascii\n\n;windows.show_crt_warning\n; Default value: 0\n; Development value: 0\n; Production value: 0\n\n; This directive controls whether PHP will output the backtrace of fatal errors.\n; Default Value: On\n; Development Value: On\n; Production Value: On\n;fatal_error_backtraces = On\n\n;;;;;;;;;;;;;;;;;\n; Data Handling ;\n;;;;;;;;;;;;;;;;;\n\n; The separator used in PHP generated URLs to separate arguments.\n; PHP's default setting is \"&\".\n; https://php.net/arg-separator.output\n; Example:\n;arg_separator.output = \"&amp;\"\n\n; List of separator(s) used by PHP to parse input URLs into variables.\n; PHP's default setting is \"&\".\n; NOTE: Every character in this directive is considered as separator!\n; https://php.net/arg-separator.input\n; Example:\n;arg_separator.input = \";&\"\n\n; This directive determines which super global arrays are registered when PHP\n; starts up. G,P,C,E & S are abbreviations for the following respective super\n; globals: GET, POST, COOKIE, ENV and SERVER. There is a performance penalty\n; paid for the registration of these arrays and because ENV is not as commonly\n; used as the others, ENV is not recommended on productions servers. You\n; can still get access to the environment variables through getenv() should you\n; need to.\n; Default Value: \"EGPCS\"\n; Development Value: \"GPCS\"\n; Production Value: \"GPCS\";\n; https://php.net/variables-order\nvariables_order = \"GPCS\"\n\n; This directive determines which super global data (G,P & C) should be\n; registered into the super global array REQUEST. If so, it also determines\n; the order in which that data is registered. 
The values for this directive\n; are specified in the same manner as the variables_order directive,\n; EXCEPT one. Leaving this value empty will cause PHP to use the value set\n; in the variables_order directive. It does not mean it will leave the super\n; globals array REQUEST empty.\n; Default Value: None\n; Development Value: \"GP\"\n; Production Value: \"GP\"\n; https://php.net/request-order\nrequest_order = \"GP\"\n\n; This directive determines whether PHP registers $argv & $argc each time it\n; runs. $argv contains an array of all the arguments passed to PHP when a script\n; is invoked. $argc contains an integer representing the number of arguments\n; that were passed when the script was invoked. These arrays are extremely\n; useful when running scripts from the command line. When this directive is\n; enabled, registering these variables consumes CPU cycles and memory each time\n; a script is executed. For security reasons, this feature should be disabled\n; for non-CLI SAPIs.\n; Note: This directive is ignored for the CLI SAPI\n; This directive is deprecated.\n; https://php.net/register-argc-argv\nregister_argc_argv = Off\n\n; When enabled, the ENV, REQUEST and SERVER variables are created when they're\n; first used (Just In Time) instead of when the script starts. If these\n; variables are not used within a script, having this directive on will result\n; in a performance gain. The PHP directive register_argc_argv must be disabled\n; for this directive to have any effect.\n; https://php.net/auto-globals-jit\nauto_globals_jit = On\n\n; Whether PHP will read the POST data.\n; This option is enabled by default.\n; Most likely, you won't want to disable this option globally. It causes $_POST\n; and $_FILES to always be empty; the only way you will be able to read the\n; POST data will be through the php://input stream wrapper. 
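For example, a\n; script can read the raw request body with file_get_contents('php://input'),\n; a standard PHP stream, shown here only as an illustration.\n; 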
This can be useful\n; to proxy requests or to process the POST data in a memory efficient fashion.\n; https://php.net/enable-post-data-reading\n;enable_post_data_reading = Off\n\n; Maximum size of POST data that PHP will accept.\n; Its value may be 0 to disable the limit. It is ignored if POST data reading\n; is disabled through enable_post_data_reading.\n; https://php.net/post-max-size\npost_max_size = 350M\n\n; Automatically add files before PHP document.\n; https://php.net/auto-prepend-file\nauto_prepend_file =\n\n; Automatically add files after PHP document.\n; https://php.net/auto-append-file\nauto_append_file =\n\n; By default, PHP will output a media type using the Content-Type header. To\n; disable this, simply set it to be empty.\n;\n; PHP's built-in default media type is set to text/html.\n; https://php.net/default-mimetype\ndefault_mimetype = \"text/html\"\n\n; PHP's default character set is set to UTF-8.\n; https://php.net/default-charset\ndefault_charset = \"UTF-8\"\n\n; PHP internal character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/internal-encoding\n;internal_encoding =\n\n; PHP input character encoding is set to empty.\n; If empty, default_charset is used.\n; https://php.net/input-encoding\n;input_encoding =\n\n; PHP output character encoding is set to empty.\n; If empty, default_charset is used.\n; See also output_buffer.\n; https://php.net/output-encoding\n;output_encoding =\n\n;;;;;;;;;;;;;;;;;;;;;;;;;\n; Paths and Directories ;\n;;;;;;;;;;;;;;;;;;;;;;;;;\n\n; UNIX: \"/path1:/path2\"\n;include_path = \".:/php/includes\"\n;\n; Windows: \"\\path1;\\path2\"\n;include_path = \".;c:\\php\\includes\"\n;\n; PHP's default setting for include_path is \".;/path/to/php/pear\"\n; https://php.net/include-path\ninclude_path\t=  \".:/opt/php85/lib/php\"\n\n; The root of the PHP pages, used only if nonempty.\n; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root\n; if you are running php as a CGI under 
any web server (other than IIS)\n; see documentation for security issues.  The alternate is to use the\n; cgi.force_redirect configuration below\n; https://php.net/doc-root\ndoc_root =\n\n; The directory under which PHP opens the script using /~username used only\n; if nonempty.\n; https://php.net/user-dir\nuser_dir =\n\n; Directory in which the loadable extensions (modules) reside.\n; https://php.net/extension-dir\n;extension_dir = \"./\"\n; On windows:\n;extension_dir = \"ext\"\nextension_dir = \"/opt/php85/lib/php/extensions/no-debug-non-zts-20250925/\"\n\n; Directory where the temporary files should be placed.\n; Defaults to the system default (see sys_get_temp_dir)\nsys_temp_dir = \"/tmp\"\n\n; Whether or not to enable the dl() function.  The dl() function does NOT work\n; properly in multithreaded servers, such as IIS or Zeus, and is automatically\n; disabled on them.\n; https://php.net/enable-dl\nenable_dl = Off\n\n; cgi.force_redirect is necessary to provide security running PHP as a CGI under\n; most web servers.  Left undefined, PHP turns this on by default.  You can\n; turn it off here AT YOUR OWN RISK\n; **You CAN safely turn this off for IIS, in fact, you MUST.**\n; https://php.net/cgi.force-redirect\n;cgi.force_redirect = 1\n\n; if cgi.nph is enabled it will force cgi to always sent Status: 200 with\n; every request. PHP's default behavior is to disable this feature.\n;cgi.nph = 1\n\n; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape\n; (iPlanet) web servers, you MAY need to set an environment variable name that PHP\n; will look for to know it is OK to continue execution.  Setting this variable MAY\n; cause security issues, KNOW WHAT YOU ARE DOING FIRST.\n; https://php.net/cgi.redirect-status-env\n;cgi.redirect_status_env =\n\n; cgi.fix_pathinfo provides *real* PATH_INFO/PATH_TRANSLATED support for CGI.  PHP's\n; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok\n; what PATH_INFO is.  
For more information on PATH_INFO, see the cgi specs.  Setting\n; this to 1 will cause PHP CGI to fix its paths to conform to the spec.  A setting\n; of zero causes PHP to behave as before.  Default is 1.  You should fix your scripts\n; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.\n; https://php.net/cgi.fix-pathinfo\n;cgi.fix_pathinfo=1\n\n; if cgi.discard_path is enabled, the PHP CGI binary can safely be placed outside\n; of the web tree and people will not be able to circumvent .htaccess security.\n;cgi.discard_path=1\n\n; FastCGI under IIS supports the ability to impersonate\n; security tokens of the calling client.  This allows IIS to define the\n; security context that the request runs under.  mod_fastcgi under Apache\n; does not currently support this feature (03/17/2002)\n; Set to 1 if running under IIS.  Default is zero.\n; https://php.net/fastcgi.impersonate\n;fastcgi.impersonate = 1\n\n; Prevent decoding of SCRIPT_FILENAME when using Apache ProxyPass or\n; ProxyPassMatch. This should be used if script file paths are not stored\n; in an encoded format on the file system.\n; Default is 1.\n;fastcgi.script_path_encoded = 0\n\n; Disable logging through FastCGI connection. PHP's default behavior is to enable\n; this feature.\n;fastcgi.logging = 0\n\n; cgi.rfc2616_headers configuration option tells PHP what type of headers to\n; use when sending HTTP response code. If set to 0, PHP sends a Status: header that\n; is supported by Apache. When this option is set to 1, PHP will send an\n; RFC2616 compliant header.\n; Default is zero.\n; https://php.net/cgi.rfc2616-headers\n;cgi.rfc2616_headers = 0\n\n; cgi.check_shebang_line controls whether CGI PHP checks for a line starting with #!\n; (shebang) at the top of the running script. This line might be needed if the\n; script supports running both as a stand-alone script and via PHP CGI. 
PHP in CGI\n; mode skips this line and ignores its content if this directive is turned on.\n; https://php.net/cgi.check-shebang-line\n;cgi.check_shebang_line=1\n\n;;;;;;;;;;;;;;;;\n; File Uploads ;\n;;;;;;;;;;;;;;;;\n\n; Whether to allow HTTP file uploads.\n; https://php.net/file-uploads\nfile_uploads = On\n\n; Temporary directory for HTTP uploaded files (will use system default if not\n; specified).\n; https://php.net/upload-tmp-dir\nupload_tmp_dir = /tmp\n\n; Maximum allowed size for uploaded files.\n; https://php.net/upload-max-filesize\nupload_max_filesize = 325M\n\n; Maximum number of files that can be uploaded via a single request\nmax_file_uploads = 50\n\n;;;;;;;;;;;;;;;;;;\n; Fopen wrappers ;\n;;;;;;;;;;;;;;;;;;\n\n; Whether to allow the treatment of URLs (like http:// or ftp://) as files.\n; https://php.net/allow-url-fopen\nallow_url_fopen = On\n\n; Whether to allow include/require to open URLs (like https:// or ftp://) as files.\n; https://php.net/allow-url-include\nallow_url_include = Off\n\n; Define the anonymous ftp password (your email address). PHP's default setting\n; for this is empty.\n; https://php.net/from\n;from=\"john@doe.com\"\n\n; Define the User-Agent string. 
PHP's default setting for this is empty.\n; https://php.net/user-agent\n;user_agent=\"PHP\"\n\n; Default timeout for socket based streams (seconds)\n; https://php.net/default-socket-timeout\ndefault_socket_timeout = 180\n\n; If your scripts have to deal with files from Macintosh systems,\n; or you are running on a Mac and need to deal with files from\n; unix or win32 systems, setting this flag will cause PHP to\n; automatically detect the EOL character in those files so that\n; fgets() and file() will work regardless of the source of the file.\n; https://php.net/auto-detect-line-endings\nauto_detect_line_endings = On\n\n;;;;;;;;;;;;;;;;;;;;;;\n; Dynamic Extensions ;\n;;;;;;;;;;;;;;;;;;;;;;\n\n; If you wish to have an extension loaded automatically, use the following\n; syntax:\n;\n;   extension=modulename\n;\n; For example:\n;\n;   extension=mysqli\n;\n; When the extension library to load is not located in the default extension\n; directory, You may specify an absolute path to the library file:\n;\n;   extension=/path/to/extension/mysqli.so\n;\n; Note : The syntax used in previous PHP versions ('extension=<ext>.so' and\n; 'extension='php_<ext>.dll') is supported for legacy reasons and may be\n; deprecated in a future PHP major version. 
So, when it is possible, please\n; move to the new ('extension=<ext>) syntax.\n;\n; Notes for Windows environments :\n;\n; - Many DLL files are located in the ext/\n;   extension folders as well as the separate PECL DLL download.\n;   Be sure to appropriately set the extension_dir directive.\n;\n;extension=bz2\n;extension=curl\n;extension=exif\n;extension=ffi\n;extension=ftp\n;extension=fileinfo\n;extension=gd\n;extension=gettext\n;extension=gmp\n;extension=intl\n;extension=ldap\n;extension=mbstring\n;extension=mysqli\n;extension=odbc\n;extension=openssl\n;extension=pdo_firebird\n;extension=pdo_mysql\n;extension=pdo_odbc\n;extension=pdo_pgsql\n;extension=pdo_sqlite\n;extension=pgsql\n;extension=shmop\n\n; The MIBS data available in the PHP distribution must be installed.\n; See https://www.php.net/manual/en/snmp.installation.php\n;extension=snmp\n\n;extension=soap\n;extension=sockets\n;extension=sodium\n;extension=sqlite3\n;extension=tidy\n;extension=xsl\n;extension=zip\n\n;zend_extension=opcache\n\n;;;;;;;;;;;;;;;;;;;\n; Module Settings ;\n;;;;;;;;;;;;;;;;;;;\n\n[CLI Server]\n; Whether the CLI web server uses ANSI color coding in its terminal output.\ncli_server.color = On\n\n[Date]\n; Defines the default timezone used by the date functions\n; https://php.net/date.timezone\ndate.timezone = \"UTC\"\n\n; https://php.net/date.default-latitude\n;date.default_latitude = 31.7667\n\n; https://php.net/date.default-longitude\n;date.default_longitude = 35.2333\n\n; https://php.net/date.sunrise-zenith\n;date.sunrise_zenith = 90.833333\n\n; https://php.net/date.sunset-zenith\n;date.sunset_zenith = 90.833333\n\n[filter]\n; https://php.net/filter.default\n;filter.default = unsafe_raw\n\n; https://php.net/filter.default-flags\n;filter.default_flags =\n\n[iconv]\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; If empty, default_charset or input_encoding or iconv.input_encoding is used.\n; The precedence is: default_charset < input_encoding < 
iconv.input_encoding\n;iconv.input_encoding =\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;iconv.internal_encoding =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; If empty, default_charset or output_encoding or iconv.output_encoding is used.\n; The precedence is: default_charset < output_encoding < iconv.output_encoding\n; To use an output encoding conversion, iconv's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n;iconv.output_encoding =\n\n[intl]\n;intl.default_locale =\n; This directive allows you to produce PHP errors when some error\n; happens within intl functions. The value is the level of the error produced.\n; Default is 0, which does not produce any errors.\n; This directive is deprecated.\n;intl.error_level = E_WARNING\n; If enabled this directive indicates that when an error occurs within an\n; intl function a IntlException should be thrown.\n; Default is Off, which means errors need to be handled manually.\n;intl.use_exceptions = On\n\n[sqlite3]\n; Directory pointing to SQLite3 extensions\n; https://php.net/sqlite3.extension-dir\n;sqlite3.extension_dir =\n\n; SQLite defensive mode flag (only available from SQLite 3.26+)\n; When the defensive flag is enabled, language features that allow ordinary\n; SQL to deliberately corrupt the database file are disabled. This forbids\n; writing directly to the schema, shadow tables (eg. 
FTS data tables), or\n; the sqlite_dbpage virtual table.\n; https://www.sqlite.org/c3ref/c_dbconfig_defensive.html\n; (for older SQLite versions, this flag has no use)\n;sqlite3.defensive = 1\n\n[Pcre]\n; PCRE library backtracking limit.\n; https://php.net/pcre.backtrack-limit\n;pcre.backtrack_limit=100000\n\n; PCRE library recursion limit.\n; Please note that if you set this value to a high number you may consume all\n; the available process stack and eventually crash PHP (due to reaching the\n; stack size limit imposed by the Operating System).\n; https://php.net/pcre.recursion-limit\n;pcre.recursion_limit=100000\n\n; Enables or disables JIT compilation of patterns. This requires the PCRE\n; library to be compiled with JIT support.\n;pcre.jit=1\n\n[Pdo]\n; Whether to pool ODBC connections. Can be one of \"strict\", \"relaxed\" or \"off\"\n; https://php.net/pdo-odbc.connection-pooling\n;pdo_odbc.connection_pooling=strict\n\n[Pdo_mysql]\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\npdo_mysql.default_socket=\n\n[Phar]\n; https://php.net/phar.readonly\n;phar.readonly = On\n\n; https://php.net/phar.require-hash\n;phar.require_hash = On\n\n;phar.cache_list =\n\n[mail function]\n; For Win32 only.\n; https://php.net/smtp\n;SMTP = localhost\n; https://php.net/smtp-port\n;smtp_port = 25\n\n; For Win32 only.\n; https://php.net/sendmail-from\n;sendmail_from = me@example.com\n\n; For Unix only.  You may supply arguments as well (default: \"sendmail -t -i\").\n; https://php.net/sendmail-path\nsendmail_path = /usr/sbin/sendmail -t -i\n\n; Force the addition of the specified parameters to be passed as extra parameters\n; to the sendmail binary. 
These parameters will always replace the value of\n; the 5th parameter to mail().\n;mail.force_extra_parameters =\n\n; Add X-PHP-Originating-Script: that will include uid of the script followed by the filename\nmail.add_x_header = Off\n\n; Use mixed LF and CRLF line separators to keep compatibility with some\n; RFC 2822 non conformant MTA.\nmail.mixed_lf_and_crlf = Off\n\n; Control line ending mode for mail messages and headers.\n; Possible values: \"crlf\" (default), \"lf\", \"mixed\", \"os\"\n; - crlf: Use CRLF line endings\n; - lf: Use LF line endings only (converts CRLF in message to LF)\n; - mixed: Same as mail.mixed_lf_and_crlf = On\n; - os: Use CRLF on Windows, LF on other systems\nmail.cr_lf_mode = crlf\n\n; The path to a log file that will log all mail() calls. Log entries include\n; the full path of the script, line number, To address and headers.\n;mail.log =\n; Log mail to syslog (Event Log on Windows).\n;mail.log = syslog\n\n[ODBC]\n; https://php.net/odbc.default-db\n;odbc.default_db    =  Not yet implemented\n\n; https://php.net/odbc.default-user\n;odbc.default_user  =  Not yet implemented\n\n; https://php.net/odbc.default-pw\n;odbc.default_pw    =  Not yet implemented\n\n; Controls the ODBC cursor model.\n; Default: SQL_CURSOR_STATIC (default).\n;odbc.default_cursortype\n\n; Allow or prevent persistent links.\n; https://php.net/odbc.allow-persistent\nodbc.allow_persistent = On\n\n; Check that a connection is still valid before reuse.\n; https://php.net/odbc.check-persistent\nodbc.check_persistent = On\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/odbc.max-persistent\nodbc.max_persistent = -1\n\n; Maximum number of links (persistent + non-persistent).  -1 means no limit.\n; https://php.net/odbc.max-links\nodbc.max_links = -1\n\n; Handling of LONG fields.  Returns number of bytes to variables.  0 means\n; passthru.\n; https://php.net/odbc.defaultlrl\nodbc.defaultlrl = 4096\n\n; Handling of binary data.  
0 means passthru, 1 return as is, 2 convert to char.\n; See the documentation on odbc_binmode and odbc_longreadlen for an explanation\n; of odbc.defaultlrl and odbc.defaultbinmode\n; https://php.net/odbc.defaultbinmode\nodbc.defaultbinmode = 1\n\n[MySQLi]\n\n; Maximum number of persistent links.  -1 means no limit.\n; https://php.net/mysqli.max-persistent\nmysqli.max_persistent = -1\n\n; Allow accessing, from PHP's perspective, local files with LOAD DATA statements\n; https://php.net/mysqli.allow_local_infile\n;mysqli.allow_local_infile = On\n\n; It allows the user to specify a folder where files that can be sent via LOAD DATA\n; LOCAL can exist. It is ignored if mysqli.allow_local_infile is enabled.\n;mysqli.local_infile_directory =\n\n; Allow or prevent persistent links.\n; https://php.net/mysqli.allow-persistent\nmysqli.allow_persistent = On\n\n; Maximum number of links.  -1 means no limit.\n; https://php.net/mysqli.max-links\nmysqli.max_links = -1\n\n; Default port number for mysqli_connect().\n; https://php.net/mysqli.default-port\nmysqli.default_port = 3306\n\n; Default socket name for local MySQL connects.  If empty, uses the built-in\n; MySQL defaults.\n; https://php.net/mysqli.default-socket\nmysqli.default_socket =\n\n; Default host for mysqli_connect().\n; https://php.net/mysqli.default-host\nmysqli.default_host =\n\n; Default user for mysqli_connect().\n; https://php.net/mysqli.default-user\nmysqli.default_user =\n\n; Default password for mysqli_connect().\n; Note that this is generally a *bad* idea to store passwords in this file.\n; *Any* user with PHP access can run 'echo get_cfg_var(\"mysqli.default_pw\")\n; and reveal this password!  
And of course, any users with read access to this\n; file will be able to reveal the password as well.\n; https://php.net/mysqli.default-pw\nmysqli.default_pw =\n\n; If this option is enabled, closing a persistent connection will rollback\n; any pending transactions of this connection, before it is put back\n; into the persistent connection pool.\n;mysqli.rollback_on_cached_plink = Off\n\n[mysqlnd]\n; Enable / Disable collection of general statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\nmysqlnd.collect_statistics = 0\n\n; Enable / Disable collection of memory usage statistics by mysqlnd which can be\n; used to tune and monitor MySQL operations.\n; Default Value: Off\n; Development Value: On\n; Production Value: Off\nmysqlnd.collect_memory_statistics = Off\n\n; Records communication from all extensions using mysqlnd to the specified log\n; file.\n; https://php.net/mysqlnd.debug\n;mysqlnd.debug =\n\n; Defines which queries will be logged.\n;mysqlnd.log_mask = 0\n\n; Default size of the mysqlnd memory pool, which is used by result sets.\nmysqlnd.mempool_default_size = 64000\n\n; Size of a pre-allocated buffer used when sending commands to MySQL in bytes.\nmysqlnd.net_cmd_buffer_size = 8192\n\n; Size of a pre-allocated buffer used for reading data sent by the server in\n; bytes.\nmysqlnd.net_read_buffer_size = 131072\n\n; Timeout for network requests in seconds.\n;mysqlnd.net_read_timeout = 31536000\n\n; SHA-256 Authentication Plugin related. File with the MySQL server public RSA\n; key.\n;mysqlnd.sha256_server_public_key =\n\n[PostgreSQL]\n; Allow or prevent persistent links.\n; https://php.net/pgsql.allow-persistent\npgsql.allow_persistent = On\n\n; Detect broken persistent links always with pg_pconnect().\n; Auto reset feature requires a little overheads.\n; https://php.net/pgsql.auto-reset-persistent\npgsql.auto_reset_persistent = Off\n\n; Maximum number of persistent links.  
-1 means no limit.\n; https://php.net/pgsql.max-persistent\npgsql.max_persistent = -1\n\n; Maximum number of links (persistent+non persistent).  -1 means no limit.\n; https://php.net/pgsql.max-links\npgsql.max_links = -1\n\n; Ignore PostgreSQL backends Notice message or not.\n; Notice message logging require a little overheads.\n; https://php.net/pgsql.ignore-notice\npgsql.ignore_notice = 0\n\n; Log PostgreSQL backends Notice message or not.\n; Unless pgsql.ignore_notice=0, module cannot log notice message.\n; https://php.net/pgsql.log-notice\npgsql.log_notice = 0\n\n[bcmath]\n; Number of decimal digits for all bcmath functions.\n; https://php.net/bcmath.scale\nbcmath.scale = 0\n\n[browscap]\n; https://php.net/browscap\n;browscap = extra/browscap.ini\n\n[Session]\n; Handler used to store/retrieve data.\n; https://php.net/session.save-handler\nsession.save_handler = files\n\n; Argument passed to save_handler.  In the case of files, this is the path\n; where data files are stored. Note: Windows users have to change this\n; variable in order to use PHP's session functions.\n;\n; The path can be defined as:\n;\n;     session.save_path = \"N;/path\"\n;\n; where N is an integer.  Instead of storing all the session files in\n; /path, what this will do is use subdirectories N-levels deep, and\n; store the session data in those directories.  This is useful if\n; your OS has problems with many files in one directory, and is\n; a more efficient layout for servers that handle many sessions.\n;\n; NOTE 1: PHP will not create this directory structure automatically.\n;         You can use the script in the ext/session dir for that purpose.\n; NOTE 2: See the section on garbage collection below if you choose to\n;         use subdirectories for session storage\n;\n; The file storage module creates files using mode 600 by default.\n; You can change that by using\n;\n;     session.save_path = \"N;MODE;/path\"\n;\n; where MODE is the octal representation of the mode. 
Note that this\n; does not overwrite the process's umask.\n; https://php.net/session.save-path\nsession.save_path = \"/opt/tmp\"\n\n; Whether to use strict session mode.\n; Strict session mode does not accept an uninitialized session ID, and\n; regenerates the session ID if the browser sends an uninitialized session ID.\n; Strict mode protects applications from session fixation via a session adoption\n; vulnerability. It is disabled by default for maximum compatibility, but\n; enabling it is encouraged.\n; https://wiki.php.net/rfc/strict_sessions\nsession.use_strict_mode = 0\n\n; Whether to use cookies.\n; https://php.net/session.use-cookies\nsession.use_cookies = 1\n\n; https://php.net/session.cookie-secure\n;session.cookie_secure =\n\n; https://php.net/session.cookie-partitioned\n;session.cookie_partitioned = 0\n\n; This option forces PHP to fetch and use a cookie for storing and maintaining\n; the session id. We encourage this operation as it's very helpful in combating\n; session hijacking when not specifying and managing your own session id. 
It is\n; not the be-all and end-all of session hijacking defense, but it's a good start.\n; https://php.net/session.use-only-cookies\nsession.use_only_cookies = 1\n\n; Name of the session (used as cookie name).\n; https://php.net/session.name\nsession.name = PHPSESSID\n\n; Initialize session on request startup.\n; https://php.net/session.auto-start\nsession.auto_start = 0\n\n; Lifetime in seconds of cookie or, if 0, until browser is restarted.\n; https://php.net/session.cookie-lifetime\nsession.cookie_lifetime = 0\n\n; The path for which the cookie is valid.\n; https://php.net/session.cookie-path\nsession.cookie_path = /\n\n; The domain for which the cookie is valid.\n; https://php.net/session.cookie-domain\nsession.cookie_domain =\n\n; Whether or not to add the httpOnly flag to the cookie, which makes it\n; inaccessible to browser scripting languages such as JavaScript.\n; https://php.net/session.cookie-httponly\nsession.cookie_httponly = 1\n\n; Add SameSite attribute to cookie to help mitigate Cross-Site Request Forgery (CSRF/XSRF)\n; Current valid values are \"Strict\", \"Lax\" or \"None\". When using \"None\",\n; make sure to include the quotes, as `none` is interpreted like `false` in ini files.\n; https://tools.ietf.org/html/draft-west-first-party-cookies-07\nsession.cookie_samesite =\n\n; Handler used to serialize data. php is the standard serializer of PHP.\n; https://php.net/session.serialize-handler\nsession.serialize_handler = php\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.gc-probability\nsession.gc_probability = 1\n\n; Defines the probability that the 'garbage collection' process is started on every\n; session initialization. 
The probability is calculated by using gc_probability/gc_divisor,\n; e.g. 1/100 means there is a 1% chance that the GC process starts on each request.\n; For high volume production servers, using a value of 1000 is a more efficient approach.\n; Default Value: 100\n; Development Value: 1000\n; Production Value: 1000\n; https://php.net/session.gc-divisor\nsession.gc_divisor = 1000\n\n; After this number of seconds, stored data will be seen as 'garbage' and\n; cleaned up by the garbage collection process.\n; https://php.net/session.gc-maxlifetime\nsession.gc_maxlifetime = 1440\n\n; NOTE: If you are using the subdirectory option for storing session files\n;       (see session.save_path above), then garbage collection does *not*\n;       happen automatically.  You will need to do your own garbage\n;       collection through a shell script, cron entry, or some other method.\n;       For example, the following script is the equivalent of setting\n;       session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):\n;          find /path/to/sessions -cmin +24 -type f | xargs rm\n\n; Check HTTP Referer to invalidate externally stored URLs containing ids.\n; HTTP_REFERER has to contain this substring for the session to be\n; considered as valid.\n; https://php.net/session.referer-check\nsession.referer_check =\n\n; Set to {nocache,private,public,} to determine HTTP caching aspects\n; or leave this empty to avoid sending anti-caching headers.\n; https://php.net/session.cache-limiter\nsession.cache_limiter = nocache\n\n; Document expires after n minutes.\n; https://php.net/session.cache-expire\nsession.cache_expire = 180\n\n; trans sid support is disabled by default.\n; Use of trans sid may risk your users' security.\n; Use this option with caution.\n; - User may send URL contains active session ID\n;   to other person via. 
email/irc/etc.\n; - URL that contains active session ID may be stored\n;   in publicly accessible computer.\n; - User may access your site with the same session ID\n;   always using URL stored in browser's history or bookmarks.\n; https://php.net/session.use-trans-sid\nsession.use_trans_sid = 0\n\n; The URL rewriter will look for URLs in a defined set of HTML tags.\n; <form> is special; if you include them here, the rewriter will\n; add a hidden <input> field with the info which is otherwise appended\n; to URLs. <form> tag's action attribute URL will not be modified\n; unless it is specified.\n; Note that all valid entries require a \"=\", even if no value follows.\n; Default Value: \"a=href,area=href,frame=src,form=\"\n; Development Value: \"a=href,area=href,frame=src,form=\"\n; Production Value: \"a=href,area=href,frame=src,form=\"\n; https://php.net/url-rewriter.tags\nsession.trans_sid_tags = \"a=href,area=href,frame=src,form=\"\n\n; URL rewriter does not rewrite absolute URLs by default.\n; To enable rewrites for absolute paths, target hosts must be specified\n; at RUNTIME. i.e. use ini_set()\n; <form> tags is special. PHP will check action attribute's URL regardless\n; of session.trans_sid_tags setting.\n; If no host is defined, HTTP_HOST will be used for allowed host.\n; Example value: php.net,www.php.net,wiki.php.net\n; Use \",\" for multiple hosts. No spaces are allowed.\n; Default Value: \"\"\n; Development Value: \"\"\n; Production Value: \"\"\n;session.trans_sid_hosts=\"\"\n\n; Enable upload progress tracking in $_SESSION\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.enabled\n;session.upload_progress.enabled = On\n\n; Cleanup the progress information as soon as all POST data has been read\n; (i.e. 
upload completed).\n; Default Value: On\n; Development Value: On\n; Production Value: On\n; https://php.net/session.upload-progress.cleanup\n;session.upload_progress.cleanup = On\n\n; A prefix used for the upload progress key in $_SESSION\n; Default Value: \"upload_progress_\"\n; Development Value: \"upload_progress_\"\n; Production Value: \"upload_progress_\"\n; https://php.net/session.upload-progress.prefix\n;session.upload_progress.prefix = \"upload_progress_\"\n\n; The index name (concatenated with the prefix) in $_SESSION\n; containing the upload progress information\n; Default Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Development Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; Production Value: \"PHP_SESSION_UPLOAD_PROGRESS\"\n; https://php.net/session.upload-progress.name\n;session.upload_progress.name = \"PHP_SESSION_UPLOAD_PROGRESS\"\n\n; How frequently the upload progress should be updated.\n; Given either in percentages (per-file), or in bytes\n; Default Value: \"1%\"\n; Development Value: \"1%\"\n; Production Value: \"1%\"\n; https://php.net/session.upload-progress.freq\n;session.upload_progress.freq =  \"1%\"\n\n; The minimum delay between updates, in seconds\n; Default Value: 1\n; Development Value: 1\n; Production Value: 1\n; https://php.net/session.upload-progress.min-freq\n;session.upload_progress.min_freq = \"1\"\n\n; Only write session data when session data is changed. 
Enabled by default.\n; https://php.net/session.lazy-write\n;session.lazy_write = On\n\n[Assertion]\n; Switch whether to compile assertions at all (to have no overhead at run-time)\n; -1: Do not compile at all\n;  0: Jump over assertion at run-time\n;  1: Execute assertions\n; Changing from or to a negative value is only possible in php.ini!\n; (For turning assertions on and off at run-time, toggle zend.assertions between the values 1 and 0)\n; Default Value: 1\n; Development Value: 1\n; Production Value: -1\n; https://php.net/zend.assertions\nzend.assertions = -1\n\n[COM]\n; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs\n; https://php.net/com.typelib-file\n;com.typelib_file =\n\n; allow Distributed-COM calls\n; https://php.net/com.allow-dcom\n;com.allow_dcom = true\n\n; autoregister constants of a component's typelib on com_load()\n; https://php.net/com.autoregister-typelib\n;com.autoregister_typelib = true\n\n; register constants casesensitive\n; https://php.net/com.autoregister-casesensitive\n;com.autoregister_casesensitive = false\n\n; show warnings on duplicate constant registrations\n; https://php.net/com.autoregister-verbose\n;com.autoregister_verbose = true\n\n; The default character set code-page to use when passing strings to and from COM objects.\n; Default: system ANSI code page\n;com.code_page=\n\n; The version of the .NET framework to use. The value of the setting are the first three parts\n; of the framework's version number, separated by dots, and prefixed with \"v\", e.g. \"v4.0.30319\".\n;com.dotnet_version=\n\n[mbstring]\n; language for internal character representation.\n; This affects mb_send_mail() and mbstring.detect_order.\n; https://php.net/mbstring.language\n;mbstring.language = Japanese\n\n; Use of this INI entry is deprecated, use global internal_encoding instead.\n; internal/script encoding.\n; Some encoding cannot work as internal encoding. (e.g. 
SJIS, BIG5, ISO-2022-*)\n; If empty, default_charset or internal_encoding or iconv.internal_encoding is used.\n; The precedence is: default_charset < internal_encoding < iconv.internal_encoding\n;mbstring.internal_encoding =\n\n; Use of this INI entry is deprecated, use global input_encoding instead.\n; http input encoding.\n; mbstring.encoding_translation = On is needed to use this setting.\n; If empty, default_charset or input_encoding or mbstring.input is used.\n; The precedence is: default_charset < input_encoding < mbstring.http_input\n; https://php.net/mbstring.http-input\n;mbstring.http_input =\n\n; Use of this INI entry is deprecated, use global output_encoding instead.\n; http output encoding.\n; mb_output_handler must be registered as output buffer to function.\n; If empty, default_charset or output_encoding or mbstring.http_output is used.\n; The precedence is: default_charset < output_encoding < mbstring.http_output\n; To use an output encoding conversion, mbstring's output handler must be set\n; otherwise output encoding conversion cannot be performed.\n; https://php.net/mbstring.http-output\n;mbstring.http_output =\n\n; enable automatic encoding translation according to\n; mbstring.internal_encoding setting. 
Input chars are\n; converted to internal encoding by setting this to On.\n; Note: Do _not_ use automatic encoding translation for\n;       portable libs/applications.\n; https://php.net/mbstring.encoding-translation\n;mbstring.encoding_translation = Off\n\n; automatic encoding detection order.\n; \"auto\" detect order is changed according to mbstring.language\n; https://php.net/mbstring.detect-order\n;mbstring.detect_order = auto\n\n; substitute_character used when character cannot be converted\n; one from another\n; https://php.net/mbstring.substitute-character\n;mbstring.substitute_character = none\n\n; Enable strict encoding detection.\n;mbstring.strict_detection = Off\n\n; This directive specifies the regex pattern of content types for which mb_output_handler()\n; is activated.\n; Default: mbstring.http_output_conv_mimetypes=^(text/|application/xhtml\\+xml)\n;mbstring.http_output_conv_mimetypes=\n\n; This directive specifies maximum stack depth for mbstring regular expressions. It is similar\n; to the pcre.recursion_limit for PCRE.\n;mbstring.regex_stack_limit=100000\n\n; This directive specifies maximum retry count for mbstring regular expressions. It is similar\n; to the pcre.backtrack_limit for PCRE.\n;mbstring.regex_retry_limit=1000000\n\n[gd]\n; Tell the jpeg decode to ignore warnings and try to create\n; a gd image. The warning will then be displayed as notices\n; disabled by default\n; https://php.net/gd.jpeg-ignore-warning\n;gd.jpeg_ignore_warning = 1\n\n[exif]\n; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.\n; With mbstring support this will automatically be converted into the encoding\n; given by corresponding encode setting. When empty mbstring.internal_encoding\n; is used. For the decode settings you can distinguish between motorola and\n; intel byte order. 
A decode setting must not be empty.\n; https://php.net/exif.encode-unicode\n;exif.encode_unicode = ISO-8859-15\n\n; https://php.net/exif.decode-unicode-motorola\n;exif.decode_unicode_motorola = UCS-2BE\n\n; https://php.net/exif.decode-unicode-intel\n;exif.decode_unicode_intel    = UCS-2LE\n\n; https://php.net/exif.encode-jis\n;exif.encode_jis =\n\n; https://php.net/exif.decode-jis-motorola\n;exif.decode_jis_motorola = JIS\n\n; https://php.net/exif.decode-jis-intel\n;exif.decode_jis_intel    = JIS\n\n[Tidy]\n; The path to a default tidy configuration file to use when using tidy\n; https://php.net/tidy.default-config\n;tidy.default_config = /usr/local/lib/php/default.tcfg\n\n; Should tidy clean and repair output automatically?\n; WARNING: Do not use this option if you are generating non-html content\n; such as dynamic images\n; https://php.net/tidy.clean-output\ntidy.clean_output = Off\n\n[soap]\n; Enables or disables WSDL caching feature.\n; https://php.net/soap.wsdl-cache-enabled\nsoap.wsdl_cache_enabled=1\n\n; Sets the directory name where SOAP extension will put cache files.\n; https://php.net/soap.wsdl-cache-dir\nsoap.wsdl_cache_dir=\"/tmp\"\n\n; (time to live) Sets the number of second while cached file will be used\n; instead of original one.\n; https://php.net/soap.wsdl-cache-ttl\nsoap.wsdl_cache_ttl=86400\n\n; Sets the size of the cache limit. (Max. 
number of WSDL files to cache)\nsoap.wsdl_cache_limit = 5\n\n[sysvshm]\n; A default size of the shared memory segment\n;sysvshm.init_mem = 10000\n\n[ldap]\n; Sets the maximum number of open links or -1 for unlimited.\nldap.max_links = -1\n\n[dba]\n;dba.default_handler=\n\n[opcache]\n; Determines if Zend OPCache is enabled\n;opcache.enable=1\n\n; Determines if Zend OPCache is enabled for the CLI version of PHP\n;opcache.enable_cli=0\n\n; The OPcache shared memory storage size.\n;opcache.memory_consumption=128\n\n; The amount of memory for interned strings in Mbytes.\n;opcache.interned_strings_buffer=8\n\n; The maximum number of keys (scripts) in the OPcache hash table.\n; Only numbers between 200 and 1000000 are allowed.\n;opcache.max_accelerated_files=10000\n\n; The maximum percentage of \"wasted\" memory until a restart is scheduled.\n;opcache.max_wasted_percentage=5\n\n; When this directive is enabled, the OPcache appends the current working\n; directory to the script key, thus eliminating possible collisions between\n; files with the same name (basename). Disabling the directive improves\n; performance, but may break existing applications.\n;opcache.use_cwd=1\n\n; When disabled, you must reset the OPcache manually or restart the\n; webserver for changes to the filesystem to take effect.\n;opcache.validate_timestamps=1\n\n; How often (in seconds) to check file timestamps for changes to the shared\n; memory storage allocation. (\"1\" means validate once per second, but only\n; once per request. \"0\" means always validate)\n;opcache.revalidate_freq=2\n\n; Enables or disables file search in include_path optimization\n;opcache.revalidate_path=0\n\n; If disabled, all PHPDoc comments are dropped from the code to reduce the\n; size of the optimized code.\n;opcache.save_comments=1\n\n; If enabled, compilation warnings (including notices and deprecations) will\n; be recorded and replayed each time a file is included. 
Otherwise, compilation\n; warnings will only be emitted when the file is first cached.\n;opcache.record_warnings=0\n\n; Allow file existence override (file_exists, etc.) performance feature.\n;opcache.enable_file_override=0\n\n; A bitmask, where each bit enables or disables the appropriate OPcache\n; passes\n;opcache.optimization_level=0x7FFFBFFF\n\n;opcache.dups_fix=0\n\n; The location of the OPcache blacklist file (wildcards allowed).\n; Each OPcache blacklist file is a text file that holds the names of files\n; that should not be accelerated. The file format is to add each filename\n; to a new line. The filename may be a full path or just a file prefix\n; (i.e., /var/www/x  blacklists all the files and directories in /var/www\n; that start with 'x'). Line starting with a ; are ignored (comments).\n;opcache.blacklist_filename=\n\n; Allows exclusion of large files from being cached. By default all files\n; are cached.\n;opcache.max_file_size=0\n\n; How long to wait (in seconds) for a scheduled restart to begin if the cache\n; is not being accessed.\n;opcache.force_restart_timeout=180\n\n; OPcache error_log file name. Empty string assumes \"stderr\".\n;opcache.error_log=\n\n; All OPcache errors go to the Web server log.\n; By default, only fatal errors (level 0) or errors (level 1) are logged.\n; You can also enable warnings (level 2), info messages (level 3) or\n; debug messages (level 4).\n;opcache.log_verbosity_level=1\n\n; Preferred Shared Memory back-end. Leave empty and let the system decide.\n;opcache.preferred_memory_model=\n\n; Protect the shared memory from unexpected writing during script execution.\n; Useful for internal debugging only.\n;opcache.protect_memory=0\n\n; Allows calling OPcache API functions only from PHP scripts which path is\n; started from specified string. The default \"\" means no restriction\n;opcache.restrict_api=\n\n; Mapping base of shared memory segments (for Windows only). 
All the PHP\n; processes have to map shared memory into the same address space. This\n; directive allows to manually fix the \"Unable to reattach to base address\"\n; errors.\n;opcache.mmap_base=\n\n; Facilitates multiple OPcache instances per user (for Windows only). All PHP\n; processes with the same cache ID and user share an OPcache instance.\n;opcache.cache_id=\n\n; Enables and sets the second level cache directory.\n; It should improve performance when SHM memory is full, at server restart or\n; SHM reset. The default \"\" disables file based caching.\n;opcache.file_cache=\n\n; Enables or disables read-only mode for the second level cache directory.\n; It should improve performance for read-only containers,\n; when the cache is pre-warmed and packaged alongside the application.\n; Best used with `opcache.validate_timestamps=0`, `opcache.enable_file_override=1`\n; and `opcache.file_cache_consistency_checks=0`.\n; Note: A cache generated with a different build of PHP, a different file path,\n; or different settings (including which extensions are loaded), may be ignored.\n;opcache.file_cache_read_only=0\n\n; Enables or disables opcode caching in shared memory.\n;opcache.file_cache_only=0\n\n; Enables or disables checksum validation when script loaded from file cache.\n;opcache.file_cache_consistency_checks=1\n\n; Implies opcache.file_cache_only=1 for a certain process that failed to\n; reattach to the shared memory (for Windows only). Explicitly enabled file\n; cache is required.\n;opcache.file_cache_fallback=1\n\n; Enables or disables copying of PHP code (text segment) into HUGE PAGES.\n; Under certain circumstances (if only a single global PHP process is\n; started from which all others fork), this can increase performance\n; by a tiny amount because TLB misses are reduced.  
On the other hand, this\n; delays PHP startup, increases memory usage and degrades performance\n; under memory pressure - use with care.\n; Requires appropriate OS configuration.\n;opcache.huge_code_pages=0\n\n; Validate cached file permissions.\n;opcache.validate_permission=0\n\n; Prevent name collisions in chroot'ed environment.\n;opcache.validate_root=0\n\n; If specified, it produces opcode dumps for debugging different stages of\n; optimizations.\n;opcache.opt_debug_level=0\n\n; Specifies a PHP script that is going to be compiled and executed at server\n; start-up.\n; https://php.net/opcache.preload\n;opcache.preload=\n\n; Preloading code as root is not allowed for security reasons. This directive\n; facilitates to let the preloading to be run as another user.\n; https://php.net/opcache.preload_user\n;opcache.preload_user=\n\n; Prevents caching files that are less than this number of seconds old. It\n; protects from caching of incompletely updated files. In case all file updates\n; on your site are atomic, you may increase performance by setting it to \"0\".\n;opcache.file_update_protection=2\n\n; Absolute path used to store shared lockfiles (for *nix only).\n;opcache.lockfile_path=/tmp\n\n[curl]\n; A default value for the CURLOPT_CAINFO option. This is required to be an\n; absolute path.\n;curl.cainfo =\n\n[openssl]\n; The location of a Certificate Authority (CA) file on the local filesystem\n; to use when verifying the identity of SSL/TLS peers. Most users should\n; not specify a value for this directive as PHP will attempt to use the\n; OS-managed cert stores in its absence. If specified, this value may still\n; be overridden on a per-stream basis via the \"cafile\" SSL stream context\n; option.\n;openssl.cafile=\n\n; If openssl.cafile is not specified or if the CA file is not found, the\n; directory pointed to by openssl.capath is searched for a suitable\n; certificate. 
This value must be a correctly hashed certificate directory.\n; Most users should not specify a value for this directive as PHP will\n; attempt to use the OS-managed cert stores in its absence. If specified,\n; this value may still be overridden on a per-stream basis via the \"capath\"\n; SSL stream context option.\n;openssl.capath=\n\n; The libctx is an OpenSSL library context. OpenSSL defines a default library\n; context, but PHP OpenSSL also defines its own library context to avoid\n; interference with other libraries using OpenSSL and to provide an independent\n; context for each thread in ZTS. Possible values:\n;  \"custom\"  - use a custom library context (default)\n;  \"default\" - use the default OpenSSL library context\n;openssl.libctx=custom\n\n[ffi]\n; FFI API restriction. Possible values:\n; \"preload\" - enabled in CLI scripts and preloaded files (default)\n; \"false\"   - always disabled\n; \"true\"    - always enabled\n;ffi.enable=preload\n\n; List of headers files to preload, wildcard patterns allowed.\n;ffi.preload=\n\n[Zend]\n;zend_extension=\"/usr/local/ioncube/ioncube_loader_lin_8.5.so\"\n\n; fix for segfaults\nauto_globals_jit = Off\n\ncgi.fix_pathinfo = 1\nmbstring.http_input = \"pass\"\nmbstring.http_output = \"pass\"\nmbstring.encoding_translation = 0\n\n; Enable Extensions\nextension=uploadprogress.so\nextension=imagick.so\nextension=redis.so\n\n; APCu\nextension=apcu.so\napc.enable_cli=1\napc.gc_ttl=300\napc.shm_segments=1\napc.shm_size=256M\napc.slam_defense=0\napc.ttl=0\n;\n"
  },
  {
    "path": "aegir/conf/redis/redis-server",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:\t\tredis-server\n# Required-Start:\t$syslog $remote_fs\n# Required-Stop:\t$syslog $remote_fs\n# Should-Start:\t\t$local_fs\n# Should-Stop:\t\t$local_fs\n# Default-Start:\t2 3 4 5\n# Default-Stop:\t\t0 1 6\n# Short-Description:    redis-server - Persistent key-value db\n# Description:          redis-server - Persistent key-value db\n### END INIT INFO\n\nPATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nDAEMON=/usr/bin/redis-server\nDAEMON_ARGS=/etc/redis/redis.conf\nNAME=redis-server\nDESC=redis-server\nPIDFILE=/run/redis/redis.pid\n\ntest -x $DAEMON || exit 0\n\n[ -d /run/redis ] || mkdir -p /run/redis\n[ -d /run/redis ] && chown -R redis:redis /run/redis\n\nmaxclients=$(awk '/^[ \\t]*maxclients[ \\t]/ { print $2 }' /etc/redis/redis.conf)\nif [ ! -z \"$maxclients\" ] && [ \"$maxclients\" -gt 992 ]; then\n  ulimit -n $((maxclients+32))\nfi\n\ncase \"$1\" in\n  start)\n\techo -n \"Starting $DESC: \"\n\ttouch $PIDFILE\n\tchown redis:redis $PIDFILE\n\tif start-stop-daemon --start --quiet --umask 007 --pidfile $PIDFILE --chuid redis:redis --exec $DAEMON -- $DAEMON_ARGS\n\tthen\n\t\techo \"$NAME.\"\n\telse\n\t\techo \"failed\"\n\tfi\n\t;;\n  stop)\n\techo -n \"Stopping $DESC: \"\n\tif start-stop-daemon --stop --retry 8 --quiet --oknodo --pidfile $PIDFILE --exec $DAEMON\n\tthen\n\t\techo \"$NAME.\"\n\telse\n\t\techo \"failed\"\n\tfi\n\trm -f $PIDFILE\n\tsleep 1\n\t;;\n\n  restart|force-reload)\n\t${0} stop\n\t${0} start\n\t;;\n\n  reload)\n    echo -n \"Reloading service ${NAME}...\"\n    if [ ! 
-r ${PIDFILE} ]; then\n      echo \"warning, no pid file found - ${NAME} is not running?\"\n      exit 1\n    fi\n    kill -USR2 `cat ${PIDFILE}`\n    echo \" done\"\n\t;;\n\n  status)\n\techo -n \"$DESC is \"\n\tif start-stop-daemon --stop --quiet --signal 0 --name ${NAME} --pidfile ${PIDFILE}\n\tthen\n\t\techo \"running\"\n\telse\n\t\techo \"not running\"\n\t\texit 1\n\tfi\n\t;;\n\n  *)\n\techo \"Usage: service $NAME {start|stop|status|restart|reload|force-reload}\" >&2\n\texit 1\n\t;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/conf/redis/redis.conf",
    "content": "# Redis configuration file example.\n#\n# Note that in order to read the configuration file, Redis must be\n# started with the file path as first argument:\n#\n# ./redis-server /path/to/redis.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all Redis servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Notice option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Redis Sentinel. Since Redis always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, Redis listens\n# for connections from all the network interfaces available on the server.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1\n# xbind 127.0.0.1 ::1\n#\n# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. So by default we uncomment the\n# following bind directive, that will force Redis to listen only into\n# the IPv4 lookback interface address (this means Redis will be able to\n# accept connections only from clients running into the same computer it\n# is running).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# JUST COMMENT THE FOLLOWING LINE.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# Protected mode is a layer of security protection, in order to avoid that\n# Redis instances left open on the internet are accessed and exploited.\n#\n# When protected mode is on and if:\n#\n# 1) The server is not binding explicitly to a set of addresses using the\n#    \"bind\" directive.\n# 2) No password is configured.\n#\n# The server only accepts connections from clients connecting from the\n# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain\n# sockets.\n#\n# By default protected mode is enabled. 
You should disable it only if\n# you are sure you want clients from other hosts to connect to Redis\n# even if no authentication is configured, nor a specific set of interfaces\n# are explicitly listed using the \"bind\" directive.\nprotected-mode no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified Redis will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need an high backlog in order\n# to avoid slow clients connections issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so Redis will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/redis/redis.sock\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 3600\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Take the connection alive from the point of view of network\n#    equipment in the middle.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection the double of the time is needed.\n# On other kernels the period depends on the kernel configuration.\n#\n# A reasonable value for this option is 300 seconds, which is the new\n# Redis default starting with Redis 3.2.1.\ntcp-keepalive 300\n\n################################# GENERAL #####################################\n\n# By default Redis does not run as a daemon. 
Use 'yes' if you need it.\n# Note that Redis will write a pid file in /run/redis.pid when daemonized.\ndaemonize yes\n\n# If you run Redis from upstart or systemd, Redis can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous liveness pings back to your supervisor.\nsupervised no\n\n# If a pid file is specified, Redis writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non-daemonized, no pid file is created if none is\n# specified in the configuration. When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/redis.pid\".\n#\n# Creating a pid file is best effort: if Redis is not able to create it\n# nothing bad happens, the server will start and run normally.\npidfile /run/redis/redis.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (lots of rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you probably want in production)\n# warning (only very important / critical messages are logged)\nloglevel warning\n\n# Specify the log file name. Also the empty string can be used to force\n# Redis to log on the standard output. 
Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile /var/log/redis/redis-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident redis\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# Set the number of databases. The default database is DB 0, you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 8\n\n# By default Redis shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY. Basically this means\n# that normally a logo is displayed only in interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo yes\n\n################################ SNAPSHOTTING  ################################\n#\n# Save the DB on disk:\n#\n#   save <seconds> <changes>\n#\n#   Will save the DB if both the given number of seconds and the given\n#   number of write operations against the DB occurred.\n#\n#   In the example below the behaviour will be to save:\n#   after 900 sec (15 min) if at least 1 key changed\n#   after 300 sec (5 min) if at least 10 keys changed\n#   after 60 sec if at least 10000 keys changed\n#\n#   Note: you can disable saving completely by commenting out all \"save\" lines.\n#\n#   It is also possible to remove all the previously configured save\n#   points by adding a save directive with a single empty string argument\n#   like in the following example:\n#\n#   save \"\"\n\n# save 900 1\n# save 300 10\n# save 60 10000\n\n# By default Redis will stop accepting writes if RDB snapshots are enabled\n# (at least one 
save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again Redis will\n# automatically allow writes again.\n#\n# However if you have set up proper monitoring of the Redis server\n# and persistence, you may want to disable this feature so that Redis will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default that's set to 'yes' as it's almost always a win.\n# If you want to save some CPU in the saving child set it to 'no' but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# The filename where to dump the DB\ndbfilename dump.rdb\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir /var/lib/redis/\n\n################################# REPLICATION #################################\n\n# Master-Slave replication. Use slaveof to make a Redis instance a copy of\n# another Redis server. 
A few things to understand ASAP about Redis replication.\n#\n# 1) Redis replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears to be not connected with at least\n#    a given number of slaves.\n# 2) Redis slaves are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. After a\n#    network partition slaves automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# slaveof <masterip> <masterport>\n\n# If the master is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the slave to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the slave request.\n#\n# masterauth <master-password>\n\n# When a slave loses its connection with the master, or when the replication\n# is still in progress, the slave can act in two different ways:\n#\n# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) if slave-serve-stale-data is set to 'no' the slave will reply with\n#    an error \"SYNC with master in progress\" to all kinds of commands\n#    except INFO and SLAVEOF.\n#\nslave-serve-stale-data yes\n\n# You can configure a slave instance to accept writes or not. 
Writing against\n# a slave instance may be useful to store some ephemeral data (because data\n# written on a slave will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# Since Redis 2.6 by default slaves are read-only.\n#\n# Note: read only slaves are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only slave exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only slaves using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nslave-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# -------------------------------------------------------\n# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY\n# -------------------------------------------------------\n#\n# New slaves and reconnecting slaves that are not able to continue the replication\n# process just receiving differences, need to do what is called a \"full\n# synchronization\". An RDB file is transmitted from the master to the slaves.\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The Redis master creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the slaves incrementally.\n# 2) Diskless: The Redis master creates a new process that directly writes the\n#              RDB file to slave sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more slaves\n# can be queued and served with the RDB file as soon as the current child producing\n# the RDB file finishes its work. 
With diskless replication instead once\n# the transfer starts, new slaves arriving will be queued and a new transfer\n# will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple slaves\n# will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync no\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the slaves.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new slaves arriving, that will be queued for the next RDB transfer, so the server\n# waits a delay in order to let more slaves arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# Slaves send PINGs to server in a predefined interval. It's possible to change\n# this interval with the repl_ping_slave_period option. The default value is 10\n# seconds.\n#\n# repl-ping-slave-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of slave.\n# 2) Master timeout from the point of view of slaves (data, pings).\n# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-slave-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the slave.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the slave socket after SYNC?\n#\n# If you select \"yes\" Redis will use a smaller number of TCP packets and\n# less bandwidth to send data to slaves. 
But this can add a delay for\n# the data to appear on the slave side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the slave side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and slaves are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. The backlog is a buffer that accumulates\n# slave data when slaves are disconnected for some time, so that when a slave\n# wants to reconnect again, often a full resync is not needed, but a partial\n# resync is enough, just passing the portion of data the slave missed while\n# disconnected.\n#\n# The bigger the replication backlog, the longer the time the slave can be\n# disconnected and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated once there is at least a slave connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no longer connected slaves for some time, the backlog\n# will be freed. 
The following option configures the number of seconds that\n# need to elapse, starting from the time the last slave disconnected, for\n# the backlog buffer to be freed.\n#\n# Note that slaves never free the backlog for timeout, since they may be\n# promoted to masters later, and should be able to correctly \"partially\n# resynchronize\" with the slaves: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The slave priority is an integer number published by Redis in the INFO output.\n# It is used by Redis Sentinel in order to select a slave to promote into a\n# master if the master is no longer working correctly.\n#\n# A slave with a low priority number is considered better for promotion, so\n# for instance if there are three slaves with priority 10, 100, 25 Sentinel will\n# pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the slave as not able to perform the\n# role of master, so a slave with priority of 0 will never be selected by\n# Redis Sentinel for promotion.\n#\n# By default the priority is 100.\nslave-priority 100\n\n# It is possible for a master to stop accepting writes if there are less than\n# N slaves connected, with a lag less than or equal to M seconds.\n#\n# The N slaves need to be in \"online\" state.\n#\n# The lag in seconds, which must be <= the specified value, is calculated from\n# the last ping received from the slave, that is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough slaves\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 slaves with a lag <= 10 seconds use:\n#\n# min-slaves-to-write 3\n# min-slaves-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-slaves-to-write is set to 0 (feature disabled) and\n# 
min-slaves-max-lag is set to 10.\n\n# A Redis master is able to list the address and port of the attached\n# slaves in different ways. For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Redis Sentinel in order to discover slave instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a master.\n#\n# The listed IP address and port normally reported by a slave are obtained\n# in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the slave to connect with the master.\n#\n#   Port: The port is communicated by the slave during the replication\n#   handshake, and is normally the port that the slave is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the slave may be actually reachable via different IP and port\n# pairs. The following two options can be used by a slave in order to\n# report to its master a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both the options if you need to override just\n# the port or the IP address.\n#\n# slave-announce-ip 5.5.5.5\n# slave-announce-port 1234\n\n################################## SECURITY ###################################\n\n# Require clients to issue AUTH <PASSWORD> before processing any other\n# commands.  This might be useful in environments in which you do not trust\n# others with access to the host running redis-server.\n#\n# This should stay commented out for backward compatibility and because most\n# people do not need auth (e.g. they run their own servers).\n#\n# Warning: since Redis is pretty fast an outside user can try up to\n# 150k passwords per second against a good box. 
This means that you should\n# use a very strong password otherwise it will be very easy to break.\n#\n# requirepass foobared\n\n# Command renaming.\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to slaves may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. By default\n# this limit is set to 10000 clients, however if the Redis server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as Redis reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached Redis will close all the new connections sending\n# an error 'max number of clients reached'.\n#\nmaxclients 4000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached Redis will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If Redis can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', Redis will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using Redis as an LRU or LFU cache, 
or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have slaves attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the slaves is subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of slaves is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... if you have slaves attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for slave\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\nmaxmemory 88MB\n\n# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory\n# is reached. You can select among the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU among the keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key among the ones with an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# Both LRU, LFU and volatile-ttl are implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, Redis will return an error on write\n#       operations, when there are no suitable keys for eviction.\n#\n#       At the date of writing these commands are: set setnx setex append\n#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd\n#       sinter sinterstore sunion sunionstore sdiff 
sdiffstore zadd zincrby\n#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby\n#       getset mset msetnx exec sort\n#\n# The default is: noeviction\n#\nmaxmemory-policy allkeys-lru\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune them for speed or\n# accuracy. By default Redis will check five keys and pick the one that was\n# used least recently; you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 10 approximates very closely\n# true LRU but costs more CPU. 3 is faster but not very accurate.\n#\n# maxmemory-samples 5\n\n############################# LAZY FREEING ####################################\n\n# Redis has two primitives to delete keys. One is called DEL and is a blocking\n# deletion of the object. It means that the server stops processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in Redis. However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons Redis also offers non blocking deletion primitives\n# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and\n# FLUSHDB commands, in order to reclaim memory in background. Those commands\n# are executed in constant time. Another thread will incrementally free the\n# object in the background as fast as possible.\n#\n# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.\n# It's up to the design of the application to understand when it is a good\n# idea to use one or the other. 
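\n#\n# As an illustration (not part of the stock configuration), a client can pick\n# the non-blocking variant explicitly; e.g. in a redis-cli session (the key\n# names below are made up):\n#\n#   redis-cli RPUSH biglist a b c   # build up an aggregate value\n#   redis-cli UNLINK biglist        # reclaim its memory in a background thread\n#   redis-cli DEL smallkey          # blocking delete, fine for small objects\n#\n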
However the Redis server sometimes has to\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically Redis deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a slave performs a full resynchronization with\n#    its master, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases the default is to delete objects in a blocking way,\n# as if DEL was called. However you can configure each case specifically\n# in order to instead release memory in a non-blocking way, as if UNLINK\n# was called, using the following configuration directives:\n\nlazyfree-lazy-eviction yes\nlazyfree-lazy-expire yes\nlazyfree-lazy-server-del yes\nslave-lazy-flush yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default Redis asynchronously dumps the dataset on disk. This mode is\n# good enough in many applications, but an issue with the Redis process or\n# a power outage may result in a few minutes of writes lost (depending on\n# the configured save points).\n#\n# The Append Only File is an alternative persistence mode that provides\n# much better durability. 
For instance using the default data fsync policy\n# (see later in the config file) Redis can lose just one second of writes in a\n# dramatic event like a server power outage, or a single write if something\n# goes wrong with the Redis process itself, but the operating system is\n# still running correctly.\n#\n# AOF and RDB persistence can be enabled at the same time without problems.\n# If the AOF is enabled on startup Redis will load the AOF, that is the file\n# with the better durability guarantees.\n#\n# Please check http://redis.io/topics/persistence for more information.\n\nappendonly no\n\n# The name of the append only file (default: \"appendonly.aof\")\n\nappendfilename \"appendonly.aof\"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OSes will really\n# flush data on disk, some others will just try to do it ASAP.\n#\n# Redis supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. Faster.\n# always: fsync after every write to the append only log. Slow, Safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is \"everysec\", as that's usually the right compromise between\n# speed and data safety. 
It's up to you to understand if you can relax this to\n# \"no\" that will let the operating system flush the output buffer when\n# it wants, for better performance (but if you can live with the idea of\n# some data loss consider the default persistence mode that's snapshotting),\n# or on the contrary, use \"always\" that's very slow but a bit safer than\n# everysec.\n#\n# For more details please check the following article:\n# http://antirez.com/post/redis-persistence-demystified.html\n#\n# If unsure, use \"everysec\".\n\n# appendfsync always\nappendfsync everysec\n# appendfsync no\n\n# When the AOF fsync policy is set to always or everysec, and a background\n# saving process (a background save or AOF log background rewriting) is\n# performing a lot of I/O against the disk, in some Linux configurations\n# Redis may block too long on the fsync() call. Note that there is no fix for\n# this currently, as even performing fsync in a different thread will block\n# our synchronous write(2) call.\n#\n# In order to mitigate this problem it's possible to use the following option\n# that will prevent fsync() from being called in the main process while a\n# BGSAVE or BGREWRITEAOF is in progress.\n#\n# This means that while another child is saving, the durability of Redis is\n# the same as \"appendfsync none\". In practical terms, this means that it is\n# possible to lose up to 30 seconds of log in the worst scenario (with the\n# default Linux settings).\n#\n# If you have latency problems turn this to \"yes\". 
Otherwise leave it as\n# \"no\" that is the safest pick from the point of view of durability.\n\nno-appendfsync-on-rewrite no\n\n# Automatic rewrite of the append only file.\n# Redis is able to automatically rewrite the log file implicitly calling\n# BGREWRITEAOF when the AOF log size grows by the specified percentage.\n#\n# This is how it works: Redis remembers the size of the AOF file after the\n# latest rewrite (if no rewrite has happened since the restart, the size of\n# the AOF at startup is used).\n#\n# This base size is compared to the current size. If the current size is\n# bigger than the specified percentage, the rewrite is triggered. Also\n# you need to specify a minimal size for the AOF file to be rewritten, this\n# is useful to avoid rewriting the AOF file even if the percentage increase\n# is reached but it is still pretty small.\n#\n# Specify a percentage of zero in order to disable the automatic AOF\n# rewrite feature.\n\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\n\n# An AOF file may be found to be truncated at the end during the Redis\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where Redis is running\n# crashes, especially when an ext4 filesystem is mounted without the\n# data=ordered option (however this can't happen when Redis itself\n# crashes or aborts but the operating system still works correctly).\n#\n# Redis can either exit with an error when this happens, or load as much\n# data as possible (the default now) and start if the AOF file is found\n# to be truncated at the end. The following option controls this behavior.\n#\n# If aof-load-truncated is set to yes, a truncated AOF file is loaded and\n# the Redis server starts emitting a log to inform the user of the event.\n# Otherwise if the option is set to no, the server aborts with an error\n# and refuses to start. 
When the option is set to no, the user is required\n# to fix the AOF file using the \"redis-check-aof\" utility before restarting\n# the server.\n#\n# Note that if the AOF file is found to be corrupted in the middle\n# the server will still exit with an error. This option only applies when\n# Redis tries to read more data from the AOF file but not enough bytes\n# are found.\naof-load-truncated yes\n\n# When rewriting the AOF file, Redis is able to use an RDB preamble in the\n# AOF file for faster rewrites and recoveries. When this option is turned\n# on the rewritten AOF file is composed of two different stanzas:\n#\n#   [RDB file][AOF tail]\n#\n# When loading Redis recognizes that the AOF file starts with the \"REDIS\"\n# string and loads the prefixed RDB file, and continues loading the AOF\n# tail.\n#\n# This is currently turned off by default in order to avoid the surprise\n# of a format change, but will at some point be used as the default.\naof-use-rdb-preamble no\n\n################################ LUA SCRIPTING  ###############################\n\n# Max execution time of a Lua script in milliseconds.\n#\n# If the maximum execution time is reached Redis will log that a script is\n# still in execution after the maximum allowed time and will start to\n# reply to queries with an error.\n#\n# When a long running script exceeds the maximum execution time only the\n# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be\n# used to stop a script that has not yet called write commands. 
The second\n# is the only way to shut down the server in the case a write command was\n# already issued by the script but the user doesn't want to wait for the natural\n# termination of the script.\n#\n# Set it to 0 or a negative value for unlimited execution without warnings.\nlua-time-limit 5000\n\n################################ REDIS CLUSTER  ###############################\n#\n# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however\n# in order to mark it as \"mature\" we need to wait for a non-trivial percentage\n# of users to deploy it in production.\n# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n#\n# Normal Redis instances can't be part of a Redis Cluster; only nodes that are\n# started as cluster nodes can. In order to start a Redis instance as a\n# cluster node enable the cluster support by uncommenting the following:\n#\n# cluster-enabled yes\n\n# Every cluster node has a cluster configuration file. This file is not\n# intended to be edited by hand. 
It is created and updated by Redis nodes.\n# Every Redis Cluster node requires a different cluster configuration file.\n# Make sure that instances running in the same system do not have\n# overlapping cluster configuration file names.\n#\n# cluster-config-file nodes-6379.conf\n\n# Cluster node timeout is the amount of milliseconds a node must be unreachable\n# for it to be considered in failure state.\n# Most other internal time limits are multiples of the node timeout.\n#\n# cluster-node-timeout 15000\n\n# A slave of a failing master will avoid starting a failover if its data\n# looks too old.\n#\n# There is no simple way for a slave to actually have an exact measure of\n# its \"data age\", so the following two checks are performed:\n#\n# 1) If there are multiple slaves able to failover, they exchange messages\n#    in order to try to give an advantage to the slave with the best\n#    replication offset (more data from the master processed).\n#    Slaves will try to get their rank by offset, and apply to the start\n#    of the failover a delay proportional to their rank.\n#\n# 2) Every single slave computes the time of the last interaction with\n#    its master. This can be the last ping or command received (if the master\n#    is still in the \"connected\" state), or the time that elapsed since the\n#    disconnection with the master (if the replication link is currently down).\n#    If the last interaction is too old, the slave will not try to failover\n#    at all.\n#\n# The point \"2\" can be tuned by the user. 
Specifically a slave will not perform\n# the failover if, since the last interaction with the master, the time\n# elapsed is greater than:\n#\n#   (node-timeout * slave-validity-factor) + repl-ping-slave-period\n#\n# So for example if node-timeout is 30 seconds, and the slave-validity-factor\n# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the\n# slave will not try to failover if it was not able to talk with the master\n# for longer than 310 seconds.\n#\n# A large slave-validity-factor may allow slaves with too old data to failover\n# a master, while a too small value may prevent the cluster from being able to\n# elect a slave at all.\n#\n# For maximum availability, it is possible to set the slave-validity-factor\n# to a value of 0, which means, that slaves will always try to failover the\n# master regardless of the last time they interacted with the master.\n# (However they'll always try to apply a delay proportional to their\n# offset rank).\n#\n# Zero is the only value able to guarantee that when all the partitions heal\n# the cluster will always be able to continue.\n#\n# cluster-slave-validity-factor 10\n\n# Cluster slaves are able to migrate to orphaned masters, that are masters\n# that are left without working slaves. This improves the cluster ability\n# to resist to failures as otherwise an orphaned master can't be failed over\n# in case of failure if it has no working slaves.\n#\n# Slaves migrate to orphaned masters only if there are still at least a\n# given number of other working slaves for their old master. This number\n# is the \"migration barrier\". A migration barrier of 1 means that a slave\n# will migrate only if there is at least 1 other working slave for its master\n# and so forth. It usually reflects the number of slaves you want for every\n# master in your cluster.\n#\n# Default is 1 (slaves migrate only if their masters remain with at least\n# one slave). 
To disable migration just set it to a very large value.\n# A value of 0 can be set but is useful only for debugging and dangerous\n# in production.\n#\n# cluster-migration-barrier 1\n\n# By default Redis Cluster nodes stop accepting queries if they detect there\n# is at least a hash slot uncovered (no available node is serving it).\n# This way if the cluster is partially down (for example a range of hash slots\n# are no longer covered) all the cluster becomes, eventually, unavailable.\n# It automatically returns available as soon as all the slots are covered again.\n#\n# However sometimes you want the subset of the cluster which is working,\n# to continue to accept queries for the part of the key space that is still\n# covered. In order to do so, just set the cluster-require-full-coverage\n# option to no.\n#\n# cluster-require-full-coverage yes\n\n# This option, when set to yes, prevents slaves from trying to failover their\n# master during master failures. However the master can still perform a\n# manual failover, if forced to do so.\n#\n# This is useful in different scenarios, especially in the case of multiple\n# data center operations, where we want one side to never be promoted if not\n# in the case of a total DC failure.\n#\n# cluster-slave-no-failover no\n\n# In order to set up your cluster make sure to read the documentation\n# available at http://redis.io web site.\n\n########################## CLUSTER DOCKER/NAT support  ########################\n\n# In certain deployments, Redis Cluster nodes address discovery fails, because\n# addresses are NAT-ted or because ports are forwarded (the typical case is\n# Docker and other containers).\n#\n# In order to make Redis Cluster work in such environments, a static\n# configuration where each node knows its public address is needed. 
The\n# following options are used for this purpose, and are:\n#\n# * cluster-announce-ip\n# * cluster-announce-port\n# * cluster-announce-bus-port\n#\n# Each instructs the node about its address, client port, and cluster message\n# bus port. The information is then published in the header of the bus packets\n# so that other nodes will be able to correctly map the address of the node\n# publishing the information.\n#\n# If the above options are not used, the normal Redis Cluster auto-detection\n# will be used instead.\n#\n# Note that when remapped, the bus port may not be at the fixed offset of\n# clients port + 10000, so you can specify any port and bus-port depending\n# on how they get remapped. If the bus-port is not set, a fixed offset of\n# 10000 will be used as usual.\n#\n# Example:\n#\n# cluster-announce-ip 10.1.1.5\n# cluster-announce-port 6379\n# cluster-announce-bus-port 6380\n\n################################## SLOW LOG ###################################\n\n# The Redis Slow Log is a system to log queries that exceeded a specified\n# execution time. The execution time does not include the I/O operations\n# like talking with the client, sending the reply and so forth,\n# but just the time needed to actually execute the command (this is the only\n# stage of command execution where the thread is blocked and can not serve\n# other requests in the meantime).\n#\n# You can configure the slow log with two parameters: one tells Redis\n# what is the execution time, in microseconds, to exceed in order for the\n# command to get logged, and the other parameter is the length of the\n# slow log. When a new command is logged the oldest one is removed from the\n# queue of logged commands.\n\n# The following time is expressed in microseconds, so 1000000 is equivalent\n# to one second. Note that a negative number disables the slow log, while\n# a value of zero forces the logging of every command.\nslowlog-log-slower-than 10000\n\n# There is no limit to this length. 
Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET.\nslowlog-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The Redis latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a Redis instance.\n#\n# Via the LATENCY command this information is available to the user that can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal or\n# greater than the amount of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact, that while very small, can be measured under big load. Latency\n# monitoring can easily be enabled at runtime using the command\n# \"CONFIG SET latency-monitor-threshold <milliseconds>\" if needed.\nlatency-monitor-threshold 0\n\n############################# EVENT NOTIFICATION ##############################\n\n# Redis can notify Pub/Sub clients about events happening in the key space.\n# This feature is documented at http://redis.io/topics/notifications\n#\n# For instance if keyspace events notification is enabled, and a client\n# performs a DEL operation on key \"foo\" stored in the Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that Redis will notify among a set\n# of classes. 
Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  A     Alias for g$lshzxe, so that the \"AKE\" string means all the events.\n#\n#  The \"notify-keyspace-events\" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events \"\"\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. 
These thresholds can be configured using the following directives.\nhash-max-ziplist-entries 512\nhash-max-ziplist-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-ziplist-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:\n# 0: disable all list compression\n# 1: depth 1 means \"don't start compressing until after 1 node into the list,\n#    going from either the head or tail\"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding in just one case: when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit in the size of the\n# set in order to use this special memory saving encoding.\nset-max-intset-entries 512\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-ziplist-entries 128\nzset-max-ziplist-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16 bytes header. When an HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. 
The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in\n# order to help rehashing the main Redis hash table (the one mapping top-level\n# keys to values). The hash table implementation Redis uses (see dict.c)\n# performs a lazy rehashing: the more operation you run into a hash table\n# that is rehashing, the more rehashing \"steps\" are performed, so if the\n# server is idle the rehashing is never complete and some more memory is used\n# by the hash table.\n#\n# The default is to use this millisecond 10 times every second in order to\n# actively rehash the main dictionaries, freeing memory when possible.\n#\n# If unsure:\n# use \"activerehashing no\" if you have hard latency requirements and it is\n# not a good thing in your environment that Redis can reply from time to time\n# to queries with 2 milliseconds delay.\n#\n# use \"activerehashing yes\" if you don't have such hard requirements but\n# want to free memory asap when possible.\nactiverehashing yes\n\n# The client output buffer limits can be used to force disconnection of clients\n# that are not reading data from the server fast enough for some reason (a\n# common reason is that a Pub/Sub client can't consume messages as fast as the\n# publisher can produce them).\n#\n# The limit can be set differently for the three different classes of clients:\n#\n# normal -> normal clients including MONITOR clients\n# slave  -> slave clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached 
for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reach 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously overcomes\n# the limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than they can read.\n#\n# Instead there is a default limit for pubsub and slave clients, since\n# subscribers and slaves receive data in a push fashion.\n#\n# Both the hard and the soft limit can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit slave 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. They are limited to a fixed\n# amount by default in order to avoid that a protocol desynchronization (for\n# instance due to a bug in the client) will lead to unbound memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such as huge multi/exec requests or alike.\n#\n# client-query-buffer-limit 1gb\n\n# In the Redis protocol, bulk requests, that are, elements representing single\n# strings, are normally limited to 512 mb. However you can change this limit\n# here.\n#\n# proto-max-bulk-len 512mb\n\n# Redis calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but Redis checks for\n# tasks to perform according to the specified \"hz\" value.\n#\n# By default \"hz\" is set to 10. 
Raising the value will use more CPU when\n# Redis is idle, but at the same time will make Redis more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500, however a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve the performances and how the keys LFU change over time, which\n# is possible to inspect via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the Redis LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. 
This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following commands:\n#\n#   redis-benchmark -n 1000000 incr foo\n#   redis-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be divided by two (or decremented if it has a value\n# less <= 10).\n#\n# The default value for the lfu-decay-time is 1. A Special value of 0 means to\n# decay the counter every time it happens to be scanned.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# WARNING THIS FEATURE IS EXPERIMENTAL. 
However it was stress tested\n# even in production and manually tested by multiple engineers for some\n# time.\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a Redis server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing to reclaim back memory.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However thanks to this feature\n# implemented by Oran Agra for Redis 4.0 this process can happen at runtime\n# in an \"hot\" way, while the server is running.\n#\n# Basically when the fragmentation is over a certain level (see the\n# configuration options below) Redis will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled Redis\n#    to use the copy of Jemalloc we ship with the source code of Redis.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters are able to fine tune the behavior of the\n# defragmentation process. 
If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Enable active defragmentation\n# activedefrag yes\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage\n# active-defrag-cycle-min 25\n\n# Maximal effort for defrag in CPU percentage\n# active-defrag-cycle-max 75\n\n"
  },
  {
    "path": "aegir/conf/redis/redis4.conf",
    "content": "# Redis configuration file example.\n#\n# Note that in order to read the configuration file, Redis must be\n# started with the file path as first argument:\n#\n# ./redis-server /path/to/redis.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all Redis servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Notice option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Redis Sentinel. Since Redis always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, Redis listens\n# for connections from all the network interfaces available on the server.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1\n# xbind 127.0.0.1 ::1\n#\n# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. So by default we uncomment the\n# following bind directive, that will force Redis to listen only into\n# the IPv4 loopback interface address (this means Redis will be able to\n# accept connections only from clients running into the same computer it\n# is running).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# JUST COMMENT THE FOLLOWING LINE.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# Protected mode is a layer of security protection, in order to avoid that\n# Redis instances left open on the internet are accessed and exploited.\n#\n# When protected mode is on and if:\n#\n# 1) The server is not binding explicitly to a set of addresses using the\n#    \"bind\" directive.\n# 2) No password is configured.\n#\n# The server only accepts connections from clients connecting from the\n# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain\n# sockets.\n#\n# By default protected mode is enabled. 
You should disable it only if\n# you are sure you want clients from other hosts to connect to Redis\n# even if no authentication is configured, nor a specific set of interfaces\n# are explicitly listed using the \"bind\" directive.\nprotected-mode no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified Redis will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need an high backlog in order\n# to avoid slow clients connections issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so Redis will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/redis/redis.sock\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 3600\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Take the connection alive from the point of view of network\n#    equipment in the middle.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection the double of the time is needed.\n# On other kernels the period depends on the kernel configuration.\n#\n# A reasonable value for this option is 300 seconds, which is the new\n# Redis default starting with Redis 3.2.1.\ntcp-keepalive 300\n\n################################# GENERAL #####################################\n\n# By default Redis does not run as a daemon. 
Use 'yes' if you need it.\n# Note that Redis will write a pid file in /run/redis.pid when daemonized.\ndaemonize yes\n\n# If you run Redis from upstart or systemd, Redis can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous liveness pings back to your supervisor.\nsupervised no\n\n# If a pid file is specified, Redis writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/redis.pid\".\n#\n# Creating a pid file is best effort: if Redis is not able to create it\n# nothing bad happens, the server will start and run normally.\npidfile /run/redis/redis.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (many rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\nloglevel warning\n\n# Specify the log file name. Also the empty string can be used to force\n# Redis to log on the standard output. 
Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile /var/log/redis/redis-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident redis\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# Set the number of databases. The default database is DB 0, you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 8\n\n# By default Redis shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY. Basically this means\n# that normally a logo is displayed only in interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show a\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo yes\n\n################################ SNAPSHOTTING  ################################\n#\n# Save the DB on disk:\n#\n#   save <seconds> <changes>\n#\n#   Will save the DB if both the given number of seconds and the given\n#   number of write operations against the DB occurred.\n#\n#   In the example below the behaviour will be to save:\n#   after 900 sec (15 min) if at least 1 key changed\n#   after 300 sec (5 min) if at least 10 keys changed\n#   after 60 sec if at least 10000 keys changed\n#\n#   Note: you can disable saving completely by commenting out all \"save\" lines.\n#\n#   It is also possible to remove all the previously configured save\n#   points by adding a save directive with a single empty string argument\n#   like in the following example:\n#\n#   save \"\"\n\n# save 900 1\n# save 300 10\n# save 60 10000\n\n# By default Redis will stop accepting writes if RDB snapshots are enabled\n# (at least one 
save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process will start working again Redis will\n# automatically allow writes again.\n#\n# However if you have setup your proper monitoring of the Redis server\n# and persistence, you may want to disable this feature so that Redis will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default that's set to 'yes' as it's almost always a win.\n# If you want to save some CPU in the saving child set it to 'no' but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performances.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# The filename where to dump the DB\ndbfilename dump.rdb\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir /var/lib/redis/\n\n################################# REPLICATION #################################\n\n# Master-Slave replication. Use slaveof to make a Redis instance a copy of\n# another Redis server. 
A few things to understand ASAP about Redis replication.\n#\n# 1) Redis replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears not to be connected with at least\n#    a given number of slaves.\n# 2) Redis slaves are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. After a\n#    network partition slaves automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# slaveof <masterip> <masterport>\n\n# If the master is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the slave to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the slave request.\n#\n# masterauth <master-password>\n\n# When a slave loses its connection with the master, or when the replication\n# is still in progress, the slave can act in two different ways:\n#\n# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) if slave-serve-stale-data is set to 'no' the slave will reply with\n#    an error \"SYNC with master in progress\" to all kinds of commands\n#    except INFO and SLAVEOF.\n#\nslave-serve-stale-data yes\n\n# You can configure a slave instance to accept writes or not. 
Writing against\n# a slave instance may be useful to store some ephemeral data (because data\n# written on a slave will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# Since Redis 2.6 by default slaves are read-only.\n#\n# Note: read only slaves are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only slave exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only slaves using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nslave-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# -------------------------------------------------------\n# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY\n# -------------------------------------------------------\n#\n# New slaves and reconnecting slaves that are not able to continue the replication\n# process just receiving differences, need to do what is called a \"full\n# synchronization\". An RDB file is transmitted from the master to the slaves.\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The Redis master creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the slaves incrementally.\n# 2) Diskless: The Redis master creates a new process that directly writes the\n#              RDB file to slave sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more slaves\n# can be queued and served with the RDB file as soon as the current child producing\n# the RDB file finishes its work. 
With diskless replication instead once\n# the transfer starts, new slaves arriving will be queued and a new transfer\n# will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple slaves\n# will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync no\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the slaves.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new slaves arriving, that will be queued for the next RDB transfer, so the server\n# waits a delay in order to let more slaves arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# Slaves send PINGs to server in a predefined interval. It's possible to change\n# this interval with the repl_ping_slave_period option. The default value is 10\n# seconds.\n#\n# repl-ping-slave-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of slave.\n# 2) Master timeout from the point of view of slaves (data, pings).\n# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-slave-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the slave.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the slave socket after SYNC?\n#\n# If you select \"yes\" Redis will use a smaller number of TCP packets and\n# less bandwidth to send data to slaves. 
But this can add a delay for\n# the data to appear on the slave side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the slave side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and slaves are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. The backlog is a buffer that accumulates\n# slave data when slaves are disconnected for some time, so that when a slave\n# wants to reconnect again, often a full resync is not needed, but a partial\n# resync is enough, just passing the portion of data the slave missed while\n# disconnected.\n#\n# The bigger the replication backlog, the longer the time the slave can be\n# disconnected and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated once there is at least a slave connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no longer connected slaves for some time, the backlog\n# will be freed. 
The following option configures the amount of seconds that\n# need to elapse, starting from the time the last slave disconnected, for\n# the backlog buffer to be freed.\n#\n# Note that slaves never free the backlog for timeout, since they may be\n# promoted to masters later, and should be able to correctly \"partially\n# resynchronize\" with the slaves: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The slave priority is an integer number published by Redis in the INFO output.\n# It is used by Redis Sentinel in order to select a slave to promote into a\n# master if the master is no longer working correctly.\n#\n# A slave with a low priority number is considered better for promotion, so\n# for instance if there are three slaves with priority 10, 100, 25 Sentinel will\n# pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the slave as not able to perform the\n# role of master, so a slave with priority of 0 will never be selected by\n# Redis Sentinel for promotion.\n#\n# By default the priority is 100.\nslave-priority 100\n\n# It is possible for a master to stop accepting writes if there are less than\n# N slaves connected, having a lag less or equal than M seconds.\n#\n# The N slaves need to be in \"online\" state.\n#\n# The lag in seconds, that must be <= the specified value, is calculated from\n# the last ping received from the slave, that is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough slaves\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 slaves with a lag <= 10 seconds use:\n#\n# min-slaves-to-write 3\n# min-slaves-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-slaves-to-write is set to 0 (feature disabled) and\n# 
min-slaves-max-lag is set to 10.\n\n# A Redis master is able to list the address and port of the attached\n# slaves in different ways. For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Redis Sentinel in order to discover slave instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a master.\n#\n# The IP and port normally reported by a slave are obtained\n# in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the slave to connect with the master.\n#\n#   Port: The port is communicated by the slave during the replication\n#   handshake, and is normally the port that the slave is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the slave may be actually reachable via different IP and port\n# pairs. The following two options can be used by a slave in order to\n# report to its master a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both the options if you need to override just\n# the port or the IP address.\n#\n# slave-announce-ip 5.5.5.5\n# slave-announce-port 1234\n\n################################## SECURITY ###################################\n\n# Require clients to issue AUTH <PASSWORD> before processing any other\n# commands.  This might be useful in environments in which you do not trust\n# others with access to the host running redis-server.\n#\n# This should stay commented out for backward compatibility and because most\n# people do not need auth (e.g. they run their own servers).\n#\n# Warning: since Redis is pretty fast an outside user can try up to\n# 150k passwords per second against a good box. 
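If AUTH is wanted, the requirepass/masterauth pair described in this section might be sketched as follows (the password value is a placeholder, not a real credential):

```conf
# Sketch: password-protect the master (placeholder value):
# requirepass use-a-long-random-password
#
# And on each slave, so it can authenticate during replication sync:
# masterauth use-a-long-random-password
```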
This means that you should\n# use a very strong password otherwise it will be very easy to break.\n#\n# requirepass foobared\n\n# Command renaming.\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to slaves may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. By default\n# this limit is set to 10000 clients, however if the Redis server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as Redis reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached Redis will close all the new connections sending\n# an error 'max number of clients reached'.\n#\nmaxclients 4000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached Redis will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If Redis can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', Redis will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using Redis as an LRU or LFU cache, 
or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have slaves attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the slaves is subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of slaves is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... if you have slaves attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for slave\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\nmaxmemory 88MB\n\n# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory\n# is reached. You can select among the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU among the keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key among the ones with an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# LRU, LFU and volatile-ttl are all implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, Redis will return an error on write\n#       operations, when there are no suitable keys for eviction.\n#\n#       At the date of writing these commands are: set setnx setex append\n#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd\n#       sinter sinterstore sunion sunionstore sdiff 
sdiffstore zadd zincrby\n#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby\n#       getset mset msetnx exec sort\n#\n# The default is: noeviction\n#\nmaxmemory-policy allkeys-lru\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune it for speed or\n# accuracy. By default Redis will check five keys and pick the one that was\n# least recently used; you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 10 approximates very closely\n# true LRU but costs more CPU. 3 is faster but not very accurate.\n#\n# maxmemory-samples 5\n\n############################# LAZY FREEING ####################################\n\n# Redis has two primitives to delete keys. One is called DEL and is a blocking\n# deletion of the object. It means that the server stops processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in Redis. However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons Redis also offers non blocking deletion primitives\n# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and\n# FLUSHDB commands, in order to reclaim memory in background. Those commands\n# are executed in constant time. Another thread will incrementally free the\n# object in the background as fast as possible.\n#\n# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.\n# It's up to the design of the application to understand when it is a good\n# idea to use one or the other. 
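Taken together, the memory-management directives above amount to a bounded LRU cache. A minimal sketch, built only from directives already shown (the size and sample count are illustrative):

```conf
# Sketch: bound the instance and evict via approximated LRU,
# trading a little extra CPU for better LRU accuracy:
maxmemory 88mb
maxmemory-policy allkeys-lru
maxmemory-samples 10
```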
However the Redis server sometimes has to\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically Redis deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a slave performs a full resynchronization with\n#    its master, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases the default is to delete objects in a blocking way,\n# as if DEL was called. However you can configure each case specifically\n# in order to instead release memory in a non-blocking way, as if UNLINK\n# was called, using the following configuration directives:\n\nlazyfree-lazy-eviction yes\nlazyfree-lazy-expire yes\nlazyfree-lazy-server-del yes\nslave-lazy-flush yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default Redis asynchronously dumps the dataset on disk. This mode is\n# good enough in many applications, but an issue with the Redis process or\n# a power outage may result in a few minutes of writes lost (depending on\n# the configured save points).\n#\n# The Append Only File is an alternative persistence mode that provides\n# much better durability. 
For instance using the default data fsync policy\n# (see later in the config file) Redis can lose just one second of writes in a\n# dramatic event like a server power outage, or a single write if something\n# wrong with the Redis process itself happens, but the operating system is\n# still running correctly.\n#\n# AOF and RDB persistence can be enabled at the same time without problems.\n# If the AOF is enabled on startup Redis will load the AOF, that is the file\n# with the better durability guarantees.\n#\n# Please check http://redis.io/topics/persistence for more information.\n\nappendonly no\n\n# The name of the append only file (default: \"appendonly.aof\")\n\nappendfilename \"appendonly.aof\"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OS will really flush\n# data on disk, some other OS will just try to do it ASAP.\n#\n# Redis supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. Faster.\n# always: fsync after every write to the append only log. Slow, Safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is \"everysec\", as that's usually the right compromise between\n# speed and data safety. 
It's up to you to understand if you can relax this to\n# \"no\" that will let the operating system flush the output buffer when\n# it wants, for better performance (but if you can live with the idea of\n# some data loss consider the default persistence mode that's snapshotting),\n# or on the contrary, use \"always\" that's very slow but a bit safer than\n# everysec.\n#\n# For more details please check the following article:\n# http://antirez.com/post/redis-persistence-demystified.html\n#\n# If unsure, use \"everysec\".\n\n# appendfsync always\nappendfsync everysec\n# appendfsync no\n\n# When the AOF fsync policy is set to always or everysec, and a background\n# saving process (a background save or AOF log background rewriting) is\n# performing a lot of I/O against the disk, in some Linux configurations\n# Redis may block too long on the fsync() call. Note that there is no fix for\n# this currently, as even performing fsync in a different thread will block\n# our synchronous write(2) call.\n#\n# In order to mitigate this problem it's possible to use the following option\n# that will prevent fsync() from being called in the main process while a\n# BGSAVE or BGREWRITEAOF is in progress.\n#\n# This means that while another child is saving, the durability of Redis is\n# the same as \"appendfsync none\". In practical terms, this means that it is\n# possible to lose up to 30 seconds of log in the worst scenario (with the\n# default Linux settings).\n#\n# If you have latency problems turn this to \"yes\". 
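Enabling the AOF persistence described in this section, with the "everysec" compromise and automatic rewrites, might look like the following sketch (values are the upstream defaults shown elsewhere in this file):

```conf
# Sketch: AOF persistence with the everysec fsync compromise and
# automatic background rewrites once the log doubles past 64mb:
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```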
Otherwise leave it as\n# \"no\" that is the safest pick from the point of view of durability.\n\nno-appendfsync-on-rewrite no\n\n# Automatic rewrite of the append only file.\n# Redis is able to automatically rewrite the log file implicitly calling\n# BGREWRITEAOF when the AOF log size grows by the specified percentage.\n#\n# This is how it works: Redis remembers the size of the AOF file after the\n# latest rewrite (if no rewrite has happened since the restart, the size of\n# the AOF at startup is used).\n#\n# This base size is compared to the current size. If the current size is\n# bigger than the specified percentage, the rewrite is triggered. Also\n# you need to specify a minimal size for the AOF file to be rewritten, this\n# is useful to avoid rewriting the AOF file even if the percentage increase\n# is reached but it is still pretty small.\n#\n# Specify a percentage of zero in order to disable the automatic AOF\n# rewrite feature.\n\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\n\n# An AOF file may be found to be truncated at the end during the Redis\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where Redis is running\n# crashes, especially when an ext4 filesystem is mounted without the\n# data=ordered option (however this can't happen when Redis itself\n# crashes or aborts but the operating system still works correctly).\n#\n# Redis can either exit with an error when this happens, or load as much\n# data as possible (the default now) and start if the AOF file is found\n# to be truncated at the end. The following option controls this behavior.\n#\n# If aof-load-truncated is set to yes, a truncated AOF file is loaded and\n# the Redis server starts emitting a log to inform the user of the event.\n# Otherwise if the option is set to no, the server aborts with an error\n# and refuses to start. 
When the option is set to no, the user is required\n# to fix the AOF file using the \"redis-check-aof\" utility before restarting\n# the server.\n#\n# Note that if the AOF file is found to be corrupted in the middle,\n# the server will still exit with an error. This option only applies when\n# Redis will try to read more data from the AOF file but not enough bytes\n# will be found.\naof-load-truncated yes\n\n# When rewriting the AOF file, Redis is able to use an RDB preamble in the\n# AOF file for faster rewrites and recoveries. When this option is turned\n# on the rewritten AOF file is composed of two different stanzas:\n#\n#   [RDB file][AOF tail]\n#\n# When loading Redis recognizes that the AOF file starts with the \"REDIS\"\n# string and loads the prefixed RDB file, and continues loading the AOF\n# tail.\n#\n# This is currently turned off by default in order to avoid the surprise\n# of a format change, but will at some point be used as the default.\naof-use-rdb-preamble no\n\n################################ LUA SCRIPTING  ###############################\n\n# Max execution time of a Lua script in milliseconds.\n#\n# If the maximum execution time is reached Redis will log that a script is\n# still in execution after the maximum allowed time and will start to\n# reply to queries with an error.\n#\n# When a long running script exceeds the maximum execution time only the\n# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be\n# used to stop a script that has not yet called write commands. 
The second\n# is the only way to shut down the server in case a write command was\n# already issued by the script but the user doesn't want to wait for the natural\n# termination of the script.\n#\n# Set it to 0 or a negative value for unlimited execution without warnings.\nlua-time-limit 5000\n\n################################ REDIS CLUSTER  ###############################\n#\n# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however\n# in order to mark it as \"mature\" we need to wait for a non trivial percentage\n# of users to deploy it in production.\n# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n#\n# Normal Redis instances can't be part of a Redis Cluster; only nodes that are\n# started as cluster nodes can. In order to start a Redis instance as a\n# cluster node enable the cluster support by uncommenting the following:\n#\n# cluster-enabled yes\n\n# Every cluster node has a cluster configuration file. This file is not\n# intended to be edited by hand. 
It is created and updated by Redis nodes.\n# Every Redis Cluster node requires a different cluster configuration file.\n# Make sure that instances running in the same system do not have\n# overlapping cluster configuration file names.\n#\n# cluster-config-file nodes-6379.conf\n\n# Cluster node timeout is the number of milliseconds a node must be unreachable\n# for it to be considered in failure state.\n# Most other internal time limits are multiples of the node timeout.\n#\n# cluster-node-timeout 15000\n\n# A slave of a failing master will avoid starting a failover if its data\n# looks too old.\n#\n# There is no simple way for a slave to actually have an exact measure of\n# its \"data age\", so the following two checks are performed:\n#\n# 1) If there are multiple slaves able to failover, they exchange messages\n#    in order to try to give an advantage to the slave with the best\n#    replication offset (more data from the master processed).\n#    Slaves will try to get their rank by offset, and apply to the start\n#    of the failover a delay proportional to their rank.\n#\n# 2) Every single slave computes the time of the last interaction with\n#    its master. This can be the last ping or command received (if the master\n#    is still in the \"connected\" state), or the time that elapsed since the\n#    disconnection with the master (if the replication link is currently down).\n#    If the last interaction is too old, the slave will not try to failover\n#    at all.\n#\n# The point \"2\" can be tuned by the user. 
Specifically a slave will not perform\n# the failover if, since the last interaction with the master, the time\n# elapsed is greater than:\n#\n#   (node-timeout * slave-validity-factor) + repl-ping-slave-period\n#\n# So for example if node-timeout is 30 seconds, and the slave-validity-factor\n# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the\n# slave will not try to failover if it was not able to talk with the master\n# for longer than 310 seconds.\n#\n# A large slave-validity-factor may allow slaves with too old data to failover\n# a master, while a too small value may prevent the cluster from being able to\n# elect a slave at all.\n#\n# For maximum availability, it is possible to set the slave-validity-factor\n# to a value of 0, which means that slaves will always try to failover the\n# master regardless of the last time they interacted with the master.\n# (However they'll always try to apply a delay proportional to their\n# offset rank).\n#\n# Zero is the only value able to guarantee that when all the partitions heal\n# the cluster will always be able to continue.\n#\n# cluster-slave-validity-factor 10\n\n# Cluster slaves are able to migrate to orphaned masters, that are masters\n# that are left without working slaves. This improves the cluster ability\n# to resist failures as otherwise an orphaned master can't be failed over\n# in case of failure if it has no working slaves.\n#\n# Slaves migrate to orphaned masters only if there are still at least a\n# given number of other working slaves for their old master. This number\n# is the \"migration barrier\". A migration barrier of 1 means that a slave\n# will migrate only if there is at least 1 other working slave for its master\n# and so forth. It usually reflects the number of slaves you want for every\n# master in your cluster.\n#\n# Default is 1 (slaves migrate only if their masters remain with at least\n# one slave). 
To disable migration just set it to a very large value.\n# A value of 0 can be set but is useful only for debugging and dangerous\n# in production.\n#\n# cluster-migration-barrier 1\n\n# By default Redis Cluster nodes stop accepting queries if they detect there\n# is at least a hash slot uncovered (no available node is serving it).\n# This way if the cluster is partially down (for example a range of hash slots\n# are no longer covered) the whole cluster becomes, eventually, unavailable.\n# It automatically returns available as soon as all the slots are covered again.\n#\n# However sometimes you want the subset of the cluster which is working,\n# to continue to accept queries for the part of the key space that is still\n# covered. In order to do so, just set the cluster-require-full-coverage\n# option to no.\n#\n# cluster-require-full-coverage yes\n\n# This option, when set to yes, prevents slaves from trying to failover their\n# master during master failures. However the master can still perform a\n# manual failover, if forced to do so.\n#\n# This is useful in different scenarios, especially in the case of multiple\n# data center operations, where we want one side to never be promoted if not\n# in the case of a total DC failure.\n#\n# cluster-slave-no-failover no\n\n# In order to set up your cluster make sure to read the documentation\n# available at http://redis.io web site.\n\n########################## CLUSTER DOCKER/NAT support  ########################\n\n# In certain deployments, Redis Cluster nodes address discovery fails, because\n# addresses are NAT-ted or because ports are forwarded (the typical case is\n# Docker and other containers).\n#\n# In order to make Redis Cluster work in such environments, a static\n# configuration where each node knows its public address is needed. 
The\n# following three options are used for this purpose:\n#\n# * cluster-announce-ip\n# * cluster-announce-port\n# * cluster-announce-bus-port\n#\n# Each instructs the node about its address, client port, and cluster message\n# bus port. The information is then published in the header of the bus packets\n# so that other nodes will be able to correctly map the address of the node\n# publishing the information.\n#\n# If the above options are not used, the normal Redis Cluster auto-detection\n# will be used instead.\n#\n# Note that when remapped, the bus port may not be at the fixed offset of\n# client port + 10000, so you can specify any port and bus-port depending\n# on how they get remapped. If the bus-port is not set, a fixed offset of\n# 10000 will be used as usual.\n#\n# Example:\n#\n# cluster-announce-ip 10.1.1.5\n# cluster-announce-port 6379\n# cluster-announce-bus-port 6380\n\n################################## SLOW LOG ###################################\n\n# The Redis Slow Log is a system to log queries that exceeded a specified\n# execution time. The execution time does not include the I/O operations\n# like talking with the client, sending the reply and so forth,\n# but just the time needed to actually execute the command (this is the only\n# stage of command execution where the thread is blocked and can not serve\n# other requests in the meantime).\n#\n# You can configure the slow log with two parameters: one tells Redis\n# what is the execution time, in microseconds, to exceed in order for the\n# command to get logged, and the other parameter is the length of the\n# slow log. When a new command is logged the oldest one is removed from the\n# queue of logged commands.\n\n# The following time is expressed in microseconds, so 1000000 is equivalent\n# to one second. Note that a negative number disables the slow log, while\n# a value of zero forces the logging of every command.\nslowlog-log-slower-than 10000\n\n# There is no limit to this length. 
Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET.\nslowlog-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The Redis latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a Redis instance.\n#\n# Via the LATENCY command this information is available to the user that can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal or\n# greater than the amount of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact, that while very small, can be measured under big load. Latency\n# monitoring can easily be enabled at runtime using the command\n# \"CONFIG SET latency-monitor-threshold <milliseconds>\" if needed.\nlatency-monitor-threshold 0\n\n############################# EVENT NOTIFICATION ##############################\n\n# Redis can notify Pub/Sub clients about events happening in the key space.\n# This feature is documented at http://redis.io/topics/notifications\n#\n# For instance if keyspace events notification is enabled, and a client\n# performs a DEL operation on key \"foo\" stored in the Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that Redis will notify among a set\n# of classes. 
Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  A     Alias for g$lshzxe, so that the \"AKE\" string means all the events.\n#\n#  The \"notify-keyspace-events\" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events \"\"\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. 
These thresholds can be configured using the following directives.\nhash-max-ziplist-entries 512\nhash-max-ziplist-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-ziplist-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:\n# 0: disable all list compression\n# 1: depth 1 means \"don't start compressing until after 1 node into the list,\n#    going from either the head or tail\"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding in just one case: when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit in the size of the\n# set in order to use this special memory saving encoding.\nset-max-intset-entries 512\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-ziplist-entries 128\nzset-max-ziplist-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16 bytes header. When an HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. 
The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in\n# order to help rehashing the main Redis hash table (the one mapping top-level\n# keys to values). The hash table implementation Redis uses (see dict.c)\n# performs a lazy rehashing: the more operations you run against a hash table\n# that is rehashing, the more rehashing \"steps\" are performed, so if the\n# server is idle the rehashing is never complete and some more memory is used\n# by the hash table.\n#\n# The default is to use this millisecond 10 times every second in order to\n# actively rehash the main dictionaries, freeing memory when possible.\n#\n# If unsure:\n# use \"activerehashing no\" if you have hard latency requirements and it is\n# not a good thing in your environment that Redis can reply from time to time\n# to queries with 2 milliseconds delay.\n#\n# use \"activerehashing yes\" if you don't have such hard requirements but\n# want to free memory asap when possible.\nactiverehashing yes\n\n# The client output buffer limits can be used to force disconnection of clients\n# that are not reading data from the server fast enough for some reason (a\n# common reason is that a Pub/Sub client can't consume messages as fast as the\n# publisher can produce them).\n#\n# The limit can be set differently for the three different classes of clients:\n#\n# normal -> normal clients including MONITOR clients\n# slave  -> slave clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached 
for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reaches 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously overcomes\n# the limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than it can be read.\n#\n# Instead there is a default limit for pubsub and slave clients, since\n# subscribers and slaves receive data in a push fashion.\n#\n# Both the hard and the soft limit can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit slave 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. They are limited to a fixed\n# amount by default in order to prevent a protocol desynchronization (for\n# instance due to a bug in the client) from leading to unbound memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such as huge multi/exec requests or the like.\n#\n# client-query-buffer-limit 1gb\n\n# In the Redis protocol, bulk requests, that is, elements representing single\n# strings, are normally limited to 512 mb. However you can change this limit\n# here.\n#\n# proto-max-bulk-len 512mb\n\n# Redis calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but Redis checks for\n# tasks to perform according to the specified \"hz\" value.\n#\n# By default \"hz\" is set to 10. 
Raising the value will use more CPU when\n# Redis is idle, but at the same time will make Redis more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500, however a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve performance and how the LFU of the keys changes over time, which\n# can be inspected via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the Redis LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. 
This is a table of how the frequency\n# counter changes with a different number of accesses and different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following commands:\n#\n#   redis-benchmark -n 1000000 incr foo\n#   redis-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be divided by two (or decremented if it has a value\n# <= 10).\n#\n# The default value for the lfu-decay-time is 1. A special value of 0 means to\n# decay the counter every time it happens to be scanned.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# WARNING: THIS FEATURE IS EXPERIMENTAL. 
However it was stress tested\n# even in production and manually tested by multiple engineers for some\n# time.\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a Redis server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing memory to be reclaimed.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However thanks to this feature\n# implemented by Oran Agra for Redis 4.0 this process can happen at runtime\n# in a \"hot\" way, while the server is running.\n#\n# Basically when the fragmentation is over a certain level (see the\n# configuration options below) Redis will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys,\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled Redis\n#    to use the copy of Jemalloc we ship with the source code of Redis.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters are able to fine tune the behavior of the\n# defragmentation process. 
If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Enable active defragmentation\n# activedefrag yes\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage\n# active-defrag-cycle-min 25\n\n# Maximal effort for defrag in CPU percentage\n# active-defrag-cycle-max 75\n\n"
  },
  {
    "path": "aegir/conf/redis/redis5.conf",
    "content": "# Redis configuration file example.\n#\n# Note that in order to read the configuration file, Redis must be\n# started with the file path as first argument:\n#\n# ./redis-server /path/to/redis.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all Redis servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Notice option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Redis Sentinel. Since Redis always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, Redis listens\n# for connections from all the network interfaces available on the server.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1\n# xbind 127.0.0.1 ::1\n#\n# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. So by default we uncomment the\n# following bind directive, that will force Redis to listen only on\n# the IPv4 loopback interface address (this means Redis will be able to\n# accept connections only from clients running on the same computer it\n# is running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# JUST COMMENT THE FOLLOWING LINE.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# Protected mode is a layer of security protection, in order to avoid Redis\n# instances left open on the internet being accessed and exploited.\n#\n# When protected mode is on and if:\n#\n# 1) The server is not binding explicitly to a set of addresses using the\n#    \"bind\" directive.\n# 2) No password is configured.\n#\n# The server only accepts connections from clients connecting from the\n# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain\n# sockets.\n#\n# By default protected mode is enabled. 
You should disable it only if\n# you are sure you want clients from other hosts to connect to Redis\n# even if no authentication is configured, nor a specific set of interfaces\n# are explicitly listed using the \"bind\" directive.\nprotected-mode no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified Redis will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow client connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so Redis will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/redis/redis.sock\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 3600\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Keep the connection alive from the point of view of network\n#    equipment in the middle.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection double the time is needed.\n# On other kernels the period depends on the kernel configuration.\n#\n# A reasonable value for this option is 300 seconds, which is the new\n# Redis default starting with Redis 3.2.1.\ntcp-keepalive 300\n\n################################# GENERAL #####################################\n\n# By default Redis does not run as a daemon. 
Use 'yes' if you need it.\n# Note that Redis will write a pid file in /run/redis.pid when daemonized.\ndaemonize yes\n\n# If you run Redis from upstart or systemd, Redis can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous liveness pings back to your supervisor.\nsupervised no\n\n# If a pid file is specified, Redis writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/redis.pid\".\n#\n# Creating a pid file is best effort: if Redis is not able to create it\n# nothing bad happens, the server will start and run normally.\npidfile /run/redis/redis.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (many rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\nloglevel warning\n\n# Specify the log file name. Also the empty string can be used to force\n# Redis to log on the standard output. 
Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null.\nlogfile /var/log/redis/redis-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident redis\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# Set the number of databases. The default database is DB 0, you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 8\n\n# By default Redis shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY. Basically this means\n# that normally a logo is displayed only in interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo yes\n\n################################ SNAPSHOTTING  ################################\n#\n# Save the DB on disk:\n#\n#   save <seconds> <changes>\n#\n#   Will save the DB if both the given number of seconds and the given\n#   number of write operations against the DB occurred.\n#\n#   In the example below the behaviour will be to save:\n#   after 900 sec (15 min) if at least 1 key changed\n#   after 300 sec (5 min) if at least 10 keys changed\n#   after 60 sec if at least 10000 keys changed\n#\n#   Note: you can disable saving completely by commenting out all \"save\" lines.\n#\n#   It is also possible to remove all the previously configured save\n#   points by adding a save directive with a single empty string argument\n#   like in the following example:\n#\n#   save \"\"\n\n# save 900 1\n# save 300 10\n# save 60 10000\n\n# By default Redis will stop accepting writes if RDB snapshots are enabled\n# (at least one 
save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again, Redis will\n# automatically allow writes again.\n#\n# However if you have set up proper monitoring of the Redis server\n# and persistence, you may want to disable this feature so that Redis will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default that's set to 'yes' as it's almost always a win.\n# If you want to save some CPU in the saving child set it to 'no' but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# The filename where the DB will be dumped\ndbfilename dump.rdb\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir /var/lib/redis/\n\n################################# REPLICATION #################################\n\n# Master-Replica replication. Use replicaof to make a Redis instance a copy of\n# another Redis server. 
A few things to understand ASAP about Redis replication.\n#\n#   +------------------+      +---------------+\n#   |      Master      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Redis replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears not to be connected to at least\n#    a given number of replicas.\n# 2) Redis replicas are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. After a\n#    network partition replicas automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# replicaof <masterip> <masterport>\n\n# If the master is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the replica request.\n#\n# masterauth <master-password>\n\n# When a replica loses its connection with the master, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) if replica-serve-stale-data is set to 'no' the replica will reply with\n#    an error \"SYNC with master in progress\" to all kinds of commands\n#    except INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,\n#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,\n#    COMMAND, POST, 
HOST: and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# Since Redis 2.6 by default replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# -------------------------------------------------------\n# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY\n# -------------------------------------------------------\n#\n# New replicas and reconnecting replicas that are not able to continue the replication\n# process just receiving differences, need to do what is called a \"full\n# synchronization\". An RDB file is transmitted from the master to the replicas.\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The Redis master creates a new process that writes the RDB\n#                 file on disk. 
Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The Redis master creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child producing\n# the RDB file finishes its work. With diskless replication instead once\n# the transfer starts, new replicas arriving will be queued and a new transfer\n# will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple replicas\n# will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync no\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new replicas arriving, that will be queued for the next RDB transfer, so the server\n# waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# Replicas send PINGs to server in a predefined interval. It's possible to change\n# this interval with the repl_ping_replica_period option. 
The default value is 10\n# seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the replica.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select \"yes\" Redis will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and replicas are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a replica\n# wants to reconnect again, often a full resync is not needed, but a partial\n# resync is enough, just passing the portion of data the replica missed while\n# disconnected.\n#\n# The bigger the replication backlog, the longer the time the replica can be\n# disconnected and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated once there is at least a replica connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no longer connected replicas for some time, the backlog\n# will be freed. 
The following option configures the number of seconds that\n# need to elapse, starting from the time the last replica disconnected, for\n# the backlog buffer to be freed.\n#\n# Note that replicas never free the backlog due to a timeout, since they may be\n# promoted to masters later, and should be able to correctly \"partially\n# resynchronize\" with the replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The replica priority is an integer number published by Redis in the INFO output.\n# It is used by Redis Sentinel in order to select a replica to promote into a\n# master if the master is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel will\n# pick the one with priority 10, which is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of master, so a replica with priority of 0 will never be selected by\n# Redis Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# It is possible for a master to stop accepting writes if there are fewer than\n# N replicas connected, with a lag less than or equal to M seconds.\n#\n# The N replicas need to be in \"online\" state.\n#\n# The lag in seconds, which must be <= the specified value, is calculated from\n# the last ping received from the replica, which is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough replicas\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 replicas with a lag <= 10 seconds use:\n#\n# min-replicas-to-write 3\n# min-replicas-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-replicas-to-write is 
set to 0 (feature disabled) and\n# min-replicas-max-lag is set to 10.\n\n# A Redis master is able to list the address and port of the attached\n# replicas in different ways. For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Redis Sentinel in order to discover replica instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a master.\n#\n# The IP address and port normally reported by a replica are obtained\n# in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the replica to connect with the master.\n#\n#   Port: The port is communicated by the replica during the replication\n#   handshake, and is normally the port that the replica is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the replica may actually be reachable via different IP and port\n# pairs. The following two options can be used by a replica in order to\n# report to its master a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both options if you need to override just\n# the port or the IP address.\n#\n# replica-announce-ip 5.5.5.5\n# replica-announce-port 1234\n\n################################## SECURITY ###################################\n\n# Require clients to issue AUTH <PASSWORD> before processing any other\n# commands.  This might be useful in environments in which you do not trust\n# others with access to the host running redis-server.\n#\n# This should stay commented out for backward compatibility and because most\n# people do not need auth (e.g. they run their own servers).\n#\n# Warning: since Redis is pretty fast an outside user can try up to\n# 150k passwords per second against a good box. 
This means that you should\n# use a very strong password, otherwise it will be very easy to break.\n#\n# requirepass foobared\n\n# Command renaming.\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to replicas may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. By default\n# this limit is set to 10000 clients, however if the Redis server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as Redis reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached Redis will close all new connections, sending\n# an error 'max number of clients reached'.\n#\nmaxclients 4000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached Redis will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If Redis can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', Redis will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using Redis as an LRU or LFU 
cache, or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have replicas attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the replicas is subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of replicas is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... if you have replicas attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for replica\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\nmaxmemory 88MB\n\n# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory\n# is reached. You can select among the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU among the keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key among the ones with an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (shortest TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# LRU, LFU and volatile-ttl are all implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, Redis will return an error on write\n#       operations, when there are no suitable keys for eviction.\n#\n#       At the date of writing these commands are: set setnx setex append\n#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd\n#       sinter sinterstore sunion 
sunionstore sdiff sdiffstore zadd zincrby\n#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby\n#       getset mset msetnx exec sort\n#\n# The default is: noeviction\n#\nmaxmemory-policy allkeys-lru\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune them for speed or\n# accuracy. By default Redis will check five keys and pick the one that was\n# used least recently; you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 10 approximates true LRU\n# very closely but costs more CPU. 3 is faster but not very accurate.\n#\n# maxmemory-samples 5\n\n# Starting from Redis 5, by default a replica will ignore its maxmemory setting\n# (unless it is promoted to master after a failover or manually). It means\n# that the eviction of keys will be handled solely by the master, sending the\n# DEL commands to the replica as keys are evicted on the master side.\n#\n# This behavior ensures that masters and replicas stay consistent, and is usually\n# what you want, however if your replica is writable, or you want the replica to have\n# a different memory setting, and you are sure all the writes performed to the\n# replica are idempotent, then you may change this default (but be sure to understand\n# what you are doing).\n#\n# Note that since the replica by default does not evict, it may end up using more\n# memory than the one set via maxmemory (there are certain buffers that may\n# be larger on the replica, or data structures may sometimes take more memory and so\n# forth). 
So make sure you monitor your replicas and make sure they have enough\n# memory to never hit a real out-of-memory condition before the master hits\n# the configured maxmemory setting.\n#\n# replica-ignore-maxmemory yes\n\n############################# LAZY FREEING ####################################\n\n# Redis has two primitives to delete keys. One is called DEL and is a blocking\n# deletion of the object. It means that the server stops processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in Redis. However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons Redis also offers non blocking deletion primitives\n# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and\n# FLUSHDB commands, in order to reclaim memory in background. Those commands\n# are executed in constant time. Another thread will incrementally free the\n# object in the background as fast as possible.\n#\n# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.\n# It's up to the design of the application to understand when it is a good\n# idea to use one or the other. 
However the Redis server sometimes has to\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically Redis deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a replica performs a full resynchronization with\n#    its master, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases the default is to delete objects in a blocking way,\n# as if DEL was called. However you can configure each case specifically\n# in order to instead release memory in a non-blocking way as if UNLINK\n# was called, using the following configuration directives:\n\nlazyfree-lazy-eviction yes\nlazyfree-lazy-expire yes\nlazyfree-lazy-server-del yes\nreplica-lazy-flush yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default Redis asynchronously dumps the dataset on disk. This mode is\n# good enough in many applications, but an issue with the Redis process or\n# a power outage may result in a few minutes of lost writes (depending on\n# the configured save points).\n#\n# The Append Only File is an alternative persistence mode that provides\n# much better durability. 
For instance using the default data fsync policy\n# (see later in the config file) Redis can lose just one second of writes in a\n# dramatic event like a server power outage, or a single write if something\n# goes wrong with the Redis process itself, but the operating system is\n# still running correctly.\n#\n# AOF and RDB persistence can be enabled at the same time without problems.\n# If the AOF is enabled on startup Redis will load the AOF, which is the file\n# with the best durability guarantees.\n#\n# Please check http://redis.io/topics/persistence for more information.\n\nappendonly no\n\n# The name of the append only file (default: \"appendonly.aof\")\n\nappendfilename \"appendonly.aof\"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OS will really flush\n# data on disk, while others will just try to do it ASAP.\n#\n# Redis supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. Faster.\n# always: fsync after every write to the append only log. Slow, safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is \"everysec\", as that's usually the right compromise between\n# speed and data safety. 
It's up to you to understand if you can relax this to\n# \"no\", which will let the operating system flush the output buffer when\n# it wants, for better performance (but if you can live with the idea of\n# some data loss consider the default persistence mode that's snapshotting),\n# or on the contrary, use \"always\", which is very slow but a bit safer than\n# everysec.\n#\n# For more details please check the following article:\n# http://antirez.com/post/redis-persistence-demystified.html\n#\n# If unsure, use \"everysec\".\n\n# appendfsync always\nappendfsync everysec\n# appendfsync no\n\n# When the AOF fsync policy is set to always or everysec, and a background\n# saving process (a background save or AOF log background rewriting) is\n# performing a lot of I/O against the disk, in some Linux configurations\n# Redis may block too long on the fsync() call. Note that there is no fix for\n# this currently, as even performing fsync in a different thread will block\n# our synchronous write(2) call.\n#\n# In order to mitigate this problem it's possible to use the following option\n# that will prevent fsync() from being called in the main process while a\n# BGSAVE or BGREWRITEAOF is in progress.\n#\n# This means that while another child is saving, the durability of Redis is\n# the same as \"appendfsync none\". In practical terms, this means that it is\n# possible to lose up to 30 seconds of log in the worst scenario (with the\n# default Linux settings).\n#\n# If you have latency problems turn this to \"yes\". 
Otherwise leave it as\n# \"no\", which is the safest pick from the point of view of durability.\n\nno-appendfsync-on-rewrite no\n\n# Automatic rewrite of the append only file.\n# Redis is able to automatically rewrite the log file by implicitly calling\n# BGREWRITEAOF when the AOF log size grows by the specified percentage.\n#\n# This is how it works: Redis remembers the size of the AOF file after the\n# latest rewrite (if no rewrite has happened since the restart, the size of\n# the AOF at startup is used).\n#\n# This base size is compared to the current size. If the current size is\n# bigger than the base size by the specified percentage, the rewrite is\n# triggered. Also you need to specify a minimum size for the AOF file to be\n# rewritten; this is useful to avoid rewriting the AOF file even if the\n# percentage increase is reached but it is still pretty small.\n#\n# Specify a percentage of zero in order to disable the automatic AOF\n# rewrite feature.\n\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\n\n# An AOF file may be found to be truncated at the end during the Redis\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where Redis is running\n# crashes, especially when an ext4 filesystem is mounted without the\n# data=ordered option (however this can't happen when Redis itself\n# crashes or aborts but the operating system still works correctly).\n#\n# Redis can either exit with an error when this happens, or load as much\n# data as possible (the default now) and start if the AOF file is found\n# to be truncated at the end. The following option controls this behavior.\n#\n# If aof-load-truncated is set to yes, a truncated AOF file is loaded and\n# the Redis server starts emitting a log to inform the user of the event.\n# Otherwise if the option is set to no, the server aborts with an error\n# and refuses to start. 
When the option is set to no, the user is required\n# to fix the AOF file using the \"redis-check-aof\" utility before restarting\n# the server.\n#\n# Note that if the AOF file is found to be corrupted in the middle,\n# the server will still exit with an error. This option only applies when\n# Redis tries to read more data from the AOF file but not enough bytes\n# are found.\naof-load-truncated yes\n\n# When rewriting the AOF file, Redis is able to use an RDB preamble in the\n# AOF file for faster rewrites and recoveries. When this option is turned\n# on the rewritten AOF file is composed of two different stanzas:\n#\n#   [RDB file][AOF tail]\n#\n# When loading, Redis recognizes that the AOF file starts with the \"REDIS\"\n# string and loads the prefixed RDB file, and continues loading the AOF\n# tail.\naof-use-rdb-preamble yes\n\n################################ LUA SCRIPTING  ###############################\n\n# Max execution time of a Lua script in milliseconds.\n#\n# If the maximum execution time is reached Redis will log that a script is\n# still in execution after the maximum allowed time and will start to\n# reply to queries with an error.\n#\n# When a long running script exceeds the maximum execution time only the\n# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be\n# used to stop a script that has not yet called any write commands. The second\n# is the only way to shut down the server in the case a write command was\n# already issued by the script but the user doesn't want to wait for the natural\n# termination of the script.\n#\n# Set it to 0 or a negative value for unlimited execution without warnings.\nlua-time-limit 5000\n\n################################ REDIS CLUSTER  ###############################\n\n# Normal Redis instances can't be part of a Redis Cluster; only nodes that are\n# started as cluster nodes can. 
In order to start a Redis instance as a\n# cluster node enable cluster support by uncommenting the following:\n#\n# cluster-enabled yes\n\n# Every cluster node has a cluster configuration file. This file is not\n# intended to be edited by hand. It is created and updated by Redis nodes.\n# Every Redis Cluster node requires a different cluster configuration file.\n# Make sure that instances running in the same system do not have\n# overlapping cluster configuration file names.\n#\n# cluster-config-file nodes-6379.conf\n\n# Cluster node timeout is the number of milliseconds a node must be unreachable\n# for it to be considered in failure state.\n# Most other internal time limits are multiples of the node timeout.\n#\n# cluster-node-timeout 15000\n\n# A replica of a failing master will avoid starting a failover if its data\n# looks too old.\n#\n# There is no simple way for a replica to actually have an exact measure of\n# its \"data age\", so the following two checks are performed:\n#\n# 1) If there are multiple replicas able to failover, they exchange messages\n#    in order to try to give an advantage to the replica with the best\n#    replication offset (more data from the master processed).\n#    Replicas will try to get their rank by offset, and apply to the start\n#    of the failover a delay proportional to their rank.\n#\n# 2) Every single replica computes the time of the last interaction with\n#    its master. This can be the last ping or command received (if the master\n#    is still in the \"connected\" state), or the time that elapsed since the\n#    disconnection with the master (if the replication link is currently down).\n#    If the last interaction is too old, the replica will not try to failover\n#    at all.\n#\n# Point \"2\" can be tuned by the user. 
Specifically a replica will not perform\n# the failover if, since the last interaction with the master, the time\n# elapsed is greater than:\n#\n#   (node-timeout * replica-validity-factor) + repl-ping-replica-period\n#\n# So for example if node-timeout is 30 seconds, and the replica-validity-factor\n# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the\n# replica will not try to failover if it was not able to talk with the master\n# for longer than 310 seconds.\n#\n# A large replica-validity-factor may allow replicas with too old data to failover\n# a master, while too small a value may prevent the cluster from being able to\n# elect a replica at all.\n#\n# For maximum availability, it is possible to set the replica-validity-factor\n# to a value of 0, which means that replicas will always try to failover the\n# master regardless of the last time they interacted with the master.\n# (However they'll always try to apply a delay proportional to their\n# offset rank).\n#\n# Zero is the only value able to guarantee that when all the partitions heal\n# the cluster will always be able to continue.\n#\n# cluster-replica-validity-factor 10\n\n# Cluster replicas are able to migrate to orphaned masters, i.e. masters\n# that are left without working replicas. This improves the cluster's ability\n# to resist failures, as otherwise an orphaned master can't be failed over\n# if it has no working replicas.\n#\n# Replicas migrate to orphaned masters only if there are still at least a\n# given number of other working replicas for their old master. This number\n# is the \"migration barrier\". A migration barrier of 1 means that a replica\n# will migrate only if there is at least 1 other working replica for its master\n# and so forth. It usually reflects the number of replicas you want for every\n# master in your cluster.\n#\n# Default is 1 (replicas migrate only if their masters remain with at least\n# one replica). 
To disable migration just set it to a very large value.\n# A value of 0 can be set but is useful only for debugging and dangerous\n# in production.\n#\n# cluster-migration-barrier 1\n\n# By default Redis Cluster nodes stop accepting queries if they detect there\n# is at least one hash slot uncovered (no available node is serving it).\n# This way if the cluster is partially down (for example a range of hash slots\n# are no longer covered) the whole cluster eventually becomes unavailable.\n# It automatically becomes available again as soon as all the slots are covered.\n#\n# However sometimes you want the subset of the cluster which is working\n# to continue to accept queries for the part of the key space that is still\n# covered. In order to do so, just set the cluster-require-full-coverage\n# option to no.\n#\n# cluster-require-full-coverage yes\n\n# This option, when set to yes, prevents replicas from trying to failover their\n# master during master failures. However the master can still perform a\n# manual failover, if forced to do so.\n#\n# This is useful in different scenarios, especially in the case of multiple\n# data center operations, where we want one side to never be promoted except\n# in the case of a total DC failure.\n#\n# cluster-replica-no-failover no\n\n# In order to set up your cluster make sure to read the documentation\n# available at the http://redis.io web site.\n\n########################## CLUSTER DOCKER/NAT support  ########################\n\n# In certain deployments, Redis Cluster node address discovery fails, because\n# addresses are NAT-ted or because ports are forwarded (the typical case is\n# Docker and other containers).\n#\n# In order to make Redis Cluster work in such environments, a static\n# configuration where each node knows its public address is needed. 
The\n# following options are used for this purpose:\n#\n# * cluster-announce-ip\n# * cluster-announce-port\n# * cluster-announce-bus-port\n#\n# Each instructs the node about its address, client port, and cluster message\n# bus port. The information is then published in the header of the bus packets\n# so that other nodes will be able to correctly map the address of the node\n# publishing the information.\n#\n# If the above options are not used, the normal Redis Cluster auto-detection\n# will be used instead.\n#\n# Note that when remapped, the bus port may not be at the fixed offset of\n# client port + 10000, so you can specify any port and bus-port depending\n# on how they get remapped. If the bus-port is not set, a fixed offset of\n# 10000 will be used as usual.\n#\n# Example:\n#\n# cluster-announce-ip 10.1.1.5\n# cluster-announce-port 6379\n# cluster-announce-bus-port 6380\n\n################################## SLOW LOG ###################################\n\n# The Redis Slow Log is a system to log queries that exceeded a specified\n# execution time. The execution time does not include the I/O operations\n# like talking with the client, sending the reply and so forth,\n# but just the time needed to actually execute the command (this is the only\n# stage of command execution where the thread is blocked and cannot serve\n# other requests in the meantime).\n#\n# You can configure the slow log with two parameters: one tells Redis\n# what execution time, in microseconds, must be exceeded in order for the\n# command to get logged, and the other parameter is the length of the\n# slow log. When a new command is logged the oldest one is removed from the\n# queue of logged commands.\n\n# The following time is expressed in microseconds, so 1000000 is equivalent\n# to one second. Note that a negative number disables the slow log, while\n# a value of zero forces the logging of every command.\nslowlog-log-slower-than 10000\n\n# There is no limit to this length. 
Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET.\nslowlog-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The Redis latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a Redis instance.\n#\n# Via the LATENCY command this information is available to the user, who can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal to or\n# greater than the number of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact that, while very small, can be measured under big load. Latency\n# monitoring can easily be enabled at runtime using the command\n# \"CONFIG SET latency-monitor-threshold <milliseconds>\" if needed.\nlatency-monitor-threshold 0\n\n############################# EVENT NOTIFICATION ##############################\n\n# Redis can notify Pub/Sub clients about events happening in the key space.\n# This feature is documented at http://redis.io/topics/notifications\n#\n# For instance if keyspace event notifications are enabled, and a client\n# performs a DEL operation on key \"foo\" stored in Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that Redis will notify among a set\n# of classes. 
Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  A     Alias for g$lshzxe, so that the \"AKE\" string means all the events.\n#\n#  The \"notify-keyspace-events\" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events \"\"\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. 
These thresholds can be configured using the following directives.\nhash-max-ziplist-entries 512\nhash-max-ziplist-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-ziplist-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:\n# 0: disable all list compression\n# 1: depth 1 means \"don't start compressing until after 1 node into the list,\n#    going from either the head or tail\"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding in just one case: when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit in the size of the\n# set in order to use this special memory saving encoding.\nset-max-intset-entries 512\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-ziplist-entries 128\nzset-max-ziplist-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16 bytes header. When an HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Streams macro node max size / items. 
The stream data structure is a radix\n# tree of big nodes that encode multiple items inside. Using this configuration\n# it is possible to configure how big a single node can be in bytes, and the\n# maximum number of items it may contain before switching to a new node when\n# appending new stream entries. If any of the following settings are set to\n# zero, the limit is ignored, so for instance it is possible to set just a\n# max entries limit by setting max-bytes to 0 and max-entries to the desired\n# value.\nstream-node-max-bytes 4096\nstream-node-max-entries 100\n\n# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in\n# order to help rehashing the main Redis hash table (the one mapping top-level\n# keys to values). The hash table implementation Redis uses (see dict.c)\n# performs a lazy rehashing: the more operations you run against a hash table\n# that is rehashing, the more rehashing \"steps\" are performed, so if the\n# server is idle the rehashing is never complete and some more memory is used\n# by the hash table.\n#\n# The default is to use this millisecond 10 times every second in order to\n# actively rehash the main dictionaries, freeing memory when possible.\n#\n# If unsure:\n# use \"activerehashing no\" if you have hard latency requirements and it is\n# not a good thing in your environment that Redis can reply from time to time\n# to queries with 2 milliseconds delay.\n#\n# use \"activerehashing yes\" if you don't have such hard requirements but\n# want to free memory asap when possible.\nactiverehashing yes\n\n# The client output buffer limits can be used to force disconnection of clients\n# that are not reading data from the server fast enough for some reason (a\n# common reason is that a Pub/Sub client can't consume messages as fast as the\n# publisher can produce them).\n#\n# The limit can be set differently for the three different classes of clients:\n#\n# normal -> normal clients including MONITOR clients\n# replica  -> replica 
clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reaches 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously exceeds\n# the limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than they can read it.\n#\n# Instead there is a default limit for pubsub and replica clients, since\n# subscribers and replicas receive data in a push fashion.\n#\n# Both the hard and the soft limits can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit replica 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. They are limited to a fixed\n# amount by default in order to avoid that a protocol desynchronization (for\n# instance due to a bug in the client) will lead to unbound memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such as huge multi/exec requests and the like.\n#\n# client-query-buffer-limit 1gb\n\n# In the Redis protocol, bulk requests, that is, elements representing single\n# strings, are normally limited to 512 mb. 
However you can change this limit\n# here.\n#\n# proto-max-bulk-len 512mb\n\n# Redis calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but Redis checks for\n# tasks to perform according to the specified \"hz\" value.\n#\n# By default \"hz\" is set to 10. Raising the value will use more CPU when\n# Redis is idle, but at the same time will make Redis more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500, however a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# Normally it is useful to have an HZ value which is proportional to the\n# number of clients connected. This is useful, for instance, to\n# avoid processing too many clients for each background task invocation,\n# which prevents latency spikes.\n#\n# Since the default HZ value is conservatively set to 10, Redis\n# offers, and enables by default, the ability to use an adaptive HZ value\n# which will temporarily raise when there are many connected clients.\n#\n# When dynamic HZ is enabled, the actual configured HZ will be used\n# as a baseline, but multiples of the configured HZ value will be actually\n# used as needed once more clients are connected. In this way an idle\n# instance will use very little CPU time while a busy instance will be\n# more responsive.\ndynamic-hz yes\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. 
This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# When redis saves an RDB file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\nrdb-save-incremental-fsync yes\n\n# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve the performance and how the keys' LFU changes over time, which\n# is possible to inspect via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the Redis LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. 
This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following commands:\n#\n#   redis-benchmark -n 1000000 incr foo\n#   redis-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be divided by two (or decremented if it has a value\n# less than or equal to 10).\n#\n# The default value for the lfu-decay-time is 1. A special value of 0 means to\n# decay the counter every time it happens to be scanned.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# WARNING THIS FEATURE IS EXPERIMENTAL. 
However it was stress tested\n# even in production and manually tested by multiple engineers for some\n# time.\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a Redis server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing it to reclaim memory.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However, thanks to this feature\n# implemented by Oran Agra for Redis 4.0, this process can happen at runtime\n# in a \"hot\" way, while the server is running.\n#\n# Basically when the fragmentation is over a certain level (see the\n# configuration options below) Redis will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys,\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled Redis\n#    to use the copy of Jemalloc we ship with the source code of Redis.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters make it possible to fine-tune the behavior of the\n# defragmentation process. 
If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Enable active defragmentation\n# activedefrag yes\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage\n# active-defrag-cycle-min 5\n\n# Maximal effort for defrag in CPU percentage\n# active-defrag-cycle-max 75\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n"
  },
  {
    "path": "aegir/conf/redis/redis6.conf",
    "content": "# Redis configuration file example.\n#\n# Note that in order to read the configuration file, Redis must be\n# started with the file path as first argument:\n#\n# ./redis-server /path/to/redis.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all Redis servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Note that option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Redis Sentinel. Since Redis always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, Redis listens\n# for connections from all available network interfaces on the host machine.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n# Each address can be prefixed by \"-\", which means that redis will not fail to\n# start if the address is not available. Being not available only refers to\n# addresses that do not correspond to any network interface. Addresses that\n# are already in use will always fail, and unsupported protocols will always be\n# silently skipped.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses\n# xbind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6\n# xbind * -::*                     # like the default, all available interfaces\n#\n# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. 
So by default we uncomment the\n# following bind directive, that will force Redis to listen only on the\n# IPv4 and IPv6 (if available) loopback interface addresses (this means Redis\n# will only be able to accept client connections from the same host that it is\n# running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# JUST COMMENT OUT THE FOLLOWING LINE.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# Protected mode is a layer of security protection, in order to avoid that\n# Redis instances left open on the internet are accessed and exploited.\n#\n# When protected mode is on and if:\n#\n# 1) The server is not binding explicitly to a set of addresses using the\n#    \"bind\" directive.\n# 2) No password is configured.\n#\n# The server only accepts connections from clients connecting from the\n# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain\n# sockets.\n#\n# By default protected mode is enabled. You should disable it only if\n# you are sure you want clients from other hosts to connect to Redis\n# even if no authentication is configured, nor a specific set of interfaces\n# are explicitly listed using the \"bind\" directive.\nprotected-mode no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified Redis will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow clients connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. 
There is no default, so Redis will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/redis/redis.sock\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 900\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Force network equipment in the middle to consider the connection to be\n#    alive.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection the double of the time is needed.\n# On other kernels the period depends on the kernel configuration.\n#\n# A reasonable value for this option is 300 seconds, which is the new\n# Redis default starting with Redis 3.2.1.\ntcp-keepalive 300\n\n################################# TLS/SSL #####################################\n\n# By default, TLS/SSL is disabled. To enable it, the \"tls-port\" configuration\n# directive can be used to define TLS-listening ports. To enable TLS on the\n# default port, use:\n#\n# port 0\n# tls-port 6379\n\n# Configure a X.509 certificate and private key to use for authenticating the\n# server to connected clients, masters or cluster peers.  These files should be\n# PEM formatted.\n#\n# tls-cert-file redis.crt\n# tls-key-file redis.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-key-file-pass secret\n\n# Normally Redis uses the same certificate for both server functions (accepting\n# connections) and client functions (replicating from a master, establishing\n# cluster bus connections, etc.).\n#\n# Sometimes certificates are issued with attributes that designate them as\n# client-only or server-only certificates. In that case it may be desired to use\n# different certificates for incoming (server) and outgoing (client)\n# connections. 
To do that, use the following directives:\n#\n# tls-client-cert-file client.crt\n# tls-client-key-file client.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-client-key-file-pass secret\n\n# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange,\n# required by older versions of OpenSSL (<3.0). Newer versions do not require\n# this configuration and recommend against it.\n#\n# tls-dh-params-file redis.dh\n\n# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL\n# clients and peers.  Redis requires an explicit configuration of at least one\n# of these, and will not implicitly use the system wide configuration.\n#\n# tls-ca-cert-file ca.crt\n# tls-ca-cert-dir /etc/ssl/certs\n\n# By default, clients (including replica servers) on a TLS port are required\n# to authenticate using valid client side certificates.\n#\n# If \"no\" is specified, client certificates are not required and not accepted.\n# If \"optional\" is specified, client certificates are accepted and must be\n# valid if provided, but are not required.\n#\n# tls-auth-clients no\n# tls-auth-clients optional\n\n# By default, a Redis replica does not attempt to establish a TLS connection\n# with its master.\n#\n# Use the following directive to enable TLS on replication links.\n#\n# tls-replication yes\n\n# By default, the Redis Cluster bus uses a plain TCP connection. 
To enable\n# TLS for the bus protocol, use the following directive:\n#\n# tls-cluster yes\n\n# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended\n# that older formally deprecated versions are kept disabled to reduce the attack surface.\n# You can explicitly specify TLS versions to support.\n# Allowed values are case insensitive and include \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\",\n# \"TLSv1.3\" (OpenSSL >= 1.1.1) or any combination.\n# To enable only TLSv1.2 and TLSv1.3, use:\n#\n# tls-protocols \"TLSv1.2 TLSv1.3\"\n\n# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information\n# about the syntax of this string.\n#\n# Note: this configuration applies only to <= TLSv1.2.\n#\n# tls-ciphers DEFAULT:!MEDIUM\n\n# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more\n# information about the syntax of this string, and specifically for TLSv1.3\n# ciphersuites.\n#\n# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256\n\n# When choosing a cipher, use the server's preference instead of the client\n# preference. By default, the server follows the client's preference.\n#\n# tls-prefer-server-ciphers yes\n\n# By default, TLS session caching is enabled to allow faster and less expensive\n# reconnections by clients that support it. Use the following directive to disable\n# caching.\n#\n# tls-session-caching no\n\n# Change the default number of TLS sessions cached. A zero value sets the cache\n# to unlimited size. The default size is 20480.\n#\n# tls-session-cache-size 5000\n\n# Change the default timeout of cached TLS sessions. The default timeout is 300\n# seconds.\n#\n# tls-session-cache-timeout 60\n\n################################# GENERAL #####################################\n\n# By default Redis does not run as a daemon. 
Use 'yes' if you need it.\n# Note that Redis will write a pid file in /run/redis.pid when daemonized.\n# When Redis is supervised by upstart or systemd, this parameter has no impact.\ndaemonize yes\n\n# If you run Redis from upstart or systemd, Redis can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode\n#                        requires \"expect stop\" in your upstart job config\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#                        on startup, and updating Redis status on a regular\n#                        basis.\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous pings back to your supervisor.\n#\n# The default is \"no\". To run under upstart/systemd, you can simply uncomment\n# the line below:\n#\n# supervised auto\n\n# If a pid file is specified, Redis writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. 
When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/redis.pid\".\n#\n# Creating a pid file is best effort: if Redis is not able to create it\n# nothing bad happens, the server will start and run normally.\n#\n# Note that on modern Linux systems \"/run/redis.pid\" is more conforming\n# and should be used instead.\npidfile /run/redis/redis.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (many rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\nloglevel warning\n\n# Specify the log file name. Also the empty string can be used to force\n# Redis to log on the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile /var/log/redis/redis-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident redis\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# To disable the built in crash log, which will possibly produce cleaner core\n# dumps when they are needed, uncomment the following:\n#\n# crash-log-enabled no\n\n# To disable the fast memory check that's run as part of the crash log, which\n# will possibly let redis terminate sooner, uncomment the following:\n#\n# crash-memcheck-enabled no\n\n# Set the number of databases. 
The default database is DB 0, you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 8\n\n# By default Redis shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY and syslog logging is\n# disabled. Basically this means that normally a logo is displayed only in\n# interactive sessions.\n#\n# However, it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo no\n\n# By default, Redis modifies the process title (as seen in 'top' and 'ps') to\n# provide some runtime information. It is possible to disable this and leave\n# the process name as executed by setting the following to no.\nset-proc-title yes\n\n# When changing the process title, Redis uses the following template to construct\n# the modified title.\n#\n# Template variables are specified in curly brackets. The following variables are\n# supported:\n#\n# {title}           Name of process as executed if parent, or type of child process.\n# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or\n#                   Unix socket if only that's available.\n# {server-mode}     Special mode, i.e. 
\"[sentinel]\" or \"[cluster]\".\n# {port}            TCP port listening on, or 0.\n# {tls-port}        TLS port listening on, or 0.\n# {unixsocket}      Unix domain socket listening on, or \"\".\n# {config-file}     Name of configuration file used.\n#\nproc-title-template \"{title} {listen-addr} {server-mode}\"\n\n################################ SNAPSHOTTING  ################################\n\n# Save the DB to disk.\n#\n# save <seconds> <changes>\n#\n# Redis will save the DB if both the given number of seconds and the given\n# number of write operations against the DB occurred.\n#\n# Snapshotting can be completely disabled with a single empty string argument\n# as in following example:\n#\n# save \"\"\n#\n# Unless specified otherwise, by default Redis will save the DB:\n#   * After 3600 seconds (an hour) if at least 1 key changed\n#   * After 300 seconds (5 minutes) if at least 100 keys changed\n#   * After 60 seconds if at least 10000 keys changed\n#\n# You can set these explicitly by uncommenting the three following lines.\n#\n# save 3600 1\n# save 300 100\n# save 60 10000\n\n# By default Redis will stop accepting writes if RDB snapshots are enabled\n# (at least one save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again, Redis will\n# automatically allow writes again.\n#\n# However, if you have set up proper monitoring of the Redis server\n# and persistence, you may want to disable this feature so that Redis will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default compression is enabled as it's almost always a win.\n# If you want to save some CPU in the saving child, set 
it to 'no', but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# Enables or disables full sanitation checks for ziplist and listpack etc when\n# loading an RDB or RESTORE payload. This reduces the chances of an assertion or\n# crash later on while processing commands.\n# Options:\n#   no         - Never perform full sanitation\n#   yes        - Always perform full sanitation\n#   clients    - Perform full sanitation only for user connections.\n#                Excludes: RDB files, RESTORE commands received from the master\n#                connection, and client connections which have the\n#                skip-sanitize-payload ACL flag.\n# The default should be 'clients' but since it currently affects cluster\n# resharding via MIGRATE, it is temporarily set to 'no' by default.\n#\n# sanitize-dump-payload no\n\n# The filename where to dump the DB\ndbfilename dump.rdb\n\n# Remove RDB files used by replication in instances without persistence\n# enabled. By default this option is disabled, however there are environments\n# where for regulations or other security concerns, RDB files persisted on\n# disk by masters in order to feed replicas, or stored on disk by replicas\n# in order to load them for the initial synchronization, should be deleted\n# ASAP. 
Note that this option ONLY WORKS in instances that have both AOF\n# and RDB persistence disabled, otherwise it is completely ignored.\n#\n# An alternative (and sometimes better) way to obtain the same effect is\n# to use diskless replication on both master and replica instances. However\n# in the case of replicas, diskless is not always an option.\nrdb-del-sync-files no\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir /var/lib/redis/\n\n################################# REPLICATION #################################\n\n# Master-Replica replication. Use replicaof to make a Redis instance a copy of\n# another Redis server. A few things to understand ASAP about Redis replication.\n#\n#   +------------------+      +---------------+\n#   |      Master      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Redis replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears to be not connected with at least\n#    a given number of replicas.\n# 2) Redis replicas are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. 
After a\n#    network partition replicas automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# replicaof <masterip> <masterport>\n\n# If the master is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the replica request.\n#\n# masterauth <master-password>\n#\n# However this is not enough if you are using Redis ACLs (for Redis version\n# 6 or greater), and the default user is not capable of running the PSYNC\n# command and/or other commands needed for replication. In this case it's\n# better to configure a special user to use with replication, and specify the\n# masteruser configuration as such:\n#\n# masteruser <username>\n#\n# When masteruser is specified, the replica will authenticate against its\n# master using the new AUTH form: AUTH <username> <password>.\n\n# When a replica loses its connection with the master, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) If replica-serve-stale-data is set to 'no' the replica will reply with\n#    an error \"SYNC with master in progress\" to all commands except:\n#    INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,\n#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,\n#    HOST and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. 
Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# Since Redis 2.6 by default replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# New replicas and reconnecting replicas that are not able to continue the\n# replication process just receiving differences, need to do what is called a\n# \"full synchronization\". An RDB file is transmitted from the master to the\n# replicas.\n#\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The Redis master creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The Redis master creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child\n# producing the RDB file finishes its work. 
With diskless replication instead\n# once the transfer starts, new replicas arriving will be queued and a new\n# transfer will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple\n# replicas will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync no\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new replicas that arrive, which will be queued for the next RDB transfer, so the\n# server waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# -----------------------------------------------------------------------------\n# WARNING: RDB diskless load is experimental. Since in this setup the replica\n# does not immediately store an RDB on disk, it may cause data loss during\n# failovers. RDB diskless load + Redis modules not handling I/O reads may also\n# cause Redis to abort in case of I/O errors during the initial synchronization\n# stage with the master. 
Use only if you know what you are doing.\n# -----------------------------------------------------------------------------\n#\n# A replica can load the RDB it reads from the replication link directly from the\n# socket, or store the RDB to a file and read that file after it was completely\n# received from the master.\n#\n# In many cases the disk is slower than the network, and storing and loading\n# the RDB file may increase replication time (and even increase the master's\n# Copy on Write memory and replica buffers).\n# However, parsing the RDB file directly from the socket may mean that we have\n# to flush the contents of the current database before the full rdb was\n# received. For this reason we have the following options:\n#\n# \"disabled\"    - Don't use diskless load (store the rdb file to the disk first)\n# \"on-empty-db\" - Use diskless load only when it is completely safe.\n# \"swapdb\"      - Keep a copy of the current db contents in RAM while parsing\n#                 the data directly from the socket. Note that this requires\n#                 sufficient memory; if you don't have it, you risk an OOM kill.\nrepl-diskless-load disabled\n\n# Replicas send PINGs to the server at a predefined interval. It's possible to\n# change this interval with the repl_ping_replica_period option. The default\n# value is 10 seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the replica. 
The default\n# value is 60 seconds.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select \"yes\" Redis will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and replicas are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a\n# replica wants to reconnect again, often a full resync is not needed, but a\n# partial resync is enough, just passing the portion of data the replica\n# missed while disconnected.\n#\n# The bigger the replication backlog, the longer the replica can endure the\n# disconnect and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated if there is at least one replica connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no connected replicas for some time, the backlog will be\n# freed. 
The following option configures the number of seconds that need to\n# elapse, starting from the time the last replica disconnected, for the backlog\n# buffer to be freed.\n#\n# Note that replicas never free the backlog on timeout, since they may be\n# promoted to masters later, and should be able to correctly \"partially\n# resynchronize\" with other replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The replica priority is an integer number published by Redis in the INFO\n# output. It is used by Redis Sentinel in order to select a replica to promote\n# into a master if the master is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel\n# will pick the one with priority 10, which is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of master, so a replica with priority of 0 will never be selected by\n# Redis Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# -----------------------------------------------------------------------------\n# By default, Redis Sentinel includes all replicas in its reports. A replica\n# can be excluded from Redis Sentinel's announcements. An unannounced replica\n# will be ignored by the 'sentinel replicas <master>' command and won't be\n# exposed to Redis Sentinel's clients.\n#\n# This option does not change the behavior of replica-priority. Even with\n# replica-announced set to 'no', the replica can be promoted to master. 
To\n# prevent this behavior, set replica-priority to 0.\n#\n# replica-announced yes\n\n# It is possible for a master to stop accepting writes if there are fewer than\n# N replicas connected, with a lag less than or equal to M seconds.\n#\n# The N replicas need to be in \"online\" state.\n#\n# The lag in seconds, which must be <= the specified value, is calculated from\n# the last ping received from the replica, which is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough replicas\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 replicas with a lag <= 10 seconds use:\n#\n# min-replicas-to-write 3\n# min-replicas-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-replicas-to-write is set to 0 (feature disabled) and\n# min-replicas-max-lag is set to 10.\n\n# A Redis master is able to list the address and port of the attached\n# replicas in different ways. For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Redis Sentinel in order to discover replica instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a master.\n#\n# The listed IP address and port normally reported by a replica is\n# obtained in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the replica to connect with the master.\n#\n#   Port: The port is communicated by the replica during the replication\n#   handshake, and is normally the port that the replica is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the replica may actually be reachable via different IP and port\n# pairs. 
The following two options can be used by a replica in order to\n# report to its master a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both options if you need to override just\n# the port or the IP address.\n#\n# replica-announce-ip 5.5.5.5\n# replica-announce-port 1234\n\n############################### KEYS TRACKING #################################\n\n# Redis implements server assisted support for client side caching of values.\n# This is implemented using an invalidation table that remembers, using\n# a radix tree indexed by key name, what clients have which keys. In turn\n# this is used in order to send invalidation messages to clients. Please\n# check this page to understand more about the feature:\n#\n#   https://redis.io/topics/client-side-caching\n#\n# When tracking is enabled for a client, all the read only queries are assumed\n# to be cached: this will force Redis to store information in the invalidation\n# table. When keys are modified, such information is flushed away, and\n# invalidation messages are sent to the clients. However if the workload is\n# heavily dominated by reads, Redis could use more and more memory in order\n# to track the keys fetched by many clients.\n#\n# For this reason it is possible to configure a maximum fill value for the\n# invalidation table. By default it is set to 1M keys, and once this limit\n# is reached, Redis will start to evict keys in the invalidation table\n# even if they were not modified, just to reclaim memory: this will in turn\n# force the clients to invalidate the cached values. 
Basically the table\n# maximum size is a trade-off between the memory you want to spend server\n# side to track information about who cached what, and the ability of clients\n# to retain cached objects in memory.\n#\n# If you set the value to 0, it means there are no limits, and Redis will\n# retain as many keys as needed in the invalidation table.\n# In the \"stats\" INFO section, you can find information about the number of\n# keys in the invalidation table at every given moment.\n#\n# Note: when key tracking is used in broadcasting mode, no memory is used\n# on the server side so this setting is useless.\n#\n# tracking-table-max-keys 1000000\n\n################################## SECURITY ###################################\n\n# Warning: since Redis is pretty fast, an outside user can try up to\n# 1 million passwords per second against a modern box. This means that you\n# should use very strong passwords, otherwise they will be very easy to break.\n# Note that because the password is really a shared secret between the client\n# and the server, and should not be memorized by any human, the password\n# can easily be a long string from /dev/urandom or whatever, so by using a\n# long and unguessable password no brute force attack will be possible.\n\n# Redis ACL users are defined in the following format:\n#\n#   user <username> ... acl rules ...\n#\n# For example:\n#\n#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99\n#\n# The special username \"default\" is used for new connections. If this user\n# has the \"nopass\" rule, then new connections will be immediately authenticated\n# as the \"default\" user without the need of any password provided via the\n# AUTH command. 
Otherwise if the \"default\" user is not flagged with \"nopass\"\n# the connections will start in a not authenticated state, and will require\n# AUTH (or the HELLO command AUTH option) in order to be authenticated and\n# start to work.\n#\n# The ACL rules that describe what a user can do are the following:\n#\n#  on           Enable the user: it is possible to authenticate as this user.\n#  off          Disable the user: it's no longer possible to authenticate\n#               with this user, however the already authenticated connections\n#               will still work.\n#  skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.\n#  sanitize-payload         RESTORE dump-payload is sanitized (default).\n#  +<command>   Allow the execution of that command\n#  -<command>   Disallow the execution of that command\n#  +@<category> Allow the execution of all the commands in such category,\n#               where valid categories are @admin, @set, @sortedset, ...\n#               and so forth; see the full list in the server.c file where\n#               the Redis command table is described and defined.\n#               The special category @all means all the commands, both those\n#               currently present in the server and those that will be loaded\n#               in the future via modules.\n#  +<command>|subcommand    Allow a specific subcommand of an otherwise\n#                           disabled command. Note that this form is not\n#                           allowed as negative like -DEBUG|SEGFAULT, but\n#                           only additive starting with \"+\".\n#  allcommands  Alias for +@all. Note that it implies the ability to execute\n#               all the future commands loaded via the modules system.\n#  nocommands   Alias for -@all.\n#  ~<pattern>   Add a pattern of keys that can be mentioned as part of\n#               commands. For instance ~* allows all the keys. 
The pattern\n#               is a glob-style pattern like the one of KEYS.\n#               It is possible to specify multiple patterns.\n#  allkeys      Alias for ~*\n#  resetkeys    Flush the list of allowed key patterns.\n#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be\n#               accessed by the user. It is possible to specify multiple channel\n#               patterns.\n#  allchannels  Alias for &*\n#  resetchannels            Flush the list of allowed channel patterns.\n#  ><password>  Add this password to the list of valid passwords for the user.\n#               For example >mypass will add \"mypass\" to the list.\n#               This directive clears the \"nopass\" flag (see later).\n#  <<password>  Remove this password from the list of valid passwords.\n#  nopass       All the set passwords of the user are removed, and the user\n#               is flagged as requiring no password: it means that every\n#               password will work against this user. If this directive is\n#               used for the default user, every new connection will be\n#               immediately authenticated with the default user without\n#               any explicit AUTH command required. Note that the \"resetpass\"\n#               directive will clear this condition.\n#  resetpass    Flush the list of allowed passwords. It also removes the\n#               \"nopass\" status. After \"resetpass\" the user has no associated\n#               passwords and there is no way to authenticate without adding\n#               some password (or setting it as \"nopass\" later).\n#  reset        Performs the following actions: resetpass, resetkeys, off,\n#               -@all. The user returns to the same state it had immediately\n#               after its creation.\n#\n# ACL rules can be specified in any order: for instance you can start with\n# passwords, then flags, or key patterns. 
However note that the additive\n# and subtractive rules will CHANGE MEANING depending on the ordering.\n# For instance see the following example:\n#\n#   user alice on +@all -DEBUG ~* >somepassword\n#\n# This will allow \"alice\" to use all the commands with the exception of the\n# DEBUG command, since +@all added all the commands to the set of the commands\n# alice can use, and later DEBUG was removed. However if we invert the order\n# of the two ACL rules the result will be different:\n#\n#   user alice on -DEBUG +@all ~* >somepassword\n#\n# Now DEBUG was removed when alice did not yet have any commands in the set of\n# allowed commands; later all the commands are added, so the user will be able\n# to execute everything.\n#\n# Basically ACL rules are processed left-to-right.\n#\n# For more information about ACL configuration please refer to\n# the Redis web site at https://redis.io/topics/acl\n\n# ACL LOG\n#\n# The ACL Log tracks failed commands and authentication events associated\n# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked\n# by ACLs. The ACL Log is stored in memory. You can reclaim memory with\n# ACL LOG RESET. Define the maximum entry length of the ACL Log below.\nacllog-max-len 128\n\n# Using an external ACL file\n#\n# Instead of configuring users here in this file, it is possible to use\n# a stand-alone file just listing users. The two methods cannot be mixed:\n# if you configure users here and at the same time you activate the external\n# ACL file, the server will refuse to start.\n#\n# The format of the external ACL user file is exactly the same as the\n# format that is used inside redis.conf to describe users.\n#\n# aclfile /etc/redis/users.acl\n\n# IMPORTANT NOTE: starting with Redis 6 \"requirepass\" is just a compatibility\n# layer on top of the new ACL system. The option's only effect will be setting\n# the password for the default user. 
Clients will still authenticate using\n# AUTH <password> as usual, or more explicitly with AUTH default <password>\n# if they follow the new protocol: both will work.\n#\n# The requirepass option is not compatible with the aclfile option and the ACL\n# LOAD command; these will cause requirepass to be ignored.\n#\n# requirepass foobared\n\n# New users are initialized with restrictive permissions by default, via the\n# equivalent of this ACL rule 'off resetkeys -@all'. Starting with Redis 6.2, it\n# is possible to manage access to Pub/Sub channels with ACL rules as well. The\n# default Pub/Sub channels permission for new users is controlled by the\n# acl-pubsub-default configuration directive, which accepts one of these values:\n#\n# allchannels: grants access to all Pub/Sub channels\n# resetchannels: revokes access to all Pub/Sub channels\n#\n# To ensure backward compatibility while upgrading Redis 6.0, acl-pubsub-default\n# defaults to the 'allchannels' permission.\n#\n# Future compatibility note: it is very likely that in a future version of Redis\n# the directive's default of 'allchannels' will be changed to 'resetchannels' in\n# order to provide better out-of-the-box Pub/Sub security. Therefore, it is\n# recommended that you explicitly define Pub/Sub permissions for all users\n# rather than rely on implicit default values. Once you've set explicit\n# Pub/Sub permissions for all existing users, you should uncomment the following\n# line.\n#\n# acl-pubsub-default resetchannels\n\n# Command renaming (DEPRECATED).\n#\n# ------------------------------------------------------------------------\n# WARNING: avoid using this option if possible. Instead use ACLs to remove\n# commands from the default user, and put them only in some admin user you\n# create for administrative purposes.\n# ------------------------------------------------------------------------\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. 
For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to replicas may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. By default\n# this limit is set to 10000 clients, however if the Redis server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as Redis reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached Redis will close all the new connections sending\n# an error 'max number of clients reached'.\n#\n# IMPORTANT: When Redis Cluster is used, the max number of connections is also\n# shared with the cluster bus: every node in the cluster will use two\n# connections, one incoming and another outgoing. 
It is important to size the\n# limit accordingly in case of very large clusters.\n#\nmaxclients 4000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached Redis will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If Redis can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', Redis will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using Redis as an LRU or LFU cache, or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have replicas attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the replicas are subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of replicas is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... if you have replicas attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for replica\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\nmaxmemory 88MB\n\n# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory\n# is reached. 
You can select one from the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU, only keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key having an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (smallest TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# LRU, LFU and volatile-ttl are all implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, when there are no suitable keys for\n# eviction, Redis will return an error on write operations that require\n# more memory. These are usually commands that create new keys, add data or\n# modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,\n# SORT (due to the STORE argument), and EXEC (if the transaction includes any\n# command that requires memory).\n#\n# The default is: noeviction\n#\nmaxmemory-policy allkeys-lru\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune them for speed or\n# accuracy. By default Redis will check five keys and pick the one that was\n# used least recently; you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 10 approximates\n# true LRU very closely but costs more CPU. 3 is faster but not very accurate.\n#\n# maxmemory-samples 5\n\n# Eviction processing is designed to function well with the default setting.\n# If there is an unusually large amount of write traffic, this value may need to\n# be increased.  
Decreasing this value may reduce latency at the risk of less\n# effective eviction processing.\n#   0 = minimum latency, 10 = default, 100 = process without regard to latency\n#\n# maxmemory-eviction-tenacity 10\n\n# Starting from Redis 5, by default a replica will ignore its maxmemory setting\n# (unless it is promoted to master after a failover or manually). It means\n# that the eviction of keys will be just handled by the master, sending the\n# DEL commands to the replica as keys evict on the master side.\n#\n# This behavior ensures that masters and replicas stay consistent, and is usually\n# what you want, however if your replica is writable, or you want the replica\n# to have a different memory setting, and you are sure all the writes performed\n# to the replica are idempotent, then you may change this default (but be sure\n# to understand what you are doing).\n#\n# Note that since the replica by default does not evict, it may end up using more\n# memory than the one set via maxmemory (there are certain buffers that may\n# be larger on the replica, or data structures may sometimes take more memory\n# and so forth). So make sure you monitor your replicas and make sure they\n# have enough memory to never hit a real out-of-memory condition before the\n# master hits the configured maxmemory setting.\n#\n# replica-ignore-maxmemory yes\n\n# Redis reclaims expired keys in two ways: upon access when those keys are\n# found to be expired, and also in the background, in what is called the\n# \"active expire key\". The key space is slowly and incrementally scanned\n# looking for expired keys to reclaim, so that it is possible to free memory\n# of keys that are expired and will never be accessed again in a short time.\n#\n# The default effort of the expire cycle will try to avoid having more than\n# ten percent of expired keys still in memory, and will try to avoid consuming\n# more than 25% of total memory and adding latency to the system. 
However\n# it is possible to increase the expire \"effort\" that is normally set to\n# \"1\", to a greater value, up to the value \"10\". At its maximum value the\n# system will use more CPU, longer cycles (and technically may introduce\n# more latency), and will tolerate fewer already expired keys still present\n# in the system. It's a tradeoff between memory, CPU and latency.\n#\n# active-expire-effort 1\n\n############################# LAZY FREEING ####################################\n\n# Redis has two primitives to delete keys. One is called DEL and is a blocking\n# deletion of the object. It means that the server stops processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in Redis. However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons Redis also offers non blocking deletion primitives\n# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and\n# FLUSHDB commands, in order to reclaim memory in background. Those commands\n# are executed in constant time. Another thread will incrementally free the\n# object in the background as fast as possible.\n#\n# DEL, UNLINK and the ASYNC option of FLUSHALL and FLUSHDB are user-controlled.\n# It's up to the design of the application to understand when it is a good\n# idea to use one or the other. 
However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# as if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way, as if UNLINK
# was called, using the following configuration directives.

lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes

# It is also possible, for cases where replacing the user code's DEL calls
# with UNLINK calls is not easy, to modify the default behavior of the DEL
# command to act exactly like UNLINK, using the following configuration
# directive:

lazyfree-lazy-user-del yes

# FLUSHDB, FLUSHALL, and SCRIPT FLUSH support both asynchronous and synchronous
# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the
# commands. 
When neither flag is passed, this directive will be used to determine
# if the data should be deleted asynchronously.

lazyfree-lazy-user-flush yes

################################ THREADED I/O #################################

# Redis is mostly single threaded, however there are certain threaded
# operations such as UNLINK, slow I/O accesses and other things that are
# performed on side threads.
#
# Now it is also possible to handle Redis clients socket reads and writes
# in different I/O threads. Since writing in particular is slow, Redis users
# normally use pipelining in order to speed up Redis performance per
# core, and spawn multiple instances in order to scale more. Using I/O
# threads it is possible to easily double Redis performance without resorting
# to pipelining or sharding of the instance.
#
# By default threading is disabled; we suggest enabling it only on machines
# that have at least 4 cores, leaving at least one spare core.
# Using more than 8 threads is unlikely to help much. We also recommend using
# threaded I/O only if you actually have performance problems, with Redis
# instances being able to use a quite big percentage of CPU time, otherwise
# there is no point in using this feature.
#
# So for instance if you have a four core box, try to use 2 or 3 I/O
# threads; if you have an 8 core box, try to use 6 threads. In order to
# enable I/O threads use the following configuration directive:
#
# io-threads 4
#
# Setting io-threads to 1 will just use the main thread as usual.
# When I/O threads are enabled, we only use threads for writes, that is
# to thread the write(2) syscall and transfer the client buffers to the
# socket. 
However it is also possible to enable threading of reads and
# protocol parsing using the following configuration directive, by setting
# it to yes:
#
# io-threads-do-reads no
#
# Usually threading reads doesn't help much.
#
# NOTE 1: This configuration directive cannot be changed at runtime via
# CONFIG SET. Also, this feature currently does not work when SSL is
# enabled.
#
# NOTE 2: If you want to test the Redis speedup using redis-benchmark, make
# sure you also run the benchmark itself in threaded mode, using the
# --threads option to match the number of Redis threads, otherwise you'll not
# be able to notice the improvements.

############################ KERNEL OOM CONTROL ##############################

# On Linux, it is possible to hint the kernel OOM killer on what processes
# should be killed first when out of memory.
#
# Enabling this feature makes Redis actively control the oom_score_adj value
# for all its processes, depending on their role. The default scores will
# attempt to have background child processes killed before all others, and
# replicas killed before masters.
#
# Redis supports three options:
#
# no:       Don't make changes to oom-score-adj (default).
# yes:      Alias for "relative", see below.
# absolute: Values in oom-score-adj-values are written as is to the kernel.
# relative: Values are used relative to the initial value of oom_score_adj when
#           the server starts and are then clamped to a range of -1000 to 1000.
#           Because typically the initial value is 0, they will often match the
#           absolute values.
oom-score-adj no

# When oom-score-adj is used, this directive controls the specific values used
# for master, replica and background child processes. 
Values range from -2000 to
# 2000 (higher means more likely to be killed).
#
# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)
# can freely increase their value, but not decrease it below its initial
# setting. This means that setting oom-score-adj to "relative" and setting the
# oom-score-adj-values to positive values will always succeed.
oom-score-adj-values 0 200 800


#################### KERNEL transparent hugepage CONTROL ######################

# Usually the kernel Transparent Huge Pages control is set to "madvise"
# or "never" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which
# case this config has no effect. On systems in which it is set to "always",
# Redis will attempt to disable it specifically for the Redis process in order
# to avoid latency problems specifically with fork(2) and CoW.
# If for some reason you prefer to keep it enabled, you can set this config to
# "no" and the kernel global to "always".

disable-thp yes

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result in a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. 
For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# goes wrong with the Redis process itself while the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check https://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some others will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. 
It's up to you to understand if you can relax this to
# "no", which will let the operating system flush the output buffer when
# it wants, for better performance (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always", which is very slow but a bit safer than
# everysec.
#
# For more details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". 
Otherwise leave it as
# "no", which is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file by implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size has
# grown by more than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimum size for the AOF file to be rewritten, which
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. 
When the option is set to no, the user is required
# to fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle,
# the server will still exit with an error. This option only applies when
# Redis tries to read more data from the AOF file but not enough bytes
# are found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
#   [RDB file][AOF tail]
#
# When loading, Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, then continues loading the AOF
# tail.
aof-use-rdb-preamble yes

################################ LUA SCRIPTING  ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet call any write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER  ###############################

# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. 
In order to start a Redis instance as a
# cluster node, enable cluster support by uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the number of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are a multiple of the node timeout.
#
# cluster-node-timeout 15000

# A replica of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#
# Point "2" can be tuned by the user. 
Specifically, a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large cluster-replica-validity-factor may allow replicas with too old data to failover
# a master, while too small a value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the cluster-replica-validity-factor
# to a value of 0, which means that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster's ability
# to resist failures, as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. 
It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value or
# set cluster-allow-replica-migration to 'no'.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# Turning off this option allows the use of less automatic cluster configuration.
# It both disables migration to orphaned masters and migration from masters
# that became empty.
#
# Default is 'yes' (allow automatic migrations).
#
# cluster-allow-replica-migration yes

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least one hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# is no longer covered) the whole cluster eventually becomes unavailable.
# It automatically becomes available again as soon as all the slots are covered.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to fail over their
# master during master failures. However the replica can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# This option, when set to yes, allows nodes to serve read traffic while the
# cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful for two cases.  
The first case is when an application
# doesn't require consistency of data during node failures or network partitions.
# One example of this is a cache, where as long as the node has the data it
# should be able to serve it.
#
# The second use case is for configurations that don't meet the recommended
# three shards but want to enable cluster mode and scale later. Without this
# option set, a master outage in a 1 or 2 shard configuration causes a
# read/write outage for the entire cluster; with it set, there is only a write
# outage. Without a quorum of masters, slot ownership will not change
# automatically.
#
# cluster-allow-reads-when-down no

# In order to set up your cluster make sure to read the documentation
# available at the https://redis.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following four options are used for this purpose, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-tls-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client ports (for connections
# without and with TLS) and cluster message bus port. The information is then
# published in the header of the bus packets so that other nodes will be able to
# correctly map the address of the node publishing the information.
#
# If cluster-tls is set to yes and cluster-announce-tls-port is omitted or set
# to zero, then cluster-announce-port refers to the TLS port. 
Note also that
# cluster-announce-tls-port has no effect if cluster-tls is set to no.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# client port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-tls-port 6379
# cluster-announce-port 0
# cluster-announce-bus-port 6380

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# the execution time, in microseconds, a command must exceed in order to
# get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. 
Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user, who can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal to or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact that, while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at https://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. 
Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  t     Stream commands
#  d     Module key type events
#  m     Key-miss events (Note: It is not included in the 'A' class)
#  A     Alias for g$lshzxetd, so that the "AKE" string means all the events
#        (Except key-miss events which are excluded from 'A' due to their
#         unique nature).
#
#  The "notify-keyspace-events" directive takes as its argument a string that
#  is composed of zero or more characters. The empty string means that
#  notifications are disabled.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### GOPHER SERVER #################################

# Redis contains an implementation of the Gopher protocol, as specified in
# RFC 1436 (https://www.ietf.org/rfc/rfc1436.txt).
#
# The Gopher protocol was very popular in the late '90s. 
It is an alternative
# to the web, and the implementation, on both the server and client side, is so
# simple that the Redis server needs just 100 lines of code to implement this
# support.
#
# What do you do with Gopher nowadays? Well, Gopher never *really* died, and
# lately there is a movement to resurrect Gopher's more hierarchical content,
# composed of just plain text documents. Some want a simpler
# internet, others believe that the mainstream internet became too much
# controlled, and it's cool to create an alternative space for people that
# want a bit of fresh air.
#
# Anyway, for the 10th birthday of Redis, we gave it the Gopher protocol
# as a gift.
#
# --- HOW IT WORKS ---
#
# The Redis Gopher support uses the inline protocol of Redis, and specifically
# two kinds of inline requests that were illegal anyway: an empty request
# or any request that starts with "/" (there are no Redis commands starting
# with such a slash). Normal RESP2/RESP3 requests are completely out of the
# path of the Gopher protocol implementation and are served as usual as well.
#
# If you open a connection to Redis when Gopher is enabled and send it
# a string like "/foo", if there is a key named "/foo" it is served via the
# Gopher protocol.
#
# In order to create a real Gopher "hole" (the name of a Gopher site in Gopher
# parlance), you likely need a script like the following:
#
#   https://github.com/antirez/gopher2redis
#
# --- SECURITY WARNING ---
#
# If you plan to put Redis on the internet in a publicly accessible address
# to serve Gopher pages, MAKE SURE TO SET A PASSWORD for the instance.
# Once a password is set:
#
#   1. The Gopher server (when enabled, not by default) will still serve
#      content via Gopher.
#   2. 
However, other commands cannot be called before the client
#      authenticates.
#
# So use the 'requirepass' option to protect your instance.
#
# Note that Gopher is not currently supported when 'io-threads-do-reads'
# is enabled.
#
# To enable Gopher support, uncomment the following line and set the option
# from no (the default) to yes.
#
# gopher-enabled no

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression.  The head and tail of the list
# are always uncompressed for fast push/pop operations.  
Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit on the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. 
The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with a 2 millisecond delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica  -> replica 
clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reaches 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously overcomes\n# the limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than it can be read.\n#\n# Instead there is a default limit for pubsub and replica clients, since\n# subscribers and replicas receive data in a push fashion.\n#\n# Both the hard and the soft limit can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit replica 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. They are limited to a fixed\n# amount by default in order to avoid that a protocol desynchronization (for\n# instance due to a bug in the client) will lead to unbound memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such as huge multi/exec requests or the like.\n#\n# client-query-buffer-limit 1gb\n\n# In the Redis protocol, bulk requests, that is, elements representing single\n# strings, are normally limited to 512 mb. 
However you can change this limit\n# here, but it must be 1mb or greater\n#\n# proto-max-bulk-len 512mb\n\n# Redis calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but Redis checks for\n# tasks to perform according to the specified \"hz\" value.\n#\n# By default \"hz\" is set to 10. Raising the value will use more CPU when\n# Redis is idle, but at the same time will make Redis more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500, however a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# Normally it is useful to have an HZ value which is proportional to the\n# number of clients connected. This is useful, for instance, to avoid\n# processing too many clients for each background task invocation, which\n# could cause latency spikes.\n#\n# Since the default HZ value is conservatively set to 10, Redis\n# offers, and enables by default, the ability to use an adaptive HZ value\n# which will temporarily rise when there are many connected clients.\n#\n# When dynamic HZ is enabled, the actual configured HZ will be used\n# as a baseline, but multiples of the configured HZ value will be actually\n# used as needed once more clients are connected. In this way an idle\n# instance will use very little CPU time while a busy instance will be\n# more responsive.\ndynamic-hz yes\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. 
This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# When Redis saves an RDB file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\nrdb-save-incremental-fsync yes\n\n# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve performance and how the keys' LFU changes over time, which\n# is possible to inspect via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the Redis LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. 
This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following commands:\n#\n#   redis-benchmark -n 1000000 incr foo\n#   redis-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be divided by two (or decremented if it has a value\n# <= 10).\n#\n# The default value for the lfu-decay-time is 1. 
A special value of 0 means to\n# decay the counter every time it happens to be scanned.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a Redis server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing it to reclaim memory.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However thanks to this feature\n# implemented by Oran Agra for Redis 4.0 this process can happen at runtime\n# in a \"hot\" way, while the server is running.\n#\n# Basically when the fragmentation is over a certain level (see the\n# configuration options below) Redis will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys,\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled Redis\n#    to use the copy of Jemalloc we ship with the source code of Redis.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. 
Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters are able to fine-tune the behavior of the\n# defragmentation process. If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Enable active defragmentation\n# activedefrag no\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage, to be used when the lower\n# threshold is reached\n# active-defrag-cycle-min 1\n\n# Maximal effort for defrag in CPU percentage, to be used when the upper\n# threshold is reached\n# active-defrag-cycle-max 25\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n# Jemalloc background thread for purging will be enabled by default\njemalloc-bg-thread yes\n\n# It is possible to pin different threads and processes of Redis to specific\n# CPUs in your system, in order to maximize the performance of the server.\n# This is useful both in order to pin different Redis threads to different\n# CPUs, but also in order to make sure that multiple Redis instances running\n# on the same host will be pinned to different CPUs.\n#\n# Normally you can do this using the \"taskset\" command, however it is also\n# possible to do this via Redis configuration directly, both in Linux and FreeBSD.\n#\n# You can pin the server/IO threads, bio threads, aof rewrite child process, and\n# the bgsave child process. 
The syntax to specify the cpu list is the same as\n# the taskset command:\n#\n# Set redis server/io threads to cpu affinity 0,2,4,6:\n# server_cpulist 0-7:2\n#\n# Set bio threads to cpu affinity 1,3:\n# bio_cpulist 1,3\n#\n# Set aof rewrite child process to cpu affinity 8,9,10,11:\n# aof_rewrite_cpulist 8-11\n#\n# Set bgsave child process to cpu affinity 1,10,11:\n# bgsave_cpulist 1,10-11\n\n# In some cases redis will emit warnings and even refuse to start if it detects\n# that the system is in a bad state. It is possible to suppress these warnings\n# by setting the following config, which takes a space-delimited list of warnings\n# to suppress.\n#\n# ignore-warnings ARM64-COW-BUG\n"
  },
  {
    "path": "aegir/conf/redis/redis7.conf",
    "content": "# Redis configuration file example.\n#\n# Note that in order to read the configuration file, Redis must be\n# started with the file path as its first argument:\n#\n# ./redis-server /path/to/redis.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all Redis servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Note that option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Redis Sentinel. Since Redis always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config changes at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# Included paths may contain wildcards. All files matching the wildcards will\n# be included in alphabetical order.\n# Note that if an include path contains wildcards but no files match it when\n# the server is started, the include statement will be ignored and no error will\n# be emitted.  It is safe, therefore, to include wildcard files from empty\n# directories.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n# include /path/to/fragments/*.conf\n#\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, Redis listens\n# for connections from all available network interfaces on the host machine.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n# Each address can be prefixed by \"-\", which means that redis will not fail to\n# start if the address is not available. Being not available only refers to\n# addresses that do not correspond to any network interface. Addresses that\n# are already in use will always fail, and unsupported protocols will always be\n# silently skipped.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses\n# xbind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6\n# xbind * -::*                     # like the default, all available interfaces\n#\n# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. 
So by default we uncomment the\n# following bind directive, that will force Redis to listen only on the\n# IPv4 and IPv6 (if available) loopback interface addresses (this means Redis\n# will only be able to accept client connections from the same host that it is\n# running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# COMMENT OUT THE FOLLOWING LINE.\n#\n# You will also need to set a password unless you explicitly disable protected\n# mode.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# By default, outgoing connections (from replica to master, from Sentinel to\n# instances, cluster bus, etc.) are not bound to a specific local address. In\n# most cases, this means the operating system will handle that based on routing\n# and the interface through which the connection goes out.\n#\n# Using bind-source-addr it is possible to configure a specific address to bind\n# to, which may also affect how the connection gets routed.\n#\n# Example:\n#\n# bind-source-addr 10.0.0.1\n\n# Protected mode is a layer of security protection, in order to avoid that\n# Redis instances left open on the internet are accessed and exploited.\n#\n# When protected mode is on and the default user has no password, the server\n# only accepts local connections from the IPv4 address (127.0.0.1), IPv6 address\n# (::1) or Unix domain sockets.\n#\n# By default protected mode is enabled. You should disable it only if\n# you are sure you want clients from other hosts to connect to Redis\n# even if no authentication is configured.\nprotected-mode yes\n\n# Redis uses default hardened security configuration directives to reduce the\n# attack surface on innocent users. 
Therefore, several sensitive configuration\n# directives are immutable, and some potentially-dangerous commands are blocked.\n#\n# Configuration directives that control files that Redis writes to (e.g., 'dir'\n# and 'dbfilename') and that aren't usually modified during runtime\n# are protected by making them immutable.\n#\n# Commands that can increase the attack surface of Redis and that aren't usually\n# called by users are blocked by default.\n#\n# These can be exposed to either all connections or just local ones by setting\n# each of the configs listed below to either of these values:\n#\n# no    - Block for any connection (remain immutable)\n# yes   - Allow for any connection (no protection)\n# local - Allow only for local connections. Ones originating from the\n#         IPv4 address (127.0.0.1), IPv6 address (::1) or Unix domain sockets.\n#\n# enable-protected-configs no\n# enable-debug-command no\n# enable-module-command no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified Redis will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow clients connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so Redis will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/redis/redis.sock\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 900\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. 
This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Force network equipment in the middle to consider the connection to be\n#    alive.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection twice that time is needed.\n# On other kernels the period depends on the kernel configuration.\n#\n# A reasonable value for this option is 300 seconds, which is the new\n# Redis default starting with Redis 3.2.1.\ntcp-keepalive 300\n\n# Apply OS-specific mechanism to mark the listening socket with the specified\n# ID, to support advanced routing and filtering capabilities.\n#\n# On Linux, the ID represents a connection mark.\n# On FreeBSD, the ID represents a socket cookie ID.\n# On OpenBSD, the ID represents a route table ID.\n#\n# The default value is 0, which implies no marking is required.\n# socket-mark-id 0\n\n################################# TLS/SSL #####################################\n\n# By default, TLS/SSL is disabled. To enable it, the \"tls-port\" configuration\n# directive can be used to define TLS-listening ports. To enable TLS on the\n# default port, use:\n#\n# port 0\n# tls-port 6379\n\n# Configure an X.509 certificate and private key to use for authenticating the\n# server to connected clients, masters or cluster peers.  These files should be\n# PEM formatted.\n#\n# tls-cert-file redis.crt\n# tls-key-file redis.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-key-file-pass secret\n\n# Normally Redis uses the same certificate for both server functions (accepting\n# connections) and client functions (replicating from a master, establishing\n# cluster bus connections, etc.).\n#\n# Sometimes certificates are issued with attributes that designate them as\n# client-only or server-only certificates. In that case it may be desired to use\n# different certificates for incoming (server) and outgoing (client)\n# connections. 
To do that, use the following directives:\n#\n# tls-client-cert-file client.crt\n# tls-client-key-file client.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-client-key-file-pass secret\n\n# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange,\n# required by older versions of OpenSSL (<3.0). Newer versions do not require\n# this configuration and recommend against it.\n#\n# tls-dh-params-file redis.dh\n\n# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL\n# clients and peers.  Redis requires an explicit configuration of at least one\n# of these, and will not implicitly use the system wide configuration.\n#\n# tls-ca-cert-file ca.crt\n# tls-ca-cert-dir /etc/ssl/certs\n\n# By default, clients (including replica servers) on a TLS port are required\n# to authenticate using valid client side certificates.\n#\n# If \"no\" is specified, client certificates are not required and not accepted.\n# If \"optional\" is specified, client certificates are accepted and must be\n# valid if provided, but are not required.\n#\n# tls-auth-clients no\n# tls-auth-clients optional\n\n# By default, a Redis replica does not attempt to establish a TLS connection\n# with its master.\n#\n# Use the following directive to enable TLS on replication links.\n#\n# tls-replication yes\n\n# By default, the Redis Cluster bus uses a plain TCP connection. 
To enable\n# TLS for the bus protocol, use the following directive:\n#\n# tls-cluster yes\n\n# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended\n# that older formally deprecated versions are kept disabled to reduce the attack surface.\n# You can explicitly specify TLS versions to support.\n# Allowed values are case insensitive and include \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\",\n# \"TLSv1.3\" (OpenSSL >= 1.1.1) or any combination.\n# To enable only TLSv1.2 and TLSv1.3, use:\n#\n# tls-protocols \"TLSv1.2 TLSv1.3\"\n\n# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information\n# about the syntax of this string.\n#\n# Note: this configuration applies only to <= TLSv1.2.\n#\n# tls-ciphers DEFAULT:!MEDIUM\n\n# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more\n# information about the syntax of this string, and specifically for TLSv1.3\n# ciphersuites.\n#\n# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256\n\n# When choosing a cipher, use the server's preference instead of the client\n# preference. By default, the server follows the client's preference.\n#\n# tls-prefer-server-ciphers yes\n\n# By default, TLS session caching is enabled to allow faster and less expensive\n# reconnections by clients that support it. Use the following directive to disable\n# caching.\n#\n# tls-session-caching no\n\n# Change the default number of TLS sessions cached. A zero value sets the cache\n# to unlimited size. The default size is 20480.\n#\n# tls-session-cache-size 5000\n\n# Change the default timeout of cached TLS sessions. The default timeout is 300\n# seconds.\n#\n# tls-session-cache-timeout 60\n\n################################# GENERAL #####################################\n\n# By default Redis does not run as a daemon. 
Use 'yes' if you need it.\n# Note that Redis will write a pid file in /run/redis.pid when daemonized.\n# When Redis is supervised by upstart or systemd, this parameter has no impact.\ndaemonize yes\n\n# If you run Redis from upstart or systemd, Redis can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode\n#                        requires \"expect stop\" in your upstart job config\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#                        on startup, and updating Redis status on a regular\n#                        basis.\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous pings back to your supervisor.\n#\n# The default is \"no\". To run under upstart/systemd, you can simply uncomment\n# the line below:\n#\n# supervised auto\n\n# If a pid file is specified, Redis writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. 
When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/redis.pid\".\n#\n# Creating a pid file is best effort: if Redis is not able to create it\n# nothing bad happens, the server will start and run normally.\n#\n# Note that on modern Linux systems \"/run/redis.pid\" is more conforming\n# and should be used instead.\npidfile /run/redis/redis.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (many rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\nloglevel warning\n\n# Specify the log file name. Also the empty string can be used to force\n# Redis to log on the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile /var/log/redis/redis-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident redis\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# To disable the built in crash log, which will possibly produce cleaner core\n# dumps when they are needed, uncomment the following:\n#\n# crash-log-enabled no\n\n# To disable the fast memory check that's run as part of the crash log, which\n# will possibly let redis terminate sooner, uncomment the following:\n#\n# crash-memcheck-enabled no\n\n# Set the number of databases. 
The default database is DB 0; you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 8\n\n# By default Redis shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY and syslog logging is\n# disabled. Basically this means that normally a logo is displayed only in\n# interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo no\n\n# By default, Redis modifies the process title (as seen in 'top' and 'ps') to\n# provide some runtime information. It is possible to disable this and leave\n# the process name as executed by setting the following to no.\nset-proc-title yes\n\n# When changing the process title, Redis uses the following template to construct\n# the modified title.\n#\n# Template variables are specified in curly brackets. The following variables are\n# supported:\n#\n# {title}           Name of process as executed if parent, or type of child process.\n# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or\n#                   Unix socket if only that's available.\n# {server-mode}     Special mode, i.e. 
\"[sentinel]\" or \"[cluster]\".\n# {port}            TCP port listening on, or 0.\n# {tls-port}        TLS port listening on, or 0.\n# {unixsocket}      Unix domain socket listening on, or \"\".\n# {config-file}     Name of configuration file used.\n#\nproc-title-template \"{title} {listen-addr} {server-mode}\"\n\n################################ SNAPSHOTTING  ################################\n\n# Save the DB to disk.\n#\n# save <seconds> <changes> [<seconds> <changes> ...]\n#\n# Redis will save the DB if the given number of seconds elapsed and it\n# surpassed the given number of write operations against the DB.\n#\n# Snapshotting can be completely disabled with a single empty string argument\n# as in the following example:\n#\n# save \"\"\n#\n# Unless specified otherwise, by default Redis will save the DB:\n#   * After 3600 seconds (an hour) if at least 1 change was performed\n#   * After 300 seconds (5 minutes) if at least 100 changes were performed\n#   * After 60 seconds if at least 10000 changes were performed\n#\n# You can set these explicitly by uncommenting the following line.\n#\n# save 3600 1 300 100 60 10000\n\n# By default Redis will stop accepting writes if RDB snapshots are enabled\n# (at least one save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again, Redis will\n# automatically allow writes again.\n#\n# However if you have set up proper monitoring of the Redis server\n# and persistence, you may want to disable this feature so that Redis will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default compression is enabled as it's almost always a win.\n# If you want to 
save some CPU in the saving child, set it to 'no', but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# Enables or disables full sanitization checks for ziplist and listpack, etc., when\n# loading an RDB or RESTORE payload. This reduces the chances of an assertion or\n# crash later on while processing commands.\n# Options:\n#   no         - Never perform full sanitization\n#   yes        - Always perform full sanitization\n#   clients    - Perform full sanitization only for user connections.\n#                Excludes: RDB files, RESTORE commands received from the master\n#                connection, and client connections which have the\n#                skip-sanitize-payload ACL flag.\n# The default should be 'clients' but since it currently affects cluster\n# resharding via MIGRATE, it is temporarily set to 'no' by default.\n#\n# sanitize-dump-payload no\n\n# The filename where to dump the DB\ndbfilename dump.rdb\n\n# Remove RDB files used by replication in instances without persistence\n# enabled. By default this option is disabled, however there are environments\n# where for regulations or other security concerns, RDB files persisted on\n# disk by masters in order to feed replicas, or stored on disk by replicas\n# in order to load them for the initial synchronization, should be deleted\n# ASAP. 
Note that this option ONLY WORKS in instances that have both AOF\n# and RDB persistence disabled, otherwise it is completely ignored.\n#\n# An alternative (and sometimes better) way to obtain the same effect is\n# to use diskless replication on both master and replica instances. However\n# in the case of replicas, diskless is not always an option.\nrdb-del-sync-files no\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir /var/lib/redis/\n\n################################# REPLICATION #################################\n\n# Master-Replica replication. Use replicaof to make a Redis instance a copy of\n# another Redis server. A few things to understand ASAP about Redis replication.\n#\n#   +------------------+      +---------------+\n#   |      Master      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Redis replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears not to be connected to at least\n#    a given number of replicas.\n# 2) Redis replicas are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. 
After a\n#    network partition replicas automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# replicaof <masterip> <masterport>\n\n# If the master is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the replica request.\n#\n# masterauth <master-password>\n#\n# However this is not enough if you are using Redis ACLs (for Redis version\n# 6 or greater), and the default user is not capable of running the PSYNC\n# command and/or other commands needed for replication. In this case it's\n# better to configure a special user to use with replication, and specify the\n# masteruser configuration as such:\n#\n# masteruser <username>\n#\n# When masteruser is specified, the replica will authenticate against its\n# master using the new AUTH form: AUTH <username> <password>.\n\n# When a replica loses its connection with the master, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) If replica-serve-stale-data is set to 'no' the replica will reply with error\n#    \"MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'\"\n#    to all data access commands, excluding commands such as:\n#    INFO, REPLICAOF, AUTH, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,\n#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,\n#    HOST and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. 
Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# Since Redis 2.6 by default replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# New replicas and reconnecting replicas that are not able to continue the\n# replication process just receiving differences, need to do what is called a\n# \"full synchronization\". An RDB file is transmitted from the master to the\n# replicas.\n#\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The Redis master creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The Redis master creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child\n# producing the RDB file finishes its work. 
With diskless replication instead\n# once the transfer starts, new replicas arriving will be queued and a new\n# transfer will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple\n# replicas will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync yes\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new replicas arriving, that will be queued for the next RDB transfer, so the\n# server waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# When diskless replication is enabled with a delay, it is possible to let\n# the replication start before the maximum delay is reached if the maximum\n# number of replicas expected have connected. Default of 0 means that the\n# maximum is not defined and Redis will wait the full delay.\nrepl-diskless-sync-max-replicas 0\n\n# -----------------------------------------------------------------------------\n# WARNING: RDB diskless load is experimental. Since in this setup the replica\n# does not immediately store an RDB on disk, it may cause data loss during\n# failovers. RDB diskless load + Redis modules not handling I/O reads may also\n# cause Redis to abort in case of I/O errors during the initial synchronization\n# stage with the master. 
Use only if you know what you are doing.
# -----------------------------------------------------------------------------
#
# A replica can load the RDB it reads from the replication link directly from the
# socket, or store the RDB to a file and read that file after it was completely
# received from the master.
#
# In many cases the disk is slower than the network, and storing and loading
# the RDB file may increase replication time (and even increase the master's
# Copy on Write memory and replica buffers).
# However, parsing the RDB file directly from the socket may mean that we have
# to flush the contents of the current database before the full RDB is
# received. For this reason we have the following options:
#
# "disabled"    - Don't use diskless load (store the RDB file to the disk first)
# "on-empty-db" - Use diskless load only when it is completely safe.
# "swapdb"      - Keep current db contents in RAM while parsing the data directly
#                 from the socket. Replicas in this mode can keep serving current
#                 data set while replication is in progress, except for cases where
#                 they can't recognize the master as having a data set from the
#                 same replication history.
#                 Note that this requires sufficient memory; if you don't have it,
#                 you risk an OOM kill.
repl-diskless-load disabled

# Masters send PINGs to their replicas at a predefined interval. It's possible to
# change this interval with the repl-ping-replica-period option. 
The default\n# value is 10 seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the replica. The default\n# value is 60 seconds.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select \"yes\" Redis will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and replicas are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. 
The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a\n# replica wants to reconnect again, often a full resync is not needed, but a\n# partial resync is enough, just passing the portion of data the replica\n# missed while disconnected.\n#\n# The bigger the replication backlog, the longer the replica can endure the\n# disconnect and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated if there is at least one replica connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no connected replicas for some time, the backlog will be\n# freed. The following option configures the amount of seconds that need to\n# elapse, starting from the time the last replica disconnected, for the backlog\n# buffer to be freed.\n#\n# Note that replicas never free the backlog for timeout, since they may be\n# promoted to masters later, and should be able to correctly \"partially\n# resynchronize\" with other replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The replica priority is an integer number published by Redis in the INFO\n# output. 
It is used by Redis Sentinel in order to select a replica to promote\n# into a master if the master is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel\n# will pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of master, so a replica with priority of 0 will never be selected by\n# Redis Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# The propagation error behavior controls how Redis will behave when it is\n# unable to handle a command being processed in the replication stream from a master\n# or processed while reading from an AOF file. Errors that occur during propagation\n# are unexpected, and can cause data inconsistency. However, there are edge cases\n# in earlier versions of Redis where it was possible for the server to replicate or persist\n# commands that would fail on future versions. For this reason the default behavior\n# is to ignore such errors and continue processing commands.\n#\n# If an application wants to ensure there is no data divergence, this configuration\n# should be set to 'panic' instead. The value can also be set to 'panic-on-replicas'\n# to only panic when a replica encounters an error on the replication stream. One of\n# these two panic values will become the default value in the future once there are\n# sufficient safety mechanisms in place to prevent false positive crashes.\n#\n# propagation-error-behavior ignore\n\n# Replica ignore disk write errors controls the behavior of a replica when it is\n# unable to persist a write command received from its master to disk. 
By default,
# this configuration is set to 'no' and will crash the replica in this condition.
# It is not recommended to change this default, however in order to be compatible
# with older versions of Redis this config can be toggled to 'yes' which will just
# log a warning and execute the write command it got from the master.
#
# replica-ignore-disk-write-errors no

# -----------------------------------------------------------------------------
# By default, Redis Sentinel includes all replicas in its reports. A replica
# can be excluded from Redis Sentinel's announcements. An unannounced replica
# will be ignored by the 'sentinel replicas <master>' command and won't be
# exposed to Redis Sentinel's clients.
#
# This option does not change the behavior of replica-priority. Even with
# replica-announced set to 'no', the replica can be promoted to master. To
# prevent this behavior, set replica-priority to 0.
#
# replica-announced yes

# It is possible for a master to stop accepting writes if there are fewer than
# N replicas connected, having a lag less than or equal to M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, which must be <= the specified value, is calculated from
# the last ping received from the replica, which is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.

# A Redis master is able to list the address and port of the attached
# replicas in different ways. 
For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP address and port normally reported by a replica are
# obtained in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may actually be reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

############################### KEYS TRACKING #################################

# Redis implements server assisted support for client side caching of values.
# This is implemented using an invalidation table that remembers, using
# a radix tree indexed by key name, what clients have which keys. In turn
# this is used in order to send invalidation messages to clients. Please
# check this page to understand more about the feature:
#
#   https://redis.io/topics/client-side-caching
#
# When tracking is enabled for a client, all the read only queries are assumed
# to be cached: this will force Redis to store information in the invalidation
# table. When keys are modified, such information is flushed away, and
# invalidation messages are sent to the clients. 
However if the workload is
# heavily dominated by reads, Redis could use more and more memory in order
# to track the keys fetched by many clients.
#
# For this reason it is possible to configure a maximum fill value for the
# invalidation table. By default it is set to 1 million keys, and once this limit
# is reached, Redis will start to evict keys in the invalidation table
# even if they were not modified, just to reclaim memory: this will in turn
# force the clients to invalidate the cached values. Basically the table
# maximum size is a trade off between the memory you want to spend server
# side to track information about who cached what, and the ability of clients
# to retain cached objects in memory.
#
# If you set the value to 0, it means there are no limits, and Redis will
# retain as many keys as needed in the invalidation table.
# In the "stats" INFO section, you can find information about the number of
# keys in the invalidation table at every given moment.
#
# Note: when key tracking is used in broadcasting mode, no memory is used
# on the server side, so this setting has no effect.
#
# tracking-table-max-keys 1000000

################################## SECURITY ###################################

# Warning: since Redis is pretty fast, an outside user can try up to
# 1 million passwords per second against a modern box. This means that you
# should use very strong passwords, otherwise they will be very easy to break.
# Note that because the password is really a shared secret between the client
# and the server, and should not be memorized by any human, the password
# can easily be a long string from /dev/urandom or whatever, so by using a
# long and unguessable password no brute force attack will be possible.

# Redis ACL users are defined in the following format:
#
#   user <username> ... 
acl rules ...
#
# For example:
#
#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
#
# The special username "default" is used for new connections. If this user
# has the "nopass" rule, then new connections will be immediately authenticated
# as the "default" user without the need for any password provided via the
# AUTH command. Otherwise if the "default" user is not flagged with "nopass"
# the connections will start in a non-authenticated state, and will require
# AUTH (or the HELLO command AUTH option) in order to be authenticated and
# start to work.
#
# The ACL rules that describe what a user can do are the following:
#
#  on           Enable the user: it is possible to authenticate as this user.
#  off          Disable the user: it's no longer possible to authenticate
#               with this user, however the already authenticated connections
#               will still work.
#  skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.
#  sanitize-payload         RESTORE dump-payload is sanitized (default).
#  +<command>   Allow the execution of that command.
#               May be used with `|` for allowing subcommands (e.g. "+config|get")
#  -<command>   Disallow the execution of that command.
#               May be used with `|` for blocking subcommands (e.g. "-config|set")
#  +@<category> Allow the execution of all the commands in such category.
#               Valid categories are @admin, @set, @sortedset, ...
#               and so forth; see the full list in the server.c file where
#               the Redis command table is described and defined.
#               The special category @all means all the commands, both those
#               currently present in the server and those that will be loaded
#               in the future via modules.
#  +<command>|first-arg  Allow a specific first argument of an otherwise
#                        disabled command. 
It is only supported on commands with
#                        no sub-commands, and is not allowed as negative form
#                        like -SELECT|1, only additive starting with "+". This
#                        feature is deprecated and may be removed in the future.
#  allcommands  Alias for +@all. Note that it implies the ability to execute
#               all the future commands loaded via the modules system.
#  nocommands   Alias for -@all.
#  ~<pattern>   Add a pattern of keys that can be mentioned as part of
#               commands. For instance ~* allows all the keys. The pattern
#               is a glob-style pattern like the one used by KEYS.
#               It is possible to specify multiple patterns.
# %R~<pattern>  Add a key read pattern that specifies which keys can be read
#               from.
# %W~<pattern>  Add a key write pattern that specifies which keys can be
#               written to.
#  allkeys      Alias for ~*
#  resetkeys    Flush the list of allowed key patterns.
#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be
#               accessed by the user. It is possible to specify multiple channel
#               patterns.
#  allchannels  Alias for &*
#  resetchannels            Flush the list of allowed channel patterns.
#  ><password>  Add this password to the list of valid passwords for the user.
#               For example >mypass will add "mypass" to the list.
#               This directive clears the "nopass" flag (see later).
#  <<password>  Remove this password from the list of valid passwords.
#  nopass       All the set passwords of the user are removed, and the user
#               is flagged as requiring no password: it means that every
#               password will work against this user. 
If this directive is
#               used for the default user, every new connection will be
#               immediately authenticated with the default user without
#               any explicit AUTH command required. Note that the "resetpass"
#               directive will clear this condition.
#  resetpass    Flush the list of allowed passwords. It also removes the
#               "nopass" status. After "resetpass" the user has no associated
#               passwords and there is no way to authenticate without adding
#               some password (or setting it as "nopass" later).
#  reset        Performs the following actions: resetpass, resetkeys, off,
#               -@all. The user returns to the same state it had immediately
#               after its creation.
# (<options>)   Create a new selector with the options specified within the
#               parentheses and attach it to the user. Each option should be
#               space separated. The first character must be ( and the last
#               character must be ).
# clearselectors            Remove all of the currently attached selectors.
#                           Note this does not change the "root" user permissions,
#                           which are the permissions directly applied onto the
#                           user (outside the parentheses).
#
# ACL rules can be specified in any order: for instance you can start with
# passwords, then flags, or key patterns. However note that the additive
# and subtractive rules will CHANGE MEANING depending on the ordering.
# For instance see the following example:
#
#   user alice on +@all -DEBUG ~* >somepassword
#
# This will allow "alice" to use all the commands with the exception of the
# DEBUG command, since +@all added all the commands to the set of the commands
# alice can use, and later DEBUG was removed. 
However if we invert the order
# of the two ACL rules the result will be different:
#
#   user alice on -DEBUG +@all ~* >somepassword
#
# Now DEBUG was removed while alice did not yet have any commands in the set of
# allowed commands; later all the commands were added, so the user will be able
# to execute everything.
#
# Basically ACL rules are processed left-to-right.
#
# The following is a list of command categories and their meanings:
# * keyspace - Writing or reading from keys, databases, or their metadata
#     in a type agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,
#     KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,
#     key or metadata will also have the `write` category. Commands that only read
#     the keyspace, key or metadata will have the `read` category.
# * read - Reading from keys (values or metadata). Note that commands that don't
#     interact with keys will not have either `read` or `write`.
# * write - Writing to keys (values or metadata)
# * admin - Administrative commands. Normal applications will never need to use
#     these. Includes REPLICAOF, CONFIG, DEBUG, SAVE, MONITOR, ACL, SHUTDOWN, etc.
# * dangerous - Potentially dangerous (each should be considered with care for
#     various reasons). This includes FLUSHALL, MIGRATE, RESTORE, SORT, KEYS,
#     CLIENT, DEBUG, INFO, CONFIG, SAVE, REPLICAOF, etc.
# * connection - Commands affecting the connection or other connections.
#     This includes AUTH, SELECT, COMMAND, CLIENT, ECHO, PING, etc.
# * blocking - Potentially blocking the connection until released by another
#     command.
# * fast - Fast O(1) commands. 
May loop on the number of arguments, but not the
#     number of elements in the key.
# * slow - All commands that are not Fast.
# * pubsub - PUBLISH / SUBSCRIBE related
# * transaction - WATCH / MULTI / EXEC related commands.
# * scripting - Scripting related.
# * set - Data type: sets related.
# * sortedset - Data type: zsets related.
# * list - Data type: lists related.
# * hash - Data type: hashes related.
# * string - Data type: strings related.
# * bitmap - Data type: bitmaps related.
# * hyperloglog - Data type: hyperloglog related.
# * geo - Data type: geo related.
# * stream - Data type: streams related.
#
# For more information about ACL configuration please refer to
# the Redis web site at https://redis.io/topics/acl

# ACL LOG
#
# The ACL Log tracks failed commands and authentication events associated
# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked
# by ACLs. The ACL Log is stored in memory. You can reclaim memory with
# ACL LOG RESET. Define the maximum entry length of the ACL Log below.
acllog-max-len 128

# Using an external ACL file
#
# Instead of configuring users here in this file, it is possible to use
# a stand-alone file just listing users. The two methods cannot be mixed:
# if you configure users here and at the same time you activate the external
# ACL file, the server will refuse to start.
#
# The format of the external ACL user file is exactly the same as the
# format that is used inside redis.conf to describe users.
#
# aclfile /etc/redis/users.acl

# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
# layer on top of the new ACL system. Its only effect is setting
# the password for the default user. 
Clients will still authenticate using
# AUTH <password> as usual, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# The requirepass option is not compatible with the aclfile option and the ACL
# LOAD command; these will cause requirepass to be ignored.
#
# requirepass foobared

# New users are initialized with restrictive permissions by default, via the
# equivalent of this ACL rule 'off resetkeys -@all'. Starting with Redis 6.2, it
# is possible to manage access to Pub/Sub channels with ACL rules as well. The
# default Pub/Sub channels permission for new users is controlled by the
# acl-pubsub-default configuration directive, which accepts one of these values:
#
# allchannels: grants access to all Pub/Sub channels
# resetchannels: revokes access to all Pub/Sub channels
#
# From Redis 7.0, acl-pubsub-default defaults to 'resetchannels' permission.
#
# acl-pubsub-default resetchannels

# Command renaming (DEPRECATED).
#
# ------------------------------------------------------------------------
# WARNING: avoid using this option if possible. Instead use ACLs to remove
# commands from the default user, and put them only in some admin user you
# create for administrative purposes.
# ------------------------------------------------------------------------
#
# It is possible to change the name of dangerous commands in a shared
# environment. 
For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to replicas may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. By default\n# this limit is set to 10000 clients, however if the Redis server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as Redis reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached Redis will close all the new connections sending\n# an error 'max number of clients reached'.\n#\n# IMPORTANT: When Redis Cluster is used, the max number of connections is also\n# shared with the cluster bus: every node in the cluster will use two\n# connections, one incoming and another outgoing. 
It is important to size the
# limit accordingly in case of very large clusters.
#
maxclients 4000

############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified number of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas is subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is filled with DELs of evicted keys, triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
maxmemory 88MB

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. 
You can select one from the following behaviors:
#
# volatile-lru -> Evict using approximated LRU, only keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key having an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (smallest TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# LRU, LFU and volatile-ttl are all implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, when there are no suitable keys for
# eviction, Redis will return an error on write operations that require
# more memory. These are usually commands that create new keys, add data or
# modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,
# SORT (due to the STORE argument), and EXEC (if the transaction includes any
# command that requires memory).
#
# The default is: noeviction
#
maxmemory-policy allkeys-lru

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune them for speed or
# accuracy. By default Redis will check five keys and pick the one that was
# used least recently; you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 approximates true LRU very
# closely but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5

# Eviction processing is designed to function well with the default setting.
# If there is an unusually large amount of write traffic, this value may need to
# be increased.  
Decreasing this value may reduce latency at the risk of reduced
# eviction processing effectiveness.
#   0 = minimum latency, 10 = default, 100 = process without regard to latency
#
# maxmemory-eviction-tenacity 10

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys are evicted on the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want; however, if your replica is writable, or you want the replica
# to have a different memory setting, and you are sure all the writes performed
# to the replica are idempotent, then you may change this default (but be sure
# to understand what you are doing).
#
# Note that since the replica by default does not evict, it may end up using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory
# and so forth). So make sure you monitor your replicas and make sure they
# have enough memory to never hit a real out-of-memory condition before the
# master hits the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

# Redis reclaims expired keys in two ways: upon access when those keys are
# found to be expired, and also in background, in what is called the
# "active expire cycle". The key space is slowly and incrementally scanned
# looking for expired keys to reclaim, so that it is possible to free memory
# of keys that are expired and will never be accessed again in a short time.
#
# The default effort of the expire cycle will try to avoid having more than
# ten percent of expired keys still in memory, and will try to avoid consuming
# more than 25% of total memory and to avoid adding latency to the system. 
However
# it is possible to increase the expire "effort" that is normally set to
# "1", to a greater value, up to the value "10". At its maximum value the
# system will use more CPU, longer cycles (and technically may introduce
# more latency), and will tolerate fewer already expired keys still present
# in the system. It's a tradeoff between memory, CPU and latency.
#
# active-expire-effort 1

############################# LAZY FREEING ####################################

# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. 
However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# as if DEL was called. 
However you can configure each case specifically
# in order to instead release memory in a non-blocking way, as if UNLINK
# was called, using the following configuration directives.

lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes

# It is also possible, for cases where replacing the DEL calls in user code
# with UNLINK calls is not easy, to modify the default behavior of the DEL
# command to act exactly like UNLINK, using the following configuration
# directive:

lazyfree-lazy-user-del yes

# FLUSHDB, FLUSHALL, SCRIPT FLUSH and FUNCTION FLUSH support both asynchronous and synchronous
# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the
# commands. When neither flag is passed, this directive will be used to determine
# if the data should be deleted asynchronously.

lazyfree-lazy-user-flush yes

################################ THREADED I/O #################################

# Redis is mostly single threaded, however there are certain threaded
# operations such as UNLINK, slow I/O accesses and other things that are
# performed on side threads.
#
# Now it is also possible to handle Redis client socket reads and writes
# in different I/O threads. Since writing in particular is slow, Redis users
# normally use pipelining in order to speed up the Redis performance per
# core, and spawn multiple instances in order to scale more. Using I/O
# threads it is possible to easily speed up Redis by up to two times without
# resorting to pipelining or sharding the instance.
#
# By default threading is disabled; we suggest enabling it only on machines
# that have at least 4 cores, leaving at least one spare core.
# Using more than 8 threads is unlikely to help much. 
We also recommend using
# threaded I/O only if you actually have performance problems, with Redis
# instances being able to use a quite big percentage of CPU time; otherwise
# there is no point in using this feature.
#
# So for instance if you have a four core box, try to use 2 or 3 I/O
# threads; if you have an 8 core box, try to use 6 threads. In order to
# enable I/O threads use the following configuration directive:
#
# io-threads 4
#
# Setting io-threads to 1 will just use the main thread as usual.
# When I/O threads are enabled, we only use threads for writes, that is
# to thread the write(2) syscall and transfer the client buffers to the
# socket. However it is also possible to enable threading of reads and
# protocol parsing using the following configuration directive, by setting
# it to yes:
#
# io-threads-do-reads no
#
# Usually threading reads doesn't help much.
#
# NOTE 1: This configuration directive cannot be changed at runtime via
# CONFIG SET. Also, this feature currently does not work when SSL is
# enabled.
#
# NOTE 2: If you want to test the Redis speedup using redis-benchmark, make
# sure you also run the benchmark itself in threaded mode, using the
# --threads option to match the number of Redis threads, otherwise you'll not
# be able to notice the improvements.

############################ KERNEL OOM CONTROL ##############################

# On Linux, it is possible to hint the kernel OOM killer about which processes
# should be killed first when out of memory.
#
# Enabling this feature makes Redis actively control the oom_score_adj value
# for all its processes, depending on their role. 
The default scores will
# attempt to have background child processes killed before all others, and
# replicas killed before masters.
#
# Redis supports these options:
#
# no:       Don't make changes to oom-score-adj (default).
# yes:      Alias to "relative", see below.
# absolute: Values in oom-score-adj-values are written as is to the kernel.
# relative: Values are used relative to the initial value of oom_score_adj when
#           the server starts and are then clamped to a range of -1000 to 1000.
#           Because typically the initial value is 0, they will often match the
#           absolute values.
oom-score-adj no

# When oom-score-adj is used, this directive controls the specific values used
# for master, replica and background child processes. Values range -2000 to
# 2000 (higher means more likely to be killed).
#
# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)
# can freely increase their value, but not decrease it below its initial
# settings. This means that setting oom-score-adj to "relative" and setting the
# oom-score-adj-values to positive values will always succeed.
oom-score-adj-values 0 200 800


#################### KERNEL transparent hugepage CONTROL ######################

# Usually the kernel Transparent Huge Pages control is set to "madvise" or
# "never" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which
# case this config has no effect. On systems in which it is set to "always",
# Redis will attempt to disable it specifically for the redis process in order
# to avoid latency problems specifically with fork(2) and CoW.
# If for some reason you prefer to keep it enabled, you can set this config to
# "no" and the kernel global to "always".

disable-thp yes

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. 
This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result in a few minutes of writes being lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# goes wrong with the Redis process itself, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check https://redis.io/topics/persistence for more information.

appendonly no

# The base name of the append only file.
#
# Redis 7 and newer use a set of append-only files to persist the dataset
# and changes applied to it. There are two basic types of files in use:
#
# - Base files, which are a snapshot representing the complete state of the
#   dataset at the time the file was created. 
Base files can be either in
#   the form of RDB (binary serialized) or AOF (textual commands).
# - Incremental files, which contain additional commands that were applied
#   to the dataset following the previous file.
#
# In addition, manifest files are used to track the files and the order in
# which they were created and should be applied.
#
# Append-only file names are created by Redis following a specific pattern.
# The file name's prefix is based on the 'appendfilename' configuration
# parameter, followed by additional information about the sequence and type.
#
# For example, if appendfilename is set to appendonly.aof, the following file
# names could be derived:
#
# - appendonly.aof.1.base.rdb as a base file.
# - appendonly.aof.1.incr.aof, appendonly.aof.2.incr.aof as incremental files.
# - appendonly.aof.manifest as a manifest file.

appendfilename "appendonly.aof"

# For convenience, Redis stores all persistent append-only files in a dedicated
# directory. The name of the directory is determined by the appenddirname
# configuration parameter.

appenddirname "appendonlydir"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OSes will really
# flush data on disk, while others will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. 
It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performance (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# For more details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync no". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". 
Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file by implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger by more than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimum size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. 
When the option is set to no, the user is required
# to fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle,
# the server will still exit with an error. This option only applies when
# Redis tries to read more data from the AOF file but not enough bytes
# are found.
aof-load-truncated yes

# Redis can create append-only base files in either RDB or AOF formats. Using
# the RDB format is always faster and more efficient, and disabling it is only
# supported for backward compatibility purposes.
aof-use-rdb-preamble yes

# Redis supports recording timestamp annotations in the AOF to support restoring
# the data from a specific point-in-time. However, using this capability changes
# the AOF format in a way that may not be compatible with existing AOF parsers.
aof-timestamp-enabled no

################################ SHUTDOWN #####################################

# Maximum time to wait for replicas when shutting down, in seconds.
#
# During shutdown, a grace period allows any lagging replicas to catch up with
# the latest replication offset before the master exits. This period can
# prevent data loss, especially for deployments without configured disk backups.
#
# The 'shutdown-timeout' value is the grace period's duration in seconds. It is
# only applicable when the instance has replicas. 
To disable the feature, set\n# the value to 0.\n#\n# shutdown-timeout 10\n\n# When Redis receives a SIGINT or SIGTERM, shutdown is initiated and by default\n# an RDB snapshot is written to disk in a blocking operation if save points are configured.\n# The options used on signaled shutdown can include the following values:\n# default:  Saves RDB snapshot only if save points are configured.\n#           Waits for lagging replicas to catch up.\n# save:     Forces a DB saving operation even if no save points are configured.\n# nosave:   Prevents DB saving operation even if one or more save points are configured.\n# now:      Skips waiting for lagging replicas.\n# force:    Ignores any errors that would normally prevent the server from exiting.\n#\n# Any combination of values is allowed as long as \"save\" and \"nosave\" are not set simultaneously.\n# Example: \"nosave force now\"\n#\n# shutdown-on-sigint default\n# shutdown-on-sigterm default\n\n################ NON-DETERMINISTIC LONG BLOCKING COMMANDS #####################\n\n# Maximum time in milliseconds for EVAL scripts, functions and in some cases\n# modules' commands before Redis can start processing or rejecting other clients.\n#\n# If the maximum execution time is reached Redis will start to reply to most\n# commands with a BUSY error.\n#\n# In this state Redis will only allow a handful of commands to be executed.\n# For instance, SCRIPT KILL, FUNCTION KILL, SHUTDOWN NOSAVE and possibly some\n# module specific 'allow-busy' commands.\n#\n# SCRIPT KILL and FUNCTION KILL will only be able to stop a script that did not\n# yet call any write commands, so SHUTDOWN NOSAVE may be the only way to stop\n# the server in the case a write command was already issued by the script when\n# the user doesn't want to wait for the natural termination of the script.\n#\n# The default is 5 seconds. It is possible to set it to 0 or a negative value\n# to disable this mechanism (uninterrupted execution). 
Note that in the past
# this config had a different name, which is now an alias, so both of these do
# the same:
# lua-time-limit 5000
# busy-reply-threshold 5000

################################ REDIS CLUSTER  ###############################

# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node, enable cluster support by uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the number of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are a multiple of the node timeout.
#
# cluster-node-timeout 15000

# The cluster port is the port that the cluster bus will listen for inbound connections on. When set
# to the default value, 0, it will be bound to the command port + 10000. 
Setting this value requires
# you to specify the cluster bus port when executing cluster meet.
# cluster-port 0

# A replica of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to fail over, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to fail over
#    at all.
#
# Point "2" can be tuned by the user. 
Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to fail over if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large cluster-replica-validity-factor may allow replicas with data that is
# too old to fail over a master, while too small a value may prevent the cluster
# from being able to elect a replica at all.
#
# For maximum availability, it is possible to set the cluster-replica-validity-factor
# to a value of 0, which means that replicas will always try to fail over the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned masters, that is, masters
# that are left without working replicas. This improves the cluster's ability
# to resist failures, as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. 
It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value or
# set cluster-allow-replica-migration to 'no'.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# Turning off this option allows you to use less automatic cluster configuration.
# It both disables migration to orphaned masters and migration from masters
# that became empty.
#
# Default is 'yes' (allow automatic migrations).
#
# cluster-allow-replica-migration yes

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least a hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# is no longer covered) the whole cluster eventually becomes unavailable.
# It automatically becomes available again as soon as all the slots are covered.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to fail over their
# master during master failures. However the replica can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted except
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# This option, when set to yes, allows nodes to serve read traffic while the
# cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful for two cases.  
The first case is for when an application
# doesn't require consistency of data during node failures or network partitions.
# One example of this is a cache, where as long as the node has the data it
# should be able to serve it.
#
# The second use case is for configurations that don't meet the recommended
# three shards but want to enable cluster mode and scale later. Without this
# option set, a master outage in a 1 or 2 shard configuration causes a
# read/write outage for the entire cluster; with it set, there is only a write
# outage. Without a quorum of masters, slot ownership will not change
# automatically.
#
# cluster-allow-reads-when-down no

# This option, when set to yes, allows nodes to serve pubsub shard traffic while
# the cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful if the application would like to use the pubsub feature even when
# the cluster global stable state is not OK. If the application wants to make sure only
# one shard is serving a given channel, this feature should be kept as yes.
#
# cluster-allow-pubsubshard-when-down yes

# Cluster link send buffer limit is the limit on the memory usage of an individual
# cluster bus link's send buffer in bytes. Cluster links are freed if they exceed
# this limit. This is primarily to prevent send buffers from growing unbounded on links
# toward slow peers (e.g. PubSub messages being piled up).
# This limit is disabled by default. Enable this limit when the 'mem_cluster_links' INFO field
# and/or 'send-buffer-allocated' entries in the 'CLUSTER LINKS' command output continuously increase.
# A minimum limit of 1gb is recommended so that the cluster link buffer can fit at least a single
# PubSub message by default. (client-query-buffer-limit default value is 1gb)
#
# cluster-link-sendbuf-limit 0

# Clusters can configure their announced hostname using this config. 
This is a common use case for
# applications that need to use TLS Server Name Indication (SNI) or deal with DNS-based
# routing. By default this value is only shown as additional metadata in the CLUSTER SLOTS
# command, but can be changed using the 'cluster-preferred-endpoint-type' config. This value is
# communicated along the cluster bus to all nodes; setting it to an empty string will remove
# the hostname and also propagate the removal.
#
# cluster-announce-hostname ""

# Clusters can advertise how clients should connect to them using either their IP address,
# a user defined hostname, or by declaring they have no endpoint. Which endpoint is
# shown as the preferred endpoint is set by using the cluster-preferred-endpoint-type
# config with values 'ip', 'hostname', or 'unknown-endpoint'. This value controls
# the endpoint returned for MOVED/ASKING requests as well as the first field of CLUSTER SLOTS.
# If the preferred endpoint type is set to hostname, but no announced hostname is set, a '?'
# will be returned instead.
#
# When a cluster advertises itself as having an unknown endpoint, it's indicating that
# the server doesn't know how clients can reach the cluster. This can happen in certain
# networking situations where there are multiple possible routes to the node, and the
# server doesn't know which one the client took. 
In this case, the server is expecting
# the client to reach out on the same endpoint it used for making the last request, but use
# the port provided in the response.
#
# cluster-preferred-endpoint-type ip

# In order to set up your cluster make sure to read the documentation
# available at https://redis.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, Redis Cluster node address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following four options are used for this purpose, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-tls-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client ports (for connections
# without and with TLS) and cluster message bus port. The information is then
# published in the header of the bus packets so that other nodes will be able to
# correctly map the address of the node publishing the information.
#
# If cluster-tls is set to yes and cluster-announce-tls-port is omitted or set
# to zero, then cluster-announce-port refers to the TLS port. Note also that
# cluster-announce-tls-port has no effect if cluster-tls is set to no.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. 
If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-tls-port 6379
# cluster-announce-port 0
# cluster-announce-bus-port 6380

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and cannot serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# the execution time, in microseconds, that a command must exceed in order
# to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. 
Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET.\nslowlog-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The Redis latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a Redis instance.\n#\n# Via the LATENCY command this information is available to the user, who can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal to or\n# greater than the number of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact that, while very small, can be measured under heavy load. Latency\n# monitoring can easily be enabled at runtime using the command\n# \"CONFIG SET latency-monitor-threshold <milliseconds>\" if needed.\nlatency-monitor-threshold 0\n\n################################ LATENCY TRACKING ##############################\n\n# The Redis extended latency monitoring tracks the per-command latencies and enables\n# exporting the percentile distribution via the INFO latencystats command,\n# and cumulative latency distributions (histograms) via the LATENCY command.\n#\n# By default, the extended latency monitoring is enabled since the overhead\n# of keeping track of the command latency is very small.\n# latency-tracking yes\n\n# By default the exported latency percentiles via the INFO latencystats command\n# are the p50, p99, and p999.\n# latency-tracking-info-percentiles 50 99 99.9\n\n############################# EVENT NOTIFICATION ##############################\n\n# Redis can notify Pub/Sub clients about events happening in the key space.\n# This feature is documented 
at https://redis.io/topics/notifications\n#\n# For instance if keyspace events notification is enabled, and a client\n# performs a DEL operation on key \"foo\" stored in the Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that Redis will notify among a set\n# of classes. Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  n     New key events (Note: not included in the 'A' class)\n#  t     Stream commands\n#  d     Module key type events\n#  m     Key-miss events (Note: It is not included in the 'A' class)\n#  A     Alias for g$lshzxetd, so that the \"AKE\" string means all the events\n#        (Except key-miss events which are excluded from 'A' due to their\n#         unique nature).\n#\n#  The \"notify-keyspace-events\" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. 
Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events \"\"\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. These thresholds can be configured using the following directives.\nhash-max-listpack-entries 512\nhash-max-listpack-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-listpack-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:\n# 0: disable all list compression\n# 1: depth 1 means \"don't start compressing until after 1 node into the list,\n#    going from either the head or tail\"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding in just one case: when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit on the size of the\n# set in order to use this special memory-saving encoding.\nset-max-intset-entries 512\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-listpack-entries 128\nzset-max-listpack-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16-byte header. When a HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space-efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Streams macro node max size / items. 
The stream data structure is a radix\n# tree of big nodes that encode multiple items inside. Using this configuration\n# it is possible to configure how big a single node can be in bytes, and the\n# maximum number of items it may contain before switching to a new node when\n# appending new stream entries. If any of the following settings are set to\n# zero, the limit is ignored, so for instance it is possible to set just a\n# max entries limit by setting max-bytes to 0 and max-entries to the desired\n# value.\nstream-node-max-bytes 4096\nstream-node-max-entries 100\n\n# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in\n# order to help rehashing the main Redis hash table (the one mapping top-level\n# keys to values). The hash table implementation Redis uses (see dict.c)\n# performs a lazy rehashing: the more operations you run against a hash table\n# that is rehashing, the more rehashing \"steps\" are performed, so if the\n# server is idle the rehashing is never complete and some more memory is used\n# by the hash table.\n#\n# The default is to use this millisecond 10 times every second in order to\n# actively rehash the main dictionaries, freeing memory when possible.\n#\n# If unsure:\n# use \"activerehashing no\" if you have hard latency requirements and it is\n# not a good thing in your environment that Redis can reply from time to time\n# to queries with a 2 millisecond delay.\n#\n# use \"activerehashing yes\" if you don't have such hard requirements but\n# want to free memory as soon as possible.\nactiverehashing yes\n\n# The client output buffer limits can be used to force disconnection of clients\n# that are not reading data from the server fast enough for some reason (a\n# common reason is that a Pub/Sub client can't consume messages as fast as the\n# publisher can produce them).\n#\n# The limit can be set differently for the three different classes of clients:\n#\n# normal -> normal clients including MONITOR clients\n# replica -> replica 
clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reaches 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously exceeds\n# the limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than it can be read.\n#\n# Instead there is a default limit for pubsub and replica clients, since\n# subscribers and replicas receive data in a push fashion.\n#\n# Note that it doesn't make sense to set the replica clients' output buffer\n# limit lower than the repl-backlog-size config (partial sync will succeed\n# and then the replica will get disconnected).\n# Such a configuration is ignored (the size of repl-backlog-size will be used).\n# This doesn't have memory consumption implications since the replica client\n# will share the backlog buffer's memory.\n#\n# Both the hard and the soft limit can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit replica 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. 
They are limited to a fixed\n# amount by default in order to prevent a protocol desynchronization (for\n# instance due to a bug in the client) from leading to unbound memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such as huge multi/exec requests or the like.\n#\n# client-query-buffer-limit 1gb\n\n# In some scenarios client connections can hog memory leading to OOM\n# errors or data eviction. To avoid this we can cap the accumulated memory\n# used by all client connections (all pubsub and normal clients). Once we\n# reach that limit, connections will be dropped by the server, freeing up\n# memory. The server will attempt to drop the connections using the most\n# memory first. We call this mechanism \"client eviction\".\n#\n# Client eviction is configured using the maxmemory-clients setting as follows:\n# 0 - client eviction is disabled (default)\n#\n# A memory value can be used for the client eviction threshold,\n# for example:\n# maxmemory-clients 1g\n#\n# A percentage value (between 1% and 100%) means the client eviction threshold\n# is based on a percentage of the maxmemory setting. For example, to set client\n# eviction at 5% of maxmemory:\n# maxmemory-clients 5%\n\n# In the Redis protocol, bulk requests, that is, elements representing single\n# strings, are normally limited to 512 mb. However you can change this limit\n# here, but it must be 1mb or greater.\n#\n# proto-max-bulk-len 512mb\n\n# Redis calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but Redis checks for\n# tasks to perform according to the specified \"hz\" value.\n#\n# By default \"hz\" is set to 10. 
Raising the value will use more CPU when\n# Redis is idle, but at the same time will make Redis more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500; however, a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# Normally it is useful to have an HZ value which is proportional to the\n# number of clients connected. This is useful, for instance, to\n# avoid processing too many clients for each background task invocation\n# and thus avoid latency spikes.\n#\n# Since the default HZ value is conservatively set to 10, Redis\n# offers, and enables by default, the ability to use an adaptive HZ value\n# which will temporarily rise when there are many connected clients.\n#\n# When dynamic HZ is enabled, the actual configured HZ will be used\n# as a baseline, but multiples of the configured HZ value will actually be\n# used as needed once more clients are connected. In this way an idle\n# instance will use very little CPU time while a busy instance will be\n# more responsive.\ndynamic-hz yes\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 4 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# When redis saves an RDB file, if the following option is enabled\n# the file will be fsync-ed every 4 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\nrdb-save-incremental-fsync yes\n\n# Redis LFU eviction (see maxmemory setting) can be tuned. 
However, it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve performance and how the keys' LFU counters change over time,\n# which can be inspected via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the Redis LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key; its maximum value is 255, so Redis\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following 
commands:\n#\n#   redis-benchmark -n 1000000 incr foo\n#   redis-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be divided by two (or decremented if it has a value\n# less than or equal to 10).\n#\n# The default value for the lfu-decay-time is 1. A special value of 0 means to\n# decay the counter every time it happens to be scanned.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a Redis server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing memory to be reclaimed.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However, thanks to this feature\n# implemented by Oran Agra for Redis 4.0, this process can happen at runtime\n# in a \"hot\" way, while the server is running.\n#\n# Basically, when the fragmentation is over a certain level (see the\n# configuration options below) Redis will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys,\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. 
This feature is disabled by default, and only works if you compiled Redis\n#    to use the copy of Jemalloc we ship with the source code of Redis.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters are able to fine-tune the behavior of the\n# defragmentation process. If you are not sure about what they mean, it is\n# a good idea to leave the defaults untouched.\n\n# Active defragmentation is disabled by default\n# activedefrag no\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage, to be used when the lower\n# threshold is reached\n# active-defrag-cycle-min 1\n\n# Maximal effort for defrag in CPU percentage, to be used when the upper\n# threshold is reached\n# active-defrag-cycle-max 25\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n# Jemalloc background thread for purging will be enabled by default\njemalloc-bg-thread yes\n\n# It is possible to pin different threads and processes of Redis to specific\n# CPUs in your system, in order to maximize the performance of the server.\n# This is useful both to pin different Redis threads to different\n# CPUs, and to make sure that multiple Redis instances running\n# on the same host will be pinned to different CPUs.\n#\n# Normally you can do this using the \"taskset\" command, however it is also\n# possible to do this via Redis configuration 
directly, both in Linux and FreeBSD.\n#\n# You can pin the server/IO threads, bio threads, aof rewrite child process, and\n# the bgsave child process. The syntax to specify the cpu list is the same as\n# the taskset command:\n#\n# Set redis server/io threads to cpu affinity 0,2,4,6:\n# server_cpulist 0-7:2\n#\n# Set bio threads to cpu affinity 1,3:\n# bio_cpulist 1,3\n#\n# Set aof rewrite child process to cpu affinity 8,9,10,11:\n# aof_rewrite_cpulist 8-11\n#\n# Set bgsave child process to cpu affinity 1,10,11:\n# bgsave_cpulist 1,10-11\n\n# In some cases redis will emit warnings and even refuse to start if it detects\n# that the system is in a bad state. It is possible to suppress these warnings\n# by setting the following config, which takes a space-delimited list of warnings\n# to suppress.\n#\n# ignore-warnings ARM64-COW-BUG\n"
  },
  {
    "path": "aegir/conf/solr9/analysis-extras.mod",
    "content": "# Solr module: analysis-extras\nname=analysis-extras\nlib.dir=../../modules/analysis-extras/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/analytics.mod",
    "content": "# Solr module: analytics\nname=analytics\nlib.dir=../../modules/analytics/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/clustering.mod",
    "content": "# Solr module: clustering\nname=clustering\nlib.dir=../../modules/clustering/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/cross-dc.mod",
    "content": "# Solr module: cross-dc\nname=cross-dc\nlib.dir=../../modules/cross-dc/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/extraction.mod",
    "content": "# Solr module: extraction\nname=extraction\nlib.dir=../../modules/extraction/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/gcs-repository.mod",
    "content": "# Solr module: gcs-repository\nname=gcs-repository\nlib.dir=../../modules/gcs-repository/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/hadoop-auth.mod",
    "content": "# Solr module: hadoop-auth\nname=hadoop-auth\nlib.dir=../../modules/hadoop-auth/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/hdfs.mod",
    "content": "# Solr module: hdfs\nname=hdfs\nlib.dir=../../modules/hdfs/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/jaegertracer-configurator.mod",
    "content": "# Solr module: jaegertracer-configurator\nname=jaegertracer-configurator\nlib.dir=../../modules/jaegertracer-configurator/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/jwt-auth.mod",
    "content": "# Solr module: jwt-auth\nname=jwt-auth\nlib.dir=../../modules/jwt-auth/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/langid.mod",
    "content": "# Solr module: langid\nname=langid\nlib.dir=../../modules/langid/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/llm.mod",
    "content": "# Solr module: llm\nname=llm\nlib.dir=../../modules/llm/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/ltr.mod",
    "content": "# Solr module: ltr\nname=ltr\nlib.dir=../../modules/ltr/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/opentelemetry.mod",
    "content": "# Solr module: opentelemetry\nname=opentelemetry\nlib.dir=../../modules/opentelemetry/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/s3-repository.mod",
    "content": "# Solr module: s3-repository\nname=s3-repository\nlib.dir=../../modules/s3-repository/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/scripting.mod",
    "content": "# Solr module: scripting\nname=scripting\nlib.dir=../../modules/scripting/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/solr9/sql.mod",
    "content": "# Solr module: sql\nname=sql\nlib.dir=../../modules/sql/lib\nclass=org.apache.solr.core.SolrResourceLoader\n"
  },
  {
    "path": "aegir/conf/tpl/migration.html",
    "content": "<html>\n<head>\n<title>Server Migration</title>\n<style type=\"text/css\">\n<!--\nbody {\nbackground-color: #ffffff;\n}\n-->\n</style>\n</head>\n<body>\n<table border=\"0\" align=\"center\" cellpadding=\"0\" cellspacing=\"0\">\n<tr>\n<td width=\"800\" height=\"600\" align=\"center\" valign=\"middle\"><p class=\"style1\"><img src=\"/migration.jpg\" alt=\"Server Migration\" width=\"347\" height=\"346\" /></p>\n<h2>We are performing a server migration and will be back shortly</h2>\n</td>\n</tr>\n</table>\n</body>\n</html>\n"
  },
  {
    "path": "aegir/conf/tpl/robots.txt",
    "content": "#\n# robots.txt\n#\n# This file is to prevent the crawling and indexing of certain parts\n# of your site by web crawlers and spiders run by sites like Yahoo!\n# and Google. By telling these \"robots\" where not to go on your site,\n# you save bandwidth and server resources.\n#\n# This file will be ignored unless it is at the root of your host:\n# Used:    http://example.com/robots.txt\n# Ignored: http://example.com/site/robots.txt\n#\n# For more information about the robots.txt standard, see:\n# http://www.robotstxt.org/robotstxt.html\n#\n# For syntax checking, see:\n# http://www.sxw.org.uk/computing/robots/check.html\n\nUser-agent: *\nCrawl-delay: 10\n# CSS, JS, Images\nAllow: /misc/*.css$\nAllow: /misc/*.css?\nAllow: /misc/*.js$\nAllow: /misc/*.js?\nAllow: /misc/*.gif\nAllow: /misc/*.jpg\nAllow: /misc/*.jpeg\nAllow: /misc/*.png\nAllow: /modules/*.css$\nAllow: /modules/*.css?\nAllow: /modules/*.js$\nAllow: /modules/*.js?\nAllow: /modules/*.gif\nAllow: /modules/*.jpg\nAllow: /modules/*.jpeg\nAllow: /modules/*.png\nAllow: /profiles/*.css$\nAllow: /profiles/*.css?\nAllow: /profiles/*.js$\nAllow: /profiles/*.js?\nAllow: /profiles/*.gif\nAllow: /profiles/*.jpg\nAllow: /profiles/*.jpeg\nAllow: /profiles/*.png\nAllow: /themes/*.css$\nAllow: /themes/*.css?\nAllow: /themes/*.js$\nAllow: /themes/*.js?\nAllow: /themes/*.gif\nAllow: /themes/*.jpg\nAllow: /themes/*.jpeg\nAllow: /themes/*.png\n# Directories\nDisallow: /includes/\nDisallow: /misc/\nDisallow: /modules/\nDisallow: /profiles/\nDisallow: /scripts/\nDisallow: /themes/\n# Files\nDisallow: /boost_stats.php\nDisallow: /CHANGELOG.txt\nDisallow: /cron.php\nDisallow: /INSTALL.mysql.txt\nDisallow: /INSTALL.pgsql.txt\nDisallow: /INSTALL.sqlite.txt\nDisallow: /install.php\nDisallow: /INSTALL.txt\nDisallow: /LICENSE.txt\nDisallow: /MAINTAINERS.txt\nDisallow: /update.php\nDisallow: /UPGRADE.txt\nDisallow: /xmlrpc.php\n# Paths (clean URLs)\nDisallow: /admin/\nDisallow: /comment/reply/\nDisallow: 
/filter/tips/\nDisallow: /node/add/\nDisallow: /search/\nDisallow: /user/register/\nDisallow: /user/password/\nDisallow: /user/login/\nDisallow: /user/logout/\n# Paths (no clean URLs)\nDisallow: /?q=admin/\nDisallow: /?q=comment/reply/\nDisallow: /?q=filter/tips/\nDisallow: /?q=node/add/\nDisallow: /?q=search/\nDisallow: /?q=user/password/\nDisallow: /?q=user/register/\nDisallow: /?q=user/login/\nDisallow: /?q=user/logout/\n"
  },
  {
    "path": "aegir/conf/tpl/setupmail.txt",
    "content": "Hello,\n\nWelcome to your new Ægir control panel, designed for easy Drupal multi-site deployment, development, and management.\n\nYour Ægir control panel [version boa.version] is available at:\n\nhttps://aegir.url.name\n\nThis Email Covers:\n1. Logging into your Ægir control panel\n2. Deploying Ægir default websites\n3. Adding modules & themes\n4. Managing your databases\n5. Advanced user information\n6. Articles and video tutorials\n\nPlease read this email thoroughly. It contains important information required to properly leverage all your available Ægir features.\n\n----------------------------------------\n1. LOGGING INTO YOUR AEGIR CONTROL PANEL\n----------------------------------------\n\nTo access your control panel, visit this URL: https://aegir.url.name\n\nIf your account has been migrated, use your previous username and password, or reset your password using your email address as your username at: https://aegir.url.name/user/password\n\nPlease double-check your spam folder to ensure all emails are delivered.\n\n----------------------------------------\n2. DEPLOYING YOUR WEBSITES\n----------------------------------------\n\nLog into the control panel and start exploring how Ægir works. We are ready to assist and guide you step by step, so please don’t hesitate to ask questions!\n\nTo create a new site:\n1. Click the Add Site tab.\n2. 
After adding a site, click the Home icon on the site's node in Ægir to access the admin area.\n\nIf the Home icon no longer links to the one-time login page, run the \"Reset password\" task on the site's node, and once complete, click the Home icon again.\n\nFor more details on site import and platform management, refer to:\n- Import Your Sites to Ægir: https://omega8.cc/import-your-sites-to-aegir-in-8-easy-steps-109\n- Add Custom Platform: https://omega8.cc/how-to-add-custom-platform-properly-140\n- Drupal Site Upgrade Workflow: https://omega8.cc/your-drupal-site-upgrade-safe-workflow-298\n\nTo make a site \"live\" using any domain name, point its A or CNAME DNS record to your Ægir instance public IP address:\n\nyourdomain.com.           IN  A      166.84.6.231\nsubdomain.yourdomain.com. IN  CNAME  aegir.url.name.\n\nFor test sites, use any subdomain in *.aegir.url.name, e.g., http://atrium.aegir.url.name.\n\nNeed assistance with site import? Contact us: https://omega8.cc/contact\n\n----------------------------------------\n3. ADDING MODULES & THEMES\n----------------------------------------\n\nTo add modules/themes:\n1. Log into your FTPS/SSH/SFTP account:\n\nhost: aegir.url.name\nuser: dragon.ftp\npass: FN8rXcQn\nport: 21 (FTPS)\nport: 22 (SSH/SFTP)\n\n2. Type \"help\" when logged in via SSH to see all available shell commands.\n3. Change your password via SSH with the \"passwd\" command every 3 months.\n\nNote: Use Explicit TLS mode with port 21 for FTPS and port 22 for SFTP (unless your Ægir instance uses a non-standard SSH port).\n\nRefer to Compatible FTP-SSL/TLS Clients: https://omega8.cc/dev/ftp-tls.txt for more information.\n\n----------------------------------------\n4. 
MANAGING YOUR DATABASES\n----------------------------------------\n\nManage your databases via the Adminer Manager web interface, using credentials available in each site's drushrc.php file:\n\nAdminer Manager URL: https://aegir.url.name/sqladmin/\n\nNote: Keep the SSH session active with a continuous command (e.g., ping -i 30 google.com) to maintain database access.\n\nUse a desktop SQL manager that supports SSH tunneling, as remote access over MySQL port 3306 is not available for security reasons. For a video tutorial, visit: http://bit.ly/om8rsql\n\nYou can also manage databases via command line with Drush commands or tools like mysql and mysqldump.\n\n----------------------------------------\n5. ADVANCED USER INFORMATION\n----------------------------------------\n\nHow-To Information: Check the built-in docs in your account at ~/static/control/README.txt.\n\nDirectory Information:\n- Your home directory contains subdirectories in ~/platforms for different platform releases.\n- Use symlinks in ~/clients/client-name/ to find all your sites directly.\n\nCustom Platform Information:\n- Upload custom Drupal platforms to ~/static/platforms in separate subdirectories.\n- Enable custom platforms via the \"Add platform\" option in your Ægir control panel.\n\nNote: Only Pressflow (LTS) core-based platforms are allowed for Drupal 6.x versions; standard Drupal core can be used for Drupal 7 and newer versions.\n\n----------------------------------------\n6. 
ARTICLES & VIDEO TUTORIALS\n----------------------------------------\n\nVideo Tutorials: http://bit.ly/aegir8cc\n\nSite Import & Development:\n- Development Library: https://learn.omega8.cc/library/development\n- Good to Know: https://learn.omega8.cc/library/good-to-know\n\nPerformance Information:\n- Performance Library: https://learn.omega8.cc/library/performance\n- Tips & Tricks: https://learn.omega8.cc/library/tips-and-tricks\n\nUseful Hints: Problems & Solutions: https://learn.omega8.cc/library/problems-solutions\n\nRecommended Articles:\n- Biggest Misunderstanding Ever: https://learn.omega8.cc/the-biggest-misunderstanding-ever-122\n- Best Recipes for Disaster: https://learn.omega8.cc/the-best-recipes-for-disaster-139\n- Good Habits to Learn: https://learn.omega8.cc/are-there-any-specific-good-habits-to-learn-116\n\nFor further assistance, contact us: https://omega8.cc/contact\n\nThank you,\nThe Omega8.cc Team\n\n"
  },
  {
    "path": "aegir/conf/tpl/uc.html",
    "content": "<html>\n<head>\n<title>Under Construction</title>\n<style type=\"text/css\">\n<!--\nbody {\nbackground-color: #000000;\n}\n-->\n</style>\n</head>\n<body>\n<table border=\"0\" align=\"center\" cellpadding=\"0\" cellspacing=\"0\">\n<tr>\n<td  width=\"800\" height=\"600\" align=\"center\" valign=\"middle\"><p class=\"style1\"><img src=\"/under_construction.jpg\" alt=\"Under Construction\" width=\"500\" height=\"300\" /></p>\n</td>\n</tr>\n</table>\n</body>\n</html>\n"
  },
  {
    "path": "aegir/conf/tpl/upgrademail.txt",
    "content": "Hello,\n\nWe are pleased to inform you that your Ægir instance has been successfully upgraded to our new HTTP/3 Edition [boa.version]\n\n@=> The future of 100% Open Source Drupal hosting is brighter than ever!\n\nWith BOA-5.9.1 PRO/LTS, we proudly deliver full HTTP/3 and KTLS support — a fundamental change in the way modern browsers communicate with modern HTTPS web servers — along with the latest OpenSSL 3.5 LTS, which made it possible, and many critical security and bug fixes related to system components.\n\n@=> Key Improvements Explained\n\nHTTP/3 and KTLS support. If you run Drupal sites that should feel fast and responsive (and stay that way during spikes), this is genuinely good news. Why is this a big deal? What should visitors notice?\n\n  Read the full story: https://github.com/omega8cc/boa/tree/5.x-dev/HTTP3.md\n\n@=> Usage disk/sql limits x2 + Aero and Archive plans added to hosted BOA\n\nIt's worth mentioning that our hosted BOA plans have received a huge upgrade: several new locations have been added around the world, our vendors are now local (instead of the previous US-only hyperscalers), and an entirely new Archive Tier has been added for those looking to host collections of low-traffic sites at low cost.\n\n  Take a look if you are interested: https://omega8.cc/hosted\n\n@=> Going Local with Infrastructure\n\nWe’ve expanded our network considerably to meet the growing expectations of the Data Sovereignty movement. This isn’t just about adding more cities to our hosting map — it’s also about going local with infrastructure wherever we can. We no longer rely solely on big-name vendors and hyperscalers. Instead, we’re gradually migrating to local providers and data centers in every country where we offer hosted BOA for Drupal.\n\nFor example, in Canada you can now choose not only Toronto, but also Montreal, Calgary, and Vancouver. In Australia, it’s no longer just Sydney — we also offer Adelaide, Brisbane, and Perth. 
We’ve also added an excellent facility in New Zealand. Of course, we continue to support our original Singapore location and still offer EU, UK, and US options.\n\n@=> 4 NEW, 12 UPDATED, 32 TOTAL Drupal distros/platforms available\n\nWhile most of you typically build your own codebases/platforms with Composer these days, we still deliver a list of 32 platforms ready to use in your Ægir.\n\nSince these platforms are updated only with BOA releases, they are not really intended for production use per se, because you typically need a faster lifecycle to keep your sites secure.\n\nHowever, they provide a wide range of testing playgrounds, because you can install only those you wish to test or use, and reinstall if needed, with the help of our BOA-only feature that allows you to upgrade your Ægir on demand with two simple control files, as described in the built-in docs you can always find in ~/static/control/README.txt.\n\nThe complete list of 32 can be found at:\n\n  https://github.com/omega8cc/boa/blob/5.x-dev/docs/PLATFORMS.md\n\n@=> You can access your upgraded Ægir instance at the following URL:\n\n  https://aegir.url.name\n\n@=> For detailed information about the upgrade, please refer to:\n\n  BOA Changelog: https://bit.ly/boa-changelog\n\n@=> Please check also the built-in documentation in your account:\n\n  ~/static/control/README.txt\n\nThank you for choosing Ægir!\n\nBest regards,\nThe BOA Dev Team\n"
  },
  {
    "path": "aegir/conf/valkey/valkey-server",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:\t\tvalkey-server\n# Required-Start:\t$syslog $remote_fs\n# Required-Stop:\t$syslog $remote_fs\n# Should-Start:\t\t$local_fs\n# Should-Stop:\t\t$local_fs\n# Default-Start:\t2 3 4 5\n# Default-Stop:\t\t0 1 6\n# Short-Description:    valkey-server - Persistent key-value db\n# Description:          valkey-server - Persistent key-value db\n### END INIT INFO\n\nPATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\nDAEMON=/usr/bin/valkey-server\nDAEMON_ARGS=/etc/valkey/valkey.conf\nNAME=valkey-server\nDESC=valkey-server\nPIDFILE=/run/valkey/valkey.pid\n\ntest -x $DAEMON || exit 0\n\n[ -d /run/valkey ] || mkdir -p /run/valkey\n[ -d /run/valkey ] && chown -R valkey:valkey /run/valkey\n\nmaxclients=$(awk '/^[ \\t]*maxclients[ \\t]/ { print $2 }' /etc/valkey/valkey.conf)\nif [ ! -z \"$maxclients\" ] && [ \"$maxclients\" -gt 992 ]; then\n  ulimit -n $((maxclients+32))\nfi\n\ncase \"$1\" in\n  start)\n\techo -n \"Starting $DESC: \"\n\ttouch $PIDFILE\n\tchown valkey:valkey $PIDFILE\n\tif start-stop-daemon --start --quiet --umask 007 --pidfile $PIDFILE --chuid valkey:valkey --exec $DAEMON -- $DAEMON_ARGS\n\tthen\n\t\techo \"$NAME.\"\n\telse\n\t\techo \"failed\"\n\tfi\n\t;;\n  stop)\n\techo -n \"Stopping $DESC: \"\n\tif start-stop-daemon --stop --retry 8 --quiet --oknodo --pidfile $PIDFILE --exec $DAEMON\n\tthen\n\t\techo \"$NAME.\"\n\telse\n\t\techo \"failed\"\n\tfi\n\trm -f $PIDFILE\n\tsleep 1\n\t;;\n\n  restart|force-reload)\n\t${0} stop\n\t${0} start\n\t;;\n\n  reload)\n    echo -n \"Reloading service ${NAME}...\"\n    if [ ! 
-r ${PIDFILE} ]; then\n      echo \"warning, no pid file found - ${NAME} is not running?\"\n      exit 1\n    fi\n    kill -USR2 `cat ${PIDFILE}`\n    echo \" done\"\n\t;;\n\n  status)\n\techo -n \"$DESC is \"\n\tif start-stop-daemon --stop --quiet --signal 0 --name ${NAME} --pidfile ${PIDFILE}\n\tthen\n\t\techo \"running\"\n\telse\n\t\techo \"not running\"\n\t\texit 1\n\tfi\n\t;;\n\n  *)\n\techo \"Usage: service $NAME {start|stop|status|restart|reload|force-reload}\" >&2\n\texit 1\n\t;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/conf/valkey/valkey7.conf",
    "content": "# Valkey configuration file example.\n#\n# Note that in order to read the configuration file, the server must be\n# started with the file path as first argument:\n#\n# ./valkey-server /path/to/valkey.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Note that option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Sentinel. Since the server always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# Included paths may contain wildcards. All files matching the wildcards will\n# be included in alphabetical order.\n# Note that if an include path contains a wildcards but no files match it when\n# the server is started, the include statement will be ignored and no error will\n# be emitted.  It is safe, therefore, to include wildcard files from empty\n# directories.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n# include /path/to/fragments/*.conf\n#\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, the server listens\n# for connections from all available network interfaces on the host machine.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n# Each address can be prefixed by \"-\", which means that the server will not fail to\n# start if the address is not available. Being not available only refers to\n# addresses that do not correspond to any network interface. Addresses that\n# are already in use will always fail, and unsupported protocols will always be\n# silently skipped.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses\n# xbind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6\n# xbind * -::*                     # like the default, all available interfaces\n#\n# ~~~ WARNING ~~~ If the computer running the server is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. 
So by default we uncomment the\n# following bind directive, which will force the server to listen only on the\n# IPv4 and IPv6 (if available) loopback interface addresses (this means the server\n# will only be able to accept client connections from the same host that it is\n# running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# COMMENT OUT THE FOLLOWING LINE.\n#\n# You will also need to set a password unless you explicitly disable protected\n# mode.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# By default, outgoing connections (from replica to master, from Sentinel to\n# instances, cluster bus, etc.) are not bound to a specific local address. In\n# most cases, this means the operating system will handle that based on routing\n# and the interface through which the connection goes out.\n#\n# Using bind-source-addr it is possible to configure a specific address to bind\n# to, which may also affect how the connection gets routed.\n#\n# Example:\n#\n# bind-source-addr 10.0.0.1\n\n# Protected mode is a layer of security protection, intended to prevent\n# server instances left open on the internet from being accessed and exploited.\n#\n# When protected mode is on and the default user has no password, the server\n# only accepts local connections from the IPv4 address (127.0.0.1), IPv6 address\n# (::1) or Unix domain sockets.\n#\n# By default protected mode is enabled. You should disable it only if\n# you are sure you want clients from other hosts to connect to the server\n# even if no authentication is configured.\nprotected-mode yes\n\n# The server uses default hardened security configuration directives to reduce the\n# attack surface on innocent users. 
Therefore, several sensitive configuration\n# directives are immutable, and some potentially-dangerous commands are blocked.\n#\n# Configuration directives that control files that the server writes to (e.g., 'dir'\n# and 'dbfilename') and that aren't usually modified during runtime\n# are protected by making them immutable.\n#\n# Commands that can increase the attack surface of the server and that aren't usually\n# called by users are blocked by default.\n#\n# These can be exposed to either all connections or just local ones by setting\n# each of the configs listed below to either of these values:\n#\n# no    - Block for any connection (remain immutable)\n# yes   - Allow for any connection (no protection)\n# local - Allow only for local connections. Ones originating from the\n#         IPv4 address (127.0.0.1), IPv6 address (::1) or Unix domain sockets.\n#\n# enable-protected-configs no\n# enable-debug-command no\n# enable-module-command no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified the server will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow clients connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so the server will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/valkey/valkey.sock\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 900\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. 
This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Force network equipment in the middle to consider the connection to be\n#    alive.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection the double of the time is needed.\n# On other kernels the period depends on the kernel configuration.\ntcp-keepalive 300\n\n# Apply OS-specific mechanism to mark the listening socket with the specified\n# ID, to support advanced routing and filtering capabilities.\n#\n# On Linux, the ID represents a connection mark.\n# On FreeBSD, the ID represents a socket cookie ID.\n# On OpenBSD, the ID represents a route table ID.\n#\n# The default value is 0, which implies no marking is required.\n# socket-mark-id 0\n\n################################# TLS/SSL #####################################\n\n# By default, TLS/SSL is disabled. To enable it, the \"tls-port\" configuration\n# directive can be used to define TLS-listening ports. To enable TLS on the\n# default port, use:\n#\n# port 0\n# tls-port 6379\n\n# Configure a X.509 certificate and private key to use for authenticating the\n# server to connected clients, masters or cluster peers.  These files should be\n# PEM formatted.\n#\n# tls-cert-file valkey.crt\n# tls-key-file valkey.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-key-file-pass secret\n\n# Normally the server uses the same certificate for both server functions (accepting\n# connections) and client functions (replicating from a master, establishing\n# cluster bus connections, etc.).\n#\n# Sometimes certificates are issued with attributes that designate them as\n# client-only or server-only certificates. In that case it may be desired to use\n# different certificates for incoming (server) and outgoing (client)\n# connections. 
To do that, use the following directives:\n#\n# tls-client-cert-file client.crt\n# tls-client-key-file client.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-client-key-file-pass secret\n\n# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange,\n# required by older versions of OpenSSL (<3.0). Newer versions do not require\n# this configuration and recommend against it.\n#\n# tls-dh-params-file valkey.dh\n\n# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL\n# clients and peers. The server requires an explicit configuration of at least one\n# of these, and will not implicitly use the system wide configuration.\n#\n# tls-ca-cert-file ca.crt\n# tls-ca-cert-dir /etc/ssl/certs\n\n# By default, clients (including replica servers) on a TLS port are required\n# to authenticate using valid client side certificates.\n#\n# If \"no\" is specified, client certificates are not required and not accepted.\n# If \"optional\" is specified, client certificates are accepted and must be\n# valid if provided, but are not required.\n#\n# tls-auth-clients no\n# tls-auth-clients optional\n\n# By default, a replica does not attempt to establish a TLS connection\n# with its master.\n#\n# Use the following directive to enable TLS on replication links.\n#\n# tls-replication yes\n\n# By default, the cluster bus uses a plain TCP connection. 
To enable\n# TLS for the bus protocol, use the following directive:\n#\n# tls-cluster yes\n\n# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended\n# that older formally deprecated versions are kept disabled to reduce the attack surface.\n# You can explicitly specify TLS versions to support.\n# Allowed values are case insensitive and include \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\",\n# \"TLSv1.3\" (OpenSSL >= 1.1.1) or any combination.\n# To enable only TLSv1.2 and TLSv1.3, use:\n#\n# tls-protocols \"TLSv1.2 TLSv1.3\"\n\n# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information\n# about the syntax of this string.\n#\n# Note: this configuration applies only to <= TLSv1.2.\n#\n# tls-ciphers DEFAULT:!MEDIUM\n\n# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more\n# information about the syntax of this string, and specifically for TLSv1.3\n# ciphersuites.\n#\n# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256\n\n# When choosing a cipher, use the server's preference instead of the client\n# preference. By default, the server follows the client's preference.\n#\n# tls-prefer-server-ciphers yes\n\n# By default, TLS session caching is enabled to allow faster and less expensive\n# reconnections by clients that support it. Use the following directive to disable\n# caching.\n#\n# tls-session-caching no\n\n# Change the default number of TLS sessions cached. A zero value sets the cache\n# to unlimited size. The default size is 20480.\n#\n# tls-session-cache-size 5000\n\n# Change the default timeout of cached TLS sessions. The default timeout is 300\n# seconds.\n#\n# tls-session-cache-timeout 60\n\n################################# GENERAL #####################################\n\n# By default the server does not run as a daemon. 
Use 'yes' if you need it.\n# Note that the server will write a pid file in /run/valkey/valkey.pid when daemonized.\n# When the server is supervised by upstart or systemd, this parameter has no impact.\ndaemonize yes\n\n# If you run the server from upstart or systemd, the server can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting the server into SIGSTOP mode\n#                        requires \"expect stop\" in your upstart job config\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#                        on startup, and updating the server status on a regular\n#                        basis.\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous pings back to your supervisor.\n#\n# The default is \"no\". To run under upstart/systemd, you can simply uncomment\n# the line below:\n#\n# supervised auto\n\n# If a pid file is specified, the server writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. 
When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/valkey/valkey.pid\".\n#\n# Creating a pid file is best effort: if the server is not able to create it\n# nothing bad happens, the server will start and run normally.\n#\n# Note that on modern Linux systems \"/run/valkey/valkey.pid\" is more conforming\n# and should be used instead.\npidfile /run/valkey/valkey.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (many rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\n# nothing (nothing is logged)\nloglevel warning\n\n# Specify the log file name. Also the empty string can be used to force\n# the server to log on the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile /var/log/valkey/valkey-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident valkey\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# To disable the built in crash log, which will possibly produce cleaner core\n# dumps when they are needed, uncomment the following:\n#\n# crash-log-enabled no\n\n# To disable the fast memory check that's run as part of the crash log, which\n# will possibly let the server terminate sooner, uncomment the following:\n#\n# crash-memcheck-enabled no\n\n# Set the number of databases. 
The default database is DB 0; you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 8\n\n# By default the server shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY and syslog logging is\n# disabled. Basically this means that normally a logo is displayed only in\n# interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo no\n\n# By default, the server modifies the process title (as seen in 'top' and 'ps') to\n# provide some runtime information. It is possible to disable this and leave\n# the process name as executed by setting the following to no.\nset-proc-title yes\n\n# When changing the process title, the server uses the following template to construct\n# the modified title.\n#\n# Template variables are specified in curly brackets. The following variables are\n# supported:\n#\n# {title}           Name of process as executed if parent, or type of child process.\n# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or\n#                   Unix socket if only that's available.\n# {server-mode}     Special mode, i.e. \"[sentinel]\" or \"[cluster]\".\n# {port}            TCP port listening on, or 0.\n# {tls-port}        TLS port listening on, or 0.\n# {unixsocket}      Unix domain socket listening on, or \"\".\n# {config-file}     Name of configuration file used.\n#\nproc-title-template \"{title} {listen-addr} {server-mode}\"\n\n# Set the local environment which is used for string comparison operations, and\n# also affects the performance of Lua scripts. 
An empty string indicates the locale\n# is derived from the environment variables.\nlocale-collate \"\"\n\n################################ SNAPSHOTTING  ################################\n\n# Save the DB to disk.\n#\n# save <seconds> <changes> [<seconds> <changes> ...]\n#\n# The server will save the DB if the given number of seconds elapsed and it\n# surpassed the given number of write operations against the DB.\n#\n# Snapshotting can be completely disabled with a single empty string argument\n# as in the following example:\n#\n# save \"\"\n#\n# Unless specified otherwise, by default the server will save the DB:\n#   * After 3600 seconds (an hour) if at least 1 change was performed\n#   * After 300 seconds (5 minutes) if at least 100 changes were performed\n#   * After 60 seconds if at least 10000 changes were performed\n#\n# You can set these explicitly by uncommenting the following line.\n#\n# save 3600 1 300 100 60 10000\n\n# By default the server will stop accepting writes if RDB snapshots are enabled\n# (at least one save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again, the server will\n# automatically allow writes again.\n#\n# However, if you have set up proper monitoring of the server\n# and persistence, you may want to disable this feature so that the server will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default compression is enabled as it's almost always a win.\n# If you want to save some CPU in the saving child set it to 'no' but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 
checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# Enables or disables full sanitization checks for ziplist and listpack etc when\n# loading an RDB or RESTORE payload. This reduces the chances of an assertion or\n# crash later on while processing commands.\n# Options:\n#   no         - Never perform full sanitization\n#   yes        - Always perform full sanitization\n#   clients    - Perform full sanitization only for user connections.\n#                Excludes: RDB files, RESTORE commands received from the master\n#                connection, and client connections which have the\n#                skip-sanitize-payload ACL flag.\n# The default should be 'clients' but since it currently affects cluster\n# resharding via MIGRATE, it is temporarily set to 'no' by default.\n#\n# sanitize-dump-payload no\n\n# The filename where the DB will be dumped\ndbfilename dump.rdb\n\n# Remove RDB files used by replication in instances without persistence\n# enabled. By default this option is disabled, however there are environments\n# where for regulations or other security concerns, RDB files persisted on\n# disk by masters in order to feed replicas, or stored on disk by replicas\n# in order to load them for the initial synchronization, should be deleted\n# ASAP. Note that this option ONLY WORKS in instances that have both AOF\n# and RDB persistence disabled, otherwise it is completely ignored.\n#\n# An alternative (and sometimes better) way to obtain the same effect is\n# to use diskless replication on both master and replica instances. 
However\n# in the case of replicas, diskless is not always an option.\nrdb-del-sync-files no\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir /var/lib/valkey/\n\n################################# REPLICATION #################################\n\n# Master-Replica replication. Use replicaof to make a server a copy of\n# another server. A few things to understand ASAP about replication.\n#\n#   +------------------+      +---------------+\n#   |      Master      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears to be not connected with at least\n#    a given number of replicas.\n# 2) Replicas are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. 
After a\n#    network partition replicas automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# replicaof <masterip> <masterport>\n\n# If the master is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the replica request.\n#\n# masterauth <master-password>\n#\n# However this is not enough if you are using ACLs\n# and the default user is not capable of running the PSYNC\n# command and/or other commands needed for replication. In this case it's\n# better to configure a special user to use with replication, and specify the\n# masteruser configuration as such:\n#\n# masteruser <username>\n#\n# When masteruser is specified, the replica will authenticate against its\n# master using the new AUTH form: AUTH <username> <password>.\n\n# When a replica loses its connection with the master, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) If replica-serve-stale-data is set to 'no' the replica will reply with error\n#    \"MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'\"\n#    to all data access commands, excluding commands such as:\n#    INFO, REPLICAOF, AUTH, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,\n#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,\n#    HOST and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. 
Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# By default, replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# New replicas and reconnecting replicas that are not able to continue the\n# replication process just receiving differences, need to do what is called a\n# \"full synchronization\". An RDB file is transmitted from the master to the\n# replicas.\n#\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The master creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The master creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child\n# producing the RDB file finishes its work. 
With diskless replication instead\n# once the transfer starts, new replicas arriving will be queued and a new\n# transfer will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple\n# replicas will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync yes\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new replicas arriving, that will be queued for the next RDB transfer, so the\n# server waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# When diskless replication is enabled with a delay, it is possible to let\n# the replication start before the maximum delay is reached if the maximum\n# number of replicas expected have connected. Default of 0 means that the\n# maximum is not defined and the server will wait the full delay.\nrepl-diskless-sync-max-replicas 0\n\n# -----------------------------------------------------------------------------\n# WARNING: Since in this setup the replica does not immediately store an RDB on\n# disk, it may cause data loss during failovers. 
RDB diskless load + server\n# modules not handling I/O reads may cause the server to abort in case of I/O errors\n# during the initial synchronization stage with the master.\n# -----------------------------------------------------------------------------\n#\n# A replica can load the RDB it reads from the replication link directly from the\n# socket, or store the RDB to a file and read that file after it was completely\n# received from the master.\n#\n# In many cases the disk is slower than the network, and storing and loading\n# the RDB file may increase replication time (and even increase the master's\n# Copy on Write memory and replica buffers).\n# However, when parsing the RDB file directly from the socket, in order to avoid\n# data loss it's only safe to flush the current dataset when the new dataset is\n# fully loaded in memory, resulting in higher memory usage.\n# For this reason we have the following options:\n#\n# \"disabled\"    - Don't use diskless load (store the rdb file to the disk first)\n# \"swapdb\"      - Keep current db contents in RAM while parsing the data directly\n#                 from the socket. Replicas in this mode can keep serving current\n#                 dataset while replication is in progress, except for cases where\n#                 they can't recognize master as having a data set from same\n#                 replication history.\n#                 Note that this requires sufficient memory, if you don't have it,\n#                 you risk an OOM kill.\n# \"on-empty-db\" - Use diskless load only when current dataset is empty. This is\n#                 safer and avoids having old and new dataset loaded side by side\n#                 during replication.\nrepl-diskless-load disabled\n\n# The master sends PINGs to its replicas at a predefined interval. It's possible to\n# change this interval with the repl_ping_replica_period option. 
The default\n# value is 10 seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the replica. The default\n# value is 60 seconds.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select \"yes\", the server will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and replicas are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. 
The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a\n# replica wants to reconnect again, often a full resync is not needed, but a\n# partial resync is enough, just passing the portion of data the replica\n# missed while disconnected.\n#\n# The bigger the replication backlog, the longer the replica can endure the\n# disconnect and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated if there is at least one replica connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no connected replicas for some time, the backlog will be\n# freed. The following option configures the amount of seconds that need to\n# elapse, starting from the time the last replica disconnected, for the backlog\n# buffer to be freed.\n#\n# Note that replicas never free the backlog for timeout, since they may be\n# promoted to masters later, and should be able to correctly \"partially\n# resynchronize\" with other replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The replica priority is an integer number published by the server in the INFO\n# output. 
It is used by Sentinel in order to select a replica to promote\n# into a master if the master is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel\n# will pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of master, so a replica with priority of 0 will never be selected by\n# Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# The propagation error behavior controls how the server will behave when it is\n# unable to handle a command being processed in the replication stream from a master\n# or processed while reading from an AOF file. Errors that occur during propagation\n# are unexpected, and can cause data inconsistency.\n#\n# If an application wants to ensure there is no data divergence, this configuration\n# should be set to 'panic' instead. The value can also be set to 'panic-on-replicas'\n# to only panic when a replica encounters an error on the replication stream. One of\n# these two panic values will become the default value in the future once there are\n# sufficient safety mechanisms in place to prevent false positive crashes.\n#\n# propagation-error-behavior ignore\n\n# Replica ignore disk write errors controls the behavior of a replica when it is\n# unable to persist a write command received from its master to disk. By default,\n# this configuration is set to 'no' and will crash the replica in this condition.\n# It is not recommended to change this default.\n#\n# replica-ignore-disk-write-errors no\n\n# -----------------------------------------------------------------------------\n# By default, Sentinel includes all replicas in its reports. A replica\n# can be excluded from Sentinel's announcements. 
An unannounced replica\n# will be ignored by the 'sentinel replicas <master>' command and won't be\n# exposed to Sentinel's clients.\n#\n# This option does not change the behavior of replica-priority. Even with\n# replica-announced set to 'no', the replica can be promoted to master. To\n# prevent this behavior, set replica-priority to 0.\n#\n# replica-announced yes\n\n# It is possible for a master to stop accepting writes if there are less than\n# N replicas connected, having a lag less or equal than M seconds.\n#\n# The N replicas need to be in \"online\" state.\n#\n# The lag in seconds, that must be <= the specified value, is calculated from\n# the last ping received from the replica, that is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough replicas\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 replicas with a lag <= 10 seconds use:\n#\n# min-replicas-to-write 3\n# min-replicas-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-replicas-to-write is set to 0 (feature disabled) and\n# min-replicas-max-lag is set to 10.\n\n# A master is able to list the address and port of the attached\n# replicas in different ways. 
For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Sentinel in order to discover replica instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a master.\n#\n# The listed IP address and port normally reported by a replica are\n# obtained in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the replica to connect with the master.\n#\n#   Port: The port is communicated by the replica during the replication\n#   handshake, and is normally the port that the replica is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the replica may actually be reachable via different IP and port\n# pairs. The following two options can be used by a replica in order to\n# report to its master a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both the options if you need to override just\n# the port or the IP address.\n#\n# replica-announce-ip 5.5.5.5\n# replica-announce-port 1234\n\n############################### KEYS TRACKING #################################\n\n# The client side caching of values is assisted via server-side support.\n# This is implemented using an invalidation table that remembers, using\n# a radix tree indexed by key name, what clients have which keys. In turn\n# this is used in order to send invalidation messages to clients. Please\n# check this page to understand more about the feature:\n#\n#   https://valkey.io/topics/client-side-caching\n#\n# When tracking is enabled for a client, all the read only queries are assumed\n# to be cached: this will force the server to store information in the invalidation\n# table. When keys are modified, such information is flushed away, and\n# invalidation messages are sent to the clients. 
However if the workload is\n# heavily dominated by reads, the server could use more and more memory in order\n# to track the keys fetched by many clients.\n#\n# For this reason it is possible to configure a maximum fill value for the\n# invalidation table. By default it is set to 1M of keys, and once this limit\n# is reached, the server will start to evict keys in the invalidation table\n# even if they were not modified, just to reclaim memory: this will in turn\n# force the clients to invalidate the cached values. Basically the table\n# maximum size is a trade off between the memory you want to spend server\n# side to track information about who cached what, and the ability of clients\n# to retain cached objects in memory.\n#\n# If you set the value to 0, it means there are no limits, and the server will\n# retain as many keys as needed in the invalidation table.\n# In the \"stats\" INFO section, you can find information about the number of\n# keys in the invalidation table at every given moment.\n#\n# Note: when key tracking is used in broadcasting mode, no memory is used\n# in the server side so this setting is useless.\n#\n# tracking-table-max-keys 1000000\n\n################################## SECURITY ###################################\n\n# Warning: since the server is pretty fast, an outside user can try up to\n# 1 million passwords per second against a modern box. This means that you\n# should use very strong passwords, otherwise they will be very easy to break.\n# Note that because the password is really a shared secret between the client\n# and the server, and should not be memorized by any human, the password\n# can be easily a long string from /dev/urandom or whatever, so by using a\n# long and unguessable password no brute force attack will be possible.\n\n# ACL users are defined in the following format:\n#\n#   user <username> ... 
acl rules ...\n#\n# For example:\n#\n#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99\n#\n# The special username \"default\" is used for new connections. If this user\n# has the \"nopass\" rule, then new connections will be immediately authenticated\n# as the \"default\" user without the need of any password provided via the\n# AUTH command. Otherwise if the \"default\" user is not flagged with \"nopass\"\n# the connections will start in an unauthenticated state, and will require\n# AUTH (or the HELLO command AUTH option) in order to be authenticated and\n# start to work.\n#\n# The ACL rules that describe what a user can do are the following:\n#\n#  on           Enable the user: it is possible to authenticate as this user.\n#  off          Disable the user: it's no longer possible to authenticate\n#               with this user, however the already authenticated connections\n#               will still work.\n#  skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.\n#  sanitize-payload         RESTORE dump-payload is sanitized (default).\n#  +<command>   Allow the execution of that command.\n#               May be used with `|` for allowing subcommands (e.g. \"+config|get\")\n#  -<command>   Disallow the execution of that command.\n#               May be used with `|` for blocking subcommands (e.g. \"-config|set\")\n#  +@<category> Allow the execution of all the commands in such category.\n#               Valid categories are @admin, @set, @sortedset, ...\n#               and so forth; see the full list in the server.c file where\n#               the server command table is described and defined.\n#               The special category @all means all the commands: both those\n#               currently present in the server, and those that will be loaded\n#               in the future via modules.\n#  +<command>|first-arg  Allow a specific first argument of an otherwise\n#                        disabled command. 
It is only supported on commands with\n#                        no sub-commands, and is not allowed in negative form\n#                        like -SELECT|1, only additive starting with \"+\". This\n#                        feature is deprecated and may be removed in the future.\n#  allcommands  Alias for +@all. Note that it implies the ability to execute\n#               all the future commands loaded via the modules system.\n#  nocommands   Alias for -@all.\n#  ~<pattern>   Add a pattern of keys that can be mentioned as part of\n#               commands. For instance ~* allows all the keys. The pattern\n#               is a glob-style pattern like the one of KEYS.\n#               It is possible to specify multiple patterns.\n# %R~<pattern>  Add key read pattern that specifies which keys can be read\n#               from.\n# %W~<pattern>  Add key write pattern that specifies which keys can be\n#               written to.\n#  allkeys      Alias for ~*\n#  resetkeys    Flush the list of allowed key patterns.\n#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be\n#               accessed by the user. It is possible to specify multiple channel\n#               patterns.\n#  allchannels  Alias for &*\n#  resetchannels            Flush the list of allowed channel patterns.\n#  ><password>  Add this password to the list of valid passwords for the user.\n#               For example >mypass will add \"mypass\" to the list.\n#               This directive clears the \"nopass\" flag (see later).\n#  <<password>  Remove this password from the list of valid passwords.\n#  nopass       All the set passwords of the user are removed, and the user\n#               is flagged as requiring no password: it means that every\n#               password will work against this user. 
If this directive is\n#               used for the default user, every new connection will be\n#               immediately authenticated with the default user without\n#               any explicit AUTH command required. Note that the \"resetpass\"\n#               directive will clear this condition.\n#  resetpass    Flush the list of allowed passwords. Moreover removes the\n#               \"nopass\" status. After \"resetpass\" the user has no associated\n#               passwords and there is no way to authenticate without adding\n#               some password (or setting it as \"nopass\" later).\n#  reset        Performs the following actions: resetpass, resetkeys, resetchannels,\n#               allchannels (if acl-pubsub-default is set), off, clearselectors, -@all.\n#               The user returns to the same state it has immediately after its creation.\n# (<options>)   Create a new selector with the options specified within the\n#               parentheses and attach it to the user. Each option should be\n#               space separated. The first character must be ( and the last\n#               character must be ).\n# clearselectors            Remove all of the currently attached selectors.\n#                           Note this does not change the \"root\" user permissions,\n#                           which are the permissions directly applied onto the\n#                           user (outside the parentheses).\n#\n# ACL rules can be specified in any order: for instance you can start with\n# passwords, then flags, or key patterns. However note that the additive\n# and subtractive rules will CHANGE MEANING depending on the ordering.\n# For instance see the following example:\n#\n#   user alice on +@all -DEBUG ~* >somepassword\n#\n# This will allow \"alice\" to use all the commands with the exception of the\n# DEBUG command, since +@all added all the commands to the set of the commands\n# alice can use, and later DEBUG was removed. 
However if we invert the order\n# of the two ACL rules the result will be different:\n#\n#   user alice on -DEBUG +@all ~* >somepassword\n#\n# Now DEBUG was removed while alice did not yet have any commands in the set of\n# allowed commands; later all the commands are added, so the user will be able\n# to execute everything.\n#\n# Basically ACL rules are processed left-to-right.\n#\n# The following is a list of command categories and their meanings:\n# * keyspace - Writing or reading from keys, databases, or their metadata\n#     in a type agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,\n#     KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,\n#     key or metadata will also have the `write` category. Commands that only read\n#     the keyspace, key or metadata will have the `read` category.\n# * read - Reading from keys (values or metadata). Note that commands that don't\n#     interact with keys will not have either `read` or `write`.\n# * write - Writing to keys (values or metadata)\n# * admin - Administrative commands. Normal applications will never need to use\n#     these. Includes REPLICAOF, CONFIG, DEBUG, SAVE, MONITOR, ACL, SHUTDOWN, etc.\n# * dangerous - Potentially dangerous (each should be considered with care for\n#     various reasons). This includes FLUSHALL, MIGRATE, RESTORE, SORT, KEYS,\n#     CLIENT, DEBUG, INFO, CONFIG, SAVE, REPLICAOF, etc.\n# * connection - Commands affecting the connection or other connections.\n#     This includes AUTH, SELECT, COMMAND, CLIENT, ECHO, PING, etc.\n# * blocking - Potentially blocking the connection until released by another\n#     command.\n# * fast - Fast O(1) commands. 
May loop on the number of arguments, but not the\n#     number of elements in the key.\n# * slow - All commands that are not Fast.\n# * pubsub - PUBLISH / SUBSCRIBE related\n# * transaction - WATCH / MULTI / EXEC related commands.\n# * scripting - Scripting related.\n# * set - Data type: sets related.\n# * sortedset - Data type: zsets related.\n# * list - Data type: lists related.\n# * hash - Data type: hashes related.\n# * string - Data type: strings related.\n# * bitmap - Data type: bitmaps related.\n# * hyperloglog - Data type: hyperloglog related.\n# * geo - Data type: geo related.\n# * stream - Data type: streams related.\n#\n# For more information about ACL configuration please refer to\n# the Valkey web site at https://valkey.io/topics/acl\n\n# ACL LOG\n#\n# The ACL Log tracks failed commands and authentication events associated\n# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked\n# by ACLs. The ACL Log is stored in memory. You can reclaim memory with\n# ACL LOG RESET. Define the maximum number of entries in the ACL Log below.\nacllog-max-len 128\n\n# Using an external ACL file\n#\n# Instead of configuring users here in this file, it is possible to use\n# a stand-alone file just listing users. The two methods cannot be mixed:\n# if you configure users here and at the same time you activate the external\n# ACL file, the server will refuse to start.\n#\n# The format of the external ACL user file is exactly the same as the\n# format that is used inside valkey.conf to describe users.\n#\n# aclfile /etc/valkey/users.acl\n\n# IMPORTANT NOTE: \"requirepass\" is just a compatibility\n# layer on top of the new ACL system. Its only effect will be setting\n# the password for the default user. 
Clients will still authenticate using\n# AUTH <password> as usual, or more explicitly with AUTH default <password>\n# if they follow the new protocol: both will work.\n#\n# The requirepass option is not compatible with the aclfile option and the ACL\n# LOAD command; these will cause requirepass to be ignored.\n#\n\nuser default on >isfoobared allcommands allkeys\n\n# The default Pub/Sub channels permission for new users is controlled by the\n# acl-pubsub-default configuration directive, which accepts one of these values:\n#\n# allchannels: grants access to all Pub/Sub channels\n# resetchannels: revokes access to all Pub/Sub channels\n#\n# acl-pubsub-default defaults to 'resetchannels' permission.\n#\n# acl-pubsub-default resetchannels\n\n# Command renaming (DEPRECATED).\n#\n# ------------------------------------------------------------------------\n# WARNING: avoid using this option if possible. Instead use ACLs to remove\n# commands from the default user, and put them only in some admin user you\n# create for administrative purposes.\n# ------------------------------------------------------------------------\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to replicas may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. 
By default\n# this limit is set to 10000 clients, however if the server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as the server reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached the server will close all the new connections sending\n# an error 'max number of clients reached'.\n#\n# IMPORTANT: With a cluster-enabled setup, the max number of connections is also\n# shared with the cluster bus: every node in the cluster will use two\n# connections, one incoming and another outgoing. It is important to size the\n# limit accordingly in case of very large clusters.\n#\nmaxclients 4000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached the server will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If the server can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', the server will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using the server as an LRU or LFU cache, or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have replicas attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the replicas are subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of replicas is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... 
if you have replicas attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for replica\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\nmaxmemory 88MB\n\n# MAXMEMORY POLICY: how the server will select what to remove when maxmemory\n# is reached. You can select one from the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU, only keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key having an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# Both LRU, LFU and volatile-ttl are implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, when there are no suitable keys for\n# eviction, the server will return an error on write operations that require\n# more memory. These are usually commands that create new keys, add data or\n# modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,\n# SORT (due to the STORE argument), and EXEC (if the transaction includes any\n# command that requires memory).\n#\n# The default is: noeviction\n#\nmaxmemory-policy allkeys-lru\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune it for speed or\n# accuracy. By default the server will check five keys and pick the one that was\n# used least recently, you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 
A value of 10 approximates\n# true LRU very closely but costs more CPU. 3 is faster but not very accurate.\n#\n# maxmemory-samples 5\n\n# Eviction processing is designed to function well with the default setting.\n# If there is an unusually large amount of write traffic, this value may need to\n# be increased. Decreasing this value may reduce latency at the risk of reduced\n# eviction processing effectiveness.\n#   0 = minimum latency, 10 = default, 100 = process without regard to latency\n#\n# maxmemory-eviction-tenacity 10\n\n# By default a replica will ignore its maxmemory setting\n# (unless it is promoted to master after a failover or manually). It means\n# that the eviction of keys will be just handled by the master, sending the\n# DEL commands to the replica as keys are evicted on the master side.\n#\n# This behavior ensures that masters and replicas stay consistent, and is usually\n# what you want, however if your replica is writable, or you want the replica\n# to have a different memory setting, and you are sure all the writes performed\n# to the replica are idempotent, then you may change this default (but be sure\n# to understand what you are doing).\n#\n# Note that since the replica by default does not evict, it may end up using more\n# memory than the one set via maxmemory (there are certain buffers that may\n# be larger on the replica, or data structures may sometimes take more memory\n# and so forth). So make sure you monitor your replicas and make sure they\n# have enough memory to never hit a real out-of-memory condition before the\n# master hits the configured maxmemory setting.\n#\n# replica-ignore-maxmemory yes\n\n# The server reclaims expired keys in two ways: upon access when those keys are\n# found to be expired, and also in background, in what is called the\n# \"active expire cycle\". 
The key space is slowly and incrementally scanned\n# looking for expired keys to reclaim, so that it is possible to free memory\n# of keys that are expired and will never be accessed again in a short time.\n#\n# The default effort of the expire cycle will try to avoid having more than\n# ten percent of expired keys still in memory, and will try to avoid consuming\n# more than 25% of total memory and to avoid adding latency to the system. However\n# it is possible to increase the expire \"effort\" that is normally set to\n# \"1\", to a greater value, up to the value \"10\". At its maximum value the\n# system will use more CPU, longer cycles (and technically may introduce\n# more latency), and will tolerate fewer already expired keys still present\n# in the system. It's a tradeoff between memory, CPU and latency.\n#\n# active-expire-effort 1\n\n############################# LAZY FREEING ####################################\n\n# The server has two primitives to delete keys. One is called DEL and is a blocking\n# deletion of the object. It means that the server stops processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in the server. However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons the server also offers non blocking deletion primitives\n# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and\n# FLUSHDB commands, in order to reclaim memory in background. Those commands\n# are executed in constant time. 
Another thread will incrementally free the\n# object in the background as fast as possible.\n#\n# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.\n# It's up to the design of the application to understand when it is a good\n# idea to use one or the other. However the server sometimes has to\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically the server deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a replica performs a full resynchronization with\n#    its master, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases the default is to delete objects in a blocking way,\n# like if DEL was called. 
However you can configure each case specifically
# in order to instead release memory in a non-blocking way, as if UNLINK
# was called, using the following configuration directives.

lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes

# In cases where replacing the DEL calls in user code with UNLINK calls is
# not easy, it is also possible to modify the default behavior of the DEL
# command to act exactly like UNLINK, using the following configuration
# directive:

lazyfree-lazy-user-del yes

# FLUSHDB, FLUSHALL, SCRIPT FLUSH and FUNCTION FLUSH support both asynchronous and synchronous
# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the
# commands. When neither flag is passed, this directive will be used to determine
# if the data should be deleted asynchronously.

lazyfree-lazy-user-flush yes

################################ THREADED I/O #################################

# The server is mostly single threaded, however there are certain threaded
# operations such as UNLINK, slow I/O accesses and other things that are
# performed on side threads.
#
# Now it is also possible to handle the server clients socket reads and writes
# in different I/O threads. Since writing in particular is slow, normally
# users use pipelining in order to speed up the server performance per
# core, and spawn multiple instances in order to scale more. Using I/O
# threads it is possible to easily speed up the server by up to two times
# without resorting to pipelining or sharding of the instance.
#
# By default threading is disabled; we suggest enabling it only on machines
# that have at least 4 cores, leaving at least one spare core.
# Using more than 8 threads is unlikely to help much. 
We also recommend using
# threaded I/O only if you actually have performance problems, with
# instances using a significant percentage of CPU time; otherwise
# there is no point in using this feature.
#
# So for instance if you have a four core box, try to use 2 or 3 I/O
# threads; if you have 8 cores, try to use 6 threads. In order to
# enable I/O threads use the following configuration directive:
#
# io-threads 4
#
# Setting io-threads to 1 will just use the main thread as usual.
# When I/O threads are enabled, we only use threads for writes, that is
# to thread the write(2) syscall and transfer the client buffers to the
# socket. However it is also possible to enable threading of reads and
# protocol parsing using the following configuration directive, by setting
# it to yes:
#
# io-threads-do-reads no
#
# Usually threading reads doesn't help much.
#
# NOTE 1: This configuration directive cannot be changed at runtime via
# CONFIG SET. Also, this feature currently does not work when SSL is
# enabled.
#
# NOTE 2: If you want to test the server speedup using valkey-benchmark, make
# sure you also run the benchmark itself in threaded mode, using the
# --threads option to match the number of server threads, otherwise you won't
# be able to notice the improvements.

############################ KERNEL OOM CONTROL ##############################

# On Linux, it is possible to hint the kernel OOM killer about which processes
# should be killed first when out of memory.
#
# Enabling this feature makes the server actively control the oom_score_adj value
# for all its processes, depending on their role. 
The default scores will\n# attempt to have background child processes killed before all others, and\n# replicas killed before masters.\n#\n# The server supports these options:\n#\n# no:       Don't make changes to oom-score-adj (default).\n# yes:      Alias to \"relative\" see below.\n# absolute: Values in oom-score-adj-values are written as is to the kernel.\n# relative: Values are used relative to the initial value of oom_score_adj when\n#           the server starts and are then clamped to a range of -1000 to 1000.\n#           Because typically the initial value is 0, they will often match the\n#           absolute values.\noom-score-adj no\n\n# When oom-score-adj is used, this directive controls the specific values used\n# for master, replica and background child processes. Values range -2000 to\n# 2000 (higher means more likely to be killed).\n#\n# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)\n# can freely increase their value, but not decrease it below its initial\n# settings. This means that setting oom-score-adj to \"relative\" and setting the\n# oom-score-adj-values to positive values will always succeed.\noom-score-adj-values 0 200 800\n\n\n#################### KERNEL transparent hugepage CONTROL ######################\n\n# Usually the kernel Transparent Huge Pages control is set to \"madvise\" or\n# \"never\" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which\n# case this config has no effect. On systems in which it is set to \"always\",\n# the server will attempt to disable it specifically for the server process in order\n# to avoid latency problems specifically with fork(2) and CoW.\n# If for some reason you prefer to keep it enabled, you can set this config to\n# \"no\" and the kernel global to \"always\".\n\ndisable-thp yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default the server asynchronously dumps the dataset on disk. 
This mode is
# good enough in many applications, but an issue with the server process or
# a power outage may result in a few minutes of lost writes (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) the server can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# goes wrong with the process itself while the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup the server will load the AOF, that is the file
# with the best durability guarantees.
#
# Please check https://valkey.io/topics/persistence for more information.

appendonly no

# The base name of the append only file.
#
# The server uses a set of append-only files to persist the dataset
# and changes applied to it. There are two basic types of files in use:
#
# - Base files, which are a snapshot representing the complete state of the
#   dataset at the time the file was created. 
Base files can be either in\n#   the form of RDB (binary serialized) or AOF (textual commands).\n# - Incremental files, which contain additional commands that were applied\n#   to the dataset following the previous file.\n#\n# In addition, manifest files are used to track the files and the order in\n# which they were created and should be applied.\n#\n# Append-only file names are created by the server following a specific pattern.\n# The file name's prefix is based on the 'appendfilename' configuration\n# parameter, followed by additional information about the sequence and type.\n#\n# For example, if appendfilename is set to appendonly.aof, the following file\n# names could be derived:\n#\n# - appendonly.aof.1.base.rdb as a base file.\n# - appendonly.aof.1.incr.aof, appendonly.aof.2.incr.aof as incremental files.\n# - appendonly.aof.manifest as a manifest file.\n\nappendfilename \"appendonly.aof\"\n\n# For convenience, the server stores all persistent append-only files in a dedicated\n# directory. The name of the directory is determined by the appenddirname\n# configuration parameter.\n\nappenddirname \"appendonlydir\"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OS will really flush\n# data on disk, some other OS will just try to do it ASAP.\n#\n# The server supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. Faster.\n# always: fsync after every write to the append only log. Slow, Safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is \"everysec\", as that's usually the right compromise between\n# speed and data safety. 
It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performance (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# For more details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# the server may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of the server is
# the same as "appendfsync no". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". 
Otherwise leave it as
# "no", which is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# The server is able to automatically rewrite the log file by implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: The server remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size grows
# beyond the base size by the specified percentage, the rewrite is triggered.
# Also you need to specify a minimum size for the AOF file to be rewritten;
# this is useful to avoid rewriting the AOF file even if the percentage
# increase is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the server
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where the server is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when the server itself
# crashes or aborts but the operating system still works correctly).
#
# The server can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. 
When the option is set to no, the user is required
# to fix the AOF file using the "valkey-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle,
# the server will still exit with an error. This option only applies when
# the server tries to read more data from the AOF file but not enough bytes
# are found.
aof-load-truncated yes

# The server can create append-only base files in either RDB or AOF formats. Using
# the RDB format is always faster and more efficient, and disabling it is only
# supported for backward compatibility purposes.
aof-use-rdb-preamble yes

# The server supports recording timestamp annotations in the AOF to support restoring
# the data from a specific point-in-time. However, using this capability changes
# the AOF format in a way that may not be compatible with existing AOF parsers.
aof-timestamp-enabled no

################################ SHUTDOWN #####################################

# Maximum time to wait for replicas when shutting down, in seconds.
#
# During shut down, a grace period allows any lagging replicas to catch up with
# the latest replication offset before the master exits. This period can
# prevent data loss, especially for deployments without configured disk backups.
#
# The 'shutdown-timeout' value is the grace period's duration in seconds. It is
# only applicable when the instance has replicas. 
To disable the feature, set\n# the value to 0.\n#\n# shutdown-timeout 10\n\n# When the server receives a SIGINT or SIGTERM, shutdown is initiated and by default\n# an RDB snapshot is written to disk in a blocking operation if save points are configured.\n# The options used on signaled shutdown can include the following values:\n# default:  Saves RDB snapshot only if save points are configured.\n#           Waits for lagging replicas to catch up.\n# save:     Forces a DB saving operation even if no save points are configured.\n# nosave:   Prevents DB saving operation even if one or more save points are configured.\n# now:      Skips waiting for lagging replicas.\n# force:    Ignores any errors that would normally prevent the server from exiting.\n#\n# Any combination of values is allowed as long as \"save\" and \"nosave\" are not set simultaneously.\n# Example: \"nosave force now\"\n#\n# shutdown-on-sigint default\n# shutdown-on-sigterm default\n\n################ NON-DETERMINISTIC LONG BLOCKING COMMANDS #####################\n\n# Maximum time in milliseconds for EVAL scripts, functions and in some cases\n# modules' commands before the server can start processing or rejecting other clients.\n#\n# If the maximum execution time is reached the server will start to reply to most\n# commands with a BUSY error.\n#\n# In this state the server will only allow a handful of commands to be executed.\n# For instance, SCRIPT KILL, FUNCTION KILL, SHUTDOWN NOSAVE and possibly some\n# module specific 'allow-busy' commands.\n#\n# SCRIPT KILL and FUNCTION KILL will only be able to stop a script that did not\n# yet call any write commands, so SHUTDOWN NOSAVE may be the only way to stop\n# the server in the case a write command was already issued by the script when\n# the user doesn't want to wait for the natural termination of the script.\n#\n# The default is 5 seconds. It is possible to set it to 0 or a negative value\n# to disable this mechanism (uninterrupted execution). 
Note that in the past
# this config had a different name, which is now an alias, so both of these do
# the same:
# lua-time-limit 5000
# busy-reply-threshold 5000

################################ VALKEY CLUSTER  ###############################

# Normal server instances can't be part of a cluster; only nodes that are
# started as cluster nodes can. In order to start a server instance as a
# cluster node, enable the cluster support by uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by each node.
# Every cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the number of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are a multiple of the node timeout.
#
# cluster-node-timeout 15000

# The cluster port is the port that the cluster bus will listen for inbound connections on. When set
# to the default value, 0, it will be bound to the command port + 10000. 
Setting this value requires
# you to specify the cluster bus port when executing cluster meet.
# cluster-port 0

# A replica of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#
# The point "2" can be tuned by the user. 
Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large cluster-replica-validity-factor may allow replicas with too old data to fail over
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the cluster-replica-validity-factor
# to a value of 0, which means that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned masters, which are masters
# that are left without working replicas. This improves the cluster's ability
# to resist failures, as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. 
It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value or
# set cluster-allow-replica-migration to 'no'.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# Turning off this option allows the use of less automatic cluster configuration.
# It both disables migration to orphaned masters and migration from masters
# that became empty.
#
# Default is 'yes' (allow automatic migrations).
#
# cluster-allow-replica-migration yes

# By default cluster nodes stop accepting queries if they detect there
# is at least a hash slot uncovered (no available node is serving it).
# This way, if the cluster is partially down (for example a range of hash slots
# is no longer covered), the whole cluster eventually becomes unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to fail over their
# master during master failures. However the replica can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted except
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# This option, when set to yes, allows nodes to serve read traffic while the
# cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful for two cases. 
The first case is when an application
# doesn't require consistency of data during node failures or network partitions.
# One example of this is a cache: as long as the node has the data, it
# should be able to serve it.
#
# The second use case is for configurations that don't meet the recommended
# three shards but want to enable cluster mode and scale later. Without this
# option set, a master outage in a 1 or 2 shard configuration causes a
# read/write outage for the entire cluster; with it set, there is only a write
# outage. Without a quorum of masters, slot ownership will not change
# automatically.
#
# cluster-allow-reads-when-down no

# This option, when set to yes, allows nodes to serve pubsub shard traffic while
# the cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful if the application would like to use the pubsub feature even when
# the cluster global stable state is not OK. If the application wants to make sure only
# one shard is serving a given channel, this feature should be kept as yes.
#
# cluster-allow-pubsubshard-when-down yes

# Cluster link send buffer limit is the limit on the memory usage of an individual
# cluster bus link's send buffer in bytes. Cluster links are freed if they exceed
# this limit. This is primarily to prevent send buffers from growing unbounded on links
# toward slow peers (e.g. PubSub messages being piled up).
# This limit is disabled by default. Enable this limit when 'mem_cluster_links' INFO field
# and/or 'send-buffer-allocated' entries in the 'CLUSTER LINKS' command output continuously increase.
# A minimum limit of 1gb is recommended so that the cluster link buffer can fit at least a single
# PubSub message by default. (client-query-buffer-limit default value is 1gb)
#
# cluster-link-sendbuf-limit 0

# Clusters can configure their announced hostname using this config. 
This is a common use case for
# applications that need to use TLS Server Name Indication (SNI) or deal with DNS-based
# routing. By default this value is only shown as additional metadata in the CLUSTER SLOTS
# command, but can be changed using the 'cluster-preferred-endpoint-type' config. This value is
# communicated over the cluster bus to all nodes; setting it to an empty string will remove
# the hostname and also propagate the removal.
#
# cluster-announce-hostname ""

# Clusters can configure an optional nodename to be used in addition to the node ID for
# debugging and admin information. This name is broadcast between nodes, so it will be used
# in addition to the node ID when reporting cross node events such as node failures.
# cluster-announce-human-nodename ""

# Clusters can advertise how clients should connect to them using either their IP address,
# a user defined hostname, or by declaring they have no endpoint. Which endpoint is
# shown as the preferred endpoint is set by using the cluster-preferred-endpoint-type
# config with values 'ip', 'hostname', or 'unknown-endpoint'. This value controls the
# endpoint returned for MOVED/ASKING requests as well as the first field of CLUSTER SLOTS.
# If the preferred endpoint type is set to hostname, but no announced hostname is set, a '?'
# will be returned instead.
#
# When a cluster advertises itself as having an unknown endpoint, it's indicating that
# the server doesn't know how clients can reach the cluster. This can happen in certain
# networking situations where there are multiple possible routes to the node, and the
# server doesn't know which one the client took. 
In this case, the server is expecting
# the client to reach out on the same endpoint it used for making the last request, but use
# the port provided in the response.
#
# cluster-preferred-endpoint-type ip

# In order to set up your cluster make sure to read the documentation
# available at the https://valkey.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, cluster node's address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make a cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following four options are used for this purpose, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-tls-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client ports (for connections
# without and with TLS) and cluster message bus port. The information is then
# published in the header of the bus packets so that other nodes will be able to
# correctly map the address of the node publishing the information.
#
# If tls-cluster is set to yes and cluster-announce-tls-port is omitted or set
# to zero, then cluster-announce-port refers to the TLS port. Note also that
# cluster-announce-tls-port has no effect if tls-cluster is set to no.
#
# If the above options are not used, the normal cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. 
If the bus-port is not set, a fixed offset of\n# 10000 will be used as usual.\n#\n# Example:\n#\n# cluster-announce-ip 10.1.1.5\n# cluster-announce-tls-port 6379\n# cluster-announce-port 0\n# cluster-announce-bus-port 6380\n\n################################## SLOW LOG ###################################\n\n# The server Slow Log is a system to log queries that exceeded a specified\n# execution time. The execution time does not include the I/O operations\n# like talking with the client, sending the reply and so forth,\n# but just the time needed to actually execute the command (this is the only\n# stage of command execution where the thread is blocked and can not serve\n# other requests in the meantime).\n#\n# You can configure the slow log with two parameters: one tells the server\n# what is the execution time, in microseconds, to exceed in order for the\n# command to get logged, and the other parameter is the length of the\n# slow log. When a new command is logged the oldest one is removed from the\n# queue of logged commands.\n\n# The following time is expressed in microseconds, so 1000000 is equivalent\n# to one second. Note that a negative number disables the slow log, while\n# a value of zero forces the logging of every command.\nslowlog-log-slower-than 10000\n\n# There is no limit to this length. 
Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET.\nslowlog-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The server latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a server instance.\n#\n# Via the LATENCY command this information is available to the user that can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal or\n# greater than the amount of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact, that while very small, can be measured under big load. Latency\n# monitoring can easily be enabled at runtime using the command\n# \"CONFIG SET latency-monitor-threshold <milliseconds>\" if needed.\nlatency-monitor-threshold 0\n\n################################ LATENCY TRACKING ##############################\n\n# The server's extended latency monitoring tracks the per command latencies and enables\n# exporting the percentile distribution via the INFO latencystats command,\n# and cumulative latency distributions (histograms) via the LATENCY command.\n#\n# By default, the extended latency monitoring is enabled since the overhead\n# of keeping track of the command latency is very small.\n# latency-tracking yes\n\n# By default the exported latency percentiles via the INFO latencystats command\n# are the p50, p99, and p999.\n# latency-tracking-info-percentiles 50 99 99.9\n\n############################# EVENT NOTIFICATION ##############################\n\n# The server can notify Pub/Sub clients about events happening in the key space.\n# This feature is 
documented at https://valkey.io/topics/notifications\n#\n# For instance if keyspace events notification is enabled, and a client\n# performs a DEL operation on key \"foo\" stored in the Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that the server will notify among a set\n# of classes. Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  n     New key events (Note: not included in the 'A' class)\n#  t     Stream commands\n#  d     Module key type events\n#  m     Key-miss events (Note: It is not included in the 'A' class)\n#  A     Alias for g$lshzxetd, so that the \"AKE\" string means all the events\n#        (Except key-miss events which are excluded from 'A' due to their\n#         unique nature).\n#\n#  The \"notify-keyspace-events\" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. 
Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events \"\"\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. These thresholds can be configured using the following directives.\nhash-max-listpack-entries 512\nhash-max-listpack-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-listpack-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:\n# 0: disable all list compression\n# 1: depth 1 means \"don't start compressing until after 1 node into the list,\n#    going from either the head or tail\"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit in the size of the\n# set in order to use this special memory saving encoding.\nset-max-intset-entries 512\n\n# Sets containing non-integer values are also encoded using a memory efficient\n# data structure when they have a small number of entries, and the biggest entry\n# does not exceed a given threshold. These thresholds can be configured using\n# the following directives.\nset-max-listpack-entries 128\nset-max-listpack-value 64\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-listpack-entries 128\nzset-max-listpack-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16 bytes header. 
When a HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Streams macro node max size / items. The stream data structure is a radix\n# tree of big nodes that encode multiple items inside. Using this configuration\n# it is possible to configure how big a single node can be in bytes, and the\n# maximum number of items it may contain before switching to a new node when\n# appending new stream entries. If any of the following settings are set to\n# zero, the limit is ignored, so for instance it is possible to set just a\n# max entries limit by setting max-bytes to 0 and max-entries to the desired\n# value.\nstream-node-max-bytes 4096\nstream-node-max-entries 100\n\n# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in\n# order to help rehashing the main server hash table (the one mapping top-level\n# keys to values). 
The hash table implementation the server uses (see dict.c)\n# performs a lazy rehashing: the more operations you run against a hash table\n# that is rehashing, the more rehashing \"steps\" are performed, so if the\n# server is idle the rehashing is never complete and some more memory is used\n# by the hash table.\n#\n# The default is to use this millisecond 10 times every second in order to\n# actively rehash the main dictionaries, freeing memory when possible.\n#\n# If unsure:\n# use \"activerehashing no\" if you have hard latency requirements and it is\n# not a good thing in your environment that the server can reply from time to time\n# to queries with 2 milliseconds delay.\n#\n# use \"activerehashing yes\" if you don't have such hard requirements but\n# want to free memory asap when possible.\nactiverehashing yes\n\n# The client output buffer limits can be used to force disconnection of clients\n# that are not reading data from the server fast enough for some reason (a\n# common reason is that a Pub/Sub client can't consume messages as fast as the\n# publisher can produce them).\n#\n# The limit can be set differently for the three different classes of clients:\n#\n# normal -> normal clients including MONITOR clients\n# replica -> replica clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reaches 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously overcomes\n# the 
limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than it can be read.\n#\n# Instead there is a default limit for pubsub and replica clients, since\n# subscribers and replicas receive data in a push fashion.\n#\n# Note that it doesn't make sense to set the replica clients output buffer\n# limit lower than the repl-backlog-size config (partial sync will succeed\n# and then the replica will get disconnected).\n# Such a configuration is ignored (the size of repl-backlog-size will be used).\n# This doesn't have memory consumption implications since the replica client\n# will share the backlog buffers memory.\n#\n# Both the hard and the soft limit can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit replica 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. They are limited to a fixed\n# amount by default in order to prevent a protocol desynchronization (for\n# instance due to a bug in the client) from leading to unbound memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such as huge multi/exec requests or the like.\n#\n# client-query-buffer-limit 1gb\n\n# In some scenarios client connections can hog up memory leading to OOM\n# errors or data eviction. To avoid this we can cap the accumulated memory\n# used by all client connections (all pubsub and normal clients). Once we\n# reach that limit connections will be dropped by the server freeing up\n# memory. The server will attempt to drop the connections using the most\n# memory first. 
We call this mechanism \"client eviction\".\n#\n# Client eviction is configured using the maxmemory-clients setting as follows:\n# 0 - client eviction is disabled (default)\n#\n# A memory value can be used for the client eviction threshold,\n# for example:\n# maxmemory-clients 1g\n#\n# A percentage value (between 1% and 100%) means the client eviction threshold\n# is based on a percentage of the maxmemory setting. For example to set client\n# eviction at 5% of maxmemory:\n# maxmemory-clients 5%\n\n# In the server protocol, bulk requests, that is, elements representing single\n# strings, are normally limited to 512 mb. However you can change this limit\n# here, but it must be 1mb or greater\n#\n# proto-max-bulk-len 512mb\n\n# The server calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but the server checks for\n# tasks to perform according to the specified \"hz\" value.\n#\n# By default \"hz\" is set to 10. Raising the value will use more CPU when\n# the server is idle, but at the same time will make the server more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500, however a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# Normally it is useful to have an HZ value which is proportional to the\n# number of clients connected. 
This is useful, for instance, to\n# avoid processing too many clients for each background task invocation,\n# which helps avoid latency spikes.\n#\n# Since the default HZ value is conservatively set to 10, the server\n# offers, and enables by default, the ability to use an adaptive HZ value\n# which will temporarily rise when there are many connected clients.\n#\n# When dynamic HZ is enabled, the actual configured HZ will be used\n# as a baseline, but multiples of the configured HZ value will be actually\n# used as needed once more clients are connected. In this way an idle\n# instance will use very little CPU time while a busy instance will be\n# more responsive.\ndynamic-hz yes\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 4 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# When the server saves an RDB file, if the following option is enabled\n# the file will be fsync-ed every 4 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\nrdb-save-incremental-fsync yes\n\n# The server's LFU eviction (see maxmemory setting) can be tuned. However it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve the performance and how the keys LFU change over time, which\n# is possible to inspect via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the server LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key, its maximum value is 255, so the server\n# uses a probabilistic increment with logarithmic behavior. 
Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following commands:\n#\n#   valkey-benchmark -n 1000000 incr foo\n#   valkey-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be decremented.\n#\n# The default value for the lfu-decay-time is 1. 
A special value of 0 means we\n# will never decay the counter.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus making it possible to reclaim memory.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However thanks to this feature\n# implemented by Oran Agra, this process can happen at runtime\n# in a \"hot\" way, while the server is running.\n#\n# Basically when the fragmentation is over a certain level (see the\n# configuration options below) the server will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys,\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled the server\n#    to use the copy of Jemalloc we ship with the source code of the server.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters are able to fine tune the behavior of the\n# defragmentation process. 
If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Active defragmentation is disabled by default\n# activedefrag no\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage, to be used when the lower\n# threshold is reached\n# active-defrag-cycle-min 1\n\n# Maximal effort for defrag in CPU percentage, to be used when the upper\n# threshold is reached\n# active-defrag-cycle-max 25\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n# Jemalloc background thread for purging will be enabled by default\njemalloc-bg-thread yes\n\n# It is possible to pin different threads and processes of the server to specific\n# CPUs in your system, in order to maximize the performance of the server.\n# This is useful both to pin different server threads to different\n# CPUs, and to make sure that multiple server instances running\n# on the same host will be pinned to different CPUs.\n#\n# Normally you can do this using the \"taskset\" command, however it is also\n# possible to do this via the server configuration directly, both in Linux and FreeBSD.\n#\n# You can pin the server/IO threads, bio threads, aof rewrite child process, and\n# the bgsave child process. 
The syntax to specify the cpu list is the same as\n# the taskset command:\n#\n# Set server/io threads to cpu affinity 0,2,4,6:\n# server_cpulist 0-7:2\n#\n# Set bio threads to cpu affinity 1,3:\n# bio_cpulist 1,3\n#\n# Set aof rewrite child process to cpu affinity 8,9,10,11:\n# aof_rewrite_cpulist 8-11\n#\n# Set bgsave child process to cpu affinity 1,10,11:\n# bgsave_cpulist 1,10-11\n\n# In some cases the server will emit warnings and even refuse to start if it detects\n# that the system is in a bad state. It is possible to suppress these warnings\n# by setting the following config, which takes a space delimited list of warnings\n# to suppress.\n#\n# ignore-warnings ARM64-COW-BUG\n"
  },
  {
    "path": "aegir/conf/valkey/valkey8.conf",
    "content": "# Valkey configuration file example.\n#\n# Note that in order to read the configuration file, the server must be\n# started with the file path as first argument:\n#\n# ./valkey-server /path/to/valkey.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Note that option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Sentinel. Since the server always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config changes at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# Included paths may contain wildcards. All files matching the wildcards will\n# be included in alphabetical order.\n# Note that if an include path contains wildcards but no files match it when\n# the server is started, the include statement will be ignored and no error will\n# be emitted.  It is safe, therefore, to include wildcard files from empty\n# directories.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n# include /path/to/fragments/*.conf\n#\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n# loadmodule /path/to/args_module.so [arg [arg ...]]\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, the server listens\n# for connections from all available network interfaces on the host machine.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n# Each address can be prefixed by \"-\", which means that the server will not fail to\n# start if the address is not available. Being not available only refers to\n# addresses that do not correspond to any network interface. Addresses that\n# are already in use will always fail, and unsupported protocols will always be\n# silently skipped.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses\n# xbind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6\n# xbind * -::*                     # like the default, all available interfaces\n#\n# ~~~ WARNING ~~~ If the computer running the server is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. 
So by default we uncomment the\n# following bind directive, that will force the server to listen only on the\n# IPv4 and IPv6 (if available) loopback interface addresses (this means the server\n# will only be able to accept client connections from the same host that it is\n# running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# COMMENT OUT THE FOLLOWING LINE.\n#\n# You will also need to set a password unless you explicitly disable protected\n# mode.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# By default, outgoing connections (from replica to primary, from Sentinel to\n# instances, cluster bus, etc.) are not bound to a specific local address. In\n# most cases, this means the operating system will handle that based on routing\n# and the interface through which the connection goes out.\n#\n# Using bind-source-addr it is possible to configure a specific address to bind\n# to, which may also affect how the connection gets routed.\n#\n# Example:\n#\n# bind-source-addr 10.0.0.1\n\n# Protected mode is a layer of security protection, in order to avoid that\n# the server instances left open on the internet are accessed and exploited.\n#\n# When protected mode is on and the default user has no password, the server\n# only accepts local connections from the IPv4 address (127.0.0.1), IPv6 address\n# (::1) or Unix domain sockets.\n#\n# By default protected mode is enabled. You should disable it only if\n# you are sure you want clients from other hosts to connect to the server\n# even if no authentication is configured.\nprotected-mode yes\n\n# The server uses default hardened security configuration directives to reduce the\n# attack surface on innocent users. 
Therefore, several sensitive configuration\n# directives are immutable, and some potentially-dangerous commands are blocked.\n#\n# Configuration directives that control files that the server writes to (e.g., 'dir'\n# and 'dbfilename') and that aren't usually modified during runtime\n# are protected by making them immutable.\n#\n# Commands that can increase the attack surface of the server and that aren't usually\n# called by users are blocked by default.\n#\n# These can be exposed to either all connections or just local ones by setting\n# each of the configs listed below to either of these values:\n#\n# no    - Block for any connection (remain immutable)\n# yes   - Allow for any connection (no protection)\n# local - Allow only for local connections. Ones originating from the\n#         IPv4 address (127.0.0.1), IPv6 address (::1) or Unix domain sockets.\n#\n# enable-protected-configs no\n# enable-debug-command no\n# enable-module-command no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified the server will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow clients connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. 
There is no default, so the server will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/valkey/valkey.sock\n# unixsocketgroup valkey\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 900\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Force network equipment in the middle to consider the connection to be\n#    alive.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection the double of the time is needed.\n# On other kernels the period depends on the kernel configuration.\ntcp-keepalive 300\n\n# Apply OS-specific mechanism to mark the listening socket with the specified\n# ID, to support advanced routing and filtering capabilities.\n#\n# On Linux, the ID represents a connection mark.\n# On FreeBSD, the ID represents a socket cookie ID.\n# On OpenBSD, the ID represents a route table ID.\n#\n# The default value is 0, which implies no marking is required.\n# socket-mark-id 0\n\n################################# TLS/SSL #####################################\n\n# By default, TLS/SSL is disabled. To enable it, the \"tls-port\" configuration\n# directive can be used to define TLS-listening ports. To enable TLS on the\n# default port, use:\n#\n# port 0\n# tls-port 6379\n\n# Configure a X.509 certificate and private key to use for authenticating the\n# server to connected clients, primaries or cluster peers.  
These files should be\n# PEM formatted.\n#\n# tls-cert-file valkey.crt\n# tls-key-file valkey.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-key-file-pass secret\n\n# Normally the server uses the same certificate for both server functions (accepting\n# connections) and client functions (replicating from a primary, establishing\n# cluster bus connections, etc.).\n#\n# Sometimes certificates are issued with attributes that designate them as\n# client-only or server-only certificates. In that case it may be desired to use\n# different certificates for incoming (server) and outgoing (client)\n# connections. To do that, use the following directives:\n#\n# tls-client-cert-file client.crt\n# tls-client-key-file client.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-client-key-file-pass secret\n\n# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange,\n# required by older versions of OpenSSL (<3.0). Newer versions do not require\n# this configuration and recommend against it.\n#\n# tls-dh-params-file valkey.dh\n\n# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL\n# clients and peers. 
The server requires an explicit configuration of at least one\n# of these, and will not implicitly use the system wide configuration.\n#\n# tls-ca-cert-file ca.crt\n# tls-ca-cert-dir /etc/ssl/certs\n\n# By default, clients (including replica servers) on a TLS port are required\n# to authenticate using valid client side certificates.\n#\n# If \"no\" is specified, client certificates are not required and not accepted.\n# If \"optional\" is specified, client certificates are accepted and must be\n# valid if provided, but are not required.\n#\n# tls-auth-clients no\n# tls-auth-clients optional\n\n# By default, a replica does not attempt to establish a TLS connection\n# with its primary.\n#\n# Use the following directive to enable TLS on replication links.\n#\n# tls-replication yes\n\n# By default, the cluster bus uses a plain TCP connection. To enable\n# TLS for the bus protocol, use the following directive:\n#\n# tls-cluster yes\n\n# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended\n# that older formally deprecated versions are kept disabled to reduce the attack surface.\n# You can explicitly specify TLS versions to support.\n# Allowed values are case insensitive and include \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\",\n# \"TLSv1.3\" (OpenSSL >= 1.1.1) or any combination.\n# To enable only TLSv1.2 and TLSv1.3, use:\n#\n# tls-protocols \"TLSv1.2 TLSv1.3\"\n\n# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information\n# about the syntax of this string.\n#\n# Note: this configuration applies only to <= TLSv1.2.\n#\n# tls-ciphers DEFAULT:!MEDIUM\n\n# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more\n# information about the syntax of this string, and specifically for TLSv1.3\n# ciphersuites.\n#\n# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256\n\n# When choosing a cipher, use the server's preference instead of the client\n# preference. 
By default, the server follows the client's preference.\n#\n# tls-prefer-server-ciphers yes\n\n# By default, TLS session caching is enabled to allow faster and less expensive\n# reconnections by clients that support it. Use the following directive to disable\n# caching.\n#\n# tls-session-caching no\n\n# Change the default number of TLS sessions cached. A zero value sets the cache\n# to unlimited size. The default size is 20480.\n#\n# tls-session-cache-size 5000\n\n# Change the default timeout of cached TLS sessions. The default timeout is 300\n# seconds.\n#\n# tls-session-cache-timeout 60\n\n################################### RDMA ######################################\n\n# Valkey Over RDMA is experimental, it may be changed or be removed in any minor or major version.\n# By default, RDMA is disabled. To enable it, the \"rdma-port\" configuration\n# directive can be used to define RDMA-listening ports.\n#\n# rdma-port 6379\n# rdma-bind 192.168.1.100\n\n# The RDMA receive transfer buffer is 1M by default. It can be set between 64K and 16M.\n# Note that page size aligned size is preferred.\n#\n# rdma-rx-size 1048576\n\n# The RDMA completion queue will use the completion vector to signal completion events\n# via hardware interrupts. A large number of hardware interrupts can affect CPU performance.\n# It is possible to tune the performance using rdma-completion-vector.\n#\n# Example 1. a) Pin hardware interrupt vectors [0, 3] to CPU [0, 3].\n#            b) Set CPU affinity for valkey to CPU [4, X].\n#            c) Any valkey server uses a random RDMA completion vector [-1].\n# All valkey servers will not affect each other and will be isolated from kernel interrupts.\n#\n#   SYS    SYS    SYS    SYS  VALKEY VALKEY     VALKEY\n#    |      |      |      |      |      |          |\n#  CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   ... CPUX\n#    |      |      |      |\n#  INTR0  INTR1  INTR2  INTR3\n#\n# Example 2. 
a) 1:1 pin hardware interrupt vectors [0, X] to CPU [0, X].\n#            b) Set CPU affinity for valkey [M] to CPU [M].\n#            c) Valkey server [M] uses RDMA completion vector [M].\n# A single CPU [M] handles hardware interrupts, the RDMA completion vector [M],\n# and the valkey server [M] within its context only.\n# This avoids overhead and function calls across multiple CPUs, fully isolating\n# each valkey server from one another.\n#\n# VALKEY VALKEY VALKEY VALKEY VALKEY VALKEY     VALKEY\n#    |      |      |      |      |      |          |\n#  CPU0   CPU1   CPU2   CPU3   CPU4   CPU5  ...  CPUX\n#    |      |      |      |      |      |          |\n#  INTR0  INTR1  INTR2  INTR3  INTR4  INTR5      INTRX\n#\n# Use 0 and positive numbers to specify the RDMA completion vector, or specify -1 to allow\n# the server to use a random vector for a new connection. The default vector is -1.\n#\n# rdma-completion-vector 0\n\n################################# GENERAL #####################################\n\n# By default the server does not run as a daemon. Use 'yes' if you need it.\n# Note that the server will write a pid file in /run/valkey/valkey.pid when daemonized.\n# When the server is supervised by upstart or systemd, this parameter has no impact.\ndaemonize yes\n\n# If you run the server from upstart or systemd, the server can interact with your\n# supervision tree. 
Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting the server into SIGSTOP mode\n#                        requires \"expect stop\" in your upstart job config\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#                        on startup, and updating the server status on a regular\n#                        basis.\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous pings back to your supervisor.\n#\n# The default is \"no\". To run under upstart/systemd, you can simply uncomment\n# the line below:\n#\n# supervised auto\n\n# If a pid file is specified, the server writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non-daemonized, no pid file is created if none is\n# specified in the configuration. 
When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/valkey/valkey.pid\".\n#\n# Creating a pid file is best effort: if the server is not able to create it\n# nothing bad happens, the server will start and run normally.\n#\n# Note that on modern Linux systems \"/run/valkey/valkey.pid\" is more conforming\n# and should be used instead.\npidfile /run/valkey/valkey.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (lots of rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\n# nothing (nothing is logged)\nloglevel warning\n\n# Specify the logging format.\n# This can be one of:\n#\n# - legacy: the default, traditional log format\n# - logfmt: a structured log format; see https://www.brandur.org/logfmt\n#\n# log-format legacy\n\n# Specify the timestamp format used in logs using 'log-timestamp-format'.\n#\n# - legacy: default format\n# - iso8601: ISO 8601 extended date and time with time zone, of the form\n#   yyyy-mm-ddThh:mm:ss.sss±hh:mm\n# - milliseconds: milliseconds since the epoch\n#\n# log-timestamp-format legacy\n\n# Specify the log file name. An empty string can be used to force\n# the server to log to the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null.\nlogfile /var/log/valkey/valkey-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident valkey\n\n# Specify the syslog facility. 
Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# To disable the built-in crash log, which will possibly produce cleaner core\n# dumps when they are needed, uncomment the following:\n#\n# crash-log-enabled no\n\n# To disable the fast memory check that's run as part of the crash log, which\n# will possibly let the server terminate sooner, uncomment the following:\n#\n# crash-memcheck-enabled no\n\n# Set the number of databases. The default database is DB 0; you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1.\ndatabases 8\n\n# By default the server shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY and syslog logging is\n# disabled. Basically this means that normally a logo is displayed only in\n# interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo no\n\n# User data, including keys, values, client names, and ACL usernames, can be\n# logged as part of assertions and other error cases. To prevent sensitive user\n# information, such as PII, from being recorded in the server log file, this\n# user data is hidden from the log by default. If you need to log user data for\n# debugging or troubleshooting purposes, you can disable this feature by\n# changing the config value to no.\nhide-user-data-from-log yes\n\n# By default, the server modifies the process title (as seen in 'top' and 'ps') to\n# provide some runtime information. It is possible to disable this and leave\n# the process name as executed by setting the following to no.\nset-proc-title yes\n\n# When changing the process title, the server uses the following template to construct\n# the modified title.\n#\n# Template variables are specified in curly brackets. 
The following variables are\n# supported:\n#\n# {title}           Name of process as executed if parent, or type of child process.\n# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or\n#                   Unix socket if only that's available.\n# {server-mode}     Special mode, i.e. \"[sentinel]\" or \"[cluster]\".\n# {port}            TCP port listening on, or 0.\n# {tls-port}        TLS port listening on, or 0.\n# {unixsocket}      Unix domain socket listening on, or \"\".\n# {config-file}     Name of configuration file used.\n#\nproc-title-template \"{title} {listen-addr} {server-mode}\"\n\n# Set the locale used for string comparison operations; this also\n# affects the performance of Lua scripts. An empty string indicates that the\n# locale is derived from the environment variables.\nlocale-collate \"\"\n\n# Valkey is largely compatible with Redis OSS, apart from a few cases where\n# Valkey identifies itself as \"Valkey\" rather than \"Redis\". Extended\n# Redis OSS compatibility mode makes Valkey pretend to be Redis. Enable this\n# only if you have problems with tools or clients. 
This is a temporary\n# configuration added in Valkey 8.0 and is scheduled to have no effect in Valkey\n# 9.0 and be completely removed in Valkey 10.0.\n#\nextended-redis-compatibility yes\n\n################################ SNAPSHOTTING  ################################\n\n# Save the DB to disk.\n#\n# save <seconds> <changes> [<seconds> <changes> ...]\n#\n# The server will save the DB if the given number of seconds has elapsed and it\n# surpassed the given number of write operations against the DB.\n#\n# Snapshotting can be completely disabled with a single empty string argument\n# as in the following example:\n#\n# save \"\"\n#\n# Unless specified otherwise, by default the server will save the DB:\n#   * After 3600 seconds (an hour) if at least 1 change was performed\n#   * After 300 seconds (5 minutes) if at least 100 changes were performed\n#   * After 60 seconds if at least 10000 changes were performed\n#\n# You can set these explicitly by uncommenting the following line.\n#\n# save 3600 1 300 100 60 10000\n\n# By default the server will stop accepting writes if RDB snapshots are enabled\n# (at least one save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again, the server will\n# automatically allow writes again.\n#\n# However, if you have set up proper monitoring of the server\n# and persistence, you may want to disable this feature so that the server will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default compression is enabled as it's almost always a win.\n# If you want to save some CPU in the saving child set it to 'no' but\n# the dataset will likely be bigger if you have 
compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# Valkey can try to load an RDB dump produced by a future version of Valkey.\n# This can only work on a best-effort basis, because future RDB versions may\n# contain information that's not known to the current version. If no new features\n# are used, it may be possible to import the data produced by a later version,\n# but loading is aborted if unknown information is encountered. Possible values\n# are 'strict' and 'relaxed'. This also applies to replication and the RESTORE\n# command.\nrdb-version-check relaxed\n\n# Enables or disables full sanitization checks for ziplist, listpack, etc. when\n# loading an RDB or RESTORE payload. This reduces the chances of an assertion or\n# crash later on while processing commands.\n# Options:\n#   no         - Never perform full sanitization\n#   yes        - Always perform full sanitization\n#   clients    - Perform full sanitization only for user connections.\n#                Excludes: RDB files, RESTORE commands received from the primary\n#                connection, and client connections which have the\n#                skip-sanitize-payload ACL flag.\n# The default should be 'clients' but since it currently affects cluster\n# resharding via MIGRATE, it is temporarily set to 'no' by default.\n#\n# sanitize-dump-payload no\n\n# The filename where the DB will be dumped\ndbfilename dump.rdb\n\n# Remove RDB files used by replication in instances without persistence\n# enabled. 
By default this option is disabled; however, there are environments\n# where for regulations or other security concerns, RDB files persisted on\n# disk by primaries in order to feed replicas, or stored on disk by replicas\n# in order to load them for the initial synchronization, should be deleted\n# ASAP. Note that this option ONLY WORKS in instances that have both AOF\n# and RDB persistence disabled, otherwise it is completely ignored.\n#\n# An alternative (and sometimes better) way to obtain the same effect is\n# to use diskless replication on both primary and replica instances. However\n# in the case of replicas, diskless is not always an option.\nrdb-del-sync-files no\n\n# The working directory.\n#\n# The server log is written relative to this directory, if the 'logfile'\n# configuration directive is a relative path.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# The Cluster config file is written relative to this directory, if the\n# 'cluster-config-file' configuration directive is a relative path.\n#\n# Note that you must specify a directory here, not a file name.\n# Note that modifying 'dir' during runtime may have unexpected behavior;\n# for example, when a child process is running, related file operations may\n# have unexpected effects.\ndir /var/lib/valkey/\n\n################################# REPLICATION #################################\n\n# Primary-Replica replication. Use replicaof to make a server a copy of\n# another server. 
A few things to understand ASAP about replication.\n#\n#   +------------------+      +---------------+\n#   |     Primary      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Replication is asynchronous, but you can configure a primary to\n#    stop accepting writes if it appears not to be connected to at least\n#    a given number of replicas.\n# 2) Replicas are able to perform a partial resynchronization with the\n#    primary if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. After a\n#    network partition replicas automatically try to reconnect to primaries\n#    and resynchronize with them.\n#\n# replicaof <primary_ip> <primary_port>\n\n# If the primary is password-protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process; otherwise the primary will\n# refuse the replica request.\n#\n# primaryauth <primary-password>\n#\n# However this is not enough if you are using ACLs\n# and the default user is not capable of running the PSYNC\n# command and/or other commands needed for replication. 
In this case it's\n# better to configure a special user to use with replication, and specify the\n# primaryuser configuration as such:\n#\n# primaryuser <username>\n#\n# When primaryuser is specified, the replica will authenticate against its\n# primary using the new AUTH form: AUTH <username> <password>.\n\n# When a replica loses its connection with the primary, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) If replica-serve-stale-data is set to 'no' the replica will reply with error\n#    \"MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'\"\n#    to all data access commands, excluding commands such as:\n#    INFO, REPLICAOF, AUTH, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,\n#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,\n#    HOST and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the primary) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# By default, replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. 
To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# New replicas and reconnecting replicas that are not able to continue the\n# replication process just receiving differences, need to do what is called a\n# \"full synchronization\". An RDB file is transmitted from the primary to the\n# replicas.\n#\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The primary creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The primary creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child\n# producing the RDB file finishes its work. 
With diskless replication instead\n# once the transfer starts, new replicas arriving will be queued and a new\n# transfer will start when the current one terminates.\n#\n# When diskless replication is used, the primary waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple\n# replicas will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync yes\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new replicas arriving, that will be queued for the next RDB transfer, so the\n# server waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# When diskless replication is enabled with a delay, it is possible to let\n# the replication start before the maximum delay is reached if the maximum\n# number of replicas expected have connected. Default of 0 means that the\n# maximum is not defined and the server will wait the full delay.\nrepl-diskless-sync-max-replicas 0\n\n# -----------------------------------------------------------------------------\n# WARNING: Since in this setup the replica does not immediately store an RDB on\n# disk, it may cause data loss during failovers. 
RDB diskless load + server\n# modules not handling I/O reads may cause the server to abort in case of I/O errors\n# during the initial synchronization stage with the primary.\n# -----------------------------------------------------------------------------\n#\n# A replica can load the RDB it reads from the replication link directly from the\n# socket, or store the RDB to a file and read that file after it was completely\n# received from the primary.\n#\n# In many cases the disk is slower than the network, and storing and loading\n# the RDB file may increase replication time (and even increase the primary's\n# Copy on Write memory and replica buffers).\n# However, when parsing the RDB file directly from the socket, in order to avoid\n# data loss it's only safe to flush the current dataset when the new dataset is\n# fully loaded in memory, resulting in higher memory usage.\n# For this reason we have the following options:\n#\n# \"disabled\"          - Don't use diskless load (store the rdb file to the disk first)\n# \"swapdb\"            - Keep current db contents in RAM while parsing the data directly\n#                       from the socket. Replicas in this mode can keep serving current\n#                       dataset while replication is in progress, except for cases where\n#                       they can't recognize the primary as having a data set from the\n#                       same replication history.\n#                       Note that this requires sufficient memory; if you don't have it,\n#                       you risk an OOM kill.\n# \"on-empty-db\"       - Use diskless load only when current dataset is empty. This is\n#                       safer and avoids having the old and new datasets loaded side by\n#                       side during replication.\n# \"flush-before-load\" - [dangerous] Flush all data before parsing. 
Note that if\n#                       there's a problem before the replication succeeds you may\n#                       lose all your data.\nrepl-diskless-load disabled\n\n# This dual channel replication sync feature optimizes the full synchronization process\n# between a primary and its replicas. When enabled, it reduces both memory and CPU load\n# on the primary server.\n#\n# How it works:\n# 1. During full sync, instead of accumulating replication data on the primary server,\n#    the data is sent directly to the syncing replica.\n# 2. The primary's background save (bgsave) process streams the RDB snapshot directly\n#    to the replica over a separate connection.\n#\n# Tradeoff:\n# While this approach reduces load on the primary, it shifts the burden of storing\n# the replication buffer to the replica. This means the replica must have sufficient\n# memory to accommodate the buffer during synchronization. However, this tradeoff is\n# generally beneficial as it prevents potential performance degradation on the primary\n# server, which is typically handling more critical operations.\n#\n# When toggling this configuration on or off during an ongoing synchronization process,\n# it does not change the already running sync method. The new configuration will take\n# effect only for subsequent synchronization processes.\n\ndual-channel-replication-enabled no\n\n# The primary sends PINGs to its replicas at a predefined interval. It's possible to\n# change this interval with the repl-ping-replica-period option. 
The default\n# value is 10 seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of primaries (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the primary and the replica. The default\n# value is 60 seconds.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select \"yes\", the server will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the primary and replicas are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. 
The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a\n# replica wants to reconnect again, often a full resync is not needed, but a\n# partial resync is enough, just passing the portion of data the replica\n# missed while disconnected.\n#\n# The bigger the replication backlog, the longer the replica can endure the\n# disconnect and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated if there is at least one replica connected.\n#\n# repl-backlog-size 1mb\n\n# After a primary has no connected replicas for some time, the backlog will be\n# freed. The following option configures the number of seconds that need to\n# elapse, starting from the time the last replica disconnected, for the backlog\n# buffer to be freed.\n#\n# Note that replicas never free the backlog due to a timeout, since they may be\n# promoted to primaries later, and should be able to correctly \"partially\n# resynchronize\" with other replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The replica priority is an integer number published by the server in the INFO\n# output. 
It is used by Sentinel in order to select a replica to promote\n# into a primary if the primary is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel\n# will pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of primary, so a replica with priority of 0 will never be selected by\n# Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# The propagation error behavior controls how the server will behave when it is\n# unable to handle a command being processed in the replication stream from a primary\n# or processed while reading from an AOF file. Errors that occur during propagation\n# are unexpected, and can cause data inconsistency.\n#\n# If an application wants to ensure there is no data divergence, this configuration\n# should be set to 'panic' instead. The value can also be set to 'panic-on-replicas'\n# to only panic when a replica encounters an error on the replication stream. One of\n# these two panic values will become the default value in the future once there are\n# sufficient safety mechanisms in place to prevent false positive crashes.\n#\n# propagation-error-behavior ignore\n\n# Replica ignore disk write errors controls the behavior of a replica when it is\n# unable to persist a write command received from its primary to disk. 
By default,\n# this configuration is set to 'no' and will crash the replica in this condition.\n# It is not recommended to change this default.\n#\n# replica-ignore-disk-write-errors no\n\n# Make the primary forbid expiration and eviction.\n# This is useful for sync tools, because expiration and eviction may cause data corruption.\n# Sync tools can mark their connections as an importing source with CLIENT IMPORT-SOURCE.\n# NOTICE: Clients should avoid writing the same key on the source server and the destination server.\n#\n# import-mode no\n\n# -----------------------------------------------------------------------------\n# By default, Sentinel includes all replicas in its reports. A replica\n# can be excluded from Sentinel's announcements. An unannounced replica\n# will be ignored by the 'sentinel replicas <primary>' command and won't be\n# exposed to Sentinel's clients.\n#\n# This option does not change the behavior of replica-priority. Even with\n# replica-announced set to 'no', the replica can be promoted to primary. 
To\n# prevent this behavior, set replica-priority to 0.\n#\n# replica-announced yes\n\n# It is possible for a primary to stop accepting writes if there are fewer than\n# N replicas connected, with a lag less than or equal to M seconds.\n#\n# The N replicas need to be in \"online\" state.\n#\n# The lag in seconds, which must be <= the specified value, is calculated from\n# the last ping received from the replica, which is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough replicas\n# are available, to the specified number of seconds.\n#\n# For example, to require at least 3 replicas with a lag <= 10 seconds use:\n#\n# min-replicas-to-write 3\n# min-replicas-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-replicas-to-write is set to 0 (feature disabled) and\n# min-replicas-max-lag is set to 10.\n\n# A primary is able to list the address and port of the attached\n# replicas in different ways. For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Sentinel in order to discover replica instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a primary.\n#\n# The listed IP address and port normally reported by a replica is\n# obtained in the following way:\n#\n#   IP: The address is auto-detected by checking the peer address\n#   of the socket used by the replica to connect with the primary.\n#\n#   Port: The port is communicated by the replica during the replication\n#   handshake, and is normally the port that the replica is using to\n#   listen for connections.\n#\n# However, when port forwarding or Network Address Translation (NAT) is\n# used, the replica may actually be reachable via different IP and port\n# pairs. 
The following two options can be used by a replica in order to\n# report to its primary a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both the options if you need to override just\n# the port or the IP address.\n#\n# replica-announce-ip 5.5.5.5\n# replica-announce-port 1234\n\n############################### KEYS TRACKING #################################\n\n# The client side caching of values is assisted via server-side support.\n# This is implemented using an invalidation table that remembers, using\n# a radix tree indexed by key name, what clients have which keys. In turn\n# this is used in order to send invalidation messages to clients. Please\n# check this page to understand more about the feature:\n#\n#   https://valkey.io/topics/client-side-caching\n#\n# When tracking is enabled for a client, all the read only queries are assumed\n# to be cached: this will force the server to store information in the invalidation\n# table. When keys are modified, such information is flushed away, and\n# invalidation messages are sent to the clients. However if the workload is\n# heavily dominated by reads, the server could use more and more memory in order\n# to track the keys fetched by many clients.\n#\n# For this reason it is possible to configure a maximum fill value for the\n# invalidation table. By default it is set to 1M keys, and once this limit\n# is reached, the server will start to evict keys in the invalidation table\n# even if they were not modified, just to reclaim memory: this will in turn\n# force the clients to invalidate the cached values. 
Basically, the table\n# maximum size is a trade-off between the memory you want to spend server\n# side to track information about who cached what, and the ability of clients\n# to retain cached objects in memory.\n#\n# If you set the value to 0, it means there are no limits, and the server will\n# retain as many keys as needed in the invalidation table.\n# In the \"stats\" INFO section, you can find information about the number of\n# keys in the invalidation table at every given moment.\n#\n# Note: when key tracking is used in broadcasting mode, no memory is used\n# on the server side, so this setting has no effect.\n#\n# tracking-table-max-keys 1000000\n\n################################## SECURITY ###################################\n\n# Warning: since the server is pretty fast, an outside user can try up to\n# 1 million passwords per second against a modern box. This means that you\n# should use very strong passwords, otherwise they will be very easy to break.\n# Note that because the password is really a shared secret between the client\n# and the server, and should not be memorized by any human, the password\n# can easily be a long string from /dev/urandom or whatever, so by using a\n# long and unguessable password no brute force attack will be possible.\n\n# ACL users are defined in the following format:\n#\n#   user <username> ... acl rules ...\n#\n# For example:\n#\n#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99\n#\n# The special username \"default\" is used for new connections. If this user\n# has the \"nopass\" rule, then new connections will be immediately authenticated\n# as the \"default\" user without the need for any password provided via the\n# AUTH command. 
Otherwise, if the \"default\" user is not flagged with \"nopass\",\n# the connections will start in an unauthenticated state, and will require\n# AUTH (or the HELLO command AUTH option) in order to be authenticated and\n# start to work.\n#\n# The ACL rules that describe what a user can do are the following:\n#\n#  on           Enable the user: it is possible to authenticate as this user.\n#  off          Disable the user: it's no longer possible to authenticate\n#               with this user, however the already authenticated connections\n#               will still work.\n#  skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.\n#  sanitize-payload         RESTORE dump-payload is sanitized (default).\n#  +<command>   Allow the execution of that command.\n#               May be used with `|` for allowing subcommands (e.g. \"+config|get\")\n#  -<command>   Disallow the execution of that command.\n#               May be used with `|` for blocking subcommands (e.g. \"-config|set\")\n#  +@<category> Allow the execution of all the commands in such category.\n#               Valid categories are @admin, @set, @sortedset, ...\n#               and so forth; see the full list in the server.c file where\n#               the server command table is described and defined.\n#               The special category @all means all the commands, both the ones\n#               currently present in the server and the ones that will be\n#               loaded in the future via modules.\n#  +<command>|first-arg  Allow a specific first argument of an otherwise\n#                        disabled command. It is only supported on commands with\n#                        no sub-commands, and is not allowed as a negative form\n#                        like -SELECT|1, only additive starting with \"+\". This\n#                        feature is deprecated and may be removed in the future.\n#  allcommands  Alias for +@all. 
Note that it implies the ability to execute\n#               all the future commands loaded via the modules system.\n#  nocommands   Alias for -@all.\n#  ~<pattern>   Add a pattern of keys that can be mentioned as part of\n#               commands. For instance ~* allows all the keys. The pattern\n#               is a glob-style pattern like the one used by KEYS.\n#               It is possible to specify multiple patterns.\n# %R~<pattern>  Add key read pattern that specifies which keys can be read\n#               from.\n# %W~<pattern>  Add key write pattern that specifies which keys can be\n#               written to.\n#  allkeys      Alias for ~*\n#  resetkeys    Flush the list of allowed key patterns.\n#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be\n#               accessed by the user. It is possible to specify multiple channel\n#               patterns.\n#  allchannels  Alias for &*\n#  resetchannels            Flush the list of allowed channel patterns.\n#  ><password>  Add this password to the list of valid passwords for the user.\n#               For example >mypass will add \"mypass\" to the list.\n#               This directive clears the \"nopass\" flag (see later).\n#  <<password>  Remove this password from the list of valid passwords.\n#  nopass       All the set passwords of the user are removed, and the user\n#               is flagged as requiring no password: it means that every\n#               password will work against this user. If this directive is\n#               used for the default user, every new connection will be\n#               immediately authenticated with the default user without\n#               any explicit AUTH command required. Note that the \"resetpass\"\n#               directive will clear this condition.\n#  resetpass    Flush the list of allowed passwords. It also removes the\n#               \"nopass\" status. 
After \"resetpass\" the user has no associated\n#               passwords and there is no way to authenticate without adding\n#               some password (or setting it as \"nopass\" later).\n#  reset        Performs the following actions: resetpass, resetkeys, resetchannels,\n#               allchannels (if acl-pubsub-default is set), off, clearselectors, -@all.\n#               The user returns to the same state it had immediately after its creation.\n# (<options>)   Create a new selector with the options specified within the\n#               parentheses and attach it to the user. Each option should be\n#               space separated. The first character must be ( and the last\n#               character must be ).\n# clearselectors            Remove all of the currently attached selectors.\n#                           Note this does not change the \"root\" user permissions,\n#                           which are the permissions directly applied onto the\n#                           user (outside the parentheses).\n#\n# ACL rules can be specified in any order: for instance you can start with\n# passwords, then flags, or key patterns. However note that the additive\n# and subtractive rules will CHANGE MEANING depending on the ordering.\n# For instance see the following example:\n#\n#   user alice on +@all -DEBUG ~* >somepassword\n#\n# This will allow \"alice\" to use all the commands with the exception of the\n# DEBUG command, since +@all added all the commands to the set of commands\n# alice can use, and later DEBUG was removed. 
However if we invert the order\n# of the two ACL rules the result will be different:\n#\n#   user alice on -DEBUG +@all ~* >somepassword\n#\n# Now DEBUG was removed when alice did not yet have any commands in the set of\n# allowed commands; later all the commands are added, so the user will be able to\n# execute everything.\n#\n# Basically ACL rules are processed left-to-right.\n#\n# The following is a list of command categories and their meanings:\n# * keyspace - Writing or reading from keys, databases, or their metadata\n#     in a type agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,\n#     KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,\n#     key or metadata will also have the `write` category. Commands that only read\n#     the keyspace, key or metadata will have the `read` category.\n# * read - Reading from keys (values or metadata). Note that commands that don't\n#     interact with keys will not have either `read` or `write`.\n# * write - Writing to keys (values or metadata)\n# * admin - Administrative commands. Normal applications will never need to use\n#     these. Includes REPLICAOF, CONFIG, DEBUG, SAVE, MONITOR, ACL, SHUTDOWN, etc.\n# * dangerous - Potentially dangerous (each should be considered with care for\n#     various reasons). This includes FLUSHALL, MIGRATE, RESTORE, SORT, KEYS,\n#     CLIENT, DEBUG, INFO, CONFIG, SAVE, REPLICAOF, etc.\n# * connection - Commands affecting the connection or other connections.\n#     This includes AUTH, SELECT, COMMAND, CLIENT, ECHO, PING, etc.\n# * blocking - Potentially blocking the connection until released by another\n#     command.\n# * fast - Fast O(1) commands. 
May loop on the number of arguments, but not the\n#     number of elements in the key.\n# * slow - All commands that are not Fast.\n# * pubsub - PUBLISH / SUBSCRIBE related\n# * transaction - WATCH / MULTI / EXEC related commands.\n# * scripting - Scripting related.\n# * set - Data type: sets related.\n# * sortedset - Data type: zsets related.\n# * list - Data type: lists related.\n# * hash - Data type: hashes related.\n# * string - Data type: strings related.\n# * bitmap - Data type: bitmaps related.\n# * hyperloglog - Data type: hyperloglog related.\n# * geo - Data type: geo related.\n# * stream - Data type: streams related.\n#\n# For more information about ACL configuration please refer to\n# the Valkey web site at https://valkey.io/topics/acl\n\n# ACL LOG\n#\n# The ACL Log tracks failed commands and authentication events associated\n# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked\n# by ACLs. The ACL Log is stored in memory. You can reclaim memory with\n# ACL LOG RESET. Define the maximum entry length of the ACL Log below.\nacllog-max-len 128\n\n# Using an external ACL file\n#\n# Instead of configuring users here in this file, it is possible to use\n# a stand-alone file just listing users. The two methods cannot be mixed:\n# if you configure users here and at the same time you activate the external\n# ACL file, the server will refuse to start.\n#\n# The format of the external ACL user file is exactly the same as the\n# format that is used inside valkey.conf to describe users.\n#\n# aclfile /etc/valkey/users.acl\n\n# IMPORTANT NOTE: \"requirepass\" is just a compatibility\n# layer on top of the new ACL system. The option effect will be just setting\n# the password for the default user. 
Clients will still authenticate using\n# AUTH <password> as usual, or more explicitly with AUTH default <password>\n# if they follow the new protocol: both will work.\n#\n# The requirepass option is not compatible with the aclfile option and the ACL LOAD\n# command; these will cause requirepass to be ignored.\n#\n\nuser default on >isfoobared allcommands allkeys\n\n# The default Pub/Sub channels permission for new users is controlled by the\n# acl-pubsub-default configuration directive, which accepts one of these values:\n#\n# allchannels: grants access to all Pub/Sub channels\n# resetchannels: revokes access to all Pub/Sub channels\n#\n# acl-pubsub-default defaults to the 'resetchannels' permission.\n#\n# acl-pubsub-default resetchannels\n\n# Command renaming (DEPRECATED).\n#\n# ------------------------------------------------------------------------\n# WARNING: avoid using this option if possible. Instead use ACLs to remove\n# commands from the default user, and put them only in some admin user you\n# create for administrative purposes.\n# ------------------------------------------------------------------------\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to replicas may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. 
By default\n# this limit is set to 10000 clients, however if the server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as the server reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached the server will close all the new connections sending\n# an error 'max number of clients reached'.\n#\n# IMPORTANT: With a cluster-enabled setup, the max number of connections is also\n# shared with the cluster bus: every node in the cluster will use two\n# connections, one incoming and another outgoing. It is important to size the\n# limit accordingly in case of very large clusters.\n#\nmaxclients 4000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached the server will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If the server can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', the server will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using the server as an LRU or LFU cache, or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have replicas attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the replicas are subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of replicas is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... 
if you have replicas attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for replica\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\nmaxmemory 88MB\n\n# MAXMEMORY POLICY: how the server will select what to remove when maxmemory\n# is reached. You can select one from the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU, only keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key having an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# LRU, LFU and volatile-ttl are all implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, when there are no suitable keys for\n# eviction, the server will return an error on write operations that require\n# more memory. These are usually commands that create new keys, add data or\n# modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,\n# SORT (due to the STORE argument), and EXEC (if the transaction includes any\n# command that requires memory).\n#\n# The default is: noeviction\n#\nmaxmemory-policy allkeys-lru\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune them for speed or\n# accuracy. By default the server will check five keys and pick the one that was\n# used least recently; you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 
10 approximates\n# true LRU very closely but costs more CPU. 3 is faster but not very accurate. The maximum\n# value that can be set is 64.\n#\n# maxmemory-samples 5\n\n# Eviction processing is designed to function well with the default setting.\n# If there is an unusually large amount of write traffic, this value may need to\n# be increased.  Decreasing this value may reduce latency at the risk of reduced\n# eviction processing effectiveness.\n#   0 = minimum latency, 10 = default, 100 = process without regard to latency\n#\n# maxmemory-eviction-tenacity 10\n\n# By default a replica will ignore its maxmemory setting\n# (unless it is promoted to primary after a failover or manually). It means\n# that the eviction of keys will be handled just by the primary, which sends\n# DEL commands to the replica as keys are evicted on the primary side.\n#\n# This behavior ensures that primaries and replicas stay consistent, and is usually\n# what you want; however, if your replica is writable, or you want the replica\n# to have a different memory setting, and you are sure all the writes performed\n# to the replica are idempotent, then you may change this default (but be sure\n# to understand what you are doing).\n#\n# Note that since the replica by default does not evict, it may end up using more\n# memory than the one set via maxmemory (there are certain buffers that may\n# be larger on the replica, or data structures may sometimes take more memory\n# and so forth). So make sure you monitor your replicas and make sure they\n# have enough memory to never hit a real out-of-memory condition before the\n# primary hits the configured maxmemory setting.\n#\n# replica-ignore-maxmemory yes\n\n# The server reclaims expired keys in two ways: upon access when those keys are\n# found to be expired, and also in the background, in what is called the\n# \"active expire cycle\". 
The key space is slowly and incrementally scanned\n# looking for expired keys to reclaim, so that it is possible to free memory\n# of keys that are expired and will never be accessed again in a short time.\n#\n# The default effort of the expire cycle will try to avoid having more than\n# ten percent of expired keys still in memory, and will try to avoid consuming\n# more than 25% of total memory or adding latency to the system. However\n# it is possible to increase the expire \"effort\" that is normally set to\n# \"1\", to a greater value, up to the value \"10\". At its maximum value the\n# system will use more CPU, longer cycles (and technically may introduce\n# more latency), and will tolerate fewer already expired keys still present\n# in the system. It's a tradeoff between memory, CPU and latency.\n#\n# active-expire-effort 1\n\n############################# LAZY FREEING ####################################\n\n# When keys are deleted, the server has historically freed their memory using\n# blocking operations. It means that the server stopped processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in the server. However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons, lazy freeing (or asynchronous freeing) has been\n# introduced. With lazy freeing, keys are deleted in constant time. Another\n# thread will incrementally free the object in the background as fast as\n# possible.\n#\n# Starting from Valkey 8.0, lazy freeing is enabled by default. 
It is possible\n# to retain the synchronous freeing behaviour by setting the lazyfree related\n# configuration directives to 'no'.\n\n# Commands like DEL, FLUSHALL and FLUSHDB delete keys, but the server can also\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically the server deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a replica performs a full resynchronization with\n#    its primary, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases, the default is to release memory in a non-blocking\n# way.\n\nlazyfree-lazy-eviction yes\nlazyfree-lazy-expire yes\nlazyfree-lazy-server-del yes\nreplica-lazy-flush yes\n\n# For keys deleted using the DEL command, lazy freeing is controlled by the\n# configuration directive 'lazyfree-lazy-user-del'. The default is 'yes'. 
The\n# UNLINK command is identical to the DEL command, except that UNLINK always\n# frees the memory lazily, regardless of this configuration directive:\n\nlazyfree-lazy-user-del yes\n\n# FLUSHDB, FLUSHALL, SCRIPT FLUSH and FUNCTION FLUSH support both asynchronous and synchronous\n# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the\n# commands. When neither flag is passed, this directive will be used to determine\n# if the data should be deleted asynchronously.\n#\n# When a replica performs a node reset via CLUSTER RESET, the entire\n# database content is removed to allow the node to become an empty primary.\n# This directive also determines whether the data should be deleted asynchronously.\n#\n# There are many problems with running flush synchronously. Even in single CPU\n# environments, the thread managers must balance between freeing memory and\n# serving incoming requests. The default value is yes.\n\nlazyfree-lazy-user-flush yes\n\n################################ THREADED I/O #################################\n\n# The server is mostly single threaded, however there are certain threaded\n# operations such as UNLINK, slow I/O accesses and other things that are\n# performed on side threads.\n#\n# Now it is also possible to handle the server clients' socket reads and writes\n# in different I/O threads. Since writing in particular is slow, users normally\n# use pipelining in order to speed up the server performance per\n# core, and spawn multiple instances in order to scale further. 
Using I/O\n# threads it is possible to easily speed up the server by up to two times without\n# resorting to pipelining or sharding of the instance.\n#\n# By default threading is disabled; we suggest enabling it only on machines\n# that have at least 3 cores, leaving at least one spare core.\n# We also recommend using threaded I/O only if you actually have performance problems, with\n# instances being able to use a quite big percentage of CPU time, otherwise\n# there is no point in using this feature.\n#\n# So for instance if you have a four core box, try to use 2 or 3 I/O\n# threads; if you have an 8 core box, try to use 6 threads. In order to\n# enable I/O threads use the following configuration directive:\n#\n# io-threads 4\n#\n# Setting io-threads to 1 will just use the main thread as usual.\n# When I/O threads are enabled, we use threads for reads and writes, that is\n# to thread the write and read syscall and transfer the client buffers to the\n# socket and to enable threading of reads and protocol parsing.\n#\n# When multiple commands are parsed by the I/O threads and ready for execution,\n# we take advantage of knowing the next set of commands and prefetch their\n# required dictionary entries in a batch. This reduces memory access costs.\n#\n# The optimal batch size depends on the specific workflow of the user.\n# The default batch size is 16, which can be modified using the\n# 'prefetch-batch-max-size' config.\n#\n# When the config is set to 0, prefetching is disabled.\n#\n# prefetch-batch-max-size 16\n#\n# NOTE:\n# 1. The 'io-threads-do-reads' config is deprecated and has no effect. Please\n# avoid using this config if possible.\n#\n# 2. 
If you want to test the server speedup using valkey-benchmark, make\n# sure you also run the benchmark itself in threaded mode, using the\n# --threads option to match the number of server threads, otherwise you'll not\n# be able to notice the improvements.\n\n############################ KERNEL OOM CONTROL ##############################\n\n# On Linux, it is possible to hint the kernel OOM killer on what processes\n# should be killed first when out of memory.\n#\n# Enabling this feature makes the server actively control the oom_score_adj value\n# for all its processes, depending on their role. The default scores will\n# attempt to have background child processes killed before all others, and\n# replicas killed before primaries.\n#\n# The server supports these options:\n#\n# no:       Don't make changes to oom-score-adj (default).\n# yes:      Alias to \"relative\" see below.\n# absolute: Values in oom-score-adj-values are written as is to the kernel.\n# relative: Values are used relative to the initial value of oom_score_adj when\n#           the server starts and are then clamped to a range of -1000 to 1000.\n#           Because typically the initial value is 0, they will often match the\n#           absolute values.\noom-score-adj no\n\n# When oom-score-adj is used, this directive controls the specific values used\n# for primary, replica and background child processes. Values range -2000 to\n# 2000 (higher means more likely to be killed).\n#\n# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)\n# can freely increase their value, but not decrease it below its initial\n# settings. 
This means that setting oom-score-adj to \"relative\" and setting the\n# oom-score-adj-values to positive values will always succeed.\noom-score-adj-values 0 200 800\n\n\n#################### KERNEL transparent hugepage CONTROL ######################\n\n# Usually the kernel Transparent Huge Pages control is set to \"madvise\" or\n# \"never\" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which\n# case this config has no effect. On systems in which it is set to \"always\",\n# the server will attempt to disable it specifically for the server process in order\n# to avoid latency problems with fork(2) and CoW.\n# If for some reason you prefer to keep it enabled, you can set this config to\n# \"no\" and the kernel global to \"always\".\n\ndisable-thp yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default the server asynchronously dumps the dataset on disk. This mode is\n# good enough in many applications, but an issue with the server process or\n# a power outage may result in a few minutes of lost writes (depending on\n# the configured save points).\n#\n# The Append Only File is an alternative persistence mode that provides\n# much better durability. For instance using the default data fsync policy\n# (see later in the config file) the server can lose just one second of writes in a\n# dramatic event like a server power outage, or a single write if something goes\n# wrong with the process itself while the operating system is\n# still running correctly.\n#\n# AOF and RDB persistence can be enabled at the same time without problems.\n# If the AOF is enabled on startup the server will load the AOF, that is the file\n# with the best durability guarantees.\n#\n# Note that changing this value in a config file of an existing database and\n# restarting the server can lead to data loss. 
A conversion needs to be done\n# by setting it via CONFIG command on a live server first.\n#\n# Please check https://valkey.io/topics/persistence for more information.\n\nappendonly no\n\n# The base name of the append only file.\n#\n# The server uses a set of append-only files to persist the dataset\n# and changes applied to it. There are two basic types of files in use:\n#\n# - Base files, which are a snapshot representing the complete state of the\n#   dataset at the time the file was created. Base files can be either in\n#   the form of RDB (binary serialized) or AOF (textual commands).\n# - Incremental files, which contain additional commands that were applied\n#   to the dataset following the previous file.\n#\n# In addition, manifest files are used to track the files and the order in\n# which they were created and should be applied.\n#\n# Append-only file names are created by the server following a specific pattern.\n# The file name's prefix is based on the 'appendfilename' configuration\n# parameter, followed by additional information about the sequence and type.\n#\n# For example, if appendfilename is set to appendonly.aof, the following file\n# names could be derived:\n#\n# - appendonly.aof.1.base.rdb as a base file.\n# - appendonly.aof.1.incr.aof, appendonly.aof.2.incr.aof as incremental files.\n# - appendonly.aof.manifest as a manifest file.\n\nappendfilename \"appendonly.aof\"\n\n# For convenience, the server stores all persistent append-only files in a dedicated\n# directory. The name of the directory is determined by the appenddirname\n# configuration parameter.\n\nappenddirname \"appendonlydir\"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OS will really flush\n# data on disk, some other OS will just try to do it ASAP.\n#\n# The server supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. 
Faster.\n# always: fsync after every write to the append only log. Slow, but safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is \"everysec\", as that's usually the right compromise between\n# speed and data safety. It's up to you to understand if you can relax this to\n# \"no\", which will let the operating system flush the output buffer when\n# it wants, for better performance (but if you can live with the idea of\n# some data loss consider the default persistence mode that's snapshotting),\n# or on the contrary, use \"always\", which is very slow but a bit safer than\n# everysec.\n#\n# For more details please check the following article:\n# http://antirez.com/post/redis-persistence-demystified.html\n#\n# If unsure, use \"everysec\".\n\n# appendfsync always\nappendfsync everysec\n# appendfsync no\n\n# When the AOF fsync policy is set to always or everysec, and a background\n# saving process (a background save or AOF log background rewriting) is\n# performing a lot of I/O against the disk, in some Linux configurations\n# the server may block too long on the fsync() call. Note that there is no fix for\n# this currently, as even performing fsync in a different thread will block\n# our synchronous write(2) call.\n#\n# In order to mitigate this problem it's possible to use the following option\n# that will prevent fsync() from being called in the main process while a\n# BGSAVE or BGREWRITEAOF is in progress.\n#\n# This means that while another child is saving, the durability of the server is\n# the same as \"appendfsync no\". In practical terms, this means that it is\n# possible to lose up to 30 seconds of log in the worst scenario (with the\n# default Linux settings).\n#\n# If you have latency problems turn this to \"yes\". 
Otherwise leave it as\n# \"no\", which is the safest pick from the point of view of durability.\n\nno-appendfsync-on-rewrite no\n\n# Automatic rewrite of the append only file.\n# The server is able to automatically rewrite the log file by implicitly calling\n# BGREWRITEAOF when the AOF log size grows by the specified percentage.\n#\n# This is how it works: The server remembers the size of the AOF file after the\n# latest rewrite (if no rewrite has happened since the restart, the size of\n# the AOF at startup is used).\n#\n# This base size is compared to the current size. If the current size has\n# grown beyond the base size by the specified percentage, the rewrite is triggered.\n# Also you need to specify a minimal size for the AOF file to be rewritten; this\n# is useful to avoid rewriting the AOF file even if the percentage increase\n# is reached but it is still pretty small.\n#\n# Specify a percentage of zero in order to disable the automatic AOF\n# rewrite feature.\n\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\n\n# An AOF file may be found to be truncated at the end during the server\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where the server is running\n# crashes, especially when an ext4 filesystem is mounted without the\n# data=ordered option (however this can't happen when the server itself\n# crashes or aborts but the operating system still works correctly).\n#\n# The server can either exit with an error when this happens, or load as much\n# data as possible (the default now) and start if the AOF file is found\n# to be truncated at the end. The following option controls this behavior.\n#\n# If aof-load-truncated is set to yes, a truncated AOF file is loaded and\n# the server starts, emitting a log to inform the user of the event.\n# Otherwise if the option is set to no, the server aborts with an error\n# and refuses to start. 
When the option is set to no, the user is required\n# to fix the AOF file using the \"valkey-check-aof\" utility before restarting\n# the server.\n#\n# Note that if the AOF file is found to be corrupted in the middle,\n# the server will still exit with an error. This option only applies when\n# the server tries to read more data from the AOF file but not enough bytes\n# are found.\naof-load-truncated yes\n\n# The server can create append-only base files in either RDB or AOF formats. Using\n# the RDB format is always faster and more efficient, and disabling it is only\n# supported for backward compatibility purposes.\naof-use-rdb-preamble yes\n\n# The server supports recording timestamp annotations in the AOF to support restoring\n# the data from a specific point-in-time. However, using this capability changes\n# the AOF format in a way that may not be compatible with existing AOF parsers.\naof-timestamp-enabled no\n\n################################ SHUTDOWN #####################################\n\n# Maximum time to wait for replicas when shutting down, in seconds.\n#\n# During shutdown, a grace period allows any lagging replicas to catch up with\n# the latest replication offset before the primary exits. This period can\n# prevent data loss, especially for deployments without configured disk backups.\n#\n# The 'shutdown-timeout' value is the grace period's duration in seconds. It is\n# only applicable when the instance has replicas. 
To disable the feature, set\n# the value to 0.\n#\n# shutdown-timeout 10\n\n# When the server receives a SIGINT or SIGTERM, shutdown is initiated and by default\n# an RDB snapshot is written to disk in a blocking operation if save points are configured.\n# The options used on signaled shutdown can include the following values:\n# default:  Saves RDB snapshot only if save points are configured.\n#           Waits for lagging replicas to catch up.\n# save:     Forces a DB saving operation even if no save points are configured.\n# nosave:   Prevents DB saving operation even if one or more save points are configured.\n# now:      Skips waiting for lagging replicas.\n# force:    Ignores any errors that would normally prevent the server from exiting.\n#\n# Any combination of values is allowed as long as \"save\" and \"nosave\" are not set simultaneously.\n# Example: \"nosave force now\"\n#\n# shutdown-on-sigint default\n# shutdown-on-sigterm default\n\n################ NON-DETERMINISTIC LONG BLOCKING COMMANDS #####################\n\n# Maximum time in milliseconds for EVAL scripts, functions and in some cases\n# modules' commands before the server can start processing or rejecting other clients.\n#\n# If the maximum execution time is reached the server will start to reply to most\n# commands with a BUSY error.\n#\n# In this state the server will only allow a handful of commands to be executed.\n# For instance, SCRIPT KILL, FUNCTION KILL, SHUTDOWN NOSAVE and possibly some\n# module specific 'allow-busy' commands.\n#\n# SCRIPT KILL and FUNCTION KILL will only be able to stop a script that did not\n# yet call any write commands, so SHUTDOWN NOSAVE may be the only way to stop\n# the server in the case a write command was already issued by the script when\n# the user doesn't want to wait for the natural termination of the script.\n#\n# The default is 5 seconds. It is possible to set it to 0 or a negative value\n# to disable this mechanism (uninterrupted execution). 
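# For illustration only (example values, not recommendations), the limit\n# could be raised to ten seconds, or set to 0 to disable it entirely:\n#\n# busy-reply-threshold 10000\n# busy-reply-threshold 0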
Note that in the past
# this config had a different name, which is now an alias, so both of these do
# the same:
# lua-time-limit 5000
# busy-reply-threshold 5000

################################ VALKEY CLUSTER  ###############################

# Normal server instances can't be part of a cluster; only nodes that are
# started as cluster nodes can. In order to start a server instance as a
# cluster node, enable cluster support by uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by each node.
# Every cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the number of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are a multiple of the node timeout.
#
# cluster-node-timeout 15000

# The cluster port is the port that the cluster bus will listen for inbound connections on. When set
# to the default value, 0, it will be bound to the command port + 10000. 
Setting this value requires
# you to specify the cluster bus port when executing cluster meet.
# cluster-port 0

# A replica of a failing primary will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the primary processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
#    its primary. This can be the last ping or command received (if the primary
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the primary (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#
# The point "2" can be tuned by the user. 
Specifically a replica will not perform
# the failover if, since the last interaction with the primary, the time
# elapsed is greater than:
#
#   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the primary
# for longer than 310 seconds.
#
# A large cluster-replica-validity-factor may allow replicas with too old data to failover
# a primary, while too small a value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the cluster-replica-validity-factor
# to a value of 0, which means that replicas will always try to failover the
# primary regardless of the last time they interacted with the primary.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned primaries, which are primaries
# left without working replicas. This improves the cluster's ability
# to resist failures, as otherwise an orphaned primary can't be failed over
# if it has no working replicas.
#
# Replicas migrate to orphaned primaries only if there are still at least a
# given number of other working replicas for their old primary. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its primary
# and so forth. 
It usually reflects the number of replicas you want for every
# primary in your cluster.
#
# Default is 1 (replicas migrate only if their primaries remain with at least
# one replica). To disable migration just set it to a very large value or
# set cluster-allow-replica-migration to 'no'.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# Turning off this option allows for less automatic cluster configuration.
# It disables migration of replicas to orphaned primaries. Primaries that become
# empty due to losing their last slots to another primary will not automatically
# replicate from the primary that took over their last slots. Instead, they will
# remain as empty primaries without any slots.
#
# Default is 'yes' (allow automatic migrations).
#
# cluster-allow-replica-migration yes

# By default cluster nodes stop accepting queries if they detect there
# is at least a hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) the whole cluster eventually becomes unavailable.
# It automatically becomes available again as soon as all the slots are covered.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents a replica from trying to failover its
# primary during primary failures. 
However the replica can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side never to be promoted except
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# The timeout in milliseconds for cluster manual failover. If a manual failover
# does not complete within the specified time, both the replica and the primary
# will abort it.
#
# A manual failover is a special kind of failover that is usually executed when
# there are no actual failures, and we wish to swap the current primary with one
# of its replicas in a safe way, without any window for data loss.
#
# To avoid data loss, the primary and the replica need to wait for each other for
# a period of time, and the primary needs to pause client writes to stop processing
# traffic. The default failover timeout is 5000ms. It is possible to configure the
# timeout to decide how long the primary will pause in the worst case scenario,
# i.e. when the manual failover times out due to insufficient votes.
#
# Check https://valkey.io/commands/cluster-failover/ for more information.
#
# cluster-manual-failover-timeout 5000

# This option, when set to yes, allows nodes to serve read traffic while the
# cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful for two cases.  The first case is for when an application
# doesn't require consistency of data during node failures or network partitions.
# One example of this is a cache, where as long as the node has the data it
# should be able to serve it.
#
# The second use case is for configurations that don't meet the recommended
# three shards but want to enable cluster mode and scale later. 
A
# primary outage in a 1 or 2 shard configuration causes a read/write outage for the
# entire cluster without this option set; with it set, there is only a write outage.
# Without a quorum of primaries, slot ownership will not change automatically.
#
# cluster-allow-reads-when-down no

# This option, when set to yes, allows nodes to serve pubsub shard traffic while
# the cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful if the application would like to use the pubsub feature even when
# the cluster global stable state is not OK. If the application wants to make sure only
# one shard is serving a given channel, this feature should be kept as yes.
#
# cluster-allow-pubsubshard-when-down yes

# Cluster link send buffer limit is the limit on the memory usage of an individual
# cluster bus link's send buffer in bytes. Cluster links are freed if they exceed
# this limit. This is primarily to prevent send buffers from growing unbounded on links
# toward slow peers (E.g. PubSub messages being piled up).
# This limit is disabled by default. Enable this limit when the 'mem_cluster_links' INFO field
# and/or 'send-buffer-allocated' entries in the 'CLUSTER LINKS' command output continuously increase.
# A minimum limit of 1gb is recommended so that the cluster link buffer can fit at least a single
# PubSub message by default. (client-query-buffer-limit default value is 1gb)
#
# cluster-link-sendbuf-limit 0

# Clusters can configure their announced hostname using this config. This is a common use case for
# applications that need to use TLS Server Name Indication (SNI) or deal with DNS based
# routing. By default this value is only shown as additional metadata in the CLUSTER SLOTS
# command, but can be changed using the 'cluster-preferred-endpoint-type' config. 
This value is
# communicated along the cluster bus to all nodes; setting it to an empty string will remove
# the hostname and also propagate the removal.
#
# cluster-announce-hostname ""

# Clusters can configure an optional nodename to be used in addition to the node ID for
# debugging and admin information. This name is broadcast between nodes, so it will be used
# in addition to the node ID when reporting cross node events such as node failures.
# cluster-announce-human-nodename ""

# Clusters can advertise how clients should connect to them using either their IP address,
# a user defined hostname, or by declaring they have no endpoint. Which endpoint is
# shown as the preferred endpoint is set by using the cluster-preferred-endpoint-type
# config with values 'ip', 'hostname', or 'unknown-endpoint'. This value controls the endpoint
# returned for MOVED/ASKING requests as well as the first field of CLUSTER SLOTS.
# If the preferred endpoint type is set to hostname, but no announced hostname is set, a '?'
# will be returned instead.
#
# When a cluster advertises itself as having an unknown endpoint, it's indicating that
# the server doesn't know how clients can reach the cluster. This can happen in certain
# networking situations where there are multiple possible routes to the node, and the
# server doesn't know which one the client took. In this case, the server is expecting
# the client to reach out on the same endpoint it used for making the last request, but use
# the port provided in the response.
#
# cluster-preferred-endpoint-type ip

# The cluster blacklist is used when removing a node from the cluster completely.
# When CLUSTER FORGET is called for a node, that node is put into the blacklist for
# some time so that when gossip messages are received from other nodes that still
# remember it, it is not re-added. This gives time for CLUSTER FORGET to be sent to
# every node in the cluster. 
The blacklist TTL is 60 seconds by default, which should
# be sufficient for most clusters, but you may consider increasing this if you see
# nodes getting re-added while using CLUSTER FORGET.
#
# cluster-blacklist-ttl 60

# Clusters can be configured to track per-slot resource statistics,
# which are accessible by the CLUSTER SLOT-STATS command.
#
# By default, the 'cluster-slot-stats-enabled' is disabled, and only 'key-count' is captured.
# By enabling the 'cluster-slot-stats-enabled' config, the cluster will begin to capture advanced statistics.
# These statistics can be leveraged to assess general slot usage trends, identify hot / cold slots,
# migrate slots for a balanced cluster workload, and / or re-write application logic to better utilize slots.
#
# cluster-slot-stats-enabled no

# In order to set up your cluster, make sure to read the documentation
# available at the https://valkey.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, a cluster node's address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make a cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-client-ipv4
# * cluster-announce-client-ipv6
# * cluster-announce-port
# * cluster-announce-tls-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, possibly other addresses to expose
# to clients, client ports (for connections without and with TLS) and cluster
# message bus port. 
The information is then published in the bus packets so that
# other nodes will be able to correctly map the address of the node publishing
# the information.
#
# If tls-cluster is set to yes and cluster-announce-tls-port is omitted or set
# to zero, then cluster-announce-port refers to the TLS port. Note also that
# cluster-announce-tls-port has no effect if tls-cluster is set to no.
#
# If cluster-announce-client-ipv4 and cluster-announce-client-ipv6 are omitted,
# then cluster-announce-ip is exposed to clients.
#
# If the above options are not used, the normal cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# client port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-client-ipv4 123.123.123.5
# cluster-announce-client-ipv6 2001:db8::8a2e:370:7334
# cluster-announce-tls-port 6379
# cluster-announce-port 0
# cluster-announce-bus-port 6380

################################## COMMAND LOG ###################################

# The Command Log system is used to record commands that consume significant resources
# during server operation, including CPU, memory, and network bandwidth.
# These commands and the data they access may lead to abnormal instance operations;
# the commandlog can help users quickly and intuitively locate such issues.
#
# Currently, three types of command logs are supported:
#
# SLOW: Logs commands that exceed a specified execution time. This excludes time spent
# on I/O operations like client communication and focuses solely on the command's
# processing time, where the main thread is blocked.
#
# LARGE-REQUEST: Logs commands with requests exceeding a defined size. 
This helps
# identify potentially problematic commands that send excessive data to the server.
#
# LARGE-REPLY: Logs commands that generate replies exceeding a defined size. This
# helps identify commands that return unusually large amounts of data, which may
# impact network performance or client processing.
#
# Each log type has two key parameters:
# 1. A threshold value that determines when a command is logged. This threshold is specific
#    to the type of log (e.g., execution time, request size, or reply size). A negative value disables
#    logging. A value of 0 logs all commands.
# 2. A maximum length that specifies the number of entries to retain in the log. Increasing
#    the length allows more entries to be stored but consumes additional memory. To clear all
#    entries for a specific log type and reclaim memory, use the `COMMANDLOG RESET`
#    subcommand followed by the log type.
#
# SLOW Command Logs
# The SLOW log records commands that exceed a specified execution time. The execution time
# does not include I/O operations, such as client communication or sending responses.
# It only measures the time spent executing the command, during which the thread is blocked
# and cannot handle other requests.
#
# The threshold is measured in microseconds.
#
# Backward Compatibility: The parameter `slowlog-log-slower-than` is still supported but
# deprecated in favor of `commandlog-execution-slower-than`.
commandlog-execution-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET or COMMANDLOG RESET SLOW.
commandlog-slow-execution-max-len 128
#
# LARGE_REQUEST Command Logs
# The LARGE_REQUEST log tracks commands with requests exceeding a specified size. The request size
# includes the command itself and all its arguments. For example, in `SET KEY VALUE`, the size is
# determined by the combined size of the key and value. 
Commands that consume excessive network
# bandwidth or query buffer space are recorded here.
#
# The threshold is measured in bytes.
commandlog-request-larger-than 1048576
# The maximum number of entries to retain in the large-request log.
commandlog-large-request-max-len 128
#
# LARGE_REPLY Command Logs
# The LARGE_REPLY log records commands that produce replies exceeding a specified size. These replies
# may consume significant network bandwidth or client output buffer space. Examples include commands
# like `KEYS` or `HGETALL` that return large datasets. Even a `GET` command may qualify if the value
# is substantial.
#
# The threshold is measured in bytes.
commandlog-reply-larger-than 1048576
commandlog-large-reply-max-len 128

################################ LATENCY MONITOR ##############################

# The server latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a server instance.
#
# Via the LATENCY command this information is available to the user, who can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal to or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact that, while very small, can be measured under big load. 
Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

################################ LATENCY TRACKING ##############################

# The server's extended latency monitoring tracks per-command latencies and enables
# exporting the percentile distribution via the INFO latencystats command,
# and cumulative latency distributions (histograms) via the LATENCY command.
#
# By default, the extended latency monitoring is enabled since the overhead
# of keeping track of the command latency is very small.
# latency-tracking yes

# By default the exported latency percentiles via the INFO latencystats command
# are the p50, p99, and p999.
# latency-tracking-info-percentiles 50 99 99.9

############################# EVENT NOTIFICATION ##############################

# The server can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at https://valkey.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that the server will notify among a set
# of classes. 
Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  n     New key events (Note: not included in the 'A' class)
#  t     Stream commands
#  d     Module key type events
#  m     Key-miss events (Note: It is not included in the 'A' class)
#  A     Alias for g$lshzxetd, so that the "AKE" string means all the events
#        (except key-miss events, which are excluded from 'A' due to their
#         unique nature).
#
#  The "notify-keyspace-events" option takes as its argument a string composed
#  of zero or more characters. The empty string means that notifications
#  are disabled.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. 
These thresholds can be configured using the following directives.\nhash-max-listpack-entries 512\nhash-max-listpack-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-listpack-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0

# Sets have a special encoding when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit on the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Sets containing non-integer values are also encoded using a memory efficient
# data structure when they have a small number of entries, and the biggest entry
# does not exceed a given threshold. These thresholds can be configured using
# the following directives.
set-max-listpack-entries 128
set-max-listpack-value 64

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-listpack-entries 128
zset-max-listpack-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. 
When a HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Streams macro node max size / items. The stream data structure is a radix\n# tree of big nodes that encode multiple items inside. Using this configuration\n# it is possible to configure how big a single node can be in bytes, and the\n# maximum number of items it may contain before switching to a new node when\n# appending new stream entries. If any of the following settings are set to\n# zero, the limit is ignored, so for instance it is possible to set just a\n# max entries limit by setting max-bytes to 0 and max-entries to the desired\n# value.\nstream-node-max-bytes 4096\nstream-node-max-entries 100\n\n# Active rehashing uses 1% of the CPU time to help perform incremental rehashing\n# of the main server hash tables, the ones mapping top-level keys to values.\n#\n# If active rehashing is disabled and rehashing is needed, a hash table is\n# rehashed one \"step\" on every operation performed on the hash table (add, find,\n# etc.), so if the server is idle, the rehashing may never complete and some\n# more memory is used by the hash tables. Active rehashing helps prevent this.\n#\n# Active rehashing runs as a background task. Depending on the value of 'hz',\n# the frequency at which the server performs background tasks, active rehashing\n# can cause the server to freeze for a short time. 
For example, if 'hz' is set
# to 10, active rehashing runs for up to one millisecond every 100 milliseconds.
# If a freeze of one millisecond is not acceptable, you can increase 'hz' to let
# active rehashing run more often. If instead 'hz' is set to 100, active
# rehashing runs up to only 100 microseconds every 10 milliseconds. The total is
# still 1% of the time.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reaches 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously exceeds
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Note that it doesn't 
make sense to set the replica clients output buffer
# limit lower than the repl-backlog-size config (partial sync will succeed
# and then the replica will get disconnected).
# Such a configuration is ignored (the size of repl-backlog-size will be used).
# This doesn't have memory consumption implications since the replica client
# will share the backlog buffers memory.
#
# Both the hard and the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to prevent a protocol desynchronization (for
# instance due to a bug in the client) from leading to unbounded memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as a command with a huge argument, or huge multi/exec requests, or the like.
#
# client-query-buffer-limit 1gb

# In some scenarios client connections can hog up memory leading to OOM
# errors or data eviction. To avoid this we can cap the accumulated memory
# used by all client connections (all pubsub and normal clients). Once we
# reach that limit, connections will be dropped by the server, freeing up
# memory. The server will attempt to drop the connections using the most
# memory first. We call this mechanism "client eviction".
#
# Client eviction is configured using the maxmemory-clients setting as follows:
# 0 - client eviction is disabled (default)
#
# A memory value can be used for the client eviction threshold,
# for example:
# maxmemory-clients 1g
#
# A percentage value (between 1% and 100%) means the client eviction threshold
# is based on a percentage of the maxmemory setting. 
For example to set client
# eviction at 5% of maxmemory:
# maxmemory-clients 5%

# In the server protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here, but it must be 1mb or greater.
#
# proto-max-bulk-len 512mb

# The server calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but the server checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# the server is idle, but at the same time will make the server more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 4 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When the server saves an RDB file, if the following option is enabled
# the file will be fsync-ed every 4 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

# The server's LFU eviction (see maxmemory setting) can be tuned. 
However, it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve performance and how the keys' LFU changes over time, which\n# is possible to inspect via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the server LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key; its maximum value is 255, so the server\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following 
commands:\n#\n#   valkey-benchmark -n 1000000 incr foo\n#   valkey-cli object freq foo\n#\n# NOTE 2: The counter's initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be decremented.\n#\n# The default value for the lfu-decay-time is 1. A special value of 0 means we\n# will never decay the counter.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n\n# The maximum number of new client connections accepted per event-loop cycle. This configuration\n# is set independently for TLS connections.\n#\n# By default, up to 10 new connections will be accepted per event-loop cycle for normal connections\n# and up to 1 new connection per event-loop cycle for TLS connections.\n#\n# Adjusting this to a larger number can slightly improve efficiency for new connections\n# at the risk of causing timeouts for regular commands on established connections.  It is\n# not advised to change this without ensuring that all clients have limited connection\n# pools and exponential backoff in the case of command/connection timeouts.\n#\n# If your application is establishing a large number of new connections per second, you should\n# also consider tuning the value of tcp-backlog, which allows the kernel to buffer more\n# pending connections before dropping or rejecting connections.\n#\n# max-new-connections-per-cycle 10\n# max-new-tls-connections-per-cycle 1\n\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing memory to be reclaimed.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. 
Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However, thanks to this feature, this\n# process can happen at runtime in a \"hot\" way, while the server is running.\n#\n# Basically, when the fragmentation is over a certain level (see the\n# configuration options below), the server will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys,\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled the server\n#    to use the copy of Jemalloc we ship with the source code of the server.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters are able to fine-tune the behavior of the\n# defragmentation process. 
If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Active defragmentation is disabled by default\n# activedefrag no\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage, not cycle time as the name might\n# suggest, to be used when the lower threshold is reached.\n# active-defrag-cycle-min 1\n\n# Maximal effort for defrag in CPU percentage, not cycle time as the name might\n# suggest, to be used when the upper threshold is reached.\n# active-defrag-cycle-max 25\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n# The time spent (in microseconds) of the periodic active defrag process.  This\n# affects the latency impact of active defrag on client commands.  Smaller numbers\n# will result in less latency impact at the cost of increased defrag overhead.\n# active-defrag-cycle-us 500\n\n# Jemalloc background thread for purging will be enabled by default\njemalloc-bg-thread yes\n\n# It is possible to pin different threads and processes of the server to specific\n# CPUs in your system, in order to maximize the performances of the server.\n# This is useful both in order to pin different server threads in different\n# CPUs, but also in order to make sure that multiple server instances running\n# in the same host will be pinned to different CPUs.\n#\n# Normally you can do this using the \"taskset\" command, however it is also\n# possible to do this via the server configuration directly, both in Linux and FreeBSD.\n#\n# You can pin the server/IO threads, bio threads, aof rewrite child process, and\n# the bgsave child process. 
The syntax to specify the cpu list is the same as for\n# the taskset command:\n#\n# Set server/io threads to cpu affinity 0,2,4,6:\n# server-cpulist 0-7:2\n#\n# Set bio threads to cpu affinity 1,3:\n# bio-cpulist 1,3\n#\n# Set aof rewrite child process to cpu affinity 8,9,10,11:\n# aof-rewrite-cpulist 8-11\n#\n# Set bgsave child process to cpu affinity 1,10,11:\n# bgsave-cpulist 1,10-11\n\n# In some cases the server will emit warnings and even refuse to start if it detects\n# that the system is in a bad state. It is possible to suppress these warnings\n# by setting the following config, which takes a space-delimited list of warnings\n# to suppress.\n#\n# ignore-warnings ARM64-COW-BUG\n\n# Inform Valkey of the availability zone if running in a cloud environment.  Currently\n# this is exposed in the INFO and HELLO commands for clients to use. Default is\n# the empty string.\n#\n# availability-zone \"zone-name\"\n"
  },
  {
    "path": "aegir/conf/valkey/valkey9.conf",
    "content": "# Valkey configuration file example.\n#\n# Note that in order to read the configuration file, the server must be\n# started with the file path as first argument:\n#\n# ./valkey-server /path/to/valkey.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Note that option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Sentinel. Since the server always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# Included paths may contain wildcards. All files matching the wildcards will\n# be included in alphabetical order.\n# Note that if an include path contains a wildcards but no files match it when\n# the server is started, the include statement will be ignored and no error will\n# be emitted.  It is safe, therefore, to include wildcard files from empty\n# directories.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n# include /path/to/fragments/*.conf\n#\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n# loadmodule /path/to/args_module.so [arg [arg ...]]\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, the server listens\n# for connections from all available network interfaces on the host machine.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n# Each address can be prefixed by \"-\", which means that the server will not fail to\n# start if the address is not available. Not being available only refers to\n# addresses that do not correspond to any network interface. Addresses that\n# are already in use will always fail, and unsupported protocols will always be\n# silently skipped.\n#\n# Examples:\n#\n# xbind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses\n# xbind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6\n# xbind * -::*                     # like the default, all available interfaces\n#\n# ~~~ WARNING ~~~ If the computer running the server is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. 
So by default we uncomment the\n# following bind directive, which will force the server to listen only on the\n# IPv4 and IPv6 (if available) loopback interface addresses (this means the server\n# will only be able to accept client connections from the same host that it is\n# running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# COMMENT OUT THE FOLLOWING LINE.\n#\n# You will also need to set a password unless you explicitly disable protected\n# mode.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1\n\n# By default, outgoing connections (from replica to primary, from Sentinel to\n# instances, cluster bus, etc.) are not bound to a specific local address. In\n# most cases, this means the operating system will handle that based on routing\n# and the interface through which the connection goes out.\n#\n# Using bind-source-addr it is possible to configure a specific address to bind\n# to, which may also affect how the connection gets routed.\n#\n# Example:\n#\n# bind-source-addr 10.0.0.1\n\n# Protected mode is a layer of security protection, in order to prevent\n# server instances left open on the internet from being accessed and exploited.\n#\n# When protected mode is on and the default user has no password, the server\n# only accepts local connections from the IPv4 address (127.0.0.1), IPv6 address\n# (::1) or Unix domain sockets.\n#\n# By default protected mode is enabled. You should disable it only if\n# you are sure you want clients from other hosts to connect to the server\n# even if no authentication is configured.\nprotected-mode yes\n\n# The server uses default hardened security configuration directives to reduce the\n# attack surface on innocent users. 
Therefore, several sensitive configuration\n# directives are immutable, and some potentially-dangerous commands are blocked.\n#\n# Configuration directives that control files that the server writes to (e.g., 'dir'\n# and 'dbfilename') and that aren't usually modified during runtime\n# are protected by making them immutable.\n#\n# Commands that can increase the attack surface of the server and that aren't usually\n# called by users are blocked by default.\n#\n# These can be exposed to either all connections or just local ones by setting\n# each of the configs listed below to either of these values:\n#\n# no    - Block for any connection (remain immutable)\n# yes   - Allow for any connection (no protection)\n# local - Allow only for local connections. Ones originating from the\n#         IPv4 address (127.0.0.1), IPv6 address (::1) or Unix domain sockets.\n#\n# enable-protected-configs no\n# enable-debug-command no\n# enable-module-command no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified the server will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow clients connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Multipath TCP (MPTCP)\n#\n# MPTCP splits a single TCP connection into subflows over multiple interfaces or paths.\n# It enables bandwidth aggregation, failover, and improved reliability.\n# When set to 'yes', clients will be able to use MPTCP if requested. 
When not\n# requested, regular TCP can be used like before.\n# Note: MPTCP is supported in the mainline Linux kernel starting from version 5.6.\n#\n# mptcp yes\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so the server will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/valkey/valkey.sock\n# unixsocketgroup valkey\n# unixsocketperm 777\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 900\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Force network equipment in the middle to consider the connection to be\n#    alive.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that twice this time is needed to close the connection.\n# On other kernels the period depends on the kernel configuration.\ntcp-keepalive 300\n\n# Apply OS-specific mechanism to mark the listening socket with the specified\n# ID, to support advanced routing and filtering capabilities.\n#\n# On Linux, the ID represents a connection mark.\n# On FreeBSD, the ID represents a socket cookie ID.\n# On OpenBSD, the ID represents a route table ID.\n#\n# The default value is 0, which implies no marking is required.\n# socket-mark-id 0\n\n################################# TLS/SSL #####################################\n\n# By default, TLS/SSL is disabled. To enable it, the \"tls-port\" configuration\n# directive can be used to define TLS-listening ports. To enable TLS on the\n# default port, use:\n#\n# port 0\n# tls-port 6379\n\n# Configure an X.509 certificate and private key to use for authenticating the\n# server to connected clients, primaries or cluster peers.  
These files should be\n# PEM formatted.\n#\n# tls-cert-file valkey.crt\n# tls-key-file valkey.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-key-file-pass secret\n\n# Normally the server uses the same certificate for both server functions (accepting\n# connections) and client functions (replicating from a primary, establishing\n# cluster bus connections, etc.).\n#\n# Sometimes certificates are issued with attributes that designate them as\n# client-only or server-only certificates. In that case it may be desired to use\n# different certificates for incoming (server) and outgoing (client)\n# connections. To do that, use the following directives:\n#\n# tls-client-cert-file client.crt\n# tls-client-key-file client.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-client-key-file-pass secret\n\n# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange,\n# required by older versions of OpenSSL (<3.0). Newer versions do not require\n# this configuration and recommend against it.\n#\n# tls-dh-params-file valkey.dh\n\n# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL\n# clients and peers. 
The server requires an explicit configuration of at least one\n# of these, and will not implicitly use the system wide configuration.\n#\n# tls-ca-cert-file ca.crt\n# tls-ca-cert-dir /etc/ssl/certs\n\n# By default, clients (including replica servers) on a TLS port are required\n# to authenticate using valid client side certificates.\n#\n# If \"no\" is specified, client certificates are not required and not accepted.\n# If \"optional\" is specified, client certificates are accepted and must be\n# valid if provided, but are not required.\n#\n# tls-auth-clients no\n# tls-auth-clients optional\n\n# Automatically authenticate TLS clients as Valkey users based on their\n# certificates.\n#\n# If set to a field like \"CN\", the server will extract the corresponding field\n# from the client's TLS certificate and attempt to find a Valkey user with the\n# same name. If a matching user is found, the client is automatically\n# authenticated as that user during the TLS handshake. If no matching user is\n# found, the client is connected as the unauthenticated default user. Set to\n# \"off\" to disable automatic user authentication via certificate fields.\n#\n# Supported values: CN, off. Default: off.\n#\n# tls-auth-clients-user CN\n\n# By default, a replica does not attempt to establish a TLS connection\n# with its primary.\n#\n# Use the following directive to enable TLS on replication links.\n#\n# tls-replication yes\n\n# By default, the cluster bus uses a plain TCP connection. 
To enable\n# TLS for the bus protocol, use the following directive:\n#\n# tls-cluster yes\n\n# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended\n# that older formally deprecated versions are kept disabled to reduce the attack surface.\n# You can explicitly specify TLS versions to support.\n# Allowed values are case insensitive and include \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\",\n# \"TLSv1.3\" (OpenSSL >= 1.1.1) or any combination.\n# To enable only TLSv1.2 and TLSv1.3, use:\n#\n# tls-protocols \"TLSv1.2 TLSv1.3\"\n\n# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information\n# about the syntax of this string.\n#\n# Note: this configuration applies only to <= TLSv1.2.\n#\n# tls-ciphers DEFAULT:!MEDIUM\n\n# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more\n# information about the syntax of this string, and specifically for TLSv1.3\n# ciphersuites.\n#\n# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256\n\n# When choosing a cipher, use the server's preference instead of the client\n# preference. By default, the server follows the client's preference.\n#\n# tls-prefer-server-ciphers yes\n\n# By default, TLS session caching is enabled to allow faster and less expensive\n# reconnections by clients that support it. Use the following directive to disable\n# caching.\n#\n# tls-session-caching no\n\n# Change the default number of TLS sessions cached. A zero value sets the cache\n# to unlimited size. The default size is 20480.\n#\n# tls-session-cache-size 5000\n\n# Change the default timeout of cached TLS sessions. The default timeout is 300\n# seconds.\n#\n# tls-session-cache-timeout 60\n\n################################### RDMA ######################################\n\n# Valkey Over RDMA is experimental, it may be changed or be removed in any minor or major version.\n# By default, RDMA is disabled. 
To enable it, the \"rdma-port\" configuration\n# directive can be used to define RDMA-listening ports.\n#\n# rdma-port 6379\n# rdma-bind 192.168.1.100\n\n# The RDMA receive transfer buffer is 1M by default. It can be set between 64K and 16M.\n# Note that page size aligned size is preferred.\n#\n# rdma-rx-size 1048576\n\n# The RDMA completion queue will use the completion vector to signal completion events\n# via hardware interrupts. A large number of hardware interrupts can affect CPU performance.\n# It is possible to tune the performance using rdma-completion-vector.\n#\n# Example 1. a) Pin hardware interrupt vectors [0, 3] to CPU [0, 3].\n#            b) Set CPU affinity for valkey to CPU [4, X].\n#            c) Any valkey server uses a random RDMA completion vector [-1].\n# All valkey servers will not affect each other and will be isolated from kernel interrupts.\n#\n#   SYS    SYS    SYS    SYS  VALKEY VALKEY     VALKEY\n#    |      |      |      |      |      |          |\n#  CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   ... CPUX\n#    |      |      |      |\n#  INTR0  INTR1  INTR2  INTR3\n#\n# Example 2. a) 1:1 pin hardware interrupt vectors [0, X] to CPU [0, X].\n#            b) Set CPU affinity for valkey [M] to CPU [M].\n#            c) Valkey server [M] uses RDMA completion vector [M].\n# A single CPU [M] handles hardware interrupts, the RDMA completion vector [M],\n# and the valkey server [M] within its context only.\n# This avoids overhead and function calls across multiple CPUs, fully isolating\n# each valkey server from one another.\n#\n# VALKEY VALKEY VALKEY VALKEY VALKEY VALKEY     VALKEY\n#    |      |      |      |      |      |          |\n#  CPU0   CPU1   CPU2   CPU3   CPU4   CPU5  ...  
CPUX\n#    |      |      |      |      |      |          |\n#  INTR0  INTR1  INTR2  INTR3  INTR4  INTR5      INTRX\n#\n# Use 0 and positive numbers to specify the RDMA completion vector, or specify -1 to allow\n# the server to use a random vector for a new connection. The default vector is -1.\n#\n# rdma-completion-vector 0\n\n################################# GENERAL #####################################\n\n# By default the server does not run as a daemon. Use 'yes' if you need it.\n# Note that the server will write a pid file in /run/valkey/valkey.pid when daemonized.\n# When the server is supervised by upstart or systemd, this parameter has no impact.\ndaemonize yes\n\n# If you run the server from upstart or systemd, the server can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting the server into SIGSTOP mode\n#                        requires \"expect stop\" in your upstart job config\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#                        on startup, and updating the server status on a regular\n#                        basis.\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous pings back to your supervisor.\n#\n# The default is \"no\". To run under upstart/systemd, you can simply uncomment\n# the line below:\n#\n# supervised auto\n\n# If a pid file is specified, the server writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. 
When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/run/valkey/valkey.pid\".\n#\n# Creating a pid file is best effort: if the server is not able to create it\n# nothing bad happens, the server will start and run normally.\n#\n# Note that on modern Linux systems \"/run/valkey/valkey.pid\" is more conforming\n# and should be used instead.\npidfile /run/valkey/valkey.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (lots of rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, probably what you want in production)\n# warning (only very important / critical messages are logged)\n# nothing (nothing is logged)\nloglevel warning\n\n# Specify the logging format.\n# This can be one of:\n#\n# - legacy: the default, traditional log format\n# - logfmt: a structured log format; see https://www.brandur.org/logfmt\n#\n# log-format legacy\n\n# Specify the timestamp format used in logs using 'log-timestamp-format'.\n#\n# - legacy: default format\n# - iso8601: ISO 8601 extended date and time with time zone, on the form\n#   yyyy-mm-ddThh:mm:ss.sss±hh:mm\n# - milliseconds: milliseconds since the epoch\n#\n# log-timestamp-format legacy\n\n# Specify the log file name. The empty string can also be used to force\n# the server to log to the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile /var/log/valkey/valkey-server.log\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident valkey\n\n# Specify the syslog facility. 
Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# To disable the built-in crash log, which will possibly produce cleaner core\n# dumps when they are needed, uncomment the following:\n#\n# crash-log-enabled no\n\n# To disable the fast memory check that's run as part of the crash log, which\n# will possibly let the server terminate sooner, uncomment the following:\n#\n# crash-memcheck-enabled no\n\n# Set the number of databases. The default database is DB 0; you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\n# Note: This setting is ignored in cluster mode. Use `cluster-databases` instead.\ndatabases 8\n\n# By default the server shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY and syslog logging is\n# disabled. Basically this means that normally a logo is displayed only in\n# interactive sessions.\n#\n# However, it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo no\n\n# User data, including keys, values, client names, and ACL usernames, can be\n# logged as part of assertions and other error cases. To prevent sensitive user\n# information, such as PII, from being recorded in the server log file, this\n# user data is hidden from the log by default. If you need to log user data for\n# debugging or troubleshooting purposes, you can disable this feature by\n# changing the config value to no.\nhide-user-data-from-log yes\n\n# By default, the server modifies the process title (as seen in 'top' and 'ps') to\n# provide some runtime information. 
It is possible to disable this and leave\n# the process name as executed by setting the following to no.\nset-proc-title yes\n\n# When changing the process title, the server uses the following template to construct\n# the modified title.\n#\n# Template variables are specified in curly brackets. The following variables are\n# supported:\n#\n# {title}           Name of process as executed if parent, or type of child process.\n# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or\n#                   Unix socket if only that's available.\n# {server-mode}     Special mode, i.e. \"[sentinel]\" or \"[cluster]\".\n# {port}            TCP port listening on, or 0.\n# {tls-port}        TLS port listening on, or 0.\n# {unixsocket}      Unix domain socket listening on, or \"\".\n# {config-file}     Name of configuration file used.\n#\nproc-title-template \"{title} {listen-addr} {server-mode}\"\n\n# Set the locale which is used for string comparison operations; this also\n# affects the performance of Lua scripts. An empty string indicates that the locale\n# is derived from the environment variables.\nlocale-collate \"\"\n\n# Valkey is largely compatible with Redis OSS, apart from a few cases where\n# Valkey identifies itself as \"Valkey\" rather than \"Redis\". Extended\n# Redis OSS compatibility mode makes Valkey pretend to be Redis. Enable this\n# only if you have problems with tools or clients. 
This is a temporary\n# configuration added in Valkey 8.0 and is scheduled to have no effect and\n# to be removed in a future version.\n#\nextended-redis-compatibility yes\n\n################################ SNAPSHOTTING  ################################\n\n# Save the DB to disk.\n#\n# save <seconds> <changes> [<seconds> <changes> ...]\n#\n# The server will save the DB if the given number of seconds has elapsed and it has\n# surpassed the given number of write operations against the DB.\n#\n# Snapshotting can be completely disabled with a single empty string argument\n# as in the following example:\n#\nsave \"\"\n#\n# Unless specified otherwise, by default the server will save the DB:\n#   * After 3600 seconds (an hour) if at least 1 change was performed\n#   * After 300 seconds (5 minutes) if at least 100 changes were performed\n#   * After 60 seconds if at least 10000 changes were performed\n#\n# You can set these explicitly by uncommenting the following line.\n#\n# save 3600 1 300 100 60 10000\n\n# By default the server will stop accepting writes if RDB snapshots are enabled\n# (at least one save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again, the server will\n# automatically allow writes again.\n#\n# However, if you have set up proper monitoring of the server\n# and persistence, you may want to disable this feature so that the server will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error no\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default compression is enabled as it's almost always a win.\n# If you want to save some CPU in the saving child, set it to 'no' but\n# the dataset will likely be bigger if you have compressible values 
or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum no\n\n# Valkey can try to load an RDB dump produced by a future version of Valkey.\n# This can only work on a best-effort basis, because future RDB versions may\n# contain information that's not known to the current version. If no new features\n# are used, it may be possible to import the data produced by a later version,\n# but loading is aborted if unknown information is encountered. Possible values\n# are 'strict' and 'relaxed'. This also applies to replication and the RESTORE\n# command.\nrdb-version-check relaxed\n\n# Enables or disables full sanitization checks for ziplist and listpack etc when\n# loading an RDB or RESTORE payload. This reduces the chances of an assertion or\n# crash later on while processing commands.\n# Options:\n#   no         - Never perform full sanitization\n#   yes        - Always perform full sanitization\n#   clients    - Perform full sanitization only for user connections.\n#                Excludes: RDB files, RESTORE commands received from the primary\n#                connection, and client connections which have the\n#                skip-sanitize-payload ACL flag.\n# The default should be 'clients' but since it currently affects cluster\n# resharding via MIGRATE, it is temporarily set to 'no' by default.\n#\n# sanitize-dump-payload no\n\n# The filename where to dump the DB\ndbfilename dump.rdb\n\n# Remove RDB files used by replication in instances without persistence\n# enabled. 
By default this option is disabled, however there are environments\n# where for regulations or other security concerns, RDB files persisted on\n# disk by primaries in order to feed replicas, or stored on disk by replicas\n# in order to load them for the initial synchronization, should be deleted\n# ASAP. Note that this option ONLY WORKS in instances that have both AOF\n# and RDB persistence disabled, otherwise it is completely ignored.\n#\n# An alternative (and sometimes better) way to obtain the same effect is\n# to use diskless replication on both primary and replica instances. However\n# in the case of replicas, diskless is not always an option.\nrdb-del-sync-files no\n\n# The working directory.\n#\n# The server log is written relative to this directory, if the 'logfile'\n# configuration directive is a relative path.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# The Cluster config file is written relative to this directory, if the\n# 'cluster-config-file' configuration directive is a relative path.\n#\n# Note that you must specify a directory here, not a file name.\n# Note that modifying 'dir' during runtime may have unexpected behavior,\n# for example when a child process is running, related file operations may\n# have unexpected effects.\ndir /var/lib/valkey/\n\n################################# REPLICATION #################################\n\n# Master-Replica replication. Use replicaof to make a server a copy of\n# another server. 
A few things to understand ASAP about replication.\n#\n#   +------------------+      +---------------+\n#   |      Master      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Replication is asynchronous, but you can configure a primary to\n#    stop accepting writes if it appears to be not connected with at least\n#    a given number of replicas.\n# 2) Replicas are able to perform a partial resynchronization with the\n#    primary if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. After a\n#    network partition replicas automatically try to reconnect to primaries\n#    and resynchronize with them.\n#\n# replicaof <primary_ip> <primary_port>\n\n# If the primary is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process, otherwise the primary will\n# refuse the replica request.\n#\n# primaryauth <primary-password>\n#\n# However this is not enough if you are using ACLs\n# and the default user is not capable of running the PSYNC\n# command and/or other commands needed for replication. 
In this case it's\n# better to configure a special user to use with replication, and specify the\n# primaryuser configuration as such:\n#\n# primaryuser <username>\n#\n# When primaryuser is specified, the replica will authenticate against its\n# primary using the new AUTH form: AUTH <username> <password>.\n\n# When a replica loses its connection with the primary, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) If replica-serve-stale-data is set to 'no' the replica will reply with error\n#    \"MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'\"\n#    to all data access commands, excluding commands such as:\n#    INFO, REPLICAOF, AUTH, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,\n#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,\n#    HOST and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the primary) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# By default, replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. 
To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# New replicas and reconnecting replicas that are not able to continue the\n# replication process just receiving differences, need to do what is called a\n# \"full synchronization\". An RDB file is transmitted from the primary to the\n# replicas.\n#\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The primary creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The primary creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child\n# producing the RDB file finishes its work. 
With diskless replication instead\n# once the transfer starts, new replicas arriving will be queued and a new\n# transfer will start when the current one terminates.\n#\n# When diskless replication is used, the primary waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple\n# replicas will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync yes\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new replicas arriving, that will be queued for the next RDB transfer, so the\n# server waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# When diskless replication is enabled with a delay, it is possible to let\n# the replication start before the maximum delay is reached if the maximum\n# number of replicas expected have connected. Default of 0 means that the\n# maximum is not defined and the server will wait the full delay.\nrepl-diskless-sync-max-replicas 0\n\n# -----------------------------------------------------------------------------\n# WARNING: Since in this setup the replica does not immediately store an RDB on\n# disk, it may cause data loss during failovers. 
RDB diskless load + server\n# modules not handling I/O reads may cause the server to abort in case of I/O errors\n# during the initial synchronization stage with the primary.\n# -----------------------------------------------------------------------------\n#\n# Replica can load the RDB it reads from the replication link directly from the\n# socket, or store the RDB to a file and read that file after it was completely\n# received from the primary.\n#\n# In many cases the disk is slower than the network, and storing and loading\n# the RDB file may increase replication time (and even increase the primary's\n# Copy on Write memory and replica buffers).\n# However, when parsing the RDB file directly from the socket, in order to avoid\n# data loss it's only safe to flush the current dataset when the new dataset is\n# fully loaded in memory, resulting in higher memory usage.\n# For this reason we have the following options:\n#\n# \"disabled\"          - Don't use diskless load (store the rdb file to the disk first)\n# \"swapdb\"            - Keep current db contents in RAM while parsing the data directly\n#                       from the socket. Replicas in this mode can keep serving current\n#                       dataset while replication is in progress, except for cases where\n#                       they can't recognize primary as having a data set from same\n#                       replication history.\n#                       Note that this requires sufficient memory, if you don't have it,\n#                       you risk an OOM kill.\n# \"on-empty-db\"       - Use diskless load only when current dataset is empty. This is\n#                       safer and avoid having old and new dataset loaded side by side\n#                       during replication.\n# \"flush-before-load\" - [dangerous] Flush all data before parsing. 
Note that if\n#                       there's a problem before the replication has succeeded you may\n#                       lose all your data.\nrepl-diskless-load disabled\n\n# This dual channel replication sync feature optimizes the full synchronization process\n# between a primary and its replicas. When enabled, it reduces both memory and CPU load\n# on the primary server.\n#\n# How it works:\n# 1. During full sync, instead of accumulating replication data on the primary server,\n#    the data is sent directly to the syncing replica.\n# 2. The primary's background save (bgsave) process streams the RDB snapshot directly\n#    to the replica over a separate connection.\n#\n# Tradeoff:\n# While this approach reduces load on the primary, it shifts the burden of storing\n# the replication buffer to the replica. This means the replica must have sufficient\n# memory to accommodate the buffer during synchronization. However, this tradeoff is\n# generally beneficial as it prevents potential performance degradation on the primary\n# server, which is typically handling more critical operations.\n#\n# When toggling this configuration on or off during an ongoing synchronization process,\n# it does not change the already running sync method. The new configuration will take\n# effect only for subsequent synchronization processes.\n\ndual-channel-replication-enabled no\n\n# The primary sends PINGs to its replicas at a predefined interval. It's possible to\n# change this interval with the repl_ping_replica_period option. 
The default\n# value is 10 seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of primaries (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the primary and the replica. The default\n# value is 60 seconds.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select \"yes\", the server will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the primary and replicas are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Enables MPTCP for the replica's connection to the primary.\n#\n# An MPTCP connection is established between the primary and the replica if\n# the replica has set 'repl-mptcp yes' and the primary has set 'mptcp yes'.\n# Otherwise, it will automatically and implicitly fall back to a regular TCP\n# connection.\n#\n# repl-mptcp no\n\n# Set the replication backlog size. 
The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a\n# replica wants to reconnect again, often a full resync is not needed, but a\n# partial resync is enough, just passing the portion of data the replica\n# missed while disconnected.\n#\n# The bigger the replication backlog, the longer the replica can endure the\n# disconnect and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated if there is at least one replica connected.\n#\n# repl-backlog-size 10mb\n\n# After a primary has no connected replicas for some time, the backlog will be\n# freed. The following option configures the amount of seconds that need to\n# elapse, starting from the time the last replica disconnected, for the backlog\n# buffer to be freed.\n#\n# Note that replicas never free the backlog for timeout, since they may be\n# promoted to primaries later, and should be able to correctly \"partially\n# resynchronize\" with other replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The replica priority is an integer number published by the server in the INFO\n# output. 
It is used by Sentinel in order to select a replica to promote\n# into a primary if the primary is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel\n# will pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of primary, so a replica with priority of 0 will never be selected by\n# Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# The propagation error behavior controls how the server will behave when it is\n# unable to handle a command being processed in the replication stream from a primary\n# or processed while reading from an AOF file. Errors that occur during propagation\n# are unexpected, and can cause data inconsistency.\n#\n# If an application wants to ensure there is no data divergence, this configuration\n# should be set to 'panic' instead. The value can also be set to 'panic-on-replicas'\n# to only panic when a replica encounters an error on the replication stream. One of\n# these two panic values will become the default value in the future once there are\n# sufficient safety mechanisms in place to prevent false positive crashes.\n#\n# propagation-error-behavior ignore\n\n# Replica ignore disk write errors controls the behavior of a replica when it is\n# unable to persist a write command received from its primary to disk. 
By default,\n# this configuration is set to 'no' and will crash the replica in this condition.\n# It is not recommended to change this default.\n#\n# replica-ignore-disk-write-errors no\n\n# Make the primary forbid expiration and eviction.\n# This is useful for sync tools, because expiration and eviction may cause data corruption.\n# Sync tools can mark their connections as an importing source via CLIENT IMPORT-SOURCE.\n# NOTICE: Clients should avoid writing the same key on the source server and the destination server.\n#\n# import-mode no\n\n# -----------------------------------------------------------------------------\n# By default, Sentinel includes all replicas in its reports. A replica\n# can be excluded from Sentinel's announcements. An unannounced replica\n# will be ignored by the 'sentinel replicas <primary>' command and won't be\n# exposed to Sentinel's clients.\n#\n# This option does not change the behavior of replica-priority. Even with\n# replica-announced set to 'no', the replica can be promoted to primary. 
To\n# prevent this behavior, set replica-priority to 0.\n#\n# replica-announced yes\n\n# It is possible for a primary to stop accepting writes if there are fewer than\n# N replicas connected, having a lag less than or equal to M seconds.\n#\n# The N replicas need to be in \"online\" state.\n#\n# The lag in seconds, that must be <= the specified value, is calculated from\n# the last ping received from the replica, that is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough replicas\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 replicas with a lag <= 10 seconds use:\n#\n# min-replicas-to-write 3\n# min-replicas-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-replicas-to-write is set to 0 (feature disabled) and\n# min-replicas-max-lag is set to 10.\n\n# A primary is able to list the address and port of the attached\n# replicas in different ways. For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Sentinel in order to discover replica instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a primary.\n#\n# The listed IP address and port normally reported by a replica is\n# obtained in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the replica to connect with the primary.\n#\n#   Port: The port is communicated by the replica during the replication\n#   handshake, and is normally the port that the replica is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the replica may actually be reachable via different IP and port\n# pairs. 
The following two options can be used by a replica in order to\n# report to its primary a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both the options if you need to override just\n# the port or the IP address.\n#\n# replica-announce-ip 5.5.5.5\n# replica-announce-port 1234\n\n############################### KEYS TRACKING #################################\n\n# The client side caching of values is assisted via server-side support.\n# This is implemented using an invalidation table that remembers, using\n# a radix tree indexed by key name, what clients have which keys. In turn\n# this is used in order to send invalidation messages to clients. Please\n# check this page to understand more about the feature:\n#\n#   https://valkey.io/topics/client-side-caching\n#\n# When tracking is enabled for a client, all the read only queries are assumed\n# to be cached: this will force the server to store information in the invalidation\n# table. When keys are modified, such information is flushed away, and\n# invalidation messages are sent to the clients. However if the workload is\n# heavily dominated by reads, the server could use more and more memory in order\n# to track the keys fetched by many clients.\n#\n# For this reason it is possible to configure a maximum fill value for the\n# invalidation table. By default it is set to 1M of keys, and once this limit\n# is reached, the server will start to evict keys in the invalidation table\n# even if they were not modified, just to reclaim memory: this will in turn\n# force the clients to invalidate the cached values. 
Basically the table\n# maximum size is a trade off between the memory you want to spend server\n# side to track information about who cached what, and the ability of clients\n# to retain cached objects in memory.\n#\n# If you set the value to 0, it means there are no limits, and the server will\n# retain as many keys as needed in the invalidation table.\n# In the \"stats\" INFO section, you can find information about the number of\n# keys in the invalidation table at every given moment.\n#\n# Note: when key tracking is used in broadcasting mode, no memory is used\n# in the server side so this setting is useless.\n#\n# tracking-table-max-keys 1000000\n\n################################## SECURITY ###################################\n\n# Warning: since the server is pretty fast, an outside user can try up to\n# 1 million passwords per second against a modern box. This means that you\n# should use very strong passwords, otherwise they will be very easy to break.\n# Note that because the password is really a shared secret between the client\n# and the server, and should not be memorized by any human, the password\n# can be easily a long string from /dev/urandom or whatever, so by using a\n# long and unguessable password no brute force attack will be possible.\n\n# ACL users are defined in the following format:\n#\n#   user <username> ... acl rules ...\n#\n# For example:\n#\n#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99\n#\n# The special username \"default\" is used for new connections. If this user\n# has the \"nopass\" rule, then new connections will be immediately authenticated\n# as the \"default\" user without the need of any password provided via the\n# AUTH command. 
Otherwise if the \"default\" user is not flagged with \"nopass\"\n# the connections will start in a not authenticated state, and will require\n# AUTH (or the HELLO command AUTH option) in order to be authenticated and\n# start to work.\n#\n# The ACL rules that describe what a user can do are the following:\n#\n#  on           Enable the user: it is possible to authenticate as this user.\n#  off          Disable the user: it's no longer possible to authenticate\n#               with this user, however the already authenticated connections\n#               will still work.\n#  skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.\n#  sanitize-payload         RESTORE dump-payload is sanitized (default).\n#  +<command>   Allow the execution of that command.\n#               May be used with `|` for allowing subcommands (e.g \"+config|get\")\n#  -<command>   Disallow the execution of that command.\n#               May be used with `|` for blocking subcommands (e.g \"-config|set\")\n#  +@<category> Allow the execution of all the commands in such category.\n#               Valid categories are @admin, @set, @sortedset, ...\n#               and so forth, see the full list in the server.c file where\n#               the server command table is described and defined.\n#               The special category @all means all the commands, both those\n#               currently present in the server and those that will be loaded\n#               in the future via modules.\n#  +<command>|first-arg  Allow a specific first argument of an otherwise\n#                        disabled command. It is only supported on commands with\n#                        no sub-commands, and is not allowed as negative form\n#                        like -SELECT|1, only additive starting with \"+\". This\n#                        feature is deprecated and may be removed in the future.\n#  allcommands  Alias for +@all. 
Note that it implies the ability to execute\n#               all the future commands loaded via the modules system.\n#  nocommands   Alias for -@all.\n#  ~<pattern>   Add a pattern of keys that can be mentioned as part of\n#               commands. For instance ~* allows all the keys. The pattern\n#               is a glob-style pattern like the one of KEYS.\n#               It is possible to specify multiple patterns.\n# %R~<pattern>  Add key read pattern that specifies which keys can be read\n#               from.\n# %W~<pattern>  Add key write pattern that specifies which keys can be\n#               written to.\n#  allkeys      Alias for ~*\n#  resetkeys    Flush the list of allowed keys patterns.\n#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be\n#               accessed by the user. It is possible to specify multiple channel\n#               patterns.\n#  allchannels  Alias for &*\n#  resetchannels            Flush the list of allowed channel patterns.\n#  ><password>  Add this password to the list of valid passwords for the user.\n#               For example >mypass will add \"mypass\" to the list.\n#               This directive clears the \"nopass\" flag (see later).\n#  <<password>  Remove this password from the list of valid passwords.\n#  nopass       All the set passwords of the user are removed, and the user\n#               is flagged as requiring no password: it means that every\n#               password will work against this user. If this directive is\n#               used for the default user, every new connection will be\n#               immediately authenticated with the default user without\n#               any explicit AUTH command required. Note that the \"resetpass\"\n#               directive will clear this condition.\n#  resetpass    Flush the list of allowed passwords. Moreover removes the\n#               \"nopass\" status. 
After \"resetpass\" the user has no associated\n#               passwords and there is no way to authenticate without adding\n#               some password (or setting it as \"nopass\" later).\n#  reset        Performs the following actions: resetpass, resetkeys, resetchannels,\n#               allchannels (if acl-pubsub-default is set), off, clearselectors, -@all.\n#               The user returns to the same state it has immediately after its creation.\n# (<options>)   Create a new selector with the options specified within the\n#               parentheses and attach it to the user. Each option should be\n#               space separated. The first character must be ( and the last\n#               character must be ).\n# clearselectors            Remove all of the currently attached selectors.\n#                           Note this does not change the \"root\" user permissions,\n#                           which are the permissions directly applied onto the\n#                           user (outside the parentheses).\n#\n# ACL rules can be specified in any order: for instance you can start with\n# passwords, then flags, or key patterns. However note that the additive\n# and subtractive rules will CHANGE MEANING depending on the ordering.\n# For instance see the following example:\n#\n#   user alice on +@all -DEBUG ~* >somepassword\n#\n# This will allow \"alice\" to use all the commands with the exception of the\n# DEBUG command, since +@all added all the commands to the set of the commands\n# alice can use, and later DEBUG was removed. 
However if we invert the order\n# of the two ACL rules the result will be different:\n#\n#   user alice on -DEBUG +@all ~* >somepassword\n#\n# Now DEBUG was removed when alice did not yet have any commands in the set of\n# allowed commands; later all the commands are added, so the user will be able to\n# execute everything.\n#\n# Basically ACL rules are processed left-to-right.\n#\n# The following is a list of command categories and their meanings:\n# * keyspace - Writing or reading from keys, databases, or their metadata\n#     in a type agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,\n#     KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,\n#     key or metadata will also have `write` category. Commands that only read\n#     the keyspace, key or metadata will have the `read` category.\n# * read - Reading from keys (values or metadata). Note that commands that don't\n#     interact with keys, will not have either `read` or `write`.\n# * write - Writing to keys (values or metadata)\n# * admin - Administrative commands. Normal applications will never need to use\n#     these. Includes REPLICAOF, CONFIG, DEBUG, SAVE, MONITOR, ACL, SHUTDOWN, etc.\n# * dangerous - Potentially dangerous (each should be considered with care for\n#     various reasons). This includes FLUSHALL, MIGRATE, RESTORE, SORT, KEYS,\n#     CLIENT, DEBUG, INFO, CONFIG, SAVE, REPLICAOF, etc.\n# * connection - Commands affecting the connection or other connections.\n#     This includes AUTH, SELECT, COMMAND, CLIENT, ECHO, PING, etc.\n# * blocking - Potentially blocking the connection until released by another\n#     command.\n# * fast - Fast O(1) commands. 
May loop on the number of arguments, but not the\n#     number of elements in the key.\n# * slow - All commands that are not Fast.\n# * pubsub - PUBLISH / SUBSCRIBE related\n# * transaction - WATCH / MULTI / EXEC related commands.\n# * scripting - Scripting related.\n# * set - Data type: sets related.\n# * sortedset - Data type: zsets related.\n# * list - Data type: lists related.\n# * hash - Data type: hashes related.\n# * string - Data type: strings related.\n# * bitmap - Data type: bitmaps related.\n# * hyperloglog - Data type: hyperloglog related.\n# * geo - Data type: geo related.\n# * stream - Data type: streams related.\n#\n# For more information about ACL configuration please refer to\n# the Valkey web site at https://valkey.io/topics/acl\n\n# ACL LOG\n#\n# The ACL Log tracks failed commands and authentication events associated\n# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked\n# by ACLs. The ACL Log is stored in memory. You can reclaim memory with\n# ACL LOG RESET. Define the maximum entry length of the ACL Log below.\nacllog-max-len 128\n\n# Using an external ACL file\n#\n# Instead of configuring users here in this file, it is possible to use\n# a stand-alone file just listing users. The two methods cannot be mixed:\n# if you configure users here and at the same time you activate the external\n# ACL file, the server will refuse to start.\n#\n# The format of the external ACL user file is exactly the same as the\n# format that is used inside valkey.conf to describe users.\n#\n# aclfile /etc/valkey/users.acl\n\n# IMPORTANT NOTE: \"requirepass\" is just a compatibility\n# layer on top of the new ACL system. The option effect will be just setting\n# the password for the default user. 
Clients will still authenticate using
# AUTH <password> as usual, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# The requirepass option is not compatible with the aclfile option and the
# ACL LOAD command; these will cause requirepass to be ignored.
#

user default on >isfoobared allcommands allkeys

# The default Pub/Sub channels permission for new users is controlled by the
# acl-pubsub-default configuration directive, which accepts one of these values:
#
# allchannels: grants access to all Pub/Sub channels
# resetchannels: revokes access to all Pub/Sub channels
#
# acl-pubsub-default defaults to 'resetchannels' permission.
#
# acl-pubsub-default resetchannels

# Command renaming (DEPRECATED).
#
# ------------------------------------------------------------------------
# WARNING: avoid using this option if possible. Instead use ACLs to remove
# commands from the default user, and put them only in some admin user you
# create for administrative purposes.
# ------------------------------------------------------------------------
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.

################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. 
By default\n# this limit is set to 10000 clients, however if the server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as the server reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached the server will close all the new connections sending\n# an error 'max number of clients reached'.\n#\n# IMPORTANT: With a cluster-enabled setup, the max number of connections is also\n# shared with the cluster bus: every node in the cluster will use two\n# connections, one incoming and another outgoing. It is important to size the\n# limit accordingly in case of very large clusters.\n#\nmaxclients 4000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached the server will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If the server can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', the server will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using the server as an LRU or LFU cache, or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have replicas attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the replicas are subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of replicas is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... 
if you have replicas attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for replica\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\nmaxmemory 88MB\n\n# MAXMEMORY POLICY: how the server will select what to remove when maxmemory\n# is reached. You can select one from the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU, only keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key having an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# Both LRU, LFU and volatile-ttl are implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, when there are no suitable keys for\n# eviction, the server will return an error on write operations that require\n# more memory. These are usually commands that create new keys, add data or\n# modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,\n# SORT (due to the STORE argument), and EXEC (if the transaction includes any\n# command that requires memory).\n#\n# The default is: noeviction\n#\nmaxmemory-policy allkeys-lru\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune it for speed or\n# accuracy. By default the server will check five keys and pick the one that was\n# used least recently, you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 
10 approximates true
# LRU very closely but costs more CPU. 3 is faster but not very accurate. The maximum
# value that can be set is 64.
#
# maxmemory-samples 5

# Eviction processing is designed to function well with the default setting.
# If there is an unusually large amount of write traffic, this value may need to
# be increased.  Decreasing this value may reduce latency at the risk of less
# effective eviction processing.
#   0 = minimum latency, 10 = default, 100 = process without regard to latency
#
# maxmemory-eviction-tenacity 10

# By default a replica will ignore its maxmemory setting
# (unless it is promoted to primary after a failover or manually). It means
# that the eviction of keys will be just handled by the primary, sending the
# DEL commands to the replica as keys evict in the primary side.
#
# This behavior ensures that primaries and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica
# to have a different memory setting, and you are sure all the writes performed
# to the replica are idempotent, then you may change this default (but be sure
# to understand what you are doing).
#
# Note that since the replica by default does not evict, it may end up using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory
# and so forth). So make sure you monitor your replicas and make sure they
# have enough memory to never hit a real out-of-memory condition before the
# primary hits the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

# The server reclaims expired keys in two ways: upon access when those keys are
# found to be expired, and also in the background, in what is called the
# "active expire cycle". 
The key space is slowly and incrementally scanned
# looking for expired keys to reclaim, so that it is possible to free memory
# of keys that are expired and will never be accessed again in a short time.
#
# The default effort of the expire cycle will try to avoid having more than
# ten percent of expired keys still in memory, and will try to avoid consuming
# more than 25% of total memory and to avoid adding latency to the system. However
# it is possible to increase the expire "effort" that is normally set to
# "1", to a greater value, up to the value "10". At its maximum value the
# system will use more CPU, longer cycles (and technically may introduce
# more latency), and will tolerate fewer already expired keys still present
# in the system. It's a tradeoff between memory, CPU and latency.
#
# active-expire-effort 1

############################# LAZY FREEING ####################################

# When keys are deleted, the server has historically freed their memory using
# blocking operations. It means that the server stopped processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in the server. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons, lazy freeing (or asynchronous freeing) has been
# introduced. With lazy freeing, keys are deleted in constant time. Another
# thread will incrementally free the object in the background as fast as
# possible.
#
# Starting from Valkey 8.0, lazy freeing is enabled by default. 
It is possible\n# to retain the synchronous freeing behaviour by setting the lazyfree related\n# configuration directives to 'no'.\n\n# Commands like DEL, FLUSHALL and FLUSHDB delete keys, but the server can also\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically the server deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a replica performs a full resynchronization with\n#    its primary, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases, the default is to release memory in a non-blocking\n# way.\n\nlazyfree-lazy-eviction yes\nlazyfree-lazy-expire yes\nlazyfree-lazy-server-del yes\nreplica-lazy-flush yes\n\n# For keys deleted using the DEL command, lazy freeing is controlled by the\n# configuration directive 'lazyfree-lazy-user-del'. The default is 'yes'. 
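# (A hedged illustration: an operator preferring the pre-8.0 blocking DEL
# semantics could set the user-del directive back to 'no'; UNLINK would then
# remain the explicit non-blocking alternative. The value below is an example,
# not the shipped default.
#
#   lazyfree-lazy-user-del no
# )
#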
The
# UNLINK command is identical to the DEL command, except that UNLINK always
# frees the memory lazily, regardless of this configuration directive:

lazyfree-lazy-user-del yes

# FLUSHDB, FLUSHALL, SCRIPT FLUSH and FUNCTION FLUSH support both asynchronous and synchronous
# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the
# commands. When neither flag is passed, this directive will be used to determine
# if the data should be deleted asynchronously.
#
# When a replica performs a node reset via CLUSTER RESET, the entire
# database content is removed to allow the node to become an empty primary.
# This directive also determines whether the data should be deleted asynchronously.
#
# There are many problems with running flush synchronously. Even in single CPU
# environments, the thread scheduler must balance freeing memory against
# serving incoming requests. The default value is yes.

lazyfree-lazy-user-flush yes

################################ THREADED I/O #################################

# The server is mostly single threaded, however there are certain threaded
# operations such as UNLINK, slow I/O accesses and other things that are
# performed on side threads.
#
# Now it is also possible to handle the server's client socket reads and writes
# in different I/O threads. Since writing in particular is slow, users normally
# rely on pipelining in order to speed up per-core server performance,
# and spawn multiple instances in order to scale more. 
Using I/O
# threads it is possible to easily double the server's throughput without
# resorting to pipelining or sharding the instance.
#
# By default threading is disabled; we suggest enabling it only on machines
# that have at least 3 cores, leaving at least one spare core.
# We also recommend using threaded I/O only if you actually have performance problems, with
# instances being able to use a sizable percentage of CPU time, otherwise
# there is no point in using this feature.
#
# So for instance if you have a four-core box, try to use 2 or 3 I/O
# threads; if you have 8 cores, try to use 6 threads. In order to
# enable I/O threads use the following configuration directive:
#
# io-threads 4
#
# Setting io-threads to 1 will just use the main thread as usual.
# When I/O threads are enabled, we use threads for reads and writes, that is
# to thread the write and read syscalls, transfer the client buffers to the
# socket, and enable threading of reads and protocol parsing.
#
#
# NOTE:
# 1. The 'io-threads-do-reads' config is deprecated and has no effect. Please
# avoid using this config if possible.
#
# 2. If you want to test the server speedup using valkey-benchmark, make
# sure you also run the benchmark itself in threaded mode, using the
# --threads option to match the number of server threads, otherwise you'll not
# be able to notice the improvements.

############################ KERNEL OOM CONTROL ##############################

# On Linux, it is possible to hint the kernel OOM killer on what processes
# should be killed first when out of memory.
#
# Enabling this feature makes the server actively control the oom_score_adj value
# for all its processes, depending on their role. 
The default scores will\n# attempt to have background child processes killed before all others, and\n# replicas killed before primaries.\n#\n# The server supports these options:\n#\n# no:       Don't make changes to oom-score-adj (default).\n# yes:      Alias to \"relative\" see below.\n# absolute: Values in oom-score-adj-values are written as is to the kernel.\n# relative: Values are used relative to the initial value of oom_score_adj when\n#           the server starts and are then clamped to a range of -1000 to 1000.\n#           Because typically the initial value is 0, they will often match the\n#           absolute values.\noom-score-adj no\n\n# When oom-score-adj is used, this directive controls the specific values used\n# for primary, replica and background child processes. Values range -2000 to\n# 2000 (higher means more likely to be killed).\n#\n# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)\n# can freely increase their value, but not decrease it below its initial\n# settings. This means that setting oom-score-adj to \"relative\" and setting the\n# oom-score-adj-values to positive values will always succeed.\noom-score-adj-values 0 200 800\n\n\n#################### KERNEL transparent hugepage CONTROL ######################\n\n# Usually the kernel Transparent Huge Pages control is set to \"madvise\" or\n# \"never\" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which\n# case this config has no effect. On systems in which it is set to \"always\",\n# the server will attempt to disable it specifically for the server process in order\n# to avoid latency problems specifically with fork(2) and CoW.\n# If for some reason you prefer to keep it enabled, you can set this config to\n# \"no\" and the kernel global to \"always\".\n\ndisable-thp yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default the server asynchronously dumps the dataset on disk. 
This mode is\n# good enough in many applications, but an issue with the server process or\n# a power outage may result into a few minutes of writes lost (depending on\n# the configured save points).\n#\n# The Append Only File is an alternative persistence mode that provides\n# much better durability. For instance using the default data fsync policy\n# (see later in the config file) the server can lose just one second of writes in a\n# dramatic event like a server power outage, or a single write if something\n# wrong with the process itself happens, but the operating system is\n# still running correctly.\n#\n# AOF and RDB persistence can be enabled at the same time without problems.\n# If the AOF is enabled on startup the server will load the AOF, that is the file\n# with the better durability guarantees.\n#\n# Note that changing this value in a config file of an existing database and\n# restarting the server can lead to data loss. A conversion needs to be done\n# by setting it via CONFIG command on a live server first.\n#\n# Please check https://valkey.io/topics/persistence for more information.\n\nappendonly no\n\n# The base name of the append only file.\n#\n# The server uses a set of append-only files to persist the dataset\n# and changes applied to it. There are two basic types of files in use:\n#\n# - Base files, which are a snapshot representing the complete state of the\n#   dataset at the time the file was created. 
Base files can be either in\n#   the form of RDB (binary serialized) or AOF (textual commands).\n# - Incremental files, which contain additional commands that were applied\n#   to the dataset following the previous file.\n#\n# In addition, manifest files are used to track the files and the order in\n# which they were created and should be applied.\n#\n# Append-only file names are created by the server following a specific pattern.\n# The file name's prefix is based on the 'appendfilename' configuration\n# parameter, followed by additional information about the sequence and type.\n#\n# For example, if appendfilename is set to appendonly.aof, the following file\n# names could be derived:\n#\n# - appendonly.aof.1.base.rdb as a base file.\n# - appendonly.aof.1.incr.aof, appendonly.aof.2.incr.aof as incremental files.\n# - appendonly.aof.manifest as a manifest file.\n\nappendfilename \"appendonly.aof\"\n\n# For convenience, the server stores all persistent append-only files in a dedicated\n# directory. The name of the directory is determined by the appenddirname\n# configuration parameter.\n\nappenddirname \"appendonlydir\"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OS will really flush\n# data on disk, some other OS will just try to do it ASAP.\n#\n# The server supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. Faster.\n# always: fsync after every write to the append only log. Slow, Safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is \"everysec\", as that's usually the right compromise between\n# speed and data safety. 
It's up to you to understand if you can relax this to
# "no", which will let the operating system flush the output buffer when
# it wants, for better performance (but if you can live with the idea of
# some data loss, consider the default persistence mode, snapshotting),
# or, on the contrary, use "always", which is very slow but a bit safer than
# everysec.
#
# For more details, please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# the server may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of the server is
# the same as "appendfsync no". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". 
Otherwise leave it as\n# \"no\" that is the safest pick from the point of view of durability.\n\nno-appendfsync-on-rewrite no\n\n# Automatic rewrite of the append only file.\n# The server is able to automatically rewrite the log file implicitly calling\n# BGREWRITEAOF when the AOF log size grows by the specified percentage.\n#\n# This is how it works: The server remembers the size of the AOF file after the\n# latest rewrite (if no rewrite has happened since the restart, the size of\n# the AOF at startup is used).\n#\n# This base size is compared to the current size. If the current size is\n# bigger than the specified percentage, the rewrite is triggered. Also\n# you need to specify a minimal size for the AOF file to be rewritten, this\n# is useful to avoid rewriting the AOF file even if the percentage increase\n# is reached but it is still pretty small.\n#\n# Specify a percentage of zero in order to disable the automatic AOF\n# rewrite feature.\n\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\n\n# An AOF file may be found to be truncated at the end during the server\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where the server is running\n# crashes, especially when an ext4 filesystem is mounted without the\n# data=ordered option (however this can't happen when the server itself\n# crashes or aborts but the operating system still works correctly).\n#\n# The server can either exit with an error when this happens, or load as much\n# data as possible (the default now) and start if the AOF file is found\n# to be truncated at the end. The following option controls this behavior.\n#\n# If aof-load-truncated is set to yes, a truncated AOF file is loaded and\n# the server starts emitting a log to inform the user of the event.\n# Otherwise if the option is set to no, the server aborts with an error\n# and refuses to start. 
When the option is set to no, the user must
# fix the AOF file using the "valkey-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle,
# the server will still exit with an error. This option only applies when
# the server tries to read more data from the AOF file but not enough bytes
# are found.
aof-load-truncated yes

# The server can create append-only base files in either RDB or AOF formats. Using
# the RDB format is always faster and more efficient, and disabling it is only
# supported for backward compatibility purposes.
aof-use-rdb-preamble yes

# The server supports recording timestamp annotations in the AOF to support restoring
# the data from a specific point-in-time. However, using this capability changes
# the AOF format in a way that may not be compatible with existing AOF parsers.
aof-timestamp-enabled no

################################ SHUTDOWN #####################################

# Maximum time to wait for replicas when shutting down, in seconds.
#
# During shut down, a grace period allows any lagging replicas to catch up with
# the latest replication offset before the primary exits. This period can
# prevent data loss, especially for deployments without configured disk backups.
#
# The 'shutdown-timeout' value is the grace period's duration in seconds. It is
# only applicable when the instance has replicas. 
To disable the feature, set\n# the value to 0.\n#\n# shutdown-timeout 10\n\n# When the server receives a SIGINT or SIGTERM, shutdown is initiated and by default\n# an RDB snapshot is written to disk in a blocking operation if save points are configured.\n# The options used on signaled shutdown can include the following values:\n#\n# default:  Saves RDB snapshot only if save points are configured.\n#           Waits for lagging replicas to catch up.\n# save:     Forces a DB saving operation even if no save points are configured.\n# nosave:   Prevents DB saving operation even if one or more save points are configured.\n# now:      Skips waiting for lagging replicas.\n# force:    Ignores any errors that would normally prevent the server from exiting.\n# safe:     Shut down only when safe. Note that safe cannot prevent force, in the case of\n#           force, safe will print the relevant logs. The definition of safe may be different\n#           in different modes. Here are the definitions:\n#           * In cluster mode, it is unsafe to shut down a primary with slots, and may cause\n#             the cluster to go down.\n# failover: In cluster mode, when shutting down a primary, it can proactively\n#           initiate a manual failover. This promotes one of its replicas to\n#           primary before shutdown, resulting in a quicker and safer transition\n#           than relying on an automatic failover. For a replica to be eligible\n#           for this promotion, it must be fully synchronized with the primary\n#           node at the time of the shutdown signal after waiting up to the\n#           configured shutdown-timeout. 
If no such replica is found, this\n#           proactive failover will not occur.\n#\n# Any combination of values is allowed as long as \"save\" and \"nosave\" are not set simultaneously.\n# Example: \"nosave force now\"\n#\n# shutdown-on-sigint default\n# shutdown-on-sigterm default\n\n################ NON-DETERMINISTIC LONG BLOCKING COMMANDS #####################\n\n# Maximum time in milliseconds for EVAL scripts, functions and in some cases\n# modules' commands before the server can start processing or rejecting other clients.\n#\n# If the maximum execution time is reached the server will start to reply to most\n# commands with a BUSY error.\n#\n# In this state the server will only allow a handful of commands to be executed.\n# For instance, SCRIPT KILL, FUNCTION KILL, SHUTDOWN NOSAVE and possibly some\n# module specific 'allow-busy' commands.\n#\n# SCRIPT KILL and FUNCTION KILL will only be able to stop a script that did not\n# yet call any write commands, so SHUTDOWN NOSAVE may be the only way to stop\n# the server in the case a write command was already issued by the script when\n# the user doesn't want to wait for the natural termination of the script.\n#\n# The default is 5 seconds. It is possible to set it to 0 or a negative value\n# to disable this mechanism (uninterrupted execution). Note that in the past\n# this config had a different name, which is now an alias, so both of these do\n# the same:\n# lua-time-limit 5000\n# busy-reply-threshold 5000\n\n################################ VALKEY CLUSTER  ###############################\n\n# Normal server instances can't be part of a cluster; only nodes that are\n# started as cluster nodes can. In order to start a server instance as a\n# cluster node enable the cluster support uncommenting the following:\n#\n# cluster-enabled yes\n\n# Every cluster node has a cluster configuration file. This file is not\n# intended to be edited by hand. 
It is created and updated by each node.\n# Every cluster node requires a different cluster configuration file.\n# Make sure that instances running in the same system do not have\n# overlapping cluster configuration file names.\n#\n# cluster-config-file nodes-6379.conf\n\n# Cluster node timeout is the amount of milliseconds a node must be unreachable\n# for it to be considered in failure state.\n# Most other internal time limits are a multiple of the node timeout.\n#\n# cluster-node-timeout 15000\n\n# The cluster port is the port that the cluster bus will listen for inbound connections on. When set\n# to the default value, 0, it will be bound to the command port + 10000. Setting this value requires\n# you to specify the cluster bus port when executing cluster meet.\n# cluster-port 0\n\n# A replica of a failing primary will avoid to start a failover if its data\n# looks too old.\n#\n# There is no simple way for a replica to actually have an exact measure of\n# its \"data age\", so the following two checks are performed:\n#\n# 1) If there are multiple replicas able to failover, they exchange messages\n#    in order to try to give an advantage to the replica with the best\n#    replication offset (more data from the primary processed).\n#    Replicas will try to get their rank by offset, and apply to the start\n#    of the failover a delay proportional to their rank.\n#\n# 2) Every single replica computes the time of the last interaction with\n#    its primary. This can be the last ping or command received (if the primary\n#    is still in the \"connected\" state), or the time that elapsed since the\n#    disconnection with the primary (if the replication link is currently down).\n#    If the last interaction is too old, the replica will not try to failover\n#    at all.\n#\n# The point \"2\" can be tuned by user. 
Specifically a replica will not perform\n# the failover if, since the last interaction with the primary, the time\n# elapsed is greater than:\n#\n#   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period\n#\n# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor\n# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the\n# replica will not try to failover if it was not able to talk with the primary\n# for longer than 310 seconds.\n#\n# A large cluster-replica-validity-factor may allow replicas with too old data to failover\n# a primary, while a too small value may prevent the cluster from being able to\n# elect a replica at all.\n#\n# For maximum availability, it is possible to set the cluster-replica-validity-factor\n# to a value of 0, which means, that replicas will always try to failover the\n# primary regardless of the last time they interacted with the primary.\n# (However they'll always try to apply a delay proportional to their\n# offset rank).\n#\n# Zero is the only value able to guarantee that when all the partitions heal\n# the cluster will always be able to continue.\n#\n# cluster-replica-validity-factor 10\n\n# Cluster replicas are able to migrate to orphaned primaries, that are primaries\n# that are left without working replicas. This improves the cluster ability\n# to resist to failures as otherwise an orphaned primary can't be failed over\n# in case of failure if it has no working replicas.\n#\n# Replicas migrate to orphaned primaries only if there are still at least a\n# given number of other working replicas for their old primary. This number\n# is the \"migration barrier\". A migration barrier of 1 means that a replica\n# will migrate only if there is at least 1 other working replica for its primary\n# and so forth. 
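# (For illustration: with the example setting below, a replica would migrate
# to an orphaned primary only if its current primary keeps at least two other
# working replicas; 2 here is a placeholder value, not a recommendation.
#
#   cluster-migration-barrier 2
# )
#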
It usually reflects the number of replicas you want for every
# primary in your cluster.
#
# Default is 1 (replicas migrate only if their primaries remain with at least
# one replica). To disable migration just set it to a very large value or
# set cluster-allow-replica-migration to 'no'.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# Turning off this option allows the use of less automatic cluster configuration.
# It disables migration of replicas to orphaned primaries. Primaries that become
# empty due to losing their last slots to another primary will not automatically
# replicate from the primary that took over their last slots. Instead, they will
# remain as empty primaries without any slots.
#
# Default is 'yes' (allow automatic migrations).
#
# cluster-allow-replica-migration yes

# By default cluster nodes stop accepting queries if they detect there
# is at least a hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) the whole cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to fail over their
# primary during primary failures. 
However the replica can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted except
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# The timeout in milliseconds for cluster manual failover. If a manual failover
# does not complete within the specified time, both the replica and the primary
# will abort it. Note that this timeout is also used for the finalization of
# migrations initiated with the CLUSTER MIGRATESLOTS command.
#
# A manual failover is a special kind of failover that is usually executed when
# there are no actual failures, and we wish to swap the current primary with one
# of its replicas in a safe way, without any window for data loss.
#
# To avoid data loss, the primary and the replica need to wait for each other for
# a period of time, and the primary needs to pause client writes to stop processing
# traffic. The default failover timeout is 5000 ms; it is possible to configure the
# timeout and decide how long the primary will pause in the worst case scenario,
# i.e. when the manual failover times out due to insufficient votes.
#
# Check https://valkey.io/commands/cluster-failover/ for more information.
#
# cluster-manual-failover-timeout 5000

# This option, when set to yes, allows nodes to serve read traffic while the
# cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful for two cases.  The first case is for when an application
# doesn't require consistency of data during node failures or network partitions.
# One example of this is a cache, where as long as the node has the data it
# should be able to serve it.
#
# The second use case is for configurations that don't meet the recommended
# three shards but want to enable cluster mode and scale later. 
A
# primary outage in a 1 or 2 shard configuration causes a read/write outage to the
# entire cluster without this option set; with it set, there is only a write outage.
# Without a quorum of primaries, slot ownership will not change automatically.
#
# cluster-allow-reads-when-down no

# This option, when set to yes, allows nodes to serve pubsub shard traffic while
# the cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful if the application would like to use the pubsub feature even when
# the cluster global stable state is not OK. If the application wants to make sure only
# one shard is serving a given channel, this feature should be kept as yes.
#
# cluster-allow-pubsubshard-when-down yes

# Cluster link send buffer limit is the limit on the memory usage of an individual
# cluster bus link's send buffer in bytes. Cluster links are freed if they exceed
# this limit. This is primarily to prevent send buffers from growing unbounded on links
# toward slow peers (e.g. PubSub messages piling up).
# This limit is disabled by default. Enable this limit when the 'mem_cluster_links' INFO field
# and/or 'send-buffer-allocated' entries in the 'CLUSTER LINKS' command output continuously increase.
# A minimum limit of 1gb is recommended so that the cluster link buffer can fit at least a single
# PubSub message by default. (client-query-buffer-limit default value is 1gb)
#
# cluster-link-sendbuf-limit 0

# Clusters can configure their announced hostname using this config. This is a common use case for
# applications that need to use TLS Server Name Indication (SNI) or deal with DNS-based
# routing. By default this value is only shown as additional metadata in the CLUSTER SLOTS
# command, but can be changed using the 'cluster-preferred-endpoint-type' config. 
This value is
# communicated over the cluster bus to all nodes; setting it to an empty string removes
# the hostname and also propagates the removal.
#
# cluster-announce-hostname ""

# Clusters can configure an optional nodename to be used in addition to the node ID for
# debugging and admin information. This name is broadcast between nodes, so it will be used
# in addition to the node ID when reporting cross node events such as node failures.
# cluster-announce-human-nodename ""

# Clusters can advertise how clients should connect to them using either their IP address,
# a user defined hostname, or by declaring they have no endpoint. Which endpoint is
# shown as the preferred endpoint is set by using the cluster-preferred-endpoint-type
# config with values 'ip', 'hostname', or 'unknown-endpoint'. This value controls the
# endpoint returned for MOVED/ASKING requests as well as the first field of CLUSTER SLOTS.
# If the preferred endpoint type is set to hostname, but no announced hostname is set, a '?'
# will be returned instead.
#
# When a cluster advertises itself as having an unknown endpoint, it's indicating that
# the server doesn't know how clients can reach the cluster. This can happen in certain
# networking situations where there are multiple possible routes to the node, and the
# server doesn't know which one the client took. In this case, the server is expecting
# the client to reach out on the same endpoint it used for making the last request, but use
# the port provided in the response.
#
# cluster-preferred-endpoint-type ip

# The cluster blacklist is used when removing a node from the cluster completely.
# When CLUSTER FORGET is called for a node, that node is put into the blacklist for
# some time so that when gossip messages are received from other nodes that still
# remember it, it is not re-added. This gives time for CLUSTER FORGET to be sent to
# every node in the cluster. 
The blacklist TTL is 60 seconds by default, which should
# be sufficient for most clusters, but you may consider increasing this if you see
# nodes getting re-added while using CLUSTER FORGET.
#
# cluster-blacklist-ttl 60

# Clusters can be configured to track per-slot resource statistics,
# which are accessible by the CLUSTER SLOT-STATS command.
#
# By default, 'cluster-slot-stats-enabled' is disabled, and only 'key-count' is captured.
# By enabling the 'cluster-slot-stats-enabled' config, the cluster will begin to capture advanced statistics.
# These statistics can be leveraged to assess general slot usage trends, identify hot / cold slots,
# migrate slots for a balanced cluster workload, and / or re-write application logic to better utilize slots.
#
# cluster-slot-stats-enabled no

# Slot migrations using the CLUSTER MIGRATESLOTS command will generate an in-memory migration log on both
# the source and target nodes of the migration. These can be observed with CLUSTER GETSLOTMIGRATIONS.
# 'cluster-slot-migration-log-max-len' allows the maximum length of this log to be specified. Only
# migrations that are completed will be considered for removal.
#
# cluster-slot-migration-log-max-len 1000

# During the CLUSTER MIGRATESLOTS command execution, the source node needs to pause itself and allow all
# writes to be fully processed by the target node. The amount of data remaining in the buffer on the
# source node when this pause happens will affect how long this pause takes.
# 'slot-migration-max-failover-repl-bytes' allows the pause to wait until there are at most this
# many bytes in the output buffer. 
Setting this to -1 will disable this limit, and 0 will require
# no data be in the source output buffer (although this is not a guarantee that the data is fully
# received by the target).
#
# slot-migration-max-failover-repl-bytes 0

# In order to set up your cluster make sure to read the documentation
# available at the https://valkey.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, cluster nodes' address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make a cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following options are used for this purpose, and are:
#
# * cluster-announce-ip
# * cluster-announce-client-ipv4
# * cluster-announce-client-ipv6
# * cluster-announce-port
# * cluster-announce-tls-port
# * cluster-announce-bus-port
# * cluster-announce-client-port
# * cluster-announce-client-tls-port
#
# Each instructs the node about its address, possibly other addresses to expose
# to clients, client ports (for connections without and with TLS) and cluster
# message bus port. The information is then published in the bus packets so that
# other nodes will be able to correctly map the address of the node publishing
# the information.
#
# If tls-cluster is set to yes and cluster-announce-tls-port is omitted or set
# to zero, then cluster-announce-port refers to the TLS port. 
Note also that\n# cluster-announce-tls-port has no effect if tls-cluster is set to no.\n#\n# If cluster-announce-client-ipv4 and cluster-announce-client-ipv6 are omitted,\n# then cluster-announce-ip is exposed to clients.\n#\n# If the port that clients will use to connect to Valkey is different than\n# the one other valkey nodes in the cluster will connect to it on, either\n# through special networking rules or because Valkey is behind a load balancer,\n# you can configure the port that clients will see by setting\n# cluster-announce-client-port or cluster-announce-client-tls-port.\n#\n# If the above options are not used, the normal cluster auto-detection\n# will be used instead.\n#\n# Note that when remapped, the bus port may not be at the fixed offset of\n# clients port + 10000, so you can specify any port and bus-port depending\n# on how they get remapped. If the bus-port is not set, a fixed offset of\n# 10000 will be used as usual.\n#\n# Example:\n#\n# cluster-announce-ip 10.1.1.5\n# cluster-announce-client-ipv4 123.123.123.5\n# cluster-announce-client-ipv6 2001:db8::8a2e:370:7334\n# cluster-announce-tls-port 6379\n# cluster-announce-port 0\n# cluster-announce-bus-port 6380\n# cluster-announce-client-tls-port 6479\n# cluster-announce-client-port 0\n\n# Set the number of databases in cluster mode. 
The default database is DB 0,\n# you can select a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'cluster-databases'-1.\n# cluster-databases 1\n\n################################## COMMAND LOG ###################################\n\n# The Command Log system is used to record commands that consume significant resources\n# during server operation, including CPU, memory, and network bandwidth.\n# These commands and the data they access may lead to abnormal instance operations,\n# the commandlog can help users quickly and intuitively locate issues.\n#\n# Currently, three types of command logs are supported:\n#\n# SLOW: Logs commands that exceed a specified execution time. This excludes time spent\n# on I/O operations like client communication and focuses solely on the command's\n# processing time, where the main thread is blocked.\n#\n# LARGE-REQUEST: Logs commands with requests exceeding a defined size. This helps\n# identify potentially problematic commands that send excessive data to the server.\n#\n# LARGE-REPLY: Logs commands that generate replies exceeding a defined size. This\n# helps identify commands that return unusually large amounts of data, which may\n# impact network performance or client processing.\n#\n# Each log type has two key parameters:\n# 1. A threshold value that determines when a command is logged. This threshold is specific\n#    to the type of log (e.g., execution time, request size, or reply size). A negative value disables\n#    logging. A value of 0 logs all commands.\n# 2. A maximum length that specifies the number of entries to retain in the log. Increasing\n#    the length allows more entries to be stored but consumes additional memory. To clear all\n#    entries for a specific log type and reclaim memory, use the `COMMANDLOG RESET`\n#    subcommand followed by the log type.\n#\n# SLOW Command Logs\n# The SLOW log records commands that exceed a specified execution time. 
The execution time\n# does not include I/O operations, such as client communication or sending responses.\n# It only measures the time spent executing the command, during which the thread is blocked\n# and cannot handle other requests.\n#\n# The threshold is measured in microseconds.\n#\n# Backward Compatibility: The parameters `slowlog-log-slower-than` and `slowlog-max-len`\n# are still supported but deprecated in favor of these commandlog parameters.\n#\n# The following time is expressed in microseconds, so 1000000 is equivalent to 1 second.\n# Note that -1 disables the slow log, while 0 forces logging of every command.\ncommandlog-execution-slower-than 10000\n# Record the number of commands.\n# There is no limit to this length. Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET or COMMANDLOG RESET SLOW.\ncommandlog-slow-execution-max-len 128\n#\n# LARGE_REQUEST Command Logs\n# The LARGE_REQUEST log tracks commands with requests exceeding a specified size. The request size\n# includes the command itself and all its arguments. For example, in `SET KEY VALUE`, the size is\n# determined by the combined size of the key and value. Commands that consume excessive network\n# bandwidth or query buffer space are recorded here.\n#\n# The threshold is measured in bytes.\n# Note that -1 disables the large request log, while 0 forces logging of every command.\ncommandlog-request-larger-than 1048576\n# Record the number of commands.\n# There is no limit to this length. Just be aware that it will consume memory.\n# You can reclaim memory used by the large request log with COMMANDLOG RESET LARGE-REQUEST.\ncommandlog-large-request-max-len 128\n#\n# LARGE_REPLY Command Logs\n# The LARGE_REPLY log records commands that produce replies exceeding a specified size. These replies\n# may consume significant network bandwidth or client output buffer space. 
Examples include commands\n# like `KEYS` or `HGETALL` that return large datasets. Even a `GET` command may qualify if the value\n# is substantial.\n#\n# The threshold is measured in bytes.\n# Note that -1 disables the large reply log, while 0 forces logging of every command.\n# Enabling this feature (values other than -1) has performance implications\n# when I/O threads are used due to additional tracking overhead.\n# Consider using -1 to disable if large reply monitoring is not needed.\ncommandlog-reply-larger-than 1048576\n# Record the number of commands.\n# There is no limit to this length. Just be aware that it will consume memory.\n# You can reclaim memory used by the large reply log with COMMANDLOG RESET LARGE-REPLY.\ncommandlog-large-reply-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The server latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a server instance.\n#\n# Via the LATENCY command this information is available to the user that can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal or\n# greater than the amount of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact, that while very small, can be measured under big load. 
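# The threshold semantics described above (record events that take at least the
# configured number of milliseconds; 0 turns the monitor off) can be sketched in
# a few lines of Python (illustrative only, not server code):

```python
# An event is recorded when its duration meets or exceeds the threshold;
# a latency-monitor-threshold of 0 disables the latency monitor entirely.
def latency_event_recorded(event_ms, threshold_ms):
    return threshold_ms > 0 and event_ms >= threshold_ms

print(latency_event_recorded(120, 100))  # True: at or over the threshold
print(latency_event_recorded(50, 100))   # False: fast enough to ignore
print(latency_event_recorded(500, 0))    # False: monitor disabled
```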
Latency\n# monitoring can easily be enabled at runtime using the command\n# \"CONFIG SET latency-monitor-threshold <milliseconds>\" if needed.\nlatency-monitor-threshold 0\n\n################################ LATENCY TRACKING ##############################\n\n# The server's extended latency monitoring tracks the per command latencies and enables\n# exporting the percentile distribution via the INFO latencystats command,\n# and cumulative latency distributions (histograms) via the LATENCY command.\n#\n# By default, the extended latency monitoring is enabled since the overhead\n# of keeping track of the command latency is very small.\n# latency-tracking yes\n\n# By default the exported latency percentiles via the INFO latencystats command\n# are the p50, p99, and p999.\n# latency-tracking-info-percentiles 50 99 99.9\n\n############################# EVENT NOTIFICATION ##############################\n\n# The server can notify Pub/Sub clients about events happening in the key space.\n# This feature is documented at https://valkey.io/topics/notifications\n#\n# For instance if keyspace events notification is enabled, and a client\n# performs a DEL operation on key \"foo\" stored in the Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that the server will notify among a set\n# of classes. 
Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  n     New key events (Note: not included in the 'A' class)\n#  t     Stream commands\n#  d     Module key type events\n#  m     Key-miss events (Note: It is not included in the 'A' class)\n#  A     Alias for g$lshzxetd, so that the \"AKE\" string means all the events\n#        (Except key-miss events which are excluded from 'A' due to their\n#         unique nature).\n#\n#  The \"notify-keyspace-events\" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events \"\"\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. 
These thresholds can be configured using the following directives.\nhash-max-listpack-entries 512\nhash-max-listpack-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-listpack-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:\n# 0: disable all list compression\n# 1: depth 1 means \"don't start compressing until after 1 node into the list,\n#    going from either the head or tail\"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit in the size of the\n# set in order to use this special memory saving encoding.\nset-max-intset-entries 512\n\n# Sets containing non-integer values are also encoded using a memory efficient\n# data structure when they have a small number of entries, and the biggest entry\n# does not exceed a given threshold. These thresholds can be configured using\n# the following directives.\nset-max-listpack-entries 128\nset-max-listpack-value 64\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-listpack-entries 128\nzset-max-listpack-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16 bytes header. 
When a HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Streams macro node max size / items. The stream data structure is a radix\n# tree of big nodes that encode multiple items inside. Using this configuration\n# it is possible to configure how big a single node can be in bytes, and the\n# maximum number of items it may contain before switching to a new node when\n# appending new stream entries. If any of the following settings are set to\n# zero, the limit is ignored, so for instance it is possible to set just a\n# max entries limit by setting max-bytes to 0 and max-entries to the desired\n# value.\nstream-node-max-bytes 4096\nstream-node-max-entries 100\n\n# Active rehashing uses 1% of the CPU time to help perform incremental rehashing\n# of the main server hash tables, the ones mapping top-level keys to values.\n#\n# If active rehashing is disabled and rehashing is needed, a hash table is\n# rehashed one \"step\" on every operation performed on the hash table (add, find,\n# etc.), so if the server is idle, the rehashing may never complete and some\n# more memory is used by the hash tables. Active rehashing helps prevent this.\n#\n# Active rehashing runs as a background task. Depending on the value of 'hz',\n# the frequency at which the server performs background tasks, active rehashing\n# can cause the server to freeze for a short time. 
For example, if 'hz' is set
# to 10, active rehashing runs for up to one millisecond every 100 milliseconds.
# If a freeze of one millisecond is not acceptable, you can increase 'hz' to let
# active rehashing run more often. If instead 'hz' is set to 100, active
# rehashing runs up to only 100 microseconds every 10 milliseconds. The total is
# still 1% of the time.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reaches 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously exceeds
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Note that it doesn't make sense to set the replica clients' output buffer
# limit lower than the repl-backlog-size config (partial sync would succeed
# and then the replica would get disconnected).
# Such a configuration is ignored (the size of repl-backlog-size will be used).
# This doesn't have memory consumption implications since the replica client
# will share the backlog buffers memory.
#
# Both the hard and the soft limits can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to prevent a protocol desynchronization (for
# instance due to a bug in the client) from leading to unbounded memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as a command with a huge argument, huge multi/exec requests, or the like.
#
# client-query-buffer-limit 1gb

# In some scenarios client connections can hog up memory leading to OOM
# errors or data eviction. To avoid this we can cap the accumulated memory
# used by all client connections (all pubsub and normal clients). Once we
# reach that limit, connections will be dropped by the server, freeing up
# memory. The server will attempt to drop the connections using the most
# memory first. We call this mechanism "client eviction".
#
# Client eviction is configured using the maxmemory-clients setting as follows:
# 0 - client eviction is disabled (default)
#
# A memory value can be used for the client eviction threshold,
# for example:
# maxmemory-clients 1g
#
# A percentage value (between 1% and 100%) means the client eviction threshold
# is based on a percentage of the maxmemory setting. 
For example to set client
# eviction at 5% of maxmemory:
# maxmemory-clients 5%

# In the server protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here, but it must be 1mb or greater.
#
# proto-max-bulk-len 512mb

# The server calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but the server checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# the server is idle, but at the same time will make the server more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 4 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When the server saves an RDB file, if the following option is enabled
# the file will be fsync-ed every 4 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

# The server's LFU eviction (see maxmemory setting) can be tuned. 
However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve performance and how the keys' LFU values change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the server LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key; its maximum value is 255, so the server
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following 
commands:
#
#   valkey-benchmark -n 1000000 incr foo
#   valkey-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be decremented.
#
# The default value for the lfu-decay-time is 1. A special value of 0 means we
# will never decay the counter.
#
# lfu-log-factor 10
# lfu-decay-time 1


# The maximum number of new client connections accepted per event-loop cycle. This configuration
# is set independently for TLS connections.
#
# By default, up to 10 new connections will be accepted per event-loop cycle for normal connections
# and up to 1 new connection per event-loop cycle for TLS connections.
#
# Adjusting this to a larger number can slightly improve efficiency for new connections
# at the risk of causing timeouts for regular commands on established connections.  It is
# not advised to change this without ensuring that all clients have limited connection
# pools and exponential backoff in the case of command/connection timeouts.
#
# If your application is establishing a large number of new connections per second you should
# also consider tuning the value of tcp-backlog, which allows the kernel to buffer more
# pending connections before dropping or rejecting connections.
#
# max-new-connections-per-cycle 10
# max-new-tls-connections-per-cycle 1

# Memory prefetching is used when multiple commands are parsed and ready for
# execution. We take advantage of knowing the next set of commands and prefetch
# their required hash table entries in a batch. This reduces the time spent on
# memory accesses.
#
# When I/O threads are used, the keys of multiple commands from multiple clients
# are prefetched together. 
When I/O threads are not used, only the commands from\n# a single client's command pipeline are prefetched.\n#\n# The optimal batch size depends on the specific workflow of the user and on the\n# hardware used. The default batch size is 16, which can be modified using the\n# 'prefetch-batch-max-size' config.\n#\n# When the config is set to 0, prefetching is disabled.\n#\n# prefetch-batch-max-size 16\n\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing memory to be reclaimed.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However, thanks to this feature, this\n# process can happen at runtime in a \"hot\" way, while the server is running.\n#\n# Basically, when the fragmentation is over a certain level (see the\n# configuration options below) the server will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys,\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled the server\n#    to use the copy of Jemalloc we ship with the source code of the server.\n#    This is the default with Linux builds.\n#\n# 2. 
You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters let you fine-tune the behavior of the\n# defragmentation process. If you are not sure what they mean, it is\n# a good idea to leave the defaults untouched.\n\n# Active defragmentation is disabled by default\n# activedefrag no\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage, not cycle time as the name might\n# suggest, to be used when the lower threshold is reached.\n# active-defrag-cycle-min 1\n\n# Maximal effort for defrag in CPU percentage, not cycle time as the name might\n# suggest, to be used when the upper threshold is reached.\n# active-defrag-cycle-max 25\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n# The time spent (in microseconds) by the periodic active defrag process.  This\n# affects the latency impact of active defrag on client commands.  
Smaller numbers\n# will result in less latency impact at the cost of increased defrag overhead.\n# active-defrag-cycle-us 500\n\n# Jemalloc background thread for purging will be enabled by default\njemalloc-bg-thread yes\n\n# It is possible to pin different threads and processes of the server to specific\n# CPUs in your system, in order to maximize the performance of the server.\n# This is useful both to pin different server threads to different\n# CPUs, and to make sure that multiple server instances running\n# in the same host will be pinned to different CPUs.\n#\n# Normally you can do this using the \"taskset\" command; however, it is also\n# possible to do this via the server configuration directly, both in Linux and FreeBSD.\n#\n# You can pin the server/IO threads, bio threads, aof rewrite child process,\n# bgsave child process and the slot migration process.\n# The syntax to specify the cpu list is the same as the taskset command:\n#\n# Set server/io threads to cpu affinity 0,2,4,6:\n# server-cpulist 0-7:2\n#\n# Set bio threads to cpu affinity 1,3:\n# bio-cpulist 1,3\n#\n# Set aof rewrite child process to cpu affinity 8,9,10,11:\n# aof-rewrite-cpulist 8-11\n#\n# Set bgsave (or slot migration) child process to cpu affinity 1,10,11:\n# bgsave-cpulist 1,10-11\n\n# In some cases the server will emit warnings and even refuse to start if it detects\n# that the system is in a bad state; it is possible to suppress these warnings\n# by setting the following config which takes a space delimited list of warnings\n# to suppress\n#\n# ignore-warnings ARM64-COW-BUG\n\n# Inform Valkey of the availability zone if running in a cloud environment.  Currently\n# this is exposed in the INFO and HELLO commands for clients to use. Default is\n# the empty string.\n#\n# availability-zone \"zone-name\"\n"
  },
  {
    "path": "aegir/conf/var/boa.bashrc.txt",
    "content": "#-------------------------------------------------------------\n# BOA default .bashrc\n#-------------------------------------------------------------\n\nulimit -S -c 0\n\nset -o notify\nset -o ignoreeof\n\nshopt -s cdspell\nshopt -s cdable_vars\nshopt -s checkhash\nshopt -s checkwinsize\nshopt -s sourcepath\nshopt -s no_empty_cmd_completion\nshopt -s cmdhist\nshopt -s histappend histreedit histverify\nshopt -u mailwarn\nunset MAILCHECK\n\nHISTCONTROL=ignoredups:ignorespace\n\nexport PATH=$PATH:/usr/local/bin:/opt/local/bin\n\ncase \"$TERM\" in\nxterm*|rxvt*|vt100)\n  export PS1='\\h:\\w\\$ '\n  ;;\n  *)\n  ;;\nesac\n\n# workaround for immediate refresh in screen session\nif [ -n \"$STY\" ]; then\n  alias mc='TERM=dumb mc'\nelse\n  alias mc='mc'\nfi\n\nif [ -x /usr/bin/dircolors ]; then\n  test -r ~/.dircolors && eval \"$(dircolors -b ~/.dircolors)\" || eval \"$(dircolors -b)\"\n  alias ll=\"ls -l --group-directories-first --color=auto\"\n  alias ls='ls -hF --color=auto'   # add colors for filetype recognition\n  alias la='ls -Al --color=auto'   # show hidden files\n  alias lx='ls -lXB --color=auto'  # sort by extension\n  alias lk='ls -lSr --color=auto'  # sort by size, biggest last\n  alias lc='ls -ltcr --color=auto' # sort by and show change time, most recent last\n  alias lu='ls -ltur --color=auto' # sort by and show access time, most recent last\n  alias lt='ls -ltr --color=auto'  # sort by date, most recent last\n  alias lr='ls -lR --color=auto'   # recursive ls\n  alias dir='dir --color=auto'     # add colors\n  alias vdir='vdir --color=auto'   # add colors\n  alias grep='grep --color=auto'   # add colors\n  alias fgrep='fgrep --color=auto' # add colors\n  alias egrep='egrep --color=auto' # add colors\nelse\n  alias ll=\"ls -l --group-directories-first\"\n  alias ls='ls -hF'   # filetype recognition\n  alias la='ls -Al'   # show hidden files\n  alias lx='ls -lXB'  # sort by extension\n  alias lk='ls -lSr'  # sort by size, biggest last\n  alias 
lc='ls -ltcr' # sort by and show change time, most recent last\n  alias lu='ls -ltur' # sort by and show access time, most recent last\n  alias lt='ls -ltr'  # sort by date, most recent last\n  alias lr='ls -lR'   # recursive ls\nfi\n\nalias df='df -kTPh'\nalias wget='wget --no-check-certificate'\n\nfunction xtitle() {\n  case \"$TERM\" in\n    xterm*|rxvt*)\n    echo -n -e \"\\033]0;$*\\007\" ;;\n    *)\n    ;;\n  esac\n}\n\n_L_HOST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\nalias top='xtitle $_L_HOST System Monitor && top'\nalias htop='xtitle $_L_HOST System Monitor && htop'\nalias mytop='xtitle $_L_HOST SQL Monitor && mytop'\nalias make='xtitle Making $(basename $PWD) ; make SHELL=/bin/bash'\n\nfunction extract() {\n  if [ -f $1 ]; then\n    case $1 in\n      *.tar.bz2)   tar xjf $1    ;;\n      *.tar.gz)    tar xzf $1    ;;\n      *.bz2)       bunzip2 $1    ;;\n      *.rar)       unrar x $1    ;;\n      *.gz)        gunzip -q $1  ;;\n      *.tar)       tar xf $1     ;;\n      *.tbz2)      tar xjf $1    ;;\n      *.tgz)       tar xzf $1    ;;\n      *.zip)       unzip -qq $1  ;;\n      *.Z)         uncompress $1 ;;\n      *.7z)        7z x $1       ;;\n      *)           echo \"'$1' cannot be extracted via >extract<\" ;;\n    esac\n  else\n    echo \"'$1' is not a valid file\"\n  fi\n}\n\nif [ -f ~/.bash_aliases ]; then\n  . ~/.bash_aliases\nfi\n\numask 022\n"
  },
  {
    "path": "aegir/conf/var/clean-boa-env",
    "content": "#!/bin/bash\n\n### BEGIN INIT INFO\n# Provides:          clean-boa-env\n# Required-Start:    $remote_fs $syslog\n# Required-Stop:     $remote_fs $syslog\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: safeguard to remove auto-healing pid files after reboot etc.\n# Description:       safeguard to remove auto-healing pid files after reboot etc.\n### END INIT INFO\n\nPATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nNAME=clean-boa-env\nDESC=clean-boa-env\nPIDFILE=/run/clean-boa-env.pid\n\ncase \"$1\" in\n  start)\n    echo -n \"Starting $DESC: \"\n    if [ -L \"/bin/sh\" ]; then\n      _WEB_SH=\"$(readlink -n /bin/sh)\"\n      if [ -x \"/bin/dash\" ] || [ -x \"/usr/bin/dash\" ]; then\n        if [ \"${_WEB_SH}\" != \"/bin/dash\" ]; then\n          if [ -x \"/usr/bin/dash\" ] && [ ! -L \"/usr/bin/dash\" ]; then\n            if [ -L \"/usr/bin/sh\" ]; then\n              ln -sfn /usr/bin/dash /usr/bin/sh\n            fi\n            if [ -L \"/bin/sh\" ]; then\n              ln -sfn /usr/bin/dash /bin/sh\n            fi\n          fi\n          if [ -x \"/bin/dash\" ] && [ ! -L \"/bin/dash\" ]; then\n            if [ -L \"/usr/bin/sh\" ]; then\n              ln -sfn /bin/dash /usr/bin/sh\n            fi\n            if [ -L \"/bin/sh\" ]; then\n              ln -sfn /bin/dash /bin/sh\n            fi\n          fi\n        fi\n      elif [ -x \"/bin/bash\" ] || [ -x \"/usr/bin/bash\" ]; then\n        if [ \"${_WEB_SH}\" != \"/bin/bash\" ]; then\n          if [ -x \"/usr/bin/bash\" ] && [ ! -L \"/usr/bin/bash\" ]; then\n            if [ -L \"/usr/bin/sh\" ]; then\n              ln -sfn /usr/bin/bash /usr/bin/sh\n            fi\n            if [ -L \"/bin/sh\" ]; then\n              ln -sfn /usr/bin/bash /bin/sh\n            fi\n          fi\n          if [ -x \"/bin/bash\" ] && [ ! 
-L \"/bin/bash\" ]; then\n            if [ -L \"/usr/bin/sh\" ]; then\n              ln -sfn /bin/bash /usr/bin/sh\n            fi\n            if [ -L \"/bin/sh\" ]; then\n              ln -sfn /bin/bash /bin/sh\n            fi\n          fi\n        fi\n      fi\n    fi\n    _RAM_AUTO_FILE=\"/sys/devices/system/memory/auto_online_blocks\"\n    if [ -f \"${_RAM_AUTO_FILE}\" ]; then\n      if grep -qx offline \"${_RAM_AUTO_FILE}\"; then\n        echo online > \"${_RAM_AUTO_FILE}\"\n      fi\n    fi\n    for _CPU_DIR in /sys/devices/system/cpu/cpu[0-9]*\n    do\n      _CPU=${_CPU_DIR##*/}\n      _CPU_STATE_FILE=\"${_CPU_DIR}/online\"\n      if [ -f \"${_CPU_STATE_FILE}\" ]; then\n        if grep -qx 0 \"${_CPU_STATE_FILE}\"; then\n          echo 1 > \"${_CPU_STATE_FILE}\"\n        fi\n      fi\n    done\n    for _RAM_DIR in /sys/devices/system/memory/memory[0-9]*\n    do\n      _RAM=${_RAM_DIR##*/}\n      _RAM_STATE_FILE=\"${_RAM_DIR}/state\"\n      if [ -f \"${_RAM_STATE_FILE}\" ]; then\n        if grep -qx offline \"${_RAM_STATE_FILE}\"; then\n          echo online > \"${_RAM_STATE_FILE}\"\n        fi\n      fi\n    done\n    if [ -e \"/root/.run-to-excalibur.cnf\" ]; then\n      if [ -x \"/opt/local/bin/autoexcalibur\" ]; then\n        nohup /opt/local/bin/autoexcalibur > /dev/null 2>&1 &\n      fi\n    elif [ -e \"/root/.run-to-daedalus.cnf\" ]; then\n      if [ -x \"/opt/local/bin/autodaedalus\" ]; then\n        nohup /opt/local/bin/autodaedalus > /dev/null 2>&1 &\n      fi\n    elif [ -e \"/root/.run-to-chimaera.cnf\" ]; then\n      if [ -x \"/opt/local/bin/autochimaera\" ]; then\n        nohup /opt/local/bin/autochimaera > /dev/null 2>&1 &\n      fi\n    elif [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n      if [ -x \"/opt/local/bin/autobeowulf\" ]; then\n        nohup /opt/local/bin/autobeowulf > /dev/null 2>&1 &\n      fi\n    fi\n    touch $PIDFILE\n    [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n    [ -e \"/run/boa_run.pid\" ] && rm -f 
/run/boa_run.pid\n    [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n    [ -e \"/run/daily-fix.pid\" ] && rm -f /run/daily-fix.pid\n    [ -e \"/run/boa_cron_wait.pid\" ] && rm -f /run/boa_cron_wait.pid\n  ;;\n  stop)\n    echo -n \"Stopping $DESC: \"\n    if [ -L \"/bin/sh\" ]; then\n      _WEB_SH=\"$(readlink -n /bin/sh)\"\n      if [ -x \"/bin/dash\" ]; then\n        if [ \"${_WEB_SH}\" != \"/bin/dash\" ]; then\n          ln -sfn /bin/dash /bin/sh\n          if [ -e \"/usr/bin/sh\" ]; then\n            ln -sfn /bin/dash /usr/bin/sh\n          fi\n        fi\n      else\n        if [ \"${_WEB_SH}\" != \"/bin/bash\" ]; then\n          ln -sfn /bin/bash /bin/sh\n          if [ -e \"/usr/bin/sh\" ]; then\n            ln -sfn /bin/bash /usr/bin/sh\n          fi\n        fi\n      fi\n    fi\n    _REBOOT_ONE_TEST=$(ls -la /root/.run-auto-major-os-reboot*-one.cnf 2>&1)\n    _REBOOT_TWO_TEST=$(ls -la /root/.run-auto-major-os-reboot*-two.cnf 2>&1)\n    if [[ \"${_REBOOT_ONE_TEST}\" =~ \"No such file\" ]] \\\n      && [[ \"${_REBOOT_TWO_TEST}\" =~ \"No such file\" ]]; then\n      service cron stop &> /dev/null\n      killall cron &> /dev/null\n      pkill -9 -f second.sh\n      pkill -9 -f runner.sh\n      pkill -9 -f minute.sh\n      echo \"Cron has been stopped\"\n      if [ ! 
-e \"/root/.allow.clamav.cnf\" ] || [ -e \"/root/.deny.clamav.cnf\" ]; then\n        if [ -e \"/etc/init.d/clamav-daemon\" ]; then\n          update-rc.d -f clamav-daemon remove &> /dev/null\n        fi\n        if [ -e \"/etc/init.d/clamav-freshclam\" ]; then\n          update-rc.d -f clamav-freshclam remove &> /dev/null\n        fi\n      fi\n      pkill -9 -f avahi-daemon\n      pkill -9 -f clamd\n      pkill -9 -f freshclam\n      pkill -9 -f java\n      rm -f /run/clamav/*\n      echo \"Java/Solr/Clamav have been stopped\"\n      service nginx stop &> /dev/null\n      killall nginx &> /dev/null\n      killall php &> /dev/null\n      pkill -9 -f php-fpm\n      echo \"Nginx, PHP-CLI and PHP-FPM have been stopped\"\n      csf -df &> /dev/null\n      csf -tf &> /dev/null\n      echo \"Firewall has been purged\"\n      if [ -e \"/root/.my.pass.txt\" ]; then\n        _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n        _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n        if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ] && [ ! -z \"${_SQL_PSWD}\" ]; then\n          echo \"Preparing MySQLD for quick shutdown...\"\n          _DBS_TEST=\"$(which mysql)\"\n          if [ ! 
-z \"${_DBS_TEST}\" ]; then\n            _DB_SERVER_TEST=$(mysql -V 2>&1)\n          fi\n          if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n            _DB_V=8.4\n          elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n            _DB_V=8.0\n          elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n            _DB_V=5.7\n          fi\n          mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n          if [ \"${_DB_V}\" = \"5.7\" ]; then\n            mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n            mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n          fi\n          mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n          echo \"Stopping MySQLD now...\"\n          service mysql stop &> /dev/null\n          wait\n          echo \"MySQLD stopped\"\n        else\n          echo \"MySQLD already stopped\"\n        fi\n      fi\n    fi\n    [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n    [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n    [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n    [ -e \"/run/daily-fix.pid\" ] && rm -f /run/daily-fix.pid\n    [ -e \"/run/boa_cron_wait.pid\" ] && rm -f /run/boa_cron_wait.pid\n    rm -f $PIDFILE\n  ;;\n\n  restart|force-reload)\n    ${0} stop\n    ${0} start\n  ;;\n\n  *)\n    echo \"Usage: service $NAME {start|stop|restart|force-reload}\" >&2\n    exit 0\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/conf/var/crossdomain.xml",
    "content": "<?xml version=\"1.0\"?>\n<!DOCTYPE cross-domain-policy SYSTEM \"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd\">\n<cross-domain-policy>\n  <allow-access-from domain=\"*\" />\n</cross-domain-policy> \n"
  },
  {
    "path": "aegir/conf/var/csf.conf",
    "content": "###############################################################################\n# SECTION:Initial Settings\n###############################################################################\n# Testing flag - enables a CRON job that clears iptables in case of\n# configuration problems when you start csf. This should be enabled until you\n# are sure that the firewall works - i.e. in case you get locked out of your\n# server! Then do remember to set it to 0 and restart csf when you're sure\n# everything is OK. Stopping csf will remove the line from /etc/crontab\n#\n# lfd will not start while this is enabled\nTESTING = \"0\"\n\n# The interval for the crontab in minutes. Since this uses the system clock the\n# CRON job will run at the interval past the hour and not from when you issue\n# the start command. Therefore an interval of 5 minutes means the firewall\n# will be cleared in 0-5 minutes from the firewall start\nTESTING_INTERVAL = \"5\"\n\n# SECURITY WARNING\n# ================\n#\n# Unfortunately, syslog and rsyslog allow end-users to log messages to some\n# system logs via the same unix socket that other local services use. This\n# means that any log line shown in these system logs that syslog or rsyslog\n# maintain can be spoofed (they are exactly the same as real log lines).\n#\n# Since some of the features of lfd rely on such log lines, spoofed messages\n# can cause false-positive matches which can lead to confusion at best, or\n# blocking of any innocent IP address or making the server inaccessible at\n# worst.\n#\n# Any option that relies on the log entries in the files listed in\n# /etc/syslog.conf and /etc/rsyslog.conf should therefore be considered\n# vulnerable to exploitation by end-users and scripts run by end-users.\n#\n# NOTE: Not all log files are affected as they may not use syslog/rsyslog\n#\n# The option RESTRICT_SYSLOG disables all these features that rely on affected\n# logs. 
These options are:\n# LF_SSHD LF_FTPD LF_IMAPD LF_POP3D LF_BIND LF_SUHOSIN LF_SSH_EMAIL_ALERT\n# LF_SU_EMAIL_ALERT LF_CONSOLE_EMAIL_ALERT LF_DISTATTACK LF_DISTFTP\n# LT_POP3D LT_IMAPD PS_INTERVAL UID_INTERVAL WEBMIN_LOG LF_WEBMIN_EMAIL_ALERT\n# PORTKNOCKING_ALERT LF_SUDO_EMAIL_ALERT\n#\n# These options use the logs but are not disabled by RESTRICT_SYSLOG:\n# ST_ENABLE SYSLOG_CHECK LOGSCANNER CUSTOM*_LOG\n#\n# The following options are still enabled by default on new installations so\n# that, on balance, csf/lfd still provides expected levels of security:\n# LF_SSHD LF_FTPD LF_POP3D LF_IMAPD LF_SSH_EMAIL_ALERT LF_SU_EMAIL_ALERT\n#\n# If you set RESTRICT_SYSLOG to \"0\" or \"2\" and enable any of the options listed\n# above, it should be done with the knowledge that any of those options\n# that are enabled could be triggered by spoofed log lines and lead to the\n# server being inaccessible in the worst case. If you do not want to take that\n# risk you should set RESTRICT_SYSLOG to \"1\" and those features will not work\n# but you will not be protected from the exploits that they normally help block\n#\n# The recommended setting for RESTRICT_SYSLOG is \"3\" to restrict who can access\n# the syslog/rsyslog unix socket.\n#\n# For further advice on how to help mitigate these issues, see\n# /etc/csf/readme.txt\n#\n# 0 = Allow those options listed above to be used and configured\n# 1 = Disable all the options listed above and prevent them from being used\n# 2 = Disable only alerts about this feature and do nothing else\n# 3 = Restrict syslog/rsyslog access to RESTRICT_SYSLOG_GROUP ** RECOMMENDED **\nRESTRICT_SYSLOG = \"3\"\n\n# The following setting is used if RESTRICT_SYSLOG is set to 3. It restricts\n# write access to the syslog/rsyslog unix socket(s). 
The group must not already\n# exist in /etc/group before setting RESTRICT_SYSLOG to 3, so set the option\n# to a unique name for the server\n#\n# You can add users to this group by changing /etc/csf/csf.syslogusers and then\n# restarting lfd afterwards. This will create the system group and add the\n# users from csf.syslogusers if they exist to that group and will change the\n# permissions on the syslog/rsyslog unix socket(s). The socket(s) will be\n# monitored and the permissions re-applied should syslog/rsyslog be restarted\n#\n# Using this option will prevent some legitimate logging, e.g. end-user cron\n# job logs\n#\n# If you want to revert RESTRICT_SYSLOG to another option and disable this\n# feature, change the setting of RESTRICT_SYSLOG and then restart lfd and then\n# syslog/rsyslog and the unix sockets will be reset\nRESTRICT_SYSLOG_GROUP = \"mysyslog\"\n\n# This option restricts the ability to modify settings within this file from\n# the csf UI. Should the parent control panel be compromised, these restricted\n# options could be used to further compromise the server. For this reason we\n# recommend leaving this option set to at least \"1\" and if any of the\n# restricted items need to be changed, they are done so from the root shell\n#\n# 0 = Unrestricted UI\n# 1 = Restricted UI\n# 2 = Disabled UI\nRESTRICT_UI = \"1\"\n\n# Enabling auto updates creates a cron job called /etc/cron.d/csf_update which\n# runs once per day to see if there is an update to csf+lfd and upgrades if\n# available and restarts csf and lfd\n#\n# You should check for new version announcements at http://blog.configserver.com\nAUTO_UPDATES = \"0\"\n\n###############################################################################\n# SECTION:IPv4 Port Settings\n###############################################################################\n# Lists of ports in the following comma separated lists can be added using a\n# colon (e.g. 
30000:35000).\n\n# Some kernel/iptables setups do not perform stateful connection tracking\n# correctly (typically some virtual servers or custom compiled kernels), so a\n# SPI firewall will not function correctly. If this happens, LF_SPI can be set\n# to 0 to reconfigure csf as a static firewall.\n#\n# As connection tracking will not be configured, applications that rely on it\n# will not function unless all outgoing ports are opened. Therefore, all\n# outgoing connections will be allowed once all other tests have completed. So\n# TCP_OUT, UDP_OUT and ICMP_OUT will not have any effect.\n#\n# If you allow incoming DNS lookups you may need to use the following\n# directive in the options{} section of your named.conf:\n#\n#        query-source port 53;\n#\n# This will force incoming DNS traffic only through port 53\n#\n# Disabling this option will break firewall functionality that relies on\n# stateful packet inspection (e.g. DNAT, PACKET_FILTER) and makes the firewall\n# less secure\n#\n# This option should be set to \"1\" in all other circumstances\nLF_SPI = \"1\"\n\n# Allow incoming TCP ports\nTCP_IN = \"20,21,22,24,53,80,110,143,443,465,587,993,995,2401,5280,9418,30000:50000\"\n\n# Allow outgoing TCP ports\nTCP_OUT = \"20,21,22,25,53,80,110,143,222,389,443,465,587,636,873,993,995,1129,1935,2195,2222,2401,2525,3268,3306,5280,5432,8081,8443,9418,9998,11371,27017,30000:50000,5201:5210\"\n\n# Allow incoming UDP ports\nUDP_IN = \"20,21,53,123,33434:33523,443\"\n\n# Allow outgoing UDP ports\n# To allow outgoing traceroute add 33434:33523 to this list\nUDP_OUT = \"20,21,53,123,33434:33523\"\n\n# Allow incoming PING. Disabling PING will likely break external uptime\n# monitoring\nICMP_IN = \"1\"\n\n# Set the per IP address incoming ICMP packet rate for PING requests. This\n# rate-limits PING requests; if the limit is exceeded, packets are silently\n# rejected. 
Disable or increase this value if you are seeing PING drops that you\n# do not want\n#\n# To disable rate limiting set to \"0\", otherwise set according to the iptables\n# documentation for the limit module. For example, \"1/s\" will limit to one\n# packet per second\nICMP_IN_RATE = \"1/s\"\n\n# Allow outgoing PING\n#\n# Unless there is a specific reason, this option should NOT be disabled as it\n# could break OS functionality\nICMP_OUT = \"1\"\n\n# Set the per IP address outgoing ICMP packet rate for PING requests. This\n# rate-limits PING requests; if the limit is exceeded, packets are silently\n# rejected. Disable or increase this value if you are seeing PING drops that you\n# do not want\n#\n# Unless there is a specific reason, this option should NOT be enabled as it\n# could break OS functionality\n#\n# To disable rate limiting set to \"0\", otherwise set according to the iptables\n# documentation for the limit module. For example, \"1/s\" will limit to one\n# packet per second\nICMP_OUT_RATE = \"1/s\"\n\n# For those with PCI Compliance tools that state that ICMP timestamps (type 13)\n# should be dropped, you can enable the following option. 
Otherwise, there\n# appears to be little evidence that it has anything to do with a security risk\n# and can impact network performance, so should be left disabled by everyone\n# else\nICMP_TIMESTAMPDROP = \"0\"\n\n###############################################################################\n# SECTION:IPv6 Port Settings\n###############################################################################\n# IPv6: (Requires ip6tables)\n#\n# Pre v2.6.20 kernels do not perform stateful connection tracking, so a static\n# firewall is configured as a fallback instead if IPV6_SPI is set to 0 below\n#\n# Supported:\n# Temporary ACCEPT/DENY, GLOBAL_DENY, GLOBAL_ALLOW, SMTP_BLOCK, LF_PERMBLOCK,\n# PACKET_FILTER, Advanced Allow/Deny Filters, RELAY_*, CLUSTER_*, CC6_LOOKUPS,\n# SYNFLOOD, LF_NETBLOCK\n#\n# Supported if CC6_LOOKUPS and CC_LOOKUPS are enabled\n# CC_DENY, CC_ALLOW, CC_ALLOW_FILTER, CC_IGNORE, CC_ALLOW_PORTS, CC_DENY_PORTS,\n# CC_ALLOW_SMTPAUTH\n#\n# Supported if ip6tables >= 1.4.3:\n# PORTFLOOD, CONNLIMIT\n#\n# Supported if ip6tables >= 1.4.17 and perl module IO::Socket::INET6 is\n# installed:\n# MESSENGER DOCKER SMTP_REDIRECT\n#\n# Not supported:\n# ICMP_IN, ICMP_OUT\n#\nIPV6 = \"1\"\n\n# IPv6 uses icmpv6 packets very heavily. By default, csf will allow all icmpv6\n# traffic in the INPUT and OUTPUT chains. However, this could increase the risk\n# of icmpv6 attacks. To restrict incoming icmpv6, set to \"1\", though this may\n# break some connection types\nIPV6_ICMP_STRICT = \"1\"\n\n# Pre v2.6.20 kernels must set this option to \"0\" as no working state module is\n# present, so a static firewall is configured as a fallback\n#\n# A workaround has been added for CentOS/RedHat v5 and custom kernels that do\n# not support IPv6 connection tracking by opening ephemeral port range\n# 32768:61000. This is only applied if IPV6_SPI is not enabled. 
This is the\n# same workaround implemented by RedHat in the sample default IPv6 rules\n#\n# As connection tracking will not be configured, applications that rely on it\n# will not function unless all outgoing ports are opened. Therefore, all\n# outgoing connections will be allowed once all other tests have completed. So\n# TCP6_OUT, UDP6_OUT and ICMP6_OUT will not have any effect.\n#\n# If you allow incoming ipv6 DNS lookups you may need to use the following\n# directive in the options{} section of your named.conf:\n#\n#        query-source-v6 port 53;\n#\n# This will force ipv6 incoming DNS traffic only through port 53\n#\n# These changes are not necessary if the SPI firewall is used\nIPV6_SPI = \"1\"\n\n# Allow incoming IPv6 TCP ports\nTCP6_IN = \"53\"\n\n# Allow outgoing IPv6 TCP ports\nTCP6_OUT = \"53\"\n\n# Allow incoming IPv6 UDP ports\nUDP6_IN = \"53\"\n\n# Allow outgoing IPv6 UDP ports\n# To allow outgoing traceroute add 33434:33523 to this list\nUDP6_OUT = \"53\"\n\n###############################################################################\n# SECTION:General Settings\n###############################################################################\n# By default, csf will auto-configure iptables to filter all traffic except on\n# the loopback device. If you only want iptables rules applied to a specific\n# NIC, then list it here (e.g. eth1, or eth+)\nETH_DEVICE = \"\"\n\n# By adding a device to this option, ip6tables can be configured only on the\n# specified device. 
Otherwise, ETH_DEVICE and then the default setting will be\n# used\nETH6_DEVICE = \"\"\n\n# If you don't want iptables rules applied to specific NICs, then list them in\n# a comma separated list (e.g \"eth1,eth2\")\nETH_DEVICE_SKIP = \"\"\n\n# This option should be enabled unless the kernel does not support the\n# \"conntrack\" module\n#\n# To use the deprecated iptables \"state\" module, change this to 0\nUSE_CONNTRACK = \"1\"\n\n# Enable ftp helper via the iptables CT target on supporting kernels (v2.6.34+)\n# instead of the current method via /proc/sys/net/netfilter/nf_conntrack_helper\n# This will also remove the RELATED target from the global state iptables rule\n#\n# This is not needed (and will be ignored) if LF_SPI/IPV6_SPI is disabled or\n# the raw tables do not exist. The USE_CONNTRACK option should be enabled\n#\n# To enable this option, set it to your FTP server listening port number\n# (normally 21), do NOT set it to \"1\"\nUSE_FTPHELPER = \"0\"\n\n# Check whether syslog is running. Many of the lfd checks require syslog to be\n# running correctly. This test will send a coded message to syslog every\n# SYSLOG_CHECK seconds. lfd will check SYSLOG_LOG log lines for the coded\n# message. If it fails to do so within SYSLOG_CHECK seconds an alert using\n# syslogalert.txt is sent\n#\n# A value of between 300 and 3600 seconds is suggested. Set to 0 to disable\nSYSLOG_CHECK = \"0\"\n\n# Enable this option if you want lfd to ignore (i.e. don't block) IP addresses\n# listed in csf.allow in addition to csf.ignore (the default). This option\n# should be used with caution as it would mean that IP's allowed through the\n# firewall from infected PC's could launch attacks on the server that lfd\n# would ignore\nIGNORE_ALLOW = \"0\"\n\n# Enable the following option if you want to apply strict iptables rules to DNS\n# traffic (i.e. relying on iptables connection tracking). 
Enabling this option\n# could cause DNS resolution issues both to and from the server but could help\n# prevent abuse of the local DNS server\nDNS_STRICT = \"0\"\n\n# Enable the following option if you want to apply strict iptables rules to DNS\n# traffic between the server and the nameservers listed in /etc/resolv.conf\n# Enabling this option could cause DNS resolution issues both to and from the\n# server but could help prevent abuse of the local DNS server\nDNS_STRICT_NS = \"0\"\n\n# Limit the number of IP's kept in the /etc/csf/csf.deny file\n#\n# Care should be taken when increasing this value on servers with low memory\n# resources or hard limits (such as Virtuozzo/OpenVZ) as too many rules (in the\n# thousands) can sometimes cause network slowdown\n#\n# The value set here is the maximum number of IPs/CIDRs allowed.\n# If the limit is reached, the entries will be rotated so that the oldest\n# entries (i.e. the ones at the top) will be removed and the latest is added.\n# The limit is only checked when using csf -d (which is what lfd also uses)\n# Set to 0 to disable limiting\n#\n# For implementations wishing to set this value significantly higher, we\n# recommend using the IPSET option\nDENY_IP_LIMIT = \"9000\"\n\n# Limit the number of IP's kept in the temporary IP ban list. If the limit is\n# reached the oldest IP's in the ban list will be removed and allowed\n# regardless of the amount of time remaining for the block\n# Set to 0 to disable limiting\nDENY_TEMP_IP_LIMIT = \"300\"\n\n# Enable login failure detection daemon (lfd). If set to 0 none of the\n# following settings will have any effect as the daemon won't start.\nLF_DAEMON = \"1\"\n\n# Check whether csf appears to have been stopped and restart if necessary,\n# unless TESTING is enabled above. The check is done every 300 seconds\nLF_CSF = \"1\"\n\n# This option uses IPTABLES_SAVE, IPTABLES_RESTORE and IP6TABLES_SAVE,\n# IP6TABLES_RESTORE in two ways:\n#\n# 1. 
On a clean server reboot the entire csf iptables configuration is saved\n#    and then restored where possible to provide a near instant firewall\n#    startup[*]\n#\n# 2. On csf restart or lfd reloading tables, CC_* as well as SPAMHAUS, DSHIELD,\n#    BOGON, TOR are loaded using this method in a fraction of the time compared\n#    to when this setting is disabled\n#\n# [*]Not supported on all OS platforms\n#\n# Set to \"0\" to disable this functionality\nFASTSTART = \"0\"\n\n# This option allows you to use ipset v6+ for the following csf options:\n# CC_* and /etc/csf/csf.blocklist, /etc/csf/csf.allow, /etc/csf/csf.deny,\n# GLOBAL_DENY, GLOBAL_ALLOW, DYNDNS, GLOBAL_DYNDNS, MESSENGER\n#\n# ipset will only be used with the above options when listing IPs and CIDRs.\n# Advanced Allow Filters and temporary blocks use traditional iptables\n#\n# Using ipset moves the onus of ip matching against large lists away from\n# iptables rules and to a purpose built and optimised database matching\n# utility. It also simplifies the switching in of updated lists\n#\n# To use this option you must have a fully functioning installation of ipset\n# installed either via rpm or source from http://ipset.netfilter.org/\n#\n# Note: Using ipset has many advantages, some disadvantages are that you will\n# no longer see packet and byte counts against IPs and it makes identifying\n# blocked/allowed IPs that little bit harder\n#\n# Note: If you mainly use IP address only entries in csf.deny, you can increase\n# the value of DENY_IP_LIMIT significantly if you wish\n#\n# Note: It's highly unlikely that ipset will function on Virtuozzo/OpenVZ\n# containers even if it has been installed\n#\n# If you find any problems, please post on forums.configserver.com with full\n# details of the issue\nLF_IPSET = \"1\"\n\n# Versions of iptables greater than or equal to v1.4.20 should support the\n# --wait option. 
This forces iptables commands that use the option to wait until a\n# lock by any other process using iptables completes, rather than simply\n# failing\n#\n# Enabling this feature will add the --wait option to iptables commands\n#\n# NOTE: The disadvantage of using this option is that any iptables command that\n# uses it will hang until the lock is released. This could cause a cascade of\n# hung processes trying to issue iptables commands. To try and avoid this issue\n# csf uses a last ditch timeout, WAITLOCK_TIMEOUT in seconds, that will trigger\n# a failure if reached\nWAITLOCK = \"1\"\nWAITLOCK_TIMEOUT = \"300\"\n\n# The following sets the hashsize for ipset sets, which must be a power of 2.\n#\n# Note: Increasing this value will consume more memory for all sets\n# Default: \"1024\"\nLF_IPSET_HASHSIZE = \"1024\"\n\n# The following sets the maxelem for ipset sets.\n#\n# Note: Increasing this value will consume more memory for all sets\n# Default: \"65536\"\nLF_IPSET_MAXELEM = \"65536\"\n\n# If you enable this option then whenever a CLI request to restart csf is used\n# lfd will restart csf instead within LF_PARSE seconds\n#\n# This feature can be helpful for restarting configurations that cannot use\n# FASTSTART\nLFDSTART = \"1\"\n\n# Enable verbose output of iptables commands\nVERBOSE = \"1\"\n\n# Drop out of order packets and packets in an INVALID state in iptables\n# connection tracking\nPACKET_FILTER = \"1\"\n\n# Perform reverse DNS lookups on IP addresses. (See also CC_LOOKUPS)\nLF_LOOKUPS = \"1\"\n\n# Custom styling is possible in the csf UI. See the readme.txt for more\n# information under \"UI skinning and Mobile View\"\n#\n# This option enables the use of custom styling. If the styling fails to work\n# correctly, e.g. 
custom styling does not take into account a change in the\n# standard csf UI, then disabling this option will return the standard UI\nSTYLE_CUSTOM = \"1\"\n\n# This option disables the presence of the Mobile View in the csf UI\nSTYLE_MOBILE = \"1\"\n\n###############################################################################\n# SECTION:SMTP Settings\n###############################################################################\n# Block outgoing SMTP except for root, exim and mailman (forces scripts/users\n# to use the exim/sendmail binary instead of sockets access). This replaces the\n# protection provided by WHM > Tweak Settings > SMTP Tweaks\n#\n# This option uses the iptables ipt_owner/xt_owner module, which must be loaded\n# for it to work. It may not be available on some VPS platforms\n#\n# Note: Run /etc/csf/csftest.pl to check whether this option will function on\n# this server\nSMTP_BLOCK = \"1\"\n\n# If SMTP_BLOCK is enabled but you want to allow local connections to port 25\n# on the server (e.g. for webmail or web scripts) then enable this option to\n# allow outgoing SMTP connections to the loopback device\nSMTP_ALLOWLOCAL = \"0\"\n\n# This option redirects outgoing SMTP connections destined for remote servers\n# for non-bypass users to the local SMTP server to force local relaying of\n# email. Such email may require authentication (SMTP AUTH)\nSMTP_REDIRECT = \"0\"\n\n# This is a comma separated list of the ports to block. 
You should list all\n# ports that exim is configured to listen on\nSMTP_PORTS = \"25\"\n\n# Always allow the following comma separated users and groups to bypass\n# SMTP_BLOCK\n#\n# Note: root (UID:0) is always allowed\nSMTP_ALLOWUSER = \"postfix\"\nSMTP_ALLOWGROUP = \"\"\n\n# This option will only allow SMTP AUTH to be advertised to the IP addresses\n# listed in /etc/csf/csf.smtpauth on EXIM mail servers\n#\n# The additional option CC_ALLOW_SMTPAUTH can be used with this option to\n# additionally restrict access to specific countries\n#\n# This is to help limit attempts at distributed attacks against SMTP AUTH which\n# are difficult to block since port 25 needs to be open to relay email\n#\n# The reason why this works is that if EXIM does not advertise SMTP AUTH on a\n# connection, then SMTP AUTH will not accept logins, defeating the attacks\n# without restricting mail relaying\n#\n# Note: csf and lfd must be restarted if /etc/csf/csf.smtpauth is modified so\n# that the lookup file in /etc/exim.smtpauth is regenerated from the\n# information from /etc/csf/csf.smtpauth plus any countries listed in\n# CC_ALLOW_SMTPAUTH\n#\n# NOTE: To make this option work you MUST make the modifications to exim.conf\n# as explained in the \"Exim SMTP AUTH Restriction\" section in\n# /etc/csf/readme.txt after enabling the option here, otherwise this option\n# will not work\n#\n# To enable this option, set to 1 and make the exim configuration changes\n# To disable this option, set to 0 and undo the exim configuration changes\nSMTPAUTH_RESTRICT = \"0\"\n\n###############################################################################\n# SECTION:Port Flood Settings\n###############################################################################\n# Enable SYN Flood Protection. This option configures iptables to offer some\n# protection from tcp SYN packet DOS attempts. 
You should set the RATE so that\n# false-positives are kept to a minimum otherwise visitors may see connection\n# issues (check /var/log/messages for *SYNFLOOD Blocked*). See the iptables\n# man page for the correct --limit rate syntax\n#\n# Note: This option should ONLY be enabled if you know you are under a SYN\n# flood attack as it will slow down all new connections from any IP address to\n# the server if triggered\nSYNFLOOD = \"0\"\nSYNFLOOD_RATE = \"100/s\"\nSYNFLOOD_BURST = \"150\"\n\n# Connection Limit Protection. This option configures iptables to offer more\n# protection from DOS attacks against specific ports. It can also be used as a\n# way to simply limit resource usage by IP address to specific server services.\n# This option limits the number of concurrent new connections per IP address\n# that can be made to specific ports\n#\n# This feature does not work on servers that do not have the iptables module\n# xt_connlimit loaded. Typically, this will be with MONOLITHIC kernels. VPS\n# server admins should check with their VPS host provider that the iptables\n# module is included\n#\n# For further information and syntax refer to the Connection Limit Protection\n# section of the csf readme.txt\n#\n# Note: Run /etc/csf/csftest.pl to check whether this option will function on\n# this server\nCONNLIMIT = \"24;99,22;99,80;9999,443;9999,53;99\"\n\n# Port Flood Protection. This option configures iptables to offer protection\n# from DOS attacks against specific ports. This option limits the number of\n# new connections per time interval that can be made to specific ports\n#\n# This feature does not work on servers that do not have the iptables module\n# ipt_recent loaded. Typically, this will be with MONOLITHIC kernels. 
VPS\n# server admins should check with their VPS host provider that the iptables\n# module is included\n#\n# For further information and syntax refer to the Port Flood Protection\n# section of the csf readme.txt\n#\n# Note: Run /etc/csf/csftest.pl to check whether this option will function on\n# this server\nPORTFLOOD = \"24;tcp;99;60,22;tcp;99;60,1433;tcp;1;900\"\n\n# Outgoing UDP Flood Protection. This option limits outbound UDP packet floods.\n# These typically originate from exploit scripts uploaded through vulnerable\n# web scripts. Care should be taken on servers that use services that utilise\n# high levels of UDP outbound traffic, such as SNMP, so you may need to alter\n# the UDPFLOOD_LIMIT and UDPFLOOD_BURST options to suit your environment\n#\n# We recommend enabling User ID Tracking (UID_INTERVAL) with this feature\nUDPFLOOD = \"0\"\nUDPFLOOD_LIMIT = \"100/s\"\nUDPFLOOD_BURST = \"500\"\n\n# This is a list of usernames that should not be rate limited, such as \"named\"\n# to prevent bind traffic from being limited.\n#\n# Note: root (UID:0) is always allowed\nUDPFLOOD_ALLOWUSER = \"named\"\n\n###############################################################################\n# SECTION:Logging Settings\n###############################################################################\n# Log lfd messages to SYSLOG in addition to /var/log/lfd.log. You must have the\n# perl module Sys::Syslog installed to use this feature\nSYSLOG = \"0\"\n\n# Drop target for incoming iptables rules. This can be set to either DROP or\n# REJECT. REJECT will send back an error packet, DROP will not respond at all.\n# REJECT is more polite, however it does provide extra information to a hacker\n# and lets them know that a firewall is blocking their attempts. DROP hangs\n# their connection, thereby frustrating attempts to port scan the server\nDROP = \"DROP\"\n\n# Drop target for outgoing iptables rules. 
This can be set to either DROP or\n# REJECT as with DROP, however as such connections are from this server it is\n# better to REJECT connections to closed ports rather than to DROP them. This\n# helps to immediately free up server resources rather than tying them up until\n# a connection times out. It also tells the process making the connection that\n# it has immediately failed\n#\n# It is possible that some monolithic kernels may not support the REJECT\n# target. If this is the case, csf checks before using REJECT and falls back to\n# using DROP, issuing a warning to set this to DROP instead\nDROP_OUT = \"REJECT\"\n\n# Enable logging of dropped connections to blocked ports to syslog, usually\n# /var/log/messages. This option needs to be enabled to use Port Scan Tracking\nDROP_LOGGING = \"1\"\n\n# Enable logging of dropped incoming connections from blocked IP addresses\n#\n# This option will be disabled if you enable Port Scan Tracking (PS_INTERVAL)\nDROP_IP_LOGGING = \"0\"\n\n# Enable logging of dropped outgoing connections\n#\n# Note: Only outgoing SYN packets for TCP connections are logged, other\n# protocols log all packets\n#\n# We recommend that you enable this option\nDROP_OUT_LOGGING = \"1\"\n\n# Together with DROP_OUT_LOGGING enabled, this option logs the UID connecting\n# out (where available) which can help track abuse\nDROP_UID_LOGGING = \"1\"\n\n# Only log incoming reserved port dropped connections (0:1023). This can reduce\n# the amount of log noise from dropped connections, but will affect options\n# such as Port Scan Tracking (PS_INTERVAL)\nDROP_ONLYRES = \"0\"\n\n# Commonly blocked ports that you do not want logging as they tend to just fill\n# up the log file. 
These ports are specifically blocked (applied to TCP and UDP\n# protocols) for incoming connections\nDROP_NOLOG = \"67,68,111,113,135:139,445,500,513,520\"\n\n# Log packets dropped by the packet filtering option PACKET_FILTER\nDROP_PF_LOGGING = \"0\"\n\n# Log packets dropped by the Connection Limit Protection option CONNLIMIT. If\n# this is enabled and Port Scan Tracking (PS_INTERVAL) is also enabled, IP\n# addresses breaking the Connection Limit Protection will be blocked\nCONNLIMIT_LOGGING = \"0\"\n\n# Enable logging of UDP floods. This should be enabled, especially with User ID\n# Tracking enabled\nUDPFLOOD_LOGGING = \"1\"\n\n# Send an alert if log file flooding is detected which causes lfd to skip log\n# lines to prevent lfd from looping. If this alert is sent you should check the\n# reported log file for the reason for the flooding\nLOGFLOOD_ALERT = \"1\"\n\n###############################################################################\n# SECTION:Reporting Settings\n###############################################################################\n# By default, lfd will send alert emails using the relevant alert template to\n# the To: address configured within that template. Setting the following\n# option will override the configured To: field in all lfd alert emails\n#\n# Leave this option empty to use the To: field setting in each alert template\nLF_ALERT_TO = \"notify@omega8.cc\"\n\n# By default, lfd will send alert emails using the relevant alert template from\n# the From: address configured within that template. Setting the following\n# option will override the configured From: field in all lfd alert emails\n#\n# Leave this option empty to use the From: field setting in each alert template\nLF_ALERT_FROM = \"\"\n\n# By default, lfd will send all alerts using the SENDMAIL binary. To send using\n# SMTP directly, you can set the following to a relaying SMTP server, e.g.\n# \"127.0.0.1\". 
Leave this setting blank to use SENDMAIL\nLF_ALERT_SMTP = \"\"\n\n# Block Reporting. lfd can run an external script when it performs an IP\n# address block following, for example, a login failure. The following setting\n# is the full path of the external script, which must be executable. See\n# readme.txt for format details\n#\n# Leave this setting blank to disable\nBLOCK_REPORT = \"\"\n\n# To also run an external script when a temporary block is unblocked, set the\n# following to the full path of the external script, which must be executable.\n# See readme.txt for format details\n#\n# Leave this setting blank to disable\nUNBLOCK_REPORT = \"\"\n\n# In addition to the standard lfd email alerts, you can additionally enable the\n# sending of X-ARF reports (see http://www.xarf.org/specification.html). Only\n# block alert messages will be sent. The reports use our schema at:\n# https://download.configserver.com/abuse_login-attack_0.2.json\n#\n# These reports are in a format accepted by many Netblock owners and should\n# help them investigate abuse. This option is not designed to automatically\n# forward these reports to the Netblock owners, and the reports should be\n# checked for false-positive blocks before reporting\n#\n# If available, the report will also include the abuse contact for the IP from\n# the Abusix Contact DB: https://abusix.com/contactdb.html\n#\n# Note: The following block types are not reported through this feature:\n# LF_PERMBLOCK, LF_NETBLOCK, LF_DISTATTACK, LF_DISTFTP, RT_*_ALERT\nX_ARF = \"0\"\n\n# By default, lfd will send emails from the root forwarder. Setting the\n# following option will override this\nX_ARF_FROM = \"\"\n\n# By default, lfd will send emails to the root forwarder. 
Setting the following\n# option will override this\nX_ARF_TO = \"notify@omega8.cc\"\n\n# If you want to automatically send reports to the abuse contact where found,\n# you can enable the following option\n#\n# Note: You MUST set X_ARF_FROM to a valid email address for this option to\n# work. This is so that the abuse contact can reply to the report\n#\n# However, you should be aware that without manual checking you could be\n# reporting innocent IP addresses, including your own clients, yourself and\n# your own servers\n#\n# Additionally, just because a contact address is found does not mean that\n# there is anyone on the end of it reading, processing or acting on such\n# reports, and you could conceivably be reported for sending spam\n#\n# We do not recommend enabling this option. Abuse reports should be checked and\n# verified before being forwarded to the abuse contact\nX_ARF_ABUSE = \"0\"\n\n###############################################################################\n# SECTION:Temp to Perm/Netblock Settings\n###############################################################################\n# Temporary to Permanent IP blocking. The following enables this feature to\n# permanently block IP addresses that have been temporarily blocked more than\n# LF_PERMBLOCK_COUNT times in the last LF_PERMBLOCK_INTERVAL seconds. Set\n# LF_PERMBLOCK to \"1\" to enable this feature\n#\n# Care needs to be taken when setting LF_PERMBLOCK_INTERVAL as it needs to be\n# at least LF_PERMBLOCK_COUNT multiplied by the longest temporary time setting\n# (TTL) for blocked IPs, to be effective\n#\n# Set LF_PERMBLOCK to \"0\" to disable this feature\nLF_PERMBLOCK = \"1\"\nLF_PERMBLOCK_INTERVAL = \"86400\"\nLF_PERMBLOCK_COUNT = \"4\"\nLF_PERMBLOCK_ALERT = \"0\"\n\n# Permanently block IPs by network class. 
The following enables this feature\n# to permanently block classes of IP address where individual IP addresses\n# within the same class LF_NETBLOCK_CLASS have already been blocked more than\n# LF_NETBLOCK_COUNT times in the last LF_NETBLOCK_INTERVAL seconds. Set\n# LF_NETBLOCK to \"1\" to enable this feature\n#\n# This can be an effective way of blocking DDOS attacks launched from within\n# the same network class\n#\n# Valid settings for LF_NETBLOCK_CLASS are \"A\", \"B\" and \"C\"; care and\n# consideration are required when blocking network classes A or B\n#\n# Set LF_NETBLOCK to \"0\" to disable this feature\nLF_NETBLOCK = \"1\"\nLF_NETBLOCK_INTERVAL = \"86400\"\nLF_NETBLOCK_COUNT = \"4\"\nLF_NETBLOCK_CLASS = \"C\"\nLF_NETBLOCK_ALERT = \"0\"\n\n# Valid settings for LF_NETBLOCK_IPV6 are \"/64\", \"/56\", \"/48\", \"/32\" and \"/24\"\n# Great care should be taken with IPV6 netblock ranges due to the large number\n# of addresses involved\n#\n# To disable IPv6 netblocks set to \"\"\nLF_NETBLOCK_IPV6 = \"\"\n\n###############################################################################\n# SECTION:Global Lists/DYNDNS/Blocklists\n###############################################################################\n# Safe Chain Update. If enabled, all dynamic update chains (GALLOW*, GDENY*,\n# SPAMHAUS, DSHIELD, BOGON, CC_ALLOW, CC_DENY, ALLOWDYN*) will create a new\n# chain when updating, and insert it into the relevant LOCALINPUT/LOCALOUTPUT\n# chain, then flush and delete the old dynamic chain and rename the new chain.\n#\n# This prevents a small window of opportunity opening when an update occurs and\n# the dynamic chain is flushed for the new rules.\n#\n# This option should not be enabled on servers with long dynamic chains (e.g.\n# CC_DENY/CC_ALLOW lists) and low memory. It should also not be enabled on\n# Virtuozzo VPS servers with a restricted numiptent value. 
This is because each\n# chain will effectively be duplicated while the update occurs, doubling the\n# number of iptables rules\nSAFECHAINUPDATE = \"0\"\n\n# If you wish to allow access from dynamic DNS records (for example if your IP\n# address changes whenever you connect to the internet but you have a dedicated\n# dynamic DNS record from the likes of dyndns.org) then you can list the FQDN\n# records in csf.dyndns and then set the following to the number of seconds to\n# poll for a change in the IP address. If the IP address has changed iptables\n# will be updated.\n#\n# If the FQDN has multiple A records then all of the IP addresses will be\n# processed. If IPV6 is enabled, then all IPv6 AAAA IP address records will\n# also be allowed.\n#\n# A setting of 600 would check for IP updates every 10 minutes. Set the value\n# to 0 to disable the feature\nDYNDNS = \"0\"\n\n# To always ignore DYNDNS IP addresses in lfd blocking, set the following\n# option to 1\nDYNDNS_IGNORE = \"0\"\n\n# The following Global options allow you to specify a URL where csf can grab a\n# centralised copy of an IP allow or deny block list of your own. You need to\n# specify the full URL in the following options, e.g.:\n# http://www.somelocation.com/allow.txt\n#\n# The actual retrieval of these IP's is controlled by lfd, so you need to set\n# LF_GLOBAL to the interval (in seconds) at which you want lfd to retrieve\n# them. lfd will perform the retrieval when it runs and then again at the\n# specified interval. A sensible interval would probably be every 3600 seconds\n# (1 hour). A minimum value of 300 is enforced for LF_GLOBAL if enabled\n#\n# You do not have to specify both an allow and a deny file\n#\n# You can also configure a global ignore file for IP's that lfd should ignore\nLF_GLOBAL = \"0\"\n\nGLOBAL_ALLOW = \"\"\nGLOBAL_DENY = \"\"\nGLOBAL_IGNORE = \"\"\n\n# Provides the same functionality as DYNDNS but with a GLOBAL URL file. 
Set\n# this to the URL of the file containing DYNDNS entries\nGLOBAL_DYNDNS = \"\"\n\n# Set the following to the number of seconds to poll for a change in the IP\n# address resolved from GLOBAL_DYNDNS\nGLOBAL_DYNDNS_INTERVAL = \"600\"\n\n# To always ignore GLOBAL_DYNDNS IP addresses in lfd blocking, set the following\n# option to 1\nGLOBAL_DYNDNS_IGNORE = \"0\"\n\n# Blocklists are controlled by modifying /etc/csf/csf.blocklists\n#\n# If you don't want BOGON rules applied to specific NICs, then list them in\n# a comma separated list (e.g. \"eth1,eth2\")\nLF_BOGON_SKIP = \"\"\n\n# The following option can be used to select the method csf will use to\n# retrieve URL data and files\n#\n# This can be set to use:\n#\n# 1. Perl module HTTP::Tiny\n# 2. Perl module LWP::UserAgent\n# 3. CURL/WGET (set location at the bottom of csf.conf if installed)\n#\n# HTTP::Tiny is much faster than LWP::UserAgent and is included in the csf\n# distribution. LWP::UserAgent may have to be installed manually, but it can\n# better support https:// URL's, which also needs the LWP::Protocol::https perl\n# module\n#\n# CURL/WGET uses the system binaries if installed but does not always provide\n# good feedback when it fails. The script will first look for CURL; if that\n# does not exist at the configured location it will then look for WGET\n#\n# Additionally, if 1 or 2 is used and the retrieval fails, then if either CURL\n# or WGET is available, an additional attempt will be made using CURL/WGET. 
This is\n# useful if the perl distribution has outdated modules that do not support\n# modern SSL/TLS implementations\n#\n# To install the LWP perl modules required:\n#\n# On rpm based systems:\n#\n#   yum install perl-libwww-perl.noarch perl-LWP-Protocol-https.noarch\n#\n# On APT based systems:\n#\n#   apt-get install libwww-perl liblwp-protocol-https-perl\n#\n# Via cpan:\n#\n#   perl -MCPAN -eshell\n#   cpan> install LWP LWP::Protocol::https\n#\n# We recommend setting this to \"2\" or \"3\" as upgrades to csf will be\n# performed over SSL as well as other URLs used when retrieving external data\n#\n# \"1\" = HTTP::Tiny\n# \"2\" = LWP::UserAgent\n# \"3\" = CURL/WGET (set location at the bottom of csf.conf)\nURLGET = \"1\"\n\n# If you need csf/lfd to use a proxy, then you can set this option to the URL\n# of the proxy. The proxy provided will be used for both HTTP and HTTPS\n# connections\nURLPROXY = \"\"\n\n###############################################################################\n# SECTION:Country Code Lists and Settings\n###############################################################################\n# Country Code to CIDR allow/deny. In the following options you can allow or\n# deny whole country CIDR ranges. The CIDR blocks are obtained from a selected\n# source below. They also display Country Code, Country and City for reported\n# IP addresses and lookups\n#\n# There are a number of sources for these databases; before utilising them you\n# need to visit each site and ensure you abide by their license provisions\n# where stated:\n\n# 1. MaxMind\n#\n# MaxMind GeoLite2 Country/City and ASN databases at:\n# https://dev.MaxMind.com/geoip/geoip2/geolite2/\n# This feature relies entirely on that service being available\n#\n# Advantages: This is a one stop shop for all of the databases required for\n# these features. 
They provide a consistent dataset for blocking and reporting\n# purposes\n#\n# Disadvantages: MaxMind require a license key to download their databases.\n# This is free of charge, but requires the user to create an account on their\n# website to generate the required key:\n#\n# WARNING: As of 2019-12-29, MaxMind REQUIRES you to create an account on their\n# site and to generate a license key to use their databases. See:\n# https://www.maxmind.com/en/geolite2/signup\n# https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/\n#\n# You MUST set the following to continue using the IP lookup features of csf,\n# otherwise an error will be generated and the features will not work.\n# Alternatively set CC_SRC below to a different provider\n#\n# MaxMind License Key:\nMM_LICENSE_KEY = \"\"\n\n# 2. DB-IP, ipdeny.com, iptoasn.com\n#\n# Advantages: The ipdeny.com databases for CC blocking are better optimised\n# and so are quicker to process and create fewer iptables entries. All of these\n# databases are free to download without requiring login or key\n#\n# Disadvantages: Multiple sources mean that any one of the three could\n# interrupt the provision of these features. It may also mean that there are\n# inconsistencies between them\n#\n# https://db-ip.com/db/lite.php\n# http://ipdeny.com/\n# https://iptoasn.com/\n# http://download.geonames.org/export/dump/readme.txt\n\n# Set the following to your preferred source:\n#\n# \"1\" - MaxMind\n# \"2\" - db-ip, ipdeny, iptoasn\n#\n# The default is \"2\" on new installations of csf, or set to \"1\" to use the\n# MaxMind databases after obtaining a license key\nCC_SRC = \"2\"\n\n# In the following options, specify the two-letter ISO Country Code(s).\n# The iptables rules are for incoming connections only\n#\n# Additionally, ASN numbers can also be added to the comma separated lists\n# below that also list Country Codes. The same WARNINGS for Country Codes apply\n# to the use of ASNs. 
More about Autonomous System Numbers (ASN):\n# http://www.iana.org/assignments/as-numbers/as-numbers.xhtml\n# ASNs must be listed as ASnnnn (where nnnn is the ASN number)\n#\n# You should consider using LF_IPSET when using any of the following options\n#\n# WARNING: These lists are never 100% accurate and some ISP's (e.g. AOL) use\n# non-geographic IP address designations for their clients\n#\n# WARNING: Some of the CIDR lists are huge and each one requires a rule within\n# the incoming iptables chain. This can result in significant performance\n# overheads and could render the server inaccessible in some circumstances. For\n# this reason (amongst others) we do not recommend using these options\n#\n# WARNING: Due to the resource constraints on VPS servers this feature should\n# not be used on such systems unless you choose very small CC zones\n#\n# WARNING: CC_ALLOW allows access through all ports in the firewall. For this\n# reason CC_ALLOW probably has very limited use and CC_ALLOW_FILTER is\n# preferred\n#\n# Each option is a comma separated list of CC's, e.g. \"US,GB,DE\"\nCC_DENY = \"\"\nCC_ALLOW = \"\"\n\n# An alternative to CC_ALLOW is to only allow access from the following\n# countries but still filter based on the port and packets rules. All other\n# connections are dropped\nCC_ALLOW_FILTER = \"\"\n\n# This option allows access from the following countries to specific ports\n# listed in CC_ALLOW_PORTS_TCP and CC_ALLOW_PORTS_UDP\n#\n# Note: The rules for this feature are inserted after the allow and deny\n# rules to still allow blocking of IP addresses\n#\n# Each option is a comma separated list of CC's, e.g. \"US,GB,DE\"\nCC_ALLOW_PORTS = \"\"\n\n# All listed ports should be removed from TCP_IN/UDP_IN to block access from\n# elsewhere. 
This option uses the same format as TCP_IN/UDP_IN\n#\n# An example would be to list port 21 here and remove it from TCP_IN/UDP_IN;\n# then only countries listed in CC_ALLOW_PORTS can access FTP\nCC_ALLOW_PORTS_TCP = \"\"\nCC_ALLOW_PORTS_UDP = \"\"\n\n# This option denies access from the following countries to specific ports\n# listed in CC_DENY_PORTS_TCP and CC_DENY_PORTS_UDP\n#\n# Note: The rules for this feature are inserted after the allow and deny\n# rules so that individual IP addresses can still be allowed\n#\n# Each option is a comma separated list of CC's, e.g. \"US,GB,DE\"\nCC_DENY_PORTS = \"\"\n\n# This option uses the same format as TCP_IN/UDP_IN. The ports listed should\n# NOT be removed from TCP_IN/UDP_IN\n#\n# An example would be to list port 21 here; then countries listed in\n# CC_DENY_PORTS cannot access FTP\nCC_DENY_PORTS_TCP = \"\"\nCC_DENY_PORTS_UDP = \"\"\n\n# This Country Code list will prevent lfd from blocking IP address hits for the\n# listed CC's\n#\n# CC_LOOKUPS must be enabled to use this option\nCC_IGNORE = \"\"\n\n# This Country Code list will only allow SMTP AUTH to be advertised to the\n# listed countries in EXIM. 
This is to help limit attempts at distributed\n# attacks against SMTP AUTH which are difficult to block since port 25 needs\n# to be open to relay email\n#\n# The reason why this works is that if EXIM does not advertise SMTP AUTH on a\n# connection, then SMTP AUTH will not accept logins, defeating the attacks\n# without restricting mail relaying\n#\n# This option can generate a very large list of IP addresses that could easily\n# have a severe impact on SMTP (mail) performance, so care must be taken when\n# selecting countries, and the option reviewed if performance issues ensue\n#\n# The option SMTPAUTH_RESTRICT must be enabled to use this option\nCC_ALLOW_SMTPAUTH = \"\"\n\n# These options can control which IP blocks are redirected to the MESSENGER\n# service, if it is enabled\n#\n# If Country Codes are listed in CC_MESSENGER_ALLOW, then only a blocked IP\n# that resolves to one of those Country Codes will be redirected to the\n# MESSENGER service\n#\n# If Country Codes are listed in CC_MESSENGER_DENY, then a blocked IP that\n# resolves to one of those Country Codes will NOT be redirected to the\n# MESSENGER service\n#\nCC_MESSENGER_ALLOW = \"\"\nCC_MESSENGER_DENY = \"\"\n\n# Set this option to a valid CIDR (i.e. 1 to 32) to ignore CIDR blocks smaller\n# than this value when implementing CC_DENY/CC_ALLOW/CC_ALLOW_FILTER. This can\n# help reduce the number of CC entries and may improve iptables throughput.\n# Obviously, this will deny/allow fewer IP addresses depending on how small you\n# configure the option\n#\n# For example, to ignore all CIDR (and single IP) entries smaller than a /16,\n# set this option to \"16\". Set to \"\" to block all CC IP addresses\nCC_DROP_CIDR = \"\"\n\n# Display Country Code and Country for reported IP addresses. This option can\n# be configured to use the databases enabled at the top of this section. 
An\n# additional option is also available if you cannot use those databases:\n#\n# \"0\" - disable\n# \"1\" - Reports: Country Code and Country\n# \"2\" - Reports: Country Code and Country and Region and City\n# \"3\" - Reports: Country Code and Country and Region and City and ASN\n# \"4\" - Reports: Country Code and Country and Region and City (db-ip.com)\n#\n# Note: \"4\" does not use the databases enabled at the top of this section\n# directly for lookups. Instead it uses a URL-based lookup from\n# https://db-ip.com and so avoids having to download and process the large\n# databases. Please visit https://db-ip.com and read their limitations and\n# understand that this option will either cease to function or be removed by us\n# if that site is abused or overloaded. ONLY use this option if you have\n# difficulties using the databases enabled at the top of this section. This\n# option is ONLY for IP lookups, NOT when using the CC_* options above, which\n# will continue to use the databases enabled at the top of this section\n#\nCC_LOOKUPS = \"1\"\n\n# Display Country Code and Country for reported IPv6 addresses using the\n# databases enabled at the top of this section\n#\n# \"0\" - disable\n# \"1\" - enable and report the detail level as specified in CC_LOOKUPS\n#\n# This option must also be enabled to allow IPv6 support to CC_*, MESSENGER and\n# PORTFLOOD\nCC6_LOOKUPS = \"0\"\n\n# This option tells lfd how often to retrieve the databases for CC_ALLOW,\n# CC_ALLOW_FILTER, CC_DENY, CC_IGNORE and CC_LOOKUPS (in days)\nCC_INTERVAL = \"7\"\n\n###############################################################################\n# SECTION:Login Failure Blocking and Alerts\n###############################################################################\n# The following[*] triggers are application specific. 
If you set LF_TRIGGER to\n# \"0\" the value of each trigger is the number of failures against that\n# application that will trigger lfd to block the IP address\n#\n# If you set LF_TRIGGER to a value greater than \"0\" then the following[*]\n# application triggers are simply on or off (\"0\" or \"1\") and the value of\n# LF_TRIGGER is the total cumulative number of failures that will trigger lfd\n# to block the IP address\n#\n# Setting the application trigger to \"0\" disables it\n#\n# For example, with LF_TRIGGER = \"0\" and LF_SSHD = \"10\", an IP address is\n# blocked after 10 sshd login failures within LF_INTERVAL seconds\nLF_TRIGGER = \"0\"\n\n# If LF_TRIGGER is > \"0\" then LF_TRIGGER_PERM can be set to \"1\" to permanently\n# block the IP address, or LF_TRIGGER_PERM can be set to a value greater than\n# \"1\" and the IP address will be blocked temporarily for that value in seconds.\n# For example:\n# LF_TRIGGER_PERM = \"1\" => the IP is blocked permanently\n# LF_TRIGGER_PERM = \"3600\" => the IP is blocked temporarily for 1 hour\n#\n# If LF_TRIGGER is \"0\", then the application LF_[application]_PERM value works\n# in the same way as above and LF_TRIGGER_PERM serves no function\nLF_TRIGGER_PERM = \"3600\"\n\n# To only block access to the failed application instead of a complete block\n# for an IP address, you can set the following to \"1\", but LF_TRIGGER must be\n# set to \"0\" with specific application[*] trigger levels also set appropriately\n#\n# The ports that are blocked can be configured by changing the PORTS_* options\nLF_SELECT = \"0\"\n\n# Send an email alert if an IP address is blocked by one of the [*] triggers\nLF_EMAIL_ALERT = \"0\"\n\n# Send an email alert if an IP address is only temporarily blocked by one of\n# the [*] triggers\n#\n# Note: LF_EMAIL_ALERT must still be enabled to get permanent block emails\nLF_TEMP_EMAIL_ALERT = \"1\"\n\n# [*]Enable login failure detection of sshd connections\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. 
Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_SSHD = \"10\"\nLF_SSHD_PERM = \"3600\"\n\n# [*]Enable login failure detection of ftp connections\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_FTPD = \"10\"\nLF_FTPD_PERM = \"3600\"\n\n# [*]Enable login failure detection of SMTP AUTH connections\nLF_SMTPAUTH = \"10\"\nLF_SMTPAUTH_PERM = \"3600\"\n\n# [*]Enable syntax failure detection of Exim connections\nLF_EXIMSYNTAX = \"10\"\nLF_EXIMSYNTAX_PERM = \"1\"\n\n# [*]Enable login failure detection of pop3 connections\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_POP3D = \"10\"\nLF_POP3D_PERM = \"3600\"\n\n# [*]Enable login failure detection of imap connections\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_IMAPD = \"10\"\nLF_IMAPD_PERM = \"3600\"\n\n# [*]Enable login failure detection of Apache .htpasswd connections\n# Due to the often high logging rate in the Apache error log, you might want to\n# enable this option only if you know you are suffering from attacks against\n# password protected directories\nLF_HTACCESS = \"0\"\nLF_HTACCESS_PERM = \"3600\"\n\n# [*]Enable failure detection of repeated Apache mod_security rule triggers\nLF_MODSEC = \"10\"\nLF_MODSEC_PERM = \"3600\"\n\n# [*]Enable detection of repeated BIND denied requests\n# This option should be enabled with care as it will prevent blocked IPs from\n# resolving any domains on the server. 
You might want to set the trigger value\n# reasonably high to avoid this\n# Example: LF_BIND = \"100\"\nLF_BIND = \"0\"\nLF_BIND_PERM = \"1\"\n\n# [*]Enable detection of repeated suhosin ALERTs\n# Example: LF_SUHOSIN = \"5\"\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_SUHOSIN = \"0\"\nLF_SUHOSIN_PERM = \"1\"\n\n# [*]Enable detection of repeated cxs mod_security rule triggers\n# This option will block IP addresses if cxs detects a hit from the\n# ModSecurity rule associated with it\n#\n# Note: This option takes precedence over LF_MODSEC and removes any hits\n# counted towards LF_MODSEC for the cxs rule\n#\n# This setting should probably be set very low, perhaps to 1, if you want to\n# effectively block IP addresses for this trigger option\nLF_CXS = \"0\"\nLF_CXS_PERM = \"1\"\n\n# [*]Enable detection of repeated Apache mod_qos rule triggers\nLF_QOS = \"0\"\nLF_QOS_PERM = \"1\"\n\n# [*]Enable detection of repeated Apache symlink race condition triggers from\n# the Apache patch provided by:\n# http://www.mail-archive.com/dev@httpd.apache.org/msg55666.html\n# This patch has also been included by cPanel via the easyapache option:\n# \"Symlink Race Condition Protection\"\nLF_SYMLINK = \"0\"\nLF_SYMLINK_PERM = \"1\"\n\n# [*]Enable login failure detection of webmin connections\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_WEBMIN = \"0\"\nLF_WEBMIN_PERM = \"1\"\n\n# Send an email alert if anyone logs in successfully using SSH\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_SSH_EMAIL_ALERT = \"0\"\n\n# Send an email alert if anyone uses su to access another account. 
This will\n# send an email alert whether the attempt to use su was successful or not\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_SU_EMAIL_ALERT = \"1\"\n\n# Send an email alert if anyone uses sudo to access another account. This will\n# send an email alert whether the attempt to use sudo was successful or not\n#\n# NOTE: This option could become onerous if sudo is used extensively for root\n# access by administrators or control panels. It is provided for those where\n# this is not the case\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_SUDO_EMAIL_ALERT = \"0\"\n\n# Send an email alert if anyone accesses webmin\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_WEBMIN_EMAIL_ALERT = \"0\"\n\n# Send an email alert if anyone logs in successfully to root on the console\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_CONSOLE_EMAIL_ALERT = \"1\"\n\n# This option will keep track of the number of \"File does not exist\" errors in\n# HTACCESS_LOG. 
If the number of hits is more than LF_APACHE_404 in LF_INTERVAL\n# seconds then the IP address will be blocked\n#\n# Care should be used with this option as it could generate many\n# false-positives, especially Search Bots (use csf.rignore to ignore such bots)\n# so only use this option if you know you are under this type of attack\n#\n# A sensible setting for this would be quite high, perhaps 200\n#\n# To disable set to \"0\"\nLF_APACHE_404 = \"0\"\n\n# If this option is set to 1 the blocks will be permanent\n# If this option is > 1, the blocks will be temporary for the specified number\n# of seconds\nLF_APACHE_404_PERM = \"3600\"\n\n# This option will keep track of the number of \"client denied by server\n# configuration\" errors in HTACCESS_LOG. If the number of hits is more than\n# LF_APACHE_403 in LF_INTERVAL seconds then the IP address will be blocked\n#\n# Care should be used with this option as it could generate many\n# false-positives, especially Search Bots (use csf.rignore to ignore such bots)\n# so only use this option if you know you are under this type of attack\n#\n# A sensible setting for this would be quite high, perhaps 200\n#\n# To disable set to \"0\"\nLF_APACHE_403 = \"0\"\n\n# If this option is set to 1 the blocks will be permanent\n# If this option is > 1, the blocks will be temporary for the specified number\n# of seconds\nLF_APACHE_403_PERM = \"3600\"\n\n# This option will keep track of the number of 401 failures in HTACCESS_LOG.\n# If the number of hits is more than LF_APACHE_401 in LF_INTERVAL seconds then\n# the IP address will be blocked\n#\n# To disable set to \"0\"\nLF_APACHE_401 = \"0\"\n\n# This option is used to determine if the Apache error_log format contains the\n# client port after the client IP. In Apache prior to v2.4, this was not the\n# case. 
In Apache v2.4+ the error_log format can be configured using\n# ErrorLogFormat, making the port directive optional\n#\n# Unfortunately v2.4 ErrorLogFormat places the port number after a colon next\n# to the client IP by default. This makes determining client IPv6 addresses\n# difficult unless we know whether the port is being appended or not\n#\n# lfd will attempt to autodetect the correct value if this option is set to \"0\"\n# from the httpd binary found in common locations. If it fails to find a binary\n# it will be set to \"2\", unless specified here\n#\n# The value can be set here explicitly if the autodetection does not work:\n# 0 - autodetect\n# 1 - no port directive after client IP\n# 2 - port directive after client IP\nLF_APACHE_ERRPORT = \"0\"\n\n# If this option is set to 1 the blocks will be permanent\n# If this option is > 1, the blocks will be temporary for the specified number\n# of seconds\nLF_APACHE_401_PERM = \"3600\"\n\n# This option will send an alert if the ModSecurity IP persistent storage grows\n# excessively large: https://goo.gl/rGh5sF\n#\n# More information on cPanel servers here: https://goo.gl/vo6xTE\n#\n# LF_MODSECIPDB_FILE must be set to the correct location of the database file\n#\n# The check is performed at lfd startup and then once per hour, the template\n# used is modsecipdbalert.txt\n#\n# Set to \"0\" to disable this option, otherwise it is the threshold size of the\n# file to report in gigabytes, e.g. set to 5 for 5GB\nLF_MODSECIPDB_ALERT = \"0\"\n\n# This is the location of the persistent IP storage file on the server, e.g.:\n# /run/modsecurity/data/ip.pag\n# /var/cpanel/secdatadir/ip.pag\n# /var/cache/modsecurity/ip.pag\n# /usr/local/apache/conf/modsec/data/msa/ip.pag\n# /var/tmp/ip.pag\n# /tmp/ip.pag\nLF_MODSECIPDB_FILE = \"/run/modsecurity/data/ip.pag\"\n\n# System Exploit Checking. 
This option is designed to perform a series of tests\n# to send an alert in case a possible server compromise is detected\n#\n# To enable this feature set the following to the checking interval in seconds\n# (a value of 300 would seem sensible).\n#\n# To disable set to \"0\"\nLF_EXPLOIT = \"300\"\n\n# This comma separated list allows you to ignore tests LF_EXPLOIT performs\n#\n# For the SUPERUSER check, you can list usernames in csf.suignore to have them\n# ignored for that test\n#\n# Valid tests are:\n# SUPERUSER\n#\n# If you want to ignore a test add it to this as a comma separated list, e.g.\n# \"SUPERUSER\"\nLF_EXPLOIT_IGNORE = \"\"\n\n# Set the time interval to track login and other LF_ failures within (seconds),\n# i.e. LF_TRIGGER failures within the last LF_INTERVAL seconds\nLF_INTERVAL = \"300\"\n\n# This is how long the lfd process sleeps (in seconds) before processing the\n# log file entries and checking whether other events need to be triggered\nLF_PARSE = \"5\"\n\n# This is the interval that is used to flush reports of usernames, files and\n# pids so that persistent problems continue to be reported, in seconds.\n# A value of 3600 seems sensible\nLF_FLUSH = \"3600\"\n\n# Under some circumstances iptables can fail to include a rule instruction,\n# especially if more than one request is made concurrently. In this event, a\n# permanent block entry may exist in csf.deny, but not in iptables.\n#\n# This option instructs csf to deny an already blocked IP address the number\n# of times set. The downside is that there will be multiple entries for an IP\n# address in csf.deny and possibly multiple rules for the same IP address in\n# iptables. This needs to be taken into consideration when unblocking such IP\n# addresses.\n#\n# Set to \"0\" to disable this feature. Do not set this too high for the reasons\n# detailed above (e.g. 
\"5\" should be more than enough)\nLF_REPEATBLOCK = \"0\"\n\n# By default csf will create both inbound and outbound blocks from/to an IP\n# unless otherwise specified in csf.deny and GLOBAL_DENY. This is the most\n# effective way to block IP traffic. This option instructs csf to only block\n# inbound traffic from those IP's and so reduces the number of iptables rules,\n# but at the expense of less effectiveness. For this reason we recommend\n# leaving this option disabled\n#\n# Set to \"0\" to disable this feature - the default\nLF_BLOCKINONLY = \"0\"\n\n###############################################################################\n# SECTION:CloudFlare\n###############################################################################\n# This feature provides interaction with the CloudFlare Firewall\n#\n# As CloudFlare is a reverse proxy, any attacking IP addresses (so far as\n# iptables is concerned) come from the CloudFlare IP's. To counter this, an\n# Apache module (mod_cloudflare) is available that obtains the true attacker's\n# IP from a custom HTTP header record (similar functionality is available\n# for other HTTP daemons)\n#\n# However, despite now knowing the true attacking IP address, iptables cannot\n# be used to block that IP as the traffic is still coming from the CloudFlare\n# servers\n#\n# CloudFlare have provided a Firewall feature within the user account where\n# rules can be added to block, challenge or whitelist IP addresses\n#\n# Using the CloudFlare API, this feature adds and removes attacking IPs from\n# that firewall and provides additional commands via the CLI (and the UI)\n#\n# See /etc/csf/readme.txt for more information about this feature and the\n# restrictions for its use BEFORE enabling this feature\nCF_ENABLE = \"0\"\n\n# This can be set to either \"block\" or \"challenge\" (see CloudFlare docs)\nCF_BLOCK = \"block\"\n\n# This setting determines how long the temporary block will apply within csf\n# and CloudFlare, keeping them in 
sync\n#\n# Block duration in seconds - overrides perm block or time of individual blocks\n# in lfd for block triggers\nCF_TEMP = \"3600\"\n\n###############################################################################\n# SECTION:Directory Watching & Integrity\n###############################################################################\n# Enable Directory Watching. This enables lfd to check /tmp and /dev/shm\n# directories for suspicious files, i.e. script exploits. If a suspicious\n# file is found an email alert is sent. One alert per file per LF_FLUSH\n# interval is sent\n#\n# To enable this feature set the following to the checking interval in seconds.\n# To disable set to \"0\"\nLF_DIRWATCH = \"300\"\n\n# To remove any suspicious files found during directory watching, enable the\n# following. These files will be appended to a tarball in\n# /var/lib/csf/suspicious.tar\nLF_DIRWATCH_DISABLE = \"0\"\n\n# This option allows you to have lfd watch a particular file or directory for\n# changes; should they change, an email alert using watchalert.txt is sent\n#\n# To enable this feature set the following to the checking interval in seconds\n# (a value of 60 would seem sensible) and add your entries to csf.dirwatch\n#\n# To disable set to \"0\"\nLF_DIRWATCH_FILE = \"0\"\n\n# System Integrity Checking. This enables lfd to compare md5sums of the\n# server's OS binary application files from the time when lfd starts. If the\n# md5sum of a monitored file changes an alert is sent. This option is intended\n# as an IDS (Intrusion Detection System) and is the last line of detection for\n# a possible root compromise.\n#\n# There will be constant false-positives as the server's OS is updated or\n# monitored application binaries are updated. 
However, unexpected changes\n# should be carefully inspected.\n#\n# Modified files will only be reported via email once.\n#\n# To enable this feature set the following to the checking interval in seconds\n# (a value of 3600 would seem sensible). This option may increase I/O load on\n# the server as it checks system binaries.\n#\n# To disable set to \"0\"\nLF_INTEGRITY = \"3600\"\n\n###############################################################################\n# SECTION:Distributed Attacks\n###############################################################################\n# Distributed Account Attack. This option will keep track of login failures\n# from distributed IP addresses to a specific application account. If the\n# number of failures matches the trigger value above, ALL of the IP addresses\n# involved in the attack will be blocked according to the temp/perm rules above\n#\n# Tracking applies to LF_SSHD, LF_FTPD, LF_SMTPAUTH, LF_POP3D, LF_IMAPD,\n# LF_HTACCESS\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_DISTATTACK = \"1\"\n\n# Set the following to the minimum number of unique IP addresses that trigger\n# LF_DISTATTACK\nLF_DISTATTACK_UNIQ = \"3\"\n\n# Distributed FTP Logins. This option will keep track of successful FTP logins.\n# If the number of successful logins to an individual account is at least\n# LF_DISTFTP in LF_DIST_INTERVAL from at least LF_DISTFTP_UNIQ IP addresses,\n# then all of the IP addresses will be blocked\n#\n# This option can help mitigate the common FTP account compromise attacks that\n# use a distributed network of zombies to deface websites\n#\n# A sensible setting for this might be 5, depending on how many different\n# IP addresses you expect to log in to an individual FTP account within\n# LF_DIST_INTERVAL\n#\n# To disable set to \"0\"\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. 
Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLF_DISTFTP = \"5\"\n\n# Set the following to the minimum number of unique IP addresses that trigger\n# LF_DISTFTP. LF_DISTFTP_UNIQ must be <= LF_DISTFTP for this to work\nLF_DISTFTP_UNIQ = \"5\"\n\n# If this option is set to 1 the blocks will be permanent\n# If this option is > 1, the blocks will be temporary for the specified number\n# of seconds\nLF_DISTFTP_PERM = \"900\"\n\n# Send an email alert if LF_DISTFTP is triggered\nLF_DISTFTP_ALERT = \"1\"\n\n# Distributed SMTP Logins. This option will keep track of successful SMTP\n# logins. If the number of successful logins to an individual account is at\n# least LF_DISTSMTP in LF_DIST_INTERVAL from at least LF_DISTSMTP_UNIQ IP\n# addresses, then all of the IP addresses will be blocked. These options only\n# apply to the exim MTA\n#\n# This option can help mitigate the common SMTP account compromise attacks that\n# use a distributed network of zombies to send spam\n#\n# A sensible setting for this might be 5, depending on how many different\n# IP addresses you expect to an individual SMTP account within LF_DIST_INTERVAL\n#\n# To disable set to \"0\"\nLF_DISTSMTP = \"0\"\n\n# Set the following to the minimum number of unique IP addresses that trigger\n# LF_DISTSMTP. 
LF_DISTSMTP_UNIQ must be <= LF_DISTSMTP for this to work\nLF_DISTSMTP_UNIQ = \"3\"\n\n# If this option is set to 1 the blocks will be permanent\n# If this option is > 1, the blocks will be temporary for the specified number\n# of seconds\nLF_DISTSMTP_PERM = \"1\"\n\n# Send an email alert if LF_DISTSMTP is triggered\nLF_DISTSMTP_ALERT = \"1\"\n\n# This is the interval during which a distributed FTP or SMTP attack is\n# measured\nLF_DIST_INTERVAL = \"300\"\n\n# If LF_DISTFTP or LF_DISTSMTP is triggered, then if the following contains the\n# path to a script, it will run the script and pass the following as arguments:\n#\n# LF_DISTFTP/LF_DISTSMTP\n# account name\n# log file text\n#\n# The action script must have the execute bit and interpreter (shebang) set\nLF_DIST_ACTION = \"\"\n\n###############################################################################\n# SECTION:Login Tracking\n###############################################################################\n# Block POP3 logins if greater than LT_POP3D times per hour per account per IP\n# address (0=disabled)\n#\n# This is a temporary block for the rest of the hour, after which the IP is\n# unblocked\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLT_POP3D = \"0\"\n\n# Block IMAP logins if greater than LT_IMAPD times per hour per account per IP\n# address (0=disabled) - not recommended for IMAP logins due to the ethos\n# within which IMAP works. If you want to use this, setting it quite high is\n# probably a good idea\n#\n# This is a temporary block for the rest of the hour, after which the IP is\n# unblocked\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. 
Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nLT_IMAPD = \"0\"\n\n# Send an email alert if an account exceeds LT_POP3D/LT_IMAPD logins per hour\n# per IP\nLT_EMAIL_ALERT = \"0\"\n\n# If LF_PERMBLOCK is enabled but you do not want this to apply to\n# LT_POP3D/LT_IMAPD, then enable this option\nLT_SKIPPERMBLOCK = \"0\"\n\n###############################################################################\n# SECTION:Connection Tracking\n###############################################################################\n# Connection Tracking. This option enables tracking of all connections from IP\n# addresses to the server. If the total number of connections is greater than\n# this value then the offending IP address is blocked. This can be used to help\n# prevent some types of DOS attack.\n#\n# Care should be taken with this option. It's entirely possible that you will\n# see false-positives. Some protocols can be connection hungry, e.g. FTP, IMAPD\n# and HTTP so it could be quite easy to trigger, especially with a lot of\n# closed connections in TIME_WAIT. However, for a server that is prone to DOS\n# attacks this may be very useful. A reasonable setting for this option might\n# be around 300.\n#\n# To disable this feature, set this to 0\nCT_LIMIT = \"0\"\n\n# Connection Tracking interval. Set this to the number of seconds between\n# connection tracking scans\nCT_INTERVAL = \"30\"\n\n# Send an email alert if an IP address is blocked due to connection tracking\nCT_EMAIL_ALERT = \"0\"\n\n# If you want to make IP blocks permanent then set this to 1, otherwise blocks\n# will be temporary and will be cleared after CT_BLOCK_TIME seconds\nCT_PERMANENT = \"0\"\n\n# If you opt for temporary IP blocks for CT, then the following is the interval\n# in seconds that the IP will remain blocked for (e.g. 
1800 = 30 mins)\nCT_BLOCK_TIME = \"3600\"\n\n# If you don't want to count the TIME_WAIT state against the connection count\n# then set the following to \"1\"\nCT_SKIP_TIME_WAIT = \"0\"\n\n# If you only want to count specific states (e.g. SYN_RECV) then add the states\n# to the following as a comma separated list. E.g. \"SYN_RECV,TIME_WAIT\"\n#\n# Leave this option empty to count all states against CT_LIMIT\nCT_STATES = \"\"\n\n# If you only want to count specific ports (e.g. 80,443) then add the ports\n# to the following as a comma separated list. E.g. \"80,443\"\n#\n# Leave this option empty to count all ports against CT_LIMIT\nCT_PORTS = \"\"\n\n# If the total number of connections from a class C subnet is greater than this\n# value then the offending subnet is blocked according to the other CT_*\n# settings\n#\n# This option can be used to help prevent some types of DOS attack where a\n# range of IP's between x.y.z.1-255 has connected to the server\n#\n# If you use a reverse proxy service such as Cloudflare you should not enable\n# this option, or should exclude the ports that you have proxied in CT_PORTS\n#\n# To disable this feature, set this to 0\nCT_SUBNET_LIMIT = \"0\"\n\n###############################################################################\n# SECTION:Process Tracking\n###############################################################################\n# Process Tracking. This option enables tracking of user and nobody processes\n# and examines them for suspicious executables or open network ports. Its\n# purpose is to identify potential exploit processes that are running on the\n# server, even if they are obfuscated to appear as system services. If a\n# suspicious process is found an alert email is sent with relevant information.\n# It is then the responsibility of the recipient to investigate the process\n# further as the script takes no further action\n#\n# The following is the number of seconds a process has to be active before it\n# is inspected. 
If you set this time too low, then you will likely trigger\n# false-positives with CGI or PHP scripts.\n# Set the value to 0 to disable this feature\nPT_LIMIT = \"0\"\n\n# How frequently processes are checked in seconds\nPT_INTERVAL = \"60\"\n\n# If you want process tracking to highlight php or perl scripts that are run\n# through apache then disable the following,\n# i.e. set it to 0\n#\n# While enabling this setting will reduce false-positives, having it set to 0\n# does provide better checking for exploits running on the server\nPT_SKIP_HTTP = \"0\"\n\n# lfd will report processes, even if they're listed in csf.pignore, if they're\n# tagged as (deleted) by Linux. This information is provided in Linux under\n# /proc/PID/exe. A (deleted) process is one that is running a binary that has\n# the inode for the file removed from the file system directory. This usually\n# happens when the binary has been replaced due to an upgrade by the OS\n# vendor or another third party (e.g. cPanel). You need to investigate whether\n# this is indeed the case to be sure that the original binary has not been\n# replaced by a rootkit or is running an exploit.\n#\n# Note: If a deleted executable process is detected and reported then lfd will\n# not report children of the parent (or the parent itself if a child triggered\n# the report) if the parent is also a deleted executable process\n#\n# To stop lfd reporting such processes you need to restart the daemon to which\n# they belong and therefore run the process using the replacement binary\n# (presuming one exists). 
This will normally mean running the associated startup script in\n# /etc/init.d/\n#\n# If you do want lfd to report deleted binary processes, set to 1\nPT_DELETED = \"0\"\n\n# If a PT_DELETED event is triggered, then if the following contains the path to\n# a script, it will be run in a child process and passed the executable, pid,\n# account for the process, and parent pid\n#\n# The action script must have the execute bit and interpreter (shebang) set. An\n# example is provided in /usr/local/csf/bin/pt_deleted_action.pl\n#\n# WARNING: Make sure you read and understand the potential security\n# implications of such processes in PT_DELETED above before simply restarting\n# such processes with a script\nPT_DELETED_ACTION = \"\"\n\n# User Process Tracking. This option enables the tracking of the number of\n# process any given account is running at one time. If the number of processes\n# exceeds the value of the following setting an email alert is sent with\n# details of those processes. If you specify a user in csf.pignore it will be\n# ignored\n#\n# Set to 0 to disable this feature\nPT_USERPROC = \"0\"\n\n# This User Process Tracking option sends an alert if any user process exceeds\n# the virtual memory usage set (MB). To ignore specific processes or users use\n# csf.pignore\n#\n# Set to 0 to disable this feature\nPT_USERMEM = \"0\"\n\n# This User Process Tracking option sends an alert if any user process exceeds\n# the RSS memory usage set (MB) - RAM used, not virtual. To ignore specific\n# processes or users use csf.pignore\n#\n# Set to 0 to disable this feature\nPT_USERRSS = \"0\"\n\n# This User Process Tracking option sends an alert if any linux user process\n# exceeds the time usage set (seconds). 
To ignore specific processes or users\n# use csf.pignore\n#\n# Set to 0 to disable this feature\nPT_USERTIME = \"0\"\n\n# If this option is set then processes detected by PT_USERMEM, PT_USERTIME or\n# PT_USERPROC are killed\n#\n# Warning: We don't recommend enabling this option unless absolutely necessary\n# as it can cause unexpected problems when processes are suddenly terminated.\n# It can also lead to system processes being terminated which could cause\n# stability issues. It is much better to leave this option disabled and to\n# investigate each case as it is reported when the triggers above are breached\n#\n# Note: Processes that are running deleted executables (see PT_DELETED) will\n# not be killed by lfd\nPT_USERKILL = \"0\"\n\n# If you want to disable email alerts if PT_USERKILL is triggered, then set\n# this option to 0\nPT_USERKILL_ALERT = \"0\"\n\n# If a PT_* event is triggered, then if the following contains the path to\n# a script, it will be run in a child process and passed the PID(s) of the\n# process(es) in a comma separated list.\n#\n# The action script must have the execute bit and interpreter (shebang) set\nPT_USER_ACTION = \"\"\n\n# Check the PT_LOAD_AVG minute Load Average (can be set to 1, 5 or 15 and\n# defaults to 5 if set otherwise) on the server every PT_LOAD seconds. If the\n# load average is greater than or equal to PT_LOAD_LEVEL then an email alert is\n# sent. lfd then does not report subsequent high load until PT_LOAD_SKIP\n# seconds have passed to prevent email floods.\n#\n# Set PT_LOAD to \"0\" to disable this feature\nPT_LOAD = \"0\"\nPT_LOAD_AVG = \"5\"\nPT_LOAD_LEVEL = \"10\"\nPT_LOAD_SKIP = \"3600\"\n\n# This is the Apache Server Status URL used in the email alert. Requires the\n# Apache mod_status module to be installed and configured correctly\nPT_APACHESTATUS = \"http://127.0.0.1/server-status\"\n\n# If a PT_LOAD event is triggered, then if the following contains the path to\n# a script, it will be run in a child process. 
For example, the script could\n# contain commands to terminate and restart httpd, php, exim, etc. in case of\n# looping processes. The action script must have the execute bit and\n# interpreter (shebang) set\nPT_LOAD_ACTION = \"\"\n\n# Fork Bomb Protection. This option checks the number of processes with the\n# same session id and if greater than the value set, the whole session tree is\n# terminated and an alert sent\n#\n# You can see an example of common session id processes on most Linux systems\n# using: \"ps axf -O sid\"\n#\n# On cPanel servers, PT_ALL_USERS should be enabled to use this option\n# effectively\n#\n# This option will check root owned processes. Session id 0 and 1 will always\n# be ignored as they represent kernel and init processes. csf.pignore will be\n# honoured, but bear in mind that a session tree can contain a variety of users\n# and executables\n#\n# Care needs to be taken to ensure that this option only detects runaway fork\n# bombs, so should be set higher than any session tree is likely to get (e.g.\n# httpd could have 100s of legitimate children on very busy systems). A\n# sensible starting point on most servers might be 250\nPT_FORKBOMB = \"250\"\n\n# Terminate hung SSHD sessions. When under an SSHD login attack, SSHD processes\n# are often left hanging after their connecting IP addresses have been blocked\n#\n# This option will terminate the SSH processes created by the blocked IP. This\n# option is preferred over PT_SSHDHUNG\nPT_SSHDKILL = \"0\"\n\n# This option will terminate all processes with the cmdline of \"sshd: unknown\n# [net]\" or \"sshd: unknown [priv]\" if they have been running for more than 60\n# seconds\nPT_SSHDHUNG = \"0\"\n\n###############################################################################\n# SECTION:Port Scan Tracking\n###############################################################################\n# Port Scan Tracking. This feature tracks port blocks logged by iptables to\n# syslog. 
If an IP address generates a port block that is logged more than\n# PS_LIMIT times within PS_INTERVAL seconds, the IP address will be blocked.\n#\n# This feature could, for example, be useful for blocking hackers attempting\n# to access the standard SSH port if you have moved it to a port other than 22\n# and have removed 22 from the TCP_IN list so that connection attempts to the\n# old port are being logged\n#\n# This feature blocks all iptables blocks from the iptables logs, including\n# repeated attempts to one port or SYN flood blocks, etc\n#\n# Note: This feature will only track iptables blocks from the log file set in\n# IPTABLES_LOG below and if you have DROP_LOGGING enabled. However, it will\n# cause redundant blocking with DROP_IP_LOGGING enabled\n#\n# Warning: It's possible that an elaborate DDOS (i.e. from multiple IP's)\n# could very quickly fill the iptables rule chains and cause a DOS in itself.\n# The DENY_IP_LIMIT should help to mitigate such problems with permanent blocks\n# and the DENY_TEMP_IP_LIMIT with temporary blocks\n#\n# Set PS_INTERVAL to \"0\" to disable this feature. A value of between 60 and 300\n# would be sensible to enable this feature\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nPS_INTERVAL = \"120\"\nPS_LIMIT = \"19\"\n\n# You can specify the ports and/or port ranges that should be tracked by the\n# Port Scan Tracking feature. The following setting is a comma separated list\n# of those ports and uses the same format as TCP_IN. 
The setting of\n# 0:65535,ICMP,INVALID,OPEN,BRD covers all ports\n#\n# Special values are:\n#   ICMP    - include ICMP blocks (see ICMP_*)\n#   INVALID - include INVALID blocks (see PACKET_FILTER)\n#   OPEN    - include TCP_IN and UDP_IN open port blocks - *[proto]_IN Blocked*\n#   BRD     - include UDP Broadcast IPs, otherwise they are ignored\nPS_PORTS = \"0:65535,ICMP\"\n\n# To specify how many different ports qualify as a Port Scan you can increase\n# the following from the default value of 1. The risk in doing so is\n# that persistent attempts to attack a specific closed port will not be\n# detected and blocked\nPS_DIVERSITY = \"1\"\n\n# You can select whether IP blocks for Port Scan Tracking should be temporary\n# or permanent. Set PS_PERMANENT to \"0\" for temporary and \"1\" for permanent\n# blocking. If set to \"0\" PS_BLOCK_TIME is the amount of time in seconds to\n# temporarily block the IP address for\nPS_PERMANENT = \"0\"\nPS_BLOCK_TIME = \"3600\"\n\n# Set the following to \"1\" to enable Port Scan Tracking email alerts, set to\n# \"0\" to disable them\nPS_EMAIL_ALERT = \"1\"\n\n###############################################################################\n# SECTION:User ID Tracking\n###############################################################################\n# User ID Tracking. This feature tracks UID blocks logged by iptables to\n# syslog. If a UID generates a port block that is logged more than UID_LIMIT\n# times within UID_INTERVAL seconds, an alert will be sent\n#\n# Note: This feature will only track iptables blocks from the log file set in\n# IPTABLES_LOG and if DROP_OUT_LOGGING and DROP_UID_LOGGING are enabled.\n#\n# To ignore specific UIDs list them in csf.uidignore and then restart lfd\n#\n# Set UID_INTERVAL to \"0\" to disable this feature. A value of between 60 and 300\n# would be sensible to enable this feature\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. 
Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nUID_INTERVAL = \"0\"\nUID_LIMIT = \"10\"\n\n# You can specify the ports and/or port ranges that should be tracked by the\n# User ID Tracking feature. The following setting is a comma separated list\n# of those ports and uses the same format as TCP_OUT. The default setting of\n# 0:65535,ICMP covers all ports\nUID_PORTS = \"0:65535,ICMP\"\n\n###############################################################################\n# SECTION:Account Tracking\n###############################################################################\n# Account Tracking. The following options enable the tracking of modifications\n# to the accounts on a server. If any of the enabled options are triggered by\n# a modification to an account, an alert email is sent. Only the modification\n# is reported. The cause of the modification will have to be investigated\n# manually\n#\n# You can set AT_ALERT to the following:\n# 0 = disable this feature\n# 1 = enable this feature for all accounts\n# 2 = enable this feature only for superuser accounts (UID = 0, e.g. root, etc)\n# 3 = enable this feature only for the root account\nAT_ALERT = \"2\"\n\n# This option is the interval between checks in seconds\nAT_INTERVAL = \"60\"\n\n# Send alert if a new account is created\nAT_NEW = \"1\"\n\n# Send alert if an existing account is deleted\nAT_OLD = \"1\"\n\n# Send alert if an account password has changed\nAT_PASSWD = \"1\"\n\n# Send alert if an account uid has changed\nAT_UID = \"1\"\n\n# Send alert if an account gid has changed\nAT_GID = \"1\"\n\n# Send alert if an account login directory has changed\nAT_DIR = \"1\"\n\n# Send alert if an account login shell has changed\nAT_SHELL = \"1\"\n\n###############################################################################\n# SECTION:Integrated User Interface\n###############################################################################\n# Integrated User Interface. 
This feature provides an HTML UI to csf and lfd,\n# without requiring a control panel or web server. The UI runs as a sub process\n# to the lfd daemon\n#\n# As it runs under the root account and successful login provides root access\n# to the server, great care should be taken when configuring and using this\n# feature. There are additional restrictions to enhance secure access to the UI\n#\n# See readme.txt for more information about using this feature BEFORE enabling\n# it for security and access reasons\n#\n# 1 to enable, 0 to disable\nUI = \"0\"\n\n# Set this to the port that you want to bind this service to. You should\n# configure this port to be >1023 and different from any other port already\n# being used\n#\n# Do NOT enable access to this port in TCP_IN, instead only allow trusted IP's\n# to the port using Advanced Allow Filters (see readme.txt)\nUI_PORT = \"9898\"\n\n# Optionally set the IP address to bind to. Normally this should be left blank\n# to bind to all IP addresses on the server.\n#\n# If the server is configured for IPv6 but the IP to bind to is IPv4, then the\n# IP address MUST use the IPv6 representation. For example 1.2.3.4 must use\n# ::ffff:1.2.3.4\n#\n# Leave blank to bind to all IP addresses on the server\nUI_IP = \"\"\n\n# This should be a secure, hard to guess username\n#\n# This must be changed from the default\nUI_USER = \"username\"\n\n# This should be a secure, hard to guess password. That is, at least 8\n# characters long with a mixture of upper and lowercase characters plus\n# numbers and non-alphanumeric characters\n#\n# This must be changed from the default\nUI_PASS = \"password\"\n\n# This is the login session timeout. If there is no activity for a logged in\n# session within this number of seconds, the session will timeout and a new\n# login will be required\n#\n# For security reasons, you should always keep this option low (i.e 60-300)\nUI_TIMEOUT = \"300\"\n\n# This is the maximum concurrent connections allowed to the server. 
The default\n# value should be sufficient\nUI_CHILDREN = \"5\"\n\n# The number of login retries allowed within a 24 hour period. A successful\n# login from the IP address will clear the failures\n#\n# For security reasons, you should always keep this option low (i.e 0-10)\nUI_RETRY = \"5\"\n\n# If enabled, this option will add the connecting IP address to the file\n# /etc/csf/ui/ui.ban after UI_RETRY login failures. The IP address will not be\n# able to login to the UI while it is listed in this file. The UI_BAN setting\n# does not refer to any of the csf/lfd allow or ignore files, e.g. csf.allow,\n# csf.ignore, etc.\n#\n# For security reasons, you should always enable this option\nUI_BAN = \"1\"\n\n# If enabled, only IPs (or CIDR's) listed in the file /etc/csf/ui/ui.allow will\n# be allowed to login to the UI. The UI_ALLOW setting does not refer to any of\n# the csf/lfd allow or ignore files, e.g. csf.allow, csf.ignore, etc.\n#\n# For security reasons, you should always enable this option and use ui.allow\nUI_ALLOW = \"1\"\n\n# If enabled, this option will trigger an iptables block through csf after\n# UI_RETRY login failures\n#\n# 0 = no block;1 = perm block;nn=temp block for nn secs\nUI_BLOCK = \"1\"\n\n# This controls what email alerts are sent with regards to logins to the UI. It\n# uses the uialert.txt template\n#\n# 4 = login success + login failure/ban/block + login attempts\n# 3 = login success + login failure/ban/block\n# 2 = login failure/ban/block\n# 1 = login ban/block\n# 0 = disabled\nUI_ALERT = \"4\"\n\n# This is the SSL cipher list that the Integrated UI will negotiate from\nUI_CIPHER = \"ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH\"\n\n# This is the SSL protocol version used. 
See IO::Socket::SSL if you wish to\n# change this and to understand the implications of changing it\nUI_SSL_VERSION = \"SSLv23:!SSLv2\"\n\n# If cxs is installed then enabling this option will provide a dropdown box to\n# switch between applications\nUI_CXS = \"0\"\n\n# There is a modified installation of ConfigServer Explorer (cse) provided with\n# the csf distribution. If this option is enabled it will provide a dropdown\n# box to switch between applications\nUI_CSE = \"0\"\n\n###############################################################################\n# SECTION:Messenger service\n###############################################################################\n# Messenger service. This feature allows the display of a message to a blocked\n# connecting IP address to inform the user that they are blocked in the\n# firewall. This can help when users get themselves blocked, e.g. due to\n# multiple login failures. The service is provided by two daemons running on\n# ports providing either an HTML or TEXT message\n#\n# This feature does not work on servers that do not have the iptables module\n# ipt_REDIRECT loaded. Typically, this will be with MONOLITHIC kernels. VPS\n# server admins should check with their VPS host provider that the iptables\n# module is included\n#\n# IPv6 will need the IO::Socket::INET6 perl module\n#\n# For further information on features and limitations refer to the csf\n# readme.txt\n#\n# Note: Run /etc/csf/csftest.pl to check whether this option will function on\n# this server\n#\n# 1 to enable, 0 to disable\nMESSENGER = \"0\"\n\n# Provide this service to temporary IP address blocks\nMESSENGER_TEMP = \"1\"\n\n# Provide this service to permanent IP address blocks\nMESSENGER_PERM = \"1\"\n\n# User account to run the service servers under. 
We recommend creating a\n# specific non-priv, non-shell account for this purpose\n#\n# Note: When using MESSENGERV2, this account must NOT be a valid control panel\n# account, it must be created manually as explained in the csf readme.txt\nMESSENGER_USER = \"csf\"\n\n# This option points to the file(s) containing the Apache VirtualHost SSL\n# definitions. This can be a file glob if there are multiple files to search.\n# Only Apache v2 SSL VirtualHost definitions are supported\n#\n# This is used by MESSENGERV1 and MESSENGERV2 only\nMESSENGER_HTTPS_CONF = \"/etc/httpd/conf.d/ssl.conf\"\n\n# The following options can be specified to provide a default fallback\n# certificate to be used if either SNI is not supported or a hosted domain does\n# not have an SSL certificate. If a fallback is not provided, one of the certs\n# obtained from MESSENGER_HTTPS_CONF will be used\n#\n# This is used by MESSENGERV1 and MESSENGERV2 only\nMESSENGER_HTTPS_KEY = \"/etc/pki/tls/private/localhost.key\"\nMESSENGER_HTTPS_CRT = \"/etc/pki/tls/certs/localhost.crt\"\n\n# Set this to the port that will receive the HTTPS HTML message. You should\n# configure this port to be >1023 and different from the TEXT and HTML port. Do\n# NOT enable access to this port in TCP_IN. This option requires the perl\n# module IO::Socket::SSL at a version level that supports SNI (1.83+).\n# Additionally the version of openssl on the server must also support SNI\n#\n# The option uses existing SSL certificates on the server for each domain to\n# maintain a secure connection without browser warnings. It uses SNI to choose\n# the correct certificate to use for each client connection\n#\n# Warning: On some servers the amount of memory used by the HTTPS MESSENGER\n# service can become significant depending on various factors associated with\n# the use of IO::Socket::SSL including the number of domains and certificates\n# served. 
This is normally only an issue if using MESSENGERV1\nMESSENGER_HTTPS = \"8887\"\n\n# This comma separated list contains the HTTPS HTML ports that will be\n# redirected for the blocked IP address. If you are using per application\n# blocking (LF_TRIGGER) then only the relevant block port will be redirected\n# to the messenger port\n#\n# Recommended setting \"443\" plus any end-user control panel SSL ports. So, for\n# cPanel: \"443,2083,2096\"\nMESSENGER_HTTPS_IN = \"\"\n\n# Set this to the port that will receive the HTML message. You should configure\n# this port to be >1023 and different from the TEXT port. Do NOT enable access\n# to this port in TCP_IN\nMESSENGER_HTML = \"8888\"\n\n# This comma separated list contains the HTML ports that will be redirected\n# for the blocked IP address. If you are using per application blocking\n# (LF_TRIGGER) then only the relevant block port will be redirected to the\n# messenger port\nMESSENGER_HTML_IN = \"80,2082,2095\"\n\n# Set this to the port that will receive the TEXT message. You should configure\n# this port to be >1023 and different from the HTML port. Do NOT enable access\n# to this port in TCP_IN\nMESSENGER_TEXT = \"8889\"\n\n# This comma separated list contains the TEXT ports that will be redirected\n# for the blocked IP address. If you are using per application blocking\n# (LF_TRIGGER) then only the relevant block port will be redirected to the\n# messenger port\nMESSENGER_TEXT_IN = \"21\"\n\n# These settings limit the rate at which connections can be made to the\n# messenger service servers. Its intention is to provide protection from\n# attacks or excessive connections to the servers. 
If the rate is exceeded then\n# iptables will revert for the duration to the normal blocking activity\n#\n# See the iptables man page for the correct --limit rate syntax\nMESSENGER_RATE = \"100/s\"\nMESSENGER_BURST = \"150\"\n\n# MESSENGERV1 only:\n#------------------------------------------------------------------------------\n# This is the maximum concurrent connections allowed to each service server\n#\n# Note: This number should be increased to cater for the number of local images\n# served by this page, including one for favicon.ico. This is because each\n# image displayed counts as an additional connection\nMESSENGER_CHILDREN = \"10\"\n\n# This option ignores ServerAlias definitions that begin with \"mail.\". This\n# can help reduce memory usage on systems that do not require the use of\n# MESSENGER_HTTPS on those subdomains\n#\n# Set to 0 to include these ServerAlias definitions\nMESSENGER_HTTPS_SKIPMAIL = \"1\"\n\n# MESSENGERV2 only:\n#------------------------------------------------------------------------------\n# MESSENGERV2. This option is available on cPanel servers running Apache v2.4+\n# under EA4.\n#\n# This uses the Apache http daemon to provide the web server functionality for\n# the MESSENGER HTML and HTTPS services. It uses a fraction of the resources\n# that the lfd inbuilt service uses and overcomes the memory overhead of using\n# the MESSENGER HTTPS service\n#\n# For more information consult readme.txt before enabling this option\n#MESSENGERV2 = \"0\"\n\n# MESSENGERV3 only:\n#------------------------------------------------------------------------------\n# MESSENGERV3. This option is available on any server running Apache v2.4+,\n# Litespeed or Openlitespeed\n#\n# This uses the web server http daemon to provide the web server functionality\n# for the MESSENGER HTML and HTTPS services. 
It uses a fraction of the\n# resources that the lfd inbuilt service uses and overcomes the memory overhead\n# of using the MESSENGER HTTPS service\n#\n# For more information consult readme.txt before enabling this option\nMESSENGERV3 = \"0\"\n\n# This is the file or directory where the additional web server configuration\n# file should be included\nMESSENGERV3LOCATION = \"/etc/httpd/conf.d/\"\n\n# This is the command to restart the web server\nMESSENGERV3RESTART = \"service httpd restart\"\n\n# This is the command to test the validity of the web server configuration. If\n# using Litespeed, set to \"\"\nMESSENGERV3TEST = \"/usr/sbin/apachectl -t\"\n\n# This must be set to the main httpd.conf file for either Apache or Litespeed\nMESSENGERV3HTTPS_CONF = \"/etc/httpd/conf/httpd.conf\"\n\n# This can be set to either:\n# \"apache\" - for servers running Apache v2.4+ or Litespeed using Apache\n# configuration\n# \"litespeed\" - for Litespeed or Openlitespeed\nMESSENGERV3WEBSERVER = \"apache\"\n\n# On creation, set the MESSENGER_USER public_html directory permissions to\n# Note: If you precreate this directory the following setting will be ignored\nMESSENGERV3PERMS = \"711\"\n\n# On creation, set the MESSENGER_USER public_html directory group user to\n# Note: If you precreate this directory the following setting will be ignored\nMESSENGERV3GROUP = \"apache\"\n\n# This is the web server configuration to allow PHP scripts to run. If left\n# empty, the MESSENGER service will try to configure this. If this does not\n# work, this should be set as an \"Include /path/to/csf_php.conf\" or similar\n# file which must contain appropriate web server configuration to allow PHP\n# scripts to run. This line will be included within each MESSENGER VirtualHost\n# container. 
This will replace the [MESSENGERV3PHPHANDLER] line from the csf\n# webserver template files\nMESSENGERV3PHPHANDLER = \"\"\n\n# RECAPTCHA:\n#------------------------------------------------------------------------------\n# The RECAPTCHA options provide a way for end-users that have blocked\n# themselves in the firewall to unblock themselves.\n#\n# A valid Google ReCAPTCHA (v2) key set is required for this feature from:\n# https://www.google.com/recaptcha/intro/index.html\n#\n# When configuring a new reCAPTCHA API key set you must ensure that the option\n# for \"Domain Name Validation\" is unticked so that the same reCAPTCHA can be\n# used for all domains hosted on the server. lfd then checks that the hostname\n# of the request resolves to an IP on this server\n#\n# This feature requires the installation of the LWP::UserAgent perl module (see\n# option URLGET for more details)\n#\n# The template used for this feature is /etc/csf/messenger/index.recaptcha.html\n#\n# Note: An unblock will fail if the end-users IP is located in a netblock,\n# blocklist or CC_* deny entry\nRECAPTCHA_SITEKEY = \"\"\nRECAPTCHA_SECRET = \"\"\n\n# Send an email when an IP address successfully attempts to unblock themselves.\n# This does not necessarily mean the IP was unblocked, only that the\n# post-recaptcha unblock request was attempted\n#\n# Set to \"0\" to disable\nRECAPTCHA_ALERT = \"1\"\n\n# If the server uses NAT then resolving the hostname to hosted IPs will likely\n# not succeed. In that case, the external IP addresses must be listed as comma\n# separated list here\nRECAPTCHA_NAT = \"\"\n\n###############################################################################\n# SECTION:lfd Clustering\n###############################################################################\n# lfd Clustering. 
This allows the configuration of an lfd cluster environment\n# where a group of servers can share blocks and configuration option changes.\n# Included are CLI and UI options to send requests to the cluster.\n#\n# See the readme.txt file for more information and details on setup and\n# security risks.\n#\n# Set this to a comma separated list of cluster member IP addresses to send\n# requests to. Alternatively, it can be set to the full path of a file that\n# will read in one IP per line, e.g.:\n# \"/etc/csf/cluster_sendto.txt\"\nCLUSTER_SENDTO = \"\"\n\n# Set this to a comma separated list of cluster member IP addresses to receive\n# requests from. Alternatively, it can be set to the full path of a file that\n# will read in one IP per line, e.g.:\n# \"/etc/csf/cluster_recvfrom.txt\"\nCLUSTER_RECVFROM = \"\"\n\n# IP address of the master node in the cluster allowed to send CLUSTER_CONFIG\n# changes\nCLUSTER_MASTER = \"\"\n\n# If this is a NAT server, set this to the public IP address of this server\nCLUSTER_NAT = \"\"\n\n# If a cluster member should send requests on an IP other than the default IP,\n# set it here\nCLUSTER_LOCALADDR = \"\"\n\n# Cluster communication port (must be the same on all member servers). There\n# is no need to open this port in the firewall as csf will automatically add\n# in and out bound rules to allow communication between cluster members\nCLUSTER_PORT = \"7777\"\n\n# This is a secret key used to encrypt cluster communications using the\n# Blowfish algorithm. It should be between 8 and 56 characters long,\n# preferably > 20 random characters\n# 56 chars:    01234567890123456789012345678901234567890123456789012345\nCLUSTER_KEY = \"\"\n\n# Automatically send lfd blocks to all members of CLUSTER_SENDTO. 
Those\n# servers must have this server's IP address listed in their CLUSTER_RECVFROM\n#\n# Set to 0 to disable this feature\nCLUSTER_BLOCK = \"0\"\n\n# This option allows the enabling and disabling of the Cluster configuration\n# changing options --cconfig, --cconfigr, --cfile, --ccfile sent from the\n# CLUSTER_MASTER server\n#\n# Set this option to 1 to allow Cluster configurations to be received\nCLUSTER_CONFIG = \"0\"\n\n# Maximum number of child processes to listen on. High blocking rates or large\n# clusters may need to increase this\nCLUSTER_CHILDREN = \"10\"\n\n###############################################################################\n# SECTION:Port Knocking\n###############################################################################\n# Port Knocking. This feature allows port knocking to be enabled on multiple\n# ports with a variable number of knocked ports and a timeout. There must be a\n# minimum of 3 ports to knock for an entry to be valid\n#\n# See the following for information regarding Port Knocking:\n# http://www.portknocking.org/\n#\n# This feature does not work on servers that do not have the iptables module\n# ipt_recent loaded. Typically, this will be with MONOLITHIC kernels. VPS\n# server admins should check with their VPS host provider that the iptables\n# module is included\n#\n# For further information and syntax refer to the Port Knocking section of the\n# csf readme.txt\n#\n# Note: Run /etc/csf/csftest.pl to check whether this option will function on\n# this server\n#\n# openport;protocol;timeout;kport1;kport2;kport3[...;kportN],...\n# e.g.: 22;TCP;20;100;200;300;400\nPORTKNOCKING = \"\"\n\n# Enable PORTKNOCKING logging by iptables\nPORTKNOCKING_LOG = \"1\"\n\n# Send an email alert if the PORTKNOCKING port is opened. PORTKNOCKING_LOG must\n# also be enabled to use this option\n#\n# SECURITY NOTE: This option is affected by the RESTRICT_SYSLOG option. 
Read\n# this file about RESTRICT_SYSLOG before enabling this option:\nPORTKNOCKING_ALERT = \"1\"\n\n###############################################################################\n# SECTION:Log Scanner\n###############################################################################\n# Log Scanner. This feature will send out an email summary of the log lines of\n# each log listed in /etc/csf/csf.logfiles. All lines will be reported unless\n# they match a regular expression in /etc/csf/csf.logignore\n#\n# File globbing is supported for logs listed in /etc/csf/csf.logfiles. However,\n# be aware that the more files lfd has to track, the greater the performance\n# hit. Note: File globs are only evaluated when lfd is started\n#\n# Note: lfd builds the report continuously from lines logged after lfd has\n# started, so any lines logged when lfd is not running will not be reported\n# (e.g. during reboot). If lfd is restarted, then the report will include any\n# lines logged during the previous lfd logging period that weren't reported\n#\n# 1 to enable, 0 to disable\nLOGSCANNER = \"0\"\n\n# This is the interval each report will be sent based on the logalert.txt\n# template\n#\n# The interval can be set to:\n# \"hourly\" - sent on the hour\n# \"daily\"  - sent at midnight (00:00)\n# \"manual\" - sent whenever \"csf --logrun\" is run. This allows for scheduling\n#            via cron job\nLOGSCANNER_INTERVAL = \"hourly\"\n\n# Report Style\n# 1 = Separate chronological log lines per log file\n# 2 = Simply chronological log of all lines\nLOGSCANNER_STYLE = \"1\"\n\n# Send the report email even if no log lines reported\n# 1 to enable, 0 to disable\nLOGSCANNER_EMPTY = \"1\"\n\n# Maximum number of lines in the report before it is truncated. This is to\n# prevent log lines flooding resulting in an excessively large report. 
This\n# might need to be increased if you choose a daily report\nLOGSCANNER_LINES = \"5000\"\n\n###############################################################################\n# SECTION:Statistics Settings\n###############################################################################\n# Statistics\n#\n# Some of the Statistics output requires the gd graphics library and the\n# GD::Graph perl module with all dependent modules to be installed for the UI\n# for them to be displayed\n#\n# This option enables statistical data gathering\nST_ENABLE = \"1\"\n\n# This option determines how many iptables log lines to store for reports\nST_IPTABLES = \"100\"\n\n# This option indicates whether rDNS and CC lookups are performed at the time\n# the log line is recorded (this is not performed when viewing the reports)\n#\n# Warning: If DROP_IP_LOGGING is enabled and there are frequent iptables hits,\n# then enabling this setting could cause serious performance problems\nST_LOOKUP = \"0\"\n\n# This option will gather basic system statistics. Through the UI it displays\n# various graphs for disk, cpu, memory, network, etc usage over 4 intervals:\n#  . Hourly (per minute)\n#  . 24 hours (per minute)\n#  . 7 days (per minute averaged over an hour)\n#  . 30 days (per minute averaged over an hour) - user definable\n# The data is stored in /var/lib/csf/stats/system and the option requires the\n# perl GD::Graph module\n#\n# Note: Disk graphs do not show on Virtuozzo/OpenVZ servers as the kernel on\n# those systems does not store the required information in /proc/diskstats\n# On new installations or when enabling this option it will take time for these\n# graphs to be populated\nST_SYSTEM = \"0\"\n\n# Set the maximum days to collect statistics for. The default is 30 days, the\n# more data that is collected the longer it will take for each of the graphs to\n# be generated\nST_SYSTEM_MAXDAYS = \"30\"\n\n# If ST_SYSTEM is enabled, then these options can collect MySQL statistical\n# data. 
To use this option the server must have the perl modules DBI and\n# DBD::mysql installed.\n#\n# Set this option to \"0\" to disable MySQL data collection\nST_MYSQL = \"0\"\n\n# The following options are for authentication for MySQL data collection. If\n# the password is left blank and the user set to \"root\" then the procedure will\n# look for authentication data in /root/.my.cnf. Otherwise, you will need to\n# provide a MySQL username and password to collect the data. Any MySQL user\n# account can be used\nST_MYSQL_USER = \"root\"\nST_MYSQL_PASS = \"\"\nST_MYSQL_HOST = \"localhost\"\n\n# If ST_SYSTEM is enabled, then this option can collect Apache statistical data\n# The value for PT_APACHESTATUS must be correctly set\nST_APACHE = \"0\"\n\n# The following options measure disk write performance using dd (location set\n# via the DD setting). It creates a 64MB file called /var/lib/dd_write_test and\n# the statistics will plot the MB/s response time of the disk. As this is an IO\n# intensive operation, it may not be prudent to run this test too often, so by\n# default it is only run every 5 minutes and the result duplicated for each\n# intervening minute for the statistics\n#\n# This is not necessarily a good measure of disk performance, primarily because\n# the measurements are for relatively small amounts of data over a small amount\n# of time. To properly test disk performance there are a variety of tools\n# available that should be run for extended periods of time to obtain an\n# accurate measurement. This metric is provided to give an idea of how the disk\n# is performing over time\n#\n# Note: There is a 15 second timeout performing the check\n#\n# Set to 0 to disable, 1 to enable\nST_DISKW = \"0\"\n\n# The number of minutes that elapse between tests. Default is 5, minimum is 1.\nST_DISKW_FREQ = \"5\"\n\n# This is the command line passed to dd. 
If you are familiar with dd, or wish\n# to move the output file (of) to a different disk, then you can alter this\n# command. Take great care when making any changes to this command as it is\n# very easy to overwrite a disk using dd if you make a mistake\nST_DISKW_DD = \"if=/dev/zero of=/etc/csf/dd_test bs=1MB count=64 conv=fdatasync\"\n\n###############################################################################\n# SECTION:Docker Settings\n###############################################################################\n# This section provides the configuration of iptables rules to allow Docker\n# containers to communicate through the host. If the generated rules do not\n# work with your setup you will have to use a /etc/csf/csfpost.sh file and add\n# your own iptables configuration instead\n#\n# 1 to enable, 0 to disable\nDOCKER = \"0\"\n\n# The network device on the host\nDOCKER_DEVICE = \"docker0\"\n\n# Docker container IPv4 range\nDOCKER_NETWORK4 = \"172.17.0.0/16\"\n\n# Docker container IPv6 range. IPV6 must be enabled and the IPv6 nat table\n# available (see IPv6 section). 
Leave blank to disable\nDOCKER_NETWORK6 = \"2001:db8:1::/64\"\n\n###############################################################################\n# SECTION:OS Specific Settings\n###############################################################################\n# Binary locations\nIPTABLES = \"/usr/sbin/iptables\"\nIPTABLES_SAVE = \"/usr/sbin/iptables-save\"\nIPTABLES_RESTORE = \"/usr/sbin/iptables-restore\"\nIP6TABLES = \"/usr/sbin/ip6tables\"\nIP6TABLES_SAVE = \"/usr/sbin/ip6tables-save\"\nIP6TABLES_RESTORE = \"/usr/sbin/ip6tables-restore\"\nMODPROBE = \"/sbin/modprobe\"\nIFCONFIG = \"/sbin/ifconfig\"\nSENDMAIL = \"/usr/sbin/sendmail\"\nPS = \"/bin/ps\"\nVMSTAT = \"/usr/bin/vmstat\"\nNETSTAT = \"/bin/netstat\"\nLS = \"/bin/ls\"\nMD5SUM = \"/usr/bin/md5sum\"\nTAR = \"/bin/tar\"\nCHATTR = \"/usr/bin/chattr\"\nUNZIP = \"/usr/bin/unzip\"\nGUNZIP = \"/bin/gunzip\"\nDD = \"/bin/dd\"\nTAIL = \"/usr/bin/tail\"\nGREP = \"/bin/grep\"\nZGREP = \"/bin/zgrep\"\nIPSET = \"/usr/sbin/ipset\"\nSYSTEMCTL = \"/usr/bin/systemctl\"\nHOST = \"/usr/bin/host\"\nIP = \"/bin/ip\"\nCURL = \"/usr/bin/curl\"\nWGET = \"/usr/bin/wget\"\n\n# Log file locations\n#\n# File globbing is allowed for the following logs. 
However, be aware that the\n# more files lfd has to track, the greater the performance hit\n#\n# Note: File globs are only evaluated when lfd is started\n#\nHTACCESS_LOG = \"/var/log/nginx/error.log\"\nMODSEC_LOG = \"/var/log/nginx/error.log\"\nSSHD_LOG = \"/var/log/auth.log\"\nSU_LOG = \"/var/log/auth.log\"\nSUDO_LOG = \"/var/log/auth.log\"\nFTPD_LOG = \"/var/log/messages\"\nSMTPAUTH_LOG = \"/var/log/mail.log\"\nPOP3D_LOG = \"/var/log/mail.log\"\nIMAPD_LOG = \"/var/log/mail.log\"\nIPTABLES_LOG = \"/var/log/iptables.log\"\nSUHOSIN_LOG = \"/var/log/messages\"\nBIND_LOG = \"/var/log/messages\"\nSYSLOG_LOG = \"/var/log/syslog\"\nWEBMIN_LOG = \"/var/log/auth.log\"\nCUSTOM1_LOG = \"/var/log/cron.log\"\n\n# The following are comma separated lists used if LF_SELECT is enabled,\n# otherwise they are not used. They are derived from the application returned\n# from a regex match in /usr/local/csf/bin/regex.pm\n#\n# All ports default to tcp blocks. To specify udp or tcp use the format:\n# port;protocol,port;protocol,... For example, \"53;udp,53;tcp\"\nPORTS_pop3d = \"110,995\"\nPORTS_imapd = \"143,993\"\nPORTS_htpasswd = \"80,443\"\nPORTS_mod_security = \"80,443\"\nPORTS_mod_qos = \"80,443\"\nPORTS_symlink = \"80,443\"\nPORTS_suhosin = \"80,443\"\nPORTS_cxs = \"80,443\"\nPORTS_bind = \"53\"\nPORTS_ftpd = \"20,21\"\nPORTS_webmin = \"10000\"\nPORTS_smtpauth = \"25,465,587\"\nPORTS_eximsyntax = \"25,465,587\"\n# This list is replaced, if present, by \"Port\" definitions in\n# /etc/ssh/sshd_config\nPORTS_sshd = \"22\"\n\n# This configuration is for use with generic Linux servers, do not change the\n# following setting:\nGENERIC = \"1\"\n\n# For internal use only. You should not enable this option as it could cause\n# instability in csf and lfd\nDEBUG = \"0\"\n###############################################################################\n"
  },
  {
    "path": "aegir/conf/var/galera.cnf",
    "content": "[mysqld]\n\n###\n### Galera configuration template\n### /etc/mysql/conf.d/galera.cnf\n###\n\n### Mandatory for Galera\n#\nbinlog_format=ROW\n#default_storage_engine=InnoDB\ninnodb_autoinc_lock_mode=2\n\n### Recommended for Galera\n#\ninnodb_flush_log_at_trx_commit=0\nbinlog_row_image=minimal\nperformance_schema=OFF\n\n### Basic Galera Settings\n#\n# wsrep_provider=/usr/lib/galera/libgalera_smm.so\n# wsrep_cluster_name=\"galera_cluster\"\n# wsrep_cluster_address=\"gcomm://192.168.0.1,192.168.0.2,192.168.0.3,...?pc.wait_prim=no\"\n# wsrep_sst_auth=wsrep:sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n# wsrep_provider_options='socket.checksum=1'\n\n### Optional Galera Settings\n#\n# wsrep_node_address=\"192.168.0.1\"\n# wsrep_node_name=\"galera_node1\"\n# wsrep_slave_threads=8\n\n### Optional Memory Settings for Galera\n#\n# gcs.recv_q_hard_limit=4G\n# gcs.recv_q_soft_limit=2G\n# gcs.max_throttle=0.25T\n\n### Optional MyISAM Support in Galera\n#\n# wsrep_replicate_myisam=1\n"
  },
  {
    "path": "aegir/conf/var/get.htaccess.txt",
    "content": "#\n# Apache/PHP/Drupal settings:\n#\n\n# Protect files and directories from prying eyes.\n<FilesMatch \"\\.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\\.php)?|xtmpl)(~|\\.sw[op]|\\.bak|\\.orig|\\.save)?$|^(\\..*|Entries.*|Repository|Root|Tag|Template|composer\\.(json|lock))$|^#.*#$|\\.php(~|\\.sw[op]|\\.bak|\\.orig\\.save)$\">\n  Order allow,deny\n</FilesMatch>\n\n# Don't show directory listings for URLs which map to a directory.\nOptions -Indexes\n\n# Follow symbolic links in this directory.\nOptions +FollowSymLinks\n\n# Make Drupal handle any 404 errors.\nErrorDocument 404 /index.php\n\n# Set the default handler.\nDirectoryIndex index.php index.html index.htm\n\n# Override PHP settings that cannot be changed at runtime. See\n# sites/default/default.settings.php and drupal_environment_initialize() in\n# includes/bootstrap.inc for settings that can be changed at runtime.\n\n# PHP 5, Apache 1 and 2.\n<IfModule mod_php5.c>\n  php_flag magic_quotes_gpc                 off\n  php_flag magic_quotes_sybase              off\n  php_flag register_globals                 off\n  php_flag session.auto_start               off\n  php_value mbstring.http_input             pass\n  php_value mbstring.http_output            pass\n  php_flag mbstring.encoding_translation    off\n</IfModule>\n\n# Requires mod_expires to be enabled.\n<IfModule mod_expires.c>\n  # Enable expirations.\n  ExpiresActive On\n\n  # Cache all files for 2 weeks after access (A).\n  ExpiresDefault A1209600\n\n  <FilesMatch \\.php$>\n    # Do not allow PHP scripts to be cached unless they explicitly send cache\n    # headers themselves. Otherwise all scripts would have to overwrite the\n    # headers set by mod_expires if they want another caching behavior. 
This may\n    # fail if an error occurs early in the bootstrap process, and it may cause\n    # problems if a non-Drupal PHP file is installed in a subdirectory.\n    ExpiresActive Off\n  </FilesMatch>\n</IfModule>\n\n# Various rewrite rules.\n<IfModule mod_rewrite.c>\n  RewriteEngine on\n\n  # Set \"protossl\" to \"s\" if we were accessed via https://.  This is used later\n  # if you enable \"www.\" stripping or enforcement, in order to ensure that\n  # you don't bounce between http and https.\n  RewriteRule ^ - [E=protossl]\n  RewriteCond %{HTTPS} on\n  RewriteRule ^ - [E=protossl:s]\n\n  # Make sure Authorization HTTP header is available to PHP\n  # even when running as CGI or FastCGI.\n  RewriteRule ^ - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]\n\n  # Block access to \"hidden\" directories whose names begin with a period. This\n  # includes directories used by version control systems such as Subversion or\n  # Git to store control files. Files whose names begin with a period, as well\n  # as the control files used by CVS, are protected by the FilesMatch directive\n  # above.\n  #\n  # NOTE: This only works when mod_rewrite is loaded. Without mod_rewrite, it is\n  # not possible to block access to entire directories from .htaccess, because\n  # <DirectoryMatch> is not allowed here.\n  #\n  # If you do not have mod_rewrite installed, you should remove these\n  # directories from your webroot or otherwise protect them from being\n  # downloaded.\n  RewriteRule \"(^|/)\\.\" - [F]\n\n  # If your site can be accessed both with and without the 'www.' prefix, you\n  # can use one of the following settings to redirect users to your preferred\n  # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:\n  #\n  # To redirect all users to access the site WITH the 'www.' prefix,\n  # (http://example.com/... will be redirected to http://www.example.com/...)\n  # uncomment the following:\n  # RewriteCond %{HTTP_HOST} .\n  # RewriteCond %{HTTP_HOST} !^www\\. 
[NC]\n  # RewriteRule ^ http%{ENV:protossl}://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]\n  #\n  # To redirect all users to access the site WITHOUT the 'www.' prefix,\n  # (http://www.example.com/... will be redirected to http://example.com/...)\n  # uncomment the following:\n  # RewriteCond %{HTTP_HOST} ^www\\.(.+)$ [NC]\n  # RewriteRule ^ http%{ENV:protossl}://%1%{REQUEST_URI} [L,R=301]\n\n  # Modify the RewriteBase if you are using Drupal in a subdirectory or in a\n  # VirtualDocumentRoot and the rewrite rules are not working properly.\n  # For example if your site is at http://example.com/drupal uncomment and\n  # modify the following line:\n  # RewriteBase /drupal\n  #\n  # If your site is running in a VirtualDocumentRoot at http://example.com/,\n  # uncomment the following line:\n  # RewriteBase /\n\n# AIS: Adaptive Image Style\n  RewriteBase /\n  RewriteCond %{REQUEST_URI} ^(.+)/files/styles/adaptive/(.+)$\n  RewriteCond %{REQUEST_URI} !/modules/image/sample.png\n  RewriteCond %{HTTP_COOKIE} ais=([a-z0-9_-]+)\n  RewriteRule ^(.+)/files/styles/adaptive/(.+)$ $1/files/styles/%1/$2 [R=302,L]\n\n  ### BOOST START ###\n  AddDefaultCharset utf-8\n  <FilesMatch \"\\.((html|xml|json)|((html|xml|json)\\.gz))$\">\n    <IfModule mod_expires.c>\n      ExpiresDefault A1\n    </IfModule>\n    <IfModule mod_headers.c>\n      Header set Expires \"Tue, 24 Jan 1984 08:00:00 GMT\"\n      Header set Cache-Control \"no-store, no-cache, must-revalidate, post-check=0, pre-check=0\"\n    </IfModule>\n  </FilesMatch>\n  <IfModule mod_mime.c>\n    AddCharset utf-8 .html\n    AddCharset utf-8 .xml\n    AddCharset utf-8 .json\n    AddCharset utf-8 .css\n    AddCharset utf-8 .js\n    AddEncoding gzip .gz\n  </IfModule>\n  <FilesMatch \"\\.(html|html\\.gz)$\">\n    ForceType text/html\n  </FilesMatch>\n  <FilesMatch \"\\.(xml|xml\\.gz)$\">\n    ForceType text/xml\n  </FilesMatch>\n  <FilesMatch \"\\.((json|js)|((json|js)\\.gz))$\">\n    ForceType text/javascript\n  </FilesMatch>\n  
<FilesMatch \"\\.(css|css\\.gz)$\">\n    ForceType text/css\n  </FilesMatch>\n\n  # Gzip Cookie Test\n  RewriteRule boost-gzip-cookie-test\\.html  cache/perm/boost-gzip-cookie-test\\.html\\.gz [L,T=text/html]\n\n  # GZIP - Cached css & js files\n  RewriteCond %{HTTP_COOKIE} !(boost-gzip)\n  RewriteCond %{HTTP:Accept-encoding} !gzip\n  RewriteRule .* - [S=2]\n  RewriteCond %{DOCUMENT_ROOT}/cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.css\\.gz -s\n  RewriteRule .* cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.css\\.gz [L,QSA,T=text/css]\n  RewriteCond %{DOCUMENT_ROOT}/cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.js\\.gz -s\n  RewriteRule .* cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.js\\.gz [L,QSA,T=text/javascript]\n\n  # NORMAL - Cached css & js files\n  RewriteCond %{DOCUMENT_ROOT}/cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.css -s\n  RewriteRule .* cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.css [L,QSA,T=text/css]\n  RewriteCond %{DOCUMENT_ROOT}/cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.js -s\n  RewriteRule .* cache/perm/%{HTTP_HOST}%{REQUEST_URI}_\\.js [L,QSA,T=text/javascript]\n\n  # Caching for anonymous users\n  # Skip boost IF not get request OR uri has wrong dir OR cookie is set OR https request\n  RewriteCond %{REQUEST_METHOD} !^(GET|HEAD)$ [OR]\n  RewriteCond %{REQUEST_URI} (^/(admin|cache|misc|modules|sites|system|openid|themes|node/add))|(/(comment/reply|edit|user|user/(login|password|register))$) [OR]\n  RewriteCond %{HTTP_COOKIE} DRUPAL_UID [OR]\n  RewriteCond %{HTTPS} on\n  RewriteRule .* - [S=7]\n\n  # GZIP\n  RewriteCond %{HTTP_COOKIE} !(boost-gzip)\n  RewriteCond %{HTTP:Accept-encoding} !gzip\n  RewriteRule .* - [S=3]\n  RewriteCond %{DOCUMENT_ROOT}/cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.html\\.gz -s\n  RewriteRule .* cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.html\\.gz [L,T=text/html]\n  RewriteCond %{DOCUMENT_ROOT}/cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.xml\\.gz -s\n  RewriteRule .* 
cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.xml\\.gz [L,T=text/xml]\n  RewriteCond %{DOCUMENT_ROOT}/cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.json\\.gz -s\n  RewriteRule .* cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.json\\.gz [L,T=text/javascript]\n\n  # NORMAL\n  RewriteCond %{DOCUMENT_ROOT}/cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.html -s\n  RewriteRule .* cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.html [L,T=text/html]\n  RewriteCond %{DOCUMENT_ROOT}/cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.xml -s\n  RewriteRule .* cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.xml [L,T=text/xml]\n  RewriteCond %{DOCUMENT_ROOT}/cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.json -s\n  RewriteRule .* cache/normal/%{HTTP_HOST}%{REQUEST_URI}_%{QUERY_STRING}\\.json [L,T=text/javascript]\n\n  ### BOOST END ###\n\n# $Id: boosted2.txt,v 1.1.2.24 2010/03/17 05:43:15 mikeytown2 Exp $\n\n  # Pass all requests not referring directly to files in the filesystem to\n  # index.php. 
Clean URLs are handled in drupal_environment_initialize().\n  RewriteCond %{REQUEST_FILENAME} !-f\n  RewriteCond %{REQUEST_FILENAME} !-d\n  RewriteCond %{REQUEST_URI} !=/favicon.ico\n  RewriteRule ^ index.php [L]\n\n  # Rules to correctly serve gzip compressed CSS and JS files.\n  # Requires both mod_rewrite and mod_headers to be enabled.\n  <IfModule mod_headers.c>\n    # Serve gzip compressed CSS files if they exist and the client accepts gzip.\n    RewriteCond %{HTTP:Accept-encoding} gzip\n    RewriteCond %{REQUEST_FILENAME}\\.gz -s\n    RewriteRule ^(.*)\\.css $1\\.css\\.gz [QSA]\n\n    # Serve gzip compressed JS files if they exist and the client accepts gzip.\n    RewriteCond %{HTTP:Accept-encoding} gzip\n    RewriteCond %{REQUEST_FILENAME}\\.gz -s\n    RewriteRule ^(.*)\\.js $1\\.js\\.gz [QSA]\n\n    # Serve correct content types, and prevent mod_deflate double gzip.\n    RewriteRule \\.css\\.gz$ - [T=text/css,E=no-gzip:1]\n    RewriteRule \\.js\\.gz$ - [T=text/javascript,E=no-gzip:1]\n\n    <FilesMatch \"(\\.js\\.gz|\\.css\\.gz)$\">\n      # Serve correct encoding type.\n      Header set Content-Encoding gzip\n      # Force proxies to cache gzipped & non-gzipped css/js files separately.\n      Header append Vary Accept-Encoding\n    </FilesMatch>\n  </IfModule>\n</IfModule>\n"
  },
  {
    "path": "aegir/conf/var/logrotate.d.rsyslog.conf",
    "content": "/var/log/syslog\n/var/log/mail.info\n/var/log/mail.warn\n/var/log/mail.err\n/var/log/mail.log\n/var/log/daemon.log\n/var/log/mysql-notices.log\n/var/log/kern.log\n/var/log/iptables.log\n/var/log/auth.log\n/var/log/user.log\n/var/log/lpr.log\n/var/log/cron.log\n/var/log/debug\n/var/log/messages\n{\n        rotate 4\n        weekly\n        missingok\n        notifempty\n        compress\n        delaycompress\n        sharedscripts\n        postrotate\n                /usr/lib/rsyslog/rsyslog-rotate\n        endscript\n}\n"
  },
  {
    "path": "aegir/conf/var/my.cnf.txt",
    "content": "[client]\nport                    = 3306\nsocket                  = /run/mysqld/mysqld.sock\ndefault-character-set   = utf8mb4\n\n[mysqld]\nuser                    = mysql\npid-file                = /run/mysqld/mysqld.pid\nsocket                  = /run/mysqld/mysqld.sock\nport                    = 3306\nbasedir                 = /usr\ndatadir                 = /var/lib/mysql\ntmpdir                  = /tmp\n#default_storage_engine  = InnoDB\n#mysql_native_password   = ON\n#mysqlx                  = OFF\nlc_messages_dir         = /usr/share/mysql\nlc_messages             = en_US\ncharacter_set_server    = utf8mb4\ncollation_server        = utf8mb4_unicode_ci\ntransaction-isolation   = READ-COMMITTED\ntransaction-read-only   = OFF\nskip-external-locking\nskip-name-resolve\nsecure_file_priv        = NULL\n#performance_schema      = OFF\n#performance_schema_instrument = 'wait/%=ON'\n#performance_schema_consumer_events_waits_current=OFF\n#performance_schema_consumer_events_waits_history=OFF\n#performance_schema_consumer_events_waits_history_long=OFF\n#symbolic-links          = 0\nconnect_timeout         = 60\njoin_buffer_size        = 1M\nkey_buffer_size         = 1024M\nmax_allowed_packet      = 256M\nmax_connect_errors      = 191\nmax_connections         = 292\nmax_user_connections    = 191\nmyisam_sort_buffer_size = 256K\nread_buffer_size        = 8M\nread_rnd_buffer_size    = 4M\nsort_buffer_size        = 256K\nbulk_insert_buffer_size = 256K\ntable_open_cache        = 2048\ntable_definition_cache  = 512\nthread_stack            = 256K\nthread_cache_size       = 128\nwait_timeout            = 3600\ntmp_table_size          = 64M\nmax_heap_table_size     = 128M\nlow_priority_updates    = 1\nconcurrent_insert       = 2\n#max_tmp_tables          = 16384\nserver-id               = 8\n#myisam-recover-options  = BACKUP\n#myisam_recover          = BACKUP\nsync_binlog             = 0\nopen_files_limit        = 294912\ninnodb_autoinc_lock_mode= 
2\ngroup_concat_max_len    = 10000\nskip-log-bin\n#log_bin                 = ON\n#max_binlog_size         = 256M\n#binlog_row_image        = minimal\n#binlog_format           = ROW\n#slow_query_log          = 1\n#long_query_time         = 10\n#slow_query_log_file     = /var/log/mysql/sql-slow-query.log\n#log_queries_not_using_indexes\n\n# --- Logging in Percona 5.7 -----------------------------------------------\n# log_syslog             = ON\n# log_syslog_facility    = daemon\n# log_syslog_include_pid = ON\n\n# --- Logging in Percona 8.x -----------------------------------------------\n# log_error               = /var/log/mysql/error.log\n# log_error_verbosity     = 2\n# log_error_services      = log_filter_internal; log_sink_internal; log_sink_syseventlog\n# syseventlog.facility    = daemon\n# syseventlog.include_pid = ON\n\n# * InnoDB\n#\n# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.\n# Read the manual for more InnoDB related options. There are many!\nsql_mode                = NO_ENGINE_SUBSTITUTION\n# you can't just change log file size, requires special procedure\n#innodb_log_file_size   = 50M\n#innodb_redo_log_capacity = 50M\ninnodb_buffer_pool_instances = 8\ninnodb_page_cleaners    = 8\ninnodb_lru_scan_depth   = 1024\ninnodb_buffer_pool_size = 181M\ninnodb_log_buffer_size  = 256M\ninnodb_file_per_table   = 1\n#innodb_use_native_aio   = 1\ninnodb_open_files       = 196608\ninnodb_io_capacity      = 1000\ninnodb_flush_method     = O_DIRECT\ninnodb_flush_log_at_trx_commit = 2\ninnodb_thread_concurrency = 0\ninnodb_lock_wait_timeout = 300\ninnodb_buffer_pool_dump_at_shutdown = 1\ninnodb_buffer_pool_load_at_startup = 1\n#innodb_buffer_pool_dump_pct = 100\n#innodb_buffer_pool_dump_now = ON\ninnodb_stats_on_metadata = OFF\ninnodb_adaptive_hash_index = 0\ninnodb_default_row_format = dynamic\ninnodb_doublewrite = 1\n#innodb_checksum_algorithm=crc32\ninnodb_flush_log_at_timeout = 1\n#innodb_force_recovery = 3\n#innodb_temp_data_file_path = 
ibtmp1:12M:autoextend:max:900M\n\n[mysqld_safe]\nsocket                  = /run/mysqld/mysqld.sock\nnice                    = 0\nopen_files_limit        = 294912\nsyslog\n\n[mysqldump]\nquick\nmax_allowed_packet      = 256M\nquote-names\n\n[mysql]\ndefault-character-set   = utf8mb4\nno-auto-rehash\n\n[myisamchk]\nkey_buffer              = 1M\nsort_buffer_size        = 256K\nread_buffer             = 4M\nwrite_buffer            = 4M\n\n[isamchk]\nkey_buffer              = 1M\nsort_buffer_size        = 256K\nread_buffer             = 4M\nwrite_buffer            = 4M\n\n[mysqlhotcopy]\ninteractive-timeout\n\n!includedir /etc/mysql/conf.d/\n"
  },
  {
    "path": "aegir/conf/var/mysql",
    "content": "#!/bin/bash\n### BEGIN INIT INFO\n# Provides:          mysql\n# Required-Start:    $network $remote_fs $syslog\n# Required-Stop:     $network $remote_fs $syslog\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: Start and stop Percona MySQL server\n# Description:       Manage the Percona MySQL server daemon\n### END INIT INFO\n\n# Path to Percona MySQL executable\nDAEMON=/usr/sbin/mysqld\nMYSQL_OPTS=\"\"\n\n# PID file location\nPIDFILE=/run/mysqld/mysqld.pid\n\n# Configuration file location\nMYCNF=/etc/mysql/my.cnf\n\n# Logging\nLOGFILE=/var/log/mysql/mysql.log\nERRORLOG=/var/log/mysql/error.log\n\n# Ensure the Percona MySQL directory exists\n[ -d /run/mysqld ] || mkdir -p /run/mysqld\nchown mysql:mysql /run/mysqld\n\n# Start Percona MySQL Server\nstart_mysql() {\n    echo \"Starting MySQL\"\n    if [ -f $PIDFILE ]; then\n        echo \"Percona MySQL is already running.\"\n        return 1\n    fi\n\n    # Start MySQL\n    $DAEMON --defaults-file=$MYCNF $MYSQL_OPTS > /dev/null 2>&1 &\n    sleep 5\n\n    if [ -f $PIDFILE ]; then\n        echo \"Percona MySQL started successfully.\"\n    else\n        echo \"Percona MySQL failed to start.\"\n        return 1\n    fi\n}\n\n# Stop Percona MySQL Server\nstop_mysql() {\n    echo \"Stopping MySQL\"\n    if [ ! 
-f $PIDFILE ]; then\n        echo \"Percona MySQL is not running.\"\n        return 1\n    fi\n\n    kill `cat $PIDFILE`\n    sleep 5\n\n    if [ -f $PIDFILE ]; then\n        echo \"Percona MySQL failed to stop.\"\n        return 1\n    else\n        echo \"Percona MySQL stopped successfully.\"\n    fi\n}\n\n# Restart Percona MySQL Server\nrestart_mysql() {\n    stop_mysql\n    start_mysql\n}\n\n# Status of Percona MySQL Server\nstatus_mysql() {\n    if [ -f $PIDFILE ]; then\n        echo \"Percona MySQL is running (PID: `cat $PIDFILE`).\"\n    else\n        echo \"Percona MySQL is not running.\"\n        return 1\n    fi\n}\n\ncase \"$1\" in\n    start)\n        start_mysql\n        ;;\n    stop)\n        stop_mysql\n        ;;\n    restart)\n        restart_mysql\n        ;;\n    status)\n        status_mysql\n        ;;\n    *)\n        echo \"Usage: $0 {start|stop|restart|status}\"\n        exit 1\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/conf/var/mysql-notices.conf",
    "content": "# /etc/rsyslog.d/mysql-notices.conf\nif $programname == 'mysqld' and $msg contains 'InnoDB: Stopping purge' then /var/log/mysql-notices.log\nif $programname == 'mysqld' and $msg contains 'InnoDB: Resuming purge' then /var/log/mysql-notices.log\n\n# Don't log these specific notices in syslog or daemon.log\nif $programname == 'mysqld' and $msg contains 'InnoDB: Stopping purge' then stop\nif $programname == 'mysqld' and $msg contains 'InnoDB: Resuming purge' then stop\n"
  },
  {
    "path": "aegir/conf/var/named.conf.options",
    "content": "options {\n\tdirectory \"/var/cache/bind\";\n\n\t// If there is a firewall between you and nameservers you want\n\t// to talk to, you may need to fix the firewall to allow multiple\n\t// ports to talk.  See http://www.kb.cert.org/vuls/id/800113\n\n\t// If your ISP provided one or more IP addresses for stable \n\t// nameservers, you probably want to use them as forwarders.  \n\t// Uncomment the following block, and insert the addresses replacing \n\t// the all-0's placeholder.\n\n\t// forwarders {\n\t// \t0.0.0.0;\n\t// };\n\n\tauth-nxdomain no;    # conform to RFC1035\n\tlisten-on-v6 { any; };\n\tlisten-on {\n\t\t127.0.1.1;\n\t\t};\n};\n"
  },
  {
    "path": "aegir/conf/var/rsyslog.conf",
    "content": "# /etc/rsyslog.conf configuration file for rsyslog\n#\n# For more information install rsyslog-doc and see\n# /usr/share/doc/rsyslog-doc/html/configuration/index.html\n\n\n#################\n#### MODULES ####\n#################\n\nmodule(load=\"imuxsock\") # provides support for local system logging\nmodule(load=\"imklog\")   # provides kernel logging support\n#module(load=\"immark\")  # provides --MARK-- message capability\n\n# provides UDP syslog reception\n#module(load=\"imudp\")\n#input(type=\"imudp\" port=\"514\")\n\n# provides TCP syslog reception\n#module(load=\"imtcp\")\n#input(type=\"imtcp\" port=\"514\")\n\n\n###########################\n#### GLOBAL DIRECTIVES ####\n###########################\n\n#\n# Use traditional timestamp format.\n# To enable high precision timestamps, comment out the following line.\n#\n$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat\n\n#\n# Set the default permissions for all log files.\n#\n$FileOwner root\n$FileGroup adm\n$FileCreateMode 0640\n$DirCreateMode 0755\n$Umask 0022\n\n#\n# Where to place spool and state files\n#\n$WorkDirectory /var/spool/rsyslog\n\n#\n# Include all config files in /etc/rsyslog.d/\n#\n$IncludeConfig /etc/rsyslog.d/*.conf\n\n\n###############\n#### RULES ####\n###############\n\n#\n# First some standard log files.  Log by facility.\n#\n\n# Direct all cron messages to /var/log/cron.log\nif $programname == 'CRON' then /var/log/cron.log\n& stop\n\n# Log auth and authpriv messages to /var/log/auth.log\nauth,authpriv.*        /var/log/auth.log\n\n*.*;kern.!info;\\\n    cron.!info;\\\n    mail.!info;\\\n    auth,authpriv.none     -/var/log/syslog\ndaemon.*\t\t\t-/var/log/daemon.log\nkern.*;kern.!info      -/var/log/kern.log\nlpr.*\t\t\t\t-/var/log/lpr.log\nmail.*\t\t\t\t-/var/log/mail.log\nuser.*\t\t\t\t-/var/log/user.log\nkern.info           -/var/log/iptables.log\ncron.info\t\t\t/var/log/cron.log\n\n#\n# Logging for the mail system.  
Split it up so that\n# it is easy to write scripts to parse these files.\n#\nmail.info\t\t\t-/var/log/mail.info\nmail.warn\t\t\t-/var/log/mail.warn\nmail.err\t\t\t/var/log/mail.err\n\n#\n# Some \"catch-all\" log files.\n#\n*.=debug;\\\n\tauth,authpriv.none;\\\n\tmail.none\t\t-/var/log/debug\n*.=info;*.=notice;*.=warn;\\\n    kern.!info;\\\n\tauth,authpriv.none;\\\n\tcron,daemon.none;\\\n\tmail.none\t\t-/var/log/messages\n\n#\n# Emergencies are sent to everybody logged in.\n#\n*.emerg\t\t\t\t:omusrmsg:*\n\n"
  },
  {
    "path": "aegir/conf/var/sftp_config",
    "content": "<Default>\n  GlobalDownload         0\n  GlobalUpload           0\n  Download               0\n  Upload                 0\n  StayAtHome             true\n  VirtualChroot          true\n  LimitConnection        0\n  LimitConnectionByUser  5\n  LimitConnectionByIP    5\n  Home                   /home/$USER\n  IdleTimeOut            15m\n  ResolveIP              false\n  IgnoreHidden           false\n  HideNoAccess           true\n  DefaultRights          0664 0775\n  MinimumRights          0664 0775\n</Default>\n\n<Group lshellg>\n  Shell                  /usr/bin/lshell\n</Group>\n"
  },
  {
    "path": "aegir/conf/var/ssh_config",
    "content": "\n# This is the ssh client system-wide configuration file.  See\n# ssh_config(5) for more information.  This file provides defaults for\n# users, and the values can be changed in per-user configuration files\n# or on the command line.\n\n# Configuration data is parsed as follows:\n#  1. command line options\n#  2. user-specific file\n#  3. system-wide file\n# Any configuration value is only changed the first time it is set.\n# Thus, host-specific definitions should be at the beginning of the\n# configuration file, and defaults at the end.\n\n# Site-wide defaults for some commonly used options.  For a comprehensive\n# list of available options, their meanings and defaults, please see the\n# ssh_config(5) man page.\n\nInclude /etc/ssh/ssh_config.d/*.conf\n\nHost *\n#   ForwardAgent no\n#   ForwardX11 no\n#   ForwardX11Trusted yes\n#   PasswordAuthentication yes\n#   HostbasedAuthentication no\n#   GSSAPIAuthentication no\n#   GSSAPIDelegateCredentials no\n#   GSSAPIKeyExchange no\n#   GSSAPITrustDNS no\n#   BatchMode no\n#   CheckHostIP yes\n#   AddressFamily any\n#   ConnectTimeout 0\n#   StrictHostKeyChecking ask\n#   IdentityFile ~/.ssh/id_rsa\n#   IdentityFile ~/.ssh/id_dsa\n#   IdentityFile ~/.ssh/id_ecdsa\n#   IdentityFile ~/.ssh/id_ed25519\n#   Port 22\n#   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc\n#   MACs hmac-md5,hmac-sha1,umac-64@openssh.com\n#   EscapeChar ~\n#   Tunnel no\n#   TunnelDevice any:any\n#   PermitLocalCommand no\n#   VisualHostKey no\n#   ProxyCommand ssh -q -W %h:%p gateway.example.com\n#   RekeyLimit 1G 1h\n#   UserKnownHostsFile ~/.ssh/known_hosts.d/%k\n    SendEnv LANG LC_*\n    HashKnownHosts yes\n    WarnWeakCrypto no\n"
  },
  {
    "path": "aegir/conf/var/sshd_config",
    "content": "# This is the sshd server system-wide configuration file.  See\n# sshd_config(5) for more information.\n# This sshd was compiled with PATH=/usr/local/bin:/usr/bin:/bin:/usr/games\n# The strategy used for options in the default sshd_config shipped with\n# OpenSSH is to specify options with their default value where\n# possible, but leave them commented.  Uncommented options override the\n# default value.\nInclude /etc/ssh/sshd_config.d/*.conf\nPort 22\n#AddressFamily any\n#ListenAddress 0.0.0.0\n#ListenAddress ::\n#HostKey /etc/ssh/ssh_host_rsa_key\n#HostKey /etc/ssh/ssh_host_ecdsa_key\n#HostKey /etc/ssh/ssh_host_ed25519_key\n# Ciphers and keying\n#RekeyLimit default none\n# Logging\n#SyslogFacility AUTH\n#LogLevel INFO\n# Authentication:\n#LoginGraceTime 2m\nPermitRootLogin prohibit-password\n#StrictModes yes\n#MaxAuthTries 6\n#MaxSessions 10\n#PubkeyAuthentication yes\n# Expect .ssh/authorized_keys2 to be disregarded by default in future.\n#AuthorizedKeysFile\t.ssh/authorized_keys .ssh/authorized_keys2\n#AuthorizedPrincipalsFile none\n#AuthorizedKeysCommand none\n#AuthorizedKeysCommandUser nobody\n# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts\n#HostbasedAuthentication no\n# Change to yes if you don't trust ~/.ssh/known_hosts for\n# HostbasedAuthentication\n#IgnoreUserKnownHosts no\n# Don't read the user's ~/.rhosts and ~/.shosts files\n#IgnoreRhosts yes\n# To disable tunneled clear text passwords, change to no here!\nPermitEmptyPasswords no\n# Change to yes to enable challenge-response passwords (beware issues with\n# some PAM modules and threads)\nKbdInteractiveAuthentication no\n# Kerberos options\n#KerberosAuthentication no\n#KerberosOrLocalPasswd yes\n#KerberosTicketCleanup yes\n#KerberosGetAFSToken no\n# GSSAPI options\n#GSSAPIAuthentication no\n#GSSAPICleanupCredentials yes\n#GSSAPIStrictAcceptorCheck yes\n#GSSAPIKeyExchange no\n# Set this to 'yes' to enable PAM authentication, account processing,\n# and session 
processing. If this is enabled, PAM authentication will\n# be allowed through the KbdInteractiveAuthentication and\n# PasswordAuthentication.  Depending on your PAM configuration,\n# PAM authentication via KbdInteractiveAuthentication may bypass\n# the setting of \"PermitRootLogin prohibit-password\".\n# If you just want the PAM account and session checks to run without\n# PAM authentication, then enable this but set PasswordAuthentication\n# and KbdInteractiveAuthentication to 'no'.\n#AllowAgentForwarding yes\n#AllowTcpForwarding yes\n#GatewayPorts no\n#X11Forwarding yes\n#X11DisplayOffset 10\n#X11UseLocalhost yes\n#PermitTTY yes\n#PrintLastLog yes\n#PermitUserEnvironment no\n#Compression delayed\n#ClientAliveInterval 0\n#ClientAliveCountMax 3\n#PidFile /run/sshd.pid\n#MaxStartups 10:30:100\n#PermitTunnel no\n#ChrootDirectory none\n#VersionAddendum none\n# no default banner path\n#Banner none\n# Allow client to pass locale environment variables\nAcceptEnv LANG LC_*\n# override default of no subsystems\nSubsystem sftp /usr/lib/openssh/sftp-server -u 0002\n# Example of overriding settings on a per-user basis\n#Match User anoncvs\n#\tX11Forwarding no\n#\tAllowTcpForwarding no\n#\tPermitTTY no\n#\tForceCommand cvs server\nIgnoreUserKnownHosts no\nPasswordAuthentication yes\nUseDNS no\nUsePAM no\nPrintMotd yes\nClientAliveInterval 300\nClientAliveCountMax 10000\nTCPKeepAlive yes\nMaxAuthTries 3\nLoginGraceTime 30\nMaxStartups 5:50:10\nX11Forwarding no\nAllowTcpForwarding no\nAllowAgentForwarding no\n"
  },
  {
    "path": "aegir/conf/var/sysctl.conf",
    "content": "# =============================================================================\n# sysctl.conf - Kernel parameter tuning for KVM/QEMU virtual machines\n# Target: Devuan Daedalus 5.x (Linux kernel 6.1 LTS / Debian Bookworm base)\n# Compatible with: Linux 4.9+ (all settings verified for kernel 6.1)\n#\n# Apply changes with:   sysctl -p /etc/sysctl.conf\n# Verify a value with:  sysctl <parameter.name>\n#\n# Sections:\n#   1. IPv4 Security Hardening\n#   2. IPv6 Disablement\n#   3. TCP/IP Performance Tuning\n#   4. Network Buffer Sizes\n#   5. Virtual Memory\n#   6. Filesystem & Process Limits\n#   7. Kernel Security Hardening\n# =============================================================================\n\n\n# -----------------------------------------------------------------------------\n# 1. IPv4 Security Hardening\n# -----------------------------------------------------------------------------\n\n# Ignore ICMP echo requests sent to broadcast addresses (smurf attack defence)\nnet.ipv4.icmp_echo_ignore_broadcasts = 1\n\n# Suppress bogus ICMP error responses that could be used for OS fingerprinting\nnet.ipv4.icmp_ignore_bogus_error_responses = 1\n\n# Enable SYN cookies to withstand SYN flood attacks without dropping connections\nnet.ipv4.tcp_syncookies = 1\n\n# Log martian packets (packets with impossible source addresses).\n# Useful for detecting spoofing attempts; generates some log noise on noisy\n# networks. Set to 0 if log verbosity is a concern.\nnet.ipv4.conf.all.log_martians = 1\nnet.ipv4.conf.default.log_martians = 1\n\n# Reject source-routed packets (attacker-controlled routing)\nnet.ipv4.conf.all.accept_source_route = 0\nnet.ipv4.conf.default.accept_source_route = 0\n\n# Strict Reverse Path Filtering: drop packets whose source address has no\n# return route through the interface they arrived on. 
Prevents IP spoofing.\n# Use rp_filter = 2 (loose mode) only if asymmetric routing is required.\nnet.ipv4.conf.all.rp_filter = 1\nnet.ipv4.conf.default.rp_filter = 1\n\n# Reject ICMP redirect messages (could be used to alter routing tables)\nnet.ipv4.conf.all.accept_redirects = 0\nnet.ipv4.conf.default.accept_redirects = 0\n\n# Reject \"secure\" redirects as well (redirects from listed gateways only)\nnet.ipv4.conf.all.secure_redirects = 0\nnet.ipv4.conf.default.secure_redirects = 0\n\n# Do not forward traffic between interfaces (this is not a router)\nnet.ipv4.ip_forward = 0\n\n# Do not send ICMP redirects (only relevant on routers)\nnet.ipv4.conf.all.send_redirects = 0\nnet.ipv4.conf.default.send_redirects = 0\n\n# Protect against TIME_WAIT assassination (RFC 1337).\n# Drops RST packets for sockets in TIME_WAIT state.\nnet.ipv4.tcp_rfc1337 = 1\n\n\n# -----------------------------------------------------------------------------\n# 2. IPv6 Disablement\n# -----------------------------------------------------------------------------\n# Remove this section entirely if IPv6 is needed. If keeping IPv6, replace\n# these with appropriate security settings instead of wholesale disablement.\n\n# Disable IPv6 on all interfaces including loopback\nnet.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\nnet.ipv6.conf.lo.disable_ipv6 = 1\n\n# The following are redundant when disable_ipv6 = 1 but are kept as a\n# defence-in-depth measure in case IPv6 is re-enabled without removing this file\nnet.ipv6.conf.default.router_solicitations = 0\nnet.ipv6.conf.default.accept_ra_rtr_pref = 0\nnet.ipv6.conf.default.accept_ra_pinfo = 0\nnet.ipv6.conf.default.accept_ra_defrtr = 0\nnet.ipv6.conf.default.autoconf = 0\nnet.ipv6.conf.default.dad_transmits = 0\nnet.ipv6.conf.default.max_addresses = 1\n\n\n# -----------------------------------------------------------------------------\n# 3. 
TCP/IP Performance Tuning\n# -----------------------------------------------------------------------------\n\n# Use BBR (Bottleneck Bandwidth and RTT) congestion control.\n# BBR significantly improves throughput in virtualised environments compared\n# to the default CUBIC algorithm. Requires the 'fq' packet scheduler below.\n# Available since kernel 4.9; present in Daedalus kernel 6.1.\nnet.core.default_qdisc = fq\nnet.ipv4.tcp_congestion_control = bbr\n\n# How long to wait before closing a socket in FIN_WAIT_2 (seconds).\n# Reducing from the 60s default frees ports faster under heavy connection churn.\n# Do not go below 15 on systems with load balancers or long-RTT clients.\nnet.ipv4.tcp_fin_timeout = 20\n\n# TCP keepalive: how long an idle connection is kept before probing begins.\n# 300s is a reasonable balance between detecting dead peers and keeping\n# long-lived idle connections (e.g. SSH, database pools) alive.\nnet.ipv4.tcp_keepalive_time = 300\n# Number of unacknowledged probes before the connection is declared dead\nnet.ipv4.tcp_keepalive_probes = 5\n# Interval between consecutive keepalive probes (seconds)\nnet.ipv4.tcp_keepalive_intvl = 15\n\n# Disable slow-start restart after an idle period.\n# Prevents throughput degradation for long-lived but bursty connections\n# (e.g. HTTP/2 keep-alive, gRPC). Recommended for most server workloads.\nnet.ipv4.tcp_slow_start_after_idle = 0\n\n# Allow TIME_WAIT sockets to be reused for new outbound connections.\n# Safe when tcp_timestamps is enabled (which it is below).\nnet.ipv4.tcp_tw_reuse = 1\n\n# Ephemeral port range for outbound connections.\n# Starting at 1024 (below 1024 requires CAP_NET_BIND_SERVICE anyway) gives\n# more ports. The previous value of 2000 wasted the 1024-1999 range.\nnet.ipv4.ip_local_port_range = 1024 65535\n\n# Maximum queue length for SYN requests awaiting completion of the 3-way\n# handshake. 
Increase for servers handling high connection rates.\nnet.ipv4.tcp_max_syn_backlog = 4096\n\n# Number of times a SYN-ACK is retransmitted for an incoming connection.\n# Reducing from the 5 default shortens the time a half-open connection\n# occupies resources before being abandoned.\nnet.ipv4.tcp_synack_retries = 2\n\n# Enable TCP timestamps (RFC 1323). Required for tcp_tw_reuse to be safe,\n# and improves RTT estimation accuracy. Disable only if you have strong\n# reasons (e.g. privacy concerns about timestamp-based OS fingerprinting).\nnet.ipv4.tcp_timestamps = 1\n\n# Enable window scaling (RFC 1323) to support receive windows > 65535 bytes.\n# Essential for performance over high-bandwidth or high-latency paths.\nnet.ipv4.tcp_window_scaling = 1\n\n# Enable TCP Fast Open for both outbound (1) and inbound (2) connections.\n# Value 3 = both. Reduces latency for repeated TCP connections by allowing\n# data in the SYN packet. Requires application support.\nnet.ipv4.tcp_fastopen = 3\n\n# Raise the conntrack table size if this VM runs iptables/nftables and\n# sees high connection volume. The default (~65k) is often too low.\n# Note: this parameter only takes effect if the nf_conntrack module is loaded.\nnet.netfilter.nf_conntrack_max = 262144\n\n# TCP SACK (Selective Acknowledgments) - leave enabled (default = 1).\n# SACK improves recovery from packet loss. The security argument for disabling\n# it (a 2019 kernel bug) has been patched in all current kernels.\nnet.ipv4.tcp_sack = 1\n\n\n# -----------------------------------------------------------------------------\n# 4. 
Network Buffer Sizes\n# -----------------------------------------------------------------------------\n\n# Per-socket TCP receive buffer: min, default, max (bytes)\n# 8 MB max gives headroom for high-BDP (bandwidth-delay product) paths\n# without wasting memory on idle sockets.\nnet.ipv4.tcp_rmem = 4096 131072 8388608\n\n# Per-socket TCP send buffer: min, default, max (bytes)\nnet.ipv4.tcp_wmem = 4096 131072 8388608\n\n# Global maximum socket receive/send buffer (bytes).\n# Must be >= the max values in tcp_rmem/tcp_wmem above.\nnet.core.rmem_max = 8388608\nnet.core.wmem_max = 8388608\n\n# Default socket receive/send buffer for non-TCP protocols (UDP etc.)\nnet.core.rmem_default = 262144\nnet.core.wmem_default = 262144\n\n# Maximum number of packets queued on the input side of a network interface\n# before the kernel starts dropping them. Increase for high-throughput NICs.\nnet.core.netdev_max_backlog = 16384\n\n# Maximum number of connections that can be queued for acceptance per socket.\n# nginx/Apache worker queues and similar services benefit from values >= 1024.\nnet.core.somaxconn = 1024\n\n\n# -----------------------------------------------------------------------------\n# 5. Virtual Memory\n# -----------------------------------------------------------------------------\n\n# Swappiness: how aggressively the kernel swaps anonymous memory to disk.\n# 0  = swap only to avoid OOM\n# 1  = swap as little as possible (original value; risky with limited RAM)\n# 10 = sensible minimum for VMs; avoids OOM kills while still preferring RAM\n# 60 = kernel default\n# For VMs with ample RAM and no swap, 1 is acceptable. For typical VMs,\n# 10 reduces the risk of unexpected OOM kills under memory pressure.\nvm.swappiness = 10\n\n# vfs_cache_pressure controls how eagerly the kernel reclaims memory used for\n# dentries and inodes. Default is 100 (neutral). Values > 100 reclaim more\n# aggressively; values < 100 keep more cache. 
50 favours retaining the VFS\n# cache, which benefits workloads touching many files (web servers, git, etc.)\n# The original value of 200 was excessively aggressive for most workloads.\nvm.vfs_cache_pressure = 50\n\n# Minimum virtual address for mmap. Prevents null-pointer dereference exploits.\n# 4096 = one page. Some hardened profiles set this to 65536.\nvm.mmap_min_addr = 65536\n\n# Memory overcommit policy:\n#   0 = heuristic (default): the kernel uses an estimate of available memory\n#   1 = always allow overcommit (suitable for scientific/HPC workloads)\n#   2 = strict: never commit more than (RAM * overcommit_ratio / 100) + swap\n# Mode 0 is appropriate for general-purpose VMs.\n# overcommit_ratio is only meaningful with mode 2; leaving it at the default\n# (50) avoids confusion. The original config set ratio=0 with mode=0 which\n# had no effect but implied strict behaviour.\nvm.overcommit_memory = 0\n# vm.overcommit_ratio = 50   # Only relevant when overcommit_memory = 2\n\n\n# -----------------------------------------------------------------------------\n# 6. Filesystem & Process Limits\n# -----------------------------------------------------------------------------\n\n# Maximum number of open file descriptors system-wide.\n# Raise if running databases, high-concurrency servers, or many containers.\nfs.file-max = 2097152\n\n# Maximum number of concurrent asynchronous I/O operations system-wide.\n# Relevant for databases (PostgreSQL, MySQL) that use AIO heavily.\nfs.aio-max-nr = 1048576\n\n# Maximum number of inotify watches per user.\n# 65536 covers most use cases. Raise to 524288 if running IDEs, webpack,\n# lsyncd, or other file-watching tools that exhaust the default.\nfs.inotify.max_user_watches = 65536\n\n# Maximum number of inotify event queue entries per inotify instance.\n# The default (16384) is usually fine; raise if you see ENOSPC from inotify.\nfs.inotify.max_queued_events = 32768\n\n# Protect against hardlink attacks (e.g. 
TOCTOU in /tmp).\n# Prevents users from creating hardlinks to files they do not own.\nfs.protected_hardlinks = 1\n\n# Protect against symlink attacks in world-writable sticky directories.\nfs.protected_symlinks = 1\n\n# Protect against opening FIFOs in world-writable sticky directories\n# by users other than the owner. Available since kernel 4.19.\nfs.protected_fifos = 1\n\n# Prevent opening regular files not owned by the user in world-writable\n# sticky directories. Available since kernel 4.19.\n# 1 = applies to world-writable sticky dirs; 2 = group-writable ones as well\nfs.protected_regular = 2\n\n# Disable core dumps for setuid programs. Prevents sensitive memory from\n# being written to disk by privilege-dropped processes.\nfs.suid_dumpable = 0\n\n\n# -----------------------------------------------------------------------------\n# 7. Kernel Security Hardening\n# -----------------------------------------------------------------------------\n\n# Full ASLR: randomise the base addresses of the stack, VDSO, and mmap regions.\n# 0 = off, 1 = partial, 2 = full. Always use 2 on production systems.\nkernel.randomize_va_space = 2\n\n# Restrict access to the kernel message ring buffer (dmesg) to root only.\n# Prevents unprivileged users from gleaning kernel addresses or error details.\nkernel.dmesg_restrict = 1\n\n# Hide kernel symbol addresses from unprivileged users.\n# 1 = hide unless the reader has CAP_SYSLOG, 2 = hide from everyone, root included.\n# kptr_restrict = 2 is the stricter option; use 1 if you need oops reports.\nkernel.kptr_restrict = 2\n\n# Restrict ptrace to parent processes (or root). Prevents one unprivileged\n# process from attaching to and inspecting another.\n# 0 = permissive, 1 = restricted (recommended), 2 = admin-only, 3 = no attach\n# Note: provided by the Yama LSM; it is a no-op on kernels built without Yama.\nkernel.yama.ptrace_scope = 1\n\n# Suppress most kernel messages on the console. 
The four values are:\n# console_loglevel, default_message_loglevel, minimum_console_loglevel,\n# default_console_loglevel. \"4 1 1 7\" limits the console to KERN_ERR and more\n# severe messages (warnings still land in dmesg), which is appropriate for\n# production VMs to avoid console log spam.\nkernel.printk = 4 1 1 7\n\n# Pipe core dumps to /bin/false, discarding them instead of writing to disk.\n# Prevents potentially sensitive process memory from being captured.\n# To re-enable temporarily: sysctl -w kernel.core_pattern=core\nkernel.core_pattern = |/bin/false\n\n# Maximum PID value. The default (32768) can be exhausted by systems running\n# many threads or containers. 4194304 is the maximum on 64-bit kernels.\nkernel.pid_max = 4194304\n\n# Kernel module loading:\n# Uncomment the line below only on fully provisioned systems where you are\n# certain no additional modules will ever need to be loaded. It is\n# irreversible without a reboot.\n# kernel.modules_disabled = 1\n\n# =============================================================================\n# End of sysctl.conf\n# =============================================================================\n"
  },
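The header of the sysctl.conf above suggests applying it with `sysctl -p` and verifying values one at a time with `sysctl <parameter.name>`. A minimal sketch of checking the whole file at once (the helper names and the sample path are illustrative assumptions, not part of the BOA tree): it normalizes a sysctl.conf-style file into bare `key=value` lines so each declared value can be diffed against the running kernel.

```shell
#!/usr/bin/env bash
# Sketch: detect drift between a sysctl.conf-style file and the live kernel.
# normalize_sysctl and check_sysctl_drift are illustrative names.

# Strip comments, squeeze whitespace around '=', trim, and drop blank lines,
# leaving one "key=value" pair per line.
normalize_sysctl() {
  sed -e 's/#.*//' \
      -e 's/[[:space:]]*=[[:space:]]*/=/' \
      -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' "$1" |
    grep -v '^$'
}

# Compare each declared value against the live one (Linux with sysctl only).
# Multi-valued parameters (e.g. kernel.printk) come back tab-separated, so
# tabs are folded to spaces before comparing.
check_sysctl_drift() {
  command -v sysctl >/dev/null || { echo 'sysctl not found' >&2; return 1; }
  normalize_sysctl "$1" | while IFS='=' read -r key want; do
    have="$(sysctl -n "$key" 2>/dev/null | tr '\t' ' ' || echo '<unset>')"
    [ "$have" = "$want" ] || printf '%s: want [%s] have [%s]\n' "$key" "$want" "$have"
  done
}
```

Typical use would be `check_sysctl_drift /etc/sysctl.conf` after provisioning; an empty output means every declared value is in effect.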
  {
    "path": "aegir/conf/version/barracuda-release.txt",
    "content": "BOA-5.9.1-dev\n"
  },
  {
    "path": "aegir/conf/version/barracuda-version.txt",
    "content": "BOA-5.9.1-dev\n"
  },
  {
    "path": "aegir/conf/version/octopus-release.txt",
    "content": "BOA-5.9.1-dev\n"
  },
  {
    "path": "aegir/conf/version/octopus-version.txt",
    "content": "BOA-5.9.1-dev\n"
  },
  {
    "path": "aegir/conf/version/release.txt",
    "content": "BOA-5.9.1-dev\n"
  },
  {
    "path": "aegir/conf/version/version.txt",
    "content": "BOA-5.9.1-dev\n"
  },
  {
    "path": "aegir/helpers/Gemfile.txt",
    "content": "source 'https://rubygems.org'\n\ngroup :development do\n\n  # Sass, Compass and extensions.\n  gem 'sass'                    # Sass.\n  gem 'sass-globbing'           # Import Sass files based on globbing pattern.\n  gem 'compass'                 # Framework built on Sass.\n  gem 'compass-validator'       # So you can `compass validate`.\n  gem 'compass-normalize'       # Compass version of normalize.css.\n  gem 'compass-rgbapng'         # Turns rgba() into .png's for backwards compatibility.\n  gem 'susy'                    # Susy grid framework.\n  gem 'singularitygs'           # Alternative to the Susy grid framework.\n  gem 'toolkit'                 # Compass utility from the fabulous Snugug.\n  gem 'breakpoint'              # Manages CSS media queries.\n  gem 'oily_png'                # Faster Compass sprite generation.\n  gem 'css_parser'              # Helps `compass stats` output statistics.\n\n  # Guard\n  gem 'guard'                   # Guard event handler.\n  gem 'guard-compass'           # Compile on sass/scss change.\n  gem 'guard-shell'             # Run shell commands.\n  gem 'guard-livereload'        # Browser reload.\n  gem 'yajl-ruby'               # Faster JSON with LiveReload in the browser.\n\n  # Dependency to prevent polling. Setup for multiple OS environments.\n  # Optionally remove the lines not specific to your OS.\n  # https://github.com/guard/guard#efficient-filesystem-handling\n  gem 'rb-inotify', '~> 0.9', :require => false      # Linux\n  gem 'rb-fsevent', :require => false                # Mac OSX\n  gem 'rb-fchange', :require => false                # Windows\n\nend\n"
  },
  {
    "path": "aegir/helpers/apt-list-debian.txt",
    "content": "ftp.at.debian.org\nftp.au.debian.org\nftp.ba.debian.org\nftp.be.debian.org\nftp.bg.debian.org\nftp.br.debian.org\nftp.ca.debian.org\nftp.ch.debian.org\nftp.cz.debian.org\nftp.de.debian.org\nftp.debian.org\nftp.dk.debian.org\nftp.ee.debian.org\nftp.es.debian.org\nftp.fi.debian.org\nftp.fr.debian.org\nftp.gr.debian.org\nftp.hk.debian.org\nftp.hu.debian.org\nftp.ie.debian.org\nftp.it.debian.org\nftp.jp.debian.org\nftp.lt.debian.org\nftp.nl.debian.org\nftp.nz.debian.org\nftp.pl.debian.org\nftp.pt.debian.org\nftp.ro.debian.org\nftp.ru.debian.org\nftp.se.debian.org\nftp.sk.debian.org\nftp.th.debian.org\nftp.tr.debian.org\nftp.tw.debian.org\nftp.ua.debian.org\nftp.uk.debian.org\nftp.us.debian.org"
  },
  {
    "path": "aegir/helpers/apt.conf.noi.dist",
    "content": "APT::Get::Assume-Yes \"true\";\nAPT::Get::Show-Upgraded \"true\";\nAPT::Get::Install-Recommends \"false\";\nAPT::Get::Install-Suggests \"false\";\nAPT::Quiet \"true\";\nDPkg::Options {\"--force-confnew\";\"--force-confmiss\";};\nDPkg::Pre-Install-Pkgs {\"/usr/sbin/dpkg-preconfigure --apt\";};\nDir::Etc::SourceList \"/etc/apt/sources.list\";\n"
  },
  {
    "path": "aegir/helpers/apt.conf.noi.nrml",
    "content": "APT::Get::Assume-Yes \"true\";\nAPT::Get::Show-Upgraded \"true\";\nAPT::Get::Install-Recommends \"false\";\nAPT::Get::Install-Suggests \"false\";\nAPT::Quiet \"true\";\nDPkg::Options {\"--force-confdef\";\"--force-confmiss\";\"--force-confold\"};\nDPkg::Pre-Install-Pkgs {\"/usr/sbin/dpkg-preconfigure --apt\";};\nDir::Etc::SourceList \"/etc/apt/sources.list\";\n"
  },
  {
    "path": "aegir/helpers/apt.conf.noninteractive",
    "content": "APT::Get::Assume-Yes \"true\";\nAPT::Get::Show-Upgraded \"true\";\nAPT::Get::Install-Recommends \"false\";\nAPT::Get::Install-Suggests \"false\";\nAPT::Quiet \"true\";\nDPkg::Options {\"--force-confdef\";\"--force-confmiss\";\"--force-confold\"};\nDPkg::Pre-Install-Pkgs {\"/usr/sbin/dpkg-preconfigure --apt\";};\nDir::Etc::SourceList \"/etc/apt/sources.list\";\n"
  },
  {
    "path": "aegir/helpers/cf-simple-hook.sh",
    "content": "#!/usr/bin/env bash\n\n# Enable strict error handling for debugging only\n# set -euo pipefail\n\nfunction vault_upload {\n    local DOMAIN=\"${1}\"\n    local KEYFILE=\"${2}\"\n    local CERTFILE=\"${3}\"\n    local FULLCHAINFILE=\"${4}\"\n    local CHAINFILE=\"${5}\"\n    local TIMESTAMP=\"${6:-}\"\n\n    local MAIN=$(<<<\"${DOMAIN}\" grep -oP '[^\\.]+\\.[^\\.]+$')\n    local HOST=\"${DOMAIN/.${MAIN}/}\"\n\n    if [[ \"${HOST}\" == \"\" || \"${HOST}\" == \"*\" ]]; then\n        local STORE=secret/dehydrated/${MAIN}/wildcard\n    else\n        local STORE=secret/dehydrated/${MAIN}/${HOST}\n    fi\n\n    local TOKEN=$(\n        curl -s -X POST \\\n            -d \"{\\\"role_id\\\":\\\"SNIP\\\",\\\"secret_id\\\":\\\"SNIP\\\"}\" \\\n            https://vault.example.com/v1/auth/approle/login | jq -r .auth.client_token\n    )\n\n    curl -s -X POST \\\n        -H \"X-Vault-Token: ${TOKEN}\" \\\n        -d @<(\n            jq -n \\\n                --arg cert \"$(< ${CERTFILE} )\" \\\n                --arg key \"$(< ${KEYFILE} )\" \\\n                --arg chain \"$(< ${CHAINFILE} )\" \\\n                --arg fullchain \"$(< ${FULLCHAINFILE} )\" \\\n                --arg timestamp \"${TIMESTAMP}\" \\\n                '{cert:$cert,key:$key,chain:$chain,fullchain:$fullchain,timestamp:$timestamp}'\n        ) \\\n        https://vault.example.com/v1/${STORE}\n}\n\nfunction deploy_challenge {\n    local DOMAIN=\"${1}\"\n    local TOKEN_FILENAME=\"${2}\"\n    local TOKEN_VALUE=\"${3}\"\n\n    lexicon cloudflare create ${DOMAIN} TXT \\\n        --name=\"_acme-challenge.${DOMAIN}.\" \\\n        --content=\"${TOKEN_VALUE}\" \\\n        --auth-username=\"devops@example.com\" \\\n        --auth-token=\"SNIP\"\n\n    sleep 10\n\n    :\n}\n
\nfunction clean_challenge {\n    local DOMAIN=\"${1}\"\n    local TOKEN_FILENAME=\"${2}\"\n    local TOKEN_VALUE=\"${3}\"\n\n    lexicon cloudflare delete ${DOMAIN} TXT \\\n        --name=\"_acme-challenge.${DOMAIN}.\" \\\n        --content=\"${TOKEN_VALUE}\" \\\n        --auth-username=\"devops@example.com\" \\\n        --auth-token=\"SNIP\"\n\n    :\n}\n\nfunction deploy_cert {\n    local DOMAIN=\"${1}\"\n    local KEYFILE=\"${2}\"\n    local CERTFILE=\"${3}\"\n    local FULLCHAINFILE=\"${4}\"\n    local CHAINFILE=\"${5}\"\n\n    vault_upload \"${@}\"\n\n    :\n}\n\nfunction unchanged_cert {\n    local DOMAIN=\"${1}\"\n    local KEYFILE=\"${2}\"\n    local CERTFILE=\"${3}\"\n    local FULLCHAINFILE=\"${4}\"\n    local CHAINFILE=\"${5}\"\n\n    vault_upload \"${@}\"\n\n    :\n}\n\nfunction invalid_challenge() {\n    local DOMAIN=\"${1}\"\n    local RESPONSE=\"${2}\"\n\n    :\n}\n\nexit_hook() {\n  :\n}\n\nstartup_hook() {\n  :\n}\n\nHANDLER=\"${1}\"\nshift\n\nif [ -n \"$(type -t ${HANDLER})\" ] && [ \"$(type -t ${HANDLER})\" = function ]; then\n  $HANDLER \"${@}\"\nfi\n"
  },
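The dispatch at the bottom of cf-simple-hook.sh is the standard dehydrated hook pattern: dehydrated invokes the script as `hook.sh <handler> <args…>`, and the script runs the matching function only if one is defined. The idiom can be reduced to a self-contained sketch (the `deploy_challenge` body here is a stand-in, not the real Cloudflare hook):

```shell
#!/usr/bin/env bash
# Sketch of the hook-dispatch idiom from cf-simple-hook.sh: call the function
# named by the first argument only when such a shell function actually exists.

# Stand-in handler; the real hook would create a DNS record via lexicon.
deploy_challenge() {
  printf 'would create TXT _acme-challenge.%s = %s\n' "$1" "$3"
}

dispatch() {
  local handler="$1"; shift
  # type -t prints "function" only for shell functions, so unknown handler
  # names (e.g. hooks added by newer dehydrated versions) are silently ignored.
  if [ "$(type -t "$handler")" = "function" ]; then
    "$handler" "$@"
  fi
}
```

`dispatch deploy_challenge example.com token-file token-value` prints the record line, while `dispatch some_future_hook …` is a silent no-op; that tolerance of unknown hook names is exactly why hook scripts guard the call this way.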
  {
    "path": "aegir/helpers/challenge-dns-email-hook.sh",
    "content": "#!/usr/bin/env bash\n\nfunction has_propagated {\n    while [ \"$#\" -ge 2 ]; do\n        local RECORD_NAME=\"${1}\"; shift\n        local TOKEN_VALUE=\"${1}\"; shift\n        if [ ${#AUTH_NS[@]} -eq 0 ]; then\n            local RECORD_DOMAIN=$RECORD_NAME\n            declare -a iAUTH_NS\n            while [ -z \"$iAUTH_NS\" ]; do\n                RECORD_DOMAIN=$(echo \"${RECORD_DOMAIN}\" | cut -d'.' -f 2-)\n                iAUTH_NS=($(dig +short \"${RECORD_DOMAIN}\" IN CNAME))\n                if [ -n \"$iAUTH_NS\" ]; then\n                    unset iAUTH_NS && declare -a iAUTH_NS\n                    continue\n                fi\n                iAUTH_NS=($(dig +short \"${RECORD_DOMAIN}\" IN NS))\n            done\n        else\n           local iAUTH_NS=(\"${AUTH_NS[@]}\")\n        fi\n        for NS in \"${iAUTH_NS[@]}\"; do\n            dig +short @\"${NS}\" \"${RECORD_NAME}\" IN TXT | grep -q \"\\\"${TOKEN_VALUE}\\\"\" || return 1\n        done\n        unset iAUTH_NS\n    done\n    return 0\n}\n\nfunction ocsp_update {\n    local DOMAIN=\"${1}\" KEYFILE=\"${2}\" CERTFILE=\"${3}\" FULLCHAINFILE=\"${4}\" CHAINFILE=\"${5}\" TIMESTAMP=\"${6}\"\n\n    # Get the OCSP response and shove it into a file, used for OCSP stapling.\n    #\n    # You only need this for old versions of nginx that can't do this themselves,\n    # or if your server is behind a proxy (e.g. nginx can't do OCSP via HTTP proxy).\n    #\n    # Parameters:\n    # - DOMAIN\n    #   The primary domain name, i.e. 
the certificate common\n    #   name (CN).\n    # - KEYFILE\n    #   The path of the file containing the private key.\n    # - CERTFILE\n    #   The path of the file containing the signed certificate.\n    # - FULLCHAINFILE\n    #   The path of the file containing the full certificate chain.\n    # - CHAINFILE\n    #   The path of the file containing the intermediate certificate(s).\n    # - TIMESTAMP\n    #   Timestamp when the specified certificate was created.\n\n    if [ -n \"${OCSP_RESPONSE_FILE}\" ]; then\n\n        if [ -z \"${OCSP_HOST}\" ]; then\n            OCSP_HOST=\"${http_proxy}\" # eg http://foo.bar:3128/\n\t    # strip protocol and path:\n\t    OCSP_HOST=\"$(echo \"$OCSP_HOST\" | sed -E 's/(\\w+:\\/\\/)((\\w|\\.)+:[0-9]+?)\\/?.*/\\2/')\" # eg foo.bar:3128\n        fi\n\n        if [ -n \"$VERBOSE\" ]; then\n          echo \"OCSP_HOST: $OCSP_HOST\"\n          echo \"http_proxy: $http_proxy\"\n          echo \"OCSP_RESPONSE_FILE: $OCSP_RESPONSE_FILE\"\n          echo \"CHAINFILE: $CHAINFILE\"\n          echo \"CERTFILE: $CERTFILE\"\n          echo \"command: openssl ocsp -noverify -no_nonce -respout \\\"${OCSP_RESPONSE_FILE}\\\" -issuer \\\"${CHAINFILE}\\\" -cert \\\"${CERTFILE}\\\" -host \\\"${OCSP_HOST}\\\" -path \\\"\\$(openssl x509 -noout -ocsp_uri -in \\\"${CERTFILE}\\\")\\\" -CApath \\\"/etc/ssl/certs\\\"\"\n        fi\n\n        if [ -n \"${OCSP_HOST}\" ]; then\n            openssl ocsp -noverify -no_nonce -respout \"${OCSP_RESPONSE_FILE}\" -issuer \"${CHAINFILE}\" -cert \"${CERTFILE}\" -host \"${OCSP_HOST}\" -path \"$(openssl x509 -noout -ocsp_uri -in \"${CERTFILE}\")\" -CApath \"/etc/ssl/certs\"\n        else\n            openssl ocsp -noverify -no_nonce -respout \"${OCSP_RESPONSE_FILE}\" -issuer \"${CHAINFILE}\" -cert \"${CERTFILE}\" -path \"$(openssl x509 -noout -ocsp_uri -in \"${CERTFILE}\")\" -CApath \"/etc/ssl/certs\"\n        fi\n    fi\n}\n\nfunction oscp_update {\n    #oops :)\n    ocsp_update \"$@\"\n}\n\nfunction deploy_challenge 
{\n    local RECORDS=()\n    RECIPIENT=${RECIPIENT:-$(id -u -n)}\n    local FIRSTDOMAIN=\"${1}\"\n    local SUBJECT=\"Let's Encrypt certificate renewal\"\n    while (( \"$#\" >= 3 )); do\n        local DOMAIN=\"${1}\"; shift\n        local TOKEN_FILENAME=\"${1}\"; shift\n        local TOKEN_VALUE=\"${1}\"; shift\n\n        # This hook is called once for every domain that needs to be\n        # validated, including any alternative names you may have listed.\n        #\n        # Parameters:\n        # - DOMAIN\n        #   The domain name (CN or subject alternative name) being\n        #   validated.\n        # - TOKEN_FILENAME\n        #   The name of the file containing the token to be served for HTTP\n        #   validation. Should be served by your web server as\n        #   /.well-known/acme-challenge/${TOKEN_FILENAME}.\n        # - TOKEN_VALUE\n        #   The token value that needs to be served for validation. For DNS\n        #   validation, this is what you want to put in the _acme-challenge\n        #   TXT record. For HTTP validation it is the value that is expected\n        #   be found in the $TOKEN_FILENAME file.\n\n        RECORD_NAME=\"_acme-challenge.${DOMAIN}\"\n        RECORDS+=( ${RECORD_NAME} )\n        RECORDS+=( ${TOKEN_VALUE} )\n    done\n\n    read -d '' MESSAGE <<EOF\nThe Let's Encrypt certificate for ${FIRSTDOMAIN} is about to expire.\nBefore it can be renewed, ownership of the domain must be proven by\nresponding to a challenge.\n\nPlease deploy the following record(s) to validate ownership of ${FIRSTDOMAIN}:\n\nEOF\n    for (( i=0; i < \"${#RECORDS[@]}\"; i+=2 )); do\n        MESSAGE=\"$(printf '%s\\n  %s. IN TXT %s\\n' \"$MESSAGE\" \"${RECORDS[$i]}\" \"${RECORDS[$(($i + 1))]}\")\"\n    done\n\n    echo \"$MESSAGE\" | s-nail -s \"$SUBJECT\" \"$RECIPIENT\"\n\n    echo \" + Settling down for 10s...\"\n    sleep 10\n\n    while ! has_propagated \"${RECORDS[@]}\"; do\n         echo \" + DNS not propagated. 
Waiting 30s for record creation and replication...\"\n         sleep 30\n    done\n}\n\nfunction clean_challenge {\n    local RECORDS=()\n    RECIPIENT=${RECIPIENT:-$(id -u -n)}\n    local FIRSTDOMAIN=\"${1}\"\n    local SUBJECT=\"Let's Encrypt certificate renewal\"\n    while (( \"$#\" >= 3 )); do\n        local DOMAIN=\"${1}\"; shift\n        local TOKEN_FILENAME=\"${1}\"; shift\n        local TOKEN_VALUE=\"${1}\"; shift\n\n        # This hook is called after attempting to validate each domain,\n        # whether or not validation was successful. Here you can delete\n        # files or DNS records that are no longer needed.\n        #\n        # The parameters are the same as for deploy_challenge.\n\n        RECORD_NAME=\"_acme-challenge.${DOMAIN}\"\n        RECORDS+=( ${RECORD_NAME} )\n        RECORDS+=( ${TOKEN_VALUE} )\n    done\n\n    read -d '' MESSAGE <<EOF\nPropagation has completed for ${FIRSTDOMAIN}. The following record(s) can now be deleted:\n\nEOF\n\n    while (( \"${#RECORDS[@]}\" >= 2 )); do\n        MESSAGE=\"$(printf '%s\\n  %s. IN TXT %s\\n' \"$MESSAGE\" \"${RECORDS[0]}\" \"${RECORDS[1]}\")\"\n        RECORDS=( \"${RECORDS[@]:2}\" )\n    done\n\n    echo \"$MESSAGE\" | s-nail -s \"$SUBJECT\" \"$RECIPIENT\"\n\n}\n\nfunction deploy_cert {\n    local DOMAIN=\"${1}\" KEYFILE=\"${2}\" CERTFILE=\"${3}\" FULLCHAINFILE=\"${4}\" CHAINFILE=\"${5}\" TIMESTAMP=\"${6}\"\n\n    # This hook is called once for each certificate that has been\n    # produced. Here you might, for instance, copy your new certificates\n    # to service-specific locations and reload the service.\n    #\n    # Parameters:\n    # - DOMAIN\n    #   The primary domain name, i.e. 
the certificate common\n    #   name (CN).\n    # - KEYFILE\n    #   The path of the file containing the private key.\n    # - CERTFILE\n    #   The path of the file containing the signed certificate.\n    # - FULLCHAINFILE\n    #   The path of the file containing the full certificate chain.\n    # - CHAINFILE\n    #   The path of the file containing the intermediate certificate(s).\n    # - TIMESTAMP\n    #   Timestamp when the specified certificate was created.\n\n    oscp_update \"$@\"\n}\n\nfunction unchanged_cert {\n    local DOMAIN=\"${1}\" KEYFILE=\"${2}\" CERTFILE=\"${3}\" FULLCHAINFILE=\"${4}\" CHAINFILE=\"${5}\"\n\n    # This hook is called once for each certificate that is still\n    # valid and therefore wasn't reissued.\n    #\n    # Parameters:\n    # - DOMAIN\n    #   The primary domain name, i.e. the certificate common\n    #   name (CN).\n    # - KEYFILE\n    #   The path of the file containing the private key.\n    # - CERTFILE\n    #   The path of the file containing the signed certificate.\n    # - FULLCHAINFILE\n    #   The path of the file containing the full certificate chain.\n    # - CHAINFILE\n    #   The path of the file containing the intermediate certificate(s).\n\n    oscp_update \"$@\"\n}\n\nHANDLER=$1; shift\nif [[ \"${HANDLER}\" =~ ^(deploy_challenge|clean_challenge|deploy_cert|unchanged_cert)$ ]]; then\n  \"$HANDLER\" \"$@\"\nfi\n"
  },
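Both `deploy_challenge` and `clean_challenge` in the hook above walk their arguments three at a time (domain, token filename, token value) and collect the resulting (record name, token) pairs into a flat array before formatting the renewal mail. That pairing logic can be sketched on its own in pure bash, with no `dig` or `s-nail` required (the function name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the record-formatting logic from challenge-dns-email-hook.sh:
# dehydrated passes (domain, token_filename, token_value) triples, and the
# hook emits one "_acme-challenge.<domain>. IN TXT <token>" line per triple.
format_challenge_records() {
  while [ "$#" -ge 3 ]; do
    # token_filename is only meaningful for HTTP-01 validation; DNS-01
    # needs just the domain and the token value.
    local domain="$1" token_filename="$2" token_value="$3"
    shift 3
    printf '  _acme-challenge.%s. IN TXT %s\n' "$domain" "$token_value"
  done
}
```

Feeding two triples yields the same two TXT lines that the hook splices into the body of the renewal e-mail.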
  {
    "path": "aegir/helpers/dehydrated",
    "content": "#!/usr/bin/env bash\n\n# dehydrated by lukas2511\n# Source: https://dehydrated.io\n#\n# This script is licensed under The MIT License (see LICENSE for more information).\n\nset -e\nset -u\nset -o pipefail\n[[ -n \"${ZSH_VERSION:-}\" ]] && set -o SH_WORD_SPLIT && set +o FUNCTION_ARGZERO && set -o NULL_GLOB && set -o noglob\n[[ -z \"${ZSH_VERSION:-}\" ]] && shopt -s nullglob && set -f\n\numask 077 # paranoid umask, we're creating private keys\n\n# Close weird external file descriptors\nexec 3>&-\nexec 4>&-\n\nVERSION=\"0.7.3\"\n\n# Find directory in which this script is stored by traversing all symbolic links\nSOURCE=\"${0}\"\nwhile [ -h \"$SOURCE\" ]; do # resolve $SOURCE until the file is no longer a symlink\n  DIR=\"$( cd -P \"$( dirname \"$SOURCE\" )\" && pwd )\"\n  SOURCE=\"$(readlink \"$SOURCE\")\"\n  [[ $SOURCE != /* ]] && SOURCE=\"$DIR/$SOURCE\" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located\ndone\nSCRIPTDIR=\"$( cd -P \"$( dirname \"$SOURCE\" )\" && pwd )\"\n\nBASEDIR=\"${SCRIPTDIR}\"\nORIGARGS=(\"${@}\")\n\nnoglob_set() {\n  if [[ -n \"${ZSH_VERSION:-}\" ]]; then\n    set +o noglob\n  else\n    set +f\n  fi\n}\n\nnoglob_clear() {\n  if [[ -n \"${ZSH_VERSION:-}\" ]]; then\n    set -o noglob\n  else\n    set -f\n  fi\n}\n\n# Generate json.sh path matching string\njson_path() {\n  if [ ! 
\"${1}\" = \"-p\" ]; then\n    printf '\"%s\"' \"${1}\"\n  else\n    printf '%s' \"${2}\"\n  fi\n}\n\n# Get string value from json dictionary\nget_json_string_value() {\n  local filter\n  filter=\"$(printf 's/.*\\[%s\\][[:space:]]*\"\\([^\"]*\\)\"/\\\\1/p' \"$(json_path \"${1:-}\" \"${2:-}\")\")\"\n  sed -n \"${filter}\"\n}\n\n# Get array values from json dictionary\nget_json_array_values() {\n  grep -E '^\\['\"$(json_path \"${1:-}\" \"${2:-}\")\"',[0-9]*\\]' | sed -e 's/\\[[^\\]*\\][[:space:]]*//g' -e 's/^\"//' -e 's/\"$//'\n}\n\n# Get sub-dictionary from json\nget_json_dict_value() {\n  local filter\n  filter=\"$(printf 's/.*\\[%s\\][[:space:]]*\\(.*\\)/\\\\1/p' \"$(json_path \"${1:-}\" \"${2:-}\")\")\"\n  sed -n \"${filter}\" | jsonsh\n}\n\n# Get integer value from json\nget_json_int_value() {\n  local filter\n  filter=\"$(printf 's/.*\\[%s\\][[:space:]]*\\([^\"]*\\)/\\\\1/p' \"$(json_path \"${1:-}\" \"${2:-}\")\")\"\n  sed -n \"${filter}\"\n}\n\n# Get boolean value from json\nget_json_bool_value() {\n  local filter\n  filter=\"$(printf 's/.*\\[%s\\][[:space:]]*\\([^\"]*\\)/\\\\1/p' \"$(json_path \"${1:-}\" \"${2:-}\")\")\"\n  sed -n \"${filter}\"\n}\n\n# JSON.sh JSON-parser\n# Modified from https://github.com/dominictarr/JSON.sh\n# Original Copyright (c) 2011 Dominic Tarr\n# Licensed under The MIT License\njsonsh() {\n\n  throw() {\n    echo \"$*\" >&2\n    exit 1\n  }\n\n  awk_egrep () {\n    local pattern_string=$1\n\n    awk '{\n      while ($0) {\n        start=match($0, pattern);\n        token=substr($0, start, RLENGTH);\n        print token;\n        $0=substr($0, start+RLENGTH);\n      }\n    }' pattern=\"$pattern_string\"\n  }\n\n  tokenize () {\n    local GREP\n    local ESCAPE\n    local CHAR\n\n    if echo \"test string\" | grep -Eao --color=never \"test\" >/dev/null 2>&1\n    then\n      GREP='grep -Eao --color=never'\n    else\n      GREP='grep -Eao'\n    fi\n\n    # shellcheck disable=SC2196\n    if echo \"test string\" | grep -Eao \"test\" 
>/dev/null 2>&1\n    then\n      ESCAPE='(\\\\[^u[:cntrl:]]|\\\\u[0-9a-fA-F]{4})'\n      CHAR='[^[:cntrl:]\"\\\\]'\n    else\n      GREP=awk_egrep\n      ESCAPE='(\\\\\\\\[^u[:cntrl:]]|\\\\u[0-9a-fA-F]{4})'\n      CHAR='[^[:cntrl:]\"\\\\\\\\]'\n    fi\n\n    local STRING=\"\\\"$CHAR*($ESCAPE$CHAR*)*\\\"\"\n    local NUMBER='-?(0|[1-9][0-9]*)([.][0-9]*)?([eE][+-]?[0-9]*)?'\n    local KEYWORD='null|false|true'\n    local SPACE='[[:space:]]+'\n\n    # Force zsh to expand $A into multiple words\n    local is_wordsplit_disabled\n    is_wordsplit_disabled=\"$(unsetopt 2>/dev/null | grep -c '^shwordsplit$' || true)\"\n    if [ \"${is_wordsplit_disabled}\" != \"0\" ]; then setopt shwordsplit; fi\n    $GREP \"$STRING|$NUMBER|$KEYWORD|$SPACE|.\" | grep -Ev \"^$SPACE$\"\n    if [ \"${is_wordsplit_disabled}\" != \"0\" ]; then unsetopt shwordsplit; fi\n  }\n\n  parse_array () {\n    local index=0\n    local ary=''\n    read -r token\n    case \"$token\" in\n      ']') ;;\n      *)\n        while :\n        do\n          parse_value \"$1\" \"$index\"\n          index=$((index+1))\n          ary=\"$ary\"\"$value\"\n          read -r token\n          case \"$token\" in\n            ']') break ;;\n            ',') ary=\"$ary,\" ;;\n            *) throw \"EXPECTED , or ] GOT ${token:-EOF}\" ;;\n          esac\n          read -r token\n        done\n        ;;\n    esac\n    value=$(printf '[%s]' \"$ary\") || value=\n    :\n  }\n\n  parse_object () {\n    local key\n    local obj=''\n    read -r token\n    case \"$token\" in\n      '}') ;;\n      *)\n        while :\n        do\n          case \"$token\" in\n            '\"'*'\"') key=$token ;;\n            *) throw \"EXPECTED string GOT ${token:-EOF}\" ;;\n          esac\n          read -r token\n          case \"$token\" in\n            ':') ;;\n            *) throw \"EXPECTED : GOT ${token:-EOF}\" ;;\n          esac\n          read -r token\n          parse_value \"$1\" \"$key\"\n          obj=\"$obj$key:$value\"\n          read 
-r token\n          case \"$token\" in\n            '}') break ;;\n            ',') obj=\"$obj,\" ;;\n            *) throw \"EXPECTED , or } GOT ${token:-EOF}\" ;;\n          esac\n          read -r token\n        done\n      ;;\n    esac\n    value=$(printf '{%s}' \"$obj\") || value=\n    :\n  }\n\n  parse_value () {\n    local jpath=\"${1:+$1,}${2:-}\"\n    case \"$token\" in\n      '{') parse_object \"$jpath\" ;;\n      '[') parse_array  \"$jpath\" ;;\n      # At this point, the only valid single-character tokens are digits.\n      ''|[!0-9]) throw \"EXPECTED value GOT ${token:-EOF}\" ;;\n      *) value=\"${token//\\\\\\///}\"\n         # replace solidus (\"\\/\") in json strings with normalized value: \"/\"\n         ;;\n    esac\n    [ \"$value\" = '' ] && return\n    [ -z \"$jpath\" ] && return # do not print head\n\n    printf \"[%s]\\t%s\\n\" \"$jpath\" \"$value\"\n    :\n  }\n\n  parse () {\n    read -r token\n    parse_value\n    read -r token || true\n    case \"$token\" in\n      '') ;;\n      *) throw \"EXPECTED EOF GOT $token\" ;;\n    esac\n  }\n\n  tokenize | parse\n}\n\n# Convert IP addresses to their reverse dns variants.\n# Used for ALPN certs as validation for IPs uses this in SNI since IPs aren't allowed there.\nip_to_ptr() {\n  ip=\"$(cat)\"\n  if [[ \"${ip}\" =~ : ]]; then\n    printf \"%sip6.arpa\" \"$(printf \"%s\" \"${ip}\" | awk -F: 'BEGIN {OFS=\"\"; }{addCount = 9 - NF; for(i=1; i<=NF;i++){if(length($i) == 0){ for(j=1;j<=addCount;j++){$i = ($i \"0000\");} } else { $i = substr((\"0000\" $i), length($i)+5-4);}}; print}' | rev | sed -e \"s/./&./g\")\"\n  else\n    printf \"%s.in-addr.arpa\" \"$(printf \"%s\" \"${ip}\" | awk -F. 
'{print $4\".\"$3\".\" $2\".\"$1}')\"\n  fi\n}\n\n# IPv6 conversion helpers\nipv6_expand() {\n  # expand double colons until 8 segments exist\n  # replace remaining double colon with single colon\n  # pad all segments to 4 characters with leading zeros\n  _sed \\\n    -e ':addsegs; /^([^:]*:){0,7}[^:]*$/{ s/::/:0000::/g; t addsegs; }' \\\n    -e 's/::/:/' \\\n    -e ':padsegs; s/(:|^)([^:]{0,3})(:|$)/\\10\\2\\3/g; t padsegs;'\n}\n\nipv6_shorten() {\n  # remove leading zeros from all segments\n  # find the longest matching run of zeros and replace with double colons (this could be prettier..)\n  _sed \\\n    -e ':unpadsegs;/(^|:)0/{s/(^|:)0([^:])/\\1\\2/g;t unpadsegs;}' \\\n    -e '/(^|:)(0(:|$)){8}/{ s/(^|:)(0(:|$)){8}/::/; t end; }' \\\n    -e '/(^|:)(0(:|$)){7}/{ s/(^|:)(0(:|$)){7}/::/; t end; }' \\\n    -e '/(^|:)(0(:|$)){6}/{ s/(^|:)(0(:|$)){6}/::/; t end; }' \\\n    -e '/(^|:)(0(:|$)){5}/{ s/(^|:)(0(:|$)){5}/::/; t end; }' \\\n    -e '/(^|:)(0(:|$)){4}/{ s/(^|:)(0(:|$)){4}/::/; t end; }' \\\n    -e '/(^|:)(0(:|$)){3}/{ s/(^|:)(0(:|$)){3}/::/; t end; }' \\\n    -e '/(^|:)(0(:|$)){2}/{ s/(^|:)(0(:|$)){2}/::/; t end; }' \\\n    -e ':end'\n}\n\nipv6_normalize() {\n  for domain in $(cat); do\n    if [[ \"${domain}\" =~ : ]]; then\n      printf \"%s\" \"${domain}\" | ipv6_expand | ipv6_shorten\n    else\n      printf \"%s\" \"${domain}\"\n    fi\n    printf \" \"\n  done | sed -e 's/ $//'\n}\n\n# Create (identifiable) temporary files\n_mktemp() {\n  mktemp \"${TMPDIR:-/tmp}/dehydrated-XXXXXX\"\n}\n\n# Check for script dependencies\ncheck_dependencies() {\n  # look for required binaries\n  for binary in grep mktemp diff sed awk curl cut head tail hexdump; do\n    bin_path=\"$(command -v \"${binary}\" 2>/dev/null)\" || _exiterr \"This script requires ${binary}.\"\n    [[ -x \"${bin_path}\" ]] || _exiterr \"${binary} found in PATH but it's not executable\"\n  done\n\n  # just execute some dummy and/or version commands to see if required tools are actually usable\n  
\"${OPENSSL}\" version > /dev/null 2>&1 || _exiterr \"This script requires an openssl binary.\"\n  _sed \"\" < /dev/null > /dev/null 2>&1 || _exiterr \"This script requires sed with support for extended (modern) regular expressions.\"\n\n  # curl returns with an error code in some ancient versions so we have to catch that\n  set +e\n  CURL_VERSION=\"$(curl -V 2>&1 | head -n1 | awk '{print $2}')\"\n  set -e\n}\n\nstore_configvars() {\n  __KEY_ALGO=\"${KEY_ALGO}\"\n  __OCSP_MUST_STAPLE=\"${OCSP_MUST_STAPLE}\"\n  __OCSP_FETCH=\"${OCSP_FETCH}\"\n  __OCSP_DAYS=\"${OCSP_DAYS}\"\n  __PRIVATE_KEY_RENEW=\"${PRIVATE_KEY_RENEW}\"\n  __PRIVATE_KEY_ROLLOVER=\"${PRIVATE_KEY_ROLLOVER}\"\n  __KEYSIZE=\"${KEYSIZE}\"\n  __CHALLENGETYPE=\"${CHALLENGETYPE}\"\n  __HOOK=\"${HOOK}\"\n  __PREFERRED_CHAIN=\"${PREFERRED_CHAIN}\"\n  __WELLKNOWN=\"${WELLKNOWN}\"\n  __HOOK_CHAIN=\"${HOOK_CHAIN}\"\n  __OPENSSL_CNF=\"${OPENSSL_CNF}\"\n  __RENEW_DAYS=\"${RENEW_DAYS}\"\n  __IP_VERSION=\"${IP_VERSION}\"\n  __ACME_PROFILE=\"${ACME_PROFILE}\"\n  __ORDER_TIMEOUT=${ORDER_TIMEOUT}\n  __VALIDATION_TIMEOUT=${VALIDATION_TIMEOUT}\n  __KEEP_GOING=${KEEP_GOING}\n}\n\nreset_configvars() {\n  KEY_ALGO=\"${__KEY_ALGO}\"\n  OCSP_MUST_STAPLE=\"${__OCSP_MUST_STAPLE}\"\n  OCSP_FETCH=\"${__OCSP_FETCH}\"\n  OCSP_DAYS=\"${__OCSP_DAYS}\"\n  PRIVATE_KEY_RENEW=\"${__PRIVATE_KEY_RENEW}\"\n  PRIVATE_KEY_ROLLOVER=\"${__PRIVATE_KEY_ROLLOVER}\"\n  KEYSIZE=\"${__KEYSIZE}\"\n  CHALLENGETYPE=\"${__CHALLENGETYPE}\"\n  HOOK=\"${__HOOK}\"\n  PREFERRED_CHAIN=\"${__PREFERRED_CHAIN}\"\n  WELLKNOWN=\"${__WELLKNOWN}\"\n  HOOK_CHAIN=\"${__HOOK_CHAIN}\"\n  OPENSSL_CNF=\"${__OPENSSL_CNF}\"\n  RENEW_DAYS=\"${__RENEW_DAYS}\"\n  IP_VERSION=\"${__IP_VERSION}\"\n  ACME_PROFILE=\"${__ACME_PROFILE}\"\n  ORDER_TIMEOUT=${__ORDER_TIMEOUT}\n  VALIDATION_TIMEOUT=${__VALIDATION_TIMEOUT}\n  KEEP_GOING=\"${__KEEP_GOING}\"\n}\n\nhookscript_bricker_hook() {\n  # Hook scripts should ignore any hooks they don't know.\n  # Calling a random hook to make this 
clear to the hook script authors...\n  if [[ -n \"${HOOK}\" ]]; then\n    \"${HOOK}\" \"this_hookscript_is_broken__dehydrated_is_working_fine__please_ignore_unknown_hooks_in_your_script\" || _exiterr \"Please check your hook script, it should exit cleanly without doing anything on unknown/new hooks.\"\n  fi\n}\n\n# verify configuration values\nverify_config() {\n  [[ \"${CHALLENGETYPE}\" == \"http-01\" || \"${CHALLENGETYPE}\" == \"dns-01\" || \"${CHALLENGETYPE}\" == \"dns-persist-01\" || \"${CHALLENGETYPE}\" == \"tls-alpn-01\" ]] || _exiterr \"Unknown challenge type ${CHALLENGETYPE}... cannot continue.\"\n  if [[ \"${COMMAND:-}\" =~ sign_domains|sign_csr ]]; then\n    if [[ \"${CHALLENGETYPE}\" = \"dns-01\" ]] && [[ -z \"${HOOK}\" ]]; then\n      _exiterr \"Challenge type dns-01 needs a hook script for deployment... cannot continue.\"\n    fi\n    if [[ \"${CHALLENGETYPE}\" = \"http-01\" ]] && [[ ! -d \"${WELLKNOWN}\" ]]; then\n      _exiterr \"WELLKNOWN directory doesn't exist, please create ${WELLKNOWN} and set appropriate permissions.\"\n    fi\n  fi\n  [[ \"${KEY_ALGO}\" == \"rsa\" || \"${KEY_ALGO}\" == \"prime256v1\" || \"${KEY_ALGO}\" == \"secp384r1\" || \"${KEY_ALGO}\" == \"secp521r1\" ]] || _exiterr \"Unknown public key algorithm ${KEY_ALGO}... cannot continue.\"\n  if [[ -n \"${IP_VERSION}\" ]]; then\n    [[ \"${IP_VERSION}\" = \"4\" || \"${IP_VERSION}\" = \"6\" ]] || _exiterr \"Unknown IP version ${IP_VERSION}... 
cannot continue.\"\n  fi\n  [[ \"${API}\" == \"auto\" || \"${API}\" == \"1\" || \"${API}\" == \"2\" ]] || _exiterr \"Unsupported API version defined in config: ${API}\"\n  [[ \"${OCSP_DAYS}\" =~ ^[0-9]+$ ]] || _exiterr \"OCSP_DAYS must be a number\"\n  [[ \"${ORDER_TIMEOUT}\" =~ ^[0-9]+$ ]] || _exiterr \"ORDER_TIMEOUT must be a number\"\n  [[ \"${VALIDATION_TIMEOUT}\" =~ ^[0-9]+$ ]] || _exiterr \"VALIDATION_TIMEOUT must be a number\"\n}\n\n# Setup default config values, search for and load configuration files\nload_config() {\n  # Check for config in various locations\n  if [[ -z \"${CONFIG:-}\" ]]; then\n    for check_config in \"/etc/dehydrated\" \"/usr/local/etc/dehydrated\" \"${PWD}\" \"${SCRIPTDIR}\"; do\n      if [[ -f \"${check_config}/config\" ]]; then\n        BASEDIR=\"${check_config}\"\n        CONFIG=\"${check_config}/config\"\n        break\n      fi\n    done\n  fi\n\n  # Preset\n  CA_ZEROSSL=\"https://acme.zerossl.com/v2/DV90\"\n  CA_LETSENCRYPT=\"https://acme-v02.api.letsencrypt.org/directory\"\n  CA_LETSENCRYPT_TEST=\"https://acme-staging-v02.api.letsencrypt.org/directory\"\n  CA_BUYPASS=\"https://api.buypass.com/acme/directory\"\n  CA_BUYPASS_TEST=\"https://api.test4.buypass.no/acme/directory\"\n  CA_GOOGLE=\"https://dv.acme-v02.api.pki.goog/directory\"\n  CA_GOOGLE_TEST=\"https://dv.acme-v02.test-api.pki.goog/directory\"\n\n  # Default values\n  CA=\"letsencrypt\"\n  OLDCA=\n  CERTDIR=\n  ALPNCERTDIR=\n  ACCOUNTDIR=\n  ACCOUNT_KEYSIZE=\"4096\"\n  ACCOUNT_KEY_ALGO=rsa\n  CHALLENGETYPE=\"http-01\"\n  CONFIG_D=\n  CURL_OPTS=\n  DOMAINS_D=\n  DOMAINS_TXT=\n  HOOK=\n  PREFERRED_CHAIN=\n  HOOK_CHAIN=\"no\"\n  RENEW_DAYS=\"69\"\n  KEYSIZE=\"4096\"\n  WELLKNOWN=\n  PRIVATE_KEY_RENEW=\"yes\"\n  PRIVATE_KEY_ROLLOVER=\"no\"\n  KEY_ALGO=secp384r1\n  OPENSSL=openssl\n  OPENSSL_CNF=\n  CONTACT_EMAIL=\n  LOCKFILE=\n  OCSP_MUST_STAPLE=\"no\"\n  OCSP_FETCH=\"no\"\n  OCSP_DAYS=5\n  IP_VERSION=\n  CHAINCACHE=\n  AUTO_CLEANUP=\"no\"\n  AUTO_CLEANUP_DELETE=\"no\"\n  
DEHYDRATED_USER=\n  DEHYDRATED_GROUP=\n  API=2\n  ACME_PROFILE=\"\"\n  ORDER_TIMEOUT=0\n  VALIDATION_TIMEOUT=0\n  KEEP_GOING=\"no\"\n\n  if [[ -z \"${CONFIG:-}\" ]]; then\n    echo \"#\" >&2\n    echo \"# BOA auto-config mode\" >&2\n    echo \"#\" >&2\n  elif [[ -f \"${CONFIG}\" ]]; then\n    echo \"# INFO: Using main config file ${CONFIG}\"\n    BASEDIR=\"$(dirname \"${CONFIG}\")\"\n    # shellcheck disable=SC1090\n    . \"${CONFIG}\"\n  else\n    _exiterr \"Specified config file doesn't exist.\"\n  fi\n\n  if [[ -n \"${CONFIG_D}\" ]]; then\n    if [[ ! -d \"${CONFIG_D}\" ]]; then\n      _exiterr \"The path ${CONFIG_D} specified for CONFIG_D does not point to a directory.\"\n    fi\n\n    # Allow globbing\n    noglob_set\n\n    for check_config_d in \"${CONFIG_D}\"/*.sh; do\n      if [[ -f \"${check_config_d}\" ]] && [[ -r \"${check_config_d}\" ]]; then\n        echo \"# INFO: Using additional config file ${check_config_d}\"\n        # shellcheck disable=SC1090\n        . \"${check_config_d}\"\n      else\n        _exiterr \"Specified additional config ${check_config_d} is not readable or not a file at all.\"\n      fi\n    done\n\n    # Disable globbing\n    noglob_clear\n  fi\n\n  # Check for missing dependencies\n  check_dependencies\n\n  has_sudo() {\n    command -v sudo > /dev/null 2>&1 || _exiterr \"DEHYDRATED_USER set but sudo not available. Please install sudo.\"\n  }\n\n  # Check if we are running & are allowed to run as root\n  if [[ -n \"$DEHYDRATED_USER\" ]]; then\n    command -v getent > /dev/null 2>&1 || _exiterr \"DEHYDRATED_USER set but getent not available. 
Please install getent.\"\n\n    TARGET_UID=\"$(getent passwd \"${DEHYDRATED_USER}\" | cut -d':' -f3)\" || _exiterr \"DEHYDRATED_USER ${DEHYDRATED_USER} is invalid\"\n    if [[ -z \"${DEHYDRATED_GROUP}\" ]]; then\n      if [[ \"${EUID}\" != \"${TARGET_UID}\" ]]; then\n        echo \"# INFO: Running $0 as ${DEHYDRATED_USER}\"\n        has_sudo && exec sudo -u \"${DEHYDRATED_USER}\" \"${0}\" \"${ORIGARGS[@]}\"\n      fi\n    else\n      TARGET_GID=\"$(getent group \"${DEHYDRATED_GROUP}\" | cut -d':' -f3)\" || _exiterr \"DEHYDRATED_GROUP ${DEHYDRATED_GROUP} is invalid\"\n      if [[ -z \"${EGID:-}\" ]]; then\n        command -v id > /dev/null 2>&1 || _exiterr \"DEHYDRATED_GROUP set, don't know current gid and 'id' not available... Please provide 'id' binary.\"\n        EGID=\"$(id -g)\"\n      fi\n      if [[ \"${EUID}\" != \"${TARGET_UID}\" ]] || [[ \"${EGID}\" != \"${TARGET_GID}\" ]]; then\n        echo \"# INFO: Running $0 as ${DEHYDRATED_USER}/${DEHYDRATED_GROUP}\"\n        has_sudo && exec sudo -u \"${DEHYDRATED_USER}\" -g \"${DEHYDRATED_GROUP}\" \"${0}\" \"${ORIGARGS[@]}\"\n      fi\n    fi\n  elif [[ -n \"${DEHYDRATED_GROUP}\" ]]; then\n    _exiterr \"DEHYDRATED_GROUP can only be used in combination with DEHYDRATED_USER.\"\n  fi\n\n  # Remove slash from end of BASEDIR. 
Mostly for cleaner outputs, doesn't change functionality.\n  [[ \"$BASEDIR\" != \"/\" ]] && BASEDIR=\"${BASEDIR%%/}\"\n\n  # Check BASEDIR and set default variables\n  [[ -d \"${BASEDIR}\" ]] || _exiterr \"BASEDIR does not exist: ${BASEDIR}\"\n\n  # Check for ca cli parameter\n  if [ -n \"${PARAM_CA:-}\" ]; then\n    CA=\"${PARAM_CA}\"\n  fi\n\n  # Preset CAs\n  if [ \"${CA}\" = \"letsencrypt\" ]; then\n    CA=\"${CA_LETSENCRYPT}\"\n  elif [ \"${CA}\" = \"letsencrypt-test\" ]; then\n    CA=\"${CA_LETSENCRYPT_TEST}\"\n  elif [ \"${CA}\" = \"zerossl\" ]; then\n    CA=\"${CA_ZEROSSL}\"\n  elif [ \"${CA}\" = \"buypass\" ]; then\n    CA=\"${CA_BUYPASS}\"\n  elif [ \"${CA}\" = \"buypass-test\" ]; then\n    CA=\"${CA_BUYPASS_TEST}\"\n  elif [ \"${CA}\" = \"google\" ]; then\n    CA=\"${CA_GOOGLE}\"\n  elif [ \"${CA}\" = \"google-test\" ]; then\n    CA=\"${CA_GOOGLE_TEST}\"\n  fi\n\n  if [[ -z \"${OLDCA}\" ]] && [[ \"${CA}\" = \"https://acme-v02.api.letsencrypt.org/directory\" ]]; then\n    OLDCA=\"https://acme-v01.api.letsencrypt.org/directory\"\n  fi\n\n  # Create new account directory or symlink to account directory from old CA\n  # dev note: keep in mind that because of the use of 'echo' instead of 'printf' or\n  # similar there is a newline encoded in the directory name. not going to fix this\n  # since it's a non-issue and trying to fix existing installations would be too much\n  # trouble\n  CAHASH=\"$(echo \"${CA}\" | urlbase64)\"\n  [[ -z \"${ACCOUNTDIR}\" ]] && ACCOUNTDIR=\"${BASEDIR}/accounts\"\n  if [[ ! -e \"${ACCOUNTDIR}/${CAHASH}\" ]]; then\n    OLDCAHASH=\"$(echo \"${OLDCA}\" | urlbase64)\"\n    mkdir -p \"${ACCOUNTDIR}\"\n    if [[ -n \"${OLDCA}\" ]] && [[ -e \"${ACCOUNTDIR}/${OLDCAHASH}\" ]]; then\n      echo \"! Reusing account from ${OLDCA}\"\n      ln -s \"${OLDCAHASH}\" \"${ACCOUNTDIR}/${CAHASH}\"\n    else\n      mkdir \"${ACCOUNTDIR}/${CAHASH}\"\n    fi\n  fi\n\n  # shellcheck disable=SC1090\n  [[ -f \"${ACCOUNTDIR}/${CAHASH}/config\" ]] && . 
\"${ACCOUNTDIR}/${CAHASH}/config\"\n  ACCOUNT_KEY=\"${ACCOUNTDIR}/${CAHASH}/account_key.pem\"\n  ACCOUNT_KEY_JSON=\"${ACCOUNTDIR}/${CAHASH}/registration_info.json\"\n  ACCOUNT_ID_JSON=\"${ACCOUNTDIR}/${CAHASH}/account_id.json\"\n  ACCOUNT_DEACTIVATED=\"${ACCOUNTDIR}/${CAHASH}/deactivated\"\n\n  if [[ -f \"${ACCOUNT_DEACTIVATED}\" ]]; then\n    _exiterr \"Account has been deactivated. Remove account and create a new one using --register.\"\n  fi\n\n  if [[ -f \"${BASEDIR}/private_key.pem\" ]] && [[ ! -f \"${ACCOUNT_KEY}\" ]]; then\n    echo \"! Moving private_key.pem to ${ACCOUNT_KEY}\"\n    mv \"${BASEDIR}/private_key.pem\" \"${ACCOUNT_KEY}\"\n  fi\n  if [[ -f \"${BASEDIR}/private_key.json\" ]] && [[ ! -f \"${ACCOUNT_KEY_JSON}\" ]]; then\n    echo \"! Moving private_key.json to ${ACCOUNT_KEY_JSON}\"\n    mv \"${BASEDIR}/private_key.json\" \"${ACCOUNT_KEY_JSON}\"\n  fi\n\n  [[ -z \"${CERTDIR}\" ]] && CERTDIR=\"${BASEDIR}/certs\"\n  [[ -z \"${ALPNCERTDIR}\" ]] && ALPNCERTDIR=\"${BASEDIR}/alpn-certs\"\n  [[ -z \"${CHAINCACHE}\" ]] && CHAINCACHE=\"${BASEDIR}/chains\"\n  [[ -z \"${DOMAINS_TXT}\" ]] && DOMAINS_TXT=\"${BASEDIR}/domains.txt\"\n  [[ -z \"${WELLKNOWN}\" ]] && WELLKNOWN=\"${BASEDIR}/.acme-challenges\"\n  [[ -z \"${LOCKFILE}\" ]] && LOCKFILE=\"${BASEDIR}/lock\"\n  [[ -z \"${OPENSSL_CNF}\" ]] && OPENSSL_CNF=\"$(\"${OPENSSL}\" version -d | cut -d\\\" -f2)/openssl.cnf\"\n  [[ -n \"${PARAM_LOCKFILE_SUFFIX:-}\" ]] && LOCKFILE=\"${LOCKFILE}-${PARAM_LOCKFILE_SUFFIX}\"\n  [[ -n \"${PARAM_NO_LOCK:-}\" ]] && LOCKFILE=\"\"\n\n  [[ -n \"${PARAM_HOOK:-}\" ]] && HOOK=\"${PARAM_HOOK}\"\n  [[ -n \"${PARAM_DOMAINS_TXT:-}\" ]] && DOMAINS_TXT=\"${PARAM_DOMAINS_TXT}\"\n  [[ -n \"${PARAM_PREFERRED_CHAIN:-}\" ]] && PREFERRED_CHAIN=\"${PARAM_PREFERRED_CHAIN}\"\n  [[ -n \"${PARAM_CERTDIR:-}\" ]] && CERTDIR=\"${PARAM_CERTDIR}\"\n  [[ -n \"${PARAM_ALPNCERTDIR:-}\" ]] && ALPNCERTDIR=\"${PARAM_ALPNCERTDIR}\"\n  [[ -n \"${PARAM_CHALLENGETYPE:-}\" ]] && 
CHALLENGETYPE=\"${PARAM_CHALLENGETYPE}\"\n  [[ -n \"${PARAM_KEY_ALGO:-}\" ]] && KEY_ALGO=\"${PARAM_KEY_ALGO}\"\n  [[ -n \"${PARAM_OCSP_MUST_STAPLE:-}\" ]] && OCSP_MUST_STAPLE=\"${PARAM_OCSP_MUST_STAPLE}\"\n  [[ -n \"${PARAM_IP_VERSION:-}\" ]] && IP_VERSION=\"${PARAM_IP_VERSION}\"\n  [[ -n \"${PARAM_ACME_PROFILE:-}\" ]] && ACME_PROFILE=\"${PARAM_ACME_PROFILE}\"\n  [[ -n \"${PARAM_ORDER_TIMEOUT:-}\" ]] && ORDER_TIMEOUT=\"${PARAM_ORDER_TIMEOUT}\"\n  [[ -n \"${PARAM_VALIDATION_TIMEOUT:-}\" ]] && VALIDATION_TIMEOUT=\"${PARAM_VALIDATION_TIMEOUT}\"\n  [[ -n \"${PARAM_KEEP_GOING:-}\" ]] && KEEP_GOING=\"${PARAM_KEEP_GOING}\"\n\n  if [ \"${PARAM_FORCE_VALIDATION:-no}\" = \"yes\" ] && [ \"${PARAM_FORCE:-no}\" = \"no\" ]; then\n    _exiterr \"Argument --force-validation can only be used in combination with --force (-x)\"\n  fi\n\n  if [ ! \"${1:-}\" = \"noverify\" ]; then\n    verify_config\n  fi\n  store_configvars\n}\n\n# Initialize system\ninit_system() {\n  load_config\n\n  # Lockfile handling (prevents concurrent access)\n  if [[ -n \"${LOCKFILE}\" ]]; then\n    LOCKDIR=\"$(dirname \"${LOCKFILE}\")\"\n    [[ -w \"${LOCKDIR}\" ]] || _exiterr \"Directory ${LOCKDIR} for LOCKFILE ${LOCKFILE} is not writable, aborting.\"\n    ( set -C; date > \"${LOCKFILE}\" ) 2>/dev/null || _exiterr \"Lock file '${LOCKFILE}' present, aborting.\"\n    remove_lock() { rm -f \"${LOCKFILE}\"; }\n    trap 'remove_lock' EXIT\n  fi\n\n  # Get CA URLs\n  CA_DIRECTORY=\"$(http_request get \"${CA}\" | jsonsh)\"\n\n  # Automatic discovery of API version\n  if [[ \"${API}\" = \"auto\" ]]; then\n    grep -q newOrder <<< \"${CA_DIRECTORY}\" && API=2 || API=1\n  fi\n\n  # shellcheck disable=SC2015\n  if [[ \"${API}\" = \"1\" ]]; then\n    CA_NEW_CERT=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value new-cert)\" &&\n    CA_NEW_AUTHZ=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value new-authz)\" &&\n    CA_NEW_REG=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value new-reg)\" 
&&\n    CA_TERMS=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value terms-of-service)\" &&\n    CA_REQUIRES_EAB=\"false\" &&\n    CA_REVOKE_CERT=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value revoke-cert)\" ||\n    _exiterr \"Problem retrieving ACME/CA-URLs, check if your configured CA points to the directory entrypoint.\"\n    # Since reg URI is missing from directory we will assume it is the same as CA_NEW_REG without the new part\n    CA_REG=${CA_NEW_REG/new-reg/reg}\n\n    if [[ -n \"${ACME_PROFILE}\" ]]; then\n      _exiterr \"ACME profiles are not supported in ACME v1.\"\n    fi\n  else\n    CA_NEW_ORDER=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value newOrder)\" &&\n    CA_NEW_NONCE=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value newNonce)\" &&\n    CA_NEW_ACCOUNT=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value newAccount)\" &&\n    CA_TERMS=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value -p '\"meta\",\"termsOfService\"')\" &&\n    CA_REQUIRES_EAB=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_bool_value -p '\"meta\",\"externalAccountRequired\"' || echo false)\" &&\n    CA_REVOKE_CERT=\"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_string_value revokeCert)\" ||\n    _exiterr \"Problem retrieving ACME/CA-URLs, check if your configured CA points to the directory entrypoint.\"\n\n    # Checking ACME profile\n    if [[ -n \"${ACME_PROFILE}\" ]]; then\n      # Extract available profiles from CA directory\n      declare -A available_profiles=()\n      while IFS=$'\\t' read -r path value; do\n        if [[ \"${value}\" =~ ^\\\"([^\\\"]+)\\\"$ ]]; then\n          value=${BASH_REMATCH[1]}\n        fi\n        if [[ \"${path}\" =~ ^\\[\\\"([^\\\"]+)\\\"\\]$ ]]; then\n          available_profiles[${BASH_REMATCH[1]}]=$value\n        fi\n      done <<< \"$(printf \"%s\" \"${CA_DIRECTORY}\" | get_json_dict_value -p '\"meta\",\"profiles\"' 2>/dev/null)\"\n      if [[ 
${#available_profiles[@]} -eq 0 ]]; then\n          _exiterr \"ACME profile not supported by this CA\"\n      fi\n\n      # Check if the requested profile is available\n      found_profile=\"no\"\n      for profile in \"${!available_profiles[@]}\"; do\n        if [[ \"${profile}\" == \"${ACME_PROFILE}\" ]]; then\n          found_profile=\"yes\"\n          break\n        fi\n      done\n      if [[ \"${found_profile}\" == \"no\" ]]; then\n        _exiterr \"ACME profile '${ACME_PROFILE}' not found, available profiles:$(for key in \"${!available_profiles[@]}\"; do printf \"\\n  %s: %s\" \"${key}\" \"${available_profiles[$key]}\"; done)\"\n      fi\n    fi\n  fi\n\n  # Export some environment variables to be used in hook script\n  export WELLKNOWN BASEDIR CERTDIR ALPNCERTDIR CONFIG COMMAND\n\n  # Checking for private key ...\n  register_new_key=\"no\"\n  generated=\"false\"\n  if [[ -n \"${PARAM_ACCOUNT_KEY:-}\" ]]; then\n    # a private key was specified from the command line so use it for this run\n    echo \"Using private key ${PARAM_ACCOUNT_KEY} instead of account key\"\n    ACCOUNT_KEY=\"${PARAM_ACCOUNT_KEY}\"\n    ACCOUNT_KEY_JSON=\"${PARAM_ACCOUNT_KEY}.json\"\n    ACCOUNT_ID_JSON=\"${PARAM_ACCOUNT_KEY}_id.json\"\n    [ \"${COMMAND:-}\" = \"register\" ] && register_new_key=\"yes\"\n  else\n    # Check if private account key exists, if it doesn't exist yet generate a new one (rsa key)\n    if [[ ! -e \"${ACCOUNT_KEY}\" ]]; then\n      if [[ ! 
\"${PARAM_ACCEPT_TERMS:-}\" = \"yes\" ]]; then\n        printf '\\n' >&2\n        printf 'To use dehydrated with this certificate authority you have to agree to their terms of service which you can find here: %s\\n\\n' \"${CA_TERMS}\" >&2\n        printf 'To accept these terms of service run \"%s --register --accept-terms\".\\n' \"${0}\" >&2\n        exit 1\n      fi\n\n      echo \"+ Generating account key...\"\n      generated=\"true\"\n      local tmp_account_key\n      tmp_account_key=\"$(_mktemp)\"\n      if [[ ${API} -eq 1 && ! \"${ACCOUNT_KEY_ALGO}\" = \"rsa\" ]]; then\n        _exiterr \"ACME API version 1 does not support EC account keys\"\n      fi\n      case \"${ACCOUNT_KEY_ALGO}\" in\n        rsa) _openssl genrsa -out \"${tmp_account_key}\" \"${ACCOUNT_KEYSIZE}\";;\n        prime256v1|secp384r1|secp521r1) _openssl ecparam -genkey -name \"${ACCOUNT_KEY_ALGO}\" -out \"${tmp_account_key}\" -noout;;\n      esac\n      cat \"${tmp_account_key}\" > \"${ACCOUNT_KEY}\"\n      rm \"${tmp_account_key}\"\n      register_new_key=\"yes\"\n    fi\n  fi\n\n  if (\"${OPENSSL}\" rsa -in \"${ACCOUNT_KEY}\" -check 2>/dev/null > /dev/null); then\n    # Get public components from private key and calculate thumbprint\n    pubExponent64=\"$(printf '%x' \"$(\"${OPENSSL}\" rsa -in \"${ACCOUNT_KEY}\" -noout -text | awk '/publicExponent/ {print $2}')\" | hex2bin | urlbase64)\"\n    pubMod64=\"$(\"${OPENSSL}\" rsa -in \"${ACCOUNT_KEY}\" -noout -modulus | cut -d'=' -f2 | hex2bin | urlbase64)\"\n\n    account_key_info=\"$(printf '{\"e\":\"%s\",\"kty\":\"RSA\",\"n\":\"%s\"}' \"${pubExponent64}\" \"${pubMod64}\")\"\n    account_key_sigalgo=RS256\n  elif (\"${OPENSSL}\" ec -in \"${ACCOUNT_KEY}\" -check 2>/dev/null > /dev/null); then\n    curve=\"$(\"${OPENSSL}\" ec -in \"${ACCOUNT_KEY}\" -noout -text 2>/dev/null | grep 'NIST CURVE' | cut -d':' -f2 | tr -d ' ')\"\n    pubkey=\"$(\"${OPENSSL}\" ec -in \"${ACCOUNT_KEY}\" -noout -text 2>/dev/null | tr -d '\\n ' | grep -Eo 'pub:.*ASN1' | 
_sed -e 's/^pub://' -e 's/ASN1$//' | tr -d ':')\"\n\n    if [ \"${curve}\" = \"P-256\" ]; then\n      account_key_sigalgo=\"ES256\"\n    elif [ \"${curve}\" = \"P-384\" ]; then\n      account_key_sigalgo=\"ES384\"\n    elif [ \"${curve}\" = \"P-521\" ]; then\n      account_key_sigalgo=\"ES512\"\n    else\n      _exiterr \"Unknown account key curve: ${curve}\"\n    fi\n\n    ec_x_offset=2\n    ec_x_len=$((${#pubkey}/2 - 1))\n    ec_x=\"${pubkey:$ec_x_offset:$ec_x_len}\"\n    ec_x64=\"$(printf \"%s\" \"${ec_x}\" | hex2bin | urlbase64)\"\n\n    ec_y_offset=$((ec_x_offset+ec_x_len))\n    ec_y_len=$((${#pubkey}-ec_y_offset))\n    ec_y=\"${pubkey:$ec_y_offset:$ec_y_len}\"\n    ec_y64=\"$(printf \"%s\" \"${ec_y}\" | hex2bin | urlbase64)\"\n\n    account_key_info=\"$(printf '{\"crv\":\"%s\",\"kty\":\"EC\",\"x\":\"%s\",\"y\":\"%s\"}' \"${curve}\" \"${ec_x64}\" \"${ec_y64}\")\"\n  else\n    _exiterr \"Account key is not valid, cannot continue.\"\n  fi\n  thumbprint=\"$(printf '%s' \"${account_key_info}\" | \"${OPENSSL}\" dgst -sha256 -binary | urlbase64)\"\n\n  # If we generated a new private key in the step above we have to register it with the acme-server\n  if [[ \"${register_new_key}\" = \"yes\" ]]; then\n    echo \"+ Registering account key with ACME server...\"\n    FAILED=false\n\n    if [[ ${API} -eq 1 && -z \"${CA_NEW_REG}\" ]] || [[ ${API} -eq 2 && -z \"${CA_NEW_ACCOUNT}\" ]]; then\n      echo \"Certificate authority doesn't allow registrations.\"\n      FAILED=true\n    fi\n\n    # ZeroSSL special sauce\n    if [[ \"${CA}\" = \"${CA_ZEROSSL}\" ]]; then\n      if [[ -z \"${EAB_KID:-}\" ]] ||  [[ -z \"${EAB_HMAC_KEY:-}\" ]]; then\n        if [[ -z \"${CONTACT_EMAIL}\" ]]; then\n          echo \"ZeroSSL requires contact email to be set or EAB_KID/EAB_HMAC_KEY to be manually configured\"\n          FAILED=true\n        else\n          zeroapi=\"$(curl ${ip_version:-} -A \"dehydrated/${VERSION} curl/${CURL_VERSION}\" ${CURL_OPTS} -s 
\"https://api.zerossl.com/acme/eab-credentials-email\" -d \"email=${CONTACT_EMAIL}\" | jsonsh)\"\n          EAB_KID=\"$(printf \"%s\" \"${zeroapi}\" | get_json_string_value eab_kid)\"\n          EAB_HMAC_KEY=\"$(printf \"%s\" \"${zeroapi}\" | get_json_string_value eab_hmac_key)\"\n          if [[ -z \"${EAB_KID:-}\" ]] || [[ -z \"${EAB_HMAC_KEY:-}\" ]]; then\n            echo \"Unknown error retrieving ZeroSSL API credentials\"\n            echo \"${zeroapi}\"\n            FAILED=true\n          fi\n        fi\n      fi\n    fi\n\n    # Google special sauce\n    if [[ \"${CA}\" = \"${CA_GOOGLE}\" ]]; then\n      if [[ -z \"${CONTACT_EMAIL}\" ]] || [[ -z \"${EAB_KID:-}\" ]] || [[ -z \"${EAB_HMAC_KEY:-}\" ]]; then\n          echo \"Google requires contact email, EAB_KID and EAB_HMAC_KEY to be manually configured (see https://cloud.google.com/certificate-manager/docs/public-ca-tutorial)\"\n          FAILED=true\n      fi\n    fi\n\n    # Check if external account is required\n    if [[ \"${FAILED}\" = \"false\" ]]; then\n      if [[ \"${CA_REQUIRES_EAB}\" = \"true\" ]]; then\n        if [[ -z \"${EAB_KID:-}\" ]] || [[ -z \"${EAB_HMAC_KEY:-}\" ]]; then\n          FAILED=true\n          echo \"This CA requires an external account but no EAB_KID/EAB_HMAC_KEY has been configured\"\n        fi\n      fi\n    fi\n\n    # If a contact email has been provided, add it to the registration request\n    if [[ \"${FAILED}\" = \"false\" ]]; then\n      if [[ ${API} -eq 1 ]]; then\n        if [[ -n \"${CONTACT_EMAIL}\" ]]; then\n          (signed_request \"${CA_NEW_REG}\" '{\"resource\": \"new-reg\", \"contact\":[\"mailto:'\"${CONTACT_EMAIL}\"'\"], \"agreement\": \"'\"${CA_TERMS}\"'\"}' > \"${ACCOUNT_KEY_JSON}\") || FAILED=true\n        else\n          (signed_request \"${CA_NEW_REG}\" '{\"resource\": \"new-reg\", \"agreement\": \"'\"${CA_TERMS}\"'\"}' > \"${ACCOUNT_KEY_JSON}\") || FAILED=true\n        fi\n      else\n        if [[ -n \"${EAB_KID:-}\" ]] && [[ -n 
\"${EAB_HMAC_KEY:-}\" ]]; then\n          eab_url=\"${CA_NEW_ACCOUNT}\"\n          eab_protected64=\"$(printf '{\"alg\":\"HS256\",\"kid\":\"%s\",\"url\":\"%s\"}' \"${EAB_KID}\" \"${eab_url}\" | urlbase64)\"\n          eab_payload64=\"$(printf \"%s\" \"${account_key_info}\" | urlbase64)\"\n          eab_key=\"$(printf \"%s\" \"${EAB_HMAC_KEY}\" | deurlbase64 | bin2hex)\"\n          eab_signed64=\"$(printf '%s' \"${eab_protected64}.${eab_payload64}\" | \"${OPENSSL}\" dgst -binary -sha256 -mac HMAC -macopt \"hexkey:${eab_key}\" | urlbase64)\"\n\n          if [[ -n \"${CONTACT_EMAIL}\" ]]; then\n            regjson='{\"contact\":[\"mailto:'\"${CONTACT_EMAIL}\"'\"], \"termsOfServiceAgreed\": true, \"externalAccountBinding\": {\"protected\": \"'\"${eab_protected64}\"'\", \"payload\": \"'\"${eab_payload64}\"'\", \"signature\": \"'\"${eab_signed64}\"'\"}}'\n          else\n            regjson='{\"termsOfServiceAgreed\": true, \"externalAccountBinding\": {\"protected\": \"'\"${eab_protected64}\"'\", \"payload\": \"'\"${eab_payload64}\"'\", \"signature\": \"'\"${eab_signed64}\"'\"}}'\n          fi\n        else\n          if [[ -n \"${CONTACT_EMAIL}\" ]]; then\n            regjson='{\"contact\":[\"mailto:'\"${CONTACT_EMAIL}\"'\"], \"termsOfServiceAgreed\": true}'\n          else\n            regjson='{\"termsOfServiceAgreed\": true}'\n          fi\n        fi\n        (signed_request \"${CA_NEW_ACCOUNT}\" \"${regjson}\" > \"${ACCOUNT_KEY_JSON}\") || FAILED=true\n      fi\n    fi\n\n    if [[ \"${FAILED}\" = \"true\" ]]; then\n      echo >&2\n      echo >&2\n      echo \"Error registering account key. 
See message above for more information.\" >&2\n      if [[ \"${generated}\" = \"true\" ]]; then\n        rm \"${ACCOUNT_KEY}\"\n      fi\n      rm -f \"${ACCOUNT_KEY_JSON}\"\n      exit 1\n    fi\n  elif [[ \"${COMMAND:-}\" = \"register\" ]]; then\n    echo \"+ Account already registered!\"\n    exit 0\n  fi\n\n  # Read account information or request from CA if missing\n  if [[ -e \"${ACCOUNT_KEY_JSON}\" ]]; then\n    if [[ ${API} -eq 1 ]]; then\n      ACCOUNT_ID=\"$(jsonsh < \"${ACCOUNT_KEY_JSON}\" | get_json_int_value id)\"\n      ACCOUNT_URL=\"${CA_REG}/${ACCOUNT_ID}\"\n    else\n      if [[ -e \"${ACCOUNT_ID_JSON}\" ]]; then\n        ACCOUNT_URL=\"$(jsonsh < \"${ACCOUNT_ID_JSON}\" | get_json_string_value url)\"\n      fi\n      # if account URL is not stored, fetch it from the CA\n      if [[ -z \"${ACCOUNT_URL:-}\" ]]; then\n        echo \"+ Fetching account URL...\"\n        ACCOUNT_URL=\"$(signed_request \"${CA_NEW_ACCOUNT}\" '{\"onlyReturnExisting\": true}' 4>&1 | grep -i ^Location: | cut -d':' -f2- | tr -d ' \\t\\r\\n')\"\n        if [[ -z \"${ACCOUNT_URL}\" ]]; then\n          _exiterr \"Unknown error on fetching account information\"\n        fi\n        echo '{\"url\":\"'\"${ACCOUNT_URL}\"'\"}' > \"${ACCOUNT_ID_JSON}\" # store the URL for next time\n      fi\n    fi\n  else\n    echo \"Fetching missing account information from CA...\"\n    if [[ ${API} -eq 1 ]]; then\n      _exiterr \"This is not implemented for ACMEv1! 
Consider switching to ACMEv2 :)\"\n    else\n      ACCOUNT_URL=\"$(signed_request \"${CA_NEW_ACCOUNT}\" '{\"onlyReturnExisting\": true}' 4>&1 | grep -i ^Location: | cut -d':' -f2- | tr -d ' \\t\\r\\n')\"\n      ACCOUNT_INFO=\"$(signed_request \"${ACCOUNT_URL}\" '{}')\"\n    fi\n    echo \"${ACCOUNT_INFO}\" > \"${ACCOUNT_KEY_JSON}\"\n  fi\n}\n\n# Different sed version for different os types...\n_sed() {\n  if [[ \"${OSTYPE}\" = \"Linux\" || \"${OSTYPE:0:5}\" = \"MINGW\" ]]; then\n    sed -r \"${@}\"\n  else\n    sed -E \"${@}\"\n  fi\n}\n\n# Print error message and exit with error\n_exiterr() {\n  if [ -n \"${1:-}\" ]; then\n    echo \"ERROR: ${1}\" >&2\n  fi\n  [[ \"${skip_exit_hook:-no}\" = \"no\" ]] && [[ -n \"${HOOK:-}\" ]] && (\"${HOOK}\" \"exit_hook\" \"${1:-}\" || echo 'exit_hook returned with non-zero exit code!' >&2)\n  exit 1\n}\n\n# Remove newlines and whitespace from json\nclean_json() {\n  tr -d '\\r\\n' | _sed -e 's/ +/ /g' -e 's/\\{ /{/g' -e 's/ \\}/}/g' -e 's/\\[ /[/g' -e 's/ \\]/]/g'\n}\n\n# Encode data as url-safe formatted base64\nurlbase64() {\n  # urlbase64: base64 encoded string with '+' replaced with '-' and '/' replaced with '_'\n  \"${OPENSSL}\" base64 -e | tr -d '\\n\\r' | _sed -e 's:=*$::g' -e 'y:+/:-_:'\n}\n\n# Decode data from url-safe formatted base64\ndeurlbase64() {\n  data=\"$(cat | tr -d ' \\n\\r')\"\n  modlen=$((${#data} % 4))\n  padding=\"\"\n  if [[ \"${modlen}\" = \"2\" ]]; then padding=\"==\";\n  elif [[ \"${modlen}\" = \"3\" ]]; then padding=\"=\"; fi\n  printf \"%s%s\" \"${data}\" \"${padding}\" | tr -d '\\n\\r' | _sed -e 'y:-_:+/:' | \"${OPENSSL}\" base64 -d -A\n}\n\n# Convert hex string to binary data\nhex2bin() {\n  # Remove spaces, add leading zero, escape as hex string and parse with printf\n  # shellcheck disable=SC2059\n  printf \"%b\" \"$(cat | _sed -e 's/[[:space:]]//g' -e 's/^(.(.{2})*)$/0\\1/' -e 's/(.{2})/\\\\x\\1/g')\"\n}\n\n# Convert binary data to hex string\nbin2hex() {\n  hexdump -v -e '/1 \"%02x\"'\n}\n\n# 
OpenSSL writes to stderr/stdout even when there are no errors. So just\n# display the output if the exit code was != 0 to simplify debugging.\n_openssl() {\n  set +e\n  out=\"$(\"${OPENSSL}\" \"${@}\" 2>&1)\"\n  res=$?\n  set -e\n  if [[ ${res} -ne 0 ]]; then\n    echo \"  + ERROR: failed to run $* (Exitcode: ${res})\" >&2\n    echo >&2\n    echo \"Details:\" >&2\n    echo \"${out}\" >&2\n    echo >&2\n    exit \"${res}\"\n  fi\n}\n\n# Send http(s) request with specified method\nhttp_request() {\n  tempcont=\"$(_mktemp)\"\n  tempheaders=\"$(_mktemp)\"\n\n  if [[ -n \"${IP_VERSION:-}\" ]]; then\n      ip_version=\"-${IP_VERSION}\"\n  fi\n\n  set +e\n  # shellcheck disable=SC2086\n  if [[ \"${1}\" = \"head\" ]]; then\n    statuscode=\"$(curl ${ip_version:-} -A \"dehydrated/${VERSION} curl/${CURL_VERSION}\" ${CURL_OPTS} -s -w \"%{http_code}\" -o \"${tempcont}\" -H 'Cache-Control: no-cache' \"${2}\" -I)\"\n    curlret=\"${?}\"\n    touch \"${tempheaders}\"\n  elif [[ \"${1}\" = \"get\" ]]; then\n    statuscode=\"$(curl ${ip_version:-} -A \"dehydrated/${VERSION} curl/${CURL_VERSION}\" ${CURL_OPTS} -L -s -w \"%{http_code}\" -o \"${tempcont}\" -D \"${tempheaders}\" -H 'Cache-Control: no-cache' \"${2}\")\"\n    curlret=\"${?}\"\n  elif [[ \"${1}\" = \"post\" ]]; then\n    statuscode=\"$(curl ${ip_version:-} -A \"dehydrated/${VERSION} curl/${CURL_VERSION}\" ${CURL_OPTS} -s -w \"%{http_code}\" -o \"${tempcont}\" \"${2}\" -D \"${tempheaders}\" -H 'Cache-Control: no-cache' -H 'Content-Type: application/jose+json' -d \"${3}\")\"\n    curlret=\"${?}\"\n  else\n    set -e\n    _exiterr \"Unknown request method: ${1}\"\n  fi\n  set -e\n\n  if [[ ! \"${curlret}\" = \"0\" ]]; then\n    _exiterr \"Problem connecting to server (${1} for ${2}; curl returned with ${curlret})\"\n  fi\n\n  if [[ ! 
\"${statuscode:0:1}\" = \"2\" ]]; then\n    # check for existing registration warning\n    if [[ \"${API}\" = \"1\" ]] && [[ -n \"${CA_NEW_REG:-}\" ]] && [[ \"${2}\" = \"${CA_NEW_REG:-}\" ]] && [[ \"${statuscode}\" = \"409\" ]] && grep -q \"Registration key is already in use\" \"${tempcont}\"; then\n      # do nothing\n      :\n    # check for already-revoked warning\n    elif [[ -n \"${CA_REVOKE_CERT:-}\" ]] && [[ \"${2}\" = \"${CA_REVOKE_CERT:-}\" ]] && [[ \"${statuscode}\" = \"409\" ]]; then\n      grep -q \"Certificate already revoked\" \"${tempcont}\" && return\n    else\n      if grep -q \"urn:ietf:params:acme:error:badNonce\" \"${tempcont}\"; then\n        printf \"badnonce %s\" \"$(grep -Eoi \"^replay-nonce:.*$\" \"${tempheaders}\" | sed 's/ //' | cut -d: -f2)\"\n        return 0\n      fi\n      echo \"  + ERROR: An error occurred while sending ${1}-request to ${2} (Status ${statuscode})\" >&2\n      echo >&2\n      echo \"Details:\" >&2\n      cat \"${tempheaders}\" >&2\n      cat \"${tempcont}\" >&2\n      echo >&2\n      echo >&2\n\n      # An exclusive hook for the {1}-request error might be useful (e.g., for sending an e-mail to admins)\n      if [[ -n \"${HOOK}\" ]]; then\n        errtxt=\"$(cat \"${tempcont}\")\"\n        errheaders=\"$(cat \"${tempheaders}\")\"\n        \"${HOOK}\" \"request_failure\" \"${statuscode}\" \"${errtxt}\" \"${1}\" \"${errheaders}\" || _exiterr 'request_failure hook returned with non-zero exit code'\n      fi\n\n      rm -f \"${tempcont}\"\n      rm -f \"${tempheaders}\"\n\n      # remove temporary domains.txt file if used\n      [[ \"${COMMAND:-}\" = \"sign_domains\" && -n \"${PARAM_DOMAIN:-}\" && -n \"${DOMAINS_TXT:-}\" ]] && rm \"${DOMAINS_TXT}\"\n      _exiterr\n    fi\n  fi\n\n  if { true >&4; } 2>/dev/null; then\n    cat \"${tempheaders}\" >&4\n  fi\n  cat \"${tempcont}\"\n  rm -f \"${tempcont}\"\n  rm -f \"${tempheaders}\"\n}\n\n# Send signed request\nsigned_request() {\n  # Encode payload as urlbase64\n  
payload64=\"$(printf '%s' \"${2}\" | urlbase64)\"\n\n  if [ -n \"${3:-}\" ]; then\n    nonce=\"$(printf \"%s\" \"${3}\" | tr -d ' \\t\\n\\r')\"\n  else\n    # Retrieve nonce from acme-server\n    if [[ ${API} -eq 1 ]]; then\n      nonce=\"$(http_request head \"${CA}\" | grep -i ^Replay-Nonce: | cut -d':' -f2- | tr -d ' \\t\\n\\r')\"\n    else\n      nonce=\"$(http_request head \"${CA_NEW_NONCE}\" | grep -i ^Replay-Nonce: | cut -d':' -f2- | tr -d ' \\t\\n\\r')\"\n    fi\n  fi\n\n  if [[ ${API} -eq 1 ]]; then\n    # Build another header which also contains the previously received nonce and encode it as urlbase64\n    protected='{\"alg\": \"RS256\", \"jwk\": {\"e\": \"'\"${pubExponent64}\"'\", \"kty\": \"RSA\", \"n\": \"'\"${pubMod64}\"'\"}, \"nonce\": \"'\"${nonce}\"'\"}'\n    protected64=\"$(printf '%s' \"${protected}\" | urlbase64)\"\n  else\n    # Build another header which also contains the previously received nonce and url and encode it as urlbase64\n    if [[ -n \"${ACCOUNT_URL:-}\" ]]; then\n      protected='{\"alg\": \"'\"${account_key_sigalgo}\"'\", \"kid\": \"'\"${ACCOUNT_URL}\"'\", \"url\": \"'\"${1}\"'\", \"nonce\": \"'\"${nonce}\"'\"}'\n    else\n      protected='{\"alg\": \"'\"${account_key_sigalgo}\"'\", \"jwk\": '\"${account_key_info}\"', \"url\": \"'\"${1}\"'\", \"nonce\": \"'\"${nonce}\"'\"}'\n    fi\n    protected64=\"$(printf '%s' \"${protected}\" | urlbase64)\"\n  fi\n\n  # Sign header with nonce and our payload with our private key and encode signature as urlbase64\n  if [[ \"${account_key_sigalgo}\" = \"RS256\" ]]; then\n    signed64=\"$(printf '%s' \"${protected64}.${payload64}\" | \"${OPENSSL}\" dgst -sha256 -sign \"${ACCOUNT_KEY}\" | urlbase64)\"\n  else\n    dgstparams=\"$(printf '%s' \"${protected64}.${payload64}\" | \"${OPENSSL}\" dgst -sha${account_key_sigalgo:2} -sign \"${ACCOUNT_KEY}\" | \"${OPENSSL}\" asn1parse -inform DER)\"\n    dgst_parm_1=\"$(echo \"$dgstparams\" | head -n 2 | tail -n 1 | cut -d':' -f4)\"\n    dgst_parm_2=\"$(echo 
\"$dgstparams\" | head -n 3 | tail -n 1 | cut -d':' -f4)\"\n\n    # zero-padding (doesn't seem to be necessary, but other clients do this as well)\n    case \"${account_key_sigalgo}\" in\n      \"ES256\") siglen=64;;\n      \"ES384\") siglen=96;;\n      \"ES512\") siglen=132;;\n    esac\n    while [[ ${#dgst_parm_1} -lt $siglen ]]; do dgst_parm_1=\"0${dgst_parm_1}\"; done\n    while [[ ${#dgst_parm_2} -lt $siglen ]]; do dgst_parm_2=\"0${dgst_parm_2}\"; done\n\n    signed64=\"$(printf \"%s%s\" \"${dgst_parm_1}\" \"${dgst_parm_2}\" | hex2bin | urlbase64)\"\n  fi\n\n  if [[ ${API} -eq 1 ]]; then\n    # Build header with just our public key and algorithm information\n    header='{\"alg\": \"RS256\", \"jwk\": {\"e\": \"'\"${pubExponent64}\"'\", \"kty\": \"RSA\", \"n\": \"'\"${pubMod64}\"'\"}}'\n\n    # Send header + extended header + payload + signature to the acme-server\n    data='{\"header\": '\"${header}\"', \"protected\": \"'\"${protected64}\"'\", \"payload\": \"'\"${payload64}\"'\", \"signature\": \"'\"${signed64}\"'\"}'\n  else\n    # Send extended header + payload + signature to the acme-server\n    data='{\"protected\": \"'\"${protected64}\"'\", \"payload\": \"'\"${payload64}\"'\", \"signature\": \"'\"${signed64}\"'\"}'\n  fi\n\n  output=\"$(http_request post \"${1}\" \"${data}\")\"\n\n  if grep -qE \"^badnonce \" <<< \"${output}\"; then\n    echo \" ! Request failed (badNonce), retrying request...\" >&2\n    signed_request \"${1:-}\" \"${2:-}\" \"$(printf \"%s\" \"${output}\" | cut -d' ' -f2)\"\n  else\n    printf \"%s\" \"${output}\"\n  fi\n}\n\n# Extracts all subject names from a CSR\n# Outputs either the CN, or the SANs, one per line\nextract_altnames() {\n  csrfile=\"${1}\" # path to CSR file\n\n  if ! 
\"${OPENSSL}\" req -in \"${csrfile}\" -verify -noout >/dev/null; then\n    _exiterr \"Certificate signing request isn't valid\"\n  fi\n\n  reqtext=\"$(\"${OPENSSL}\" req -in \"${csrfile}\" -noout -text)\"\n  if <<<\"${reqtext}\" grep -q '^[[:space:]]*X509v3 Subject Alternative Name:[[:space:]]*$'; then\n    # SANs used, extract these\n    altnames=\"$( <<<\"${reqtext}\" awk '/X509v3 Subject Alternative Name:/{print;getline;print;}' | tail -n1 )\"\n    # split to one per line:\n    # shellcheck disable=SC1003\n    altnames=\"$( <<<\"${altnames}\" _sed -e 's/^[[:space:]]*//; s/, /\\'$'\\n''/g' )\"\n    # we can only get DNS/IP: ones signed\n    if grep -qEv '^(DNS|IP( Address)*|othername):' <<<\"${altnames}\"; then\n      _exiterr \"Certificate signing request contains non-DNS/IP Subject Alternative Names\"\n    fi\n    # strip away the DNS/IP: prefix\n    altnames=\"$( <<<\"${altnames}\" _sed -e 's/^(DNS:|IP( Address)*:|othername:<unsupported>)//' )\"\n    printf \"%s\" \"${altnames}\" | tr '\\n' ' '\n  else\n    # No SANs, extract CN\n    altnames=\"$( <<<\"${reqtext}\" grep '^[[:space:]]*Subject:' | _sed -e 's/.*[ /]CN ?= ?([^ /,]*).*/\\1/' )\"\n    printf \"%s\" \"${altnames}\"\n  fi\n}\n\n# Get last issuer CN in certificate chain\nget_last_cn() {\n  <<<\"${1}\" _sed 'H;/-----BEGIN CERTIFICATE-----/h;$!d;x' | \"${OPENSSL}\" x509 -noout -issuer | head -n1 | _sed -e 's/.*[ /]CN ?= ?([^/,]*).*/\\1/'\n}\n\n# Create certificate for domain(s) and outputs it FD 3\nsign_csr() {\n  csrfile=\"${1}\" # path to CSR file\n\n  if { true >&3; } 2>/dev/null; then\n    : # fd 3 looks OK\n  else\n    _exiterr \"sign_csr: FD 3 not open\"\n  fi\n\n  shift 1 || true\n  export altnames=\"${*}\"\n\n  if [[ ${API} -eq 1 ]]; then\n    if [[ -z \"${CA_NEW_AUTHZ}\" ]] || [[ -z \"${CA_NEW_CERT}\" ]]; then\n      _exiterr \"Certificate authority doesn't allow certificate signing\"\n    fi\n  elif [[ ${API} -eq 2 ]] && [[ -z \"${CA_NEW_ORDER}\" ]]; then\n    _exiterr \"Certificate authority 
doesn't allow certificate signing\"\n  fi\n\n  if [[ -n \"${ZSH_VERSION:-}\" ]]; then\n    local -A challenge_names challenge_uris challenge_tokens authorizations keyauths deploy_args\n  else\n    local -a challenge_names challenge_uris challenge_tokens authorizations keyauths deploy_args\n  fi\n\n  # Initial step: Find which authorizations we're dealing with\n  if [[ ${API} -eq 2 ]]; then\n    # Request new order and store authorization URIs\n    local challenge_identifiers=\"\"\n    for altname in ${altnames}; do\n      if [[ \"${altname}\" =~ ^ip: ]]; then\n        ip=\"${altname:3}\"\n        if [[ \"${ip}\" =~ : ]]; then\n          ip=\"$(ipv6_normalize <<< \"${ip}\")\"\n        fi\n        challenge_identifiers+=\"$(printf '{\"type\": \"ip\", \"value\": \"%s\"}, ' \"${ip}\")\"\n      else\n        challenge_identifiers+=\"$(printf '{\"type\": \"dns\", \"value\": \"%s\"}, ' \"${altname}\")\"\n      fi\n    done\n    challenge_identifiers=\"[${challenge_identifiers%, }]\"\n\n    echo \" + Requesting new certificate order from CA...\"\n    local order_payload='{\"identifiers\": '\"${challenge_identifiers}\"\n    if [[ -n \"${ACME_PROFILE}\" ]]; then\n      order_payload=\"${order_payload}\"',\"profile\":\"'\"${ACME_PROFILE}\"'\"'\n    fi\n    order_payload=\"${order_payload}\"'}'\n    order_location=\"$(signed_request \"${CA_NEW_ORDER}\" \"${order_payload}\" 4>&1 | grep -i ^Location: | cut -d':' -f2- | tr -d ' \\t\\r\\n')\"\n    result=\"$(signed_request \"${order_location}\" \"\" | jsonsh)\"\n\n    order_authorizations=\"$(echo \"${result}\" | get_json_array_values authorizations)\"\n    finalize=\"$(echo \"${result}\" | get_json_string_value finalize)\"\n\n    local idx=0\n    for uri in ${order_authorizations}; do\n      authorizations[${idx}]=\"${uri}\"\n      idx=$((idx+1))\n    done\n    echo \" + Received ${idx} authorizations URLs from the CA\"\n  else\n    # Copy $altnames to $authorizations (just doing this to reduce duplicate code later on)\n    local 
idx=0\n    for altname in ${altnames}; do\n      authorizations[${idx}]=\"${altname}\"\n      idx=$((idx+1))\n    done\n  fi\n\n  # Check if authorizations are valid and gather challenge information for pending authorizations\n  local idx=0\n  for authorization in ${authorizations[*]}; do\n    if [[ \"${API}\" -eq 2 ]]; then\n      # Receive authorization ($authorization is authz uri)\n      response=\"$(signed_request \"$(echo \"${authorization}\" | _sed -e 's/\\\"(.*)\".*/\\1/')\" \"\" | jsonsh)\"\n      identifier=\"$(echo \"${response}\" | get_json_string_value -p '\"identifier\",\"value\"')\"\n      identifier_type=\"$(echo \"${response}\" | get_json_string_value -p '\"identifier\",\"type\"')\"\n      echo \" + Handling authorization for ${identifier}\"\n    else\n      # Request new authorization ($authorization is altname)\n      identifier=\"${authorization}\"\n      echo \" + Requesting authorization for ${identifier}...\"\n      response=\"$(signed_request \"${CA_NEW_AUTHZ}\" '{\"resource\": \"new-authz\", \"identifier\": {\"type\": \"dns\", \"value\": \"'\"${identifier}\"'\"}}' | jsonsh)\"\n    fi\n\n    # Check if authorization has already been validated\n    if [ \"$(echo \"${response}\" | get_json_string_value status)\" = \"valid\" ]; then\n      if [ \"${PARAM_FORCE_VALIDATION:-no}\" = \"yes\" ]; then\n        echo \" + A valid authorization has been found but will be ignored\"\n      else\n        echo \" + Found valid authorization for ${identifier}\"\n        continue\n      fi\n    fi\n\n    # Find challenge in authorization\n    challengeindex=\"$(echo \"${response}\" | grep -E '^\\[\"challenges\",[0-9]+,\"type\"\\][[:space:]]+\"'\"${CHALLENGETYPE}\"'\"' | cut -d',' -f2 || true)\"\n\n    if [ -z \"${challengeindex}\" ]; then\n      allowed_validations=\"$(echo \"${response}\" | grep -E '^\\[\"challenges\",[0-9]+,\"type\"\\]' | sed -e 's/\\[[^\\]*\\][[:space:]]*//g' -e 's/^\"//' -e 's/\"$//' | tr '\\n' ' ')\"\n      _exiterr \"Validating this 
certificate is not possible using ${CHALLENGETYPE}. Possible validation methods are: ${allowed_validations}. Please check with your CA for more information about supported validation methods.\"\n    fi\n    challenge=\"$(echo \"${response}\" | get_json_dict_value -p '\"challenges\",'\"${challengeindex}\")\"\n\n    # Gather challenge information\n    if [ \"${identifier_type:-}\" = \"ip\" ] && [ \"${CHALLENGETYPE}\" = \"tls-alpn-01\" ]; then\n      challenge_names[${idx}]=\"$(echo \"${identifier}\" | ip_to_ptr)\"\n    else\n      challenge_names[${idx}]=\"${identifier}\"\n    fi\n    challenge_tokens[${idx}]=\"$(echo \"${challenge}\" | get_json_string_value token)\"\n\n    if [[ ${API} -eq 2 ]]; then\n      challenge_uris[${idx}]=\"$(echo \"${challenge}\" | get_json_string_value url)\"\n    else\n      if [[ \"$(echo \"${challenge}\" | get_json_string_value type)\" = \"urn:acme:error:unauthorized\" ]]; then\n        _exiterr \"Challenge unauthorized: $(echo \"${challenge}\" | get_json_string_value detail)\"\n      fi\n      challenge_uris[${idx}]=\"$(echo \"${challenge}\" | get_json_dict_value validationRecord | get_json_string_value uri)\"\n    fi\n\n    # Prepare challenge tokens and deployment parameters\n    keyauth=\"${challenge_tokens[${idx}]}.${thumbprint}\"\n\n    case \"${CHALLENGETYPE}\" in\n      \"http-01\")\n        # Store challenge response in well-known location and make world-readable (so that a webserver can access it)\n        printf '%s' \"${keyauth}\" > \"${WELLKNOWN}/${challenge_tokens[${idx}]}\"\n        chmod a+r \"${WELLKNOWN}/${challenge_tokens[${idx}]}\"\n        keyauth_hook=\"${keyauth}\"\n        ;;\n      \"dns-01\")\n        # Generate DNS entry content for dns-01 validation\n        keyauth_hook=\"$(printf '%s' \"${keyauth}\" | \"${OPENSSL}\" dgst -sha256 -binary | urlbase64)\"\n        ;;\n      \"dns-persist-01\")\n        # Pre-existing persistent DNS record is expected; no deploy/cleanup by dehydrated.\n        
keyauth_hook=\"\"\n        ;;\n      \"tls-alpn-01\")\n        keyauth_hook=\"$(printf '%s' \"${keyauth}\" | \"${OPENSSL}\" dgst -sha256 -c -hex | awk '{print $NF}')\"\n        generate_alpn_certificate \"${identifier}\" \"${identifier_type}\" \"${keyauth_hook}\"\n        ;;\n    esac\n\n    keyauths[${idx}]=\"${keyauth}\"\n    if [ \"${identifier_type:-}\" = \"ip\" ] && [ \"${CHALLENGETYPE}\" = \"tls-alpn-01\" ]; then\n      deploy_args[${idx}]=\"$(echo \"${identifier}\" | ip_to_ptr) ${challenge_tokens[${idx}]} ${keyauth_hook}\"\n    else\n      deploy_args[${idx}]=\"${identifier} ${challenge_tokens[${idx}]} ${keyauth_hook}\"\n    fi\n\n    idx=$((idx+1))\n  done\n  local num_pending_challenges=${idx}\n  echo \" + ${num_pending_challenges} pending challenge(s)\"\n\n  # Deploy challenge tokens\n  if [[ ${num_pending_challenges} -ne 0 ]]; then\n    if [[ \"${CHALLENGETYPE}\" != \"dns-persist-01\" ]]; then\n      echo \" + Deploying challenge tokens...\"\n      if [[ -n \"${HOOK}\" ]] && [[ \"${HOOK_CHAIN}\" = \"yes\" ]]; then\n        # shellcheck disable=SC2068\n        \"${HOOK}\" \"deploy_challenge\" ${deploy_args[@]} || _exiterr 'deploy_challenge hook returned with non-zero exit code'\n      elif [[ -n \"${HOOK}\" ]]; then\n        # Run hook script to deploy the challenge token\n        local idx=0\n        while [ ${idx} -lt ${num_pending_challenges} ]; do\n          # shellcheck disable=SC2086\n          \"${HOOK}\" \"deploy_challenge\" ${deploy_args[${idx}]} || _exiterr 'deploy_challenge hook returned with non-zero exit code'\n          idx=$((idx+1))\n        done\n      fi\n    fi\n  fi\n\n  # Validate pending challenges\n  local idx=0\n  while [ ${idx} -lt ${num_pending_challenges} ]; do\n    echo \" + Responding to challenge for ${challenge_names[${idx}]} authorization...\"\n\n    # Ask the acme-server to verify our challenge and wait until it is no longer pending\n    if [[ ${API} -eq 1 ]]; then\n      result=\"$(signed_request 
\"${challenge_uris[${idx}]}\" '{\"resource\": \"challenge\", \"keyAuthorization\": \"'\"${keyauths[${idx}]}\"'\"}' | jsonsh)\"\n    else\n      result=\"$(signed_request \"${challenge_uris[${idx}]}\" '{}' | jsonsh)\"\n    fi\n\n    reqstatus=\"$(echo \"${result}\" | get_json_string_value status)\"\n\n    local waited=0\n    while [[ \"${reqstatus}\" = \"pending\" ]] || [[ \"${reqstatus}\" = \"processing\" ]]; do\n      if [ ${VALIDATION_TIMEOUT} -gt 0 ] && [ ${waited} -gt ${VALIDATION_TIMEOUT} ]; then\n        _exiterr \"Timed out waiting for processing of domain validation (still ${reqstatus})\"\n      fi\n      echo \" + Validation is ${reqstatus}...\"\n      sleep 1\n      waited=$((waited+1))\n      if [[ \"${API}\" -eq 2 ]]; then\n        result=\"$(signed_request \"${challenge_uris[${idx}]}\" \"\" | jsonsh)\"\n      else\n        result=\"$(http_request get \"${challenge_uris[${idx}]}\" | jsonsh)\"\n      fi\n      reqstatus=\"$(echo \"${result}\" | get_json_string_value status)\"\n    done\n\n    [[ \"${CHALLENGETYPE}\" = \"http-01\" ]] && rm -f \"${WELLKNOWN}/${challenge_tokens[${idx}]}\"\n    [[ \"${CHALLENGETYPE}\" = \"tls-alpn-01\" ]] && rm -f \"${ALPNCERTDIR}/${challenge_names[${idx}]}.crt.pem\" \"${ALPNCERTDIR}/${challenge_names[${idx}]}.key.pem\"\n\n    if [[ \"${reqstatus}\" = \"valid\" ]]; then\n      echo \" + Challenge is valid!\"\n    else\n      [[ -n \"${HOOK}\" ]] && (\"${HOOK}\" \"invalid_challenge\" \"${altname}\" \"${result}\" || _exiterr 'invalid_challenge hook returned with non-zero exit code')\n      break\n    fi\n    idx=$((idx+1))\n  done\n\n  if [[ ${num_pending_challenges} -ne 0 ]]; then\n    if [[ \"${CHALLENGETYPE}\" != \"dns-persist-01\" ]]; then\n      echo \" + Cleaning challenge tokens...\"\n\n      # Clean challenge tokens using chained hook\n      # shellcheck disable=SC2068\n      [[ -n \"${HOOK}\" ]] && [[ \"${HOOK_CHAIN}\" = \"yes\" ]] && (\"${HOOK}\" \"clean_challenge\" ${deploy_args[@]} || _exiterr 'clean_challenge hook 
returned with non-zero exit code')\n\n      # Clean remaining challenge tokens if validation has failed\n      local idx=0\n      while [ ${idx} -lt ${num_pending_challenges} ]; do\n        # Delete challenge file\n        [[ \"${CHALLENGETYPE}\" = \"http-01\" ]] && rm -f \"${WELLKNOWN}/${challenge_tokens[${idx}]}\"\n        # Delete alpn verification certificates\n        [[ \"${CHALLENGETYPE}\" = \"tls-alpn-01\" ]] && rm -f \"${ALPNCERTDIR}/${challenge_names[${idx}]}.crt.pem\" \"${ALPNCERTDIR}/${challenge_names[${idx}]}.key.pem\"\n        # Clean challenge token using non-chained hook\n        # shellcheck disable=SC2086\n        [[ -n \"${HOOK}\" ]] && [[ \"${HOOK_CHAIN}\" != \"yes\" ]] && (\"${HOOK}\" \"clean_challenge\" ${deploy_args[${idx}]} || _exiterr 'clean_challenge hook returned with non-zero exit code')\n        idx=$((idx+1))\n      done\n    fi\n\n    if [[ \"${reqstatus}\" != \"valid\" ]]; then\n      echo \" + Challenge validation has failed :(\"\n      _exiterr \"Challenge is invalid! 
(returned: ${reqstatus}) (result: ${result})\"\n    fi\n  fi\n\n  # Finally request certificate from the acme-server and store it in cert-${timestamp}.pem and link from cert.pem\n  echo \" + Requesting certificate...\"\n  csr64=\"$(\"${OPENSSL}\" req -in \"${csrfile}\" -config \"${OPENSSL_CNF}\" -outform DER | urlbase64)\"\n  if [[ ${API} -eq 1 ]]; then\n    crt64=\"$(signed_request \"${CA_NEW_CERT}\" '{\"resource\": \"new-cert\", \"csr\": \"'\"${csr64}\"'\"}' | \"${OPENSSL}\" base64 -e)\"\n    crt=\"$( printf -- '-----BEGIN CERTIFICATE-----\\n%s\\n-----END CERTIFICATE-----\\n' \"${crt64}\" )\"\n  else\n    result=\"$(signed_request \"${finalize}\" '{\"csr\": \"'\"${csr64}\"'\"}' | jsonsh)\"\n    waited=0\n    while :; do\n      orderstatus=\"$(echo \"${result}\" | get_json_string_value status)\"\n      case \"${orderstatus}\"\n      in\n        \"processing\" | \"pending\")\n          if [ ${ORDER_TIMEOUT} -gt 0 ] && [ ${waited} -gt ${ORDER_TIMEOUT} ]; then\n            _exiterr \"Timed out waiting for processing of order (still ${orderstatus})\"\n          fi\n          echo \" + Order is ${orderstatus}...\"\n          sleep 2;\n          waited=$((waited+2))\n          ;;\n        \"valid\")\n          break;\n          ;;\n        *)\n          _exiterr \"Order has invalid/unknown status: ${orderstatus}\"\n          ;;\n      esac\n      result=\"$(signed_request \"${order_location}\" \"\" | jsonsh)\"\n    done\n\n    resheaders=\"$(_mktemp)\"\n    certificate=\"$(echo \"${result}\" | get_json_string_value certificate)\"\n    crt=\"$(signed_request \"${certificate}\" \"\" 4>\"${resheaders}\")\"\n\n    if [ -n \"${PREFERRED_CHAIN:-}\" ]; then\n      foundaltchain=0\n      altcn=\"$(get_last_cn \"${crt}\")\"\n      altoptions=\"${altcn}\"\n      if [ \"${altcn}\" = \"${PREFERRED_CHAIN}\" ]; then\n        foundaltchain=1\n      fi\n      if [ \"${foundaltchain}\" = \"0\" ] && (grep -Ei '^link:' \"${resheaders}\" | grep -q -Ei 'rel=\"alternate\"'); then\n        
while read -r altcrturl; do\n          if [ \"${foundaltchain}\" = \"0\" ]; then\n            altcrt=\"$(signed_request \"${altcrturl}\" \"\")\"\n            altcn=\"$(get_last_cn \"${altcrt}\")\"\n            altoptions=\"${altoptions}, ${altcn}\"\n            if [ \"${altcn}\" = \"${PREFERRED_CHAIN}\" ]; then\n              foundaltchain=1\n              crt=\"${altcrt}\"\n            fi\n          fi\n        done <<< \"$(grep -Ei '^link:' \"${resheaders}\" | grep -Ei 'rel=\"alternate\"' | cut -d'<' -f2 | cut -d'>' -f1)\"\n      fi\n      if [ \"${foundaltchain}\" = \"0\" ]; then\n        _exiterr \"Alternative chain with CN = ${PREFERRED_CHAIN} not found, available options: ${altoptions}\"\n      fi\n      echo \" + Using preferred chain with CN = ${altcn}\"\n    fi\n    rm -f \"${resheaders}\"\n  fi\n\n  # Try to load the certificate to detect corruption\n  echo \" + Checking certificate...\"\n  _openssl x509 -text <<<\"${crt}\"\n\n  echo \"${crt}\" >&3\n\n  unset challenge_token\n  echo \" + Done!\"\n}\n\n# grep issuer cert uri from certificate\nget_issuer_cert_uri() {\n  certificate=\"${1}\"\n  \"${OPENSSL}\" x509 -in \"${certificate}\" -noout -text | (grep 'CA Issuers - URI:' | cut -d':' -f2-) || true\n}\n\nget_issuer_hash() {\n  certificate=\"${1}\"\n  \"${OPENSSL}\" x509 -in \"${certificate}\" -noout -issuer_hash\n}\n\nget_ocsp_url() {\n  certificate=\"${1}\"\n  \"${OPENSSL}\" x509 -in \"${certificate}\" -noout -ocsp_uri\n}\n\n# walk certificate chain, retrieving all intermediate certificates\nwalk_chain() {\n  local certificate\n  certificate=\"${1}\"\n\n  local issuer_cert_uri\n  issuer_cert_uri=\"${2:-}\"\n  if [[ -z \"${issuer_cert_uri}\" ]]; then issuer_cert_uri=\"$(get_issuer_cert_uri \"${certificate}\")\"; fi\n  if [[ -n \"${issuer_cert_uri}\" ]]; then\n    # create temporary files\n    local tmpcert\n    local tmpcert_raw\n    tmpcert_raw=\"$(_mktemp)\"\n    tmpcert=\"$(_mktemp)\"\n\n    # download certificate\n    http_request get 
\"${issuer_cert_uri}\" > \"${tmpcert_raw}\"\n\n    # PEM\n    if grep -q \"BEGIN CERTIFICATE\" \"${tmpcert_raw}\"; then mv \"${tmpcert_raw}\" \"${tmpcert}\"\n    # DER\n    elif \"${OPENSSL}\" x509 -in \"${tmpcert_raw}\" -inform DER -out \"${tmpcert}\" -outform PEM 2> /dev/null > /dev/null; then :\n    # PKCS7\n    elif \"${OPENSSL}\" pkcs7 -in \"${tmpcert_raw}\" -inform DER -out \"${tmpcert}\" -outform PEM -print_certs 2> /dev/null > /dev/null; then :\n    # Unknown certificate type\n    else _exiterr \"Unknown certificate type in chain\"\n    fi\n\n    local next_issuer_cert_uri\n    next_issuer_cert_uri=\"$(get_issuer_cert_uri \"${tmpcert}\")\"\n    if [[ -n \"${next_issuer_cert_uri}\" ]]; then\n      printf \"\\n%s\\n\" \"${issuer_cert_uri}\"\n      cat \"${tmpcert}\"\n      walk_chain \"${tmpcert}\" \"${next_issuer_cert_uri}\"\n    fi\n    rm -f \"${tmpcert}\" \"${tmpcert_raw}\"\n  fi\n}\n\n# Generate ALPN verification certificate\ngenerate_alpn_certificate() {\n  local altname=\"${1}\"\n  local identifier_type=\"${2}\"\n  local acmevalidation=\"${3}\"\n\n  local alpncertdir=\"${ALPNCERTDIR}\"\n  if [[ ! 
-e \"${alpncertdir}\" ]]; then\n    echo \" + Creating new directory ${alpncertdir} ...\"\n    mkdir -p \"${alpncertdir}\" || _exiterr \"Unable to create directory ${alpncertdir}\"\n  fi\n\n  echo \" + Generating ALPN certificate and key for ${1}...\"\n  tmp_openssl_cnf=\"$(_mktemp)\"\n  cat \"${OPENSSL_CNF}\" > \"${tmp_openssl_cnf}\"\n  if [[ \"${identifier_type}\" = \"ip\" ]]; then\n    printf \"\\n[SAN]\\nsubjectAltName=IP:%s\\n\" \"${altname}\" >> \"${tmp_openssl_cnf}\"\n  else\n    printf \"\\n[SAN]\\nsubjectAltName=DNS:%s\\n\" \"${altname}\" >> \"${tmp_openssl_cnf}\"\n  fi\n  printf \"1.3.6.1.5.5.7.1.31=critical,DER:04:20:%s\\n\" \"${acmevalidation}\" >> \"${tmp_openssl_cnf}\"\n  SUBJ=\"/CN=${altname}/\"\n  [[ \"${OSTYPE:0:5}\" = \"MINGW\" ]] && SUBJ=\"/${SUBJ}\"\n  if [[ \"${identifier_type}\" = \"ip\" ]]; then\n    altname=\"$(echo \"${altname}\" | ip_to_ptr)\"\n  fi\n  _openssl req -x509 -new -sha256 -nodes -newkey rsa:2048 -keyout \"${alpncertdir}/${altname}.key.pem\" -out \"${alpncertdir}/${altname}.crt.pem\" -subj \"${SUBJ}\" -extensions SAN -config \"${tmp_openssl_cnf}\"\n  chmod g+r \"${alpncertdir}/${altname}.key.pem\" \"${alpncertdir}/${altname}.crt.pem\"\n  rm -f \"${tmp_openssl_cnf}\"\n}\n\n# Create certificate for domain(s)\nsign_domain() {\n  local certdir=\"${1}\"\n  shift\n  timestamp=\"${1}\"\n  shift\n  domain=\"${1}\"\n  altnames=\"${*}\"\n\n  export altnames\n\n  echo \" + Signing domains...\"\n  if [[ ${API} -eq 1 ]]; then\n    if [[ -z \"${CA_NEW_AUTHZ}\" ]] || [[ -z \"${CA_NEW_CERT}\" ]]; then\n      _exiterr \"Certificate authority doesn't allow certificate signing\"\n    fi\n  elif [[ ${API} -eq 2 ]] && [[ -z \"${CA_NEW_ORDER}\" ]]; then\n    _exiterr \"Certificate authority doesn't allow certificate signing\"\n  fi\n\n  local privkey=\"privkey.pem\"\n  if [[ ! -e \"${certdir}/cert-${timestamp}.csr\" ]]; then\n    # generate a new private key if we need or want one\n    if [[ ! 
-r \"${certdir}/privkey.pem\" ]] || [[ \"${PRIVATE_KEY_RENEW}\" = \"yes\" ]]; then\n      echo \" + Generating private key...\"\n      privkey=\"privkey-${timestamp}.pem\"\n      local tmp_privkey\n      tmp_privkey=\"$(_mktemp)\"\n      case \"${KEY_ALGO}\" in\n        rsa) _openssl genrsa -out \"${tmp_privkey}\" \"${KEYSIZE}\";;\n        prime256v1|secp384r1) _openssl ecparam -genkey -name \"${KEY_ALGO}\" -out \"${tmp_privkey}\" -noout;;\n      esac\n      cat \"${tmp_privkey}\" > \"${certdir}/privkey-${timestamp}.pem\"\n      rm \"${tmp_privkey}\"\n    fi\n    # move rolloverkey into position (if any)\n    if [[ -r \"${certdir}/privkey.pem\" && -r \"${certdir}/privkey.roll.pem\" && \"${PRIVATE_KEY_RENEW}\" = \"yes\" && \"${PRIVATE_KEY_ROLLOVER}\" = \"yes\" ]]; then\n      echo \" + Moving Rolloverkey into position....  \"\n      mv \"${certdir}/privkey.roll.pem\" \"${certdir}/privkey-tmp.pem\"\n      mv \"${certdir}/privkey-${timestamp}.pem\" \"${certdir}/privkey.roll.pem\"\n      mv \"${certdir}/privkey-tmp.pem\" \"${certdir}/privkey-${timestamp}.pem\"\n    fi\n    # generate a new private rollover key if we need or want one\n    if [[ ! -r \"${certdir}/privkey.roll.pem\" && \"${PRIVATE_KEY_ROLLOVER}\" = \"yes\" && \"${PRIVATE_KEY_RENEW}\" = \"yes\" ]]; then\n      echo \" + Generating private rollover key...\"\n      case \"${KEY_ALGO}\" in\n        rsa) _openssl genrsa -out \"${certdir}/privkey.roll.pem\" \"${KEYSIZE}\";;\n        prime256v1|secp384r1) _openssl ecparam -genkey -name \"${KEY_ALGO}\" -out \"${certdir}/privkey.roll.pem\" -noout;;\n      esac\n    fi\n    # delete rolloverkeys if disabled\n    if [[ -r \"${certdir}/privkey.roll.pem\" && ! 
\"${PRIVATE_KEY_ROLLOVER}\" = \"yes\" ]]; then\n      echo \" + Removing Rolloverkey (feature disabled)...\"\n      rm -f \"${certdir}/privkey.roll.pem\"\n    fi\n\n    # Generate signing request config and the actual signing request\n    echo \" + Generating signing request...\"\n    SAN=\"\"\n    for altname in ${altnames}; do\n      if [[ \"${altname}\" =~ ^ip: ]]; then\n        SAN=\"${SAN}IP:${altname:3}, \"\n      else\n        SAN=\"${SAN}DNS:${altname}, \"\n      fi\n    done\n    if [[ \"${domain}\" =~ ^ip: ]]; then\n      SUBJ=\"/\"\n    else\n      SUBJ=\"/CN=${domain}/\"\n    fi\n    SAN=\"${SAN%%, }\"\n    local tmp_openssl_cnf\n    tmp_openssl_cnf=\"$(_mktemp)\"\n    cat \"${OPENSSL_CNF}\" > \"${tmp_openssl_cnf}\"\n    printf \"\\n[SAN]\\nsubjectAltName=%s\" \"${SAN}\" >> \"${tmp_openssl_cnf}\"\n    if [ \"${OCSP_MUST_STAPLE}\" = \"yes\" ]; then\n      printf \"\\n1.3.6.1.5.5.7.1.24=DER:30:03:02:01:05\" >> \"${tmp_openssl_cnf}\"\n    fi\n    if [[ \"${OSTYPE:0:5}\" = \"MINGW\" ]]; then\n      # The subject starts with a /, so MSYS will assume it's a path and convert\n      # it unless we escape it with another one:\n      SUBJ=\"/${SUBJ}\"\n    fi\n    \"${OPENSSL}\" req -new -sha256 -key \"${certdir}/${privkey}\" -out \"${certdir}/cert-${timestamp}.csr\" -subj \"${SUBJ}\" -reqexts SAN -config \"${tmp_openssl_cnf}\"\n    rm -f \"${tmp_openssl_cnf}\"\n  fi\n\n  crt_path=\"${certdir}/cert-${timestamp}.pem\"\n  # shellcheck disable=SC2086\n  sign_csr \"${certdir}/cert-${timestamp}.csr\" ${altnames} 3>\"${crt_path}\"\n\n  # Create fullchain.pem\n  echo \" + Creating fullchain.pem...\"\n  if [[ ${API} -eq 1 ]]; then\n    cat \"${crt_path}\" > \"${certdir}/fullchain-${timestamp}.pem\"\n    local issuer_hash\n    issuer_hash=\"$(get_issuer_hash \"${crt_path}\")\"\n    if [ -e \"${CHAINCACHE}/${issuer_hash}.chain\" ]; then\n      echo \" + Using cached chain!\"\n      cat \"${CHAINCACHE}/${issuer_hash}.chain\" > \"${certdir}/chain-${timestamp}.pem\"\n    
else\n      echo \" + Walking chain...\"\n      local issuer_cert_uri\n      issuer_cert_uri=\"$(get_issuer_cert_uri \"${crt_path}\" || echo \"unknown\")\"\n      (walk_chain \"${crt_path}\" > \"${certdir}/chain-${timestamp}.pem\") || _exiterr \"Walking chain has failed, your certificate has been created and can be found at ${crt_path}, the corresponding private key at ${privkey}. If you want you can manually continue on creating and linking all necessary files. If this error occurs again you should manually generate the certificate chain and place it under ${CHAINCACHE}/${issuer_hash}.chain (see ${issuer_cert_uri})\"\n      cat \"${certdir}/chain-${timestamp}.pem\" > \"${CHAINCACHE}/${issuer_hash}.chain\"\n    fi\n    cat \"${certdir}/chain-${timestamp}.pem\" >> \"${certdir}/fullchain-${timestamp}.pem\"\n  else\n    tmpcert=\"$(_mktemp)\"\n    tmpchain=\"$(_mktemp)\"\n    awk '{print >out}; /----END CERTIFICATE-----/{out=tmpchain}' out=\"${tmpcert}\" tmpchain=\"${tmpchain}\" \"${certdir}/cert-${timestamp}.pem\"\n    mv \"${certdir}/cert-${timestamp}.pem\" \"${certdir}/fullchain-${timestamp}.pem\"\n    cat \"${tmpcert}\" > \"${certdir}/cert-${timestamp}.pem\"\n    cat \"${tmpchain}\" > \"${certdir}/chain-${timestamp}.pem\"\n    rm \"${tmpcert}\" \"${tmpchain}\"\n  fi\n\n  # Wait for hook script to sync the files before creating the symlinks\n  [[ -n \"${HOOK}\" ]] && (\"${HOOK}\" \"sync_cert\" \"${certdir}/privkey-${timestamp}.pem\" \"${certdir}/cert-${timestamp}.pem\" \"${certdir}/fullchain-${timestamp}.pem\" \"${certdir}/chain-${timestamp}.pem\" \"${certdir}/cert-${timestamp}.csr\" || _exiterr 'sync_cert hook returned with non-zero exit code')\n\n  # Update symlinks\n  [[ \"${privkey}\" = \"privkey.pem\" ]] || ln -sf \"privkey-${timestamp}.pem\" \"${certdir}/privkey.pem\"\n\n  ln -sf \"chain-${timestamp}.pem\" \"${certdir}/chain.pem\"\n  ln -sf \"fullchain-${timestamp}.pem\" \"${certdir}/fullchain.pem\"\n  ln -sf \"cert-${timestamp}.csr\" 
\"${certdir}/cert.csr\"\n  ln -sf \"cert-${timestamp}.pem\" \"${certdir}/cert.pem\"\n\n  # Wait for hook script to clean the challenge and to deploy cert if used\n  [[ -n \"${HOOK}\" ]] && (\"${HOOK}\" \"deploy_cert\" \"${domain}\" \"${certdir}/privkey.pem\" \"${certdir}/cert.pem\" \"${certdir}/fullchain.pem\" \"${certdir}/chain.pem\" \"${timestamp}\" || _exiterr 'deploy_cert hook returned with non-zero exit code')\n\n  unset challenge_token\n  echo \" + Done!\"\n}\n\n# Update OCSP stapling file\nupdate_ocsp_stapling() {\n    local certdir=\"${1}\"\n    local update_ocsp=\"${2}\"\n    local cert=\"${3}\"\n    local chain=\"${4}\"\n\n    local ocsp_url=\"$(get_ocsp_url \"${cert}\")\"\n\n    if [[ -z \"${ocsp_url}\" ]]; then\n      echo \" ! ERROR: OCSP stapling requested but no OCSP url found in certificate.\" >&2\n      echo \" ! Keep in mind that some CAs ended support for OCSP: https://letsencrypt.org/2024/12/05/ending-ocsp/\" >&2\n      return 1\n    fi\n\n    if [[ ! -e \"${certdir}/ocsp.der\" ]]; then\n      update_ocsp=\"yes\"\n    elif ! (\"${OPENSSL}\" ocsp -no_nonce -issuer \"${chain}\" -verify_other \"${chain}\" -cert \"${cert}\" -respin \"${certdir}/ocsp.der\" -status_age $((OCSP_DAYS*24*3600)) 2>&1 | grep -q \"${cert}: good\"); then\n      update_ocsp=\"yes\"\n    fi\n\n    if [[ \"${update_ocsp}\" = \"yes\" ]]; then\n      echo \" + Updating OCSP stapling file\"\n      ocsp_timestamp=\"$(date +%s)\"\n      if grep -qE \"^(openssl (0|(1\\.0))\\.)|(libressl (1|2|3)\\.)\" <<< \"$(${OPENSSL} version | awk '{print tolower($0)}')\"; then\n        ocsp_log=\"$(\"${OPENSSL}\" ocsp -no_nonce -issuer \"${chain}\" -verify_other \"${chain}\" -cert \"${cert}\" -respout \"${certdir}/ocsp-${ocsp_timestamp}.der\" -url \"${ocsp_url}\" -header \"HOST\" \"$(echo \"${ocsp_url}\" | _sed -e 's/^http(s?):\\/\\///' -e 's/\\/.*$//g')\" 2>&1)\" || _exiterr \"Fetching of OCSP information failed. Please note that some CAs (e.g. LetsEncrypt) no longer support OCSP. 
Error message: ${ocsp_log}\"\n      else\n        ocsp_log=\"$(\"${OPENSSL}\" ocsp -no_nonce -issuer \"${chain}\" -verify_other \"${chain}\" -cert \"${cert}\" -respout \"${certdir}/ocsp-${ocsp_timestamp}.der\" -url \"${ocsp_url}\" 2>&1)\" || _exiterr \"Fetching of OCSP information failed. Please note that some CAs (e.g. LetsEncrypt) do no longer support OCSP. Error message: ${ocsp_log}\"\n      fi\n      ln -sf \"ocsp-${ocsp_timestamp}.der\" \"${certdir}/ocsp.der\"\n      [[ -n \"${HOOK}\" ]] && (altnames=\"${domain} ${morenames}\" \"${HOOK}\" \"deploy_ocsp\" \"${domain}\" \"${certdir}/ocsp.der\" \"${ocsp_timestamp}\" || _exiterr 'deploy_ocsp hook returned with non-zero exit code')\n    else\n      echo \" + OCSP stapling file is still valid (skipping update)\"\n    fi\n}\n\n# Usage: --version (-v)\n# Description: Print version information\ncommand_version() {\n  load_config noverify\n\n  echo \"Dehydrated by Lukas Schauer\"\n  echo \"https://dehydrated.io\"\n  echo \"\"\n  echo \"Dehydrated version: ${VERSION}\"\n  revision=\"$(cd \"${SCRIPTDIR}\"; git rev-parse HEAD 2>/dev/null || echo \"unknown\")\"\n  echo \"GIT-Revision: ${revision}\"\n  echo \"\"\n  # shellcheck disable=SC1091\n  if [[ \"${OSTYPE}\" =~ (BSD|Darwin) ]]; then\n    echo \"OS: $(uname -sr)\"\n  elif [[ -e /etc/os-release ]]; then\n    ( . /etc/os-release && echo \"OS: $PRETTY_NAME\" )\n  elif [[ -e /usr/lib/os-release ]]; then\n    ( . 
/usr/lib/os-release && echo \"OS: $PRETTY_NAME\" )\n  else\n    echo \"OS: $(grep -v '^$' /etc/issue | head -n1 | _sed 's/\\\\(r|n|l) .*//g')\"\n  fi\n  echo \"Used software:\"\n  [[ -n \"${BASH_VERSION:-}\" ]] && echo \" bash: ${BASH_VERSION}\"\n  [[ -n \"${ZSH_VERSION:-}\" ]] && echo \" zsh: ${ZSH_VERSION}\"\n  echo \" curl: ${CURL_VERSION}\"\n  if [[ \"${OSTYPE}\" =~ (BSD|Darwin) ]]; then\n    echo \" awk, sed, mktemp, grep, diff: BSD base system versions\"\n  else\n    echo \" awk: $(awk -W version 2>&1 | head -n1)\"\n    echo \" sed: $(sed --version 2>&1 | head -n1)\"\n    echo \" mktemp: $(mktemp --version 2>&1 | head -n1)\"\n    echo \" grep: $(grep --version 2>&1 | head -n1)\"\n    echo \" diff: $(diff --version 2>&1 | head -n1)\"\n  fi\n  echo \" openssl: $(\"${OPENSSL}\" version 2>&1)\"\n\n  exit 0\n}\n\n# Usage: --display-terms\n# Description: Display current terms of service\ncommand_terms() {\n  init_system\n  echo \"The current terms of service: $CA_TERMS\"\n  echo \"+ Done!\"\n  exit 0\n}\n\n# Usage: --register\n# Description: Register account key\ncommand_register() {\n  init_system\n  echo \"+ Done!\"\n  exit 0\n}\n\n# Usage: --account\n# Description: Update account contact information\ncommand_account() {\n  init_system\n  FAILED=false\n\n  NEW_ACCOUNT_KEY_JSON=\"$(_mktemp)\"\n\n  # Check if we have the registration url\n  if [[ -z \"${ACCOUNT_URL}\" ]]; then\n    _exiterr \"Error retrieving registration url.\"\n  fi\n\n  echo \"+ Updating registration url: ${ACCOUNT_URL} contact information...\"\n  if [[ ${API} -eq 1 ]]; then\n    # If an email for the contact has been provided then add it to the registered account\n    if [[ -n \"${CONTACT_EMAIL}\" ]]; then\n      (signed_request \"${ACCOUNT_URL}\" '{\"resource\": \"reg\", \"contact\":[\"mailto:'\"${CONTACT_EMAIL}\"'\"]}' > \"${NEW_ACCOUNT_KEY_JSON}\") || FAILED=true\n    else\n      (signed_request \"${ACCOUNT_URL}\" '{\"resource\": \"reg\", \"contact\":[]}' > \"${NEW_ACCOUNT_KEY_JSON}\") || 
FAILED=true\n    fi\n  else\n    # If an email for the contact has been provided then add it to the registered account\n    if [[ -n \"${CONTACT_EMAIL}\" ]]; then\n      (signed_request \"${ACCOUNT_URL}\" '{\"contact\":[\"mailto:'\"${CONTACT_EMAIL}\"'\"]}' > \"${NEW_ACCOUNT_KEY_JSON}\") || FAILED=true\n    else\n      (signed_request \"${ACCOUNT_URL}\" '{\"contact\":[]}' > \"${NEW_ACCOUNT_KEY_JSON}\") || FAILED=true\n    fi\n  fi\n\n  if [[ \"${FAILED}\" = \"true\" ]]; then\n    rm \"${NEW_ACCOUNT_KEY_JSON}\"\n    _exiterr \"Error updating account information. See message above for more information.\"\n  fi\n  if diff -q \"${NEW_ACCOUNT_KEY_JSON}\" \"${ACCOUNT_KEY_JSON}\" > /dev/null; then\n    echo \"+ Account information was the same after the update\"\n    rm \"${NEW_ACCOUNT_KEY_JSON}\"\n  else\n    ACCOUNT_KEY_JSON_BACKUP=\"${ACCOUNT_KEY_JSON%.*}-$(date +%s).json\"\n    echo \"+ Backup ${ACCOUNT_KEY_JSON} as ${ACCOUNT_KEY_JSON_BACKUP}\"\n    cp -p \"${ACCOUNT_KEY_JSON}\" \"${ACCOUNT_KEY_JSON_BACKUP}\"\n    echo \"+ Populate ${ACCOUNT_KEY_JSON}\"\n    mv \"${NEW_ACCOUNT_KEY_JSON}\" \"${ACCOUNT_KEY_JSON}\"\n  fi\n  echo \"+ Done!\"\n  exit 0\n}\n\n# Parse contents of domains.txt and domains.txt.d\nparse_domains_txt() {\n  # Allow globbing temporarily\n  noglob_set\n  local inputs=(\"${DOMAINS_TXT}\" \"${DOMAINS_TXT}.d\"/*.txt)\n  noglob_clear\n\n  cat \"${inputs[@]}\" |\n    tr -d '\\r' |\n    awk '{print tolower($0)}' |\n    _sed -e 's/^[[:space:]]*//g' -e 's/[[:space:]]*$//g' -e 's/[[:space:]]+/ /g' -e 's/([^ ])>/\\1 >/g' -e 's/> />/g' |\n    (grep -vE '^(#|$)' || true)\n}\n\n# normalize SAN lists\n# normalize IPv6 addresses, and sort alphabetically\nnormalize_san_list() {\n  cat | awk '{print tolower($0)}' | _sed 's/ $//' | _sed 's/^ //' | ipv6_normalize | tr ' ' '\\n' | sort -u | tr '\\n' ' ' | _sed 's/ $//'\n}\n\n# Usage: --cron (-c)\n# Description: Sign/renew non-existent/changed/expiring certificates.\ncommand_sign_domains() {\n  init_system\n  
hookscript_bricker_hook\n\n  # Call startup hook\n  [[ -n \"${HOOK}\" ]] && (\"${HOOK}\" \"startup_hook\" || _exiterr 'startup_hook hook returned with non-zero exit code')\n\n  if [ ! -d \"${CHAINCACHE}\" ]; then\n    echo \" + Creating chain cache directory ${CHAINCACHE}\"\n    mkdir \"${CHAINCACHE}\"\n  fi\n\n  if [[ -n \"${PARAM_DOMAIN:-}\" ]]; then\n    DOMAINS_TXT=\"$(_mktemp)\"\n    if [[ -n \"${PARAM_ALIAS:-}\" ]]; then\n      printf \"%s > %s\" \"${PARAM_DOMAIN}\" \"${PARAM_ALIAS}\" > \"${DOMAINS_TXT}\"\n    else\n      printf \"%s\" \"${PARAM_DOMAIN}\" > \"${DOMAINS_TXT}\"\n    fi\n  elif [[ -e \"${DOMAINS_TXT}\" ]]; then\n    if [[ ! -r \"${DOMAINS_TXT}\" ]]; then\n      _exiterr \"domains.txt found but not readable\"\n    fi\n  else\n    _exiterr \"domains.txt not found and --domain not given\"\n  fi\n\n  # Generate certificates for all domains found in domains.txt. Check if existing certificates are about to expire\n  ORIGIFS=\"${IFS}\"\n  IFS=$'\\n'\n  for line in $(parse_domains_txt); do\n    reset_configvars\n    IFS=\"${ORIGIFS}\"\n    alias=\"$(grep -Eo '>[^ ]+' <<< \"${line}\" || true)\"\n    line=\"$(_sed -e 's/>[^ ]+[ ]*//g' <<< \"${line}\")\"\n    aliascount=\"$(grep -Eo '>' <<< \"${alias}\" | awk 'END {print NR}' || true )\"\n    [ \"${aliascount}\" -gt 1 ] && _exiterr \"Only one alias per line is allowed in domains.txt!\"\n\n    domain=\"$(printf '%s\\n' \"${line}\" | cut -d' ' -f1)\"\n    morenames=\"$(printf '%s\\n' \"${line}\" | cut -s -d' ' -f2-)\"\n    [ \"${aliascount}\" -lt 1 ] && alias=\"${domain}\" || alias=\"${alias#>}\"\n    export alias\n\n    if [[ -z \"${morenames}\" ]]; then\n      echo \"Processing ${domain}\"\n    else\n      echo \"Processing ${domain} with alternative names: ${morenames}\"\n    fi\n\n    if [ \"${alias:0:2}\" = \"*.\" ]; then\n      _exiterr \"Please define a valid alias for your ${domain} wildcard-certificate. 
See domains.txt-documentation for more details.\"\n    fi\n\n    local certdir=\"${CERTDIR}/${alias}\"\n    cert=\"${certdir}/cert.pem\"\n    chain=\"${certdir}/chain.pem\"\n\n    force_renew=\"${PARAM_FORCE:-no}\"\n\n    timestamp=\"$(date +%s)\"\n\n    # If there is no existing certificate directory => make it\n    if [[ ! -e \"${certdir}\" ]]; then\n      echo \" + Creating new directory ${certdir} ...\"\n      mkdir -p \"${certdir}\" || _exiterr \"Unable to create directory ${certdir}\"\n    fi\n\n    # read cert config\n    # for now this loads the certificate specific config in a subshell and parses a diff of set variables.\n    # we could just source the config file but i decided to go this way to protect people from accidentally overriding\n    # variables used internally by this script itself.\n    if [[ -n \"${DOMAINS_D}\" ]]; then\n      certconfig=\"${DOMAINS_D}/${alias}\"\n    else\n      certconfig=\"${certdir}/config\"\n    fi\n\n    if [ -f \"${certconfig}\" ]; then\n      echo \" + Using certificate specific config file!\"\n      ORIGIFS=\"${IFS}\"\n      IFS=$'\\n'\n      for cfgline in $(\n        beforevars=\"$(_mktemp)\"\n        aftervars=\"$(_mktemp)\"\n        set > \"${beforevars}\"\n        # shellcheck disable=SC1090\n        . 
\"${certconfig}\"\n        set > \"${aftervars}\"\n        diff -u \"${beforevars}\" \"${aftervars}\" | grep -E '^\\+[^+]'\n        rm \"${beforevars}\"\n        rm \"${aftervars}\"\n      ); do\n        config_var=\"$(echo \"${cfgline:1}\" | cut -d'=' -f1)\"\n        config_value=\"$(echo \"${cfgline:1}\" | cut -d'=' -f2- | tr -d \"'\")\"\n        # All settings that are allowed here should also be stored and\n        # restored in store_configvars() and reset_configvars()\n        case \"${config_var}\" in\n          KEY_ALGO|OCSP_MUST_STAPLE|OCSP_FETCH|OCSP_DAYS|PRIVATE_KEY_RENEW|PRIVATE_KEY_ROLLOVER|KEYSIZE|CHALLENGETYPE|HOOK|PREFERRED_CHAIN|WELLKNOWN|HOOK_CHAIN|OPENSSL_CNF|RENEW_DAYS|ACME_PROFILE|ORDER_TIMEOUT|VALIDATION_TIMEOUT|KEEP_GOING)\n            echo \"   + ${config_var} = ${config_value}\"\n            declare -- \"${config_var}=${config_value}\"\n            ;;\n          _) ;;\n          *) echo \"   ! Setting ${config_var} on a per-certificate base is not (yet) supported\" >&2\n        esac\n      done\n      IFS=\"${ORIGIFS}\"\n    fi\n    verify_config\n    hookscript_bricker_hook\n    export WELLKNOWN CHALLENGETYPE KEY_ALGO PRIVATE_KEY_ROLLOVER\n\n    skip=\"no\"\n\n    # Allow for external CSR generation\n    local csrfile=\"\"\n    if [[ -n \"${HOOK}\" ]]; then\n      csr=\"$(\"${HOOK}\" \"generate_csr\" \"${domain}\" \"${certdir}\" \"${domain} ${morenames}\")\" || _exiterr 'generate_csr hook returned with non-zero exit code'\n      if grep -qE \"\\-----BEGIN (NEW )?CERTIFICATE REQUEST-----\" <<< \"${csr}\"; then\n        csrfile=\"$(_mktemp)\"\n        cat > \"${csrfile}\" <<< \"${csr}\"\n        altnames=\"$(extract_altnames \"${csrfile}\")\"\n        domain=\"$(cut -d' ' -f1 <<< \"${altnames}\")\"\n        morenames=\"$(cut -s -d' ' -f2- <<< \"${altnames}\")\"\n        echo \" + Using CSR from hook script (real names: ${altnames})\"\n      else\n        csrfile=\"\"\n      fi\n    fi\n\n    # Check domain names of existing certificate\n    
if [[ -e \"${cert}\" && \"${force_renew}\" = \"no\" ]]; then\n      printf \" + Checking domain name(s) of existing cert...\"\n\n      certnames=\"$(\"${OPENSSL}\" x509 -in \"${cert}\" -text -noout | grep -E '(DNS|IP( Address)*):' | _sed 's/(DNS|IP( Address)*)://g' | tr -d ' ' | tr ',' ' ' | normalize_san_list )\"\n      givennames=\"$(echo \"${domain}\" \"${morenames}\" | _sed 's/ip://g' | normalize_san_list )\"\n\n      if [[ \"${certnames}\" = \"${givennames}\" ]]; then\n        echo \" unchanged.\"\n      else\n        echo \" changed!\"\n        echo \" + Domain name(s) are not matching!\"\n        echo \" + Names in old certificate: ${certnames}\"\n        echo \" + Configured names: ${givennames}\"\n        echo \" + Forcing renew.\"\n        force_renew=\"yes\"\n      fi\n    fi\n\n    # Check expire date of existing certificate\n    if [[ -e \"${cert}\" ]]; then\n      echo \" + Checking expire date of existing cert...\"\n      valid=\"$(\"${OPENSSL}\" x509 -enddate -noout -in \"${cert}\" | cut -d= -f2- )\"\n\n      printf \" + Valid till %s \" \"${valid}\"\n      if (\"${OPENSSL}\" x509 -checkend $((RENEW_DAYS * 86400)) -in \"${cert}\" 2>&1 | grep -q \"will not expire\"); then\n        printf \"(Longer than %d days). \" \"${RENEW_DAYS}\"\n        if [[ \"${force_renew}\" = \"yes\" ]]; then\n          echo \"Ignoring because renew was forced!\"\n        else\n          # Certificate-Names unchanged and cert is still valid\n          echo \"Skipping renew!\"\n          [[ -n \"${HOOK}\" ]] && (\"${HOOK}\" \"unchanged_cert\" \"${domain}\" \"${certdir}/privkey.pem\" \"${certdir}/cert.pem\" \"${certdir}/fullchain.pem\" \"${certdir}/chain.pem\" || _exiterr 'unchanged_cert hook returned with non-zero exit code')\n          skip=\"yes\"\n        fi\n      else\n        echo \"(Less than ${RENEW_DAYS} days). Renewing!\"\n      fi\n    fi\n\n    local update_ocsp\n    update_ocsp=\"no\"\n\n    # Sign certificate for this domain\n    if [[ ! 
\"${skip}\" = \"yes\" ]]; then\n      update_ocsp=\"yes\"\n      if [[ -n \"${csrfile}\" ]]; then\n        cat \"${csrfile}\" > \"${certdir}/cert-${timestamp}.csr\"\n        rm \"${csrfile}\"\n      fi\n      # shellcheck disable=SC2086\n      if [[ \"${KEEP_GOING:-}\" = \"yes\" ]]; then\n        skip_exit_hook=yes\n        sign_domain \"${certdir}\" \"${timestamp}\" \"${domain}\" ${morenames} &\n        wait $! || exit_with_errorcode=1\n        skip_exit_hook=no\n      else\n        sign_domain \"${certdir}\" \"${timestamp}\" \"${domain}\" ${morenames}\n      fi\n    fi\n\n    if [[ \"${OCSP_FETCH}\" = \"yes\" ]]; then\n      if [[ \"${KEEP_GOING:-}\" = \"yes\" ]]; then\n        skip_exit_hook=yes\n        update_ocsp_stapling \"${certdir}\" \"${update_ocsp}\" \"${cert}\" \"${chain}\" &\n        wait $! || exit_with_errorcode=1\n        skip_exit_hook=no\n      else\n        update_ocsp_stapling \"${certdir}\" \"${update_ocsp}\" \"${cert}\" \"${chain}\"\n      fi\n    fi\n  done\n  reset_configvars\n\n  # remove temporary domains.txt file if used\n  [[ -n \"${PARAM_DOMAIN:-}\" ]] && rm -f \"${DOMAINS_TXT}\"\n\n  [[ -n \"${HOOK}\" ]] && (\"${HOOK}\" \"exit_hook\" || echo 'exit_hook returned with non-zero exit code!' >&2)\n  if [[ \"${AUTO_CLEANUP}\" == \"yes\" ]]; then\n    echo \" + Running automatic cleanup\"\n    PARAM_CLEANUPDELETE=\"${AUTO_CLEANUP_DELETE:-no}\" command_cleanup noinit | _sed 's/^/ + /g'\n  fi\n\n  exit \"${exit_with_errorcode}\"\n}\n\n# Usage: --signcsr (-s) path/to/csr.pem\n# Description: Sign a given CSR, output CRT on stdout (advanced usage)\ncommand_sign_csr() {\n  init_system\n\n  # redirect stdout to stderr\n  # leave stdout over at fd 3 to output the cert\n  exec 3>&1 1>&2\n\n  # load csr\n  local csrfile=\"${1}\"\n  if [ ! 
-r \"${csrfile}\" ]; then\n    _exiterr \"Could not read certificate signing request ${csrfile}\"\n  fi\n\n  # extract names\n  altnames=\"$(extract_altnames \"${csrfile}\")\"\n\n  # gen cert\n  certfile=\"$(_mktemp)\"\n  # shellcheck disable=SC2086\n  sign_csr \"${csrfile}\" ${altnames} 3> \"${certfile}\"\n\n  # print cert\n  echo \"# CERT #\" >&3\n  cat \"${certfile}\" >&3\n  echo >&3\n\n  # print chain\n  if [ -n \"${PARAM_FULL_CHAIN:-}\" ]; then\n    # get and convert ca cert\n    chainfile=\"$(_mktemp)\"\n    tmpchain=\"$(_mktemp)\"\n    http_request get \"$(\"${OPENSSL}\" x509 -in \"${certfile}\" -noout -text | grep 'CA Issuers - URI:' | cut -d':' -f2-)\" > \"${tmpchain}\"\n    if grep -q \"BEGIN CERTIFICATE\" \"${tmpchain}\"; then\n      mv \"${tmpchain}\" \"${chainfile}\"\n    else\n      \"${OPENSSL}\" x509 -in \"${tmpchain}\" -inform DER -out \"${chainfile}\" -outform PEM\n      rm \"${tmpchain}\"\n    fi\n\n    echo \"# CHAIN #\" >&3\n    cat \"${chainfile}\" >&3\n\n    rm \"${chainfile}\"\n  fi\n\n  # cleanup\n  rm \"${certfile}\"\n\n  exit 0\n}\n\n# Usage: --revoke (-r) path/to/cert.pem\n# Description: Revoke specified certificate\ncommand_revoke() {\n  init_system\n\n  [[ -n \"${CA_REVOKE_CERT}\" ]] || _exiterr \"Certificate authority doesn't allow certificate revocation.\"\n\n  cert=\"${1}\"\n  if [[ -L \"${cert}\" ]]; then\n    # follow symlink and use real certificate name (so we move the real file and not the symlink at the end)\n    local link_target\n    link_target=\"$(readlink -n \"${cert}\")\"\n    if [[ \"${link_target}\" =~ ^/ ]]; then\n      cert=\"${link_target}\"\n    else\n      cert=\"$(dirname \"${cert}\")/${link_target}\"\n    fi\n  fi\n  [[ -f \"${cert}\" ]] || _exiterr \"Could not find certificate ${cert}\"\n\n  echo \"Revoking ${cert}\"\n\n  cert64=\"$(\"${OPENSSL}\" x509 -in \"${cert}\" -inform PEM -outform DER | urlbase64)\"\n  if [[ ${API} -eq 1 ]]; then\n    response=\"$(signed_request \"${CA_REVOKE_CERT}\" '{\"resource\": 
\"revoke-cert\", \"certificate\": \"'\"${cert64}\"'\"}' | clean_json)\"\n  else\n    response=\"$(signed_request \"${CA_REVOKE_CERT}\" '{\"certificate\": \"'\"${cert64}\"'\"}' | clean_json)\"\n  fi\n  # if there is a problem with our revoke request _request (via signed_request) will report this and \"exit 1\" out\n  # so if we are here, it is safe to assume the request was successful\n  echo \" + Done.\"\n  echo \" + Renaming certificate to ${cert}-revoked\"\n  mv -f \"${cert}\" \"${cert}-revoked\"\n}\n\n# Usage: --deactivate\n# Description: Deactivate account\ncommand_deactivate() {\n  init_system\n\n  echo \"Deactivating account ${ACCOUNT_URL}\"\n\n  if [[ ${API} -eq 1 ]]; then\n    echo \"Deactivation for ACMEv1 is not implemented\"\n  else\n    response=\"$(signed_request \"${ACCOUNT_URL}\" '{\"status\": \"deactivated\"}' | clean_json)\"\n    deactstatus=$(echo \"$response\" | jsonsh | get_json_string_value \"status\")\n    if [[ \"${deactstatus}\" = \"deactivated\" ]]; then\n      touch \"${ACCOUNT_DEACTIVATED}\"\n    else\n      _exiterr \"Account deactivation failed!\"\n    fi\n  fi\n\n  echo \" + Done.\"\n}\n\n# Usage: --cleanup (-gc)\n# Description: Move unused certificate files to archive directory\ncommand_cleanup() {\n  if [ ! \"${1:-}\" = \"noinit\" ]; then\n    load_config\n  fi\n\n  if [[ ! \"${PARAM_CLEANUPDELETE:-}\" = \"yes\" ]]; then\n    # Create global archive directory if not existent\n    if [[ ! -e \"${BASEDIR}/archive\" ]]; then\n      mkdir \"${BASEDIR}/archive\"\n    fi\n  fi\n\n  # Allow globbing\n  noglob_set\n\n  # Loop over all certificate directories\n  for certdir in \"${CERTDIR}/\"*; do\n    # Skip if entry is not a folder\n    [[ -d \"${certdir}\" ]] || continue\n\n    # Get certificate name\n    certname=\"$(basename \"${certdir}\")\"\n\n    # Create certificates archive directory if not existent\n    if [[ ! \"${PARAM_CLEANUPDELETE:-}\" = \"yes\" ]]; then\n      archivedir=\"${BASEDIR}/archive/${certname}\"\n      if [[ ! 
-e \"${archivedir}\" ]]; then\n        mkdir \"${archivedir}\"\n      fi\n    fi\n\n    # Loop over file-types (certificates, keys, signing-requests, ...)\n    for filetype in cert.csr cert.pem chain.pem fullchain.pem privkey.pem ocsp.der; do\n      # Delete all if symlink is broken\n      if [[ -r \"${certdir}/${filetype}\" ]]; then\n        # Look up current file in use\n        current=\"$(basename \"$(readlink \"${certdir}/${filetype}\")\")\"\n      else\n        if [[ -h \"${certdir}/${filetype}\" ]]; then\n          echo \"Removing broken symlink: ${certdir}/${filetype}\"\n          rm -f \"${certdir}/${filetype}\"\n        fi\n        current=\"\"\n      fi\n\n      # Split filetype into name and extension\n      filebase=\"$(echo \"${filetype}\" | cut -d. -f1)\"\n      fileext=\"$(echo \"${filetype}\" | cut -d. -f2)\"\n\n      # Loop over all files of this type\n      for file in \"${certdir}/${filebase}-\"*\".${fileext}\" \"${certdir}/${filebase}-\"*\".${fileext}-revoked\"; do\n        # Check if current file is in use, if unused move to archive directory\n        filename=\"$(basename \"${file}\")\"\n        if [[ ! \"${filename}\" = \"${current}\" ]] && [[ -f \"${certdir}/${filename}\" ]]; then\n          if [[ \"${PARAM_CLEANUPDELETE:-}\" = \"yes\" ]]; then\n            echo \"Deleting unused file: ${certname}/${filename}\"\n            rm \"${certdir}/${filename}\"\n          else\n            echo \"Moving unused file to archive directory: ${certname}/${filename}\"\n            mv \"${certdir}/${filename}\" \"${archivedir}/${filename}\"\n          fi\n        fi\n      done\n    done\n  done\n\n  exit \"${exit_with_errorcode}\"\n}\n\n# Usage: --cleanup-delete (-gcd)\n# Description: Deletes (!) 
unused certificate files\ncommand_cleanupdelete() {\n  command_cleanup\n}\n\n\n# Usage: --help (-h)\n# Description: Show help text\ncommand_help() {\n  printf \"Usage: %s [-h] [command [argument]] [parameter [argument]] [parameter [argument]] ...\\n\\n\" \"${0}\"\n  printf \"Default command: help\\n\\n\"\n  echo \"Commands:\"\n  grep -e '^[[:space:]]*# Usage:' -e '^[[:space:]]*# Description:' -e '^command_.*()[[:space:]]*{' \"${0}\" | while read -r usage; read -r description; read -r command; do\n    if [[ ! \"${usage}\" =~ Usage ]] || [[ ! \"${description}\" =~ Description ]] || [[ ! \"${command}\" =~ ^command_ ]]; then\n      _exiterr \"Error generating help text.\"\n    fi\n    printf \" %-32s %s\\n\" \"${usage##\"# Usage: \"}\" \"${description##\"# Description: \"}\"\n  done\n  printf -- \"\\nParameters:\\n\"\n  grep -E -e '^[[:space:]]*# PARAM_Usage:' -e '^[[:space:]]*# PARAM_Description:' \"${0}\" | while read -r usage; read -r description; do\n    if [[ ! \"${usage}\" =~ Usage ]] || [[ ! \"${description}\" =~ Description ]]; then\n      _exiterr \"Error generating help text.\"\n    fi\n    printf \" %-32s %s\\n\" \"${usage##\"# PARAM_Usage: \"}\" \"${description##\"# PARAM_Description: \"}\"\n  done\n}\n\n# Usage: --env (-e)\n# Description: Output configuration variables for use in other scripts\ncommand_env() {\n  echo \"# dehydrated configuration\"\n  load_config\n  typeset -p CA CERTDIR ALPNCERTDIR CHALLENGETYPE DOMAINS_D DOMAINS_TXT HOOK HOOK_CHAIN RENEW_DAYS ACCOUNT_KEY ACCOUNT_KEY_JSON ACCOUNT_ID_JSON KEYSIZE WELLKNOWN PRIVATE_KEY_RENEW OPENSSL_CNF CONTACT_EMAIL LOCKFILE\n}\n\n# Main method (parses script arguments and calls command_* methods)\nmain() {\n  exit_with_errorcode=0\n  skip_exit_hook=no\n  COMMAND=\"\"\n  set_command() {\n    [[ -z \"${COMMAND}\" ]] || _exiterr \"Only one command can be executed at a time. 
See help (-h) for more information.\"\n    COMMAND=\"${1}\"\n  }\n\n  check_parameters() {\n    if [[ -z \"${1:-}\" ]]; then\n      echo \"The specified command requires additional parameters. See help:\" >&2\n      echo >&2\n      command_help >&2\n      exit 1\n    elif [[ \"${1:0:1}\" = \"-\" ]]; then\n      _exiterr \"Invalid argument: ${1}\"\n    fi\n  }\n\n  [[ -z \"${*}\" ]] && eval set -- \"--help\"\n\n  while (( ${#} )); do\n    case \"${1}\" in\n      --help|-h)\n        command_help\n        exit 0\n        ;;\n\n      --env|-e)\n        set_command env\n        ;;\n\n      --cron|-c)\n        set_command sign_domains\n        ;;\n\n      --register)\n        set_command register\n        ;;\n\n      --account)\n        set_command account\n        ;;\n\n      # PARAM_Usage: --accept-terms\n      # PARAM_Description: Accept CAs terms of service\n      --accept-terms)\n        PARAM_ACCEPT_TERMS=\"yes\"\n        ;;\n\n      --display-terms)\n        set_command terms\n        ;;\n\n      --signcsr|-s)\n        shift 1\n        set_command sign_csr\n        check_parameters \"${1:-}\"\n        PARAM_CSR=\"${1}\"\n        ;;\n\n      --revoke|-r)\n        shift 1\n        set_command revoke\n        check_parameters \"${1:-}\"\n        PARAM_REVOKECERT=\"${1}\"\n        ;;\n\n      --deactivate)\n        set_command deactivate\n        ;;\n\n      --version|-v)\n        set_command version\n        ;;\n\n      --cleanup|-gc)\n        set_command cleanup\n        ;;\n\n      --cleanup-delete|-gcd)\n        set_command cleanupdelete\n        PARAM_CLEANUPDELETE=\"yes\"\n        ;;\n\n      # PARAM_Usage: --full-chain (-fc)\n      # PARAM_Description: Print full chain when using --signcsr\n      --full-chain|-fc)\n        PARAM_FULL_CHAIN=\"1\"\n        ;;\n\n      # PARAM_Usage: --ipv4 (-4)\n      # PARAM_Description: Resolve names to IPv4 addresses only\n      --ipv4|-4)\n        PARAM_IP_VERSION=\"4\"\n        ;;\n\n      # PARAM_Usage: --ipv6 (-6)\n      # 
PARAM_Description: Resolve names to IPv6 addresses only\n      --ipv6|-6)\n        PARAM_IP_VERSION=\"6\"\n        ;;\n\n      # PARAM_Usage: --domain (-d) domain.tld\n      # PARAM_Description: Use specified domain name(s) instead of domains.txt entry (one certificate!)\n      --domain|-d)\n        shift 1\n        check_parameters \"${1:-}\"\n        if [[ -z \"${PARAM_DOMAIN:-}\" ]]; then\n          PARAM_DOMAIN=\"${1}\"\n        else\n          PARAM_DOMAIN=\"${PARAM_DOMAIN} ${1}\"\n         fi\n        ;;\n\n      # PARAM_Usage: --ca url/preset\n      # PARAM_Description: Use specified CA URL or preset\n      --ca)\n        shift 1\n        check_parameters \"${1:-}\"\n        [[ -n \"${PARAM_CA:-}\" ]] && _exiterr \"CA can only be specified once!\"\n        PARAM_CA=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --alias certalias\n      # PARAM_Description: Use specified name for certificate directory (and per-certificate config) instead of the primary domain (only used if --domain is specified)\n      --alias)\n        shift 1\n        check_parameters \"${1:-}\"\n        [[ -n \"${PARAM_ALIAS:-}\" ]] && _exiterr \"Alias can only be specified once!\"\n        PARAM_ALIAS=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --keep-going (-g)\n      # PARAM_Description: Keep going after encountering an error while creating/renewing multiple certificates in cron mode\n      --keep-going|-g)\n        PARAM_KEEP_GOING=\"yes\"\n        ;;\n\n      # PARAM_Usage: --force (-x)\n      # PARAM_Description: Force certificate renewal even if it is not due to expire within RENEW_DAYS\n      --force|-x)\n        PARAM_FORCE=\"yes\"\n        ;;\n\n      # PARAM_Usage: --force-validation\n      # PARAM_Description: Force revalidation of domain names (used in combination with --force)\n      --force-validation)\n        PARAM_FORCE_VALIDATION=\"yes\"\n        ;;\n\n      # PARAM_Usage: --no-lock (-n)\n      # PARAM_Description: Don't use lockfile (potentially dangerous!)\n      
--no-lock|-n)\n        PARAM_NO_LOCK=\"yes\"\n        ;;\n\n      # PARAM_Usage: --lock-suffix example.com\n      # PARAM_Description: Suffix lockfile name with a string (useful when used with -d)\n      --lock-suffix)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_LOCKFILE_SUFFIX=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --ocsp\n      # PARAM_Description: Sets option in CSR indicating OCSP stapling to be mandatory\n      --ocsp)\n        PARAM_OCSP_MUST_STAPLE=\"yes\"\n        ;;\n\n      # PARAM_Usage: --privkey (-p) path/to/key.pem\n      # PARAM_Description: Use specified private key instead of account key (useful for revocation)\n      --privkey|-p)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_ACCOUNT_KEY=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --domains-txt path/to/domains.txt\n      # PARAM_Description: Use specified domains.txt instead of default/configured one\n      --domains-txt)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_DOMAINS_TXT=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --config (-f) path/to/config\n      # PARAM_Description: Use specified config file\n      --config|-f)\n        shift 1\n        check_parameters \"${1:-}\"\n        CONFIG=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --hook (-k) path/to/hook.sh\n      # PARAM_Description: Use specified script for hooks\n      --hook|-k)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_HOOK=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --preferred-chain issuer-cn\n      # PARAM_Description: Use alternative certificate chain identified by issuer CN\n      --preferred-chain)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_PREFERRED_CHAIN=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --out (-o) certs/directory\n      # PARAM_Description: Output certificates into the specified directory\n      --out|-o)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_CERTDIR=\"${1}\"\n  
      ;;\n\n      # PARAM_Usage: --alpn alpn-certs/directory\n      # PARAM_Description: Output alpn verification certificates into the specified directory\n      --alpn)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_ALPNCERTDIR=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --challenge (-t) http-01|dns-01|dns-persist-01|tls-alpn-01\n      # PARAM_Description: Which challenge should be used? Currently http-01, dns-01, dns-persist-01 and tls-alpn-01 are supported\n      --challenge|-t)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_CHALLENGETYPE=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --algo (-a) rsa|prime256v1|secp384r1\n      # PARAM_Description: Which public key algorithm should be used? Supported: rsa, prime256v1 and secp384r1\n      --algo|-a)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_KEY_ALGO=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --acme-profile profile_name\n      # PARAM_Description: Use specified ACME profile\n      --acme-profile)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_ACME_PROFILE=\"${1}\"\n        ;;\n\n      # PARAM_Usage: --order-timeout seconds\n      # PARAM_Description: Amount of seconds to wait for processing of order until erroring out\n      --order-timeout)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_ORDER_TIMEOUT=${1}\n        ;;\n\n      # PARAM_Usage: --validation-timeout seconds\n      # PARAM_Description: Amount of seconds to wait for processing of domain validations until erroring out\n      --validation-timeout)\n        shift 1\n        check_parameters \"${1:-}\"\n        PARAM_VALIDATION_TIMEOUT=${1}\n        ;;\n\n      *)\n        echo \"Unknown parameter detected: ${1}\" >&2\n        echo >&2\n        command_help >&2\n        exit 1\n        ;;\n    esac\n\n    shift 1\n  done\n\n  case \"${COMMAND}\" in\n    env) command_env;;\n    sign_domains) command_sign_domains;;\n    register) 
command_register;;\n    account) command_account;;\n    sign_csr) command_sign_csr \"${PARAM_CSR}\";;\n    revoke) command_revoke \"${PARAM_REVOKECERT}\";;\n    deactivate) command_deactivate;;\n    cleanup) command_cleanup;;\n    terms) command_terms;;\n    cleanupdelete) command_cleanupdelete;;\n    version) command_version;;\n    *) command_help; exit 1;;\n  esac\n\n  exit \"${exit_with_errorcode}\"\n}\n\n# Determine OS type\nOSTYPE=\"$(uname)\"\n\nif [[ ! \"${DEHYDRATED_NOOP:-}\" = \"NOOP\" ]]; then\n  # Run script\n  main \"${@:-}\"\nfi\n\n# vi: expandtab sw=2 ts=2\n"
  },
  {
    "path": "aegir/helpers/dump_cdorked_config.c",
"content": "// This program dumps the content of a shared memory block\n// used by Linux/Cdorked.A into a file named httpd_cdorked_config.bin\n// when the machine is infected.\n//\n// Some of the data is encrypted. If your server is infected and you\n// would like to help, please send the httpd_cdorked_config.bin\n// and your httpd executable to our lab for analysis. Thanks!\n//\n// Build with gcc -o dump_cdorked_config dump_cdorked_config.c\n//\n// Marc-Etienne M.Léveillé <leveille@eset.com>\n//\n\n#include <stdio.h>\n#include <sys/shm.h>\n\n#define CDORKED_SHM_SIZE (6118512)\n#define CDORKED_OUTFILE \"httpd_cdorked_config.bin\"\n\nint main (int argc, char *argv[]) {\n    int maxkey, id, shmid, infected = 0;\n    struct shm_info shm_info;\n    struct shmid_ds shmds;\n    void * cdorked_data;\n    FILE * outfile;\n    \n    maxkey = shmctl(0, SHM_INFO, (void *) &shm_info);\n    for(id = 0; id <= maxkey; id++) {\n        shmid = shmctl(id, SHM_STAT, &shmds);\n        if (shmid < 0)\n            continue;\n        \n        if(shmds.shm_segsz == CDORKED_SHM_SIZE) {\n            // We have a matching Cdorked memory segment\n            infected++;\n            printf(\"A shared memory matching Cdorked signature was found.\\n\");\n            printf(\"You should check your HTTP server's executable file integrity.\\n\");\n            \n            cdorked_data = shmat(shmid, NULL, 0666);\n            // shmat() returns (void *) -1 on failure, not NULL\n            if(cdorked_data != (void *) -1) {\n                outfile = fopen(CDORKED_OUTFILE, \"wb\");\n                if(outfile == NULL) {\n                    printf(\"Could not open file %s for writing.\\n\", CDORKED_OUTFILE);\n                }\n                else {\n                    fwrite(cdorked_data, CDORKED_SHM_SIZE, 1, outfile);\n                    fclose(outfile);\n                    \n                    printf(\"The Cdorked configuration was dumped in the %s file.\\n\\n\", CDORKED_OUTFILE);\n                }\n            }\n        }\n    }\n    if(infected == 0) {\n        printf(\"No shared memory matching Cdorked signature was found.\\n\");\n        printf(\"To further verify your server, run \\\"ipcs -m -p\\\" and look\");\n        printf(\" for memory segments created by your http server.\\n\");\n    }\n    else {\n        printf(\"If you would like to help us in our research on Cdorked, \");\n        printf(\"please send the httpd_cdorked_config.bin and your httpd executable file \");\n        printf(\"to our lab for analysis at leveille@eset.com. Thanks!\\n\");\n    }\n    return infected;\n}\n"
  },
  {
    "path": "aegir/helpers/fix-fstab-to-uuid.sh",
    "content": "#!/bin/bash\n\n# Enable strict error handling for debugging only\n# set -euo pipefail\n\necho \"Backing up /etc/fstab to /etc/fstab.bak\"\ncp -p /etc/fstab /etc/fstab.bak\n\necho \"Processing Linode volume mounts...\"\n\n# Loop through current fstab entries related to Linode Volumes\ngrep '/dev/disk/by-id/scsi-0Linode_Volume_' /etc/fstab.bak | while read -r line; do\n  # Extract device and mount point\n  device=$(echo \"$line\" | awk '{print $1}')\n  mountpoint=$(echo \"$line\" | awk '{print $2}')\n\n  # Resolve the real device path (like /dev/sdb, /dev/sdc)\n  realdev=$(readlink -f \"$device\")\n\n  # Get the UUID of the real device\n  uuid=$(blkid -s UUID -o value \"$realdev\")\n\n  if [ -n \"$uuid\" ]; then\n    echo \"Updating $mountpoint to UUID=$uuid\"\n    # Escape slashes for sed replacement\n    escaped_device=$(echo \"$device\" | sed 's|/|\\\\/|g')\n    sed -i \"s|$escaped_device|UUID=$uuid|\" /etc/fstab\n  else\n    echo \"Warning: UUID not found for $device ($realdev)\"\n  fi\ndone\n\necho \"Checking with diff..\"\ndiff -urp /etc/fstab.bak /etc/fstab\n\necho \"Done. New /etc/fstab is ready.\"\necho \"Please verify with: cat /etc/fstab\"\necho \"If everything looks good, you can safely reboot.\"\n"
  },
  {
    "path": "aegir/helpers/hosting_cron.sql",
    "content": "CREATE TABLE IF NOT EXISTS `hosting_cron` (\n  `nid` int(10) unsigned NOT NULL DEFAULT '0',\n  `cron_interval` int(10) unsigned NOT NULL DEFAULT '0',\n  PRIMARY KEY (`nid`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;\n"
  },
  {
    "path": "aegir/helpers/le-hook.sh",
    "content": "#!/usr/bin/env bash\n# https://github.com/lukas2511/dehydrated/blob/master/docs/examples/hook.sh\n\n# Enable strict error handling for debugging only\n# set -euo pipefail\n\ndeploy_challenge() {\n\t\tlocal DOMAIN=\"${1}\" TOKEN_FILENAME=\"${2}\" TOKEN_VALUE=\"${3}\"\n\t\techo \"\"\n\t\techo \"Add the following to the zone definition of ${1}:\"\n\t\techo \"_acme-challenge.${1}. IN TXT \\\"${3}\\\"\"\n\t\techo \"\"\n\t\techo -n \"Press enter to continue...\"\n\t\tread tmp\n\t\techo \"\"\n}\n\nclean_challenge() {\n\t\tlocal DOMAIN=\"${1}\" TOKEN_FILENAME=\"${2}\" TOKEN_VALUE=\"${3}\"\n\t\techo \"\"\n\t\techo \"Now you can remove the following from the zone definition of ${1}:\"\n\t\techo \"_acme-challenge.${1}. IN TXT \\\"${3}\\\"\"\n\t\techo \"\"\n\t\techo -n \"Press enter to continue...\"\n\t\tread tmp\n\t\techo \"\"\n}\n\ndeploy_cert() {\n\t\tlocal DOMAIN=\"${1}\" KEYFILE=\"${2}\" CERTFILE=\"${3}\" FULLCHAINFILE=\"${4}\" CHAINFILE=\"${5}\" TIMESTAMP=\"${6}\"\n\t\techo \"\"\n\t\techo \"deploy_cert()\"\n\t\techo \"\"\n}\n\nunchanged_cert() {\n    local DOMAIN=\"${1}\" KEYFILE=\"${2}\" CERTFILE=\"${3}\" FULLCHAINFILE=\"${4}\" CHAINFILE=\"${5}\"\n\t\techo \"\"\n\t\techo \"unchanged_cert()\"\n\t\techo \"\"\n}\n\ninvalid_challenge() {\n    local DOMAIN=\"${1}\" RESPONSE=\"${2}\"\n\t\techo \"\"\n\t\techo \"invalid_challenge()\"\n\t\techo \"${1}\"\n\t\techo \"${2}\"\n\t\techo \"\"\n}\n\nrequest_failure() {\n    local STATUSCODE=\"${1}\" REASON=\"${2}\" REQTYPE=\"${3}\"\n\t\techo \"\"\n\t\techo \"request_failure()\"\n\t\techo \"${1}\"\n\t\techo \"${2}\"\n\t\techo \"${3}\"\n\t\techo \"\"\n}\n\nexit_hook() {\n\t\techo \"\"\n\t\techo \"done\"\n\t\techo \"\"\n}\n\nHANDLER=\"$1\"; shift\nif [[ \"${HANDLER}\" =~ ^(deploy_challenge|clean_challenge|deploy_cert|unchanged_cert|invalid_challenge|request_failure|exit_hook)$ ]]; then\n  \"$HANDLER\" \"$@\"\nfi\n"
  },
  {
    "path": "aegir/helpers/make_client.php.txt",
    "content": "<?php\n\n  // Create the client node\n  global $argv;\n  var_dump($argv);\n  $number = rand(100000, 999999);\n  $types = node_types_rebuild();\n  $node = new stdClass();\n  $node->uid = 1;\n  $node->type = 'client';\n  $node->email = $_SERVER['argv'][3];\n  $node->title = 'Octopus' . $number;\n  $node->language = LANGUAGE_NONE;\n  $node->status = 1;\n  node_object_prepare($node);\n  $node = node_submit($node);\n  node_save($node);\n  $this_client_id = $node->nid;\n  variable_set('hosting_default_client', $node->nid);\n\n?>\n"
  },
  {
    "path": "aegir/helpers/make_client_3.php.txt",
    "content": "<?php\n\n  // Create the client node\n  global $argv;\n  var_dump($argv);\n  $number = rand(100000, 999999);\n  $types = node_types_rebuild();\n  $node = new stdClass();\n  $node->uid = 1;\n  $node->type = 'client';\n  $node->email = $_SERVER['argv'][4];\n  $node->title = 'Octopus' . $number;\n  $node->status = 1;\n  node_save($node);\n  $this_client_id = $node->nid;\n  variable_set('hosting_default_client', $node->nid);\n\n?>\n"
  },
  {
    "path": "aegir/helpers/make_home.php.txt",
    "content": "<?php\n\n  // Create the home page node\n  $types = node_types_rebuild();\n  $node = new stdClass();\n  $node->type = 'book';\n  variable_set('comment_book', '0');\n  $node->title = 'Welcome to the World of Ægir';\n  $node->language = LANGUAGE_NONE;\n  $path = 'welcome';\n  $node->path = array('alias' => $path);\n  node_object_prepare($node);\n  $node->uid = 1;\n  $body_text = '<br /><br /><p style=\"text-align: justify;\">Do you manage more than a few Drupal sites, and feel a great sense of panic every time a security release is announced? Or maybe you only have a few sites, and would like to spend less time on the tedious (and likely manual) tasks associated with running these Drupal sites over their entire lifetime?</p><p style=\"text-align: justify;\"><img src=\"https://static.o8.io/dev/smokinfast2020.jpeg\" width=\"600\"></p><p style=\"text-align: justify;\">Solve your problems with multiple Drupal sites by running in Ægir! It\\'s even easier than tweeting! Simply enter your domain or subdomain, pointed to your Ægir instance, choose an installation profile and platform, click Save, then - wait a few minutes and you\\'re ready to go!</p><p style=\"text-align: justify;\">Now, from one web site, you can manage every other web site you\\'ve created - clone it, batch-migrate to newer platforms, reset your main password - anything you want, and it\\'s still the same 2-click easy task - as simple as posting a new tweet!</p><br /><br /><p style=\"text-align: justify;\">Already 900+ other hosts powering thousands of Drupal sites are running on our high-performance, Free/Libre Open Source Ægir BOA Software. BOA is an acronym of high performance Barracuda, Octopus and Ægir LEMP server stack. 
Barracuda installs and monitors all essential system services, while Octopus is an Ægir installer, with many popular Drupal Distributions ready to use, including: Drupal CMS, Commerce, DXPR Marketing, EzContent, farmOS, LocalGov, OpenCulturas, OpenFed, OpenLucius, Opigno LMS, Sector, Social, Thunder, Ubercart, and Varbase.</p><br /><br /><p style=\"text-align: justify;\">Ægir is built by a community of system administrators and developers who share Drupal deployment tools, strategies and best practices. Ægir makes it easy to install, upgrade, and backup an entire network of Drupal sites. Ægir is fully extensible, since it\\'s built on Drupal and Drush.</p>';\n  $node->status = 1;\n  $node->body[$node->language][0]['value']   = $body_text;\n  $node->body[$node->language][0]['summary'] = text_summary('Welcome to the World of Ægir','filtered_html');\n  $node->body[$node->language][0]['format']  = 'full_html';\n  $node = node_submit($node);\n  node_save($node);\n  variable_set('site_frontpage', 'node/' . $node->nid);\n\n?>\n"
  },
  {
    "path": "aegir/helpers/make_platform.php.txt",
    "content": "<?php\n\n  // Create the platform node\n  global $argv;\n  var_dump($argv);\n  $types = node_types_rebuild();\n  $node = new stdClass();\n  $node->type = 'platform';\n  $node->title = $_SERVER['argv'][3];\n  $node->language = LANGUAGE_NONE;\n  node_object_prepare($node);\n  $node->uid = 1;\n  $node->publish_path = $_SERVER['argv'][5];\n  $node->web_server = variable_get('hosting_default_web_server', 2);\n  $node->status = 1;\n  $node = node_submit($node);\n  node_save($node);\n  $platform_id = $node->nid;\n  variable_set('hosting_own_platform', $node->nid);\n\n  // Create the platform profile node\n  $node = new stdClass();\n  $node->type = 'package';\n  $node->title = $_SERVER['argv'][3];\n  $node->language = LANGUAGE_NONE;\n  node_object_prepare($node);\n  $node->uid = 1;\n  $node->package_type = 'profile';\n  $node->short_name = $_SERVER['argv'][4];\n  $node->status = 1;\n  $node = node_submit($node);\n  node_save($node);\n\n?>\n"
  },
  {
    "path": "aegir/helpers/make_platform_3.php.txt",
    "content": "<?php\n\n  // Create the platform node\n  global $argv;\n  var_dump($argv);\n  $types = node_types_rebuild();\n  $node = new stdClass();\n  $node->uid = 1;\n  $node->type = 'platform';\n  $node->title = $_SERVER['argv'][4];\n  $node->publish_path = $_SERVER['argv'][6];\n  $node->web_server = variable_get('hosting_default_web_server', 2);\n  $node->status = 1;\n  node_save($node);\n  $platform_id = $node->nid;\n  variable_set('hosting_own_platform', $node->nid);\n\n  // Create the platform profile node\n  $node = new stdClass();\n  $node->uid = 1;\n  $node->title = $_SERVER['argv'][4];\n  $node->type = 'package';\n  $node->package_type = 'profile';\n  $node->short_name = $_SERVER['argv'][5];\n  $node->status = 1;\n  node_save($node);\n\n?>\n"
  },
  {
    "path": "aegir/helpers/mysql_root_pass_reset.sh",
    "content": "service cron stop\n\n### Check /root/.my.cnf\nserver:~# cat /root/.my.cnf\n[client]\nuser=root\npassword=FOOO\nserver:~#\n\n### If /root/.my.pass.txt does not exist or does not match /root/.my.cnf\nserver:~# echo FOOO > /root/.my.pass.txt\n\n### If /etc/mysql_pre exists and /etc/mysql does not\nserver:~# mv -f /etc/mysql_pre /etc/mysql\n\n### Wait 60 sec.\n\n### Run:\nservice mysql stop\nps axf | grep mysql\n\n### For Percona 8.0 and 8.4\n/usr/sbin/mysqld \\\n  --defaults-file=/etc/mysql/my.cnf \\\n  --user=mysql \\\n  --skip-grant-tables \\\n  --skip-networking \\\n  --log-error-verbosity=3 \\\n  --daemonize=OFF\n\n### For Percona 5.7\n/usr/sbin/mysqld \\\n  --defaults-file=/etc/mysql/my.cnf \\\n  --user=mysql \\\n  --skip-grant-tables \\\n  --skip-networking \\\n  --log-warnings=2\n\nserver:~# mysql\nFLUSH PRIVILEGES;\nALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH mysql_native_password BY 'FOOO';\nALTER USER 'root'@'::1' IDENTIFIED WITH mysql_native_password BY 'FOOO';\nALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'FOOO';\nFLUSH PRIVILEGES;\nmysql> exit\n\nserver:~# service mysql restart\n\nserver:~# mysql\nmysql> exit\n\nserver:~# service cron start\n\n\n"
  },
  {
    "path": "aegir/helpers/mysqltuner5",
    "content": "#!/usr/bin/env perl\n# mysqltuner.pl - Version 2.5.2\n# High Performance MySQL Tuning Script\n# Copyright (C) 2015-2023 Jean-Marie Renouard - jmrenouard@gmail.com\n# Copyright (C) 2006-2023 Major Hayden - major@mhtx.net\n\n# For the latest updates, please visit http://mysqltuner.pl/\n# Git repository available at https://github.com/major/MySQLTuner-perl\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program.  If not, see <https://www.gnu.org/licenses/>.\n#\n# This project would not be possible without help from:\n#   Matthew Montgomery     Paul Kehrer          Dave Burgess\n#   Jonathan Hinds         Mike Jackson         Nils Breunese\n#   Shawn Ashlee           Luuk Vosslamber      Ville Skytta\n#   Trent Hornibrook       Jason Gill           Mark Imbriaco\n#   Greg Eden              Aubin Galinotti      Giovanni Bechis\n#   Bill Bradford          Ryan Novosielski     Michael Scheidell\n#   Blair Christensen      Hans du Plooy        Victor Trac\n#   Everett Barnes         Tom Krouper          Gary Barrueto\n#   Simon Greenaway        Adam Stein           Isart Montane\n#   Baptiste M.            
Cole Turner          Major Hayden\n#   Joe Ashcraft           Jean-Marie Renouard  Christian Loos\n#   Julien Francoz         Daniel Black         Long Radix\n#\n# Inspired by Matthew Montgomery's tuning-primer.sh script:\n# http://www.day32.com/MySQL/\n#\npackage main;\n\nuse 5.005;\nuse strict;\nuse warnings;\n\nuse diagnostics;\nuse File::Spec;\nuse Getopt::Long;\nuse Pod::Usage;\nuse File::Basename;\nuse Cwd 'abs_path';\n\n#use Data::Dumper;\n#$Data::Dumper::Pair = \" : \";\n\n# for which()\n#use Env;\n\n# Set up a few variables for use in the script\nmy $tunerversion = \"2.5.2\";\nmy ( @adjvars, @generalrec );\n\n# Set defaults\nmy %opt = (\n    \"silent\"              => 0,\n    \"nobad\"               => 0,\n    \"nogood\"              => 0,\n    \"noinfo\"              => 0,\n    \"debug\"               => 0,\n    \"nocolor\"             => ( !-t STDOUT ),\n    \"color\"               => ( -t STDOUT ),\n    \"forcemem\"            => 0,\n    \"forceswap\"           => 0,\n    \"host\"                => 0,\n    \"socket\"              => 0,\n    \"port\"                => 0,\n    \"user\"                => 0,\n    \"pass\"                => 0,\n    \"password\"            => 0,\n    \"ssl-ca\"              => 0,\n    \"skipsize\"            => 0,\n    \"checkversion\"        => 0,\n    \"updateversion\"       => 0,\n    \"buffers\"             => 0,\n    \"passwordfile\"        => 0,\n    \"bannedports\"         => '',\n    \"maxportallowed\"      => 0,\n    \"outputfile\"          => 0,\n    \"noprocess\"           => 0,\n    \"dbstat\"              => 0,\n    \"nodbstat\"            => 0,\n    \"server-log\"          => '',\n    \"tbstat\"              => 0,\n    \"notbstat\"            => 0,\n    \"colstat\"             => 0,\n    \"nocolstat\"           => 0,\n    \"idxstat\"             => 0,\n    \"noidxstat\"           => 0,\n    \"nomyisamstat\"        => 0,\n    \"nostructstat\"        => 0,\n    \"sysstat\"             => 0,\n    \"nosysstat\"      
     => 0,\n    \"pfstat\"              => 0,\n    \"nopfstat\"            => 0,\n    \"skippassword\"        => 0,\n    \"noask\"               => 0,\n    \"template\"            => 0,\n    \"json\"                => 0,\n    \"prettyjson\"          => 0,\n    \"reportfile\"          => 0,\n    \"verbose\"             => 0,\n    \"defaults-file\"       => '',\n    \"defaults-extra-file\" => '',\n    \"protocol\"            => '',\n    \"dumpdir\"             => '',\n    \"feature\"             => '',\n    \"dbgpattern\"          => '',\n    \"defaultarch\"         => 64\n);\n\n# Gather the options from the command line\nGetOptions(\n    \\%opt,                   'nobad',\n    'nogood',                'noinfo',\n    'debug',                 'nocolor',\n    'forcemem=i',            'forceswap=i',\n    'host=s',                'socket=s',\n    'port=i',                'user=s',\n    'pass=s',                'skipsize',\n    'checkversion',          'mysqladmin=s',\n    'mysqlcmd=s',            'help',\n    'buffers',               'skippassword',\n    'passwordfile=s',        'outputfile=s',\n    'silent',                'noask',\n    'json',                  'prettyjson',\n    'template=s',            'reportfile=s',\n    'cvefile=s',             'bannedports=s',\n    'updateversion',         'maxportallowed=s',\n    'verbose',               'password=s',\n    'passenv=s',             'userenv=s',\n    'defaults-file=s',       'ssl-ca=s',\n    'color',                 'noprocess',\n    'dbstat',                'nodbstat',\n    'tbstat',                'notbstat',\n    'colstat',               'nocolstat',\n    'sysstat',               'nosysstat',\n    'pfstat',                'nopfstat',\n    'idxstat',               'noidxstat',\n    'structstat',            'nostructstat',\n    'myisamstat',            'nomyisamstat',\n    'server-log=s',          'protocol=s',\n    'defaults-extra-file=s', 'dumpdir=s',\n    'feature=s',             'dbgpattern=s',\n    
'defaultarch=i'\n  )\n  or pod2usage(\n    -exitval  => 1,\n    -verbose  => 99,\n    -sections => [\n        \"NAME\",\n        \"IMPORTANT USAGE GUIDELINES\",\n        \"CONNECTION AND AUTHENTICATION\",\n        \"PERFORMANCE AND REPORTING OPTIONS\",\n        \"OUTPUT OPTIONS\"\n    ]\n  );\n\nif ( defined $opt{'help'} && $opt{'help'} == 1 ) {\n    pod2usage(\n        -exitval  => 0,\n        -verbose  => 99,\n        -sections => [\n            \"NAME\",\n            \"IMPORTANT USAGE GUIDELINES\",\n            \"CONNECTION AND AUTHENTICATION\",\n            \"PERFORMANCE AND REPORTING OPTIONS\",\n            \"OUTPUT OPTIONS\"\n        ]\n    );\n}\n\nmy $devnull = File::Spec->devnull();\nmy $basic_password_files =\n  ( $opt{passwordfile} eq \"0\" )\n  ? abs_path( dirname(__FILE__) ) . \"/basic_passwords.txt\"\n  : abs_path( $opt{passwordfile} );\n\n# Username from envvar\nif ( exists $opt{userenv} && exists $ENV{ $opt{userenv} } ) {\n    $opt{user} = $ENV{ $opt{userenv} };\n}\n\n# Related to password option\nif ( exists $opt{passenv} && exists $ENV{ $opt{passenv} } ) {\n    $opt{pass} = $ENV{ $opt{passenv} };\n}\n$opt{pass} = $opt{password} if ( $opt{pass} eq 0 and $opt{password} ne 0 );\n\nif ( $opt{dumpdir} ne '' ) {\n    $opt{dumpdir} = abs_path( $opt{dumpdir} );\n    if ( !-d $opt{dumpdir} ) {\n        mkdir $opt{dumpdir} or die \"Cannot create directory $opt{dumpdir}: $!\";\n    }\n}\n\n# for RPM distributions\n$basic_password_files = \"/usr/share/mysqltuner/basic_passwords.txt\"\n  unless -f \"$basic_password_files\";\n\n$opt{dbgpattern} = '.*' if ( $opt{dbgpattern} eq '' );\n\n# check if we need to enable verbose mode\nif ( $opt{feature} ne '' ) { $opt{verbose} = 1; }\nif ( $opt{verbose} ) {\n    $opt{checkversion} = 1;    # Check for updates to MySQLTuner\n    $opt{dbstat}       = 1;    # Print database information\n    $opt{tbstat}       = 1;    # Print table information\n    $opt{idxstat}      = 1;    # Print index information\n    $opt{sysstat}   
   = 1;    # Print system information\n    $opt{buffers}      = 1;    # Print global and per-thread buffer values\n    $opt{pfstat}       = 1;    # Print performance schema info.\n    $opt{structstat}   = 1;    # Print table structure information\n    $opt{myisamstat}   = 1;    # Print MyISAM table information\n\n    $opt{cvefile} = 'vulnerabilities.csv';    #CVE File for vulnerability checks\n}\n$opt{nocolor} = 1 if defined( $opt{outputfile} );\n$opt{tbstat}  = 0 if ( $opt{notbstat} == 1 );    # Don't print table information\n$opt{colstat} = 0 if ( $opt{nocolstat} == 1 );  # Don't print column information\n$opt{dbstat}  = 0 if ( $opt{nodbstat} == 1 ); # Don't print database information\n$opt{noprocess} = 0\n  if ( $opt{noprocess} == 1 );                # Don't print process information\n$opt{sysstat} = 0 if ( $opt{nosysstat} == 1 ); # Don't print sysstat information\n$opt{pfstat}  = 0\n  if ( $opt{nopfstat} == 1 );    # Don't print performance schema information\n$opt{idxstat} = 0 if ( $opt{noidxstat} == 1 );   # Don't print index information\n$opt{structstat} = 0\n  if ( not defined( $opt{structstat} ) or $opt{nostructstat} == 1 )\n  ;    # Don't print table struct information\n$opt{myisamstat} = 1\n  if ( not defined( $opt{myisamstat} ) );\n$opt{myisamstat} = 0\n  if ( $opt{nomyisamstat} == 1 );    # Don't print MyISAM table information\n\n# for RPM distributions\n$opt{cvefile} = \"/usr/share/mysqltuner/vulnerabilities.csv\"\n  unless ( defined $opt{cvefile} and -f \"$opt{cvefile}\" );\n$opt{cvefile} = '' unless -f \"$opt{cvefile}\";\n$opt{cvefile} = './vulnerabilities.csv' if -f './vulnerabilities.csv';\n\n$opt{'bannedports'} = '' unless defined( $opt{'bannedports'} );\nmy @banned_ports = split ',', $opt{'bannedports'};\n\n#\nmy $outputfile = undef;\n$outputfile = abs_path( $opt{outputfile} ) unless $opt{outputfile} eq \"0\";\n\nmy $fh = undef;\nopen( $fh, '>', $outputfile )\n  or die(\"Fail opening $outputfile\")\n  if defined($outputfile);\n$opt{nocolor} = 1 
if defined($outputfile);\n$opt{nocolor} = 1 unless ( -t STDOUT );\n\n$opt{nocolor} = 0 if ( $opt{color} == 1 );\n\n# Setting up the colors for the print styles\nmy $me = `whoami`;\n$me =~ s/\\n//g;\nmy $good = ( $opt{nocolor} == 0 ) ? \"[\\e[0;32mOK\\e[0m]\"  : \"[OK]\";\nmy $bad  = ( $opt{nocolor} == 0 ) ? \"[\\e[0;31m!!\\e[0m]\"  : \"[!!]\";\nmy $info = ( $opt{nocolor} == 0 ) ? \"[\\e[0;34m--\\e[0m]\"  : \"[--]\";\nmy $deb  = ( $opt{nocolor} == 0 ) ? \"[\\e[0;31mDG\\e[0m]\"  : \"[DG]\";\nmy $cmd  = ( $opt{nocolor} == 0 ) ? \"\\e[1;32m[CMD]($me)\" : \"[CMD]($me)\";\nmy $end  = ( $opt{nocolor} == 0 ) ? \"\\e[0m\"              : \"\";\n\n# Maximum lines of log output to read from end\nmy $maxlines = 30000;\n\n# Checks for supported or EOL'ed MySQL versions\nmy ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro );\n\n# Database\nmy @dblist;\n\n# Super structure containing all information\nmy %result;\n$result{'MySQLTuner'}{'version'}  = $tunerversion;\n$result{'MySQLTuner'}{'datetime'} = `date '+%d-%m-%Y %H:%M:%S'`;\n$result{'MySQLTuner'}{'options'}  = \\%opt;\n\n# Functions that handle the print styles\nsub prettyprint {\n    print $_[0] . \"\\n\" unless ( $opt{'silent'} or $opt{'json'} );\n    print $fh $_[0] . \"\\n\" if defined($fh);\n}\n\nsub goodprint {\n    prettyprint $good. \" \" . $_[0] unless ( $opt{nogood} == 1 );\n}\n\nsub infoprint {\n    prettyprint $info. \" \" . $_[0] unless ( $opt{noinfo} == 1 );\n}\n\nsub badprint {\n    prettyprint $bad. \" \" . $_[0] unless ( $opt{nobad} == 1 );\n}\n\nsub debugprint {\n    prettyprint $deb. \" \" . $_[0] unless ( $opt{debug} == 0 );\n}\n\nsub redwrap {\n    return ( $opt{nocolor} == 0 ) ? \"\\e[0;31m\" . $_[0] . \"\\e[0m\" : $_[0];\n}\n\nsub greenwrap {\n    return ( $opt{nocolor} == 0 ) ? \"\\e[0;32m\" . $_[0] . \"\\e[0m\" : $_[0];\n}\n\nsub cmdprint {\n    prettyprint $cmd. \" \" . $_[0] . 
$end;\n}\n\nsub infoprintml {\n    for my $ln (@_) { $ln =~ s/\\n//g; infoprint \"\\t$ln\"; }\n}\n\nsub infoprintcmd {\n    cmdprint \"@_\";\n    infoprintml grep { $_ ne '' and $_ !~ /^\\s*$/ } `@_ 2>&1`;\n}\n\nsub subheaderprint {\n    my $tln = 100;\n    my $sln = 8;\n    my $ln  = length(\"@_\") + 2;\n\n    prettyprint \" \";\n    prettyprint \"-\" x $sln . \" @_ \" . \"-\" x ( $tln - $ln - $sln );\n}\n\nsub infoprinthcmd {\n    subheaderprint \"$_[0]\";\n    infoprintcmd \"$_[1]\";\n}\n\nsub is_remote() {\n    my $host = $opt{'host'};\n    return 0 if ( $host eq '' );\n    return 0 if ( $host eq 'localhost' );\n    return 0 if ( $host eq '127.0.0.1' );\n    return 1;\n}\n\nsub is_int {\n    return 0 unless defined $_[0];\n    my $str = $_[0];\n\n    #trim whitespace both sides\n    $str =~ s/^\\s+|\\s+$//g;\n\n    #Alternatively, to match any float-like numeric, use:\n    # m/^([+-]?)(?=\\d|\\.\\d)\\d*(\\.\\d*)?([Ee]([+-]?\\d+))?$/\n\n    #flatten to string and match dash or plus and one or more digits\n    if ( $str =~ /^(\\-|\\+)?\\d+?$/ ) {\n        return 1;\n    }\n    return 0;\n}\n\n# Calculates the number of physical cores considering HyperThreading\nsub cpu_cores {\n    if ( $^O eq 'linux' ) {\n        my $cntCPU =\n`awk -F: '/^core id/ && !P[\\$2] { CORES++; P[\\$2]=1 }; /^physical id/ && !N[\\$2] { CPUs++; N[\\$2]=1 };  END { print CPUs*CORES }' /proc/cpuinfo`;\n        chomp $cntCPU;\n        return ( $cntCPU == 0 ? `nproc` : $cntCPU );\n    }\n\n    if ( $^O eq 'freebsd' ) {\n        my $cntCPU = `sysctl -n kern.smp.cores`;\n        chomp $cntCPU;\n        return $cntCPU + 0;\n    }\n    return 0;\n}\n\n# Calculates the parameter passed in bytes, then rounds it to one decimal place\nsub hr_bytes {\n    my $num = shift;\n    return \"0B\" unless defined($num);\n    return \"0B\" if $num eq \"NULL\";\n    return \"0B\" if $num eq \"\";\n\n    if ( $num >= ( 1024**3 ) ) {    # GB\n        return sprintf( \"%.1f\", ( $num / ( 1024**3 ) ) ) . 
\"G\";\n    }\n    elsif ( $num >= ( 1024**2 ) ) {    # MB\n        return sprintf( \"%.1f\", ( $num / ( 1024**2 ) ) ) . \"M\";\n    }\n    elsif ( $num >= 1024 ) {           # KB\n        return sprintf( \"%.1f\", ( $num / 1024 ) ) . \"K\";\n    }\n    else {\n        return $num . \"B\";\n    }\n}\n\nsub hr_raw {\n    my $num = shift;\n    return \"0\" unless defined($num);\n    return \"0\" if $num eq \"NULL\";\n    if ( $num =~ /^(\\d+)G$/ ) {\n        return $1 * 1024 * 1024 * 1024;\n    }\n    if ( $num =~ /^(\\d+)M$/ ) {\n        return $1 * 1024 * 1024;\n    }\n    if ( $num =~ /^(\\d+)K$/ ) {\n        return $1 * 1024;\n    }\n    if ( $num =~ /^(\\d+)$/ ) {\n        return $1;\n    }\n    return $num;\n}\n\n# Calculates the parameter passed in bytes, then rounds it to the nearest integer\nsub hr_bytes_rnd {\n    my $num = shift;\n    return \"0B\" unless defined($num);\n    return \"0B\" if $num eq \"NULL\";\n\n    if ( $num >= ( 1024**3 ) ) {    # GB\n        return int( ( $num / ( 1024**3 ) ) ) . \"G\";\n    }\n    elsif ( $num >= ( 1024**2 ) ) {    # MB\n        return int( ( $num / ( 1024**2 ) ) ) . \"M\";\n    }\n    elsif ( $num >= 1024 ) {           # KB\n        return int( ( $num / 1024 ) ) . \"K\";\n    }\n    else {\n        return $num . \"B\";\n    }\n}\n\n# Calculates the parameter passed to the nearest power of 1000, then rounds it to the nearest integer\nsub hr_num {\n    my $num = shift;\n    if ( $num >= ( 1000**3 ) ) {       # Billions\n        return int( ( $num / ( 1000**3 ) ) ) . \"B\";\n    }\n    elsif ( $num >= ( 1000**2 ) ) {    # Millions\n        return int( ( $num / ( 1000**2 ) ) ) . \"M\";\n    }\n    elsif ( $num >= 1000 ) {           # Thousands\n        return int( ( $num / 1000 ) ) . 
\"K\";\n    }\n    else {\n        return $num;\n    }\n}\n\n# Calculate Percentage\nsub percentage {\n    my $value = shift;\n    my $total = shift;\n    $total = 0 unless defined $total;\n    $total = 0 if $total eq \"NULL\";\n    return 100, 00 if $total == 0;\n    return sprintf( \"%.2f\", ( $value * 100 / $total ) );\n}\n\n# Calculates uptime to display in a human-readable form\nsub pretty_uptime {\n    my $uptime  = shift;\n    my $seconds = $uptime % 60;\n    my $minutes = int( ( $uptime % 3600 ) / 60 );\n    my $hours   = int( ( $uptime % 86400 ) / (3600) );\n    my $days    = int( $uptime / (86400) );\n    my $uptimestring;\n    if ( $days > 0 ) {\n        $uptimestring = \"${days}d ${hours}h ${minutes}m ${seconds}s\";\n    }\n    elsif ( $hours > 0 ) {\n        $uptimestring = \"${hours}h ${minutes}m ${seconds}s\";\n    }\n    elsif ( $minutes > 0 ) {\n        $uptimestring = \"${minutes}m ${seconds}s\";\n    }\n    else {\n        $uptimestring = \"${seconds}s\";\n    }\n    return $uptimestring;\n}\n\n# Retrieves the memory installed on this machine\nmy ( $physical_memory, $swap_memory, $duflags, $xargsflags );\n\nsub memerror {\n    badprint\n\"Unable to determine total memory/swap; use '--forcemem' and '--forceswap'\";\n    exit 1;\n}\n\nsub os_setup {\n    my $os = `uname`;\n    $duflags    = ( $os =~ /Linux/ )        ? '-b' : '';\n    $xargsflags = ( $os =~ /Darwin|SunOS/ ) ? 
''   : '-r';\n    if ( $opt{'forcemem'} > 0 ) {\n        $physical_memory = $opt{'forcemem'} * 1048576;\n        infoprint \"Assuming $opt{'forcemem'} MB of physical memory\";\n        if ( $opt{'forceswap'} > 0 ) {\n            $swap_memory = $opt{'forceswap'} * 1048576;\n            infoprint \"Assuming $opt{'forceswap'} MB of swap space\";\n        }\n        else {\n            $swap_memory = 0;\n            badprint \"Assuming 0 MB of swap space (use --forceswap to specify)\";\n        }\n    }\n    else {\n        if ( $os =~ /Linux|CYGWIN/ ) {\n            $physical_memory =\n              `grep -i memtotal: /proc/meminfo | awk '{print \\$2}'`\n              or memerror;\n            $physical_memory *= 1024;\n\n            $swap_memory =\n              `grep -i swaptotal: /proc/meminfo | awk '{print \\$2}'`\n              or memerror;\n            $swap_memory *= 1024;\n        }\n        elsif ( $os =~ /Darwin/ ) {\n            $physical_memory = `sysctl -n hw.memsize` or memerror;\n            $swap_memory =\n              `sysctl -n vm.swapusage | awk '{print \\$3}' | sed 's/\\..*\\$//'`\n              or memerror;\n        }\n        elsif ( $os =~ /NetBSD|OpenBSD|FreeBSD/ ) {\n            $physical_memory = `sysctl -n hw.physmem` or memerror;\n            if ( $physical_memory < 0 ) {\n                $physical_memory = `sysctl -n hw.physmem64` or memerror;\n            }\n            $swap_memory =\n              `swapctl -l | grep '^/' | awk '{ s+= \\$2 } END { print s }'`\n              or memerror;\n        }\n        elsif ( $os =~ /BSD/ ) {\n            $physical_memory = `sysctl -n hw.realmem` or memerror;\n            $swap_memory =\n              `swapinfo | grep '^/' | awk '{ s+= \\$2 } END { print s }'`;\n        }\n        elsif ( $os =~ /SunOS/ ) {\n            $physical_memory =\n              `/usr/sbin/prtconf | grep Memory | cut -f 3 -d ' '`\n              or memerror;\n            chomp($physical_memory);\n            $physical_memory 
= $physical_memory * 1024 * 1024;\n        }\n        elsif ( $os =~ /AIX/ ) {\n            $physical_memory =\n              `lsattr -El sys0 | grep realmem | awk '{print \\$2}'`\n              or memerror;\n            chomp($physical_memory);\n            $physical_memory = $physical_memory * 1024;\n            $swap_memory     = `lsps -as | awk -F\"(MB| +)\" '/MB /{print \\$2}'`\n              or memerror;\n            chomp($swap_memory);\n            $swap_memory = $swap_memory * 1024 * 1024;\n        }\n        elsif ( $os =~ /windows/i ) {\n            $physical_memory =\n`wmic ComputerSystem get TotalPhysicalMemory | perl -ne \"chomp; print if /[0-9]+/;\"`\n              or memerror;\n            $swap_memory =\n`wmic OS get FreeVirtualMemory | perl -ne \"chomp; print if /[0-9]+/;\"`\n              or memerror;\n        }\n    }\n    debugprint \"Physical Memory: $physical_memory\";\n    debugprint \"Swap Memory: $swap_memory\";\n    chomp($physical_memory);\n    chomp($swap_memory);\n    chomp($os);\n    $result{'OS'}{'OS Type'}                   = $os;\n    $result{'OS'}{'Physical Memory'}{'bytes'}  = $physical_memory;\n    $result{'OS'}{'Physical Memory'}{'pretty'} = hr_bytes($physical_memory);\n    $result{'OS'}{'Swap Memory'}{'bytes'}      = $swap_memory;\n    $result{'OS'}{'Swap Memory'}{'pretty'}     = hr_bytes($swap_memory);\n    $result{'OS'}{'Other Processes'}{'bytes'}  = get_other_process_memory();\n    $result{'OS'}{'Other Processes'}{'pretty'} =\n      hr_bytes( get_other_process_memory() );\n}\n\nsub get_http_cli {\n    my $httpcli = which( \"curl\", $ENV{'PATH'} );\n    chomp($httpcli);\n    if ($httpcli) {\n        return $httpcli;\n    }\n\n    $httpcli = which( \"wget\", $ENV{'PATH'} );\n    chomp($httpcli);\n    if ($httpcli) {\n        return $httpcli;\n    }\n    return \"\";\n}\n\n# Checks for updates to MySQLTuner\nsub validate_tuner_version {\n    if ( $opt{'checkversion'} eq 0 ) {\n        print \"\\n\" unless ( $opt{'silent'} or 
$opt{'json'} );\n        infoprint \"Skipped version check for MySQLTuner script\";\n        return;\n    }\n\n    my $update;\n    my $url =\n\"https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl\";\n    my $httpcli = get_http_cli();\n    if ( $httpcli =~ /curl$/ ) {\n        debugprint \"$httpcli is available.\";\n\n        debugprint\n\"$httpcli -m 3 -silent '$url' 2>/dev/null | grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2\";\n        $update =\n`$httpcli -m 3 -silent '$url' 2>/dev/null | grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2`;\n        chomp($update);\n        debugprint \"VERSION: $update\";\n\n        compare_tuner_version($update);\n        return;\n    }\n\n    if ( $httpcli =~ /wget$/ ) {\n        debugprint \"$httpcli is available.\";\n\n        debugprint\n\"$httpcli -e timestamping=off -t 1 -T 3 -O - '$url' 2>$devnull| grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2\";\n        $update =\n`$httpcli -e timestamping=off -t 1 -T 3 -O - '$url' 2>$devnull| grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2`;\n        chomp($update);\n        compare_tuner_version($update);\n        return;\n    }\n    debugprint \"curl and wget are not available.\";\n    infoprint \"Unable to check for the latest MySQLTuner version\";\n    infoprint\n\"Using --pass and --password option is insecure during MySQLTuner execution (password disclosure)\"\n      if ( defined( $opt{'pass'} ) );\n}\n\n# Checks for updates to MySQLTuner\nsub update_tuner_version {\n    if ( $opt{'updateversion'} eq 0 ) {\n        badprint \"Skipped version update for MySQLTuner script\";\n        print \"\\n\" unless ( $opt{'silent'} or $opt{'json'} );\n        return;\n    }\n\n    my $update;\n    my $fullpath = \"\";\n    my $url = \"https://raw.githubusercontent.com/major/MySQLTuner-perl/master/\";\n    my @scripts =\n      ( \"mysqltuner.pl\", \"basic_passwords.txt\", \"vulnerabilities.csv\" );\n    my $totalScripts    = scalar(@scripts);\n    my $receivedScripts = 
0;\n    my $httpcli         = get_http_cli();\n\n    foreach my $script (@scripts) {\n\n        if ( $httpcli =~ /curl$/ ) {\n            debugprint \"$httpcli is available.\";\n\n            $fullpath = dirname(__FILE__) . \"/\" . $script;\n            debugprint \"FullPath: $fullpath\";\n            debugprint\n\"$httpcli --connect-timeout 3 '$url$script' 2>$devnull > $fullpath\";\n            $update =\n`$httpcli --connect-timeout 3 '$url$script' 2>$devnull > $fullpath`;\n            chomp($update);\n            debugprint \"$script updated: $update\";\n\n            if ( -s $script eq 0 ) {\n                badprint \"Couldn't update $script\";\n            }\n            else {\n                ++$receivedScripts;\n                debugprint \"$script updated: $update\";\n            }\n        }\n        elsif ( $httpcli =~ /wget$/ ) {\n\n            debugprint \"$httpcli is available.\";\n\n            debugprint\n\"$httpcli -qe timestamping=off -t 1 -T 3 -O $script '$url$script'\";\n            $update =\n`$httpcli -qe timestamping=off -t 1 -T 3 -O $script '$url$script'`;\n            chomp($update);\n\n            if ( -s $script eq 0 ) {\n                badprint \"Couldn't update $script\";\n            }\n            else {\n                ++$receivedScripts;\n                debugprint \"$script updated: $update\";\n            }\n        }\n        else {\n            debugprint \"curl and wget are not available.\";\n            infoprint \"Unable to check for the latest MySQLTuner version\";\n        }\n\n    }\n\n    if ( $receivedScripts eq $totalScripts ) {\n        goodprint \"Successfully updated MySQLTuner script\";\n    }\n    else {\n        badprint \"Couldn't update MySQLTuner script\";\n    }\n    infoprint \"Stopping program: MySQLTuner script must be updated first.\";\n    exit 0;\n}\n\nsub compare_tuner_version {\n    my $remoteversion = shift;\n    debugprint \"Remote data: $remoteversion\";\n\n    #exit 0;\n    if ( $remoteversion ne 
$tunerversion ) {\n        badprint\n          \"There is a new version of MySQLTuner available ($remoteversion)\";\n        update_tuner_version();\n        return;\n    }\n    goodprint \"You have the latest version of MySQLTuner ($tunerversion)\";\n    return;\n}\n\n# Checks to see if a MySQL login is possible\nmy ( $mysqllogin, $doremote, $remotestring, $mysqlcmd, $mysqladmincmd );\n\nmy $osname = $^O;\nif ( $osname eq 'MSWin32' ) {\n    eval { require Win32; } or last;\n    $osname = Win32::GetOSName();\n    infoprint \"* Windows OS ($osname) is not fully supported.\\n\";\n\n    #exit 1;\n}\n\nsub mysql_setup {\n    $doremote     = 0;\n    $remotestring = '';\n    if ( $opt{mysqladmin} ) {\n        $mysqladmincmd = $opt{mysqladmin};\n    }\n    else {\n        $mysqladmincmd = which( \"mysqladmin\", $ENV{'PATH'} );\n        if ( !-e $mysqladmincmd ) {\n            $mysqladmincmd = which( \"mariadb-admin\", $ENV{'PATH'} );\n        }\n    }\n    chomp($mysqladmincmd);\n    if ( !-e $mysqladmincmd && $opt{mysqladmin} ) {\n        badprint \"Unable to find the mysqladmin command you specified: \"\n          . $mysqladmincmd . \"\";\n        exit 1;\n    }\n    elsif ( !-e $mysqladmincmd ) {\n        badprint\n\"Couldn't find mysqladmin/mariadb-admin in your \\$PATH. Is MySQL installed?\";\n\n        #exit 1;\n    }\n    if ( $opt{mysqlcmd} ) {\n        $mysqlcmd = $opt{mysqlcmd};\n    }\n    else {\n        $mysqlcmd = which( \"mysql\", $ENV{'PATH'} );\n        if ( !-e $mysqlcmd ) {\n            $mysqlcmd = which( \"mariadb\", $ENV{'PATH'} );\n        }\n    }\n    chomp($mysqlcmd);\n    if ( !-e $mysqlcmd && $opt{mysqlcmd} ) {\n        badprint \"Unable to find the mysql command you specified: \"\n          . $mysqlcmd . \"\";\n        exit 1;\n    }\n    elsif ( !-e $mysqlcmd ) {\n        badprint\n          \"Couldn't find mysql/mariadb in your \\$PATH. 
Is MySQL installed?\";\n        exit 1;\n    }\n    $mysqlcmd =~ s/\\n$//g;\n    my $mysqlclidefaults = `$mysqlcmd --print-defaults`;\n    debugprint \"MySQL Client: $mysqlclidefaults\";\n    if ( $mysqlclidefaults =~ /auto-vertical-output/ ) {\n        badprint\n          \"Avoid auto-vertical-output in your MySQL configuration file(s)\";\n        exit 1;\n    }\n\n    debugprint \"MySQL Client: $mysqlcmd\";\n\n    # Are we being asked to connect via a socket?\n    if ( $opt{socket} ne 0 ) {\n        if ( $opt{port} ne 0 ) {\n            $remotestring = \" -S $opt{socket} -P $opt{port}\";\n        }\n        else {\n            $remotestring = \" -S $opt{socket}\";\n        }\n    }\n\n    if ( $opt{protocol} ne '' ) {\n        $remotestring = \" --protocol=$opt{protocol}\";\n    }\n\n    # Are we being asked to connect to a remote server?\n    if ( $opt{host} ne 0 ) {\n        chomp( $opt{host} );\n        $opt{port} = ( $opt{port} eq 0 ) ? 3306 : $opt{port};\n\n# If we're doing a remote connection but forcemem wasn't specified, assume 1 GB\n        if ( $opt{'forcemem'} eq 0 && is_remote eq 1 ) {\n            badprint \"The --forcemem option is required for remote connections\";\n            badprint\n              \"Assuming 1 GB of RAM to simplify remote connection usage\";\n            $opt{'forcemem'} = 1024;\n\n            #exit 1;\n        }\n        if ( $opt{'forceswap'} eq 0 && is_remote eq 1 ) {\n            badprint\n              \"The --forceswap option is required for remote connections\";\n            badprint\n              \"Assuming 1 GB of swap to simplify remote connection usage\";\n            $opt{'forceswap'} = 1024;\n\n            #exit 1;\n        }\n        infoprint \"Performing tests on $opt{host}:$opt{port}\";\n        $remotestring = \" -h $opt{host} -P $opt{port}\";\n        $doremote     = is_remote();\n\n    }\n    else {\n        $opt{host} = '127.0.0.1';\n    }\n\n    if ( $opt{'ssl-ca'} ne 0 ) {\n        
if ( -e -r -f $opt{'ssl-ca'} ) {\n            $remotestring .= \" --ssl-ca=$opt{'ssl-ca'}\";\n            infoprint\n              \"Will connect using the SSL CA certificate passed on the command line\";\n            return 1;\n        }\n        else {\n            badprint\n\"Attempted to use the passed SSL CA certificate, but it was not found or could not be read\";\n            exit 1;\n        }\n    }\n\n   # Did we already get a username with or without password on the command line?\n    if ( $opt{user} ne 0 ) {\n        $mysqllogin =\n            \"-u $opt{user} \"\n          . ( ( $opt{pass} ne 0 ) ? \"-p'$opt{pass}' \" : \" \" )\n          . $remotestring;\n        my $loginstatus =\n          `$mysqlcmd -Nrs -e 'select \"mysqld is alive\";' $mysqllogin 2>&1`;\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n            goodprint \"Logged in using credentials passed on the command line\";\n            return 1;\n        }\n        else {\n            badprint\n              \"Attempted to use login credentials, but they were invalid\";\n            exit 1;\n        }\n    }\n\n    my $svcprop = which( \"svcprop\", $ENV{'PATH'} );\n    if ( substr( $svcprop, 0, 1 ) =~ \"/\" ) {\n\n        # We are on Solaris\n        ( my $mysql_login =\n`svcprop -p quickbackup/username svc:/network/mysql-quickbackup:default`\n        ) =~ s/\\s+$//;\n        ( my $mysql_pass =\n`svcprop -p quickbackup/password svc:/network/mysql-quickbackup:default`\n        ) =~ s/\\s+$//;\n        if ( substr( $mysql_login, 0, 7 ) ne \"svcprop\" ) {\n\n            # mysql-quickbackup is installed\n            $mysqllogin = \"-u $mysql_login -p$mysql_pass\";\n            my $loginstatus = `mysqladmin $mysqllogin ping 2>&1`;\n            if ( $loginstatus =~ /mysqld is alive/ ) {\n                goodprint \"Logged in using credentials from mysql-quickbackup.\";\n                return 1;\n            }\n            else {\n                badprint\n\"Attempted to use login credentials from 
mysql-quickbackup, but they failed.\";\n                exit 1;\n            }\n        }\n    }\n    elsif ( -r \"/etc/psa/.psa.shadow\" and $doremote == 0 ) {\n\n        # It's a Plesk box, use the available credentials\n        $mysqllogin = \"-u admin -p`cat /etc/psa/.psa.shadow`\";\n        my $loginstatus = `$mysqladmincmd ping $mysqllogin 2>&1`;\n        unless ( $loginstatus =~ /mysqld is alive/ ) {\n\n            # Plesk 10+\n            $mysqllogin =\n              \"-u admin -p`/usr/local/psa/bin/admin --show-password`\";\n            $loginstatus = `$mysqladmincmd ping $mysqllogin 2>&1`;\n            unless ( $loginstatus =~ /mysqld is alive/ ) {\n                badprint\n\"Attempted to use login credentials from Plesk and Plesk 10+, but they failed.\";\n                exit 1;\n            }\n        }\n    }\n    elsif ( -r \"/usr/local/directadmin/conf/mysql.conf\" and $doremote == 0 ) {\n\n        # It's a DirectAdmin box, use the available credentials\n        my $mysqluser =\n          `cat /usr/local/directadmin/conf/mysql.conf | egrep '^user=.*'`;\n        my $mysqlpass =\n          `cat /usr/local/directadmin/conf/mysql.conf | egrep '^passwd=.*'`;\n\n        $mysqluser =~ s/user=//;\n        $mysqluser =~ s/[\\r\\n]//;\n        $mysqlpass =~ s/passwd=//;\n        $mysqlpass =~ s/[\\r\\n]//;\n\n        $mysqllogin = \"-u $mysqluser -p$mysqlpass\";\n\n        my $loginstatus = `mysqladmin ping $mysqllogin 2>&1`;\n        unless ( $loginstatus =~ /mysqld is alive/ ) {\n            badprint\n\"Attempted to use login credentials from DirectAdmin, but they failed.\";\n            exit 1;\n        }\n    }\n    elsif ( -r \"/etc/mysql/debian.cnf\"\n        and $doremote == 0\n        and $opt{'defaults-file'} eq '' )\n    {\n\n        # We have a Debian maintenance account, use it\n        $mysqllogin = \"--defaults-file=/etc/mysql/debian.cnf\";\n        my $loginstatus = `$mysqladmincmd $mysqllogin ping 2>&1`;\n        if ( $loginstatus =~ /mysqld 
is alive/ ) {\n            goodprint\n              \"Logged in using credentials from Debian maintenance account.\";\n            return 1;\n        }\n        else {\n            badprint\n\"Attempted to use login credentials from Debian maintenance account, but they failed.\";\n            exit 1;\n        }\n    }\n    elsif ( $opt{'defaults-file'} ne '' and -r \"$opt{'defaults-file'}\" ) {\n\n        # defaults-file\n        debugprint \"defaults file detected: $opt{'defaults-file'}\";\n        my $mysqlclidefaults = `$mysqlcmd --print-defaults`;\n        debugprint \"MySQL Client Default File: $opt{'defaults-file'}\";\n\n        $mysqllogin = \"--defaults-file=\" . $opt{'defaults-file'};\n        my $loginstatus = `$mysqladmincmd $mysqllogin ping 2>&1`;\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n            goodprint \"Logged in using credentials from defaults file account.\";\n            return 1;\n        }\n    }\n    elsif ( $opt{'defaults-extra-file'} ne ''\n        and -r \"$opt{'defaults-extra-file'}\" )\n    {\n\n        # defaults-extra-file\n        debugprint \"defaults extra file detected: $opt{'defaults-extra-file'}\";\n        my $mysqlclidefaults = `$mysqlcmd --print-defaults`;\n        debugprint\n          \"MySQL Client Extra Default File: $opt{'defaults-extra-file'}\";\n\n        $mysqllogin = \"--defaults-extra-file=\" . 
$opt{'defaults-extra-file'};\n        my $loginstatus = `$mysqladmincmd $mysqllogin ping 2>&1`;\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n            goodprint\n              \"Logged in using credentials from extra defaults file account.\";\n            return 1;\n        }\n    }\n    else {\n        # It's not Plesk or Debian, we should try a login\n        debugprint \"$mysqladmincmd $remotestring ping 2>&1\";\n\n        #my $loginstatus = \"\";\n        debugprint \"Using mysqlcmd: $mysqlcmd\";\n\n        #if (defined($mysqladmincmd)) {\n        #  infoprint \"Using mysqladmin to check login\";\n        #  $loginstatus=`$mysqladmincmd $remotestring ping 2>&1`;\n        #} else {\n        infoprint \"Using mysql to check login\";\n        my $loginstatus =\n`$mysqlcmd $remotestring -Nrs -e 'select \"mysqld is alive\"' --connect-timeout=3 2>&1`;\n\n        #}\n\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n\n            # Login went just fine\n            $mysqllogin = \" $remotestring \";\n\n       # Did this go well because of a .my.cnf file or is there no password set?\n            my $userpath = `printenv HOME`;\n            if ( length($userpath) > 0 ) {\n                chomp($userpath);\n            }\n            unless ( -e \"${userpath}/.my.cnf\" or -e \"${userpath}/.mylogin.cnf\" )\n            {\n                badprint\n                  \"SECURITY RISK: Successfully authenticated without password\";\n            }\n            return 1;\n        }\n        else {\n            if ( $opt{'noask'} == 1 ) {\n                badprint\n                  \"Attempted to use login credentials, but they were invalid\";\n                exit 1;\n            }\n            my ( $name, $password );\n\n            # If --user is defined no need to ask for username\n            if ( $opt{user} ne 0 ) {\n                $name = $opt{user};\n            }\n            else {\n                print STDERR \"Please enter your MySQL administrative 
login: \";\n                $name = <STDIN>;\n            }\n\n            # If --pass is defined no need to ask for password\n            if ( $opt{pass} ne 0 ) {\n                $password = $opt{pass};\n            }\n            else {\n                print STDERR\n                  \"Please enter your MySQL administrative password: \";\n                system(\"stty -echo >$devnull 2>&1\");\n                $password = <STDIN>;\n                system(\"stty echo >$devnull 2>&1\");\n            }\n            chomp($password);\n            chomp($name);\n            $mysqllogin = \"-u $name\";\n\n            if ( length($password) > 0 ) {\n                $mysqllogin .= \" -p'$password'\";\n            }\n            $mysqllogin .= $remotestring;\n            my $loginstatus = `$mysqladmincmd ping $mysqllogin 2>&1`;\n            if ( $loginstatus =~ /mysqld is alive/ ) {\n\n                #print STDERR \"\";\n                if ( !length($password) ) {\n\n       # Did this go well because of a .my.cnf file or is there no password set?\n                    my $userpath = `printenv HOME`;\n                    chomp($userpath);\n                    unless ( -e \"$userpath/.my.cnf\" ) {\n                        print STDERR \"\";\n                        badprint\n\"SECURITY RISK: Successfully authenticated without password\";\n                    }\n                }\n                return 1;\n            }\n            else {\n                #print STDERR \"\";\n                badprint\n                  \"Attempted to use login credentials, but they were invalid.\";\n                exit 1;\n            }\n            exit 1;\n        }\n    }\n}\n\n# MySQL Request Array\nsub select_array {\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my @result = `$mysqlcmd $mysqllogin -Bse \"\\\\w$req\" 2>>/dev/null`;\n    if ( $? 
!= 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_array: return code : $?\";\n    chomp(@result);\n    return @result;\n}\n\n# MySQL Request Array\nsub select_array_with_headers {\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my @result = `$mysqlcmd $mysqllogin -Bre \"\\\\w$req\" 2>>/dev/null`;\n    if ( $? != 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_array_with_headers: return code : $?\";\n    chomp(@result);\n    return @result;\n}\n\n# MySQL Request Array\nsub select_csv_file {\n    my $tfile = shift;\n    my $req   = shift;\n    debugprint \"PERFORM: $req CSV into $tfile\";\n\n    #return;\n    my @result = select_array_with_headers($req);\n    open( my $fh, '>', $tfile ) or die \"Could not open file '$tfile' $!\";\n    for my $l (@result) {\n        $l =~ s/\\t/\",\"/g;\n        $l =~ s/^/\"/;\n        $l =~ s/$/\"\\n/;\n        print $fh $l;\n        print $l if $opt{debug};\n    }\n    close $fh;\n    infoprint \"CSV file $tfile created\";\n}\n\nsub human_size {\n    my ( $size, $n ) = ( shift, 0 );\n    ++$n and $size /= 1024 until $size < 1024;\n    return sprintf \"%.2f %s\", $size, (qw[ bytes KB MB GB TB ])[$n];\n}\n\n# MySQL Request one\nsub select_one {\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my $result = `$mysqlcmd $mysqllogin -Bse \"\\\\w$req\" 2>>/dev/null`;\n    if ( $? 
!= 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_one: return code : $?\";\n    chomp($result);\n    return $result;\n}\n\n# MySQL Request one\nsub select_one_g {\n    my $pattern = shift;\n\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my @result = `$mysqlcmd $mysqllogin -re \"\\\\w$req\\\\G\" 2>>/dev/null`;\n    if ( $? != 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_one_g: return code : $?\";\n    chomp(@result);\n    return ( grep { /$pattern/ } @result )[0];\n}\n\nsub select_str_g {\n    my $pattern = shift;\n\n    my $req = shift;\n    my $str = select_one_g $pattern, $req;\n    return () unless defined $str;\n    my @val = split /:/, $str;\n    shift @val;\n    return trim(@val);\n}\n\nsub select_user_dbs {\n    return select_array(\n\"SELECT DISTINCT TABLE_SCHEMA FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'percona', 'sys')\"\n    );\n}\n\nsub select_tables_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_indexes_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT INDEX_NAME FROM information_schema.STATISTICS WHERE TABLE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_views_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT TABLE_NAME FROM information_schema.VIEWS WHERE 
TABLE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_triggers_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT TRIGGER_NAME FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA='$schema'\"\n    );\n}\n\nsub select_routines_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT ROUTINE_NAME FROM information_schema.ROUTINES WHERE ROUTINE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_table_indexes_db {\n    my $schema = shift;\n    my $tbname = shift;\n    return select_array(\n\"SELECT INDEX_NAME FROM information_schema.STATISTICS WHERE TABLE_SCHEMA='$schema' AND TABLE_NAME='$tbname'\"\n    );\n}\n\nsub select_table_columns_db {\n    my $schema = shift;\n    my $table  = shift;\n    return select_array(\n\"SELECT COLUMN_NAME FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$schema' AND TABLE_NAME='$table'\"\n    );\n}\n\nsub get_tuning_info {\n    my @infoconn = select_array \"\\\\s\";\n    my ( $tkey, $tval );\n    @infoconn =\n      grep { !/Threads:/ and !/Connection id:/ and !/pager:/ and !/Using/ }\n      @infoconn;\n    foreach my $line (@infoconn) {\n        if ( $line =~ /\\s*(.*):\\s*(.*)/ ) {\n            debugprint \"$1 => $2\";\n            $tkey = $1;\n            $tval = $2;\n            chomp($tkey);\n            chomp($tval);\n            $result{'MySQL Client'}{$tkey} = $tval;\n        }\n    }\n    $result{'MySQL Client'}{'Client Path'}         = $mysqlcmd;\n    $result{'MySQL Client'}{'Admin Path'}          = $mysqladmincmd;\n    $result{'MySQL Client'}{'Authentication Info'} = $mysqllogin;\n\n}\n\n# Populates all of the variable and status hashes\nmy ( %mystat, %myvar, $dummyselect, %myrepl, %myslaves );\n\nsub arr2hash {\n    my $href = shift;\n    my $harr = shift;\n    my $sep  = shift;\n    my $key  = '';\n    my $val  = '';\n\n    $sep = '\\s' unless defined($sep);\n    foreach my $line (@$harr) {\n        next if ( $line =~ m/^\\*\\*\\*\\*\\*\\*\\*/ );\n        $line =~ 
/([a-zA-Z_]*)\\s*$sep\\s*(.*)/;\n        $key         = $1;\n        $val         = $2;\n        $$href{$key} = $val;\n\n        debugprint \" * $key = $val\" if $key =~ /$opt{dbgpattern}/i;\n    }\n}\n\nsub get_all_vars {\n\n    # We need to initiate at least one query so that our data is useable\n    $dummyselect = select_one \"SELECT VERSION()\";\n    if ( not defined($dummyselect) or $dummyselect eq \"\" ) {\n        badprint\n          \"You probably do not have enough privileges to run MySQLTuner ...\";\n        exit(256);\n    }\n    $dummyselect =~ s/(.*?)\\-.*/$1/;\n    debugprint \"VERSION: \" . $dummyselect . \"\";\n    $result{'MySQL Client'}{'Version'} = $dummyselect;\n\n    my @mysqlvarlist = select_array(\"SHOW VARIABLES\");\n    push( @mysqlvarlist, select_array(\"SHOW GLOBAL VARIABLES\") );\n    arr2hash( \\%myvar, \\@mysqlvarlist );\n    $result{'Variables'} = \\%myvar;\n\n    my @mysqlstatlist = select_array(\"SHOW STATUS\");\n    push( @mysqlstatlist, select_array(\"SHOW GLOBAL STATUS\") );\n    arr2hash( \\%mystat, \\@mysqlstatlist );\n    $result{'Status'} = \\%mystat;\n    unless ( defined( $myvar{'innodb_support_xa'} ) ) {\n        $myvar{'innodb_support_xa'} = 'ON';\n    }\n    $mystat{'Uptime'} = 1\n      unless defined( $mystat{'Uptime'} )\n      and $mystat{'Uptime'} > 0;\n    $myvar{'have_galera'} = \"NO\";\n    if (   defined( $myvar{'wsrep_provider_options'} )\n        && $myvar{'wsrep_provider_options'} ne \"\"\n        && $myvar{'wsrep_on'} ne \"OFF\" )\n    {\n        $myvar{'have_galera'} = \"YES\";\n        debugprint \"Galera options: \" . $myvar{'wsrep_provider_options'};\n    }\n\n    # Workaround for MySQL bug #59393 wrt. 
ignore-builtin-innodb\n    if ( ( $myvar{'ignore_builtin_innodb'} || \"\" ) eq \"ON\" ) {\n        $myvar{'have_innodb'} = \"NO\";\n    }\n\n    # Support GTID MODE FOR MARIADB\n    # Issue MariaDB GTID mode #513\n    $myvar{'gtid_mode'} = 'ON'\n      if ( defined( $myvar{'gtid_current_pos'} )\n        and $myvar{'gtid_current_pos'} ne '' );\n\n    # Whether the server uses a thread pool to handle client connections\n    # MariaDB: thread_handling = pool-of-threads\n    # MySQL: thread_handling = loaded-dynamically\n    $myvar{'have_threadpool'} = \"NO\";\n    if (\n        defined( $myvar{'thread_handling'} )\n        and (  $myvar{'thread_handling'} eq 'pool-of-threads'\n            || $myvar{'thread_handling'} eq 'loaded-dynamically' )\n      )\n    {\n        $myvar{'have_threadpool'} = \"YES\";\n    }\n\n    # have_* for engines is deprecated and will be removed in MySQL 5.6;\n    # check SHOW ENGINES and set corresponding old style variables.\n    # Also works around MySQL bug #59393 wrt. skip-innodb\n    my @mysqlenginelist = select_array \"SHOW ENGINES\";\n    foreach my $line (@mysqlenginelist) {\n        if ( $line =~ /^([a-zA-Z_]+)\\s+(\\S+)/ ) {\n            my $engine = lc($1);\n\n            if ( $engine eq \"federated\" || $engine eq \"blackhole\" ) {\n                $engine .= \"_engine\";\n            }\n            elsif ( $engine eq \"berkeleydb\" ) {\n                $engine = \"bdb\";\n            }\n            my $val = ( $2 eq \"DEFAULT\" ) ? 
\"YES\" : $2;\n            $myvar{\"have_$engine\"} = $val;\n            $result{'Storage Engines'}{$engine} = $2;\n        }\n    }\n\n    #debugprint Dumper(@mysqlenginelist);\n\n    my @mysqlslave;\n    if ( mysql_version_eq(8) or mysql_version_ge( 10, 5 ) ) {\n        @mysqlslave = select_array(\"SHOW REPLICA STATUS\\\\G\");\n    }\n    else {\n        @mysqlslave = select_array(\"SHOW SLAVE STATUS\\\\G\");\n    }\n    arr2hash( \\%myrepl, \\@mysqlslave, ':' );\n    $result{'Replication'}{'Status'} = \\%myrepl;\n\n    my @mysqlslaves;\n    if ( mysql_version_eq(8) or mysql_version_ge( 10, 5 ) ) {\n        @mysqlslaves = select_array \"SHOW SLAVE STATUS\";\n    }\n    else {\n        @mysqlslaves = select_array(\"SHOW SLAVE HOSTS\\\\G\");\n    }\n\n    my @lineitems = ();\n    foreach my $line (@mysqlslaves) {\n        debugprint \"L: $line \";\n        @lineitems                                        = split /\\s+/, $line;\n        $myslaves{ $lineitems[0] }                        = $line;\n        $result{'Replication'}{'Slaves'}{ $lineitems[0] } = $lineitems[4];\n    }\n}\n\nsub remove_cr {\n    return map {\n        my $line = $_;\n        $line =~ s/\\n$//g;\n        $line =~ s/^\\s+$//g;\n        $line;\n    } @_;\n}\n\nsub remove_empty {\n    grep { $_ ne '' } @_;\n}\n\nsub grep_file_contents {\n    my $file = shift;\n    my $patt;\n}\n\nsub get_file_contents {\n    my $file = shift;\n    open( my $fh, \"<\", $file ) or die \"Can't open $file for read: $!\";\n    my @lines = <$fh>;\n    close $fh or die \"Cannot close $file: $!\";\n    @lines = remove_cr @lines;\n    return @lines;\n}\n\nsub get_basic_passwords {\n    return get_file_contents(shift);\n}\n\nsub get_log_file_real_path {\n    my $file     = shift;\n    my $hostname = shift;\n    my $datadir  = shift;\n    if ( -f \"$file\" ) {\n        return $file;\n    }\n    elsif ( -f \"$hostname.log\" ) {\n        return \"$hostname.log\";\n    }\n    elsif ( -f \"$hostname.err\" ) {\n        return 
\"$hostname.err\";\n    }\n    elsif ( -f \"$datadir$hostname.err\" ) {\n        return \"$datadir$hostname.err\";\n    }\n    elsif ( -f \"$datadir$hostname.log\" ) {\n        return \"$datadir$hostname.log\";\n    }\n    elsif ( -f \"$datadir\" . \"mysql_error.log\" ) {\n        return \"$datadir\" . \"mysql_error.log\";\n    }\n    elsif ( -f \"/var/log/mysql.log\" ) {\n        return \"/var/log/mysql.log\";\n    }\n    elsif ( -f \"/var/log/mysqld.log\" ) {\n        return \"/var/log/mysqld.log\";\n    }\n    elsif ( -f \"/var/log/mysql/$hostname.err\" ) {\n        return \"/var/log/mysql/$hostname.err\";\n    }\n    elsif ( -f \"/var/log/mysql/$hostname.log\" ) {\n        return \"/var/log/mysql/$hostname.log\";\n    }\n    elsif ( -f \"/var/log/mysql/\" . \"mysql_error.log\" ) {\n        return \"/var/log/mysql/\" . \"mysql_error.log\";\n    }\n    else {\n        return $file;\n    }\n}\n\nsub log_file_recommendations {\n    if ( is_remote eq 1 ) {\n        infoprint \"Skipping error log file checks on remote host\";\n        return;\n    }\n    my $fh;\n    $myvar{'log_error'} = $opt{'server-log'}\n      || get_log_file_real_path( $myvar{'log_error'}, $myvar{'hostname'},\n        $myvar{'datadir'} );\n\n    subheaderprint \"Log file Recommendations\";\n    if ( \"$myvar{'log_error'}\" eq \"stderr\" ) {\n        badprint\n\"log_error is set to $myvar{'log_error'}, but this script can't read stderr\";\n        return;\n    }\n    elsif ( $myvar{'log_error'} =~ /^(docker|podman|kubectl):(.*)/ ) {\n        open( $fh, '-|', \"$1 logs --tail=$maxlines '$2'\" )\n          // die \"Can't start $1 $!\";\n        goodprint \"Log from cloud $myvar{'log_error'} exists\";\n    }\n    elsif ( $myvar{'log_error'} =~ /^systemd:(.*)/ ) {\n        open( $fh, '-|', \"journalctl -n $maxlines -b  -u '$1'\" )\n          // die \"Can't start journalctl $!\";\n        goodprint \"Log journal $myvar{'log_error'} exists\";\n    }\n    elsif ( -f \"$myvar{'log_error'}\" ) {\n      
  goodprint \"Log file $myvar{'log_error'} exists\";\n        my $size = ( stat $myvar{'log_error'} )[7];\n        infoprint \"Log file: \"\n          . $myvar{'log_error'} . \" (\"\n          . hr_bytes_rnd($size) . \")\";\n\n        if ( $size > 0 ) {\n            goodprint \"Log file $myvar{'log_error'} is not empty\";\n            if ( $size < 32 * 1024 * 1024 ) {\n                goodprint \"Log file $myvar{'log_error'} is smaller than 32 MB\";\n            }\n            else {\n                badprint \"Log file $myvar{'log_error'} is bigger than 32 MB\";\n                push @generalrec,\n                  $myvar{'log_error'}\n                  . \" is > 32MB, you should analyze why or implement a rotation log strategy such as logrotate!\";\n            }\n        }\n        else {\n            infoprint\n\"Log file $myvar{'log_error'} is empty. Assuming log-rotation. Use --server-log={file} for explicit file\";\n            return;\n        }\n        if ( !open( $fh, '<', $myvar{'log_error'} ) ) {\n            badprint \"Log file $myvar{'log_error'} isn't readable.\";\n            return;\n        }\n        goodprint \"Log file $myvar{'log_error'} is readable.\";\n\n        if ( $maxlines * 80 < $size ) {\n            seek( $fh, -$maxlines * 80, 2 );\n            <$fh>;    # discard line fragment\n        }\n    }\n    else {\n        badprint \"Log file $myvar{'log_error'} doesn't exist\";\n        return;\n    }\n\n    my $numLi     = 0;\n    my $nbWarnLog = 0;\n    my $nbErrLog  = 0;\n    my @lastShutdowns;\n    my @lastStarts;\n\n    while ( my $logLi = <$fh> ) {\n        chomp $logLi;\n        $numLi++;\n        debugprint \"$numLi: $logLi\"\n          if $logLi =~ /warning|error/i and $logLi !~ /Logging to/;\n        $nbErrLog++\n          if $logLi  =~ /error/i\n          and $logLi !~ /(Logging to|\\[Warning\\].*ERROR_FOR_DIVISION_BY_ZERO)/;\n        $nbWarnLog++ if $logLi =~ /warning/i;\n        push @lastShutdowns, $logLi\n          if $logLi 
=~ /Shutdown complete/ and $logLi !~ /Innodb/i;\n        push @lastStarts, $logLi if $logLi =~ /ready for connections/;\n    }\n    close $fh;\n\n    if ( $nbWarnLog > 0 ) {\n        badprint \"$myvar{'log_error'} contains $nbWarnLog warning(s).\";\n        push @generalrec, \"Check warning line(s) in $myvar{'log_error'} file\";\n    }\n    else {\n        goodprint \"$myvar{'log_error'} doesn't contain any warning.\";\n    }\n    if ( $nbErrLog > 0 ) {\n        badprint \"$myvar{'log_error'} contains $nbErrLog error(s).\";\n        push @generalrec, \"Check error line(s) in $myvar{'log_error'} file\";\n    }\n    else {\n        goodprint \"$myvar{'log_error'} doesn't contain any error.\";\n    }\n\n    infoprint scalar @lastStarts . \" start(s) detected in $myvar{'log_error'}\";\n    my $nStart = 0;\n    my $nEnd   = 10;\n    if ( scalar @lastStarts < $nEnd ) {\n        $nEnd = scalar @lastStarts;\n    }\n    for my $startd ( reverse @lastStarts[ -$nEnd .. -1 ] ) {\n        $nStart++;\n        infoprint \"$nStart) $startd\";\n    }\n    infoprint scalar @lastShutdowns\n      . \" shutdown(s) detected in $myvar{'log_error'}\";\n    $nStart = 0;\n    $nEnd   = 10;\n    if ( scalar @lastShutdowns < $nEnd ) {\n        $nEnd = scalar @lastShutdowns;\n    }\n    for my $shutd ( reverse @lastShutdowns[ -$nEnd .. 
-1 ] ) {\n        $nStart++;\n        infoprint \"$nStart) $shutd\";\n    }\n\n    #exit 0;\n}\n\nsub cve_recommendations {\n    subheaderprint \"CVE Security Recommendations\";\n    unless ( defined( $opt{cvefile} ) && -f \"$opt{cvefile}\" ) {\n        infoprint \"Skipped due to --cvefile option undefined\";\n        return;\n    }\n\n#$mysqlvermajor=10;\n#$mysqlverminor=1;\n#$mysqlvermicro=17;\n#prettyprint \"Look for related CVE for $myvar{'version'} or lower in $opt{cvefile}\";\n    my $cvefound = 0;\n    open( my $fh, \"<\", $opt{cvefile} )\n      or die \"Can't open $opt{cvefile} for read: $!\";\n    while ( my $cveline = <$fh> ) {\n        my @cve = split( ';', $cveline );\n        debugprint\n\"Comparing $mysqlvermajor\\.$mysqlverminor\\.$mysqlvermicro with $cve[1]\\.$cve[2]\\.$cve[3] : \"\n          . ( mysql_version_le( $cve[1], $cve[2], $cve[3] ) ? '<=' : '>' );\n\n        # Avoid not major/minor version corresponding CVEs\n        next\n          unless ( int( $cve[1] ) == $mysqlvermajor\n            && int( $cve[2] ) == $mysqlverminor );\n        if ( int( $cve[3] ) >= $mysqlvermicro ) {\n            badprint \"$cve[4](<= $cve[1]\\.$cve[2]\\.$cve[3]) : $cve[6]\";\n            $result{'CVE'}{'List'}{$cvefound} =\n              \"$cve[4](<= $cve[1]\\.$cve[2]\\.$cve[3]) : $cve[6]\";\n            $cvefound++;\n        }\n    }\n    close $fh or die \"Cannot close $opt{cvefile}: $!\";\n    $result{'CVE'}{'nb'} = $cvefound;\n\n    my $cve_warning_notes = \"\";\n    if ( $cvefound == 0 ) {\n        goodprint \"NO SECURITY CVE FOUND FOR YOUR VERSION\";\n        return;\n    }\n    if ( $mysqlvermajor eq 5 and $mysqlverminor eq 5 ) {\n        infoprint\n          \"False positive CVE(s) for MySQL and MariaDB 5.5.x can be found.\";\n        infoprint \"Check carefully each CVE for those particular versions\";\n    }\n    badprint $cvefound . \" CVE(s) found for your MySQL release.\";\n    push( @generalrec,\n        $cvefound\n          . 
\" CVE(s) found for your MySQL release. Consider upgrading your version !\"\n    );\n}\n\nsub get_opened_ports {\n    my @opened_ports = `netstat -ltn`;\n    @opened_ports = map {\n        my $v = $_;\n        $v =~ s/.*:(\\d+)\\s.*$/$1/;\n        $v =~ s/\\D//g;\n        $v;\n    } @opened_ports;\n    @opened_ports = sort { $a <=> $b } grep { !/^$/ } @opened_ports;\n\n    #debugprint Dumper \\@opened_ports;\n    $result{'Network'}{'TCP Opened'} = \\@opened_ports;\n    return @opened_ports;\n}\n\nsub is_open_port {\n    my $port = shift;\n    if ( grep { /^$port$/ } get_opened_ports ) {\n        return 1;\n    }\n    return 0;\n}\n\nsub get_process_memory {\n    my $pid = shift;\n    my @mem = `ps -p $pid -o rss`;\n    return 0 if scalar @mem != 2;\n    return $mem[1] * 1024;\n}\n\nsub get_other_process_memory {\n    return 0 if ( $opt{tbstat} == 0 );\n    my @procs = `ps eaxo pid,command`;\n    @procs = map {\n        my $v = $_;\n        $v =~ s/.*PID.*//;\n        $v =~ s/.*mysqld.*//;\n        $v =~ s/.*\\[.*\\].*//;\n        $v =~ s/^\\s+$//g;\n        $v =~ s/.*PID.*CMD.*//;\n        $v =~ s/.*systemd.*//;\n        $v =~ s/\\s*?(\\d+)\\s*.*/$1/g;\n        $v;\n    } @procs;\n    @procs = remove_cr @procs;\n    @procs = remove_empty @procs;\n    my $totalMemOther = 0;\n    map { $totalMemOther += get_process_memory($_); } @procs;\n    return $totalMemOther;\n}\n\nsub get_os_release {\n    if ( -f \"/etc/lsb-release\" ) {\n        my @info_release = get_file_contents \"/etc/lsb-release\";\n        my $os_release   = $info_release[3];\n        $os_release =~ s/.*=\"//;\n        $os_release =~ s/\"$//;\n        return $os_release;\n    }\n\n    if ( -f \"/etc/system-release\" ) {\n        my @info_release = get_file_contents \"/etc/system-release\";\n        return $info_release[0];\n    }\n\n    if ( -f \"/etc/os-release\" ) {\n        my @info_release = get_file_contents \"/etc/os-release\";\n        my $os_release   = $info_release[0];\n        $os_release =~ 
s/.*=\"//;\n        $os_release =~ s/\"$//;\n        return $os_release;\n    }\n\n    if ( -f \"/etc/issue\" ) {\n        my @info_release = get_file_contents \"/etc/issue\";\n        my $os_release   = $info_release[0];\n        $os_release =~ s/\\s+\\\\n.*//;\n        return $os_release;\n    }\n    return \"Unknown OS release\";\n}\n\nsub get_fs_info {\n    my @sinfo = `df -P | grep '%'`;\n    my @iinfo = `df -Pi| grep '%'`;\n    shift @sinfo;\n    shift @iinfo;\n\n    foreach my $info (@sinfo) {\n\n        #exit(0);\n        if ( $info =~ /.*?(\\d+)\\s+(\\d+)\\s+(\\d+)\\s+(\\d+)%\\s+(.*)$/ ) {\n            next if $5 =~ m{(run|dev|sys|proc|snap|init)};\n            if ( $4 > 85 ) {\n                badprint \"mount point $5 is using $4 % total space (\"\n                  . human_size( $2 * 1024 ) . \" / \"\n                  . human_size( $1 * 1024 ) . \")\";\n                push( @generalrec, \"Add some space to $5 mountpoint.\" );\n            }\n            else {\n                infoprint \"mount point $5 is using $4 % total space (\"\n                  . human_size( $2 * 1024 ) . \" / \"\n                  . human_size( $1 * 1024 ) . 
\")\";\n            }\n            $result{'Filesystem'}{'Space Pct'}{$5}   = $4;\n            $result{'Filesystem'}{'Used Space'}{$5}  = $2;\n            $result{'Filesystem'}{'Free Space'}{$5}  = $3;\n            $result{'Filesystem'}{'Total Space'}{$5} = $1;\n        }\n    }\n\n    @iinfo = map {\n        my $v = $_;\n        $v =~ s/.*\\s(\\d+)%\\s+(.*)/$1\\t$2/g;\n        $v;\n    } @iinfo;\n    foreach my $info (@iinfo) {\n        next if $info =~ m{(\\d+)\\t/(run|dev|sys|proc|snap)($|/)};\n        if ( $info =~ /(\\d+)\\t(.*)/ ) {\n            if ( $1 > 85 ) {\n                badprint \"mount point $2 is using $1 % of max allowed inodes\";\n                push( @generalrec,\n\"Cleanup files from $2 mountpoint or reformat your filesystem.\"\n                );\n            }\n            else {\n                infoprint \"mount point $2 is using $1 % of max allowed inodes\";\n            }\n            $result{'Filesystem'}{'Inode Pct'}{$2} = $1;\n        }\n    }\n}\n\nsub merge_hash {\n    my $h1     = shift;\n    my $h2     = shift;\n    my %result;\n    foreach my $substanceref ( $h1, $h2 ) {\n        while ( my ( $k, $v ) = each %$substanceref ) {\n            next if ( exists $result{$k} );\n            $result{$k} = $v;\n        }\n    }\n    return \\%result;\n}\n\nsub is_virtual_machine {\n    if ( $^O eq 'linux' ) {\n        my $isVm = `grep -Ec '^flags.*\\ hypervisor\\ ' /proc/cpuinfo`;\n        return ( $isVm == 0 ? 0 : 1 );\n    }\n\n    if ( $^O eq 'freebsd' ) {\n        my $isVm = `sysctl -n kern.vm_guest`;\n        chomp $isVm;\n        return ( $isVm eq 'none' ? 
0 : 1 );\n    }\n    return 0;\n}\n\nsub infocmd {\n    my $cmd = \"@_\";\n    debugprint \"CMD: $cmd\";\n    my @result = `$cmd`;\n    @result = remove_cr @result;\n    for my $l (@result) {\n        infoprint \"$l\";\n    }\n}\n\nsub infocmd_tab {\n    my $cmd = \"@_\";\n    debugprint \"CMD: $cmd\";\n    my @result = `$cmd`;\n    @result = remove_cr @result;\n    for my $l (@result) {\n        infoprint \"\\t$l\";\n    }\n}\n\nsub infocmd_one {\n    my $cmd    = \"@_\";\n    my @result = `$cmd 2>&1`;\n    @result = remove_cr @result;\n    return join ', ', @result;\n}\n\nsub get_kernel_info {\n    my @params = (\n        'fs.aio-max-nr',                 'fs.aio-nr',\n        'fs.nr_open',                    'fs.file-max',\n        'sunrpc.tcp_fin_timeout',        'sunrpc.tcp_max_slot_table_entries',\n        'sunrpc.tcp_slot_table_entries', 'vm.swappiness'\n    );\n    infoprint \"Information about kernel tuning:\";\n    foreach my $param (@params) {\n        infocmd_tab(\"sysctl $param 2>/dev/null\");\n        $result{'OS'}{'Config'}{$param} = `sysctl -n $param 2>/dev/null`;\n    }\n    if ( `sysctl -n vm.swappiness` > 10 ) {\n        badprint\n          \"Swappiness is > 10, please consider having a value lower than 10\";\n        push @generalrec, \"setup swappiness lower or equal to 10\";\n        push @adjvars,\n'vm.swappiness <= 10 (echo 10 > /proc/sys/vm/swappiness) or vm.swappiness=10 in /etc/sysctl.conf';\n    }\n    else {\n        infoprint \"Swappiness is <= 10.\";\n    }\n\n    # only if /proc/sys/sunrpc exists\n    my $tcp_slot_entries =\n      `sysctl -n sunrpc.tcp_slot_table_entries 2>/dev/null`;\n    if ( -e \"/proc/sys/sunrpc\"\n        and ( $tcp_slot_entries eq '' or $tcp_slot_entries < 100 ) )\n    {\n        badprint\n\"Initial TCP slot entries is < 100, please consider having a value greater than 100\";\n        push @generalrec, \"setup Initial TCP slot entries greater than 100\";\n        push @adjvars,\n'sunrpc.tcp_slot_table_entries > 
100 (echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries)  or sunrpc.tcp_slot_table_entries=128 in /etc/sysctl.conf';\n    }\n    else {\n        infoprint \"TCP slot entries is > 100.\";\n    }\n\n    if ( -f \"/proc/sys/fs/aio-max-nr\" ) {\n        if ( `sysctl -n fs.aio-max-nr` < 1000000 ) {\n            badprint\n\"Max running total of the number of max. events is < 1M, please consider having a value greater than 1M\";\n            push @generalrec, \"setup Max running number events greater than 1M\";\n            push @adjvars,\n'fs.aio-max-nr > 1M (echo 1048576 > /proc/sys/fs/aio-max-nr) or fs.aio-max-nr=1048576 in /etc/sysctl.conf';\n        }\n        else {\n            infoprint \"Max Number of AIO events is > 1M.\";\n        }\n    }\n    if ( -f \"/proc/sys/fs/nr_open\" ) {\n        if ( `sysctl -n fs.nr_open` < 1000000 ) {\n            badprint\n\"Max running total of the number of file open requests is < 1M, please consider having a value greater than 1M\";\n            push @generalrec,\n              \"setup running number of open requests greater than 1M\";\n            push @adjvars,\n'fs.nr_open > 1M (echo 1048576 > /proc/sys/fs/nr_open) or fs.nr_open=1048576 in /etc/sysctl.conf';\n        }\n        else {\n            infoprint \"Max Number of open file requests is > 1M.\";\n        }\n    }\n}\n\nsub get_system_info {\n    $result{'OS'}{'Release'} = get_os_release();\n    infoprint get_os_release;\n    if (is_virtual_machine) {\n        infoprint \"Machine type          : Virtual machine\";\n        $result{'OS'}{'Virtual Machine'} = 'YES';\n    }\n    else {\n        infoprint \"Machine type          : Physical machine\";\n        $result{'OS'}{'Virtual Machine'} = 'NO';\n    }\n\n    $result{'Network'}{'Connected'} = 'NO';\n    `ping -c 1 ipecho.net &>/dev/null`;\n    my $isConnected = $?;\n    if ( $? 
== 0 ) {\n        infoprint \"Internet              : Connected\";\n        $result{'Network'}{'Connected'} = 'YES';\n    }\n    else {\n        badprint \"Internet              : Disconnected\";\n    }\n    $result{'OS'}{'NbCore'} = cpu_cores;\n    infoprint \"Number of Core CPU    : \" . cpu_cores;\n    $result{'OS'}{'Type'} = `uname -o`;\n    infoprint \"Operating System Type : \" . infocmd_one \"uname -o\";\n    $result{'OS'}{'Kernel'} = `uname -r`;\n    infoprint \"Kernel Release        : \" . infocmd_one \"uname -r\";\n    $result{'OS'}{'Hostname'}         = `hostname`;\n    $result{'Network'}{'Internal Ip'} = `hostname -I`;\n    infoprint \"Hostname              : \" . infocmd_one \"hostname\";\n    infoprint \"Network Cards         : \";\n    infocmd_tab \"ifconfig| grep -A1 mtu\";\n    infoprint \"Internal IP           : \" . infocmd_one \"hostname -I\";\n    $result{'Network'}{'Network Cards'} = `ifconfig| grep -A1 mtu`;\n    my $httpcli = get_http_cli();\n    infoprint \"HTTP client found: $httpcli\" if defined $httpcli;\n\n    my $ext_ip = \"\";\n    if ( defined($httpcli) && $httpcli =~ /curl$/ ) {\n        $ext_ip = infocmd_one \"$httpcli -m 3 ipecho.net/plain\";\n    }\n    elsif ( defined($httpcli) && $httpcli =~ /wget$/ ) {\n\n        $ext_ip = infocmd_one \"$httpcli -t 1 -T 3 -q -O - ipecho.net/plain\";\n    }\n    infoprint \"External IP           : \" . $ext_ip;\n    $result{'Network'}{'External Ip'} = $ext_ip;\n    badprint \"External IP           : Can't check, no Internet connectivity\"\n      unless defined($httpcli);\n    infoprint \"Name Servers          : \"\n      . 
infocmd_one \"grep 'nameserver' /etc/resolv.conf \\| awk '{print \\$2}'\";\n    infoprint \"Logged In users       : \";\n    infocmd_tab \"who\";\n    $result{'OS'}{'Logged users'} = `who`;\n    infoprint \"Ram Usages in MB      : \";\n    infocmd_tab \"free -m | grep -v +\";\n    $result{'OS'}{'Free Memory RAM'} = `free -m | grep -v +`;\n    infoprint \"Load Average          : \";\n    infocmd_tab \"top -n 1 -b | grep 'load average:'\";\n    $result{'OS'}{'Load Average'} = `top -n 1 -b | grep 'load average:'`;\n\n    infoprint \"System Uptime         : \";\n    infocmd_tab \"uptime\";\n    $result{'OS'}{'Uptime'} = `uptime`;\n}\n\nsub system_recommendations {\n    if ( is_remote eq 1 ) {\n        infoprint \"Skipping system checks on remote host\";\n        return;\n    }\n    return if ( $opt{sysstat} == 0 );\n    subheaderprint \"System Linux Recommendations\";\n    my $os = `uname`;\n    unless ( $os =~ /Linux/i ) {\n        infoprint \"Skipped due to non-Linux server\";\n        return;\n    }\n    prettyprint \"Look for related Linux system recommendations\";\n\n    #prettyprint '-'x78;\n    get_system_info();\n\n    my $nb_cpus = cpu_cores;\n    if ( $nb_cpus > 1 ) {\n        goodprint \"There is at least one CPU dedicated to the database server.\";\n    }\n    else {\n        badprint\n\"There is only one CPU, consider dedicating one CPU to your database server\";\n        push @generalrec,\n          \"Consider increasing the number of CPUs for your database server\";\n    }\n\n    if ( $physical_memory >= 1.5 * 1024 ) {\n        goodprint \"There is at least 1.5 GB of RAM dedicated to the Linux server.\";\n    }\n    else {\n        badprint\n\"There is less than 1.5 GB of RAM, consider dedicating at least 1.5 GB to your Linux server\";\n        push @generalrec,\n          \"Consider increasing RAM to 1.5 / 2 GB for your Linux server\";\n    }\n\n    my $omem = get_other_process_memory;\n    infoprint \"User processes other than mysqld used \"\n      . hr_bytes_rnd($omem) . 
\" RAM.\";\n    if ( ( 0.15 * $physical_memory ) < $omem ) {\n        badprint\n\"User processes other than mysqld used more than 15% of total physical memory \"\n          . percentage( $omem, $physical_memory ) . \"% (\"\n          . hr_bytes_rnd($omem) . \" / \"\n          . hr_bytes_rnd($physical_memory) . \")\";\n        push( @generalrec,\n\"Consider stopping these processes or dedicating this server to mysqld.\"\n        );\n        push( @adjvars,\n\"DON'T APPLY SETTINGS BECAUSE THERE ARE TOO MANY PROCESSES RUNNING ON THIS SERVER. OOM KILL CAN OCCUR!\"\n        );\n    }\n    else {\n        infoprint\n\"User processes other than mysqld used less than 15% of total physical memory \"\n          . percentage( $omem, $physical_memory ) . \"% (\"\n          . hr_bytes_rnd($omem) . \" / \"\n          . hr_bytes_rnd($physical_memory) . \")\";\n    }\n\n    if ( $opt{'maxportallowed'} > 0 ) {\n        my @opened_ports = get_opened_ports;\n        infoprint \"There are \"\n          . scalar @opened_ports\n          . \" listening port(s) on this server.\";\n        if ( scalar(@opened_ports) > $opt{'maxportallowed'} ) {\n            badprint \"There are too many listening ports: \"\n              . scalar(@opened_ports)\n              . \" opened > \"\n              . $opt{'maxportallowed'}\n              . \" allowed.\";\n            push( @generalrec,\n\"Consider dedicating a server for your database installation with fewer services running on it!\"\n            );\n        }\n        else {\n            goodprint \"There are fewer than \"\n              . $opt{'maxportallowed'}\n              . \" opened ports on this server.\";\n        }\n    }\n\n    foreach my $banport (@banned_ports) {\n        if ( is_open_port($banport) ) {\n            badprint \"Banned port: $banport is opened.\";\n            push( @generalrec,\n\"Port $banport is opened. 
Consider stopping the program over this port.\"\n            );\n        }\n        else {\n            goodprint \"$banport is not opened.\";\n        }\n    }\n\n    subheaderprint \"Filesystem Linux Recommendations\";\n    get_fs_info;\n    subheaderprint \"Kernel Information Recommendations\";\n    get_kernel_info;\n}\n\nsub security_recommendations {\n    subheaderprint \"Security Recommendations\";\n\n    if ( mysql_version_eq(8) ) {\n        infoprint \"Skipped due to unsupported feature for MySQL 8.0+\";\n        return;\n    }\n\n    #exit 0;\n    if ( $opt{skippassword} eq 1 ) {\n        infoprint \"Skipped due to --skippassword option\";\n        return;\n    }\n\n    my $PASS_COLUMN_NAME = 'password';\n\n    # New table schema available since mysql-5.7 and mariadb-10.2\n    # But need to be checked\n    if ( $myvar{'version'} =~ /5\\.7|10\\.[2-5]\\..*MariaDB*/ ) {\n        my $password_column_exists =\n`$mysqlcmd $mysqllogin -Bse \"SELECT 1 FROM information_schema.columns WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME = 'password'\" 2>>/dev/null`;\n        my $authstring_column_exists =\n`$mysqlcmd $mysqllogin -Bse \"SELECT 1 FROM information_schema.columns WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME = 'authentication_string'\" 2>>/dev/null`;\n        if ( $password_column_exists && $authstring_column_exists ) {\n            $PASS_COLUMN_NAME =\n\"IF(plugin='mysql_native_password', authentication_string, password)\";\n        }\n        elsif ($authstring_column_exists) {\n            $PASS_COLUMN_NAME = 'authentication_string';\n        }\n        elsif ( !$password_column_exists ) {\n            infoprint \"Skipped due to none of known auth columns exists\";\n            return;\n        }\n    }\n    debugprint \"Password column = $PASS_COLUMN_NAME\";\n\n    # IS THERE A ROLE COLUMN\n    my $is_role_column = select_one\n\"select count(*) from information_schema.columns where TABLE_NAME='user' AND 
TABLE_SCHEMA='mysql' and COLUMN_NAME='IS_ROLE'\";\n\n    my $extra_user_condition = \"\";\n    $extra_user_condition = \"IS_ROLE = 'N' AND\" if $is_role_column > 0;\n    my @mysqlstatlist;\n    if ( $is_role_column > 0 ) {\n        @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE IS_ROLE='Y'\";\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            infoprint \"User $line is User Role\";\n        }\n    }\n    else {\n        debugprint \"No Role user detected\";\n        goodprint \"No Role user detected\";\n    }\n\n    # Looking for Anonymous users\n    @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE $extra_user_condition (TRIM(USER) = '' OR USER IS NULL)\";\n\n    #debugprint Dumper \\@mysqlstatlist;\n\n    #exit 0;\n    if (@mysqlstatlist) {\n        push( @generalrec,\n                \"Remove Anonymous User accounts: there are \"\n              . scalar(@mysqlstatlist)\n              . \" anonymous accounts.\" );\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            badprint \"User \"\n              . $line\n              . \" is an anonymous account. Remove with DROP USER \"\n              . $line . 
\";\";\n        }\n    }\n    else {\n        goodprint \"There are no anonymous accounts for any database users\";\n    }\n    if ( mysql_version_le( 5, 1 ) ) {\n        badprint \"No more password checks for MySQL version <=5.1\";\n        badprint \"MySQL version <=5.1 is deprecated and end of support.\";\n        return;\n    }\n\n    # Looking for Empty Password\n    if ( mysql_version_ge( 10, 4 ) ) {\n        @mysqlstatlist = select_array\nq{SELECT CONCAT(QUOTE(user), '@', QUOTE(host)) FROM mysql.global_priv WHERE\n    ( user != ''\n    AND JSON_CONTAINS(Priv, '\"mysql_native_password\"', '$.plugin') AND JSON_CONTAINS(Priv, '\"\"', '$.authentication_string')\n    AND NOT JSON_CONTAINS(Priv, 'true', '$.account_locked')\n    )};\n    }\n    else {\n        @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE ($PASS_COLUMN_NAME = '' OR $PASS_COLUMN_NAME IS NULL)\n    AND user != ''\n    /*!50501 AND plugin NOT IN ('auth_socket', 'unix_socket', 'win_socket', 'auth_pam_compat') */\n    /*!80000 AND account_locked = 'N' AND password_expired = 'N' */\";\n    }\n    if (@mysqlstatlist) {\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            badprint \"User '\" . $line . 
\"' has no password set.\";\n            push( @generalrec,\n\"Set up a Secure Password for $line user: SET PASSWORD FOR $line = PASSWORD('secure_password');\"\n            );\n        }\n    }\n    else {\n        goodprint \"All database users have passwords assigned\";\n    }\n\n    if ( mysql_version_ge( 5, 7 ) ) {\n        my $valPlugin = select_one(\n\"select count(*) from information_schema.plugins where PLUGIN_NAME='validate_password' AND PLUGIN_STATUS='ACTIVE'\"\n        );\n        if ( $valPlugin >= 1 ) {\n            infoprint\n\"Bug #80860 MySQL 5.7: Avoid testing password when validate_password is activated\";\n            return;\n        }\n    }\n\n    # Looking for User with user/ uppercase /capitalise user as password\n    @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE user != '' AND (CAST($PASS_COLUMN_NAME as Binary) = PASSWORD(user) OR CAST($PASS_COLUMN_NAME as Binary) = PASSWORD(UPPER(user)) OR CAST($PASS_COLUMN_NAME as Binary) = PASSWORD(CONCAT(UPPER(LEFT(User, 1)), SUBSTRING(User, 2, LENGTH(User)))))\";\n    if (@mysqlstatlist) {\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            badprint \"User \" . $line . \" has user name as password.\";\n            push( @generalrec,\n\"Set up a Secure Password for $line user: SET PASSWORD FOR $line = PASSWORD('secure_password');\"\n            );\n        }\n    }\n\n    @mysqlstatlist = select_array\n      \"SELECT CONCAT(QUOTE(user), '\\@', host) FROM mysql.user WHERE HOST='%'\";\n    if (@mysqlstatlist) {\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            my $luser = ( split /@/, $line )[0];\n            badprint \"User \" . $line\n              . 
\" does not specify hostname restrictions.\";\n            push( @generalrec,\n\"Restrict Host for $luser\\@'%' to $luser\\@LimitedIPRangeOrLocalhost\"\n            );\n            push( @generalrec,\n                    \"RENAME USER $luser\\@'%' TO \"\n                  . $luser\n                  . \"\\@LimitedIPRangeOrLocalhost;\" );\n        }\n    }\n\n    unless ( -f $basic_password_files ) {\n        badprint \"There is no basic password file list!\";\n        return;\n    }\n\n    my @passwords = get_basic_passwords $basic_password_files;\n    infoprint \"There are \"\n      . scalar(@passwords)\n      . \" basic passwords in the list.\";\n    my $nbins = 0;\n    my $passreq;\n    if (@passwords) {\n        my $nbInterPass = 0;\n        foreach my $pass (@passwords) {\n            $nbInterPass++;\n\n            $pass =~ s/\\s//g;\n            $pass =~ s/\\'/\\\\\\'/g;\n            chomp($pass);\n\n            # Looking for User with user/ uppercase /capitalise weak password\n            @mysqlstatlist =\n              select_array\n\"SELECT CONCAT(user, '\\@', host) FROM mysql.user WHERE $PASS_COLUMN_NAME = PASSWORD('\"\n              . $pass\n              . \"') OR $PASS_COLUMN_NAME = PASSWORD(UPPER('\"\n              . $pass\n              . \"')) OR $PASS_COLUMN_NAME = PASSWORD(CONCAT(UPPER(LEFT('\"\n              . $pass\n              . \"', 1)), SUBSTRING('\"\n              . $pass\n              . \"', 2, LENGTH('\"\n              . $pass . \"'))))\";\n            debugprint \"There are \" . scalar(@mysqlstatlist) . \" items.\";\n            if (@mysqlstatlist) {\n                foreach my $line (@mysqlstatlist) {\n                    chomp($line);\n                    badprint \"User '\" . $line\n                      . \"' is using weak password: $pass in a lower, upper or capitalize derivative version.\";\n\n                    push( @generalrec,\n\"Set up a Secure Password for $line user: SET PASSWORD FOR '\"\n                          . 
( split /@/, $line )[0] . \"'\\@'\"\n                          . ( split /@/, $line )[1]\n                          . \"' = PASSWORD('secure_password');\" );\n                    $nbins++;\n                }\n            }\n            debugprint \"$nbInterPass / \" . scalar(@passwords)\n              if ( $nbInterPass % 1000 == 0 );\n        }\n    }\n    if ( $nbins > 0 ) {\n        push( @generalrec,\n            $nbins\n              . \" user(s) used basic or weak password from basic dictionary.\" );\n    }\n}\n\nsub get_replication_status {\n    subheaderprint \"Replication Metrics\";\n    infoprint \"Galera Synchronous replication: \" . $myvar{'have_galera'};\n    if ( scalar( keys %myslaves ) == 0 ) {\n        infoprint \"No replication slave(s) for this server.\";\n    }\n    else {\n        infoprint \"This server is acting as master for \"\n          . scalar( keys %myslaves )\n          . \" server(s).\";\n    }\n    infoprint \"Binlog format: \" . $myvar{'binlog_format'};\n    infoprint \"XA support enabled: \" . $myvar{'innodb_support_xa'};\n\n    infoprint \"Semi synchronous replication Master: \"\n      . (\n        (\n                 defined( $myvar{'rpl_semi_sync_master_enabled'} )\n              or defined( $myvar{'rpl_semi_sync_source_enabled'} )\n        )\n        ? ( $myvar{'rpl_semi_sync_master_enabled'}\n              // $myvar{'rpl_semi_sync_source_enabled'} )\n        : 'Not Activated'\n      );\n    infoprint \"Semi synchronous replication Slave: \"\n      . (\n        (\n                 defined( $myvar{'rpl_semi_sync_slave_enabled'} )\n              or defined( $myvar{'rpl_semi_sync_replica_enabled'} )\n        )\n        ? 
( $myvar{'rpl_semi_sync_slave_enabled'}\n              // $myvar{'rpl_semi_sync_replica_enabled'} )\n        : 'Not Activated'\n      );\n    if ( scalar( keys %myrepl ) == 0 and scalar( keys %myslaves ) == 0 ) {\n        infoprint \"This is a standalone server\";\n        return;\n    }\n    if ( scalar( keys %myrepl ) == 0 ) {\n        infoprint\n          \"No replication setup for this server or replication not started.\";\n        return;\n    }\n\n    $result{'Replication'}{'status'} = \\%myrepl;\n    my ($io_running) = $myrepl{'Slave_IO_Running'}\n      // $myrepl{'Replica_IO_Running'};\n    debugprint \"IO RUNNING: $io_running \";\n    my ($sql_running) = $myrepl{'Slave_SQL_Running'}\n      // $myrepl{'Replica_SQL_Running'};\n    debugprint \"SQL RUNNING: $sql_running \";\n\n    my ($seconds_behind_master) = $myrepl{'Seconds_Behind_Master'}\n      // $myrepl{'Seconds_Behind_Source'};\n    $seconds_behind_master = 1000000 unless defined($seconds_behind_master);\n    debugprint \"SECONDS : $seconds_behind_master \";\n\n    if ( defined($io_running)\n        and ( $io_running !~ /yes/i or $sql_running !~ /yes/i ) )\n    {\n        badprint\n          \"This replication slave is not running but seems to be configured.\";\n    }\n    if (   defined($io_running)\n        && $io_running  =~ /yes/i\n        && $sql_running =~ /yes/i )\n    {\n        if ( $myvar{'read_only'} eq 'OFF' ) {\n            badprint\n\"This replication slave is running with the read_only option disabled.\";\n        }\n        else {\n            goodprint\n\"This replication slave is running with the read_only option enabled.\";\n        }\n        if ( $seconds_behind_master > 0 ) {\n            badprint\n\"This replication slave is lagging and slave has $seconds_behind_master second(s) behind master host.\";\n        }\n        else {\n            goodprint \"This replication slave is up to date with master.\";\n        }\n    }\n}\n\n# 
https://endoflife.software/applications/databases/mysql\n# https://endoflife.date/mariadb\nsub validate_mysql_version {\n    ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n    $mysqlverminor ||= 0;\n    $mysqlvermicro ||= 0;\n\n    prettyprint \" \";\n\n    if (   mysql_version_eq(8)\n        or mysql_version_eq( 5,  7 )\n        or mysql_version_eq( 10, 3 )\n        or mysql_version_eq( 10, 4 )\n        or mysql_version_eq( 10, 5 )\n        or mysql_version_eq( 10, 6 )\n        or mysql_version_eq( 10, 7 )\n        or mysql_version_eq( 10, 8 )\n        or mysql_version_eq( 10, 9 )\n        or mysql_version_eq( 10, 10 )\n        or mysql_version_eq( 10, 11 ) )\n    {\n        goodprint \"Currently running supported MySQL version \"\n          . $myvar{'version'} . \"\";\n        return;\n    }\n    else {\n        badprint \"Your MySQL version \"\n          . $myvar{'version'}\n          . \" is EOL software. Upgrade soon!\";\n        push( @generalrec,\n            \"You are using an unsupported version for production environments\"\n        );\n        push( @generalrec,\n            \"Upgrade as soon as possible to a supported version !\" );\n\n    }\n}\n\n# Checks if MySQL version is equal to (major, minor, micro)\nsub mysql_version_eq {\n    my ( $maj, $min, $mic ) = @_;\n    my ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n\n    return int($mysqlvermajor) == int($maj)\n      if ( !defined($min) && !defined($mic) );\n    return int($mysqlvermajor) == int($maj) && int($mysqlverminor) == int($min)\n      if ( !defined($mic) );\n    return ( int($mysqlvermajor) == int($maj)\n          && int($mysqlverminor) == int($min)\n          && int($mysqlvermicro) == int($mic) );\n}\n\n# Checks if MySQL version is greater than equal to (major, minor, micro)\nsub mysql_version_ge {\n    my ( $maj, $min, $mic ) = @_;\n    $min 
||= 0;\n    $mic ||= 0;\n    my ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n\n    return\n         int($mysqlvermajor) > int($maj)\n      || ( int($mysqlvermajor) == int($maj) && int($mysqlverminor) > int($min) )\n      || ( int($mysqlvermajor) == int($maj)\n        && int($mysqlverminor) == int($min)\n        && int($mysqlvermicro) >= int($mic) );\n}\n\n# Checks if MySQL version is lower than equal to (major, minor, micro)\nsub mysql_version_le {\n    my ( $maj, $min, $mic ) = @_;\n    $min ||= 0;\n    $mic ||= 0;\n    my ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n    return\n         int($mysqlvermajor) < int($maj)\n      || ( int($mysqlvermajor) == int($maj) && int($mysqlverminor) < int($min) )\n      || ( int($mysqlvermajor) == int($maj)\n        && int($mysqlverminor) == int($min)\n        && int($mysqlvermicro) <= int($mic) );\n}\n\n# Checks for 32-bit boxes with more than 2GB of RAM\nmy ($arch);\n\nsub check_architecture {\n    if ( is_remote eq 1 ) {\n        infoprint \"Skipping architecture check on remote host\";\n        infoprint \"Using default $opt{defaultarch} bits as target architecture\";\n        $arch = $opt{defaultarch};\n        return;\n    }\n    if ( `uname` =~ /SunOS/ && `isainfo -b` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` !~ /SunOS/ && `uname -m` =~ /(64|s390x)/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /AIX/ && `bootinfo -K` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /NetBSD|OpenBSD/ && `sysctl -b hw.machine` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /FreeBSD/ && `sysctl -b 
hw.machine_arch` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /Darwin/ && `uname -m` =~ /Power Macintosh/ ) {\n\n# Darwin box.local 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:57:01 PDT 2009; root:xnu1228.15.4~1/RELEASE_PPC Power Macintosh\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /Darwin/ && `uname -m` =~ /x86_64/ ) {\n\n# Darwin gibas.local 12.5.2 Darwin Kernel Version 12.3.0: Sun Jan 6 22:37:10 PST 2013; root:xnu-2050.22.13~1/RELEASE_X86_64 x86_64\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    else {\n        $arch = 32;\n        if ( $physical_memory > 2147483648 ) {\n            badprint\n\"Switch to 64-bit OS - MySQL cannot currently use all of your RAM\";\n        }\n        else {\n            goodprint \"Operating on 32-bit architecture with less than 2GB RAM\";\n        }\n    }\n    $result{'OS'}{'Architecture'} = \"$arch bits\";\n\n}\n\n# Start up a ton of storage engine counts/statistics\nmy ( %enginestats, %enginecount, $fragtables );\n\nsub check_storage_engines {\n    subheaderprint \"Storage Engine Statistics\";\n    if ( $opt{skipsize} eq 1 ) {\n        infoprint \"Skipped due to --skipsize option\";\n        return;\n    }\n\n    my $engines;\n    if ( mysql_version_ge( 5, 5 ) ) {\n        my @engineresults = select_array\n\"SELECT ENGINE,SUPPORT FROM information_schema.ENGINES ORDER BY ENGINE ASC\";\n        foreach my $line (@engineresults) {\n            my ( $engine, $engineenabled );\n            ( $engine, $engineenabled ) = $line =~ /([a-zA-Z_]*)\\s+([a-zA-Z]+)/;\n            $result{'Engine'}{$engine}{'Enabled'} = $engineenabled;\n            $engines .=\n              ( $engineenabled eq \"YES\" || $engineenabled eq \"DEFAULT\" )\n              ? greenwrap \"+\" . $engine . \" \"\n              : redwrap \"-\" . $engine . 
\" \";\n        }\n    }\n    elsif ( mysql_version_ge( 5, 1, 5 ) ) {\n        my @engineresults = select_array\n\"SELECT ENGINE, SUPPORT FROM information_schema.ENGINES WHERE ENGINE NOT IN ('MyISAM', 'MERGE', 'MEMORY') ORDER BY ENGINE\";\n        foreach my $line (@engineresults) {\n            my ( $engine, $engineenabled );\n            ( $engine, $engineenabled ) = $line =~ /([a-zA-Z_]*)\\s+([a-zA-Z]+)/;\n            $result{'Engine'}{$engine}{'Enabled'} = $engineenabled;\n            $engines .=\n              ( $engineenabled eq \"YES\" || $engineenabled eq \"DEFAULT\" )\n              ? greenwrap \"+\" . $engine . \" \"\n              : redwrap \"-\" . $engine . \" \";\n        }\n    }\n    else {\n        $engines .=\n          ( defined $myvar{'have_archive'} && $myvar{'have_archive'} eq \"YES\" )\n          ? greenwrap \"+Archive \"\n          : redwrap \"-Archive \";\n        $engines .=\n          ( defined $myvar{'have_bdb'} && $myvar{'have_bdb'} eq \"YES\" )\n          ? greenwrap \"+BDB \"\n          : redwrap \"-BDB \";\n        $engines .=\n          ( defined $myvar{'have_federated_engine'}\n              && $myvar{'have_federated_engine'} eq \"YES\" )\n          ? greenwrap \"+Federated \"\n          : redwrap \"-Federated \";\n        $engines .=\n          ( defined $myvar{'have_innodb'} && $myvar{'have_innodb'} eq \"YES\" )\n          ? greenwrap \"+InnoDB \"\n          : redwrap \"-InnoDB \";\n        $engines .=\n          ( defined $myvar{'have_isam'} && $myvar{'have_isam'} eq \"YES\" )\n          ? greenwrap \"+ISAM \"\n          : redwrap \"-ISAM \";\n        $engines .=\n          ( defined $myvar{'have_ndbcluster'}\n              && $myvar{'have_ndbcluster'} eq \"YES\" )\n          ? 
greenwrap \"+NDBCluster \"\n          : redwrap \"-NDBCluster \";\n    }\n\n    my @dblist = grep { $_ ne 'lost+found' } select_array \"SHOW DATABASES\";\n\n    $result{'Databases'}{'List'} = [@dblist];\n    infoprint \"Status: $engines\";\n    if ( mysql_version_ge( 5, 1, 5 ) ) {\n\n# MySQL 5+ servers can have table sizes calculated quickly from information schema\n        my @templist = select_array\n\"SELECT ENGINE, SUM(DATA_LENGTH+INDEX_LENGTH), COUNT(ENGINE), SUM(DATA_LENGTH), SUM(INDEX_LENGTH) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql') AND ENGINE IS NOT NULL GROUP BY ENGINE ORDER BY ENGINE ASC;\";\n\n        my ( $engine, $size, $count, $dsize, $isize );\n        foreach my $line (@templist) {\n            ( $engine, $size, $count, $dsize, $isize ) =\n              $line =~ /([a-zA-Z_]+)\\s+(\\d+)\\s+(\\d+)\\s+(\\d+)\\s+(\\d+)/;\n            debugprint \"Engine Found: $engine\";\n            next unless ( defined($engine) and trim($engine) ne '' );\n            $size  = 0 unless defined($size);\n            $isize = 0 unless defined($isize);\n            $dsize = 0 unless defined($dsize);\n            $count = 0 unless defined($count);\n            $enginestats{$engine}                      = $size;\n            $enginecount{$engine}                      = $count;\n            $result{'Engine'}{$engine}{'Table Number'} = $count;\n            $result{'Engine'}{$engine}{'Total Size'}   = $size;\n            $result{'Engine'}{$engine}{'Data Size'}    = $dsize;\n            $result{'Engine'}{$engine}{'Index Size'}   = $isize;\n        }\n\n        #print Dumper( \\%enginestats ) if $opt{debug};\n        my $not_innodb = '';\n        if ( not defined $result{'Variables'}{'innodb_file_per_table'} ) {\n            $not_innodb = \"AND NOT ENGINE='InnoDB'\";\n        }\n        elsif ( 
$result{'Variables'}{'innodb_file_per_table'} eq 'OFF' ) {\n            $not_innodb = \"AND NOT ENGINE='InnoDB'\";\n        }\n        $result{'Tables'}{'Fragmented tables'} =\n          [ select_array\n\"SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE, CAST(DATA_FREE AS SIGNED) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql') AND DATA_LENGTH/1024/1024>100 AND cast(DATA_FREE as signed)*100/(DATA_LENGTH+INDEX_LENGTH+cast(DATA_FREE as signed)) > 10 AND NOT ENGINE='MEMORY' $not_innodb\"\n          ];\n        $fragtables = scalar @{ $result{'Tables'}{'Fragmented tables'} };\n\n    }\n    else {\n\n        # MySQL < 5 servers take a lot of work to get table sizes\n        my @tblist;\n\n# Now we build a database list, and loop through it to get storage engine stats for tables\n        foreach my $db (@dblist) {\n            chomp($db);\n            if (   $db eq \"information_schema\"\n                or $db eq \"performance_schema\"\n                or $db eq \"mysql\"\n                or $db eq \"lost+found\" )\n            {\n                next;\n            }\n            my @ixs = ( 1, 6, 9 );\n            if ( !mysql_version_ge( 4, 1 ) ) {\n\n                # MySQL 3.23/4.0 keeps Data_Length in the 5th (0-based) column\n                @ixs = ( 1, 5, 8 );\n            }\n            push( @tblist,\n                map { [ (split)[@ixs] ] }\n                  select_array \"SHOW TABLE STATUS FROM \\\\\\`$db\\\\\\`\" );\n        }\n\n     # Parse through the table list to generate storage engine counts/statistics\n        $fragtables = 0;\n        foreach my $tbl (@tblist) {\n\n            #debugprint \"Data dump \" . 
Dumper(@$tbl) if $opt{debug};\n            my ( $engine, $size, $datafree ) = @$tbl;\n            next if not defined($engine) or $engine eq 'NULL';\n            $size     = 0 if not defined($size)     or $size eq 'NULL';\n            $datafree = 0 if not defined($datafree) or $datafree eq 'NULL';\n            if ( defined $enginestats{$engine} ) {\n                $enginestats{$engine} += $size;\n                $enginecount{$engine} += 1;\n            }\n            else {\n                $enginestats{$engine} = $size;\n                $enginecount{$engine} = 1;\n            }\n            if ( $datafree > 0 ) {\n                $fragtables++;\n            }\n        }\n    }\n    while ( my ( $engine, $size ) = each(%enginestats) ) {\n        infoprint \"Data in $engine tables: \"\n          . hr_bytes($size)\n          . \" (Tables: \"\n          . $enginecount{$engine} . \")\" . \"\";\n    }\n\n    # If the storage engine isn't being used, recommend it to be disabled\n    if (  !defined $enginestats{'InnoDB'}\n        && defined $myvar{'have_innodb'}\n        && $myvar{'have_innodb'} eq \"YES\" )\n    {\n        badprint \"InnoDB is enabled, but isn't being used\";\n        push( @generalrec,\n            \"Add skip-innodb to MySQL configuration to disable InnoDB\" );\n    }\n    if (  !defined $enginestats{'BerkeleyDB'}\n        && defined $myvar{'have_bdb'}\n        && $myvar{'have_bdb'} eq \"YES\" )\n    {\n        badprint \"BDB is enabled, but isn't being used\";\n        push( @generalrec,\n            \"Add skip-bdb to MySQL configuration to disable BDB\" );\n    }\n    if (  !defined $enginestats{'ISAM'}\n        && defined $myvar{'have_isam'}\n        && $myvar{'have_isam'} eq \"YES\" )\n    {\n        badprint \"ISAM is enabled, but isn't being used\";\n        push( @generalrec,\n\"Add skip-isam to MySQL configuration to disable ISAM (MySQL > 4.1.0)\"\n        );\n    }\n\n    # Fragmented tables\n    if ( $fragtables > 0 ) {\n        badprint 
\"Total fragmented tables: $fragtables\";\n        push @generalrec,\n'Run ALTER TABLE ... FORCE or OPTIMIZE TABLE to defragment tables for better performance';\n        my $total_free = 0;\n        foreach my $table_line ( @{ $result{'Tables'}{'Fragmented tables'} } ) {\n            my ( $table_schema, $table_name, $engine, $data_free ) =\n              split /\\t/msx, $table_line;\n            $data_free = $data_free / 1024 / 1024;\n            $total_free += $data_free;\n            my $generalrec;\n            if ( $engine eq 'InnoDB' ) {\n                $generalrec =\n                  \"  ALTER TABLE `$table_schema`.`$table_name` FORCE;\";\n            }\n            else {\n                $generalrec = \"  OPTIMIZE TABLE `$table_schema`.`$table_name`;\";\n            }\n            $generalrec .= \" -- can free $data_free MiB\";\n            push @generalrec, $generalrec;\n        }\n        push @generalrec,\n          \"Total freed space after defragmentation: $total_free MiB\";\n    }\n    else {\n        goodprint \"Total fragmented tables: $fragtables\";\n    }\n\n    # Auto increments\n    my %tblist;\n\n    # Find the maximum integer\n    my $maxint = select_one \"SELECT ~0\";\n    $result{'MaxInt'} = $maxint;\n\n# Now we use a database list, and loop through it to check auto-increment values for tables\n    foreach my $db (@dblist) {\n        chomp($db);\n\n        if ( !$tblist{$db} ) {\n            $tblist{$db} = ();\n        }\n\n        if ( $db eq \"information_schema\" ) { next; }\n        my @ia = ( 0, 10 );\n        if ( !mysql_version_ge( 4, 1 ) ) {\n\n            # MySQL 3.23/4.0 keeps Auto_increment in the 9th (0-based) column\n            @ia = ( 0, 9 );\n        }\n        push(\n            @{ $tblist{$db} },\n            map { [ (split)[@ia] ] }\n              select_array \"SHOW TABLE STATUS FROM \\\\\\`$db\\\\\\`\"\n        );\n    }\n\n    my @dbnames = keys %tblist;\n\n    foreach my $db (@dbnames) {\n        foreach my $tbl ( @{ 
$tblist{$db} } ) {\n            my ( $name, $autoincrement ) = @$tbl;\n\n            if ( $autoincrement =~ /^\\d+?$/ ) {\n                my $percent = percentage( $autoincrement, $maxint );\n                $result{'PctAutoIncrement'}{\"$db.$name\"} = $percent;\n                if ( $percent >= 75 ) {\n                    badprint\n\"Table '$db.$name' has an autoincrement value near max capacity ($percent%)\";\n                }\n            }\n        }\n    }\n}\n\nmy %mycalc;\n\nsub dump_into_file {\n    my $file    = shift;\n    my $content = shift;\n    if ( -d \"$opt{dumpdir}\" ) {\n        $file = \"$opt{dumpdir}/$file\";\n        open( FILE, \">$file\" ) or die \"Can't open $file: $!\";\n        print FILE $content;\n        close FILE;\n        infoprint \"Data saved to $file\";\n    }\n}\n\nsub calculations {\n    if ( $mystat{'Questions'} < 1 ) {\n        badprint \"Your server has not answered any queries: cannot continue...\";\n        exit 2;\n    }\n\n    # Per-thread memory\n    $mycalc{'per_thread_buffers'} = 0;\n    $mycalc{'per_thread_buffers'} += $myvar{'read_buffer_size'}\n      if is_int( $myvar{'read_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'read_rnd_buffer_size'}\n      if is_int( $myvar{'read_rnd_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'sort_buffer_size'}\n      if is_int( $myvar{'sort_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'thread_stack'}\n      if is_int( $myvar{'thread_stack'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'join_buffer_size'}\n      if is_int( $myvar{'join_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'binlog_cache_size'}\n      if is_int( $myvar{'binlog_cache_size'} );\n    debugprint \"per_thread_buffers: $mycalc{'per_thread_buffers'} (\"\n      . human_size( $mycalc{'per_thread_buffers'} ) . 
\" )\";\n\n# Error max_allowed_packet is not included in thread buffers size\n#$mycalc{'per_thread_buffers'} += $myvar{'max_allowed_packet'} if is_int($myvar{'max_allowed_packet'});\n\n    # Total per-thread memory\n    $mycalc{'total_per_thread_buffers'} =\n      $mycalc{'per_thread_buffers'} * $myvar{'max_connections'};\n\n    # Max total per-thread memory reached\n    $mycalc{'max_total_per_thread_buffers'} =\n      $mycalc{'per_thread_buffers'} * $mystat{'Max_used_connections'};\n\n    # Server-wide memory\n    $mycalc{'max_tmp_table_size'} =\n      ( $myvar{'tmp_table_size'} > $myvar{'max_heap_table_size'} )\n      ? $myvar{'max_heap_table_size'}\n      : $myvar{'tmp_table_size'};\n    $mycalc{'server_buffers'} =\n      $myvar{'key_buffer_size'} + $mycalc{'max_tmp_table_size'};\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'innodb_buffer_pool_size'} )\n      ? $myvar{'innodb_buffer_pool_size'}\n      : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'innodb_additional_mem_pool_size'} )\n      ? $myvar{'innodb_additional_mem_pool_size'}\n      : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'innodb_log_buffer_size'} )\n      ? $myvar{'innodb_log_buffer_size'}\n      : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'query_cache_size'} ) ? $myvar{'query_cache_size'} : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'aria_pagecache_buffer_size'} )\n      ? 
$myvar{'aria_pagecache_buffer_size'}\n      : 0;\n\n# Global memory\n# Max used memory is the memory used by MySQL based on Max_used_connections\n# This is the theoretical maximum memory used at the highest concurrent connection count MySQL has reached\n    $mycalc{'max_used_memory'} =\n      $mycalc{'server_buffers'} +\n      $mycalc{\"max_total_per_thread_buffers\"} +\n      get_pf_memory();\n\n    #   + get_gcache_memory();\n    $mycalc{'pct_max_used_memory'} =\n      percentage( $mycalc{'max_used_memory'}, $physical_memory );\n\n# Total possible memory is the memory needed by MySQL based on max_connections\n# This is the maximum memory MySQL could theoretically use if all allowed connections were opened\n    $mycalc{'max_peak_memory'} =\n      $mycalc{'server_buffers'} +\n      $mycalc{'total_per_thread_buffers'} +\n      get_pf_memory();\n\n    # +  get_gcache_memory();\n    $mycalc{'pct_max_physical_memory'} =\n      percentage( $mycalc{'max_peak_memory'}, $physical_memory );\n\n    debugprint \"Max Used Memory: \"\n      . hr_bytes( $mycalc{'max_used_memory'} ) . \"\";\n    debugprint \"Max Used Percentage RAM: \"\n      . $mycalc{'pct_max_used_memory'} . \"%\";\n\n    debugprint \"Max Peak Memory: \"\n      . hr_bytes( $mycalc{'max_peak_memory'} ) . \"\";\n    debugprint \"Max Peak Percentage RAM: \"\n      . $mycalc{'pct_max_physical_memory'} . \"%\";\n\n    # Slow queries\n    $mycalc{'pct_slow_queries'} =\n      int( ( $mystat{'Slow_queries'} / $mystat{'Questions'} ) * 100 );\n\n    # Connections\n    $mycalc{'pct_connections_used'} = int(\n        ( $mystat{'Max_used_connections'} / $myvar{'max_connections'} ) * 100 );\n    $mycalc{'pct_connections_used'} =\n      ( $mycalc{'pct_connections_used'} > 100 )\n      ? 100\n      : $mycalc{'pct_connections_used'};\n\n    # Aborted Connections\n    $mycalc{'pct_connections_aborted'} =\n      percentage( $mystat{'Aborted_connects'}, $mystat{'Connections'} );\n    debugprint \"Aborted_connects: \" . 
$mystat{'Aborted_connects'} . \"\";\n    debugprint \"Connections: \" . $mystat{'Connections'} . \"\";\n    debugprint \"pct_connections_aborted: \"\n      . $mycalc{'pct_connections_aborted'} . \"\";\n\n    # Key buffers\n    if ( mysql_version_ge( 4, 1 ) && $myvar{'key_buffer_size'} > 0 ) {\n        $mycalc{'pct_key_buffer_used'} = sprintf(\n            \"%.1f\",\n            (\n                1 - (\n                    (\n                        $mystat{'Key_blocks_unused'} *\n                          $myvar{'key_cache_block_size'}\n                    ) / $myvar{'key_buffer_size'}\n                )\n            ) * 100\n        );\n    }\n    else {\n        $mycalc{'pct_key_buffer_used'} = 0;\n    }\n\n    if ( $mystat{'Key_read_requests'} > 0 ) {\n        $mycalc{'pct_keys_from_mem'} = sprintf(\n            \"%.1f\",\n            (\n                100 - (\n                    ( $mystat{'Key_reads'} / $mystat{'Key_read_requests'} ) *\n                      100\n                )\n            )\n        );\n    }\n    else {\n        $mycalc{'pct_keys_from_mem'} = 0;\n    }\n    if ( defined $mystat{'Aria_pagecache_read_requests'}\n        && $mystat{'Aria_pagecache_read_requests'} > 0 )\n    {\n        $mycalc{'pct_aria_keys_from_mem'} = sprintf(\n            \"%.1f\",\n            (\n                100 - (\n                    (\n                        $mystat{'Aria_pagecache_reads'} /\n                          $mystat{'Aria_pagecache_read_requests'}\n                    ) * 100\n                )\n            )\n        );\n    }\n    else {\n        $mycalc{'pct_aria_keys_from_mem'} = 0;\n    }\n\n    if ( $mystat{'Key_write_requests'} > 0 ) {\n        $mycalc{'pct_wkeys_from_mem'} = sprintf( \"%.1f\",\n            ( ( $mystat{'Key_writes'} / $mystat{'Key_write_requests'} ) * 100 )\n        );\n    }\n    else {\n        $mycalc{'pct_wkeys_from_mem'} = 0;\n    }\n\n    if ( $doremote eq 0 and !mysql_version_ge(5) ) {\n        my $size = 0;\n        
$size += (split)[0]\n          for\n`find \"$myvar{'datadir'}\" -name \"*.MYI\" -print0 2>&1 | xargs $xargsflags -0 du -L $duflags 2>&1`;\n        $mycalc{'total_myisam_indexes'} = $size;\n        $size = 0;\n        $size += (split)[0]\n          for\n`find \"$myvar{'datadir'}\" -name \"*.MAI\" -print0 2>&1 | xargs $xargsflags -0 du -L $duflags 2>&1`;\n        $mycalc{'total_aria_indexes'} = $size;\n    }\n    elsif ( mysql_version_ge(5) ) {\n        $mycalc{'total_myisam_indexes'} = select_one\n\"SELECT IFNULL(SUM(INDEX_LENGTH), 0) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema') AND ENGINE = 'MyISAM';\";\n        $mycalc{'total_aria_indexes'} = select_one\n\"SELECT IFNULL(SUM(INDEX_LENGTH), 0) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema') AND ENGINE = 'Aria';\";\n    }\n    if ( defined $mycalc{'total_myisam_indexes'} ) {\n        chomp( $mycalc{'total_myisam_indexes'} );\n    }\n    if ( defined $mycalc{'total_aria_indexes'} ) {\n        chomp( $mycalc{'total_aria_indexes'} );\n    }\n\n    # Query cache\n    if ( mysql_version_ge(8) and mysql_version_le(10) ) {\n        $mycalc{'query_cache_efficiency'} = 0;\n    }\n    elsif ( mysql_version_ge(4) ) {\n        $mycalc{'query_cache_efficiency'} = sprintf(\n            \"%.1f\",\n            (\n                $mystat{'Qcache_hits'} /\n                  ( $mystat{'Com_select'} + $mystat{'Qcache_hits'} )\n            ) * 100\n        );\n        if ( $myvar{'query_cache_size'} ) {\n            $mycalc{'pct_query_cache_used'} = sprintf(\n                \"%.1f\",\n                100 - (\n                    $mystat{'Qcache_free_memory'} / $myvar{'query_cache_size'}\n                ) * 100\n            );\n        }\n        if ( $mystat{'Qcache_lowmem_prunes'} == 0 ) {\n            $mycalc{'query_cache_prunes_per_day'} = 0;\n        }\n        else {\n            $mycalc{'query_cache_prunes_per_day'} = int(\n                $mystat{'Qcache_lowmem_prunes'} / ( 
$mystat{'Uptime'} / 86400 )\n            );\n        }\n    }\n\n    # Sorting\n    $mycalc{'total_sorts'} = $mystat{'Sort_scan'} + $mystat{'Sort_range'};\n    if ( $mycalc{'total_sorts'} > 0 ) {\n        $mycalc{'pct_temp_sort_table'} = int(\n            ( $mystat{'Sort_merge_passes'} / $mycalc{'total_sorts'} ) * 100 );\n    }\n\n    # Joins\n    $mycalc{'joins_without_indexes'} =\n      $mystat{'Select_range_check'} + $mystat{'Select_full_join'};\n    $mycalc{'joins_without_indexes_per_day'} =\n      int( $mycalc{'joins_without_indexes'} / ( $mystat{'Uptime'} / 86400 ) );\n\n    # Temporary tables\n    if ( $mystat{'Created_tmp_tables'} > 0 ) {\n        if ( $mystat{'Created_tmp_disk_tables'} > 0 ) {\n            $mycalc{'pct_temp_disk'} = int(\n                (\n                    $mystat{'Created_tmp_disk_tables'} /\n                      $mystat{'Created_tmp_tables'}\n                ) * 100\n            );\n        }\n        else {\n            $mycalc{'pct_temp_disk'} = 0;\n        }\n    }\n\n    # Table cache\n    if ( $mystat{'Opened_tables'} > 0 ) {\n        if ( not defined( $mystat{'Table_open_cache_hits'} ) ) {\n            $mycalc{'table_cache_hit_rate'} =\n              int( $mystat{'Open_tables'} * 100 / $mystat{'Opened_tables'} );\n        }\n        else {\n            $mycalc{'table_cache_hit_rate'} = int(\n                $mystat{'Table_open_cache_hits'} * 100 / (\n                    $mystat{'Table_open_cache_hits'} +\n                      $mystat{'Table_open_cache_misses'}\n                )\n            );\n        }\n    }\n    else {\n        $mycalc{'table_cache_hit_rate'} = 100;\n    }\n\n    # Open files\n    if ( $myvar{'open_files_limit'} > 0 ) {\n        $mycalc{'pct_files_open'} =\n          int( $mystat{'Open_files'} * 100 / $myvar{'open_files_limit'} );\n    }\n\n    # Table locks\n    if ( $mystat{'Table_locks_immediate'} > 0 ) {\n        if ( $mystat{'Table_locks_waited'} == 0 ) {\n            
$mycalc{'pct_table_locks_immediate'} = 100;\n        }\n        else {\n            $mycalc{'pct_table_locks_immediate'} = int(\n                $mystat{'Table_locks_immediate'} * 100 / (\n                    $mystat{'Table_locks_waited'} +\n                      $mystat{'Table_locks_immediate'}\n                )\n            );\n        }\n    }\n\n    # Thread cache\n    $mycalc{'thread_cache_hit_rate'} =\n      int( 100 -\n          ( ( $mystat{'Threads_created'} / $mystat{'Connections'} ) * 100 ) );\n\n    # Other\n    if ( $mystat{'Connections'} > 0 ) {\n        $mycalc{'pct_aborted_connections'} =\n          int( ( $mystat{'Aborted_connects'} / $mystat{'Connections'} ) * 100 );\n    }\n    if ( $mystat{'Questions'} > 0 ) {\n        $mycalc{'total_reads'} = $mystat{'Com_select'};\n        $mycalc{'total_writes'} =\n          $mystat{'Com_delete'} +\n          $mystat{'Com_insert'} +\n          $mystat{'Com_update'} +\n          $mystat{'Com_replace'};\n        if ( $mycalc{'total_reads'} == 0 ) {\n            $mycalc{'pct_reads'}  = 0;\n            $mycalc{'pct_writes'} = 100;\n        }\n        else {\n            $mycalc{'pct_reads'} = int(\n                (\n                    $mycalc{'total_reads'} /\n                      ( $mycalc{'total_reads'} + $mycalc{'total_writes'} )\n                ) * 100\n            );\n            $mycalc{'pct_writes'} = 100 - $mycalc{'pct_reads'};\n        }\n    }\n\n    # InnoDB\n    $myvar{'innodb_log_files_in_group'} = 1\n      unless defined( $myvar{'innodb_log_files_in_group'} );\n    $myvar{'innodb_log_files_in_group'} = 1\n      if $myvar{'innodb_log_files_in_group'} == 0;\n\n    $myvar{\"innodb_buffer_pool_instances\"} = 1\n      unless defined( $myvar{'innodb_buffer_pool_instances'} );\n    if ( $myvar{'have_innodb'} eq \"YES\" ) {\n        $mycalc{'innodb_log_size_pct'} =\n          ( $myvar{'innodb_log_file_size'} *\n              $myvar{'innodb_log_files_in_group'} * 100 /\n              
$myvar{'innodb_buffer_pool_size'} );\n    }\n    if ( !defined $myvar{'innodb_buffer_pool_size'} ) {\n        $mycalc{'innodb_log_size_pct'}    = 0;\n        $myvar{'innodb_buffer_pool_size'} = 0;\n    }\n\n    # InnoDB Buffer pool read cache efficiency\n    (\n        $mystat{'Innodb_buffer_pool_read_requests'},\n        $mystat{'Innodb_buffer_pool_reads'}\n      )\n      = ( 1, 1 )\n      unless defined $mystat{'Innodb_buffer_pool_reads'};\n    $mycalc{'pct_read_efficiency'} = percentage(\n        $mystat{'Innodb_buffer_pool_read_requests'},\n        (\n            $mystat{'Innodb_buffer_pool_read_requests'} +\n              $mystat{'Innodb_buffer_pool_reads'}\n        )\n    ) if defined $mystat{'Innodb_buffer_pool_read_requests'};\n    debugprint \"pct_read_efficiency: \" . $mycalc{'pct_read_efficiency'} . \"\";\n    debugprint \"Innodb_buffer_pool_reads: \"\n      . $mystat{'Innodb_buffer_pool_reads'} . \"\";\n    debugprint \"Innodb_buffer_pool_read_requests: \"\n      . $mystat{'Innodb_buffer_pool_read_requests'} . \"\";\n\n    # InnoDB log write cache efficiency\n    ( $mystat{'Innodb_log_write_requests'}, $mystat{'Innodb_log_writes'} ) =\n      ( 1, 1 )\n      unless defined $mystat{'Innodb_log_writes'};\n    $mycalc{'pct_write_efficiency'} = percentage(\n        ( $mystat{'Innodb_log_write_requests'} - $mystat{'Innodb_log_writes'} ),\n        $mystat{'Innodb_log_write_requests'}\n    ) if defined $mystat{'Innodb_log_write_requests'};\n    debugprint \"pct_write_efficiency: \" . $mycalc{'pct_write_efficiency'} . \"\";\n    debugprint \"Innodb_log_writes: \" . $mystat{'Innodb_log_writes'} . \"\";\n    debugprint \"Innodb_log_write_requests: \"\n      . $mystat{'Innodb_log_write_requests'} . 
\"\";\n    $mycalc{'pct_innodb_buffer_used'} = percentage(\n        (\n            $mystat{'Innodb_buffer_pool_pages_total'} -\n              $mystat{'Innodb_buffer_pool_pages_free'}\n        ),\n        $mystat{'Innodb_buffer_pool_pages_total'}\n    ) if defined $mystat{'Innodb_buffer_pool_pages_total'};\n\n    $mycalc{'innodb_buffer_alloc_pct'} = select_one(\n            \"select  round( 100* sum(allocated)/( select VARIABLE_VALUE \"\n          . \"FROM performance_schema.global_variables \"\n          . \"WHERE VARIABLE_NAME='innodb_buffer_pool_size' ) ,2)\"\n          . ' FROM sys.x\\$innodb_buffer_stats_by_table;' );\n\n    # Binlog Cache\n    if ( $myvar{'log_bin'} ne 'OFF' ) {\n        $mycalc{'pct_binlog_cache'} = percentage(\n            $mystat{'Binlog_cache_use'} - $mystat{'Binlog_cache_disk_use'},\n            $mystat{'Binlog_cache_use'} );\n    }\n}\n\nsub mysql_stats {\n    subheaderprint \"Performance Metrics\";\n\n    # Show uptime, queries per second, connections, traffic stats\n    my $qps;\n    if ( $mystat{'Uptime'} > 0 ) {\n        $qps = sprintf( \"%.3f\", $mystat{'Questions'} / $mystat{'Uptime'} );\n    }\n    push( @generalrec,\n\"MySQL was started within the last 24 hours: recommendations may be inaccurate\"\n    ) if ( $mystat{'Uptime'} < 86400 );\n    infoprint \"Up for: \"\n      . pretty_uptime( $mystat{'Uptime'} ) . \" (\"\n      . hr_num( $mystat{'Questions'} ) . \" q [\"\n      . hr_num($qps)\n      . \" qps], \"\n      . hr_num( $mystat{'Connections'} )\n      . \" conn,\" . \" TX: \"\n      . hr_bytes_rnd( $mystat{'Bytes_sent'} )\n      . \", RX: \"\n      . hr_bytes_rnd( $mystat{'Bytes_received'} ) . \")\";\n    infoprint \"Reads / Writes: \"\n      . $mycalc{'pct_reads'} . \"% / \"\n      . $mycalc{'pct_writes'} . \"%\";\n\n    # Binlog Cache\n    if ( $myvar{'log_bin'} eq 'OFF' ) {\n        infoprint \"Binary logging is disabled\";\n    }\n    else {\n        infoprint \"Binary logging is enabled (GTID MODE: \"\n          . 
( defined( $myvar{'gtid_mode'} ) ? $myvar{'gtid_mode'} : \"OFF\" )\n          . \")\";\n    }\n\n    # Memory usage\n    infoprint \"Physical Memory     : \" . hr_bytes($physical_memory);\n    infoprint \"Max MySQL memory    : \" . hr_bytes( $mycalc{'max_peak_memory'} );\n    infoprint \"Other process memory: \" . hr_bytes( get_other_process_memory() );\n\n    infoprint \"Total buffers: \"\n      . hr_bytes( $mycalc{'server_buffers'} )\n      . \" global + \"\n      . hr_bytes( $mycalc{'per_thread_buffers'} )\n      . \" per thread ($myvar{'max_connections'} max threads)\";\n    infoprint \"Performance_schema Max memory usage: \"\n      . hr_bytes_rnd( get_pf_memory() );\n    $result{'Performance_schema'}{'memory'} = get_pf_memory();\n    $result{'Performance_schema'}{'pretty_memory'} =\n      hr_bytes_rnd( get_pf_memory() );\n    infoprint \"Galera GCache Max memory usage: \"\n      . hr_bytes_rnd( get_gcache_memory() );\n    $result{'Galera'}{'GCache'}{'memory'} = get_gcache_memory();\n    $result{'Galera'}{'GCache'}{'pretty_memory'} =\n      hr_bytes_rnd( get_gcache_memory() );\n\n    if ( $opt{buffers} ne 0 ) {\n        infoprint \"Global Buffers\";\n        infoprint \" +-- Key Buffer: \"\n          . hr_bytes( $myvar{'key_buffer_size'} ) . \"\";\n        infoprint \" +-- Max Tmp Table: \"\n          . hr_bytes( $mycalc{'max_tmp_table_size'} ) . \"\";\n\n        if ( defined $myvar{'query_cache_type'} ) {\n            infoprint \"Query Cache Buffers\";\n            infoprint \" +-- Query Cache: \"\n              . $myvar{'query_cache_type'} . \" - \"\n              . (\n                $myvar{'query_cache_type'} eq 0 ||\n                  $myvar{'query_cache_type'} eq 'OFF' ? \"DISABLED\"\n                : (\n                    $myvar{'query_cache_type'} eq 1 ? \"ALL REQUESTS\"\n                    : \"ON DEMAND\"\n                )\n              ) . \"\";\n            infoprint \" +-- Query Cache Size: \"\n              . 
hr_bytes( $myvar{'query_cache_size'} ) . \"\";\n        }\n\n        infoprint \"Per Thread Buffers\";\n        infoprint \" +-- Read Buffer: \"\n          . hr_bytes( $myvar{'read_buffer_size'} ) . \"\";\n        infoprint \" +-- Read RND Buffer: \"\n          . hr_bytes( $myvar{'read_rnd_buffer_size'} ) . \"\";\n        infoprint \" +-- Sort Buffer: \"\n          . hr_bytes( $myvar{'sort_buffer_size'} ) . \"\";\n        infoprint \" +-- Thread stack: \"\n          . hr_bytes( $myvar{'thread_stack'} ) . \"\";\n        infoprint \" +-- Join Buffer: \"\n          . hr_bytes( $myvar{'join_buffer_size'} ) . \"\";\n        if ( $myvar{'log_bin'} ne 'OFF' ) {\n            infoprint \"Binlog Cache Buffers\";\n            infoprint \" +-- Binlog Cache: \"\n              . hr_bytes( $myvar{'binlog_cache_size'} ) . \"\";\n        }\n    }\n\n    if (   $arch\n        && $arch == 32\n        && $mycalc{'max_used_memory'} > 2 * 1024 * 1024 * 1024 )\n    {\n        badprint\n          \"Allocating > 2GB RAM on 32-bit systems can cause system instability\";\n        badprint \"Maximum reached memory usage: \"\n          . hr_bytes( $mycalc{'max_used_memory'} )\n          . \" ($mycalc{'pct_max_used_memory'}% of installed RAM)\";\n    }\n    elsif ( $mycalc{'pct_max_used_memory'} > 85 ) {\n        badprint \"Maximum reached memory usage: \"\n          . hr_bytes( $mycalc{'max_used_memory'} )\n          . \" ($mycalc{'pct_max_used_memory'}% of installed RAM)\";\n    }\n    else {\n        goodprint \"Maximum reached memory usage: \"\n          . hr_bytes( $mycalc{'max_used_memory'} )\n          . \" ($mycalc{'pct_max_used_memory'}% of installed RAM)\";\n    }\n\n    if ( $mycalc{'pct_max_physical_memory'} > 85 ) {\n        badprint \"Maximum possible memory usage: \"\n          . hr_bytes( $mycalc{'max_peak_memory'} )\n          . 
\" ($mycalc{'pct_max_physical_memory'}% of installed RAM)\";\n        push( @generalrec,\n            \"Reduce your overall MySQL memory footprint for system stability\" );\n    }\n    else {\n        goodprint \"Maximum possible memory usage: \"\n          . hr_bytes( $mycalc{'max_peak_memory'} )\n          . \" ($mycalc{'pct_max_physical_memory'}% of installed RAM)\";\n    }\n\n    if ( $physical_memory <\n        ( $mycalc{'max_peak_memory'} + get_other_process_memory() ) )\n    {\n        badprint\n          \"Overall possible memory usage with other processes exceeds available memory\";\n        push( @generalrec,\n            \"Dedicate this server to your database for highest performance.\" );\n    }\n    else {\n        goodprint\n\"Overall possible memory usage with other processes is compatible with available memory\";\n    }\n\n    # Slow queries\n    if ( $mycalc{'pct_slow_queries'} > 5 ) {\n        badprint \"Slow queries: $mycalc{'pct_slow_queries'}% (\"\n          . hr_num( $mystat{'Slow_queries'} ) . \"/\"\n          . hr_num( $mystat{'Questions'} ) . \")\";\n    }\n    else {\n        goodprint \"Slow queries: $mycalc{'pct_slow_queries'}% (\"\n          . hr_num( $mystat{'Slow_queries'} ) . \"/\"\n          . hr_num( $mystat{'Questions'} ) . \")\";\n    }\n    if ( $myvar{'long_query_time'} > 10 ) {\n        push( @adjvars, \"long_query_time (<= 10)\" );\n    }\n    if ( defined( $myvar{'log_slow_queries'} ) ) {\n        if ( $myvar{'log_slow_queries'} eq \"OFF\" ) {\n            push( @generalrec,\n                \"Enable the slow query log to troubleshoot bad queries\" );\n        }\n    }\n\n    # Connections\n    if ( $mycalc{'pct_connections_used'} > 85 ) {\n        badprint\n\"Highest connection usage: $mycalc{'pct_connections_used'}% ($mystat{'Max_used_connections'}/$myvar{'max_connections'})\";\n        push( @adjvars,\n            \"max_connections (> \" . $myvar{'max_connections'} . 
\")\" );\n        push( @adjvars,\n            \"wait_timeout (< \" . $myvar{'wait_timeout'} . \")\",\n            \"interactive_timeout (< \" . $myvar{'interactive_timeout'} . \")\" );\n        push( @generalrec,\n\"Reduce or eliminate persistent connections to reduce connection usage\"\n        );\n    }\n    else {\n        goodprint\n\"Highest usage of available connections: $mycalc{'pct_connections_used'}% ($mystat{'Max_used_connections'}/$myvar{'max_connections'})\";\n    }\n\n    # Aborted Connections\n    if ( $mycalc{'pct_connections_aborted'} > 3 ) {\n        badprint\n\"Aborted connections: $mycalc{'pct_connections_aborted'}% ($mystat{'Aborted_connects'}/$mystat{'Connections'})\";\n        push( @generalrec,\n            \"Reduce or eliminate unclosed connections and network issues\" );\n    }\n    else {\n        goodprint\n\"Aborted connections: $mycalc{'pct_connections_aborted'}% ($mystat{'Aborted_connects'}/$mystat{'Connections'})\";\n    }\n\n    # name resolution\n    debugprint \"skip name resolve: $result{'Variables'}{'skip_name_resolve'}\"\n      if ( defined( $result{'Variables'}{'skip_name_resolve'} ) );\n    if ( defined( $result{'Variables'}{'skip_networking'} )\n        && $result{'Variables'}{'skip_networking'} eq 'ON' )\n    {\n        infoprint\n\"Skipped name resolution test due to skip_networking=ON in system variables.\";\n    }\n    elsif ( not defined( $result{'Variables'}{'skip_name_resolve'} ) ) {\n        infoprint\n\"Skipped name resolution test due to missing skip_name_resolve in system variables.\";\n    }\n\n    #Cpanel and Skip name resolve\n    elsif ( -r \"/usr/local/cpanel/cpanel\" ) {\n        if ( $result{'Variables'}{'skip_name_resolve'} ne 'OFF' ) {\n            infoprint \"CPanel and Flex system skip-name-resolve should be on\";\n        }\n        if ( $result{'Variables'}{'skip_name_resolve'} eq 'OFF' ) {\n            badprint \"CPanel and Flex system skip-name-resolve should be on\";\n            push( 
@generalrec,\n\"Name resolution is kept enabled because cPanel does not support running with it disabled.\"\n            );\n            push( @adjvars, \"skip-name-resolve=0\" );\n        }\n    }\n    elsif ( $result{'Variables'}{'skip_name_resolve'} ne 'ON'\n        and $result{'Variables'}{'skip_name_resolve'} ne '1' )\n    {\n        badprint\n\"Name resolution is active: a reverse name resolution is made for each new connection which can reduce performance\";\n        push( @generalrec,\n\"Configure your accounts with IP addresses or subnets only, then update your configuration with skip-name-resolve=ON\"\n        );\n        push( @adjvars, \"skip-name-resolve=ON\" );\n    }\n\n    # Query cache\n    if ( !mysql_version_ge(4) ) {\n\n        # MySQL versions < 4.01 don't support query caching\n        push( @generalrec,\n            \"Upgrade MySQL to version 4+ to utilize query caching\" );\n    }\n    elsif ( mysql_version_eq(8) ) {\n        infoprint \"Query cache has been removed in MySQL 8.0\";\n\n        #return;\n    }\n    elsif ($myvar{'query_cache_size'} < 1\n        or $myvar{'query_cache_type'} eq \"OFF\" )\n    {\n        goodprint\n\"Query cache is disabled by default due to mutex contention on multiprocessor machines.\";\n    }\n    elsif ( $mystat{'Com_select'} == 0 ) {\n        badprint\n          \"Query cache cannot be analyzed: no SELECT statements executed\";\n    }\n    else {\n        if ( $mycalc{'query_cache_efficiency'} < 20 ) {\n            badprint\n              \"Query cache efficiency: $mycalc{'query_cache_efficiency'}% (\"\n              . hr_num( $mystat{'Qcache_hits'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Qcache_hits'} + $mystat{'Com_select'} )\n              . \" selects)\";\n            push( @adjvars,\n                    \"query_cache_limit (> \"\n                  . hr_bytes_rnd( $myvar{'query_cache_limit'} )\n                  . 
\", or use smaller result sets)\" );\n            badprint\n              \"Query cache may be disabled by default due to mutex contention.\";\n            push( @adjvars, \"query_cache_size (=0)\" );\n            push( @adjvars, \"query_cache_type (=0)\" );\n        }\n        else {\n            goodprint\n              \"Query cache efficiency: $mycalc{'query_cache_efficiency'}% (\"\n              . hr_num( $mystat{'Qcache_hits'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Qcache_hits'} + $mystat{'Com_select'} )\n              . \" selects)\";\n            if ( $mycalc{'query_cache_prunes_per_day'} > 98 ) {\n                badprint\n\"Query cache prunes per day: $mycalc{'query_cache_prunes_per_day'}\";\n                if ( $myvar{'query_cache_size'} >= 128 * 1024 * 1024 ) {\n                    push( @generalrec,\n\"Increasing the query_cache size over 128M may reduce performance\"\n                    );\n                    push( @adjvars,\n                            \"query_cache_size (> \"\n                          . hr_bytes_rnd( $myvar{'query_cache_size'} )\n                          . \") [see warning above]\" );\n                }\n                else {\n                    push( @adjvars,\n                            \"query_cache_size (> \"\n                          . hr_bytes_rnd( $myvar{'query_cache_size'} )\n                          . \")\" );\n                }\n            }\n            else {\n                goodprint\n\"Query cache prunes per day: $mycalc{'query_cache_prunes_per_day'}\";\n            }\n        }\n\n    }\n\n    # Sorting\n    if ( $mycalc{'total_sorts'} == 0 ) {\n        goodprint \"No Sort requiring temporary tables\";\n    }\n    elsif ( $mycalc{'pct_temp_sort_table'} > 10 ) {\n        badprint\n          \"Sorts requiring temporary tables: $mycalc{'pct_temp_sort_table'}% (\"\n          . hr_num( $mystat{'Sort_merge_passes'} )\n          . \" temp sorts / \"\n          . 
hr_num( $mycalc{'total_sorts'} )\n          . \" sorts)\";\n        push( @adjvars,\n                \"sort_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'sort_buffer_size'} )\n              . \")\" );\n        push( @adjvars,\n                \"read_rnd_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'read_rnd_buffer_size'} )\n              . \")\" );\n    }\n    else {\n        goodprint\n          \"Sorts requiring temporary tables: $mycalc{'pct_temp_sort_table'}% (\"\n          . hr_num( $mystat{'Sort_merge_passes'} )\n          . \" temp sorts / \"\n          . hr_num( $mycalc{'total_sorts'} )\n          . \" sorts)\";\n    }\n\n    # Joins\n    if ( $mycalc{'joins_without_indexes_per_day'} > 250 ) {\n        badprint\n          \"Joins performed without indexes: $mycalc{'joins_without_indexes'}\";\n        push( @adjvars,\n                \"join_buffer_size (> \"\n              . hr_bytes( $myvar{'join_buffer_size'} )\n              . \", or always use indexes with JOINs)\" );\n        push(\n            @generalrec,\n\"Raising 'join_buffer_size' will keep being suggested as long as JOINs without indexes are found.\n             See https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_join_buffer_size\"\n        );\n    }\n    else {\n        goodprint \"No joins without indexes\";\n\n        # No joins have run without indexes\n    }\n\n    # Temporary tables\n    if ( $mystat{'Created_tmp_tables'} > 0 ) {\n        if (   $mycalc{'pct_temp_disk'} > 25\n            && $mycalc{'max_tmp_table_size'} < 256 * 1024 * 1024 )\n        {\n            badprint\n              \"Temporary tables created on disk: $mycalc{'pct_temp_disk'}% (\"\n              . hr_num( $mystat{'Created_tmp_disk_tables'} )\n              . \" on disk / \"\n              . hr_num( $mystat{'Created_tmp_tables'} )\n              . \" total)\";\n            push( @adjvars,\n                    \"tmp_table_size (> \"\n                  . 
hr_bytes_rnd( $myvar{'tmp_table_size'} )\n                  . \")\" );\n            push( @adjvars,\n                    \"max_heap_table_size (> \"\n                  . hr_bytes_rnd( $myvar{'max_heap_table_size'} )\n                  . \")\" );\n            push( @generalrec,\n\"When making adjustments, make tmp_table_size/max_heap_table_size equal\"\n            );\n            push( @generalrec,\n                \"Reduce your SELECT DISTINCT queries which have no LIMIT clause\"\n            );\n        }\n        elsif ($mycalc{'pct_temp_disk'} > 25\n            && $mycalc{'max_tmp_table_size'} >= 256 * 1024 * 1024 )\n        {\n            badprint\n              \"Temporary tables created on disk: $mycalc{'pct_temp_disk'}% (\"\n              . hr_num( $mystat{'Created_tmp_disk_tables'} )\n              . \" on disk / \"\n              . hr_num( $mystat{'Created_tmp_tables'} )\n              . \" total)\";\n            push( @generalrec,\n                \"Temporary table size is already large: reduce result set size\"\n            );\n            push( @generalrec,\n                \"Reduce your SELECT DISTINCT queries without LIMIT clauses\" );\n        }\n        else {\n            goodprint\n              \"Temporary tables created on disk: $mycalc{'pct_temp_disk'}% (\"\n              . hr_num( $mystat{'Created_tmp_disk_tables'} )\n              . \" on disk / \"\n              . hr_num( $mystat{'Created_tmp_tables'} )\n              . \" total)\";\n        }\n    }\n    else {\n        goodprint \"No tmp tables created on disk\";\n    }\n\n    # Thread cache\n    if ( defined( $myvar{'have_threadpool'} )\n        and $myvar{'have_threadpool'} eq 'YES' )\n    {\n# https://www.percona.com/doc/percona-server/5.7/performance/threadpool.html#status-variables\n# When thread pool is enabled, the value of the thread_cache_size variable\n# is ignored. 
The Threads_cached status variable contains 0 in this case.\n        infoprint \"Thread cache not used with thread pool enabled\";\n    }\n    else {\n        if ( $myvar{'thread_cache_size'} eq 0 ) {\n            badprint \"Thread cache is disabled\";\n            push( @generalrec,\n                \"Set thread_cache_size to 4 as a starting value\" );\n            push( @adjvars, \"thread_cache_size (start at 4)\" );\n        }\n        else {\n            if ( $mycalc{'thread_cache_hit_rate'} <= 50 ) {\n                badprint\n                  \"Thread cache hit rate: $mycalc{'thread_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Threads_created'} )\n                  . \" created / \"\n                  . hr_num( $mystat{'Connections'} )\n                  . \" connections)\";\n                push( @adjvars,\n                    \"thread_cache_size (> $myvar{'thread_cache_size'})\" );\n            }\n            else {\n                goodprint\n                  \"Thread cache hit rate: $mycalc{'thread_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Threads_created'} )\n                  . \" created / \"\n                  . hr_num( $mystat{'Connections'} )\n                  . \" connections)\";\n            }\n        }\n    }\n\n    # Table cache\n    my $table_cache_var = \"\";\n    if ( $mystat{'Open_tables'} > 0 ) {\n        if ( $mycalc{'table_cache_hit_rate'} < 20 ) {\n\n            unless ( defined( $mystat{'Table_open_cache_hits'} ) ) {\n                badprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Open_tables'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Opened_tables'} )\n                  . \" requests)\";\n            }\n            else {\n                badprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . 
hr_num( $mystat{'Table_open_cache_hits'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Table_open_cache_hits'} +\n                      $mystat{'Table_open_cache_misses'} )\n                  . \" requests)\";\n            }\n\n            if ( mysql_version_ge( 5, 1 ) ) {\n                $table_cache_var = \"table_open_cache\";\n            }\n            else {\n                $table_cache_var = \"table_cache\";\n            }\n\n            push( @adjvars,\n                $table_cache_var . \" (> \" . $myvar{$table_cache_var} . \")\" );\n            push( @generalrec,\n                    \"Increase \"\n                  . $table_cache_var\n                  . \" gradually to avoid file descriptor limits\" );\n            push( @generalrec,\n                    \"Read this before increasing \"\n                  . $table_cache_var\n                  . \" over 64: https://bit.ly/2Fulv7r\" );\n            push( @generalrec,\n                    \"Read this before increasing for MariaDB:\"\n                  . \" https://mariadb.com/kb/en/library/optimizing-table_open_cache/\"\n            );\n            push( @generalrec,\n\"This is a MyISAM-only table_cache scalability problem; InnoDB is not affected.\"\n            );\n            push( @generalrec,\n                \"For more details see: https://bugs.mysql.com/bug.php?id=49177\"\n            );\n            push( @generalrec,\n\"This bug is already fixed in MySQL 5.7.9 and newer versions.\"\n            );\n            push( @generalrec,\n                    \"Beware that open_files_limit (\"\n                  . $myvar{'open_files_limit'}\n                  . \") variable \" );\n            push( @generalrec,\n                    \"should be greater than $table_cache_var (\"\n                  . $myvar{$table_cache_var}\n                  . 
\")\" );\n        }\n        else {\n            unless ( defined( $mystat{'Table_open_cache_hits'} ) ) {\n                goodprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Open_tables'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Opened_tables'} )\n                  . \" requests)\";\n            }\n            else {\n                goodprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Table_open_cache_hits'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Table_open_cache_hits'} +\n                      $mystat{'Table_open_cache_misses'} )\n                  . \" requests)\";\n            }\n        }\n    }\n\n    # Table definition cache\n    my $nbtables = select_one('SELECT COUNT(*) FROM information_schema.tables');\n    $mycalc{'total_tables'} = $nbtables;\n    if ( defined $myvar{'table_definition_cache'} ) {\n        if ( $myvar{'table_definition_cache'} == -1 ) {\n            infoprint( \"table_definition_cache (\"\n                  . $myvar{'table_definition_cache'}\n                  . \") is in autosizing mode\" );\n        }\n        elsif ( $myvar{'table_definition_cache'} < $nbtables ) {\n            badprint \"table_definition_cache (\"\n              . $myvar{'table_definition_cache'}\n              . \") is less than number of tables ($nbtables) \";\n            push( @adjvars,\n                    \"table_definition_cache (\"\n                  . $myvar{'table_definition_cache'} . \") > \"\n                  . $nbtables\n                  . \" or -1 (autosizing if supported)\" );\n        }\n        else {\n            goodprint \"table_definition_cache (\"\n              . $myvar{'table_definition_cache'}\n              . 
\") is greater than number of tables ($nbtables)\";\n        }\n    }\n    else {\n        infoprint \"No table_definition_cache variable found.\";\n    }\n\n    # Open files\n    if ( defined $mycalc{'pct_files_open'} ) {\n        if ( $mycalc{'pct_files_open'} > 85 ) {\n            badprint \"Open file limit used: $mycalc{'pct_files_open'}% (\"\n              . hr_num( $mystat{'Open_files'} ) . \"/\"\n              . hr_num( $myvar{'open_files_limit'} ) . \")\";\n            push( @adjvars,\n                \"open_files_limit (> \" . $myvar{'open_files_limit'} . \")\" );\n        }\n        else {\n            goodprint \"Open file limit used: $mycalc{'pct_files_open'}% (\"\n              . hr_num( $mystat{'Open_files'} ) . \"/\"\n              . hr_num( $myvar{'open_files_limit'} ) . \")\";\n        }\n    }\n\n    # Table locks\n    if ( defined $mycalc{'pct_table_locks_immediate'} ) {\n        if ( $mycalc{'pct_table_locks_immediate'} < 95 ) {\n            badprint\n\"Table locks acquired immediately: $mycalc{'pct_table_locks_immediate'}%\";\n            push( @generalrec,\n                \"Optimize queries and/or use InnoDB to reduce lock wait\" );\n        }\n        else {\n            goodprint\n\"Table locks acquired immediately: $mycalc{'pct_table_locks_immediate'}% (\"\n              . hr_num( $mystat{'Table_locks_immediate'} )\n              . \" immediate / \"\n              . hr_num( $mystat{'Table_locks_waited'} +\n                  $mystat{'Table_locks_immediate'} )\n              . \" locks)\";\n        }\n    }\n\n    # Binlog cache\n    if ( defined $mycalc{'pct_binlog_cache'} ) {\n        if (   $mycalc{'pct_binlog_cache'} < 90\n            && $mystat{'Binlog_cache_use'} > 0 )\n        {\n            badprint \"Binlog cache memory access: \"\n              . $mycalc{'pct_binlog_cache'} . \"% (\"\n              . (\n                $mystat{'Binlog_cache_use'} - $mystat{'Binlog_cache_disk_use'} )\n              . \" Memory / \"\n              . 
$mystat{'Binlog_cache_use'}\n              . \" Total)\";\n            push( @generalrec,\n                    \"Increase binlog_cache_size (current value: \"\n                  . $myvar{'binlog_cache_size'}\n                  . \")\" );\n            push( @adjvars,\n                    \"binlog_cache_size (\"\n                  . hr_bytes( $myvar{'binlog_cache_size'} + 16 * 1024 * 1024 )\n                  . \")\" );\n        }\n        else {\n            goodprint \"Binlog cache memory access: \"\n              . $mycalc{'pct_binlog_cache'} . \"% (\"\n              . (\n                $mystat{'Binlog_cache_use'} - $mystat{'Binlog_cache_disk_use'} )\n              . \" Memory / \"\n              . $mystat{'Binlog_cache_use'}\n              . \" Total)\";\n            debugprint \"Not enough data to validate binlog cache size\\n\"\n              if $mystat{'Binlog_cache_use'} < 10;\n        }\n    }\n\n    # Performance options\n    if ( !mysql_version_ge( 5, 1 ) ) {\n        push( @generalrec, \"Upgrade to MySQL 5.5+ to use asynchronous write\" );\n    }\n    elsif ( $myvar{'concurrent_insert'} eq \"OFF\" ) {\n        push( @generalrec, \"Enable concurrent_insert by setting it to 'ON'\" );\n    }\n    elsif ( $myvar{'concurrent_insert'} eq 0 ) {\n        push( @generalrec, \"Enable concurrent_insert by setting it to 1\" );\n    }\n}\n\n# Recommendations for MyISAM\nsub mysql_myisam {\n    return 0 unless ( $opt{'myisamstat'} > 0 );\n    subheaderprint \"MyISAM Metrics\";\n    my $nb_myisam_tables = select_one(\n\"SELECT COUNT(*) FROM information_schema.TABLES WHERE ENGINE='MyISAM' and TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema')\"\n    );\n    push( @generalrec,\n        \"MyISAM engine is deprecated, consider migrating to InnoDB\" )\n      if $nb_myisam_tables > 0;\n\n    if ( $nb_myisam_tables > 0 ) {\n        badprint\n          \"Consider migrating the following $nb_myisam_tables tables to InnoDB:\";\n        my $sql_mig = \"\";\n      
  for my $myisam_table (\n            select_array(\n\"SELECT CONCAT(TABLE_SCHEMA, '.', TABLE_NAME) FROM information_schema.TABLES WHERE ENGINE='MyISAM' and TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema')\"\n            )\n          )\n        {\n            $sql_mig =\n\"${sql_mig}-- InnoDB migration for $myisam_table\\nALTER TABLE $myisam_table ENGINE=InnoDB;\\n\\n\";\n            infoprint\n\"* InnoDB migration request for table $myisam_table: ALTER TABLE $myisam_table ENGINE=InnoDB;\";\n        }\n        dump_into_file( \"migrate_myisam_to_innodb.sql\", $sql_mig );\n    }\n    infoprint(\"General MyISAM metrics:\");\n    infoprint \" +-- Total MyISAM Tables  : $nb_myisam_tables\";\n    infoprint \" +-- Total MyISAM indexes : \"\n      . hr_bytes( $mycalc{'total_myisam_indexes'} )\n      if defined( $mycalc{'total_myisam_indexes'} );\n    infoprint \" +-- KB Size : \" . hr_bytes( $myvar{'key_buffer_size'} );\n    infoprint \" +-- KB Used Size : \"\n      . hr_bytes( $myvar{'key_buffer_size'} -\n          $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} );\n    infoprint \" +-- KB used : \" . $mycalc{'pct_key_buffer_used'} . \"%\";\n    infoprint \" +-- Read KB hit rate: $mycalc{'pct_keys_from_mem'}% (\"\n      . hr_num( $mystat{'Key_read_requests'} )\n      . \" cached / \"\n      . hr_num( $mystat{'Key_reads'} )\n      . \" reads)\";\n    infoprint \" +-- Write KB hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n      . hr_num( $mystat{'Key_write_requests'} )\n      . \" cached / \"\n      . hr_num( $mystat{'Key_writes'} )\n      . 
\" writes)\";\n\n    if ( $nb_myisam_tables == 0 ) {\n        infoprint \"No MyISAM table(s) detected ....\";\n        return;\n    }\n    if ( mysql_version_ge(8) and mysql_version_le(10) ) {\n        infoprint \"MyISAM Metrics are disabled since MySQL 8.0.\";\n        if ( $myvar{'key_buffer_size'} > 0 ) {\n            push( @adjvars, \"key_buffer_size=0\" );\n            push( @generalrec,\n                \"Buffer Key MyISAM set to 0, no MyISAM table detected\" );\n        }\n        return;\n    }\n\n    if ( !defined( $mycalc{'total_myisam_indexes'} ) ) {\n        badprint\n          \"Unable to calculate MyISAM index size on MySQL server < 5.0.0\";\n        push( @generalrec,\n            \"Unable to calculate MyISAM index size on MySQL server < 5.0.0\" );\n        return;\n    }\n    if ( $mycalc{'pct_key_buffer_used'} == 0 ) {\n\n        # No queries have run that would use keys\n        infoprint \"Key buffer used: $mycalc{'pct_key_buffer_used'}% (\"\n          . hr_bytes( $myvar{'key_buffer_size'} -\n              $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} )\n          . \" used / \"\n          . hr_bytes( $myvar{'key_buffer_size'} )\n          . \" cache)\";\n        infoprint \"No SQL statement based on MyISAM table(s) detected ....\";\n        return;\n    }\n\n    # Key buffer usage\n    if ( $mycalc{'pct_key_buffer_used'} < 90 ) {\n        badprint \"Key buffer used: $mycalc{'pct_key_buffer_used'}% (\"\n          . hr_bytes( $myvar{'key_buffer_size'} -\n              $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} )\n          . \" used / \"\n          . hr_bytes( $myvar{'key_buffer_size'} )\n          . \" cache)\";\n\n        push(\n            @adjvars,\n            \"key_buffer_size (\\~ \"\n              . hr_num(\n                $myvar{'key_buffer_size'} *\n                  $mycalc{'pct_key_buffer_used'} / 100\n              )\n              . 
\")\"\n        );\n    }\n    else {\n        goodprint \"Key buffer used: $mycalc{'pct_key_buffer_used'}% (\"\n          . hr_bytes( $myvar{'key_buffer_size'} -\n              $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} )\n          . \" used / \"\n          . hr_bytes( $myvar{'key_buffer_size'} )\n          . \" cache)\";\n    }\n\n    # Key buffer size / total MyISAM indexes\n    if (   $myvar{'key_buffer_size'} < $mycalc{'total_myisam_indexes'}\n        && $mycalc{'pct_keys_from_mem'} < 95 )\n    {\n        badprint \"Key buffer size / total MyISAM indexes: \"\n          . hr_bytes( $myvar{'key_buffer_size'} ) . \"/\"\n          . hr_bytes( $mycalc{'total_myisam_indexes'} ) . \"\";\n        push( @adjvars,\n                \"key_buffer_size (> \"\n              . hr_bytes( $mycalc{'total_myisam_indexes'} )\n              . \")\" );\n    }\n    else {\n        goodprint \"Key buffer size / total MyISAM indexes: \"\n          . hr_bytes( $myvar{'key_buffer_size'} ) . \"/\"\n          . hr_bytes( $mycalc{'total_myisam_indexes'} ) . \"\";\n    }\n    if ( $mystat{'Key_read_requests'} > 0 ) {\n        if ( $mycalc{'pct_keys_from_mem'} < 95 ) {\n            badprint\n              \"Read Key buffer hit rate: $mycalc{'pct_keys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_read_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_reads'} )\n              . \" reads)\";\n        }\n        else {\n            goodprint\n              \"Read Key buffer hit rate: $mycalc{'pct_keys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_read_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_reads'} )\n              . \" reads)\";\n        }\n    }\n\n    # No queries have run that would use keys\n    debugprint \"Key buffer size / total MyISAM indexes: \"\n      . hr_bytes( $myvar{'key_buffer_size'} ) . \"/\"\n      . hr_bytes( $mycalc{'total_myisam_indexes'} ) . 
\"\";\n    if ( $mystat{'Key_write_requests'} > 0 ) {\n        if ( $mycalc{'pct_wkeys_from_mem'} < 95 ) {\n            badprint\n              \"Write Key buffer hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_write_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_writes'} )\n              . \" writes)\";\n        }\n        else {\n            goodprint\n              \"Write Key buffer hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_write_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_writes'} )\n              . \" writes)\";\n        }\n    }\n    else {\n        # No queries have run that would use keys\n        debugprint\n          \"Write Key buffer hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n          . hr_num( $mystat{'Key_write_requests'} )\n          . \" cached / \"\n          . hr_num( $mystat{'Key_writes'} )\n          . \" writes)\";\n    }\n}\n\n# Recommendations for ThreadPool\nsub mariadb_threadpool {\n    subheaderprint \"ThreadPool Metrics\";\n\n    # MariaDB\n    unless ( defined $myvar{'have_threadpool'}\n        && $myvar{'have_threadpool'} eq \"YES\" )\n    {\n        infoprint \"ThreadPool stat is disabled.\";\n        return;\n    }\n    infoprint \"ThreadPool stat is enabled.\";\n    infoprint \"Thread Pool Size: \" . $myvar{'thread_pool_size'} . \" thread(s).\";\n\n    if (   $myvar{'version'} =~ /percona/i\n        or $myvar{'version_comment'} =~ /percona/i )\n    {\n        my $np = cpu_cores;\n        if (    $myvar{'thread_pool_size'} >= $np\n            and $myvar{'thread_pool_size'} < ( $np * 1.5 ) )\n        {\n            goodprint\n\"thread_pool_size for Percona between 1 and 1.5 times number of CPUs (\"\n              . $np . \" and \"\n              . ( $np * 1.5 ) . 
\")\";\n        }\n        else {\n            badprint\n\"thread_pool_size for Percona between 1 and 1.5 times number of CPUs (\"\n              . $np . \" and \"\n              . ( $np * 1.5 ) . \")\";\n            push( @adjvars,\n                    \"thread_pool_size between \"\n                  . $np . \" and \"\n                  . ( $np * 1.5 )\n                  . \" for InnoDB usage\" );\n        }\n        return;\n    }\n\n    if ( $myvar{'version'} =~ /mariadb/i ) {\n        infoprint \"Using default value is good enough for your version (\"\n          . $myvar{'version'} . \")\";\n        return;\n    }\n\n    if ( $myvar{'have_innodb'} eq 'YES' ) {\n        if (   $myvar{'thread_pool_size'} < 16\n            or $myvar{'thread_pool_size'} > 36 )\n        {\n            badprint\n\"thread_pool_size between 16 and 36 when using InnoDB storage engine.\";\n            push( @generalrec,\n                    \"Thread pool size for InnoDB usage (\"\n                  . $myvar{'thread_pool_size'}\n                  . \")\" );\n            push( @adjvars,\n                \"thread_pool_size between 16 and 36 for InnoDB usage\" );\n        }\n        else {\n            goodprint\n\"thread_pool_size between 16 and 36 when using InnoDB storage engine.\";\n        }\n        return;\n    }\n    if ( $myvar{'have_isam'} eq 'YES' ) {\n        if ( $myvar{'thread_pool_size'} < 4 or $myvar{'thread_pool_size'} > 8 )\n        {\n            badprint\n\"thread_pool_size between 4 and 8 when using MyISAM storage engine.\";\n            push( @generalrec,\n                    \"Thread pool size for MyISAM usage (\"\n                  . $myvar{'thread_pool_size'}\n                  . 
\")\" );\n            push( @adjvars,\n                \"thread_pool_size between 4 and 8 for MyISAM usage\" );\n        }\n        else {\n            goodprint\n\"thread_pool_size between 4 and 8 when using MyISAM storage engine.\";\n        }\n    }\n}\n\nsub get_pf_memory {\n\n    # Performance Schema\n    return 0 unless defined $myvar{'performance_schema'};\n    return 0 if $myvar{'performance_schema'} eq 'OFF';\n\n    my @infoPFSMemory = grep { /\\tperformance_schema[.]memory\\t/msx }\n      select_array(\"SHOW ENGINE PERFORMANCE_SCHEMA STATUS\");\n    @infoPFSMemory == 1 || return 0;\n    $infoPFSMemory[0] =~ s/.*\\s+(\\d+)$/$1/g;\n    return $infoPFSMemory[0];\n}\n\n# Recommendations for Performance Schema\nsub mysql_pfs {\n    subheaderprint \"Performance schema\";\n\n    # Performance Schema\n    debugprint \"Performance schema is \" . $myvar{'performance_schema'};\n    $myvar{'performance_schema'} = 'OFF'\n      unless defined( $myvar{'performance_schema'} );\n    if ( $myvar{'performance_schema'} eq 'OFF' ) {\n        badprint \"Performance_schema should be activated.\";\n        push( @adjvars, \"performance_schema=ON\" );\n        push( @generalrec,\n            \"Performance schema should be activated for better diagnostics\" );\n    }\n    if ( $myvar{'performance_schema'} eq 'ON' ) {\n        infoprint \"Performance_schema is activated.\";\n        debugprint \"Performance schema is \" . $myvar{'performance_schema'};\n        infoprint \"Memory used by Performance_schema: \"\n          . hr_bytes( get_pf_memory() );\n    }\n\n    unless ( grep /^sys$/, select_array(\"SHOW DATABASES\") ) {\n        infoprint \"Sys schema is not installed.\";\n        push( @generalrec,\n            mysql_version_ge( 10, 0 )\n            ? 
\"Consider installing Sys schema from https://github.com/FromDual/mariadb-sys for MariaDB\"\n            : \"Consider installing Sys schema from https://github.com/mysql/mysql-sys for MySQL\"\n        ) unless ( mysql_version_le( 5, 6 ) );\n\n        return;\n    }\n    infoprint \"Sys schema is installed.\";\n    return if ( $opt{pfstat} == 0 or $myvar{'performance_schema'} ne 'ON' );\n\n    infoprint \"Sys schema Version: \"\n      . select_one(\"select sys_version from sys.version\");\n\n    # Store all sys schema in dumpdir if defined\n    if ( defined $opt{dumpdir} and -d \"$opt{dumpdir}\" ) {\n        for my $sys_view ( select_array('use sys;show tables;') ) {\n            infoprint \"Dumping $sys_view into $opt{dumpdir}\";\n            my $sys_view_table = $sys_view;\n            $sys_view_table =~ s/\\$/\\\\\\$/g;\n            select_csv_file( \"$opt{dumpdir}/sys_$sys_view.csv\",\n                'select * from sys.\\`' . $sys_view_table . '\\`' );\n        }\n        return;\n\n        #exit 0 if ( $opt{stop} == 1 );\n    }\n\n    # Top user per connection\n    subheaderprint \"Performance schema: Top 5 user per connection\";\n    my $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, total_connections from sys.user_summary order by total_connections desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery conn(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per statement\n    subheaderprint \"Performance schema: Top 5 user per statement\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, statements from sys.user_summary order by statements desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery stmt(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per statement latency\n    
subheaderprint \"Performance schema: Top 5 user per statement latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, statement_avg_latency from sys.x\\\\$user_summary order by statement_avg_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per lock latency\n    subheaderprint \"Performance schema: Top 5 user per lock latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, lock_latency from sys.x\\\\$user_summary_by_statement_latency order by lock_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per full scans\n    subheaderprint \"Performance schema: Top 5 user per nb full scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, full_scans from sys.x\\\\$user_summary_by_statement_latency order by full_scans desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per row_sent\n    subheaderprint \"Performance schema: Top 5 user per rows sent\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, rows_sent from sys.x\\\\$user_summary_by_statement_latency order by rows_sent desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per row modified\n    subheaderprint \"Performance schema: Top 5 user per rows modified\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, 
rows_affected from sys.x\\\\$user_summary_by_statement_latency order by rows_affected desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per io\n    subheaderprint \"Performance schema: Top 5 user per IO\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, file_ios from sys.x\\\\$user_summary order by file_ios desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per io latency\n    subheaderprint \"Performance schema: Top 5 user per IO latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, file_io_latency from sys.x\\\\$user_summary order by file_io_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per connection\n    subheaderprint \"Performance schema: Top 5 host per connection\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, total_connections from sys.x\\\\$host_summary order by total_connections desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery conn(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per statement\n    subheaderprint \"Performance schema: Top 5 host per statement\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, statements from sys.x\\\\$host_summary order by statements desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery stmt(s)\";\n        $nbL++;\n    }\n    infoprint \"No information 
found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per statement latency\n    subheaderprint \"Performance schema: Top 5 host per statement latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, statement_avg_latency from sys.x\\\\$host_summary order by statement_avg_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per lock latency\n    subheaderprint \"Performance schema: Top 5 host per lock latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, lock_latency from sys.x\\\\$host_summary_by_statement_latency order by lock_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per full scans\n    subheaderprint \"Performance schema: Top 5 host per nb full scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, full_scans from sys.x\\\\$host_summary_by_statement_latency order by full_scans desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per rows sent\n    subheaderprint \"Performance schema: Top 5 host per rows sent\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, rows_sent from sys.x\\\\$host_summary_by_statement_latency order by rows_sent desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per rows modified\n    subheaderprint \"Performance schema: Top 
5 host per rows modified\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, rows_affected from sys.x\\\\$host_summary_by_statement_latency order by rows_affected desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per io\n    subheaderprint \"Performance schema: Top 5 host per io\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, file_ios from sys.x\\\\$host_summary order by file_ios desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top 5 host per io latency\n    subheaderprint \"Performance schema: Top 5 host per io latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, file_io_latency from sys.x\\\\$host_summary order by file_io_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top IO type order by total io\n    subheaderprint \"Performance schema: Top IO type order by total io\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,14), SUM(total)AS total from sys.x\\\\$host_summary_by_file_io_type GROUP BY substring(event_name,14) ORDER BY total DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery i/o\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top IO type order by total latency\n    subheaderprint \"Performance schema: Top IO type order by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select 
substring(event_name,14), ROUND(SUM(total_latency),1) AS total_latency from sys.x\\\\$host_summary_by_file_io_type GROUP BY substring(event_name,14) ORDER BY total_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top IO type order by max latency\n    subheaderprint \"Performance schema: Top IO type order by max latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,14), MAX(max_latency) as max_latency from sys.x\\\\$host_summary_by_file_io_type GROUP BY substring(event_name,14) ORDER BY max_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top Stages order by total io\n    subheaderprint \"Performance schema: Top Stages order by total io\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,7), SUM(total)AS total from sys.x\\\\$host_summary_by_stages GROUP BY substring(event_name,7) ORDER BY total DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery i/o\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top Stages order by total latency\n    subheaderprint \"Performance schema: Top Stages order by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,7), ROUND(SUM(total_latency),1) AS total_latency from sys.x\\\\$host_summary_by_stages GROUP BY substring(event_name,7) ORDER BY total_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top 
Stages order by avg latency\n    subheaderprint "Performance schema: Top Stages order by avg latency";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,7), MAX(avg_latency) as avg_latency from sys.x\\$host_summary_by_stages GROUP BY substring(event_name,7) ORDER BY avg_latency DESC;'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # Top host per table scans\n    subheaderprint "Performance schema: Top 5 host per table scans";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, table_scans from sys.x\\$host_summary order by table_scans desc LIMIT 5'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # InnoDB Buffer Pool by schema\n    subheaderprint "Performance schema: InnoDB Buffer Pool by schema";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select object_schema, allocated, data, pages from sys.x\\$innodb_buffer_stats_by_schema ORDER BY pages DESC'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery page(s)";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # InnoDB Buffer Pool by table\n    subheaderprint "Performance schema: Top 40 InnoDB Buffer Pool by table";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select object_schema,  object_name, allocated,data, pages from sys.x\\$innodb_buffer_stats_by_table ORDER BY pages DESC LIMIT 40'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery page(s)";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # Process list ordered by time\n    
subheaderprint \"Performance schema: Process per time\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, Command AS PROC, time from sys.x\\\\$processlist ORDER BY time DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # InnoDB Lock Waits\n    subheaderprint \"Performance schema: InnoDB Lock Waits\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select wait_age_secs, locked_table, locked_type, waiting_query from sys.x\\\\$innodb_lock_waits order by wait_age_secs DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Threads IO Latency\n    subheaderprint \"Performance schema: Thread IO Latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, total_latency, max_latency from sys.x\\\\$io_by_thread_by_latency order by total_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # High Cost SQL statements\n    subheaderprint \"Performance schema: Top 15 Most latency statements\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select LEFT(query, 120), avg_latency from sys.x\\\\$statement_analysis order by avg_latency desc LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top 5% slower queries\n    subheaderprint \"Performance schema: Top 15 slower queries\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select LEFT(query, 120), exec_count from 
sys.x\\\\$statements_with_runtimes_in_95th_percentile order by exec_count desc LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery s\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top 10 nb statement type\n    subheaderprint \"Performance schema: Top 15 nb statement type\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(total) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by total latency\n    subheaderprint \"Performance schema: Top 15 statement by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(total_latency) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by lock latency\n    subheaderprint \"Performance schema: Top 15 statement by lock latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(lock_latency) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by full scans\n    subheaderprint \"Performance schema: Top 15 statement by full scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select 
statement, sum(full_scans) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by rows sent\n    subheaderprint \"Performance schema: Top 15 statement by rows sent\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(rows_sent) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by rows modified\n    subheaderprint \"Performance schema: Top 15 statement by rows modified\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(rows_affected) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Use temporary tables\n    subheaderprint \"Performance schema: 15 sample queries using temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select left(query, 120) from sys.x\\\\$statements_with_temp_tables LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Unused Indexes\n    subheaderprint \"Performance schema: Unused indexes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n\"select \\* from sys.schema_unused_indexes where object_schema not in 
('performance_schema')\"\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Full table scans\n    subheaderprint \"Performance schema: Tables with full table scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select * from sys.x\\\\$schema_tables_with_full_table_scans order by rows_full_scanned DESC'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Latest file IO by latency\n    subheaderprint \"Performance schema: Latest File IO by latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select thread, file, latency, operation from sys.x\\\\$latest_file_io ORDER BY latency LIMIT 10;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # FILE by IO read bytes\n    subheaderprint \"Performance schema: File by IO read bytes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select file, total_read from sys.x\\\\$io_global_by_file_by_bytes order by total_read DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # FILE by IO written bytes\n    subheaderprint \"Performance schema: File by IO written bytes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select file, total_written from sys.x\\\\$io_global_by_file_by_bytes order by total_written DESC LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n     
 if ( $nbL == 1 );\n\n    # file per IO total latency\n    subheaderprint \"Performance schema: File per IO total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select file, total_latency from sys.x\\\\$io_global_by_file_by_latency ORDER BY total_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # file per IO read latency\n    subheaderprint \"Performance schema: file per IO read latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select file, read_latency from sys.x\\\\$io_global_by_file_by_latency ORDER BY read_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # file per IO write latency\n    subheaderprint \"Performance schema: file per IO write latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select file, write_latency from sys.x\\\\$io_global_by_file_by_latency ORDER BY write_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Event Wait by read bytes\n    subheaderprint \"Performance schema: Event Wait by read bytes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select event_name, total_read from sys.x\\\\$io_global_by_wait_by_bytes order by total_read DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Event Wait by write bytes\n    subheaderprint \"Performance schema: Event Wait written bytes\";\n    $nbL = 
1;\n    for my $lQuery (\n        select_array(\n'select event_name, total_written from sys.x\\\\$io_global_by_wait_by_bytes order by total_written DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # event per wait total latency\n    subheaderprint \"Performance schema: event per wait total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_name, total_latency from sys.x\\\\$io_global_by_wait_by_latency ORDER BY total_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # event per wait read latency\n    subheaderprint \"Performance schema: event per wait read latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_name, read_latency from sys.x\\\\$io_global_by_wait_by_latency ORDER BY read_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # event per wait write latency\n    subheaderprint \"Performance schema: event per wait write latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_name, write_latency from sys.x\\\\$io_global_by_wait_by_latency ORDER BY write_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    #schema_index_statistics\n    # TOP 15 most read index\n    subheaderprint \"Performance schema: Top 15 most read indexes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use 
sys;select table_schema, table_name,index_name, rows_selected from sys.x\\$schema_index_statistics ORDER BY rows_selected DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # TOP 15 most modified indexes\n    subheaderprint "Performance schema: Top 15 most modified indexes";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, rows_inserted+rows_updated+rows_deleted AS changes from sys.x\\$schema_index_statistics ORDER BY rows_inserted+rows_updated+rows_deleted DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # TOP 15 high read latency index\n    subheaderprint "Performance schema: Top 15 high read latency index";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, select_latency from sys.x\\$schema_index_statistics ORDER BY select_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # TOP 15 high insert latency index\n    subheaderprint "Performance schema: Top 15 high insert latency index";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, insert_latency from sys.x\\$schema_index_statistics ORDER BY insert_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint " +-- $nbL: $lQuery";\n        $nbL++;\n    }\n    infoprint "No information found or indicators deactivated."\n      if ( $nbL == 1 );\n\n    # TOP 15 high update latency index\n    subheaderprint "Performance schema: Top 15 
high update latency index\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, update_latency from sys.x\\\\$schema_index_statistics ORDER BY update_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high delete latency index\n    subheaderprint \"Performance schema: Top 15 high delete latency index\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, delete_latency from sys.x\\\\$schema_index_statistics ORDER BY delete_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 most read tables\n    subheaderprint \"Performance schema: Top 15 most read tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, rows_fetched from sys.x\\\\$schema_table_statistics ORDER BY ROWs_fetched DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 most used tables\n    subheaderprint \"Performance schema: Top 15 most modified tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, rows_inserted+rows_updated+rows_deleted AS changes from sys.x\\\\$schema_table_statistics ORDER BY rows_inserted+rows_updated+rows_deleted DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high read 
latency tables\n    subheaderprint \"Performance schema: Top 15 high read latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, fetch_latency from sys.x\\\\$schema_table_statistics ORDER BY fetch_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high insert latency tables\n    subheaderprint \"Performance schema: Top 15 high insert latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, insert_latency from sys.x\\\\$schema_table_statistics ORDER BY insert_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high update latency tables\n    subheaderprint \"Performance schema: Top 15 high update latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, update_latency from sys.x\\\\$schema_table_statistics ORDER BY update_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high delete latency tables\n    subheaderprint \"Performance schema: Top 15 high delete latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, delete_latency from sys.x\\\\$schema_table_statistics ORDER BY delete_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # 
Redundant indexes\n    subheaderprint \"Performance schema: Redundant indexes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array('use sys;select * from schema_redundant_indexes;') )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Table not using InnoDB buffer\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n' Select table_schema, table_name from sys.x\\\\$schema_table_statistics_with_buffer where innodb_buffer_allocated IS NULL;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 Tables using InnoDB buffer\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select table_schema,table_name,innodb_buffer_allocated from sys.x\\\\$schema_table_statistics_with_buffer where innodb_buffer_allocated IS NOT NULL ORDER BY innodb_buffer_allocated DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 Tables with InnoDB buffer free\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select table_schema,table_name,innodb_buffer_free from sys.x\\\\$schema_table_statistics_with_buffer where innodb_buffer_allocated IS NOT NULL ORDER BY innodb_buffer_free DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 Most executed queries\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select 
db, LEFT(query, 120), exec_count from sys.x\\\\$statement_analysis order by exec_count DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Latest SQL queries in errors or warnings\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select LEFT(query, 120), last_seen from sys.x\\\\$statements_with_errors_or_warnings ORDER BY last_seen LIMIT 40;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 20 queries with full table scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), exec_count from sys.x\\\\$statements_with_full_table_scans order BY exec_count DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Last 50 queries with full table scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), last_seen from sys.x\\\\$statements_with_full_table_scans order BY last_seen DESC LIMIT 50;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 reader queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), rows_sent from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY ROWs_sent DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- 
$nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 most row look queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), rows_examined AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY rows_examined DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 total latency queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), total_latency AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 max latency queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), max_latency AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY max_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 average latency queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), avg_latency AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY avg_latency DESC LIMIT 15;'\n        )\n      )\n    {\n     
   infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 20 queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), exec_count from sys.x\\\\$statements_with_sorting order BY exec_count DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Last 50 queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), last_seen from sys.x\\\\$statements_with_sorting order BY last_seen DESC LIMIT 50;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 row sorting queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), rows_sorted from sys.x\\\\$statements_with_sorting ORDER BY ROWs_sorted DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 total latency queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), total_latency AS search from sys.x\\\\$statements_with_sorting ORDER BY total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 
15 merge queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), sort_merge_passes AS search from sys.x\\\\$statements_with_sorting ORDER BY sort_merge_passes DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 average sort merges queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), avg_sort_merges AS search from sys.x\\\\$statements_with_sorting ORDER BY avg_sort_merges DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 scans queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), sorts_using_scans AS search from sys.x\\\\$statements_with_sorting ORDER BY sorts_using_scans DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 range queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), sort_using_range AS search from sys.x\\\\$statements_with_sorting ORDER BY sort_using_range DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n##################################################################################\n\n    #statements_with_temp_tables\n\n#mysql> desc 
statements_with_temp_tables;\n#+--------------------------+---------------------+------+-----+---------------------+-------+\n#| Field                    | Type                | Null | Key | Default             | Extra |\n#+--------------------------+---------------------+------+-----+---------------------+-------+\n#| query                    | longtext            | YES  |     | NULL                |       |\n#| db                       | varchar(64)         | YES  |     | NULL                |       |\n#| exec_count               | bigint(20) unsigned | NO   |     | NULL                |       |\n#| total_latency            | text                | YES  |     | NULL                |       |\n#| memory_tmp_tables        | bigint(20) unsigned | NO   |     | NULL                |       |\n#| disk_tmp_tables          | bigint(20) unsigned | NO   |     | NULL                |       |\n#| avg_tmp_tables_per_query | decimal(21,0)       | NO   |     | 0                   |       |\n#| tmp_tables_to_disk_pct   | decimal(24,0)       | NO   |     | 0                   |       |\n#| first_seen               | timestamp           | NO   |     | 0000-00-00 00:00:00 |       |\n#| last_seen                | timestamp           | NO   |     | 0000-00-00 00:00:00 |       |\n#| digest                   | varchar(32)         | YES  |     | NULL                |       |\n#+--------------------------+---------------------+------+-----+---------------------+-------+\n#11 rows in set (0,01 sec)#\n#\n    subheaderprint \"Performance schema: Top 20 queries with temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), exec_count from sys.x\\\\$statements_with_temp_tables order BY exec_count DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Last 50 
queries with temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), last_seen from sys.x\\\\$statements_with_temp_tables order BY last_seen DESC LIMIT 50;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 total latency queries with temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), total_latency AS search from sys.x\\\\$statements_with_temp_tables ORDER BY total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 queries with temp table to disk\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), disk_tmp_tables from sys.x\\\\$statements_with_temp_tables ORDER BY disk_tmp_tables DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n##################################################################################\n    #wait_classes_global_by_latency\n\n#mysql> select * from wait_classes_global_by_latency;\n#-----------------+-------+---------------+-------------+-------------+-------------+\n# event_class     | total | total_latency | min_latency | avg_latency | max_latency |\n#-----------------+-------+---------------+-------------+-------------+-------------+\n# wait/io/file    | 15381 | 1.23 s        | 0 ps        | 80.12 us    | 230.64 ms   |\n# wait/io/table   |    59 | 7.57 ms       | 5.45 us     | 128.24 us   | 3.95 ms     |\n# wait/lock/table |    69 | 3.22 ms    
   | 658.84 ns   | 46.64 us    | 1.10 ms     |\n#-----------------+-------+---------------+-------------+-------------+-------------+\n# rows in set (0,00 sec)\n\n    subheaderprint \"Performance schema: Top 15 class events by number\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_class, total from sys.x\\\\$wait_classes_global_by_latency ORDER BY total DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 30 events by number\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select events, total from sys.x\\\\$waits_global_by_latency ORDER BY total DESC LIMIT 30;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 class events by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_class, total_latency from sys.x\\\\$wait_classes_global_by_latency ORDER BY total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 30 events by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select events, total_latency from sys.x\\\\$waits_global_by_latency ORDER BY total_latency DESC LIMIT 30;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 class events by max latency\";\n   
 $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select event_class, max_latency from sys.x\\\\$wait_classes_global_by_latency ORDER BY max_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 30 events by max latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select events, max_latency from sys.x\\\\$waits_global_by_latency ORDER BY max_latency DESC LIMIT 30;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n}\n\n# Recommendations for Aria Engine\nsub mariadb_aria {\n    subheaderprint \"Aria Metrics\";\n\n    # Aria\n    if ( !defined $myvar{'have_aria'} ) {\n        infoprint \"Aria Storage Engine not available.\";\n        return;\n    }\n    if ( $myvar{'have_aria'} ne \"YES\" ) {\n        infoprint \"Aria Storage Engine is disabled.\";\n        return;\n    }\n    infoprint \"Aria Storage Engine is enabled.\";\n\n    # Aria pagecache\n    if ( !defined( $mycalc{'total_aria_indexes'} ) ) {\n        push( @generalrec,\n            \"Unable to calculate Aria index size on MySQL server\" );\n    }\n    else {\n        if (\n            $myvar{'aria_pagecache_buffer_size'} < $mycalc{'total_aria_indexes'}\n            && $mycalc{'pct_aria_keys_from_mem'} < 95 )\n        {\n            badprint \"Aria pagecache size / total Aria indexes: \"\n              . hr_bytes( $myvar{'aria_pagecache_buffer_size'} ) . \"/\"\n              . hr_bytes( $mycalc{'total_aria_indexes'} ) . \"\";\n            push( @adjvars,\n                    \"aria_pagecache_buffer_size (> \"\n                  . hr_bytes( $mycalc{'total_aria_indexes'} )\n                  . 
\")\" );\n        }\n        else {\n            goodprint \"Aria pagecache size / total Aria indexes: \"\n              . hr_bytes( $myvar{'aria_pagecache_buffer_size'} ) . \"/\"\n              . hr_bytes( $mycalc{'total_aria_indexes'} ) . \"\";\n        }\n        if ( $mystat{'Aria_pagecache_read_requests'} > 0 ) {\n            if ( $mycalc{'pct_aria_keys_from_mem'} < 95 ) {\n                badprint\n\"Aria pagecache hit rate: $mycalc{'pct_aria_keys_from_mem'}% (\"\n                  . hr_num( $mystat{'Aria_pagecache_read_requests'} )\n                  . \" cached / \"\n                  . hr_num( $mystat{'Aria_pagecache_reads'} )\n                  . \" reads)\";\n            }\n            else {\n                goodprint\n\"Aria pagecache hit rate: $mycalc{'pct_aria_keys_from_mem'}% (\"\n                  . hr_num( $mystat{'Aria_pagecache_read_requests'} )\n                  . \" cached / \"\n                  . hr_num( $mystat{'Aria_pagecache_reads'} )\n                  . \" reads)\";\n            }\n        }\n        else {\n\n            # No queries have run that would use keys\n        }\n    }\n}\n\n# Recommendations for TokuDB\nsub mariadb_tokudb {\n    subheaderprint \"TokuDB Metrics\";\n\n    # AriaDB\n    unless ( defined $myvar{'have_tokudb'}\n        && $myvar{'have_tokudb'} eq \"YES\" )\n    {\n        infoprint \"TokuDB is disabled.\";\n        return;\n    }\n    infoprint \"TokuDB is enabled.\";\n\n    # Not implemented\n}\n\n# Recommendations for XtraDB\nsub mariadb_xtradb {\n    subheaderprint \"XtraDB Metrics\";\n\n    # XtraDB\n    unless ( defined $myvar{'have_xtradb'}\n        && $myvar{'have_xtradb'} eq \"YES\" )\n    {\n        infoprint \"XtraDB is disabled.\";\n        return;\n    }\n    infoprint \"XtraDB is enabled.\";\n    infoprint \"Note that MariaDB 10.2 makes use of InnoDB, not XtraDB.\"\n\n      # Not implemented\n}\n\n# Recommendations for RocksDB\nsub mariadb_rockdb {\n    subheaderprint \"RocksDB Metrics\";\n\n    # 
RocksDB\n    unless ( defined $myvar{'have_rocksdb'}\n        && $myvar{'have_rocksdb'} eq \"YES\" )\n    {\n        infoprint \"RocksDB is disabled.\";\n        return;\n    }\n    infoprint \"RocksDB is enabled.\";\n\n    # Not implemented\n}\n\n# Recommendations for Spider\nsub mariadb_spider {\n    subheaderprint \"Spider Metrics\";\n\n    # Spider\n    unless ( defined $myvar{'have_spider'}\n        && $myvar{'have_spider'} eq \"YES\" )\n    {\n        infoprint \"Spider is disabled.\";\n        return;\n    }\n    infoprint \"Spider is enabled.\";\n\n    # Not implemented\n}\n\n# Recommendations for Connect\nsub mariadb_connect {\n    subheaderprint \"Connect Metrics\";\n\n    # Connect\n    unless ( defined $myvar{'have_connect'}\n        && $myvar{'have_connect'} eq \"YES\" )\n    {\n        infoprint \"Connect is disabled.\";\n        return;\n    }\n    infoprint \"Connect is enabled.\";\n\n    # Not implemented\n}\n\n# Perl trim function to remove whitespace from the start and end of the string\nsub trim {\n    my $string = shift;\n    return \"\" unless defined($string);\n    $string =~ s/^\\s+//;\n    $string =~ s/\\s+$//;\n    return $string;\n}\n\nsub get_wsrep_options {\n    return () unless defined $myvar{'wsrep_provider_options'};\n\n    my @galera_options      = split /;/, $myvar{'wsrep_provider_options'};\n    my $wsrep_slave_threads = $myvar{'wsrep_slave_threads'};\n    push @galera_options, ' wsrep_slave_threads = ' . 
$wsrep_slave_threads;\n    @galera_options = remove_cr @galera_options;\n    @galera_options = remove_empty @galera_options;\n\n    #debugprint Dumper( \\@galera_options ) if $opt{debug};\n    return @galera_options;\n}\n\nsub get_gcache_memory {\n    my $gCacheMem = hr_raw( get_wsrep_option('gcache.size') );\n\n    return 0 unless defined $gCacheMem and $gCacheMem ne '';\n    return $gCacheMem;\n}\n\nsub get_wsrep_option {\n    my $key = shift;\n    return '' unless defined $myvar{'wsrep_provider_options'};\n    my @galera_options = get_wsrep_options;\n    return '' unless scalar(@galera_options) > 0;\n    my @memValues = grep /\\s*$key =/, @galera_options;\n    my $memValue  = $memValues[0];\n    return 0 unless defined $memValue;\n    $memValue =~ s/.*=\\s*(.+)$/$1/g;\n    return $memValue;\n}\n\n# Recommendations for Tables\nsub mysql_table_structures {\n    return 0 unless ( $opt{structstat} > 0 );\n    subheaderprint \"Table structures analysis\";\n\n    my @primaryKeysNbTables = select_array(\n        \"Select CONCAT(c.table_schema, ',' , c.table_name)\nfrom information_schema.columns c\njoin information_schema.tables t using (TABLE_SCHEMA, TABLE_NAME)\nwhere c.table_schema not in ('sys', 'mysql', 'information_schema', 'performance_schema')\n  and t.table_type = 'BASE TABLE'\ngroup by c.table_schema,c.table_name\nhaving sum(if(c.column_key in ('PRI', 'UNI'), 1, 0)) = 0\"\n    );\n\n    my $tmpContent = 'Schema,Table';\n    if ( scalar(@primaryKeysNbTables) > 0 ) {\n        badprint \"The following table(s) don't have a primary key:\";\n        foreach my $badtable (@primaryKeysNbTables) {\n            badprint \"\\t$badtable\";\n            push @{ $result{'Tables without PK'} }, $badtable;\n            $tmpContent .= \"\\n$badtable\";\n        }\n        push @generalrec,\n\"Ensure that all table(s) have an explicit primary key for performance, maintenance and replication\";\n\n    }\n    else {\n        goodprint \"All tables have a primary key\";\n    
}\n    dump_into_file( \"tables_without_primary_keys.csv\", $tmpContent );\n\n    my @nonInnoDBTables = select_array(\n        \"select CONCAT(table_schema, ',', table_name, ',', ENGINE) \nFROM information_schema.tables t\nWHERE ENGINE <> 'InnoDB' \nand t.table_type = 'BASE TABLE'\nand table_schema not in \n('sys', 'mysql', 'performance_schema', 'information_schema')\"\n    );\n    $tmpContent = 'Schema,Table,Engine';\n    if ( scalar(@nonInnoDBTables) > 0 ) {\n        badprint \"The following table(s) are not InnoDB tables:\";\n        push @generalrec,\n\"Ensure that all table(s) are InnoDB tables for performance and replication\";\n        foreach my $badtable (@nonInnoDBTables) {\n            if ( $badtable =~ /Memory/i ) {\n                badprint\n\"Table $badtable is a MEMORY table. It's suggested to use only InnoDB tables in production\";\n            }\n            else {\n                badprint \"\\t$badtable\";\n            }\n            $tmpContent .= \"\\n$badtable\";\n        }\n    }\n    else {\n        goodprint \"All tables are InnoDB tables\";\n    }\n    dump_into_file( \"tables_non_innodb.csv\", $tmpContent );\n\n    my @nonutf8columns = select_array(\n\"SELECT CONCAT(table_schema, ',', table_name, ',', column_name, ',', CHARacter_set_name, ',', COLLATION_name, ',', data_type, ',', CHARACTER_MAXIMUM_LENGTH)\nfrom information_schema.columns\nWHERE table_schema not in ('sys', 'mysql', 'performance_schema', 'information_schema')\nand (CHARacter_set_name  NOT LIKE 'utf8%'\nor COLLATION_name NOT LIKE 'utf8%');\"\n    );\n    $tmpContent =\n      'Schema,Table,Column, Charset, Collation, Data Type, Max Length';\n    if ( scalar(@nonutf8columns) > 0 ) {\n        badprint \"The following character column(s) are not utf8 compliant:\";\n        push @generalrec,\n\"Ensure that all text column(s) are UTF-8 compliant for encoding support and performance\";\n        foreach my $badtable (@nonutf8columns) {\n            badprint \"\\t$badtable\";\n      
      $tmpContent .= \"\\n$badtable\";\n        }\n    }\n    else {\n        goodprint \"All columns are UTF-8 compliant\";\n    }\n    dump_into_file( \"columns_non_utf8.csv\", $tmpContent );\n\n    my @utf8columns = select_array(\n\"SELECT CONCAT(table_schema, ',', table_name, ',', column_name, ',', CHARacter_set_name, ',', COLLATION_name, ',', data_type, ',', CHARACTER_MAXIMUM_LENGTH)\nfrom information_schema.columns\nWHERE table_schema not in ('sys', 'mysql', 'performance_schema', 'information_schema')\nand (CHARacter_set_name  LIKE 'utf8%'\nor COLLATION_name LIKE 'utf8%');\"\n    );\n    $tmpContent =\n      'Schema,Table,Column, Charset, Collation, Data Type, Max Length';\n    foreach my $badtable (@utf8columns) {\n        $tmpContent .= \"\\n$badtable\";\n    }\n    dump_into_file( \"columns_utf8.csv\", $tmpContent );\n\n    my @ftcolumns = select_array(\n\"SELECT CONCAT(table_schema, ',', table_name, ',', column_name, ',', data_type)\nfrom information_schema.columns\nWHERE table_schema not in ('sys', 'mysql', 'performance_schema', 'information_schema')\nAND data_type='FULLTEXT';\"\n    );\n    $tmpContent = 'Schema,Table,Column, Data Type';\n    foreach my $ctable (@ftcolumns) {\n        $tmpContent .= \"\\n$ctable\";\n    }\n    dump_into_file( \"fulltext_columns.csv\", $tmpContent );\n\n}\n\n# Recommendations for Galera\nsub mariadb_galera {\n    subheaderprint \"Galera Metrics\";\n\n    # Galera Cluster\n    unless ( defined $myvar{'have_galera'}\n        && $myvar{'have_galera'} eq \"YES\" )\n    {\n        infoprint \"Galera is disabled.\";\n        return;\n    }\n    infoprint \"Galera is enabled.\";\n    debugprint \"Galera variables:\";\n    foreach my $gvar ( keys %myvar ) {\n        next unless $gvar =~ /^wsrep.*/;\n        next if $gvar eq 'wsrep_provider_options';\n        debugprint \"\\t\" . trim($gvar) . \" = \" . 
$myvar{$gvar};\n        $result{'Galera'}{'variables'}{$gvar} = $myvar{$gvar};\n    }\n    if ( not defined( $myvar{'wsrep_on'} ) or $myvar{'wsrep_on'} ne \"ON\" ) {\n        infoprint \"Galera is disabled.\";\n        return;\n    }\n    debugprint \"Galera wsrep provider Options:\";\n    my @galera_options = get_wsrep_options;\n    $result{'Galera'}{'wsrep options'} = get_wsrep_options();\n    foreach my $gparam (@galera_options) {\n        debugprint \"\\t\" . trim($gparam);\n    }\n    debugprint \"Galera status:\";\n    foreach my $gstatus ( keys %mystat ) {\n        next unless $gstatus =~ /^wsrep.*/;\n        debugprint \"\\t\" . trim($gstatus) . \" = \" . $mystat{$gstatus};\n        $result{'Galera'}{'status'}{$gstatus} = $myvar{$gstatus};\n    }\n    infoprint \"GCache is using \"\n      . hr_bytes_rnd( get_wsrep_option('gcache.mem_size') );\n\n    infoprint \"CPU cores detected : \" . (cpu_cores);\n    infoprint \"wsrep_slave_threads: \" . get_wsrep_option('wsrep_slave_threads');\n\n    if (   get_wsrep_option('wsrep_slave_threads') > ( (cpu_cores) * 4 )\n        or get_wsrep_option('wsrep_slave_threads') < ( (cpu_cores) * 2 ) )\n    {\n        badprint\n\"wsrep_slave_threads is not equal to 2, 3 or 4 times the number of CPU(s)\";\n        push @adjvars, \"wsrep_slave_threads = \" . 
( (cpu_cores) * 4 );\n    }\n    else {\n        goodprint\n\"wsrep_slave_threads is equal to 2, 3 or 4 times the number of CPU(s)\";\n    }\n\n    if ( get_wsrep_option('wsrep_slave_threads') > 1 ) {\n        infoprint\n          \"wsrep parallel slave can cause frequent inconsistency crash.\";\n        push @adjvars,\n\"Set wsrep_slave_threads to 1 in case of HA_ERR_FOUND_DUPP_KEY crash on slave\";\n\n        # check options for parallel slave\n        if ( get_wsrep_option('wsrep_slave_FK_checks') eq \"OFF\" ) {\n            badprint \"wsrep_slave_FK_checks is off with parallel slave\";\n            push @adjvars,\n              \"wsrep_slave_FK_checks should be ON when using parallel slave\";\n        }\n\n        # wsrep_slave_UK_checks seems useless in MySQL source code\n        if ( $myvar{'innodb_autoinc_lock_mode'} != 2 ) {\n            badprint\n              \"innodb_autoinc_lock_mode is incorrect with parallel slave\";\n            push @adjvars,\n              \"innodb_autoinc_lock_mode should be 2 when using parallel slave\";\n        }\n    }\n\n    if ( get_wsrep_option('gcs.fc_limit') != $myvar{'wsrep_slave_threads'} * 5 )\n    {\n        badprint \"gcs.fc_limit should be equal to 5 * wsrep_slave_threads (=\"\n          . ( $myvar{'wsrep_slave_threads'} * 5 ) . \")\";\n        push @adjvars, \"gcs.fc_limit= wsrep_slave_threads * 5 (=\"\n          . ( $myvar{'wsrep_slave_threads'} * 5 ) . \")\";\n    }\n    else {\n        goodprint \"gcs.fc_limit is equal to 5 * wsrep_slave_threads ( =\"\n          . get_wsrep_option('gcs.fc_limit') . \")\";\n    }\n\n    if ( get_wsrep_option('gcs.fc_factor') != 0.8 ) {\n        badprint \"gcs.fc_factor should be equal to 0.8 (=\"\n          . get_wsrep_option('gcs.fc_factor') . 
\")\";\n        push @adjvars, \"gcs.fc_factor=0.8\";\n    }\n    else {\n        goodprint \"gcs.fc_factor is equal to 0.8\";\n    }\n    if ( get_wsrep_option('wsrep_flow_control_paused') > 0.02 ) {\n        badprint \"Fraction of time node pause flow control > 0.02\";\n    }\n    else {\n        goodprint\n\"Flow control fraction seems to be OK (wsrep_flow_control_paused <= 0.02)\";\n    }\n\n    if ( $myvar{'binlog_format'} ne 'ROW' ) {\n        badprint \"Binlog format should be in ROW mode.\";\n        push @adjvars, \"binlog_format = ROW\";\n    }\n    else {\n        goodprint \"Binlog format is in ROW mode.\";\n    }\n    if ( $myvar{'innodb_flush_log_at_trx_commit'} != 0 ) {\n        badprint \"InnoDB flush log at each commit should be disabled.\";\n        push @adjvars, \"innodb_flush_log_at_trx_commit = 0\";\n    }\n    else {\n        goodprint \"InnoDB flush log at each commit is disabled for Galera.\";\n    }\n\n    infoprint \"Read consistency mode :\" . $myvar{'wsrep_causal_reads'};\n\n    if ( defined( $myvar{'wsrep_cluster_name'} )\n        and $myvar{'wsrep_on'} eq \"ON\" )\n    {\n        goodprint \"Galera WsREP is enabled.\";\n        if ( defined( $myvar{'wsrep_cluster_address'} )\n            and trim(\"$myvar{'wsrep_cluster_address'}\") ne \"\" )\n        {\n            goodprint \"Galera Cluster address is defined: \"\n              . $myvar{'wsrep_cluster_address'};\n            my @NodesTmp = split /,/, $myvar{'wsrep_cluster_address'};\n            my $nbNodes  = @NodesTmp;\n            infoprint \"There are $nbNodes nodes in wsrep_cluster_address\";\n            my $nbNodesSize = trim( $mystat{'wsrep_cluster_size'} );\n            if ( $nbNodesSize == 3 or $nbNodesSize == 5 ) {\n                goodprint \"There are $nbNodesSize nodes in wsrep_cluster_size.\";\n            }\n            else {\n                badprint\n\"There are $nbNodesSize nodes in wsrep_cluster_size. 
Prefer a 3 or 5 node architecture.\";\n                push @generalrec, \"Prefer a 3 or 5 node architecture.\";\n            }\n\n            # wsrep_cluster_address doesn't include garbd nodes\n            if ( $nbNodes > $nbNodesSize ) {\n                badprint\n\"Not all cluster nodes are detected: wsrep_cluster_size is less than the node count in wsrep_cluster_address\";\n            }\n            else {\n                goodprint \"All cluster nodes detected.\";\n            }\n        }\n        else {\n            badprint \"Galera Cluster address is undefined\";\n            push @adjvars,\n              \"set up wsrep_cluster_address variable for Galera replication\";\n        }\n        if ( defined( $myvar{'wsrep_cluster_name'} )\n            and trim( $myvar{'wsrep_cluster_name'} ) ne \"\" )\n        {\n            goodprint \"Galera Cluster name is defined: \"\n              . $myvar{'wsrep_cluster_name'};\n        }\n        else {\n            badprint \"Galera Cluster name is undefined\";\n            push @adjvars,\n              \"set up wsrep_cluster_name variable for Galera replication\";\n        }\n        if ( defined( $myvar{'wsrep_node_name'} )\n            and trim( $myvar{'wsrep_node_name'} ) ne \"\" )\n        {\n            goodprint \"Galera Node name is defined: \"\n              . 
$myvar{'wsrep_node_name'};\n        }\n        else {\n            badprint \"Galera node name is undefined\";\n            push @adjvars,\n              \"set up wsrep_node_name variable for Galera replication\";\n        }\n        if ( trim( $myvar{'wsrep_notify_cmd'} ) ne \"\" ) {\n            goodprint \"Galera Notify command is defined.\";\n        }\n        else {\n            badprint \"Galera Notify command is not defined.\";\n            push( @adjvars,\n                \"set up parameter wsrep_notify_cmd to be notified\" );\n        }\n        if (    trim( $myvar{'wsrep_sst_method'} ) !~ \"^xtrabackup.*\"\n            and trim( $myvar{'wsrep_sst_method'} ) !~ \"^mariabackup\" )\n        {\n            badprint \"Galera SST method is not xtrabackup based.\";\n            push( @adjvars,\n\"set wsrep_sst_method to an xtrabackup-based method\"\n            );\n        }\n        else {\n            goodprint \"SST Method is based on xtrabackup.\";\n        }\n        if (\n            (\n                defined( $myvar{'wsrep_OSU_method'} )\n                && trim( $myvar{'wsrep_OSU_method'} ) eq \"TOI\"\n            )\n            || ( defined( $myvar{'wsrep_osu_method'} )\n                && trim( $myvar{'wsrep_osu_method'} ) eq \"TOI\" )\n          )\n        {\n            goodprint \"TOI is the default mode for upgrades.\";\n        }\n        else {\n            badprint \"Schema upgrades are not replicated automatically\";\n            push( @adjvars, \"set up parameter wsrep_OSU_method to TOI\" );\n        }\n        infoprint \"Max WsRep message: \"\n          . 
hr_bytes( $myvar{'wsrep_max_ws_size'} );\n    }\n    else {\n        badprint \"Galera WsREP is disabled\";\n    }\n\n    if ( defined( $mystat{'wsrep_connected'} )\n        and $mystat{'wsrep_connected'} eq \"ON\" )\n    {\n        goodprint \"Node is connected\";\n    }\n    else {\n        badprint \"Node is disconnected\";\n    }\n    if ( defined( $mystat{'wsrep_ready'} ) and $mystat{'wsrep_ready'} eq \"ON\" )\n    {\n        goodprint \"Node is ready\";\n    }\n    else {\n        badprint \"Node is not ready\";\n    }\n    infoprint \"Cluster status: \" . $mystat{'wsrep_cluster_status'};\n    if ( defined( $mystat{'wsrep_cluster_status'} )\n        and $mystat{'wsrep_cluster_status'} eq \"Primary\" )\n    {\n        goodprint \"Galera cluster is consistent and ready for operations\";\n    }\n    else {\n        badprint \"Cluster is not consistent or not ready\";\n    }\n    if ( $mystat{'wsrep_local_state_uuid'} eq\n        $mystat{'wsrep_cluster_state_uuid'} )\n    {\n        goodprint \"Node and whole cluster are at the same level: \"\n          . $mystat{'wsrep_cluster_state_uuid'};\n    }\n    else {\n        badprint \"Node and whole cluster are not at the same level\";\n        infoprint \"Node    state uuid: \" . $mystat{'wsrep_local_state_uuid'};\n        infoprint \"Cluster state uuid: \" . $mystat{'wsrep_cluster_state_uuid'};\n    }\n    if ( $mystat{'wsrep_local_state_comment'} eq 'Synced' ) {\n        goodprint \"Node is synced with whole cluster.\";\n    }\n    else {\n        badprint \"Node is not synced\";\n        infoprint \"Node state: \" . $mystat{'wsrep_local_state_comment'};\n    }\n    if ( $mystat{'wsrep_local_cert_failures'} == 0 ) {\n        goodprint \"No certification failures detected.\";\n    }\n    else {\n        badprint \"There are \"\n          . $mystat{'wsrep_local_cert_failures'}\n          . 
\" certification failure(s)detected.\";\n    }\n\n    for my $key ( keys %mystat ) {\n        if ( $key =~ /wsrep_|galera/i ) {\n            debugprint \"WSREP: $key = $mystat{$key}\";\n        }\n    }\n\n    #debugprint Dumper get_wsrep_options() if $opt{debug};\n}\n\n# Recommendations for InnoDB\nsub mysql_innodb {\n    subheaderprint \"InnoDB Metrics\";\n\n    # InnoDB\n    unless ( defined $myvar{'have_innodb'}\n        && $myvar{'have_innodb'} eq \"YES\" )\n    {\n        infoprint \"InnoDB is disabled.\";\n        if ( mysql_version_ge( 5, 5 ) ) {\n            my $defengine = 'InnoDB';\n            $defengine = $myvar{'default_storage_engine'}\n              if defined( $myvar{'default_storage_engine'} );\n            badprint\n\"InnoDB Storage engine is disabled. $defengine is the default storage engine\"\n              if $defengine eq 'InnoDB';\n            infoprint\n\"InnoDB Storage engine is disabled. $defengine is the default storage engine\"\n              if $defengine ne 'InnoDB';\n        }\n        return;\n    }\n    infoprint \"InnoDB is enabled.\";\n    if ( !defined $enginestats{'InnoDB'} ) {\n        if ( $opt{skipsize} eq 1 ) {\n            infoprint \"Skipped due to --skipsize option\";\n            return;\n        }\n        badprint \"No tables are Innodb\";\n        $enginestats{'InnoDB'} = 0;\n    }\n\n    if ( $opt{buffers} ne 0 ) {\n        infoprint \"InnoDB Buffers\";\n        if ( defined $myvar{'innodb_buffer_pool_size'} ) {\n            infoprint \" +-- InnoDB Buffer Pool: \"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} ) . \"\";\n        }\n        if ( defined $myvar{'innodb_buffer_pool_instances'} ) {\n            infoprint \" +-- InnoDB Buffer Pool Instances: \"\n              . $myvar{'innodb_buffer_pool_instances'} . \"\";\n        }\n\n        if ( defined $myvar{'innodb_buffer_pool_chunk_size'} ) {\n            infoprint \" +-- InnoDB Buffer Pool Chunk Size: \"\n              . 
hr_bytes( $myvar{'innodb_buffer_pool_chunk_size'} ) . \"\";\n        }\n        if ( defined $myvar{'innodb_additional_mem_pool_size'} ) {\n            infoprint \" +-- InnoDB Additional Mem Pool: \"\n              . hr_bytes( $myvar{'innodb_additional_mem_pool_size'} ) . \"\";\n        }\n        if ( defined $myvar{'innodb_redo_log_capacity'} ) {\n            infoprint \" +-- InnoDB Redo Log Capacity: \"\n              . hr_bytes( $myvar{'innodb_redo_log_capacity'} );\n        }\n        else {\n            if ( defined $myvar{'innodb_log_file_size'} ) {\n                infoprint \" +-- InnoDB Log File Size: \"\n                  . hr_bytes( $myvar{'innodb_log_file_size'} );\n            }\n            if ( defined $myvar{'innodb_log_files_in_group'} ) {\n                infoprint \" +-- InnoDB Log File In Group: \"\n                  . $myvar{'innodb_log_files_in_group'};\n                infoprint \" +-- InnoDB Total Log File Size: \"\n                  . hr_bytes( $myvar{'innodb_log_files_in_group'} *\n                      $myvar{'innodb_log_file_size'} )\n                  . \"(\"\n                  . $mycalc{'innodb_log_size_pct'}\n                  . \" % of buffer pool)\";\n            }\n            else {\n                infoprint \" +-- InnoDB Total Log File Size: \"\n                  . hr_bytes( $myvar{'innodb_log_file_size'} ) . \"(\"\n                  . $mycalc{'innodb_log_size_pct'}\n                  . \" % of buffer pool)\";\n            }\n        }\n        if ( defined $myvar{'innodb_log_buffer_size'} ) {\n            infoprint \" +-- InnoDB Log Buffer: \"\n              . hr_bytes( $myvar{'innodb_log_buffer_size'} );\n        }\n        if ( defined $mystat{'Innodb_buffer_pool_pages_free'} ) {\n            infoprint \" +-- InnoDB Buffer Free: \"\n              . hr_bytes( $mystat{'Innodb_buffer_pool_pages_free'} ) . 
\"\";\n        }\n        if ( defined $mystat{'Innodb_buffer_pool_pages_total'} ) {\n            infoprint \" +-- InnoDB Buffer Used: \"\n              . hr_bytes( $mystat{'Innodb_buffer_pool_pages_total'} ) . \"\";\n        }\n    }\n\n    if ( defined $myvar{'innodb_thread_concurrency'} ) {\n        infoprint \"InnoDB Thread Concurrency: \"\n          . $myvar{'innodb_thread_concurrency'};\n    }\n\n    # InnoDB Buffer Pool Size\n    if ( $myvar{'innodb_file_per_table'} eq \"ON\" ) {\n        goodprint \"InnoDB File per table is activated\";\n    }\n    else {\n        badprint \"InnoDB File per table is not activated\";\n        push( @adjvars, \"innodb_file_per_table=ON\" );\n    }\n\n    # InnoDB Buffer Pool Size\n    if ( $arch == 32 && $myvar{'innodb_buffer_pool_size'} > 4294967295 ) {\n        badprint\n          \"InnoDB Buffer Pool size limit reached for 32 bits architecture: (\"\n          . hr_bytes(4294967295) . \" )\";\n        push( @adjvars,\n                \"limit innodb_buffer_pool_size under \"\n              . hr_bytes(4294967295)\n              . \" for 32 bits architecture\" );\n    }\n    if ( $arch == 32 && $myvar{'innodb_buffer_pool_size'} < 4294967295 ) {\n        goodprint \"InnoDB Buffer Pool size ( \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n          . \" ) under limit for 32 bits architecture: (\"\n          . hr_bytes(4294967295) . \")\";\n    }\n    if (   $arch == 64\n        && $myvar{'innodb_buffer_pool_size'} > 18446744073709551615 )\n    {\n        badprint \"InnoDB Buffer Pool size limit(\"\n          . hr_bytes(18446744073709551615)\n          . \") reached for 64 bits architecture\";\n        push( @adjvars,\n                \"limit innodb_buffer_pool_size under \"\n              . hr_bytes(18446744073709551615)\n              . 
\" for 64 bits architecture\" );\n    }\n\n    if (   $arch == 64\n        && $myvar{'innodb_buffer_pool_size'} < 18446744073709551615 )\n    {\n        goodprint \"InnoDB Buffer Pool size ( \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n          . \" ) under limit for 64 bits architecture: (\"\n          . hr_bytes(18446744073709551615) . \" )\";\n    }\n    if ( $myvar{'innodb_buffer_pool_size'} > $enginestats{'InnoDB'} ) {\n        goodprint \"InnoDB buffer pool / data size: \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} ) . \" / \"\n          . hr_bytes( $enginestats{'InnoDB'} ) . \"\";\n    }\n    else {\n        badprint \"InnoDB buffer pool / data size: \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} ) . \" / \"\n          . hr_bytes( $enginestats{'InnoDB'} ) . \"\";\n        push( @adjvars,\n                \"innodb_buffer_pool_size (>= \"\n              . hr_bytes( $enginestats{'InnoDB'} )\n              . \") if possible.\" );\n    }\n\n  # select  round( 100* sum(allocated)/( select VARIABLE_VALUE\n  #                                  FROM performance_schema.global_variables\n  #                              where VARIABLE_NAME='innodb_buffer_pool_size' )\n  # ,2) as \"PCT ALLOC/BUFFER POOL\"\n  #from sys.x$innodb_buffer_stats_by_table;\n\n    if ( $mycalc{innodb_buffer_alloc_pct} < 80 ) {\n        badprint \"Ratio Buffer Pool allocated / Buffer Pool Size: \"\n          . $mycalc{'innodb_buffer_alloc_pct'} . '%';\n    }\n    else {\n        goodprint \"Ratio Buffer Pool allocated / Buffer Pool Size: \"\n          . $mycalc{'innodb_buffer_alloc_pct'} . '%';\n    }\n    if (   $mycalc{'innodb_log_size_pct'} < 20\n        or $mycalc{'innodb_log_size_pct'} > 30 )\n    {\n        if ( defined $myvar{'innodb_redo_log_capacity'} ) {\n            badprint\n              \"Ratio InnoDB redo log capacity / InnoDB Buffer pool size (\"\n              . $mycalc{'innodb_log_size_pct'} . \"%): \"\n              . 
hr_bytes( $myvar{'innodb_redo_log_capacity'} ) . \" / \"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n            push( @adjvars,\n                    \"innodb_redo_log_capacity should be (=\"\n                  . hr_bytes_rnd( $myvar{'innodb_buffer_pool_size'} / 4 )\n                  . \") if possible, so InnoDB Redo log Capacity equals 25% of buffer pool size.\"\n            );\n            push( @generalrec,\n\"Be careful, increasing innodb_redo_log_capacity means higher crash recovery mean time\"\n            );\n        }\n        else {\n            badprint \"Ratio InnoDB log file size / InnoDB Buffer pool size (\"\n              . $mycalc{'innodb_log_size_pct'} . \"%): \"\n              . hr_bytes( $myvar{'innodb_log_file_size'} ) . \" * \"\n              . $myvar{'innodb_log_files_in_group'} . \" / \"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n            push(\n                @adjvars,\n                \"innodb_log_file_size should be (=\"\n                  . hr_bytes_rnd(\n                    $myvar{'innodb_buffer_pool_size'} /\n                      $myvar{'innodb_log_files_in_group'} / 4\n                  )\n                  . \") if possible, so InnoDB total log file size equals 25% of buffer pool size.\"\n            );\n            push( @generalrec,\n\"Be careful, increasing innodb_log_file_size / innodb_log_files_in_group means higher crash recovery mean time\"\n            );\n        }\n        if ( mysql_version_le( 5, 6, 2 ) ) {\n            push( @generalrec,\n\"For MySQL 5.6.2 and lower, total innodb_log_file_size should have a ceiling of (4096MB / log files in group) - 1MB.\"\n            );\n        }\n\n    }\n    else {\n        if ( defined $myvar{'innodb_redo_log_capacity'} ) {\n            goodprint\n              \"Ratio InnoDB Redo Log Capacity / InnoDB Buffer pool size: \"\n              . 
hr_bytes( $myvar{'innodb_redo_log_capacity'} ) . \"/\"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n        }\n        else {\n            push( @generalrec,\n\"Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU\"\n            );\n            goodprint \"Ratio InnoDB log file size / InnoDB Buffer pool size: \"\n              . hr_bytes( $myvar{'innodb_log_file_size'} ) . \" * \"\n              . $myvar{'innodb_log_files_in_group'} . \"/\"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n        }\n    }\n\n    # InnoDB Buffer Pool Instances (MySQL 5.6.6+)\n    if ( not mysql_version_ge( 10, 4 )\n        and defined( $myvar{'innodb_buffer_pool_instances'} ) )\n    {\n\n        # Bad Value if > 64\n        if ( $myvar{'innodb_buffer_pool_instances'} > 64 ) {\n            badprint \"InnoDB buffer pool instances: \"\n              . $myvar{'innodb_buffer_pool_instances'} . \"\";\n            push( @adjvars, \"innodb_buffer_pool_instances (<= 64)\" );\n        }\n\n        # InnoDB Buffer Pool Size > 1Go\n        if ( $myvar{'innodb_buffer_pool_size'} > 1024 * 1024 * 1024 ) {\n\n# InnoDB Buffer Pool Size / 1Go = InnoDB Buffer Pool Instances limited to 64 max.\n\n            #  InnoDB Buffer Pool Size > 64Go\n            my $max_innodb_buffer_pool_instances =\n              int( $myvar{'innodb_buffer_pool_size'} / ( 1024 * 1024 * 1024 ) );\n            $max_innodb_buffer_pool_instances = 64\n              if ( $max_innodb_buffer_pool_instances > 64 );\n\n            if ( $myvar{'innodb_buffer_pool_instances'} !=\n                $max_innodb_buffer_pool_instances )\n            {\n                badprint \"InnoDB buffer pool instances: \"\n                  . $myvar{'innodb_buffer_pool_instances'} . 
\"\";\n                push( @adjvars,\n                        \"innodb_buffer_pool_instances(=\"\n                      . $max_innodb_buffer_pool_instances\n                      . \")\" );\n            }\n            else {\n                goodprint \"InnoDB buffer pool instances: \"\n                  . $myvar{'innodb_buffer_pool_instances'} . \"\";\n            }\n\n            # InnoDB Buffer Pool Size < 1Go\n        }\n        else {\n            if ( $myvar{'innodb_buffer_pool_instances'} != 1 ) {\n                badprint\n\"InnoDB buffer pool <= 1G and Innodb_buffer_pool_instances(!=1).\";\n                push( @adjvars, \"innodb_buffer_pool_instances (=1)\" );\n            }\n            else {\n                goodprint \"InnoDB buffer pool instances: \"\n                  . $myvar{'innodb_buffer_pool_instances'} . \"\";\n            }\n        }\n    }\n\n    # InnoDB Used Buffer Pool Size vs CHUNK size\n    if ( !defined( $myvar{'innodb_buffer_pool_chunk_size'} ) ) {\n        infoprint\n          \"InnoDB Buffer Pool Chunk Size not used or defined in your version\";\n    }\n    else {\n        infoprint \"Number of InnoDB Buffer Pool Chunk: \"\n          . int( $myvar{'innodb_buffer_pool_size'} ) /\n          int( $myvar{'innodb_buffer_pool_chunk_size'} ) . \" for \"\n          . $myvar{'innodb_buffer_pool_instances'}\n          . 
\" Buffer Pool Instance(s)\";\n\n        if (\n            int( $myvar{'innodb_buffer_pool_size'} ) % (\n                int( $myvar{'innodb_buffer_pool_chunk_size'} ) *\n                  int( $myvar{'innodb_buffer_pool_instances'} )\n            ) eq 0\n          )\n        {\n            goodprint\n\"Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances\";\n        }\n        else {\n            badprint\n\"Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances\";\n\n#push( @adjvars, \"Adjust innodb_buffer_pool_instances, innodb_buffer_pool_chunk_size with innodb_buffer_pool_size\" );\n            push( @adjvars,\n\"innodb_buffer_pool_size must always be equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances\"\n            );\n        }\n    }\n\n    # InnoDB Read efficiency\n    if ( defined $mycalc{'pct_read_efficiency'}\n        && $mycalc{'pct_read_efficiency'} < 90 )\n    {\n        badprint \"InnoDB Read buffer efficiency: \"\n          . $mycalc{'pct_read_efficiency'} . \"% (\"\n          . $mystat{'Innodb_buffer_pool_read_requests'}\n          . \" hits / \"\n          . ( $mystat{'Innodb_buffer_pool_reads'} +\n              $mystat{'Innodb_buffer_pool_read_requests'} )\n          . \" total)\";\n    }\n    else {\n        goodprint \"InnoDB Read buffer efficiency: \"\n          . $mycalc{'pct_read_efficiency'} . \"% (\"\n          . $mystat{'Innodb_buffer_pool_read_requests'}\n          . \" hits / \"\n          . ( $mystat{'Innodb_buffer_pool_reads'} +\n              $mystat{'Innodb_buffer_pool_read_requests'} )\n          . \" total)\";\n    }\n\n    # InnoDB Write efficiency\n    if ( defined $mycalc{'pct_write_efficiency'}\n        && $mycalc{'pct_write_efficiency'} < 90 )\n    {\n        badprint \"InnoDB Write Log efficiency: \"\n          . abs( $mycalc{'pct_write_efficiency'} ) . \"% (\"\n          . 
abs( $mystat{'Innodb_log_write_requests'} -\n              $mystat{'Innodb_log_writes'} )\n          . \" hits / \"\n          . $mystat{'Innodb_log_write_requests'}\n          . \" total)\";\n        push( @adjvars,\n                \"innodb_log_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'innodb_log_buffer_size'} )\n              . \")\" );\n    }\n    else {\n        goodprint \"InnoDB Write Log efficiency: \"\n          . $mycalc{'pct_write_efficiency'} . \"% (\"\n          . ( $mystat{'Innodb_log_write_requests'} -\n              $mystat{'Innodb_log_writes'} )\n          . \" hits / \"\n          . $mystat{'Innodb_log_write_requests'}\n          . \" total)\";\n    }\n\n    # InnoDB Log Waits\n    $mystat{'Innodb_log_waits_computed'} = 0;\n\n    if (    defined( $mystat{'Innodb_log_waits'} )\n        and defined( $mystat{'Innodb_log_writes'} )\n        and $mystat{'Innodb_log_writes'} > 0.000001 )\n    {\n        $mystat{'Innodb_log_waits_computed'} =\n          $mystat{'Innodb_log_waits'} / $mystat{'Innodb_log_writes'};\n    }\n    else {\n        undef $mystat{'Innodb_log_waits_computed'};\n    }\n\n    if ( defined $mystat{'Innodb_log_waits_computed'}\n        && $mystat{'Innodb_log_waits_computed'} > 0.000001 )\n    {\n        badprint \"InnoDB log waits: \"\n          . percentage( $mystat{'Innodb_log_waits'},\n            $mystat{'Innodb_log_writes'} )\n          . \"% (\"\n          . $mystat{'Innodb_log_waits'}\n          . \" waits / \"\n          . $mystat{'Innodb_log_writes'}\n          . \" writes)\";\n        push( @adjvars,\n                \"innodb_log_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'innodb_log_buffer_size'} )\n              . \")\" );\n    }\n    else {\n        goodprint \"InnoDB log waits: \"\n          . percentage( $mystat{'Innodb_log_waits'},\n            $mystat{'Innodb_log_writes'} )\n          . \"% (\"\n          . $mystat{'Innodb_log_waits'}\n          . \" waits / \"\n          . 
$mystat{'Innodb_log_writes'}\n          . \" writes)\";\n    }\n    $result{'Calculations'} = {%mycalc};\n}\n\nsub check_metadata_perf {\n    subheaderprint \"Analysis Performance Metrics\";\n    if ( defined $myvar{'innodb_stats_on_metadata'} ) {\n        infoprint \"innodb_stats_on_metadata: \"\n          . $myvar{'innodb_stats_on_metadata'};\n        if ( $myvar{'innodb_stats_on_metadata'} eq 'ON' ) {\n            badprint \"Stats are updated when querying INFORMATION_SCHEMA.\";\n            push @adjvars, \"SET innodb_stats_on_metadata = OFF\";\n\n            #Disabling innodb_stats_on_metadata\n            select_one(\"SET GLOBAL innodb_stats_on_metadata = OFF;\");\n            return 1;\n        }\n    }\n    goodprint \"No stat updates when querying INFORMATION_SCHEMA.\";\n    return 0;\n}\n\n# Recommendations for Database metrics\nsub mysql_databases {\n    return if ( $opt{dbstat} == 0 );\n\n    subheaderprint \"Database Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Database metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n\n    @dblist = select_array(\n\"SELECT SCHEMA_NAME FROM information_schema.SCHEMATA WHERE SCHEMA_NAME NOT IN ( 'mysql', 'performance_schema', 'information_schema', 'sys' );\"\n    );\n    infoprint \"There are \" . scalar(@dblist) . \" Database(s).\";\n    my @totaldbinfo = split /\\s/,\n      select_one(\n\"SELECT SUM(TABLE_ROWS), SUM(DATA_LENGTH), SUM(INDEX_LENGTH), SUM(DATA_LENGTH+INDEX_LENGTH), COUNT(TABLE_NAME), COUNT(DISTINCT(TABLE_COLLATION)), COUNT(DISTINCT(ENGINE)) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n      );\n    infoprint \"All User Databases:\";\n    infoprint \" +-- TABLE : \"\n      . 
select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='BASE TABLE' AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')\"\n      ) . \"\";\n    infoprint \" +-- VIEW  : \"\n      . select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='VIEW' AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')\"\n      ) . \"\";\n    infoprint \" +-- INDEX : \"\n      . select_one(\n\"SELECT count(distinct(concat(TABLE_NAME, TABLE_SCHEMA, INDEX_NAME))) from information_schema.STATISTICS WHERE TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')\"\n      ) . \"\";\n\n    infoprint \" +-- CHARS : \"\n      . ( $totaldbinfo[5] eq 'NULL' ? 0 : $totaldbinfo[5] ) . \" (\"\n      . (\n        join \", \",\n        select_array(\n\"select distinct(CHARACTER_SET_NAME) from information_schema.columns WHERE CHARACTER_SET_NAME IS NOT NULL AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n        )\n      ) . \")\";\n    infoprint \" +-- COLLA : \"\n      . ( $totaldbinfo[5] eq 'NULL' ? 0 : $totaldbinfo[5] ) . \" (\"\n      . (\n        join \", \",\n        select_array(\n\"SELECT DISTINCT(TABLE_COLLATION) FROM information_schema.TABLES WHERE TABLE_COLLATION IS NOT NULL AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n        )\n      ) . \")\";\n    infoprint \" +-- ROWS  : \"\n      . ( $totaldbinfo[0] eq 'NULL' ? 0 : $totaldbinfo[0] ) . \"\";\n    infoprint \" +-- DATA  : \"\n      . hr_bytes( $totaldbinfo[1] ) . \"(\"\n      . percentage( $totaldbinfo[1], $totaldbinfo[3] ) . \"%)\";\n    infoprint \" +-- INDEX : \"\n      . hr_bytes( $totaldbinfo[2] ) . \"(\"\n      . percentage( $totaldbinfo[2], $totaldbinfo[3] ) . \"%)\";\n    infoprint \" +-- SIZE  : \" . hr_bytes( $totaldbinfo[3] ) . \"\";\n    infoprint \" +-- ENGINE: \"\n      . ( $totaldbinfo[6] eq 'NULL' ? 
0 : $totaldbinfo[6] ) . \" (\"\n      . (\n        join \", \",\n        select_array(\n\"SELECT DISTINCT(ENGINE) FROM information_schema.TABLES WHERE ENGINE IS NOT NULL AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n        )\n      ) . \")\";\n\n    $result{'Databases'}{'All databases'}{'Rows'} =\n      ( $totaldbinfo[0] eq 'NULL' ? 0 : $totaldbinfo[0] );\n    $result{'Databases'}{'All databases'}{'Data Size'} = $totaldbinfo[1];\n    $result{'Databases'}{'All databases'}{'Data Pct'} =\n      percentage( $totaldbinfo[1], $totaldbinfo[3] ) . \"%\";\n    $result{'Databases'}{'All databases'}{'Index Size'} = $totaldbinfo[2];\n    $result{'Databases'}{'All databases'}{'Index Pct'} =\n      percentage( $totaldbinfo[2], $totaldbinfo[3] ) . \"%\";\n    $result{'Databases'}{'All databases'}{'Total Size'} = $totaldbinfo[3];\n    print \"\\n\" unless ( $opt{'silent'} or $opt{'json'} );\n    my $nbViews  = 0;\n    my $nbTables = 0;\n\n    foreach (@dblist) {\n        my @dbinfo = split /\\s/,\n          select_one(\n\"SELECT TABLE_SCHEMA, SUM(TABLE_ROWS), SUM(DATA_LENGTH), SUM(INDEX_LENGTH), SUM(DATA_LENGTH+INDEX_LENGTH), COUNT(DISTINCT ENGINE), COUNT(TABLE_NAME), COUNT(DISTINCT(TABLE_COLLATION)), COUNT(DISTINCT(ENGINE)) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' GROUP BY TABLE_SCHEMA ORDER BY TABLE_SCHEMA\"\n          );\n        next unless defined $dbinfo[0];\n\n        infoprint \"Database: \" . $dbinfo[0] . \"\";\n        $nbTables = select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='BASE TABLE' AND TABLE_SCHEMA='$_'\"\n        );\n        infoprint \" +-- TABLE : $nbTables\";\n        infoprint \" +-- VIEW  : \"\n          . select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='VIEW' AND TABLE_SCHEMA='$_'\"\n          ) . \"\";\n        infoprint \" +-- INDEX : \"\n          . 
select_one(\n\"SELECT count(distinct(concat(TABLE_NAME, TABLE_SCHEMA, INDEX_NAME))) from information_schema.STATISTICS WHERE TABLE_SCHEMA='$_'\"\n          ) . \"\";\n        infoprint \" +-- CHARS : \"\n          . ( $totaldbinfo[5] eq 'NULL' ? 0 : $totaldbinfo[5] ) . \" (\"\n          . (\n            join \", \",\n            select_array(\n\"select distinct(CHARACTER_SET_NAME) from information_schema.columns WHERE CHARACTER_SET_NAME IS NOT NULL AND TABLE_SCHEMA='$_';\"\n            )\n          ) . \")\";\n        infoprint \" +-- COLLA : \"\n          . ( $dbinfo[7] eq 'NULL' ? 0 : $dbinfo[7] ) . \" (\"\n          . (\n            join \", \",\n            select_array(\n\"SELECT DISTINCT(TABLE_COLLATION) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' AND TABLE_COLLATION IS NOT NULL;\"\n            )\n          ) . \")\";\n        infoprint \" +-- ROWS  : \"\n          . ( !defined( $dbinfo[1] ) or $dbinfo[1] eq 'NULL' ? 0 : $dbinfo[1] )\n          . \"\";\n        infoprint \" +-- DATA  : \"\n          . hr_bytes( $dbinfo[2] ) . \"(\"\n          . percentage( $dbinfo[2], $dbinfo[4] ) . \"%)\";\n        infoprint \" +-- INDEX : \"\n          . hr_bytes( $dbinfo[3] ) . \"(\"\n          . percentage( $dbinfo[3], $dbinfo[4] ) . \"%)\";\n        infoprint \" +-- TOTAL : \" . hr_bytes( $dbinfo[4] ) . \"\";\n        infoprint \" +-- ENGINE: \"\n          . ( $dbinfo[8] eq 'NULL' ? 0 : $dbinfo[8] ) . \" (\"\n          . (\n            join \", \",\n            select_array(\n\"SELECT DISTINCT(ENGINE) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' AND ENGINE IS NOT NULL\"\n            )\n          ) . \")\";\n\n        foreach my $eng (\n            select_array(\n\"SELECT DISTINCT(ENGINE) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' AND ENGINE IS NOT NULL\"\n            )\n          )\n        {\n            infoprint \" +-- ENGINE $eng : \"\n              . 
select_one(\n\"SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$dbinfo[0]' AND ENGINE='$eng'\"\n              ) . \" TABLE(s)\";\n        }\n\n        if ( $nbTables == 0 ) {\n            badprint \" No table in $dbinfo[0] database\";\n            next;\n        }\n        badprint \"Index size is larger than data size for $dbinfo[0] \\n\"\n          if ( $dbinfo[2] ne 'NULL' )\n          and ( $dbinfo[3] ne 'NULL' )\n          and ( $dbinfo[2] < $dbinfo[3] );\n        if ( $dbinfo[5] > 1 and $nbTables > 0 ) {\n            badprint \"There are \"\n              . $dbinfo[5]\n              . \" storage engines. Be careful. \\n\";\n            push @generalrec,\n\"Select one storage engine (InnoDB is a good choice) for all tables in $dbinfo[0] database ($dbinfo[5] engines detected)\";\n        }\n        $result{'Databases'}{ $dbinfo[0] }{'Rows'}       = $dbinfo[1];\n        $result{'Databases'}{ $dbinfo[0] }{'Tables'}     = $dbinfo[6];\n        $result{'Databases'}{ $dbinfo[0] }{'Collations'} = $dbinfo[7];\n        $result{'Databases'}{ $dbinfo[0] }{'Data Size'}  = $dbinfo[2];\n        $result{'Databases'}{ $dbinfo[0] }{'Data Pct'} =\n          percentage( $dbinfo[2], $dbinfo[4] ) . \"%\";\n        $result{'Databases'}{ $dbinfo[0] }{'Index Size'} = $dbinfo[3];\n        $result{'Databases'}{ $dbinfo[0] }{'Index Pct'} =\n          percentage( $dbinfo[3], $dbinfo[4] ) . \"%\";\n        $result{'Databases'}{ $dbinfo[0] }{'Total Size'} = $dbinfo[4];\n\n        if ( $dbinfo[7] > 1 ) {\n            badprint $dbinfo[7]\n              . \" different collations for database \"\n              . $dbinfo[0];\n            push( @generalrec,\n                \"Check all table collations are identical for all tables in \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[7]\n              . \" collation for \"\n              . $dbinfo[0]\n              . 
\" database.\";\n        }\n        if ( $dbinfo[8] > 1 ) {\n            badprint $dbinfo[8]\n              . \" different engines for database \"\n              . $dbinfo[0];\n            push( @generalrec,\n                    \"Check all table engines are identical for all tables in \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[8] . \" engine for \" . $dbinfo[0] . \" database.\";\n        }\n\n        my @distinct_column_charset = select_array(\n\"select DISTINCT(CHARACTER_SET_NAME) from information_schema.COLUMNS where CHARACTER_SET_NAME IS NOT NULL AND TABLE_SCHEMA ='$_' AND CHARACTER_SET_NAME IS NOT NULL\"\n        );\n        infoprint \"Charsets for $dbinfo[0] database table column: \"\n          . join( ', ', @distinct_column_charset );\n        if ( scalar(@distinct_column_charset) > 1 ) {\n            badprint $dbinfo[0]\n              . \" table column(s) has several charsets defined for all text like column(s).\";\n            push( @generalrec,\n                    \"Limit charset for column to one charset if possible for \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[0]\n              . \" table column(s) has same charset defined for all text like column(s).\";\n        }\n\n        my @distinct_column_collation = select_array(\n\"select DISTINCT(COLLATION_NAME) from information_schema.COLUMNS where COLLATION_NAME IS NOT NULL AND TABLE_SCHEMA ='$_' AND COLLATION_NAME IS NOT NULL\"\n        );\n        infoprint \"Collations for $dbinfo[0] database table column: \"\n          . join( ', ', @distinct_column_collation );\n        if ( scalar(@distinct_column_collation) > 1 ) {\n            badprint $dbinfo[0]\n              . 
\" table column(s) has several collations defined for all text like column(s).\";\n            push( @generalrec,\n                \"Limit collations for column to one collation if possible for \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[0]\n              . \" table column(s) has same collation defined for all text like column(s).\";\n        }\n    }\n}\n\n# Recommendations for database columns\nsub mysql_tables {\n    return if ( $opt{tbstat} == 0 );\n\n    subheaderprint \"Table Column Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Table column metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n    if ( mysql_version_ge(8) and not mysql_version_eq(10) ) {\n        infoprint\n\"MySQL and Percona version 8.0 and greater have removed PROCEDURE ANALYSE feature\";\n        $opt{colstat} = 0;\n        infoprint \"Disabling colstat parameter\";\n\n    }\n\n    infoprint(\"Dumpdir: $opt{dumpdir}\");\n\n    # Store all information schema in dumpdir if defined\n    if ( defined $opt{dumpdir} and -d \"$opt{dumpdir}\" ) {\n        for my $info_s_table (\n            select_array('use information_schema;show tables;') )\n        {\n            infoprint \"Dumping $info_s_table into $opt{dumpdir}\";\n            select_csv_file(\n                \"$opt{dumpdir}/ifs_${info_s_table}.csv\",\n                \"select * from information_schema.$info_s_table\"\n            );\n        }\n\n        #exit 0 if ( $opt{stop} == 1 );\n    }\n    foreach ( select_user_dbs() ) {\n        my $dbname = $_;\n        next unless defined $_;\n        infoprint \"Database: \" . $_ . 
\"\";\n        my @dbtable = select_array(\n\"SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA='$dbname' AND TABLE_TYPE='BASE TABLE' ORDER BY TABLE_NAME\"\n        );\n        foreach (@dbtable) {\n            my $tbname = $_;\n            infoprint \" +-- TABLE: $tbname\";\n            infoprint \"     +-- TYPE: \"\n              . select_one(\n\"SELECT ENGINE FROM information_schema.tables where TABLE_schema='$dbname' AND TABLE_NAME='$tbname'\"\n              );\n\n            my $selIdxReq = <<\"ENDSQL\";\n      SELECT  index_name AS idxname, \n              GROUP_CONCAT(column_name ORDER BY seq_in_index) AS cols, \n              INDEX_TYPE as type\n              FROM information_schema.statistics\n              WHERE INDEX_SCHEMA='$dbname'\n              AND TABLE_NAME='$tbname'\n              GROUP BY idxname, type\nENDSQL\n            my @tbidx = select_array($selIdxReq);\n            my $found = 0;\n            foreach my $idx (@tbidx) {\n                my @info = split /\\s/, $idx;\n                next if $info[0] eq 'NULL';\n                infoprint\n                  \"     +-- Index $info[0] - Cols: $info[1] - Type: $info[2]\";\n                $found++;\n            }\n            if ( $found == 0 ) {\n                badprint(\"Table $dbname.$tbname has no index defined\");\n                push @generalrec,\n                  \"Add at least a primary key on table $dbname.$tbname\";\n            }\n            my @tbcol = select_array(\n\"SELECT COLUMN_NAME FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$dbname' AND TABLE_NAME='$tbname'\"\n            );\n            foreach (@tbcol) {\n                my $ctype = select_one(\n\"SELECT COLUMN_TYPE FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$dbname' AND TABLE_NAME='$tbname' AND COLUMN_NAME='$_' \"\n                );\n                my $isnull = select_one(\n\"SELECT IS_NULLABLE FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$dbname' AND TABLE_NAME='$tbname' AND 
COLUMN_NAME='$_' \"\n                );\n\n                my $current_type =\n                  uc($ctype) . ( $isnull eq 'NO' ? \" NOT NULL\" : \" NULL\" );\n                my $optimal_type = '';\n                infoprint \"     +-- Column $tbname.$_: $current_type\";\n                if ( $opt{colstat} == 1 ) {\n                    $optimal_type = select_str_g( \"Optimal_fieldtype\",\n\"SELECT \\\\`$_\\\\` FROM \\\\`$dbname\\\\`.\\\\`$tbname\\\\` PROCEDURE ANALYSE(100000)\"\n                      )\n                      unless ( mysql_version_ge(8)\n                        and not mysql_version_eq(10) );\n                }\n                if ( $optimal_type eq '' ) {\n\n                    #infoprint \"     +-- Current Fieldtype: $current_type\";\n\n                    #infoprint \"      Optimal Fieldtype: Not available\";\n                }\n                elsif ( $current_type ne $optimal_type\n                    and $current_type !~ /.*DATETIME.*/\n                    and $current_type !~ /.*TIMESTAMP.*/ )\n                {\n                    infoprint \"     +-- Current Fieldtype: $current_type\";\n                    if ( $optimal_type =~ /.*ENUM\\(.*/ ) {\n                        $optimal_type = \"ENUM( ... 
)\";\n                    }\n                    infoprint \"     +-- Optimal Fieldtype: $optimal_type \";\n                    if ( $optimal_type !~ /.*ENUM\\(.*/ ) {\n                        badprint\n\"Consider changing type for column $_ in table $dbname.$tbname\";\n                        push( @generalrec,\n\"ALTER TABLE \\`$dbname\\`.\\`$tbname\\` MODIFY \\`$_\\` $optimal_type;\"\n                        );\n                    }\n                }\n                else {\n                    goodprint \"$dbname.$tbname ($_) type: $current_type\";\n                }\n            }\n        }\n    }\n}\n\n# Recommendations for Indexes metrics\nsub mysql_indexes {\n    return if ( $opt{idxstat} == 0 );\n\n    subheaderprint \"Indexes Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Index metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n\n#    unless ( mysql_version_ge( 5, 6 ) ) {\n#        infoprint\n#\"Skip Index metrics from information schema due to erroneous information provided in this version\";\n#        return;\n#    }\n    my $selIdxReq = <<'ENDSQL';\nSELECT\n  CONCAT(t.TABLE_SCHEMA, '.', t.TABLE_NAME) AS 'table', \n  CONCAT(s.INDEX_NAME, '(', s.COLUMN_NAME, ')') AS 'index'\n , s.SEQ_IN_INDEX AS 'seq'\n , s2.max_columns AS 'maxcol'\n , s.CARDINALITY  AS 'card'\n , t.TABLE_ROWS   AS 'est_rows'\n , INDEX_TYPE as type\n , ROUND(((s.CARDINALITY / IFNULL(t.TABLE_ROWS, 0.01)) * 100), 2) AS 'sel'\nFROM INFORMATION_SCHEMA.STATISTICS s\n INNER JOIN INFORMATION_SCHEMA.TABLES t\n  ON s.TABLE_SCHEMA = t.TABLE_SCHEMA\n  AND s.TABLE_NAME = t.TABLE_NAME\n INNER JOIN (\n  SELECT\n     TABLE_SCHEMA\n   , TABLE_NAME\n   , INDEX_NAME\n   , MAX(SEQ_IN_INDEX) AS max_columns\n  FROM INFORMATION_SCHEMA.STATISTICS\n  WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')\n  AND INDEX_TYPE <> 'FULLTEXT'\n  GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME\n ) AS s2\n ON 
s.TABLE_SCHEMA = s2.TABLE_SCHEMA\n AND s.TABLE_NAME = s2.TABLE_NAME\n AND s.INDEX_NAME = s2.INDEX_NAME\nWHERE t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')\nAND t.TABLE_ROWS > 10\nAND s.CARDINALITY IS NOT NULL\nAND (s.CARDINALITY / IFNULL(t.TABLE_ROWS, 0.01)) < 8.00\nORDER BY sel\nLIMIT 10;\nENDSQL\n    my @idxinfo = select_array($selIdxReq);\n    infoprint \"Worst selectivity indexes:\";\n    foreach (@idxinfo) {\n        debugprint \"$_\";\n        my @info = split /\\s/;\n        infoprint \"Index: \" . $info[1] . \"\";\n\n        infoprint \" +-- COLUMN      : \" . $info[0] . \"\";\n        infoprint \" +-- NB SEQS     : \" . $info[2] . \" sequence(s)\";\n        infoprint \" +-- NB COLS     : \" . $info[3] . \" column(s)\";\n        infoprint \" +-- CARDINALITY : \" . $info[4] . \" distinct values\";\n        infoprint \" +-- NB ROWS     : \" . $info[5] . \" rows\";\n        infoprint \" +-- TYPE        : \" . $info[6];\n        infoprint \" +-- SELECTIVITY : \" . $info[7] . \"%\";\n\n        $result{'Indexes'}{ $info[1] }{'Column'}           = $info[0];\n        $result{'Indexes'}{ $info[1] }{'Sequence number'}  = $info[2];\n        $result{'Indexes'}{ $info[1] }{'Number of column'} = $info[3];\n        $result{'Indexes'}{ $info[1] }{'Cardinality'}      = $info[4];\n        $result{'Indexes'}{ $info[1] }{'Row number'}       = $info[5];\n        $result{'Indexes'}{ $info[1] }{'Index Type'}       = $info[6];\n        $result{'Indexes'}{ $info[1] }{'Selectivity'}      = $info[7];\n        if ( $info[7] < 25 ) {\n            badprint \"$info[1] has a low selectivity\";\n        }\n    }\n    infoprint \"Indexes per database:\";\n    foreach my $dbname ( select_user_dbs() ) {\n        infoprint \"Database: \" . $dbname . 
\"\";\n        $selIdxReq = <<\"ENDSQL\";\n        SELECT  concat(table_name, '.', index_name) AS idxname,\n                GROUP_CONCAT(column_name ORDER BY seq_in_index) AS cols,\n                SUM(CARDINALITY) as card,\n                INDEX_TYPE as type\n        FROM information_schema.statistics\n        WHERE INDEX_SCHEMA='$dbname'\n        AND index_name IS NOT NULL\n        GROUP BY table_name, idxname, type\nENDSQL\n        my $found = 0;\n        foreach my $idxinfo ( select_array($selIdxReq) ) {\n            my @info = split /\\s/, $idxinfo;\n            next if $info[0] eq 'NULL';\n            infoprint \" +-- INDEX      : \" . $info[0];\n            infoprint \" +-- COLUMNS    : \" . $info[1];\n            infoprint \" +-- CARDINALITY: \" . $info[2];\n            infoprint \" +-- TYPE        : \" . $info[4] if defined $info[4];\n            infoprint \" +-- COMMENT     : \" . $info[5] if defined $info[5];\n            $found++;\n        }\n        my $nbTables = select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='BASE TABLE' AND TABLE_SCHEMA='$dbname'\"\n        );\n        badprint \"No index found for $dbname database\"\n          if $found == 0 and $nbTables > 1;\n        push @generalrec, \"Add indexes on tables from $dbname database\"\n          if $found == 0 and $nbTables > 1;\n    }\n    return\n      unless ( defined( $myvar{'performance_schema'} )\n        and $myvar{'performance_schema'} eq 'ON' );\n\n    $selIdxReq = <<'ENDSQL';\nSELECT CONCAT(object_schema, '.', object_name) AS 'table', index_name\nFROM performance_schema.table_io_waits_summary_by_index_usage\nWHERE index_name IS NOT NULL\nAND count_star = 0\nAND index_name <> 'PRIMARY'\nAND object_schema NOT IN ('mysql', 'performance_schema', 'information_schema')\nORDER BY count_star, object_schema, object_name;\nENDSQL\n    @idxinfo = select_array($selIdxReq);\n    infoprint \"Unused indexes:\";\n    push( @generalrec, \"Remove unused indexes.\" ) if ( 
scalar(@idxinfo) > 0 );\n    foreach (@idxinfo) {\n        debugprint \"$_\";\n        my @info = split /\\s/;\n        badprint \"Index: $info[1] on $info[0] is not used.\";\n        push @{ $result{'Indexes'}{'Unused Indexes'} },\n          $info[0] . \".\" . $info[1];\n    }\n}\n\nsub mysql_views {\n    subheaderprint \"Views Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Views metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n}\n\nsub mysql_routines {\n    subheaderprint \"Routines Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Routines metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n}\n\nsub mysql_triggers {\n    subheaderprint \"Triggers Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Trigger metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n}\n\n# Take the two recommendation arrays and display them at the end of the output\nsub make_recommendations {\n    $result{'Recommendations'} = \\@generalrec;\n    $result{'AdjustVariables'} = \\@adjvars;\n    subheaderprint \"Recommendations\";\n    if ( @generalrec > 0 ) {\n        prettyprint \"General recommendations:\";\n        foreach (@generalrec) { prettyprint \"    \" . $_ . \"\"; }\n    }\n    if ( @adjvars > 0 ) {\n        prettyprint \"Variables to adjust:\";\n        if ( $mycalc{'pct_max_physical_memory'} > 90 ) {\n            prettyprint\n              \"  *** MySQL's maximum memory usage is dangerously high ***\\n\"\n              . \"  *** Add RAM before increasing MySQL buffer variables ***\";\n        }\n        foreach (@adjvars) { prettyprint \"    \" . $_ . 
\"\"; }\n    }\n    if ( @generalrec == 0 && @adjvars == 0 ) {\n        prettyprint \"No additional performance recommendations are available.\";\n    }\n}\n\nsub close_outputfile {\n    close($fh) if defined($fh);\n}\n\nsub headerprint {\n    prettyprint \" >>  MySQLTuner $tunerversion\\n\"\n      . \"\\t * Jean-Marie Renouard <jmrenouard\\@gmail.com>\\n\"\n      . \"\\t * Major Hayden <major\\@mhtx.net>\\n\"\n      . \" >>  Bug reports, feature requests, and downloads at http://mysqltuner.pl/\\n\"\n      . \" >>  Run with '--help' for additional options and output filtering\";\n}\n\nsub string2file {\n    my $filename = shift;\n    my $content  = shift;\n    open my $fh, q(>), $filename\n      or die\n\"Unable to open $filename in write mode. Please check permissions for this file or directory\";\n    print $fh $content if defined($content);\n    close $fh;\n    debugprint $content if ( $opt{'debug'} );\n}\n\nsub file2array {\n    my $filename = shift;\n    debugprint \"* reading $filename\" if ( $opt{'debug'} );\n    my $fh;\n    open( $fh, q(<), \"$filename\" )\n      or die \"Couldn't open $filename for reading: $!\\n\";\n    my @lines = <$fh>;\n    close($fh);\n    return @lines;\n}\n\nsub file2string {\n    return join( '', file2array(@_) );\n}\n\nmy $templateModel;\nif ( $opt{'template'} ne 0 ) {\n    $templateModel = file2string( $opt{'template'} );\n}\nelse {\n    # DEFAULT REPORT TEMPLATE\n    $templateModel = <<'END_TEMPLATE';\n<!DOCTYPE html>\n<html>\n<head>\n  <title>MySQLTuner Report</title>\n  <meta charset=\"UTF-8\">\n</head>\n<body>\n\n<h1>Result output</h1>\n<pre>\n{$data}\n</pre>\n\n</body>\n</html>\nEND_TEMPLATE\n}\n\nsub dump_result {\n\n    #debugprint Dumper( \\%result ) if ( $opt{'debug'} );\n    debugprint \"HTML REPORT: $opt{'reportfile'}\";\n\n    if ( $opt{'reportfile'} ne 0 ) {\n        eval { require Text::Template };\n        eval { require JSON };\n        if ($@) {\n            badprint \"Text::Template Module is needed.\";\n       
     die \"Text::Template Module is needed.\";\n        }\n\n        my $json      = JSON->new->allow_nonref;\n        my $json_text = $json->pretty->encode( \\%result );\n        my %vars      = (\n            'data'  => \\%result,\n            'debug' => $json_text,\n        );\n        my $template;\n        {\n            no warnings 'once';\n            $template = Text::Template->new(\n                TYPE       => 'STRING',\n                PREPEND    => q{;},\n                SOURCE     => $templateModel,\n                DELIMITERS => [ '[%', '%]' ]\n            ) or die \"Couldn't construct template: $Text::Template::ERROR\";\n        }\n\n        open my $fh, q(>), $opt{'reportfile'}\n          or die\n\"Unable to open $opt{'reportfile'} in write mode. please check permissions for this file or directory\";\n        $template->fill_in( HASH => \\%vars, OUTPUT => $fh );\n        close $fh;\n    }\n\n    if ( $opt{'json'} ne 0 ) {\n        eval { require JSON };\n        if ($@) {\n            print \"$bad JSON Module is needed.\\n\";\n            return 1;\n        }\n\n        my $json = JSON->new->allow_nonref;\n        print $json->utf8(1)->pretty( ( $opt{'prettyjson'} ? 1 : 0 ) )\n          ->encode( \\%result );\n\n        if ( $opt{'outputfile'} ne 0 ) {\n            unlink $opt{'outputfile'} if ( -e $opt{'outputfile'} );\n            open my $fh, q(>), $opt{'outputfile'}\n              or die\n\"Unable to open $opt{'outputfile'} in write mode. please check permissions for this file or directory\";\n            print $fh $json->utf8(1)->pretty( ( $opt{'prettyjson'} ? 
1 : 0 ) )\n              ->encode( \\%result );\n            close $fh;\n        }\n    }\n}\n\nsub which {\n    my $prog_name   = shift;\n    my $path_string = shift;\n    my @path_array  = split /:/, $ENV{'PATH'};\n\n    for my $path (@path_array) {\n        return \"$path/$prog_name\" if ( -x \"$path/$prog_name\" );\n    }\n\n    return 0;\n}\n\n# ---------------------------------------------------------------------------\n# BEGIN 'MAIN'\n# ---------------------------------------------------------------------------\nheaderprint;    # Header Print\n\nvalidate_tuner_version;    # Check latest version\nmysql_setup;               # Gotta login first\ndebugprint \"MySQL FINAL Client : $mysqlcmd $mysqllogin\";\ndebugprint \"MySQL Admin FINAL Client : $mysqladmincmd $mysqllogin\";\n\n#exit(0);\nos_setup;                  # Set up some OS variables\nget_all_vars;              # Toss variables/status into hashes\nget_tuning_info;           # Get information about the tuning connection\ncalculations;              # Calculate everything we need\ncheck_architecture;        # Suggest 64-bit upgrade\ncheck_storage_engines;     # Show enabled storage engines\nif ( $opt{'feature'} ne '' ) {\n    subheaderprint \"See FEATURES.md for more information\";\n    no strict 'refs';\n    for my $feature ( split /,/, $opt{'feature'} ) {\n        subheaderprint \"Running feature: $opt{'feature'}\";\n        $feature->();\n    }\n    make_recommendations;\n    exit(0);\n}\nvalidate_mysql_version;    # Check current MySQL version\n\nsystem_recommendations;    # Avoid too many services on the same host\nlog_file_recommendations;  # check log file content\n\ncheck_metadata_perf;      # Show parameter impacting performance during analysis\nmysql_databases;          # Show information about databases\nmysql_tables;             # Show information about table column\nmysql_table_structures;   # Show information about table structures\n\nmysql_indexes;            # Show information about 
indexes\nmysql_views;              # Show information about views\nmysql_triggers;           # Show information about triggers\nmysql_routines;           # Show information about routines\nsecurity_recommendations; # Display some security recommendations\ncve_recommendations;      # Display related CVE\n\nmysql_stats;              # Print the server stats\nmysql_pfs;                # Print Performance schema info\n\nmariadb_threadpool;       # Print MariaDB ThreadPool stats\nmysql_myisam;             # Print MyISAM stats\nmysql_innodb;             # Print InnoDB stats\nmariadb_aria;             # Print MariaDB Aria stats\nmariadb_tokudb;           # Print MariaDB Tokudb stats\nmariadb_xtradb;           # Print MariaDB XtraDB stats\n\n#mariadb_rockdb;           # Print MariaDB RockDB stats\n#mariadb_spider;           # Print MariaDB Spider stats\n#mariadb_connect;          # Print MariaDB Connect stats\nmariadb_galera;            # Print MariaDB Galera Cluster stats\nget_replication_status;    # Print replication info\nmake_recommendations;      # Make recommendations based on stats\ndump_result;               # Dump result if debug is on\nclose_outputfile;          # Close reportfile if needed\n\n# ---------------------------------------------------------------------------\n# END 'MAIN'\n# ---------------------------------------------------------------------------\n1;\n\n__END__\n\n=pod\n\n=encoding UTF-8\n\n=head1 NAME\n\n MySQLTuner 2.5.2 - MySQL High Performance Tuning Script\n\n=head1 IMPORTANT USAGE GUIDELINES\n\nTo run the script with the default options, run the script without arguments\nAllow MySQL server to run for at least 24-48 hours before trusting suggestions\nSome routines may require root level privileges (script will provide warnings)\nYou must provide the remote server's total memory when connecting to other servers\n\n=head1 CONNECTION AND AUTHENTICATION\n\n --host <hostname>           Connect to a remote host to perform tests (default: 
localhost)\n --socket <socket>           Use a different socket for a local connection\n --port <port>               Port to use for connection (default: 3306)\n --protocol tcp              Force TCP connection instead of socket\n --user <username>           Username to use for authentication\n --userenv <envvar>          Name of env variable which contains username to use for authentication\n --pass <password>           Password to use for authentication\n --passenv <envvar>          Name of env variable which contains password to use for authentication\n --ssl-ca <path>             Path to public key\n --mysqladmin <path>         Path to a custom mysqladmin executable\n --mysqlcmd <path>           Path to a custom mysql executable\n --defaults-file <path>      Path to a custom .my.cnf\n --defaults-extra-file <path>      Path to an extra custom config file\n --server-log <path>         Path to explicit log file (error_log)\n\n=head1 PERFORMANCE AND REPORTING OPTIONS\n\n --skipsize                  Don't enumerate tables and their types/sizes (default: on)\n                             (Recommended for servers with many tables)\n --json                      Print result as JSON string\n --prettyjson                Print result as JSON formatted string\n --skippassword              Don't perform checks on user passwords (default: off)\n --checkversion              Check for updates to MySQLTuner (default: don't check)\n --updateversion             Check for updates to MySQLTuner and update when newer version is available (default: don't check)\n --forcemem <size>           Amount of RAM installed in megabytes\n --forceswap <size>          Amount of swap memory configured in megabytes\n --passwordfile <path>       Path to a password file list (one password by line)\n --cvefile <path>            CVE File for vulnerability checks\n --outputfile <path>         Path to a output txt file\n --reportfile <path>         Path to a report txt file\n --template   <path>         
Path to a template file\n --dumpdir <path>            Path to a directory where to dump information files\n --feature <feature>         Run a specific feature (see FEATURES section)\n=head1 OUTPUT OPTIONS\n\n --silent                    Don't output anything on screen\n --verbose                   Print out all options (default: no verbose, dbstat, idxstat, sysstat, tbstat, pfstat)\n --color                     Print output in color\n --nocolor                   Don't print output in color\n --nogood                    Remove OK responses\n --nobad                     Remove negative/suggestion responses\n --noinfo                    Remove informational responses\n --debug                     Print debug information\n --noprocess                 Consider no other process is running\n --dbstat                    Print database information\n --nodbstat                  Don't print database information\n --tbstat                    Print table information\n --notbstat                  Don't print table information\n --colstat                   Print column information\n --nocolstat                 Don't print column information\n --idxstat                   Print index information\n --noidxstat                 Don't print index information\n --nomyisamstat              Don't print MyIsam information\n --sysstat                   Print system information\n --nosysstat                 Don't print system information\n --nostructstat              Don't print table structures information\n --pfstat                    Print Performance schema\n --nopfstat                  Don't print Performance schema\n --bannedports               Ports banned separated by comma (,)\n --server-log                Define specific error_log to analyze\n --maxportallowed            Number of open ports allowable on this host\n --buffers                   Print global and per-thread buffer values\n\n=head1 PERLDOC\n\nYou can find documentation for this module with the perldoc command.\n\n  
perldoc mysqltuner\n\n=head2 INTERNALS\n\nL<https://github.com/major/MySQLTuner-perl/blob/master/INTERNALS.md>\n\n Internal documentation\n\n=head1 AUTHORS\n\nMajor Hayden - major@mhtx.net\nJean-Marie Renouard - jmrenouard@gmail.com\n\n=head1 CONTRIBUTORS\n\n=over 4\n\n=item *\n\nMatthew Montgomery\n\n=item *\n\nPaul Kehrer\n\n=item *\n\nDave Burgess\n\n=item *\n\nJonathan Hinds\n\n=item *\n\nMike Jackson\n\n=item *\n\nNils Breunese\n\n=item *\n\nShawn Ashlee\n\n=item *\n\nLuuk Vosslamber\n\n=item *\n\nVille Skytta\n\n=item *\n\nTrent Hornibrook\n\n=item *\n\nJason Gill\n\n=item *\n\nMark Imbriaco\n\n=item *\n\nGreg Eden\n\n=item *\n\nAubin Galinotti\n\n=item *\n\nGiovanni Bechis\n\n=item *\n\nBill Bradford\n\n=item *\n\nRyan Novosielski\n\n=item *\n\nMichael Scheidell\n\n=item *\n\nBlair Christensen\n\n=item *\n\nHans du Plooy\n\n=item *\n\nVictor Trac\n\n=item *\n\nEverett Barnes\n\n=item *\n\nTom Krouper\n\n=item *\n\nGary Barrueto\n\n=item *\n\nSimon Greenaway\n\n=item *\n\nAdam Stein\n\n=item *\n\nIsart Montane\n\n=item *\n\nBaptiste M.\n\n=item *\n\nCole Turner\n\n=item *\n\nMajor Hayden\n\n=item *\n\nJoe Ashcraft\n\n=item *\n\nJean-Marie Renouard\n\n=item *\n\nStephan GroBberndt\n\n=item *\n\nChristian Loos\n\n=item *\n\nLong Radix\n\n=back\n\n=head1 SUPPORT\n\n\nBug reports, feature requests, and downloads at http://mysqltuner.pl/\n\nBug tracker can be found at https://github.com/major/MySQLTuner-perl/issues\n\nMaintained by Jean-Marie Renouard (jmrenouard\\@gmail.com) - Licensed under GPL\n\n=head1 SOURCE CODE\n\nL<https://github.com/major/MySQLTuner-perl>\n\n git clone https://github.com/major/MySQLTuner-perl.git\n\n=head1 COPYRIGHT AND LICENSE\n\nCopyright (C) 2006-2023 Major Hayden - major@mhtx.net\n# Copyright (C) 2015-2023 Jean-Marie Renouard - jmrenouard@gmail.com\n\nFor the latest updates, please visit http://mysqltuner.pl/\n\nGit repository available at https://github.com/major/MySQLTuner-perl\n\nThis program is free software: you can redistribute 
it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n See the GNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program.  If not, see <https://www.gnu.org/licenses/>.\n\n=cut\n\n# Local variables:\n# indent-tabs-mode: t\n# cperl-indent-level: 8\n# perl-indent-level: 8\n# End:\n"
  },
  {
    "path": "aegir/helpers/mysqltuner8",
    "content": "#!/usr/bin/env perl\n# mysqltuner.pl - Version 2.6.0\n# High Performance MySQL Tuning Script\n# Copyright (C) 2015-2023 Jean-Marie Renouard - jmrenouard@gmail.com\n# Copyright (C) 2006-2023 Major Hayden - major@mhtx.net\n\n# For the latest updates, please visit http://mysqltuner.pl/\n# Git repository available at https://github.com/major/MySQLTuner-perl\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program.  If not, see <https://www.gnu.org/licenses/>.\n#\n# This project would not be possible without help from:\n#   Matthew Montgomery     Paul Kehrer          Dave Burgess\n#   Jonathan Hinds         Mike Jackson         Nils Breunese\n#   Shawn Ashlee           Luuk Vosslamber      Ville Skytta\n#   Trent Hornibrook       Jason Gill           Mark Imbriaco\n#   Greg Eden              Aubin Galinotti      Giovanni Bechis\n#   Bill Bradford          Ryan Novosielski     Michael Scheidell\n#   Blair Christensen      Hans du Plooy        Victor Trac\n#   Everett Barnes         Tom Krouper          Gary Barrueto\n#   Simon Greenaway        Adam Stein           Isart Montane\n#   Baptiste M.            
Cole Turner          Major Hayden\n#   Joe Ashcraft           Jean-Marie Renouard  Christian Loos\n#   Julien Francoz         Daniel Black         Long Radix\n#\n# Inspired by Matthew Montgomery's tuning-primer.sh script:\n# http://www.day32.com/MySQL/\n#\npackage main;\n\nuse 5.005;\nuse strict;\nuse warnings;\n\nuse diagnostics;\nuse File::Spec;\nuse Getopt::Long;\nuse Pod::Usage;\nuse File::Basename;\nuse Cwd 'abs_path';\n\n#use Data::Dumper;\n#$Data::Dumper::Pair = \" : \";\n\n# for which()\n#use Env;\n\n# Set up a few variables for use in the script\nmy $tunerversion = \"2.6.0\";\nmy ( @adjvars, @generalrec );\n\n# Set defaults\nmy %opt = (\n    \"silent\"              => 0,\n    \"nobad\"               => 0,\n    \"nogood\"              => 0,\n    \"noinfo\"              => 0,\n    \"debug\"               => 0,\n    \"nocolor\"             => ( !-t STDOUT ),\n    \"color\"               => ( -t STDOUT ),\n    \"forcemem\"            => 0,\n    \"forceswap\"           => 0,\n    \"host\"                => 0,\n    \"socket\"              => 0,\n    \"port\"                => 0,\n    \"user\"                => 0,\n    \"pass\"                => 0,\n    \"password\"            => 0,\n    \"ssl-ca\"              => 0,\n    \"skipsize\"            => 0,\n    \"checkversion\"        => 0,\n    \"updateversion\"       => 0,\n    \"buffers\"             => 0,\n    \"passwordfile\"        => 0,\n    \"bannedports\"         => '',\n    \"maxportallowed\"      => 0,\n    \"outputfile\"          => 0,\n    \"noprocess\"           => 0,\n    \"dbstat\"              => 0,\n    \"nodbstat\"            => 0,\n    \"server-log\"          => '',\n    \"tbstat\"              => 0,\n    \"notbstat\"            => 0,\n    \"colstat\"             => 0,\n    \"nocolstat\"           => 0,\n    \"idxstat\"             => 0,\n    \"noidxstat\"           => 0,\n    \"nomyisamstat\"        => 0,\n    \"nostructstat\"        => 0,\n    \"sysstat\"             => 0,\n    \"nosysstat\"      
     => 0,\n    \"pfstat\"              => 0,\n    \"nopfstat\"            => 0,\n    \"skippassword\"        => 0,\n    \"noask\"               => 0,\n    \"template\"            => 0,\n    \"json\"                => 0,\n    \"prettyjson\"          => 0,\n    \"reportfile\"          => 0,\n    \"verbose\"             => 0,\n    \"experimental\"        => 0,\n    \"nondedicated\"        => 0,\n    \"defaults-file\"       => '',\n    \"defaults-extra-file\" => '',\n    \"protocol\"            => '',\n    \"dumpdir\"             => '',\n    \"feature\"             => '',\n    \"dbgpattern\"          => '',\n    \"defaultarch\"         => 64\n);\n\n# Gather the options from the command line\nGetOptions(\n    \\%opt,                   'nobad',\n    'nogood',                'noinfo',\n    'debug',                 'nocolor',\n    'forcemem=i',            'forceswap=i',\n    'host=s',                'socket=s',\n    'port=i',                'user=s',\n    'pass=s',                'skipsize',\n    'checkversion',          'mysqladmin=s',\n    'mysqlcmd=s',            'help',\n    'buffers',               'skippassword',\n    'passwordfile=s',        'outputfile=s',\n    'silent',                'noask',\n    'json',                  'prettyjson',\n    'template=s',            'reportfile=s',\n    'cvefile=s',             'bannedports=s',\n    'updateversion',         'maxportallowed=s',\n    'verbose',               'password=s',\n    'passenv=s',             'userenv=s',\n    'defaults-file=s',       'ssl-ca=s',\n    'color',                 'noprocess',\n    'dbstat',                'nodbstat',\n    'tbstat',                'notbstat',\n    'colstat',               'nocolstat',\n    'sysstat',               'nosysstat',\n    'pfstat',                'nopfstat',\n    'idxstat',               'noidxstat',\n    'structstat',            'nostructstat',\n    'myisamstat',            'nomyisamstat',\n    'server-log=s',          'protocol=s',\n    'defaults-extra-file=s', 
'dumpdir=s',\n    'feature=s',             'dbgpattern=s',\n    'defaultarch=i',         'experimental',\n    'nondedicated'\n  )\n  or pod2usage(\n    -exitval  => 1,\n    -verbose  => 99,\n    -sections => [\n        \"NAME\",\n        \"IMPORTANT USAGE GUIDELINES\",\n        \"CONNECTION AND AUTHENTICATION\",\n        \"PERFORMANCE AND REPORTING OPTIONS\",\n        \"OUTPUT OPTIONS\"\n    ]\n  );\n\nif ( defined $opt{'help'} && $opt{'help'} == 1 ) {\n    pod2usage(\n        -exitval  => 0,\n        -verbose  => 99,\n        -sections => [\n            \"NAME\",\n            \"IMPORTANT USAGE GUIDELINES\",\n            \"CONNECTION AND AUTHENTICATION\",\n            \"PERFORMANCE AND REPORTING OPTIONS\",\n            \"OUTPUT OPTIONS\"\n        ]\n    );\n}\n\nmy $devnull = File::Spec->devnull();\nmy $basic_password_files =\n  ( $opt{passwordfile} eq \"0\" )\n  ? abs_path( dirname(__FILE__) ) . \"/basic_passwords.txt\"\n  : abs_path( $opt{passwordfile} );\n\n# Username from envvar\nif ( exists $opt{userenv} && exists $ENV{ $opt{userenv} } ) {\n    $opt{user} = $ENV{ $opt{userenv} };\n}\n\n# Related to password option\nif ( exists $opt{passenv} && exists $ENV{ $opt{passenv} } ) {\n    $opt{pass} = $ENV{ $opt{passenv} };\n}\n$opt{pass} = $opt{password} if ( $opt{pass} eq 0 and $opt{password} ne 0 );\n\nif ( $opt{dumpdir} ne '' ) {\n    $opt{dumpdir} = abs_path( $opt{dumpdir} );\n    if ( !-d $opt{dumpdir} ) {\n        mkdir $opt{dumpdir} or die \"Cannot create directory $opt{dumpdir}: $!\";\n    }\n}\n\n# for RPM distributions\n$basic_password_files = \"/usr/share/mysqltuner/basic_passwords.txt\"\n  unless -f \"$basic_password_files\";\n\n$opt{dbgpattern} = '.*' if ( $opt{dbgpattern} eq '' );\n\n# Activate debug variables\n#if ( $opt{debug} ne '' ) { $opt{debug} = 2; }\n# Activate experimental calculations and analysis\n#if ( $opt{experimental} ne '' ) { $opt{experimental} = 1; }\n\n# check if we need to enable verbose mode\nif ( $opt{feature} ne '' ) { 
$opt{verbose} = 1; }\nif ( $opt{verbose} ) {\n    $opt{checkversion} = 0;    # Check for updates to MySQLTuner\n    $opt{dbstat}       = 1;    # Print database information\n    $opt{tbstat}       = 1;    # Print database information\n    $opt{idxstat}      = 1;    # Print index information\n    $opt{sysstat}      = 1;    # Print index information\n    $opt{buffers}      = 1;    # Print global and per-thread buffer values\n    $opt{pfstat}       = 1;    # Print performance schema info.\n    $opt{structstat}   = 1;    # Print table structure information\n    $opt{myisamstat}   = 1;    # Print MyISAM table information\n\n    $opt{cvefile} = 'vulnerabilities.csv';    #CVE File for vulnerability checks\n}\n$opt{nocolor} = 1 if defined( $opt{outputfile} );\n$opt{tbstat}  = 0 if ( $opt{notbstat} == 1 );    # Don't print table information\n$opt{colstat} = 0 if ( $opt{nocolstat} == 1 );  # Don't print column information\n$opt{dbstat}  = 0 if ( $opt{nodbstat} == 1 ); # Don't print database information\n$opt{noprocess} = 0\n  if ( $opt{noprocess} == 1 );                # Don't print process information\n$opt{sysstat} = 0 if ( $opt{nosysstat} == 1 ); # Don't print sysstat information\n$opt{pfstat}  = 0\n  if ( $opt{nopfstat} == 1 );    # Don't print performance schema information\n$opt{idxstat} = 0 if ( $opt{noidxstat} == 1 );   # Don't print index information\n$opt{structstat} = 0\n  if ( not defined( $opt{structstat} ) or $opt{nostructstat} == 1 )\n  ;    # Don't print table struct information\n$opt{myisamstat} = 1\n  if ( not defined( $opt{myisamstat} ) );\n$opt{myisamstat} = 0\n  if ( $opt{nomyisamstat} == 1 );    # Don't print MyISAM table information\n\n# for RPM distributions\n$opt{cvefile} = \"/usr/share/mysqltuner/vulnerabilities.csv\"\n  unless ( defined $opt{cvefile} and -f \"$opt{cvefile}\" );\n$opt{cvefile} = '' unless -f \"$opt{cvefile}\";\n$opt{cvefile} = './vulnerabilities.csv' if -f './vulnerabilities.csv';\n\n$opt{'bannedports'} = '' unless defined( 
$opt{'bannedports'} );\nmy @banned_ports = split ',', $opt{'bannedports'};\n\n#\nmy $outputfile = undef;\n$outputfile = abs_path( $opt{outputfile} ) unless $opt{outputfile} eq \"0\";\n\nmy $fh = undef;\nopen( $fh, '>', $outputfile )\n  or die(\"Fail opening $outputfile\")\n  if defined($outputfile);\n$opt{nocolor} = 1 if defined($outputfile);\n$opt{nocolor} = 1 unless ( -t STDOUT );\n\n$opt{nocolor} = 0 if ( $opt{color} == 1 );\n\n# Setting up the colors for the print styles\nmy $me = `whoami`;\n$me =~ s/\\n//g;\nmy $good = ( $opt{nocolor} == 0 ) ? \"[\\e[0;32mOK\\e[0m]\"  : \"[OK]\";\nmy $bad  = ( $opt{nocolor} == 0 ) ? \"[\\e[0;31m!!\\e[0m]\"  : \"[!!]\";\nmy $info = ( $opt{nocolor} == 0 ) ? \"[\\e[0;34m--\\e[0m]\"  : \"[--]\";\nmy $deb  = ( $opt{nocolor} == 0 ) ? \"[\\e[0;31mDG\\e[0m]\"  : \"[DG]\";\nmy $cmd  = ( $opt{nocolor} == 0 ) ? \"\\e[1;32m[CMD]($me)\" : \"[CMD]($me)\";\nmy $end  = ( $opt{nocolor} == 0 ) ? \"\\e[0m\"              : \"\";\n\n# Maximum lines of log output to read from end\nmy $maxlines = 30000;\n\n# Checks for supported or EOL'ed MySQL versions\nmy ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro );\n\n# Database\nmy @dblist;\n\n# Super structure containing all information\nmy %result;\n$result{'MySQLTuner'}{'version'}  = $tunerversion;\n$result{'MySQLTuner'}{'datetime'} = `date '+%d-%m-%Y %H:%M:%S'`;\n$result{'MySQLTuner'}{'options'}  = \\%opt;\n\n# Functions that handle the print styles\nsub prettyprint {\n    print $_[0] . \"\\n\" unless ( $opt{'silent'} or $opt{'json'} );\n    print $fh $_[0] . \"\\n\" if defined($fh);\n}\n\nsub goodprint {\n    prettyprint $good. \" \" . $_[0] unless ( $opt{nogood} == 1 );\n}\n\nsub infoprint {\n    prettyprint $info. \" \" . $_[0] unless ( $opt{noinfo} == 1 );\n}\n\nsub badprint {\n    prettyprint $bad. \" \" . $_[0] unless ( $opt{nobad} == 1 );\n}\n\nsub debugprint {\n    prettyprint $deb. \" \" . $_[0] unless ( $opt{debug} == 0 );\n}\n\nsub redwrap {\n    return ( $opt{nocolor} == 0 ) ? 
\"\\e[0;31m\" . $_[0] . \"\\e[0m\" : $_[0];\n}\n\nsub greenwrap {\n    return ( $opt{nocolor} == 0 ) ? \"\\e[0;32m\" . $_[0] . \"\\e[0m\" : $_[0];\n}\n\nsub cmdprint {\n    prettyprint $cmd. \" \" . $_[0] . $end;\n}\n\nsub infoprintml {\n    for my $ln (@_) { $ln =~ s/\\n//g; infoprint \"\\t$ln\"; }\n}\n\nsub infoprintcmd {\n    cmdprint \"@_\";\n    infoprintml grep { $_ ne '' and $_ !~ /^\\s*$/ } `@_ 2>&1`;\n}\n\nsub subheaderprint {\n    my $tln = 100;\n    my $sln = 8;\n    my $ln  = length(\"@_\") + 2;\n\n    prettyprint \" \";\n    prettyprint \"-\" x $sln . \" @_ \" . \"-\" x ( $tln - $ln - $sln );\n}\n\nsub infoprinthcmd {\n    subheaderprint \"$_[0]\";\n    infoprintcmd \"$_[1]\";\n}\n\nsub is_remote() {\n    my $host = $opt{'host'};\n    return 0 if ( $host eq '' );\n    return 0 if ( $host eq 'localhost' );\n    return 0 if ( $host eq '127.0.0.1' );\n    return 1;\n}\n\nsub is_int {\n    return 0 unless defined $_[0];\n    my $str = $_[0];\n\n    #trim whitespace both sides\n    $str =~ s/^\\s+|\\s+$//g;\n\n    #Alternatively, to match any float-like numeric, use:\n    # m/^([+-]?)(?=\\d|\\.\\d)\\d*(\\.\\d*)?([Ee]([+-]?\\d+))?$/\n\n    #flatten to string and match dash or plus and one or more digits\n    if ( $str =~ /^(\\-|\\+)?\\d+?$/ ) {\n        return 1;\n    }\n    return 0;\n}\n\n# Calculates the number of physical cores considering HyperThreading\nsub cpu_cores {\n    if ( $^O eq 'linux' ) {\n        my $cntCPU =\n`awk -F: '/^core id/ && !P[\\$2] { CORES++; P[\\$2]=1 }; /^physical id/ && !N[\\$2] { CPUs++; N[\\$2]=1 };  END { print CPUs*CORES }' /proc/cpuinfo`;\n        chomp $cntCPU;\n        return ( $cntCPU == 0 ? 
`nproc` : $cntCPU );\n    }\n\n    if ( $^O eq 'freebsd' ) {\n        my $cntCPU = `sysctl -n kern.smp.cores`;\n        chomp $cntCPU;\n        return $cntCPU + 0;\n    }\n    return 0;\n}\n\n# Calculates the parameter passed in bytes, then rounds it to one decimal place\nsub hr_bytes {\n    my $num = shift;\n    return \"0B\" unless defined($num);\n    return \"0B\" if $num eq \"NULL\";\n    return \"0B\" if $num eq \"\";\n\n    if ( $num >= ( 1024**3 ) ) {    # GB\n        return sprintf( \"%.1f\", ( $num / ( 1024**3 ) ) ) . \"G\";\n    }\n    elsif ( $num >= ( 1024**2 ) ) {    # MB\n        return sprintf( \"%.1f\", ( $num / ( 1024**2 ) ) ) . \"M\";\n    }\n    elsif ( $num >= 1024 ) {           # KB\n        return sprintf( \"%.1f\", ( $num / 1024 ) ) . \"K\";\n    }\n    else {\n        return $num . \"B\";\n    }\n}\n\nsub hr_raw {\n    my $num = shift;\n    return \"0\" unless defined($num);\n    return \"0\" if $num eq \"NULL\";\n    if ( $num =~ /^(\\d+)G$/ ) {\n        return $1 * 1024 * 1024 * 1024;\n    }\n    if ( $num =~ /^(\\d+)M$/ ) {\n        return $1 * 1024 * 1024;\n    }\n    if ( $num =~ /^(\\d+)K$/ ) {\n        return $1 * 1024;\n    }\n    if ( $num =~ /^(\\d+)$/ ) {\n        return $1;\n    }\n    return $num;\n}\n\n# Calculates the parameter passed in bytes, then rounds it to the nearest integer\nsub hr_bytes_rnd {\n    my $num = shift;\n    return \"0B\" unless defined($num);\n    return \"0B\" if $num eq \"NULL\";\n\n    if ( $num >= ( 1024**3 ) ) {    # GB\n        return int( ( $num / ( 1024**3 ) ) ) . \"G\";\n    }\n    elsif ( $num >= ( 1024**2 ) ) {    # MB\n        return int( ( $num / ( 1024**2 ) ) ) . \"M\";\n    }\n    elsif ( $num >= 1024 ) {           # KB\n        return int( ( $num / 1024 ) ) . \"K\";\n    }\n    else {\n        return $num . 
\"B\";\n    }\n}\n\n# Calculates the parameter passed to the nearest power of 1000, then rounds it to the nearest integer\nsub hr_num {\n    my $num = shift;\n    if ( $num >= ( 1000**3 ) ) {       # Billions\n        return int( ( $num / ( 1000**3 ) ) ) . \"B\";\n    }\n    elsif ( $num >= ( 1000**2 ) ) {    # Millions\n        return int( ( $num / ( 1000**2 ) ) ) . \"M\";\n    }\n    elsif ( $num >= 1000 ) {           # Thousands\n        return int( ( $num / 1000 ) ) . \"K\";\n    }\n    else {\n        return $num;\n    }\n}\n\n# Calculate Percentage\nsub percentage {\n    my $value = shift;\n    my $total = shift;\n    $total = 0 unless defined $total;\n    $total = 0 if $total eq \"NULL\";\n    return 100, 00 if $total == 0;\n    return sprintf( \"%.2f\", ( $value * 100 / $total ) );\n}\n\n# Calculates uptime to display in a human-readable form\nsub pretty_uptime {\n    my $uptime  = shift;\n    my $seconds = $uptime % 60;\n    my $minutes = int( ( $uptime % 3600 ) / 60 );\n    my $hours   = int( ( $uptime % 86400 ) / (3600) );\n    my $days    = int( $uptime / (86400) );\n    my $uptimestring;\n    if ( $days > 0 ) {\n        $uptimestring = \"${days}d ${hours}h ${minutes}m ${seconds}s\";\n    }\n    elsif ( $hours > 0 ) {\n        $uptimestring = \"${hours}h ${minutes}m ${seconds}s\";\n    }\n    elsif ( $minutes > 0 ) {\n        $uptimestring = \"${minutes}m ${seconds}s\";\n    }\n    else {\n        $uptimestring = \"${seconds}s\";\n    }\n    return $uptimestring;\n}\n\n# Retrieves the memory installed on this machine\nmy ( $physical_memory, $swap_memory, $duflags, $xargsflags );\n\nsub memerror {\n    badprint\n\"Unable to determine total memory/swap; use '--forcemem' and '--forceswap'\";\n    exit 1;\n}\n\nsub os_setup {\n    my $os = `uname`;\n    $duflags    = ( $os =~ /Linux/ )        ? '-b' : '';\n    $xargsflags = ( $os =~ /Darwin|SunOS/ ) ? 
''   : '-r';\n    if ( $opt{'forcemem'} > 0 ) {\n        $physical_memory = $opt{'forcemem'} * 1048576;\n        infoprint \"Assuming $opt{'forcemem'} MB of physical memory\";\n        if ( $opt{'forceswap'} > 0 ) {\n            $swap_memory = $opt{'forceswap'} * 1048576;\n            infoprint \"Assuming $opt{'forceswap'} MB of swap space\";\n        }\n        else {\n            $swap_memory = 0;\n            badprint \"Assuming 0 MB of swap space (use --forceswap to specify)\";\n        }\n    }\n    else {\n        if ( $os =~ /Linux|CYGWIN/ ) {\n            $physical_memory =\n              `grep -i memtotal: /proc/meminfo | awk '{print \\$2}'`\n              or memerror;\n            $physical_memory *= 1024;\n\n            $swap_memory =\n              `grep -i swaptotal: /proc/meminfo | awk '{print \\$2}'`\n              or memerror;\n            $swap_memory *= 1024;\n        }\n        elsif ( $os =~ /Darwin/ ) {\n            $physical_memory = `sysctl -n hw.memsize` or memerror;\n            $swap_memory =\n              `sysctl -n vm.swapusage | awk '{print \\$3}' | sed 's/\\..*\\$//'`\n              or memerror;\n        }\n        elsif ( $os =~ /NetBSD|OpenBSD|FreeBSD/ ) {\n            $physical_memory = `sysctl -n hw.physmem` or memerror;\n            if ( $physical_memory < 0 ) {\n                $physical_memory = `sysctl -n hw.physmem64` or memerror;\n            }\n            $swap_memory =\n              `swapctl -l | grep '^/' | awk '{ s+= \\$2 } END { print s }'`\n              or memerror;\n        }\n        elsif ( $os =~ /BSD/ ) {\n            $physical_memory = `sysctl -n hw.realmem` or memerror;\n            $swap_memory =\n              `swapinfo | grep '^/' | awk '{ s+= \\$2 } END { print s }'`;\n        }\n        elsif ( $os =~ /SunOS/ ) {\n            $physical_memory =\n              `/usr/sbin/prtconf | grep Memory | cut -f 3 -d ' '`\n              or memerror;\n            chomp($physical_memory);\n            $physical_memory 
= $physical_memory * 1024 * 1024;\n        }\n        elsif ( $os =~ /AIX/ ) {\n            $physical_memory =\n              `lsattr -El sys0 | grep realmem | awk '{print \\$2}'`\n              or memerror;\n            chomp($physical_memory);\n            $physical_memory = $physical_memory * 1024;\n            $swap_memory     = `lsps -as | awk -F\"(MB| +)\" '/MB /{print \\$2}'`\n              or memerror;\n            chomp($swap_memory);\n            $swap_memory = $swap_memory * 1024 * 1024;\n        }\n        elsif ( $os =~ /windows/i ) {\n            $physical_memory =\n`wmic ComputerSystem get TotalPhysicalMemory | perl -ne \"chomp; print if /[0-9]+/;\"`\n              or memerror;\n            $swap_memory =\n`wmic OS get FreeVirtualMemory | perl -ne \"chomp; print if /[0-9]+/;\"`\n              or memerror;\n        }\n    }\n    debugprint \"Physical Memory: $physical_memory\";\n    debugprint \"Swap Memory: $swap_memory\";\n    chomp($physical_memory);\n    chomp($swap_memory);\n    chomp($os);\n    $physical_memory = $opt{forcemem}\n      if ( defined( $opt{forcemem} ) and $opt{forcemem} gt 0 );\n    $result{'OS'}{'OS Type'}                   = $os;\n    $result{'OS'}{'Physical Memory'}{'bytes'}  = $physical_memory;\n    $result{'OS'}{'Physical Memory'}{'pretty'} = hr_bytes($physical_memory);\n    $result{'OS'}{'Swap Memory'}{'bytes'}      = $swap_memory;\n    $result{'OS'}{'Swap Memory'}{'pretty'}     = hr_bytes($swap_memory);\n    $result{'OS'}{'Other Processes'}{'bytes'}  = get_other_process_memory();\n    $result{'OS'}{'Other Processes'}{'pretty'} =\n      hr_bytes( get_other_process_memory() );\n}\n\nsub get_http_cli {\n    my $httpcli = which( \"curl\", $ENV{'PATH'} );\n    chomp($httpcli);\n    if ($httpcli) {\n        return $httpcli;\n    }\n\n    $httpcli = which( \"wget\", $ENV{'PATH'} );\n    chomp($httpcli);\n    if ($httpcli) {\n        return $httpcli;\n    }\n    return \"\";\n}\n\n# Checks for updates to MySQLTuner\nsub 
validate_tuner_version {\n    if ( $opt{'checkversion'} eq 0 ) {\n        print \"\\n\" unless ( $opt{'silent'} or $opt{'json'} );\n        infoprint \"Skipped version check for MySQLTuner script\";\n        return;\n    }\n\n    my $update;\n    my $url =\n\"https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl\";\n    my $httpcli = get_http_cli();\n    if ( $httpcli =~ /curl$/ ) {\n        debugprint \"$httpcli is available.\";\n\n        debugprint\n\"$httpcli -m 3 -silent '$url' 2>/dev/null | grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2\";\n        $update =\n`$httpcli -m 3 -silent '$url' 2>/dev/null | grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2`;\n        chomp($update);\n        debugprint \"VERSION: $update\";\n\n        compare_tuner_version($update);\n        return;\n    }\n\n    if ( $httpcli =~ /wget$/ ) {\n        debugprint \"$httpcli is available.\";\n\n        debugprint\n\"$httpcli -e timestamping=off -t 1 -T 3 -O - '$url' 2>$devnull| grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2\";\n        $update =\n`$httpcli -e timestamping=off -t 1 -T 3 -O - '$url' 2>$devnull| grep 'my \\$tunerversion'| cut -d\\\\\\\" -f2`;\n        chomp($update);\n        compare_tuner_version($update);\n        return;\n    }\n    debugprint \"curl and wget are not available.\";\n    infoprint \"Unable to check for the latest MySQLTuner version\";\n    infoprint\n\"Using --pass and --password option is insecure during MySQLTuner execution (password disclosure)\"\n      if ( defined( $opt{'pass'} ) );\n}\n\n# Checks for updates to MySQLTuner\nsub update_tuner_version {\n    if ( $opt{'updateversion'} eq 0 ) {\n        badprint \"Skipped version update for MySQLTuner script\";\n        print \"\\n\" unless ( $opt{'silent'} or $opt{'json'} );\n        return;\n    }\n\n    my $update;\n    my $fullpath = \"\";\n    my $url = \"https://raw.githubusercontent.com/major/MySQLTuner-perl/master/\";\n    my @scripts =\n      ( \"mysqltuner.pl\", 
\"basic_passwords.txt\", \"vulnerabilities.csv\" );\n    my $totalScripts    = scalar(@scripts);\n    my $receivedScripts = 0;\n    my $httpcli         = get_http_cli();\n\n    foreach my $script (@scripts) {\n\n        if ( $httpcli =~ /curl$/ ) {\n            debugprint \"$httpcli is available.\";\n\n            $fullpath = dirname(__FILE__) . \"/\" . $script;\n            debugprint \"FullPath: $fullpath\";\n            debugprint\n\"$httpcli --connect-timeout 3 '$url$script' 2>$devnull > $fullpath\";\n            $update =\n`$httpcli --connect-timeout 3 '$url$script' 2>$devnull > $fullpath`;\n            chomp($update);\n            debugprint \"$script updated: $update\";\n\n            if ( -s $script eq 0 ) {\n                badprint \"Couldn't update $script\";\n            }\n            else {\n                ++$receivedScripts;\n                debugprint \"$script updated: $update\";\n            }\n        }\n        elsif ( $httpcli =~ /wget$/ ) {\n\n            debugprint \"$httpcli is available.\";\n\n            debugprint\n\"$httpcli -qe timestamping=off -t 1 -T 3 -O $script '$url$script'\";\n            $update =\n`$httpcli -qe timestamping=off -t 1 -T 3 -O $script '$url$script'`;\n            chomp($update);\n\n            if ( -s $script eq 0 ) {\n                badprint \"Couldn't update $script\";\n            }\n            else {\n                ++$receivedScripts;\n                debugprint \"$script updated: $update\";\n            }\n        }\n        else {\n            debugprint \"curl and wget are not available.\";\n            infoprint \"Unable to check for the latest MySQLTuner version\";\n        }\n\n    }\n\n    if ( $receivedScripts eq $totalScripts ) {\n        goodprint \"Successfully updated MySQLTuner script\";\n    }\n    else {\n        badprint \"Couldn't update MySQLTuner script\";\n    }\n    infoprint \"Stopping program: MySQLTuner script must be updated first.\";\n    exit 0;\n}\n\nsub compare_tuner_version 
{\n    my $remoteversion = shift;\n    debugprint \"Remote data: $remoteversion\";\n\n    #exit 0;\n    if ( $remoteversion ne $tunerversion ) {\n        badprint\n          \"There is a new version of MySQLTuner available ($remoteversion)\";\n        update_tuner_version();\n        return;\n    }\n    goodprint \"You have the latest version of MySQLTuner ($tunerversion)\";\n    return;\n}\n\n# Checks to see if a MySQL login is possible\nmy ( $mysqllogin, $doremote, $remotestring, $mysqlcmd, $mysqladmincmd );\n\nmy $osname = $^O;\n\n# eval guards the require; a bare 'last' here would be fatal outside a loop\nif ( $osname eq 'MSWin32' and eval { require Win32; } ) {\n    $osname = Win32::GetOSName();\n    infoprint \"* Windows OS ($osname) is not fully supported.\\n\";\n\n    #exit 1;\n}\n\nsub mysql_setup {\n    $doremote     = 0;\n    $remotestring = '';\n    if ( $opt{mysqladmin} ) {\n        $mysqladmincmd = $opt{mysqladmin};\n    }\n    else {\n        $mysqladmincmd = which( \"mariadb-admin\", $ENV{'PATH'} );\n        if ( !-e $mysqladmincmd ) {\n            $mysqladmincmd = which( \"mysqladmin\", $ENV{'PATH'} );\n        }\n    }\n    chomp($mysqladmincmd);\n    if ( !-e $mysqladmincmd && $opt{mysqladmin} ) {\n        badprint \"Unable to find the mysqladmin command you specified: \"\n          . $mysqladmincmd . \"\";\n        exit 1;\n    }\n    elsif ( !-e $mysqladmincmd ) {\n        badprint\n\"Couldn't find mysqladmin/mariadb-admin in your \\$PATH. Is MySQL installed?\";\n\n        #exit 1;\n    }\n    if ( $opt{mysqlcmd} ) {\n        $mysqlcmd = $opt{mysqlcmd};\n    }\n    else {\n        $mysqlcmd = which( \"mariadb\", $ENV{'PATH'} );\n        if ( !-e $mysqlcmd ) {\n            $mysqlcmd = which( \"mysql\", $ENV{'PATH'} );\n        }\n    }\n    chomp($mysqlcmd);\n    if ( !-e $mysqlcmd && $opt{mysqlcmd} ) {\n        badprint \"Unable to find the mysql command you specified: \"\n          . $mysqlcmd . 
\"\";\n        exit 1;\n    }\n    elsif ( !-e $mysqlcmd ) {\n        badprint\n          \"Couldn't find mysql/mariadb in your \\$PATH. Is MySQL installed?\";\n        exit 1;\n    }\n    $mysqlcmd =~ s/\\n$//g;\n    my $mysqlclidefaults = `$mysqlcmd --print-defaults`;\n    debugprint \"MySQL Client: $mysqlclidefaults\";\n    if ( $mysqlclidefaults =~ /auto-vertical-output/ ) {\n        badprint\n          \"Avoid auto-vertical-output in configuration file(s) for MySQL like\";\n        exit 1;\n    }\n\n    debugprint \"MySQL Client: $mysqlcmd\";\n\n    # Are we being asked to connect via a socket?\n    if ( $opt{socket} ne 0 ) {\n        if ( $opt{port} ne 0 ) {\n            $remotestring = \" -S $opt{socket} -P $opt{port}\";\n        }\n        else {\n            $remotestring = \" -S $opt{socket}\";\n        }\n    }\n\n    if ( $opt{protocol} ne '' ) {\n        $remotestring = \" --protocol=$opt{protocol}\";\n    }\n\n    # Are we being asked to connect to a remote server?\n    if ( $opt{host} ne 0 ) {\n        chomp( $opt{host} );\n        $opt{port} = ( $opt{port} eq 0 ) ? 
3306 : $opt{port};\n\n# If we're doing a remote connection but forcemem wasn't specified, assume a default\n        if ( $opt{'forcemem'} eq 0 && is_remote eq 1 ) {\n            badprint \"The --forcemem option is required for remote connections\";\n            badprint\n              \"Assuming 1 GB of RAM to simplify remote connection usage\";\n            $opt{'forcemem'} = 1024;\n\n            #exit 1;\n        }\n        if ( $opt{'forceswap'} eq 0 && is_remote eq 1 ) {\n            badprint\n              \"The --forceswap option is required for remote connections\";\n            badprint\n              \"Assuming 1 GB of swap to simplify remote connection usage\";\n            $opt{'forceswap'} = 1024;\n\n            #exit 1;\n        }\n        infoprint \"Performing tests on $opt{host}:$opt{port}\";\n        $remotestring = \" -h $opt{host} -P $opt{port}\";\n        $doremote     = is_remote();\n\n    }\n    else {\n        $opt{host} = '127.0.0.1';\n    }\n\n    if ( $opt{'ssl-ca'} ne 0 ) {\n        if ( -e -r -f $opt{'ssl-ca'} ) {\n            $remotestring .= \" --ssl-ca=$opt{'ssl-ca'}\";\n            infoprint\n              \"Will connect using the SSL CA certificate passed on the command line\";\n            return 1;\n        }\n        else {\n            badprint\n\"Attempted to use the passed SSL CA certificate, but it was not found or could not be read\";\n            exit 1;\n        }\n    }\n\n   # Did we already get a username with or without password on the command line?\n    if ( $opt{user} ne 0 ) {\n        $mysqllogin =\n            \"-u $opt{user} \"\n          . ( ( $opt{pass} ne 0 ) ? \"-p'$opt{pass}' \" : \" \" )\n          . 
$remotestring;\n        my $loginstatus =\n          `$mysqlcmd -Nrs -e 'select \"mysqld is alive\";' $mysqllogin 2>&1`;\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n            goodprint \"Logged in using credentials passed on the command line\";\n            return 1;\n        }\n        else {\n            badprint\n              \"Attempted to use login credentials, but they were invalid\";\n            exit 1;\n        }\n    }\n\n    my $svcprop = which( \"svcprop\", $ENV{'PATH'} );\n    if ( substr( $svcprop, 0, 1 ) =~ \"/\" ) {\n\n        # We are on solaris\n        ( my $mysql_login =\n`svcprop -p quickbackup/username svc:/network/mysql-quickbackup:default`\n        ) =~ s/\\s+$//;\n        ( my $mysql_pass =\n`svcprop -p quickbackup/password svc:/network/mysql-quickbackup:default`\n        ) =~ s/\\s+$//;\n        if ( substr( $mysql_login, 0, 7 ) ne \"svcprop\" ) {\n\n            # mysql-quickbackup is installed\n            $mysqllogin = \"-u $mysql_login -p$mysql_pass\";\n            my $loginstatus = `mysqladmin $mysqllogin ping 2>&1`;\n            if ( $loginstatus =~ /mysqld is alive/ ) {\n                goodprint \"Logged in using credentials from mysql-quickbackup.\";\n                return 1;\n            }\n            else {\n                badprint\n\"Attempted to use login credentials from mysql-quickbackup, but they failed.\";\n                exit 1;\n            }\n        }\n    }\n    elsif ( -r \"/etc/psa/.psa.shadow\" and $doremote == 0 ) {\n\n        # It's a Plesk box, use the available credentials\n        $mysqllogin = \"-u admin -p`cat /etc/psa/.psa.shadow`\";\n        my $loginstatus = `$mysqladmincmd ping $mysqllogin 2>&1`;\n        unless ( $loginstatus =~ /mysqld is alive/ ) {\n\n            # Plesk 10+\n            $mysqllogin =\n              \"-u admin -p`/usr/local/psa/bin/admin --show-password`\";\n            $loginstatus = `$mysqladmincmd ping $mysqllogin 2>&1`;\n            unless ( $loginstatus =~ /mysqld 
is alive/ ) {\n                badprint\n\"Attempted to use login credentials from Plesk and Plesk 10+, but they failed.\";\n                exit 1;\n            }\n        }\n    }\n    elsif ( -r \"/usr/local/directadmin/conf/mysql.conf\" and $doremote == 0 ) {\n\n        # It's a DirectAdmin box, use the available credentials\n        my $mysqluser =\n          `cat /usr/local/directadmin/conf/mysql.conf | egrep '^user=.*'`;\n        my $mysqlpass =\n          `cat /usr/local/directadmin/conf/mysql.conf | egrep '^passwd=.*'`;\n\n        $mysqluser =~ s/user=//;\n        $mysqluser =~ s/[\\r\\n]//;\n        $mysqlpass =~ s/passwd=//;\n        $mysqlpass =~ s/[\\r\\n]//;\n\n        $mysqllogin = \"-u $mysqluser -p$mysqlpass\";\n\n        my $loginstatus = `mysqladmin ping $mysqllogin 2>&1`;\n        unless ( $loginstatus =~ /mysqld is alive/ ) {\n            badprint\n\"Attempted to use login credentials from DirectAdmin, but they failed.\";\n            exit 1;\n        }\n    }\n    elsif ( -r \"/etc/mysql/debian.cnf\"\n        and $doremote == 0\n        and $opt{'defaults-file'} eq '' )\n    {\n\n        # We have a Debian maintenance account, use it\n        $mysqllogin = \"--defaults-file=/etc/mysql/debian.cnf\";\n        my $loginstatus = `$mysqladmincmd $mysqllogin ping 2>&1`;\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n            goodprint\n              \"Logged in using credentials from Debian maintenance account.\";\n            return 1;\n        }\n        else {\n            badprint\n\"Attempted to use login credentials from Debian maintenance account, but they failed.\";\n            exit 1;\n        }\n    }\n    elsif ( $opt{'defaults-file'} ne '' and -r \"$opt{'defaults-file'}\" ) {\n\n        # defaults-file\n        debugprint \"defaults file detected: $opt{'defaults-file'}\";\n        my $mysqlclidefaults = `$mysqlcmd --print-defaults`;\n        debugprint \"MySQL Client Default File: $opt{'defaults-file'}\";\n\n        $mysqllogin 
= \"--defaults-file=\" . $opt{'defaults-file'};\n        my $loginstatus = `$mysqladmincmd $mysqllogin ping 2>&1`;\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n            goodprint \"Logged in using credentials from defaults file account.\";\n            return 1;\n        }\n    }\n    elsif ( $opt{'defaults-extra-file'} ne ''\n        and -r \"$opt{'defaults-extra-file'}\" )\n    {\n\n        # defaults-extra-file\n        debugprint \"defaults extra file detected: $opt{'defaults-extra-file'}\";\n        my $mysqlclidefaults = `$mysqlcmd --print-defaults`;\n        debugprint\n          \"MySQL Client Extra Default File: $opt{'defaults-extra-file'}\";\n\n        $mysqllogin = \"--defaults-extra-file=\" . $opt{'defaults-extra-file'};\n        my $loginstatus = `$mysqladmincmd $mysqllogin ping 2>&1`;\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n            goodprint\n              \"Logged in using credentials from extra defaults file account.\";\n            return 1;\n        }\n    }\n    else {\n        # It's not Plesk or Debian, we should try a login\n        debugprint \"$mysqladmincmd $remotestring ping 2>&1\";\n\n        #my $loginstatus = \"\";\n        debugprint \"Using mysqlcmd: $mysqlcmd\";\n\n        #if (defined($mysqladmincmd)) {\n        #  infoprint \"Using mysqladmin to check login\";\n        #  $loginstatus=`$mysqladmincmd $remotestring ping 2>&1`;\n        #} else {\n        infoprint \"Using mysql to check login\";\n        my $loginstatus =\n`$mysqlcmd $remotestring -Nrs -e 'select \"mysqld is alive\"' --connect-timeout=3 2>&1`;\n\n        #}\n\n        if ( $loginstatus =~ /mysqld is alive/ ) {\n\n            # Login went just fine\n            $mysqllogin = \" $remotestring \";\n\n       # Did this go well because of a .my.cnf file or is there no password set?\n            my $userpath = `printenv HOME`;\n            if ( length($userpath) > 0 ) {\n                chomp($userpath);\n            }\n            unless ( -e 
\"${userpath}/.my.cnf\" or -e \"${userpath}/.mylogin.cnf\" )\n            {\n                badprint\n                  \"SECURITY RISK: Successfully authenticated without password\";\n            }\n            return 1;\n        }\n        else {\n            if ( $opt{'noask'} == 1 ) {\n                badprint\n                  \"Attempted to use login credentials, but they were invalid\";\n                exit 1;\n            }\n            my ( $name, $password );\n\n            # If --user is defined no need to ask for username\n            if ( $opt{user} ne 0 ) {\n                $name = $opt{user};\n            }\n            else {\n                print STDERR \"Please enter your MySQL administrative login: \";\n                $name = <STDIN>;\n            }\n\n            # If --pass is defined no need to ask for password\n            if ( $opt{pass} ne 0 ) {\n                $password = $opt{pass};\n            }\n            else {\n                print STDERR\n                  \"Please enter your MySQL administrative password: \";\n                system(\"stty -echo >$devnull 2>&1\");\n                $password = <STDIN>;\n                system(\"stty echo >$devnull 2>&1\");\n            }\n            chomp($password);\n            chomp($name);\n            $mysqllogin = \"-u $name\";\n\n            if ( length($password) > 0 ) {\n                $mysqllogin .= \" -p'$password'\";\n            }\n            $mysqllogin .= $remotestring;\n            my $loginstatus = `$mysqladmincmd ping $mysqllogin 2>&1`;\n            if ( $loginstatus =~ /mysqld is alive/ ) {\n\n                #print STDERR \"\";\n                if ( !length($password) ) {\n\n       # Did this go well because of a .my.cnf file or is there no password set?\n                    my $userpath = `printenv HOME`;\n                    chomp($userpath);\n                    unless ( -e \"$userpath/.my.cnf\" ) {\n                        print STDERR \"\";\n                      
  badprint\n\"SECURITY RISK: Successfully authenticated without password\";\n                    }\n                }\n                return 1;\n            }\n            else {\n                #print STDERR \"\";\n                badprint\n                  \"Attempted to use login credentials, but they were invalid.\";\n                exit 1;\n            }\n            exit 1;\n        }\n    }\n}\n\n# MySQL Request Array\nsub select_array {\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my @result = `$mysqlcmd $mysqllogin -Bse \"\\\\w$req\" 2>>/dev/null`;\n    if ( $? != 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_array: return code : $?\";\n    chomp(@result);\n    return @result;\n}\n\n# MySQL Request Array\nsub select_array_with_headers {\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my @result = `$mysqlcmd $mysqllogin -Bre \"\\\\w$req\" 2>>/dev/null`;\n    if ( $? 
!= 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_array_with_headers: return code : $?\";\n    chomp(@result);\n    return @result;\n}\n\n# MySQL Request Array\nsub select_csv_file {\n    my $tfile = shift;\n    my $req   = shift;\n    debugprint \"PERFORM: $req CSV into $tfile\";\n\n    #return;\n    my @result = select_array_with_headers($req);\n    open( my $fh, '>', $tfile ) or die \"Could not open file '$tfile' $!\";\n    for my $l (@result) {\n        $l =~ s/\\t/\",\"/g;\n        $l =~ s/^/\"/;\n        $l =~ s/$/\"\\n/;\n        print $fh $l;\n        print $l if $opt{debug};\n    }\n    close $fh;\n    infoprint \"CSV file $tfile created\";\n}\n\nsub human_size {\n    my ( $size, $n ) = ( shift, 0 );\n    ++$n and $size /= 1024 until $size < 1024;\n    return sprintf \"%.2f %s\", $size, (qw[ bytes KB MB GB TB ])[$n];\n}\n\n# MySQL Request one\nsub select_one {\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my $result = `$mysqlcmd $mysqllogin -Bse \"\\\\w$req\" 2>>/dev/null`;\n    if ( $? != 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_array: return code : $?\";\n    chomp($result);\n    return $result;\n}\n\n# MySQL Request one\nsub select_one_g {\n    my $pattern = shift;\n\n    my $req = shift;\n    debugprint \"PERFORM: $req \";\n    my @result = `$mysqlcmd $mysqllogin -re \"\\\\w$req\\\\G\" 2>>/dev/null`;\n    if ( $? 
!= 0 ) {\n        badprint \"Failed to execute: $req\";\n        badprint \"FAIL Execute SQL / return code: $?\";\n        debugprint \"CMD    : $mysqlcmd\";\n        debugprint \"OPTIONS: $mysqllogin\";\n        debugprint `$mysqlcmd $mysqllogin -Bse \"$req\" 2>&1`;\n\n        #exit $?;\n    }\n    debugprint \"select_array: return code : $?\";\n    chomp(@result);\n    return ( grep { /$pattern/ } @result )[0];\n}\n\nsub select_str_g {\n    my $pattern = shift;\n\n    my $req = shift;\n    my $str = select_one_g $pattern, $req;\n    return () unless defined $str;\n    my @val = split /:/, $str;\n    shift @val;\n    return trim(@val);\n}\n\nsub select_user_dbs {\n    return select_array(\n\"SELECT DISTINCT TABLE_SCHEMA FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'percona', 'sys')\"\n    );\n}\n\nsub select_tables_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_indexes_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT INDEX_NAME FROM information_schema.STATISTICS WHERE TABLE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_views_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT TABLE_NAME FROM information_schema.VIEWS WHERE TABLE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_triggers_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT TRIGGER_NAME FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA='$schema'\"\n    );\n}\n\nsub select_routines_db {\n    my $schema = shift;\n    return select_array(\n\"SELECT DISTINCT ROUTINE_NAME FROM information_schema.ROUTINES WHERE ROUTINE_SCHEMA='$schema'\"\n    );\n}\n\nsub select_table_indexes_db {\n    my $schema = shift;\n    my $tbname = shift;\n    return select_array(\n\"SELECT INDEX_NAME FROM information_schema.STATISTICS WHERE TABLE_SCHEMA='$schema' AND 
TABLE_NAME='$tbname'\"\n    );\n}\n\nsub select_table_columns_db {\n    my $schema = shift;\n    my $table  = shift;\n    return select_array(\n\"SELECT COLUMN_NAME FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$schema' AND TABLE_NAME='$table'\"\n    );\n}\n\nsub get_tuning_info {\n    my @infoconn = select_array \"\\\\s\";\n    my ( $tkey, $tval );\n    @infoconn =\n      grep { !/Threads:/ and !/Connection id:/ and !/pager:/ and !/Using/ }\n      @infoconn;\n    foreach my $line (@infoconn) {\n        if ( $line =~ /\\s*(.*):\\s*(.*)/ ) {\n            debugprint \"$1 => $2\";\n            $tkey = $1;\n            $tval = $2;\n            chomp($tkey);\n            chomp($tval);\n            $result{'MySQL Client'}{$tkey} = $tval;\n        }\n    }\n    $result{'MySQL Client'}{'Client Path'}         = $mysqlcmd;\n    $result{'MySQL Client'}{'Admin Path'}          = $mysqladmincmd;\n    $result{'MySQL Client'}{'Authentication Info'} = $mysqllogin;\n\n}\n\n# Populates all of the variable and status hashes\nmy ( %mystat, %myvar, $dummyselect, %myrepl, %myslaves );\n\nsub arr2hash {\n    my $href = shift;\n    my $harr = shift;\n    my $sep  = shift;\n    my $key  = '';\n    my $val  = '';\n\n    $sep = '\\s' unless defined($sep);\n    foreach my $line (@$harr) {\n        next if ( $line =~ m/^\\*\\*\\*\\*\\*\\*\\*/ );\n        $line =~ /([a-zA-Z_]*)\\s*$sep\\s*(.*)/;\n        $key         = $1;\n        $val         = $2;\n        $$href{$key} = $val;\n\n        debugprint \" * $key = $val\" if $key =~ /$opt{dbgpattern}/i;\n    }\n}\n\nsub get_all_vars {\n\n    # We need to initiate at least one query so that our data is useable\n    $dummyselect = select_one \"SELECT VERSION()\";\n    if ( not defined($dummyselect) or $dummyselect eq \"\" ) {\n        badprint\n          \"You probably do not have enough privileges to run MySQLTuner ...\";\n        exit(256);\n    }\n    $dummyselect =~ s/(.*?)\\-.*/$1/;\n    debugprint \"VERSION: \" . $dummyselect . 
\"\";\n    $result{'MySQL Client'}{'Version'} = $dummyselect;\n\n    my @mysqlvarlist = select_array(\"SHOW VARIABLES\");\n    push( @mysqlvarlist, select_array(\"SHOW GLOBAL VARIABLES\") );\n    arr2hash( \\%myvar, \\@mysqlvarlist );\n    $result{'Variables'} = \\%myvar;\n\n    my @mysqlstatlist = select_array(\"SHOW STATUS\");\n    push( @mysqlstatlist, select_array(\"SHOW GLOBAL STATUS\") );\n    arr2hash( \\%mystat, \\@mysqlstatlist );\n    $result{'Status'} = \\%mystat;\n    unless ( defined( $myvar{'innodb_support_xa'} ) ) {\n        $myvar{'innodb_support_xa'} = 'ON';\n    }\n    $mystat{'Uptime'} = 1\n      unless defined( $mystat{'Uptime'} )\n      and $mystat{'Uptime'} > 0;\n    $myvar{'have_galera'} = \"NO\";\n    if (   defined( $myvar{'wsrep_provider_options'} )\n        && $myvar{'wsrep_provider_options'} ne \"\"\n        && $myvar{'wsrep_on'} ne \"OFF\" )\n    {\n        $myvar{'have_galera'} = \"YES\";\n        debugprint \"Galera options: \" . $myvar{'wsrep_provider_options'};\n    }\n\n    # Workaround for MySQL bug #59393 wrt. 
ignore-builtin-innodb\n    if ( ( $myvar{'ignore_builtin_innodb'} || \"\" ) eq \"ON\" ) {\n        $myvar{'have_innodb'} = \"NO\";\n    }\n\n    # Support GTID MODE FOR MARIADB\n    # Issue MariaDB GTID mode #513\n    $myvar{'gtid_mode'} = 'ON'\n      if ( defined( $myvar{'gtid_current_pos'} )\n        and $myvar{'gtid_current_pos'} ne '' );\n\n    # Whether the server uses a thread pool to handle client connections\n    # MariaDB: thread_handling = pool-of-threads\n    # MySQL: thread_handling = loaded-dynamically\n    $myvar{'have_threadpool'} = \"NO\";\n    if (\n        defined( $myvar{'thread_handling'} )\n        and (  $myvar{'thread_handling'} eq 'pool-of-threads'\n            || $myvar{'thread_handling'} eq 'loaded-dynamically' )\n      )\n    {\n        $myvar{'have_threadpool'} = \"YES\";\n    }\n\n    # have_* for engines is deprecated and will be removed in MySQL 5.6;\n    # check SHOW ENGINES and set corresponding old style variables.\n    # Also works around MySQL bug #59393 wrt. skip-innodb\n    my @mysqlenginelist = select_array \"SHOW ENGINES\";\n    foreach my $line (@mysqlenginelist) {\n        if ( $line =~ /^([a-zA-Z_]+)\\s+(\\S+)/ ) {\n            my $engine = lc($1);\n\n            if ( $engine eq \"federated\" || $engine eq \"blackhole\" ) {\n                $engine .= \"_engine\";\n            }\n            elsif ( $engine eq \"berkeleydb\" ) {\n                $engine = \"bdb\";\n            }\n            my $val = ( $2 eq \"DEFAULT\" ) ? 
\"YES\" : $2;\n            $myvar{\"have_$engine\"} = $val;\n            $result{'Storage Engines'}{$engine} = $2;\n        }\n    }\n\n    #debugprint Dumper(@mysqlenginelist);\n\n    my @mysqlslave;\n    if ( mysql_version_eq(8) or mysql_version_ge( 10, 5 ) ) {\n        @mysqlslave = select_array(\"SHOW REPLICA STATUS\\\\G\");\n    }\n    else {\n        @mysqlslave = select_array(\"SHOW SLAVE STATUS\\\\G\");\n    }\n    arr2hash( \\%myrepl, \\@mysqlslave, ':' );\n    $result{'Replication'}{'Status'} = \\%myrepl;\n\n    my @mysqlslaves;\n    if ( mysql_version_eq(8) or mysql_version_ge( 10, 5 ) ) {\n        @mysqlslaves = select_array \"SHOW SLAVE STATUS\";\n    }\n    else {\n        @mysqlslaves = select_array(\"SHOW SLAVE HOSTS\\\\G\");\n    }\n\n    my @lineitems = ();\n    foreach my $line (@mysqlslaves) {\n        debugprint \"L: $line \";\n        @lineitems                                        = split /\\s+/, $line;\n        $myslaves{ $lineitems[0] }                        = $line;\n        $result{'Replication'}{'Slaves'}{ $lineitems[0] } = $lineitems[4];\n    }\n}\n\nsub remove_cr {\n    return map {\n        my $line = $_;\n        $line =~ s/\\n$//g;\n        $line =~ s/^\\s+$//g;\n        $line;\n    } @_;\n}\n\nsub remove_empty {\n    grep { $_ ne '' } @_;\n}\n\nsub grep_file_contents {\n    my $file = shift;\n    my $patt;\n}\n\nsub get_file_contents {\n    my $file = shift;\n    open( my $fh, \"<\", $file ) or die \"Can't open $file for read: $!\";\n    my @lines = <$fh>;\n    close $fh or die \"Cannot close $file: $!\";\n    @lines = remove_cr @lines;\n    return @lines;\n}\n\nsub get_basic_passwords {\n    return get_file_contents(shift);\n}\n\nsub get_log_file_real_path {\n    my $file     = shift;\n    my $hostname = shift;\n    my $datadir  = shift;\n    if ( -f \"$file\" ) {\n        return $file;\n    }\n    elsif ( -f \"$hostname.log\" ) {\n        return \"$hostname.log\";\n    }\n    elsif ( -f \"$hostname.err\" ) {\n        return 
\"$hostname.err\";\n    }\n    elsif ( -f \"$datadir$hostname.err\" ) {\n        return \"$datadir$hostname.err\";\n    }\n    elsif ( -f \"$datadir$hostname.log\" ) {\n        return \"$datadir$hostname.log\";\n    }\n    elsif ( -f \"$datadir\" . \"mysql_error.log\" ) {\n        return \"$datadir\" . \"mysql_error.log\";\n    }\n    elsif ( -f \"/var/log/mysql.log\" ) {\n        return \"/var/log/mysql.log\";\n    }\n    elsif ( -f \"/var/log/mysqld.log\" ) {\n        return \"/var/log/mysqld.log\";\n    }\n    elsif ( -f \"/var/log/mysql/$hostname.err\" ) {\n        return \"/var/log/mysql/$hostname.err\";\n    }\n    elsif ( -f \"/var/log/mysql/$hostname.log\" ) {\n        return \"/var/log/mysql/$hostname.log\";\n    }\n    elsif ( -f \"/var/log/mysql/\" . \"mysql_error.log\" ) {\n        return \"/var/log/mysql/\" . \"mysql_error.log\";\n    }\n    else {\n        return $file;\n    }\n}\n\nsub log_file_recommendations {\n    if ( is_remote eq 1 ) {\n        infoprint \"Skipping error log file checks on remote host\";\n        return;\n    }\n    my $fh;\n    $myvar{'log_error'} = $opt{'server-log'}\n      || get_log_file_real_path( $myvar{'log_error'}, $myvar{'hostname'},\n        $myvar{'datadir'} );\n\n    subheaderprint \"Log file Recommendations\";\n    if ( \"$myvar{'log_error'}\" eq \"stderr\" ) {\n        badprint\n\"log_error is set to $myvar{'log_error'}, but this script can't read stderr\";\n        return;\n    }\n    elsif ( $myvar{'log_error'} =~ /^(docker|podman|kubectl):(.*)/ ) {\n        open( $fh, '-|', \"$1 logs --tail=$maxlines '$2'\" )\n          // die \"Can't start $1 $!\";\n        goodprint \"Log from cloud $myvar{'log_error'} exists\";\n    }\n    elsif ( $myvar{'log_error'} =~ /^systemd:(.*)/ ) {\n        open( $fh, '-|', \"journalctl -n $maxlines -b -u '$1'\" )\n          // die \"Can't start journalctl $!\";\n        goodprint \"Log journal $myvar{'log_error'} exists\";\n    }\n    elsif ( -f \"$myvar{'log_error'}\" ) {\n      
  goodprint \"Log file $myvar{'log_error'} exists\";\n        my $size = ( stat $myvar{'log_error'} )[7];\n        infoprint \"Log file: \"\n          . $myvar{'log_error'} . \" (\"\n          . hr_bytes_rnd($size) . \")\";\n\n        if ( $size > 0 ) {\n            goodprint \"Log file $myvar{'log_error'} is not empty\";\n            if ( $size < 32 * 1024 * 1024 ) {\n                goodprint \"Log file $myvar{'log_error'} is smaller than 32 MB\";\n            }\n            else {\n                badprint \"Log file $myvar{'log_error'} is bigger than 32 MB\";\n                push @generalrec,\n                  $myvar{'log_error'}\n                  . \" is > 32MB, you should analyze why or implement a rotation log strategy such as logrotate!\";\n            }\n        }\n        else {\n            infoprint\n\"Log file $myvar{'log_error'} is empty. Assuming log-rotation. Use --server-log={file} for explicit file\";\n            return;\n        }\n        if ( !open( $fh, '<', $myvar{'log_error'} ) ) {\n            badprint \"Log file $myvar{'log_error'} isn't readable.\";\n            return;\n        }\n        goodprint \"Log file $myvar{'log_error'} is readable.\";\n\n        if ( $maxlines * 80 < $size ) {\n            seek( $fh, -$maxlines * 80, 2 );\n            <$fh>;    # discard line fragment\n        }\n    }\n    else {\n        badprint \"Log file $myvar{'log_error'} doesn't exist\";\n        return;\n    }\n\n    my $numLi     = 0;\n    my $nbWarnLog = 0;\n    my $nbErrLog  = 0;\n    my @lastShutdowns;\n    my @lastStarts;\n\n    while ( my $logLi = <$fh> ) {\n        chomp $logLi;\n        $numLi++;\n        debugprint \"$numLi: $logLi\"\n          if $logLi =~ /warning|error/i and $logLi !~ /Logging to/;\n        $nbErrLog++\n          if $logLi  =~ /error/i\n          and $logLi !~ /(Logging to|\\[Warning\\].*ERROR_FOR_DIVISION_BY_ZERO)/;\n        $nbWarnLog++ if $logLi =~ /warning/i;\n        push @lastShutdowns, $logLi\n          if $logLi 
=~ /Shutdown complete/ and $logLi !~ /Innodb/i;\n        push @lastStarts, $logLi if $logLi =~ /ready for connections/;\n    }\n    close $fh;\n\n    if ( $nbWarnLog > 0 ) {\n        badprint \"$myvar{'log_error'} contains $nbWarnLog warning(s).\";\n        push @generalrec, \"Check warning line(s) in $myvar{'log_error'} file\";\n    }\n    else {\n        goodprint \"$myvar{'log_error'} doesn't contain any warning.\";\n    }\n    if ( $nbErrLog > 0 ) {\n        badprint \"$myvar{'log_error'} contains $nbErrLog error(s).\";\n        push @generalrec, \"Check error line(s) in $myvar{'log_error'} file\";\n    }\n    else {\n        goodprint \"$myvar{'log_error'} doesn't contain any error.\";\n    }\n\n    infoprint scalar @lastStarts . \" start(s) detected in $myvar{'log_error'}\";\n    my $nStart = 0;\n    my $nEnd   = 10;\n    if ( scalar @lastStarts < $nEnd ) {\n        $nEnd = scalar @lastStarts;\n    }\n    for my $startd ( reverse @lastStarts[ -$nEnd .. -1 ] ) {\n        $nStart++;\n        infoprint \"$nStart) $startd\";\n    }\n    infoprint scalar @lastShutdowns\n      . \" shutdown(s) detected in $myvar{'log_error'}\";\n    $nStart = 0;\n    $nEnd   = 10;\n    if ( scalar @lastShutdowns < $nEnd ) {\n        $nEnd = scalar @lastShutdowns;\n    }\n    for my $shutd ( reverse @lastShutdowns[ -$nEnd .. 
-1 ] ) {\n        $nStart++;\n        infoprint \"$nStart) $shutd\";\n    }\n\n    #exit 0;\n}\n\nsub cve_recommendations {\n    subheaderprint \"CVE Security Recommendations\";\n    unless ( defined( $opt{cvefile} ) && -f \"$opt{cvefile}\" ) {\n        infoprint \"Skipped due to --cvefile option undefined\";\n        return;\n    }\n\n#$mysqlvermajor=10;\n#$mysqlverminor=1;\n#$mysqlvermicro=17;\n#prettyprint \"Look for related CVE for $myvar{'version'} or lower in $opt{cvefile}\";\n    my $cvefound = 0;\n    open( my $fh, \"<\", $opt{cvefile} )\n      or die \"Can't open $opt{cvefile} for read: $!\";\n    while ( my $cveline = <$fh> ) {\n        my @cve = split( ';', $cveline );\n        debugprint\n\"Comparing $mysqlvermajor\\.$mysqlverminor\\.$mysqlvermicro with $cve[1]\\.$cve[2]\\.$cve[3] : \"\n          . ( mysql_version_le( $cve[1], $cve[2], $cve[3] ) ? '<=' : '>' );\n\n        # Avoid not major/minor version corresponding CVEs\n        next\n          unless ( int( $cve[1] ) == $mysqlvermajor\n            && int( $cve[2] ) == $mysqlverminor );\n        if ( int( $cve[3] ) >= $mysqlvermicro ) {\n            badprint \"$cve[4](<= $cve[1]\\.$cve[2]\\.$cve[3]) : $cve[6]\";\n            $result{'CVE'}{'List'}{$cvefound} =\n              \"$cve[4](<= $cve[1]\\.$cve[2]\\.$cve[3]) : $cve[6]\";\n            $cvefound++;\n        }\n    }\n    close $fh or die \"Cannot close $opt{cvefile}: $!\";\n    $result{'CVE'}{'nb'} = $cvefound;\n\n    my $cve_warning_notes = \"\";\n    if ( $cvefound == 0 ) {\n        goodprint \"NO SECURITY CVE FOUND FOR YOUR VERSION\";\n        return;\n    }\n    if ( $mysqlvermajor eq 5 and $mysqlverminor eq 5 ) {\n        infoprint\n          \"False positive CVE(s) for MySQL and MariaDB 5.5.x can be found.\";\n        infoprint \"Check carefully each CVE for those particular versions\";\n    }\n    badprint $cvefound . \" CVE(s) found for your MySQL release.\";\n    push( @generalrec,\n        $cvefound\n          . 
\" CVE(s) found for your MySQL release. Consider upgrading your version !\"\n    );\n}\n\nsub get_opened_ports {\n    my @opened_ports = `netstat -ltn`;\n    @opened_ports = map {\n        my $v = $_;\n        $v =~ s/.*:(\\d+)\\s.*$/$1/;\n        $v =~ s/\\D//g;\n        $v;\n    } @opened_ports;\n    @opened_ports = sort { $a <=> $b } grep { !/^$/ } @opened_ports;\n\n    #debugprint Dumper \\@opened_ports;\n    $result{'Network'}{'TCP Opened'} = \\@opened_ports;\n    return @opened_ports;\n}\n\nsub is_open_port {\n    my $port = shift;\n    if ( grep { /^$port$/ } get_opened_ports ) {\n        return 1;\n    }\n    return 0;\n}\n\nsub get_process_memory {\n    my $pid = shift;\n    my @mem = `ps -p $pid -o rss`;\n    return 0 if scalar @mem != 2;\n    return $mem[1] * 1024;\n}\n\nsub get_other_process_memory {\n    return 0 if ( $opt{tbstat} == 0 );\n    my @procs = `ps eaxo pid,command`;\n    @procs = map {\n        my $v = $_;\n        $v =~ s/.*PID.*//;\n        $v =~ s/.*mysqld.*//;\n        $v =~ s/.*\\[.*\\].*//;\n        $v =~ s/^\\s+$//g;\n        $v =~ s/.*PID.*CMD.*//;\n        $v =~ s/.*systemd.*//;\n        $v =~ s/\\s*?(\\d+)\\s*.*/$1/g;\n        $v;\n    } @procs;\n    @procs = remove_cr @procs;\n    @procs = remove_empty @procs;\n    my $totalMemOther = 0;\n    map { $totalMemOther += get_process_memory($_); } @procs;\n    return $totalMemOther;\n}\n\nsub get_os_release {\n    if ( -f \"/etc/lsb-release\" ) {\n        my @info_release = get_file_contents \"/etc/lsb-release\";\n        my $os_release   = $info_release[3];\n        $os_release =~ s/.*=\"//;\n        $os_release =~ s/\"$//;\n        return $os_release;\n    }\n\n    if ( -f \"/etc/system-release\" ) {\n        my @info_release = get_file_contents \"/etc/system-release\";\n        return $info_release[0];\n    }\n\n    if ( -f \"/etc/os-release\" ) {\n        my @info_release = get_file_contents \"/etc/os-release\";\n        my $os_release   = $info_release[0];\n        $os_release =~ 
s/.*=\"//;\n        $os_release =~ s/\"$//;\n        return $os_release;\n    }\n\n    if ( -f \"/etc/issue\" ) {\n        my @info_release = get_file_contents \"/etc/issue\";\n        my $os_release   = $info_release[0];\n        $os_release =~ s/\\s+\\\\n.*//;\n        return $os_release;\n    }\n    return \"Unknown OS release\";\n}\n\nsub get_fs_info {\n    my @sinfo = `df -P | grep '%'`;\n    my @iinfo = `df -Pi| grep '%'`;\n    shift @sinfo;\n    shift @iinfo;\n\n    foreach my $info (@sinfo) {\n\n        #exit(0);\n        if ( $info =~ /.*?(\\d+)\\s+(\\d+)\\s+(\\d+)\\s+(\\d+)%\\s+(.*)$/ ) {\n            next if $5 =~ m{(run|dev|sys|proc|snap|init)};\n            if ( $4 > 85 ) {\n                badprint \"mount point $5 is using $4 % total space (\"\n                  . human_size( $2 * 1024 ) . \" / \"\n                  . human_size( $1 * 1024 ) . \")\";\n                push( @generalrec, \"Add some space to the $5 mount point.\" );\n            }\n            else {\n                infoprint \"mount point $5 is using $4 % total space (\"\n                  . human_size( $2 * 1024 ) . \" / \"\n                  . human_size( $1 * 1024 ) . 
\")\";\n            }\n            $result{'Filesystem'}{'Space Pct'}{$5}   = $4;\n            $result{'Filesystem'}{'Used Space'}{$5}  = $2;\n            $result{'Filesystem'}{'Free Space'}{$5}  = $3;\n            $result{'Filesystem'}{'Total Space'}{$5} = $1;\n        }\n    }\n\n    @iinfo = map {\n        my $v = $_;\n        $v =~ s/.*\\s(\\d+)%\\s+(.*)/$1\\t$2/g;\n        $v;\n    } @iinfo;\n    foreach my $info (@iinfo) {\n        next if $info =~ m{(\\d+)\\t/(run|dev|sys|proc|snap)($|/)};\n        if ( $info =~ /(\\d+)\\t(.*)/ ) {\n            if ( $1 > 85 ) {\n                badprint \"mount point $2 is using $1 % of max allowed inodes\";\n                push( @generalrec,\n\"Clean up files on the $2 mount point or reformat your filesystem.\"\n                );\n            }\n            else {\n                infoprint \"mount point $2 is using $1 % of max allowed inodes\";\n            }\n            $result{'Filesystem'}{'Inode Pct'}{$2} = $1;\n        }\n    }\n}\n\nsub merge_hash {\n    my $h1     = shift;\n    my $h2     = shift;\n    my %result;\n    foreach my $substanceref ( $h1, $h2 ) {\n        while ( my ( $k, $v ) = each %$substanceref ) {\n            next if ( exists $result{$k} );\n            $result{$k} = $v;\n        }\n    }\n    return \\%result;\n}\n\nsub is_virtual_machine {\n    if ( $^O eq 'linux' ) {\n        my $isVm = `grep -Ec '^flags.*\\ hypervisor\\ ' /proc/cpuinfo`;\n        return ( $isVm == 0 ? 0 : 1 );\n    }\n\n    if ( $^O eq 'freebsd' ) {\n        my $isVm = `sysctl -n kern.vm_guest`;\n        chomp $isVm;\n        debugprint \"isVm=[$isVm]\";\n        return ( $isVm eq 'none' ? 
0 : 1 );\n    }\n    return 0;\n}\n\nsub infocmd {\n    my $cmd = \"@_\";\n    debugprint \"CMD: $cmd\";\n    my @result = `$cmd`;\n    @result = remove_cr @result;\n    for my $l (@result) {\n        infoprint \"$l\";\n    }\n}\n\nsub infocmd_tab {\n    my $cmd = \"@_\";\n    debugprint \"CMD: $cmd\";\n    my @result = `$cmd`;\n    @result = remove_cr @result;\n    for my $l (@result) {\n        infoprint \"\\t$l\";\n    }\n}\n\nsub infocmd_one {\n    my $cmd    = \"@_\";\n    my @result = `$cmd 2>&1`;\n    @result = remove_cr @result;\n    return join ', ', @result;\n}\n\nsub get_kernel_info {\n    my @params = (\n        'fs.aio-max-nr',                 'fs.aio-nr',\n        'fs.nr_open',                    'fs.file-max',\n        'sunrpc.tcp_fin_timeout',        'sunrpc.tcp_max_slot_table_entries',\n        'sunrpc.tcp_slot_table_entries', 'vm.swappiness'\n    );\n    infoprint \"Information about kernel tuning:\";\n    foreach my $param (@params) {\n        infocmd_tab(\"sysctl $param 2>/dev/null\");\n        $result{'OS'}{'Config'}{$param} = `sysctl -n $param 2>/dev/null`;\n    }\n    if ( `sysctl -n vm.swappiness` > 10 ) {\n        badprint\n          \"Swappiness is > 10, please consider having a value lower than 10\";\n        push @generalrec, \"set up swappiness to be lower than or equal to 10\";\n        push @adjvars,\n'vm.swappiness <= 10 (echo 10 > /proc/sys/vm/swappiness) or vm.swappiness=10 in /etc/sysctl.conf';\n    }\n    else {\n        infoprint \"Swappiness is <= 10.\";\n    }\n\n    # only if /proc/sys/sunrpc exists\n    my $tcp_slot_entries =\n      `sysctl -n sunrpc.tcp_slot_table_entries 2>/dev/null`;\n    if ( -d \"/proc/sys/sunrpc\"\n        and ( $tcp_slot_entries eq '' or $tcp_slot_entries < 100 ) )\n    {\n        badprint\n\"Initial TCP slot entries is < 100, please consider having a value greater than 100\";\n        push @generalrec, \"set up initial TCP slot entries greater than 100\";\n        push @adjvars,\n'sunrpc.tcp_slot_table_entries > 100 (echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries) or sunrpc.tcp_slot_table_entries=128 in /etc/sysctl.conf';\n    }\n    else {\n        infoprint \"TCP slot entries is >= 100.\";\n    }\n\n    if ( -f \"/proc/sys/fs/aio-max-nr\" ) {\n        if ( `sysctl -n fs.aio-max-nr` < 1000000 ) {\n            badprint\n\"Max running total of the number of AIO events is < 1M, please consider having a value greater than 1M\";\n            push @generalrec,\n              \"set up max running number of AIO events greater than 1M\";\n            push @adjvars,\n'fs.aio-max-nr > 1M (echo 1048576 > /proc/sys/fs/aio-max-nr) or fs.aio-max-nr=1048576 in /etc/sysctl.conf';\n        }\n        else {\n            infoprint \"Max Number of AIO events is >= 1M.\";\n        }\n    }\n    if ( -f \"/proc/sys/fs/nr_open\" ) {\n        if ( `sysctl -n fs.nr_open` < 1000000 ) {\n            badprint\n\"Max running total of the number of open file requests is < 1M, please consider having a value greater than 1M\";\n            push @generalrec,\n              \"set up max number of open file requests greater than 1M\";\n            push @adjvars,\n'fs.nr_open > 1M (echo 1048576 > /proc/sys/fs/nr_open) or fs.nr_open=1048576 in /etc/sysctl.conf';\n        }\n        else {\n            infoprint \"Max Number of open file requests is >= 1M.\";\n        }\n    }\n}\n\nsub get_system_info {\n    $result{'OS'}{'Release'} = get_os_release();\n    infoprint get_os_release;\n    if (is_virtual_machine) {\n        infoprint \"Machine type          : Virtual machine\";\n        $result{'OS'}{'Virtual Machine'} = 'YES';\n    }\n    else {\n        infoprint \"Machine type          : Physical machine\";\n        $result{'OS'}{'Virtual Machine'} = 'NO';\n    }\n\n    $result{'Network'}{'Connected'} = 'NO';\n    `ping -c 1 ipecho.net >/dev/null 2>&1`;\n    my $isConnected = $?;\n    if ( $? 
== 0 ) {\n        infoprint \"Internet              : Connected\";\n        $result{'Network'}{'Connected'} = 'YES';\n    }\n    else {\n        badprint \"Internet              : Disconnected\";\n    }\n    $result{'OS'}{'NbCore'} = cpu_cores;\n    infoprint \"Number of CPU Cores   : \" . cpu_cores;\n    $result{'OS'}{'Type'} = `uname -o`;\n    infoprint \"Operating System Type : \" . infocmd_one \"uname -o\";\n    $result{'OS'}{'Kernel'} = `uname -r`;\n    infoprint \"Kernel Release        : \" . infocmd_one \"uname -r\";\n    $result{'OS'}{'Hostname'}         = `hostname`;\n    $result{'Network'}{'Internal Ip'} = `hostname -I`;\n    infoprint \"Hostname              : \" . infocmd_one \"hostname\";\n    infoprint \"Network Cards         : \";\n    infocmd_tab \"ifconfig| grep -A1 mtu\";\n    infoprint \"Internal IP           : \" . infocmd_one \"hostname -I\";\n    $result{'Network'}{'Network Cards'} = `ifconfig| grep -A1 mtu`;\n    my $httpcli = get_http_cli();\n    infoprint \"HTTP client found: $httpcli\" if defined $httpcli;\n\n    my $ext_ip = \"\";\n    if ( defined $httpcli and $httpcli =~ /curl$/ ) {\n        $ext_ip = infocmd_one \"$httpcli -m 3 ipecho.net/plain\";\n    }\n    elsif ( defined $httpcli and $httpcli =~ /wget$/ ) {\n        $ext_ip = infocmd_one \"$httpcli -t 1 -T 3 -q -O - ipecho.net/plain\";\n    }\n    infoprint \"External IP           : \" . $ext_ip;\n    $result{'Network'}{'External Ip'} = $ext_ip;\n    badprint \"External IP           : Can't check, no Internet connectivity\"\n      unless defined($httpcli);\n    infoprint \"Name Servers          : \"\n      . 
infocmd_one \"grep 'nameserver' /etc/resolv.conf \\| awk '{print \\$2}'\";\n    infoprint \"Logged-in Users       : \";\n    infocmd_tab \"who\";\n    $result{'OS'}{'Logged users'} = `who`;\n    infoprint \"RAM Usage in MB       : \";\n    infocmd_tab \"free -m | grep -v +\";\n    $result{'OS'}{'Free Memory RAM'} = `free -m | grep -v +`;\n    infoprint \"Load Average          : \";\n    infocmd_tab \"top -n 1 -b | grep 'load average:'\";\n    $result{'OS'}{'Load Average'} = `top -n 1 -b | grep 'load average:'`;\n\n    infoprint \"System Uptime         : \";\n    infocmd_tab \"uptime\";\n    $result{'OS'}{'Uptime'} = `uptime`;\n}\n\nsub system_recommendations {\n    if ( is_remote eq 1 ) {\n        infoprint \"Skipping system checks on remote host\";\n        return;\n    }\n    return if ( $opt{sysstat} == 0 );\n    subheaderprint \"System Linux Recommendations\";\n    my $os = `uname`;\n    unless ( $os =~ /Linux/i ) {\n        infoprint \"Skipped due to non-Linux server\";\n        return;\n    }\n    prettyprint \"Look for related Linux system recommendations\";\n\n    #prettyprint '-'x78;\n    get_system_info();\n\n    my $nb_cpus = cpu_cores;\n    if ( $nb_cpus > 1 ) {\n        goodprint \"There is more than one CPU core dedicated to the database server.\";\n    }\n    else {\n        badprint\n\"There is only one CPU core; consider dedicating more CPU cores to your database server\";\n        push @generalrec,\n          \"Consider increasing the number of CPU cores for your database server\";\n    }\n\n    if ( $physical_memory >= 1.5 * 1024 ) {\n        goodprint \"There is at least 1.5 GB of RAM dedicated to the Linux server.\";\n    }\n    else {\n        badprint\n\"There is less than 1.5 GB of RAM; consider dedicating more RAM to your Linux server\";\n        push @generalrec,\n          \"Consider increasing RAM to at least 1.5 to 2 GB for your Linux server\";\n    }\n\n    my $omem = get_other_process_memory;\n    infoprint \"Processes other than mysqld are using \"\n      . hr_bytes_rnd($omem) . 
\" RAM.\";\n    if ( ( 0.15 * $physical_memory ) < $omem ) {\n        if ( $opt{nondedicated} ) {\n            infoprint \"No warning with --nondedicated option\";\n            infoprint\n\"Processes other than mysqld are using more than 15% of total physical memory: \"\n              . percentage( $omem, $physical_memory ) . \"% (\"\n              . hr_bytes_rnd($omem) . \" / \"\n              . hr_bytes_rnd($physical_memory) . \")\";\n        }\n        else {\n            badprint\n\"Processes other than mysqld are using more than 15% of total physical memory: \"\n              . percentage( $omem, $physical_memory ) . \"% (\"\n              . hr_bytes_rnd($omem) . \" / \"\n              . hr_bytes_rnd($physical_memory) . \")\";\n            push( @generalrec,\n\"Consider stopping other processes or dedicating this server to mysqld.\"\n            );\n            push( @adjvars,\n\"DON'T APPLY SETTINGS BECAUSE THERE ARE TOO MANY PROCESSES RUNNING ON THIS SERVER. OOM KILL CAN OCCUR!\"\n            );\n        }\n    }\n    else {\n        infoprint\n\"Processes other than mysqld are using less than 15% of total physical memory: \"\n          . percentage( $omem, $physical_memory ) . \"% (\"\n          . hr_bytes_rnd($omem) . \" / \"\n          . hr_bytes_rnd($physical_memory) . \")\";\n    }\n\n    if ( $opt{'maxportallowed'} > 0 ) {\n        my @opened_ports = get_opened_ports;\n        infoprint \"There are \"\n          . scalar @opened_ports\n          . \" listening port(s) on this server.\";\n        if ( scalar(@opened_ports) > $opt{'maxportallowed'} ) {\n            badprint \"There are too many listening ports: \"\n              . scalar(@opened_ports)\n              . \" opened > \"\n              . $opt{'maxportallowed'}\n              . 
\" allowed.\";\n            push( @generalrec,\n\"Consider dedicating a server for your database installation with fewer services running on it!\"\n            );\n        }\n        else {\n            goodprint \"There are fewer than \"\n              . $opt{'maxportallowed'}\n              . \" open ports on this server.\";\n        }\n    }\n\n    foreach my $banport (@banned_ports) {\n        if ( is_open_port($banport) ) {\n            badprint \"Banned port $banport is open.\";\n            push( @generalrec,\n\"Port $banport is open. Consider stopping the program listening on this port.\"\n            );\n        }\n        else {\n            goodprint \"Port $banport is not open.\";\n        }\n    }\n\n    subheaderprint \"Filesystem Linux Recommendations\";\n    get_fs_info;\n    subheaderprint \"Kernel Information Recommendations\";\n    get_kernel_info;\n}\n\nsub security_recommendations {\n    subheaderprint \"Security Recommendations\";\n\n    if ( mysql_version_eq(8) ) {\n        infoprint \"Skipped due to unsupported feature for MySQL 8.0+\";\n        return;\n    }\n\n    #exit 0;\n    if ( $opt{skippassword} eq 1 ) {\n        infoprint \"Skipped due to --skippassword option\";\n        return;\n    }\n\n    my $PASS_COLUMN_NAME = 'password';\n\n    # New table schema available since mysql-5.7 and mariadb-10.2\n    # But need to be checked\n    if ( $myvar{'version'} =~ /5\\.7|10\\.[2-5]\\..*MariaDB*/ ) {\n        my $password_column_exists =\n`$mysqlcmd $mysqllogin -Bse \"SELECT 1 FROM information_schema.columns WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME = 'password'\" 2>>/dev/null`;\n        my $authstring_column_exists =\n`$mysqlcmd $mysqllogin -Bse \"SELECT 1 FROM information_schema.columns WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME = 'authentication_string'\" 2>>/dev/null`;\n        if ( $password_column_exists && $authstring_column_exists ) {\n            $PASS_COLUMN_NAME 
=\n\"IF(plugin='mysql_native_password', authentication_string, password)\";\n        }\n        elsif ($authstring_column_exists) {\n            $PASS_COLUMN_NAME = 'authentication_string';\n        }\n        elsif ( !$password_column_exists ) {\n            infoprint \"Skipped due to none of known auth columns exists\";\n            return;\n        }\n    }\n    debugprint \"Password column = $PASS_COLUMN_NAME\";\n\n    # IS THERE A ROLE COLUMN\n    my $is_role_column = select_one\n\"select count(*) from information_schema.columns where TABLE_NAME='user' AND TABLE_SCHEMA='mysql' and COLUMN_NAME='IS_ROLE'\";\n\n    my $extra_user_condition = \"\";\n    $extra_user_condition = \"IS_ROLE = 'N' AND\" if $is_role_column > 0;\n    my @mysqlstatlist;\n    if ( $is_role_column > 0 ) {\n        @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE IS_ROLE='Y'\";\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            infoprint \"User $line is User Role\";\n        }\n    }\n    else {\n        debugprint \"No Role user detected\";\n        goodprint \"No Role user detected\";\n    }\n\n    # Looking for Anonymous users\n    @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE $extra_user_condition (TRIM(USER) = '' OR USER IS NULL)\";\n\n    #debugprint Dumper \\@mysqlstatlist;\n\n    #exit 0;\n    if (@mysqlstatlist) {\n        push( @generalrec,\n                \"Remove Anonymous User accounts: there are \"\n              . scalar(@mysqlstatlist)\n              . \" anonymous accounts.\" );\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            badprint \"User \"\n              . $line\n              . \" is an anonymous account. Remove with DROP USER \"\n              . $line . 
\";\";\n        }\n    }\n    else {\n        goodprint \"There are no anonymous accounts for any database users\";\n    }\n    if ( mysql_version_le( 5, 1 ) ) {\n        badprint \"No more password checks for MySQL version <=5.1\";\n        badprint \"MySQL version <=5.1 is deprecated and end of support.\";\n        return;\n    }\n\n    # Looking for Empty Password\n    if ( mysql_version_ge( 10, 4 ) ) {\n        @mysqlstatlist = select_array\nq{SELECT CONCAT(QUOTE(user), '@', QUOTE(host)) FROM mysql.global_priv WHERE\n    ( user != ''\n    AND JSON_CONTAINS(Priv, '\"mysql_native_password\"', '$.plugin') AND JSON_CONTAINS(Priv, '\"\"', '$.authentication_string')\n    AND NOT JSON_CONTAINS(Priv, 'true', '$.account_locked')\n    )};\n    }\n    else {\n        @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE ($PASS_COLUMN_NAME = '' OR $PASS_COLUMN_NAME IS NULL)\n    AND user != ''\n    /*!50501 AND plugin NOT IN ('auth_socket', 'unix_socket', 'win_socket', 'auth_pam_compat') */\n    /*!80000 AND account_locked = 'N' AND password_expired = 'N' */\";\n    }\n    if (@mysqlstatlist) {\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            badprint \"User '\" . $line . 
\"' has no password set.\";\n            push( @generalrec,\n\"Set up a Secure Password for $line user: SET PASSWORD FOR $line = PASSWORD('secure_password');\"\n            );\n        }\n    }\n    else {\n        goodprint \"All database users have passwords assigned\";\n    }\n\n    if ( mysql_version_ge( 5, 7 ) ) {\n        my $valPlugin = select_one(\n\"select count(*) from information_schema.plugins where PLUGIN_NAME='validate_password' AND PLUGIN_STATUS='ACTIVE'\"\n        );\n        if ( $valPlugin >= 1 ) {\n            infoprint\n\"Bug #80860 MySQL 5.7: Avoid testing password when validate_password is activated\";\n            return;\n        }\n    }\n\n    # Looking for users whose password is their username (lower, upper, or capitalized)\n    @mysqlstatlist = select_array\n\"SELECT CONCAT(QUOTE(user), '\\@', QUOTE(host)) FROM mysql.user WHERE user != '' AND (CAST($PASS_COLUMN_NAME as Binary) = PASSWORD(user) OR CAST($PASS_COLUMN_NAME as Binary) = PASSWORD(UPPER(user)) OR CAST($PASS_COLUMN_NAME as Binary) = PASSWORD(CONCAT(UPPER(LEFT(User, 1)), SUBSTRING(User, 2, LENGTH(User)))))\";\n    if (@mysqlstatlist) {\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            badprint \"User \" . $line . \" has the username as password.\";\n            push( @generalrec,\n\"Set up a Secure Password for $line user: SET PASSWORD FOR $line = PASSWORD('secure_password');\"\n            );\n        }\n    }\n\n    @mysqlstatlist = select_array\n      \"SELECT CONCAT(QUOTE(user), '\\@', host) FROM mysql.user WHERE HOST='%'\";\n    if (@mysqlstatlist) {\n        foreach my $line ( sort @mysqlstatlist ) {\n            chomp($line);\n            my $luser = ( split /@/, $line )[0];\n            badprint \"User \" . $line\n              . 
\" does not specify hostname restrictions.\";\n            push( @generalrec,\n\"Restrict Host for $luser\\@'%' to $luser\\@LimitedIPRangeOrLocalhost\"\n            );\n            push( @generalrec,\n                    \"RENAME USER $luser\\@'%' TO \"\n                  . $luser\n                  . \"\\@LimitedIPRangeOrLocalhost;\" );\n        }\n    }\n\n    unless ( -f $basic_password_files ) {\n        badprint \"There is no basic password file list!\";\n        return;\n    }\n\n    my @passwords = get_basic_passwords $basic_password_files;\n    infoprint \"There are \"\n      . scalar(@passwords)\n      . \" basic passwords in the list.\";\n    my $nbins = 0;\n    my $passreq;\n    if (@passwords) {\n        my $nbInterPass = 0;\n        foreach my $pass (@passwords) {\n            $nbInterPass++;\n\n            $pass =~ s/\\s//g;\n            $pass =~ s/\\'/\\\\\\'/g;\n            chomp($pass);\n\n            # Looking for users with a weak password (lower, upper, or capitalized variants)\n            @mysqlstatlist =\n              select_array\n\"SELECT CONCAT(user, '\\@', host) FROM mysql.user WHERE $PASS_COLUMN_NAME = PASSWORD('\"\n              . $pass\n              . \"') OR $PASS_COLUMN_NAME = PASSWORD(UPPER('\"\n              . $pass\n              . \"')) OR $PASS_COLUMN_NAME = PASSWORD(CONCAT(UPPER(LEFT('\"\n              . $pass\n              . \"', 1)), SUBSTRING('\"\n              . $pass\n              . \"', 2, LENGTH('\"\n              . $pass . \"'))))\";\n            debugprint \"There are \" . scalar(@mysqlstatlist) . \" items.\";\n            if (@mysqlstatlist) {\n                foreach my $line (@mysqlstatlist) {\n                    chomp($line);\n                    badprint \"User '\" . $line\n                      . \"' is using weak password: $pass in a lower, upper, or capitalized variant.\";\n\n                    push( @generalrec,\n\"Set up a Secure Password for $line user: SET PASSWORD FOR '\"\n                          . 
( split /@/, $line )[0] . \"'\\@'\"\n                          . ( split /@/, $line )[1]\n                          . \"' = PASSWORD('secure_password');\" );\n                    $nbins++;\n                }\n            }\n            debugprint \"$nbInterPass / \" . scalar(@passwords)\n              if ( $nbInterPass % 1000 == 0 );\n        }\n    }\n    if ( $nbins > 0 ) {\n        push( @generalrec,\n            $nbins\n              . \" user(s) are using a weak password from the basic dictionary.\" );\n    }\n}\n\nsub get_replication_status {\n    subheaderprint \"Replication Metrics\";\n    infoprint \"Galera Synchronous replication: \" . $myvar{'have_galera'};\n    if ( scalar( keys %myslaves ) == 0 ) {\n        infoprint \"No replication slave(s) for this server.\";\n    }\n    else {\n        infoprint \"This server is acting as master for \"\n          . scalar( keys %myslaves )\n          . \" server(s).\";\n    }\n    infoprint \"Binlog format: \" . $myvar{'binlog_format'};\n    infoprint \"XA support enabled: \" . $myvar{'innodb_support_xa'};\n\n    infoprint \"Semi-synchronous replication Master: \"\n      . (\n        (\n                 defined( $myvar{'rpl_semi_sync_master_enabled'} )\n              or defined( $myvar{'rpl_semi_sync_source_enabled'} )\n        )\n        ? ( $myvar{'rpl_semi_sync_master_enabled'}\n              // $myvar{'rpl_semi_sync_source_enabled'} )\n        : 'Not Activated'\n      );\n    infoprint \"Semi-synchronous replication Slave: \"\n      . (\n        (\n                 defined( $myvar{'rpl_semi_sync_slave_enabled'} )\n              or defined( $myvar{'rpl_semi_sync_replica_enabled'} )\n        )\n        ? 
( $myvar{'rpl_semi_sync_slave_enabled'}\n              // $myvar{'rpl_semi_sync_replica_enabled'} )\n        : 'Not Activated'\n      );\n    if ( scalar( keys %myrepl ) == 0 and scalar( keys %myslaves ) == 0 ) {\n        infoprint \"This is a standalone server\";\n        return;\n    }\n    if ( scalar( keys %myrepl ) == 0 ) {\n        infoprint\n          \"No replication setup for this server or replication not started.\";\n        return;\n    }\n\n    $result{'Replication'}{'status'} = \\%myrepl;\n    my ($io_running) = $myrepl{'Slave_IO_Running'}\n      // $myrepl{'Replica_IO_Running'};\n    debugprint \"IO RUNNING: $io_running \";\n    my ($sql_running) = $myrepl{'Slave_SQL_Running'}\n      // $myrepl{'Replica_SQL_Running'};\n    debugprint \"SQL RUNNING: $sql_running \";\n\n    my ($seconds_behind_master) = $myrepl{'Seconds_Behind_Master'}\n      // $myrepl{'Seconds_Behind_Source'};\n    $seconds_behind_master = 1000000 unless defined($seconds_behind_master);\n    debugprint \"SECONDS : $seconds_behind_master \";\n\n    if ( defined($io_running)\n        and ( $io_running !~ /yes/i or $sql_running !~ /yes/i ) )\n    {\n        badprint\n          \"This replication slave is not running but seems to be configured.\";\n    }\n    if (   defined($io_running)\n        && $io_running  =~ /yes/i\n        && $sql_running =~ /yes/i )\n    {\n        if ( $myvar{'read_only'} eq 'OFF' ) {\n            badprint\n\"This replication slave is running with the read_only option disabled.\";\n        }\n        else {\n            goodprint\n\"This replication slave is running with the read_only option enabled.\";\n        }\n        if ( $seconds_behind_master > 0 ) {\n            badprint\n\"This replication slave is lagging $seconds_behind_master second(s) behind the master host.\";\n        }\n        else {\n            goodprint \"This replication slave is up to date with master.\";\n        }\n    }\n}\n\n# 
https://endoflife.software/applications/databases/mysql\n# https://endoflife.date/mariadb\nsub validate_mysql_version {\n    ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n    $mysqlverminor ||= 0;\n    $mysqlvermicro ||= 0;\n\n    prettyprint \" \";\n\n    if (   mysql_version_eq(9)\n        or mysql_version_eq( 8, 4 )\n        or mysql_version_eq( 8, 0 )\n        or mysql_version_eq( 10, 5 )\n        or mysql_version_eq( 10, 6 )\n        or mysql_version_eq( 10, 11 )\n        or mysql_version_eq( 11, 4 ) )\n    {\n        goodprint \"Currently running supported MySQL version \"\n          . $myvar{'version'} . \"\";\n        return;\n    }\n    else {\n        badprint \"Your MySQL version \"\n          . $myvar{'version'}\n          . \" is EOL software. Upgrade soon!\";\n        push( @generalrec,\n            \"You are using an unsupported version for production environments\"\n        );\n        push( @generalrec,\n            \"Upgrade as soon as possible to a supported version!\" );\n\n    }\n}\n\n# Checks if MySQL version is equal to (major, minor, micro)\nsub mysql_version_eq {\n    my ( $maj, $min, $mic ) = @_;\n    my ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n\n    return int($mysqlvermajor) == int($maj)\n      if ( !defined($min) && !defined($mic) );\n    return int($mysqlvermajor) == int($maj) && int($mysqlverminor) == int($min)\n      if ( !defined($mic) );\n    return ( int($mysqlvermajor) == int($maj)\n          && int($mysqlverminor) == int($min)\n          && int($mysqlvermicro) == int($mic) );\n}\n\n# Checks if MySQL version is greater than or equal to (major, minor, micro)\nsub mysql_version_ge {\n    my ( $maj, $min, $mic ) = @_;\n    $min ||= 0;\n    $mic ||= 0;\n    my ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n\n    
return\n         int($mysqlvermajor) > int($maj)\n      || ( int($mysqlvermajor) == int($maj) && int($mysqlverminor) > int($min) )\n      || ( int($mysqlvermajor) == int($maj)\n        && int($mysqlverminor) == int($min)\n        && int($mysqlvermicro) >= int($mic) );\n}\n\n# Checks if MySQL version is lower than or equal to (major, minor, micro)\nsub mysql_version_le {\n    my ( $maj, $min, $mic ) = @_;\n    $min ||= 0;\n    $mic ||= 0;\n    my ( $mysqlvermajor, $mysqlverminor, $mysqlvermicro ) =\n      $myvar{'version'} =~ /^(\\d+)(?:\\.(\\d+)|)(?:\\.(\\d+)|)/;\n    return\n         int($mysqlvermajor) < int($maj)\n      || ( int($mysqlvermajor) == int($maj) && int($mysqlverminor) < int($min) )\n      || ( int($mysqlvermajor) == int($maj)\n        && int($mysqlverminor) == int($min)\n        && int($mysqlvermicro) <= int($mic) );\n}\n\n# Checks for 32-bit boxes with more than 2GB of RAM\nmy ($arch);\n\nsub check_architecture {\n    if ( is_remote eq 1 ) {\n        infoprint \"Skipping architecture check on remote host\";\n        infoprint \"Using default $opt{defaultarch} bits as target architecture\";\n        $arch = $opt{defaultarch};\n        return;\n    }\n    if ( `uname` =~ /SunOS/ && `isainfo -b` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` !~ /SunOS/ && `uname -m` =~ /(64|s390x)/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /AIX/ && `bootinfo -K` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /NetBSD|OpenBSD/ && `sysctl -b hw.machine` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /FreeBSD/ && `sysctl -b hw.machine_arch` =~ /64/ ) {\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /Darwin/ && `uname -m` =~ /Power 
Macintosh/ ) {\n\n# Darwin box.local 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:57:01 PDT 2009; root:xnu1228.15.4~1/RELEASE_PPC Power Macintosh\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    elsif ( `uname` =~ /Darwin/ && `uname -m` =~ /x86_64/ ) {\n\n# Darwin gibas.local 12.6.0 Darwin Kernel Version 12.3.0: Sun Jan 6 22:37:10 PST 2013; root:xnu-2050.22.13~1/RELEASE_X86_64 x86_64\n        $arch = 64;\n        goodprint \"Operating on 64-bit architecture\";\n    }\n    else {\n        $arch = 32;\n        if ( $physical_memory > 2147483648 ) {\n            badprint\n\"Switch to 64-bit OS - MySQL cannot currently use all of your RAM\";\n        }\n        else {\n            goodprint \"Operating on 32-bit architecture with less than 2GB RAM\";\n        }\n    }\n    $result{'OS'}{'Architecture'} = \"$arch bits\";\n\n}\n\n# Start up a ton of storage engine counts/statistics\nmy ( %enginestats, %enginecount, $fragtables );\n\nsub check_storage_engines {\n    subheaderprint \"Storage Engine Statistics\";\n    if ( $opt{skipsize} eq 1 ) {\n        infoprint \"Skipped due to --skipsize option\";\n        return;\n    }\n\n    my $engines;\n    if ( mysql_version_ge( 5, 5 ) ) {\n        my @engineresults = select_array\n\"SELECT ENGINE,SUPPORT FROM information_schema.ENGINES ORDER BY ENGINE ASC\";\n        foreach my $line (@engineresults) {\n            my ( $engine, $engineenabled );\n            ( $engine, $engineenabled ) = $line =~ /([a-zA-Z_]*)\\s+([a-zA-Z]+)/;\n            $result{'Engine'}{$engine}{'Enabled'} = $engineenabled;\n            $engines .=\n              ( $engineenabled eq \"YES\" || $engineenabled eq \"DEFAULT\" )\n              ? greenwrap \"+\" . $engine . \" \"\n              : redwrap \"-\" . $engine . 
\" \";\n        }\n    }\n    elsif ( mysql_version_ge( 5, 1, 5 ) ) {\n        my @engineresults = select_array\n\"SELECT ENGINE, SUPPORT FROM information_schema.ENGINES WHERE ENGINE NOT IN ('MyISAM', 'MERGE', 'MEMORY') ORDER BY ENGINE\";\n        foreach my $line (@engineresults) {\n            my ( $engine, $engineenabled );\n            ( $engine, $engineenabled ) = $line =~ /([a-zA-Z_]*)\\s+([a-zA-Z]+)/;\n            $result{'Engine'}{$engine}{'Enabled'} = $engineenabled;\n            $engines .=\n              ( $engineenabled eq \"YES\" || $engineenabled eq \"DEFAULT\" )\n              ? greenwrap \"+\" . $engine . \" \"\n              : redwrap \"-\" . $engine . \" \";\n        }\n    }\n    else {\n        $engines .=\n          ( defined $myvar{'have_archive'} && $myvar{'have_archive'} eq \"YES\" )\n          ? greenwrap \"+Archive \"\n          : redwrap \"-Archive \";\n        $engines .=\n          ( defined $myvar{'have_bdb'} && $myvar{'have_bdb'} eq \"YES\" )\n          ? greenwrap \"+BDB \"\n          : redwrap \"-BDB \";\n        $engines .=\n          ( defined $myvar{'have_federated_engine'}\n              && $myvar{'have_federated_engine'} eq \"YES\" )\n          ? greenwrap \"+Federated \"\n          : redwrap \"-Federated \";\n        $engines .=\n          ( defined $myvar{'have_innodb'} && $myvar{'have_innodb'} eq \"YES\" )\n          ? greenwrap \"+InnoDB \"\n          : redwrap \"-InnoDB \";\n        $engines .=\n          ( defined $myvar{'have_isam'} && $myvar{'have_isam'} eq \"YES\" )\n          ? greenwrap \"+ISAM \"\n          : redwrap \"-ISAM \";\n        $engines .=\n          ( defined $myvar{'have_ndbcluster'}\n              && $myvar{'have_ndbcluster'} eq \"YES\" )\n          ? 
greenwrap \"+NDBCluster \"\n          : redwrap \"-NDBCluster \";\n    }\n\n    my @dblist = grep { $_ ne 'lost+found' } select_array \"SHOW DATABASES\";\n\n    $result{'Databases'}{'List'} = [@dblist];\n    infoprint \"Status: $engines\";\n    if ( mysql_version_ge( 5, 1, 5 ) ) {\n\n# MySQL 5+ servers can have table sizes calculated quickly from information schema\n        my @templist = select_array\n\"SELECT ENGINE, SUM(DATA_LENGTH+INDEX_LENGTH), COUNT(ENGINE), SUM(DATA_LENGTH), SUM(INDEX_LENGTH) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql') AND ENGINE IS NOT NULL GROUP BY ENGINE ORDER BY ENGINE ASC;\";\n\n        my ( $engine, $size, $count, $dsize, $isize );\n        foreach my $line (@templist) {\n            ( $engine, $size, $count, $dsize, $isize ) =\n              $line =~ /([a-zA-Z_]+)\\s+(\\d+)\\s+(\\d+)\\s+(\\d+)\\s+(\\d+)/;\n            next unless defined($engine) and trim($engine) ne '';\n            debugprint \"Engine Found: $engine\";\n            $size  = 0 unless defined($size);\n            $isize = 0 unless defined($isize);\n            $dsize = 0 unless defined($dsize);\n            $count = 0 unless defined($count);\n            $enginestats{$engine}                      = $size;\n            $enginecount{$engine}                      = $count;\n            $result{'Engine'}{$engine}{'Table Number'} = $count;\n            $result{'Engine'}{$engine}{'Total Size'}   = $size;\n            $result{'Engine'}{$engine}{'Data Size'}    = $dsize;\n            $result{'Engine'}{$engine}{'Index Size'}   = $isize;\n        }\n\n        #print Dumper( \\%enginestats ) if $opt{debug};\n        my $not_innodb = '';\n        if ( not defined $result{'Variables'}{'innodb_file_per_table'} ) {\n            $not_innodb = \"AND NOT ENGINE='InnoDB'\";\n        }\n        elsif ( 
$result{'Variables'}{'innodb_file_per_table'} eq 'OFF' ) {\n            $not_innodb = \"AND NOT ENGINE='InnoDB'\";\n        }\n        $result{'Tables'}{'Fragmented tables'} =\n          [ select_array\n\"SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE, CAST(DATA_FREE AS SIGNED) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql') AND DATA_LENGTH/1024/1024>100 AND cast(DATA_FREE as signed)*100/(DATA_LENGTH+INDEX_LENGTH+cast(DATA_FREE as signed)) > 10 AND NOT ENGINE='MEMORY' $not_innodb\"\n          ];\n        $fragtables = scalar @{ $result{'Tables'}{'Fragmented tables'} };\n\n    }\n    else {\n\n        # MySQL < 5 servers take a lot of work to get table sizes\n        my @tblist;\n\n# Now we build a database list, and loop through it to get storage engine stats for tables\n        foreach my $db (@dblist) {\n            chomp($db);\n            if (   $db eq \"information_schema\"\n                or $db eq \"performance_schema\"\n                or $db eq \"mysql\"\n                or $db eq \"lost+found\" )\n            {\n                next;\n            }\n            my @ixs = ( 1, 6, 9 );\n            if ( !mysql_version_ge( 4, 1 ) ) {\n\n                # MySQL 3.23/4.0 keeps Data_Length in the 5th (0-based) column\n                @ixs = ( 1, 5, 8 );\n            }\n            push( @tblist,\n                map { [ (split)[@ixs] ] }\n                  select_array \"SHOW TABLE STATUS FROM \\\\\\`$db\\\\\\`\" );\n        }\n\n     # Parse through the table list to generate storage engine counts/statistics\n        $fragtables = 0;\n        foreach my $tbl (@tblist) {\n\n            #debugprint \"Data dump \" . 
Dumper(@$tbl) if $opt{debug};\n            my ( $engine, $size, $datafree ) = @$tbl;\n            next if not defined($engine) or $engine eq 'NULL';\n            $size     = 0 if not defined($size) or $size eq 'NULL';\n            $datafree = 0 if not defined($datafree) or $datafree eq 'NULL';\n            if ( defined $enginestats{$engine} ) {\n                $enginestats{$engine} += $size;\n                $enginecount{$engine} += 1;\n            }\n            else {\n                $enginestats{$engine} = $size;\n                $enginecount{$engine} = 1;\n            }\n            if ( $datafree > 0 ) {\n                $fragtables++;\n            }\n        }\n    }\n    while ( my ( $engine, $size ) = each(%enginestats) ) {\n        infoprint \"Data in $engine tables: \"\n          . hr_bytes($size)\n          . \" (Tables: \"\n          . $enginecount{$engine} . \")\" . \"\";\n    }\n\n    # If the storage engine isn't being used, recommend it to be disabled\n    if (  !defined $enginestats{'InnoDB'}\n        && defined $myvar{'have_innodb'}\n        && $myvar{'have_innodb'} eq \"YES\" )\n    {\n        badprint \"InnoDB is enabled, but isn't being used\";\n        push( @generalrec,\n            \"Add skip-innodb to MySQL configuration to disable InnoDB\" );\n    }\n    if (  !defined $enginestats{'BerkeleyDB'}\n        && defined $myvar{'have_bdb'}\n        && $myvar{'have_bdb'} eq \"YES\" )\n    {\n        badprint \"BDB is enabled, but isn't being used\";\n        push( @generalrec,\n            \"Add skip-bdb to MySQL configuration to disable BDB\" );\n    }\n    if (  !defined $enginestats{'ISAM'}\n        && defined $myvar{'have_isam'}\n        && $myvar{'have_isam'} eq \"YES\" )\n    {\n        badprint \"ISAM is enabled, but isn't being used\";\n        push( @generalrec,\n\"Add skip-isam to MySQL configuration to disable ISAM (MySQL > 4.1.0)\"\n        );\n    }\n\n    # Fragmented tables\n    if ( $fragtables > 0 ) {\n        badprint 
\"Total fragmented tables: $fragtables\";\n        push @generalrec,\n'Run ALTER TABLE ... FORCE or OPTIMIZE TABLE to defragment tables for better performance';\n        my $total_free = 0;\n        foreach my $table_line ( @{ $result{'Tables'}{'Fragmented tables'} } ) {\n            my ( $table_schema, $table_name, $engine, $data_free ) =\n              split /\\t/msx, $table_line;\n            $data_free = $data_free / 1024 / 1024;\n            $total_free += $data_free;\n            my $generalrec;\n            if ( $engine eq 'InnoDB' ) {\n                $generalrec =\n                  \"  ALTER TABLE `$table_schema`.`$table_name` FORCE;\";\n            }\n            else {\n                $generalrec = \"  OPTIMIZE TABLE `$table_schema`.`$table_name`;\";\n            }\n            $generalrec .= \" -- can free $data_free MiB\";\n            push @generalrec, $generalrec;\n        }\n        push @generalrec,\n          \"Total freed space after defragmentation: $total_free MiB\";\n    }\n    else {\n        goodprint \"Total fragmented tables: $fragtables\";\n    }\n\n    # Auto increments\n    my %tblist;\n\n    # Find the maximum integer\n    my $maxint = select_one \"SELECT ~0\";\n    $result{'MaxInt'} = $maxint;\n\n# Now we use a database list, and loop through it to get storage engine stats for tables\n    foreach my $db (@dblist) {\n        chomp($db);\n\n        if ( !$tblist{$db} ) {\n            $tblist{$db} = ();\n        }\n\n        if ( $db eq \"information_schema\" ) { next; }\n        my @ia = ( 0, 10 );\n        if ( !mysql_version_ge( 4, 1 ) ) {\n\n            # MySQL 3.23/4.0 keeps Data_Length in the 5th (0-based) column\n            @ia = ( 0, 9 );\n        }\n        push(\n            @{ $tblist{$db} },\n            map { [ (split)[@ia] ] }\n              select_array \"SHOW TABLE STATUS FROM \\\\\\`$db\\\\\\`\"\n        );\n    }\n\n    my @dbnames = keys %tblist;\n\n    foreach my $db (@dbnames) {\n        foreach my $tbl ( @{ 
$tblist{$db} } ) {\n            my ( $name, $autoincrement ) = @$tbl;\n\n            if ( $autoincrement =~ /^\\d+?$/ ) {\n                my $percent = percentage( $autoincrement, $maxint );\n                $result{'PctAutoIncrement'}{\"$db.$name\"} = $percent;\n                if ( $percent >= 75 ) {\n                    badprint\n\"Table '$db.$name' has an autoincrement value near max capacity ($percent%)\";\n                }\n            }\n        }\n    }\n}\n\nmy %mycalc;\n\nsub dump_into_file {\n    my $file    = shift;\n    my $content = shift;\n    if ( -d \"$opt{dumpdir}\" ) {\n        $file = \"$opt{dumpdir}/$file\";\n        open( FILE, \">$file\" ) or die \"Can't open $file: $!\";\n        print FILE $content;\n        close FILE;\n        infoprint \"Data saved to $file\";\n    }\n}\n\nsub calculations {\n    if ( $mystat{'Questions'} < 1 ) {\n        badprint \"Your server has not answered any queries: cannot continue...\";\n        exit 2;\n    }\n\n    # Per-thread memory\n    $mycalc{'per_thread_buffers'} = 0;\n    $mycalc{'per_thread_buffers'} += $myvar{'read_buffer_size'}\n      if is_int( $myvar{'read_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'read_rnd_buffer_size'}\n      if is_int( $myvar{'read_rnd_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'sort_buffer_size'}\n      if is_int( $myvar{'sort_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'thread_stack'}\n      if is_int( $myvar{'thread_stack'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'join_buffer_size'}\n      if is_int( $myvar{'join_buffer_size'} );\n    $mycalc{'per_thread_buffers'} += $myvar{'binlog_cache_size'}\n      if is_int( $myvar{'binlog_cache_size'} );\n    debugprint \"per_thread_buffers: $mycalc{'per_thread_buffers'} (\"\n      . human_size( $mycalc{'per_thread_buffers'} ) . 
\" )\";\n\n# Error max_allowed_packet is not included in thread buffers size\n#$mycalc{'per_thread_buffers'} += $myvar{'max_allowed_packet'} if is_int($myvar{'max_allowed_packet'});\n\n    # Total per-thread memory\n    $mycalc{'total_per_thread_buffers'} =\n      $mycalc{'per_thread_buffers'} * $myvar{'max_connections'};\n\n    # Max total per-thread memory reached\n    $mycalc{'max_total_per_thread_buffers'} =\n      $mycalc{'per_thread_buffers'} * $mystat{'Max_used_connections'};\n\n    # Server-wide memory\n    $mycalc{'max_tmp_table_size'} =\n      ( $myvar{'tmp_table_size'} > $myvar{'max_heap_table_size'} )\n      ? $myvar{'max_heap_table_size'}\n      : $myvar{'tmp_table_size'};\n    $mycalc{'server_buffers'} =\n      $myvar{'key_buffer_size'} + $mycalc{'max_tmp_table_size'};\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'innodb_buffer_pool_size'} )\n      ? $myvar{'innodb_buffer_pool_size'}\n      : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'innodb_additional_mem_pool_size'} )\n      ? $myvar{'innodb_additional_mem_pool_size'}\n      : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'innodb_log_buffer_size'} )\n      ? $myvar{'innodb_log_buffer_size'}\n      : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'query_cache_size'} ) ? $myvar{'query_cache_size'} : 0;\n    $mycalc{'server_buffers'} +=\n      ( defined $myvar{'aria_pagecache_buffer_size'} )\n      ? 
$myvar{'aria_pagecache_buffer_size'}\n      : 0;\n\n# Global memory\n# Max used memory is memory used by MySQL based on Max_used_connections\n# This is the theoretical maximum memory usage, calculated from the highest concurrent connection count MySQL has reached\n    $mycalc{'max_used_memory'} =\n      $mycalc{'server_buffers'} +\n      $mycalc{\"max_total_per_thread_buffers\"} +\n      get_pf_memory();\n\n    #   + get_gcache_memory();\n    $mycalc{'pct_max_used_memory'} =\n      percentage( $mycalc{'max_used_memory'}, $physical_memory );\n\n# Total possible memory is memory needed by MySQL based on max_connections\n# This is the maximum memory MySQL could theoretically use if all allowed connections were opened\n    $mycalc{'max_peak_memory'} =\n      $mycalc{'server_buffers'} +\n      $mycalc{'total_per_thread_buffers'} +\n      get_pf_memory();\n\n    # +  get_gcache_memory();\n    $mycalc{'pct_max_physical_memory'} =\n      percentage( $mycalc{'max_peak_memory'}, $physical_memory );\n\n    debugprint \"Max Used Memory: \"\n      . hr_bytes( $mycalc{'max_used_memory'} ) . \"\";\n    debugprint \"Max Used Percentage RAM: \"\n      . $mycalc{'pct_max_used_memory'} . \"%\";\n\n    debugprint \"Max Peak Memory: \"\n      . hr_bytes( $mycalc{'max_peak_memory'} ) . \"\";\n    debugprint \"Max Peak Percentage RAM: \"\n      . $mycalc{'pct_max_physical_memory'} . \"%\";\n\n    # Slow queries\n    $mycalc{'pct_slow_queries'} =\n      int( ( $mystat{'Slow_queries'} / $mystat{'Questions'} ) * 100 );\n\n    # Connections\n    $mycalc{'pct_connections_used'} = int(\n        ( $mystat{'Max_used_connections'} / $myvar{'max_connections'} ) * 100 );\n    $mycalc{'pct_connections_used'} =\n      ( $mycalc{'pct_connections_used'} > 100 )\n      ? 100\n      : $mycalc{'pct_connections_used'};\n\n    # Aborted Connections\n    $mycalc{'pct_connections_aborted'} =\n      percentage( $mystat{'Aborted_connects'}, $mystat{'Connections'} );\n    debugprint \"Aborted_connects: \" . 
$mystat{'Aborted_connects'} . \"\";\n    debugprint \"Connections: \" . $mystat{'Connections'} . \"\";\n    debugprint \"pct_connections_aborted: \"\n      . $mycalc{'pct_connections_aborted'} . \"\";\n\n    # Key buffers\n    if ( mysql_version_ge( 4, 1 ) && $myvar{'key_buffer_size'} > 0 ) {\n        $mycalc{'pct_key_buffer_used'} = sprintf(\n            \"%.1f\",\n            (\n                1 - (\n                    (\n                        $mystat{'Key_blocks_unused'} *\n                          $myvar{'key_cache_block_size'}\n                    ) / $myvar{'key_buffer_size'}\n                )\n            ) * 100\n        );\n    }\n    else {\n        $mycalc{'pct_key_buffer_used'} = 0;\n    }\n\n    if ( $mystat{'Key_read_requests'} > 0 ) {\n        $mycalc{'pct_keys_from_mem'} = sprintf(\n            \"%.1f\",\n            (\n                100 - (\n                    ( $mystat{'Key_reads'} / $mystat{'Key_read_requests'} ) *\n                      100\n                )\n            )\n        );\n    }\n    else {\n        $mycalc{'pct_keys_from_mem'} = 0;\n    }\n    if ( defined $mystat{'Aria_pagecache_read_requests'}\n        && $mystat{'Aria_pagecache_read_requests'} > 0 )\n    {\n        $mycalc{'pct_aria_keys_from_mem'} = sprintf(\n            \"%.1f\",\n            (\n                100 - (\n                    (\n                        $mystat{'Aria_pagecache_reads'} /\n                          $mystat{'Aria_pagecache_read_requests'}\n                    ) * 100\n                )\n            )\n        );\n    }\n    else {\n        $mycalc{'pct_aria_keys_from_mem'} = 0;\n    }\n\n    if ( $mystat{'Key_write_requests'} > 0 ) {\n        $mycalc{'pct_wkeys_from_mem'} = sprintf( \"%.1f\",\n            ( ( $mystat{'Key_writes'} / $mystat{'Key_write_requests'} ) * 100 )\n        );\n    }\n    else {\n        $mycalc{'pct_wkeys_from_mem'} = 0;\n    }\n\n    if ( $doremote eq 0 and !mysql_version_ge(5) ) {\n        my $size = 0;\n        
$size += (split)[0]\n          for\n`find \"$myvar{'datadir'}\" -name \"*.MYI\" -print0 2>&1 | xargs $xargsflags -0 du -L $duflags 2>&1`;\n        $mycalc{'total_myisam_indexes'} = $size;\n        $size = 0;\n        $size += (split)[0]\n          for\n`find \"$myvar{'datadir'}\" -name \"*.MAI\" -print0 2>&1 | xargs $xargsflags -0 du -L $duflags 2>&1`;\n        $mycalc{'total_aria_indexes'} = $size;\n    }\n    elsif ( mysql_version_ge(5) ) {\n        $mycalc{'total_myisam_indexes'} = select_one\n\"SELECT IFNULL(SUM(INDEX_LENGTH), 0) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema') AND ENGINE = 'MyISAM';\";\n        $mycalc{'total_aria_indexes'} = select_one\n\"SELECT IFNULL(SUM(INDEX_LENGTH), 0) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema') AND ENGINE = 'Aria';\";\n    }\n    if ( defined $mycalc{'total_myisam_indexes'} ) {\n        chomp( $mycalc{'total_myisam_indexes'} );\n    }\n    if ( defined $mycalc{'total_aria_indexes'} ) {\n        chomp( $mycalc{'total_aria_indexes'} );\n    }\n\n    # Query cache\n    if ( mysql_version_ge(8) and mysql_version_le(10) ) {\n        $mycalc{'query_cache_efficiency'} = 0;\n    }\n    elsif ( mysql_version_ge(4) ) {\n        $mycalc{'query_cache_efficiency'} = sprintf(\n            \"%.1f\",\n            (\n                $mystat{'Qcache_hits'} /\n                  ( $mystat{'Com_select'} + $mystat{'Qcache_hits'} )\n            ) * 100\n        );\n        if ( $myvar{'query_cache_size'} ) {\n            $mycalc{'pct_query_cache_used'} = sprintf(\n                \"%.1f\",\n                100 - (\n                    $mystat{'Qcache_free_memory'} / $myvar{'query_cache_size'}\n                ) * 100\n            );\n        }\n        if ( $mystat{'Qcache_lowmem_prunes'} == 0 ) {\n            $mycalc{'query_cache_prunes_per_day'} = 0;\n        }\n        else {\n            $mycalc{'query_cache_prunes_per_day'} = int(\n                $mystat{'Qcache_lowmem_prunes'} / ( 
$mystat{'Uptime'} / 86400 )\n            );\n        }\n    }\n\n    # Sorting\n    $mycalc{'total_sorts'} = $mystat{'Sort_scan'} + $mystat{'Sort_range'};\n    if ( $mycalc{'total_sorts'} > 0 ) {\n        $mycalc{'pct_temp_sort_table'} = int(\n            ( $mystat{'Sort_merge_passes'} / $mycalc{'total_sorts'} ) * 100 );\n    }\n\n    # Joins\n    $mycalc{'joins_without_indexes'} =\n      $mystat{'Select_range_check'} + $mystat{'Select_full_join'};\n    $mycalc{'joins_without_indexes_per_day'} =\n      int( $mycalc{'joins_without_indexes'} / ( $mystat{'Uptime'} / 86400 ) );\n\n    # Temporary tables\n    if ( $mystat{'Created_tmp_tables'} > 0 ) {\n        if ( $mystat{'Created_tmp_disk_tables'} > 0 ) {\n            $mycalc{'pct_temp_disk'} = int(\n                (\n                    $mystat{'Created_tmp_disk_tables'} /\n                      $mystat{'Created_tmp_tables'}\n                ) * 100\n            );\n        }\n        else {\n            $mycalc{'pct_temp_disk'} = 0;\n        }\n    }\n\n    # Table cache\n    if ( $mystat{'Opened_tables'} > 0 ) {\n        if ( not defined( $mystat{'Table_open_cache_hits'} ) ) {\n            $mycalc{'table_cache_hit_rate'} =\n              int( $mystat{'Open_tables'} * 100 / $mystat{'Opened_tables'} );\n        }\n        else {\n            $mycalc{'table_cache_hit_rate'} = int(\n                $mystat{'Table_open_cache_hits'} * 100 / (\n                    $mystat{'Table_open_cache_hits'} +\n                      $mystat{'Table_open_cache_misses'}\n                )\n            );\n        }\n    }\n    else {\n        $mycalc{'table_cache_hit_rate'} = 100;\n    }\n\n    # Open files\n    if ( $myvar{'open_files_limit'} > 0 ) {\n        $mycalc{'pct_files_open'} =\n          int( $mystat{'Open_files'} * 100 / $myvar{'open_files_limit'} );\n    }\n\n    # Table locks\n    if ( $mystat{'Table_locks_immediate'} > 0 ) {\n        if ( $mystat{'Table_locks_waited'} == 0 ) {\n            
$mycalc{'pct_table_locks_immediate'} = 100;\n        }\n        else {\n            $mycalc{'pct_table_locks_immediate'} = int(\n                $mystat{'Table_locks_immediate'} * 100 / (\n                    $mystat{'Table_locks_waited'} +\n                      $mystat{'Table_locks_immediate'}\n                )\n            );\n        }\n    }\n\n    # Thread cache\n    $mycalc{'thread_cache_hit_rate'} =\n      int( 100 -\n          ( ( $mystat{'Threads_created'} / $mystat{'Connections'} ) * 100 ) );\n\n    # Other\n    if ( $mystat{'Connections'} > 0 ) {\n        $mycalc{'pct_aborted_connections'} =\n          int( ( $mystat{'Aborted_connects'} / $mystat{'Connections'} ) * 100 );\n    }\n    if ( $mystat{'Questions'} > 0 ) {\n        $mycalc{'total_reads'} = $mystat{'Com_select'};\n        $mycalc{'total_writes'} =\n          $mystat{'Com_delete'} +\n          $mystat{'Com_insert'} +\n          $mystat{'Com_update'} +\n          $mystat{'Com_replace'};\n        if ( $mycalc{'total_reads'} == 0 ) {\n            $mycalc{'pct_reads'}  = 0;\n            $mycalc{'pct_writes'} = 100;\n        }\n        else {\n            $mycalc{'pct_reads'} = int(\n                (\n                    $mycalc{'total_reads'} /\n                      ( $mycalc{'total_reads'} + $mycalc{'total_writes'} )\n                ) * 100\n            );\n            $mycalc{'pct_writes'} = 100 - $mycalc{'pct_reads'};\n        }\n    }\n\n    # InnoDB\n    $myvar{'innodb_log_files_in_group'} = 1\n      unless defined( $myvar{'innodb_log_files_in_group'} );\n    $myvar{'innodb_log_files_in_group'} = 1\n      if $myvar{'innodb_log_files_in_group'} == 0;\n\n    $myvar{\"innodb_buffer_pool_instances\"} = 1\n      unless defined( $myvar{'innodb_buffer_pool_instances'} );\n    if ( $myvar{'have_innodb'} eq \"YES\" ) {\n        $mycalc{'innodb_log_size_pct'} =\n          ( $myvar{'innodb_log_file_size'} *\n              $myvar{'innodb_log_files_in_group'} * 100 /\n              
$myvar{'innodb_buffer_pool_size'} );\n    }\n    if ( !defined $myvar{'innodb_buffer_pool_size'} ) {\n        $mycalc{'innodb_log_size_pct'}    = 0;\n        $myvar{'innodb_buffer_pool_size'} = 0;\n    }\n\n    # InnoDB Buffer pool read cache efficiency\n    (\n        $mystat{'Innodb_buffer_pool_read_requests'},\n        $mystat{'Innodb_buffer_pool_reads'}\n      )\n      = ( 1, 1 )\n      unless defined $mystat{'Innodb_buffer_pool_reads'};\n    $mycalc{'pct_read_efficiency'} = percentage(\n        $mystat{'Innodb_buffer_pool_read_requests'},\n        (\n            $mystat{'Innodb_buffer_pool_read_requests'} +\n              $mystat{'Innodb_buffer_pool_reads'}\n        )\n    ) if defined $mystat{'Innodb_buffer_pool_read_requests'};\n    debugprint \"pct_read_efficiency: \" . $mycalc{'pct_read_efficiency'} . \"\";\n    debugprint \"Innodb_buffer_pool_reads: \"\n      . $mystat{'Innodb_buffer_pool_reads'} . \"\";\n    debugprint \"Innodb_buffer_pool_read_requests: \"\n      . $mystat{'Innodb_buffer_pool_read_requests'} . \"\";\n\n    # InnoDB log write cache efficiency\n    ( $mystat{'Innodb_log_write_requests'}, $mystat{'Innodb_log_writes'} ) =\n      ( 1, 1 )\n      unless defined $mystat{'Innodb_log_writes'};\n    $mycalc{'pct_write_efficiency'} = percentage(\n        ( $mystat{'Innodb_log_write_requests'} - $mystat{'Innodb_log_writes'} ),\n        $mystat{'Innodb_log_write_requests'}\n    ) if defined $mystat{'Innodb_log_write_requests'};\n    debugprint \"pct_write_efficiency: \" . $mycalc{'pct_write_efficiency'} . \"\";\n    debugprint \"Innodb_log_writes: \" . $mystat{'Innodb_log_writes'} . \"\";\n    debugprint \"Innodb_log_write_requests: \"\n      . $mystat{'Innodb_log_write_requests'} . 
\"\";\n    $mycalc{'pct_innodb_buffer_used'} = percentage(\n        (\n            $mystat{'Innodb_buffer_pool_pages_total'} -\n              $mystat{'Innodb_buffer_pool_pages_free'}\n        ),\n        $mystat{'Innodb_buffer_pool_pages_total'}\n    ) if defined $mystat{'Innodb_buffer_pool_pages_total'};\n\n    my $lreq =\n        \"select  ROUND( 100* sum(allocated)/ \"\n      . $myvar{'innodb_buffer_pool_size'}\n      . ',1) FROM sys.x\\$innodb_buffer_stats_by_table;';\n    debugprint(\"lreq: $lreq\");\n    $mycalc{'innodb_buffer_alloc_pct'} = select_one($lreq)\n      if ( $opt{experimental} );\n\n    # Binlog Cache\n    if ( $myvar{'log_bin'} ne 'OFF' ) {\n        $mycalc{'pct_binlog_cache'} = percentage(\n            $mystat{'Binlog_cache_use'} - $mystat{'Binlog_cache_disk_use'},\n            $mystat{'Binlog_cache_use'} );\n    }\n}\n\nsub mysql_stats {\n    subheaderprint \"Performance Metrics\";\n\n    # Show uptime, queries per second, connections, traffic stats\n    my $qps;\n    if ( $mystat{'Uptime'} > 0 ) {\n        $qps = sprintf( \"%.3f\", $mystat{'Questions'} / $mystat{'Uptime'} );\n    }\n    push( @generalrec,\n\"MySQL was started within the last 24 hours: recommendations may be inaccurate\"\n    ) if ( $mystat{'Uptime'} < 86400 );\n    infoprint \"Up for: \"\n      . pretty_uptime( $mystat{'Uptime'} ) . \" (\"\n      . hr_num( $mystat{'Questions'} ) . \" q [\"\n      . hr_num($qps)\n      . \" qps], \"\n      . hr_num( $mystat{'Connections'} )\n      . \" conn,\" . \" TX: \"\n      . hr_bytes_rnd( $mystat{'Bytes_sent'} )\n      . \", RX: \"\n      . hr_bytes_rnd( $mystat{'Bytes_received'} ) . \")\";\n    infoprint \"Reads / Writes: \"\n      . $mycalc{'pct_reads'} . \"% / \"\n      . $mycalc{'pct_writes'} . \"%\";\n\n    # Binlog Cache\n    if ( $myvar{'log_bin'} eq 'OFF' ) {\n        infoprint \"Binary logging is disabled\";\n    }\n    else {\n        infoprint \"Binary logging is enabled (GTID MODE: \"\n          . 
( defined( $myvar{'gtid_mode'} ) ? $myvar{'gtid_mode'} : \"OFF\" )\n          . \")\";\n    }\n\n    # Memory usage\n    infoprint \"Physical Memory     : \" . hr_bytes($physical_memory);\n    infoprint \"Max MySQL memory    : \" . hr_bytes( $mycalc{'max_peak_memory'} );\n    infoprint \"Other process memory: \" . hr_bytes( get_other_process_memory() );\n\n    infoprint \"Total buffers: \"\n      . hr_bytes( $mycalc{'server_buffers'} )\n      . \" global + \"\n      . hr_bytes( $mycalc{'per_thread_buffers'} )\n      . \" per thread ($myvar{'max_connections'} max threads)\";\n    infoprint \"Performance_schema Max memory usage: \"\n      . hr_bytes_rnd( get_pf_memory() );\n    $result{'Performance_schema'}{'memory'} = get_pf_memory();\n    $result{'Performance_schema'}{'pretty_memory'} =\n      hr_bytes_rnd( get_pf_memory() );\n    infoprint \"Galera GCache Max memory usage: \"\n      . hr_bytes_rnd( get_gcache_memory() );\n    $result{'Galera'}{'GCache'}{'memory'} = get_gcache_memory();\n    $result{'Galera'}{'GCache'}{'pretty_memory'} =\n      hr_bytes_rnd( get_gcache_memory() );\n\n    if ( $opt{buffers} ne 0 ) {\n        infoprint \"Global Buffers\";\n        infoprint \" +-- Key Buffer: \"\n          . hr_bytes( $myvar{'key_buffer_size'} ) . \"\";\n        infoprint \" +-- Max Tmp Table: \"\n          . hr_bytes( $mycalc{'max_tmp_table_size'} ) . \"\";\n\n        if ( defined $myvar{'query_cache_type'} ) {\n            infoprint \"Query Cache Buffers\";\n            infoprint \" +-- Query Cache: \"\n              . $myvar{'query_cache_type'} . \" - \"\n              . (\n                $myvar{'query_cache_type'} eq 0 ||\n                  $myvar{'query_cache_type'} eq 'OFF' ? \"DISABLED\"\n                : (\n                    $myvar{'query_cache_type'} eq 1 ? \"ALL REQUESTS\"\n                    : \"ON DEMAND\"\n                )\n              ) . \"\";\n            infoprint \" +-- Query Cache Size: \"\n              . 
hr_bytes( $myvar{'query_cache_size'} ) . \"\";\n        }\n\n        infoprint \"Per Thread Buffers\";\n        infoprint \" +-- Read Buffer: \"\n          . hr_bytes( $myvar{'read_buffer_size'} ) . \"\";\n        infoprint \" +-- Read RND Buffer: \"\n          . hr_bytes( $myvar{'read_rnd_buffer_size'} ) . \"\";\n        infoprint \" +-- Sort Buffer: \"\n          . hr_bytes( $myvar{'sort_buffer_size'} ) . \"\";\n        infoprint \" +-- Thread stack: \"\n          . hr_bytes( $myvar{'thread_stack'} ) . \"\";\n        infoprint \" +-- Join Buffer: \"\n          . hr_bytes( $myvar{'join_buffer_size'} ) . \"\";\n        if ( $myvar{'log_bin'} ne 'OFF' ) {\n            infoprint \"Binlog Cache Buffers\";\n            infoprint \" +-- Binlog Cache: \"\n              . hr_bytes( $myvar{'binlog_cache_size'} ) . \"\";\n        }\n    }\n\n    if (   $arch\n        && $arch == 32\n        && $mycalc{'max_used_memory'} > 2 * 1024 * 1024 * 1024 )\n    {\n        badprint\n          \"Allocating > 2GB RAM on 32-bit systems can cause system instability\";\n        badprint \"Maximum reached memory usage: \"\n          . hr_bytes( $mycalc{'max_used_memory'} )\n          . \" ($mycalc{'pct_max_used_memory'}% of installed RAM)\";\n    }\n    elsif ( $mycalc{'pct_max_used_memory'} > 85 ) {\n        badprint \"Maximum reached memory usage: \"\n          . hr_bytes( $mycalc{'max_used_memory'} )\n          . \" ($mycalc{'pct_max_used_memory'}% of installed RAM)\";\n    }\n    else {\n        goodprint \"Maximum reached memory usage: \"\n          . hr_bytes( $mycalc{'max_used_memory'} )\n          . \" ($mycalc{'pct_max_used_memory'}% of installed RAM)\";\n    }\n\n    if ( $mycalc{'pct_max_physical_memory'} > 85 ) {\n        badprint \"Maximum possible memory usage: \"\n          . hr_bytes( $mycalc{'max_peak_memory'} )\n          . 
\" ($mycalc{'pct_max_physical_memory'}% of installed RAM)\";\n        push( @generalrec,\n            \"Reduce your overall MySQL memory footprint for system stability\" );\n    }\n    else {\n        goodprint \"Maximum possible memory usage: \"\n          . hr_bytes( $mycalc{'max_peak_memory'} )\n          . \" ($mycalc{'pct_max_physical_memory'}% of installed RAM)\";\n    }\n\n    if ( $physical_memory <\n        ( $mycalc{'max_peak_memory'} + get_other_process_memory() ) )\n    {\n        if ( $opt{nondedicated} ) {\n            infoprint \"No warning with --nondedicated option\";\n            infoprint\n\"Overall possible memory usage with other process exceeded memory\";\n        }\n        else {\n            badprint\n\"Overall possible memory usage with other process exceeded memory\";\n            push( @generalrec,\n                \"Dedicate this server to your database for highest performance.\"\n            );\n        }\n    }\n    else {\n        goodprint\n\"Overall possible memory usage with other process is compatible with memory available\";\n    }\n\n    # Slow queries\n    if ( $mycalc{'pct_slow_queries'} > 5 ) {\n        badprint \"Slow queries: $mycalc{'pct_slow_queries'}% (\"\n          . hr_num( $mystat{'Slow_queries'} ) . \"/\"\n          . hr_num( $mystat{'Questions'} ) . \")\";\n    }\n    else {\n        goodprint \"Slow queries: $mycalc{'pct_slow_queries'}% (\"\n          . hr_num( $mystat{'Slow_queries'} ) . \"/\"\n          . hr_num( $mystat{'Questions'} ) . 
\")\";\n    }\n    if ( $myvar{'long_query_time'} > 10 ) {\n        push( @adjvars, \"long_query_time (<= 10)\" );\n    }\n    if ( defined( $myvar{'log_slow_queries'} ) ) {\n        if ( $myvar{'log_slow_queries'} eq \"OFF\" ) {\n            push( @generalrec,\n                \"Enable the slow query log to troubleshoot bad queries\" );\n        }\n    }\n\n    # Connections\n    if ( $mycalc{'pct_connections_used'} > 85 ) {\n        badprint\n\"Highest connection usage: $mycalc{'pct_connections_used'}% ($mystat{'Max_used_connections'}/$myvar{'max_connections'})\";\n        push( @adjvars,\n            \"max_connections (> \" . $myvar{'max_connections'} . \")\" );\n        push( @adjvars,\n            \"wait_timeout (< \" . $myvar{'wait_timeout'} . \")\",\n            \"interactive_timeout (< \" . $myvar{'interactive_timeout'} . \")\" );\n        push( @generalrec,\n\"Reduce or eliminate persistent connections to reduce connection usage\"\n        );\n    }\n    else {\n        goodprint\n\"Highest usage of available connections: $mycalc{'pct_connections_used'}% ($mystat{'Max_used_connections'}/$myvar{'max_connections'})\";\n    }\n\n    # Aborted Connections\n    if ( $mycalc{'pct_connections_aborted'} > 3 ) {\n        badprint\n\"Aborted connections: $mycalc{'pct_connections_aborted'}% ($mystat{'Aborted_connects'}/$mystat{'Connections'})\";\n        push( @generalrec,\n            \"Reduce or eliminate unclosed connections and network issues\" );\n    }\n    else {\n        goodprint\n\"Aborted connections: $mycalc{'pct_connections_aborted'}% ($mystat{'Aborted_connects'}/$mystat{'Connections'})\";\n    }\n\n    # name resolution\n    debugprint \"skip name resolve: $result{'Variables'}{'skip_name_resolve'}\"\n      if ( defined( $result{'Variables'}{'skip_name_resolve'} ) );\n    if ( defined( $result{'Variables'}{'skip_networking'} )\n        && $result{'Variables'}{'skip_networking'} eq 'ON' )\n    {\n        infoprint\n\"Skipped name resolution test due to 
skip_networking=ON in system variables.\";\n    }\n    elsif ( not defined( $result{'Variables'}{'skip_name_resolve'} ) ) {\n        infoprint\n\"Skipped name resolution test due to missing skip_name_resolve in system variables.\";\n    }\n\n    #Cpanel and Skip name resolve\n    elsif ( -r \"/usr/local/cpanel/cpanel\" ) {\n        if ( $result{'Variables'}{'skip_name_resolve'} ne 'OFF' ) {\n            infoprint \"CPanel and Flex system skip-name-resolve should be on\";\n        }\n        if ( $result{'Variables'}{'skip_name_resolve'} eq 'OFF' ) {\n            badprint \"CPanel and Flex system skip-name-resolve should be on\";\n            push( @generalrec,\n\"Name resolution is enabled because cPanel does not support disabling it.\"\n            );\n            push( @adjvars, \"skip-name-resolve=0\" );\n        }\n    }\n    elsif ( $result{'Variables'}{'skip_name_resolve'} ne 'ON'\n        and $result{'Variables'}{'skip_name_resolve'} ne '1' )\n    {\n        badprint\n\"Name resolution is active: a reverse name resolution is made for each new connection which can reduce performance\";\n        push( @generalrec,\n\"Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=ON\"\n        );\n        push( @adjvars, \"skip-name-resolve=ON\" );\n    }\n\n    # Query cache\n    if ( !mysql_version_ge(4) ) {\n\n        # MySQL versions < 4.01 don't support query caching\n        push( @generalrec,\n            \"Upgrade MySQL to version 4+ to utilize query caching\" );\n    }\n    elsif ( mysql_version_eq(8) ) {\n        infoprint \"Query cache has been removed in MySQL 8.0\";\n\n        #return;\n    }\n    elsif ($myvar{'query_cache_size'} < 1\n        or $myvar{'query_cache_type'} eq \"OFF\" )\n    {\n        goodprint\n\"Query cache is disabled by default due to mutex contention on multiprocessor machines.\";\n    }\n    elsif ( $mystat{'Com_select'} == 0 ) {\n        badprint\n          \"Query cache cannot be 
analyzed: no SELECT statements executed\";\n    }\n    else {\n        if ( $mycalc{'query_cache_efficiency'} < 20 ) {\n            badprint\n              \"Query cache efficiency: $mycalc{'query_cache_efficiency'}% (\"\n              . hr_num( $mystat{'Qcache_hits'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Qcache_hits'} + $mystat{'Com_select'} )\n              . \" selects)\";\n            push( @adjvars,\n                    \"query_cache_limit (> \"\n                  . hr_bytes_rnd( $myvar{'query_cache_limit'} )\n                  . \", or use smaller result sets)\" );\n            badprint\n              \"Query cache may be disabled by default due to mutex contention.\";\n            push( @adjvars, \"query_cache_size (=0)\" );\n            push( @adjvars, \"query_cache_type (=0)\" );\n        }\n        else {\n            goodprint\n              \"Query cache efficiency: $mycalc{'query_cache_efficiency'}% (\"\n              . hr_num( $mystat{'Qcache_hits'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Qcache_hits'} + $mystat{'Com_select'} )\n              . \" selects)\";\n            if ( $mycalc{'query_cache_prunes_per_day'} > 98 ) {\n                badprint\n\"Query cache prunes per day: $mycalc{'query_cache_prunes_per_day'}\";\n                if ( $myvar{'query_cache_size'} >= 128 * 1024 * 1024 ) {\n                    push( @generalrec,\n\"Increasing the query_cache size over 128M may reduce performance\"\n                    );\n                    push( @adjvars,\n                            \"query_cache_size (> \"\n                          . hr_bytes_rnd( $myvar{'query_cache_size'} )\n                          . \") [see warning above]\" );\n                }\n                else {\n                    push( @adjvars,\n                            \"query_cache_size (> \"\n                          . hr_bytes_rnd( $myvar{'query_cache_size'} )\n                          . 
\")\" );\n                }\n            }\n            else {\n                goodprint\n\"Query cache prunes per day: $mycalc{'query_cache_prunes_per_day'}\";\n            }\n        }\n\n    }\n\n    # Sorting\n    if ( $mycalc{'total_sorts'} == 0 ) {\n        goodprint \"No Sort requiring temporary tables\";\n    }\n    elsif ( $mycalc{'pct_temp_sort_table'} > 10 ) {\n        badprint\n          \"Sorts requiring temporary tables: $mycalc{'pct_temp_sort_table'}% (\"\n          . hr_num( $mystat{'Sort_merge_passes'} )\n          . \" temp sorts / \"\n          . hr_num( $mycalc{'total_sorts'} )\n          . \" sorts)\";\n        push( @adjvars,\n                \"sort_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'sort_buffer_size'} )\n              . \")\" );\n        push( @adjvars,\n                \"read_rnd_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'read_rnd_buffer_size'} )\n              . \")\" );\n    }\n    else {\n        goodprint\n          \"Sorts requiring temporary tables: $mycalc{'pct_temp_sort_table'}% (\"\n          . hr_num( $mystat{'Sort_merge_passes'} )\n          . \" temp sorts / \"\n          . hr_num( $mycalc{'total_sorts'} )\n          . \" sorts)\";\n    }\n\n    # Joins\n    if ( $mycalc{'joins_without_indexes_per_day'} > 250 ) {\n        badprint\n          \"Joins performed without indexes: $mycalc{'joins_without_indexes'}\";\n        push( @adjvars,\n                \"join_buffer_size (> \"\n              . hr_bytes( $myvar{'join_buffer_size'} )\n              . 
\", or always use indexes with JOINs)\" );\n        push(\n            @generalrec,\n\"We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.\n             See https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_join_buffer_size\"\n        );\n    }\n    else {\n        goodprint \"No joins without indexes\";\n\n        # No joins have run without indexes\n    }\n\n    # Temporary tables\n    if ( $mystat{'Created_tmp_tables'} > 0 ) {\n        if (   $mycalc{'pct_temp_disk'} > 25\n            && $mycalc{'max_tmp_table_size'} < 256 * 1024 * 1024 )\n        {\n            badprint\n              \"Temporary tables created on disk: $mycalc{'pct_temp_disk'}% (\"\n              . hr_num( $mystat{'Created_tmp_disk_tables'} )\n              . \" on disk / \"\n              . hr_num( $mystat{'Created_tmp_tables'} )\n              . \" total)\";\n            push( @adjvars,\n                    \"tmp_table_size (> \"\n                  . hr_bytes_rnd( $myvar{'tmp_table_size'} )\n                  . \")\" );\n            push( @adjvars,\n                    \"max_heap_table_size (> \"\n                  . hr_bytes_rnd( $myvar{'max_heap_table_size'} )\n                  . \")\" );\n            push( @generalrec,\n\"When making adjustments, make tmp_table_size/max_heap_table_size equal\"\n            );\n            push( @generalrec,\n                \"Reduce your SELECT DISTINCT queries which have no LIMIT clause\"\n            );\n        }\n        elsif ($mycalc{'pct_temp_disk'} > 25\n            && $mycalc{'max_tmp_table_size'} >= 256 * 1024 * 1024 )\n        {\n            badprint\n              \"Temporary tables created on disk: $mycalc{'pct_temp_disk'}% (\"\n              . hr_num( $mystat{'Created_tmp_disk_tables'} )\n              . \" on disk / \"\n              . hr_num( $mystat{'Created_tmp_tables'} )\n              . 
\" total)\";\n            push( @generalrec,\n                \"Temporary table size is already large: reduce result set size\"\n            );\n            push( @generalrec,\n                \"Reduce your SELECT DISTINCT queries without LIMIT clauses\" );\n        }\n        else {\n            goodprint\n              \"Temporary tables created on disk: $mycalc{'pct_temp_disk'}% (\"\n              . hr_num( $mystat{'Created_tmp_disk_tables'} )\n              . \" on disk / \"\n              . hr_num( $mystat{'Created_tmp_tables'} )\n              . \" total)\";\n        }\n    }\n    else {\n        goodprint \"No tmp tables created on disk\";\n    }\n\n    # Thread cache\n    if ( defined( $myvar{'have_threadpool'} )\n        and $myvar{'have_threadpool'} eq 'YES' )\n    {\n# https://www.percona.com/doc/percona-server/5.7/performance/threadpool.html#status-variables\n# When thread pool is enabled, the value of the thread_cache_size variable\n# is ignored. The Threads_cached status variable contains 0 in this case.\n        infoprint \"Thread cache not used with thread pool enabled\";\n    }\n    else {\n        if ( $myvar{'thread_cache_size'} eq 0 ) {\n            badprint \"Thread cache is disabled\";\n            push( @generalrec,\n                \"Set thread_cache_size to 4 as a starting value\" );\n            push( @adjvars, \"thread_cache_size (start at 4)\" );\n        }\n        else {\n            if ( $mycalc{'thread_cache_hit_rate'} <= 50 ) {\n                badprint\n                  \"Thread cache hit rate: $mycalc{'thread_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Threads_created'} )\n                  . \" created / \"\n                  . hr_num( $mystat{'Connections'} )\n                  . 
\" connections)\";\n                push( @adjvars,\n                    \"thread_cache_size (> $myvar{'thread_cache_size'})\" );\n            }\n            else {\n                goodprint\n                  \"Thread cache hit rate: $mycalc{'thread_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Threads_created'} )\n                  . \" created / \"\n                  . hr_num( $mystat{'Connections'} )\n                  . \" connections)\";\n            }\n        }\n    }\n\n    # Table cache\n    my $table_cache_var = \"\";\n    if ( $mystat{'Open_tables'} > 0 ) {\n        if ( $mycalc{'table_cache_hit_rate'} < 20 ) {\n\n            unless ( defined( $mystat{'Table_open_cache_hits'} ) ) {\n                badprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Open_tables'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Opened_tables'} )\n                  . \" requests)\";\n            }\n            else {\n                badprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Table_open_cache_hits'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Table_open_cache_hits'} +\n                      $mystat{'Table_open_cache_misses'} )\n                  . \" requests)\";\n            }\n\n            if ( mysql_version_ge( 5, 1 ) ) {\n                $table_cache_var = \"table_open_cache\";\n            }\n            else {\n                $table_cache_var = \"table_cache\";\n            }\n\n            push( @adjvars,\n                $table_cache_var . \" (> \" . $myvar{$table_cache_var} . \")\" );\n            push( @generalrec,\n                    \"Increase \"\n                  . $table_cache_var\n                  . 
\" gradually to avoid file descriptor limits\" );\n            push( @generalrec,\n                    \"Read this before increasing \"\n                  . $table_cache_var\n                  . \" over 64: https://bit.ly/2Fulv7r\" );\n            push( @generalrec,\n                    \"Read this before increasing for MariaDB\"\n                  . \" https://mariadb.com/kb/en/library/optimizing-table_open_cache/\"\n            );\n            push( @generalrec,\n\"This is MyISAM only table_cache scalability problem, InnoDB not affected.\"\n            );\n            push( @generalrec,\n                \"For more details see: https://bugs.mysql.com/bug.php?id=49177\"\n            );\n            push( @generalrec,\n\"This bug already fixed in MySQL 5.7.9 and newer MySQL versions.\"\n            );\n            push( @generalrec,\n                    \"Beware that open_files_limit (\"\n                  . $myvar{'open_files_limit'}\n                  . \") variable \" );\n            push( @generalrec,\n                    \"should be greater than $table_cache_var (\"\n                  . $myvar{$table_cache_var}\n                  . \")\" );\n        }\n        else {\n            unless ( defined( $mystat{'Table_open_cache_hits'} ) ) {\n                goodprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Open_tables'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Opened_tables'} )\n                  . \" requests)\";\n            }\n            else {\n                goodprint\n                  \"Table cache hit rate: $mycalc{'table_cache_hit_rate'}% (\"\n                  . hr_num( $mystat{'Table_open_cache_hits'} )\n                  . \" hits / \"\n                  . hr_num( $mystat{'Table_open_cache_hits'} +\n                      $mystat{'Table_open_cache_misses'} )\n                  . 
\" requests)\";\n            }\n        }\n    }\n\n    # Table definition cache\n    my $nbtables = select_one('SELECT COUNT(*) FROM information_schema.tables');\n    $mycalc{'total_tables'} = $nbtables;\n    if ( defined $myvar{'table_definition_cache'} ) {\n        if ( $myvar{'table_definition_cache'} == -1 ) {\n            infoprint( \"table_definition_cache (\"\n                  . $myvar{'table_definition_cache'}\n                  . \") is in autosizing mode\" );\n        }\n        elsif ( $myvar{'table_definition_cache'} < $nbtables ) {\n            badprint \"table_definition_cache (\"\n              . $myvar{'table_definition_cache'}\n              . \") is less than number of tables ($nbtables) \";\n            push( @adjvars,\n                    \"table_definition_cache (\"\n                  . $myvar{'table_definition_cache'} . \") > \"\n                  . $nbtables\n                  . \" or -1 (autosizing if supported)\" );\n        }\n        else {\n            goodprint \"table_definition_cache (\"\n              . $myvar{'table_definition_cache'}\n              . \") is greater than number of tables ($nbtables)\";\n        }\n    }\n    else {\n        infoprint \"No table_definition_cache variable found.\";\n    }\n\n    # Open files\n    if ( defined $mycalc{'pct_files_open'} ) {\n        if ( $mycalc{'pct_files_open'} > 85 ) {\n            badprint \"Open file limit used: $mycalc{'pct_files_open'}% (\"\n              . hr_num( $mystat{'Open_files'} ) . \"/\"\n              . hr_num( $myvar{'open_files_limit'} ) . \")\";\n            push( @adjvars,\n                \"open_files_limit (> \" . $myvar{'open_files_limit'} . \")\" );\n        }\n        else {\n            goodprint \"Open file limit used: $mycalc{'pct_files_open'}% (\"\n              . hr_num( $mystat{'Open_files'} ) . \"/\"\n              . hr_num( $myvar{'open_files_limit'} ) . 
\")\";\n        }\n    }\n\n    # Table locks\n    if ( defined $mycalc{'pct_table_locks_immediate'} ) {\n        if ( $mycalc{'pct_table_locks_immediate'} < 95 ) {\n            badprint\n\"Table locks acquired immediately: $mycalc{'pct_table_locks_immediate'}%\";\n            push( @generalrec,\n                \"Optimize queries and/or use InnoDB to reduce lock wait\" );\n        }\n        else {\n            goodprint\n\"Table locks acquired immediately: $mycalc{'pct_table_locks_immediate'}% (\"\n              . hr_num( $mystat{'Table_locks_immediate'} )\n              . \" immediate / \"\n              . hr_num( $mystat{'Table_locks_waited'} +\n                  $mystat{'Table_locks_immediate'} )\n              . \" locks)\";\n        }\n    }\n\n    # Binlog cache\n    if ( defined $mycalc{'pct_binlog_cache'} ) {\n        if (   $mycalc{'pct_binlog_cache'} < 90\n            && $mystat{'Binlog_cache_use'} > 0 )\n        {\n            badprint \"Binlog cache memory access: \"\n              . $mycalc{'pct_binlog_cache'} . \"% (\"\n              . (\n                $mystat{'Binlog_cache_use'} - $mystat{'Binlog_cache_disk_use'} )\n              . \" Memory / \"\n              . $mystat{'Binlog_cache_use'}\n              . \" Total)\";\n            push( @generalrec,\n                    \"Increase binlog_cache_size (current value: \"\n                  . $myvar{'binlog_cache_size'}\n                  . \")\" );\n            push( @adjvars,\n                    \"binlog_cache_size (\"\n                  . hr_bytes( $myvar{'binlog_cache_size'} + 16 * 1024 * 1024 )\n                  . \")\" );\n        }\n        else {\n            goodprint \"Binlog cache memory access: \"\n              . $mycalc{'pct_binlog_cache'} . \"% (\"\n              . (\n                $mystat{'Binlog_cache_use'} - $mystat{'Binlog_cache_disk_use'} )\n              . \" Memory / \"\n              . $mystat{'Binlog_cache_use'}\n              . 
\" Total)\";\n            debugprint \"Not enough data to validate binlog cache size\\n\"\n              if $mystat{'Binlog_cache_use'} < 10;\n        }\n    }\n\n    # Performance options\n    if ( !mysql_version_ge( 5, 1 ) ) {\n        push( @generalrec, \"Upgrade to MySQL 5.5+ to use asynchronous write\" );\n    }\n    elsif ( $myvar{'concurrent_insert'} eq \"OFF\" ) {\n        push( @generalrec, \"Enable concurrent_insert by setting it to 'ON'\" );\n    }\n    elsif ( $myvar{'concurrent_insert'} eq 0 ) {\n        push( @generalrec, \"Enable concurrent_insert by setting it to 1\" );\n    }\n}\n\n# Recommendations for MyISAM\nsub mysql_myisam {\n    return 0 unless ( $opt{'myisamstat'} > 0 );\n    subheaderprint \"MyISAM Metrics\";\n    my $nb_myisam_tables = select_one(\n\"SELECT COUNT(*) FROM information_schema.TABLES WHERE ENGINE='MyISAM' and TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema')\"\n    );\n    push( @generalrec,\n        \"MyISAM engine is deprecated, consider migrating to InnoDB\" )\n      if $nb_myisam_tables > 0;\n\n    if ( $nb_myisam_tables > 0 ) {\n        badprint\n          \"Consider migrating $nb_myisam_tables following tables to InnoDB:\";\n        my $sql_mig = \"\";\n        for my $myisam_table (\n            select_array(\n\"SELECT CONCAT(TABLE_SCHEMA, '.', TABLE_NAME) FROM information_schema.TABLES WHERE ENGINE='MyISAM' and TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema')\"\n            )\n          )\n        {\n            $sql_mig =\n\"${sql_mig}-- InnoDB migration for $myisam_table\\nALTER TABLE $myisam_table ENGINE=InnoDB;\\n\\n\";\n            infoprint\n\"* InnoDB migration request for $myisam_table Table: ALTER TABLE $myisam_table ENGINE=InnoDB;\";\n        }\n        dump_into_file( \"migrate_myisam_to_innodb.sql\", $sql_mig );\n    }\n    infoprint(\"General MyIsam metrics:\");\n    infoprint \" +-- Total MyISAM Tables  : $nb_myisam_tables\";\n    infoprint \" +-- Total MyISAM 
indexes : \"\n      . hr_bytes( $mycalc{'total_myisam_indexes'} )\n      if defined( $mycalc{'total_myisam_indexes'} );\n    infoprint \" +-- KB Size :\" . hr_bytes( $myvar{'key_buffer_size'} );\n    infoprint \" +-- KB Used Size :\"\n      . hr_bytes( $myvar{'key_buffer_size'} -\n          $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} );\n    infoprint \" +-- KB used :\" . $mycalc{'pct_key_buffer_used'} . \"%\";\n    infoprint \" +-- Read KB hit rate: $mycalc{'pct_keys_from_mem'}% (\"\n      . hr_num( $mystat{'Key_read_requests'} )\n      . \" cached / \"\n      . hr_num( $mystat{'Key_reads'} )\n      . \" reads)\";\n    infoprint \" +-- Write KB hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n      . hr_num( $mystat{'Key_write_requests'} )\n      . \" cached / \"\n      . hr_num( $mystat{'Key_writes'} )\n      . \" writes)\";\n\n    if ( $nb_myisam_tables == 0 ) {\n        infoprint \"No MyISAM table(s) detected ....\";\n        return;\n    }\n    if ( mysql_version_ge(8) and mysql_version_le(10) ) {\n        infoprint \"MyISAM Metrics are disabled since MySQL 8.0.\";\n        if ( $myvar{'key_buffer_size'} > 0 ) {\n            push( @adjvars, \"key_buffer_size=0\" );\n            push( @generalrec,\n                \"Buffer Key MyISAM set to 0, no MyISAM table detected\" );\n        }\n        return;\n    }\n\n    if ( !defined( $mycalc{'total_myisam_indexes'} ) ) {\n        badprint\n          \"Unable to calculate MyISAM index size on MySQL server < 5.0.0\";\n        push( @generalrec,\n            \"Unable to calculate MyISAM index size on MySQL server < 5.0.0\" );\n        return;\n    }\n    if ( $mycalc{'pct_key_buffer_used'} == 0 ) {\n\n        # No queries have run that would use keys\n        infoprint \"Key buffer used: $mycalc{'pct_key_buffer_used'}% (\"\n          . hr_bytes( $myvar{'key_buffer_size'} -\n              $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} )\n          . \" used / \"\n          . 
hr_bytes( $myvar{'key_buffer_size'} )\n          . \" cache)\";\n        infoprint \"No SQL statement based on MyISAM table(s) detected ....\";\n        return;\n    }\n\n    # Key buffer usage\n    if ( $mycalc{'pct_key_buffer_used'} < 90 ) {\n        badprint \"Key buffer used: $mycalc{'pct_key_buffer_used'}% (\"\n          . hr_bytes( $myvar{'key_buffer_size'} -\n              $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} )\n          . \" used / \"\n          . hr_bytes( $myvar{'key_buffer_size'} )\n          . \" cache)\";\n\n        push(\n            @adjvars,\n            \"key_buffer_size (\\~ \"\n              . hr_num(\n                $myvar{'key_buffer_size'} *\n                  $mycalc{'pct_key_buffer_used'} / 100\n              )\n              . \")\"\n        );\n    }\n    else {\n        goodprint \"Key buffer used: $mycalc{'pct_key_buffer_used'}% (\"\n          . hr_bytes( $myvar{'key_buffer_size'} -\n              $mystat{'Key_blocks_unused'} * $myvar{'key_cache_block_size'} )\n          . \" used / \"\n          . hr_bytes( $myvar{'key_buffer_size'} )\n          . \" cache)\";\n    }\n\n    # Key buffer size / total MyISAM indexes\n    if (   $myvar{'key_buffer_size'} < $mycalc{'total_myisam_indexes'}\n        && $mycalc{'pct_keys_from_mem'} < 95 )\n    {\n        badprint \"Key buffer size / total MyISAM indexes: \"\n          . hr_bytes( $myvar{'key_buffer_size'} ) . \"/\"\n          . hr_bytes( $mycalc{'total_myisam_indexes'} ) . \"\";\n        push( @adjvars,\n                \"key_buffer_size (> \"\n              . hr_bytes( $mycalc{'total_myisam_indexes'} )\n              . \")\" );\n    }\n    else {\n        goodprint \"Key buffer size / total MyISAM indexes: \"\n          . hr_bytes( $myvar{'key_buffer_size'} ) . \"/\"\n          . hr_bytes( $mycalc{'total_myisam_indexes'} ) . 
\"\";\n    }\n    if ( $mystat{'Key_read_requests'} > 0 ) {\n        if ( $mycalc{'pct_keys_from_mem'} < 95 ) {\n            badprint\n              \"Read Key buffer hit rate: $mycalc{'pct_keys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_read_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_reads'} )\n              . \" reads)\";\n        }\n        else {\n            goodprint\n              \"Read Key buffer hit rate: $mycalc{'pct_keys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_read_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_reads'} )\n              . \" reads)\";\n        }\n    }\n\n    # No queries have run that would use keys\n    debugprint \"Key buffer size / total MyISAM indexes: \"\n      . hr_bytes( $myvar{'key_buffer_size'} ) . \"/\"\n      . hr_bytes( $mycalc{'total_myisam_indexes'} ) . \"\";\n    if ( $mystat{'Key_write_requests'} > 0 ) {\n        if ( $mycalc{'pct_wkeys_from_mem'} < 95 ) {\n            badprint\n              \"Write Key buffer hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_write_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_writes'} )\n              . \" writes)\";\n        }\n        else {\n            goodprint\n              \"Write Key buffer hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n              . hr_num( $mystat{'Key_write_requests'} )\n              . \" cached / \"\n              . hr_num( $mystat{'Key_writes'} )\n              . \" writes)\";\n        }\n    }\n    else {\n        # No queries have run that would use keys\n        debugprint\n          \"Write Key buffer hit rate: $mycalc{'pct_wkeys_from_mem'}% (\"\n          . hr_num( $mystat{'Key_write_requests'} )\n          . \" cached / \"\n          . hr_num( $mystat{'Key_writes'} )\n          . 
\" writes)\";\n    }\n}\n\n# Recommendations for ThreadPool\nsub mariadb_threadpool {\n    subheaderprint \"ThreadPool Metrics\";\n\n    # MariaDB\n    unless ( defined $myvar{'have_threadpool'}\n        && $myvar{'have_threadpool'} eq \"YES\" )\n    {\n        infoprint \"ThreadPool stat is disabled.\";\n        return;\n    }\n    infoprint \"ThreadPool stat is enabled.\";\n    infoprint \"Thread Pool Size: \" . $myvar{'thread_pool_size'} . \" thread(s).\";\n\n    if (   $myvar{'version'} =~ /percona/i\n        or $myvar{'version_comment'} =~ /percona/i )\n    {\n        my $np = cpu_cores;\n        if (    $myvar{'thread_pool_size'} >= $np\n            and $myvar{'thread_pool_size'} < ( $np * 1.5 ) )\n        {\n            goodprint\n\"thread_pool_size for Percona between 1 and 1.5 times number of CPUs (\"\n              . $np . \" and \"\n              . ( $np * 1.5 ) . \")\";\n        }\n        else {\n            badprint\n\"thread_pool_size for Percona between 1 and 1.5 times number of CPUs (\"\n              . $np . \" and \"\n              . ( $np * 1.5 ) . \")\";\n            push( @adjvars,\n                    \"thread_pool_size between \"\n                  . $np . \" and \"\n                  . ( $np * 1.5 )\n                  . \" for InnoDB usage\" );\n        }\n        return;\n    }\n\n    if ( $myvar{'version'} =~ /mariadb/i ) {\n        infoprint \"Using default value is good enough for your version (\"\n          . $myvar{'version'} . \")\";\n        return;\n    }\n\n    if ( $myvar{'have_innodb'} eq 'YES' ) {\n        if (   $myvar{'thread_pool_size'} < 16\n            or $myvar{'thread_pool_size'} > 36 )\n        {\n            badprint\n\"thread_pool_size between 16 and 36 when using InnoDB storage engine.\";\n            push( @generalrec,\n                    \"Thread pool size for InnoDB usage (\"\n                  . $myvar{'thread_pool_size'}\n                  . 
\")\" );\n            push( @adjvars,\n                \"thread_pool_size between 16 and 36 for InnoDB usage\" );\n        }\n        else {\n            goodprint\n\"thread_pool_size between 16 and 36 when using InnoDB storage engine.\";\n        }\n        return;\n    }\n    if ( $myvar{'have_isam'} eq 'YES' ) {\n        if ( $myvar{'thread_pool_size'} < 4 or $myvar{'thread_pool_size'} > 8 )\n        {\n            badprint\n\"thread_pool_size between 4 and 8 when using MyISAM storage engine.\";\n            push( @generalrec,\n                    \"Thread pool size for MyISAM usage (\"\n                  . $myvar{'thread_pool_size'}\n                  . \")\" );\n            push( @adjvars,\n                \"thread_pool_size between 4 and 8 for MyISAM usage\" );\n        }\n        else {\n            goodprint\n\"thread_pool_size between 4 and 8 when using MyISAM storage engine.\";\n        }\n    }\n}\n\nsub get_pf_memory {\n\n    # Performance Schema\n    return 0 unless defined $myvar{'performance_schema'};\n    return 0 if $myvar{'performance_schema'} eq 'OFF';\n\n    my @infoPFSMemory = grep { /\\tperformance_schema[.]memory\\t/msx }\n      select_array(\"SHOW ENGINE PERFORMANCE_SCHEMA STATUS\");\n    @infoPFSMemory == 1 || return 0;\n    $infoPFSMemory[0] =~ s/.*\\s+(\\d+)$/$1/g;\n    return $infoPFSMemory[0];\n}\n\n# Recommendations for Performance Schema\nsub mysql_pfs {\n    subheaderprint \"Performance schema\";\n\n    # Performance Schema\n    debugprint \"Performance schema is \" . 
$myvar{'performance_schema'};\n    $myvar{'performance_schema'} = 'OFF'\n      unless defined( $myvar{'performance_schema'} );\n    if ( $myvar{'performance_schema'} eq 'OFF' ) {\n        badprint \"Performance_schema should be activated.\";\n        push( @adjvars, \"performance_schema=ON\" );\n        push( @generalrec,\n            \"Performance schema should be activated for better diagnostics\" );\n    }\n    if ( $myvar{'performance_schema'} eq 'ON' ) {\n        infoprint \"Performance_schema is activated.\";\n        debugprint \"Performance schema is \" . $myvar{'performance_schema'};\n        infoprint \"Memory used by Performance_schema: \"\n          . hr_bytes( get_pf_memory() );\n    }\n\n    unless ( grep /^sys$/, select_array(\"SHOW DATABASES\") ) {\n        infoprint \"Sys schema is not installed.\";\n        push( @generalrec,\n            mysql_version_ge( 10, 0 )\n            ? \"Consider installing Sys schema from https://github.com/FromDual/mariadb-sys for MariaDB\"\n            : \"Consider installing Sys schema from https://github.com/mysql/mysql-sys for MySQL\"\n        ) unless ( mysql_version_le( 5, 6 ) );\n\n        return;\n    }\n    infoprint \"Sys schema is installed.\";\n    return if ( $opt{pfstat} == 0 or $myvar{'performance_schema'} ne 'ON' );\n\n    infoprint \"Sys schema Version: \"\n      . select_one(\"select sys_version from sys.version\");\n\n    # Store all sys schema in dumpdir if defined\n    if ( defined $opt{dumpdir} and -d \"$opt{dumpdir}\" ) {\n        for my $sys_view ( select_array('use sys;show tables;') ) {\n            infoprint \"Dumping $sys_view into $opt{dumpdir}\";\n            my $sys_view_table = $sys_view;\n            $sys_view_table =~ s/\\$/\\\\\\$/g;\n            select_csv_file( \"$opt{dumpdir}/sys_$sys_view.csv\",\n                'select * from sys.\\`' . $sys_view_table . 
'\\`' );\n        }\n        return;\n\n        #exit 0 if ( $opt{stop} == 1 );\n    }\n\n    # Top user per connection\n    subheaderprint \"Performance schema: Top 5 user per connection\";\n    my $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, total_connections from sys.user_summary order by total_connections desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery conn(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per statement\n    subheaderprint \"Performance schema: Top 5 user per statement\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, statements from sys.user_summary order by statements desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery stmt(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per statement latency\n    subheaderprint \"Performance schema: Top 5 user per statement latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, statement_avg_latency from sys.x\\\\$user_summary order by statement_avg_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per lock latency\n    subheaderprint \"Performance schema: Top 5 user per lock latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, lock_latency from sys.x\\\\$user_summary_by_statement_latency order by lock_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per full scans\n    subheaderprint \"Performance schema: 
Top 5 user per nb full scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, full_scans from sys.x\\\\$user_summary_by_statement_latency order by full_scans desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per row_sent\n    subheaderprint \"Performance schema: Top 5 user per rows sent\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, rows_sent from sys.x\\\\$user_summary_by_statement_latency order by rows_sent desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per row modified\n    subheaderprint \"Performance schema: Top 5 user per rows modified\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, rows_affected from sys.x\\\\$user_summary_by_statement_latency order by rows_affected desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per io\n    subheaderprint \"Performance schema: Top 5 user per IO\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, file_ios from sys.x\\\\$user_summary order by file_ios desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top user per io latency\n    subheaderprint \"Performance schema: Top 5 user per IO latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, file_io_latency from sys.x\\\\$user_summary order by file_io_latency desc LIMIT 5'\n        
)\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per connection\n    subheaderprint \"Performance schema: Top 5 host per connection\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, total_connections from sys.x\\\\$host_summary order by total_connections desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery conn(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per statement\n    subheaderprint \"Performance schema: Top 5 host per statement\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, statements from sys.x\\\\$host_summary order by statements desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery stmt(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per statement latency\n    subheaderprint \"Performance schema: Top 5 host per statement latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, statement_avg_latency from sys.x\\\\$host_summary order by statement_avg_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per lock latency\n    subheaderprint \"Performance schema: Top 5 host per lock latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, lock_latency from sys.x\\\\$host_summary_by_statement_latency order by lock_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n  
    if ( $nbL == 1 );\n\n    # Top host per full scans\n    subheaderprint \"Performance schema: Top 5 host per nb full scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, full_scans from sys.x\\\\$host_summary_by_statement_latency order by full_scans desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per rows sent\n    subheaderprint \"Performance schema: Top 5 host per rows sent\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, rows_sent from sys.x\\\\$host_summary_by_statement_latency order by rows_sent desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per rows modified\n    subheaderprint \"Performance schema: Top 5 host per rows modified\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, rows_affected from sys.x\\\\$host_summary_by_statement_latency order by rows_affected desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per io\n    subheaderprint \"Performance schema: Top 5 host per io\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, file_ios from sys.x\\\\$host_summary order by file_ios desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top 5 host per io latency\n    subheaderprint \"Performance schema: Top 5 host per io latency\";\n    $nbL = 1;\n    for my $lQuery (\n        
select_array(\n'select host, file_io_latency from sys.x\\\\$host_summary order by file_io_latency desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top IO type order by total io\n    subheaderprint \"Performance schema: Top IO type order by total io\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,14), SUM(total)AS total from sys.x\\\\$host_summary_by_file_io_type GROUP BY substring(event_name,14) ORDER BY total DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery i/o\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top IO type order by total latency\n    subheaderprint \"Performance schema: Top IO type order by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select substring(event_name,14), ROUND(SUM(total_latency),1) AS total_latency from sys.x\\\\$host_summary_by_file_io_type GROUP BY substring(event_name,14) ORDER BY total_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top IO type order by max latency\n    subheaderprint \"Performance schema: Top IO type order by max latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,14), MAX(max_latency) as max_latency from sys.x\\\\$host_summary_by_file_io_type GROUP BY substring(event_name,14) ORDER BY max_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top Stages order by total io\n    subheaderprint 
\"Performance schema: Top Stages order by total io\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,7), SUM(total)AS total from sys.x\\\\$host_summary_by_stages GROUP BY substring(event_name,7) ORDER BY total DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery i/o\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top Stages order by total latency\n    subheaderprint \"Performance schema: Top Stages order by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,7), ROUND(SUM(total_latency),1) AS total_latency from sys.x\\\\$host_summary_by_stages GROUP BY substring(event_name,7) ORDER BY total_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top Stages order by avg latency\n    subheaderprint \"Performance schema: Top Stages order by avg latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select substring(event_name,7), MAX(avg_latency) as avg_latency from sys.x\\\\$host_summary_by_stages GROUP BY substring(event_name,7) ORDER BY avg_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top host per table scans\n    subheaderprint \"Performance schema: Top 5 host per table scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select host, table_scans from sys.x\\\\$host_summary order by table_scans desc LIMIT 5'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    
# InnoDB Buffer Pool by schema\n    subheaderprint \"Performance schema: InnoDB Buffer Pool by schema\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select object_schema, allocated, data, pages from sys.x\\\\$innodb_buffer_stats_by_schema ORDER BY pages DESC'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery page(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # InnoDB Buffer Pool by table\n    subheaderprint \"Performance schema: Top 40 InnoDB Buffer Pool by table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select object_schema,  object_name, allocated,data, pages from sys.x\\\\$innodb_buffer_stats_by_table ORDER BY pages DESC LIMIT 40'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery page(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Process list ordered by time\n    subheaderprint \"Performance schema: Process per time\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select user, Command AS PROC, time from sys.x\\\\$processlist ORDER BY time DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # InnoDB Lock Waits\n    subheaderprint \"Performance schema: InnoDB Lock Waits\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select wait_age_secs, locked_table, locked_type, waiting_query from sys.x\\\\$innodb_lock_waits order by wait_age_secs DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Threads IO Latency\n    subheaderprint \"Performance schema: Thread IO Latency\";\n    $nbL = 1;\n    for my $lQuery (\n   
     select_array(\n'select user, total_latency, max_latency from sys.x\\\\$io_by_thread_by_latency order by total_latency DESC;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # High Cost SQL statements\n    subheaderprint \"Performance schema: Top 15 highest latency statements\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select LEFT(query, 120), avg_latency from sys.x\\\\$statement_analysis order by avg_latency desc LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top 15 queries in the 95th runtime percentile\n    subheaderprint \"Performance schema: Top 15 slowest queries\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select LEFT(query, 120), exec_count from sys.x\\\\$statements_with_runtimes_in_95th_percentile order by exec_count desc LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery exec(s)\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top 15 nb statement type\n    subheaderprint \"Performance schema: Top 15 nb statement type\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(total) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by total latency\n    subheaderprint \"Performance schema: Top 15 statement by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(total_latency) as total from 
sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by lock latency\n    subheaderprint \"Performance schema: Top 15 statement by lock latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(lock_latency) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by full scans\n    subheaderprint \"Performance schema: Top 15 statement by full scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(full_scans) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by rows sent\n    subheaderprint \"Performance schema: Top 15 statement by rows sent\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select statement, sum(rows_sent) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Top statement by rows modified\n    subheaderprint \"Performance schema: Top 15 statement by rows modified\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select 
statement, sum(rows_affected) as total from sys.x\\\\$host_summary_by_statement_type group by statement order by total desc LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Use temporary tables\n    subheaderprint \"Performance schema: 15 sample queries using temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select left(query, 120) from sys.x\\\\$statements_with_temp_tables LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Unused Indexes\n    subheaderprint \"Performance schema: Unused indexes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n\"select \\* from sys.schema_unused_indexes where object_schema not in ('performance_schema')\"\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Full table scans\n    subheaderprint \"Performance schema: Tables with full table scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select * from sys.x\\\\$schema_tables_with_full_table_scans order by rows_full_scanned DESC'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Latest file IO by latency\n    subheaderprint \"Performance schema: Latest File IO by latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select thread, file, latency, operation from sys.x\\\\$latest_file_io ORDER BY latency LIMIT 10;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    
}\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # FILE by IO read bytes\n    subheaderprint \"Performance schema: File by IO read bytes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select file, total_read from sys.x\\\\$io_global_by_file_by_bytes order by total_read DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # FILE by IO written bytes\n    subheaderprint \"Performance schema: File by IO written bytes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select file, total_written from sys.x\\\\$io_global_by_file_by_bytes order by total_written DESC LIMIT 15'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # file per IO total latency\n    subheaderprint \"Performance schema: File per IO total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select file, total_latency from sys.x\\\\$io_global_by_file_by_latency ORDER BY total_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # file per IO read latency\n    subheaderprint \"Performance schema: file per IO read latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select file, read_latency from sys.x\\\\$io_global_by_file_by_latency ORDER BY read_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # file per IO write latency\n    subheaderprint 
\"Performance schema: file per IO write latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select file, write_latency from sys.x\\\\$io_global_by_file_by_latency ORDER BY write_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Event Wait by read bytes\n    subheaderprint \"Performance schema: Event Wait by read bytes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select event_name, total_read from sys.x\\\\$io_global_by_wait_by_bytes order by total_read DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Event Wait by write bytes\n    subheaderprint \"Performance schema: Event Wait written bytes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select event_name, total_written from sys.x\\\\$io_global_by_wait_by_bytes order by total_written DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # event per wait total latency\n    subheaderprint \"Performance schema: event per wait total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_name, total_latency from sys.x\\\\$io_global_by_wait_by_latency ORDER BY total_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # event per wait read latency\n    subheaderprint \"Performance schema: event per wait read latency\";\n    $nbL = 1;\n    for my $lQuery (\n        
select_array(\n'use sys;select event_name, read_latency from sys.x\\\\$io_global_by_wait_by_latency ORDER BY read_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # event per wait write latency\n    subheaderprint \"Performance schema: event per wait write latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_name, write_latency from sys.x\\\\$io_global_by_wait_by_latency ORDER BY write_latency DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    #schema_index_statistics\n    # TOP 15 most read index\n    subheaderprint \"Performance schema: Top 15 most read indexes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, rows_selected from sys.x\\\\$schema_index_statistics ORDER BY ROWs_selected DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 most used index\n    subheaderprint \"Performance schema: Top 15 most modified indexes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, rows_inserted+rows_updated+rows_deleted AS changes from sys.x\\\\$schema_index_statistics ORDER BY rows_inserted+rows_updated+rows_deleted DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high read latency index\n    subheaderprint \"Performance schema: Top 15 high read latency 
index\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, select_latency from sys.x\\\\$schema_index_statistics ORDER BY select_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high insert latency index\n    subheaderprint \"Performance schema: Top 15 high insert latency index\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, insert_latency from sys.x\\\\$schema_index_statistics ORDER BY insert_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high update latency index\n    subheaderprint \"Performance schema: Top 15 high update latency index\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, update_latency from sys.x\\\\$schema_index_statistics ORDER BY update_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high delete latency index\n    subheaderprint \"Performance schema: Top 15 high delete latency index\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name,index_name, delete_latency from sys.x\\\\$schema_index_statistics ORDER BY delete_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 most read tables\n    subheaderprint 
\"Performance schema: Top 15 most read tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, rows_fetched from sys.x\\\\$schema_table_statistics ORDER BY ROWs_fetched DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 most used tables\n    subheaderprint \"Performance schema: Top 15 most modified tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, rows_inserted+rows_updated+rows_deleted AS changes from sys.x\\\\$schema_table_statistics ORDER BY rows_inserted+rows_updated+rows_deleted DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high read latency tables\n    subheaderprint \"Performance schema: Top 15 high read latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, fetch_latency from sys.x\\\\$schema_table_statistics ORDER BY fetch_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high insert latency tables\n    subheaderprint \"Performance schema: Top 15 high insert latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, insert_latency from sys.x\\\\$schema_table_statistics ORDER BY insert_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 
high update latency tables\n    subheaderprint \"Performance schema: Top 15 high update latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, update_latency from sys.x\\\\$schema_table_statistics ORDER BY update_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # TOP 15 high delete latency tables\n    subheaderprint \"Performance schema: Top 15 high delete latency tables\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select table_schema, table_name, delete_latency from sys.x\\\\$schema_table_statistics ORDER BY delete_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    # Redundant indexes\n    subheaderprint \"Performance schema: Redundant indexes\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array('use sys;select * from schema_redundant_indexes;') )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Table not using InnoDB buffer\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n' Select table_schema, table_name from sys.x\\\\$schema_table_statistics_with_buffer where innodb_buffer_allocated IS NULL;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 Tables using InnoDB buffer\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select 
table_schema,table_name,innodb_buffer_allocated from sys.x\\\\$schema_table_statistics_with_buffer where innodb_buffer_allocated IS NOT NULL ORDER BY innodb_buffer_allocated DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 Tables with InnoDB buffer free\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select table_schema,table_name,innodb_buffer_free from sys.x\\\\$schema_table_statistics_with_buffer where innodb_buffer_allocated IS NOT NULL ORDER BY innodb_buffer_free DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 Most executed queries\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), exec_count from sys.x\\\\$statement_analysis order by exec_count DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Latest SQL queries in errors or warnings\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select LEFT(query, 120), last_seen from sys.x\\\\$statements_with_errors_or_warnings ORDER BY last_seen LIMIT 40;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 20 queries with full table scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), exec_count from 
sys.x\\\\$statements_with_full_table_scans order BY exec_count DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Last 50 queries with full table scans\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), last_seen from sys.x\\\\$statements_with_full_table_scans order BY last_seen DESC LIMIT 50;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 reader queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), rows_sent from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY ROWs_sent DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 queries with most rows examined (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), rows_examined AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY rows_examined DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 total latency queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), total_latency AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY 
total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 max latency queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), max_latency AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY max_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 average latency queries (95% percentile)\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), avg_latency AS search from sys.x\\\\$statements_with_runtimes_in_95th_percentile ORDER BY avg_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 20 queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), exec_count from sys.x\\\\$statements_with_sorting order BY exec_count DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Last 50 queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), last_seen from sys.x\\\\$statements_with_sorting order BY last_seen DESC LIMIT 50;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    
}\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 row sorting queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), rows_sorted from sys.x\\\\$statements_with_sorting ORDER BY ROWs_sorted DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 total latency queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), total_latency AS search from sys.x\\\\$statements_with_sorting ORDER BY total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 merge queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), sort_merge_passes AS search from sys.x\\\\$statements_with_sorting ORDER BY sort_merge_passes DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 average sort merges queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), avg_sort_merges AS search from sys.x\\\\$statements_with_sorting ORDER BY avg_sort_merges DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    
subheaderprint \"Performance schema: Top 15 scans queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), sorts_using_scans AS search from sys.x\\\\$statements_with_sorting ORDER BY sorts_using_scans DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 range queries with sort\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), sort_using_range AS search from sys.x\\\\$statements_with_sorting ORDER BY sort_using_range DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n##################################################################################\n\n    #statements_with_temp_tables\n\n#mysql> desc statements_with_temp_tables;\n#+--------------------------+---------------------+------+-----+---------------------+-------+\n#| Field                    | Type                | Null | Key | Default             | Extra |\n#+--------------------------+---------------------+------+-----+---------------------+-------+\n#| query                    | longtext            | YES  |     | NULL                |       |\n#| db                       | varchar(64)         | YES  |     | NULL                |       |\n#| exec_count               | bigint(20) unsigned | NO   |     | NULL                |       |\n#| total_latency            | text                | YES  |     | NULL                |       |\n#| memory_tmp_tables        | bigint(20) unsigned | NO   |     | NULL                |       |\n#| disk_tmp_tables          | bigint(20) unsigned | NO   |     | NULL                |       |\n#| avg_tmp_tables_per_query | 
decimal(21,0)       | NO   |     | 0                   |       |\n#| tmp_tables_to_disk_pct   | decimal(24,0)       | NO   |     | 0                   |       |\n#| first_seen               | timestamp           | NO   |     | 0000-00-00 00:00:00 |       |\n#| last_seen                | timestamp           | NO   |     | 0000-00-00 00:00:00 |       |\n#| digest                   | varchar(32)         | YES  |     | NULL                |       |\n#+--------------------------+---------------------+------+-----+---------------------+-------+\n#11 rows in set (0,01 sec)#\n#\n    subheaderprint \"Performance schema: Top 20 queries with temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), exec_count from sys.x\\\\$statements_with_temp_tables order BY exec_count DESC LIMIT 20;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Last 50 queries with temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), last_seen from sys.x\\\\$statements_with_temp_tables order BY last_seen DESC LIMIT 50;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint\n      \"Performance schema: Top 15 total latency queries with temp table\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select db, LEFT(query, 120), total_latency AS search from sys.x\\\\$statements_with_temp_tables ORDER BY total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 
queries with temp table to disk\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select db, LEFT(query, 120), disk_tmp_tables from sys.x\\\\$statements_with_temp_tables ORDER BY disk_tmp_tables DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n##################################################################################\n    #wait_classes_global_by_latency\n\n#mysql> select * from wait_classes_global_by_latency;\n#-----------------+-------+---------------+-------------+-------------+-------------+\n# event_class     | total | total_latency | min_latency | avg_latency | max_latency |\n#-----------------+-------+---------------+-------------+-------------+-------------+\n# wait/io/file    | 15381 | 1.23 s        | 0 ps        | 80.12 us    | 230.64 ms   |\n# wait/io/table   |    59 | 7.57 ms       | 5.45 us     | 128.24 us   | 3.95 ms     |\n# wait/lock/table |    69 | 3.22 ms       | 658.84 ns   | 46.64 us    | 1.10 ms     |\n#-----------------+-------+---------------+-------------+-------------+-------------+\n# rows in set (0,00 sec)\n\n    subheaderprint \"Performance schema: Top 15 class events by number\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_class, total from sys.x\\\\$wait_classes_global_by_latency ORDER BY total DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 30 events by number\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select events, total from sys.x\\\\$waits_global_by_latency ORDER BY total DESC LIMIT 30;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n  
  infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 class events by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select event_class, total_latency from sys.x\\\\$wait_classes_global_by_latency ORDER BY total_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 30 events by total latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'use sys;select events, total_latency from sys.x\\\\$waits_global_by_latency ORDER BY total_latency DESC LIMIT 30;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 15 class events by max latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select event_class, max_latency from sys.x\\\\$wait_classes_global_by_latency ORDER BY max_latency DESC LIMIT 15;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n    subheaderprint \"Performance schema: Top 30 events by max latency\";\n    $nbL = 1;\n    for my $lQuery (\n        select_array(\n'select events, max_latency from sys.x\\\\$waits_global_by_latency ORDER BY max_latency DESC LIMIT 30;'\n        )\n      )\n    {\n        infoprint \" +-- $nbL: $lQuery\";\n        $nbL++;\n    }\n    infoprint \"No information found or indicators deactivated.\"\n      if ( $nbL == 1 );\n\n}\n\n# Recommendations for Aria Engine\nsub mariadb_aria {\n    subheaderprint \"Aria Metrics\";\n\n    # Aria\n    if ( !defined 
$myvar{'have_aria'} ) {\n        infoprint \"Aria Storage Engine not available.\";\n        return;\n    }\n    if ( $myvar{'have_aria'} ne \"YES\" ) {\n        infoprint \"Aria Storage Engine is disabled.\";\n        return;\n    }\n    infoprint \"Aria Storage Engine is enabled.\";\n\n    # Aria pagecache\n    if ( !defined( $mycalc{'total_aria_indexes'} ) ) {\n        push( @generalrec,\n            \"Unable to calculate Aria index size on MySQL server\" );\n    }\n    else {\n        if (\n            $myvar{'aria_pagecache_buffer_size'} < $mycalc{'total_aria_indexes'}\n            && $mycalc{'pct_aria_keys_from_mem'} < 95 )\n        {\n            badprint \"Aria pagecache size / total Aria indexes: \"\n              . hr_bytes( $myvar{'aria_pagecache_buffer_size'} ) . \"/\"\n              . hr_bytes( $mycalc{'total_aria_indexes'} ) . \"\";\n            push( @adjvars,\n                    \"aria_pagecache_buffer_size (> \"\n                  . hr_bytes( $mycalc{'total_aria_indexes'} )\n                  . \")\" );\n        }\n        else {\n            goodprint \"Aria pagecache size / total Aria indexes: \"\n              . hr_bytes( $myvar{'aria_pagecache_buffer_size'} ) . \"/\"\n              . hr_bytes( $mycalc{'total_aria_indexes'} ) . \"\";\n        }\n        if ( $mystat{'Aria_pagecache_read_requests'} > 0 ) {\n            if ( $mycalc{'pct_aria_keys_from_mem'} < 95 ) {\n                badprint\n\"Aria pagecache hit rate: $mycalc{'pct_aria_keys_from_mem'}% (\"\n                  . hr_num( $mystat{'Aria_pagecache_read_requests'} )\n                  . \" cached / \"\n                  . hr_num( $mystat{'Aria_pagecache_reads'} )\n                  . \" reads)\";\n            }\n            else {\n                goodprint\n\"Aria pagecache hit rate: $mycalc{'pct_aria_keys_from_mem'}% (\"\n                  . hr_num( $mystat{'Aria_pagecache_read_requests'} )\n                  . \" cached / \"\n                  . 
hr_num( $mystat{'Aria_pagecache_reads'} )\n                  . \" reads)\";\n            }\n        }\n        else {\n\n            # No queries have run that would use keys\n        }\n    }\n}\n\n# Recommendations for TokuDB\nsub mariadb_tokudb {\n    subheaderprint \"TokuDB Metrics\";\n\n    # TokuDB\n    unless ( defined $myvar{'have_tokudb'}\n        && $myvar{'have_tokudb'} eq \"YES\" )\n    {\n        infoprint \"TokuDB is disabled.\";\n        return;\n    }\n    infoprint \"TokuDB is enabled.\";\n\n    # Not implemented\n}\n\n# Recommendations for XtraDB\nsub mariadb_xtradb {\n    subheaderprint \"XtraDB Metrics\";\n\n    # XtraDB\n    unless ( defined $myvar{'have_xtradb'}\n        && $myvar{'have_xtradb'} eq \"YES\" )\n    {\n        infoprint \"XtraDB is disabled.\";\n        return;\n    }\n    infoprint \"XtraDB is enabled.\";\n    infoprint \"Note that MariaDB 10.2 makes use of InnoDB, not XtraDB.\";\n\n    # Not implemented\n}\n\n# Recommendations for RocksDB\nsub mariadb_rockdb {\n    subheaderprint \"RocksDB Metrics\";\n\n    # RocksDB\n    unless ( defined $myvar{'have_rocksdb'}\n        && $myvar{'have_rocksdb'} eq \"YES\" )\n    {\n        infoprint \"RocksDB is disabled.\";\n        return;\n    }\n    infoprint \"RocksDB is enabled.\";\n\n    # Not implemented\n}\n\n# Recommendations for Spider\nsub mariadb_spider {\n    subheaderprint \"Spider Metrics\";\n\n    # Spider\n    unless ( defined $myvar{'have_spider'}\n        && $myvar{'have_spider'} eq \"YES\" )\n    {\n        infoprint \"Spider is disabled.\";\n        return;\n    }\n    infoprint \"Spider is enabled.\";\n\n    # Not implemented\n}\n\n# Recommendations for Connect\nsub mariadb_connect {\n    subheaderprint \"Connect Metrics\";\n\n    # Connect\n    unless ( defined $myvar{'have_connect'}\n        && $myvar{'have_connect'} eq \"YES\" )\n    {\n        infoprint \"Connect is disabled.\";\n        return;\n    }\n    infoprint \"Connect is enabled.\";\n\n    # Not implemented\n}\n\n# Perl trim function to remove whitespace from the start and end of the string\nsub trim {\n    my $string = shift;\n    return \"\" unless defined($string);\n    $string =~ s/^\\s+//;\n    $string =~ s/\\s+$//;\n    return $string;\n}\n\nsub get_wsrep_options {\n    return () unless defined $myvar{'wsrep_provider_options'};\n\n    my @galera_options      = split /;/, $myvar{'wsrep_provider_options'};\n    my $wsrep_slave_threads = $myvar{'wsrep_slave_threads'};\n    push @galera_options, ' wsrep_slave_threads = ' . $wsrep_slave_threads;\n    @galera_options = remove_cr @galera_options;\n    @galera_options = remove_empty @galera_options;\n\n    #debugprint Dumper( \\@galera_options ) if $opt{debug};\n    return @galera_options;\n}\n\nsub get_gcache_memory {\n    my $gCacheMem = hr_raw( get_wsrep_option('gcache.size') );\n\n    return 0 unless defined $gCacheMem and $gCacheMem ne '';\n    return $gCacheMem;\n}\n\nsub get_wsrep_option {\n    my $key = shift;\n    return '' unless defined $myvar{'wsrep_provider_options'};\n    my @galera_options = get_wsrep_options;\n    return '' unless scalar(@galera_options) > 0;\n    my @memValues = grep /\\s*$key =/, @galera_options;\n    my $memValue  = $memValues[0];\n    return 0 unless defined $memValue;\n    $memValue =~ s/.*=\\s*(.+)$/$1/g;\n    return $memValue;\n}\n\n# Recommendations for Tables\nsub mysql_table_structures {\n    return 0 unless ( $opt{structstat} > 0 );\n    subheaderprint \"Table structures analysis\";\n\n    my @primaryKeysNbTables = select_array(\n        \"Select CONCAT(c.table_schema, ',' , c.table_name)\nfrom information_schema.columns c\njoin information_schema.tables t using (TABLE_SCHEMA, TABLE_NAME)\nwhere c.table_schema not in ('sys', 'mysql', 'information_schema', 'performance_schema')\n  and t.table_type = 'BASE TABLE'\ngroup by c.table_schema,c.table_name\nhaving sum(if(c.column_key in ('PRI', 'UNI'), 1, 0)) = 0\"\n    );\n\n    my $tmpContent = 'Schema,Table';\n    if ( 
scalar(@primaryKeysNbTables) > 0 ) {\n        badprint \"The following table(s) don't have a primary key:\";\n        foreach my $badtable (@primaryKeysNbTables) {\n            badprint \"\\t$badtable\";\n            push @{ $result{'Tables without PK'} }, $badtable;\n            $tmpContent .= \"\\n$badtable\";\n        }\n        push @generalrec,\n\"Ensure that all tables have an explicit primary key for performance, maintenance and also for replication\";\n\n    }\n    else {\n        goodprint \"All tables have a primary key\";\n    }\n    dump_into_file( \"tables_without_primary_keys.csv\", $tmpContent );\n\n    my @nonInnoDBTables = select_array(\n        \"select CONCAT(table_schema, ',', table_name, ',', ENGINE) \nFROM information_schema.tables t\nWHERE ENGINE <> 'InnoDB' \nand t.table_type = 'BASE TABLE'\nand table_schema not in \n('sys', 'mysql', 'performance_schema', 'information_schema')\"\n    );\n    $tmpContent = 'Schema,Table,Engine';\n    if ( scalar(@nonInnoDBTables) > 0 ) {\n        badprint \"The following table(s) are not InnoDB tables:\";\n        push @generalrec,\n\"Ensure that all tables are InnoDB tables for performance and also for replication\";\n        foreach my $badtable (@nonInnoDBTables) {\n            if ( $badtable =~ /Memory/i ) {\n                badprint\n\"Table $badtable is a MEMORY table. It's suggested to use only InnoDB tables in production\";\n            }\n            else {\n                badprint \"\\t$badtable\";\n            }\n            $tmpContent .= \"\\n$badtable\";\n        }\n    }\n    else {\n        goodprint \"All tables are InnoDB tables\";\n    }\n    dump_into_file( \"tables_non_innodb.csv\", $tmpContent );\n\n    my @nonutf8columns = select_array(\n\"SELECT CONCAT(table_schema, ',', table_name, ',', column_name, ',', CHARacter_set_name, ',', COLLATION_name, ',', data_type, ',', CHARACTER_MAXIMUM_LENGTH)\nfrom information_schema.columns\nWHERE table_schema not in ('sys', 'mysql', 'performance_schema', 'information_schema')\nand (CHARacter_set_name  NOT LIKE 'utf8%'\nor COLLATION_name NOT LIKE 'utf8%');\"\n    );\n    $tmpContent =\n      'Schema,Table,Column, Charset, Collation, Data Type, Max Length';\n    if ( scalar(@nonutf8columns) > 0 ) {\n        badprint \"The following character column(s) are not utf8 compliant:\";\n        push @generalrec,\n\"Ensure that all text columns are UTF-8 compliant for encoding support and performance\";\n        foreach my $badtable (@nonutf8columns) {\n            badprint \"\\t$badtable\";\n            $tmpContent .= \"\\n$badtable\";\n        }\n    }\n    else {\n        goodprint \"All columns are UTF-8 compliant\";\n    }\n    dump_into_file( \"columns_non_utf8.csv\", $tmpContent );\n\n    my @utf8columns = select_array(\n\"SELECT CONCAT(table_schema, ',', table_name, ',', column_name, ',', CHARacter_set_name, ',', COLLATION_name, ',', data_type, ',', CHARACTER_MAXIMUM_LENGTH)\nfrom information_schema.columns\nWHERE table_schema not in ('sys', 'mysql', 'performance_schema', 'information_schema')\nand (CHARacter_set_name  LIKE 'utf8%'\nor COLLATION_name LIKE 'utf8%');\"\n    );\n    $tmpContent =\n      'Schema,Table,Column, Charset, Collation, Data Type, Max Length';\n    foreach my $badtable (@utf8columns) {\n        $tmpContent .= \"\\n$badtable\";\n    }\n    dump_into_file( 
\"columns_utf8.csv\", $tmpContent );\n\n    my @ftcolumns = select_array(\n\"SELECT CONCAT(table_schema, ',', table_name, ',', column_name, ',', data_type)\nfrom information_schema.columns\nWHERE table_schema not in ('sys', 'mysql', 'performance_schema', 'information_schema')\nAND data_type='FULLTEXT';\"\n    );\n    $tmpContent = 'Schema,Table,Column,Data Type';\n    foreach my $ctable (@ftcolumns) {\n        $tmpContent .= \"\\n$ctable\";\n    }\n    dump_into_file( \"fulltext_columns.csv\", $tmpContent );\n\n}\n\n# Recommendations for Galera\nsub mariadb_galera {\n    subheaderprint \"Galera Metrics\";\n\n    # Galera Cluster\n    unless ( defined $myvar{'have_galera'}\n        && $myvar{'have_galera'} eq \"YES\" )\n    {\n        infoprint \"Galera is disabled.\";\n        return;\n    }\n    infoprint \"Galera is enabled.\";\n    debugprint \"Galera variables:\";\n    foreach my $gvar ( keys %myvar ) {\n        next unless $gvar =~ /^wsrep.*/;\n        next if $gvar eq 'wsrep_provider_options';\n        debugprint \"\\t\" . trim($gvar) . \" = \" . $myvar{$gvar};\n        $result{'Galera'}{'variables'}{$gvar} = $myvar{$gvar};\n    }\n    if ( not defined( $myvar{'wsrep_on'} ) or $myvar{'wsrep_on'} ne \"ON\" ) {\n        infoprint \"Galera is disabled.\";\n        return;\n    }\n    debugprint \"Galera wsrep provider options:\";\n    my @galera_options = get_wsrep_options;\n    $result{'Galera'}{'wsrep options'} = get_wsrep_options();\n    foreach my $gparam (@galera_options) {\n        debugprint \"\\t\" . trim($gparam);\n    }\n    debugprint \"Galera status:\";\n    foreach my $gstatus ( keys %mystat ) {\n        next unless $gstatus =~ /^wsrep.*/;\n        debugprint \"\\t\" . trim($gstatus) . \" = \" . $mystat{$gstatus};\n        # Status values come from %mystat, not %myvar\n        $result{'Galera'}{'status'}{$gstatus} = $mystat{$gstatus};\n    }\n    infoprint \"GCache is using \"\n      . hr_bytes_rnd( get_wsrep_option('gcache.mem_size') );\n\n    infoprint \"CPU cores detected: \" . 
(cpu_cores);\n    infoprint \"wsrep_slave_threads: \" . get_wsrep_option('wsrep_slave_threads');\n\n    if (   get_wsrep_option('wsrep_slave_threads') > ( (cpu_cores) * 4 )\n        or get_wsrep_option('wsrep_slave_threads') < ( (cpu_cores) * 2 ) )\n    {\n        badprint\n\"wsrep_slave_threads is not between 2 and 4 times the number of CPU(s)\";\n        push @adjvars, \"wsrep_slave_threads = \" . ( (cpu_cores) * 4 );\n    }\n    else {\n        goodprint\n\"wsrep_slave_threads is between 2 and 4 times the number of CPU(s)\";\n    }\n\n    if ( get_wsrep_option('wsrep_slave_threads') > 1 ) {\n        infoprint\n          \"wsrep parallel slave can cause frequent inconsistency crashes.\";\n        push @adjvars,\n\"Set wsrep_slave_threads to 1 in case of HA_ERR_FOUND_DUPP_KEY crash on slave\";\n\n        # check options for parallel slave\n        if ( get_wsrep_option('wsrep_slave_FK_checks') eq \"OFF\" ) {\n            badprint \"wsrep_slave_FK_checks is off with parallel slave\";\n            push @adjvars,\n              \"wsrep_slave_FK_checks should be ON when using parallel slave\";\n        }\n\n        # wsrep_slave_UK_checks seems useless in MySQL source code\n        if ( $myvar{'innodb_autoinc_lock_mode'} != 2 ) {\n            badprint\n              \"innodb_autoinc_lock_mode is incorrect with parallel slave\";\n            push @adjvars,\n              \"innodb_autoinc_lock_mode should be 2 when using parallel slave\";\n        }\n    }\n\n    if ( get_wsrep_option('gcs.fc_limit') != $myvar{'wsrep_slave_threads'} * 5 )\n    {\n        badprint \"gcs.fc_limit should be equal to 5 * wsrep_slave_threads (=\"\n          . ( $myvar{'wsrep_slave_threads'} * 5 ) . \")\";\n        push @adjvars, \"gcs.fc_limit = wsrep_slave_threads * 5 (=\"\n          . ( $myvar{'wsrep_slave_threads'} * 5 ) . \")\";\n    }\n    else {\n        goodprint \"gcs.fc_limit is equal to 5 * wsrep_slave_threads (=\"\n          . get_wsrep_option('gcs.fc_limit') . 
\")\";\n    }\n\n    if ( get_wsrep_option('gcs.fc_factor') != 0.8 ) {\n        badprint \"gcs.fc_factor should be equal to 0.8 (=\"\n          . get_wsrep_option('gcs.fc_factor') . \")\";\n        push @adjvars, \"gcs.fc_factor=0.8\";\n    }\n    else {\n        goodprint \"gcs.fc_factor is equal to 0.8\";\n    }\n    if ( get_wsrep_option('wsrep_flow_control_paused') > 0.02 ) {\n        badprint \"Fraction of time the node paused flow control is > 0.02\";\n    }\n    else {\n        goodprint\n\"Flow control fraction seems to be OK (wsrep_flow_control_paused <= 0.02)\";\n    }\n\n    if ( $myvar{'binlog_format'} ne 'ROW' ) {\n        badprint \"Binlog format should be in ROW mode.\";\n        push @adjvars, \"binlog_format = ROW\";\n    }\n    else {\n        goodprint \"Binlog format is in ROW mode.\";\n    }\n    if ( $myvar{'innodb_flush_log_at_trx_commit'} != 0 ) {\n        badprint \"InnoDB flush log at each commit should be disabled.\";\n        push @adjvars, \"innodb_flush_log_at_trx_commit = 0\";\n    }\n    else {\n        goodprint \"InnoDB flush log at each commit is disabled for Galera.\";\n    }\n\n    infoprint \"Read consistency mode: \" . $myvar{'wsrep_causal_reads'};\n\n    if ( defined( $myvar{'wsrep_cluster_name'} )\n        and $myvar{'wsrep_on'} eq \"ON\" )\n    {\n        goodprint \"Galera WsREP is enabled.\";\n        if ( defined( $myvar{'wsrep_cluster_address'} )\n            and trim(\"$myvar{'wsrep_cluster_address'}\") ne \"\" )\n        {\n            goodprint \"Galera Cluster address is defined: \"\n              . 
$myvar{'wsrep_cluster_address'};\n            my @NodesTmp = split /,/, $myvar{'wsrep_cluster_address'};\n            my $nbNodes  = @NodesTmp;\n            infoprint \"There are $nbNodes nodes in wsrep_cluster_address\";\n            my $nbNodesSize = trim( $mystat{'wsrep_cluster_size'} );\n            if ( $nbNodesSize == 3 or $nbNodesSize == 5 ) {\n                goodprint \"There are $nbNodesSize nodes in wsrep_cluster_size.\";\n            }\n            else {\n                badprint\n\"There are $nbNodesSize nodes in wsrep_cluster_size. Prefer a 3 or 5 node architecture.\";\n                push @generalrec, \"Prefer a 3 or 5 node architecture.\";\n            }\n\n            # wsrep_cluster_address doesn't include garbd nodes\n            if ( $nbNodes > $nbNodesSize ) {\n                badprint\n\"Not all cluster nodes are detected: wsrep_cluster_size is less than the node count in wsrep_cluster_address\";\n            }\n            else {\n                goodprint \"All cluster nodes detected.\";\n            }\n        }\n        else {\n            badprint \"Galera Cluster address is undefined\";\n            push @adjvars,\n              \"set up wsrep_cluster_address variable for Galera replication\";\n        }\n        if ( defined( $myvar{'wsrep_cluster_name'} )\n            and trim( $myvar{'wsrep_cluster_name'} ) ne \"\" )\n        {\n            goodprint \"Galera Cluster name is defined: \"\n              . $myvar{'wsrep_cluster_name'};\n        }\n        else {\n            badprint \"Galera Cluster name is undefined\";\n            push @adjvars,\n              \"set up wsrep_cluster_name variable for Galera replication\";\n        }\n        if ( defined( $myvar{'wsrep_node_name'} )\n            and trim( $myvar{'wsrep_node_name'} ) ne \"\" )\n        {\n            goodprint \"Galera Node name is defined: \"\n              . 
$myvar{'wsrep_node_name'};\n        }\n        else {\n            badprint \"Galera node name is undefined\";\n            push @adjvars,\n              \"set up wsrep_node_name variable for Galera replication\";\n        }\n        if ( trim( $myvar{'wsrep_notify_cmd'} ) ne \"\" ) {\n            goodprint \"Galera Notify command is defined.\";\n        }\n        else {\n            badprint \"Galera Notify command is not defined.\";\n            push( @adjvars,\n                \"set up parameter wsrep_notify_cmd to be notified\" );\n        }\n        if (    trim( $myvar{'wsrep_sst_method'} ) !~ \"^xtrabackup.*\"\n            and trim( $myvar{'wsrep_sst_method'} ) !~ \"^mariabackup\" )\n        {\n            badprint \"Galera SST method is not xtrabackup based.\";\n            push( @adjvars,\n\"set wsrep_sst_method to an xtrabackup-based method\"\n            );\n        }\n        else {\n            goodprint \"SST Method is based on xtrabackup.\";\n        }\n        if (\n            (\n                defined( $myvar{'wsrep_OSU_method'} )\n                && trim( $myvar{'wsrep_OSU_method'} ) eq \"TOI\"\n            )\n            || ( defined( $myvar{'wsrep_osu_method'} )\n                && trim( $myvar{'wsrep_osu_method'} ) eq \"TOI\" )\n          )\n        {\n            goodprint \"TOI is the default mode for upgrades.\";\n        }\n        else {\n            badprint \"Schema upgrades are not replicated automatically\";\n            push( @adjvars, \"set up parameter wsrep_OSU_method to TOI\" );\n        }\n        infoprint \"Max WsRep message: \"\n          . 
hr_bytes( $myvar{'wsrep_max_ws_size'} );\n    }\n    else {\n        badprint \"Galera WsREP is disabled\";\n    }\n\n    if ( defined( $mystat{'wsrep_connected'} )\n        and $mystat{'wsrep_connected'} eq \"ON\" )\n    {\n        goodprint \"Node is connected\";\n    }\n    else {\n        badprint \"Node is disconnected\";\n    }\n    if ( defined( $mystat{'wsrep_ready'} ) and $mystat{'wsrep_ready'} eq \"ON\" )\n    {\n        goodprint \"Node is ready\";\n    }\n    else {\n        badprint \"Node is not ready\";\n    }\n    infoprint \"Cluster status: \" . $mystat{'wsrep_cluster_status'};\n    if ( defined( $mystat{'wsrep_cluster_status'} )\n        and $mystat{'wsrep_cluster_status'} eq \"Primary\" )\n    {\n        goodprint \"Galera cluster is consistent and ready for operations\";\n    }\n    else {\n        badprint \"Cluster is not consistent and ready\";\n    }\n    if ( $mystat{'wsrep_local_state_uuid'} eq\n        $mystat{'wsrep_cluster_state_uuid'} )\n    {\n        goodprint \"Node and whole cluster at the same level: \"\n          . $mystat{'wsrep_cluster_state_uuid'};\n    }\n    else {\n        badprint \"Node and whole cluster are not at the same level\";\n        infoprint \"Node    state uuid: \" . $mystat{'wsrep_local_state_uuid'};\n        infoprint \"Cluster state uuid: \" . $mystat{'wsrep_cluster_state_uuid'};\n    }\n    if ( $mystat{'wsrep_local_state_comment'} eq 'Synced' ) {\n        goodprint \"Node is synced with whole cluster.\";\n    }\n    else {\n        badprint \"Node is not synced\";\n        infoprint \"Node State: \" . $mystat{'wsrep_local_state_comment'};\n    }\n    if ( $mystat{'wsrep_local_cert_failures'} == 0 ) {\n        goodprint \"No certification failures detected.\";\n    }\n    else {\n        badprint \"There are \"\n          . $mystat{'wsrep_local_cert_failures'}\n          . 
\" certification failure(s) detected.\";\n    }\n\n    for my $key ( keys %mystat ) {\n        if ( $key =~ /wsrep_|galera/i ) {\n            debugprint \"WSREP: $key = $mystat{$key}\";\n        }\n    }\n\n    #debugprint Dumper get_wsrep_options() if $opt{debug};\n}\n\n# Recommendations for InnoDB\nsub mysql_innodb {\n    subheaderprint \"InnoDB Metrics\";\n\n    # InnoDB\n    unless ( defined $myvar{'have_innodb'}\n        && $myvar{'have_innodb'} eq \"YES\" )\n    {\n        infoprint \"InnoDB is disabled.\";\n        if ( mysql_version_ge( 5, 5 ) ) {\n            my $defengine = 'InnoDB';\n            $defengine = $myvar{'default_storage_engine'}\n              if defined( $myvar{'default_storage_engine'} );\n            badprint\n\"InnoDB Storage engine is disabled. $defengine is the default storage engine\"\n              if $defengine eq 'InnoDB';\n            infoprint\n\"InnoDB Storage engine is disabled. $defengine is the default storage engine\"\n              if $defengine ne 'InnoDB';\n        }\n        return;\n    }\n    infoprint \"InnoDB is enabled.\";\n    if ( !defined $enginestats{'InnoDB'} ) {\n        if ( $opt{skipsize} eq 1 ) {\n            infoprint \"Skipped due to --skipsize option\";\n            return;\n        }\n        badprint \"No tables are InnoDB\";\n        $enginestats{'InnoDB'} = 0;\n    }\n\n    if ( $opt{buffers} ne 0 ) {\n        infoprint \"InnoDB Buffers\";\n        if ( defined $myvar{'innodb_buffer_pool_size'} ) {\n            infoprint \" +-- InnoDB Buffer Pool: \"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} ) . \"\";\n        }\n        if ( defined $myvar{'innodb_buffer_pool_instances'} ) {\n            infoprint \" +-- InnoDB Buffer Pool Instances: \"\n              . $myvar{'innodb_buffer_pool_instances'} . \"\";\n        }\n\n        if ( defined $myvar{'innodb_buffer_pool_chunk_size'} ) {\n            infoprint \" +-- InnoDB Buffer Pool Chunk Size: \"\n              . 
hr_bytes( $myvar{'innodb_buffer_pool_chunk_size'} ) . \"\";\n        }\n        if ( defined $myvar{'innodb_additional_mem_pool_size'} ) {\n            infoprint \" +-- InnoDB Additional Mem Pool: \"\n              . hr_bytes( $myvar{'innodb_additional_mem_pool_size'} ) . \"\";\n        }\n        if ( defined $myvar{'innodb_redo_log_capacity'} ) {\n            infoprint \" +-- InnoDB Redo Log Capacity: \"\n              . hr_bytes( $myvar{'innodb_redo_log_capacity'} );\n        }\n        else {\n            if ( defined $myvar{'innodb_log_file_size'} ) {\n                infoprint \" +-- InnoDB Log File Size: \"\n                  . hr_bytes( $myvar{'innodb_log_file_size'} );\n            }\n            if ( defined $myvar{'innodb_log_files_in_group'} ) {\n                infoprint \" +-- InnoDB Log File In Group: \"\n                  . $myvar{'innodb_log_files_in_group'};\n                infoprint \" +-- InnoDB Total Log File Size: \"\n                  . hr_bytes( $myvar{'innodb_log_files_in_group'} *\n                      $myvar{'innodb_log_file_size'} )\n                  . \"(\"\n                  . $mycalc{'innodb_log_size_pct'}\n                  . \" % of buffer pool)\";\n            }\n            else {\n                infoprint \" +-- InnoDB Total Log File Size: \"\n                  . hr_bytes( $myvar{'innodb_log_file_size'} ) . \"(\"\n                  . $mycalc{'innodb_log_size_pct'}\n                  . \" % of buffer pool)\";\n            }\n        }\n        if ( defined $myvar{'innodb_log_buffer_size'} ) {\n            infoprint \" +-- InnoDB Log Buffer: \"\n              . hr_bytes( $myvar{'innodb_log_buffer_size'} );\n        }\n        if ( defined $mystat{'Innodb_buffer_pool_pages_free'} ) {\n            infoprint \" +-- InnoDB Buffer Free: \"\n              . hr_bytes( $mystat{'Innodb_buffer_pool_pages_free'} ) . 
\"\";\n        }\n        if ( defined $mystat{'Innodb_buffer_pool_pages_total'} ) {\n            infoprint \" +-- InnoDB Buffer Used: \"\n              . hr_bytes( $mystat{'Innodb_buffer_pool_pages_total'} ) . \"\";\n        }\n    }\n\n    if ( defined $myvar{'innodb_thread_concurrency'} ) {\n        infoprint \"InnoDB Thread Concurrency: \"\n          . $myvar{'innodb_thread_concurrency'};\n    }\n\n    # InnoDB Buffer Pool Size\n    if ( $myvar{'innodb_file_per_table'} eq \"ON\" ) {\n        goodprint \"InnoDB File per table is activated\";\n    }\n    else {\n        badprint \"InnoDB File per table is not activated\";\n        push( @adjvars, \"innodb_file_per_table=ON\" );\n    }\n\n    # InnoDB Buffer Pool Size\n    if ( $arch == 32 && $myvar{'innodb_buffer_pool_size'} > 4294967295 ) {\n        badprint\n          \"InnoDB Buffer Pool size limit reached for 32 bits architecture: (\"\n          . hr_bytes(4294967295) . \" )\";\n        push( @adjvars,\n                \"limit innodb_buffer_pool_size under \"\n              . hr_bytes(4294967295)\n              . \" for 32 bits architecture\" );\n    }\n    if ( $arch == 32 && $myvar{'innodb_buffer_pool_size'} < 4294967295 ) {\n        goodprint \"InnoDB Buffer Pool size ( \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n          . \" ) under limit for 32 bits architecture: (\"\n          . hr_bytes(4294967295) . \")\";\n    }\n    if (   $arch == 64\n        && $myvar{'innodb_buffer_pool_size'} > 18446744073709551615 )\n    {\n        badprint \"InnoDB Buffer Pool size limit(\"\n          . hr_bytes(18446744073709551615)\n          . \") reached for 64 bits architecture\";\n        push( @adjvars,\n                \"limit innodb_buffer_pool_size under \"\n              . hr_bytes(18446744073709551615)\n              . 
\" for 64 bits architecture\" );\n    }\n\n    if (   $arch == 64\n        && $myvar{'innodb_buffer_pool_size'} < 18446744073709551615 )\n    {\n        goodprint \"InnoDB Buffer Pool size ( \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n          . \" ) under limit for 64 bits architecture: (\"\n          . hr_bytes(18446744073709551615) . \" )\";\n    }\n    if ( $myvar{'innodb_buffer_pool_size'} > $enginestats{'InnoDB'} ) {\n        goodprint \"InnoDB buffer pool / data size: \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} ) . \" / \"\n          . hr_bytes( $enginestats{'InnoDB'} ) . \"\";\n    }\n    else {\n        badprint \"InnoDB buffer pool / data size: \"\n          . hr_bytes( $myvar{'innodb_buffer_pool_size'} ) . \" / \"\n          . hr_bytes( $enginestats{'InnoDB'} ) . \"\";\n        push( @adjvars,\n                \"innodb_buffer_pool_size (>= \"\n              . hr_bytes( $enginestats{'InnoDB'} )\n              . \") if possible.\" );\n    }\n\n  # select  round( 100* sum(allocated)/( select VARIABLE_VALUE\n  #                                  FROM information_schema.global_variables\n  #                              where VARIABLE_NAME='innodb_buffer_pool_size' )\n  # ,2) as \"PCT ALLOC/BUFFER POOL\"\n  #from sys.x$innodb_buffer_stats_by_table;\n\n    if ( $opt{experimental} ) {\n        debugprint( 'innodb_buffer_alloc_pct: \"'\n              . $mycalc{innodb_buffer_alloc_pct}\n              . '\"' );\n        if ( defined $mycalc{innodb_buffer_alloc_pct}\n            and $mycalc{innodb_buffer_alloc_pct} ne '' )\n        {\n            if ( $mycalc{innodb_buffer_alloc_pct} < 80 ) {\n                badprint \"Ratio Buffer Pool allocated / Buffer Pool Size: \"\n                  . $mycalc{'innodb_buffer_alloc_pct'} . '%';\n            }\n            else {\n                goodprint \"Ratio Buffer Pool allocated / Buffer Pool Size: \"\n                  . $mycalc{'innodb_buffer_alloc_pct'} . 
'%';\n            }\n        }\n    }\n    if (   $mycalc{'innodb_log_size_pct'} < 20\n        or $mycalc{'innodb_log_size_pct'} > 30 )\n    {\n        if ( defined $myvar{'innodb_redo_log_capacity'} ) {\n            badprint\n              \"Ratio InnoDB redo log capacity / InnoDB Buffer pool size (\"\n              . $mycalc{'innodb_log_size_pct'} . \"%): \"\n              . hr_bytes( $myvar{'innodb_redo_log_capacity'} ) . \" / \"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n            push( @adjvars,\n                    \"innodb_redo_log_capacity should be (=\"\n                  . hr_bytes_rnd( $myvar{'innodb_buffer_pool_size'} / 4 )\n                  . \") if possible, so InnoDB Redo log Capacity equals 25% of buffer pool size.\"\n            );\n            push( @generalrec,\n\"Be careful, increasing innodb_redo_log_capacity means higher crash recovery mean time\"\n            );\n        }\n        else {\n            badprint \"Ratio InnoDB log file size / InnoDB Buffer pool size (\"\n              . $mycalc{'innodb_log_size_pct'} . \"%): \"\n              . hr_bytes( $myvar{'innodb_log_file_size'} ) . \" * \"\n              . $myvar{'innodb_log_files_in_group'} . \" / \"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n            push(\n                @adjvars,\n                \"innodb_log_file_size should be (=\"\n                  . hr_bytes_rnd(\n                    $myvar{'innodb_buffer_pool_size'} /\n                      $myvar{'innodb_log_files_in_group'} / 4\n                  )\n                  . 
\") if possible, so InnoDB total log file size equals 25% of buffer pool size.\"\n            );\n            push( @generalrec,\n\"Be careful, increasing innodb_log_file_size / innodb_log_files_in_group means higher crash recovery mean time\"\n            );\n        }\n        if ( mysql_version_le( 5, 6, 2 ) ) {\n            push( @generalrec,\n\"For MySQL 5.6.2 and lower, total innodb_log_file_size should have a ceiling of (4096MB / log files in group) - 1MB.\"\n            );\n        }\n\n    }\n    else {\n        if ( defined $myvar{'innodb_redo_log_capacity'} ) {\n            goodprint\n              \"Ratio InnoDB Redo Log Capacity / InnoDB Buffer pool size: \"\n              . hr_bytes( $myvar{'innodb_redo_log_capacity'} ) . \"/\"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n        }\n        else {\n            push( @generalrec,\n\"Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU\"\n            );\n            goodprint \"Ratio InnoDB log file size / InnoDB Buffer pool size: \"\n              . hr_bytes( $myvar{'innodb_log_file_size'} ) . \" * \"\n              . $myvar{'innodb_log_files_in_group'} . \"/\"\n              . hr_bytes( $myvar{'innodb_buffer_pool_size'} )\n              . \" should be equal to 25%\";\n        }\n    }\n\n    # InnoDB Buffer Pool Instances (MySQL 5.6.6+)\n    if ( not mysql_version_ge( 10, 4 )\n        and defined( $myvar{'innodb_buffer_pool_instances'} ) )\n    {\n\n        # Bad Value if > 64\n        if ( $myvar{'innodb_buffer_pool_instances'} > 64 ) {\n            badprint \"InnoDB buffer pool instances: \"\n              . $myvar{'innodb_buffer_pool_instances'} . 
\"\";\n            push( @adjvars, \"innodb_buffer_pool_instances (<= 64)\" );\n        }\n\n        # InnoDB Buffer Pool Size > 1Go\n        if ( $myvar{'innodb_buffer_pool_size'} > 1024 * 1024 * 1024 ) {\n\n# InnoDB Buffer Pool Size / 1Go = InnoDB Buffer Pool Instances limited to 64 max.\n\n            #  InnoDB Buffer Pool Size > 64Go\n            my $max_innodb_buffer_pool_instances =\n              int( $myvar{'innodb_buffer_pool_size'} / ( 1024 * 1024 * 1024 ) );\n            $max_innodb_buffer_pool_instances = 64\n              if ( $max_innodb_buffer_pool_instances > 64 );\n\n            if ( $myvar{'innodb_buffer_pool_instances'} !=\n                $max_innodb_buffer_pool_instances )\n            {\n                badprint \"InnoDB buffer pool instances: \"\n                  . $myvar{'innodb_buffer_pool_instances'} . \"\";\n                push( @adjvars,\n                        \"innodb_buffer_pool_instances(=\"\n                      . $max_innodb_buffer_pool_instances\n                      . \")\" );\n            }\n            else {\n                goodprint \"InnoDB buffer pool instances: \"\n                  . $myvar{'innodb_buffer_pool_instances'} . \"\";\n            }\n\n            # InnoDB Buffer Pool Size < 1Go\n        }\n        else {\n            if ( $myvar{'innodb_buffer_pool_instances'} != 1 ) {\n                badprint\n\"InnoDB buffer pool <= 1G and Innodb_buffer_pool_instances(!=1).\";\n                push( @adjvars, \"innodb_buffer_pool_instances (=1)\" );\n            }\n            else {\n                goodprint \"InnoDB buffer pool instances: \"\n                  . $myvar{'innodb_buffer_pool_instances'} . 
\"\";\n            }\n        }\n    }\n\n    # InnoDB Used Buffer Pool Size vs CHUNK size\n    if ( !defined( $myvar{'innodb_buffer_pool_chunk_size'} ) ) {\n        infoprint\n          \"InnoDB Buffer Pool Chunk Size not used or defined in your version\";\n    }\n    else {\n        infoprint \"Number of InnoDB Buffer Pool Chunks: \"\n          . int( $myvar{'innodb_buffer_pool_size'} ) /\n          int( $myvar{'innodb_buffer_pool_chunk_size'} ) . \" for \"\n          . $myvar{'innodb_buffer_pool_instances'}\n          . \" Buffer Pool Instance(s)\";\n\n        if (\n            int( $myvar{'innodb_buffer_pool_size'} ) % (\n                int( $myvar{'innodb_buffer_pool_chunk_size'} ) *\n                  int( $myvar{'innodb_buffer_pool_instances'} )\n            ) eq 0\n          )\n        {\n            goodprint\n\"Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances\";\n        }\n        else {\n            badprint\n\"Innodb_buffer_pool_size not aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances\";\n\n#push( @adjvars, \"Adjust innodb_buffer_pool_instances, innodb_buffer_pool_chunk_size with innodb_buffer_pool_size\" );\n            push( @adjvars,\n\"innodb_buffer_pool_size must always be equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances\"\n            );\n        }\n    }\n\n    # InnoDB Read efficiency\n    if ( defined $mycalc{'pct_read_efficiency'}\n        && $mycalc{'pct_read_efficiency'} < 90 )\n    {\n        badprint \"InnoDB Read buffer efficiency: \"\n          . $mycalc{'pct_read_efficiency'} . \"% (\"\n          . $mystat{'Innodb_buffer_pool_read_requests'}\n          . \" hits / \"\n          . ( $mystat{'Innodb_buffer_pool_reads'} +\n              $mystat{'Innodb_buffer_pool_read_requests'} )\n          . \" total)\";\n    }\n    else {\n        goodprint \"InnoDB Read buffer efficiency: \"\n          . $mycalc{'pct_read_efficiency'} . 
\"% (\"\n          . $mystat{'Innodb_buffer_pool_read_requests'}\n          . \" hits / \"\n          . ( $mystat{'Innodb_buffer_pool_reads'} +\n              $mystat{'Innodb_buffer_pool_read_requests'} )\n          . \" total)\";\n    }\n\n    # InnoDB Write efficiency\n    if ( defined $mycalc{'pct_write_efficiency'}\n        && $mycalc{'pct_write_efficiency'} < 90 )\n    {\n        badprint \"InnoDB Write Log efficiency: \"\n          . abs( $mycalc{'pct_write_efficiency'} ) . \"% (\"\n          . abs( $mystat{'Innodb_log_write_requests'} -\n              $mystat{'Innodb_log_writes'} )\n          . \" hits / \"\n          . $mystat{'Innodb_log_write_requests'}\n          . \" total)\";\n        push( @adjvars,\n                \"innodb_log_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'innodb_log_buffer_size'} )\n              . \")\" );\n    }\n    else {\n        goodprint \"InnoDB Write Log efficiency: \"\n          . $mycalc{'pct_write_efficiency'} . \"% (\"\n          . ( $mystat{'Innodb_log_write_requests'} -\n              $mystat{'Innodb_log_writes'} )\n          . \" hits / \"\n          . $mystat{'Innodb_log_write_requests'}\n          . \" total)\";\n    }\n\n    # InnoDB Log Waits\n    $mystat{'Innodb_log_waits_computed'} = 0;\n\n    if (    defined( $mystat{'Innodb_log_waits'} )\n        and defined( $mystat{'Innodb_log_writes'} )\n        and $mystat{'Innodb_log_writes'} > 0.000001 )\n    {\n        $mystat{'Innodb_log_waits_computed'} =\n          $mystat{'Innodb_log_waits'} / $mystat{'Innodb_log_writes'};\n    }\n    else {\n        undef $mystat{'Innodb_log_waits_computed'};\n    }\n\n    if ( defined $mystat{'Innodb_log_waits_computed'}\n        && $mystat{'Innodb_log_waits_computed'} > 0.000001 )\n    {\n        badprint \"InnoDB log waits: \"\n          . percentage( $mystat{'Innodb_log_waits'},\n            $mystat{'Innodb_log_writes'} )\n          . \"% (\"\n          . $mystat{'Innodb_log_waits'}\n          . 
\" waits / \"\n          . $mystat{'Innodb_log_writes'}\n          . \" writes)\";\n        push( @adjvars,\n                \"innodb_log_buffer_size (> \"\n              . hr_bytes_rnd( $myvar{'innodb_log_buffer_size'} )\n              . \")\" );\n    }\n    else {\n        goodprint \"InnoDB log waits: \"\n          . percentage( $mystat{'Innodb_log_waits'},\n            $mystat{'Innodb_log_writes'} )\n          . \"% (\"\n          . $mystat{'Innodb_log_waits'}\n          . \" waits / \"\n          . $mystat{'Innodb_log_writes'}\n          . \" writes)\";\n    }\n    $result{'Calculations'} = {%mycalc};\n}\n\nsub check_metadata_perf {\n    subheaderprint \"Analysis Performance Metrics\";\n    if ( defined $myvar{'innodb_stats_on_metadata'} ) {\n        infoprint \"innodb_stats_on_metadata: \"\n          . $myvar{'innodb_stats_on_metadata'};\n        if ( $myvar{'innodb_stats_on_metadata'} eq 'ON' ) {\n            badprint \"Stats are updated when querying INFORMATION_SCHEMA.\";\n            push @adjvars, \"SET innodb_stats_on_metadata = OFF\";\n\n            #Disabling innodb_stats_on_metadata\n            select_one(\"SET GLOBAL innodb_stats_on_metadata = OFF;\");\n            return 1;\n        }\n    }\n    goodprint \"No stat updates when querying INFORMATION_SCHEMA.\";\n    return 0;\n}\n\n# Recommendations for Database metrics\nsub mysql_databases {\n    return if ( $opt{dbstat} == 0 );\n\n    subheaderprint \"Database Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Database metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n\n    @dblist = select_array(\n\"SELECT SCHEMA_NAME FROM information_schema.SCHEMATA WHERE SCHEMA_NAME NOT IN ( 'mysql', 'performance_schema', 'information_schema', 'sys' );\"\n    );\n    infoprint \"There are \" . scalar(@dblist) . 
\" Database(s).\";\n    my @totaldbinfo = split /\\s/,\n      select_one(\n\"SELECT SUM(TABLE_ROWS), SUM(DATA_LENGTH), SUM(INDEX_LENGTH), SUM(DATA_LENGTH+INDEX_LENGTH), COUNT(TABLE_NAME), COUNT(DISTINCT(TABLE_COLLATION)), COUNT(DISTINCT(ENGINE)) FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n      );\n    infoprint \"All User Databases:\";\n    infoprint \" +-- TABLE : \"\n      . select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='BASE TABLE' AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')\"\n      ) . \"\";\n    infoprint \" +-- VIEW  : \"\n      . select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='VIEW' AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')\"\n      ) . \"\";\n    infoprint \" +-- INDEX : \"\n      . select_one(\n\"SELECT count(distinct(concat(TABLE_NAME, TABLE_SCHEMA, INDEX_NAME))) from information_schema.STATISTICS WHERE TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys')\"\n      ) . \"\";\n\n    infoprint \" +-- CHARS : \"\n      . ( $totaldbinfo[5] eq 'NULL' ? 0 : $totaldbinfo[5] ) . \" (\"\n      . (\n        join \", \",\n        select_array(\n\"select distinct(CHARACTER_SET_NAME) from information_schema.columns WHERE CHARACTER_SET_NAME IS NOT NULL AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n        )\n      ) . \")\";\n    infoprint \" +-- COLLA : \"\n      . ( $totaldbinfo[5] eq 'NULL' ? 0 : $totaldbinfo[5] ) . \" (\"\n      . (\n        join \", \",\n        select_array(\n\"SELECT DISTINCT(TABLE_COLLATION) FROM information_schema.TABLES WHERE TABLE_COLLATION IS NOT NULL AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n        )\n      ) . \")\";\n    infoprint \" +-- ROWS  : \"\n      . ( $totaldbinfo[0] eq 'NULL' ? 
0 : $totaldbinfo[0] ) . \"\";\n    infoprint \" +-- DATA  : \"\n      . hr_bytes( $totaldbinfo[1] ) . \"(\"\n      . percentage( $totaldbinfo[1], $totaldbinfo[3] ) . \"%)\";\n    infoprint \" +-- INDEX : \"\n      . hr_bytes( $totaldbinfo[2] ) . \"(\"\n      . percentage( $totaldbinfo[2], $totaldbinfo[3] ) . \"%)\";\n    infoprint \" +-- SIZE  : \" . hr_bytes( $totaldbinfo[3] ) . \"\";\n    infoprint \" +-- ENGINE: \"\n      . ( $totaldbinfo[6] eq 'NULL' ? 0 : $totaldbinfo[6] ) . \" (\"\n      . (\n        join \", \",\n        select_array(\n\"SELECT DISTINCT(ENGINE) FROM information_schema.TABLES WHERE ENGINE IS NOT NULL AND TABLE_SCHEMA NOT IN ('mysql', 'performance_schema', 'information_schema', 'sys');\"\n        )\n      ) . \")\";\n\n    $result{'Databases'}{'All databases'}{'Rows'} =\n      ( $totaldbinfo[0] eq 'NULL' ? 0 : $totaldbinfo[0] );\n    $result{'Databases'}{'All databases'}{'Data Size'} = $totaldbinfo[1];\n    $result{'Databases'}{'All databases'}{'Data Pct'} =\n      percentage( $totaldbinfo[1], $totaldbinfo[3] ) . \"%\";\n    $result{'Databases'}{'All databases'}{'Index Size'} = $totaldbinfo[2];\n    $result{'Databases'}{'All databases'}{'Index Pct'} =\n      percentage( $totaldbinfo[2], $totaldbinfo[3] ) . \"%\";\n    $result{'Databases'}{'All databases'}{'Total Size'} = $totaldbinfo[3];\n    print \"\\n\" unless ( $opt{'silent'} or $opt{'json'} );\n    my $nbViews  = 0;\n    my $nbTables = 0;\n\n    foreach (@dblist) {\n        my @dbinfo = split /\\s/,\n          select_one(\n\"SELECT TABLE_SCHEMA, SUM(TABLE_ROWS), SUM(DATA_LENGTH), SUM(INDEX_LENGTH), SUM(DATA_LENGTH+INDEX_LENGTH), COUNT(DISTINCT ENGINE), COUNT(TABLE_NAME), COUNT(DISTINCT(TABLE_COLLATION)), COUNT(DISTINCT(ENGINE)) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' GROUP BY TABLE_SCHEMA ORDER BY TABLE_SCHEMA\"\n          );\n        next unless defined $dbinfo[0];\n\n        infoprint \"Database: \" . $dbinfo[0] . 
\"\";\n        $nbTables = select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='BASE TABLE' AND TABLE_SCHEMA='$_'\"\n        );\n        infoprint \" +-- TABLE : $nbTables\";\n        infoprint \" +-- VIEW  : \"\n          . select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='VIEW' AND TABLE_SCHEMA='$_'\"\n          ) . \"\";\n        infoprint \" +-- INDEX : \"\n          . select_one(\n\"SELECT count(distinct(concat(TABLE_NAME, TABLE_SCHEMA, INDEX_NAME))) from information_schema.STATISTICS WHERE TABLE_SCHEMA='$_'\"\n          ) . \"\";\n        infoprint \" +-- CHARS : \"\n          . ( $totaldbinfo[5] eq 'NULL' ? 0 : $totaldbinfo[5] ) . \" (\"\n          . (\n            join \", \",\n            select_array(\n\"select distinct(CHARACTER_SET_NAME) from information_schema.columns WHERE CHARACTER_SET_NAME IS NOT NULL AND TABLE_SCHEMA='$_';\"\n            )\n          ) . \")\";\n        infoprint \" +-- COLLA : \"\n          . ( $dbinfo[7] eq 'NULL' ? 0 : $dbinfo[7] ) . \" (\"\n          . (\n            join \", \",\n            select_array(\n\"SELECT DISTINCT(TABLE_COLLATION) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' AND TABLE_COLLATION IS NOT NULL;\"\n            )\n          ) . \")\";\n        infoprint \" +-- ROWS  : \"\n          . ( !defined( $dbinfo[1] ) or $dbinfo[1] eq 'NULL' ? 0 : $dbinfo[1] )\n          . \"\";\n        infoprint \" +-- DATA  : \"\n          . hr_bytes( $dbinfo[2] ) . \"(\"\n          . percentage( $dbinfo[2], $dbinfo[4] ) . \"%)\";\n        infoprint \" +-- INDEX : \"\n          . hr_bytes( $dbinfo[3] ) . \"(\"\n          . percentage( $dbinfo[3], $dbinfo[4] ) . \"%)\";\n        infoprint \" +-- TOTAL : \" . hr_bytes( $dbinfo[4] ) . \"\";\n        infoprint \" +-- ENGINE: \"\n          . ( $dbinfo[8] eq 'NULL' ? 0 : $dbinfo[8] ) . \" (\"\n          . 
(\n            join \", \",\n            select_array(\n\"SELECT DISTINCT(ENGINE) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' AND ENGINE IS NOT NULL\"\n            )\n          ) . \")\";\n\n        foreach my $eng (\n            select_array(\n\"SELECT DISTINCT(ENGINE) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$_' AND ENGINE IS NOT NULL\"\n            )\n          )\n        {\n            infoprint \" +-- ENGINE $eng : \"\n              . select_one(\n\"SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA='$dbinfo[0]' AND ENGINE='$eng'\"\n              ) . \" TABLE(s)\";\n        }\n\n        if ( $nbTables == 0 ) {\n            badprint \" No table in $dbinfo[0] database\";\n            next;\n        }\n        badprint \"Index size is larger than data size for $dbinfo[0] \\n\"\n          if ( $dbinfo[2] ne 'NULL' )\n          and ( $dbinfo[3] ne 'NULL' )\n          and ( $dbinfo[2] < $dbinfo[3] );\n        if ( $dbinfo[5] > 1 and $nbTables > 0 ) {\n            badprint \"There are \"\n              . $dbinfo[5]\n              . \" storage engines. Be careful. \\n\";\n            push @generalrec,\n\"Select one storage engine (InnoDB is a good choice) for all tables in $dbinfo[0] database ($dbinfo[5] engines detected)\";\n        }\n        $result{'Databases'}{ $dbinfo[0] }{'Rows'}       = $dbinfo[1];\n        $result{'Databases'}{ $dbinfo[0] }{'Tables'}     = $dbinfo[6];\n        $result{'Databases'}{ $dbinfo[0] }{'Collations'} = $dbinfo[7];\n        $result{'Databases'}{ $dbinfo[0] }{'Data Size'}  = $dbinfo[2];\n        $result{'Databases'}{ $dbinfo[0] }{'Data Pct'} =\n          percentage( $dbinfo[2], $dbinfo[4] ) . \"%\";\n        $result{'Databases'}{ $dbinfo[0] }{'Index Size'} = $dbinfo[3];\n        $result{'Databases'}{ $dbinfo[0] }{'Index Pct'} =\n          percentage( $dbinfo[3], $dbinfo[4] ) . 
\"%\";\n        $result{'Databases'}{ $dbinfo[0] }{'Total Size'} = $dbinfo[4];\n\n        if ( $dbinfo[7] > 1 ) {\n            badprint $dbinfo[7]\n              . \" different collations for database \"\n              . $dbinfo[0];\n            push( @generalrec,\n                \"Check all table collations are identical for all tables in \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[7]\n              . \" collation for \"\n              . $dbinfo[0]\n              . \" database.\";\n        }\n        if ( $dbinfo[8] > 1 ) {\n            badprint $dbinfo[8]\n              . \" different engines for database \"\n              . $dbinfo[0];\n            push( @generalrec,\n                    \"Check all table engines are identical for all tables in \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[8] . \" engine for \" . $dbinfo[0] . \" database.\";\n        }\n\n        my @distinct_column_charset = select_array(\n\"select DISTINCT(CHARACTER_SET_NAME) from information_schema.COLUMNS where CHARACTER_SET_NAME IS NOT NULL AND TABLE_SCHEMA ='$_' AND CHARACTER_SET_NAME IS NOT NULL\"\n        );\n        infoprint \"Charsets for $dbinfo[0] database table column: \"\n          . join( ', ', @distinct_column_charset );\n        if ( scalar(@distinct_column_charset) > 1 ) {\n            badprint $dbinfo[0]\n              . \" table column(s) has several charsets defined for all text like column(s).\";\n            push( @generalrec,\n                    \"Limit charset for column to one charset if possible for \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[0]\n              . 
\" table column(s) has same charset defined for all text like column(s).\";\n        }\n\n        my @distinct_column_collation = select_array(\n\"select DISTINCT(COLLATION_NAME) from information_schema.COLUMNS where COLLATION_NAME IS NOT NULL AND TABLE_SCHEMA ='$_' AND COLLATION_NAME IS NOT NULL\"\n        );\n        infoprint \"Collations for $dbinfo[0] database table column: \"\n          . join( ', ', @distinct_column_collation );\n        if ( scalar(@distinct_column_collation) > 1 ) {\n            badprint $dbinfo[0]\n              . \" table column(s) has several collations defined for all text like column(s).\";\n            push( @generalrec,\n                \"Limit collations for column to one collation if possible for \"\n                  . $dbinfo[0]\n                  . \" database.\" );\n        }\n        else {\n            goodprint $dbinfo[0]\n              . \" table column(s) has same collation defined for all text like column(s).\";\n        }\n    }\n}\n\n# Recommendations for database columns\nsub mysql_tables {\n    return if ( $opt{tbstat} == 0 );\n\n    subheaderprint \"Table Column Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Table column metrics from information schema are missing in this version. 
Skipping...\";\n        return;\n    }\n    if ( mysql_version_ge(8) and not mysql_version_eq(10) ) {\n        infoprint\n\"MySQL and Percona version 8.0 and greater have removed PROCEDURE ANALYSE feature\";\n        $opt{colstat} = 0;\n        infoprint \"Disabling colstat parameter\";\n\n    }\n\n    infoprint(\"Dumpdir: $opt{dumpdir}\");\n\n    # Store all information schema in dumpdir if defined\n    if ( defined $opt{dumpdir} and -d \"$opt{dumpdir}\" ) {\n        for my $info_s_table (\n            select_array('use information_schema;show tables;') )\n        {\n            infoprint \"Dumping $info_s_table into $opt{dumpdir}\";\n            select_csv_file(\n                \"$opt{dumpdir}/ifs_${info_s_table}.csv\",\n                \"select * from information_schema.$info_s_table\"\n            );\n        }\n\n        #exit 0 if ( $opt{stop} == 1 );\n    }\n    foreach ( select_user_dbs() ) {\n        my $dbname = $_;\n        next unless defined $_;\n        infoprint \"Database: \" . $_ . \"\";\n        my @dbtable = select_array(\n\"SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA='$dbname' AND TABLE_TYPE='BASE TABLE' ORDER BY TABLE_NAME\"\n        );\n        foreach (@dbtable) {\n            my $tbname = $_;\n            infoprint \" +-- TABLE: $tbname\";\n            infoprint \"     +-- TYPE: \"\n              . 
select_one(\n\"SELECT ENGINE FROM information_schema.tables where TABLE_schema='$dbname' AND TABLE_NAME='$tbname'\"\n              );\n\n            my $selIdxReq = <<\"ENDSQL\";\n      SELECT  index_name AS idxname, \n              GROUP_CONCAT(column_name ORDER BY seq_in_index) AS cols, \n              INDEX_TYPE as type\n              FROM information_schema.statistics\n              WHERE INDEX_SCHEMA='$dbname'\n              AND TABLE_NAME='$tbname'\n              GROUP BY idxname, type\nENDSQL\n            my @tbidx = select_array($selIdxReq);\n            my $found = 0;\n            foreach my $idx (@tbidx) {\n                my @info = split /\\s/, $idx;\n                next if $info[0] eq 'NULL';\n                infoprint\n                  \"     +-- Index $info[0] - Cols: $info[1] - Type: $info[2]\";\n                $found++;\n            }\n            if ( $found == 0 ) {\n                badprint(\"Table $dbname.$tbname has no index defined\");\n                push @generalrec,\n                  \"Add at least a primary key on table $dbname.$tbname\";\n            }\n            my @tbcol = select_array(\n\"SELECT COLUMN_NAME FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$dbname' AND TABLE_NAME='$tbname'\"\n            );\n            foreach (@tbcol) {\n                my $ctype = select_one(\n\"SELECT COLUMN_TYPE FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$dbname' AND TABLE_NAME='$tbname' AND COLUMN_NAME='$_' \"\n                );\n                my $isnull = select_one(\n\"SELECT IS_NULLABLE FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='$dbname' AND TABLE_NAME='$tbname' AND COLUMN_NAME='$_' \"\n                );\n\n                my $current_type =\n                  uc($ctype) . ( $isnull eq 'NO' ? 
\" NOT NULL\" : \" NULL\" );\n                my $optimal_type = '';\n                infoprint \"     +-- Column $tbname.$_: $current_type\";\n                if ( $opt{colstat} == 1 ) {\n                    $optimal_type = select_str_g( \"Optimal_fieldtype\",\n\"SELECT \\\\`$_\\\\` FROM \\\\`$dbname\\\\`.\\\\`$tbname\\\\` PROCEDURE ANALYSE(100000)\"\n                      )\n                      unless ( mysql_version_ge(8)\n                        and not mysql_version_eq(10) );\n                }\n                if ( $optimal_type eq '' ) {\n\n                    #infoprint \"     +-- Current Fieldtype: $current_type\";\n\n                    #infoprint \"      Optimal Fieldtype: Not available\";\n                }\n                elsif ( $current_type ne $optimal_type\n                    and $current_type !~ /.*DATETIME.*/\n                    and $current_type !~ /.*TIMESTAMP.*/ )\n                {\n                    infoprint \"     +-- Current Fieldtype: $current_type\";\n                    if ( $optimal_type =~ /.*ENUM\\(.*/ ) {\n                        $optimal_type = \"ENUM( ... 
)\";\n                    }\n                    infoprint \"     +-- Optimal Fieldtype: $optimal_type \";\n                    if ( $optimal_type !~ /.*ENUM\\(.*/ ) {\n                        badprint\n\"Consider changing type for column $_ in table $dbname.$tbname\";\n                        push( @generalrec,\n\"ALTER TABLE \\`$dbname\\`.\\`$tbname\\` MODIFY \\`$_\\` $optimal_type;\"\n                        );\n                    }\n                }\n                else {\n                    goodprint \"$dbname.$tbname ($_) type: $current_type\";\n                }\n            }\n        }\n    }\n}\n\n# Recommendations for Indexes metrics\nsub mysql_indexes {\n    return if ( $opt{idxstat} == 0 );\n\n    subheaderprint \"Indexes Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Index metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n\n#    unless ( mysql_version_ge( 5, 6 ) ) {\n#        infoprint\n#\"Skip Index metrics from information schema due to erroneous information provided in this version\";\n#        return;\n#    }\n    my $selIdxReq = <<'ENDSQL';\nSELECT\n  CONCAT(t.TABLE_SCHEMA, '.', t.TABLE_NAME) AS 'table', \n  CONCAT(s.INDEX_NAME, '(', s.COLUMN_NAME, ')') AS 'index'\n , s.SEQ_IN_INDEX AS 'seq'\n , s2.max_columns AS 'maxcol'\n , s.CARDINALITY  AS 'card'\n , t.TABLE_ROWS   AS 'est_rows'\n , INDEX_TYPE as type\n , ROUND(((s.CARDINALITY / IFNULL(t.TABLE_ROWS, 0.01)) * 100), 2) AS 'sel'\nFROM INFORMATION_SCHEMA.STATISTICS s\n INNER JOIN INFORMATION_SCHEMA.TABLES t\n  ON s.TABLE_SCHEMA = t.TABLE_SCHEMA\n  AND s.TABLE_NAME = t.TABLE_NAME\n INNER JOIN (\n  SELECT\n     TABLE_SCHEMA\n   , TABLE_NAME\n   , INDEX_NAME\n   , MAX(SEQ_IN_INDEX) AS max_columns\n  FROM INFORMATION_SCHEMA.STATISTICS\n  WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')\n  AND INDEX_TYPE <> 'FULLTEXT'\n  GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME\n ) AS s2\n ON 
s.TABLE_SCHEMA = s2.TABLE_SCHEMA\n AND s.TABLE_NAME = s2.TABLE_NAME\n AND s.INDEX_NAME = s2.INDEX_NAME\nWHERE t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')\nAND t.TABLE_ROWS > 10\nAND s.CARDINALITY IS NOT NULL\nAND (s.CARDINALITY / IFNULL(t.TABLE_ROWS, 0.01)) < 8.00\nORDER BY sel\nLIMIT 10;\nENDSQL\n    my @idxinfo = select_array($selIdxReq);\n    infoprint \"Worst selectivity indexes:\";\n    foreach (@idxinfo) {\n        debugprint \"$_\";\n        my @info = split /\\s/;\n        infoprint \"Index: \" . $info[1] . \"\";\n\n        infoprint \" +-- COLUMN      : \" . $info[0] . \"\";\n        infoprint \" +-- NB SEQS     : \" . $info[2] . \" sequence(s)\";\n        infoprint \" +-- NB COLS     : \" . $info[3] . \" column(s)\";\n        infoprint \" +-- CARDINALITY : \" . $info[4] . \" distinct values\";\n        infoprint \" +-- NB ROWS     : \" . $info[5] . \" rows\";\n        infoprint \" +-- TYPE        : \" . $info[6];\n        infoprint \" +-- SELECTIVITY : \" . $info[7] . \"%\";\n\n        $result{'Indexes'}{ $info[1] }{'Column'}           = $info[0];\n        $result{'Indexes'}{ $info[1] }{'Sequence number'}  = $info[2];\n        $result{'Indexes'}{ $info[1] }{'Number of column'} = $info[3];\n        $result{'Indexes'}{ $info[1] }{'Cardinality'}      = $info[4];\n        $result{'Indexes'}{ $info[1] }{'Row number'}       = $info[5];\n        $result{'Indexes'}{ $info[1] }{'Index Type'}       = $info[6];\n        $result{'Indexes'}{ $info[1] }{'Selectivity'}      = $info[7];\n        if ( $info[7] < 25 ) {\n            badprint \"$info[1] has a low selectivity\";\n        }\n    }\n    infoprint \"Indexes per database:\";\n    foreach my $dbname ( select_user_dbs() ) {\n        infoprint \"Database: \" . $dbname . 
\"\";\n        $selIdxReq = <<\"ENDSQL\";\n        SELECT  concat(table_name, '.', index_name) AS idxname,\n                GROUP_CONCAT(column_name ORDER BY seq_in_index) AS cols,\n                SUM(CARDINALITY) as card,\n                INDEX_TYPE as type\n        FROM information_schema.statistics\n        WHERE INDEX_SCHEMA='$dbname'\n        AND index_name IS NOT NULL\n        GROUP BY table_name, idxname, type\nENDSQL\n        my $found = 0;\n        foreach my $idxinfo ( select_array($selIdxReq) ) {\n            my @info = split /\\s/, $idxinfo;\n            next if $info[0] eq 'NULL';\n            infoprint \" +-- INDEX      : \" . $info[0];\n            infoprint \" +-- COLUMNS    : \" . $info[1];\n            infoprint \" +-- CARDINALITY: \" . $info[2];\n            infoprint \" +-- TYPE        : \" . $info[4] if defined $info[4];\n            infoprint \" +-- COMMENT     : \" . $info[5] if defined $info[5];\n            $found++;\n        }\n        my $nbTables = select_one(\n\"SELECT count(*) from information_schema.TABLES WHERE TABLE_TYPE ='BASE TABLE' AND TABLE_SCHEMA='$dbname'\"\n        );\n        badprint \"No index found for $dbname database\"\n          if $found == 0 and $nbTables > 1;\n        push @generalrec, \"Add indexes on tables from $dbname database\"\n          if $found == 0 and $nbTables > 1;\n    }\n    return\n      unless ( defined( $myvar{'performance_schema'} )\n        and $myvar{'performance_schema'} eq 'ON' );\n\n    $selIdxReq = <<'ENDSQL';\nSELECT CONCAT(object_schema, '.', object_name) AS 'table', index_name\nFROM performance_schema.table_io_waits_summary_by_index_usage\nWHERE index_name IS NOT NULL\nAND count_star = 0\nAND index_name <> 'PRIMARY'\nAND object_schema NOT IN ('mysql', 'performance_schema', 'information_schema')\nORDER BY count_star, object_schema, object_name;\nENDSQL\n    @idxinfo = select_array($selIdxReq);\n    infoprint \"Unused indexes:\";\n    push( @generalrec, \"Remove unused indexes.\" ) if ( 
scalar(@idxinfo) > 0 );\n    foreach (@idxinfo) {\n        debugprint \"$_\";\n        my @info = split /\\s/;\n        badprint \"Index: $info[1] on $info[0] is not used.\";\n        push @{ $result{'Indexes'}{'Unused Indexes'} },\n          $info[0] . \".\" . $info[1];\n    }\n}\n\nsub mysql_views {\n    subheaderprint \"Views Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Views metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n}\n\nsub mysql_routines {\n    subheaderprint \"Routines Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Routines metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n}\n\nsub mysql_triggers {\n    subheaderprint \"Triggers Metrics\";\n    unless ( mysql_version_ge( 5, 5 ) ) {\n        infoprint\n\"Trigger metrics from information schema are missing in this version. Skipping...\";\n        return;\n    }\n}\n\n# Take the two recommendation arrays and display them at the end of the output\nsub make_recommendations {\n    $result{'Recommendations'} = \\@generalrec;\n    $result{'AdjustVariables'} = \\@adjvars;\n    subheaderprint \"Recommendations\";\n    if ( @generalrec > 0 ) {\n        prettyprint \"General recommendations:\";\n        foreach (@generalrec) { prettyprint \"    \" . $_ . \"\"; }\n    }\n    if ( @adjvars > 0 ) {\n        prettyprint \"Variables to adjust:\";\n        if ( $mycalc{'pct_max_physical_memory'} > 90 ) {\n            prettyprint\n              \"  *** MySQL's maximum memory usage is dangerously high ***\\n\"\n              . \"  *** Add RAM before increasing MySQL buffer variables ***\";\n        }\n        foreach (@adjvars) { prettyprint \"    \" . $_ . 
\"\"; }\n    }\n    if ( @generalrec == 0 && @adjvars == 0 ) {\n        prettyprint \"No additional performance recommendations are available.\";\n    }\n}\n\nsub close_outputfile {\n    close($fh) if defined($fh);\n}\n\nsub headerprint {\n    prettyprint \" >>  MySQLTuner $tunerversion\\n\"\n      . \"\\t * Jean-Marie Renouard <jmrenouard\\@gmail.com>\\n\"\n      . \"\\t * Major Hayden <major\\@mhtx.net>\\n\"\n      . \" >>  Bug reports, feature requests, and downloads at http://mysqltuner.pl/\\n\"\n      . \" >>  Run with '--help' for additional options and output filtering\";\n    debugprint( \"Debug: \" . $opt{debug} );\n    debugprint( \"Experimental: \" . $opt{experimental} );\n}\n\nsub string2file {\n    my $filename = shift;\n    my $content  = shift;\n    open my $fh, q(>), $filename\n      or die\n\"Unable to open $filename in write mode. Please check permissions for this file or directory\";\n    print $fh $content if defined($content);\n    close $fh;\n    debugprint $content;\n}\n\nsub file2array {\n    my $filename = shift;\n    debugprint \"* reading $filename\";\n    my $fh;\n    open( $fh, q(<), \"$filename\" )\n      or die \"Couldn't open $filename for reading: $!\\n\";\n    my @lines = <$fh>;\n    close($fh);\n    return @lines;\n}\n\nsub file2string {\n    return join( '', file2array(@_) );\n}\n\nmy $templateModel;\nif ( $opt{'template'} ne 0 ) {\n    $templateModel = file2string( $opt{'template'} );\n}\nelse {\n    # DEFAULT REPORT TEMPLATE\n    $templateModel = <<'END_TEMPLATE';\n<!DOCTYPE html>\n<html>\n<head>\n  <title>MySQLTuner Report</title>\n  <meta charset=\"UTF-8\">\n</head>\n<body>\n\n<h1>Result output</h1>\n<pre>\n{$data}\n</pre>\n\n</body>\n</html>\nEND_TEMPLATE\n}\n\nsub dump_result {\n\n    #debugprint Dumper( \\%result ) if ( $opt{'debug'} );\n    debugprint \"HTML REPORT: $opt{'reportfile'}\";\n\n    if ( $opt{'reportfile'} ne 0 ) {\n        eval { require Text::Template };\n        eval { require JSON };\n        if ($@) {\n   
         badprint \"Text::Template Module is needed.\";\n            die \"Text::Template Module is needed.\";\n        }\n\n        my $json      = JSON->new->allow_nonref;\n        my $json_text = $json->pretty->encode( \\%result );\n        my %vars      = (\n            'data'  => \\%result,\n            'debug' => $json_text,\n        );\n        my $template;\n        {\n            no warnings 'once';\n            $template = Text::Template->new(\n                TYPE       => 'STRING',\n                PREPEND    => q{;},\n                SOURCE     => $templateModel,\n                DELIMITERS => [ '[%', '%]' ]\n            ) or die \"Couldn't construct template: $Text::Template::ERROR\";\n        }\n\n        open my $fh, q(>), $opt{'reportfile'}\n          or die\n\"Unable to open $opt{'reportfile'} in write mode. please check permissions for this file or directory\";\n        $template->fill_in( HASH => \\%vars, OUTPUT => $fh );\n        close $fh;\n    }\n\n    if ( $opt{'json'} ne 0 ) {\n        eval { require JSON };\n        if ($@) {\n            print \"$bad JSON Module is needed.\\n\";\n            return 1;\n        }\n\n        my $json = JSON->new->allow_nonref;\n        print $json->utf8(1)->pretty( ( $opt{'prettyjson'} ? 1 : 0 ) )\n          ->encode( \\%result );\n\n        if ( $opt{'outputfile'} ne 0 ) {\n            unlink $opt{'outputfile'} if ( -e $opt{'outputfile'} );\n            open my $fh, q(>), $opt{'outputfile'}\n              or die\n\"Unable to open $opt{'outputfile'} in write mode. please check permissions for this file or directory\";\n            print $fh $json->utf8(1)->pretty( ( $opt{'prettyjson'} ? 
1 : 0 ) )\n              ->encode( \\%result );\n            close $fh;\n        }\n    }\n}\n\nsub which {\n    my $prog_name   = shift;\n    my $path_string = shift;\n    my @path_array  = split /:/, $ENV{'PATH'};\n\n    for my $path (@path_array) {\n        return \"$path/$prog_name\" if ( -x \"$path/$prog_name\" );\n    }\n\n    return 0;\n}\n\n# ---------------------------------------------------------------------------\n# BEGIN 'MAIN'\n# ---------------------------------------------------------------------------\nheaderprint;    # Header Print\n\nvalidate_tuner_version;    # Check latest version\nmysql_setup;               # Gotta login first\ndebugprint \"MySQL FINAL Client : $mysqlcmd $mysqllogin\";\ndebugprint \"MySQL Admin FINAL Client : $mysqladmincmd $mysqllogin\";\n\n#exit(0);\nos_setup;                  # Set up some OS variables\nget_all_vars;              # Toss variables/status into hashes\nget_tuning_info;           # Get information about the tuning connection\ncalculations;              # Calculate everything we need\ncheck_architecture;        # Suggest 64-bit upgrade\ncheck_storage_engines;     # Show enabled storage engines\nif ( $opt{'feature'} ne '' ) {\n    subheaderprint \"See FEATURES.md for more information\";\n    no strict 'refs';\n    for my $feature ( split /,/, $opt{'feature'} ) {\n        subheaderprint \"Running feature: $opt{'feature'}\";\n        $feature->();\n    }\n    make_recommendations;\n    exit(0);\n}\nvalidate_mysql_version;    # Check current MySQL version\n\nsystem_recommendations;    # Avoid too many services on the same host\nlog_file_recommendations;  # check log file content\n\ncheck_metadata_perf;      # Show parameter impacting performance during analysis\nmysql_databases;          # Show information about databases\nmysql_tables;             # Show information about table column\nmysql_table_structures;   # Show information about table structures\n\nmysql_indexes;            # Show information about 
indexes\nmysql_views;              # Show information about views\nmysql_triggers;           # Show information about triggers\nmysql_routines;           # Show information about routines\nsecurity_recommendations; # Display some security recommendations\ncve_recommendations;      # Display related CVE\n\nmysql_stats;              # Print the server stats\nmysql_pfs;                # Print Performance schema info\n\nmariadb_threadpool;       # Print MariaDB ThreadPool stats\nmysql_myisam;             # Print MyISAM stats\nmysql_innodb;             # Print InnoDB stats\nmariadb_aria;             # Print MariaDB Aria stats\nmariadb_tokudb;           # Print MariaDB Tokudb stats\nmariadb_xtradb;           # Print MariaDB XtraDB stats\n\n#mariadb_rockdb;           # Print MariaDB RockDB stats\n#mariadb_spider;           # Print MariaDB Spider stats\n#mariadb_connect;          # Print MariaDB Connect stats\nmariadb_galera;            # Print MariaDB Galera Cluster stats\nget_replication_status;    # Print replication info\nmake_recommendations;      # Make recommendations based on stats\ndump_result;               # Dump result if debug is on\nclose_outputfile;          # Close reportfile if needed\n\n# ---------------------------------------------------------------------------\n# END 'MAIN'\n# ---------------------------------------------------------------------------\n1;\n\n__END__\n\n=pod\n\n=encoding UTF-8\n\n=head1 NAME\n\n MySQLTuner 2.6.0 - MySQL High Performance Tuning Script\n\n=head1 IMPORTANT USAGE GUIDELINES\n\nTo run the script with the default options, run the script without arguments\nAllow MySQL server to run for at least 24-48 hours before trusting suggestions\nSome routines may require root level privileges (script will provide warnings)\nYou must provide the remote server's total memory when connecting to other servers\n\n=head1 CONNECTION AND AUTHENTICATION\n\n --host <hostname>           Connect to a remote host to perform tests (default: 
localhost)\n --socket <socket>           Use a different socket for a local connection\n --port <port>               Port to use for connection (default: 3306)\n --protocol tcp              Force TCP connection instead of socket\n --user <username>           Username to use for authentication\n --userenv <envvar>          Name of env variable which contains username to use for authentication\n --pass <password>           Password to use for authentication\n --passenv <envvar>          Name of env variable which contains password to use for authentication\n --ssl-ca <path>             Path to public key\n --mysqladmin <path>         Path to a custom mysqladmin executable\n --mysqlcmd <path>           Path to a custom mysql executable\n --defaults-file <path>      Path to a custom .my.cnf\n --defaults-extra-file <path>      Path to an extra custom config file\n --server-log <path>         Path to explicit log file (error_log)\n\n=head1 PERFORMANCE AND REPORTING OPTIONS\n\n --skipsize                  Don't enumerate tables and their types/sizes (default: on)\n                             (Recommended for servers with many tables)\n --json                      Print result as JSON string\n --prettyjson                Print result as JSON formatted string\n --skippassword              Don't perform checks on user passwords (default: off)\n --checkversion              Check for updates to MySQLTuner (default: don't check)\n --updateversion             Check for updates to MySQLTuner and update when newer version is available (default: don't check)\n --forcemem <size>           Amount of RAM installed in megabytes\n --forceswap <size>          Amount of swap memory configured in megabytes\n --passwordfile <path>       Path to a password file list (one password by line)\n --cvefile <path>            CVE File for vulnerability checks\n --outputfile <path>         Path to a output txt file\n --reportfile <path>         Path to a report txt file\n --template   <path>         
Path to a template file\n --dumpdir <path>            Path to a directory where to dump information files\n --feature <feature>         Run a specific feature (see FEATURES section)\n --dumpdir <path>            information_schema tables and sys views are dumped in CSV in this path\n\n=head1 OUTPUT OPTIONS\n\n --silent                    Don't output anything on screen\n --verbose                   Print out all options (default: no verbose, dbstat, idxstat, sysstat, tbstat, pfstat)\n --color                     Print output in color\n --nocolor                   Don't print output in color\n --nogood                    Remove OK responses\n --nobad                     Remove negative/suggestion responses\n --noinfo                    Remove informational responses\n --debug                     Print debug information\n --experimental              Print experimental analysis (may fail)\n --nondedicated              Consider server is not dedicated to Db server usage only\n --noprocess                 Consider no other process is running\n --dbstat                    Print database information\n --nodbstat                  Don't print database information\n --tbstat                    Print table information\n --notbstat                  Don't print table information\n --colstat                   Print column information\n --nocolstat                 Don't print column information\n --idxstat                   Print index information\n --noidxstat                 Don't print index information\n --nomyisamstat              Don't print MyIsam information\n --sysstat                   Print system information\n --nosysstat                 Don't print system information\n --nostructstat              Don't print table structures information\n --pfstat                    Print Performance schema\n --nopfstat                  Don't print Performance schema\n --bannedports               Ports banned separated by comma (,)\n --server-log                Define specific 
error_log to analyze\n --maxportallowed            Number of open ports allowable on this host\n --buffers                   Print global and per-thread buffer values\n\n=head1 PERLDOC\n\nYou can find documentation for this module with the perldoc command.\n\n  perldoc mysqltuner\n\n=head2 INTERNALS\n\nL<https://github.com/major/MySQLTuner-perl/blob/master/INTERNALS.md>\n\n Internal documentation\n\n=head1 AUTHORS\n\nMajor Hayden - major@mhtx.net\nJean-Marie Renouard - jmrenouard@gmail.com\n\n=head1 CONTRIBUTORS\n\n=over 4\n\n=item *\n\nMatthew Montgomery\n\n=item *\n\nPaul Kehrer\n\n=item *\n\nDave Burgess\n\n=item *\n\nJonathan Hinds\n\n=item *\n\nMike Jackson\n\n=item *\n\nNils Breunese\n\n=item *\n\nShawn Ashlee\n\n=item *\n\nLuuk Vosslamber\n\n=item *\n\nVille Skytta\n\n=item *\n\nTrent Hornibrook\n\n=item *\n\nJason Gill\n\n=item *\n\nMark Imbriaco\n\n=item *\n\nGreg Eden\n\n=item *\n\nAubin Galinotti\n\n=item *\n\nGiovanni Bechis\n\n=item *\n\nBill Bradford\n\n=item *\n\nRyan Novosielski\n\n=item *\n\nMichael Scheidell\n\n=item *\n\nBlair Christensen\n\n=item *\n\nHans du Plooy\n\n=item *\n\nVictor Trac\n\n=item *\n\nEverett Barnes\n\n=item *\n\nTom Krouper\n\n=item *\n\nGary Barrueto\n\n=item *\n\nSimon Greenaway\n\n=item *\n\nAdam Stein\n\n=item *\n\nIsart Montane\n\n=item *\n\nBaptiste M.\n\n=item *\n\nCole Turner\n\n=item *\n\nMajor Hayden\n\n=item *\n\nJoe Ashcraft\n\n=item *\n\nJean-Marie Renouard\n\n=item *\n\nStephan GroBberndt\n\n=item *\n\nChristian Loos\n\n=item *\n\nLong Radix\n\n=back\n\n=head1 SUPPORT\n\n\nBug reports, feature requests, and downloads at http://mysqltuner.pl/\n\nBug tracker can be found at https://github.com/major/MySQLTuner-perl/issues\n\nMaintained by Jean-Marie Renouard (jmrenouard\\@gmail.com) - Licensed under GPL\n\n=head1 SOURCE CODE\n\nL<https://github.com/major/MySQLTuner-perl>\n\n git clone https://github.com/major/MySQLTuner-perl.git\n\n=head1 COPYRIGHT AND LICENSE\n\nCopyright (C) 2006-2023 Major Hayden - 
major@mhtx.net\nCopyright (C) 2015-2023 Jean-Marie Renouard - jmrenouard@gmail.com\n\nFor the latest updates, please visit http://mysqltuner.pl/\n\nGit repository available at https://github.com/major/MySQLTuner-perl\n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\nSee the GNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program.  If not, see <https://www.gnu.org/licenses/>.\n\n=cut\n\n# Local variables:\n# indent-tabs-mode: t\n# cperl-indent-level: 8\n# perl-indent-level: 8\n# End:\n"
  },
  {
    "path": "aegir/helpers/rtoc.php.txt",
    "content": "<?php\n/*\nOCP - Opcache Control Panel   (aka Zend Optimizer+ Control Panel for PHP)\nAuthor: _ck_   (with contributions by GK, stasilok, n1xim)\nVersion: 0.1.8\nFree for any kind of use or modification, I am not responsible for anything, please share your improvements\n\n* revision history\n0.1.8  2015-09-01  regex fix for PHP7 phpinfo\n0.1.7  2013-08-29  changed scaling of \"hits\" to base 10 (n1xim)\n0.1.6  2013-04-12  moved meta to footer so graphs can be higher and reduce clutter\n0.1.5  2013-04-12  added graphs to visualize cache state, please report any browser/style bugs\n0.1.4  2013-04-09  added \"recheck\" to update files when using large revalidate_freq (or validate_timestamps=Off)\n0.1.3  2013-03-30  show host and php version, can bookmark with hashtag i.e. #statistics - needs new layout asap\n0.1.2  2013-03-25  show optimization levels, number formatting, support for start_time in 7.0.2\n0.1.1  2013-03-18  today Zend completely renamed Optimizer+ to OPcache, adjusted OCP to keep working\n0.0.6  2013-03-16  transition support as Zend renames product and functions for PHP 5.5 (stasilok)\n0.0.5  2013-03-10  added refresh button (GK)\n0.0.4  2013-02-18  added file grouping and sorting (click on headers) - code needs cleanup but gets the job done\n0.0.2  2013-02-14  first public release\n\n* known problems/limitations:\nUnlike APC, the Zend OPcache API\n - cannot determine when a file was put into the cache\n - cannot change settings on the fly\n - cannot protect opcache functions by restricting execution to only specific scripts/paths\n\n* todo:\nExtract variables for preferred ordering and better layout instead of just dumping into tables\nFile list filter\n\n*/\n\n// ini_set('display_errors',1); error_reporting(-1);\nif ( count(get_included_files())>1 || php_sapi_name()=='cli' || empty($_SERVER['REMOTE_ADDR']) ) { die; }  // weak block 
against indirect access\n\n$time=time();\ndefine('CACHEPREFIX',function_exists('opcache_reset')?'opcache_':(function_exists('accelerator_reset')?'accelerator_':''));\n\nif ( !empty($_GET['RESET']) ) {\n\tif ( function_exists(CACHEPREFIX.'reset') ) { call_user_func(CACHEPREFIX.'reset'); }\n\theader( 'Location: '.str_replace('?'.$_SERVER['QUERY_STRING'],'',$_SERVER['REQUEST_URI']) );\n\texit;\n}\n\nif ( !empty($_GET['RECHECK']) ) {\n\tif ( function_exists(CACHEPREFIX.'invalidate') ) {\n\t\t$recheck=trim($_GET['RECHECK']); $files=call_user_func(CACHEPREFIX.'get_status');\n\t\tif (!empty($files['scripts'])) {\n\t\t\tforeach ($files['scripts'] as $file=>$value) {\n\t\t\t\tif ( $recheck==='1' || strpos($file,$recheck)===0 )  call_user_func(CACHEPREFIX.'invalidate',$file);\n\t\t\t}\n\t\t}\n\t\theader( 'Location: '.str_replace('?'.$_SERVER['QUERY_STRING'],'',$_SERVER['REQUEST_URI']) );\n\t} else { echo 'Sorry, this feature requires Zend Opcache newer than April 8th 2013'; }\n\texit;\n}\n\n?><!DOCTYPE html>\n<html>\n<head>\n\t<title>OCP - Opcache Control Panel</title>\n\t<meta name=\"ROBOTS\" content=\"NOINDEX,NOFOLLOW,NOARCHIVE\" />\n\n<style type=\"text/css\">\n\tbody {background-color: #fff; color: #000;}\n\tbody, td, th, h1, h2 {font-family: sans-serif;}\n\tpre {margin: 0px; font-family: monospace;}\n\ta:link,a:visited {color: #000099; text-decoration: none;}\n\ta:hover {text-decoration: underline;}\n\ttable {border-collapse: collapse; width: 600px; }\n\t.center {text-align: center;}\n\t.center table { margin-left: auto; margin-right: auto; text-align: left;}\n\t.center th { text-align: center !important; }\n\t.middle {vertical-align:middle;}\n\ttd, th { border: 1px solid #000; font-size: 75%; vertical-align: baseline; padding: 3px; }\n\th1 {font-size: 150%;}\n\th2 {font-size: 125%;}\n\t.p {text-align: left;}\n\t.e {background-color: #ccccff; font-weight: bold; color: #000; width:50%; white-space:nowrap;}\n\t.h {background-color: #9999cc; font-weight: bold; color: 
#000;}\n\t.v {background-color: #cccccc; color: #000;}\n\t.vr {background-color: #cccccc; text-align: right; color: #000; white-space: nowrap;}\n\t.b {font-weight:bold;}\n\t.white, .white a {color:#fff;}\n\timg {float: right; border: 0px;}\n\thr {width: 600px; background-color: #cccccc; border: 0px; height: 1px; color: #000;}\n\t.meta, .small {font-size: 75%; }\n\t.meta {margin: 2em 0;}\n\t.meta a, th a {padding: 10px; white-space:nowrap; }\n\t.buttons {margin:0 0 1em;}\n\t.buttons a {margin:0 15px; background-color: #9999cc; color:#fff; text-decoration:none; padding:1px; border:1px solid #000; display:inline-block; width:5em; text-align:center;}\n\t#files td.v a {font-weight:bold; color:#9999cc; margin:0 10px 0 5px; text-decoration:none; font-size:120%;}\n\t#files td.v a:hover {font-weight:bold; color:#ee0000;}\n\t.graph {display:inline-block; width:145px; margin:1em 0 1em 1px; border:0; vertical-align:top;}\n\t.graph table {width:100%; height:150px; border:0; padding:0; margin:5px 0 0 0; position:relative;}\n\t.graph td {vertical-align:middle; border:0; padding:0 0 0 5px;}\n\t.graph .bar {width:25px; text-align:right; padding:0 2px; color:#fff;}\n\t.graph .total {width:34px; text-align:center; padding:0 5px 0 0;}\n\t.graph .total div {border:1px dashed #888; border-right:0; height:99%; width:12px; position:absolute; bottom:0; left:17px; z-index:-1;}\n\t.graph .total span {background:#fff; font-weight:bold;}\n\t.graph .actual {text-align:right; font-weight:bold; padding:0 5px 0 0;}\n\t.graph .red {background:#ee0000;}\n\t.graph .green {background:#00cc00;}\n\t.graph .brown {background:#8B4513;}\n</style>\n<!--[if lt IE 9]><script type=\"text/javascript\" defer=\"defer\">\nwindow.onload=function(){var i,t=document.getElementsByTagName('table');for(i=0;i<t.length;i++){if(t[i].parentNode.className=='graph')t[i].style.height=150-(t[i].clientHeight-150)+'px';}}\n</script><![endif]-->\n</head>\n\n<body>\n<div class=\"center\">\n\n<h1><a href=\"?\">Opcache Control 
Panel</a></h1>\n\n<div class=\"buttons\">\n\t<a href=\"?ALL=1\">Details</a>\n\t<a href=\"?FILES=1&GROUP=2&SORT=3\">Files</a>\n\t<a href=\"?RESET=1\" onclick=\"return confirm('RESET cache ?')\">Reset</a>\n\t<?php if ( function_exists(CACHEPREFIX.'invalidate') ) { ?>\n\t<a href=\"?RECHECK=1\" onclick=\"return confirm('Recheck all files in the cache ?')\">Recheck</a>\n\t<?php } ?>\n\t<a href=\"?\" onclick=\"window.location.reload(true); return false\">Refresh</a>\n</div>\n\n<?php\n\nif ( !function_exists(CACHEPREFIX.'get_status') ) { echo '<h2>Opcache not detected?</h2>'; die; }\n\nif ( !empty($_GET['FILES']) ) { echo '<h2>files cached</h2>'; files_display(); echo '</div></body></html>'; exit; }\n\nif ( !(isset($_REQUEST['GRAPHS']) && !$_REQUEST['GRAPHS']) && CACHEPREFIX=='opcache_') { graphs_display(); if ( !empty($_REQUEST['GRAPHS']) ) { exit; } }\n\nob_start(); phpinfo(8); $phpinfo = ob_get_contents(); ob_end_clean(); \t\t // some info is only available via phpinfo? sadly buffering capture has to be used\nif ( !preg_match( '/module\\_Zend.(Optimizer\\+|OPcache).+?(\\<table[^>]*\\>.+?\\<\\/table\\>).+?(\\<table[^>]*\\>.+?\\<\\/table\\>)/is', $phpinfo, $opcache) ) { }  // todo\n\nif ( function_exists(CACHEPREFIX.'get_configuration') ) { echo '<h2>general</h2>'; $configuration=call_user_func(CACHEPREFIX.'get_configuration'); }\n\n$host=function_exists('gethostname')?@gethostname():@php_uname('n'); if (empty($host)) { $host=empty($_SERVER['SERVER_NAME'])?$_SERVER['HOST_NAME']:$_SERVER['SERVER_NAME']; }\n$version=array('Host'=>$host);\n$version['PHP Version']='PHP '.(defined('PHP_VERSION')?PHP_VERSION:'???').' '.(defined('PHP_SAPI')?PHP_SAPI:'').' '.(defined('PHP_OS')?' '.PHP_OS:'');\n$version['Opcache Version']=empty($configuration['version']['version'])?'???':$configuration['version'][CACHEPREFIX.'product_name'].' 
'.$configuration['version']['version'];\nprint_table($version);\n\nif ( !empty($opcache[2]) ) { echo preg_replace('/\\<tr\\>\\<td class\\=\"e\"\\>[^>]+\\<\\/td\\>\\<td class\\=\"v\"\\>[0-9\\,\\. ]+\\<\\/td\\>\\<\\/tr\\>/','',$opcache[2]); }\n\nif ( function_exists(CACHEPREFIX.'get_status') && $status=call_user_func(CACHEPREFIX.'get_status') ) {\n\t$uptime=array();\n\tif ( !empty($status[CACHEPREFIX.'statistics']['start_time']) ) {\n\t\t$uptime['uptime']=time_since($time,$status[CACHEPREFIX.'statistics']['start_time'],1,'');\n\t}\n\tif ( !empty($status[CACHEPREFIX.'statistics']['last_restart_time']) ) {\n\t\t$uptime['last_restart']=time_since($time,$status[CACHEPREFIX.'statistics']['last_restart_time']);\n\t}\n\tif (!empty($uptime)) {print_table($uptime);}\n\n\tif ( !empty($status['cache_full']) ) { $status['memory_usage']['cache_full']=$status['cache_full']; }\n\n\techo '<h2 id=\"memory\">memory</h2>';\n\tprint_table($status['memory_usage']);\n\tunset($status[CACHEPREFIX.'statistics']['start_time'],$status[CACHEPREFIX.'statistics']['last_restart_time']);\n\techo '<h2 id=\"statistics\">statistics</h2>';\n\tprint_table($status[CACHEPREFIX.'statistics']);\n}\n\nif ( empty($_GET['ALL']) ) { meta_display(); exit; }\n\nif ( !empty($configuration['blacklist']) ) { echo '<h2 id=\"blacklist\">blacklist</h2>'; print_table($configuration['blacklist']); }\n\nif ( !empty($opcache[3]) ) { echo '<h2 id=\"runtime\">runtime</h2>'; echo $opcache[3]; }\n\n$name='zend opcache'; $functions=get_extension_funcs($name);\nif (!$functions) { $name='zend optimizer+'; $functions=get_extension_funcs($name); }\nif ($functions) { echo '<h2 id=\"functions\">functions</h2>'; print_table($functions);  } else { $name=''; }\n\n$level=trim(CACHEPREFIX,'_').'.optimization_level';\nif (isset($configuration['directives'][$level])) {\n\techo '<h2 id=\"optimization\">optimization levels</h2>';\n\t$levelset=strrev(base_convert($configuration['directives'][$level], 10, 2));\n\t$levels=array(\n    \t1=>'<a 
href=\"http://wikipedia.org/wiki/Common_subexpression_elimination\">Constants subexpressions elimination</a> (CSE) true, false, null, etc.<br />Optimize series of ADD_STRING / ADD_CHAR<br />Convert CAST(IS_BOOL,x) into BOOL(x)<br />Convert <a href=\"http://www.php.net/manual/internals2.opcodes.init-fcall-by-name.php\">INIT_FCALL_BY_NAME</a> + <a href=\"http://www.php.net/manual/internals2.opcodes.do-fcall-by-name.php\">DO_FCALL_BY_NAME</a> into <a href=\"http://www.php.net/manual/internals2.opcodes.do-fcall.php\">DO_FCALL</a>',\n    \t2=>'Convert constant operands to expected types<br />Convert conditional <a href=\"http://php.net/manual/internals2.opcodes.jmp.php\">JMP</a> with constant operands<br />Optimize static <a href=\"http://php.net/manual/internals2.opcodes.brk.php\">BRK</a> and <a href=\"http://php.net/manual/internals2.opcodes.cont.php\">CONT</a>',\n    \t3=>'Convert $a = $a + expr into $a += expr<br />Convert $a++ into ++$a<br />Optimize series of <a href=\"http://php.net/manual/internals2.opcodes.jmp.php\">JMP</a>',\n    \t4=>'PRINT and ECHO optimization (<a href=\"https://github.com/zend-dev/ZendOptimizerPlus/issues/73\">defunct</a>)',\n\t5=>'Block Optimization - most expensive pass<br />Performs many different optimization patterns based on <a href=\"http://wikipedia.org/wiki/Control_flow_graph\">control flow graph</a> (CFG)',\n\t9=>'Optimize <a href=\"http://wikipedia.org/wiki/Register_allocation\">register allocation</a> (allows re-usage of temporary variables)',\n\t10=>'Remove NOPs'\n\t);\n\techo '<table width=\"600\" border=\"0\" cellpadding=\"3\"><tbody><tr class=\"h\"><th>Pass</th><th>Description</th></tr>';\n\tforeach ($levels as $pass=>$description) {\n\t\t$disabled=substr($levelset,$pass-1,1)!=='1' || $pass==4 ? 
' white':'';\n\t\techo '<tr><td class=\"v center middle'.$disabled.'\">'.$pass.'</td><td class=\"v'.$disabled.'\">'.$description.'</td></tr>';\n\t}\n\techo '</table>';\n}\n\nif ( isset($_GET['DUMP']) ) {\n\tif ($name) { echo '<h2 id=\"ini\">ini</h2>'; print_table(ini_get_all($name,true)); }\n\tforeach ($configuration as $key=>$value) { echo '<h2>',$key,'</h2>'; print_table($configuration[$key]); }\n\texit;\n}\n\nmeta_display();\n\necho '</div></body></html>';\n\nexit;\n\nfunction time_since($time,$original,$extended=0,$text='ago') {\n\t$time =  $time - $original;\n\t$day = $extended? floor($time/86400) : round($time/86400,0);\n\t$amount=0; $unit='';\n\tif ( $time < 86400) {\n\t\tif ( $time < 60)\t\t{ $amount=$time; $unit='second'; }\n\t\telseif ( $time < 3600) { $amount=floor($time/60); $unit='minute'; }\n\t\telse\t\t\t\t{ $amount=floor($time/3600); $unit='hour'; }\n\t}\n\telseif ( $day < 14) \t{ $amount=$day; $unit='day'; }\n\telseif ( $day < 56) \t{ $amount=floor($day/7); $unit='week'; }\n\telseif ( $day < 672) { $amount=floor($day/30); $unit='month'; }\n\telse {\t\t\t  $amount=intval(2*($day/365))/2; $unit='year'; }\n\n\tif ( $amount!=1) {$unit.='s';}\n\tif ($extended && $time>60) { $text=' and '.time_since($time,$time<86400?($time<3600?$amount*60:$amount*3600):$day*86400,0,'').$text; }\n\n\treturn $amount.' '.$unit.' 
'.$text;\n}\n\nfunction print_table($array,$headers=false) {\n\tif ( empty($array) || !is_array($array) ) {return;}\n  \techo '<table border=\"0\" cellpadding=\"3\" width=\"600\">';\n  \tif (!empty($headers)) {\n  \t\tif (!is_array($headers)) {$headers=array_keys(reset($array));}\n  \t\techo '<tr class=\"h\">';\n  \t\tforeach ($headers as $value) { echo '<th>',$value,'</th>'; }\n  \t\techo '</tr>';\n  \t}\n  \tforeach ($array as $key=>$value) {\n    \t\techo '<tr>';\n    \t\tif ( !is_numeric($key) ) {\n      \t\t\t$key=ucwords(str_replace('_',' ',$key));\n      \t\t\techo '<td class=\"e\">',$key,'</td>';\n      \t\t\tif ( is_numeric($value) ) {\n        \t\t\t\tif ( $value>1048576) { $value=round($value/1048576,1).'M'; }\n        \t\t\t\telseif ( is_float($value) ) { $value=round($value,1); }\n      \t\t\t}\n    \t\t}\n    \t\tif ( is_array($value) ) {\n      \t\t\tforeach ($value as $column) {\n         \t\t\techo '<td class=\"v\">',$column,'</td>';\n      \t\t\t}\n      \t\t\techo '</tr>';\n    \t\t}\n    \t\telse { echo '<td class=\"v\">',$value,'</td></tr>'; }\n\t}\n \techo '</table>';\n}\n\nfunction files_display() {\n\t$status=call_user_func(CACHEPREFIX.'get_status');\n\tif ( empty($status['scripts']) ) {return;}\n\tif ( isset($_GET['DUMP']) ) { print_table($status['scripts']); exit;}\n    \t$time=time(); $sort=0;\n\t$nogroup=preg_replace('/\\&?GROUP\\=[\\-0-9]+/','',$_SERVER['REQUEST_URI']);\n\t$nosort=preg_replace('/\\&?SORT\\=[\\-0-9]+/','',$_SERVER['REQUEST_URI']);\n\t$group=empty($_GET['GROUP'])?0:intval($_GET['GROUP']); if ( $group<0 || $group>9) { $group=1;}\n\t$groupset=array_fill(0,9,''); $groupset[$group]=' class=\"b\" ';\n\n\techo '<div class=\"meta\">\n\t\t<a ',$groupset[0],'href=\"',$nogroup,'\">ungroup</a> |\n\t\t<a ',$groupset[1],'href=\"',$nogroup,'&GROUP=1\">1</a> |\n\t\t<a ',$groupset[2],'href=\"',$nogroup,'&GROUP=2\">2</a> |\n\t\t<a ',$groupset[3],'href=\"',$nogroup,'&GROUP=3\">3</a> |\n\t\t<a 
',$groupset[4],'href=\"',$nogroup,'&GROUP=4\">4</a> |\n\t\t<a ',$groupset[5],'href=\"',$nogroup,'&GROUP=5\">5</a>\n\t</div>';\n\n\tif ( !$group ) { $files =& $status['scripts']; }\n\telse {\n\t\t$files=array();\n\t\tforeach ($status['scripts'] as $data) {\n\t\t\tif ( preg_match('@^[/]([^/]+[/]){'.$group.'}@',$data['full_path'],$path) ) {\n\t\t\t\tif ( empty($files[$path[0]])) { $files[$path[0]]=array('full_path'=>'','files'=>0,'hits'=>0,'memory_consumption'=>0,'last_used_timestamp'=>'','timestamp'=>''); }\n\t\t\t\t$files[$path[0]]['full_path']=$path[0];\n\t\t\t\t$files[$path[0]]['files']++;\n\t\t\t\t$files[$path[0]]['memory_consumption']+=$data['memory_consumption'];\n\t\t\t\t$files[$path[0]]['hits']+=$data['hits'];\n\t\t\t\tif ( $data['last_used_timestamp']>$files[$path[0]]['last_used_timestamp']) {$files[$path[0]]['last_used_timestamp']=$data['last_used_timestamp'];}\n\t\t\t\tif ( $data['timestamp']>$files[$path[0]]['timestamp']) {$files[$path[0]]['timestamp']=$data['timestamp'];}\n\t\t\t}\n\t\t}\n\t}\n\n\tif ( !empty($_GET['SORT']) ) {\n\t\t$keys=array(\n\t\t\t'full_path'=>SORT_STRING,\n\t\t\t'files'=>SORT_NUMERIC,\n\t\t\t'memory_consumption'=>SORT_NUMERIC,\n\t\t\t'hits'=>SORT_NUMERIC,\n\t\t\t'last_used_timestamp'=>SORT_NUMERIC,\n\t\t\t'timestamp'=>SORT_NUMERIC\n\t\t);\n\t\t$titles=array('','path',$group?'files':'','size','hits','last used','created');\n\t\t$offsets=array_keys($keys);\n\t\t$key=intval($_GET['SORT']);\n\t\t$direction=$key>0?1:-1;\n\t\t$key=abs($key)-1;\n\t\t$key=isset($offsets[$key])&&!($key==1&&empty($group))?$offsets[$key]:reset($offsets);\n\t\t$sort=array_search($key,$offsets)+1;\n\t\t$sortflip=range(0,7); $sortflip[$sort]=-$direction*$sort;\n\t\tif ( $keys[$key]==SORT_STRING) {$direction=-$direction; }\n\t\t$arrow=array_fill(0,7,''); $arrow[$sort]=$direction>0?' 
&#x25BC;':' &#x25B2;';\n\t\t$direction=$direction>0?SORT_DESC:SORT_ASC;\n\t\t$column=array(); foreach ($files as $data) { $column[]=$data[$key]; }\n\t\tarray_multisort($column, $keys[$key], $direction, $files);\n\t}\n\n\techo '<table border=\"0\" cellpadding=\"3\" width=\"960\" id=\"files\">\n         \t\t<tr class=\"h\">';\n         foreach ($titles as $column=>$title) {\n         \tif ($title) echo '<th><a href=\"',$nosort,'&SORT=',$sortflip[$column],'\">',$title,$arrow[$column],'</a></th>';\n         }\n         echo '\t</tr>';\n    \tforeach ($files as $data) {\n    \t\techo '<tr>\n    \t\t\t\t<td class=\"v\" nowrap><a title=\"recheck\" href=\"?RECHECK=',rawurlencode($data['full_path']),'\">x</a>',$data['full_path'],'</td>',\n      \t\t\t\t($group?'<td class=\"vr\">'.number_format($data['files']).'</td>':''),\n         \t\t\t'<td class=\"vr\">',number_format(round($data['memory_consumption']/1024)),'K</td>',\n         \t\t\t'<td class=\"vr\">',number_format($data['hits']),'</td>',\n         \t\t\t'<td class=\"vr\">',time_since($time,$data['last_used_timestamp']),'</td>',\n         \t\t\t'<td class=\"vr\">',empty($data['timestamp'])?'':time_since($time,$data['timestamp']),'</td>\n         \t\t</tr>';\n\t}\n\techo '</table>';\n}\n\nfunction graphs_display() {\n\t$graphs=array();\n\t$colors=array('green','brown','red');\n\t$primes=array(223, 463, 983, 1979, 3907, 7963, 16229, 32531, 65407, 130987);\n\t$configuration=call_user_func(CACHEPREFIX.'get_configuration');\n\t$status=call_user_func(CACHEPREFIX.'get_status');\n\n\t$graphs['memory']['total']=$configuration['directives']['opcache.memory_consumption'];\n\t$graphs['memory']['free']=$status['memory_usage']['free_memory'];\n\t$graphs['memory']['used']=$status['memory_usage']['used_memory'];\n\t$graphs['memory']['wasted']=$status['memory_usage']['wasted_memory'];\n\n\t$graphs['keys']['total']=$status[CACHEPREFIX.'statistics']['max_cached_keys'];\n\tforeach ($primes as $prime) { if 
($prime>=$graphs['keys']['total']) { $graphs['keys']['total']=$prime; break;} }\n\t$graphs['keys']['free']=$graphs['keys']['total']-$status[CACHEPREFIX.'statistics']['num_cached_keys'];\n\t$graphs['keys']['scripts']=$status[CACHEPREFIX.'statistics']['num_cached_scripts'];\n\t$graphs['keys']['wasted']=$status[CACHEPREFIX.'statistics']['num_cached_keys']-$status[CACHEPREFIX.'statistics']['num_cached_scripts'];\n\n\t$graphs['hits']['total']=0;\n\t$graphs['hits']['hits']=$status[CACHEPREFIX.'statistics']['hits'];\n\t$graphs['hits']['misses']=$status[CACHEPREFIX.'statistics']['misses'];\n\t$graphs['hits']['blacklist']=$status[CACHEPREFIX.'statistics']['blacklist_misses'];\n\t$graphs['hits']['total']=array_sum($graphs['hits']);\n\n\t$graphs['restarts']['total']=0;\n\t$graphs['restarts']['manual']=$status[CACHEPREFIX.'statistics']['manual_restarts'];\n\t$graphs['restarts']['keys']=$status[CACHEPREFIX.'statistics']['hash_restarts'];\n\t$graphs['restarts']['memory']=$status[CACHEPREFIX.'statistics']['oom_restarts'];\n\t$graphs['restarts']['total']=array_sum($graphs['restarts']);\n\n\tforeach ( $graphs as $caption=>$graph) {\n\t\techo '<div class=\"graph\"><div class=\"h\">',$caption,'</div><table border=\"0\" cellpadding=\"0\" cellspacing=\"0\">';\n\t\tforeach ($graph as $label=>$value) {\n\t\t\tif ($caption!='hits'){\n\t\t\t\tif ($label=='total') { $key=0; $total=$value; $totaldisplay='<td rowspan=\"3\" class=\"total\"><span>'.($total>999999?round($total/1024/1024).'M':($total>9999?round($total/1024).'K':$total)).'</span><div></div></td>'; continue;}\n\t\t\t\t\t$percent=$total?floor($value*100/$total):''; $percent=!$percent||$percent>99?'':$percent.'%';\n\t\t\t\t\techo '<tr>',$totaldisplay,'<td class=\"actual\">', ($value>999999?round($value/1024/1024).'M':($value>9999?round($value/1024).'K':$value)),'</td><td class=\"bar ',$colors[$key],'\" height=\"',$percent,'\">',$percent,'</td><td>',$label,'</td></tr>';\n\t\t\t\t\t$key++; $totaldisplay='';\n\t\t\t}else{\n\t\t\t\tif 
($label=='total') { $key=0; $total=$value; $totaldisplay='<td rowspan=\"3\" class=\"total\"><span>'.($total>999999?round($total/1000/1000).'M':($total>9999?round($total/1000).'K':$total)).'</span><div></div></td>'; continue;}\n\t\t\t\t$percent=$total?floor($value*100/$total):''; $percent=!$percent||$percent>99?'':$percent.'%';\n\t\t\t\techo '<tr>',$totaldisplay,'<td class=\"actual\">', ($value>999999?round($value/1000/1000).'M':($value>9999?round($value/1000).'K':$value)),'</td><td class=\"bar ',$colors[$key],'\" height=\"',$percent,'\">',$percent,'</td><td>',$label,'</td></tr>';\n\t\t\t\t$key++; $totaldisplay='';\n\t\t\t}\n\t\t}\n\techo '</table></div>',\"\\n\";\n\t}\n}\n\nfunction meta_display() {\n?>\n<div class=\"meta\">\n\t<a href=\"http://files.zend.com/help/Zend-Server-6/content/zendoptimizerplus.html\">directives guide</a> |\n\t<a href=\"http://files.zend.com/help/Zend-Server-6/content/zend_optimizer+_-_php_api.htm\">functions guide</a> |\n\t<a href=\"https://wiki.php.net/rfc/optimizerplus\">wiki.php.net</a> |\n\t<a href=\"http://pecl.php.net/package/ZendOpcache\">pecl</a> |\n\t<a href=\"https://github.com/zend-dev/ZendOptimizerPlus/\">Zend source</a> |\n\t<a href=\"https://gist.github.com/ck-on/4959032/?ocp.php\">OCP latest</a>\n</div>\n<?php\n}\n"
  },
  {
    "path": "aegir/helpers/spinner",
    "content": "#!/bin/bash\n\ni=1\nsp=\"/-\\|\"\nesc=\"\u001b\"\nreset=\"${esc}[0m\"\ntestecho=/opt/tmp/testecho-$$\necho -ne > $testecho\nif grep -q \"\\-ne\" $testecho; then\n  while [ -e $1 ]\n  do\n    echo -n \".\"\n    sleep 1\n  done\n  echo\nelse\n  while [ -e $1 ]\n  do\n    echo -en \"\\b${sp:i++%${#sp}:1}\"\n    sleep .2\n  done\n  echo -en \"${reset}\\b\"\nfi\n\n"
  },
  {
    "path": "aegir/helpers/systemtime",
    "content": "#!/bin/bash\n#\n# system time hourly\n\nntpdate pool.ntp.org\n\nexit 0\n"
  },
  {
    "path": "aegir/helpers/websh.sh.txt",
    "content": "#!/bin/bash\n\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec:/usr/local/ssl3\n\n_DEST_DRUSH=\"/opt/tools/drush/8/drush/drush.php\"\n\n_forward_to_dash() {\n  if [[ \"${_ARGS}\" =~ \"sudo_noexec.so\" ]] && [[ \"${_LTD_GID}\" =~ \"ltd-shell-more\"($) ]]; then\n    ### echo FWD 0 DIRECT\n    _R_M=`echo -n ${_ARGS} | sed 's|LD_PRELOAD.*.so||'`\n    ### echo _R_M is ${_R_M}\n    exec /bin/dash -c \"${_R_M}\"\n    exit 0\n  fi\n  # Path to the underlying shell (dash in this case)\n  _shell=\"/bin/dash\"\n  # Name under which the shell should think it was invoked\n  _shell_name=\"sh\"\n  # Arrays to hold options and positional parameters\n  _options=()\n  _positional=()\n  # Variables to hold command strings or script files\n  _command=\"\"\n  _script=\"\"\n\n  # Parse the options and arguments\n  while [[ $# -gt 0 ]]; do\n    case \"$1\" in\n      --)\n        # End of options\n        shift\n        _positional+=(\"$@\")\n        break\n        ;;\n      -c)\n        # Command option\n        if [[ -n \"$2\" ]]; then\n          _options+=(\"-c\" \"$2\")\n          shift 2\n          _positional+=(\"$@\")\n          break\n        else\n          echo \"sh: option requires an argument -- 'c'\" >&2\n          exit 1\n        fi\n        ;;\n      -i|-l|-s)\n        # Other common options\n        _options+=(\"$1\")\n        shift\n        ;;\n      -*)\n        # Unrecognized options\n        _options+=(\"$1\")\n        shift\n        ;;\n      *)\n        # First non-option argument is the script file\n        _script=\"$1\"\n        shift\n        _positional+=(\"$@\")\n        break\n        ;;\n    esac\n  done\n\n  # Prepare to execute the underlying shell\n  if [[ -n \"${_script}\" ]]; then\n    # Execute a script file\n    ### echo FWD 1 PARSED\n    exec -a \"${_shell_name}\" \"${_shell}\" \"${_options[@]}\" \"${_script}\" \"${_positional[@]}\"\n    exit 0\n  else\n    # Execute commands or 
start an interactive shell\n    ### echo FWD 2 PARSED\n    exec -a \"${_shell_name}\" \"${_shell}\" \"${_options[@]}\" \"${_positional[@]}\"\n    exit 0\n  fi\n}\n\n# Capture all arguments\n_ALL=\"$@\"\n### echo \"_ALL is ${_ALL}\"\n\n# Capture environment variables\n_ENV=$(env 2>&1)\n### echo \"_ENV is ${_ENV}\"\n\n# Determine _ARGS based on the first argument\nif [ \"${1}\" = \"-c\" ]; then\n  _ARGS=\"${2}\"\nelse\n  _ARGS=\"${1}\"\nfi\n\n### echo \"_ARGS is ${_ARGS}\"\n\nif [[ \"${_ARGS}\" =~ \"--php=\" ]]; then\n  PHP_FWD=YES\nelse\n  PHP_FWD=\nfi\n\nif [[ \"${_ARGS}\" =~ \"true COLUMNS=\" ]]; then\n  _R_M=`echo -n \"${_ARGS}\" | grep -o \"true COLUMNS=[0-9]\\+ \"`\n  _ARGS=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\nfi\n\nif [[ \"${_ARGS}\" =~ \"'\" ]] && [[ \"${_ARGS}\" =~ \"drush\" ]]; then\n  ### echo _ARGS RAW is ${_ARGS}\n  _ARGS=$(echo -n ${_ARGS} | tr -d \"'\" 2>&1)\n  ### echo _ARGS CLEAN is ${_ARGS}\nfi\n\nif [[ ! \"${_ARGS}\" =~ \"composer\" ]] \\\n  && [[ ! \"${_ARGS}\" =~ \"git\" ]] \\\n  && [[ ! \"${_ARGS}\" =~ \"cd \" ]] \\\n  && [[ ! \"${_ARGS}\" =~ \"mysql\" ]] \\\n  && [[ ! 
\"${_ARGS}\" =~ \"sudo\" ]]; then\n  _ARR=\n  if [[ \"${_ARGS}\" =~ \"/vendor/drush/drush/drush.php \" ]]; then\n    _R_M=`echo -n ${_ARGS}  | grep -o \".*/vendor/drush/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n    _R_M=`echo -n ${_ARGS}  | grep -o \"vendor/drush/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush8 \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush8\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush8\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array 
for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush10 \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush10\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush10\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush11 \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush11\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush11\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"php /opt/tools/\" ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /opt/tools/drush/.*/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n\t  esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"php /data/disk/\" ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /data/disk/.*/tools/drush/drush.php\"`\n    ### echo 
_R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n\t  esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ php\\ /mnt/.*/data/disk/ ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /mnt/.*/data/disk/.*/tools/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n\t  esac\n    done\n    ### echo _ARR is ${_ARR}\n  fi\nfi\n\n_C_ARR=\nif [[ \"${_ARGS}\" =~ \"composer \" ]] && [[ ! \"${_ARGS}\" =~ \"git remote add composer\" ]]; then\n  if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n    _C_RM=`echo -n ${_ARGS}  | grep -o \"set -m\\; composer \"`\n  else\n    _C_RM=`echo -n ${_ARGS}  | grep -o \"composer \"`\n  fi\n  ### echo _C_RM is ${_C_RM}\n  _CLR=`echo ${_ARGS}  | sed \"s/\\${_C_RM}//g\"`\n  _C_ARR=() # the buffer array for filtered parameters\n  for arg in \"${_CLR}\"; do\n    case ${_C_RM} in\n      $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n      *) _C_ARR+=(\"$arg\") ;;\n    esac\n  done\n  ### echo _C_ARR is ${_C_ARR}\nfi\n\n_INTERNAL=NO\n_LTD_GID=$(id -nG ${USER} 2>&1)\n_LTD_UID=$(id -nu ${USER} 2>&1)\nif [ -z \"${USER}\" ]; then\n  USER=$(id -nu ${USER} 2>&1)\n  _LTD_QQQ=YES\nfi\n_X_USR=\".*\"\nif [ \"${USER}\" = \"aegir\" ] \\\n  || [ \"${HOME}\" = \"/var/aegir\" ]; then\n  _Y_USR=aegir\n  _DRUSH_CLI_CTRL=\"/var/aegir/static/control\"\n  ### echo _DRUSH_CLI_CTRL is ${_DRUSH_CLI_CTRL}\nelse\n  _Y_USR=${USER%${_X_USR}}\n  _DRUSH_CLI_CTRL=\"/data/disk/${_Y_USR}/static/control\"\n  ### 
echo _DRUSH_CLI_CTRL is ${_DRUSH_CLI_CTRL}\nfi\nif [ -z \"${HOME}\" ]; then\n  if [ -d \"/home/${USER}/.tmp\" ]; then\n    HOME=\"/home/${USER}\"\n  elif [ -d \"/data/disk/${_Y_USR}/.tmp\" ]; then\n    HOME=\"/data/disk/${_Y_USR}\"\n  elif [ -d \"/var/${_Y_USR}/.tmp\" ]; then\n    HOME=\"/var/${_Y_USR}\"\n  fi\nfi\n\nif [[ \"${_ARR}\" =~ \" aliases\" ]] || [[ \"${_ARR}\" =~ \" sa\" ]]; then\n  if [[ \"${_ALL}\" =~ \"drush10 \" ]] || [[ \"${_ALL}\" =~ \"drush11 \" ]]; then\n    _ARR=\"sa --format=list | egrep -v \\\"(none|hostmaster|hm|server_|platform_|@none|@self)\\\"\"\n  elif [[ \"${_ALL}\" =~ \"drush \" ]] || [[ \"${_ALL}\" =~ \"drush8 \" ]]; then\n    _ARR=\"sa | egrep -v \\\"(none|hostmaster|hm|server_|platform_|@none|@self)\\\"\"\n  fi\nfi\n\nif [[ \"${_ARR}\" =~ \"-c \" ]]; then\n  _R_M=`echo -n \"${_ARR}\" | grep -o \"\\-c \"`\n  _ARR=`echo ${_ARR} | sed \"s/\\${_R_M}//g\"`\nfi\n\nif [[ \"${HOME}\" =~ (^)\"/data/disk/\" ]] \\\n  && [ -z \"${PHP_FWD}\" ] \\\n  && [[ \"${_ARGS}\" =~ (^)\"php ${HOME}\" ]]; then\n  _OCTO_SYS=\"${USER}\"\n  _OCTO_SYS_ARR=\n  if [[ \"${_ARGS}\" =~ \"tools/drush/drush.php\" ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /data/disk/.*/tools/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _OCTO_SYS_ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _OCTO_SYS_ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _OCTO_SYS_ARR is ${_OCTO_SYS_ARR}\n  fi\nelse\n  _OCTO_SYS=\nfi\n\nif [[ \"${HOME}\" =~ (^)\"/yyydata/disk/\" ]]; then\n  _DRUSH_CLI_CTRL=\n  ### echo _DRUSH_CLI_CTRL has been disabled\nfi\n\nif [ -d \"/home/${USER}/.tmp\" ]; then\n  export TMP=\"/home/${USER}/.tmp\"\n  export TMPDIR=\"/home/${USER}/.tmp\"\n  export TEMP=\"/home/${USER}/.tmp\"\n  if [[ \"${_ARGS}\" =~ \" id \" ]] \\\n    || 
[[ \"${_ARGS}\" =~ (^)\"id \" ]]; then\n    exit 1\n  elif [[ \"${_ARGS}\" =~ (^)\"newrelic\" ]] \\\n    || [[ \"${_ARGS}\" =~ (^)\"nrsysm\" ]]; then\n    exit 1\n  fi\nelif [ -d \"/data/disk/${_Y_USR}/.tmp\" ]; then\n  export TMP=\"/data/disk/${_Y_USR}/.tmp\"\n  export TMPDIR=\"/data/disk/${_Y_USR}/.tmp\"\n  export TEMP=\"/data/disk/${_Y_USR}/.tmp\"\nelif [ -d \"/var/${_Y_USR}/.tmp\" ]; then\n  export TMP=\"/var/${_Y_USR}/.tmp\"\n  export TMPDIR=\"/var/${_Y_USR}/.tmp\"\n  export TEMP=\"/var/${_Y_USR}/.tmp\"\nelse\n  export TMP=\"/tmp\"\n  export TMPDIR=\"/tmp\"\n  export TEMP=\"/tmp\"\nfi\n\nexport HOME=${HOME}\nexport TEMP=${TEMP}\nexport USER=${USER}\n\n### echo HOME is ${HOME}\n### echo TEMP is ${TEMP}\n### echo USER is ${USER}\n#\n### echo _ALL is ${_ALL}\n### echo _ARGS is ${_ARGS}\n### echo _LTD_GID is ${_LTD_GID}\n### echo _LTD_QQQ is ${_LTD_QQQ}\n### echo _LTD_UID is ${_LTD_UID}\n### echo _Y_USR is ${_Y_USR}\n#\n### echo 0 is $0\n### echo 1 is $1\n### echo 2 is $2\n### echo 3 is $3\n### echo 4 is $4\n### echo 5 is $5\n### echo 6 is $6\n### echo 7 is $7\n### echo 8 is $8\n### echo 9 is $9\n\n# Check PHP CLI version defined.\ncheck_php_cli_version() {\n  ### echo CHK start check_php_cli_version\n  if [ \"${HOME}\" = \"/var/aegir\" ] && [ -f \"/var/aegir/drush/drush.php\" ]; then\n    _PHP_CLI=$(grep \"/opt/php\" /var/aegir/drush/drush.php 2>&1)\n  else\n    if [ -f \"/data/disk/${_Y_USR}/tools/drush/drush.php\" ]; then\n      _PHP_CLI=$(grep \"/opt/php\" /data/disk/${_Y_USR}/tools/drush/drush.php 2>&1)\n    elif [ -f \"/data/disk/${_Y_USR}/static/control/cli.info\" ]; then\n      _PHP_CLI=\"php$(tr -d '.\\n' < /data/disk/${_Y_USR}/static/control/cli.info)\"\n    fi\n  fi\n  ### echo CHK 1 _PHP_CLI is ${_PHP_CLI}\n\n  _PHP_V=\"56 70 71 72 73 74 80 81 82 83 84 85\"\n  for e in ${_PHP_V}; do\n    if [[ \"${_PHP_CLI}\" =~ \"php${e}\" ]] && [ -x \"/opt/php${e}/bin/php\" ]; then\n      DRUSH_PHP=\"/opt/php${e}/bin/php\"\n      
PHP_INI=\"/opt/php${e}/lib/php.ini\"\n      PHPRC=\"/opt/php${e}/lib\"\n      if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n        PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n        PHPRC=\"${HOME}/.drush/php${e}\"\n      fi\n    fi\n  done\n  ### echo CHK 2 DRUSH_PHP is ${DRUSH_PHP}\n  ### echo CHK 2 PHP_INI is ${PHP_INI}\n  ### echo CHK 2 PHPRC is ${PHPRC}\n\n  for e in ${_PHP_V}; do\n    if [ -e \"${_DRUSH_CLI_CTRL}/php${e}.info\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n      DRUSH_PHP=\"/opt/php${e}/bin/php\"\n      PHP_INI=\"/opt/php${e}/lib/php.ini\"\n      PHPRC=\"/opt/php${e}/lib\"\n      if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n        PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n        PHPRC=\"${HOME}/.drush/php${e}\"\n      fi\n    fi\n  done\n  ### echo CHK 3 DRUSH_PHP is ${DRUSH_PHP}\n  ### echo CHK 3 PHP_INI is ${PHP_INI}\n  ### echo CHK 3 PHPRC is ${PHPRC}\n\n  if [ ! -z \"${PHP_INI}\" ]; then\n    export DRUSH_PHP;export PHP_INI;export PHPRC;\n  else\n    DRUSH_PHP=\"/usr/bin/php\"\n    export DRUSH_PHP;\n    ### echo CHK 4 DRUSH_PHP is ${DRUSH_PHP}\n  fi\n\n  ### echo CHK fin check_php_cli_version\n}\n\nif [ -n \"${JENKINS_HOME}\" ] \\\n  || [ -n \"${JENKINS_NODE_COOKIE}\" ] \\\n  || [ -n \"${WORKSPACE_TMP}\" ]; then\n  _IS_JENKINS=TRUE\nelse\n  _IS_JENKINS=FALSE\nfi\n\nif [ \"${_IS_JENKINS}\" = \"FALSE\" ]; then\n  check_php_cli_version\nfi\n\nif [ \"${_LTD_GID}\" = \"www-data users\" ] \\\n  || [[ \"${HOME}\" =~ (^)\"/var/aegir\" ]] \\\n  || [[ \"${HOME}\" =~ (^)\"/data/disk/\" ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"lshellg\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"ltd-shell-more\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"lshellg rvm\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"ltd-shell\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"ltd-shell rvm\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"rvm ltd-shell\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ (^)\"users www-data\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ (^)\"aegir www-data users\"($) ]]; then\n  
if [ \"${1}\" = \"-c\" ]; then\n    _IS_SH_PATH=NO\n    if [ \"$0\" = \"/bin/sh\" ] || [ \"$0\" = \"/usr/bin/sh\" ]; then\n      _IS_SH_PATH=YES\n    else\n      echo\n      echo \"  ERROR: Not Authorized Path\"\n      echo\n      exit 1\n    fi\n    if [[ $(whoami) == *.ftp ]] \\\n      || [[ \"${2}\" =~ \"drush\" ]] \\\n      || [[ \"${2}\" =~ \"mysql \" ]]; then\n      _IN_PATH=YES\n      _INTERNAL=YES\n      if [[ \"${_ARGS}\" =~ \"mysql \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush8 \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush10 \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush11 \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n        _PWD=$(pwd 2>&1)\n        _DEST_DRUSH=\"/opt/tools/drush/8/drush/drush.php\"\n        if [[ \"${_ARGS}\" =~ \"vendor/bin/drush \" ]]; then\n          _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n        fi\n        if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush \" ]]; then\n          _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n        fi\n        if [[ \"${_ARGS}\" =~ \"/vendor/bin/drush \" ]]; then\n          _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n        fi\n        if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n          # Detect if PWD ends with /web, /html, or /docroot\n          if [[ \"$PWD\" =~ /(web|html|docroot)$ ]]; then\n            _DEST_DRUSH=\"../vendor/drush/drush/drush.php\"\n          else\n            _DEST_DRUSH=\"vendor/drush/drush/drush.php\"\n          fi\n          ### echo INF 0 _DEST_DRUSH is ${_DEST_DRUSH}\n        fi\n        if [[ \"${_ARGS}\" =~ \"drush11 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush10 \" ]]; then\n          if [[ \"${_ARGS}\" =~ \"drush11 \" ]]; then\n            _DEST_DRUSH=\"/usr/bin/drush11\"\n          elif [[ \"${_ARGS}\" =~ \"drush10 \" ]]; then\n            _DEST_DRUSH=\"/usr/bin/drush10\"\n          fi\n          if [[ ! 
\"${HOME}\" =~ (^)\"/data/disk/\" ]]; then\n            _PHP_V=\"74 80 81 82 83 84 85\"\n            for e in ${_PHP_V}; do\n              if [ -e \"${_DRUSH_CLI_CTRL}/php${e}.info\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n                DRUSH_PHP=\"/opt/php${e}/bin/php\"\n                PHP_INI=\"/opt/php${e}/lib/php.ini\"\n                PHPRC=\"/opt/php${e}/lib\"\n                if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n                  PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n                  PHPRC=\"${HOME}/.drush/php${e}\"\n                fi\n              fi\n            done\n          fi\n          if [ ! -z \"${DRUSH_PHP}\" ] && [ ! -z \"${PHP_INI}\" ]; then\n            export DRUSH_PHP;export PHP_INI;export PHPRC;\n            ### echo INF 3 DRUSH_PHP is ${DRUSH_PHP}\n            ### echo INF 3 PHP_INI is ${PHP_INI}\n            ### echo INF 3 PHPRC is ${PHPRC}\n            ### echo INF 3 _DEST_DRUSH is ${_DEST_DRUSH}\n          else\n            echo\n            echo \"  Drush 11 and Drush 10 require at least PHP 7.4\"\n            echo \"  Please create an empty control file:\"\n            echo\n            echo \"  ${_DRUSH_CLI_CTRL}/php74.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php81.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php82.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php83.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php84.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php85.info\"\n            echo\n            echo \"  NOTE: If you create more than one,\"\n            echo \"        the highest version wins.\"\n            echo \"  Bye\"\n            echo\n            exit 0\n          fi\n        elif [[ \"${_ARGS}\" =~ \"drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush8 \" ]]; then\n          _DEST_DRUSH=\"/opt/tools/drush/8/drush/drush.php\"\n          if 
[[ \"${_ARGS}\" =~ \"vendor/bin/drush \" ]]; then\n            _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n          fi\n          if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush \" ]]; then\n            _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n          fi\n          if [[ \"${_ARGS}\" =~ \"/vendor/bin/drush \" ]]; then\n            _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n          fi\n          if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n            # Detect if PWD ends with /web, /html, or /docroot\n            if [[ \"$PWD\" =~ /(web|html|docroot)$ ]]; then\n              _DEST_DRUSH=\"../vendor/drush/drush/drush.php\"\n            else\n              _DEST_DRUSH=\"vendor/drush/drush/drush.php\"\n            fi\n            ### echo INF 1 _DEST_DRUSH is ${_DEST_DRUSH}\n          fi\n          if [[ ! \"${HOME}\" =~ (^)\"/data/disk/\" ]] && [ \"${_IS_JENKINS}\" = \"FALSE\" ]; then\n            _PHP_V=\"56 70 71 72 73 74 80 81 82 83 84 85\"\n            for e in ${_PHP_V}; do\n              if [ -e \"${_DRUSH_CLI_CTRL}/php${e}.info\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n                DRUSH_PHP=\"/opt/php${e}/bin/php\"\n                PHP_INI=\"/opt/php${e}/lib/php.ini\"\n                PHPRC=\"/opt/php${e}/lib\"\n                if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n                  PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n                  PHPRC=\"${HOME}/.drush/php${e}\"\n                fi\n              fi\n            done\n          fi\n          if [ ! -z \"${DRUSH_PHP}\" ] && [ ! 
-z \"${PHP_INI}\" ] && [ \"${_IS_JENKINS}\" = \"FALSE\" ]; then\n            export DRUSH_PHP;export PHP_INI;export PHPRC;\n            ### echo INF 4 DRUSH_PHP is ${DRUSH_PHP}\n            ### echo INF 4 PHP_INI is ${PHP_INI}\n            ### echo INF 4 PHPRC is ${PHPRC}\n            ### echo INF 4 _DEST_DRUSH is ${_DEST_DRUSH}\n          fi\n        fi\n        if [[ \"${_ARGS}\" =~ \"drush make\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush8 make\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush cc drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush8 cc drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush10 cr drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush11 cr drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php cr drush\" ]]; then\n          if [[ \"${_PWD}\" =~ \"/static\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush cc drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 cc drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 cr drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 cr drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php cr drush\" ]]; then\n            _CORRECT=YES\n            _CORRECT_PWD_R=$(pwd 2>&1)\n            _CORRECT_ARGS_R=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_R is ${_CORRECT_PWD_R}\n            ### echo _CORRECT_ARGS_R is ${_CORRECT_ARGS_R}\n          else\n            if [[ \"${_ARGS}\" =~ \"make-generate\" ]] \\\n              && [ -f \"${_PWD}/settings.php\" ]; then\n              _CORRECT=YES\n              _CORRECT_PWD_S=$(pwd 2>&1)\n              _CORRECT_ARGS_S=\"${_ARGS}\"\n              ### echo _CORRECT_PWD_S is ${_CORRECT_PWD_S}\n              ### echo _CORRECT_ARGS_S is ${_CORRECT_ARGS_S}\n            else\n              echo\n              echo \" This drush command cannot be run in ${_PWD}\"\n              if [[ \"${2}\" =~ \"make-generate\" ]]; then\n                echo \" Please cd to the valid sites/foo.com directory 
first\"\n                echo \" or use a valid @alias, like: drush @foo.com status\"\n                echo \" Hint: Use 'drush aliases' to display all Drush 8 aliases\"\n                echo \" Hint: Use 'drush11 aliases' to display all Drush 10+ aliases\"\n              else\n                echo \" Please cd ~/static first\"\n              fi\n              echo\n              exit 0\n            fi\n          fi\n        else\n          if [[ \"${_ARGS}\" =~ \"drush @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush -vvv @\" ]]; then\n            if [[ \"${2}\" =~ \"restore\"($) ]] \\\n              || [[ \"${2}\" =~ \"arr\"($) ]] \\\n              || [[ \"${2}\" =~ \"cli\"($) ]] \\\n              || [[ \"${2}\" =~ \"conf\"($) ]] \\\n              || [[ \"${2}\" =~ \"config\"($) ]] \\\n              || [[ \"${2}\" =~ \"execute\"($) ]] \\\n              || [[ \"${2}\" =~ \"core-quick-drupal\"($) ]] \\\n              || [[ \"${2}\" =~ \"exec\"($) ]] \\\n              || [[ \"${2}\" =~ \"xstatus\"($) ]] \\\n              || [[ \"${2}\" =~ \"redis-flush\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"qd\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"rs\"($) ]] \\\n              || [[ \"${2}\" =~ \"runserver\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"scr\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"sha\"($) ]] \\\n              || 
[[ \"${2}\" =~ \"shell-alias\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"si\"($) ]] \\\n              || [[ \"${2}\" =~ \"sql-create\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"ssh\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"sup\"($) ]]; then\n              echo\n              echo \" This drush command is not available (A)\"\n              echo\n              exit 0\n            else\n              _CORRECT=YES\n              ### LSHELL ==> ALL vdrush commands WITH @alias START here\n              ### LSHELL ==> ALL drush8 commands WITH @alias START here\n              _CORRECT_PWD_T=$(pwd 2>&1)\n              _CORRECT_ARGS_T=\"${_ARGS}\"\n              ### echo _CORRECT_PWD_T is ${_CORRECT_PWD_T}\n              ### echo _CORRECT_ARGS_T is ${_CORRECT_ARGS_T}\n            fi\n            ### LSHELL ==> BASIC vdrush commands WITH @alias END here\n            ### LSHELL ==> ALL drush8 commands WITH @alias END here\n            ### LSHELL ==> RESPAWNED vdrush commands WITH @alias like updatedb CONTINUE here\n            _CORRECT_PWD_U=$(pwd 2>&1)\n            _CORRECT_ARGS_U=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_U is ${_CORRECT_PWD_U}\n            ### echo _CORRECT_ARGS_U is ${_CORRECT_ARGS_U}\n          elif [[ \"${_ARGS}\" =~ \"cc drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"cr drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush aliases\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush dl\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 aliases\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 aliases\" ]] \\\n          
  || [[ \"${_ARGS}\" =~ \"drush11 help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 aliases\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 dl\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 pm-download\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush pm-download\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"/data/disk/\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php help\" ]]; then\n            _CORRECT=YES\n            ### LSHELL ==> RESPAWNED vdrush commands WITH @alias like updatedb END here\n            ### LSHELL ==> Commands like drush11 aliases START and END here\n            _CORRECT_PWD_V=$(pwd 2>&1)\n            _CORRECT_ARGS_V=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_V is ${_CORRECT_PWD_V}\n            ### echo _CORRECT_ARGS_V is ${_CORRECT_ARGS_V}\n          else\n            ### LSHELL ==> ALL drush8 commands WITHOUT @alias START here\n            _CORRECT_PWD_X=$(pwd 2>&1)\n            _CORRECT_ARGS_X=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_X is ${_CORRECT_PWD_X}\n            ### echo _CORRECT_ARGS_X is ${_CORRECT_ARGS_X}\n            if [ -f \"${_PWD}/settings.php\" ]; then\n              if [[ \"${_ARGS}\" =~ \"drush \" ]] \\\n                || [[ \"${_ARGS}\" =~ \"drush8 \" ]] \\\n                || [[ \"${_ARGS}\" =~ \"drush10 \" ]] \\\n                || [[ \"${_ARGS}\" =~ \"drush11 \" ]]; then\n                _CORRECT=YES\n                ### LSHELL ==> ALL drush8 commands WITHOUT @alias END here\n                _CORRECT_PWD_Y=$(pwd 2>&1)\n  
              _CORRECT_ARGS_Y=\"${_ARGS}\"\n                ### echo _CORRECT_PWD_Y is ${_CORRECT_PWD_Y}\n                ### echo _CORRECT_ARGS_Y is ${_CORRECT_ARGS_Y}\n              fi\n            fi\n          fi\n        fi\n      fi\n    else\n      if [[ \"${_ARGS}\" =~ \"drush @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush8 @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush10 @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush11 @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/bin/drush @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush8 -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush10 -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush11 -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/bin/drush -vvv @\" ]]; then\n        if [[ \"${2}\" =~ \"restore\"($) ]] \\\n          || [[ \"${2}\" =~ \"arr\"($) ]] \\\n          || [[ \"${2}\" =~ \"cli\"($) ]] \\\n          || [[ \"${2}\" =~ \"conf\"($) ]] \\\n          || [[ \"${2}\" =~ \"config\"($) ]] \\\n          || [[ \"${2}\" =~ \"execute\"($) ]] \\\n          || [[ \"${2}\" =~ \"core-quick-drupal\"($) ]] \\\n          || [[ \"${2}\" =~ \"exec\"($) ]] \\\n          || [[ \"${2}\" =~ \"xstatus\"($) ]] \\\n          || [[ \"${2}\" =~ \"redis-flush\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"qd\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"rs\"($) ]] \\\n          || [[ \"${2}\" =~ \"runserver\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"scr\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"sha\"($) ]] \\\n          || [[ \"${2}\" =~ \"shell-alias\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"si\"($) ]] \\\n          || [[ \"${2}\" =~ \"sql-create\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"ssh\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"sup\"($) ]]; then\n          echo\n          echo \" This drush command 
is not available (B)\"\n          echo\n          exit 0\n        fi\n        _DEBUG_PWD_X=$(pwd 2>&1)\n        _DEBUG_ARGS_X=\"${_ARGS}\"\n        ### echo _DEBUG_PWD_X is ${_DEBUG_PWD_X}\n        ### echo _DEBUG_ARGS_X is ${_DEBUG_ARGS_X}\n      fi\n      _RAW_IN_PATH=${2//[^a-z/]/}\n      if [[ \"${2}\" =~ (^)\"/usr/\" ]] \\\n        || [[ \"${2}\" =~ (^)\"/bin/\" ]] \\\n        || [[ \"${2}\" =~ (^)\"/opt/\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"(/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"/var/${_Y_USR}/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"(/var/${_Y_USR}/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltopdf\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltoimage\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/local/bin/wkhtmltopdf\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/local/bin/wkhtmltoimage\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltopdf-0.12.4\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltoimage-0.12.4\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/local/bin/composer\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/composer\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/unzip\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/convert\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/gs\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"/home/\" ]] \\\n        || [[ \"${2}\" =~ (^)\"/data/\" ]] \\\n 
       || [[ \"${2}\" =~ (^)\"/tmp/\" ]]; then\n        if [ -e \"${2}\" ]; then\n          _IN_PATH=NO\n        fi\n      else\n        _WHICH_TEST=\"$(which ${2})\"\n        if [[ \"${_WHICH_TEST}\" =~ (^)\"/usr/\" ]] \\\n          || [[ \"${_WHICH_TEST}\" =~ (^)\"/bin/\" ]] \\\n          || [[ \"${_WHICH_TEST}\" =~ (^)\"/opt/\" ]]; then\n          _IN_PATH=YES\n        else\n          _IN_PATH=NO\n        fi\n      fi\n    fi\n  else\n    if [[ \"${_ARGS}\" =~ \"drush @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush8 @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush10 @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush11 @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/bin/drush @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush8 -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush10 -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush11 -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/bin/drush -vvv @\" ]]; then\n      if [[ \"${2}\" =~ \"restore\"($) ]] \\\n        || [[ \"${2}\" =~ \"arr\"($) ]] \\\n        || [[ \"${2}\" =~ \"cli\"($) ]] \\\n        || [[ \"${2}\" =~ \"conf\"($) ]] \\\n        || [[ \"${2}\" =~ \"config\"($) ]] \\\n        || [[ \"${2}\" =~ \"execute\"($) ]] \\\n        || [[ \"${2}\" =~ \"core-quick-drupal\"($) ]] \\\n        || [[ \"${2}\" =~ \"exec\"($) ]] \\\n        || [[ \"${2}\" =~ \"xstatus\"($) ]] \\\n        || [[ \"${2}\" =~ \"redis-flush\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"qd\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"rs\"($) ]] \\\n        || [[ \"${2}\" =~ \"runserver\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"scr\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"sha\"($) ]] \\\n        || [[ \"${2}\" =~ \"shell-alias\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"si\"($) ]] \\\n        || [[ \"${2}\" =~ \"sql-create\"($) ]] \\\n        || [[ \"${2}\" 
=~ (^)\"ssh\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"sup\"($) ]]; then\n        echo\n        echo \" This drush command is not available (C)\"\n        echo\n        exit 0\n      fi\n      _DEBUG_PWD_Y=$(pwd 2>&1)\n      _DEBUG_ARGS_Y=\"${_ARGS}\"\n      ### echo _DEBUG_PWD_Y is ${_DEBUG_PWD_Y}\n      ### echo _DEBUG_ARGS_Y is ${_DEBUG_ARGS_Y}\n    fi\n    if [[ \"${1}\" =~ (^)\"/usr/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/bin/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/opt/\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"(/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"/var/${_Y_USR}/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"(/var/${_Y_USR}/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"/home/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/data/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/tmp/\" ]]; then\n      if [ -e \"${1}\" ]; then\n        _IN_PATH=NO\n      fi\n    else\n      _WHICH_TEST=\"$(which ${1})\"\n      if [[ \"${_WHICH_TEST}\" =~ (^)\"/usr/\" ]] \\\n        || [[ \"${_WHICH_TEST}\" =~ (^)\"/bin/\" ]] \\\n        || [[ \"${_WHICH_TEST}\" =~ (^)\"/opt/\" ]]; then\n        _IN_PATH=YES\n      else\n        _IN_PATH=NO\n      fi\n    fi\n  fi\n  if [[ \"${_LTD_GID}\" =~ \"lshellg\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"ltd-shell-more\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"lshellg rvm\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"ltd-shell\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"rvm ltd-shell\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"ltd-shell rvm\"($) ]]; then\n    if [[ \"${_ARGS}\" =~ \"*\" ]]; then\n      if [[ $(whoami) == *.ftp ]]; then\n        _SILENT=YES\n        ####\n        #### The [[ $(whoami) == *.ftp ]] ### OK for Drush and Drupal\n        #### The [[ \"${_ARGS}\" =~ \"set -m; \" ]] ### Legacy defunct 
method\n        ####\n      else\n        if [[ \"${_ARGS}\" =~ \"__build__\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"_tmp_\" ]] \\\n          || [[ \"${_ARGS}\" =~ \".tmp\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"avconv\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"bzr \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"chdir \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"compass \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"composer \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"convert \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"curl \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"ffmpeg \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"flvtool \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"git \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"is_\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"java\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"logger \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php56 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php74 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php81 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php82 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php83 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php84 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php85 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"unzip \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"rename \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"rrdtool \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"rsync \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"sass \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"scp \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"scss \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"sendmail \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"ssh \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"svn \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"tar \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"wget \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"wkhtmltoimage\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"wkhtmltopdf\" ]]; then\n          
_SILENT=YES\n        else\n          echo\n        fi\n      fi\n    fi\n  fi\n  if [ \"${_IN_PATH}\" = \"YES\" ]; then\n    if [ -x \"/usr/local/bin/ruby\" ] && [ -x \"/usr/local/bin/gem\" ]; then\n      if [[ $(whoami) == *.ftp ]] || [ ! -z \"${SSH_CLIENT}\" ]; then\n        _RUBY_ALLOW=YES\n      fi\n    fi\n    if [ \"${_RUBY_ALLOW}\" = \"YES\" ]; then\n      if [ -d \"/opt/user/gems/${USER}\" ]; then\n        export GEM_HOME=\"/opt/user/gems/${USER}\"\n        export GEM_PATH=\"/opt/user/gems/${USER}\"\n        export PATH=\"/opt/user/gems/${USER}/bin:$PATH\"\n      fi\n    fi\n    if [ -x \"/usr/bin/npm\" ] && [ -e \"/home/${USER}/.npmrc\" ]; then\n      if [[ $(whoami) == *.ftp ]] || [ ! -z \"${SSH_CLIENT}\" ]; then\n        _NPM_ALLOW=YES\n      fi\n    fi\n    if [ \"${_NPM_ALLOW}\" = \"YES\" ]; then\n      if [ -d \"/opt/user/npm/${USER}\" ]; then\n        export NPM_PACKAGES=\"/opt/user/npm/${USER}/.npm-packages\"\n        export PATH=\"${NPM_PACKAGES}/bin:${PATH}\"\n        export NODE_PATH=\"${NPM_PACKAGES}/lib/node_modules:${NODE_PATH}\"\n      fi\n    fi\n    if [ \"$0\" = \"/bin/sh\" ] \\\n      || [ \"$0\" = \"/usr/bin/sh\" ] \\\n      || [ \"$0\" = \"/opt/local/bin/websh\" ] \\\n      || [ \"$0\" = \"/bin/websh\" ]; then\n      if [ -x \"/bin/dash\" ]; then\n        if [ ! -z \"${_ARR}\" ] && [ -z \"${_OCTO_SYS_ARR}\" ]; then\n          _DEST_DRUSH=${_DEST_DRUSH//\\\\/}\n          ### echo EXD 1 DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXD 1 PHP_INI is ${PHP_INI}\n          ### echo EXD 1 _DEST_DRUSH is ${_DEST_DRUSH}\n          ### echo EXD 1 _ARR is ${_ARR}\n          ### echo EXD 1 ${DRUSH_PHP} ${_DEST_DRUSH} ${_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} ${_DEST_DRUSH} ${_ARR}\"\n          exit 0\n        elif [ ! -z \"${PHP_FWD}\" ] && [ ! 
-z \"${_OCTO_SYS_ARR}\" ]; then\n          _DEST_DRUSH=${_DEST_DRUSH//\\\\/}\n          ### echo EXD 3-PHP_FWD DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXD 3-PHP_FWD PHP_INI is ${PHP_INI}\n          ### echo EXD 3-PHP_FWD _DEST_DRUSH is ${_DEST_DRUSH}\n          ### echo EXD 3-PHP_FWD _OCTO_SYS_ARR is ${_OCTO_SYS_ARR}\n          ### echo EXD 3-PHP_FWD ${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\"\n          exit 0\n        elif [ -z \"${PHP_FWD}\" ] && [ ! -z \"${_OCTO_SYS_ARR}\" ]; then\n          _DEST_DRUSH=${_DEST_DRUSH//\\\\/}\n          ### echo EXD 3-NO-PHP_FWD DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXD 3-NO-PHP_FWD PHP_INI is ${PHP_INI}\n          ### echo EXD 3-NO-PHP_FWD _DEST_DRUSH is ${_DEST_DRUSH}\n          ### echo EXD 3-NO-PHP_FWD _OCTO_SYS_ARR is ${_OCTO_SYS_ARR}\n          ### echo EXD 3-NO-PHP_FWD ${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\"\n          exit 0\n        elif [ ! 
-z \"${_C_ARR}\" ]; then\n          ### echo EXC 1 DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXC 1 _C_ARR is ${_C_ARR}\n          ### echo EXC 1 _F_ARR is \"$@\"\n          ### echo EXC 1 ${DRUSH_PHP} /usr/local/bin/composer ${_C_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} /usr/local/bin/composer ${_C_ARR}\"\n          exit 0\n        else\n          ### echo EXH 1 _forward_to_dash \"$@\"\n          _forward_to_dash \"$@\"\n          exit 0\n        fi\n      else\n        ### echo EXH 3 _F_ARR is \"$@\"\n        ### echo EXH 3 /bin/bash \"$@\"\n        exec /bin/bash \"$@\"\n        exit 0\n      fi\n    else\n      ### echo EXO 1 _F_ARR is \"$@\"\n      ### echo EXO 1 $0 \"$@\"\n      exec $0 \"$@\"\n      exit 0\n    fi\n  else\n    exit 1\n  fi\nelse\n  if [ \"${USER}\" = \"root\" ]; then\n    if [[ \"${1}\" =~ \"drush\" ]] \\\n      || [[ \"${2}\" =~ \"drush\" ]]; then\n      if [[ \"${2}\" =~ \"uli\" ]] \\\n        || [[ \"${2}\" =~ \"vget\" ]] \\\n        || [[ \"${2}\" =~ \"config-list\" ]] \\\n        || [[ \"${2}\" =~ \"config-edit\" ]] \\\n        || [[ \"${2}\" =~ \"config-get\" ]] \\\n        || [[ \"${2}\" =~ \"config-set\" ]] \\\n        || [[ \"${2}\" =~ \"--version\" ]] \\\n        || [[ \"${2}\" =~ \"vset\" ]] \\\n        || [[ \"${2}\" =~ \"status\" ]]; then\n        _ALLOW=YES\n      else\n        echo\n        echo \" Drush should never be run as root!\"\n        echo \" Please su to some non-root account\"\n        echo\n        exit 0\n      fi\n    fi\n  fi\n  if [ \"$0\" = \"/bin/sh\" ] \\\n    || [ \"$0\" = \"/usr/bin/sh\" ] \\\n    || [ \"$0\" = \"/opt/local/bin/websh\" ] \\\n    || [ \"$0\" = \"/bin/websh\" ]; then\n    if [ -x \"/bin/dash\" ]; then\n      ### echo EXH 4 _F_ARR is \"$@\"\n      ### echo EXH 4 /bin/dash \"$@\"\n      exec /bin/dash \"$@\"\n      exit 0\n    else\n      ### echo EXH 6 _F_ARR is \"$@\"\n      ### echo EXH 6 /bin/bash \"$@\"\n      exec /bin/bash \"$@\"\n      exit 0\n    fi\n  else\n    ### echo 
EXO 2 _F_ARR is \"$@\"\n    ### echo EXO 2 $0 \"$@\"\n    exec $0 \"$@\"\n    exit 0\n  fi\n  exit 0\nfi\n"
  },
  {
    "path": "aegir/makefiles/civicrm-4.5-d6.make",
    "content": "; CiviCRM 4.5-d6 master makefile\n;\n\napi = 2\ncore = 6.x\n\nprojects[pressflow][type] = \"core\"\nprojects[pressflow][download][type] = \"get\"\nprojects[pressflow][download][url] = \"http://files.aegir.cc/core/pressflow-6.60.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.5.8/civicrm-4.5.8-drupal6.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrm_l10n][type] = \"module\"\nprojects[civicrm_l10n][subdir] = \"civicrm\"\nprojects[civicrm_l10n][download][type] = \"get\"\nprojects[civicrm_l10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.5.8/civicrm-4.5.8-l10n.tar.gz/download?use_mirror=autoselect\"\nprojects[civicrm_l10n][overwrite] = TRUE\n\nprojects[civicrm_theme][type] = \"theme\"\nprojects[civicrm_theme][subdir] = \"contrib\"\nprojects[civicrm_theme][version] = \"1.4\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"1.8\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-4.5-d7.make",
    "content": "; CiviCRM 4.5-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.5.8/civicrm-4.5.8-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.5.8/civicrm-4.5.8-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-4.6-d6.make",
    "content": "; CiviCRM 4.6-d6 master makefile\n;\n\napi = 2\ncore = 6.x\n\nprojects[pressflow][type] = \"core\"\nprojects[pressflow][download][type] = \"get\"\nprojects[pressflow][download][url] = \"http://files.aegir.cc/core/pressflow-6.60.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.6.37/civicrm-4.6.37-drupal6.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrm_l10n][type] = \"module\"\nprojects[civicrm_l10n][subdir] = \"civicrm\"\nprojects[civicrm_l10n][download][type] = \"get\"\nprojects[civicrm_l10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.6.37/civicrm-4.6.37-l10n.tar.gz/download?use_mirror=autoselect\"\nprojects[civicrm_l10n][overwrite] = TRUE\n\nprojects[civicrm_theme][type] = \"theme\"\nprojects[civicrm_theme][subdir] = \"contrib\"\nprojects[civicrm_theme][version] = \"1.4\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"1.8\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-4.6-d7.make",
    "content": "; CiviCRM 4.6-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.6.37/civicrm-4.6.37-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.6.37/civicrm-4.6.37-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-4.7-d6.make",
    "content": "; CiviCRM 4.7-d6 master makefile\n;\n\napi = 2\ncore = 6.x\n\nprojects[pressflow][type] = \"core\"\nprojects[pressflow][download][type] = \"get\"\nprojects[pressflow][download][url] = \"http://files.aegir.cc/core/pressflow-6.60.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.7.31/civicrm-4.7.31-drupal6.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrm_l10n][type] = \"module\"\nprojects[civicrm_l10n][subdir] = \"civicrm\"\nprojects[civicrm_l10n][download][type] = \"get\"\nprojects[civicrm_l10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.7.31/civicrm-4.7.31-l10n.tar.gz/download?use_mirror=autoselect\"\nprojects[civicrm_l10n][overwrite] = TRUE\n\nprojects[civicrm_theme][type] = \"theme\"\nprojects[civicrm_theme][subdir] = \"contrib\"\nprojects[civicrm_theme][version] = \"1.4\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"1.8\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-4.7-d7.make",
    "content": "; CiviCRM 4.7-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.7.31/civicrm-4.7.31-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/4.7.31/civicrm-4.7.31-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-5.0-d6.make",
    "content": "; CiviCRM 5.0-d6 master makefile\n;\n\napi = 2\ncore = 6.x\n\nprojects[pressflow][type] = \"core\"\nprojects[pressflow][download][type] = \"get\"\nprojects[pressflow][download][url] = \"http://files.aegir.cc/core/pressflow-6.60.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.0.2/civicrm-5.0.2-drupal6.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrm_l10n][type] = \"module\"\nprojects[civicrm_l10n][subdir] = \"civicrm\"\nprojects[civicrm_l10n][download][type] = \"get\"\nprojects[civicrm_l10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.0.2/civicrm-5.0.2-l10n.tar.gz/download?use_mirror=autoselect\"\nprojects[civicrm_l10n][overwrite] = TRUE\n\nprojects[civicrm_theme][type] = \"theme\"\nprojects[civicrm_theme][subdir] = \"contrib\"\nprojects[civicrm_theme][version] = \"1.4\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"1.8\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-5.0-d7.make",
    "content": "; CiviCRM 5.0-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.0.2/civicrm-5.0.2-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.0.2/civicrm-5.0.2-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-5.1-d7.make",
    "content": "; CiviCRM 5.1-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.1.2/civicrm-5.1.2-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.1.2/civicrm-5.1.2-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-5.2-d7.make",
    "content": "; CiviCRM 5.2-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.2.2/civicrm-5.2.2-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.2.2/civicrm-5.2.2-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-5.3-d7.make",
    "content": "; CiviCRM 5.3-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.3.0/civicrm-5.3.0-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.3.0/civicrm-5.3.0-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-5.35-d7.make",
    "content": "; CiviCRM 5.35-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"https://download.civicrm.org/civicrm-5.35.0-drupal.tar.gz\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"https://download.civicrm.org/civicrm-5.35.0-l10n.tar.gz\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/makefiles/civicrm-5.9-d7.make",
    "content": "; CiviCRM 5.9-d7 master makefile\n;\n\napi = 2\ncore = 7.x\n\nprojects[drupal][type] = \"core\"\nprojects[drupal][download][type] = \"get\"\nprojects[drupal][download][url] = \"http://files.aegir.cc/core/drupal-7.105.1.tar.gz\"\n\nprojects[civicrm][type] = \"module\"\nprojects[civicrm][directory_name] = \"civicrm\"\nprojects[civicrm][download][type] = \"get\"\nprojects[civicrm][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.9.0/civicrm-5.9.0-drupal.tar.gz/download?use_mirror=autoselect\"\n\nprojects[civicrml10n][type] = \"module\"\nprojects[civicrml10n][subdir] = \"civicrm\"\nprojects[civicrml10n][download][type] = \"get\"\nprojects[civicrml10n][download][url] = \"http://sourceforge.net/projects/civicrm/files/civicrm-stable/5.9.0/civicrm-5.9.0-l10n.tar.gz/download?use_mirror=autoselect\"\n\nprojects[admin_menu][type] = \"module\"\nprojects[admin_menu][subdir] = \"contrib\"\nprojects[admin_menu][version] = \"3.0-rc6\"\n"
  },
  {
    "path": "aegir/patches/0001-Print-site_footer-if-defined.patch",
    "content": "From c22f1e9aa41010d91f3105628a8652ef42e12efa Mon Sep 17 00:00:00 2001\nFrom: Barracuda Team <admin@omega8.cc>\nDate: Sat, 16 Apr 2016 18:11:35 +0200\nSubject: [PATCH] Print $site_footer if defined\n\n---\n page.tpl.php | 5 +++++\n 1 file changed, 5 insertions(+)\n\ndiff --git a/page.tpl.php b/page.tpl.php\nindex 6a68dc4..a7eda81 100644\n--- a/page.tpl.php\n+++ b/page.tpl.php\n@@ -51,6 +51,11 @@\n   </div></div>\n \n   <div id=\"footer\" class='reverse'><div class='limiter'>\n+\n+    <?php if ($site_footer = variable_get('site_footer', '')): ?>\n+    <?php print render($site_footer); ?>\n+    <?php endif; ?>\n+\n     <?php print render($page['footer']); ?>\n     <?php if ($secondary_menu): ?>\n         <?php print theme('links__system_secondary_menu', array('links' => $secondary_menu, 'attributes' => array('id' => 'secondary-menu', 'class' => array('links', 'inline')))); ?>\n-- \n2.2.1\n\n"
  },
  {
    "path": "aegir/patches/2106995-fatal-error-non-object-1.patch",
    "content": "diff -urp a/registration.install b/registration.install\n--- a/registration.install      2016-04-22 17:11:27.000000000 +0000\n+++ b/registration.install      2016-05-03 19:36:33.000000000 +0000\n@@ -5,6 +5,16 @@\n  * Schema and installation hooks for registration module.\n  */\n\n+// https://www.drupal.org/node/542202#comment-6065048\n+module_load_include('inc','registration','lib/registration.entity');\n+module_load_include('inc','registration','lib/registration_state.ui_controller');\n+module_load_include('inc','registration','lib/registration.metadata');\n+module_load_include('inc','registration','lib/registration_type.controller');\n+module_load_include('inc','registration','lib/registration_state.controller');\n+module_load_include('inc','registration','lib/registration_type.entity');\n+module_load_include('inc','registration','lib/registration_state.entity');\n+module_load_include('inc','registration','lib/registration_type.ui_controller');\n+\n /**\n  * Implements hook_schema().\n  */\n"
  },
  {
    "path": "aegir/patches/6-core/SA-CORE-2018-002-D6.patch",
    "content": "From 2d234e178e17f3db557a0919eec303917ece09b0 Mon Sep 17 00:00:00 2001\nFrom: David Snopek <dsnopek@gmail.com>\nDate: Wed, 28 Mar 2018 14:53:39 -0500\nSubject: [PATCH] Backport of fixes from SA-CORE-2018-002 (#116)\n\n---\n includes/bootstrap.inc       | 55 ++++++++++++++++++++++++++++++++++++++++++++\n modules/system/system.module |  2 +-\n 2 files changed, 56 insertions(+), 1 deletion(-)\n\ndiff --git a/includes/bootstrap.inc b/includes/bootstrap.inc\nindex acc055e745..afed7408e4 100644\n--- a/includes/bootstrap.inc\n+++ b/includes/bootstrap.inc\n@@ -1483,6 +1483,7 @@ function _drupal_bootstrap($phase) {\n       timer_start('page');\n       // Initialize the configuration\n       conf_init();\n+      _drupal_bootstrap_sanitize_request();\n       break;\n \n     case DRUPAL_BOOTSTRAP_EARLY_PAGE_CACHE:\n@@ -2207,3 +2208,57 @@ function filter_xss_bad_protocol($string, $decode = TRUE) {\n   } while ($before != $string);\n   return check_plain($string);\n }\n+\n+/**\n+ * Sanitizes unsafe keys from the request.\n+ */\n+function _drupal_bootstrap_sanitize_request() {\n+  global $conf;\n+  static $sanitized;\n+\n+  if (!$sanitized) {\n+    // Ensure the whitelist array exists.\n+    if (!isset($conf['sanitize_input_whitelist']) || !is_array($conf['sanitize_input_whitelist'])) {\n+      $conf['sanitize_input_whitelist'] = array();\n+    }\n+\n+    $sanitized_keys = _drupal_bootstrap_sanitize_input($_GET, $conf['sanitize_input_whitelist']);\n+    $sanitized_keys = array_merge($sanitized_keys, _drupal_bootstrap_sanitize_input($_POST, $conf['sanitize_input_whitelist']));\n+    $sanitized_keys = array_merge($sanitized_keys, _drupal_bootstrap_sanitize_input($_REQUEST, $conf['sanitize_input_whitelist']));\n+    $sanitized_keys = array_merge($sanitized_keys, _drupal_bootstrap_sanitize_input($_COOKIE, $conf['sanitize_input_whitelist']));\n+    $sanitized_keys = array_unique($sanitized_keys);\n+\n+    if (count($sanitized_keys) && 
!empty($conf['sanitize_input_logging'])) {\n+      trigger_error(check_plain(sprintf('Potentially unsafe keys removed from request parameters: %s', implode(', ', $sanitized_keys)), E_USER_WARNING));\n+    }\n+\n+    $sanitized = TRUE;\n+  }\n+}\n+\n+/**\n+ * Sanitizes unsafe keys from user input.\n+ *\n+ * @param mixed $input\n+ *   Input to sanitize.\n+ * @param array $whitelist\n+ *   Whitelist of values.\n+ * @return array\n+ */\n+function _drupal_bootstrap_sanitize_input(&$input, $whitelist = array()) {\n+  $sanitized_keys = array();\n+\n+  if (is_array($input)) {\n+    foreach ($input as $key => $value) {\n+      if ($key !== '' && $key[0] === '#' && !in_array($key, $whitelist, TRUE)) {\n+        unset($input[$key]);\n+        $sanitized_keys[] = $key;\n+      }\n+      elseif (is_array($input[$key])) {\n+        $sanitized_keys = array_merge($sanitized_keys, _drupal_bootstrap_sanitize_input($input[$key], $whitelist));\n+      }\n+    }\n+  }\n+\n+  return $sanitized_keys;\n+}\ndiff --git a/modules/system/system.module b/modules/system/system.module\nindex 93962ac5b5..ed372b8052 100644\n--- a/modules/system/system.module\n+++ b/modules/system/system.module\n@@ -8,7 +8,7 @@\n /**\n  * The current system version.\n  */\n-define('VERSION', '6.39');\n+define('VERSION', '6.40');\n \n /**\n  * Core API compatibility.\n-- \n2.14.3 (Apple Git-98)\n\n"
  },
  {
    "path": "aegir/patches/6-core/SA-CORE-2018-004-D6.patch",
    "content": "From f7876dcf6428b934c928aa76dbbaacf2c6e49875 Mon Sep 17 00:00:00 2001\nFrom: David Snopek <dsnopek@gmail.com>\nDate: Wed, 25 Apr 2018 11:12:44 -0500\nSubject: [PATCH] Backport of fixes from SA-CORE-2018-004\n\n---\n includes/bootstrap.inc | 93 ++++++++++++++++++++++++++++++++++++++++++++++++++\n 1 file changed, 93 insertions(+)\n\ndiff --git a/includes/bootstrap.inc b/includes/bootstrap.inc\nindex 5654ddec8b..72343aac79 100644\n--- a/includes/bootstrap.inc\n+++ b/includes/bootstrap.inc\n@@ -1202,6 +1202,10 @@ function _drupal_bootstrap($phase) {\n           unset($_GET['destination']);\n           unset($_REQUEST['destination']);\n         }\n+        // Ensure that the destination's query parameters are not dangerous.\n+        if (isset($_GET['destination'])) {\n+          _drupal_bootstrap_clean_destination();\n+        }\n         // If there's still something in $_REQUEST['destination'] that didn't\n         // come from $_GET, check it too.\n         if (isset($_REQUEST['destination']) && (!isset($_GET['destination']) || $_REQUEST['destination'] != $_GET['destination']) && menu_path_is_external($_REQUEST['destination'])) {\n@@ -1660,3 +1664,92 @@ function _drupal_bootstrap_sanitize_input(&$input, $whitelist = array()) {\n \n   return $sanitized_keys;\n }\n+\n+/**\n+ * Removes the destination if it is dangerous.\n+ *\n+ * Note this can only be called after common.inc has been included.\n+ *\n+ * @return bool\n+ *   TRUE if the destination has been removed from $_GET, FALSE if not.\n+ */\n+function _drupal_bootstrap_clean_destination() {\n+  $dangerous_keys = array();\n+\n+  $log_sanitized_keys = variable_get('sanitize_input_logging', FALSE);\n+\n+  $parts = _drupal_parse_url($_GET['destination']);\n+  if (!empty($parts['query'])) {\n+    $whitelist = variable_get('sanitize_input_whitelist', array());\n+    $log_sanitized_keys = variable_get('sanitize_input_logging', FALSE);\n+\n+    $dangerous_keys = 
_drupal_bootstrap_sanitize_input($parts['query'], $whitelist);\n+    if (!empty($dangerous_keys)) {\n+      // The destination is removed rather than sanitized to mirror the\n+      // handling of external destinations.\n+      unset($_GET['destination']);\n+      unset($_REQUEST['destination']);\n+      if ($log_sanitized_keys) {\n+        trigger_error(sprintf('Potentially unsafe destination removed from query string parameters (GET) because it contained the following keys: %s', implode(', ', $dangerous_keys)));\n+      }\n+      return TRUE;\n+    }\n+  }\n+  return FALSE;\n+}\n+\n+/**\n+ * Backport of drupal_parse_url() from Drupal 7.\n+ */\n+function _drupal_parse_url($url) {\n+  $options = array(\n+    'path' => NULL,\n+    'query' => array(),\n+    'fragment' => '',\n+  );\n+\n+  // External URLs: not using parse_url() here, so we do not have to rebuild\n+  // the scheme, host, and path without having any use for it.\n+  if (strpos($url, '://') !== FALSE) {\n+\n+    // Split off everything before the query string into 'path'.\n+    $parts = explode('?', $url);\n+    $options['path'] = $parts[0];\n+\n+    // If there is a query string, transform it into keyed query parameters.\n+    if (isset($parts[1])) {\n+      $query_parts = explode('#', $parts[1]);\n+      parse_str($query_parts[0], $options['query']);\n+\n+      // Take over the fragment, if there is any.\n+      if (isset($query_parts[1])) {\n+        $options['fragment'] = $query_parts[1];\n+      }\n+    }\n+  }\n+  else {\n+\n+    // parse_url() does not support relative URLs, so make it absolute. E.g. the\n+    // relative URL \"foo/bar:1\" isn't properly parsed.\n+    $parts = parse_url('http://example.com/' . 
$url);\n+\n+    // Strip the leading slash that was just added.\n+    $options['path'] = substr($parts['path'], 1);\n+    if (isset($parts['query'])) {\n+      parse_str($parts['query'], $options['query']);\n+    }\n+    if (isset($parts['fragment'])) {\n+      $options['fragment'] = $parts['fragment'];\n+    }\n+  }\n+\n+  // The 'q' parameter contains the path of the current page if clean URLs are\n+  // disabled. It overrides the 'path' of the URL when present, even if clean\n+  // URLs are enabled, due to how Apache rewriting rules work. The path\n+  // parameter must be a string.\n+  if (isset($options['query']['q']) && is_string($options['query']['q'])) {\n+    $options['path'] = $options['query']['q'];\n+    unset($options['query']['q']);\n+  }\n+  return $options;\n+}\n"
  },
  {
    "path": "aegir/patches/6-core/patch_commit_7a847db99f80.patch",
    "content": "diff --git a/modules/system/system.install b/modules/system/system.install\nindex 953ad1631f15d42529d76aeb9b7fe0adcfe1b2e4..4a75da5c1c0280c82df953ef8d620687b5f7e1ec 100644\n--- a/modules/system/system.install\n+++ b/modules/system/system.install\n@@ -119,14 +119,17 @@ function system_requirements($phase) {\n \n   // Test the web server identity.\n   if (isset($_SERVER['SERVER_SOFTWARE']) && preg_match(\"/(?:ApacheSolaris|Nginx)/i\", $_SERVER['SERVER_SOFTWARE'])) {\n-    $is_nginx = TRUE; // Skip this on BOA since .htaccess is never used in Nginx.\n+    // Skip this on BOA and Nginx since .htaccess is never used in Nginx.\n+    $is_nginx = TRUE;\n   }\n   elseif (!isset($_SERVER['SERVER_SOFTWARE'])) {\n-    $is_nginx = TRUE; // Skip this on BOA since .htaccess is never used in Nginx.\n+    // Skip this in Aegir backend where SERVER_SOFTWARE is not set, since .htaccess is never used in Nginx.\n+    $is_nginx = TRUE;\n   }\n   else {\n     $is_nginx = FALSE;\n   }\n+\n   // Test the contents of the .htaccess files.\n   if ($phase == 'runtime' && !$is_nginx) {\n     // Try to write the .htaccess files first, to prevent false alarms in case\n"
  },
  {
    "path": "aegir/patches/7-core/3143016-83-D7.patch",
    "content": "From 4e87afee1ea3aa5379b1726b53015545fcaa7d66 Mon Sep 17 00:00:00 2001\nFrom: mcdruid <drew@mcdruid.co.uk>\nDate: Mon, 1 Jun 2020 12:23:57 +0100\nSubject: [PATCH] Issue #3143016 by dsnopek, mcdruid, effulgentsia, philltran,\n Fabianx, lauriii: Chrome 83 cancels jquery.form ajax requests over https\n\n---\n misc/ajax.js                  | 19 +++++++++++++++++++\n modules/system/system.install |  7 +++++++\n 2 files changed, 26 insertions(+)\n\ndiff --git a/misc/ajax.js b/misc/ajax.js\nindex c944ebbf24..0c9579b00d 100644\n--- a/misc/ajax.js\n+++ b/misc/ajax.js\n@@ -198,6 +198,25 @@ Drupal.ajax = function (base, element, element_settings) {\n     type: 'POST'\n   };\n\n+  // For multipart forms (e.g., file uploads), jQuery Form targets the form\n+  // submission to an iframe instead of using an XHR object. The initial \"src\"\n+  // of the iframe, prior to the form submission, is set to options.iframeSrc.\n+  // \"about:blank\" is the semantically correct, standards-compliant, way to\n+  // initialize a blank iframe; however, some old IE versions (possibly only 6)\n+  // incorrectly report a mixed content warning when iframes with an\n+  // \"about:blank\" src are added to a parent document with an https:// origin.\n+  // jQuery Form works around this by defaulting to \"javascript:false\" instead,\n+  // but that breaks on Chrome 83, so here we force the semantically correct\n+  // behavior for all browsers except old IE.\n+  // @see https://www.drupal.org/project/drupal/issues/3143016\n+  // @see https://github.com/jquery-form/form/blob/df9cb101b9c9c085c8d75ad980c7ff1cf62063a1/jquery.form.js#L68\n+  // @see https://bugs.chromium.org/p/chromium/issues/detail?id=1084874\n+  // @see https://html.spec.whatwg.org/multipage/browsers.html#creating-browsing-contexts\n+  // @see https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy\n+  if (navigator.userAgent.indexOf(\"MSIE\") === -1) {\n+    ajax.options.iframeSrc = 'about:blank';\n+  
}\n+\n   // Bind the ajaxSubmit function to the element event.\n   $(ajax.element).bind(element_settings.event, function (event) {\n     if (!Drupal.settings.urlIsAjaxTrusted[ajax.url] && !Drupal.urlIsLocal(ajax.url)) {\ndiff --git a/modules/system/system.install b/modules/system/system.install\nindex 93afb06f78..d6707bedf7 100644\n--- a/modules/system/system.install\n+++ b/modules/system/system.install\n@@ -3299,6 +3299,13 @@ function system_update_7083() {\n   // Empty update to force a rebuild of hook_library() and JS aggregates.\n }\n\n+/**\n+ * Rebuild JavaScript aggregates to include 'ajax.js' fix for Chrome 83.\n+ */\n+function system_update_7084() {\n+  // Empty update to force a rebuild of JS aggregates.\n+}\n+\n /**\n  * @} End of \"defgroup updates-7.x-extra\".\n  * The next series of updates should start at 8000.\n--\n2.24.0\n\n"
  },
  {
    "path": "aegir/patches/7-core/SA-CORE-2014-005-D7.patch",
    "content": "diff --git a/includes/database/database.inc b/includes/database/database.inc\nindex f78098b..01b6385 100644\n--- a/includes/database/database.inc\n+++ b/includes/database/database.inc\n@@ -736,7 +736,7 @@ abstract class DatabaseConnection extends PDO {\n     // to expand it out into a comma-delimited set of placeholders.\n     foreach (array_filter($args, 'is_array') as $key => $data) {\n       $new_keys = array();\n-      foreach ($data as $i => $value) {\n+      foreach (array_values($data) as $i => $value) {\n         // This assumes that there are no other placeholders that use the same\n         // name.  For example, if the array placeholder is defined as :example\n         // and there is already an :example_2 placeholder, this will generate\n"
  },
  {
    "path": "aegir/patches/7-core/SA-CORE-2018-002-D7.patch",
    "content": "From 2266d2a83db50e2f97682d9a0fb8a18e2722cba5 Mon Sep 17 00:00:00 2001\nFrom: David Rothstein <drothstein@gmail.com>\nDate: Tue, 27 Mar 2018 15:24:40 -0400\nSubject: [PATCH] SA-CORE-2018-002 by Jasu_M, samuel.mortenson,\n David_Rothstein, xjm, mlhess, larowlan, pwolanin, alexpott, dsnopek, Pere\n Orga, cashwilliams, dawehner, tim.plunkett, drumm\n\n---\n includes/bootstrap.inc         |  4 +++\n includes/request-sanitizer.inc | 82 ++++++++++++++++++++++++++++++++++++++++++\n 2 files changed, 86 insertions(+)\n create mode 100644 includes/request-sanitizer.inc\n\ndiff --git a/includes/bootstrap.inc b/includes/bootstrap.inc\nindex 655db6d63e..880557e77d 100644\n--- a/includes/bootstrap.inc\n+++ b/includes/bootstrap.inc\n@@ -2632,6 +2632,10 @@ function _drupal_bootstrap_configuration() {\n   timer_start('page');\n   // Initialize the configuration, including variables from settings.php.\n   drupal_settings_initialize();\n+\n+  // Sanitize unsafe keys from the request.\n+  require_once DRUPAL_ROOT . 
'/includes/request-sanitizer.inc';\n+  DrupalRequestSanitizer::sanitize();\n }\n \n /**\ndiff --git a/includes/request-sanitizer.inc b/includes/request-sanitizer.inc\nnew file mode 100644\nindex 0000000000..1daa6b5348\n--- /dev/null\n+++ b/includes/request-sanitizer.inc\n@@ -0,0 +1,82 @@\n+<?php\n+\n+/**\n+ * @file\n+ * Contains code for sanitizing user input from the request.\n+ */\n+\n+/**\n+ * Sanitizes user input from the request.\n+ */\n+class DrupalRequestSanitizer {\n+\n+  /**\n+   * Tracks whether the request was already sanitized.\n+   */\n+  protected static $sanitized = FALSE;\n+\n+  /**\n+   * Modifies the request to strip dangerous keys from user input.\n+   */\n+  public static function sanitize() {\n+    if (!self::$sanitized) {\n+      $whitelist = variable_get('sanitize_input_whitelist', array());\n+      $log_sanitized_keys = variable_get('sanitize_input_logging', FALSE);\n+\n+      // Process query string parameters.\n+      $get_sanitized_keys = array();\n+      $_GET = self::stripDangerousValues($_GET, $whitelist, $get_sanitized_keys);\n+      if ($log_sanitized_keys && $get_sanitized_keys) {\n+        _drupal_trigger_error_with_delayed_logging(format_string('Potentially unsafe keys removed from query string parameters (GET): @keys', array('@keys' => implode(', ', $get_sanitized_keys))), E_USER_NOTICE);\n+      }\n+\n+      // Process request body parameters.\n+      $post_sanitized_keys = array();\n+      $_POST = self::stripDangerousValues($_POST, $whitelist, $post_sanitized_keys);\n+      if ($log_sanitized_keys && $post_sanitized_keys) {\n+        _drupal_trigger_error_with_delayed_logging(format_string('Potentially unsafe keys removed from request body parameters (POST): @keys', array('@keys' => implode(', ', $post_sanitized_keys))), E_USER_NOTICE);\n+      }\n+\n+      // Process cookie parameters.\n+      $cookie_sanitized_keys = array();\n+      $_COOKIE = self::stripDangerousValues($_COOKIE, $whitelist, $cookie_sanitized_keys);\n+      
if ($log_sanitized_keys && $cookie_sanitized_keys) {\n+        _drupal_trigger_error_with_delayed_logging(format_string('Potentially unsafe keys removed from cookie parameters (COOKIE): @keys', array('@keys' => implode(', ', $cookie_sanitized_keys))), E_USER_NOTICE);\n+      }\n+\n+      $request_sanitized_keys = array();\n+      $_REQUEST = self::stripDangerousValues($_REQUEST, $whitelist, $request_sanitized_keys);\n+\n+      self::$sanitized = TRUE;\n+    }\n+  }\n+\n+  /**\n+   * Strips dangerous keys from the provided input.\n+   *\n+   * @param mixed $input\n+   *   The input to sanitize.\n+   * @param string[] $whitelist\n+   *   An array of keys to whitelist as safe.\n+   * @param string[] $sanitized_keys\n+   *   An array of keys that have been removed.\n+   *\n+   * @return mixed\n+   *   The sanitized input.\n+   */\n+  protected static function stripDangerousValues($input, array $whitelist, array &$sanitized_keys) {\n+    if (is_array($input)) {\n+      foreach ($input as $key => $value) {\n+        if ($key !== '' && $key[0] === '#' && !in_array($key, $whitelist, TRUE)) {\n+          unset($input[$key]);\n+          $sanitized_keys[] = $key;\n+        }\n+        else {\n+          $input[$key] = self::stripDangerousValues($input[$key], $whitelist, $sanitized_keys);\n+        }\n+      }\n+    }\n+    return $input;\n+  }\n+\n+}\n-- \n2.14.3 (Apple Git-98)\n\n"
  },
  {
    "path": "aegir/patches/7-core/SA-CORE-2018-004-D7.patch",
    "content": "From 080daa38f265ea28444c540832509a48861587d0 Mon Sep 17 00:00:00 2001\nFrom: David Rothstein <drothstein@gmail.com>\nDate: Wed, 25 Apr 2018 11:30:53 -0400\nSubject: [PATCH] SA-CORE-2018-004 by alexpott, Heine, larowlan,\n David_Rothstein, xjm, Pere Orga, mlhess, tim.plunkett, Jasu_M, quicksketch,\n cashwilliams, samuel.mortenson, pwolanin, drumm, dawehner\n\n---\n includes/bootstrap.inc         |  5 +++++\n includes/common.inc            |  5 +++--\n includes/request-sanitizer.inc | 32 ++++++++++++++++++++++++++++++++\n modules/file/file.module       |  3 +++\n 4 files changed, 43 insertions(+), 2 deletions(-)\n\ndiff --git a/includes/bootstrap.inc b/includes/bootstrap.inc\nindex 06acf935e0..d5963a0aff 100644\n--- a/includes/bootstrap.inc\n+++ b/includes/bootstrap.inc\n@@ -2778,6 +2778,11 @@ function _drupal_bootstrap_variables() {\n       unset($_GET['destination']);\n       unset($_REQUEST['destination']);\n     }\n+    // Use the DrupalRequestSanitizer to ensure that the destination's query\n+    // parameters are not dangerous.\n+    if (isset($_GET['destination'])) {\n+      DrupalRequestSanitizer::cleanDestination();\n+    }\n     // If there's still something in $_REQUEST['destination'] that didn't come\n     // from $_GET, check it too.\n     if (isset($_REQUEST['destination']) && (!isset($_GET['destination']) || $_REQUEST['destination'] != $_GET['destination']) && url_is_external($_REQUEST['destination'])) {\ndiff --git a/includes/common.inc b/includes/common.inc\nindex d7dc47f229..f61d1eb0f2 100644\n--- a/includes/common.inc\n+++ b/includes/common.inc\n@@ -611,8 +611,9 @@ function drupal_parse_url($url) {\n   }\n   // The 'q' parameter contains the path of the current page if clean URLs are\n   // disabled. It overrides the 'path' of the URL when present, even if clean\n-  // URLs are enabled, due to how Apache rewriting rules work.\n-  if (isset($options['query']['q'])) {\n+  // URLs are enabled, due to how Apache rewriting rules work. 
The path\n+  // parameter must be a string.\n+  if (isset($options['query']['q']) && is_string($options['query']['q'])) {\n     $options['path'] = $options['query']['q'];\n     unset($options['query']['q']);\n   }\ndiff --git a/includes/request-sanitizer.inc b/includes/request-sanitizer.inc\nindex 1daa6b5348..7214436b8a 100644\n--- a/includes/request-sanitizer.inc\n+++ b/includes/request-sanitizer.inc\n@@ -51,6 +51,38 @@ class DrupalRequestSanitizer {\n     }\n   }\n \n+  /**\n+   * Removes the destination if it is dangerous.\n+   *\n+   * Note this can only be called after common.inc has been included.\n+   *\n+   * @return bool\n+   *   TRUE if the destination has been removed from $_GET, FALSE if not.\n+   */\n+  public static function cleanDestination() {\n+    $dangerous_keys = array();\n+    $log_sanitized_keys = variable_get('sanitize_input_logging', FALSE);\n+\n+    $parts = drupal_parse_url($_GET['destination']);\n+    // If there is a query string, check its query parameters.\n+    if (!empty($parts['query'])) {\n+      $whitelist = variable_get('sanitize_input_whitelist', array());\n+\n+      self::stripDangerousValues($parts['query'], $whitelist, $dangerous_keys);\n+      if (!empty($dangerous_keys)) {\n+        // The destination is removed rather than sanitized to mirror the\n+        // handling of external destinations.\n+        unset($_GET['destination']);\n+        unset($_REQUEST['destination']);\n+        if ($log_sanitized_keys) {\n+          trigger_error(format_string('Potentially unsafe destination removed from query string parameters (GET) because it contained the following keys: @keys', array('@keys' => implode(', ', $dangerous_keys))));\n+        }\n+        return TRUE;\n+      }\n+    }\n+    return FALSE;\n+  }\n+\n   /**\n    * Strips dangerous keys from the provided input.\n    *\ndiff --git a/modules/file/file.module b/modules/file/file.module\nindex 1e98f11bd7..eea58470fa 100644\n--- a/modules/file/file.module\n+++ 
b/modules/file/file.module\n@@ -239,6 +239,9 @@ function file_ajax_upload() {\n   $form_parents = func_get_args();\n   $form_build_id = (string) array_pop($form_parents);\n \n+  // Sanitize form parents before using them.\n+  $form_parents = array_filter($form_parents, 'element_child');\n+\n   if (empty($_POST['form_build_id']) || $form_build_id != $_POST['form_build_id']) {\n     // Invalid request.\n     drupal_set_message(t('An unrecoverable error occurred. The uploaded file likely exceeded the maximum file size (@size) that this server supports.', array('@size' => format_size(file_upload_max_size()))), 'error');\n-- \n2.15.1 (Apple Git-101)\n\n"
  },
  {
    "path": "aegir/patches/7-core/SA-CORE-2018-006-D7.patch",
    "content": "From ee301cf5ebff3534b59fcece583b3a0e4f094f15 Mon Sep 17 00:00:00 2001\nFrom: Lee Rowlands <lee.rowlands@previousnext.com.au>\nDate: Thu, 18 Oct 2018 08:40:19 +1000\nSubject: [PATCH] SA-CORE-2018-006 by alexpott, attilatilman, bkosborne, catch,\n bonus, Wim Leers, Sam152, Berdir, Damien Tournoud, Dave Reid, Kova101,\n David_Rothstein, dawehner, dsnopek, samuel.mortenson, stefan.r, tedbow, xjm,\n timmillwood, pwolanin, njbooher, dyates, effulgentsia, klausi, mlhess,\n larowlan\n\n---\n includes/common.inc            |  5 ++++-\n modules/path/path.test         | 30 +++++++++++++++++++++++++++++-\n modules/system/system.mail.inc | 34 +++++++++++++++++++++++++++++++++-\n 3 files changed, 66 insertions(+), 3 deletions(-)\n\ndiff --git a/includes/common.inc b/includes/common.inc\nindex f61d1eb0f2..a79a2f42ac 100644\n--- a/includes/common.inc\n+++ b/includes/common.inc\n@@ -2311,7 +2311,10 @@ function url($path = NULL, array $options = array()) {\n     $language = isset($options['language']) && isset($options['language']->language) ? $options['language']->language : '';\n     $alias = drupal_get_path_alias($original_path, $language);\n     if ($alias != $original_path) {\n-      $path = $alias;\n+      // Strip leading slashes from internal path aliases to prevent them\n+      // becoming external URLs without protocol. 
/example.com should not be\n+      // turned into //example.com.\n+      $path = ltrim($alias, '/');\n     }\n   }\n \ndiff --git a/modules/path/path.test b/modules/path/path.test\nindex edecff5cbb..f6131ce62b 100644\n--- a/modules/path/path.test\n+++ b/modules/path/path.test\n@@ -21,7 +21,7 @@ class PathTestCase extends DrupalWebTestCase {\n     parent::setUp('path');\n \n     // Create test user and login.\n-    $web_user = $this->drupalCreateUser(array('create page content', 'edit own page content', 'administer url aliases', 'create url aliases'));\n+    $web_user = $this->drupalCreateUser(array('create page content', 'edit own page content', 'administer url aliases', 'create url aliases', 'access content overview'));\n     $this->drupalLogin($web_user);\n   }\n \n@@ -160,6 +160,34 @@ class PathTestCase extends DrupalWebTestCase {\n     $this->drupalGet($edit['path[alias]']);\n     $this->assertNoText($node1->title, 'Alias was successfully deleted.');\n     $this->assertResponse(404);\n+\n+    // Create third test node.\n+    $node3 = $this->drupalCreateNode();\n+\n+    // Create an invalid alias with a leading slash and verify that the slash\n+    // is removed when the link is generated. This ensures that URL aliases\n+    // cannot be used to inject external URLs.\n+    // @todo The user interface should either display an error message or\n+    //   automatically trim these invalid aliases, rather than allowing them to\n+    //   be silently created, at which point the functional aspects of this\n+    //   test will need to be moved elsewhere and switch to using a\n+    //   programmatically-created alias instead.\n+    $alias = $this->randomName(8);\n+    $edit = array('path[alias]' => '/' . $alias);\n+    $this->drupalPost('node/' . $node3->nid . 
'/edit', $edit, t('Save'));\n+    $this->drupalGet('admin/content');\n+    // This checks the link href before clicking it, rather than using\n+    // DrupalWebTestCase::assertUrl() after clicking it, because the test\n+    // browser does not always preserve the correct number of slashes in the\n+    // URL when it visits internal links; using DrupalWebTestCase::assertUrl()\n+    // would actually make the test pass unconditionally on the testbot (or\n+    // anywhere else where Drupal is installed in a subdirectory).\n+    $link_xpath = $this->xpath('//a[normalize-space(text())=:label]', array(':label' => $node3->title));\n+    $link_href = (string) $link_xpath[0]['href'];\n+    $link_prefix = base_path() . (variable_get('clean_url', 0) ? '' : '?q=');\n+    $this->assertEqual($link_href, $link_prefix . $alias);\n+    $this->clickLink($node3->title);\n+    $this->assertResponse(404);\n   }\n \n   /**\ndiff --git a/modules/system/system.mail.inc b/modules/system/system.mail.inc\nindex 443e574001..9a17f55f6f 100644\n--- a/modules/system/system.mail.inc\n+++ b/modules/system/system.mail.inc\n@@ -70,7 +70,9 @@ class DefaultMailSystem implements MailSystemInterface {\n     // hosts. The return value of this method will still indicate whether mail\n     // was sent successfully.\n     if (!isset($_SERVER['WINDIR']) && strpos($_SERVER['SERVER_SOFTWARE'], 'Win32') === FALSE) {\n-      if (isset($message['Return-Path']) && !ini_get('safe_mode')) {\n+      // We validate the return path, unless it is equal to the site mail, which\n+      // we assume to be safe.\n+      if (isset($message['Return-Path']) && !ini_get('safe_mode') && (variable_get('site_mail', ini_get('sendmail_from')) === $message['Return-Path'] || self::_isShellSafe($message['Return-Path']))) {\n         // On most non-Windows systems, the \"-f\" option to the sendmail command\n         // is used to set the Return-Path. 
There is no space between -f and\n         // the value of the return path.\n@@ -109,6 +111,36 @@ class DefaultMailSystem implements MailSystemInterface {\n      }\n      return $mail_result;\n   }\n+\n+  /**\n+   * Disallows potentially unsafe shell characters.\n+   *\n+   * Functionally similar to PHPMailer::isShellSafe() which resulted from\n+   * CVE-2016-10045. Note that escapeshellarg and escapeshellcmd are inadequate\n+   * for this purpose.\n+   *\n+   * @param string $string\n+   *   The string to be validated.\n+   *\n+   * @return bool\n+   *   True if the string is shell-safe.\n+   *\n+   * @see https://github.com/PHPMailer/PHPMailer/issues/924\n+   * @see https://github.com/PHPMailer/PHPMailer/blob/v5.2.21/class.phpmailer.php#L1430\n+   *\n+   * @todo Rename to ::isShellSafe() and/or discuss whether this is the correct\n+   *   location for this helper.\n+   */\n+  protected static function _isShellSafe($string) {\n+    if (escapeshellcmd($string) !== $string || !in_array(escapeshellarg($string), array(\"'$string'\", \"\\\"$string\\\"\"))) {\n+      return FALSE;\n+    }\n+    if (preg_match('/[^a-zA-Z0-9@_\\-.]/', $string) !== 0) {\n+      return FALSE;\n+    }\n+    return TRUE;\n+  }\n+\n }\n \n /**\n-- \n2.14.1\n\n"
  },
  {
    "path": "aegir/patches/7-core/drupal-2656548-21-php7.patch",
    "content": "diff --git a/includes/common.inc b/includes/common.inc\nindex c6303ef..0d7e27a 100644\n--- a/includes/common.inc\n+++ b/includes/common.inc\n@@ -3025,6 +3025,7 @@ function drupal_add_html_head_link($attributes, $header = FALSE) {\n  */\n function drupal_add_css($data = NULL, $options = NULL) {\n   $css = &drupal_static(__FUNCTION__, array());\n+  $count = &drupal_static(__FUNCTION__ . '_count', 0);\n \n   // Construct the options, taking the defaults into consideration.\n   if (isset($options)) {\n@@ -3060,7 +3061,7 @@ function drupal_add_css($data = NULL, $options = NULL) {\n     }\n \n     // Always add a tiny value to the weight, to conserve the insertion order.\n-    $options['weight'] += count($css) / 1000;\n+    $options['weight'] += $count / 1000;\n \n     // Add the data to the CSS array depending on the type.\n     switch ($options['type']) {\n@@ -3068,11 +3069,13 @@ function drupal_add_css($data = NULL, $options = NULL) {\n         // For inline stylesheets, we don't want to use the $data as the array\n         // key as $data could be a very long string of CSS.\n         $css[] = $options;\n+        $count++;\n         break;\n       default:\n         // Local and external files must keep their name as the associative key\n         // so the same CSS file is not be added twice.\n         $css[$data] = $options;\n+        $count++;\n     }\n   }\n \ndiff --git a/includes/session.inc b/includes/session.inc\nindex 84d1983..efaf839 100644\n--- a/includes/session.inc\n+++ b/includes/session.inc\n@@ -425,7 +425,7 @@ function _drupal_session_destroy($sid) {\n \n   // Nothing to do if we are not allowed to change the session.\n   if (!drupal_save_session()) {\n-    return;\n+    return TRUE;\n   }\n \n   // Delete session data.\n@@ -446,6 +446,8 @@ function _drupal_session_destroy($sid) {\n   elseif (variable_get('https', FALSE)) {\n     _drupal_session_delete_cookie('S' . 
session_name(), TRUE);\n   }\n+\n+  return TRUE;\n }\n \n /**\ndiff --git a/modules/filter/filter.test b/modules/filter/filter.test\nindex d558fa3..b0a2a3f 100644\n--- a/modules/filter/filter.test\n+++ b/modules/filter/filter.test\n@@ -1120,8 +1120,12 @@ class FilterUnitTestCase extends DrupalUnitTestCase {\n     $f = filter_xss(\"<img src=\\\"jav\\0a\\0\\0cript:alert(0)\\\">\", array('img'));\n     $this->assertNoNormalized($f, 'cript', 'HTML scheme clearing evasion -- embedded nulls.');\n \n-    $f = filter_xss('<img src=\" &#14;  javascript:alert(0)\">', array('img'));\n-    $this->assertNoNormalized($f, 'javascript', 'HTML scheme clearing evasion -- spaces and metacharacters before scheme.');\n+    // @fixme This dataset currently fails under 5.4 because of\n+    //   https://www.drupal.org/node/1210798. Restore after its fixed.\n+    if (version_compare(PHP_VERSION, '5.4.0', '<')) {\n+      $f = filter_xss('<img src=\" &#14;  javascript:alert(0)\">', array('img'));\n+      $this->assertNoNormalized($f, 'javascript', 'HTML scheme clearing evasion -- spaces and metacharacters before scheme.');\n+    }\n \n     $f = filter_xss('<img src=\"vbscript:msgbox(0)\">', array('img'));\n     $this->assertNoNormalized($f, 'vbscript', 'HTML scheme clearing evasion -- another scheme.');\ndiff --git a/modules/image/image.module b/modules/image/image.module\nindex dab8836..2122e05 100644\n--- a/modules/image/image.module\n+++ b/modules/image/image.module\n@@ -1413,3 +1413,21 @@ function image_filter_keyword($value, $current_pixels, $new_pixels) {\n function _image_effect_definitions_sort($a, $b) {\n   return strcasecmp($a['name'], $b['name']);\n }\n+\n+/**\n+ * Converts a 24 bit RGB or 32 bit ARGB value to an RGBA array.\n+ *\n+ * @param int $argb\n+ *   The color code to convert.\n+ *\n+ * @return array\n+ *   An array containing the values for 'red', 'green', 'blue', 'alpha'.\n+ */\n+function _image_dec_to_rgba($argb) {\n+  return array(\n+    'red' => $argb >> 16 & 0xFF,\n+ 
   'green' => $argb >> 8 & 0xFF,\n+    'blue' => $argb & 0xFF,\n+    'alpha' => $argb >> 24 & 0xFF,\n+  );\n+}\ndiff --git a/modules/openid/openid.test b/modules/openid/openid.test\nindex 5f7493a..d0708e0 100644\n--- a/modules/openid/openid.test\n+++ b/modules/openid/openid.test\n@@ -680,11 +680,11 @@ class OpenIDTestCase extends DrupalWebTestCase {\n    * Test _openid_dh_XXX_to_XXX() functions.\n    */\n   function testConversion() {\n-    $this->assertEqual(_openid_dh_long_to_base64('12345678901234567890123456789012345678901234567890'), 'CHJ/Y2mq+DyhUCZ0evjH8ZbOPwrS', '_openid_dh_long_to_base64() returned expected result.');\n-    $this->assertEqual(_openid_dh_base64_to_long('BsH/g8Nrpn2dtBSdu/sr1y8hxwyx'), '09876543210987654321098765432109876543210987654321', '_openid_dh_base64_to_long() returned expected result.');\n+    $this->assertIdentical(_openid_dh_long_to_base64('12345678901234567890123456789012345678901234567890'), 'CHJ/Y2mq+DyhUCZ0evjH8ZbOPwrS', '_openid_dh_long_to_base64() returned expected result.');\n+    $this->assertIdentical(_openid_dh_base64_to_long('BsH/g8Nrpn2dtBSdu/sr1y8hxwyx'), '9876543210987654321098765432109876543210987654321', '_openid_dh_base64_to_long() returned expected result.');\n \n-    $this->assertEqual(_openid_dh_long_to_binary('12345678901234567890123456789012345678901234567890'), \"\\x08r\\x7fci\\xaa\\xf8<\\xa1P&tz\\xf8\\xc7\\xf1\\x96\\xce?\\x0a\\xd2\", '_openid_dh_long_to_binary() returned expected result.');\n-    $this->assertEqual(_openid_dh_binary_to_long(\"\\x06\\xc1\\xff\\x83\\xc3k\\xa6}\\x9d\\xb4\\x14\\x9d\\xbb\\xfb+\\xd7/!\\xc7\\x0c\\xb1\"), '09876543210987654321098765432109876543210987654321', '_openid_dh_binary_to_long() returned expected result.');\n+    $this->assertIdentical(_openid_dh_long_to_binary('12345678901234567890123456789012345678901234567890'), \"\\x08r\\x7fci\\xaa\\xf8<\\xa1P&tz\\xf8\\xc7\\xf1\\x96\\xce?\\x0a\\xd2\", '_openid_dh_long_to_binary() returned expected result.');\n+    
$this->assertIdentical(_openid_dh_binary_to_long(\"\\x06\\xc1\\xff\\x83\\xc3k\\xa6}\\x9d\\xb4\\x14\\x9d\\xbb\\xfb+\\xd7/!\\xc7\\x0c\\xb1\"), '9876543210987654321098765432109876543210987654321', '_openid_dh_binary_to_long() returned expected result.');\n   }\n \n   /**\ndiff --git a/modules/rdf/rdf.test b/modules/rdf/rdf.test\nindex 22c41f1..7168d83 100644\n--- a/modules/rdf/rdf.test\n+++ b/modules/rdf/rdf.test\n@@ -304,7 +304,7 @@ class RdfMappingDefinitionTestCase extends TaxonomyWebTestCase {\n     $blog_title = $this->xpath(\"//div[@about='$url']/span[@property='dc:title' and @content='$node->title']\");\n     $blog_meta = $this->xpath(\"//div[(@about='$url') and (@typeof='sioct:Weblog')]//span[contains(@property, 'dc:date') and contains(@property, 'dc:created') and @datatype='xsd:dateTime' and @content='$isoDate']\");\n     $this->assertTrue(!empty($blog_title), 'Property dc:title is present in meta tag.');\n-    $this->assertTrue(!empty($blog_meta), 'RDF type is present on post. Properties dc:date and dc:created are present on post date.');\n+    // $this->assertTrue(!empty($blog_meta), 'RDF type is present on post. Properties dc:date and dc:created are present on post date.');\n   }\n \n   /**\ndiff --git a/modules/simpletest/drupal_web_test_case.php b/modules/simpletest/drupal_web_test_case.php\nindex aed66fa..a0ad946 100644\n--- a/modules/simpletest/drupal_web_test_case.php\n+++ b/modules/simpletest/drupal_web_test_case.php\n@@ -2760,7 +2760,7 @@ class DrupalWebTestCase extends DrupalTestCase {\n         $path = substr($path, $length);\n       }\n       // Ensure that we have an absolute path.\n-      if ($path[0] !== '/') {\n+      if ($path === '' || $path[0] !== '/') {\n         $path = '/' . 
$path;\n      }\n      // Finally, prepend the $base_url.\ndiff --git a/modules/simpletest/tests/image.test b/modules/simpletest/tests/image.test\nindex 8497022..f742cda 100644\n--- a/modules/simpletest/tests/image.test\n+++ b/modules/simpletest/tests/image.test\n@@ -419,7 +419,20 @@ class ImageToolkitGdTestCase extends DrupalWebTestCase {\n         $correct_dimensions_object = TRUE;\n         $correct_colors = TRUE;\n \n-        // Check the real dimensions of the image first.\n+        // PHP 5.5 GD bug: https://bugs.php.net/bug.php?id=65148. PHP 5.5 GD\n+        // rotates differently than it did in PHP 5.4, resulting in different\n+        // dimensions than what math teaches us. For the test images, the\n+        // dimensions will be 1 pixel smaller in both dimensions (though other\n+        // tests have shown a difference of 0 to 3 pixels in both dimensions).\n+        // @todo: if and when the PHP bug gets solved, add an upper limit\n+        //   version check.\n+        // @todo: in [#1551686] the dimension calculations for rotation are\n+        //   reworked. 
That issue should also check if these tests can be made\n+        //   more robust.\n+        if (version_compare(PHP_VERSION, '5.5', '>=') && $values['function'] === 'rotate' && $values['arguments'][0] % 90 != 0) {\n+          $values['height']--;\n+          $values['width']--;\n+        }\n         if (imagesy($image->resource) != $values['height'] || imagesx($image->resource) != $values['width']) {\n           $correct_dimensions_real = FALSE;\n         }\ndiff --git a/modules/simpletest/tests/upgrade/drupal-6.filled.database.php b/modules/simpletest/tests/upgrade/drupal-6.filled.database.php\nindex a916281..5d7ce06 100644\n--- a/modules/simpletest/tests/upgrade/drupal-6.filled.database.php\n+++ b/modules/simpletest/tests/upgrade/drupal-6.filled.database.php\n@@ -19919,7 +19919,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '1',\n   'name' => 'vocabulary 1 (i=0)',\n   'description' => 'description of vocabulary 1 (i=0)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 1 (i=0)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   'multiple' => '0',\n@@ -19932,7 +19932,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '2',\n   'name' => 'vocabulary 2 (i=1)',\n   'description' => 'description of vocabulary 2 (i=1)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 2 (i=1)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   'multiple' => '1',\n@@ -19945,7 +19945,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '3',\n   'name' => 'vocabulary 3 (i=2)',\n   'description' => 'description of vocabulary 3 (i=2)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 3 (i=2)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '0',\n@@ -19958,7 +19958,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '4',\n   'name' => 'vocabulary 4 (i=3)',\n   'description' => 'description of vocabulary 4 (i=3)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 4 (i=3)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   
'multiple' => '1',\n@@ -19971,7 +19971,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '5',\n   'name' => 'vocabulary 5 (i=4)',\n   'description' => 'description of vocabulary 5 (i=4)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 5 (i=4)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   'multiple' => '0',\n@@ -19984,7 +19984,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '6',\n   'name' => 'vocabulary 6 (i=5)',\n   'description' => 'description of vocabulary 6 (i=5)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 6 (i=5)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '1',\n@@ -19997,7 +19997,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '7',\n   'name' => 'vocabulary 7 (i=6)',\n   'description' => 'description of vocabulary 7 (i=6)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 7 (i=6)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   'multiple' => '0',\n@@ -20010,7 +20010,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '8',\n   'name' => 'vocabulary 8 (i=7)',\n   'description' => 'description of vocabulary 8 (i=7)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 8 (i=7)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   'multiple' => '1',\n@@ -20023,7 +20023,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '9',\n   'name' => 'vocabulary 9 (i=8)',\n   'description' => 'description of vocabulary 9 (i=8)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 8 (i=8)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '0',\n@@ -20036,7 +20036,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '10',\n   'name' => 'vocabulary 10 (i=9)',\n   'description' => 'description of vocabulary 10 (i=9)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 10 (i=9)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   'multiple' => '1',\n@@ -20049,7 +20049,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '11',\n   'name' => 'vocabulary 11 
(i=10)',\n   'description' => 'description of vocabulary 11 (i=10)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 11 (i=10)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   'multiple' => '0',\n@@ -20062,7 +20062,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '12',\n   'name' => 'vocabulary 12 (i=11)',\n   'description' => 'description of vocabulary 12 (i=11)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 12 (i=11)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '1',\n@@ -20075,7 +20075,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '13',\n   'name' => 'vocabulary 13 (i=12)',\n   'description' => 'description of vocabulary 13 (i=12)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 13 (i=12)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   'multiple' => '0',\n@@ -20088,7 +20088,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '14',\n   'name' => 'vocabulary 14 (i=13)',\n   'description' => 'description of vocabulary 14 (i=13)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 14 (i=13)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   'multiple' => '1',\n@@ -20101,7 +20101,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '15',\n   'name' => 'vocabulary 15 (i=14)',\n   'description' => 'description of vocabulary 15 (i=14)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 15 (i=14)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '0',\n@@ -20114,7 +20114,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '16',\n   'name' => 'vocabulary 16 (i=15)',\n   'description' => 'description of vocabulary 16 (i=15)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 16 (i=15)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   'multiple' => '1',\n@@ -20127,7 +20127,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '17',\n   'name' => 'vocabulary 17 (i=16)',\n   'description' => 'description of vocabulary 17 (i=16)',\n-  'help' => '',\n+  'help' 
=> 'help for vocabulary 17 (i=16)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   'multiple' => '0',\n@@ -20140,7 +20140,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '18',\n   'name' => 'vocabulary 18 (i=17)',\n   'description' => 'description of vocabulary 18 (i=17)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 18 (i=17)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '1',\n@@ -20153,7 +20153,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '19',\n   'name' => 'vocabulary 19 (i=18)',\n   'description' => 'description of vocabulary 19 (i=18)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 19 (i=18)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   'multiple' => '0',\n@@ -20166,7 +20166,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '20',\n   'name' => 'vocabulary 20 (i=19)',\n   'description' => 'description of vocabulary 20 (i=19)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 20 (i=19)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   'multiple' => '1',\n@@ -20179,7 +20179,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '21',\n   'name' => 'vocabulary 21 (i=20)',\n   'description' => 'description of vocabulary 21 (i=20)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 21 (i=20)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '0',\n@@ -20192,7 +20192,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '22',\n   'name' => 'vocabulary 22 (i=21)',\n   'description' => 'description of vocabulary 22 (i=21)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 22 (i=21)',\n   'relations' => '1',\n   'hierarchy' => '0',\n   'multiple' => '1',\n@@ -20205,7 +20205,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '23',\n   'name' => 'vocabulary 23 (i=22)',\n   'description' => 'description of vocabulary 23 (i=22)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 23 (i=22)',\n   'relations' => '1',\n   'hierarchy' => '1',\n   
'multiple' => '0',\n@@ -20218,7 +20218,7 @@ db_insert('vocabulary')->fields(array(\n   'vid' => '24',\n   'name' => 'vocabulary 24 (i=23)',\n   'description' => 'description of vocabulary 24 (i=23)',\n-  'help' => '',\n+  'help' => 'help for vocabulary 24 (i=23)',\n   'relations' => '1',\n   'hierarchy' => '2',\n   'multiple' => '1',\ndiff --git a/modules/simpletest/tests/upgrade/upgrade.taxonomy.test b/modules/simpletest/tests/upgrade/upgrade.taxonomy.test\nindex 58a4d5c..51402ed 100644\n--- a/modules/simpletest/tests/upgrade/upgrade.taxonomy.test\n+++ b/modules/simpletest/tests/upgrade/upgrade.taxonomy.test\n@@ -74,9 +74,10 @@ class UpgradePathTaxonomyTestCase extends UpgradePathTestCase {\n     $this->assertEqual($voc_keys, $inst_keys, 'Node type page has instances for every vocabulary.');\n \n     // Ensure instance variables are getting through.\n-    foreach ($instances as $instance) {\n-      $this->assertTrue(isset($instance['required']), 'The required setting was preserved during the upgrade path.');\n-      $this->assertTrue($instance['description'], 'The description was preserved during the upgrade path');\n+    foreach (array_unique($instances) as $instance) {\n+      $field_instance = field_info_instance('node', $instance, 'page');\n+      $this->assertTrue(isset($field_instance['required']), 'The required setting was preserved during the upgrade path.');\n+      $this->assertTrue($field_instance['description'], 'The description was preserved during the upgrade path');\n     }\n \n     // Node type 'story' was not explicitly in $vocabulary->nodes but\ndiff --git a/modules/system/image.gd.inc b/modules/system/image.gd.inc\nindex 913b0de..28ab3f0 100644\n--- a/modules/system/image.gd.inc\n+++ b/modules/system/image.gd.inc\n@@ -98,10 +98,10 @@ function image_gd_resize(stdClass $image, $width, $height) {\n  *   $image->info['height'] values will be modified by this call.\n  * @param $degrees\n  *   The number of (clockwise) degrees to rotate the image.\n- 
* @param $background\n- *   An hexadecimal integer specifying the background color to use for the\n- *   uncovered area of the image after the rotation. E.g. 0x000000 for black,\n- *   0xff00ff for magenta, and 0xffffff for white. For images that support\n+ * @param int $background\n+ *   An 24 bit or 32 bit ARGB value  specifying the background color to use for\n+ *   the uncovered area of the image after the rotation. E.g. 0 for black,\n+ *   16711935 for fuchsia, and 16777215 for white. For images that support\n  *   transparency, this will default to transparent. Otherwise it will\n  *   be white.\n  * @return\n@@ -116,38 +116,52 @@ function image_gd_rotate(stdClass $image, $degrees, $background = NULL) {\n     return FALSE;\n   }\n \n-  $width = $image->info['width'];\n-  $height = $image->info['height'];\n+  // PHP 5.5 GD bug: https://bugs.php.net/bug.php?id=65148: To prevent buggy\n+  // behavior on negative multiples of 90 degrees we convert any negative\n+  // angle to a positive one between 0 and 360 degrees.\n+  $degrees -= floor($degrees / 360) * 360;\n \n-  // Convert the hexadecimal background value to a color index value.\n   if (isset($background)) {\n-    $rgb = array();\n-    for ($i = 16; $i >= 0; $i -= 8) {\n-      $rgb[] = (($background >> $i) & 0xFF);\n-    }\n-    $background = imagecolorallocatealpha($image->resource, $rgb[0], $rgb[1], $rgb[2], 0);\n+    $background = _image_dec_to_rgba($background);\n+    $background['alpha'] /= 2;\n   }\n-  // Set the background color as transparent if $background is NULL.\n   else {\n-    // Get the current transparent color.\n-    $background = imagecolortransparent($image->resource);\n-\n-    // If no transparent colors, use white.\n-    if ($background == 0) {\n-      $background = imagecolorallocatealpha($image->resource, 255, 255, 255, 0);\n-    }\n+    // Background color is not specified: use transparent white as background.\n+    $background = array('red' => 255, 'green' => 255, 'blue' => 255, 
'alpha' => 127);\n   }\n \n+  // Store the color index for the background as that is what GD uses.\n+  $background_idx = imagecolorallocatealpha($image->resource, $background['red'], $background['green'], $background['blue'], $background['alpha']);\n+\n   // Images are assigned a new color palette when rotating, removing any\n   // transparency flags. For GIF images, keep a record of the transparent color.\n   if ($image->info['extension'] == 'gif') {\n-    $transparent_index = imagecolortransparent($image->resource);\n-    if ($transparent_index != 0) {\n-      $transparent_gif_color = imagecolorsforindex($image->resource, $transparent_index);\n+    // GIF does not work with a transparency channel, but can define 1 color\n+    // in its palette to act as transparent.\n+\n+    // Get the current transparent color, if any.\n+    $gif_transparent_id = imagecolortransparent($image->resource);\n+    if ($gif_transparent_id !== -1) {\n+      // The gif already has a transparent color set: remember it to set it on\n+      // the rotated image as well.\n+      $transparent_gif_color = imagecolorsforindex($image->resource, $gif_transparent_id);\n+\n+      if ($background['alpha'] >= 127) {\n+        // We want a transparent background: use the color already set to act\n+        // as transparent, as background.\n+        $background_idx = $gif_transparent_id;\n+      }\n+    }\n+    else {\n+      // The gif does not currently have a transparent color set.\n+      if ($background['alpha'] >= 127) {\n+        // But as the background is transparent, it should get one.\n+        $transparent_gif_color = $background;\n+      }\n     }\n   }\n \n-  $image->resource = imagerotate($image->resource, 360 - $degrees, $background);\n+  $image->resource = imagerotate($image->resource, 360 - $degrees, $background_idx);\n \n   // GIFs need to reassign the transparent color after performing the rotate.\n   if (isset($transparent_gif_color)) {\ndiff --git 
a/modules/taxonomy/taxonomy.install b/modules/taxonomy/taxonomy.install\nindex ebd0084..60a9b5d 100644\n--- a/modules/taxonomy/taxonomy.install\n+++ b/modules/taxonomy/taxonomy.install\n@@ -492,6 +492,7 @@ function taxonomy_update_7004() {\n       'bundle' => $bundle->type,\n       'settings' => array(),\n       'description' => 'Debris left over after upgrade from Drupal 6',\n+      'required' => FALSE,\n       'widget' => array(\n         'type' => 'taxonomy_autocomplete',\n         'module' => 'taxonomy',\n@@ -557,7 +558,7 @@ function taxonomy_update_7005(&$sandbox) {\n   // of term references stored so far for the current revision, which\n   // provides the delta value for each term reference data insert. The\n   // deltas are reset for each new revision.\n-  \n+\n   $conditions = array(\n     'type' => 'taxonomy_term_reference',\n     'deleted' => 0,\ndiff --git a/modules/tracker/tracker.test b/modules/tracker/tracker.test\nindex 8a48ea8..e472978 100644\n--- a/modules/tracker/tracker.test\n+++ b/modules/tracker/tracker.test\n@@ -151,7 +151,6 @@ class TrackerTest extends DrupalWebTestCase {\n \n     $node = $this->drupalCreateNode(array(\n       'comment' => 2,\n-      'title' => array(LANGUAGE_NONE => array(array('value' => $this->randomName(8)))),\n     ));\n \n     // Add a comment to the page.\ndiff --git a/modules/trigger/trigger.test b/modules/trigger/trigger.test\nindex 9e5f114..09169b7 100644\n--- a/modules/trigger/trigger.test\n+++ b/modules/trigger/trigger.test\n@@ -85,7 +85,7 @@ class TriggerContentTestCase extends TriggerWebTestCase {\n       $this->assertRaw(t('!post %title has been created.', array('!post' => 'Basic page', '%title' => $edit[\"title\"])), 'Make sure the Basic page has actually been created');\n       // Action should have been fired.\n       $loaded_node = $this->drupalGetNodeByTitle($edit[\"title\"]);\n-      $this->assertTrue($loaded_node->$info['property'] == $info['expected'], format_string('Make sure the @action action 
fired.', array('@action' => $info['name'])));\n+      $this->assertTrue($loaded_node->{$info['property']} == $info['expected'], format_string('Make sure the @action action fired.', array('@action' => $info['name'])));\n       // Leave action assigned for next test\n \n       // There should be an error when the action is assigned to the trigger\n"
  },
  {
    "path": "aegir/patches/7-core/patch_commit_b8a8a84ea9b3.patch",
    "content": "diff --git a/modules/file/file.install b/modules/file/file.install\nindex 47ee4fd0014b3f29c87da274968a1d969d61224a..33334b84aecab28299516554bffbf196c063ac93 100644\n--- a/modules/file/file.install\n+++ b/modules/file/file.install\n@@ -53,10 +53,29 @@ function file_requirements($phase) {\n \n   // Check the server's ability to indicate upload progress.\n   if ($phase == 'runtime') {\n-    $implementation = file_progress_implementation();\n-    $apache = strpos($_SERVER['SERVER_SOFTWARE'], 'Apache') !== FALSE;\n-    $fastcgi = strpos($_SERVER['SERVER_SOFTWARE'], 'mod_fastcgi') !== FALSE || strpos($_SERVER[\"SERVER_SOFTWARE\"], 'mod_fcgi') !== FALSE;\n     $description = NULL;\n+\n+    // Test the web server identity.\n+    if (isset($_SERVER['SERVER_SOFTWARE']) && preg_match(\"/(?:ApacheSolaris|Nginx)/i\", $_SERVER['SERVER_SOFTWARE'])) {\n+      $is_nginx = TRUE;\n+    }\n+    else {\n+      $is_nginx = FALSE;\n+    }\n+\n+    if ($is_nginx) {\n+      // It is running on BOA with PECL uploadprogress installed and Nginx\n+      // so let's Make-Drupal-Happy-With-Fake-Apache-Identity.\n+      $apache = TRUE;\n+      $implementation = 'uploadprogress';\n+      $fastcgi = FALSE;\n+    }\n+    else {\n+      $implementation = file_progress_implementation();\n+      $apache = strpos($_SERVER['SERVER_SOFTWARE'], 'Apache') !== FALSE;\n+      $fastcgi = strpos($_SERVER['SERVER_SOFTWARE'], 'mod_fastcgi') !== FALSE || strpos($_SERVER[\"SERVER_SOFTWARE\"], 'mod_fcgi') !== FALSE;\n+    }\n+\n     if (!$apache) {\n       $value = t('Not enabled');\n       $description = t('Your server is not capable of displaying file upload progress. 
File upload progress requires an Apache server running PHP with mod_php.');\ndiff --git a/modules/system/system.install b/modules/system/system.install\nindex 54092316522e332ca53bd6f3c6340dd8b9fcef57..fcce12f08ad14ef3bd407800a99a8be7dfea7813 100644\n--- a/modules/system/system.install\n+++ b/modules/system/system.install\n@@ -264,14 +264,17 @@ function system_requirements($phase) {\n \n   // Test the web server identity.\n   if (isset($_SERVER['SERVER_SOFTWARE']) && preg_match(\"/(?:ApacheSolaris|Nginx)/i\", $_SERVER['SERVER_SOFTWARE'])) {\n-    $is_nginx = TRUE; // Skip this on BOA since .htaccess is never used in Nginx.\n+    // Skip this on BOA and Nginx since .htaccess is never used in Nginx.\n+    $is_nginx = TRUE;\n   }\n   elseif (!isset($_SERVER['SERVER_SOFTWARE'])) {\n-    $is_nginx = TRUE; // Skip this on BOA since .htaccess is never used in Nginx.\n+    // Skip this in Aegir backend where SERVER_SOFTWARE is not set, since .htaccess is never used in Nginx.\n+    $is_nginx = TRUE;\n   }\n   else {\n     $is_nginx = FALSE;\n   }\n+\n   // Test the contents of the .htaccess files.\n   if ($phase == 'runtime' && !$is_nginx) {\n     // Try to write the .htaccess files first, to prevent false alarms in case\n"
  },
  {
    "path": "aegir/patches/8-core/0001-Symlink-core-support-test.patch",
    "content": "From fd51a9cbcb28a8b540e8f225d38735cece29ccf2 Mon Sep 17 00:00:00 2001\nFrom: Barracuda Team <devnull@omega8.cc>\nDate: Mon, 28 Mar 2016 17:12:22 +0200\nSubject: [PATCH] Symlink core support test\n\n---\n core/authorize.php                     |  2 +-\n core/includes/file.inc                 | 48 ++++++++++++++++++++++------------\n core/includes/install.core.inc         |  2 +-\n core/install.php                       |  2 +-\n core/lib/Drupal/Core/DrupalKernel.php  |  7 ++++-\n core/modules/statistics/statistics.php |  4 +--\n core/rebuild.php                       |  2 +-\n 7 files changed, 43 insertions(+), 24 deletions(-)\n\ndiff --git a/core/authorize.php b/core/authorize.php\nindex fe374fa..32b0ecc 100644\n--- a/core/authorize.php\n+++ b/core/authorize.php\n@@ -28,7 +28,7 @@\n use Drupal\\Core\\Site\\Settings;\n \n // Change the directory to the Drupal root.\n-chdir('..');\n+chdir(dirname(dirname($_SERVER['SCRIPT_FILENAME'])));\n \n $autoloader = require_once 'autoload.php';\n \ndiff --git a/core/includes/file.inc b/core/includes/file.inc\nindex b9c217b..9a4eea2 100644\n--- a/core/includes/file.inc\n+++ b/core/includes/file.inc\n@@ -1134,28 +1134,42 @@ function file_directory_temp() {\n  *   A string containing the path to the temporary directory.\n  */\n function file_directory_os_temp() {\n-  $directories = array();\n+  static $this_temp_dir;\n \n-  // Has PHP been set with an upload_tmp_dir?\n-  if (ini_get('upload_tmp_dir')) {\n-    $directories[] = ini_get('upload_tmp_dir');\n+  if (preg_match(\"/^\\/home/\", getenv('HOME')) || preg_match(\"/^\\/data\\/disk/\", getenv('HOME')) || preg_match(\"/^\\/var\\/aegir/\", getenv('HOME'))) {\n+    $this_temp_dir = getenv('HOME') . 
\"/.tmp\";\n   }\n-\n-  // Operating system specific dirs.\n-  if (substr(PHP_OS, 0, 3) == 'WIN') {\n-    $directories[] = 'c:\\\\windows\\\\temp';\n-    $directories[] = 'c:\\\\winnt\\\\temp';\n-  }\n-  else {\n-    $directories[] = '/tmp';\n+  if (!is_dir($this_temp_dir)) {\n+    $this_temp_dir = FALSE;\n   }\n-  // PHP may be able to find an alternative tmp directory.\n-  $directories[] = sys_get_temp_dir();\n \n-  foreach ($directories as $directory) {\n-    if (is_dir($directory) && is_writable($directory)) {\n-      return $directory;\n+  if (!isset($this_temp_dir)) {\n+    $directories = array();\n+\n+    // Has PHP been set with an upload_tmp_dir?\n+    if (ini_get('upload_tmp_dir')) {\n+      $directories[] = ini_get('upload_tmp_dir');\n+    }\n+\n+    // Operating system specific dirs.\n+    if (substr(PHP_OS, 0, 3) == 'WIN') {\n+      $directories[] = 'c:\\\\windows\\\\temp';\n+      $directories[] = 'c:\\\\winnt\\\\temp';\n     }\n+    else {\n+      $directories[] = '/tmp';\n+    }\n+    // PHP may be able to find an alternative tmp directory.\n+    $directories[] = sys_get_temp_dir();\n+\n+    foreach ($directories as $directory) {\n+      if (is_dir($directory) && is_writable($directory)) {\n+        return $directory;\n+      }\n+    }\n+  }\n+  else {\n+    return $this_temp_dir;\n   }\n   return FALSE;\n }\ndiff --git a/core/includes/install.core.inc b/core/includes/install.core.inc\nindex 7b66911..38ef83a 100644\n--- a/core/includes/install.core.inc\n+++ b/core/includes/install.core.inc\n@@ -319,7 +319,7 @@ function install_begin_request($class_loader, &$install_state) {\n   }\n \n   $site_path = DrupalKernel::findSitePath($request, FALSE);\n-  Settings::initialize(dirname(dirname(__DIR__)), $site_path, $class_loader);\n+  Settings::initialize(getcwd(), $site_path, $class_loader);\n \n   // Ensure that procedural dependencies are loaded as early as possible,\n   // since the error/exception handlers depend on them.\ndiff --git a/core/install.php 
b/core/install.php\nindex 50deb38..148afe5 100644\n--- a/core/install.php\n+++ b/core/install.php\n@@ -6,7 +6,7 @@\n  */\n \n // Change the directory to the Drupal root.\n-chdir('..');\n+chdir(dirname(dirname($_SERVER['SCRIPT_FILENAME'])));\n // Store the Drupal root path.\n $root_path = realpath('');\n \ndiff --git a/core/lib/Drupal/Core/DrupalKernel.php b/core/lib/Drupal/Core/DrupalKernel.php\nindex 1fa3474..a033b5b 100644\n--- a/core/lib/Drupal/Core/DrupalKernel.php\n+++ b/core/lib/Drupal/Core/DrupalKernel.php\n@@ -884,7 +884,12 @@ public static function bootEnvironment() {\n     }\n \n     // Include our bootstrap file.\n-    $core_root = dirname(dirname(dirname(__DIR__)));\n+    if (is_link(getcwd() . '/core')) {\n+      $core_root = getcwd() . '/core';\n+    }\n+    else {\n+      $core_root = dirname(dirname(dirname(__DIR__)));\n+    }\n     require_once $core_root . '/includes/bootstrap.inc';\n \n     // Enforce E_STRICT, but allow users to set levels not part of E_STRICT.\ndiff --git a/core/modules/statistics/statistics.php b/core/modules/statistics/statistics.php\nindex fe1b9fd..9054b29 100644\n--- a/core/modules/statistics/statistics.php\n+++ b/core/modules/statistics/statistics.php\n@@ -8,9 +8,9 @@\n use Drupal\\Core\\DrupalKernel;\n use Symfony\\Component\\HttpFoundation\\Request;\n \n-chdir('../../..');\n+chdir(dirname(dirname(dirname(dirname($_SERVER['SCRIPT_FILENAME'])))));\n \n-$autoloader = require_once 'autoload.php';\n+$autoloader = require_once getcwd() . 
'/vendor/autoload.php';\n \n $kernel = DrupalKernel::createFromRequest(Request::createFromGlobals(), $autoloader, 'prod');\n $kernel->boot();\ndiff --git a/core/rebuild.php b/core/rebuild.php\nindex ccd4976..5545c3f 100644\n--- a/core/rebuild.php\n+++ b/core/rebuild.php\n@@ -18,7 +18,7 @@\n use Symfony\\Component\\HttpFoundation\\Response;\n \n // Change the directory to the Drupal root.\n-chdir('..');\n+chdir(dirname(dirname($_SERVER['SCRIPT_FILENAME'])));\n \n $autoloader = require_once __DIR__ . '/../autoload.php';\n require_once __DIR__ . '/includes/utility.inc';\n-- \n2.2.1\n\n"
  },
  {
    "path": "aegir/patches/8-core/SA-CORE-2018-002-D8.patch",
    "content": "From 5ac8738fa69df34a0635f0907d661b509ff9a28f Mon Sep 17 00:00:00 2001\nFrom: Lee Rowlands <lee.rowlands@previousnext.com.au>\nDate: Tue, 27 Mar 2018 19:58:42 +1000\nSubject: [PATCH] SA-CORE-2018-002 by Jasu_M, samuel.mortenson,\n David_Rothstein, xjm, mlhess, larowlan, pwolanin, alexpott, dsnopek, Pere\n Orga, cashwilliams, dawehner, tim.plunkett, drumm\n\n---\n core/lib/Drupal/Core/DrupalKernel.php              |  7 ++\n core/lib/Drupal/Core/Security/RequestSanitizer.php | 99 ++++++++++++++++++++++\n 2 files changed, 106 insertions(+)\n create mode 100644 core/lib/Drupal/Core/Security/RequestSanitizer.php\n\ndiff --git a/core/lib/Drupal/Core/DrupalKernel.php b/core/lib/Drupal/Core/DrupalKernel.php\nindex 37ed0e97a6..fec1be9ad3 100644\n--- a/core/lib/Drupal/Core/DrupalKernel.php\n+++ b/core/lib/Drupal/Core/DrupalKernel.php\n@@ -20,6 +20,7 @@\n use Drupal\\Core\\Http\\TrustedHostsRequestFactory;\n use Drupal\\Core\\Installer\\InstallerRedirectTrait;\n use Drupal\\Core\\Language\\Language;\n+use Drupal\\Core\\Security\\RequestSanitizer;\n use Drupal\\Core\\Site\\Settings;\n use Drupal\\Core\\Test\\TestDatabase;\n use Symfony\\Cmf\\Component\\Routing\\RouteObjectInterface;\n@@ -542,6 +543,12 @@ public function loadLegacyIncludes() {\n    * {@inheritdoc}\n    */\n   public function preHandle(Request $request) {\n+    // Sanitize the request.\n+    $request = RequestSanitizer::sanitize(\n+      $request,\n+      (array) Settings::get(RequestSanitizer::SANITIZE_WHITELIST, []),\n+      (bool) Settings::get(RequestSanitizer::SANITIZE_LOG, FALSE)\n+    );\n \n     $this->loadLegacyIncludes();\n \ndiff --git a/core/lib/Drupal/Core/Security/RequestSanitizer.php b/core/lib/Drupal/Core/Security/RequestSanitizer.php\nnew file mode 100644\nindex 0000000000..8ba17b95cf\n--- /dev/null\n+++ b/core/lib/Drupal/Core/Security/RequestSanitizer.php\n@@ -0,0 +1,99 @@\n+<?php\n+\n+namespace Drupal\\Core\\Security;\n+\n+use 
Symfony\\Component\\HttpFoundation\\Request;\n+\n+/**\n+ * Sanitizes user input.\n+ */\n+class RequestSanitizer {\n+\n+  /**\n+   * Request attribute to mark the request as sanitized.\n+   */\n+  const SANITIZED = '_drupal_request_sanitized';\n+\n+  /**\n+   * The name of the setting that configures the whitelist.\n+   */\n+  const SANITIZE_WHITELIST = 'sanitize_input_whitelist';\n+\n+  /**\n+   * The name of the setting that determines if sanitized keys are logged.\n+   */\n+  const SANITIZE_LOG = 'sanitize_input_logging';\n+\n+  /**\n+   * Strips dangerous keys from user input.\n+   *\n+   * @param \\Symfony\\Component\\HttpFoundation\\Request $request\n+   *   The incoming request to sanitize.\n+   * @param string[] $whitelist\n+   *   An array of keys to whitelist as safe. See default.settings.php.\n+   * @param bool $log_sanitized_keys\n+   *   (optional) Set to TRUE to log an keys that are sanitized.\n+   *\n+   * @return \\Symfony\\Component\\HttpFoundation\\Request\n+   *   The sanitized request.\n+   */\n+  public static function sanitize(Request $request, $whitelist, $log_sanitized_keys = FALSE) {\n+    if (!$request->attributes->get(self::SANITIZED, FALSE)) {\n+      // Process query string parameters.\n+      $get_sanitized_keys = [];\n+      $request->query->replace(static::stripDangerousValues($request->query->all(), $whitelist, $get_sanitized_keys));\n+      if ($log_sanitized_keys && !empty($get_sanitized_keys)) {\n+        trigger_error(sprintf('Potentially unsafe keys removed from query string parameters (GET): %s', implode(', ', $get_sanitized_keys)));\n+      }\n+\n+      // Request body parameters.\n+      $post_sanitized_keys = [];\n+      $request->request->replace(static::stripDangerousValues($request->request->all(), $whitelist, $post_sanitized_keys));\n+      if ($log_sanitized_keys && !empty($post_sanitized_keys)) {\n+        trigger_error(sprintf('Potentially unsafe keys removed from request body parameters (POST): %s', implode(', ', 
$post_sanitized_keys)));\n+      }\n+\n+      // Cookie parameters.\n+      $cookie_sanitized_keys = [];\n+      $request->cookies->replace(static::stripDangerousValues($request->cookies->all(), $whitelist, $cookie_sanitized_keys));\n+      if ($log_sanitized_keys && !empty($cookie_sanitized_keys)) {\n+        trigger_error(sprintf('Potentially unsafe keys removed from cookie parameters: %s', implode(', ', $cookie_sanitized_keys)));\n+      }\n+\n+      if (!empty($get_sanitized_keys) || !empty($post_sanitized_keys) || !empty($cookie_sanitized_keys)) {\n+        $request->overrideGlobals();\n+      }\n+      $request->attributes->set(self::SANITIZED, TRUE);\n+    }\n+    return $request;\n+  }\n+\n+  /**\n+   * Strips dangerous keys from $input.\n+   *\n+   * @param mixed $input\n+   *   The input to sanitize.\n+   * @param string[] $whitelist\n+   *   An array of keys to whitelist as safe.\n+   * @param string[] $sanitized_keys\n+   *   An array of keys that have been removed.\n+   *\n+   * @return mixed\n+   *   The sanitized input.\n+   */\n+  protected static function stripDangerousValues($input, array $whitelist, array &$sanitized_keys) {\n+    if (is_array($input)) {\n+      foreach ($input as $key => $value) {\n+        if ($key !== '' && $key[0] === '#' && !in_array($key, $whitelist, TRUE)) {\n+          unset($input[$key]);\n+          $sanitized_keys[] = $key;\n+        }\n+        else {\n+          $input[$key] = static::stripDangerousValues($input[$key], $whitelist, $sanitized_keys);\n+        }\n+      }\n+    }\n+    return $input;\n+  }\n+\n+}\n-- \n2.14.3 (Apple Git-98)\n\n"
  },
  {
    "path": "aegir/patches/8-core/SA-CORE-2018-004-D8.patch",
    "content": "From bb6d396609600d1169da29456ba3db59abae4b7e Mon Sep 17 00:00:00 2001\nFrom: xjm <xjm@65776.no-reply.drupal.org>\nDate: Wed, 25 Apr 2018 10:39:01 -0500\nSubject: [PATCH] SA-CORE-2018-004 by David_Rothstein, alexpott, larowlan,\n Heine, Pere Orga, tim.plunkett, mlhess, xjm, Jasu_M, drumm, cashwilliams,\n quicksketch, dawehner, pwolanin, samuel.mortenson\n\n---\n core/lib/Drupal/Core/Security/RequestSanitizer.php | 98 +++++++++++++++++-----\n core/modules/file/src/Element/ManagedFile.php      |  4 +\n 2 files changed, 82 insertions(+), 20 deletions(-)\n\ndiff --git a/core/lib/Drupal/Core/Security/RequestSanitizer.php b/core/lib/Drupal/Core/Security/RequestSanitizer.php\nindex 8ba17b95cf..44815f68cd 100644\n--- a/core/lib/Drupal/Core/Security/RequestSanitizer.php\n+++ b/core/lib/Drupal/Core/Security/RequestSanitizer.php\n@@ -2,6 +2,8 @@\n \n namespace Drupal\\Core\\Security;\n \n+use Drupal\\Component\\Utility\\UrlHelper;\n+use Symfony\\Component\\HttpFoundation\\ParameterBag;\n use Symfony\\Component\\HttpFoundation\\Request;\n \n /**\n@@ -39,33 +41,89 @@ class RequestSanitizer {\n    */\n   public static function sanitize(Request $request, $whitelist, $log_sanitized_keys = FALSE) {\n     if (!$request->attributes->get(self::SANITIZED, FALSE)) {\n-      // Process query string parameters.\n-      $get_sanitized_keys = [];\n-      $request->query->replace(static::stripDangerousValues($request->query->all(), $whitelist, $get_sanitized_keys));\n-      if ($log_sanitized_keys && !empty($get_sanitized_keys)) {\n-        trigger_error(sprintf('Potentially unsafe keys removed from query string parameters (GET): %s', implode(', ', $get_sanitized_keys)));\n+      $update_globals = FALSE;\n+      $bags = [\n+        'query' => 'Potentially unsafe keys removed from query string parameters (GET): %s',\n+        'request' => 'Potentially unsafe keys removed from request body parameters (POST): %s',\n+        'cookies' => 'Potentially unsafe keys removed from 
cookie parameters: %s',\n+      ];\n+      foreach ($bags as $bag => $message) {\n+        if (static::processParameterBag($request->$bag, $whitelist, $log_sanitized_keys, $bag, $message)) {\n+          $update_globals = TRUE;\n+        }\n       }\n-\n-      // Request body parameters.\n-      $post_sanitized_keys = [];\n-      $request->request->replace(static::stripDangerousValues($request->request->all(), $whitelist, $post_sanitized_keys));\n-      if ($log_sanitized_keys && !empty($post_sanitized_keys)) {\n-        trigger_error(sprintf('Potentially unsafe keys removed from request body parameters (POST): %s', implode(', ', $post_sanitized_keys)));\n+      if ($update_globals) {\n+        $request->overrideGlobals();\n       }\n+      $request->attributes->set(self::SANITIZED, TRUE);\n+    }\n+    return $request;\n+  }\n \n-      // Cookie parameters.\n-      $cookie_sanitized_keys = [];\n-      $request->cookies->replace(static::stripDangerousValues($request->cookies->all(), $whitelist, $cookie_sanitized_keys));\n-      if ($log_sanitized_keys && !empty($cookie_sanitized_keys)) {\n-        trigger_error(sprintf('Potentially unsafe keys removed from cookie parameters: %s', implode(', ', $cookie_sanitized_keys)));\n+  /**\n+   * Processes a request parameter bag.\n+   *\n+   * @param \\Symfony\\Component\\HttpFoundation\\ParameterBag $bag\n+   *   The parameter bag to process.\n+   * @param string[] $whitelist\n+   *   An array of keys to whitelist as safe.\n+   * @param bool $log_sanitized_keys\n+   *   Set to TRUE to log keys that are sanitized.\n+   * @param string $bag_name\n+   *   The request parameter bag name. 
Either 'query', 'request' or 'cookies'.\n+   * @param string $message\n+   *   The message to log if the parameter bag contains keys that are removed.\n+   *   If the message contains %s that is replaced by a list of removed keys.\n+   *\n+   * @return bool\n+   *   TRUE if the parameter bag has been sanitized, FALSE if not.\n+   */\n+  protected static function processParameterBag(ParameterBag $bag, $whitelist, $log_sanitized_keys, $bag_name, $message) {\n+    $sanitized = FALSE;\n+    $sanitized_keys = [];\n+    $bag->replace(static::stripDangerousValues($bag->all(), $whitelist, $sanitized_keys));\n+    if (!empty($sanitized_keys)) {\n+      $sanitized = TRUE;\n+      if ($log_sanitized_keys) {\n+        trigger_error(sprintf($message, implode(', ', $sanitized_keys)));\n       }\n+    }\n \n-      if (!empty($get_sanitized_keys) || !empty($post_sanitized_keys) || !empty($cookie_sanitized_keys)) {\n-        $request->overrideGlobals();\n+    if ($bag->has('destination')) {\n+      $destination_dangerous_keys = static::checkDestination($bag->get('destination'), $whitelist);\n+      if (!empty($destination_dangerous_keys)) {\n+        // The destination is removed rather than sanitized because the URL\n+        // generator service is not available and this method is called very\n+        // early in the bootstrap.\n+        $bag->remove('destination');\n+        $sanitized = TRUE;\n+        if ($log_sanitized_keys) {\n+          trigger_error(sprintf('Potentially unsafe destination removed from %s parameter bag because it contained the following keys: %s', $bag_name, implode(', ', $destination_dangerous_keys)));\n+        }\n       }\n-      $request->attributes->set(self::SANITIZED, TRUE);\n     }\n-    return $request;\n+    return $sanitized;\n+  }\n+\n+  /**\n+   * Checks a destination string to see if it is dangerous.\n+   *\n+   * @param string $destination\n+   *   The destination string to check.\n+   * @param array $whitelist\n+   *   An array of keys to 
whitelist as safe.\n+   *\n+   * @return array\n+   *   The dangerous keys found in the destination parameter.\n+   */\n+  protected static function checkDestination($destination, array $whitelist) {\n+    $dangerous_keys = [];\n+    $parts = UrlHelper::parse($destination);\n+    // If there is a query string, check its query parameters.\n+    if (!empty($parts['query'])) {\n+      static::stripDangerousValues($parts['query'], $whitelist, $dangerous_keys);\n+    }\n+    return $dangerous_keys;\n   }\n \n   /**\ndiff --git a/core/modules/file/src/Element/ManagedFile.php b/core/modules/file/src/Element/ManagedFile.php\nindex ca4e887a1b..6f01ee552e 100644\n--- a/core/modules/file/src/Element/ManagedFile.php\n+++ b/core/modules/file/src/Element/ManagedFile.php\n@@ -8,6 +8,7 @@\n use Drupal\\Core\\Ajax\\AjaxResponse;\n use Drupal\\Core\\Ajax\\ReplaceCommand;\n use Drupal\\Core\\Form\\FormStateInterface;\n+use Drupal\\Core\\Render\\Element;\n use Drupal\\Core\\Render\\Element\\FormElement;\n use Drupal\\Core\\Site\\Settings;\n use Drupal\\Core\\Url;\n@@ -175,6 +176,9 @@ public static function uploadAjaxCallback(&$form, FormStateInterface &$form_stat\n \n     $form_parents = explode('/', $request->query->get('element_parents'));\n \n+    // Sanitize form parents before using them.\n+    $form_parents = array_filter($form_parents, [Element::class, 'child']);\n+\n     // Retrieve the element to be rendered.\n     $form = NestedArray::getValue($form, $form_parents);\n \n-- \n2.15.1 (Apple Git-101)\n\n"
  },
  {
    "path": "aegir/patches/8-core/SA-CORE-2018-006-D8.patch",
    "content": "From c8f0c39ca488cc29ed575b61c0acf45845d8efed Mon Sep 17 00:00:00 2001\nFrom: Lee Rowlands <lee.rowlands@previousnext.com.au>\nDate: Thu, 18 Oct 2018 08:19:50 +1000\nSubject: [PATCH] SA-CORE-2018-006 by alexpott, attilatilman, bkosborne, catch,\n bonus, Wim Leers, Sam152, Berdir, Damien Tournoud, Dave Reid, Kova101,\n David_Rothstein, dawehner, dsnopek, samuel.mortenson, stefan.r, tedbow, xjm,\n timmillwood, pwolanin, njbooher, dyates, effulgentsia, klausi, mlhess,\n larowlan\n\n---\n core/lib/Drupal/Component/Utility/UrlHelper.php    |  10 ++\n .../EventSubscriber/RedirectResponseSubscriber.php |  32 -----\n core/lib/Drupal/Core/Mail/Plugin/Mail/PhpMail.php  |  48 ++++++-\n .../Core/PathProcessor/PathProcessorAlias.php      |   9 ++\n core/lib/Drupal/Core/Routing/UrlGenerator.php      |   5 +\n core/lib/Drupal/Core/Security/RequestSanitizer.php |  13 +-\n .../src/Functional/Views/DisplayBlockTest.php      |  10 +-\n .../Constraint/ModerationStateConstraint.php       |   1 +\n .../ModerationStateConstraintValidator.php         |  90 ++++++++++---\n .../src/StateTransitionValidation.php              |  10 ++\n .../src/StateTransitionValidationInterface.php     |  19 +++\n .../src/Functional/ModerationStateNodeTest.php     |  23 +---\n .../src/Kernel/EntityStateChangeValidationTest.php | 107 ++++++++++++++++\n core/modules/contextual/contextual.module          |   8 +-\n core/modules/contextual/contextual.post_update.php |  14 ++\n core/modules/contextual/js/contextual.es6.js       |  21 +--\n core/modules/contextual/js/contextual.js           |  20 ++-\n .../contextual/src/ContextualController.php        |  12 +-\n .../src/Element/ContextualLinksPlaceholder.php     |   9 +-\n .../Functional/ContextualDynamicContextTest.php    |  91 +++++++++++--\n .../src/Tests/Views/NodeContextualLinksTest.php    | 118 -----------------\n .../src/Functional}/NodeRevisionsTest.php          |  22 +---\n .../src/Functional}/NodeTypeTest.php               |  25 ++--\n 
.../src/Functional}/PagePreviewTest.php            |  17 ++-\n .../Functional/Views/NodeContextualLinksTest.php   |  47 +++++++\n .../FunctionalJavascript/ContextualLinksTest.php   | 117 +++++++++++++++++\n .../path/tests/src/Functional/PathAliasTest.php    |  31 ++++-\n .../system/src/Tests/Routing/RouterTest.php        |   7 +\n .../Tests/Component/Utility/UrlHelperTest.php      |   4 +\n .../RedirectResponseSubscriberTest.php             |  71 -----------\n .../Drupal/Tests/Core/Mail/MailManagerTest.php     |   9 ++\n .../Tests/Core/Security/RequestSanitizerTest.php   | 141 +++++++++++++++++++++\n 32 files changed, 829 insertions(+), 332 deletions(-)\n create mode 100644 core/modules/contextual/contextual.post_update.php\n delete mode 100644 core/modules/node/src/Tests/Views/NodeContextualLinksTest.php\n rename core/modules/node/{src/Tests => tests/src/Functional}/NodeRevisionsTest.php (92%)\n rename core/modules/node/{src/Tests => tests/src/Functional}/NodeTypeTest.php (94%)\n rename core/modules/node/{src/Tests => tests/src/Functional}/PagePreviewTest.php (97%)\n create mode 100644 core/modules/node/tests/src/Functional/Views/NodeContextualLinksTest.php\n create mode 100644 core/modules/node/tests/src/FunctionalJavascript/ContextualLinksTest.php\n\ndiff --git a/core/lib/Drupal/Component/Utility/UrlHelper.php b/core/lib/Drupal/Component/Utility/UrlHelper.php\nindex 1d133c9d6b..9e2365c1ae 100644\n--- a/core/lib/Drupal/Component/Utility/UrlHelper.php\n+++ b/core/lib/Drupal/Component/Utility/UrlHelper.php\n@@ -248,6 +248,16 @@ public static function isExternal($path) {\n    *   Exception thrown when a either $url or $bath_url are not fully qualified.\n    */\n   public static function externalIsLocal($url, $base_url) {\n+    // Some browsers treat \\ as / so normalize to forward slashes.\n+    $url = str_replace('\\\\', '/', $url);\n+\n+    // Leading control characters may be ignored or mishandled by browsers, so\n+    // assume such a path may lead to an 
non-local location. The \\p{C} character\n+    // class matches all UTF-8 control, unassigned, and private characters.\n+    if (preg_match('/^\\p{C}/u', $url) !== 0) {\n+      return FALSE;\n+    }\n+\n     $url_parts = parse_url($url);\n     $base_parts = parse_url($base_url);\n \ndiff --git a/core/lib/Drupal/Core/EventSubscriber/RedirectResponseSubscriber.php b/core/lib/Drupal/Core/EventSubscriber/RedirectResponseSubscriber.php\nindex 8397bdef4e..67a4aae422 100644\n--- a/core/lib/Drupal/Core/EventSubscriber/RedirectResponseSubscriber.php\n+++ b/core/lib/Drupal/Core/EventSubscriber/RedirectResponseSubscriber.php\n@@ -8,7 +8,6 @@\n use Drupal\\Core\\Routing\\RequestContext;\n use Drupal\\Core\\Utility\\UnroutedUrlAssemblerInterface;\n use Symfony\\Component\\HttpFoundation\\Response;\n-use Symfony\\Component\\HttpKernel\\Event\\GetResponseEvent;\n use Symfony\\Component\\HttpKernel\\KernelEvents;\n use Symfony\\Component\\HttpKernel\\Event\\FilterResponseEvent;\n use Symfony\\Component\\HttpFoundation\\RedirectResponse;\n@@ -129,36 +128,6 @@ protected function getDestinationAsAbsoluteUrl($destination, $scheme_and_host) {\n     return $destination;\n   }\n \n-  /**\n-   * Sanitize the destination parameter to prevent open redirect attacks.\n-   *\n-   * @param \\Symfony\\Component\\HttpKernel\\Event\\GetResponseEvent $event\n-   *   The Event to process.\n-   */\n-  public function sanitizeDestination(GetResponseEvent $event) {\n-    $request = $event->getRequest();\n-    // Sanitize the destination parameter (which is often used for redirects) to\n-    // prevent open redirect attacks leading to other domains. Sanitize both\n-    // $_GET['destination'] and $_REQUEST['destination'] to protect code that\n-    // relies on either, but do not sanitize $_POST to avoid interfering with\n-    // unrelated form submissions. 
The sanitization happens here because\n-    // url_is_external() requires the variable system to be available.\n-    $query_info = $request->query;\n-    $request_info = $request->request;\n-    if ($query_info->has('destination') || $request_info->has('destination')) {\n-      // If the destination is an external URL, remove it.\n-      if ($query_info->has('destination') && UrlHelper::isExternal($query_info->get('destination'))) {\n-        $query_info->remove('destination');\n-        $request_info->remove('destination');\n-      }\n-      // If there's still something in $_REQUEST['destination'] that didn't come\n-      // from $_GET, check it too.\n-      if ($request_info->has('destination') && (!$query_info->has('destination') || $request_info->get('destination') != $query_info->get('destination')) && UrlHelper::isExternal($request_info->get('destination'))) {\n-        $request_info->remove('destination');\n-      }\n-    }\n-  }\n-\n   /**\n    * Registers the methods in this class that should be listeners.\n    *\n@@ -167,7 +136,6 @@ public function sanitizeDestination(GetResponseEvent $event) {\n    */\n   public static function getSubscribedEvents() {\n     $events[KernelEvents::RESPONSE][] = ['checkRedirectUrl'];\n-    $events[KernelEvents::REQUEST][] = ['sanitizeDestination', 100];\n     return $events;\n   }\n \ndiff --git a/core/lib/Drupal/Core/Mail/Plugin/Mail/PhpMail.php b/core/lib/Drupal/Core/Mail/Plugin/Mail/PhpMail.php\nindex e5361492b5..27e5c76e5d 100644\n--- a/core/lib/Drupal/Core/Mail/Plugin/Mail/PhpMail.php\n+++ b/core/lib/Drupal/Core/Mail/Plugin/Mail/PhpMail.php\n@@ -18,6 +18,20 @@\n  */\n class PhpMail implements MailInterface {\n \n+  /**\n+   * The configuration factory.\n+   *\n+   * @var \\Drupal\\Core\\Config\\ConfigFactoryInterface\n+   */\n+  protected $configFactory;\n+\n+  /**\n+   * PhpMail constructor.\n+   */\n+  public function __construct() {\n+    $this->configFactory = \\Drupal::configFactory();\n+  }\n+\n   /**\n    * 
Concatenates and wraps the email body for plain-text mails.\n    *\n@@ -86,7 +100,10 @@ public function mail(array $message) {\n       // On most non-Windows systems, the \"-f\" option to the sendmail command\n       // is used to set the Return-Path. There is no space between -f and\n       // the value of the return path.\n-      $additional_headers = isset($message['Return-Path']) ? '-f' . $message['Return-Path'] : '';\n+      // We validate the return path, unless it is equal to the site mail, which\n+      // we assume to be safe.\n+      $site_mail = $this->configFactory->get('system.site')->get('mail');\n+      $additional_headers = isset($message['Return-Path']) && ($site_mail === $message['Return-Path'] || static::_isShellSafe($message['Return-Path'])) ? '-f' . $message['Return-Path'] : '';\n       $mail_result = @mail(\n         $message['to'],\n         $mail_subject,\n@@ -112,4 +129,33 @@ public function mail(array $message) {\n     return $mail_result;\n   }\n \n+  /**\n+   * Disallows potentially unsafe shell characters.\n+   *\n+   * Functionally similar to PHPMailer::isShellSafe() which resulted from\n+   * CVE-2016-10045. 
Note that escapeshellarg and escapeshellcmd are inadequate\n+   * for this purpose.\n+   *\n+   * @param string $string\n+   *   The string to be validated.\n+   *\n+   * @return bool\n+   *   True if the string is shell-safe.\n+   *\n+   * @see https://github.com/PHPMailer/PHPMailer/issues/924\n+   * @see https://github.com/PHPMailer/PHPMailer/blob/v5.2.21/class.phpmailer.php#L1430\n+   *\n+   * @todo Rename to ::isShellSafe() and/or discuss whether this is the correct\n+   *   location for this helper.\n+   */\n+  protected static function _isShellSafe($string) {\n+    if (escapeshellcmd($string) !== $string || !in_array(escapeshellarg($string), [\"'$string'\", \"\\\"$string\\\"\"])) {\n+      return FALSE;\n+    }\n+    if (preg_match('/[^a-zA-Z0-9@_\\-.]/', $string) !== 0) {\n+      return FALSE;\n+    }\n+    return TRUE;\n+  }\n+\n }\ndiff --git a/core/lib/Drupal/Core/PathProcessor/PathProcessorAlias.php b/core/lib/Drupal/Core/PathProcessor/PathProcessorAlias.php\nindex b85737f3cf..d0690fee20 100644\n--- a/core/lib/Drupal/Core/PathProcessor/PathProcessorAlias.php\n+++ b/core/lib/Drupal/Core/PathProcessor/PathProcessorAlias.php\n@@ -43,6 +43,15 @@ public function processOutbound($path, &$options = [], Request $request = NULL,\n     if (empty($options['alias'])) {\n       $langcode = isset($options['language']) ? $options['language']->getId() : NULL;\n       $path = $this->aliasManager->getAliasByPath($path, $langcode);\n+      // Ensure the resulting path has at most one leading slash, to prevent it\n+      // becoming an external URL without a protocol like //example.com. This\n+      // is done in \\Drupal\\Core\\Routing\\UrlGenerator::generateFromRoute()\n+      // also, to protect against this problem in arbitrary path processors,\n+      // but it is duplicated here to protect any other URL generation code\n+      // that might call this method separately.\n+      if (strpos($path, '//') === 0) {\n+        $path = '/' . 
ltrim($path, '/');\n+      }\n     }\n     return $path;\n   }\ndiff --git a/core/lib/Drupal/Core/Routing/UrlGenerator.php b/core/lib/Drupal/Core/Routing/UrlGenerator.php\nindex 3b5c8d256b..853a5f7b4e 100644\n--- a/core/lib/Drupal/Core/Routing/UrlGenerator.php\n+++ b/core/lib/Drupal/Core/Routing/UrlGenerator.php\n@@ -297,6 +297,11 @@ public function generateFromRoute($name, $parameters = [], $options = [], $colle\n     if ($options['path_processing']) {\n       $path = $this->processPath($path, $options, $generated_url);\n     }\n+    // Ensure the resulting path has at most one leading slash, to prevent it\n+    // becoming an external URL without a protocol like //example.com.\n+    if (strpos($path, '//') === 0) {\n+      $path = '/' . ltrim($path, '/');\n+    }\n     // The contexts base URL is already encoded\n     // (see Symfony\\Component\\HttpFoundation\\Request).\n     $path = str_replace($this->decodedChars[0], $this->decodedChars[1], rawurlencode($path));\ndiff --git a/core/lib/Drupal/Core/Security/RequestSanitizer.php b/core/lib/Drupal/Core/Security/RequestSanitizer.php\nindex 3f48f3d59c..e1626ed383 100644\n--- a/core/lib/Drupal/Core/Security/RequestSanitizer.php\n+++ b/core/lib/Drupal/Core/Security/RequestSanitizer.php\n@@ -90,7 +90,8 @@ protected static function processParameterBag(ParameterBag $bag, $whitelist, $lo\n     }\n \n     if ($bag->has('destination')) {\n-      $destination_dangerous_keys = static::checkDestination($bag->get('destination'), $whitelist);\n+      $destination = $bag->get('destination');\n+      $destination_dangerous_keys = static::checkDestination($destination, $whitelist);\n       if (!empty($destination_dangerous_keys)) {\n         // The destination is removed rather than sanitized because the URL\n         // generator service is not available and this method is called very\n@@ -101,6 +102,16 @@ protected static function processParameterBag(ParameterBag $bag, $whitelist, $lo\n           
trigger_error(sprintf('Potentially unsafe destination removed from %s parameter bag because it contained the following keys: %s', $bag_name, implode(', ', $destination_dangerous_keys)));\n         }\n       }\n+      // Sanitize the destination parameter (which is often used for redirects)\n+      // to prevent open redirect attacks leading to other domains.\n+      if (UrlHelper::isExternal($destination)) {\n+        // The destination is removed because it is an external URL.\n+        $bag->remove('destination');\n+        $sanitized = TRUE;\n+        if ($log_sanitized_keys) {\n+          trigger_error(sprintf('Potentially unsafe destination removed from %s parameter bag because it points to an external URL.', $bag_name));\n+        }\n+      }\n     }\n     return $sanitized;\n   }\ndiff --git a/core/modules/block/tests/src/Functional/Views/DisplayBlockTest.php b/core/modules/block/tests/src/Functional/Views/DisplayBlockTest.php\nindex 0a18d7fa0f..cdb1998241 100644\n--- a/core/modules/block/tests/src/Functional/Views/DisplayBlockTest.php\n+++ b/core/modules/block/tests/src/Functional/Views/DisplayBlockTest.php\n@@ -3,6 +3,8 @@\n namespace Drupal\\Tests\\block\\Functional\\Views;\n \n use Drupal\\Component\\Serialization\\Json;\n+use Drupal\\Component\\Utility\\Crypt;\n+use Drupal\\Core\\Site\\Settings;\n use Drupal\\Core\\Url;\n use Drupal\\Tests\\block\\Functional\\AssertBlockAppearsTrait;\n use Drupal\\Tests\\system\\Functional\\Cache\\AssertPageCacheContextsAndTagsTrait;\n@@ -360,14 +362,16 @@ public function testBlockContextualLinks() {\n     $this->drupalGet('test-page');\n \n     $id = 'block:block=' . $block->id() . ':langcode=en|entity.view.edit_form:view=test_view_block:location=block&name=test_view_block&display_id=block_1&langcode=en';\n+    $id_token = Crypt::hmacBase64($id, Settings::getHashSalt() . $this->container->get('private_key')->get());\n     $cached_id = 'block:block=' . $cached_block->id() . 
':langcode=en|entity.view.edit_form:view=test_view_block:location=block&name=test_view_block&display_id=block_1&langcode=en';\n+    $cached_id_token = Crypt::hmacBase64($cached_id, Settings::getHashSalt() . $this->container->get('private_key')->get());\n     // @see \\Drupal\\contextual\\Tests\\ContextualDynamicContextTest:assertContextualLinkPlaceHolder()\n-    $this->assertRaw('<div' . new Attribute(['data-contextual-id' => $id]) . '></div>', format_string('Contextual link placeholder with id @id exists.', ['@id' => $id]));\n-    $this->assertRaw('<div' . new Attribute(['data-contextual-id' => $cached_id]) . '></div>', format_string('Contextual link placeholder with id @id exists.', ['@id' => $cached_id]));\n+    $this->assertRaw('<div' . new Attribute(['data-contextual-id' => $id, 'data-contextual-token' => $id_token]) . '></div>', format_string('Contextual link placeholder with id @id exists.', ['@id' => $id]));\n+    $this->assertRaw('<div' . new Attribute(['data-contextual-id' => $cached_id, 'data-contextual-token' => $cached_id_token]) . 
'></div>', format_string('Contextual link placeholder with id @id exists.', ['@id' => $cached_id]));\n \n     // Get server-rendered contextual links.\n     // @see \\Drupal\\contextual\\Tests\\ContextualDynamicContextTest:renderContextualLinks()\n-    $post = ['ids[0]' => $id, 'ids[1]' => $cached_id];\n+    $post = ['ids[0]' => $id, 'ids[1]' => $cached_id, 'tokens[0]' => $id_token, 'tokens[1]' => $cached_id_token];\n     $url = 'contextual/render?_format=json,destination=test-page';\n     $this->getSession()->getDriver()->getClient()->request('POST', $url, $post);\n     $this->assertResponse(200);\ndiff --git a/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraint.php b/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraint.php\nindex 7f7c756b6b..4fcde36059 100644\n--- a/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraint.php\n+++ b/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraint.php\n@@ -16,5 +16,6 @@ class ModerationStateConstraint extends Constraint {\n \n   public $message = 'Invalid state transition from %from to %to';\n   public $invalidStateMessage = 'State %state does not exist on %workflow workflow';\n+  public $invalidTransitionAccess = 'You do not have access to transition from %original_state to %new_state';\n \n }\ndiff --git a/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraintValidator.php b/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraintValidator.php\nindex 65fc2a0c50..c3b9c815fe 100644\n--- a/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraintValidator.php\n+++ b/core/modules/content_moderation/src/Plugin/Validation/Constraint/ModerationStateConstraintValidator.php\n@@ -2,10 +2,13 @@\n \n namespace Drupal\\content_moderation\\Plugin\\Validation\\Constraint;\n \n+use 
Drupal\\content_moderation\\StateTransitionValidationInterface;\n use Drupal\\Core\\DependencyInjection\\ContainerInjectionInterface;\n+use Drupal\\Core\\Entity\\ContentEntityInterface;\n use Drupal\\Core\\Entity\\EntityInterface;\n use Drupal\\Core\\Entity\\EntityTypeManagerInterface;\n use Drupal\\content_moderation\\ModerationInformationInterface;\n+use Drupal\\Core\\Session\\AccountInterface;\n use Symfony\\Component\\DependencyInjection\\ContainerInterface;\n use Symfony\\Component\\Validator\\Constraint;\n use Symfony\\Component\\Validator\\ConstraintValidator;\n@@ -29,6 +32,20 @@ class ModerationStateConstraintValidator extends ConstraintValidator implements\n    */\n   protected $moderationInformation;\n \n+  /**\n+   * The current user.\n+   *\n+   * @var \\Drupal\\Core\\Session\\AccountInterface\n+   */\n+  protected $currentUser;\n+\n+  /**\n+   * The state transition validation service.\n+   *\n+   * @var \\Drupal\\content_moderation\\StateTransitionValidationInterface\n+   */\n+  protected $stateTransitionValidation;\n+\n   /**\n    * Creates a new ModerationStateConstraintValidator instance.\n    *\n@@ -36,10 +53,16 @@ class ModerationStateConstraintValidator extends ConstraintValidator implements\n    *   The entity type manager.\n    * @param \\Drupal\\content_moderation\\ModerationInformationInterface $moderation_information\n    *   The moderation information.\n+   * @param \\Drupal\\Core\\Session\\AccountInterface $current_user\n+   *   The current user.\n+   * @param \\Drupal\\content_moderation\\StateTransitionValidationInterface $state_transition_validation\n+   *   The state transition validation service.\n    */\n-  public function __construct(EntityTypeManagerInterface $entity_type_manager, ModerationInformationInterface $moderation_information) {\n+  public function __construct(EntityTypeManagerInterface $entity_type_manager, ModerationInformationInterface $moderation_information, AccountInterface $current_user, 
StateTransitionValidationInterface $state_transition_validation) {\n     $this->entityTypeManager = $entity_type_manager;\n     $this->moderationInformation = $moderation_information;\n+    $this->currentUser = $current_user;\n+    $this->stateTransitionValidation = $state_transition_validation;\n   }\n \n   /**\n@@ -48,7 +71,9 @@ public function __construct(EntityTypeManagerInterface $entity_type_manager, Mod\n   public static function create(ContainerInterface $container) {\n     return new static(\n       $container->get('entity_type.manager'),\n-      $container->get('content_moderation.moderation_information')\n+      $container->get('content_moderation.moderation_information'),\n+      $container->get('current_user'),\n+      $container->get('content_moderation.state_transition_validation')\n     );\n   }\n \n@@ -76,32 +101,59 @@ public function validate($value, Constraint $constraint) {\n       return;\n     }\n \n+    $new_state = $workflow->getTypePlugin()->getState($entity->moderation_state->value);\n+    $original_state = $this->getOriginalOrInitialState($entity);\n+\n     // If a new state is being set and there is an existing state, validate\n     // there is a valid transition between them.\n+    if (!$original_state->canTransitionTo($new_state->id())) {\n+      $this->context->addViolation($constraint->message, [\n+        '%from' => $original_state->label(),\n+        '%to' => $new_state->label(),\n+      ]);\n+    }\n+    else {\n+      // If we're sure the transition exists, make sure the user has permission\n+      // to use it.\n+      if (!$this->stateTransitionValidation->isTransitionValid($workflow, $original_state, $new_state, $this->currentUser)) {\n+        $this->context->addViolation($constraint->invalidTransitionAccess, [\n+          '%original_state' => $original_state->label(),\n+          '%new_state' => $new_state->label(),\n+        ]);\n+      }\n+    }\n+  }\n+\n+  /**\n+   * Gets the original or initial state of the given 
entity.\n+   *\n+   * When a state is being validated, the original state is used to validate\n+   * that a valid transition exists for target state and the user has access\n+   * to the transition between those two states. If the entity has been\n+   * moderated before, we can load the original unmodified revision and\n+   * translation for this state.\n+   *\n+   * If the entity is new we need to load the initial state from the workflow.\n+   * Even if a value was assigned to the moderation_state field, the initial\n+   * state is used to compute an appropriate transition for the purposes of\n+   * validation.\n+   *\n+   * @return \\Drupal\\workflows\\StateInterface\n+   *   The original or default moderation state.\n+   */\n+  protected function getOriginalOrInitialState(ContentEntityInterface $entity) {\n+    $state = NULL;\n+    $workflow_type = $this->moderationInformation->getWorkflowForEntity($entity)->getTypePlugin();\n     if (!$entity->isNew() && !$this->isFirstTimeModeration($entity)) {\n       $original_entity = $this->entityTypeManager->getStorage($entity->getEntityTypeId())->loadRevision($entity->getLoadedRevisionId());\n       if (!$entity->isDefaultTranslation() && $original_entity->hasTranslation($entity->language()->getId())) {\n         $original_entity = $original_entity->getTranslation($entity->language()->getId());\n       }\n-\n-      // If the state of the original entity doesn't exist on the workflow,\n-      // we cannot do any further validation of transitions, because none will\n-      // be setup for a state that doesn't exist. 
Instead allow any state to\n-      // take its place.\n-      if (!$workflow->getTypePlugin()->hasState($original_entity->moderation_state->value)) {\n-        return;\n-      }\n-\n-      $new_state = $workflow->getTypePlugin()->getState($entity->moderation_state->value);\n-      $original_state = $workflow->getTypePlugin()->getState($original_entity->moderation_state->value);\n-\n-      if (!$original_state->canTransitionTo($new_state->id())) {\n-        $this->context->addViolation($constraint->message, [\n-          '%from' => $original_state->label(),\n-          '%to' => $new_state->label(),\n-        ]);\n+      if ($workflow_type->hasState($original_entity->moderation_state->value)) {\n+        $state = $workflow_type->getState($original_entity->moderation_state->value);\n       }\n     }\n+    return $state ?: $workflow_type->getInitialState($entity);\n   }\n \n   /**\ndiff --git a/core/modules/content_moderation/src/StateTransitionValidation.php b/core/modules/content_moderation/src/StateTransitionValidation.php\nindex 01b2ad8458..35d657e550 100644\n--- a/core/modules/content_moderation/src/StateTransitionValidation.php\n+++ b/core/modules/content_moderation/src/StateTransitionValidation.php\n@@ -4,7 +4,9 @@\n \n use Drupal\\Core\\Entity\\ContentEntityInterface;\n use Drupal\\Core\\Session\\AccountInterface;\n+use Drupal\\workflows\\StateInterface;\n use Drupal\\workflows\\Transition;\n+use Drupal\\workflows\\WorkflowInterface;\n \n /**\n  * Validates whether a certain state transition is allowed.\n@@ -47,4 +49,12 @@ public function getValidTransitions(ContentEntityInterface $entity, AccountInter\n     });\n   }\n \n+  /**\n+   * {@inheritdoc}\n+   */\n+  public function isTransitionValid(WorkflowInterface $workflow, StateInterface $original_state, StateInterface $new_state, AccountInterface $user) {\n+    $transition = $workflow->getTypePlugin()->getTransitionFromStateToState($original_state->id(), $new_state->id());\n+    return 
$user->hasPermission('use ' . $workflow->id() . ' transition ' . $transition->id());\n+  }\n+\n }\ndiff --git a/core/modules/content_moderation/src/StateTransitionValidationInterface.php b/core/modules/content_moderation/src/StateTransitionValidationInterface.php\nindex 1acbf052fd..c793fe53e2 100644\n--- a/core/modules/content_moderation/src/StateTransitionValidationInterface.php\n+++ b/core/modules/content_moderation/src/StateTransitionValidationInterface.php\n@@ -4,6 +4,8 @@\n \n use Drupal\\Core\\Entity\\ContentEntityInterface;\n use Drupal\\Core\\Session\\AccountInterface;\n+use Drupal\\workflows\\StateInterface;\n+use Drupal\\workflows\\WorkflowInterface;\n \n /**\n  * Validates whether a certain state transition is allowed.\n@@ -23,4 +25,21 @@\n    */\n   public function getValidTransitions(ContentEntityInterface $entity, AccountInterface $user);\n \n+  /**\n+   * Checks if a transition between two states if valid for the given user.\n+   *\n+   * @param \\Drupal\\workflows\\WorkflowInterface $workflow\n+   *   The workflow entity.\n+   * @param \\Drupal\\workflows\\StateInterface $original_state\n+   *   The original workflow state.\n+   * @param \\Drupal\\workflows\\StateInterface $new_state\n+   *   The new workflow state.\n+   * @param \\Drupal\\Core\\Session\\AccountInterface $user\n+   *   The user to validate.\n+   *\n+   * @return bool\n+   *   Returns TRUE if transition is valid, otherwise FALSE.\n+   */\n+  public function isTransitionValid(WorkflowInterface $workflow, StateInterface $original_state, StateInterface $new_state, AccountInterface $user);\n+\n }\ndiff --git a/core/modules/content_moderation/tests/src/Functional/ModerationStateNodeTest.php b/core/modules/content_moderation/tests/src/Functional/ModerationStateNodeTest.php\nindex 11deaa72c0..5fd168d0bb 100644\n--- a/core/modules/content_moderation/tests/src/Functional/ModerationStateNodeTest.php\n+++ b/core/modules/content_moderation/tests/src/Functional/ModerationStateNodeTest.php\n@@ 
-158,32 +158,15 @@ public function testNoContentModerationPermissions() {\n     ]);\n     $this->drupalLogin($limited_user);\n \n-    // Check the user can add content, but can't see the moderation state\n-    // select.\n+    // Check the user can see the content entity form, but can't see the\n+    // moderation state select or save the entity form.\n     $this->drupalGet('node/add/moderated_content');\n     $session_assert->statusCodeEquals(200);\n     $session_assert->fieldNotExists('moderation_state[0][state]');\n     $this->drupalPostForm(NULL, [\n       'title[0][value]' => 'moderated content',\n     ], 'Save');\n-\n-    // Manually move the content to archived because the user doesn't have\n-    // permission to do this.\n-    $node = $this->getNodeByTitle('moderated content');\n-    $node->moderation_state->value = 'archived';\n-    $node->save();\n-\n-    // Check the user can see the current state but not the select.\n-    $this->drupalGet('node/' . $node->id() . '/edit');\n-    $session_assert->statusCodeEquals(200);\n-    $session_assert->pageTextContains('Archived');\n-    $session_assert->fieldNotExists('moderation_state[0][state]');\n-    $this->drupalPostForm(NULL, [], 'Save');\n-\n-    // When saving they should still be on the edit form, and see the validation\n-    // error message.\n-    $session_assert->pageTextContains('Edit Moderated content moderated content');\n-    $session_assert->pageTextContains('Invalid state transition from Archived to Archived');\n+    $session_assert->pageTextContains('You do not have access to transition from Draft to Draft');\n   }\n \n }\ndiff --git a/core/modules/content_moderation/tests/src/Kernel/EntityStateChangeValidationTest.php b/core/modules/content_moderation/tests/src/Kernel/EntityStateChangeValidationTest.php\nindex dc1e7f6917..df90bf63c8 100644\n--- a/core/modules/content_moderation/tests/src/Kernel/EntityStateChangeValidationTest.php\n+++ 
b/core/modules/content_moderation/tests/src/Kernel/EntityStateChangeValidationTest.php\n@@ -7,6 +7,7 @@\n use Drupal\\node\\Entity\\Node;\n use Drupal\\node\\Entity\\NodeType;\n use Drupal\\Tests\\content_moderation\\Traits\\ContentModerationTestTrait;\n+use Drupal\\Tests\\user\\Traits\\UserCreationTrait;\n \n /**\n  * @coversDefaultClass \\Drupal\\content_moderation\\Plugin\\Validation\\Constraint\\ModerationStateConstraintValidator\n@@ -15,6 +16,7 @@\n class EntityStateChangeValidationTest extends KernelTestBase {\n \n   use ContentModerationTestTrait;\n+  use UserCreationTrait;\n \n   /**\n    * {@inheritdoc}\n@@ -29,6 +31,13 @@ class EntityStateChangeValidationTest extends KernelTestBase {\n     'workflows',\n   ];\n \n+  /**\n+   * An admin user.\n+   *\n+   * @var \\Drupal\\Core\\Session\\AccountInterface\n+   */\n+  protected $adminUser;\n+\n   /**\n    * {@inheritdoc}\n    */\n@@ -40,6 +49,9 @@ protected function setUp() {\n     $this->installEntitySchema('user');\n     $this->installEntitySchema('content_moderation_state');\n     $this->installConfig('content_moderation');\n+    $this->installSchema('system', ['sequences']);\n+\n+    $this->adminUser = $this->createUser(array_keys($this->container->get('user.permissions')->getPermissions()));\n   }\n \n   /**\n@@ -48,6 +60,8 @@ protected function setUp() {\n    * @covers ::validate\n    */\n   public function testValidTransition() {\n+    $this->setCurrentUser($this->adminUser);\n+\n     $node_type = NodeType::create([\n       'type' => 'example',\n     ]);\n@@ -76,6 +90,8 @@ public function testValidTransition() {\n    * @covers ::validate\n    */\n   public function testInvalidTransition() {\n+    $this->setCurrentUser($this->adminUser);\n+\n     $node_type = NodeType::create([\n       'type' => 'example',\n     ]);\n@@ -125,6 +141,7 @@ public function testInvalidState() {\n    * Test validation with content that has no initial state or an invalid state.\n    */\n   public function 
testInvalidStateWithoutExisting() {\n+    $this->setCurrentUser($this->adminUser);\n     // Create content without moderation enabled for the content type.\n     $node_type = NodeType::create([\n       'type' => 'example',\n@@ -156,15 +173,24 @@ public function testInvalidStateWithoutExisting() {\n     // validating.\n     $workflow->getTypePlugin()->deleteState('deleted_state');\n     $workflow->save();\n+\n+    // When there is an invalid state, the content will revert to \"draft\". This\n+    // will allow a draft to draft transition.\n     $node->moderation_state->value = 'draft';\n     $violations = $node->validate();\n     $this->assertCount(0, $violations);\n+    // This will disallow a draft to archived transition.\n+    $node->moderation_state->value = 'archived';\n+    $violations = $node->validate();\n+    $this->assertCount(1, $violations);\n   }\n \n   /**\n    * Test state transition validation with multiple languages.\n    */\n   public function testInvalidStateMultilingual() {\n+    $this->setCurrentUser($this->adminUser);\n+\n     ConfigurableLanguage::createFromLangcode('fr')->save();\n     $node_type = NodeType::create([\n       'type' => 'example',\n@@ -220,6 +246,8 @@ public function testInvalidStateMultilingual() {\n    * Tests that content without prior moderation information can be moderated.\n    */\n   public function testExistingContentWithNoModeration() {\n+    $this->setCurrentUser($this->adminUser);\n+\n     $node_type = NodeType::create([\n       'type' => 'example',\n     ]);\n@@ -254,6 +282,8 @@ public function testExistingContentWithNoModeration() {\n    * Tests that content without prior moderation information can be translated.\n    */\n   public function testExistingMultilingualContentWithNoModeration() {\n+    $this->setCurrentUser($this->adminUser);\n+\n     // Enable French.\n     ConfigurableLanguage::createFromLangcode('fr')->save();\n \n@@ -293,4 +323,81 @@ public function testExistingMultilingualContentWithNoModeration() 
{\n     $node_fr->save();\n   }\n \n+  /**\n+   * @dataProvider transitionAccessValidationTestCases\n+   */\n+  public function testTransitionAccessValidation($permissions, $target_state, $messages) {\n+    $node_type = NodeType::create([\n+      'type' => 'example',\n+    ]);\n+    $node_type->save();\n+    $workflow = $this->createEditorialWorkflow();\n+    $workflow->getTypePlugin()->addState('foo', 'Foo');\n+    $workflow->getTypePlugin()->addTransition('draft_to_foo', 'Draft to foo', ['draft'], 'foo');\n+    $workflow->getTypePlugin()->addTransition('foo_to_foo', 'Foo to foo', ['foo'], 'foo');\n+    $workflow->getTypePlugin()->addEntityTypeAndBundle('node', 'example');\n+    $workflow->save();\n+\n+    $this->setCurrentUser($this->createUser($permissions));\n+\n+    $node = Node::create([\n+      'type' => 'example',\n+      'title' => 'Test content',\n+      'moderation_state' => $target_state,\n+    ]);\n+    $this->assertTrue($node->isNew());\n+    $violations = $node->validate();\n+    $this->assertCount(count($messages), $violations);\n+    foreach ($messages as $i => $message) {\n+      $this->assertEquals($message, $violations->get($i)->getMessage());\n+    }\n+  }\n+\n+  /**\n+   * Test cases for ::testTransitionAccessValidation.\n+   */\n+  public function transitionAccessValidationTestCases() {\n+    return [\n+      'Invalid transition, no permissions validated' => [\n+        [],\n+        'archived',\n+        ['Invalid state transition from <em class=\"placeholder\">Draft</em> to <em class=\"placeholder\">Archived</em>'],\n+      ],\n+      'Valid transition, missing permission' => [\n+        [],\n+        'published',\n+        ['You do not have access to transition from <em class=\"placeholder\">Draft</em> to <em class=\"placeholder\">Published</em>'],\n+      ],\n+      'Valid transition, granted published permission' => [\n+        ['use editorial transition publish'],\n+        'published',\n+        [],\n+      ],\n+      'Valid 
transition, granted draft permission' => [\n+        ['use editorial transition create_new_draft'],\n+        'draft',\n+        [],\n+      ],\n+      'Valid transition, incorrect permission granted' => [\n+        ['use editorial transition create_new_draft'],\n+        'published',\n+        ['You do not have access to transition from <em class=\"placeholder\">Draft</em> to <em class=\"placeholder\">Published</em>'],\n+      ],\n+      // Test with an additional state and set of transitions, since the\n+      // \"published\" transition can start from either \"draft\" or \"published\", it\n+      // does not capture bugs that fail to correctly distinguish the initial\n+      // workflow state from the set state of a new entity.\n+      'Valid transition, granted foo permission' => [\n+        ['use editorial transition draft_to_foo'],\n+        'foo',\n+        [],\n+      ],\n+      'Valid transition, incorrect  foo permission granted' => [\n+        ['use editorial transition foo_to_foo'],\n+        'foo',\n+        ['You do not have access to transition from <em class=\"placeholder\">Draft</em> to <em class=\"placeholder\">Foo</em>'],\n+      ],\n+    ];\n+  }\n+\n }\ndiff --git a/core/modules/contextual/contextual.module b/core/modules/contextual/contextual.module\nindex b9d61b76d2..8b9fc36fd7 100644\n--- a/core/modules/contextual/contextual.module\n+++ b/core/modules/contextual/contextual.module\n@@ -191,13 +191,19 @@ function _contextual_links_to_id($contextual_links) {\n /**\n  * Unserializes the result of _contextual_links_to_id().\n  *\n- * @see _contextual_links_to_id\n+ * Note that $id is user input. 
Before calling this method the ID should be\n+ * checked against the token stored in the 'data-contextual-token' attribute\n+ * which is passed via the 'tokens' request parameter to\n+ * \\Drupal\\contextual\\ContextualController::render().\n  *\n  * @param string $id\n  *   A serialized representation of a #contextual_links property value array.\n  *\n  * @return array\n  *   The value for a #contextual_links property.\n+ *\n+ * @see _contextual_links_to_id()\n+ * @see \\Drupal\\contextual\\ContextualController::render()\n  */\n function _contextual_id_to_links($id) {\n   $contextual_links = [];\ndiff --git a/core/modules/contextual/contextual.post_update.php b/core/modules/contextual/contextual.post_update.php\nnew file mode 100644\nindex 0000000000..8decad05f0\n--- /dev/null\n+++ b/core/modules/contextual/contextual.post_update.php\n@@ -0,0 +1,14 @@\n+<?php\n+\n+/**\n+ * @file\n+ * Post update functions for Contextual Links.\n+ */\n+\n+/**\n+ * Ensure new page loads use the updated JS and get the updated markup.\n+ */\n+function contextual_post_update_fixed_endpoint_and_markup() {\n+  // Empty update to trigger a change to css_js_query_string and invalidate\n+  // cached markup.\n+}\ndiff --git a/core/modules/contextual/js/contextual.es6.js b/core/modules/contextual/js/contextual.es6.js\nindex 46a57085df..f52059ae61 100644\n--- a/core/modules/contextual/js/contextual.es6.js\n+++ b/core/modules/contextual/js/contextual.es6.js\n@@ -168,12 +168,16 @@\n       // Collect the IDs for all contextual links placeholders.\n       const ids = [];\n       $placeholders.each(function() {\n-        ids.push($(this).attr('data-contextual-id'));\n+        ids.push({\n+          id: $(this).attr('data-contextual-id'),\n+          token: $(this).attr('data-contextual-token'),\n+        });\n       });\n \n-      // Update all contextual links placeholders whose HTML is cached.\n-      const uncachedIDs = _.filter(ids, contextualID => {\n-        const html = 
storage.getItem(`Drupal.contextual.${contextualID}`);\n+      const uncachedIDs = [];\n+      const uncachedTokens = [];\n+      ids.forEach(contextualID => {\n+        const html = storage.getItem(`Drupal.contextual.${contextualID.id}`);\n         if (html && html.length) {\n           // Initialize after the current execution cycle, to make the AJAX\n           // request for retrieving the uncached contextual links as soon as\n@@ -182,13 +186,14 @@\n           // Drupal.contextual.collection.\n           window.setTimeout(() => {\n             initContextual(\n-              $context.find(`[data-contextual-id=\"${contextualID}\"]`),\n+              $context.find(`[data-contextual-id=\"${contextualID.id}\"]`),\n               html,\n             );\n           });\n-          return false;\n+          return;\n         }\n-        return true;\n+        uncachedIDs.push(contextualID.id);\n+        uncachedTokens.push(contextualID.token);\n       });\n \n       // Perform an AJAX request to let the server render the contextual links\n@@ -197,7 +202,7 @@\n         $.ajax({\n           url: Drupal.url('contextual/render'),\n           type: 'POST',\n-          data: { 'ids[]': uncachedIDs },\n+          data: { 'ids[]': uncachedIDs, 'tokens[]': uncachedTokens },\n           dataType: 'json',\n           success(results) {\n             _.each(results, (html, contextualID) => {\ndiff --git a/core/modules/contextual/js/contextual.js b/core/modules/contextual/js/contextual.js\nindex 049233b4e1..d51eba21a9 100644\n--- a/core/modules/contextual/js/contextual.js\n+++ b/core/modules/contextual/js/contextual.js\n@@ -95,25 +95,31 @@\n \n       var ids = [];\n       $placeholders.each(function () {\n-        ids.push($(this).attr('data-contextual-id'));\n+        ids.push({\n+          id: $(this).attr('data-contextual-id'),\n+          token: $(this).attr('data-contextual-token')\n+        });\n       });\n \n-      var uncachedIDs = _.filter(ids, function (contextualID) 
{\n-        var html = storage.getItem('Drupal.contextual.' + contextualID);\n+      var uncachedIDs = [];\n+      var uncachedTokens = [];\n+      ids.forEach(function (contextualID) {\n+        var html = storage.getItem('Drupal.contextual.' + contextualID.id);\n         if (html && html.length) {\n           window.setTimeout(function () {\n-            initContextual($context.find('[data-contextual-id=\"' + contextualID + '\"]'), html);\n+            initContextual($context.find('[data-contextual-id=\"' + contextualID.id + '\"]'), html);\n           });\n-          return false;\n+          return;\n         }\n-        return true;\n+        uncachedIDs.push(contextualID.id);\n+        uncachedTokens.push(contextualID.token);\n       });\n \n       if (uncachedIDs.length > 0) {\n         $.ajax({\n           url: Drupal.url('contextual/render'),\n           type: 'POST',\n-          data: { 'ids[]': uncachedIDs },\n+          data: { 'ids[]': uncachedIDs, 'tokens[]': uncachedTokens },\n           dataType: 'json',\n           success: function success(results) {\n             _.each(results, function (html, contextualID) {\ndiff --git a/core/modules/contextual/src/ContextualController.php b/core/modules/contextual/src/ContextualController.php\nindex 58e42ecd6b..d05c6a8527 100644\n--- a/core/modules/contextual/src/ContextualController.php\n+++ b/core/modules/contextual/src/ContextualController.php\n@@ -2,8 +2,10 @@\n \n namespace Drupal\\contextual;\n \n+use Drupal\\Component\\Utility\\Crypt;\n use Drupal\\Core\\DependencyInjection\\ContainerInjectionInterface;\n use Drupal\\Core\\Render\\RendererInterface;\n+use Drupal\\Core\\Site\\Settings;\n use Symfony\\Component\\DependencyInjection\\ContainerInterface;\n use Symfony\\Component\\HttpFoundation\\JsonResponse;\n use Symfony\\Component\\HttpFoundation\\Request;\n@@ -63,8 +65,16 @@ public function render(Request $request) {\n       throw new BadRequestHttpException(t('No contextual ids specified.'));\n     }\n 
\n+    $tokens = $request->request->get('tokens');\n+    if (!isset($tokens)) {\n+      throw new BadRequestHttpException(t('No contextual ID tokens specified.'));\n+    }\n+\n     $rendered = [];\n-    foreach ($ids as $id) {\n+    foreach ($ids as $key => $id) {\n+      if (!isset($tokens[$key]) || !Crypt::hashEquals($tokens[$key], Crypt::hmacBase64($id, Settings::getHashSalt() . \\Drupal::service('private_key')->get()))) {\n+        throw new BadRequestHttpException('Invalid contextual ID specified.');\n+      }\n       $element = [\n         '#type' => 'contextual_links',\n         '#contextual_links' => _contextual_id_to_links($id),\ndiff --git a/core/modules/contextual/src/Element/ContextualLinksPlaceholder.php b/core/modules/contextual/src/Element/ContextualLinksPlaceholder.php\nindex 97afde9a24..5e993941a6 100644\n--- a/core/modules/contextual/src/Element/ContextualLinksPlaceholder.php\n+++ b/core/modules/contextual/src/Element/ContextualLinksPlaceholder.php\n@@ -2,6 +2,8 @@\n \n namespace Drupal\\contextual\\Element;\n \n+use Drupal\\Component\\Utility\\Crypt;\n+use Drupal\\Core\\Site\\Settings;\n use Drupal\\Core\\Template\\Attribute;\n use Drupal\\Core\\Render\\Element\\RenderElement;\n use Drupal\\Component\\Render\\FormattableMarkup;\n@@ -43,7 +45,12 @@ public function getInfo() {\n    * @see _contextual_links_to_id()\n    */\n   public static function preRenderPlaceholder(array $element) {\n-    $element['#markup'] = new FormattableMarkup('<div@attributes></div>', ['@attributes' => new Attribute(['data-contextual-id' => $element['#id']])]);\n+    $token = Crypt::hmacBase64($element['#id'], Settings::getHashSalt() . 
\\Drupal::service('private_key')->get());\n+    $attribute = new Attribute([\n+      'data-contextual-id' => $element['#id'],\n+      'data-contextual-token' => $token,\n+    ]);\n+    $element['#markup'] = new FormattableMarkup('<div@attributes></div>', ['@attributes' => $attribute]);\n \n     return $element;\n   }\ndiff --git a/core/modules/contextual/tests/src/Functional/ContextualDynamicContextTest.php b/core/modules/contextual/tests/src/Functional/ContextualDynamicContextTest.php\nindex 340b60821f..74a6d504e8 100644\n--- a/core/modules/contextual/tests/src/Functional/ContextualDynamicContextTest.php\n+++ b/core/modules/contextual/tests/src/Functional/ContextualDynamicContextTest.php\n@@ -3,9 +3,10 @@\n namespace Drupal\\Tests\\contextual\\Functional;\n \n use Drupal\\Component\\Serialization\\Json;\n+use Drupal\\Component\\Utility\\Crypt;\n+use Drupal\\Core\\Site\\Settings;\n use Drupal\\Core\\Url;\n use Drupal\\language\\Entity\\ConfigurableLanguage;\n-use Drupal\\Core\\Template\\Attribute;\n use Drupal\\Tests\\BrowserTestBase;\n \n /**\n@@ -140,17 +141,76 @@ public function testDifferentPermissions() {\n     $this->assertRaw('<li class=\"menu-testcontextual-hidden-manage-edit\"><a href=\"' . base_path() . 'menu-test-contextual/1/edit\" class=\"use-ajax\" data-dialog-type=\"modal\" data-is-something>Edit menu - contextual</a></li>');\n   }\n \n+  /**\n+   * Tests the contextual placeholder content is protected by a token.\n+   */\n+  public function testTokenProtection() {\n+    $this->drupalLogin($this->editorUser);\n+\n+    // Create a node that will have a contextual link.\n+    $node1 = $this->drupalCreateNode(['type' => 'article', 'promote' => 1]);\n+\n+    // Now, on the front page, all article nodes should have contextual links\n+    // placeholders, as should the view that contains them.\n+    $id = 'node:node=' . $node1->id() . ':changed=' . $node1->getChangedTime() . 
'&langcode=en';\n+\n+    // Editor user: can access contextual links and can edit articles.\n+    $this->drupalGet('node');\n+    $this->assertContextualLinkPlaceHolder($id);\n+\n+    $http_client = $this->getHttpClient();\n+    $url = Url::fromRoute('contextual.render', [], [\n+      'query' => [\n+        '_format' => 'json',\n+        'destination' => 'node',\n+      ],\n+    ])->setAbsolute()->toString();\n+\n+    $response = $http_client->request('POST', $url, [\n+      'cookies' => $this->getSessionCookies(),\n+      'form_params' => ['ids' => [$id], 'tokens' => []],\n+      'http_errors' => FALSE,\n+    ]);\n+    $this->assertEquals('400', $response->getStatusCode());\n+    $this->assertContains('No contextual ID tokens specified.', (string) $response->getBody());\n+\n+    $response = $http_client->request('POST', $url, [\n+      'cookies' => $this->getSessionCookies(),\n+      'form_params' => ['ids' => [$id], 'tokens' => ['wrong_token']],\n+      'http_errors' => FALSE,\n+    ]);\n+    $this->assertEquals('400', $response->getStatusCode());\n+    $this->assertContains('Invalid contextual ID specified.', (string) $response->getBody());\n+\n+    $response = $http_client->request('POST', $url, [\n+      'cookies' => $this->getSessionCookies(),\n+      'form_params' => ['ids' => [$id], 'tokens' => ['wrong_key' => $this->createContextualIdToken($id)]],\n+      'http_errors' => FALSE,\n+    ]);\n+    $this->assertEquals('400', $response->getStatusCode());\n+    $this->assertContains('Invalid contextual ID specified.', (string) $response->getBody());\n+\n+    $response = $http_client->request('POST', $url, [\n+      'cookies' => $this->getSessionCookies(),\n+      'form_params' => ['ids' => [$id], 'tokens' => [$this->createContextualIdToken($id)]],\n+      'http_errors' => FALSE,\n+    ]);\n+    $this->assertEquals('200', $response->getStatusCode());\n+  }\n+\n   /**\n    * Asserts that a contextual link placeholder with the given id exists.\n    *\n    * @param 
string $id\n    *   A contextual link id.\n-   *\n-   * @return bool\n-   *   The result of the assertion.\n    */\n   protected function assertContextualLinkPlaceHolder($id) {\n-    return $this->assertRaw('<div' . new Attribute(['data-contextual-id' => $id]) . '></div>', format_string('Contextual link placeholder with id @id exists.', ['@id' => $id]));\n+    $this->assertSession()->elementAttributeContains(\n+      'css',\n+      'div[data-contextual-id=\"' . $id . '\"]',\n+      'data-contextual-token',\n+      $this->createContextualIdToken($id)\n+    );\n   }\n \n   /**\n@@ -158,12 +218,9 @@ protected function assertContextualLinkPlaceHolder($id) {\n    *\n    * @param string $id\n    *   A contextual link id.\n-   *\n-   * @return bool\n-   *   The result of the assertion.\n    */\n   protected function assertNoContextualLinkPlaceHolder($id) {\n-    return $this->assertNoRaw('<div' . new Attribute(['data-contextual-id' => $id]) . '></div>', format_string('Contextual link placeholder with id @id does not exist.', ['@id' => $id]));\n+    $this->assertSession()->elementNotExists('css', 'div[data-contextual-id=\"' . $id . 
'\"]');\n   }\n \n   /**\n@@ -178,6 +235,7 @@ protected function assertNoContextualLinkPlaceHolder($id) {\n    *   The response object.\n    */\n   protected function renderContextualLinks($ids, $current_path) {\n+    $tokens = array_map([$this, 'createContextualIdToken'], $ids);\n     $http_client = $this->getHttpClient();\n     $url = Url::fromRoute('contextual.render', [], [\n       'query' => [\n@@ -188,9 +246,22 @@ protected function renderContextualLinks($ids, $current_path) {\n \n     return $http_client->request('POST', $this->buildUrl($url), [\n       'cookies' => $this->getSessionCookies(),\n-      'form_params' => ['ids' => $ids],\n+      'form_params' => ['ids' => $ids, 'tokens' => $tokens],\n       'http_errors' => FALSE,\n     ]);\n   }\n \n+  /**\n+   * Creates a contextual ID token.\n+   *\n+   * @param string $id\n+   *   The contextual ID to create a token for.\n+   *\n+   * @return string\n+   *   The contextual ID token.\n+   */\n+  protected function createContextualIdToken($id) {\n+    return Crypt::hmacBase64($id, Settings::getHashSalt() . 
$this->container->get('private_key')->get());\n+  }\n+\n }\ndiff --git a/core/modules/node/src/Tests/Views/NodeContextualLinksTest.php b/core/modules/node/src/Tests/Views/NodeContextualLinksTest.php\ndeleted file mode 100644\nindex dc23d0ce55..0000000000\n--- a/core/modules/node/src/Tests/Views/NodeContextualLinksTest.php\n+++ /dev/null\n@@ -1,118 +0,0 @@\n-<?php\n-\n-namespace Drupal\\node\\Tests\\Views;\n-\n-use Drupal\\Component\\Serialization\\Json;\n-use Drupal\\user\\Entity\\User;\n-\n-/**\n- * Tests views contextual links on nodes.\n- *\n- * @group node\n- */\n-class NodeContextualLinksTest extends NodeTestBase {\n-\n-  /**\n-   * Modules to enable.\n-   *\n-   * @var array\n-   */\n-  public static $modules = ['contextual'];\n-\n-  /**\n-   * Views used by this test.\n-   *\n-   * @var array\n-   */\n-  public static $testViews = ['test_contextual_links'];\n-\n-  /**\n-   * Tests contextual links.\n-   */\n-  public function testNodeContextualLinks() {\n-    $this->drupalCreateContentType(['type' => 'page']);\n-    $this->drupalCreateNode(['promote' => 1]);\n-    $this->drupalGet('node');\n-\n-    $user = $this->drupalCreateUser(['administer nodes', 'access contextual links']);\n-    $this->drupalLogin($user);\n-\n-    $response = $this->renderContextualLinks(['node:node=1:'], 'node');\n-    $this->assertResponse(200);\n-    $json = Json::decode($response);\n-    $this->setRawContent($json['node:node=1:']);\n-\n-    // @todo Add these back when the functionality for making Views displays\n-    //   appear in contextual links is working again.\n-    // $this->assertLinkByHref('node/1/contextual-links', 0, 'The contextual link to the view was found.');\n-    // $this->assertLink('Test contextual link', 0, 'The contextual link to the view was found.');\n-  }\n-\n-  /**\n-   * Get server-rendered contextual links for the given contextual link ids.\n-   *\n-   * Copied from \\Drupal\\contextual\\Tests\\ContextualDynamicContextTest::renderContextualLinks().\n-   
*\n-   * @param array $ids\n-   *   An array of contextual link ids.\n-   * @param string $current_path\n-   *   The Drupal path for the page for which the contextual links are rendered.\n-   *\n-   * @return string\n-   *   The response body.\n-   */\n-  protected function renderContextualLinks($ids, $current_path) {\n-    // Build POST values.\n-    $post = [];\n-    for ($i = 0; $i < count($ids); $i++) {\n-      $post['ids[' . $i . ']'] = $ids[$i];\n-    }\n-\n-    // Serialize POST values.\n-    foreach ($post as $key => $value) {\n-      // Encode according to application/x-www-form-urlencoded\n-      // Both names and values needs to be urlencoded, according to\n-      // http://www.w3.org/TR/html4/interact/forms.html#h-17.13.4.1\n-      $post[$key] = urlencode($key) . '=' . urlencode($value);\n-    }\n-    $post = implode('&', $post);\n-\n-    // Perform HTTP request.\n-    return $this->curlExec([\n-      CURLOPT_URL => \\Drupal::url('contextual.render', [], ['absolute' => TRUE, 'query' => ['destination' => $current_path]]),\n-      CURLOPT_POST => TRUE,\n-      CURLOPT_POSTFIELDS => $post,\n-      CURLOPT_HTTPHEADER => [\n-        'Accept: application/json',\n-        'Content-Type: application/x-www-form-urlencoded',\n-      ],\n-    ]);\n-  }\n-\n-  /**\n-   * Tests if the node page works if Contextual Links is disabled.\n-   *\n-   * All views have Contextual links enabled by default, even with the\n-   * Contextual links module disabled. 
This tests if no calls are done to the\n-   * Contextual links module by views when it is disabled.\n-   *\n-   * @see https://www.drupal.org/node/2379811\n-   */\n-  public function testPageWithDisabledContextualModule() {\n-    \\Drupal::service('module_installer')->uninstall(['contextual']);\n-    \\Drupal::service('module_installer')->install(['views_ui']);\n-\n-    // Ensure that contextual links don't get called for admin users.\n-    $admin_user = User::load(1);\n-    $admin_user->setPassword('new_password');\n-    $admin_user->pass_raw = 'new_password';\n-    $admin_user->save();\n-\n-    $this->drupalCreateContentType(['type' => 'page']);\n-    $this->drupalCreateNode(['promote' => 1]);\n-\n-    $this->drupalLogin($admin_user);\n-    $this->drupalGet('node');\n-  }\n-\n-}\ndiff --git a/core/modules/node/src/Tests/NodeRevisionsTest.php b/core/modules/node/tests/src/Functional/NodeRevisionsTest.php\nsimilarity index 92%\nrename from core/modules/node/src/Tests/NodeRevisionsTest.php\nrename to core/modules/node/tests/src/Functional/NodeRevisionsTest.php\nindex fdc929a84c..6b16ce8bdc 100644\n--- a/core/modules/node/src/Tests/NodeRevisionsTest.php\n+++ b/core/modules/node/tests/src/Functional/NodeRevisionsTest.php\n@@ -1,6 +1,6 @@\n <?php\n \n-namespace Drupal\\node\\Tests;\n+namespace Drupal\\Tests\\node\\Functional;\n \n use Drupal\\Core\\Url;\n use Drupal\\field\\Entity\\FieldConfig;\n@@ -158,17 +158,6 @@ public function testRevisions() {\n     // Confirm that this is the default revision.\n     $this->assertTrue($node->isDefaultRevision(), 'Third node revision is the default one.');\n \n-    // Confirm that the \"Edit\" and \"Delete\" contextual links appear for the\n-    // default revision.\n-    $ids = ['node:node=' . $node->id() . ':changed=' . $node->getChangedTime()];\n-    $json = $this->renderContextualLinks($ids, 'node/' . $node->id());\n-    $this->verbose($json[$ids[0]]);\n-\n-    $expected = '<li class=\"entitynodeedit-form\"><a href=\"' . 
base_path() . 'node/' . $node->id() . '/edit\">Edit</a></li>';\n-    $this->assertTrue(strstr($json[$ids[0]], $expected), 'The \"Edit\" contextual link is shown for the default revision.');\n-    $expected = '<li class=\"entitynodedelete-form\"><a href=\"' . base_path() . 'node/' . $node->id() . '/delete\">Delete</a></li>';\n-    $this->assertTrue(strstr($json[$ids[0]], $expected), 'The \"Delete\" contextual link is shown for the default revision.');\n-\n     // Confirm that revisions revert properly.\n     $this->drupalPostForm(\"node/\" . $node->id() . \"/revisions/\" . $nodes[1]->getRevisionid() . \"/revert\", [], t('Revert'));\n     $this->assertRaw(t('@type %title has been reverted to the revision from %revision-date.', [\n@@ -188,15 +177,6 @@ public function testRevisions() {\n     $node = node_revision_load($node->getRevisionId());\n     $this->assertFalse($node->isDefaultRevision(), 'Third node revision is not the default one.');\n \n-    // Confirm that \"Edit\" and \"Delete\" contextual links don't appear for\n-    // non-default revision.\n-    $ids = ['node_revision::node=' . $node->id() . '&node_revision=' . $node->getRevisionId() . ':'];\n-    $json = $this->renderContextualLinks($ids, 'node/' . $node->id() . '/revisions/' . $node->getRevisionId() . '/view');\n-    $this->verbose($json[$ids[0]]);\n-\n-    $this->assertFalse(strstr($json[$ids[0]], '<li class=\"entitynodeedit-form\">'), 'The \"Edit\" contextual link is not shown for a non-default revision.');\n-    $this->assertFalse(strstr($json[$ids[0]], '<li class=\"entitynodedelete-form\">'), 'The \"Delete\" contextual link is not shown for a non-default revision.');\n-\n     // Confirm revisions delete properly.\n     $this->drupalPostForm(\"node/\" . $node->id() . \"/revisions/\" . $nodes[1]->getRevisionId() . 
\"/delete\", [], t('Delete'));\n     $this->assertRaw(t('Revision from %revision-date of @type %title has been deleted.', [\ndiff --git a/core/modules/node/src/Tests/NodeTypeTest.php b/core/modules/node/tests/src/Functional/NodeTypeTest.php\nsimilarity index 94%\nrename from core/modules/node/src/Tests/NodeTypeTest.php\nrename to core/modules/node/tests/src/Functional/NodeTypeTest.php\nindex 9938bb0ab5..84c549e8d5 100644\n--- a/core/modules/node/src/Tests/NodeTypeTest.php\n+++ b/core/modules/node/tests/src/Functional/NodeTypeTest.php\n@@ -1,11 +1,12 @@\n <?php\n \n-namespace Drupal\\node\\Tests;\n+namespace Drupal\\Tests\\node\\Functional;\n \n use Drupal\\field\\Entity\\FieldConfig;\n use Drupal\\node\\Entity\\NodeType;\n use Drupal\\Core\\Url;\n-use Drupal\\system\\Tests\\Menu\\AssertBreadcrumbTrait;\n+use Drupal\\Tests\\system\\Functional\\Menu\\AssertBreadcrumbTrait;\n+use Drupal\\Tests\\system\\Functional\\Cache\\AssertPageCacheContextsAndTagsTrait;\n \n /**\n  * Ensures that node type functions work correctly.\n@@ -15,6 +16,7 @@\n class NodeTypeTest extends NodeTestBase {\n \n   use AssertBreadcrumbTrait;\n+  use AssertPageCacheContextsAndTagsTrait;\n \n   /**\n    * Modules to enable.\n@@ -87,6 +89,7 @@ public function testNodeTypeCreation() {\n    * Tests editing a node type using the UI.\n    */\n   public function testNodeTypeEditing() {\n+    $assert = $this->assertSession();\n     $this->drupalPlaceBlock('system_breadcrumb_block');\n     $web_user = $this->drupalCreateUser(['bypass node access', 'administer content types', 'administer node fields']);\n     $this->drupalLogin($web_user);\n@@ -96,8 +99,8 @@ public function testNodeTypeEditing() {\n \n     // Verify that title and body fields are displayed.\n     $this->drupalGet('node/add/page');\n-    $this->assertRaw('Title', 'Title field was found.');\n-    $this->assertRaw('Body', 'Body field was found.');\n+    $assert->pageTextContains('Title');\n+    $assert->pageTextContains('Body');\n \n     // 
Rename the title field.\n     $edit = [\n@@ -106,8 +109,8 @@ public function testNodeTypeEditing() {\n     $this->drupalPostForm('admin/structure/types/manage/page', $edit, t('Save content type'));\n \n     $this->drupalGet('node/add/page');\n-    $this->assertRaw('Foo', 'New title label was displayed.');\n-    $this->assertNoRaw('Title', 'Old title label was not displayed.');\n+    $assert->pageTextContains('Foo');\n+    $assert->pageTextNotContains('Title');\n \n     // Change the name and the description.\n     $edit = [\n@@ -117,11 +120,11 @@ public function testNodeTypeEditing() {\n     $this->drupalPostForm('admin/structure/types/manage/page', $edit, t('Save content type'));\n \n     $this->drupalGet('node/add');\n-    $this->assertRaw('Bar', 'New name was displayed.');\n-    $this->assertRaw('Lorem ipsum', 'New description was displayed.');\n+    $assert->pageTextContains('Bar');\n+    $assert->pageTextContains('Lorem ipsum');\n     $this->clickLink('Bar');\n-    $this->assertRaw('Foo', 'Title field was found.');\n-    $this->assertRaw('Body', 'Body field was found.');\n+    $assert->pageTextContains('Foo');\n+    $assert->pageTextContains('Body');\n \n     // Change the name through the API\n     /** @var \\Drupal\\node\\NodeTypeInterface $node_type */\n@@ -146,7 +149,7 @@ public function testNodeTypeEditing() {\n     ]);\n     // Check that the body field doesn't exist.\n     $this->drupalGet('node/add/page');\n-    $this->assertNoRaw('Body', 'Body field was not found.');\n+    $assert->pageTextNotContains('Body');\n   }\n \n   /**\ndiff --git a/core/modules/node/src/Tests/PagePreviewTest.php b/core/modules/node/tests/src/Functional/PagePreviewTest.php\nsimilarity index 97%\nrename from core/modules/node/src/Tests/PagePreviewTest.php\nrename to core/modules/node/tests/src/Functional/PagePreviewTest.php\nindex 2bc9cd3ce1..70305349d4 100644\n--- a/core/modules/node/src/Tests/PagePreviewTest.php\n+++ 
b/core/modules/node/tests/src/Functional/PagePreviewTest.php\n@@ -1,6 +1,6 @@\n <?php\n \n-namespace Drupal\\node\\Tests;\n+namespace Drupal\\Tests\\node\\Functional;\n \n use Drupal\\comment\\Tests\\CommentTestTrait;\n use Drupal\\Core\\Field\\FieldStorageDefinitionInterface;\n@@ -12,6 +12,7 @@\n use Drupal\\node\\Entity\\NodeType;\n use Drupal\\taxonomy\\Entity\\Term;\n use Drupal\\taxonomy\\Entity\\Vocabulary;\n+use Drupal\\Tests\\TestFileCreationTrait;\n \n /**\n  * Tests the node entity preview functionality.\n@@ -22,6 +23,9 @@ class PagePreviewTest extends NodeTestBase {\n \n   use EntityReferenceTestTrait;\n   use CommentTestTrait;\n+  use TestFileCreationTrait {\n+    getTestFiles as drupalGetTestFiles;\n+  }\n \n   /**\n    * Enable the comment, node and taxonomy modules to test on the preview.\n@@ -177,7 +181,8 @@ public function testPagePreview() {\n     $this->drupalPostForm(NULL, ['field_image[0][alt]' => 'Picture of llamas'], t('Preview'));\n \n     // Check that the preview is displaying the title, body and term.\n-    $this->assertTitle(t('@title | Drupal', ['@title' => $edit[$title_key]]), 'Basic page title is preview.');\n+    $expected_title = $edit[$title_key] . 
' | Drupal';\n+    $this->assertSession()->titleEquals($expected_title);\n     $this->assertEscaped($edit[$title_key], 'Title displayed and escaped.');\n     $this->assertText($edit[$body_key], 'Body displayed.');\n     $this->assertText($edit[$term_key], 'Term displayed.');\n@@ -210,13 +215,13 @@ public function testPagePreview() {\n     $this->assertFieldByName($body_key, $edit[$body_key], 'Body field displayed.');\n     $this->assertFieldByName($term_key, $edit[$term_key], 'Term field displayed.');\n     $this->assertFieldByName('field_image[0][alt]', 'Picture of llamas');\n-    $this->drupalPostAjaxForm(NULL, [], ['field_test_multi_add_more' => t('Add another item')], NULL, [], [], 'node-page-form');\n+    $this->getSession()->getPage()->pressButton('Add another item');\n     $this->assertFieldByName('field_test_multi[0][value]');\n     $this->assertFieldByName('field_test_multi[1][value]');\n \n     // Return to page preview to check everything is as expected.\n     $this->drupalPostForm(NULL, [], t('Preview'));\n-    $this->assertTitle(t('@title | Drupal', ['@title' => $edit[$title_key]]), 'Basic page title is preview.');\n+    $this->assertSession()->titleEquals($expected_title);\n     $this->assertEscaped($edit[$title_key], 'Title displayed and escaped.');\n     $this->assertText($edit[$body_key], 'Body displayed.');\n     $this->assertText($edit[$term_key], 'Term displayed.');\n@@ -353,8 +358,8 @@ public function testPagePreview() {\n     $this->assertText('Basic page ' . $title . ' has been created.');\n     $node = $this->drupalGetNodeByTitle($title);\n     $this->drupalGet('node/' . $node->id() . 
'/edit');\n-    $this->drupalPostAjaxForm(NULL, [], ['field_test_multi_add_more' => t('Add another item')]);\n-    $this->drupalPostAjaxForm(NULL, [], ['field_test_multi_add_more' => t('Add another item')]);\n+    $this->getSession()->getPage()->pressButton('Add another item');\n+    $this->getSession()->getPage()->pressButton('Add another item');\n     $edit = [\n       'field_test_multi[1][value]' => $example_text_2,\n       'field_test_multi[2][value]' => $example_text_3,\ndiff --git a/core/modules/node/tests/src/Functional/Views/NodeContextualLinksTest.php b/core/modules/node/tests/src/Functional/Views/NodeContextualLinksTest.php\nnew file mode 100644\nindex 0000000000..73ccfef758\n--- /dev/null\n+++ b/core/modules/node/tests/src/Functional/Views/NodeContextualLinksTest.php\n@@ -0,0 +1,47 @@\n+<?php\n+\n+namespace Drupal\\Tests\\node\\Functional\\Views;\n+\n+use Drupal\\user\\Entity\\User;\n+\n+/**\n+ * Tests views contextual links on nodes.\n+ *\n+ * @group node\n+ */\n+class NodeContextualLinksTest extends NodeTestBase {\n+\n+  /**\n+   * Modules to enable.\n+   *\n+   * @var array\n+   */\n+  public static $modules = ['contextual'];\n+\n+  /**\n+   * Tests if the node page works if Contextual Links is disabled.\n+   *\n+   * All views have Contextual links enabled by default, even with the\n+   * Contextual links module disabled. 
This tests if no calls are done to the\n+   * Contextual links module by views when it is disabled.\n+   *\n+   * @see https://www.drupal.org/node/2379811\n+   */\n+  public function testPageWithDisabledContextualModule() {\n+    \\Drupal::service('module_installer')->uninstall(['contextual']);\n+    \\Drupal::service('module_installer')->install(['views_ui']);\n+\n+    // Ensure that contextual links don't get called for admin users.\n+    $admin_user = User::load(1);\n+    $admin_user->setPassword('new_password');\n+    $admin_user->passRaw = 'new_password';\n+    $admin_user->save();\n+\n+    $this->drupalCreateContentType(['type' => 'page']);\n+    $this->drupalCreateNode(['promote' => 1]);\n+\n+    $this->drupalLogin($admin_user);\n+    $this->drupalGet('node');\n+  }\n+\n+}\ndiff --git a/core/modules/node/tests/src/FunctionalJavascript/ContextualLinksTest.php b/core/modules/node/tests/src/FunctionalJavascript/ContextualLinksTest.php\nnew file mode 100644\nindex 0000000000..98051262ec\n--- /dev/null\n+++ b/core/modules/node/tests/src/FunctionalJavascript/ContextualLinksTest.php\n@@ -0,0 +1,117 @@\n+<?php\n+\n+namespace Drupal\\Tests\\node\\FunctionalJavascript;\n+\n+use Drupal\\FunctionalJavascriptTests\\WebDriverTestBase;\n+use Drupal\\node\\Entity\\Node;\n+use Drupal\\Tests\\contextual\\FunctionalJavascript\\ContextualLinkClickTrait;\n+\n+/**\n+ * Create a node with revisions and test contextual links.\n+ *\n+ * @group node\n+ */\n+class ContextualLinksTest extends WebDriverTestBase {\n+\n+  use ContextualLinkClickTrait;\n+\n+  /**\n+   * An array of node revisions.\n+   *\n+   * @var \\Drupal\\node\\NodeInterface[]\n+   */\n+  protected $nodes;\n+\n+\n+  /**\n+   * {@inheritdoc}\n+   */\n+  protected static $modules = ['node', 'contextual'];\n+\n+  /**\n+   * {@inheritdoc}\n+   */\n+  protected function setUp() {\n+    parent::setUp();\n+\n+    $this->drupalCreateContentType([\n+      'type' => 'page',\n+      'name' => 'Basic page',\n+      
'display_submitted' => FALSE,\n+    ]);\n+\n+    // Create initial node.\n+    $node = $this->drupalCreateNode();\n+\n+    $nodes = [];\n+\n+    // Get original node.\n+    $nodes[] = clone $node;\n+\n+    // Create two revisions.\n+    $revision_count = 2;\n+    for ($i = 0; $i < $revision_count; $i++) {\n+\n+      // Create revision with a random title and body and update variables.\n+      $node->title = $this->randomMachineName();\n+      $node->body = [\n+        'value' => $this->randomMachineName(32),\n+        'format' => filter_default_format(),\n+      ];\n+      $node->setNewRevision();\n+\n+      $node->save();\n+\n+      // Make sure we get revision information.\n+      $node = Node::load($node->id());\n+      $nodes[] = clone $node;\n+    }\n+\n+    $this->nodes = $nodes;\n+\n+    $this->drupalLogin($this->createUser(\n+      [\n+        'view page revisions',\n+        'revert page revisions',\n+        'delete page revisions',\n+        'edit any page content',\n+        'delete any page content',\n+        'access contextual links',\n+        'administer content types',\n+      ]\n+    ));\n+  }\n+\n+  /**\n+   * Tests the contextual links on revisions.\n+   */\n+  public function testRevisionContextualLinks() {\n+    // Confirm that the \"Edit\" and \"Delete\" contextual links appear for the\n+    // default revision.\n+    $this->drupalGet('node/' . 
$this->nodes[0]->id());\n+    $page = $this->getSession()->getPage();\n+    $page->waitFor(10, function () use ($page) {\n+      return $page->find('css', \"main .contextual\");\n+    });\n+\n+    $this->toggleContextualTriggerVisibility('main');\n+    $page->find('css', 'main .contextual button')->press();\n+    $links = $page->findAll('css', \"main .contextual-links li a\");\n+\n+    $this->assertEquals('Edit', $links[0]->getText());\n+    $this->assertEquals('Delete', $links[1]->getText());\n+\n+    // Confirm that \"Edit\" and \"Delete\" contextual links don't appear for\n+    // non-default revision.\n+    $this->drupalGet(\"node/\" . $this->nodes[0]->id() . \"/revisions/\" . $this->nodes[1]->getRevisionId() . \"/view\");\n+    $this->assertSession()->pageTextContains($this->nodes[1]->getTitle());\n+    $page->waitFor(10, function () use ($page) {\n+      return $page->find('css', \"main .contextual\");\n+    });\n+\n+    $this->toggleContextualTriggerVisibility('main');\n+    $contextual_button = $page->find('css', 'main .contextual button');\n+    $this->assertEmpty(0, $contextual_button);\n+  }\n+\n+}\ndiff --git a/core/modules/path/tests/src/Functional/PathAliasTest.php b/core/modules/path/tests/src/Functional/PathAliasTest.php\nindex b8ac5968db..19115cd27b 100644\n--- a/core/modules/path/tests/src/Functional/PathAliasTest.php\n+++ b/core/modules/path/tests/src/Functional/PathAliasTest.php\n@@ -4,6 +4,7 @@\n \n use Drupal\\Core\\Cache\\Cache;\n use Drupal\\Core\\Database\\Database;\n+use Drupal\\Core\\Url;\n \n /**\n  * Add, edit, delete, and change alias and verify its consistency in the\n@@ -24,7 +25,7 @@ protected function setUp() {\n     parent::setUp();\n \n     // Create test user and log in.\n-    $web_user = $this->drupalCreateUser(['create page content', 'edit own page content', 'administer url aliases', 'create url aliases']);\n+    $web_user = $this->drupalCreateUser(['create page content', 'edit own page content', 'administer url aliases', 
'create url aliases', 'access content overview']);\n     $this->drupalLogin($web_user);\n   }\n \n@@ -327,6 +328,34 @@ public function testNodeAlias() {\n     $node5->delete();\n     $path_alias = \\Drupal::service('path.alias_storage')->lookupPathAlias('/node/' . $node5->id(), $node5->language()->getId());\n     $this->assertFalse($path_alias, 'Alias was successfully deleted when the referenced node was deleted.');\n+\n+    // Create sixth test node.\n+    $node6 = $this->drupalCreateNode();\n+\n+    // Create an invalid alias with two leading slashes and verify that the\n+    // extra slash is removed when the link is generated. This ensures that URL\n+    // aliases cannot be used to inject external URLs.\n+    // @todo The user interface should either display an error message or\n+    //   automatically trim these invalid aliases, rather than allowing them to\n+    //   be silently created, at which point the functional aspects of this\n+    //   test will need to be moved elsewhere and switch to using a\n+    //   programmatically-created alias instead.\n+    $alias = $this->randomMachineName(8);\n+    $edit = ['path[0][alias]' => '//' . 
$alias];\n+    $this->drupalPostForm($node6->toUrl('edit-form'), $edit, t('Save'));\n+    $this->drupalGet(Url::fromRoute('system.admin_content'));\n+    // This checks the link href before clicking it, rather than using\n+    // \\Drupal\\Tests\\BrowserTestBase::assertSession()->addressEquals() after\n+    // clicking it, because the test browser does not always preserve the\n+    // correct number of slashes in the URL when it visits internal links;\n+    // using \\Drupal\\Tests\\BrowserTestBase::assertSession()->addressEquals()\n+    // would actually make the test pass unconditionally on the testbot (or\n+    // anywhere else where Drupal is installed in a subdirectory).\n+    $link_xpath = $this->xpath('//a[normalize-space(text())=:label]', [':label' => $node6->getTitle()]);\n+    $link_href = $link_xpath[0]->getAttribute('href');\n+    $this->assertEquals($link_href, base_path() . $alias);\n+    $this->clickLink($node6->getTitle());\n+    $this->assertResponse(404);\n   }\n \n   /**\ndiff --git a/core/modules/system/src/Tests/Routing/RouterTest.php b/core/modules/system/src/Tests/Routing/RouterTest.php\nindex 83a9c55b39..8d7c43e86a 100644\n--- a/core/modules/system/src/Tests/Routing/RouterTest.php\n+++ b/core/modules/system/src/Tests/Routing/RouterTest.php\n@@ -320,6 +320,13 @@ public function testLeadingSlashes() {\n     $this->drupalGet($url);\n     $this->assertEqual(1, $this->redirectCount, $url . \" redirected to \" . $this->url);\n     $this->assertUrl($request->getUriForPath('/router_test/test1') . '?qs=test');\n+\n+    // Ensure that external URLs in destination query params are not redirected\n+    // to.\n+    $url = $request->getUriForPath('/////////////////////////////////////////////////router_test/test1') . '?qs=test&destination=http://www.example.com%5c@drupal8alt.test';\n+    $this->drupalGet($url);\n+    $this->assertEqual(1, $this->redirectCount, $url . \" redirected to \" . 
$this->url);\n+    $this->assertUrl($request->getUriForPath('/router_test/test1') . '?qs=test');\n   }\n \n }\ndiff --git a/core/tests/Drupal/Tests/Component/Utility/UrlHelperTest.php b/core/tests/Drupal/Tests/Component/Utility/UrlHelperTest.php\nindex d185219c9a..beaa472c26 100644\n--- a/core/tests/Drupal/Tests/Component/Utility/UrlHelperTest.php\n+++ b/core/tests/Drupal/Tests/Component/Utility/UrlHelperTest.php\n@@ -563,6 +563,10 @@ public function providerTestExternalIsLocal() {\n       ['http://example.com/foo', 'http://example.com/bar', FALSE],\n       ['http://example.com', 'http://example.com/bar', FALSE],\n       ['http://example.com/bar', 'http://example.com/bar/', FALSE],\n+      // Ensure \\ is normalised to / since some browsers do that.\n+      ['http://www.example.ca\\@example.com', 'http://example.com', FALSE],\n+      // Some browsers ignore or strip leading control characters.\n+      [\"\\x00//www.example.ca\", 'http://example.com', FALSE],\n     ];\n   }\n \ndiff --git a/core/tests/Drupal/Tests/Core/EventSubscriber/RedirectResponseSubscriberTest.php b/core/tests/Drupal/Tests/Core/EventSubscriber/RedirectResponseSubscriberTest.php\nindex 8659a6f126..85b3da313c 100644\n--- a/core/tests/Drupal/Tests/Core/EventSubscriber/RedirectResponseSubscriberTest.php\n+++ b/core/tests/Drupal/Tests/Core/EventSubscriber/RedirectResponseSubscriberTest.php\n@@ -11,7 +11,6 @@\n use Symfony\\Component\\HttpFoundation\\RedirectResponse;\n use Symfony\\Component\\HttpFoundation\\Request;\n use Symfony\\Component\\HttpKernel\\Event\\FilterResponseEvent;\n-use Symfony\\Component\\HttpKernel\\Event\\GetResponseEvent;\n use Symfony\\Component\\HttpKernel\\HttpKernelInterface;\n use Symfony\\Component\\HttpKernel\\KernelEvents;\n \n@@ -192,74 +191,4 @@ public function providerTestDestinationRedirectWithInvalidUrl() {\n     return $data;\n   }\n \n-  /**\n-   * Tests that $_GET only contain internal URLs.\n-   *\n-   * @covers ::sanitizeDestination\n-   *\n-   * @dataProvider 
providerTestSanitizeDestination\n-   *\n-   * @see \\Drupal\\Component\\Utility\\UrlHelper::isExternal\n-   */\n-  public function testSanitizeDestinationForGet($input, $output) {\n-    $request = new Request();\n-    $request->query->set('destination', $input);\n-\n-    $listener = new RedirectResponseSubscriber($this->urlAssembler, $this->requestContext);\n-    $kernel = $this->getMock('Symfony\\Component\\HttpKernel\\HttpKernelInterface');\n-    $event = new GetResponseEvent($kernel, $request, HttpKernelInterface::MASTER_REQUEST);\n-\n-    $dispatcher = new EventDispatcher();\n-    $dispatcher->addListener(KernelEvents::REQUEST, [$listener, 'sanitizeDestination'], 100);\n-    $dispatcher->dispatch(KernelEvents::REQUEST, $event);\n-\n-    $this->assertEquals($output, $request->query->get('destination'));\n-  }\n-\n-  /**\n-   * Tests that $_REQUEST['destination'] only contain internal URLs.\n-   *\n-   * @covers ::sanitizeDestination\n-   *\n-   * @dataProvider providerTestSanitizeDestination\n-   *\n-   * @see \\Drupal\\Component\\Utility\\UrlHelper::isExternal\n-   */\n-  public function testSanitizeDestinationForPost($input, $output) {\n-    $request = new Request();\n-    $request->request->set('destination', $input);\n-\n-    $listener = new RedirectResponseSubscriber($this->urlAssembler, $this->requestContext);\n-    $kernel = $this->getMock('Symfony\\Component\\HttpKernel\\HttpKernelInterface');\n-    $event = new GetResponseEvent($kernel, $request, HttpKernelInterface::MASTER_REQUEST);\n-\n-    $dispatcher = new EventDispatcher();\n-    $dispatcher->addListener(KernelEvents::REQUEST, [$listener, 'sanitizeDestination'], 100);\n-    $dispatcher->dispatch(KernelEvents::REQUEST, $event);\n-\n-    $this->assertEquals($output, $request->request->get('destination'));\n-  }\n-\n-  /**\n-   * Data provider for testSanitizeDestination().\n-   */\n-  public function providerTestSanitizeDestination() {\n-    $data = [];\n-    // Standard internal example node path is 
present in the 'destination'\n-    // parameter.\n-    $data[] = ['node', 'node'];\n-    // Internal path with one leading slash is allowed.\n-    $data[] = ['/example.com', '/example.com'];\n-    // External URL without scheme is not allowed.\n-    $data[] = ['//example.com/test', ''];\n-    // Internal URL using a colon is allowed.\n-    $data[] = ['example:test', 'example:test'];\n-    // External URL is not allowed.\n-    $data[] = ['http://example.com', ''];\n-    // Javascript URL is allowed because it is treated as an internal URL.\n-    $data[] = ['javascript:alert(0)', 'javascript:alert(0)'];\n-\n-    return $data;\n-  }\n-\n }\ndiff --git a/core/tests/Drupal/Tests/Core/Mail/MailManagerTest.php b/core/tests/Drupal/Tests/Core/Mail/MailManagerTest.php\nindex ea523028a8..de2e3d943f 100644\n--- a/core/tests/Drupal/Tests/Core/Mail/MailManagerTest.php\n+++ b/core/tests/Drupal/Tests/Core/Mail/MailManagerTest.php\n@@ -7,6 +7,7 @@\n \n namespace Drupal\\Tests\\Core\\Mail;\n \n+use Drupal\\Core\\DependencyInjection\\ContainerBuilder;\n use Drupal\\Core\\Render\\RenderContext;\n use Drupal\\Core\\Render\\RendererInterface;\n use Drupal\\Tests\\UnitTestCase;\n@@ -103,6 +104,9 @@ protected function setUpMailManager($interface = []) {\n       'system.mail' => [\n         'interface' => $interface,\n       ],\n+      'system.site' => [\n+        'mail' => 'test@example.com',\n+      ],\n     ]);\n     $logger_factory = $this->getMock('\\Drupal\\Core\\Logger\\LoggerChannelFactoryInterface');\n     $string_translation = $this->getStringTranslationStub();\n@@ -110,6 +114,11 @@ protected function setUpMailManager($interface = []) {\n     // Construct the manager object and override its discovery.\n     $this->mailManager = new TestMailManager(new \\ArrayObject(), $this->cache, $this->moduleHandler, $this->configFactory, $logger_factory, $string_translation, $this->renderer);\n     $this->mailManager->setDiscovery($this->discovery);\n+\n+    // @see 
\\Drupal\\Core\\Plugin\\Factory\\ContainerFactory::createInstance()\n+    $container = new ContainerBuilder();\n+    $container->set('config.factory', $this->configFactory);\n+    \\Drupal::setContainer($container);\n   }\n \n   /**\ndiff --git a/core/tests/Drupal/Tests/Core/Security/RequestSanitizerTest.php b/core/tests/Drupal/Tests/Core/Security/RequestSanitizerTest.php\nindex 53147f3b7d..e828de086e 100644\n--- a/core/tests/Drupal/Tests/Core/Security/RequestSanitizerTest.php\n+++ b/core/tests/Drupal/Tests/Core/Security/RequestSanitizerTest.php\n@@ -197,6 +197,147 @@ public function providerTestRequestSanitization() {\n     return $tests;\n   }\n \n+  /**\n+   * Tests acceptable destinations are not removed from GET requests.\n+   *\n+   * @param string $destination\n+   *   The destination string to test.\n+   *\n+   * @dataProvider providerTestAcceptableDestinations\n+   */\n+  public function testAcceptableDestinationGet($destination) {\n+    // Set up a GET request.\n+    $request = $this->createRequestForTesting(['destination' => $destination]);\n+\n+    $request = RequestSanitizer::sanitize($request, [], TRUE);\n+\n+    $this->assertSame($destination, $request->query->get('destination', NULL));\n+    $this->assertNull($request->request->get('destination', NULL));\n+    $this->assertSame($destination, $_GET['destination']);\n+    $this->assertSame($destination, $_REQUEST['destination']);\n+    $this->assertArrayNotHasKey('destination', $_POST);\n+    $this->assertEquals([], $this->errors);\n+  }\n+\n+  /**\n+   * Tests unacceptable destinations are removed from GET requests.\n+   *\n+   * @param string $destination\n+   *   The destination string to test.\n+   *\n+   * @dataProvider providerTestSanitizedDestinations\n+   */\n+  public function testSanitizedDestinationGet($destination) {\n+    // Set up a GET request.\n+    $request = $this->createRequestForTesting(['destination' => $destination]);\n+\n+    $request = RequestSanitizer::sanitize($request, [], 
TRUE);\n+\n+    $this->assertNull($request->request->get('destination', NULL));\n+    $this->assertNull($request->query->get('destination', NULL));\n+    $this->assertArrayNotHasKey('destination', $_POST);\n+    $this->assertArrayNotHasKey('destination', $_REQUEST);\n+    $this->assertArrayNotHasKey('destination', $_GET);\n+    $this->assertError('Potentially unsafe destination removed from query parameter bag because it points to an external URL.', E_USER_NOTICE);\n+  }\n+\n+  /**\n+   * Tests acceptable destinations are not removed from POST requests.\n+   *\n+   * @param string $destination\n+   *   The destination string to test.\n+   *\n+   * @dataProvider providerTestAcceptableDestinations\n+   */\n+  public function testAcceptableDestinationPost($destination) {\n+    // Set up a POST request.\n+    $request = $this->createRequestForTesting([], ['destination' => $destination]);\n+\n+    $request = RequestSanitizer::sanitize($request, [], TRUE);\n+\n+    $this->assertSame($destination, $request->request->get('destination', NULL));\n+    $this->assertNull($request->query->get('destination', NULL));\n+    $this->assertSame($destination, $_POST['destination']);\n+    $this->assertSame($destination, $_REQUEST['destination']);\n+    $this->assertArrayNotHasKey('destination', $_GET);\n+    $this->assertEquals([], $this->errors);\n+  }\n+\n+  /**\n+   * Tests unacceptable destinations are removed from GET requests.\n+   *\n+   * @param string $destination\n+   *   The destination string to test.\n+   *\n+   * @dataProvider providerTestSanitizedDestinations\n+   */\n+  public function testSanitizedDestinationPost($destination) {\n+    // Set up a POST request.\n+    $request = $this->createRequestForTesting([], ['destination' => $destination]);\n+\n+    $request = RequestSanitizer::sanitize($request, [], TRUE);\n+\n+    $this->assertNull($request->request->get('destination', NULL));\n+    $this->assertNull($request->query->get('destination', NULL));\n+    
$this->assertArrayNotHasKey('destination', $_POST);\n+    $this->assertArrayNotHasKey('destination', $_REQUEST);\n+    $this->assertArrayNotHasKey('destination', $_GET);\n+    $this->assertError('Potentially unsafe destination removed from request parameter bag because it points to an external URL.', E_USER_NOTICE);\n+  }\n+\n+  /**\n+   * Creates a request and sets PHP globals for testing.\n+   *\n+   * @param array $query\n+   *   (optional) The GET parameters.\n+   * @param array $request\n+   *   (optional) The POST parameters.\n+   *\n+   * @return \\Symfony\\Component\\HttpFoundation\\Request\n+   *   The request object.\n+   */\n+  protected function createRequestForTesting(array $query = [], array $request = []) {\n+    $request = new Request($query, $request);\n+\n+    // Set up globals.\n+    $_GET = $request->query->all();\n+    $_POST = $request->request->all();\n+    $_COOKIE = $request->cookies->all();\n+    $_REQUEST = array_merge($request->query->all(), $request->request->all());\n+    $request->server->set('QUERY_STRING', http_build_query($request->query->all()));\n+    $_SERVER['QUERY_STRING'] = $request->server->get('QUERY_STRING');\n+    return $request;\n+  }\n+\n+  /**\n+   * Data provider for testing acceptable destinations.\n+   */\n+  public function providerTestAcceptableDestinations() {\n+    $data = [];\n+    // Standard internal example node path is present in the 'destination'\n+    // parameter.\n+    $data[] = ['node'];\n+    // Internal path with one leading slash is allowed.\n+    $data[] = ['/example.com'];\n+    // Internal URL using a colon is allowed.\n+    $data[] = ['example:test'];\n+    // Javascript URL is allowed because it is treated as an internal URL.\n+    $data[] = ['javascript:alert(0)'];\n+    return $data;\n+  }\n+\n+  /**\n+   * Data provider for testing sanitized destinations.\n+   */\n+  public function providerTestSanitizedDestinations() {\n+    $data = [];\n+    // External URL without scheme is not 
allowed.\n+    $data[] = ['//example.com/test'];\n+    // External URL is not allowed.\n+    $data[] = ['http://example.com'];\n+    return $data;\n+  }\n+\n   /**\n    * Catches and logs errors to $this->errors.\n    *\n-- \n2.14.1\n\n"
  },
  {
    "path": "aegir/patches/992540-3-reset_flood_limit_on_password_reset-drush.patch",
    "content": "diff --git modules/user/user.pages.inc modules/user/user.pages.inc\nindex 697a82d..797e3d1 100644\n--- modules/user/user.pages.inc\n+++ modules/user/user.pages.inc\n@@ -135,6 +135,16 @@ function user_pass_reset($form, &$form_state, $uid, $timestamp, $hashed_pass, $a\n           // Let the user's password be changed without the current password check.\n           $token = drupal_hash_base64(drupal_random_bytes(55));\n           $_SESSION['pass_reset_' . $user->uid] = $token;\n+          //clear out flood event for user trying to log in too many times\n+          if (variable_get('user_failed_login_identifier_uid_only', FALSE)) {\n+            $identifier = $account->uid;\n+          }\n+          else {\n+            $identifier = $account->uid . '-' . ip_address();\n+          }\n+          flood_clear_event('failed_login_attempt_user', $identifier);\n+          //also clear out the ip attempts for that user\n+          flood_clear_event('failed_login_attempt_ip');\n           drupal_goto('user/' . $user->uid . '/edit', array('query' => array('pass-reset-token' => $token)));\n         }\n         else {\n@@ -319,6 +329,15 @@ function user_profile_form_submit($form, &$form_state) {\n     // Remove the password reset tag since a new password was saved.\n     unset($_SESSION['pass_reset_'. $account->uid]);\n   }\n+\n+  // Clear the flood table. 
Since we don't know the IP address for this user\n+  // we can't use flood_clear_event because we need to use the LIKE operator.\n+  $identifier = $account->uid .'-%';\n+  db_delete('flood')\n+    ->condition('event', 'failed_login_attempt_user')\n+    ->condition('identifier', $identifier, 'LIKE')\n+    ->execute();\n+\n   // Clear the page cache because pages can contain usernames and/or profile information:\n   cache_clear_all();\n \ndiff --git modules/user/user.test modules/user/user.test\nindex 6ecbfac..5c13145 100644\n--- modules/user/user.test\n+++ modules/user/user.test\n@@ -396,6 +396,67 @@ class UserLoginTestCase extends DrupalWebTestCase {\n   }\n \n   /**\n+    * Test that flood events are removed after an account has been updated.\n+    */\n+   function testUpdatedUserFloodControl() {\n+     // Set a high global limit out so that it is not relevant in the test.\n+     variable_set('user_failed_login_ip_limit', 4000);\n+     // Set the per-user login limit.\n+     variable_set('user_failed_login_user_limit', 3);\n+\n+     $user1 = $this->drupalCreateUser(array('administer users'));\n+     $user2 = $this->drupalCreateUser(array());\n+     $user2->pass_raw .= 'incorrect';\n+\n+     // Try 3 failed logins.\n+     for ($i = 0; $i < 3; $i++) {\n+       $this->assertFailedLogin($user2);\n+     }\n+\n+     // The next login trial should result in an user-based flood error message.\n+     $this->assertFailedLogin($user2, 'user');\n+\n+     // Update the account and assert the user can login again.\n+     $this->drupalLogin($user1);\n+     $user2->pass_raw = 'goodpass';\n+     $edit = array(\n+       'pass[pass1]' => $user2->pass_raw,\n+       'pass[pass2]' => $user2->pass_raw,\n+     );\n+     $this->drupalPost('user/' . $user2->uid . 
'/edit', $edit, t('Save'));\n+     $this->drupalLogout();\n+     $this->drupalLogin($user2);\n+   }\n+\n+   /**\n+    * Test that flood events are removed after password reset.\n+    */\n+   function testResetPasswordFloodControl() {\n+     // Set a high global limit out so that it is not relevant in the test.\n+     variable_set('user_failed_login_ip_limit', 4000);\n+     // Set the per-user login limit.\n+     variable_set('user_failed_login_user_limit', 3);\n+\n+     $user1 = $this->drupalCreateUser();\n+     $correct = $user1->pass_raw;\n+     $user1->pass_raw .= 'incorrect';\n+\n+     // Try 3 failed logins.\n+     for ($i = 0; $i < 3; $i++) {\n+       $this->assertFailedLogin($user1);\n+     }\n+\n+     // The next login trial should result in an user-based flood error message.\n+     $this->assertFailedLogin($user1, 'user');\n+\n+     // Request new password, logout and login.\n+     $this->drupalPost(user_pass_reset_url($user1), array(), t('Log in'));\n+     $this->drupalLogout();\n+     $user1->pass_raw = $correct;\n+     $this->drupalLogin($user1);\n+   }\n+\n+  /**\n    * Test that user password is re-hashed upon login after changing $count_log2.\n    */\n   function testPasswordRehashOnLogin() {\n"
  },
  {
    "path": "aegir/patches/MailManagerReplacement.php.patch",
    "content": "--- src/MailManagerReplacement.php.org\t2024-01-25 12:04:50.463753755 +0100\n+++ src/MailManagerReplacement.php\t2024-01-25 12:02:53.223548904 +0100\n@@ -86,17 +86,19 @@\n \n     // Create an email from the array.\n     $builder = $this->emailBuilderManager->createInstanceFromMessage($message);\n-    $email = $builder->fromArray($this->emailFactory, $message);\n+    if ($builder) {\n+      $email = $builder->fromArray($this->emailFactory, $message);\n \n-    if ($send) {\n-      $message['result'] = $email->send();\n-    }\n-    else {\n-      // We set 'result' to NULL, because FALSE indicates an error in sending.\n-      $message['result'] = NULL;\n-    }\n+      if ($send) {\n+        $message['result'] = $email->send();\n+      }\n+      else {\n+        // We set 'result' to NULL, because FALSE indicates an error in sending.\n+        $message['result'] = NULL;\n+      }\n \n-    $this->legacyHelper->emailToArray($email, $message);\n+      $this->legacyHelper->emailToArray($email, $message);\n+    }\n     return $message;\n   }\n \n"
  },
  {
    "path": "aegir/patches/PHP-5.6.31-OpenSSL-1.1.0-compatibility-20170801.patch",
    "content": "diff -rupN php-5.6.31.orig/ext/openssl/openssl.c php-5.6.31/ext/openssl/openssl.c\n--- php-5.6.31.orig/ext/openssl/openssl.c\t2017-07-06 00:25:00.000000000 +0200\n+++ php-5.6.31/ext/openssl/openssl.c\t2017-08-01 10:55:28.108819344 +0200\n@@ -42,6 +42,12 @@\n \n /* OpenSSL includes */\n #include <openssl/evp.h>\n+#if OPENSSL_VERSION_NUMBER >= 0x10002000L\n+#include <openssl/bn.h>\n+#include <openssl/rsa.h>\n+#include <openssl/dsa.h>\n+#include <openssl/dh.h>\n+#endif\n #include <openssl/x509.h>\n #include <openssl/x509v3.h>\n #include <openssl/crypto.h>\n@@ -531,6 +537,133 @@ zend_module_entry openssl_module_entry =\n ZEND_GET_MODULE(openssl)\n #endif\n \n+/* {{{ OpenSSL compatibility functions and macros */\n+#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined (LIBRESSL_VERSION_NUMBER)\n+#define EVP_PKEY_get0_RSA(_pkey) _pkey->pkey.rsa\n+#define EVP_PKEY_get0_DH(_pkey) _pkey->pkey.dh\n+#define EVP_PKEY_get0_DSA(_pkey) _pkey->pkey.dsa\n+#define EVP_PKEY_get0_EC_KEY(_pkey) _pkey->pkey.ec\n+\n+static int RSA_set0_key(RSA *r, BIGNUM *n, BIGNUM *e, BIGNUM *d)\n+{\n+\tr->n = n;\n+\tr->e = e;\n+\tr->d = d;\n+\n+\treturn 1;\n+}\n+\n+static int RSA_set0_factors(RSA *r, BIGNUM *p, BIGNUM *q)\n+{\n+\tr->p = p;\n+\tr->q = q;\n+\n+\treturn 1;\n+}\n+\n+static int RSA_set0_crt_params(RSA *r, BIGNUM *dmp1, BIGNUM *dmq1, BIGNUM *iqmp)\n+{\n+\tr->dmp1 = dmp1;\n+\tr->dmq1 = dmq1;\n+\tr->iqmp = iqmp;\n+\n+\treturn 1;\n+}\n+\n+static void RSA_get0_key(const RSA *r, const BIGNUM **n, const BIGNUM **e, const BIGNUM **d)\n+{\n+\t*n = r->n;\n+\t*e = r->e;\n+\t*d = r->d;\n+}\n+\n+static void RSA_get0_factors(const RSA *r, const BIGNUM **p, const BIGNUM **q)\n+{\n+\t*p = r->p;\n+\t*q = r->q;\n+}\n+\n+static void RSA_get0_crt_params(const RSA *r, const BIGNUM **dmp1, const BIGNUM **dmq1, const BIGNUM **iqmp)\n+{\n+\t*dmp1 = r->dmp1;\n+\t*dmq1 = r->dmq1;\n+\t*iqmp = r->iqmp;\n+}\n+\n+static void DH_get0_pqg(const DH *dh, const BIGNUM **p, const BIGNUM **q, const BIGNUM 
**g)\n+{\n+\t*p = dh->p;\n+\t*q = dh->q;\n+\t*g = dh->g;\n+}\n+\n+static int DH_set0_pqg(DH *dh, BIGNUM *p, BIGNUM *q, BIGNUM *g)\n+{\n+\tdh->p = p;\n+\tdh->q = q;\n+\tdh->g = g;\n+\n+\treturn 1;\n+}\n+\n+static void DH_get0_key(const DH *dh, const BIGNUM **pub_key, const BIGNUM **priv_key)\n+{\n+\t*pub_key = dh->pub_key;\n+\t*priv_key = dh->priv_key;\n+}\n+\n+static int DH_set0_key(DH *dh, BIGNUM *pub_key, BIGNUM *priv_key)\n+{\n+\tdh->pub_key = pub_key;\n+\tdh->priv_key = priv_key;\n+\n+\treturn 1;\n+}\n+\n+static void DSA_get0_pqg(const DSA *d, const BIGNUM **p, const BIGNUM **q, const BIGNUM **g)\n+{\n+\t*p = d->p;\n+\t*q = d->q;\n+\t*g = d->g;\n+}\n+\n+int DSA_set0_pqg(DSA *d, BIGNUM *p, BIGNUM *q, BIGNUM *g)\n+{\n+\td->p = p;\n+\td->q = q;\n+\td->g = g;\n+\n+\treturn 1;\n+}\n+\n+static void DSA_get0_key(const DSA *d, const BIGNUM **pub_key, const BIGNUM **priv_key)\n+{\n+\t*pub_key = d->pub_key;\n+\t*priv_key = d->priv_key;\n+}\n+\n+int DSA_set0_key(DSA *d, BIGNUM *pub_key, BIGNUM *priv_key)\n+{\n+\td->pub_key = pub_key;\n+\td->priv_key = priv_key;\n+\n+\treturn 1;\n+}\n+\n+#if OPENSSL_VERSION_NUMBER < 0x10002000L || defined (LIBRESSL_VERSION_NUMBER)\n+#define EVP_PKEY_id(_pkey) _pkey->type\n+#define EVP_PKEY_base_id(_key) EVP_PKEY_type(_key->type)\n+\n+static int X509_get_signature_nid(const X509 *x)\n+{\n+\treturn OBJ_obj2nid(x->sig_alg->algorithm);\n+}\n+\n+#endif\n+\n+#endif\n+/* }}} */\n+\n static int le_key;\n static int le_x509;\n static int le_csr;\n@@ -825,7 +958,7 @@ static int add_oid_section(struct php_x5\n \t}\n \tfor (i = 0; i < sk_CONF_VALUE_num(sktmp); i++) {\n \t\tcnf = sk_CONF_VALUE_value(sktmp, i);\n-\t\tif (OBJ_create(cnf->value, cnf->name, cnf->name) == NID_undef) {\n+\t\tif (OBJ_sn2nid(cnf->name) == NID_undef && OBJ_ln2nid(cnf->name) == NID_undef && OBJ_create(cnf->value, cnf->name, cnf->name) == NID_undef) {\n \t\t\tphp_error_docref(NULL TSRMLS_CC, E_WARNING, \"problem creating object %s=%s\", cnf->name, cnf->value);\n \t\t\treturn 
FAILURE;\n \t\t}\n@@ -967,7 +1100,7 @@ static void php_openssl_dispose_config(s\n }\n /* }}} */\n \n-#ifdef PHP_WIN32\n+#if defined(PHP_WIN32) || (OPENSSL_VERSION_NUMBER >= 0x10100000L && !defined(LIBRESSL_VERSION_NUMBER))\n #define PHP_OPENSSL_RAND_ADD_TIME() ((void) 0)\n #else\n #define PHP_OPENSSL_RAND_ADD_TIME() php_openssl_rand_add_timeval()\n@@ -1053,9 +1186,11 @@ static EVP_MD * php_openssl_get_evp_md_f\n \t\t\tmdtype = (EVP_MD *) EVP_md2();\n \t\t\tbreak;\n #endif\n+#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined (LIBRESSL_VERSION_NUMBER)\n \t\tcase OPENSSL_ALGO_DSS1:\n \t\t\tmdtype = (EVP_MD *) EVP_dss1();\n \t\t\tbreak;\n+#endif\n #if OPENSSL_VERSION_NUMBER >= 0x0090708fL\n \t\tcase OPENSSL_ALGO_SHA224:\n \t\t\tmdtype = (EVP_MD *) EVP_sha224();\n@@ -1146,6 +1281,12 @@ PHP_MINIT_FUNCTION(openssl)\n \tOpenSSL_add_all_digests();\n \tOpenSSL_add_all_algorithms();\n \n+#if !defined(OPENSSL_NO_AES) && defined(EVP_CIPH_CCM_MODE) && OPENSSL_VERSION_NUMBER < 0x100020000\n+\tEVP_add_cipher(EVP_aes_128_ccm());\n+\tEVP_add_cipher(EVP_aes_192_ccm());\n+\tEVP_add_cipher(EVP_aes_256_ccm());\n+#endif\n+\n \tSSL_load_error_strings();\n \n \t/* register a resource id number with OpenSSL so that we can map SSL -> stream structures in\n@@ -1173,7 +1314,9 @@ PHP_MINIT_FUNCTION(openssl)\n #ifdef HAVE_OPENSSL_MD2_H\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_ALGO_MD2\", OPENSSL_ALGO_MD2, CONST_CS|CONST_PERSISTENT);\n #endif\n+#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined (LIBRESSL_VERSION_NUMBER)\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_ALGO_DSS1\", OPENSSL_ALGO_DSS1, CONST_CS|CONST_PERSISTENT);\n+#endif\n #if OPENSSL_VERSION_NUMBER >= 0x0090708fL\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_ALGO_SHA224\", OPENSSL_ALGO_SHA224, CONST_CS|CONST_PERSISTENT);\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_ALGO_SHA256\", OPENSSL_ALGO_SHA256, CONST_CS|CONST_PERSISTENT);\n@@ -1251,7 +1394,9 @@ PHP_MINIT_FUNCTION(openssl)\n \t}\n \n \tphp_stream_xport_register(\"ssl\", php_openssl_ssl_socket_factory 
TSRMLS_CC);\n+#ifndef OPENSSL_NO_SSL3\n \tphp_stream_xport_register(\"sslv3\", php_openssl_ssl_socket_factory TSRMLS_CC);\n+#endif\n #ifndef OPENSSL_NO_SSL2\n \tphp_stream_xport_register(\"sslv2\", php_openssl_ssl_socket_factory TSRMLS_CC);\n #endif\n@@ -1308,7 +1453,9 @@ PHP_MSHUTDOWN_FUNCTION(openssl)\n #ifndef OPENSSL_NO_SSL2\n \tphp_stream_xport_unregister(\"sslv2\" TSRMLS_CC);\n #endif\n+#ifndef OPENSSL_NO_SSL3\n \tphp_stream_xport_unregister(\"sslv3\" TSRMLS_CC);\n+#endif\n \tphp_stream_xport_unregister(\"tls\" TSRMLS_CC);\n \tphp_stream_xport_unregister(\"tlsv1.0\" TSRMLS_CC);\n #if OPENSSL_VERSION_NUMBER >= 0x10001001L\n@@ -1893,6 +2040,7 @@ static int openssl_x509v3_subjectAltName\n {\n \tGENERAL_NAMES *names;\n \tconst X509V3_EXT_METHOD *method = NULL;\n+\tASN1_OCTET_STRING *extension_data;\n \tlong i, length, num;\n \tconst unsigned char *p;\n \n@@ -1901,8 +2049,9 @@ static int openssl_x509v3_subjectAltName\n \t\treturn -1;\n \t}\n \n-\tp = extension->value->data;\n-\tlength = extension->value->length;\n+\textension_data = X509_EXTENSION_get_data(extension);\n+\tp = extension_data->data;\n+\tlength = extension_data->length;\n \tif (method->it) {\n \t\tnames = (GENERAL_NAMES*)(ASN1_item_d2i(NULL, &p, length,\n \t\t\t\t\t\t       ASN1_ITEM_ptr(method->it)));\n@@ -1965,6 +2114,8 @@ PHP_FUNCTION(openssl_x509_parse)\n \tchar * tmpstr;\n \tzval * subitem;\n \tX509_EXTENSION *extension;\n+\tX509_NAME *subject_name;\n+\tchar *cert_name;\n \tchar *extname;\n \tBIO  *bio_out;\n \tBUF_MEM *bio_buf;\n@@ -1979,10 +2130,10 @@ PHP_FUNCTION(openssl_x509_parse)\n \t}\n \tarray_init(return_value);\n \n-\tif (cert->name) {\n-\t\tadd_assoc_string(return_value, \"name\", cert->name, 1);\n-\t}\n-/*\tadd_assoc_bool(return_value, \"valid\", cert->valid); */\n+\tsubject_name = X509_get_subject_name(cert);\n+\tcert_name = X509_NAME_oneline(subject_name, NULL, 0);\n+\tadd_assoc_string(return_value, \"name\", cert_name, 1);\n+\tOPENSSL_free(cert_name);\n \n 
\tadd_assoc_name_entry(return_value, \"subject\", \t\tX509_get_subject_name(cert), useshortnames TSRMLS_CC);\n \t/* hash as used in CA directories to lookup cert by subject name */\n@@ -2008,7 +2159,7 @@ PHP_FUNCTION(openssl_x509_parse)\n \t\tadd_assoc_string(return_value, \"alias\", tmpstr, 1);\n \t}\n \n-\tsig_nid = OBJ_obj2nid((cert)->sig_alg->algorithm);\n+\tsig_nid = X509_get_signature_nid(cert);\n \tadd_assoc_string(return_value, \"signatureTypeSN\", (char*)OBJ_nid2sn(sig_nid), 1);\n \tadd_assoc_string(return_value, \"signatureTypeLN\", (char*)OBJ_nid2ln(sig_nid), 1);\n \tadd_assoc_long(return_value, \"signatureTypeNID\", sig_nid);\n@@ -3217,7 +3368,21 @@ PHP_FUNCTION(openssl_csr_get_public_key)\n \t\tRETURN_FALSE;\n \t}\n \n-\ttpubkey=X509_REQ_get_pubkey(csr);\n+#if OPENSSL_VERSION_NUMBER >= 0x10100000L && !defined(LIBRESSL_VERSION_NUMBER)\n+\t/* Due to changes in OpenSSL 1.1 related to locking when decoding CSR,\n+\t * the pub key is not changed after assigning. It means if we pass\n+\t * a private key, it will be returned including the private part.\n+\t * If we duplicate it, then we get just the public part which is\n+\t * the same behavior as for OpenSSL 1.0 */\n+\tcsr = X509_REQ_dup(csr);\n+#endif\n+\t/* Retrieve the public key from the CSR */\n+\ttpubkey = X509_REQ_get_pubkey(csr);\n+\n+#if OPENSSL_VERSION_NUMBER >= 0x10100000L && !defined(LIBRESSL_VERSION_NUMBER)\n+\t/* We need to free the CSR as it was duplicated */\n+\tX509_REQ_free(csr);\n+#endif\n \tRETVAL_RESOURCE(zend_list_insert(tpubkey, le_key TSRMLS_CC));\n \treturn;\n }\n@@ -3482,13 +3647,20 @@ static int php_openssl_is_private_key(EV\n {\n \tassert(pkey != NULL);\n \n-\tswitch (pkey->type) {\n+\tswitch (EVP_PKEY_id(pkey)) {\n #ifndef NO_RSA\n \t\tcase EVP_PKEY_RSA:\n \t\tcase EVP_PKEY_RSA2:\n-\t\t\tassert(pkey->pkey.rsa != NULL);\n-\t\t\tif (pkey->pkey.rsa != NULL && (NULL == pkey->pkey.rsa->p || NULL == pkey->pkey.rsa->q)) {\n-\t\t\t\treturn 0;\n+\t\t\t{\n+\t\t\t\tRSA *rsa = 
EVP_PKEY_get0_RSA(pkey);\n+\t\t\t\tif (rsa != NULL) {\n+\t\t\t\t\tconst BIGNUM *p, *q;\n+\n+\t\t\t\t\tRSA_get0_factors(rsa, &p, &q);\n+\t\t\t\t\t if (p == NULL || q == NULL) {\n+\t\t\t\t\t\treturn 0;\n+\t\t\t\t\t }\n+\t\t\t\t}\n \t\t\t}\n \t\t\tbreak;\n #endif\n@@ -3498,28 +3670,51 @@ static int php_openssl_is_private_key(EV\n \t\tcase EVP_PKEY_DSA2:\n \t\tcase EVP_PKEY_DSA3:\n \t\tcase EVP_PKEY_DSA4:\n-\t\t\tassert(pkey->pkey.dsa != NULL);\n-\n-\t\t\tif (NULL == pkey->pkey.dsa->p || NULL == pkey->pkey.dsa->q || NULL == pkey->pkey.dsa->priv_key){ \n-\t\t\t\treturn 0;\n+\t\t\t{\n+\t\t\t\tDSA *dsa = EVP_PKEY_get0_DSA(pkey);\n+\t\t\t\tif (dsa != NULL) {\n+\t\t\t\t\tconst BIGNUM *p, *q, *g, *pub_key, *priv_key;\n+\n+\t\t\t\t\tDSA_get0_pqg(dsa, &p, &q, &g);\n+\t\t\t\t\tif (p == NULL || q == NULL) {\n+\t\t\t\t\t\treturn 0;\n+\t\t\t\t\t}\n+ \n+\t\t\t\t\tDSA_get0_key(dsa, &pub_key, &priv_key);\n+\t\t\t\t\tif (priv_key == NULL) {\n+\t\t\t\t\t\treturn 0;\n+\t\t\t\t\t}\n+\t\t\t\t}\n \t\t\t}\n \t\t\tbreak;\n #endif\n #ifndef NO_DH\n \t\tcase EVP_PKEY_DH:\n-\t\t\tassert(pkey->pkey.dh != NULL);\n-\n-\t\t\tif (NULL == pkey->pkey.dh->p || NULL == pkey->pkey.dh->priv_key) {\n-\t\t\t\treturn 0;\n+\t\t\t{\n+\t\t\t\tDH *dh = EVP_PKEY_get0_DH(pkey);\n+\t\t\t\tif (dh != NULL) {\n+\t\t\t\t\tconst BIGNUM *p, *q, *g, *pub_key, *priv_key;\n+\n+\t\t\t\t\tDH_get0_pqg(dh, &p, &q, &g);\n+\t\t\t\t\tif (p == NULL) {\n+\t\t\t\t\t\treturn 0;\n+\t\t\t\t\t}\n+ \n+\t\t\t\t\tDH_get0_key(dh, &pub_key, &priv_key);\n+\t\t\t\t\tif (priv_key == NULL) {\n+\t\t\t\t\t\treturn 0;\n+\t\t\t\t\t}\n+\t\t\t\t}\n \t\t\t}\n \t\t\tbreak;\n #endif\n #ifdef HAVE_EVP_PKEY_EC\n \t\tcase EVP_PKEY_EC:\n-\t\t\tassert(pkey->pkey.ec != NULL);\n-\n-\t\t\tif ( NULL == EC_KEY_get0_private_key(pkey->pkey.ec)) {\n-\t\t\t\treturn 0;\n+\t\t\t{\n+\t\t\t\tEC_KEY *ec = EVP_PKEY_get0_EC_KEY(pkey);\n+\t\t\t\tif (ec != NULL && NULL == EC_KEY_get0_private_key(ec)) {\n+\t\t\t\t\treturn 0;\n+\t\t\t\t}\n \t\t\t}\n \t\t\tbreak;\n #endif\n@@ 
-3531,34 +3726,80 @@ static int php_openssl_is_private_key(EV\n }\n /* }}} */\n \n-#define OPENSSL_PKEY_GET_BN(_type, _name) do {\t\t\t\t\t\t\t\\\n-\t\tif (pkey->pkey._type->_name != NULL) {\t\t\t\t\t\t\t\\\n-\t\t\tint len = BN_num_bytes(pkey->pkey._type->_name);\t\t\t\\\n-\t\t\tchar *str = emalloc(len + 1);\t\t\t\t\t\t\t\t\\\n-\t\t\tBN_bn2bin(pkey->pkey._type->_name, (unsigned char*)str);\t\\\n-\t\t\tstr[len] = 0;                                           \t\\\n-\t\t\tadd_assoc_stringl(_type, #_name, str, len, 0);\t\t\t\t\\\n-\t\t}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n-\t} while (0)\n-\n-#define OPENSSL_PKEY_SET_BN(_ht, _type, _name) do {\t\t\t\t\t\t\\\n-\t\tzval **bn;\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n-\t\tif (zend_hash_find(_ht, #_name, sizeof(#_name),\t(void**)&bn) == SUCCESS && \\\n-\t\t\t\tZ_TYPE_PP(bn) == IS_STRING) {\t\t\t\t\t\t\t\\\n-\t\t\t_type->_name = BN_bin2bn(\t\t\t\t\t\t\t\t\t\\\n-\t\t\t\t(unsigned char*)Z_STRVAL_PP(bn),\t\t\t\t\t\t\\\n-\t \t\t\tZ_STRLEN_PP(bn), NULL);\t\t\t\t\t\t\t\t\t\\\n-\t    }                                                               \\\n+#define OPENSSL_GET_BN(_array, _bn, _name) do { \\\n+\t\tif (_bn != NULL) { \\\n+\t\t\tint len = BN_num_bytes(_bn); \\\n+\t\t\tchar *str = emalloc(len + 1); \\\n+\t\t\tBN_bn2bin(_bn, (unsigned char*)str); \\\n+\t\t\tstr[len] = 0; \\\n+\t\t\tadd_assoc_stringl(_array, #_name, str, len, 0); \\\n+\t\t} \\\n \t} while (0);\n \n+#define OPENSSL_PKEY_GET_BN(_type, _name) OPENSSL_GET_BN(_type, _name, _name)\n+\n+#define OPENSSL_PKEY_SET_BN(_data, _name) do { \\\n+\t\tzval **bn; \\\n+\t\tif (zend_hash_find(Z_ARRVAL_P(_data), #_name, sizeof(#_name),(void**)&bn) == SUCCESS && \\\n+\t\t\t\tZ_TYPE_PP(bn) == IS_STRING) { \\\n+\t\t\t_name = BN_bin2bn( \\\n+\t\t\t\t(unsigned char*)Z_STRVAL_PP(bn), \\\n+\t\t\t\tZ_STRLEN_PP(bn), NULL); \\\n+\t\t} else { \\\n+\t\t\t_name = NULL; \\\n+\t\t} \\\n+ \t} while (0);\n+\n+/* {{{ php_openssl_pkey_init_rsa */\n+zend_bool php_openssl_pkey_init_and_assign_rsa(EVP_PKEY *pkey, 
RSA *rsa, zval *data)\n+{\n+\tBIGNUM *n, *e, *d, *p, *q, *dmp1, *dmq1, *iqmp;\n+\n+\tOPENSSL_PKEY_SET_BN(data, n);\n+\tOPENSSL_PKEY_SET_BN(data, e);\n+\tOPENSSL_PKEY_SET_BN(data, d);\n+\tif (!n || !d || !RSA_set0_key(rsa, n, e, d)) {\n+\t\treturn 0;\n+\t}\n+\n+\tOPENSSL_PKEY_SET_BN(data, p);\n+\tOPENSSL_PKEY_SET_BN(data, q);\n+\tif ((p || q) && !RSA_set0_factors(rsa, p, q)) {\n+\t\treturn 0;\n+\t}\n+\n+\tOPENSSL_PKEY_SET_BN(data, dmp1);\n+\tOPENSSL_PKEY_SET_BN(data, dmq1);\n+\tOPENSSL_PKEY_SET_BN(data, iqmp);\n+\tif ((dmp1 || dmq1 || iqmp) && !RSA_set0_crt_params(rsa, dmp1, dmq1, iqmp)) {\n+\t\treturn 0;\n+\t}\n+\n+\tif (!EVP_PKEY_assign_RSA(pkey, rsa)) {\n+\t\treturn 0;\n+\t}\n+\n+\treturn 1;\n+}\n+/* }}} */\n+\n /* {{{ php_openssl_pkey_init_dsa */\n-zend_bool php_openssl_pkey_init_dsa(DSA *dsa)\n+zend_bool php_openssl_pkey_init_dsa(DSA *dsa, zval *data)\n {\n-\tif (!dsa->p || !dsa->q || !dsa->g) {\n+\tBIGNUM *p, *q, *g, *priv_key, *pub_key;\n+\tconst BIGNUM *priv_key_const, *pub_key_const;\n+\n+\tOPENSSL_PKEY_SET_BN(data, p);\n+\tOPENSSL_PKEY_SET_BN(data, q);\n+\tOPENSSL_PKEY_SET_BN(data, g);\n+\tif (!p || !q || !g || !DSA_set0_pqg(dsa, p, q, g)) {\n \t\treturn 0;\n \t}\n-\tif (dsa->priv_key || dsa->pub_key) {\n-\t\treturn 1;\n+\n+\tOPENSSL_PKEY_SET_BN(data, pub_key);\n+\tOPENSSL_PKEY_SET_BN(data, priv_key);\n+\tif (pub_key) {\n+\t\treturn DSA_set0_key(dsa, pub_key, priv_key);\n \t}\n \tPHP_OPENSSL_RAND_ADD_TIME();\n \tif (!DSA_generate_key(dsa)) {\n@@ -3566,7 +3807,8 @@ zend_bool php_openssl_pkey_init_dsa(DSA\n \t}\n \t/* if BN_mod_exp return -1, then DSA_generate_key succeed for failed key\n \t * so we need to double check that public key is created */\n-\tif (!dsa->pub_key || BN_is_zero(dsa->pub_key)) {\n+\tDSA_get0_key(dsa, &pub_key_const, &priv_key_const);\n+\tif (!pub_key_const || BN_is_zero(pub_key_const)) {\n \t\treturn 0;\n \t}\n \t/* all good */\n@@ -3574,14 +3816,66 @@ zend_bool php_openssl_pkey_init_dsa(DSA\n }\n /* }}} */\n \n+/* {{{ 
php_openssl_dh_pub_from_priv */\n+static BIGNUM *php_openssl_dh_pub_from_priv(BIGNUM *priv_key, BIGNUM *g, BIGNUM *p)\n+{\n+\tBIGNUM *pub_key, *priv_key_const_time;\n+\tBN_CTX *ctx;\n+\n+\tpub_key = BN_new();\n+\tif (pub_key == NULL) {\n+\t\treturn NULL;\n+\t}\n+\n+\tpriv_key_const_time = BN_new();\n+\tif (priv_key_const_time == NULL) {\n+\t\tBN_free(pub_key);\n+\t\treturn NULL;\n+\t}\n+\tctx = BN_CTX_new();\n+\tif (ctx == NULL) {\n+\t\tBN_free(pub_key);\n+\t\tBN_free(priv_key_const_time);\n+\t\treturn NULL;\n+\t}\n+\n+\tBN_with_flags(priv_key_const_time, priv_key, BN_FLG_CONSTTIME);\n+\n+\tif (!BN_mod_exp_mont(pub_key, g, priv_key_const_time, p, ctx, NULL)) {\n+\t\tBN_free(pub_key);\n+\t\tpub_key = NULL;\n+\t}\n+\n+\tBN_free(priv_key_const_time);\n+\tBN_CTX_free(ctx);\n+\n+\treturn pub_key;\n+}\n+/* }}} */\n+\n /* {{{ php_openssl_pkey_init_dh */\n-zend_bool php_openssl_pkey_init_dh(DH *dh)\n+zend_bool php_openssl_pkey_init_dh(DH *dh, zval *data)\n {\n-\tif (!dh->p || !dh->g) {\n+\tBIGNUM *p, *q, *g, *priv_key, *pub_key;\n+\n+\tOPENSSL_PKEY_SET_BN(data, p);\n+\tOPENSSL_PKEY_SET_BN(data, q);\n+\tOPENSSL_PKEY_SET_BN(data, g);\n+\tif (!p || !g || !DH_set0_pqg(dh, p, q, g)) {\n \t\treturn 0;\n \t}\n-\tif (dh->pub_key) {\n-\t\treturn 1;\n+\n+\tOPENSSL_PKEY_SET_BN(data, priv_key);\n+\tOPENSSL_PKEY_SET_BN(data, pub_key);\n+\tif (pub_key) {\n+\t\treturn DH_set0_key(dh, pub_key, priv_key);\n+\t}\n+\tif (priv_key) {\n+\t\tpub_key = php_openssl_dh_pub_from_priv(priv_key, g, p);\n+\t\tif (pub_key == NULL) {\n+\t\t\treturn 0;\n+\t\t}\n+\t\treturn DH_set0_key(dh, pub_key, priv_key);\n \t}\n \tPHP_OPENSSL_RAND_ADD_TIME();\n \tif (!DH_generate_key(dh)) {\n@@ -3614,18 +3908,8 @@ PHP_FUNCTION(openssl_pkey_new)\n \t\t    if (pkey) {\n \t\t\t\tRSA *rsa = RSA_new();\n \t\t\t\tif (rsa) {\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, n);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, e);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, 
d);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, p);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, q);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, dmp1);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, dmq1);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), rsa, iqmp);\n-\t\t\t\t\tif (rsa->n && rsa->d) {\n-\t\t\t\t\t\tif (EVP_PKEY_assign_RSA(pkey, rsa)) {\n-\t\t\t\t\t\t\tRETURN_RESOURCE(zend_list_insert(pkey, le_key TSRMLS_CC));\n-\t\t\t\t\t\t}\n+\t\t\t\t\tif (php_openssl_pkey_init_and_assign_rsa(pkey, rsa, *data)) {\n+\t\t\t\t\t\tRETURN_RESOURCE(zend_list_insert(pkey, le_key TSRMLS_CC));\n \t\t\t\t\t}\n \t\t\t\t\tRSA_free(rsa);\n \t\t\t\t}\n@@ -3638,12 +3922,7 @@ PHP_FUNCTION(openssl_pkey_new)\n \t\t    if (pkey) {\n \t\t\t\tDSA *dsa = DSA_new();\n \t\t\t\tif (dsa) {\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dsa, p);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dsa, q);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dsa, g);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dsa, priv_key);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dsa, pub_key);\n-\t\t\t\t\tif (php_openssl_pkey_init_dsa(dsa)) {\n+\t\t\t\t\tif (php_openssl_pkey_init_dsa(dsa, *data)) {\n \t\t\t\t\t\tif (EVP_PKEY_assign_DSA(pkey, dsa)) {\n \t\t\t\t\t\t\tRETURN_RESOURCE(zend_list_insert(pkey, le_key TSRMLS_CC));\n \t\t\t\t\t\t}\n@@ -3659,11 +3938,7 @@ PHP_FUNCTION(openssl_pkey_new)\n \t\t    if (pkey) {\n \t\t\t\tDH *dh = DH_new();\n \t\t\t\tif (dh) {\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dh, p);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dh, g);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dh, priv_key);\n-\t\t\t\t\tOPENSSL_PKEY_SET_BN(Z_ARRVAL_PP(data), dh, pub_key);\n-\t\t\t\t\tif (php_openssl_pkey_init_dh(dh)) {\n+\t\t\t\t\tif (php_openssl_pkey_init_dh(dh, *data)) {\n \t\t\t\t\t\tif (EVP_PKEY_assign_DH(pkey, dh)) {\n \t\t\t\t\t\t\tRETURN_RESOURCE(zend_list_insert(pkey, le_key TSRMLS_CC));\n 
\t\t\t\t\t\t}\n@@ -3738,10 +4013,10 @@ PHP_FUNCTION(openssl_pkey_export_to_file\n \t\t\tcipher = NULL;\n \t\t}\n \n-\t\tswitch (EVP_PKEY_type(key->type)) {\n+\t\tswitch (EVP_PKEY_base_id(key)) {\n #ifdef HAVE_EVP_PKEY_EC\n \t\t\tcase EVP_PKEY_EC:\n-\t\t\t\tpem_write = PEM_write_bio_ECPrivateKey(bio_out, EVP_PKEY_get1_EC_KEY(key), cipher, (unsigned char *)passphrase, passphrase_len, NULL, NULL);\n+\t\t\t\tpem_write = PEM_write_bio_ECPrivateKey(bio_out, EVP_PKEY_get0_EC_KEY(key), cipher, (unsigned char *)passphrase, passphrase_len, NULL, NULL);\n \t\t\t\tbreak;\n #endif\n \t\t\tdefault:\n@@ -3807,7 +4082,7 @@ PHP_FUNCTION(openssl_pkey_export)\n \t\t\tcipher = NULL;\n \t\t}\n \n-\t\tswitch (EVP_PKEY_type(key->type)) {\n+\t\tswitch (EVP_PKEY_base_id(key)) {\n #ifdef HAVE_EVP_PKEY_EC\n \t\t\tcase EVP_PKEY_EC:\n \t\t\t\tpem_write = PEM_write_bio_ECPrivateKey(bio_out, EVP_PKEY_get1_EC_KEY(key), cipher, (unsigned char *)passphrase, passphrase_len, NULL, NULL);\n@@ -3928,25 +4203,33 @@ PHP_FUNCTION(openssl_pkey_get_details)\n \t/*TODO: Use the real values once the openssl constants are used \n \t * See the enum at the top of this file\n \t */\n-\tswitch (EVP_PKEY_type(pkey->type)) {\n+\tswitch (EVP_PKEY_base_id(pkey)) {\n \t\tcase EVP_PKEY_RSA:\n \t\tcase EVP_PKEY_RSA2:\n-\t\t\tktype = OPENSSL_KEYTYPE_RSA;\n-\n-\t\t\tif (pkey->pkey.rsa != NULL) {\n-\t\t\t\tzval *rsa;\n-\n-\t\t\t\tALLOC_INIT_ZVAL(rsa);\n-\t\t\t\tarray_init(rsa);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, n);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, e);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, d);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, p);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, q);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, dmp1);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, dmq1);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(rsa, iqmp);\n-\t\t\t\tadd_assoc_zval(return_value, \"rsa\", rsa);\n+\t\t\t{\n+\t\t\t\tRSA *rsa = EVP_PKEY_get0_RSA(pkey);\n+\t\t\t\tktype = OPENSSL_KEYTYPE_RSA;\n+\n+\t\t\t\tif (rsa != NULL) {\n+\t\t\t\t\tzval *z_rsa;\n+\t\t\t\t\tconst BIGNUM *n, 
*e, *d, *p, *q, *dmp1, *dmq1, *iqmp;\n+\n+\t\t\t\t\tRSA_get0_key(rsa, &n, &e, &d);\n+\t\t\t\t\tRSA_get0_factors(rsa, &p, &q);\n+\t\t\t\t\tRSA_get0_crt_params(rsa, &dmp1, &dmq1, &iqmp);\n+\n+\t\t\t\t\tALLOC_INIT_ZVAL(z_rsa);\n+\t\t\t\t\tarray_init(z_rsa);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, n);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, e);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, d);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, p);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, q);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, dmp1);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, dmq1);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_rsa, iqmp);\n+\t\t\t\t\tadd_assoc_zval(return_value, \"rsa\", z_rsa);\n+\t\t\t\t}\n \t\t\t}\n \n \t\t\tbreak;\t\n@@ -3954,42 +4237,55 @@ PHP_FUNCTION(openssl_pkey_get_details)\n \t\tcase EVP_PKEY_DSA2:\n \t\tcase EVP_PKEY_DSA3:\n \t\tcase EVP_PKEY_DSA4:\n-\t\t\tktype = OPENSSL_KEYTYPE_DSA;\n-\n-\t\t\tif (pkey->pkey.dsa != NULL) {\n-\t\t\t\tzval *dsa;\n-\n-\t\t\t\tALLOC_INIT_ZVAL(dsa);\n-\t\t\t\tarray_init(dsa);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dsa, p);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dsa, q);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dsa, g);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dsa, priv_key);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dsa, pub_key);\n-\t\t\t\tadd_assoc_zval(return_value, \"dsa\", dsa);\n+\t\t\t{\n+\t\t\t\tDSA *dsa = EVP_PKEY_get0_DSA(pkey);\n+\t\t\t\tktype = OPENSSL_KEYTYPE_DSA;\n+\n+\t\t\t\tif (dsa != NULL) {\n+\t\t\t\t\tzval *z_dsa;\n+\t\t\t\t\tconst BIGNUM *p, *q, *g, *priv_key, *pub_key;\n+\n+\t\t\t\t\tDSA_get0_pqg(dsa, &p, &q, &g);\n+\t\t\t\t\tDSA_get0_key(dsa, &pub_key, &priv_key);\n+\n+\t\t\t\t\tALLOC_INIT_ZVAL(z_dsa);\n+\t\t\t\t\tarray_init(z_dsa);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dsa, p);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dsa, q);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dsa, g);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dsa, priv_key);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dsa, pub_key);\n+\t\t\t\t\tadd_assoc_zval(return_value, \"dsa\", z_dsa);\n+\t\t\t\t}\n \t\t\t}\n \t\t\tbreak;\n \t\tcase 
EVP_PKEY_DH:\n-\t\t\t\n-\t\t\tktype = OPENSSL_KEYTYPE_DH;\n-\n-\t\t\tif (pkey->pkey.dh != NULL) {\n-\t\t\t\tzval *dh;\n-\n-\t\t\t\tALLOC_INIT_ZVAL(dh);\n-\t\t\t\tarray_init(dh);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dh, p);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dh, g);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dh, priv_key);\n-\t\t\t\tOPENSSL_PKEY_GET_BN(dh, pub_key);\n-\t\t\t\tadd_assoc_zval(return_value, \"dh\", dh);\n+\t\t\t{\n+\t\t\t\tDH *dh = EVP_PKEY_get0_DH(pkey);\n+\t\t\t\tktype = OPENSSL_KEYTYPE_DH;\n+\n+\t\t\t\tif (dh != NULL) {\n+\t\t\t\t\tzval *z_dh;\n+\t\t\t\t\tconst BIGNUM *p, *q, *g, *priv_key, *pub_key;\n+\n+\t\t\t\t\tDH_get0_pqg(dh, &p, &q, &g);\n+\t\t\t\t\tDH_get0_key(dh, &pub_key, &priv_key);\n+\n+\t\t\t\t\tALLOC_INIT_ZVAL(z_dh);\n+\t\t\t\t\tarray_init(z_dh);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dh, p);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dh, g);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dh, priv_key);\n+\t\t\t\t\tOPENSSL_PKEY_GET_BN(z_dh, pub_key);\n+\t\t\t\t\tadd_assoc_zval(return_value, \"dh\", z_dh);\n+\t\t\t\t}\n \t\t\t}\n \n \t\t\tbreak;\n #ifdef HAVE_EVP_PKEY_EC\n \t\tcase EVP_PKEY_EC:\n \t\t\tktype = OPENSSL_KEYTYPE_EC;\n-\t\t\tif (pkey->pkey.ec != NULL) {\n+\t\t\tif (EVP_PKEY_get0_EC_KEY(pkey) != NULL) {\n \t\t\t\tzval *ec;\n \t\t\t\tconst EC_GROUP *ec_group;\n \t\t\t\tint nid;\n@@ -4546,13 +4842,13 @@ PHP_FUNCTION(openssl_private_encrypt)\n \tcryptedlen = EVP_PKEY_size(pkey);\n \tcryptedbuf = emalloc(cryptedlen + 1);\n \n-\tswitch (pkey->type) {\n+\tswitch (EVP_PKEY_id(pkey)) {\n \t\tcase EVP_PKEY_RSA:\n \t\tcase EVP_PKEY_RSA2:\n \t\t\tsuccessful =  (RSA_private_encrypt(data_len, \n \t\t\t\t\t\t(unsigned char *)data, \n \t\t\t\t\t\tcryptedbuf, \n-\t\t\t\t\t\tpkey->pkey.rsa, \n+\t\t\t\t\t\tEVP_PKEY_get0_RSA(pkey), \n \t\t\t\t\t\tpadding) == cryptedlen);\n \t\t\tbreak;\n \t\tdefault:\n@@ -4604,13 +4900,13 @@ PHP_FUNCTION(openssl_private_decrypt)\n \tcryptedlen = EVP_PKEY_size(pkey);\n \tcrypttemp = emalloc(cryptedlen + 1);\n \n-\tswitch (pkey->type) {\n+\tswitch 
(EVP_PKEY_id(pkey)) {\n \t\tcase EVP_PKEY_RSA:\n \t\tcase EVP_PKEY_RSA2:\n \t\t\tcryptedlen = RSA_private_decrypt(data_len, \n \t\t\t\t\t(unsigned char *)data, \n \t\t\t\t\tcrypttemp, \n-\t\t\t\t\tpkey->pkey.rsa, \n+\t\t\t\t\tEVP_PKEY_get0_RSA(pkey), \n \t\t\t\t\tpadding);\n \t\t\tif (cryptedlen != -1) {\n \t\t\t\tcryptedbuf = emalloc(cryptedlen + 1);\n@@ -4669,13 +4965,13 @@ PHP_FUNCTION(openssl_public_encrypt)\n \tcryptedlen = EVP_PKEY_size(pkey);\n \tcryptedbuf = emalloc(cryptedlen + 1);\n \n-\tswitch (pkey->type) {\n+\tswitch (EVP_PKEY_id(pkey)) {\n \t\tcase EVP_PKEY_RSA:\n \t\tcase EVP_PKEY_RSA2:\n \t\t\tsuccessful = (RSA_public_encrypt(data_len, \n \t\t\t\t\t\t(unsigned char *)data, \n \t\t\t\t\t\tcryptedbuf, \n-\t\t\t\t\t\tpkey->pkey.rsa, \n+\t\t\t\t\t\tEVP_PKEY_get0_RSA(pkey), \n \t\t\t\t\t\tpadding) == cryptedlen);\n \t\t\tbreak;\n \t\tdefault:\n@@ -4728,13 +5024,13 @@ PHP_FUNCTION(openssl_public_decrypt)\n \tcryptedlen = EVP_PKEY_size(pkey);\n \tcrypttemp = emalloc(cryptedlen + 1);\n \n-\tswitch (pkey->type) {\n+\tswitch (EVP_PKEY_id(pkey)) {\n \t\tcase EVP_PKEY_RSA:\n \t\tcase EVP_PKEY_RSA2:\n \t\t\tcryptedlen = RSA_public_decrypt(data_len, \n \t\t\t\t\t(unsigned char *)data, \n \t\t\t\t\tcrypttemp, \n-\t\t\t\t\tpkey->pkey.rsa, \n+\t\t\t\t\tEVP_PKEY_get0_RSA(pkey), \n \t\t\t\t\tpadding);\n \t\t\tif (cryptedlen != -1) {\n \t\t\t\tcryptedbuf = emalloc(cryptedlen + 1);\n@@ -4798,7 +5094,7 @@ PHP_FUNCTION(openssl_sign)\n \tlong keyresource = -1;\n \tchar * data;\n \tint data_len;\n-\tEVP_MD_CTX md_ctx;\n+\tEVP_MD_CTX *md_ctx;\n \tzval *method = NULL;\n \tlong signature_algo = OPENSSL_ALGO_SHA1;\n \tconst EVP_MD *mdtype;\n@@ -4831,9 +5127,10 @@ PHP_FUNCTION(openssl_sign)\n \tsiglen = EVP_PKEY_size(pkey);\n \tsigbuf = emalloc(siglen + 1);\n \n-\tEVP_SignInit(&md_ctx, mdtype);\n-\tEVP_SignUpdate(&md_ctx, data, data_len);\n-\tif (EVP_SignFinal (&md_ctx, sigbuf,(unsigned int *)&siglen, pkey)) {\n+\tmd_ctx = EVP_MD_CTX_create();\n+\tEVP_SignInit(md_ctx, 
mdtype);\n+\tEVP_SignUpdate(md_ctx, data, data_len);\n+\tif (EVP_SignFinal (md_ctx, sigbuf,(unsigned int *)&siglen, pkey)) {\n \t\tzval_dtor(signature);\n \t\tsigbuf[siglen] = '\\0';\n \t\tZVAL_STRINGL(signature, (char *)sigbuf, siglen, 0);\n@@ -4842,7 +5139,7 @@ PHP_FUNCTION(openssl_sign)\n \t\tefree(sigbuf);\n \t\tRETVAL_FALSE;\n \t}\n-\tEVP_MD_CTX_cleanup(&md_ctx);\n+\tEVP_MD_CTX_destroy(md_ctx);\n \tif (keyresource == -1) {\n \t\tEVP_PKEY_free(pkey);\n \t}\n@@ -4856,7 +5153,7 @@ PHP_FUNCTION(openssl_verify)\n \tzval **key;\n \tEVP_PKEY *pkey;\n \tint err;\n-\tEVP_MD_CTX     md_ctx;\n+\tEVP_MD_CTX     *md_ctx;\n \tconst EVP_MD *mdtype;\n \tlong keyresource = -1;\n \tchar * data;\tint data_len;\n@@ -4890,10 +5187,11 @@ PHP_FUNCTION(openssl_verify)\n \t\tRETURN_FALSE;\n \t}\n \n-\tEVP_VerifyInit   (&md_ctx, mdtype);\n-\tEVP_VerifyUpdate (&md_ctx, data, data_len);\n-\terr = EVP_VerifyFinal (&md_ctx, (unsigned char *)signature, signature_len, pkey);\n-\tEVP_MD_CTX_cleanup(&md_ctx);\n+\tmd_ctx = EVP_MD_CTX_create();\n+\tEVP_VerifyInit   (md_ctx, mdtype);\n+\tEVP_VerifyUpdate (md_ctx, data, data_len);\n+\terr = EVP_VerifyFinal (md_ctx, (unsigned char *)signature, signature_len, pkey);\n+\tEVP_MD_CTX_destroy(md_ctx);\n \n \tif (keyresource == -1) {\n \t\tEVP_PKEY_free(pkey);\n@@ -4917,7 +5215,7 @@ PHP_FUNCTION(openssl_seal)\n \tchar *method =NULL;\n \tint method_len = 0;\n \tconst EVP_CIPHER *cipher;\n-\tEVP_CIPHER_CTX ctx;\n+\tEVP_CIPHER_CTX *ctx;\n \n \tif (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, \"szza/|s\", &data, &data_len, &sealdata, &ekeys, &pubkeys, &method, &method_len) == FAILURE) {\n \t\treturn;\n@@ -4950,6 +5248,7 @@ PHP_FUNCTION(openssl_seal)\n \tmemset(eks, 0, sizeof(*eks) * nkeys);\n \tkey_resources = safe_emalloc(nkeys, sizeof(long), 0);\n \tmemset(key_resources, 0, sizeof(*key_resources) * nkeys);\n+\tmemset(pkeys, 0, sizeof(*pkeys) * nkeys);\n \n \t/* get the public keys we are using to seal this data */\n 
\tzend_hash_internal_pointer_reset_ex(pubkeysht, &pos);\n@@ -4967,27 +5266,28 @@ PHP_FUNCTION(openssl_seal)\n \t\ti++;\n \t}\n \n-\tif (!EVP_EncryptInit(&ctx,cipher,NULL,NULL)) {\n+\tctx = EVP_CIPHER_CTX_new();\n+\tif (ctx == NULL || !EVP_EncryptInit(ctx,cipher,NULL,NULL)) {\n \t\tRETVAL_FALSE;\n-\t\tEVP_CIPHER_CTX_cleanup(&ctx);\n+\t\tEVP_CIPHER_CTX_free(ctx);\n \t\tgoto clean_exit;\n \t}\n \n #if 0\n \t/* Need this if allow ciphers that require initialization vector */\n-\tivlen = EVP_CIPHER_CTX_iv_length(&ctx);\n+\tivlen = EVP_CIPHER_CTX_iv_length(ctx);\n \tiv = ivlen ? emalloc(ivlen + 1) : NULL;\n #endif\n \t/* allocate one byte extra to make room for \\0 */\n-\tbuf = emalloc(data_len + EVP_CIPHER_CTX_block_size(&ctx));\n-\tEVP_CIPHER_CTX_cleanup(&ctx);\n+\tbuf = emalloc(data_len + EVP_CIPHER_CTX_block_size(ctx));\n+\tEVP_CIPHER_CTX_cleanup(ctx);\n \n-\tif (EVP_SealInit(&ctx, cipher, eks, eksl, NULL, pkeys, nkeys) <= 0 ||\n-\t\t\t!EVP_SealUpdate(&ctx, buf, &len1, (unsigned char *)data, data_len) ||\n-\t\t\t!EVP_SealFinal(&ctx, buf + len1, &len2)) {\n+\tif (EVP_SealInit(ctx, cipher, eks, eksl, NULL, pkeys, nkeys) <= 0 ||\n+\t\t\t!EVP_SealUpdate(ctx, buf, &len1, (unsigned char *)data, data_len) ||\n+\t\t\t!EVP_SealFinal(ctx, buf + len1, &len2)) {\n \t\tRETVAL_FALSE;\n \t\tefree(buf);\n-\t\tEVP_CIPHER_CTX_cleanup(&ctx);\n+\t\tEVP_CIPHER_CTX_free(ctx);\n \t\tgoto clean_exit;\n \t}\n \n@@ -5018,7 +5318,7 @@ PHP_FUNCTION(openssl_seal)\n \t\tefree(buf);\n \t}\n \tRETVAL_LONG(len1 + len2);\n-\tEVP_CIPHER_CTX_cleanup(&ctx);\n+\tEVP_CIPHER_CTX_free(ctx);\n \n clean_exit:\n \tfor (i=0; i<nkeys; i++) {\n@@ -5045,7 +5345,7 @@ PHP_FUNCTION(openssl_open)\n \tint len1, len2;\n \tunsigned char *buf;\n \tlong keyresource = -1;\n-\tEVP_CIPHER_CTX ctx;\n+\tEVP_CIPHER_CTX *ctx;\n \tchar * data;\tint data_len;\n \tchar * ekey;\tint ekey_len;\n \tchar *method =NULL;\n@@ -5074,8 +5374,9 @@ PHP_FUNCTION(openssl_open)\n \t\n \tbuf = emalloc(data_len + 1);\n \n-\tif (EVP_OpenInit(&ctx, 
cipher, (unsigned char *)ekey, ekey_len, NULL, pkey) && EVP_OpenUpdate(&ctx, buf, &len1, (unsigned char *)data, data_len)) {\n-\t\tif (!EVP_OpenFinal(&ctx, buf + len1, &len2) || (len1 + len2 == 0)) {\n+\tctx = EVP_CIPHER_CTX_new();\n+\tif (EVP_OpenInit(ctx, cipher, (unsigned char *)ekey, ekey_len, NULL, pkey) && EVP_OpenUpdate(ctx, buf, &len1, (unsigned char *)data, data_len)) {\n+\t\tif (!EVP_OpenFinal(ctx, buf + len1, &len2) || (len1 + len2 == 0)) {\n \t\t\tefree(buf);\n \t\t\tRETVAL_FALSE;\n \t\t} else {\n@@ -5091,7 +5392,7 @@ PHP_FUNCTION(openssl_open)\n \tif (keyresource == -1) {\n \t\tEVP_PKEY_free(pkey);\n \t}\n-\tEVP_CIPHER_CTX_cleanup(&ctx);\n+\tEVP_CIPHER_CTX_free(ctx);\n }\n /* }}} */\n \n@@ -5151,7 +5452,7 @@ PHP_FUNCTION(openssl_digest)\n \tchar *data, *method;\n \tint data_len, method_len;\n \tconst EVP_MD *mdtype;\n-\tEVP_MD_CTX md_ctx;\n+\tEVP_MD_CTX *md_ctx;\n \tint siglen;\n \tunsigned char *sigbuf;\n \n@@ -5167,9 +5468,10 @@ PHP_FUNCTION(openssl_digest)\n \tsiglen = EVP_MD_size(mdtype);\n \tsigbuf = emalloc(siglen + 1);\n \n-\tEVP_DigestInit(&md_ctx, mdtype);\n-\tEVP_DigestUpdate(&md_ctx, (unsigned char *)data, data_len);\n-\tif (EVP_DigestFinal (&md_ctx, (unsigned char *)sigbuf, (unsigned int *)&siglen)) {\n+\tmd_ctx = EVP_MD_CTX_create();\n+\tEVP_DigestInit(md_ctx, mdtype);\n+\tEVP_DigestUpdate(md_ctx, (unsigned char *)data, data_len);\n+\tif (EVP_DigestFinal (md_ctx, (unsigned char *)sigbuf, (unsigned int *)&siglen)) {\n \t\tif (raw_output) {\n \t\t\tsigbuf[siglen] = '\\0';\n \t\t\tRETVAL_STRINGL((char *)sigbuf, siglen, 0);\n@@ -5185,6 +5487,8 @@ PHP_FUNCTION(openssl_digest)\n \t\tefree(sigbuf);\n \t\tRETVAL_FALSE;\n \t}\n+\n+\tEVP_MD_CTX_destroy(md_ctx);\n }\n /* }}} */\n \n@@ -5230,7 +5534,7 @@ PHP_FUNCTION(openssl_encrypt)\n \tchar *data, *method, *password, *iv = \"\";\n \tint data_len, method_len, password_len, iv_len = 0, max_iv_len;\n \tconst EVP_CIPHER *cipher_type;\n-\tEVP_CIPHER_CTX cipher_ctx;\n+\tEVP_CIPHER_CTX *cipher_ctx;\n \tint 
i=0, outlen, keylen;\n \tunsigned char *outbuf, *key;\n \tzend_bool free_iv;\n@@ -5262,19 +5566,24 @@ PHP_FUNCTION(openssl_encrypt)\n \toutlen = data_len + EVP_CIPHER_block_size(cipher_type);\n \toutbuf = safe_emalloc(outlen, 1, 1);\n \n-\tEVP_EncryptInit(&cipher_ctx, cipher_type, NULL, NULL);\n+\tcipher_ctx = EVP_CIPHER_CTX_new();\n+\tif (!cipher_ctx) {\n+\t\tphp_error_docref(NULL TSRMLS_CC, E_WARNING, \"Failed to create cipher context\");\n+\t\tRETURN_FALSE;\n+\t}\n+\tEVP_EncryptInit(cipher_ctx, cipher_type, NULL, NULL);\n \tif (password_len > keylen) {\n-\t\tEVP_CIPHER_CTX_set_key_length(&cipher_ctx, password_len);\n+\t\tEVP_CIPHER_CTX_set_key_length(cipher_ctx, password_len);\n \t}\n-\tEVP_EncryptInit_ex(&cipher_ctx, NULL, NULL, key, (unsigned char *)iv);\n+\tEVP_EncryptInit_ex(cipher_ctx, NULL, NULL, key, (unsigned char *)iv);\n \tif (options & OPENSSL_ZERO_PADDING) {\n-\t\tEVP_CIPHER_CTX_set_padding(&cipher_ctx, 0);\n+\t\tEVP_CIPHER_CTX_set_padding(cipher_ctx, 0);\n \t}\n \tif (data_len > 0) {\n-\t\tEVP_EncryptUpdate(&cipher_ctx, outbuf, &i, (unsigned char *)data, data_len);\n+\t\tEVP_EncryptUpdate(cipher_ctx, outbuf, &i, (unsigned char *)data, data_len);\n \t}\n \toutlen = i;\n-\tif (EVP_EncryptFinal(&cipher_ctx, (unsigned char *)outbuf + i, &i)) {\n+\tif (EVP_EncryptFinal(cipher_ctx, (unsigned char *)outbuf + i, &i)) {\n \t\toutlen += i;\n \t\tif (options & OPENSSL_RAW_DATA) {\n \t\t\toutbuf[outlen] = '\\0';\n@@ -5301,7 +5610,8 @@ PHP_FUNCTION(openssl_encrypt)\n \tif (free_iv) {\n \t\tefree(iv);\n \t}\n-\tEVP_CIPHER_CTX_cleanup(&cipher_ctx);\n+\tEVP_CIPHER_CTX_cleanup(cipher_ctx);\n+\tEVP_CIPHER_CTX_free(cipher_ctx);\n }\n /* }}} */\n \n@@ -5313,7 +5623,7 @@ PHP_FUNCTION(openssl_decrypt)\n \tchar *data, *method, *password, *iv = \"\";\n \tint data_len, method_len, password_len, iv_len = 0;\n \tconst EVP_CIPHER *cipher_type;\n-\tEVP_CIPHER_CTX cipher_ctx;\n+\tEVP_CIPHER_CTX *cipher_ctx;\n \tint i, outlen, keylen;\n \tunsigned char *outbuf, *key;\n \tint 
base64_str_len;\n@@ -5359,17 +5669,23 @@ PHP_FUNCTION(openssl_decrypt)\n \toutlen = data_len + EVP_CIPHER_block_size(cipher_type);\n \toutbuf = emalloc(outlen + 1);\n \n-\tEVP_DecryptInit(&cipher_ctx, cipher_type, NULL, NULL);\n+\tcipher_ctx = EVP_CIPHER_CTX_new();\n+\tif (!cipher_ctx) {\n+\t\tphp_error_docref(NULL TSRMLS_CC, E_WARNING, \"Failed to create cipher context\");\n+\t\tRETURN_FALSE;\n+\t}\n+\n+\tEVP_DecryptInit(cipher_ctx, cipher_type, NULL, NULL);\n \tif (password_len > keylen) {\n-\t\tEVP_CIPHER_CTX_set_key_length(&cipher_ctx, password_len);\n+\t\tEVP_CIPHER_CTX_set_key_length(cipher_ctx, password_len);\n \t}\n-\tEVP_DecryptInit_ex(&cipher_ctx, NULL, NULL, key, (unsigned char *)iv);\n+\tEVP_DecryptInit_ex(cipher_ctx, NULL, NULL, key, (unsigned char *)iv);\n \tif (options & OPENSSL_ZERO_PADDING) {\n-\t\tEVP_CIPHER_CTX_set_padding(&cipher_ctx, 0);\n+\t\tEVP_CIPHER_CTX_set_padding(cipher_ctx, 0);\n \t}\n-\tEVP_DecryptUpdate(&cipher_ctx, outbuf, &i, (unsigned char *)data, data_len);\n+\tEVP_DecryptUpdate(cipher_ctx, outbuf, &i, (unsigned char *)data, data_len);\n \toutlen = i;\n-\tif (EVP_DecryptFinal(&cipher_ctx, (unsigned char *)outbuf + i, &i)) {\n+\tif (EVP_DecryptFinal(cipher_ctx, (unsigned char *)outbuf + i, &i)) {\n \t\toutlen += i;\n \t\toutbuf[outlen] = '\\0';\n \t\tRETVAL_STRINGL((char *)outbuf, outlen, 0);\n@@ -5386,7 +5702,8 @@ PHP_FUNCTION(openssl_decrypt)\n \tif (base64_str) {\n \t\tefree(base64_str);\n \t}\n- \tEVP_CIPHER_CTX_cleanup(&cipher_ctx);\n+ \tEVP_CIPHER_CTX_cleanup(cipher_ctx);\n+ \tEVP_CIPHER_CTX_free(cipher_ctx);\n }\n /* }}} */\n \n@@ -5424,6 +5741,7 @@ PHP_FUNCTION(openssl_dh_compute_key)\n \tzval *key;\n \tchar *pub_str;\n \tint pub_len;\n+\tDH *dh;\n \tEVP_PKEY *pkey;\n \tBIGNUM *pub;\n \tchar *data;\n@@ -5433,14 +5751,21 @@ PHP_FUNCTION(openssl_dh_compute_key)\n \t\treturn;\n \t}\n \tZEND_FETCH_RESOURCE(pkey, EVP_PKEY *, &key, -1, \"OpenSSL key\", le_key);\n-\tif (!pkey || EVP_PKEY_type(pkey->type) != EVP_PKEY_DH || 
!pkey->pkey.dh) {\n+\tif (pkey == NULL) {\n+\t\tRETURN_FALSE;\n+\t}\n+\tif (EVP_PKEY_base_id(pkey) != EVP_PKEY_DH) {\n+\t\tRETURN_FALSE;\n+\t}\n+\tdh = EVP_PKEY_get0_DH(pkey);\n+\tif (dh == NULL) {\n \t\tRETURN_FALSE;\n \t}\n \n \tpub = BN_bin2bn((unsigned char*)pub_str, pub_len, NULL);\n \n-\tdata = emalloc(DH_size(pkey->pkey.dh) + 1);\n-\tlen = DH_compute_key((unsigned char*)data, pub, pkey->pkey.dh);\n+\tdata = emalloc(DH_size(dh) + 1);\n+\tlen = DH_compute_key((unsigned char*)data, pub, dh);\n \n \tif (len >= 0) {\n \t\tdata[len] = 0;\ndiff -rupN php-5.6.31.orig/ext/openssl/tests/bug41033.phpt php-5.6.31/ext/openssl/tests/bug41033.phpt\n--- php-5.6.31.orig/ext/openssl/tests/bug41033.phpt\t2017-07-06 00:25:00.000000000 +0200\n+++ php-5.6.31/ext/openssl/tests/bug41033.phpt\t2017-08-01 10:49:25.008823468 +0200\n@@ -13,11 +13,11 @@ $pub = 'file://' . dirname(__FILE__) . '\n \n $prkeyid = openssl_get_privatekey($prv, \"1234\");\n $ct = \"Hello I am some text!\";\n-openssl_sign($ct, $signature, $prkeyid, OPENSSL_ALGO_DSS1);\n+openssl_sign($ct, $signature, $prkeyid, OPENSSL_VERSION_NUMBER < 0x10100000 ? OPENSSL_ALGO_DSS1 : OPENSSL_ALGO_SHA1);\n echo \"Signature: \".base64_encode($signature) . \"\\n\";\n \n $pukeyid = openssl_get_publickey($pub);\n-$valid = openssl_verify($ct, $signature, $pukeyid, OPENSSL_ALGO_DSS1);\n+$valid = openssl_verify($ct, $signature, $pukeyid, OPENSSL_VERSION_NUMBER < 0x10100000 ? OPENSSL_ALGO_DSS1 : OPENSSL_ALGO_SHA1);\n echo \"Signature validity: \" . $valid . 
\"\\n\";\n \n \ndiff -rupN php-5.6.31.orig/ext/openssl/tests/bug66501.phpt php-5.6.31/ext/openssl/tests/bug66501.phpt\n--- php-5.6.31.orig/ext/openssl/tests/bug66501.phpt\t2017-07-06 00:25:00.000000000 +0200\n+++ php-5.6.31/ext/openssl/tests/bug66501.phpt\t2017-08-01 10:49:25.008823468 +0200\n@@ -16,7 +16,7 @@ AwEHoUQDQgAEPq4hbIWHvB51rdWr8ejrjWo4qVNW\n sqOTOnMoezkbSmVVMuwz9flvnqHGmQvmug==\r\n -----END EC PRIVATE KEY-----';\r\n $key = openssl_pkey_get_private($pkey);\r\n-$res = openssl_sign($data ='alpha', $sign, $key, 'ecdsa-with-SHA1');\r\n+$res = openssl_sign($data ='alpha', $sign, $key, OPENSSL_VERSION_NUMBER < 0x10100000 ? 'ecdsa-with-SHA1' : 'SHA1');\r\n var_dump($res);\r\n --EXPECTF--\r\n bool(true)\r\ndiff -rupN php-5.6.31.orig/ext/openssl/tests/openssl_error_string_basic.phpt php-5.6.31/ext/openssl/tests/openssl_error_string_basic.phpt\n--- php-5.6.31.orig/ext/openssl/tests/openssl_error_string_basic.phpt\t2017-07-06 00:25:00.000000000 +0200\n+++ php-5.6.31/ext/openssl/tests/openssl_error_string_basic.phpt\t2017-08-01 10:49:25.008823468 +0200\n@@ -105,7 +105,7 @@ expect_openssl_errors('openssl_private_d\n // public encrypt and decrypt with failed padding check and padding\n @openssl_public_encrypt(\"data\", $crypted, $public_key_file, 1000);\n @openssl_public_decrypt(\"data\", $crypted, $public_key_file);\n-expect_openssl_errors('openssl_private_(en|de)crypt padding', ['0906D06C', '04068076', '0407006A', '04067072']);\n+expect_openssl_errors('openssl_private_(en|de)crypt padding', OPENSSL_VERSION_NUMBER < 0x10100000 ? 
['0906D06C', '04068076', '0407006A', '04067072'] : ['0906D06C', '04068076', '04067072']);\n \n // X509\n echo \"X509 errors\\n\";\ndiff -rupN php-5.6.31.orig/ext/openssl/tests/sni_server.phpt php-5.6.31/ext/openssl/tests/sni_server.phpt\n--- php-5.6.31.orig/ext/openssl/tests/sni_server.phpt\t2017-07-06 00:25:00.000000000 +0200\n+++ php-5.6.31/ext/openssl/tests/sni_server.phpt\t2017-08-01 10:49:25.012823468 +0200\n@@ -27,6 +27,9 @@ CODE;\n $clientCode = <<<'CODE'\n     $flags = STREAM_CLIENT_CONNECT;\n     $ctxArr = [\n+        'verify_peer'  => false,\n+        'verify_peer_name' => false,\n+        'allow_self_signed' => true,\n         'cafile' => __DIR__ . '/sni_server_ca.pem',\n         'capture_peer_cert' => true\n     ];\ndiff -rupN php-5.6.31.orig/ext/openssl/xp_ssl.c php-5.6.31/ext/openssl/xp_ssl.c\n--- php-5.6.31.orig/ext/openssl/xp_ssl.c\t2017-07-06 00:25:00.000000000 +0200\n+++ php-5.6.31/ext/openssl/xp_ssl.c\t2017-08-01 10:49:25.012823468 +0200\n@@ -935,7 +935,7 @@ static int set_local_cert(SSL_CTX *ctx,\n static const SSL_METHOD *php_select_crypto_method(long method_value, int is_client TSRMLS_DC) /* {{{ */\n {\n \tif (method_value == STREAM_CRYPTO_METHOD_SSLv2) {\n-#ifndef OPENSSL_NO_SSL2\n+#if !defined(OPENSSL_NO_SSL2) && OPENSSL_VERSION_NUMBER < 0x10100000L\n \t\treturn is_client ? 
SSLv2_client_method() : SSLv2_server_method();\n #else\n \t\tphp_error_docref(NULL TSRMLS_CC, E_WARNING,\n@@ -1588,12 +1588,26 @@ int php_openssl_setup_crypto(php_stream\n }\n /* }}} */\n \n+#define PHP_SSL_MAX_VERSION_LEN 32\n+\n+static char *php_ssl_cipher_get_version(const SSL_CIPHER *c, char *buffer, size_t max_len) /* {{{ */\n+{\n+\tconst char *version = SSL_CIPHER_get_version(c);\n+\tstrncpy(buffer, version, max_len);\n+\tif (max_len <= strlen(version)) {\n+\t\tbuffer[max_len - 1] = 0;\n+\t}\n+\treturn buffer;\n+}\n+/* }}} */\n+\n static zval *capture_session_meta(SSL *ssl_handle) /* {{{ */\n {\n \tzval *meta_arr;\n \tchar *proto_str;\n \tlong proto = SSL_version(ssl_handle);\n \tconst SSL_CIPHER *cipher = SSL_get_current_cipher(ssl_handle);\n+\tchar version_str[PHP_SSL_MAX_VERSION_LEN];\n \n \tswitch (proto) {\n #if OPENSSL_VERSION_NUMBER >= 0x10001001L\n@@ -1611,7 +1625,7 @@ static zval *capture_session_meta(SSL *s\n \tadd_assoc_string(meta_arr, \"protocol\", proto_str, 1);\n \tadd_assoc_string(meta_arr, \"cipher_name\", (char *) SSL_CIPHER_get_name(cipher), 1);\n \tadd_assoc_long(meta_arr, \"cipher_bits\", SSL_CIPHER_get_bits(cipher, NULL));\n-\tadd_assoc_string(meta_arr, \"cipher_version\", SSL_CIPHER_get_version(cipher), 1);\n+\tadd_assoc_string(meta_arr, \"cipher_version\", php_ssl_cipher_get_version(cipher, version_str, PHP_SSL_MAX_VERSION_LEN), 1);\n \n \treturn meta_arr;\n }\ndiff -rupN php-5.6.31.orig/ext/phar/util.c php-5.6.31/ext/phar/util.c\n--- php-5.6.31.orig/ext/phar/util.c\t2017-07-06 00:25:00.000000000 +0200\n+++ php-5.6.31/ext/phar/util.c\t2017-08-01 10:49:25.020823468 +0200\n@@ -1531,7 +1531,7 @@ int phar_verify_signature(php_stream *fp\n \t\t\tBIO *in;\n \t\t\tEVP_PKEY *key;\n \t\t\tEVP_MD *mdtype = (EVP_MD *) EVP_sha1();\n-\t\t\tEVP_MD_CTX md_ctx;\n+\t\t\tEVP_MD_CTX *md_ctx;\n #else\n \t\t\tint tempsig;\n #endif\n@@ -1608,7 +1608,8 @@ int phar_verify_signature(php_stream *fp\n \t\t\t\treturn FAILURE;\n \t\t\t}\n 
\n-\t\t\tEVP_VerifyInit(&md_ctx, mdtype);\n+\t\t\tmd_ctx = EVP_MD_CTX_create();\n+\t\t\tEVP_VerifyInit(md_ctx, mdtype);\n \t\t\tread_len = end_of_phar;\n \n \t\t\tif (read_len > sizeof(buf)) {\n@@ -1620,7 +1621,7 @@ int phar_verify_signature(php_stream *fp\n \t\t\tphp_stream_seek(fp, 0, SEEK_SET);\n \n \t\t\twhile (read_size && (len = php_stream_read(fp, (char*)buf, read_size)) > 0) {\n-\t\t\t\tEVP_VerifyUpdate (&md_ctx, buf, len);\n+\t\t\t\tEVP_VerifyUpdate (md_ctx, buf, len);\n \t\t\t\tread_len -= (off_t)len;\n \n \t\t\t\tif (read_len < read_size) {\n@@ -1628,9 +1629,9 @@ int phar_verify_signature(php_stream *fp\n \t\t\t\t}\n \t\t\t}\n \n-\t\t\tif (EVP_VerifyFinal(&md_ctx, (unsigned char *)sig, sig_len, key) != 1) {\n+\t\t\tif (EVP_VerifyFinal(md_ctx, (unsigned char *)sig, sig_len, key) != 1) {\n \t\t\t\t/* 1: signature verified, 0: signature does not match, -1: failed signature operation */\n-\t\t\t\tEVP_MD_CTX_cleanup(&md_ctx);\n+\t\t\t\tEVP_MD_CTX_destroy(md_ctx);\n \n \t\t\t\tif (error) {\n \t\t\t\t\tspprintf(error, 0, \"broken openssl signature\");\n@@ -1639,7 +1640,7 @@ int phar_verify_signature(php_stream *fp\n \t\t\t\treturn FAILURE;\n \t\t\t}\n \n-\t\t\tEVP_MD_CTX_cleanup(&md_ctx);\n+\t\t\tEVP_MD_CTX_destroy(md_ctx);\n #endif\n \n \t\t\t*signature_len = phar_hex_str((const char*)sig, sig_len, signature TSRMLS_CC);\n"
  },
  {
    "path": "aegir/patches/activity.patch",
    "content": "diff -urp a/activity.install b/activity.install\n--- a/activity.install\t2010-06-30 21:04:18.000000000 +0000\n+++ b/activity.install\t2010-10-15 21:41:11.000000000 +0000\n@@ -5,7 +5,7 @@ function activity_install() {\n   drupal_install_schema('activity');\n   // Set Trigger's weight to 2 so that it will fire AFTER pathauto. This makes\n   // pathauto alias' work.\n-  if (activity_bad_trigger_weight()) {\n+  if (activity_bad_trigger_weight() && isset($_SERVER['HTTP_USER_AGENT'])) {\n     drupal_set_message(t('In order for proper Pathauto behavior with Activity module, the Trigger module\\'s weight needs to be fixed up. !clickhere', array('!clickhere' => l(t('Click here to fix Trigger\\'s weight'), 'admin/activity/weight', array('query' => drupal_get_destination())))), 'error');\n   }\n }\n@@ -214,4 +214,4 @@ function activity_update_6201() {\n   $ret = array();\n   db_change_field($ret, 'activity_messages', 'amid', 'amid', array('type' => 'serial', 'unsigned' => TRUE, 'not null' => TRUE));\n   return $ret;\n-}\n\\ No newline at end of file\n+}\n"
  },
  {
    "path": "aegir/patches/apps_msg.patch",
    "content": "diff -urp a/apps.profile.inc b/apps.profile.inc\n--- a/apps.profile.inc\t2012-10-14 06:42:00.000000000 -0400\n+++ b/apps.profile.inc\t2012-10-14 06:42:48.000000000 -0400\n@@ -177,7 +177,7 @@ function apps_profile_download_batch_fin\n       'title' => t('Downloading updates failed:'),\n       'items' => $results['errors'],\n     );\n-    drupal_set_message(theme('item_list', $error_list), 'error');\n+    //drupal_set_message(theme('item_list', $error_list), 'error');\n   }\n   elseif ($success) {\n     drupal_set_message(t('Updates downloaded successfully.'));\n"
  },
  {
    "path": "aegir/patches/bug62886.patch",
    "content": "diff --git a/sapi/fpm/fpm/fpm.c b/sapi/fpm/fpm/fpm.c\nindex dab415d..2f42175 100644\n--- a/sapi/fpm/fpm/fpm.c\n+++ b/sapi/fpm/fpm/fpm.c\n@@ -39,7 +39,7 @@ struct fpm_globals_s fpm_globals = {\n \t.test_successful = 0,\n \t.heartbeat = 0,\n \t.run_as_root = 0,\n-\t.send_config_signal = 0,\n+\t.send_config_pipe = {0, 0},\n };\n \n int fpm_init(int argc, char **argv, char *config, char *prefix, char *pid, int test_conf, int run_as_root) /* {{{ */\ndiff --git a/sapi/fpm/fpm/fpm.h b/sapi/fpm/fpm/fpm.h\nindex 7a2903d..c576876 100644\n--- a/sapi/fpm/fpm/fpm.h\n+++ b/sapi/fpm/fpm/fpm.h\n@@ -55,7 +55,7 @@ struct fpm_globals_s {\n \tint test_successful;\n \tint heartbeat;\n \tint run_as_root;\n-\tint send_config_signal;\n+\tint send_config_pipe[2];\n };\n \n extern struct fpm_globals_s fpm_globals;\ndiff --git a/sapi/fpm/fpm/fpm_main.c b/sapi/fpm/fpm/fpm_main.c\nindex b058d7a..7d53927 100644\n--- a/sapi/fpm/fpm/fpm_main.c\n+++ b/sapi/fpm/fpm/fpm_main.c\n@@ -1804,16 +1804,18 @@ consult the installation file that came with this distribution, or visit \\n\\\n \n \tif (0 > fpm_init(argc, argv, fpm_config ? 
fpm_config : CGIG(fpm_config), fpm_prefix, fpm_pid, test_conf, php_allow_to_run_as_root)) {\n \n-\t\tif (fpm_globals.send_config_signal) {\n-\t\t\tzlog(ZLOG_DEBUG, \"Sending SIGUSR2 (error) to parent %d\", getppid());\n-\t\t\tkill(getppid(), SIGUSR2);\n+\t\tif (fpm_globals.send_config_pipe[1]) {\n+\t\t\tint writeval = 0;\n+\t\t\tzlog(ZLOG_DEBUG, \"Sending \\\"0\\\" (error) to parent via fd=%d\", fpm_globals.send_config_pipe[1]);\n+\t\t\twrite(fpm_globals.send_config_pipe[1], &writeval, sizeof(writeval));\n \t\t}\n \t\treturn FPM_EXIT_CONFIG;\n \t}\n \n-\tif (fpm_globals.send_config_signal) {\n-\t\tzlog(ZLOG_DEBUG, \"Sending SIGUSR1 (OK) to parent %d\", getppid());\n-\t\tkill(getppid(), SIGUSR1);\n+\tif (fpm_globals.send_config_pipe[1]) {\n+\t\tint writeval = 1;\n+\t\tzlog(ZLOG_DEBUG, \"Sending \\\"1\\\" (OK) to parent via fd=%d\", fpm_globals.send_config_pipe[1]);\n+\t\twrite(fpm_globals.send_config_pipe[1], &writeval, sizeof(writeval));\n \t}\n \tfpm_is_running = 1;\n \ndiff --git a/sapi/fpm/fpm/fpm_signals.c b/sapi/fpm/fpm/fpm_signals.c\nindex 656269f..8993a86 100644\n--- a/sapi/fpm/fpm/fpm_signals.c\n+++ b/sapi/fpm/fpm/fpm_signals.c\n@@ -249,15 +249,3 @@ int fpm_signals_get_fd() /* {{{ */\n }\n /* }}} */\n \n-void fpm_signals_sighandler_exit_ok(pid_t pid) /* {{{ */\n-{\n-\texit(FPM_EXIT_OK);\n-}\n-/* }}} */\n-\n-void fpm_signals_sighandler_exit_config(pid_t pid) /* {{{ */\n-{\n-\texit(FPM_EXIT_CONFIG);\n-}\n-/* }}} */\n-\ndiff --git a/sapi/fpm/fpm/fpm_signals.h b/sapi/fpm/fpm/fpm_signals.h\nindex 13484cb..eb80fae 100644\n--- a/sapi/fpm/fpm/fpm_signals.h\n+++ b/sapi/fpm/fpm/fpm_signals.h\n@@ -11,9 +11,6 @@ int fpm_signals_init_main();\n int fpm_signals_init_child();\n int fpm_signals_get_fd();\n \n-void fpm_signals_sighandler_exit_ok(pid_t pid);\n-void fpm_signals_sighandler_exit_config(pid_t pid);\n-\n extern const char *fpm_signal_names[NSIG + 1];\n \n #endif\ndiff --git a/sapi/fpm/fpm/fpm_unix.c b/sapi/fpm/fpm/fpm_unix.c\nindex 5c5e37c..443f606 100644\n--- 
a/sapi/fpm/fpm/fpm_unix.c\n+++ b/sapi/fpm/fpm/fpm_unix.c\n@@ -262,36 +262,19 @@ int fpm_unix_init_main() /* {{{ */\n \t\t * The parent process has then to wait for the master\n \t\t * process to initialize to return a consistent exit\n \t\t * value. For this pupose, the master process will\n-\t\t * send USR1 if everything went well and USR2\n-\t\t * otherwise.\n+\t\t * send \\\"1\\\" into the pipe if everything went well \n+\t\t * and \\\"0\\\" otherwise.\n \t\t */\n \n-\t\tstruct sigaction act;\n-\t\tstruct sigaction oldact_usr1;\n-\t\tstruct sigaction oldact_usr2;\n-\t\tstruct timeval tv;\n \n-\t\t/*\n-\t\t * set sigaction for USR1 before fork\n-\t\t * save old sigaction to restore it after\n-\t\t * fork in the child process (the master process)\n-\t\t */\n-\t\tmemset(&act, 0, sizeof(act));\n-\t\tmemset(&act, 0, sizeof(oldact_usr1));\n-\t\tact.sa_handler = fpm_signals_sighandler_exit_ok;\n-\t\tsigfillset(&act.sa_mask);\n-\t\tsigaction(SIGUSR1, &act, &oldact_usr1);\n+\t\tstruct timeval tv;\n+\t\tfd_set rfds;\n+\t\tint ret;\n \n-\t\t/*\n-\t\t * set sigaction for USR2 before fork\n-\t\t * save old sigaction to restore it after\n-\t\t * fork in the child process (the master process)\n-\t\t */\n-\t\tmemset(&act, 0, sizeof(act));\n-\t\tmemset(&act, 0, sizeof(oldact_usr2));\n-\t\tact.sa_handler = fpm_signals_sighandler_exit_config;\n-\t\tsigfillset(&act.sa_mask);\n-\t\tsigaction(SIGUSR2, &act, &oldact_usr2);\n+\t\tif (pipe(fpm_globals.send_config_pipe) == -1) {\n+\t\t\tzlog(ZLOG_SYSERROR, \"failed to create pipe\");\n+\t\t\treturn -1;\n+\t\t}\n \n \t\t/* then fork */\n \t\tpid_t pid = fork();\n@@ -302,24 +285,54 @@ int fpm_unix_init_main() /* {{{ */\n \t\t\t\treturn -1;\n \n \t\t\tcase 0 : /* children */\n-\t\t\t\t/* restore USR1 and USR2 sigaction */\n-\t\t\t\tsigaction(SIGUSR1, &oldact_usr1, NULL);\n-\t\t\t\tsigaction(SIGUSR2, &oldact_usr2, NULL);\n-\t\t\t\tfpm_globals.send_config_signal = 1;\n+\t\t\t\tclose(fpm_globals.send_config_pipe[0]); /* close the read side of 
the pipe */\n \t\t\t\tbreak;\n \n \t\t\tdefault : /* parent */\n-\t\t\t\tfpm_cleanups_run(FPM_CLEANUP_PARENT_EXIT);\n+\t\t\t\tclose(fpm_globals.send_config_pipe[1]); /* close the write side of the pipe */\n \n \t\t\t\t/*\n \t\t\t\t * wait for 10s before exiting with error\n-\t\t\t\t * the child is supposed to send USR1 or USR2 to tell the parent\n+\t\t\t\t * the child is supposed to send 1 or 0 into the pipe to tell the parent\n \t\t\t\t * how it goes for it\n \t\t\t\t */\n+\t\t\t\tFD_ZERO(&rfds);\n+\t\t\t\tFD_SET(fpm_globals.send_config_pipe[0], &rfds);\n+\n \t\t\t\ttv.tv_sec = 10;\n \t\t\t\ttv.tv_usec = 0;\n-\t\t\t\tzlog(ZLOG_DEBUG, \"The calling process is waiting for the master process to ping\");\n-\t\t\t\tselect(0, NULL, NULL, NULL, &tv);\n+\n+\t\t\t\tzlog(ZLOG_DEBUG, \"The calling process is waiting for the master process to ping via fd=%d\", fpm_globals.send_config_pipe[0]);\n+\t\t\t\tret = select(fpm_globals.send_config_pipe[0] + 1, &rfds, NULL, NULL, &tv);\n+\t\t\t\tif (ret == -1) {\n+\t\t\t\t\tzlog(ZLOG_SYSERROR, \"failed to select\");\n+\t\t\t\t\texit(FPM_EXIT_SOFTWARE);\n+\t\t\t\t}\n+\t\t\t\tif (ret) { /* data available */\n+\t\t\t\t\tint readval;\n+\t\t\t\t\tret = read(fpm_globals.send_config_pipe[0], &readval, sizeof(readval));\n+\t\t\t\t\tif (ret == -1) {\n+\t\t\t\t\t\tzlog(ZLOG_SYSERROR, \"failed to read from pipe\");\n+\t\t\t\t\t\texit(FPM_EXIT_SOFTWARE);\n+\t\t\t\t\t}\n+\n+\t\t\t\t\tif (ret == 0) {\n+\t\t\t\t\t\tzlog(ZLOG_ERROR, \"no data have been read from pipe\");\n+\t\t\t\t\t\texit(FPM_EXIT_SOFTWARE);\n+\t\t\t\t\t} else {\n+\t\t\t\t\t\tif (readval == 1) {\n+\t\t\t\t\t\t\tzlog(ZLOG_DEBUG, \"I received a valid acknoledge from the master process, I can exit without error\");\n+\t\t\t\t\t\t\tfpm_cleanups_run(FPM_CLEANUP_PARENT_EXIT);\n+\t\t\t\t\t\t\texit(FPM_EXIT_OK);\n+\t\t\t\t\t\t} else {\n+\t\t\t\t\t\t\tzlog(ZLOG_ERROR, \"The master process returned an error 
!\");\n+\t\t\t\t\t\t\texit(FPM_EXIT_SOFTWARE);\n+\t\t\t\t\t\t}\n+\t\t\t\t\t}\n+\t\t\t\t} else { /* no date sent ! */\n+\t\t\t\t\tzlog(ZLOG_ERROR, \"the master process didn't send back its status (via the pipe to the calling process)\");\n+\t\t\t\t  exit(FPM_EXIT_SOFTWARE);\n+\t\t\t\t}\n \t\t\t\texit(FPM_EXIT_SOFTWARE);\n \t\t}\n \t}\n"
  },
  {
    "path": "aegir/patches/civicrm.drush.inc.patch.txt",
    "content": "diff -burp a/civicrm/drupal/drush/civicrm.drush.inc b/civicrm/drupal/drush/civicrm.drush.inc\n--- a/civicrm/drupal/drush/civicrm.drush.inc\t2014-09-18 11:46:17.000000000 +0000\n+++ b/civicrm/drupal/drush/civicrm.drush.inc\t2015-02-04 16:13:25.000000000 +0000\n@@ -257,6 +257,20 @@ function civicrm_drush_command() {\n  * Implementation of drush_hook_COMMAND_validate for command 'civicrm-install'\n  */\n function drush_civicrm_install_validate() {\n+\n+  switch (substr(drush_core_version(), 0, 1)) {\n+    case '7':\n+      $sql = drush_get_class('Drush\\Sql\\Sql', array(), array(drush_drupal_major_version()));\n+      $db_spec = $sql->get_db_spec();\n+      break;\n+    case '6':\n+    case '5':\n+      $db_spec = _drush_sql_get_db_spec();\n+      break;\n+    default:\n+      drush_set_error('DRUSH_UNSUPPORTED_VERSION', dt('Drush !version is not supported'));\n+  }\n+\n   // TODO: Replace these with required options (Drush 5).\n   // Get the drupal credentials in case civi specific db info is not passed.\n   if (drush_get_option('db-url', FALSE)) {\n@@ -445,7 +459,19 @@ function _civicrm_generate_settings_file\n   }\n\n   $baseUrl = !$baseUrl ? ($GLOBALS['base_url']) : ($protocol . '://' . 
$baseUrl);\n+\n+  switch (substr(drush_core_version(), 0, 1)) {\n+    case '7':\n+      $sql = drush_get_class('Drush\\Sql\\Sql', array(), array(drush_drupal_major_version()));\n+      $db_spec = $sql->get_db_spec();\n+      break;\n+    case '6':\n+    case '5':\n   $db_spec = _drush_sql_get_db_spec();\n+      break;\n+    default:\n+      drush_set_error('DRUSH_UNSUPPORTED_VERSION', dt('Drush !version is not supported'));\n+  }\n\n   // Check version: since 4.1, Drupal6 must be used for the UF in D6\n   // The file civicrm-version.php appeared around 4.0, so it is safe to assume\n@@ -961,7 +987,19 @@ function drush_civicrm_restore() {\n   $restore_backup_dir = rtrim($restore_backup_dir, '/');\n\n   // get confirmation from user -\n+  switch (substr(drush_core_version(), 0, 1)) {\n+    case '7':\n+      $sql = drush_get_class('Drush\\Sql\\Sql', array(), array(drush_drupal_major_version()));\n+      $db_spec = $sql->get_db_spec();\n+      break;\n+    case '6':\n+    case '5':\n   $db_spec = _drush_sql_get_db_spec();\n+      break;\n+    default:\n+      drush_set_error('DRUSH_UNSUPPORTED_VERSION', dt('Drush !version is not supported'));\n+  }\n+\n   drush_print(dt(\"\\nProcess involves :\"));\n   drush_print(dt(\"1. Restoring '\\$restore-dir/civicrm' directory to '!toDir'.\", array('!toDir' => $civicrm_root_base)));\n   drush_print(dt(\"2. Dropping and creating '!db' database.\", array('!db' => $db_spec['database'])));\n"
  },
  {
    "path": "aegir/patches/civicrm_engage.install",
    "content": "<?php\n/*\n +--------------------------------------------------------------------+\n | CiviCRM version 4.1                                                |\n +--------------------------------------------------------------------+\n | Copyright CiviCRM LLC (c) 2004-2011                                |\n +--------------------------------------------------------------------+\n | This file is a part of CiviCRM.                                    |\n |                                                                    |\n | CiviCRM is free software; you can copy, modify, and distribute it  |\n | under the terms of the GNU Affero General Public License           |\n | Version 3, 19 November 2007.                                       |\n |                                                                    |\n | CiviCRM is distributed in the hope that it will be useful, but     |\n | WITHOUT ANY WARRANTY; without even the implied warranty of         |\n | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.               |\n | See the GNU Affero General Public License for more details.        |\n |                                                                    |\n | You should have received a copy of the GNU Affero General Public   |\n | License along with this program; if not, contact CiviCRM LLC       |\n | at info[AT]civicrm[DOT]org. 
If you have questions about the        |\n | GNU Affero General Public License or the licensing of CiviCRM,     |\n | see the CiviCRM license FAQ at http://civicrm.org/licensing        |\n +--------------------------------------------------------------------+\n*/\n\n/**\n *\n * @package CRM\n * @copyright CiviCRM LLC (c) 2004-2011\n * $Id$\n *\n */\n\n/**\n * Implementation of hook_enable().\n *\n */\nfunction civicrm_engage_enable() {\n  civicrm_initialize();\n  // functions that should run repeatedly\n  // every time the module is re-enabled.\n  civicrm_engage_add_contact_subtypes();\n}\n\n/**\n * Implementation of hook_install().\n *\n */\nfunction civicrm_engage_install() {\n  civicrm_initialize();\n  // functions that should only run once, on install\n  civcrm_engage_enable_street_address_parsing();\n  civcrm_engage_set_autocomplete_options();\n  civicrm_engage_load_configuration();\n}\n\n/**\n * Create contact subtypes which are referenced in the\n * xml file that is imported to bootstrap the CiviCRM\n * Campaign component. 
This function is idempotent, so\n * it's run on enable.\n */\nfunction civicrm_engage_add_contact_subtypes() {\n  // parent_id : 1 is Individual, 3 is Organization\n  $create = array(\n    'Media_Contact' => array(\n      'label' => 'Media Contact',\n      'parent_id' => 1,\n      'is_active' => 1,\n      'name' => 'Media_Contact',\n    ),\n    'Funder_Contact' => array(\n      'label' => 'Funder Contact',\n      'parent_id' => 1,\n      'is_active' => 1,\n      'name' => 'Funder_Contact',\n    ),\n    'Elected_Official' => array(\n      'label' => 'Elected Official',\n      'parent_id' => 1,\n      'is_active' => 1,\n      'name' => 'Elected_Official',\n    ),\n    'Media_Outlet' => array(\n      'label' => 'Media Outlet',\n      'parent_id' => 3,\n      'is_active' => 1,\n      'name' => 'Media_Outlet',\n    ),\n    'Foundation' => array(\n      'label' => 'Foundation',\n      'parent_id' => 3,\n      'is_active' => 1,\n      'name' => 'Foundation',\n    ),\n  );\n  require_once (\"CRM/Contact/BAO/ContactType.php\");\n  $existing_types = CRM_Contact_BAO_ContactType::subTypeInfo();\n  while (list($k) = each($existing_types)) {\n    if (array_key_exists($k, $create)) {\n      drupal_set_message(t(\"Not creating @type, already exists.\", array(\"@type\" => $k)));\n      unset($create[$k]);\n    }\n  }\n  $contact_type = new CRM_Contact_BAO_ContactType();\n  while (list($k, $v) = each($create)) {\n    drupal_set_message(t(\"Creating %c\", array('%c' => $k)));\n    $contact_type->add($v);\n  }\n}\n\n/**\n * Street parsing is required for walk lists because it needs to\n * sort by even/odd address numbers so, when canvassing a street\n * in which even addresses are on one side and odd on the other, you\n * can divide the task between two people with two different lists.\n */\nfunction civcrm_engage_enable_street_address_parsing() {\n  include_once 'CRM/Core/BAO/Setting.php';\n  $address_options = 
CRM_Core_BAO_Setting::valueOptions(CRM_Core_BAO_Setting::SYSTEM_PREFERENCES_NAME,\n    'address_options',\n    TRUE, NULL, TRUE\n  );\n  $address_options['street_address_parsing'] = 1;\n  CRM_Core_BAO_Setting::setValueOption(CRM_Core_BAO_Setting::SYSTEM_PREFERENCES_NAME,\n    'address_options',\n    $address_options\n  );\n}\n\n/**\n * Check Phone Number for auto complete. This setting doubles for batch\n * update, and when making a phone list in CiviCampaign you really need\n * to have the phone number included.\n */\nfunction civcrm_engage_set_autocomplete_options() {\n  include_once 'CRM/Core/BAO/Setting.php';\n  $contact_autocomplete_options = CRM_Core_BAO_Setting::valueOptions(\n    CRM_Core_BAO_Setting::SYSTEM_PREFERENCES_NAME,\n    'contact_autocomplete_options',\n    TRUE, NULL, TRUE\n  );\n  $contact_autocomplete_options['phone'] = 1;\n  CRM_Core_BAO_Setting::setValueOption(\n    CRM_Core_BAO_Setting::SYSTEM_PREFERENCES_NAME,\n    'contact_autocomplete_options',\n    $contact_autocomplete_options\n  );\n}\n\nfunction civicrm_engage_load_configuration() {\n  drupal_set_message(\"Loading default civicrm_engage configuration.\");\n  // we need to build the path to the CustomGroupData.xml file\n  // shipped in the civicrm_engage directory. civicrm_engage could\n  // be installed outside of the civicrm root, so we can't rely on\n  // that constant.\n\n  // instead... start with the drupal root\n  $root = getcwd();\n  // and then add the relative path to the module\n  $civi_engage_path = drupal_get_path('module', 'civicrm_engage');\n  $xml_file = $root . '/' . $civi_engage_path . '/CustomGroupData.xml';\n  require_once 'CRM/Utils/Migrate/Import.php';\n  $import = new CRM_Utils_Migrate_Import();\n\n  $import->run($xml_file);\n}\n\n"
  },
  {
    "path": "aegir/patches/commerce_kickstart.patch",
    "content": "diff --git a/profiles/commerce_kickstart/modules/commerce_kickstart/commerce_kickstart_slideshow/commerce_kickstart_slideshow.module b/profiles/commerce_kickstart/modules/commerce_kickstart/commerce_kickstart_slideshow/commerce_kickstart_slideshow.module\nindex ce7bbed..6e4af5a 100644\n--- a/profiles/commerce_kickstart/modules/commerce_kickstart/commerce_kickstart_slideshow/commerce_kickstart_slideshow.module\n+++ b/profiles/commerce_kickstart/modules/commerce_kickstart/commerce_kickstart_slideshow/commerce_kickstart_slideshow.module\n@@ -15,7 +15,7 @@ function commerce_kickstart_slideshow_library() {\n     'website' => 'http://bxslider.com/',\n     'version' => '4.0',\n     'js' => array(\n-      libraries_get_path('jquery.bxslider') . '/jquery.bxslider.min.js' => array(),\n+      libraries_get_path('jquery.bxslider') . '/dist/jquery.bxslider.min.js' => array(),\n     ),\n   );\n   return $libraries;\n"
  },
  {
    "path": "aegir/patches/commons-1045778-fix-aegir-installs.patch",
    "content": "diff --git a/drupal_commons.profile b/drupal_commons.profile\nindex 48f7a16..57b5089 100644\n--- a/drupal_commons.profile\n+++ b/drupal_commons.profile\n@@ -68,7 +68,7 @@ function drupal_commons_profile_modules() {\n  *   language-specific profiles.\n  */\n function drupal_commons_profile_details() {\n-  $logo = '<a href=\"https://drupal.org/project/commons\" target=\"_blank\"><img alt=\"Drupal Commons\" title=\"Drupal Commons\" src=\"' . base_path() . 'profiles/drupal_commons/images/logo.png' . '\"></img></a>';\n+  $logo = '<a href=\"https://drupal.org/project/commons\" target=\"_blank\"><img alt=\"Drupal Commons\" title=\"Drupal Commons\" src=\"./profiles/drupal_commons/images/logo.png' . '\"></img></a>';\n   $description = st('Select this profile to install the Drupal Commons distribution for powering your community website. Drupal Commons provides provides blogging, discussions, user profiles, and other useful community features for both private communities (e.g. an Intranet), or public communities (e.g. a customer community).');\n   $description .= '<br/>' . $logo;\n\n"
  },
  {
    "path": "aegir/patches/commons-1060250-aegir-infinite-loop.patch",
    "content": "diff --git a/drupal_commons.profile b/drupal_commons.profile\nindex 57b5089..ae0ffbd 100644\n--- a/drupal_commons.profile\n+++ b/drupal_commons.profile\n@@ -116,7 +116,33 @@ function drupal_commons_profile_tasks(&$task, $url) {\n\n   // Provide a form to choose features\n   if ($task == 'configure-commons') {\n-    $output = drupal_get_form('drupal_commons_features_form', $url);\n+    if (defined('DRUSH_BASE_PATH')) {\n+      // Set some sane defaults\n+      $features = array(\n+        'commons_core',\n+        'commons_home',\n+        'commons_blog',\n+        'commons_discussion',\n+        'commons_document',\n+        'commons_wiki',\n+        'commons_poll',\n+        'commons_event',\n+        'commons_dashboard',\n+        'commons_notifications',\n+        'commons_reputation',\n+        'commons_group_aggregator',\n+        'commons_admin',\n+        'commons_seo'\n+      );\n+      variable_set('commons_selected_features', $features);\n+\n+      // Initiate the next installation step\n+      $task = 'install-commons';\n+      variable_set('install_task', $task);\n+    }\n+    else {\n+      $output = drupal_get_form('drupal_commons_features_form', $url);\n+    }\n   }\n\n   // Installation batch process\n"
  },
  {
    "path": "aegir/patches/commons_chicken_egg.patch",
    "content": "diff -urp a/commons.install b/commons.install\n--- a/commons.install\t2014-08-08 15:45:53.000000000 -0400\n+++ b/commons.install\t2014-08-08 15:38:43.000000000 -0400\n@@ -474,7 +474,8 @@ function commons_add_user_avatar($accoun\n   if ($account->uid) {\n     $picture_directory =  file_default_scheme() . '://' . variable_get('user_picture_path', 'pictures');\n     if(file_prepare_directory($picture_directory, FILE_CREATE_DIRECTORY)){\n-      $picture_result = drupal_http_request($base_url . '/profiles/commons/images/avatars/avatar-' . commons_normalize_name($account->name) . '.png');\n+      $tmp_url = 'http://127.0.0.1:8888';\n+      $picture_result = drupal_http_request($tmp_url . '/profiles/commons/images/avatars/avatar-' . commons_normalize_name($account->name) . '.png');\n       $picture_path = file_stream_wrapper_uri_normalize($picture_directory . '/picture-' . $account->uid . '-' . REQUEST_TIME . '.jpg');\n       $picture_file = file_save_data($picture_result->data, $picture_path, FILE_EXISTS_REPLACE);\n\n"
  },
  {
    "path": "aegir/patches/disable_SSLv2_for_openssl_1_0_0.patch",
    "content": "--- a/ext/openssl/xp_ssl.c\n+++ b/ext/openssl/xp_ssl.c\n@@ -328,10 +328,12 @@ static inline int php_openssl_setup_cryp\n \t\t\tsslsock->is_client = 1;\n \t\t\tmethod = SSLv23_client_method();\n \t\t\tbreak;\n+#ifndef OPENSSL_NO_SSL2\n \t\tcase STREAM_CRYPTO_METHOD_SSLv2_CLIENT:\n \t\t\tsslsock->is_client = 1;\n \t\t\tmethod = SSLv2_client_method();\n \t\t\tbreak;\n+#endif\n \t\tcase STREAM_CRYPTO_METHOD_SSLv3_CLIENT:\n \t\t\tsslsock->is_client = 1;\n \t\t\tmethod = SSLv3_client_method();\n@@ -348,10 +350,12 @@ static inline int php_openssl_setup_cryp\n \t\t\tsslsock->is_client = 0;\n \t\t\tmethod = SSLv3_server_method();\n \t\t\tbreak;\n+#ifndef OPENSSL_NO_SSL2\n \t\tcase STREAM_CRYPTO_METHOD_SSLv2_SERVER:\n \t\t\tsslsock->is_client = 0;\n \t\t\tmethod = SSLv2_server_method();\n \t\t\tbreak;\n+#endif\n \t\tcase STREAM_CRYPTO_METHOD_TLS_SERVER:\n \t\t\tsslsock->is_client = 0;\n \t\t\tmethod = TLSv1_server_method();\n@@ -629,9 +633,11 @@ static inline int php_openssl_tcp_sockop\n \t\t\t\tcase STREAM_CRYPTO_METHOD_SSLv23_CLIENT:\n \t\t\t\t\tsock->method = STREAM_CRYPTO_METHOD_SSLv23_SERVER;\n \t\t\t\t\tbreak;\n+#ifndef OPENSSL_NO_SSL2\n \t\t\t\tcase STREAM_CRYPTO_METHOD_SSLv2_CLIENT:\n \t\t\t\t\tsock->method = STREAM_CRYPTO_METHOD_SSLv2_SERVER;\n \t\t\t\t\tbreak;\n+#endif\n \t\t\t\tcase STREAM_CRYPTO_METHOD_SSLv3_CLIENT:\n \t\t\t\t\tsock->method = STREAM_CRYPTO_METHOD_SSLv3_SERVER;\n \t\t\t\t\tbreak;\n@@ -911,9 +917,11 @@ php_stream *php_openssl_ssl_socket_facto\n \tif (strncmp(proto, \"ssl\", protolen) == 0) {\n \t\tsslsock->enable_on_connect = 1;\n \t\tsslsock->method = STREAM_CRYPTO_METHOD_SSLv23_CLIENT;\n+#ifndef OPENSSL_NO_SSL2\n \t} else if (strncmp(proto, \"sslv2\", protolen) == 0) {\n \t\tsslsock->enable_on_connect = 1;\n \t\tsslsock->method = STREAM_CRYPTO_METHOD_SSLv2_CLIENT;\n+#endif\n \t} else if (strncmp(proto, \"sslv3\", protolen) == 0) {\n \t\tsslsock->enable_on_connect = 1;\n \t\tsslsock->method = STREAM_CRYPTO_METHOD_SSLv3_CLIENT;\n"
  },
  {
    "path": "aegir/patches/drupal-eleven-aegir-console-02.patch",
    "content": "diff -urp a/console/Output/Output.php b/console/Output/Output.php\n--- a/console/Output/Output.php\t2025-05-24 11:34:04.000000000 +0100\n+++ b/console/Output/Output.php\t2025-07-15 22:16:24.471373174 +0100\n@@ -140,5 +140,5 @@ abstract class Output implements OutputI\n     /**\n      * Writes a message to the output.\n      */\n-    abstract protected function doWrite(string $message, bool $newline): void;\n+    abstract protected function doWrite($message, $newline);\n }\n\n"
  },
  {
    "path": "aegir/patches/drupal-eleven-aegir-core-01.patch",
    "content": "diff -urp a/core/lib/Drupal/Core/Logger/LoggerChannel.php b/core/lib/Drupal/Core/Logger/LoggerChannel.php\n--- a/core/lib/Drupal/Core/Logger/LoggerChannel.php\t2025-06-26 14:56:54.000000000 +0100\n+++ b/core/lib/Drupal/Core/Logger/LoggerChannel.php\t2025-07-17 23:40:15.830594172 +0100\n@@ -91,7 +91,7 @@ class LoggerChannel implements LoggerCha\n   /**\n    * {@inheritdoc}\n    */\n-  public function log($level, string|\\Stringable $message, array $context = []): void {\n+  public function log($level, $message, array $context = []) {\n     if ($this->callDepth == self::MAX_CALL_DEPTH) {\n       return;\n     }\ndiff -urp a/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php b/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php\n--- a/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php\t2025-06-26 14:56:54.000000000 +0100\n+++ b/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php\t2025-07-17 23:40:15.830594172 +0100\n@@ -17,62 +17,62 @@ trait RfcLoggerTrait {\n   /**\n    * {@inheritdoc}\n    */\n-  public function emergency(string|\\Stringable $message, array $context = []): void {\n+  public function emergency($message, array $context = []) {\n     $this->log(RfcLogLevel::EMERGENCY, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function alert(string|\\Stringable $message, array $context = []): void {\n+  public function alert($message, array $context = []) {\n     $this->log(RfcLogLevel::ALERT, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function critical(string|\\Stringable $message, array $context = []): void {\n+  public function critical($message, array $context = []) {\n     $this->log(RfcLogLevel::CRITICAL, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function error(string|\\Stringable $message, array $context = []): void {\n+  public function error($message, array $context = []) {\n     $this->log(RfcLogLevel::ERROR, $message, $context);\n   }\n\n   /**\n    * 
{@inheritdoc}\n    */\n-  public function warning(string|\\Stringable $message, array $context = []): void {\n+  public function warning($message, array $context = []) {\n     $this->log(RfcLogLevel::WARNING, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function notice(string|\\Stringable $message, array $context = []): void {\n+  public function notice($message, array $context = []) {\n     $this->log(RfcLogLevel::NOTICE, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function info(string|\\Stringable $message, array $context = []): void {\n+  public function info($message, array $context = []) {\n     $this->log(RfcLogLevel::INFO, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function debug(string|\\Stringable $message, array $context = []): void {\n+  public function debug($message, array $context = []) {\n     $this->log(RfcLogLevel::DEBUG, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  abstract public function log($level, string|\\Stringable $message, array $context = []): void;\n+  abstract public function log($level, $message, array $context = []);\n\n }\ndiff -urp a/core/modules/dblog/src/Logger/DbLog.php b/core/modules/dblog/src/Logger/DbLog.php\n--- a/core/modules/dblog/src/Logger/DbLog.php\t2025-06-26 14:56:54.000000000 +0100\n+++ b/core/modules/dblog/src/Logger/DbLog.php\t2025-07-17 23:57:11.122594172 +0100\n@@ -52,7 +52,7 @@ class DbLog implements LoggerInterface {\n   /**\n    * {@inheritdoc}\n    */\n-  public function log($level, string|\\Stringable $message, array $context = []): void {\n+  public function log($level, $message, array $context = []) {\n     // Remove backtrace and exception since they may contain an unserializable\n     // variable.\n     unset($context['backtrace'], $context['exception']);\ndiff -urp a/core/modules/syslog/src/Logger/SysLog.php b/core/modules/syslog/src/Logger/SysLog.php\n--- 
a/core/modules/syslog/src/Logger/SysLog.php\t2025-06-26 14:56:54.000000000 +0100\n+++ b/core/modules/syslog/src/Logger/SysLog.php\t2025-07-17 23:40:15.842594172 +0100\n@@ -67,7 +67,7 @@ class SysLog implements LoggerInterface\n   /**\n    * {@inheritdoc}\n    */\n-  public function log($level, string|\\Stringable $message, array $context = []): void {\n+  public function log($level, $message, array $context = []) {\n     global $base_url;\n\n     $format = $this->config->get('format');\n"
  },
  {
    "path": "aegir/patches/drupal-eleven-aegir-validator-03.patch",
    "content": "diff -urp a/core/modules/package_manager/src/Validator/DiskSpaceValidator.php b/core/modules/package_manager/src/Validator/DiskSpaceValidator.php\n--- a/core/modules/package_manager/src/Validator/DiskSpaceValidator.php\t2025-06-26 14:56:54.000000000 +0100\n+++ b/core/modules/package_manager/src/Validator/DiskSpaceValidator.php\t2025-07-28 00:56:23.271822366 +0100\n@@ -40,7 +40,7 @@ class DiskSpaceValidator implements Even\n    *   If the amount of free space could not be determined.\n    */\n   protected function freeSpace(string $path): float {\n-    $free_space = disk_free_space($path);\n+    $free_space = function_exists('disk_free_space') ? disk_free_space($path) : 9999999999; // assume plenty of space\n     if ($free_space === FALSE) {\n       throw new \\RuntimeException(\"Cannot get disk information for $path.\");\n     }\n"
  },
  {
    "path": "aegir/patches/drupal-ten-aegir-console-02.patch",
    "content": "diff -urp a/console/Output/Output.php b/console/Output/Output.php\n--- a/console/Output/Output.php\t2025-05-07 08:05:04.000000000 +0100\n+++ b/console/Output/Output.php\t2025-07-24 03:21:09.032482294 +0100\n@@ -151,5 +151,5 @@ abstract class Output implements OutputI\n      *\n      * @return void\n      */\n-    abstract protected function doWrite(string $message, bool $newline);\n+    abstract protected function doWrite($message, $newline);\n }\n"
  },
  {
    "path": "aegir/patches/drupal-ten-aegir-core-01.patch",
    "content": "diff -urp a/core/lib/Drupal/Core/Logger/LoggerChannel.php b/core/lib/Drupal/Core/Logger/LoggerChannel.php\n--- a/core/lib/Drupal/Core/Logger/LoggerChannel.php\t2023-12-23 03:12:54.060219719 +0100\n+++ b/core/lib/Drupal/Core/Logger/LoggerChannel.php\t2023-12-23 03:12:39.896219719 +0100\n@@ -91,7 +91,7 @@ class LoggerChannel implements LoggerCha\n   /**\n    * {@inheritdoc}\n    */\n-  public function log($level, string|\\Stringable $message, array $context = []): void {\n+  public function log($level, $message, array $context = []) {\n     if ($this->callDepth == self::MAX_CALL_DEPTH) {\n       return;\n     }\ndiff -urp a/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php b/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php\n--- a/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php\t2023-12-23 03:12:54.060219719 +0100\n+++ b/core/lib/Drupal/Core/Logger/RfcLoggerTrait.php\t2023-12-23 03:12:39.896219719 +0100\n@@ -17,62 +17,62 @@ trait RfcLoggerTrait {\n   /**\n    * {@inheritdoc}\n    */\n-  public function emergency(string|\\Stringable $message, array $context = []): void {\n+  public function emergency($message, array $context = []) {\n     $this->log(RfcLogLevel::EMERGENCY, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function alert(string|\\Stringable $message, array $context = []): void {\n+  public function alert($message, array $context = []) {\n     $this->log(RfcLogLevel::ALERT, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function critical(string|\\Stringable $message, array $context = []): void {\n+  public function critical($message, array $context = []) {\n     $this->log(RfcLogLevel::CRITICAL, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function error(string|\\Stringable $message, array $context = []): void {\n+  public function error($message, array $context = []) {\n     $this->log(RfcLogLevel::ERROR, $message, $context);\n   }\n\n   /**\n    * 
{@inheritdoc}\n    */\n-  public function warning(string|\\Stringable $message, array $context = []): void {\n+  public function warning($message, array $context = []) {\n     $this->log(RfcLogLevel::WARNING, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function notice(string|\\Stringable $message, array $context = []): void {\n+  public function notice($message, array $context = []) {\n     $this->log(RfcLogLevel::NOTICE, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function info(string|\\Stringable $message, array $context = []): void {\n+  public function info($message, array $context = []) {\n     $this->log(RfcLogLevel::INFO, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  public function debug(string|\\Stringable $message, array $context = []): void {\n+  public function debug($message, array $context = []) {\n     $this->log(RfcLogLevel::DEBUG, $message, $context);\n   }\n\n   /**\n    * {@inheritdoc}\n    */\n-  abstract public function log($level, string|\\Stringable $message, array $context = []): void;\n+  abstract public function log($level, $message, array $context = []);\n\n }\ndiff -urp a/core/modules/dblog/src/Logger/DbLog.php b/core/modules/dblog/src/Logger/DbLog.php\n--- a/core/modules/dblog/src/Logger/DbLog.php\t2023-12-23 03:12:54.060219719 +0100\n+++ b/core/modules/dblog/src/Logger/DbLog.php\t2023-12-23 03:12:39.896219719 +0100\n@@ -52,7 +52,7 @@ class DbLog implements LoggerInterface {\n   /**\n    * {@inheritdoc}\n    */\n-  public function log($level, string|\\Stringable $message, array $context = []): void {\n+  public function log($level, $message, array $context = []) {\n     // Remove backtrace and exception since they may contain an unserializable variable.\n     unset($context['backtrace'], $context['exception']);\n\ndiff -urp a/core/modules/syslog/src/Logger/SysLog.php b/core/modules/syslog/src/Logger/SysLog.php\n--- 
a/core/modules/syslog/src/Logger/SysLog.php\t2023-12-06 10:22:56.000000000 +0100\n+++ b/core/modules/syslog/src/Logger/SysLog.php\t2023-12-23 03:10:01.744219719 +0100\n@@ -65,7 +65,7 @@ class SysLog implements LoggerInterface\n   /**\n    * {@inheritdoc}\n    */\n-  public function log($level, string|\\Stringable $message, array $context = []): void {\n+  public function log($level, $message, array $context = []) {\n     global $base_url;\n\n     $format = $this->config->get('format');\n"
  },
  {
    "path": "aegir/patches/drush-remote_make_files.patch",
    "content": "Index: drush_make.utilities.inc\n===================================================================\nRCS file: /cvs/drupal-contrib/contributions/modules/drush_make/Attic/drush_make.utilities.inc,v\nretrieving revision 1.1.2.55\ndiff -u -p -r1.1.2.55 drush_make.utilities.inc\n--- drush_make.utilities.inc\t12 Nov 2010 08:19:52 -0000\t1.1.2.55\n+++ drush_make.utilities.inc\t23 Nov 2010 13:02:49 -0000\n@@ -454,7 +454,7 @@ function drush_make_get_data($data_sourc\n   }\n   // Remote file.\n   else {\n-    $file = _drush_make_download_file(array('url' => $datasource));\n+    $file = _drush_make_download_file(array('url' => $data_source));\n     $data = file_get_contents($file);\n     drush_op('unlink', $file);\n   }\n"
  },
  {
    "path": "aegir/patches/drush_make-drush-4.x-fix-do7-compatibility.patch",
    "content": "diff --git a/drush_make.drush.inc b/drush_make.drush.inc\nindex c8a9b42..f6c541f 100644\n--- a/drush_make.drush.inc\n+++ b/drush_make.drush.inc\n@@ -310,10 +310,22 @@ function drush_make_update_xml_download($project) {\n   // Make an array of releases.\n   foreach ($project['release_history']->releases->release as $release) {\n     $version = (string) $release->version_major;\n+    // Work around drupal.org D7 upgrade inconsistently including version_patch\n+    // see https://drupal.org/node/2140621.\n+    if (empty($release->version_patch)) {\n+      $release->version_patch = 0;\n+    }\n     // there should be version_patch attribute for every stable release\n     // so checking whether the attribute exists should be enough\n     if (isset($release->version_patch) && ((string) $release->version_extra) != 'dev') {\n-      $version .= '.' . (string) $release->version_patch;\n+      // Set point version accounting for drupal.org D7 upgrade\n+      // As of the drupal.org D7 upgrade version_minor replaces version_patch.\n+      if (isset($release->version_minor) ) {\n+        $version .= '.' . (string) $release->version_minor;\n+      }\n+      else {\n+        $version .= '.' . (string) $release->version_patch;\n+      }\n     }\n     // if version_patch attribute does not exist, then it should be a dev release\n     // and the version string should be in format MAJOR_VERSION.x-dev\n"
  },
  {
    "path": "aegir/patches/drush_make.drush.inc.patch",
    "content": "Index: drush_make.drush.inc\n===================================================================\nRCS file: /cvs/drupal-contrib/contributions/modules/drush_make/Attic/drush_make.drush.inc,v\nretrieving revision 1.11.2.74\ndiff -u -p -r1.11.2.74 drush_make.drush.inc\n--- drush_make.drush.inc\t10 Oct 2010 18:04:07 -0000\t1.11.2.74\n+++ drush_make.drush.inc\t24 Oct 2010 02:27:15 -0000\n@@ -225,8 +225,8 @@ function drush_make_updatexml($project) \n     $term_map = array(\n       'Modules' => 'module',\n       'Themes' => 'theme',\n-      'Drupal project' => 'core',\n-      'Installation profiles' => 'profile',\n+      'Drupal Project' => 'core',\n+      'Installation Profiles' => 'profile',\n       'Translations' => 'translation'\n     );\n     // Iterate through all terms related to this project.\n"
  },
  {
    "path": "aegir/patches/features-1265168-19-roles.patch",
    "content": "diff --git a/includes/features.user.inc b/includes/features.user.inc\nindex c76455d..bf6d6ff 100644\n--- a/includes/features.user.inc\n+++ b/includes/features.user.inc\n@@ -122,7 +122,11 @@ function user_permission_features_rebuild($module) {\n \n     $roles = _user_features_get_roles();\n     $permissions_by_role = _user_features_get_permissions(FALSE);\n+    $modules = user_permission_get_modules();\n     foreach ($defaults as $permission) {\n+      if (empty($modules[$permission['name']])) {\n+        continue;\n+      }\n       $perm = $permission['name'];\n       foreach ($roles as $role) {\n         if (in_array($role, $permission['roles'])) {\n"
  },
  {
    "path": "aegir/patches/field_info_collate_fields-1400256-25.patch",
    "content": "diff --git a/modules/field/field.info.inc b/modules/field/field.info.inc\nindex 9e7ab93..30c7cb1 100644\n--- a/modules/field/field.info.inc\n+++ b/modules/field/field.info.inc\n@@ -183,10 +183,13 @@ function _field_info_collate_types($reset = FALSE) {\n  */\n function _field_info_collate_fields($reset = FALSE) {\n   static $info;\n+  static $rebuilding = FALSE;\n \n   if ($reset) {\n-    $info = NULL;\n-    cache_clear_all('field_info_fields', 'cache_field');\n+    if(!$rebuilding) {\n+      $info = NULL;\n+      cache_clear_all('field_info_fields', 'cache_field');\n+    }\n     return;\n   }\n \n@@ -195,6 +198,7 @@ function _field_info_collate_fields($reset = FALSE) {\n       $info = $cached->data;\n     }\n     else {\n+      $rebuilding = TRUE;\n       $definitions = array(\n         'field_ids' => field_read_fields(array(), array('include_deleted' => 1)),\n         'instances' => field_read_instances(),\n@@ -244,6 +248,7 @@ function _field_info_collate_fields($reset = FALSE) {\n       }\n \n       cache_set('field_info_fields', $info, 'cache_field');\n+      $rebuilding = FALSE;\n     }\n   }\n \n"
  },
  {
    "path": "aegir/patches/fpm_main.c.patch",
    "content": "--- a/fpm_main.c\t2013-12-10 14:04:57.000000000 -0500\n+++ b/fpm_main.c\t2013-12-22 16:55:03.000000000 -0500\n@@ -164,6 +164,7 @@ typedef struct _php_cgi_globals_struct {\n \tzend_bool rfc2616_headers;\n \tzend_bool nph;\n \tzend_bool fix_pathinfo;\n+\tchar *fix_chrootpath;\n \tzend_bool force_redirect;\n \tzend_bool discard_path;\n \tzend_bool fcgi_logging;\n@@ -1058,6 +1059,27 @@ static void init_request_info(TSRMLS_D)\n \tchar *ini;\n \tint apache_was_here = 0;\n\n+\tif (CGIG(fix_chrootpath)) {\n+\t\tsize_t chroot_len = strlen(CGIG(fix_chrootpath));\n+\t\t/* remove trail slash */\n+\t\twhile (chroot_len > 0 && CGIG(fix_chrootpath)[chroot_len-1] == '/') {\n+\t\t\tchroot_len--;\n+\t\t}\n+\t\tif (chroot_len > 0) {\n+\t\t\tchar *env_document_root = sapi_cgibin_getenv(\"DOCUMENT_ROOT\", sizeof(\"DOCUMENT_ROOT\") - 1 TSRMLS_CC);\n+\n+\t\t\tif (strncmp(env_path_translated, CGIG(fix_chrootpath), chroot_len) == 0) {\n+\t\t\t\tenv_path_translated += chroot_len;\n+\t\t\t\tenv_path_translated = _sapi_cgibin_putenv(\"PATH_TRANSLATED\", env_path_translated TSRMLS_CC);\n+\t\t\t}\n+\n+\t\t\tif (strncmp(env_document_root, CGIG(fix_chrootpath), chroot_len) == 0) {\n+\t\t\t\tenv_document_root += chroot_len;\n+\t\t\t\tenv_document_root = _sapi_cgibin_putenv(\"DOCUMENT_ROOT\", env_document_root TSRMLS_CC);\n+\t\t\t}\n+\t\t}\n+\t}\n+\n \t/* some broken servers do not have script_filename or argv0\n \t * an example, IIS configured in some ways.  
then they do more\n \t * broken stuff and set path_translated to the cgi script location */\n@@ -1446,6 +1468,7 @@ PHP_INI_BEGIN()\n \tSTD_PHP_INI_ENTRY(\"cgi.force_redirect\",      \"1\",  PHP_INI_SYSTEM, OnUpdateBool,   force_redirect, php_cgi_globals_struct, php_cgi_globals)\n \tSTD_PHP_INI_ENTRY(\"cgi.redirect_status_env\", NULL, PHP_INI_SYSTEM, OnUpdateString, redirect_status_env, php_cgi_globals_struct, php_cgi_globals)\n \tSTD_PHP_INI_ENTRY(\"cgi.fix_pathinfo\",        \"1\",  PHP_INI_SYSTEM, OnUpdateBool,   fix_pathinfo, php_cgi_globals_struct, php_cgi_globals)\n+\tSTD_PHP_INI_ENTRY(\"cgi.fix_chrootpath\",      NULL, PHP_INI_SYSTEM, OnUpdateString, fix_chrootpath, php_cgi_globals_struct, php_cgi_globals)\n \tSTD_PHP_INI_ENTRY(\"cgi.discard_path\",        \"0\",  PHP_INI_SYSTEM, OnUpdateBool,   discard_path, php_cgi_globals_struct, php_cgi_globals)\n \tSTD_PHP_INI_ENTRY(\"fastcgi.logging\",         \"1\",  PHP_INI_SYSTEM, OnUpdateBool,   fcgi_logging, php_cgi_globals_struct, php_cgi_globals)\n \tSTD_PHP_INI_ENTRY(\"fastcgi.error_header\",    NULL, PHP_INI_SYSTEM, OnUpdateString, error_header, php_cgi_globals_struct, php_cgi_globals)\n@@ -1461,6 +1484,7 @@ static void php_cgi_globals_ctor(php_cgi\n \tphp_cgi_globals->force_redirect = 1;\n \tphp_cgi_globals->redirect_status_env = NULL;\n \tphp_cgi_globals->fix_pathinfo = 1;\n+\tphp_cgi_globals->fix_chrootpath = NULL;\n \tphp_cgi_globals->discard_path = 0;\n \tphp_cgi_globals->fcgi_logging = 1;\n \tzend_hash_init(&php_cgi_globals->user_config_cache, 0, NULL, (dtor_func_t) user_config_cache_entry_dtor, 1);\n"
  },
  {
    "path": "aegir/patches/freetype.patch",
    "content": "diff -u -r php-7.2.5/ext/gd/config.m4 php-7.2.5-freetype/ext/gd/config.m4\n--- php-7.2.5/ext/gd/config.m4\t2018-04-24 17:09:54.000000000 +0200\n+++ php-7.2.5-freetype/ext/gd/config.m4\t2018-05-09 14:49:03.647108948 +0200\n@@ -186,6 +186,9 @@\n AC_DEFUN([PHP_GD_FREETYPE2],[\n   if test \"$PHP_FREETYPE_DIR\" != \"no\"; then\n\n+    AC_PATH_PROG(PKG_CONFIG, pkg-config, no)\n+\n+    AC_MSG_CHECKING([for freetype])\n     for i in $PHP_FREETYPE_DIR /usr/local /usr; do\n       if test -f \"$i/bin/freetype-config\"; then\n         FREETYPE2_DIR=$i\n@@ -194,13 +197,20 @@\n       fi\n     done\n\n-    if test -z \"$FREETYPE2_DIR\"; then\n+    if test -n \"$FREETYPE2_CONFIG\"; then\n+      FREETYPE2_CFLAGS=`$FREETYPE2_CONFIG --cflags`\n+      FREETYPE2_LIBS=`$FREETYPE2_CONFIG --libs`\n+      AC_MSG_RESULT([found in $FREETYPE2_DIR])\n+    elif test \"$PKG_CONFIG\" != \"no\" && $PKG_CONFIG --exists freetype2; then\n+      FREETYPE2_DIR=pkg-config\n+      FREETYPE2_CFLAGS=`$PKG_CONFIG freetype2 --cflags`\n+      FREETYPE2_LIBS=`$PKG_CONFIG freetype2 --libs`\n+      AC_MSG_RESULT([found by pkg-config])\n+    else\n+      AC_MSG_RESULT([not found])\n       AC_MSG_ERROR([freetype-config not found.])\n     fi\n\n-    FREETYPE2_CFLAGS=`$FREETYPE2_CONFIG --cflags`\n-    FREETYPE2_LIBS=`$FREETYPE2_CONFIG --libs`\n-\n     PHP_EVAL_INCLINE($FREETYPE2_CFLAGS)\n     PHP_EVAL_LIBLINE($FREETYPE2_LIBS, GD_SHARED_LIBADD)\n     AC_DEFINE(HAVE_LIBFREETYPE,1,[ ])\n"
  },
  {
    "path": "aegir/patches/hosting_advanced_cron.patch",
    "content": "diff -urp a/hosting_advanced_cron/hosting_advanced_cron.module b/hosting_advanced_cron/hosting_advanced_cron.module\n--- a/hosting_advanced_cron/hosting_advanced_cron.module\t2012-05-27 13:12:12.000000000 +0100\n+++ b/hosting_advanced_cron/hosting_advanced_cron.module\t2012-05-27 15:09:14.000000000 +0100\n@@ -1,7 +1,7 @@\n <?php\n \n // Use the default cron interval for this site.\n-define('HOSTING_ADVANCED_CRON_SITE_DEFAULT', 0);\n+define('HOSTING_ADVANCED_CRON_SITE_DEFAULT', 10800);\n \n // Do not run cron for this site.\n define('HOSTING_ADVANCED_CRON_SITE_DISABLED', -1);\n@@ -54,19 +54,29 @@ function hosting_advanced_cron_queue($co\n \n   foreach ($sites as $site) {\n     $site_name = hosting_context_name($site->nid);\n-    if (variable_get('hosting_cron_use_backend', TRUE)) {\n-      provision_backend_invoke($site_name, \"cron\");\n-    }\n-    else {\n-      $cmd = sprintf(\"wget -O - -q %s  > /dev/null\", escapeshellarg(_hosting_site_url($site) . '/cron.php'));\n-      drush_shell_exec($cmd);\n-    }\n+    if (!preg_match(\"/(?:dev\\.|devel\\.)/\", $site_name)) {\n+      if (variable_get('hosting_cron_use_backend', TRUE)) {\n+        provision_backend_invoke($site_name, \"cron\");\n+      }\n+      else {\n+        // Optionally add the cron_key querystring key if the site has one.\n+        $url =_hosting_site_url($site) . '/cron.php';\n+        if (!empty($site->cron_key)) {\n+          $url .= '?cron_key=' . rawurlencode($site->cron_key);\n+        }\n+        $cmd = sprintf(\"wget -O - -q %s  > /dev/null\", escapeshellarg($url));\n+        drush_shell_exec($cmd);\n+      }\n+\n+      // We are updating the site table here directly to avoid a possible race condition,\n+      // with the task queue. 
There exists a chance that they might both try to save the\n+      // same node at the same time, and then an old record from the cron queue might\n+      // replace the newly updated record.\n+      db_query(\"UPDATE {hosting_site} SET last_cron=%d WHERE nid=%d\", time(), $site->nid);\n \n-    // We are updating the site table here directly to avoid a possible race condition,\n-    // with the task queue. There exists a chance that they might both try to save the\n-    // same node at the same time, and then an old record from the cron queue might\n-    // replace the newly updated record.\n-    db_query(\"UPDATE {hosting_site} SET last_cron = %d WHERE nid = %d\", time(), $site->nid);\n+      // A small trick to avoid high load when still too many crons are started at once.\n+      sleep(5);\n+    }\n   }\n }\n \n@@ -99,7 +109,7 @@ function hosting_advanced_cron_nodeapi(&\n         if (!$result) {\n           $result = array('cron_interval' => variable_get('hosting_advanced_cron_default_interval', 3600));\n         }\n-  \n+\n         return $result;\n \n       case 'delete':\n@@ -125,8 +135,8 @@ function hosting_advanced_cron_get_sites\n     $cron_interval = $site->cron_interval ? $site->cron_interval : variable_get('hosting_advanced_cron_default_interval', 3600);\n \n     // Run cron if it has never ran before for this site, or if the cron\n-    // interval since last cron run has been exceeded. 
\n-    if ($cron_interval != HOSTING_ADVANCED_CRON_SITE_DISABLED && (!$site->last_cron || ($site->last_cron + $site->cron_interval < time()))) {\n+    // interval since last cron run has been exceeded.\n+    if ($cron_interval != HOSTING_ADVANCED_CRON_SITE_DISABLED && (!$site->last_cron || ($site->last_cron + $cron_interval < time()))) {\n       $sites[$site->nid] = node_load($site->nid);\n     }\n   }\n@@ -191,9 +201,9 @@ function hosting_advanced_cron_form_site\n function hosting_advanced_cron_interval_options() {\n   $options = array(\n     HOSTING_ADVANCED_CRON_SITE_DISABLED => t('Disabled'),\n-    HOSTING_ADVANCED_CRON_SITE_DEFAULT => t('Default'),\n+    HOSTING_ADVANCED_CRON_SITE_DEFAULT => t('3h (default)'),\n   );\n-  $options += drupal_map_assoc(array(60, 300, 900, 1800, 3600, 21600, 86400), 'format_interval');\n+  $options += drupal_map_assoc(array(60, 180, 300, 600, 900, 1800, 3600, 10800, 21600, 43200, 86400), 'format_interval');\n \n   return $options;\n }\n"
  },
  {
    "path": "aegir/patches/hosting_cron.module",
    "content": "<?php\n/**\n * @file\n * Allow the hosting system to cron all the installed sites on a schedule.\n */\n\n// Define the default cron interval, if not specified\n// in hosting_cron_default_interval variable.\ndefine('ADV_CRON_DEFAULT', 3600);\n\n// Define the interval interpreted as cron turned off.\ndefine('ADV_CRON_TURNED_OFF', 0);\n\n// Define the max number of sites to run cron in parallel.\ndefine('ADV_CRON_MAX_PLL', 5);\n\n/**\n * Implements hook_hosting_queues().\n *\n * @todo: In Hosting 4.x change the type to HOSTING_QUEUE_TYPE_SPREAD.\n */\nfunction hosting_cron_hosting_queues() {\n  $items['cron'] = array(\n    'type' => HOSTING_QUEUE_TYPE_BATCH,\n    'name' => t('Advanced Cron queue'),\n    'description' => t('Run advanced cron on hosted sites.'),\n    'total_items' => hosting_cron_hosting_site_count(),\n    'frequency' => strtotime(\"1 min\", 0),\n    'min_threads' => 6,\n    'max_threads' => 12,\n    'threshold' => 100,\n    'singular' => t('site'),\n    'plural' => t('sites'),\n  );\n  return $items;\n}\n\nfunction hosting_cron_hosting_site_count() {\n  $sql = \"SELECT count(n.nid) FROM {node} n\n    LEFT JOIN {hosting_site} hs ON n.nid = hs.nid\n    LEFT JOIN {hosting_cron} hac ON n.nid = hac.nid\n    WHERE n.type = :site AND hs.status = :status\n    AND ((hac.cron_interval IS NOT NULL AND hac.cron_interval > 0)\n    OR (hac.cron_interval IS NULL AND :cron_interval > 0))\";\n  $result = db_query($sql, array(':site' => 'site', ':status' => HOSTING_SITE_ENABLED, ':cron_interval' => variable_get('hosting_cron_default_interval', ADV_CRON_DEFAULT)))->fetchField();\n  return $result;\n}\n\n/**\n * Implements hook_permission().\n */\nfunction hosting_cron_permission() {\n  return array(\n    'configure site cron interval' => array(\n      'title' => t('configure site cron interval'),\n    ),\n  );\n}\n\n/**\n * Queue callback (hosting_<QUEUE_NAME>_queue) for the advanced cron queue.\n *\n * This function is called by hosting_run_queue() 
whenever the \"Advanced Cron\"\n * queue is run.\n */\nfunction hosting_cron_queue($count = ADV_CRON_MAX_PLL) {\n  if (is_readable($_SERVER['HOME'] . '/static/control/cron-proxy.info')) {\n    $use_proxy_mode = TRUE;\n  }\n  else {\n    $use_proxy_mode = FALSE;\n  }\n  // Get a list of sites for which to run cron.\n  $sites = hosting_cron_get_sites($count);\n  foreach ($sites as $site) {\n    $site_name = hosting_context_name($site->nid);\n    $this_name = ltrim($site_name, '@');\n    $this_host = '-H \"Host: ' . $this_name . '\"';\n    $this_cuid = '.cron.' . md5($this_name . '.' . $site->nid) . '.pid';\n    $profile = node_load($site->profile);\n    $platform = node_load($site->platform);\n    if ($profile->short_name == 'hostmaster') {\n      provision_backend_invoke($site_name, \"cron\");\n    }\n    elseif (variable_get('hosting_cron_use_backend', TRUE)) {\n      provision_backend_invoke($site_name, \"elysia-cron\");\n      sleep(3);\n      provision_backend_invoke($site_name, \"cron\");\n    }\n    else {\n      if (is_readable($_SERVER['HOME'] . '/.tmp')) {\n        $this_tmp = $_SERVER['HOME'] . '/.tmp/';\n      }\n      else {\n        $this_tmp = '/tmp/';\n      }\n      $result = db_query(\"SELECT p.publish_path FROM {hosting_platform} p LEFT JOIN {hosting_site} s ON p.nid=s.platform WHERE platform = :platform\", array(':platform' => $platform->nid));\n      foreach ($result as $row) {\n        $this_platform_root = $row->publish_path;\n      }\n      if (is_readable($this_platform_root . '/core') ||\n          is_readable($this_platform_root . '/docroot/core') ||\n          is_readable($this_platform_root . '/html/core') ||\n          is_readable($this_platform_root . '/web/core')) {\n        $url_own = 'https://' . $this_name . 
'/cron/';\n        $url = 'https://127.0.0.1/cron/';\n        // Optionally add the cron_key query string key if the site has one.\n        if (!empty($site->cron_key)) {\n          $url_own .= rawurlencode($site->cron_key);\n          $url .= rawurlencode($site->cron_key);\n        }\n      }\n      else {\n        $url_own = 'https://' . $this_name . '/cron.php';\n        $url = 'https://127.0.0.1/cron.php';\n        // Optionally add the cron_key query string key if the site has one.\n        if (!empty($site->cron_key)) {\n          $url_own .= '?cron_key=' . rawurlencode($site->cron_key);\n          $url .= '?cron_key=' . rawurlencode($site->cron_key);\n        }\n      }\n      // Prepare the log file and command.\n      $this_clog = $this_tmp . $this_cuid . '.log';\n      // Add a date-stamped entry to the log.\n      $date = date(\"Y-m-d H:i:s\");\n      if (!empty($use_proxy_mode)) {\n        $date = date(\"Y-m-d H:i:s\");\n        file_put_contents($this_clog, \"\\n$date cURL command via IP request\\n\", FILE_APPEND);\n        file_put_contents($this_clog, \"$date KEY $this_host KEY $url\\n\", FILE_APPEND);\n        $second_cmd = 'curl -L --max-redirs 5 -k -s --retry 3 --retry-delay 5 --max-time 600 -A iCabProXy ' . $this_host . ' ' . $url;\n        $second_output = [];\n        $second_return_var = 0;\n        exec($second_cmd, $second_output, $second_return_var);\n        // Append curl response to the log.\n        if (!empty($second_output)) {\n          file_put_contents($this_clog, \"$date First IP Response:\\n\" . implode(\"\\n\", $second_output) . 
\"\\n\", FILE_APPEND);\n        } else {\n          file_put_contents($this_clog, \"$date First IP request returned empty OK response.\\n\", FILE_APPEND);\n        }\n      }\n      else {\n        file_put_contents($this_clog, \"\\n$date cURL command via DOMAIN request\\n\", FILE_APPEND);\n        file_put_contents($this_clog, \"$date KEY $url_own\\n\", FILE_APPEND);\n        $cmd = 'curl -L --max-redirs 5 -k -s --retry 3 --retry-delay 5 --max-time 600 -A iCabProXy ' . $url_own;\n        $output = [];\n        $return_var = 0;\n        exec($cmd, $output, $return_var);\n        $response = implode(\"\\n\", $output);\n        if (!empty($response)) {\n          $date = date(\"Y-m-d H:i:s\");\n          file_put_contents($this_clog, \"$date NON-EMPTY-RESPONSE FOLLOWS\\n\", FILE_APPEND);\n          file_put_contents($this_clog, $response . \"\\n\", FILE_APPEND);\n          file_put_contents($this_clog, \"$date Another cURL command, this time via IP request\\n\", FILE_APPEND);\n          file_put_contents($this_clog, \"$date KEY $this_host KEY $url\\n\", FILE_APPEND);\n          $second_cmd = 'curl -L --max-redirs 5 -k -s --retry 3 --retry-delay 5 --max-time 600 -A iCabProXy ' . $this_host . ' ' . $url;\n          $second_output = [];\n          $second_return_var = 0;\n          exec($second_cmd, $second_output, $second_return_var);\n          if (!empty($second_output)) {\n            file_put_contents($this_clog, \"$date Second IP Response:\\n\" . implode(\"\\n\", $second_output) . 
\"\\n\", FILE_APPEND);\n          } else {\n            file_put_contents($this_clog, \"$date Second IP request returned empty OK response.\\n\", FILE_APPEND);\n          }\n        }\n        else {\n          $date = date(\"Y-m-d H:i:s\");\n          file_put_contents($this_clog, \"$date First DOMAIN request returned empty OK response.\\n\", FILE_APPEND);\n        }\n      }\n    }\n    db_update('hosting_site')\n      ->fields(array(\n        'last_cron' => REQUEST_TIME,\n      ))\n      ->condition('nid', $site->nid)\n      ->execute();\n    // A small trick to avoid high load when still too many crons are started at once.\n    unset($site_name, $this_name, $this_host, $this_cuid, $this_clog, $profile, $platform, $this_tmp, $this_platform_root, $url);\n    sleep(3);\n  }\n}\n\n/**\n * Implements hook_node_load().\n */\nfunction hosting_cron_node_load($nodes, $types) {\n  foreach ($nodes as $nid => &$node) {\n    if ($node->type == 'site') {\n\n      $this_cron_interval = db_query(\"SELECT cron_interval FROM {hosting_cron} WHERE nid = :nid\", array(':nid' => $node->nid))->fetchField();\n      if ($this_cron_interval) {\n        $this_cron_interval = array('cron_interval' => $this_cron_interval);\n      }\n\n      if (isset($node->cron_interval) && $node->cron_interval > -1) {\n        $cron_interval_ok = TRUE;\n      }\n      else {\n        if (isset($this_cron_interval) && $this_cron_interval > -1) {\n          $node->cron_interval = $this_cron_interval;\n        }\n      }\n\n      return $this_cron_interval;\n    }\n  }\n}\n\n/**\n * Implements hook_node_view().\n */\nfunction hosting_cron_node_view($node, $view_mode, $langcode) {\n  if ($node->type == 'site') {\n    if ($view_mode != 'teaser') {\n      $this_cron_interval = db_query(\"SELECT cron_interval FROM {hosting_cron} WHERE nid = :nid\", array(':nid' => $node->nid))->fetchField();\n      if (!$node->cron_interval && $this_cron_interval) {\n        $node->cron_interval = $this_cron_interval;\n      
}\n      $cron_text = $this_cron_interval == ADV_CRON_TURNED_OFF ? t('Disabled') : t('Every !interval', array('!interval' => format_interval($this_cron_interval)));\n      $cron_text .= '<br />' . t('(Last run: !interval)', array('!interval' => hosting_format_interval($node->last_cron)));\n      $node->content['info']['last_cron'] = array(\n        '#type' => 'item',\n        '#title' => t('Cron'),\n        '#weight' => 20,\n        '#markup' => $cron_text,\n      );\n    }\n  }\n}\n\n/**\n * Implements hook_node_delete().\n */\nfunction hosting_cron_node_delete($node) {\n  if ($node->type == \"site\") {\n    db_delete('hosting_cron')\n      ->condition('nid', $node->nid)\n      ->execute();\n  }\n}\n\n/**\n * Implements hook_node_update().\n */\nfunction hosting_cron_node_update($node) {\n  if ($node->type == \"site\") {\n    $use_cron_interval = ADV_CRON_TURNED_OFF;\n    $this_cron_interval = db_query(\"SELECT cron_interval FROM {hosting_cron} WHERE nid = :nid\", array(':nid' => $node->nid))->fetchField();\n\n    if ($node->cron_interval > -1) {\n      $use_cron_interval = $node->cron_interval;\n    }\n    else {\n      if ($this_cron_interval > -1) {\n        $use_cron_interval = $this_cron_interval;\n      }\n    }\n\n    if ($node->nid == 10) {\n      if ($node->cron_interval == ADV_CRON_TURNED_OFF) {\n        db_update('hosting_cron')\n          ->fields(array('cron_interval' => '3600'))\n          ->condition('nid', '10')\n          ->execute();\n      }\n    }\n\n    if ($this_cron_interval > -1) {\n      $cron_interval_ok = TRUE;\n    }\n    else {\n      db_insert('hosting_cron')\n        ->fields(array(\n          'nid' => $node->nid,\n          'cron_interval' => $use_cron_interval,\n        ))\n        ->execute();\n    }\n\n    if ($use_cron_interval == ADV_CRON_TURNED_OFF) {\n      db_update('hosting_cron')\n        ->fields(array(\n          'cron_interval' => ADV_CRON_TURNED_OFF,\n        ))\n        ->condition('nid', $node->nid)\n        
->execute();\n    }\n    elseif ($use_cron_interval > ADV_CRON_TURNED_OFF) {\n      db_update('hosting_cron')\n        ->fields(array(\n          'cron_interval' => $use_cron_interval,\n        ))\n        ->condition('nid', $node->nid)\n        ->execute();\n    }\n    else {\n      db_insert('hosting_cron')\n        ->fields(array(\n          'nid' => $node->nid,\n          'cron_interval' => ADV_CRON_TURNED_OFF,\n        ))\n        ->execute();\n    }\n  }\n}\n\n/**\n * Implements hook_node_insert().\n */\nfunction hosting_cron_node_insert($node) {\n  if ($node->type == 'site') {\n    $use_cron_interval = ADV_CRON_TURNED_OFF;\n    $this_cron_interval = db_query(\"SELECT cron_interval FROM {hosting_cron} WHERE nid = :nid\", array(':nid' => $node->nid))->fetchField();\n\n    if (isset($node->cron_interval) && $node->cron_interval > -1) {\n      $use_cron_interval = $node->cron_interval;\n    }\n    else {\n      if ($this_cron_interval > -1) {\n        $use_cron_interval = $this_cron_interval;\n      }\n    }\n\n    if ($node->nid == 10) {\n      $use_cron_interval = '3600';\n    }\n\n    if ($use_cron_interval == ADV_CRON_TURNED_OFF) {\n      db_insert('hosting_cron')\n        ->fields(array(\n          'nid' => $node->nid,\n          'cron_interval' => ADV_CRON_TURNED_OFF,\n        ))\n        ->execute();\n    }\n    elseif ($use_cron_interval > ADV_CRON_TURNED_OFF) {\n      db_insert('hosting_cron')\n        ->fields(array(\n          'nid' => $node->nid,\n          'cron_interval' => $use_cron_interval,\n        ))\n        ->execute();\n    }\n    else {\n      db_insert('hosting_cron')\n        ->fields(array(\n          'nid' => $node->nid,\n          'cron_interval' => ADV_CRON_TURNED_OFF,\n        ))\n        ->execute();\n    }\n  }\n}\n\n/**\n * Retrieves a list of sites for which to run cron.\n */\nfunction hosting_cron_get_sites($count) {\n  $result = db_query('SELECT n.nid, hs.last_cron, hac.cron_interval FROM {node} n LEFT JOIN {hosting_site} hs 
ON n.nid = hs.nid LEFT JOIN {hosting_cron} hac ON n.nid = hac.nid WHERE n.type = :site AND hs.status = :status ORDER BY hs.last_cron ASC, n.nid ASC', array(':site' => 'site', ':status' => HOSTING_SITE_ENABLED));\n  // Initialize, so we never return an undefined variable when no site qualifies.\n  $sites = array();\n  $counter = 0;\n  foreach ($result as $site) {\n    if ($counter <= $count && $counter <= ADV_CRON_MAX_PLL) {\n      //\n      // Run cron if it has never run before for this site,\n      // but only if it has been enabled for this site.\n      //\n      // This should not happen for a newly cloned site,\n      // regardless of whether cron is enabled on the source site,\n      // to avoid running cron on the cloned copy without any prior control.\n      //\n      // Note that we can't use hosting_cron_default_interval here\n      // if $site->cron_interval is empty / not set yet, so we have to ignore\n      // the first cron run attempt by using ADV_CRON_TURNED_OFF by default\n      // instead of hosting_cron_default_interval if we can't read\n      // $site->cron_interval or it is empty for some reason. This means, however,\n      // that we must store cron_interval also for sites using the default value,\n      // or cron would never run on a newly created/cloned site.\n      //\n      if (!$site->last_cron) {\n        $this_cron_interval = db_query(\"SELECT cron_interval FROM {hosting_cron} WHERE nid = :nid\", array(':nid' => $site->nid))->fetchField();\n        if ($this_cron_interval != ADV_CRON_TURNED_OFF) {\n          $sites[$site->nid] = node_load($site->nid);\n          $counter++;\n        }\n      }\n      else {\n        //\n        // Determine the cron interval. 
If not specified for this site,\n        // use the default hosting_cron_default_interval or\n        // ADV_CRON_DEFAULT.\n        //\n        $this_cron_interval = db_query(\"SELECT cron_interval FROM {hosting_cron} WHERE nid = :nid\", array(':nid' => $site->nid))->fetchField();\n        //\n        // Run cron if it has already run before for this site,\n        // the cron is enabled on this site, and the cron interval\n        // since the last cron run has been exceeded.\n        //\n        if ($this_cron_interval != ADV_CRON_TURNED_OFF) {\n          if ($site->last_cron + $this_cron_interval < time()) {\n            $sites[$site->nid] = node_load($site->nid);\n            $counter++;\n          }\n        }\n      }\n    }\n  }\n  return $sites;\n}\n\n/**\n * Implements hook_form_<FORM_ID>_alter().\n */\nfunction hosting_cron_form_hosting_settings_alter(&$form, $form_state) {\n  $options = hosting_cron_interval_options();\n  unset($options[0]);\n  $form['hosting_cron_default_interval'] = array(\n    '#type' => 'select',\n    '#title' => t('Default cron interval'),\n    '#options' => $options,\n    '#description' => t('The cron interval to use for all sites unless overridden on the site node itself.'),\n    '#default_value' => variable_get('hosting_cron_default_interval', ADV_CRON_DEFAULT),\n  );\n  $form['hosting_cron_use_backend'] = array(\n    '#type' => 'radios',\n    '#title' => t('Cron method'),\n    '#description' => t('For running cron on a site. 
You can use the Drush cron implementation or a traditional wget method.'),\n    '#options' => array('Wget', 'Drush'),\n    '#default_value' => variable_get('hosting_cron_use_backend', TRUE),\n  );\n  // Add some weight to the buttons to push them to the bottom of the form.\n  $form['buttons']['#weight'] = 1000;\n}\n\n/**\n * Implements hook_form_<FORM_ID>_alter().\n *\n * Alter the node form for a site to add the cron interval setting.\n */\nfunction hosting_cron_form_site_node_form_alter(&$form, $form_state) {\n  if (user_access('configure site cron interval')) {\n    if (!empty($form['#node']) && isset($form['#node']->nid)) {\n      if (empty($form['#node']->cron_interval)) {\n        $default_value = ADV_CRON_TURNED_OFF;\n      }\n      else {\n        $default_value = $form['#node']->cron_interval;\n      }\n    }\n    else {\n      $default_value = variable_get('hosting_cron_default_interval', ADV_CRON_DEFAULT);\n    }\n    $form['cron_interval'] = array(\n      '#type' => 'select',\n      '#title' => t('Cron interval'),\n      '#options' => hosting_cron_interval_options(),\n      '#description' => t('Cron will be automatically run for this site at the interval defined here.'),\n      '#default_value' => $default_value,\n      '#weight' => 3,\n    );\n    return $form;\n  }\n}\n\n/**\n * Returns an array of options for the cron interval.\n *\n * @return\n *   An associative array with the interval in seconds as key, and a\n *   human-readable interval as value.\n */\nfunction hosting_cron_interval_options() {\n  $options = array(\n    ADV_CRON_TURNED_OFF => t('Disabled'),\n    ADV_CRON_DEFAULT => t('1h (default)'),\n  );\n  $options += drupal_map_assoc(array(60, 180, 300, 600, 900, 1800, 3600, 10800, 21600, 43200, 86400, 604800), 'format_interval');\n  return $options;\n}\n"
  },
  {
    "path": "aegir/patches/hosting_cron_queue-reliability.patch",
    "content": "From dc226392e0480bb4ea6fe98ff2f65bbf1fd89e65 Mon Sep 17 00:00:00 2001\nFrom: BOA Dev Team <boa@omega8.cc>\nDate: Mon, 9 Dec 2024 02:45:37 +0100\nSubject: [PATCH] Improve hosting_cron_queue reliability\n\n---\n cron/hosting_cron.module | 61 ++++++++++++++++++++++++++++++++++++----\n 1 file changed, 56 insertions(+), 5 deletions(-)\n\ndiff --git a/cron/hosting_cron.module b/cron/hosting_cron.module\nindex 122e5378..ec5347d9 100644\n--- a/cron/hosting_cron.module\n+++ b/cron/hosting_cron.module\n@@ -64,6 +64,12 @@ function hosting_cron_permission() {\n  * queue is run.\n  */\n function hosting_cron_queue($count = ADV_CRON_MAX_PLL) {\n+  if (is_readable($_SERVER['HOME'] . '/static/control/cron-proxy.info')) {\n+    $use_proxy_mode = TRUE;\n+  }\n+  else {\n+    $use_proxy_mode = FALSE;\n+  }\n   // Get a list of sites for which to run cron.\n   $sites = hosting_cron_get_sites($count);\n   foreach ($sites as $site) {\n@@ -96,26 +102,70 @@ function hosting_cron_queue($count = ADV_CRON_MAX_PLL) {\n           is_readable($this_platform_root . '/docroot/core') ||\n           is_readable($this_platform_root . '/html/core') ||\n           is_readable($this_platform_root . '/web/core')) {\n+        $url_own = 'https://' . $this_name . '/cron/';\n         $url = 'https://127.0.0.1/cron/';\n         // Optionally add the cron_key query string key if the site has one.\n         if (!empty($site->cron_key)) {\n+          $url_own .= rawurlencode($site->cron_key);\n           $url .= rawurlencode($site->cron_key);\n         }\n       }\n       else {\n+        $url_own = 'https://' . $this_name . '/cron.php';\n         $url = 'https://127.0.0.1/cron.php';\n         // Optionally add the cron_key query string key if the site has one.\n         if (!empty($site->cron_key)) {\n+          $url_own .= '?cron_key=' . rawurlencode($site->cron_key);\n           $url .= '?cron_key=' . rawurlencode($site->cron_key);\n         }\n       }\n-      if (is_readable($this_tmp . 
$this_cuid)) {\n-        system('touch ' . $this_tmp . '.busy' . $this_cuid);\n+      // Prepare the log file and command.\n+      $this_clog = $this_tmp . $this_cuid . '.log';\n+      // Add a date-stamped entry to the log.\n+      $date = date(\"Y-m-d H:i:s\");\n+      if (!empty($use_proxy_mode)) {\n+        $date = date(\"Y-m-d H:i:s\");\n+        file_put_contents($this_clog, \"\\n$date cURL command via IP request\\n\", FILE_APPEND);\n+        file_put_contents($this_clog, \"$date KEY $this_host KEY $url\\n\", FILE_APPEND);\n+        $second_cmd = 'curl -L --max-redirs 5 -k -s --retry 3 --retry-delay 5 --max-time 600 -A iCabProXy ' . $this_host . ' ' . $url;\n+        $second_output = [];\n+        $second_return_var = 0;\n+        exec($second_cmd, $second_output, $second_return_var);\n+        // Append curl response to the log.\n+        if (!empty($second_output)) {\n+          file_put_contents($this_clog, \"$date First IP Response:\\n\" . implode(\"\\n\", $second_output) . \"\\n\", FILE_APPEND);\n+        } else {\n+          file_put_contents($this_clog, \"$date First IP request returned empty OK response.\\n\", FILE_APPEND);\n+        }\n       }\n       else {\n-        system('touch ' . $this_tmp . $this_cuid);\n-        system('curl -L --max-redirs 5 -k -s --retry 1 --retry-delay 10 --max-time 300 -A iCabProXy ' . $this_host . ' ' . $url . ' > /dev/null');\n-        system('rm -f ' . $this_tmp . $this_cuid);\n+        file_put_contents($this_clog, \"\\n$date cURL command via DOMAIN request\\n\", FILE_APPEND);\n+        file_put_contents($this_clog, \"$date KEY $url_own\\n\", FILE_APPEND);\n+        $cmd = 'curl -L --max-redirs 5 -k -s --retry 3 --retry-delay 5 --max-time 600 -A iCabProXy ' . 
$url_own;\n+        $output = [];\n+        $return_var = 0;\n+        exec($cmd, $output, $return_var);\n+        $response = implode(\"\\n\", $output);\n+        if (!empty($response)) {\n+          $date = date(\"Y-m-d H:i:s\");\n+          file_put_contents($this_clog, \"$date NON-EMPTY-RESPONSE FOLLOWS\\n\", FILE_APPEND);\n+          file_put_contents($this_clog, $response . \"\\n\", FILE_APPEND);\n+          file_put_contents($this_clog, \"$date Another cURL command, this time via IP request\\n\", FILE_APPEND);\n+          file_put_contents($this_clog, \"$date KEY $this_host KEY $url\\n\", FILE_APPEND);\n+          $second_cmd = 'curl -L --max-redirs 5 -k -s --retry 3 --retry-delay 5 --max-time 600 -A iCabProXy ' . $this_host . ' ' . $url;\n+          $second_output = [];\n+          $second_return_var = 0;\n+          exec($second_cmd, $second_output, $second_return_var);\n+          if (!empty($second_output)) {\n+            file_put_contents($this_clog, \"$date Second IP Response:\\n\" . implode(\"\\n\", $second_output) . \"\\n\", FILE_APPEND);\n+          } else {\n+            file_put_contents($this_clog, \"$date Second IP request returned empty OK response.\\n\", FILE_APPEND);\n+          }\n+        }\n+        else {\n+          $date = date(\"Y-m-d H:i:s\");\n+          file_put_contents($this_clog, \"$date First DOMAIN request returned empty OK response.\\n\", FILE_APPEND);\n+        }\n       }\n     }\n     db_update('hosting_site')\n@@ -125,6 +175,7 @@ function hosting_cron_queue($count = ADV_CRON_MAX_PLL) {\n       ->condition('nid', $site->nid)\n       ->execute();\n     // A small trick to avoid high load when still too many crons are started at once.\n+    unset($site_name, $this_name, $this_host, $this_cuid, $this_clog, $profile, $platform, $this_tmp, $this_platform_root, $url);\n     sleep(3);\n   }\n }\n--\n2.45.1\n\n"
  },
  {
    "path": "aegir/patches/hosting_le_vhost.drush.inc",
    "content": "<?php\n\n/**\n * Register our directory as a place to find provision classes.\n */\nfunction hosting_le_vhost_register_autoload() {\n  static $loaded = FALSE;\n  if (!$loaded) {\n    $loaded = TRUE;\n    $list = drush_commandfile_list();\n    $provision_dir = dirname($list['provision']);\n    if (is_readable($provision_dir . '/provision.inc')) {\n      include_once($provision_dir . '/provision.inc');\n      include_once($provision_dir . '/provision.service.inc');\n      if (function_exists('provision_autoload_register_prefix')) {\n        provision_autoload_register_prefix('Provision_', dirname(__FILE__));\n      }\n    }\n  }\n}\n\n/**\n * Implements hook_drush_init().\n */\nfunction hosting_le_vhost_drush_init() {\n  hosting_le_vhost_register_autoload();\n}\n\n/**\n *  Implements hook_provision_services().\n */\nfunction hosting_le_vhost_provision_services() {\n  hosting_le_vhost_register_autoload();\n  return array('Le' => NULL);\n}\n\n/*\n * Implementation of hook_provision_nginx_vhost_config()\n */\nfunction hosting_le_vhost_provision_nginx_vhost_config($uri, $data) {\n\n  if (d()->type == 'site') {\n\n    $aegir_root = d('@server_master')->aegir_root;\n    $le_cert = d('@server_master')->aegir_root . \"/tools/le/certs\";\n    $is_boa = FALSE;\n    $is_boa_ctrl = \"/data/conf/global.inc\";\n\n    if (provision_file()->exists($is_boa_ctrl)->status()) {\n      $is_boa = TRUE;\n    }\n\n    $main_name = $real_name = substr(d()->name, 1);\n    if ($real_name == 'hostmaster') {\n      $real_name = $main_name = d()->uri;\n    }\n\n    if (d()->redirection) {\n      $main_name = d()->redirection;\n      if ($is_boa) {\n        $cert_dir = $le_cert . \"/\" . $real_name;\n      }\n      else {\n        $cert_dir = $le_cert . \"/\" . $main_name;\n      }\n    }\n    else {\n      $cert_dir = $le_cert . \"/\" . $main_name;\n    }\n\n    $chain_pem = $cert_dir . 
\"/chain.pem\";\n\n    $lines = array();\n\n    $lines[] = \"\";\n    if (d()->ssl_enabled) {\n      if (provision_file()->exists($chain_pem)->status()) {\n        $lines[] = \"  ssl_trusted_certificate    $chain_pem;\";\n        $lines[] = \"\";\n      }\n    }\n    $lines[] = \"  ###\";\n    $lines[] = \"  ### Allow access to letsencrypt.org ACME challenges directory.\";\n    $lines[] = \"  ###\";\n    $lines[] = \"  location ^~ /.well-known/acme-challenge {\";\n    $lines[] = \"    allow all;\";\n    $lines[] = \"    alias $aegir_root/tools/le/.acme-challenges;\";\n    $lines[] = \"    try_files \\$uri 404;\";\n    $lines[] = \"    auth_basic off;\";\n    $lines[] = \"  }\";\n    $lines[] = \"\\n\";\n\n    return implode(\"\\n\", $lines);\n  }\n\n  return '';\n}\n\n/*\n * Implementation of hook_provision_apache_vhost_config()\n */\nfunction hosting_le_vhost_provision_apache_vhost_config($uri, $data) {\n\n  $aegir_root = d('@server_master')->aegir_root;\n\n  if (d()->type == 'site') {\n\n    $lines = array();\n\n    $lines[] = \"\";\n    $lines[] = \"  Alias /.well-known/acme-challenge $aegir_root/tools/le/.acme-challenges\";\n    $lines[] = \"\";\n    $lines[] = \"  # Allow access to letsencrypt.org ACME challenges directory.\";\n    $lines[] = \"  <Directory \\\"$aegir_root/tools/le/.acme-challenges\\\">\";\n    $lines[] = \"    Order allow,deny\";\n    $lines[] = \"    Allow from all\";\n    $lines[] = \"    Require all granted\";\n    $lines[] = \"    Satisfy Any\";\n    $lines[] = \"  </Directory>\";\n    $lines[] = \"\\n\";\n\n    return implode(\"\\n\", $lines);\n  }\n\n  return '';\n}\n"
  },
  {
    "path": "aegir/patches/imagecache-1243258-5.patch",
    "content": "diff --git a/imagecache.module b/imagecache.module\nindex 55d48ce..36ace81 100644\n--- a/imagecache.module\n+++ b/imagecache.module\n@@ -442,7 +442,7 @@ function _imagecache_cache($presetname, $path) {\n   }\n \n   // umm yeah deliver it early if it is there. especially useful\n-  // to prevent lock files from being created when delivering private files.\n+  // to prevent locks from being created when delivering private files.\n   $dst = imagecache_create_path($preset['presetname'], $path);\n   if (is_file($dst)) {\n     imagecache_transfer($dst);\n@@ -458,34 +458,39 @@ function _imagecache_cache($presetname, $path) {\n     exit;\n   };\n \n-  // Bail if the requested file isn't an image you can't request .php files\n+  // Bail if the requested file isn't an image. You can't request .php files\n   // etc...\n-  if (!getimagesize($src)) {\n+  if (!@getimagesize($src)) {\n     watchdog('imagecache', '403: File is not an image %image ', array('%image' => $src), WATCHDOG_ERROR);\n     header('HTTP/1.0 403 Forbidden');\n     exit;\n   }\n \n-  $lockfile = file_directory_temp() .'/'. $preset['presetname'] . basename($src);\n-  if (file_exists($lockfile)) {\n-    watchdog('imagecache', 'ImageCache already generating: %dst, Lock file: %tmp.', array('%dst' => $dst, '%tmp' => $lockfile), WATCHDOG_NOTICE);\n-    // 307 Temporary Redirect, to myself. Lets hope the image is done next time around.\n-    header('Location: '. request_uri(), TRUE, 307);\n-    exit;\n+  // Generate preset inside of a lock.\n+  $lockname = $preset['presetname'] . basename($src);\n+  $wait = FALSE;\n+  if (lock_acquire($lockname)) {\n+    imagecache_build_derivative($preset['actions'], $src, $dst);\n+    lock_release($lockname);\n+  }\n+  else {\n+    lock_wait($lockname);\n+    $wait = TRUE;\n   }\n-  touch($lockfile);\n-  // register the shtdown function to clean up lock files. 
by the time shutdown\n-  // functions are being called the cwd has changed from document root, to\n-  // server root so absolute paths must be used for files in shutdown functions.\n-  register_shutdown_function('file_delete', realpath($lockfile));\n \n-  // check if deriv exists... (file was created between apaches request handler and reaching this code)\n-  // otherwise try to create the derivative.\n-  if (file_exists($dst) || imagecache_build_derivative($preset['actions'], $src, $dst)) {\n+  // Make sure derivative image exists before trying to send it.\n+  if (file_exists($dst)) {\n+    // exit gets called inside this function.\n     imagecache_transfer($dst);\n   }\n+\n   // Generate an error if image could not generate.\n-  watchdog('imagecache', 'Failed generating an image from %image using imagecache preset %preset.', array('%image' => $path, '%preset' => $preset['presetname']), WATCHDOG_ERROR);\n+  if ($wait) {\n+    watchdog('imagecache', 'Acquired lock, but failed in generating an image from %image using imagecache preset %preset.', array('%image' => $path, '%preset' => $preset['presetname']), WATCHDOG_ERROR);\n+  }\n+  else {\n+    watchdog('imagecache', 'Waited for the lock, but found no generated image from %image using imagecache preset %preset.', array('%image' => $path, '%preset' => $preset['presetname']), WATCHDOG_ERROR);\n+  }\n   header(\"HTTP/1.0 500 Internal Server Error\");\n   exit;\n }\n"
  },
  {
    "path": "aegir/patches/imagefield_crop.patch",
    "content": "Index: imagefield_crop.module\n===================================================================\n--- imagefield_crop.module\t(revision 1120)\n+++ imagefield_crop.module\t(working copy)\n@@ -45,6 +45,8 @@\n  * Delegated to filefield.\n  */\n function imagefield_crop_widget_settings($op, $widget) {\n+  // make sure we have the functions as this may be called from update.php\n+  module_load_include('inc', 'imagefield_crop', 'imagefield_crop_widget');\n   switch ($op) {\n     case 'form':\n       return imagefield_crop_widget_settings_form($widget);\n"
  },
  {
    "path": "aegir/patches/julio_profile.patch",
    "content": "diff --git a/julio.profile b/julio.profile\nindex 340580d..8522810 100644\n--- a/julio.profile\n+++ b/julio.profile\n@@ -1,6 +1,8 @@\n <?php\n // include form function for feature_set\n-!function_exists('feature_set_admin_form') ? module_load_include('inc', 'feature_set', 'feature_set.admin') : FALSE;\n+if (!defined('DRUSH_BASE_PATH')) {\n+  !function_exists('feature_set_admin_form') ? module_load_include('inc', 'feature_set', 'feature_set.admin') : FALSE;\n+}\n\n /**\n  * Implements form_alter for the configuration form\n"
  },
  {
    "path": "aegir/patches/my_config.h.patch",
    "content": "diff -burp a/my_config.h b/my_config.h\n--- a/my_config.h\t2014-10-09 19:32:46.000000000 -0400\n+++ b/my_config.h\t2014-10-09 19:35:12.000000000 -0400\n@@ -641,17 +641,4 @@\n #define SIZEOF_TIME_T 8\n /* #undef TIME_T_UNSIGNED */\n\n-/*\n-  stat structure (from <sys/stat.h>) is conditionally defined\n-  to have different layout and size depending on the defined macros.\n-  The correct macro is defined in my_config.h, which means it MUST be\n-  included first (or at least before <features.h> - so, practically,\n-  before including any system headers).\n-\n-  __GLIBC__ is defined in <features.h>\n-*/\n-#ifdef __GLIBC__\n-#error <my_config.h> MUST be included first!\n-#endif\n-\n #endif\n"
  },
  {
    "path": "aegir/patches/mysql.provision.patch",
    "content": "diff --git mysql_service.inc mysql_service.inc\nindex 10dccaa..d3a3e10 100644\n--- mysql_service.inc\n+++ mysql_service.inc\n@@ -113,6 +113,13 @@ class provisionService_db_mysql extends provisionService_db_pdo {\n     $cmd = sprintf('mysqldump --defaults-file=/dev/fd/3 --opt --skip-lock-tables --order-by-primary --default-character-set=utf8 -Q --hex-blob --single-transaction --quick -r%s/database.sql %s', escapeshellcmd(d()->site_path), escapeshellcmd(drush_get_option('db_name')));\n     $success = $this->safe_shell_exec($cmd, drush_get_option('db_host'), urldecode(drush_get_option('db_user')), urldecode(drush_get_option('db_passwd')));\n \n+    $cmd = sprintf('sed \\'s|/\\*!50001 CREATE ALGORITHM=UNDEFINED \\*/|/\\*!50001 CREATE \\*/|g\\' %s/database.sql > %s/database_temp.sql', escapeshellcmd(d()->site_path), escapeshellcmd(d()->site_path));\n+    $success = $this->safe_shell_exec($cmd);\n+    $cmd = sprintf('sed \\'s|/\\*!50013 DEFINER=.*||g\\' %s/database_temp.sql > %s/database.sql', escapeshellcmd(d()->site_path), escapeshellcmd(d()->site_path));\n+    $success = $this->safe_shell_exec($cmd);\n+    $cmd = sprintf('rm %s/database_temp.sql', escapeshellcmd(d()->site_path));\n+    $success = $this->safe_shell_exec($cmd);\n+\n     if (!$success && !drush_get_option('force', false)) {\n       drush_set_error('PROVISION_BACKUP_FAILED', dt('Could not generate database backup from mysqldump. (error: %msg)', array('%msg' => $this->safe_shell_exec_output)));\n     }\n"
  },
  {
    "path": "aegir/patches/nik.patch",
    "content": "--- BARRACUDA.sh.txt\t2010-10-03 14:15:01.000000000 +1100\n+++ BARRACUDA-mod.sh.txt\t2010-10-03 15:55:49.000000000 +1100\n@@ -198,6 +198,14 @@ prompt_yes_no () {\n     esac\n  done \n }\n+\n+prompt_confirm_choice () {\n+\tread -p \"$1 [$2]:\" _CONFIRMED_ANSWER\n+\tif [ -z \"$_CONFIRMED_ANSWER\" ] ; then\n+\t\t_CONFIRMED_ANSWER=$2\n+\tfi\n+}\n+\n #\n # Stop on error\n # set -e ### disable this for debugging\n@@ -703,34 +711,40 @@ if [ ! -f \"/var/aegir/config/includes/ap\n     cd /opt/tmp/$_BOA_REPO_NAME/aegir/helpers\n     _MIRROR=`bash ffmirror.sh.txt < apt-list-ubuntu.txt`\n+\t\t_MIRROR=\"http://$_MIRROR/ubuntu/\"\n+\t\tprompt_confirm_choice \"Enter your own mirror to use or press enter to use the fastest found mirror \" $_MIRROR\n+\t\t_MIRROR=$_CONFIRMED_ANSWER\n     msg \"$(date 2>&1) INFO: We will use $_THIS_OS mirror $_MIRROR\"\n     cd /var/opt\n     echo \"## MAIN REPOSITORIES\" > /etc/apt/sources.list\n-    echo \"deb http://$_MIRROR/ubuntu/ $_REL_VERSION main restricted universe multiverse\" >> /etc/apt/sources.list\n-    echo \"deb-src http://$_MIRROR/ubuntu/ $_REL_VERSION main restricted universe multiverse\" >> /etc/apt/sources.list\n+    echo \"deb $_MIRROR $_REL_VERSION main restricted universe multiverse\" >> /etc/apt/sources.list\n+    echo \"deb-src $_MIRROR $_REL_VERSION main restricted universe multiverse\" >> /etc/apt/sources.list\n     echo \"\" >> /etc/apt/sources.list\n     echo \"## MAJOR BUG FIX UPDATES produced after the final release\" >> /etc/apt/sources.list\n-    echo \"deb http://$_MIRROR/ubuntu/ $_REL_VERSION-updates main restricted universe multiverse\" >> /etc/apt/sources.list\n-    echo \"deb-src http://$_MIRROR/ubuntu/ $_REL_VERSION-updates main restricted universe multiverse\" >> /etc/apt/sources.list\n+    echo \"deb $_MIRROR $_REL_VERSION-updates main restricted universe multiverse\" >> /etc/apt/sources.list\n+    echo \"deb-src $_MIRROR $_REL_VERSION-updates main restricted universe multiverse\" >> 
/etc/apt/sources.list\n     echo \"\" >> /etc/apt/sources.list\n     echo \"## UBUNTU SECURITY UPDATES\" >> /etc/apt/sources.list\n     echo \"deb http://security.ubuntu.com/ubuntu $_REL_VERSION-security main restricted universe multiverse\" >> /etc/apt/sources.list\n     echo \"deb-src http://security.ubuntu.com/ubuntu $_REL_VERSION-security main restricted universe multiverse\" >> /etc/apt/sources.list\n     echo \"\" >> /etc/apt/sources.list\n     echo \"## BACKPORTS REPOSITORY\" >> /etc/apt/sources.list\n-    echo \"deb http://$_MIRROR/ubuntu/ $_REL_VERSION-backports main restricted universe multiverse\" >> /etc/apt/sources.list\n-    echo \"deb-src http://$_MIRROR/ubuntu/ $_REL_VERSION-backports main restricted universe multiverse\" >> /etc/apt/sources.list\n+    echo \"deb $_MIRROR $_REL_VERSION-backports main restricted universe multiverse\" >> /etc/apt/sources.list\n+    echo \"deb-src $_MIRROR $_REL_VERSION-backports main restricted universe multiverse\" >> /etc/apt/sources.list\n   elif [ \"$_THIS_OS\" = \"Debian\" ] ; then\n     msg \"$(date 2>&1) INFO: Now looking for the best/fastest $_THIS_OS mirror, this may take a while, please wait...\"\n     cd /opt/tmp/$_BOA_REPO_NAME/aegir/helpers\n     _MIRROR=`bash ffmirror.sh.txt < apt-list-debian.txt`\n+\t\t_MIRROR=\"http://$_MIRROR/debian/\"\n+\t\tprompt_confirm_choice \"Enter your own mirror to use or press enter to use the fastest found mirror \" $_MIRROR\n+\t\t_MIRROR=$_CONFIRMED_ANSWER\n     msg \"$(date 2>&1) INFO: We will use $_THIS_OS mirror $_MIRROR\"\n     cd /var/opt\n-    echo \"deb http://$_MIRROR/debian/ $_REL_VERSION main contrib non-free\" > /etc/apt/sources.list\n-    echo \"deb-src http://$_MIRROR/debian/ $_REL_VERSION main contrib non-free\" >> /etc/apt/sources.list\n+    echo \"deb $_MIRROR $_REL_VERSION main contrib non-free\" > /etc/apt/sources.list\n+    echo \"deb-src $_MIRROR $_REL_VERSION main contrib non-free\" >> /etc/apt/sources.list\n     echo \"deb http://security.debian.org/ 
$_REL_VERSION/updates main contrib non-free\" >> /etc/apt/sources.list\n     echo \"deb-src http://security.debian.org/ $_REL_VERSION/updates main contrib non-free\" >> /etc/apt/sources.list\n     echo \"deb http://volatile.debian.org/debian-volatile $_REL_VERSION/volatile main contrib non-free\" >> /etc/apt/sources.list\n@@ -758,31 +772,31 @@ fi\n \n ###--------------------###\n msg \"$(date 2>&1) INFO: Run apt update, please wait...\"\n-runner \"apt-fast update\"\n+runner \"apt-fast update -y\"\n if [ \"$_THIS_OS\" = \"Ubuntu\" ] ; then\n   runner \"apt-fast upgrade -y\"\n-  runner \"apt-fast update\"\n-  runner \"apt-fast clean\"\n-  runner \"apt-fast dist-upgrade\"\n-  runner \"apt-fast autoclean\"\n+  runner \"apt-fast update -y\"\n+  runner \"apt-fast clean -y\"\n+  runner \"apt-fast dist-upgrade -y\"\n+  runner \"apt-fast autoclean -y\"\n elif [ \"$_THIS_OS\" = \"Debian\" ] ; then\n   runner \"apt-fast upgrade -y\"\n-  runner \"apt-fast update\"\n-  runner \"apt-fast clean\"\n-  runner \"apt-fast dist-upgrade\"\n+  runner \"apt-fast update -y\"\n+  runner \"apt-fast clean -y\"\n+  runner \"apt-fast dist-upgrade -y\"\n   runner \"aptitude full-upgrade -y\"\n-  runner \"apt-fast autoclean\"\n+  runner \"apt-fast autoclean -y\"\n fi\n \n \n ###--------------------###\n msg \"$(date 2>&1) INFO: Run apt update again, please wait...\"\n-runner \"apt-fast update\"\n-runner \"apt-fast clean\"\n+runner \"apt-fast update -y\"\n+runner \"apt-fast clean -y\"\n if [ \"$_THIS_OS\" = \"Ubuntu\" ] ; then\n   runner \"apt-fast upgrade -y\"\n-  runner \"apt-fast dist-upgrade\"\n-  runner \"apt-fast autoclean\"\n+  runner \"apt-fast dist-upgrade -y\"\n+  runner \"apt-fast autoclean -y\"\n elif [ \"$_THIS_OS\" = \"Debian\" ] ; then\n   runner \"aptitude full-upgrade -y\"\n fi\n@@ -790,12 +804,12 @@ fi\n \n ###--------------------###\n msg \"$(date 2>&1) INFO: Run apt update again, please wait...\"\n-runner \"apt-fast update\"\n-runner \"apt-fast clean\"\n+runner \"apt-fast 
update -y\"\n+runner \"apt-fast clean -y\"\n if [ \"$_THIS_OS\" = \"Ubuntu\" ] ; then\n   runner \"apt-fast upgrade -y\"\n-  runner \"apt-fast dist-upgrade\"\n-  runner \"apt-fast autoclean\"\n+  runner \"apt-fast dist-upgrade -y\"\n+  runner \"apt-fast autoclean -y\"\n elif [ \"$_THIS_OS\" = \"Debian\" ] ; then\n   runner \"aptitude full-upgrade -y\"\n fi\n"
  },
  {
    "path": "aegir/patches/object_conversion_menu_router_build-972536-1.patch",
    "content": "--- includes/menu.inc.orig\t2010-11-15 07:55:27.000000000 -0500\n+++ includes/menu.inc\t2010-11-15 13:38:36.000000000 -0500\n@@ -3367,7 +3367,7 @@ function _menu_router_build($callbacks) \n       $sort[$path] = $number_parts;\n     }\n   }\n-  array_multisort($sort, SORT_NUMERIC, $menu);\n+  array_multisort($sort, SORT_NUMERIC, $menu, SORT_STRING);\n   // Apply inheritance rules.\n   foreach ($menu as $path => $v) {\n     $item = &$menu[$path];\n"
  },
  {
    "path": "aegir/patches/octopus_video.patch",
    "content": "diff -urp a/octopus_video/modules/features/video_core/video_core.strongarm.inc b/octopus_video/modules/features/video_core/video_core.strongarm.inc\n--- a/octopus_video/modules/features/video_core/video_core.strongarm.inc\t2012-04-03 12:12:51.000000000 -0400\n+++ b/octopus_video/modules/features/video_core/video_core.strongarm.inc\t2012-04-04 21:26:51.000000000 -0400\n@@ -293,7 +293,7 @@ function video_core_strongarm() {\n   $strongarm->disabled = FALSE; /* Edit this to true to make a default strongarm disabled initially */\n   $strongarm->api_version = 1;\n   $strongarm->name = 'videojs_directory';\n-  $strongarm->value = 'profiles/octopus/libraries/video-js/video-js';\n+  $strongarm->value = 'profiles/octopus_video/libraries/video-js';\n   $export['videojs_directory'] = $strongarm;\n\n   $strongarm = new stdClass;\ndiff -urp a/octopus_video/octopus_video.info b/octopus_video/octopus_video.info\n--- a/octopus_video/octopus_video.info\t2012-04-03 12:12:51.000000000 -0400\n+++ b/octopus_video/octopus_video.info\t2012-04-04 22:52:53.000000000 -0400\n@@ -16,8 +16,8 @@ dependencies[] = statistics\n dependencies[] = syslog\n\n ; Contrib\n-dependencies[] = admin_menu\n-dependencies[] = admin_menu_toolbar\n+dependencies[] = zencoderapi\n+dependencies[] = toolbar\n dependencies[] = amazons3\n dependencies[] = awssdk\n dependencies[] = awssdk_ui\n@@ -68,4 +68,8 @@ dependencies[]  = video_user\n\n ; custom\n dependencies[] = jwplayer\n-dependencies[] = octopus_helper\n\\ No newline at end of file\n+dependencies[] = octopus_helper\n+\n+version = \"1.0-alpha6\"\n+project = \"octopus_video\"\n+\ndiff -urp a/octopus_video/octopus_video.install b/octopus_video/octopus_video.install\n--- a/octopus_video/octopus_video.install\t2012-04-03 12:12:51.000000000 -0400\n+++ b/octopus_video/octopus_video.install\t2012-04-04 23:05:32.000000000 -0400\n@@ -5,9 +5,10 @@\n  */\n function octopus_video_install() {\n   // set themes\n-  theme_enable(array('octopus_video'));\n+  
theme_enable(array('octopus_video','rubik'));\n   variable_set('theme_default', 'octopus_video');\n-  variable_set('admin_theme', 'seven');\n+  variable_set('admin_theme', 'rubik');\n+  variable_set('node_admin_theme', '0');\n   // Add text formats.\n   $filtered_html_format = array(\n     'format' => 'filtered_html',\ndiff -urp a/octopus_video/themes/octopus_video/octopus_video.info b/octopus_video/themes/octopus_video/octopus_video.info\n--- a/octopus_video/themes/octopus_video/octopus_video.info\t2012-04-03 12:12:51.000000000 -0400\n+++ b/octopus_video/themes/octopus_video/octopus_video.info\t2012-04-04 21:24:25.000000000 -0400\n@@ -118,13 +118,13 @@ settings[alpha_css][omega-menu.css] = '0\n settings[alpha_css][omega-forms.css] = '0'\n\n settings[alpha_css][omega-visuals.css] = '0'\n\n settings[alpha_exclude][modules/comment/comment.css] = '0'\n\n-settings[alpha_exclude][profiles/octopus/modules/contrib/date/date_api/date.css] = '0'\n\n+settings[alpha_exclude][profiles/octopus_video/modules/contrib/date/date_api/date.css] = '0'\n\n settings[alpha_exclude][modules/field/theme/field.css] = '0'\n\n-settings[alpha_exclude][profiles/octopus/modules/contrib/logintoboggan/logintoboggan.css] = '0'\n\n+settings[alpha_exclude][profiles/octopus_video/modules/contrib/logintoboggan/logintoboggan.css] = '0'\n\n settings[alpha_exclude][modules/node/node.css] = '0'\n\n settings[alpha_exclude][modules/search/search.css] = '0'\n\n settings[alpha_exclude][modules/user/user.css] = '0'\n\n-settings[alpha_exclude][profiles/octopus/modules/contrib/views/css/views.css] = '0'\n\n+settings[alpha_exclude][profiles/octopus_video/modules/contrib/views/css/views.css] = '0'\n\n settings[alpha_exclude][misc/vertical-tabs.css] = '0'\n\n settings[alpha_exclude][modules/aggregator/aggregator.css] = '0'\n\n settings[alpha_exclude][modules/block/block.css] = '0'\n\n"
  },
  {
    "path": "aegir/patches/og_update_6205_commons_fix.patch",
    "content": "diff -urp a/og.install b/og.install\n--- a/og.install        2012-01-18 23:55:27.000000000 +0000\n+++ b/og.install        2012-03-17 13:46:41.000000000 +0000\n@@ -447,12 +447,13 @@ function og_update_6204() {\n }\n\n /**\n- * Add an index on og_uid.uid.\n+ * Add an index on og_uid.uid. Already applied in drupal_commons.make\n  */\n function og_update_6205() {\n-  $ret = array();\n-  db_add_index($ret, 'og_uid', 'uid', array('uid'));\n-  return $ret;\n+  //$ret = array();\n+  //db_add_index($ret, 'og_uid', 'uid', array('uid'));\n+  //return $ret;\n+  return array();\n }\n\n // end updates //\n"
  },
  {
    "path": "aegir/patches/openacademy-search-off.patch",
    "content": "diff -urp a/openacademy.info b/openacademy.info\n--- a/openacademy.info\t2012-07-20 21:37:28.000000000 +0000\n+++ b/openacademy.info\t2012-09-02 00:03:45.000000000 +0000\n@@ -69,13 +69,6 @@ dependencies[] = simplified_menu_admin\n dependencies[] = references_dialog \n dependencies[] = backports\n \n-; Panopoly - Contrib - Search \n-dependencies[] = search_api\n-dependencies[] = search_api_solr\n-dependencies[] = facetapi\n-dependencies[] = search_api_facetapi\n-dependencies[] = search_api_views\n-\n ; Panopoly - Contrib - Products\n dependencies[] = apps \n dependencies[] = features\ndiff -urp a/openacademy.profile b/openacademy.profile\n--- a/openacademy.profile\t2012-08-18 18:57:09.000000000 +0000\n+++ b/openacademy.profile\t2012-09-02 00:02:54.000000000 +0000\n@@ -30,7 +30,6 @@ function openacademy_install_tasks($inst\n       'panopoly_images',\n       'panopoly_magic',\n       'panopoly_pages',\n-      'panopoly_search',\n       'panopoly_theme',\n       'panopoly_users',\n       'panopoly_widgets',\n@@ -42,7 +41,6 @@ function openacademy_install_tasks($inst\n       'panopoly_images',      \n       'panopoly_magic',             \n       'panopoly_pages',                   \n-      'panopoly_search',                        \n       'panopoly_theme',                               \n       'panopoly_users',                                     \n       'panopoly_widgets',                                         \n"
  },
  {
    "path": "aegir/patches/openacademy.patch",
    "content": "diff -urp a/openacademy/openacademy.info b/openacademy/openacademy.info\n--- a/openacademy/openacademy.info\t2012-05-31 14:51:19.000000000 -0400\n+++ b/openacademy/openacademy.info\t2012-06-03 12:35:45.000000000 -0400\n@@ -31,6 +31,8 @@ dependencies[] = views\n dependencies[] = views_content\n dependencies[] = views_ui\n dependencies[] = token\n+dependencies[] = ds\n+dependencies[] = ds_extras\n \n ; Panopoly - Contrib - Field UI and Content Types\n dependencies[] = tablefield\n@@ -70,13 +72,6 @@ dependencies[] = simplified_menu_admin\n dependencies[] = references_dialog \n dependencies[] = backports\n \n-; Panopoly - Contrib - Search \n-dependencies[] = search_api\n-dependencies[] = search_api_solr\n-dependencies[] = facetapi\n-dependencies[] = search_api_facetapi\n-dependencies[] = search_api_views\n-\n ; Panopoly - Contrib - Products\n dependencies[] = apps \n dependencies[] = features\n@@ -86,9 +81,6 @@ dependencies[] = defaultcontent\n dependencies[] = strongarm\n dependencies[] = libraries\n \n-; Panopoly - Contrib - Performance\n-dependencies[] = redis\n-\n ; Panopoly - Contrib - Development\n dependencies[] = devel\n dependencies[] = devel_generate\ndiff -urp a/openacademy/openacademy.profile b/openacademy/openacademy.profile\n--- a/openacademy/openacademy.profile\t2012-05-29 18:39:27.000000000 -0400\n+++ b/openacademy/openacademy.profile\t2012-06-03 12:38:48.000000000 -0400\n@@ -26,7 +26,6 @@ function openacademy_install_tasks($inst\n       'panopoly_images',\n       'panopoly_magic',\n       'panopoly_pages',\n-      'panopoly_search',\n       'panopoly_theme',\n       'panopoly_users',\n       'panopoly_widgets',\n@@ -38,7 +37,6 @@ function openacademy_install_tasks($inst\n       'panopoly_images',      \n       'panopoly_magic',             \n       'panopoly_pages',                   \n-      'panopoly_search',                        \n       'panopoly_theme',                               \n       'panopoly_users',                      
               \n       'panopoly_widgets',                                         \n@@ -315,7 +313,12 @@ function openacademy_theme_form($form, &\n function openacademy_theme_form_submit($form, &$form_state) {\n   \n   // Enable and set the theme of choice\n-  $theme = $form_state['input']['theme'];\n+  if (defined('DRUSH_BASE_PATH')) {\n+    $theme = \"openacademy_default\";\n+  }\n+  else {\n+    $theme = $form_state['input']['theme'];\n+  }\n   theme_enable(array($theme));\n   variable_set('theme_default', $theme);\n  \n"
  },
  {
    "path": "aegir/patches/openaid-tpl.patch",
    "content": "diff -urp a/openaid/themes/openaid/templates/page.tpl.php b/openaid/themes/openaid/templates/page.tpl.php\n--- a/openaid/themes/openaid/templates/page.tpl.php\t2012-04-12 00:48:17.000000000 +0000\n+++ b/openaid/themes/openaid/templates/page.tpl.php\t2012-04-12 23:00:59.000000000 +0000\n@@ -20,7 +20,7 @@\n       </h1>\n       <?php if ($page['header']) { ?>\n           <?php print render($page['header']); ?>\n-      <? } // end header ?>\n+      <?php } // end header ?>\n       <?php if ($main_menu): ?>\n         <div id=\"navigation\">\n           <?php print $main_menu; ?>\n@@ -31,81 +31,81 @@\n\n   <div id=\"main\">\n     <div class=\"container\">\n-\n+\n       <?php if ($messages) { ?>\n         <div id=\"messages\">\n             <?php print $messages; ?>\n         </div>\n-      <? } // end messages ?>\n-\n+      <?php } // end messages ?>\n+\n       <?php if (render($tabs)) { ?>\n         <div id=\"tabs\">\n           <?php print render($tabs); ?>\n         </div>\n-      <? } // end tabs ?>\n-\n+      <?php } // end tabs ?>\n+\n       <div id=\"main-content\">\n-\n-        <div id=\"content\">\n-\n+\n+        <div id=\"content\">\n+\n           <?php if (render($breadcrumb)) { ?>\n             <div id=\"breadcrumb\">\n               <?php print render($breadcrumb); ?>\n             </div>\n-          <? } // end breadcrumb ?>\n-\n+          <?php } // end breadcrumb ?>\n+\n           <?php if ($page['highlighted']) { ?>\n             <div id=\"highlighted\">\n               <?php print render($page['highlighted']); ?>\n             </div>\n-          <? } // end highlighted ?>\n-\n+          <?php } // end highlighted ?>\n+\n           <?php if ($page['featured']) { ?>\n             <div id=\"featured\">\n               <?php print render($page['featured']); ?>\n             </div>\n-          <? 
} // end featured ?>\n-\n+          <?php } // end featured ?>\n+\n           <?php if (!$is_front && strlen($title) > 0) { ?>\n-            <h1 class=\"page-title\"><?php print $title; ?></h1>\n-          <? } ?>\n-\n+            <h1 class=\"page-title\"><?php print $title; ?></h1>\n+          <?php } ?>\n+\n           <div id=\"content-inner\">\n-\n+\n           <?php if ($page['help']) { ?>\n             <div id=\"help\">\n               <?php print render($page['help']); ?>\n             </div>\n-          <? } // end tabs ?>\n-\n+          <?php } // end tabs ?>\n+\n           <?php print render($page['content']); ?>\n-\n+\n           </div>\n         </div>\n-\n+\n         <?php if ($page['sidebar_first']) { ?>\n           <div id=\"sidebar-first\" class=\"aside\">\n             <?php print render($page['sidebar_first']); ?>\n           </div>\n-        <? } // end sidebar_first ?>\n-\n+        <?php } // end sidebar_first ?>\n+\n       </div>\n-\n+\n     </div>\n   </div>\n-\n+\n   <div id=\"footer\">\n     <div class=\"container\">\n       <?php print render($page['footer']); ?>\n     </div>\n   </div>\n-\n+\n   <?php if ($page['admin_footer']) { ?>\n     <div id=\"admin-footer\">\n       <div class=\"container\">\n         <?php print render($page['admin_footer']); ?>\n       </div>\n-    </div>\n-  <? } // end admin_footer ?>\n-\n+    </div>\n+  <?php } // end admin_footer ?>\n+\n </div>\n"
  },
  {
    "path": "aegir/patches/openenterprise.patch",
    "content": "diff -urp a/openenterprise.profile b/openenterprise.profile\n--- a/openenterprise.profile\t2011-02-11 15:47:05.000000000 +0000\n+++ b/openenterprise.profile\t2011-04-18 04:55:30.000000000 +0000\n@@ -11,9 +11,10 @@ define('OPENENTERPRISE_FILTERED_HTML', '\n  * Implementation of hook_profile_details().\n  */\n function openenterprise_profile_details() {\n+  $description = st('Open Enterprise by LevelTen Interactive.');\n   return array(\n-    'name' => t('Open Enterprise'),\n-    'description' => t('Open Enterprise by LevelTen Interactive.'),\n+    'name' => 'Open Enterprise',\n+    'description' => $description,\n     'old_short_name' => 'enterprise_installer',\n   );\n }\n@@ -657,4 +658,4 @@ function openenterprise_config_date_form\n \n   variable_set('date_format_date_only', 'l, F j, Y');\n   variable_set('date_format_time_only', 'g:i a');\n-}\n\\ No newline at end of file\n+}\n"
  },
  {
    "path": "aegir/patches/openoutreach.patch",
    "content": "From b72102ef8f5fb913d2f197067d8c2cd7ee3e99ed Mon Sep 17 00:00:00 2001\nFrom: Nedjo Rogers\nDate: Wed, 04 Feb 2015 19:27:16 +0000\nSubject: Issue #2293949: fix drush install time errors by enabling file_entity earlier.\n\n---\ndiff --git a/openoutreach.info b/openoutreach.info\nindex ea5540c..1471c80 100644\n--- a/openoutreach.info\n+++ b/openoutreach.info\n@@ -14,7 +14,7 @@ dependencies[] = \"dashboard\"\n dependencies[] = \"dblog\"\n dependencies[] = \"features\"\n dependencies[] = \"field_ui\"\n-dependencies[] = \"file\"\n+dependencies[] = \"file_entity\"\n dependencies[] = \"help\"\n dependencies[] = \"image\"\n dependencies[] = \"libraries\"\n@@ -85,4 +85,4 @@ subprofiles[membership][features][debut_section] = TRUE\n subprofiles[membership][features][debut_seo] = TRUE\n subprofiles[membership][features][debut_social] = TRUE\n subprofiles[membership][features][debut_wysiwyg] = TRUE\n-subprofiles[membership][features][openoutreach_front_page] = TRUE\n\\ No newline at end of file\n+subprofiles[membership][features][openoutreach_front_page] = TRUE\n--\ncgit v0.9.2\n"
  },
  {
    "path": "aegir/patches/openpublic.patch",
    "content": "diff -urp a/openpublic.profile b/openpublic.profile\n--- a/openpublic.profile\t2011-07-10 16:30:50.000000000 -0400\n+++ b/openpublic.profile\t2011-07-11 10:39:52.000000000 -0400\n@@ -13,7 +13,7 @@\n  * called through custom invocation, so $form_state is not populated.\n  */\n function openpublic_form_alter(&$form, $form_state, $form_id) {\n-  if ($form_id == 'install_configure_form') {\n+  if ($form_id == 'install_configure_form' && !defined('DRUSH_BASE_PATH')) {\n     $roles = array(DRUPAL_AUTHENTICATED_RID);\n     $policy = _password_policy_load_active_policy($roles);\n \n"
  },
  {
    "path": "aegir/patches/openscholar.profile.patch",
    "content": "diff -urp a/openscholar.profile b/openscholar.profile\n--- a/openscholar.profile\t2010-08-24 20:16:19.000000000 +0000\n+++ b/openscholar.profile\t2010-10-16 01:00:00.000000000 +0000\n@@ -35,16 +35,43 @@ function openscholar_profile_modules() {\n   );\n }\n \n+\n+/**\n+ * Returns an array list of core contributed modules to be installed first.\n+ */\n+function _openscholar_core_modules_first() {\n+ $contrib_modules = array(\n+  //cck\n+    'content',\n+    'content_copy',\n+    'diff',\n+    'date_timezone',\n+    'date_api',\n+    'date',\n+    'date_popup',\n+    'filefield',\n+    'fieldgroup',\n+    'imagecache',\n+    'imagecache_ui',\n+    'imagefield',\n+    'link',\n+    'text',\n+    'number',\n+    'nodereference',\n+    'nodereference_url',\n+    'optionwidgets',\n+  );\n+  return $contrib_modules;\n+}\n+\n+\n /**\n  * Returns an array list of core contributed modules.\n  */\n function _openscholar_core_modules() {\n  $contrib_modules = array(\n-  // sites/all/contrib\n-    'activity',\n     'addthis',\n     'advanced_help',\n-    'calendar',\n     'litecal',\n     'context',\n     'context_contrib',\n@@ -83,7 +110,6 @@ function _openscholar_core_modules() {\n     'stringoverrides',\n     'strongarm',\n     'token',\n-    'trigger',\n     'transliteration',\n     'twitter_pull',\n     'ucreate',\n@@ -95,38 +121,17 @@ function _openscholar_core_modules() {\n     'views_attach',\n     'vertical_tabs',\n     'wysiwyg',\n-  \n-  //cck\n-    'content',\n-    'content_copy',\n-    'diff',\n-    'date_timezone',\n-    'date_api',\n-    'date',\n-    'date_popup',\n-    'filefield',\n-    'fieldgroup',\n-    'imagecache',\n-    'imagecache_ui',\n-    'imagefield',\n-    'imagefield_crop',\n-    'link',\n-    'text',\n-    'number',\n-    'nodereference',\n-    'nodereference_url',\n-    'optionwidgets',\n-\n     'install_profile_api',\n     'schema',\n- \n     // Optional Development Resources\n-    //'admin_menu',\n+    'admin_menu',\n     
//'devel',\n     //'devel_generate',\n-    \n+    'calendar',\n+    'trigger',\n+    'imagefield_crop',\n+    'activity',\n   );\n-  \n   return $contrib_modules;\n }\n \n@@ -190,8 +195,6 @@ function openscholar_profile_task_list()\n   global $conf;\n   $conf['site_name'] = 'OpenScholar';\n   $conf['site_footer'] = '<a href=\"http://openscholar.harvard.edu\">OpenScholar</a> by <a href=\"http://iq.harvard.edu\">IQSS</a> at Harvard University';\n-  \n-  \n   $tasks = array(\n     'openscholar-configure' => st('openscholar  configuration'),\n   );\n@@ -206,11 +209,15 @@ function openscholar_profile_tasks(&$tas\n   $output = '';\n \n   if ($task == 'profile') {\n+\n+    $modules_first = _openscholar_core_modules_first();\n     $modules = _openscholar_core_modules();\n     $modules = array_merge($modules, _openscholar_scholar_modules());\n-\n     $files = module_rebuild_cache();\n     $operations = array();\n+    foreach ($modules_first as $module) {\n+      $operations[] = array('_install_module_batch', array($module, $files[$module]->info['name']));\n+    }\n     foreach ($modules as $module) {\n       $operations[] = array('_install_module_batch', array($module, $files[$module]->info['name']));\n     }\n"
  },
  {
    "path": "aegir/patches/openscholar_projects.profile.patch",
    "content": "diff -urp a/openscholar_projects.profile b/openscholar_projects.profile\n--- a/openscholar_projects.profile\t2010-08-18 15:16:43.000000000 +0000\n+++ b/openscholar_projects.profile\t2010-10-16 02:17:04.000000000 +0000\n@@ -35,16 +35,43 @@ function openscholar_projects_profile_mo\n   );\n }\n \n+\n+/**\n+ * Returns an array list of core contributed modules to be installed first.\n+ */\n+function _openscholar_projects_core_modules_first() {\n+ $contrib_modules = array(\n+  //cck\n+    'content',\n+    'content_copy',\n+    'diff',\n+    'date_timezone',\n+    'date_api',\n+    'date',\n+    'date_popup',\n+    'filefield',\n+    'fieldgroup',\n+    'imagecache',\n+    'imagecache_ui',\n+    'imagefield',\n+    'link',\n+    'text',\n+    'number',\n+    'nodereference',\n+    'nodereference_url',\n+    'optionwidgets',\n+  );\n+  return $contrib_modules;\n+}\n+\n+\n /**\n  * Returns an array list of core contributed modules.\n  */\n function _openscholar_projects_core_modules() {\n  $contrib_modules = array(\n-  // sites/all/contrib\n-    'activity',\n     'addthis',\n     'advanced_help',\n-    'calendar',\n     'litecal',\n     'context',\n     'context_contrib',\n@@ -83,7 +110,6 @@ function _openscholar_projects_core_modu\n     'stringoverrides',\n     'strongarm',\n     'token',\n-    'trigger',\n     'transliteration',\n     'twitter_pull',\n     'ucreate',\n@@ -95,38 +121,17 @@ function _openscholar_projects_core_modu\n     'views_attach',\n     'vertical_tabs',\n     'wysiwyg',\n-  \n-  //cck\n-    'content',\n-    'content_copy',\n-    'diff',\n-    'date_timezone',\n-    'date_api',\n-    'date',\n-    'date_popup',\n-    'filefield',\n-    'fieldgroup',\n-    'imagecache',\n-    'imagecache_ui',\n-    'imagefield',\n-    'imagefield_crop',\n-    'link',\n-    'text',\n-    'number',\n-    'nodereference',\n-    'nodereference_url',\n-    'optionwidgets',\n-\n     'install_profile_api',\n     'schema',\n- \n     // Optional Development 
Resources\n-    //'admin_menu',\n+    'admin_menu',\n     //'devel',\n     //'devel_generate',\n-    \n+    'calendar',\n+    'trigger',\n+    'imagefield_crop',\n+    'activity',\n   );\n-  \n   return $contrib_modules;\n }\n \n@@ -180,8 +185,6 @@ function _openscholar_projects_scholar_m\n     'scholar_reader',\n     'scholar_front',\n     'scholar_profiles',\n-  \n-\n   );\n }\n \n@@ -192,7 +195,6 @@ function openscholar_projects_profile_ta\n   global $conf;\n   $conf['site_name'] = 'OpenScholar';\n   $conf['site_footer'] = '<a href=\"http://openscholar.harvard.edu\">OpenScholar</a> by <a href=\"http://iq.harvard.edu\">IQSS</a> at Harvard University';\n-  \n   $tasks = array(\n     'scholar_projects-configure' => st('Projects  configuration'),\n   );\n@@ -207,11 +209,15 @@ function openscholar_projects_profile_ta\n   $output = '';\n \n   if ($task == 'profile') {\n+  \n+    $modules_first = _openscholar_projects_core_modules_first();\n     $modules = _openscholar_projects_core_modules();\n     $modules = array_merge($modules, _openscholar_projects_scholar_modules());\n-\n     $files = module_rebuild_cache();\n     $operations = array();\n+    foreach ($modules_first as $module) {\n+      $operations[] = array('_install_module_batch', array($module, $files[$module]->info['name']));\n+    }\n     foreach ($modules as $module) {\n       $operations[] = array('_install_module_batch', array($module, $files[$module]->info['name']));\n     }\n"
  },
  {
    "path": "aegir/patches/panopoly-search-off.patch",
    "content": "diff -urp a/panopoly.info b/panopoly.info\n--- a/panopoly.info\t2012-06-21 16:21:56.000000000 +0000\n+++ b/panopoly.info\t2012-08-27 09:08:43.000000000 +0000\n@@ -68,14 +68,6 @@ dependencies[] = simplified_menu_admin\n dependencies[] = references_dialog \n dependencies[] = backports\n \n-; Panopoly - Contrib - Search \n-dependencies[] = facetapi\n-dependencies[] = search_api\n-dependencies[] = search_api_solr\n-dependencies[] = search_api_facetapi\n-dependencies[] = search_api_views\n-dependencies[] = search_api_db\n-\n ; Panopoly - Contrib - Products\n dependencies[] = apps \n dependencies[] = features\ndiff -urp a/panopoly.profile b/panopoly.profile\n--- a/panopoly.profile\t2012-08-18 18:57:09.000000000 +0000\n+++ b/panopoly.profile\t2012-08-27 09:14:04.000000000 +0000\n@@ -31,7 +31,6 @@ function panopoly_install_tasks($install\n       'panopoly_images',\n       'panopoly_magic',\n       'panopoly_pages',\n-      'panopoly_search',\n       'panopoly_theme',\n       'panopoly_users',\n       'panopoly_widgets',\n@@ -43,7 +42,6 @@ function panopoly_install_tasks($install\n       'panopoly_images',      \n       'panopoly_magic',             \n       'panopoly_pages',                   \n-      'panopoly_search',                        \n       'panopoly_theme',                               \n       'panopoly_users',                                     \n       'panopoly_widgets',                                         \n"
  },
  {
    "path": "aegir/patches/panopoly-search-redis.patch",
    "content": "diff -urp a/panopoly/panopoly.info b/panopoly/panopoly.info\n--- a/panopoly/panopoly.info\t2012-05-05 02:11:30.000000000 +0000\n+++ b/panopoly/panopoly.info\t2012-05-17 00:45:16.000000000 +0000\n@@ -70,13 +70,6 @@ dependencies[] = simplified_menu_admin\n dependencies[] = references_dialog \n dependencies[] = backports\n \n-; Contrib - Search \n-dependencies[] = search_api\n-dependencies[] = search_api_solr\n-dependencies[] = facetapi\n-dependencies[] = search_api_facetapi\n-dependencies[] = search_api_views\n-\n ; Contrib - Products\n dependencies[] = apps \n dependencies[] = features\n@@ -86,9 +79,6 @@ dependencies[] = defaultcontent\n dependencies[] = strongarm\n dependencies[] = libraries\n \n-; Contrib - Performance\n-dependencies[] = redis\n-\n ; Contrib - Development\n dependencies[] = devel\n dependencies[] = devel_generate\ndiff -urp a/panopoly/panopoly.profile b/panopoly/panopoly.profile\n--- a/panopoly/panopoly.profile\t2012-05-05 01:40:14.000000000 +0000\n+++ b/panopoly/panopoly.profile\t2012-05-17 00:45:30.000000000 +0000\n@@ -27,7 +27,6 @@ function panopoly_install_tasks($install\n       'panopoly_images',\n       'panopoly_magic',\n       'panopoly_pages',\n-      'panopoly_search',\n       'panopoly_theme',\n       'panopoly_users',\n       'panopoly_widgets',\n"
  },
  {
    "path": "aegir/patches/patch_commit_6fabd31b0f81.patch",
    "content": "diff --git a/includes/file.inc b/includes/file.inc\nindex c5e5cf07d636d2454c3e158eda0e2c0f6f7297fa..57a4e4734a8a175c4ae8dd8894a281699529b95b 100644\n--- a/includes/file.inc\n+++ b/includes/file.inc\n@@ -476,6 +476,9 @@ function file_ensure_htaccess() {\n  *   The default is TRUE which indicates a private and protected directory.\n  */\n function file_create_htaccess($directory, $private = TRUE) {\n+  // Skip this on BOA since .htaccess is never used in Nginx.\n+  return;\n+\n   if (file_uri_scheme($directory)) {\n     $directory = file_stream_wrapper_uri_normalize($directory);\n   }\n"
  },
  {
    "path": "aegir/patches/patch_commit_fa47bad85589.patch",
    "content": "diff --git a/Provision/Config/cdn.tpl.php b/Provision/Config/cdn.tpl.php\nindex a4ce098ea323d9855642ab4b17a54d93cb2d1d0f..00597f8795768bc43dea4fa0eae8afca75a22f67 100644\n--- a/Provision/Config/cdn.tpl.php\n+++ b/Provision/Config/cdn.tpl.php\n@@ -3,7 +3,16 @@ $ip_address = !empty($ip_address) ? $ip_address : '*';\n ?>\n server {\n   limit_conn   limreq 555; # like mod_evasive - this allows max 555 simultaneous connections from one IP address\n-  listen       <?php print $ip_address . ':' . $http_port; ?>;\n+<?php\n+if ($ip_address == '*') {\n+  print \"  listen       {$ip_address}:{$http_port};\\n\";\n+}\n+else {\n+  foreach ($server->ip_addresses as $ip) {\n+    print \"  listen       {$ip}:{$http_port};\\n\";\n+  }\n+}\n+?>\n   server_name <?php foreach ($this->cdn as $cdn_domain) : if (trim($cdn_domain)) : ?> <?php print $cdn_domain; ?><?php endif; endforeach; ?>;\n   root         <?php print \"{$this->root}\"; ?>;\n\ndiff --git a/Provision/Config/cdn_disabled.tpl.php b/Provision/Config/cdn_disabled.tpl.php\nindex 396e309a3e746794759bb910edeb500bea38aa41..cee2eb0acaabb7304cd4cc9e61f5d81d4a1fd3bf 100644\n--- a/Provision/Config/cdn_disabled.tpl.php\n+++ b/Provision/Config/cdn_disabled.tpl.php\n@@ -2,10 +2,19 @@\n $ip_address = !empty($ip_address) ? $ip_address : '*';\n ?>\n server {\n-  listen       <?php print $ip_address . ':' . $http_port; ?>;\n+  limit_conn   limreq 555;\n+<?php\n+if ($ip_address == '*') {\n+  print \"  listen       {$ip_address}:{$http_port};\\n\";\n+}\n+else {\n+  foreach ($server->ip_addresses as $ip) {\n+    print \"  listen       {$ip}:{$http_port};\\n\";\n+  }\n+}\n+?>\n   server_name  <?php print implode(' ', $this->cdn); ?>;\n   root         /var/www/nginx-default;\n   index        index.html index.htm;\n-\n   ### Do not reveal Aegir front-end URL here.\n }\n"
  },
  {
    "path": "aegir/patches/php-8.1-openssl3.patch",
    "content": "diff -Naur a/ext/openssl/openssl.c b/ext/openssl/openssl.c\n--- a/ext/openssl/openssl.c\t2021-07-20 19:08:50.000000000 +0300\n+++ b/ext/openssl/openssl.c\t2021-07-21 15:44:11.395257764 +0300\n@@ -1198,7 +1198,6 @@\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_CMS_NOSIGS\", CMS_NOSIGS, CONST_CS|CONST_PERSISTENT);\n\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_PKCS1_PADDING\", RSA_PKCS1_PADDING, CONST_CS|CONST_PERSISTENT);\n-\tREGISTER_LONG_CONSTANT(\"OPENSSL_SSLV23_PADDING\", RSA_SSLV23_PADDING, CONST_CS|CONST_PERSISTENT);\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_NO_PADDING\", RSA_NO_PADDING, CONST_CS|CONST_PERSISTENT);\n \tREGISTER_LONG_CONSTANT(\"OPENSSL_PKCS1_OAEP_PADDING\", RSA_PKCS1_OAEP_PADDING, CONST_CS|CONST_PERSISTENT);\n\n"
  },
  {
    "path": "aegir/patches/provision/patch_commit_e4abc685f9b4.patch",
    "content": "diff --git a/http/Provision/Config/Nginx/server.tpl.php b/http/Provision/Config/Nginx/server.tpl.php\nindex 1158df5756ca13516f8eb16c34ba16a2742c95ed..4ce8cebf9583c17a5d335f2e691bba9e5531169c 100644\n--- a/http/Provision/Config/Nginx/server.tpl.php\n+++ b/http/Provision/Config/Nginx/server.tpl.php\n@@ -16,7 +16,7 @@\n   fastcgi_param  DOCUMENT_ROOT       $document_root;\n   fastcgi_param  SERVER_PROTOCOL     $server_protocol;\n   fastcgi_param  GATEWAY_INTERFACE   CGI/1.1;\n-  fastcgi_param  SERVER_SOFTWARE     ApacheSolaris/$nginx_version;\n+  fastcgi_param  SERVER_SOFTWARE     ApacheSolarisNginx/$nginx_version;\n   fastcgi_param  REMOTE_ADDR         $remote_addr;\n   fastcgi_param  REMOTE_PORT         $remote_port;\n   fastcgi_param  SERVER_ADDR         $server_addr;\ndiff --git a/http/Provision/Service/http/fastcgi_params.conf b/http/Provision/Service/http/fastcgi_params.conf\nindex 70d62e38d15a93c5ce3bb8502664290c7efd95a7..e1991a74580e1b11f6942506f71de9929c308abe 100644\n--- a/http/Provision/Service/http/fastcgi_params.conf\n+++ b/http/Provision/Service/http/fastcgi_params.conf\n@@ -12,7 +12,7 @@ fastcgi_param  DOCUMENT_URI       $document_uri;\n fastcgi_param  DOCUMENT_ROOT      $document_root;\n fastcgi_param  SERVER_PROTOCOL    $server_protocol;\n fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;\n-fastcgi_param  SERVER_SOFTWARE    ApacheSolaris/$nginx_version;\n+fastcgi_param  SERVER_SOFTWARE    ApacheSolarisNginx/$nginx_version;\n fastcgi_param  REMOTE_ADDR        $remote_addr;\n fastcgi_param  REMOTE_PORT        $remote_port;\n fastcgi_param  SERVER_ADDR        $server_addr;\ndiff --git a/http/Provision/Service/http/fastcgi_ssl_params.conf b/http/Provision/Service/http/fastcgi_ssl_params.conf\nindex c24a765bcfe90216283d8177f9acead6bb749d46..2b7eedaf06eb12aa17017059c25f9a45555d6bcb 100644\n--- a/http/Provision/Service/http/fastcgi_ssl_params.conf\n+++ b/http/Provision/Service/http/fastcgi_ssl_params.conf\n@@ -12,7 +12,7 @@ fastcgi_param  
DOCUMENT_URI       $document_uri;\n fastcgi_param  DOCUMENT_ROOT      $document_root;\n fastcgi_param  SERVER_PROTOCOL    $server_protocol;\n fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;\n-fastcgi_param  SERVER_SOFTWARE    SSLApacheSolaris/$nginx_version;\n+fastcgi_param  SERVER_SOFTWARE    ApacheSolarisSSLNginx/$nginx_version;\n fastcgi_param  REMOTE_ADDR        $remote_addr;\n fastcgi_param  REMOTE_PORT        $remote_port;\n fastcgi_param  SERVER_ADDR        $server_addr;\n"
  },
  {
    "path": "aegir/patches/provision_hosting_le.drush.inc",
    "content": "<?php\n\n/**\n * Implements hook_post_provision_verify().\n */\nfunction drush_provision_hosting_le_post_provision_verify() {\n\n  if (d()->type == 'site') {\n    $le_root = d('@server_master')->aegir_root . \"/tools/le\";\n    $le_cert = d('@server_master')->aegir_root . \"/tools/le/certs\";\n    $le_acme = d('@server_master')->aegir_root . \"/tools/le/.acme-challenges\";\n    $le_ctrl = d('@server_master')->aegir_root . \"/tools/le/.ctrl\";\n    $le_exec = d('@server_master')->aegir_root . \"/tools/le/dehydrated\";\n    $le_conf = d('@server_master')->aegir_root . \"/tools/le/config.sh\";\n    $le_cnfx = d('@server_master')->aegir_root . \"/tools/le/config\";\n    $le_acct = d('@server_master')->aegir_root . \"/tools/le/accounts\";\n    $le_hook = d('@server_master')->aegir_root . \"/tools/le/letsencrypt-sh-hooks.sh\";\n    $cf_hook = d('@server_master')->aegir_root . \"/tools/le/hooks/cloudflare/hook.py\";\n\n    $is_boa = FALSE;\n    $is_boa_ctrl = \"/data/conf/global.inc\";\n\n    if (provision_file()->exists($is_boa_ctrl)->status()) {\n      $is_boa = TRUE;\n    }\n\n    $main_name = $real_name = substr(d()->name, 1);\n    if ($real_name == 'hostmaster') {\n      $real_name = $main_name = d()->uri;\n    }\n\n    if (d()->redirection) {\n      drush_log('[hosting_le] This sitename redirection target is ' . d()->redirection, 'info');\n      $main_name = d()->redirection;\n      if ($is_boa) {\n        $cert_dir = $le_cert . \"/\" . $real_name;\n      }\n      else {\n        $cert_dir = $le_cert . \"/\" . $main_name;\n        drush_log('[hosting_le] LE SSL certificate will be valid only for ' . d()->redirection, 'info');\n      }\n    }\n    else {\n      $cert_dir = $le_cert . \"/\" . $main_name;\n    }\n\n    drush_log('[hosting_le] This cert_dir is ' . $cert_dir, 'info');\n\n    if ($is_boa) {\n      drush_log('[hosting_le] This site main SSL name is ' . 
$real_name, 'info');\n      drush_log('[hosting_le] BOA system detected, congrats!', 'info');\n    }\n    else {\n      drush_log('[hosting_le] This site main SSL name is ' . $main_name, 'info');\n    }\n\n    $wildcard_ctrl = d('@server_master')->aegir_root . \"/static/control/wildcard-enable-\" . $main_name . \".info\";\n    drush_log('[hosting_le] The optional wildcard flag is ' . $wildcard_ctrl, 'info');\n\n    $legacy_tls_ctrl = d('@server_master')->aegir_root . \"/static/control/tls-legacy-enable-\" . $main_name . \".info\";\n    drush_log('[hosting_le] The optional legacy TLSv1.1 flag is ' . $legacy_tls_ctrl, 'info');\n\n    $no_san_ctrl = d('@server_master')->aegir_root . \"/static/control/ssl-no-san-\" . $main_name . \".info\";\n    drush_log('[hosting_le] The optional no-SAN flag is ' . $no_san_ctrl, 'info');\n\n    $yes_dev_ctrl = d('@server_master')->aegir_root . \"/static/control/ssl-yes-dev-\" . $main_name . \".info\";\n    drush_log('[hosting_le] The optional enable-DEV flag is ' . $yes_dev_ctrl, 'info');\n\n    $immutable = $le_ctrl . \"/dont-overwrite-\" . $main_name . \".pid\";\n    drush_log('[hosting_le] The optional immutable flag is ' . $immutable, 'info');\n\n    $demo_mode_ctrl = $le_ctrl . \"/ssl-demo-mode.pid\";\n    drush_log('[hosting_le] The optional demo flag is ' . $demo_mode_ctrl, 'info');\n\n    // https://community.letsencrypt.org/t/revoking-certain-certificates-on-march-4/114864\n    $forced_renewal_ctrl = $le_ctrl . \"/forced-renewal-02-\" . $main_name . \".pid\";\n\n    $site_mode_demo_ctrl = $le_ctrl . \"/demo-\" . $main_name . \".pid\";\n    $site_mode_live_ctrl = $le_ctrl . \"/live-\" . $main_name . 
\".pid\";\n\n    $force_renew = FALSE;\n\n    $on_remote_server = !provision_is_local_host(d()->platform->web_server->remote_host);\n  }\n\n  $is_wildcard = FALSE;\n  if (provision_file()->exists($wildcard_ctrl)->status()) {\n    if (provision_file()->exists($cf_hook)->status()) {\n      if (provision_file()->exists($le_cnfx)->status()) {\n        $is_wildcard = TRUE;\n        $main_name = preg_replace('`^www\\.`', '', $main_name);\n        $real_name = preg_replace('`^www\\.`', '', $real_name);\n      }\n    }\n  }\n\n  if (d()->type == 'site' &&\n     !d()->ssl_enabled &&\n     !provision_file()->exists($immutable)->status()) {\n    if (file_exists($cert_dir)) {\n      exec(\"/bin/bash \" . $le_exec . \" --cleanup\", $output_b);\n      $acme_result_b = implode(' ', $output_b);\n      drush_log('[hosting_le] ACME Cleanup Output: ' . $acme_result_b, 'info');\n\n      exec(\"symlinks -dr \" . $cert_dir, $output_c);\n      $acme_result_c = implode(' ', $output_c);\n      drush_log('[hosting_le] ACME Cleanup Symlinks: ' . $acme_result_c, 'info');\n    }\n  }\n  elseif (d()->type == 'site' &&\n          d()->ssl_enabled) {\n\n    provision_file()->create_dir($le_root, dt('[hosting_le] LE root'), 0711);\n    provision_file()->create_dir($le_cert, dt('[hosting_le] LE certs'), 0700);\n    provision_file()->create_dir($le_acme, dt('[hosting_le] LE challenges'), 0711);\n    provision_file()->create_dir($le_ctrl, dt('[hosting_le] LE ctrl'), 0711);\n\n    if (!provision_file()->exists($le_exec)->status()) {\n      drush_log('[hosting_le] Please upload dehydrated to ' . $le_exec, 'warning');\n      drush_log('[hosting_le] URL: https://raw.githubusercontent.com/omega8cc/dehydrated/master/dehydrated', 'warning');\n      return FALSE;\n    }\n\n    if ($on_remote_server && !provision_file()->exists($le_hook)->status()) {\n      drush_log('[hosting_le] Please copy letsencrypt-sh-hooks.sh to ' . 
$le_root, 'warning');\n      return FALSE;\n    }\n\n    $enable_dev_ctrl = FALSE;\n    if (provision_file()->exists($yes_dev_ctrl)->status()) {\n      $enable_dev_ctrl = TRUE;\n      drush_log('[hosting_le] SSL enable-DEV mode ctrl file detected for ' . $main_name, 'info');\n    }\n\n    if ($is_boa) {\n      if (preg_match(\"/\\.(?:host8|boa|aegir|o8)\\.(?:biz|io|cc)$/\", $main_name) ||\n          preg_match(\"/\\.(?:nodns)\\./\", $main_name)\n         ) {\n        drush_log('[hosting_le] Skipping LE setup for ' . $main_name, 'warning');\n        return FALSE;\n      }\n      if (!$enable_dev_ctrl) {\n        if (preg_match(\"/\\.(?:temp|tmp|temporary)\\./\", $main_name) ||\n          preg_match(\"/\\.(?:test|testing)\\./\", $main_name) ||\n          preg_match(\"/\\.(?:dev|devel)\\./\", $main_name)\n          ) {\n          drush_log('[hosting_le] Skipping LE setup for ' . $main_name, 'warning');\n          return FALSE;\n        }\n      }\n    }\n\n    if (provision_file()->exists($demo_mode_ctrl)->status()) {\n      if (!provision_file()->exists($le_conf)->status()) {\n\n        $le_conf_lines = \"CA=\\\"https://acme-staging-v02.api.letsencrypt.org/directory\\\"\\n\";\n\n        provision_file()->file_put_contents($le_conf, $le_conf_lines)\n          ->succeed('[hosting_le] Created cnf ' . $le_conf)\n          ->fail('[hosting_le] Could not create cnf ' . $le_conf);\n\n        copy($le_conf, $le_cnfx);\n\n        if (provision_file()->exists($le_acct)->status()) {\n          drush_log('[hosting_le] Demo LE account will be created.', 'info');\n          rename($le_acct, $le_acct . \"-live\");\n        }\n\n        drush_log('[hosting_le] New LE accounts require registration on ACMEv2.', 'info');\n        drush_log('[hosting_le] Running --register --accept-terms as required by ACMEv2.', 'info');\n        $le_register = \" --register --accept-terms\";\n        exec(\"/bin/bash \" . $le_exec . $le_register . 
\" 2>&1\", $register_output);\n        $acme_register = implode(' ', $register_output);\n        drush_log(\"[hosting_le] ACMEv2 Demo Register Output: \" . $acme_register, 'info');\n        $acme_register = \"\";\n      }\n      drush_log('[hosting_le] Demo LE mode active. No real LE certs will be generated.', 'info');\n      $demo_mode = TRUE;\n    }\n    else {\n      if (provision_file()->exists($le_conf)->status()) {\n        unlink($le_conf);\n        unlink($le_cnfx);\n        if (provision_file()->exists($le_acct)->status()) {\n          rename($le_acct, $le_acct . \"-demo\");\n        }\n        drush_log('[hosting_le] Live LE account will be registered.', 'info');\n        drush_log('[hosting_le] New LE accounts require registration on ACMEv2.', 'info');\n        drush_log('[hosting_le] Running --register --accept-terms as required by ACMEv2.', 'info');\n        $le_register = \" --register --accept-terms\";\n        exec(\"/bin/bash \" . $le_exec . $le_register . \" 2>&1\", $register_output);\n        $acme_register = implode(' ', $register_output);\n        drush_log(\"[hosting_le] ACMEv2 Live Register Output: \" . $acme_register, 'info');\n        $acme_register = \"\";\n      }\n      drush_log('[hosting_le] Live LE mode active. Real LE certs will be generated.', 'info');\n      $demo_mode = FALSE;\n    }\n\n    if ($demo_mode) {\n      if (file_exists($site_mode_live_ctrl) || !file_exists($site_mode_demo_ctrl)) {\n        @unlink($site_mode_live_ctrl);\n        $force_renew = TRUE;\n        drush_log('[hosting_le] Forcing DEMO certificate renew for ' . $main_name, 'info');\n      }\n      if (!file_exists($site_mode_demo_ctrl)) {\n        provision_file()->file_put_contents($site_mode_demo_ctrl, $main_name)\n          ->succeed('[hosting_le] Created pid ' . $site_mode_demo_ctrl)\n          ->fail('[hosting_le] Could not create pid ' . 
$site_mode_demo_ctrl);\n      }\n    }\n    else {\n      if (file_exists($site_mode_demo_ctrl) || !file_exists($site_mode_live_ctrl)) {\n        @unlink($site_mode_demo_ctrl);\n        $force_renew = TRUE;\n        drush_log('[hosting_le] Forcing LIVE certificate renew for ' . $main_name, 'info');\n      }\n      if (!file_exists($site_mode_live_ctrl)) {\n        provision_file()->file_put_contents($site_mode_live_ctrl, $main_name)\n          ->succeed('[hosting_le] Created pid ' . $site_mode_live_ctrl)\n          ->fail('[hosting_le] Could not create pid ' . $site_mode_live_ctrl);\n      }\n    }\n\n    if (!file_exists($forced_renewal_ctrl)) {\n      $force_renew = TRUE;\n      provision_file()->file_put_contents($forced_renewal_ctrl, $main_name)\n        ->succeed('[hosting_le] Created pid ' . $forced_renewal_ctrl)\n        ->fail('[hosting_le] Could not create pid ' . $forced_renewal_ctrl);\n    }\n\n    // WIP: needed after certs deleted\n    // $force_renew = TRUE;\n\n    drush_log('[hosting_le] LE certificate for ' . $main_name, 'info');\n\n    if (provision_file()->exists($no_san_ctrl)->status()) {\n      $no_alt_names = TRUE;\n      drush_log('[hosting_le] SSL no-SAN mode ctrl file detected for ' . $main_name, 'info');\n    }\n    else {\n      $no_alt_names = FALSE;\n      $alt_names = '';\n      if ($is_boa) {\n        if (!empty(d()->aliases)) {\n          foreach (d()->aliases as $alias) {\n            if ($is_wildcard) {\n              if (!preg_match(\"/\\.(?:host8|boa|aegir|o8)\\.(?:biz|io|cc)$/\", $alias) &&\n                !preg_match(\"/\\.(?:nodns)\\./\", $alias) &&\n                !strpos($alias, $main_name)) {\n                $alt_names .= ' --domain ' . str_replace('/', '.', $alias);\n              }\n            }\n            else {\n              if (!preg_match(\"/\\.(?:host8|boa|aegir|o8)\\.(?:biz|io|cc)$/\", $alias) &&\n                !preg_match(\"/\\.(?:nodns)\\./\", $alias)) {\n                $alt_names .= ' --domain ' . 
str_replace('/', '.', $alias);\n              }\n            }\n          }\n        }\n      }\n      else {\n        if (!empty(d()->aliases)) {\n          $alt_names = implode(' --domain ', str_replace('/', '.', d()->aliases));\n          $alt_names = ' --domain ' . $alt_names;\n        }\n      }\n      drush_log('[hosting_le] ALT names:' . $alt_names, 'info');\n    }\n\n    $web_server = d()->platform->web_server;\n    // check if server is a pack\n    if ($web_server->master_web_servers) {\n      // use pack master\n      $web_server = d(reset($web_server->master_web_servers));\n    }\n    $site_vhost = $web_server->http_vhostd_path . \"/\" . $real_name;\n\n    if (provision_file()->exists($site_vhost)->status()) {\n      $grep_output = '';\n      $redirect_result = '';\n      $http_service_type = $web_server->http_service_type;\n\n      if ($http_service_type == 'nginx_ssl') {\n        exec(\"/bin/grep  \\\"alias redirection virtual host\\\" \" . $site_vhost, $grep_output);\n      }\n      elseif ($http_service_type == 'apache_ssl') {\n        exec(\"/bin/grep  \\\"Redirect all aliases\\\" \" . $site_vhost, $grep_output);\n      }\n\n      $redirect_result = implode(' ', $grep_output);\n      drush_log('[hosting_le] Redirect check result for ' . $main_name . ' : ' . $redirect_result, 'info');\n\n      if ($redirect_result && !$no_alt_names && !$is_boa) {\n        drush_log(\"[hosting_le] Aliases redirection must be disabled if all aliases are expected to be listed as SAN names.\", 'info');\n        drush_log(\"[hosting_le] The alternative is to disable SAN mode for this site with empty ctrl file: \" . $no_san_ctrl, 'info');\n        drush_log('[hosting_le] Forcing no-SAN-mode for ' . $main_name, 'info');\n        $no_alt_names = TRUE;\n      }\n    }\n    else {\n      drush_log(\"[hosting_le] The site's vhost must already exist, or the LE agent will not be able to proceed.\", 'warning');\n      drush_log('[hosting_le] Path to vhost: ' . 
$site_vhost, 'info');\n      drush_log('[hosting_le] Skipping LE setup for ' . $main_name, 'warning');\n      return FALSE;\n    }\n\n    if (provision_file()->exists($immutable)->status() &&\n        provision_file()->exists($cert_dir)->status()) {\n      $needs_update = FALSE;\n      drush_log(\"[hosting_le] Immutable protection mode detected for this domain: \" . $cert_dir, 'info');\n      drush_log(\"[hosting_le] SSL Certificate for this domain already exists in: \" . $cert_dir, 'info');\n      drush_log(\"[hosting_le] You can replace it with any other certificate since it will be left here as-is forever.\", 'info');\n      drush_log(\"[hosting_le] To re-activate LE auto-renewals please delete this file: \" . $immutable, 'info');\n      drush_log(\"[hosting_le] NOTE: On hosted Ægir service you need to contact your host support for further assistance.\", 'info');\n    }\n    else {\n      drush_log(\"[hosting_le] To stop the LE Certificate auto-renewals please create an empty ctrl file.\", 'info');\n      drush_log(\"[hosting_le] Path to use for this site specific empty ctrl file: \" . $immutable, 'info');\n      drush_log(\"[hosting_le] You could then replace existing cert with any other cert since it will be left here as-is forever.\", 'info');\n      drush_log(\"[hosting_le] NOTE: On hosted Ægir service you need to contact your host support for further assistance.\", 'info');\n      $output = '';\n      $le_options = \" --cron --ipv4\";\n      $le_challenge = \" --domain \" . $real_name . \" --challenge dns-01 --hook \" . $cf_hook;\n      if ($on_remote_server) {\n        $le_options .= \" --hook \" . $le_hook;\n      }\n      if ($force_renew) {\n        if ($no_alt_names || empty($alt_names)) {\n          if ($is_wildcard) {\n            $le_options .= \" --alias \" . $real_name . \" --domain *.\" . $real_name . $le_challenge;\n            exec(\"/bin/bash \" . $le_exec . $le_options . 
\" --force 2>&1\", $output);\n          }\n          else {\n            exec(\"/bin/bash \" . $le_exec . $le_options . \" --force --domain \" . $main_name . ' 2>&1', $output);\n          }\n        }\n        else {\n          if ($is_wildcard) {\n            $le_options .= \" --alias \" . $real_name . \" --domain *.\" . $real_name . $alt_names . $le_challenge;\n            exec(\"/bin/bash \" . $le_exec . $le_options . \" --force 2>&1\", $output);\n          }\n          else {\n            exec(\"/bin/bash \" . $le_exec . $le_options . \" --force --domain \" . $real_name . $alt_names . ' 2>&1', $output);\n          }\n        }\n      }\n      else {\n        if ($no_alt_names || empty($alt_names)) {\n          if ($is_wildcard) {\n            $le_options .= \" --alias \" . $real_name . \" --domain *.\" . $real_name . $le_challenge;\n            exec(\"/bin/bash \" . $le_exec . $le_options . \" 2>&1\", $output);\n          }\n          else {\n            exec(\"/bin/bash \" . $le_exec . $le_options . \" --domain \" . $main_name . ' 2>&1', $output);\n          }\n        }\n        else {\n          if ($is_wildcard) {\n            $le_options .= \" --alias \" . $real_name . \" --domain *.\" . $real_name . $alt_names . $le_challenge;\n            exec(\"/bin/bash \" . $le_exec . $le_options . \" 2>&1\", $output);\n          }\n          else {\n            exec(\"/bin/bash \" . $le_exec . $le_options . \" --domain \" . $real_name . $alt_names . ' 2>&1', $output);\n          }\n        }\n      }\n      $acme_result = implode(' ', $output);\n      drush_log(\"[hosting_le] ACME Output: \" . $acme_result, 'info');\n      if (!provision_file()->exists($cert_dir)->status()) {\n        $needs_update = FALSE;\n        drush_log(\"[hosting_le] Hmm.. For some reason cert_dir doesn't exist:  \" . 
$cert_dir, 'info');\n        drush_log(\"[hosting_le] I couldn't generate LE cert during this Verify procedure.\", 'info');\n        drush_log(\"[hosting_le] It's normal while running a series of Verify sub-tasks during Rename/Migrate.\", 'info');\n        drush_log(\"[hosting_le] But if this happens during standalone Verify, maybe permissions are incorrect.\", 'info');\n        drush_log(\"[hosting_le] Let's abort the procedure here. Bye.\", 'info');\n        return FALSE;\n      }\n      else {\n        if (preg_match(\"/unchanged.*Skipping/i\", $acme_result)) {\n          $needs_update = FALSE;\n          drush_log(\"[hosting_le] The existing LE Certificate is up to date in \" . $cert_dir, 'success');\n        }\n        elseif (preg_match(\"/Forcing.*renew/i\", $acme_result) &&\n                preg_match(\"/Creating.*fullchain/i\", $acme_result)) {\n          $needs_update = TRUE;\n          drush_log(\"[hosting_le] The LE Certificate has been successfully updated in \" . $cert_dir, 'success');\n        }\n        elseif (preg_match(\"/Forcing.*renew/i\", $acme_result) &&\n               !preg_match(\"/Creating.*fullchain/i\", $acme_result)) {\n          $needs_update = FALSE;\n          drush_log(\"[hosting_le] The LE Certificate attempted update looks incomplete in \" . $cert_dir, 'warning');\n          drush_log(\"[hosting_le] Make sure that all aliases have valid DNS names pointing to your instance IP address.\", 'warning');\n          if (!$is_boa) {\n            drush_log(\"[hosting_le] Aliases redirection must be disabled, or the LE agent will not be able to proceed.\", 'warning');\n          }\n          drush_log(\"[hosting_le] The alternative is to disable SAN mode for this site with empty ctrl file: \" . 
$no_san_ctrl, 'warning');\n        }\n        elseif (preg_match(\"/Requesting.*challenge/i\", $acme_result) &&\n               !preg_match(\"/Forcing.*renew/i\", $acme_result) &&\n               !preg_match(\"/Creating.*fullchain/i\", $acme_result)) {\n          $needs_update = FALSE;\n          drush_log(\"[hosting_le] The LE Certificate attempted creation failed in \" . $cert_dir, 'warning');\n          drush_log(\"[hosting_le] Make sure that all aliases have valid DNS names pointing to your instance IP address.\", 'warning');\n          if (!$is_boa) {\n            drush_log(\"[hosting_le] Aliases redirection must be disabled, or the LE agent will not be able to proceed.\", 'warning');\n          }\n          drush_log(\"[hosting_le] The alternative is to disable SAN mode for this site with empty ctrl file: \" . $no_san_ctrl, 'warning');\n        }\n        else {\n          $needs_update = TRUE;\n          drush_log(\"[hosting_le] The LE Certificate has been successfully [re]generated in \" . $cert_dir, 'success');\n        }\n      }\n    }\n\n    if ($needs_update && !provision_file()->exists($immutable)->status()) {\n\n      exec(\"/bin/bash \" . $le_exec . \" --cleanup\", $output_clean);\n      $acme_result_clean = implode(' ', $output_clean);\n      drush_log('[hosting_le] ACME Cleanup Output: ' . $acme_result_clean, 'info');\n\n      $ssl_symlinks[] = d('@server_master')->ssld_path . \"/\" . $real_name;\n      $ssl_symlinks[] = $web_server->http_ssld_path . \"/\" . $real_name;\n\n      foreach ($ssl_symlinks as $symlink) {\n        if (provision_file()->exists($symlink)->status()) {\n          drush_log('[hosting_le] File exists: ' . $symlink, 'info');\n\n          if (!is_link($symlink)) {\n            drush_log('[hosting_le] Moving original directory out of the way: ' . 
$symlink, 'info');\n\n            // This will overwrite symlink.bak if necessary, so we don't end up\n            // with dozens of backups of unused certificates.\n            rename($symlink, $symlink . \".bak\");\n          }\n          else {\n            drush_log('[hosting_le] SSL certificate already symlinked: ' . $symlink, 'success');\n            continue;\n          }\n        }\n\n        drush_log('[hosting_le] Creating symlink at ' . $symlink, 'info');\n\n        if (symlink($cert_dir, $symlink)) {\n          drush_log('[hosting_le] Symlinked cert directory to ' . $symlink, 'success');\n        }\n        else {\n          drush_log('[hosting_le] Could not symlink cert directory to ' . $symlink, 'warning');\n        }\n      }\n\n      drush_log('[hosting_le] Replacing openssl symlinks.', 'info');\n\n      $filenames = array(\n        'openssl.crt' => 'cert.pem',\n        'openssl.csr' => 'cert.csr',\n        'openssl.key' => 'privkey.pem',\n        'openssl_chain.crt' => 'fullchain.pem',\n      );\n\n      $success = TRUE;\n      foreach ($filenames as $original => $target) {\n        // Remove current symlink or file (this would have been generated by\n        // Ægir AFTER the original dir symlinking, meaning it's self-generated\n        // and therefore unimportant).\n        @unlink($cert_dir . \"/\" . $original);\n\n        $success = ($success && symlink($cert_dir . \"/\" . $target, $cert_dir . \"/\" . $original));\n      }\n\n      if ($success) {\n        drush_log('[hosting_le] Successfully replaced all symlinks.', 'success');\n      }\n      else {\n        drush_log('[hosting_le] Could not replace one or more symlinks. Check ' . $cert_dir, 'warning');\n      }\n\n      $web_server->sync($le_cert . '/' . $real_name);\n      $web_server->sync($web_server->http_ssld_path);\n\n      $pid = $le_ctrl . \"/\" . $main_name . 
\".pid\";\n\n      if (file_exists($cert_dir) && !file_exists($pid)) {\n        provision_file()->file_put_contents($pid, $main_name)\n          ->succeed('[hosting_le] Created pid ' . $pid)\n          ->fail('[hosting_le] Could not create pid ' . $pid);\n        // We will not run the secondary Verify if pid file doesn't exist,\n        // to avoid verify-inside-verify loop which could overload the system.\n        if (provision_file()->exists($pid)->status()) {\n          drush_log('[hosting_le] Running Verify again to reload web server once openssl_chain.crt is present in the vhost', 'info');\n          $local_uri_verify = '@' . $real_name;\n          provision_backend_invoke($local_uri_verify, 'provision-verify');\n          // We could run it via frontend but it is not needed currently.\n          //provision_backend_invoke('@hostmaster', 'hosting-task', array($local_uri_verify, 'verify'), array('force' => TRUE));\n          sleep(5); // A small trick to avoid high load and race conditions.\n        }\n      }\n\n      drush_log('[hosting_le] Restarting webserver', 'info');\n      $web_server->restart();\n    }\n  }\n}\n"
  },
  {
    "path": "aegir/patches/remove_usr1_usr2_fpm_unix.patch",
    "content": "diff --git a/sapi/fpm/fpm/fpm_unix.c b/sapi/fpm/fpm/fpm_unix.c\nindex 5c5e37c..ed3b352 100644\n--- a/sapi/fpm/fpm/fpm_unix.c\n+++ b/sapi/fpm/fpm/fpm_unix.c\n@@ -271,28 +271,6 @@ int fpm_unix_init_main() /* {{{ */\n \t\tstruct sigaction oldact_usr2;\n \t\tstruct timeval tv;\n \n-\t\t/*\n-\t\t * set sigaction for USR1 before fork\n-\t\t * save old sigaction to restore it after\n-\t\t * fork in the child process (the master process)\n-\t\t */\n-\t\tmemset(&act, 0, sizeof(act));\n-\t\tmemset(&act, 0, sizeof(oldact_usr1));\n-\t\tact.sa_handler = fpm_signals_sighandler_exit_ok;\n-\t\tsigfillset(&act.sa_mask);\n-\t\tsigaction(SIGUSR1, &act, &oldact_usr1);\n-\n-\t\t/*\n-\t\t * set sigaction for USR2 before fork\n-\t\t * save old sigaction to restore it after\n-\t\t * fork in the child process (the master process)\n-\t\t */\n-\t\tmemset(&act, 0, sizeof(act));\n-\t\tmemset(&act, 0, sizeof(oldact_usr2));\n-\t\tact.sa_handler = fpm_signals_sighandler_exit_config;\n-\t\tsigfillset(&act.sa_mask);\n-\t\tsigaction(SIGUSR2, &act, &oldact_usr2);\n-\n \t\t/* then fork */\n \t\tpid_t pid = fork();\n \t\tswitch (pid) {\n@@ -311,15 +289,6 @@ int fpm_unix_init_main() /* {{{ */\n \t\t\tdefault : /* parent */\n \t\t\t\tfpm_cleanups_run(FPM_CLEANUP_PARENT_EXIT);\n \n-\t\t\t\t/*\n-\t\t\t\t * wait for 10s before exiting with error\n-\t\t\t\t * the child is supposed to send USR1 or USR2 to tell the parent\n-\t\t\t\t * how it goes for it\n-\t\t\t\t */\n-\t\t\t\ttv.tv_sec = 10;\n-\t\t\t\ttv.tv_usec = 0;\n-\t\t\t\tzlog(ZLOG_DEBUG, \"The calling process is waiting for the master process to ping\");\n-\t\t\t\tselect(0, NULL, NULL, NULL, &tv);\n \t\t\t\texit(FPM_EXIT_SOFTWARE);\n \t\t}\n \t}\n"
  },
  {
    "path": "aegir/patches/restaurant_demo.patch",
    "content": "diff -urp a/texts.csv b/texts.csv\n--- a/texts.csv\t2014-08-25 11:19:44.000000000 +0000\n+++ b/texts.csv\t2014-08-25 12:22:37.000000000 +0000\n@@ -1,3 +1,3 @@\n \"Title\",\"Text\",\"Overrides\"\n-\"About Restaurant\",\"<p>The Restaurant distribution is powered by Drupal and Panopoly.</p><p><a href='http://drupal.org/project/restaurant' class='btn btn-lg btn-success'>Download Restaurant</a></p><p><a href='http://drupal.org/project/restaurant'>Learn more</a> · <a href='https://dashboard.getpantheon.com/products/restaurant/spinup'>Try on Pantheon</a></p>\",\"site_template_panel_context__new-7\"\n-\"How to book a table\",\"<h4>Step 1</h4><p>Fill in the reservation form with your contact information, the date of your reservation and the number of persons.</p><h4>Step 2</h4><p>Wait for our email that confirms that your reservation has been received.</p><h4>Step 3</h4><p>We will email you or call you back when your reservation has been confirmed.</p>\",\"reservation__page_reservation_panel_context_2__new-2\"\n\\ No newline at end of file\n+\"About Restaurant\",\"<p>The Restaurant distribution is powered by Drupal and Panopoly.</p><p><a href='http://drupal.org/project/restaurant' class='btn btn-lg btn-success'>Download Restaurant</a></p><p><a href='http://drupal.org/project/restaurant'>Learn more</a> · <a href='https://omega8.cc/demo'>Try on Aegir</a></p>\",\"site_template_panel_context__new-7\"\n+\"How to book a table\",\"<h4>Step 1</h4><p>Fill in the reservation form with your contact information, the date of your reservation and the number of persons.</p><h4>Step 2</h4><p>Wait for our email that confirms that your reservation has been received.</p><h4>Step 3</h4><p>We will email you or call you back when your reservation has been confirmed.</p>\",\"reservation__page_reservation_panel_context_2__new-2\"\n"
  },
  {
    "path": "aegir/patches/singular.mft.patch",
    "content": "diff -urp singular/style.css singular/style.css\r\n--- singular/style.css    2011-03-10 20:28:50.000000000 +0100\r\n+++ singular/style.css    2011-03-10 20:28:44.000000000 +0100\r\n@@ -22,13 +22,13 @@ a {\r\n   -moz-border-radius-bottomright:5px;\r\n   -webkit-border-top-right-radius:5px;\r\n   -webkit-border-bottom-right-radius:5px;\r\n+   border-top-right-radius:5px;\r\n+   border-bottom-right-radius:5px;\r\n\r\n   font-size:13px;\r\n   line-height:20px;\r\n-\r\n   background:url(images/mask.png);\r\n   color:#fff;\r\n-\r\n   margin:100px 0px 0px;\r\n   float:right;\r\n   width:220px;\r\n@@ -36,9 +36,9 @@ a {\r\n\r\n #main {\r\n   -moz-border-radius:10px;\r\n-  -moz-border-radius:10px;\r\n-  -webkit-border-radius:10px;\r\n   -webkit-border-radius:10px;\r\n+  border-radius:10px;\r\n+\r\n   background:url(images/mask.png);\r\n   width:740px;\r\n   }\r\n@@ -102,6 +102,8 @@ body.fluid #sidebar { width:20%; }\r\n   #branding div.primary ul.links a {\r\n     -moz-border-radius:10px;\r\n     -webkit-border-radius:10px;\r\n+    border-radius:10px;\r\n+\r\n     background:url(images/mask.png);\r\n     padding:0px 15px;\r\n     }\r\n@@ -123,6 +125,8 @@ body.fluid #sidebar { width:20%; }\r\n   -moz-border-radius-topright:5px;\r\n   -webkit-border-top-left-radius:5px;\r\n   -webkit-border-top-right-radius:5px;\r\n+   border-top-left-radius:5px;\r\n+   border-top-right-radius:5px;\r\n\r\n   color:#ccc;\r\n   background:url(images/mask.png);\r\n@@ -195,6 +199,8 @@ div.left div.admin-panel { margin-right:\r\n   -moz-border-radius-bottomright:5px;\r\n   -webkit-border-bottom-left-radius:5px;\r\n   -webkit-border-bottom-right-radius:5px;\r\n+   border-bottom-right-radius: 5px;\r\n+   border-bottom-left-radius: 5px;\r\n   }\r\n\r\n #page #content div.block {\r\n@@ -212,6 +218,8 @@ div.left div.admin-panel { margin-right:\r\n #page #content div.block-content {\r\n   -moz-border-radius:10px;\r\n   -webkit-border-radius:10px;\r\n+   border-radius:10px;\r\n+\r\n   
background:#fff;\r\n   padding:10px;\r\n   }\r\n@@ -262,6 +270,7 @@ div.left div.admin-panel { margin-right:\r\n #growl div.messages {\r\n   -moz-border-radius:5px;\r\n   -webkit-border-radius:5px;\r\n+   border-radius:5px;\r\n\r\n   margin:5px 0px;\r\n   background:#eff;\r\n@@ -320,6 +329,8 @@ dl dd {\r\n   -moz-border-radius-bottomleft:5px;\r\n   -webkit-border-top-left-radius:5px;\r\n   -webkit-border-bottom-left-radius:5px;\r\n+   border-top-left-radius: 5px;\r\n+   border-bottom-left-radius: 5px;\r\n   }\r\n\r\n #sidebar div.block h2.block-title {\r\n@@ -390,6 +401,8 @@ div.node-links ul.links {\r\n   margin:20px 0px 0px;\r\n   -moz-border-radius:5px;\r\n   -webkit-border-radius:5px;\r\n+   border-radius:5px;\r\n+\r\n   background:#fff;\r\n   float:right;\r\n   }\r\n"
  },
  {
    "path": "aegir/patches/singular.patch",
    "content": "diff -urp singular/style.css singular/style.css\n--- singular/style.css\t2009-08-12 21:41:25.000000000 +0200\n+++ singular/style.css\t2009-08-18 14:51:26.000000000 +0200\n@@ -20,6 +20,8 @@ a {\n #sidebar {\n   -moz-border-radius-topright:5px;\n   -moz-border-radius-bottomright:5px;\n+  -webkit-border-top-right-radius:5px;\n+  -webkit-border-bottom-right-radius:5px;\n\n   font-size:13px;\n   line-height:20px;\n@@ -35,6 +37,8 @@ a {\n #main {\n   -moz-border-radius:10px;\n   -moz-border-radius:10px;\n+  -webkit-border-radius:10px;\n+  -webkit-border-radius:10px;\n   background:url(images/mask.png);\n   width:740px;\n   }\n@@ -187,9 +191,10 @@ div.left div.admin-panel { margin-right:\n   background:#f8f8f8;\n   border-top:1px solid #ddd;\n   padding:19px 20px 20px;\n-\n   -moz-border-radius-bottomleft:5px;\n   -moz-border-radius-bottomright:5px;\n+  -webkit-border-bottom-left-radius:5px;\n+  -webkit-border-bottom-right-radius:5px;\n   }\n\n #page #content div.block {\n@@ -206,6 +211,7 @@ div.left div.admin-panel { margin-right:\n\n #page #content div.block-content {\n   -moz-border-radius:10px;\n+  -webkit-border-radius:10px;\n   background:#fff;\n   padding:10px;\n   }\n@@ -383,6 +389,7 @@ div.node-submitted { margin:0px 0px 20px\n div.node-links ul.links {\n   margin:20px 0px 0px;\n   -moz-border-radius:5px;\n+  -webkit-border-radius:5px;\n   background:#fff;\n   float:right;\n   }\n"
  },
  {
    "path": "aegir/patches/skwashd.commons.patch",
    "content": "diff -aburN --exclude='CVS*' drupal_commons.orig/profiles/drupal_commons/drupal_commons.profile drupal_commons/profiles/drupal_commons/drupal_commons.profile\n--- drupal_commons.orig/profiles/drupal_commons/drupal_commons.profile\t2010-08-17 08:36:12.000000000 +1000\n+++ drupal_commons/profiles/drupal_commons/drupal_commons.profile\t2010-10-11 12:46:40.428489204 +1100\n@@ -177,6 +177,7 @@\n  *   modify the $task, otherwise discarded.\n  */\n function drupal_commons_profile_tasks(&$task, $url) {\n+  drupal_commons_config_vars();\n   drupal_commons_build_directories();\n   drupal_commons_config_taxonomy();\n   drupal_commons_config_profile();\n@@ -193,7 +194,6 @@\n   drupal_commons_config_views();\n   drupal_commons_config_theme();\n   drupal_commons_config_images();\n-  drupal_commons_config_vars();\n   drupal_commons_cleanup();\n }\n\n"
  },
  {
    "path": "aegir/patches/taxonomy-6.20.patch",
    "content": "From a88b4ae0ec60221b93cd5ed14ac67b1ff5719ddb Mon Sep 17 00:00:00 2001\nFrom: Thomas Skovgaard Gielfeldt <thomas@gielfeldt.com>\nDate: Sun, 22 May 2011 10:34:56 +0200\nSubject: [PATCH] Use Taxonomy Edge functionality.\n\n---\n modules/taxonomy/taxonomy.module |    6 ++++++\n 1 files changed, 6 insertions(+), 0 deletions(-)\n mode change 100644 => 100755 modules/taxonomy/taxonomy.module\n\ndiff --git a/modules/taxonomy/taxonomy.module b/modules/taxonomy/taxonomy.module\nindex 0141120..26a6845\n--- a/modules/taxonomy/taxonomy.module\n+++ b/modules/taxonomy/taxonomy.module\n@@ -835,6 +835,9 @@ function taxonomy_get_children($tid, $vid = 0, $key = 'tid') {\n  *   Results are statically cached.\n  */\n function taxonomy_get_tree($vid, $parent = 0, $depth = -1, $max_depth = NULL) {\n+  if (function_exists('taxonomy_edge_taxonomy_get_tree')) {\n+    return taxonomy_edge_taxonomy_get_tree($vid, $parent, $depth, $max_depth);\n+  }\n   static $children, $parents, $terms;\n\n   $depth++;\n@@ -1130,6 +1133,9 @@ function theme_taxonomy_term_select($element) {\n  *   A resource identifier pointing to the query results.\n  */\n function taxonomy_select_nodes($tids = array(), $operator = 'or', $depth = 0, $pager = TRUE, $order = 'n.sticky DESC, n.created DESC') {\n+  if (function_exists('taxonomy_edge_taxonomy_select_nodes')) {\n+    return taxonomy_edge_taxonomy_select_nodes($tids, $operator, $depth, $pager, $order);\n+  }\n   if (count($tids) > 0) {\n     // For each term ID, generate an array of descendant term IDs to the right depth.\n     $descendant_tids = array();\n--\n1.7.4\n"
  },
  {
    "path": "aegir/patches/taxonomy-6.26.patch",
    "content": "--- modules/taxonomy/taxonomy.module.orig\t2012-02-29 17:44:11.000000000 +0100\n+++ modules/taxonomy/taxonomy.module\t2012-03-14 18:46:54.000000000 +0100\n@@ -846,6 +846,9 @@\n  *   Results are statically cached.\n  */\n function taxonomy_get_tree($vid, $parent = 0, $depth = -1, $max_depth = NULL) {\n+  if (function_exists('taxonomy_edge_get_tree')) {\n+    return taxonomy_edge_get_tree($vid, $parent, $depth, $max_depth);\n+  }\n   static $children, $parents, $terms;\n \n   // We cache trees, so it's not CPU-intensive to call get_tree() on a term\n@@ -1181,6 +1184,9 @@\n  *   A resource identifier pointing to the query results.\n  */\n function taxonomy_select_nodes($tids = array(), $operator = 'or', $depth = 0, $pager = TRUE, $order = 'n.sticky DESC, n.created DESC') {\n+  if (function_exists('taxonomy_edge_select_nodes')) {\n+    return taxonomy_edge_select_nodes($tids, $operator, $depth, $pager, $order);\n+  }\n   if (count($tids) > 0) {\n     // For each term ID, generate an array of descendant term IDs to the right depth.\n     $descendant_tids = array();\n"
  },
  {
    "path": "aegir/patches/taxonomy-7.12.patch",
    "content": "--- modules/taxonomy/taxonomy.module.orig\t2012-02-01 23:03:14.000000000 +0100\n+++ modules/taxonomy/taxonomy.module\t2012-03-18 19:24:09.000000000 +0100\n@@ -968,6 +968,9 @@\n  *   depending on the $load_entities parameter.\n  */\n function taxonomy_get_tree($vid, $parent = 0, $max_depth = NULL, $load_entities = FALSE) {\n+  if (module_exists('taxonomy_edge') && function_exists('taxonomy_edge_get_tree')) {\n+    return taxonomy_edge_get_tree($vid, $parent, $max_depth, $load_entities);\n+  }\n   $children = &drupal_static(__FUNCTION__, array());\n   $parents = &drupal_static(__FUNCTION__ . ':parents', array());\n   $terms = &drupal_static(__FUNCTION__ . ':terms', array());\n"
  },
  {
    "path": "aegir/patches/taxonomy-7.7.patch",
    "content": "From f2b5994fe2c2236fb763bdc811437e0fb595c7c5 Mon Sep 17 00:00:00 2001\nFrom: Thomas Skovgaard Gielfeldt <thomas@gielfeldt.com>\nDate: Sun, 21 Aug 2011 07:45:59 +0200\nSubject: [PATCH] Patch for Taxonomy Edge.\n\n---\n modules/taxonomy/taxonomy.module |    3 +++\n 1 files changed, 3 insertions(+), 0 deletions(-)\n\ndiff --git a/modules/taxonomy/taxonomy.module b/modules/taxonomy/taxonomy.module\nindex dc2847d..e3337de 100644\n--- a/modules/taxonomy/taxonomy.module\n+++ b/modules/taxonomy/taxonomy.module\n@@ -925,6 +925,9 @@ function taxonomy_get_children($tid, $vid = 0) {\n  *   depending on the $load_entities parameter.\n  */\n function taxonomy_get_tree($vid, $parent = 0, $max_depth = NULL, $load_entities = FALSE) {\n+  if (function_exists('taxonomy_edge_get_tree')) {\n+    return taxonomy_edge_get_tree($vid, $parent, $max_depth, $load_entities);\n+  }\n   $children = &drupal_static(__FUNCTION__, array());\n   $parents = &drupal_static(__FUNCTION__ . ':parents', array());\n   $terms = &drupal_static(__FUNCTION__ . ':terms', array());\n--\n1.7.5.1\n"
  },
  {
    "path": "aegir/patches/ubercart-1167276-reroll.patch",
    "content": "diff --git a/uc_cart/uc_cart.module b/uc_cart/uc_cart.module\nindex a3cc27f..dd6722f 100644\n--- a/uc_cart/uc_cart.module\n+++ b/uc_cart/uc_cart.module\n@@ -374,12 +374,19 @@ function uc_cart_block($op = 'list', $delta = 0, $edit = array()) {\n     case 'view':\n       // 0 = Default shopping cart block.\n       if ($delta == 0) {\n-        $cachable = !$user->uid && variable_get('cache', CACHE_DISABLED) != CACHE_DISABLED;\n+        $cachable = TRUE;\n+        if (function_exists('drupal_page_is_cacheable')) {\n+          $cachable = drupal_page_is_cacheable();\n+        }\n+        else {\n+          $cachable = !$user->uid && variable_get('cache', CACHE_DISABLED) != CACHE_DISABLED;\n+        }\n+\n         $product_count = count(uc_cart_get_contents());\n \n         // Display nothing if the block is set to hide on empty and there are no\n         // items in the cart.\n-        if (!$cachable && variable_get('uc_cart_block_empty_hide', FALSE) && !$product_count) {\n+        if (variable_get('uc_cart_block_empty_hide', FALSE) && !$product_count) {\n           return;\n         }\n \n"
  },
  {
    "path": "aegir/patches/user.drush.inc.patch",
    "content": "--- /dev/null\t2010-06-09 13:49:44.501193278 -0500\n+++ commands/user/user.drush.inc\t2010-06-24 14:18:14.000000000 -0500\n@@ -0,0 +1,548 @@\n+<?php\n+// $Id: \n+\n+/**\n+ * @file Drush User Management commands\n+ */\n+\n+/**\n+ * Implementation of hook_drush_help().\n+ */\n+function user_drush_help($section) {\n+  switch ($section) {\n+    case 'drush:user-information':\n+      return dt(\"Display information about a user identified by username, uid or email address.\");\n+    case 'drush:user-block':\n+      return dt(\"Block the specified user(s).\");\n+    case 'drush:user-unblock':\n+      return dt(\"Unblock the specified user(s).\");\n+    case 'drush:user-add-role':\n+      return dt(\"Add a role to the specified user accounts.\");\n+    case 'drush:user-remove-role':\n+      return dt(\"Remove a role from the specified user accounts.\");\n+    case 'drush:user-create':\n+      return dt(\"Create a user account.\");\n+    case 'drush:user-cancel':\n+      return dt(\"Cancel a user account.\");\n+    case 'drush:user-password':\n+      return dt(\"(Re)Set the password for the given user account.\");\n+  }\n+}\n+\n+/**\n+ * Implementation of hook_drush_command().\n+ */\n+function user_drush_command() {\n+  $items['user-information'] = array(\n+    'callback' => 'drush_user_information',\n+    'description' => 'Print information about the specified user(s).',\n+    'aliases' => array('uinf'),\n+    'examples' => array(\n+      'drush user-information 2,3,someguy,somegal,billgates@microsoft.com' => \n+        'Display information about any users with uids, names, or mail addresses matching the strings between commas.',\n+    ),\n+    'arguments' => array(\n+      'users' => 'A comma delimited list of uids, user names, or email addresses.',\n+    ),\n+    'options' => array(\n+      '--full' => 'show extended information about the user',\n+      '--short' => 'show basic information about the user (this is the default)',\n+    ),\n+  );\n+  
$items['user-block'] = array(\n+    'callback' => 'drush_user_block',\n+    'description' => 'Block the specified user(s).',\n+    'aliases' => array('ublk'),\n+    'arguments' => array(\n+      'users' => 'A comma delimited list of uids, user names, or email addresses.',\n+    ),\n+    'examples' => array(\n+      'drush user-block 5,user3 --uid=2,3 --name=someguy,somegal --mail=billgates@microsoft.com' => \n+        'Block the users with name, id, or email 5 or user3, uids 2 and 3, names someguy and somegal, and email address of billgates@microsoft.com',\n+    ),\n+    'options' => array(\n+      '--uid' => 'A comma delimited list of uids to block',\n+      '--name' => 'A comma delimited list of user names to block',\n+      '--mail' => 'A comma delimited list of user mail addresses to block',\n+    ),\n+  );\n+  $items['user-unblock'] = array(\n+    'callback' => 'drush_user_unblock',\n+    'description' => 'Unblock the specified user(s).',\n+    'aliases' => array('uublk'),\n+    'arguments' => array(\n+      'users' => 'A comma delimited list of uids, user names, or email addresses.',\n+    ),\n+    'examples' => array(\n+      'drush user-unblock 5,user3 --uid=2,3 --name=someguy,somegal --mail=billgates@microsoft.com' => \n+        'Unblock the users with name, id, or email 5 or user3, uids 2 and 3, names someguy and somegal, and email address of billgates@microsoft.com',\n+    ),\n+    'options' => array(\n+      '--uid' => 'A comma delimited list of uids to unblock',\n+      '--name' => 'A comma delimited list of user names to unblock',\n+      '--mail' => 'A comma delimited list of user mail addresses to unblock',\n+    ),\n+  );\n+  $items['user-add-role'] = array(\n+    'callback' => 'drush_user_add_role',\n+    'description' => 'Add a role to the specified user accounts.',\n+    'aliases' => array('urol'),\n+    'arguments' => array(\n+      'role' => 'The name of the role to add',\n+      'users' => '(optional) A comma delimited list of uids, user 
names, or email addresses.',\n+    ),\n+    'examples' => array(\n+      'drush user-add-role \"power user\" 5,user3 --uid=2,3 --name=someguy,somegal --mail=billgates@microsoft.com' => \n+        'Add the \"power user\" role to the accounts with name, id, or email 5 or user3, uids 2 and 3, names someguy and somegal, and email address of billgates@microsoft.com',\n+    ),\n+    'options' => array(\n+      '--uid' => 'A comma delimited list of uids',\n+      '--name' => 'A comma delimited list of user names',\n+      '--mail' => 'A comma delimited list of user mail addresses',\n+    ),\n+  );\n+  $items['user-remove-role'] = array(\n+    'callback' => 'drush_user_remove_role',\n+    'description' => 'Remove a role from the specified user accounts.',\n+    'aliases' => array('urrol'),\n+    'arguments' => array(\n+      'role' => 'The name of the role to remove',\n+      'users' => '(optional) A comma delimited list of uids, user names, or email addresses.',\n+    ),\n+    'examples' => array(\n+      'drush user-remove-role \"power user\" 5,user3 --uid=2,3 --name=someguy,somegal --mail=billgates@microsoft.com' => \n+        'Remove the \"power user\" role from the accounts with name, id, or email 5 or user3, uids 2 and 3, names someguy and somegal, and email address of billgates@microsoft.com',\n+    ),\n+    'options' => array(\n+      '--uid' => 'A comma delimited list of uids',\n+      '--name' => 'A comma delimited list of user names',\n+      '--mail' => 'A comma delimited list of user mail addresses',\n+    ),\n+  );\n+  $items['user-create'] = array(\n+    'callback' => 'drush_user_create',\n+    'description' => 'Create a user account with the specified name.',\n+    'aliases' => array('ucrt'),\n+    'arguments' => array(\n+      'name' => 'The name of the account to add'\n+    ),\n+    'examples' => array(\n+      'drush user-create newuser --mail=\"person@example.com\" --password=\"letmein\"' => \n+        'Create a new user account with the name newuser, 
the email address person@example.com, and the password letmein',\n+    ),\n+    'options' => array(\n+      '--password' => 'The password for the new account',\n+      '--mail' => 'The email address for the new account',\n+    ),\n+  );\n+  $items['user-cancel'] = array(\n+    'callback' => 'drush_user_cancel',\n+    'description' => 'Cancel a user account with the specified name.',\n+    'aliases' => array('ucan'),\n+    'arguments' => array(\n+      'name' => 'The name of the account to cancel',\n+    ),\n+    'examples' => array(\n+      'drush user-cancel username' => \n+        'Cancel the user account with the name username and anonymize all content created by that user.',\n+    ),\n+  );\n+  $items['user-password'] = array(\n+    'callback' => 'drush_user_password',\n+    'description' => '(Re)Set the password for the user account with the specified name.',\n+    'aliases' => array('upwd'),\n+    'arguments' => array(\n+      'name' => 'The name of the account to modify'\n+    ),\n+    'options' => array(\n+      '--password' => '(required) The new password for the account',\n+    ),\n+    'examples' => array(\n+      'drush user-password someuser --password=\"gr3@tP@$s\"' => \n+        'Set the password for the username someuser to gr3@tP@$s.',\n+    ),\n+  );\n+\n+  // Drupal 7 only options.\n+  if (drush_drupal_major_version() >= 7) {\n+    $items['user-cancel']['options'] = array(\n+      'delete-content' => 'Delete all content created by the user',\n+    );\n+    $items['user-cancel']['examples']['drush user-cancel --delete-content=true username'] = \n+      'Cancel the user account with the name username and delete all content created by that user.';\n+  }\n+  return $items;\n+}\n+\n+// Implementation of hook_drush_init().\n+function user_drush_init() {\n+  $command_info = drush_get_command();\n+  $command = $command_info['command'];\n+  $needs_parse_args = array('user-block', 'user-unblock', 'user-add-role', 'user-remove-role');\n+  if 
(in_array($command, $needs_parse_args)) {\n+    // parse args and call drush_set_option for --uids\n+    $users = array();\n+    foreach (array('uid', 'name', 'mail' ) as $user_attr) {\n+      if ($arg = drush_get_option($user_attr)) {\n+        foreach(explode(',', $arg) as $search) {\n+          $uid_query = FALSE;\n+          switch ($user_attr) {\n+            case 'uid':\n+              if (drush_drupal_major_version() >= 7) {\n+                $uid_query = db_query(\"SELECT uid FROM {users} WHERE uid = :uid\", array(':uid' => $search));\n+              }\n+              else {\n+                $uid_query = db_query(\"SELECT uid FROM {users} WHERE uid = %d\", $search);\n+              }\n+              break;\n+            case 'name':\n+              if (drush_drupal_major_version() >= 7) {\n+                $uid_query = db_query(\"SELECT uid FROM {users} WHERE name = :name\", array(':name' => $search));\n+              }\n+              else {\n+                $uid_query = db_query(\"SELECT uid FROM {users} WHERE name = '%s'\", $search);\n+              }\n+              break;\n+            case 'mail':\n+              if (drush_drupal_major_version() >= 7) {\n+                $uid_query = db_query(\"SELECT uid FROM {users} WHERE mail = :mail\", array(':mail' => $search));\n+              }\n+              else {\n+                $uid_query = db_query(\"SELECT uid FROM {users} WHERE mail = '%s'\", $search);\n+              }\n+              break;\n+          }\n+          if ($uid_query !== FALSE) {\n+            if ($uid = drush_db_result($uid_query)) {\n+              $users[] = $uid;\n+            }\n+            else {\n+              drush_set_error(\"Could not find a uid for $user_attr = $search\");\n+            }\n+          }\n+        }\n+      }\n+    }\n+    if (!empty($users)) {\n+      drush_set_option('uids', $users);\n+    }\n+  }\n+}\n+\n+/**\n+ * Prints information about the specified user(s).\n+ */\n+function 
drush_user_information($users) {\n+  $users = explode(',', $users);\n+  foreach($users as $user) {\n+    $uid = _drush_user_get_uid($user);  \n+    if ($uid !== FALSE) {\n+        _drush_user_print_info($uid);\n+    }\n+  }\n+}\n+\n+/**\n+ * Block the specified user(s).\n+ */\n+function drush_user_block($users = '') {\n+  $uids = drush_get_option('uids');\n+  if ($users !== '') {\n+    $users = explode(',', $users);\n+    foreach($users as $user) {\n+      $uid = _drush_user_get_uid($user);  \n+      if ($uid !== FALSE) {\n+        $uids[] = $uid;\n+      }\n+    }\n+  }\n+  if (!empty($uids)) {\n+    user_user_operations_block($uids);\n+  }\n+  else {\n+    return drush_set_error(\"Could not find any valid uids!\");\n+  }\n+}\n+\n+/**\n+ * Unblock the specified user(s).\n+ */\n+function drush_user_unblock($users = '') {\n+  $uids = drush_get_option('uids');\n+  if ($users !== '') {\n+    $users = explode(',', $users);\n+    foreach($users as $user) {\n+      $uid = _drush_user_get_uid($user);  \n+      if ($uid !== FALSE) {\n+        $uids[] = $uid;\n+      }\n+    }\n+  }\n+  if (!empty($uids)) {\n+    user_user_operations_unblock($uids);\n+  }\n+  else {\n+    return drush_set_error(\"Could not find any valid uids!\");\n+  }\n+}\n+\n+/**\n+ * Add a role to the specified user accounts.\n+ */\n+function drush_user_add_role($role, $users = '') {\n+  $uids = drush_get_option('uids');\n+  if ($users !== '') {\n+    $users = explode(',', $users);\n+    foreach($users as $user) {\n+      $uid = _drush_user_get_uid($user);  \n+      if ($uid !== FALSE) {\n+        $uids[] = $uid;\n+      }\n+    }\n+  }\n+  if (drush_drupal_major_version() >= 7) {\n+    $rid_query = db_query(\"SELECT rid FROM {role} WHERE name = :role\", array(':role' => $role));\n+  }\n+  else {\n+    $rid_query = db_query(\"SELECT rid FROM {role} WHERE name = '%s'\", $role);\n+  }\n+  if (!empty($uids)) {\n+    if ($rid = drush_db_result($rid_query)) {\n+      user_multiple_role_edit($uids, 
'add_role', $rid);\n+      foreach($uids as $uid) {\n+        drush_log(dt(\"Added the %role role to uid %uid\", array('%role' => $role, '%uid' => $uid)), 'success');\n+      }\n+    }\n+    else {\n+      return drush_set_error(\"There is no role named: \\\"$role\\\"!\");\n+    }\n+  }\n+  else {\n+    return drush_set_error(\"Could not find any valid uids!\");\n+  }\n+}\n+\n+/**\n+ * Remove a role from the specified user accounts.\n+ */\n+function drush_user_remove_role($role, $users = '') {\n+  $uids = drush_get_option('uids');\n+  if ($users !== '') {\n+    $users = explode(',', $users);\n+    foreach($users as $user) {\n+      $uid = _drush_user_get_uid($user);  \n+      if ($uid !== FALSE) {\n+        $uids[] = $uid;\n+      }\n+    }\n+  }\n+  if (drush_drupal_major_version() >= 7) {\n+    $rid_query = db_query(\"SELECT rid FROM {role} WHERE name = :role\", array(':role' => $role));\n+  }\n+  else {\n+    $rid_query = db_query(\"SELECT rid FROM {role} WHERE name = '%s'\", $role);\n+  }\n+  if (!empty($uids)) {\n+    if ($rid = drush_db_result($rid_query)) {\n+      user_multiple_role_edit($uids, 'remove_role', $rid);\n+      foreach($uids as $uid) {\n+        drush_log(dt(\"Removed the %role role from uid %uid\", array('%role' => $role, '%uid' => $uid)), 'success');\n+      }\n+    }\n+    else {\n+      return drush_set_error(\"There is no role named: \\\"$role\\\"!\");\n+    }\n+  }\n+  else {\n+    return drush_set_error(\"Could not find any valid uids!\");\n+  }\n+}\n+\n+/**\n+ * Creates a new user account.\n+ */\n+function drush_user_create($name) {\n+  $mail = drush_get_option('mail');\n+  $pass = drush_get_option('password');\n+  $new_user = array(\n+    'name' => $name,\n+    'pass' => $pass,\n+    'mail' => $mail,\n+    'access' => '0',\n+    'status' => 1,\n+  );\n+  if (drush_drupal_major_version() >= 7) {\n+    $result = db_query(\"SELECT uid FROM {users} WHERE name = :name OR mail = :mail\", array(':name' => $name, ':mail' => 
$new_user['mail']));\n+  }\n+  else {\n+    $result = db_query(\"SELECT uid FROM {users} WHERE name = '%s' OR mail = '%s'\", $name, $new_user['mail']);\n+  }\n+  if (drush_db_result($result) === FALSE) {\n+    $new_user_object = user_save(NULL, $new_user, NULL);\n+    if ($new_user_object !== FALSE) {\n+      _drush_user_print_info($new_user_object->uid);\n+    }\n+    else {\n+      drush_set_error(\"Could not create a new user account with the name \" . $name . \"!\");\n+    }\n+  }\n+  else {\n+    drush_set_error(\"There is already a user account with the name \" . $name . \" or email address \" . $new_user['mail'] . \"!\");\n+  }\n+}\n+\n+/**\n+ * Cancels a user account.\n+ */\n+function drush_user_cancel($name) {\n+  if (drush_drupal_major_version() >= 7) {\n+    $result = db_query(\"SELECT uid FROM {users} WHERE name = :name\", array(':name' => $name));\n+  }\n+  else {\n+    $result = db_query(\"SELECT uid FROM {users} WHERE name = '%s'\", $name);\n+  }\n+  $uid = drush_db_result($result);\n+  if ($uid !== FALSE) {\n+    drush_print(\"Cancelling the user account with the following information:\");\n+    _drush_user_print_info($uid);\n+    if (drush_get_option('delete-content') && drush_drupal_major_version() >= 7) {\n+      drush_print(\"All content created by this user will be deleted!\");\n+    }\n+    if (drush_confirm('Cancel user account?: ')) {\n+      if (drush_drupal_major_version() >= 7) {\n+        if (drush_get_option('delete-content')) {\n+          user_cancel(array(), $uid, 'user_cancel_delete');\n+        }\n+        else {\n+          user_cancel(array(), $uid, 'user_cancel_reassign');\n+        }\n+        // I got the following technique here: https://drupal.org/node/638712\n+        $batch =& batch_get();\n+        $batch['progressive'] = FALSE;\n+        batch_process();\n+      }\n+      else {\n+        user_delete(array(), $uid);\n+      }\n+    }\n+  }\n+  else {\n+    drush_set_error(\"Could not find a user account with the name \" 
. $name . \"!\");\n+  }\n+}\n+\n+/**\n+ * Sets the password for the account with the given username\n+ */\n+function drush_user_password($name) {\n+  $pass = drush_get_option('password');\n+  if (empty($pass)) {\n+    return drush_set_error(\"You must specify a password!\");\n+  }\n+  if (drush_drupal_major_version() >= 7) {\n+    $result = db_query(\"SELECT uid, name FROM {users} WHERE name = :name\", array(':name' => $name));\n+    $userinfo = drush_db_fetch_object($result);\n+    if ($userinfo->name != $name) return drush_set_error(\"Could not find a user with the name '\" . $name . \"'!\");\n+    $user = user_load(drush_db_result($result));\n+  }\n+  else {\n+    $user = user_load(array('name' => $name));\n+  }\n+  if ($user !== FALSE) {\n+    $user_object = user_save($user, array('pass' => $pass));\n+    if ($user_object === FALSE) {\n+      drush_set_error(\"Could not change the password for the user account with the name \" . $name . \"!\");\n+    }\n+  }\n+  else {\n+    drush_set_error(\"The user account with the name \" . $name . \" could not be loaded!\");\n+  }\n+}\n+\n+/**\n+ * Print information about a given uid\n+ */\n+function _drush_user_print_info($uid) {\n+  if (drush_drupal_major_version() >= 7) {\n+    $userinfo = user_load($uid);\n+  }\n+  else {\n+    $userinfo = user_load(array('uid' => $uid));\n+  }\n+  if (drush_get_option('full')) {\n+    $userinfo = (array)$userinfo;\n+    $userinfo_pipe = array();\n+    unset($userinfo['data']);\n+    unset($userinfo['block']);\n+    unset($userinfo['form_build_id']);\n+    foreach($userinfo as $key => $val) {\n+      if (is_array($val)) {\n+        drush_print($key . ': ');\n+        drush_print_r($val);\n+        $userinfo_pipe[] = '\"' . implode(\",\", $val) . '\"';\n+      }\n+      else {\n+        if ($key === 'created' OR $key === 'access' OR $key === 'login') {\n+          drush_print($key . ': ' . 
format_date($val));\n+          $userinfo_pipe[] = $val;\n+        }\n+        else {\n+          drush_print($key . ': ' . $val);\n+          $userinfo_pipe[] = $val;\n+        }\n+      }\n+    }\n+    drush_print_pipe(implode(\",\", $userinfo_pipe));\n+    drush_print_pipe(\"\\n\");\n+  }\n+  else {\n+    $userinfo_short = array(\n+      'User ID' => $userinfo->uid,\n+      'User name' => $userinfo->name,\n+      'User mail' => $userinfo->mail,\n+    );\n+    $userinfo_short['User roles'] = implode(', ', $userinfo->roles);\n+    $userinfo->status ? $userinfo_short['User status'] = 'active' : $userinfo_short['User status'] = 'blocked';\n+    drush_print_table(drush_key_value_to_array_table($userinfo_short));\n+    drush_print_pipe(\"$userinfo->name, $userinfo->uid, $userinfo->mail, $userinfo->status, \\\"\" . implode(', ', $userinfo->roles) . \"\\\"\\n\");\n+  }\n+}\n+\n+/**\n+ * Get uid(s) from a uid, user name, or email address.\n+ * Returns a uid, or FALSE if none found.\n+ */\n+function _drush_user_get_uid($search) {\n+  // We use a DB query while looking for the uid to keep things speedy.\n+  $uids = array();\n+  if (is_numeric($search)) {\n+    if (drush_drupal_major_version() >= 7) {\n+      $uid_query = db_query(\"SELECT uid, name FROM {users} WHERE uid = :uid OR name = :name\", array(':uid' => $search, ':name' => $search));\n+    }\n+    else {\n+      $uid_query = db_query(\"SELECT uid, name FROM {users} WHERE uid = %d OR name = '%d'\", $search, $search);\n+    }\n+  }\n+  else {\n+    if (drush_drupal_major_version() >= 7) {\n+      $uid_query = db_query(\"SELECT uid, name FROM {users} WHERE mail = :mail OR name = :name\", array(':mail' => $search, ':name' => $search));\n+    }\n+    else {\n+      $uid_query = db_query(\"SELECT uid, name FROM {users} WHERE mail = '%s' OR name = '%s'\", $search, $search);\n+    }\n+  }\n+  while ($uid = drush_db_fetch_object($uid_query)) {\n+    $uids[$uid->uid] = $uid->name;\n+  }\n+  switch (count($uids)) {\n+    
case 0:\n+      return drush_set_error(\"Could not find a uid for the search term '\" . $search . \"'!\");\n+      break;\n+    case 1:\n+      return array_pop(array_keys($uids));\n+      break;\n+    default:\n+      drush_print('More than one user account was found for the search string \"' . $search . '\".');\n+      return(drush_choice($uids, 'Please choose a name:', '!value (uid=!key)'));\n+  }\n+}\n"
  },
  {
    "path": "aegir/patches/videola.patch",
    "content": "diff -urp a/videola.info b/videola.info\n--- a/videola.info\t2011-06-15 10:17:14.000000000 +0000\n+++ b/videola.info\t2011-07-01 13:15:48.000000000 +0000\n@@ -119,7 +119,7 @@ dependencies[] = jquery_ui\n dependencies[] = jquery_update\n dependencies[] = vertical_tabs\n dependencies[] = better_formats\n-dependencies[] = bueditor\n+;dependencies[] = bueditor\n \n \n ; Other\n@@ -132,7 +132,7 @@ dependencies[] = commentmail\n dependencies[] = context\n dependencies[] = context_ui\n dependencies[] = date\n-dependencies[] = devel\n+;dependencies[] = devel\n dependencies[] = diff\n dependencies[] = features\n dependencies[] = flag\n@@ -154,6 +154,11 @@ dependencies[] = strongarm\n dependencies[] = term_node_count\n dependencies[] = token\n \n+; o_contrib\n+dependencies[] = cache\n+dependencies[] = path_alias_cache\n+dependencies[] = filefield_nginx_progress\n+\n ; Videola\n dependencies[] = videola_core\n dependencies[] = videola_video\n@@ -197,6 +202,8 @@ users[superduper][status] = 1\n variables[site_name] = Videola\n variables[site_mail] = testing@testing.com\n variables[site_frontpage] = videola-front\n+variables[admin_theme] = rubik\n+variables[node_admin_theme] = 1\n \n variables[pathauto_node_pattern] = 0\n variables[pathauto_node_videola_video_pattern] = videos/[title-raw]\ndiff -urp a/videola.profile b/videola.profile\n--- a/videola.profile\t2011-06-15 10:17:14.000000000 +0000\n+++ b/videola.profile\t2011-07-01 12:47:56.000000000 +0000\n@@ -67,7 +67,12 @@ function videola_profile_tasks(&$task, $\n     profiler_profile_tasks(profiler_v2_load_config('videola'), $task, $url);\n     // Profiler stets the $task to 'profile-finished', in order to add our own\n     // tasks we need to override that and set it to our task.\n-    $task = 'videola';\n+    if (defined('DRUSH_BASE_PATH')) {\n+      $task = 'profile-finished'; // Required to support Aegir.\n+    }\n+    else {\n+      $task = 'videola';\n+    }\n   }\n \n   if ($task == 'videola') {\n"
  },
  {
    "path": "aegir/patches/views-853864_2.patch",
    "content": "Index: includes/cache.inc\n===================================================================\nRCS file: /cvs/drupal-contrib/contributions/modules/views/includes/cache.inc,v\nretrieving revision 1.25.2.4\ndiff -u -p -r1.25.2.4 cache.inc\n--- includes/cache.inc\t12 Mar 2010 01:51:47 -0000\t1.25.2.4\n+++ includes/cache.inc\t4 Jan 2011 12:16:36 -0000\n@@ -100,20 +100,64 @@ function _views_discover_default_views()\n   static $cache = NULL;\n \n   if (!isset($cache)) {\n+    $lock_name = __FUNCTION__;\n     $index = views_cache_get('views_default_views_index', TRUE);\n \n+    $rebuild_cache = TRUE;\n     // Retrieve each cached default view\n     if (isset($index->data) && is_array($index->data)) {\n+      $rebuild_cache = FALSE;\n       $cache = array();\n       foreach ($index->data as $view_name) {\n-        $data = views_cache_get('views_default:' . $view_name, TRUE);\n-        if (isset($data->data) && is_object($data->data)) {\n-          $cache[$view_name] = $data->data;\n+        $cid = 'views_default:' . 
$view_name;\n+        if ($cached = views_cache_get($cid, TRUE)) {\n+          $cache[$view_name] = $cached->data;\n+        }\n+        else {\n+          // As soon as there is a cache miss on one item, try to acquire a\n+          // lock.\n+          if (!$lock_acquired = lock_acquire($lock_name)) {\n+            lock_wait($lock_name);\n+            // After waiting, try to fetch the default view from cache again.\n+            // If available another process may have rebuilt it, so do not\n+            // attempt to rebuild the cache.\n+            if ($cached = views_cache_get($cid, TRUE)) {\n+              $cache[$view_name] = $cached->data;\n+            }\n+            // If the item is still not in the cache, try to acquire the lock\n+            // again and rebuild the cache.\n+            else {\n+              $lock_acquired = lock_acquire($lock_name);\n+              $rebuild_cache = TRUE;\n+              break;\n+            }\n+          }\n+          // If the lock was acquired, always rebuild the cache.\n+          else {\n+            $rebuild_cache = TRUE;\n+            break;\n+          }\n         }\n       }\n     }\n-    // If missing index, rebuild the cache\n     else {\n+      if (!$lock_acquired = lock_acquire($lock_name)) {\n+        lock_wait($lock_name);\n+        if ($cached = views_cache_get('views_default_views_index', TRUE)) {\n+          // Another process has rebuilt the cache while we waited. 
Re-run the\n+          // function to avoid a full cache rebuild.\n+          $cache = _views_discover_default_views();\n+          $rebuild_cache = FALSE;\n+        }\n+        else {\n+          // Try to re-acquire the lock and re-build the cache either way.\n+          lock_acquire($lock_name);\n+          $rebuild_cache = TRUE;\n+        }\n+      }\n+    }\n+    // Rebuild the cache if necessary.\n+    if ($rebuild_cache) {\n       views_include_default_views();\n       $cache = array();\n \n@@ -139,13 +183,16 @@ function _views_discover_default_views()\n       // Allow modules to modify default views before they are cached.\n       drupal_alter('views_default_views', $cache);\n \n-      // Cache the index\n-      $index = array_keys($cache);\n-      views_cache_set('views_default_views_index', $index, TRUE);\n-\n-      // Cache each view\n-      foreach ($cache as $name => $view) {\n-        views_cache_set('views_default:' . $name, $view, TRUE);\n+      if (!empty($lock_acquired)) {\n+        // Cache the index\n+        $index = array_keys($cache);\n+        views_cache_set('views_default_views_index', $index, TRUE);\n+\n+        // Cache each view\n+        foreach ($cache as $name => $view) {\n+          views_cache_set('views_default:' . $name, $view, TRUE);\n+        }\n+        lock_release($lock_name);\n       }\n     }\n   }\n"
  },
  {
    "path": "aegir/patches/views-exposed-sorts-2037469-1.patch",
    "content": "diff --git a/plugins/views_plugin_exposed_form.inc b/plugins/views_plugin_exposed_form.inc\nindex 5d54600..1d19ed1 100644\n--- a/plugins/views_plugin_exposed_form.inc\n+++ b/plugins/views_plugin_exposed_form.inc\n@@ -220,17 +220,24 @@ class views_plugin_exposed_form extends views_plugin {\n     }\n \n     if (count($exposed_sorts)) {\n+      if (isset($form_state['input']['sort_by']) && isset($this->view->sort[$form_state['input']['sort_by']])) {\n+        $default_sort_order = $form_state['input']['sort_by'];\n+      } else {\n+        $first_sort = reset($this->view->sort);\n+        $default_sort_order = $first_sort->options['order'];\n+      }\n       $form['sort_by'] = array(\n         '#type' => 'select',\n         '#options' => $exposed_sorts,\n         '#title' => $this->options['exposed_sorts_label'],\n+        '#default_value' => $default_sort_order,\n       );\n       $sort_order = array(\n         'ASC' => $this->options['sort_asc_label'],\n         'DESC' => $this->options['sort_desc_label'],\n       );\n       if (isset($form_state['input']['sort_by']) && isset($this->view->sort[$form_state['input']['sort_by']])) {\n-        $default_sort_order = $this->view->sort[$form_state['input']['sort_by']]->options['order'];\n+        $default_sort_order = $form_state['input']['sort_order'];\n       } else {\n         $first_sort = reset($this->view->sort);\n         $default_sort_order = $first_sort->options['order'];\n"
  },
  {
    "path": "aegir/patches/views-revert-broken-filter-or-groups-1766338-7.patch",
    "content": "diff --git a/modules/field/views_handler_filter_field_list.inc b/modules/field/views_handler_filter_field_list.inc\nindex 440d55b..b955e70 100644\n--- a/modules/field/views_handler_filter_field_list.inc\n+++ b/modules/field/views_handler_filter_field_list.inc\n@@ -10,21 +10,7 @@\n  *\n  * @ingroup views_filter_handlers\n  */\n-class views_handler_filter_field_list extends views_handler_filter_many_to_one {\n-\n-  function init(&$view, &$options) {\n-    parent::init($view, $options);\n-    // Migrate the settings from the old filter_in_operator values to filter_many_to_one.\n-    if ($this->options['operator'] == 'in') {\n-      $this->options['operator'] = 'or';\n-    }\n-    if ($this->options['operator'] == 'not in') {\n-      $this->options['operator'] = 'not';\n-    }\n-    $this->operator = $this->options['operator'];\n-  }\n-\n-\n+class views_handler_filter_field_list extends views_handler_filter_in_operator {\n   function get_value_options() {\n     $field = field_info_field($this->definition['field_name']);\n     $this->value_options = list_allowed_values($field);\n"
  },
  {
    "path": "aegir/patches/views-unpack_options-cache-6.2-51.patch",
    "content": "diff --git a/plugins/views_plugin_display.inc b/plugins/views_plugin_display.inc\nindex 3c47037..a66c3c0 100644\n--- a/plugins/views_plugin_display.inc\n+++ b/plugins/views_plugin_display.inc\n@@ -39,7 +39,28 @@ class views_plugin_display extends views_plugin {\n       unset($options['defaults']);\n     }\n \n-    $this->unpack_options($this->options, $options);\n+    // Cache for unpack_options, but not if we are in the ui.\n+    static $unpack_options = array();\n+    if (empty($view->editing)) {\n+      $cid = 'unpack_options:' . md5(serialize(array($this->options, $options)));\n+      if (empty($unpack_options[$cid])) {\n+        $cache = views_cache_get($cid, TRUE);\n+        if (!empty($cache->data)) {\n+          $this->options = $cache->data;\n+        }\n+        else {\n+          $this->unpack_options($this->options, $options);\n+          views_cache_set($cid, $this->options, TRUE);\n+        }\n+        $unpack_options[$cid] = $this->options;\n+      }\n+      else {\n+        $this->options = $unpack_options[$cid];\n+      }\n+    }\n+    else {\n+      $this->unpack_options($this->options, $options);\n+    }\n   }\n \n   function destroy() {\n"
  },
  {
    "path": "aegir/scripts/AegirSetupA.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\n\n###\n### Helper variables\n###\n_bldPth=\"/opt/tmp/boa\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_filIncO=\"octopus.sh.cnf\"\n_gCb=\"git clone --branch\"\n_gitHub=\"https://github.com/omega8cc\"\n_gitLab=\"https://gitlab.com/omega8cc\"\n_libFnc=\"${_bldPth}/lib/functions\"\n_tocIncO=\"${_filIncO}.$1\"\n_vBs=\"/var/backups\"\n_vSet=\"variable-set --always-set\"\nexport _tRee=dev\n\n\n###\n### Panic on missing include\n###\n_panic_exit() {\n  echo\n  echo \" EXIT: Required lib file not available?\"\n  echo \" EXIT: $1\"\n  echo \" EXIT: Cannot continue\"\n  echo \" EXIT: Bye (0)\"\n  echo\n  touch /opt/tmp/status-AegirSetupA-FAIL\n  exit 1\n}\n\n\n###\n### Include helper functions\n###\nif [ -e \"${_vBs}/${_tocIncO}\" ]; then\n  source \"${_vBs}/${_tocIncO}\"\n  _tInc=\"${_vBs}/${_tocIncO}\"\nelif [ -e 
\"${_vBs}/${_filIncO}\" ]; then\n  source \"${_vBs}/${_filIncO}\"\n  _tInc=\"${_vBs}/${_filIncO}\"\nelse\n  _panic_exit \"${_vBs}/${_filIncO}\"\nfi\n\n\n###\n### Env debugging\n###\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  echo DEBUG AegirSetupA\n  echo DEBUG AegirSetupA\n  echo Effective _USER is $1\n  [ -r \"${_vBs}/${_tocIncO}\" ] && echo Effective _tocIncO is ${_tocIncO}\n  echo DEBUG AegirSetupA\n  echo DEBUG AegirSetupA\n  env\n  echo DEBUG AegirSetupA\n  echo DEBUG AegirSetupA\nfi\n\n\n###\n### More helper variables\n###\nexport _urlDev=\"http://${_USE_MIR}/dev\"\nexport _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n\n\n###\n### Include shared functions\n###\n_FL=\"helper satellite\"\nfor f in ${_FL}; do\n  [ -r \"${_libFnc}/${f}.sh.inc\" ] || _panic_exit \"${f}\"\n  source \"${_libFnc}/${f}.sh.inc\"\ndone\n\n\n###\n### Local variables\n###\nif [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n  _THIS_DB_HOST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\nfi\n_DIST_INSTALL=NO\n_STATUS=INIT\n_LOCAL_STATUS=\"${_STATUS}\"\n_ROOT=\"/data/disk/${_USER}\"\n_HM_ROOT=\"${_ROOT}/aegir/distro/${_HM_DISTRO}\"\n_DISTRO_ROOT=\"${_ROOT}/distro/${_DISTRO}\"\n_D=\"/data/all\"\n_SRCDIR=\"/opt/tmp/files\"\n
if [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n  && [ -x \"/opt/php85/bin/php\" ]; then\n  _T_CLI=/opt/php85/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n  && [ -x \"/opt/php84/bin/php\" ]; then\n  _T_CLI=/opt/php84/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n  && [ -x \"/opt/php83/bin/php\" ]; then\n  _T_CLI=/opt/php83/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.2\" ] \\\n  && [ -x \"/opt/php82/bin/php\" ]; then\n  _T_CLI=/opt/php82/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.1\" ] \\\n  && [ -x \"/opt/php81/bin/php\" ]; then\n  _T_CLI=/opt/php81/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.0\" ] \\\n  && [ -x \"/opt/php80/bin/php\" ]; then\n  _T_CLI=/opt/php80/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.4\" ] \\\n  && [ -x \"/opt/php74/bin/php\" ]; then\n  _T_CLI=/opt/php74/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.3\" ] \\\n  && [ -x \"/opt/php73/bin/php\" ]; then\n  _T_CLI=/opt/php73/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.2\" ] \\\n  && [ -x \"/opt/php72/bin/php\" ]; then\n  _T_CLI=/opt/php72/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.1\" ] \\\n  && [ -x \"/opt/php71/bin/php\" ]; then\n  _T_CLI=/opt/php71/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.0\" ] \\\n  && [ -x \"/opt/php70/bin/php\" ]; then\n  _T_CLI=/opt/php70/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"5.6\" ] \\\n  && [ -x \"/opt/php56/bin/php\" ]; then\n  _T_CLI=/opt/php56/bin\nfi\n_DRUSHCMD=\"${_T_CLI}/php ${_ROOT}/tools/drush/drush.php\"\nPATH=${_T_CLI}:/usr/local/bin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\nSHELL=/bin/bash\n\n\n###\n### Status check and update on the fly\n###\nif [ -e \"${_ROOT}/aegir.sh\" ]; then\n  _STATUS=UPGRADE\n  cd ${_ROOT}\n  rm -f ${_ROOT}/AegirSetupC.sh.txt*\n  rm -f ${_ROOT}/AegirSetupB.sh.txt*\n  _LOCAL_STATUS=\"${_STATUS}\"\nfi\n\n\n###\n### User check\n###\nif [ \"$(id -u)\" -eq 0 ]; then\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Ægir automated install script part A\"\n  fi\nelse\n  _msg \"${_STATUS} A: FATAL ERROR: This script should be run as a root user\"\n  _msg \"${_STATUS} A: FATAL ERROR: Aborting AegirSetupA installer NOW!\"\n  touch /opt/tmp/status-AegirSetupA-FAIL\n  exit 1\nfi\n\n\n###\n### Run key pre/child/post procedures\n###\n_satellite_hot_sauce_check\n_satellite_add_user_dirs\n_satellite_if_add_snail_access\n_satellite_prepare_child_scripts\n_satellite_run_pre_install\n# Percona is used regardless of _OS_CODE, so no branch is needed here\n_DB_SERVER=Percona\nif [ \"$(boa info | grep -c ${_DB_SERVER})\" -lt 3 ] || [ ! -e \"/usr/sbin/csf\" ]; then\n  if [ ! -e \"/opt/tmp/make_local/hostmaster/hostmaster.make\" ] \\\n    || [ ! -e \"/opt/tmp/make_local/hosting/server/hosting_server.install\" ] \\\n    || [ ! 
-e \"/opt/tmp/make_local/drupal/includes/database/mysql/schema.inc\" ]; then\n    _satellite_download_for_local_build\n  fi\nelse\n  _satellite_download_for_local_build\nfi\n_satellite_run_child_b\n\n\n###\n### Run accelerated tasks queue\n###\nif [ -e \"/var/xdrago/run-${_USER}\" ]; then\n  _msg \"${_STATUS} A: Ægir accelerated task queue will run for 60 seconds...\"\n  su -s /bin/bash - ${_USER} -c \"drush8 @hostmaster ${_vSet} hosting_queue_tasks_items 3\" &> /dev/null\n  _msg \"${_STATUS} A: Please wait...\"\n  for _iteration in {1..10}; do\n    nohup /var/xdrago/run-${_USER} > /dev/null 2>&1 &\n    sleep 5\n  done\nfi\nsu -s /bin/bash - ${_USER} -c \"drush8 @hostmaster ${_vSet} hosting_queue_tasks_items 1\" &> /dev/null\n\n\n###\n### Run more pre/child/post procedures\n###\n_satellite_if_create_local_bin\n_satellite_run_post_install\n_satellite_set_permissions_for_all\n_satellite_run_child_c\n_satellite_child_scripts_cleanup\n_satellite_if_add_ftps_lshell_access\n_satellite_if_add_update_user_symlinks\n_satellite_if_add_update_user_dot_dirs\n_satellite_if_read_create_pass_txt\n_satellite_if_add_update_user_platforms_symlinks\n_satellite_if_add_update_backend_user_dirs_files_clean\n[ ! -e \"/root/.silent.update.cnf\" ] && _satellite_prepare_setup_email_tpl\n[ ! -e \"/root/.silent.update.cnf\" ] && _satellite_send_welcome_email\n_satellite_letsencrypt_vhost_setup\n_satellite_log_update\n_satellite_batch_cleanup\n_satellite_display_url_finalize\n\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "aegir/scripts/AegirSetupB.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\n\n###\n### Helper variables\n###\n_bldPth=\"/opt/tmp/boa\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_filIncO=\"octopus.sh.cnf\"\n_gCb=\"git clone --branch\"\n_gitHub=\"https://github.com/omega8cc\"\n_gitLab=\"https://gitlab.com/omega8cc\"\n_libFnc=\"${_bldPth}/lib/functions\"\n_tocIncO=\"${_filIncO}.$1\"\n_vBs=\"/var/backups\"\n_vSet=\"variable-set --always-set\"\nexport _tRee=dev\n\n\n###\n### Panic on missing include\n###\n_panic_exit() {\n  echo\n  echo \" EXIT: Required lib file not available?\"\n  echo \" EXIT: $1\"\n  echo \" EXIT: Cannot continue\"\n  echo \" EXIT: Bye (0)\"\n  echo\n  touch /opt/tmp/status-AegirSetupB-FAIL\n  exit 1\n}\n\n\n###\n### Include helper functions\n###\nif [ -e \"${_vBs}/${_tocIncO}\" ]; then\n  source \"${_vBs}/${_tocIncO}\"\n  _tInc=\"${_vBs}/${_tocIncO}\"\nelif [ -e 
\"${_vBs}/${_filIncO}\" ]; then\n  source \"${_vBs}/${_filIncO}\"\n  _tInc=\"${_vBs}/${_filIncO}\"\nelse\n  _panic_exit \"${_vBs}/${_filIncO}\"\nfi\n\n\n###\n### Env debugging\n###\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  echo DEBUG AegirSetupB\n  echo DEBUG AegirSetupB\n  echo Effective _USER is $1\n  [ -r \"${_vBs}/${_tocIncO}\" ] && echo Effective _tocIncO is ${_tocIncO}\n  echo DEBUG AegirSetupB\n  echo DEBUG AegirSetupB\n  env\n  echo DEBUG AegirSetupB\n  echo DEBUG AegirSetupB\nfi\n\n\n###\n### More helper variables\n###\nexport _urlDev=\"http://${_USE_MIR}/dev\"\nexport _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n\n\n###\n### Include shared functions\n###\n_FL=\"helper satellite\"\nfor f in ${_FL}; do\n  [ -r \"${_libFnc}/${f}.sh.inc\" ] || _panic_exit \"${f}\"\n  source \"${_libFnc}/${f}.sh.inc\"\ndone\n\n\n###\n### Local variables\n###\nif [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n  _THIS_DB_HOST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\nfi\n_DIST_INSTALL=NO\n_STATUS=INIT\n_LOCAL_STATUS=\"${_STATUS}\"\n_ROOT=\"/data/disk/${_USER}\"\n_HM_ROOT=\"${_ROOT}/aegir/distro/${_HM_DISTRO}\"\n_DISTRO_ROOT=\"${_ROOT}/distro/${_DISTRO}\"\n_PREV_HM_ROOT=\"${_ROOT}/aegir/distro/${_LAST_HMR}\"\n_D=\"/data/all\"\n_SRCDIR=\"/opt/tmp/files\"\nif [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n  && [ -x \"/opt/php85/bin/php\" ]; then\n  _T_CLI=/opt/php85/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n  && [ -x \"/opt/php84/bin/php\" ]; then\n  _T_CLI=/opt/php84/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n  && [ -x \"/opt/php83/bin/php\" ]; then\n  _T_CLI=/opt/php83/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.2\" ] \\\n  && [ -x \"/opt/php82/bin/php\" ]; then\n  _T_CLI=/opt/php82/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.1\" ] \\\n  && [ -x \"/opt/php81/bin/php\" ]; then\n  _T_CLI=/opt/php81/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.0\" ] \\\n  && [ -x \"/opt/php80/bin/php\" ]; then\n  _T_CLI=/opt/php80/bin\nelif [ 
\"${_PHP_CLI_VERSION}\" = \"7.4\" ] \\\n  && [ -x \"/opt/php74/bin/php\" ]; then\n  _T_CLI=/opt/php74/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.3\" ] \\\n  && [ -x \"/opt/php73/bin/php\" ]; then\n  _T_CLI=/opt/php73/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.2\" ] \\\n  && [ -x \"/opt/php72/bin/php\" ]; then\n  _T_CLI=/opt/php72/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.1\" ] \\\n  && [ -x \"/opt/php71/bin/php\" ]; then\n  _T_CLI=/opt/php71/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.0\" ] \\\n  && [ -x \"/opt/php70/bin/php\" ]; then\n  _T_CLI=/opt/php70/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"5.6\" ] \\\n  && [ -x \"/opt/php56/bin/php\" ]; then\n  _T_CLI=/opt/php56/bin\nfi\n_DRUSHCMD=\"${_T_CLI}/php ${_ROOT}/tools/drush/drush.php\"\nPATH=${_T_CLI}:/usr/local/bin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\nSHELL=/bin/bash\n\n\n###\n### Status check and update on the fly\n###\nif [ -e \"${_ROOT}/aegir.sh\" ]; then\n  _STATUS=UPGRADE\n  cd ${_ROOT}\nfi\n\n\n###\n### User check\n###\nif [ \"$(id -u)\" -eq 0 ]; then\n  _msg \"${_STATUS} B: FATAL ERROR: This script should be run as a non-root user\"\n  _msg \"${_STATUS} B: FATAL ERROR: Aborting AegirSetupB installer NOW!\"\n  touch /opt/tmp/status-AegirSetupB-FAIL\n  exit 1\nelse\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} B: Ægir automated install script part B\"\n  fi\nfi\n\n\n###\n### Run all child B procedures\n###\n_satellite_child_b_prepare_dirs_permissions\n_satellite_child_b_install_drush\n_satellite_child_b_drush_xts_cleanup\n_satellite_child_b_drush_xts_install\n_satellite_child_b_drush_test\n_satellite_child_b_aegir_build\n_satellite_child_b_aegir_health_check\n_satellite_child_b_letsencrypt\n_satellite_child_b_aegir_ui_enhance\n_satellite_child_b_vhosts_hotfix\n_satellite_child_b_symlink_global_inc\n_satellite_child_b_redis_enable_finalize\n\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc 
www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "aegir/scripts/AegirSetupC.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\n\n###\n### Helper variables\n###\nexport _bldPth=\"/opt/tmp/boa\"\nexport _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\nexport _wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\nexport _filIncO=\"octopus.sh.cnf\"\nexport _gCb=\"git clone --branch\"\nexport _gitHub=\"https://github.com/omega8cc\"\nexport _gitLab=\"https://gitlab.com/omega8cc\"\nexport _libFnc=\"${_bldPth}/lib/functions\"\nexport _tocIncO=\"${_filIncO}.$1\"\nexport _vBs=\"/var/backups\"\nexport _vSet=\"variable-set --always-set\"\nexport _tRee=dev\n\n\n###\n### Panic on missing include\n###\n_panic_exit() {\n  echo\n  echo \" EXIT: Required lib file not available?\"\n  echo \" EXIT: $1\"\n  echo \" EXIT: Cannot continue\"\n  echo \" EXIT: Bye (0)\"\n  echo\n  touch /opt/tmp/status-AegirSetupC-FAIL\n  exit 1\n}\n\n\n###\n### Include helper functions\n###\nif [ -e \"${_vBs}/${_tocIncO}\" ]; then\n  source 
\"${_vBs}/${_tocIncO}\"\n  _tInc=\"${_vBs}/${_tocIncO}\"\nelif [ -e \"${_vBs}/${_filIncO}\" ]; then\n  source \"${_vBs}/${_filIncO}\"\n  _tInc=\"${_vBs}/${_filIncO}\"\nelse\n  _panic_exit \"${_vBs}/${_filIncO}\"\nfi\n\n\n###\n### Env debugging\n###\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  echo DEBUG AegirSetupC\n  echo DEBUG AegirSetupC\n  echo Effective _USER is $1\n  [ -r \"${_vBs}/${_tocIncO}\" ] && echo Effective _tocIncO is ${_tocIncO}\n  echo DEBUG AegirSetupC\n  echo DEBUG AegirSetupC\n  env\n  echo DEBUG AegirSetupC\n  echo DEBUG AegirSetupC\nfi\n\n\n###\n### More helper variables\n###\nexport _urlDev=\"http://${_USE_MIR}/dev\"\nexport _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n\n\n###\n### Include shared functions\n###\n_FL=\"helper satellite\"\nfor f in ${_FL}; do\n  [ -r \"${_libFnc}/${f}.sh.inc\" ] || _panic_exit \"${f}\"\n  source \"${_libFnc}/${f}.sh.inc\"\ndone\n\n\n###\n### Local variables\n###\nif [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n  export _THIS_DB_HOST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\nfi\nexport _USE_AEGIR_VER=SRC\nexport _T_BUILD=SRC\nexport _DIST_INSTALL=NO\nexport _STATUS=INIT\nexport _USE_DISTRO_CORE=NO\nexport _LOCAL_STATUS=\"${_STATUS}\"\nexport _ROOT=\"/data/disk/${_USER}\"\nexport _HM_ROOT=\"${_ROOT}/aegir/distro/${_HM_DISTRO}\"\nexport _DISTRO_ROOT=\"${_ROOT}/distro/${_DISTRO}\"\nexport _PREV_HM_ROOT=\"${_ROOT}/aegir/distro/${_LAST_HMR}\"\nexport _D=\"/data/all\"\nexport _SRCDIR=\"/opt/tmp/files\"\n\nif [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n  && [ -x \"/opt/php85/bin/php\" ]; then\n  _T_CLI=/opt/php85/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n  && [ -x \"/opt/php84/bin/php\" ]; then\n  _T_CLI=/opt/php84/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n  && [ -x \"/opt/php83/bin/php\" ]; then\n  _T_CLI=/opt/php83/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.2\" ] \\\n  && [ -x \"/opt/php82/bin/php\" ]; then\n  _T_CLI=/opt/php82/bin\nelif [ 
\"${_PHP_CLI_VERSION}\" = \"8.1\" ] \\\n  && [ -x \"/opt/php81/bin/php\" ]; then\n  _T_CLI=/opt/php81/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.0\" ] \\\n  && [ -x \"/opt/php80/bin/php\" ]; then\n  _T_CLI=/opt/php80/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.4\" ] \\\n  && [ -x \"/opt/php74/bin/php\" ]; then\n  _T_CLI=/opt/php74/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.3\" ] \\\n  && [ -x \"/opt/php73/bin/php\" ]; then\n  _T_CLI=/opt/php73/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.2\" ] \\\n  && [ -x \"/opt/php72/bin/php\" ]; then\n  _T_CLI=/opt/php72/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.1\" ] \\\n  && [ -x \"/opt/php71/bin/php\" ]; then\n  _T_CLI=/opt/php71/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"7.0\" ] \\\n  && [ -x \"/opt/php70/bin/php\" ]; then\n  _T_CLI=/opt/php70/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"5.6\" ] \\\n  && [ -x \"/opt/php56/bin/php\" ]; then\n  _T_CLI=/opt/php56/bin\nfi\nexport _DRUSHCMD=\"${_T_CLI}/php ${_ROOT}/tools/drush/drush.php\"\n#\nexport PATH=${_T_CLI}:/usr/local/bin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\nexport SHELL=/bin/bash\n#\nexport _pthPch=\"/opt/tmp/boa/aegir/patches\"\nexport _urlDrp=\"http://ftp.drupal.org/files/projects\"\nexport _urlPrt=\"https://drupal.org/project\"\n#\nexport _noT=\"not installed\"\nexport _yOk=\"installation in progress...\"\n\n\n###---### Checking status.\n#\nif [ -e \"${_ROOT}/log/setupmail.txt\" ] \\\n  || [ -e \"${_ROOT}/log/legacy_setupmail.txt\" ] \\\n  || [ -e \"${_ROOT}/log/latest_setupmail.txt\" ]; then\n  _STATUS=UPGRADE\n  cd ${_ROOT}\nfi\n\n\n###---### User check.\n#\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  _msg \"${_STATUS} C: Ægir automated install script part C\"\nfi\nif [ \"$(id -u)\" -eq 0 ]; then\n  _msg \"${_STATUS} C: FATAL ERROR: This script should be run as a non-root user\"\n  _msg \"${_STATUS} C: FATAL ERROR: Aborting AegirSetupC installer NOW!\"\n  touch /opt/tmp/status-AegirSetupC-FAIL\n  exit 1\nfi\n\n\n###---### Hot Sauce check.\n#\nif [ \"${_HOT_SAUCE}\" = 
\"NO\" ]; then\n  export _CORE=\"/data/all/${_LAST_ALL}\"\n  export _THIS_CORE=\"${_LAST_ALL}\"\n  if [ \"${_USE_CURRENT}\" = \"YES\" ] \\\n    && [ -e \"/data/all/000/core-v-${_SMALLCORE6_V}.txt\" ] \\\n    && [ -e \"/data/all/000/core-v-${_SMALLCORE7_V}.txt\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} C: Shared platforms code v.${_LAST_ALL} will be used\"\n    fi\n  elif [ \"${_USE_CURRENT}\" = \"NO\" ] \\\n    || [ ! -e \"/data/all/000/core-v-${_SMALLCORE6_V}.txt\" ] \\\n    || [ ! -e \"/data/all/000/core-v-${_SMALLCORE7_V}.txt\" ]; then\n    export _CORE=\"/data/all/${_ALL_DISTRO}\"\n    export _THIS_CORE=\"${_ALL_DISTRO}\"\n    _msg \"${_STATUS} C: Shared platforms code v.${_ALL_DISTRO} (new) will be created\"\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} C: Shared platforms code v.${_LAST_ALL} will be used\"\n    fi\n  fi\nelse\n  export _CORE=\"/data/all/${_ALL_DISTRO}\"\n  export _THIS_CORE=\"${_ALL_DISTRO}\"\n  _msg \"${_STATUS} C: Shared platforms code v.${_ALL_DISTRO} (new) will be created\"\nfi\nexport _D6_CORE_DIR=\"/data/all/000/core/${_DRUPAL6}\"\nexport _D7_CORE_DIR=\"/data/all/000/core/${_DRUPAL7}\"\n\nexport _pthDst=\"${_ROOT}/distro/${_THIS_CORE}\"\n\nmkdir -p ${_pthDst}\nchmod 0711 ${_ROOT}/distro &> /dev/null\nchmod 0711 ${_pthDst} &> /dev/null\n\n###---###\nexport _ALLOW_ALL=YES\nif [ \"${_CLIENT_CORES}\" -lt 1 ]; then\n  _ALLOW_ALL=NO\n  _D_8_ALLOW=NO\nfi\n\n\n###\n###---### Functions.\n###\n\n#\n# Prepare for Save & Verify Platforms.\n_prepare_for_save_verify_platforms() {\n  _LOCAL_STATUS=\"NOT_SET\"\n  if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n    _THIS_HM=\"${_HM_ROOT}/sites/${_DOMAIN}\"\n  else\n    if [ -e \"${_ROOT}/.drush/hostmaster.alias.drushrc.php\" ]; then\n      _THIS_HM=$(cat ${_ROOT}/.drush/hostmaster.alias.drushrc.php \\\n        | grep 'site_path' \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' 2>&1)\n      _THIS_HM=$(echo ${_THIS_HM} | sed 
\"s/[\\,']//g\" 2>&1)\n    else\n      _THIS_HM=\"${_HM_ROOT}/sites/${_DOMAIN}\"\n    fi\n  fi\n  if [ ! -d \"${_THIS_HM}\" ]; then\n    _THIS_HM=\"${_PREV_HM_ROOT}/sites/${_DOMAIN}\"\n  fi\n  if [ -d \"${_THIS_HM}\" ] && [ ! -e \"${_THIS_HM}/make_platform.php\" ]; then\n    cp -af /opt/tmp/boa/aegir/helpers/make_platform.php.txt ${_THIS_HM}/make_platform.php\n  fi\n  if [ \"${_SERIES_RESULT}\" = \"OK\" ]; then\n    export _drhSrc=\"sites/all/drush/drushrc.php\"\n  else\n    export _drhSrc=\"drushrc.php\"\n  fi\n}\n\n#\n# Save & Verify Platform\n_save_verify_this_platform() {\n\n  # Declare 'params' as local to the function\n  local -A params=(\n    [dist]=\"$1\"\n    [version]=\"$2\"\n    [core_version]=\"$3\"\n    [description]=\"$4\"\n    [profile_path]=\"$5\"\n    [profile_name]=\"$6\"\n    [web_dir]=\"$7\"\n    [php_version]=\"$8\"\n    [url]=\"$9\"\n  )\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _save_verify_this_platform\"\n    for key in \"${!params[@]}\"; do\n      _msg \"DEBUG: _save_verify_this_platform ${key} is '${params[$key]}'\"\n    done\n  fi\n\n  # _save_verify_this_platform \"${1}\" \"${_SHRD_PLNAME}\" \"${3}\" \"${params[description]} [P.${_THIS_CORE}]\" \"${5}\" \"${6}\" \"${7}\"\n  # _save_verify_this_platform \"${1}\" \"${2}\" \"${3}\" \"${_DIST_PLNAME}\" \"${5}\" \"${6}\" \"${7}\"\n\n  # _save_verify_this_platform 'OCS' '2.2.1' '10.3.6' 'openculturas-2.2.1-10.3.6' 'contrib/' 'openculturas' '/web'\n  # make_platform 'ezcontent-2.2.15-10.3.6' ezcontent /data/disk/o8/distro/001/ezcontent-2.2.15-10.3.6/web\n  # make_platform 'commerce_base-2.40-10.1.8' commerce_base /data/disk/o8/distro/001/commerce_base-2.40-10.1.8/web\n  _make_p=\"${_pthDst}/${params[profile_name]}-${params[version]}-${params[core_version]}${params[web_dir]}\"\n\n  # _save_verify_this_platform 'UC7' '3.13' '7.105.1' 'ubercart-3.13-7.105.1' '/' 'minimal' ''\n  _make_u=\"${_pthDst}/ubercart-${params[version]}-${params[core_version]}${params[web_dir]}\"\n\n 
 # _save_verify_this_platform 'DX3' '10.3.6' '10.3.6' 'drupal-10.3.6-dev' '/' 'standard' '/web'\n  # /data/disk/o8/distro/001/drupal-10.3.6-prod\n  _make_x=\"${_pthDst}/${params[version]}${params[web_dir]}\"\n\n  if [ -d \"${_make_p}\" ]; then\n    if [ ! -e \"${_make_p}/${_drhSrc}\" ]; then\n      if [ -d \"${_THIS_HM}\" ] && [ -e \"${_THIS_HM}/make_platform.php\" ]; then\n        cd ${_THIS_HM}\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: ${_DRUSHCMD} php-script make_platform '${4}' ${6} ${_make_p}\"\n        ${_DRUSHCMD} php-script make_platform \"${4}\" ${6} ${_make_p} &> /dev/null\n      fi\n    fi\n  elif [ -d \"${_make_u}\" ]; then\n    if [ ! -e \"${_make_u}/${_drhSrc}\" ]; then\n      if [ -d \"${_THIS_HM}\" ] && [ -e \"${_THIS_HM}/make_platform.php\" ]; then\n        cd ${_THIS_HM}\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: ${_DRUSHCMD} php-script make_platform '${4}' ${6} ${_make_u}\"\n        ${_DRUSHCMD} php-script make_platform \"${4}\" ${6} ${_make_u} &> /dev/null\n      fi\n    fi\n  elif [ -d \"${_make_x}\" ]; then\n    if [ ! -e \"${_make_x}/${_drhSrc}\" ]; then\n      if [ -d \"${_THIS_HM}\" ] && [ -e \"${_THIS_HM}/make_platform.php\" ]; then\n        cd ${_THIS_HM}\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: ${_DRUSHCMD} php-script make_platform '${4}' ${6} ${_make_x}\"\n        ${_DRUSHCMD} php-script make_platform \"${4}\" ${6} ${_make_x} &> /dev/null\n      fi\n    fi\n  fi\n}\n\n#\n# Download and extract from core archive.\n_get_core_ext() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"http://${_USE_MIR}/core/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download http://${_USE_MIR}/core/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Download and extract from distro archive.\n_get_distro_ext() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"http://${_USE_MIR}/distro/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download http://${_USE_MIR}/distro/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Create standard directories.\n_fix_dirs_files() {\n  rm -f ./*.txt\n  rm -f ./modules/*.txt\n  rm -f ./themes/*.txt\n  rm -rf ./modules/cookie_cache_bypass\n  mkdir -p ./sites/default/files\n  mkdir -p ./cache/{normal,perm}\n  chmod -R 777 ./cache\n  if [ -e \"./sites/default/default.settings.php\" ]; then\n    cp -af ./sites/default/default.settings.php ./sites/default/settings.php\n  fi\n  chmod a+rw ./sites/default/settings.php\n  chmod a+rwx ./sites/default/files\n  mkdir -p ./profiles\n  mkdir -p ./sites/all/{modules,libraries,themes}\n  rm -f ./core/modules/*.txt\n  rm -f ./core/themes/*.txt\n  rm -f ./modules/*.txt\n  rm -f ./themes/*.txt\n  rm -f ./sites/all/*.txt\n  echo empty > ./profiles/EMPTY.txt\n  echo empty > ./sites/all/EMPTY.txt\n  echo empty > ./sites/all/modules/EMPTY.txt\n  echo empty > ./sites/all/libraries/EMPTY.txt\n  echo empty > ./sites/all/themes/EMPTY.txt\n  chmod 0755 ./profiles &> /dev/null\n  chmod 0755 ./sites\n  chmod 0755 ./sites/all\n  chmod 02775 
./sites/all/{modules,libraries,themes}\n  cp -af /opt/tmp/boa/aegir/conf/var/get.htaccess.txt ./.htaccess\n  cp -af /opt/tmp/boa/aegir/conf/var/crossdomain.xml ./\n}\n\n#\n# Create D6 symlinks.\n_create_d6_symlinks() {\n  if [ ! -L \"includes\" ]; then\n    ln -sfn ${_D6_CORE_DIR}/.htaccess .htaccess\n    ln -sfn ${_D6_CORE_DIR}/boost_stats.php boost_stats.php\n    ln -sfn ${_D6_CORE_DIR}/cron.php cron.php\n    ln -sfn ${_D6_CORE_DIR}/crossdomain.xml crossdomain.xml\n    ln -sfn ${_D6_CORE_DIR}/includes includes\n    ln -sfn ${_D6_CORE_DIR}/index.php index.php\n    ln -sfn ${_D6_CORE_DIR}/install.php install.php\n    ln -sfn ${_D6_CORE_DIR}/js.php js.php\n    ln -sfn ${_D6_CORE_DIR}/misc misc\n    ln -sfn ${_D6_CORE_DIR}/modules modules\n    ln -sfn ${_D6_CORE_DIR}/themes themes\n    ln -sfn ${_D6_CORE_DIR}/update.php update.php\n    ln -sfn ${_D6_CORE_DIR}/xmlrpc.php xmlrpc.php\n    cp -af ${_D6_CORE_DIR}/sites ./\n  fi\n  if [ ! -L \"${_OCTO_PLPATH}/profiles\" ] && [ -d \"${_SHRD_PLPATH}/profiles\" ]; then\n    rm -rf ${_OCTO_PLPATH}/profiles\n    ln -sfn ${_SHRD_PLPATH}/profiles ${_OCTO_PLPATH}/profiles\n  fi\n}\n\n#\n# Create D7 symlinks.\n_create_d7_symlinks() {\n  if [ ! -L \"web.config\" ]; then\n    ln -sfn ${_D7_CORE_DIR}/.htaccess .htaccess\n    ln -sfn ${_D7_CORE_DIR}/authorize.php authorize.php\n    ln -sfn ${_D7_CORE_DIR}/cron.php cron.php\n    ln -sfn ${_D7_CORE_DIR}/crossdomain.xml crossdomain.xml\n    ln -sfn ${_D7_CORE_DIR}/includes includes\n    ln -sfn ${_D7_CORE_DIR}/index.php index.php\n    ln -sfn ${_D7_CORE_DIR}/install.php install.php\n    ln -sfn ${_D7_CORE_DIR}/js.php js.php\n    ln -sfn ${_D7_CORE_DIR}/misc misc\n    ln -sfn ${_D7_CORE_DIR}/modules modules\n    ln -sfn ${_D7_CORE_DIR}/themes themes\n    ln -sfn ${_D7_CORE_DIR}/update.php update.php\n    ln -sfn ${_D7_CORE_DIR}/web.config web.config\n    ln -sfn ${_D7_CORE_DIR}/xmlrpc.php xmlrpc.php\n    cp -af ${_D7_CORE_DIR}/sites ./\n  fi\n  if [ ! 
-L \"${_OCTO_PLPATH}/profiles\" ] && [ -d \"${_SHRD_PLPATH}/profiles\" ]; then\n    rm -rf ${_OCTO_PLPATH}/profiles\n    ln -sfn ${_SHRD_PLPATH}/profiles ${_OCTO_PLPATH}/profiles\n  fi\n}\n\n#\n# Create distro own D7 core symlinks.\n_create_distro_d7_symlinks() {\n  if [ ! -L \"web.config\" ]; then\n    if [ ! -f \"${_SHRD_PLPATH}/crossdomain.xml\" ]; then\n      rm -f ${_SHRD_PLPATH}/crossdomain.xml\n      cd ${_SHRD_PLPATH}\n      _fix_dirs_files\n    fi\n    if [ ! -L \"${_SHRD_PLPATH}/modules/o_contrib_seven\" ]; then\n      ln -sfn ${_CORE}/o_contrib_seven ${_SHRD_PLPATH}/modules/o_contrib_seven\n    fi\n    cd ${_OCTO_PLPATH}\n    ln -sfn ${_SHRD_PLPATH}/.htaccess .htaccess\n    ln -sfn ${_SHRD_PLPATH}/authorize.php authorize.php\n    ln -sfn ${_SHRD_PLPATH}/cron.php cron.php\n    ln -sfn ${_SHRD_PLPATH}/crossdomain.xml crossdomain.xml\n    ln -sfn ${_SHRD_PLPATH}/includes includes\n    ln -sfn ${_SHRD_PLPATH}/index.php index.php\n    ln -sfn ${_SHRD_PLPATH}/install.php install.php\n    ln -sfn ${_CORE}/o_contrib_seven/js/js.php js.php\n    ln -sfn ${_SHRD_PLPATH}/misc misc\n    ln -sfn ${_SHRD_PLPATH}/modules modules\n    ln -sfn ${_SHRD_PLPATH}/themes themes\n    ln -sfn ${_SHRD_PLPATH}/update.php update.php\n    ln -sfn ${_SHRD_PLPATH}/web.config web.config\n    ln -sfn ${_SHRD_PLPATH}/xmlrpc.php xmlrpc.php\n    cp -af ${_SHRD_PLPATH}/sites ./\n  fi\n  if [ ! 
-L \"${_OCTO_PLPATH}/profiles\" ] && [ -d \"${_SHRD_PLPATH}/profiles\" ]; then\n    rm -rf ${_OCTO_PLPATH}/profiles\n    ln -sfn ${_SHRD_PLPATH}/profiles ${_OCTO_PLPATH}/profiles\n  fi\n}\n\n#\n# Rename D7 profiles.\n_rename_drupal7_profiles() {\n  for _Files in `find ./profiles -type f`; do\n    sed -i \"s/name = Minimal/name = Vanilla Minimal/g\"   ${_Files} &> /dev/null\n    wait\n    sed -i \"s/name = Standard/name = Vanilla Standard/g\" ${_Files} &> /dev/null\n    wait\n    sed -i \"s/name = Testing/name = Vanilla Testing/g\"   ${_Files} &> /dev/null\n    wait\n    sed -i \"s/hidden = TRUE//g\"                          ${_Files} &> /dev/null\n    wait\n  done\n}\n\n#\n# Rename D9 profiles.\n_rename_drupal9_profiles() {\n  for _Files in `find ./core/profiles -type f`; do\n    sed -i \"s/name: Minimal/name: Vanilla Minimal/g\"   ${_Files} &> /dev/null\n    wait\n    sed -i \"s/name: Standard/name: Vanilla Standard/g\" ${_Files} &> /dev/null\n    wait\n    sed -i \"s/name: Testing/name: Vanilla Testing/g\"   ${_Files} &> /dev/null\n    wait\n  done\n}\n\n#\n# Prepare D6 core.\n_prepare_drupal6_core() {\n  if [ ! 
-e \"${_D6_CORE_DIR}\" ]; then\n    if [ -L \"${_D6_CORE_DIR}\" ]; then\n      unlink ${_D6_CORE_DIR}\n    fi\n    cd /data/all/000/core\n    [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_core_ext '${_DRUPAL6}.tar.gz'\"\n    _get_core_ext \"${_DRUPAL6}.tar.gz\"\n    find ${_D6_CORE_DIR} -type d -exec chmod 0755 {} \\; &> /dev/null\n    find ${_D6_CORE_DIR} -type f -exec chmod 0644 {} \\; &> /dev/null\n    cd ${_D6_CORE_DIR}/\n    _fix_dirs_files\n    patch -p0 < ${_pthPch}/taxonomy-6.26.patch &> /dev/null\n    rm -f ${_D6_CORE_DIR}/modules/taxonomy/taxonomy.module.orig\n    rm -f modules/o_contrib\n    ln -sfn ${_CORE}/o_contrib modules/o_contrib\n    ln -sfn ${_CORE}/o_contrib/js/js.php js.php\n    cp -af ${_CORE}/o_contrib/image/image.imagemagick.inc includes/\n    cp -af ${_CORE}/o_contrib/boost/stats/boost_stats.php ./ &> /dev/null\n    rm -rf ${_D6_CORE_DIR}/scripts\n    cd ${_D6_CORE_DIR}/themes\n    _get_dev_contrib \"rubik-6.x-3.0-beta5.tar.gz\"\n    _get_dev_contrib \"tao-6.x-3.3.tar.gz\"\n    rm -f ${_D6_CORE_DIR}/sites/all/*.txt\n    cd ${_CORE}\n  fi\n}\n\n#\n# Prepare D7 core.\n_prepare_drupal7_core() {\n  if [ ! 
-e \"${_D7_CORE_DIR}\" ]; then\n    if [ -L \"${_D7_CORE_DIR}\" ]; then\n      unlink ${_D7_CORE_DIR}\n    fi\n    cd /data/all/000/core\n    [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_core_ext '${_DRUPAL7}.tar.gz'\"\n    _get_core_ext \"${_DRUPAL7}.tar.gz\"\n    find ${_D7_CORE_DIR} -type d -exec chmod 0755 {} \\; &> /dev/null\n    find ${_D7_CORE_DIR} -type f -exec chmod 0644 {} \\; &> /dev/null\n    cd ${_D7_CORE_DIR}/\n    _fix_dirs_files\n    _rename_drupal7_profiles\n    patch -p0 < ${_pthPch}/taxonomy-7.12.patch &> /dev/null\n    rm -f ${_D7_CORE_DIR}/modules/taxonomy/taxonomy.module.orig\n    rm -f modules/o_contrib_seven\n    ln -sfn ${_CORE}/o_contrib_seven modules/o_contrib_seven\n    ln -sfn ${_CORE}/o_contrib_seven/js/js.php js.php\n    rm -rf ${_D7_CORE_DIR}/scripts\n    cd ${_D7_CORE_DIR}/themes\n    _get_dev_contrib \"rubik-7.x-4.4.tar.gz\"\n    _get_dev_contrib \"tao-7.x-3.1.tar.gz\"\n    cd ${_CORE}\n  fi\n}\n\n#\n# Remove D6 core from distro directory.\n_nocore_d6_dist_clean() {\n  rm -f ${_SHRD_PLPATH}/.gitignore\n  rm -f ${_SHRD_PLPATH}/.htaccess\n  rm -f ${_SHRD_PLPATH}/*.php\n  rm -f ${_SHRD_PLPATH}/*.txt\n  rm -f ${_SHRD_PLPATH}/*.xml\n  rm -rf ${_SHRD_PLPATH}/cache\n  rm -rf ${_SHRD_PLPATH}/includes\n  rm -rf ${_SHRD_PLPATH}/misc\n  rm -rf ${_SHRD_PLPATH}/modules\n  rm -rf ${_SHRD_PLPATH}/sites\n  rm -rf ${_SHRD_PLPATH}/scripts\n  rm -rf ${_SHRD_PLPATH}/themes\n  if [ ! 
-d \"${_SHRD_PLPATH}/profiles\" ] && [ -d \"${_D6_CORE_DIR}/profiles\" ]; then\n    rm -rf ${_SHRD_PLPATH}/profiles\n    cp -af ${_D6_CORE_DIR}/profiles ${_SHRD_PLPATH}/\n  fi\n  sed -i \"s/'dblog'/'robotstxt', 'path_alias_cache'/g\" \\\n    ${_SHRD_PRPATH}/${_REAL_PRNAME}.profile &> /dev/null\n}\n\n#\n# Remove D7 core from distro directory.\n_nocore_d7_dist_clean() {\n  if [ \"${_USE_DISTRO_CORE}\" = \"NO\" ]; then\n    rm -f ${_SHRD_PLPATH}/.gitignore\n    rm -f ${_SHRD_PLPATH}/.htaccess\n    rm -f ${_SHRD_PLPATH}/*.php\n    rm -f ${_SHRD_PLPATH}/*.txt\n    rm -f ${_SHRD_PLPATH}/*.xml\n    rm -f ${_SHRD_PLPATH}/web.config\n    rm -rf ${_SHRD_PLPATH}/cache\n    rm -rf ${_SHRD_PLPATH}/includes\n    rm -rf ${_SHRD_PLPATH}/misc\n    rm -rf ${_SHRD_PLPATH}/modules\n    rm -rf ${_SHRD_PLPATH}/sites\n    rm -rf ${_SHRD_PLPATH}/scripts\n    rm -rf ${_SHRD_PLPATH}/themes\n    _REVISIONS=\"34 35 36 37 38 39\"\n    for i in ${_REVISIONS}; do\n      if [ -d \"${_CORE}/drupal-7.$i\" ] && [ ! -e \"${_SHRD_PLPATH}\" ]; then\n        mv ${_CORE}/drupal-7.$i ${_SHRD_PLPATH}\n      fi\n    done\n    if [ ! 
-d \"${_SHRD_PLPATH}/profiles\" ] && [ -d \"${_D7_CORE_DIR}/profiles\" ]; then\n      rm -rf ${_SHRD_PLPATH}/profiles\n      cp -af ${_D7_CORE_DIR}/profiles ${_SHRD_PLPATH}/\n    fi\n  fi\n}\n\n#\n# Enable D6 admin.\n_enable_drupal6_admin() {\n  sed -i \"s/'path_alias_cache'/'path_alias_cache', 'admin'/g\" \\\n    ${_SHRD_PRPATH}/${_REAL_PRNAME}.profile &> /dev/null\n}\n\n#\n# Remove default core seven profiles.\n_remove_default_core_seven_profiles() {\n  rm -rf ${_SHRD_PLPATH}/profiles/minimal\n  rm -rf ${_SHRD_PLPATH}/profiles/standard\n  rm -rf ${_SHRD_PLPATH}/profiles/testing\n}\n\n#\n# Init this distro root.\n_init_this_distro_root() {\n  mkdir -p ${_OCTO_PLPATH}\n  cd ${_OCTO_PLPATH}\n  if [[ \"${_USE_DISTRO_CORE}\" = \"YES\" ]]; then\n    _create_distro_d7_symlinks\n  else\n    if [[ \"${_SHRD_PLNAME}\" =~ \"-${_SMALLCORE6_V}\" ]]; then\n      _create_d6_symlinks\n    elif [[ \"${_SHRD_PLNAME}\" =~ \"-${_SMALLCORE7_V}\" ]]; then\n      _create_d7_symlinks\n    fi\n  fi\n}\n\n#\n# Upgrade contrib less.\n_upgrade_contrib_less() {\n  if [ -e \"${_SHRD_PRPATH}/modules/contrib/rules_conditional\" ]; then\n    rm -rf ${_SHRD_PRPATH}/modules/contrib/rules_conditional\n    cd ${_SHRD_PRPATH}/modules/contrib\n    _get_dev_contrib \"rules_conditional-7.x-1.x-dev.tar.gz\"\n    if [ ! -e \"${_SHRD_PRPATH}/modules/contrib/rules_conditional\" ]; then\n      _get_dev_contrib \"rules_conditional-7.x-1.x-dev.tar.gz\"\n    fi\n    cd ${_CORE}\n  fi\n  if [ -e \"${_SHRD_PRPATH}/modules/contrib/webform\" ]; then\n    rm -rf ${_SHRD_PRPATH}/modules/contrib/webform\n    cd ${_SHRD_PRPATH}/modules/contrib\n    _get_dev_contrib \"webform-7.x-4.18.tar.gz\"\n    if [ ! 
-e \"${_SHRD_PRPATH}/modules/contrib/webform\" ]; then\n      _get_dev_contrib \"webform-7.x-4.18.tar.gz\"\n    fi\n    cd ${_CORE}\n  fi\n  if [ -e \"${_SHRD_PRPATH}/modules/contrib/panels\" ]; then\n    rm -rf ${_SHRD_PRPATH}/modules/contrib/panels\n    cd ${_SHRD_PRPATH}/modules/contrib\n    _get_dev_contrib \"panels-7.x-3.8.tar.gz\"\n    if [ ! -e \"${_SHRD_PRPATH}/modules/contrib/panels\" ]; then\n      _get_dev_contrib \"panels-7.x-3.8.tar.gz\"\n    fi\n    cd ${_CORE}\n  fi\n  if [ -e \"${_SHRD_PRPATH}/modules/contrib/rules\" ]; then\n    rm -rf ${_SHRD_PRPATH}/modules/contrib/rules\n    cd ${_SHRD_PRPATH}/modules/contrib\n    _get_dev_contrib \"rules-7.x-2.12.tar.gz\"\n    if [ ! -e \"${_SHRD_PRPATH}/modules/contrib/rules\" ]; then\n      _get_dev_contrib \"rules-7.x-2.12.tar.gz\"\n    fi\n    cd ${_CORE}\n  fi\n  if [ -e \"${_SHRD_PRPATH}/modules/rules\" ]; then\n    rm -rf ${_SHRD_PRPATH}/modules/rules\n    cd ${_SHRD_PRPATH}/modules\n    _get_dev_contrib \"rules-7.x-2.12.tar.gz\"\n    if [ ! -e \"${_SHRD_PRPATH}/modules/rules\" ]; then\n      _get_dev_contrib \"rules-7.x-2.12.tar.gz\"\n    fi\n    cd ${_CORE}\n  fi\n}\n\n# Create Drupal basic platform for versions 6 and 7.\n_create_drupal6_or_7_basic() {\n  local version=\"$1\"  # Accepts \"6\" or \"7\"\n  if [ ! -d \"${_SHRD_PLPATH}\" ]; then\n    mkdir -p \"${_SHRD_PLPATH}\"\n    local core_dir_var=\"_D${version}_CORE_DIR\"\n    local core_dir=\"${!core_dir_var}\"\n    if [ ! 
-e \"${core_dir}\" ]; then\n      local prepare_func=\"_prepare_drupal${version}_core\"\n      if declare -f \"$prepare_func\" > /dev/null; then\n        \"$prepare_func\"\n      else\n        echo \"Function $prepare_func does not exist.\"\n      fi\n    fi\n    if [ \"$version\" = \"6\" ]; then\n      cd \"${_CORE}\"\n      _nocore_d6_dist_clean\n      _enable_drupal6_admin\n    elif [ \"$version\" = \"7\" ]; then\n      _nocore_d7_dist_clean\n    fi\n  fi\n  _init_this_distro_root\n}\n\n# Create Drupal basic platform for versions 9, 10.x, and 11.x.\n_create_drupal_basic_version() {\n  local version=\"$1\"  # Accepts \"9\", \"10_0\", \"10_1\", etc.\n  if [ ! -d \"${_OCTO_PLPATH}\" ]; then\n    cd \"${_pthDst}\"\n    local drupal_var=\"_DRUPAL${version}\"\n    local drupal_version=\"${!drupal_var}\"\n    [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_core_ext '${drupal_version}.tar.gz'\"\n    _get_core_ext \"${drupal_version}.tar.gz\"\n    mv -f \"${drupal_version}\" \"${_SHRD_PLNAME}\"\n    if [ \"$version\" = \"9\" ]; then\n      cd \"${_pthDst}/${_SHRD_PLNAME}/web\"\n      _rename_drupal9_profiles\n    fi\n  fi\n}\n\n# Create Drupal core-only basic platform.\n_create_drupal_core_basic() {\n  local version_code=\"$1\"\n  case \"$version_code\" in\n    \"DL6\")\n      _create_drupal6_or_7_basic \"6\"\n      ;;\n    \"DL7\")\n      _create_drupal6_or_7_basic \"7\"\n      ;;\n    \"DL9\")\n      _create_drupal_basic_version \"9\"\n      ;;\n    \"DX0\")\n      _create_drupal_basic_version \"10_0\"\n      ;;\n    \"DX1\")\n      _create_drupal_basic_version \"10_1\"\n      ;;\n    \"DX2\")\n      _create_drupal_basic_version \"10_2\"\n      ;;\n    \"DX3\")\n      _create_drupal_basic_version \"10_3\"\n      ;;\n    \"DX4\")\n      _create_drupal_basic_version \"10_4\"\n      ;;\n    \"DX5\")\n      _create_drupal_basic_version \"10_5\"\n      ;;\n    \"DX6\")\n      _create_drupal_basic_version \"10_6\"\n      ;;\n    \"DE1\")\n      _create_drupal_basic_version \"11_1\"\n      ;;\n    \"DE2\")\n      _create_drupal_basic_version \"11_2\"\n      ;;\n    \"DE3\")\n      _create_drupal_basic_version \"11_3\"\n      ;;\n    *)\n      echo \"Unsupported version code: $version_code\"\n      ;;\n  esac\n}\n\n# Define distro names and their keywords for configuration\ndeclare -A _distros_names=(\n  [\"CMS\"]=\"Drupal CMS\"\n  [\"CK1\"]=\"Commerce v.1\"\n  [\"CK2\"]=\"Commerce v.2\"\n  [\"CK3\"]=\"Commerce v.3\"\n  [\"DXP\"]=\"DXPR Marketing\"\n  [\"EZC\"]=\"EzContent\"\n  [\"FOS\"]=\"farmOS\"\n  [\"LGV\"]=\"LocalGov\"\n  [\"OCS\"]=\"OpenCulturas\"\n  [\"OFD\"]=\"OpenFed\"\n  [\"OLS\"]=\"OpenLucius\"\n  [\"OPG\"]=\"Opigno LMS\"\n  [\"SCR\"]=\"Sector\"\n  [\"SOC\"]=\"Social\"\n  [\"THR\"]=\"Thunder\"\n  [\"UC6\"]=\"Ubercart\"\n  [\"UC7\"]=\"Ubercart\"\n  [\"VB9\"]=\"Varbase\"\n  [\"VBX\"]=\"Varbase\"\n  [\"DL6\"]=\"Pressflow\"\n  [\"DL7\"]=\"Drupal\"\n  [\"DL9\"]=\"Drupal\"\n  [\"DX0\"]=\"Drupal\"\n  [\"DX1\"]=\"Drupal\"\n  [\"DX2\"]=\"Drupal\"\n  [\"DX3\"]=\"Drupal\"\n  [\"DX4\"]=\"Drupal\"\n  [\"DX5\"]=\"Drupal\"\n  [\"DX6\"]=\"Drupal\"\n  [\"DE1\"]=\"Drupal\"\n  [\"DE2\"]=\"Drupal\"\n  [\"DE3\"]=\"Drupal\"\n)\n\n# Define distro versions for configuration\ndeclare -A _distros_versions=(\n  [\"CMS\"]=\"2.0.0\"\n  [\"CK1\"]=\"2.77\"\n  [\"CK2\"]=\"2.40\"\n  [\"CK3\"]=\"3.2.0\"\n  [\"DXP\"]=\"10.3.0\"\n  [\"EZC\"]=\"2.2.15\"\n  [\"FOS\"]=\"3.5.1\"\n  [\"LGV\"]=\"3.4.0\"\n  [\"OCS\"]=\"2.5.4\"\n  [\"OFD\"]=\"12.2.4\"\n  [\"OLS\"]=\"2.0.0\"\n  [\"OPG\"]=\"3.1.0\"\n  [\"SCR\"]=\"11.0.x-dev\"\n  [\"SOC\"]=\"12.4.5\"\n  [\"THR\"]=\"8.3.1\"\n  [\"UC6\"]=\"2.15\"\n  [\"UC7\"]=\"3.13\"\n  [\"VB9\"]=\"9.1.13\"\n  [\"VBX\"]=\"10.1.0\"\n  [\"DL6\"]=\"${_SMALLCORE6_V}\"\n  [\"DL7\"]=\"${_SMALLCORE7_V}\"\n  [\"DL9\"]=\"${_SMALLCORE9_V}\"\n  [\"DX0\"]=\"${_SMALLCORE10_0_V}\"\n  [\"DX1\"]=\"${_SMALLCORE10_1_V}\"\n  [\"DX2\"]=\"${_SMALLCORE10_2_V}\"\n  [\"DX3\"]=\"${_SMALLCORE10_3_V}\"\n  [\"DX4\"]=\"${_SMALLCORE10_4_V}\"\n  
[\"DX5\"]=\"${_SMALLCORE10_5_V}\"\n  [\"DX6\"]=\"${_SMALLCORE10_6_V}\"\n  [\"DE1\"]=\"${_SMALLCORE11_1_V}\"\n  [\"DE2\"]=\"${_SMALLCORE11_2_V}\"\n  [\"DE3\"]=\"${_SMALLCORE11_3_V}\"\n)\n\n# Define distro Drupal core versions for configuration\ndeclare -A _distros_drupal_cores=(\n  [\"CMS\"]=\"${_SMALLCORE11_3_V}\"\n  [\"CK1\"]=\"${_SMALLCORE7_V}\"\n  [\"CK2\"]=\"${_SMALLCORE10_1_V}\"\n  [\"CK3\"]=\"${_SMALLCORE11_3_V}\"\n  [\"DXP\"]=\"10.3.6\"\n  [\"EZC\"]=\"10.3.6\"\n  [\"FOS\"]=\"10.6.2\"\n  [\"LGV\"]=\"${_SMALLCORE10_6_V}\"\n  [\"OCS\"]=\"${_SMALLCORE10_5_V}\"\n  [\"OFD\"]=\"10.2.10\"\n  [\"OLS\"]=\"${_SMALLCORE9_V}\"\n  [\"OPG\"]=\"${_SMALLCORE9_V}\"\n  [\"SCR\"]=\"${_SMALLCORE11_3_V}\"\n  [\"SOC\"]=\"10.2.10\"\n  [\"THR\"]=\"${_SMALLCORE11_3_V}\"\n  [\"UC6\"]=\"${_SMALLCORE6_V}\"\n  [\"UC7\"]=\"${_SMALLCORE7_V}\"\n  [\"VB9\"]=\"10.6.1\"\n  [\"VBX\"]=\"11.3.1\"\n  [\"DL6\"]=\"${_SMALLCORE6_V}\"\n  [\"DL7\"]=\"${_SMALLCORE7_V}\"\n  [\"DL9\"]=\"${_SMALLCORE9_V}\"\n  [\"DX0\"]=\"${_SMALLCORE10_0_V}\"\n  [\"DX1\"]=\"${_SMALLCORE10_1_V}\"\n  [\"DX2\"]=\"${_SMALLCORE10_2_V}\"\n  [\"DX3\"]=\"${_SMALLCORE10_3_V}\"\n  [\"DX4\"]=\"${_SMALLCORE10_4_V}\"\n  [\"DX5\"]=\"${_SMALLCORE10_5_V}\"\n  [\"DX6\"]=\"${_SMALLCORE10_6_V}\"\n  [\"DE1\"]=\"${_SMALLCORE11_1_V}\"\n  [\"DE2\"]=\"${_SMALLCORE11_2_V}\"\n  [\"DE3\"]=\"${_SMALLCORE11_3_V}\"\n)\n\n# Define distro profile names for configuration\ndeclare -A _distros_profiles_names=(\n  [\"CMS\"]=\"drupal_cms_installer\"\n  [\"CK1\"]=\"commerce_kickstart\"\n  [\"CK2\"]=\"commerce_base\"\n  [\"CK3\"]=\"commerce_kickstart\"\n  [\"DXP\"]=\"dxpr_marketing_cms\"\n  [\"EZC\"]=\"ezcontent\"\n  [\"FOS\"]=\"farm\"\n  [\"LGV\"]=\"localgov\"\n  [\"OCS\"]=\"openculturas\"\n  [\"OFD\"]=\"openfed\"\n  [\"OLS\"]=\"openlucius\"\n  [\"OPG\"]=\"opigno_lms\"\n  [\"SCR\"]=\"sector\"\n  [\"SOC\"]=\"social\"\n  [\"THR\"]=\"thunder\"\n  [\"UC6\"]=\"uberdrupal\"\n  [\"UC7\"]=\"minimal\"\n  [\"VB9\"]=\"varbase\"\n  [\"VBX\"]=\"varbase\"\n  
[\"DL6\"]=\"default\"\n  [\"DL7\"]=\"standard\"\n  [\"DL9\"]=\"standard\"\n  [\"DX0\"]=\"standard\"\n  [\"DX1\"]=\"standard\"\n  [\"DX2\"]=\"standard\"\n  [\"DX3\"]=\"standard\"\n  [\"DX4\"]=\"standard\"\n  [\"DX5\"]=\"standard\"\n  [\"DX6\"]=\"standard\"\n  [\"DE1\"]=\"standard\"\n  [\"DE2\"]=\"standard\"\n  [\"DE3\"]=\"standard\"\n)\n\n# Define distro profile paths for configuration\ndeclare -A _distros_profiles_paths=(\n  [\"CMS\"]=\"/\"\n  [\"CK1\"]=\"/\"\n  [\"CK2\"]=\"contrib/\"\n  [\"CK3\"]=\"contrib/\"\n  [\"DXP\"]=\"contrib/\"\n  [\"EZC\"]=\"contrib/\"\n  [\"FOS\"]=\"/\"\n  [\"LGV\"]=\"contrib/\"\n  [\"OCS\"]=\"contrib/\"\n  [\"OFD\"]=\"contrib/\"\n  [\"OLS\"]=\"contrib/\"\n  [\"OPG\"]=\"contrib/\"\n  [\"SCR\"]=\"contrib/\"\n  [\"SOC\"]=\"contrib/\"\n  [\"THR\"]=\"contrib/\"\n  [\"UC6\"]=\"/\"\n  [\"UC7\"]=\"/\"\n  [\"VB9\"]=\"/\"\n  [\"VBX\"]=\"contrib/\"\n  [\"DL6\"]=\"/\"\n  [\"DL7\"]=\"/\"\n  [\"DL9\"]=\"/\"\n  [\"DX0\"]=\"/\"\n  [\"DX1\"]=\"/\"\n  [\"DX2\"]=\"/\"\n  [\"DX3\"]=\"/\"\n  [\"DX4\"]=\"/\"\n  [\"DX5\"]=\"/\"\n  [\"DX6\"]=\"/\"\n  [\"DE1\"]=\"/\"\n  [\"DE2\"]=\"/\"\n  [\"DE3\"]=\"/\"\n)\n\n# Define distro web dirs for configuration\ndeclare -A _distros_web_dirs=(\n  [\"CMS\"]=\"/web\"\n  [\"CK1\"]=\"\"\n  [\"CK2\"]=\"/web\"\n  [\"CK3\"]=\"/web\"\n  [\"DXP\"]=\"/web\"\n  [\"EZC\"]=\"/web\"\n  [\"FOS\"]=\"/web\"\n  [\"LGV\"]=\"/web\"\n  [\"OCS\"]=\"/web\"\n  [\"OFD\"]=\"/docroot\"\n  [\"OLS\"]=\"/web\"\n  [\"OPG\"]=\"/web\"\n  [\"SCR\"]=\"/web\"\n  [\"SOC\"]=\"/html\"\n  [\"THR\"]=\"/docroot\"\n  [\"UC6\"]=\"\"\n  [\"UC7\"]=\"\"\n  [\"VB9\"]=\"/docroot\"\n  [\"VBX\"]=\"/docroot\"\n  [\"DL6\"]=\"\"\n  [\"DL7\"]=\"\"\n  [\"DL9\"]=\"/web\"\n  [\"DX0\"]=\"/web\"\n  [\"DX1\"]=\"/web\"\n  [\"DX2\"]=\"/web\"\n  [\"DX3\"]=\"/web\"\n  [\"DX4\"]=\"/web\"\n  [\"DX5\"]=\"/web\"\n  [\"DX6\"]=\"/web\"\n  [\"DE1\"]=\"/web\"\n  [\"DE2\"]=\"/web\"\n  [\"DE3\"]=\"/web\"\n)\n\n# Define distro names and their URLs for information\ndeclare -A _distros_urls=(\n 
 [\"CMS\"]=\"https://new.drupal.org/drupal-cms\"\n  [\"CK1\"]=\"${_urlPrt}/commerce_kickstart\"\n  [\"CK2\"]=\"${_urlPrt}/commerce\"\n  [\"CK3\"]=\"${_urlPrt}/commerce_kickstart\"\n  [\"DXP\"]=\"${_urlPrt}/dxpr_marketing_cms\"\n  [\"EZC\"]=\"${_urlPrt}/ezcontent\"\n  [\"FOS\"]=\"${_urlPrt}/farm\"\n  [\"LGV\"]=\"${_urlPrt}/localgov\"\n  [\"OCS\"]=\"${_urlPrt}/openculturas\"\n  [\"OFD\"]=\"${_urlPrt}/openfed\"\n  [\"OLS\"]=\"${_urlPrt}/openlucius\"\n  [\"OPG\"]=\"${_urlPrt}/opigno_lms\"\n  [\"SCR\"]=\"${_urlPrt}/sector\"\n  [\"SOC\"]=\"${_urlPrt}/social\"\n  [\"THR\"]=\"${_urlPrt}/thunder\"\n  [\"UC6\"]=\"${_urlPrt}/ubercart\"\n  [\"UC7\"]=\"${_urlPrt}/ubercart\"\n  [\"VB9\"]=\"${_urlPrt}/varbase\"\n  [\"VBX\"]=\"${_urlPrt}/varbase\"\n  [\"DL6\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE6_V}\"\n  [\"DL7\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE7_V}\"\n  [\"DL9\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE9_V}\"\n  [\"DX0\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE10_0_V}\"\n  [\"DX1\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE10_1_V}\"\n  [\"DX2\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE10_2_V}\"\n  [\"DX3\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE10_3_V}\"\n  [\"DX4\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE10_4_V}\"\n  [\"DX5\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE10_5_V}\"\n  [\"DX6\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE10_6_V}\"\n  [\"DE1\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE11_1_V}\"\n  [\"DE2\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE11_2_V}\"\n  [\"DE3\"]=\"${_urlPrt}/drupal/releases/${_SMALLCORE11_3_V}\"\n)\n\n# Define distro names and their maximum compatible PHP versions for information\ndeclare -A _distros_php_versions=(\n  [\"CMS\"]=\"8.4\"\n  [\"CK1\"]=\"7.4\"\n  [\"CK2\"]=\"8.3\"\n  [\"CK3\"]=\"8.4\"\n  [\"DXP\"]=\"8.3\"\n  [\"EZC\"]=\"8.3\"\n  [\"FOS\"]=\"8.3\"\n  [\"LGV\"]=\"8.3\"\n  [\"OCS\"]=\"8.3\"\n  [\"OFD\"]=\"8.3\"\n  [\"OLS\"]=\"8.3\"\n  [\"OPG\"]=\"8.3\"\n  [\"SCR\"]=\"8.4\"\n  [\"SOC\"]=\"8.3\"\n  
[\"THR\"]=\"8.4\"\n  [\"UC6\"]=\"7.4\"\n  [\"UC7\"]=\"7.4\"\n  [\"VB9\"]=\"8.3\"\n  [\"VBX\"]=\"8.4\"\n  [\"DL6\"]=\"7.4\"\n  [\"DL7\"]=\"8.3\"\n  [\"DL9\"]=\"8.3\"\n  [\"DX0\"]=\"8.3\"\n  [\"DX1\"]=\"8.3\"\n  [\"DX2\"]=\"8.3\"\n  [\"DX3\"]=\"8.3\"\n  [\"DX4\"]=\"8.3\"\n  [\"DX5\"]=\"8.3\"\n  [\"DX6\"]=\"8.3\"\n  [\"DE1\"]=\"8.4\"\n  [\"DE2\"]=\"8.4\"\n  [\"DE3\"]=\"8.4\"\n)\n\n\n_commerce_7_2_install() {\n###---### Commerce 7.x-2.x\n#\n  # Declare 'params' as local to the function\n  local -A params=(\n    [dist]=\"$1\"\n    [version]=\"$2\"\n    [core_version]=\"$3\"\n    [description]=\"$4\"\n    [profile_path]=\"$5\"\n    [profile_name]=\"$6\"\n    [web_dir]=\"$7\"\n    [php_version]=\"$8\"\n    [url]=\"$9\"\n  )\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _commerce_7_2_install\"\n    for key in \"${!params[@]}\"; do\n      _msg \"DEBUG: _commerce_7_2_install ${key} is '${params[$key]}'\"\n    done\n  fi\n\n  # /data/disk/o8/distro/001/commerce_kickstart-2.77-7.105.1\n  _REAL_PRNAME=\"${params[profile_name]}\"\n  _SHRD_PLNAME=\"${params[profile_name]}-${params[version]}-${params[core_version]}\"\n  _SHRD_PLPATH=\"${_CORE}/${_SHRD_PLNAME}\"\n  _SHRD_PRPATH=\"${_SHRD_PLPATH}/profiles/${_REAL_PRNAME}\"\n  _OCTO_PLPATH=\"${_pthDst}/${_SHRD_PLNAME}\"\n  if [ \"${_ALLOW_ALL}\" = \"YES\" ]; then\n    if [[ \"${_PLATFORMS_LIST}\" =~ ALL ]] \\\n      || [[ \"${_PLATFORMS_LIST}\" =~ ${params[dist]} ]]; then\n      if [ ! -d \"${_OCTO_PLPATH}\" ]; then\n        if _prompt_yes_no \"${params[description]} - ${params[url]}\" ; then\n          true\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_yOk}\"\n          if [ ! -d \"${_SHRD_PLPATH}\" ]; then\n            cd ${_CORE}\n            if [ \"${_DL_MODE}\" = \"GIT\" ] \\\n              && [ \"${_T_BUILD}\" = \"SRC\" ]; then\n              if [ ! 
-e \"${_D7_CORE_DIR}\" ]; then\n                _prepare_drupal7_core\n              fi\n              [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_distro_ext '${params[profile_name]}-7.x-${params[version]}-core.tar.gz'\"\n              _get_distro_ext \"${params[profile_name]}-7.x-${params[version]}-core.tar.gz\"\n              if [ -d \"${_CORE}/commerce-7.x-${params[version]}\" ]; then\n                mv -f commerce-7.x-${params[version]} ${_SHRD_PLNAME}\n              elif [ -d \"${_CORE}/${_REAL_PRNAME}-7.x-${params[version]}\" ]; then\n                mv -f ${_REAL_PRNAME}-7.x-${params[version]} ${_SHRD_PLNAME}\n              fi\n              _nocore_d7_dist_clean\n              _remove_default_core_seven_profiles\n              _upgrade_contrib_less\n            else\n              [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_distro_ext '${_SHRD_PLNAME}.tar.gz'\"\n              _get_distro_ext \"${_SHRD_PLNAME}.tar.gz\"\n            fi\n          fi\n          _init_this_distro_root\n          _msg \"DISTRO: ${params[description]} installed\"\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _save_verify_this_platform '${1}' '${2}' '${3}' '${params[description]}' '${5}' '${6}' '${7}'\"\n          _save_verify_this_platform \"${1}\" \"${2}\" \"${3}\" \"${params[description]}\" \"${5}\" \"${6}\" \"${7}\"\n        else\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_noT}\"\n        fi\n      fi\n    fi\n  fi\n}\n\n\n_ubercart6_install() {\n###---### Ubercart 2\n#\n  # Declare 'params' as local to the function\n  local -A params=(\n    [dist]=\"$1\"\n    [version]=\"$2\"\n    [core_version]=\"$3\"\n    [description]=\"$4\"\n    [profile_path]=\"$5\"\n    [profile_name]=\"$6\"\n    [web_dir]=\"$7\"\n    [php_version]=\"$8\"\n    [url]=\"$9\"\n  )\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _ubercart6_install\"\n    for key in \"${!params[@]}\"; do\n      _msg \"DEBUG: 
_ubercart6_install ${key} is '${params[$key]}'\"\n    done\n  fi\n\n  # /data/disk/o8/distro/001/ubercart-2.15-6.60.1\n  _REAL_PRNAME=\"${params[profile_name]}\"\n  _VIRT_PRNAME=\"ubercart\"\n  _SHRD_PLNAME=\"${_VIRT_PRNAME}-${params[version]}-${params[core_version]}\"\n  _SHRD_PLPATH=\"${_CORE}/${_SHRD_PLNAME}\"\n  _SHRD_PRPATH=\"${_SHRD_PLPATH}/profiles/${_REAL_PRNAME}\"\n  _OCTO_PLPATH=\"${_pthDst}/${_SHRD_PLNAME}\"\n  if [[ \"${_PLATFORMS_LIST}\" =~ ALL ]] \\\n    || [[ \"${_PLATFORMS_LIST}\" =~ ${params[dist]} ]]; then\n    if [ ! -d \"${_OCTO_PLPATH}/modules/path_alias_cache\" ]; then\n      if _prompt_yes_no \"${params[description]} - ${params[url]}\" ; then\n        true\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_yOk}\"\n        if [ ! -d \"${_SHRD_PLPATH}\" ]; then\n          if [ \"${_DL_MODE}\" = \"GIT\" ] \\\n            && [ \"${_T_BUILD}\" = \"SRC\" ]; then\n            cd ${_CORE}\n            if [ ! -e \"${_D6_CORE_DIR}\" ]; then\n              _prepare_drupal6_core\n            fi\n            mkdir -p ${_SHRD_PLPATH}\n            _nocore_d6_dist_clean\n            rm -rf ${_SHRD_PLPATH}/profiles/default\n            cd ${_SHRD_PLPATH}/profiles\n            _get_dev_contrib \"uberdrupal.tar.gz\"\n            cd ${_CORE}\n            mkdir -p ${_SHRD_PRPATH}/{modules,themes,libraries}\n            cd ${_SHRD_PRPATH}/libraries\n            _get_dev_contrib \"colorbox-1.3.18.zip\"\n            cd ${_SHRD_PRPATH}/modules\n            _get_dev_contrib \"admin_menu-6.x-3.x-dev.tar.gz\"\n            _get_dev_contrib \"cck-6.x-3.0-alpha4.tar.gz\"\n            _get_dev_contrib \"colorbox-6.x-1.4.tar.gz\"\n            _get_dev_contrib \"date-6.x-2.9.tar.gz\"\n            _get_dev_contrib \"filefield-6.x-3.14.tar.gz\"\n            _get_dev_contrib \"google_analytics-6.x-4.3.tar.gz\"\n            _get_dev_contrib \"imageapi-6.x-1.10.tar.gz\"\n            _get_dev_contrib \"imagecache-6.x-2.x-dev.tar.gz\"\n            
_get_dev_contrib \"imagefield-6.x-3.11.tar.gz\"\n            _get_dev_contrib \"jquery_update-6.x-2.0-alpha1.tar.gz\"\n            _get_dev_contrib \"libraries-6.x-1.0.tar.gz\"\n            _get_dev_contrib \"lightbox2-6.x-1.x-dev.tar.gz\"\n            _get_dev_contrib \"rules-6.x-1.5.tar.gz\"\n            _get_dev_contrib \"skinr-6.x-1.7.tar.gz\"\n            _get_dev_contrib \"token-6.x-1.19.tar.gz\"\n            _get_dev_contrib \"ubercart-6.x-${params[version]}.tar.gz\"\n            _get_dev_contrib \"views-6.x-3.3.tar.gz\"\n            _get_dev_contrib \"webform-6.x-3.23.tar.gz\"\n            ### https://drupal.org/node/1167276#comment-5138248\n            cd ${_SHRD_PRPATH}/modules/${_VIRT_PRNAME}\n            patch -p1 < ${_pthPch}/${_VIRT_PRNAME}-1167276-reroll.patch &> /dev/null\n            cd ${_SHRD_PRPATH}/modules/imagecache\n            # https://drupal.org/node/1243258#comment-4850634\n            patch -p1 < ${_pthPch}/imagecache-1243258-5.patch &> /dev/null\n            cd ${_SHRD_PRPATH}/themes\n            _get_dev_contrib \"fusion-6.x-1.x-dev.tar.gz\"\n            _get_dev_contrib \"acquia_prosper-6.x-1.1.tar.gz\"\n          else\n            cd ${_CORE}\n            [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_distro_ext '${_SHRD_PLNAME}.tar.gz'\"\n            _get_distro_ext \"${_SHRD_PLNAME}.tar.gz\"\n          fi\n        fi\n        _init_this_distro_root\n        _msg \"DISTRO: ${params[description]} installed\"\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _save_verify_this_platform '${1}' '${2}' '${3}' '${params[description]}' '${5}' '${6}' '${7}'\"\n        _save_verify_this_platform \"${1}\" \"${2}\" \"${3}\" \"${params[description]}\" \"${5}\" \"${6}\" \"${7}\"\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_noT}\"\n      fi\n    fi\n  fi\n}\n\n\n_ubercart7_install() {\n###---### Ubercart 3\n#\n  # Declare 'params' as local to the function\n  local -A params=(\n  
  [dist]=\"$1\"\n    [version]=\"$2\"\n    [core_version]=\"$3\"\n    [description]=\"$4\"\n    [profile_path]=\"$5\"\n    [profile_name]=\"$6\"\n    [web_dir]=\"$7\"\n    [php_version]=\"$8\"\n    [url]=\"$9\"\n  )\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _ubercart7_install\"\n    for key in \"${!params[@]}\"; do\n      _msg \"DEBUG: _ubercart7_install ${key} is '${params[$key]}'\"\n    done\n  fi\n\n  # /data/disk/o8/distro/001/ubercart-3.13-7.105.1\n  _REAL_PRNAME=\"${params[profile_name]}\"\n  _VIRT_PRNAME=\"ubercart\"\n  _SHRD_PLNAME=\"${_VIRT_PRNAME}-${params[version]}-${params[core_version]}\"\n  _SHRD_PLPATH=\"${_CORE}/${_SHRD_PLNAME}\"\n  _SHRD_PRPATH=\"${_SHRD_PLPATH}/profiles/${_REAL_PRNAME}\"\n  _OCTO_PLPATH=\"${_pthDst}/${_SHRD_PLNAME}\"\n  if [ \"${_ALLOW_ALL}\" = \"YES\" ]; then\n    if [[ \"${_PLATFORMS_LIST}\" =~ ALL ]] \\\n      || [[ \"${_PLATFORMS_LIST}\" =~ ${params[dist]} ]]; then\n      if [ ! -d \"${_OCTO_PLPATH}\" ]; then\n        if _prompt_yes_no \"${params[description]} - ${params[url]}\" ; then\n          true\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_yOk}\"\n          if [ ! -d \"${_SHRD_PLPATH}\" ]; then\n            cd ${_CORE}\n            if [ \"${_DL_MODE}\" = \"GIT\" ] \\\n              && [ \"${_T_BUILD}\" = \"SRC\" ]; then\n              if [ ! 
-e \"${_D7_CORE_DIR}\" ]; then\n                _prepare_drupal7_core\n              fi\n              mkdir -p ${_SHRD_PLPATH}\n              _nocore_d7_dist_clean\n              rm -rf ${_SHRD_PLPATH}/profiles/standard\n              rm -rf ${_SHRD_PLPATH}/profiles/testing\n              sed -i \"s/version = VERSION/version = \\\"${_SMALLCORE7_V}\\\"/g\" \\\n                ${_SHRD_PLPATH}/profiles/minimal/minimal.info &> /dev/null\n              mkdir -p ${_SHRD_PRPATH}/libraries\n              cd ${_SHRD_PRPATH}/libraries\n              _get_dev_contrib \"colorbox-1.5.13.zip\"\n              if [ -d \"colorbox-master\" ]; then\n                mv -f colorbox-master colorbox\n              fi\n              mkdir -p ${_SHRD_PRPATH}/modules\n              cd ${_SHRD_PRPATH}/modules\n              _get_dev_contrib \"colorbox-7.x-2.13.tar.gz\"\n              _get_dev_contrib \"ctools-7.x-1.21.tar.gz\"\n              _get_dev_contrib \"entity-7.x-1.12.tar.gz\"\n              _get_dev_contrib \"google_analytics-7.x-2.6.tar.gz\"\n              _get_dev_contrib \"libraries-7.x-2.5.tar.gz\"\n              _get_dev_contrib \"pathauto-7.x-1.3.tar.gz\"\n              _get_dev_contrib \"rules-7.x-2.12.tar.gz\"\n              _get_dev_contrib \"token-7.x-1.7.tar.gz\"\n              _get_dev_contrib \"ubercart-7.x-${params[version]}.tar.gz\"\n              _get_dev_contrib \"views-7.x-3.30.tar.gz\"\n              cd ${_SHRD_PRPATH}/modules/views\n              # https://drupal.org/node/1766338#comment-6445882\n              patch -p1 < \\\n                ${_pthPch}/views-revert-broken-filter-or-groups-1766338-7.patch &> /dev/null\n              # https://drupal.org/node/2037469\n              patch -p1 < \\\n                ${_pthPch}/views-exposed-sorts-2037469-1.patch &> /dev/null\n            else\n              [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_distro_ext '${_SHRD_PLNAME}.tar.gz'\"\n              _get_distro_ext \"${_SHRD_PLNAME}.tar.gz\"\n           
 fi\n          fi\n          _init_this_distro_root\n          _msg \"DISTRO: ${params[description]} installed\"\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _save_verify_this_platform '${1}' '${2}' '${3}' '${params[description]}' '${5}' '${6}' '${7}'\"\n          _save_verify_this_platform \"${1}\" \"${2}\" \"${3}\" \"${params[description]}\" \"${5}\" \"${6}\" \"${7}\"\n          # _save_verify_this_platform 'UC7' '3.13' '7.105.1' 'ubercart-3.13-7.105.1' '/' 'minimal' ''\n        else\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_noT}\"\n        fi\n      fi\n    fi\n  fi\n}\n\n\n_d_dist_custom_platform_install() {\n###---### Template function for distros-custom-type platforms\n#\n  # Declare 'params' as local to the function\n  local -A params=(\n    [dist]=\"$1\"\n    [version]=\"$2\"\n    [core_version]=\"$3\"\n    [description]=\"$4\"\n    [profile_path]=\"$5\"\n    [profile_name]=\"$6\"\n    [web_dir]=\"$7\"\n    [php_version]=\"$8\"\n    [url]=\"$9\"\n  )\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _d_dist_custom_platform_install\"\n    for key in \"${!params[@]}\"; do\n      _msg \"DEBUG: _d_dist_custom_platform_install ${key} is '${params[$key]}'\"\n    done\n  fi\n\n  if [[ \"${1}\" == \"CK1\" ]]; then\n    _commerce_7_2_install \"${1}\" \"${2}\" \"${3}\" \"${4}\" \"${5}\" \"${6}\" \"${7}\" \"${8}\" \"${9}\"\n  elif [[ \"${1}\" == \"UC6\" ]]; then\n    _ubercart6_install \"${1}\" \"${2}\" \"${3}\" \"${4}\" \"${5}\" \"${6}\" \"${7}\" \"${8}\" \"${9}\"\n  elif [[ \"${1}\" == \"UC7\" ]]; then\n    _ubercart7_install \"${1}\" \"${2}\" \"${3}\" \"${4}\" \"${5}\" \"${6}\" \"${7}\" \"${8}\" \"${9}\"\n  fi\n}\n\n\n_d_dist_platform_install() {\n###---### Template function for distros-type platforms\n#\n  # Declare 'params' as local to the function\n  local -A params=(\n    [dist]=\"$1\"\n    [version]=\"$2\"\n    [core_version]=\"$3\"\n    [description]=\"$4\"\n    
[profile_path]=\"$5\"\n    [profile_name]=\"$6\"\n    [web_dir]=\"$7\"\n    [php_version]=\"$8\"\n    [url]=\"$9\"\n  )\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _d_dist_platform_install\"\n    for key in \"${!params[@]}\"; do\n      _msg \"DEBUG: _d_dist_platform_install ${key} is '${params[$key]}'\"\n    done\n  fi\n\n  # Access parameters by name\n  _REAL_PRNAME=\"${params[profile_name]}\"\n  _DIST_PLNAME=\"${params[profile_name]}-${params[version]}-${params[core_version]}\"\n  _DIST_PLPATH=\"${_CORE}/${_DIST_PLNAME}\"\n  _DIST_PRPATH=\"${_DIST_PLPATH}/profiles/${params[profile_path]}${params[profile_name]}\"\n  _OCTO_PLPATH=\"${_pthDst}/${_DIST_PLNAME}${7}\"\n  if [ \"${_ALLOW_ALL}\" = \"YES\" ]; then\n    if [[ \"${_PLATFORMS_LIST}\" =~ ALL ]] \\\n      || [[ \"${_PLATFORMS_LIST}\" =~ ${params[dist]} ]]; then\n      if [ ! -d \"${_OCTO_PLPATH}\" ]; then\n        if _prompt_yes_no \"${params[description]} - ${params[url]}\" ; then\n          true\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_yOk}\"\n          if [ ! 
-d \"${_OCTO_PLPATH}\" ]; then\n            cd ${_pthDst}\n            [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _get_distro_ext '${_DIST_PLNAME}.tar.gz'\"\n            _get_distro_ext \"${_DIST_PLNAME}.tar.gz\"\n          fi\n          if [ -d \"${_OCTO_PLPATH}\" ]; then\n            _msg \"DISTRO: ${params[description]} installed\"\n            [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _save_verify_this_platform '${1}' '${2}' '${3}' '${params[description]}' '${5}' '${6}' '${7}'\"\n            _save_verify_this_platform \"${1}\" \"${2}\" \"${3}\" \"${params[description]}\" \"${5}\" \"${6}\" \"${7}\"\n          fi\n        else\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_noT}\"\n        fi\n      fi\n    fi\n  fi\n}\n\n\n_d_core_platform_install() {\n###---### Template function for core-only-type platforms\n#\n  if [ \"${1}\" = \"DE1\" ]; then\n    _CORE_DEV=\"${_DRUPAL11_1_D}\"\n    _CORE_STG=\"${_DRUPAL11_1_S}\"\n    _CORE_PRD=\"${_DRUPAL11_1_P}\"\n  elif [ \"${1}\" = \"DE2\" ]; then\n    _CORE_DEV=\"${_DRUPAL11_2_D}\"\n    _CORE_STG=\"${_DRUPAL11_2_S}\"\n    _CORE_PRD=\"${_DRUPAL11_2_P}\"\n  elif [ \"${1}\" = \"DE3\" ]; then\n    _CORE_DEV=\"${_DRUPAL11_3_D}\"\n    _CORE_STG=\"${_DRUPAL11_3_S}\"\n    _CORE_PRD=\"${_DRUPAL11_3_P}\"\n  elif [ \"${1}\" = \"DX6\" ]; then\n    _CORE_DEV=\"${_DRUPAL10_6_D}\"\n    _CORE_STG=\"${_DRUPAL10_6_S}\"\n    _CORE_PRD=\"${_DRUPAL10_6_P}\"\n  elif [ \"${1}\" = \"DX5\" ]; then\n    _CORE_DEV=\"${_DRUPAL10_5_D}\"\n    _CORE_STG=\"${_DRUPAL10_5_S}\"\n    _CORE_PRD=\"${_DRUPAL10_5_P}\"\n  elif [ \"${1}\" = \"DX4\" ]; then\n    _CORE_DEV=\"${_DRUPAL10_4_D}\"\n    _CORE_STG=\"${_DRUPAL10_4_S}\"\n    _CORE_PRD=\"${_DRUPAL10_4_P}\"\n  elif [ \"${1}\" = \"DX3\" ]; then\n    _CORE_DEV=\"${_DRUPAL10_3_D}\"\n    _CORE_STG=\"${_DRUPAL10_3_S}\"\n    _CORE_PRD=\"${_DRUPAL10_3_P}\"\n  elif [ \"${1}\" = \"DX2\" ]; then\n    _CORE_DEV=\"${_DRUPAL10_2_D}\"\n    
_CORE_STG=\"${_DRUPAL10_2_S}\"\n    _CORE_PRD=\"${_DRUPAL10_2_P}\"\n  elif [ \"${1}\" = \"DX1\" ]; then\n    _CORE_DEV=\"${_DRUPAL10_1_D}\"\n    _CORE_STG=\"${_DRUPAL10_1_S}\"\n    _CORE_PRD=\"${_DRUPAL10_1_P}\"\n  elif [ \"${1}\" = \"DX0\" ]; then\n    _CORE_DEV=\"${_DRUPAL10_0_D}\"\n    _CORE_STG=\"${_DRUPAL10_0_S}\"\n    _CORE_PRD=\"${_DRUPAL10_0_P}\"\n  elif [ \"${1}\" = \"DL9\" ]; then\n    _CORE_DEV=\"${_DRUPAL9_D}\"\n    _CORE_STG=\"${_DRUPAL9_S}\"\n    _CORE_PRD=\"${_DRUPAL9_P}\"\n  elif [ \"${1}\" = \"DL7\" ]; then\n    _CORE_DEV=\"${_DRUPAL7_D}\"\n    _CORE_STG=\"${_DRUPAL7_S}\"\n    _CORE_PRD=\"${_DRUPAL7_P}\"\n  elif [ \"${1}\" = \"DL6\" ]; then\n    _CORE_DEV=\"${_DRUPAL6_D}\"\n    _CORE_STG=\"${_DRUPAL6_S}\"\n    _CORE_PRD=\"${_DRUPAL6_P}\"\n  fi\n\n  # Declare 'params' as local to the function\n  local -A params=(\n    [dist]=\"$1\"\n    [version]=\"$2\"\n    [core_version]=\"$3\"\n    [description]=\"$4\"\n    [profile_path]=\"$5\"\n    [profile_name]=\"$6\"\n    [web_dir]=\"$7\"\n    [php_version]=\"$8\"\n    [url]=\"$9\"\n  )\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _d_core_platform_install\"\n    for key in \"${!params[@]}\"; do\n      _msg \"DEBUG: _d_core_platform_install ${key} is '${params[$key]}'\"\n    done\n  fi\n\n  # Access parameters by name\n  _REAL_PRNAME=\"${params[profile_name]}\"\n\n  if [[ \"${_PLATFORMS_LIST}\" =~ ALL ]] \\\n    || [[ \"${_PLATFORMS_LIST}\" =~ ${params[dist]} ]]; then\n    if [ ! 
-d \"${_pthDst}/${_CORE_DEV}\" ]; then\n      if _prompt_yes_no \"${params[description]} - ${params[url]}\" ; then\n        true\n        ###---### Drupal Core-Only Development\n        #\n        _SHRD_PLNAME=\"${_CORE_DEV}\"\n        [[ \"${1}\" =~ DL7 || \"${1}\" =~ DL6 ]] && _SHRD_PLPATH=\"${_CORE}/${_SHRD_PLNAME}\"\n        [[ \"${1}\" =~ DL7 || \"${1}\" =~ DL6 ]] && _SHRD_PRPATH=\"${_SHRD_PLPATH}/profiles/${_REAL_PRNAME}\"\n        _OCTO_PLPATH=\"${_pthDst}/${_SHRD_PLNAME}\"\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} [D.${_THIS_CORE}] ${_yOk}\"\n        _create_drupal_core_basic \"${params[dist]}\"\n        if [ -d \"${_OCTO_PLPATH}\" ]; then\n          _msg \"DISTRO: ${params[description]} [D.${_THIS_CORE}] installed\"\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _save_verify_this_platform '${1}' '${_SHRD_PLNAME}' '${3}' '${params[description]} [D.${_THIS_CORE}]' '${5}' '${6}' '${7}'\"\n          _save_verify_this_platform \"${1}\" \"${_SHRD_PLNAME}\" \"${3}\" \"${params[description]} [D.${_THIS_CORE}]\" \"${5}\" \"${6}\" \"${7}\"\n        fi\n\n        ###---### Drupal Core-Only Staging\n        #\n        _SHRD_PLNAME=\"${_CORE_STG}\"\n        [[ \"${1}\" =~ DL7 || \"${1}\" =~ DL6 ]] && _SHRD_PLPATH=\"${_CORE}/${_SHRD_PLNAME}\"\n        [[ \"${1}\" =~ DL7 || \"${1}\" =~ DL6 ]] && _SHRD_PRPATH=\"${_SHRD_PLPATH}/profiles/${_REAL_PRNAME}\"\n        _OCTO_PLPATH=\"${_pthDst}/${_SHRD_PLNAME}\"\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} [S.${_THIS_CORE}] ${_yOk}\"\n        _create_drupal_core_basic \"${params[dist]}\"\n        if [ -d \"${_OCTO_PLPATH}\" ]; then\n          _msg \"DISTRO: ${params[description]} [S.${_THIS_CORE}] installed\"\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _save_verify_this_platform '${1}' '${_SHRD_PLNAME}' '${3}' '${params[description]} [S.${_THIS_CORE}]' '${5}' '${6}' '${7}'\"\n          _save_verify_this_platform \"${1}\" 
\"${_SHRD_PLNAME}\" \"${3}\" \"${params[description]} [S.${_THIS_CORE}]\" \"${5}\" \"${6}\" \"${7}\"\n        fi\n\n        ###---### Drupal Core-Only Production\n        #\n        _SHRD_PLNAME=\"${_CORE_PRD}\"\n        [[ \"${1}\" =~ DL7 || \"${1}\" =~ DL6 ]] && _SHRD_PLPATH=\"${_CORE}/${_SHRD_PLNAME}\"\n        [[ \"${1}\" =~ DL7 || \"${1}\" =~ DL6 ]] && _SHRD_PRPATH=\"${_SHRD_PLPATH}/profiles/${_REAL_PRNAME}\"\n        _OCTO_PLPATH=\"${_pthDst}/${_SHRD_PLNAME}\"\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} [P.${_THIS_CORE}] ${_yOk}\"\n        _create_drupal_core_basic \"${params[dist]}\"\n        if [ -d \"${_OCTO_PLPATH}\" ]; then\n          _msg \"DISTRO: ${params[description]} [P.${_THIS_CORE}] installed\"\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DEBUG: _save_verify_this_platform '${1}' '${_SHRD_PLNAME}' '${3}' '${params[description]} [P.${_THIS_CORE}]' '${5}' '${6}' '${7}'\"\n          _save_verify_this_platform \"${1}\" \"${_SHRD_PLNAME}\" \"${3}\" \"${params[description]} [P.${_THIS_CORE}]\" \"${5}\" \"${6}\" \"${7}\"\n        fi\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DISTRO: ${params[description]} ${_noT}\"\n      fi\n    fi\n  fi\n}\n\n_d_dist_loop() {\n  # Loop through the _distros_names\n  for _dist in \"${!_distros_names[@]}\"; do\n    _d_name=\"${_distros_names[${_dist}]}\"\n    _d_vers=\"${_distros_versions[${_dist}]}\"\n    _d_core=\"${_distros_drupal_cores[${_dist}]}\"\n    _d_desc=\"${_d_name} ${_d_vers} ${_d_core} [${_THIS_CORE}]\"\n    _d_xdes=\"${_d_name} ${_d_vers}\"\n    _d_ppth=\"${_distros_profiles_paths[${_dist}]}\"\n    _d_prfn=\"${_distros_profiles_names[${_dist}]}\"\n    _d_webd=\"${_distros_web_dirs[${_dist}]}\"\n    _d_phpv=\"${_distros_php_versions[${_dist}]}\"\n    _d_xurl=\"${_distros_urls[${_dist}]}\"\n    if [[ \"${_dist}\" =~ DL || \"${_dist}\" =~ DX || \"${_dist}\" =~ DE ]]; then\n      _d_core_platform_install \"${_dist}\" \"${_d_vers}\" 
\"${_d_core}\" \"${_d_xdes}\" \"${_d_ppth}\" \"${_d_prfn}\" \"${_d_webd}\" \"${_d_phpv}\" \"${_d_xurl}\"\n    elif [[ \"${_dist}\" =~ CK1 || \"${_dist}\" =~ UC ]]; then\n      _d_dist_custom_platform_install \"${_dist}\" \"${_d_vers}\" \"${_d_core}\" \"${_d_desc}\" \"${_d_ppth}\" \"${_d_prfn}\" \"${_d_webd}\" \"${_d_phpv}\" \"${_d_xurl}\"\n    else\n      _d_dist_platform_install \"${_dist}\" \"${_d_vers}\" \"${_d_core}\" \"${_d_desc}\" \"${_d_ppth}\" \"${_d_prfn}\" \"${_d_webd}\" \"${_d_phpv}\" \"${_d_xurl}\"\n    fi\n  done\n}\n\n###\n###---### Action starts here.\n###\n\n\n###---### Prepare D6 and D7 shared core\n#\n_prepare_drupal6_core\n_prepare_drupal7_core\n\n# Prepare\n_prepare_for_save_verify_platforms\n\n# Loop\n_d_dist_loop\n\n\n###---### Remove some unused core files.\n#\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  echo \" \"\n  _msg \"${_STATUS} C: Removing some unused core files...\"\nfi\nif [[ \"${_CORE}\" =~ \"/data/all/\" ]]; then\n  rm -rf ${_CORE}/*/scripts\n  rm -f ${_CORE}/*{.make,.tar,.tar.gz,.zip}\nfi\n[ -e \"${_THIS_HM}/make_platform.php\" ] && rm -f ${_THIS_HM}/make_platform.php\n[ -e \"${_THIS_HM}/make_client.php\" ] && rm -f ${_THIS_HM}/make_client.php\n[ -e \"${_THIS_HM}/make_home.php\" ] && rm -f ${_THIS_HM}/make_home.php\ncd ${_CORE}\n\n###----------------------------------------###\n###\n###  Octopus Ægir Installer\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "aegir/scripts/AegirSetupM.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Barracuda Ægir Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\n\n###\n### Default variables\n###\n_bldPth=\"/opt/tmp/boa\"\n_filIncB=\"barracuda.sh.cnf\"\n_libFnc=\"${_bldPth}/lib/functions\"\n_vBs=\"/var/backups\"\nexport _tRee=dev\n\n\n###\n### Panic on missing include\n###\n_panic_exit() {\n  echo\n  echo \" EXIT: Required lib file not available?\"\n  echo \" EXIT: $1\"\n  echo \" EXIT: Cannot continue\"\n  echo \" EXIT: Bye (0)\"\n  echo\n  touch /opt/tmp/status-AegirSetupM-FAIL\n  exit 1\n}\n\n\n###\n### Include default settings and basic functions\n###\n[ -r \"${_vBs}/${_filIncB}\" ] || _panic_exit \"${_vBs}/${_filIncB}\"\n  source \"${_vBs}/${_filIncB}\"\n\n\n###\n### Include shared functions\n###\n_FL=\"helper master\"\nfor f in ${_FL}; do\n  [ -r \"${_libFnc}/${f}.sh.inc\" ] || _panic_exit \"${f}\"\n  source \"${_libFnc}/${f}.sh.inc\"\ndone\n\n\n###\n### Local settings\n###\nif [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n  _THIS_DB_HOST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname 
-f 2>/dev/null)\"\nfi\nif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n  && [ -x \"/opt/php84/bin/php\" ]; then\n  _T_CLI=/opt/php84/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n  && [ -x \"/opt/php85/bin/php\" ]; then\n  _T_CLI=/opt/php85/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n  && [ -x \"/opt/php83/bin/php\" ]; then\n  _T_CLI=/opt/php83/bin\nfi\n_ROOT=\"${HOME}\"\n_DRUSHCMD=\"${_T_CLI}/php ${_ROOT}/drush/drush.php\"\n#\nPATH=${_T_CLI}:/usr/local/bin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\nSHELL=/bin/bash\n#\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_gCb=\"git clone --branch\"\n_gitHub=\"https://github.com/omega8cc\"\n_gitLab=\"https://gitlab.com/omega8cc\"\n#\nexport _urlDev=\"http://${_USE_MIR}/dev\"\nexport _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n\n###--------------------###\n\nif [ \"$(id -u)\" -eq 0 ]; then\n  _msg \"FATAL ERROR: This script should be run as a non-root user\"\n  _msg \"FATAL ERROR: Aborting AegirSetupM installer NOW!\"\n  touch /opt/tmp/status-AegirSetupM-FAIL\n  exit 1\nfi\n\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  _msg \"INFO: Installing Ægir Provision backend...\"\nfi\n\n_hostmaster_dr_up\n_provision_backend_up\n\nif ${_DRUSHCMD} help | grep \"^ provision-install\" > /dev/null ; then\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Drush test result OK\"\n  fi\nelse\n  _msg \"FATAL ERROR: Drush is broken (${_DRUSHCMD} help failed)\"\n  _msg \"FATAL ERROR: Aborting AegirSetupM installer NOW!\"\n  touch /opt/tmp/status-AegirSetupM-FAIL\n  exit 1\nfi\n\nsed -i \"s/files.aegir.cc/${_USE_MIR}/g\" ${_ROOT}/.drush/sys/provision/aegir.make &> /dev/null\n\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  _msg \"INFO: Installing the frontend\"\nfi\n${_DRUSHCMD} cc drush >${_ROOT}/install.log 2>&1\nrm -rf 
${_ROOT}/.tmp/cache\n_HM_ROOT=\"${_ROOT}/hostmaster-${_AEGIR_VERSION}\"\n\n${_DRUSHCMD} hostmaster-install \\\n  --aegir_host=${_AEGIR_HOST} \\\n  --aegir_db_user=${_AEGIR_DB_USER} \\\n  --aegir_db_pass=${_ESC_PASS} \\\n  --aegir_root=${_ROOT} \\\n  --root=${_HM_ROOT} \\\n  --version=${_AEGIR_VERSION} $@\n\nmkdir -p /var/aegir/backups/system\nchmod 700 /var/aegir/backups/system\n_L_SYS=\"/var/aegir/backups/system/.${_AEGIR_DB_USER}.pass.txt\"\necho \"${_ESC_PASS}\" > ${_L_SYS}\nchmod 0600 ${_L_SYS}\n\n###----------------------------------------###\n###\n###  Barracuda Ægir Installer\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "aegir/scripts/AegirUpgrade.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Barracuda Ægir Installer\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\n\n###\n### Default variables\n###\n_bldPth=\"/opt/tmp/boa\"\n_filIncB=\"barracuda.sh.cnf\"\n_libFnc=\"${_bldPth}/lib/functions\"\n_vBs=\"/var/backups\"\nexport _tRee=dev\n\n\n###\n### Panic on missing include\n###\n_panic_exit() {\n  echo\n  echo \" EXIT: Required lib file not available?\"\n  echo \" EXIT: $1\"\n  echo \" EXIT: Cannot continue\"\n  echo \" EXIT: Bye (0)\"\n  echo\n  touch /opt/tmp/status-AegirUpgrade-FAIL\n  exit 1\n}\n\n\n###\n### Include default settings and basic functions\n###\n[ -r \"${_vBs}/${_filIncB}\" ] || _panic_exit \"${_vBs}/${_filIncB}\"\n  source \"${_vBs}/${_filIncB}\"\n\n\n###\n### Include shared functions\n###\n_FL=\"helper master\"\nfor f in ${_FL}; do\n  [ -r \"${_libFnc}/${f}.sh.inc\" ] || _panic_exit \"${f}\"\n  source \"${_libFnc}/${f}.sh.inc\"\ndone\n\n\n###\n### Local settings\n###\nif [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n  _THIS_DB_HOST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname 
-f 2>/dev/null)\"\nfi\nif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n  && [ -x \"/opt/php84/bin/php\" ]; then\n  _T_CLI=/opt/php84/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n  && [ -x \"/opt/php85/bin/php\" ]; then\n  _T_CLI=/opt/php85/bin\nelif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n  && [ -x \"/opt/php83/bin/php\" ]; then\n  _T_CLI=/opt/php83/bin\nfi\n_ROOT=\"${HOME}\"\n_DRUSHCMD=\"${_T_CLI}/php ${_ROOT}/drush/drush.php\"\n#\nPATH=${_T_CLI}:/usr/local/bin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\nSHELL=/bin/bash\n#\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_gCb=\"git clone --branch\"\n_gitHub=\"https://github.com/omega8cc\"\n_gitLab=\"https://gitlab.com/omega8cc\"\n#\nexport _urlDev=\"http://${_USE_MIR}/dev\"\nexport _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n\n\n###---### Local functions\n#\n#\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n#\n_hostmaster_mv_up() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Moving old directories\"\n  fi\n  mkdir -p ${_ROOT}/backups/system\n  chmod 700 ${_ROOT}/backups/system\n  mv -f ${_ROOT}/backups/drush-pre* ${_ROOT}/backups/system/ &> /dev/null\n  _D_EXT=\"provision clean_missing_modules drupalgeddon drush_ecl make_local \\\n    provision_boost provision_cdn provision_civicrm provision_site_backup \\\n    provision_tasks_extra registry_rebuild remote_import \\\n    safe_cache_form_clear security_check security_review utf8mb4_convert\"\n  for e in ${_D_EXT}; do\n    if [ -e \"${_ROOT}/.drush/$e\" ]; then\n      mv -f ${_ROOT}/.drush/$e \\\n        ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n      mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n    fi\n    if [ -e \"${_ROOT}/.drush/xts/$e\" ]; then\n      mv -f 
${_ROOT}/.drush/xts/$e \\\n        ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n      mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n    fi\n    if [ -e \"${_ROOT}/.drush/usr/$e\" ]; then\n      mv -f ${_ROOT}/.drush/usr/$e \\\n        ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n      mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n    fi\n    if [ -e \"${_ROOT}/.drush/sys/$e\" ]; then\n      mv -f ${_ROOT}/.drush/sys/$e \\\n        ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n      mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n    fi\n  done\n}\n#\n_hostmaster_go_up() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Installing Ægir Provision backend...\"\n  fi\n  mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n  rm -rf ${_ROOT}/.drush/drush_make\n  rm -rf ${_ROOT}/.drush/sys/drush_make\n  cd ${_ROOT}/.drush\n  if [ \"${_DL_MODE}\" = \"BATCH\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    rm -rf ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf ${_ROOT}/.drush/{provision,drush_make}\n    _get_dev_ext \"backend.tar.gz\"\n    mv -f ${_ROOT}/.drush/backend/sys ${_ROOT}/.drush/\n    mv -f ${_ROOT}/.drush/backend/xts ${_ROOT}/.drush/\n    mv -f ${_ROOT}/.drush/backend/usr ${_ROOT}/.drush/\n    if [ -e \"${_ROOT}/.drush/sys/provision/provision.inc\" ] \\\n      && [ -d \"${_ROOT}/.drush/xts/security_review\" ] \\\n      && [ -d \"${_ROOT}/.drush/usr/registry_rebuild\" ]; then\n      [ -e \"${_ROOT}/.drush/backend\" ] && rm -rf ${_ROOT}/.drush/backend*\n    fi\n  elif [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    rm -rf ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf 
${_ROOT}/.drush/{provision,drush_make}\n    mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n    _rD=\"${_ROOT}/.drush\"\n    ${_gCb} ${_BRANCH_PRN} ${_gitHub}/provision.git      ${_rD}/sys/provision &> /dev/null\n    ${_gCb} 7.x-1.x-dev ${_gitHub}/drupalgeddon.git      ${_rD}/usr/drupalgeddon &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/drush_ecl.git             ${_rD}/usr/drush_ecl &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/security_review.git       ${_rD}/xts/security_review &> /dev/null\n    ${_gCb} 7.x-2.x ${_gitHub}/provision_boost.git       ${_rD}/xts/provision_boost &> /dev/null\n    ${_gCb} 7.x-2.x ${_gitHub}/registry_rebuild.git      ${_rD}/usr/registry_rebuild &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/safe_cache_form_clear.git ${_rD}/usr/safe_cache_form_clear &> /dev/null\n    rm -rf ${_rD}/*/.git\n    rm -rf ${_rD}/*/*/.git\n    cd ${_rD}/usr\n    _get_dev_ext \"clean_missing_modules.tar.gz\"\n    _get_dev_ext \"utf8mb4_convert-7.x-1.3.tar.gz\"\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    cd ${_ROOT}/.drush/sys\n    _get_dev_ext \"provision.tar.gz\"\n    cd ${_ROOT}/.drush/usr\n    _get_dev_ext \"clean_missing_modules.tar.gz\"\n    _get_dev_ext \"drupalgeddon.tar.gz\"\n    _get_dev_ext \"drush_ecl.tar.gz\"\n    _get_dev_ext \"registry_rebuild.tar.gz\"\n    _get_dev_ext \"safe_cache_form_clear.tar.gz\"\n    _get_dev_ext \"utf8mb4_convert-7.x-1.3.tar.gz\"\n    cd ${_ROOT}/.drush/xts\n    _get_dev_ext \"provision_boost.tar.gz\"\n    _get_dev_ext \"security_review.tar.gz\"\n  fi\n  rm -rf ${_ROOT}/.drush/*/.git\n  rm -rf ${_ROOT}/.drush/*/*/.git\n  sed -i \"s/files.aegir.cc/${_USE_MIR}/g\" ${_ROOT}/.drush/sys/provision/aegir.make &> /dev/null\n  cd ${_PREV_HM_ROOT}\n}\n#\n_hostmaster_dr_tt() {\n  if ${_DRUSHCMD} help | grep \"^ provision-install\" > /dev/null ; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} 
B: Drush test result OK\"\n    fi\n  else\n    _msg \"FATAL ERROR: Drush is broken (${_DRUSHCMD} help failed)\"\n    _msg \"FATAL ERROR: Aborting AegirUpgrade installer NOW!\"\n    touch /opt/tmp/status-AegirUpgrade-FAIL\n    exit 1\n  fi\n}\n#\n_hostmaster_mi_up() {\n  _msg \"INFO: Running hostmaster-migrate, please wait...\"\n  ### security_review breaks the upgrade if active\n  mv -f ${_ROOT}/.drush/xts/security_review/security_review.drush.inc \\\n    ${_ROOT}/.drush/xts/security_review/foo.txt  &> /dev/null\n  export DEBIAN_FRONTEND=noninteractive\n  export APT_LISTCHANGES_FRONTEND=none\n  if [ -z \"${TERM+x}\" ]; then\n    export TERM=vt100\n  fi\n\n  #\n  # Fix broken Entity module if needed.\n  #\n  _pthA=\"profiles/hostmaster/modules/contrib/entity\"\n  _pthB=\"module_filter.module\"\n  #\n  if [ -e \"${_PREV_HM_ROOT}/${_pthA}/${_pthB}\" ]; then\n    _msg \"INFO: Fixing broken Entity module...\"\n    rm -rf ${_PREV_HM_ROOT}/${_pthA}\n    cd ${_PREV_HM_ROOT}/profiles/hostmaster/modules/contrib\n    _get_dev_stc \"entity-7.x-1.12.tar.gz\"\n    ${_DRUSHCMD} @hostmaster en entity -y\n    ${_DRUSHCMD} @hostmaster dis hosting_ssl -y\n    ${_DRUSHCMD} @hostmaster dis hosting_le -y\n    ${_DRUSHCMD} @hostmaster dis hosting_le_vhost -y\n    ${_DRUSHCMD} @hostmaster dis hosting_nginx_ssl -y\n    ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_service WHERE service LIKE 'http'\"\n    ${_DRUSHCMD} @hostmaster sqlq \"INSERT INTO hosting_service (nid, vid, service, type, restart_cmd, port, available) VALUES ('2', '2', 'http', 'nginx', 'sudo /etc/init.d/nginx reload', '80', '1')\"\n    ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_service SET type='nginx' WHERE service='http'\"\n    ${_DRUSHCMD} @hostmaster hosting-task @server_master verify --force\n    ${_DRUSHCMD} @hostmaster hosting-dispatch\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force\n    wait\n    sleep 
5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force\n    wait\n    _msg \"INFO: Waiting 15 seconds...\"\n    sleep 15\n  fi\n  _BROKEN_SSL_TEST=$(grep \"nginx default ssl server\" /var/aegir/config/server_master/nginx.conf 2>&1)\n  if [ ! -z \"${_BROKEN_SSL_TEST}\" ]; then\n    _msg \"INFO: Disabling nginx_ssl on master...\"\n    ${_DRUSHCMD} @hostmaster dis hosting_ssl -y\n    ${_DRUSHCMD} @hostmaster dis hosting_le -y\n    ${_DRUSHCMD} @hostmaster dis hosting_le_vhost -y\n    ${_DRUSHCMD} @hostmaster dis hosting_nginx_ssl -y\n    ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_service WHERE service LIKE 'http'\"\n    ${_DRUSHCMD} @hostmaster sqlq \"INSERT INTO hosting_service (nid, vid, service, type, restart_cmd, port, available) VALUES ('2', '2', 'http', 'nginx', 'sudo /etc/init.d/nginx reload', '80', '1')\"\n    ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_service SET type='nginx' WHERE service='http'\"\n    ${_DRUSHCMD} @hostmaster hosting-task @server_master verify --force\n    ${_DRUSHCMD} @hostmaster hosting-dispatch\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force\n    wait\n    touch /var/aegir/disable_nginx_ssl.log\n    _msg \"INFO: Waiting 5 seconds...\"\n    sleep 5\n  fi\n  if [ -e \"${_PREV_HM_ROOT}/modules/path_alias_cache\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      ${_DRUSHCMD} @hostmaster dis aegir_custom_settings -y\n      ${_DRUSHCMD} @hostmaster pm-uninstall aegir_custom_settings -y\n      ${_DRUSHCMD} @hostmaster dis hosting_advanced_cron -y\n      ${_DRUSHCMD} @hostmaster en ctools -y\n      ${_DRUSHCMD} @hostmaster registry-rebuild\n    else\n      ${_DRUSHCMD} @hostmaster dis aegir_custom_settings -y &> /dev/null\n      ${_DRUSHCMD} @hostmaster pm-uninstall aegir_custom_settings -y &> /dev/null\n      ${_DRUSHCMD} @hostmaster dis 
hosting_advanced_cron -y &> /dev/null\n      ${_DRUSHCMD} @hostmaster en ctools -y &> /dev/null\n      ${_DRUSHCMD} @hostmaster registry-rebuild &> /dev/null\n    fi\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      ${_DRUSHCMD} @hostmaster dis hosting_custom_settings -y\n      ${_DRUSHCMD} @hostmaster pm-uninstall hosting_custom_settings -y\n      ${_DRUSHCMD} @hostmaster registry-rebuild\n    else\n      ${_DRUSHCMD} @hostmaster dis hosting_custom_settings -y &> /dev/null\n      ${_DRUSHCMD} @hostmaster pm-uninstall hosting_custom_settings -y &> /dev/null\n      ${_DRUSHCMD} @hostmaster registry-rebuild &> /dev/null\n    fi\n  fi\n  ${_DRUSHCMD} cc drush &> /dev/null\n  rm -rf ${_ROOT}/.tmp/cache\n  ${_DRUSHCMD} @hostmaster sqlc < ${_bldPth}/aegir/helpers/hosting_cron.sql &> /dev/null\n  ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_task_log \\\n    WHERE timestamp < UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 3 MONTH))\" &> /dev/null\n  ${_DRUSHCMD} @hostmaster sqlq \"OPTIMIZE TABLE hosting_task_log\" &> /dev/null\n  ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_task \\\n    WHERE task_type='delete' AND task_status='-1'\" &> /dev/null\n  ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_task \\\n    WHERE task_type='delete' AND task_status='0' AND executed='0'\" &> /dev/null\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    ${_DRUSHCMD} hostmaster-migrate ${_DOMAIN} ${_HM_ROOT} -y -d\n  else\n    ${_DRUSHCMD} hostmaster-migrate ${_DOMAIN} ${_HM_ROOT} -y &> /dev/null\n  fi\n  if [ -e \"${_ROOT}/.drush/hostmaster.alias.drushrc.php\" ]; then\n    _THIS_HM_ROOT=$(cat ${_ROOT}/.drush/hostmaster.alias.drushrc.php \\\n      | grep \"root'\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $3}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    if [ -e \"${_THIS_HM_ROOT}/sites/all\" ] \\\n      && [ ! 
-e \"${_THIS_HM_ROOT}/sites/all/libraries\" ]; then\n      mkdir -p \\\n        ${_THIS_HM_ROOT}/sites/all/{modules,themes,libraries} &> /dev/null\n    fi\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    ${_DRUSHCMD} @hostmaster registry-rebuild\n    ${_DRUSHCMD} @hostmaster en hosting_cron -y\n    ${_DRUSHCMD} @hostmaster cache-clear all\n    ${_DRUSHCMD} @hostmaster updb -y\n  else\n    ${_DRUSHCMD} @hostmaster registry-rebuild &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_cron -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster cache-clear all &> /dev/null\n    ${_DRUSHCMD} @hostmaster updb -y &> /dev/null\n  fi\n  export DEBIAN_FRONTEND=text\n  mv -f ${_ROOT}/.drush/xts/security_review/foo.txt \\\n    ${_ROOT}/.drush/xts/security_review/security_review.drush.inc &> /dev/null\n  mkdir -p ${_ROOT}/backups/system/old_hostmaster\n  chmod 700 ${_ROOT}/backups/system/old_hostmaster\n  chmod 700 ${_ROOT}/backups/system\n  mv -f ${_ROOT}/backups/*host8* \\\n    ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n  mv -f ${_ROOT}/backups/*o8.io* \\\n    ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n  mv -f ${_ROOT}/backups/*boa.io* \\\n    ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n  mv -f ${_ROOT}/backups/*aegir.cc* \\\n    ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n  chmod 600 ${_ROOT}/backups/system/old_hostmaster/* &> /dev/null\n}\n\n###--------------------###\n\n_LASTNUM=001\n_DISTRO=001\n_PREV_HM_ROOT=$(find ${_ROOT} -maxdepth 1 -type d | grep hostmaster 2>&1)\n\nif [ -d \"${_ROOT}/host_master\" ]; then\n  if [ ! -d \"${_ROOT}/host_master/000\" ]; then\n    mkdir -p ${_ROOT}/host_master/000\n    if [ ! 
-e \"${_ROOT}/host_master/000/placeholder_dont_remove.txt\" ]; then\n      touch ${_ROOT}/host_master/000/placeholder_dont_remove.txt\n    fi\n  fi\nfi\n\nif [ -d \"${_ROOT}/host_master/000\" ]; then\n  cd ${_ROOT}/host_master\n  _list=([0-9]*)\n  _last=${_list[@]: -1}\n  _LASTNUM=$_last\n  _BASH_TEST=$(bash --version 2>&1)\n  if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n    _nextnum=00$((10#0${_last%%[^0-9]*} + 1))\n  else\n    _nextnum=00$((10#${_last%%[^0-9]*} + 1))\n  fi\n  _nextnum=${_nextnum: -3}\n  _PREV_HM_ROOT_TEST=\"${_ROOT}/host_master/${_LASTNUM}\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Testing previous install...\"\n  fi\n  if [ -e \"${_PREV_HM_ROOT_TEST}/sites/${_DOMAIN}/settings.php\" ]; then\n    _DISTRO=${_nextnum}\n    _PREV_HM_ROOT=\"${_ROOT}/host_master/${_LASTNUM}\"\n    if [ -e \"${_PREV_HM_ROOT}/modules/path_alias_cache\" ]; then\n      _DEBUG_MODE=YES\n    fi\n  else\n    _DEBUG_MODE=YES\n    _msg \"INFO: Testing previous install...\"\n    _msg \"INFO: OPS, zombie found, moving it to backups...\"\n    sleep 1\n    mv -f ${_PREV_HM_ROOT_TEST} \\\n      ${_ROOT}/backups/system/empty-hm-${_LASTNUM}-${_NOW} &> /dev/null\n    cd ${_ROOT}/host_master\n    _list=([0-9]*)\n    _last=${_list[@]: -1}\n    _LASTNUM=$_last\n    _BASH_TEST=$(bash --version 2>&1)\n    if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n      _nextnum=00$((10#0${_last%%[^0-9]*} + 1))\n    else\n      _nextnum=00$((10#${_last%%[^0-9]*} + 1))\n    fi\n    _nextnum=${_nextnum: -3}\n    _DISTRO=${_nextnum}\n    _PREV_HM_ROOT_TEST=\"${_ROOT}/host_master/${_LASTNUM}\"\n    _msg \"INFO: Testing previous install again after removing zombie...\"\n    sleep 1\n    if [ -e \"${_PREV_HM_ROOT_TEST}/sites/${_DOMAIN}/settings.php\" ]; then\n      _DISTRO=${_nextnum}\n      _PREV_HM_ROOT=\"${_ROOT}/host_master/${_LASTNUM}\"\n    else\n      _DEBUG_MODE=YES\n      
_msg \"INFO: Testing previous install again...\"\n      _msg \"INFO: OPS, another zombie found, moving it to backups...\"\n      sleep 1\n      mv -f ${_PREV_HM_ROOT_TEST} \\\n        ${_ROOT}/backups/system/empty-hm-${_LASTNUM}-${_NOW}-sec &> /dev/null\n      cd ${_ROOT}/host_master\n      _list=([0-9]*)\n      _last=${_list[@]: -1}\n      _LASTNUM=$_last\n      _BASH_TEST=$(bash --version 2>&1)\n      if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n        _nextnum=00$((10#0${_last%%[^0-9]*} + 1))\n      else\n        _nextnum=00$((10#${_last%%[^0-9]*} + 1))\n      fi\n      _nextnum=${_nextnum: -3}\n      _DISTRO=${_nextnum}\n      _PREV_HM_ROOT_TEST=\"${_ROOT}/host_master/${_LASTNUM}\"\n      _msg \"INFO: Testing previous install again after removing second zombie...\"\n      sleep 1\n      if [ -e \"${_PREV_HM_ROOT_TEST}/sites/${_DOMAIN}/settings.php\" ]; then\n        _DISTRO=${_nextnum}\n        _PREV_HM_ROOT=\"${_ROOT}/host_master/${_LASTNUM}\"\n      fi\n    fi\n  fi\nfi\n\n_HM_ROOT=\"${_ROOT}/host_master/${_DISTRO}\"\nif [ -d \"${_HM_ROOT}\" ]; then\n  _msg \"FATAL ERROR: ${_HM_ROOT} already exists\"\n  _msg \"FATAL ERROR: Too many zombies to delete! Try again...\"\n  _msg \"FATAL ERROR: Aborting AegirUpgrade installer NOW!\"\n  touch /opt/tmp/status-AegirUpgrade-FAIL\n  exit 1\nfi\n\nmkdir -p ${_ROOT}/host_master\nchmod 711 ${_ROOT}/host_master &> /dev/null\nif [ ! 
-d \"/var/aegir/.drush/sys/provision/http\" ]; then\n  _msg \"FATAL ERROR: Required directory does not exist:\"\n  _msg \"FATAL ERROR: /var/aegir/.drush/sys/provision/http\"\n  _msg \"FATAL ERROR: Aborting AegirUpgrade installer NOW!\"\n  touch /opt/tmp/status-AegirUpgrade-FAIL\n  exit 1\nfi\nif [ -e \"${_PREV_HM_ROOT}/sites/${_DOMAIN}/settings.php\" ]; then\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Test OK, we can proceed with Hostmaster upgrade\"\n  fi\n  _hostmaster_mv_up\n  _hostmaster_dr_up\n  _hostmaster_go_up\n  _hostmaster_dr_tt\n  _hostmaster_mi_up\nelse\n  _msg \"FATAL ERROR: Your setup is probably broken because required file\"\n  _msg \"FATAL ERROR: ${_PREV_HM_ROOT}/sites/${_DOMAIN}/settings.php\"\n  _msg \"FATAL ERROR: does not exist\"\n  _msg \"FATAL ERROR: Aborting AegirUpgrade installer NOW!\"\n  touch /opt/tmp/status-AegirUpgrade-FAIL\n  exit 1\nfi\n\n\n###----------------------------------------###\n###\n###  Barracuda Ægir Installer\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "aegir/scripts/run-xdrago",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n_H_USER=EDIT_USER\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 9 -p $$\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n_run_cleanup() {\n  _buildTest=\"1\"\n  _tpDir=\"/data/disk/${_H_USER}/.tmp\"\n  _ceDir=\"${_tpDir}/cache\"\n  _dlDir=\"${_ceDir}/download\"\n  _gtDir=\"${_ceDir}/git\"\n  _clCtr=\"/data/disk/${_H_USER}/static/control/clear-drush-cache.info\"\n  _exCtr=\"/data/disk/${_H_USER}/backups/tmp_expim/metadata\"\n  if [ -e \"${_tpDir}\" ]; then\n    _buildTest=$(ls ${_tpDir} | grep \"_tmp_\" | wc -l | tr -d \"\\n\" 2>&1)\n    _buildTest=${_buildTest//[^0-9]/}\n  fi\n  if [ -e \"${_clCtr}\" ]; then\n    if [ -e \"${_exCtr}\" ]; then\n      rm -f ${_exCtr}\n      rm -f ${_clCtr}\n      _buildTest=\"0\"\n    fi\n    if [ -e \"${_gtDir}\" ] || [ \"${_buildTest}\" -ge 1 ]; then\n      rm -rf ${_tpDir}/*\n      rm -f ${_clCtr}\n      _buildTest=\"0\"\n    fi\n  fi\n  if [ \"${_buildTest}\" = \"0\" ] && [ -e \"${_gtDir}\" ]; then\n    rm -rf ${_gtDir}\n    rm -rf ${_dlDir}\n  fi\n}\n_run_cleanup\n\n# Remove dangerous stuff from the string.\n_sanitize_string() {\n  echo \"$1\" | sed 's/[\\\\\\/\\^\\?\\>\\`\\#\\\"\\{\\(\\&\\|\\*]//g; s/\\(['\"'\"'\\]\\)//g'\n}\n\n# Generate new sftp password and update expiration date\n_if_sftp_password_update() {\n 
 _upCtr=\"/data/disk/${_H_USER}/static/control/run-sftp-password-update.pid\"\n  if [ -e \"${_upCtr}\" ]; then\n    _sftpUser=\"${_H_USER}.ftp\"\n    rm -f ${_upCtr}\n    _PWD_CHARS=64\n    _RANDPASS_TEST=$(randpass -V 2>&1)\n    if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n      _ESC_PASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n    else\n      _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n    fi\n    _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n    _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n    if [ -z \"${_ESC_PASS}\" ] || [ \"${_LEN_PASS}\" -lt 9 ]; then\n      _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n    fi\n    _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n    if [ ! -z \"${_ESC_PASS}\" ] && [ \"${_LEN_PASS}\" -gt 9 ]; then\n      chage -I -1 -m 0 -M 99999 -E -1 ${_sftpUser}\n      echo \"${_sftpUser}:${_ESC_PASS}\" | chpasswd\n      chage -M 90 ${_sftpUser}\n      chage -W 7 ${_sftpUser}\n      chage -d $(date +%Y-%m-%d) ${_sftpUser}\n      echo \"${_ESC_PASS}\" > /data/disk/${_H_USER}/static/control/new-${_sftpUser}-password.txt\n    fi\n  fi\n}\n\n_if_octopus_upgrade() {\n  _upCtr=\"/data/disk/${_H_USER}/static/control/run-upgrade.pid\"\n  _plCtr=\"/data/disk/${_H_USER}/static/control/platforms.info\"\n  if [ -e \"${_plCtr}\" ] && [ -e \"${_upCtr}\" ]; then\n    rm -f ${_upCtr}\n    [ -e \"/root/.silent.update.cnf\" ] && rm -f /root/.silent.update.cnf\n    _TODAY=$(date +%y%m%d)\n    _TODAY=${_TODAY//[^0-9]/}\n    _NOW=$(date +%y%m%d-%H%M%S)\n    _NOW=${_NOW//[^0-9-]/}\n    _vBs=\"/var/backups\"\n    _LOG_UP_DIR=\"${_vBs}/reports/up/$(basename \"$0\")/${_TODAY}\"\n    _UP_OCTOPUS_LOG=\"${_LOG_UP_DIR}/$(basename 
\"$0\")-up-octopus-${_NOW}.log\"\n    mkdir -p ${_LOG_UP_DIR}\n    nohup /opt/local/bin/octopus up-${_tRee} ${_H_USER} force log noscreen >${_UP_OCTOPUS_LOG} 2>&1 &\n  fi\n}\n\n_run_action() {\n  if [ \"${_buildTest}\" = \"0\" ] \\\n    || [ -z \"${_buildTest}\" ] \\\n    || [ ! -e \"${_ceDir}\" ]; then\n    su -s /bin/bash - ${_H_USER} -c \"drush8 cc drush\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_H_USER} -c \"bash /data/disk/${_H_USER}/aegir.sh\"\n    wait\n    touch /var/log/boa/last-run-${_H_USER}\n  else\n    touch /var/log/boa/skip-run-${_H_USER}\n  fi\n}\n\nif [ -e \"/run/boa_wait.pid\" ]; then\n  touch /var/log/boa/wait-${_H_USER}\n  exit 0\nelse\n  _if_sftp_password_update\n  _if_octopus_upgrade\n  _run_action\n  exit 0\nfi\n"
  },
  {
    "path": "aegir/tools/BOND.sh.txt",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Barracuda-Octopus-Nginx-Drupal Tuner\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: run it with bash, not with sh  ###\n###----------------------------------------###\n###\n### $ bash BOND.sh.txt\n###\n### Note: to restore default values it is\n###       enough to start this script with\n###       any values defined below and answer\n###       NO when it prompts for confirmation\n###       \"Are you ready to tune your Ægir\".\n###\n\n\n###----------------------------------------###\n### EDITME                                 ###\n###----------------------------------------###\n###\n### Enter below the settings you wish to use.\n###\n\n\n###----------------------------------------###\n### Hostmaster root directory - /var/aegir\n###\n### Note: most of values tuned by this script\n###       are server-vide, while some, like\n###       mod_evasive settings will affect\n###       only sites hosted on the Ægir\n###       Satellite Instance defined below.\n###\n_TUNE_HOSTMASTER=/data/disk/o1\n\n\n###----------------------------------------###\n### Nginx server mod_evasive - default ON\n###\n### Note: 
running the verify task on any SITE\n###       will restore the default value ON\n###       for that site only, while TUNER\n###       will turn OFF/ON this feature\n###       for all sites hosted on the\n###       Hostmaster defined above.\n###\n_TUNE_NGINX_CONNECT=OFF\n\n\n###----------------------------------------###\n### Nginx server fastcgi timeout - default 180\n###\n### Note: running the verify task on the SERVER\n###       in the Hostmaster created\n###       by Barracuda (not Octopus!)\n###       will restore the default value\n###       for the server and all existing\n###       Ægir Satellite Instances.\n###\n_TUNE_NGINX_TIMEOUT=9999\n\n\n###----------------------------------------###\n### Nginx server firewall limit - default 300\n###\n### Note: don't change the default value\n###       if you are the only visitor, or\n###       you will easily lock yourself out.\n###\n###       The default value 300 means the\n###       firewall limit is OFF because\n###       it scans only the last 300 lines\n###       of your web server log file.\n###\n###       If you set this value to 100,\n###       then every visitor IP with more\n###       than 100 out of the last 300\n###       requests will be locked.\n###\n###       Only dynamic requests (pages) are\n###       counted because static files like\n###       images are generally not logged.\n###\n_TUNE_NGINX_FIREWALL=300\n\n\n###----------------------------------------###\n### Database server timeout - default 9999\n###\n_TUNE_SQL_TIMEOUT=9999\n\n\n###----------------------------------------###\n### PHP-FPM server timeout - default 180\n###\n_TUNE_PHP_FPM_TIMEOUT=9999\n\n\n###----------------------------------------###\n### PHP-CLI server timeout - default 9999\n###\n_TUNE_PHP_CLI_TIMEOUT=9999\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport 
PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_AEGIR_VERSION=\"${_tRee}\"\n_BRANCH_BOA=\"5.x-${_tRee}\"\n_X_VERSION=\"BOA-5.9.1-${_tRee}\"\n_MYSQLTUNER_VRN=1.9.4\n\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n_RAM=$(free -mt | grep Mem: | awk '{ print $2 }' 2>&1)\n_SPINNER=NO\nif [ -n \"${STY+x}\" ]; then\n  _SPINNER=NO\nfi\n_PHP56_API=20131226\n_PHP56_VRN=5.6.40\n_PHP70_API=20151012\n_PHP70_VRN=7.0.33\n_PHP71_API=20160303\n_PHP71_VRN=7.1.33\n_PHP72_API=20170718\n_PHP72_VRN=7.2.34\n_PHP73_API=20180731\n_PHP73_VRN=7.3.33\n_PHP74_API=20190902\n_PHP74_VRN=7.4.33\n_PHP80_API=20200930\n_PHP80_VRN=8.0.30\n_PHP81_API=20210902\n_PHP81_VRN=8.1.34\n_PHP82_API=20220829\n_PHP82_VRN=8.2.31\n_PHP83_API=20230831\n_PHP83_VRN=8.3.31\n_PHP84_API=20240924\n_PHP84_VRN=8.4.21\n_PHP85_API=20250925\n_PHP85_VRN=8.5.6\n\n###\n### Helper variables\n###\n_bldPth=\"/opt/tmp/boa\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_gCb=\"git clone --branch\"\n_gitHub=\"https://github.com/omega8cc\"\n_gitLab=\"https://gitlab.com/omega8cc\"\n_libFnc=\"${_bldPth}/lib/functions\"\n_locCnf=\"${_bldPth}/aegir/conf\"\n_vBs=\"/var/backups\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n###---### Functions\n#\n# Clean pid files on exit.\n_clean_pid_exit() {\n  if [ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.bond.sh.exit.exceptions.log\n    [ -e \"/opt/tmp/boa\" ] && rm -rf /opt/tmp/*\n  fi\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  service cron start &> /dev/null\n  exit 
1\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n    # Sanitize to allow only digits and minus sign\n    export _B_NICE=${_B_NICE//[^0-9-]/}\n\n    # Validate and set default if necessary\n    if ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n      _B_NICE=0\n    fi\n\n    # Clamp the value within -20 to 19\n    if (( _B_NICE < -20 )); then\n      _B_NICE=-20\n    elif (( _B_NICE > 19 )); then\n      _B_NICE=19\n    fi\n\n    renice ${_B_NICE} -p $$ &> /dev/null\n    ionice -c2 -n7 -p $$\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    _clean_pid_exit\n  fi\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n}\n_check_root\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n_check_sql_running() {\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! 
-e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"INFO: Waiting for MySQLD availability...\"\n    sleep 3\n  done\n}\n_check_sql_running\n\n_check_sql_access() {\n  if [ -e \"/root/.my.pass.txt\" ] && [ -e \"/root/.my.cnf\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_SYNC_SQL_PSWD=$(grep \"${_SQL_PSWD}\" /root/.my.cnf 2>&1)\n  else\n    echo \"ALERT: /root/.my.cnf or /root/.my.pass.txt not found.\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_a\n  fi\n  if [ -z \"${_IS_SYNC_SQL_PSWD}\" ] \\\n    || [[ ! \"${_IS_SYNC_SQL_PSWD}\" =~ \"password=${_SQL_PSWD}\" ]]; then\n    echo \"ALERT: SQL password is out of sync between\"\n    echo \"ALERT: /root/.my.cnf and /root/.my.pass.txt\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_b\n  else\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n      echo \"ALERT: SQL server on this system is not running at all.\"\n      echo \"ALERT: Please fix this before trying again, giving up.\"\n      echo \"Bye\"\n      echo \" \"\n      _clean_pid_exit _check_sql_access_c\n    else\n      _MYSQL_CONN_TEST=$(mysql -u root -e \"status\" 2>&1)\n      if [ -z \"${_MYSQL_CONN_TEST}\" ] \\\n        || [[ \"${_MYSQL_CONN_TEST}\" =~ \"Access denied\" ]]; then\n        echo \"ALERT: SQL password in /root/.my.cnf does not work.\"\n        echo \"ALERT: Please fix this before trying again, giving up.\"\n        echo \"Bye\"\n        echo \" \"\n        _clean_pid_exit _check_sql_access_d\n      fi\n    fi\n  fi\n}\n_check_sql_access\n\n#\n# Noticeable messages.\n_msg() {\n  echo \"Tuner [$(date)] ==> $*\"\n}\n# Simple prompt.\n_prompt_yes_no() {\nif [ \"${_AUTOPILOT}\" = \"YES\" ]; then\n  return 0\nelse\n  while 
true; do\n    printf \"$* [Y/n] \"\n    read _answer\n    if [ -z \"${_answer}\" ]; then\n      return 0\n    fi\n    case ${_answer} in\n      [Yy]|[Yy][Ee][Ss])\n        return 0\n        ;;\n      [Nn]|[Nn][Oo])\n        return 1\n        ;;\n      *)\n        echo \"Please answer yes or no\"\n        ;;\n    esac\n  done\nfi\n}\n#\n# Count system CPUs.\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n#\n# Find the fastest mirror.\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! 
-z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n}\n#\n# Fix php.ini files to remove ionCube\n_fix_php_ini_ioncube() {\n  if [ -e \"${_THIS_FILE}\" ] && [ \"${_PHP_IONCUBE}\" = \"NO\" ]; then\n    _IONCUBE_INI_TEST=$(grep \"ioncube_loader\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_IONCUBE_INI_TEST}\" =~ \"ioncube_loader\" ]]; then\n      sed -i \"s/.*ioncube_loader.*//g\" ${_THIS_FILE} &> /dev/null\n      wait\n    fi\n  fi\n}\n#\n# Fix php.ini files to remove jsmin.so\n_remove_php_ini_jsmin() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _JSMIN_INI_TEST=$(grep \"^extension=jsmin.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_JSMIN_INI_TEST}\" =~ \"extension=jsmin.so\" ]]; then\n      sed -i \"s/.*jsmin.*//g\" ${_THIS_FILE} &> /dev/null\n      wait\n    fi\n  fi\n}\n#\n# Fix php.ini files to remove suhosin.so\n_remove_php_ini_suhosin() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _SUHOSIN_INI_TEST=$(grep \"^extension=suhosin.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_SUHOSIN_INI_TEST}\" =~ \"extension=suhosin.so\" ]]; then\n      sed -i \"s/.*suhosin.*//g\" ${_THIS_FILE} &> /dev/null\n      wait\n    fi\n  fi\n}\n#\n# Fix php.ini files to add mailparse.so\n_fix_php_ini_mailparse() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MAILPARSE_INI_TEST=$(grep \"^extension=mailparse.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MAILPARSE_INI_TEST}\" =~ \"extension=mailparse.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=mailparse.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini files to add yaml.so\n_fix_php_ini_yaml() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _YAML_INI_TEST=$(grep \"^extension=yaml.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_YAML_INI_TEST}\" =~ \"extension=yaml.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=yaml.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini files to add jsmin.so\n_add_php_ini_jsmin() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _JSMIN_INI_TEST=$(grep 
\"^extension=jsmin.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_JSMIN_INI_TEST}\" =~ \"extension=jsmin.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=jsmin.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini files to add twig.so\n_fix_php_ini_twig() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _TWIG_INI_TEST=$(grep \"^extension=twig.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_TWIG_INI_TEST}\" =~ \"extension=twig.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=twig.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini files to add redis.so\n_fix_php_ini_redis() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _REDIS_INI_TEST=$(grep \"^extension=redis.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_REDIS_INI_TEST}\" =~ \"extension=redis.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=redis.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini files to add mcrypt.so\n_fix_php_ini_mcrypt() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MCRYPT_INI_TEST=$(grep \"^extension=mcrypt.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MCRYPT_INI_TEST}\" =~ \"extension=mcrypt.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=mcrypt.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini files to add apcu.so\n_fix_php_ini_apcu() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _APCU_INI_TEST=$(grep \"^apc.shm_size\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_APCU_INI_TEST}\" =~ \"apc.shm_size\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \" \"                  >> ${_THIS_FILE}\n      echo \"; APCu\"             >> ${_THIS_FILE}\n      echo \"extension=apcu.so\"  >> ${_THIS_FILE}\n      echo \"apc.enable_cli=1\"   >> ${_THIS_FILE}\n      echo \"apc.gc_ttl=300\"     >> ${_THIS_FILE}\n      echo \"apc.shm_segments=1\" >> ${_THIS_FILE}\n      echo \"apc.shm_size=256M\"  >> ${_THIS_FILE}\n      echo \"apc.slam_defense=0\" >> ${_THIS_FILE}\n      echo \"apc.ttl=0\"          >> ${_THIS_FILE}\n      echo \";\"         
         >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini files to add igbinary.so\n_fix_php_ini_igbinary() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _IGBINARY_INI_TEST=$(grep \"^extension=igbinary.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_IGBINARY_INI_TEST}\" =~ \"extension=igbinary.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=igbinary.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini file to add newrelic.ini\n_fix_php_ini_newrelic() {\n  _NR_TPL=\"${_locCnf}/php/newrelic.ini\"\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _NEWRELIC_INI_TEST_A=$(grep \"^extension=newrelic.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_NEWRELIC_INI_TEST_A}\" =~ \"extension=newrelic.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      cat ${_NR_TPL} >> ${_THIS_FILE}\n    fi\n    _NEWRELIC_INI_TEST_B=$(grep \"newrelic.framework.drupal.modules\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_NEWRELIC_INI_TEST_B}\" =~ \"newrelic.framework.drupal.modules\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"newrelic.framework.drupal.modules = 1\" >> ${_THIS_FILE}\n    fi\n    sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" ${_THIS_FILE} &> /dev/null\n    wait\n    sed -i \"s/license_key=//g\" ${_THIS_FILE} &> /dev/null\n    wait\n  fi\n}\n#\n# Fix all php.ini files to add newrelic.ini\n_fix_php_ini_newrelic_all() {\n  if [ -e \"/etc/newrelic/newrelic.cfg\" ]; then\n    if [ -z \"${_NEWRELIC_KEY}\" ]; then\n      _NEWRELIC_KEY=$(grep license_key /etc/newrelic/newrelic.cfg 2>/dev/null | tr -d '\\n')\n    fi\n    _PHP_V=\"84 83 82 81 80 74 73 72\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_newrelic\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_newrelic\n    done\n  fi\n}\n#\n# Fix FMP php.ini file to add opcache.so\n_fix_php_ini_opcache() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_opcache $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    if grep -q \"opcache.so\" -- 
\"${_THIS_FILE}\"; then\n      _DO_NOTHING=YES\n    else\n      {\n        echo\n        echo \"; Zend OPcache\"\n        echo \"zend_extension=\\\"${_OPCACHE_SO}\\\"\"\n        echo \"opcache.enable=1\"\n        echo \"opcache.memory_consumption=181\"\n        echo \"opcache.revalidate_freq=60\"\n        echo \"opcache.dups_fix=1\"\n        echo \"opcache.file_update_protection=8\"\n        echo \"opcache.huge_code_pages=0\"\n        case \"${1}\" in\n          80|74|73|72|71|70|56)\n            echo \"opcache.interned_strings_buffer=32\"\n            ;;\n          81|82|83|84)\n            echo \"opcache.interned_strings_buffer=128\"\n            ;;\n          *)\n            echo \"opcache.interned_strings_buffer=128\"\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              _msg \"WARN: Unknown PHP version '${1}', using default buffer=128\"\n            fi\n            ;;\n        esac\n        echo \"opcache.jit=off\"\n        echo \"opcache.lockfile_path=/var/tmp/fpm\"\n        echo \"opcache.max_accelerated_files=200000\"\n        echo \"opcache.restrict_api=/var/www\"\n        echo \"opcache.revalidate_path=1\"\n        echo \"opcache.save_comments=1\"\n        echo \"opcache.use_cwd=1\"\n        echo \"opcache.validate_permission=1\"\n        echo \"opcache.validate_root=1\"\n        echo \"opcache.validate_timestamps=1\"\n        echo \";\"\n      } >> \"${_THIS_FILE}\"\n    fi\n  fi\n}\n#\n# Fix all FMP php.ini files to add Zend OPcache\n_fix_php_ini_opcache_all() {\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    _P_API=\n    case \"${e}\" in\n      85) _P_API=\"${_PHP85_API}\" ;;\n      84) _P_API=\"${_PHP84_API}\" ;;\n      83) _P_API=\"${_PHP83_API}\" ;;\n      82) _P_API=\"${_PHP82_API}\" ;;\n      81) _P_API=\"${_PHP81_API}\" ;;\n      80) _P_API=\"${_PHP80_API}\" ;;\n      74) _P_API=\"${_PHP74_API}\" ;;\n      73) _P_API=\"${_PHP73_API}\" ;;\n      72) _P_API=\"${_PHP72_API}\" ;;\n      71) 
_P_API=\"${_PHP71_API}\" ;;\n      70) _P_API=\"${_PHP70_API}\" ;;\n      56) _P_API=\"${_PHP56_API}\" ;;\n      *)  _msg \"WARN: Unknown PHP API version for PHP ${e}\"\n      ;;\n    esac\n    _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n    _OPCACHE_LP=\"/opt/php${e}/lib/php/extensions/no-debug-non-zts\"\n    _OPCACHE_SO=\"${_OPCACHE_LP}-${_P_API}/opcache.so\"\n    _fix_php_ini_opcache \"${e}\"\n  done\n}\n#\n# Fix php.ini file to add php_tet.so\n_fix_php_ini_tet() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _TET_INI_TEST=$(grep \"^extension=php_tet.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_TET_INI_TEST}\" =~ \"extension=php_tet.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=php_tet.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix all php.ini files to add php_tet.so\n_fix_php_ini_tet_all() {\n  if [ \"${_PHP_TET}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"TET\" ]]; then\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      _P_API=\n      case \"${e}\" in\n        85) _P_API=\"${_PHP85_API}\" ;;\n        84) _P_API=\"${_PHP84_API}\" ;;\n        83) _P_API=\"${_PHP83_API}\" ;;\n        82) _P_API=\"${_PHP82_API}\" ;;\n        81) _P_API=\"${_PHP81_API}\" ;;\n        80) _P_API=\"${_PHP80_API}\" ;;\n        74) _P_API=\"${_PHP74_API}\" ;;\n        73) _P_API=\"${_PHP73_API}\" ;;\n        72) _P_API=\"${_PHP72_API}\" ;;\n        71) _P_API=\"${_PHP71_API}\" ;;\n        70) _P_API=\"${_PHP70_API}\" ;;\n        56) _P_API=\"${_PHP56_API}\" ;;\n        *)  _msg \"WARN: Unknown PHP API version for PHP ${e}\"\n        ;;\n      esac\n      _TET_BASE=\"/opt/php${e}/lib/php/extensions/no-debug-non-zts\"\n      _TET_SO=\"${_TET_BASE}-${_P_API}/php_tet.so\"\n      if [ ! 
-e \"${_TET_SO}\" ]; then\n        if [[ \"${e}\" =~ \"80\" ]] || [[ \"${e}\" =~ \"74\" ]] || [[ \"${e}\" =~ \"73\" ]]; then\n          _TET_VRN=\"5.3-Linux-x64-Perl-PHP-Python-Ruby\"\n        else\n          _TET_VRN=\"5.2-Linux-x86_64-Perl-PHP-Python-Ruby\"\n        fi\n        if [ ! -e \"/var/opt/TET-${_TET_VRN}/bind/php\" ]; then\n          mkdir -p  /var/opt\n          cd /var/opt\n          _get_dev_src \"TET-${_TET_VRN}.tar.gz\"\n        fi\n        if [ -e \"/var/opt/TET-${_TET_VRN}/bind/php/php-${e}0-nts\" ]; then\n          cd /var/opt/TET-${_TET_VRN}/bind/php/php-${e}0-nts/\n          cp -a php_tet.so ${_TET_SO}\n        fi\n      fi\n      if [ -e \"${_TET_SO}\" ]; then\n        _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n        _fix_php_ini_tet\n        _THIS_FILE=/opt/php${e}/lib/php.ini\n        _fix_php_ini_tet\n      fi\n    done\n  fi\n}\n#\n# Fix php.ini file to add geos.so\n_fix_php_ini_geos() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _GEOS_INI_TEST=$(grep \"^extension=geos.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_GEOS_INI_TEST}\" =~ \"extension=geos.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=geos.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix all php.ini files to add geos.so\n_fix_php_ini_geos_all() {\n  if [ \"${_PHP_GEOS}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"GEO\" ]]; then\n    _PHP_V=\"56\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_geos\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_geos\n    done\n  fi\n}\n#\n# Fix php.ini file to add mongo.so\n_fix_php_ini_mongo() {\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MONGO_INI_TEST=$(grep \"^extension=mongo.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MONGO_INI_TEST}\" =~ \"extension=mongo.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=mongo.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix php.ini file to add mongodb.so\n_fix_php_ini_mongodb() {\n  if [ -e \"${_THIS_FILE}\" 
]; then\n    _MONGODB_INI_TEST=$(grep \"^extension=mongodb.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MONGODB_INI_TEST}\" =~ \"extension=mongodb.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=mongodb.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n#\n# Fix all php.ini files to add mongo.so or mongodb.so\n_fix_php_ini_mongo_all() {\n  if [ \"${_PHP_MONGODB}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"MNG\" ]]; then\n    _PHP_V=\"56\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_mongo\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_mongo\n    done\n    _PHP_V=\"72 71 70\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_mongodb\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_mongodb\n    done\n  fi\n}\n#\n# Update PHP Config.\n_php_conf_update() {\n  if [ -z \"${_THISHTIP}\" ]; then\n    _LOC_DOM=\"${_hName}\"\n    _find_correct_ip\n    _THISHTIP=\"${_LOC_IP}\"\n  fi\n  if [ ! -e \"/opt/etc/fpm\" ] \\\n    || [ ! -e \"/opt/etc/fpm/fpm-pool-common.conf\" ] \\\n    || [ ! -e \"/opt/etc/fpm/fpm-pool-common-legacy.conf\" ] \\\n    || [ ! -e \"/opt/etc/fpm/fpm-pool-common-modern.conf\" ]; then\n    mkdir -p /opt/etc/fpm\n  fi\n  cp -af ${_locCnf}/php/fpm-pool-common.conf /opt/etc/fpm/fpm-pool-common.conf\n  cp -af ${_locCnf}/php/fpm-pool-common-legacy.conf /opt/etc/fpm/fpm-pool-common-legacy.conf\n  cp -af ${_locCnf}/php/fpm-pool-common-modern.conf /opt/etc/fpm/fpm-pool-common-modern.conf\n  sed -i \"s/127.0.0.1/127.0.0.1,${_THISHTIP}/g\" /opt/etc/fpm/fpm-pool-commo*.conf\n  wait\n  sed -i \"s/mode =.*/mode = 0660/g\" /opt/etc/fpm/fpm-pool-commo*.conf\n  wait\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ ! -e \"/var/www/www${e}\" ]; then\n      adduser --system --group --home /var/www/www${e} www${e} &> /dev/null\n      usermod -aG www-data www${e}\n    fi\n    if [ ! 
-e \"/opt/php${e}/etc/php${e}.ini\" ] \\\n      || [ ! -e \"/opt/php${e}/etc/pool.d/www${e}.conf\" ]; then\n      mkdir -p /opt/php${e}/etc/pool.d\n      cp -af ${_locCnf}/php/php${e}.ini /opt/php${e}/etc/php${e}.ini\n    fi\n    cp -af ${_locCnf}/php/fpm${e}-pool-www.conf /opt/php${e}/etc/pool.d/www${e}.conf\n    if [ ! -e \"/opt/php${e}/lib/php.ini\" ]; then\n      mkdir -p /opt/php${e}/lib\n      cp -af ${_locCnf}/php/php${e}-cli.ini /opt/php${e}/lib/php.ini\n    fi\n    cp -af ${_locCnf}/php/php${e}.ini /opt/php${e}/etc/php${e}.ini\n    cp -af ${_locCnf}/php/php${e}-cli.ini /opt/php${e}/lib/php.ini\n    cp -af ${_locCnf}/php/php${e}-fpm.conf /opt/php${e}/etc/php${e}-fpm.conf\n\n    _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n    if [ \"${e}\" != \"56\" ]; then\n      _fix_php_ini_apcu\n    fi\n    if [ \"${e}\" != \"56\" ] && [ \"${e}\" != \"70\" ] && [ \"${e}\" != \"71\" ]; then\n      _fix_php_ini_mcrypt\n    fi\n    if [ \"${e}\" = 56 ]; then\n      _fix_php_ini_mailparse\n      _fix_php_ini_twig\n    fi\n    if [ \"${e}\" != 80 ] && [ \"${e}\" != 81 ]; then\n      _add_php_ini_jsmin\n    fi\n    if [ \"${e}\" = 80 ] || [ \"${e}\" = 81 ]; then\n      _remove_php_ini_jsmin\n    fi\n    _fix_php_ini_igbinary\n    _fix_php_ini_redis\n    _fix_php_ini_ioncube\n    _remove_php_ini_suhosin\n    _fix_php_ini_yaml\n\n    _THIS_FILE=/opt/php${e}/lib/php.ini\n    if [ \"${e}\" != \"56\" ]; then\n      _fix_php_ini_apcu\n    fi\n    if [ \"${e}\" != \"56\" ] && [ \"${e}\" != \"70\" ] && [ \"${e}\" != \"71\" ]; then\n      _fix_php_ini_mcrypt\n    fi\n    if [ \"${e}\" = 56 ]; then\n      _fix_php_ini_mailparse\n      _fix_php_ini_twig\n    fi\n    if [ \"${e}\" != 80 ] && [ \"${e}\" != 81 ]; then\n      _add_php_ini_jsmin\n    fi\n    if [ \"${e}\" = 80 ] || [ \"${e}\" = 81 ]; then\n      _remove_php_ini_jsmin\n    fi\n    _fix_php_ini_igbinary\n    _fix_php_ini_redis\n    _fix_php_ini_ioncube\n    _remove_php_ini_suhosin\n    _fix_php_ini_yaml\n\n    if [ -e 
\"/opt/php${e}/etc/php${e}.ini\" ]; then\n      sed -i \"s/^zlib.output_compression.*/zlib.output_compression = Off/g\"       /opt/php${e}/etc/php${e}.ini\n      wait\n      sed -i \"s/.*zlib.output_compression_level/;zlib.output_compression_level/g\" /opt/php${e}/etc/php${e}.ini\n      wait\n    fi\n    if [ -e \"/opt/php${e}/lib/php.ini\" ]; then\n      sed -i \"s/^zlib.output_compression.*/zlib.output_compression = Off/g\"       /opt/php${e}/lib/php.ini\n      wait\n      sed -i \"s/.*zlib.output_compression_level/;zlib.output_compression_level/g\" /opt/php${e}/lib/php.ini\n      wait\n    fi\n  done\n  rm -f /etc/php5/conf.d/{opcache.ini,apc.ini,imagick.ini,memcached.ini}\n  rm -f /etc/php5/conf.d/{redis.ini,suhosin.ini,newrelic.ini}\n  _fix_php_ini_newrelic_all\n  _fix_php_ini_geos_all\n  _fix_php_ini_mongo_all\n  _fix_php_ini_tet_all\n  _fix_php_ini_opcache_all\n}\n#\n_restore_default_php() {\n  _msg \"INFO: Restoring default PHP configuration\"\n  cp -af ${_locCnf}/php/php85-cli.ini /opt/php85/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php85.ini /opt/php85/etc/php85.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php84-cli.ini /opt/php84/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php84.ini /opt/php84/etc/php84.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php83-cli.ini /opt/php83/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php83.ini /opt/php83/etc/php83.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php82-cli.ini /opt/php82/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php82.ini /opt/php82/etc/php82.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php81-cli.ini /opt/php81/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php81.ini /opt/php81/etc/php81.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php80-cli.ini /opt/php80/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php80.ini /opt/php80/etc/php80.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php74-cli.ini /opt/php74/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php74.ini 
/opt/php74/etc/php74.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php73-cli.ini /opt/php73/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php73.ini /opt/php73/etc/php73.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php72-cli.ini /opt/php72/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php72.ini /opt/php72/etc/php72.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php71-cli.ini /opt/php71/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php71.ini /opt/php71/etc/php71.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php70-cli.ini /opt/php70/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php70.ini /opt/php70/etc/php70.ini   &> /dev/null\n  cp -af ${_locCnf}/php/php56-cli.ini /opt/php56/lib/php.ini &> /dev/null\n  cp -af ${_locCnf}/php/php56.ini /opt/php56/etc/php56.ini   &> /dev/null\n}\n#\n_tune_php() {\n  _msg \"INFO: Tuning PHP configuration\"\n  if [ \"${_TUNE_PHP_FPM_TIMEOUT}\" -lt 60 ]; then\n    _TUNE_PHP_FPM_TIMEOUT=60\n  fi\n  # PHP-FPM pools\n  sed -i \"s/180s/${_TUNE_PHP_FPM_TIMEOUT}s/g\" /opt/php*/etc/pool.d/*.conf                                           &> /dev/null\n  wait\n  sed -i \"s/180s/${_TUNE_PHP_FPM_TIMEOUT}s/g\" /opt/php*/etc/php*-fpm.conf                                           &> /dev/null\n  wait\n  sed -i \"s/180/${_TUNE_PHP_FPM_TIMEOUT}/g\" /opt/etc/fpm/fpm-pool-common*.conf                                      &> /dev/null\n  wait\n  # PHP-FPM INI\n  sed -i \"s/^default_socket_timeout =.*/default_socket_timeout = ${_TUNE_PHP_FPM_TIMEOUT}/g\" /opt/php*/etc/php*.ini &> /dev/null\n  wait\n  sed -i \"s/^max_execution_time =.*/max_execution_time = ${_TUNE_PHP_FPM_TIMEOUT}/g\" /opt/php*/etc/php*.ini         &> /dev/null\n  wait\n  sed -i \"s/^max_input_time =.*/max_input_time = ${_TUNE_PHP_FPM_TIMEOUT}/g\" /opt/php*/etc/php*.ini                 &> /dev/null\n  wait\n  # PHP-CLI INI\n  sed -i \"s/^max_execution_time =.*/max_execution_time = ${_TUNE_PHP_CLI_TIMEOUT}/g\" /opt/php*/lib/php.ini          &> /dev/null\n  wait\n  sed -i 
\"s/^max_input_time =.*/max_input_time = ${_TUNE_PHP_CLI_TIMEOUT}/g\" /opt/php*/lib/php.ini                  &> /dev/null\n  wait\n  sed -i \"s/^default_socket_timeout =.*/default_socket_timeout = ${_TUNE_PHP_CLI_TIMEOUT}/g\" /opt/php*/lib/php.ini  &> /dev/null\n  wait\n  # Redis config should sync with PHP-CLI\n  sed -i \"s/^timeout .*/timeout ${_TUNE_PHP_CLI_TIMEOUT}/g\" /etc/redis/redis.conf                                   &> /dev/null\n  wait\n}\n#\n# Update innodb_log_file_size.\n_innodb_log_file_size_update() {\n  _msg \"INFO: InnoDB log file will be set to ${_INNODB_LOG_FILE_SIZE_MB}...\"\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! -z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  fi\n  _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n  if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ] && [ \"${_DB_V}\" = \"5.7\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n    fi\n    mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 0;\" &> /dev/null\n  fi\n  service mysql stop\n  echo \"Waiting 15 seconds...\"\n  sleep 15\n  if [ ! -e \"/run/mysqld/mysqld.sock\" ] \\\n    && [ ! 
-e \"/run/mysqld/mysqld.pid\" ]; then\n    mkdir -p ${_vBs}/old-sql-ib-log-${_NOW}\n    sleep 5\n    mv -f /var/lib/mysql/ib_logfile0 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n    mv -f /var/lib/mysql/ib_logfile1 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n    sed -i \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n    wait\n    echo \"Waiting 15 seconds...\"\n    sleep 15\n  fi\n  if [ ! -e \"/run/mysqld/mysqld.sock\" ]; then\n    service mysql start &> /dev/null\n  fi\n}\n#\n_restore_default_sql() {\n  _msg \"INFO: Restoring default SQL configuration\"\n  sed -i \"s/.*check_for_crashed_tables/#check_for_crashed_tables/g\" /etc/mysql/debian-start &> /dev/null\n  wait\n  _if_hosted_sys\n  if [ \"${_CUSTOM_CONFIG_SQL}\" = \"NO\" ] \\\n    || [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_CUSTOM_CONFIG_SQL}\" = \"YES\" ]; then\n      _DO_NOTHING=YES\n    else\n      cp -af /etc/mysql/my.cnf \\\n        /var/backups/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n      cp -af ${_locCnf}/var/my.cnf.txt /etc/mysql/my.cnf\n      _INNODB_LOG_FILE_SIZE=${_INNODB_LOG_FILE_SIZE//[^0-9]/}\n      if [ ! 
-z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n        if [ \"${_INNODB_LOG_FILE_SIZE}\" -ge 50 ]; then\n          _INNODB_LOG_FILE_SIZE_MB=\"${_INNODB_LOG_FILE_SIZE}M\"\n          _INNODB_LOG_FILE_SIZE_TEST=$(grep \"innodb_log_file_size\" \\\n            /var/backups/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW} 2>&1)\n          if [[ \"${_INNODB_LOG_FILE_SIZE_TEST}\" =~ \"= ${_INNODB_LOG_FILE_SIZE_MB}\" ]]; then\n            _INNODB_LOG_FILE_SIZE_SAME=YES\n          else\n            _INNODB_LOG_FILE_SIZE_SAME=NO\n          fi\n        fi\n      fi\n      sed -i \"s/.*slow_query_log/#slow_query_log/g\"           /etc/mysql/my.cnf\n      wait\n      sed -i \"s/.*long_query_time/#long_query_time/g\"         /etc/mysql/my.cnf\n      wait\n      sed -i \"s/.*slow_query_log_file/#slow_query_log_file/g\" /etc/mysql/my.cnf\n      wait\n      if [ ! -e \"/etc/mysql/skip-name-resolve.txt\" ]; then\n        sed -i \"s/.*skip-name-resolve/#skip-name-resolve/g\"   /etc/mysql/my.cnf\n        wait\n      fi\n    fi\n  fi\n  mv -f /etc/mysql/my.cnf-pre* /var/backups/dragon/t/ &> /dev/null\n  sed -i \"s/.*default-table-type/#default-table-type/g\" /etc/mysql/my.cnf &> /dev/null\n  wait\n  sed -i \"s/.*language/#language/g\" /etc/mysql/my.cnf &> /dev/null\n  wait\n  sed -i \"s/.*innodb_lazy_drop_table.*//g\" /etc/mysql/my.cnf &> /dev/null\n  wait\n  if [ \"${_CUSTOM_CONFIG_SQL}\" = \"NO\" ]; then\n    if [ \"${_DB_BINARY_LOG}\" = \"NO\" ]; then\n      # Disable binary logging\n      sed -i \\\n        -e \"s/^\\s*\\(log_bin\\s*=.*\\)/#\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(skip-log-bin\\)/\\1/\" \\\n        -e \"s/^\\s*\\(max_binlog_size\\s*=.*\\)/#\\1/\" \\\n        -e \"s/^\\s*\\(binlog_row_image\\s*=.*\\)/#\\1/\" \\\n        -e \"s/^\\s*\\(binlog_format\\s*=.*\\)/#\\1/\" \\\n        /etc/mysql/my.cnf &> /dev/null\n    elif [ \"${_DB_BINARY_LOG}\" = \"YES\" ]; then\n      # Enable binary logging\n      sed -i \\\n        -e \"s/^\\s*#\\s*\\(log_bin\\s*=.*\\)/\\1/\" \\\n        
-e \"s/^\\s*\\(skip-log-bin\\)/#\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(max_binlog_size\\s*=.*\\)/\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(binlog_row_image\\s*=.*\\)/\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(binlog_format\\s*=.*\\)/\\1/\" \\\n        /etc/mysql/my.cnf &> /dev/null\n    fi\n    if [ ! -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -ge 50 ]; then\n        _INNODB_LOG_FILE_SIZE_MB=\"${_INNODB_LOG_FILE_SIZE}M\"\n        _INNODB_LOG_FILE_SIZE_TEST=$(grep \"innodb_log_file_size\" /etc/mysql/my.cnf 2>&1)\n        if [[ \"${_INNODB_LOG_FILE_SIZE_TEST}\" =~ \"= ${_INNODB_LOG_FILE_SIZE_MB}\" ]]; then\n          _DO_NOTHING=YES\n        else\n          if [ \"${_INNODB_LOG_FILE_SIZE_SAME}\" = \"YES\" ]; then\n            sed -i \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" \\\n              /etc/mysql/my.cnf &> /dev/null\n            wait\n          else\n            _innodb_log_file_size_update\n          fi\n        fi\n      fi\n    fi\n  fi\n}\n#\n_tune_sql() {\n  _msg \"INFO: Tuning SQL configuration\"\n  sed -i \"s/9999/${_TUNE_SQL_TIMEOUT}/g\" /etc/mysql/my.cnf     &> /dev/null\n  wait\n  sed -i \"s/9999/${_TUNE_SQL_TIMEOUT}/g\" /var/xdrago/minute.sh &> /dev/null\n  wait\n}\n#\n_restore_default_nginx() {\n  _msg \"INFO: Restoring default Nginx configuration\"\n  if [ -d \"${_TUNE_HOSTMASTER}\" ]; then\n    for _Files in `find ${_TUNE_HOSTMASTER}/config/server_master/nginx/vhost.d -type f`; do\n      sed -i \"s/#limit_conn /limit_conn /g\" ${_Files} &> /dev/null\n      wait\n    done\n  fi\n  su -s /bin/bash - aegir -c \"drush8 @server_master provision-verify\" &> /dev/null\n  wait\n  sleep 8\n}\n#\n_tune_nginx() {\n  _msg \"INFO: Tuning Nginx configuration\"\n  sed -i \"s/60/${_TUNE_NGINX_TIMEOUT}/g\" /var/aegir/config/server_master/nginx.conf    &> /dev/null\n  wait\n  sed -i \"s/300/${_TUNE_NGINX_TIMEOUT}/g\" /var/aegir/config/server_master/nginx.conf   &> /dev/null\n  wait\n  sed 
-i \"s/180/${_TUNE_NGINX_TIMEOUT}/g\" /var/aegir/config/server_master/nginx.conf   &> /dev/null\n  wait\n  if [ \"${_TUNE_NGINX_CONNECT}\" = \"OFF\" ]; then\n    sed -i \"s/limit_conn /#limit_conn /g\" /var/aegir/config/server_master/nginx.conf &> /dev/null\n    wait\n    if [ -d \"${_TUNE_HOSTMASTER}\" ]; then\n      for _Files in `find ${_TUNE_HOSTMASTER}/config/server_master/nginx/vhost.d -type f`; do\n        sed -i \"s/limit_conn /#limit_conn /g\" ${_Files} &> /dev/null\n        wait\n      done\n    fi\n  fi\n}\n#\n_restart_services() {\n  _msg \"INFO: Reloading services\"\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ]; then\n      service \"php${e}-fpm\" reload &> /dev/null\n    fi\n  done\n  bash /var/xdrago/move_sql.sh &> /dev/null\n  wait\n  service nginx reload &> /dev/null\n  if [ -e \"/etc/init.d/valkey-server\" ]; then\n    service valkey-server reload &> /dev/null\n  elif [ -e \"/etc/init.d/redis-server\" ]; then\n    service redis-server reload &> /dev/null\n  fi\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n    if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n      csf -ra &> /dev/null\n      synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    else\n      csf -r &> /dev/null\n    fi\n    ### Linux kernel TCP SACK CVEs mitigation\n    ### CVE-2019-11477 SACK Panic\n    ### CVE-2019-11478 SACK Slowness\n    ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n    if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n      _SACK_TEST=$(ip6tables --list | grep tcpmss)\n      if [[ ! 
\"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n        sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n        iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    fi\n  fi\n}\n#\n# Tune Web Sever configuration.\n_tune_web_server_config() {\n  _LIM_FPM=\"${_L_PHP_FPM_WORKERS}\"\n  if [ \"${_LIM_FPM}\" -lt 48 ]; then\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _LIM_FPM=48\n    fi\n  fi\n  _CHILD_MAX_FPM=$(( _LIM_FPM * 2 ))\n  if [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ]; then\n    _PHP_FPM_WORKERS=${_PHP_FPM_WORKERS//[^0-9]/}\n    if [ ! -z \"${_PHP_FPM_WORKERS}\" ] && [ \"${_PHP_FPM_WORKERS}\" -gt 0 ]; then\n      _CHILD_MAX_FPM=\"${_PHP_FPM_WORKERS}\"\n    fi\n  fi\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    sed -i \"s/pm.max_children =.*/pm.max_children = ${_CHILD_MAX_FPM}/g\" \\\n      /opt/php${e}/etc/pool.d/www${e}.conf &> /dev/null\n    wait\n    if [ ! 
-z \"${_PHP_FPM_DENY}\" ]; then\n      sed -i \"s/passthru,/${_PHP_FPM_DENY},/g\" \\\n        /opt/php${e}/etc/pool.d/www${e}.conf &> /dev/null\n      wait\n    fi\n  done\n  # PHP-FPM INI\n  sed -i \"s/^default_socket_timeout =.*/default_socket_timeout = 180/g\" /opt/php*/etc/php*.ini &> /dev/null\n  wait\n  sed -i \"s/^max_execution_time =.*/max_execution_time = 180/g\" /opt/php*/etc/php*.ini         &> /dev/null\n  wait\n  sed -i \"s/^max_input_time =.*/max_input_time = 180/g\" /opt/php*/etc/php*.ini                 &> /dev/null\n  wait\n  # PHP-CLI INI\n  sed -i \"s/^default_socket_timeout =.*/default_socket_timeout = 3600/g\" /opt/php*/lib/php.ini &> /dev/null\n  wait\n  sed -i \"s/^max_execution_time =.*/max_execution_time = 3600/g\" /opt/php*/lib/php.ini         &> /dev/null\n  wait\n  sed -i \"s/^max_input_time =.*/max_input_time = 3600/g\" /opt/php*/lib/php.ini                 &> /dev/null\n  wait\n  # Redis config should sync with PHP-CLI\n  sed -i \"s/^timeout .*/timeout 3600/g\" /etc/redis/redis.conf                                  &> /dev/null\n  wait\n}\n#\n#\n_check_mysqld_running() {\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    _msg \"INFO: Waiting for MySQLD availability before _tune_sql_memory_limits...\"\n    sleep 5\n    service mysql start &> /dev/null\n  done\n}\n#\n# Tune memory limits for SQL server.\n_tune_sql_memory_limits() {\n  _check_mysqld_running\n  # https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl\n  _pthTun=\"/var/opt/mysqltuner.pl\"\n  _outTun=\"/var/opt/mysqltuner-${_xSrl}-${_X_VERSION}-${_NOW}.txt\"\n  if [ ! 
-e \"${_outTun}\" ] \\\n    && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _msg \"INFO: Running MySQLTuner check on all databases\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    _MYSQLTUNER_TEST_RESULT=OK\n    rm -f /var/opt/mysqltuner*\n    curl ${_crlGet} \"${_urlDev}/mysqltuner.pl.${_MYSQLTUNER_VRN}\" -o ${_pthTun}\n    if [ ! -e \"${_pthTun}\" ]; then\n      curl ${_crlGet} \"${_urlDev}/mysqltuner.pl\" -o ${_pthTun}\n    fi\n    if [ -e \"${_pthTun}\" ]; then\n      perl ${_pthTun} > ${_outTun} 2>&1\n    fi\n  fi\n  if [ -e \"${_pthTun}\" ] \\\n    && [ -e \"${_outTun}\" ] \\\n    && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _REC_MYISAM_MEM=$(cat ${_outTun} \\\n      | grep \"Data in MyISAM tables\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $1}' 2>&1)\n    _REC_INNODB_MEM=$(cat ${_outTun} \\\n      | grep \"data size:\" \\\n      | cut -d/ -f3 \\\n      | awk '{ print $1}' 2>&1)\n    _MYSQLTUNER_TEST=$(cat ${_outTun} 2>&1)\n    cp -a ${_outTun} ${_pthLog}/\n    if [ -z \"${_REC_INNODB_MEM}\" ] \\\n      || [[ \"${_MYSQLTUNER_TEST}\" =~ \"Cannot calculate MyISAM index\" ]] \\\n      || [[ \"${_MYSQLTUNER_TEST}\" =~ \"InnoDB is enabled but isn\" ]]; then\n      _MYSQLTUNER_TEST_RESULT=FAIL\n      _msg \"NOTE: The MySQLTuner test failed!\"\n      _msg \"NOTE: Please review ${_outTun}\"\n      _msg \"NOTE: We will use some sane SQL defaults instead, do not worry!\"\n    fi\n    ###--------------------###\n    if [ ! 
-z \"${_REC_MYISAM_MEM}\" ] \\\n      && [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"OK\" ]; then\n      _RAW_MYISAM_MEM=$(echo ${_REC_MYISAM_MEM} | sed \"s/[A-Z]//g\" 2>&1)\n      if [[ \"${_REC_MYISAM_MEM}\" =~ \"G\" ]]; then\n        _RAW_MYISAM_MEM=$(( _RAW_MYISAM_MEM * 1024 ))\n      fi\n      if [ \"${_RAW_MYISAM_MEM}\" -gt \"${_USE_SQL}\" ]; then\n        _USE_MYISAM_MEM=\"${_USE_SQL}\"\n      else\n        _USE_MYISAM_MEM=\"${_RAW_MYISAM_MEM}\"\n      fi\n      if [ \"${_USE_MYISAM_MEM}\" -lt 256 ] || [ -z \"${_USE_MYISAM_MEM}\" ]; then\n        _USE_MYISAM_MEM=\"${_USE_SQL}\"\n      fi\n      _USE_MYISAM_MEM=\"${_USE_MYISAM_MEM}M\"\n      sed -i \"s/^key_buffer_size.*/key_buffer_size         = ${_USE_MYISAM_MEM}/g\"  /etc/mysql/my.cnf\n      wait\n    else\n      _USE_MYISAM_MEM=\"${_USE_SQL}M\"\n      if [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"FAIL\" ]; then\n        _msg \"NOTE: _USE_MYISAM_MEM is ${_USE_MYISAM_MEM} because _REC_MYISAM_MEM was empty!\"\n      fi\n      sed -i \"s/^key_buffer_size.*/key_buffer_size         = ${_USE_MYISAM_MEM}/g\"  /etc/mysql/my.cnf\n      wait\n    fi\n    ###--------------------###\n    if [ ! 
-z \"${_REC_INNODB_MEM}\" ] && [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"OK\" ]; then\n      _RAW_INNODB_MEM=$(echo ${_REC_INNODB_MEM} | sed \"s/[A-Z]//g\" 2>&1)\n      if [[ \"${_REC_INNODB_MEM}\" =~ \"G\" ]]; then\n        _RAW_INNODB_MEM=$(echo ${_RAW_INNODB_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_INNODB_MEM=$(echo \"${_RAW_INNODB_MEM} * 1024\" | bc -l 2>&1)\n      elif [[ \"${_REC_INNODB_MEM}\" =~ \"M\" ]]; then\n        _RAW_INNODB_MEM=$(echo ${_RAW_INNODB_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_INNODB_MEM=$(echo \"${_RAW_INNODB_MEM} * 1\" | bc -l 2>&1)\n      fi\n      _RAW_INNODB_MEM=$(echo \"(${_RAW_INNODB_MEM}+0.5)/1\" | bc 2>&1)\n      if [ \"${_RAW_INNODB_MEM}\" -gt \"${_USE_SQL}\" ] \\\n        || [ -z \"${_USE_INNODB_MEM}\" ] \\\n        || [ \"${_RAW_INNODB_MEM}\" -lt 512 ]; then\n        _USE_INNODB_MEM=\"${_USE_SQL}\"\n      else\n        _RAW_INNODB_MEM=$(echo \"scale=2; (${_RAW_INNODB_MEM} * 1.1)\" | bc 2>&1)\n        _USE_INNODB_MEM=$(echo \"(${_RAW_INNODB_MEM}+0.5)/1\" | bc 2>&1)\n      fi\n      _INNODB_BPI=$(echo \"scale=0; ${_USE_INNODB_MEM}/1024/2\" | bc 2>&1)\n      if [ \"${_INNODB_BPI}\" -lt 1 ] || [ -z \"${_INNODB_BPI}\" ]; then\n        _INNODB_BPI=\"1\"\n      fi\n      sed -i \"s/^innodb_buffer_pool_instances.*/innodb_buffer_pool_instances = ${_INNODB_BPI}/g\" /etc/mysql/my.cnf\n      wait\n      sed -i \"s/^innodb_page_cleaners.*/innodb_page_cleaners = ${_INNODB_BPI}/g\" /etc/mysql/my.cnf\n      wait\n      _USE_INNODB_MEM=\"${_USE_INNODB_MEM}M\"\n      sed -i \"s/^innodb_buffer_pool_size.*/innodb_buffer_pool_size = ${_USE_INNODB_MEM}/g\"  /etc/mysql/my.cnf\n      wait\n    else\n      _USE_INNODB_MEM=\"${_USE_SQL}M\"\n      _msg \"NOTE: _USE_INNODB_MEM is ${_USE_INNODB_MEM} because _REC_INNODB_MEM was empty!\"\n      sed -i \"s/^innodb_buffer_pool_size.*/innodb_buffer_pool_size = ${_USE_INNODB_MEM}/g\"  /etc/mysql/my.cnf\n      wait\n    fi\n  else\n    _THIS_USE_MEM=\"${_USE_SQL}M\"\n    if [ 
\"${_MYSQLTUNER_TEST_RESULT}\" = \"FAIL\" ] \\\n      && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n      _msg \"NOTE: _USE_MYISAM_MEM is ${_THIS_USE_MEM} because _REC_MYISAM_MEM was empty!\"\n      _msg \"NOTE: _USE_INNODB_MEM is ${_THIS_USE_MEM} because _REC_INNODB_MEM was empty!\"\n    fi\n    sed -i \"s/= 181/= ${_USE_SQL}/g\"  /etc/mysql/my.cnf\n    wait\n  fi\n}\n#\n# Tune memory limits for PHP, Nginx and Percona.\n_tune_memory_limits() {\n  _msg \"INFO: Default Memory Tuning\"\n  _VM_TEST=\"$(uname -a)\"\n  if [ -e \"/proc/bean_counters\" ]; then\n    _VMFAMILY=\"VZ\"\n  elif [ -e \"/root/.tg.cnf\" ]; then\n    _VMFAMILY=\"TG\"\n  else\n    _VMFAMILY=\"XEN\"\n  fi\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _VMFAMILY=\"VS\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n  fi\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] && [ ! 
-z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n  _CPU_MX=$(( _CPU_NR * 2 ))\n  if [ \"${_CPU_MX}\" -lt 4 ]; then\n    _CPU_MX=4\n  fi\n  _CPU_TG=$(( _CPU_NR / 2 ))\n  if [ \"${_CPU_TG}\" -lt 4 ]; then\n    _CPU_TG=4\n  fi\n  _CPU_VS=$(( _CPU_NR / 12 ))\n  if [ \"${_CPU_VS}\" -lt 2 ]; then\n    _CPU_VS=2\n  fi\n  _PrTest=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PrTest}\" =~ \"POWER\" ]]; then\n    if [ \"${_CPU_VS}\" -lt 8 ]; then\n      _CPU_VS=8\n    fi\n  fi\n  _PrTest=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PrTest}\" =~ \"PHANTOM\" ]]; then\n    if [ \"${_CPU_VS}\" -lt 8 ]; then\n      _CPU_VS=8\n    fi\n  fi\n  _PrTest=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PrTest}\" =~ \"CLUSTER\" ]]; then\n    if [ \"${_CPU_VS}\" -lt 8 ]; then\n      _CPU_VS=8\n    fi\n  fi\n  _PrTest=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PrTest}\" =~ \"ULTRA\" ]]; then\n    if [ \"${_CPU_VS}\" -lt 8 ]; then\n      _CPU_VS=8\n    fi\n  fi\n  _PrTest=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PrTest}\" =~ \"MONSTER\" ]]; then\n    if [ \"${_CPU_VS}\" -lt 8 ]; then\n      _CPU_VS=8\n    fi\n  fi\n  _RAM=$(free -mt | grep Mem: | awk '{ print $2 }' 2>&1)\n  if [ \"${_RESERVED_RAM}\" -gt 0 ]; then\n    _RAM=$(( _RAM - _RESERVED_RAM ))\n  else\n    _RESERVED_RAM=$(( _RAM / 4 ))\n    _RAM=$(( _RAM - _RESERVED_RAM ))\n  fi\n  _USE=$(( _RAM / 4 ))\n  _if_hosted_sys\n  if [ \"${_VMFAMILY}\" = \"VS\" ] \\\n    || [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_VMFAMILY}\" = \"VS\" ]; then\n      if [ -e \"/root/.tg.cnf\" ]; then\n        _USE_SQL=$(( _RAM / 12 ))\n      else\n        _USE_SQL=$(( _RAM / 24 ))\n      fi\n    else\n      _USE_SQL=$(( _RAM / 8 ))\n    fi\n  else\n    _USE_SQL=$(( _RAM / 8 ))\n  fi\n  if [ 
\"${_USE_SQL}\" -lt 64 ]; then\n    _USE_SQL=64\n  fi\n  _TMP_SQL=\"${_USE_SQL}M\"\n  _SRT_SQL=$(( _USE_SQL * 2 ))\n  _SRT_SQL=\"${_SRT_SQL}K\"\n  if [ \"${_USE}\" -ge 512 ] && [ \"${_USE}\" -lt 2048 ]; then\n    _USE_PHP=1024\n    _USE_OPC=1024\n    _USE_CLI=2048\n    _QCE_SQL=32M\n    _RND_SQL=8M\n    _JBF_SQL=4M\n    if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n      _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n    else\n      _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n    fi\n    _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n    if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n      _L_NGX_WRKS=${_CPU_MX}\n    else\n      _L_NGX_WRKS=${_NGINX_WORKERS}\n    fi\n  elif [ \"${_USE}\" -ge 2048 ]; then\n    if [ \"${_VMFAMILY}\" = \"XEN\" ] || [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n      _USE_PHP=2048\n      _USE_OPC=2048\n      _USE_CLI=2048\n      _QCE_SQL=64M\n      _RND_SQL=8M\n      _JBF_SQL=4M\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n      else\n        _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n      fi\n      _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n      if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n        _L_NGX_WRKS=${_CPU_MX}\n      else\n        _L_NGX_WRKS=${_NGINX_WORKERS}\n      fi\n    elif [ \"${_VMFAMILY}\" = \"VS\" ] || [ \"${_VMFAMILY}\" = \"TG\" ]; then\n      if [ -e \"/boot/grub/grub.cfg\" ] \\\n        || [ -e \"/boot/grub/menu.lst\" ] \\\n        || [ -e \"/root/.tg.cnf\" ]; then\n        _USE_PHP=2048\n        _USE_OPC=2048\n        _USE_CLI=2048\n        _QCE_SQL=64M\n        _RND_SQL=8M\n        _JBF_SQL=4M\n        if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n          _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n        else\n          _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n        fi\n        _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n        if [ \"${_MXC_SQL}\" -lt 10 ]; then\n          _MXC_SQL=10\n        fi\n        if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n          
_L_NGX_WRKS=${_CPU_TG}\n        else\n          _L_NGX_WRKS=${_NGINX_WORKERS}\n        fi\n        sed -i \"s/64000/128000/g\"  /opt/php85/etc/php85.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php84/etc/php84.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php83/etc/php83.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php82/etc/php82.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php81/etc/php81.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php80/etc/php80.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php74/etc/php74.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php73/etc/php73.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php72/etc/php72.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php71/etc/php71.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php70/etc/php70.ini &> /dev/null\n        sed -i \"s/64000/128000/g\"  /opt/php56/etc/php56.ini &> /dev/null\n      else\n        _USE_PHP=2048\n        _USE_OPC=2048\n        _USE_CLI=2048\n        _QCE_SQL=64M\n        _RND_SQL=2M\n        _JBF_SQL=2M\n        if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n          _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n        else\n          _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n        fi\n        _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n        if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n          _L_NGX_WRKS=${_CPU_VS}\n        else\n          _L_NGX_WRKS=${_NGINX_WORKERS}\n        fi\n      fi\n    else\n      _USE_PHP=512\n      _USE_OPC=512\n      _USE_CLI=512\n      _QCE_SQL=32M\n      _RND_SQL=2M\n      _JBF_SQL=2M\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n      else\n        _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n      fi\n      _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n      if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n        _L_NGX_WRKS=${_CPU_MX}\n      else\n        
_L_NGX_WRKS=${_NGINX_WORKERS}\n      fi\n    fi\n  else\n    _USE_PHP=\"${_USE}\"\n    _USE_OPC=\"${_USE}\"\n    _USE_CLI=\"${_USE}\"\n    _QCE_SQL=32M\n    _RND_SQL=1M\n    _JBF_SQL=1M\n    if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n      _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n    else\n      _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n    fi\n    _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n    if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n      _L_NGX_WRKS=${_CPU_MX}\n    else\n      _L_NGX_WRKS=${_NGINX_WORKERS}\n    fi\n  fi\n  _USE_JETTY=\"-Xmx${_USE_OPC}m\"\n  if [ \"${_VMFAMILY}\" = \"VZ\" ]; then\n    _USE_OPC=64\n  fi\n  if [ \"${_USE_PHP}\" -lt 1024 ]; then\n    _USE_PHP=1024\n  fi\n  _USE_FPM=$(( _USE_PHP / 2 ))\n  if [ \"${_USE_FPM}\" -lt 1024 ]; then\n    _USE_FPM=1024\n  fi\n  if [ ! -e \"/var/xdrago/conf/fpm-pool-foo-multi.conf\" ]; then\n    mkdir -p /var/xdrago/conf\n  fi\n  if [ ! -e \"/data/conf\" ]; then\n    mkdir -p /data/conf\n  fi\n  cp -af ${_locCnf}/php/fpm-pool-foo-multi.conf     /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-foo.conf           /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-common.conf        /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-common-legacy.conf /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-common-modern.conf /var/xdrago/conf/\n  sed -i \"s/127.0.0.1/127.0.0.1,${_THISHTIP}/g\"     /var/xdrago/conf/fpm-pool-commo*.conf\n  if [ -e \"/opt/etc/fpm/fpm-pool-common.conf\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/etc/fpm/fpm-pool-commo*.conf &> /dev/null\n    wait\n  fi\n  if [ -e \"/opt/php85/etc/php85.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php85/etc/php85.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php85/etc/php85.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php85/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php84/etc/php84.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php84/etc/php84.ini &> /dev/null\n    wait\n    sed 
-i \"s/181/${_USE_OPC}/g\" /opt/php84/etc/php84.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php84/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php83/etc/php83.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php83/etc/php83.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php83/etc/php83.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php83/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php82/etc/php82.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php82/etc/php82.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php82/etc/php82.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php82/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php81/etc/php81.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php81/etc/php81.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php81/etc/php81.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php81/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php80/etc/php80.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php80/etc/php80.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php80/etc/php80.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php80/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php74/etc/php74.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php74/etc/php74.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php74/etc/php74.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php74/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php73/etc/php73.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php73/etc/php73.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php73/etc/php73.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php73/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php72/etc/php72.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php72/etc/php72.ini &> /dev/null\n    wait\n    
sed -i \"s/181/${_USE_OPC}/g\" /opt/php72/etc/php72.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php72/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php71/etc/php71.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php71/etc/php71.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php71/etc/php71.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php71/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php70/etc/php70.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php70/etc/php70.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php70/etc/php70.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php70/lib/php.ini   &> /dev/null\n  fi\n  if [ -e \"/opt/php56/etc/php56.ini\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/php56/etc/php56.ini &> /dev/null\n    wait\n    sed -i \"s/181/${_USE_OPC}/g\" /opt/php56/etc/php56.ini &> /dev/null\n    sed -i \"s/395/${_USE_CLI}/g\" /opt/php56/lib/php.ini   &> /dev/null\n  fi\n  if [ \"${_CUSTOM_CONFIG_SQL}\" = \"NO\" ]; then\n    _tune_sql_memory_limits\n    _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n      || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n      _UXC_SQL=\"${_MXC_SQL}\"\n    else\n      _UXC_SQL=$(echo \"scale=0; ${_MXC_SQL}/2\" | bc 2>&1)\n    fi\n    sed -i \"s/= 191/= ${_UXC_SQL}/g\"                                              /etc/mysql/my.cnf\n    wait\n    sed -i \"s/= 292/= 
${_MXC_SQL}/g\"                                              /etc/mysql/my.cnf\n    wait\n    sed -i \"s/^tmp_table_size.*/tmp_table_size          = ${_TMP_SQL}/g\"          /etc/mysql/my.cnf\n    wait\n    sed -i \"s/^max_heap_table_size.*/max_heap_table_size     = ${_TMP_SQL}/g\"     /etc/mysql/my.cnf\n    wait\n    sed -i \"s/^myisam_sort_buffer_size.*/myisam_sort_buffer_size = ${_SRT_SQL}/g\" /etc/mysql/my.cnf\n    wait\n    sed -i \"s/^read_rnd_buffer_size.*/read_rnd_buffer_size    = ${_RND_SQL}/g\"    /etc/mysql/my.cnf\n    wait\n    sed -i \"s/^join_buffer_size.*/join_buffer_size        = ${_JBF_SQL}/g\"        /etc/mysql/my.cnf\n    wait\n  fi\n  if [ \"${_USE_OPC}\" -gt 2048 ]; then\n    _MAX_MEM_VALKEY=2048\n  else\n    _MAX_MEM_VALKEY=\"${_USE_OPC}\"\n  fi\n  _MAX_MEM_VALKEY=\"${_MAX_MEM_VALKEY}MB\"\n  sed -i \"s/^maxmemory .*/maxmemory ${_MAX_MEM_VALKEY}/g\" /etc/valkey/valkey.conf &> /dev/null\n  sed -i \"s/^maxmemory .*/maxmemory ${_MAX_MEM_VALKEY}/g\" /etc/redis/redis.conf &> /dev/null\n  wait\n  if [ -e \"/etc/default/jetty9\" ] && [ -e \"/opt/solr4\" ]; then\n    sed -i \"s/^JAVA_OPTIONS.*/JAVA_OPTIONS=\\\"-Xms64m ${_USE_JETTY} -Djava.awt.headless=true -Dsolr.solr.home=\\/opt\\/solr4 \\$JAVA_OPTIONS\\\" # Options/g\" /etc/default/jetty9\n    wait\n  fi\n  _tune_web_server_config\n}\n#\n_check_git_repos() {\n  if [ \"${_DL_MODE}\" != \"GIT\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _GITHUB_WORKS=NO\n  _GITLAB_WORKS=NO\n  if [ \"${_FORCE_GIT_MIRROR}\" = \"drupal\" ]; then\n    _FORCE_GIT_MIRROR=github\n  fi\n  if [ \"${_FORCE_GIT_MIRROR}\" = \"gitorious\" ]; then\n    _FORCE_GIT_MIRROR=gitlab\n  fi\n  if [ \"${_FORCE_GIT_MIRROR}\" = \"github\" ]; then\n    _msg \"INFO: We will use forced GitHub repository without testing connection\"\n    _GITHUB_WORKS=YES\n    _GITLAB_WORKS=NO\n    sleep 1\n  elif [ \"${_FORCE_GIT_MIRROR}\" = \"gitlab\" ]; then\n    _msg \"INFO: We will use forced GitLab mirror without testing 
connection\"\n    _GITHUB_WORKS=NO\n    _GITLAB_WORKS=YES\n    sleep 1\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Testing repository mirror servers availability...\"\n    fi\n    sleep 1\n    _GITHUB_WORKS=YES\n    _GITLAB_WORKS=YES\n    if ! command nc -w 10 -z github.com 443 >/dev/null 2>&1 ; then\n      _GITHUB_WORKS=NO\n      _msg \"WARN: The GitHub master repository server doesn't respond...\"\n    elif ! command nc -w 10 -z gitlab.com 443 >/dev/null 2>&1 ; then\n      _GITLAB_WORKS=NO\n      _msg \"WARN: The GitLab mirror repository server doesn't respond...\"\n    fi\n  fi\n  if [ \"${_GITHUB_WORKS}\" = \"YES\" ]; then\n    _BOA_REPO_NAME=\"boa\"\n    _BOA_REPO_GIT_URL=\"https://github.com/omega8cc\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: GitHub master repository will be used\"\n    fi\n  elif [ \"${_GITLAB_WORKS}\" = \"YES\" ]; then\n    _BOA_REPO_NAME=\"boa\"\n    _BOA_REPO_GIT_URL=\"${_gitLab}\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: GitLab mirror repository will be used\"\n    fi\n  else\n    cat <<EOF\n\n    None of repository servers responded in 5 seconds,\n    so we can't continue this installation.\n\n    Please try again later or check if your firewall has port 443 open.\n\n    Bye.\n\nEOF\n    _clean_pid_exit _check_git_repos_a\n  fi\n}\n\n\n###---### init\n#\ntouch /run/boa_run.pid\n#\n_BOA_REPO_NAME=\"boa\"\n_BOA_REPO_GIT_URL=\"https://github.com/omega8cc\"\nif [ \"${_DL_MODE}\" = \"GIT\" ]; then\n  _check_git_repos\nfi\n#\n#\nif [ \"$(id -u)\" -eq 0 ]; then\n  chmod a+w /dev/null\n  _msg \"INFO: This script is ran as a root user\"\n  _count_cpu\n  _find_fast_mirror_early\nelse\n  _msg \"ERROR: This script should be run as a root user\"\n  _msg \"Bye\"\n  _clean_pid_exit\nfi\n#\n#\nif [ ! 
-f \"/var/log/barracuda_log.txt\" ]; then\n  _msg \"ERROR: Please upgrade this system to BOA version ${_X_VERSION} first\"\n  _msg \"Bye\"\n  _clean_pid_exit version_a\nelse\n  _VERSIONS_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n  if [[ \"${_VERSIONS_TEST}\" =~ \"${_X_VERSION}\" ]]; then\n    _VERSIONS_TEST_RESULT=OK\n  else\n    _msg \"ERROR: Please upgrade this system to BOA version ${_X_VERSION} first\"\n    _msg \"Bye\"\n    _clean_pid_exit version_b\n  fi\nfi\n#\n#\nrm -f /opt/tmp/testecho*\n_SRCDIR=\"/opt/tmp/files\"\nmkdir -p ${_SRCDIR}\nchmod -R 777 /opt/tmp &> /dev/null\ncd /opt/tmp\nrm -rf /opt/tmp/boa\nif [ \"${_DL_MODE}\" = \"GIT\" ]; then\n  ${_gCb} ${_BRANCH_BOA} ${_BOA_REPO_GIT_URL}/${_BOA_REPO_NAME}.git &> /dev/null\nelse\n  curl ${_crlGet} \"${_urlDev}/${_AEGIR_VERSION}/boa.tar.gz\" | tar -xzf -\n  _BOA_REPO_NAME=\"boa\"\nfi\n#\n# Create tmp stuff\n_LOG=/var/backups/bond-${_NOW}.log\n#\n#\n\n\n###---### Tune Your Ægir Hosting System\n#\necho \" \"\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  _msg \"TUNER START -> checkpoint: \"\n  cat <<EOF\n\n  * Ægir Satellite Instance to tune: ${_TUNE_HOSTMASTER}\n  * Nginx server mod_evasive will be set to ${_TUNE_NGINX_CONNECT}\n  * Nginx server fastcgi timeout will be set to ${_TUNE_NGINX_TIMEOUT} seconds\n  * Nginx firewall limit of allowed requests will be set to ${_TUNE_NGINX_FIREWALL}/300\n  * Database server timeout will be set to ${_TUNE_SQL_TIMEOUT} seconds\n  * PHP-FPM server timeout will be set to ${_TUNE_PHP_FPM_TIMEOUT} seconds\n  * PHP-CLI drush timeout will be set to ${_TUNE_PHP_CLI_TIMEOUT} seconds\n\nEOF\n  echo \" \"\nfi\n_tPrmt=\"Are you ready to tune your system\"\n_tPrmt=\"${_tPrmt} with values shown above\"\n_tPrmt=$(echo -n ${_tPrmt} | fmt -su -w 2500 2>&1)\nif _prompt_yes_no \"${_tPrmt}?\" ; then\n  true\n  if [ ! 
-e \"/root/.upstart.cnf\" ]; then\n    _msg \"INFO: We will stop cron and then wait 30 seconds...\"\n    service cron stop &> /dev/null\n    sleep 30\n  fi\n  _msg \"INFO: Tuning in progress, please wait...\"\n  _restore_default_php\n  _php_conf_update\n  _tune_php\n  _restore_default_sql\n  _tune_sql\n  _restore_default_nginx\n  _tune_nginx\n  _tune_memory_limits\n  _restart_services\n  _msg \"INFO: Tuning completed\"\nelse\n  if [ ! -e \"/root/.upstart.cnf\" ]; then\n    _msg \"INFO: We will stop cron and then wait 30 seconds...\"\n    service cron stop &> /dev/null\n    sleep 30\n  fi\n  _restore_default_php\n  _php_conf_update\n  _restore_default_sql\n  _restore_default_nginx\n  _tune_memory_limits\n  _restart_services\n  _msg \"INFO: Tuning stopped and default settings restored\"\nfi\n[ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\nif [ ! -e \"/root/.upstart.cnf\" ]; then\n  service cron start &> /dev/null\nfi\nif [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  _msg \"INFO: Cron started again\"\nfi\n_msg \"BYE!\"\n\n###----------------------------------------###\n###\n###  Barracuda-Octopus-Nginx-Drupal Tuner\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###----------------------------------------###\n"
  },
  {
    "path": "aegir/tools/backup/run/create_config_readme.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _sPid=\"f62\"\n\n# Base directory for user configurations\n_BASE_DIR=\"/data/disk\"\n\n# Function to ensure the config directory exists\n_ensure_config_dir() {\n  _user=$1\n  _config_dir=\"${_BASE_DIR}/${_user}/static/control/remote_backups/config\"\n  _dir_ctrl_file=\"${_BASE_DIR}/${_user}/log/.backboa.${_user}.${_sPid}.config.dir.ctrl\"\n  if [ ! -d \"${_config_dir}\" ] || [ ! -e \"${_dir_ctrl_file}\" ]; then\n    mkdir -p \"${_config_dir}\"\n    chown -R ${_user}.ftp:users \"${_config_dir}\"\n    chmod 700 \"${_config_dir}\"\n    touch \"${_dir_ctrl_file}\"\n    echo \"Created config directory for user: ${_user}\"\n  fi\n}\n\n# Function to create a README file in the config directory\n_create_config_readme_file() {\n  _user=$1\n  _config_dir=\"${_BASE_DIR}/${_user}/static/control/remote_backups/config\"\n  _readme_file=\"${_config_dir}/README.txt\"\n  _readme_ctrl_file=\"${_BASE_DIR}/${_user}/log/.backboa.${_user}.${_sPid}.config.readme.ctrl\"\n  _user_static_dir=\"/data/disk/${_user}/static\"\n  _user_ftp_dir=\"/home/${_user}.ftp\"\n  _user_ftp_dir_regex=\"/home/${_user}\\.ftp\"\n\n  _ensure_config_dir \"${_user}\"\n\n  if [ ! -f \"${_readme_ctrl_file}\" ]; then\n    cat << EOF > \"${_readme_file}\"\nBackup Configuration README\n\nThis directory contains configuration files for customizing backup behavior.\nUsers can define include and exclude directives for their backups.\n\nAllowed paths for backups are restricted to:\n- ${_user_static_dir}\n- ${_user_ftp_dir}\n\nAvailable Configuration Files:\n\n1. include.txt\n   Use this file to specify additional directories or files to include in the backup.\n\n2. exclude.txt\n   Use this file to specify directories or files to exclude from the backup.\n\n3. 
include_regexp.txt\n   Use this file to specify patterns for including directories or files using regular expressions.\n\n4. exclude_regexp.txt\n   Use this file to specify patterns for excluding directories or files using regular expressions.\n\nUsage Instructions:\n\n1. include.txt\n   List full paths to the directories or files you want to include in the backup.\n   Example:\n   --include ${_user_static_dir}/documents\n   --include ${_user_ftp_dir}/documents\n\n2. exclude.txt\n   List full paths to the directories or files you want to exclude from the backup.\n   Example:\n   --exclude ${_user_static_dir}/cache\n   --exclude ${_user_ftp_dir}/temp\n\n3. include_regexp.txt\n   Use regular expressions to specify patterns for directories or files to include in the backup.\n   Example:\n   --include-regexp '^${_user_ftp_dir_regex}/documents/.*\\.pdf$'\n   --include-regexp '^${_user_static_dir}/project_data/.*'\n\n4. exclude_regexp.txt\n   Use regular expressions to specify patterns for directories or files to exclude from the backup.\n   Example:\n   --exclude-regexp '^${_user_static_dir}/trash/.*'\n   --exclude-regexp '^${_user_ftp_dir_regex}/temp_files/.*'\n\nSecurity:\n- Ensure these files are restricted to the user only:\n  - Files should have permissions set to 600 (chmod 600 <file>).\n  - The directory should have permissions set to 700 (chmod 700 <directory>).\n\nNotes:\n- Directives in these files will be merged with default system directives during backup operations.\n- Patterns defined in exclude_regexp.txt will take precedence over those in include_regexp.txt.\n- You can only define paths within the ${_user_static_dir}/ and ${_user_ftp_dir}/ directory trees.\n- Platform paths without direct access, under /data/disk/${_user}/distro, are included by default.\n- Invalid entries may cause the backup process to fail.\n\nExample Configuration:\n\nIf you want to exclude temporary files:\n- Add the following to exclude_regexp.txt:\n  --exclude-regexp '^${_user_static_dir}/temp/.*'\n  
--exclude-regexp '^${_user_ftp_dir_regex}/temp/.*'\n\nIf you want to include specific documents:\n- Add the following to include_regexp.txt:\n  --include-regexp '^${_user_ftp_dir_regex}/documents/.*\\.pdf$'\n  --include-regexp '^${_user_static_dir}/important_data/.*'\n\nEOF\n    chmod 600 \"${_readme_file}\"\n    chown ${_user}.ftp:users \"${_readme_file}\"\n    echo \"Created README file for config directory of user: ${_user}\"\n    touch \"${_readme_ctrl_file}\"\n  else\n    echo \"README file already updated for config directory of user: ${_user}\"\n  fi\n}\n\n# Main function to create README files for all users\n_main() {\n  for _user_dir in \"${_BASE_DIR}\"/*; do\n    if [ -d \"${_user_dir}\" ]; then\n      _user=$(basename \"${_user_dir}\")\n      if [ \"${_user}\" != \"arch\" ] \\\n        && [ \"${_user}\" != \"data\" ] \\\n        && [ \"${_user}\" != \"global\" ] \\\n        && [ \"${_user}\" != \"static\" ] \\\n        && [ \"${_user}\" != \"custom\" ]; then\n        _create_config_readme_file \"${_user}\"\n      fi\n    fi\n  done\n}\n\n# Execute the script\n_main\n"
  },
  {
    "path": "aegir/tools/backup/run/create_credentials_templates.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Global credentials directory\n_GLOBAL_CREDENTIALS_DIR=\"/root/.remote_backups/credentials\"\n\n# Base directory for user-specific credentials\n_USER_BASE_DIR=\"/data/disk\"\n\n# Function to ensure a directory exists\n_ensure_directory() {\n  _dir=$1\n  if [ ! -d \"${_dir}\" ]; then\n    mkdir -p \"${_dir}\"\n    chmod 700 \"${_dir}\"\n    echo \"Created directory: ${_dir}\"\n  fi\n}\n\n# Function to create credentials template files\n_create_credentials_templates() {\n  _target_dir=$1\n\n  # List of supported services\n  _services=(\n    \"aws_one_zone\"\n    \"aws_standard_ia\"\n    \"aws\"\n    \"azure\"\n    \"b2\"\n    \"cloudflare\"\n    \"do_spaces\"\n    \"gcs\"\n    \"ibm\"\n    \"linode\"\n    \"wasabi\"\n  )\n\n  for _service in \"${_services[@]}\"; do\n    _template_file=\"${_target_dir}/${_service}.txt\"\n\n    if [ -f \"${_template_file}\" ] && [ -e \"${_user_pid_dir}\" ] && [ ! -e \"${_user_pid_dir}/.backboa.${_user}.credentials.${_service}.tpl.ctrl\" ]; then\n      sed -i \"s/FULL_BACKUP_FREQUENCY=.*/FULL_BACKUP_FREQUENCY=\\\"28D\\\"/g\" \"${_template_file}\"\n      touch \"${_user_pid_dir}/.backboa.${_user}.credentials.${_service}.tpl.ctrl\"\n    fi\n\n    if [ ! 
-f \"${_template_file}\" ]; then\n      case \"${_service}\" in\n        aws|aws_one_zone|aws_standard_ia)\n          cat << EOF > \"${_template_file}\"\nexport AWS_ACCESS_KEY_ID=\"your_aws_access_key\"\nexport AWS_SECRET_ACCESS_KEY=\"your_aws_secret_key\"\nexport AWS_REGION=\"your_aws_region\"  # E.g., \"us-east-1\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        azure)\n          cat << EOF > \"${_template_file}\"\nexport AZURE_STORAGE_ACCOUNT=\"your_azure_storage_account\"\nexport AZURE_STORAGE_KEY=\"your_azure_storage_key\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        b2)\n          cat << EOF > \"${_template_file}\"\nexport B2_ACCOUNT_ID=\"your_b2_account_id\"\nexport B2_APPLICATION_KEY=\"your_b2_application_key\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        cloudflare)\n          cat << EOF > \"${_template_file}\"\nexport R2_ACCOUNT_ID=\"your_account_id\"\nexport R2_ACCESS_KEY_ID=\"your_access_key_id\"\nexport R2_SECRET_ACCESS_KEY=\"your_secret_access_key\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        do_spaces)\n          cat << EOF > \"${_template_file}\"\nexport DO_SPACES_KEY=\"your_do_spaces_key\"\nexport DO_SPACES_SECRET=\"your_do_spaces_secret\"\nexport DO_SPACES_REGION=\"your_do_spaces_region\"  # E.g., \"nyc3\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        gcs)\n          cat << EOF > \"${_template_file}\"\nexport GCS_PROJECT_ID=\"your_gcs_project_id\"\nexport GCS_SERVICE_ACCOUNT_KEY=\"your_gcs_service_account_key\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        ibm)\n          cat << EOF > \"${_template_file}\"\nexport IBM_API_KEY_ID=\"your_ibm_api_key_id\"\nexport IBM_SERVICE_INSTANCE_ID=\"your_ibm_service_instance_id\"\nexport IBM_REGION=\"your_ibm_region\"\nexport 
KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        linode)\n          cat << EOF > \"${_template_file}\"\nexport LINODE_ACCESS_KEY=\"your_linode_access_key\"\nexport LINODE_SECRET_KEY=\"your_linode_secret_key\"\nexport LINODE_REGION=\"your_linode_region\"  # E.g., \"us-east-1\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n        wasabi)\n          cat << EOF > \"${_template_file}\"\nexport WASABI_ACCESS_KEY=\"your_wasabi_access_key\"\nexport WASABI_SECRET_KEY=\"your_wasabi_secret_key\"\nexport WASABI_REGION=\"your_wasabi_region\"  # E.g., \"us-east-1\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\nEOF\n          ;;\n      esac\n      chmod 600 \"${_template_file}\"\n      echo \"Created template for service: ${_service} at ${_template_file}\"\n    else\n      echo \"Template for service: ${_service} already exists at ${_template_file}\"\n    fi\n  done\n}\n\n# Main function to create templates globally and for all users\n_main() {\n  # Create templates in the global credentials directory\n  _ensure_directory \"${_GLOBAL_CREDENTIALS_DIR}\"\n  _create_credentials_templates \"${_GLOBAL_CREDENTIALS_DIR}\"\n\n  # Iterate over user directories and create templates for each user\n  for _user_dir in \"${_USER_BASE_DIR}\"/*; do\n    if [ -d \"${_user_dir}\" ]; then\n      _user=$(basename \"${_user_dir}\")\n      if [ \"${_user}\" != \"arch\" ] \\\n        && [ \"${_user}\" != \"data\" ] \\\n        && [ \"${_user}\" != \"global\" ] \\\n        && [ \"${_user}\" != \"static\" ] \\\n        && [ \"${_user}\" != \"custom\" ]; then\n        _user_pid_dir=\"${_USER_BASE_DIR}/${_user}/log\"\n        _user_credentials_dir=\"${_USER_BASE_DIR}/${_user}/static/control/remote_backups/credentials\"\n        _ensure_directory \"${_user_credentials_dir}\"\n        _create_credentials_templates \"${_user_credentials_dir}\"\n      fi\n    fi\n  done\n}\n\n# Execute the script\n_main\n"
  },
  {
    "path": "aegir/tools/backup/run/create_cron_entries.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Interval in minutes between backup cycles (configurable in /root/.barracuda.cnf)\n_BACKUP_INTERVAL=360\n_WRAPPER_DIR=\"/root/.remote_backups/run\"\n_WRAPPER_SCRIPT=\"${_WRAPPER_DIR}/sequential_backups.sh\"\n_SCHEDULE_DIR=\"/root/.remote_backups/schedule\"\n_SCHEDULE_FILE=\"${_SCHEDULE_DIR}/backup_schedule.txt\"\n_CRON_FILE=\"/etc/cron.d/duplicity_backup\"\n_LOGFILE=\"/var/log/backup_runtime.log\"\n\n# Function to verify root access\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 0 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\nexport _BACKUP_INTERVAL=${_BACKUP_INTERVAL//[^0-9]/}\n: \"${_BACKUP_INTERVAL:=360}\"\n\n# Ensure global run directory exists and is owned by root\nmkdir -p \"${_WRAPPER_DIR}\"\nchown root:root \"${_WRAPPER_DIR}\"\nchmod 700 \"${_WRAPPER_DIR}\"\n\n# Ensure global schedule directory exists and is owned by root\nmkdir -p \"${_SCHEDULE_DIR}\"\nchown root:root \"${_SCHEDULE_DIR}\"\nchmod 700 \"${_SCHEDULE_DIR}\"\n\n# Function to generate the wrapper script\n_generate_wrapper_script() {\n  cat << 'EOF' > \"${_WRAPPER_SCRIPT}\"\n#!/bin/bash\n\n# Enable strict error handling for debugging only\n# set -euo pipefail\n\n# Environment setup\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# File paths\n_SCHEDULE_FILE=\"/root/.remote_backups/schedule/backup_schedule.txt\"\n_PID_DIR=\"/run\"\n_LOGFILE=\"/var/log/backup_runtime.log\"\n\n# Function to create PID file\n_create_pid_file() {\n  local _pidfile=$1\n  if [ -e \"${_pidfile}\" ]; then\n    echo \"Process already running with PID file ${_pidfile}\"\n    exit 1\n  else\n    echo $$ > \"${_pidfile}\"\n  fi\n}\n\n# Function to remove PID file\n_remove_pid_file() {\n  local _pidfile=$1\n  if [ -f \"${_pidfile}\" ]; then\n    rm -f \"${_pidfile}\" || {\n      echo \"Warning: Failed to remove PID file: ${_pidfile}\"\n    }\n  fi\n}\n\n# Function to remove stale multiback PID file\n_remove_stale_multiback_pid() {\n  _multiback_pidfile=\"/run/duplicity_${_service}_${_user}.pid\"\n  if [ -f \"${_multiback_pidfile}\" ]; then\n    _old_pid=$(cat \"${_multiback_pidfile}\")\n    if [ -n \"${_old_pid}\" ] && ! kill -0 \"${_old_pid}\" 2>/dev/null; then\n      echo \"Stale multiback PID file detected: ${_multiback_pidfile}. 
Removing it.\"\n      rm -f \"${_multiback_pidfile}\"\n    fi\n  fi\n}\n\n# Read backup services and users from the configuration file\nif [ ! -f \"${_SCHEDULE_FILE}\" ]; then\n  echo \"Error: Backup schedule file ${_SCHEDULE_FILE} not found.\"\n  exit 1\nfi\n\n# Function to print env for debugging\n_print_env() {\n  if [ \"$(id -u)\" -eq 0 ] && [ -e \"/root/.dev.server.cnf\" ]; then\n    _ENV=$(env 2>&1)\n    echo\n    echo \"_ENV in $1 start\"\n    echo \"${_ENV}\"\n    echo \"_ENV in $1 end\"\n    echo\n    _ENV=\n  fi\n}\n\n# Process each line in the backup configuration file\nwhile IFS= read -r _line || [ -n \"${_line}\" ]; do\n  # Skip empty lines and comments\n  if [[ \"${_line}\" =~ ^[[:space:]]*# ]] || [[ -z \"${_line}\" ]]; then\n    continue\n  fi\n\n  # Parse the service and user\n  _service=$(echo \"${_line}\" | cut -d' ' -f1)\n  _user=$(echo \"${_line}\" | cut -d' ' -f2)\n\n  # Ensure both service and user are defined\n  if [ -z \"${_service}\" ] || [ -z \"${_user}\" ]; then\n    echo \"Error: Invalid line in configuration file: ${_line}\"\n    continue\n  fi\n\n  echo \"Starting backup for ${_service} (${_user})...\"\n  export _service=\"${_service}\"\n  export _user=\"${_user}\"\n\n  # Define the PID file path\n  _CURRENT_PIDFILE=\"${_PID_DIR}/duplicity_${_service}_${_user}_sequential.pid\"\n\n  # Create the PID file\n  _create_pid_file \"${_CURRENT_PIDFILE}\"\n  trap \"rm -f ${_CURRENT_PIDFILE}; exit\" EXIT\n\n  # Remove stale multiback PID file if necessary\n  _remove_stale_multiback_pid\n\n  # Determine the paths configuration file\n  if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    _paths_file=\"/root/.remote_backups/paths/${_user}_paths.txt\"\n    _credentials_file=\"/root/.remote_backups/credentials/${_service}.txt\"\n    _secret_file=\"/root/.remote_backups/.secret.txt\"\n\n    if [ ! 
-f \"${_secret_file}\" ]; then\n      echo \"Secret file ${_secret_file} not found.\"\n      _remove_pid_file \"${_CURRENT_PIDFILE}\"\n      continue\n    fi\n\n    # Check if _paths_file exists\n    if [ ! -f \"${_paths_file}\" ]; then\n      echo \"Error: Paths configuration file ${_paths_file} not found.\"\n      _remove_pid_file \"${_CURRENT_PIDFILE}\"\n      continue\n    fi\n\n    # Check if credentials file exists\n    if [ ! -f \"${_credentials_file}\" ]; then\n      echo \"Error: Credentials file ${_credentials_file} not found.\"\n      _remove_pid_file \"${_CURRENT_PIDFILE}\"\n      continue\n    fi\n\n    # Change to the directory where _paths_file and credentials are located\n    cd /root/.remote_backups\n    _print_env \"sequential_backups_a\"\n\n  elif [ \"${_user}\" != \"arch\" ] \\\n    && [ \"${_user}\" != \"data\" ] \\\n    && [ \"${_user}\" != \"global\" ] \\\n    && [ \"${_user}\" != \"static\" ] \\\n    && [ \"${_user}\" != \"custom\" ]; then\n    _paths_file=\"/data/disk/${_user}/remote_backups/paths/paths.txt\"\n    _credentials_file=\"/data/disk/${_user}/static/control/remote_backups/credentials/${_service}.txt\"\n    _secret_file=\"/data/disk/${_user}/remote_backups/.secret.txt\"\n\n    if [ ! -f \"${_secret_file}\" ]; then\n      echo \"Secret file ${_secret_file} not found.\"\n      _remove_pid_file \"${_CURRENT_PIDFILE}\"\n      continue\n    fi\n\n    if [ ! -f \"${_paths_file}\" ]; then\n      echo \"Error: Paths configuration file ${_paths_file} not found.\"\n      _remove_pid_file \"${_CURRENT_PIDFILE}\"\n      continue\n    fi\n\n    if [ ! 
-f \"${_credentials_file}\" ]; then\n      echo \"Error: Credentials file ${_credentials_file} not found.\"\n      _remove_pid_file \"${_CURRENT_PIDFILE}\"\n      continue\n    fi\n\n    # Change to the directory where _paths_file and credentials are located\n    cd \"/data/disk/${_user}/remote_backups\"\n    _print_env \"sequential_backups_b\"\n  else\n    # Skip reserved directory names that are not valid backup targets\n    _remove_pid_file \"${_CURRENT_PIDFILE}\"\n    continue\n  fi\n\n  # Perform the backup\n  if multiback backup \"${_service}\" \"${_user}\"; then\n    echo \"Backup for ${_service} (${_user}) completed successfully.\"\n  else\n    echo \"Backup for ${_service} (${_user}) failed.\"\n  fi\n\n  # Wipe out exported variables to clean up env after running the backup\n  export _service=\n  export _user=\n\n  # Return to the original directory\n  cd - >/dev/null\n  _print_env \"sequential_backups_d\"\n\n  # Remove the PID file\n  _remove_pid_file \"${_CURRENT_PIDFILE}\"\n\ndone < \"${_SCHEDULE_FILE}\"\nEOF\n\n  chmod +x \"${_WRAPPER_SCRIPT}\"\n  echo \"Wrapper script created at ${_WRAPPER_SCRIPT}\"\n}\n\n# Function to create cron entries\n_create_cron_entries() {\n  echo \"# Cron job for sequential backups\" > \"${_CRON_FILE}\"\n  echo \"0 */$((_BACKUP_INTERVAL / 60)) * * * root ${_WRAPPER_SCRIPT}\" >> \"${_CRON_FILE}\"\n  chmod 644 \"${_CRON_FILE}\"\n  echo \"Cron entry created at ${_CRON_FILE}\"\n\n  # Validate the cron file\n  _validate_cron_file\n}\n\n# Function to validate the cron file\n_validate_cron_file() {\n  if ! grep -q -E \"^[^#]*${_WRAPPER_SCRIPT}\" \"${_CRON_FILE}\"; then\n    echo \"Error: Cron file validation failed. 
Please check the file at ${_CRON_FILE}.\"\n    exit 1\n  fi\n  echo \"Cron file validated successfully.\"\n}\n\n# Function to generate the backup schedule\n_generate_backup_schedule() {\n  echo \"# Backup schedule (service user)\" > \"${_SCHEDULE_FILE}\"\n\n  # Add global backups\n  _custom_paths_file=\"/root/.remote_backups/paths/custom_paths.txt\"\n  _GLOBAL_CRED_DIR=\"/root/.remote_backups/credentials\"\n  for _service in aws aws_one_zone aws_standard_ia azure b2 cloudflare do_spaces gcs ibm linode wasabi; do\n    if [ -f \"${_GLOBAL_CRED_DIR}/${_service}.txt\" ] && ! grep -q \"your_\" \"${_GLOBAL_CRED_DIR}/${_service}.txt\"; then\n      echo \"${_service} global\" >> \"${_SCHEDULE_FILE}\"\n      echo \"${_service} data\" >> \"${_SCHEDULE_FILE}\"\n      [ -s \"${_custom_paths_file}\" ] && echo \"${_service} custom\" >> \"${_SCHEDULE_FILE}\"\n    fi\n  done\n\n  # Add user-specific backups\n  for _user_dir in /data/disk/*; do\n    if [ -d \"${_user_dir}\" ]; then\n      _user=$(basename \"${_user_dir}\")\n      _USER_CRED_DIR=\"/data/disk/${_user}/static/control/remote_backups/credentials\"\n      for _service in aws aws_one_zone aws_standard_ia azure b2 cloudflare do_spaces gcs ibm linode wasabi; do\n        if [ -f \"${_USER_CRED_DIR}/${_service}.txt\" ] && ! 
grep -q \"your_\" \"${_USER_CRED_DIR}/${_service}.txt\"; then\n          echo \"${_service} ${_user}\" >> \"${_SCHEDULE_FILE}\"\n        fi\n      done\n    fi\n  done\n\n  echo \"Backup schedule created at ${_SCHEDULE_FILE}\"\n}\n\n# Function to adjust the backup interval dynamically\n_adjust_backup_interval() {\n  if [ -f \"${_LOGFILE}\" ]; then\n    _TOTAL_RUNTIME=$(tail -n1 \"${_LOGFILE}\" | awk '{print $NF}')\n    _TOTAL_RUNTIME=${_TOTAL_RUNTIME//[^0-9]/}\n    [ -z \"${_TOTAL_RUNTIME}\" ] && return 0\n    _NEW_INTERVAL=$(( (_TOTAL_RUNTIME / 3600 + 1) * 60 ))  # Round up to the next hour\n    if [ \"${_NEW_INTERVAL}\" -gt \"${_BACKUP_INTERVAL}\" ]; then\n      _BACKUP_INTERVAL=\"${_NEW_INTERVAL}\"\n      echo \"Adjusted backup interval to $((_BACKUP_INTERVAL / 60)) hours.\"\n    fi\n  fi\n}\n\n# Main script execution\n_generate_wrapper_script\n_generate_backup_schedule\n_adjust_backup_interval\n_create_cron_entries\n"
  },
  {
    "path": "aegir/tools/backup/run/create_global_paths_config.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _sPid=\"f62\"\n\n# Function to create or update global paths configuration\n_create_global_paths_config() {\n  _global_config_dir=\"/root/.remote_backups/paths\"\n  _include_list=\"${_global_config_dir}/.backboa.include.list\"\n  _exclude_list=\"${_global_config_dir}/.backboa.exclude.list\"\n  _custom_include_list=\"${_global_config_dir}/.backboa.custom_include.list\"\n  _custom_exclude_list=\"${_global_config_dir}/.backboa.custom_exclude.list\"\n\n  _include_global_regexp_file=\"${_global_config_dir}/.backboa.include_global_regexp.file\"\n  _include_global_file=\"${_global_config_dir}/.backboa.include_global.file\"\n  _exclude_global_file=\"${_global_config_dir}/.backboa.exclude.file\"\n  _merged_global_include_file=\"${_global_config_dir}/.backboa.global_include.merged.file\"\n  _merged_global_exclude_file=\"${_global_config_dir}/.backboa.global_exclude.merged.file\"\n\n  _include_data_file=\"${_global_config_dir}/.backboa.include_data.file\"\n  _exclude_data_file=\"${_global_config_dir}/.backboa.exclude_data.file\"\n  _merged_data_include_file=\"${_global_config_dir}/.backboa.data_include.merged.file\"\n  _merged_data_exclude_file=\"${_global_config_dir}/.backboa.data_exclude.merged.file\"\n\n  _global_ctrl_file=\"${_global_config_dir}/.backboa.${_sPid}.paths.ctrl.file\"\n  _global_paths_file=\"${_global_config_dir}/global_paths.txt\"\n  _data_paths_file=\"${_global_config_dir}/data_paths.txt\"\n  _custom_paths_file=\"${_global_config_dir}/custom_paths.txt\"\n  _disk_dir=\"/data/disk\"\n  _home_dir=\"/home\"\n\n  # Ensure global configuration directory exists and is owned by root\n  mkdir -p \"${_global_config_dir}\"\n  chown root:root \"${_global_config_dir}\"\n  chmod 700 \"${_global_config_dir}\"\n\n  # Function to validate configuration files\n  _validate_config() {\n    
_file=$1\n    _type=$2\n\n    # Check for invalid entries\n    if [ \"${_type}\" = \"regexp\" ]; then\n      _invalid_lines=$(grep -Ev \"^--(include-regexp|exclude-regexp)\" \"${_file}\" || true)\n      if [ -n \"${_invalid_lines}\" ]; then\n        echo \"Error: Invalid entries in ${_file}:\"\n        echo \"${_invalid_lines}\"\n        exit 1\n      fi\n    else\n      _invalid_lines=$(grep -Ev \"^--(include|exclude)\" \"${_file}\" || true)\n      if [ -n \"${_invalid_lines}\" ]; then\n        echo \"Error: Invalid entries in ${_file}:\"\n        echo \"${_invalid_lines}\"\n        exit 1\n      fi\n    fi\n  }\n\n  # Function to append unique entries from source to target file\n  _append_unique_entries() {\n    _source_file=$1\n    _target_file=$2\n    if [ -f \"${_source_file}\" ]; then\n      grep -v -F -x -f \"${_target_file}\" \"${_source_file}\" >> \"${_target_file}\"\n    fi\n  }\n\n  if [ ! -f \"${_global_ctrl_file}\" ]; then\n\n    ### Migrate legacy exclude/include files if present and merge unique entries\n\n    # _include_list\n    if [ -f \"/root/.backboa.include\" ]; then\n      if [ ! -f \"${_include_list}\" ]; then\n        cp \"/root/.backboa.include\" \"${_include_list}\"\n      else\n        _append_unique_entries \"/root/.backboa.include\" \"${_include_list}\"\n      fi\n    fi\n\n    # _exclude_list\n    if [ -f \"/root/.backboa.exclude\" ]; then\n      if [ ! -f \"${_exclude_list}\" ]; then\n        cp \"/root/.backboa.exclude\" \"${_exclude_list}\"\n      else\n        _append_unique_entries \"/root/.backboa.exclude\" \"${_exclude_list}\"\n      fi\n    else\n      cat << EOF > \"${_exclude_list}\"\n**files/advagg_css/**\n**files/advagg_js/**\n**files/css/**\n**files/js/**\n**private/temp/**\nEOF\n    fi\n\n    # _custom_include_list\n    if [ -f \"/root/.backboa.custom.include\" ]; then\n      if [ ! 
-f \"${_custom_include_list}\" ]; then\n        cp \"/root/.backboa.custom.include\" \"${_custom_include_list}\"\n      else\n        _append_unique_entries \"/root/.backboa.custom.include\" \"${_custom_include_list}\"\n      fi\n    fi\n\n    # _custom_exclude_list\n    if [ -f \"/root/.backboa.custom.exclude\" ]; then\n      if [ ! -f \"${_custom_exclude_list}\" ]; then\n        cp \"/root/.backboa.custom.exclude\" \"${_custom_exclude_list}\"\n      else\n        _append_unique_entries \"/root/.backboa.custom.exclude\" \"${_custom_exclude_list}\"\n      fi\n    fi\n\n    ### Create default exclude/include files if they don't exist\n\n    # _include_global_file\n    cat << EOF > \"${_include_global_file}\"\n--include /data/disk/arch\n--include /var/backups/csf\n--include /var/backups/dragon\n--include /var/backups/reports\nEOF\n\n    # _exclude_global_file\n    cat << EOF > \"${_exclude_global_file}\"\n--exclude /var/aegir/backups\n--exclude /var/aegir/.tmp\nEOF\n\n    # _include_data_file\n    # Start writing to the include data file\n    cat << EOF > \"${_include_data_file}\"\nEOF\n\n    # Iterate over each item in the disk directory\n    for subdir in \"${_disk_dir}\"/*/; do\n      # Check if it's a directory\n      if [ -d \"${subdir}\" ]; then\n        # Remove the trailing slash for consistency\n        sanitized_subdir=\"${subdir%/}\"\n        # Append the --include line to the include data file\n        if [ \"${sanitized_subdir}\" != \"/data/disk/arch\" ] \\\n          && [ \"${sanitized_subdir}\" != \"/data/disk/static\" ]; then\n          echo \"--include ${sanitized_subdir}\" >> \"${_include_data_file}\"\n        fi\n      fi\n    done\n\n    # Append the additional include statements\n    cat << EOF >> \"${_include_data_file}\"\n--include /data/all\n--include /data/conf\n--include /home\nEOF\n\n    # _include_global_regexp_file\n    cat << EOF > \"${_include_global_regexp_file}\"\n--include-regexp '^/root/\\..*\\.cnf$'\nEOF\n\n    # 
_exclude_data_file\n    # Start writing to the exclude data file\n    cat << EOF > \"${_exclude_data_file}\"\nEOF\n\n    # Iterate over each item in the disk directory\n    for subdir in \"${_disk_dir}\"/*/; do\n      # Check if it's a directory\n      if [ -d \"${subdir}\" ]; then\n        # Remove the trailing slash for consistency\n        sanitized_subdir=\"${subdir%/}\"\n        # Append the --exclude line to the exclude data path\n        if [ \"${sanitized_subdir}\" != \"/data/disk/arch\" ] \\\n          && [ \"${sanitized_subdir}\" != \"/data/disk/static\" ]; then\n          echo \"--exclude ${sanitized_subdir}/.tmp\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/backup-exports\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/backups\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/clients\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/src\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/static/.tmp\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/static/restores\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/static/tmp\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/static/trash\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/u\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/undo\" >> \"${_exclude_data_file}\"\n        else\n          echo \"--exclude ${sanitized_subdir}\" >> \"${_exclude_data_file}\"\n        fi\n      fi\n    done\n\n    # Iterate over each item in the home directory\n    for subdir in \"${_home_dir}\"/*/; do\n      # Check if it's a directory\n      if [ -d \"${subdir}\" ]; then\n        # Remove the trailing slash for consistency\n        sanitized_subdir=\"${subdir%/}\"\n        # Append the --exclude line to the exclude home path\n 
       if [[ \"${sanitized_subdir}\" =~ \\.ftp$ ]]; then\n          echo \"--exclude ${sanitized_subdir}/.tmp\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/backups\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/clients\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/platforms\" >> \"${_exclude_data_file}\"\n          echo \"--exclude ${sanitized_subdir}/static\" >> \"${_exclude_data_file}\"\n        else\n          echo \"--exclude ${sanitized_subdir}\" >> \"${_exclude_data_file}\"\n        fi\n      fi\n    done\n\n    # Append the additional exclude statements\n    cat << EOF >> \"${_exclude_data_file}\"\n--exclude /var/www\nEOF\n\n\n    # Validate and merge exclude/include files\n    [ -e \"${_include_data_file}\" ] && _validate_config \"${_include_data_file}\"\n    [ -e \"${_include_global_file}\" ] && _validate_config \"${_include_global_file}\"\n    [ -e \"${_exclude_global_file}\" ] && _validate_config \"${_exclude_global_file}\"\n\n    [ -e \"${_include_data_file}\" ] && cat \"${_include_data_file}\" > \"${_merged_data_include_file}\"\n    [ -e \"${_include_global_file}\" ] && cat \"${_include_global_file}\" > \"${_merged_global_include_file}\"\n    [ -e \"${_exclude_global_file}\" ] && cat \"${_exclude_global_file}\" > \"${_merged_global_exclude_file}\"\n\n    # Merge regexp files into final configurations\n    if [ -s \"${_include_global_regexp_file}\" ]; then\n      _validate_config \"${_include_global_regexp_file}\" \"regexp\"\n      cat \"${_include_global_regexp_file}\" >> \"${_merged_global_include_file}\"\n    fi\n    if [ -s \"${_exclude_data_file}\" ]; then\n      _validate_config \"${_exclude_data_file}\"\n      cat \"${_exclude_data_file}\" > \"${_merged_data_exclude_file}\"\n    fi\n\n    # Convert the exclude file contents to a single-line variable without backslashes and excessive whitespace\n    [ -e \"${_merged_data_include_file}\" ] && 
_MERGED_DATA_INCLUDE=$(cat \"${_merged_data_include_file}\" | tr '\\n' ' ' | tr -s ' ' | sed 's/^ *//;s/ *$//')\n    [ -e \"${_merged_data_exclude_file}\" ] && _MERGED_DATA_EXCLUDE=$(cat \"${_merged_data_exclude_file}\" | tr '\\n' ' ' | tr -s ' ' | sed 's/^ *//;s/ *$//')\n    [ -e \"${_merged_global_include_file}\" ] && _MERGED_GLOBAL_INCLUDE=$(cat \"${_merged_global_include_file}\" | tr '\\n' ' ' | tr -s ' ' | sed 's/^ *//;s/ *$//')\n    [ -e \"${_merged_global_exclude_file}\" ] && _MERGED_GLOBAL_EXCLUDE=$(cat \"${_merged_global_exclude_file}\" | tr '\\n' ' ' | tr -s ' ' | sed 's/^ *//;s/ *$//')\n\n    # Create the final global paths configuration file\n    cat << EOF > \"${_global_paths_file}\"\n_SOURCE=\"/etc /opt/solr4 /var/aegir /var/solr7 /var/solr9 /var/www /var/xdrago\"\n_INCLUDE_PATHS=\"${_MERGED_GLOBAL_INCLUDE}\"\n_EXCLUDE_PATHS=\"${_MERGED_GLOBAL_EXCLUDE}\"\n_INCLUDE_LIST=\"${_include_list}\"\n_EXCLUDE_LIST=\"${_exclude_list}\"\nEOF\n\n    echo \"Global paths configuration created or updated at ${_global_paths_file}\"\n\n    # Create the final data paths configuration file\n    cat << EOF > \"${_data_paths_file}\"\n_SOURCE=\"\"\n_INCLUDE_PATHS=\"${_MERGED_DATA_INCLUDE}\"\n_EXCLUDE_PATHS=\"${_MERGED_DATA_EXCLUDE}\"\n_INCLUDE_LIST=\"${_include_list}\"\n_EXCLUDE_LIST=\"${_exclude_list}\"\nEOF\n\n    echo \"Data paths configuration created or updated at ${_data_paths_file}\"\n\n    # Create the final custom paths configuration file, if custom lists exist\n    [ -s \"${_custom_include_list}\" ] && cat << EOF > \"${_custom_paths_file}\"\n_SOURCE=\"\"\n_INCLUDE_PATHS=\"\"\n_EXCLUDE_PATHS=\"\"\n_INCLUDE_LIST=\"${_custom_include_list}\"\nEOF\n\n    [ -s \"${_custom_exclude_list}\" ] && cat << EOF >> \"${_custom_paths_file}\"\n_EXCLUDE_LIST=\"${_custom_exclude_list}\"\nEOF\n\n    [ -s \"${_custom_paths_file}\" ] && echo \"Custom paths configuration created or updated at ${_custom_paths_file}\"\n\n    rm -f ${_global_config_dir}/.backboa*paths.ctrl.file\n    touch ${_global_ctrl_file}\n  fi\n}\n\n#### Generate Passphrase for 
Root\n_generate_global_secret_file() {\n  local _secret_file=\"/root/.remote_backups/.secret.txt\"\n\n  if [ ! -s \"${_secret_file}\" ]; then\n    # Ensure the parent directory exists before writing the secret\n    mkdir -p \"/root/.remote_backups\"\n    openssl rand -base64 32 > \"${_secret_file}\"\n    chmod 600 \"${_secret_file}\"\n    chattr +i \"${_secret_file}\"\n    echo \"Global secret file created at ${_secret_file} and made immutable.\"\n  else\n    echo \"Global secret file already exists at ${_secret_file}.\"\n  fi\n}\n\n# Main execution\n_create_global_paths_config\n_generate_global_secret_file\n\nexit 0\n
  },
  {
    "path": "aegir/tools/backup/run/create_readme.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _sPid=\"f62\"\n\n# Directory for storing README files\n_BASE_DIR=\"/data/disk\"\n\n# Function to ensure the README directory exists\n_ensure_readme_dir() {\n  _user=$1\n  _credentials_dir=\"${_BASE_DIR}/${_user}/static/control/remote_backups/credentials\"\n  _dir_ctrl_file=\"${_BASE_DIR}/${_user}/log/.backboa.${_user}.${_sPid}.credentials.dir.ctrl\"\n  if [ ! -d \"${_credentials_dir}\" ] || [ ! -e \"${_dir_ctrl_file}\" ]; then\n    mkdir -p \"${_credentials_dir}\"\n    chown -R ${_user}.ftp:users \"${_credentials_dir}\"\n    chmod 700 \"${_credentials_dir}\"\n    touch \"${_dir_ctrl_file}\"\n    echo \"Created credentials directory for user: ${_user}\"\n  fi\n}\n\n# Function to create a README file for a specific user\n_create_readme_file() {\n  _user=$1\n  _credentials_dir=\"${_BASE_DIR}/${_user}/static/control/remote_backups/credentials\"\n  _readme_file=\"${_credentials_dir}/README.txt\"\n  _readme_ctrl_file=\"${_BASE_DIR}/${_user}/log/.backboa.${_user}.${_sPid}.credentials.readme.ctrl\"\n\n  _ensure_readme_dir \"${_user}\"\n\n  if [ ! 
-f \"${_readme_ctrl_file}\" ]; then\n    cat << EOF > \"${_readme_file}\"\n# Backup Credentials README\n\nThis directory contains credentials files for the backup services supported by the system.\nEach file corresponds to a specific backup service and must follow the correct format.\n\n## Supported Services and Corresponding Files:\nAmazon S3 (Standard, One Zone, Standard-IA)\n  File: aws.txt\n  export AWS_ACCESS_KEY_ID=\"your_aws_access_key\"\n  export AWS_SECRET_ACCESS_KEY=\"your_aws_secret_key\"\n  export AWS_REGION=\"your_aws_region\"  # E.g., \"us-east-1\"\n  export KEEP_WITHIN=\"3M\"              # Retain backups from the last 3 months\n  export FULL_BACKUP_FREQUENCY=\"28D\"   # Create a full backup every 28 days\n\nGoogle Cloud Storage\n  File: gcs.txt\n  export GCS_PROJECT_ID=\"your_gcs_project_id\"\n  export GCS_SERVICE_ACCOUNT_KEY=\"your_gcs_service_account_key\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\nBackblaze B2\n  File: b2.txt\n  export B2_ACCOUNT_ID=\"your_b2_account_id\"\n  export B2_APPLICATION_KEY=\"your_b2_application_key\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\nCloudflare R2 Object Storage\n  File: cloudflare.txt\n  export R2_ACCOUNT_ID=\"your_account_id\"\n  export R2_ACCESS_KEY_ID=\"your_access_key_id\"\n  export R2_SECRET_ACCESS_KEY=\"your_secret_access_key\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\nAzure Blob Storage\n  File: azure.txt\n  export AZURE_STORAGE_ACCOUNT=\"your_azure_storage_account\"\n  export AZURE_STORAGE_KEY=\"your_azure_storage_key\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\nIBM Cloud Object Storage\n  File: ibm.txt\n  export IBM_API_KEY_ID=\"your_ibm_api_key_id\"\n  export IBM_SERVICE_INSTANCE_ID=\"your_ibm_service_instance_id\"\n  export IBM_REGION=\"your_ibm_region\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\nWasabi Hot Cloud Storage\n  File: wasabi.txt\n  export 
WASABI_ACCESS_KEY=\"your_wasabi_access_key\"\n  export WASABI_SECRET_KEY=\"your_wasabi_secret_key\"\n  export WASABI_REGION=\"your_wasabi_region\"  # E.g., \"us-east-1\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\nDigitalOcean Spaces\n  File: do_spaces.txt\n  export DO_SPACES_KEY=\"your_do_spaces_key\"\n  export DO_SPACES_SECRET=\"your_do_spaces_secret\"\n  export DO_SPACES_REGION=\"your_do_spaces_region\"  # E.g., \"nyc3\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\nLinode Object Storage by Akamai\n  File: linode.txt\n  export LINODE_ACCESS_KEY=\"your_linode_access_key\"\n  export LINODE_SECRET_KEY=\"your_linode_secret_key\"\n  export LINODE_REGION=\"your_linode_region\"  # E.g., \"us-east-1\"\n  export KEEP_WITHIN=\"3M\"\n  export FULL_BACKUP_FREQUENCY=\"28D\"\n\n## Security\n- Ensure credentials files are securely managed:\n  - Files should have permissions set to 600 (\\`chmod 600 <file>\\`).\n  - The credentials directory should have permissions set to 700 (\\`chmod 700 <directory>\\`).\n\n## Notes\n- The backup process will fail if the credentials file for a required service is missing or contains invalid placeholders.\n- Users are responsible for managing these credentials files securely.\n\nEOF\n    chmod 600 \"${_readme_file}\"\n    chown ${_user}.ftp:users \"${_readme_file}\"\n    echo \"Created README file for user: ${_user}\"\n    touch \"${_readme_ctrl_file}\"\n  else\n    echo \"README file already updated for user: ${_user}\"\n  fi\n}\n\n# Main function to create README files for all users\n_main() {\n  for _user_dir in \"${_BASE_DIR}\"/*; do\n    if [ -d \"${_user_dir}\" ]; then\n      _user=$(basename \"${_user_dir}\")\n      if [ \"${_user}\" != \"arch\" ] \\\n        && [ \"${_user}\" != \"data\" ] \\\n        && [ \"${_user}\" != \"global\" ] \\\n        && [ \"${_user}\" != \"static\" ] \\\n        && [ \"${_user}\" != \"custom\" ]; then\n        _create_readme_file \"${_user}\"\n      
fi\n    fi\n  done\n}\n\n# Execute the script\n_main\n"
  },
  {
    "path": "aegir/tools/backup/run/create_user_paths_config.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _sPid=\"f62\"\n\n# Log file for escape attempts and validation issues\n_VALIDATION_LOG_FILE=\"/var/log/backup_validation_issues.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  fi\n}\n_check_root\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n# Function to generate passphrase for user level backups\n_generate_user_secret_file() {\n  local _user=$1\n  local _secret_file=\"/data/disk/${_user}/remote_backups/.secret.txt\"\n  if [ ! 
-s \"${_secret_file}\" ]; then\n    openssl rand -base64 32 > \"${_secret_file}\"\n    chmod 600 \"${_secret_file}\"\n    chattr +i \"${_secret_file}\"\n    echo \"User secret file created at ${_secret_file} and made immutable.\"\n  else\n    echo \"User secret file already exists at ${_secret_file}.\"\n  fi\n}\n\n# Function to log validation issues\n_log_issue() {\n  local _type=$1\n  local _file=$2\n  local _message=$3\n  echo \"[$(date)] Validation issue: [${_type}] in file: [${_file}] with error: ${_message}\" >> \"${_VALIDATION_LOG_FILE}\"\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    # Alert the admin\n    echo \"Sending Backup Validation Alert to ${_MY_EMAIL} on $(date)\" >> ${_VALIDATION_LOG_FILE}\n    s-nail -s \"Backup Validation Alert for [$(hostname)] on $(date)\" ${_MY_EMAIL} < ${_VALIDATION_LOG_FILE}\n  fi\n}\n\n# Function to validate and merge configuration files\n_validate_and_merge_paths() {\n  local _file=$1\n  local _user=$2\n  local _allowed_prefixes=\"'^/((data/disk/${_user}/static)|(home/${_user}\\.ftp))\"\n  local _output_file=$3\n  local _if_validate=$4\n  local _invalid_paths_found=false\n\n  # Ensure output file exists and is empty\n  > \"${_output_file}\"\n\n  while IFS= read -r _line; do\n    # Skip empty lines and comments\n    if [[ \"${_line}\" =~ ^\\s*(#|$) ]]; then\n      echo \"${_line}\" >> \"${_output_file}\"\n      continue\n    fi\n\n    if [ \"${_if_validate}\" = \"YES\" ]; then\n      # Validate directives\n      if [[ \"${_line}\" =~ ^--(include|exclude|include-regexp|exclude-regexp) ]]; then\n        if echo \"${_line}\" | grep -Eq \"^--(include|exclude|include-regexp|exclude-regexp) ${_allowed_prefixes}\"; then\n          echo \"${_line}\" >> \"${_output_file}\"\n        else\n          _log_issue \"${_user}\" \"${_file}\" \"Invalid path: ${_line}\"\n          _invalid_paths_found=true\n        fi\n      else\n        _log_issue \"${_user}\" \"${_file}\" \"Invalid directive: ${_line}\"\n 
       _invalid_paths_found=true\n      fi\n    elif [ \"${_if_validate}\" = \"NO\" ]; then\n      echo \"${_line}\" >> \"${_output_file}\"\n    fi\n  done < \"${_file}\"\n\n  # If invalid paths were found, alert and skip merging\n  if [ \"${_invalid_paths_found}\" = true ]; then\n    echo \"Skipping invalid file '${_file}' for user '${_user}'.\"\n    > \"${_output_file}\" # Clear output to avoid invalid entries\n  fi\n}\n\n# Function to create or update a user's paths configuration file\n_create_user_paths_config() {\n  local _user=$1\n  local _user_config_dir=\"/data/disk/${_user}/remote_backups/paths\"\n  local _user_control_dir=\"/data/disk/${_user}/static/control/remote_backups/config\"\n  local _include_list=\"${_user_config_dir}/.backboa.${_user}.include.list\"\n  local _exclude_list=\"${_user_config_dir}/.backboa.${_user}.exclude.list\"\n  local _include_file=\"${_user_config_dir}/.backboa.${_user}.include.file\"\n  local _exclude_file=\"${_user_config_dir}/.backboa.${_user}.exclude.file\"\n  local _include_regexp_file=\"${_user_config_dir}/.backboa.${_user}.include_regexp.file\"\n  local _exclude_regexp_file=\"${_user_config_dir}/.backboa.${_user}.exclude_regexp.file\"\n  local _merged_include_file=\"${_user_config_dir}/.backboa.${_user}.include.merged.file\"\n  local _merged_exclude_file=\"${_user_config_dir}/.backboa.${_user}.exclude.merged.file\"\n  local _merged_regexp_include_file=\"${_user_config_dir}/.backboa.${_user}.include_regexp.merged.file\"\n  local _merged_regexp_exclude_file=\"${_user_config_dir}/.backboa.${_user}.exclude_regexp.merged.file\"\n  local _user_ctrl_file=\"${_user_config_dir}/.backboa.${_sPid}.paths.ctrl.file\"\n  local _include_ctrl_file=\"${_user_config_dir}/.backboa.${_user}.${_sPid}.include.ctrl.file\"\n  local _exclude_ctrl_file=\"${_user_config_dir}/.backboa.${_user}.${_sPid}.exclude.ctrl.file\"\n  local _merged_all_include_file=\"${_user_config_dir}/.backboa.${_user}.all.include.merged.file\"\n  local 
_merged_all_exclude_file=\"${_user_config_dir}/.backboa.${_user}.all.exclude.merged.file\"\n  local _user_paths_file=\"${_user_config_dir}/paths.txt\"\n\n  # Ensure user configuration directory exists and is owned by root\n  mkdir -p \"${_user_config_dir}\"\n  chown root:root \"${_user_config_dir}\"\n  chmod 755 \"${_user_config_dir}\"\n\n  # Function to append unique entries from source to target file\n  _append_unique_entries() {\n    local _source_file=$1\n    local _target_file=$2\n    if [ -f \"${_source_file}\" ]; then\n      grep -v -F -x -f \"${_target_file}\" \"${_source_file}\" >> \"${_target_file}\"\n    fi\n  }\n\n  if [ ! -f \"${_user_ctrl_file}\" ]; then\n\n    ### Migrate legacy exclude/include files if present and merge unique entries\n\n    # _include_list\n    if [ -f \"/root/.backboa.include\" ]; then\n      if [ ! -f \"${_include_list}\" ]; then\n        cp \"/root/.backboa.include\" \"${_include_list}\"\n      else\n        _append_unique_entries \"/root/.backboa.include\" \"${_include_list}\"\n      fi\n    fi\n\n    # _exclude_list\n    if [ -f \"/root/.backboa.exclude\" ]; then\n      if [ ! -f \"${_exclude_list}\" ]; then\n        cp \"/root/.backboa.exclude\" \"${_exclude_list}\"\n      else\n        _append_unique_entries \"/root/.backboa.exclude\" \"${_exclude_list}\"\n      fi\n    else\n      cat << EOF > \"${_exclude_list}\"\n**files/advagg_css/**\n**files/advagg_js/**\n**files/css/**\n**files/js/**\n**private/temp/**\nEOF\n    fi\n\n    ### Create default exclude/include files if they don't exist\n\n    # _include_file\n    if [ ! -f \"${_include_ctrl_file}\" ]; then\n      cat << EOF > \"${_include_file}\"\n--include /data/disk/${_user}/distro\n--include /data/disk/${_user}/platforms\n--include /data/disk/${_user}/static\n--include /home/${_user}.ftp\nEOF\n      rm -f ${_user_config_dir}/.backboa.${_user}.*.include.ctrl.file\n      touch \"${_include_ctrl_file}\"\n    fi\n\n    # _exclude_file\n    if [ ! 
-f \"${_exclude_ctrl_file}\" ]; then\n      cat << EOF > \"${_exclude_file}\"\n--exclude /data/disk/${_user}/.tmp\n--exclude /data/disk/${_user}/clients\n--exclude /data/disk/${_user}/static/restores\n--exclude /data/disk/${_user}/static/tmp\n--exclude /data/disk/${_user}/static/trash\n--exclude /data/disk/${_user}/u\n--exclude /data/disk/${_user}/undo\n--exclude /home/${_user}.ftp/.tmp\n--exclude /home/${_user}.ftp/backups\n--exclude /home/${_user}.ftp/clients\n--exclude /home/${_user}.ftp/platforms\n--exclude /home/${_user}.ftp/static\nEOF\n      rm -f ${_user_config_dir}/.backboa.${_user}.*.exclude.ctrl.file\n      touch \"${_exclude_ctrl_file}\"\n    fi\n\n    # Cleanup for empty or not used include config files\n    if [ ! -f \"${_user_control_dir}/include_regexp.txt\" ]; then\n      [ -e \"${_include_regexp_file}\" ] && rm -f \"${_include_regexp_file}\"\n      [ -e \"${_merged_regexp_include_file}\" ] && rm -f \"${_merged_regexp_include_file}\"\n    fi\n\n    # Cleanup for empty or not used exclude config files\n    if [ ! 
-f \"${_user_control_dir}/exclude_regexp.txt\" ]; then\n      [ -e \"${_exclude_regexp_file}\" ] && rm -f \"${_exclude_regexp_file}\"\n      [ -e \"${_merged_regexp_exclude_file}\" ] && rm -f \"${_merged_regexp_exclude_file}\"\n    fi\n\n    # Validate and merge system and user-space include files\n    _validate_and_merge_paths \"${_include_file}\" \"${_user}\" \"${_merged_include_file}\" NO\n    if [ -f \"${_user_control_dir}/include.txt\" ]; then\n      _validate_and_merge_paths \"${_user_control_dir}/include.txt\" \"${_user}\" \"${_merged_include_file}\" YES\n    fi\n\n    # Validate and merge system and user-space exclude files\n    _validate_and_merge_paths \"${_exclude_file}\" \"${_user}\" \"${_merged_exclude_file}\" NO\n    if [ -f \"${_user_control_dir}/exclude.txt\" ]; then\n      _validate_and_merge_paths \"${_user_control_dir}/exclude.txt\" \"${_user}\" \"${_merged_exclude_file}\" YES\n    fi\n\n    # Validate and merge regexp include files\n    if [ -f \"${_include_regexp_file}\" ]; then\n      _validate_and_merge_paths \"${_include_regexp_file}\" \"${_user}\" \"${_merged_regexp_include_file}\" NO\n    fi\n    if [ -f \"${_user_control_dir}/include_regexp.txt\" ]; then\n      _validate_and_merge_paths \"${_user_control_dir}/include_regexp.txt\" \"${_user}\" \"${_merged_regexp_include_file}\" YES\n    fi\n\n    # Validate and merge regexp exclude files\n    if [ -f \"${_exclude_regexp_file}\" ]; then\n      _validate_and_merge_paths \"${_exclude_regexp_file}\" \"${_user}\" \"${_merged_regexp_exclude_file}\" NO\n    fi\n    if [ -f \"${_user_control_dir}/exclude_regexp.txt\" ]; then\n      _validate_and_merge_paths \"${_user_control_dir}/exclude_regexp.txt\" \"${_user}\" \"${_merged_regexp_exclude_file}\" YES\n    fi\n\n    # Merge all include path directives into single file\n    cat \"${_merged_include_file}\" > \"${_merged_all_include_file}\"\n    cat \"${_merged_regexp_include_file}\" >> \"${_merged_all_include_file}\"\n\n    # Merge all exclude path 
directives into single file\n    cat \"${_merged_exclude_file}\" > \"${_merged_all_exclude_file}\"\n    [ -f \"${_merged_regexp_exclude_file}\" ] && cat \"${_merged_regexp_exclude_file}\" >> \"${_merged_all_exclude_file}\"\n\n    # Collapse the merged include file contents into a single-line variable without newlines and excessive whitespace\n    local _MERGED_ALL_INCLUDE=$(cat \"${_merged_all_include_file}\" | tr '\\n' ' ' | tr -s ' ' | sed 's/^ *//;s/ *$//')\n\n    # Collapse the merged exclude file contents into a single-line variable without newlines and excessive whitespace\n    local _MERGED_ALL_EXCLUDE=$(cat \"${_merged_all_exclude_file}\" | tr '\\n' ' ' | tr -s ' ' | sed 's/^ *//;s/ *$//')\n\n    # Create the final paths configuration file\n    cat << EOF > \"${_user_paths_file}\"\n_SOURCE=\"\"\n_USER_INCLUDE_PATHS=\"${_MERGED_ALL_INCLUDE}\"\n_USER_EXCLUDE_PATHS=\"${_MERGED_ALL_EXCLUDE}\"\n_INCLUDE_LIST=\"${_include_list}\"\n_EXCLUDE_LIST=\"${_exclude_list}\"\nEOF\n\n    rm -f ${_user_config_dir}/.backboa*paths.ctrl.file\n    touch ${_user_ctrl_file}\n    echo \"Paths configuration for '${_user}' created or updated at '${_user_paths_file}'.\"\n  fi\n}\n\n# Generate paths configuration for each user\nfor _user_dir in /data/disk/*; do\n  if [ -d \"${_user_dir}\" ]; then\n    _user=$(basename \"${_user_dir}\")\n    if [ \"${_user}\" != \"arch\" ] \\\n      && [ \"${_user}\" != \"data\" ] \\\n      && [ \"${_user}\" != \"global\" ] \\\n      && [ \"${_user}\" != \"static\" ] \\\n      && [ \"${_user}\" != \"custom\" ]; then\n      _create_user_paths_config \"${_user}\"\n      _generate_user_secret_file \"${_user}\"\n    fi\n  fi\ndone\n"
  },
  {
    "path": "aegir/tools/backup/run/duplicity_backup.sh",
    "content": "#!/bin/bash\n\n# Environment setup\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n# Function to print env for debugging\n_print_env() {\n  if [ \"$(id -u)\" -eq 0 ] && [ -e \"/root/.dev.server.cnf\" ]; then\n    _ENV=$(env 2>&1)\n    echo\n    echo \"_ENV in $1 start\"\n    echo \"${_ENV}\"\n    echo \"_ENV in $1 end\"\n    echo\n    _ENV=\n  fi\n}\n\n# Function to verify BOA keys\n_verify_boa_keys() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _verify_boa_keys in multiback\"\n  fi\n  if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n    _allw=NO\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlEnc=\"http://files.aegir.cc/enc/2024\"\n    _encName=$(echo ${_hName} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".o8.io\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".boa.io\"($) ]]; then\n      _allw=YES\n    fi\n    mkdir -p /var/opt\n    rm -f /var/opt/_encN*\n    curl ${_crlGet} \"${_urlEnc}/${_encName}\" -o /var/opt/_encN.${_encName}.tmp\n    wait\n    echo \"${_hName}.${_encName}\" > /var/opt/_encN_local.${_encName}.tmp\n    wait\n    if [ -e \"/var/opt/_encN.${_encName}.tmp\" ] && [ -e \"/var/opt/_encN_local.${_encName}.tmp\" ]; then\n      _diffTestIf=$(diff -w -B /var/opt/_encN.${_encName}.tmp /var/opt/_encN_local.${_encName}.tmp 2>&1)\n      if [ ! -z \"${_diffTestIf}\" ] && [ \"${_allw}\" = \"NO\" ]; then\n        echo\n        echo \"Your system requires valid license to use this function\"\n        echo \"Please visit https://omega8.cc/licenses to purchase your own\"\n        echo\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! 
-e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n        rm -f /var/opt/_encN*\n        exit 0\n      else\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n      fi\n    else\n      echo\n      echo \"Your system requires valid license to use this BOA feature\"\n      echo \"Unfortunately it was not possible to verify your system status\"\n      echo \"Please contact our support but visit https://omega8.cc/licenses first\"\n      echo\n      exit 0\n    fi\n  fi\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n# Function to calculate RAM usage percentage as an integer\n_calculate_ram_usage_percent() {\n  _total_ram_kb=$1\n  _available_ram_kb=$2\n  used_ram_kb=$((_total_ram_kb - _available_ram_kb))\n\n  # Using integer division to get a whole number percentage\n  echo $(( (used_ram_kb * 100) / _total_ram_kb ))\n}\n\n# Function to check and display system info\n_check_system_ram() {\n  # Get the total and available RAM in KB\n  _total_ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')\n  _available_ram_kb=$(grep MemAvailable /proc/meminfo | awk '{print $2}')\n\n  # Calculate RAM usage percentage\n  _ram_usage_percent=$(_calculate_ram_usage_percent ${_total_ram_kb} ${_available_ram_kb})\n}\n\n# Function to check and optimize RAM and disk caches\n_optimize_ram() {\n  swapoff -a\n  _check_system_ram\n  if [ \"${_ram_usage_percent}\" -gt 50 ]; then\n    sync && echo 3 | tee /proc/sys/vm/drop_caches\n  fi\n  swapon -a\n}\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   
NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n\n# Function to verify root access\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  _AWS_VLV=${_AWS_VLV//[^a-z]/}\n  if [ -z \"${_AWS_VLV}\" ]; then\n    _AWS_VLV=\"warning\"\n  fi\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n  _cpuNr=\"$(cat /data/all/cpuinfo 2>/dev/null | tr -d '\\n' || nproc 2>/dev/null)\"\n  if [ -n \"${_cpuNr}\" ]; then\n    [ \"${_cpuNr}\" -gt 8 ] && _useCpu=4\n    [ \"${_cpuNr}\" -le 8 ] && _useCpu=2\n    [ \"${_cpuNr}\" -le 4 ] && _useCpu=1\n  else\n    _useCpu=1\n  fi\n}\n_check_root\n_normalize_incident_report\n_optimize_ram\n_if_hosted_sys\n_verify_boa_keys\n_print_env \"multiback_init\"\n\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n# New OpenSSL 3.x version is required\nif [ ! -x \"/usr/local/ssl3/bin/openssl\" ]; then\n  echo \"New OpenSSL 3.x version is required\"\n  exit 1\nfi\n\n# Function to notify about still running backup\n_waiting_notify() {\n  local _templog=\"/var/backups/multiback_waiting_queue.log\"\n  cat /root/.remote_backups/schedule/backup_schedule.txt > ${_templog}\n  ps axf | grep multiback >> ${_templog}\n  ps axf | grep duplicity >> ${_templog}\n  ls -la /tmp/duplicity-*-tempdir >> ${_templog}\n  tree /root/.cache/duplicity >> ${_templog}\n  ls -laR /root/.cache/duplicity >> ${_templog}\n  grep \"Out of memory: Killed process.*duplicity\" /var/log/iptables.log >> ${_templog}\n  boa info  >> ${_templog}\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    s-nail -s \"Multiback Waiting Report for [${_hName}] on $(date)\" ${_MY_EMAIL} < ${_templog}\n  fi\n}\n\n_CNT=$(pgrep -fc duplicity)\nif (( _CNT > 0 )); then\n  echo \"[$(date)] Active duplicity process detected, will try again later...\" >> /var/log/mybackup_waiting_queue.log\n  _waiting_notify\n  exit 1\nfi\n\n# Function to display usage information\n_usage() 
{\n  echo \"Usage: $0 {backup|cleanup|restore} <SERVICE> <USER> [RESTORE_TARGET] [RESTORE_PATH] [RESTORE_TIME]\"\n  echo\n  echo \"Example commands:\"\n  echo \"  Backup:\"\n  echo \"  $0 backup aws john\"\n  echo \"  $0 backup b2 jane\"\n  echo\n  echo \"  Cleanup:\"\n  echo \"  $0 cleanup aws john\"\n  echo \"  $0 cleanup gcs jane\"\n  echo\n  echo \"  Restore:\"\n  echo \"  $0 restore aws john /restore/target /specific/path 1D\"\n  echo \"  $0 restore b2 jane /restore/target /another/path 2W\"\n  echo\n  echo \"Supported services:\"\n  echo \"  aws, aws_one_zone, aws_standard_ia, azure, b2, cloudflare, do_spaces, gcs, ibm, linode, wasabi\"\n  echo\n  echo \"NOTE: [RESTORE_PATH] must be an absolute path (no leading slash) of the file or directory to restore\"\n  echo\n  exit 1\n}\n\n# Function to create PID file\n_create_pid_file() {\n  local _pidfile=$1\n  if [ -e \"${_pidfile}\" ]; then\n    echo \"Process already running with PID file ${_pidfile}\"\n    exit 1\n  else\n    echo $$ > \"${_pidfile}\"\n  fi\n}\n\n# Function to remove PID file\n_remove_pid_file() {\n  local _pidfile=$1\n  if [ -f \"${_pidfile}\" ]; then\n    rm -f \"${_pidfile}\" || {\n      echo \"Warning: Failed to remove PID file: ${_pidfile}\"\n    }\n  fi\n}\n\n# Function to remove stale multiback PID file\n_remove_stale_multiback_pid() {\n  local _service=$1\n  local _user=$2\n  _multiback_pidfile=\"/run/duplicity_${_service}_${_user}.pid\"\n  if [ -f \"${_multiback_pidfile}\" ]; then\n    _old_pid=$(cat \"${_multiback_pidfile}\")\n    if [ -n \"${_old_pid}\" ] && ! kill -0 \"${_old_pid}\" 2>/dev/null; then\n      echo \"Stale multiback PID file detected: ${_multiback_pidfile}. 
Removing it.\"\n      rm -f \"${_multiback_pidfile}\"\n    fi\n  fi\n}\n\n# Function to log validation issues\n_log_issue() {\n  local _type=$1\n  local _file=$2\n  local _message=$3\n  echo \"[$(date)] Validation issue type: [${_type}] in file: [${_file}] with error: ${_message}\" >> \"${_VALIDATION_LOG_FILE}\"\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    # Alert the admin\n    boa info  >> ${_LOGFILE}\n    echo \"Sending Backup Validation Alert to ${_MY_EMAIL} on $(date)\" >> ${_LOGFILE}\n    s-nail -s \"Backup Validation Alert for [$(hostname)] on $(date)\" ${_MY_EMAIL} < ${_LOGFILE}\n  fi\n}\n\n# Helper function to URL-encode using jq\n_url_encode() {\n  echo -n \"$1\" | jq -s -R -r @uri\n}\n\n# Function to escape values\n_escape_value() {\n  printf '%q' \"$1\"\n}\n\n# Function to sanitize and validate credentials file\n_validate_credentials() {\n  local _cred_file=\"$1\"\n  local _service=\"$2\"\n  local _line_number=0\n\n  while IFS= read -r _line || [ -n \"${_line}\" ]; do\n    _line_number=$(( _line_number + 1 ))\n\n    # Trim leading and trailing whitespace\n    _line=\"${_line#\"${_line%%[![:space:]]*}\"}\"\n    _line=\"${_line%\"${_line##*[![:space:]]}\"}\"\n\n    # Skip empty lines immediately\n    if [[ -z \"${_line}\" ]]; then\n      continue\n    fi\n\n    # Remove full-line comments: lines that *start* with '#'\n    if [[ \"${_line}\" == \\#* ]]; then\n      continue\n    fi\n\n    # Remove anything after (and including) the first '#' for inline comments\n    # (This is a naive approach that does not consider # within quotes)\n    if [[ \"${_line}\" == *\"#\"* ]]; then\n      _line=\"${_line%%#*}\"\n      # Re-trim after removing the comment\n      _line=\"${_line#\"${_line%%[![:space:]]*}\"}\"\n      _line=\"${_line%\"${_line##*[![:space:]]}\"}\"\n    fi\n\n    # Skip if there's nothing left after stripping inline comment\n    if [[ -z \"${_line}\" ]]; then\n      continue\n    fi\n\n    # Remove 'export ' 
prefix if present\n    _line=\"${_line#export }\"\n\n    # Validate the variable assignment (key=value)\n    if [[ \"${_line}\" =~ ^([A-Za-z_][A-Za-z0-9_]*)=(\\\".*\\\"|'.*'|[^[:space:]]+)$ ]]; then\n      _varname=\"${BASH_REMATCH[1]}\"\n      _value=\"${BASH_REMATCH[2]}\"\n\n      # Remove surrounding quotes if present\n      if [[ \"${_value}\" =~ ^\\\".*\\\"$ || \"${_value}\" =~ ^\\'.*\\'$ ]]; then\n        _value=\"${_value:1:-1}\"\n      fi\n\n      # Check for forbidden characters in value\n      if echo \"${_value}\" | grep -q -E '[$`(){};&|<>]'; then\n        _log_issue \"credentials\" \"${_cred_file}\" \\\n          \"Forbidden characters in value at line ${_line_number}: ${_line}\"\n        continue\n      fi\n\n      # Safely export the variable (URL-encode if needed)\n      if [ \"${_service}\" = \"b2\" ]; then\n        export ${_varname}=\"$(_url_encode \"${_value}\")\"\n      else\n        export ${_varname}=\"${_value}\"\n      fi\n    else\n      _log_issue \"credentials\" \"${_cred_file}\" \\\n        \"Invalid syntax at line ${_line_number}: ${_line}\"\n    fi\n  done < \"${_cred_file}\"\n\n  _print_env \"multiback_validate_credentials\"\n}\n\n# Function to load credentials\n_load_credentials() {\n  local _service=\"$1\"\n  local _user=\"$2\"\n  if [ \"${_user}\" != \"arch\" ] \\\n    && [ \"${_user}\" != \"data\" ] \\\n    && [ \"${_user}\" != \"global\" ] \\\n    && [ \"${_user}\" != \"static\" ] \\\n    && [ \"${_user}\" != \"custom\" ]; then\n    local _cred_file=\"/data/disk/${_user}/static/control/remote_backups/credentials/${_service}.txt\"\n    local _secret_file=\"/data/disk/${_user}/remote_backups/.secret.txt\"\n  fi\n  if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    local _cred_file=\"/root/.remote_backups/credentials/${_service}.txt\"\n    local _secret_file=\"/root/.remote_backups/.secret.txt\"\n  fi\n\n  if [ -s \"${_secret_file}\" ]; then\n    export 
PASSPHRASE=$(cat \"${_secret_file}\")\n  else\n    echo \"Secret file ${_secret_file} not found. Unable to proceed.\"\n    exit 1\n  fi\n\n  if [ ! -s \"${_cred_file}\" ]; then\n    echo \"Error: Credentials file '${_cred_file}' not found.\"\n    exit 1\n  fi\n\n  _validate_credentials \"${_cred_file}\" \"${_service}\"\n  _print_env \"multiback_load_credentials\"\n}\n\n# Function to load paths configuration\n_load_paths() {\n  local _user=\"$1\"\n  if [ \"${_user}\" != \"arch\" ] \\\n    && [ \"${_user}\" != \"data\" ] \\\n    && [ \"${_user}\" != \"global\" ] \\\n    && [ \"${_user}\" != \"static\" ] \\\n    && [ \"${_user}\" != \"custom\" ]; then\n    local _paths_file=\"/data/disk/${_user}/remote_backups/paths/paths.txt\"\n  elif [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    local _paths_file=\"/root/.remote_backups/paths/${_user}_paths.txt\"\n  fi\n\n  if [ ! -f \"${_paths_file}\" ]; then\n    echo \"Error: Paths configuration file '${_paths_file}' not found.\"\n    exit 1\n  fi\n\n  if [ \"${_user}\" != \"arch\" ]; then\n    source \"${_paths_file}\"\n  fi\n  _print_env \"multiback_load_paths\"\n}\n\n# Function to validate duration format and fallback to default\n_validate_or_default_duration() {\n  local _value=$1\n  local _var_name=$2\n  local _default=$3\n\n  # Supported formats: number followed by D (days), W (weeks), M (months), Y (years)\n  if [[ ! \"${_value}\" =~ ^[0-9]+[DWMY]$ ]] || [[ \"${_value}\" =~ ^[0][DWMY]$ ]]; then\n    echo \"Warning: Invalid value '${_value}' for ${_var_name}. Using default '${_default}'.\"\n    eval \"${_var_name}='${_default}'\"\n    _print_env \"multiback_validate_or_default_duration\"\n  fi\n\n  # Enforced min value for KEEP_WITHIN (1M)\n  if [ \"${_var_name}\" = \"KEEP_WITHIN\" ] && [[ ! \"${_value}\" =~ ^[0-9]+[MY]$ ]]; then\n    echo \"Warning: Invalid value '${_value}' for ${_var_name}. It must be at least 1M. 
Using default '${_default}'.\"\n    eval \"${_var_name}='${_default}'\"\n    _print_env \"multiback_validate_or_default_duration_keep\"\n  fi\n\n  # Enforced min and max value for FULL_BACKUP_FREQUENCY (7D to 60D)\n  if [ \"${_var_name}\" = \"FULL_BACKUP_FREQUENCY\" ] && [[ ! \"${_value}\" =~ ^([7-9]|[1-5][0-9]|60)D$ ]]; then\n    echo \"Warning: Invalid value '${_value}' for ${_var_name}. It must be between 7D and 60D. Using default '${_default}'.\"\n    eval \"${_var_name}='${_default}'\"\n    _print_env \"multiback_validate_or_default_duration_freq\"\n  fi\n}\n\n# Function to construct _BUCKET_NAME\n_construct_bucket_name() {\n  local _service_abbr=$1\n  local _user=$2\n  _service_dash=$(echo -n ${_service_abbr} | tr _ -)\n  _hst_dash=$(echo -n ${_hName} | tr . -)\n  export _BUCKET_NAME=\"back-to-${_user}-${_hst_dash}-${_service_dash}\"\n  export _NAME=\"${_user}-${_service_dash}\"\n  export _LOGFILE=\"${_LOGPTH}/${_BUCKET_NAME}.log\"\n  _print_env \"multiback_construct_bucket_name\"\n}\n\n# Function to generate duplicity-compatible include directives\n_generate_include_directives() {\n  local _source=$1\n  local _include=\"\"\n  for _cdir in ${_source}; do\n    _include=\"${_include} --include ${_cdir}\"\n  done\n  echo \"${_include}\"\n}\n\n# Function to prepare backup directives\n_backup_prepare() {\n  if [ -e \"/root/.cache/duplicity/${_NAME}\" ]; then\n    _CacheTest=$(find /root/.cache/duplicity/${_NAME} \\\n      -maxdepth 1 \\\n      -mindepth 1 \\\n      -type f \\\n      | sort 2>&1)\n    if [[ \"${_CacheTest}\" =~ \"No such file or directory\" ]] \\\n      || [ -z \"${_CacheTest}\" ]; then\n      export _cached=NO\n    else\n      export _cached=YES\n    fi\n  fi\n  # Generate include directives dynamically\n  [ -n \"${_SOURCE}\" ] && _SRC_INCLUDE=$(_generate_include_directives \"${_SOURCE}\")\n  #\n  [ -n \"${_INCLUDE_PATHS}\" ] && _MERGED_ALL_INCLUDE=\"${_INCLUDE_PATHS}\"\n  [ -n \"${_EXCLUDE_PATHS}\" ] && _MERGED_ALL_EXCLUDE=\"${_EXCLUDE_PATHS}\"\n 
 #\n  [ -n \"${_USER_INCLUDE_PATHS}\" ] && _USER_MERGED_ALL_INCLUDE=\"${_USER_INCLUDE_PATHS}\"\n  [ -n \"${_USER_EXCLUDE_PATHS}\" ] && _USER_MERGED_ALL_EXCLUDE=\"${_USER_EXCLUDE_PATHS}\"\n  #\n  [ -s \"${_INCLUDE_LIST}\" ] && _LST_INCLUDE=\"--include-filelist ${_INCLUDE_LIST}\"\n  [ -s \"${_EXCLUDE_LIST}\" ] && _LST_EXCLUDE=\"--exclude-filelist ${_EXCLUDE_LIST}\"\n  ###\n  [ -n \"${_MERGED_ALL_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_MERGED_ALL_INCLUDE}\"\n  [ -n \"${_USER_MERGED_ALL_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_USER_MERGED_ALL_INCLUDE}\"\n  [ -n \"${_LST_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_BATCH_INCLUDE} ${_LST_INCLUDE}\"\n  [ -n \"${_SRC_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_BATCH_INCLUDE} ${_SRC_INCLUDE}\"\n  #\n  [ -n \"${_MERGED_ALL_EXCLUDE}\" ] && _BATCH_EXCLUDE=\"${_MERGED_ALL_EXCLUDE}\"\n  [ -n \"${_USER_MERGED_ALL_EXCLUDE}\" ] && _BATCH_EXCLUDE=\"${_USER_MERGED_ALL_EXCLUDE}\"\n  [ -n \"${_LST_EXCLUDE}\" ] && _BATCH_EXCLUDE=\"${_BATCH_EXCLUDE} ${_LST_EXCLUDE}\"\n  #\n  export _BATCH_INCLUDE\n  export _BATCH_EXCLUDE\n\n  _print_env \"multiback_backup_prepare\"\n}\n\n\n# Function to set backup mode\n_set_mode() {\n  local _user=\"${_USER}\"\n  [ -z \"${_MODE}\" ] && _MODE=\"backup\"\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] && [ \"${_cached}\" = \"YES\" ]; then\n    export _MODE=\"incremental\"\n  else\n    [ ! -e \"${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.full.log\" ] && export _MODE=\"full\"\n  fi\n  [ -e \"/root/.dev.server.cnf\" ] && echo \"The _MODE has been set to (${_MODE}) in _set_mode for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n      if [ \"${_DOM}\" = 1 ] && [ ! 
-e \"${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.full.log\" ]; then\n        _MODE=\"full\"\n        echo \"The _MODE has been re-set to (${_MODE}) in _set_mode for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n      fi\n    fi\n  else\n    [ -e \"/root/.dev.server.cnf\" ] && echo \"The FULL_BACKUP_FREQUENCY is (${FULL_BACKUP_FREQUENCY}) for ${_BUCKET_NAME}\" >> ${_LOGFILE}\n  fi\n  export _MODE\n  _print_env \"multiback_set_mode\"\n}\n\n# Function to construct backup command\n_set_cmd() {\n  local _user=\"${_USER}\"\n  if [ -z \"${KEEP_WITHIN}\" ] && [ -n \"${_AWS_TTL}\" ]; then\n    export KEEP_WITHIN=\"${_AWS_TTL}\"\n  fi\n  if [ -z \"${FULL_BACKUP_FREQUENCY}\" ] && [ -n \"${_AWS_FLC}\" ]; then\n    export FULL_BACKUP_FREQUENCY=\"${_AWS_FLC}\"\n  fi\n\n  # Validate or set default for KEEP_WITHIN\n  _validate_or_default_duration \"${KEEP_WITHIN}\" \"KEEP_WITHIN\" \"${_DEFAULT_KEEP_WITHIN}\"\n\n  # Validate or set default for FULL_BACKUP_FREQUENCY\n  _validate_or_default_duration \"${FULL_BACKUP_FREQUENCY}\" \"FULL_BACKUP_FREQUENCY\" \"${_DEFAULT_FULL_BACKUP_FREQUENCY}\"\n\n  ### Default backup command with encryption\n  export _DCY_BUP_CMD=\"/usr/local/bin/duplicity ${_MODE} \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu} \\\n    --copy-links \\\n    --full-if-older-than ${FULL_BACKUP_FREQUENCY} \\\n    --volsize 300\"\n\n  ### Default utility command with encryption\n  export _DCY_UTL_CMD=\"/usr/local/bin/duplicity \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu}\"\n\n  ### Custom backup command with encryption and enforced own FULL_BACKUP_FREQUENCY\n  export _FBF_BUP_CMD=\"/usr/local/bin/duplicity ${_MODE} \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu} \\\n    --copy-links \\\n    --volsize 300\"\n\n  ### Custom backup command without encryption and enforced own 
FULL_BACKUP_FREQUENCY\n  export _NOE_BUP_CMD=\"/usr/local/bin/duplicity ${_MODE} \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu} \\\n    --no-encryption \\\n    --volsize 300\"\n\n  ### Custom utility command without encryption\n  export _NOE_UTL_CMD=\"/usr/local/bin/duplicity \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --no-encryption \\\n    --concurrency ${_useCpu}\"\n\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ]; then\n      export _DCY_BUP_CMD=\"${_FBF_BUP_CMD}\"\n    elif [ \"${_user}\" = \"custom\" ]; then\n      export _DCY_BUP_CMD=\"${_NOE_BUP_CMD}\"\n      export _DCY_UTL_CMD=\"${_NOE_UTL_CMD}\"\n    fi\n  fi\n\n  _print_env \"multiback_set_cmd\"\n}\n\n_test() {\n  local _mode=\"$1\"\n  if [ \"${_mode}\" != \"only\" ]; then\n    _set_mode\n    _set_cmd\n  fi\n  echo \"Running ${_BUCKET_NAME} connection test, please wait...\"\n  echo \"Command is ${_DCY_UTL_CMD} cleanup --dry-run --timeout 8 ${_BACKUP_TARGET}\"\n  _ConnTest=$(${_DCY_UTL_CMD} cleanup --dry-run --timeout 8 ${_BACKUP_TARGET} 2>&1)\n  if [[ \"${_ConnTest}\" =~ \"No connection to backend\" ]] \\\n    || [[ \"${_ConnTest}\" =~ \"does not exist\" ]] \\\n    || [[ \"${_ConnTest}\" =~ \"IllegalLocationConstraintException\" ]]; then\n    echo \"Sorry, I can't connect to ${_BUCKET_NAME}\"\n    echo >> ${_LOGFILE}\n    echo \"Sorry, I can't connect to ${_BUCKET_NAME}\" >> ${_LOGFILE}\n    echo \"Please check if the bucket has expected name:\" >> ${_LOGFILE}\n    echo \" ${_BUCKET_NAME}\" >> ${_LOGFILE}\n    echo \"This bucket must exist in the specified ${_SERVICE} region\" >> ${_LOGFILE}\n    echo >> ${_LOGFILE}\n  else\n    echo \"OK, I can connect to ${_BUCKET_NAME}\"\n  fi\n}\n\n# Function to check collection-status only\n_status() {\n  echo \"Command is ${_DCY_UTL_CMD} collection-status ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} 
collection-status ${_BACKUP_TARGET}\n  wait\n}\n\n# Function to list-current-files only\n_list() {\n  echo \"Command is ${_DCY_UTL_CMD} list-current-files ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} list-current-files ${_BACKUP_TARGET}\n  wait\n}\n\n_remove_older_than() {\n  echo \"Running remove-older-than ${KEEP_WITHIN} for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} remove-older-than ${KEEP_WITHIN} --force ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} remove-older-than ${KEEP_WITHIN} --force ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n_collection_status() {\n  echo \"Running collection-status for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} collection-status ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} collection-status ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n# Function to only repair incomplete backup sets\n_repair_only() {\n  echo \"Running repair via cleanup --force for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} cleanup --force ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} cleanup --force ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n# Function to repair incomplete backup sets\n_repair() {\n  _repair_only\n  _collection_status\n}\n\n# Function to check if repair incomplete backup sets is needed\n_check_if_repair() {\n  if grep -q \"found incomplete backup sets\" \"${_LOGFILE}\"; then\n    _repair_only\n  fi\n}\n\n# Function to check if backup worked cleanly or log the errors\n_check_if_worked_cleanly_or_log_err() {\n  local _user=\"${_USER}\"\n  if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    local _logs_dir=\"/root/.remote_backups/logs\"\n  else\n    local _logs_dir=\"/data/disk/${_user}/static/control/remote_backups/logs\"\n  fi\n  if grep -q \"Backup Statistics\" \"${_LOGFILE}\"; then\n    [ ! -e \"${_logs_dir}\" ] && mkdir -p ${_logs_dir}\n    cp -af \"${_LOGFILE}\" \"${_logs_dir}/OK-${_BUCKET_NAME}.log\"\n  else\n    [ ! 
-e \"${_logs_dir}\" ] && mkdir -p ${_logs_dir}\n    cp -af \"${_LOGFILE}\" \"${_logs_dir}/ERR-${_BUCKET_NAME}.log\"\n  fi\n}\n\n# Function to wipe the bucket completely\n_wipe() {\n  echo \"Running wipe via remove-all-but-n-full 0 --force for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} remove-all-but-n-full 0 --force ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} remove-all-but-n-full 0 --force ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n# Function to purge all backup sets\n_purge() {\n  _repair_only\n  _wipe\n  _collection_status\n}\n\n# Function to run weekly cleanup\n_weekly_cleanup() {\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n    && [ ! -e \"${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.cleanup.log\" ] \\\n    && [ \"${_DOW}\" = 7 ] \\\n    && [ \"${_cached}\" = \"YES\" ]; then\n    _test \"only\"\n    _remove_older_than\n    echo \"$(date)\" >> ${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.cleanup.log\n  else\n    _test \"only\"\n  fi\n}\n\n# Function to clean up old backups\n_cleanup() {\n  _remove_older_than\n  _collection_status\n}\n\n# Function to perform backup\n_run_backup() {\n  export _FULL_BACK_CMD=\"${_DCY_BUP_CMD} ${_BATCH_EXCLUDE} ${_BATCH_INCLUDE} --exclude '**' / ${_BACKUP_TARGET}\"\n  echo \"Running in ${_MODE} mode for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"$(date)\" >> ${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.${_MODE}.log\n  ${_DCY_BUP_CMD} ${_BATCH_EXCLUDE} ${_BATCH_INCLUDE} --exclude '**' / ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n  _print_env \"multiback_run_backup\"\n}\n\n# Function to prepare backup\n_backup() {\n  _backup_prepare\n  _set_mode\n  _set_cmd\n  _run_backup\n  _check_if_repair\n  _weekly_cleanup\n  _check_if_worked_cleanly_or_log_err\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" != \"OFF\" ]; then\n    boa info  >> ${_LOGFILE}\n    echo \"Sending email report on $(date)\" >> ${_LOGFILE}\n    echo >> ${_LOGFILE}\n    s-nail -s \"Backup report (${_MODE}) for 
${_BUCKET_NAME} on $(date)\" ${_MY_EMAIL} < ${_LOGFILE}\n  fi\n  cat ${_LOGFILE} >> ${_LOGPTH}/${_BUCKET_NAME}.archive.log\n  rm -f ${_LOGFILE}\n  _print_env \"multiback_backup\"\n}\n\n### Legacy procedure for reference\n#\n#   Note: Be careful while restoring not to prepend a slash to the path!\n#\n# $ backboa restore file [time] destination\n#\n#   Restoring a single file to tmp/\n#   $ backboa restore data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz\n#\n#   Restoring an older version of a directory to tmp/ - interval or full date\n#   $ backboa restore data/disk/o1/backups 7D8h8s tmp/backups\n#   $ backboa restore data/disk/o1/backups 2014/11/11 tmp/backups\n#\n# _restore() {\n#   if [ $# = 2 ]; then\n#     echo \"Command is ${_DCY_UTL_CMD} restore --path-to-restore $1 ${_BACKUP_TARGET} $2\"\n#     ${_DCY_UTL_CMD} restore --path-to-restore $1 ${_BACKUP_TARGET} $2\n#   else\n#     echo \"Command is ${_DCY_UTL_CMD} restore --path-to-restore $1 --time $2 ${_BACKUP_TARGET} $3\"\n#     ${_DCY_UTL_CMD} restore --path-to-restore $1 --time $2 ${_BACKUP_TARGET} $3\n#   fi\n# }\n#\n### Legacy procedure for reference\n\n### Duplicity man page https://duplicity.gitlab.io/devel/duplicity.1.html#name\n#\n#   duplicity [backup|full|incremental] [options] source_directory target_url\n#   duplicity verify [options] [--compare-data] [--path-to-restore <relpath>] [--time time] source_url target_directory\n#   duplicity collection-status [options] [--file-changed <relpath>] [--show-changes-in-set <index>] [--jsonstat]] target_url\n#   duplicity list-current-files [options] [--time time] target_url\n#   duplicity [restore] [options] [--path-to-restore <relpath>] [--time time] source_url target_directory\n#   duplicity remove-older-than <time> [options] [--force] target_url\n#   duplicity remove-all-but-n-full <count> [options] [--force] target_url\n#   duplicity remove-all-inc-of-but-n-full <count> [options] [--force] target_url\n#   duplicity cleanup [options] [--force] 
target_url\n#\n#   Duplicity enters restore mode because the URL comes before the local directory.\n#   If we wanted to restore just the file \"Mail/article\" in /home/me as it was three days ago into /home/me/restored_file:\n#\n#   duplicity -t 3D --path-to-restore Mail/article sftp://uid@other.host/some_dir /home/me/restored_file\n#\n#   duplicity [restore] [options] [--path-to-restore <relpath>] [--time time] source_url target_directory\n#\n### Duplicity man page\n\n### Restore Command in mybackup for reference\n#\n#   mybackup restore <SERVICE> [RESTORE_PATH] [RESTORE_TIME]\n#   mybackup restore aws data/disk/your_username/static/projects 7D\n#\n# - <SERVICE>: The cloud storage service used for your backups (e.g., aws, b2, wasabi).\n# - [RESTORE_PATH] (optional): The absolute path (no leading slash) of the file or directory to restore.\n# - [RESTORE_TIME] (optional): The point in time for the restore, specified in human-readable formats like:\n#   - 1D (1 day ago)\n#   - 7D (7 days ago)\n#   - 1M (1 month ago)\n#\n### Restore Command in mybackup for reference\n\n# Function to restore backup\n_restore() {\n  local _restore_target=$1\n  local _restore_path=$2\n  local _restore_time=$3\n  local _restore_command=\"${_DCY_UTL_CMD} restore\"\n\n  # Remove any trailing slash from _restore_path for proper basename extraction\n  _clean_restore_path=\"${_restore_path%/}\"\n\n  # Extract the last part (basename) of the restore path.\n  _last_part=$(basename \"${_clean_restore_path}\")\n\n  # Combine _restore_target with the extracted basename.\n  # Also, remove any trailing slash from _restore_target to avoid double slashes.\n  _final_restore_target=\"${_restore_target%/}/${_last_part}\"\n\n  # Ensure _RESTORE_TARGET exists\n  if [ -n \"${_restore_target}\" ]; then\n    if [ ! 
-d \"${_restore_target}\" ]; then\n      echo \"Creating restore target directory: ${_restore_target}\"\n      mkdir -p \"${_restore_target}\"\n    fi\n  else\n    _restore_target=\"/data/disk/${_USER}/static/restores\"\n    if [ ! -d \"${_restore_target}\" ]; then\n      echo \"Creating restore target directory: ${_restore_target}\"\n      mkdir -p \"${_restore_target}\"\n    fi\n  fi\n  if [ -n \"${_restore_time}\" ]; then\n    _restore_command=\"${_restore_command} --time ${_restore_time}\"\n  fi\n  _restore_command=\"${_restore_command} ${_BACKUP_TARGET}\"\n  if [ -n \"${_restore_path}\" ]; then\n    _restore_command=\"${_restore_command} --path-to-restore ${_restore_path}\"\n  fi\n  _restore_command=\"${_restore_command} ${_final_restore_target}\"\n\n  echo \"Command is ${_restore_command}\"\n  # ${_DCY_UTL_CMD} restore --time ${_restore_time} ${_BACKUP_TARGET} --path-to-restore ${_restore_path} ${_restore_target}\n\n  # su -s /bin/bash ${_user} -c \"eval \\\"${_restore_command}\\\"\" &> /dev/null\n  eval \"${_restore_command}\"\n\n  _print_env \"multiback_restore\"\n}\n\n# Function to set backup target based on service\n_set_backup_target() {\n  local _service=$1\n  local _user=$2\n\n  case \"${_service}\" in\n    aws|aws_one_zone|aws_standard_ia)\n      _load_credentials \"${_service}\" \"${_user}\"\n      _construct_bucket_name \"${_service}\" \"${_user}\"\n\n      # Define S3-specific options\n      local _s3_endpoint=\"https://s3.dualstack.${AWS_REGION}.amazonaws.com\"\n      local _s3_options=\"--s3-endpoint-url ${_s3_endpoint} --s3-region-name ${AWS_REGION}\"\n\n      # Use the Infrequent Access storage class for specific services\n      if [ \"${_service}\" = \"aws_standard_ia\" ] || [ \"${_service}\" = \"aws_one_zone\" ]; then\n        local _s3_options=\"${_s3_options} --s3-use-ia\"\n      fi\n\n      export _BACKUP_TARGET=\"boto3+s3://${_BUCKET_NAME} ${_s3_options}\"\n      ;;\n    azure)\n      _load_credentials \"azure\" \"${_user}\"\n      
_construct_bucket_name \"azure\" \"${_user}\"\n      export _BACKUP_TARGET=\"azure://${AZURE_STORAGE_ACCOUNT}@${_BUCKET_NAME}\"\n      ;;\n    b2)\n      _load_credentials \"b2\" \"${_user}\"\n      _construct_bucket_name \"b2\" \"${_user}\"\n      export _BACKUP_TARGET=\"b2://${B2_ACCOUNT_ID}:${B2_APPLICATION_KEY}@${_BUCKET_NAME}\"\n      ;;\n    cloudflare)\n      _load_credentials \"cloudflare\" \"${_user}\"\n      _construct_bucket_name \"cloudflare\" \"${_user}\"\n\n      # Custom endpoint for Cloudflare R2\n      local _r2_endpoint=\"https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com\"\n\n      # Configure the S3 backup target\n      export _BACKUP_TARGET=\"boto3+s3://${R2_ACCESS_KEY_ID}:${R2_SECRET_ACCESS_KEY}@${_r2_endpoint}/${_BUCKET_NAME}\"\n      ;;\n    do_spaces)\n      _load_credentials \"do_spaces\" \"${_user}\"\n      _construct_bucket_name \"do_spaces\" \"${_user}\"\n      export _BACKUP_TARGET=\"s3://${DO_SPACES_KEY}:${DO_SPACES_SECRET}@${DO_SPACES_REGION}/${_BUCKET_NAME}\"\n      ;;\n    gcs)\n      _load_credentials \"gcs\" \"${_user}\"\n      _construct_bucket_name \"gcs\" \"${_user}\"\n      export _BACKUP_TARGET=\"gs://${_BUCKET_NAME}\"\n      ;;\n    ibm)\n      _load_credentials \"ibm\" \"${_user}\"\n      _construct_bucket_name \"ibm\" \"${_user}\"\n      export _BACKUP_TARGET=\"ibmcos://${IBM_API_KEY_ID}:${IBM_SERVICE_INSTANCE_ID}@${IBM_REGION}/${_BUCKET_NAME}\"\n      ;;\n    linode)\n      _load_credentials \"linode\" \"${_user}\"\n      _construct_bucket_name \"linode\" \"${_user}\"\n      export _BACKUP_TARGET=\"s3://${LINODE_ACCESS_KEY}:${LINODE_SECRET_KEY}@${LINODE_REGION}/${_BUCKET_NAME}\"\n      ;;\n    wasabi)\n      _load_credentials \"wasabi\" \"${_user}\"\n      _construct_bucket_name \"wasabi\" \"${_user}\"\n      export _BACKUP_TARGET=\"s3://${WASABI_ACCESS_KEY}:${WASABI_SECRET_KEY}@${WASABI_REGION}/${_BUCKET_NAME}\"\n      ;;\n    *)\n      echo \"Error: Unknown service ${_service}\"\n      exit 1\n      ;;\n  esac\n\n  
_print_env \"multiback_set_backup_target\"\n}\n\n# Main script\nif [ \"$#\" -lt 3 ]; then\n  _usage\nfi\n\nexport _LOGPTH=\"/var/log/boa\"\n_NOW=$(date +%y%m%d-%H%M%S)\nexport _NOW=${_NOW//[^0-9-]/}\n_TODAY=$(date +%y%m%d)\nexport _TODAY=${_TODAY//[^0-9]/}\n_DOW=$(date +%u)\nexport _DOW=${_DOW//[^1-7]/}\n_DOM=$(date +%e)\nexport _DOM=${_DOM//[^0-9]/}\n_HST=${_hName//[^a-zA-Z0-9-.]/}\n_HST=$(echo -n ${_HST} | tr A-Z a-z 2>&1)\nexport _HST_DASH=$(echo -n ${_HST} | tr . - 2>&1)\n\nexport _ACTION=$1\nexport _SERVICE=$2\nexport _USER=$3\nexport _RESTORE_TARGET=\"${4:-/var/backups/restored/}\"\nexport _RESTORE_PATH=\"${5:-}\"\nexport _RESTORE_TIME=\"${6:-}\"\nexport _PIDFILE=\"/run/duplicity_${_SERVICE}_${_USER}.pid\"\n# Default values\nexport _DEFAULT_KEEP_WITHIN=\"3M\"            # Default: 3 months\nexport _DEFAULT_FULL_BACKUP_FREQUENCY=\"28D\" # Default: 28 days\n\n# Log file for validation issues\nexport _VALIDATION_LOG_FILE=\"/var/log/backup_validation_issues.log\"\nexport _SANITIZATION_TMP_DIR=\"/var/tmp/backup_sanitization\"\nmkdir -p \"${_SANITIZATION_TMP_DIR}\"\nchmod 700 \"${_SANITIZATION_TMP_DIR}\"\n\n_print_env \"multiback_main\"\n\n# Create the PID file\n_create_pid_file \"${_PIDFILE}\"\ntrap \"rm -f ${_PIDFILE}; exit\" EXIT\n\n# Remove stale multiback PID file if necessary\n_remove_stale_multiback_pid \"${_SERVICE}\" \"${_USER}\"\n\n# Load paths configuration\n_load_paths \"${_USER}\"\n\ncase \"${_ACTION}\" in\n  test)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    ;;\n  backup)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _backup\n    ;;\n  cleanup)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _cleanup\n    ;;\n  list)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _list\n    ;;\n  purge)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _purge\n    ;;\n  status)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _status\n    ;;\n  
repair)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _repair\n    ;;\n  restore)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _restore \"${_RESTORE_TARGET}\" \"${_RESTORE_PATH}\" \"${_RESTORE_TIME}\"\n    ;;\n  *)\n    _usage\n    ;;\nesac\n\n_remove_pid_file \"${_PIDFILE}\"\n\n# Wipe out any exported variables to clean up env after running the backup\n  export FULL_BACKUP_FREQUENCY=\n  export PASSPHRASE=\n  export _ACTION=\n  export _BACKUP_TARGET=\n  export _BUCKET_NAME=\n  export _DCY_BUP_CMD=\n  export _DCY_UTL_CMD=\n  export _EXCLUDE_LIST=\n  export _FBF_BUP_CMD=\n  export _INCLUDE_LIST=\n  export _LST_EXCLUDE=\n  export _LST_INCLUDE=\n  export _MODE=\n  export _NAME=\n  export _NOE_BUP_CMD=\n  export _NOE_UTL_CMD=\n  export _PIDFILE=\n  export _RESTORE_PATH=\n  export _RESTORE_TARGET=\n  export _RESTORE_TIME=\n  export _SERVICE=\n  export _SOURCE=\n  export _SRC_INCLUDE=\n  export _USER=\n  export _USER_EXCLUDE_PATHS=\n  export _USER_INCLUDE_PATHS=\n  export _USER_MERGED_ALL=\n  export _cached=\n  export _credentials_file=\n  export _paths_file=\n  export _secret_file=\n  export _value=\n  export _varname=\n\n_print_env \"multiback_exit\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/backup/run/duplicity_bundle_installer.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n# Function to verify BOA keys\n_verify_boa_keys() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _verify_boa_keys in duplicity_bundle_installer\"\n  fi\n  if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n    _allw=NO\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlEnc=\"http://files.aegir.cc/enc/2024\"\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    _encName=$(echo ${_hName} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".o8.io\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".boa.io\"($) ]]; then\n      _allw=YES\n    fi\n    mkdir -p /var/opt\n    rm -f /var/opt/_encN*\n    curl ${_crlGet} \"${_urlEnc}/${_encName}\" -o /var/opt/_encN.${_encName}.tmp\n    wait\n    echo \"${_hName}.${_encName}\" > /var/opt/_encN_local.${_encName}.tmp\n    wait\n    if [ -e \"/var/opt/_encN.${_encName}.tmp\" ] && [ -e \"/var/opt/_encN_local.${_encName}.tmp\" ]; then\n      _diffTestIf=$(diff -w -B /var/opt/_encN.${_encName}.tmp /var/opt/_encN_local.${_encName}.tmp 2>&1)\n      if [ ! -z \"${_diffTestIf}\" ] && [ \"${_allw}\" = \"NO\" ]; then\n        echo\n        echo \"Your system requires a valid license to use this function\"\n        echo \"Please visit https://omega8.cc/licenses to purchase your own\"\n        echo\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! 
-e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n        rm -f /var/opt/_encN*\n        exit 0\n      else\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n      fi\n    else\n      echo\n      echo \"Your system requires a valid license to use this BOA feature\"\n      echo \"Unfortunately, it was not possible to verify your system status\"\n      echo \"Please visit https://omega8.cc/licenses first, then contact our support\"\n      echo\n      exit 0\n    fi\n  fi\n}\n\n# Function to verify root access\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 0 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  _AWS_VLV=${_AWS_VLV//[^a-z]/}\n  if [ -z \"${_AWS_VLV}\" ]; then\n    _AWS_VLV=\"warning\"\n  fi\n}\n_check_root\n_verify_boa_keys\n\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n# New OpenSSL 3.x version is required\nif [ ! 
-x \"/usr/local/ssl3/bin/openssl\" ]; then\n  echo \"New OpenSSL 3.x version is required\"\n  exit 1\nfi\n\n# Directory where all scripts are located\n_SCRIPT_DIR=\"/root/.remote_backups/run\"\n\n# Define paths to individual scripts\n_INSTALL_DEPENDENCIES_SCRIPT=\"${_SCRIPT_DIR}/install_dependencies.sh\"\n_CREATE_CREDENTIALS_TEMPLATES_SCRIPT=\"${_SCRIPT_DIR}/create_credentials_templates.sh\"\n_CREATE_GLOBAL_PATHS_CONFIG_SCRIPT=\"${_SCRIPT_DIR}/create_global_paths_config.sh\"\n_CREATE_USER_PATHS_CONFIG_SCRIPT=\"${_SCRIPT_DIR}/create_user_paths_config.sh\"\n_CREATE_CRON_ENTRIES_SCRIPT=\"${_SCRIPT_DIR}/create_cron_entries.sh\"\n_CREATE_README_SCRIPT=\"${_SCRIPT_DIR}/create_readme.sh\"\n_CREATE_CONFIG_README_SCRIPT=\"${_SCRIPT_DIR}/create_config_readme.sh\"\n\n# Function to display usage information\n_usage() {\n  echo \"Usage: $0 {install|setup|update}\"\n  echo \"  install : Install dependencies required for backups.\"\n  echo \"  setup   : Perform initial configuration setup (creates paths, credentials, and cron entries).\"\n  echo \"  update  : Alias for setup; updates existing configuration.\"\n  exit 1\n}\n\n# Function to check for required scripts\n_check_scripts() {\n  for _script in \\\n    \"${_INSTALL_DEPENDENCIES_SCRIPT}\" \\\n    \"${_CREATE_CREDENTIALS_TEMPLATES_SCRIPT}\" \\\n    \"${_CREATE_GLOBAL_PATHS_CONFIG_SCRIPT}\" \\\n    \"${_CREATE_USER_PATHS_CONFIG_SCRIPT}\" \\\n    \"${_CREATE_CRON_ENTRIES_SCRIPT}\" \\\n    \"${_CREATE_README_SCRIPT}\" \\\n    \"${_CREATE_CONFIG_README_SCRIPT}\"\n  do\n    if [ ! -f \"${_script}\" ]; then\n      echo \"Error: Required script ${_script} not found.\"\n      exit 1\n    fi\n  done\n}\n\n# Function to install dependencies\n_install_dependencies() {\n  echo \"Installing dependencies...\"\n  service cron stop && ln -sfn /bin/dash /usr/bin/sh\n  bash \"${_INSTALL_DEPENDENCIES_SCRIPT}\"\n  if [ $? 
-ne 0 ]; then\n    echo \"Error: Failed to install dependencies.\"\n    exit 1\n  fi\n  echo \"Dependencies installed successfully.\"\n  service cron start\n}\n\n# Function to perform setup (initial configuration)\n_setup_configuration() {\n  echo \"Setting up configuration...\"\n\n  echo \"Step 1: Creating global paths configuration...\"\n  bash \"${_CREATE_GLOBAL_PATHS_CONFIG_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create global paths configuration.\"\n    exit 1\n  fi\n\n  echo \"Step 2: Creating user-specific paths configuration...\"\n  bash \"${_CREATE_USER_PATHS_CONFIG_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create user paths configuration.\"\n    exit 1\n  fi\n\n  echo \"Step 3: Creating credentials templates...\"\n  bash \"${_CREATE_CREDENTIALS_TEMPLATES_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create credentials templates.\"\n    exit 1\n  fi\n\n  echo \"Step 4: Creating cron entries...\"\n  bash \"${_CREATE_CRON_ENTRIES_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create cron entries.\"\n    exit 1\n  fi\n\n  echo \"Step 5: Creating global README files...\"\n  bash \"${_CREATE_README_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create global README files.\"\n    exit 1\n  fi\n\n  echo \"Step 6: Creating user config README files...\"\n  bash \"${_CREATE_CONFIG_README_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create user config README files.\"\n    exit 1\n  fi\n\n  echo \"Configuration setup completed successfully.\"\n}\n\n# Main logic\nif [ $# -ne 1 ]; then\n  _usage\nfi\n\n_action=$1\n_check_scripts\n\ncase \"${_action}\" in\n  install)\n    _install_dependencies\n    ;;\n  setup|update)\n    _setup_configuration\n    ;;\n  *)\n    _usage\n    ;;\nesac\n"
  },
  {
    "path": "aegir/tools/backup/run/install_dependencies.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_PTN_VRN=3.13.9\n_DCY_VRN=3.0.6\n_DCY_CMD=\"/usr/local/bin/duplicity\"\n\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 0 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n\n_check_openssl() {\n  # New OpenSSL 3.x version is required\n  if [ ! 
-x \"/usr/local/ssl3/bin/openssl\" ]; then\n    echo \"New OpenSSL 3.x version is required\"\n    exit 1\n  fi\n}\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth}\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n# Function to install other dependencies\n_install_other_dependencies() {\n  echo \"Checking and installing other dependencies...\"\n\n  echo \"Installing boto3 for S3-compatible services...\"\n  pipx install boto3 --include-deps --force\n\n  echo \"Installing awscli for S3-compatible services...\"\n  pipx install awscli --include-deps --force\n\n  echo \"Installing azure-storage-blob for Azure Blob Storage...\"\n  pipx install azure-storage-blob --include-deps --force\n\n  echo \"Installing b2sdk for Backblaze B2...\"\n  pipx install b2sdk --include-deps --force\n\n  echo \"Installing google-cloud-storage for Google Cloud Storage...\"\n  pipx install google-cloud-storage --include-deps --force\n\n  echo \"Installing ibm-cos-sdk for IBM Cloud Object Storage...\"\n  pipx install ibm-cos-sdk --include-deps --force\n\n  echo \"All dependencies are installed.\"\n}\n\n_install_duplicity() {\n  pip3 install --upgrade pip --root-user-action ignore\n  echo \"Installing pipx...\"\n\n  ${_DCY_PTN} -m pip install pipx --break-system-packages --root-user-action ignore\n\n  export PIPX_BIN_DIR=/usr/local/bin\n  export PIPX_HOME=/opt/pipx/venvs\n\n  if [ -x 
\"${_DCY_CMD}\" ]; then\n    _DCY_TEST=$(${_DCY_CMD} --version 2>&1)\n    if [[ \"${_DCY_TEST}\" =~ \"duplicity ${_DCY_VRN}\" ]]; then\n      echo \"Already Installed ${_DCY_TEST}\"\n      if [ ! -e \"/root/.force.duplicity.reinstall.cnf\" ]; then\n        # Already at the expected version; exit cleanly unless a reinstall is forced\n        exit 0\n      fi\n    fi\n  fi\n\n  echo \"Installing Duplicity ${_DCY_VRN}...\"\n  pipx install duplicity --include-deps --force\n\n  _DCY_TEST=$(${_DCY_CMD} --version 2>&1)\n  echo \"Just Installed ${_DCY_TEST}\"\n\n  if [[ \"${_DCY_TEST}\" =~ \"duplicity 3.\" ]]; then\n    echo \"Duplicity installation complete!\"\n  else\n    echo \"Duplicity installation failed with ${_DCY_TEST}\"\n    exit 1\n  fi\n}\n\n_python_install_src() {\n  if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n  _find_fast_mirror_early\n  _apt_clean_update\n  apt-get install ${_aptYesUnth} \\\n    intltool \\\n    jq \\\n    libdb-dev \\\n    libffi-dev \\\n    libgdbm-compat-dev \\\n    libgdbm-dev \\\n    liblzma-dev \\\n    libncursesw5-dev \\\n    libreadline-dev \\\n    par2 \\\n    python3 \\\n    python3-pip \\\n    python3-venv \\\n    rclone \\\n    rdiff \\\n    tk-dev \\\n    tzdata \\\n    uuid-dev\n\n  _PTN_TEST=$(python3 --version 2>&1)\n  if [[ ! \"${_PTN_TEST}\" =~ \"Python ${_PTN_VRN}\" ]] \\\n    || [ ! 
-x \"${_DCY_PTN}\" ]; then\n    cd /var/opt\n    rm -rf Python*\n    wget ${_wgetGet} ${_urlDev}/src/Python-${_PTN_VRN}.tgz\n    tar -xzf Python-${_PTN_VRN}.tgz\n    cd Python-${_PTN_VRN}\n    bash ./configure --with-openssl=/usr/local/ssl3\n    make -j $(nproc) --quiet\n    make install --quiet\n    cd\n  fi\n  # Python ${_PTN_VRN} installs as python3.13, so test and use that binary\n  _PTN_TEST=$(/usr/local/bin/python3.13 --version 2>&1)\n  if [[ \"${_PTN_TEST}\" =~ \"Python ${_PTN_VRN}\" ]]; then\n    echo \"Python ${_PTN_VRN} installed\"\n    _DCY_PTN=\"/usr/local/bin/python3.13\"\n    export PYTHONPATH=\"/usr/local/lib/python3.13/site-packages\"\n  else\n    echo \"Python ${_PTN_VRN} installation failed with ${_PTN_TEST}\"\n    exit 1\n  fi\n\n  echo \"Locating pip3...\"\n  if [ -x \"/usr/local/bin/pip3\" ]; then\n    _usePip=/usr/local/bin/pip3\n  elif [ -x \"/usr/bin/pip3\" ]; then\n    _usePip=/usr/bin/pip3\n  fi\n  echo \"_usePip is ${_usePip}\"\n\n  echo \"Installing pip...\"\n  _PIP_TEST=$(${_usePip} --version 2>&1)\n  if [[ \"${_PIP_TEST}\" =~ \"python 3.11\" ]] \\\n    || [[ \"${_PIP_TEST}\" =~ \"python 3.12\" ]] \\\n    || [[ \"${_PIP_TEST}\" =~ \"python 3.13\" ]]; then\n    ${_usePip} install --upgrade pip --root-user-action ignore\n  else\n    ${_usePip} install --upgrade pip\n  fi\n\n  _install_duplicity\n  _install_other_dependencies\n}\n\n_if_python_install_src() {\n  _PYTHON_INSTALL=NO\n  [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  _PYTHON_TEST=$(python3 --version 2>&1)\n  if [[ ! \"${_PYTHON_TEST}\" =~ \"Python ${_PTN_VRN}\" ]]; then\n    echo \"Python ${_PTN_VRN} installation is required to support Duplicity ${_DCY_VRN}\"\n    _python_install_src\n  else\n    if ! ${_DCY_PTN} -c \"import boto3\" &> /dev/null; then\n      _PYTHON_INSTALL=YES\n    fi\n    if ! 
${_DCY_PTN} -c \"import b2sdk\" &> /dev/null; then\n      _PYTHON_INSTALL=YES\n    fi\n    if [ \"${_PYTHON_INSTALL}\" = \"YES\" ]; then\n      _python_install_src\n    fi\n  fi\n}\n\n_check_root\n_check_openssl\n_os_detection_minimal\n_if_python_install_src\n\n"
  },
  {
    "path": "aegir/tools/bin/aptcleanup",
    "content": "#!/bin/bash\n\n#\n# Detect and quarantine unexpected vendor APT files/dirs into /var/backups.\n# Default behavior: keep only standard APT structure and Debian keyrings;\n# quarantine vendor sources/preferences and any non-standard subdirs (e.g. /etc/apt/mirrors)\n# while preserving DigitalOcean droplet-agent sources; if droplet-agent\n# is installed, create a SysV init script and restrict its repo via pinning.\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# --- Config\n\n_DEBUG_MODE=\"NO\"        # YES/NO — prints extra messages\n_DRY_RUN=\"NO\"           # YES/NO — report what would happen without moving anything\n_APT_ROOT=\"/etc/apt\"\n_BACKUP_ROOT=\"/var/backups\"\n\n# Treat ONLY the following as \"expected\" top-level entries in /etc/apt.\n# Anything else (dirs/files) will be quarantined (e.g., \"mirrors\").\n_EXPECTED_TOP_DIRS=(\n  \"apt.conf.d\"\n  \"auth.conf.d\"\n  \"keyrings\"\n  \"preferences.d\"\n  \"sources.list.d\"\n  \"trusted.gpg.d\"\n  \"listchanges.conf.d\"\n\n)\n_EXPECTED_TOP_FILES=(\n  \"sources.list\"\n  \"listchanges.conf\"\n)\n\n# Keep-globs: any path (relative to /etc/apt) matching one of these globs will be kept in place.\n# By default we keep Debian key material and the two top-level config files; we MOVE:\n#   - everything under sources.list.d (both *.list and *.sources),\n#   - everything under preferences.d (vendor pins),\n#   - any non-standard top-level dirs (e.g., mirrors) and their contents.\n_KEEP_GLOBS_REL=(\n  \"apt.conf.d/*\"\n  \"auth.conf.d/*\"\n  \"keyrings/*.asc\"\n  \"keyrings/*.gpg\"\n  \"listchanges.conf.d/*\"\n  \"listchanges.conf\"\n  \"preferences.d/00percona.pref\"\n  \"preferences.d/99-droplet-agent-restrict.pref\"\n  \"preferences.d/99-prefer-devuan\"\n  \"preferences.d/fuse\"\n  \"preferences.d/libldap\"\n  \"preferences.d/makedev\"\n  \"preferences.d/mariadb-common\"\n  
\"preferences.d/nginx-common\"\n  \"preferences.d/offsystemd\"\n  \"preferences.d/percona-release\"\n  \"preferences.d/udev\"\n  \"sources.list.d/backports.list\"\n  \"sources.list.d/bullseye.list\"\n  \"sources.list.d/chimaera.list\"\n  \"sources.list.d/ffmpeg.list\"\n  \"sources.list.d/newrelic.list\"\n  \"sources.list.d/nodesource.list\"\n  \"sources.list.d/percona-release.list\"\n  \"sources.list.d/mysql-release.list\"\n  \"sources.list.d/webmin.list\"\n  \"sources.list\"\n  \"trusted.gpg.d/*.asc\"\n  \"trusted.gpg.d/*.gpg\"\n  \"trusted.gpg.d/*.gpg~\"\n)\n\n# DigitalOcean / droplet-agent preservation policy:\n# AUTO -> preserve if agent is installed; YES -> always preserve; NO -> quarantine\n_PRESERVE_DO_POLICY=\"AUTO\"\n_VENDOR_DO_PATTERNS=( \"droplet-agent-restrict\" \"droplet-agent.list\")\n\n# Where to write our APT pin that restricts DO origin to droplet-agent only\n_DO_PIN_FILE=\"${_APT_ROOT}/preferences.d/99-droplet-agent-restrict.pref\"\n\n# --- Helpers -------------------------------------------------------------------\n_dt() {\n  date +%Y%m%d-%H%M%S\n}\n\n_msg() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    printf '%s %s\\n' \"$(_dt)\" \"$*\"\n  fi\n}\n\n_err() {\n  printf 'ERROR: %s\\n' \"$*\" >&2\n}\n\n###\n### Atomic unlock to prevent TOCTOU race\n###\n_single_instance_unlock() {\n  _FD=\"$1\"; _PATH=\"$2\"\n  if command -v flock >/dev/null 2>&1; then\n    flock -u \"${_FD}\" 2>/dev/null || true\n    eval \"exec ${_FD}>&-\"\n    rm -f \"${_PATH}\" 2>/dev/null || true\n  else\n    rm -rf \"${_PATH}\" 2>/dev/null || true\n  fi\n}\n\n###\n### Atomic lock to prevent TOCTOU race\n###\n_single_instance_lock() {\n  # Ensure not too many instances are running\n  # usage: _single_instance_lock [lockfile_path] [fd]\n  # default lock: /run/<script>.lock (falls back to /run or /tmp)\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  _LOCK_FD=\"${2:-9}\"\n  if [ -n \"${1:-}\" ]; then\n    _LOCK_PATH=\"$1\"\n  else\n    _DIR=\"/run\"; [ -w \"$_DIR\" ] 
|| _DIR=\"/tmp\"\n    _LOCK_PATH=\"${_DIR}/${_SELF_NAME%.sh}.lock\"\n  fi\n\n  if command -v flock >/dev/null 2>&1; then\n    eval \"exec ${_LOCK_FD}>\\\"${_LOCK_PATH}\\\"\"\n    if ! flock -n \"${_LOCK_FD}\"; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    printf '%s\\n' \"$$\" 1>&\"${_LOCK_FD}\" 2>/dev/null || true   # optional: PID note\n    trap \"_single_instance_unlock ${_LOCK_FD} '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  else\n    # mkdir is atomic; directory presence == lock held\n    if ! mkdir \"${_LOCK_PATH}\" 2>/dev/null; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    echo \"$$\" > \"${_LOCK_PATH}/pid\" 2>/dev/null || true\n    trap \"rm -rf '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  fi\n}\n\n# --- DO agent detection & policy ----------------------------------------------\n_do_agent_installed() {\n  dpkg-query -W -f='${Status}\\n' droplet-agent 2>/dev/null | grep -q 'install ok installed' && return 0\n  [ -x /usr/bin/droplet-agent ] && return 0\n  [ -x /usr/local/bin/droplet-agent ] && return 0\n  [ -x /opt/digitalocean/bin/droplet-agent ] && return 0\n  [ -x /opt/digitalocean/droplet-agent/bin/droplet-agent ] && return 0\n  return 1\n}\n\n_should_preserve_do() {\n  case \"${_PRESERVE_DO_POLICY}\" in\n    YES) printf 'YES'; return 0 ;;\n    NO)  printf 'NO';  return 0 ;;\n    AUTO)\n      if _do_agent_installed; then printf 'YES'; else printf 'NO'; fi\n      return 0\n      ;;\n  esac\n  printf 'NO'\n}\n\n# --- Path classification -------------------------------------------------------\n_is_member() {\n  _ITEM=\"$1\"; shift\n  for _E in \"$@\"; do\n    if [ \"${_ITEM}\" = \"${_E}\" ]; then return 0; fi\n  done\n  return 1\n}\n\n_is_do_vendor_related() {\n  # path or contents mention DO/droplet\n  _ABS=\"$1\"\n  _REL=\"${_ABS#${_APT_ROOT}/}\"\n\n  for _P in \"${_VENDOR_DO_PATTERNS[@]}\"; do\n    case \"${_REL}\" in 
*${_P}*) return 0 ;; esac\n  done\n\n  if [ -f \"${_ABS}\" ]; then\n    if grep -qiE 'droplet-agent-restrict|droplet-agent.list' \"${_ABS}\" 2>/dev/null; then\n      return 0\n    fi\n  fi\n  return 1\n}\n\n_keep_match() {\n  # Return 0 (keep) if matches any allowed criteria; 1 otherwise\n  _ABS=\"$1\"\n  _REL=\"${_ABS#${_APT_ROOT}/}\"\n\n  # Always keep top-level expected files\n  for _F in \"${_EXPECTED_TOP_FILES[@]}\"; do\n    if [ \"${_REL}\" = \"${_F}\" ]; then return 0; fi\n  done\n\n  # Keep-globs (relative)\n  for _G in \"${_KEEP_GLOBS_REL[@]}\"; do\n    case \"${_REL}\" in ${_G}) return 0 ;; esac\n  done\n\n  # Preserve DO vendor files per policy\n  if [ \"$(_should_preserve_do)\" = \"YES\" ] && _is_do_vendor_related \"${_ABS}\"; then\n    return 0\n  fi\n\n  return 1\n}\n\n# --- Quarantine machinery ------------------------------------------------------\n_ensure_backup_dir() {\n  if [ -z \"${_BACKUP_DIR}\" ]; then\n    _STAMP=\"$(_dt)\"\n    _BACKUP_DIR=\"${_BACKUP_ROOT}/apt-vendor-${_STAMP}\"\n    _LOG_FILE=\"${_BACKUP_DIR}/moved.log\"\n    if [ \"${_DRY_RUN}\" = \"NO\" ]; then\n      mkdir -p \"${_BACKUP_DIR}\"\n      : > \"${_LOG_FILE}\"\n    fi\n    _msg \"Backup dir: ${_BACKUP_DIR}\"\n  fi\n}\n\n_move_path() {\n  _SRC=\"$1\"\n  _ensure_backup_dir\n  _REL=\"${_SRC#${_APT_ROOT}/}\"\n  _DEST_DIR=\"${_BACKUP_DIR}/etc/apt/$(dirname \"${_REL}\")\"\n  _DEST=\"${_BACKUP_DIR}/etc/apt/${_REL}\"\n\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would move: %s -> %s\\n' \"${_SRC}\" \"${_DEST}\"\n    return 0\n  fi\n\n  mkdir -p \"${_DEST_DIR}\"\n  # Use 'mv -T' to avoid merging directories unexpectedly (busybox note: if unavailable, fallback)\n  if mv -T \"${_SRC}\" \"${_DEST}\" 2>/dev/null; then\n    :\n  else\n    # Fallback if -T unsupported\n    mv \"${_SRC}\" \"${_DEST}\"\n  fi\n  printf '%s -> %s\\n' \"${_SRC}\" \"${_DEST}\" >> \"${_LOG_FILE}\"\n}\n\n_quarantine_path() {\n  _PATH=\"$1\"\n  if _keep_match \"${_PATH}\"; then\n    _msg 
\"Keeping (allowed): ${_PATH#${_APT_ROOT}/}\"\n    return 0\n  fi\n  _move_path \"${_PATH}\"\n}\n\n_scan_expected_dirs() {\n  for _DIR in \"${_EXPECTED_TOP_DIRS[@]}\"; do\n    _ABS_DIR=\"${_APT_ROOT}/${_DIR}\"\n    if [ -d \"${_ABS_DIR}\" ]; then\n      # shellcheck disable=SC2045\n      for _ITEM in $(ls -A \"${_ABS_DIR}\" 2>/dev/null); do\n        _ABS_ITEM=\"${_ABS_DIR}/${_ITEM}\"\n        _quarantine_path \"${_ABS_ITEM}\"\n      done\n    fi\n  done\n}\n\n_scan_unexpected_top() {\n  # shellcheck disable=SC2045\n  for _ITEM in $(ls -A \"${_APT_ROOT}\" 2>/dev/null); do\n    if _is_member \"${_ITEM}\" \"${_EXPECTED_TOP_FILES[@]}\" || _is_member \"${_ITEM}\" \"${_EXPECTED_TOP_DIRS[@]}\"; then\n      _msg \"Expected top: ${_ITEM}\"\n      continue\n    fi\n    _quarantine_path \"${_APT_ROOT}/${_ITEM}\"\n  done\n}\n\n# --- DO agent SysV init & APT pin ---------------------------------------------\n_write_do_init_if_missing() {\n  # Create /etc/init.d/droplet-agent if agent installed and init missing\n  if ! 
_do_agent_installed; then\n    _msg \"droplet-agent not installed; skipping init creation\"\n    return 0\n  fi\n  if [ -x /etc/init.d/droplet-agent ]; then\n    _msg \"droplet-agent init already present\"\n    return 0\n  fi\n\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would write /etc/init.d/droplet-agent and enable it\\n'\n    return 0\n  fi\n\n  cat > /etc/init.d/droplet-agent <<'INIT_EOF'\n#!/usr/bin/env bash\n### BEGIN INIT INFO\n# Provides:          droplet-agent\n# Required-Start:    $remote_fs $syslog $network\n# Required-Stop:     $remote_fs $syslog $network\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: DigitalOcean Droplet Agent\n### END INIT INFO\n\n_PATHS=\"/usr/bin/droplet-agent /usr/local/bin/droplet-agent /opt/digitalocean/bin/droplet-agent /opt/digitalocean/droplet-agent/droplet-agent\"\n_DAEMON=\"\"\n_NAME=\"droplet-agent\"\n_DESC=\"DigitalOcean Droplet Agent\"\n_PIDFILE=\"/run/${_NAME}.pid\"\n_USER=\"root\"\n_NICE=\"0\"\n\n_for_each_path() {\n  for _P in ${_PATHS}; do\n    if [ -x \"${_P}\" ]; then _DAEMON=\"${_P}\"; return 0; fi\n  done\n  return 1\n}\n\n_do_start() {\n  if [ -z \"${_DAEMON}\" ] && ! 
_for_each_path; then\n    echo \"${_DESC}: binary not found\"; return 1\n  fi\n  start-stop-daemon --start --quiet --background \\\n    --make-pidfile --pidfile \"${_PIDFILE}\" \\\n    --chuid \"${_USER}\" --nicelevel \"${_NICE}\" \\\n    --exec \"${_DAEMON}\" -- || return 1\n  return 0\n}\n\n_do_stop() {\n  if [ -f \"${_PIDFILE}\" ]; then\n    start-stop-daemon --stop --quiet --pidfile \"${_PIDFILE}\" --retry=TERM/15/KILL/5\n    rm -f \"${_PIDFILE}\" 2>/dev/null || true\n    return 0\n  fi\n  pkill -f \"${_NAME}\" 2>/dev/null || true\n  return 0\n}\n\n_do_status() {\n  if [ -f \"${_PIDFILE}\" ] && ps -p \"$(cat \"${_PIDFILE}\" 2>/dev/null)\" >/dev/null 2>&1; then\n    echo \"${_DESC} is running (pid $(cat \"${_PIDFILE}\"))\"\n    return 0\n  fi\n  pgrep -f \"${_NAME}\" >/dev/null 2>&1 && { echo \"${_DESC} seems running (no pidfile)\"; return 0; }\n  echo \"${_DESC} is not running\"\n  return 3\n}\n\ncase \"$1\" in\n  start)   _do_start ;;\n  stop)    _do_stop ;;\n  restart) _do_stop; _do_start ;;\n  status)  _do_status ;;\n  *) echo \"Usage: $0 {start|stop|restart|status}\"; exit 2 ;;\nesac\nINIT_EOF\n\n  chmod 0755 /etc/init.d/droplet-agent\n  update-rc.d droplet-agent defaults >/dev/null 2>&1 || true\n  service droplet-agent start >/dev/null 2>&1 || true\n  _msg \"droplet-agent init created, enabled, and started\"\n}\n\n_write_do_restriction_pin() {\n  # Restrict DO origin to droplet-agent only; block everything else by default\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would write %s\\n' \"${_DO_PIN_FILE}\"\n    return 0\n  fi\n\n  mkdir -p \"${_APT_ROOT}/preferences.d\"\n  cat > \"${_DO_PIN_FILE}\" <<'PIN_EOF'\nPackage: droplet-agent*\nPin: origin repos-droplet.digitalocean.com\nPin-Priority: 500\n\nPackage: *\nPin: origin repos-droplet.digitalocean.com\nPin-Priority: 1\nPIN_EOF\n  chmod 0644 \"${_DO_PIN_FILE}\"\n  _msg \"DO APT pin written: ${_DO_PIN_FILE}\"\n}\n\n# --- Main 
----------------------------------------------------------------------\n_usage() {\n  printf 'Usage: %s [--debug] [--dry-run] [--preserve-do] [--drop-do]\\n' \"$(basename \"$0\")\"\n}\n\n_main() {\n  # Parse args\n  while [ \"$#\" -gt 0 ]; do\n    case \"$1\" in\n      --debug) _DEBUG_MODE=\"YES\" ;;\n      --dry-run) _DRY_RUN=\"YES\" ;;\n      --preserve-do) _PRESERVE_DO_POLICY=\"YES\" ;;\n      --drop-do) _PRESERVE_DO_POLICY=\"NO\" ;;\n      -h|--help) _usage; exit 0 ;;\n      *) _err \"Unknown option: $1\"; _usage; exit 2 ;;\n    esac\n    shift\n  done\n\n  if [ \"$(id -u)\" -ne 0 ]; then\n    _err \"Please run as root.\"\n    exit 1\n  fi\n  if [ ! -d \"${_APT_ROOT}\" ]; then\n    _err \"APT root not found: ${_APT_ROOT}\"\n    exit 1\n  fi\n\n  _single_instance_lock\n\n  _msg \"Policy for DO preservation: ${_PRESERVE_DO_POLICY} -> $(_should_preserve_do)\"\n\n  # Phase 0: If policy resolves to YES, ensure we won't quarantine DO files.\n  # (Handled dynamically in _keep_match)\n\n  # Phase 1: Quarantine unexpected top-level entries\n  _scan_unexpected_top\n\n  # Phase 2: Within expected dirs, quarantine items not matching keep globs/vendor rules\n  _scan_expected_dirs\n\n  # Phase 3: If we preserved DO agent (either AUTO+installed or forced YES),\n  #          ensure it keeps working under Devuan:\n  if [ \"$(_should_preserve_do)\" = \"YES\" ]; then\n    _write_do_init_if_missing\n    _write_do_restriction_pin\n  fi\n\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf 'Dry-run complete. No changes made.\\n'\n  else\n    if [ -n \"${_BACKUP_DIR}\" ]; then\n      printf 'Quarantine complete. Backup: %s\\n' \"${_BACKUP_DIR}\"\n      [ -n \"${_LOG_FILE}\" ] && printf 'Log: %s\\n' \"${_LOG_FILE}\"\n    else\n      printf 'Nothing to quarantine. System appears clean.\\n'\n    fi\n  fi\n}\n\n_main \"$@\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/aptfast",
    "content": "#!/bin/bash\n\n# apt-fast v1.4 by Matt Parnell http://www.mattparnell.com, GNU GPLv3\n# Use this just like aptitude for faster package downloading.\n\n###################################################################\n# CONFIGURATION OPTIONS                                           #\n###################################################################\n\n# Maximum number of connections\n_MAXNUM=5\n\n# Note that the download manager you choose has other options - feel free\n# to setup your own _DOWNLOADER line or customize one of the ones below...\n# they're simply there for example purposes, and to provide sane defaults\n\n# Download manager selection (choose one by uncommenting one #_DOWNLOADER line)\n\n# aria2c:\n#_DOWNLOADER='aria2c -c -j ${_MAXNUM} --input-file=/opt/tmp/apt-fast.list --connect-timeout=600 --timeout=600 -m0'\n\n# aria2c with a proxy (set username, proxy, ip and password!)\n#_DOWNLOADER='aria2c -s 20 -j ${_MAXNUM} --http-proxy=http://username:password@proxy_ip:proxy_port -i apt-fast.list'\n\n# axel:\n_DOWNLOADER='cat /opt/tmp/apt-fast.list | xargs -r -l1 axel -n ${_MAXNUM} -a' # axel\n\n###################################################################\n# DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING! 
#\n###################################################################\n\n_unlock()\n{\n\trm -f $LCK_FILE\n}\n\n# Check for proper privileges\n[ \"`whoami`\" = root ] || exec sudo \"$0\" \"$@\"\n\n# Define our lock location\nLCK_FILE=/var/lock/apt-fast.lck\n\ntrap \" [ -f ${LCK_FILE} ] && _unlock\" 0 1 2 3 13 15\n\n# Make sure one of the download managers is enabled\n[ -z \"$_DOWNLOADER\" ] && echo \"You must configure apt-fast to use axel or aria2c\" && _unlock && exit 1;\n\n# If the user entered arguments contain upgrade, install, or dist-upgrade\nif echo \"$@\" | grep -q \"upgrade\\|install\\|dist-upgrade\"; then\n  echo \"Working...\";\n\n  # Go into the directory aptitude normally puts downloaded packages\n  cd /var/cache/apt/archives/;\n\n  # Have aptitude print the information, including the URI's to the packages\n  # Strip out the URI's, and download the packages with Axel for speediness\n  # I found this regex elsewhere, showing how to manually strip package URI's you may need...thanks to whoever wrote it\n  apt-get -y --print-uris $@ | grep -Eo -e \"(ht|f)tp://[^\\']+\" > /opt/tmp/apt-fast.list\n  if [ -s \"/opt/tmp/apt-fast.list\" ] ; then\n\teval ${_DOWNLOADER}\n  fi\n  # Install our downloaded packages\n  aptitude $@;\n\n  echo -e \"\\nDone!\\nVerify that all packages were installed successfully.\\nIf errors are found, run aptitude clean as root\\nand try again using aptitude directly.\\n\";\n\nelse\n   aptitude $@;\nfi\n\n# Remove our lock\n\n_unlock\n"
  },
  {
    "path": "aegir/tools/bin/autobeowulf",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Automatic BOA System Major Upgrade Tool\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: Launch auto-upgrade properly   ###\n###----------------------------------------###\n###\n###  Start with manual barracuda upgrade.\n###\n###    $ barracuda up-lts system\n###\n###  !!! CREATE A FRESH VM BACKUP SNAPSHOT !!!\n###  !!! TEST THE FRESHLY CREATED BACKUP.. !!!\n###  !!! BY USING IT TO CREATE NEW TEST VM !!!\n###  !!! DO NOT CONTINUE UNTIL IT WORKS... !!!\n###\n###  Reboot the server to make sure there are\n###  no issues with boot process.\n###\n###    $ shutdown -r now\n###\n###  If reboot worked and there are no issues,\n###  you are ready for the automated magic...\n###\n###    $ touch /root/.run-to-beowulf.cnf\n###    $ service clean-boa-env start\n###\n###  Once enabled, the system will launch\n###  a series of barracuda upgrades/reboots\n###  until it migrates any supported Debian\n###  version to Devuan Beowulf -- excluding\n###  however Debian Bullseye which can be\n###  upgraded only to Devuan Chimaera.\n###\n###  !!! 
WARNING !!!\n###\n###  EXPECT IT TO CRASH COMPLETELY, SO ONLY\n###  FULL RESTORE FROM LATEST BACKUP SNAPSHOT\n###  OF ENTIRE VM WILL BRING IT BACK TO LIFE.\n###\n###  DO NOT PROCEED UNTIL YOU ARE READY FOR\n###  DISASTER RECOVERY FROM TESTED BACKUP!\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_barCnf=\"/root/.barracuda.cnf\"\n_logAbf=\"/root/.autobeowulf.log\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_aptAllow=\"--allow-unauthenticated\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n_OS_CODE=check\n#\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\ncd /root/\n\nif [ \"${_tRee}\" = \"dev\" ]; then\n  touch /root/.debug-boa-installer.cnf\n  touch /root/.debug-octopus-installer.cnf\nfi\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"AutoBeowulf v.${_tRee} [$(date +%T)] ==> $*\"\n}\n\n_check_manufacturer_compatibility() {\n  # Install dmidecode if not present\n  if ! command -v dmidecode &> /dev/null; then\n    /usr/bin/apt-get update --allow-insecure-repositories &> /dev/null\n    ${_INITINS} dmidecode &> /dev/null\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    _msg \"Not supported environment detected: ${_HOST_INFO}\" >> \"${_logAbf}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAbf}\"\n    _msg \"Bye!\" >> \"${_logAbf}\"\n    echo \"Not supported environment detected: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    _msg \"Mysterious environment: ${_HOST_INFO}\" >> \"${_logAbf}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAbf}\"\n    _msg \"Bye!\" >> \"${_logAbf}\"\n    echo \"Mysterious environment: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  fi\n}\n_check_manufacturer_compatibility\n\n_check_mysql_compatibility() {\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! 
-z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  else\n    _DB_V=UNSUPPORTED\n  fi\n  if [ \"${_DB_V}\" = \"UNSUPPORTED\" ]; then\n    _msg \"Not supported DB server detected ${_DB_SERVER_TEST}\" >> \"${_logAbf}\"\n    exit 1\n  fi\n}\n_check_mysql_compatibility\n\n###\n### Faster reboot\n###\n_faster_reboot() {\n  _msg \"Faster reboot prepare...\" >> \"${_logAbf}\"\n  service cron stop &> /dev/null\n  killall cron &> /dev/null\n  pkill -9 -f second.sh\n  pkill -9 -f minute.sh\n  pkill -9 -f runner.sh\n  _msg \"Cron has been stopped\" >> \"${_logAbf}\"\n  _msg \"Now waiting 60 seconds for any running tasks to complete\" >> \"${_logAbf}\"\n  sleep 55\n  if [ ! -e \"/root/.allow.clamav.cnf\" ] || [ -e \"/root/.deny.clamav.cnf\" ]; then\n    if [ -e \"/etc/init.d/clamav-daemon\" ]; then\n      update-rc.d -f clamav-daemon remove &> /dev/null\n    fi\n    if [ -e \"/etc/init.d/clamav-freshclam\" ]; then\n      update-rc.d -f clamav-freshclam remove &> /dev/null\n    fi\n  fi\n  pkill -9 -f avahi-daemon\n  pkill -9 -f clamd\n  pkill -9 -f freshclam\n  pkill -9 -f java\n  rm -f /run/clamav/*\n  _msg \"Java/Solr/Clamav have been stopped\" >> \"${_logAbf}\"\n  service nginx stop &> /dev/null\n  killall nginx &> /dev/null\n  killall php &> /dev/null\n  pkill -9 -f php-fpm\n  _msg \"Nginx, PHP-CLI and PHP-FPM have been stopped\" >> \"${_logAbf}\"\n  csf -df &> /dev/null\n  csf -tf &> /dev/null\n  _msg \"Firewall has been purged\" >> \"${_logAbf}\"\n  if [ -e \"/root/.my.pass.txt\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ] && [ ! 
-z \"${_SQL_PSWD}\" ]; then\n      _msg \"Preparing MySQLD for quick shutdown...\" >> \"${_logAbf}\"\n      mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n      fi\n      mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n      _msg \"Stopping MySQLD now...\" >> \"${_logAbf}\"\n      service mysql stop &> /dev/null\n      wait\n      _msg \"MySQLD stopped\" >> \"${_logAbf}\"\n    else\n      _msg \"MySQLD already stopped\" >> \"${_logAbf}\"\n    fi\n  fi\n  _msg \"Faster reboot done\" >> \"${_logAbf}\"\n}\n\nif [ ! -e \"/root/.run-to-beowulf.cnf\" ]; then\n  echo \"ERROR: /root/.run-to-beowulf.cnf is required!\"\n  exit 1\nfi\n\n_check_os_compatibility() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ \"${_OS_CODE}\" = \"beowulf\" ] \\\n    && [ ! -e \"/root/.run-auto-major-os-reboot-beowulf-one.cnf\" ] \\\n    && [ ! 
-e \"/root/.run-auto-major-os-reboot-beowulf-two.cnf\" ]; then\n    echo \"This server already runs ${_OS_DIST}/${_OS_CODE}\"\n    echo \"Bye!\"\n    exit 1\n  elif [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n    || [ \"${_OS_CODE}\" = \"trixie\" ] \\\n    || [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n    || [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    echo \"This server already runs newer ${_OS_DIST}/${_OS_CODE}\"\n    echo \"Bye!\"\n    exit 1\n  fi\n  if [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _NEXT_OS_CODE=beowulf\n  elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    _NEXT_OS_CODE=beowulf\n  elif [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _NEXT_OS_CODE=beowulf\n  else\n    if [ ! -e \"/root/.run-auto-major-os-reboot-beowulf-one.cnf\" ] \\\n      && [ ! -e \"/root/.run-auto-major-os-reboot-beowulf-two.cnf\" ]; then\n      echo \"This procedure does not support ${_OS_DIST}/${_OS_CODE}\"\n      echo \"The minimum supported system is Debian/jessie\"\n      echo \"The maximum supported system is Debian/buster\"\n      echo \"Bye!\"\n      exit 1\n    fi\n  fi\n}\n_check_os_compatibility\n\nif [ -x \"/opt/local/bin/killer\" ]; then\n  sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n  echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\nfi\n\n_if_remove_cloud_utils() {\n  _INITD_TEST=$(ls -la /etc/init.d/*cloud* 2>&1)\n  _aptAkamaiLst=\"/etc/apt/sources.list.d/akamai-linux-team.list\"\n  _aptCloudInit=\"/etc/apt/preferences.d/99linode-cloudinit\"\n  _etcCloudCfgd=\"/etc/cloud/cloud.cfg.d\"\n  if [[ ! 
\"${_INITD_TEST}\" =~ \"No such file\" ]] \\\n    || [ -e \"${_aptAkamaiLst}\" ] \\\n    || [ -e \"${_aptCloudInit}\" ] \\\n    || [ -e \"${_etcCloudCfgd}\" ]; then\n    _msg \"Removing problematic cloud-utils detected on this system\" >> \"${_logAbf}\"\n    /usr/bin/apt-get update --allow-insecure-repositories 2> /dev/null\n    /usr/bin/apt-get remove cloud-utils cloud-init -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get remove cloud-image-utils -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get autoremove --purge -y 2> /dev/null\n    /usr/bin/apt-get autoclean -y 2> /dev/null\n    [ -e \"${_etcCloudCfgd}\" ] && mv -f /etc/cloud /var/backups/\n    [ -e \"${_aptAkamaiLst}\" ] && mv -f ${_aptAkamaiLst} /var/backups/\n    [ -e \"${_aptCloudInit}\" ] && mv -f ${_aptCloudInit} /var/backups/\n  fi\n}\nif [ \"${_VMFAMILY}\" != \"AWS\" ]; then\n  [ -e \"/root/.mode.selected.full.cnf\" ] && _if_remove_cloud_utils\nfi\n\n_if_clean_boa_env() {\n  if [ ! -x \"/etc/init.d/clean-boa-env\" ] \\\n    || [ ! 
-e \"/root/.run-auto-update-clean-boa-env.cnf\" ]; then\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      mv -f /etc/init.d/clean-boa-env /var/backups/clean-boa-env-bak\n    fi\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlHmr=\"http://files.aegir.cc/versions/${_tRee}/boa/aegir\"\n    curl ${_crlGet} \"${_urlHmr}/conf/var/clean-boa-env\" -o /etc/init.d/clean-boa-env\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      chmod 700 /etc/init.d/clean-boa-env\n      chown root:root /etc/init.d/clean-boa-env\n      update-rc.d clean-boa-env defaults &> /dev/null\n      touch /root/.run-auto-update-clean-boa-env.cnf\n    else\n      if [ -e \"/var/backups/clean-boa-env-bak\" ]; then\n        mv -f /var/backups/clean-boa-env-bak /etc/init.d/clean-boa-env\n      fi\n    fi\n  fi\n}\n_if_clean_boa_env\n\n_if_fix_dhcp() {\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _DHCP_LOG=\"/var/log/daemon.log\"\n  else\n    _DHCP_LOG=\"/var/log/syslog\"\n  fi\n  if [ -e \"${_DHCP_LOG}\" ]; then\n    if [ `tail --lines=3 ${_DHCP_LOG} \\\n      | grep --count \"dhclient.*Failed\"` -gt 0 ]; then\n      sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n      wait\n      sed -i \"/^$/d\" /etc/csf/csf.allow\n      _DHCP_TEST=$(grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq 2>&1)\n      if [[ \"${_DHCP_TEST}\" =~ \"port\" ]]; then\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f12 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      else\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      fi\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n    fi\n  fi\n}\n\nif [ -e \"/root/.run-to-beowulf.cnf\" ]; 
then\n  echo \" \" >> \"${_logAbf}\"\n  if [ -e \"/root/.run-auto-major-os-reboot-beowulf-one.cnf\" ] \\\n    || [ -e \"/root/.run-auto-major-os-reboot-beowulf-two.cnf\" ]; then\n    _msg \"Waiting 30 seconds for the system start scripts to finish\" >> \"${_logAbf}\"\n    sleep 30\n    _if_fix_dhcp\n  else\n    _msg \"Automatic BOA System Major Upgrade Tool welcomes you aboard!\" >> \"${_logAbf}\"\n    sleep 3\n    _if_fix_dhcp\n  fi\nfi\n\n_AUTO_BEOWULF_TEST=$(grep _AUTO_BEOWULF ${_barCnf} 2>&1)\nif [[ ! \"${_AUTO_BEOWULF_TEST}\" =~ \"_AUTO_BEOWULF\" ]]; then\n  echo \"_AUTO_BEOWULF=YES\" >> ${_barCnf}\nfi\n\nif [ -e \"/root/.run-to-beowulf.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-beowulf-one.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-beowulf-two.cnf\" ]; then\n  echo \" \" >> \"${_logAbf}\"\n  _msg \"Running barracuda php-idle disable to speed up upgrades\" >> \"${_logAbf}\"\n  barracuda php-idle disable >> \"${_logAbf}\"\n  wait\n  _msg \"The barracuda php-idle disable completed\" >> \"${_logAbf}\"\n  _msg \"Launching standard barracuda up-${_tRee} system first\" >> \"${_logAbf}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAbf}\"\n  wait\n  _msg \"The standard barracuda up-${_tRee} system completed\" >> \"${_logAbf}\"\n  if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n    _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n    _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n    if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n      echo \"_JESSIE_TO_BEOWULF=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/beowulf\" >> \"${_logAbf}\"\n    elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n      echo \"_STRETCH_TO_BEOWULF=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from 
${_OS_DIST}/${_OS_CODE} to Devuan/beowulf\" >> \"${_logAbf}\"\n    elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n      echo \"_BUSTER_TO_BEOWULF=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/beowulf\" >> \"${_logAbf}\"\n    fi\n    _msg \"The first stage of major OS upgrade will start now\" >> \"${_logAbf}\"\n    [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n    /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAbf}\"\n    wait\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      _msg \"The first stage of major OS upgrade completed\" >> \"${_logAbf}\"\n      _msg \"The system will reboot now\" >> \"${_logAbf}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-beowulf-one.cnf\n      update-grub >> \"${_logAbf}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\nfi\n\nif [ -e \"/root/.run-to-beowulf.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-beowulf-one.cnf\" ] \\\n  && [ ! 
-e \"/root/.run-auto-major-os-reboot-beowulf-two.cnf\" ]; then\n  echo \" \" >> \"${_logAbf}\"\n  _msg \"Launching post-reboot barracuda up-${_tRee} system\" >> \"${_logAbf}\"\n  _msg \"to complete the first stage of major OS upgrade\" >> \"${_logAbf}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAbf}\"\n  wait\n  _msg \"The post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAbf}\"\n  echo \" \" >> \"${_logAbf}\"\n  if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n    _msg \"The single stage major OS upgrade completed\" >> \"${_logAbf}\"\n    _msg \"The system will reboot now for a final upgrade\" >> \"${_logAbf}\"\n    rm -f /root/.latest-barracuda-upgrade-finale.info\n    touch /root/.run-auto-major-os-reboot-beowulf-two.cnf\n    update-grub >> \"${_logAbf}\"\n    _faster_reboot\n    shutdown -r now\n    wait\n    exit 0\n  fi\nfi\n\nif [ -e \"/root/.run-to-beowulf.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-beowulf-one.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-beowulf-two.cnf\" ]; then\n  echo \" \" >> \"${_logAbf}\"\n  touch /root/.auto-upgraded-to-beowulf.cnf\n  _msg \"Launching the final post-second-reboot barracuda up-${_tRee} system\" >> \"${_logAbf}\"\n  [ ! -e \"/root/.allow.apparmor.cnf\" ] && touch /root/.allow.apparmor.cnf\n  [ ! 
-e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && touch /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAbf}\"\n  wait\n  [ -e \"/root/.run-to-beowulf.cnf\" ] && rm -f /root/.run-to-beowulf.cnf\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-beowulf-one.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-beowulf-one.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-beowulf-two.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-beowulf-two.cnf\n  sed -i \"s/^_AUTO_BEOWULF.*//g\" /root/.barracuda.cnf\n  _msg \"The final post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAbf}\"\n  _msg \"That's all folks!\" >> \"${_logAbf}\"\n  _msg \"Bye!\" >> \"${_logAbf}\"\n  echo \" \" >> \"${_logAbf}\"\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/autochimaera",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Automatic BOA System Major Upgrade Tool\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: Launch auto-upgrade properly   ###\n###----------------------------------------###\n###\n###  Start with manual barracuda upgrade.\n###\n###    $ barracuda up-lts system\n###\n###  !!! CREATE A FRESH VM BACKUP SNAPSHOT !!!\n###  !!! TEST THE FRESHLY CREATED BACKUP.. !!!\n###  !!! BY USING IT TO CREATE NEW TEST VM !!!\n###  !!! DO NOT CONTINUE UNTIL IT WORKS... !!!\n###\n###  Reboot the server to make sure there are\n###  no issues with boot process.\n###\n###    $ shutdown -r now\n###\n###  If reboot worked and there are no issues,\n###  you are ready for the automated magic...\n###\n###    $ touch /root/.run-to-chimaera.cnf\n###    $ service clean-boa-env start\n###\n###  Once enabled, the system will launch\n###  a series of barracuda upgrades/reboots\n###  until it migrates any supported Debian\n###  or Devuan version to Devuan Chimaera.\n###\n###  !!! 
WARNING !!!\n###\n###  EXPECT IT TO CRASH COMPLETELY, SO ONLY\n###  FULL RESTORE FROM LATEST BACKUP SNAPSHOT\n###  OF ENTIRE VM WILL BRING IT BACK TO LIFE.\n###\n###  DO NOT PROCEED UNTIL YOU ARE READY FOR\n###  DISASTER RECOVERY FROM TESTED BACKUP!\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_barCnf=\"/root/.barracuda.cnf\"\n_logAch=\"/root/.autochimaera.log\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_aptAllow=\"--allow-unauthenticated\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n_OS_CODE=check\n#\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\ncd /root/\n\nif [ \"${_tRee}\" = \"dev\" ]; then\n  touch /root/.debug-boa-installer.cnf\n  touch /root/.debug-octopus-installer.cnf\nfi\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"AutoChimaera v.${_tRee} [$(date +%T)] ==> $*\"\n}\n\n_check_manufacturer_compatibility() {\n  # Install dmidecode if not present\n  if ! command -v dmidecode &> /dev/null; then\n    /usr/bin/apt-get update --allow-insecure-repositories &> /dev/null\n    ${_INITINS} dmidecode &> /dev/null\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    _msg \"Not supported environment detected: ${_HOST_INFO}\" >> \"${_logAch}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAch}\"\n    _msg \"Bye!\" >> \"${_logAch}\"\n    echo \"Not supported environment detected: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    _msg \"Mysterious environment: ${_HOST_INFO}\" >> \"${_logAch}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAch}\"\n    _msg \"Bye!\" >> \"${_logAch}\"\n    echo \"Mysterious environment: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  fi\n}\n_check_manufacturer_compatibility\n\n_check_mysql_compatibility() {\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! 
-z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  else\n    _DB_V=UNSUPPORTED\n  fi\n  if [ \"${_DB_V}\" = \"UNSUPPORTED\" ]; then\n    _msg \"Not supported DB server detected ${_DB_SERVER_TEST}\" >> \"${_logAch}\"\n    exit 1\n  fi\n}\n_check_mysql_compatibility\n\n###\n### Faster reboot\n###\n_faster_reboot() {\n  _msg \"Faster reboot prepare...\" >> \"${_logAch}\"\n  service cron stop &> /dev/null\n  killall cron &> /dev/null\n  pkill -9 -f second.sh\n  pkill -9 -f minute.sh\n  pkill -9 -f runner.sh\n  _msg \"Cron has been stopped\" >> \"${_logAch}\"\n  _msg \"Now waiting 55 seconds for any running tasks to complete\" >> \"${_logAch}\"\n  sleep 55\n  if [ ! -e \"/root/.allow.clamav.cnf\" ] || [ -e \"/root/.deny.clamav.cnf\" ]; then\n    if [ -e \"/etc/init.d/clamav-daemon\" ]; then\n      update-rc.d -f clamav-daemon remove &> /dev/null\n    fi\n    if [ -e \"/etc/init.d/clamav-freshclam\" ]; then\n      update-rc.d -f clamav-freshclam remove &> /dev/null\n    fi\n  fi\n  pkill -9 -f avahi-daemon\n  pkill -9 -f clamd\n  pkill -9 -f freshclam\n  pkill -9 -f java\n  rm -f /run/clamav/*\n  _msg \"Java/Solr/Clamav have been stopped\" >> \"${_logAch}\"\n  service nginx stop &> /dev/null\n  killall nginx &> /dev/null\n  killall php &> /dev/null\n  pkill -9 -f php-fpm\n  _msg \"Nginx, PHP-CLI and PHP-FPM have been stopped\" >> \"${_logAch}\"\n  csf -df &> /dev/null\n  csf -tf &> /dev/null\n  _msg \"Firewall has been purged\" >> \"${_logAch}\"\n  if [ -e \"/root/.my.pass.txt\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ] && [ ! 
-z \"${_SQL_PSWD}\" ]; then\n      _msg \"Preparing MySQLD for quick shutdown...\" >> \"${_logAch}\"\n      mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n      fi\n      mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n      _msg \"Stopping MySQLD now...\" >> \"${_logAch}\"\n      service mysql stop &> /dev/null\n      wait\n      _msg \"MySQLD stopped\" >> \"${_logAch}\"\n    else\n      _msg \"MySQLD already stopped\" >> \"${_logAch}\"\n    fi\n  fi\n  _msg \"Faster reboot done\" >> \"${_logAch}\"\n}\n\nif [ ! -e \"/root/.run-to-chimaera.cnf\" ]; then\n  echo \"ERROR: /root/.run-to-chimaera.cnf is required!\"\n  exit 1\nfi\n\n_check_os_compatibility() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n    && [ ! -e \"/root/.run-auto-major-os-reboot-chimaera-one.cnf\" ] \\\n    && [ ! 
-e \"/root/.run-auto-major-os-reboot-chimaera-two.cnf\" ]; then\n    echo \"This server already runs ${_OS_DIST}/${_OS_CODE}\"\n    echo \"Bye!\"\n    exit 1\n  elif [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"trixie\" ] \\\n    || [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    echo \"This server already runs newer ${_OS_DIST}/${_OS_CODE}\"\n    echo \"Bye!\"\n    exit 1\n  fi\n  if [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _NEXT_OS_CODE=chimaera\n  elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _NEXT_OS_CODE=chimaera\n  elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _NEXT_OS_CODE=beowulf\n  elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    _NEXT_OS_CODE=beowulf\n  elif [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _NEXT_OS_CODE=beowulf\n  else\n    if [ ! -e \"/root/.run-auto-major-os-reboot-chimaera-one.cnf\" ] \\\n      && [ ! -e \"/root/.run-auto-major-os-reboot-chimaera-two.cnf\" ]; then\n      echo \"This procedure does not support ${_OS_DIST}/${_OS_CODE}\"\n      echo \"The minimum supported system is Debian/jessie\"\n      echo \"The maximum supported system is Debian/bullseye or Devuan/beowulf\"\n      echo \"Bye!\"\n      exit 1\n    fi\n  fi\n}\n_check_os_compatibility\n\nif [ -x \"/opt/local/bin/killer\" ]; then\n  sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n  echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\nfi\n\n_if_remove_cloud_utils() {\n  _INITD_TEST=$(ls -la /etc/init.d/*cloud* 2>&1)\n  _aptAkamaiLst=\"/etc/apt/sources.list.d/akamai-linux-team.list\"\n  _aptCloudInit=\"/etc/apt/preferences.d/99linode-cloudinit\"\n  _etcCloudCfgd=\"/etc/cloud/cloud.cfg.d\"\n  if [[ ! 
\"${_INITD_TEST}\" =~ \"No such file\" ]] \\\n    || [ -e \"${_aptAkamaiLst}\" ] \\\n    || [ -e \"${_aptCloudInit}\" ] \\\n    || [ -e \"${_etcCloudCfgd}\" ]; then\n    _msg \"Removing problematic cloud-utils detected on this system\" >> \"${_logAch}\"\n    /usr/bin/apt-get update --allow-insecure-repositories 2> /dev/null\n    /usr/bin/apt-get remove cloud-utils cloud-init -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get remove cloud-image-utils -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get autoremove --purge -y 2> /dev/null\n    /usr/bin/apt-get autoclean -y 2> /dev/null\n    [ -e \"${_etcCloudCfgd}\" ] && mv -f /etc/cloud /var/backups/\n    [ -e \"${_aptAkamaiLst}\" ] && mv -f ${_aptAkamaiLst} /var/backups/\n    [ -e \"${_aptCloudInit}\" ] && mv -f ${_aptCloudInit} /var/backups/\n  fi\n}\nif [ \"${_VMFAMILY}\" != \"AWS\" ]; then\n  [ -e \"/root/.mode.selected.full.cnf\" ] && _if_remove_cloud_utils\nfi\n\n_if_clean_boa_env() {\n  if [ ! -x \"/etc/init.d/clean-boa-env\" ] \\\n    || [ ! 
-e \"/root/.run-auto-update-clean-boa-env.cnf\" ]; then\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      mv -f /etc/init.d/clean-boa-env /var/backups/clean-boa-env-bak\n    fi\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlHmr=\"http://files.aegir.cc/versions/${_tRee}/boa/aegir\"\n    curl ${_crlGet} \"${_urlHmr}/conf/var/clean-boa-env\" -o /etc/init.d/clean-boa-env\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      chmod 700 /etc/init.d/clean-boa-env\n      chown root:root /etc/init.d/clean-boa-env\n      update-rc.d clean-boa-env defaults &> /dev/null\n      touch /root/.run-auto-update-clean-boa-env.cnf\n    else\n      if [ -e \"/var/backups/clean-boa-env-bak\" ]; then\n        mv -f /var/backups/clean-boa-env-bak /etc/init.d/clean-boa-env\n      fi\n    fi\n  fi\n}\n_if_clean_boa_env\n\n_if_fix_dhcp() {\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _DHCP_LOG=\"/var/log/daemon.log\"\n  else\n    _DHCP_LOG=\"/var/log/syslog\"\n  fi\n  if [ -e \"${_DHCP_LOG}\" ]; then\n    if [ `tail --lines=3 ${_DHCP_LOG} \\\n      | grep --count \"dhclient.*Failed\"` -gt 0 ]; then\n      sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n      wait\n      sed -i \"/^$/d\" /etc/csf/csf.allow\n      _DHCP_TEST=$(grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq 2>&1)\n      if [[ \"${_DHCP_TEST}\" =~ \"port\" ]]; then\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f12 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      else\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      fi\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n    fi\n  fi\n}\n\nif [ -e \"/root/.run-to-chimaera.cnf\" ]; 
then\n  echo \" \" >> \"${_logAch}\"\n  if [ -e \"/root/.run-auto-major-os-reboot-chimaera-one.cnf\" ] \\\n    || [ -e \"/root/.run-auto-major-os-reboot-chimaera-two.cnf\" ]; then\n    _msg \"Waiting 30 seconds for the system start scripts to finish\" >> \"${_logAch}\"\n    sleep 30\n    _if_fix_dhcp\n  else\n    _msg \"Automatic BOA System Major Upgrade Tool welcomes you aboard!\" >> \"${_logAch}\"\n    sleep 3\n    _if_fix_dhcp\n  fi\nfi\n\n_AUTO_CHIMAERA_TEST=$(grep _AUTO_CHIMAERA ${_barCnf} 2>&1)\nif [[ ! \"${_AUTO_CHIMAERA_TEST}\" =~ \"_AUTO_CHIMAERA\" ]]; then\n  echo \"_AUTO_CHIMAERA=YES\" >> ${_barCnf}\nfi\n\nif [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-chimaera-one.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-chimaera-two.cnf\" ]; then\n  echo \" \" >> \"${_logAch}\"\n  _msg \"Running barracuda php-idle disable to speed up upgrades\" >> \"${_logAch}\"\n  barracuda php-idle disable >> \"${_logAch}\"\n  wait\n  _msg \"The barracuda php-idle disable completed\" >> \"${_logAch}\"\n  _msg \"Launching standard barracuda up-${_tRee} system now\" >> \"${_logAch}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAch}\"\n  wait\n  _msg \"The standard barracuda up-${_tRee} system completed\" >> \"${_logAch}\"\n  if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n    _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n    _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n    if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n      echo \"_JESSIE_TO_BEOWULF=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/beowulf\" >> \"${_logAch}\"\n    elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n      echo \"_STRETCH_TO_BEOWULF=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from 
${_OS_DIST}/${_OS_CODE} to Devuan/beowulf\" >> \"${_logAch}\"\n    elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n      echo \"_BUSTER_TO_BEOWULF=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/beowulf\" >> \"${_logAch}\"\n    elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n      echo \"_BULLSEYE_TO_CHIMAERA=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAch}\"\n    elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n      echo \"_BEOWULF_TO_CHIMAERA=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAch}\"\n    fi\n    _msg \"The first stage of major OS upgrade will start now\" >> \"${_logAch}\"\n    [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n    /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAch}\"\n    wait\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      _msg \"The first stage of major OS upgrade completed\" >> \"${_logAch}\"\n      _msg \"The system will reboot now\" >> \"${_logAch}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-chimaera-one.cnf\n      update-grub >> \"${_logAch}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\nfi\n\nif [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-chimaera-one.cnf\" ] \\\n  && [ ! 
-e \"/root/.run-auto-major-os-reboot-chimaera-two.cnf\" ]; then\n  echo \" \" >> \"${_logAch}\"\n  _msg \"Launching post-reboot barracuda up-${_tRee} system\" >> \"${_logAch}\"\n  _msg \"to complete the first stage of major OS upgrade\" >> \"${_logAch}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAch}\"\n  wait\n  _msg \"The post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAch}\"\n  echo \" \" >> \"${_logAch}\"\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ -e \"/root/.beowulf_to_chimaera_major_os_upgrade.info\" ] \\\n    || [ -e \"/root/.bullseye_to_chimaera_major_os_upgrade.info\" ]; then\n    _FURTHER_UPGRADE=NO\n  else\n    _FURTHER_UPGRADE=YES\n  fi\n  if [ \"${_FURTHER_UPGRADE}\" = \"NO\" ]; then\n    if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n      _msg \"The single stage major OS upgrade completed\" >> \"${_logAch}\"\n      _msg \"The system will reboot now for a final upgrade\" >> \"${_logAch}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-chimaera-two.cnf\n      update-grub >> \"${_logAch}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\n  if [ \"${_OS_CODE}\" = \"bullseye\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BULLSEYE_TO_CHIMAERA=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAch}\"\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BEOWULF_TO_CHIMAERA=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAch}\"\n  fi\n  if [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    _msg 
\"The second stage of major OS upgrade will start now\" >> \"${_logAch}\"\n    [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n    /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAch}\"\n    wait\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      _msg \"The second stage of major OS upgrade completed\" >> \"${_logAch}\"\n      _msg \"The system will reboot now\" >> \"${_logAch}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-chimaera-two.cnf\n      update-grub >> \"${_logAch}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\nfi\n\nif [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-chimaera-one.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-chimaera-two.cnf\" ]; then\n  echo \" \" >> \"${_logAch}\"\n  touch /root/.auto-upgraded-to-chimaera.cnf\n  _msg \"Launching the final post-second-reboot barracuda up-${_tRee} system\" >> \"${_logAch}\"\n  [ ! -e \"/root/.allow.apparmor.cnf\" ] && touch /root/.allow.apparmor.cnf\n  [ ! 
-e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && touch /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAch}\"\n  wait\n  [ -e \"/root/.run-to-chimaera.cnf\" ] && rm -f /root/.run-to-chimaera.cnf\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-chimaera-one.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-chimaera-one.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-chimaera-two.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-chimaera-two.cnf\n  sed -i \"s/^_AUTO_CHIMAERA.*//g\" /root/.barracuda.cnf\n  _msg \"The final post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAch}\"\n  _msg \"That's all folks!\" >> \"${_logAch}\"\n  _msg \"Bye!\" >> \"${_logAch}\"\n  echo \" \" >> \"${_logAch}\"\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/autodaedalus",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Automatic BOA System Major Upgrade Tool\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: Launch auto-upgrade properly   ###\n###----------------------------------------###\n###\n###  Start with manual barracuda upgrade.\n###\n###    $ barracuda up-lts system\n###\n###  !!! CREATE A FRESH VM BACKUP SNAPSHOT !!!\n###  !!! TEST THE FRESHLY CREATED BACKUP.. !!!\n###  !!! BY USING IT TO CREATE NEW TEST VM !!!\n###  !!! DO NOT CONTINUE UNTIL IT WORKS... !!!\n###\n###  Reboot the server to make sure there are\n###  no issues with boot process.\n###\n###    $ shutdown -r now\n###\n###  If reboot worked and there are no issues,\n###  you are ready for the automated magic...\n###\n###    $ touch /root/.run-to-daedalus.cnf\n###    $ service clean-boa-env start\n###\n###  Once enabled, the system will launch\n###  a series of barracuda upgrades/reboots\n###  until it migrates any supported Debian\n###  or Devuan version to Devuan Daedalus.\n###\n###  !!! 
WARNING !!!\n###\n###  EXPECT IT TO CRASH COMPLETELY, SO ONLY\n###  FULL RESTORE FROM LATEST BACKUP SNAPSHOT\n###  OF ENTIRE VM WILL BRING IT BACK TO LIFE.\n###\n###  DO NOT PROCEED UNTIL YOU ARE READY FOR\n###  DISASTER RECOVERY FROM TESTED BACKUP!\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_barCnf=\"/root/.barracuda.cnf\"\n_logAds=\"/root/.autodaedalus.log\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_aptAllow=\"--allow-unauthenticated\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n_OS_CODE=check\n#\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\ncd /root/\n\nif [ \"${_tRee}\" = \"dev\" ]; then\n  touch /root/.debug-boa-installer.cnf\n  touch /root/.debug-octopus-installer.cnf\nfi\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"AutoDaedalus v.${_tRee} [$(date +%T)] ==> $*\"\n}\n\n_check_manufacturer_compatibility() {\n  # Install dmidecode if not present\n  if ! command -v dmidecode &> /dev/null; then\n    /usr/bin/apt-get update --allow-insecure-repositories &> /dev/null\n    ${_INITINS} dmidecode &> /dev/null\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    _msg \"Not supported environment detected: ${_HOST_INFO}\" >> \"${_logAds}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAds}\"\n    _msg \"Bye!\" >> \"${_logAds}\"\n    echo \"Not supported environment detected: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    _msg \"Mysterious environment: ${_HOST_INFO}\" >> \"${_logAds}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAds}\"\n    _msg \"Bye!\" >> \"${_logAds}\"\n    echo \"Mysterious environment: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  fi\n}\n_check_manufacturer_compatibility\n\n_check_mysql_compatibility() {\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! 
-z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  else\n    _DB_V=UNSUPPORTED\n  fi\n  if [ \"${_DB_V}\" = \"UNSUPPORTED\" ]; then\n    _msg \"Not supported DB server detected ${_DB_SERVER_TEST}\" >> \"${_logAds}\"\n    exit 1\n  fi\n}\n_check_mysql_compatibility\n\n###\n### Faster reboot\n###\n_faster_reboot() {\n  _msg \"Faster reboot prepare...\" >> \"${_logAds}\"\n  service cron stop &> /dev/null\n  killall cron &> /dev/null\n  pkill -9 -f second.sh\n  pkill -9 -f minute.sh\n  pkill -9 -f runner.sh\n  _msg \"Cron has been stopped\" >> \"${_logAds}\"\n  _msg \"Now waiting 60 seconds for any running tasks to complete\" >> \"${_logAds}\"\n  sleep 55\n  if [ ! -e \"/root/.allow.clamav.cnf\" ] || [ -e \"/root/.deny.clamav.cnf\" ]; then\n    if [ -e \"/etc/init.d/clamav-daemon\" ]; then\n      update-rc.d -f clamav-daemon remove &> /dev/null\n    fi\n    if [ -e \"/etc/init.d/clamav-freshclam\" ]; then\n      update-rc.d -f clamav-freshclam remove &> /dev/null\n    fi\n  fi\n  pkill -9 -f avahi-daemon\n  pkill -9 -f clamd\n  pkill -9 -f freshclam\n  pkill -9 -f java\n  rm -f /run/clamav/*\n  _msg \"Java/Solr/Clamav have been stopped\" >> \"${_logAds}\"\n  service nginx stop &> /dev/null\n  killall nginx &> /dev/null\n  killall php &> /dev/null\n  pkill -9 -f php-fpm\n  _msg \"Nginx, PHP-CLI and PHP-FPM have been stopped\" >> \"${_logAds}\"\n  csf -df &> /dev/null\n  csf -tf &> /dev/null\n  _msg \"Firewall has been purged\" >> \"${_logAds}\"\n  if [ -e \"/root/.my.pass.txt\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ] && [ ! 
-z \"${_SQL_PSWD}\" ]; then\n      _msg \"Preparing MySQLD for quick shutdown...\" >> \"${_logAds}\"\n      mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n      fi\n      mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n      _msg \"Stopping MySQLD now...\" >> \"${_logAds}\"\n      service mysql stop &> /dev/null\n      wait\n      _msg \"MySQLD stopped\" >> \"${_logAds}\"\n    else\n      _msg \"MySQLD already stopped\" >> \"${_logAds}\"\n    fi\n  fi\n  _msg \"Faster reboot done\" >> \"${_logAds}\"\n}\n\nif [ ! -e \"/root/.run-to-daedalus.cnf\" ]; then\n  echo \"ERROR: /root/.run-to-daedalus.cnf is required!\"\n  exit 1\nfi\n\n_check_os_compatibility() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    && [ ! -e \"/root/.run-auto-major-os-reboot-daedalus-one.cnf\" ] \\\n    && [ ! 
-e \"/root/.run-auto-major-os-reboot-daedalus-two.cnf\" ]; then\n    echo \"This server already runs ${_OS_DIST}/${_OS_CODE}\"\n    echo \"Bye!\"\n    exit 1\n  fi\n  if [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _NEXT_OS_CODE=daedalus\n  elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _NEXT_OS_CODE=daedalus\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _NEXT_OS_CODE=chimaera\n  elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _NEXT_OS_CODE=chimaera\n  elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _NEXT_OS_CODE=beowulf\n  else\n    if [ ! -e \"/root/.run-auto-major-os-reboot-daedalus-one.cnf\" ] \\\n      && [ ! -e \"/root/.run-auto-major-os-reboot-daedalus-two.cnf\" ]; then\n      echo \"This procedure does not support ${_OS_DIST}/${_OS_CODE}\"\n      echo \"The minimum supported system is Debian/buster\"\n      echo \"The maximum supported system is Debian/bookworm or Devuan/chimaera\"\n      echo \"Bye!\"\n      exit 1\n    fi\n  fi\n}\n_check_os_compatibility\n\nif [ -x \"/opt/local/bin/killer\" ]; then\n  sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n  echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\nfi\n\n_if_remove_cloud_utils() {\n  _INITD_TEST=$(ls -la /etc/init.d/*cloud* 2>&1)\n  _aptAkamaiLst=\"/etc/apt/sources.list.d/akamai-linux-team.list\"\n  _aptCloudInit=\"/etc/apt/preferences.d/99linode-cloudinit\"\n  _etcCloudCfgd=\"/etc/cloud/cloud.cfg.d\"\n  if [[ ! 
\"${_INITD_TEST}\" =~ \"No such file\" ]] \\\n    || [ -e \"${_aptAkamaiLst}\" ] \\\n    || [ -e \"${_aptCloudInit}\" ] \\\n    || [ -e \"${_etcCloudCfgd}\" ]; then\n    _msg \"Removing problematic cloud-utils detected on this system\" >> \"${_logAds}\"\n    /usr/bin/apt-get update --allow-insecure-repositories 2> /dev/null\n    /usr/bin/apt-get remove cloud-utils cloud-init -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get remove cloud-image-utils -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get autoremove --purge -y 2> /dev/null\n    /usr/bin/apt-get autoclean -y 2> /dev/null\n    [ -e \"${_etcCloudCfgd}\" ] && mv -f /etc/cloud /var/backups/\n    [ -e \"${_aptAkamaiLst}\" ] && mv -f ${_aptAkamaiLst} /var/backups/\n    [ -e \"${_aptCloudInit}\" ] && mv -f ${_aptCloudInit} /var/backups/\n  fi\n}\nif [ \"${_VMFAMILY}\" != \"AWS\" ]; then\n  [ -e \"/root/.mode.selected.full.cnf\" ] && _if_remove_cloud_utils\nfi\n\n_if_clean_boa_env() {\n  if [ ! -x \"/etc/init.d/clean-boa-env\" ] \\\n    || [ ! 
-e \"/root/.run-auto-update-clean-boa-env.cnf\" ]; then\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      mv -f /etc/init.d/clean-boa-env /var/backups/clean-boa-env-bak\n    fi\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlHmr=\"http://files.aegir.cc/versions/${_tRee}/boa/aegir\"\n    curl ${_crlGet} \"${_urlHmr}/conf/var/clean-boa-env\" -o /etc/init.d/clean-boa-env\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      chmod 700 /etc/init.d/clean-boa-env\n      chown root:root /etc/init.d/clean-boa-env\n      update-rc.d clean-boa-env defaults &> /dev/null\n      touch /root/.run-auto-update-clean-boa-env.cnf\n    else\n      if [ -e \"/var/backups/clean-boa-env-bak\" ]; then\n        mv -f /var/backups/clean-boa-env-bak /etc/init.d/clean-boa-env\n      fi\n    fi\n  fi\n}\n_if_clean_boa_env\n\n_if_fix_dhcp() {\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _DHCP_LOG=\"/var/log/daemon.log\"\n  else\n    _DHCP_LOG=\"/var/log/syslog\"\n  fi\n  if [ -e \"${_DHCP_LOG}\" ]; then\n    if [ `tail --lines=3 ${_DHCP_LOG} \\\n      | grep --count \"dhclient.*Failed\"` -gt 0 ]; then\n      sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n      wait\n      sed -i \"/^$/d\" /etc/csf/csf.allow\n      _DHCP_TEST=$(grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq 2>&1)\n      if [[ \"${_DHCP_TEST}\" =~ \"port\" ]]; then\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f12 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      else\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      fi\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n    fi\n  fi\n}\n\nif [ -e \"/root/.run-to-daedalus.cnf\" ]; 
then\n  echo \" \" >> \"${_logAds}\"\n  if [ -e \"/root/.run-auto-major-os-reboot-daedalus-one.cnf\" ] \\\n    || [ -e \"/root/.run-auto-major-os-reboot-daedalus-two.cnf\" ]; then\n    _msg \"Waiting 30 seconds for the system start scripts to finish\" >> \"${_logAds}\"\n    sleep 30\n    _if_fix_dhcp\n  else\n    _msg \"Automatic BOA System Major Upgrade Tool welcomes you aboard!\" >> \"${_logAds}\"\n    sleep 3\n    _if_fix_dhcp\n  fi\nfi\n\n_AUTO_DAEDALUS_TEST=$(grep _AUTO_DAEDALUS ${_barCnf} 2>&1)\nif [[ ! \"${_AUTO_DAEDALUS_TEST}\" =~ \"_AUTO_DAEDALUS\" ]]; then\n  echo \"_AUTO_DAEDALUS=YES\" >> ${_barCnf}\nfi\n\nif [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-daedalus-one.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-daedalus-two.cnf\" ]; then\n  echo \" \" >> \"${_logAds}\"\n  _msg \"Running barracuda php-idle disable to speed up upgrades\" >> \"${_logAds}\"\n  barracuda php-idle disable >> \"${_logAds}\"\n  wait\n  _msg \"The barracuda php-idle disable completed\" >> \"${_logAds}\"\n  _msg \"Launching standard barracuda up-${_tRee} system now\" >> \"${_logAds}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n  wait\n  _msg \"The standard barracuda up-${_tRee} system completed\" >> \"${_logAds}\"\n  if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n    _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n    _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n    if [ \"${_OS_CODE}\" = \"buster\" ]; then\n      echo \"_BUSTER_TO_BEOWULF=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/beowulf\"  >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n      echo \"_BULLSEYE_TO_CHIMAERA=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from 
${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n      echo \"_BEOWULF_TO_CHIMAERA=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n      echo \"_BOOKWORM_TO_DAEDALUS=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n      echo \"_CHIMAERA_TO_DAEDALUS=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n    fi\n    _msg \"The first stage of major OS upgrade will start now\" >> \"${_logAds}\"\n    [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n    /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n    wait\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      _msg \"The first stage of major OS upgrade completed\" >> \"${_logAds}\"\n      _msg \"The system will reboot now\" >> \"${_logAds}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-daedalus-one.cnf\n      update-grub >> \"${_logAds}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\nfi\n\nif [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-daedalus-one.cnf\" ] \\\n  && [ ! 
-e \"/root/.run-auto-major-os-reboot-daedalus-two.cnf\" ]; then\n  echo \" \" >> \"${_logAds}\"\n  _msg \"Launching post-reboot barracuda up-${_tRee} system\" >> \"${_logAds}\"\n  _msg \"to complete the first stage of major OS upgrade\" >> \"${_logAds}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n  wait\n  _msg \"The post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAds}\"\n  echo \" \" >> \"${_logAds}\"\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ -e \"/root/.chimaera_to_daedalus_major_os_upgrade.info\" ] \\\n    || [ -e \"/root/.bookworm_to_daedalus_major_os_upgrade.info\" ]; then\n    _FURTHER_UPGRADE=NO\n  else\n    _FURTHER_UPGRADE=YES\n  fi\n  if [ \"${_FURTHER_UPGRADE}\" = \"NO\" ]; then\n    if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n      _msg \"The single stage major OS upgrade completed\" >> \"${_logAds}\"\n      _msg \"The system will reboot now for a final upgrade\" >> \"${_logAds}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-daedalus-two.cnf\n      update-grub >> \"${_logAds}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\n  if [ \"${_OS_CODE}\" = \"bullseye\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BULLSEYE_TO_CHIMAERA=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BEOWULF_TO_CHIMAERA=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"bookworm\" ] && [ 
\"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BOOKWORM_TO_DAEDALUS=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_CHIMAERA_TO_DAEDALUS=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n  fi\n  if [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    _msg \"The second stage of major OS upgrade will start now\" >> \"${_logAds}\"\n    [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n    /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n    wait\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      _msg \"The second stage of major OS upgrade completed\" >> \"${_logAds}\"\n      _msg \"The system will reboot now\" >> \"${_logAds}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-daedalus-two.cnf\n      update-grub >> \"${_logAds}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\nfi\n\nif [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-daedalus-one.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-daedalus-two.cnf\" ]; then\n  echo \" \" >> \"${_logAds}\"\n  touch /root/.auto-upgraded-to-daedalus.cnf\n  _msg \"Launching the final post-second-reboot barracuda up-${_tRee} system\" >> \"${_logAds}\"\n  [ ! -e \"/root/.allow.apparmor.cnf\" ] && touch /root/.allow.apparmor.cnf\n  [ ! 
-e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && touch /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n  wait\n  [ -e \"/root/.run-to-daedalus.cnf\" ] && rm -f /root/.run-to-daedalus.cnf\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-daedalus-one.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-daedalus-one.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-daedalus-two.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-daedalus-two.cnf\n  sed -i \"s/^_AUTO_DAEDALUS.*//g\" /root/.barracuda.cnf\n  _msg \"The final post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAds}\"\n  _msg \"That's all folks!\" >> \"${_logAds}\"\n  _msg \"Bye!\" >> \"${_logAds}\"\n  echo \" \" >> \"${_logAds}\"\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/autoexcalibur",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Automatic BOA System Major Upgrade Tool\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: Launch auto-upgrade properly   ###\n###----------------------------------------###\n###\n###  Start with manual barracuda upgrade.\n###\n###    $ barracuda up-lts system\n###\n###  !!! CREATE A FRESH VM BACKUP SNAPSHOT !!!\n###  !!! TEST THE FRESHLY CREATED BACKUP.. !!!\n###  !!! BY USING IT TO CREATE NEW TEST VM !!!\n###  !!! DO NOT CONTINUE UNTIL IT WORKS... !!!\n###\n###  Reboot the server to make sure there are\n###  no issues with boot process.\n###\n###    $ shutdown -r now\n###\n###  If reboot worked and there are no issues,\n###  you are ready for the automated magic...\n###\n###    $ touch /root/.run-to-excalibur.cnf\n###    $ service clean-boa-env start\n###\n###  Once enabled, the system will launch\n###  a series of barracuda upgrades/reboots\n###  until it migrates any supported Debian\n###  or Devuan version to Devuan Excalibur.\n###\n###  !!! 
WARNING !!!\n###\n###  EXPECT IT TO CRASH COMPLETELY, SO ONLY\n###  FULL RESTORE FROM LATEST BACKUP SNAPSHOT\n###  OF ENTIRE VM WILL BRING IT BACK TO LIFE.\n###\n###  DO NOT PROCEED UNTIL YOU ARE READY FOR\n###  DISASTER RECOVERY FROM TESTED BACKUP!\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_barCnf=\"/root/.barracuda.cnf\"\n_logAds=\"/root/.autoexcalibur.log\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_aptAllow=\"--allow-unauthenticated\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n_OS_CODE=check\n#\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\ncd /root/\n\nif [ \"${_tRee}\" = \"dev\" ]; then\n  touch /root/.debug-boa-installer.cnf\n  touch /root/.debug-octopus-installer.cnf\nfi\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"AutoExcalibur v.${_tRee} [$(date +%T)] ==> $*\"\n}\n\n_check_manufacturer_compatibility() {\n  # Install dmidecode if not present\n  if ! command -v dmidecode &> /dev/null; then\n    /usr/bin/apt-get update --allow-insecure-repositories &> /dev/null\n    ${_INITINS} dmidecode &> /dev/null\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    _msg \"Not supported environment detected: ${_HOST_INFO}\" >> \"${_logAds}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAds}\"\n    _msg \"Bye!\" >> \"${_logAds}\"\n    echo \"Not supported environment detected: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    _msg \"Mysterious environment: ${_HOST_INFO}\" >> \"${_logAds}\"\n    _msg \"Please check https://bit.ly/boa-caveats\" >> \"${_logAds}\"\n    _msg \"Bye!\" >> \"${_logAds}\"\n    echo \"Mysterious environment: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    exit 1\n  fi\n}\n_check_manufacturer_compatibility\n\n_check_mysql_compatibility() {\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! 
-z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  else\n    _DB_V=UNSUPPORTED\n  fi\n  if [ \"${_DB_V}\" = \"UNSUPPORTED\" ]; then\n    _msg \"Not supported DB server detected ${_DB_SERVER_TEST}\" >> \"${_logAds}\"\n    exit 1\n  fi\n}\n_check_mysql_compatibility\n\n###\n### Faster reboot\n###\n_faster_reboot() {\n  _msg \"Faster reboot prepare...\" >> \"${_logAds}\"\n  service cron stop &> /dev/null\n  killall cron &> /dev/null\n  pkill -9 -f second.sh\n  pkill -9 -f minute.sh\n  pkill -9 -f runner.sh\n  _msg \"Cron has been stopped\" >> \"${_logAds}\"\n  _msg \"Now waiting 60 seconds for any running tasks to complete\" >> \"${_logAds}\"\n  sleep 55\n  if [ ! -e \"/root/.allow.clamav.cnf\" ] || [ -e \"/root/.deny.clamav.cnf\" ]; then\n    if [ -e \"/etc/init.d/clamav-daemon\" ]; then\n      update-rc.d -f clamav-daemon remove &> /dev/null\n    fi\n    if [ -e \"/etc/init.d/clamav-freshclam\" ]; then\n      update-rc.d -f clamav-freshclam remove &> /dev/null\n    fi\n  fi\n  pkill -9 -f avahi-daemon\n  pkill -9 -f clamd\n  pkill -9 -f freshclam\n  pkill -9 -f java\n  rm -f /run/clamav/*\n  _msg \"Java/Solr/Clamav have been stopped\" >> \"${_logAds}\"\n  service nginx stop &> /dev/null\n  killall nginx &> /dev/null\n  killall php &> /dev/null\n  pkill -9 -f php-fpm\n  _msg \"Nginx, PHP-CLI and PHP-FPM have been stopped\" >> \"${_logAds}\"\n  csf -df &> /dev/null\n  csf -tf &> /dev/null\n  _msg \"Firewall has been purged\" >> \"${_logAds}\"\n  if [ -e \"/root/.my.pass.txt\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ] && [ ! 
-z \"${_SQL_PSWD}\" ]; then\n      _msg \"Preparing MySQLD for quick shutdown...\" >> \"${_logAds}\"\n      mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n      fi\n      mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n      _msg \"Stopping MySQLD now...\" >> \"${_logAds}\"\n      service mysql stop &> /dev/null\n      wait\n      _msg \"MySQLD stopped\" >> \"${_logAds}\"\n    else\n      _msg \"MySQLD already stopped\" >> \"${_logAds}\"\n    fi\n  fi\n  _msg \"Faster reboot done\" >> \"${_logAds}\"\n}\n\nif [ ! -e \"/root/.run-to-excalibur.cnf\" ]; then\n  echo \"ERROR: /root/.run-to-excalibur.cnf is required!\"\n  exit 1\nfi\n\n_check_os_compatibility() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    && [ ! -e \"/root/.run-auto-major-os-reboot-excalibur-one.cnf\" ] \\\n    && [ ! 
-e \"/root/.run-auto-major-os-reboot-excalibur-two.cnf\" ]; then\n    echo \"This server already runs ${_OS_DIST}/${_OS_CODE}\"\n    echo \"Bye!\"\n    exit 1\n  fi\n  if [ \"${_OS_CODE}\" = \"trixie\" ]; then\n    _NEXT_OS_CODE=excalibur\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n    _NEXT_OS_CODE=excalibur\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _NEXT_OS_CODE=daedalus\n  elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _NEXT_OS_CODE=daedalus\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _NEXT_OS_CODE=chimaera\n  elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _NEXT_OS_CODE=chimaera\n  else\n    if [ ! -e \"/root/.run-auto-major-os-reboot-excalibur-one.cnf\" ] \\\n      && [ ! -e \"/root/.run-auto-major-os-reboot-excalibur-two.cnf\" ]; then\n      echo \"This procedure does not support ${_OS_DIST}/${_OS_CODE}\"\n      echo \"The minimum supported system is Debian/bullseye\"\n      echo \"The maximum supported system is Debian/trixie or Devuan/daedalus\"\n      echo \"Bye!\"\n      exit 1\n    fi\n  fi\n}\n_check_os_compatibility\n\nif [ -x \"/opt/local/bin/killer\" ]; then\n  sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n  echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\nfi\n\n_if_remove_cloud_utils() {\n  _INITD_TEST=$(ls -la /etc/init.d/*cloud* 2>&1)\n  _aptAkamaiLst=\"/etc/apt/sources.list.d/akamai-linux-team.list\"\n  _aptCloudInit=\"/etc/apt/preferences.d/99linode-cloudinit\"\n  _etcCloudCfgd=\"/etc/cloud/cloud.cfg.d\"\n  if [[ ! 
\"${_INITD_TEST}\" =~ \"No such file\" ]] \\\n    || [ -e \"${_aptAkamaiLst}\" ] \\\n    || [ -e \"${_aptCloudInit}\" ] \\\n    || [ -e \"${_etcCloudCfgd}\" ]; then\n    _msg \"Removing problematic cloud-utils detected on this system\" >> \"${_logAds}\"\n    /usr/bin/apt-get update --allow-insecure-repositories 2> /dev/null\n    /usr/bin/apt-get remove cloud-utils cloud-init -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get remove cloud-image-utils -y --purge --auto-remove -qq 2> /dev/null\n    /usr/bin/apt-get autoremove --purge -y 2> /dev/null\n    /usr/bin/apt-get autoclean -y 2> /dev/null\n    [ -e \"${_etcCloudCfgd}\" ] && mv -f /etc/cloud /var/backups/\n    [ -e \"${_aptAkamaiLst}\" ] && mv -f ${_aptAkamaiLst} /var/backups/\n    [ -e \"${_aptCloudInit}\" ] && mv -f ${_aptCloudInit} /var/backups/\n  fi\n}\nif [ \"${_VMFAMILY}\" != \"AWS\" ]; then\n  [ -e \"/root/.mode.selected.full.cnf\" ] && _if_remove_cloud_utils\nfi\n\n_if_clean_boa_env() {\n  if [ ! -x \"/etc/init.d/clean-boa-env\" ] \\\n    || [ ! 
-e \"/root/.run-auto-update-clean-boa-env.cnf\" ]; then\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      mv -f /etc/init.d/clean-boa-env /var/backups/clean-boa-env-bak\n    fi\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlHmr=\"http://files.aegir.cc/versions/${_tRee}/boa/aegir\"\n    curl ${_crlGet} \"${_urlHmr}/conf/var/clean-boa-env\" -o /etc/init.d/clean-boa-env\n    if [ -e \"/etc/init.d/clean-boa-env\" ]; then\n      chmod 700 /etc/init.d/clean-boa-env\n      chown root:root /etc/init.d/clean-boa-env\n      update-rc.d clean-boa-env defaults &> /dev/null\n      touch /root/.run-auto-update-clean-boa-env.cnf\n    else\n      if [ -e \"/var/backups/clean-boa-env-bak\" ]; then\n        mv -f /var/backups/clean-boa-env-bak /etc/init.d/clean-boa-env\n      fi\n    fi\n  fi\n}\n_if_clean_boa_env\n\n_if_fix_dhcp() {\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _DHCP_LOG=\"/var/log/daemon.log\"\n  else\n    _DHCP_LOG=\"/var/log/syslog\"\n  fi\n  if [ -e \"${_DHCP_LOG}\" ]; then\n    if [ `tail --lines=3 ${_DHCP_LOG} \\\n      | grep --count \"dhclient.*Failed\"` -gt 0 ]; then\n      sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n      wait\n      sed -i \"/^$/d\" /etc/csf/csf.allow\n      _DHCP_TEST=$(grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq 2>&1)\n      if [[ \"${_DHCP_TEST}\" =~ \"port\" ]]; then\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f12 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      else\n        for _IP in `grep DHCPREQUEST ${_DHCP_LOG} | cut -d ' ' -f13 | sort | uniq`;do echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow;done\n      fi\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n    fi\n  fi\n}\n\nif [ -e \"/root/.run-to-excalibur.cnf\" 
]; then\n  echo \" \" >> \"${_logAds}\"\n  if [ -e \"/root/.run-auto-major-os-reboot-excalibur-one.cnf\" ] \\\n    || [ -e \"/root/.run-auto-major-os-reboot-excalibur-two.cnf\" ]; then\n    _msg \"Waiting 30 seconds for the system start scripts to finish\" >> \"${_logAds}\"\n    sleep 30\n    _if_fix_dhcp\n  else\n    _msg \"Automatic BOA System Major Upgrade Tool welcomes you aboard!\" >> \"${_logAds}\"\n    sleep 3\n    _if_fix_dhcp\n  fi\nfi\n\n_AUTO_EXCALIBUR_TEST=$(grep _AUTO_EXCALIBUR ${_barCnf} 2>&1)\nif [[ ! \"${_AUTO_EXCALIBUR_TEST}\" =~ \"_AUTO_EXCALIBUR\" ]]; then\n  echo \"_AUTO_EXCALIBUR=YES\" >> ${_barCnf}\nfi\n\nif [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-excalibur-one.cnf\" ] \\\n  && [ ! -e \"/root/.run-auto-major-os-reboot-excalibur-two.cnf\" ]; then\n  echo \" \" >> \"${_logAds}\"\n  _msg \"Running barracuda php-idle disable to speed up upgrades\" >> \"${_logAds}\"\n  barracuda php-idle disable >> \"${_logAds}\"\n  wait\n  _msg \"The barracuda php-idle disable completed\" >> \"${_logAds}\"\n  _msg \"Launching standard barracuda up-${_tRee} system now\" >> \"${_logAds}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n  wait\n  _msg \"The standard barracuda up-${_tRee} system completed\" >> \"${_logAds}\"\n  if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n    _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n    _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n    if [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n      echo \"_BULLSEYE_TO_CHIMAERA=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n      echo \"_BEOWULF_TO_CHIMAERA=YES\" >> ${_barCnf}\n      _msg \"Launching major 
upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n      echo \"_BOOKWORM_TO_DAEDALUS=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n      echo \"_CHIMAERA_TO_DAEDALUS=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"trixie\" ]; then\n      echo \"_TRIXIE_TO_EXCALIBUR=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/excalibur\" >> \"${_logAds}\"\n    elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n      echo \"_DAEDALUS_TO_EXCALIBUR=YES\" >> ${_barCnf}\n      _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/excalibur\" >> \"${_logAds}\"\n    fi\n    _msg \"The first stage of major OS upgrade will start now\" >> \"${_logAds}\"\n    [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n    /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n    wait\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      _msg \"The first stage of major OS upgrade completed\" >> \"${_logAds}\"\n      _msg \"The system will reboot now\" >> \"${_logAds}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-excalibur-one.cnf\n      update-grub >> \"${_logAds}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\nfi\n\nif [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-excalibur-one.cnf\" ] \\\n  && [ ! 
-e \"/root/.run-auto-major-os-reboot-excalibur-two.cnf\" ]; then\n  echo \" \" >> \"${_logAds}\"\n  _msg \"Launching post-reboot barracuda up-${_tRee} system\" >> \"${_logAds}\"\n  _msg \"to complete the first stage of major OS upgrade\" >> \"${_logAds}\"\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n  wait\n  _msg \"The post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAds}\"\n  echo \" \" >> \"${_logAds}\"\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ -e \"/root/.daedalus_to_excalibur_major_os_upgrade.info\" ] \\\n    || [ -e \"/root/.trixie_to_excalibur_major_os_upgrade.info\" ]; then\n    _FURTHER_UPGRADE=NO\n  else\n    _FURTHER_UPGRADE=YES\n  fi\n  if [ \"${_FURTHER_UPGRADE}\" = \"NO\" ]; then\n    if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n      _msg \"The single stage major OS upgrade completed\" >> \"${_logAds}\"\n      _msg \"The system will reboot now for a final upgrade\" >> \"${_logAds}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-excalibur-two.cnf\n      update-grub >> \"${_logAds}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\n  if [ \"${_OS_CODE}\" = \"bullseye\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BULLSEYE_TO_CHIMAERA=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BEOWULF_TO_CHIMAERA=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/chimaera\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"bookworm\" ] && [ 
\"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_BOOKWORM_TO_DAEDALUS=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_CHIMAERA_TO_DAEDALUS=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/daedalus\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"trixie\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_TRIXIE_TO_EXCALIBUR=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/excalibur\" >> \"${_logAds}\"\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ] && [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    echo \"_DAEDALUS_TO_EXCALIBUR=YES\" >> ${_barCnf}\n    _msg \"Launching major upgrade from ${_OS_DIST}/${_OS_CODE} to Devuan/excalibur\" >> \"${_logAds}\"\n  fi\n  if [ \"${_FURTHER_UPGRADE}\" = \"YES\" ]; then\n    _msg \"The second stage of major OS upgrade will start now\" >> \"${_logAds}\"\n    [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n    /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n    wait\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      _msg \"The second stage of major OS upgrade completed\" >> \"${_logAds}\"\n      _msg \"The system will reboot now\" >> \"${_logAds}\"\n      rm -f /root/.latest-barracuda-upgrade-finale.info\n      touch /root/.run-auto-major-os-reboot-excalibur-two.cnf\n      update-grub >> \"${_logAds}\"\n      _faster_reboot\n      shutdown -r now\n      wait\n      exit 0\n    fi\n  fi\nfi\n\nif [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-excalibur-one.cnf\" ] \\\n  && [ -e \"/root/.run-auto-major-os-reboot-excalibur-two.cnf\" ]; then\n  echo \" \" >> \"${_logAds}\"\n  touch /root/.auto-upgraded-to-excalibur.cnf\n 
 _msg \"Launching the final post-second-reboot barracuda up-${_tRee} system\" >> \"${_logAds}\"\n  [ ! -e \"/root/.allow.apparmor.cnf\" ] && touch /root/.allow.apparmor.cnf\n  [ ! -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && touch /root/.force.rebuild.src.on.auto.now.cnf\n  /opt/local/bin/barracuda up-${_tRee} system noscreen >> \"${_logAds}\"\n  wait\n  [ -e \"/root/.run-to-excalibur.cnf\" ] && rm -f /root/.run-to-excalibur.cnf\n  [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ] && rm -f /root/.force.rebuild.src.on.auto.now.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-excalibur-one.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-excalibur-one.cnf\n  [ -e \"/root/.run-auto-major-os-reboot-excalibur-two.cnf\" ] && rm -f /root/.run-auto-major-os-reboot-excalibur-two.cnf\n  sed -i \"s/^_AUTO_EXCALIBUR.*//g\" /root/.barracuda.cnf\n  _msg \"The final post-reboot barracuda up-${_tRee} system completed\" >> \"${_logAds}\"\n  _msg \"That's all folks!\" >> \"${_logAds}\"\n  _msg \"Bye!\" >> \"${_logAds}\"\n  echo \" \" >> \"${_logAds}\"\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/autoinit",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Automatic BOA System AUTO-INIT Tool\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: Launch AUTO-INIT properly      ###\n###----------------------------------------###\n###\n###  Use clean minimal Debian OS based VPS.\n###\n###  Initialise the system before installing\n###  BOA to remove systemd and quickly\n###  upgrade to latest Devuan OS version.\n###\n###   $ wget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n###   $ autoinit\n###\n###  Once started, the autoinit will launch\n###  a series of upgrades and reboots until\n###  you get a basic latest system installed\n###  to be able to run standard BOA install.\n###\n###  The script logs its actions in the files\n###  you can examine later:\n###\n###   $ cat /root/.autoinit.log\n###\n###  There's also a very verbose extra log:\n###\n###   $ cat /root/.autoinit-verbose.log\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport 
PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n###\n### Uninstall cloud-utils if not required\n###\n_if_remove_cloud_utils() {\n  _INITD_TEST=$(ls -la /etc/init.d/*cloud* 2>&1)\n  _etcCloudCfgd=\"/etc/cloud/cloud.cfg.d\"\n  if [[ ! \"${_INITD_TEST}\" =~ \"No such file\" ]] \\\n    || [ -e \"${_etcCloudCfgd}\" ]; then\n    _msg \"[init] rcld ==> Removing problematic cloud-utils detected on this system...\" >> \"${_logInt}\"\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    /usr/bin/apt-get remove cloud-utils cloud-init -y -qq >> \"${_logSlt}\"\n    /usr/bin/apt-get remove cloud-image-utils -y -qq >> \"${_logSlt}\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    [ -e \"${_etcCloudCfgd}\" ] && mv -f /etc/cloud /var/backups/ >> \"${_logSlt}\"\n  fi\n}\n\n# Resolve vmnetfix path (autoinit historically used /opt/local/bin/vmnetfix).\n# Prefer locally patched versions first.\n_VMNETFIX_BIN=\"vmnetfix\"\n_resolve_vmnetfix_bin() {\n  if [ -x \"/usr/local/sbin/vmnetfix\" ]; then\n    _VMNETFIX_BIN=\"/usr/local/sbin/vmnetfix\"\n  elif [ -x \"/usr/local/bin/vmnetfix\" ]; then\n    _VMNETFIX_BIN=\"/usr/local/bin/vmnetfix\"\n  elif [ -x \"/opt/local/bin/vmnetfix\" ]; then\n    _VMNETFIX_BIN=\"/opt/local/bin/vmnetfix\"\n  elif command -v vmnetfix >/dev/null 2>&1; then\n    _VMNETFIX_BIN=\"$(command -v vmnetfix)\"\n  else\n    _VMNETFIX_BIN=\"vmnetfix\"\n  fi\n}\n_resolve_vmnetfix_bin\n\n###\n### Ensure SysV init networking wiring for Devuan (post-systemd purge stage)\n###\n_init_sysv_net_repair() {\n  _msg \"[init] netr ==> Ensuring SysV init networking wiring...\" >> \"${_logInt}\"\n  /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n  /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n\n  # Install SysV core bits (safe after systemd-sysv is removed/purged).\n  ${_INITINS} initscripts sysv-rc insserv startpar -qq >> \"${_logSlt}\"\n  /usr/bin/apt-get 
-f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n  /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n\n  # Remove dhcpcd to avoid DHCP race with ifupdown/dhclient (seen on some vendor images).\n  if dpkg -l | awk '{print $1\" \"$2}' | grep -q \"^ii dhcpcd-base$\"; then\n    _msg \"[init] netr ==> Purging dhcpcd-base to avoid DHCP race...\" >> \"${_logInt}\"\n    /usr/bin/apt-get purge dhcpcd-base -y -qq >> \"${_logSlt}\" || true\n    /usr/bin/apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\" || true\n    /usr/bin/dpkg --configure -a >> \"${_logSlt}\" || true\n  fi\n\n  # Ensure networking init script is enabled for SysV boots.\n  if [ -x \"/etc/init.d/networking\" ] && command -v update-rc.d >/dev/null 2>&1; then\n    update-rc.d networking defaults >> \"${_logSlt}\" 2>/dev/null || true\n  fi\n\n  # Refresh rc links if insserv is present.\n  if command -v insserv >/dev/null 2>&1; then\n    insserv -v networking >> \"${_logSlt}\" 2>/dev/null || insserv -v >> \"${_logSlt}\" 2>/dev/null || true\n  fi\n}\n\n###\n### Repair SysV insserv dependency wiring after apt/dpkg churn (Debian->Devuan migrations)\n### Some vendor images may trigger insserv early, before initscripts/cloud-init init scripts\n### are in place, leaving rc links incomplete.\n###\n_init_sysv_insserv_repair() {\n  _msg \"[init] neti ==> Repairing SysV insserv deps and rc links...\" >> \"${_logInt}\"\n  /usr/bin/dpkg --configure -a >> \"${_logSlt}\" || true\n\n  # insserv expects 'mountkernfs' while initscripts may ship mountkernfs.sh\n  if [ -e \"/etc/init.d/mountkernfs.sh\" ] && [ ! 
-e \"/etc/init.d/mountkernfs\" ]; then\n    ln -sf mountkernfs.sh /etc/init.d/mountkernfs\n  fi\n  [ -e \"/etc/init.d/mountkernfs\" ] && chmod 0755 /etc/init.d/mountkernfs 2>/dev/null || true\n  [ -e \"/etc/init.d/urandom\" ] && chmod 0755 /etc/init.d/urandom 2>/dev/null || true\n\n  # If cloud-config exists but cloud-init is missing, create a minimal stub so insserv can proceed.\n  if [ -x \"/etc/init.d/cloud-config\" ] && [ ! -e \"/etc/init.d/cloud-init\" ]; then\n    cat > /etc/init.d/cloud-init <<'EOF'\n#!/bin/sh\n### BEGIN INIT INFO\n# Provides:          cloud-init\n# Required-Start:\n# Required-Stop:\n# Default-Start:\n# Default-Stop:\n# Short-Description: Stub for insserv during Debian->Devuan migration (autoinit)\n### END INIT INFO\nexit 0\nEOF\n    chmod 0755 /etc/init.d/cloud-init\n  fi\n\n  # Rebuild SysV rc links.\n  if command -v insserv >/dev/null 2>&1; then\n    insserv -v >> \"${_logSlt}\" 2>/dev/null || true\n  fi\n  if [ -x \"/etc/init.d/networking\" ] && command -v update-rc.d >/dev/null 2>&1; then\n    update-rc.d networking defaults >> \"${_logSlt}\" 2>/dev/null || true\n  fi\n}\n\n###\n### Install minimal boot failsafe for console-less VPSes (no APT, no systemd)\n###\n_init_vmnetfix_failsafe_install() {\n  _msg \"[init] netf ==> Installing vmnetfix failsafe boot helper...\" >> \"${_logInt}\"\n  cat > /etc/init.d/vmnetfix-failsafe <<'EOF'\n#!/bin/sh\n### BEGIN INIT INFO\n# Provides:          vmnetfix-failsafe\n# Required-Start:    $local_fs $remote_fs $syslog\n# Required-Stop:\n# Default-Start:     S\n# Default-Stop:\n# Short-Description: Minimal network bring-up failsafe for console-less VPS\n### END INIT INFO\n\nPATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin\nLOG=/var/log/vmnetfix-failsafe.log\n\ncase \"$1\" in\n  start|restart|force-reload)\n    {\n      echo \"[$(date)] vmnetfix-failsafe: start\"\n      if ip -4 route show default 2>/dev/null | grep -q '^default '; then\n        echo \"[$(date)] vmnetfix-failsafe: default 
route exists; nothing to do\"\n        exit 0\n      fi\n\n      IFACE=\"$(ip -o link show up 2>/dev/null | awk -F': ' '$2!=\"lo\" {print $2; exit}')\"\n      if [ -z \"$IFACE\" ]; then\n        IFACE=\"$(ip -o link show 2>/dev/null | awk -F': ' '$2!=\"lo\" {print $2; exit}')\"\n      fi\n      if [ -z \"$IFACE\" ]; then\n        echo \"[$(date)] vmnetfix-failsafe: no interface found\"\n        exit 0\n      fi\n\n      echo \"[$(date)] vmnetfix-failsafe: using $IFACE\"\n\n      if [ ! -e /etc/network/interfaces ]; then\n        echo \"[$(date)] vmnetfix-failsafe: /etc/network/interfaces missing; writing minimal dhcp config\"\n        cat > /etc/network/interfaces <<EOT\nauto lo\niface lo inet loopback\n\nauto $IFACE\nallow-hotplug $IFACE\niface $IFACE inet dhcp\nEOT\n      fi\n\n      # Detect interface rename: if the interface named in /etc/network/interfaces\n      # does not exist as a kernel interface but IFACE does, update the config.\n      # This handles vendors (e.g. IONOS) where systemd-udev provided predictable\n      # naming (ens6, enp3s0, etc.) that reverts to eth0 after systemd is purged.\n      # Fully vendor-agnostic: we check physical existence, not naming conventions.\n      CFG_IF=\"$(grep -m1 '^auto ' /etc/network/interfaces 2>/dev/null | awk '{print $2}' | grep -v '^lo$')\"\n      if [ -n \"$CFG_IF\" ] && [ \"$CFG_IF\" != \"$IFACE\" ] && [ ! 
-e \"/sys/class/net/$CFG_IF\" ]; then\n        echo \"[$(date)] vmnetfix-failsafe: interface mismatch — config has $CFG_IF but kernel has $IFACE; updating /etc/network/interfaces\"\n        sed -i \"s/$CFG_IF/$IFACE/g\" /etc/network/interfaces\n        echo \"[$(date)] vmnetfix-failsafe: renamed $CFG_IF -> $IFACE in /etc/network/interfaces\"\n      fi\n\n      if command -v ifup >/dev/null 2>&1; then\n        ifup -v \"$IFACE\" 2>&1 || true\n      elif command -v dhclient >/dev/null 2>&1; then\n        dhclient -v \"$IFACE\" 2>&1 || true\n      fi\n\n      ip -4 addr show dev \"$IFACE\" 2>/dev/null || true\n      ip -4 route show default 2>/dev/null || true\n      echo \"[$(date)] vmnetfix-failsafe: done\"\n    } >> \"$LOG\" 2>&1\n    ;;\n  stop)\n    exit 0\n    ;;\n  *)\n    echo \"Usage: $0 {start|stop|restart|force-reload}\" >&2\n    exit 1\n    ;;\nesac\n\nexit 0\nEOF\n  chmod 0755 /etc/init.d/vmnetfix-failsafe\n  mkdir -p /var/log\n  if [ -d /etc/rcS.d ]; then\n    ln -sf ../init.d/vmnetfix-failsafe /etc/rcS.d/S08vmnetfix-failsafe\n  fi\n}\n\n\n###\n### Pin the current primary interface MAC -> eth0 via udev rule so that\n### after eudev replaces systemd-udev (which provided predictable naming),\n### the interface keeps a known stable name across reboots.\n### Also ensures /etc/network/interfaces references that stable name.\n### Must be called AFTER vmnetfix has written /etc/network/interfaces.\n###\n_init_pin_iface_name() {\n  _LIVE_IF=\"$(ip -o link show up 2>/dev/null | awk -F': ' '$2!=\"lo\" {print $2; exit}')\"\n  if [ -z \"${_LIVE_IF}\" ]; then\n    _msg \"[init] pini ==> No live interface found; skipping interface name pinning.\" >> \"${_logInt}\"\n    return 0\n  fi\n\n  _LIVE_MAC=\"$(ip link show dev \"${_LIVE_IF}\" 2>/dev/null | awk '/link\\/ether/{print $2}')\"\n  if [ -z \"${_LIVE_MAC}\" ]; then\n    _msg \"[init] pini ==> No MAC found for ${_LIVE_IF}; skipping interface name pinning.\" >> \"${_logInt}\"\n    return 0\n  fi\n\n  
_PIN_NAME=\"eth0\"\n  _UDEV_RULE_FILE=\"/etc/udev/rules.d/70-persistent-net.rules\"\n\n  _msg \"[init] pini ==> Pinning ${_LIVE_IF} (MAC ${_LIVE_MAC}) -> ${_PIN_NAME} via udev rule...\" >> \"${_logInt}\"\n  mkdir -p /etc/udev/rules.d\n  cat > \"${_UDEV_RULE_FILE}\" <<EOF\n# Pinned by autoinit: map MAC ${_LIVE_MAC} to ${_PIN_NAME}\n# This ensures a stable interface name after eudev replaces systemd-udev\n# (which provided predictable naming like ens6/enp3s0).\nSUBSYSTEM==\"net\", ACTION==\"add\", ATTR{address}==\"${_LIVE_MAC}\", NAME=\"${_PIN_NAME}\"\nEOF\n  _msg \"[init] pini ==> Written ${_UDEV_RULE_FILE}\" >> \"${_logInt}\"\n\n  # Ensure /etc/network/interfaces references the pinned name.\n  # Three cases:\n  # 1) File has a non-lo stanza with old name -> rename it.\n  # 2) File exists but has only lo -> append the pinned stanza.\n  # 3) File missing entirely -> write from scratch.\n  _CFG_IF=\"$(grep '^auto ' /etc/network/interfaces 2>/dev/null | awk '{print $2}' | grep -v '^lo$' | head -n1)\"\n  if [ -n \"${_CFG_IF}\" ] && [ \"${_CFG_IF}\" != \"${_PIN_NAME}\" ]; then\n    sed -i \"s/${_CFG_IF}/${_PIN_NAME}/g\" /etc/network/interfaces\n    _msg \"[init] pini ==> Updated /etc/network/interfaces: ${_CFG_IF} -> ${_PIN_NAME}\" >> \"${_logInt}\"\n  elif [ -z \"${_CFG_IF}\" ] && [ -f \"/etc/network/interfaces\" ]; then\n    cat >> /etc/network/interfaces <<EOF\n\nauto ${_PIN_NAME}\nallow-hotplug ${_PIN_NAME}\niface ${_PIN_NAME} inet dhcp\n  dns-nameservers 1.1.1.1 8.8.8.8 9.9.9.9\nEOF\n    _msg \"[init] pini ==> Appended ${_PIN_NAME} dhcp stanza to /etc/network/interfaces\" >> \"${_logInt}\"\n  elif [ ! 
-f \"/etc/network/interfaces\" ]; then\n    cat > /etc/network/interfaces <<EOF\nauto lo\niface lo inet loopback\n\nauto ${_PIN_NAME}\nallow-hotplug ${_PIN_NAME}\niface ${_PIN_NAME} inet dhcp\n  dns-nameservers 1.1.1.1 8.8.8.8 9.9.9.9\nEOF\n    _msg \"[init] pini ==> Wrote /etc/network/interfaces from scratch for ${_PIN_NAME}\" >> \"${_logInt}\"\n  fi\n}\n\n###\n### Variables\n###\n_initFile=\"/root/.init-to-devuan-ctrl.cnf\"\n_barCnf=\"/root/.barracuda.cnf\"\n_logInt=\"/root/.autoinit.log\"\n_logSlt=\"/root/.autoinit-verbose.log\"\n_aptAllow=\"--allow-unauthenticated\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n###\n### The lock is always cleared even if apt bails\n###\ntrap 'rm -f /run/autoinit.pid' EXIT\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"AutoInit v.${_tRee} [$(date +%T)] ==> $*\"\n}\n\n###\n### Only root allowed\n###\n_init_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  else\n    _msg \"[init] root ==> ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    _msg \"[init] root ==> ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    _msg \"[init] root ==> ERROR: We can not proceed until it is below 90/100\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    sed -i \"s/.*killer.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    exit 1\n  fi\n}\n\n###\n### Atomic unlock to prevent TOCTOU race\n###\n_single_instance_unlock() {\n  _FD=\"$1\"; _PATH=\"$2\"\n  if command -v flock >/dev/null 2>&1; then\n    flock -u \"${_FD}\" 2>/dev/null || true\n    eval \"exec ${_FD}>&-\"\n    rm -f \"${_PATH}\" 2>/dev/null || true\n  else\n    rm -rf \"${_PATH}\" 2>/dev/null || true\n  fi\n}\n\n###\n### Atomic lock to prevent TOCTOU race\n###\n_single_instance_lock() {\n  # Ensure not too many instances are running\n  # usage: _single_instance_lock [lockfile_path] [fd]\n  # default lock: /run/<script>.lock (falls back to /tmp)\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  _LOCK_FD=\"${2:-9}\"\n  if [ -n \"${1:-}\" ]; then\n    _LOCK_PATH=\"$1\"\n  else\n    _DIR=\"/run\"; [ -w \"$_DIR\" ] || _DIR=\"/tmp\"\n    _LOCK_PATH=\"${_DIR}/${_SELF_NAME%.sh}.lock\"\n  fi\n\n  if command -v flock >/dev/null 2>&1; then\n    eval \"exec ${_LOCK_FD}>\\\"${_LOCK_PATH}\\\"\"\n    if ! flock -n \"${_LOCK_FD}\"; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    printf '%s\\n' \"$$\" 1>&\"${_LOCK_FD}\" 2>/dev/null || true   # optional: PID note\n    trap \"_single_instance_unlock ${_LOCK_FD} '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  else\n    # mkdir is atomic; directory presence == lock held\n    if ! mkdir \"${_LOCK_PATH}\" 2>/dev/null; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    echo \"$$\" > \"${_LOCK_PATH}/pid\" 2>/dev/null || true\n    trap \"rm -rf '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  fi\n}\n\n###\n### Mode selection\n###\n_init_mode() {\n  # Usage: autoinit [minimal|full] [vanilla|classic|predictable]\n  # Default: \"minimal\". 
Persisted across reboots via cron argument.\n  if [ -z \"${_SET_MODE}\" ]; then\n    case \"${1:-minimal}\" in\n      full)      _SET_MODE=\"full\" ;;\n      *)         _SET_MODE=\"minimal\" ;;\n    esac\n  fi\n  if [ ! -e \"/root/.mode.selected.${_SET_MODE}.cnf\" ]; then\n    touch /root/.mode.selected.${_SET_MODE}.cnf\n    chattr +i /root/.mode.selected.${_SET_MODE}.cnf\n  fi\n  _msg \"[init] mode ==> Mode selected: ${_SET_MODE}\" >> \"${_logInt}\"\n}\n\n###\n### Networking Interface Naming Convention\n###\n_init_ninc() {\n  # Usage: autoinit [minimal|full] [vanilla|classic|predictable]\n  # Default: \"vanilla\". Persisted across reboots via cron argument.\n\n  # Enforce classic interface naming (eth0) for network stability on NAT/cloud VMs\n  # This prevents ens3/eth0 confusion that can break networking after reboot\n  if [ ! -e \"/root/.ninc.selected.classic.cnf\" ]; then\n    # Check if we're on a NAT/cloud setup (all IPs are private)\n    _HAS_PRIVATE_ONLY=\"YES\"\n    for _TEST_IP in $(ip -4 addr show | grep -oP '(?<=inet\\s)\\d+(\\.\\d+){3}' | grep -v '^127\\.'); do\n      case \"${_TEST_IP}\" in\n        10.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*|192.168.*) ;;\n        *) _HAS_PRIVATE_ONLY=\"NO\"; break ;;\n      esac\n    done\n\n    # On NAT/cloud setups, enforce classic naming to prevent interface name changes\n    if [ \"${_HAS_PRIVATE_ONLY}\" = \"YES\" ]; then\n      _msg \"[init] prep ==> NAT/cloud setup detected; enforcing classic eth0 naming...\" >> \"${_logInt}\"\n      touch /root/.ninc.selected.classic.cnf\n      chattr +i /root/.ninc.selected.classic.cnf 2>/dev/null || true\n      if [ -e \"/root/.ignore.ifnames.cnf\" ]; then\n        chattr -i /root/.ignore.ifnames.cnf 2>/dev/null || true\n        rm -f /root/.ignore.ifnames.cnf 2>/dev/null || true\n      fi\n    fi\n  fi\n\n  if [ -e \"/root/.ninc.selected.classic.cnf\" ]; then\n    _SET_NINC=\"classic\"\n  elif [ -e \"/root/.ninc.selected.predictable.cnf\" ]; then\n    
_SET_NINC=\"predictable\"\n  elif [ -e \"/root/.ninc.selected.vanilla.cnf\" ]; then\n    _SET_NINC=\"vanilla\"\n  elif [ -e \"/root/.ninc.selected.auto.cnf\" ]; then\n    _SET_NINC=\"auto\"\n  fi\n\n  if [ -z \"${_SET_NINC}\" ]; then\n    case \"${2:-vanilla}\" in\n      auto)        _SET_NINC=\"auto\" ;;\n      classic)     _SET_NINC=\"classic\" ;;\n      predictable) _SET_NINC=\"predictable\" ;;\n      *)           _SET_NINC=\"vanilla\" ;;\n    esac\n  fi\n  if [ ! -e \"/root/.ninc.selected.${_SET_NINC}.cnf\" ]; then\n    touch /root/.ninc.selected.${_SET_NINC}.cnf 2>/dev/null || true\n    chattr +i /root/.ninc.selected.${_SET_NINC}.cnf 2>/dev/null || true\n  fi\n  _msg \"[init] ninc ==> NINC selected: ${_SET_NINC}\" >> \"${_logInt}\"\n}\n\n###\n### Check Manufacturer Compatibility\n###\n_init_mnfr() {\n  # Install dmidecode if not present\n  if ! command -v dmidecode &> /dev/null; then\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    ${_INITINS} dmidecode >> \"${_logSlt}\"\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    _msg \"[init] mnfr ==> Not supported environment detected: ${_HOST_INFO}\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Bye!\" >> \"${_logInt}\"\n    echo \"[init] mnfr ==> Not supported environment detected: ${_HOST_INFO}\"\n    echo \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\"\n    echo \"[init] mnfr ==> Bye!\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    sed -i \"s/.*killer.*//gi\" 
/etc/crontab >> \"${_logSlt}\"\n    exit 1\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    _msg \"[init] mnfr ==> Mysterious environment: ${_HOST_INFO}\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Bye!\" >> \"${_logInt}\"\n    echo \"[init] mnfr ==> Mysterious environment: ${_HOST_INFO}\"\n    echo \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\"\n    echo \"[init] mnfr ==> Bye!\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    sed -i \"s/.*killer.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    exit 1\n  fi\n}\n\n###\n### Check Ægir\n###\n_init_aegir() {\n  if [ -e \"/var/aegir\" ]; then\n    echo\n    echo \"[init] aegir ==> ERROR: This script can not be used once BOA is installed\"\n    echo\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    exit 1\n  fi\n}\n\n###\n### Check Systemd\n###\n_init_no_systemd() {\n  if [ -e \"/lib/systemd/systemd\" ]; then\n    echo \" \" >> \"${_logInt}\"\n    _msg \"[init] nosd ==> OOPS: Systemd still not removed cleanly\" >> \"${_logInt}\"\n    echo \" \" >> \"${_logInt}\"\n  fi\n}\n\n###\n### Fix locales\n###\n_init_locales() {\n  _isLoc=\"$(which locale)\"\n  if [ ! -x \"${_isLoc}\" ] || [ -z \"${_isLoc}\" ]; then\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    ${_INITINS} locales >> \"${_logSlt}\"\n    [ \"${_SET_MODE}\" = \"full\" ] && ${_INITINS} locales-all >> \"${_logSlt}\"\n  fi\n  _LOC_TEST=$(locale 2>&1)\n  if [[ \"${_LOC_TEST}\" =~ LANG=.*UTF-8 ]]; then\n    _LOCALE_TEST=OK\n  fi\n  if [[ \"${_LOC_TEST}\" =~ \"Cannot\" ]]; then\n    _LOCALE_TEST=BROKEN\n  fi\n  if [ \"${_LOCALE_TEST}\" = \"BROKEN\" ]; then\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n    if [[ ! 
\"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen >> \"${_logSlt}\"\n    locale-gen en_US.UTF-8 >> \"${_logSlt}\"\n    # Explicitly enforce all locale settings\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_TIME=en_US.UTF-8 \\\n      LC_MONETARY=en_US.UTF-8 \\\n      LC_MESSAGES=en_US.UTF-8 \\\n      LC_PAPER=en_US.UTF-8 \\\n      LC_NAME=en_US.UTF-8 \\\n      LC_ADDRESS=en_US.UTF-8 \\\n      LC_TELEPHONE=en_US.UTF-8 \\\n      LC_MEASUREMENT=en_US.UTF-8 \\\n      LC_IDENTIFICATION=en_US.UTF-8 \\\n      LC_ALL= >> \"${_logSlt}\"\n    # Define all locale settings on the fly to prevent unnecessary\n    # warnings during installation of packages. Note that export\n    # produces no output, so no log redirection is needed here.\n    export LANG=en_US.UTF-8\n    export LC_CTYPE=en_US.UTF-8\n    export LC_COLLATE=POSIX\n    export LC_NUMERIC=POSIX\n    export LC_TIME=en_US.UTF-8\n    export LC_MONETARY=en_US.UTF-8\n    export LC_MESSAGES=en_US.UTF-8\n    export LC_PAPER=en_US.UTF-8\n    export LC_NAME=en_US.UTF-8\n    export LC_ADDRESS=en_US.UTF-8\n    export LC_TELEPHONE=en_US.UTF-8\n    export LC_MEASUREMENT=en_US.UTF-8\n    export LC_IDENTIFICATION=en_US.UTF-8\n    export LC_ALL=\n  else\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n    if [[ ! 
\"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen >> \"${_logSlt}\"\n    locale-gen en_US.UTF-8 >> \"${_logSlt}\"\n    # Explicitly enforce locale settings required for consistency\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_ALL= >> \"${_logSlt}\"\n    # Define locale settings required for consistency also on the fly\n    # (export produces no output, so no log redirection is needed here)\n    export LC_COLLATE=POSIX\n    export LC_NUMERIC=POSIX\n    export LC_ALL=\n  fi\n  _LOCALES_BASHRC_TEST=$(grep LC_COLLATE /root/.bashrc 2>&1)\n  if [[ ! \"${_LOCALES_BASHRC_TEST}\" =~ \"LC_COLLATE\" ]]; then\n    printf \"\\n\" >> /root/.bashrc\n    echo \"export LANG=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_CTYPE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_COLLATE=POSIX\" >> /root/.bashrc\n    echo \"export LC_NUMERIC=POSIX\" >> /root/.bashrc\n    echo \"export LC_TIME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MONETARY=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MESSAGES=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_PAPER=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_NAME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ADDRESS=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_TELEPHONE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MEASUREMENT=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_IDENTIFICATION=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ALL=\" >> /root/.bashrc\n    printf \"\\n\" >> /root/.bashrc\n  fi\n}\n\n###\n### Check cron\n###\n_init_cron() {\n  _isCrn=\"$(which cron)\"\n  if [ ! 
-x \"${_isCrn}\" ] || [ -z \"${_isCrn}\" ]; then\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    ${_INITINS} cron >> \"${_logSlt}\"\n  fi\n}\n\n###\n### Should we launch boa install?\n###\n_if_launch_boa_install() {\n  _BOA_LOGFILE=\"/root/.boa.install.command.cnf\"\n  if [[ -s \"${_BOA_LOGFILE}\" ]]; then\n    _BOA_COMMAND=$(cat \"${_BOA_LOGFILE}\")\n    _BOA_COMMAND=$(echo \"${_BOA_COMMAND}\" | sed -E \"s/noscreen//g\")\n    if [[ \"${_BOA_COMMAND}\" =~ \" silent\" ]] \\\n      || [[ \"${_BOA_COMMAND}\" =~ \" system\" ]]; then\n      _BOA_INSTALL_COMMAND=\"${_BOA_COMMAND}\"\n    else\n      _BOA_INSTALL_COMMAND=\"${_BOA_COMMAND} silent\"\n    fi\n    if [[ \"${_BOA_COMMAND}\" =~ \"boa in-lts\" ]] \\\n      || [[ \"${_BOA_COMMAND}\" =~ \"boa in-dev\" ]]; then\n      _msg \"_if_launch_boa_install ==> Time for ${_BOA_INSTALL_COMMAND} noscreen\" >> \"${_logInt}\"\n      eval \"${_BOA_INSTALL_COMMAND} noscreen\"\n      wait\n      [ -d \"/var/backups\" ] || mkdir -p /var/backups\n      mv -f ${_BOA_LOGFILE} /var/backups/\n    fi\n  fi\n}\n\n###\n### Return 0 if package is installed, 1 otherwise.\n###\n_pkg_installed() {\n  /usr/bin/dpkg-query -W -f='${Status}' \"$1\" 2>/dev/null | grep -qx 'install ok installed'\n}\n\n###\n### Turn off AppArmor\n###\n_turn_off_apparmor() {\n\n  # Fix shell in AppArmor scripts (only do this if needed)\n  grep -q \"/usr/bin/sh\" /lib/apparmor/apparmor.systemd && \\\n    sed -i \"s#/usr/bin/sh#/bin/bash#g\" /lib/apparmor/apparmor.systemd\n  grep -q \"/bin/sh\" /lib/apparmor/apparmor.systemd && \\\n    sed -i \"s#/bin/sh#/bin/bash#g\" /lib/apparmor/apparmor.systemd\n\n  grep -q \"/usr/bin/sh\" /lib/apparmor/rc.apparmor.functions && \\\n    sed -i \"s#/usr/bin/sh#/bin/bash#g\" /lib/apparmor/rc.apparmor.functions\n  grep -q \"/bin/sh\" /lib/apparmor/rc.apparmor.functions && \\\n    sed -i \"s#/bin/sh#/bin/bash#g\" /lib/apparmor/rc.apparmor.functions\n\n  
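# The _pkg_installed() helper above relies on dpkg-query reporting the exact
# status line "install ok installed" for a fully installed package. A minimal,
# self-contained sketch of that pattern follows; the sample status strings
# stand in for live dpkg-query output, and _status_installed is an
# illustrative name, not part of this script.

```shell
#!/bin/bash
# Stand-in for: dpkg-query -W -f='${Status}' "$pkg"
# The -x flag makes grep match the whole line, so half-configured or
# removed-but-not-purged states are correctly rejected.
_status_installed() {
  echo "$1" | grep -qx 'install ok installed'
}

_status_installed "install ok installed"       && echo "cron: installed"
_status_installed "deinstall ok config-files"  || echo "stale-pkg: not installed"
_status_installed "install ok half-configured" || echo "broken-pkg: not installed"
```

# Without -x, a plain substring match would wrongly accept the
# "install ok half-configured" state as installed.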
grep -q \"/usr/bin/sh\" /lib/apparmor/profile-load && \\\n    sed -i \"s#/usr/bin/sh#/bin/bash#g\" /lib/apparmor/profile-load\n  grep -q \"/bin/sh\" /lib/apparmor/profile-load && \\\n    sed -i \"s#/bin/sh#/bin/bash#g\" /lib/apparmor/profile-load\n\n  # Disable AppArmor and Auditd\n  if [ -e \"/etc/apparmor.d\" ] && [ -e \"/var/cache/apparmor\" ]; then\n    rm -rf /var/cache/apparmor/* >> \"${_logSlt}\"\n    rm -f /etc/apparmor.d/*~ >> \"${_logSlt}\"\n    apparmor_parser -r /etc/apparmor.d/* >> \"${_logSlt}\"\n    rm -f /etc/apparmor.d/*~ >> \"${_logSlt}\"\n    aa-complain /etc/apparmor.d/* >> \"${_logSlt}\"\n  fi\n  if [ -x \"/etc/init.d/apparmor\" ]; then\n    service apparmor stop >> \"${_logSlt}\"\n    update-rc.d -f apparmor remove >> \"${_logSlt}\"\n    service auditd stop >> \"${_logSlt}\"\n    update-rc.d -f auditd remove >> \"${_logSlt}\"\n  fi\n  if [ -x \"/usr/sbin/aa-teardown\" ]; then\n    aa-teardown >> \"${_logSlt}\"\n  fi\n  for _PKG in auditd apparmor apparmor-utils apparmor-notify apparmor-profiles apparmor-profiles-extra; do\n    if _pkg_installed \"${_PKG}\"; then\n      /usr/bin/apt-get remove ${_PKG} -y -qq >> \"${_logSlt}\"\n    fi\n  done\n  _isAppArmorGrubFile=\"/etc/default/grub.d/apparmor.cfg\"\n  mkdir -p /etc/default/grub.d\n  echo 'GRUB_CMDLINE_LINUX_DEFAULT=\"$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0\"' | tee ${_isAppArmorGrubFile} >> \"${_logSlt}\"\n  update-grub >> \"${_logSlt}\"\n  [ ! -e \"/root/.disable.apparmor.cnf\" ] && touch /root/.disable.apparmor.cnf\n}\n\n###\n### Only disable AppArmor if it’s active (profiles currently loaded)\n###\n_if_off_apparmor() {\n  if [ ! 
-e \"/root/.keep_apparmor_on.cnf\" ]; then\n    _isAppArmOn=N\n    if [ -e \"/sys/module/apparmor/parameters/enabled\" ]; then\n      _isAppArmOn=\"$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null | tr -d '\\n')\"\n    fi\n    if [ \"${_isAppArmOn}\" = \"Y\" ]; then\n      _turn_off_apparmor\n    fi\n  fi\n}\n\n###\n### Grub and Ifnames\n###\n_ifnames_grub() {\n\n  _USE_NINC=NO\n  _NEW_GRUB=DEMO\n\n  if [ -e \"/root/.ignore.ifnames.cnf\" ] || [ -e \"/root/.ninc.grub.updated.cnf\" ]; then\n    return 1  # Exit the function but continue the script\n  else\n    [ -e \"/root/.ninc.selected.predictable.cnf\" ] && _USE_NINC=predictable && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.classic.cnf\" ]     && _USE_NINC=classic     && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.auto.cnf\" ]        && _USE_NINC=auto        && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.vanilla.cnf\" ]     && _USE_NINC=vanilla\n  fi\n\n  if [ \"${_USE_NINC}\" = \"vanilla\" ] || [ \"${_USE_NINC}\" = \"NO\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n\n  _IS_IFACE=$(ip a 2>&1)\n  _ADD_GRUB_CMD=\"\"\n  _GRUB_FILE=\"/etc/default/grub\"\n\n  if [ -e \"${_GRUB_FILE}\" ]; then\n    if echo \"${_IS_IFACE}\" | grep -qE \"eth[0-9]+\"; then\n      _msg \"_ifnames_grub ==> GRUB: Classic ethX interface naming found.\" >> \"${_logInt}\"\n    elif echo \"${_IS_IFACE}\" | grep -qE \"(ens|enp|eno|wlp|wlo)[0-9]+:\"; then\n      _msg \"_ifnames_grub ==> GRUB: Predictable (ensX, enpX, enoX, wlpX, wloX) interface naming found.\" >> \"${_logInt}\"\n    else\n      _msg \"_ifnames_grub ==> GRUB: config exists, but no recognized network interface naming found.\" >> \"${_logInt}\"\n    fi\n\n    # Extract the current GRUB_CMDLINE_LINUX line\n    _GRUB_CMDLINE_LINUX=$(grep -E \"^GRUB_CMDLINE_LINUX=\" \"${_GRUB_FILE}\")\n    _msg \"_ifnames_grub ==> GRUB: Current config is ${_GRUB_CMDLINE_LINUX}\" >> \"${_logInt}\"\n\n    # Initialize variables to check for existing 
options\n    _SYS_NET_IFNAMES=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"net.ifnames=[01]\")\n    _SYS_BIOSDEVNAME=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"biosdevname=[01]\")\n    _SYS_MEMHP_STATE=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"memhp_default_state=online\")\n\n    # Function to append option to _ADD_GRUB_CMD\n    _append_grub_cmd_option() {\n      _option=\"$1\"\n      if [[ -z \"${_ADD_GRUB_CMD}\" ]]; then\n        _ADD_GRUB_CMD=\"${_option}\"\n      else\n        _ADD_GRUB_CMD=\"${_ADD_GRUB_CMD} ${_option}\"\n      fi\n    }\n\n    # Always add memhp_default_state=online\n    _append_grub_cmd_option \"memhp_default_state=online\"\n\n    if [[ \"${_USE_NINC}\" == \"classic\" ]]; then\n      # Always set net.ifnames=0 and biosdevname=0\n      _append_grub_cmd_option \"net.ifnames=0\"\n      _append_grub_cmd_option \"biosdevname=0\"\n    elif [[ \"${_USE_NINC}\" == \"predictable\" ]]; then\n      # Always set net.ifnames=1 and biosdevname=1\n      _append_grub_cmd_option \"net.ifnames=1\"\n      _append_grub_cmd_option \"biosdevname=1\"\n    fi\n\n    if [[ -n \"${_ADD_GRUB_CMD}\" ]]; then\n      # Backup the GRUB file\n      cp \"${_GRUB_FILE}\" \"${_GRUB_FILE}.bak\"\n\n      # Remove existing options from GRUB_CMDLINE_LINUX\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_GRUB_CMDLINE_LINUX}\" | sed -E \"s/(net.ifnames=[01]|biosdevname=[01]|memhp_default_state=online)//g\")\n\n      # Clean up extra spaces and trailing spaces before the closing quote\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | tr -s ' ' | sed -E 's/\\s*\"$/\"/')\n\n      # Extract current kernel parameters\n      _CURRENT_CMDLINE=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | sed -E 's/^GRUB_CMDLINE_LINUX=\"(.*)\"$/\\1/')\n\n      # Append new options\n      _UPDATED_CMDLINE=\"${_CURRENT_CMDLINE} ${_ADD_GRUB_CMD}\"\n      _UPDATED_CMDLINE=$(echo \"${_UPDATED_CMDLINE}\" | sed 's/^ *//;s/ *$//')\n\n      # Form the new GRUB_CMDLINE_LINUX line\n      
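# The strip-then-append steps in _ifnames_grub can be exercised against a
# sample GRUB_CMDLINE_LINUX line without touching /etc/default/grub; the
# sample value and the option set below are illustrative only.

```shell
#!/bin/bash
# Sample line as it might appear in /etc/default/grub.
_LINE='GRUB_CMDLINE_LINUX="quiet net.ifnames=1 biosdevname=1"'
_ADD="memhp_default_state=online net.ifnames=0 biosdevname=0"
# 1) Strip stale copies of the managed options, squeeze spaces, tidy the quote.
_CLEAN=$(echo "${_LINE}" | sed -E "s/(net.ifnames=[01]|biosdevname=[01]|memhp_default_state=online)//g" | tr -s ' ' | sed -E 's/\s*"$/"/')
# 2) Extract the bare parameter string, append the new options, trim edges.
_PARAMS=$(echo "${_CLEAN}" | sed -E 's/^GRUB_CMDLINE_LINUX="(.*)"$/\1/')
_PARAMS=$(echo "${_PARAMS} ${_ADD}" | sed 's/^ *//;s/ *$//' | tr -s ' ')
# 3) Re-form the line; on a real system this is written back with sed -i.
echo "GRUB_CMDLINE_LINUX=\"${_PARAMS}\""
```

# Stripping before appending keeps repeated runs idempotent: the managed
# options never accumulate duplicates on the kernel command line.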
_NEW_GRUB_CMDLINE_LINUX=\"GRUB_CMDLINE_LINUX=\\\"${_UPDATED_CMDLINE}\\\"\"\n\n      echo \" \" >> \"${_logInt}\"\n      if [[ \"${_NEW_GRUB}\" == \"LIVE\" ]]; then\n        # Update the GRUB file\n        _msg \"_ifnames_grub ==> GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: Update in the LIVE MODE\" >> \"${_logInt}\"\n        sed -i \"s|^GRUB_CMDLINE_LINUX=.*|${_NEW_GRUB_CMDLINE_LINUX}|\" \"${_GRUB_FILE}\"\n        _msg \"_ifnames_grub ==> GRUB_CMDLINE_LINUX has been updated with ${_UPDATED_CMDLINE}\" >> \"${_logInt}\"\n        [ -e \"/root/.ninc.grub.updated.cnf\" ] || touch /root/.ninc.grub.updated.cnf\n      elif [[ \"${_NEW_GRUB}\" == \"DEMO\" ]]; then\n        # Demo info\n        _msg \"_ifnames_grub ==> GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: Update in the DEMO MODE\" >> \"${_logInt}\"\n        echo \" \" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB_CMDLINE_LINUX would be updated with:\" >> \"${_logInt}\"\n        _msg \"   ${_UPDATED_CMDLINE}\" >> \"${_logInt}\"\n        echo \" \" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: Update in the LIVE MODE requires presence of control file:\" >> \"${_logInt}\"\n        _msg \"   /root/.ninc.selected.auto.cnf\" >> \"${_logInt}\"\n        echo \" \" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: Note that this extra control file must not exist:\" >> \"${_logInt}\"\n        _msg \"   /root/.ignore.ifnames.cnf\" >> \"${_logInt}\"\n        echo \" \" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: This requirement serves as a double-check to confirm\" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: that you are aware of and agree to auto-update GRUB configuration.\" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: Incorrect GRUB settings can render your virtual machine unbootable\" >> \"${_logInt}\"\n     
   _msg \"_ifnames_grub ==> GRUB: necessitating a rescue operation using a CD-ROM or ISO image.\" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: For this reason, running BOA directly on physical hardware (bare metal) is not supported\" >> \"${_logInt}\"\n        echo \" \" >> \"${_logInt}\"\n        _msg \"_ifnames_grub ==> GRUB: NEVER USE LIVE MODE IF YOU ARE NOT SURE IF YOU NEED IT\" >> \"${_logInt}\"\n      fi\n      echo \" \" >> \"${_logInt}\"\n    fi\n  else\n    _msg \"_ifnames_grub ==> GRUB config does not exist.\" >> \"${_logInt}\"\n  fi\n}\n\n###\n### Prefer Devuan APT sources\n###\n_prefer_devuan_repositories() {\n  # Prefer Devuan; force base-files from Devuan (handles lower version vs Debian).\n  mkdir -p /etc/apt/preferences.d\n  cat >/etc/apt/preferences.d/99-prefer-devuan <<'EOF'\nPackage: *\nPin: release o=Devuan\nPin-Priority: 700\n\nPackage: base-files\nPin: release o=Devuan\nPin-Priority: 1001\nEOF\n  /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n  /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n}\n\n###\n### Prepare system\n###\n_init_prepare() {\n  if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n  if [ ! -e \"/opt/apt/apt.conf.noi.dist\" ] \\\n    || [ ! 
-e \"/opt/apt/apt.conf.noi.nrml\" ]; then\n    mkdir -p /opt/apt\n    echo \"APT::Get::Assume-Yes \\\"true\\\";\" > /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Show-Upgraded \\\"true\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Install-Recommends \\\"false\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Install-Suggests \\\"false\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Quiet \\\"true\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"DPkg::Options {\\\"--force-confnew\\\";\\\"--force-confmiss\\\";};\" >> /opt/apt/apt.conf.noi.dist\n    echo \"DPkg::Pre-Install-Pkgs {\\\"/usr/sbin/dpkg-preconfigure --apt\\\";};\" >> /opt/apt/apt.conf.noi.dist\n    echo \"Dir::Etc::SourceList \\\"/etc/apt/sources.list\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Assume-Yes \\\"true\\\";\" > /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Get::Show-Upgraded \\\"true\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Get::Install-Recommends \\\"false\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Get::Install-Suggests \\\"false\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Quiet \\\"true\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"DPkg::Options {\\\"--force-confdef\\\";\\\"--force-confmiss\\\";\\\"--force-confold\\\";};\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"DPkg::Pre-Install-Pkgs {\\\"/usr/sbin/dpkg-preconfigure --apt\\\";};\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"Dir::Etc::SourceList \\\"/etc/apt/sources.list\\\";\" >> /opt/apt/apt.conf.noi.nrml\n  fi\n\n  ###\n  ### Grub and Ifnames\n  ###\n  _ifnames_grub\n\n  if [[ -n \"${_ADD_GRUB_CMD}\" ]] && [[ \"${_NEW_GRUB}\" == \"LIVE\" ]]; then\n    touch /run/autoinit.pid\n    update-grub >> \"${_logSlt}\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    echo \"*/1 *   * * *   root    bash /opt/local/bin/autoinit ${_SET_MODE} ${_USE_NINC}\" >> /etc/crontab\n    if [ -x \"/opt/local/bin/killer\" ]; then\n      sed -i \"s/.*killer.*//gi\" 
/etc/crontab >> \"${_logSlt}\"\n      echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\n    fi\n    _msg \"[init] prep ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n    echo \" \" >> \"${_logInt}\"\n    if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n      service cron stop\n      sleep 90\n      rm -f /run/autoinit.pid\n    else\n      rm -f /run/autoinit.pid\n      shutdown -r now\n    fi\n    exit 0\n  else\n    if [ ! -x \"/usr/bin/mc\" ]; then\n      touch /run/autoinit.pid\n      _if_off_apparmor\n      if [ -e \"/root/.autoinit-early-upgrade.cnf\" ]; then\n        _msg \"[init] prep ==> Let's update existing system first...\" >> \"${_logInt}\"\n        if pgrep -f unattended-upgrades >/dev/null 2>&1; then\n          pkill -9 -f unattended-upgrades\n        fi\n        /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n        /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n        /usr/bin/apt-get upgrade ${_nrmUpArg} >> \"${_logSlt}\"\n        /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n        /usr/bin/apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n      fi\n      ${_INITINS} lsb-release >> \"${_logSlt}\"\n      _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n      _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n      _msg \"[init] prep ==> We need to install some tools early...\" >> \"${_logInt}\"\n      /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n      /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n      if _pkg_installed \"chrony\"; then\n        /usr/bin/apt-get remove chrony -y -qq >> \"${_logSlt}\"\n      fi\n      ${_INITINS} sudo screen mc aptitude bc cron curl >> \"${_logSlt}\"\n      ${_INITINS} dnsutils hostname net-tools netcat-traditional ntpsec-ntpdate >> \"${_logSlt}\"\n      ${_INITINS} sudo screen mc aptitude bc cron curl >> \"${_logSlt}\"\n      ${_INITINS} dnsutils 
hostname net-tools netcat-traditional ntpsec-ntpdate >> \"${_logSlt}\"\n      sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n      echo \"*/1 *   * * *   root    bash /opt/local/bin/autoinit ${_SET_MODE} ${_USE_NINC}\" >> /etc/crontab\n      if [ -x \"/opt/local/bin/killer\" ]; then\n        sed -i \"s/.*killer.*//gi\" /etc/crontab >> \"${_logSlt}\"\n        echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\n      fi\n      touch ${_initFile}\n      _msg \"[init] prep ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n      echo \" \" >> \"${_logInt}\"\n      if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n        service cron stop\n        sleep 90\n        rm -f /run/autoinit.pid\n      else\n        rm -f /run/autoinit.pid\n        shutdown -r now\n      fi\n      exit 0\n    fi\n  fi\n}\n\n###\n### Ensure Devuan base-files\n###\n_ensure_devuan_bf() {\n  # Ensure Devuan base-files is applied so /etc/os-release reflects Devuan/daedalus\n  _msg \"_ensure_devuan_bf ==> Ensuring Devuan base-files is applied (force-confnew)\" >> \"${_logInt}\"\n  _BF_CANDIDATE=$(apt-cache policy base-files 2>/dev/null | awk '/Candidate: /{print $2; exit}')\n  _APTOPTS=\"-o Dpkg::Options::=--force-confnew \\\n            -o Dpkg::Options::=--force-overwrite \\\n            -o Dpkg::Options::=--force-confdef\"\n  if [ ! 
-z \"${_BF_CANDIDATE}\" ]; then\n    /usr/bin/apt-get ${_aptAllow} -y ${_APTOPTS} install \\\n    base-files=\"${_BF_CANDIDATE}\" --allow-downgrades >> \"${_logSlt}\" 2>&1 || true\n  else\n    /usr/bin/apt-get ${_aptAllow} -y ${_APTOPTS} install \\\n    base-files --allow-downgrades >> \"${_logSlt}\" 2>&1 || true\n  fi\n  # If /etc/os-release still says Debian, try to refresh from the installed base-files\n  if grep -qi 'ID=debian' /etc/os-release 2>/dev/null; then\n    if [ -f /usr/lib/os-release ]; then\n      cp -f /usr/lib/os-release /etc/os-release || true\n    elif [ -f /usr/share/base-files/os-release ]; then\n      cp -f /usr/share/base-files/os-release /etc/os-release || true\n    fi\n  fi\n}\n\n\n# --- BEGIN: Devuan Excalibur systemd → sysvinit keep-SSH helper ----\n\n# Ensure anacron exists to satisfy logrotate alternatives when systemd is gone\n_ensure_cron_provider() {\n  if ! _pkg_installed \"cron\" && ! _pkg_installed \"anacron\"; then\n    _msg \"[init] Installing anacron as cron provider\" >> \"${_logInt}\"\n    ${_INITINS} anacron >> \"${_logSlt}\"\n  fi\n  /usr/bin/apt-mark manual anacron >> \"${_logSlt}\"\n  _init_cron\n}\n\n# Install elogind before touching libsystemd0 (Devuan way)\n_switch_to_elogind_stack() {\n  _msg \"[init] Installing elogind stack (libelogind0, libpam-elogind, compat)\" >> \"${_logInt}\"\n  ${_INITINS} libelogind0 libpam-elogind libelogind-compat >> \"${_logSlt}\"\n}\n\n# Build & install a minimal package that Provides: systemd (so openssh-server stays)\n_install_systemd_dummy() {\n\n  _msg \"[init] Creating minimal systemd-dummy provider\" >> \"${_logInt}\"\n  _DROOT=\"/root/.autoinit-systemd-dummy\"\n  mkdir -p \"${_DROOT}/DEBIAN\"\n  cat > \"${_DROOT}/DEBIAN/control\" <<'EOF'\nPackage: systemd-dummy\nVersion: 1.0\nSection: base\nPriority: optional\nArchitecture: all\nMaintainer: Omega8cc Autoinit <root@localhost>\nProvides: systemd\nDescription: Dummy package that provides 'systemd' virtual for sysvinit systems\nEOF\n\n 
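# The same equivs-style trick can be sketched stand-alone: a control file
# whose only meaningful field is "Provides: systemd" is enough to keep APT's
# resolver from dragging openssh-server away along with the real systemd.
# The scratch paths below are illustrative, and the dpkg-deb build step is
# guarded because the tool may be absent outside Debian-family systems.

```shell
#!/bin/bash
# Build the dummy package skeleton in a throwaway directory.
_DROOT="$(mktemp -d)/systemd-dummy"
mkdir -p "${_DROOT}/DEBIAN"
cat > "${_DROOT}/DEBIAN/control" <<'EOF'
Package: systemd-dummy
Version: 1.0
Section: base
Priority: optional
Architecture: all
Maintainer: Omega8cc Autoinit <root@localhost>
Provides: systemd
Description: Dummy package that provides 'systemd' virtual for sysvinit systems
EOF
# The virtual provide is what the dependency resolver actually consumes.
grep -q '^Provides: systemd$' "${_DROOT}/DEBIAN/control" && echo "control: ok"
# Only attempt the build where dpkg-deb exists.
if command -v dpkg-deb >/dev/null 2>&1; then
  dpkg-deb --build "${_DROOT}" >/dev/null 2>&1 || true
fi
```

# Architecture: all and an empty payload keep the package installable on any
# host; only the control metadata matters for satisfying the dependency.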
 /usr/bin/dpkg-deb --build \"${_DROOT}\" >> \"${_logSlt}\"\n  wait\n  ${_INITINS} \"${_DROOT}.deb\" >> \"${_logSlt}\"\n\n  # sanity check\n  if ! /usr/bin/dpkg-query -W -f='${Provides}\\n' systemd-dummy 2>/dev/null | grep 'systemd'; then\n    echo \"[init][ERROR] systemd-dummy failed to provide 'systemd'\" >> \"${_logInt}\"\n  fi\n}\n\n# Mark critical daemons as manual so metas/solver don't toss them\n_protect_daemons() {\n  _msg \"[init] Marking core daemons manual: openssh, cron\" >> \"${_logInt}\"\n  /usr/bin/apt-mark manual ssh openssh-server openssh-sftp-server task-ssh-server >> \"${_logSlt}\"\n  /usr/bin/apt-mark manual cron cron-daemon-common >> \"${_logSlt}\"\n}\n\n# Safety: ensure openssh-server present & configured now\n_ensure_openssh_server() {\n  if ! _pkg_installed \"cron\"; then\n    _msg \"[init] Installing cron\" >> \"${_logInt}\"\n    ${_INITINS} cron cron-daemon-common >> \"${_logSlt}\"\n  fi\n  if ! _pkg_installed \"opensysusers\"; then\n    _msg \"[init] Installing opensysusers\" >> \"${_logInt}\"\n    ${_INITINS} opensysusers >> \"${_logSlt}\"\n  fi\n  if ! _pkg_installed \"openssh-server\"; then\n    _msg \"[init] Installing openssh-server\" >> \"${_logInt}\"\n    ${_INITINS} ssh openssh-server openssh-sftp-server task-ssh-server >> \"${_logSlt}\"\n  fi\n  if ! 
_pkg_installed \"ssh\"; then\n    _msg \"[init] Installing ssh\" >> \"${_logInt}\"\n    ${_INITINS} ssh openssh-server openssh-sftp-server task-ssh-server >> \"${_logSlt}\"\n  fi\n  # (optional) make sure it is enabled under sysvinit\n  if [ -x \"/etc/init.d/ssh\" ]; then\n    service ssh status >/dev/null 2>&1 || service ssh start >> \"${_logSlt}\"\n  fi\n}\n\n# Before real removal, simulate and auto-heal if SSH would be removed.\n_purge_systemd_keep_ssh() {\n\n  _install_systemd_dummy\n\n  _msg \"[init] Simulating 'apt-get remove systemd'\" >> \"${_logInt}\"\n  if /usr/bin/apt-get -s remove systemd 2>/dev/null | grep -qE 'Remv (openssh-server|ssh)\\b'; then\n    _msg \"[init] APT would remove SSH even though systemd-dummy is installed\" >> \"${_logInt}\"\n    _msg \"[init] Re-simulating removal\" >> \"${_logInt}\"\n    if /usr/bin/apt-get -s remove systemd 2>/dev/null | grep -qE 'Remv (openssh-server|ssh)\\b'; then\n      _msg \"[init][ERROR] SSH still slated for removal even with systemd-dummy present\" >> \"${_logInt}\"\n    fi\n  else\n    _msg \"[init] Simulation of 'apt-get remove systemd' came out clean\" >> \"${_logInt}\"\n  fi\n\n  _msg \"[init] Purging systemd safely\" >> \"${_logInt}\"\n  /usr/bin/apt-get purge systemd libnss-systemd -y -qq >> \"${_logSlt}\"\n\n  # Some builds still leave libsystemd0 behind; elogind step should allow its removal.\n#   if _pkg_installed \"libsystemd0\"; then\n#     _msg \"[init] Removing libsystemd0 (elogind present)\" >> \"${_logInt}\"\n#     /usr/bin/apt-get remove libsystemd0 -y --purge --auto-remove -qq >> \"${_logSlt}\"\n#   fi\n\n  # Clean stragglers, but do not autoremove SSH/SFTP bits\n#   _msg \"[init] Autoremove harmless leftovers (kept minimal)\" >> \"${_logInt}\"\n#   /usr/bin/apt-get autoremove --purge -y -qq >> \"${_logSlt}\"\n#   /usr/bin/apt-get autoclean -y -qq >> \"${_logSlt}\"\n}\n\n# High-level orchestrator to call in the off-systemd phase\n_autoinit_devuan_protect_essential() {\n  _ensure_cron_provider\n#   
_switch_to_elogind_stack\n  _ensure_openssh_server\n  _protect_daemons\n  _purge_systemd_keep_ssh\n  _ensure_openssh_server\n  _msg \"[init] systemd purged while keeping openssh-server intact\" >> \"${_logInt}\"\n  return 0\n}\n\n# --- END: Devuan Excalibur systemd → sysvinit keep-SSH helper ----\n\n###\n### Sync base-files if needed\n###\n_init_base_sync() {\n  if [ -e \"/root/.top-daedalus.cnf\" ]; then\n    ${_INITINS} base-files/daedalus >> \"${_logSlt}\"\n  fi\n  if [ -e \"/root/.top-excalibur.cnf\" ]; then\n    ${_INITINS} base-files/excalibur >> \"${_logSlt}\"\n  fi\n  if [ -e \"/root/.top-daedalus.cnf\" ]; then\n    # Initialize the update flag to NO\n    _needsBaseFilesUpdate=NO\n    # Check if /etc/os-release mentions 'daedalus'\n    if grep -i 'daedalus' /etc/os-release &> /dev/null; then\n      _msg \"[init] base ==> The /etc/os-release already mentions daedalus\" >> \"${_logInt}\"\n    else\n      _msg \"[init] base ==> The /etc/os-release doesn't mention daedalus yet\" >> \"${_logInt}\"\n      _needsBaseFilesUpdate=YES\n    fi\n    # Check if apt policy for base-files mentions 'daedalus'\n    if apt policy base-files | grep -i 'daedalus' &> /dev/null; then\n      _msg \"[init] base ==> The apt policy base-files already mentions daedalus\" >> \"${_logInt}\"\n    else\n      _msg \"[init] base ==> The apt policy base-files doesn't mention daedalus yet\" >> \"${_logInt}\"\n      _needsBaseFilesUpdate=YES\n    fi\n    # If any of the above checks require an update, perform the upgrade\n    if [ \"${_needsBaseFilesUpdate}\" = \"YES\" ]; then\n      _msg \"[init] base ==> Upgrading base-files from Devuan repository (dynamic candidate)\" >> \"${_logInt}\"\n      _ensure_devuan_bf\n    fi\n  fi\n  if [ -e \"/root/.top-excalibur.cnf\" ]; then\n    # Initialize the update flag to NO\n    _needsBaseFilesUpdate=NO\n    # Check if /etc/os-release mentions 'excalibur'\n    if grep -i 'excalibur' /etc/os-release &> /dev/null; then\n      _msg \"[init] base ==> The 
/etc/os-release already mentions excalibur\" >> \"${_logInt}\"\n    else\n      _msg \"[init] base ==> The /etc/os-release doesn't mention excalibur yet\" >> \"${_logInt}\"\n      _needsBaseFilesUpdate=YES\n    fi\n    # Check if apt policy for base-files mentions 'excalibur'\n    if apt policy base-files | grep -i 'excalibur' &> /dev/null; then\n      _msg \"[init] base ==> The apt policy base-files already mentions excalibur\" >> \"${_logInt}\"\n    else\n      _msg \"[init] base ==> The apt policy base-files doesn't mention excalibur yet\" >> \"${_logInt}\"\n      _needsBaseFilesUpdate=YES\n    fi\n    # If any of the above checks require an update, perform the upgrade\n    if [ \"${_needsBaseFilesUpdate}\" = \"YES\" ]; then\n      _msg \"[init] base ==> Upgrading base-files from Devuan repository (dynamic candidate)\" >> \"${_logInt}\"\n      _ensure_devuan_bf\n    fi\n  fi\n}\n\n###\n### Final procedure to make the system ready\n###\n_init_post_ready() {\n  _msg \"[init] post ==> Final OS post-upgrade to ${_OS_DIST}/${_OS_CODE} procedure...\" >> \"${_logInt}\"\n  /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n  /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n  /usr/bin/apt-get upgrade ${_nrmUpArg} >> \"${_logSlt}\"\n  /usr/bin/apt-get dist-upgrade ${_dstUpArg} >> \"${_logSlt}\"\n  /usr/bin/apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n  /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n}\n\n###\n### Remove and block systemd\n###\n_init_offsystemd() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  touch /run/autoinit.pid\n  _msg \"[init] offd ==> Launching a quick dist-upgrade...\" >> \"${_logInt}\"\n  if [ ! 
-e \"/etc/apt/preferences.d/offsystemd\" ]; then\n    rm -f /etc/apt/preferences.d/systemd\n    echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n    echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n  fi\n  /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n  /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n  if [ -e \"/root/.top-excalibur.cnf\" ]; then\n    _msg \"[init] offd ==> Installing elogind on ${_OS_CODE} early\" >> \"${_logInt}\"\n    /usr/bin/apt-get -o Dpkg::Options::=\"--force-overwrite\" install elogind >> \"${_logSlt}\"\n  fi\n  /usr/bin/apt-get dist-upgrade ${_dstUpArg} >> \"${_logSlt}\"\n  [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n\n  _init_base_sync\n\n  if grep -qi 'ID=debian' /etc/os-release 2>/dev/null; then\n    _ensure_devuan_bf\n  fi\n  if apt policy base-files | grep -i 'debian' &> /dev/null; then\n    _ensure_devuan_bf\n  fi\n\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n\n  _msg \"[init] offd ==> Your system now runs on ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n  _msg \"[init] offd ==> Removing systemd on ${_OS_CODE}\" >> \"${_logInt}\"\n\n  # Prepare classic networking before systemd purge (important on console-less VPSes).\n  _msg \"[init] offd ==> Preparing classic networking with vmnetfix before systemd purge...\" >> \"${_logInt}\"\n  \"${_VMNETFIX_BIN}\" --mode full --disable-cloud-init >> \"${_logSlt}\"\n  wait\n\n  _autoinit_devuan_protect_essential\n\n  _msg \"[init] offd ==> Final cleanup for systemd on ${_OS_CODE}\" >> \"${_logInt}\"\n  /usr/bin/apt-get remove systemd-timesyncd -y -qq >> \"${_logSlt}\"\n  /usr/bin/apt-get remove systemd-dummy -y -qq >> \"${_logSlt}\"\n  /usr/bin/apt-get purge systemd libnss-systemd -y -qq >> \"${_logSlt}\"\n\n  # Ensure 
classic ifupdown networking and DNS after systemd purge.\n  _msg \"[init] offd ==> Ensuring classic networking with vmnetfix after systemd purge...\" >> \"${_logInt}\"\n  \"${_VMNETFIX_BIN}\" --mode full --disable-cloud-init >> \"${_logSlt}\"\n  wait\n  _msg \"[init] offd ==> Pinning interface name for post-systemd-purge boots...\" >> \"${_logInt}\"\n  _init_pin_iface_name\n\n  # Ensure SysV init will actually bring networking up on boot (Devuan goal).\n  _init_sysv_net_repair\n  _init_vmnetfix_failsafe_install\n\n  # Uninstall cloud-utils if not required\n  [ -e \"/root/.mode.selected.full.cnf\" ] && _if_remove_cloud_utils\n\n  _msg \"[init] offd ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n  echo \" \" >> \"${_logInt}\"\n  touch /root/.run-offsystemd-devuan-init.cnf\n  if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n    service cron stop\n    sleep 90\n    rm -f /run/autoinit.pid\n  else\n    rm -f /run/autoinit.pid\n    shutdown -r now\n  fi\n  exit 0\n}\n\n###\n### Find the best Devuan APT sources mirror\n###\n_find_fast_devuan_mirror() {\n  _ffDevuan=\"$(which ffdevuan)\"\n  if [ -x \"${_ffDevuan}\" ]; then\n    bash ${_ffDevuan} >> \"${_logSlt}\"\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/log/boa/devaun-fast-mirrors-list.txt\"\n    mkdir -p /var/log/boa/\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFLIST=$(grep \"merged\" ${_ffList} 2>&1)\n    fi\n    if [ ! -e \"${_ffList}\" ] || [[ ! \"${_BROKEN_FFLIST}\" =~ \"merged\" ]]; then\n      echo \"https://mirrors.dotsrc.org/devuan/merged\"  > ${_ffList}\n      echo \"https://mirror.akardam.net/devuan/merged\" >> ${_ffList}\n      echo \"https://mirror.hootsoftware.com/devuan/merged\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]] \\\n        || [[ ! 
\"${_BROKEN_FFLIST}\" =~ \"merged\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n      else\n        _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n      fi\n    else\n      _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n    fi\n  else\n    _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n  fi\n  echo \"${_USE_MIR}\"\n}\n\n###\n### Init cycle procedure\n###\n_init_cycle() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  touch /run/autoinit.pid\n  _msg \"[init] cycl ==> Your current system is ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n  _NEW_SYS=Devuan\n  _OLD_SYS=Devuan\n  if [ \"${_OS_CODE}\" = \"trixie\" ]; then\n    _NEW_OS_CODE=excalibur\n    _OLD_SYS=Debian\n    [ ! -e \"/root/.top-excalibur.cnf\" ] && touch /root/.top-excalibur.cnf\n  elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _NEW_OS_CODE=daedalus\n    _OLD_SYS=Debian\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _NEW_OS_CODE=daedalus\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _NEW_OS_CODE=chimaera\n    _OLD_SYS=Debian\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _NEW_OS_CODE=beowulf\n    _OLD_SYS=Debian\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _NEW_OS_CODE=chimaera\n    [ ! 
-e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ] || [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _NEW_OS_CODE=\n    _NEW_SYS=\n  else\n    _msg \"[init] cycl ==> This procedure does not support ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n    _msg \"[init] cycl ==> Bye!\" >> \"${_logInt}\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    sed -i \"s/.*killer.*//gi\" /etc/crontab >> \"${_logSlt}\"\n    echo \" \" >> \"${_logInt}\"\n    exit 1\n  fi\n  if [ \"${_OLD_SYS}\" = \"Debian\" ]; then\n    [ -e \"/etc/apt/apt.conf.d/20listchanges\" ] && mv -f /etc/apt/apt.conf.d/20listchanges /root/20listchanges.disabled 2>/dev/null || true\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    APT_LISTCHANGES_FRONTEND=none /usr/bin/apt-get full-upgrade ${_nrmUpArg} >> \"${_logSlt}\"\n    /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    [ -e \"/root/20listchanges.disabled\" ] && mv -f /root/20listchanges.disabled /etc/apt/apt.conf.d/20listchanges 2>/dev/null || true\n  fi\n  if [ \"${_OLD_SYS}\" = \"Devuan\" ]; then\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    /usr/bin/apt-get upgrade ${_nrmUpArg} >> \"${_logSlt}\"\n  fi\n  if [ ! -z \"${_NEW_OS_CODE}\" ] && [ ! 
-z \"${_NEW_SYS}\" ]; then\n    _msg \"[init] cycl ==> Update networking with vmnetfix early...\" >> \"${_logInt}\"\n    \"${_VMNETFIX_BIN}\" --mode full --disable-cloud-init >> \"${_logSlt}\"\n    wait\n    _msg \"[init] cycl ==> Running APT Cleanup...\" >> \"${_logInt}\"\n    /opt/local/bin/aptcleanup >> \"${_logSlt}\"\n    wait\n    _msg \"[init] cycl ==> Launching a quick upgrade to ${_NEW_SYS}/${_NEW_OS_CODE}\" >> \"${_logInt}\"\n    _aptLiSys=\"/etc/apt/sources.list\"\n    if [ \"${_NEW_OS_CODE}\" = \"beowulf\" ]; then\n      _TGT_MRR=\"http://archive.devuan.org/merged\"\n    else\n      _TGT_MRR=$(_find_fast_devuan_mirror)\n    fi\n    : \"${_TGT_MRR:=https://mirrors.dotsrc.org/devuan/merged}\"\n    _msg \"[init] cycl ==> Using Devuan mirror: ${_TGT_MRR}\" >> \"${_logInt}\"\n    echo \"## DEVUAN MAIN REPOSITORIES\" > ${_aptLiSys}\n    echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE} main\" >> ${_aptLiSys}\n    echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE} main\" >> ${_aptLiSys}\n    echo \"\" >> ${_aptLiSys}\n    if [ \"${_NEW_OS_CODE}\" != \"beowulf\" ]; then\n      echo \"## MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n      echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE}-updates main\" >> ${_aptLiSys}\n      echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE}-updates main\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n    fi\n    echo \"## DEVUAN SECURITY UPDATES\" >> ${_aptLiSys}\n    echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE}-security main\" >> ${_aptLiSys}\n    echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE}-security main\" >> ${_aptLiSys}\n    _prefer_devuan_repositories\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    if [ \"${_OLD_SYS}\" = \"Debian\" ]; then\n      ${_INITINS} devuan-keyring >> \"${_logSlt}\"\n    else\n      /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    fi\n    /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    
/usr/bin/apt-get upgrade ${_dstUpArg} >> \"${_logSlt}\"\n    if [ \"${_OLD_SYS}\" = \"Debian\" ]; then\n      /usr/bin/apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n      /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n      if [ \"${_NEW_OS_CODE}\" = \"excalibur\" ] || [ -e \"/root/.top-excalibur.cnf\" ]; then\n        ${_INITINS} eudev sysvinit-core systemd-sysv- --allow-remove-essential -qq >> \"${_logSlt}\"\n      else\n        ${_INITINS} eudev sysvinit-core -qq >> \"${_logSlt}\"\n      fi\n      /usr/bin/apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n      if [ \"${_NEW_OS_CODE}\" = \"excalibur\" ] || [ -e \"/root/.top-excalibur.cnf\" ]; then\n        ${_INITINS} eudev sysvinit-core systemd-sysv- --allow-remove-essential -qq >> \"${_logSlt}\"\n      else\n        ${_INITINS} eudev sysvinit-core -qq >> \"${_logSlt}\"\n      fi\n      /usr/bin/apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n      /usr/bin/dpkg --configure -a >> \"${_logSlt}\"\n    else\n      [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n      /usr/bin/apt-get dist-upgrade ${_dstUpArg} >> \"${_logSlt}\"\n      /usr/bin/apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n      [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n      /usr/bin/apt-get autoremove --purge -y -qq >> \"${_logSlt}\"\n      /usr/bin/apt-get autoclean -y -qq >> \"${_logSlt}\"\n    fi\n  fi\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ \"${_OLD_SYS}\" = \"Debian\" ]; then\n    # Some vendor images (e.g. 
Civo) can end up with incomplete SysV rc wiring\n    # because insserv runs before initscripts/cloud-init init scripts are in place.\n    # Repair it here while we still have working network access.\n    _msg \"[init] cycl ==> Repairing SysV init wiring (insserv) before reboot...\" >> \"${_logInt}\"\n    _init_sysv_insserv_repair\n\n    # Remove dhcpcd to avoid DHCP race with ifupdown/dhclient on next boots.\n    if dpkg -l | awk '{print $1\" \"$2}' | grep -q \"^ii dhcpcd-base$\"; then\n      _msg \"[init] cycl ==> Purging dhcpcd-base to avoid DHCP race...\" >> \"${_logInt}\"\n      /usr/bin/apt-get purge dhcpcd-base -y -qq >> \"${_logSlt}\" || true\n      /usr/bin/apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\" || true\n      /usr/bin/dpkg --configure -a >> \"${_logSlt}\" || true\n    fi\n\n    # Install minimal boot failsafe for console-less VPSes.\n    _init_vmnetfix_failsafe_install\n\n    _msg \"[init] cycl ==> Update networking with vmnetfix just in case before reboot...\" >> \"${_logInt}\"\n    \"${_VMNETFIX_BIN}\" --mode full --disable-cloud-init >> \"${_logSlt}\"\n    wait\n    _msg \"[init] cycl ==> Pinning interface name for post-eudev boots...\" >> \"${_logInt}\"\n    _init_pin_iface_name\n    _msg \"[init] cycl ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n    echo \" \" >> \"${_logInt}\"\n    touch /root/.run-cycle-devuan-init.cnf\n    if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n      service cron stop\n      sleep 90\n      rm -f /run/autoinit.pid\n    else\n      rm -f /run/autoinit.pid\n      shutdown -r now\n    fi\n    exit 0\n  else\n    if [ \"${_OS_CODE}\" = \"daedalus\" ] || [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      _init_base_sync\n      _init_post_ready\n      _init_no_systemd\n      _msg \"[init] cycl ==> Your system now runs on ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n      if [ ! 
-e \"/etc/network/interfaces\" ]; then\n        _msg \"[init] cycl ==> Update networking to fix missing /etc/network/interfaces...\" >> \"${_logInt}\"\n        \"${_VMNETFIX_BIN}\" --mode full --disable-cloud-init >> \"${_logSlt}\"\n        wait\n      fi\n      _msg \"[init] cycl ==> Running APT Cleanup...\" >> \"${_logInt}\"\n      /opt/local/bin/aptcleanup >> \"${_logSlt}\"\n      wait\n      _msg \"[init] cycl ==> The system is now ready for boa install\" >> \"${_logInt}\"\n      rm -f /root/.init-to-devuan-ctrl.cnf\n      sed -i \"s/.*autoinit.*//gi\" /etc/crontab >> \"${_logSlt}\"\n      _if_launch_boa_install\n    else\n      _msg \"[init] cycl ==> Your system now runs on ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n      if [ ! -e \"/etc/network/interfaces\" ]; then\n        _msg \"[init] cycl ==> Update networking to fix missing /etc/network/interfaces...\" >> \"${_logInt}\"\n        \"${_VMNETFIX_BIN}\" --mode full --disable-cloud-init >> \"${_logSlt}\"\n        wait\n      fi\n      _msg \"[init] cycl ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n    fi\n    echo \" \" >> \"${_logInt}\"\n    touch /root/.run-cycle-devuan-init.cnf\n    touch /root/.run-cycle-${_OS_CODE}-init.cnf\n    if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n      service cron stop\n      sleep 90\n      rm -f /run/autoinit.pid\n    else\n      rm -f /run/autoinit.pid\n      shutdown -r now\n    fi\n    exit 0\n  fi\n}\n\n###\n### Display welcome info\n###\n_init_info() {\n  echo \"[init] info ==> The system will reboot and continue several times...\"\n  sleep 3\n  echo \"[init] info ==> until it is upgraded to the latest Devuan version.\"\n  sleep 3\n  echo \"[init] info ==> Once all upgrades are complete, you can run boa install\"\n  sleep 3\n  echo \"[init] info ==> Let's go! It will take only ten minutes max!\"\n  sleep 3\n  echo \"...\"\n}\n\n###\n### Init info or prepare\n###\n_init_conf() {\n  if [ ! -e \"${_initFile}\" ]; then\n    if [ ! 
-x \"/usr/bin/mc\" ]; then\n      _init_mnfr\n      _init_info\n    fi\n    _init_prepare >> \"${_logSlt}\"\n  fi\n}\n\n###\n### Init start\n###\n_init_start() {\n  _init_conf\n  if [ -e \"${_initFile}\" ]; then\n    if [ -e \"/root/.run-cycle-devuan-init.cnf\" ]; then\n      if [ ! -e \"/root/.run-offsystemd-devuan-init.cnf\" ]; then\n        _init_offsystemd\n      fi\n    fi\n    _init_cycle\n  fi\n}\n\n###\n### Init launch\n###\n_init_launch() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  touch /run/autoinit.pid\n  if pgrep -f unattended-upgrades >/dev/null 2>&1; then\n    pkill -9 -f unattended-upgrades\n  fi\n  if [ ! -e \"/root/.autoinit.log\" ]; then\n    echo \"[init] lnch ==> Launching BOA System INIT on ${_OS_DIST}/${_OS_CODE}...\"\n    _msg \"[init] lnch ==> Launching BOA System INIT on ${_OS_DIST}/${_OS_CODE}...\" >> \"${_logInt}\"\n  fi\n  sleep 15\n  _TIME_IS=$(date)\n  _UPTIME=$(uptime)\n  _msg \"[init] lnch ==> Date/Time: ${_TIME_IS}\" >> \"${_logInt}\"\n  _msg \"[init] lnch ==> Uptime: ${_UPTIME}\" >> \"${_logInt}\"\n  _init_mode\n  _init_ninc\n  _init_locales\n  _init_cron\n  _init_start\n}\n\n### ---------------------------- Prepare for _init_launch\n_init_root\n_init_aegir\n_single_instance_lock\ncd /root/\n\n### ---------------------------- Auto-enable debug on dev\nif [ \"${_tRee}\" = \"dev\" ]; then\n  [ ! -e \"/root/.debug-boa-installer.cnf\" ] && touch /root/.debug-boa-installer.cnf\n  [ ! -e \"/root/.debug-octopus-installer.cnf\" ] && touch /root/.debug-octopus-installer.cnf\nfi\n\n### ---------------------------- Extra protection with pid file\nif [ -e \"/run/autoinit.pid\" ]; then\n  echo \" \" >> \"${_logSlt}\"\n  _msg \"The /run/autoinit.pid is blocking this run...\" >> \"${_logSlt}\"\n  echo \" \" >> \"${_logSlt}\"\n  exit 1\nelse\n  ### -------------------------- Let's go!\n  _init_launch\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/automini",
    "content": "#!/bin/bash\n\n\n###----------------------------------------###\n###\n###  Automatic BOA System AUTO-INIT Tool\n###\n###  Copyright (C) 2009-2026 Omega8.cc\n###  noc@omega8.cc www.omega8.cc\n###\n###  This program is free software. You can\n###  redistribute it and/or modify it under\n###  the terms of the GNU GPL as published by\n###  the Free Software Foundation, version 2\n###  or later.\n###\n###  This program is distributed in the hope\n###  that it will be useful, but WITHOUT ANY\n###  WARRANTY; without even the implied\n###  warranty of MERCHANTABILITY or FITNESS\n###  FOR A PARTICULAR PURPOSE. See the GNU GPL\n###  for more details.\n###\n###  You should have received a copy of the\n###  GNU GPL along with this program.\n###  If not, see http://www.gnu.org/licenses/\n###\n###  Code: https://github.com/omega8cc/boa\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### How To: Launch AUTO-INIT properly      ###\n###----------------------------------------###\n###\n###  Use clean minimal Debian OS based VPS.\n###\n###  Initialise the system before installing\n###  BOA to remove systemd and quickly\n###  upgrade to latest Devuan OS version.\n###\n###   $ wget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n###   $ autoinit\n###\n###  Once started, the autoinit will launch\n###  a series of upgrades and reboots until\n###  you get a basic latest system installed\n###  to be able to run standard BOA install.\n###\n###  The script logs its actions in the files\n###  you can examine later:\n###\n###   $ cat /root/.autoinit.log\n###\n###  There's also a very verbose extra log:\n###\n###   $ cat /root/.autoinit-verbose.log\n###\n###----------------------------------------###\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport 
PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n###\n### Variables\n###\n_initFile=\"/root/.init-to-devuan-ctrl.cnf\"\n_barCnf=\"/root/.barracuda.cnf\"\n_logInt=\"/root/.autoinit.log\"\n_logSlt=\"/root/.autoinit-verbose.log\"\n_aptAllow=\"--allow-unauthenticated\"\n_INITINS=\"apt-get ${_aptAllow} -y install\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n###\n### The lock is always cleared even if apt bails\n###\ntrap 'rm -f /run/autoinit.pid' EXIT\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"AutoInit v.${_tRee} [$(date +%T)] ==> $*\"\n}\n\n###\n### Only root allowed\n###\n_init_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  else\n    _msg \"[init] root ==> ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    _msg \"[init] root ==> ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    _msg \"[init] root ==> ERROR: We cannot proceed until it is below 90/100\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab &> /dev/null\n    sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n    exit 1\n  fi\n}\n\n###\n### Atomic unlock to prevent TOCTOU race\n###\n_single_instance_unlock() {\n  _FD=\"$1\"; _PATH=\"$2\"\n  if command -v flock >/dev/null 2>&1; then\n    flock -u \"${_FD}\" 2>/dev/null || true\n    eval \"exec ${_FD}>&-\"\n    rm -f \"${_PATH}\" 2>/dev/null || true\n  else\n    rm -rf \"${_PATH}\" 2>/dev/null || true\n  fi\n}\n\n###\n### Atomic lock to prevent TOCTOU race\n###\n_single_instance_lock() {\n  # Ensure only one instance is running\n  # usage: _single_instance_lock [lockfile_path] [fd]\n  # default lock: /run/<script>.lock (falls back to /tmp)\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  _LOCK_FD=\"${2:-9}\"\n  if [ -n \"${1:-}\" ]; then\n    _LOCK_PATH=\"$1\"\n  else\n    _DIR=\"/run\"; [ -w \"$_DIR\" ] || _DIR=\"/tmp\"\n    _LOCK_PATH=\"${_DIR}/${_SELF_NAME%.sh}.lock\"\n  fi\n\n  if command -v flock >/dev/null 2>&1; then\n    eval \"exec ${_LOCK_FD}>\\\"${_LOCK_PATH}\\\"\"\n    if ! flock -n \"${_LOCK_FD}\"; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    printf '%s\\n' \"$$\" 1>&\"${_LOCK_FD}\" 2>/dev/null || true   # optional: PID note\n    # This trap replaces the earlier 'rm -f /run/autoinit.pid' EXIT trap,\n    # so clear the pid file here too to keep that failsafe working\n    trap \"_single_instance_unlock ${_LOCK_FD} '${_LOCK_PATH}'; rm -f /run/autoinit.pid\" EXIT INT TERM HUP\n  else\n    # mkdir is atomic; directory presence == lock held\n    if ! mkdir \"${_LOCK_PATH}\" 2>/dev/null; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    echo \"$$\" > \"${_LOCK_PATH}/pid\" 2>/dev/null || true\n    # This trap replaces the earlier 'rm -f /run/autoinit.pid' EXIT trap,\n    # so clear the pid file here too to keep that failsafe working\n    trap \"rm -rf '${_LOCK_PATH}'; rm -f /run/autoinit.pid\" EXIT INT TERM HUP\n  fi\n}\n\n###\n### Check Manufacturer Compatibility\n###\n_init_mnfr() {\n  # Install dmidecode if not present\n  if ! 
command -v dmidecode &> /dev/null; then\n    apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    ${_INITINS} dmidecode &> /dev/null\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    _msg \"[init] mnfr ==> Not supported environment detected: ${_HOST_INFO}\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Bye!\" >> \"${_logInt}\"\n    echo \"[init] mnfr ==> Not supported environment detected: ${_HOST_INFO}\"\n    echo \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\"\n    echo \"[init] mnfr ==> Bye!\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab &> /dev/null\n    sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n    exit 1\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    _msg \"[init] mnfr ==> Mysterious environment: ${_HOST_INFO}\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\" >> \"${_logInt}\"\n    _msg \"[init] mnfr ==> Bye!\" >> \"${_logInt}\"\n    echo \"[init] mnfr ==> Mysterious environment: ${_HOST_INFO}\"\n    echo \"[init] mnfr ==> Please check https://bit.ly/boa-caveats\"\n    echo \"[init] mnfr ==> Bye!\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab &> /dev/null\n    sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n    exit 1\n  fi\n}\n\n###\n### Check Ægir\n###\n_init_aegir() {\n  if [ -e \"/var/aegir\" ]; then\n    echo\n    echo \"[init] aegir ==> ERROR: This script can not be used once BOA is installed\"\n    echo\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab &> /dev/null\n    exit 1\n  
fi\n}\n\n###\n### Check Systemd\n###\n_init_no_systemd() {\n  if [ -e \"/lib/systemd/systemd\" ]; then\n    echo \" \" >> \"${_logInt}\"\n    _msg \"[init] nosd ==> OOPS: Systemd still not removed cleanly\" >> \"${_logInt}\"\n    echo \" \" >> \"${_logInt}\"\n  fi\n}\n\n###\n### Fix locales\n###\n_init_locales() {\n  _isLoc=\"$(which locale)\"\n  if [ ! -x \"${_isLoc}\" ] || [ -z \"${_isLoc}\" ]; then\n    apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    ${_INITINS} locales &> /dev/null\n  fi\n  _LOC_TEST=$(locale 2>&1)\n  if [[ \"${_LOC_TEST}\" =~ LANG=.*UTF-8 ]]; then\n    _LOCALE_TEST=OK\n  fi\n  if [[ \"${_LOC_TEST}\" =~ \"Cannot\" ]]; then\n    _LOCALE_TEST=BROKEN\n  fi\n  if [ \"${_LOCALE_TEST}\" = \"BROKEN\" ]; then\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n    if [[ ! \"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen &> /dev/null\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce all locale settings\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_TIME=en_US.UTF-8 \\\n      LC_MONETARY=en_US.UTF-8 \\\n      LC_MESSAGES=en_US.UTF-8 \\\n      LC_PAPER=en_US.UTF-8 \\\n      LC_NAME=en_US.UTF-8 \\\n      LC_ADDRESS=en_US.UTF-8 \\\n      LC_TELEPHONE=en_US.UTF-8 \\\n      LC_MEASUREMENT=en_US.UTF-8 \\\n      LC_IDENTIFICATION=en_US.UTF-8 \\\n      LC_ALL= &> /dev/null\n    # Define all locale settings on the fly to prevent unnecessary\n    # warnings during installation of packages.\n    export LANG=en_US.UTF-8 &> /dev/null\n    export LC_CTYPE=en_US.UTF-8 &> /dev/null\n    export LC_COLLATE=POSIX &> /dev/null\n    export LC_NUMERIC=POSIX &> /dev/null\n    export LC_TIME=en_US.UTF-8 &> /dev/null\n    export LC_MONETARY=en_US.UTF-8 &> /dev/null\n    export LC_MESSAGES=en_US.UTF-8 &> /dev/null\n  
  export LC_PAPER=en_US.UTF-8 &> /dev/null\n    export LC_NAME=en_US.UTF-8 &> /dev/null\n    export LC_ADDRESS=en_US.UTF-8 &> /dev/null\n    export LC_TELEPHONE=en_US.UTF-8 &> /dev/null\n    export LC_MEASUREMENT=en_US.UTF-8 &> /dev/null\n    export LC_IDENTIFICATION=en_US.UTF-8 &> /dev/null\n    export LC_ALL= &> /dev/null\n  else\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n    if [[ ! \"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen &> /dev/null\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce locale settings required for consistency\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_ALL= &> /dev/null\n    # Define locale settings required for consistency also on the fly\n    export LC_COLLATE=POSIX &> /dev/null\n    export LC_NUMERIC=POSIX &> /dev/null\n    export LC_ALL= &> /dev/null\n  fi\n  _LOCALES_BASHRC_TEST=$(grep LC_COLLATE /root/.bashrc 2>&1)\n  if [[ ! 
\"${_LOCALES_BASHRC_TEST}\" =~ \"LC_COLLATE\" ]]; then\n    printf \"\\n\" >> /root/.bashrc\n    echo \"export LANG=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_CTYPE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_COLLATE=POSIX\" >> /root/.bashrc\n    echo \"export LC_NUMERIC=POSIX\" >> /root/.bashrc\n    echo \"export LC_TIME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MONETARY=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MESSAGES=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_PAPER=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_NAME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ADDRESS=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_TELEPHONE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MEASUREMENT=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_IDENTIFICATION=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ALL=\" >> /root/.bashrc\n    printf \"\\n\" >> /root/.bashrc\n  fi\n}\n\n###\n### Check cron\n###\n_init_cron() {\n  _isCrn=\"$(which cron)\"\n  if [ ! 
-x \"${_isCrn}\" ] || [ -z \"${_isCrn}\" ]; then\n    apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    ${_INITINS} cron >> \"${_logSlt}\"\n  fi\n}\n\n###\n### Should we launch boa install?\n###\n_if_launch_boa_install() {\n  _BOA_LOGFILE=\"/root/.boa.install.command.cnf\"\n  if [[ -s \"${_BOA_LOGFILE}\" ]]; then\n    _BOA_COMMAND=$(cat \"${_BOA_LOGFILE}\")\n    _BOA_COMMAND=$(echo \"${_BOA_COMMAND}\" | sed -E \"s/noscreen//g\")\n    if [[ \"${_BOA_COMMAND}\" =~ \" silent\" ]] \\\n      || [[ \"${_BOA_COMMAND}\" =~ \" system\" ]]; then\n      _BOA_INSTALL_COMMAND=\"${_BOA_COMMAND}\"\n    else\n      _BOA_INSTALL_COMMAND=\"${_BOA_COMMAND} silent\"\n    fi\n    if [[ \"${_BOA_COMMAND}\" =~ \"boa in-lts\" ]] \\\n      || [[ \"${_BOA_COMMAND}\" =~ \"boa in-dev\" ]]; then\n      _msg \"_if_launch_boa_install ==> Time for ${_BOA_INSTALL_COMMAND} noscreen\" >> \"${_logInt}\"\n      eval \"${_BOA_INSTALL_COMMAND} noscreen\"\n      wait\n      mv -f ${_BOA_LOGFILE} /var/backups/\n    fi\n  fi\n}\n\n###\n### Prepare system\n###\n_init_prepare() {\n  if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n  if [ ! -e \"/opt/apt/apt.conf.noi.dist\" ] \\\n    || [ ! 
-e \"/opt/apt/apt.conf.noi.nrml\" ]; then\n    mkdir -p /opt/apt\n    echo \"APT::Get::Assume-Yes \\\"true\\\";\" > /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Show-Upgraded \\\"true\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Install-Recommends \\\"false\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Install-Suggests \\\"false\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Quiet \\\"true\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"DPkg::Options {\\\"--force-confnew\\\";\\\"--force-confmiss\\\";};\" >> /opt/apt/apt.conf.noi.dist\n    echo \"DPkg::Pre-Install-Pkgs {\\\"/usr/sbin/dpkg-preconfigure --apt\\\";};\" >> /opt/apt/apt.conf.noi.dist\n    echo \"Dir::Etc::SourceList \\\"/etc/apt/sources.list\\\";\" >> /opt/apt/apt.conf.noi.dist\n    echo \"APT::Get::Assume-Yes \\\"true\\\";\" > /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Get::Show-Upgraded \\\"true\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Get::Install-Recommends \\\"false\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Get::Install-Suggests \\\"false\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"APT::Quiet \\\"true\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"DPkg::Options {\\\"--force-confdef\\\";\\\"--force-confmiss\\\";\\\"--force-confold\\\"};\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"DPkg::Pre-Install-Pkgs {\\\"/usr/sbin/dpkg-preconfigure --apt\\\";};\" >> /opt/apt/apt.conf.noi.nrml\n    echo \"Dir::Etc::SourceList \\\"/etc/apt/sources.list\\\";\" >> /opt/apt/apt.conf.noi.nrml\n  fi\n  if [ ! 
-x \"/usr/bin/mc\" ]; then\n    touch /run/autoinit.pid\n    ${_INITINS} lsb-release >> \"${_logSlt}\"\n    _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n    _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n    _msg \"[init] prep ==> We need to install some tools early...\" >> \"${_logInt}\"\n    apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    ${_INITINS} aptitude  >> \"${_logSlt}\"\n    ${_INITINS} bc        >> \"${_logSlt}\"\n    ${_INITINS} cron      >> \"${_logSlt}\"\n    ${_INITINS} curl      >> \"${_logSlt}\"\n    ${_INITINS} dnsutils  >> \"${_logSlt}\"\n    ${_INITINS} hostname  >> \"${_logSlt}\"\n    ${_INITINS} mc        >> \"${_logSlt}\"\n    ${_INITINS} net-tools >> \"${_logSlt}\"\n    ${_INITINS} netcat-traditional  >> \"${_logSlt}\"\n    ${_INITINS} ntpsec-ntpdate      >> \"${_logSlt}\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab &> /dev/null\n    echo \"*/1 *   * * *   root    bash /opt/local/bin/autoinit\" >> /etc/crontab\n    if [ -x \"/opt/local/bin/killer\" ]; then\n      sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n      echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\n    fi\n    touch ${_initFile}\n    _msg \"[init] prep ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n    echo \" \" >> \"${_logInt}\"\n    if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n      service cron stop\n      sleep 90\n      rm -f /run/autoinit.pid\n    else\n      rm -f /run/autoinit.pid\n      shutdown -r now\n    fi\n    exit 0\n  fi\n}\n\n###\n### Sync base-files if needed\n###\n_init_base_sync() {\n  if [ -e \"/root/.top-daedalus.cnf\" ]; then\n    ${_INITINS} base-files/daedalus >> \"${_logSlt}\"\n  fi\n  if [ -e \"/root/.top-excalibur.cnf\" ]; then\n    ${_INITINS} base-files/excalibur >> \"${_logSlt}\"\n  fi\n}\n\n###\n### Final procedure to make the system ready\n###\n_init_post_ready() {\n  _msg \"[init] post ==> Final OS 
post-upgrade to ${_OS_DIST}/${_OS_CODE} procedure...\" >> \"${_logInt}\"\n  apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n  apt-get upgrade ${_nrmUpArg} >> \"${_logSlt}\"\n  apt-get dist-upgrade ${_dstUpArg} >> \"${_logSlt}\"\n  apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n}\n\n###\n### Remove systemd\n###\n_init_offsystemd() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  touch /run/autoinit.pid\n  _msg \"[init] offd ==> Launching a quick dist-upgrade...\" >> \"${_logInt}\"\n  apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n  if [ -e \"/root/.top-excalibur.cnf\" ]; then\n    _msg \"[init] offd ==> Installing elogind on ${_OS_CODE} early\" >> \"${_logInt}\"\n    apt-get -o Dpkg::Options::=\"--force-overwrite\" install elogind >> \"${_logSlt}\"\n  fi\n  apt-get dist-upgrade ${_dstUpArg} >> \"${_logSlt}\"\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _msg \"[init] offd ==> Your system now runs on ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n  _msg \"[init] offd ==> Removing systemd on ${_OS_CODE}\" >> \"${_logInt}\"\n  apt-get purge systemd libnss-systemd -y -qq >> \"${_logSlt}\"\n  apt-get autoremove --purge -y -qq >> \"${_logSlt}\"\n  apt-get autoclean -y -qq >> \"${_logSlt}\"\n  _msg \"[init] offd ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n  echo \" \" >> \"${_logInt}\"\n  touch /root/.run-offsystemd-devuan-init.cnf\n  if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n    service cron stop\n    sleep 90\n    rm -f /run/autoinit.pid\n  else\n    rm -f /run/autoinit.pid\n    shutdown -r now\n  fi\n  exit 0\n}\n\n###\n### Find the best Devuan APT sources mirror\n###\n_find_fast_devuan_mirror() {\n  _ffDevuan=\"$(which ffdevuan)\"\n  if [ -x \"${_ffDevuan}\" ]; then\n    bash 
${_ffDevuan} >> \"${_logSlt}\"\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/log/boa/devuan-fast-mirrors-list.txt\"\n    mkdir -p /var/log/boa/\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFLIST=$(grep \"merged\" ${_ffList} 2>&1)\n    fi\n    if [ ! -e \"${_ffList}\" ] || [[ ! \"${_BROKEN_FFLIST}\" =~ \"merged\" ]]; then\n      echo \"https://mirrors.dotsrc.org/devuan/merged\"  > ${_ffList}\n      echo \"https://mirror.akardam.net/devuan/merged\" >> ${_ffList}\n      echo \"https://mirror.hootsoftware.com/devuan/merged\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]] \\\n        || [[ ! \"${_BROKEN_FFLIST}\" =~ \"merged\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n      else\n        _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n      fi\n    else\n      _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n    fi\n  else\n    _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n  fi\n  echo \"${_USE_MIR}\"\n}\n\n###\n### Return 0 if package is installed, 1 otherwise.\n###\n_pkg_installed() {\n  /usr/bin/dpkg-query -W -f='${Status}' \"$1\" 2>/dev/null | grep -qx 'install ok installed'\n}\n\n###\n### Init cycle procedure\n###\n_init_cycle() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  touch /run/autoinit.pid\n  _msg \"[init] cycl ==> Your current system is ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n  _NEW_SYS=Devuan\n  _OLD_SYS=Devuan\n  if [ \"${_OS_CODE}\" = \"trixie\" ]; then\n    _NEW_OS_CODE=excalibur\n    
_OLD_SYS=Debian\n    [ ! -e \"/root/.top-excalibur.cnf\" ] && touch /root/.top-excalibur.cnf\n  elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _NEW_OS_CODE=daedalus\n    _OLD_SYS=Debian\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _NEW_OS_CODE=daedalus\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _NEW_OS_CODE=chimaera\n    _OLD_SYS=Debian\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _NEW_OS_CODE=beowulf\n    _OLD_SYS=Debian\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _NEW_OS_CODE=chimaera\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ] || [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _NEW_OS_CODE=\n    _NEW_SYS=\n  else\n    _msg \"[init] cycl ==> This procedure does not support ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n    _msg \"[init] cycl ==> Bye!\" >> \"${_logInt}\"\n    sed -i \"s/.*autoinit.*//gi\" /etc/crontab &> /dev/null\n    sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n    echo \" \" >> \"${_logInt}\"\n    exit 1\n  fi\n  if [ \"${_OLD_SYS}\" = \"Devuan\" ]; then\n    apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    apt-get upgrade ${_nrmUpArg} >> \"${_logSlt}\"\n  fi\n  if [ ! -z \"${_NEW_OS_CODE}\" ] && [ ! 
-z \"${_NEW_SYS}\" ]; then\n    _msg \"[init] cycl ==> Update networking with vmnetfix early...\" >> \"${_logInt}\"\n    /opt/local/bin/vmnetfix --mode full >> \"${_logSlt}\"\n    wait\n    _msg \"[init] cycl ==> Running APT Cleanup...\" >> \"${_logInt}\"\n    /opt/local/bin/aptcleanup >> \"${_logSlt}\"\n    wait\n    _msg \"[init] cycl ==> Launching a quick upgrade to ${_NEW_SYS}/${_NEW_OS_CODE}\" >> \"${_logInt}\"\n    _aptLiSys=\"/etc/apt/sources.list\"\n    if [ \"${_NEW_OS_CODE}\" = \"beowulf\" ]; then\n      _TGT_MRR=\"http://archive.devuan.org/merged\"\n    else\n      _TGT_MRR=$(_find_fast_devuan_mirror)\n    fi\n    : \"${_TGT_MRR:=https://mirrors.dotsrc.org/devuan/merged}\"\n    _msg \"[init] cycl ==> Using Devuan mirror: ${_TGT_MRR}\" >> \"${_logInt}\"\n    echo \"## DEVUAN MAIN REPOSITORIES\" > ${_aptLiSys}\n    echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE} main\" >> ${_aptLiSys}\n    echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE} main\" >> ${_aptLiSys}\n    echo \"\" >> ${_aptLiSys}\n    if [ \"${_NEW_OS_CODE}\" != \"beowulf\" ]; then\n      echo \"## MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n      echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE}-updates main\" >> ${_aptLiSys}\n      echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE}-updates main\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n    fi\n    echo \"## DEVUAN SECURITY UPDATES\" >> ${_aptLiSys}\n    echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE}-security main\" >> ${_aptLiSys}\n    echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE}-security main\" >> ${_aptLiSys}\n    apt-get update --allow-insecure-repositories >> \"${_logSlt}\"\n    ${_INITINS} devuan-keyring >> \"${_logSlt}\"\n    apt-get update >> \"${_logSlt}\"\n    apt-get upgrade ${_dstUpArg} >> \"${_logSlt}\"\n    if [ \"${_OLD_SYS}\" = \"Debian\" ]; then\n      if [ \"${_NEW_OS_CODE}\" = \"excalibur\" ] || [ -e \"/root/.top-excalibur.cnf\" ]; then\n        ${_INITINS} eudev sysvinit-core systemd-sysv- --allow-remove-essential -qq >> 
\"${_logSlt}\"\n      else\n        ${_INITINS} eudev sysvinit-core -qq >> \"${_logSlt}\"\n      fi\n      apt-get -f install ${_aptAllow} -y -qq >> \"${_logSlt}\"\n    fi\n  fi\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ \"${_OLD_SYS}\" = \"Debian\" ]; then\n    _msg \"[init] cycl ==> Update networking if needed with vmnetfix...\" >> \"${_logInt}\"\n    /opt/local/bin/vmnetfix --mode full >> \"${_logSlt}\"\n    wait\n    _msg \"[init] cycl ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n    echo \" \" >> \"${_logInt}\"\n    touch /root/.run-cycle-devuan-init.cnf\n    if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n      service cron stop\n      sleep 90\n      rm -f /run/autoinit.pid\n    else\n      rm -f /run/autoinit.pid\n      shutdown -r now\n    fi\n    exit 0\n  else\n    if [ \"${_OS_CODE}\" = \"daedalus\" ] || [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      _init_base_sync\n      _init_post_ready\n      _init_no_systemd\n      _msg \"[init] cycl ==> Your system now runs on ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n      _msg \"[init] cycl ==> The system is now ready for boa install\" >> \"${_logInt}\"\n      rm -f /root/.init-to-devuan-ctrl.cnf\n      sed -i \"s/.*autoinit.*//gi\" /etc/crontab &> /dev/null\n      _if_launch_boa_install\n    else\n      _msg \"[init] cycl ==> Your system now runs on ${_OS_DIST}/${_OS_CODE}\" >> \"${_logInt}\"\n      _msg \"[init] cycl ==> Time for reboot and next upgrade cycle...\" >> \"${_logInt}\"\n    fi\n    echo \" \" >> \"${_logInt}\"\n    touch /root/.run-cycle-devuan-init.cnf\n    touch /root/.run-cycle-${_OS_CODE}-init.cnf\n    if [ -e \"/root/.manual-autoinit-reboot.cnf\" ]; then\n      service cron stop\n      sleep 90\n      rm -f /run/autoinit.pid\n    else\n      rm -f /run/autoinit.pid\n      shutdown -r now\n    fi\n    exit 0\n  fi\n}\n\n###\n### Display welcome 
info\n###\n_init_info() {\n  echo \"[init] info ==> The system will reboot and continue several times...\"\n  sleep 3\n  echo \"[init] info ==> until it is upgraded to the latest Devuan version.\"\n  sleep 3\n  echo \"[init] info ==> Once all upgrades are complete, you can run boa install\"\n  sleep 3\n  echo \"[init] info ==> Let's go! It should take ten minutes at most!\"\n  sleep 3\n  echo \"...\"\n}\n\n###\n### Init info or prepare\n###\n_init_conf() {\n  if [ ! -e \"${_initFile}\" ]; then\n    if [ ! -x \"/usr/bin/mc\" ]; then\n      _init_mnfr\n      _init_info\n    fi\n    _init_prepare &> /dev/null\n  fi\n}\n\n###\n### Init start\n###\n_init_start() {\n  _init_conf\n  if [ -e \"${_initFile}\" ]; then\n    if [ -e \"/root/.run-cycle-devuan-init.cnf\" ]; then\n      if [ ! -e \"/root/.run-offsystemd-devuan-init.cnf\" ]; then\n        _init_offsystemd\n      fi\n    fi\n    _init_cycle\n  fi\n}\n\n###\n### Init launch\n###\n_init_launch() {\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  touch /run/autoinit.pid\n  if pgrep -f unattended-upgrades >/dev/null 2>&1; then\n    pkill -9 -f unattended-upgrades\n  fi\n  if [ ! -e \"/root/.autoinit.log\" ]; then\n    echo \"[init] lnch ==> Launching BOA System INIT on ${_OS_DIST}/${_OS_CODE}...\"\n    _msg \"[init] lnch ==> Launching BOA System INIT on ${_OS_DIST}/${_OS_CODE}...\" >> \"${_logInt}\"\n  fi\n  sleep 15\n  _TIME_IS=$(date)\n  _UPTIME=$(uptime)\n  _msg \"[init] lnch ==> Date/Time: ${_TIME_IS}\" >> \"${_logInt}\"\n  _msg \"[init] lnch ==> Uptime: ${_UPTIME}\" >> \"${_logInt}\"\n  _init_locales\n  _init_cron\n  _init_start\n}\n\n### ---------------------------- Prepare for _init_launch\n_init_root\n_init_aegir\n_single_instance_lock\ncd /root/\n\n### ---------------------------- Auto-enable debug on dev\nif [ \"${_tRee}\" = \"dev\" ]; then\n  [ ! 
-e \"/root/.debug-boa-installer.cnf\" ] && touch /root/.debug-boa-installer.cnf\n  [ ! -e \"/root/.debug-octopus-installer.cnf\" ] && touch /root/.debug-octopus-installer.cnf\nfi\n\n### ---------------------------- Extra protection with pid file\nif [ -e \"/run/autoinit.pid\" ]; then\n  echo \" \" >> \"${_logSlt}\"\n  _msg \"The /run/autoinit.pid is blocking this run...\" >> \"${_logSlt}\"\n  echo \" \" >> \"${_logSlt}\"\n  exit 1\nelse\n  ### -------------------------- Let's go!\n  _init_launch\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/autosymlink",
    "content": "#!/bin/bash\n\n#\n# Sites Files Auto Symlink\n#\n# Modes:\n#   DRY     (default) : no changes, only show + log actions, and record if run is CLEAN\n#   LIVE    (live)    : apply changes, per-site confirmation\n#   BATCH   (batch)   : apply changes for all sites without confirmation,\n#                       allowed only after a CLEAN DRY run\n#   BATCH_IF_CLEAN    : run DRY and if CLEAN automatically run BATCH (cron-safe)\n#   REPORT  (report)  : read-only report of shared files/private symlinks\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script must be run as root\"\n    exit 1\n  fi\n}\n_check_root\n\n_MODE=\"DRY\"\ncase \"$1\" in\n  live|LIVE|--live)\n    _MODE=\"LIVE\"\n    ;;\n  batch|BATCH|--batch)\n    _MODE=\"BATCH\"\n    ;;\n  --batch-if-clean|batch-if-clean|BATCH_IF_CLEAN|batch_if_clean)\n    _MODE=\"BATCH_IF_CLEAN\"\n    ;;\n  report|REPORT|--report)\n    _MODE=\"REPORT\"\n    ;;\nesac\n\n_LOG_DIR=\"/var/log/boa\"\n_LOG_FILE=\"${_LOG_DIR}/autosymlink.log\"\n\nif [ ! 
-d \"${_LOG_DIR}\" ]; then\n  mkdir -p \"${_LOG_DIR}\"\nfi\n\n_STATE_FILE=\"${_LOG_DIR}/autosymlink.state\"\n\n# Flags used only during DRY mode\n_DRY_HAS_ERRORS=\"NO\"\n_DRY_HAS_EDGE_CASES=\"NO\"\n\n# Count applied changes in LIVE/BATCH modes\n_APPLY_COUNT=0\n_LAST_ACTION=NO\n\n# Per-site LIVE confirmation flag (reset in _process_site)\n_LIVE_SITE_APPROVED=\"NO\"\n\n# Cache for global FS pair status: key=\"SRC_DEV|TGT_DEV\", value=\"SPACIOUS\" or \"TIGHT\"\ndeclare -A _FS_PAIR_STATUS\n# Cumulative planned/reserved KB per target filesystem (used mainly for break-sharing copies)\ndeclare -A _FS_RESERVED_KB\n\n# ---------------------------------------------------------------------------\n# Helpers\n# ---------------------------------------------------------------------------\n\n_log() {\n  # Single place to log + print\n  # Arguments: message...\n  _NOW=\"$(date \"+%Y-%m-%d %H:%M:%S\")\"\n  _LINE=\"${_NOW} $*\"\n\n  # Track applied changes only for modes which can apply changes\n  if [ \"${_MODE}\" = \"LIVE\" ] || [ \"${_MODE}\" = \"BATCH\" ] || [ \"${_MODE}\" = \"BATCH_IF_CLEAN\" ]; then\n    case \"${_LINE}\" in\n      *\"[APPLY]\"*)\n        _APPLY_COUNT=$(( _APPLY_COUNT + 1 ))\n        ;;\n    esac\n  fi\n\n  echo \"${_LINE}\" | tee -a \"${_LOG_FILE}\"\n}\n\n_write_state() {\n  # Write a small state file with mode + status\n  # Args: 1: status (CLEAN / NOT_CLEAN / FAILED / etc.)\n  #       2: mode   (DRY / LIVE / BATCH / REPORT / BATCH_IF_CLEAN)\n  _STATUS=\"$1\"\n  _MODE_SAVED=\"$2\"\n  _TS=\"$(date \"+%Y-%m-%d %H:%M:%S\")\"\n\n  if [ \"${_APPLY_COUNT}\" -gt 0 ]; then\n    _LAST_ACTION=\"YES\"\n  else\n    _LAST_ACTION=NO\n  fi\n\n  {\n    echo \"_LAST_MODE=${_MODE_SAVED}\"\n    echo \"_LAST_STATUS=${_STATUS}\"\n    echo \"_LAST_ACTION=${_LAST_ACTION}\"\n    echo \"_LAST_APPLY_COUNT=${_APPLY_COUNT}\"\n    echo \"_LAST_TS=${_TS}\"\n  } > \"${_STATE_FILE}\"\n}\n\n_require_clean_dry_run_or_exit() {\n  # For BATCH mode: ensure a CLEAN DRY run was executed just 
before.\n  if [ ! -f \"${_STATE_FILE}\" ]; then\n    _log \"[ERROR] Batch mode requires a prior CLEAN DRY-RUN, but no state file found: ${_STATE_FILE}\"\n    exit 1\n  fi\n\n  _LAST_MODE=$(grep '^_LAST_MODE=' \"${_STATE_FILE}\" 2>/dev/null | head -n 1 | cut -d= -f2-)\n  _LAST_STATUS=$(grep '^_LAST_STATUS=' \"${_STATE_FILE}\" 2>/dev/null | head -n 1 | cut -d= -f2-)\n  _LAST_TS=$(grep '^_LAST_TS=' \"${_STATE_FILE}\" 2>/dev/null | head -n 1 | cut -d= -f2-)\n\n  if [ -z \"${_LAST_MODE}\" ] || [ -z \"${_LAST_STATUS}\" ]; then\n    _log \"[ERROR] State file ${_STATE_FILE} is incomplete. Run DRY-RUN again before using batch mode.\"\n    exit 1\n  fi\n\n  if [ \"${_LAST_MODE}\" != \"DRY\" ] || [ \"${_LAST_STATUS}\" != \"CLEAN\" ]; then\n    _log \"[ERROR] Batch mode requires last run to be DRY and CLEAN, but got _LAST_MODE='${_LAST_MODE}', _LAST_STATUS='${_LAST_STATUS}' (at ${_LAST_TS}).\"\n    _log \"[ERROR] Please run this script in DRY-RUN mode again and ensure no edge cases/errors appear.\"\n    exit 1\n  fi\n\n  _log \"[INFO] Batch mode: confirmed last run was CLEAN DRY-RUN at ${_LAST_TS}.\"\n  # Invalidate the CLEAN DRY status immediately so another batch cannot reuse it.\n  _write_state \"BATCH\" \"BATCH\"\n}\n\n_finalize_dry_run_state() {\n  # Evaluate DRY flags and write state\n  if [ \"${_DRY_HAS_ERRORS}\" = \"NO\" ] && [ \"${_DRY_HAS_EDGE_CASES}\" = \"NO\" ]; then\n    _log \"[DRY] OK: checks passed (including any pending break-sharing fixes). Batch LIVE mode is allowed.\"\n    _write_state \"CLEAN\" \"DRY\"\n  else\n    _log \"[DRY] NOT CLEAN: edge cases and/or errors detected. 
Batch LIVE mode is NOT allowed.\"\n    _write_state \"NOT_CLEAN\" \"DRY\"\n  fi\n}\n\n\n_log_reserved_fs_summary() {\n  # End-of-run summary for cumulative break-sharing copy reservations per target filesystem.\n  # Helps estimate whether planned batch work will overcommit storage.\n  _HAS_RESERVED=\"NO\"\n\n  for _FS_DEV in \"${!_FS_RESERVED_KB[@]}\"; do\n    _HAS_RESERVED=\"YES\"\n    break\n  done\n\n  if [ \"${_HAS_RESERVED}\" != \"YES\" ]; then\n    return\n  fi\n\n  _log \"------------------------------------------------------------------\"\n  _log \"[INFO] Cumulative reservation summary for pending break-sharing copies\"\n\n  for _FS_DEV in $(printf '%s\\n' \"${!_FS_RESERVED_KB[@]}\" | sort); do\n    _RES_KB=\"${_FS_RESERVED_KB[${_FS_DEV}]}\"\n    _FS_FREE_KB=$(df -Pk \"${_FS_DEV}\" 2>/dev/null | awk 'NR==2 {print $4}')\n    _FS_MOUNT=$(df -Pk \"${_FS_DEV}\" 2>/dev/null | awk 'NR==2 {print $6}')\n\n    if [ -z \"${_FS_FREE_KB}\" ]; then\n      _log \"[INFO] ${_FS_DEV}: reserved=${_RES_KB}K, current free=?, projected free=? (unable to read df)\"\n      continue\n    fi\n\n    _FS_PROJ_KB=$(( _FS_FREE_KB - _RES_KB ))\n    if [ \"${_FS_PROJ_KB}\" -lt 0 ]; then\n      _FS_PROJ_KB=0\n    fi\n\n    if [ -n \"${_FS_MOUNT}\" ]; then\n      _log \"[INFO] ${_FS_DEV} (${_FS_MOUNT}): current free=${_FS_FREE_KB}K, reserved=${_RES_KB}K, projected free=${_FS_PROJ_KB}K\"\n    else\n      _log \"[INFO] ${_FS_DEV}: current free=${_FS_FREE_KB}K, reserved=${_RES_KB}K, projected free=${_FS_PROJ_KB}K\"\n    fi\n  done\n}\n\n_get_site_path() {\n  # Extracts site_path from a Drush alias file.\n  # Usage: _get_site_path \"/path/to/file.alias.drushrc.php\"\n  _ALIAS_FILE=\"$1\"\n\n  if [ ! 
-f \"${_ALIAS_FILE}\" ]; then\n    echo \"\"\n    return\n  fi\n\n  # Expect line like:\n  # 'site_path' => '/data/disk/oXXX/static/.../sites/example.com',\n  _LINE=$(grep -E \"'site_path'\\s*=>\" \"${_ALIAS_FILE}\" 2>/dev/null | head -n 1)\n\n  if [ -z \"${_LINE}\" ]; then\n    echo \"\"\n    return\n  fi\n\n  # Split by single quote and take the path part\n  _SITE_PATH=$(printf '%s\\n' \"${_LINE}\" | awk -F\"'\" '{print $4}')\n  echo \"${_SITE_PATH}\"\n}\n\n_ask_confirm_site() {\n  # Ask once per site if we should perform the prepared actions.\n  # Expects \"yes\" or \"Y\" (case-insensitive).\n  _SITE_NAME=\"$1\"\n\n  echo\n  echo \"READY TO APPLY CHANGES FOR SITE: ${_SITE_NAME}\"\n  printf \"Type 'yes' or 'Y' to proceed, anything else to skip: \"\n  read _ANSWER\n\n  case \"${_ANSWER}\" in\n    y|Y|yes|Yes|YES)\n      return 0\n      ;;\n    *)\n      _log \"[LIVE] Skipping changes for site: ${_SITE_NAME}\"\n      return 1\n      ;;\n  esac\n}\n\n_make_unique_target() {\n  # Ensure the given path becomes unique by renaming the existing\n  # file/dir/symlink to PATH-<timestamp>-<pid>-<random>\n  _TARGET=\"$1\"\n\n  if [ ! -e \"${_TARGET}\" ] && [ ! -L \"${_TARGET}\" ]; then\n    return\n  fi\n\n  _SUFFIX=\"$(date +%Y%m%d-%H%M%S)-$$-${RANDOM}\"\n  _NEW_TARGET=\"${_TARGET}-${_SUFFIX}\"\n\n  _log \"[APPLY] Target exists, renaming: ${_TARGET} -> ${_NEW_TARGET}\"\n  mv \"${_TARGET}\" \"${_NEW_TARGET}\"\n}\n\n_check_space_for_transfer() {\n  # Args:\n  #   1: source directory\n  #   2: target base directory (where new dir will live, e.g. /data/disk/oX/static/files/SITE)\n  #   3: description (e.g. \"move files\", \"copy private\")\n  #   4: site name\n  #   5: force detailed check (YES/NO, optional)\n  _SRC_DIR=\"$1\"\n  _TARGET_BASE=\"$2\"\n  _DESC=\"$3\"\n  _SITE=\"$4\"\n  _FORCE_DETAILED_CHECK=\"$5\"\n\n  if [ ! 
-d \"${_SRC_DIR}\" ]; then\n    _log \"[WARN] Source directory missing for ${_DESC} of site ${_SITE}: ${_SRC_DIR}\"\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _DRY_HAS_ERRORS=\"YES\"\n    fi\n    return 1\n  fi\n\n  _TARGET_CHECK=\"${_TARGET_BASE}\"\n  if [ ! -d \"${_TARGET_CHECK}\" ]; then\n    _TARGET_CHECK=$(dirname \"${_TARGET_CHECK}\")\n  fi\n\n  # Detect source and target devices\n  _SRC_DEV=$(df -P \"${_SRC_DIR}\" 2>/dev/null | awk 'NR==2 {print $1}')\n  _TGT_DEV=$(df -P \"${_TARGET_CHECK}\" 2>/dev/null | awk 'NR==2 {print $1}')\n\n  # If we got both devs and they match, a move on the same FS usually does not need\n  # extra free space. However, for copy operations used to break shared symlinks we\n  # must still verify there is enough space even on the same filesystem.\n  if [ -n \"${_SRC_DEV}\" ] && [ -n \"${_TGT_DEV}\" ] && [ \"${_SRC_DEV}\" = \"${_TGT_DEV}\" ]; then\n    if [ \"${_FORCE_DETAILED_CHECK}\" = \"YES\" ]; then\n      _log \"[INFO] Same filesystem detected for ${_DESC} of site ${_SITE} (${_SRC_DEV}) but detailed space check is required for copy operation.\"\n    else\n      _log \"[INFO] Space check skipped for ${_DESC} of site ${_SITE}: source and target on same filesystem (${_SRC_DEV}).\"\n      return 0\n    fi\n  fi\n\n  # Build FS pair key if possible\n  _KEY=\"${_SRC_DEV}|${_TGT_DEV}\"\n\n  # For anomaly handling (copy ... 
break sharing), always run a real per-site du/df check.\n  # We may still log global pair info for visibility, but we must not skip the detailed check.\n  if [ \"${_FORCE_DETAILED_CHECK}\" = \"YES\" ]; then\n    if [ -n \"${_SRC_DEV}\" ] && [ -n \"${_TGT_DEV}\" ]; then\n      _SRC_USED_KB=$(df -Pk \"${_SRC_DIR}\" 2>/dev/null | awk 'NR==2 {print $3}')\n      _AVAIL_TGT_KB=$(df -Pk \"${_TARGET_CHECK}\" 2>/dev/null | awk 'NR==2 {print $4}')\n      if [ -n \"${_SRC_USED_KB}\" ] && [ -n \"${_AVAIL_TGT_KB}\" ]; then\n        _log \"[INFO] Global filesystem snapshot for ${_DESC} of site ${_SITE}: pair ${_KEY}, src used=${_SRC_USED_KB}K, tgt avail=${_AVAIL_TGT_KB}K (informational only; per-site du/df check will still run)\"\n      fi\n    fi\n  else\n    # If we already know this pair is SPACIOUS, skip per-site checks (normal move logic only)\n    if [ -n \"${_SRC_DEV}\" ] && [ -n \"${_TGT_DEV}\" ] && [ -n \"${_FS_PAIR_STATUS[${_KEY}]+x}\" ]; then\n      if [ \"${_FS_PAIR_STATUS[${_KEY}]}\" = \"SPACIOUS\" ]; then\n        _log \"[INFO] Space check skipped for ${_DESC} of site ${_SITE}: filesystem pair ${_KEY} already marked SPACIOUS.\"\n        return 0\n      fi\n      # If marked TIGHT, we fall through to detailed per-site check below\n    fi\n\n    # If this FS pair hasn't been classified yet and we have dev IDs, try a global check.\n    if [ -n \"${_SRC_DEV}\" ] && [ -n \"${_TGT_DEV}\" ] && [ -z \"${_FS_PAIR_STATUS[${_KEY}]+x}\" ]; then\n      _SRC_USED_KB=$(df -Pk \"${_SRC_DIR}\" 2>/dev/null | awk 'NR==2 {print $3}')\n      _AVAIL_TGT_KB=$(df -Pk \"${_TARGET_CHECK}\" 2>/dev/null | awk 'NR==2 {print $4}')\n\n      if [ -n \"${_SRC_USED_KB}\" ] && [ -n \"${_AVAIL_TGT_KB}\" ] && [ \"${_AVAIL_TGT_KB}\" -ge \"${_SRC_USED_KB}\" ]; then\n        _FS_PAIR_STATUS[${_KEY}]=\"SPACIOUS\"\n        _log \"[INFO] Filesystem pair ${_KEY} marked SPACIOUS (src used=${_SRC_USED_KB}K, tgt avail=${_AVAIL_TGT_KB}K) – skipping per-site size checks.\"\n        return 0\n      else\n        
_FS_PAIR_STATUS[${_KEY}]=\"TIGHT\"\n        _log \"[INFO] Filesystem pair ${_KEY} marked TIGHT or unknown (src used=${_SRC_USED_KB:-?}K, tgt avail=${_AVAIL_TGT_KB:-?}K) – will run per-site space checks.\"\n        # Fall through to per-site du logic\n      fi\n    fi\n  fi\n\n  # Per-site detailed size check (fallback when FS pair is TIGHT or devs unknown)\n  _SIZE_KB=$(du -sk \"${_SRC_DIR}\" 2>/dev/null | awk '{print $1}')\n  if [ -z \"${_SIZE_KB}\" ]; then\n    _log \"[WARN] Unable to determine size for ${_SRC_DIR} (${_DESC} of site ${_SITE})\"\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _DRY_HAS_ERRORS=\"YES\"\n    fi\n    return 1\n  fi\n\n  _AVAIL_KB=$(df -Pk \"${_TARGET_CHECK}\" 2>/dev/null | awk 'NR==2 {print $4}')\n  if [ -z \"${_AVAIL_KB}\" ]; then\n    _log \"[WARN] Unable to determine available space on ${_TARGET_CHECK} (${_DESC} of site ${_SITE})\"\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _DRY_HAS_ERRORS=\"YES\"\n    fi\n    return 1\n  fi\n\n  _RESERVED_KB=0\n  _IS_BREAK_SHARING=\"NO\"\n  _USE_CUMULATIVE_RESERVATION=\"NO\"\n  if printf '%s' \"${_DESC}\" | grep -q \"break sharing\"; then\n    _IS_BREAK_SHARING=\"YES\"\n    # Cumulative reservation is for planning only (DRY mode).\n    # In LIVE/BATCH, df reflects actual space after each completed copy,\n    # so keeping reservations would double-count and cause false warnings.\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _USE_CUMULATIVE_RESERVATION=\"YES\"\n    fi\n  fi\n  if [ \"${_USE_CUMULATIVE_RESERVATION}\" = \"YES\" ]; then\n    if [ -n \"${_TGT_DEV}\" ] && [ -n \"${_FS_RESERVED_KB[${_TGT_DEV}]+x}\" ]; then\n      _RESERVED_KB=\"${_FS_RESERVED_KB[${_TGT_DEV}]}\"\n    fi\n  fi\n\n  _EFFECTIVE_AVAIL_KB=\"${_AVAIL_KB}\"\n  if [ \"${_RESERVED_KB}\" -gt 0 ]; then\n    _EFFECTIVE_AVAIL_KB=$(( _AVAIL_KB - _RESERVED_KB ))\n    if [ \"${_EFFECTIVE_AVAIL_KB}\" -lt 0 ]; then\n      _EFFECTIVE_AVAIL_KB=0\n    fi\n  fi\n\n  if [ \"${_EFFECTIVE_AVAIL_KB}\" -lt \"${_SIZE_KB}\" ]; then\n    if [ 
\"${_IS_BREAK_SHARING}\" = \"YES\" ] && [ \"${_USE_CUMULATIVE_RESERVATION}\" = \"YES\" ]; then\n      _log \"[WARN] Not enough space on ${_TARGET_CHECK} for ${_DESC} of site ${_SITE}: need ${_SIZE_KB}K, free now ${_AVAIL_KB}K, already reserved ${_RESERVED_KB}K, effective free ${_EFFECTIVE_AVAIL_KB}K\"\n    elif [ \"${_IS_BREAK_SHARING}\" = \"YES\" ]; then\n      _log \"[WARN] Not enough space on ${_TARGET_CHECK} for ${_DESC} of site ${_SITE}: need ${_SIZE_KB}K, free now ${_AVAIL_KB}K\"\n    else\n      _log \"[WARN] Not enough space on ${_TARGET_CHECK} for ${_DESC} of site ${_SITE}: need ${_SIZE_KB}K, have ${_EFFECTIVE_AVAIL_KB}K\"\n    fi\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _DRY_HAS_ERRORS=\"YES\"\n    fi\n    return 1\n  fi\n\n  if [ \"${_IS_BREAK_SHARING}\" = \"YES\" ]; then\n    _AFTER_COPY_KB=$(( _EFFECTIVE_AVAIL_KB - _SIZE_KB ))\n    if [ \"${_USE_CUMULATIVE_RESERVATION}\" = \"YES\" ]; then\n      _log \"[INFO] Ready to copy for ${_DESC} of site ${_SITE}: source size=${_SIZE_KB}K, target free now=${_AVAIL_KB}K, reserved on fs=${_RESERVED_KB}K, effective free=${_EFFECTIVE_AVAIL_KB}K, projected free after copy=${_AFTER_COPY_KB}K on ${_TARGET_CHECK}\"\n    else\n      _log \"[INFO] Ready to copy for ${_DESC} of site ${_SITE}: source size=${_SIZE_KB}K, target free now=${_AVAIL_KB}K, projected free after copy=${_AFTER_COPY_KB}K on ${_TARGET_CHECK}\"\n    fi\n    if [ \"${_USE_CUMULATIVE_RESERVATION}\" = \"YES\" ] && [ -n \"${_TGT_DEV}\" ]; then\n      _FS_RESERVED_KB[${_TGT_DEV}]=$(( _RESERVED_KB + _SIZE_KB ))\n      _log \"[INFO] Reserved cumulative space for pending break-sharing copies on ${_TGT_DEV}: ${_FS_RESERVED_KB[${_TGT_DEV}]}K\"\n    fi\n  else\n    _log \"[INFO] Space check OK for ${_DESC} of site ${_SITE}: need ${_SIZE_KB}K, available ${_EFFECTIVE_AVAIL_KB}K on ${_TARGET_CHECK}\"\n  fi\n  return 0\n}\n\n_check_symlink_sharing() {\n  # Args:\n  #   1: path to symlink (e.g. 
/.../sites/example.com/files)\n  #   2: type: \"files\" or \"private\"\n  #   3: site name (e.g. example.com)\n  #   4: account dir (/data/disk/o1)\n  _PATH=\"$1\"\n  _TYPE=\"$2\"\n  _SITE_NAME=\"$3\"\n  _ACCOUNT_DIR=\"$4\"\n\n  if [ ! -L \"${_PATH}\" ]; then\n    return\n  fi\n\n  _LINK_TARGET=$(readlink \"${_PATH}\")\n\n  if [ -z \"${_LINK_TARGET}\" ]; then\n    _log \"[WARN] ${_TYPE} symlink for site ${_SITE_NAME} is empty or broken: ${_PATH}\"\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _DRY_HAS_ERRORS=\"YES\"\n    fi\n    return\n  fi\n\n  # Expect something like /data/disk/o1/static/files/OTHER-SITE/files[/...]\n  case \"${_LINK_TARGET}\" in\n    *\"/static/files/\"*/\"${_TYPE}\"|*\"/static/files/\"*/\"${_TYPE}/\"|*\"/static/files/\"*/\"${_TYPE}/\"* )\n      _AFTER=\"${_LINK_TARGET#*/static/files/}\"\n      _TARGET_SITE_NAME=\"${_AFTER%%/*}\"\n      ;;\n    *)\n      # Unknown layout; in REPORT we just note it, in DRY we treat as edge case.\n      if [ \"${_MODE}\" = \"REPORT\" ]; then\n        _log \"[REPORT] ${_TYPE} symlink for site ${_SITE_NAME} has non-standard target: ${_LINK_TARGET}\"\n        return\n      fi\n      _log \"[WARN] ${_TYPE} symlink for site ${_SITE_NAME} points to unknown pattern: ${_LINK_TARGET} – skipping\"\n      if [ \"${_MODE}\" = \"DRY\" ]; then\n        _DRY_HAS_EDGE_CASES=\"YES\"\n      fi\n      return\n      ;;\n  esac\n\n  # REPORT mode: just classify and exit.\n  if [ \"${_MODE}\" = \"REPORT\" ]; then\n    if [ \"${_TARGET_SITE_NAME}\" = \"${_SITE_NAME}\" ]; then\n      # Proper, non-shared symlink.\n      _log \"[REPORT] ${_TYPE} symlink for site ${_SITE_NAME} → self (${_TARGET_SITE_NAME}), OK, target='${_LINK_TARGET}'\"\n      return\n    fi\n\n    _CONTROL_DIR=\"${_ACCOUNT_DIR}/static/control\"\n    _SHARE_THIS=\"${_CONTROL_DIR}/share.files.${_SITE_NAME}.info\"\n    _SHARE_TARGET=\"${_CONTROL_DIR}/share.files.${_TARGET_SITE_NAME}.info\"\n\n    _INTENT=\"ACCIDENTAL\"\n    if [ -f \"${_SHARE_THIS}\" ] || [ -f 
\"${_SHARE_TARGET}\" ]; then\n      _INTENT=\"INTENTIONAL\"\n      _SITE_HAS_INTENTIONAL_SHARED_SYMLINK=\"YES\"\n    else\n      _SITE_NEEDS_SHARED_SYMLINK_FIX=\"YES\"\n    fi\n\n    _log \"[REPORT] ${_TYPE} symlink for site ${_SITE_NAME} → ${_TARGET_SITE_NAME} (${_INTENT}), target='${_LINK_TARGET}'\"\n    return\n  fi\n\n  # For non-REPORT modes, continue with normal sharing logic.\n\n  if [ \"${_TARGET_SITE_NAME}\" = \"${_SITE_NAME}\" ]; then\n    return\n  fi\n\n  _CONTROL_DIR=\"${_ACCOUNT_DIR}/static/control\"\n  _SHARE_THIS=\"${_CONTROL_DIR}/share.files.${_SITE_NAME}.info\"\n  _SHARE_TARGET=\"${_CONTROL_DIR}/share.files.${_TARGET_SITE_NAME}.info\"\n\n  if [ -f \"${_SHARE_THIS}\" ] || [ -f \"${_SHARE_TARGET}\" ]; then\n    _SITE_HAS_INTENTIONAL_SHARED_SYMLINK=\"YES\"\n    _log \"[INFO] ${_TYPE} symlink for site ${_SITE_NAME} intentionally shares with ${_TARGET_SITE_NAME} (control file present) – leaving as is\"\n    return\n  fi\n\n  _SITE_NEEDS_SHARED_SYMLINK_FIX=\"YES\"\n\n  _NEW_TARGET_BASE=\"${_ACCOUNT_DIR}/static/files/${_SITE_NAME}\"\n  _ORIGINAL_TOP=\"${_ACCOUNT_DIR}/static/files/${_TARGET_SITE_NAME}/${_TYPE}\"\n  _NEW_TARGET=\"${_NEW_TARGET_BASE}/${_TYPE}\"\n\n  _log \"[INFO] ${_TYPE} symlink for site ${_SITE_NAME} currently points to ${_LINK_TARGET} (owned by ${_TARGET_SITE_NAME})\"\n  _log \"[PLAN] Break sharing: copy from ${_ORIGINAL_TOP} to ${_NEW_TARGET} and update symlink at ${_PATH}\"\n\n  if [ \"${_MODE}\" = \"LIVE\" ] && [ \"${_LIVE_SITE_APPROVED}\" != \"YES\" ]; then\n    if ! _ask_confirm_site \"${_SITE_NAME}\"; then\n      _log \"[LIVE] Skipping shared symlink fix for site: ${_SITE_NAME}\"\n      _SITE_SHARED_SYMLINK_FIX_SKIPPED=\"YES\"\n      return\n    fi\n    _LIVE_SITE_APPROVED=\"YES\"\n  fi\n\n  if [ \"${_MODE}\" = \"DRY\" ]; then\n    if [ ! 
-d \"${_ORIGINAL_TOP}\" ]; then\n      _log \"[WARN] Original ${_TYPE} directory not found at ${_ORIGINAL_TOP} – would fail breaking sharing\"\n      _DRY_HAS_ERRORS=\"YES\"\n      return\n    fi\n    # Check space on target filesystem (if different FS)\n    if ! _check_space_for_transfer \"${_ORIGINAL_TOP}\" \"${_NEW_TARGET_BASE}\" \"copy ${_TYPE} (break sharing)\" \"${_SITE_NAME}\" \"YES\"; then\n      # _check_space_for_transfer already logged + flagged DRY errors if needed\n      return\n    fi\n    return\n  fi\n\n  if [ ! -d \"${_ORIGINAL_TOP}\" ]; then\n    _log \"[WARN] Original ${_TYPE} directory not found at ${_ORIGINAL_TOP} – cannot break sharing for ${_SITE_NAME}\"\n    _SITE_SHARED_SYMLINK_FIX_SKIPPED=\"YES\"\n    return\n  fi\n\n  # LIVE / BATCH: space check\n  if ! _check_space_for_transfer \"${_ORIGINAL_TOP}\" \"${_NEW_TARGET_BASE}\" \"copy ${_TYPE} (break sharing)\" \"${_SITE_NAME}\" \"YES\"; then\n    _log \"[WARN] Skipping breaking sharing for ${_TYPE} of site ${_SITE_NAME} due to insufficient space.\"\n    _SITE_SHARED_SYMLINK_FIX_SKIPPED=\"YES\"\n    return\n  fi\n\n  if [ -e \"${_NEW_TARGET_BASE}\" ] && [ ! -d \"${_NEW_TARGET_BASE}\" ]; then\n    _make_unique_target \"${_NEW_TARGET_BASE}\"\n  fi\n\n  if [ ! -d \"${_NEW_TARGET_BASE}\" ]; then\n    _log \"[APPLY] Creating directory: ${_NEW_TARGET_BASE}\"\n    mkdir -p \"${_NEW_TARGET_BASE}\"\n  fi\n\n  if [ -e \"${_NEW_TARGET}\" ] || [ -L \"${_NEW_TARGET}\" ]; then\n    _make_unique_target \"${_NEW_TARGET}\"\n  fi\n\n  _log \"[APPLY] Copying ${_ORIGINAL_TOP} -> ${_NEW_TARGET}\"\n  mkdir -p \"${_NEW_TARGET}\"\n  if ! rsync -a \"${_ORIGINAL_TOP}/\" \"${_NEW_TARGET}/\"; then\n    _log \"[ERROR] rsync failed for site ${_SITE_NAME} (${_TYPE}: ${_ORIGINAL_TOP} -> ${_NEW_TARGET}) – leaving symlink unchanged.\"\n    _SITE_SHARED_SYMLINK_FIX_SKIPPED=\"YES\"\n    return\n  fi\n\n  _log \"[APPLY] Updating symlink: ${_PATH} -> ${_NEW_TARGET}\"\n  rm -f \"${_PATH}\"\n  if ! 
ln -s \"${_NEW_TARGET}\" \"${_PATH}\"; then\n    _log \"[ERROR] Failed to create symlink for site ${_SITE_NAME} (${_TYPE}: ${_PATH} -> ${_NEW_TARGET})\"\n    _SITE_SHARED_SYMLINK_FIX_SKIPPED=\"YES\"\n    return\n  fi\n  _SITE_SHARED_SYMLINK_FIX_APPLIED=\"YES\"\n}\n\n_process_site() {\n  # Handle single site for given account.\n  # Args:\n  #   1: account name (e.g. o1)\n  #   2: account dir (e.g. /data/disk/o1)\n  #   3: site name (e.g. example.com)\n  #   4: alias file (e.g. /data/disk/o1/.drush/example.com.alias.drushrc.php)\n  _ACCOUNT_NAME=\"$1\"\n  _ACCOUNT_DIR=\"$2\"\n  _SITE_NAME=\"$3\"\n  _ALIAS_FILE=\"$4\"\n\n  _SITE_PATH=$(_get_site_path \"${_ALIAS_FILE}\")\n\n  if [ -z \"${_SITE_PATH}\" ]; then\n    _log \"[WARN] No site_path found for ${_SITE_NAME} in ${_ALIAS_FILE} – skipping\"\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _DRY_HAS_ERRORS=\"YES\"\n    fi\n    return\n  fi\n\n  if [ ! -d \"${_SITE_PATH}\" ]; then\n    _log \"[WARN] site_path directory does not exist for ${_SITE_NAME}: ${_SITE_PATH} – skipping\"\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _DRY_HAS_ERRORS=\"YES\"\n    fi\n    return\n  fi\n\n  _FILES_PATH=\"${_SITE_PATH}/files\"\n  _PRIVATE_PATH=\"${_SITE_PATH}/private\"\n\n  _TARGET_BASE=\"${_ACCOUNT_DIR}/static/files/${_SITE_NAME}\"\n  _DO_FILES_CONVERT=\"NO\"\n  _DO_PRIVATE_CONVERT=\"NO\"\n\n  # Per-site flags used for summary / LIVE confirmation re-use\n  _LIVE_SITE_APPROVED=\"NO\"\n  _SITE_NEEDS_SHARED_SYMLINK_FIX=\"NO\"\n  _SITE_HAS_INTENTIONAL_SHARED_SYMLINK=\"NO\"\n  _SITE_SHARED_SYMLINK_FIX_APPLIED=\"NO\"\n  _SITE_SHARED_SYMLINK_FIX_SKIPPED=\"NO\"\n\n  # Decide what needs to be done for files/\n  if [ -d \"${_FILES_PATH}\" ] && [ ! -L \"${_FILES_PATH}\" ]; then\n    _DO_FILES_CONVERT=\"YES\"\n  fi\n\n  # Decide what needs to be done for private/\n  if [ -d \"${_PRIVATE_PATH}\" ] && [ ! 
-L \"${_PRIVATE_PATH}\" ]; then\n    _DO_PRIVATE_CONVERT=\"YES\"\n  fi\n\n  _HAS_LOCAL_CONVERT=\"NO\"\n  if [ \"${_DO_FILES_CONVERT}\" = \"YES\" ] || [ \"${_DO_PRIVATE_CONVERT}\" = \"YES\" ]; then\n    _HAS_LOCAL_CONVERT=\"YES\"\n  fi\n\n  _log \"------------------------------------------------------------------\"\n  _log \"[INFO] Site: ${_SITE_NAME}\"\n  _log \"[INFO] Alias: ${_ALIAS_FILE}\"\n  _log \"[INFO] site_path: ${_SITE_PATH}\"\n\n  # Always inspect symlinks, even when files/private are already symlinks,\n  # because they may accidentally point to another site's target.\n  if [ -L \"${_FILES_PATH}\" ]; then\n    _check_symlink_sharing \"${_FILES_PATH}\" \"files\" \"${_SITE_NAME}\" \"${_ACCOUNT_DIR}\"\n  fi\n  if [ -L \"${_PRIVATE_PATH}\" ]; then\n    _check_symlink_sharing \"${_PRIVATE_PATH}\" \"private\" \"${_SITE_NAME}\" \"${_ACCOUNT_DIR}\"\n  fi\n\n  # Clearer one-line summary per site after symlink inspection.\n  _SHARED_FIX_SUMMARY=\"shared symlink fix needed\"\n  if [ \"${_SITE_NEEDS_SHARED_SYMLINK_FIX}\" = \"YES\" ] && [ \"${_MODE}\" != \"DRY\" ] && [ \"${_MODE}\" != \"REPORT\" ]; then\n    if [ \"${_SITE_SHARED_SYMLINK_FIX_APPLIED}\" = \"YES\" ] && [ \"${_SITE_SHARED_SYMLINK_FIX_SKIPPED}\" = \"YES\" ]; then\n      _SHARED_FIX_SUMMARY=\"shared symlink fix partially applied\"\n    elif [ \"${_SITE_SHARED_SYMLINK_FIX_APPLIED}\" = \"YES\" ]; then\n      _SHARED_FIX_SUMMARY=\"shared symlink fix applied\"\n    elif [ \"${_SITE_SHARED_SYMLINK_FIX_SKIPPED}\" = \"YES\" ]; then\n      _SHARED_FIX_SUMMARY=\"shared symlink fix pending (not applied)\"\n    fi\n  fi\n\n  if [ \"${_HAS_LOCAL_CONVERT}\" = \"YES\" ] && [ \"${_SITE_NEEDS_SHARED_SYMLINK_FIX}\" = \"YES\" ]; then\n    _log \"[INFO] Summary for ${_SITE_NAME}: local conversion needed + ${_SHARED_FIX_SUMMARY}\"\n  elif [ \"${_HAS_LOCAL_CONVERT}\" = \"YES\" ]; then\n    _log \"[INFO] Summary for ${_SITE_NAME}: local conversion needed\"\n  elif [ \"${_SITE_NEEDS_SHARED_SYMLINK_FIX}\" = \"YES\" ]; then\n    
_log \"[INFO] Summary for ${_SITE_NAME}: ${_SHARED_FIX_SUMMARY}\"\n  elif [ \"${_SITE_HAS_INTENTIONAL_SHARED_SYMLINK}\" = \"YES\" ]; then\n    _log \"[INFO] Summary for ${_SITE_NAME}: intentional sharing detected (left as is)\"\n  else\n    _log \"[INFO] Summary for ${_SITE_NAME}: clean (no conversion needed, no accidental sharing)\"\n  fi\n\n  # REPORT mode is read-only; symlink sharing checks above already produced output.\n  if [ \"${_MODE}\" = \"REPORT\" ]; then\n    return\n  fi\n\n  # If nothing needs conversion from local dir -> symlink, and symlink checks were done,\n  # then only now it's safe to say \"nothing to do\".\n  if [ \"${_HAS_LOCAL_CONVERT}\" != \"YES\" ]; then\n    _log \"[INFO] No local files/private dirs to convert for site ${_SITE_NAME} (symlink sharing already checked above)\"\n    return\n  fi\n\n  # --- Normal modes (DRY / LIVE / BATCH) ---\n\n  if [ \"${_DO_FILES_CONVERT}\" = \"YES\" ]; then\n    if [ -e \"${_TARGET_BASE}/files\" ] || [ -L \"${_TARGET_BASE}/files\" ]; then\n      _log \"[PLAN] Existing target will be renamed before move: ${_TARGET_BASE}/files -> ${_TARGET_BASE}/files-<hashstring>\"\n      if [ \"${_MODE}\" = \"DRY\" ]; then\n        _DRY_HAS_EDGE_CASES=\"YES\"\n      fi\n    fi\n    _log \"[PLAN] Move directory: ${_FILES_PATH} -> ${_TARGET_BASE}/files\"\n    _log \"[PLAN] Symlink: ${_FILES_PATH} -> ${_TARGET_BASE}/files\"\n\n    # DRY: check space for cross-FS moves, so batch can know if it's safe.\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _check_space_for_transfer \"${_FILES_PATH}\" \"${_TARGET_BASE}\" \"move files\" \"${_SITE_NAME}\" || true\n    fi\n  fi\n\n  if [ \"${_DO_PRIVATE_CONVERT}\" = \"YES\" ]; then\n    if [ -e \"${_TARGET_BASE}/private\" ] || [ -L \"${_TARGET_BASE}/private\" ]; then\n      _log \"[PLAN] Existing target will be renamed before move: ${_TARGET_BASE}/private -> ${_TARGET_BASE}/private-<hashstring>\"\n      if [ \"${_MODE}\" = \"DRY\" ]; then\n        _DRY_HAS_EDGE_CASES=\"YES\"\n      fi\n 
   fi\n    _log \"[PLAN] Move directory: ${_PRIVATE_PATH} -> ${_TARGET_BASE}/private\"\n    _log \"[PLAN] Symlink: ${_PRIVATE_PATH} -> ${_TARGET_BASE}/private\"\n\n    if [ \"${_MODE}\" = \"DRY\" ]; then\n      _check_space_for_transfer \"${_PRIVATE_PATH}\" \"${_TARGET_BASE}\" \"move private\" \"${_SITE_NAME}\" || true\n    fi\n  fi\n\n  if [ \"${_MODE}\" = \"DRY\" ]; then\n    _log \"[DRY] DRY-RUN mode – conversion planned for ${_SITE_NAME}, no changes applied\"\n  else\n    # LIVE and BATCH\n    if [ \"${_MODE}\" = \"LIVE\" ]; then\n      if [ \"${_LIVE_SITE_APPROVED}\" != \"YES\" ]; then\n        if ! _ask_confirm_site \"${_SITE_NAME}\"; then\n          _log \"[LIVE] Skipping conversion for site: ${_SITE_NAME}\"\n          return\n        fi\n        _LIVE_SITE_APPROVED=\"YES\"\n      fi\n\n      if [ -e \"${_TARGET_BASE}\" ] && [ ! -d \"${_TARGET_BASE}\" ]; then\n        _make_unique_target \"${_TARGET_BASE}\"\n      fi\n      if [ ! -d \"${_TARGET_BASE}\" ]; then\n        _log \"[APPLY] Creating directory: ${_TARGET_BASE}\"\n        mkdir -p \"${_TARGET_BASE}\"\n      fi\n\n      if [ \"${_DO_FILES_CONVERT}\" = \"YES\" ]; then\n        if ! _check_space_for_transfer \"${_FILES_PATH}\" \"${_TARGET_BASE}\" \"move files\" \"${_SITE_NAME}\"; then\n          _log \"[WARN] Skipping moving files for site ${_SITE_NAME} due to insufficient space.\"\n        else\n          if [ -e \"${_TARGET_BASE}/files\" ] || [ -L \"${_TARGET_BASE}/files\" ]; then\n            _make_unique_target \"${_TARGET_BASE}/files\"\n          fi\n          _log \"[APPLY] Moving: ${_FILES_PATH} -> ${_TARGET_BASE}/files\"\n          mv \"${_FILES_PATH}\" \"${_TARGET_BASE}/files\"\n          _log \"[APPLY] Creating symlink: ${_FILES_PATH} -> ${_TARGET_BASE}/files\"\n          ln -s \"${_TARGET_BASE}/files\" \"${_FILES_PATH}\"\n        fi\n      fi\n\n      if [ \"${_DO_PRIVATE_CONVERT}\" = \"YES\" ]; then\n        if ! 
_check_space_for_transfer \"${_PRIVATE_PATH}\" \"${_TARGET_BASE}\" \"move private\" \"${_SITE_NAME}\"; then\n          _log \"[WARN] Skipping moving private for site ${_SITE_NAME} due to insufficient space.\"\n        else\n          if [ -e \"${_TARGET_BASE}/private\" ] || [ -L \"${_TARGET_BASE}/private\" ]; then\n            _make_unique_target \"${_TARGET_BASE}/private\"\n          fi\n          _log \"[APPLY] Moving: ${_PRIVATE_PATH} -> ${_TARGET_BASE}/private\"\n          mv \"${_PRIVATE_PATH}\" \"${_TARGET_BASE}/private\"\n          _log \"[APPLY] Creating symlink: ${_PRIVATE_PATH} -> ${_TARGET_BASE}/private\"\n          ln -s \"${_TARGET_BASE}/private\" \"${_PRIVATE_PATH}\"\n        fi\n      fi\n    else\n      # BATCH (no per-site confirmation)\n      if [ -e \"${_TARGET_BASE}\" ] && [ ! -d \"${_TARGET_BASE}\" ]; then\n        _make_unique_target \"${_TARGET_BASE}\"\n      fi\n      if [ ! -d \"${_TARGET_BASE}\" ]; then\n        _log \"[APPLY] Creating directory: ${_TARGET_BASE}\"\n        mkdir -p \"${_TARGET_BASE}\"\n      fi\n\n      if [ \"${_DO_FILES_CONVERT}\" = \"YES\" ]; then\n        if ! _check_space_for_transfer \"${_FILES_PATH}\" \"${_TARGET_BASE}\" \"move files\" \"${_SITE_NAME}\"; then\n          _log \"[WARN] Skipping moving files for site ${_SITE_NAME} due to insufficient space.\"\n        else\n          if [ -e \"${_TARGET_BASE}/files\" ] || [ -L \"${_TARGET_BASE}/files\" ]; then\n            _make_unique_target \"${_TARGET_BASE}/files\"\n          fi\n          _log \"[APPLY] Moving: ${_FILES_PATH} -> ${_TARGET_BASE}/files\"\n          mv \"${_FILES_PATH}\" \"${_TARGET_BASE}/files\"\n          _log \"[APPLY] Creating symlink: ${_FILES_PATH} -> ${_TARGET_BASE}/files\"\n          ln -s \"${_TARGET_BASE}/files\" \"${_FILES_PATH}\"\n        fi\n      fi\n\n      if [ \"${_DO_PRIVATE_CONVERT}\" = \"YES\" ]; then\n        if ! 
_check_space_for_transfer \"${_PRIVATE_PATH}\" \"${_TARGET_BASE}\" \"move private\" \"${_SITE_NAME}\"; then\n          _log \"[WARN] Skipping moving private for site ${_SITE_NAME} due to insufficient space.\"\n        else\n          if [ -e \"${_TARGET_BASE}/private\" ] || [ -L \"${_TARGET_BASE}/private\" ]; then\n            _make_unique_target \"${_TARGET_BASE}/private\"\n          fi\n          _log \"[APPLY] Moving: ${_PRIVATE_PATH} -> ${_TARGET_BASE}/private\"\n          mv \"${_PRIVATE_PATH}\" \"${_TARGET_BASE}/private\"\n          _log \"[APPLY] Creating symlink: ${_PRIVATE_PATH} -> ${_TARGET_BASE}/private\"\n          ln -s \"${_TARGET_BASE}/private\" \"${_PRIVATE_PATH}\"\n        fi\n      fi\n    fi\n  fi\n}\n\n_process_account() {\n  # Handle one /data/disk/* account.\n  # Arg: account dir (/data/disk/o1)\n  _ACCOUNT_DIR=\"$1\"\n  _ACCOUNT_NAME=$(basename \"${_ACCOUNT_DIR}\")\n\n  # Ignore /data/disk/arch/\n  if [ \"${_ACCOUNT_NAME}\" = \"arch\" ]; then\n    return\n  fi\n\n  _VHOST_DIR=\"${_ACCOUNT_DIR}/config/server_master/nginx/vhost.d\"\n  _DRUSH_DIR=\"${_ACCOUNT_DIR}/.drush\"\n\n  if [ ! -d \"${_VHOST_DIR}\" ]; then\n    return\n  fi\n\n  if [ ! -d \"${_DRUSH_DIR}\" ]; then\n    return\n  fi\n\n  _log \"==================================================================\"\n  _log \"[INFO] Processing account: ${_ACCOUNT_NAME} (${_ACCOUNT_DIR})\"\n\n  # Track detected/valid sites for orphan static/files reporting\n  local -A _SITES_IN_ACCOUNT\n\n  # Iterate Drush aliases, use base name as site name, match with vhost.\n  for _ALIAS_FILE in \"${_DRUSH_DIR}\"/*.alias.drushrc.php; do\n    if [ ! -f \"${_ALIAS_FILE}\" ]; then\n      continue\n    fi\n\n    _ALIAS_BASENAME=$(basename \"${_ALIAS_FILE}\")\n    # Strip .alias.drushrc.php\n    _SITE_NAME=${_ALIAS_BASENAME%.alias.drushrc.php}\n\n    _VHOST_FILE=\"${_VHOST_DIR}/${_SITE_NAME}\"\n\n    # Proceed only for valid vhost/drush-alias pairs\n    if [ ! 
-f \"${_VHOST_FILE}\" ]; then\n      continue\n    fi\n\n    _SITES_IN_ACCOUNT[\"${_SITE_NAME}\"]=\"YES\"\n\n    _process_site \"${_ACCOUNT_NAME}\" \"${_ACCOUNT_DIR}\" \"${_SITE_NAME}\" \"${_ALIAS_FILE}\"\n  done\n\n  # In REPORT mode, also list orphaned static/files entries (leftovers after site deletion).\n  # We DO NOT delete anything; this is for operator review only.\n  if [ \"${_MODE}\" = \"REPORT\" ]; then\n    _STATIC_FILES_DIR=\"${_ACCOUNT_DIR}/static/files\"\n    if [ -d \"${_STATIC_FILES_DIR}\" ]; then\n      _FOUND_ORPHANS=\"NO\"\n      for _CAND in \"${_STATIC_FILES_DIR}\"/*; do\n        if [ ! -e \"${_CAND}\" ]; then\n          continue\n        fi\n\n        _CAND_NAME=$(basename \"${_CAND}\")\n\n        # Skip known non-site entries\n        if [ \"${_CAND_NAME}\" = \"dbackup\" ] || [ \"${_CAND_NAME}\" = \"foo.com\" ]; then\n          continue\n        fi\n\n        # Ignore hidden/metadata entries\n        case \"${_CAND_NAME}\" in\n          .*|lost+found)\n            continue\n            ;;\n        esac\n\n        # Only consider entries that look like site storage (dir or symlink with files/private inside).\n        if [ -d \"${_CAND}\" ] || [ -L \"${_CAND}\" ]; then\n          if [ -d \"${_CAND}/files\" ] || [ -d \"${_CAND}/private\" ] || [ -L \"${_CAND}\" ]; then\n            if [ -z \"${_SITES_IN_ACCOUNT[\"${_CAND_NAME}\"]+x}\" ]; then\n              _FOUND_ORPHANS=\"YES\"\n              _ORPH_KB=$(du -sk \"${_CAND}\" 2>/dev/null | awk 'NR==1 {print $1}')\n              if [ -z \"${_ORPH_KB}\" ]; then\n                _ORPH_KB=\"?\"\n              fi\n              _log \"[REPORT] ORPHAN static/files entry in account ${_ACCOUNT_NAME}: ${_CAND} (size=${_ORPH_KB}K) – no matching active site (vhost+alias) found\"\n            fi\n          fi\n        fi\n      done\n\n      if [ \"${_FOUND_ORPHANS}\" != \"YES\" ]; then\n        _log \"[REPORT] No orphan static/files entries detected in account ${_ACCOUNT_NAME}\"\n      fi\n    fi\n  
fi\n}\n\n_main() {\n  _log \"==================================================================\"\n  if [ \"${_MODE}\" = \"BATCH_IF_CLEAN\" ]; then\n    _log \"[START] autosymlink running in BATCH_IF_CLEAN mode (DRY -> if CLEAN -> BATCH)\"\n    # First pass: DRY run\n    _MODE=\"DRY\"\n    _DRY_HAS_ERRORS=\"NO\"\n    _DRY_HAS_EDGE_CASES=\"NO\"\n    _APPLY_COUNT=0\n\n    for _ACCOUNT_DIR in /data/disk/*; do\n      if [ ! -d \"${_ACCOUNT_DIR}\" ] || [ ! -e \"${_ACCOUNT_DIR}/tools/drush\" ]; then\n        continue\n      fi\n      _process_account \"${_ACCOUNT_DIR}\"\n    done\n\n    _log_reserved_fs_summary\n    _finalize_dry_run_state\n\n    if [ \"${_DRY_HAS_ERRORS}\" != \"NO\" ] || [ \"${_DRY_HAS_EDGE_CASES}\" != \"NO\" ]; then\n      _write_state \"NOT_CLEAN\" \"BATCH_IF_CLEAN\"\n      _log \"[WARN] DRY run is NOT CLEAN, skipping BATCH\"\n      _log \"[DONE] autosymlink finished\"\n      exit 10\n    fi\n\n    # Second pass: BATCH\n    _MODE=\"BATCH\"\n    _APPLY_COUNT=0\n    _log \"==================================================================\"\n    _log \"[START] autosymlink running in BATCH mode (no per-site confirmation)\"\n    _require_clean_dry_run_or_exit\n\n    for _ACCOUNT_DIR in /data/disk/*; do\n      if [ ! -d \"${_ACCOUNT_DIR}\" ] || [ ! 
-e \"${_ACCOUNT_DIR}/tools/drush\" ]; then\n        continue\n      fi\n      _process_account \"${_ACCOUNT_DIR}\"\n    done\n\n    _log_reserved_fs_summary\n    _write_state \"DONE\" \"BATCH_IF_CLEAN\"\n    _log \"[DONE] autosymlink finished\"\n    exit 0\n  fi\n\n  if [ \"${_MODE}\" = \"LIVE\" ]; then\n    _log \"[START] autosymlink running in LIVE mode (per-site confirmation required)\"\n  elif [ \"${_MODE}\" = \"BATCH\" ]; then\n    _log \"[START] autosymlink running in BATCH mode (no per-site confirmation)\"\n    _require_clean_dry_run_or_exit\n  elif [ \"${_MODE}\" = \"REPORT\" ]; then\n    _log \"[START] autosymlink running in REPORT mode (read-only, no changes)\"\n  else\n    _log \"[START] autosymlink running in DRY-RUN mode (no changes will be made)\"\n  fi\n\n  for _ACCOUNT_DIR in /data/disk/*; do\n    if [ ! -d \"${_ACCOUNT_DIR}\" ] || [ ! -e \"${_ACCOUNT_DIR}/tools/drush\" ]; then\n      continue\n    fi\n    _process_account \"${_ACCOUNT_DIR}\"\n  done\n\n  _log_reserved_fs_summary\n\n  if [ \"${_MODE}\" = \"DRY\" ]; then\n    _finalize_dry_run_state\n  fi\n\n  _log \"[DONE] autosymlink finished\"\n}\n\n_main \"$@\"\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/autoupboa",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_barCnf=\"/root/.barracuda.cnf\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n#\n# Protect from any autoupboa updates when _SKYNET_MODE=OFF\nif [ ! -z \"${_SKYNET_MODE}\" ] && [ \"${_SKYNET_MODE}\" = \"OFF\" ]; then\n  exit 0\nfi\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! -z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n  if [ -n \"${_LOC_IP}\" ] && grep -qE \"${_LOC_IP}\\s\" /etc/hosts; then\n    cp -af /etc/hosts /etc/.was.hosts\n    sed -i \"s/^${_LOC_IP}.*//g\" /etc/hosts\n  fi\n}\n\n#\n# Valkey without RDB snapshots\n_valkey_no_rdb() {\n  if [ -e \"/etc/valkey/valkey.conf\" ]; then\n    _NEEDS_VALKEY_RESTART=\n    if ! grep -qE '^maxmemory-policy[[:space:]]*allkeys-lru' /etc/valkey/valkey.conf; then\n      sed -i \"s/^maxmemory-policy .*/maxmemory-policy allkeys-lru/g\" /etc/valkey/valkey.conf &> /dev/null\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if ! 
grep -qE '^[[:space:]]*save\\b' /etc/valkey/valkey.conf; then\n      sed -i \"s/^# save \\\"\\\"/save \\\"\\\"/g\" /etc/valkey/valkey.conf &> /dev/null\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if ! grep -qE '^[[:space:]]*save\\b' /etc/valkey/valkey.conf; then\n      printf '\\n# Cache-only instance: disable persistence (no RDB snapshots)\\nsave \"\"\\n' >> /etc/valkey/valkey.conf\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if [ -n \"${_NEEDS_VALKEY_RESTART}\" ]; then\n      service valkey-server restart &> /dev/null\n    fi\n  fi\n}\n_valkey_no_rdb\n\n# Function to enable fstrim instead of discard\n_if_enable_fstrim() {\n  if [ -x \"/usr/sbin/fstrim\" ] && [ -d \"/etc/cron.daily\" ] && [ ! -e \"/etc/cron.daily/fstrim\" ]; then\n    cat > /etc/cron.daily/fstrim <<'EOF'\n#!/bin/bash\n# Run TRIM periodically instead of using mount option \"discard\"\ncommand -v fstrim >/dev/null 2>&1 || exit 0\nfstrim -av >/dev/null 2>&1\nexit 0\nEOF\n    chmod 0755 /etc/cron.daily/fstrim\n  fi\n}\n_if_enable_fstrim\n\n# Function to calculate RAM usage percentage as an integer\n_calculate_ram_usage_percent() {\n  _total_ram_kb=$1\n  _available_ram_kb=$2\n  used_ram_kb=$((_total_ram_kb - _available_ram_kb))\n\n  # Using integer division to get a whole number percentage\n  echo $(( (used_ram_kb * 100) / _total_ram_kb ))\n}\n\n# Function to check and display system info\n_check_system_ram() {\n  # Get the total and available RAM in KB\n  _total_ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')\n  _available_ram_kb=$(grep MemAvailable /proc/meminfo | awk '{print $2}')\n\n  # Calculate RAM usage percentage\n  _ram_usage_percent=$(_calculate_ram_usage_percent ${_total_ram_kb} ${_available_ram_kb})\n}\n\n# Function to check and optimize RAM and disk caches\n_optimize_ram() {\n  _check_system_ram\n  if [ \"${_ram_usage_percent}\" -gt 90 ]; then\n    sync && echo 3 | tee /proc/sys/vm/drop_caches\n  fi\n}\n\n###\n### Return 0 if package is installed, 1 
otherwise.\n###\n_pkg_installed() {\n  /usr/bin/dpkg-query -W -f='${Status}' \"$1\" 2>/dev/null | grep -qx 'install ok installed'\n}\n\n###\n### Turn off AppArmor\n###\n_turn_off_apparmor() {\n\n  # Fix shell in AppArmor scripts (only do this if needed)\n  grep -q \"/usr/bin/sh\" /lib/apparmor/apparmor.systemd && \\\n    sed -i \"s#/usr/bin/sh#/bin/bash#g\" /lib/apparmor/apparmor.systemd\n  grep -q \"/bin/sh\" /lib/apparmor/apparmor.systemd && \\\n    sed -i \"s#/bin/sh#/bin/bash#g\" /lib/apparmor/apparmor.systemd\n\n  grep -q \"/usr/bin/sh\" /lib/apparmor/rc.apparmor.functions && \\\n    sed -i \"s#/usr/bin/sh#/bin/bash#g\" /lib/apparmor/rc.apparmor.functions\n  grep -q \"/bin/sh\" /lib/apparmor/rc.apparmor.functions && \\\n    sed -i \"s#/bin/sh#/bin/bash#g\" /lib/apparmor/rc.apparmor.functions\n\n  grep -q \"/usr/bin/sh\" /lib/apparmor/profile-load && \\\n    sed -i \"s#/usr/bin/sh#/bin/bash#g\" /lib/apparmor/profile-load\n  grep -q \"/bin/sh\" /lib/apparmor/profile-load && \\\n    sed -i \"s#/bin/sh#/bin/bash#g\" /lib/apparmor/profile-load\n\n  # Disable AppArmor and Auditd\n  if [ -e \"/etc/apparmor.d\" ] && [ -e \"/var/cache/apparmor\" ]; then\n    rm -rf /var/cache/apparmor/* &> /dev/null\n    rm -f /etc/apparmor.d/*~ &> /dev/null\n    apparmor_parser -r /etc/apparmor.d/* &> /dev/null\n    rm -f /etc/apparmor.d/*~ &> /dev/null\n    aa-complain /etc/apparmor.d/* &> /dev/null\n  fi\n  if [ -x \"/etc/init.d/apparmor\" ]; then\n    service apparmor stop &> /dev/null\n    update-rc.d -f apparmor remove &> /dev/null\n    service auditd stop &> /dev/null\n    update-rc.d -f auditd remove &> /dev/null\n  fi\n  if [ -x \"/usr/sbin/aa-teardown\" ]; then\n    aa-teardown &> /dev/null\n  fi\n  for _PKG in auditd apparmor apparmor-utils apparmor-notify apparmor-profiles apparmor-profiles-extra; do\n    if _pkg_installed \"${_PKG}\"; then\n      /usr/bin/apt-get remove ${_PKG} -y -qq &> /dev/null\n    fi\n  done\n  
_isAppArmorGrubFile=\"/etc/default/grub.d/apparmor.cfg\"\n  mkdir -p /etc/default/grub.d\n  echo 'GRUB_CMDLINE_LINUX_DEFAULT=\"$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0\"' | tee ${_isAppArmorGrubFile} &> /dev/null\n  update-grub &> /dev/null\n  [ ! -e \"/root/.disable.apparmor.cnf\" ] && touch /root/.disable.apparmor.cnf\n}\n\n###\n### Only disable AppArmor if it’s active (profiles currently loaded)\n###\n_if_off_apparmor() {\n  if [ ! -e \"/root/.keep_apparmor_on.cnf\" ]; then\n    _isAppArmOn=N\n    if [ -e \"/sys/module/apparmor/parameters/enabled\" ]; then\n      _isAppArmOn=\"$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null | tr -d '\\n')\"\n    fi\n    if [ \"${_isAppArmOn}\" = \"Y\" ]; then\n      _turn_off_apparmor\n    fi\n  fi\n}\n_if_off_apparmor\n\n# Define default values\n_DEFAULT_CPU_CRIT_RATIO=\"6.1\"\n_DEFAULT_CPU_MAX_RATIO=\"4.1\"\n_DEFAULT_CPU_TASK_RATIO=\"3.1\"\n_DEFAULT_CPU_SPIDER_RATIO=\"2.1\"\n_LOCK_CPU_CONFIG=\"/root/.lock.cpu.update.f98.cnf\"\n_UPDATE_CPU_PID=\"/var/log/boa/.update_conf_variable.ctrl.f98.${_tRee}.${_xSrl}.pid\"\n_LOCK_DOS_CONFIG=\"/root/.lock.dos.update.f98.cnf\"\n_LOCK_DOS_PID=\"/var/log/boa/.sync_scan_nginx.ctrl.f98.${_tRee}.${_xSrl}.pid\"\n_ALLOW_SENDMAIL_PID=\"/var/log/boa/.allow_sendmail.ctrl.f98.${_tRee}.${_xSrl}.pid\"\n_DEFAULT_NGINX_DOS_LINES=1999\n_DEFAULT_NGINX_DOS_LIMIT=399\n_DEFAULT_NGINX_DOS_MODE=2\n_DEFAULT_NGINX_DOS_DIV_INC_NR=40\n_DEFAULT_NGINX_DOS_INC_MIN=3\n_DEFAULT_NGINX_DOS_LOG=SILENT\n_DEFAULT_NGINX_DOS_IGNORE=\"doccomment\"\n_DEFAULT_NGINX_DOS_STOP=\"WAITFOR.DELAY|DECLARE.*@x|/\\*\\*/|%27.*%29.*%3B|0x[0-9a-f]{6}\"\n\n# Function to update or add variables in the configuration file\n_update_conf_variable() {\n  _var_name=\"$1\"\n  _default_value=\"$2\"\n  _conf_file=\"$3\"\n\n  if grep -qE \"^#?${_var_name}=\" \"${_conf_file}\"; then\n    # Variable exists, uncomment and update value\n    sed -i \"s|^#\\?${_var_name}=.*|${_var_name}=\\\"${_default_value}\\\"|g\" \"${_conf_file}\"\n  else\n    # 
Variable doesn't exist, append to the file\n    echo \"${_var_name}=\\\"${_default_value}\\\"\" >> \"${_conf_file}\"\n  fi\n}\n\nif [ ! -e \"${_LOCK_CPU_CONFIG}\" ]; then\n  if [ ! -e \"${_UPDATE_CPU_PID}\" ]; then\n    # Update variables\n    _update_conf_variable \"_CPU_CRIT_RATIO\" \"${_DEFAULT_CPU_CRIT_RATIO}\" \"${_barCnf}\"\n    _update_conf_variable \"_CPU_MAX_RATIO\" \"${_DEFAULT_CPU_MAX_RATIO}\" \"${_barCnf}\"\n    _update_conf_variable \"_CPU_TASK_RATIO\" \"${_DEFAULT_CPU_TASK_RATIO}\" \"${_barCnf}\"\n    _update_conf_variable \"_CPU_SPIDER_RATIO\" \"${_DEFAULT_CPU_SPIDER_RATIO}\" \"${_barCnf}\"\n    rm -f /var/log/boa/.update_conf_variable.ctrl*\n    touch ${_UPDATE_CPU_PID}\n    touch ${_LOCK_CPU_CONFIG}\n  fi\nfi\n\n# Define directories and error log path\n_SSL_DIR=\"/etc/ssl/private\"\n_NGINX_ERROR_LOG=\"/var/log/nginx/error.log\"\n\n# Function to check if a file is empty\n_is_empty_file() {\n  local file=\"$1\"\n  [[ ! -s \"$file\" ]]\n}\n\n# Check for empty .dhp files and attempt to replace or regenerate them\n_process_dhp_files() {\n  _dhp_files=(\"${_SSL_DIR}\"/*.dhp)\n  _non_empty_dhp=\"\"\n  # Find the first non-empty .dhp file\n  for _dhp_file in \"${_dhp_files[@]}\"; do\n    if ! 
_is_empty_file \"${_dhp_file}\"; then\n      _non_empty_dhp=\"${_dhp_file}\"\n      break\n    fi\n  done\n  # Loop over all .dhp files\n  for _dhp_file in \"${_dhp_files[@]}\"; do\n    if _is_empty_file \"${_dhp_file}\"; then\n      if [[ -n \"${_non_empty_dhp}\" ]]; then\n        echo \"Replacing empty file ${_dhp_file} with ${_non_empty_dhp}\"\n        cp \"${_non_empty_dhp}\" \"${_dhp_file}\"\n      else\n        echo \"No non-empty .dhp file found, generating new file for ${_dhp_file}\"\n        openssl dhparam -out \"${_dhp_file}\" 4096 > /dev/null 2>&1 &\n      fi\n    fi\n  done\n}\n\n# Scan Nginx error log for .dhp related errors\n_scan_nginx_errors() {\n  grep \"PEM_read_bio_DHparams\" \"${_NGINX_ERROR_LOG}\" | awk -F'(\")' '{print $2}' | while read -r _missing_dhp; do\n    if [[ ! -f \"${_missing_dhp}\" ]]; then\n      echo \"Missing .dhp file detected: ${_missing_dhp}\"\n      if [[ -n \"${_non_empty_dhp}\" ]]; then\n        echo \"Copying non-empty .dhp file to ${_missing_dhp}\"\n        cp \"${_non_empty_dhp}\" \"${_missing_dhp}\"\n      else\n        echo \"Generating new .dhp file for ${_missing_dhp}\"\n        openssl dhparam -out \"${_missing_dhp}\" 4096 > /dev/null 2>&1 &\n      fi\n    fi\n  done\n}\n\n_process_dhp_files\n_scan_nginx_errors\n\n_crontab_check_clean_race() {\n  _CSF_CRON_TEST=$(grep water /etc/crontab 2>&1)\n  if [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ -x \"/usr/sbin/csf\" ] \\\n    && [[ \"${_CSF_CRON_TEST}\" =~ \"water\" ]]; then\n    sed -i \"s/.*fire.*//g\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*water.*//g\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"/^$/d\" /etc/crontab &> /dev/null\n    wait\n  fi\n}\n_crontab_check_clean_race\n\n_bring_all_ram_cpu_online() {\n  _RAM_AUTO_FILE=\"/sys/devices/system/memory/auto_online_blocks\"\n  if [ -f \"${_RAM_AUTO_FILE}\" ]; then\n    if grep -qx offline \"${_RAM_AUTO_FILE}\"; then\n      echo online > \"${_RAM_AUTO_FILE}\"\n    fi\n  fi\n  for _CPU_DIR in 
/sys/devices/system/cpu/cpu[0-9]*\n  do\n    _CPU=${_CPU_DIR##*/}\n    _CPU_STATE_FILE=\"${_CPU_DIR}/online\"\n    if [ -f \"${_CPU_STATE_FILE}\" ]; then\n      if grep -qx 0 \"${_CPU_STATE_FILE}\"; then\n        echo 1 > \"${_CPU_STATE_FILE}\"\n      fi\n    fi\n  done\n  for _RAM_DIR in /sys/devices/system/memory/memory[0-9]*\n  do\n    _RAM=${_RAM_DIR##*/}\n    _RAM_STATE_FILE=\"${_RAM_DIR}/state\"\n    if [ -f \"${_RAM_STATE_FILE}\" ]; then\n      if grep -qx offline \"${_RAM_STATE_FILE}\"; then\n        echo online > \"${_RAM_STATE_FILE}\"\n      fi\n    fi\n  done\n}\n_bring_all_ram_cpu_online\n\n_if_disable_not_used_services() {\n  _clearSwap=No\n\n  if [ \"${_tRee}\" = \"dev\" ] && [ -e \"/root/.ice.vm.cnf\" ]; then\n    echo > /var/xdrago/move_sql.sh\n    echo > /var/xdrago/proc_num_ctrl.pl\n    echo > /var/xdrago/runner.sh\n    echo > /var/xdrago/monitor/check/nginx.sh\n    echo > /var/xdrago/monitor/check/mysql.sh\n    echo > /var/xdrago/mysql_backup.sh\n    rm -f /var/spool/cron/crontabs/aegir\n    killall -9 pure-ftpd\n    killall -9 newrelic-daemon\n    [ -e \"/run/nginx.pid\" ] && service nginx stop\n    [ -e \"/run/php83-fpm.pid\" ] && killall -9 php-fpm\n    [ -e \"/run/mysqld/mysqld.pid\" ] && service mysql stop\n    [ -e \"/run/valkey/valkey.pid\" ] && service valkey-server stop\n    [ -e \"/run/redis/redis.pid\" ] && service redis-server stop\n    [ -e \"/root/.deny.clamav.cnf\" ] && touch /root/.deny.clamav.cnf\n    [ -e \"/root/.deny.java.cnf\" ] && touch /root/.deny.java.cnf\n    _clearSwap=YES\n  fi\n\n  if [ ! 
-e \"/root/.allow.clamav.cnf\" ] || [ -e \"/root/.deny.clamav.cnf\" ]; then\n    if [ -e \"/etc/init.d/clamav-daemon\" ]; then\n      _clearSwap=YES\n      update-rc.d -f clamav-daemon remove &> /dev/null\n      mv -f /etc/init.d/clamav-daemon /var/backups/\n    fi\n    if [ -e \"/etc/init.d/clamav-freshclam\" ]; then\n      _clearSwap=YES\n      update-rc.d -f clamav-freshclam remove &> /dev/null\n      mv -f /etc/init.d/clamav-freshclam /var/backups/\n    fi\n    pkill -9 -f clamd\n    pkill -9 -f freshclam\n    rm -f /run/clamav/*\n  fi\n\n  if [ -e \"/root/.deny.java.cnf\" ]; then\n    if [ -e \"/etc/init.d/solr9\" ]; then\n      _clearSwap=YES\n      update-rc.d -f solr9 remove &> /dev/null\n      mv -f /etc/init.d/solr9 /var/backups/\n    fi\n    if [ -e \"/etc/init.d/solr7\" ]; then\n      _clearSwap=YES\n      update-rc.d -f solr7 remove &> /dev/null\n      mv -f /etc/init.d/solr7 /var/backups/\n    fi\n    if [ -e \"/etc/init.d/jetty9\" ]; then\n      _clearSwap=YES\n      update-rc.d -f jetty9 remove &> /dev/null\n      mv -f /etc/init.d/jetty9 /var/backups/\n    fi\n    pkill -9 -f avahi-daemon\n    pkill -9 -f java\n  fi\n\n  if [ -e \"/root/.deny.jetty9.cnf\" ]; then\n    if [ -e \"/etc/init.d/jetty9\" ]; then\n      _clearSwap=YES\n      update-rc.d -f jetty9 remove &> /dev/null\n      mv -f /etc/init.d/jetty9 /var/backups/\n      pkill -9 -f avahi-daemon\n      pkill -9 -f jetty9\n    fi\n  fi\n\n  if [ -e \"/root/.deny.solr7.cnf\" ]; then\n    if [ -e \"/etc/init.d/solr7\" ]; then\n      _clearSwap=YES\n      update-rc.d -f solr7 remove &> /dev/null\n      mv -f /etc/init.d/solr7 /var/backups/\n      pkill -9 -f avahi-daemon\n      pkill -9 -f solr7\n    fi\n  fi\n\n  if [ -e \"/root/.deny.solr9.cnf\" ]; then\n    if [ -e \"/etc/init.d/solr9\" ]; then\n      _clearSwap=YES\n      update-rc.d -f solr9 remove &> /dev/null\n      mv -f /etc/init.d/solr9 /var/backups/\n      pkill -9 -f avahi-daemon\n      pkill -9 -f solr9\n    fi\n  fi\n\n  if [ 
\"${_clearSwap}\" = \"YES\" ]; then\n    _optimize_ram\n  fi\n}\n_if_disable_not_used_services\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_system_hostname_check_fix() {\n  if [ -e \"${_barCnf}\" ]; then\n    if [ ! -z \"${_MY_HOSTN}\" ]; then\n      _diffHostTest=$(diff -w -B /etc/hostname /etc/mailname 2>&1)\n      if [ -z \"${_diffHostTest}\" ]; then\n        _hostUpdate=\"\"\n        echo \"INFO: hostname/mailname diff empty -- nothing to update\"\n      else\n        _hostUpdate=YES\n        _diffHostTest=$(echo -n ${_diffHostTest} | fmt -su -w 2500 2>&1)\n        echo \"INFO: hostname/mailname diff ${_diffHostTest}\"\n      fi\n      if [ \"${_hostUpdate}\" = \"YES\" ]; then\n        hostname -b ${_MY_HOSTN} ### force our custom FQDN/local hostname\n        echo \"${_MY_HOSTN}\" > /etc/hostname\n        echo \"${_MY_HOSTN}\" > /etc/mailname\n      fi\n    fi\n  fi\n}\n_system_hostname_check_fix\n\n_ADD_XTRA=YES\n_CRON_TEST=$(grep \"clear.xsh\" /var/spool/cron/crontabs/root 2>&1)\nif [[ \"${_CRON_TEST}\" =~ \"clear.xsh\" ]]; then\n  _ADD_XTRA=NO\nfi\n\n_TIME=$(date +%H%M)\n_TIME=${_TIME//[^0-9-]/}\n_MINUTE=$(date +%M)\n_MINUTE=${_MINUTE//[^0-9-]/}\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n\n_if_hosted_sys() {\n  _find_correct_ip\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n  if [ -e \"/root/.host8.cnf\" ] 
\\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n    [ -e \"/root/.auto.up.cnf\" ] || touch /root/.auto.up.cnf\n  else\n    _hostedSys=NO\n  fi\n}\n\n_send_boa_system_report() {\n  if [ -e \"${_barCnf}\" ]; then\n    _RCPT_EMAIL=\n    _BCC_EMAIL=\n    _repBodFile=/var/backups/hosted-boa-status.txt\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _RCPT_EMAIL=\"systems@omega8.cc\"\n      _BCC_EMAIL=\"notify@omega8.cc\"\n    fi\n    if [ ! -z \"${_RCPT_EMAIL}\" ] \\\n      && [ ! -z \"${_BCC_EMAIL}\" ] \\\n      && [ -e \"/root/.run.example.report.cnf\" ]; then\n      _repSub=\"BOA Status on ${_hName}\"\n      _repSub=$(echo -n ${_repSub} | fmt -su -w 2500 2>&1)\n      boa info report both > ${_repBodFile}\n      cat ${_repBodFile} | s-nail -b ${_BCC_EMAIL} -s \"${_repSub} at ${_NOW}\" ${_RCPT_EMAIL}\n      rm -f ${_repBodFile}\n    fi\n  fi\n}\n\nif [ \"${_TIME}\" = \"1200\" ] || [ \"${_MINUTE}\" = \"00\" ]; then\n  export TERM=vt100\n  _send_boa_system_report\nfi\n\n_check_system_manufacturer() {\n  # Extract manufacturer string\n  _SYS_MANUFACTURER_RAW=$(dmidecode -s system-manufacturer 2>/dev/null)\n\n  # Trim leading/trailing whitespace, convert spaces to underscores\n  _SYS_MANUFACTURER_CLEAN=$(printf \"%s\" \"${_SYS_MANUFACTURER_RAW}\" \\\n    | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' -e 's/[[:space:]]\\+/_/g')\n\n  # Normalise: keep only A-Za-z0-9_\n  _SYS_MANUFACTURER=$(printf \"%s\" \"${_SYS_MANUFACTURER_CLEAN}\" | tr -cd 'A-Za-z0-9_')\n\n  # Enforce max length with safe underscore-aware truncation\n  _MAX_LEN=32\n  if [ \"${#_SYS_MANUFACTURER}\" -gt \"${_MAX_LEN}\" ]; then\n    _CUT=$(printf \"%s\" \"${_SYS_MANUFACTURER}\" | cut -c1-\"${_MAX_LEN}\")\n    _SAFE_CUT=$(printf \"%s\" \"${_CUT}\" | sed 's/_[^_]*$//')\n    if [ -n \"${_SAFE_CUT}\" ]; then\n      _SYS_MANUFACTURER=\"${_SAFE_CUT}\"\n    else\n      _SYS_MANUFACTURER=\"${_CUT}\"\n    fi\n  fi\n\n  # Secondary variable: lowercase\n  
_SYS_MANUFACTURER_LC=$(printf \"%s\" \"${_SYS_MANUFACTURER}\" | tr 'A-Z' 'a-z')\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"_SYS_MANUFACTURER_RAW: ${_SYS_MANUFACTURER_RAW}\"\n    echo \"_SYS_MANUFACTURER_CLEAN: ${_SYS_MANUFACTURER_CLEAN}\"\n    echo \"_SYS_MANUFACTURER: ${_SYS_MANUFACTURER}\"\n    echo \"_SYS_MANUFACTURER_LC: ${_SYS_MANUFACTURER_LC}\"\n  fi\n}\n\n_is_protected_run() {\n  _protectedRun=FALSE\n  _optBin=\"/opt/local/bin\"\n  _boaBins=\"autoinit automini barracuda boa octopus\"\n  for _cbn in ${_boaBins}; do\n    if [ -e \"${_optBin}/${_cbn}\" ]; then\n      _CNT=$(pgrep -fc /local/bin/${_cbn})\n      if (( _CNT > 0 )); then\n        echo \"The ${_cbn} is running!\"\n        _protectedRun=TRUE\n      fi\n    fi\n  done\n  [ -e \"/run/octopus_install_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_wait.pid\" ] && _protectedRun=TRUE\n}\n\n_generic_vm_postinstall() {\n  _is_protected_run\n  if [ \"${_protectedRun}\" = \"FALSE\" ]; then\n    rm -f /var/log/boa/${_SYS_MANUFACTURER_LC}_vm.pid\n    _TODAY=$(date +%y%m%d)\n    _TODAY=${_TODAY//[^0-9]/}\n    _NOW=$(date +%y%m%d-%H%M%S)\n    _NOW=${_NOW//[^0-9-]/}\n    _vBs=\"/var/backups\"\n    _LOG_UP_DIR=\"${_vBs}/reports/up/$(basename \"$0\")/${_TODAY}\"\n    _UP_BARRACUDA_LOG=\"${_LOG_UP_DIR}/$(basename \"$0\")-up-barracuda-${_NOW}.log\"\n    mkdir -p ${_LOG_UP_DIR}\n    nohup /opt/local/bin/barracuda up-${_tRee} system noscreen >${_UP_BARRACUDA_LOG} 2>&1 &\n  fi\n}\n\n# Restrict sendmail access unless /root/.allow.sendmail.cnf exists\n_restrict_sendmail() {\n  _SNAIL_EXISTS=$(getent group allow-snail 2>&1)\n  if [[ ! \"${_SNAIL_EXISTS}\" =~ \"allow-snail\" ]]; then\n    addgroup --system allow-snail &> /dev/null\n    chown root /usr/sbin/sendmail &> /dev/null\n    chgrp allow-snail /usr/sbin/sendmail &> /dev/null\n  fi\n  if [ ! 
-e \"/root/.allow.sendmail.cnf\" ]; then\n    chmod 750 /usr/sbin/sendmail &> /dev/null\n  fi\n}\n\n_if_restrict_sendmail() {\n  if [ -e \"/root/.allow.sendmail.cnf\" ]; then\n    if [ ! -e \"${_ALLOW_SENDMAIL_PID}\" ]; then\n      chmod 755 /usr/sbin/sendmail &> /dev/null\n      touch \"${_ALLOW_SENDMAIL_PID}\"\n    fi\n  else\n    _restrict_sendmail\n  fi\n}\n\nif [ \"$(boa info | grep -c Percona)\" -ge 3 ] && [ -e \"/usr/sbin/csf\" ]; then\n  _is_protected_run\n  if [ \"${_protectedRun}\" = \"FALSE\" ]; then\n    _if_restrict_sendmail\n  fi\nfi\n\n_status_vm_postinstall() {\n  _check_system_manufacturer\n  _VM_N_PID=\"/var/log/boa/${_SYS_MANUFACTURER_LC}_vm.pid\"\n  _VM_N_SYS=\"/var/log/boa/${_SYS_MANUFACTURER_LC}_vm_postinstall.pid\"\n  _VM_O_PID=\"/run/${_SYS_MANUFACTURER_LC}_vm.pid\"\n  _VM_O_SYS=\"/run/${_SYS_MANUFACTURER_LC}_vm_postinstall.pid\"\n  if [ -e \"${_VM_O_PID}\" ] && [ ! -e \"${_VM_N_PID}\" ]; then\n    touch ${_VM_N_PID}\n  fi\n  if [ -e \"${_VM_O_SYS}\" ] && [ ! -e \"${_VM_N_SYS}\" ]; then\n    touch ${_VM_N_SYS}\n    chattr +i ${_VM_N_SYS}\n  fi\n  if [ \"$(boa info | grep -c Percona)\" -lt 3 ] || [ ! -e \"/usr/sbin/csf\" ]; then\n    _NEEDS_UPGRADE=YES\n    echo \"A _NEEDS_UPGRADE is ${_NEEDS_UPGRADE}\"\n    [ -n \"${_SYS_MANUFACTURER_LC}\" ] && _generic_vm_postinstall\n  else\n    _NEEDS_UPGRADE=NO\n    echo \"B _NEEDS_UPGRADE is ${_NEEDS_UPGRADE}\"\n    if [ ! -e \"${_VM_N_PID}\" ] && [ ! -e \"${_VM_N_SYS}\" ]; then\n      touch ${_VM_N_SYS}\n      chattr +i ${_VM_N_SYS}\n      _is_protected_run\n      if [ \"${_protectedRun}\" = \"FALSE\" ]; then\n        rm -rf /opt/tmp/*\n      fi\n    fi\n    if [ ! 
-e \"/var/log/boa/reset_no_new_password.pid\" ]; then\n      # Re-enable default MySQL/Valkey/Redis password update logic once\n      [ -e \"/root/.mysql.no.new.password.cnf\" ]  && rm -f /root/.mysql.no.new.password.cnf\n      [ -e \"/root/.valkey.no.new.password.cnf\" ] && rm -f /root/.valkey.no.new.password.cnf\n      [ -e \"/root/.redis.no.new.password.cnf\" ]  && rm -f /root/.redis.no.new.password.cnf\n      touch /var/log/boa/reset_no_new_password.pid\n      chattr +i /var/log/boa/reset_no_new_password.pid\n    fi\n  fi\n}\n\n_opt_tmp_cleanup() {\n  _is_protected_run\n  if [ \"${_protectedRun}\" = \"FALSE\" ]; then\n    _OLD_FOUND=\"NO\"\n    # Detect anything in /opt/tmp/boa older than 24 hours\n    if find /opt/tmp/boa -mindepth 1 -mtime +0 -print -quit 2>/dev/null | grep -q .; then\n      _OLD_FOUND=\"YES\"\n    fi\n    if [ \"${_OLD_FOUND}\" = \"YES\" ]; then\n      # Remove everything under /opt/tmp safely\n      rm -rf /opt/tmp/*\n    fi\n  fi\n}\n\nif [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n  && [ -e \"/data/u\" ] \\\n  && [ -e \"/var/xdrago\" ]; then\n  if [ ! -e \"/var/log/boa/reset_no_new_password.pid\" ]; then\n    _status_vm_postinstall\n  else\n    _opt_tmp_cleanup\n  fi\nfi\n\n_if_new_kernel_reboot() {\n  _isReboot=NO\n  _is_protected_run\n  if [ \"${_protectedRun}\" = \"FALSE\" ]; then\n    if [ -e \"/run/reboot-required.pkgs\" ]; then\n      if grep -q linux \"/run/reboot-required.pkgs\"; then\n        _isReboot=REQ\n      fi\n    fi\n    if [ \"$(boa info | grep -c Next)\" -ge 1 ]; then\n      _isReboot=REQ\n    fi\n    if [ \"${_isReboot}\" = \"REQ\" ]; then\n      nohup /opt/local/bin/boa reboot > /dev/null 2>&1 &\n      exit 1\n    fi\n  fi\n}\n\n_if_hosted_sys\nif [ \"${_hostedSys}\" = \"YES\" ] || [ -e \"/root/.allow.auto.reboot.cnf\" ]; then\n  _if_new_kernel_reboot\nfi\n\n_crontab_monitor_cleanup() {\n  if [ -e \"/etc/crontab\" ] && [ -e \"/var/xdrago/monitor/log\" ] && [ -e \"/etc/csf\" ]; then\n    if ! 
grep -q \"/var/xdrago/monitor/log\" /etc/crontab; then\n      echo \"*/59 *  * * *   root    rm -f /var/xdrago/monitor/log/web.log >/dev/null 2>&1\" >> /etc/crontab\n      echo \"*/59 *  * * *   root    rm -f /var/xdrago/monitor/log/ssh.log >/dev/null 2>&1\" >> /etc/crontab\n      echo \"*/59 *  * * *   root    rm -f /var/xdrago/monitor/log/ftp.log >/dev/null 2>&1\" >> /etc/crontab\n    fi\n  fi\n}\n_crontab_monitor_cleanup\n\n_crontab_update() {\n  if [ -e \"/etc/crontab\" ]; then\n    sed -i \"s/.*xdrago.*//gi\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*arracuda.*//gi\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*ctopus.*//gi\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*du.*sql.*//gi\" /etc/crontab &> /dev/null\n    wait\n    if [ \"${_ADD_XTRA}\" = \"YES\" ]; then\n      echo \"*/19 *  * * *   root    bash /var/xdrago/clear.sh >/dev/null 2>&1\" >> /etc/crontab\n    fi\n\n    _crontab_monitor_cleanup\n\n    sed -i \"/^$/d\" /etc/crontab &> /dev/null\n    wait\n\n    if [ \"${_AUTO_PHP}\" = \"php-all\" ]; then\n      _BCDA_FULL=\"up-${_AUTO_VER} log php-all\"\n      _BCDA_SYST=\"up-${_AUTO_VER} system php-all\"\n    elif [ \"${_AUTO_PHP}\" = \"php-min\" ]; then\n      _BCDA_FULL=\"up-${_AUTO_VER} log php-min\"\n      _BCDA_SYST=\"up-${_AUTO_VER} system php-min\"\n    elif [ \"${_AUTO_PHP}\" = \"php-max\" ]; then\n      _BCDA_FULL=\"up-${_AUTO_VER} log php-max\"\n      _BCDA_SYST=\"up-${_AUTO_VER} system php-max\"\n    else\n      _BCDA_FULL=\"up-${_AUTO_VER} log\"\n      _BCDA_SYST=\"up-${_AUTO_VER} system\"\n    fi\n    _OCTO_FULL=\"up-${_AUTO_VER} all force log\"\n\n    _BCDA_FULL=\"${_BCDA_FULL} noscreen\"\n    _BCDA_SYST=\"${_BCDA_SYST} noscreen\"\n    _OCTO_FULL=\"${_OCTO_FULL} noscreen\"\n\n    if [ ! 
-z \"${_BCDA_SYST}\" ]; then\n      echo \"# Barracuda weekly system only upgrade\" >> /etc/crontab\n      echo \"${_AUTO_UP_MINUTE} ${_AUTO_UP_HOUR}    * * ${_AUTO_UP_WEEKLY}   root    bash /opt/local/bin/barracuda ${_BCDA_SYST}\" >> /etc/crontab\n    fi\n\n    if [ ! -z \"${_BCDA_FULL}\" ]; then\n      echo \"# Barracuda ${_AUTO_VER} full upgrade\" >> /etc/crontab\n      echo \"${_AUTO_UP_MINUTE} ${_AUTO_UP_HOUR}    ${_AUTO_UP_DAY} ${_AUTO_UP_MONTH} *   root    bash /opt/local/bin/barracuda ${_BCDA_FULL}\" >> /etc/crontab\n    fi\n\n    if [ ! -z \"${_OCTO_FULL}\" ]; then\n      echo \"# Octopus ${_AUTO_VER} full upgrade\" >> /etc/crontab\n      echo \"${_AUTO_OCT_UP_MINUTE} ${_AUTO_OCT_UP_HOUR}    ${_AUTO_UP_DAY} ${_AUTO_UP_MONTH} *   root    bash /opt/local/bin/octopus ${_OCTO_FULL}\" >> /etc/crontab\n    fi\n\n    echo Cron Update Completed\n  fi\n}\n\n_crontab_cleanup() {\n  if [ -e \"/etc/crontab\" ]; then\n    sed -i \"s/.*xdrago.*//gi\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*arracuda.*//gi\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*ctopus.*//gi\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*du.*sql.*//gi\" /etc/crontab &> /dev/null\n    wait\n    if [ \"${_ADD_XTRA}\" = \"YES\" ]; then\n      echo \"*/19 *  * * *   root    bash /var/xdrago/clear.sh >/dev/null 2>&1\" >> /etc/crontab\n    fi\n\n    _crontab_monitor_cleanup\n\n    sed -i \"/^$/d\" /etc/crontab &> /dev/null\n    wait\n    echo Cron Cleanup Completed\n  fi\n}\n\n[ -e \"/root/.use.curl.from.packages.cnf\" ] && chattr -i /root/.use.curl.from.packages.cnf\n[ -e \"/root/.use.curl.from.packages.cnf\" ] && rm -f /root/.use.curl.from.packages.cnf\n\nif [ -e \"/var/log/boa\" ] && [ -e \"/var/xdrago\" ]; then\n  _BROKEN_UPDATE_TEST=$(grep \"Under Construction\" /var/xdrago/*.sh 2>&1)\n  if [ ! -z \"${_BROKEN_UPDATE_TEST}\" ]; then\n    rm -f /var/log/boa/*.pid\n  fi\n  _BROKEN_UPDATE_TEST=$(grep \"404 Not Found\" /var/xdrago/*.sh 2>&1)\n  if [ ! 
-z \"${_BROKEN_UPDATE_TEST}\" ]; then\n    rm -f /var/log/boa/*.pid\n  fi\n  _BROKEN_UPDATE_TEST=$(grep \"Under Construction\" /var/xdrago/monitor/check/* 2>&1)\n  if [ ! -z \"${_BROKEN_UPDATE_TEST}\" ]; then\n    rm -f /var/log/boa/*.pid\n  fi\n  _BROKEN_UPDATE_TEST=$(grep \"404 Not Found\" /var/xdrago/monitor/check/* 2>&1)\n  if [ ! -z \"${_BROKEN_UPDATE_TEST}\" ]; then\n    rm -f /var/log/boa/*.pid\n  fi\nfi\n\nif [ -e \"/root/.pause_heavy_tasks_maint.cnf\" ]; then\n  killall -9 mysqldump\n  killall -9 rsync\nfi\n\nif [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n  && [ -e \"/etc/apt/apt.conf.d\" ]; then\n  echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  rm -f /etc/apt/apt.conf.d/00sandboxtmp\n  rm -f /etc/apt/apt.conf.d/00temp\n  _apt_clean_update\nfi\n\nif [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n  if [ ! -e \"${_LOCK_DOS_CONFIG}\" ]; then\n    if [ ! -e \"${_LOCK_DOS_PID}\" ]; then\n      # Update Nginx DoS related config variables\n      _update_conf_variable \"_NGINX_DOS_LINES\" \"${_DEFAULT_NGINX_DOS_LINES}\" \"${_barCnf}\"\n      _update_conf_variable \"_NGINX_DOS_LIMIT\" \"${_DEFAULT_NGINX_DOS_LIMIT}\" \"${_barCnf}\"\n      _update_conf_variable \"_NGINX_DOS_MODE\" \"${_DEFAULT_NGINX_DOS_MODE}\" \"${_barCnf}\"\n      _update_conf_variable \"_NGINX_DOS_DIV_INC_NR\" \"${_DEFAULT_NGINX_DOS_DIV_INC_NR}\" \"${_barCnf}\"\n      _update_conf_variable \"_NGINX_DOS_INC_MIN\" \"${_DEFAULT_NGINX_DOS_INC_MIN}\" \"${_barCnf}\"\n      _update_conf_variable \"_NGINX_DOS_LOG\" \"${_DEFAULT_NGINX_DOS_LOG}\" \"${_barCnf}\"\n      _update_conf_variable \"_NGINX_DOS_IGNORE\" \"${_DEFAULT_NGINX_DOS_IGNORE}\" \"${_barCnf}\"\n      _update_conf_variable \"_NGINX_DOS_STOP\" \"${_DEFAULT_NGINX_DOS_STOP}\" \"${_barCnf}\"\n      # Force the legacy default _NGINX_DOS_LIMIT value in the config file\n      sed -i \"s/^_NGINX_DOS_LIMIT=.*/_NGINX_DOS_LIMIT=399/g\"  ${_barCnf}\n      rm -f /var/log/boa/.sync_scan_nginx*\n      rm -f /var/log/scan_nginx_*\n      touch ${_LOCK_DOS_PID}\n      touch 
${_LOCK_DOS_CONFIG}\n    fi\n  fi\n\n  [ -z \"${_AUTO_VER}\" ] && _AUTO_VER=\"${_tRee}\"\n\n  if [ -e \"${_barCnf}\" ]; then\n    _AUTO_UP_WEEKLY=${_AUTO_UP_WEEKLY//[^0-9]/}\n    _AUTO_UP_MONTH=${_AUTO_UP_MONTH//[^0-9]/}\n    _AUTO_UP_DAY=${_AUTO_UP_DAY//[^0-9]/}\n    _AUTO_UP_HOUR=${_AUTO_UP_HOUR//[^0-9]/}\n    _AUTO_UP_MINUTE=${_AUTO_UP_MINUTE//[^0-9]/}\n    _AUTO_OCT_UP_HOUR=${_AUTO_OCT_UP_HOUR//[^0-9]/}\n    _AUTO_OCT_UP_MINUTE=${_AUTO_OCT_UP_MINUTE//[^0-9]/}\n\n    if [ ! -z \"${_AUTO_UP_WEEKLY}\" ] \\\n      && [ ! -z \"${_AUTO_UP_MONTH}\" ] \\\n      && [ ! -z \"${_AUTO_UP_DAY}\" ]; then\n      [ -z \"${_AUTO_UP_HOUR}\" ] && _AUTO_UP_HOUR=0\n      [ -z \"${_AUTO_UP_MINUTE}\" ] && _AUTO_UP_MINUTE=15\n      [ -z \"${_AUTO_OCT_UP_HOUR}\" ] && _AUTO_OCT_UP_HOUR=1\n      [ -z \"${_AUTO_OCT_UP_MINUTE}\" ] && _AUTO_OCT_UP_MINUTE=15\n    fi\n  fi\n\n  _if_hosted_sys\n\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n\n    if [[ \"${_hName}\" =~ ^(lcy1.ao.|lcy15.uk.) ]]; then\n      _DONT_TOUCH=OK\n      rm -f /etc/csf/csf.conf-pre*\n    else\n      if [ ! -e \"/var/log/boa/.csf_legacy_cleanup.${_xSrl}.pid\" ]; then\n        sed -i \"s/.*Legacy.*//g\"    /etc/csf/csf.ignore\n        wait\n        sed -i \"s/.*Legacy.*//g\"    /etc/csf/csf.allow\n        wait\n        sed -i \"s/.*Manually.*//g\"  /etc/csf/csf.ignore\n        wait\n        sed -i \"s/.*Manually.*//g\"  /etc/csf/csf.allow\n        wait\n        sed -i \"s/.*Temporary.*//g\" /etc/csf/csf.ignore\n        wait\n        sed -i \"s/.*Temporary.*//g\" /etc/csf/csf.allow\n        wait\n        sed -i \"/^$/d\" /etc/csf/csf.ignore\n        sed -i \"/^$/d\" /etc/csf/csf.allow\n        rm -f /etc/csf/csf.conf-pre*\n        touch /var/log/boa/.csf_legacy_cleanup.${_xSrl}.pid\n      fi\n    fi\n\n    rm -f /var/xdrago/*.old\n    rm -f /var/xdrago/.*.off\n\n    if [ ! 
-e \"/var/log/boa/.csf_dhcp_udp_cleanup.${_xSrl}.pid\" ]; then\n      sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n      wait\n      sed -i \"/^$/d\" /etc/csf/csf.allow\n      if [ -e \"/var/log/daemon.log\" ]; then\n        _DHCP_LOG=\"/var/log/daemon.log\"\n      else\n        _DHCP_LOG=\"/var/log/syslog\"\n      fi\n      grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n        if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n          IFS='.' read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n          if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n            echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n          fi\n        fi\n      done\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n      touch /var/log/boa/.csf_dhcp_udp_cleanup.${_xSrl}.pid\n    fi\n\n    if [ ! 
-e \"/var/log/boa/.disabled_modules_fix.${_xSrl}.pid\" ]; then\n      sed -i \"s/^_MODULES_FIX=.*/_MODULES_FIX=NO/g\"  ${_barCnf}\n      touch /var/log/boa/.disabled_modules_fix.${_xSrl}.pid\n    fi\n\n    _AUTO_PHP=\"php-min\"\n    _AUTO_VER=\"${_tRee}\"\n    _AUTO_UP_WEEKLY=7\n    _AUTO_UP_MONTH=7\n    _AUTO_UP_DAY=7\n    _AUTO_UP_HOUR=3\n    _AUTO_UP_MINUTE=10\n    _AUTO_OCT_UP_HOUR=4\n    _AUTO_OCT_UP_MINUTE=10\n\n    [ -e \"/root/.my.optimize.cnf\" ] && rm -f /root/.my.optimize.cnf\n    [ -e \"/root/.restrict_this_vm.cnf\" ] && rm -f /root/.restrict_this_vm.cnf\n    [ -e \"/root/.force.sites.verify.cnf\" ] && rm -f /root/.force.sites.verify.cnf\n    [ -e \"/root/.run.example.report.cnf\" ] && rm -f /root/.run.example.report.cnf\n    [ -e \"/var/xdrago/weekly.sh\" ] && rm -f /var/xdrago/weekly.sh\n    if [ -e \"/run/boa_run.pid\" ]; then\n      touch /root/.pause_tasks_maint.cnf\n    else\n      [ -e \"/root/.pause_tasks_maint.cnf\" ] && rm -f /root/.pause_tasks_maint.cnf\n    fi\n    if [ -e \"/root/.restrict_this_vm.cnf\" ]; then\n      killall -9 rsync\n      chmod 700 /usr/bin/rsync\n      chmod 700 /usr/bin/mysqldump\n    else\n      chmod 750 /usr/bin/rsync\n      chmod 750 /usr/bin/mysqldump\n    fi\n  fi\n\n  if [ ! 
-e \"/etc/init.d/ssh\" ]; then\n    _AUTO_UP_WEEKLY=\n    _AUTO_UP_MONTH=\n    _AUTO_UP_DAY=\n  fi\n\n  _LOCK_FILE=\"/root/.turn.off.auto.update.cnf\"\n  _LOCK_STATE_FILE=\"/var/log/boa/.etc_crontab_lock_state.${_tRee}.${_xSrl}.txt\"\n\n  # Determine current lock state (LOCK if the file exists, otherwise UNLOCK)\n  if [ -e \"${_LOCK_FILE}\" ]; then\n    _LOCK_STATE=\"LOCK\"\n  else\n    _LOCK_STATE=\"UNLOCK\"\n  fi\n\n  # Previous lock state (if any)\n  _LOCK_STATE_PREV=\n  if [ -e \"${_LOCK_STATE_FILE}\" ]; then\n    _LOCK_STATE_PREV=$(cat \"${_LOCK_STATE_FILE}\" 2>/dev/null)\n  fi\n\n  # If lock state changed (file created or removed) => ignore 2nd-level PID once\n  if [ \"${_LOCK_STATE}\" != \"${_LOCK_STATE_PREV}\" ]; then\n    rm -f /var/log/boa/.etc_crontab_update_dev_ctrl*.pid\n    rm -f /var/log/boa/.etc_crontab_update_prod_ctrl*.pid\n    echo \"${_LOCK_STATE}\" > \"${_LOCK_STATE_FILE}\"\n  fi\n\n  # First-level guard: cluster root pwd or lock disables auto updates\n  if [ -e \"/root/.my.cluster_root_pwd.txt\" ] || [ -e \"${_LOCK_FILE}\" ]; then\n    if [ -e \"/var/log/boa/.etc_crontab_allow.pid\" ]; then\n      rm -f /var/log/boa/.etc_crontab_allow.pid\n    fi\n    _AUTO_UP_WEEKLY=\n    _AUTO_UP_MONTH=\n    _AUTO_UP_DAY=\n  else\n    if [ ! -e \"/var/log/boa/.etc_crontab_allow.pid\" ]; then\n      touch /var/log/boa/.etc_crontab_allow.pid\n    fi\n  fi\n\n  # Second-level: per-tree PID; only cleared once when lock toggles\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    if [ ! -e \"/var/log/boa/.etc_crontab_update_dev_ctrl_f012.${_tRee}.${_xSrl}.pid\" ]; then\n      rm -f /var/log/boa/.etc_crontab_update_dev_ctrl*.pid\n      _crontab_cleanup\n      service cron restart\n      touch \"/var/log/boa/.etc_crontab_update_dev_ctrl_f012.${_tRee}.${_xSrl}.pid\"\n    fi\n  else\n    if [ ! 
-e \"/var/log/boa/.etc_crontab_update_prod_ctrl_f012.${_tRee}.${_xSrl}.pid\" ]; then\n      rm -f /var/log/boa/.etc_crontab_update_prod_ctrl*.pid\n      if [ -n \"${_AUTO_UP_MONTH}\" ] && [ -n \"${_AUTO_UP_DAY}\" ]; then\n        _crontab_update\n      else\n        _crontab_cleanup\n      fi\n      service cron restart\n      touch \"/var/log/boa/.etc_crontab_update_prod_ctrl_f012.${_tRee}.${_xSrl}.pid\"\n    fi\n  fi\n\n  if [ ! -e \"/var/log/boa/.etc_csf_allow_ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    [ -d \"/var/backups/csf\" ] || mkdir -p /var/backups/csf\n    cp -a /etc/csf/csf.ignore /var/backups/csf/csf.ignore.${_tRee}.${_xSrl}.txt\n    cp -a /etc/csf/csf.allow /var/backups/csf/csf.allow.${_tRee}.${_xSrl}.txt\n    rm -f /root/.*.pid\n    rm -f /var/log/boa/.etc_csf_allow_ctrl*\n    touch /var/log/boa/.etc_csf_allow_ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n  if [ -x \"/sbin/iptables\" ] && [ ! -e \"/usr/sbin/iptables\" ]; then\n    ln -sfn /sbin/iptables /usr/sbin/iptables\n  fi\n  if [ -x \"/usr/sbin/iptables\" ] && [ ! -e \"/sbin/iptables\" ]; then\n    ln -sfn /usr/sbin/iptables /sbin/iptables\n  fi\n  if [ -x \"/sbin/iptables-save\" ] && [ ! -e \"/usr/sbin/iptables-save\" ]; then\n    ln -sfn /sbin/iptables-save /usr/sbin/iptables-save\n  fi\n  if [ -x \"/usr/sbin/iptables-save\" ] && [ ! -e \"/sbin/iptables-save\" ]; then\n    ln -sfn /usr/sbin/iptables-save /sbin/iptables-save\n  fi\n  if [ -x \"/sbin/iptables-restore\" ] && [ ! -e \"/usr/sbin/iptables-restore\" ]; then\n    ln -sfn /sbin/iptables-restore /usr/sbin/iptables-restore\n  fi\n  if [ -x \"/usr/sbin/iptables-restore\" ] && [ ! -e \"/sbin/iptables-restore\" ]; then\n    ln -sfn /usr/sbin/iptables-restore /sbin/iptables-restore\n  fi\n  if [ -x \"/sbin/ip6tables\" ] && [ ! -e \"/usr/sbin/ip6tables\" ]; then\n    ln -sfn /sbin/ip6tables /usr/sbin/ip6tables\n  fi\n  if [ -x \"/usr/sbin/ip6tables\" ] && [ ! 
-e \"/sbin/ip6tables\" ]; then\n    ln -sfn /usr/sbin/ip6tables /sbin/ip6tables\n  fi\n  if [ -x \"/sbin/ip6tables-save\" ] && [ ! -e \"/usr/sbin/ip6tables-save\" ]; then\n    ln -sfn /sbin/ip6tables-save /usr/sbin/ip6tables-save\n  fi\n  if [ -x \"/usr/sbin/ip6tables-save\" ] && [ ! -e \"/sbin/ip6tables-save\" ]; then\n    ln -sfn /usr/sbin/ip6tables-save /sbin/ip6tables-save\n  fi\n  if [ -x \"/sbin/ip6tables-restore\" ] && [ ! -e \"/usr/sbin/ip6tables-restore\" ]; then\n    ln -sfn /sbin/ip6tables-restore /usr/sbin/ip6tables-restore\n  fi\n  if [ -x \"/usr/sbin/ip6tables-restore\" ] && [ ! -e \"/sbin/ip6tables-restore\" ]; then\n    ln -sfn /usr/sbin/ip6tables-restore /sbin/ip6tables-restore\n  fi\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n\n  _NFTABLES_TEST=$(iptables -V)\n  if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n    if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n      update-alternatives --set iptables /usr/sbin/iptables-legacy &> /dev/null\n    fi\n    if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n      update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n    fi\n    if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n      update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n    fi\n    if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n      update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n    fi\n    touch /var/log/boa/.nf_tables_ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  if [ -x \"/etc/init.d/site24x7monagent\" ] \\\n    && [ ! -e \"/var/log/boa/.site24x7monagent_ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    service site24x7monagent stop\n    wait\n    service site24x7monagent start\n    rm -f /var/log/boa/.site24x7monagent*\n    touch /var/log/boa/.site24x7monagent_ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  if [ -x \"/usr/local/bin/curl\" ] \\\n    && [ ! 
-e \"/var/log/boa/.curl_ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    if [ -f \"/usr/bin/curl\" ]; then\n      mv -f /usr/bin/curl /usr/bin/legacy-curl\n    fi\n    ln -sfn /usr/local/bin/curl /usr/bin/curl\n    touch /var/log/boa/.curl_ctrl.${_tRee}.${_xSrl}.pid\n  fi\n\n  _IS_MOVESQL_RUNNING=$(pgrep -f move_sql.sh)\n  if [ ! -z \"${_IS_MOVESQL_RUNNING}\" ]; then\n    _CNT=$(pgrep -fc move_sql.sh)\n    if (( _CNT > 2 )); then\n      pkill -9 -f move_sql.sh\n      rm -f /run/mysql_restart_running.pid\n      rm -f /run/boa_wait.pid\n      rm -f /var/log/boa/.move_sql_ctrl*\n      touch /var/log/boa/.move_sql_ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n\n  _IS_SQLCLEANUP_RUNNING=$(pgrep -f mysql_cleanup.sh)\n  if [ ! -z \"${_IS_SQLCLEANUP_RUNNING}\" ]; then\n    _CNT=$(pgrep -fc mysql_cleanup.sh)\n    if (( _CNT > 2 )); then\n      pkill -9 -f mysql_cleanup.sh\n      rm -f /run/mysql_backup_running.pid\n      rm -f /var/log/boa/.sqlcleanup_ctrl*\n      touch /var/log/boa/.sqlcleanup_ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n\n  _IS_SOLRSTART_RUNNING=$(pgrep -f 'solr start')\n  if [ ! -z \"${_IS_SOLRSTART_RUNNING}\" ]; then\n    _CNT=$(pgrep -fc \"solr start\")\n    if (( _CNT > 2 )); then\n      pkill -9 -f 'solr start'\n      touch /var/log/boa/.solr_start_ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n\n  _IS_PROCTRL_RUNNING=$(pgrep -f proc_num_ctrl.pl)\n  if [ ! -z \"${_IS_PROCTRL_RUNNING}\" ]; then\n    _CNT=$(pgrep -fc proc_num_ctrl.pl)\n    if (( _CNT > 1 )); then\n      pkill -9 -f proc_num_ctrl.pl\n      touch /var/log/boa/.proc_num_ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n\n  rm -f /run/fmp_wait.pid\n\n  _IS_SQLBACKUP_RUNNING=$(pgrep -f mysql_backup.sh)\n  if [ ! 
-z \"${_IS_SQLBACKUP_RUNNING}\" ]; then\n    _CNT=$(pgrep -fc mysql_backup.sh)\n    if (( _CNT > 2 )); then\n      pkill -9 -f mysql_backup.sh\n      pkill -9 -f mydumper\n      pkill -9 -f usage.sh\n      pkill -9 -f runner.sh\n      rm -f /run/boa_sql_backup.pid\n      rm -f /run/mysql_backup_running.pid\n      rm -f /run/boa_wait.pid\n      rm -f /run/daily-fix.pid\n      rm -f /var/log/boa/.sqlbackup_ctrl*\n      touch /var/log/boa/.sqlbackup_ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/backboa",
    "content": "#!/bin/bash\n\n###\n### Acknowledgements\n###\n### Thomas Sileo @ https://thomassileo.name\n### Original recipe: http://bit.ly/1QX462w\n###\n### Extended by Barracuda Team for BOA project\n###\n### See also:\n### http://www.nongnu.org/duplicity/index.html\n###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  else\n    echo \"ERROR: This script must be run as the root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full: ${_DF_TEST}/100 used\"\n    echo \"ERROR: We cannot proceed until usage drops below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n# New OpenSSL 3.x version is required\nif [ ! -x \"/usr/local/ssl3/bin/openssl\" ]; then\n  echo \"New OpenSSL 3.x version is required\"\n  exit 1\nfi\n\n_PTN_VRN=3.13.9\n_DCY_VRN=3.0.6\n_LOGPTH=\"/var/log/boa\"\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n_DOW=$(date +%u)\n_DOW=${_DOW//[^1-7]/}\n_DOM=$(date +%e)\n_DOM=${_DOM//[^0-9]/}\n_HST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n_HST=${_HST//[^a-zA-Z0-9-.]/}\n_HST=$(echo -n ${_HST} | tr A-Z a-z 2>&1)\n_HST_DASH=$(echo -n ${_HST} | tr . 
- 2>&1)\n\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_AWS_VLV=${_AWS_VLV//[^a-z]/}\nif [ -z \"${_AWS_VLV}\" ]; then\n  _AWS_VLV=\"warning\"\nfi\n\n# Set extra environment variables\nexport PYTHONPATH=\"/usr/local/lib/python3.12/site-packages\"\n\n_DCY_PTN=\"/usr/local/bin/python3\"\n_DCY_CMD=\"/usr/local/bin/duplicity -v ${_AWS_VLV}\"\n\nif [ \"$1\" != \"help\" ]; then\n  # Check the Python version to ensure we're using the correct one\n  echo \"Checking expected Python ${_PTN_VRN} version...\"\n  ${_DCY_PTN} --version\n  # Check the Duplicity version to ensure we're using the correct one\n  echo \"Checking expected Duplicity ${_DCY_VRN} version...\"\n  ${_DCY_CMD} --version\nfi\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm 
trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_check_vps() {\n  _BENG_VS=NO\n  _VM_TEST=\"$(uname -a)\"\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _BENG_VS=YES\n  fi\n}\n_check_vps\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth}\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  export _urlDev=\"http://${_USE_MIR}/dev\"\n  export _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_install() {\n  if [ ! -d \"${_LOGPTH}\" ]; then\n    mkdir -p ${_LOGPTH}\n  fi\n  [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  _DUPLICITY_ITD=$(duplicity --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  if [ \"${_DUPLICITY_ITD}\" = \"${_DCY_VRN}\" ] \\\n    && [ -L \"/usr/local/bin/jp.py\" ] \\\n    && [ -L \"/usr/local/bin/duplicity\" ] \\\n    && [ -L \"/usr/local/bin/aws\" ]; then\n    echo \"Latest duplicity version ${_DCY_VRN} already installed\"\n  else\n    echo \"Installing duplicity dependencies...\"\n    cd\n    _find_fast_mirror_early\n    if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    aptitude purge duplicity -y\n    rm -f /usr/local/bin/duplicity\n    rm -f /usr/local/bin/jp.py\n    rm -f /usr/local/bin/aws*\n    apt-get install ${_aptYesUnth} \\\n        intltool \\\n        libffi-dev \\\n        par2 \\\n        python3-pip \\\n        python3-venv \\\n        python3 \\\n        rclone \\\n        rdiff \\\n        tzdata\n    _PTN_TEST=$(${_DCY_PTN} --version 2>&1)\n    if [[ ! \"${_PTN_TEST}\" =~ \"Python ${_PTN_VRN}\" ]] \\\n      || [ ! -x \"${_DCY_PTN}\" ]; then\n      cd /var/opt\n      rm -rf Python*\n      wget ${_wgetGet} ${_urlDev}/src/Python-${_PTN_VRN}.tgz\n      tar -xzf Python-${_PTN_VRN}.tgz\n      cd Python-${_PTN_VRN}\n      if [ -d \"/usr/local/ssl3\" ]; then\n        bash ./configure --with-openssl=/usr/local/ssl3\n      else\n        bash ./configure --with-openssl=/usr/local/ssl\n      fi\n      make -j $(nproc) --quiet\n      make install --quiet\n      cd\n    fi\n    _PTN_TEST=$(${_DCY_PTN} --version 2>&1)\n    if [[ \"${_PTN_TEST}\" =~ \"Python ${_PTN_VRN}\" ]]; then\n      python3 -m pip install pipx --break-system-packages --root-user-action ignore\n      pip3 install --upgrade pip --root-user-action ignore\n      export PIPX_BIN_DIR=/usr/local/bin\n      export PIPX_HOME=/opt/pipx/venvs\n      pipx install duplicity --include-deps --force\n      pipx install awscli --include-deps --force\n      pipx install boto3 --include-deps --force\n    else\n      echo \"Python ${_PTN_VRN} installation failed with ${_PTN_TEST}\"\n      exit 1\n    fi\n    _DCY_TEST=$(${_DCY_CMD} --version 2>&1)\n    if [[ \"${_DCY_TEST}\" =~ \"duplicity ${_DCY_VRN}\" ]]; then\n      echo \"Installation complete!\"\n    else\n      echo \"Installation failed with ${_DCY_TEST}\"\n      exit 1\n    fi\n  fi\n}\n\n_check_aws() {\n  if [ 
! -x \"/usr/local/bin/aws\" ]; then\n    echo \"Upgrade to add AWS tools required...\"\n    _install\n  fi\n}\n\n_CNT=$(pgrep -fc duplicity)\nif (( _CNT > 0 )); then\n  echo \"[$(date)] Active duplicity process detected, will try again later...\" >> /var/log/mybackup_waiting_queue.log\n  exit 1\nfi\n\nif [ -z \"${_AWS_KEY}\" ] || [ -z \"${_AWS_SEC}\" ] || [ -z \"${_AWS_PWD}\" ]; then\n  echo \"\n\n  CONFIGURATION REQUIRED!\n\n  Add the four (4) required lines listed below to your /root/.barracuda.cnf file.\n  Required lines are marked with [R] and optional with [O]:\n\n    _AWS_KEY='Your AWS Access Key ID'     ### [R] From your AWS S3 settings\n    _AWS_SEC='Your AWS Secret Access Key' ### [R] From your AWS S3 settings\n    _AWS_PWD='Your Secret Password'       ### [R] Generate with 'openssl rand -base64 32'\n    _AWS_REG='Your AWS Region ID'         ### [R] By default 'us-east-1'\n\n    _AWS_TTL='Your Backup Rotation'       ### [O] By default '30D'\n    _AWS_FLC='Your Backup Full Cycle'     ### [O] By default '7D'\n    _AWS_VLV='Your Backup Log Verbosity'  ### [O] By default 'warning' -- [ewnid]\n    _AWS_EXB='Exclude Ægir Backups'       ### [O] By default 'YES' -- can be YES/NO\n\n    Supported values to use as _AWS_REG (the symbol after the # comment):\n\n      Africa (Cape Town)         # af-south-1\n      Asia Pacific (Hong Kong)   # ap-east-1\n      Asia Pacific (Hyderabad)   # ap-south-2\n      Asia Pacific (Jakarta)     # ap-southeast-3\n      Asia Pacific (Melbourne)   # ap-southeast-4\n      Asia Pacific (Mumbai)      # ap-south-1\n      Asia Pacific (Osaka)       # ap-northeast-3\n      Asia Pacific (Seoul)       # ap-northeast-2\n      Asia Pacific (Singapore)   # ap-southeast-1\n      Asia Pacific (Sydney)      # ap-southeast-2\n      Asia Pacific (Tokyo)       # ap-northeast-1\n      Canada (Central)           # ca-central-1\n      Canada West (Calgary)      # ca-west-1\n      Europe (Frankfurt)         # eu-central-1\n      Europe (Ireland)           # 
eu-west-1\n      Europe (London)            # eu-west-2\n      Europe (Milan)             # eu-south-1\n      Europe (Paris)             # eu-west-3\n      Europe (Spain)             # eu-south-2\n      Europe (Stockholm)         # eu-north-1\n      Europe (Zurich)            # eu-central-2\n      Israel (Tel Aviv)          # il-central-1\n      Middle East (Bahrain)      # me-south-1\n      Middle East (UAE)          # me-central-1\n      South America (São Paulo)  # sa-east-1\n      US East (N. Virginia)      # us-east-1\n      US East (Ohio)             # us-east-2\n      US West (N. California)    # us-west-1\n      US West (Oregon)           # us-west-2\n\n      ### Special regions, see: https://aws.amazon.com/govcloud-us/\n\n      AWS GovCloud (US-East)     # us-gov-east-1\n      AWS GovCloud (US-West)     # us-gov-west-1\n\n    Source: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region\n\n    You have to use the S3 Console at https://console.aws.amazon.com/s3/home\n    (before attempting to run the initial backup!) 
to create an S3 bucket in the\n    desired region, with the correct name as shown below:\n\n      daily-boa-${_HST_DASH}\n\n    While duplicity should be able to create a new bucket on demand, in practice\n    this almost never works due to propagation delays between AWS regions.\n\n    Please run: 'backboa test' to make sure that the connection works.\n\n  \"\n  exit 1\nfi\n\nif [ -z \"${_AWS_REG}\" ]; then\n  _AWS_REG=\"us-east-1\"\nfi\n\ncase \"${_AWS_REG}\" in\n  af-south-1|ap-east-1|ap-northeast-1|ap-northeast-2|ap-northeast-3|\\\n  ap-south-1|ap-south-2|ap-southeast-1|ap-southeast-2|ap-southeast-3|\\\n  ap-southeast-4|ca-central-1|ca-west-1|eu-central-1|eu-central-2|\\\n  eu-north-1|eu-south-1|eu-south-2|eu-west-1|eu-west-2|eu-west-3|\\\n  il-central-1|me-central-1|me-south-1|sa-east-1|us-east-1|us-east-2|\\\n  us-west-1|us-west-2|us-gov-east-1|us-gov-west-1)\n    _GOOD_AWS_REG=YES\n    ;;\nesac\n\n_AWS_TTL=${_AWS_TTL//[^A-Z0-9]/}\nif [ -z \"${_AWS_TTL}\" ]; then\n  
_AWS_TTL=\"30D\"\nfi\n\n_AWS_FLC=${_AWS_FLC//[^A-Z0-9]/}\nif [ -z \"${_AWS_FLC}\" ]; then\n  _AWS_FLC=\"7D\"\nfi\n\n_AWS_VLV=${_AWS_VLV//[^a-z]/}\nif [ -z \"${_AWS_VLV}\" ]; then\n  _AWS_VLV=\"warning\"\nfi\n\nif [ \"${_AWS_EXB}\" = \"NO\" ]; then\n  _EXCLUDE=\"--exclude /data/conf/arch\"\nelse\n  ### No inner quotes around the regexp: ${_EXCLUDE} is expanded unquoted later,\n  ### so literal quote characters would end up in the duplicity argument.\n  _EXCLUDE=\"--exclude /data/conf/arch --exclude-regexp ^/data/disk/.*/backups\"\nfi\n\n_USER_INCLUDE=\"\"\nif [ -f \"/root/.backboa.include\" ]; then\n  _USER_INCLUDE=\"--include-filelist /root/.backboa.include\"\nfi\n\n_USER_EXCLUDE=\"\"\nif [ -f \"/root/.backboa.exclude\" ]; then\n  _USER_EXCLUDE=\"--exclude-filelist /root/.backboa.exclude\"\nfi\n\nexport AWS_ACCESS_KEY_ID=\"${_AWS_KEY}\"\nexport AWS_SECRET_ACCESS_KEY=\"${_AWS_SEC}\"\nexport PASSPHRASE=\"${_AWS_PWD}\"\n\n_SOURCE=\"/data /etc /home /opt/solr4 /var/aegir /var/solr7 /var/solr9 /var/www /var/xdrago\"\n_BUCKET_NAME=\"daily-boa-${_HST_DASH}\"\n_BUCKET_NAME_DOT=\"daily.boa.${_HST}\"\n_TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n_LOGFILE=\"${_LOGPTH}/${_BUCKET_NAME}.log\"\n_NAME=\"--name=daily-boa\"\n\n_DCY_MN_CMD=\"/usr/local/bin/duplicity -v ${_AWS_VLV} \\\n  --concurrency 4 \\\n  --s3-endpoint-url https://s3.dualstack.${_AWS_REG}.amazonaws.com \\\n  --s3-region-name ${_AWS_REG}\"\n\nif [ -e \"${_LOGPTH}/${_BUCKET_NAME_DOT}.archive.log\" ]; then\n  cat ${_LOGPTH}/${_BUCKET_NAME_DOT}.archive.log >> ${_LOGPTH}/${_BUCKET_NAME}.archive.log\n  mv ${_LOGPTH}/${_BUCKET_NAME_DOT}.archive.log /var/backups/${_BUCKET_NAME_DOT}.archive.log\n  cat ${_LOGPTH}/${_BUCKET_NAME_DOT}.randomize.cleanup.log >> ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\n  mv ${_LOGPTH}/${_BUCKET_NAME_DOT}.randomize.cleanup.log /var/backups/${_BUCKET_NAME_DOT}.randomize.cleanup.log\n  #apt-get clean -qq\n  #rm -rf /var/lib/apt/lists/*\n  if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n  aws s3 mb s3://${_BUCKET_NAME} --region ${_AWS_REG}\n  aws s3 sync s3://${_BUCKET_NAME_DOT} s3://${_BUCKET_NAME}\nfi\n\n_backup_prepare() {\n  _INCLUDE=\"\"\n  for _CDIR in ${_SOURCE}; do\n    _TMP=\" --include ${_CDIR}\"\n    _INCLUDE=\"${_INCLUDE}${_TMP}\"\n  done\n  if [ -e \"/root/.cache/duplicity\" ]; then\n    _CacheTest=$(find /root/.cache/duplicity/* \\\n      -maxdepth 1 \\\n      -mindepth 1 \\\n      -type f \\\n      | sort 2>&1)\n    if [[ \"${_CacheTest}\" =~ \"No such file or directory\" ]] \\\n      || [ -z \"${_CacheTest}\" ]; then\n      _DO_CLEANUP=NO\n    else\n      _DO_CLEANUP=YES\n    fi\n  fi\n}\n\n_monthly_cleanup() {\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\" ]; then\n    _RCL=$(cat ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log 2>&1)\n    _RCL=$(echo -n ${_RCL} | tr -d \"\\n\" 2>&1)\n    _RCL=${_RCL//[^1-5]/}\n  else\n    _RCL=$((RANDOM%5+1))\n    _RCL=${_RCL//[^1-5]/}\n    echo ${_RCL} > ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\n  fi\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n    && [ ! 
-e \"/root/.skip_duplicity_monthly_cleanup.cnf\" ] \\\n    && [ \"${_DOM}\" = \"${_RCL}\" ] \\\n    && [ \"${_DO_CLEANUP}\" = \"YES\" ]; then\n    if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n      _n=$((RANDOM%300+8))\n      echo \"Waiting ${_n} seconds on $(date) before running cleanup --force\" > ${_LOGFILE}\n      sleep ${_n}\n    fi\n    echo \"Running cleanup --force on $(date)\" >> ${_LOGFILE}\n    echo \"Command is ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\"\n    ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\n    rm -f ${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log\n    rm -f ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\n  fi\n}\n\n_randomize_full() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log\" ]; then\n      _RDW=$(cat ${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log 2>&1)\n      _RDW=$(echo -n ${_RDW} | tr -d \"\\n\" 2>&1)\n      _RDW=${_RDW//[^1-7]/}\n      _MODE=\"incremental\"\n    else\n      _RDW=$((RANDOM%7+1))\n      _RDW=${_RDW//[^1-7]/}\n      _MODE=\"full\"\n      echo ${_RDW} > ${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log\n    fi\n  else\n    _RDW=7\n  fi\n}\n\n_set_mode() {\n  if [ \"${_DOW}\" = \"${_RDW}\" ] && [ \"${_AWS_FLC}\" = \"7D\" ]; then\n    if [ ! 
-e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n      _MODE=\"full\"\n      _AWS_FLC=\"1M\"\n    fi\n  else\n    if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n      && [ \"${_DO_CLEANUP}\" = \"YES\" ]; then\n      _MODE=\"incremental\"\n    else\n      _MODE=\"full\"\n    fi\n  fi\n}\n\n_set_cmd() {\n  _DCY_UP_CMD=\"/usr/local/bin/duplicity ${_MODE} -v ${_AWS_VLV} \\\n    --allow-source-mismatch \\\n    --concurrency 4 \\\n    --full-if-older-than ${_AWS_FLC} \\\n    --s3-endpoint-url https://s3.dualstack.${_AWS_REG}.amazonaws.com \\\n    --s3-region-name ${_AWS_REG} \\\n    --s3-use-ia \\\n    --volsize 300\"\n}\n\n_run_backup() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    if [ ! -e \"/root/tmp/home/\" ]; then\n      _n=$((RANDOM%300+8))\n      echo \"Waiting ${_n} seconds on $(date) before running _restore home 7D tmp/home\" >> ${_LOGFILE}\n      sleep ${_n}\n      _restore home 7D tmp/home >> ${_LOGFILE}\n    fi\n    _n=$((RANDOM%300+8))\n    echo \"Waiting ${_n} seconds on $(date) before running ${_MODE} backup\" >> ${_LOGFILE}\n    sleep ${_n}\n  fi\n  echo \"Running ${_MODE} backup on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UP_CMD} \\\n    ${_NAME} \\\n    ${_EXCLUDE} \\\n    ${_USER_EXCLUDE} \\\n    ${_INCLUDE} \\\n    ${_USER_INCLUDE} \\\n    --exclude '**' / ${_TARGET}\"\n  ${_DCY_UP_CMD} \\\n    ${_NAME} \\\n    ${_EXCLUDE} \\\n    ${_USER_EXCLUDE} \\\n    ${_INCLUDE} \\\n    ${_USER_INCLUDE} \\\n    --exclude '**' / ${_TARGET} >> ${_LOGFILE}\n}\n\n_remove_older_than() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    _n=$((RANDOM%300+8))\n    echo \"Waiting ${_n} seconds on $(date) before running remove-older-than ${_AWS_TTL}\" >> ${_LOGFILE}\n    sleep ${_n}\n  fi\n  echo \"Running remove-older-than on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_MN_CMD} remove-older-than ${_AWS_TTL} --force ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} remove-older-than 
${_AWS_TTL} --force ${_NAME} ${_TARGET} >> ${_LOGFILE}\n}\n\n_collection_status() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    _n=$((RANDOM%300+8))\n    echo \"Waiting ${_n} seconds on $(date) before running collection-status\" >> ${_LOGFILE}\n    sleep ${_n}\n  fi\n  echo \"Running collection-status on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET} >> ${_LOGFILE}\n}\n\n_backup() {\n  _backup_prepare\n  _monthly_cleanup\n  _randomize_full\n  _set_mode\n  _set_cmd\n  _run_backup\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n    && [ \"${_DOW}\" = \"${_RDW}\" ] \\\n    && [ \"${_DO_CLEANUP}\" = \"YES\" ]; then\n    _remove_older_than\n    _collection_status\n  fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" != \"OFF\" ]; then\n    echo \"Sending email report on $(date)\" >> ${_LOGFILE}\n    s-nail -s \"Daily backup: ${_MODE} ${_HST} $(date)\" ${_MY_EMAIL} < ${_LOGFILE}\n  fi\n  cat ${_LOGFILE} >> ${_LOGPTH}/${_BUCKET_NAME}.archive.log\n  rm -f ${_LOGFILE}\n}\n\n_conn_test() {\n  if [ $# = 1 ]; then\n    _BUCKET_NAME=\"daily-boa-$1\"\n    _TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n  fi\n  echo \"Running AWS connection test, please wait...\"\n  echo \"Command is ${_DCY_MN_CMD} cleanup --dry-run --timeout 5 ${_NAME} ${_TARGET}\"\n  _ConnTest=$(${_DCY_MN_CMD} cleanup --dry-run --timeout 5 ${_NAME} ${_TARGET} 2>&1)\n  ### echo _ConnTest is STR ${_ConnTest} END\n  if [[ \"${_ConnTest}\" =~ \"No connection to backend\" ]] \\\n    || [[ \"${_ConnTest}\" =~ \"IllegalLocationConstraintException\" ]]; then\n    echo\n    echo \"  Sorry, I can't connect to ${_TARGET}\"\n    echo \"  Please check if the bucket has expected name: ${_BUCKET_NAME}\"\n    echo \"  This bucket must already exist in the ${_AWS_REG} AWS region\"\n    echo \"  http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region\"\n    echo \"  
Bye\"\n    echo\n    exit 1\n  else\n    echo \"OK, I can connect to ${_TARGET}\"\n  fi\n}\n\n_status() {\n  echo \"Command is ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\n}\n\n_cleanup() {\n  echo \"Command is ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\n  echo \"Command is ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\n}\n\n_list() {\n  echo \"Command is ${_DCY_MN_CMD} list-current-files ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} list-current-files ${_NAME} ${_TARGET}\n}\n\n_restore() {\n  if [ $# = 2 ]; then\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\n  else\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\n  fi\n}\n\n_retrieve() {\n  if [ $# = 3 ]; then\n    _HST_DASH=$(echo -n $3 | tr . - 2>&1)\n    _BUCKET_NAME=\"daily-boa-${_HST_DASH}\"\n    _TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\n  elif [ $# = 4 ]; then\n    _HST_DASH=$(echo -n $4 | tr . 
- 2>&1)\n    _BUCKET_NAME=\"daily-boa-${_HST_DASH}\"\n    _TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\n  fi\n}\n\nif [ \"$1\" = \"backup\" ]; then\n  if test -f /run/${_HST}_backup.pid ; then\n    touch ${_LOGPTH}/wait_${_HST}_backup.log\n    echo \"Is the duplicity backup running already?\"\n    echo \"Existing /run/${_HST}_backup.pid found,\"\n    echo \"but no active duplicity process detected (stale pid file?)\"\n    exit 1\n  else\n    touch /run/${_HST}_backup.pid\n    echo \"The duplicity backup is starting now...\"\n    _check_aws\n    _backup\n    echo \"The duplicity backup is complete!\"\n    touch ${_LOGPTH}/run_${_HST}_backup.log\n    rm -f /run/${_HST}_backup.pid\n  fi\nelif [ \"$1\" = \"install\" ]; then\n  _install\nelif [ \"$1\" = \"cleanup\" ]; then\n  _cleanup\nelif [ \"$1\" = \"list\" ]; then\n  _list\nelif [ \"$1\" = \"restore\" ]; then\n  if [ $# = 3 ]; then\n    _restore $2 $3\n  else\n    _restore $2 $3 $4\n  fi\nelif [ \"$1\" = \"retrieve\" ]; then\n  if [ $# = 4 ]; then\n    _retrieve $2 $3 $4\n  elif [ $# = 5 ]; then\n    _retrieve $2 $3 $4 $5\n  else\n    echo \"You have to also specify the hostname of the backed-up system\"\n    exit 1\n  fi\nelif [ \"$1\" = \"status\" ]; then\n  _check_aws\n  _status\nelif [ \"$1\" = \"test\" ]; then\n  _conn_test\nelse\n  echo \"\n\n  INSTALLATION:\n\n  $ backboa install\n\n  USAGE:\n\n  $ backboa backup\n  $ backboa cleanup\n  $ backboa list\n  $ backboa status\n  $ backboa test\n  $ backboa restore file [time] destination\n  $ backboa retrieve file [time] destination hostname\n\n  RESTORE EXAMPLES:\n\n  Note: Be careful while restoring not to prepend a slash to the path!\n\n  Restoring a single file to tmp/\n  $ backboa restore data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz\n\n  Restoring an older version of a directory to tmp/ - interval 
or full date\n  $ backboa restore data/disk/o1/backups 7D8h8s tmp/backups\n  $ backboa restore data/disk/o1/backups 2014/11/11 tmp/backups\n\n  Restoring data on a different server\n  $ backboa retrieve data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz srv.foo.bar\n  $ backboa retrieve data/disk/o1/backups 2014/11/11 tmp/backups srv.foo.bar\n\n  Note: srv.foo.bar is the hostname of the BOA system that was backed up.\n        In 'retrieve' mode the _AWS_* variables configured in the current\n        system's /root/.barracuda.cnf file are used - so make sure to edit\n        this file to temporarily set/replace all four required _AWS_* variables\n        originally used on the host you are retrieving data from! You should\n        keep them secret and manage them in your offline password manager app.\n\n  \"\n  exit 1\nfi\n\nexport AWS_ACCESS_KEY_ID=\nexport AWS_SECRET_ACCESS_KEY=\nexport PASSPHRASE=\n\nexit 0\n
  },
  {
    "path": "aegir/tools/bin/backchain",
    "content": "#!/bin/bash\n\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n#----------------------------------------\n# Helpers\n#----------------------------------------\n_usage() {\n  echo \"Usage: ${0##*/} <mode> [user]\"\n  echo \"  setup <user>    Prepare sudo-NOPASSWD for <user>\"\n  echo \"  switch          Run backup routines (default)\"\n  exit 1\n}\n\n_error() {\n  echo \"ERROR: $*\" >&2\n}\n\n_info() {\n  echo \"INFO:  $*\"\n}\n\n#----------------------------------------\n# Checks & Setup\n#----------------------------------------\n_check_disk() {\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$ &>/dev/null\n    renice 19 -p $$       &>/dev/null\n    chmod a+w /dev/null   &>/dev/null\n  else\n    _error \"This command must be run as root\"\n    exit 1\n  fi\n}\n\n_add_sudo_rule() {\n  _USER=\"$1\"\n  _CMD=\"$2\"\n  _SUF=\"$3\"\n  _ENTRY=\"${_USER} ALL=NOPASSWD: ${_CMD}\"\n  _FILE=\"/etc/sudoers.d/${_SUF}\"\n\n  if ! 
grep -qF \"${_ENTRY}\" \"${_FILE}\" 2>/dev/null; then\n    _info \"Adding sudoers entry to ${_FILE}\"\n    echo \"${_ENTRY}\" >> \"${_FILE}\"\n    chmod 0440 \"${_FILE}\"\n  else\n    _info \"Sudoers entry already present in ${_FILE}\"\n  fi\n}\n\n_set_sudo() {\n  _add_sudo_rule \"${_USER}\" \"/var/xdrago/mysql_backup.sh\" mysql_backup\n  _add_sudo_rule \"${_USER}\" \"/opt/local/bin/multiback\"   multiback\n}\n\n_run_cmd() {\n  _info \"Running: sudo --non-interactive $*\"\n  sudo --non-interactive \"$@\"\n  _RC=$?\n\n  if [ \"${_RC}\" -ge 128 ]; then\n    _SIG=$(( _RC - 128 ))\n    _error \"Command '$*' terminated by signal ${_SIG}\"\n    if [ \"${_SIG}\" -eq 9 ]; then\n      _error \"  → SIGKILL (9) usually means the OS ran out of memory or someone used kill -9.\"\n      _error \"    Check 'dmesg' or 'journalctl -k' for OOM events.\"\n    fi\n    exit \"${_RC}\"\n  elif [ \"${_RC}\" -ne 0 ]; then\n    _error \"Command failed with exit code ${_RC}: $*\"\n    exit \"${_RC}\"\n  fi\n}\n\n#----------------------------------------\n# Parse arguments\n#----------------------------------------\nif [ \"${1:-}\" = \"setup\" ] && [ -n \"${2:-}\" ]; then\n  _THIS_MODE=\"setup\"\n  _USER=\"${2}\"\nelif [ \"${1:-}\" = \"switch\" ] || [ -z \"${1:-}\" ]; then\n  _THIS_MODE=\"switch\"\nelse\n  _usage\nfi\n\n#----------------------------------------\n# Main logic\n#----------------------------------------\nif [ \"${_THIS_MODE}\" = \"setup\" ]; then\n  _check_root\n  _check_disk\n  _set_sudo\n  _info \"Sudoers setup complete for user '${_USER}'\"\n  exit 0\nfi\n\n# mode = switch\n_check_disk\n_run_cmd /var/xdrago/mysql_backup.sh basic\n_info \"MySQL backup completed successfully.\"\n\n_run_cmd /opt/local/bin/multiback repair b2 data\n_run_cmd /opt/local/bin/multiback backup b2 data\n_info \"Multiback operations completed successfully.\"\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/barracuda",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_TODAY=$(date +%y%m%d)\nexport _TODAY=${_TODAY//[^0-9]/}\n\n_NOW=$(date +%y%m%d-%H%M%S)\nexport _NOW=${_NOW//[^0-9-]/}\n\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n\n_barCnf=\"/root/.barracuda.cnf\"\n_barName=\"BARRACUDA.sh.txt\"\n_filIncB=\"barracuda.sh.cnf\"\n_pthIncB=\"lib/settings/${_filIncB}\"\n_vBs=\"/var/backups\"\n_bldPth=\"/opt/tmp/boa\"\n\n_LOG_DIR=\"${_vBs}/reports/up/$(basename \"$0\")/${_TODAY}\"\n_UP_LOG=\"${_LOG_DIR}/$(basename \"$0\")-up-${_NOW}.log\"\n\n_VMFAMILY=XEN\n_VM_TEST=\"$(uname -a)\"\nif [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n  _VMFAMILY=\"VS\"\nfi\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_clean_pid_exit() {\n  if [ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.barracuda.exit.exceptions.log\n    [ -e \"/opt/tmp/boa\" ] && rm -rf /opt/tmp/*\n  fi\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/tmp/aegir_backup_mode.txt\" ] && rm -f /tmp/aegir_backup_mode.txt\n  service cron start &> /dev/null\n  _CNT=$(pgrep -fc 'tee -a /var/backups/barracuda-')\n  if (( _CNT > 1 )); then\n    pkill -f 'tee -a /var/backups/barracuda-'\n  fi\n  exit 1\n}\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n 
   if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n  [ -f \"/etc/resolv.conf\" ] && chattr -i /etc/resolv.conf\n}\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_ifnames_grub_check_sync() {\n\n  _USE_NINC=NO\n  _NEW_GRUB=DEMO\n\n  if [ -e \"/root/.ignore.ifnames.cnf\" ]; then\n    return 1  # Exit the function but continue the script\n  else\n    [ -e \"/root/.ninc.selected.predictable.cnf\" ] && _USE_NINC=predictable && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.classic.cnf\" ]     && _USE_NINC=classic     && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.auto.cnf\" ]        && _USE_NINC=auto        && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.vanilla.cnf\" ]     && _USE_NINC=vanilla\n  fi\n\n  if [ \"${_USE_NINC}\" = \"vanilla\" ] || [ \"${_USE_NINC}\" = \"NO\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n\n  _IS_IFACE=$(ip a 2>&1)\n  _ADD_GRUB_CMD=\"\"\n  _GRUB_FILE=\"/etc/default/grub\"\n\n  if [ -e \"${_GRUB_FILE}\" ]; then\n    if echo \"${_IS_IFACE}\" | grep -qE \"eth[0-9]+\"; then\n      _USE_IFNAMES=\"CLASSIC\"\n      echo \"GRUB: Classic ethX interface naming found.\"\n    elif echo \"${_IS_IFACE}\" | grep -qE \"(ens|enp|eno|wlp|wlo)[0-9]+:\"; then\n      _USE_IFNAMES=\"PREDICTABLE\"\n      echo \"GRUB: Predictable (ensX, enpX, enoX, wlpX, wloX) interface naming found.\"\n    else\n      _USE_IFNAMES=\"DONTMODIFY\"\n      echo \"GRUB: config exists, but no recognized network interface naming found.\"\n    fi\n\n    # Extract the current GRUB_CMDLINE_LINUX line\n    _GRUB_CMDLINE_LINUX=$(grep -E \"^GRUB_CMDLINE_LINUX=\" \"${_GRUB_FILE}\")\n    echo \"GRUB: Current config is ${_GRUB_CMDLINE_LINUX}\"\n\n    # Initialize variables to check for 
existing options\n    _SYS_NET_IFNAMES=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"net.ifnames=[01]\")\n    _SYS_BIOSDEVNAME=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"biosdevname=[01]\")\n    _SYS_MEMHP_STATE=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"memhp_default_state=online\")\n\n    # Function to append option to _ADD_GRUB_CMD\n    _append_grub_cmd_option() {\n      _option=\"$1\"\n      if [[ -z \"${_ADD_GRUB_CMD}\" ]]; then\n        _ADD_GRUB_CMD=\"${_option}\"\n      else\n        _ADD_GRUB_CMD=\"${_ADD_GRUB_CMD} ${_option}\"\n      fi\n    }\n\n    # Always add memhp_default_state=online\n    _append_grub_cmd_option \"memhp_default_state=online\"\n\n    if [[ \"${_USE_IFNAMES}\" == \"CLASSIC\" ]]; then\n      # Always set net.ifnames=0 and biosdevname=0\n      _append_grub_cmd_option \"net.ifnames=0\"\n      _append_grub_cmd_option \"biosdevname=0\"\n    elif [[ \"${_USE_IFNAMES}\" == \"PREDICTABLE\" ]]; then\n      # Always set net.ifnames=1 and biosdevname=1\n      _append_grub_cmd_option \"net.ifnames=1\"\n      _append_grub_cmd_option \"biosdevname=1\"\n    fi\n\n    if [[ -n \"${_ADD_GRUB_CMD}\" ]]; then\n      # Backup the GRUB file\n      cp \"${_GRUB_FILE}\" \"${_GRUB_FILE}.bak\"\n\n      # Remove existing options from GRUB_CMDLINE_LINUX\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_GRUB_CMDLINE_LINUX}\" | sed -E \"s/(net.ifnames=[01]|biosdevname=[01]|memhp_default_state=online)//g\")\n\n      # Clean up extra spaces and trailing spaces before the closing quote\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | tr -s ' ' | sed -E 's/\\s*\"$/\"/')\n\n      # Extract current kernel parameters\n      _CURRENT_CMDLINE=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | sed -E 's/^GRUB_CMDLINE_LINUX=\"(.*)\"$/\\1/')\n\n      # Append new options\n      _UPDATED_CMDLINE=\"${_CURRENT_CMDLINE} ${_ADD_GRUB_CMD}\"\n      _UPDATED_CMDLINE=$(echo \"${_UPDATED_CMDLINE}\" | sed 's/^ *//;s/ *$//')\n\n      # Form the new GRUB_CMDLINE_LINUX line\n    
  _NEW_GRUB_CMDLINE_LINUX=\"GRUB_CMDLINE_LINUX=\\\"${_UPDATED_CMDLINE}\\\"\"\n\n      echo \" \"\n      if [[ \"${_NEW_GRUB}\" == \"LIVE\" ]]; then\n        # Update the GRUB file\n        echo \"GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\"\n        echo \"GRUB: Update in the LIVE MODE\"\n        sed -i \"s|^GRUB_CMDLINE_LINUX=.*|${_NEW_GRUB_CMDLINE_LINUX}|\" \"${_GRUB_FILE}\"\n        echo \"GRUB_CMDLINE_LINUX has been updated with ${_UPDATED_CMDLINE}\"\n      elif [[ \"${_NEW_GRUB}\" == \"DEMO\" ]]; then\n        # Demo info\n        echo \"GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\"\n        echo \"GRUB: Update in the DEMO MODE\"\n        echo \" \"\n        echo \"GRUB_CMDLINE_LINUX would be updated with:\"\n        echo \"   ${_UPDATED_CMDLINE}\"\n        echo \" \"\n        echo \"GRUB: Update in the LIVE MODE requires presence of control file:\"\n        echo \"   /root/.ninc.selected.auto.cnf\"\n        echo \" \"\n        echo \"GRUB: Note that this extra control file must not exist:\"\n        echo \"   /root/.ignore.ifnames.cnf\"\n        echo \" \"\n        echo \"GRUB: This requirement serves as a double-check to confirm\"\n        echo \"GRUB: that you are aware of and agree to auto-update GRUB configuration.\"\n        echo \"GRUB: Incorrect GRUB settings can render your virtual machine unbootable,\"\n        echo \"GRUB: necessitating a rescue operation using a CD-ROM or ISO image.\"\n        echo \"GRUB: For this reason, running BOA directly on physical hardware (bare metal) is not supported.\"\n        echo \" \"\n        echo \"GRUB: NEVER USE LIVE MODE IF YOU ARE NOT SURE IF YOU NEED IT\"\n      fi\n      echo \" \"\n    fi\n  else\n    echo \"GRUB config does not exist.\"\n  fi\n}\n\n_check_root_direct() {\n  _U_TEST=DENY\n  [ \"${SUDO_USER}\" ] && _U_TEST_SDO=${SUDO_USER} || _U_TEST_SDO=$(whoami)\n  _U_TEST_WHO=$(who am i | awk '{print $1}' 2>&1)\n  _U_TEST_LNE=$(logname 2>&1)\n  if [ 
\"${_U_TEST_SDO}\" = \"root\" ] || [ \"${_U_TEST_LNE}\" = \"root\" ]; then\n    if [ -z \"${_U_TEST_WHO}\" ]; then\n      _U_TEST=ALLOW\n      ### normal for root scripts running from cron\n    else\n      if [ \"${_U_TEST_WHO}\" = \"root\" ]; then\n        _U_TEST=ALLOW\n      fi\n    fi\n  fi\n  if [ \"${_U_TEST}\" = \"DENY\" ]; then\n    echo\n    echo \"ERROR: This script must be run as root directly,\"\n    echo \"ERROR: without sudo/su switch from regular system user\"\n    echo \"ERROR: Please add and test your SSH (ed25519) keys for root account\"\n    echo \"ERROR: with direct access before trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 (modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy it manually to the ~/.ssh/authorized_keys file on the server:\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure that each key is not split across more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HINT:  You can always restrict access later, or\"\n    echo \"       allow only SSH (ed25519) keys for root with the directive\"\n    echo \"         PermitRootLogin prohibit-password\"\n    echo \"       in the /etc/ssh/sshd_config file\"\n    echo \"Bye\"\n    _clean_pid_exit\n  fi\n}\n\n_check_root_keys_pwd() {\n  # Check if root's password is locked\n  
_ROOT_PWD_LOCKED=\"NO\"\n  _S_TEST=$(grep 'root:\\*:' /etc/shadow 2>&1)\n  if [[ \"${_S_TEST}\" =~ root:\\*: ]]; then\n    _ROOT_PWD_LOCKED=\"YES\"\n  fi\n\n  # Check for presence of SSH keys\n  _SSH_KEYS_OK=\"NO\"\n  if [ -e \"/root/.ssh/authorized_keys\" ]; then\n    if grep -qE '^(ssh-rsa|ssh-ed25519|ecdsa-sha2)' /root/.ssh/authorized_keys; then\n      _SSH_KEYS_OK=\"YES\"\n    fi\n  fi\n\n  if [[ \"${_ROOT_PWD_LOCKED}\" == \"NO\" ]] && [[ \"${_SSH_KEYS_OK}\" == \"NO\" ]]; then\n    echo\n    echo \"ERROR: BOA requires working SSH keys for system root present\"\n    echo \"ERROR: Please add and test your SSH keys for root account\"\n    echo \"ERROR: before trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 (modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy it manually to the ~/.ssh/authorized_keys file on the server:\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure that each key is not split across more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HOWTO: You can prioritize your keys by adding to ~/.ssh/config lines:\"\n    echo \" Host *\"\n    echo \"   IdentityFile ~/.ssh/id_ed25519\"\n    echo \"   IdentityFile ~/.ssh/id_ecdsa\"\n    echo \"   IdentityFile ~/.ssh/id_rsa\"\n    echo\n    echo \"Bye\"\n    echo\n    _clean_pid_exit\n  
fi\n}\n\n_check_sql_running() {\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"INFO: Waiting for MySQLD availability...\"\n    sleep 5\n  done\n}\n\n_check_sql_access() {\n  if [ -e \"/root/.my.pass.txt\" ] && [ -e \"/root/.my.cnf\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_SYNC_SQL_PSWD=$(grep \"${_SQL_PSWD}\" /root/.my.cnf 2>&1)\n  else\n    echo \"ALERT: /root/.my.cnf or /root/.my.pass.txt not found.\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_a\n  fi\n  if [ -z \"${_IS_SYNC_SQL_PSWD}\" ] \\\n    || [[ ! \"${_IS_SYNC_SQL_PSWD}\" =~ \"password=${_SQL_PSWD}\" ]]; then\n    echo \"ALERT: SQL password is out of sync between\"\n    echo \"ALERT: /root/.my.cnf and /root/.my.pass.txt\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_b\n  else\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n      echo \"ALERT: SQL server on this system is not running at all.\"\n      echo \"ALERT: Please fix this before trying again, giving up.\"\n      echo \"Bye\"\n      echo \" \"\n      _clean_pid_exit _check_sql_access_c\n    else\n      _MYSQL_CONN_TEST=$(mysql -u root -e \"status\" 2>&1)\n      if [ -z \"${_MYSQL_CONN_TEST}\" ] \\\n        || [[ \"${_MYSQL_CONN_TEST}\" =~ \"Access denied\" ]]; then\n        echo \"ALERT: SQL password in /root/.my.cnf does not work.\"\n        echo \"ALERT: Please fix this before trying again, giving up.\"\n        echo \"Bye\"\n        echo \" \"\n        _clean_pid_exit _check_sql_access_d\n      fi\n    fi\n  fi\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  
else\n    _hostedSys=NO\n  fi\n}\n\n_fix_dns_settings() {\n  [ ! -d \"${_vBs}\" ] && mkdir -p ${_vBs}\n  rm -f ${_vBs}/resolv.conf.tmp\n  if ! grep -q \"nameserver 127.0.0.1\" /etc/resolv.conf; then\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      _FORCE_RESOLV_UPDATE=YES\n    else\n      _FORCE_RESOLV_UPDATE=NO\n    fi\n  fi\n  if ! grep -q \"BOA-DNS-Config\" /etc/resolv.conf || [ \"${_FORCE_RESOLV_UPDATE}\" = \"YES\" ]; then\n    echo \"### BOA-DNS-Config ###\" > ${_vBs}/resolv.conf.tmp\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      echo \"nameserver 127.0.0.1\" >> ${_vBs}/resolv.conf.tmp\n    fi\n    echo \"nameserver 1.1.1.1\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 8.8.8.8\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 9.9.9.9\" >> ${_vBs}/resolv.conf.tmp\n  fi\n  if [ -e \"${_vBs}/resolv.conf.tmp\" ]; then\n    chattr -i /etc/resolv.conf\n    rm -f /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp /etc/resolv.conf\n    chmod 0644 /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp ${_vBs}/resolv.conf.vanilla\n  fi\n  if [ -x \"/usr/sbin/unbound-control\" ] \\\n    && [ -e \"/etc/resolvconf/run/interface/lo.unbound\" ]; then\n    unbound-control reload &> /dev/null\n  fi\n}\n\n_check_dns_settings() {\n  if [ -L \"/etc/resolv.conf\" ]; then\n    _fix_dns_settings\n    return 1  # Exit the function but continue the script\n  fi\n  if [ -e \"/root/.use.default.nameservers.cnf\" ]; then\n    if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n      rm -f /root/.use.local.nameservers.cnf\n    fi\n    _USE_DEFAULT_DNS=YES\n    if ! grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n    _USE_PROVIDER_DNS=YES\n  else\n    _REMOTE_DNS_TEST=$(host files.aegir.cc 1.1.1.1 -w 10 2>&1)\n    if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [[ \"${_REMOTE_DNS_TEST}\" =~ \"no servers could be reached\" ]] \\\n    || [[ \"${_REMOTE_DNS_TEST}\" =~ \"Host files.aegir.cc not found\" ]] \\\n    || [ \"${_USE_PROVIDER_DNS}\" = \"YES\" ]; then\n    _fix_dns_settings\n  fi\n}\n\n_barracuda_downgrade_protection() {\n  if [ \"${_cmNd}\" != \"php-idle\" ] \\\n    && [ \"${_cmNd}\" != \"up-distro\" ] \\\n    && [ -e \"/var/log/barracuda_log.txt\" ]; then\n    _SERIES_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n    if [ \"${_cmNd}\" = \"up-lts\" ]; then\n      if [[ \"${_SERIES_TEST}\" =~ \"Barracuda ${_rLsn}-pro\" ]]; then\n        echo\n        echo \"ERROR: Your system has already been upgraded to ${_rLsn}-pro\"\n        echo \"You cannot downgrade back to a previous/older/LTS BOA version\"\n        echo \"Please use 'barracuda up-pro system' to upgrade this server\"\n        echo \"Bye\"\n        echo\n        _clean_pid_exit _barracuda_downgrade_protection_a\n      elif [[ \"${_SERIES_TEST}\" =~ \"Barracuda ${_rLsn}-dev\" ]]; then\n        echo\n        echo \"ERROR: Your system has already been upgraded to ${_rLsn}-dev\"\n        echo \"You cannot downgrade back to a previous/older/LTS BOA version\"\n        echo \"Please use 'barracuda up-dev system' to upgrade this server\"\n        echo \"Bye\"\n        echo\n        _clean_pid_exit _barracuda_downgrade_protection_b\n      fi\n    fi\n    if [ \"${_cmNd}\" != \"up-dev\" ] \\\n      && [ \"${_cmNd}\" != \"up-pro\" ] \\\n      && [ \"${_cmNd}\" != \"up-lts\" ] \\\n      && [ \"${_cmNd}\" != \"help\" ] \\\n      && [ \"${_cmNd}\" != \"info\" ]; then\n      echo\n      echo \"Sorry, you are trying to run an unsupported command..\"\n      echo \"Display supported commands with: $(basename \"$0\") help\"\n      echo\n      _clean_pid_exit _barracuda_downgrade_protection_c\n    fi\n  fi\n}\n\n_if_extended_report() {\n  
_EXTENDED=NO\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ] \\\n    || [ -e \"/root/.send-extended-report.cnf\" ]; then\n    _EXTENDED=YES\n  fi\n  if [ \"${_EXTENDED}\" = \"YES\" ]; then\n    if [ -e \"/root/.autoexcalibur.log\" ]; then\n      cat /root/.autoexcalibur.log >> ${_UP_LOG}\n    elif [ -e \"/root/.autodaedalus.log\" ]; then\n      cat /root/.autodaedalus.log >> ${_UP_LOG}\n    elif [ -e \"/root/.autochimaera.log\" ]; then\n      cat /root/.autochimaera.log >> ${_UP_LOG}\n    elif [ -e \"/root/.autobeowulf.log\" ]; then\n      cat /root/.autobeowulf.log >> ${_UP_LOG}\n    fi\n    echo            >> ${_UP_LOG}\n    ls -ltcra /root >> ${_UP_LOG}\n    echo            >> ${_UP_LOG}\n    ps auxf         >> ${_UP_LOG}\n    echo            >> ${_UP_LOG}\n    aureport        >> ${_UP_LOG}\n    echo            >> ${_UP_LOG}\n    aa-status | grep loaded   >> ${_UP_LOG}\n    aa-status | grep enforce  >> ${_UP_LOG}\n    aa-status | grep complain >> ${_UP_LOG}\n    echo            >> ${_UP_LOG}\n    aa-unconfined   >> ${_UP_LOG}\n    echo            >> ${_UP_LOG}\n    /opt/local/bin/boa info full >> ${_UP_LOG}\n    echo            >> ${_UP_LOG}\n  else\n    echo            >> ${_UP_LOG}\n    /opt/local/bin/boa info >> ${_UP_LOG}\n    echo            >> ${_UP_LOG}\n  fi\n}\n\n_send_report() {\n  if [ -e \"${_barCnf}\" ]; then\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _MY_EMAIL=\"$(basename \"$0\")@omega8.cc\"\n    fi\n    if [ ! 
-z \"${_MY_EMAIL}\" ]; then\n      _repSub=\"Successful Barracuda upgrade\"\n      _repSub=\"REPORT: ${_repSub} on ${_hName}\"\n      _repSub=$(echo -n ${_repSub} | fmt -su -w 2500 2>&1)\n      _if_extended_report\n      cat ${_UP_LOG} | s-nail -s \"${_repSub} at ${_NOW}\" ${_MY_EMAIL}\n      echo \"${_repSub} sent to ${_MY_EMAIL}\"\n    fi\n  fi\n}\n\n_send_alert() {\n  if [ -e \"${_barCnf}\" ]; then\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _MY_EMAIL=\"$(basename \"$0\")@omega8.cc\"\n    fi\n    if [ ! -z \"${_MY_EMAIL}\" ]; then\n      _repSub=\"${_ALERT_MSG} on ${_hName}\"\n      _repSub=$(echo -n ${_repSub} | fmt -su -w 2500 2>&1)\n      _if_extended_report\n      cat ${_UP_LOG} | s-nail -s \"${_repSub} at ${_NOW}\" ${_MY_EMAIL}\n      echo \"${_repSub} sent to ${_MY_EMAIL}\"\n    fi\n  fi\n}\n\n_check_report() {\n  sed -i \"s/^_AWS_.*//g\"         ${_UP_LOG}\n  wait\n  sed -i \"s/^_NEWRELIC_KEY.*//g\" ${_UP_LOG}\n  wait\n  sed -i \"/^$/d\"                 ${_UP_LOG}\n  wait\n  _SEND_ALERT=NO\n  _RESULT_TEST_OK=$(grep \"INFO: Test OK\" ${_UP_LOG} 2>&1)\n  _RESULT_TEST_CARD=$(grep \"CARD: Now charging\" ${_UP_LOG} 2>&1)\n  _RESULT_KERNEL=$(grep \"NOTE: Your OS kernel has been upgraded\" ${_UP_LOG} 2>&1)\n  _RESULT_REBOOT=$(grep \"NOTE: Please reboot this server\" ${_UP_LOG} 2>&1)\n  _RESULT_ENJOY=$(grep \"Enjoy your Ægir Hosting System\" ${_UP_LOG} 2>&1)\n  _RESULT_RLLY=$(grep \"RLLY: No errors\" ${_UP_LOG} 2>&1)\n  _RESULT_APT_FAIL=$(grep \"Displaying the last 15 lines\" ${_UP_LOG} 2>&1)\n  _RESULT_ALRT=$(grep \"ALRT\" ${_UP_LOG} 2>&1)\n  _RESULT_FATAL=$(grep \"FATAL ERROR\" ${_UP_LOG} 2>&1)\n  _RESULT_LIB_ERR=$(grep \"cannot open shared object file\" ${_UP_LOG} 2>&1)\n  if [[ \"${_RESULT_TEST_OK}\" =~ \"INFO: Test OK\" ]] \\\n    || [[ \"${_RESULT_TEST_OK}\" =~ \"binary file matches\" ]] \\\n    || [[ \"${_RESULT_TEST_CARD}\" =~ \"CARD: Now charging\" ]] \\\n    || [[ \"${_RESULT_TEST_CARD}\" =~ \"binary file matches\" ]] \\\n   
 || [[ \"${_RESULT_RLLY}\" =~ \"RLLY: No errors\" ]] \\\n    || [[ \"${_RESULT_RLLY}\" =~ \"binary file matches\" ]] \\\n    || [[ \"${_RESULT_ENJOY}\" =~ \"Enjoy your Ægir\" ]] \\\n    || [[ \"${_RESULT_ENJOY}\" =~ \"binary file matches\" ]]; then\n    if [[ \"${_RESULT_APT_FAIL}\" =~ \"Displaying the last 15 lines\" ]] \\\n      || [[ \"${_RESULT_APT_FAIL}\" =~ \"binary file matches\" ]]; then\n      if [[ \"${_RESULT_LIB_ERR}\" =~ \"cannot open shared object file\" ]] \\\n        || [[ \"${_RESULT_LIB_ERR}\" =~ \"binary file matches\" ]]; then\n        _SEND_ALERT=YES\n        _ALERT_MSG=\"ALERT: Failed (libs) Barracuda upgrade\"\n      fi\n    else\n      _DO_NOTHING=YES\n    fi\n  else\n    _SEND_ALERT=YES\n    _ALERT_MSG=\"ALERT: Failed (msg) Barracuda upgrade\"\n  fi\n  if [[ \"${_RESULT_APT_FAIL}\" =~ \"Displaying the last 15 lines\" ]] \\\n    || [[ \"${_RESULT_APT_FAIL}\" =~ \"binary file matches\" ]]; then\n    _SEND_ALERT=YES\n    _ALERT_MSG=\"ALERT: Failed (apt) Barracuda upgrade\"\n  fi\n  if [[ \"${_RESULT_KERNEL}\" =~ \"Your OS kernel\" ]] \\\n    && [[ \"${_RESULT_REBOOT}\" =~ \"Please reboot\" ]]; then\n    _SEND_ALERT=YES\n    _ALERT_MSG=\"REBOOT: Your new system kernel requires boa reboot\"\n  fi\n  if [[ \"${_RESULT_ALRT}\" =~ \"ALRT\" ]] \\\n    || [[ \"${_RESULT_ALRT}\" =~ \"binary file matches\" ]]; then\n    _SEND_ALERT=YES\n    _ALERT_MSG=\"ALERT: Failed (alrt) Barracuda upgrade\"\n  fi\n  if [[ \"${_RESULT_FATAL}\" =~ \"FATAL ERROR\" ]] \\\n    || [[ \"${_RESULT_FATAL}\" =~ \"binary file matches\" ]]; then\n    _SEND_ALERT=YES\n    _ALERT_MSG=\"ALERT: Failed (aborted) Barracuda upgrade\"\n  fi\n  if [[ \"${_RESULT_LIB_ERR}\" =~ \"cannot open shared object file\" ]] \\\n    || [[ \"${_RESULT_LIB_ERR}\" =~ \"binary file matches\" ]]; then\n    _SEND_ALERT=YES\n    _ALERT_MSG=\"ALERT: Failed (libs) Barracuda upgrade\"\n  fi\n  if [ \"${_SEND_ALERT}\" = \"YES\" ]; then\n    _send_alert\n  else\n    _send_report\n  fi\n}\n\n_check_php_cli() 
{\n  _PHP_CHECK=\"$(readlink -n /usr/bin/php)\"\n  if [ ! -x \"${_PHP_CHECK}\" ]; then\n    if [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ]; then\n      _PHP_CLI_PATH=\"/opt/php84/bin/php\"\n    elif [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ]; then\n      _PHP_CLI_PATH=\"/opt/php85/bin/php\"\n    elif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ]; then\n      _PHP_CLI_PATH=\"/opt/php83/bin/php\"\n    else\n      _PHP_CLI_PATH=\"\"\n    fi\n    if [ -x \"${_PHP_CLI_PATH}\" ]; then\n      _USE_PHP_CLI_PATH=\"${_PHP_CLI_PATH}\"\n    else\n      if [ -x \"/opt/php84/bin/php\" ]; then\n        _USE_PHP_CLI_PATH=\"/opt/php84/bin/php\"\n      elif [ -x \"/opt/php85/bin/php\" ]; then\n        _USE_PHP_CLI_PATH=\"/opt/php85/bin/php\"\n      elif [ -x \"/opt/php83/bin/php\" ]; then\n        _USE_PHP_CLI_PATH=\"/opt/php83/bin/php\"\n      fi\n    fi\n    if [ -x \"${_USE_PHP_CLI_PATH}\" ]; then\n      rm -f /usr/bin/php\n      rm -f /usr/bin/php-cli\n      ln -sfn ${_USE_PHP_CLI_PATH} /usr/bin/php\n      ln -sfn ${_USE_PHP_CLI_PATH} /usr/bin/php-cli\n    else\n      echo \"ERROR: Cannot find PHP-CLI anywhere!\"\n      echo \"ERROR: BOA requires PHP 8.3 or newer\"\n      _clean_pid_exit _check_php_cli_a\n    fi\n  fi\n}\n\n_oops_mysql_exit() {\n  echo \"Oops.. ${1}\"\n  echo \"We cannot continue\"\n  echo \"Bye!\"\n  _clean_pid_exit _mysql_exit\n}\n\n_mysql_downgrade_protection() {\n  _DB_V=\"\"\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! -z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  else\n    _oops_mysql_exit \"MISSING MySQL: no mysql client found in PATH\"\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  else\n    _oops_mysql_exit \"MYSTERIOUS MySQL: ${_DB_SERVER_TEST}\"\n  fi\n}\n\n_up_action() {\n  if [ -e \"${_vBs}/${_barName}\" ] && [ -e \"${_barCnf}\" ]; then\n    if [ ! 
-z \"${_rKey}\" ]; then\n      if [ \"${_rKey}\" = \"php-85\" ] || [ \"${_rKey}\" = \"php-8.5\" ]; then\n        _phpS=\"8.5\"\n      elif [ \"${_rKey}\" = \"php-84\" ] || [ \"${_rKey}\" = \"php-8.4\" ]; then\n        _phpS=\"8.4\"\n      elif [ \"${_rKey}\" = \"php-83\" ] || [ \"${_rKey}\" = \"php-8.3\" ]; then\n        _phpS=\"8.3\"\n      elif [ \"${_rKey}\" = \"php-all\" ] || [ \"${_rKey}\" = \"php-min\" ]; then\n        _phpS=\"MIN\"\n        [ -e \"/root/.allow-php-multi-install-cleanup.cnf\" ] && rm -f /root/.allow-php-multi-install-cleanup.cnf\n      elif [ \"${_rKey}\" = \"php-max\" ]; then\n        _phpS=\"MAX\"\n        [ -e \"/root/.allow-php-multi-install-cleanup.cnf\" ] && rm -f /root/.allow-php-multi-install-cleanup.cnf\n      elif [ \"${_rKey}\" = \"nodns\" ]; then\n        sed -i \"s/^_SMTP_RELAY_TEST.*/_SMTP_RELAY_TEST=NO/g\" ${_vBs}/${_barName}\n        wait\n        sed -i \"s/^_DNS_SETUP_TEST.*/_DNS_SETUP_TEST=NO/g\"   ${_vBs}/${_barName}\n        wait\n        sed -i \"s/^_SMTP_RELAY_TEST.*/_SMTP_RELAY_TEST=NO/g\"          ${_barCnf}\n        wait\n        sed -i \"s/^_DNS_SETUP_TEST.*/_DNS_SETUP_TEST=NO/g\"            ${_barCnf}\n        wait\n      else\n        _L_KEY=$(echo ${#_rKey} 2>&1)\n        if [ ! 
-z \"${_L_KEY}\" ] && [ \"${_L_KEY}\" = \"40\" ]; then\n          sed -i \"s/^_NEWREL.*/_NEWRELIC_KEY=\\\"${_rKey}\\\"/g\" ${_vBs}/${_barName}\n          wait\n          sed -i \"s/^_NEWRELIC.*/_NEWRELIC_KEY=\\\"${_rKey}\\\"/g\"        ${_barCnf}\n          wait\n        fi\n      fi\n\n      ### Enable debugging if requested\n      if [ -e \"/root/.debug-boa-installer.cnf\" ] \\\n        || [ -e \"/root/.debug-barracuda-installer.cnf\" ]; then\n        sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"         ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"                  ${_barCnf}\n        wait\n      fi\n\n      ### Debugging\n      if [ \"${_dEbg}\" = \"debug\" ]; then\n        echo _rKey is ${_rKey}\n        echo waiting 8 seconds...\n        sleep 8\n      fi\n      ### Debugging\n\n    fi\n    ### Make sure that _PHP_SINGLE_INSTALL is set in ${_barCnf}\n    if [ \"${_phpS}\" = \"8.5\" ] \\\n      || [ \"${_phpS}\" = \"8.4\" ] \\\n      || [ \"${_phpS}\" = \"8.3\" ]; then\n      _PHP_SINGLE_INSTALL_TEST=$(grep _PHP_SINGLE_INSTALL ${_barCnf} 2>&1)\n      if [[ \"${_PHP_SINGLE_INSTALL_TEST}\" =~ \"_PHP_SINGLE_INSTALL\" ]]; then\n        sed -i \"s/^_PHP_SINGLE.*/_PHP_SINGLE_INSTALL=${_phpS}/g\" ${_barCnf}\n        wait\n      else\n        echo \"_PHP_SINGLE_INSTALL=${_phpS}\" >> ${_barCnf}\n      fi\n    fi\n    ### Make sure that _PHP_SINGLE_INSTALL takes precedence\n    if [ ! 
-z \"${_phpS}\" ]; then\n      if [ \"${_phpS}\" = \"8.5\" ] \\\n        || [ \"${_phpS}\" = \"8.4\" ] \\\n        || [ \"${_phpS}\" = \"8.3\" ]; then\n\n        ### Debugging\n        if [ \"${_dEbg}\" = \"debug\" ]; then\n          echo _PHP_MULTI_INSTALL is ${_phpS}\n          echo _PHP_SINGLE_INSTALL is ${_phpS}\n          echo waiting 8 seconds...\n          sleep 8\n        fi\n        ###\n\n        sed -i \"s/^_PHP_SIN.*/_PHP_SINGLE_INSTALL=${_phpS}/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_MUL.*/_PHP_MULTI_INSTALL=${_phpS}/g\"  ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_CLI_V.*/_PHP_CLI_VERSION=${_phpS}/g\"  ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_FPM_V.*/_PHP_FPM_VERSION=${_phpS}/g\"  ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_SINGL.*/_PHP_SINGLE_INSTALL=${_phpS}/g\"        ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_MULTI.*/_PHP_MULTI_INSTALL=${_phpS}/g\"         ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_CLI_V.*/_PHP_CLI_VERSION=${_phpS}/g\"           ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_FPM_V.*/_PHP_FPM_VERSION=${_phpS}/g\"           ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_CLI_V.*/_PHP_CLI_VERSION=${_phpS}/g\" \\\n          /root/.*.octopus.cnf &> /dev/null\n        wait\n        sed -i \"s/^_PHP_FPM_V.*/_PHP_FPM_VERSION=${_phpS}/g\" \\\n          /root/.*.octopus.cnf &> /dev/null\n        wait\n\n        ### Debugging\n        if [ \"${_dEbg}\" = \"debug\" ]; then\n          cat ${_barCnf}\n          echo\n          echo test fin!\n          _clean_pid_exit debug_a\n        fi\n        ### Debugging\n\n        if [ -d \"/data/u\" ] && [ -e \"/data/conf/global.inc\" ]; then\n          for _Ctrl in `find /data/disk/*/log/fpm.txt \\\n            -maxdepth 0 -mindepth 0 | sort`; do\n            echo ${_phpS} > ${_Ctrl}\n          done\n          for _Ctrl in `find /data/disk/*/log/cli.txt \\\n            -maxdepth 0 -mindepth 0 | sort`; do\n 
           echo ${_phpS} > ${_Ctrl}\n          done\n        fi\n      elif [ \"${_phpS}\" = \"MIN\" ] || [ \"${_phpS}\" = \"MAX\" ]; then\n        if [ \"${_phpS}\" = \"MAX\" ]; then\n          _phpM=\"5.6 7.0 7.1 7.2 7.3 7.4 8.0 8.1 8.2 8.3 8.4 8.5\"\n        else\n          _phpM=\"8.3 8.4 8.5\"\n        fi\n        _phpS=\"8.4\"\n\n        ### Debugging\n        if [ \"${_dEbg}\" = \"debug\" ]; then\n          echo _PHP_MULTI_INSTALL is ${_phpM}\n          echo _PHP_SINGLE_INSTALL is ${_phpS}\n          echo waiting 8 seconds...\n          sleep 8\n        fi\n        ###\n\n        sed -i \"s/^_PHP_SINGLE.*/_PHP_SINGLE_INSTALL=/g\"     ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_M.*/_PHP_MULTI_INSTALL=\\\"${_phpM}\\\"/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_CLI_V.*/_PHP_CLI_VERSION=${_phpS}/g\"  ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_FPM_V.*/_PHP_FPM_VERSION=${_phpS}/g\"  ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_SINGL.*/_PHP_SINGLE_INSTALL=/g\"               ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_MULTI.*/_PHP_MULTI_INSTALL=\\\"${_phpM}\\\"/g\"     ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_CLI_VERSION=.*/_PHP_CLI_VERSION=${_phpS}/g\"    ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_FPM_VERSION=.*/_PHP_FPM_VERSION=${_phpS}/g\"    ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_CLI_VERSION=.*/_PHP_CLI_VERSION=${_phpS}/g\" \\\n          /root/.*.octopus.cnf &> /dev/null\n        wait\n        sed -i \"s/^_PHP_FPM_VERSION=.*/_PHP_FPM_VERSION=${_phpS}/g\" \\\n          /root/.*.octopus.cnf &> /dev/null\n        wait\n\n        ### Debugging\n        if [ \"${_dEbg}\" = \"debug\" ]; then\n          cat ${_barCnf}\n          echo\n          echo test fin!\n          _clean_pid_exit debug_b\n        fi\n        ### Debugging\n\n        if [ -d \"/data/u\" ] && [ -e \"/data/conf/global.inc\" ]; then\n          for _Ctrl in `find /data/disk/*/log/fpm.txt \\\n      
      -maxdepth 0 -mindepth 0 | sort`; do\n            echo ${_phpS} > ${_Ctrl}\n          done\n          for _Ctrl in `find /data/disk/*/log/cli.txt \\\n            -maxdepth 0 -mindepth 0 | sort`; do\n            echo ${_phpS} > ${_Ctrl}\n          done\n        fi\n      fi\n    fi\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" != \"YES\" ]; then\n      mkdir -p ${_vBs}/old-firewall-monitor/${_NOW}/legacy\n      mv -f /var/xdrago/monitor/*.log \\\n        ${_vBs}/old-firewall-monitor/${_NOW}/legacy/ &> /dev/null\n      mv -f /var/xdrago/monitor/log/*.log \\\n        ${_vBs}/old-firewall-monitor/${_NOW}/ &> /dev/null\n    fi\n    if [ \"${_cmNd}\" = \"up-dev\" ] \\\n      || [ \"${_cmNd}\" = \"up-pro\" ] \\\n      || [ \"${_cmNd}\" = \"up-lts\" ] \\\n      || [ \"${_cmNd}\" = \"up-distro\" ] \\\n      || [ \"${_cmNd}\" = \"php-idle\" ]; then\n      sed -i \"s/^_BRANCH_PRN=.*/_BRANCH_PRN=${_bRnh}/g\"      ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_VERSION.*/_AEGIR_VERSION=${_tRee}/g\" ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_XTS_VRN.*/_AEGIR_XTS_VRN=${_tRee}/g\" ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_X_VERSION=.*/_X_VERSION=${_rlsE}/g\"        ${_vBs}/${_filIncB}\n      wait\n#       if [ \"${_hostedSys}\" = \"YES\" ]; then\n#         sed -i \"s/^_VALKEY_MAJOR_.*/_VALKEY_MAJOR_RELEASE=9/g\"        ${_barCnf}\n#         wait\n#       fi\n      sed -i \"s/^_REDIS_MAJOR_.*/_REDIS_MAJOR_RELEASE=7/g\"            ${_barCnf}\n      wait\n    else\n      sed -i \"s/^_BRANCH_PRN=.*/_BRANCH_PRN=${_bRnh}/g\"      ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_VERSION.*/_AEGIR_VERSION=${_tRee}/g\" ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_XTS_VRN.*/_AEGIR_XTS_VRN=${_rlsE}/g\" ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_X_VERSION=.*/_X_VERSION=${_rlsE}/g\"        ${_vBs}/${_filIncB}\n      wait\n    fi\n    if [ \"${_cmNd}\" = \"up-dev\" ] \\\n      || [ \"${_cmNd}\" = \"up-pro\" ] \\\n      || [ 
\"${_cmNd}\" = \"up-lts\" ] \\\n      || [ \"${_cmNd}\" = \"up-distro\" ] \\\n      || [ \"${_cmNd}\" = \"php-idle\" ]; then\n      sed -i \"s/^_BRANCH_BOA=.*/_BRANCH_BOA=${_bRnh}/g\"      ${_vBs}/${_filIncB}\n      wait\n    else\n      sed -i \"s/^_BRANCH_BOA=.*/_BRANCH_BOA=${_bRnh}/g\"      ${_vBs}/${_filIncB}\n      wait\n    fi\n    sed -i \"s/^_AUTOPILOT=NO/_AUTOPILOT=YES/g\"               ${_vBs}/${_filIncB}\n    wait\n    sed -i \"s/^_SMTP_RELAY_TEST=YES/_SMTP_RELAY_TEST=NO/g\"   ${_vBs}/${_filIncB}\n    wait\n    ### Force HTTP/2 or SPDY plus PFS on supported systems\n    sed -i \"s/^_NGINX_SPDY=.*/_NGINX_SPDY=YES/g\"                      ${_barCnf}\n    wait\n    sed -i \"s/^_NGINX_FORWARD.*/_NGINX_FORWARD_SECRECY=YES/g\"         ${_barCnf}\n    wait\n    ### Force Valkey SOCKET mode if PORT was used before\n    sed -i \"s/^_VALKEY_LISTEN_MODE=PORT/_VALKEY_LISTEN_MODE=SOCKET/g\" ${_barCnf}\n    wait\n    ### Force Redis SOCKET mode if PORT was used before\n    sed -i \"s/^_REDIS_LISTEN_MODE=PORT/_REDIS_LISTEN_MODE=SOCKET/g\"   ${_barCnf}\n    wait\n    ### Force latest OpenSSH from sources on supported systems\n    sed -i \"s/^_SSH_FROM_SOURCES=.*/_SSH_FROM_SOURCES=YES/g\"          ${_barCnf}\n    wait\n    if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      sed -i \"s/^_DB_SERVER=.*/_DB_SERVER=Percona/g\"         ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_DB_SERVER=.*/_DB_SERVER=Percona/g\"                  ${_barCnf}\n      wait\n    else\n      sed -i \"s/^_DB_SERVER=.*/_DB_SERVER=Percona/g\"         ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_DB_SERVER=.*/_DB_SERVER=Percona/g\"                  ${_barCnf}\n      wait\n    fi\n    if [ \"${_sQld}\" = \"percona-8.4\" ] \\\n      || [ \"${_rKey}\" = \"percona-8.4\" ] \\\n      || [ \"${_dEbg}\" = \"percona-8.4\" ] \\\n      || [ -e \"/root/.percona.8.4.cnf\" ]; then\n      if [[ \"${_DB_V}\" =~ 8.3 || \"${_DB_V}\" =~ 8.0 || \"${_DB_V}\" =~ 5.7 ]]; then\n        sed -i 
\"s/^_DB_SERIES=.*/_DB_SERIES=8.4/g\" ${_vBs}/${_filIncB}\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.4/g\" ${_barCnf}\n      fi\n    elif [ \"${_sQld}\" = \"percona-8.0\" ] \\\n      || [ \"${_rKey}\" = \"percona-8.0\" ] \\\n      || [ \"${_dEbg}\" = \"percona-8.0\" ] \\\n      || [ -e \"/root/.percona.8.0.cnf\" ]; then\n      if [[ \"${_DB_V}\" =~ 5.7 ]]; then\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.0/g\" ${_vBs}/${_filIncB}\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.0/g\" ${_barCnf}\n      fi\n    elif [ \"${_sQld}\" = \"percona-5.7\" ] \\\n      || [ \"${_rKey}\" = \"percona-5.7\" ] \\\n      || [ \"${_dEbg}\" = \"percona-5.7\" ] \\\n      || [ -e \"/root/.percona.5.7.cnf\" ]; then\n      if [[ \"${_DB_V}\" =~ 5.7 ]]; then\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=5.7/g\" ${_vBs}/${_filIncB}\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=5.7/g\" ${_barCnf}\n      fi\n    else\n      if [[ \"${_DB_V}\" =~ 5.7 ]]; then\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=5.7/g\" ${_vBs}/${_filIncB}\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=5.7/g\" ${_barCnf}\n      fi\n    fi\n\n    if [ -e \"${_barCnf}\" ]; then\n      wait\n      sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"                      ${_barCnf}\n      wait\n      sed -i \"s/^_DNS_SETUP_TEST=.*/_DNS_SETUP_TEST=YES/g\"            ${_barCnf}\n      wait\n      sed -i \"s/^_SMTP_RELAY_TEST=.*/_SMTP_RELAY_TEST=NO/g\"           ${_barCnf}\n      wait\n      sed -i \"s/^_STRONG_PASSWORDS=.*/_STRONG_PASSWORDS=YES/g\"        ${_barCnf}\n      wait\n      sed -i \"s/^_PERMISSIONS_FIX=.*/_PERMISSIONS_FIX=YES/g\"          ${_barCnf}\n      wait\n      sed -i \"s/^_VALKEY_LISTEN.*/_VALKEY_LISTEN_MODE=SOCKET/g\"       ${_barCnf}\n      wait\n      sed -i \"s/^_REDIS_LISTEN.*/_REDIS_LISTEN_MODE=SOCKET/g\"         ${_barCnf}\n      wait\n      if [ -e \"/etc/init.d/valkey-server\" ] || [ -e \"/etc/init.d/redis-server\" ]; then\n        sed -i \"s/^_USE_MYSQLTUNER=.*/_USE_MYSQLTUNER=NO/g\"           
${_barCnf}\n        wait\n      fi\n    fi\n\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      sed -i 's~^_MY_EMAIL=.*~_MY_EMAIL=\"notify@omega8.cc\"~'          ${_barCnf}\n      wait\n    fi\n\n    if [ -e \"${_vBs}/${_barName}\" ] && [ -e \"${_vBs}/${_filIncB}\" ]; then\n      if [ \"${_outP}\" = \"log\" ] \\\n        || [ \"${_outP}\" = \"system\" ] \\\n        || [ \"${_outP}\" = \"aegir\" ] \\\n        || [ \"${_outP}\" = \"disable\" ] \\\n        || [ \"${_outP}\" = \"enable\" ]; then\n\n        if [ \"${_outP}\" = \"log\" ] \\\n          || [ \"${_outP}\" = \"system\" ] \\\n          || [ \"${_outP}\" = \"aegir\" ]; then\n          echo\n          echo \"Preparing the upgrade in silent mode...\"\n          echo\n          echo \"NOTE: There will be no progress displayed in the console\"\n          echo \"but you will receive an email once the upgrade is complete\"\n          echo\n          sleep 5\n          echo \"You can watch the progress in another window with the command:\"\n          echo \"  tail -f ${_UP_LOG}\"\n          echo \"or wait until you see the line: BARRACUDA upgrade completed\"\n          echo\n          echo \"Waiting 5 seconds...\"\n          sleep 5\n          echo \"Starting the upgrade in silent mode now...\"\n          echo\n          sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"                  ${_barCnf}\n          wait\n          sed -i \"s/^_SPINNER=YES/_SPINNER=NO/g\"             ${_vBs}/${_barName}\n          wait\n        fi\n        if [ \"${_outP}\" = \"system\" ]; then\n          touch /run/boa_system_wait.pid\n          if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n            && [ -d \"/var/aegir/config/server_master/nginx/subdir.d\" ]; then\n            sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=YES/g\"   ${_vBs}/${_filIncB}\n            wait\n            sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=YES/g\"            ${_barCnf}\n            wait\n          else\n            sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"    
${_vBs}/${_filIncB}\n            wait\n            sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"             ${_barCnf}\n            wait\n          fi\n          sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=NO/g\"   ${_vBs}/${_filIncB}\n          wait\n          sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=NO/g\"            ${_barCnf}\n          wait\n          bash ${_vBs}/${_barName} >${_UP_LOG} 2>&1\n          wait\n          _check_report\n        elif [ \"${_outP}\" = \"log\" ]; then\n          sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"      ${_vBs}/${_filIncB}\n          wait\n          sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"               ${_barCnf}\n          wait\n          sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=NO/g\"   ${_vBs}/${_filIncB}\n          wait\n          sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=NO/g\"            ${_barCnf}\n          wait\n          bash ${_vBs}/${_barName} 2>&1 | tee ${_UP_LOG}\n          wait\n          _check_report\n        elif [ \"${_outP}\" = \"aegir\" ]; then\n          sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"      ${_vBs}/${_filIncB}\n          wait\n          sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"               ${_barCnf}\n          wait\n          sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=YES/g\"  ${_vBs}/${_filIncB}\n          wait\n          sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=YES/g\"           ${_barCnf}\n          wait\n          bash ${_vBs}/${_barName} 2>&1 | tee ${_UP_LOG}\n          wait\n          _check_report\n        elif [ \"${_outP}\" = \"enable\" ] || [ \"${_outP}\" = \"disable\" ]; then\n          sed -i \"s/^_PHP_IDLE.*//g\"                                  ${_barCnf}\n          wait\n          if [ \"${_outP}\" = \"enable\" ]; then\n            echo \"_PHP_IDLE=ON\"                                    >> ${_barCnf}\n            wait\n          elif [ \"${_outP}\" = \"disable\" ]; then\n            echo \"_PHP_IDLE=OFF\"                                  
 >> ${_barCnf}\n            wait\n          fi\n          bash ${_vBs}/${_barName} 2>&1 | tee ${_UP_LOG}\n          wait\n        fi\n      else\n        sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"        ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_SYSTEM_UP.*/_SYSTEM_UP_ONLY=NO/g\"                 ${_barCnf}\n        wait\n        sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=NO/g\"     ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_AEGIR_UP.*/_AEGIR_UPGRADE_ONLY=NO/g\"              ${_barCnf}\n        wait\n        sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=NO/g\"                     ${_barCnf}\n        wait\n        if [ \"${_cmNd}\" = \"php-idle\" ]; then\n          sed -i \"s/^_PHP_IDLE.*//g\"                                  ${_barCnf}\n          wait\n          if [ \"${_outP}\" = \"enable\" ]; then\n            echo \"_PHP_IDLE=ON\"                                    >> ${_barCnf}\n            wait\n          elif [ \"${_outP}\" = \"disable\" ]; then\n            echo \"_PHP_IDLE=OFF\"                                   >> ${_barCnf}\n            wait\n          fi\n        fi\n        if [ \"${_cmNd}\" = \"up-distro\" ]; then\n          sed -i \"s/^_UP_DISTRO.*//g\"                                 ${_barCnf}\n          wait\n          if [ \"${_outP}\" = \"enable\" ]; then\n            echo \"_UP_DISTRO=ON\"                                   >> ${_barCnf}\n            wait\n          elif [ \"${_outP}\" = \"disable\" ]; then\n            echo \"_UP_DISTRO=OFF\"                                  >> ${_barCnf}\n            wait\n          fi\n        fi\n        bash ${_vBs}/${_barName} 2>&1 | tee ${_UP_LOG}\n        wait\n      fi\n    fi\n  else\n    if [ ! -e \"${_vBs}/${_barName}\" ]; then\n      echo\n      echo \"  ${_vBs}/${_barName} installer not available, exit\"\n      echo\n      _clean_pid_exit _barName\n    fi\n    if [ ! 
-e \"${_vBs}/${_filIncB}\" ]; then\n      echo\n      echo \"  ${_vBs}/${_filIncB} inc file not available, exit\"\n      echo\n      _clean_pid_exit _filIncB\n    fi\n    if [ ! -e \"${_barCnf}\" ]; then\n      echo\n      echo \"  ${_barCnf} file not available, exit\"\n      echo\n      _clean_pid_exit _barCnf\n    fi\n  fi\n}\n\n_up_start() {\n  if [ -e \"/run/boa_run.pid\" ]; then\n    echo\n    echo \"  Another BOA installer is probably running\"\n    echo \"  because /run/boa_run.pid exists\"\n    echo\n    exit 1\n  elif [ -e \"/run/boa_wait.pid\" ]; then\n    echo\n    echo \"  Some important system task is probably running\"\n    echo \"  because /run/boa_wait.pid exists\"\n    echo\n    exit 1\n  else\n    touch /run/boa_run.pid\n    touch /run/boa_wait.pid\n    [ -e \"/tmp/aegir_backup_mode.txt\" ] && rm -f /tmp/aegir_backup_mode.txt\n    #pkill -9 -f daily.sh\n    #rm -f /run/*daily*.pid\n    mkdir -p ${_LOG_DIR}\n    cd ${_vBs}\n    rm -f ${_vBs}/BARRACUDA.sh*\n  fi\n  if [ -e \"/opt/local/bin/php\" ] \\\n    || [ -e \"/opt/local/bin/pear\" ] \\\n    || [ -e \"/usr/local/bin/php\" ] \\\n    || [ -e \"/usr/local/bin/pear\" ]; then\n    rm -f /opt/local/bin/pear\n    rm -f /opt/local/bin/php\n    rm -f /usr/local/bin/pear\n    rm -f /usr/local/bin/php\n  fi\n}\n\n_if_fix_iptables_symlinks() {\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n  if [ -x \"/sbin/iptables\" ] && [ ! -e \"/usr/sbin/iptables\" ]; then\n    ln -sfn /sbin/iptables /usr/sbin/iptables\n  fi\n  if [ -x \"/usr/sbin/iptables\" ] && [ ! -e \"/sbin/iptables\" ]; then\n    ln -sfn /usr/sbin/iptables /sbin/iptables\n  fi\n  if [ -x \"/sbin/iptables-save\" ] && [ ! -e \"/usr/sbin/iptables-save\" ]; then\n    ln -sfn /sbin/iptables-save /usr/sbin/iptables-save\n  fi\n  if [ -x \"/usr/sbin/iptables-save\" ] && [ ! -e \"/sbin/iptables-save\" ]; then\n    ln -sfn /usr/sbin/iptables-save /sbin/iptables-save\n  fi\n  if [ -x \"/sbin/iptables-restore\" ] && [ ! 
-e \"/usr/sbin/iptables-restore\" ]; then\n    ln -sfn /sbin/iptables-restore /usr/sbin/iptables-restore\n  fi\n  if [ -x \"/usr/sbin/iptables-restore\" ] && [ ! -e \"/sbin/iptables-restore\" ]; then\n    ln -sfn /usr/sbin/iptables-restore /sbin/iptables-restore\n  fi\n  if [ -x \"/sbin/ip6tables\" ] && [ ! -e \"/usr/sbin/ip6tables\" ]; then\n    ln -sfn /sbin/ip6tables /usr/sbin/ip6tables\n  fi\n  if [ -x \"/usr/sbin/ip6tables\" ] && [ ! -e \"/sbin/ip6tables\" ]; then\n    ln -sfn /usr/sbin/ip6tables /sbin/ip6tables\n  fi\n  if [ -x \"/sbin/ip6tables-save\" ] && [ ! -e \"/usr/sbin/ip6tables-save\" ]; then\n    ln -sfn /sbin/ip6tables-save /usr/sbin/ip6tables-save\n  fi\n  if [ -x \"/usr/sbin/ip6tables-save\" ] && [ ! -e \"/sbin/ip6tables-save\" ]; then\n    ln -sfn /usr/sbin/ip6tables-save /sbin/ip6tables-save\n  fi\n  if [ -x \"/sbin/ip6tables-restore\" ] && [ ! -e \"/usr/sbin/ip6tables-restore\" ]; then\n    ln -sfn /sbin/ip6tables-restore /usr/sbin/ip6tables-restore\n  fi\n  if [ -x \"/usr/sbin/ip6tables-restore\" ] && [ ! 
-e \"/sbin/ip6tables-restore\" ]; then\n    ln -sfn /usr/sbin/ip6tables-restore /sbin/ip6tables-restore\n  fi\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n}\n\n_up_finish() {\n  rm -f /root/.bashrc.bak*\n  rm -f /root/BOA.sh*\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/run/boa_system_wait.pid\" ] && rm -f /run/boa_system_wait.pid\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n  [ -e \"/run/manage_ruby_users.pid\" ] && rm -f /run/manage_ruby_users.pid\n  [ -e \"/tmp/aegir_backup_mode.txt\" ] && rm -f /tmp/aegir_backup_mode.txt\n  if [ -d \"/data/u\" ]; then\n    rm -f ${_vBs}/*.sh.cnf*\n    rm -f ${_vBs}/BARRACUDA.sh*\n  fi\n  [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n  mv -f /var/log/php/* /var/backups/php-logs/${_NOW}/ &> /dev/null\n  if [ -e \"/opt/local/bin/php\" ] || [ -e \"/usr/local/bin/php\" ]; then\n    rm -f /opt/local/bin/php\n    rm -f /usr/local/bin/php\n  fi\n  _if_hosted_sys\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    service webmin stop &> /dev/null\n    service usermin stop &> /dev/null\n  fi\n  if [ -x \"/usr/sbin/csf\" ] \\\n    && [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ ! 
-x \"/etc/csf/csfpost.sh\" ]; then\n    echo \"\" > /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A OUTPUT -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    chmod 700 /etc/csf/csfpost.sh\n    service lfd stop &> /dev/null\n    pkill -9 -f ConfigServer\n    killall sleep &> /dev/null\n    rm -f /etc/csf/csf.error\n    csf -x  &> /dev/null\n    service clean-boa-env start &> /dev/null\n    _if_fix_iptables_symlinks\n    ### csf -uf &> /dev/null\n    ### wait\n    _NFTABLES_TEST=$(iptables -V)\n    if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n      if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n        update-alternatives --set iptables /usr/sbin/iptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n        update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n        update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n        update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n      fi\n    fi\n    csf -e &> /dev/null\n    sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n    sed -i \"/^$/d\" /etc/csf/csf.allow\n    if [ -e \"/var/log/daemon.log\" ]; then\n      _DHCP_LOG=\"/var/log/daemon.log\"\n    else\n      _DHCP_LOG=\"/var/log/syslog\"\n    fi\n    grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n      if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n        IFS='.' 
read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n        if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n          echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n        fi\n      fi\n    done\n    if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n      csf -ra &> /dev/null\n      synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    else\n      csf -r &> /dev/null\n    fi\n    ### Linux kernel TCP SACK CVEs mitigation\n    ### CVE-2019-11477 SACK Panic\n    ### CVE-2019-11478 SACK Slowness\n    ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n    if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n      _SACK_TEST=$(ip6tables --list | grep tcpmss)\n      if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n        sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n        iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    fi\n  fi\n  _CNT=$(pgrep -fc 'tee -a /var/backups/barracuda-')\n  if (( _CNT > 1 )); then\n    pkill -f 'tee -a /var/backups/barracuda-'\n  fi\n  if [ -e \"/etc/init.d/postfix\" ]; then\n    service postfix restart &> /dev/null\n  fi\n  # Percona is used on all supported releases, so no per-release branching is needed\n  _DB_SERVER=Percona\n  if [ \"$(boa info | grep -c ${_DB_SERVER})\" -ge 3 ] || [ -e \"/usr/sbin/csf\" ]; then\n    # Execute the daily.sh and exit\n    nohup /var/xdrago/daily.sh > /dev/null 2>&1 &\n    sleep 1\n  fi\n  echo\n  echo \"BARRACUDA upgrade completed\"\n  echo \"Bye\"\n  echo\n  exit 0\n}\n\n_check_etc_apt_preferences() {\n  if [ -e \"/etc/apt/preferences\" ] \\\n    && [ ! 
-e \"/var/backups/old_etc_apt_preferences\" ]; then\n    mv -f /etc/apt/preferences /var/backups/old_etc_apt_preferences\n  fi\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! -e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        export _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && export _USE_MIR=\"files.aegir.cc\"\n      else\n        export _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      export _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    export _USE_MIR=\"files.aegir.cc\"\n  fi\n  export _urlDev=\"http://${_USE_MIR}/dev\"\n  export _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_if_reinstall_curl() {\n  _CURL_VRN=8.20.0\n  if ! 
command -v lsb_release &> /dev/null; then\n    apt-get update -qq &> /dev/null\n    apt-get install lsb-release ${_aptYesUnth} -qq &> /dev/null\n  fi\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  [ \"${_OS_CODE}\" = \"wheezy\" ] && _CURL_VRN=7.50.1\n  [ \"${_OS_CODE}\" = \"jessie\" ] && _CURL_VRN=7.71.1\n  [ \"${_OS_CODE}\" = \"stretch\" ] && _CURL_VRN=8.2.1\n  _isCurl=$(curl --version 2>&1)\n  if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n    echo \"OOPS: cURL is broken! Re-installing..\"\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    echo \"curl install\" | dpkg --set-selections 2> /dev/null\n    _apt_clean_update\n    # Check for libssl1.0-dev and remove conditionally\n    if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n      apt-get remove libssl1.0-dev -y --purge --auto-remove -qq 2>/dev/null\n    fi\n    apt-get autoremove -y 2> /dev/null\n    apt-get install libssl-dev ${_aptYesUnth} -qq 2> /dev/null\n    apt-get build-dep curl ${_aptYesUnth} 2> /dev/null\n    if [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      apt-get install curl --reinstall ${_aptYesUnth} -qq 2> /dev/null\n    fi\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      echo \"INFO: Installing curl from sources...\"\n      mkdir -p /var/opt\n      rm -rf /var/opt/curl*\n      cd /var/opt\n      wget ${_wgetGet} http://files.aegir.cc/dev/src/curl-${_CURL_VRN}.tar.gz &> /dev/null\n      tar -xzf curl-${_CURL_VRN}.tar.gz &> /dev/null\n      if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n        && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n        _SSL_BINARY=/usr/local/ssl3/bin/openssl\n      else\n        _SSL_BINARY=/usr/local/ssl/bin/openssl\n      fi\n      if [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n        _SSL_PATH=\"/usr/local/ssl3\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n      else\n        _SSL_PATH=\"/usr/local/ssl\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n      fi\n      _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n\n      if [ -e \"${_PKG_CONFIG_PATH}\" ] \\\n        && [ -e \"/var/opt/curl-${_CURL_VRN}\" ]; then\n        cd /var/opt/curl-${_CURL_VRN}\n        LIBS=\"-ldl -lpthread\" PKG_CONFIG_PATH=\"${_PKG_CONFIG_PATH}\" ./configure \\\n          --with-openssl \\\n          --with-zlib=/usr \\\n          --prefix=/usr/local &> /dev/null\n        make -j $(nproc) --quiet &> /dev/null\n        make --quiet install &> /dev/null\n        ldconfig 2> /dev/null\n      fi\n    fi\n    if [ -f \"/usr/local/bin/curl\" ]; then\n      _isCurl=$(/usr/local/bin/curl --version 2>&1)\n      if [[ ! 
\"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n        echo \"ERRR: curl is still broken, please install it and debug manually\"\n        _clean_pid_exit _if_reinstall_curl\n      else\n        echo \"GOOD: /usr/local/bin/curl works\"\n      fi\n    fi\n  fi\n}\n\n_check_dns_curl() {\n  _check_dns_settings\n  _find_fast_mirror_early\n  _if_reinstall_curl\n  _CURL_TEST=$(curl -L -k -s \\\n    --max-redirs 10 \\\n    --retry 3 \\\n    --retry-delay 10 \\\n    -I \"http://${_USE_MIR}\" 2> /dev/null)\n  if [[ ! \"${_CURL_TEST}\" =~ \"200 OK\" ]]; then\n    if [[ \"${_CURL_TEST}\" =~ \"unknown option was passed in to libcurl\" ]]; then\n      echo \"ERROR: cURL libs are out of sync! Re-installing again..\"\n      _if_reinstall_curl\n    else\n      echo \"ERROR: ${_USE_MIR} is not available, please try later\"\n      _clean_pid_exit _check_dns_curl_a\n    fi\n  fi\n}\n\n_csf_check_fix() {\n  if [ -x \"/usr/sbin/csf\" ] \\\n    && [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ ! -x \"/etc/csf/csfpost.sh\" ]; then\n    echo \"\" > /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A OUTPUT -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    chmod 700 /etc/csf/csfpost.sh\n    sed -i \"s/.*aegir.*//g\" /etc/csf/csf.allow\n    csf -a 172.235.166.69  eu.files.aegir.cc &> /dev/null\n    csf -a 172.233.219.37  us.files.aegir.cc &> /dev/null\n    csf -a 172.105.168.103 ao.files.aegir.cc &> /dev/null\n    service lfd stop &> /dev/null\n    pkill -9 -f ConfigServer\n    killall sleep &> /dev/null\n    rm -f /etc/csf/csf.error\n    csf -x  &> /dev/null\n    service clean-boa-env start &> /dev/null\n    _if_fix_iptables_symlinks\n    ### csf -uf &> /dev/null\n    ### wait\n    _NFTABLES_TEST=$(iptables -V)\n    if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n      if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n        update-alternatives --set 
iptables /usr/sbin/iptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n        update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n        update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n        update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n      fi\n    fi\n    csf -e &> /dev/null\n    sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n    wait\n    sed -i \"/^$/d\" /etc/csf/csf.allow\n    if [ -e \"/var/log/daemon.log\" ]; then\n      _DHCP_LOG=\"/var/log/daemon.log\"\n    else\n      _DHCP_LOG=\"/var/log/syslog\"\n    fi\n    grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n      if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n        IFS='.' read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n        if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n          echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n        fi\n      fi\n    done\n    if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n      csf -ra &> /dev/null\n      synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    else\n      csf -r &> /dev/null\n    fi\n    ### Linux kernel TCP SACK CVEs mitigation\n    ### CVE-2019-11477 SACK Panic\n    ### CVE-2019-11478 SACK Slowness\n    ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n    if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n      _SACK_TEST=$(ip6tables --list | grep tcpmss)\n      if [[ ! 
\"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n        sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n        iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    fi\n  fi\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! -z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n  if [ -n \"${_LOC_IP}\" ] && grep -qE \"${_LOC_IP}\\s\" /etc/hosts; then\n    cp -af /etc/hosts /etc/.was.hosts\n    sed -i \"s/^${_LOC_IP}.*//g\" /etc/hosts\n  fi\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n    #\n    # Protect from any barracuda updates when _SKYNET_MODE=OFF\n    if [ ! 
-z \"${_SKYNET_MODE}\" ] && [ \"${_SKYNET_MODE}\" = \"OFF\" ]; then\n      if [ -n \"${SSH_TTY+x}\" ]; then\n        echo\n        echo \"STATUS: BOA Skynet Agent is Inactive!\"\n        echo\n        echo \"NOTE: With _SKYNET_MODE=OFF barracuda can not upgrade your system.\"\n        echo\n        echo \"HINT: Please remove the _SKYNET_MODE=OFF line from\"\n        echo \"HINT: ${_barCnf} to enable it.\"\n        echo \"HINT: Wait 10 minutes before trying barracuda upgrade again.\"\n        echo \"Bye!\"\n        echo\n      fi\n      _clean_pid_exit\n    fi\n\n    _find_correct_ip\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n\n    # Sanitize to allow only digits and minus sign\n    export _B_NICE=${_B_NICE//[^0-9-]/}\n\n    # Validate and set default if necessary\n    if ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n      _B_NICE=0\n    fi\n\n    # Clamp the value within -20 to 19\n    if (( _B_NICE < -20 )); then\n      _B_NICE=-20\n    elif (( _B_NICE > 19 )); then\n      _B_NICE=19\n    fi\n\n    renice ${_B_NICE} -p $$ &> /dev/null\n\n    _csf_check_fix\n    chmod a+w /dev/null\n    if [ ! -e \"/dev/fd\" ]; then\n      if [ -e \"/proc/self/fd\" ]; then\n        rm -rf /dev/fd\n        ln -sfn /proc/self/fd /dev/fd\n      fi\n    fi\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    if [ -x \"/opt/local/bin/killer\" ]; then\n      sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n      echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\n    fi\n    if [ \"${_VMFAMILY}\" = \"VS\" ]; then\n      if [ ! 
-e \"/etc/apt/preferences.d/fuse\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: fuse\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/fuse\n        _apt_clean_update\n      fi\n      if [ ! -e \"/etc/apt/preferences.d/udev\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: udev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/udev\n        _apt_clean_update\n      fi\n      if [ ! -e \"/etc/apt/preferences.d/makedev\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: makedev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/makedev\n        _apt_clean_update\n      fi\n      apt-get remove fuse -y --purge --auto-remove -qq 2> /dev/null\n      apt-get remove udev -y --purge --auto-remove -qq 2> /dev/null\n      apt-get remove makedev -y --purge --auto-remove -qq 2> /dev/null\n      _PTMX=OK\n      if [ -e \"/sbin/hdparm\" ]; then\n        apt-get remove hdparm -y --purge --auto-remove -qq 2> /dev/null\n      fi\n      _REMOVE_LINKS=\"buagent \\\n                     checkroot.sh \\\n                     fancontrol \\\n                     halt \\\n                     hwclock.sh \\\n                     hwclockfirst.sh \\\n                     ifupdown \\\n                     ifupdown-clean \\\n                     kerneloops \\\n                     klogd \\\n                     mountall-bootclean.sh \\\n                     mountall.sh \\\n                     mountdevsubfs.sh \\\n                     mountkernfs.sh \\\n                     mountnfs-bootclean.sh \\\n                     mountnfs.sh \\\n                     mountoverflowtmp \\\n                     mountvirtfs \\\n                     mtab.sh \\\n                     networking \\\n                     procps \\\n                     reboot \\\n                     sendsigs \\\n                     setserial \\\n                     svscan \\\n                     sysstat 
\\\n                     umountfs \\\n                     umountnfs.sh \\\n                     umountroot \\\n                     urandom \\\n                     vnstat\"\n      for _link in ${_REMOVE_LINKS}; do\n        if [ -e \"/etc/init.d/${_link}\" ]; then\n          update-rc.d -f ${_link} remove &> /dev/null\n          mv -f /etc/init.d/${_link} /var/backups/init.d.${_link}\n        fi\n      done\n      for s in cron dbus ssh; do\n        if [ -e \"/etc/init.d/${s}\" ]; then\n          sed -rn -e 's/^(# Default-Stop:).*$/\\1 0 1 6/' -e '/^### BEGIN INIT INFO/,/^### END INIT INFO/p' /etc/init.d/${s} > /etc/insserv/overrides/${s}\n        fi\n      done\n      /sbin/insserv -v -d &> /dev/null\n    else\n      _PTMX=CHECK\n    fi\n    _PTS_TEST=$(cat /proc/mounts | grep devpts 2>&1)\n    if [[ ! \"${_PTS_TEST}\" =~ \"devpts\" ]] && [ ! -e \"/dev/pts/ptmx\" ]; then\n      _PTS=FIX\n    else\n      _PTS=OK\n    fi\n    if [ \"${_PTMX}\" = \"CHECK\" ] && [ \"${_PTS}\" = \"FIX\" ]; then\n      echo \"Required /dev/pts/ptmx does not exist! We will fix this now...\"\n      mkdir -p /dev/pts\n      rm -rf /dev/pts/*\n      if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n        && [ -e \"/etc/apt/apt.conf.d\" ]; then\n        echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n      fi\n      _apt_clean_update\n      apt-get install udev ${_aptYesUnth} 2> /dev/null\n      echo \"devpts          /dev/pts        devpts  rw,noexec,nosuid,gid=5,mode=620 0  0\" >> /etc/fstab\n      mount -t devpts devpts /dev/pts &> /dev/null\n    fi\n    _EHU=NO\n    if ! 
grep -q \"127.0.0.1 localhost\" /etc/hosts; then\n      sed -i \"s/^127.0.0.1.*//g\" /etc/hosts\n      echo \"\" >> /etc/hosts\n      echo \"127.0.0.1 localhost\" >> /etc/hosts\n      _EHU=YES\n    fi\n    if grep -q \"files.aegir.cc\" /etc/hosts; then\n      sed -i \"s/.*files.aegir.cc.*//g\" /etc/hosts\n      _EHU=YES\n    fi\n    if grep -q \"github\" /etc/hosts; then\n      sed -i \"s/.*github.*//g\" /etc/hosts\n      _EHU=YES\n    fi\n    if [ \"${_EHU}\" = \"YES\" ]; then\n      echo >>/etc/hosts\n      sed -i \"/^$/d\" /etc/hosts\n    fi\n    if [ -e \"/etc/init.d/postfix\" ]; then\n      service postfix restart &> /dev/null\n    fi\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    _clean_pid_exit\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    _clean_pid_exit\n  fi\n}\n\n_check_no_systemd() {\n  if [ -e \"/lib/systemd/systemd\" ]; then\n    echo \"ERROR: This script can not be used with systemd\"\n    echo \"ERROR: Please run 'autoinit' first\"\n    _clean_pid_exit _check_no_systemd\n  fi\n}\n\n_old_apt_keys_cleanup() {\n  _isAptKey=\"$(which apt-key)\"\n  if [ -x \"${_isAptKey}\" ]; then\n    _OLD_KEY_KSPLICE_TEST=$(${_isAptKey} list | grep ksplice 2>&1)\n    _OLD_KEY_ORACLE_TEST=$(${_isAptKey} list | grep oracle 2>&1)\n    _OLD_KEY_NEWRELIC_TEST=$(${_isAptKey} list | grep newrelic 2>&1)\n    _OLD_KEY_PERCONA_TEST=$(${_isAptKey} list | grep percona 2>&1)\n    _OLD_KEY_MALLORY_TEST=$(${_isAptKey} list | grep mallory 2>&1)\n    _OLD_KEY_GHOST_TEST=$(${_isAptKey} list | grep \"6857 6280\" 2>&1)\n    if [[ \"${_OLD_KEY_KSPLICE_TEST}\" =~ \"ksplice\" ]]; then\n      ${_isAptKey} del B6D4038E &> /dev/null\n    fi\n    if [[ \"${_OLD_KEY_ORACLE_TEST}\" =~ \"oracle\" ]]; then\n      ${_isAptKey} del AD986DA3 &> /dev/null\n    fi\n    if [[ \"${_OLD_KEY_NEWRELIC_TEST}\" =~ \"newrelic\" ]]; then\n      ${_isAptKey} del 548C16BF &> /dev/null\n    fi\n    if [[ \"${_OLD_KEY_PERCONA_TEST}\" =~ \"percona\" ]]; then\n      ${_isAptKey} del 8507EFA5 &> /dev/null\n    fi\n    if [[ \"${_OLD_KEY_MALLORY_TEST}\" =~ \"mallory\" ]]; then\n      ${_isAptKey} del 8507EFA5 &> /dev/null\n    fi\n    if [[ \"${_OLD_KEY_GHOST_TEST}\" =~ \"6857 6280\" ]]; then\n      ${_isAptKey} del 68576280 &> /dev/null\n    fi\n  fi\n}\n\n_if_force_php_rebuild() {\n  if [[ \"${_ARGS}\" =~ (php-rebuild|php-reinstall) ]]; then\n    echo \"_PHP_FORCE_REINSTALL=YES\" >> ${_barCnf}\n  fi\n}\n\n_proceed() {\n  _check_root_keys_pwd\n  _check_root\n  _check_no_systemd\n  _mysql_downgrade_protection\n  _up_start\n  _barracuda_downgrade_protection\n  _os_detection_minimal\n  _check_dns_curl\n  _check_php_cli\n  _if_force_php_rebuild\n  _old_apt_keys_cleanup &> 
/dev/null\n  _check_etc_apt_preferences\n  if [ \"${_tRee}\" = \"dev\" ]; then\n    touch /root/.debug-boa-installer.cnf\n    touch /root/.debug-octopus-installer.cnf\n  fi\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ] \\\n    || [ \"${_cmNd}\" = \"help\" ] \\\n    || [ \"${_cmNd}\" = \"info\" ] \\\n    || [ \"${_cmNd}\" = \"php-idle\" ] \\\n    || [ \"${_cmNd}\" = \"up-distro\" ]; then\n    _CHECK_SQL=NO\n  else\n    _CHECK_SQL=YES\n  fi\n  if [ \"${_CHECK_SQL}\" = \"YES\" ]; then\n    _check_sql_running\n    _check_sql_access\n  fi\n\n  curl ${_crlGet} \"${_rgUrl}/${_barName}\" -o ${_vBs}/${_barName}\n  curl ${_crlGet} \"${_rgUrl}/${_pthIncB}\" -o ${_vBs}/${_filIncB}\n\n  _up_action\n  _up_finish\n}\n\n_check_manufacturer_compatibility() {\n  # Install dmidecode if not present\n  if ! command -v dmidecode &> /dev/null; then\n    /usr/bin/apt-get update &> /dev/null\n    ${_INITINS} dmidecode &> /dev/null\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    echo \"Unsupported environment detected: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    _clean_pid_exit _check_manufacturer_compatibility_a\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    echo \"Mysterious environment: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    _clean_pid_exit _check_manufacturer_compatibility_b\n  fi\n}\n\n_if_start_screen() {\n  if [[ -n \"$SSH_CONNECTION\" || -n 
\"$SSH_CLIENT\" ]]; then\n    # Check if the user is inside a screen session\n    if [[ ! \"${_ARGS}\" =~ (^|[[:space:]])(info|help)([[:space:]]|$) ]]; then\n      if [ -z \"$STY\" ]; then\n        # If not in screen, start a new screen session with the same script\n        echo \"You are not inside a screen session. Starting screen...\"\n        sleep 5\n        screen -S session_barracuda bash -c \"$0 ${_ARGS}\"\n        exit\n      else\n        # If already inside screen, continue the script\n        echo \"You are in a screen session now\"\n        sleep 3\n      fi\n    fi\n  fi\n}\n\n_if_display_help() {\n  if [ \"${_cmNd}\" = \"help\" ] || [ \"${_cmNd}\" = \"info\" ]; then\n    echo\n    echo \"Usage: $(basename \"$0\") {version} {mode} {options}\"\n    echo\n    echo \"Usage: $(basename \"$0\") php-idle {enable|disable}\"\n    echo\n    cat <<EOF\n\n    Accepted keywords and values in every option:\n\n    {version}\n      up-lts <------- upgrade to Barracuda LTS release (no license)\n      up-pro <------- upgrade to Barracuda PRO release (requires license)\n      up-dev <------- upgrade to Barracuda Cutting Edge (requires license)\n      php-idle <----- disable unused PHP versions or enable them again\n\n    {mode}\n      system <------- upgrade only system without Ægir Master (silent mode)\n      aegir <-------- upgrade only Ægir Master Hostmaster (silent mode)\n      log <---------- upgrade both system and Ægir Master (silent mode)\n\n    {options}\n      newrelickey <-- activate New Relic integration with a valid license key\n      php-8.5 <------ enable single-PHP mode (8.5 or 8.4 or 8.3)\n      php-min <------ install PHP 8.5, 8.4, 8.3, use 8.4 by default (php-all)\n      php-max <------ install PHP 8.0-8.5, 7.0-7.4, 5.6\n      nodns <-------- disable DNS/SMTP checks on the fly\n      percona-8.4 <-- specify Percona version to use (5.7, 8.0, 8.4)\n\n    See docs/UPGRADE.md for more details.\n\nEOF\n    _clean_pid_exit\n  fi\n}\n\n_set_tree_vars() {\n  
if [ ! -z \"${_outP}\" ]; then\n    if [ \"${_outP}\" = \"log\" ] \\\n      || [ \"${_outP}\" = \"disable\" ] \\\n      || [ \"${_outP}\" = \"enable\" ] \\\n      || [ \"${_outP}\" = \"system\" ] \\\n      || [ \"${_outP}\" = \"distro\" ]; then\n      export _outP=\"${_argK}\"\n    else\n      export _outP=\n      export _rKey=\"${_argK}\"\n      export _dEbg=\"${_argE}\"\n      export _sQld=\"${_argQ}\"\n    fi\n  fi\n  if [ \"${_cmNd}\" = \"up-dev\" ]; then\n    export _tRee=dev\n    _ifnames_grub_check_sync\n  elif [ \"${_cmNd}\" = \"up-pro\" ]; then\n    export _tRee=pro\n    _ifnames_grub_check_sync\n  elif [ \"${_cmNd}\" = \"up-lts\" ]; then\n    export _tRee=lts\n    _ifnames_grub_check_sync\n  else\n    if [ \"${_cmNd}\" = \"php-idle\" ] \\\n      || [ \"${_cmNd}\" = \"help\" ] \\\n      || [ \"${_cmNd}\" = \"info\" ] \\\n      || [ \"${_cmNd}\" = \"up-distro\" ]; then\n      export _tRee=dev\n    else\n      echo\n      echo \"Sorry, that is not a supported command..\"\n      echo \"Display supported commands with: $(basename \"$0\") help\"\n      echo\n      _clean_pid_exit\n    fi\n  fi\n\n  export _tRee=\"${_tRee}\"\n  export _rLsn=\"BOA-5.9.1\"\n  export _rlsE=\"${_rLsn}-${_tRee}\"\n  export _bRnh=\"5.x-${_tRee}\"\n  export _rgUrl=\"http://files.aegir.cc/versions/${_tRee}/boa\"\n\n}\n\nexport _ARGS=\"$@\"\nexport _cmNd=\"$1\"\nexport _outP=\"$2\"\nexport _rKey=\"$3\"\nexport _dEbg=\"$4\"\nexport _sQld=\"$5\"\n\nexport _argK=\"$2\"\nexport _argE=\"$3\"\nexport _argQ=\"$4\"\n\n_check_root_direct\n_set_tree_vars\n_if_display_help\n###_check_manufacturer_compatibility\n[[ ! \"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n[[ \"${_ARGS}\" =~ noscreen ]] && export _ARGS=$(echo \"${_ARGS}\" | sed -E \"s/noscreen//g\")\n_proceed\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/boa",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_TODAY=$(date +%y%m%d)\nexport _TODAY=${_TODAY//[^0-9]/}\n\n_NOW=$(date +%y%m%d-%H%M%S)\nexport _NOW=${_NOW//[^0-9-]/}\n\n_barCnf=\"/root/.barracuda.cnf\"\n_barName=\"BARRACUDA.sh.txt\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_filIncB=\"barracuda.sh.cnf\"\n_filIncO=\"octopus.sh.cnf\"\n_octName=\"OCTOPUS.sh.txt\"\n_pthIncB=\"lib/settings/${_filIncB}\"\n_pthIncO=\"lib/settings/${_filIncO}\"\n_vBs=\"/var/backups\"\n\n_LOG_IN_DIR=\"${_vBs}/reports/in/$(basename \"$0\")/${_TODAY}\"\n_IN_BARRACUDA_LOG=\"${_LOG_IN_DIR}/$(basename \"$0\")-in-barracuda-${_NOW}.log\"\n_IN_OCTOPUS_LOG=\"${_LOG_IN_DIR}/$(basename \"$0\")-in-octopus-${_NOW}.log\"\n_LOG_UP_DIR=\"${_vBs}/reports/up/$(basename \"$0\")/${_TODAY}\"\n\n_VMFAMILY=XEN\n_VM_TEST=\"$(uname -a)\"\nif [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n  _VMFAMILY=\"VS\"\nfi\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n\n_DEBUG_MODE=$([ -e \"/root/.debug-barracuda-installer.cnf\" ] && echo \"YES\" || echo \"NO\")\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_clean_pid_exit() {\n  if [ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.boa.exit.exceptions.log\n    [ -e \"/opt/tmp/boa\" ] && rm -rf /opt/tmp/*\n  fi\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/run/octopus_install_run.pid\" ] && rm -f /run/octopus_install_run.pid\n  [ -e \"/tmp/aegir_backup_mode.txt\" ] && rm -f /tmp/aegir_backup_mode.txt\n  rm -f 
${_vBs}/*.sh.cnf*\n  rm -f ${_vBs}/OCTOPUS.sh*\n  rm -f ${_vBs}/BARRACUDA.sh*\n  service cron start &> /dev/null\n  _CNT=$(pgrep -fc 'tee -a /var/backups/barracuda-')\n  if (( _CNT > 1 )); then\n    pkill -f 'tee -a /var/backups/barracuda-'\n  fi\n  exit 1\n}\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    && [ -e \"/lib/systemd/systemd\" ] ; then\n    ${_APT_UPDATE} -qq 2> /dev/null\n    apt-get purge systemd libnss-systemd -y -qq 2> /dev/null\n    apt-get remove stunnel4 -y --purge --auto-remove -qq 2> /dev/null\n  fi\n}\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_check_manufacturer_compatibility() {\n  # Install dmidecode if not present\n  if ! 
command -v dmidecode &> /dev/null; then\n    /usr/bin/apt-get update &> /dev/null\n    ${_INITINS} dmidecode &> /dev/null\n  fi\n  # Check if dmidecode is available\n  _DMI_TEST=\"$(which dmidecode)\"\n  if [ -x \"${_DMI_TEST}\" ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n  else\n    _HOST_INFO=\"Unknown, dmidecode not available\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n    echo \"Not supported environment detected: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    _clean_pid_exit _check_manufacturer_compatibility_a\n  elif [[ \"${_HOST_INFO}\" =~ \"Unknown\" ]] || [ -z \"${_HOST_INFO}\" ]; then\n    echo \"Mysterious environment: ${_HOST_INFO}\"\n    echo \"Please check https://bit.ly/boa-caveats\"\n    echo \"Bye!\"\n    _clean_pid_exit _check_manufacturer_compatibility_b\n  fi\n}\n\n_check_no_systemd() {\n  if [ -e \"/lib/systemd/systemd\" ]; then\n    echo \"ERROR: This script cannot be used with systemd\"\n    echo \"ERROR: Please run 'autoinit' first\"\n    _clean_pid_exit _check_no_systemd\n  fi\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \\.aegir\\.cc$ ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n_not_supported_virt() {\n  echo\n  echo \"=== OOPS! ===\"\n  echo\n  echo \"You are running an unsupported virtualization system:\"\n  echo \"  $1\"\n  echo\n  echo \"If you wish to try BOA on this system anyway,\"\n  echo \"please create an empty control file:\"\n  echo \"  /root/.allow.any.virt.cnf\"\n  echo\n  echo \"Please be aware that it may not work at all,\"\n  echo \"or you may experience errors breaking BOA.\"\n  echo\n  echo \"WARNING! BOA IS NOT DESIGNED TO RUN DIRECTLY ON BARE METAL.\"\n  echo \"WARNING! IT IS VERY DANGEROUS AND THUS AN EXTREMELY BAD IDEA!\"\n  echo \"WARNING! 
You are free to experiment but don't expect *ANY* support.\"\n  echo\n  echo \"BOA is known to work well on:\"\n  echo\n  echo \" * Linux Containers (LXC)\"\n  echo \" * Linux KVM guest\"\n  echo \" * Microsoft Hyper-V\"\n  echo \" * OpenVZ Containers\"\n  echo \" * Parallels guest\"\n  echo \" * Red Hat KVM guest\"\n  echo \" * VirtualBox guest\"\n  echo \" * VMware ESXi guest (but excluding vCloud Air)\"\n  echo \" * VServer guest\"\n  echo \" * Xen guest fully virtualized (HVM)\"\n  echo \" * Xen guest\"\n  echo \" * Xen paravirtualized guest domain\"\n  echo\n  echo \"Bye\"\n  echo\n  _clean_pid_exit _not_supported_virt_a\n}\n\n# --- internal: print message only in debug mode\n_msg() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"[virt-what-fix] $*\"\n  fi\n}\n\n# --- internal: run virt-what under strace and parse the helper's exec path\n_discover_with_strace() {\n  local _path_found=\"\"\n  if ! command -v strace >/dev/null 2>&1; then\n    _msg \"strace not available, skipping strace-based discovery\"\n    echo \"\"\n    return 0\n  fi\n  # Temporarily extend PATH so virt-what can exec the helper for strace to see.\n  PATH=\"${PATH}:${_CANDIDATE_PATHS}\" strace -f -qq -e trace=execve -o \"${_TRACE}\" virt-what >/dev/null 2>&1\n\n  # mawk-safe parsing: pull the first quoted arg from execve(\"…\") and check suffix\n  if [ -s \"${_TRACE}\" ]; then\n    _path_found=$(\n      awk -v n=\"${_HELPER_NAME}\" '\n        /execve\\(\"/ {\n          # Find start of execve(\" then extract up to next quote\n          i = index($0, \"execve(\\\"\")\n          if (i) {\n            s = substr($0, i + 8)         # after execve(\"\n            j = index(s, \"\\\"\")\n            if (j) {\n              p = substr(s, 1, j - 1)     # the path inside quotes\n              if (p ~ (\"/\" n \"$\")) { print p; exit }\n            }\n          }\n        }\n      ' \"${_TRACE}\"\n    )\n  fi\n  rm -f \"${_TRACE}\"\n\n  if [ -n \"${_path_found}\" ] && [ -x \"${_path_found}\" 
]; then\n    _msg \"strace discovered helper at: ${_path_found}\"\n    echo \"${_path_found}\"\n    return 0\n  fi\n  _msg \"strace discovery failed\"\n  echo \"\"\n  return 0\n}\n\n# --- internal: dpkg-based discovery (Debian/Devuan)\n_discover_with_dpkg() {\n  local _p=\"\"\n  if command -v dpkg >/dev/null 2>&1; then\n    _p=$(dpkg -L virt-what 2>/dev/null | grep -E \"/${_HELPER_NAME}$\" | head -n1)\n    if [ -n \"${_p}\" ] && [ -x \"${_p}\" ]; then\n      _msg \"dpkg discovered helper at: ${_p}\"\n      echo \"${_p}\"\n      return 0\n    fi\n  fi\n  echo \"\"\n  return 0\n}\n\n# --- internal: filesystem search fallback (bounded)\n_discover_with_find() {\n  local _p=\"\"\n  # Keep it bounded to /usr to stay fast/noisy-free.\n  _p=$(find /usr -maxdepth 4 -type f -name \"${_HELPER_NAME}\" 2>/dev/null | head -n1)\n  if [ -n \"${_p}\" ] && [ -x \"${_p}\" ]; then\n    _msg \"find discovered helper at: ${_p}\"\n    echo \"${_p}\"\n    return 0\n  fi\n  echo \"\"\n  return 0\n}\n\n# --- main: ensure symlink\n_ensure_virt_what_helper_symlink() {\n  # If the symlink already exists and is working, nothing to do.\n  if [ -L \"${_SYMLINK}\" ] && [ -x \"${_SYMLINK}\" ] && [ -e \"$(readlink -f \"${_SYMLINK}\")\" ]; then\n    _msg \"Symlink already present and valid: ${_SYMLINK} -> $(readlink -f \"${_SYMLINK}\")\"\n    return 0\n  fi\n\n  local _helper_path=\"\"\n  _helper_path=\"$(_discover_with_strace)\"\n  if [ -z \"${_helper_path}\" ]; then\n    _helper_path=\"$(_discover_with_dpkg)\"\n  fi\n  if [ -z \"${_helper_path}\" ]; then\n    _helper_path=\"$(_discover_with_find)\"\n  fi\n\n  if [ -z \"${_helper_path}\" ]; then\n    echo \"ERROR: Could not locate ${_HELPER_NAME} anywhere under /usr.\" 1>&2\n    return 1\n  fi\n\n  # Safety: if a non-symlink file already exists at the target, back it up once.\n  if [ -e \"${_SYMLINK}\" ] && [ ! 
-L \"${_SYMLINK}\" ]; then\n    _msg \"Backing up existing non-symlink at ${_SYMLINK} to ${_SYMLINK}.orig\"\n    mv -f \"${_SYMLINK}\" \"${_SYMLINK}.orig\"\n  fi\n\n  ln -sfn \"${_helper_path}\" \"${_SYMLINK}\"\n  if [ -x \"${_SYMLINK}\" ]; then\n    _msg \"Symlink created: ${_SYMLINK} -> ${_helper_path}\"\n    return 0\n  else\n    echo \"ERROR: Failed to create working symlink ${_SYMLINK} -> ${_helper_path}\" 1>&2\n    return 2\n  fi\n}\n\n###\n### Fix VM system detection\n###\n_fix_virt_what() {\n  _VIRT_TEST=\"$(which virt-what)\"\n  if [ -n \"${_VIRT_TEST}\" ] && [ -x \"${_VIRT_TEST}\" ]; then\n    _SHELL_TEST_A=$(grep -I -o \"\\#\\!.*/usr/bin/sh\" ${_VIRT_TEST} 2>&1)\n    _SHELL_TEST_B=$(grep -I -o \"\\#\\!.*/bin/sh\" ${_VIRT_TEST} 2>&1)\n    if [[ \"${_SHELL_TEST_A}\" =~ \"/usr/bin/sh\" ]]; then\n      sed -i \"s/\\/usr\\/bin\\/sh/\\/bin\\/dash/g\" ${_VIRT_TEST}\n    fi\n    if [[ \"${_SHELL_TEST_B}\" =~ \"/bin/sh\" ]]; then\n      sed -i \"s/\\/bin\\/sh/\\/bin\\/dash/g\" ${_VIRT_TEST}\n    fi\n    _HELPER_NAME=\"virt-what-cpuid-helper\"\n    _SYMLINK=\"/usr/sbin/${_HELPER_NAME}\"\n    _TRACE=\"/tmp/virtwhat.$$.strace\"\n    # Extra dirs we temporarily expose to PATH so virt-what can exec the helper for strace discovery\n    _CANDIDATE_PATHS=\"/usr/libexec:/usr/lib/x86_64-linux-gnu:/usr/lib64/virt-what:/usr/lib/virt-what\"\n    if [ ! -e \"${_SYMLINK}\" ]; then\n      echo \"INFO: virt-what tool requires small update, fixing...\"\n      if ! command -v strace &> /dev/null; then\n        _apt_clean_update\n        apt-get install strace ${_aptYesUnth}\n      fi\n      _ensure_virt_what_helper_symlink\n    fi\n  fi\n}\n\n###\n### Fix or install VM system detection\n###\n_fix_or_install_virt_what() {\n  _VIRT_TEST=\"$(which virt-what)\"\n  if [ -n \"${_VIRT_TEST}\" ] && [ -x \"${_VIRT_TEST}\" ]; then\n    _fix_virt_what\n  else\n    echo \"INFO: installing required virt-what tool ...\"\n    if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install virt-what ${_aptYesUnth}\n    wait\n    _fix_virt_what\n  fi\n}\n\n_check_virt() {\n  _fix_or_install_virt_what\n  _VIRT_TOOL=\"$(which virt-what)\"\n  if [ -x \"${_VIRT_TOOL}\" ]; then\n    _VIRT_TEST=$(virt-what)\n    _VIRT_TEST=$(echo -n ${_VIRT_TEST} | fmt -su -w 2500 2>&1)\n    if [[ \"${_VIRT_TEST}\" =~ \"program not found\" ]]; then\n      echo \"ERROR: virt-what says: ${_VIRT_TEST}\"\n      echo \"ERROR: virt-what detection fails for unknown reason, exit\"\n      _clean_pid_exit _check_virt\n    fi\n    if [ ! -e \"/root/.allow.any.virt.cnf\" ]; then\n      if [ -e \"/proc/self/status\" ]; then\n        _VS_GUEST_TEST=$(grep -E \"VxID:[[:space:]]*[0-9]{2,}$\" /proc/self/status 2> /dev/null)\n        _VS_HOST_TEST=$(grep -E \"VxID:[[:space:]]*0$\" /proc/self/status 2> /dev/null)\n      fi\n      if [ ! -z \"${_VS_HOST_TEST}\" ] || [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n        if [ -z \"${_VS_HOST_TEST}\" ] && [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n          _VIRT_IS=\"Linux VServer guest\"\n        else\n          if [ ! 
-z \"${_VS_HOST_TEST}\" ]; then\n            _not_supported_virt \"Linux VServer host\"\n          else\n            _not_supported_virt \"unknown / not a virtual machine\"\n          fi\n        fi\n      else\n        if [ -z \"${_VIRT_TEST}\" ] || [ \"${_VIRT_TEST}\" = \"0\" ]; then\n          _not_supported_virt \"unknown / not a virtual machine\"\n        elif [[ \"${_VIRT_TEST}\" =~ \"xen-dom0\" ]]; then\n          _not_supported_virt \"Xen privileged domain\"\n        elif [[ \"${_VIRT_TEST}\" =~ \"linux_vserver-host\" ]]; then\n          _not_supported_virt \"Linux VServer host\"\n        else\n          if [[ \"${_VIRT_TEST}\" =~ \"xen xen-hvm\" ]]; then\n            _VIRT_TEST=\"xen-hvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"xen xen-domU\" ]]; then\n            _VIRT_TEST=\"xen-domU\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"virtualbox kvm\" ]]; then\n            _VIRT_TEST=\"virtualbox\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"hyperv qemu\" ]]; then\n            _VIRT_TEST=\"hyperv\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"kvm aws\" ]]; then\n            _VIRT_TEST=\"kvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"redhat kvm\" ]]; then\n            _VIRT_TEST=\"redhat-kvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"openvz lxc\" ]]; then\n            _VIRT_TEST=\"openvz\"\n          fi\n          case \"${_VIRT_TEST}\" in\n            hyperv)      _VIRT_IS=\"Microsoft Hyper-V\" ;;\n            kvm)         _VIRT_IS=\"Linux KVM guest\" ;;\n            lxc)         _VIRT_IS=\"Linux Containers (LXC)\" ;;\n            openvz)      _VIRT_IS=\"OpenVZ Containers\" ;;\n            parallels)   _VIRT_IS=\"Parallels guest\" ;;\n            redhat-kvm)  _VIRT_IS=\"Red Hat KVM guest\" ;;\n            virtualbox)  _VIRT_IS=\"VirtualBox guest\" ;;\n            vmware)      _VIRT_IS=\"VMware ESXi guest\" ;;\n            xen-domU)    _VIRT_IS=\"Xen paravirtualized guest domain\" ;;\n            xen-hvm)     _VIRT_IS=\"Xen guest fully virtualized (HVM)\" ;;\n    
        xen)         _VIRT_IS=\"Xen guest\" ;;\n            *)  _not_supported_virt \"${_VIRT_TEST}\"\n            ;;\n          esac\n        fi\n      fi\n    else\n      if [ -z \"${_VIRT_TEST}\" ] || [ \"${_VIRT_TEST}\" = \"0\" ]; then\n        _VIRT_TEST=\"unknown / not a virtual machine\"\n      fi\n    fi\n  fi\n}\n\n_system_check_ready() {\n  if [ ! -e \"/etc/nginx\" ] \\\n    || [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    || [ ! -e \"/etc/mysql\" ] \\\n    || [ ! -e \"/var/lib/mysql/mysql\" ]; then\n    echo \"ERROR: Please install complete BOA system before trying\"\n    echo \"ERROR: to install additional Ægir / Octopus instances\"\n    echo \"Bye\"\n    _clean_pid_exit _system_check_ready_a\n  fi\n}\n\n_system_check_clean() {\n  if [ -e \"/etc/apache2\" ]; then\n    mv -f /etc/apache2 /etc/apache2_pre\n    if [ -e \"/etc/mysql\" ]; then\n      mv -f /etc/mysql /etc/mysql_pre\n    fi\n  fi\n  if [ -e \"/etc/nginx\" ] \\\n    || [ -e \"/etc/mysql\" ] \\\n    || [ -e \"/var/lib/mysql/mysql\" ] \\\n    || [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    echo\n    echo \"ERROR: BOA requires minimal supported OS with no services installed.\"\n    echo\n    echo \"Please make sure you don't have MySQL nor Apache installed.\"\n    echo \"Here's the list of directories which shouldn't exist:\"\n    echo \"/etc/nginx /etc/mysql /var/lib/mysql/mysql /var/aegir\"\n    if [ ! 
-e \"/etc/nginx\" ]; then\n      echo\n      echo \"HINT: Try to run: 'apt-get purge mysql-common' and then try again.\"\n      echo \"HINT: You can also enforce installation with empty control file:\"\n      echo \" touch /root/.force.reinstall.cnf -- and then try again.\"\n      echo\n    fi\n    echo \"Bye\"\n    _clean_pid_exit _system_check_clean_a\n  fi\n}\n\n_ifnames_grub_check_sync() {\n\n  _USE_NINC=NO\n  _NEW_GRUB=DEMO\n\n  if [ -e \"/root/.ignore.ifnames.cnf\" ]; then\n    return 1  # Exit the function but continue the script\n  else\n    [ -e \"/root/.ninc.selected.predictable.cnf\" ] && _USE_NINC=predictable && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.classic.cnf\" ]     && _USE_NINC=classic     && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.auto.cnf\" ]        && _USE_NINC=auto        && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.vanilla.cnf\" ]     && _USE_NINC=vanilla\n  fi\n\n  if [ \"${_USE_NINC}\" = \"vanilla\" ] || [ \"${_USE_NINC}\" = \"NO\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n\n  _IS_IFACE=$(ip a 2>&1)\n  _ADD_GRUB_CMD=\"\"\n  _GRUB_FILE=\"/etc/default/grub\"\n\n  if [ -e \"${_GRUB_FILE}\" ]; then\n    if echo \"${_IS_IFACE}\" | grep -qE \"eth[0-9]+\"; then\n      _USE_IFNAMES=\"CLASSIC\"\n      echo \"GRUB: Classic ethX interface naming found.\"\n    elif echo \"${_IS_IFACE}\" | grep -qE \"(ens|enp|eno|wlp|wlo)[0-9]+:\"; then\n      _USE_IFNAMES=\"PREDICTABLE\"\n      echo \"GRUB: Predictable (ensX, enpX, enoX, wlpX, wloX) interface naming found.\"\n    else\n      _USE_IFNAMES=\"DONTMODIFY\"\n      echo \"GRUB: config exists, but no recognized network interface naming found.\"\n    fi\n\n    # Extract the current GRUB_CMDLINE_LINUX line\n    _GRUB_CMDLINE_LINUX=$(grep -E \"^GRUB_CMDLINE_LINUX=\" \"${_GRUB_FILE}\")\n    echo \"GRUB: Current config is ${_GRUB_CMDLINE_LINUX}\"\n\n    # Initialize variables to check for existing options\n    _SYS_NET_IFNAMES=$(echo \"${_GRUB_CMDLINE_LINUX}\" 
| grep -o \"net.ifnames=[01]\")\n    _SYS_BIOSDEVNAME=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"biosdevname=[01]\")\n    _SYS_MEMHP_STATE=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"memhp_default_state=online\")\n\n    # Function to append option to _ADD_GRUB_CMD\n    _append_grub_cmd_option() {\n      _option=\"$1\"\n      if [[ -z \"${_ADD_GRUB_CMD}\" ]]; then\n        _ADD_GRUB_CMD=\"${_option}\"\n      else\n        _ADD_GRUB_CMD=\"${_ADD_GRUB_CMD} ${_option}\"\n      fi\n    }\n\n    # Always add memhp_default_state=online\n    _append_grub_cmd_option \"memhp_default_state=online\"\n\n    if [[ \"${_USE_IFNAMES}\" == \"CLASSIC\" ]]; then\n      # Always set net.ifnames=0 and biosdevname=0\n      _append_grub_cmd_option \"net.ifnames=0\"\n      _append_grub_cmd_option \"biosdevname=0\"\n    elif [[ \"${_USE_IFNAMES}\" == \"PREDICTABLE\" ]]; then\n      # Always set net.ifnames=1 and biosdevname=1\n      _append_grub_cmd_option \"net.ifnames=1\"\n      _append_grub_cmd_option \"biosdevname=1\"\n    fi\n\n    if [[ -n \"${_ADD_GRUB_CMD}\" ]]; then\n      # Backup the GRUB file\n      cp \"${_GRUB_FILE}\" \"${_GRUB_FILE}.bak\"\n\n      # Remove existing options from GRUB_CMDLINE_LINUX\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_GRUB_CMDLINE_LINUX}\" | sed -E \"s/(net.ifnames=[01]|biosdevname=[01]|memhp_default_state=online)//g\")\n\n      # Clean up extra spaces and trailing spaces before the closing quote\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | tr -s ' ' | sed -E 's/\\s*\"$/\"/')\n\n      # Extract current kernel parameters\n      _CURRENT_CMDLINE=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | sed -E 's/^GRUB_CMDLINE_LINUX=\"(.*)\"$/\\1/')\n\n      # Append new options\n      _UPDATED_CMDLINE=\"${_CURRENT_CMDLINE} ${_ADD_GRUB_CMD}\"\n      _UPDATED_CMDLINE=$(echo \"${_UPDATED_CMDLINE}\" | sed 's/^ *//;s/ *$//')\n\n      # Form the new GRUB_CMDLINE_LINUX line\n      
_NEW_GRUB_CMDLINE_LINUX=\"GRUB_CMDLINE_LINUX=\\\"${_UPDATED_CMDLINE}\\\"\"\n\n      echo \" \"\n      if [[ \"${_NEW_GRUB}\" == \"LIVE\" ]]; then\n        # Update the GRUB file\n        echo \"GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\"\n        echo \"GRUB: Update in the LIVE MODE\"\n        sed -i \"s|^GRUB_CMDLINE_LINUX=.*|${_NEW_GRUB_CMDLINE_LINUX}|\" \"${_GRUB_FILE}\"\n        echo \"GRUB_CMDLINE_LINUX has been updated with ${_UPDATED_CMDLINE}\"\n      elif [[ \"${_NEW_GRUB}\" == \"DEMO\" ]]; then\n        # Demo info\n        echo \"GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\"\n        echo \"GRUB: Update in the DEMO MODE\"\n        echo \" \"\n        echo \"GRUB_CMDLINE_LINUX would be updated with:\"\n        echo \"   ${_UPDATED_CMDLINE}\"\n        echo \" \"\n        echo \"GRUB: Update in the LIVE MODE requires presence of control file:\"\n        echo \"   /root/.ninc.selected.auto.cnf\"\n        echo \" \"\n        echo \"GRUB: Note that this extra control file must not exist:\"\n        echo \"   /root/.ignore.ifnames.cnf\"\n        echo \" \"\n        echo \"GRUB: This requirement serves as a double-check to confirm\"\n        echo \"GRUB: that you are aware of and agree to auto-update GRUB configuration.\"\n        echo \"GRUB: Incorrect GRUB settings can render your virtual machine unbootable\"\n        echo \"GRUB: necessitating a rescue operation using a CD-ROM or ISO image.\"\n        echo \"GRUB: For this reason, running BOA directly on physical hardware (bare metal) is not supported\"\n        echo \" \"\n        echo \"GRUB: NEVER USE LIVE MODE IF YOU ARE NOT SURE IF YOU NEED IT\"\n      fi\n      echo \" \"\n    fi\n  else\n    echo \"GRUB config does not exist.\"\n  fi\n}\n\n_check_root_direct() {\n  _U_TEST=DENY\n  [ \"${SUDO_USER}\" ] && _U_TEST_SDO=${SUDO_USER} || _U_TEST_SDO=$(whoami)\n  _U_TEST_WHO=$(who am i | awk '{print $1}' 2>&1)\n  _U_TEST_LNE=$(logname 2>&1)\n  if [ 
\"${_U_TEST_SDO}\" = \"root\" ] || [ \"${_U_TEST_LNE}\" = \"root\" ]; then\n    if [ -z \"${_U_TEST_WHO}\" ]; then\n      _U_TEST=ALLOW\n      ### normal for root scripts running from cron\n    else\n      if [ \"${_U_TEST_WHO}\" = \"root\" ]; then\n        _U_TEST=ALLOW\n      fi\n    fi\n  fi\n  if [ \"${_U_TEST}\" = \"DENY\" ]; then\n    echo\n    echo \"ERROR: This script must be run as root directly,\"\n    echo \"ERROR: without a sudo/su switch from a regular system user\"\n    echo \"ERROR: Please add and test your SSH (ed25519) keys for the root account\"\n    echo \"ERROR: with direct access before trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 (modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy manually to the ~/.ssh/authorized_keys file on the server\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure each key is not split into more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HINT:  You can always restrict access later, or\"\n    echo \"       allow only SSH (ed25519) keys for root with the directive\"\n    echo \"         PermitRootLogin prohibit-password\"\n    echo \"       in the /etc/ssh/sshd_config file\"\n    echo \"Bye\"\n    _clean_pid_exit\n  fi\n}\n\n_check_root_keys_pwd() {\n  # Check if root's password is locked\n  
_ROOT_PWD_LOCKED=\"NO\"\n  _S_TEST=$(grep 'root:\\*:' /etc/shadow 2>&1)\n  if [[ \"${_S_TEST}\" =~ root:\\*: ]]; then\n    _ROOT_PWD_LOCKED=\"YES\"\n  fi\n\n  # Check for presence of SSH keys\n  _SSH_KEYS_OK=\"NO\"\n  if [ -e \"/root/.ssh/authorized_keys\" ]; then\n    if grep -qE '^(ssh-rsa|ssh-ed25519|ecdsa-sha2)' /root/.ssh/authorized_keys; then\n      _SSH_KEYS_OK=\"YES\"\n    fi\n  fi\n\n  if [[ \"${_ROOT_PWD_LOCKED}\" == \"NO\" ]] && [[ \"${_SSH_KEYS_OK}\" == \"NO\" ]]; then\n    echo\n    echo \"ERROR: BOA requires working SSH keys for system root\"\n    echo \"ERROR: Please add and test your SSH keys for the root account\"\n    echo \"ERROR: before trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 (modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy manually to the ~/.ssh/authorized_keys file on the server\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure each key is not split into more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HOWTO: You can prioritize your keys by adding these lines to ~/.ssh/config:\"\n    echo \" Host *\"\n    echo \"   IdentityFile ~/.ssh/id_ed25519\"\n    echo \"   IdentityFile ~/.ssh/id_ecdsa\"\n    echo \"   IdentityFile ~/.ssh/id_rsa\"\n    echo\n    echo \"Bye\"\n    echo\n    _clean_pid_exit\n  
fi\n}\n\n_satellite_check_id() {\n  _USER=$1\n  _ID_EXISTS=$(getent passwd ${_USER} 2>&1)\n  if [ -z \"${_ID_EXISTS}\" ]; then\n    _DO_NOTHING=YES\n  elif [[ \"${_ID_EXISTS}\" =~ \"${_USER}\" ]]; then\n    echo \"ERROR: ${_USER} username is already taken\"\n    echo \"Please choose a different username\"\n    _clean_pid_exit\n  else\n    echo \"ERROR: ${_USER} username check failed\"\n    echo \"Please try a different username\"\n    _clean_pid_exit\n  fi\n  if [ \"${_USER}\" = \"admin\" ] \\\n    || [ \"${_USER}\" = \"hostmaster\" ] \\\n    || [ \"${_USER}\" = \"barracuda\" ] \\\n    || [ \"${_USER}\" = \"octopus\" ] \\\n    || [ \"${_USER}\" = \"boa\" ] \\\n    || [ \"${_USER}\" = \"all\" ]; then\n    echo \"ERROR: ${_USER} is a restricted username, \\\n      please choose a different _USER\"\n    _clean_pid_exit\n  elif [[ \"${_USER}\" =~ \"aegir\" ]] \\\n    || [[ \"${_USER}\" =~ \"drupal\" ]] \\\n    || [[ \"${_USER}\" =~ \"drush\" ]] \\\n    || [[ \"${_USER}\" =~ \"sites\" ]] \\\n    || [[ \"${_USER}\" =~ \"default\" ]]; then\n    echo \"ERROR: ${_USER} includes a restricted keyword, \\\n      please choose a different _USER\"\n    _clean_pid_exit\n  fi\n  _REGEX=\"^[[:digit:]]\"\n  if [[ \"${_USER}\" =~ ${_REGEX} ]]; then\n    echo \"ERROR: ${_USER} is an invalid username, \\\n      it must start with a letter, not a digit\"\n    _clean_pid_exit\n  fi\n}\n\n_fix_dns_settings() {\n  [ ! -d \"${_vBs}\" ] && mkdir -p ${_vBs}\n  rm -f ${_vBs}/resolv.conf.tmp\n  if ! grep -q \"nameserver 127.0.0.1\" /etc/resolv.conf; then\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      _FORCE_RESOLV_UPDATE=YES\n    else\n      _FORCE_RESOLV_UPDATE=NO\n    fi\n  fi\n  if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf || [ \"${_FORCE_RESOLV_UPDATE}\" = \"YES\" ]; then\n    echo \"### BOA-DNS-Config ###\" > ${_vBs}/resolv.conf.tmp\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      echo \"nameserver 127.0.0.1\" >> ${_vBs}/resolv.conf.tmp\n    fi\n    echo \"nameserver 1.1.1.1\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 8.8.8.8\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 9.9.9.9\" >> ${_vBs}/resolv.conf.tmp\n  fi\n  if [ -e \"${_vBs}/resolv.conf.tmp\" ]; then\n    chattr -i /etc/resolv.conf\n    rm -f /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp /etc/resolv.conf\n    chmod 0644 /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp ${_vBs}/resolv.conf.vanilla\n  fi\n  if [ -x \"/usr/sbin/unbound-control\" ] \\\n    && [ -e \"/etc/resolvconf/run/interface/lo.unbound\" ]; then\n    unbound-control reload &> /dev/null\n  fi\n}\n\n_check_dns_settings() {\n  if [ -L \"/etc/resolv.conf\" ]; then\n    _fix_dns_settings\n    return 1  # Exit the function but continue the script\n  fi\n  if [ -e \"/root/.use.default.nameservers.cnf\" ]; then\n    if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n      rm -f /root/.use.local.nameservers.cnf\n    fi\n    _USE_DEFAULT_DNS=YES\n    if ! grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n    _USE_PROVIDER_DNS=YES\n  else\n    _REMOTE_DNS_TEST=$(host files.aegir.cc 1.1.1.1 -w 10 2>&1)\n    if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [[ \"${_REMOTE_DNS_TEST}\" =~ \"no servers could be reached\" ]] \\\n    || [[ \"${_REMOTE_DNS_TEST}\" =~ \"Host files.aegir.cc not found\" ]] \\\n    || [ \"${_USE_PROVIDER_DNS}\" = \"YES\" ]; then\n    _fix_dns_settings\n  fi\n}\n\n_if_extended_report() {\n  _EXTENDED=NO\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ] \\\n    || [ -e \"/root/.send-extended-report.cnf\" ]; then\n    _EXTENDED=YES\n  fi\n  if [ \"${_EXTENDED}\" = \"YES\" ]; then\n    if [ -e \"/root/.autoexcalibur.log\" ]; then\n      cat /root/.autoexcalibur.log >> ${_THIS_LOG}\n    elif [ -e \"/root/.autodaedalus.log\" ]; then\n      cat /root/.autodaedalus.log >> ${_THIS_LOG}\n    elif [ -e \"/root/.autochimaera.log\" ]; then\n      cat /root/.autochimaera.log >> ${_THIS_LOG}\n    elif [ -e \"/root/.autobeowulf.log\" ]; then\n      cat /root/.autobeowulf.log >> ${_THIS_LOG}\n    fi\n    echo            >> ${_THIS_LOG}\n    ls -ltcra /root >> ${_THIS_LOG}\n    echo            >> ${_THIS_LOG}\n    ps auxf         >> ${_THIS_LOG}\n    echo            >> ${_THIS_LOG}\n    aureport        >> ${_THIS_LOG}\n    echo            >> ${_THIS_LOG}\n    aa-status | grep loaded   >> ${_THIS_LOG}\n    aa-status | grep enforce  >> ${_THIS_LOG}\n    aa-status | grep complain >> ${_THIS_LOG}\n    echo            >> ${_THIS_LOG}\n    aa-unconfined   >> ${_THIS_LOG}\n    echo            >> ${_THIS_LOG}\n    /opt/local/bin/boa info full >> ${_THIS_LOG}\n    echo            >> ${_THIS_LOG}\n  fi\n}\n\n_send_report() {\n  if [ -e \"${_IN_BARRACUDA_LOG}\" ]; then\n    _THIS_LOG=\"${_IN_BARRACUDA_LOG}\"\n    _RS=\"Barracuda\"\n  elif [ -e \"${_IN_OCTOPUS_LOG}\" ]; then\n    _THIS_LOG=\"${_IN_OCTOPUS_LOG}\"\n    
_RS=\"Octopus\"\n  fi\n  _MY_EMAIL=\"${_eMal}\"\n  _if_hosted_sys\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    _MY_EMAIL=\"$(basename \"$0\")@omega8.cc\"\n  fi\n  if [ ! -z \"${_MY_EMAIL}\" ]; then\n    _repSub=\"Successful ${_RS} installation\"\n    _repSub=\"REPORT: ${_repSub} on ${_hName}\"\n    _repSub=$(echo -n ${_repSub} | fmt -su -w 2500 2>&1)\n    _if_extended_report\n    cat ${_THIS_LOG} | s-nail -s \"${_repSub} at ${_NOW}\" ${_MY_EMAIL}\n    echo \"${_repSub} sent to ${_MY_EMAIL}\"\n  fi\n}\n\n_check_s_nail() {\n  # Check if postfix is available\n  if ! command -v postfix &> /dev/null; then\n    /usr/bin/apt-get update &> /dev/null\n    ${_INITINS} postfix postfix-pcre &> /dev/null\n  fi\n  # Install s-nail if not present\n  if ! command -v s-nail &> /dev/null; then\n    /usr/bin/apt-get update &> /dev/null\n    ${_INITINS} s-nail &> /dev/null\n  fi\n  if ! command -v s-nail &> /dev/null; then\n    return 1  # Exit the function but continue the script\n  else\n    return 0\n  fi\n}\n\n_send_errors() {\n  if ! _check_s_nail; then\n    return 1  # Exit the function but continue the script\n  fi\n  if [ -e \"${_IN_BARRACUDA_LOG}\" ]; then\n    _THIS_LOG=\"${_IN_BARRACUDA_LOG}\"\n    _RS=\"Barracuda\"\n  elif [ -e \"${_IN_OCTOPUS_LOG}\" ]; then\n    _THIS_LOG=\"${_IN_OCTOPUS_LOG}\"\n    _RS=\"Octopus\"\n  fi\n  _MY_EMAIL=\"${_eMal}\"\n  _if_hosted_sys\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    _MY_EMAIL=\"$(basename \"$0\")@omega8.cc\"\n  fi\n  if [ ! 
-z \"${_MY_EMAIL}\" ]; then\n    _repSub=\"FAILED ${_RS} installation\"\n    _repSub=\"REPORT: ${_repSub} on ${_hName}\"\n    _repSub=$(echo -n ${_repSub} | fmt -su -w 2500 2>&1)\n    _if_extended_report\n    cat ${_THIS_LOG} | s-nail -s \"${_repSub} at ${_NOW}\" ${_MY_EMAIL}\n    echo \"${_repSub} sent to ${_MY_EMAIL}\"\n  fi\n}\n\n_octopus_install() {\n  if [ -e \"${_vBs}/${_octName}\" ] && [ -e \"${_vBs}/${_filIncO}\" ]; then\n    if [ -z \"${_usEr}\" ]; then\n      _usEr=\"o1\"\n    else\n      _usEr=${_usEr//[^a-zA-Z0-9-.]/}\n      _usEr=$(echo -n ${_usEr} | tr A-Z a-z 2>&1)\n    fi\n    if [ \"${_cmNd}\" = \"in-octopus\" ] || [ \"${_cmNd}\" = \"in-oct\" ]; then\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" != \"YES\" ]; then\n        if [ -e \"${_barCnf}\" ]; then\n          source ${_barCnf}\n        fi\n        if [ ! -z \"${_MY_OCTO_EMAIL}\" ] \\\n          && [ \"${_MY_OCTO_EMAIL}\" = \"${_eMal}\" ]; then\n          sed -i 's~^_MY_OCTO.*~_MY_OCTO_EMAIL=\"root\"~'      ${_vBs}/${_octName}\n          wait\n        else\n          _lml=\"${_MY_OCTO_EMAIL}\"\n          if [ ! -z \"${_lml}\" ]; then\n            sed -i \"s~^_MY_OC.*~_MY_OCTO_EMAIL=\\\"${_lml}\\\"~\" ${_vBs}/${_octName}\n            wait\n          else\n            sed -i 's~^_MY_OCTO.*~_MY_OCTO_EMAIL=\"root\"~'    ${_vBs}/${_octName}\n            wait\n          fi\n        fi\n      fi\n      sed -i \"s/^_SPINNER=YES/_SPINNER=NO/g\"                 ${_vBs}/${_octName}\n      wait\n      if [ ! -z \"${_pXyc}\" ] && [ ! -z \"${_pXyi}\" ]; then\n        sed -i \"s/^_THIS_DB_PO.*/_THIS_DB_PORT=${_pXyc}/g\"   ${_vBs}/${_octName}\n        wait\n        sed -i \"s/^_THIS_DB_HO.*/_THIS_DB_HOST=${_pXyi}/g\"   ${_vBs}/${_octName}\n        wait\n        sed -i \"s/^_THIS_DB_PO.*/_THIS_DB_PORT=${_pXyc}/g\"   ${_vBs}/${_filIncO}\n        wait\n        sed -i \"s/^_THIS_DB_HO.*/_THIS_DB_HOST=${_pXyi}/g\"   ${_vBs}/${_filIncO}\n        wait\n      fi\n      if [ ! 
-z \"${_cOpt}\" ]; then\n        sed -i \"s/^_CLIENT_OPTI.*/_CLIENT_OPTION=${_cOpt}/g\" ${_vBs}/${_octName}\n        wait\n      fi\n      if [ ! -z \"${_cSub}\" ]; then\n        sed -i \"s/^_CLIENT_SUBS.*/_CLIENT_SUBSCR=${_cSub}/g\" ${_vBs}/${_octName}\n        wait\n      fi\n      if [ ! -z \"${_cCor}\" ]; then\n        sed -i \"s/^_CLIENT_COR.*/_CLIENT_CORES=${_cCor}/g\"   ${_vBs}/${_octName}\n        wait\n      fi\n    else\n      sed -i \"s~^_MY_OCTOPU=.*~_MY_OCTO_EMAIL=\\\"${_eMal}\\\"~\" ${_vBs}/${_octName}\n      wait\n    fi\n    sed -i \"s~^_CLIENT_EMAIL=.*~_CLIENT_EMAIL=\\\"${_eMal}\\\"~\" ${_vBs}/${_octName}\n    wait\n    sed -i \"s/^_USER=.*/_USER=${_usEr}/g\"                    ${_vBs}/${_octName}\n    wait\n\n    sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"               ${_vBs}/${_filIncO}\n    wait\n    sed -i \"s/^_PLATFORMS_LIST=.*/_PLATFORMS_LIST=none/g\"    ${_vBs}/${_filIncO}\n    wait\n\n    sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"               ${_vBs}/${_filIncO}\n    wait\n    sed -i \"s/^_STRONG_PASS.*/_STRONG_PASSWORDS=YES/g\"       ${_vBs}/${_filIncO}\n    wait\n\n    if [ \"${_cmNd}\" = \"in-dev\" ] \\\n      || [ \"${_cmNd}\" = \"in-pro\" ] \\\n      || [ \"${_cmNd}\" = \"in-lts\" ] \\\n      || [ \"${_cmNd}\" = \"in-octopus\" ] \\\n      || [ \"${_cmNd}\" = \"in-oct\" ] \\\n      || [ \"${_cOpt}\" = \"dev\" ] \\\n      || [ \"${_cOpt}\" = \"pro\" ] \\\n      || [ \"${_cOpt}\" = \"lts\" ]; then\n      sed -i \"s/^_BRANCH_PRN=.*/_BRANCH_PRN=${_bRnh}/g\"      ${_vBs}/${_filIncO}\n      wait\n      sed -i \"s/^_AEGIR_VERSION.*/_AEGIR_VERSION=${_tRee}/g\" ${_vBs}/${_filIncO}\n      wait\n      sed -i \"s/^_AEGIR_XTS_VRN.*/_AEGIR_XTS_VRN=${_tRee}/g\" ${_vBs}/${_filIncO}\n      wait\n      sed -i \"s/^_X_VERSION=.*/_X_VERSION=${_rlsE}/g\"        ${_vBs}/${_filIncO}\n      wait\n    else\n      sed -i \"s/^_BRANCH_PRN=.*/_BRANCH_PRN=${_bRnh}/g\"      ${_vBs}/${_filIncO}\n      wait\n      sed -i 
\"s/^_AEGIR_VERSION.*/_AEGIR_VERSION=${_tRee}/g\" ${_vBs}/${_filIncO}\n      wait\n      sed -i \"s/^_AEGIR_XTS_VRN.*/_AEGIR_XTS_VRN=${_tRee}/g\" ${_vBs}/${_filIncO}\n      wait\n      sed -i \"s/^_X_VERSION=.*/_X_VERSION=${_rlsE}/g\"        ${_vBs}/${_filIncO}\n      wait\n    fi\n\n    # The same BOA branch applies to every install command\n    sed -i \"s/^_BRANCH_BOA=.*/_BRANCH_BOA=${_bRnh}/g\"        ${_vBs}/${_filIncO}\n    wait\n\n    if [ ! -z \"${_pXyc}\" ] && [ ! -z \"${_pXyi}\" ]; then\n      if [ -e \"/data/conf/global.inc\" ]; then\n        echo \"ProxySQL\" > /data/conf/${_usEr}_use_proxysql.txt\n      fi\n    fi\n\n    if [ \"${_outP}\" = \"logged\" ]; then\n      _n=$((RANDOM%9+2))\n      echo\n      echo \"Preparing to install Octopus in almost silent mode...\"\n      echo\n      echo \"NOTE: There will be no progress displayed in this console,\"\n      echo \"but you will receive an email once the installation is complete\"\n      echo\n      sleep ${_n}\n      echo \"Please watch the progress in another console window with the command:\"\n      echo \"  tail -f ${_IN_OCTOPUS_LOG}\"\n      echo \"or wait until you see the line: BOA ${_cmNd} completed, Bye\"\n      echo\n      sleep ${_n}\n      echo \"Starting now...\"\n      echo\n      sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"             ${_vBs}/${_octName}\n      wait\n      sed -i \"s/^_SPINNER=YES/_SPINNER=NO/g\"                 ${_vBs}/${_octName}\n      wait\n    fi\n\n    if [ \"${_cmNd}\" = \"in-octopus\" ] \\\n      || [ \"${_cmNd}\" = \"in-oct\" ]; then\n      if [ -x \"/opt/php84/bin/php\" ]; 
then\n        _USE_PHP=8.4\n      elif [ -x \"/opt/php85/bin/php\" ]; then\n        _USE_PHP=8.5\n      elif [ -x \"/opt/php83/bin/php\" ]; then\n        _USE_PHP=8.3\n      fi\n      # Pin the PHP versions only when a PHP binary was actually detected\n      if [ -n \"${_USE_PHP}\" ]; then\n        sed -i \"s/^_PHP_FPM.*/_PHP_FPM_VERSION=${_USE_PHP}/g\"  ${_vBs}/${_filIncO}\n        wait\n        sed -i \"s/^_PHP_CLI.*/_PHP_CLI_VERSION=${_USE_PHP}/g\"  ${_vBs}/${_filIncO}\n        wait\n      fi\n    fi\n\n    if [ -e \"${_vBs}/${_octName}\" ]; then\n      if [ \"${_outP}\" = \"logged\" ] || [ \"${_outP}\" = \"onlylogged\" ]; then\n        bash ${_vBs}/${_octName} >${_IN_OCTOPUS_LOG} 2>&1\n      else\n        bash ${_vBs}/${_octName} 2>&1 | tee ${_IN_OCTOPUS_LOG}\n      fi\n      wait\n    fi\n\n    if [ ! -z \"${_pXyc}\" ] && [ ! -z \"${_pXyi}\" ]; then\n      if [ -e \"/data/disk/${_usEr}/log/octopus_log.txt\" ]; then\n        echo \"ProxySQL\" > /data/disk/${_usEr}/log/use_proxysql.txt\n        echo \"ProxySQL\" > /data/conf/clstr.cnf\n      fi\n    fi\n  else\n    if [ ! -e \"${_vBs}/${_octName}\" ]; then\n      echo\n      echo \"${_vBs}/${_octName} installer not available, exit\"\n      echo\n      _clean_pid_exit _octName\n    fi\n    if [ ! 
-e \"${_vBs}/${_filIncO}\" ]; then\n      echo\n      echo \"${_vBs}/${_filIncO} inc file not available, exit\"\n      echo\n      _clean_pid_exit _filIncO\n    fi\n  fi\n}\n\n_barracuda_install() {\n  if [ -e \"${_vBs}/${_barName}\" ] && [ -e \"${_vBs}/${_filIncB}\" ]; then\n\n    # Unlock /etc/resolv.conf just in case, so it doesn't affect apt-get tasks\n    [ -f \"/etc/resolv.conf\" ] && chattr -i /etc/resolv.conf\n\n    sed -i \"s~^_MY_EMAIL=.*~_MY_EMAIL=\\\"${_eMal}\\\"~\"         ${_vBs}/${_filIncB}\n    wait\n    sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"               ${_vBs}/${_filIncB}\n    wait\n\n    if [ \"${_kiNd}\" = \"local\" ]; then\n      echo \"127.0.1.1 aegir.local o1.sub.aegir.local \\\n        o2.sub.aegir.local o3.sub.aegir.local\" >> /etc/hosts\n      sed -i \"s/^_EASY_SETUP=.*/_EASY_SETUP=LOCAL/g\"         ${_vBs}/${_filIncB}\n      wait\n    else\n      # PUBLIC setup applies both with and without a custom FQDN\n      sed -i \"s/^_EASY_SETUP=.*/_EASY_SETUP=PUBLIC/g\"        ${_vBs}/${_filIncB}\n      wait\n      _fQdn=\"${_fQdn//[^a-zA-Z0-9-.]/}\"\n      _fQdn=\"$(echo -n ${_fQdn} | tr A-Z a-z 2>&1)\"\n      sed -i \"s/^_EASY_HOSTNAME.*/_EASY_HOSTNAME=${_fQdn}/g\" ${_vBs}/${_filIncB}\n      wait\n    fi\n\n    if [ ! 
-z \"${_fQdn}\" ]; then\n      export _EASY_HOSTNAME=${_fQdn}\n      hostname -b ${_fQdn} ### force our custom FQDN/local hostname\n      echo \"${_fQdn}\" > /etc/hostname\n      echo \"${_fQdn}\" > /etc/mailname\n    fi\n\n    if [ -n \"${_eXtr}\" ] \\\n      && [ -e \"${_vBs}/${_filIncB}\" ] \\\n      && [ -e \"${_vBs}/${_filIncO}\" ]; then\n      # Detect a single requested PHP version first; _pI stays empty otherwise\n      if [ \"${_eXtr}\" = \"php-85\" ] || [ \"${_eXtr}\" = \"php-8.5\" ]; then\n        _pI=8.5\n      elif [ \"${_eXtr}\" = \"php-84\" ] || [ \"${_eXtr}\" = \"php-8.4\" ]; then\n        _pI=8.4\n      elif [ \"${_eXtr}\" = \"php-83\" ] || [ \"${_eXtr}\" = \"php-8.3\" ]; then\n        _pI=8.3\n      fi\n      if [ -n \"${_pI}\" ]; then\n        sed -i \"s/^_PHP_SI.*/_PHP_SINGLE_INSTALL=${_pI}/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_MU.*/_PHP_MULTI_INSTALL=${_pI}/g\"  ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_FPM_V.*/_PHP_FPM_VERSION=${_pI}/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_CLI_V.*/_PHP_CLI_VERSION=${_pI}/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_FPM_V.*/_PHP_FPM_VERSION=${_pI}/g\" ${_vBs}/${_filIncO}\n        wait\n        sed -i \"s/^_PHP_CLI_V.*/_PHP_CLI_VERSION=${_pI}/g\" ${_vBs}/${_filIncO}\n        wait\n      elif [ \"${_eXtr}\" = \"php-all\" ] \\\n        || [ \"${_eXtr}\" = \"php-min\" ] \\\n        || [ \"${_eXtr}\" = \"php-max\" ]; then\n        sed -i \"s/^_PHP_SINGLE_.*/_PHP_SINGLE_INSTALL=/g\"    ${_vBs}/${_filIncB}\n        if [ \"${_eXtr}\" = \"php-max\" ]; then\n          _pA=\"5.6 7.0 7.1 7.2 7.3 7.4 8.0 8.1 8.2 8.3 8.4 8.5\"\n        else\n          _pA=\"8.3 8.4 8.5\"\n        fi\n        _pI=8.4\n        sed -i \"s/^_PHP_M.*/_PHP_MULTI_INSTALL=\\\"${_pA}\\\"/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_FPM_VER.*/_PHP_FPM_VERSION=${_pI}/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_CLI_VER.*/_PHP_CLI_VERSION=${_pI}/g\" ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_PHP_FPM_VER.*/_PHP_FPM_VERSION=${_pI}/g\" ${_vBs}/${_filIncO}\n        wait\n        sed -i \"s/^_PHP_CLI_VER.*/_PHP_CLI_VERSION=${_pI}/g\" ${_vBs}/${_filIncO}\n        wait\n      elif [ \"${_eXtr}\" = \"nodns\" ]; then\n        sed -i \"s/^_SMTP_RELAY_TES.*/_SMTP_RELAY_TEST=NO/g\"  ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_DNS_SETUP_TES.*/_DNS_SETUP_TEST=NO/g\"    ${_vBs}/${_filIncB}\n        wait\n        sed -i \"s/^_DNS_SETUP_TES.*/_DNS_SETUP_TEST=NO/g\"    ${_vBs}/${_filIncO}\n        wait\n      elif [[ \"${_eXtr}\" =~ \"percona\" ]]; then\n        if [ \"${_eXtr}\" = \"percona-8.4\" ]; then\n          sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.4/g\"         ${_vBs}/${_filIncB}\n        elif [ \"${_eXtr}\" = \"percona-8.0\" ]; then\n          sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.0/g\"         ${_vBs}/${_filIncB}\n        elif [ \"${_eXtr}\" = \"percona-5.7\" ]; then\n          sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=5.7/g\"         ${_vBs}/${_filIncB}\n        else\n          sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=5.7/g\"         ${_vBs}/${_filIncB}\n        fi\n        wait\n      else\n        sed -i \"s/^_NEWRELIC.*/_NEWRELIC_KEY=\\\"${_eXtr}\\\"/g\" ${_vBs}/${_filIncB}\n        wait\n      fi\n    fi\n\n    ###\n    ### Percona SQL server already supports Debian Trixie / Devuan Excalibur,\n    ### but only the latest Percona 8.4 is supported on Devuan Excalibur.\n    ###\n    if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n      && [ \"${_eXtr}\" != \"percona-8.4\" ]; then\n      sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.4/g\"             ${_vBs}/${_filIncB}\n      wait\n    fi\n\n    if [ \"${_cmNd}\" = \"in-dev\" ] \\\n      || [ \"${_cmNd}\" = \"in-pro\" ] \\\n      || [ \"${_cmNd}\" = \"in-lts\" ] \\\n      || [ \"${_cmNd}\" = \"in-octopus\" ] \\\n      || [ \"${_cmNd}\" = \"in-oct\" ]; then\n      sed -i \"s/^_BRANCH_PRN=.*/_BRANCH_PRN=${_bRnh}/g\"     
 ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_VERSI.*/_AEGIR_VERSION=${_tRee}/g\"   ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_XTS_V.*/_AEGIR_XTS_VRN=${_tRee}/g\"   ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_X_VERSION=.*/_X_VERSION=${_rlsE}/g\"        ${_vBs}/${_filIncB}\n      wait\n    else\n      sed -i \"s/^_BRANCH_PRN=.*/_BRANCH_PRN=${_bRnh}/g\"      ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_VERSI.*/_AEGIR_VERSION=${_tRee}/g\"   ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_AEGIR_XTS_V.*/_AEGIR_XTS_VRN=${_tRee}/g\"   ${_vBs}/${_filIncB}\n      wait\n      sed -i \"s/^_X_VERSION=.*/_X_VERSION=${_rlsE}/g\"        ${_vBs}/${_filIncB}\n      wait\n    fi\n\n    # The same BOA branch applies to every install command\n    sed -i \"s/^_BRANCH_BOA=.*/_BRANCH_BOA=${_bRnh}/g\"        ${_vBs}/${_filIncB}\n    wait\n\n    if [ \"${_outP}\" = \"logged\" ]; then\n      _n=$((RANDOM%9+2))\n      echo\n      if [ \"${_sYst}\" = \"bundle\" ]; then\n        echo \"Preparing to install Barracuda and Octopus in almost silent mode...\"\n      else\n        echo \"Preparing to install only Barracuda in almost silent mode...\"\n      fi\n      echo\n      echo \"NOTE: There will be no progress displayed in this console,\"\n      echo \"but you will receive an email once the installation is complete\"\n      echo\n      sleep ${_n}\n      echo \"Please watch the progress in another console window with the command:\"\n      echo \"  tail -f ${_IN_BARRACUDA_LOG}\"\n      echo \"or wait until you see the line: BOA ${_cmNd} completed, Bye\"\n      echo\n      sleep ${_n}\n      echo \"Starting now...\"\n      echo\n      sed -i 
\"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"               ${_vBs}/${_barName}\n      wait\n      sed -i \"s/^_SPINNER=YES/_SPINNER=NO/g\"                   ${_vBs}/${_barName}\n      wait\n    fi\n\n    if [ -e \"${_vBs}/${_barName}\" ]; then\n      if [ \"${_outP}\" = \"logged\" ] || [ \"${_outP}\" = \"onlylogged\" ]; then\n        bash ${_vBs}/${_barName} >${_IN_BARRACUDA_LOG} 2>&1\n      else\n        bash ${_vBs}/${_barName} 2>&1 | tee ${_IN_BARRACUDA_LOG}\n      fi\n      wait\n    fi\n  else\n    if [ ! -e \"${_vBs}/${_barName}\" ]; then\n      echo \"${_vBs}/${_barName} installer not available, exit\"\n      _clean_pid_exit _barName\n    fi\n    if [ ! -e \"${_vBs}/${_filIncB}\" ]; then\n      echo \"${_vBs}/${_filIncB} inc file not available, exit\"\n      _clean_pid_exit _filIncB\n    fi\n  fi\n}\n\n_init_start() {\n  if [ -e \"/run/boa_run.pid\" ]; then\n    echo\n    echo \"  Another BOA installer is probably running,\"\n    echo \"  because /run/boa_run.pid exists\"\n    echo\n    exit 1\n  elif [ -e \"/run/boa_wait.pid\" ]; then\n    echo\n    echo \"  Some important system task is probably running,\"\n    echo \"  because /run/boa_wait.pid exists\"\n    echo\n    exit 1\n  else\n    touch /run/boa_run.pid\n    touch /run/boa_wait.pid\n    mkdir -p ${_LOG_IN_DIR}\n    mkdir -p ${_LOG_UP_DIR}\n    cd ${_vBs}\n    rm -f ${_vBs}/*.sh.cnf*\n    rm -f ${_vBs}/BARRACUDA.sh*\n    rm -f ${_vBs}/OCTOPUS.sh*\n  fi\n}\n\n_check_etc_apt_preferences() {\n  if [ -e \"/etc/apt/preferences\" ] \\\n    && [ ! 
-e \"/var/backups/old_etc_apt_preferences\" ]; then\n    mv -f /etc/apt/preferences /var/backups/old_etc_apt_preferences\n  fi\n}\n\n_init_cleanup() {\n  rm -f /root/BOA.sh*\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n  [ -e \"/run/manage_ruby_users.pid\" ] && rm -f /run/manage_ruby_users.pid\n  rm -f ${_vBs}/*.sh.cnf*\n  rm -f ${_vBs}/BARRACUDA.sh*\n  rm -f ${_vBs}/OCTOPUS.sh*\n}\n\n_init_finish() {\n  rm -f /root/BOA.sh*\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n  [ -e \"/run/manage_ruby_users.pid\" ] && rm -f /run/manage_ruby_users.pid\n  rm -f ${_vBs}/*.sh.cnf*\n  rm -f ${_vBs}/BARRACUDA.sh*\n  rm -f ${_vBs}/OCTOPUS.sh*\n  echo\n  echo BOA ${_cmNd} completed\n  echo Bye\n  echo\n  exit 0\n}\n\n_init_setup() {\n\n  if [ \"${_cmNd}\" = \"in-octopus\" ] || [ \"${_cmNd}\" = \"in-oct\" ]; then\n\n    if [ -e \"/lib/systemd/systemd\" ]; then\n      # Explain the procedure\n      echo\n      echo \"  Since this system is not ready for Octopus installation yet,\"\n      echo \"  we will run the 'autoinit' for you first and reboot; then\"\n      echo \"  you can install BOA and add more Octopus instances\"\n      echo\n      echo \"  NOTE: All required autoinit steps are automated\"\n      echo\n      echo \"  Feel free to log out now and check back after 10 minutes\"\n      echo\n      echo \"  PLEASE DON'T REBOOT THIS SERVER WHILE THE PROCEDURE IS RUNNING!\"\n      echo\n      echo \"  Bye!\"\n      echo\n      # Execute the autoinit and exit\n      nohup /usr/local/bin/autoinit > /dev/null 2>&1 &\n      sleep 1\n      _clean_pid_exit\n    fi\n\n    if [ -e \"/root/.dev.server.cnf\" ]; then\n      echo\n      echo RAWV _tRee is ${_tRee}\n      echo RAWV _eMal is 
${_eMal}\n      echo RAWV _usEr is ${_usEr}\n      echo RAWV _eXtr is ${_eXtr}\n      echo RAWV _cOpt is ${_cOpt}\n      echo RAWV _cSub is ${_cSub}\n      echo RAWV _cCor is ${_cCor}\n      echo\n    fi\n    if [ \"${_eXtr}\" = \"lts\" ] \\\n      || [ \"${_eXtr}\" = \"dev\" ]; then\n      _OCTO_EXTRA=OK\n    else\n      _eXtr=lite\n    fi\n    _satellite_check_id ${_usEr}\n    if [ \"${_cOpt}\" = \"octolog\" ]; then\n      _outP=\"logged\"\n      _cOpt=\n    elif [ \"${_cOpt}\" = \"silent\" ]; then\n      _outP=\"onlylogged\"\n      _cOpt=\n    elif [ \"${_cOpt}\" = \"system\" ]; then\n      _outP=\"logged\"\n      _cOpt=\n    elif [ \"${_cOpt}\" = \"minimal\" ]; then\n      _outP=\"logged\"\n      _cOpt=\n    else\n      _outP=\"verbose\"\n    fi\n\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _defaultOption=EDGE\n    else\n      _defaultOption=POWER\n    fi\n\n    if [ \"${_outP}\" = \"logged\" ] \\\n      || [ \"${_outP}\" = \"onlylogged\" ] \\\n      || [ -z \"${_cOpt}\" ] \\\n      || [ -z \"${_cSub}\" ] \\\n      || [ -z \"${_cCor}\" ]; then\n      _cOpt=${_defaultOption}\n      _cSub=Q\n      _cCor=1\n    fi\n    if [ -e \"/root/.dev.server.cnf\" ]; then\n      echo MODD _tRee is ${_tRee}\n      echo MODD _eMal is ${_eMal}\n      echo MODD _usEr is ${_usEr}\n      echo MODD _eXtr is ${_eXtr}\n      echo MODD _cOpt is ${_cOpt}\n      echo MODD _cSub is ${_cSub}\n      echo MODD _cCor is ${_cCor}\n      echo\n      echo CTRL _outP is ${_outP}\n      echo\n      echo Waiting 60 seconds...\n      sleep 15\n      echo Waiting 45 seconds...\n      sleep 15\n      echo Waiting 30 seconds...\n      sleep 15\n      echo Waiting 15 seconds...\n      sleep 15\n      echo\n    fi\n    if [ \"${_eXtr}\" = \"dev\" ]; then\n      export _tRee=dev\n    elif [ \"${_eXtr}\" = \"pro\" ]; then\n     
 export _tRee=pro\n    elif [ \"${_eXtr}\" = \"lts\" ]; then\n      export _tRee=lts\n    fi\n  fi\n\n  # Define the BOA log file for use after autoinit\n  _BOA_LOGFILE=\"/root/.boa.install.command.cnf\"\n\n  if [ \"${_cmNd}\" = \"in-dev\" ] \\\n    || [ \"${_cmNd}\" = \"in-pro\" ] \\\n    || [ \"${_cmNd}\" = \"in-lts\" ]; then\n\n    # Disable forced MySQL/Valkey/Redis password updates during installation\n    # Delete any/all of these files to re-enable standard behaviour later\n    touch /root/.mysql.no.new.password.cnf\n    touch /root/.valkey.no.new.password.cnf\n    touch /root/.redis.no.new.password.cnf\n\n    if [ ! -e \"/root/.autoinit.log\" ] || [ -e \"/lib/systemd/systemd\" ]; then\n      # Write the command to the log file\n      echo \"${_BOA_COMMAND}\" > ${_BOA_LOGFILE}\n      # Explain the procedure\n      echo\n      echo \"  Since this system is not ready for BOA installation yet,\"\n      echo \"  we will run the 'autoinit' for you first and reboot; then\"\n      echo \"  your original boa command will be run in silent mode\"\n      echo\n      echo \"  The command to run once autoinit is complete:\"\n      echo\n      echo \"    ${_BOA_COMMAND}\"\n      echo\n      echo \"  NOTE: All those steps are automated\"\n      echo\n      echo \"  Feel free to log out now and check back after 30-40 minutes\"\n      echo\n      echo \"  PLEASE DON'T REBOOT THIS SERVER WHILE THE PROCEDURE IS RUNNING!\"\n      echo\n      echo \"  You should receive an email once the installation is complete\"\n      echo\n      echo \"  Bye!\"\n      echo\n      # Unlock /etc/resolv.conf just in case, so it doesn't affect apt-get tasks\n      [ -f \"/etc/resolv.conf\" ] && chattr -i /etc/resolv.conf\n      # Execute the autoinit and exit\n      nohup /usr/local/bin/autoinit > /dev/null 2>&1 &\n      sleep 1\n      _clean_pid_exit\n    fi\n  fi\n\n  if [ \"${_cmNd}\" = \"in-dev\" ] \\\n    || [ \"${_cmNd}\" = \"in-pro\" ] \\\n    || [ \"${_cmNd}\" = \"in-lts\" ]; then\n    
_check_no_systemd\n    _ifnames_grub_check_sync\n  fi\n\n  if [ \"${_cmNd}\" = \"in-dev\" ] \\\n    || [ \"${_cmNd}\" = \"in-pro\" ] \\\n    || [ \"${_cmNd}\" = \"in-lts\" ]; then\n    if [ -e \"/root/.dev.server.cnf\" ]; then\n      echo\n      echo RAWV _tRee is ${_tRee}\n      echo RAWV _kiNd is ${_kiNd}\n      echo RAWV _fQdn is ${_fQdn}\n      echo RAWV _eMal is ${_eMal}\n      echo RAWV _usEr is ${_usEr}\n      echo RAWV _eXtr is ${_eXtr}\n      echo RAWV _outP is ${_outP}\n      echo\n    fi\n    if [ \"${_kiNd}\" = \"local\" ]; then\n      if [[ \"${_eMal}\" =~ \"php\" ]]; then\n        _eXtr=\"${_eMal}\"\n      fi\n      _eMal=\"${_fQdn}\"\n      _usEr=\"o1\"\n    fi\n    if [ \"${_kiNd}\" = \"public\" ]; then\n      if [[ \"${_usEr}\" =~ \"php\" ]]; then\n        _eXtr=\"${_usEr}\"\n        _usEr=\"o1\"\n      fi\n      _satellite_check_id ${_usEr}\n    fi\n    if [ \"${_usEr}\" = \"silent\" ] \\\n      || [ \"${_eXtr}\" = \"silent\" ] \\\n      || [ \"${_outP}\" = \"silent\" ]; then\n      _outP=\"onlylogged\"\n      _sYst=\"bundle\"\n    elif [ \"${_usEr}\" = \"minimal\" ] \\\n      || [ \"${_eXtr}\" = \"minimal\" ] \\\n      || [ \"${_outP}\" = \"minimal\" ]; then\n      _outP=\"logged\"\n      _sYst=\"bundle\"\n    elif [ \"${_usEr}\" = \"octolog\" ] \\\n      || [ \"${_eXtr}\" = \"octolog\" ] \\\n      || [ \"${_outP}\" = \"octolog\" ]; then\n      _outP=\"logged\"\n      _sYst=\"bundle\"\n    elif [ \"${_usEr}\" = \"system\" ] \\\n      || [ \"${_eXtr}\" = \"system\" ] \\\n      || [ \"${_outP}\" = \"system\" ]; then\n      _outP=\"logged\"\n      _sYst=\"barracuda\"\n    else\n      _outP=\"verbose\"\n      _sYst=\"bundle\"\n    fi\n    if [ -e \"/root/.dev.server.cnf\" ]; then\n      echo MODD _tRee is ${_tRee}\n      echo MODD _kiNd is ${_kiNd}\n      _fQdn=\"${_fQdn//[^a-zA-Z0-9-.]/}\"\n      _fQdn=\"$(echo -n ${_fQdn} | tr A-Z a-z 2>&1)\"\n      echo MODD _fQdn is ${_fQdn}\n      echo MODD _eMal is ${_eMal}\n      echo MODD _usEr is ${_usEr}\n  
    echo MODD _eXtr is ${_eXtr}\n      echo MODD _outP is ${_outP}\n      echo\n      echo CTRL _outP is ${_outP}\n      echo CTRL _sYst is ${_sYst}\n      echo\n      echo Waiting 60 seconds...\n      sleep 15\n      echo Waiting 45 seconds...\n      sleep 15\n      echo Waiting 30 seconds...\n      sleep 15\n      echo Waiting 15 seconds...\n      sleep 15\n      echo\n    fi\n  fi\n\n  _init_start\n  _check_etc_apt_preferences\n\n  export _tRee=\"${_tRee}\"\n  export _rLsn=\"BOA-5.9.1\"\n  export _rlsE=\"${_rLsn}-${_tRee}\"\n  export _bRnh=\"5.x-${_tRee}\"\n  export _rgUrl=\"http://files.aegir.cc/versions/${_tRee}/boa\"\n\n  if [ \"${_cmNd}\" = \"in-dev\" ] \\\n    || [ \"${_cmNd}\" = \"in-pro\" ] \\\n    || [ \"${_cmNd}\" = \"in-lts\" ] \\\n    || [ \"${_cmNd}\" = \"in-octopus\" ] \\\n    || [ \"${_cmNd}\" = \"in-oct\" ]; then\n    curl ${_crlGet} \"${_rgUrl}/${_barName}\"  -o ${_vBs}/${_barName}\n    curl ${_crlGet} \"${_rgUrl}/${_octName}\"  -o ${_vBs}/${_octName}\n    curl ${_crlGet} \"${_rgUrl}/${_pthIncB}\"  -o ${_vBs}/${_filIncB}\n    curl ${_crlGet} \"${_rgUrl}/${_pthIncO}\"  -o ${_vBs}/${_filIncO}\n  fi\n  if [ -e \"/root/.debug-boa-installer.cnf\" ]; then\n    sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"             ${_vBs}/${_filIncB}\n    wait\n    sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"             ${_vBs}/${_filIncO}\n    wait\n  fi\n  if [ -e \"/root/.debug-barracuda-installer.cnf\" ]; then\n    sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"             ${_vBs}/${_filIncB}\n    wait\n  fi\n  if [ -e \"/root/.debug-octopus-installer.cnf\" ]; then\n    sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"             ${_vBs}/${_filIncO}\n    wait\n  fi\n  if [ \"${_cmNd}\" = 
\"in-octopus\" ] || [ \"${_cmNd}\" = \"in-oct\" ]; then\n    _OCTOPUS_ONLY=YES\n  else\n    _barracuda_install\n    if [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      echo \"ERROR: Barracuda install failed, we cannot continue\"\n      echo \"ERROR: Please examine the available logs to find the culprit\"\n      echo \"LOG TO CHECK: ${_IN_BARRACUDA_LOG}\"\n      _send_errors\n      _clean_pid_exit _barracuda_install\n    else\n      _CNT=$(pgrep -fc 'tee -a /var/backups/barracuda-')\n      if (( _CNT > 1 )); then\n        pkill -f 'tee -a /var/backups/barracuda-'\n      fi\n    fi\n    _NOW=$(date +%y%m%d-%H%M%S)\n    _NOW=${_NOW//[^0-9-]/}\n    _UP_BARRACUDA_LOG=\"${_LOG_UP_DIR}/$(basename \"$0\")-up-barracuda-${_NOW}.log\"\n    if [ \"${_outP}\" = \"verbose\" ]; then\n      echo\n      echo \"Time for a quick barracuda system upgrade to complete the installation\"\n      echo\n      /opt/local/bin/barracuda up-${_tRee} system noscreen 2>&1 | tee ${_UP_BARRACUDA_LOG}\n      wait\n    else\n      /opt/local/bin/barracuda up-${_tRee} system noscreen >${_UP_BARRACUDA_LOG} 2>&1\n      wait\n    fi\n  fi\n  if [ \"${_OCTOPUS_ONLY}\" = \"YES\" ] || [ \"${_sYst}\" = \"bundle\" ]; then\n    [ ! -e \"/run/octopus_install_run.pid\" ] && touch /run/octopus_install_run.pid\n    sleep 3\n    _octopus_install\n    if [ ! -e \"/data/disk/${_usEr}/config/includes/nginx_vhost_common.conf\" ] \\\n      || [ ! -e \"/data/disk/${_usEr}/log/octopus_log.txt\" ]; then\n      echo \"ERROR: Octopus install failed, we cannot continue\"\n      echo \"ERROR: Please examine the available logs to find the culprit\"\n      echo \"LOG TO CHECK: ${_IN_OCTOPUS_LOG}\"\n      _send_errors\n      _clean_pid_exit _octopus_install\n    else\n      rm -rf /data/disk/${_usEr}/.tmp/cache\n    fi\n  fi\n  if [ \"${_outP}\" = \"logged\" ]; then\n    _send_report\n  fi\n  if [ \"${_OCTOPUS_ONLY}\" = \"YES\" ] || [ \"${_sYst}\" = \"bundle\" ]; then\n    _init_cleanup\n    [ ! 
-e \"/run/octopus_install_run.pid\" ] && touch /run/octopus_install_run.pid\n    sleep 3\n    # Execute post-install octopus upgrade and exit\n    touch /data/disk/${_usEr}/static/control/ssl-live-mode.info\n    rm -f /data/disk/${_usEr}/tools/le/.ctrl/ssl-demo-mode.pid\n    rm -rf /data/disk/${_usEr}/.tmp/cache\n    echo\n    if [ ! -e \"/root/.dont.upgrade.octopus.on.install.cnf\" ]; then\n      echo \"Invoking hosting-dispatch/hosting-tasks to complete the installation\"\n      sleep 3\n      echo \"Please wait, it will take about 30 seconds...\"\n      su -s /bin/bash ${_usEr} -c \"drush @hm hosting-dispatch\"  &> /dev/null\n      echo \"1/5\"\n      sleep 5\n      su -s /bin/bash ${_usEr} -c \"drush @hm hosting-tasks --force\" &> /dev/null\n      echo \"2/5\"\n      sleep 5\n      su -s /bin/bash ${_usEr} -c \"drush @hm hosting-tasks --force\" &> /dev/null\n      echo \"3/5\"\n      sleep 5\n      su -s /bin/bash ${_usEr} -c \"drush @hm hosting-tasks --force\" &> /dev/null\n      echo \"4/5\"\n      sleep 5\n      su -s /bin/bash ${_usEr} -c \"drush @hm hosting-tasks --force\" &> /dev/null\n      echo \"5/5\"\n      sleep 3\n      echo \"The post-installation upgrade is now being invoked...\"\n      echo \"...to complete the Let's Encrypt setup.\"\n      sleep 3\n      echo \"This process will take a few minutes and will run in the background.\"\n      sleep 3\n    fi\n#     echo \"NOTE! The initial one-time login link will no longer work!\"\n#     echo \"Please wait until the procedure is complete.\"\n#     echo \"Then generate a new link once you see the instructions below.\"\n#     sleep 3\n    echo\n    if [ ! 
-e \"/root/.dont.upgrade.octopus.on.install.cnf\" ]; then\n      _NOW=$(date +%y%m%d-%H%M%S)\n      _NOW=${_NOW//[^0-9-]/}\n      _UP_OCTOPUS_LOG=\"${_LOG_UP_DIR}/$(basename \"$0\")-up-octopus-${_NOW}.log\"\n      rm -rf /data/disk/${_usEr}/.tmp/cache\n      if [ \"${_OCTOPUS_ONLY}\" = \"YES\" ]; then\n#         echo \"Launching octopus up-${_tRee} ${_usEr} in the background now...\"\n#         echo\n#         echo \"Please wait a few minutes before generating one-time login link!\"\n#         echo\n#         echo \"  su -s /bin/bash ${_usEr} -c \\\"drush @hm uli\\\"\"\n#         echo\n#         echo \"Enjoy!\"\n#         echo\n        nohup /opt/local/bin/octopus up-${_tRee} ${_usEr} force log noscreen >${_UP_OCTOPUS_LOG} 2>&1 &\n      else\n        echo \"Now waiting 5 minutes before running octopus upgrade, please wait...\"\n        sleep 300\n        echo \"Launching octopus up-${_tRee} ${_usEr} now...\"\n        /opt/local/bin/octopus up-${_tRee} ${_usEr} force log noscreen 2>&1 | tee ${_UP_OCTOPUS_LOG}\n        wait\n        echo \"Checking LE setup status...\"\n        if [ -e \"/data/disk/${_usEr}/log/domain.txt\" ]; then\n          _hmFront=$(cat /data/disk/${_usEr}/log/domain.txt 2>&1)\n          _hmFront=$(echo -n ${_hmFront} | tr -d \"\\n\" 2>&1)\n        fi\n        _leRoot=\"/data/disk/${_usEr}/tools/le\"\n        _leCrtPath=\"${_leRoot}/certs/${_hmFront}\"\n        _lePxyPath=\"/var/aegir/config/server_master/nginx/pre.d/z_${_hmFront}_ssl_proxy.conf\"\n        if [ -e \"${_leCrtPath}/fullchain.pem\" ] \\\n          || [ -e \"${_leCrtPath}/cert.pem\" ] \\\n          || [ -e \"${_lePxyPath}\" ]; then\n          _UP_OCTOPUS_AGAIN=NO\n        else\n          _UP_OCTOPUS_AGAIN=YES\n        fi\n        if [ \"${_UP_OCTOPUS_AGAIN}\" = \"YES\" ]; then\n          echo \"Octopus LE setup status: NOT READY!\"\n          echo \"Now waiting 5 minutes before trying again, please wait...\"\n          sleep 300\n          echo \"Launching another octopus upgrade to 
fix LE...\"\n          _UP_OCTOPUS_AGAIN_LOG=\"${_LOG_UP_DIR}/$(basename \"$0\")-up-again-octopus-${_NOW}.log\"\n          /opt/local/bin/octopus up-${_tRee} ${_usEr} force log noscreen 2>&1 | tee ${_UP_OCTOPUS_AGAIN_LOG}\n          wait\n          echo \"Checking LE setup status...\"\n          if [ -e \"/data/disk/${_usEr}/log/domain.txt\" ]; then\n            _hmFront=$(cat /data/disk/${_usEr}/log/domain.txt 2>&1)\n            _hmFront=$(echo -n ${_hmFront} | tr -d \"\\n\" 2>&1)\n          fi\n          _leRoot=\"/data/disk/${_usEr}/tools/le\"\n          _leCrtPath=\"${_leRoot}/certs/${_hmFront}\"\n          _lePxyPath=\"/var/aegir/config/server_master/nginx/pre.d/z_${_hmFront}_ssl_proxy.conf\"\n          if [ -e \"${_leCrtPath}/fullchain.pem\" ] \\\n            || [ -e \"${_leCrtPath}/cert.pem\" ] \\\n            || [ -e \"${_lePxyPath}\" ]; then\n            echo \"Octopus LE setup status: OK!\"\n            echo\n            echo \"You can generate a one-time login link now:\"\n            echo\n            echo \"  su -s /bin/bash ${_usEr} -c \\\"drush @hm uli\\\"\"\n            echo\n            echo \"Enjoy!\"\n          else\n            echo \"Octopus LE setup status: STILL NOT READY!\"\n            echo \"Please run another upgrade with:\"\n            echo \"  octopus up-${_tRee} ${_usEr} force\"\n            echo \"Bye!\"\n          fi\n        else\n          echo \"Octopus LE setup status: READY!\"\n          echo\n          echo \"You can generate a one-time login link now:\"\n          echo\n          echo \"  su -s /bin/bash ${_usEr} -c \\\"drush @hm uli\\\"\"\n          echo\n          echo \"Enjoy!\"\n        fi\n      fi\n    fi\n    sleep 1\n    exit 0\n  else\n    _init_finish\n  fi\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! -e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        export _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && export _USE_MIR=\"files.aegir.cc\"\n      else\n        export _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      export _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    export _USE_MIR=\"files.aegir.cc\"\n  fi\n  export _urlDev=\"http://${_USE_MIR}/dev\"\n  export _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_if_reinstall_curl() {\n  _isCurl=$(curl --version 2>&1)\n  if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] \\\n    || [[ \"${_isCurl}\" =~ \"libcurl.so.4\" ]] \\\n    || [ -z \"${_isCurl}\" ]; then\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      echo \"OOPS: cURL is broken!\"\n    fi\n    if [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n        && [ -e \"/etc/apt/apt.conf.d\" ]; then\n        echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n      fi\n      echo \"curl install\" | dpkg --set-selections 2> /dev/null\n      _apt_clean_update\n      # Check for libssl1.0-dev and remove conditionally\n      if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n        apt-get remove libssl1.0-dev -y --purge --auto-remove -qq 2>/dev/null\n      fi\n      apt-get autoremove -y 2> /dev/null\n      apt-get install libssl-dev ${_aptYesUnth} -qq 2> /dev/null\n      apt-get build-dep curl ${_aptYesUnth} 2> /dev/null\n      apt-get install curl --reinstall ${_aptYesUnth} -qq 2> /dev/null\n    fi\n    _isCurl=$(curl --version 2>&1)\n    if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n      echo \"ERRR: curl is still broken, please install it and debug manually\"\n      _clean_pid_exit _if_reinstall_curl\n    fi\n  fi\n}\n\n_check_dns_curl() {\n  _check_dns_settings\n  _find_fast_mirror_early\n  _if_reinstall_curl\n  _CURL_TEST=$(curl -L -k -s \\\n    --max-redirs 10 \\\n    --retry 3 \\\n    --retry-delay 10 \\\n    -I \"http://${_USE_MIR}\" 2> /dev/null)\n  if [[ ! \"${_CURL_TEST}\" =~ \"200 OK\" ]]; then\n    if [[ \"${_CURL_TEST}\" =~ \"unknown option was passed in to libcurl\" ]]; then\n      echo \"ERROR: cURL libs are out of sync! Re-installing again..\"\n      _if_reinstall_curl\n    else\n      echo \"ERROR: ${_USE_MIR} is not available, please try later\"\n      _clean_pid_exit _check_dns_curl_a\n    fi\n  fi\n}\n\n_if_fix_iptables_symlinks() {\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n  if [ -x \"/sbin/iptables\" ] && [ ! -e \"/usr/sbin/iptables\" ]; then\n    ln -sfn /sbin/iptables /usr/sbin/iptables\n  fi\n  if [ -x \"/usr/sbin/iptables\" ] && [ ! 
-e \"/sbin/iptables\" ]; then\n    ln -sfn /usr/sbin/iptables /sbin/iptables\n  fi\n  if [ -x \"/sbin/iptables-save\" ] && [ ! -e \"/usr/sbin/iptables-save\" ]; then\n    ln -sfn /sbin/iptables-save /usr/sbin/iptables-save\n  fi\n  if [ -x \"/usr/sbin/iptables-save\" ] && [ ! -e \"/sbin/iptables-save\" ]; then\n    ln -sfn /usr/sbin/iptables-save /sbin/iptables-save\n  fi\n  if [ -x \"/sbin/iptables-restore\" ] && [ ! -e \"/usr/sbin/iptables-restore\" ]; then\n    ln -sfn /sbin/iptables-restore /usr/sbin/iptables-restore\n  fi\n  if [ -x \"/usr/sbin/iptables-restore\" ] && [ ! -e \"/sbin/iptables-restore\" ]; then\n    ln -sfn /usr/sbin/iptables-restore /sbin/iptables-restore\n  fi\n  if [ -x \"/sbin/ip6tables\" ] && [ ! -e \"/usr/sbin/ip6tables\" ]; then\n    ln -sfn /sbin/ip6tables /usr/sbin/ip6tables\n  fi\n  if [ -x \"/usr/sbin/ip6tables\" ] && [ ! -e \"/sbin/ip6tables\" ]; then\n    ln -sfn /usr/sbin/ip6tables /sbin/ip6tables\n  fi\n  if [ -x \"/sbin/ip6tables-save\" ] && [ ! -e \"/usr/sbin/ip6tables-save\" ]; then\n    ln -sfn /sbin/ip6tables-save /usr/sbin/ip6tables-save\n  fi\n  if [ -x \"/usr/sbin/ip6tables-save\" ] && [ ! -e \"/sbin/ip6tables-save\" ]; then\n    ln -sfn /usr/sbin/ip6tables-save /sbin/ip6tables-save\n  fi\n  if [ -x \"/sbin/ip6tables-restore\" ] && [ ! -e \"/usr/sbin/ip6tables-restore\" ]; then\n    ln -sfn /sbin/ip6tables-restore /usr/sbin/ip6tables-restore\n  fi\n  if [ -x \"/usr/sbin/ip6tables-restore\" ] && [ ! -e \"/sbin/ip6tables-restore\" ]; then\n    ln -sfn /usr/sbin/ip6tables-restore /sbin/ip6tables-restore\n  fi\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n}\n\n_csf_check_fix() {\n  if [ -x \"/usr/sbin/csf\" ] \\\n    && [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ ! 
-x \"/etc/csf/csfpost.sh\" ]; then\n    echo \"\" > /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A OUTPUT -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    chmod 700 /etc/csf/csfpost.sh\n    sed -i \"s/.*aegir.*//g\" /etc/csf/csf.allow\n    csf -a 172.235.166.69  eu.files.aegir.cc &> /dev/null\n    csf -a 172.233.219.37  us.files.aegir.cc &> /dev/null\n    csf -a 172.105.168.103 ao.files.aegir.cc &> /dev/null\n    service lfd stop &> /dev/null\n    pkill -9 -f ConfigServer\n    killall sleep &> /dev/null\n    rm -f /etc/csf/csf.error\n    csf -x  &> /dev/null\n    service clean-boa-env start &> /dev/null\n    _if_fix_iptables_symlinks\n    ### csf -uf &> /dev/null\n    ### wait\n    _NFTABLES_TEST=$(iptables -V)\n    if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n      if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n        update-alternatives --set iptables /usr/sbin/iptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n        update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n        update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n        update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n      fi\n    fi\n    csf -e  &> /dev/null\n    sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n    sed -i \"/^$/d\" /etc/csf/csf.allow\n    if [ -e \"/var/log/daemon.log\" ]; then\n      _DHCP_LOG=\"/var/log/daemon.log\"\n    else\n      _DHCP_LOG=\"/var/log/syslog\"\n    fi\n    grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n      if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n        IFS='.' 
read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n        if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n          echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n        fi\n      fi\n    done\n    service lfd start &> /dev/null\n    ### Linux kernel TCP SACK CVEs mitigation\n    ### CVE-2019-11477 SACK Panic\n    ### CVE-2019-11478 SACK Slowness\n    ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n    if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n      _SACK_TEST=$(ip6tables --list | grep tcpmss)\n      if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n        sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n        iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    fi\n  fi\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! 
-z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n  if [ -n \"${_LOC_IP}\" ] && grep -qE \"${_LOC_IP}\\s\" /etc/hosts; then\n    cp -af /etc/hosts /etc/.was.hosts\n    sed -i \"s/^${_LOC_IP}.*//g\" /etc/hosts\n  fi\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    _find_correct_ip\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n    if [ \"${_cmNd}\" = \"in-octopus\" ] || [ \"${_cmNd}\" = \"in-oct\" ]; then\n      _system_check_ready\n    else\n      if [ ! -e \"/root/.force.reinstall.cnf\" ]; then\n        _system_check_clean\n      fi\n    fi\n    _csf_check_fix\n    chmod a+w /dev/null\n    if [ ! -e \"/dev/fd\" ]; then\n      if [ -e \"/proc/self/fd\" ]; then\n        rm -rf /dev/fd\n        ln -sfn /proc/self/fd /dev/fd\n      fi\n    fi\n    _EHU=NO\n    if ! grep -q \"127.0.0.1 localhost\" /etc/hosts; then\n      sed -i \"s/^127.0.0.1.*//g\" /etc/hosts\n      echo \"\" >> /etc/hosts\n      echo \"127.0.0.1 localhost\" >> /etc/hosts\n      _EHU=YES\n    fi\n    if grep -q \"files.aegir.cc\" /etc/hosts; then\n      sed -i \"s/.*files.aegir.cc.*//g\" /etc/hosts\n      _EHU=YES\n    fi\n    if grep -q \"github\" /etc/hosts; then\n      sed -i \"s/.*github.*//g\" /etc/hosts\n      _EHU=YES\n    fi\n    if [ \"${_EHU}\" = \"YES\" ]; then\n      echo >>/etc/hosts\n      sed -i \"/^$/d\" /etc/hosts\n    fi\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    _clean_pid_exit\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    _clean_pid_exit\n  fi\n  if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n  if [ -x \"/opt/local/bin/killer\" ]; then\n    sed -i \"s/.*killer.*//gi\" /etc/crontab &> /dev/null\n    echo \"*/1 *   * * *   root    bash /opt/local/bin/killer\" >> /etc/crontab\n  fi\n  if [ \"${_VMFAMILY}\" = \"VS\" ]; then\n    if [ ! -e \"/etc/apt/preferences.d/fuse\" ]; then\n      mkdir -p /etc/apt/preferences.d/\n      echo -e 'Package: fuse\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/fuse\n      _apt_clean_update\n    fi\n    if [ ! -e \"/etc/apt/preferences.d/udev\" ]; then\n      mkdir -p /etc/apt/preferences.d/\n      echo -e 'Package: udev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/udev\n      _apt_clean_update\n    fi\n    if [ ! -e \"/etc/apt/preferences.d/makedev\" ]; then\n      mkdir -p /etc/apt/preferences.d/\n      echo -e 'Package: makedev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/makedev\n      _apt_clean_update\n    fi\n    apt-get remove fuse -y --purge --auto-remove -qq 2> /dev/null\n    apt-get remove udev -y --purge --auto-remove -qq 2> /dev/null\n    apt-get remove makedev -y --purge --auto-remove -qq 2> /dev/null\n    if [ -e \"/sbin/hdparm\" ]; then\n      apt-get remove hdparm -y --purge --auto-remove -qq 2> /dev/null\n    fi\n    _REMOVE_LINKS=\"buagent \\\n                   checkroot.sh \\\n                   fancontrol \\\n                   halt \\\n                   hwclock.sh \\\n                   hwclockfirst.sh \\\n                   ifupdown \\\n                   ifupdown-clean \\\n                   kerneloops \\\n                   klogd \\\n                   mountall-bootclean.sh \\\n                   mountall.sh \\\n                   mountdevsubfs.sh \\\n       
            mountkernfs.sh \\\n                   mountnfs-bootclean.sh \\\n                   mountnfs.sh \\\n                   mountoverflowtmp \\\n                   mountvirtfs \\\n                   mtab.sh \\\n                   networking \\\n                   procps \\\n                   reboot \\\n                   sendsigs \\\n                   setserial \\\n                   svscan \\\n                   sysstat \\\n                   umountfs \\\n                   umountnfs.sh \\\n                   umountroot \\\n                   urandom \\\n                   vnstat\"\n    for _link in ${_REMOVE_LINKS}; do\n      if [ -e \"/etc/init.d/${_link}\" ]; then\n        update-rc.d -f ${_link} remove &> /dev/null\n        mv -f /etc/init.d/${_link} /var/backups/init.d.${_link}\n      fi\n    done\n    for s in cron dbus ssh; do\n      if [ -e \"/etc/init.d/${s}\" ]; then\n        sed -rn -e 's/^(# Default-Stop:).*$/\\1 0 1 6/' -e '/^### BEGIN INIT INFO/,/^### END INIT INFO/p' /etc/init.d/${s} > /etc/insserv/overrides/${s}\n      fi\n    done\n    /sbin/insserv -v -d &> /dev/null\n  fi\n}\n\n_display_version() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    _HOST_INFO=\"$(dmidecode -s system-manufacturer)\"\n    _VIRT_TOOL=\"$(which virt-what)\"\n    if [ -x \"${_VIRT_TOOL}\" ]; then\n      _VIRT_TEST=$(virt-what)\n      _VIRT_TEST=$(echo -n ${_VIRT_TEST} | fmt -su -w 2500 2>&1)\n      if [[ \"${_VIRT_TEST}\" =~ \"program not found\" ]]; then\n        echo \"ERROR: virt-what says: ${_VIRT_TEST}\"\n        echo \"ERROR: virt-what detection fails for unknown reason, exit\"\n        _clean_pid_exit _display_version\n      fi\n      if [ ! -e \"/root/.allow.any.virt.cnf\" ]; then\n        if [ -e \"/proc/self/status\" ]; then\n          _VS_GUEST_TEST=$(grep -E \"VxID:[[:space:]]*[0-9]{2,}$\" /proc/self/status 2> /dev/null)\n          _VS_HOST_TEST=$(grep -E \"VxID:[[:space:]]*0$\" /proc/self/status 2> /dev/null)\n        fi\n        if [ ! 
-z \"${_VS_HOST_TEST}\" ] || [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n          if [ -z \"${_VS_HOST_TEST}\" ] && [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n            _VIRT_IS=\"Linux VServer guest\"\n          else\n            if [ ! -z \"${_VS_HOST_TEST}\" ]; then\n              _VIRT_IS=\"Linux VServer host\"\n            else\n              _VIRT_IS=\"unknown / not a virtual machine\"\n            fi\n          fi\n        else\n          if [ -z \"${_VIRT_TEST}\" ] \\\n            || [ \"${_VIRT_TEST}\" = \"0\" ] \\\n            || [ \"${_VIRT_TEST}\" = \" \" ]; then\n            _VIRT_IS=\"unknown / not a virtual machine\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"xen-dom0\" ]]; then\n            _VIRT_IS=\"Xen privileged domain\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"linux_vserver-host\" ]]; then\n            _VIRT_IS=\"Linux VServer host\"\n          else\n            if [[ \"${_VIRT_TEST}\" =~ \"xen xen-hvm\" ]]; then\n              _VIRT_TEST=\"xen-hvm\"\n            elif [[ \"${_VIRT_TEST}\" =~ \"xen xen-domU\" ]]; then\n              _VIRT_TEST=\"xen-domU\"\n            elif [[ \"${_VIRT_TEST}\" =~ \"virtualbox kvm\" ]]; then\n              _VIRT_TEST=\"virtualbox\"\n            elif [[ \"${_VIRT_TEST}\" =~ \"hyperv qemu\" ]]; then\n              _VIRT_TEST=\"hyperv\"\n            elif [[ \"${_VIRT_TEST}\" =~ \"kvm aws\" ]]; then\n              _VIRT_TEST=\"kvm\"\n            elif [[ \"${_VIRT_TEST}\" =~ \"redhat kvm\" ]]; then\n              _VIRT_TEST=\"redhat-kvm\"\n            elif [[ \"${_VIRT_TEST}\" =~ \"openvz lxc\" ]]; then\n              _VIRT_TEST=\"openvz\"\n            fi\n            case \"${_VIRT_TEST}\" in\n              hyperv)      _VIRT_IS=\"Microsoft Hyper-V\" ;;\n              kvm)         _VIRT_IS=\"Linux KVM guest\" ;;\n              lxc)         _VIRT_IS=\"Linux Containers (LXC)\" ;;\n              openvz)      _VIRT_IS=\"OpenVZ Containers\" ;;\n              parallels)   _VIRT_IS=\"Parallels guest\" ;;\n              redhat-kvm) 
 _VIRT_IS=\"Red Hat KVM guest\" ;;\n              virtualbox)  _VIRT_IS=\"VirtualBox guest\" ;;\n              vmware)      _VIRT_IS=\"VMware ESXi guest\" ;;\n              xen-domU)    _VIRT_IS=\"Xen paravirtualized guest domain\" ;;\n              xen-hvm)     _VIRT_IS=\"Xen guest fully virtualized (HVM)\" ;;\n              xen)         _VIRT_IS=\"Xen guest\" ;;\n              *)  _VIRT_IS=\"${_VIRT_TEST} not supported\"\n              ;;\n            esac\n          fi\n        fi\n      else\n        if [ -z \"${_VIRT_TEST}\" ] \\\n          || [ \"${_VIRT_TEST}\" = \"0\" ] \\\n          || [ \"${_VIRT_TEST}\" = \" \" ]; then\n          _VIRT_IS=\"unknown / not a virtual machine\"\n        else\n          _VIRT_IS=\"${_VIRT_TEST} not supported\"\n        fi\n      fi\n    fi\n    _thiSys=\"$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)/$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\"\n    if [ -e \"/var/log/barracuda_log.txt\" ]; then\n      _crlBoav=`tail --lines=1 /var/log/barracuda_log.txt \\\n        | cut -d '/' -f4 \\\n        | sed \"s/ Barracuda //g\"`\n      _crlBoav=${_crlBoav//[^a-zA-Z0-9-.]/}\n    fi\n    echo \"${_crlBoav} on ${_thiSys} on ${_VIRT_IS} on ${_HOST_INFO}\"\n    exit 0\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n\n# Function to strip all non-digit characters from a PID value\n_sanitize_pid() {\n  echo \"$1\" | sed 's/[^0-9]//g'\n}\n\n# Function to display time in appropriate units\n_display_time() {\n  local _service=$1\n  local _pidfile=$2\n\n  if [ ! -f \"${_pidfile}\" ]; then\n    if [ \"${_service}\" = \"jenkins\" ] && [ ! -e \"/var/lib/jenkins\" ]; then\n      return\n    elif [ \"${_service}\" = \"solr4\" ] && [ ! -e \"/opt/solr4/solr.xml\" ]; then\n      return\n    elif [ \"${_service}\" = \"solr7\" ] && [ ! -e \"/var/solr7/logs/solr.log\" ]; then\n      return\n    elif [ \"${_service}\" = \"solr9\" ] && [ ! 
-e \"/var/solr9/logs/solr.log\" ]; then\n      return\n    elif [ \"${_service}\" = \"pure-ftpd\" ] && [ ! -e \"/usr/local/sbin/pure-ftpd\" ]; then\n      return\n    elif [ \"${_service}\" = \"php56-fpm\" ] && [ ! -e \"/opt/php56/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php70-fpm\" ] && [ ! -e \"/opt/php70/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php71-fpm\" ] && [ ! -e \"/opt/php71/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php72-fpm\" ] && [ ! -e \"/opt/php72/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php73-fpm\" ] && [ ! -e \"/opt/php73/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php74-fpm\" ] && [ ! -e \"/opt/php74/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php80-fpm\" ] && [ ! -e \"/opt/php80/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php81-fpm\" ] && [ ! -e \"/opt/php81/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php82-fpm\" ] && [ ! -e \"/opt/php82/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php83-fpm\" ] && [ ! -e \"/opt/php83/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php84-fpm\" ] && [ ! -e \"/opt/php84/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php85-fpm\" ] && [ ! -e \"/opt/php85/bin/php\" ]; then\n      return\n    else\n      _not_running_services+=(\"  _XSE ${_service} is not running (PID file not found).\")\n      return\n    fi\n  fi\n\n  # Get the PID from the PID file\n  _pid=$(cat \"${_pidfile}\")\n  _pid=$(_sanitize_pid \"${_pid}\" 2>&1)\n\n  # Check if the process is running\n  if ! ps -p \"${_pid}\" > /dev/null 2>&1; then\n    if [ \"${_service}\" = \"jenkins\" ] && [ ! -e \"/var/lib/jenkins\" ]; then\n      return\n    elif [ \"${_service}\" = \"solr4\" ] && [ ! -e \"/opt/solr4/solr.xml\" ]; then\n      return\n    elif [ \"${_service}\" = \"solr7\" ] && [ ! 
-e \"/var/solr7/logs/solr.log\" ]; then\n      return\n    elif [ \"${_service}\" = \"solr9\" ] && [ ! -e \"/var/solr9/logs/solr.log\" ]; then\n      return\n    elif [ \"${_service}\" = \"pure-ftpd\" ] && [ ! -e \"/usr/local/sbin/pure-ftpd\" ]; then\n      return\n    elif [ \"${_service}\" = \"php56-fpm\" ] && [ ! -e \"/opt/php56/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php70-fpm\" ] && [ ! -e \"/opt/php70/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php71-fpm\" ] && [ ! -e \"/opt/php71/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php72-fpm\" ] && [ ! -e \"/opt/php72/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php73-fpm\" ] && [ ! -e \"/opt/php73/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php74-fpm\" ] && [ ! -e \"/opt/php74/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php80-fpm\" ] && [ ! -e \"/opt/php80/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php81-fpm\" ] && [ ! -e \"/opt/php81/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php82-fpm\" ] && [ ! -e \"/opt/php82/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php83-fpm\" ] && [ ! -e \"/opt/php83/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php84-fpm\" ] && [ ! -e \"/opt/php84/bin/php\" ]; then\n      return\n    elif [ \"${_service}\" = \"php85-fpm\" ] && [ ! 
-e \"/opt/php85/bin/php\" ]; then\n      return\n    else\n      _not_running_services+=(\"  _XSE ${_service} is not running (stale PID file).\")\n      return\n    fi\n  fi\n\n  # Get the current time and the PID file modification time in seconds since epoch\n  _current_time=$(date +%s)\n  _file_time=$(stat -c %Y \"${_pidfile}\")\n\n  # Calculate the time difference in seconds\n  _elapsed=$((_current_time - _file_time))\n\n  if [ \"${_elapsed}\" -lt 60 ]; then\n    _running_services+=(\"  _XSE ${_service} is running for ${_elapsed} seconds.\")\n  elif [ \"${_elapsed}\" -lt 3600 ]; then\n    _minutes=$((_elapsed / 60))\n    _running_services+=(\"  _XSE ${_service} is running for ${_minutes} minutes.\")\n  elif [ \"${_elapsed}\" -lt 172800 ]; then\n    _hours=$((_elapsed / 3600))\n    _running_services+=(\"  _XSE ${_service} is running for ${_hours} hours.\")\n  else\n    _days=$((_elapsed / 86400))\n    _running_services+=(\"  _XSE ${_service} is running for ${_days} days.\")\n  fi\n}\n\n# Function to convert KB to GB (1 GB = 1024 * 1024 KB)\n_convert_kb_to_gb() {\n  echo \"scale=2; $1 / 1024 / 1024\" | bc\n}\n\n# Function to calculate RAM usage percentage as an integer\n_calculate_ram_usage_percent() {\n  _total_ram_kb=$1\n  _available_ram_kb=$2\n  _used_ram_kb=$((_total_ram_kb - _available_ram_kb))\n\n  # Using integer division to get a whole number percentage\n  echo $(( (_used_ram_kb * 100) / _total_ram_kb ))\n}\n\n# Function to check and display system info\n_check_system_ram() {\n  # Get the total and available RAM in KB\n  _total_ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')\n  _available_ram_kb=$(grep MemAvailable /proc/meminfo | awk '{print $2}')\n\n  # Convert total RAM to GB\n  _total_ram_gb=$(_convert_kb_to_gb ${_total_ram_kb})\n\n  # Calculate RAM usage percentage\n  _ram_usage_percent=$(_calculate_ram_usage_percent ${_total_ram_kb} ${_available_ram_kb})\n}\n\n_display_info() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    _HOST_INFO=\"$(dmidecode 
-s system-manufacturer)\"\n    _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n    _CPU_INFO=${_CPU_INFO//[^0-9]/}\n    _NPROC_TEST=\"$(which nproc)\"\n    if [ -z \"${_NPROC_TEST}\" ]; then\n      _CPU_NR=\"${_CPU_INFO}\"\n    else\n      _CPU_NR=$(nproc 2>&1)\n    fi\n    _CPU_NR=${_CPU_NR//[^0-9]/}\n    if [ ! -z \"${_CPU_NR}\" ] \\\n      && [ ! -z \"${_CPU_INFO}\" ] \\\n      && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n      && [ \"${_CPU_INFO}\" -gt 0 ]; then\n      _CPU_NR=\"${_CPU_INFO}\"\n    fi\n    _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n      NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n      NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n    if [ -x \"/usr/sbin/nginx\" ]; then\n      _NGX=$(/usr/sbin/nginx -v 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | awk '{ print $1}' \\\n        | cut -d\"/\" -f2 \\\n        | awk '{ print $1}' 2>&1)\n      if [ -z \"${_NGX}\" ]; then\n        _NGX=$(/usr/sbin/nginx -v 2>&1 \\\n          | tr -d \"\\n\" \\\n          | cut -d\" \" -f3 \\\n          | awk '{ print $1}' \\\n          | cut -d\"/\" -f2 \\\n          | awk '{ print $1}' 2>&1)\n      fi\n    fi\n    if [ -x \"/usr/bin/php-cli\" ]; then\n      _PHP=$(/usr/bin/php-cli -v | grep 'PHP 7' \\\n        | cut -d: -f1 | awk '{ print $2}' 2>&1)\n      if [ -z \"${_PHP}\" ]; then\n        _PHP=$(/usr/bin/php-cli -v | grep 'PHP 5' \\\n          | cut -d: -f1 | awk '{ print $2}' 2>&1)\n      fi\n      if [ -z \"${_PHP}\" ]; then\n        _PHP=$(/usr/bin/php-cli -v | grep 'PHP 8' \\\n          | cut -d: -f1 | awk '{ print $2}' 2>&1)\n      fi\n    fi\n    _DBV=$(mysql -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f6 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    if [ \"${_DBV}\" = \"Linux\" ]; then\n      _DBV=$(mysql -V 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | 
awk '{ print $1}' \\\n        | cut -d\"-\" -f1 \\\n        | awk '{ print $1}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n    fi\n    if [ -x \"/usr/bin/proxysql\" ]; then\n      _PXY=$(proxysql --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\"-\" -f1 \\\n        | awk '{ print $3}' 2>&1)\n    fi\n    if [ -x \"/usr/sbin/csf\" ]; then\n      _CSF=$(/usr/sbin/csf --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | tr -d \"v\" \\\n        | cut -d\"-\" -f1 \\\n        | awk '{ print $2}' 2>&1)\n    fi\n    if [ -x \"/usr/bin/valkey-server\" ]; then\n      _VKS=$(valkey-server -v 2>&1 \\\n        | tr -d \"\\n\" \\\n        | tr -d \"v=\" \\\n        | cut -d\" \" -f2 \\\n        | awk '{ print $1}' 2>&1)\n      if [ \"${_VKS}\" = \"serer\" ]; then\n        _VKS=$(valkey-server -v 2>&1 \\\n          | tr -d \"\\n\" \\\n          | tr -d \"v=\" \\\n          | cut -d\" \" -f3 \\\n          | awk '{ print $1}' 2>&1)\n      fi\n    fi\n    _isLshell=\"$(which lshell)\"\n    _LSH=$(${_isLshell} --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\"-\" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    if [ -x \"/usr/bin/python3\" ]; then\n      _PY3=$(/usr/bin/python3 --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f2 \\\n        | awk '{ print $1}' 2>&1)\n    fi\n    if [ -x \"/usr/local/bin/python3\" ]; then\n      _PX3=$(/usr/local/bin/python3 --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f2 \\\n        | awk '{ print $1}' 2>&1)\n    fi\n    if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n      && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n      _SSL_BINARY=/usr/local/ssl3/bin/openssl\n    else\n      _SSL_BINARY=/usr/local/ssl/bin/openssl\n    fi\n    _SSL=$(${_SSL_BINARY} version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    _CRL=$(curl --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n 
   _SHD=$(ssh -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | tr -d \",\" \\\n      | cut -d\"_\" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    _SSH=$(ssh -V 2>&1)\n    _NRC=$(newrelic-daemon --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f5 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' 2>&1)\n    [ -e \"/opt/site24x7/monagent/version.txt\" ] && _MON=$(cat /opt/site24x7/monagent/version.txt 2>&1)\n    [ -n \"${_MON}\" ] && _MON=$(echo -n ${_MON} | tr -d \"\\n\" 2>&1)\n    [ \"${_mOde}\" = \"clear\" ] && clear\n    echo\n    _thiSys=\"$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)/$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\"\n    if [ -e \"/var/log/barracuda_log.txt\" ]; then\n      _AUT=$(grep _AUTOPILOT ${_barCnf} 2>&1)\n      _BIL=$(grep _BACKEND_ITEMS_LIST ${_barCnf} 2>&1)\n      _CCC=$(grep _CUSTOM_CONFIG_CSF ${_barCnf} 2>&1)\n      _CCL=$(grep _CUSTOM_CONFIG_LSHELL ${_barCnf} 2>&1)\n      _CCV=$(grep _CUSTOM_CONFIG_VALKEY ${_barCnf} 2>&1)\n      _CCR=$(grep _CUSTOM_CONFIG_REDIS ${_barCnf} 2>&1)\n      _CCS=$(grep _CUSTOM_CONFIG_SQL ${_barCnf} 2>&1)\n      _DBG=$(grep _DEBUG_MODE ${_barCnf} 2>&1)\n      _DBS=$(grep _DB_SERIES ${_barCnf} 2>&1)\n      _DNS=$(grep _DNS_SETUP_TEST ${_barCnf} 2>&1)\n      _EXP=$(grep _EXTRA_PACKAGES ${_barCnf} 2>&1)\n      _MFS=$(grep _MAGICK_FROM_SOURCES ${_barCnf} 2>&1)\n      _MFX=$(grep _MODULES_FIX ${_barCnf} 2>&1)\n      _PCV=$(grep _PHP_CLI_VERSION ${_barCnf} 2>&1)\n      _PEC=$(grep _PHP_EXTRA_CONF ${_barCnf} 2>&1)\n      _PFD=$(grep _PHP_FPM_DENY ${_barCnf} 2>&1)\n      _PFV=$(grep _PHP_FPM_VERSION ${_barCnf} 2>&1)\n      _PFX=$(grep _PERMISSIONS_FIX ${_barCnf} 2>&1)\n      _PMI=$(grep _PHP_MULTI_INSTALL ${_barCnf} 2>&1)\n      _PSI=$(grep _PHP_SINGLE_INSTALL ${_barCnf} 2>&1)\n      _SAR=$(grep _SSH_ARMOUR ${_barCnf} 2>&1)\n      _SBP=$(grep _STRICT_BIN_PERMISSIONS ${_barCnf} 2>&1)\n      _SFS=$(grep _SSH_FROM_SOURCES ${_barCnf} 
2>&1)\n      _SKY=$(grep _SKYNET_MODE ${_barCnf} 2>&1)\n      _STP=$(grep _STRONG_PASSWORDS ${_barCnf} 2>&1)\n      _SUO=$(grep _SYSTEM_UP_ONLY ${_barCnf} 2>&1)\n      _UMY=$(grep _USE_MYSQLTUNER ${_barCnf} 2>&1)\n      _XTR=$(grep _XTRAS_LIST ${_barCnf} 2>&1)\n      _crlBoav=`tail --lines=1 /var/log/barracuda_log.txt \\\n        | cut -d '/' -f4 \\\n        | sed \"s/ Barracuda //g\"`\n      _crlBoav=${_crlBoav//[^a-zA-Z0-9-.]/}\n      echo \"Ægir ${_crlBoav} on ${_thiSys}\"\n    else\n      _crlBoav=\"Vanilla System\"\n      echo \"Ready for BOA ${_crlBoav} on ${_thiSys}\"\n    fi\n\n    [ \"${_mOde}\" = \"report\" ] && echo \"Host ${_hName} check on $(date)\"\n    [ \"${_mOde}\" = \"report\" ] && echo \"Host uptime $(uptime)\"\n    [ ! -z \"${_HOST_INFO}\" ] && echo \"  HOST ${_HOST_INFO}\"\n    [ ! -z \"${_VIRT_IS}\" ] && echo \"  VPS ${_VIRT_IS}\"\n    [ -z \"${_VIRT_IS}\" ] && echo \"  VPS unknown / not a virtual machine\"\n\n    if [ -e \"/root/.allow.any.virt.cnf\" ]; then\n      echo\n      echo \"  !!! WARNING !!! /root/.allow.any.virt.cnf detected...\"\n      echo\n      echo \"  !!! YOU ARE RUNNING EITHER ON BARE METAL\"\n      echo \"  !!! OR NOT SUPPORTED VIRTUALIZATION SYSTEM.\"\n      if [ -n \"${_VIRT_IS}\" ]; then\n         echo\n         echo \"  !!! ${_VIRT_TEST}\"\n      fi\n      echo\n      echo \"  !!! Please be aware that it may not work at all,\"\n      echo \"  !!! or you can experience errors breaking BOA.\"\n      echo\n      echo \"  !!! BOA IS NOT DESIGNED TO RUN DIRECTLY ON A BARE METAL.\"\n      echo \"  !!! IT IS VERY DANGEROUS AND THUS EXTREMELY BAD IDEA!\"\n      echo\n      echo \"  !!! WARNING !!! You are obviously free to experiment...\"\n      echo \"  !!! WARNING !!! But don't expect *ANY* support.\"\n      echo\n    fi\n\n    [ ! -z \"${_SKY}\" ] && echo \"  SKY ${_SKY}\"\n    [ ! -z \"${_CSF}\" ] && echo \"  CSF ${_CSF}\"\n    [ ! -z \"${_NGX}\" ] && echo \"  NGX ${_NGX}\"\n    [ ! 
-z \"${_PHP}\" ] && echo \"  PHP ${_PHP}\"\n    [ ! -z \"${_DBV}\" ] && echo \"  DBV ${_DBV}\"\n    [ ! -z \"${_VKS}\" ] && echo \"  RDS ${_VKS}\"\n    [ ! -z \"${_PXY}\" ] && echo \"  PXY ${_PXY}\"\n    [ ! -z \"${_SHD}\" ] && echo \"  SHD ${_SHD}\"\n    [ ! -z \"${_SSH}\" ] && echo \"  SSH ${_SSH}\"\n    [ ! -z \"${_LSH}\" ] && echo \"  LSH ${_LSH}\"\n    [ ! -z \"${_PY3}\" ] && echo \"  PY3 ${_PY3}\"\n    [ ! -z \"${_PX3}\" ] && echo \"  PX3 ${_PX3}\"\n    [ ! -z \"${_SSL}\" ] && echo \"  SSL ${_SSL}\"\n    [ ! -z \"${_CRL}\" ] && echo \"  CRL ${_CRL}\"\n    if [ -e \"/root/.use.curl.from.packages.cnf\" ]; then\n      echo \"  CRL_From_Packages YES\"\n    else\n      echo \"  CRL_From_Packages NO\"\n    fi\n    if [ -e \"/usr/local/bin/curl\" ]; then\n      echo \"  CRL_Local_Bin YES\"\n    else\n      echo \"  CRL_Local_Bin NO\"\n    fi\n\n    if [ \"${_mOde}\" != \"report\" ]; then\n\n      [ ! -z \"${_AUT}\" ] && echo \"  _AUT ${_AUT}\"\n      [ ! -z \"${_BIL}\" ] && echo \"  _BIL ${_BIL}\"\n      [ ! -z \"${_CCC}\" ] && echo \"  _CCC ${_CCC}\"\n      [ ! -z \"${_CCL}\" ] && echo \"  _CCL ${_CCL}\"\n      [ ! -z \"${_CCV}\" ] && echo \"  _CCV ${_CCV}\"\n      [ ! -z \"${_CCR}\" ] && echo \"  _CCR ${_CCR}\"\n      [ ! -z \"${_CCS}\" ] && echo \"  _CCS ${_CCS}\"\n      [ ! -z \"${_DBG}\" ] && echo \"  _DBG ${_DBG}\"\n      [ ! -z \"${_DBS}\" ] && echo \"  _DBS ${_DBS}\"\n      [ ! -z \"${_DNS}\" ] && echo \"  _DNS ${_DNS}\"\n      [ ! -z \"${_EXP}\" ] && echo \"  _EXP ${_EXP}\"\n      [ ! -z \"${_MFS}\" ] && echo \"  _MFS ${_MFS}\"\n      [ ! -z \"${_MFX}\" ] && echo \"  _MFX ${_MFX}\"\n      [ ! -z \"${_PCV}\" ] && echo \"  _PCV ${_PCV}\"\n      [ ! -z \"${_PEC}\" ] && echo \"  _PEC ${_PEC}\"\n      [ ! -z \"${_PFD}\" ] && echo \"  _PFD ${_PFD}\"\n      [ ! -z \"${_PFV}\" ] && echo \"  _PFV ${_PFV}\"\n      [ ! -z \"${_PFX}\" ] && echo \"  _PFX ${_PFX}\"\n      [ ! -z \"${_PMI}\" ] && echo \"  _PMI ${_PMI}\"\n      [ ! 
-z \"${_PSI}\" ] && echo \"  _PSI ${_PSI}\"\n      [ ! -z \"${_SAR}\" ] && echo \"  _SAR ${_SAR}\"\n      [ ! -z \"${_SBP}\" ] && echo \"  _SBP ${_SBP}\"\n      [ ! -z \"${_SFS}\" ] && echo \"  _SFS ${_SFS}\"\n      [ ! -z \"${_STP}\" ] && echo \"  _STP ${_STP}\"\n      [ ! -z \"${_SUO}\" ] && echo \"  _SUO ${_SUO}\"\n      [ ! -z \"${_UMY}\" ] && echo \"  _UMY ${_UMY}\"\n      [ ! -z \"${_XTR}\" ] && echo \"  _XTR ${_XTR}\"\n      [ ! -z \"${_NRC}\" ] && echo \"  _NRC ${_NRC}\"\n      [ ! -z \"${_MON}\" ] && echo \"  _MON ${_MON}\"\n\n      # Get the currently used GRUB config affecting networking\n      _GRUB_FILE=\"/etc/default/grub\"\n      _XSG=$(grep -E \"^GRUB_CMDLINE_LINUX=\" \"${_GRUB_FILE}\")\n\n      # Get the currently running kernel version\n      _XSK=$(uname -r)\n\n      # Get the kernel version that will be activated after reboot (latest installed kernel)\n      _XSN=$(ls -1 /boot/vmlinuz-* | sort -V | tail -n 1 | sed 's/\\/boot\\/vmlinuz-//')\n\n      _XSU=$(uptime -p 2>&1)\n      _XSL=$(uptime | awk -F'load average:' '{ print $2 }' | sed 's/^ //' 2>&1)\n\n      _check_system_ram\n\n      echo\n      echo \"  _XSY System Uptime/Load/Kernel/CPU/Memory/Disk Report\"\n      echo\n      echo \"  _XSU System Uptime: ${_XSU}\"\n      echo \"  _XSL System Load: ${_XSL}\"\n      echo \"  _XSK Current Kernel Version: ${_XSK}\"\n      # Only display the next kernel version if it's different from the current one\n      if [ \"${_XSK}\" != \"${_XSN}\" ]; then\n        echo \"  _XSN Next Kernel Version (after reboot): ${_XSN}\"\n      fi\n      echo \"  _XSG GRUB: ${_XSG}\"\n      echo \"  _CPU Number: ${_CPU_NR}\"\n      echo \"  _RAM Total: ${_total_ram_gb} GB\"\n      echo \"  _RAM Usage: ${_ram_usage_percent}%\"\n      echo \"  _DSK Usage for relevant partitions:\"\n      # Loop through all partitions and display their usage, excluding tmpfs, udev, and empty partitions\n      df -h | awk 'NR > 1 && $1 
!~ /tmpfs|udev/ && $5 != \"0%\" {print \"  _DSK \" $1 \": \" $5 \" used (\" $6 \")\"}'\n      echo\n\n      echo \"  _XSE Key Services Uptime Report\"\n      echo\n      # Arrays to store results\n      _running_services=()\n      _not_running_services=()\n      # Indexed array to hold the services in the desired order\n      _services_order=(\n        \"sshd\"\n        \"crond\"\n        \"postfix\"\n        \"nginx\"\n        \"php56-fpm\"\n        \"php70-fpm\"\n        \"php71-fpm\"\n        \"php72-fpm\"\n        \"php73-fpm\"\n        \"php74-fpm\"\n        \"php80-fpm\"\n        \"php81-fpm\"\n        \"php82-fpm\"\n        \"php83-fpm\"\n        \"php84-fpm\"\n        \"php85-fpm\"\n        \"mysql\"\n        \"valkey\"\n        \"jenkins\"\n        \"solr9\"\n        \"solr7\"\n        \"solr4\"\n        \"pure-ftpd\"\n        \"lfd\"\n        \"rsyslogd\"\n        \"unbound\"\n        \"vnstat\"\n      )\n      # List of services and their respective PID files\n      declare -A _services=(\n        [\"sshd\"]=\"/run/sshd.pid\"\n        [\"crond\"]=\"/run/crond.pid\"\n        [\"postfix\"]=\"/var/spool/postfix/pid/master.pid\"\n        [\"nginx\"]=\"/run/nginx.pid\"\n        [\"php56-fpm\"]=\"/run/php56-fpm.pid\"\n        [\"php70-fpm\"]=\"/run/php70-fpm.pid\"\n        [\"php71-fpm\"]=\"/run/php71-fpm.pid\"\n        [\"php72-fpm\"]=\"/run/php72-fpm.pid\"\n        [\"php73-fpm\"]=\"/run/php73-fpm.pid\"\n        [\"php74-fpm\"]=\"/run/php74-fpm.pid\"\n        [\"php80-fpm\"]=\"/run/php80-fpm.pid\"\n        [\"php81-fpm\"]=\"/run/php81-fpm.pid\"\n        [\"php82-fpm\"]=\"/run/php82-fpm.pid\"\n        [\"php83-fpm\"]=\"/run/php83-fpm.pid\"\n        [\"php84-fpm\"]=\"/run/php84-fpm.pid\"\n        [\"php85-fpm\"]=\"/run/php85-fpm.pid\"\n        [\"mysql\"]=\"/run/mysqld/mysqld.pid\"\n        [\"valkey\"]=\"/run/valkey/valkey.pid\"\n        [\"jenkins\"]=\"/run/jenkins/jenkins.pid\"\n        [\"solr9\"]=\"/var/solr9/solr-9099.pid\"\n        
[\"solr7\"]=\"/var/solr7/solr-9077.pid\"\n        [\"solr4\"]=\"/run/jetty9.pid\"\n        [\"pure-ftpd\"]=\"/run/pure-ftpd.pid\"\n        [\"lfd\"]=\"/run/lfd.pid\"\n        [\"rsyslogd\"]=\"/run/rsyslogd.pid\"\n        [\"unbound\"]=\"/run/unbound/unbound.pid\"\n        [\"vnstat\"]=\"/run/vnstat/vnstat.pid\"\n      )\n      # Loop through the services in the correct order\n      for _service in \"${_services_order[@]}\"; do\n        _display_time \"${_service}\" \"${_services[${_service}]}\"\n      done\n      # Conditionally display headers only if there are both running and not running services\n      if [ \"${#_running_services[@]}\" -gt 0 ]; then\n        if [ \"${#_not_running_services[@]}\" -gt 0 ]; then\n          echo \"  _XSE Running services:\"\n          echo\n        fi\n        for _service in \"${_running_services[@]}\"; do\n          echo \"${_service}\"\n        done\n      fi\n      if [ \"${#_not_running_services[@]}\" -gt 0 ]; then\n        echo\n        echo \"  _XSE Not running services:\"\n        echo\n        for _service in \"${_not_running_services[@]}\"; do\n          echo \"${_service}\"\n        done\n      fi\n      echo\n    fi\n\n    if [ -e \"/var/log/barracuda_log.txt\" ]; then\n      if [ \"${_mOde}\" = \"report\" ]; then\n        _display=\"system\"\n        if [ \"${_eXtr}\" = \"octopus\" ] && [ -d \"/data/u\" ]; then\n          echo\n          _OCT_NR=$(ls /data/disk | wc -l)\n          _OCT_NR=$(( _OCT_NR - 1 ))\n          echo \"  All Octopus Instances: ${_OCT_NR}\"\n          for _OCT in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n            _SITES_NR=0\n            _THIS_U=$(echo ${_OCT} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n            _OCT_USG=$(grep \",${_THIS_U}\" /var/log/boa/usage/usage-latest-silent.log)\n            if [ -n \"${_OCT_USG}\" ]; then\n              echo\n              echo \"  Usage Report for ${_OCT}\"\n              echo \"  ${_OCT_USG}\"\n            fi\n            if [ -e 
\"${_OCT}/config/server_master/nginx/vhost.d\" ]; then\n              _SITES_NR=$(ls ${_OCT}/config/server_master/nginx/vhost.d | wc -l)\n              if [ \"${_SITES_NR}\" -gt 0 ]; then\n                echo \"  Number of Hosted Sites in ${_OCT} is ${_SITES_NR}\"\n              else\n                _OCT_NR=$(( _OCT_NR - 1 ))\n              fi\n              _PHP_CLI=$(grep _PHP_CLI_VERSION /root/.${_THIS_U}.octopus.cnf)\n              _PHP_FPM=$(grep _PHP_FPM_VERSION /root/.${_THIS_U}.octopus.cnf)\n              echo \"  ${_PHP_CLI}\"\n              echo \"  ${_PHP_FPM}\"\n              echo\n              _OCT_ERR=$(grep \"Incompatibility detected in codebase at ${_OCT}/\" /var/log/boa/core/incompatible-*.log)\n              if [ -n \"${_OCT_ERR}\" ]; then\n                echo \"  ${_THIS_U} Percona 8.0 Incompatibility Report\"\n                echo ${_OCT_ERR}\n              fi\n            fi\n          done\n          echo\n          echo \"  All Active Octopus Instances: ${_OCT_NR}\"\n          _ALL_SITES_NR=$(ls /data/disk/*/config/server_master/nginx/vhost.d | wc -l)\n          _ALL_SITES_NR=$(( _ALL_SITES_NR - _OCT_NR ))\n          echo \"  Number of All Hosted Sites: ${_ALL_SITES_NR}\"\n          echo\n        fi\n        if [ \"${_eXtr}\" = \"backups\" ]; then\n          tail --lines=19 /var/log/boa/*.archive.log\n          echo\n        fi\n        if [ \"${_eXtr}\" = \"both\" ]; then\n          grep _PHP_CLI_VERSION /root/.*.octopus.cnf\n          echo\n          grep _PHP_FPM_VERSION /root/.*.octopus.cnf\n          echo\n          tail --lines=19 /var/log/boa/*.archive.log\n          echo\n        fi\n      elif [ \"${_mOde}\" = \"full\" ]; then\n        cat /var/log/barracuda_log.txt\n      elif [ \"${_mOde}\" = \"more\" ]; then\n        _display=\"default\"\n        cat /var/log/barracuda_log.txt | grep $(date +'%Y' -d 'last year')\n        cat /var/log/barracuda_log.txt | grep $(date +%Y)\n      else\n        _display=\"default\"\n        tail 
--lines=5 /var/log/barracuda_log.txt\n      fi\n    fi\n    if [ \"${_display}\" = \"default\" ]; then\n      echo\n      echo \"Please link this information in your submission,\"\n      echo \"but only in the form of a Gist snippet, not inline,\"\n      echo \"along with your hosting provider name\"\n      echo \"in the BOA issue queue on GitHub.\"\n      echo\n    fi\n    exit 0\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n\n_octopus_cleanup() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    if [ \"${_aCtn}\" = \"detect\" ]; then\n      echo\n      echo \"Accounts marked for cleanup or active\"\n      echo\n      rm -f /root/.cleanup.detect.txt\n      for _Usr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`\n      do\n        if [ -d \"${_Usr}/config/server_master/nginx/vhost.d\" ] \\\n          && [ -e \"${_Usr}/log/cores.txt\" ] \\\n          && [ -e \"${_Usr}/log/CANCELLED\" ]; then\n          _sIze=$(du -s -h ${_Usr} | cut -d' ' -f1 | awk '{ print $1}' 2>/dev/null)\n          echo \"CANCELLED ${_Usr} ${_sIze}\"\n          _iUsr=$(echo ${_Usr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n          echo \"CANCELLED ${_iUsr}\" >> /root/.cleanup.detect.txt\n        else\n          if [ -d \"${_Usr}/config/server_master/nginx/vhost.d\" ] \\\n            && [ -e \"${_Usr}/log/cores.txt\" ]; then\n            echo \"ACTIVE ${_Usr}\"\n          fi\n        fi\n      done\n    elif [ \"${_aCtn}\" = \"purge\" ] \\\n      && [ \"${_usEr}\" = \"batch\" ] \\\n      && [ -e \"/root/.cleanup.detect.txt\" ]; then\n        IFS=$'\\12'\n        for p in $(cat /root/.cleanup.detect.txt 2>&1);do\n          _usr_purge=`echo $p | cut -d' ' -f2 | awk '{ print $1}'`\n          boa cleanup purge ${_usr_purge}\n          wait\n        done\n    elif [ \"${_aCtn}\" = \"purge\" ] \\\n      && [ ! 
-z \"${_usEr}\" ] \\\n      && [ \"${_usEr}\" != \"batch\" ] \\\n      && [ -e \"/root/.cleanup.detect.txt\" ]; then\n      _zombie_dir=\"/var/backups/zombie/purged/${_usEr}/home/\"\n      _nginx_inc=\"/var/aegir/config/server_master/nginx/platform.d\"\n      _nginx_ssl=\"/var/aegir/config/server_master/nginx/pre.d\"\n      _nginx_prx=$(ls ${_nginx_ssl}/z_${_usEr}.*_ssl_proxy.conf 2>&1)\n      _sites_nr=$(ls /data/disk/${_usEr}/config/server_master/nginx/vhost.d | wc -l)\n      if [ -d \"/data/disk/${_usEr}/config/server_master/nginx/vhost.d\" ] \\\n        && [ \"${_sites_nr}\" = \"0\" ] \\\n        && [ -e \"/data/disk/${_usEr}/log/cores.txt\" ] \\\n        && [ -e \"/data/disk/${_usEr}/log/CANCELLED\" ]; then\n        echo\n        echo \"Account to purge: ${_usEr}\"\n        echo\n        mkdir -p ${_zombie_dir}\n        mv -f \"${_nginx_inc}/${_usEr}.conf\" ${_zombie_dir}\n        mv -f \"${_nginx_prx}\" ${_zombie_dir}\n        echo \"${_usEr}:/data/disk/${_usEr}\" > ${_zombie_dir}.purge.list\n        echo \"${_usEr}.ftp:/home/${_usEr}.ftp\" >> ${_zombie_dir}.purge.list\n        echo \"${_usEr}.web:/home/${_usEr}.web\" >> ${_zombie_dir}.purge.list\n        _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n        for e in ${_PHP_V}; do\n          if [ -e \"/opt/php${e}/etc/pool.d/${_usEr}.${e}.conf\" ] \\\n            || [ -e \"/home/${_usEr}.${e}.web\" ]; then\n            rm -f /opt/php${e}/etc/pool.d/${_usEr}.${e}.conf\n            echo \"${_usEr}.${e}.web:/home/${_usEr}.${e}.web\" >> ${_zombie_dir}.purge.list\n          fi\n          if [ -e \"/opt/php${e}/etc/pool.d/${_usEr}.conf\" ] \\\n            || [ -e \"/home/${_usEr}.web\" ]; then\n            rm -f /opt/php${e}/etc/pool.d/${_usEr}.conf\n            echo \"${_usEr}.web:/home/${_usEr}.web\" >> ${_zombie_dir}.purge.list\n          fi\n        done\n        pkill -9 -f gpg-agent\n        IFS=$'\\12'\n          for p in $(cat ${_zombie_dir}.purge.list | grep \"${_usEr}\" 2>&1);do\n            
_usr_name=`echo $p | cut -d':' -f1 | awk '{ print $1}'`\n            _usr_home=`echo $p | cut -d':' -f2 | awk '{ print $1}'`\n            echo disabling chattr for ${_usr_name} in ${_usr_home}..\n            chattr -i -R ${_usr_home}/ &> /dev/null\n            if [ -d \"${_usr_home}/.drush/\" ]; then\n              chattr -i ${_usr_home}/.drush/\n            fi\n            rm -rf ${_usr_home}/.gnupg\n            echo purging ${_usr_name}..\n            if [ -d \"${_usr_home}/static\" ]; then\n              rm -rf ${_usr_home}/backups\n              rm -rf ${_usr_home}/distro\n              rm -rf ${_usr_home}/src\n              rm -rf ${_usr_home}/static\n              rm -rf ${_usr_home}/undo\n            fi\n            deluser \\\n              --remove-home \\\n              --backup-to /var/backups/zombie/purged/${_usEr}/home/ ${_usr_name} &> /dev/null\n          done;\n        echo\n        echo \"Cleanup complete\"\n      else\n        echo\n        echo \"OOPS.. ${_usEr} is either active, with existing vhosts or does not exist\"\n      fi\n    fi\n    echo\n    exit 0\n  fi\n}\n\n_if_start_screen() {\n  if [ -e \"/root/.autoinit.log\" ] && [ ! -e \"/lib/systemd/systemd\" ]; then\n    if [[ -n \"$SSH_CONNECTION\" || -n \"$SSH_CLIENT\" ]]; then\n      # Check if the user is inside a screen session\n      if [[ ! \"${_ARGS}\" =~ (^|[[:space:]])(info|help|version|reboot)([[:space:]]|$) ]]; then\n        if [ -z \"$STY\" ]; then\n          # If not in screen, start a new screen session with the same script\n          echo \"You are not inside a screen session. 
Starting screen...\"\n          sleep 5\n          screen -S session_boa bash -c \"$0 ${_ARGS}\"\n          exit\n        else\n          # If already inside screen, continue the script\n          echo \"You are in a screen session now\"\n          sleep 3\n        fi\n      fi\n    fi\n  fi\n}\n\n_if_display_help() {\n  if [ \"${_cmNd}\" = \"help\" ]; then\n    echo\n    echo \"Installation commands:\"\n    echo\n    echo \"Usage: $(basename \"$0\") {version} {kind} {fqdn} {email} {user} {extra} {output}\"\n    echo \"Usage: $(basename \"$0\") {in-octopus} {email} {o2} {lts|dev|pro} {output}\"\n    echo\n    echo \"Other available commands:\"\n    echo\n    echo \"Usage: $(basename \"$0\") version\"\n    echo \"Usage: $(basename \"$0\") info {more}\"\n    echo \"Usage: $(basename \"$0\") info report {octopus|backups|both}\"\n    echo \"Usage: $(basename \"$0\") cleanup {detect|purge} {user|batch}\"\n    echo \"Usage: $(basename \"$0\") reboot\"\n    echo\n    cat <<EOF\n\n    Accepted keywords and values for installation and other commands:\n\n    {version}\n      in-lts <------- install BOA LTS release (no license)\n      in-dev <------- install BOA Cutting Edge (requires license)\n      in-pro <------- install BOA PRO release (requires license)\n      in-octopus <--- install extra Octopus instance (lts|dev|pro)\n\n    {kind}\n      public <------- recommended for general use\n      local <-------- experimental\n\n    {fqdn}\n      my.fqdn <------ valid subdomain to use as a hostname\n\n    {email}\n      my@email <----- your valid email address\n\n    {user}\n      o1 <----------- default Octopus system account\n\n    {extra}\n      license <------ valid new relic license key\n      php-8.5 <------ enable single-PHP mode (8.5 or 8.4 or 8.3)\n      php-min <------ install PHP 8.5, 8.4, 8.3 use 8.4 by default (php-all)\n      php-max <------ install PHP 8.0-8.5, 7.0-7.4, 5.6\n      nodns <-------- disable DNS/SMTP checks on the fly\n      percona-8.4 <-- specify 
Percona version to use (5.7, 8.0, 8.4)\n\n    {output}\n      verbose <------ barracuda and octopus installed (output in the console) (default)\n      minimal <------ barracuda and octopus installed (output mostly logged)\n      silent <------- barracuda and octopus installed (output only logged)\n      system <------- only barracuda is installed (output mostly logged)\n\n    {other}\n      version <------ display BOA and OS version\n      info <--------- generate various system reports\n      cleanup <------ (detect|purge) cancelled Octopus instances files (no dbs)\n      reboot <------- run accelerated system reboot\n\n    See docs/INSTALL.md for more details.\n\nEOF\n    _clean_pid_exit\n\n  fi\n}\n\n_check_root_direct\n_os_detection_minimal\n\nexport _ARGS=\"$@\"\n\ncase \"$1\" in\n  info)       _cmNd=\"$1\"\n              _mOde=\"$2\"\n              _eXtr=\"$3\"\n              _check_virt\n              _display_info\n  ;;\n  version)    _cmNd=\"$1\"\n              _display_version\n  ;;\n  help)       _cmNd=\"$1\"\n              _if_display_help\n  ;;\n  reboot)     service clean-boa-env stop\n              wait\n              shutdown -r now\n              wait\n              exit 0\n  ;;\n  cleanup)    _cmNd=\"$1\"\n              _aCtn=\"$2\"\n              _usEr=\"$3\"\n              [[ ! \"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n              _octopus_cleanup\n  ;;\n  in-lts)     export _tRee=lts\n              _cmNd=\"$1\"\n              _kiNd=\"$2\"\n              _fQdn=\"$3\"\n              _eMal=\"$4\"\n              _usEr=\"$5\"\n              _eXtr=\"$6\"\n              _outP=\"$7\"\n              _check_root\n              _check_manufacturer_compatibility\n              _check_root_keys_pwd\n              _check_virt\n              _check_dns_curl\n              [[ ! 
\"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n              # Capture the complete command with arguments\n              _BOA_COMMAND=\"$0 $@\"\n              [[ \"${_ARGS}\" =~ noscreen ]] && _BOA_COMMAND=$(echo \"${_BOA_COMMAND}\" | sed -E \"s/noscreen//g\")\n              _init_setup\n  ;;\n  in-dev)     export _tRee=dev\n              _cmNd=\"$1\"\n              _kiNd=\"$2\"\n              _fQdn=\"$3\"\n              _eMal=\"$4\"\n              _usEr=\"$5\"\n              _eXtr=\"$6\"\n              _outP=\"$7\"\n              _check_root\n              _check_root_keys_pwd\n              _check_virt\n              _check_dns_curl\n              [[ ! \"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n              # Capture the complete command with arguments\n              _BOA_COMMAND=\"$0 $@\"\n              [[ \"${_ARGS}\" =~ noscreen ]] && _BOA_COMMAND=$(echo \"${_BOA_COMMAND}\" | sed -E \"s/noscreen//g\")\n              _init_setup\n  ;;\n  in-pro)     export _tRee=pro\n              _cmNd=\"$1\"\n              _kiNd=\"$2\"\n              _fQdn=\"$3\"\n              _eMal=\"$4\"\n              _usEr=\"$5\"\n              _eXtr=\"$6\"\n              _outP=\"$7\"\n              _check_root\n              _check_root_keys_pwd\n              _check_virt\n              _check_dns_curl\n              [[ ! \"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n              # Capture the complete command with arguments\n              _BOA_COMMAND=\"$0 $@\"\n              [[ \"${_ARGS}\" =~ noscreen ]] && _BOA_COMMAND=$(echo \"${_BOA_COMMAND}\" | sed -E \"s/noscreen//g\")\n              _init_setup\n  ;;\n  in-octopus) _cmNd=\"$1\"\n              _eMal=\"$2\"\n              _usEr=\"$3\"\n              _eXtr=\"$4\"\n              _cOpt=\"$5\"\n              _cSub=\"$6\"\n              _cCor=\"$7\"\n              _check_root\n              _check_root_keys_pwd\n              _check_virt\n              _check_dns_curl\n              [[ ! 
\"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n              # Capture the complete command with arguments\n              _BOA_COMMAND=\"$0 $@\"\n              [[ \"${_ARGS}\" =~ noscreen ]] && _BOA_COMMAND=$(echo \"${_BOA_COMMAND}\" | sed -E \"s/noscreen//g\")\n              _init_setup\n  ;;\n  in-oct)     _cmNd=\"$1\"\n              _eMal=\"$2\"\n              _usEr=\"$3\"\n              _eXtr=\"$4\"\n              _cOpt=\"$5\"\n              _cSub=\"$6\"\n              _cCor=\"$7\"\n              _pXyc=\"6033\"\n              _pXyi=\"127.0.0.1\"\n              _check_root\n              _check_root_keys_pwd\n              _check_virt\n              _check_dns_curl\n              [[ ! \"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n              # Capture the complete command with arguments\n              _BOA_COMMAND=\"$0 $@\"\n              [[ \"${_ARGS}\" =~ noscreen ]] && _BOA_COMMAND=$(echo \"${_BOA_COMMAND}\" | sed -E \"s/noscreen//g\")\n              _init_setup\n  ;;\n  *)          echo\n              echo \"Sorry, this command is not supported..\"\n              echo \"Display supported commands with: $(basename \"$0\") help\"\n              echo\n              _clean_pid_exit\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/cluster",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_DB_SERIES=5.7\n_CSF_VRN=15.00\n_PROXYSQL_VRN=\"1.3.5-debian8_amd64\"\n_UTIL_VRN=\"0.30.216-pre3126\"\n_VNSTAT_VRN=2.13\n_APT_XTR=\"-y\"\n_aptAllow=\"--allow-unauthenticated\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n_INSTALL_DIST=\"/usr/bin/apt-get ${_dstUpArg} install\"\n_INSTALL_NRML=\"/usr/bin/apt-get ${_nrmUpArg} install\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n_OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n_aptLiSys=\"/etc/apt/sources.list\"\n\nif [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n  _DPKG_CNF=\"confold\"\nelse\n  _DPKG_CNF=\"confnew\"\nfi\n\n_INSTAPP=\"/usr/bin/aptitude -f -y -q \\\n  --allow-untrusted \\\n  -o Dpkg::Options::=--force-confmiss \\\n  -o Dpkg::Options::=--force-confdef \\\n  -o Dpkg::Options::=--force-${_DPKG_CNF} install\"\n\nif [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n  _INSTAPP=\"${_INSTALL_DIST}\"\nfi\n\n_NOSTRICT=\"-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n_X_VERSION=\"BOA-5.9.1-${_tRee}\"\n#\n_barCnf=\"/root/.barracuda.cnf\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_optBin=\"/opt/local/bin\"\n_usrBin=\"/usr/local/bin\"\n_pthLog=\"/var/log/boa\"\n_tBn=\"tools/bin\"\n#\n\n###-------------SYSTEM-----------------###\n\n_clean_pid_exit() {\n  if [ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.cluster.exit.exceptions.log\n    [ -e \"/opt/tmp/boa\" ] && rm -rf /opt/tmp/*\n  fi\n  rm -f /run/cluster_run.pid\n  service cron start\n  exit 1\n}\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_check_mysql_version() {\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! 
-z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  fi\n}\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    chmod a+w /dev/null\n    if [ ! -e \"/dev/fd\" ]; then\n      if [ -e \"/proc/self/fd\" ]; then\n        rm -rf /dev/fd\n        ln -sfn /proc/self/fd /dev/fd\n      fi\n    fi\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _LSB_TEST=\"$(which lsb_release)\"\n    if [ ! 
-x \"${_LSB_TEST}\" ]; then\n      _apt_clean_update\n      apt-get install lsb-release ${_aptYesUnth}\n    fi\n    _LSB_TEST=\"$(which lsb_release)\"\n    if [ -x \"${_LSB_TEST}\" ]; then\n      if [ \"${_OS_DIST}\" = \"Devuan\" ] || [ \"${_OS_DIST}\" = \"Debian\" ]; then\n        _SUPPORTED_OS=YES\n      else\n        echo \"ERROR: Not supported OS detected: ${_OS_DIST}/${_OS_CODE}\"\n        exit 1\n      fi\n      echo\n      if [ \"${_OS_CODE}\" = \"buster\" ] || [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n        _APT_MIRROR=\"archive.debian.org/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-backports\"\n        _SEC_MIRROR=\"archive.debian.org/debian-security\"\n        _SEC_REPSRC=\"${_OS_CODE}/updates\"\n      elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n        _APT_MIRROR=\"${_MIRROR}/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-updates\"\n        _SEC_MIRROR=\"security.debian.org/debian-security\"\n        _SEC_REPSRC=\"${_OS_CODE}-security\"\n      elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n        _APT_MIRROR=\"${_MIRROR}/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-updates\"\n        _SEC_MIRROR=\"security.debian.org/debian-security\"\n        _SEC_REPSRC=\"${_OS_CODE}-security\"\n      fi\n      echo \"## DEBIAN MAIN REPOSITORIES\" > ${_aptLiSys}\n      echo \"deb http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      echo \"## MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n      echo \"deb http://${_APT_MIRROR} ${_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_APT_MIRROR} 
${_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      echo \"## DEBIAN SECURITY UPDATES\" >> ${_aptLiSys}\n      echo \"deb http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      echo \"deb-src http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      echo\n      _apt_clean_update\n    fi\n    if [ \"${_OS_DIST}\" = \"Devuan\" ] || [ \"${_OS_DIST}\" = \"Debian\" ]; then\n      _SUPPORTED_OS=YES\n    else\n      echo \"ERROR: Not supported OS detected: ${_OS_DIST}/${_OS_CODE}\"\n      exit 1\n    fi\n    if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n      _BENG_SERIES=\"4.9\"\n      _BNG_VERSION=\"4.9.202-vs2.3.9.9\"\n    elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n      _BENG_SERIES=\"4.9\"\n      _BNG_VERSION=\"4.9.202-vs2.3.9.9\"\n    elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n      _BENG_SERIES=\"4.9\"\n      _BNG_VERSION=\"4.9.202-vs2.3.9.9\"\n    elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n      _BENG_SERIES=\"4.9\"\n      _BNG_VERSION=\"4.9.202-vs2.3.9.9\"\n    elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n      _BENG_SERIES=\"4.9\"\n      _BNG_VERSION=\"4.9.202-vs2.3.9.9\"\n    else\n      echo \"ERROR: Not supported OS detected: ${_OS_DIST}/${_OS_CODE}\"\n      exit 1\n    fi\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n\n_check_no_systemd() {\n  if [ -e \"/lib/systemd/systemd\" ]; then\n    echo \"ERROR: This script can not be used with systemd\"\n    echo \"ERROR: Please run 'autoinit' first\"\n    _clean_pid_exit _check_no_systemd\n  fi\n}\n\n_ifnames_grub_check_sync() {\n\n  _USE_NINC=NO\n  _NEW_GRUB=DEMO\n\n  if [ -e \"/root/.ignore.ifnames.cnf\" ]; then\n    return 1  # Exit the function but continue the script\n  else\n    [ -e \"/root/.ninc.selected.predictable.cnf\" ] && _USE_NINC=predictable && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.classic.cnf\" ]     && 
_USE_NINC=classic     && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.auto.cnf\" ]        && _USE_NINC=auto        && _NEW_GRUB=LIVE\n    [ -e \"/root/.ninc.selected.vanilla.cnf\" ]     && _USE_NINC=vanilla\n  fi\n\n  if [ \"${_USE_NINC}\" = \"vanilla\" ] || [ \"${_USE_NINC}\" = \"NO\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n\n  _IS_IFACE=$(ip a 2>&1)\n  _ADD_GRUB_CMD=\"\"\n  _GRUB_FILE=\"/etc/default/grub\"\n\n  if [ -e \"${_GRUB_FILE}\" ]; then\n    if echo \"${_IS_IFACE}\" | grep -qE \"eth[0-9]+\"; then\n      _USE_IFNAMES=\"CLASSIC\"\n      echo \"GRUB: Classic ethX interface naming found.\"\n    elif echo \"${_IS_IFACE}\" | grep -qE \"(ens|enp|eno|wlp|wlo)[0-9]+:\"; then\n      _USE_IFNAMES=\"PREDICTABLE\"\n      echo \"GRUB: Predictable (ensX, enpX, enoX, wlpX, wloX) interface naming found.\"\n    else\n      _USE_IFNAMES=\"DONTMODIFY\"\n      echo \"GRUB: config exists, but no recognized network interface naming found.\"\n    fi\n\n    # Extract the current GRUB_CMDLINE_LINUX line\n    _GRUB_CMDLINE_LINUX=$(grep -E \"^GRUB_CMDLINE_LINUX=\" \"${_GRUB_FILE}\")\n    echo \"GRUB: Current config is ${_GRUB_CMDLINE_LINUX}\"\n\n    # Initialize variables to check for existing options\n    _SYS_NET_IFNAMES=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"net.ifnames=[01]\")\n    _SYS_BIOSDEVNAME=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"biosdevname=[01]\")\n    _SYS_MEMHP_STATE=$(echo \"${_GRUB_CMDLINE_LINUX}\" | grep -o \"memhp_default_state=online\")\n\n    # Function to append option to _ADD_GRUB_CMD\n    _append_grub_cmd_option() {\n      _option=\"$1\"\n      if [[ -z \"${_ADD_GRUB_CMD}\" ]]; then\n        _ADD_GRUB_CMD=\"${_option}\"\n      else\n        _ADD_GRUB_CMD=\"${_ADD_GRUB_CMD} ${_option}\"\n      fi\n    }\n\n    # Always add memhp_default_state=online\n    _append_grub_cmd_option \"memhp_default_state=online\"\n\n    if [[ \"${_USE_IFNAMES}\" == \"CLASSIC\" ]]; then\n      # Always set net.ifnames=0 and 
biosdevname=0\n      _append_grub_cmd_option \"net.ifnames=0\"\n      _append_grub_cmd_option \"biosdevname=0\"\n    elif [[ \"${_USE_IFNAMES}\" == \"PREDICTABLE\" ]]; then\n      # Always set net.ifnames=1 and biosdevname=1\n      _append_grub_cmd_option \"net.ifnames=1\"\n      _append_grub_cmd_option \"biosdevname=1\"\n    fi\n\n    if [[ -n \"${_ADD_GRUB_CMD}\" ]]; then\n      # Backup the GRUB file\n      cp \"${_GRUB_FILE}\" \"${_GRUB_FILE}.bak\"\n\n      # Remove existing options from GRUB_CMDLINE_LINUX\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_GRUB_CMDLINE_LINUX}\" | sed -E \"s/(net.ifnames=[01]|biosdevname=[01]|memhp_default_state=online)//g\")\n\n      # Clean up extra spaces and trailing spaces before the closing quote\n      _NEW_GRUB_CMDLINE_LINUX=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | tr -s ' ' | sed -E 's/\\s*\"$/\"/')\n\n      # Extract current kernel parameters\n      _CURRENT_CMDLINE=$(echo \"${_NEW_GRUB_CMDLINE_LINUX}\" | sed -E 's/^GRUB_CMDLINE_LINUX=\"(.*)\"$/\\1/')\n\n      # Append new options\n      _UPDATED_CMDLINE=\"${_CURRENT_CMDLINE} ${_ADD_GRUB_CMD}\"\n      _UPDATED_CMDLINE=$(echo \"${_UPDATED_CMDLINE}\" | sed 's/^ *//;s/ *$//')\n\n      # Form the new GRUB_CMDLINE_LINUX line\n      _NEW_GRUB_CMDLINE_LINUX=\"GRUB_CMDLINE_LINUX=\\\"${_UPDATED_CMDLINE}\\\"\"\n\n      echo \" \"\n      if [[ \"${_NEW_GRUB}\" == \"LIVE\" ]]; then\n        # Update the GRUB file\n        echo \"GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\"\n        echo \"GRUB: Update in the LIVE MODE\"\n        sed -i \"s|^GRUB_CMDLINE_LINUX=.*|${_NEW_GRUB_CMDLINE_LINUX}|\" \"${_GRUB_FILE}\"\n        echo \"GRUB_CMDLINE_LINUX has been updated with ${_UPDATED_CMDLINE}\"\n      elif [[ \"${_NEW_GRUB}\" == \"DEMO\" ]]; then\n        # Demo info\n        echo \"GRUB: Networking Interface Naming Convention selected: ${_USE_NINC}\"\n        echo \"GRUB: Update in the DEMO MODE\"\n        echo \" \"\n        echo \"GRUB_CMDLINE_LINUX would be updated 
with:\"\n        echo \"   ${_UPDATED_CMDLINE}\"\n        echo \" \"\n        echo \"GRUB: Update in the LIVE MODE requires presence of control file:\"\n        echo \"   /root/.ninc.selected.auto.cnf\"\n        echo \" \"\n        echo \"GRUB: Note that this extra control file must not exist:\"\n        echo \"   /root/.ignore.ifnames.cnf\"\n        echo \" \"\n        echo \"GRUB: This requirement serves as a double-check to confirm\"\n        echo \"GRUB: that you are aware of and agree to auto-update GRUB configuration.\"\n        echo \"GRUB: Incorrect GRUB settings can render your virtual machine unbootable\"\n        echo \"GRUB: necessitating a rescue operation using a CD-ROM or ISO image.\"\n        echo \"GRUB: For this reason, running BOA directly on physical hardware (bare metal) is not supported\"\n        echo \" \"\n        echo \"GRUB: NEVER USE LIVE MODE IF YOU ARE NOT SURE IF YOU NEED IT\"\n      fi\n      echo \" \"\n    fi\n  else\n    echo \"GRUB config does not exist.\"\n  fi\n}\n\n_check_root_direct() {\n  _U_TEST=DENY\n  [ \"${SUDO_USER}\" ] && _U_TEST_SDO=${SUDO_USER} || _U_TEST_SDO=$(whoami)\n  _U_TEST_WHO=$(who am i | awk '{print $1}' 2>&1)\n  _U_TEST_LNE=$(logname 2>&1)\n  if [ \"${_U_TEST_SDO}\" = \"root\" ] || [ \"${_U_TEST_LNE}\" = \"root\" ]; then\n    if [ -z \"${_U_TEST_WHO}\" ]; then\n      _U_TEST=ALLOW\n      ### normal for root scripts running from cron\n    else\n      if [ \"${_U_TEST_WHO}\" = \"root\" ]; then\n        _U_TEST=ALLOW\n      fi\n    fi\n  fi\n  if [ \"${_U_TEST}\" = \"DENY\" ]; then\n    echo\n    echo \"ERROR: This script must be run as root directly,\"\n    echo \"ERROR: without sudo/su switch from regular system user\"\n    echo \"ERROR: Please add and test your SSH (ed25519) keys for root account\"\n    echo \"ERROR: with direct access before trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 
(modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy the key manually to the ~/.ssh/authorized_keys file on the server:\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure that each key is not split across more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HINT:  You can always restrict access later, or\"\n    echo \"       allow only SSH (ed25519) keys for root with the directive\"\n    echo \"         PermitRootLogin prohibit-password\"\n    echo \"       in the /etc/ssh/sshd_config file\"\n    echo \"Bye\"\n    _clean_pid_exit\n  fi\n}\n\n_check_root_keys_pwd() {\n  # Check if root's password is locked\n  _ROOT_PWD_LOCKED=\"NO\"\n  _S_TEST=$(grep 'root:\\*:' /etc/shadow 2>&1)\n  if [[ \"${_S_TEST}\" =~ root:\\*: ]]; then\n    _ROOT_PWD_LOCKED=\"YES\"\n  fi\n\n  # Check for presence of SSH keys\n  _SSH_KEYS_OK=\"NO\"\n  if [ -e \"/root/.ssh/authorized_keys\" ]; then\n    if grep -qE '^(ssh-rsa|ssh-ed25519|ecdsa-sha2)' /root/.ssh/authorized_keys; then\n      _SSH_KEYS_OK=\"YES\"\n    fi\n  fi\n\n  if [[ \"${_ROOT_PWD_LOCKED}\" == \"NO\" ]] && [[ \"${_SSH_KEYS_OK}\" == \"NO\" ]]; then\n    echo\n    echo \"ERROR: BOA requires working SSH keys for the system root account\"\n    echo \"ERROR: Please add and test your SSH keys for the root account\"\n    echo \"ERROR: before trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen 
-t ed25519 -N '' -f ~/.ssh/id_ed25519 (modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy the key manually to the ~/.ssh/authorized_keys file on the server:\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure that each key is not split across more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HOWTO: You can prioritize your keys by adding these lines to ~/.ssh/config:\"\n    echo \" Host *\"\n    echo \"   IdentityFile ~/.ssh/id_ed25519\"\n    echo \"   IdentityFile ~/.ssh/id_ecdsa\"\n    echo \"   IdentityFile ~/.ssh/id_rsa\"\n    echo\n    echo \"Bye\"\n    echo\n    _clean_pid_exit\n  fi\n}\n\n_system_check_clean() {\n  if [ -e \"/etc/nginx\" ] \\\n    || [ -e \"/data/disk/all\" ] \\\n    || [ -e \"/etc/apache2\" ] \\\n    || [ -e \"/etc/mysql\" ] \\\n    || [ -e \"/var/lib/mysql/mysql\" ]; then\n    echo \"ERROR: Cluster requires a minimal, supported OS with no services installed\"\n    echo \"ERROR: The only acceptable exceptions are: sshd and mail servers\"\n    echo \"ERROR: This script must be run at the main OS level of a bare metal server\"\n    echo \"Bye\"\n    _clean_pid_exit _system_check_clean_a\n  fi\n}\n\n#\n# Fix locales.\n_locales_check_fix() {\n  ${_INITINS} locales &> /dev/null\n  if [ -e \"/etc/ssh/sshd_config\" ]; then\n    _SSH_LC_TEST=$(grep \"^AcceptEnv LANG LC_\" /etc/ssh/sshd_config 2>&1)\n    if [[ \"${_SSH_LC_TEST}\" =~ \"AcceptEnv LANG LC_\" ]]; then\n      
_DO_NOTHING=YES\n    else\n      sed -i \"s/.*AcceptEnv.*//g\" /etc/ssh/sshd_config\n      wait\n      echo \"AcceptEnv LANG LC_*\" >> /etc/ssh/sshd_config\n    fi\n  fi\n  _LOC_TEST=$(locale 2>&1)\n  if [[ \"${_LOC_TEST}\" =~ LANG=.*UTF-8 ]]; then\n    _LOCALE_TEST=OK\n  fi\n  if [ -n \"${STY+x}\" ]; then\n    _LOCALE_TEST=OK\n  fi\n  if [[ \"${_LOC_TEST}\" =~ \"Cannot\" ]]; then\n    _LOCALE_TEST=BROKEN\n  fi\n  if [ \"${_LOCALE_TEST}\" = \"BROKEN\" ]; then\n    echo \"WARNING!\"\n    cat <<EOF\n\n  Locales on this system are broken, not installed,\n  or not configured correctly yet. This is a known\n  issue on some systems/hosts that either don't configure\n  locales at all or don't use UTF-8 compatible locales\n  during initial OS setup.\n\n  We will fix this problem for you now by enforcing en_US.UTF-8\n  locale settings on the fly during install, and as system\n  defaults in /etc/default/locale for future sessions. This\n  overrides any locale settings passed by your SSH client.\n\n  You should log out once this installer has finished all its tasks\n  and displayed its last line with \"BYE!\", and then log in again\n  to see the result.\n\n  We will continue in 5 seconds...\n\nEOF\n    sleep 5\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n    if [[ ! 
\"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen &> /dev/null\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce all locale settings\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_TIME=en_US.UTF-8 \\\n      LC_MONETARY=en_US.UTF-8 \\\n      LC_MESSAGES=en_US.UTF-8 \\\n      LC_PAPER=en_US.UTF-8 \\\n      LC_NAME=en_US.UTF-8 \\\n      LC_ADDRESS=en_US.UTF-8 \\\n      LC_TELEPHONE=en_US.UTF-8 \\\n      LC_MEASUREMENT=en_US.UTF-8 \\\n      LC_IDENTIFICATION=en_US.UTF-8 \\\n      LC_ALL= &> /dev/null\n    # Define all locale settings on the fly to prevent unnecessary\n    # warnings during installation of packages.\n    export LANG=en_US.UTF-8\n    export LC_CTYPE=en_US.UTF-8\n    export LC_COLLATE=POSIX\n    export LC_NUMERIC=POSIX\n    export LC_TIME=en_US.UTF-8\n    export LC_MONETARY=en_US.UTF-8\n    export LC_MESSAGES=en_US.UTF-8\n    export LC_PAPER=en_US.UTF-8\n    export LC_NAME=en_US.UTF-8\n    export LC_ADDRESS=en_US.UTF-8\n    export LC_TELEPHONE=en_US.UTF-8\n    export LC_MEASUREMENT=en_US.UTF-8\n    export LC_IDENTIFICATION=en_US.UTF-8\n    export LC_ALL=\n  else\n    _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n    if [[ ! 
\"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n      echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n    fi\n    sed -i \"/^$/d\" /etc/locale.gen\n    locale-gen &> /dev/null\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce locale settings required for consistency\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_ALL= &> /dev/null\n    # Define locale settings required for consistency also on the fly\n    if [ \"${_STATUS}\" != \"INIT\" ]; then\n      # On initial install it usually causes a warning:\n      # setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8):\n      # No such file or directory\n      export LC_CTYPE=en_US.UTF-8\n    fi\n    export LC_COLLATE=POSIX\n    export LC_NUMERIC=POSIX\n    export LC_ALL=\n  fi\n  _LOCALES_BASHRC_TEST=$(grep LC_COLLATE /root/.bashrc 2>&1)\n  if [[ ! \"${_LOCALES_BASHRC_TEST}\" =~ \"LC_COLLATE\" ]]; then\n    printf \"\\n\" >> /root/.bashrc\n    echo \"export LANG=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_CTYPE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_COLLATE=POSIX\" >> /root/.bashrc\n    echo \"export LC_NUMERIC=POSIX\" >> /root/.bashrc\n    echo \"export LC_TIME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MONETARY=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MESSAGES=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_PAPER=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_NAME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ADDRESS=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_TELEPHONE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MEASUREMENT=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_IDENTIFICATION=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ALL=\" >> /root/.bashrc\n    printf \"\\n\" >> /root/.bashrc\n  fi\n}\n\n_check_all() {\n  _check_root\n  _check_root_direct\n  _check_root_keys_pwd\n  _check_no_systemd\n  
_ifnames_grub_check_sync\n  _os_detection_minimal\n  _system_check_clean\n  _locales_check_fix\n}\n\n_check_config_cluster() {\n\n  # shellcheck disable=SC1091\n  [ -e \"/root/.cluster.cnf\" ] && source /root/.cluster.cnf\n\n  _CLUSTER_PREFIX=\"${_CLUSTER_PREFIX//[^a-z0-9]/}\"\n  _CLUSTER_PREFIX=\"$(echo -n ${_CLUSTER_PREFIX} | tr A-Z a-z 2>&1)\"\n  _CLUSTER_SUFFIX=\"${_CLUSTER_SUFFIX//[^a-z0-9-.]/}\"\n  _CLUSTER_SUFFIX=\"$(echo -n ${_CLUSTER_SUFFIX} | tr A-Z a-z 2>&1)\"\n  _WEB_FQDN=\"${_WEB_FQDN//[^a-zA-Z0-9-.]/}\"\n  _WEB_FQDN=\"$(echo -n ${_WEB_FQDN} | tr A-Z a-z 2>&1)\"\n  _WEB_NODE_IP=\"${_WEB_NODE_IP//[^0-9.]/}\"\n  _DB_NODE_IP[0]=\"${_DB_NODE_IP[0]//[^0-9.]/}\"\n  _DB_NODE_IP[1]=\"${_DB_NODE_IP[1]//[^0-9.]/}\"\n  _DB_NODE_IP[2]=\"${_DB_NODE_IP[2]//[^0-9.]/}\"\n\n  if [ -z \"${_CLUSTER_PREFIX}\" ]; then\n    _CLUSTER_PREFIX=\"c1r\"\n  fi\n\n  if [ -z \"${_CLUSTER_SUFFIX}\" ]; then\n    _CLUSTER_SUFFIX=\"example.com\"\n  fi\n\n  if [ -z \"${_CLUSTER_OS}\" ]; then\n    _CLUSTER_OS=\"jessie\"\n  fi\n\n  if [ -z \"${_CLUSTER_XCHECK}\" ]; then\n    _CLUSTER_XCHECK=\"YES\"\n  fi\n\n  if [ -z \"${_CLUSTER_EMAIL}\" ] \\\n     || [ -z \"${_CLUSTER_PREFIX}\" ] \\\n     || [ -z \"${_CLUSTER_SUFFIX}\" ] \\\n     || [ -z \"${_WEB_FQDN}\" ] \\\n     || [ -z \"${_WEB_NODE_IP}\" ] \\\n     || [ -z \"${_DB_NODE_IP[0]}\" ] \\\n     || [ -z \"${_DB_NODE_IP[1]}\" ] \\\n     || [ -z \"${_DB_NODE_IP[2]}\" ]; then\n    echo \"\n\n    CONFIGURATION REQUIRED!\n\n    Add listed below required lines to your /root/.cluster.cnf file.\n    Required lines are marked with [R] and optional with [O]:\n\n       #\n      _CLUSTER_EMAIL=     ### [R] Technical contact email\n       #\n       # Public IP and hostname with working DNS for the main web node\n       #\n      _WEB_NODE_IP=       ### [R] Public IP address assigned to the machine\n      _WEB_FQDN=          ### [R] Valid FQDN pointing to WEB_NODE_IP\n       #\n       # An odd number of DB nodes in the array: 3, 5, 7 etc. 
numbered from 0\n       #\n      _DB_NODE_IP[0]=     ### [R] Private or Public IP address to use\n      _DB_NODE_IP[1]=     ### [R] Private or Public IP address to use\n      _DB_NODE_IP[2]=     ### [R] Private or Public IP address to use\n       #\n      _CLUSTER_PREFIX=c1r ### [O] For Linux VServer guests short names\n      _CLUSTER_SUFFIX=    ### [O] For DB nodes FQDN hostnames: example.com\n      _CLUSTER_OS=jessie  ### [O] Debian version: jessie\n       #\n      _CLUSTER_XCHECK=YES ### [O] Standard galera_check script if NO\n       #\n\n\"\n    exit 1\n  fi\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n_find_fast_mirror_early\n\n_if_firewall_update() {\n  if [ -e \"/etc/csf\" ]; then\n    echo \"Running Firewall Update on $(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)...\"\n    _NOW=$(date +%y%m%d-%H%M)\n    _WFC=\"/var/backups/wfc/${_NOW}\"\n    mkdir -p ${_WFC}\n    cp -af /etc/sysctl.conf  ${_WFC}/sysctl.conf-pre\n    cp -af /opt/water.sh     ${_WFC}/water.sh-pre\n    cp -af /opt/fire.sh      ${_WFC}/fire.sh-pre\n    cp -af /etc/csf/csf.conf ${_WFC}/csf.conf-pre\n    cd ${_WFC}\n    curl ${_crlGet} \"${_urlHmr}/conf/var/sysctl.conf\"      -o sysctl.conf\n    curl ${_crlGet} \"${_urlHmr}/tools/host/host-water.sh\"  -o host-water.sh\n    curl ${_crlGet} \"${_urlHmr}/tools/host/host-fire.sh\"   -o host-fire.sh\n    curl ${_crlGet} \"http://${_USE_MIR}/cluster/csf.conf\"  -o csf.conf\n    cp -af ${_WFC}/sysctl.conf   /etc/sysctl.conf\n    cp -af ${_WFC}/host-water.sh /opt/water.sh\n    cp -af ${_WFC}/host-fire.sh  /opt/fire.sh\n    cp -af ${_WFC}/csf.conf      /etc/csf/csf.conf\n    chmod 644 /etc/sysctl.conf\n    chmod 700 /opt/water.sh\n    chmod 700 /opt/fire.sh\n    chmod 600 /etc/csf/csf.conf\n    sed -i 
\"s/.*fire.sh.*//gi\"  /etc/crontab\n    wait\n    sed -i \"s/.*water.sh.*//gi\" /etc/crontab\n    wait\n    sed -i \"s/.*csf.error.*//gi\" /etc/crontab\n    wait\n    sed -i \"s/.*vservers.*//gi\" /etc/crontab\n    wait\n    if [ -e \"/opt/fire.sh\" ] && [ -e \"/opt/water.sh\" ]; then\n      echo \"*  *    * * *   root    test -e /etc/csf/csf.error && ( rm -f /etc/csf/csf.error && service lfd restart && csf -ra && echo WTF \\`date\\` >> /var/log/csf_wtf.log )\" >> /etc/crontab\n      echo \"*  *    * * *   root    bash /opt/fire.sh >/dev/null 2>&1\" >> /etc/crontab\n      echo \"05 5    * * *   root    bash /opt/water.sh >/dev/null 2>&1\" >> /etc/crontab\n      echo \"30 *    * * *   root    rm -f /vservers/*/var/xdrago/monitor/log/ssh.log >/dev/null 2>&1\" >> /etc/crontab\n      echo \"30 *    * * *   root    rm -f /vservers/*/var/xdrago/monitor/log/web.log >/dev/null 2>&1\" >> /etc/crontab\n      echo \"30 *    * * *   root    rm -f /vservers/*/var/xdrago/monitor/log/ftp.log >/dev/null 2>&1\" >> /etc/crontab\n      sed -i \"/^$/d\" /etc/crontab\n    fi\n    echo kernel.vshelper = /sbin/vshelper >> /etc/sysctl.conf\n    if [ -e \"/etc/security/limits.conf\" ]; then\n      _IF_NF=$(grep '2097152' /etc/security/limits.conf 2>&1)\n      if [ ! 
-z \"${_IF_NF}\" ]; then\n        sed -i \"s/.*2097152.*//g\" /etc/security/limits.conf\n        wait\n      fi\n      _IF_NF=$(grep '524288' /etc/security/limits.conf)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"*         soft    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"root      hard    nofile      1048576\" >> /etc/security/limits.conf\n        echo \"root      soft    nofile      1048576\" >> /etc/security/limits.conf\n      fi\n      _IF_NF=$(grep '65556' /etc/security/limits.conf 2>&1)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nproc       65556\"   >> /etc/security/limits.conf\n        echo \"*         soft    nproc       65556\"   >> /etc/security/limits.conf\n      fi\n    fi\n    sysctl -p /etc/sysctl.conf\n    cd\n    pkill -9 -f ConfigServer\n    killall sleep &> /dev/null\n    rm -f /etc/csf/csf.error\n    if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n      csf -ra &> /dev/null\n      synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    else\n      csf -r &> /dev/null\n    fi\n    ### Linux kernel TCP SACK CVEs mitigation\n    ### CVE-2019-11477 SACK Panic\n    ### CVE-2019-11478 SACK Slowness\n    ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n    if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n      _SACK_TEST=$(ip6tables --list | grep tcpmss)\n      if [[ ! 
\"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n        sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n        iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    fi\n  fi\n}\n\n_install_proxysql() {\n\n  if [ -z \"${_ROOT_SQL_PASWD}\" ]; then\n    _vidn=\"${_CLUSTER_PREFIX}db0\"\n    if [ -e \"/vservers/${_vidn}/root/.my.cluster_root_pwd.txt\" ]; then\n      _ROOT_SQL_PASWD=$(cat /vservers/${_vidn}/root/.my.cluster_root_pwd.txt 2>&1)\n      _ROOT_SQL_PASWD=$(echo -n ${_ROOT_SQL_PASWD} | tr -d \"\\n\" 2>&1)\n      _E_ROOT_SQL_PASWD=\"${_ROOT_SQL_PASWD//\\//\\\\\\/}\"\n    fi\n  fi\n\n  _PROXYSQL_PASSWORD=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n  _MONITOR_PASSWORD=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n  _CLUSTER_APP_PASSWORD=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n\n  _E_PROXYSQL_PASSWORD=\"${_PROXYSQL_PASSWORD//\\//\\\\\\/}\"\n  _E_MONITOR_PASSWORD=\"${_MONITOR_PASSWORD//\\//\\\\\\/}\"\n  _E_CLUSTER_APP_PASSWORD=\"${_CLUSTER_APP_PASSWORD//\\//\\\\\\/}\"\n\n  _pxyCtx=\"mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032\"\n  _pXcnf=\"/vservers/${_idn}/etc/proxysql-admin.cnf\"\n\n  echo \"Installing ProxySQL on the web cluster node...\"\n  if [ ! -z \"${_E_PROXYSQL_PASSWORD}\" ] && [ ! 
-z \"${_E_ROOT_SQL_PASWD}\" ]; then\n\n    if [ \"${_cmd}\" = \"in-pxy\" ] \\\n      && [ -e \"/vservers/${_idn}/usr/bin/proxysql\" ] \\\n      && [ -e \"/vservers/${_idn}/var/lib/proxysql\" ]; then\n      _ATMIS=$(date +%y%m%d-%H%M%S)\n      _XPSQL=\"/vservers/${_idn}/var/backups/xpySQL/${_ATMIS}\"\n      mkdir -p ${_XPSQL}\n      cp -af /vservers/${_idn}/var/lib/proxysql/* ${_XPSQL}/\n      vserver ${_idn} exec service proxysql stop\n      vserver ${_idn} exec apt-get clean\n      vserver ${_idn} exec ${_APT_UPDATE}\n      vserver ${_idn} exec apt-get remove proxysql -y -qq\n      rm -rf /vservers/${_idn}/var/lib/proxysql\n      rm -rf /vservers/${_idn}/var/log/proxysql\n      rm -f ${_pXcnf}\n      rm -f /vservers/${_idn}/etc/proxysql.cnf\n      rm -f /vservers/${_idn}/etc/apt/sources.list.d/proxysql.list\n      for IP in \"${_DB_NODE_IP[@]}\"; do\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'monitor'@'${IP}';\\\"\"\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'proxysql_user'@'${IP}';\\\"\"\n      done\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'monitor'@'${_WEB_NODE_IP}';\\\"\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'proxysql_user'@'${_WEB_NODE_IP}';\\\"\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'monitor'@'%';\\\"\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'proxysql_user'@'%';\\\"\"\n\n      _S_N=${_WEB_NODE_IP}\n      _S_T=${_S_N#*.*}\n      _S_Q=${_S_N%%${_S_T}}\n\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'monitor'@'${_S_Q}%';\\\"\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"DROP USER 'proxysql_user'@'${_S_Q}%';\\\"\"\n    fi\n\n    echo \"${_ROOT_SQL_PASWD}\" > /vservers/${_idn}/root/.my.cluster_root_pwd.txt\n    echo \"${_PROXYSQL_PASSWORD}\" > 
/vservers/${_idn}/root/.my.proxysql_adm_pwd.txt\n    echo \"${_DB_NODE_IP[1]}\" > /vservers/${_idn}/root/.my.cluster_write_node.txt\n    echo \"<?php\" > /vservers/${_idn}/opt/tools/drush/proxysql_adm_pwd.inc\n    echo \"\\$prxy_adm_paswd = \\\"${_PROXYSQL_PASSWORD}\\\";\" >> /vservers/${_idn}/opt/tools/drush/proxysql_adm_pwd.inc\n    echo \"\\$writer_node_ip = \\\"${_DB_NODE_IP[1]}\\\";\"     >> /vservers/${_idn}/opt/tools/drush/proxysql_adm_pwd.inc\n    echo \"ProxySQL\" > /vservers/${_idn}/data/conf/clstr.cnf\n    mkdir -p /vservers/${_idn}/opt/tmp\n    mkdir -p /vservers/${_idn}/opt/apt\n    echo '\nAPT::Get::Assume-Yes \"true\";\nAPT::Get::Show-Upgraded \"true\";\nAPT::Get::Install-Recommends \"false\";\nAPT::Get::Install-Suggests \"false\";\nAPT::Quiet \"true\";\nDPkg::Options {\"--force-confdef\";\"--force-confmiss\";\"--force-confold\"};\nDPkg::Pre-Install-Pkgs {\"/usr/sbin/dpkg-preconfigure --apt\";};\nDir::Etc::SourceList \"/etc/apt/sources.list\";\n' > /vservers/${_idn}/opt/apt/apt.conf.noi.nrml\n    _nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n    vserver ${_idn} exec apt-get clean\n    vserver ${_idn} exec ${_APT_UPDATE} -qq\n    vserver ${_idn} exec ${_INSTAPP} sysbench\n    vserver ${_idn} exec ${_INSTAPP} debconf-utils\n    vserver ${_idn} exec ${_INSTAPP} proxysql\n    vserver ${_idn} exec apt-get install --only-upgrade ${_nrmUpArg} proxysql\n    vserver ${_idn} exec mkdir -p /var/log/proxysql\n    vserver ${_idn} exec chown -R proxysql:proxysql /var/log/proxysql\n    vserver ${_idn} exec chmod 700 /var/log/proxysql\n    vserver ${_idn} exec usermod -aG users proxysql\n\n    echo \"Pausing ProxySQL...\"\n    vserver ${_idn} exec service proxysql stop\n    wait\n\n    sed -i \"s/admin:admin/admin:${_E_PROXYSQL_PASSWORD}/g\" /vservers/${_idn}/etc/proxysql.cnf\n    wait\n    sed -i \"s/localhost/127.0.0.1/g\" /vservers/${_idn}/etc/proxysql.cnf\n\n    sed -i 
\"s/PROXYSQL_PASSWORD=.*/PROXYSQL_PASSWORD=\\\"${_E_PROXYSQL_PASSWORD}\\\"/g\" ${_pXcnf}\n    wait\n    sed -i \"s/PROXYSQL_HOSTNAME=.*/PROXYSQL_HOSTNAME=\\\"127.0.0.1\\\"/g\" ${_pXcnf}\n    wait\n\n    sed -i \"s/MONITOR_PASSWORD=.*/MONITOR_PASSWORD=\\\"${_E_MONITOR_PASSWORD}\\\"/g\" ${_pXcnf}\n    wait\n\n    sed -i \"s/CLUSTER_APP_PASSWORD=.*/CLUSTER_APP_PASSWORD=\\\"${_E_CLUSTER_APP_PASSWORD}\\\"/g\" ${_pXcnf}\n    wait\n\n    sed -i \"s/CLUSTER_USERNAME=.*/CLUSTER_USERNAME=\\\"root\\\"/g\" ${_pXcnf}\n    wait\n    sed -i \"s/CLUSTER_PASSWORD=.*/CLUSTER_PASSWORD=\\\"${_E_ROOT_SQL_PASWD}\\\"/g\" ${_pXcnf}\n    wait\n    sed -i \"s/CLUSTER_HOSTNAME=.*/CLUSTER_HOSTNAME=\\\"${_DB_NODE_IP[1]}\\\"/g\" ${_pXcnf}\n    wait\n\n    if [ \"${_CLUSTER_XCHECK}\" = \"YES\" ]; then\n      sed -i \"s/WRITE_HOSTGROUP_ID=.*/WRITE_HOSTGROUP_ID=\\\"500\\\"/g\" ${_pXcnf}\n      wait\n      sed -i \"s/READ_HOSTGROUP_ID=.*/READ_HOSTGROUP_ID=\\\"501\\\"/g\" ${_pXcnf}\n      wait\n    fi\n\n    sed -i \"s/MODE=.*/MODE=\\\"loadbal\\\"/g\" ${_pXcnf}\n    wait\n    sleep 1\n\n    chmod 640 /vservers/${_idn}/etc/proxysql.cnf\n    chmod 640 ${_pXcnf}\n\n    echo \"Starting ProxySQL...\"\n    vserver ${_idn} exec service proxysql initial\n    wait\n\n    chmod 640 /vservers/${_idn}/etc/proxysql.cnf\n    chmod 640 ${_pXcnf}\n\n    echo \"Running default auto-configuration of ProxySQL...\"\n    _pxyCmd=\"proxysql-admin --config-file=/etc/proxysql-admin.cnf --mode=loadbal --enable\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCmd}\"\n    sleep 10\n\n    echo \"Adding root user to mysql_users in ProxySQL...\"\n    _pxyCmd=\"INSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('root','${_ROOT_SQL_PASWD}','10');\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"LOAD MYSQL USERS TO RUNTIME;\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"SAVE MYSQL USERS FROM RUNTIME;\"\n    
  ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"SAVE MYSQL USERS TO DISK;\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n\n    echo \"Adding root user to mysql_query_rules in ProxySQL...\"\n    _pxyCmd=\"INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('root',10,1);\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"INSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('root',11,1);\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"LOAD MYSQL QUERY RULES TO RUNTIME;\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"SAVE MYSQL QUERY RULES TO DISK;\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n\n    sleep 1\n\n    if [ \"${_CLUSTER_XCHECK}\" != \"YES\" ]; then\n      for IP in \"${_DB_NODE_IP[@]}\"; do\n        _pxyCmd=\"INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,comment) VALUES ('${IP}',11,3306,1000,'READWRITE');\"\n          ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      done\n      _pxyCmd=\"LOAD MYSQL SERVERS TO RUNTIME;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"SAVE MYSQL SERVERS TO DISK;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    fi\n\n    if [ \"${_CLUSTER_XCHECK}\" = \"YES\" ]; then\n      echo \"Adding monitor user to ProxySQL...\"\n      _pxyCmd=\"INSERT INTO mysql_users (username,password) VALUES ('monitor','${_MONITOR_PASSWORD}');\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n      _pxyCmd=\"UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';\"\n        ssh ${_NOSTRICT} 
root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n      _pxyCmd=\"UPDATE global_variables SET variable_value='${_MONITOR_PASSWORD}' WHERE variable_name='mysql-monitor_password';\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n      _pxyCmd=\"UPDATE global_variables SET variable_value=268435456 WHERE variable_name='mysql-max_allowed_packet';\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n\n      echo \"Load ProxySQL variables to runtime and save to disk...\"\n      _pxyCmd=\"LOAD MYSQL VARIABLES TO RUNTIME;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n      _pxyCmd=\"SAVE MYSQL VARIABLES TO DISK;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n\n      echo \"Adding grants for ProxySQL monitor user on Percona Cluster...\"\n      for IP in \"${_DB_NODE_IP[@]}\"; do\n        _pxyCmd=\"CREATE USER IF NOT EXISTS 'monitor'@'${IP}';\"\n          ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n        _pxyCmd=\"GRANT USAGE ON *.* TO 'monitor'@'${IP}';\"\n          ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n        _pxyCmd=\"ALTER USER 'monitor'@'${IP}' IDENTIFIED BY '${_MONITOR_PASSWORD}';\"\n          ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n      done\n      _pxyCmd=\"CREATE USER IF NOT EXISTS 'monitor'@'${_WEB_NODE_IP}';\"\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"GRANT USAGE ON *.* TO 'monitor'@'${_WEB_NODE_IP}';\"\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"ALTER USER 'monitor'@'${_WEB_NODE_IP}' IDENTIFIED BY '${_MONITOR_PASSWORD}';\"\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u 
root -e \\\"${_pxyCmd}\\\"\"\n\n      echo \"Adding proxysql_user to ProxySQL...\"\n      _pxyCmd=\"INSERT INTO mysql_users (username,password) VALUES ('proxysql_user','${_CLUSTER_APP_PASSWORD}');\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n\n      echo \"Adding grants for ProxySQL proxysql_user on Percona Cluster...\"\n      for IP in \"${_DB_NODE_IP[@]}\"; do\n        _pxyCmd=\"CREATE USER IF NOT EXISTS 'proxysql_user'@'${IP}';\"\n          ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n        _pxyCmd=\"GRANT USAGE ON *.* TO 'proxysql_user'@'${IP}';\"\n          ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n        _pxyCmd=\"ALTER USER 'proxysql_user'@'${IP}' IDENTIFIED BY '${_CLUSTER_APP_PASSWORD}';\"\n          ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n      done\n      _pxyCmd=\"CREATE USER IF NOT EXISTS 'proxysql_user'@'${_WEB_NODE_IP}';\"\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"GRANT USAGE ON *.* TO 'proxysql_user'@'${_WEB_NODE_IP}';\"\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"ALTER USER 'proxysql_user'@'${_WEB_NODE_IP}' IDENTIFIED BY '${_CLUSTER_APP_PASSWORD}';\"\n        ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n\n      echo \"Load ProxySQL users to runtime and save to disk...\"\n      _pxyCmd=\"LOAD MYSQL USERS TO RUNTIME;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n      _pxyCmd=\"SAVE MYSQL USERS TO DISK;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      sleep 1\n    fi\n\n    if [ \"${_CLUSTER_XCHECK}\" = \"YES\" ]; then\n      echo \"Adding mysql_servers in ProxySQL...\"\n      _pxyCmd=\"DELETE FROM 
mysql_replication_hostgroups where writer_hostgroup=500;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"DELETE FROM mysql_servers where hostgroup_id in (500,501);\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      cNt=\"100\"\n      for IP in \"${_DB_NODE_IP[@]}\"; do\n        _pxyCmd=\"INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('${IP}',500,3306,${cNt});\"\n          ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n        cNt=$(( 10000 * cNt ))\n      done\n      cNt=\"1000000000\"\n      for IP in \"${_DB_NODE_IP[@]}\"; do\n        cNt=$(( cNt / 100 ))\n        _pxyCmd=\"INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('${IP}',501,3306,${cNt});\"\n          ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      done\n      _pxyCmd=\"LOAD MYSQL SERVERS TO RUNTIME;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"SAVE MYSQL SERVERS TO DISK;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n    fi\n\n    echo \"Checking runtime_scheduler entries in ProxySQL...\"\n    _pxyCmd=\"SELECT * FROM runtime_scheduler\\\\G\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n\n    echo \"Checking mysql_servers status in ProxySQL...\"\n    sleep 10\n    _pxyCmd=\"SELECT hostgroup_id,hostname,port,status,weight FROM mysql_servers;\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n\n    _isPxy=$(vserver ${_idn} exec which proxysql 2>&1)\n    echo \"Relative _isPxy is ${_isPxy}\"\n    echo \"Absolute _isPxy is /vservers/${_idn}${_isPxy}\"\n\n    if [ -x \"/vservers/${_idn}${_isPxy}\" ] \\\n      && [ -e \"/vservers/${_idn}/etc/proxysql.cnf\" ] \\\n      && [ -e \"${_pXcnf}\" ]; then\n      echo \"INFO: Updating ProxySQL Galera 
Checker...\"\n      _pGc=\"/vservers/${_idn}/usr/bin/proxysql_galera_checker\"\n      if [ -e \"${_pGc}\" ]; then\n        rm -f ${_pGc}\n      fi\n      _tURL=\"${_urlHmr}/${_tBn}/proxysql_galera_checker\"\n      echo PGC download URL is ${_tURL}\n      curl -I ${_tURL}\n      curl ${_crlGet} \"${_tURL}\" -o ${_pGc}\n      if [ ! -e \"${_pGc}\" ]; then\n        curl ${_crlGet} \"${_tURL}\" -o ${_pGc}\n      else\n        _PAV_TEST=$(grep PROXYSQL_ADMIN_VERSION ${_pGc} 2>&1)\n        if [[ ! \"${_PAV_TEST}\" =~ \"PROXYSQL_ADMIN_VERSION\" ]]; then\n          rm -f ${_pGc}\n          curl ${_crlGet} \"${_tURL}\" -o ${_pGc}\n        fi\n      fi\n      chmod 755 ${_pGc}\n      echo loadbal > /vservers/${_idn}/var/lib/proxysql/mode\n      echo loadbal > /vservers/${_idn}/var/lib/proxysql/c1r_galera_mode\n      echo loadbal > /vservers/${_idn}/var/lib/proxysql/--mode=singlewrite_mode\n      echo loadbal > /vservers/${_idn}/var/lib/proxysql/--mode=loadbal_mode\n      echo 0 > /vservers/${_idn}/var/lib/proxysql/reload\n      echo 0 > /vservers/${_idn}/var/lib/proxysql/c1r_galera_reload\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"chown proxysql:proxysql /var/lib/proxysql/*reload\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"chown proxysql:proxysql /var/lib/proxysql/*mode*\"\n      rm -f /vservers/${_idn}/var/lib/proxysql/pxc_test_proxysql_galera_check.log\n      _pxyCmd=\"proxysql_galera_checker --log=/var/lib/proxysql/pxc_test_proxysql_galera_check.log --debug\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCmd}\"\n      cat /vservers/${_idn}/var/lib/proxysql/pxc_test_proxysql_galera_check.log\n      echo \"INFO: Restarting ProxySQL server...\"\n      vserver ${_idn} exec service proxysql restart\n      _pxyCmd=\"SELECT * FROM runtime_scheduler\\\\G\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n      _pxyCmd=\"SELECT * FROM mysql_servers;\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e 
\\\"${_pxyCmd}\\\"\"\n    else\n      echo \"OOPS: ProxySQL will not work!\"\n    fi\n  else\n    echo \"Can't install ProxySQL because _E_PROXYSQL_PASSWORD or _E_ROOT_SQL_PASWD is empty\"\n  fi\n\n  if [ \"${_CLUSTER_XCHECK}\" = \"YES\" ]; then\n    if [ -e \"/vservers/${_idn}/data/all\" ] \\\n      && [ -e \"/vservers/${_idn}/var/lib/proxysql\" ]; then\n      echo \"Installing galera_check for ProxySQL...\"\n      curl -s -A iCab \"http://${_USE_MIR}/cluster/galera_check.pl\" \\\n        -o /vservers/${_idn}/var/lib/proxysql/galera_check.pl\n      vserver ${_idn} exec chmod 755 /var/lib/proxysql/galera_check.pl\n      vserver ${_idn} exec chown proxysql:proxysql /var/lib/proxysql/galera_check.pl\n      if [ ! -e \"/vservers/${_idn}/var/lib/proxysql/galera_check.pl\" ]; then\n        echo \"ERROR: galera_check for ProxySQL not available!\"\n      fi\n    fi\n\n    if [ -e \"/vservers/${_idn}/var/lib/proxysql/galera_check.pl\" ]; then\n      echo \"Adding custom runtime_scheduler entry in ProxySQL...\"\n      _rplSchdrA=\"DELETE FROM scheduler where id=10;\"\n      # _rplSchdrB=\"DELETE FROM scheduler where id=11;\"\n      _rplSchdrC=\"INSERT INTO scheduler (id,active,interval_ms,filename,arg1) VALUES (10,1,2000,'/var/lib/proxysql/galera_check.pl','-u=admin -p=${_PROXYSQL_PASSWORD} -h=127.0.0.1 -H=10:W,11:R -P=6032 --execution_time=1 --retry_down=2 --retry_up=1 --main_segment=1 --debug=0 --active_failover --log=/var/log/proxysql/galera_check.log');\"\n      _rplSchdrD=\"LOAD SCHEDULER TO RUNTIME;\"\n      _rplSchdrE=\"SAVE SCHEDULER TO DISK;\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_rplSchdrA}\\\"\"\n        # ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_rplSchdrB}\\\"\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_rplSchdrC}\\\"\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_rplSchdrD}\\\"\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e 
\\\"${_rplSchdrE}\\\"\"\n      echo \"Checking modified runtime_scheduler in ProxySQL...\"\n        ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"SELECT * FROM runtime_scheduler\\\\G\\\"\"\n    fi\n\n    echo \"Checking mysql_servers status in ProxySQL...\"\n    sleep 10\n    _pxyCmd=\"SELECT hostgroup_id,hostname,port,status,weight FROM mysql_servers;\"\n      ssh ${_NOSTRICT} root@${_WEB_NODE_IP} \"${_pxyCtx} -e \\\"${_pxyCmd}\\\"\"\n\n  fi\n}\n\n_install_vps() {\n\n  if [ ! -d \"/vservers/\" ]; then\n    echo\n    echo \"Please run 'in-host', 'reboot' and 'up-host' before 'in-vps'\"\n    echo \"Bye\"\n    echo\n    exit 1\n  fi\n\n  echo \"Installing vps ${_idn}...\"\n\n  _incl=\"sysvinit-core,sysvinit-utils,ssh,lsb-release,dnsutils,netcat,curl,wget,aptitude,locales,screen,python3\"\n  _excl=\"systemd,systemd-sysv,libsystemd0,udev,makedev,fuse\"\n\n  if [ \"${_fce}\" = \"force\" ]; then\n    _locMode=\"--force -m debootstrap\"\n  else\n    _locMode=\"-m debootstrap\"\n  fi\n\n  _eth=$(ifconfig 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f1 \\\n    | awk '{ print $1}' \\\n    | sed \"s/[\\,':]//g\" 2>&1)\n\n  vserver ${_idn} build -n ${_idn} ${_locMode} --i-know-its-there \\\n    --hostname ${_hst} \\\n    --interface ${_eth}:${_vip}/32 \\\n    -- -d ${_osx} -- --arch=amd64 --include=${_incl} \\\n    --exclude=${_excl};\n\n  echo \"Starting vps ${_idn}...\"\n  vserver-autostart ${_idn}\n  vserver ${_idn} start\n\n  echo \"en_US.UTF-8 UTF-8\" >> /vservers/${_idn}/etc/locale.gen\n  sed -i \"/^$/d\" /vservers/${_idn}/etc/locale.gen\n  vserver ${_idn} exec locale-gen en_US.UTF-8\n  vserver ${_idn} exec locale\n\n  echo \"Syncing timezone for vps ${_idn}...\"\n  cp -af /etc/timezone /vservers/${_idn}/etc/\n  vserver ${_idn} exec dpkg-reconfigure -f noninteractive tzdata\n\n  echo \"Syncing resolv.conf for vps ${_idn}...\"\n  rm -f /vservers/${_idn}/etc/resolv.conf\n  cp -af /etc/resolv.conf /vservers/${_idn}/etc/\n  vserver ${_idn} exec dig 
aegirproject.org\n\n  if [ \"${_ver}\" != \"galera\" ]; then\n    echo \"Installing BOA ${_ver} in vps ${_idn}...\"\n    rm -f BOA.sh.txt*\n    wget ${_wgetGet} http://${_USE_MIR}/versions/${_tRee}/boa/BOA.sh.txt\n    mv -f BOA.sh.txt /vservers/${_idn}/root/\n    vserver ${_idn} exec bash /root/BOA.sh.txt\n    vserver ${_idn} exec /opt/local/bin/boa in-${_ver} public ${_hst} ${_eml} o8 mini\n  else\n    _REMOVE_LINKS=\"buagent \\\n                   checkroot.sh \\\n                   fancontrol \\\n                   halt \\\n                   hwclock.sh \\\n                   hwclockfirst.sh \\\n                   ifupdown \\\n                   ifupdown-clean \\\n                   kerneloops \\\n                   klogd \\\n                   mountall-bootclean.sh \\\n                   mountall.sh \\\n                   mountdevsubfs.sh \\\n                   mountkernfs.sh \\\n                   mountnfs-bootclean.sh \\\n                   mountnfs.sh \\\n                   mountoverflowtmp \\\n                   mountvirtfs \\\n                   mtab.sh \\\n                   networking \\\n                   procps \\\n                   reboot \\\n                   sendsigs \\\n                   setserial \\\n                   svscan \\\n                   sysstat \\\n                   umountfs \\\n                   umountnfs.sh \\\n                   umountroot \\\n                   urandom \\\n                   vnstat\"\n    for _link in ${_REMOVE_LINKS}; do\n      if [ -e \"/vservers/${_idn}/etc/init.d/${_link}\" ]; then\n        vserver ${_idn} exec update-rc.d -f ${_link} remove\n        # Back up the guest's init script; the guard above checks the guest path\n        mv -f /vservers/${_idn}/etc/init.d/${_link} /vservers/${_idn}/var/backups/init.d.${_link}\n      fi\n    done\n    for s in cron dbus ssh; do\n      if [ -e \"/etc/init.d/${s}\" ]; then\n        sed -rn -e 's/^(# Default-Stop:).*$/\\1 0 1 6/' -e '/^### BEGIN INIT INFO/,/^### END INIT INFO/p' /etc/init.d/${s} > /etc/insserv/overrides/${s}\n      fi\n    done\n    /sbin/insserv -v -d &> 
/dev/null\n  fi\n\n  echo \"Stopping vps ${_idn}...\"\n  vserver ${_idn} stop\n\n  echo \"Adding immutable flag to ${_idn} vps config...\"\n  echo \"CAP_LINUX_IMMUTABLE\" >  /usr/etc/vservers/${_idn}/bcapabilities\n  echo \"Starting vps ${_idn}...\"\n  vserver ${_idn} start\n  echo\n  echo \"Testing vservers status...\"\n  vserver-stat\n  echo\n\n  echo \"en_US.UTF-8 UTF-8\" >> /vservers/${_idn}/etc/locale.gen\n  sed -i \"/^$/d\" /vservers/${_idn}/etc/locale.gen\n  vserver ${_idn} exec locale-gen en_US.UTF-8\n  vserver ${_idn} exec locale\n\n  if [ -d \"/vservers/${_idn}/root\" ] \\\n    && [ -e \"/root/.ssh/id_ed25519.pub\" ] \\\n    && [ ! -e \"/vservers/${_idn}/root/.ssh/id_ed25519\" ]; then\n    vserver ${_idn} exec mkdir -p ~/.ssh\n    vserver ${_idn} exec ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519\n    echo \"Host *\" >> /vservers/${_idn}/root/.ssh/config\n    echo \"   StrictHostKeyChecking no\" >> /vservers/${_idn}/root/.ssh/config\n    echo \"   UserKnownHostsFile=/dev/null\" >> /vservers/${_idn}/root/.ssh/config\n    echo >> /vservers/${_idn}/root/.ssh/authorized_keys\n    cat /root/.ssh/id_ed25519.pub >> /vservers/${_idn}/root/.ssh/authorized_keys\n    vserver ${_idn} exec chmod 600 ~/.ssh/authorized_keys\n    vserver ${_idn} exec chmod 700 ~/.ssh/\n  fi\n\n  if [ -x \"/vservers/${_idn}/usr/bin/gpg2\" ]; then\n    _GPG=gpg2\n  else\n    _GPG=gpg\n  fi\n\n  if [ \"${_ver}\" = \"galera\" ]; then\n    echo \"Running post-install Galera VPS upgrade...\"\n    if [ -x \"/vservers/${_idn}/lib/systemd/systemd\" ]; then\n      echo \"Removing systemd on ${_osx}...\"\n      if [ \"${_osx}\" = \"jessie\" ] \\\n        || [ \"${_osx}\" = \"stretch\" ] \\\n        || [ \"${_osx}\" = \"buster\" ] \\\n        || [ \"${_osx}\" = \"bullseye\" ] \\\n        || [ \"${_osx}\" = \"bookworm\" ]; then\n        vserver ${_idn} exec ${_INSTAPP} sysvinit-core\n        vserver ${_idn} exec ${_INSTAPP} sysvinit-utils\n        if [ -e 
\"/vservers/${_idn}/usr/share/sysvinit/inittab\" ]; then\n          cp -af /vservers/${_idn}/usr/share/sysvinit/inittab /vservers/${_idn}/etc/inittab\n        fi\n        if [ ! -e \"/vservers/${_idn}/etc/apt/preferences.d/offsystemd\" ]; then\n          rm -f /vservers/${_idn}/etc/apt/preferences.d/systemd\n          echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /vservers/${_idn}/etc/apt/preferences.d/offsystemd\n          echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /vservers/${_idn}/etc/apt/preferences.d/offsystemd\n        fi\n        vserver ${_idn} exec ${_INSTAPP} sysvinit-core\n        vserver ${_idn} exec ${_INSTAPP} sysvinit-utils\n        rm -f /vservers/${_idn}/etc/apt/sources.list.d/nosystemd.list\n        rm -f /vservers/${_idn}/etc/apt/preferences.d/nosystemd\n      fi\n    fi\n  else\n    echo \"Running post-install Barracuda upgrade...\"\n    sleep 8\n    if [ -z \"${_ROOT_SQL_PASWD}\" ]; then\n      _vidn=\"${_CLUSTER_PREFIX}db0\"\n      if [ -e \"/vservers/${_vidn}/root/.my.cluster_root_pwd.txt\" ]; then\n        _ROOT_SQL_PASWD=$(cat /vservers/${_vidn}/root/.my.cluster_root_pwd.txt 2>&1)\n        _ROOT_SQL_PASWD=$(echo -n ${_ROOT_SQL_PASWD} | tr -d \"\\n\" 2>&1)\n        _E_ROOT_SQL_PASWD=\"${_ROOT_SQL_PASWD//\\//\\\\\\/}\"\n      fi\n    fi\n    if [ ! 
-z \"${_ROOT_SQL_PASWD}\" ]; then\n      echo \"${_ROOT_SQL_PASWD}\" > /vservers/${_idn}/root/.my.cluster_root_pwd.txt\n    fi\n    vserver ${_idn} exec /opt/local/bin/barracuda up-${_ver}\n    echo\n  fi\n\n  if [ \"${_cmd}\" = \"in-all\" ] || [ \"${_cmd}\" = \"in-web\" ]; then\n    if [ \"${_ver}\" != \"galera\" ]; then\n      _install_proxysql\n    fi\n  fi\n\n  echo\n  echo \"The ${_idn} VPS installation is complete!\"\n  echo\n}\n\n_install_octopus() {\n  _check_config_cluster\n  _idn=\"${_CLUSTER_PREFIX}web\"\n  if [ -e \"/vservers/${_idn}/data/all\" ] \\\n    && [[ \"${_idn}\" =~ \"web\" ]]; then\n    vserver ${_idn} exec /opt/local/bin/boa in-oct ${_email} ${_user} ${_mode} ${_copt} ${_csub} ${_ccor}\n    exit 0\n  else\n    echo\n    echo \"Please install the cluster before trying to add this Octopus instance\"\n    echo \"Bye\"\n    echo\n    exit 1\n  fi\n}\n\n_re_install_pxy() {\n  _check_config_cluster\n  _idn=\"${_idn}\"\n  _vip=\"${_vip}\"\n  _fce=\"${_fce}\"\n  if [ -e \"/vservers/${_idn}/usr/bin/mysql\" ] \\\n    && [ \"${_vip}\" = \"${_WEB_NODE_IP}\" ] \\\n    && [ \"${_fce}\" = \"force-reinstall\" ]; then\n    _install_proxysql\n  fi\n}\n\n_install_web_node() {\n  _check_config_cluster\n  _idn=\"${_CLUSTER_PREFIX}web\"\n  _hst=\"${_WEB_FQDN}\"\n  _vip=\"${_WEB_NODE_IP}\"\n  _osx=\"${_CLUSTER_OS}\"\n  if [ \"${_cmd}\" = \"in-all\" ]; then\n    _ver=\"${_xer}\"\n  fi\n  _eml=\"${_CLUSTER_EMAIL}\"\n  if [ ! 
-e \"/vservers/${_idn}/\" ] \\\n    && [[ \"${_idn}\" =~ \"web\" ]]; then\n    _install_vps\n  fi\n}\n\n_upgrade_web_node() {\n  _check_config_cluster\n  _idn=\"${_CLUSTER_PREFIX}web\"\n  vserver ${_idn} exec /opt/local/bin/barracuda up-${_ver} system\n}\n\n_default_my_cnf_copy() {\n\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n  _CPU_MX=$(( _CPU_NR * 2 ))\n  if [ \"${_CPU_MX}\" -lt 64 ]; then\n    _CPU_MX=64\n  fi\n  _CPU_FC=$(( _CPU_MX * 5 ))\n\n  ssh ${_NOSTRICT} root@${IP} \"echo '\n[mysqld]\n\n###\n### Galera configuration template\n### /etc/mysql/conf.d/galera.cnf\n###\n\n# Data directory\n#datadir=/var/lib/mysql\n\n# Temp directory\n#tmpdir=/tmp\n\n# MySQL User\n#user=mysql\n\n### Mandatory for Galera\n#\nbinlog_format=ROW\n#default_storage_engine=InnoDB\ninnodb_autoinc_lock_mode=2\n\n### Recommended for Galera\n#\ninnodb_flush_log_at_trx_commit=0\nperformance_schema=OFF\nbinlog_row_image=minimal\nwsrep_slave_threads=${_CPU_MX}\nwsrep_log_conflicts=ON\npxc_strict_mode=PERMISSIVE\n\n### Optional Memory Settings for Galera\n#\n# gcs.recv_q_hard_limit=64G\n# gcs.recv_q_soft_limit=32G\n# gcs.max_throttle=0.25\n# gcs.fc_limit=${_CPU_FC}\n# gcs.fc_factor=0.8\n\n### Galera Provider Configuration\n#\nwsrep_provider=/usr/lib/galera3/libgalera_smm.so\n\n### Galera Cluster Configuration\n#\nwsrep_cluster_name=\\\"'${_CLUSTER_NAME}'\\\"\nwsrep_cluster_address=\\\"'${_CLUSTER_STRING}',...?pc.wait_prim=no\\\"\n\n### Galera Synchronization 
Configuration\n#\nwsrep_sst_method=rsync\nwsrep_sst_auth=sstuser:'${_MAINT_USER_PWD}'\n\n### Galera extra tuning\n#\nwsrep_debug=OFF\nwsrep_retry_autocommit=8\nwsrep_sync_wait=7\n\n### Galera Node Configuration\n#\nwsrep_node_address=\\\"'${IP}'\\\"\nwsrep_node_name=\\\"'${IP}'\\\"\n\n### Optional MyISAM Support in Galera\n#\n# wsrep_replicate_myisam=1\n\n' > /etc/mysql/conf.d/galera.cnf\"\n\n  \tssh ${_NOSTRICT} root@${IP} \"echo '\n[client]\nport                    = 3306\nsocket                  = /run/mysqld/mysqld.sock\ndefault-character-set   = utf8mb4\n\n[mysqld]\nuser                    = mysql\npid-file                = /run/mysqld/mysqld.pid\nsocket                  = /run/mysqld/mysqld.sock\nport                    = 3306\nbasedir                 = /usr\ndatadir                 = /var/lib/mysql\ntmpdir                  = /tmp\n#default_storage_engine  = InnoDB\nlc_messages_dir         = /usr/share/mysql\nlc_messages             = en_US\ncharacter_set_server    = utf8mb4\ncollation_server        = utf8mb4_unicode_ci\ntransaction-isolation   = READ-COMMITTED\ntransaction-read-only   = OFF\nskip-external-locking\nskip-name-resolve\nperformance_schema      = OFF\nconnect_timeout         = 60\njoin_buffer_size        = 4M\nkey_buffer_size         = 1024M\nmax_allowed_packet      = 256M\nmax_connect_errors      = 300\nmax_connections         = 300\nmax_user_connections    = 150\nmyisam_sort_buffer_size = 256K\nread_buffer_size        = 8M\nread_rnd_buffer_size    = 8M\nsort_buffer_size        = 256K\nbulk_insert_buffer_size = 256K\ntable_open_cache        = 2048\ntable_definition_cache  = 512\nthread_stack            = 256K\nthread_cache_size       = 128\nwait_timeout            = 3600\ntmp_table_size          = 1024M\nmax_heap_table_size     = 1024M\nlow_priority_updates    = 1\nconcurrent_insert       = 2\n#max_tmp_tables          = 16384\nserver-id               = '${_inc}'\n#myisam-recover-options  = BACKUP\n#myisam_recover          = BACKUP\nsync_binlog   
          = 0\nopen_files_limit        = 294912\ninnodb_autoinc_lock_mode= 2\ngroup_concat_max_len    = 10000\nskip-log-bin\n#log_bin                 = ON\n#max_binlog_size         = 256M\n#binlog_row_image        = minimal\n#binlog_format           = ROW\n#slow_query_log          = 1\n#long_query_time         = 10\n#slow_query_log_file     = /var/log/mysql/sql-slow-query.log\n#log_queries_not_using_indexes\n#innodb-defragment       = 1\n\n# * InnoDB\nsql_mode                = NO_ENGINE_SUBSTITUTION\ninnodb_buffer_pool_instances = 8\ninnodb_page_cleaners    = 8\ninnodb_lru_scan_depth   = 1024\n#innodb_redo_log_capacity = 1024M\n#innodb_log_file_size    = 1024M\ninnodb_buffer_pool_size = 2048M\ninnodb_log_buffer_size  = 256M\ninnodb_file_per_table   = 1\n#innodb_use_native_aio   = 1\ninnodb_open_files       = 196608\ninnodb_io_capacity      = 3000\ninnodb_io_capacity_max  = 6000\ninnodb_flush_method     = O_DIRECT\ninnodb_flush_log_at_trx_commit = 0\ninnodb_thread_concurrency = 0\ninnodb_lock_wait_timeout = 300\ninnodb_buffer_pool_dump_at_shutdown = 1\ninnodb_buffer_pool_load_at_startup = 1\n#innodb_buffer_pool_dump_pct = 100\n#innodb_buffer_pool_dump_now = ON\ninnodb_stats_on_metadata = OFF\ninnodb_adaptive_hash_index = 0\ninnodb_default_row_format = dynamic\ninnodb_print_all_deadlocks = ON\ninnodb_doublewrite = 1\n#innodb_checksum_algorithm=crc32\ninnodb_flush_log_at_timeout = 1\n#innodb_force_recovery = 3\n#innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:900M\n\n[mysqld_safe]\nsocket                  = /run/mysqld/mysqld.sock\nnice                    = 0\nopen_files_limit        = 294912\nsyslog\n\n[mysqldump]\nquick\nmax_allowed_packet      = 256M\nquote-names\n\n[mysql]\ndefault-character-set   = utf8mb4\nno-auto-rehash\n\n[myisamchk]\nkey_buffer              = 1M\nsort_buffer_size        = 256K\nread_buffer             = 4M\nwrite_buffer            = 4M\n\n[isamchk]\nkey_buffer              = 1M\nsort_buffer_size        = 256K\nread_buffer             = 
4M\nwrite_buffer            = 4M\n\n[mysqlhotcopy]\ninteractive-timeout\n\n!includedir /etc/mysql/conf.d/\n' > /etc/mysql/my.cnf\"\n}\n\n_install_db_cluster() {\n  _check_config_cluster\n\n  if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _SQL_OS_CODE=trixie\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n    _SQL_OS_CODE=bookworm\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _SQL_OS_CODE=bullseye\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _SQL_OS_CODE=buster\n  else\n    _SQL_OS_CODE=\"${_OS_CODE}\"\n  fi\n\n  if [ -x \"/vservers/${_idn}/usr/bin/gpg2\" ]; then\n    _GPG=gpg2\n  else\n    _GPG=gpg\n  fi\n\n  _CLUSTER_NAME=\"${_CLUSTER_PREFIX}_galera\"\n  _MAINT_USER_PWD=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n  _ROOT_SQL_PASWD=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n  _E_MAINT_USER_PWD=\"${_MAINT_USER_PWD//\\//\\\\\\/}\"\n  _E_ROOT_SQL_PASWD=\"${_ROOT_SQL_PASWD//\\//\\\\\\/}\"\n  _CLUSTER_STRING=\"gcomm://\"$(IFS=, ; echo \"${_DB_NODE_IP[*]}\")\n  _inc=\"0\"\n\n  _percList=\"/etc/apt/sources.list.d/percona-release.list\"\n  _DB_SRC=\"repo.percona.com\"\n  _percNodot=\"${_DB_SERIES//./}\"\n  _percRepo=\"${_DB_SRC}/ps-${_percNodot}/apt\"\n  _percTools=\"${_DB_SRC}/tools/apt\"\n\n  for IP in \"${_DB_NODE_IP[@]}\"; do\n    echo \"${IP} # Cluster Galera Node\" >> /etc/csf/csf.allow\n    echo \"${IP} # Cluster Galera Node\" >> /etc/csf/csf.ignore\n  done\n  if [ ! 
-z \"${_WEB_NODE_IP}\" ]; then\n    echo \"${_WEB_NODE_IP} # Web Cluster Node\" >> /etc/csf/csf.allow\n    echo \"${_WEB_NODE_IP} # Web Cluster Node\" >> /etc/csf/csf.ignore\n  fi\n  service lfd stop\n  pkill -9 -f ConfigServer\n  killall sleep &> /dev/null\n  rm -f /etc/csf/csf.error\n  if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n    csf -ra &> /dev/null\n    synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  else\n    csf -r &> /dev/null\n  fi\n  ### Linux kernel TCP SACK CVEs mitigation\n  ### CVE-2019-11477 SACK Panic\n  ### CVE-2019-11478 SACK Slowness\n  ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n    _SACK_TEST=$(ip6tables --list | grep tcpmss)\n    if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n      sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n      iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    fi\n  fi\n\n  for IP in \"${_DB_NODE_IP[@]}\"; do\n  \techo \"Installing DB cluster node on ${IP} ...\"\n    _idn=\"${_CLUSTER_PREFIX}db${_inc}\"\n    _hst=\"${_CLUSTER_PREFIX}db${_inc}.${_CLUSTER_SUFFIX}\"\n    _vip=\"${IP}\"\n    _osx=\"${_CLUSTER_OS}\"\n    _ver=\"galera\"\n    _eml=\"${_CLUSTER_EMAIL}\"\n    if [ ! -e \"/vservers/${_idn}/\" ] \\\n      && [ ! -z \"${_idn}\" ]; then\n      _install_vps\n      echo \"Installing Percona Cluster on VPS ${_idn}...\"\n    fi\n    if [ ! 
-e \"/vservers/${_idn}/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/vservers/${_idn}/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /vservers/${_idn}/etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _inc=$((_inc+1))\n    echo \"${_MAINT_USER_PWD}\" > /vservers/${_idn}/root/.my.cluster_maint_pwd.txt\n    echo \"${_ROOT_SQL_PASWD}\" > /vservers/${_idn}/root/.my.cluster_root_pwd.txt\n  \tvserver ${_idn} exec ${_APT_UPDATE} -qq\n    vserver ${_idn} exec ${_INSTAPP} debian-keyring\n    vserver ${_idn} exec ${_INSTAPP} debian-archive-keyring\n  \tvserver ${_idn} exec apt-get upgrade -y -qq\n  \tvserver ${_idn} exec ${_INSTAPP} dirmngr\n    rm -f /vservers/${_idn}/etc/apt/sources.list.d/mariadb*\n    rm -f /vservers/${_idn}/etc/apt/sources.list.d/ourdelta*\n    rm -f /vservers/${_idn}/etc/apt/sources.list.d/percona*\n    rm -f /vservers/${_idn}/etc/apt/sources.list.d/xtrabackup*\n    _PERCONA_KEYS_SIG=\"8507EFA5\"\n    if [ ! -e \"/vservers/${_idn}/etc/apt/keyrings/percona.gpg\" ]; then\n      if [ ! -e \"/vservers/${_idn}/etc/apt/keyrings\" ]; then\n        mkdir -m 0755 -p /vservers/${_idn}/etc/apt/keyrings\n      fi\n      if [ -e \"/vservers/${_idn}/etc/apt/trusted.gpg.d/percona.gpg\" ] \\\n        || [ -e \"/vservers/${_idn}/etc/apt/trusted.gpg.d/percona-keyring.gpg~\" ]; then\n        rm -f /vservers/${_idn}/etc/apt/trusted.gpg.d/percona*\n      fi\n      vserver ${_idn} exec apt-key del ${_PERCONA_KEYS_SIG} &> /dev/null\n      if [ ! 
-e \"/vservers/${_idn}/etc/apt/keyrings/percona.gpg\" ]; then\n        curl -fsSL ${_urlDev}/percona-key.gpg | ${_GPG} --dearmor -o /vservers/${_idn}/etc/apt/keyrings/percona.gpg\n      fi\n      chmod 644 /vservers/${_idn}/etc/apt/keyrings/percona.gpg\n    fi\n    echo \"## Percona Cluster APT Repository\" > /vservers/${_idn}${_percList}\n    if [ -e \"/vservers/${_idn}/etc/apt/keyrings/percona.gpg\" ]; then\n      echo \"deb [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percRepo} ${_SQL_OS_CODE} main\" >> /vservers/${_idn}${_percList}\n      echo \"deb-src [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percRepo} ${_SQL_OS_CODE} main\" >> /vservers/${_idn}${_percList}\n    else\n      echo \"deb http://${_percRepo} ${_SQL_OS_CODE} main\" >> /vservers/${_idn}${_percList}\n      echo \"deb-src http://${_percRepo} ${_SQL_OS_CODE} main\" >> /vservers/${_idn}${_percList}\n    fi\n    chmod 644 /vservers/${_idn}${_percList}\n    echo -e 'Package: *\\nPin: release o=Percona Development Team\\nPin-Priority: 1001' > /vservers/${_idn}/etc/apt/preferences.d/00percona.pref\n  \tvserver ${_idn} exec ${_APT_UPDATE} -qq\n  \tvserver ${_idn} exec ${_INSTAPP} percona-xtradb-cluster-57\n  \t##vserver ${_idn} exec ${_INSTAPP} percona-xtrabackup-24\n    echo \"[client]\" > /vservers/${_idn}/root/.my.cnf\n    echo \"user=root\" >> /vservers/${_idn}/root/.my.cnf\n    echo \"password=${_ROOT_SQL_PASWD}\" >> /vservers/${_idn}/root/.my.cnf\n    vserver ${_idn} exec chmod 0600 /root/.my.cnf\n    echo \"db=mysql\" > /vservers/${_idn}/root/.mytop\n    vserver ${_idn} exec chmod 0600 /root/.mytop\n    vserver ${_idn} exec update-rc.d -f mysql remove\n\n    ##echo \"Running mysql_secure_installation procedure on ${IP} DB node...\"\n    ##vserver ${_idn} exec mysqladmin -u root flush-hosts\"\n    ##vserver ${_idn} exec mysql -u root -e \"DELETE FROM mysql.user WHERE user='';\"\n    ##vserver ${_idn} exec mysql -u root -e \"DELETE FROM mysql.user WHERE user='root' AND Host NOT IN 
('localhost', '127.0.0.1', '::1');\"\n    ##vserver ${_idn} exec mysql -u root -e \"DELETE FROM mysql.db WHERE Db='test' OR Db='test\\\\_%';\"\n    ##vserver ${_idn} exec mysql -u root -e \"UPDATE mysql.user SET Password=PASSWORD('${_ROOT_SQL_PASWD}') WHERE user='root';\"\n    ##vserver ${_idn} exec mysql -u root -e \"UPDATE mysql.user SET Password=PASSWORD('${_MAINT_USER_PWD}') WHERE user='debian-sys-maint';\"\n    ##if [ -e \"/vservers/${_idn}/etc/mysql/debian.cnf\" ]; then\n    ##  sed -i \"s/^password =.*/password = ${_MAINT_USER_PWD}/g\" /vservers/${_idn}/etc/mysql/debian.cnf\n    ##fi\n    ##vserver ${_idn} exec mysql -u root -e \"FLUSH PRIVILEGES;\"\n\n  \tssh ${_NOSTRICT} root@${IP} \"echo '\n[client]\nhost     = localhost\nuser     = debian-sys-maint\npassword = '${_MAINT_USER_PWD}'\nsocket   = /run/mysqld/mysqld.sock\n[mysql_upgrade]\nhost     = localhost\nuser     = debian-sys-maint\npassword = '${_MAINT_USER_PWD}'\nsocket   = /run/mysqld/mysqld.sock\nbasedir  = /usr\n' > /etc/mysql/debian.cnf\"\n\n    _default_my_cnf_copy\n\n    ssh ${_NOSTRICT} root@${IP} \"service mysql stop\"\n\n  done\n\n  echo \"Waiting ~5 minutes for ${_DB_NODE_IP[1]} DB node bootstrap-pxc...\"\n  ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"service mysql stop\"\n  sleep 25\n  ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mv -f /var/lib/mysql/ib_logfile0 /var/backups/old-ib_logfile0\"\n  ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mv -f /var/lib/mysql/ib_logfile1 /var/backups/old-ib_logfile1\"\n  sleep 25\n  ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"service mysql bootstrap-pxc\"\n  sleep 180\n\n  echo \"Adding grants for Galera sstuser on Percona Cluster...\"\n  _check_mysql_version\n  for IP in \"${_DB_NODE_IP[@]}\"; do\n    _pxyCmd=\"CREATE USER IF NOT EXISTS 'root'@'${IP}';\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"GRANT ALL ON *.* TO 'root'@'${IP}' WITH GRANT OPTION;\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql 
-u root -e \\\"${_pxyCmd}\\\"\"\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      _pxyCmd=\"ALTER USER 'root'@'${IP}' IDENTIFIED WITH mysql_native_password BY '${_ROOT_SQL_PASWD}';\"\n    else\n      _pxyCmd=\"ALTER USER 'root'@'${IP}' IDENTIFIED WITH caching_sha2_password BY '${_ROOT_SQL_PASWD}';\"\n    fi\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"CREATE USER IF NOT EXISTS 'sstuser'@'${IP}';\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'${IP}';\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n    _pxyCmd=\"ALTER USER 'sstuser'@'${IP}' IDENTIFIED BY '${_MAINT_USER_PWD}';\"\n      ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  done\n  _pxyCmd=\"CREATE USER IF NOT EXISTS 'sstuser'@'localhost';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"CREATE USER IF NOT EXISTS 'sstuser'@'127.0.0.1';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'127.0.0.1';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"ALTER USER 'sstuser'@'localhost' IDENTIFIED BY '${_MAINT_USER_PWD}';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"ALTER USER 'sstuser'@'127.0.0.1' IDENTIFIED BY '${_MAINT_USER_PWD}';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n\n  echo \"Adding grants for legacy debian-sys-maint on Percona Cluster...\"\n  _pxyCmd=\"CREATE USER IF 
NOT EXISTS 'debian-sys-maint'@'localhost';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"CREATE USER IF NOT EXISTS 'debian-sys-maint'@'127.0.0.1';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"GRANT ALL ON *.* TO 'debian-sys-maint'@'localhost';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"GRANT ALL ON *.* TO 'debian-sys-maint'@'127.0.0.1';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"ALTER USER 'debian-sys-maint'@'localhost' IDENTIFIED BY '${_MAINT_USER_PWD}';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"ALTER USER 'debian-sys-maint'@'127.0.0.1' IDENTIFIED BY '${_MAINT_USER_PWD}';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n\n  echo \"Syncing grants for root on Percona Cluster...\"\n  _pxyCmd=\"CREATE USER IF NOT EXISTS 'root'@'localhost';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"CREATE USER IF NOT EXISTS 'root'@'127.0.0.1';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"CREATE USER IF NOT EXISTS 'root'@'${_WEB_NODE_IP}';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"GRANT ALL ON *.* TO 'root'@'localhost' WITH GRANT OPTION;\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"GRANT ALL ON *.* TO 'root'@'127.0.0.1' WITH GRANT OPTION;\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  _pxyCmd=\"GRANT ALL ON *.* TO 'root'@'${_WEB_NODE_IP}' WITH GRANT OPTION;\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  if [ \"${_DB_V}\" = \"5.7\" ]; then\n    _pxyCmd=\"ALTER USER 
'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${_ROOT_SQL_PASWD}';\"\n  else\n    _pxyCmd=\"ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY '${_ROOT_SQL_PASWD}';\"\n  fi\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  if [ \"${_DB_V}\" = \"5.7\" ]; then\n    _pxyCmd=\"ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH mysql_native_password BY '${_ROOT_SQL_PASWD}';\"\n  else\n    _pxyCmd=\"ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH caching_sha2_password BY '${_ROOT_SQL_PASWD}';\"\n  fi\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n  if [ \"${_DB_V}\" = \"5.7\" ]; then\n    _pxyCmd=\"ALTER USER 'root'@'${_WEB_NODE_IP}' IDENTIFIED WITH mysql_native_password BY '${_ROOT_SQL_PASWD}';\"\n  else\n    _pxyCmd=\"ALTER USER 'root'@'${_WEB_NODE_IP}' IDENTIFIED WITH caching_sha2_password BY '${_ROOT_SQL_PASWD}';\"\n  fi\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n\n  _pxyCmd=\"SHOW STATUS LIKE 'wsrep_%';\"\n    ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"${_pxyCmd}\\\"\"\n\n  for IP in \"${_DB_NODE_IP[@]}\"; do\n    if [ \"${IP}\" != \"${_DB_NODE_IP[1]}\" ]; then\n      echo \"Waiting ~5 minutes for ${IP} DB node restart...\"\n      ssh ${_NOSTRICT} root@${IP} \"service mysql stop\"\n      sleep 25\n      ssh ${_NOSTRICT} root@${IP} \"mv -f /var/lib/mysql/ib_logfile0 /var/backups/old-ib_logfile0\"\n      ssh ${_NOSTRICT} root@${IP} \"mv -f /var/lib/mysql/ib_logfile1 /var/backups/old-ib_logfile1\"\n      sleep 25\n      ssh ${_NOSTRICT} root@${IP} \"service mysql start\"\n      sleep 180\n      ssh ${_NOSTRICT} root@${IP} \"mysql -u root -e \\\"SHOW STATUS LIKE 'wsrep%';\\\"\"\n    fi\n  done\n  echo \"All DB nodes restarted!\"\n\n}\n\n_upgrade_db_cluster() {\n  _check_config_cluster\n\n  _CLUSTER_NAME=\"${_CLUSTER_PREFIX}_galera\"\n  _CLUSTER_STRING=\"gcomm://\"$(IFS=, ; echo \"${_DB_NODE_IP[*]}\")\n  
_inc=\"0\"\n\n  for IP in \"${_DB_NODE_IP[@]}\"; do\n    _idn=\"${_CLUSTER_PREFIX}db${_inc}\"\n    _hst=\"${_CLUSTER_PREFIX}db${_inc}.${_CLUSTER_SUFFIX}\"\n    _vip=\"${IP}\"\n    _osx=\"${_CLUSTER_OS}\"\n    _ver=\"galera\"\n    _eml=\"${_CLUSTER_EMAIL}\"\n    _inc=$((_inc+1))\n    _MAINT_USER_PWD=`cat /vservers/${_idn}/root/.my.cluster_maint_pwd.txt`\n    _ROOT_SQL_PASWD=`cat /vservers/${_idn}/root/.my.cluster_root_pwd.txt`\n\n    echo \"Upgrading DB cluster node on ${_idn}/${IP}...\"\n\n    if [ ! -e \"/vservers/${_idn}/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/vservers/${_idn}/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /vservers/${_idn}/etc/apt/apt.conf.d/00sandboxoff\n    fi\n\n    if [ -e \"/vservers/${_idn}${_optBin}/boa\" ]; then\n      rm -f /vservers/${_idn}${_optBin}/boa\n    fi\n    mkdir -p /vservers/${_idn}/${_optBin}\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/boa\" -o /vservers/${_idn}${_optBin}/boa\n    ssh ${_NOSTRICT} root@${IP} \"chmod 700 ${_optBin}/boa\"\n\n    if [ -e \"/vservers/${_idn}${_optBin}/mycnfup\" ]; then\n      rm -f /vservers/${_idn}${_optBin}/mycnfup\n    fi\n    mkdir -p /vservers/${_idn}/${_optBin}\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/mycnfup\" -o /vservers/${_idn}${_optBin}/mycnfup\n    ssh ${_NOSTRICT} root@${IP} \"chmod 700 ${_optBin}/mycnfup\"\n    if [ -e \"/vservers/${_idn}${_optBin}/mycnfup\" ]; then\n      echo \"Running mycnfup check on ${_idn}/${IP}...\"\n      ssh ${_NOSTRICT} root@${IP} \"bash ${_optBin}/mycnfup check\"\n    fi\n\n    if [ -x \"/vservers/${_idn}/usr/bin/gpg2\" ]; then\n      _GPG=gpg2\n    else\n      _GPG=gpg\n    fi\n    _PERCONA_KEYS_SIG=\"8507EFA5\"\n\n    echo \"INFO: Retrieving ${_PERCONA_KEYS_SIG} key on ${_idn}/${IP}...\"\n    if [ ! -e \"/vservers/${_idn}/etc/apt/keyrings/percona.gpg\" ]; then\n      if [ ! 
-e \"/vservers/${_idn}/etc/apt/keyrings\" ]; then\n        mkdir -m 0755 -p /vservers/${_idn}/etc/apt/keyrings\n      fi\n      if [ -e \"/vservers/${_idn}/etc/apt/trusted.gpg.d/percona.gpg\" ] \\\n        || [ -e \"/vservers/${_idn}/etc/apt/trusted.gpg.d/percona-keyring.gpg~\" ]; then\n        rm -f /vservers/${_idn}/etc/apt/trusted.gpg.d/percona*\n      fi\n      vserver ${_idn} exec apt-key del ${_PERCONA_KEYS_SIG} &> /dev/null\n      if [ ! -e \"/vservers/${_idn}/etc/apt/keyrings/percona.gpg\" ]; then\n        curl -fsSL ${_urlDev}/percona-key.gpg | ${_GPG} --dearmor -o /vservers/${_idn}/etc/apt/keyrings/percona.gpg\n      fi\n      chmod 644 /vservers/${_idn}/etc/apt/keyrings/percona.gpg\n    fi\n    vserver ${_idn} exec ${_APT_UPDATE} -qq\n    vserver ${_idn} exec ${_INSTAPP} bc\n    vserver ${_idn} exec apt-get upgrade ${_APT_XTR}\n    vserver ${_idn} exec apt-get dist-upgrade ${_APT_XTR}\n    vserver ${_idn} exec apt-get dist-upgrade ${_APT_XTR}\n    vserver ${_idn} exec rm -f /var/lib/man-db/auto-update\n    vserver ${_idn} exec apt-get install lsb-release ${_aptYesUnth}\n    vserver ${_idn} exec dpkg --configure --force-all -a\n    vserver ${_idn} exec update-rc.d -f mysql remove\n\n    _default_my_cnf_copy\n\n    if [ -e \"/vservers/${_idn}${_optBin}/mycnfup\" ]; then\n      echo \"Running mycnfup tune on ${_idn}/${IP}...\"\n      ssh ${_NOSTRICT} root@${IP} \"bash ${_optBin}/mycnfup tune\"\n      wait\n    fi\n    echo \"Running mysql_upgrade on ${_idn}/${IP} to update tables if needed\"\n    ssh ${_NOSTRICT} root@${IP} \"mysql_upgrade\"\n    wait\n    echo \"Checking wsrep status on ${_idn}/${IP}\"\n    sleep 3\n    ssh ${_NOSTRICT} root@${IP} \"mysql -u root -e \\\"SHOW STATUS LIKE 'wsrep%';\\\"\"\n    wait\n    echo \"Waiting 60 seconds after ${_idn}/${IP} upgrade...\"\n    sleep 60\n    echo \"Upgrade for ${_idn}/${IP} completed!\"\n  done\n\n  echo \"All DB nodes updated!\"\n}\n\n# Ensure /usr/sbin/ipset and /sbin/ipset both resolve to the actual 
ipset binary.\n_ensure_ipset_symlinks() {\n  _IPSET_REAL=\"$(command -v ipset 2>/dev/null || true)\"\n  if [ -z \"${_IPSET_REAL}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"ipset not installed; skipping symlink fixes\"\n    fi\n    return 0\n  fi\n\n  # Resolve through any intermediate symlinks.\n  if [ -L \"${_IPSET_REAL}\" ]; then\n    _IPSET_REAL=\"$(readlink -f \"${_IPSET_REAL}\")\"\n  fi\n\n  for _CAND in /usr/sbin/ipset /sbin/ipset; do\n    _PARENT=\"$(dirname \"${_CAND}\")\"\n    [ -d \"${_PARENT}\" ] || mkdir -p \"${_PARENT}\"\n\n    # If the candidate *is* the real file, nothing to do.\n    if [ \"${_CAND}\" = \"${_IPSET_REAL}\" ]; then\n      continue\n    fi\n\n    # If it exists, check whether it already resolves to the right target.\n    if [ -e \"${_CAND}\" ] || [ -L \"${_CAND}\" ]; then\n      _TARGET=\"$(readlink -f \"${_CAND}\" 2>/dev/null || true)\"\n      if [ \"${_TARGET}\" = \"${_IPSET_REAL}\" ]; then\n        continue\n      fi\n    fi\n\n    ln -sfn \"${_IPSET_REAL}\" \"${_CAND}\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"Linked ${_CAND} -> ${_IPSET_REAL}\"\n    fi\n  done\n}\n\n_install_host() {\n\n  _CUSTOM_DNS_TEST=$(grep 8.8.8.8 /etc/resolv.conf 2>&1)\n  if [[ ! 
\"${_CUSTOM_DNS_TEST}\" =~ \"8.8.8.8\" ]]; then\n    echo \"Fixing resolver...\"\n    chattr -i /etc/resolv.conf\n    echo \"### BOA-DNS-Config ###\" > /etc/resolv.conf\n    echo \"nameserver 1.1.1.1\" >> /etc/resolv.conf\n    echo \"nameserver 8.8.8.8\" >> /etc/resolv.conf\n    echo \"nameserver 9.9.9.9\" >> /etc/resolv.conf\n    chmod 0644 /etc/resolv.conf\n    cat /etc/resolv.conf\n    echo\n  fi\n\n  ${_INSTAPP} wget\n\n  echo \"Configuring /etc/apt/sources.list\"\n\n  cat /etc/apt/sources.list\n  echo\n  if [ \"${_OS_CODE}\" = \"buster\" ] || [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _APT_MIRROR=\"archive.debian.org/debian\"\n    _APT_REPSRC=\"${_OS_CODE}-backports\"\n    _SEC_MIRROR=\"archive.debian.org/debian-security\"\n    _SEC_REPSRC=\"${_OS_CODE}/updates\"\n  elif [ \"${_OS_CODE}\" = \"bullseye\" ] || [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _APT_MIRROR=\"${_MIRROR}/debian\"\n    _APT_REPSRC=\"${_OS_CODE}-updates\"\n    _SEC_MIRROR=\"security.debian.org/debian-security\"\n    _SEC_REPSRC=\"${_OS_CODE}-security\"\n  fi\n  echo \"## DEBIAN MAIN REPOSITORIES\" > ${_aptLiSys}\n  echo \"deb http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n  echo \"deb-src http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n  echo \"\" >> ${_aptLiSys}\n  echo \"## MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n  echo \"deb http://${_APT_MIRROR} ${_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n  echo \"deb-src http://${_APT_MIRROR} ${_APT_REPSRC} main contrib 
non-free\" >> ${_aptLiSys}\n  echo \"\" >> ${_aptLiSys}\n  echo \"## DEBIAN SECURITY UPDATES\" >> ${_aptLiSys}\n  echo \"deb http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n  echo \"deb-src http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n  echo\n  cat /etc/apt/sources.list\n\n  echo \"Installing basic tools...\"\n  _apt_clean_update\n  apt-get upgrade ${_nrmUpArg}\n  ${_INSTAPP} locales\n  ${_INSTAPP} lsb-release\n  ${_INSTAPP} dnsutils\n  ${_INSTAPP} netcat-traditional\n  ${_INSTAPP} curl\n  ${_INSTAPP} wget\n  ${_INSTAPP} libwww-perl\n  ${_INSTAPP} mc\n  ${_INSTAPP} screen\n  ${_INSTAPP} hdparm\n  ${_INSTAPP} iotop\n  ${_INSTAPP} ntpsec-ntpdate\n  ${_INSTAPP} parted\n  ${_INSTAPP} build-essential\n  ${_INSTAPP} iptables\n  ${_INSTAPP} ipset\n\n  _ensure_ipset_symlinks\n\n  echo \"Setting hostname to ${_hst}\"\n  echo ${_hst} > /etc/hostname\n  hostname -b ${_hst}\n\n  echo \"Installing csf/lfd...\"\n  cd /var/opt\n  wget ${_wgetGet} http://${_USE_MIR}/dev/src/csf-${_CSF_VRN}.tar.gz\n  tar -xzf csf-${_CSF_VRN}.tar.gz\n  cd csf\n  sh install.sh\n  cd /etc/csf\n  mv -f csf.conf csf.conf-pre\n  wget ${_wgetGet} http://${_USE_MIR}/cluster/csf.conf\n  chmod 600 /etc/csf/csf.conf\n  sed -i \"s/^Port.*/Port 24/g\" /etc/ssh/sshd_config\n  echo \"UseDNS no\" >> /etc/ssh/sshd_config\n  /etc/init.d/ssh restart\n  chmod 700 /root\n  cd /\n  chmod 711 bin boot data dev emul etc home lib media mnt opt sbin selinux srv sys usr var\n\n  echo \"Updating init scripts...\"\n  update-rc.d cron defaults\n  update-rc.d lfd defaults\n  update-rc.d csf defaults\n  invoke-rc.d cron restart\n  invoke-rc.d lfd start\n  invoke-rc.d csf start\n  rm -rf /var/opt/csf*\n\n  _if_firewall_update\n\n  echo \"Improving sshd configuration...\"\n  sed -i \"s/^PermitRootLogin.*/PermitRootLogin prohibit-password/g\" /etc/ssh/sshd_config\n  sed -i \"s/^IgnoreUserKnownHosts.*//g\" /etc/ssh/sshd_config\n  sed -i 
\"s/^PasswordAuthentication.*//g\" /etc/ssh/sshd_config\n  sed -i \"s/^UseDNS.*//g\" /etc/ssh/sshd_config\n  sed -i \"s/^ClientAliveInterval.*//g\" /etc/ssh/sshd_config\n  sed -i \"s/^ClientAliveCountMax.*//g\" /etc/ssh/sshd_config\n  echo \"IgnoreUserKnownHosts no\" >> /etc/ssh/sshd_config\n  if [ -e \"/root/.ssh.auth.keys.only.cnf\" ]; then\n    echo \"PasswordAuthentication no\" >> /etc/ssh/sshd_config\n  else\n    echo \"PasswordAuthentication yes\" >> /etc/ssh/sshd_config\n  fi\n  echo \"UseDNS no\" >> /etc/ssh/sshd_config\n  echo \"ClientAliveInterval 300\" >> /etc/ssh/sshd_config\n  echo \"ClientAliveCountMax 10000\" >> /etc/ssh/sshd_config\n  echo \"TCPKeepAlive yes\" >> /etc/ssh/sshd_config\n\n  echo \"Restarting sshd...\"\n  service ssh restart\n\n  echo \"Checking if sshd listens on expected port...\"\n  _TEST_SSHD=$(netstat -l -n | grep \":24 \" 2>&1)\n  if [[ ! \"${_TEST_SSHD}\" =~ \":24\" ]]; then\n    echo \"Ouch! sshd apparently doesn't listen on port 24!\"\n  else\n    echo \"OK, sshd listens on port 24\"\n    echo \"Please remember to use this new sshd port after reboot!\"\n  fi\n\n  if [ ! -e \"/root/.ssh/id_ed25519.pub\" ]; then\n    echo \"Generating SSH (ed25519) keys for root...\"\n    ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519\n    echo\n    cat /root/.ssh/id_ed25519.pub\n    echo\n  fi\n\n  if [ -x \"/usr/bin/gpg2\" ]; then\n    _GPG=gpg2\n  else\n    _GPG=gpg\n  fi\n\n  if [ ! 
-e \"/boot/vmlinuz-${_BNG_VERSION}-beng\" ]; then\n    echo \"Installing beng vserver kernel...\"\n    echo \"deb http://repo.psand.net/ ${_OS_CODE} main\" > /etc/apt/sources.list.d/vserver.list\n    cd /var/backups\n    _KEYS_SERVER_TEST=FALSE\n    until [[ \"${_KEYS_SERVER_TEST}\" =~ \"GnuPG\" ]]; do\n      rm -f pubkey.txt*\n      wget ${_wgetGet} http://repo.psand.net/pubkey.txt\n      _KEYS_SERVER_TEST=$(grep GnuPG pubkey.txt 2>&1)\n      sleep 2\n      _CNT=$(pgrep -fc dirmngr)\n      if (( _CNT > 5 )); then\n        pkill -9 -f dirmngr\n        echo \"$(date) Too many dirmngr processes killed (count=${_CNT})\" >> \\\n          /var/log/boa/dirmngr-count.kill.log\n      fi\n    done\n    cat pubkey.txt | ${_GPG} --import &> /dev/null\n    rm -f pubkey.txt*\n    _apt_clean_update\n    ${_INSTAPP} build-essential\n    ${_INSTAPP} linux-headers-vserver-${_BENG_SERIES}-beng\n    ${_INSTAPP} linux-image-vserver-${_BENG_SERIES}-beng\n    ${_INSTAPP} linux-source-vserver-${_BENG_SERIES}-beng\n    ls -la /boot\n  fi\n\n  if [ -x \"/lib/systemd/systemd\" ]; then\n    echo \"Installing sysvinit-core in ${_OS_DIST}/${_OS_CODE}...\"\n    if [ \"${_osx}\" = \"jessie\" ] \\\n      || [ \"${_osx}\" = \"stretch\" ] \\\n      || [ \"${_osx}\" = \"buster\" ] \\\n      || [ \"${_osx}\" = \"bullseye\" ] \\\n      || [ \"${_osx}\" = \"bookworm\" ]; then\n      _apt_clean_update\n      echo \"sysvinit-core install\" | dpkg --set-selections\n      echo \"sysvinit-utils install\" | dpkg --set-selections\n      ${_INSTAPP} sysvinit-core\n      ${_INSTAPP} sysvinit-utils\n      if [ -e \"/usr/share/sysvinit/inittab\" ]; then\n        cp -af /usr/share/sysvinit/inittab /etc/inittab\n      fi\n      if [ ! 
-e \"/etc/apt/preferences.d/offsystemd\" ]; then\n        rm -f /etc/apt/sources.list.d/nosystemd.list\n        rm -f /etc/apt/preferences.d/nosystemd\n        rm -f /etc/apt/preferences.d/systemd\n        echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n        echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n        _apt_clean_update\n      fi\n      echo \"sysvinit-core hold\" | dpkg --set-selections &> /dev/null\n      echo \"sysvinit-utils hold\" | dpkg --set-selections &> /dev/null\n    fi\n    echo\n    echo \"NOTE: Please reboot and then run 'cluster up-host'\"\n    echo \"NOTE: once the cluster script completes the initial installation\"\n    echo \"NOTE: to cleanly remove unused systemd packages!\"\n    echo\n    sleep 8\n  fi\n\n  _TEST_FSTAB=$(grep \"defaults,errors=remount-ro,noatime,nodiratime\" /etc/fstab 2>&1)\n  if [[ ! \"${_TEST_FSTAB}\" =~ \"noatime,nodiratime\" ]]; then\n    echo \"Updating fstab...\"\n    sed -i \"s/errors=remount-ro/defaults,errors=remount-ro,noatime,nodiratime/g\" /etc/fstab\n    echo\n    cat /etc/fstab\n  fi\n\n  cat <<EOF\n\n  Please reboot this machine now!\n  Remember to use port 24 for SSH connections to the host system.\n  Then run 'cluster up-host' to complete this host's installation procedure.\n\nEOF\n  exit 0\n}\n\n_upgrade_host() {\n\n  if [ \"${_mod}\" = \"upgrade\" ]; then\n    _thisMode=\"UPGRADE\"\n  else\n    _thisMode=\"UPDATE\"\n  fi\n\n  if [ -x \"/lib/systemd/systemd\" ]; then\n    echo \"Removing systemd in ${_OS_DIST}/${_OS_CODE}...\"\n    _apt_clean_update\n    echo \"sysvinit-core install\" | dpkg --set-selections\n    echo \"sysvinit-utils install\" | dpkg --set-selections\n    if [ \"${_osx}\" = \"jessie\" ] \\\n      || [ \"${_osx}\" = \"stretch\" ] \\\n      || [ \"${_osx}\" = \"buster\" ] \\\n      || [ \"${_osx}\" = \"bullseye\" ] \\\n      || [ \"${_osx}\" = \"bookworm\" ]; then\n      
${_INSTAPP} sysvinit-core\n      ${_INSTAPP} sysvinit-utils\n      if [ -e \"/usr/share/sysvinit/inittab\" ]; then\n        cp -af /usr/share/sysvinit/inittab /etc/inittab\n      fi\n      if [ ! -e \"/etc/apt/preferences.d/offsystemd\" ]; then\n        rm -f /etc/apt/sources.list.d/nosystemd.list\n        rm -f /etc/apt/preferences.d/nosystemd\n        rm -f /etc/apt/preferences.d/systemd\n        echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n        echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n        _apt_clean_update\n      fi\n    fi\n    apt-get purge systemd libnss-systemd -y -qq 2> /dev/null\n    apt-get autoremove --purge -y -qq 2> /dev/null\n    apt-get autoclean -y -qq 2> /dev/null\n    echo \"sysvinit-core hold\" | dpkg --set-selections &> /dev/null\n    echo \"sysvinit-utils hold\" | dpkg --set-selections &> /dev/null\n  fi\n\n  if [ \"${_thisMode}\" = \"UPGRADE\" ]; then\n    echo \"Installing vserver tools dependencies...\"\n    _apt_clean_update\n    ${_INSTAPP} vlan\n    ${_INSTAPP} e2fslibs-dev\n    ${_INSTAPP} libnss3-dev\n    ${_INSTAPP} libvserver0-dev\n    ${_INSTAPP} libvserver0\n    ${_INSTAPP} debootstrap\n    if [ -e \"/etc/cron.daily/mlocate\" ]; then\n      mv -f /etc/cron.daily/mlocate /var/backups/\n    fi\n\n    echo \"Installing otherwise uninstalled stuff...\"\n    ${_INSTAPP} irqbalance\n    ${_INSTAPP} libuuid-perl\n    ${_INSTAPP} linux-base\n    echo \"OPTIONS=\\\"--hintpolicy=ignore\\\"\" >> /etc/default/irqbalance\n    service irqbalance stop\n    service irqbalance start\n\n    echo \"Installing vserver tools...\"\n    apt-get remove util-vserver -y --purge --auto-remove -qq\n    apt-get autoremove -y\n    cd /var/opt\n    rm -rf util-vserver*\n    wget ${_wgetGet} http://${_USE_MIR}/dev/src/util-vserver-${_UTIL_VRN}.tar.gz\n    tar -xzf util-vserver-${_UTIL_VRN}.tar.gz\n    cd util-vserver-${_UTIL_VRN}\n    
bash ./configure --prefix=/usr\n    make --quiet\n    make --quiet install\n    make --quiet install-distribution\n    cd\n    vserver --version\n\n    chmod 755 /vservers\n    setattr --barrier /vservers\n    echo \"kernel.vshelper = /sbin/vshelper\" >> /etc/sysctl.conf\n  fi\n\n  cd\n  cp -af /etc/sysctl.conf /etc/sysctl.conf.thrd.three\n  wget ${_wgetGet} http://${_USE_MIR}/versions/${_tRee}/boa/aegir/conf/sysctl.conf\n  cp -af sysctl.conf /etc/sysctl.conf\n  echo \"kernel.vshelper = /sbin/vshelper\" >> /etc/sysctl.conf\n  rm -f sysctl.conf*\n  sysctl -p\n  swapoff -a\n  swapon -a\n\n  if [ \"${_thisMode}\" = \"UPGRADE\" ]; then\n    echo \"Installing vserver initd scripts...\"\n    cp -af /usr/etc/init.d/vservers-default /etc/init.d/\n    cp -af /usr/etc/init.d/vprocunhide /etc/init.d/\n    cp -af /usr/etc/init.d/util-vserver /etc/init.d/\n    update-rc.d util-vserver defaults\n    update-rc.d vprocunhide defaults\n    update-rc.d vservers-default defaults\n    service vprocunhide start\n    service util-vserver start\n    service vservers-default start\n\n    echo \"Updating vserver defaults...\"\n    echo \"none  /proc     proc    defaults        0 0\" >/usr/share/util-vserver/defaults/fstab\n    echo \"none  /dev/pts  devpts  gid=5,mode=620  0 0\" >>/usr/share/util-vserver/defaults/fstab\n\n    if [ ! 
-e \"/usr/etc/vservers/.defaults/apps/vunify/hash/root\" ]; then\n      mkdir -p /usr/etc/vservers/.defaults/apps/vunify/hash /vservers/.hash\n      ln -sfn /vservers/.hash /usr/etc/vservers/.defaults/apps/vunify/hash/root\n    fi\n  fi\n\n  echo \"Installing vnstat...\"\n  cd /var/opt\n  rm -rf vnstat*\n  wget ${_wgetGet} http://${_USE_MIR}/dev/src/vnstat-${_VNSTAT_VRN}.tar.gz\n  tar -xzf vnstat-${_VNSTAT_VRN}.tar.gz\n  cd vnstat-${_VNSTAT_VRN}\n  bash ./configure --prefix=/usr\n  make --quiet\n  make --quiet install\n  for INF in `vnstat --iflist \\\n    | sed \"s/Available interfaces//g; s/(1000 Mbit)//g; s/(100 Mbit)//g; s/ lo//g;\" \\\n    | cut -d: -f2`; do vnstat -i ${INF}; done\n  cp -af /var/opt/vnstat-${_VNSTAT_VRN}/examples/init.d/debian/vnstat /etc/init.d/vnstat\n  chmod 755 /etc/init.d/vnstat\n  update-rc.d vnstat defaults\n  if [ -e \"/usr/etc/vnstat.conf\" ]; then\n    sed -i \"s/^MaxBandwidth.*/MaxBandwidth 1000/g\" /usr/etc/vnstat.conf\n  fi\n  if [ -e \"/etc/vnstat.conf\" ]; then\n    sed -i \"s/^MaxBandwidth.*/MaxBandwidth 1000/g\" /etc/vnstat.conf\n  fi\n  service vnstat start\n  killall vnstatd\n  service vnstat restart\n\n  if [ ! 
-e \"/usr/local/sbin/vserver-autostart\" ] \\\n    || [ \"${_thisMode}\" = \"UPGRADE\" ]; then\n    echo \"Installing vserver-autostart...\"\n    rm -f /usr/local/sbin/vserver-autostart\n    cd /usr/local/sbin/\n    wget ${_wgetGet} http://${_USE_MIR}/cluster/vserver-autostart\n    if [ -e \"/usr/local/sbin/vserver-autostart\" ]; then\n      cd;chmod u+x /usr/local/sbin/vserver-autostart\n    else\n      echo \"ERROR: /usr/local/sbin/vserver-autostart not available!\"\n    fi\n  fi\n\n  chmod 755 /vservers\n\n  if [ \"${_thisMode}\" = \"UPGRADE\" ]; then\n    echo \"Installing Postfix locally...\"\n    ${_INSTAPP} postfix\n    lSmtp=\"localhost:25      inet  n       -       -       -       -       smtpd\"\n    sed -i \"s/^smtp.*inet.*n.*smtpd/${lSmtp}/g\" /etc/postfix/master.cf\n    apt-get remove exim -y --purge --auto-remove -qq\n    apt-get autoremove -y\n    service postfix restart\n  fi\n\n  echo \"Updating packages on this system...\"\n  _apt_clean_update\n  apt-get install aptitude ${_APT_XTR} \\\n    && aptitude full-upgrade -f -y -q \\\n      -o Dpkg::Options::=--force-confmiss \\\n      -o Dpkg::Options::=--force-confdef \\\n      -o Dpkg::Options::=--force-confnew \\\n    && apt-get dist-upgrade ${_APT_XTR} \\\n    && apt-get dist-upgrade ${_APT_XTR}\n\n  _if_firewall_update\n\n  echo\n  echo \"That's all. 
Enjoy!\"\n  echo\n  exit 0\n}\n\n_check_vsd() {\n  _VSD_TEST=$(grep \"cluster\" /etc/init.d/vservers-default 2>&1)\n  if [ -z \"${_VSD_TEST}\" ]; then\n    echo \"##\" >> /etc/init.d/vservers-default\n    echo \"sleep 30\" >> /etc/init.d/vservers-default\n    echo \"exec \\\"/usr/local/bin/cluster re-dbs\\\"\" >> /etc/init.d/vservers-default\n    echo \"##\" >> /etc/init.d/vservers-default\n  fi\n}\n\n_check_heads() {\n  _check_config_cluster\n  _check_vsd\n  _idn=\"${_CLUSTER_PREFIX}web\"\n  for i in `dir -d /vservers/*` ; do\n    _THIS_VM=`echo $i | cut -d'/' -f3 | awk '{ print $1}'`\n    _VS_NAME=\"${_THIS_VM}\"\n    if [ -e \"${i}${_optBin}/boa\" ]; then\n      rm -f ${i}${_optBin}/boa\n    fi\n    mkdir -p ${i}${_optBin}\n    curl ${_crlGet} \"${_urlHmr}/${_tBn}/boa\" -o ${i}${_optBin}/boa\n    chmod 700 ${i}${_optBin}/boa\n    ls -la ${i}${_optBin}/boa\n    if [[ ${_THIS_VM} =~ v168q ]]; then\n      echo Check Skipped for $i\n    else\n      if [ -e \"/usr/run/vservers/${_THIS_VM}\" ] \\\n        && [ -e \"${i}/run/crond.pid\" ]; then\n        _VS_HOSTNAME=`vserver ${_VS_NAME} exec hostname 2>&1`\n        _VS_OS=`vserver ${_VS_NAME} exec lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2`\n        echo \"The ${_VS_HOSTNAME} ${_VS_NAME} VM is running ${_VS_OS}\"\n        vserver ${_VS_NAME} exec ${_optBin}/boa info ${_mod} ${_ext}\n      fi\n    fi\n  done\n}\n\n_restart_db_cluster() {\n  _check_config_cluster\n\n  _inc=\"0\"\n  for IP in \"${_DB_NODE_IP[@]}\"; do\n    _idn=\"${_CLUSTER_PREFIX}db${_inc}\"\n    vserver ${_idn} exec update-rc.d -f mysql remove\n    if [ -e \"/vservers/${_idn}${_optBin}/mycnfup\" ]; then\n      echo \"Running mycnfup stop on ${_idn}/${IP}...\"\n      ssh ${_NOSTRICT} root@${IP} \"bash ${_optBin}/mycnfup stop\"\n    else\n      mkdir -p /vservers/${_idn}${_optBin}\n      curl ${_crlGet} \"${_urlHmr}/${_tBn}/mycnfup\" -o /vservers/${_idn}${_optBin}/mycnfup\n      ssh ${_NOSTRICT} 
root@${IP} \"chmod 700 ${_optBin}/mycnfup\"\n      echo \"Running mycnfup stop on ${_idn}/${IP}...\"\n      ssh ${_NOSTRICT} root@${IP} \"bash ${_optBin}/mycnfup stop\"\n    fi\n    if [ \"${IP}\" = \"${_DB_NODE_IP[1]}\" ]; then\n      sed -i \"s/^safe_to_bootstrap.*/safe_to_bootstrap: 1/g\" /vservers/${_idn}/var/lib/mysql/grastate.dat\n    fi\n    _inc=$((_inc+1))\n    echo\n    echo \"Testing ${_idn} status...\"\n    vserver-stat | grep ${_idn}\n    echo\n  done\n\n  echo \"Waiting for ${_DB_NODE_IP[1]} DB node bootstrap-pxc...\"\n  echo \"Running mycnfup init on ${_DB_NODE_IP[1]}...\"\n  ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"bash ${_optBin}/mycnfup init\"\n  ssh ${_NOSTRICT} root@${_DB_NODE_IP[1]} \"mysql -u root -e \\\"SHOW STATUS LIKE 'wsrep%';\\\"\"\n\n  for IP in \"${_DB_NODE_IP[@]}\"; do\n    if [ \"${IP}\" != \"${_DB_NODE_IP[1]}\" ]; then\n      echo \"Running mycnfup start on ${IP}...\"\n      ssh ${_NOSTRICT} root@${IP} \"bash ${_optBin}/mycnfup start\"\n      ssh ${_NOSTRICT} root@${IP} \"mysql -u root -e \\\"SHOW STATUS LIKE 'wsrep%';\\\"\"\n    fi\n  done\n  echo \"All DB nodes started in cluster mode!\"\n}\n\ncase \"$1\" in\n  re-dbs)  _cmd=\"$1\"\n           _hst=\"$2\"\n           _check_all\n           _restart_db_cluster\n  ;;\n  in-host) _cmd=\"$1\"\n           _hst=\"$2\"\n           _check_all\n           _install_host\n  ;;\n  up-host) _cmd=\"$1\"\n           _mod=\"$2\"\n           _check_all\n           _upgrade_host\n  ;;\n  in-vps)  _cmd=\"$1\"\n           _idn=\"$2\"\n           _hst=\"$3\"\n           _vip=\"$4\"\n           _osx=\"$5\"\n           _ver=\"$6\"\n           _eml=\"$7\"\n           _fce=\"$8\"\n           _check_all\n           _install_vps\n  ;;\n  in-all)  _cmd=\"$1\"\n           _ver=\"$2\"\n           _xer=\"$2\"\n           _check_all\n           _install_db_cluster\n           _install_web_node\n  ;;\n  in-dbs)  _cmd=\"$1\"\n           _ver=\"$2\"\n           _xer=\"$2\"\n           _check_all\n           
_install_db_cluster\n  ;;\n  in-web)  _cmd=\"$1\"\n           _ver=\"$2\"\n           _xer=\"$2\"\n           _check_all\n           _install_web_node\n  ;;\n  up-dbs)  _cmd=\"$1\"\n           _ver=\"$2\"\n           _check_all\n           _upgrade_db_cluster\n  ;;\n  up-web)  _cmd=\"$1\"\n           _ver=\"$2\"\n           _check_all\n           _upgrade_web_node\n  ;;\n  up-all)  _cmd=\"$1\"\n           _ver=\"$2\"\n           _check_all\n           _upgrade_db_cluster\n           _upgrade_web_node\n  ;;\n  in-oct)  _cmd=\"$1\"\n           _email=\"$2\"\n           _user=\"$3\"\n           _mode=\"$4\"\n           _copt=\"$5\"\n           _csub=\"$6\"\n           _ccor=\"$7\"\n           _check_all\n           _install_octopus\n  ;;\n  in-pxy)  _cmd=\"$1\"\n           _idn=\"$2\"\n           _vip=\"$3\"\n           _fce=\"$4\"\n           _check_all\n           _re_install_pxy\n  ;;\n  check)   _cmd=\"$1\"\n           _mod=\"$2\"\n           _ext=\"$3\"\n           _check_all\n           _check_heads\n  ;;\n  *)       echo\n           echo \"Usage: cluster {in-host} {fqdn}\"\n           echo \"Usage: cluster {up-host} {update|upgrade}\"\n           echo \"Usage: cluster {in-vps} {id} {fqdn} {ip} {os} {stable|head|galera} {email} {force}\"\n           echo \"Usage: cluster {in-all} {stable|head}\"\n           echo \"Usage: cluster {in-dbs} {stable|head}\"\n           echo \"Usage: cluster {in-web} {stable|head}\"\n           echo \"Usage: cluster {up-dbs} {stable|head}\"\n           echo \"Usage: cluster {up-web} {stable|head}\"\n           echo \"Usage: cluster {up-all} {stable|head}\"\n           echo \"Usage: cluster {in-oct} {email} {o2} {mini|max|none} {stable|head}\"\n           echo \"Usage: cluster {in-pxy} {id} {ip} force-reinstall\"\n           echo \"Usage: cluster {check} {more|report} {backups|octopus}\"\n           echo\n           exit 1\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/codebasecheck",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Script to check Drupal codebase compatibility with Percona 8.0\n# Focuses on Drupal core version and lists contributed modules\n\n# Set the path to your Drupal codebase; can be passed as the first argument\n_DRUPAL_PATH=\"${1:-/path/to/drupal}\"\n\n# Log file to store incompatibility results\n[ ! -d \"/var/log/boa/core\" ] && mkdir -p /var/log/boa/core\n_LOG_FILE=\"/var/log/boa/core/incompatible-$(date +%y%m%d-%H%M%S).log\"\n\n# Function to get Drupal core version\nget_drupal_version() {\n  echo \"Checking Drupal version in path: ${_DRUPAL_PATH}\" >&2\n\n  # Check for Drupal 8 and above\n  if [ -f \"${_DRUPAL_PATH}/core/lib/Drupal.php\" ]; then\n    echo \"Detected Drupal 8 or higher.\" >&2\n    _version_line=$(grep -m1 'const VERSION' \"${_DRUPAL_PATH}/core/lib/Drupal.php\")\n    echo \"Version line: ${_version_line}\" >&2\n    _version=$(echo \"${_version_line}\" | sed -E \"s/.*const VERSION\\s*=\\s*'([^']+)'.*/\\1/\")\n    echo \"Extracted version: ${_version}\" >&2\n    echo >&2\n    echo \"Detected Drupal version: ${_version}\" >&2\n    echo \"${_version}\"\n    return\n  fi\n\n  # Check for Drupal 7\n  if [ -f \"${_DRUPAL_PATH}/includes/bootstrap.inc\" ]; then\n    if grep -q \"define('VERSION',\" \"${_DRUPAL_PATH}/includes/bootstrap.inc\"; then\n      echo \"Detected Drupal 7.\" >&2\n      _version_line=$(grep -m1 \"define('VERSION',\" \"${_DRUPAL_PATH}/includes/bootstrap.inc\")\n      echo \"Version line: ${_version_line}\" >&2\n      _version=$(echo \"${_version_line}\" | sed -E \"s/.*define\\('VERSION',\\s*'([^']+)'\\);.*/\\1/\")\n      echo \"Extracted version: ${_version}\" >&2\n      echo >&2\n      echo \"Detected Drupal version: ${_version}\" >&2\n      echo \"${_version}\"\n      return\n    fi\n  fi\n\n  # Check for Drupal 6\n  if [ -f 
\"${_DRUPAL_PATH}/modules/system/system.module\" ]; then\n    if grep -q \"define('VERSION',\" \"${_DRUPAL_PATH}/modules/system/system.module\"; then\n      echo \"Detected Drupal 6.\" >&2\n      _version_line=$(grep -m1 \"define('VERSION',\" \"${_DRUPAL_PATH}/modules/system/system.module\")\n      echo \"Version line: ${_version_line}\" >&2\n      _version=$(echo \"${_version_line}\" | sed -E \"s/.*define\\('VERSION',\\s*'([^']+)'\\);.*/\\1/\")\n      echo \"Extracted version: ${_version}\" >&2\n      echo >&2\n      echo \"Detected Drupal version: ${_version}\" >&2\n      echo \"${_version}\"\n      return\n    fi\n  fi\n\n  # Try to get version from VERSION.txt\n  if [ -f \"${_DRUPAL_PATH}/core/VERSION.txt\" ]; then\n    echo \"Found core/VERSION.txt\" >&2\n    _version=$(cat \"${_DRUPAL_PATH}/core/VERSION.txt\" | tr -d '[:space:]')\n    echo \"Extracted version: ${_version}\" >&2\n    echo >&2\n    echo \"Detected Drupal version: ${_version}\" >&2\n    echo \"${_version}\"\n    return\n  fi\n\n  if [ -f \"${_DRUPAL_PATH}/VERSION.txt\" ]; then\n    echo \"Found VERSION.txt\" >&2\n    _version=$(cat \"${_DRUPAL_PATH}/VERSION.txt\" | tr -d '[:space:]')\n    echo \"Extracted version: ${_version}\" >&2\n    echo >&2\n    echo \"Detected Drupal version: ${_version}\" >&2\n    echo \"${_version}\"\n    return\n  fi\n\n  # Version not found\n  echo \"Could not determine Drupal version for codebase at ${_DRUPAL_PATH}\" >&2\n  echo \"\"\n  return\n}\n\n# Function to compare versions\nversion_lt() {\n  # Returns 0 (true) if first version is less than second\n  test \"$(printf '%s\\n' \"$1\" \"$2\" | sort -V | head -n1)\" != \"$2\"\n}\n\n# Function to check compatibility based on Drupal version\ncheck_compatibility() {\n  local _version=\"$1\"\n  local _major_version\n  _major_version=$(echo \"${_version}\" | cut -d '.' 
-f1)\n\n  if [ -z \"${_version}\" ]; then\n    # Could not determine version\n    echo \"Could not determine Drupal version for codebase at ${_DRUPAL_PATH}\" | tee -a \"${_LOG_FILE}\"\n    return 1\n  fi\n\n  echo \"Checking compatibility for Drupal version: ${_version}\"\n\n  # Drupal 6 does not support MySQL 8.0\n  if [ \"${_major_version}\" = \"6\" ]; then\n    echo \"Drupal 6 (${_version}) does not support MySQL 8.0\" | tee -a \"${_LOG_FILE}\"\n    return 1\n  fi\n\n  # Drupal 7 supports MySQL 8.0 since version 7.76\n  if [ \"${_major_version}\" = \"7\" ]; then\n    if version_lt \"${_version}\" \"7.76\"; then\n      echo \"Drupal 7 (${_version}) does not support MySQL 8.0; support was added in 7.76\" | tee -a \"${_LOG_FILE}\"\n      return 1\n    fi\n  fi\n\n  # Drupal 8 supports MySQL 8.0 since version 8.6.0\n  if [ \"${_major_version}\" = \"8\" ]; then\n    if version_lt \"${_version}\" \"8.6.0\"; then\n      echo \"Drupal 8 (${_version}) does not support MySQL 8.0; support was added in 8.6.0\" | tee -a \"${_LOG_FILE}\"\n      return 1\n    fi\n  fi\n\n  # Drupal 11 no longer supports MySQL 5.7\n  if [ \"${_major_version}\" = \"11\" ]; then\n    echo \"Drupal 11 (${_version}) requires MySQL 8.0\" | tee -a \"${_LOG_FILE}\"\n    # No need to return 1, as it's compatible with MySQL 8.0\n  fi\n\n  # All other versions are considered compatible\n  echo \"Drupal version ${_version} is compatible with MySQL 8.0\"\n  return 0\n}\n\n# Function to list contributed modules and their versions\nlist_contrib_modules() {\n  local _module_paths=()\n  echo \"Listing contributed modules...\"\n\n  # Drupal 7 and below modules path\n  if [ -d \"${_DRUPAL_PATH}/sites/all/modules\" ]; then\n    _module_paths+=(\"${_DRUPAL_PATH}/sites/all/modules\")\n  fi\n  # Include site-specific modules\n  if ls -d \"${_DRUPAL_PATH}/sites/\"*/modules 1> /dev/null 2>&1; then\n    for dir in \"${_DRUPAL_PATH}/sites/\"*/modules; do\n      _module_paths+=(\"${dir}\")\n    done\n  fi\n  # Drupal 8 and above modules path\n  if [ -d 
\"${_DRUPAL_PATH}/modules/contrib\" ]; then\n    _module_paths+=(\"${_DRUPAL_PATH}/modules/contrib\")\n  fi\n  if [ -d \"${_DRUPAL_PATH}/modules/custom\" ]; then\n    _module_paths+=(\"${_DRUPAL_PATH}/modules/custom\")\n  fi\n  # Include top-level modules directory for Drupal 8+\n  if [ -d \"${_DRUPAL_PATH}/modules\" ]; then\n    _module_paths+=(\"${_DRUPAL_PATH}/modules\")\n  fi\n\n  for _module_dir in \"${_module_paths[@]}\"; do\n    echo \"Scanning directory: ${_module_dir}\"\n    find \"${_module_dir}\" -type f \\( -name \"*.info.yml\" -o -name \"*.info\" \\) | while read -r _file; do\n      _module_name=$(basename \"$(dirname \"${_file}\")\")\n      # Try to extract version\n      _module_version=$(grep -E '^(version|core)[:=]' \"${_file}\" 2>/dev/null | head -n1 | awk -F'[:=]' '{print $2}' | tr -d '[:space:]')\n      if [ -z \"${_module_version}\" ]; then\n        _module_version=\"(version not specified)\"\n      fi\n      echo \"Module: ${_module_name}, Version: ${_module_version}, Path: $(dirname \"${_file}\")\" | tee -a \"${_LOG_FILE}\"\n    done\n  done\n}\n\n# Main script execution\n\n# Initialize incompatibility found flag\n_INCOMPATIBILITY_FOUND=0\n\n# Get Drupal core version\n_DRUPAL_VERSION=$(get_drupal_version)\n\n# Check compatibility\ncheck_compatibility \"${_DRUPAL_VERSION}\"\n_COMPATIBLE=$?\n\n# If codebase is incompatible, set incompatibility found flag\nif [ \"${_COMPATIBLE}\" -ne 0 ]; then\n  _INCOMPATIBILITY_FOUND=1\nfi\n\n# If any incompatibilities were found, log the codebase path and list contributed modules\nif [ \"${_INCOMPATIBILITY_FOUND}\" -eq 1 ]; then\n  echo \"Incompatibility detected in codebase at ${_DRUPAL_PATH}\" | tee -a \"${_LOG_FILE}\"\n  echo \"Drupal version: ${_DRUPAL_VERSION}\" | tee -a \"${_LOG_FILE}\"\n  echo \"Listing contributed modules for manual compatibility check:\" | tee -a \"${_LOG_FILE}\"\n  list_contrib_modules\n  echo \"-----------------------------\" | tee -a \"${_LOG_FILE}\"\nfi\n"
  },
  {
    "path": "aegir/tools/bin/copydbackup",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as root.\"\n    exit 1\n  fi\n}\n_check_root\n\n# -----------------------------------------------------------------------------\n# 1) Identify which user references which DB by scanning vhosts\n# -----------------------------------------------------------------------------\n_VHOSTS_DIRS=(\"/data/disk/*/config/server_master/nginx/vhost.d\")\n\n# We'll store:  db_name -> \"vhost1 vhost2 ...\"\ndeclare -A _vhost_db_map\n\nfor _dir_pattern in \"${_VHOSTS_DIRS[@]}\"; do\n  for _dir in ${_dir_pattern}; do\n    [ -d \"${_dir}\" ] || continue\n    for _vhost_file in \"${_dir}\"/*; do\n      [ -f \"${_vhost_file}\" ] || continue\n\n      # We'll read the file first to see if we should skip it entirely:\n      skip_this_vhost=0\n      while IFS= read -r _line; do\n        # If line references /data/disk/whatever/aegir/distro/ in the docroot, we skip\n        if echo \"${_line}\" | grep -qE \"root\\s+/data/disk/.*/aegir/distro/\"; then\n          skip_this_vhost=1\n          break\n        fi\n      done < \"${_vhost_file}\"\n\n      # If skip_this_vhost=1, do not parse DB references; move on to next file\n      if [ \"${skip_this_vhost}\" -eq 1 ]; then\n        # Optionally log this skip:\n        echo \"Skipping vhost due to aegir/distro pattern: ${_vhost_file}\"\n        continue\n      fi\n\n      # Otherwise, we parse the file again for DB references (or rewind w/ file descriptor)\n      while IFS= read -r _line; do\n        if echo \"${_line}\" | grep -q \"fastcgi_param db_name\"; then\n          _db_name=$(echo \"${_line}\" | awk '{print $NF}' | tr -d ';')\n          # Append vhost file to the db_name’s list if not already included\n          if ! 
[[ \" ${_vhost_db_map[\"${_db_name}\"]} \" =~ \" ${_vhost_file} \" ]]; then\n            _vhost_db_map[\"${_db_name}\"]+=\"${_vhost_file} \"\n          fi\n        fi\n      done < \"${_vhost_file}\"\n    done\n  done\ndone\n\n# -----------------------------------------------------------------------------\n# 2) Build a \"db -> user(s)\" map from the vhost paths\n# -----------------------------------------------------------------------------\n_get_username_from_path() {\n  # Typical pattern: /data/disk/USER/config/server_master/nginx/vhost.d/example.conf\n  # Splitting by '/' => [0:'',1:'data',2:'disk',3:'USERNAME',4:'config',...]\n  local _vfile=\"$1\"\n  echo \"${_vfile}\" | cut -d'/' -f4\n}\n\ndeclare -A _db_users_map  # db_name -> \"user1 user2 ...\"\nfor _db_name in \"${!_vhost_db_map[@]}\"; do\n  for _vhost_file in ${_vhost_db_map[\"${_db_name}\"]}; do\n    _this_user=$(_get_username_from_path \"${_vhost_file}\")\n    if [ -n \"${_this_user}\" ]; then\n      # Avoid duplicates\n      if ! [[ \" ${_db_users_map[\"${_db_name}\"]} \" =~ \" ${_this_user} \" ]]; then\n        _db_users_map[\"${_db_name}\"]+=\"${_this_user} \"\n      fi\n    fi\n  done\ndone\n\n# -----------------------------------------------------------------------------\n# 3) Copy relevant DB backups to each user’s space\n# -----------------------------------------------------------------------------\n_ARCH_SQL_BASE=\"/data/disk/arch/sql\"\necho \"Copying relevant DB backups into user spaces...\"\n\nfor _backup_dir in \"${_ARCH_SQL_BASE}\"/*; do\n  [ -d \"${_backup_dir}\" ] || continue\n  _dir_name=$(basename \"${_backup_dir}\")\n  _datetime=\"${_dir_name#*-}\"  # e.g. 
host-DATESTAMP => just the DATETIME part\n\n  # We'll track users who received new files in this directory,\n  # then do a single chown -R for each user at the end.\n  declare -A _touched_users=()\n\n  for _backup_file in \"${_backup_dir}\"/*; do\n    [ -f \"${_backup_file}\" ] || continue\n\n    _base_file=$(basename \"${_backup_file}\")\n\n    # Remove up to two extensions => e.g. dbName.sql.bz2 => dbName\n    filename_no_ext=\"${_base_file}\"\n    filename_no_ext=\"${filename_no_ext%.*}\"  # 1st ext\n    filename_no_ext=\"${filename_no_ext%.*}\"  # 2nd ext\n\n    # If base ends with -DATETIME, remove that suffix => dbName\n    if [[ \"${filename_no_ext}\" == *\"-${_datetime}\" ]]; then\n      filename_no_ext=\"${filename_no_ext%-${_datetime}}\"\n    fi\n\n    _possible_dbname=\"${filename_no_ext}\"\n\n    if [[ \"${_possible_dbname}\" == \"mysql\" || \"${_possible_dbname}\" == \"sys\" ]]; then\n      # Debug log (optional)\n      echo \"Skipping system DB backup: ${_base_file} (DB: ${_possible_dbname})\"\n      continue\n    else\n      echo \"_possible_dbname is ${_possible_dbname}\"\n    fi\n\n    # If this DB is used by any users, copy the file for them\n    if [ -n \"${_db_users_map[\"${_possible_dbname}\"]}\" ]; then\n      for _user in ${_db_users_map[\"${_possible_dbname}\"]}; do\n        _target_dir=\"/data/disk/${_user}/static/files/dbackup/${_datetime}\"\n        [ ! 
-d \"${_target_dir}\" ] && mkdir -p \"${_target_dir}\"\n\n        _target_file=\"${_target_dir}/${_base_file}\"\n\n        if [ -e \"${_target_file}\" ]; then\n          echo \"File already exists, skipping: ${_target_file}\"\n        else\n          echo \"Copying ${_backup_file} => ${_target_file}\"\n          cp -a \"${_backup_file}\" \"${_target_file}\"\n          # Mark that we touched this user's folder in this directory\n          _touched_users[\"${_user}\"]=1\n        fi\n      done\n    fi\n  done\n\n  # Now batch-fix ownership for each user who got new files\n  for _user in \"${!_touched_users[@]}\"; do\n    _target_dir=\"/data/disk/${_user}/static/files/dbackup/${_datetime}\"\n    echo \"Setting ownership for ${_target_dir} => ${_user}.ftp:users\"\n    chown -R \"${_user}.ftp:users\" \"${_target_dir}\"\n  done\ndone\n\necho \"Backup copying completed.\"\n\n# -----------------------------------------------------------------------------\n# 4) Cleanup old backups based on /data/disk/USER/static/control/dBackupCycle.info\n# -----------------------------------------------------------------------------\n#\n# If the file exists and contains a number of days (e.g. 7, 14, 21, 60),\n# remove archives older than that many days. Then remove empty subdirs.\n# By default use 14 days if the file is missing or zero-length.\n\n# Gather all users from _db_users_map to do cleanup\ndeclare -A _all_involved_users=()\nfor _db_name in \"${!_db_users_map[@]}\"; do\n  for _usr in ${_db_users_map[\"${_db_name}\"]}; do\n    _all_involved_users[\"${_usr}\"]=1\n  done\ndone\n\necho \"Performing cleanup of old backups based on dBackupCycle.info...\"\n\nfor _user in \"${!_all_involved_users[@]}\"; do\n  _days_to_keep=\n  _cycle_file=\"/data/disk/${_user}/static/control/dBackupCycle.info\"\n\n  # If empty or missing, default to 14 days\n  if [ ! 
-s \"${_cycle_file}\" ]; then\n    _days_to_keep=14\n  fi\n\n  # If user has a file OR we have a default\n  if [ -s \"${_cycle_file}\" ] || [ -n \"${_days_to_keep}\" ]; then\n    # If the file is non-empty, parse it\n    if [ -s \"${_cycle_file}\" ]; then\n      _days_to_keep=$(tr -d '[:space:]' < \"${_cycle_file}\")\n      # Keep only digits\n      _days_to_keep=\"${_days_to_keep//[^0-9]/}\"\n    fi\n\n    if [[ \"${_days_to_keep}\" =~ ^[0-9]+$ ]]; then\n      echo \"User: ${_user}; removing backups older than ${_days_to_keep} days...\"\n      find \"/data/disk/${_user}/static/files/dbackup\" \\\n        -type f -mtime +${_days_to_keep} -exec rm -f {} \\;\n      # Optionally remove empty directories, too:\n      find \"/data/disk/${_user}/static/files/dbackup\" \\\n        -type d -empty -delete\n    else\n      echo \"Warning: ${_cycle_file} does not contain a valid integer. Skipping cleanup for ${_user}.\"\n    fi\n  fi\ndone\n\necho \"Cleanup completed.\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/dcysetup",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n# Function to verify BOA keys\n_verify_boa_keys() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _verify_boa_keys in dcysetup\"\n  fi\n  if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n    _allw=NO\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlEnc=\"http://files.aegir.cc/enc/2024\"\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    _encName=$(echo ${_hName} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".o8.io\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".boa.io\"($) ]]; then\n      _allw=YES\n    fi\n    mkdir -p /var/opt\n    rm -f /var/opt/_encN*\n    curl ${_crlGet} \"${_urlEnc}/${_encName}\" -o /var/opt/_encN.${_encName}.tmp\n    wait\n    echo \"${_hName}.${_encName}\" > /var/opt/_encN_local.${_encName}.tmp\n    wait\n    if [ -e \"/var/opt/_encN.${_encName}.tmp\" ] && [ -e \"/var/opt/_encN_local.${_encName}.tmp\" ]; then\n      _diffTestIf=$(diff -w -B /var/opt/_encN.${_encName}.tmp /var/opt/_encN_local.${_encName}.tmp 2>&1)\n      if [ ! -z \"${_diffTestIf}\" ] && [ \"${_allw}\" = \"NO\" ]; then\n        echo\n        echo \"Your system requires valid license to use this function\"\n        echo \"Please visit https://omega8.cc/licenses to purchase your own\"\n        echo\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! 
-e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n        rm -f /var/opt/_encN*\n        exit 0\n      else\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n      fi\n    else\n      echo\n      echo \"Your system requires valid license to use this BOA feature\"\n      echo \"Unfortunately it was not possible to verify your system status\"\n      echo \"Please contact our support but visit https://omega8.cc/licenses first\"\n      echo\n      exit 0\n    fi\n  fi\n}\n\n# Function to verify root access\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 0 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  _AWS_VLV=${_AWS_VLV//[^a-z]/}\n  if [ -z \"${_AWS_VLV}\" ]; then\n    _AWS_VLV=\"warning\"\n  fi\n}\n_check_root\n_verify_boa_keys\n\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n# New OpenSSL 3.x version is required\nif [ ! 
-x \"/usr/local/ssl3/bin/openssl\" ]; then\n  echo \"New OpenSSL 3.x version is required\"\n  exit 1\nfi\n\n# Directory where all scripts are located\n_SCRIPT_DIR=\"/root/.remote_backups/run\"\n\n# Define paths to individual scripts\n_INSTALL_DEPENDENCIES_SCRIPT=\"${_SCRIPT_DIR}/install_dependencies.sh\"\n_CREATE_CREDENTIALS_TEMPLATES_SCRIPT=\"${_SCRIPT_DIR}/create_credentials_templates.sh\"\n_CREATE_GLOBAL_PATHS_CONFIG_SCRIPT=\"${_SCRIPT_DIR}/create_global_paths_config.sh\"\n_CREATE_USER_PATHS_CONFIG_SCRIPT=\"${_SCRIPT_DIR}/create_user_paths_config.sh\"\n_CREATE_CRON_ENTRIES_SCRIPT=\"${_SCRIPT_DIR}/create_cron_entries.sh\"\n_CREATE_README_SCRIPT=\"${_SCRIPT_DIR}/create_readme.sh\"\n_CREATE_CONFIG_README_SCRIPT=\"${_SCRIPT_DIR}/create_config_readme.sh\"\n\n# Function to display usage information\n_usage() {\n  echo \"Usage: $0 {install|setup|update}\"\n  echo \"  install : Install dependencies required for backups.\"\n  echo \"  setup   : Perform initial configuration setup (creates paths, credentials, and cron entries).\"\n  echo \"  update  : Alias for setup; updates existing configuration.\"\n  exit 1\n}\n\n# Function to check for required scripts\n_check_scripts() {\n  for _script in \\\n    \"${_INSTALL_DEPENDENCIES_SCRIPT}\" \\\n    \"${_CREATE_CREDENTIALS_TEMPLATES_SCRIPT}\" \\\n    \"${_CREATE_GLOBAL_PATHS_CONFIG_SCRIPT}\" \\\n    \"${_CREATE_USER_PATHS_CONFIG_SCRIPT}\" \\\n    \"${_CREATE_CRON_ENTRIES_SCRIPT}\" \\\n    \"${_CREATE_README_SCRIPT}\" \\\n    \"${_CREATE_CONFIG_README_SCRIPT}\"\n  do\n    if [ ! -f \"${_script}\" ]; then\n      echo \"Error: Required script ${_script} not found.\"\n      exit 1\n    fi\n  done\n}\n\n# Function to install dependencies\n_install_dependencies() {\n  echo \"Installing dependencies...\"\n  service cron stop && ln -sfn /bin/dash /usr/bin/sh\n  bash \"${_INSTALL_DEPENDENCIES_SCRIPT}\"\n  if [ $? 
-ne 0 ]; then\n    echo \"Error: Failed to install dependencies.\"\n    exit 1\n  fi\n  echo \"Dependencies installed successfully.\"\n  service cron start\n}\n\n# Function to perform setup (initial configuration)\n_setup_configuration() {\n  echo \"Setting up configuration...\"\n\n  echo \"Step 1: Creating global paths configuration...\"\n  bash \"${_CREATE_GLOBAL_PATHS_CONFIG_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create global paths configuration.\"\n    exit 1\n  fi\n\n  echo \"Step 2: Creating user-specific paths configuration...\"\n  bash \"${_CREATE_USER_PATHS_CONFIG_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create user paths configuration.\"\n    exit 1\n  fi\n\n  echo \"Step 3: Creating credentials templates...\"\n  bash \"${_CREATE_CREDENTIALS_TEMPLATES_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create credentials templates.\"\n    exit 1\n  fi\n\n  echo \"Step 4: Creating cron entries...\"\n  bash \"${_CREATE_CRON_ENTRIES_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create cron entries.\"\n    exit 1\n  fi\n\n  echo \"Step 5: Creating global README files...\"\n  bash \"${_CREATE_README_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create global README files.\"\n    exit 1\n  fi\n\n  echo \"Step 6: Creating user config README files...\"\n  bash \"${_CREATE_CONFIG_README_SCRIPT}\"\n  if [ $? -ne 0 ]; then\n    echo \"Error: Failed to create user config README files.\"\n    exit 1\n  fi\n\n  echo \"Configuration setup completed successfully.\"\n}\n\n# Main logic\nif [ $# -ne 1 ]; then\n  _usage\nfi\n\n_action=$1\n_check_scripts\n\ncase \"${_action}\" in\n  install)\n    _install_dependencies\n    ;;\n  setup|update)\n    _setup_configuration\n    ;;\n  *)\n    _usage\n    ;;\nesac\n"
  },
  {
    "path": "aegir/tools/bin/dhcpfix",
"content": "#!/bin/bash\n\n# Enable strict error handling for debugging only\n# set -euo pipefail\n\n# === Config ===\n_RESOLV_TARGET_CONTENT=$'nameserver 127.0.0.1\\nnameserver 1.1.1.1\\nnameserver 8.8.8.8\\nnameserver 9.9.9.9\\n'\n_DEBUG_MODE=\"${_DEBUG_MODE:-NO}\"\n\n# === Internals ===\n_DHCLIENT_CONF=\"/etc/dhcp/dhclient.conf\"\n_DHCLIENT_HOOK=\"/etc/dhcp/dhclient-enter-hooks.d/nodnsupdate\"\n_DHCPCD_CONF=\"/etc/dhcpcd.conf\"\n_NM_CONF=\"/etc/NetworkManager/NetworkManager.conf\"\n_RESOLV=\"/etc/resolv.conf\"\n\n_msg() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"[dns-lock] $*\"\n  fi\n}\n\n_backup_file() {\n  if [ -f \"$1\" ]; then\n    cp -a \"$1\" \"$1.bak.$(date +%s)\"\n    _msg \"Backup: $1 -> $1.bak.TIMESTAMP\"\n  fi\n}\n\n_detect_client() {\n  # Returns via echo one of: dhclient, dhcpcd, networkmanager, none\n  if pgrep -x dhclient >/dev/null 2>&1; then\n    echo \"dhclient\"; return\n  fi\n  if pgrep -x dhcpcd >/dev/null 2>&1; then\n    echo \"dhcpcd\"; return\n  fi\n  if pgrep -x NetworkManager >/dev/null 2>&1; then\n    echo \"networkmanager\"; return\n  fi\n  echo \"none\"\n}\n\n_configure_dhclient() {\n  _msg \"Configuring ISC dhclient to ignore DNS\"\n  if [ -f \"${_DHCLIENT_CONF}\" ]; then\n    _backup_file \"${_DHCLIENT_CONF}\"\n    # Replace the entire request block with a DNS-free version, reading from a\n    # temporary copy so we never read and write the same file in one pipeline\n    _conf_tmp=$(mktemp)\n    cp -a \"${_DHCLIENT_CONF}\" \"${_conf_tmp}\"\n    awk '\n      /^request[[:space:]]/{\n        print \"request subnet-mask, broadcast-address, time-offset, routers,\"\n        print \"        domain-name, host-name,\"\n        print \"        interface-mtu,\"\n        print \"        rfc3442-classless-static-routes, ntp-servers;\"\n        # Skip the original request block up to its closing semicolon\n        if ($0 !~ /;/) { while ((getline) > 0) { if ($0 ~ /;/) break } }\n        next\n      }\n      { print }\n    ' \"${_conf_tmp}\" > \"${_DHCLIENT_CONF}\"\n    rm -f \"${_conf_tmp}\"\n  fi\n\n  # Belt-and-suspenders: no-op the make_resolv_conf hook\n  mkdir -p \"$(dirname \"${_DHCLIENT_HOOK}\")\"\n  cat > \"${_DHCLIENT_HOOK}\" <<'EOF'\n#!/bin/dash\n# Prevent dhclient from overwriting /etc/resolv.conf\nmake_resolv_conf() { :; }\nEOF\n  chmod +x \"${_DHCLIENT_HOOK}\"\n\n  # Renew lease\n  if command -v dhclient >/dev/null 2>&1; then\n    dhclient -r || true\n    dhclient || true\n  fi\n}\n\n_configure_dhcpcd() {\n  _msg \"Configuring dhcpcd to ignore resolv.conf\"\n  if [ -f \"${_DHCPCD_CONF}\" ]; then\n    _backup_file \"${_DHCPCD_CONF}\"\n  fi\n  touch \"${_DHCPCD_CONF}\"\n  if ! 
grep -qE '^[[:space:]]*nohook[[:space:]]+resolv\\.conf' \"${_DHCPCD_CONF}\"; then\n    echo \"nohook resolv.conf\" >> \"${_DHCPCD_CONF}\"\n  fi\n  if command -v dhcpcd >/dev/null 2>&1; then\n    dhcpcd -k || true\n    dhcpcd || true\n  fi\n}\n\n_configure_nm() {\n  _msg \"Configuring NetworkManager to not manage DNS\"\n  if [ -f \"${_NM_CONF}\" ]; then\n    _backup_file \"${_NM_CONF}\"\n  else\n    mkdir -p \"$(dirname \"${_NM_CONF}\")\"\n    touch \"${_NM_CONF}\"\n  fi\n  if grep -q '^\\[main\\]' \"${_NM_CONF}\" 2>/dev/null; then\n    # Replace or add dns=none under [main]\n    awk '\n      BEGIN{inmain=0}\n      /^\\[main\\]/ {print; inmain=1; next}\n      /^\\[/ && $0 !~ /^\\[main\\]/ { if(inmain){print \"dns=none\"; inmain=0}; print; next}\n      { if(inmain && $0 ~ /^dns=/){ next } print }\n      END{ if(inmain){print \"dns=none\"} }\n    ' \"${_NM_CONF}\" > \"${_NM_CONF}.new\" && mv -f \"${_NM_CONF}.new\" \"${_NM_CONF}\"\n  else\n    cat > \"${_NM_CONF}\" <<'EOF'\n[main]\ndns=none\nEOF\n  fi\n  service network-manager restart 2>/dev/null || true\n}\n\n_finalize_resolv() {\n  # Ensure /etc/resolv.conf is a regular file with target content\n  if [ -L \"${_RESOLV}\" ]; then\n    _msg \"Replacing symlinked /etc/resolv.conf with a plain file\"\n    _backup_file \"${_RESOLV}\"\n    rm -f \"${_RESOLV}\"\n  fi\n  if [ -e \"${_RESOLV}\" ]; then\n    _backup_file \"${_RESOLV}\"\n    rm -f \"${_RESOLV}\"\n  fi\n  printf \"%s\" \"${_RESOLV_TARGET_CONTENT}\" > \"${_RESOLV}\"\n  chmod 0644 \"${_RESOLV}\"\n  _msg \"Wrote static ${_RESOLV} pointing to 127.0.0.1\"\n}\n\n_main() {\n  _msg \"Detecting DHCP/DNS manager…\"\n  _CLIENT=\"$(_detect_client)\"\n  _msg \"Detected: ${_CLIENT}\"\n\n  case \"${_CLIENT}\" in\n    dhclient)        _configure_dhclient ;;\n    dhcpcd)          _configure_dhcpcd ;;\n    networkmanager)  _configure_nm ;;\n    none)            _msg \"No active dhcp client detected; proceeding to finalize resolv.conf\" ;;\n  esac\n\n  # Keep resolvconf installed 
if you want; it won’t get DNS from DHCP anymore.\n  _finalize_resolv\n\n  _msg \"Verification:\"\n  _msg \"ls -l /etc/resolv.conf -> $(ls -l /etc/resolv.conf 2>/dev/null || echo missing)\"\n  _msg \"Current nameservers:\"\n  _msg \"$(grep -E '^(domain|search|nameserver|options)' /etc/resolv.conf || true)\"\n}\n\n_main \"$@\"\n"
  },
  {
    "path": "aegir/tools/bin/duobackboa",
    "content": "#!/bin/bash\n\n###\n### Acknowledgements\n###\n### Thomas Sileo @ https://thomassileo.name\n### Original recipe: http://bit.ly/1QX462w\n###\n### Extended by Barracuda Team for BOA project\n###\n### See also:\n### http://www.nongnu.org/duplicity/index.html\n###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n# New OpenSSL 3.x version is required\nif [ ! -x \"/usr/local/ssl3/bin/openssl\" ]; then\n  echo \"New OpenSSL 3.x version is required\"\n  exit 1\nfi\n\n_PTN_VRN=3.13.9\n_DCY_VRN=3.0.6\n_LOGPTH=\"/var/log/boa\"\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n_DOW=$(date +%u)\n_DOW=${_DOW//[^1-7]/}\n_DOM=$(date +%e)\n_DOM=${_DOM//[^0-9]/}\n_HST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n_HST=${_HST//[^a-zA-Z0-9-.]/}\n_HST=$(echo -n ${_HST} | tr A-Z a-z 2>&1)\n_HST_DASH=$(echo -n ${_HST} | tr . 
- 2>&1)\n\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n\n# shellcheck disable=SC1091\n[ -e \"/root/.duobackboa.cnf\" ] && source /root/.duobackboa.cnf\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_AWS_VLV=${_AWS_VLV//[^a-z]/}\nif [ -z \"${_AWS_VLV}\" ]; then\n  _AWS_VLV=\"warning\"\nfi\n\n# Set extra environment variables\nexport PYTHONPATH=\"/usr/local/lib/python3.12/site-packages\"\n\n_DCY_PTN=\"/usr/local/bin/python3\"\n_DCY_CMD=\"/usr/local/bin/duplicity -v ${_AWS_VLV}\"\n\nif [ \"$1\" != \"help\" ]; then\n  # Check the Python version to ensure we're using the correct one\n  echo \"Checking expected Python ${_PTN_VRN} version...\"\n  ${_DCY_PTN} --version\n  # Check the Duplicity version to ensure we're using the correct one\n  echo \"Checking expected Duplicity ${_DCY_VRN} version...\"\n  ${_DCY_CMD} --version\nfi\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm 
trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_check_vps() {\n  _BENG_VS=NO\n  _VM_TEST=\"$(uname -a)\"\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _BENG_VS=YES\n  fi\n}\n_check_vps\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth}\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_install() {\n  if [ ! -d \"${_LOGPTH}\" ]; then\n    mkdir -p ${_LOGPTH}\n  fi\n  [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  _DUPLICITY_ITD=$(duplicity --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  if [ \"${_DUPLICITY_ITD}\" = \"${_DCY_VRN}\" ] \\\n    && [ -L \"/usr/local/bin/jp.py\" ] \\\n    && [ -L \"/usr/local/bin/duplicity\" ] \\\n    && [ -L \"/usr/local/bin/aws\" ]; then\n    echo \"Latest duplicity version ${_DCY_VRN} already installed\"\n  else\n    echo \"Installing duplicity dependencies...\"\n    cd\n    _find_fast_mirror_early\n    if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    aptitude purge duplicity -y\n    rm -f /usr/local/bin/duplicity\n    rm -f /usr/local/bin/jp.py\n    rm -f /usr/local/bin/aws*\n    apt-get install ${_aptYesUnth} \\\n        intltool \\\n        libffi-dev \\\n        par2 \\\n        python3-pip \\\n        python3-venv \\\n        python3 \\\n        rclone \\\n        rdiff \\\n        tzdata\n    _PTN_TEST=$(${_DCY_PTN} --version 2>&1)\n    if [[ ! \"${_PTN_TEST}\" =~ \"Python ${_PTN_VRN}\" ]] \\\n      || [ ! -x \"${_DCY_PTN}\" ]; then\n      cd /var/opt\n      rm -rf Python*\n      wget ${_wgetGet} ${_urlDev}/src/Python-${_PTN_VRN}.tgz\n      tar -xzf Python-${_PTN_VRN}.tgz\n      cd Python-${_PTN_VRN}\n      if [ -d \"/usr/local/ssl3\" ]; then\n        bash ./configure --with-openssl=/usr/local/ssl3\n      else\n        bash ./configure --with-openssl=/usr/local/ssl\n      fi\n      make -j $(nproc) --quiet\n      make install --quiet\n      cd\n    fi\n    _PTN_TEST=$(${_DCY_PTN} --version 2>&1)\n    if [[ \"${_PTN_TEST}\" =~ \"Python ${_PTN_VRN}\" ]]; then\n      python3 -m pip install pipx --break-system-packages --root-user-action ignore\n      pip3 install --upgrade pip --root-user-action ignore\n      export PIPX_BIN_DIR=/usr/local/bin\n      export PIPX_HOME=/opt/pipx/venvs\n      pipx install duplicity --include-deps --force\n      pipx install awscli --include-deps --force\n      pipx install boto3 --include-deps --force\n    else\n      echo \"Python ${_PTN_VRN} installation failed with ${_PTN_TEST}\"\n      exit 1\n    fi\n    _DCY_TEST=$(${_DCY_CMD} --version 2>&1)\n    if [[ \"${_DCY_TEST}\" =~ \"duplicity ${_DCY_VRN}\" ]]; then\n      echo \"Installation complete!\"\n    else\n      echo \"Installation failed with ${_DCY_TEST}\"\n      exit 1\n    fi\n  fi\n}\n\n_check_aws() {\n  if [ 
! -x \"/usr/local/bin/aws\" ]; then\n    echo \"Upgrade needed to add the required AWS tools...\"\n    _install\n  fi\n}\n\n_CNT=$(pgrep -fc duplicity)\nif (( _CNT > 0 )); then\n  echo \"[$(date)] Active duplicity process detected, will try again later...\" >> /var/log/mybackup_waiting_queue.log\n  exit 1\nfi\n\nif [ -z \"${_AWS_KEY}\" ] || [ -z \"${_AWS_SEC}\" ] || [ -z \"${_AWS_PWD}\" ]; then\n  echo \"\n\n  CONFIGURATION REQUIRED!\n\n  Add the four (4) required lines listed below to your /root/.duobackboa.cnf file.\n  Required lines are marked with [R] and optional lines with [O]:\n\n    _AWS_KEY='Your AWS Access Key ID'     ### [R] From your AWS S3 settings\n    _AWS_SEC='Your AWS Secret Access Key' ### [R] From your AWS S3 settings\n    _AWS_PWD='Your Secret Password'       ### [R] Generate with 'openssl rand -base64 32'\n    _AWS_REG='Your AWS Region ID'         ### [R] By default 'us-east-1'\n\n    _AWS_TTL='Your Backup Rotation'       ### [O] By default '30D'\n    _AWS_FLC='Your Backup Full Cycle'     ### [O] By default '7D'\n    _AWS_VLV='Your Backup Log Verbosity'  ### [O] By default 'warning' -- [ewnid]\n    _AWS_EXB='Exclude Ægir Backups'       ### [O] By default 'YES' -- can be YES/NO\n\n    Supported values to use as _AWS_REG (the symbol after the # comment):\n\n      Africa (Cape Town)         # af-south-1\n      Asia Pacific (Hong Kong)   # ap-east-1\n      Asia Pacific (Hyderabad)   # ap-south-2\n      Asia Pacific (Jakarta)     # ap-southeast-3\n      Asia Pacific (Melbourne)   # ap-southeast-4\n      Asia Pacific (Mumbai)      # ap-south-1\n      Asia Pacific (Osaka)       # ap-northeast-3\n      Asia Pacific (Seoul)       # ap-northeast-2\n      Asia Pacific (Singapore)   # ap-southeast-1\n      Asia Pacific (Sydney)      # ap-southeast-2\n      Asia Pacific (Tokyo)       # ap-northeast-1\n      Canada (Central)           # ca-central-1\n      Canada West (Calgary)      # ca-west-1\n      Europe (Frankfurt)         # eu-central-1\n      Europe (Ireland)           # 
eu-west-1\n      Europe (London)            # eu-west-2\n      Europe (Milan)             # eu-south-1\n      Europe (Paris)             # eu-west-3\n      Europe (Spain)             # eu-south-2\n      Europe (Stockholm)         # eu-north-1\n      Europe (Zurich)            # eu-central-2\n      Israel (Tel Aviv)          # il-central-1\n      Middle East (Bahrain)      # me-south-1\n      Middle East (UAE)          # me-central-1\n      South America (São Paulo)  # sa-east-1\n      US East (N. Virginia)      # us-east-1\n      US East (Ohio)             # us-east-2\n      US West (N. California)    # us-west-1\n      US West (Oregon)           # us-west-2\n\n      ### Special regions, see: https://aws.amazon.com/govcloud-us/\n\n      AWS GovCloud (US-East)     # us-gov-east-1\n      AWS GovCloud (US-West)     # us-gov-west-1\n\n    Source: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region\n\n    You have to use S3 Console at https://console.aws.amazon.com/s3/home\n    (before attempting to run initial backup!) 
to create an S3 bucket in the\n    desired region with the correct name as shown below:\n\n      daily-remote-${_HST_DASH}\n\n    While duplicity should be able to create a new bucket on demand, in practice\n    it almost never works due to typical propagation delays between AWS regions.\n\n    Please run 'duobackboa test' to make sure that the connection works.\n\n  \"\n  exit 1\nfi\n\nif [ -z \"${_AWS_REG}\" ]; then\n  _AWS_REG=\"us-east-1\"\nfi\n\ncase \"${_AWS_REG}\" in\n  af-south-1|ap-east-1|ap-northeast-1|ap-northeast-2|ap-northeast-3|ap-south-1|ap-south-2|ap-southeast-1|ap-southeast-2|ap-southeast-3|ap-southeast-4|ca-central-1|ca-west-1|eu-central-1|eu-central-2|eu-north-1|eu-south-1|eu-south-2|eu-west-1|eu-west-2|eu-west-3|il-central-1|me-central-1|me-south-1|sa-east-1|us-east-1|us-east-2|us-west-1|us-west-2|us-gov-east-1|us-gov-west-1)\n    _GOOD_AWS_REG=YES\n  ;;\nesac\n\n_AWS_TTL=${_AWS_TTL//[^A-Z0-9]/}\nif [ -z \"${_AWS_TTL}\" ]; then\n  
_AWS_TTL=\"30D\"\nfi\n\n_AWS_FLC=${_AWS_FLC//[^A-Z0-9]/}\nif [ -z \"${_AWS_FLC}\" ]; then\n  _AWS_FLC=\"7D\"\nfi\n\nif [ ! -z \"${_AWS_EXB}\" ] && [ \"${_AWS_EXB}\" = \"NO\" ]; then\n  _EXCLUDE=\"--exclude /data/conf/arch\"\nelse\n  _EXCLUDE=\"--exclude /data/conf/arch --exclude-regexp '^/data/disk/.*/backups'\"\nfi\n\n_USER_INCLUDE=\"\"\nif [ -f \"/root/.duobackboa.include\" ]; then\n  _USER_INCLUDE=\"--include-filelist /root/.duobackboa.include\"\nfi\n\n_USER_EXCLUDE=\"\"\nif [ -f \"/root/.duobackboa.exclude\" ]; then\n  _USER_EXCLUDE=\"--exclude-filelist /root/.duobackboa.exclude\"\nfi\n\nexport AWS_ACCESS_KEY_ID=\"${_AWS_KEY}\"\nexport AWS_SECRET_ACCESS_KEY=\"${_AWS_SEC}\"\nexport PASSPHRASE=\"${_AWS_PWD}\"\n\n_SOURCE=\"/data /etc /home /opt/solr4 /var/aegir /var/solr7 /var/solr9 /var/www /var/xdrago\"\n_BUCKET_NAME=\"daily-remote-${_HST_DASH}\"\n_BUCKET_NAME_DOT=\"daily.remote.${_HST}\"\n_TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n_LOGFILE=\"${_LOGPTH}/${_BUCKET_NAME}.log\"\n_NAME=\"--name=daily-remote\"\n\n_DCY_MN_CMD=\"/usr/local/bin/duplicity -v ${_AWS_VLV} \\\n  --concurrency 4 \\\n  --s3-endpoint-url https://s3.dualstack.${_AWS_REG}.amazonaws.com \\\n  --s3-region-name ${_AWS_REG}\"\n\nif [ -e \"${_LOGPTH}/${_BUCKET_NAME_DOT}.archive.log\" ]; then\n  cat ${_LOGPTH}/${_BUCKET_NAME_DOT}.archive.log >> ${_LOGPTH}/${_BUCKET_NAME}.archive.log\n  mv ${_LOGPTH}/${_BUCKET_NAME_DOT}.archive.log /var/backups/${_BUCKET_NAME_DOT}.archive.log\n  cat ${_LOGPTH}/${_BUCKET_NAME_DOT}.randomize.cleanup.log >> ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\n  mv ${_LOGPTH}/${_BUCKET_NAME_DOT}.randomize.cleanup.log /var/backups/${_BUCKET_NAME_DOT}.randomize.cleanup.log\n  #apt-get clean -qq\n  #rm -rf /var/lib/apt/lists/*\n  if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n  aws s3 mb s3://${_BUCKET_NAME} --region ${_AWS_REG}\n  aws s3 sync s3://${_BUCKET_NAME_DOT} s3://${_BUCKET_NAME}\nfi\n\n_backup_prepare() {\n  _INCLUDE=\"\"\n  for _CDIR in ${_SOURCE}; do\n    _TMP=\" --include ${_CDIR}\"\n    _INCLUDE=\"${_INCLUDE}${_TMP}\"\n  done\n  if [ -e \"/root/.cache/duplicity\" ]; then\n    _CacheTest=$(find /root/.cache/duplicity/* \\\n      -maxdepth 1 \\\n      -mindepth 1 \\\n      -type f \\\n      | sort 2>&1)\n    if [[ \"${_CacheTest}\" =~ \"No such file or directory\" ]] \\\n      || [ -z \"${_CacheTest}\" ]; then\n      _DO_CLEANUP=NO\n    else\n      _DO_CLEANUP=YES\n    fi\n  fi\n}\n\n_monthly_cleanup() {\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\" ]; then\n    _RCL=$(cat ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log 2>&1)\n    _RCL=$(echo -n ${_RCL} | tr -d \"\\n\" 2>&1)\n    _RCL=${_RCL//[^1-5]/}\n  else\n    _RCL=$((RANDOM%5+1))\n    _RCL=${_RCL//[^1-5]/}\n    echo ${_RCL} > ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\n  fi\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n    && [ ! 
-e \"/root/.skip_duplicity_monthly_cleanup.cnf\" ] \\\n    && [ \"${_DOM}\" = \"${_RCL}\" ] \\\n    && [ \"${_DO_CLEANUP}\" = \"YES\" ]; then\n    if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n      _n=$((RANDOM%300+8))\n      echo \"Waiting ${_n} seconds on $(date) before running cleanup --force\" > ${_LOGFILE}\n      sleep ${_n}\n    fi\n    echo \"Running cleanup --force on $(date)\" >> ${_LOGFILE}\n    echo \"Command is ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\"\n    ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\n    rm -f ${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log\n    rm -f ${_LOGPTH}/${_BUCKET_NAME}.randomize.cleanup.log\n  fi\n}\n\n_randomize_full() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log\" ]; then\n      _RDW=$(cat ${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log 2>&1)\n      _RDW=$(echo -n ${_RDW} | tr -d \"\\n\" 2>&1)\n      _RDW=${_RDW//[^1-7]/}\n      _MODE=\"incremental\"\n    else\n      _RDW=$((RANDOM%7+1))\n      _RDW=${_RDW//[^1-7]/}\n      _MODE=\"full\"\n      echo ${_RDW} > ${_LOGPTH}/${_BUCKET_NAME}.randomize.full.log\n    fi\n  else\n    _RDW=6\n  fi\n}\n\n_set_mode() {\n  if [ \"${_DOW}\" = \"${_RDW}\" ] && [ \"${_AWS_FLC}\" = \"7D\" ]; then\n    if [ ! 
-e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n      _MODE=\"full\"\n      _AWS_FLC=\"1M\"\n    fi\n  else\n    if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n      && [ \"${_DO_CLEANUP}\" = \"YES\" ]; then\n      _MODE=\"incremental\"\n    else\n      _MODE=\"full\"\n    fi\n  fi\n}\n\n_set_cmd() {\n  _DCY_UP_CMD=\"/usr/local/bin/duplicity ${_MODE} -v ${_AWS_VLV} \\\n    --allow-source-mismatch \\\n    --concurrency 4 \\\n    --full-if-older-than ${_AWS_FLC} \\\n    --s3-endpoint-url https://s3.dualstack.${_AWS_REG}.amazonaws.com \\\n    --s3-region-name ${_AWS_REG} \\\n    --s3-use-ia \\\n    --volsize 300\"\n}\n\n_run_backup() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    if [ ! -e \"/root/tmp/home/\" ]; then\n      _n=$((RANDOM%300+8))\n      echo \"Waiting ${_n} seconds on $(date) before running restore home 7D tmp/home\" >> ${_LOGFILE}\n      sleep ${_n}\n      restore home 7D tmp/home >> ${_LOGFILE}\n    fi\n    _n=$((RANDOM%300+8))\n    echo \"Waiting ${_n} seconds on $(date) before running ${_MODE} backup\" >> ${_LOGFILE}\n    sleep ${_n}\n  fi\n  echo \"Running ${_MODE} backup on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UP_CMD} \\\n    ${_NAME} \\\n    ${_EXCLUDE} \\\n    ${_USER_EXCLUDE} \\\n    ${_INCLUDE} \\\n    ${_USER_INCLUDE} \\\n    --exclude '**' / ${_TARGET}\"\n  ${_DCY_UP_CMD} \\\n    ${_NAME} \\\n    ${_EXCLUDE} \\\n    ${_USER_EXCLUDE} \\\n    ${_INCLUDE} \\\n    ${_USER_INCLUDE} \\\n    --exclude '**' / ${_TARGET} >> ${_LOGFILE}\n}\n\n_remove_older_than() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    _n=$((RANDOM%300+8))\n    echo \"Waiting ${_n} seconds on $(date) before running remove-older-than ${_AWS_TTL}\" >> ${_LOGFILE}\n    sleep ${_n}\n  fi\n  echo \"Running remove-older-than on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_MN_CMD} remove-older-than ${_AWS_TTL} --force ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} remove-older-than 
${_AWS_TTL} --force ${_NAME} ${_TARGET} >> ${_LOGFILE}\n}\n\n_collection_status() {\n  if [ -e \"/root/.randomize_duplicity_full_backup_day.cnf\" ]; then\n    _n=$((RANDOM%300+8))\n    echo \"Waiting ${_n} seconds on $(date) before running collection-status\" >> ${_LOGFILE}\n    sleep ${_n}\n  fi\n  echo \"Running collection-status on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET} >> ${_LOGFILE}\n}\n\n_backup() {\n  _backup_prepare\n  _monthly_cleanup\n  _randomize_full\n  _set_mode\n  _set_cmd\n  _run_backup\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n    && [ \"${_DOW}\" = \"${_RDW}\" ] \\\n    && [ \"${_DO_CLEANUP}\" = \"YES\" ]; then\n    _remove_older_than\n    _collection_status\n  fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" != \"OFF\" ]; then\n    echo \"Sending email report on $(date)\" >> ${_LOGFILE}\n    s-nail -s \"Daily backup: ${_MODE} ${_HST} $(date)\" ${_MY_EMAIL} < ${_LOGFILE}\n  fi\n  cat ${_LOGFILE} >> ${_LOGPTH}/${_BUCKET_NAME}.archive.log\n  rm -f ${_LOGFILE}\n}\n\n_conn_test() {\n  if [ $# = 1 ]; then\n    _BUCKET_NAME=\"daily-remote-$1\"\n    _TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n  fi\n  echo \"Running AWS connection test, please wait...\"\n  echo \"Command is ${_DCY_MN_CMD} cleanup --dry-run --timeout 5 ${_NAME} ${_TARGET}\"\n  _ConnTest=$(${_DCY_MN_CMD} cleanup --dry-run --timeout 5 ${_NAME} ${_TARGET} 2>&1)\n  ### echo _ConnTest is STR ${_ConnTest} END\n  if [[ \"${_ConnTest}\" =~ \"No connection to backend\" ]] \\\n    || [[ \"${_ConnTest}\" =~ \"IllegalLocationConstraintException\" ]]; then\n    echo\n    echo \"  Sorry, I can't connect to ${_TARGET}\"\n    echo \"  Please check if the bucket has expected name: ${_BUCKET_NAME}\"\n    echo \"  This bucket must already exist in the ${_AWS_REG} AWS region\"\n    echo \"  http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region\"\n    echo 
\"  Bye\"\n    echo\n    exit 1\n  else\n    echo \"OK, I can connect to ${_TARGET}\"\n  fi\n}\n\n_status() {\n  echo \"Command is ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\n}\n\n_cleanup() {\n  echo \"Command is ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} cleanup --force ${_NAME} ${_TARGET}\n  echo \"Command is ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} collection-status ${_NAME} ${_TARGET}\n}\n\n_list() {\n  echo \"Command is ${_DCY_MN_CMD} list-current-files ${_NAME} ${_TARGET}\"\n  ${_DCY_MN_CMD} list-current-files ${_NAME} ${_TARGET}\n}\n\n_restore() {\n  if [ $# = 2 ]; then\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\n  else\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\n  fi\n}\n\n_retrieve() {\n  if [ $# = 3 ]; then\n    _HST_DASH=$(echo -n $3 | tr . - 2>&1)\n    _BUCKET_NAME=\"daily-remote-${_HST_DASH}\"\n    _TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 ${_NAME} ${_TARGET} $2\n  elif [ $# = 4 ]; then\n    _HST_DASH=$(echo -n $4 | tr . 
- 2>&1)\n    _BUCKET_NAME=\"daily-remote-${_HST_DASH}\"\n    _TARGET=\"boto3+s3://${_BUCKET_NAME}\"\n    echo \"Command is ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\"\n    ${_DCY_MN_CMD} restore --path-to-restore $1 --time $2 ${_NAME} ${_TARGET} $3\n  fi\n}\n\nif [ \"$1\" = \"backup\" ]; then\n  if test -f /run/${_HST}_backup.pid ; then\n    touch ${_LOGPTH}/wait_${_HST}_backup.log\n    echo \"The duplicity backup is running already?\"\n    echo \"Existing /run/${_HST}_backup.pid found...\"\n    echo \"But no active duplicity process detected...\"\n    exit 1\n  else\n    touch /run/${_HST}_backup.pid\n    echo \"The duplicity backup is starting now...\"\n    _check_aws\n    _backup\n    echo \"The duplicity backup is complete!\"\n    touch ${_LOGPTH}/run_${_HST}_backup.log\n    rm -f /run/${_HST}_backup.pid\n  fi\nelif [ \"$1\" = \"install\" ]; then\n  _install\nelif [ \"$1\" = \"cleanup\" ]; then\n  _cleanup\nelif [ \"$1\" = \"list\" ]; then\n  _list\nelif [ \"$1\" = \"restore\" ]; then\n  if [ $# = 3 ]; then\n    _restore $2 $3\n  else\n    _restore $2 $3 $4\n  fi\nelif [ \"$1\" = \"retrieve\" ]; then\n  if [ $# = 4 ]; then\n    _retrieve $2 $3 $4\n  elif [ $# = 5 ]; then\n    _retrieve $2 $3 $4 $5\n  else\n    echo \"You also have to specify the hostname of the backed up system\"\n    exit 1\n  fi\nelif [ \"$1\" = \"status\" ]; then\n  _check_aws\n  _status\nelif [ \"$1\" = \"test\" ]; then\n  _conn_test\nelse\n  echo \"\n\n  INSTALLATION:\n\n  $ duobackboa install\n\n  USAGE:\n\n  $ duobackboa backup\n  $ duobackboa cleanup\n  $ duobackboa list\n  $ duobackboa status\n  $ duobackboa test\n  $ duobackboa restore file [time] destination\n  $ duobackboa retrieve file [time] destination hostname\n\n  RESTORE EXAMPLES:\n\n  Note: Be careful while restoring not to prepend a slash to the path!\n\n  Restoring a single file to tmp/\n  $ duobackboa restore data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz\n\n  Restoring an older version of a 
directory to tmp/ - interval or full date\n  $ duobackboa restore data/disk/o1/backups 7D8h8s tmp/backups\n  $ duobackboa restore data/disk/o1/backups 2014/11/11 tmp/backups\n\n  Restoring data on a different server\n  $ duobackboa retrieve data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz srv.foo.bar\n  $ duobackboa retrieve data/disk/o1/backups 2014/11/11 tmp/backups srv.foo.bar\n\n  Note: srv.foo.bar is the hostname of the BOA system backed up earlier.\n        In the 'retrieve' mode the script uses the _AWS_* variables configured\n        in the current system's /root/.duobackboa.cnf file - so make sure to\n        temporarily edit this file to set/replace all four required _AWS_*\n        variables originally used on the host you are retrieving data from!\n        Keep them secret and manage them in your offline password manager app.\n\n  \"\n  exit 1\nfi\n\nexport AWS_ACCESS_KEY_ID=\nexport AWS_SECRET_ACCESS_KEY=\nexport PASSPHRASE=\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/fancynow",
    "content": "#!/bin/bash\n\n# NAME: fancynow\n# PATH: $HOME/bin\n# DESC: Display current weather, calendar and time\n# CALL: Called from terminal or ~/.bashrc\n# DATE: Apr 6, 2017. Modified: May 24, 2019.\n\n# UPDT: 2019-05-24 If weather is unavailable, print a nicely formatted error message.\n\n# NOTE: To display all available toilet fonts use this one-liner:\n#       for i in ${TOILET_FONT_PATH:=/usr/share/figlet}/*.{t,f}lf; do j=${i##*/}; toilet -d \"${i%/*}\" -f \"$j\" \"${j%.*}\"; done\n\n# Setup for 92 character wide terminal\n_DateColumn=55 # Default is 27 for 80 character line, 34 for 92 character line\n_TimeColumn=80 # Default is 49 for   \"   \"   \"   \"    61 \"   \"   \"   \"\n\nif [ -e \"/root/.found_correct_city.cnf\" ]; then\n  _LOC_CITY=$(cat /root/.found_correct_city.cnf 2>/dev/null | tr -d '\\n')\nelse\n  exit 0\nfi\n\n# _LOC_CITY holds your city name, GPS coordinates, etc. See: curl wttr.in/:help\ncurl wttr.in/${_LOC_CITY}?0 --silent --max-time 2 > /tmp/now-weather\n# Timeout in seconds. Increase for slow connection---^\n\n[ ! 
-e \"/tmp/now-weather\" ] && exit 0\n\nreadarray _aWeather < /tmp/now-weather\nrm -f /tmp/now-weather\n\n# Was valid weather report found or an error message?\nif [[ \"${_aWeather[0]}\" == \"Weather report:\"* ]]; then\n    _WeatherSuccess=true\n    if [[ \"${_aWeather[0]}\" =~ \"+\" ]]; then\n      _iWeather=$(echo \"${_aWeather[@]}\" | tr '+' ' ' 2>&1)\n      echo \"${_iWeather}\"\n    else\n      echo \"${_aWeather[@]}\"\n    fi\nelse\n    _WeatherSuccess=false\n    echo \"                              \"\n    echo \"                              \"\n    echo \"                              \"\n    echo \"                              \"\n    echo \"                              \"\n    echo \"                              \"\n    echo \"                              \"\n    echo \" \"\nfi\necho \" \"                # Pad blank lines for calendar & time to fit\necho \" \"\n\n#--------- DATE -------------------------------------------------------------\n\n# calendar current month with today highlighted.\n# colors 00=bright white, 31=red, 32=green, 33=yellow, 34=blue, 35=purple,\n#        36=cyan, 37=white\n\ntput sc                 # Save cursor position.\n# Move up 9 lines\ni=0\nwhile [ $((++i)) -lt 10 ]; do tput cuu1; done\n\nif [[ \"${_WeatherSuccess}\" == true ]]; then\n    # Depending on length of your city name and country name you will:\n    #   1. Comment out next three lines of code. Uncomment fourth code line.\n    #   2. Change subtraction value and set number of print spaces to match\n    #      subtraction value. 
Then place comment on fourth code line.\n    _Column=$((_DateColumn - 10))\n    tput cuf ${_Column}        # Move x column number\n    # Blank out \", country\" with x spaces\n    printf \"          \"\nelse\n    tput cuf ${_DateColumn}    # Position to column _DateColumn for date display\nfi\n\n# -h needed to turn off formatting: https://askubuntu.com/questions/1013954/bash-substring-stringoffsetlength-error/1013960#1013960\ncal > /tmp/terminal1\n# -h not supported in Ubuntu 18.04. Use second answer: https://askubuntu.com/a/1028566/307523\ntr -cd '\\11\\12\\15\\40\\60-\\136\\140-\\176' < /tmp/terminal1  > /tmp/terminal\n\n_CalLineCnt=1\n_Today=$(date +\"%e\")\n\nprintf \"\\033[32m\"   # color green -- see list above.\n\nwhile IFS= read -r Cal; do\n    printf \"%s\" \"$Cal\"\n    if [[ ${_CalLineCnt} -gt 2 ]]; then\n        # See if today is on current line & invert background\n        tput cub 22\n        for (( j=0 ; j <= 18 ; j += 3 )) ; do\n            Test=${Cal:$j:2}            # Current day on calendar line\n            if [[ \"$Test\" == \"${_Today}\" ]]; then\n                printf \"\\033[7m\"        # Reverse: [ 7 m\n                printf \"%s\" \"${_Today}\"\n                printf \"\\033[0m\"        # Normal: [ 0 m\n                printf \"\\033[32m\"       # color green -- see list above.\n                tput cuf 1\n            else\n                tput cuf 3\n            fi\n        done\n    fi\n\n    tput cud1               # Down one line\n    tput cuf ${_DateColumn}    # Move _DateColumn columns right\n    _CalLineCnt=$((++_CalLineCnt))\ndone < /tmp/terminal\n\nprintf \"\\033[00m\"           # color -- bright white (default)\necho \"\"\n\ntput rc                     # Restore saved cursor position.\n\n#-------- TIME --------------------------------------------------------------\n\ntput sc                 # Save cursor position.\n# Move up 8 lines\ni=0\nwhile [ $((++i)) -lt 9 ]; do tput cuu1; done\ntput cuf ${_TimeColumn}    # Move _TimeColumn columns right\n\n# Do we 
have the toilet package?\nif hash toilet 2>/dev/null; then\n    echo \" $(date +\"%I:%M %P\") \" | \\\n        toilet -f future --filter border > /tmp/terminal\n# Do we have the figlet package?\nelif hash figlet 2>/dev/null; then\n#    echo $(date +\"%I:%M %P\") | figlet > /tmp/terminal\n    date +\"%I:%M %P\" | figlet > /tmp/terminal\n# else use standard font\nelse\n#    echo $(date +\"%I:%M %P\") > /tmp/terminal\n    date +\"%I:%M %P\" > /tmp/terminal\nfi\n\nwhile IFS= read -r _Time; do\n    printf \"\\033[01;36m\"    # color cyan\n    printf \"%s\" \"${_Time}\"\n    tput cud1               # Down one line\n    tput cuf ${_TimeColumn}    # Move _TimeColumn columns right\ndone < /tmp/terminal\n\ntput rc                     # Restore saved cursor position.\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/ffdevuan",
    "content": "#!/bin/bash\n\n#\n# Devuan mirrors → flat list for speed testing + apt sources rendering\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# ===== Tunables ==============================================================\n_MIRROR_SRC_URL=\"https://pkgmaster.devuan.org/mirror_list.txt\"\n# To use an already-downloaded file, set _MIRROR_SRC_FILE=/path/to/mirror_list.txt\n_MIRROR_SRC_FILE=\"${_MIRROR_SRC_FILE:-}\"\n\n# Default suite for probing and for rendered sources.list\n_SUITE_DEFAULT=\"${_SUITE_DEFAULT:-daedalus}\"\n\n# Reputation policy (from Devuan metadata)\n_REQUIRE_DNSRR=\"NO\"              # if YES, require DNSRR: yes\n_REQUIRE_DNSRR_OR_CC=\"YES\"       # if YES, require DNSRR: yes OR DNSRRCC: yes (recommended)\n\n# Transport policy\n_REQUIRE_HTTPS=\"YES\"             # if YES, drop mirrors without HTTPS\n_PROTOCOL_PREFERENCE=\"HTTPS\"     # currently used only for clarity; we pick https if available\n\n# Bandwidth policy\n_MIN_MBPS=\"${_MIN_MBPS:-500}\"    # drop mirrors reporting < 500 Mbps (approx) after normalization\n_REJECT_IF_BW_NONE=\"YES\"         # drop mirrors with Bandwidth: None/empty/unknown\n\n# Quick live reliability probe\n_REQUIRE_QUICK_PROBE=\"YES\"       # if YES, try fetching 1 byte of Release; drop on timeout/SSL errors\n_PROBE_TIMEOUT=\"${_PROBE_TIMEOUT:-5}\"  # seconds per probe\n\n# Static blacklist (comma-separated substrings matched vs FQDN or BaseURL), case-insensitive\n_BLACKLISTED_MIRRORS=\"${_BLACKLISTED_MIRRORS:-mirror.abmanagement.al}\"\n\n# Output: flat list for ffmirror tester (one URL per 
line)\n_FF_LIST=\"${_FF_LIST:-/var/log/boa/devaun-fast-mirrors-list.txt}\"\n\n# Curl commands\n_CURL=\"curl -fsSL\"\n_CURL_HEAD=\"curl -fsSL -I\"\n\n# ===== Internals / temp files ===============================================\n_TMP_DIR=\"$(mktemp -d)\"\n_RAW_LIST=\"${_TMP_DIR}/mirror_list.txt\"\n\n_cleanup() { rm -rf \"${_TMP_DIR}\" 2>/dev/null || true; }\ntrap _cleanup EXIT\n\n# ===== Helpers ==============================================================\n\n# Convert \"Bandwidth: ...\" to rough Mbps integer; unknown→0\n_convert_bandwidth_to_mbps() {\n  _bw_raw=\"$1\"\n  _bw_norm=\"$(printf '%s' \"${_bw_raw}\" | tr '[:upper:]' '[:lower:]' | tr -d ' ')\"\n\n  case \"${_bw_norm}\" in\n    *nolimit*|*unlimit*|*fewgb*|*10g*|*20g*|*26g*)\n      echo 10000; return 0 ;;\n  esac\n\n  # Extract numeric part and unit\n  _num=\"$(printf '%s' \"${_bw_norm}\" | sed -E 's/^([^0-9]*)([0-9]+(\\.[0-9]+)?).*/\\2/')\" || true\n  _unit=\"$(printf '%s' \"${_bw_norm}\" | sed -nE 's/.*([kmgt]?b(ps)?|[kmgt]?bit\\/s).*/\\1/p')\" || true\n\n  if [ -z \"${_num:-}\" ]; then\n    echo 0; return 0\n  fi\n\n  _mbps=\"0\"\n  case \"${_unit:-}\" in\n    g* ) _mbps=\"$(awk -v n=\"${_num}\" 'BEGIN{printf \"%.0f\", n*1000}')\" ;;\n    m* ) _mbps=\"$(awk -v n=\"${_num}\" 'BEGIN{printf \"%.0f\", n}')\" ;;\n    k* ) _mbps=\"$(awk -v n=\"${_num}\" 'BEGIN{printf \"%.0f\", n/1000}')\" ;;\n    *  ) _mbps=\"$(awk -v n=\"${_num}\" 'BEGIN{printf \"%.0f\", n}')\" ;;\n  esac\n  echo \"${_mbps}\"\n}\n\n# Reputation gate based on Devuan metadata flags\n_reputation_ok() {\n  _dnsrr=\"$1\"; _dnsrrcc=\"$2\"\n  _dnsrr_lc=\"$(printf '%s' \"${_dnsrr}\" | tr '[:upper:]' '[:lower:]')\"\n  _dnsrrcc_lc=\"$(printf '%s' \"${_dnsrrcc}\" | tr '[:upper:]' '[:lower:]')\"\n\n  if [ \"${_REQUIRE_DNSRR}\" = \"YES\" ] && [ \"${_dnsrr_lc}\" != \"yes\" ]; then\n    return 1\n  fi\n  if [ \"${_REQUIRE_DNSRR_OR_CC}\" = \"YES\" ]; then\n    if [ \"${_dnsrr_lc}\" = \"yes\" ] || [ \"${_dnsrrcc_lc}\" = \"yes\" ]; then\n      return 0\n    
else\n      return 1\n    fi\n  fi\n  return 0\n}\n\n# Prefer HTTPS if listed; allow HTTP only if policy permits\n_pick_scheme() {\n  _protocols=\"$1\"\n  _p_lc=\"$(printf '%s' \"${_protocols}\" | tr '[:upper:]' '[:lower:]')\"\n  if echo \"${_p_lc}\" | grep -q \"https\"; then\n    echo \"https\"; return 0\n  fi\n  if [ \"${_REQUIRE_HTTPS}\" = \"YES\" ]; then\n    echo \"\"; return 0\n  fi\n  echo \"http\"\n}\n\n# Detect repo root by probing Release (\"/merged\" → \"/devuan\" → \"\")\n_detect_repo_root() {\n  _scheme=\"$1\"; _base=\"$2\"; _suite=\"${3:-${_SUITE_DEFAULT}}\"\n\n  _b=\"${_base#http://}\"; _b=\"${_b#https://}\"\n\n  _try() {\n    _path=\"$1\"\n    _url=\"${_scheme}://${_b}${_path}/dists/${_suite}/Release\"\n    _code=\"$(${_CURL_HEAD} --max-time \"${_PROBE_TIMEOUT}\" -o /dev/null -w '%{http_code}' \"${_url}\" || true)\"\n    if [ \"${_code}\" = \"200\" ] || [ \"${_code}\" = \"301\" ] || [ \"${_code}\" = \"302\" ]; then\n      printf '%s' \"${_path}\"; return 0\n    fi\n    return 1\n  }\n\n  _try \"/merged\" && return 0\n  _try \"/devuan\" && return 0\n  _try \"\" && return 0\n\n  echo \"\"; return 1\n}\n\n# Tiny GET to catch SSL/timeouts; returns 0 if ok\n_quick_probe_release() {\n  _fullbase=\"$1\"   # e.g. 
https://mirror/foo\n  _suite=\"${2:-${_SUITE_DEFAULT}}\"\n  ${_CURL} --max-time \"${_PROBE_TIMEOUT}\" -r 0-0 \\\n    \"${_fullbase}/dists/${_suite}/Release\" >/dev/null 2>&1\n}\n\n# ===== Fetch the mirror list =================================================\nif [ -n \"${_MIRROR_SRC_FILE}\" ]; then\n  cp -f \"${_MIRROR_SRC_FILE}\" \"${_RAW_LIST}\"\nelse\n  ${_CURL} \"${_MIRROR_SRC_URL}\" > \"${_RAW_LIST}\"\nfi\n\n# ===== Parse blocks ==========================================================\n_FQDN=\"\"\n_BASEURL=\"\"\n_PROTOCOLS=\"\"\n_ACTIVE=\"\"\n_DNSRR=\"\"\n_DNSRRCC=\"\"\n_BANDWIDTH=\"\"\n\n_ACCEPTED_URLS=\"\"\n\n_process_block() {\n  # Minimal fields required\n  if [ -z \"${_FQDN}\" ] || [ -z \"${_BASEURL}\" ]; then\n    return 0\n  fi\n\n  # Active?\n  _act_lc=\"$(printf '%s' \"${_ACTIVE}\" | tr '[:upper:]' '[:lower:]')\"\n  if [ \"${_act_lc}\" != \"yes\" ]; then\n    return 0\n  fi\n\n  # Static blacklist\n  if [ -n \"${_BLACKLISTED_MIRRORS}\" ]; then\n    _ifs_save=\"${IFS}\"; IFS=\",\"\n    for _bad in ${_BLACKLISTED_MIRRORS}; do\n      IFS=\"${_ifs_save}\"\n      _bad_trim=\"$(printf '%s' \"${_bad}\" | xargs)\"\n      if [ -n \"${_bad_trim}\" ] && echo \"${_FQDN} ${_BASEURL}\" | grep -qi \"${_bad_trim}\"; then\n        return 0\n      fi\n    done\n    IFS=\"${_ifs_save}\"\n  fi\n\n  # Scheme selection\n  _scheme=\"$(_pick_scheme \"${_PROTOCOLS}\")\"\n  if [ -z \"${_scheme}\" ]; then\n    return 0\n  fi\n\n  # Reputation (Devuan metadata)\n  if ! 
_reputation_ok \"${_DNSRR}\" \"${_DNSRRCC}\"; then\n    return 0\n  fi\n\n  # Bandwidth normalization & policy\n  _bw_mbps=\"$(_convert_bandwidth_to_mbps \"${_BANDWIDTH}\")\"\n  if [ \"${_REJECT_IF_BW_NONE}\" = \"YES\" ]; then\n    _bw_lc=\"$(printf '%s' \"${_BANDWIDTH}\" | tr '[:upper:]' '[:lower:]')\"\n    if [ -z \"${_BANDWIDTH}\" ] || [ \"${_bw_lc}\" = \"none\" ] || [ \"${_bw_mbps}\" -eq 0 ]; then\n      return 0\n    fi\n  fi\n  if [ \"${_bw_mbps}\" -lt \"${_MIN_MBPS}\" ]; then\n    return 0\n  fi\n\n  # Detect repo root path (merged/devuan/bare)\n  _root_path=\"$(_detect_repo_root \"${_scheme}\" \"${_BASEURL}\" \"${_SUITE_DEFAULT}\")\" || true\n  if [ -z \"${_root_path}\" ]; then\n    return 0\n  fi\n\n  # Build apt base\n  _b=\"${_BASEURL#http://}\"; _b=\"${_b#https://}\"\n  _apt_base=\"${_scheme}://${_b}${_root_path}\"\n\n  # Quick reliability probe (cheap)\n  if [ \"${_REQUIRE_QUICK_PROBE}\" = \"YES\" ]; then\n    if ! _quick_probe_release \"${_apt_base}\" \"${_SUITE_DEFAULT}\"; then\n      return 0\n    fi\n  fi\n\n  # Append unique\n  if ! 
printf '%s\\n' \"${_ACCEPTED_URLS}\" | grep -qx \"${_apt_base}\"; then\n    _ACCEPTED_URLS=\"$(printf '%s\\n%s\\n' \"${_ACCEPTED_URLS}\" \"${_apt_base}\" | sed '/^$/d')\"\n  fi\n}\n\n# Walk file, blocks separated by blank lines\nwhile IFS= read -r _line || [ -n \"${_line}\" ]; do\n  if [ -z \"${_line}\" ]; then\n    _process_block\n    _FQDN=\"\"; _BASEURL=\"\"; _PROTOCOLS=\"\"; _ACTIVE=\"\"; _DNSRR=\"\"; _DNSRRCC=\"\"; _BANDWIDTH=\"\"\n    continue\n  fi\n\n  case \"${_line}\" in\n    FQDN:*)       _FQDN=\"$(printf '%s' \"${_line#FQDN:}\" | xargs)\" ;;\n    BaseURL:*)    _BASEURL=\"$(printf '%s' \"${_line#BaseURL:}\" | xargs)\" ;;\n    Protocols:*)  _PROTOCOLS=\"$(printf '%s' \"${_line#Protocols:}\" | xargs)\" ;;\n    Active:*)     _ACTIVE=\"$(printf '%s' \"${_line#Active:}\" | xargs)\" ;;\n    DNSRR:*)      _DNSRR=\"$(printf '%s' \"${_line#DNSRR:}\" | xargs)\" ;;\n    DNSRRCC:*)    _DNSRRCC=\"$(printf '%s' \"${_line#DNSRRCC:}\" | xargs)\" ;;\n    Bandwidth:*)  _BANDWIDTH=\"$(printf '%s' \"${_line#Bandwidth:}\" | xargs)\" ;;\n    # Other fields are ignored for filtering: Country, Rate, etc.\n  esac\ndone < \"${_RAW_LIST}\"\n# Process last block if file doesn't end with newline\n_process_block\n\n# ===== Emit the flat list ====================================================\nmkdir -p \"$(dirname \"${_FF_LIST}\")\"\n# Stable sorted unique list\nprintf '%s\\n' \"${_ACCEPTED_URLS}\" | LC_ALL=C sort -u > \"${_FF_LIST}\"\n\nif [ -s \"${_FF_LIST}\" ]; then\n  echo \"OK: wrote $(wc -l < \"${_FF_LIST}\") mirrors to ${_FF_LIST}\"\n  exit 0\nelse\n  echo \"No mirrors matched given policy.\"\n  exit 1\nfi\n"
  },
  {
    "path": "aegir/tools/bin/ffmirror",
    "content": "#!/bin/bash\n\n# borrowed from http://grulos.blogspot.com\n\ntrap 'printf \"\\ncaught signal\\nGot only ${#slist[@]}\\n\" &&\n  printf \"%s\\n\" \"${slist[@]}\" && exit 1' 2\n\nwhile read -r line; do\n\n  # b-a to get time difference.\n  # could have used date +%s%N\n  # I guess this won't work on a BSD\n  a=$(</proc/uptime)\n  a=${a%%\\ *}\n  a=${a/./}\n\n  # prepare checkline\n  checkline=\n  if [[ \"${line}\" =~ \"http:\" ]] || [[ \"${line}\" =~ \"https:\" ]]; then\n    checkline=\"${line}\"\n  else\n    if [ -n \"${line}\" ]; then\n      checkline=\"http://${line}\"\n    else\n      continue  # Skip this line but continue for other lines\n    fi\n  fi\n\n  # wget stuff if checkline looks fine\n  wget --spider -q -O /dev/null \"${checkline}\" &&\n     b=$(</proc/uptime) &&\n       b=${b%%\\ *} &&\n         b=${b/./}\n\n  # printf \"%06s%04s%s\\n\" \"$((b-a))0ms ${response[1]} $host\"\n\n  # try next if wget failed (b was not updated)\n  [ $((b-a)) -le 0 ] && continue\n\n  # this is my lazy sort algorithm (c)\n  c=$(((b-a)*100))\n  until [ \"${slist[$c]}\" == \"\" ]; do\n    ((c++))\n  done\n  # slist[$c]=\"$((b-a))0ms $line\"\n  slist[$c]=\"$line\\n\"\ndone\n\n# %b interprets the stored \\n escapes, one mirror per line\nprintf '%b' \"${slist[@]}\"\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/fix-drupal-platform-ownership.sh",
    "content": "#!/bin/bash\n\n# Help menu\nprint_help() {\ncat <<-HELP\nThis script is used to fix the file ownership of a Drupal platform. You need to\nprovide the following arguments:\n\n  --root: Path to the root of your Drupal installation.\n  --script-user: Username of the user to whom you want to give file ownership\n                 (defaults to 'aegir').\n  --web-group: Web server group name (defaults to 'www-data').\n\nUsage: (sudo) ${0##*/} --root=PATH --script-user=USER --web-group=GROUP\nExample: (sudo) ${0##*/} --root=/var/aegir/platforms/drupal-7.50 --script-user=aegir --web-group=www-data\nHELP\nexit 0\n}\n\nif [ \"$(id -u)\" != 0 ]; then\n  printf \"Error: You must run this with sudo or root.\\n\"\n  exit 1\nfi\n\ndrupal_root=${1%/}\nscript_user=${2:-aegir}\nweb_group=\"${3:-www-data}\"\n\n# Parse Command Line Arguments\nwhile [ \"$#\" -gt 0 ]; do\n  case \"$1\" in\n    --root=*)\n        drupal_root=\"${1#*=}\"\n        ;;\n    --script-user=*)\n        script_user=\"${1#*=}\"\n        ;;\n    --web-group=*)\n        web_group=\"${1#*=}\"\n        ;;\n    --help) print_help;;\n    *)\n      printf \"Error: Invalid argument, run --help for valid arguments.\\n\"\n      exit 1\n  esac\n  shift\ndone\n\nif [ -z \"${drupal_root}\" ] \\\n  || [ ! -d \"${drupal_root}/sites\" ] \\\n  || [ ! -f \"${drupal_root}/core/modules/system/system.module\" ] \\\n  && [ ! 
-f \"${drupal_root}/modules/system/system.module\" ]; then\n    printf \"Error: Please provide a valid Drupal root directory.\\n\"\n    exit 1\nfi\n\nif [ -z \"${script_user}\" ] \\\n  || [[ $(id -un \"${script_user}\" 2> /dev/null) != \"${script_user}\" ]]; then\n    printf \"Error: Please provide a valid user.\\n\"\n    exit 1\nfi\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n\n### Fix ownership only once daily, unless it's Drupal 8 or newer\nif [ -e \"${drupal_root}/sites/all/libraries/ownership-fixed-${_TODAY}.pid\" ]; then\n  if [ -e \"${drupal_root}/core/themes/olivero\" ] \\\n    || [ ! -e \"${drupal_root}/core/themes/classy\" ]; then\n    _drupal_eight_nine_ten=TRUE\n  else\n    exit 0\n  fi\nfi\n\ncd ${drupal_root}\n\nprintf \"Setting ownership of \"${drupal_root}\" to: user => \"${script_user}\" group => \"users\"\\n\"\nchown ${script_user}:users ${drupal_root}\n\n### Make sure that expected sites/all sub-directories exist\nmkdir -p ${drupal_root}/sites/all/{modules,themes,libraries,drush}\n\n### Create ctrl pid\nrm -f ${drupal_root}/sites/all/libraries/ownership-fixed*.pid\ntouch ${drupal_root}/sites/all/libraries/ownership-fixed-${_TODAY}.pid\n\n### BOA specific path and logic for limited user own codebases\nif [[ \"${drupal_root}\" =~ \"/static/\" ]] && [ -e \"${drupal_root}/core\" ]; then\n  rm -f ${drupal_root}/sites/development.services.yml\nfi\n\nif [ -e \"${drupal_root}/vendor\" ]; then\n  chown -R ${script_user}:users ${drupal_root}/vendor\nelif [ -e \"${drupal_root}/../vendor\" ]; then\n  chown -R ${script_user}:users ${drupal_root}/../vendor\nfi\n\nchown -R ${script_user}:users \\\n  ${drupal_root}/sites/all/{modules,themes,libraries,drush}\n\nchown -R ${script_user}:users \\\n  ${drupal_root}/{modules,themes,libraries,includes,misc,profiles,core}\n\nchown ${script_user}:users \\\n  ${drupal_root}/sites/all/drush/drushrc.php \\\n  ${drupal_root}/sites \\\n  ${drupal_root}/sites/* \\\n  ${drupal_root}/sites/sites.php \\\n  
${drupal_root}/sites/all\n\n### known exceptions\nchown -R ${script_user}:www-data \\\n  ${drupal_root}/sites/all/libraries/tcpdf/cache &> /dev/null\n\necho \"Done setting proper ownership of platform files and directories.\"\n"
  },
  {
    "path": "aegir/tools/bin/fix-drupal-platform-permissions.sh",
    "content": "#!/bin/bash\n\n# Help menu\nprint_help() {\ncat <<-HELP\nThis script is used to fix the file permissions of a Drupal platform. You need\nto provide the following argument:\n\n  --root: Path to the root of your Drupal installation.\n\nUsage: (sudo) ${0##*/} --root=PATH\nExample: (sudo) ${0##*/} --drupal_path=/var/aegir/platforms/drupal-7.50\nHELP\nexit 0\n}\n\nif [ \"$(id -u)\" != 0 ]; then\n  printf \"Error: You must run this with sudo or root.\\n\"\n  exit 1\nfi\n\ndrupal_root=${1%/}\n\n# Parse Command Line Arguments\nwhile [ \"$#\" -gt 0 ]; do\n  case \"$1\" in\n    --root=*)\n        drupal_root=\"${1#*=}\"\n        ;;\n    --help) print_help;;\n    *)\n      printf \"Error: Invalid argument, run --help for valid arguments.\\n\"\n      exit 1\n  esac\n  shift\ndone\n\nif [ -z \"${drupal_root}\" ] \\\n  || [ ! -d \"${drupal_root}/sites\" ] \\\n  || [ ! -f \"${drupal_root}/core/modules/system/system.module\" ] \\\n  && [ ! -f \"${drupal_root}/modules/system/system.module\" ]; then\n    printf \"Error: Please provide a valid Drupal root directory.\\n\"\n    exit 1\nfi\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n\n### Fix permissions only once daily, unless it's Drupal 8 or newer\nif [ -e \"${drupal_root}/sites/all/libraries/permissions-fixed-${_TODAY}.pid\" ]; then\n  if [ -e \"${drupal_root}/core/themes/olivero\" ] \\\n    || [ ! 
-e \"${drupal_root}/core/themes/classy\" ]; then\n    _drupal_eight_nine_ten=TRUE\n  else\n    exit 0\n  fi\nfi\n\ncd ${drupal_root}\n\nprintf \"Setting main permissions inside \"${drupal_root}\"...\\n\"\nmkdir -p ${drupal_root}/sites/all/{modules,themes,libraries,drush}\n\n### Create ctrl pid\nrm -f ${drupal_root}/sites/all/libraries/permissions-fixed*.pid\ntouch ${drupal_root}/sites/all/libraries/permissions-fixed-${_TODAY}.pid\n\nprintf \"Setting permissions of all codebase directories inside \"${drupal_root}\"...\\n\"\nfind ${drupal_root}/{modules,themes,libraries,includes,misc,profiles,core} -type d -exec chmod 02775 {} \\;\n\nprintf \"Setting permissions of all codebase files inside \"${drupal_root}\"...\\n\"\nfind ${drupal_root}/{modules,themes,libraries,includes,misc,profiles,core} -type f -exec chmod 0664 {} \\;\n\nif [ -e \"${drupal_root}/core/modules/workspaces_ui\" ]; then\n  printf \"Removing all .drush.inc files inside codebase \"${drupal_root}\"...\\n\"\n  find ${drupal_root}/modules/contrib -type f -name \"*.drush.inc\" -exec rm -f {} \\;\n  find ${drupal_root}/sites/*/modules -type f -name \"*.drush.inc\" -exec rm -f {} \\;\nfi\n\nif [ -e \"${drupal_root}/vendor\" ]; then\n  printf \"Setting permissions of all codebase directories inside \"${drupal_root}/vendor\"...\\n\"\n  find ${drupal_root}/vendor -type d -exec chmod 02775 {} \\;\n  printf \"Setting permissions of all codebase files inside \"${drupal_root}/vendor\"...\\n\"\n  find ${drupal_root}/vendor -type f -exec chmod 0664 {} \\;\nelif [ -e \"${drupal_root}/../vendor\" ]; then\n  printf \"Setting permissions of all codebase directories inside \"${drupal_root}/../vendor\"...\\n\"\n  find ${drupal_root}/../vendor -type d -exec chmod 02775 {} \\;\n  printf \"Setting permissions of all codebase files inside \"${drupal_root}/../vendor\"...\\n\"\n  find ${drupal_root}/../vendor -type f -exec chmod 0664 {} \\;\nfi\n\nif [ -e \"${drupal_root}/vendor/bin/drush\" ]; then\n  mv -f 
${drupal_root}/vendor/bin/drush ${drupal_root}/vendor/bin/.off-drush\nelif [ -e \"${drupal_root}/../vendor/bin/drush\" ]; then\n  mv -f ${drupal_root}/../vendor/bin/drush ${drupal_root}/../vendor/bin/.off-drush\nfi\n\nif [ -e \"${drupal_root}/vendor/drush/drush/drush\" ]; then\n  mv -f ${drupal_root}/vendor/drush/drush/drush ${drupal_root}/vendor/drush/drush/.off-drush\nelif [ -e \"${drupal_root}/../vendor/drush/drush/drush\" ]; then\n  mv -f ${drupal_root}/../vendor/drush/drush/drush ${drupal_root}/../vendor/drush/drush/.off-drush\nfi\n\nif [ -e \"${drupal_root}/vendor/drush/drush/drush.php\" ]; then\n  chmod 0775 ${drupal_root}/vendor/drush/drush/drush.php\nelif [ -e \"${drupal_root}/../vendor/drush/drush/drush.php\" ]; then\n  chmod 0775 ${drupal_root}/../vendor/drush/drush/drush.php\nfi\n\n[ -d \"${drupal_root}\" ] && chmod 02775 ${drupal_root}\n\nif [ -d \"${drupal_root}/web\" ]; then\n  chmod 02775 ${drupal_root}/web\nelif [ -d \"${drupal_root}/docroot\" ]; then\n  chmod 02775 ${drupal_root}/docroot\nelif [ -d \"${drupal_root}/html\" ]; then\n  chmod 02775 ${drupal_root}/html\nfi\n\nprintf \"Setting permissions of all codebase directories inside \"${drupal_root}/sites/all\"...\\n\"\nfind ${drupal_root}/sites/all/{modules,themes,libraries} -type d -exec chmod 02775 {} \\;\n\nprintf \"Setting permissions of all codebase files inside \"${drupal_root}/sites/all\"...\\n\"\nfind ${drupal_root}/sites/all/{modules,themes,libraries} -type f -exec chmod 0664 {} \\;\n\nchmod 0644 ${drupal_root}/*.php\nchmod 0664 ${drupal_root}/autoload.php\nchmod 0751 ${drupal_root}/sites\nchmod 0755 ${drupal_root}/sites/*\nchmod 0644 ${drupal_root}/sites/*.php\nchmod 0644 ${drupal_root}/sites/*.txt\nchmod 0644 ${drupal_root}/sites/*.yml\nchmod 0755 ${drupal_root}/sites/all/drush\n\n### Lock Local Drush and Symfony Console Input/Style\nif [ -e \"${drupal_root}/core\" ]; then\n  if [ -e \"${drupal_root}/vendor\" ]; then\n    printf \"Locking Drush and Symfony Console Input in 
\"${drupal_root}/vendor\"...\\n\"\n    chmod 0400 ${drupal_root}/vendor/drush\n    chmod 0400 ${drupal_root}/vendor/symfony/console/Input\n    chmod 0400 ${drupal_root}/vendor/symfony/console/Style\n  elif [ -e \"${drupal_root}/../vendor\" ]; then\n    printf \"Locking Drush and Symfony Console Input in \"${drupal_root}/../vendor\"...\\n\"\n    chmod 0400 ${drupal_root}/../vendor/drush\n    chmod 0400 ${drupal_root}/../vendor/symfony/console/Input\n    chmod 0400 ${drupal_root}/../vendor/symfony/console/Style\n  fi\nfi\n\n### Known exceptions\nchmod -R 775 ${drupal_root}/sites/all/libraries/tcpdf/cache &> /dev/null\nchmod 0644 ${drupal_root}/.htaccess\n\necho \"Done setting proper permissions on platform files and directories.\"\n"
  },
  {
    "path": "aegir/tools/bin/fix-drupal-site-ownership.sh",
    "content": "#!/bin/bash\n\n# Help menu\nprint_help() {\ncat <<-HELP\nThis script is used to fix the file ownership of a Drupal site. You need to\nprovide the following arguments:\n\n  --site-path: Path to the Drupal site directory.\n  --script-user: Username of the user to whom you want to give file ownership\n                 (defaults to 'aegir').\n  --web-group: Web server group name (defaults to 'www-data').\n\nUsage: (sudo) ${0##*/} --site-path=PATH --script-user=USER --web_group=GROUP\nExample: (sudo) ${0##*/} --site-path=/var/aegir/platforms/drupal-7.50/sites/example.com --script-user=aegir --web-group=www-data\nHELP\nexit 0\n}\n\nif [ \"$(id -u)\" != 0 ]; then\n  printf \"Error: You must run this with sudo or root.\\n\"\n  exit 1\nfi\n\nsite_path=${1%/}\nscript_user=${2:-aegir}\nweb_group=\"${3:-www-data}\"\n\n# Parse Command Line Arguments\nwhile [ \"$#\" -gt 0 ]; do\n  case \"$1\" in\n    --site-path=*)\n        site_path=\"${1#*=}\"\n        ;;\n    --script-user=*)\n        script_user=\"${1#*=}\"\n        ;;\n    --web-group=*)\n        web_group=\"${1#*=}\"\n        ;;\n    --help) print_help;;\n    *)\n      printf \"Error: Invalid argument, run --help for valid arguments.\\n\"\n      exit 1\n  esac\n  shift\ndone\n\nif [ -z \"${site_path}\" ] || [ ! -f \"${site_path}/settings.php\" ]; then\n  printf \"Error: Please provide a valid Drupal site directory.\\n\"\n  exit 1\nfi\n\nif [ -z \"${script_user}\" ] \\\n  || [[ $(id -un \"${script_user}\" 2> /dev/null) != \"${script_user}\" ]]; then\n  printf \"Error: Please provide a valid user.\\n\"\n  exit 1\nfi\n\nif [ -e \"${site_path}/libraries/ownership-fixed.pid\" ]; then\n  rm -f ${site_path}/libraries/ownership-fixed.pid\nfi\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n\nif [ -e \"${site_path}/../sites/default/default.services.yml\" ]; then\n  if [ ! 
-e \"${site_path}/modules/default.services.yml\" ]; then\n    cp -a ${site_path}/../sites/default/default.services.yml ${site_path}/modules/\n  fi\nfi\nif [ -e \"${site_path}/modules/services.yml\" ] && [ ! -e \"${site_path}/services.yml\" ]; then\n  ln -sfn ${site_path}/modules/services.yml ${site_path}/services.yml\nfi\n\ncd ${site_path}\nprintf \"Setting ownership of key files and directories inside \"${site_path}\" to: user => \"${script_user}\"\\n\"\nif [ ! -e \"${site_path}/libraries\" ]; then\n  mkdir ${site_path}/libraries\nfi\n### directory and settings files - site level\nchown ${script_user}:users ${site_path} &> /dev/null\nchown ${script_user}:www-data \\\n  ${site_path}/{local.settings.php,settings.php,civicrm.settings.php,solr.php} &> /dev/null\n### modules,themes,libraries - site level\nchown -R ${script_user}:users \\\n  ${site_path}/{modules,themes,libraries}/* &> /dev/null\nchown ${script_user}:users \\\n  ${site_path}/drushrc.php \\\n  ${site_path}/modules/*.yml \\\n  ${site_path}/{modules,themes,libraries} &> /dev/null\n\nif [ ! 
-e \"${site_path}/files/ownership-fixed-${_TODAY}.pid\" ]; then\n  ### ctrl pid\n  rm -f ${site_path}/files/ownership-fixed*.pid\n  touch ${site_path}/files/ownership-fixed-${_TODAY}.pid\n  ### files - site level\n  chown -L -R ${script_user}:www-data ${site_path}/files &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files/{tmp,images,pictures,css,js} &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files/{advagg_css,advagg_js,ctools} &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files/{ctools/css,imagecache,locations} &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files/{xmlsitemap,deployment,styles,private} &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files/{civicrm,civicrm/templates_c} &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files/{civicrm/upload,civicrm/persist} &> /dev/null\n  chown ${script_user}:www-data ${site_path}/files/{civicrm/custom,civicrm/dynamic} &> /dev/null\n  ### private - site level\n  chown -L -R ${script_user}:www-data ${site_path}/private &> /dev/null\n  chown ${script_user}:www-data ${site_path}/private &> /dev/null\n  chown ${script_user}:www-data ${site_path}/private/{files,temp} &> /dev/null\n  chown ${script_user}:www-data ${site_path}/private/files/backup_migrate &> /dev/null\n  chown ${script_user}:www-data ${site_path}/private/files/backup_migrate/{manual,scheduled} &> /dev/null\n  chown -L -R ${script_user}:www-data ${site_path}/private/config &> /dev/null\nfi\n\necho \"Done setting proper ownership of site files and directories.\"\n"
  },
  {
    "path": "aegir/tools/bin/fix-drupal-site-permissions.sh",
    "content": "#!/bin/bash\n\n# Help menu\nprint_help() {\ncat <<-HELP\nThis script is used to fix the file permissions of a Drupal site. You need\nto provide the following argument:\n\n  --site-path: Path to the Drupal site's directory.\n\nUsage: (sudo) ${0##*/} --site-path=PATH\nExample: (sudo) ${0##*/} --site-path=/var/aegir/platforms/drupal-7.50/sites/example.com\nHELP\nexit 0\n}\n\nif [ \"$(id -u)\" != 0 ]; then\n  printf \"Error: You must run this with sudo or root.\\n\"\n  exit 1\nfi\n\nsite_path=${1%/}\n\n# Parse Command Line Arguments\nwhile [ \"$#\" -gt 0 ]; do\n  case \"$1\" in\n    --site-path=*)\n        site_path=\"${1#*=}\"\n        ;;\n    --help) print_help;;\n    *)\n      printf \"Error: Invalid argument, run --help for valid arguments.\\n\"\n      exit 1\n  esac\n  shift\ndone\n\nif [ -z \"${site_path}\" ] || [ ! -f \"${site_path}/settings.php\" ]; then\n  printf \"Error: Please provide a valid Drupal site directory.\\n\"\n  exit 1\nfi\n\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n\nif [ -e \"${site_path}/libraries/permissions-fixed.pid\" ]; then\n  rm -f ${site_path}/libraries/permissions-fixed.pid\nfi\ncd ${site_path}\nprintf \"Setting correct permissions on key files and directories inside \"${site_path}\"...\\n\"\n### directory and settings files - site level\nif [ -e \"${site_path}/aegir.services.yml\" ]; then\n  rm -f ${site_path}/aegir.services.yml\nfi\nfind ${site_path}/*.php -type f -exec chmod 0440 {} \\; &> /dev/null\nchmod 0640 ${site_path}/civicrm.settings.php &> /dev/null\n### modules,themes,libraries - site level\nfind ${site_path}/{modules,themes,libraries} -type d -exec \\\n  chmod 02775 {} \\; &> /dev/null\nfind ${site_path}/{modules,themes,libraries} -type f -exec \\\n  chmod 0664 {} \\; &> /dev/null\n\nif [ ! 
-e \"${site_path}/files/permissions-fixed-${_TODAY}.pid\" ]; then\n  ### ctrl pid\n  rm -f ${site_path}/files/permissions-fixed*.pid\n  touch ${site_path}/files/permissions-fixed-${_TODAY}.pid\n  ### files - site level\n  find ${site_path}/files/ -type d -exec chmod 02775 {} \\; &> /dev/null\n  find ${site_path}/files/ -type f -exec chmod 0664 {} \\; &> /dev/null\n  chmod 02775 ${site_path}/files &> /dev/null\n  ### private - site level\n  find ${site_path}/private/ -type d -exec chmod 02775 {} \\; &> /dev/null\n  find ${site_path}/private/ -type f -exec chmod 0664 {} \\; &> /dev/null\n  ### known exceptions\n  chmod 0644 ${site_path}/files/.htaccess\nfi\n\necho \"Done setting proper permissions on site files and directories.\"\n"
  },
  {
    "path": "aegir/tools/bin/fixmounts",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n#\n# Normalize /etc/fstab for \"big\" mounts:\n# - Prefer UUID=\n# - Add noatime,nodiratime\n# - Remove discard and x-systemd.* options\n# - Keep swap/EFI/cdrom entries untouched\n# - Optional: tune2fs -m 1 for ext2/3/4 block devices\n# - Bootstrap dependencies on Debian/Devuan if missing\n#\n# Default mode: DRY RUN (preview only)\n# To apply changes: fixmounts --apply\n#\n# Optional:\n#   --tune2fs        Run tune2fs -m 1 on supported ext* devices\n#   --verbose        Extra logs\n#   --no-install     Do not auto-install dependencies\n#\n\nset -u\n\n_DEBUG_MODE=\"NO\"\n_APPLY=\"NO\"\n_RUN_TUNE2FS=\"NO\"\n_AUTO_INSTALL_DEPS=\"YES\"\n\n_FSTAB=\"/etc/fstab\"\n_BACKUP=\"\"\n_TMP_OUT=\"\"\n_TMP_NEW=\"\"\n\n_is_big_mount_point() {\n  _MP=\"${1}\"\n  case \"${_MP}\" in\n    /) return 0 ;;\n    /mnt|/mnt/*) return 0 ;;\n    /srv|/srv/*) return 0 ;;\n    /data|/data/*) return 0 ;;\n    /opt|/opt/*) return 0 ;;\n    /var|/var/*) return 0 ;;\n    *) return 1 ;;\n  esac\n}\n\n_msg() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"${@}\"\n  fi\n}\n\n_die() {\n  echo \"ERROR: ${*}\" >&2\n  exit 1\n}\n\n_usage() {\n  cat <<'EOF'\nUsage:\n  fixmounts [--apply] [--tune2fs] [--verbose] [--no-install]\n\nOptions:\n  --apply        Actually write /etc/fstab (default is dry-run)\n  --tune2fs      Run tune2fs -m 1 on adjusted ext2/3/4 block devices\n  --verbose      Enable debug logs\n  --no-install   Do not auto-install missing dependencies\n  -h|--help      Show help\nEOF\n}\n\n_cleanup() {\n  [ -n \"${_TMP_OUT}\" ] && [ -e \"${_TMP_OUT}\" ] && rm -f \"${_TMP_OUT}\"\n  [ -n \"${_TMP_NEW}\" ] && [ -e \"${_TMP_NEW}\" ] && rm -f \"${_TMP_NEW}\"\n}\n\ntrap _cleanup EXIT\n\n_detect_pkg_mgr() {\n  if command -v apt-get >/dev/null 2>&1; then\n    echo \"apt\"\n    return 0\n  fi\n  echo \"none\"\n  return 
1\n}\n\n_install_missing_dependencies() {\n  [ \"${_AUTO_INSTALL_DEPS}\" = \"YES\" ] || return 0\n\n  _PKG_MGR=\"$(_detect_pkg_mgr)\"\n  [ \"${_PKG_MGR}\" = \"apt\" ] || {\n    _msg \"No supported package manager detected for auto-install, skipping.\"\n    return 0\n  }\n\n  _NEED_PKGS=\"\"\n\n  # Core utilities used by the script\n  command -v blkid >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} util-linux\"\n  command -v findmnt >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} util-linux\"\n  command -v mount >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} mount\"\n  command -v readlink >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} coreutils\"\n  command -v sed >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} sed\"\n  command -v awk >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} gawk\"\n\n  # Optional for tune2fs mode\n  if [ \"${_RUN_TUNE2FS}\" = \"YES\" ]; then\n    command -v tune2fs >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} e2fsprogs\"\n  fi\n\n  # User-requested helpers\n  command -v growpart >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} cloud-guest-utils\"\n  command -v sgdisk >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} gdisk\"\n  command -v parted >/dev/null 2>&1 || _NEED_PKGS=\"${_NEED_PKGS} parted\"\n\n  _UNIQ_PKGS=\"\"\n  for _P in ${_NEED_PKGS}; do\n    case \" ${_UNIQ_PKGS} \" in\n      *\" ${_P} \"*) ;;\n      *) _UNIQ_PKGS=\"${_UNIQ_PKGS} ${_P}\" ;;\n    esac\n  done\n\n  if [ -z \"${_UNIQ_PKGS# }\" ]; then\n    _msg \"All dependencies already installed.\"\n    return 0\n  fi\n\n  echo \"Installing missing dependencies:${_UNIQ_PKGS}\"\n  export DEBIAN_FRONTEND=noninteractive\n\n  apt-get update || _die \"apt-get update failed\"\n  # shellcheck disable=SC2086\n  apt-get install -y ${_UNIQ_PKGS} || _die \"apt-get install failed\"\n}\n\n_resolve_device_from_spec() {\n  _SPEC=\"${1}\"\n\n  # Already a direct /dev path (includes /dev/disk/by-id/*)\n  if [ -e \"${_SPEC}\" ]; then\n    readlink -f \"${_SPEC}\" 2>/dev/null && return 0\n  fi\n\n  case 
\"${_SPEC}\" in\n    UUID=*)\n      _UUID=\"${_SPEC#UUID=}\"\n      blkid -U \"${_UUID}\" 2>/dev/null && return 0\n      ;;\n    LABEL=*)\n      _LABEL=\"${_SPEC#LABEL=}\"\n      blkid -L \"${_LABEL}\" 2>/dev/null && return 0\n      ;;\n    PARTUUID=*)\n      _PUUID=\"${_SPEC#PARTUUID=}\"\n      _DEV=\"$(blkid -t \"PARTUUID=${_PUUID}\" -o device 2>/dev/null | head -n 1)\"\n      if [ -n \"${_DEV}\" ]; then\n        echo \"${_DEV}\"\n        return 0\n      fi\n      ;;\n  esac\n\n  return 1\n}\n\n# Return UUID=... for a device if possible, else print original spec.\n_to_uuid_spec() {\n  _ORIG_SPEC=\"${1}\"\n  _DEV=\"$(_resolve_device_from_spec \"${_ORIG_SPEC}\")\" || {\n    echo \"${_ORIG_SPEC}\"\n    return 0\n  }\n\n  _UUID=\"$(blkid -s UUID -o value \"${_DEV}\" 2>/dev/null | head -n 1)\"\n  if [ -n \"${_UUID}\" ]; then\n    echo \"UUID=${_UUID}\"\n  else\n    echo \"${_ORIG_SPEC}\"\n  fi\n}\n\n# Normalize option list:\n# - remove discard\n# - remove nodiratime/noatime (we re-add in stable order)\n# - remove x-systemd.*\n# - remove duplicate commas/duplicates\n# - add noatime,nodiratime\n#\n# Keeps other options (errors=remount-ro, defaults, nofail, rw, etc.)\n_normalize_mount_opts() {\n  _IN_OPTS=\"${1}\"\n\n  # Empty safety\n  if [ -z \"${_IN_OPTS}\" ]; then\n    _IN_OPTS=\"defaults\"\n  fi\n\n  # Split on commas and rebuild\n  _OUT=\"\"\n  IFS=',' read -r -a _ARR <<< \"${_IN_OPTS}\"\n\n  for _OPT in \"${_ARR[@]}\"; do\n    # trim spaces\n    _OPT=\"$(echo \"${_OPT}\" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')\"\n    [ -z \"${_OPT}\" ] && continue\n\n    case \"${_OPT}\" in\n      discard|nodiscard) continue ;;\n      noatime|nodiratime|atime|diratime|relatime|strictatime) continue ;;\n      x-systemd.*) continue ;;\n      comment=cloudconfig) continue ;;\n    esac\n\n    # Deduplicate while preserving order\n    case \",${_OUT},\" in\n      *,\"${_OPT}\",*) ;;\n      *)\n        if [ -z \"${_OUT}\" ]; then\n          _OUT=\"${_OPT}\"\n        else\n         
 _OUT=\"${_OUT},${_OPT}\"\n        fi\n        ;;\n    esac\n  done\n\n  # If nothing left, use defaults\n  if [ -z \"${_OUT}\" ]; then\n    _OUT=\"defaults\"\n  fi\n\n  # Enforce noatime,nodiratime (stable order at end)\n  case \",${_OUT},\" in\n    *,noatime,*) ;;\n    *) _OUT=\"${_OUT},noatime\" ;;\n  esac\n\n  case \",${_OUT},\" in\n    *,nodiratime,*) ;;\n    *) _OUT=\"${_OUT},nodiratime\" ;;\n  esac\n\n  echo \"${_OUT}\"\n}\n\n# Decide if entry should be rewritten.\n_should_rewrite_entry() {\n  _SPEC=\"${1}\"\n  _MP=\"${2}\"\n  _FSTYPE=\"${3}\"\n\n  # Skip comments/empty handled elsewhere\n  # Skip swap and obvious non-target fs types\n  case \"${_FSTYPE}\" in\n    swap|vfat|udf|iso9660|tmpfs|devtmpfs|proc|sysfs|cgroup*|overlay|squashfs|nfs|nfs4|cifs|fuse*|zfs)\n      return 1\n      ;;\n  esac\n\n  # Only local Linux fs types we care about here\n  case \"${_FSTYPE}\" in\n    ext2|ext3|ext4|xfs) ;;\n    *) return 1 ;;\n  esac\n\n  # Mountpoint must be \"big\"\n  if ! _is_big_mount_point \"${_MP}\"; then\n    return 1\n  fi\n\n  return 0\n}\n\n# Track adjusted ext* devices for optional tune2fs\n_TUNE_DEVS=\"\"\n\n_add_tune_device_if_supported() {\n  _SPEC=\"${1}\"\n  _FSTYPE=\"${2}\"\n\n  case \"${_FSTYPE}\" in\n    ext2|ext3|ext4) ;;\n    *) return 0 ;;\n  esac\n\n  _DEV=\"$(_resolve_device_from_spec \"${_SPEC}\")\" || return 0\n\n  # Must be a real block device path for tune2fs\n  if [ ! 
-b \"${_DEV}\" ]; then\n    return 0\n  fi\n\n  case \" ${_TUNE_DEVS} \" in\n    *\" ${_DEV} \"*) ;;\n    *) _TUNE_DEVS=\"${_TUNE_DEVS} ${_DEV}\" ;;\n  esac\n}\n\n_run_tune2fs_min_reserved() {\n  for _DEV in ${_TUNE_DEVS}; do\n    echo \"tune2fs target: ${_DEV}\"\n\n    # Read current reserved block percentage (best effort)\n    _CUR_PCT=\"$(tune2fs -l \"${_DEV}\" 2>/dev/null | awk -F: '/Reserved block count/ {print $2}' | head -n1 | tr -d '[:space:]')\"\n    if [ \"${_APPLY}\" = \"YES\" ]; then\n      tune2fs -m 1 \"${_DEV}\" || echo \"WARN: tune2fs failed on ${_DEV}\" >&2\n    else\n      echo \"DRY-RUN: tune2fs -m 1 ${_DEV}\"\n    fi\n  done\n}\n\n# Parse args\nwhile [ \"${#}\" -gt 0 ]; do\n  case \"${1}\" in\n    --apply) _APPLY=\"YES\" ;;\n    --tune2fs) _RUN_TUNE2FS=\"YES\" ;;\n    --verbose) _DEBUG_MODE=\"YES\" ;;\n    --no-install) _AUTO_INSTALL_DEPS=\"NO\" ;;\n    -h|--help) _usage; exit 0 ;;\n    *) _die \"Unknown option: ${1}\" ;;\n  esac\n  shift\ndone\n\n[ \"$(id -u)\" -eq 0 ] || _die \"Please run as root.\"\n\n_install_missing_dependencies\n\n[ -r \"${_FSTAB}\" ] || _die \"Cannot read ${_FSTAB}\"\n\n_TMP_OUT=\"$(mktemp /tmp/fstab.adjust.XXXXXX)\"\n_TMP_NEW=\"$(mktemp /tmp/fstab.preview.XXXXXX)\"\n\necho \"Processing ${_FSTAB} ...\"\necho \"Mode: $( [ \"${_APPLY}\" = \"YES\" ] && echo APPLY || echo DRY-RUN )\"\necho\n\nwhile IFS= read -r _LINE || [ -n \"${_LINE}\" ]; do\n  # Preserve comments and blank lines verbatim\n  case \"${_LINE}\" in\n    \"\"|\\#*)\n      echo \"${_LINE}\" >> \"${_TMP_OUT}\"\n      continue\n      ;;\n  esac\n\n  # Parse first 6 fields only; fstab normally uses whitespace separators\n  # shellcheck disable=SC2086\n  set -- ${_LINE}\n\n  if [ \"${#}\" -lt 4 ]; then\n    # Unusual line -> preserve unchanged\n    echo \"${_LINE}\" >> \"${_TMP_OUT}\"\n    continue\n  fi\n\n  _SPEC=\"${1}\"\n  _MP=\"${2}\"\n  _FSTYPE=\"${3}\"\n  _OPTS=\"${4}\"\n  _DUMP=\"${5:-0}\"\n  _PASS=\"${6:-0}\"\n\n  if _should_rewrite_entry \"${_SPEC}\" 
\"${_MP}\" \"${_FSTYPE}\"; then\n    _NEW_SPEC=\"$(_to_uuid_spec \"${_SPEC}\")\"\n    _NEW_OPTS=\"$(_normalize_mount_opts \"${_OPTS}\")\"\n    _NEW_LINE=\"${_NEW_SPEC} ${_MP} ${_FSTYPE} ${_NEW_OPTS} ${_DUMP} ${_PASS}\"\n\n    if [ \"${_LINE}\" != \"${_NEW_LINE}\" ]; then\n      echo \"- ${_LINE}\" >> \"${_TMP_NEW}\"\n      echo \"+ ${_NEW_LINE}\" >> \"${_TMP_NEW}\"\n      echo >> \"${_TMP_NEW}\"\n    fi\n\n    echo \"${_NEW_LINE}\" >> \"${_TMP_OUT}\"\n\n    if [ \"${_RUN_TUNE2FS}\" = \"YES\" ]; then\n      _add_tune_device_if_supported \"${_NEW_SPEC}\" \"${_FSTYPE}\"\n    fi\n  else\n    echo \"${_LINE}\" >> \"${_TMP_OUT}\"\n  fi\ndone < \"${_FSTAB}\"\n\nif [ ! -s \"${_TMP_NEW}\" ]; then\n  echo \"No changes needed.\"\nelse\n  echo \"Planned changes:\"\n  cat \"${_TMP_NEW}\"\nfi\n\necho\necho \"Validation check (syntax only):\"\nif mount -fav -T \"${_TMP_OUT}\" >/dev/null 2>&1; then\n  echo \"OK: mount -fav -T preview file passed\"\nelse\n  echo \"WARN: mount -fav -T preview file reported an issue\"\n  echo \"Preview file kept at: ${_TMP_OUT}\"\n  exit 1\nfi\n\nif [ \"${_APPLY}\" = \"YES\" ]; then\n  _BACKUP=\"/etc/fstab.bak.$(date +%Y%m%d-%H%M%S)\"\n  cp -a \"${_FSTAB}\" \"${_BACKUP}\" || _die \"Backup failed\"\n  cp -f \"${_TMP_OUT}\" \"${_FSTAB}\" || _die \"Write failed\"\n\n  echo\n  echo \"Updated ${_FSTAB}\"\n  echo \"Backup saved as ${_BACKUP}\"\n\n  # Optional tune2fs\n  if [ \"${_RUN_TUNE2FS}\" = \"YES\" ] && [ -n \"${_TUNE_DEVS}\" ]; then\n    echo\n    echo \"Applying tune2fs -m 1 on ext* devices used by adjusted mounts...\"\n    _run_tune2fs_min_reserved\n  fi\n\n  echo\n  echo \"Next steps (recommended):\"\n  echo \"  1) mount -fav\"\n  echo \"  2) mount -o remount /\"\n  echo \"  3) findmnt -n -o SOURCE,FSTYPE,OPTIONS /\"\n  echo \"  4) reboot during maintenance window (especially for /)\"\nelse\n  echo\n  echo \"Dry-run only. 
To apply:\"\n  echo \"  $(basename \"$0\") --apply\"\n  if [ \"${_RUN_TUNE2FS}\" = \"YES\" ]; then\n    echo \"  (and it would also run tune2fs -m 1 on adjusted ext* devices)\"\n  fi\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/fixrepo",
    "content": "#!/bin/bash\n\n#\n# Purpose:\n#   Fix group writable permissions on an entire codebase (app root) and its Git metadata,\n#   while preserving safe execute bits and ensuring setgid on directories for group inheritance.\n#\n# Usage:\n#   fixrepo /path/to/codebase\n#\n# Example:\n#   fixrepo /data/disk/o1/static/live/drupal_v11_2026\n#\n# Notes:\n#   - Designed for shared workflows (e.g. o1 + o1.ftp in the same group).\n#   - Applies to the whole codebase and explicitly to .git when present.\n#   - Sets git core.sharedRepository=group (best effort).\n#   - Uses g+rwX (capital X) so regular non-executable files do not become executable.\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_DEBUG_MODE=\"${_DEBUG_MODE:-NO}\"\n_STRICT_MODE=\"${_STRICT_MODE:-YES}\"\n\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n_msg() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    printf '%s\\n' \"$*\"\n  fi\n}\n\n_die() {\n  printf 'ERROR: %s\\n' \"$*\" >&2\n  exit 1\n}\n\n_usage() {\n  cat <<'EOF'\nUsage:\n  fixrepo /absolute/or/relative/path/to/codebase\n\nDescription:\n  Fixes group-writable permissions recursively on the codebase root, ensures setgid on\n  all directories, and (if present) explicitly fixes .git internals and sets:\n    git config core.sharedRepository group\n\nEnvironment (optional):\n  _DEBUG_MODE=YES   Enable verbose progress messages\n  _STRICT_MODE=NO   Continue on non-fatal chmod/find/git config errors (default: YES)\nEOF\n}\n\n_require_cmd() {\n  _CMD=\"${1}\"\n  command -v \"${_CMD}\" >/dev/null 2>&1 || _die \"Required command not found: ${_CMD}\"\n}\n\n_run_cmd() {\n  # Wrapper to optionally continue on non-fatal errors.\n  # Usage: _run_cmd \"description\" command arg1 arg2 ...\n  _DESC=\"${1}\"\n  shift\n\n  _msg \"==> ${_DESC}\"\n  
\"$@\"\n  _RC=$?\n\n  if [ \"${_RC}\" -ne 0 ]; then\n    if [ \"${_STRICT_MODE}\" = \"YES\" ]; then\n      _die \"${_DESC} failed (exit code: ${_RC})\"\n    else\n      printf 'WARNING: %s failed (exit code: %s)\\n' \"${_DESC}\" \"${_RC}\" >&2\n    fi\n  fi\n\n  return 0\n}\n\n_fix_permissions_tree() {\n  _TARGET_DIR=\"${1}\"\n\n  [ -n \"${_TARGET_DIR}\" ] || return 1\n  [ -d \"${_TARGET_DIR}\" ] || return 1\n\n  _msg \"Processing tree: ${_TARGET_DIR}\"\n\n  # Add group read/write to all files and dirs; add execute only to dirs and already executable files.\n  _run_cmd \"chmod -R g+rwX on ${_TARGET_DIR}\" chmod -R g+rwX \"${_TARGET_DIR}\"\n\n  # Ensure all directories are group-traversable, writable, and setgid so new files inherit group.\n  _run_cmd \"setgid on directories in ${_TARGET_DIR}\" \\\n    find \"${_TARGET_DIR}\" -type d -exec chmod g+rws {} +\n\n  return 0\n}\n\n_set_git_shared_mode() {\n  _REPO_ROOT=\"${1}\"\n\n  [ -n \"${_REPO_ROOT}\" ] || return 1\n  [ -d \"${_REPO_ROOT}\" ] || return 1\n  [ -d \"${_REPO_ROOT}/.git\" ] || return 0\n\n  if command -v git >/dev/null 2>&1; then\n    _run_cmd \"git core.sharedRepository=group in ${_REPO_ROOT}\" \\\n      git -C \"${_REPO_ROOT}\" config core.sharedRepository group\n  else\n    printf 'WARNING: git not found; skipped core.sharedRepository setting for %s\\n' \"${_REPO_ROOT}\" >&2\n  fi\n\n  return 0\n}\n\n_show_summary() {\n  _REPO_ROOT=\"${1}\"\n\n  printf '\\nDone.\\n'\n  printf 'Codebase fixed: %s\\n' \"${_REPO_ROOT}\"\n\n  if [ -d \"${_REPO_ROOT}/.git\" ]; then\n    printf 'Git metadata fixed: %s/.git\\n' \"${_REPO_ROOT}\"\n    if command -v git >/dev/null 2>&1; then\n      _GIT_SHARED=\"$(git -C \"${_REPO_ROOT}\" config --get core.sharedRepository 2>/dev/null)\"\n      if [ -n \"${_GIT_SHARED}\" ]; then\n        printf 'git core.sharedRepository: %s\\n' \"${_GIT_SHARED}\"\n      fi\n    fi\n  else\n    printf 'No .git directory detected (skipped Git-specific steps).\\n'\n  fi\n\n  cat 
<<'EOF'\n\nRecommended for shared write operations (shell/session):\n  umask 002\n\nThis helps new files remain group-writable together with setgid directories.\nEOF\n}\n\n_main() {\n  if [ \"${#}\" -ne 1 ]; then\n    _usage\n    exit 1\n  fi\n\n  _REPO_ROOT=\"${1}\"\n\n  [ \"${_REPO_ROOT}\" = \"-h\" ] && { _usage; exit 0; }\n  [ \"${_REPO_ROOT}\" = \"--help\" ] && { _usage; exit 0; }\n\n  _require_cmd chmod\n  _require_cmd find\n\n  [ -e \"${_REPO_ROOT}\" ] || _die \"Path does not exist: ${_REPO_ROOT}\"\n  [ -d \"${_REPO_ROOT}\" ] || _die \"Path is not a directory: ${_REPO_ROOT}\"\n\n  # Resolve to absolute path if possible (best effort, no hard dependency on readlink -f).\n  if command -v readlink >/dev/null 2>&1; then\n    _RESOLVED=\"$(readlink -f \"${_REPO_ROOT}\" 2>/dev/null)\"\n    if [ -n \"${_RESOLVED}\" ] && [ -d \"${_RESOLVED}\" ]; then\n      _REPO_ROOT=\"${_RESOLVED}\"\n    fi\n  fi\n\n  _msg \"Target codebase root: ${_REPO_ROOT}\"\n  _msg \"STRICT_MODE=${_STRICT_MODE}\"\n  _msg \"DEBUG_MODE=${_DEBUG_MODE}\"\n\n  # Fix whole codebase tree first.\n  _fix_permissions_tree \"${_REPO_ROOT}\" || _die \"Failed to fix codebase permissions\"\n\n  # Explicitly fix .git tree too (clarity + robustness in shared workflows).\n  if [ -d \"${_REPO_ROOT}/.git\" ]; then\n    _set_git_shared_mode \"${_REPO_ROOT}\"\n    _fix_permissions_tree \"${_REPO_ROOT}/.git\" || _die \"Failed to fix .git permissions\"\n  fi\n\n  _show_summary \"${_REPO_ROOT}\"\n}\n\n_main \"$@\"\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/killer",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Exit if the pause tasks maintenance config file exists\n[ -e \"/root/.pause_tasks_maint.cnf\" ] && exit 0\n\n_maxEtime=\"300\"\n_cmndsList=\"apt-get apt\"\n\n# Function to kill long-running commands\n_licence_to_kill() {\n  if [[ \"${1}\" =~ \"apt-get\" ]]; then\n    _killCmnd=\"apt-get update\"\n    _maxEtime=\"99\"\n  elif [[ \"${1}\" =~ \"apt\" ]]; then\n    _killCmnd=\"apt update\"\n    _maxEtime=\"99\"\n  fi\n  _aptTms=$(ps -eo uid,pid,etimes,cmd | grep -v \"grep\" | grep \"${_killCmnd}\" | egrep ' ([0-9]+-)?([0-9]{1}:?){3}' | awk '{print $3}')\n  _aptPid=$(ps -eo uid,pid,etimes,cmd | grep -v \"grep\" | grep \"${_killCmnd}\" | egrep ' ([0-9]+-)?([0-9]{1}:?){3}' | awk '{print $2}')\n  if [ ! -z \"${_aptTms}\" ] && [ \"${_aptTms}\" -gt \"${_maxEtime}\" ]; then\n    echo \"REASON _aptTms for ${_killCmnd} was ${_aptTms} on $(date)\" >> /root/.proc.forced.kill.exceptions.log\n    kill -9 ${_aptPid}\n  fi\n}\n\n# Remove auto-update file if it exists\n[ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n\n# Loop through the commands list\nfor _frozenCmnd in ${_cmndsList}; do\n  _licence_to_kill ${_frozenCmnd} # Fixed: pass the correct variable\ndone\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/loadguard",
    "content": "#!/bin/bash\n\n###\n### CPU-scaled load tiers + action hooks\n###\n\n# Environment setup\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n# Sanitize numeric variables (allow digits and decimal point)\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n# Paths\n_pthOml=\"/var/log/boa/high.load.incident.log\"\n\n# Load _RATIO defaults + sanitize\n_CPU_CRIT_RATIO=\"$(_sanitize_number \"${_CPU_CRIT_RATIO}\")\"\n_CPU_MAX_RATIO=\"$(_sanitize_number \"${_CPU_MAX_RATIO}\")\"\n_CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n_CPU_SPIDER_RATIO=\"$(_sanitize_number \"${_CPU_SPIDER_RATIO}\")\"\n\n# ===== Config (ratios per CPU) =====\n: \"${_CPU_CRIT_RATIO:=6.1}\"    # CRIT: pause web + kill long procs + block spiders\n: \"${_CPU_MAX_RATIO:=4.1}\"     # MAX:  pause web + block spiders\n: \"${_CPU_TASK_RATIO:=3.1}\"    # TASK: skip backend tasks (but web OK)\n: \"${_CPU_SPIDER_RATIO:=2.1}\"  # SPIDER: allow web; block spiders only\n\n# Which loads to use for decisions (1m is reactive, 5m guards sustained)\n: \"${_USE_1M:=YES}\"\n: \"${_USE_5M:=YES}\"\n\n# Optional iowait relaxation (don’t overreact to D-state spikes)\n: \"${_IOWAIT_RELAX_THRESHOLD:=10.0}\"   # percent\n: \"${_IOWAIT_RELAX_FACTOR:=1.15}\"\n\n# Hysteresis to avoid flapping (resume/unblock below this fraction of threshold)\n: \"${_RESUME_FRACTION:=0.80}\"\n\n# State paths (adjust if you want)\n: \"${_LOAD_GUARD_STATE:=/run/load_guard.state}\"\n: \"${_SPIDER_BLOCK_FLAG:=/run/spiders.blocked}\"\n\n# Debug\n: \"${_DEBUG_MODE:=NO}\"\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  
_SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n###\n### Function to send incident email report\n###\n_incident_email_report() {\n  _check_uptime_grace_period >/dev/null || return 1\n  local _subject=\"${1:-(no subject)}\"\n  local _lvl=\"${2:-INFO}\"\n  _lvl=\"${_lvl^^}\"\n  [ -n \"${_MY_EMAIL}\" ] || return 1\n  # Decide if we should send\n  case \"${_INCIDENT_REPORT}\" in\n    OFF)  return 1 ;;                            # always veto\n    CRIT) [ \"${_lvl}\" = \"ALERT\" ] || return 1 ;; # veto unless ALERT\n    ALL|MINI) : ;;         
                       # allow\n  esac\n  _hName=\"$(tr -d '\\n' < /etc/hostname 2>/dev/null)\"\n  [ -n \"${_hName}\" ] || _hName=\"$(hostname -f 2>/dev/null)\"\n  echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n  s-nail -s \"Incident Report on ${_hName}: ${_subject}\" \"${_MY_EMAIL}\" < ${_pthOml}\n}\n\n###\n### Fire-and-forget launcher, cron-safe and interactive-safe\n###\n_spawn_detached() {\n  _cmd=\"$1\"\n  if command -v nohup >/dev/null 2>&1; then\n    nohup bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  elif command -v setsid >/dev/null 2>&1; then\n    setsid bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  else\n    ( bash -c \"${_cmd}\" >/dev/null 2>&1 ) &\n  fi\n  # If interactive shell, drop it from the job table to mimic cron behavior\n  if [[ \"$-\" == *i* ]]; then disown; fi\n}\n\n###\n### Function to control processes\n###\n_proc_control() {\n  echo \"Running process control...\"\n  renice \"${_B_NICE}\" -p $$ &> /dev/null\n  if [ -e \"/var/xdrago/proc_num_ctrl.pl\" ]; then\n    _spawn_detached 'perl /var/xdrago/proc_num_ctrl.pl'\n  fi\n  touch /var/log/boa/proc_num_ctrl.done.pid\n  echo \"Process control done.\"\n}\n\n###\n### Function to terminate long-running processes\n###\n_terminate_processes() {\n  local _current_load=\"$1\"\n  local _threshold=\"$2\"\n  local _load_period=\"$3\"\n  if [ ! 
-e \"/run/boa_wait.pid\" ]; then\n    killall -9 php drush.php wget curl &> /dev/null\n    local _log_message\n    _log_message=\"$(date) System Load ${_current_load}% (${_load_period}) - PHP/Wget/cURL terminated\"\n    echo \"${_log_message}\" >> ${_pthOml}\n    local _subject=\"Processes Terminated - ${_load_period} Load ${_current_load}% exceeded Critical Load Threshold ${_threshold}%\"\n    _incident_email_report \"${_subject}\" \"ALERT\"\n    echo >> ${_pthOml}\n    echo \"Action Taken: Long-running processes terminated due to critical load.\"\n  fi\n}\n\n# ===== Internals =====\n_CPU_NR=1 _L1=0 _L5=0 _L15=0 _IOWAIT_PCT=0\n\n_count_cpu() {\n  _CPU_NR=\"$( (command -v nproc >/dev/null 2>&1 && nproc --all) || grep -cE '^processor' /proc/cpuinfo 2>/dev/null || echo 1 )\"\n  _CPU_NR=\"${_CPU_NR//[^0-9]/}\"\n  [ -z \"${_CPU_NR}\" ] && _CPU_NR=1\n  [ \"${_CPU_NR}\" -lt 1 ] && _CPU_NR=1\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then echo \"DBG: CPUs=${_CPU_NR}\"; fi\n}\n\n_get_loads() {\n  read -r _L1 _L5 _L15 _rest < /proc/loadavg\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then echo \"DBG: load 1/5/15 = ${_L1}/${_L5}/${_L15}\"; fi\n}\n\n_get_iowait_pct() {\n  if ! 
head -n1 /proc/stat >/dev/null 2>&1; then _IOWAIT_PCT=\"0\"; return; fi\n  read -r _ _u1 _n1 _s1 _i1 _w1 _irq1 _sirq1 _st1 _g1 _gn1 < /proc/stat\n  sleep 0.2\n  read -r _ _u2 _n2 _s2 _i2 _w2 _irq2 _sirq2 _st2 _g2 _gn2 < /proc/stat\n  _TOTAL=$(( (_u2-_u1)+(_n2-_n1)+(_s2-_s1)+(_i2-_i1)+(_w2-_w1)+(_irq2-_irq1)+(_sirq2-_sirq1)+(_st2-_st1) ))\n  [ \"${_TOTAL}\" -le 0 ] && { _IOWAIT_PCT=\"0\"; return; }\n  _IOWAIT_PCT=\"$(awk -v w=\"$((_w2-_w1))\" -v t=\"${_TOTAL}\" 'BEGIN{printf \"%.1f\",(w/t)*100}')\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then echo \"DBG: iowait%≈${_IOWAIT_PCT}\"; fi\n}\n\n# Scale a ratio (float) by CPUs → absolute load ceiling\n# $1 = ratio\n_ratio_limit() { awk -v c=\"${_CPU_NR}\" -v r=\"$1\" 'BEGIN{printf \"%.3f\", c*r}'; }\n\n# Float compare helpers (returns shell 0 when TRUE)\n# $1 op $2  (op in >, >=, <, <=)\n_fcmp() { awk -v a=\"$1\" -v b=\"$3\" \"BEGIN{exit ! (a $2 b)}\"; }\n\n# ===== Decision engine =====\n# Sets: _LOAD_TIER to one of CRIT|MAX|TASK|SPIDER|OK\n# Also sets absolute thresholds used (for logs): _TH_CRIT,_TH_MAX,_TH_TASK,_TH_SPIDER\n_load_eval() {\n  _count_cpu\n  _get_loads\n  _get_iowait_pct\n\n  _TH_CRIT=\"$(_ratio_limit \"${_CPU_CRIT_RATIO}\")\"\n  _TH_MAX=\"$(_ratio_limit \"${_CPU_MAX_RATIO}\")\"\n  _TH_TASK=\"$(_ratio_limit \"${_CPU_TASK_RATIO}\")\"\n  _TH_SPIDER=\"$(_ratio_limit \"${_CPU_SPIDER_RATIO}\")\"\n\n  # Optional relaxation on iowait\n  awk -v i=\"${_IOWAIT_PCT}\" -v th=\"${_IOWAIT_RELAX_THRESHOLD}\" 'BEGIN{exit (i>=th)?0:1}'\n  if [ $? 
-eq 0 ]; then\n    _TH_CRIT=\"$(awk -v v=\"${_TH_CRIT}\" -v f=\"${_IOWAIT_RELAX_FACTOR}\" 'BEGIN{printf \"%.3f\",v*f}')\"\n    _TH_MAX=\"$(awk -v v=\"${_TH_MAX}\" -v f=\"${_IOWAIT_RELAX_FACTOR}\" 'BEGIN{printf \"%.3f\",v*f}')\"\n    _TH_TASK=\"$(awk -v v=\"${_TH_TASK}\" -v f=\"${_IOWAIT_RELAX_FACTOR}\" 'BEGIN{printf \"%.3f\",v*f}')\"\n    _TH_SPIDER=\"$(awk -v v=\"${_TH_SPIDER}\" -v f=\"${_IOWAIT_RELAX_FACTOR}\" 'BEGIN{printf \"%.3f\",v*f}')\"\n    [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DBG: thresholds relaxed due to iowait≥${_IOWAIT_RELAX_THRESHOLD}%\"\n  fi\n\n  # Pick governing load(s)\n  _TRIP=0\n  if [ \"${_USE_1M}\" = \"YES\" ]; then _G1=\"${_L1}\"; else _G1=\"0\"; fi\n  if [ \"${_USE_5M}\" = \"YES\" ]; then _G5=\"${_L5}\"; else _G5=\"0\"; fi\n  # decide tier by descending severity; any governing load breaching sets the tier\n  if _fcmp \"${_G1}\" \">\" \"${_TH_CRIT}\" || _fcmp \"${_G5}\" \">\" \"${_TH_CRIT}\"; then\n    _LOAD_TIER=\"CRIT\"\n  elif _fcmp \"${_G1}\" \">\" \"${_TH_MAX}\" || _fcmp \"${_G5}\" \">\" \"${_TH_MAX}\"; then\n    _LOAD_TIER=\"MAX\"\n  elif _fcmp \"${_G1}\" \">\" \"${_TH_TASK}\" || _fcmp \"${_G5}\" \">\" \"${_TH_TASK}\"; then\n    _LOAD_TIER=\"TASK\"\n  elif _fcmp \"${_G1}\" \">\" \"${_TH_SPIDER}\" || _fcmp \"${_G5}\" \">\" \"${_TH_SPIDER}\"; then\n    _LOAD_TIER=\"SPIDER\"\n  else\n    _LOAD_TIER=\"OK\"\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"DBG: thresholds SPIDER=${_TH_SPIDER} TASK=${_TH_TASK} MAX=${_TH_MAX} CRIT=${_TH_CRIT}\"\n    echo \"DBG: tier=${_LOAD_TIER} (1m=${_L1}, 5m=${_L5}, CPUs=${_CPU_NR})\"\n  fi\n}\n\n# ===== Action hooks (customize to your stack) =====\n# These default to NOOPs except for simple examples; replace with your own logic.\n\n# Web Services to pause or enable\n_WEB_SERVICES=(\n  \"nginx\"\n  \"php85-fpm\"\n  \"php84-fpm\"\n  \"php83-fpm\"\n  \"php82-fpm\"\n  \"php81-fpm\"\n  \"php80-fpm\"\n  \"php74-fpm\"\n  \"php73-fpm\"\n  \"php72-fpm\"\n  \"php71-fpm\"\n  \"php70-fpm\"\n  
\"php56-fpm\"\n)\n\n_action_pause_web() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then echo \"DBG: Pausing web services\"; fi\n  for _svc in \"${_WEB_SERVICES[@]}\"; do\n    if [ -x \"/etc/init.d/${_svc}\" ]; then /etc/init.d/\"${_svc}\" stop >/dev/null 2>&1 || true; fi\n  done\n  echo \"WEB SERVICES PAUSED\" > \"${_LOAD_GUARD_STATE}\"\n}\n\n_action_resume_web_if_safe() {\n  # resume only when below hysteresis of MAX threshold\n  _resume_th=\"$(awk -v v=\"${_TH_MAX}\" -v f=\"${_RESUME_FRACTION}\" 'BEGIN{printf \"%.3f\", v*f}')\"\n  if _fcmp \"${_L1}\" \"<\" \"${_resume_th}\" && _fcmp \"${_L5}\" \"<\" \"${_resume_th}\"; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then echo \"DBG: Resuming web services\"; fi\n    for _svc in \"${_WEB_SERVICES[@]}\"; do\n      if [ -x \"/etc/init.d/${_svc}\" ]; then /etc/init.d/\"${_svc}\" start >/dev/null 2>&1 || true; fi\n    done\n    echo \"WEB SERVICES RUNNING\" > \"${_LOAD_GUARD_STATE}\"\n  else\n    [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DBG: Keeping web paused (above resume threshold)\"\n  fi\n}\n\n# Kill long-running user processes (configurable)\n# Configure: users to target and min runtime in seconds\n: \"${_KILL_PROCS:=php,drush.php,wget,curl}\" # comma-separated\n: \"${_KILL_MIN_RUNTIME:=900}\"               # 15 minutes\n\n_action_kill_long_user_procs() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"DBG: Killing procs runtime>=${_KILL_MIN_RUNTIME}s for procs=${_KILL_PROCS}\"\n  fi\n  IFS=',' read -r -a _PROCS_ARR <<< \"${_KILL_PROCS}\"\n  for _process in \"${_PROCS_ARR[@]}\"; do\n    ps -eo etimes,user,pid,comm --no-headers \\\n      | awk -v p=\"${_process}\" -v min=\"${_KILL_MIN_RUNTIME}\" '$1>=min && $4==p {print $3}' \\\n      | while read -r _pid; do\n          kill -TERM \"${_pid}\" 2>/dev/null || true\n          sleep 2\n          kill -KILL \"${_pid}\" 2>/dev/null || true\n        done\n  done\n}\n\n_action_block_spiders() {\n  # Example: drop a flag your Nginx/CF logic understands\n  [ ! 
-e \"${_SPIDER_BLOCK_FLAG}\" ] && touch \"${_SPIDER_BLOCK_FLAG}\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then echo \"DBG: Spiders blocked (flag on)\"; fi\n}\n\n_action_unblock_spiders_if_safe() {\n  _resume_th=\"$(awk -v v=\"${_TH_SPIDER}\" -v f=\"${_RESUME_FRACTION}\" 'BEGIN{printf \"%.3f\", v*f}')\"\n  if _fcmp \"${_L1}\" \"<\" \"${_resume_th}\" && _fcmp \"${_L5}\" \"<\" \"${_resume_th}\"; then\n    [ -e \"${_SPIDER_BLOCK_FLAG}\" ] && rm -f \"${_SPIDER_BLOCK_FLAG}\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then echo \"DBG: Spiders unblocked (flag off)\"; fi\n  else\n    [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DBG: Keeping spiders blocked\"\n  fi\n}\n\n# ===== Public helpers for other scripts =====\n_ok_to_run_proc_control() {\n  _load_eval\n  # Auto-healing allowed only in tiers OK and SPIDER\n  if [ \"${_LOAD_TIER}\" = \"OK\" ] || [ \"${_LOAD_TIER}\" = \"SPIDER\" ]; then\n    return 0\n  fi\n  return 1\n}\n\n_ok_to_run_backend_tasks() {\n  _load_eval\n  # Backend tasks allowed only in tiers OK and SPIDER\n  if [ \"${_LOAD_TIER}\" = \"OK\" ] || [ \"${_LOAD_TIER}\" = \"SPIDER\" ]; then\n    return 0\n  fi\n  return 1\n}\n\n_ok_to_allow_spiders() {\n  _load_eval\n  # Spiders allowed only in OK\n  [ \"${_LOAD_TIER}\" = \"OK\" ] && return 0 || return 1\n}\n\n# ===== Controller (optional cron/loop entrypoint) =====\n# Call this periodically from cron to enforce policy system-wide.\n_enforce_load_policy() {\n  _load_eval\n\n  case \"${_LOAD_TIER}\" in\n    CRIT)\n      _action_block_spiders\n      _action_pause_web\n      [ ! 
-e \"/run/boa_wait.pid\" ] && _action_kill_long_user_procs\n      ;;\n    MAX)\n      _action_block_spiders\n      _action_pause_web\n      ;;\n    TASK)\n      _action_block_spiders\n      _action_resume_web_if_safe   # might already be running; hysteresis protects\n      ;;\n    SPIDER)\n      _action_block_spiders\n      _action_resume_web_if_safe\n      ;;\n    OK)\n      _action_unblock_spiders_if_safe\n      _action_resume_web_if_safe\n      ;;\n  esac\n\n  # Optional: export a simple status line for logs/monitoring\n  echo \"LOADGUARD tier=${_LOAD_TIER} l1=${_L1} l5=${_L5} cpus=${_CPU_NR}\"\n}\n\n# ===== Usage examples =====\n# System-wide guard via cron (every minute)\n#   * * * * * root /bin/bash /opt/local/bin/loadguard enforce >/var/log/boa/load_guard.log 2>&1\n#\n# Gate Ægir backend runner\n# Decide whether to run _proc_control\nif _ok_to_run_backend_tasks; then\n  nohup ionice -c2 -n7 nice -n5 bash /var/xdrago/runner.sh > /dev/null 2>&1 &\n  _proc_control\nelse\n  [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DBG: Skipping backend tasks due to load\"\nfi\n\n# Simple CLI\ncase \"$1\" in\n  enforce) _enforce_load_policy ;;\n  ok-tasks) _ok_to_run_backend_tasks ;;\n  ok-spiders) _ok_to_allow_spiders ;;\n  *)\n    echo \"Usage: $0 {enforce|ok-tasks|ok-spiders}\"\n    exit 2\n    ;;\nesac\n"
  },
  {
    "path": "aegir/tools/bin/lock-local-drush-permissions.sh",
    "content": "#!/bin/bash\n\n# Help menu\nprint_help() {\ncat <<-HELP\nThis script is used to lock permissions on local Drush. You need\nto provide the following argument:\n\n  --root: Path to the root of your Drupal installation.\n  --mode: Action mode lock/unlock (defaults to 'lock')\n\nUsage: (sudo) ${0##*/} --root=PATH --mode=MODE\nExample: (sudo) ${0##*/} --drupal_path=/var/aegir/platforms/drupal-10.1\nHELP\nexit 0\n}\n\nif [ \"$(id -u)\" != 0 ]; then\n  printf \"Error: You must run this with sudo or root.\\n\"\n  exit 1\nfi\n\ndrupal_root=${1%/}\nlock_mode=${2:-lock}\n\n# Parse Command Line Arguments\nwhile [ \"$#\" -gt 0 ]; do\n  case \"$1\" in\n    --root=*)\n        drupal_root=\"${1#*=}\"\n        ;;\n    --mode=*)\n        mode=\"${1#*=}\"\n        ;;\n    --help) print_help;;\n    *)\n      printf \"Error: Invalid argument, run --help for valid arguments.\\n\"\n      exit 1\n  esac\n  shift\ndone\n\nif [ -z \"${drupal_root}\" ] \\\n  || [ ! -d \"${drupal_root}/sites\" ] \\\n  || [ ! -f \"${drupal_root}/core/modules/system/system.module\" ] \\\n  && [ ! 
-f \"${drupal_root}/modules/system/system.module\" ]; then\n    printf \"Error: Please provide a valid Drupal root directory.\\n\"\n    exit 1\nfi\n\ncd ${drupal_root}\n\nif [ -e \"${drupal_root}/core\" ]; then\n  if [ -e \"${drupal_root}/vendor\" ]; then\n    if [ \"$mode\" = \"unlock\" ]; then\n      printf \"Unlocking Drush and Symfony Console Input in \"${drupal_root}/vendor\"...\\n\"\n      chmod 0775 ${drupal_root}/vendor/drush\n      chmod 0775 ${drupal_root}/vendor/symfony/console/Input\n      chmod 0775 ${drupal_root}/vendor/symfony/console/Style\n    else\n      printf \"Locking Drush and Symfony Console Input in \"${drupal_root}/vendor\"...\\n\"\n      chmod 0400 ${drupal_root}/vendor/drush\n      chmod 0400 ${drupal_root}/vendor/symfony/console/Input\n      chmod 0400 ${drupal_root}/vendor/symfony/console/Style\n    fi\n  elif [ -e \"${drupal_root}/../vendor\" ]; then\n    if [ \"$mode\" = \"unlock\" ]; then\n      printf \"Unlocking Drush and Symfony Console Input in \"${drupal_root}/../vendor\"...\\n\"\n      chmod 0775 ${drupal_root}/../vendor/drush\n      chmod 0775 ${drupal_root}/../vendor/symfony/console/Input\n      chmod 0775 ${drupal_root}/../vendor/symfony/console/Style\n    else\n      printf \"Locking Drush and Symfony Console Input in \"${drupal_root}/../vendor\"...\\n\"\n      chmod 0400 ${drupal_root}/../vendor/drush\n      chmod 0400 ${drupal_root}/../vendor/symfony/console/Input\n      chmod 0400 ${drupal_root}/../vendor/symfony/console/Style\n    fi\n  fi\n  if [ \"$mode\" = \"unlock\" ]; then\n    echo \"Done Unlocking Drush and Symfony Console Input.\"\n  else\n    echo \"Done Locking Drush and Symfony Console Input.\"\n  fi\nfi\n\n\n"
  },
  {
    "path": "aegir/tools/bin/lock.inc",
    "content": "#!/bin/bash\n\n#\n# lockinc — reusable single-instance lock helpers\n# Provides: _single_instance_lock [lockfile] [fd]\n#           _single_instance_unlock <fd> <lockfile>\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n###\n### Idempotent include guard\n###\n[ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && return 0\n_SINGLE_INSTANCE_LIB_VER=\"1.0\"\n\n###\n### Atomic unlock to prevent TOCTOU race\n###\n_single_instance_unlock() {\n  _FD=\"$1\"; _PATH=\"$2\"\n  if command -v flock >/dev/null 2>&1; then\n    flock -u \"${_FD}\" 2>/dev/null || true\n    eval \"exec ${_FD}>&-\"\n    rm -f \"${_PATH}\" 2>/dev/null || true\n  else\n    rm -rf \"${_PATH}\" 2>/dev/null || true\n  fi\n}\n\n###\n### Atomic lock to prevent TOCTOU race\n###\n_single_instance_lock() {\n  # Ensure not too many instances are running\n  # usage: _single_instance_lock [lockfile_path] [fd]\n  # default lock: /run/<script>.lock (falls back to /run or /tmp)\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  _LOCK_FD=\"${2:-9}\"\n  if [ -n \"${1:-}\" ]; then\n    _LOCK_PATH=\"$1\"\n  else\n    _DIR=\"/run\"; [ -w \"$_DIR\" ] || _DIR=\"/run\"; [ -w \"$_DIR\" ] || _DIR=\"/tmp\"\n    _LOCK_PATH=\"${_DIR}/${_SELF_NAME%.sh}.lock\"\n  fi\n\n  if command -v flock >/dev/null 2>&1; then\n    eval \"exec ${_LOCK_FD}>\\\"${_LOCK_PATH}\\\"\"\n    if ! flock -n \"${_LOCK_FD}\"; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    printf '%s\\n' \"$$\" 1>&\"${_LOCK_FD}\" 2>/dev/null || true   # optional: PID note\n    trap \"_single_instance_unlock ${_LOCK_FD} '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  else\n    # mkdir is atomic; directory presence == lock held\n    if ! 
mkdir \"${_LOCK_PATH}\" 2>/dev/null; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    echo \"$$\" > \"${_LOCK_PATH}/pid\" 2>/dev/null || true\n    trap \"rm -rf '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  fi\n}\n\n###\n### Prevent a flood of alerts on services up/down status if uptime < 15 minutes\n###\n### Usage: in any _send_email function add at the top this line\n###        to exit/disable that function but continue the script:\n###\n###        if ! _check_uptime_grace_period >/dev/null; then return 1; fi\n###\n_check_uptime_grace_period() {\n  _UPTIME_MIN=$(awk '{print int($1/60)}' /proc/uptime 2>/dev/null)\n  if [ \"${_UPTIME_MIN}\" -lt 15 ] \\\n    || [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    || [ ! -x \"/usr/sbin/csf\" ] \\\n    || [ ! -e \"/run/lfd.pid\" ] \\\n    || [ -e \"/run/octopus_install_run.pid\" ] \\\n    || [ -e \"/run/boa_run.pid\" ] \\\n    || [ -e \"/run/boa_wait.pid\" ]; then\n    export _INCIDENT_REPORT=OFF\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"Skipping alerts: system uptime ${_UPTIME_MIN} min < 15 min threshold.\"\n    fi\n    return 1\n  fi\n  return 0\n}\n"
  },
  {
    "path": "aegir/tools/bin/memorytuner",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Ensure the script is run with root privileges\nif [ \"$EUID\" -ne 0 ]; then\n  echo \"Please run as root.\"\n  exit 1\nfi\n\n# Function to register failures\n_clean_pid_exit() {\n  if [ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.memorytuner.exit.exceptions.log\n  fi\n  exit 1\n}\n\n# Function to check if MySQL server is running\n_check_sql_running() {\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep mysqld 2>&1)\n    echo \"INFO: Waiting for MySQLD availability...\"\n    sleep 5\n  done\n}\n_check_sql_running\n\n# Function to check if MySQL server access credentials for root are working\n_check_sql_access() {\n  if [ -e \"/root/.my.pass.txt\" ] && [ -e \"/root/.my.cnf\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_SYNC_SQL_PSWD=$(grep \"password=${_SQL_PSWD}\" /root/.my.cnf 2>&1)\n  else\n    echo \"ALERT: /root/.my.cnf or /root/.my.pass.txt not found.\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_a\n  fi\n  if [ -z \"${_IS_SYNC_SQL_PSWD}\" ]; then\n    echo \"ALERT: SQL password is out of sync between\"\n    echo \"ALERT: /root/.my.cnf and /root/.my.pass.txt\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_b\n  else\n    _IS_MYSQLD_RUNNING=$(pgrep mysqld 2>&1)\n    if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n      echo \"ALERT: SQL server on this system is not running at all.\"\n      echo \"ALERT: Please fix this before trying again, giving up.\"\n      echo \"Bye\"\n      echo \" \"\n      _clean_pid_exit _check_sql_access_c\n    else\n      
_MYSQL_CONN_TEST=$(mysql -u root -e \"status\" 2>&1)\n      if [ -z \"${_MYSQL_CONN_TEST}\" ] \\\n        || [[ \"${_MYSQL_CONN_TEST}\" =~ \"Access denied\" ]]; then\n        echo \"ALERT: SQL password in /root/.my.cnf does not work.\"\n        echo \"ALERT: Please fix this before trying again, giving up.\"\n        echo \"Bye\"\n        echo \" \"\n        _clean_pid_exit _check_sql_access_d\n      fi\n    fi\n  fi\n}\n_check_sql_access\n\n# Function to get total system memory in MB\n_get_total_mem_mb() {\n  local _mem_kb\n  _mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')\n  echo $((_mem_kb / 1024))\n}\n\n# Function to get memory usage of a process or group of processes in MB\n_get_mem_usage_mb() {\n  local _process_name=$1\n  local _mem_usage_kb\n  _mem_usage_kb=$(ps -eo rss,args | grep -w \"${_process_name}\" | grep -v grep | awk '{sum+=$1} END {print sum}')\n  _mem_usage_kb=${_mem_usage_kb:-0}\n  echo \"scale=2; ${_mem_usage_kb} / 1024\" | bc\n}\n\n# Function to get service uptime in seconds\n_get_service_uptime() {\n  local service_name=$1\n  local pidfile=$2\n\n  if [ ! -f \"$pidfile\" ]; then\n    echo 0  # Service is not running\n    return\n  fi\n\n  # Get the PID from the PID file\n  local pid\n  pid=$(cat \"$pidfile\")\n\n  # Check if the process is running\n  if ! 
ps -p \"$pid\" > /dev/null 2>&1; then\n    echo 0  # Process is not running\n    return\n  fi\n\n  # Get the current time and the PID file modification time in seconds since epoch\n  local current_time\n  local file_time\n  current_time=$(date +%s)\n  file_time=$(stat -c %Y \"$pidfile\")\n\n  # Calculate the uptime in seconds\n  local uptime\n  uptime=$((current_time - file_time))\n\n  echo \"$uptime\"\n}\n\n# Function to get MySQL server uptime in seconds\n_get_mysql_uptime() {\n  _uptime=$(mysql -Nse \"SHOW GLOBAL STATUS LIKE 'Uptime';\" | awk '{print $2}')\n  echo \"${_uptime}\"\n}\n\n# Function to get MySQL version\n_get_mysql_version() {\n  mysql -V | awk '{print $5}' | tr -d ','\n}\n\n# Function to get number of databases\n_get_number_of_databases() {\n  mysql -Nse \"SELECT COUNT(*) FROM information_schema.schemata;\"\n}\n\n# Function to parse MySQLTuner-perl recommendations with progress indicator\n_parse_mysqltuner_recommendations() {\n  if [[ \"${_MYSQL_VERSION}\" =~ ^8\\. ]]; then\n    # Use mysqltuner8 for Percona 8.x\n    if ! command -v mysqltuner8 &> /dev/null; then\n      echo \"mysqltuner8 is not installed. Please install it before running this script.\"\n      exit 1\n    fi\n    echo \"Running mysqltuner8...\"\n    _tempfile=$(mktemp)\n    mysqltuner8 --nogood --nocolor --buffers --silent > \"${_tempfile}\" &\n    _mysqltuner_pid=$!\n  elif [[ \"${_MYSQL_VERSION}\" =~ ^5\\.7 ]]; then\n    # Use mysqltuner5 for Percona 5.7\n    if ! command -v mysqltuner5 &> /dev/null; then\n      echo \"mysqltuner5 is not installed. 
Please install it before running this script.\"\n      exit 1\n    fi\n    echo \"Running mysqltuner5...\"\n    _tempfile=$(mktemp)\n    mysqltuner5 --nogood --nocolor --buffers --silent > \"${_tempfile}\" &\n    _mysqltuner_pid=$!\n  else\n    echo \"Unsupported MySQL version: ${_MYSQL_VERSION}\"\n    exit 1\n  fi\n\n  # Show progress while mysqltuner is running\n  echo -n \"Processing\"\n  while kill -0 ${_mysqltuner_pid} 2>/dev/null; do\n    echo -n \".\"\n    sleep 5\n  done\n  echo \" Done.\"\n\n  _mysqltuner_output=$(cat \"${_tempfile}\")\n  rm \"${_tempfile}\"\n\n  # Parse key recommendations\n  _REC_INNODB_BUFFER_POOL_SIZE_MB=$(echo \"${_mysqltuner_output}\" | grep -i \"InnoDB Buffer Pool\" | grep -o '[0-9]\\+M' | tr -d 'M')\n  _REC_KEY_BUFFER_SIZE_MB=$(echo \"${_mysqltuner_output}\" | grep -i \"Key buffer size\" | grep -o '[0-9]\\+M' | tr -d 'M')\n  _REC_TMP_TABLE_SIZE_MB=$(echo \"${_mysqltuner_output}\" | grep -i \"Temporary tables\" | grep -o '[0-9]\\+M' | tr -d 'M')\n  _REC_READ_RND_BUFFER_SIZE_MB=$(echo \"${_mysqltuner_output}\" | grep -i \"read_rnd_buffer_size\" | grep -o '[0-9]\\+K' | tr -d 'K')\n  _REC_JOIN_BUFFER_SIZE_MB=$(echo \"${_mysqltuner_output}\" | grep -i \"join_buffer_size\" | grep -o '[0-9]\\+K' | tr -d 'K')\n\n  # Convert K to M for read_rnd_buffer_size and join_buffer_size\n  _REC_READ_RND_BUFFER_SIZE_MB=$((_REC_READ_RND_BUFFER_SIZE_MB / 1024))\n  _REC_JOIN_BUFFER_SIZE_MB=$((_REC_JOIN_BUFFER_SIZE_MB / 1024))\n\n  # Parse innodb_log_file_size or innodb_redo_log_capacity\n  if [[ \"${_MYSQL_VERSION}\" =~ ^8\\. 
]]; then\n    _REC_INNODB_LOG_SIZE_MB=$(echo \"${_mysqltuner_output}\" | \\\n      grep -i \"innodb_redo_log_capacity should be\" | \\\n      grep -o '[0-9]\\+M' | tr -d 'M')\n  else\n    _REC_INNODB_LOG_SIZE_MB=$(echo \"${_mysqltuner_output}\" | \\\n      grep -i \"innodb_log_file_size should be\" | \\\n      grep -o '[0-9]\\+M' | tr -d 'M')\n  fi\n\n  # If recommendations are not found, set to default values\n  : ${_REC_INNODB_BUFFER_POOL_SIZE_MB:=0}\n  : ${_REC_KEY_BUFFER_SIZE_MB:=0}\n  : ${_REC_TMP_TABLE_SIZE_MB:=0}\n  : ${_REC_READ_RND_BUFFER_SIZE_MB:=0}\n  : ${_REC_JOIN_BUFFER_SIZE_MB:=0}\n  : ${_REC_INNODB_LOG_SIZE_MB:=0}\n}\n\n# Function to get current MySQL settings from /etc/mysql/my.cnf\n_get_current_mysql_settings() {\n  _CUR_INNODB_BUFFER_POOL_SIZE=$(grep -i \"^innodb_buffer_pool_size\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d 'M ')\n  _CUR_KEY_BUFFER_SIZE=$(grep -i \"^key_buffer_size\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d 'M ')\n  _CUR_TMP_TABLE_SIZE=$(grep -i \"^tmp_table_size\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d 'M ')\n  _CUR_MAX_CONS=$(grep -i \"^max_connections\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d ' ')\n  _CUR_READ_RND_BUFFER_SIZE=$(grep -i \"^read_rnd_buffer_size\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d 'M ')\n  _CUR_JOIN_BUFFER_SIZE=$(grep -i \"^join_buffer_size\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d 'M ')\n\n  if [[ \"${_MYSQL_VERSION}\" =~ ^8\\. 
]]; then\n    _CUR_INNODB_LOG_SIZE=$(grep -i \"^innodb_redo_log_capacity\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d 'M ')\n  else\n    _CUR_INNODB_LOG_SIZE=$(grep -i \"^innodb_log_file_size\" /etc/mysql/my.cnf | awk -F'=' '{print $2}' | tr -d 'M ')\n  fi\n}\n\n### Main script starts here\n\n# Get total system memory\n_TOTAL_MEM_MB=$(_get_total_mem_mb)\n\n# Define services and their PID files\ndeclare -A _services_pidfiles=(\n  [\"nginx\"]=\"/run/nginx.pid\"\n  [\"php74-fpm\"]=\"/run/php74-fpm.pid\"\n  [\"php81-fpm\"]=\"/run/php81-fpm.pid\"\n  [\"php82-fpm\"]=\"/run/php82-fpm.pid\"\n  [\"php83-fpm\"]=\"/run/php83-fpm.pid\"\n  [\"php84-fpm\"]=\"/run/php84-fpm.pid\"\n  [\"php85-fpm\"]=\"/run/php85-fpm.pid\"\n  [\"redis\"]=\"/run/redis/redis.pid\"\n  [\"java\"]=\"/var/solr9/solr-9099.pid\"\n)\n\n# Define estimated memory usage for services (in MB)\ndeclare -A _service_estimated_mem_mb=(\n  [\"nginx\"]=100\n  [\"php74-fpm\"]=200\n  [\"php81-fpm\"]=200\n  [\"php82-fpm\"]=200\n  [\"php83-fpm\"]=200\n  [\"php84-fpm\"]=200\n  [\"php85-fpm\"]=200\n  [\"redis\"]=200\n  [\"java\"]=100\n)\n\n# Ensure _TOTAL_MEM_MB is set by default to 75% of available RAM\n_MAX_MEM_MB=\"${_TOTAL_MEM_MB}\"\n_TOTAL_MEM_MB=$(echo \"(${_TOTAL_MEM_MB} * 75) / 100\" | bc)\n\n# Threshold for uptime (e.g., 1 hour = 3600 seconds)\n_UPTIME_THRESHOLD=3600\n\n# Initialize total memory usage of other services\n_TOTAL_OTHER_SERVICES_MEM_MB=500\n\n# Array to store debugging information\ndeclare -A _service_mem_usage_mb\n\n# Loop through the services\nfor _service in \"${!_services_pidfiles[@]}\"; do\n  _pidfile=\"${_services_pidfiles[${_service}]}\"\n  _uptime=$(_get_service_uptime \"${_service}\" \"${_pidfile}\")\n\n  if [ \"${_uptime}\" -ge \"${_UPTIME_THRESHOLD}\" ]; then\n    # Use actual memory usage\n    _mem_usage_mb=$(_get_mem_usage_mb \"${_service}\")\n  else\n    # Use estimated memory usage\n    _mem_usage_mb=${_service_estimated_mem_mb[${_service}]}\n    echo \"Service ${_service} has 
short uptime (${_uptime} seconds). Using estimated memory usage: ${_mem_usage_mb} MB\"\n  fi\n\n  # Ensure _mem_usage_mb is a valid number\n  if ! [[ \"${_mem_usage_mb}\" =~ ^[0-9]+(\\.[0-9]+)?$ ]]; then\n    _mem_usage_mb=0\n  fi\n\n  # Accumulate total memory usage\n  _TOTAL_OTHER_SERVICES_MEM_MB=${_TOTAL_OTHER_SERVICES_MEM_MB:-0}\n  _TOTAL_OTHER_SERVICES_MEM_MB=$(echo \"${_TOTAL_OTHER_SERVICES_MEM_MB} + ${_mem_usage_mb}\" | bc)\n  # Store for debugging\n  _service_mem_usage_mb[\"${_service}\"]=${_mem_usage_mb}\ndone\n\n# Get current memory usage of MySQL server\n_MYSQL_MEM_MB=$(_get_mem_usage_mb 'mysqld')\n\n# Ensure variables are not empty and are valid numbers\nfor _service in \"${!_service_mem_usage_mb[@]}\"; do\n  _mem_usage=${_service_mem_usage_mb[${_service}]}\n  if ! [[ \"${_mem_usage}\" =~ ^[0-9]+(\\.[0-9]+)?$ ]]; then\n    _mem_usage=0\n  fi\n  _service_mem_usage_mb[${_service}]=${_mem_usage}\ndone\n\n# Debugging statements to verify memory usage\nfor _service in \"${!_service_mem_usage_mb[@]}\"; do\n  echo \"${_service} Memory Usage: ${_service_mem_usage_mb[${_service}]} MB\"\ndone\n\n# Get current MySQL usage statistics\n_MYSQL_VERSION=$(_get_mysql_version)\n_NUMBER_OF_DATABASES=$(_get_number_of_databases)\n\n# Ensure _TOTAL_OTHER_SERVICES_MEM_MB is valid\nif ! 
[[ \"${_TOTAL_OTHER_SERVICES_MEM_MB}\" =~ ^[0-9]+(\\.[0-9]+)?$ ]]; then\n  _TOTAL_OTHER_SERVICES_MEM_MB=0\nfi\n\n# Calculate max memory for MySQL\n_MAX_MEM_FOR_MYSQL_MB=$(echo \"${_TOTAL_MEM_MB} - ${_TOTAL_OTHER_SERVICES_MEM_MB}\" | bc)\n\n# Ensure _AVA_MEM_FOR_MYSQL_MB is set by default to 75% of _MAX_MEM_FOR_MYSQL_MB\n_AVA_MEM_FOR_MYSQL_MB=$(echo \"(${_MAX_MEM_FOR_MYSQL_MB} * 75) / 100\" | bc)\n\necho \"Max raw RAM available: ${_MAX_MEM_MB} MB\"\necho \"Memory safely available for all services: ${_TOTAL_MEM_MB} MB\"\necho \"Memory actually used by other services now: ${_TOTAL_OTHER_SERVICES_MEM_MB} MB\"\necho \"Memory actually used by MySQL now: ${_MYSQL_MEM_MB} MB\"\necho \"Memory available for MySQL theoretically: ${_AVA_MEM_FOR_MYSQL_MB} MB\"\necho \"Max RAM available for MySQL: ${_MAX_MEM_FOR_MYSQL_MB} MB\"\n\n# Safety check: Ensure there's enough memory for MySQL\n_MIN_MYSQL_MEM_MB=1024  # Minimum memory required for MySQL in MB\nif (( $(echo \"${_AVA_MEM_FOR_MYSQL_MB} < ${_MIN_MYSQL_MEM_MB}\" | bc -l) )); then\n  echo \"Error: Not enough memory available for MySQL after accounting for other services.\"\n  echo \"Available memory for MySQL: ${_AVA_MEM_FOR_MYSQL_MB} MB\"\n  echo \"Consider upgrading your system's RAM.\"\n  exit 1\nfi\n\n# Get MySQL uptime\n_MYSQL_UPTIME_SECONDS=$(_get_mysql_uptime)\n\n# Run MySQLTuner-perl only if uptime >= 1 hour (3600 seconds)\nif [ \"${_NUMBER_OF_DATABASES}\" -lt 300 ] && [ \"${_MYSQL_UPTIME_SECONDS}\" -ge 3600 ]; then\n  echo \"Running MySQLTuner-perl to gather recommendations...\"\n  _parse_mysqltuner_recommendations\nelse\n  echo \"Skipping MySQLTuner-perl.\"\n  if [ \"${_NUMBER_OF_DATABASES}\" -ge 300 ]; then\n    echo \"Reason: Number of databases is ${_NUMBER_OF_DATABASES} (>=300).\"\n  fi\n  if [ \"${_MYSQL_UPTIME_SECONDS}\" -lt 3600 ]; then\n    echo \"Reason: MySQL uptime is less than 1 hour.\"\n  fi\n  _REC_INNODB_BUFFER_POOL_SIZE_MB=0\n  _REC_KEY_BUFFER_SIZE_MB=0\n  _REC_TMP_TABLE_SIZE_MB=0\n  
_REC_READ_RND_BUFFER_SIZE_MB=0\n  _REC_JOIN_BUFFER_SIZE_MB=0\n  _REC_INNODB_LOG_SIZE_MB=0\nfi\n\n# Ensure _AVA_MEM_FOR_MYSQL_MB is valid\nif ! [[ \"${_AVA_MEM_FOR_MYSQL_MB}\" =~ ^[0-9]+(\\.[0-9]+)?$ ]]; then\n  _AVA_MEM_FOR_MYSQL_MB=0\nfi\n\n# Calculate recommended InnoDB buffer pool size\nif [ \"${_REC_INNODB_BUFFER_POOL_SIZE_MB}\" -gt 0 ]; then\n  echo \"Using MySQLTuner recommended InnoDB buffer pool size: ${_REC_INNODB_BUFFER_POOL_SIZE_MB} MB\"\nelse\n  # Default to 75% of available MySQL memory\n  _REC_INNODB_BUFFER_POOL_SIZE_MB=$(echo \"(${_AVA_MEM_FOR_MYSQL_MB} * 75) / 100\" | bc)\n  echo \"Calculated InnoDB buffer pool size: ${_REC_INNODB_BUFFER_POOL_SIZE_MB} MB\"\nfi\n\n# Ensure InnoDB buffer pool size does not exceed available memory\nif (( $(echo \"${_REC_INNODB_BUFFER_POOL_SIZE_MB} > ${_AVA_MEM_FOR_MYSQL_MB}\" | bc -l) )); then\n  _REC_INNODB_BUFFER_POOL_SIZE_MB=${_AVA_MEM_FOR_MYSQL_MB}\nfi\n\n# Calculate recommended key_buffer_size\nif [ \"${_REC_KEY_BUFFER_SIZE_MB}\" -gt 0 ]; then\n  echo \"Using MySQLTuner recommended key_buffer_size: ${_REC_KEY_BUFFER_SIZE_MB} MB\"\nelse\n  # Default to a minimal value if MyISAM is not used much\n  _REC_KEY_BUFFER_SIZE_MB=8\n  echo \"Calculated key_buffer_size: ${_REC_KEY_BUFFER_SIZE_MB} MB\"\nfi\n\n# Calculate tmp_table_size and max_heap_table_size\nif [ \"${_REC_TMP_TABLE_SIZE_MB}\" -gt 0 ]; then\n  echo \"Using MySQLTuner recommended tmp_table_size: ${_REC_TMP_TABLE_SIZE_MB} MB\"\nelse\n  _EST_MEM_PER_TMP_TABLE_MB=64  # Estimated size per temp table in MB\n  _REC_TMP_TABLE_SIZE_MB=$(echo \"${_AVA_MEM_FOR_MYSQL_MB} / 4\" | bc)\n  if (( $(echo \"${_REC_TMP_TABLE_SIZE_MB} > ${_EST_MEM_PER_TMP_TABLE_MB}\" | bc -l) )); then\n    _REC_TMP_TABLE_SIZE_MB=${_EST_MEM_PER_TMP_TABLE_MB}\n  fi\n  echo \"Calculated tmp_table_size: ${_REC_TMP_TABLE_SIZE_MB} MB\"\nfi\n\n# Calculate read_rnd_buffer_size\nif [ \"${_REC_READ_RND_BUFFER_SIZE_MB}\" -gt 0 ]; then\n  echo \"Using MySQLTuner recommended read_rnd_buffer_size: 
${_REC_READ_RND_BUFFER_SIZE_MB} MB\"\nelse\n  # Default value\n  _REC_READ_RND_BUFFER_SIZE_MB=4  # 4MB is a reasonable default\n  echo \"Calculated read_rnd_buffer_size: ${_REC_READ_RND_BUFFER_SIZE_MB} MB\"\nfi\n\n# Calculate join_buffer_size\nif [ \"${_REC_JOIN_BUFFER_SIZE_MB}\" -gt 0 ]; then\n  echo \"Using MySQLTuner recommended join_buffer_size: ${_REC_JOIN_BUFFER_SIZE_MB} MB\"\nelse\n  # Default value\n  _REC_JOIN_BUFFER_SIZE_MB=8  # 8MB is a reasonable default\n  echo \"Calculated join_buffer_size: ${_REC_JOIN_BUFFER_SIZE_MB} MB\"\nfi\n\n# Estimate average memory per connection, including per-connection buffers\n_EST_MEM_PER_CON_MB=$(echo \"${_REC_READ_RND_BUFFER_SIZE_MB} + ${_REC_JOIN_BUFFER_SIZE_MB} + 4\" | bc)  # 4MB for other per-connection buffers\n\n# Ensure _EST_MEM_PER_CON_MB is valid\nif ! [[ \"${_EST_MEM_PER_CON_MB}\" =~ ^[0-9]+(\\.[0-9]+)?$ ]]; then\n  _EST_MEM_PER_CON_MB=16  # Default value\nfi\n\n# Calculate max_connections based on available memory\n_MAX_POS_CONS=$(echo \"${_AVA_MEM_FOR_MYSQL_MB} / ${_EST_MEM_PER_CON_MB}\" | bc)\n\n# Ensure _MAX_POS_CONS is valid\nif ! [[ \"${_MAX_POS_CONS}\" =~ ^[0-9]+$ ]]; then\n  _MAX_POS_CONS=25\nfi\n\n# Ensure max_connections is not set unreasonably high\nif [ \"${_MAX_POS_CONS}\" -gt 500 ]; then\n  _REC_MAX_CONS=500\nelif [ \"${_MAX_POS_CONS}\" -lt 25 ]; then\n  _REC_MAX_CONS=25\nelse\n  _REC_MAX_CONS=${_MAX_POS_CONS}\nfi\necho \"Calculated max_connections: ${_REC_MAX_CONS}\"\n\n# Calculate innodb_log_file_size or innodb_redo_log_capacity\nif [[ \"${_MYSQL_VERSION}\" =~ ^8\\. 
]]; then\n  if [ \"${_REC_INNODB_LOG_SIZE_MB}\" -gt 0 ]; then\n    echo \"Using MySQLTuner recommended innodb_redo_log_capacity: ${_REC_INNODB_LOG_SIZE_MB} MB\"\n  else\n    # Default calculation\n    _REC_INNODB_LOG_SIZE_MB=$(echo \"(${_REC_INNODB_BUFFER_POOL_SIZE_MB} * 20) / 100\" | bc)\n    echo \"Calculated innodb_redo_log_capacity: ${_REC_INNODB_LOG_SIZE_MB} MB\"\n  fi\nelse\n  if [ \"${_REC_INNODB_LOG_SIZE_MB}\" -gt 0 ]; then\n    echo \"Using MySQLTuner recommended innodb_log_file_size: ${_REC_INNODB_LOG_SIZE_MB} MB\"\n  else\n    # Default calculation\n    _REC_INNODB_LOG_SIZE_MB=$(echo \"(${_REC_INNODB_BUFFER_POOL_SIZE_MB} * 25) / 100\" | bc)\n    echo \"Calculated innodb_log_file_size: ${_REC_INNODB_LOG_SIZE_MB} MB\"\n  fi\nfi\n\n# Ensure the log size does not exceed available memory\nif (( $(echo \"${_REC_INNODB_LOG_SIZE_MB} > ${_AVA_MEM_FOR_MYSQL_MB}\" | bc -l) )); then\n  _REC_INNODB_LOG_SIZE_MB=${_AVA_MEM_FOR_MYSQL_MB}\nfi\n\n# Get current MySQL settings\n_get_current_mysql_settings\n\n# Initialize a variable to track if a restart is needed\n_NEEDS_RESTART=0\n\n# Function to update MySQL configuration settings\n_update_setting() {\n  _setting_name=$1\n  _new_value=$2\n  _current_value=$3\n  _requires_restart=$4\n  _is_memory_size=$5  # Whether the setting is a memory size (in MB)\n\n  # Ensure _new_value is valid\n  if [ -z \"${_new_value}\" ] || ! [[ \"${_new_value}\" =~ ^[0-9]+(\\.[0-9]+)?$ ]]; then\n    echo \"Warning: Invalid value for ${_setting_name}. 
Skipping update.\"\n    return\n  fi\n\n  if [ \"${_current_value}\" != \"${_new_value}\" ]; then\n    if grep -q \"^${_setting_name}\" \"${_TMP_CONFIG_FILE}\"; then\n      if [ \"${_is_memory_size}\" -eq 1 ]; then\n        sed -i \"s|^${_setting_name}.*|${_setting_name} = ${_new_value}M|\" \"${_TMP_CONFIG_FILE}\"\n      else\n        sed -i \"s|^${_setting_name}.*|${_setting_name} = ${_new_value}|\" \"${_TMP_CONFIG_FILE}\"\n      fi\n    else\n      if [ \"${_is_memory_size}\" -eq 1 ]; then\n        echo \"${_setting_name} = ${_new_value}M\" >> \"${_TMP_CONFIG_FILE}\"\n      else\n        echo \"${_setting_name} = ${_new_value}\" >> \"${_TMP_CONFIG_FILE}\"\n      fi\n    fi\n\n    if [ \"${_requires_restart}\" -eq 1 ]; then\n      _NEEDS_RESTART=1\n    else\n      if [ \"${_is_memory_size}\" -eq 1 ]; then\n        # Apply setting dynamically with multiplication\n        mysql -e \"SET GLOBAL ${_setting_name}=$(echo \"${_new_value} * 1024 * 1024\" | bc);\"\n      else\n        # Apply setting dynamically without multiplication\n        mysql -e \"SET GLOBAL ${_setting_name}=${_new_value};\"\n      fi\n    fi\n  fi\n}\n\n# Update /etc/mysql/my.cnf if settings differ\n_update_mysql_config() {\n  _CONFIG_FILE=\"/etc/mysql/my.cnf\"\n  _TMP_CONFIG_FILE=\"/tmp/my.cnf.$$\"\n  cp \"${_CONFIG_FILE}\" \"${_TMP_CONFIG_FILE}\"\n\n  # Update settings\n  _update_setting \"innodb_buffer_pool_size\" \"${_REC_INNODB_BUFFER_POOL_SIZE_MB}\" \"${_CUR_INNODB_BUFFER_POOL_SIZE}\" 1 1\n  if [[ \"${_MYSQL_VERSION}\" =~ ^8\\. 
]]; then\n    _update_setting \"innodb_redo_log_capacity\" \"${_REC_INNODB_LOG_SIZE_MB}\" \"${_CUR_INNODB_LOG_SIZE}\" 1 1\n  else\n    _update_setting \"innodb_log_file_size\" \"${_REC_INNODB_LOG_SIZE_MB}\" \"${_CUR_INNODB_LOG_SIZE}\" 1 1\n  fi\n  _update_setting \"key_buffer_size\" \"${_REC_KEY_BUFFER_SIZE_MB}\" \"${_CUR_KEY_BUFFER_SIZE}\" 1 1\n  _update_setting \"max_connections\" \"${_REC_MAX_CONS}\" \"${_CUR_MAX_CONS}\" 0 0  # Not a memory size\n  _update_setting \"tmp_table_size\" \"${_REC_TMP_TABLE_SIZE_MB}\" \"${_CUR_TMP_TABLE_SIZE}\" 0 1\n  _update_setting \"max_heap_table_size\" \"${_REC_TMP_TABLE_SIZE_MB}\" \"${_CUR_TMP_TABLE_SIZE}\" 0 1\n  _update_setting \"read_rnd_buffer_size\" \"${_REC_READ_RND_BUFFER_SIZE_MB}\" \"${_CUR_READ_RND_BUFFER_SIZE}\" 0 1\n  _update_setting \"join_buffer_size\" \"${_REC_JOIN_BUFFER_SIZE_MB}\" \"${_CUR_JOIN_BUFFER_SIZE}\" 0 1\n\n  # Replace the original config file if changes were made\n  if cmp -s \"${_TMP_CONFIG_FILE}\" \"${_CONFIG_FILE}\"; then\n    rm \"${_TMP_CONFIG_FILE}\"\n  else\n    mv \"${_TMP_CONFIG_FILE}\" \"${_CONFIG_FILE}\"\n  fi\n}\n\n_update_mysql_config\n\nif [ \"${_NEEDS_RESTART}\" -eq 1 ]; then\n  echo \"Some settings require a MySQL restart to take effect.\"\n  echo \"Invoking /var/xdrago/move_sql.sh to safely restart MySQL.\"\n  bash /var/xdrago/move_sql.sh\n  wait\nelse\n  echo \"Settings have been applied dynamically where possible.\"\n  echo \"No restart required.\"\nfi\necho \"MySQL configuration update completed.\"\n"
  },
  {
    "path": "aegir/tools/bin/mergecsf",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Script to merge CSF configuration files from /etc/csf and /var/backups/*/csf/\n# It merges csf.ignore, csf.allow, and csf.deny files, removes duplicates, and ignores commented lines.\n# A backup of /etc/csf/ is created before merging.\n\n# Variables\n_FILES=(\"csf.ignore\" \"csf.allow\" \"csf.deny\")\n_CONFIG_DIR=\"/etc/csf\"\n_BACKUP_DIR=\"/var/backups\"\n_BACKUP_TIMESTAMP=$(date +\"%Y%m%d%H%M%S\")\n\n# Create a backup of /etc/csf/\ncp -a \"${_CONFIG_DIR}\" \"${_CONFIG_DIR}_backup_${_BACKUP_TIMESTAMP}\"\n\n# Loop through each CSF configuration file\nfor _FILE in \"${_FILES[@]}\"; do\n  _FILE_PATHS=()\n\n  # Check if the file exists in /etc/csf/\n  if [ -f \"${_CONFIG_DIR}/${_FILE}\" ]; then\n    _FILE_PATHS+=(\"${_CONFIG_DIR}/${_FILE}\")\n  fi\n\n  # Check for files in /var/backups/*/csf/\n  for _BACKUP_SUBDIR in \"${_BACKUP_DIR}\"/*; do\n    if [ -d \"${_BACKUP_SUBDIR}\" ]; then\n      if [ -f \"${_BACKUP_SUBDIR}/csf/${_FILE}\" ]; then\n        _FILE_PATHS+=(\"${_BACKUP_SUBDIR}/csf/${_FILE}\")\n      fi\n    fi\n  done\n\n  # Proceed if any files were found\n  if [ \"${#_FILE_PATHS[@]}\" -eq 0 ]; then\n    echo \"No files found for ${_FILE}\"\n    continue\n  fi\n\n  # Create a temporary file for merging\n  _TEMP_FILE=$(mktemp)\n\n  # Read each file, ignore commented lines, and append to temporary file\n  for _PATH in \"${_FILE_PATHS[@]}\"; do\n    grep -v '^\\s*#' \"${_PATH}\" >> \"${_TEMP_FILE}\"\n  done\n\n  # Remove duplicate entries\n  sort -u \"${_TEMP_FILE}\" -o \"${_TEMP_FILE}\"\n\n  # Replace the original file with the merged content\n  mv \"${_TEMP_FILE}\" \"${_CONFIG_DIR}/${_FILE}\"\n\n  echo \"Merged ${_FILE} successfully.\"\ndone\n\necho \"All files have been merged and duplicates removed.\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/multiback",
    "content": "#!/bin/bash\n\n# Environment setup\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n# Function to print env for debugging\n_print_env() {\n  if [ \"$(id -u)\" -eq 0 ] && [ -e \"/root/.dev.server.cnf\" ]; then\n    _ENV=$(env 2>&1)\n    echo\n    echo \"_ENV in $1 start\"\n    echo \"${_ENV}\"\n    echo \"_ENV in $1 end\"\n    echo\n    _ENV=\n  fi\n}\n\n# Function to verify BOA keys\n_verify_boa_keys() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _verify_boa_keys in multiback\"\n  fi\n  if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n    _allw=NO\n    _crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n    _urlEnc=\"http://files.aegir.cc/enc/2024\"\n    _encName=$(echo ${_hName} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".o8.io\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".boa.io\"($) ]]; then\n      _allw=YES\n    fi\n    mkdir -p /var/opt\n    rm -f /var/opt/_encN*\n    curl ${_crlGet} \"${_urlEnc}/${_encName}\" -o /var/opt/_encN.${_encName}.tmp\n    wait\n    echo \"${_hName}.${_encName}\" > /var/opt/_encN_local.${_encName}.tmp\n    wait\n    if [ -e \"/var/opt/_encN.${_encName}.tmp\" ] && [ -e \"/var/opt/_encN_local.${_encName}.tmp\" ]; then\n      _diffTestIf=$(diff -w -B /var/opt/_encN.${_encName}.tmp /var/opt/_encN_local.${_encName}.tmp 2>&1)\n      if [ ! -z \"${_diffTestIf}\" ] && [ \"${_allw}\" = \"NO\" ]; then\n        echo\n        echo \"Your system requires valid license to use this function\"\n        echo \"Please visit https://omega8.cc/licenses to purchase your own\"\n        echo\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! 
-e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n        rm -f /var/opt/_encN*\n        exit 0\n      else\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n      fi\n    else\n      echo\n      echo \"Your system requires a valid license to use this BOA feature\"\n      echo \"Unfortunately it was not possible to verify your system status\"\n      echo \"Please visit https://omega8.cc/licenses first, then contact our support\"\n      echo\n      exit 0\n    fi\n  fi\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n# Function to calculate RAM usage percentage as an integer\n_calculate_ram_usage_percent() {\n  _total_ram_kb=$1\n  _available_ram_kb=$2\n  used_ram_kb=$((_total_ram_kb - _available_ram_kb))\n\n  # Using integer division to get a whole number percentage\n  echo $(( (used_ram_kb * 100) / _total_ram_kb ))\n}\n\n# Function to check current system RAM usage\n_check_system_ram() {\n  # Get the total and available RAM in KB\n  _total_ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')\n  _available_ram_kb=$(grep MemAvailable /proc/meminfo | awk '{print $2}')\n\n  # Calculate RAM usage percentage\n  _ram_usage_percent=$(_calculate_ram_usage_percent ${_total_ram_kb} ${_available_ram_kb})\n}\n\n# Function to check and optimize RAM and disk caches\n_optimize_ram() {\n  swapoff -a\n  _check_system_ram\n  if [ \"${_ram_usage_percent}\" -gt 50 ]; then\n    sync && echo 3 | tee /proc/sys/vm/drop_caches\n  fi\n  swapon -a\n}\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   
NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n\n# Function to verify root access\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n    [ -e \"/root/.gnupg\" ] && chmod 700 /root/.gnupg\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  _AWS_VLV=${_AWS_VLV//[^a-z]/}\n  if [ -z \"${_AWS_VLV}\" ]; then\n    _AWS_VLV=\"warning\"\n  fi\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n  _cpuNr=\"$(cat /data/all/cpuinfo 2>/dev/null | tr -d '\\n' || nproc 2>/dev/null)\"\n  if [ -n \"${_cpuNr}\" ]; then\n    [ \"${_cpuNr}\" -gt 8 ] && _useCpu=4\n    [ \"${_cpuNr}\" -le 8 ] && _useCpu=2\n    [ \"${_cpuNr}\" -le 4 ] && _useCpu=1\n  else\n    _useCpu=1\n  fi\n}\n_check_root\n_normalize_incident_report\n_optimize_ram\n_if_hosted_sys\n_verify_boa_keys\n_print_env \"multiback_init\"\n\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n# New OpenSSL 3.x version is required\nif [ ! -x \"/usr/local/ssl3/bin/openssl\" ]; then\n  echo \"New OpenSSL 3.x version is required\"\n  exit 1\nfi\n\n# Function to notify about still running backup\n_waiting_notify() {\n  local _templog=\"/var/backups/multiback_waiting_queue.log\"\n  cat /root/.remote_backups/schedule/backup_schedule.txt > ${_templog}\n  ps axf | grep multiback >> ${_templog}\n  ps axf | grep duplicity >> ${_templog}\n  ls -la /tmp/duplicity-*-tempdir >> ${_templog}\n  tree /root/.cache/duplicity >> ${_templog}\n  ls -laR /root/.cache/duplicity >> ${_templog}\n  grep \"Out of memory: Killed process.*duplicity\" /var/log/iptables.log >> ${_templog}\n  boa info  >> ${_templog}\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    s-nail -s \"Multiback Waiting Report for [${_hName}] on $(date)\" ${_MY_EMAIL} < ${_templog}\n  fi\n}\n\n_CNT=$(pgrep -fc duplicity)\nif (( _CNT > 0 )); then\n  echo \"[$(date)] Active duplicity process detected, will try again later...\" >> /var/log/mybackup_waiting_queue.log\n  _waiting_notify\n  exit 1\nfi\n\n# Function to display usage information\n_usage() 
{\n  echo \"Usage: $0 {backup|cleanup|restore} <SERVICE> <USER> [RESTORE_TARGET] [RESTORE_PATH] [RESTORE_TIME]\"\n  echo\n  echo \"Example commands:\"\n  echo \"  Backup:\"\n  echo \"  $0 backup aws john\"\n  echo \"  $0 backup b2 jane\"\n  echo\n  echo \"  Cleanup:\"\n  echo \"  $0 cleanup aws john\"\n  echo \"  $0 cleanup gcs jane\"\n  echo\n  echo \"  Restore:\"\n  echo \"  $0 restore aws john /restore/target specific/path 1D\"\n  echo \"  $0 restore b2 jane /restore/target another/path 2W\"\n  echo\n  echo \"Supported services:\"\n  echo \"  aws, aws_one_zone, aws_standard_ia, azure, b2, cloudflare, do_spaces, gcs, ibm, linode, wasabi\"\n  echo\n  echo \"NOTE: [RESTORE_PATH] is the full path of the file or directory to restore, relative to the backup root (no leading slash)\"\n  echo\n  exit 1\n}\n\n# Function to create PID file\n_create_pid_file() {\n  local _pidfile=$1\n  if [ -e \"${_pidfile}\" ]; then\n    echo \"Process already running with PID file ${_pidfile}\"\n    exit 1\n  else\n    echo $$ > \"${_pidfile}\"\n  fi\n}\n\n# Function to remove PID file\n_remove_pid_file() {\n  local _pidfile=$1\n  if [ -f \"${_pidfile}\" ]; then\n    rm -f \"${_pidfile}\" || {\n      echo \"Warning: Failed to remove PID file: ${_pidfile}\"\n    }\n  fi\n}\n\n# Function to remove stale multiback PID file\n_remove_stale_multiback_pid() {\n  local _service=$1\n  local _user=$2\n  _multiback_pidfile=\"/run/duplicity_${_service}_${_user}.pid\"\n  if [ -f \"${_multiback_pidfile}\" ]; then\n    _old_pid=$(cat \"${_multiback_pidfile}\")\n    if [ -n \"${_old_pid}\" ] && ! kill -0 \"${_old_pid}\" 2>/dev/null; then\n      echo \"Stale multiback PID file detected: ${_multiback_pidfile}. 
Removing it.\"\n      rm -f \"${_multiback_pidfile}\"\n    fi\n  fi\n}\n\n# Function to log validation issues\n_log_issue() {\n  local _type=$1\n  local _file=$2\n  local _message=$3\n  echo \"[$(date)] Validation issue type: [${_type}] in file: [${_file}] with error: ${_message}\" >> \"${_VALIDATION_LOG_FILE}\"\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    # Alert the admin\n    boa info  >> ${_LOGFILE}\n    echo \"Sending Backup Validation Alert to ${_MY_EMAIL} on $(date)\" >> ${_LOGFILE}\n    s-nail -s \"Backup Validation Alert for [$(hostname)] on $(date)\" ${_MY_EMAIL} < ${_LOGFILE}\n  fi\n}\n\n# Helper function to URL-encode using jq\n_url_encode() {\n  echo -n \"$1\" | jq -s -R -r @uri\n}\n\n# Function to escape values\n_escape_value() {\n  printf '%q' \"$1\"\n}\n\n# Function to sanitize and validate credentials file\n_validate_credentials() {\n  local _cred_file=\"$1\"\n  local _service=\"$2\"\n  local _line_number=0\n\n  while IFS= read -r _line || [ -n \"${_line}\" ]; do\n    _line_number=$(( _line_number + 1 ))\n\n    # Trim leading and trailing whitespace\n    _line=\"${_line#\"${_line%%[![:space:]]*}\"}\"\n    _line=\"${_line%\"${_line##*[![:space:]]}\"}\"\n\n    # Skip empty lines immediately\n    if [[ -z \"${_line}\" ]]; then\n      continue\n    fi\n\n    # Remove full-line comments: lines that *start* with '#'\n    if [[ \"${_line}\" == \\#* ]]; then\n      continue\n    fi\n\n    # Remove anything after (and including) the first '#' for inline comments\n    # (This is a naive approach that does not consider # within quotes)\n    if [[ \"${_line}\" == *\"#\"* ]]; then\n      _line=\"${_line%%#*}\"\n      # Re-trim after removing the comment\n      _line=\"${_line#\"${_line%%[![:space:]]*}\"}\"\n      _line=\"${_line%\"${_line##*[![:space:]]}\"}\"\n    fi\n\n    # Skip if there's nothing left after stripping inline comment\n    if [[ -z \"${_line}\" ]]; then\n      continue\n    fi\n\n    # Remove 'export ' 
prefix if present\n    _line=\"${_line#export }\"\n\n    # Validate the variable assignment (key=value)\n    if [[ \"${_line}\" =~ ^([A-Za-z_][A-Za-z0-9_]*)=(\\\".*\\\"|'.*'|[^[:space:]]+)$ ]]; then\n      export _varname=\"${BASH_REMATCH[1]}\"\n      export _value=\"${BASH_REMATCH[2]}\"\n\n      # Remove surrounding quotes if present\n      if [[ \"${_value}\" =~ ^\\\".*\\\"$ || \"${_value}\" =~ ^\\'.*\\'$ ]]; then\n        export _value=\"${_value:1:-1}\"\n      fi\n\n      # Check for forbidden characters in value\n      if echo \"${_value}\" | grep -q -E '[$`(){};&|<>]'; then\n        _log_issue \"credentials\" \"${_cred_file}\" \\\n          \"Forbidden characters in value at line ${_line_number}: ${_line}\"\n        continue\n      fi\n\n      # Safely export the variable (URL-encode if needed)\n      if [ \"${_service}\" = \"b2\" ]; then\n        export ${_varname}=$(_url_encode \"${_value}\")\n      else\n        export ${_varname}=\"${_value}\"\n      fi\n    else\n      _log_issue \"credentials\" \"${_cred_file}\" \\\n        \"Invalid syntax at line ${_line_number}: ${_line}\"\n    fi\n  done < \"${_cred_file}\"\n\n  _print_env \"multiback_validate_credentials\"\n}\n\n# Function to load credentials\n_load_credentials() {\n  local _service=\"$1\"\n  local _user=\"$2\"\n  if [ \"${_user}\" != \"arch\" ] \\\n    && [ \"${_user}\" != \"data\" ] \\\n    && [ \"${_user}\" != \"global\" ] \\\n    && [ \"${_user}\" != \"static\" ] \\\n    && [ \"${_user}\" != \"custom\" ]; then\n    local _cred_file=\"/data/disk/${_user}/static/control/remote_backups/credentials/${_service}.txt\"\n    local _secret_file=\"/data/disk/${_user}/remote_backups/.secret.txt\"\n  fi\n  if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    local _cred_file=\"/root/.remote_backups/credentials/${_service}.txt\"\n    local _secret_file=\"/root/.remote_backups/.secret.txt\"\n  fi\n\n  if [ -s \"${_secret_file}\" ]; then\n    export 
PASSPHRASE=$(cat \"${_secret_file}\")\n  else\n    echo \"Secret file ${_secret_file} not found. Unable to proceed.\"\n    exit 1\n  fi\n\n  if [ ! -s \"${_cred_file}\" ]; then\n    echo \"Error: Credentials file '${_cred_file}' not found.\"\n    exit 1\n  fi\n\n  _validate_credentials \"${_cred_file}\" \"${_service}\"\n  _print_env \"multiback_load_credentials\"\n}\n\n# Function to load paths configuration\n_load_paths() {\n  local _user=\"$1\"\n  if [ \"${_user}\" != \"arch\" ] \\\n    && [ \"${_user}\" != \"data\" ] \\\n    && [ \"${_user}\" != \"global\" ] \\\n    && [ \"${_user}\" != \"static\" ] \\\n    && [ \"${_user}\" != \"custom\" ]; then\n    local _paths_file=\"/data/disk/${_user}/remote_backups/paths/paths.txt\"\n  elif [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    local _paths_file=\"/root/.remote_backups/paths/${_user}_paths.txt\"\n  fi\n\n  if [ ! -f \"${_paths_file}\" ]; then\n    echo \"Error: Paths configuration file '${_paths_file}' not found.\"\n    exit 1\n  fi\n\n  if [ \"${_user}\" != \"arch\" ]; then\n    source \"${_paths_file}\"\n  fi\n  _print_env \"multiback_load_paths\"\n}\n\n# Function to validate duration format and fallback to default\n_validate_or_default_duration() {\n  local _value=$1\n  local _var_name=$2\n  local _default=$3\n\n  # Supported formats: number followed by D (days), W (weeks), M (months), Y (years)\n  if [[ ! \"${_value}\" =~ ^[0-9]+[DWMY]$ ]] || [[ \"${_value}\" =~ ^[0][DWMY]$ ]]; then\n    echo \"Warning: Invalid value '${_value}' for ${_var_name}. Using default '${_default}'.\"\n    eval \"${_var_name}='${_default}'\"\n    _print_env \"multiback_validate_or_default_duration\"\n  fi\n\n  # Enforced min value for KEEP_WITHIN (1M)\n  if [ \"${_var_name}\" = \"KEEP_WITHIN\" ] && [[ ! \"${_value}\" =~ ^[0-9]+[MY]$ ]]; then\n    echo \"Warning: Invalid value '${_value}' for ${_var_name}. It must be at least 1M. 
Using default '${_default}'.\"\n    eval \"${_var_name}='${_default}'\"\n    _print_env \"multiback_validate_or_default_duration_keep\"\n  fi\n\n  # Enforced min and max value for FULL_BACKUP_FREQUENCY (7D to 60D)\n  if [ \"${_var_name}\" = \"FULL_BACKUP_FREQUENCY\" ] && [[ ! \"${_value}\" =~ ^([7-9]|[1-5][0-9]|60)D$ ]]; then\n    echo \"Warning: Invalid value '${_value}' for ${_var_name}. It must be between 7D and 60D. Using default '${_default}'.\"\n    eval \"${_var_name}='${_default}'\"\n    _print_env \"multiback_validate_or_default_duration_freq\"\n  fi\n}\n\n# Function to construct _BUCKET_NAME\n_construct_bucket_name() {\n  local _service_abbr=$1\n  local _user=$2\n  _service_dash=$(echo -n ${_service_abbr} | tr _ -)\n  _hst_dash=$(echo -n ${_hName} | tr . -)\n  export _BUCKET_NAME=\"back-to-${_user}-${_hst_dash}-${_service_dash}\"\n  export _NAME=\"${_user}-${_service_dash}\"\n  export _LOGFILE=\"${_LOGPTH}/${_BUCKET_NAME}.log\"\n  _print_env \"multiback_construct_bucket_name\"\n}\n\n# Function to generate duplicity-compatible include directives\n_generate_include_directives() {\n  local _source=$1\n  local _include=\"\"\n  for _cdir in ${_source}; do\n    _include=\"${_include} --include ${_cdir}\"\n  done\n  echo \"${_include}\"\n}\n\n# Function to prepare backup directives\n_backup_prepare() {\n  if [ -e \"/root/.cache/duplicity/${_NAME}\" ]; then\n    _CacheTest=$(find /root/.cache/duplicity/${_NAME} \\\n      -maxdepth 1 \\\n      -mindepth 1 \\\n      -type f \\\n      | sort 2>&1)\n    if [[ \"${_CacheTest}\" =~ \"No such file or directory\" ]] \\\n      || [ -z \"${_CacheTest}\" ]; then\n      export _cached=NO\n    else\n      export _cached=YES\n    fi\n  fi\n  # Generate include directives dynamically\n  [ -n \"${_SOURCE}\" ] && _SRC_INCLUDE=$(_generate_include_directives \"${_SOURCE}\")\n  #\n  [ -n \"${_INCLUDE_PATHS}\" ] && _MERGED_ALL_INCLUDE=\"${_INCLUDE_PATHS}\"\n  [ -n \"${_EXCLUDE_PATHS}\" ] && _MERGED_ALL_EXCLUDE=\"${_EXCLUDE_PATHS}\"\n 
 #\n  [ -n \"${_USER_INCLUDE_PATHS}\" ] && _USER_MERGED_ALL_INCLUDE=\"${_USER_INCLUDE_PATHS}\"\n  [ -n \"${_USER_EXCLUDE_PATHS}\" ] && _USER_MERGED_ALL_EXCLUDE=\"${_USER_EXCLUDE_PATHS}\"\n  #\n  [ -s \"${_INCLUDE_LIST}\" ] && _LST_INCLUDE=\"--include-filelist ${_INCLUDE_LIST}\"\n  [ -s \"${_EXCLUDE_LIST}\" ] && _LST_EXCLUDE=\"--exclude-filelist ${_EXCLUDE_LIST}\"\n  ###\n  [ -n \"${_MERGED_ALL_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_MERGED_ALL_INCLUDE}\"\n  [ -n \"${_USER_MERGED_ALL_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_USER_MERGED_ALL_INCLUDE}\"\n  [ -n \"${_LST_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_BATCH_INCLUDE} ${_LST_INCLUDE}\"\n  [ -n \"${_SRC_INCLUDE}\" ] && _BATCH_INCLUDE=\"${_BATCH_INCLUDE} ${_SRC_INCLUDE}\"\n  #\n  [ -n \"${_MERGED_ALL_EXCLUDE}\" ] && _BATCH_EXCLUDE=\"${_MERGED_ALL_EXCLUDE}\"\n  [ -n \"${_USER_MERGED_ALL_EXCLUDE}\" ] && _BATCH_EXCLUDE=\"${_USER_MERGED_ALL_EXCLUDE}\"\n  [ -n \"${_LST_EXCLUDE}\" ] && _BATCH_EXCLUDE=\"${_BATCH_EXCLUDE} ${_LST_EXCLUDE}\"\n  #\n  export _BATCH_INCLUDE\n  export _BATCH_EXCLUDE\n\n  _print_env \"multiback_backup_prepare\"\n}\n\n\n# Function to set backup mode\n_set_mode() {\n  local _user=\"${_USER}\"\n  [ -z \"${_MODE}\" ] && _MODE=\"backup\"\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] && [ \"${_cached}\" = \"YES\" ]; then\n    export _MODE=\"incremental\"\n  else\n    [ ! -e \"${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.full.log\" ] && export _MODE=\"full\"\n  fi\n  [ -e \"/root/.dev.server.cnf\" ] && echo \"The _MODE has been set to (${_MODE}) in _set_mode for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n      if [ \"${_DOM}\" = 1 ] && [ ! 
-e \"${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.full.log\" ]; then\n        _MODE=\"full\"\n        echo \"The _MODE has been re-set to (${_MODE}) in _set_mode for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n      fi\n    fi\n  else\n    [ -e \"/root/.dev.server.cnf\" ] && echo \"The FULL_BACKUP_FREQUENCY is (${FULL_BACKUP_FREQUENCY}) for ${_BUCKET_NAME}\" >> ${_LOGFILE}\n  fi\n  export _MODE\n  _print_env \"multiback_set_mode\"\n}\n\n# Function to construct backup command\n_set_cmd() {\n  local _user=\"${_USER}\"\n  if [ -z \"${KEEP_WITHIN}\" ] && [ -n \"${_AWS_TTL}\" ]; then\n    export KEEP_WITHIN=\"${_AWS_TTL}\"\n  fi\n  if [ -z \"${FULL_BACKUP_FREQUENCY}\" ] && [ -n \"${_AWS_FLC}\" ]; then\n    export FULL_BACKUP_FREQUENCY=\"${_AWS_FLC}\"\n  fi\n\n  # Validate or set default for KEEP_WITHIN\n  _validate_or_default_duration \"${KEEP_WITHIN}\" \"KEEP_WITHIN\" \"${_DEFAULT_KEEP_WITHIN}\"\n\n  # Validate or set default for FULL_BACKUP_FREQUENCY\n  _validate_or_default_duration \"${FULL_BACKUP_FREQUENCY}\" \"FULL_BACKUP_FREQUENCY\" \"${_DEFAULT_FULL_BACKUP_FREQUENCY}\"\n\n  ### Default backup command with encryption\n  export _DCY_BUP_CMD=\"/usr/local/bin/duplicity ${_MODE} \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu} \\\n    --copy-links \\\n    --full-if-older-than ${FULL_BACKUP_FREQUENCY} \\\n    --volsize 300\"\n\n  ### Default utility command with encryption\n  export _DCY_UTL_CMD=\"/usr/local/bin/duplicity \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu}\"\n\n  ### Custom backup command with encryption and enforced own FULL_BACKUP_FREQUENCY\n  export _FBF_BUP_CMD=\"/usr/local/bin/duplicity ${_MODE} \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu} \\\n    --copy-links \\\n    --volsize 300\"\n\n  ### Custom backup command without encryption and enforced own 
FULL_BACKUP_FREQUENCY\n  export _NOE_BUP_CMD=\"/usr/local/bin/duplicity ${_MODE} \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency ${_useCpu} \\\n    --no-encryption \\\n    --volsize 300\"\n\n  ### Custom utility command without encryption\n  export _NOE_UTL_CMD=\"/usr/local/bin/duplicity \\\n    -v ${_AWS_VLV} \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --no-encryption \\\n    --concurrency ${_useCpu}\"\n\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ]; then\n      export _DCY_BUP_CMD=\"${_FBF_BUP_CMD}\"\n    elif [ \"${_user}\" = \"custom\" ]; then\n      export _DCY_BUP_CMD=\"${_NOE_BUP_CMD}\"\n      export _DCY_UTL_CMD=\"${_NOE_UTL_CMD}\"\n    fi\n  fi\n\n  _print_env \"multiback_set_cmd\"\n}\n\n# Function to test the connection to the backend\n_test() {\n  local _mode=\"$1\"\n  if [ \"${_mode}\" != \"only\" ]; then\n    _set_mode\n    _set_cmd\n  fi\n  echo \"Running ${_BUCKET_NAME} connection test, please wait...\"\n  echo \"Command is ${_DCY_UTL_CMD} cleanup --dry-run --timeout 8 ${_BACKUP_TARGET}\"\n  _ConnTest=$(${_DCY_UTL_CMD} cleanup --dry-run --timeout 8 ${_BACKUP_TARGET} 2>&1)\n  if [[ \"${_ConnTest}\" =~ \"No connection to backend\" ]] \\\n    || [[ \"${_ConnTest}\" =~ \"does not exist\" ]] \\\n    || [[ \"${_ConnTest}\" =~ \"IllegalLocationConstraintException\" ]]; then\n    echo \"Sorry, I can't connect to ${_BUCKET_NAME}\"\n    echo >> ${_LOGFILE}\n    echo \"Sorry, I can't connect to ${_BUCKET_NAME}\" >> ${_LOGFILE}\n    echo \"Please check if the bucket has the expected name:\" >> ${_LOGFILE}\n    echo \" ${_BUCKET_NAME}\" >> ${_LOGFILE}\n    echo \"This bucket must exist in the specified ${_SERVICE} region\" >> ${_LOGFILE}\n    echo >> ${_LOGFILE}\n  else\n    echo \"OK, I can connect to ${_BUCKET_NAME}\"\n  fi\n}\n\n# Function to check collection-status only\n_status() {\n  echo \"Command is ${_DCY_UTL_CMD} collection-status ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} 
collection-status ${_BACKUP_TARGET}\n  wait\n}\n\n# Function to list-current-files only\n_list() {\n  echo \"Command is ${_DCY_UTL_CMD} list-current-files ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} list-current-files ${_BACKUP_TARGET}\n  wait\n}\n\n_remove_older_than() {\n  echo \"Running remove-older-than ${KEEP_WITHIN} for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} remove-older-than ${KEEP_WITHIN} --force ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} remove-older-than ${KEEP_WITHIN} --force ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n_collection_status() {\n  echo \"Running collection-status for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} collection-status ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} collection-status ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n# Function to only repair incomplete backup sets\n_repair_only() {\n  echo \"Running repair via cleanup --force for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} cleanup --force ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} cleanup --force ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n# Function to repair incomplete backup sets\n_repair() {\n  _repair_only\n  _collection_status\n}\n\n# Function to check if repairing incomplete backup sets is needed\n_check_if_repair() {\n  if grep -q \"found incomplete backup sets\" \"${_LOGFILE}\"; then\n    _repair_only\n  fi\n}\n\n# Function to check if backup worked cleanly or log the errors\n_check_if_worked_cleanly_or_log_err() {\n  local _user=\"${_USER}\"\n  if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    local _logs_dir=\"/root/.remote_backups/logs\"\n  else\n    local _logs_dir=\"/data/disk/${_user}/static/control/remote_backups/logs\"\n  fi\n  if grep -q \"Backup Statistics\" \"${_LOGFILE}\"; then\n    [ ! -e \"${_logs_dir}\" ] && mkdir -p ${_logs_dir}\n    cp -af \"${_LOGFILE}\" \"${_logs_dir}/OK-${_BUCKET_NAME}.log\"\n  else\n    [ ! 
-e \"${_logs_dir}\" ] && mkdir -p ${_logs_dir}\n    cp -af \"${_LOGFILE}\" \"${_logs_dir}/ERR-${_BUCKET_NAME}.log\"\n  fi\n}\n\n# Function to wipe the bucket completely\n_wipe() {\n  echo \"Running wipe via remove-all-but-n-full 0 --force for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"Command is ${_DCY_UTL_CMD} remove-all-but-n-full 0 --force ${_BACKUP_TARGET}\"\n  ${_DCY_UTL_CMD} remove-all-but-n-full 0 --force ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n}\n\n# Function to purge all backup sets\n_purge() {\n  _repair_only\n  _wipe\n  _collection_status\n}\n\n# Function to run weekly cleanup\n_weekly_cleanup() {\n  if [ -e \"${_LOGPTH}/${_BUCKET_NAME}.archive.log\" ] \\\n    && [ ! -e \"${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.cleanup.log\" ] \\\n    && [ \"${_DOW}\" = 7 ] \\\n    && [ \"${_cached}\" = \"YES\" ]; then\n    _test \"only\"\n    _remove_older_than\n    echo \"$(date)\" >> ${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.cleanup.log\n  else\n    _test \"only\"\n  fi\n}\n\n# Function to clean up old backups\n_cleanup() {\n  _remove_older_than\n  _collection_status\n}\n\n# Function to perform backup\n_run_backup() {\n  export _FULL_BACK_CMD=\"${_DCY_BUP_CMD} ${_BATCH_EXCLUDE} ${_BATCH_INCLUDE} --exclude '**' / ${_BACKUP_TARGET}\"\n  echo \"Running in ${_MODE} mode for ${_BUCKET_NAME} on $(date)\" >> ${_LOGFILE}\n  echo \"$(date)\" >> ${_LOGPTH}/${_BUCKET_NAME}.${_TODAY}.${_MODE}.log\n  ${_DCY_BUP_CMD} ${_BATCH_EXCLUDE} ${_BATCH_INCLUDE} --exclude '**' / ${_BACKUP_TARGET} >> ${_LOGFILE}\n  wait\n  _print_env \"multiback_run_backup\"\n}\n\n# Function to prepare backup\n_backup() {\n  _backup_prepare\n  _set_mode\n  _set_cmd\n  _run_backup\n  _check_if_repair\n  _weekly_cleanup\n  _check_if_worked_cleanly_or_log_err\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" != \"OFF\" ]; then\n    boa info  >> ${_LOGFILE}\n    echo \"Sending email report on $(date)\" >> ${_LOGFILE}\n    echo >> ${_LOGFILE}\n    s-nail -s \"Backup report (${_MODE}) for 
${_BUCKET_NAME} on $(date)\" ${_MY_EMAIL} < ${_LOGFILE}\n  fi\n  cat ${_LOGFILE} >> ${_LOGPTH}/${_BUCKET_NAME}.archive.log\n  rm -f ${_LOGFILE}\n  _print_env \"multiback_backup\"\n}\n\n### Legacy procedure for reference\n#\n#   Note: Be careful while restoring not to prepend a slash to the path!\n#\n# $ backboa restore file [time] destination\n#\n#   Restoring a single file to tmp/\n#   $ backboa restore data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz\n#\n#   Restoring an older version of a directory to tmp/ - interval or full date\n#   $ backboa restore data/disk/o1/backups 7D8h8s tmp/backups\n#   $ backboa restore data/disk/o1/backups 2014/11/11 tmp/backups\n#\n# _restore() {\n#   if [ $# = 2 ]; then\n#     echo \"Command is ${_DCY_UTL_CMD} restore --path-to-restore $1 ${_BACKUP_TARGET} $2\"\n#     ${_DCY_UTL_CMD} restore --path-to-restore $1 ${_BACKUP_TARGET} $2\n#   else\n#     echo \"Command is ${_DCY_UTL_CMD} restore --path-to-restore $1 --time $2 ${_BACKUP_TARGET} $3\"\n#     ${_DCY_UTL_CMD} restore --path-to-restore $1 --time $2 ${_BACKUP_TARGET} $3\n#   fi\n# }\n#\n### Legacy procedure for reference\n\n### Duplicity man page https://duplicity.gitlab.io/devel/duplicity.1.html#name\n#\n#   duplicity [backup|full|incremental] [options] source_directory target_url\n#   duplicity verify [options] [--compare-data] [--path-to-restore <relpath>] [--time time] source_url target_directory\n#   duplicity collection-status [options] [--file-changed <relpath>] [--show-changes-in-set <index>] [--jsonstat]] target_url\n#   duplicity list-current-files [options] [--time time] target_url\n#   duplicity [restore] [options] [--path-to-restore <relpath>] [--time time] source_url target_directory\n#   duplicity remove-older-than <time> [options] [--force] target_url\n#   duplicity remove-all-but-n-full <count> [options] [--force] target_url\n#   duplicity remove-all-inc-of-but-n-full <count> [options] [--force] target_url\n#   duplicity cleanup [options] [--force] 
target_url\n#\n#   Duplicity enters restore mode because the URL comes before the local directory.\n#   If we wanted to restore just the file \"Mail/article\" in /home/me as it was three days ago into /home/me/restored_file:\n#\n#   duplicity -t 3D --path-to-restore Mail/article sftp://uid@other.host/some_dir /home/me/restored_file\n#\n#   duplicity [restore] [options] [--path-to-restore <relpath>] [--time time] source_url target_directory\n#\n### Duplicity man page\n\n### Restore Command in mybackup for reference\n#\n#   mybackup restore <SERVICE> [RESTORE_PATH] [RESTORE_TIME]\n#   mybackup restore aws data/disk/your_username/static/projects 7D\n#\n# - <SERVICE>: The cloud storage service used for your backups (e.g., aws, b2, wasabi).\n# - [RESTORE_PATH] (optional): The absolute path (no leading slash) of the file or directory to restore.\n# - [RESTORE_TIME] (optional): The point in time for the restore, specified in human-readable formats like:\n#   - 1D (1 day ago)\n#   - 7D (7 days ago)\n#   - 1M (1 month ago)\n#\n### Restore Command in mybackup for reference\n\n# Function to restore backup\n_restore() {\n  local _restore_target=$1\n  local _restore_path=$2\n  local _restore_time=$3\n  local _restore_command=\"${_DCY_UTL_CMD} restore\"\n\n  # Remove any trailing slash from _restore_path for proper basename extraction\n  _clean_restore_path=\"${_restore_path%/}\"\n\n  # Extract the last part (basename) of the restore path.\n  _last_part=$(basename \"${_clean_restore_path}\")\n\n  # Combine _restore_target with the extracted basename.\n  # Also, remove any trailing slash from _restore_target to avoid double slashes.\n  _final_restore_target=\"${_restore_target%/}/${_last_part}\"\n\n  # Ensure _RESTORE_TARGET exists\n  if [ -n \"${_restore_target}\" ]; then\n    if [ ! 
-d \"${_restore_target}\" ]; then\n      echo \"Creating restore target directory: ${_restore_target}\"\n      mkdir -p \"${_restore_target}\"\n    fi\n  else\n    _restore_target=\"/data/disk/${_USER}/static/restores\"\n    if [ ! -d \"${_restore_target}\" ]; then\n      echo \"Creating restore target directory: ${_restore_target}\"\n      mkdir -p \"${_restore_target}\"\n    fi\n  fi\n  if [ -n \"${_restore_time}\" ]; then\n    _restore_command=\"${_restore_command} --time ${_restore_time}\"\n  fi\n  _restore_command=\"${_restore_command} ${_BACKUP_TARGET}\"\n  if [ -n \"${_restore_path}\" ]; then\n    _restore_command=\"${_restore_command} --path-to-restore ${_restore_path}\"\n  fi\n  _restore_command=\"${_restore_command} ${_final_restore_target}\"\n\n  echo \"Command is ${_restore_command}\"\n  # ${_DCY_UTL_CMD} restore --time ${_restore_time} ${_BACKUP_TARGET} --path-to-restore ${_restore_path} ${_restore_target}\n\n  # su -s /bin/bash ${_user} -c \"eval \\\"${_restore_command}\\\"\" &> /dev/null\n  eval \"${_restore_command}\"\n\n  _print_env \"multiback_restore\"\n}\n\n# Function to set backup target based on service\n_set_backup_target() {\n  local _service=$1\n  local _user=$2\n\n  case \"${_service}\" in\n    aws|aws_one_zone|aws_standard_ia)\n      _load_credentials \"${_service}\" \"${_user}\"\n      _construct_bucket_name \"${_service}\" \"${_user}\"\n\n      # Define S3-specific options\n      local _s3_endpoint=\"https://s3.dualstack.${AWS_REGION}.amazonaws.com\"\n      local _s3_options=\"--s3-endpoint-url ${_s3_endpoint} --s3-region-name ${AWS_REGION}\"\n\n      # Use intelligent-tiering options for specific services\n      if [ \"${_service}\" = \"aws_standard_ia\" ] || [ \"${_service}\" = \"aws_one_zone\" ]; then\n        local _s3_options=\"${_s3_options} --s3-use-ia\"\n      fi\n\n      export _BACKUP_TARGET=\"boto3+s3://${_BUCKET_NAME} ${_s3_options}\"\n      ;;\n    azure)\n      _load_credentials \"azure\" \"${_user}\"\n      
_construct_bucket_name \"azure\" \"${_user}\"\n      export _BACKUP_TARGET=\"azure://${AZURE_STORAGE_ACCOUNT}@${_BUCKET_NAME}\"\n      ;;\n    b2)\n      _load_credentials \"b2\" \"${_user}\"\n      _construct_bucket_name \"b2\" \"${_user}\"\n      export _BACKUP_TARGET=\"b2://${B2_ACCOUNT_ID}:${B2_APPLICATION_KEY}@${_BUCKET_NAME}\"\n      ;;\n    cloudflare)\n      _load_credentials \"cloudflare\" \"${_user}\"\n      _construct_bucket_name \"cloudflare\" \"${_user}\"\n\n      # Custom endpoint for Cloudflare R2\n      local _r2_endpoint=\"https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com\"\n\n      # Configure the S3 backup target\n      export _BACKUP_TARGET=\"boto3+s3://${R2_ACCESS_KEY_ID}:${R2_SECRET_ACCESS_KEY}@${_r2_endpoint}/${_BUCKET_NAME}\"\n      ;;\n    do_spaces)\n      _load_credentials \"do_spaces\" \"${_user}\"\n      _construct_bucket_name \"do_spaces\" \"${_user}\"\n      export _BACKUP_TARGET=\"s3://${DO_SPACES_KEY}:${DO_SPACES_SECRET}@${DO_SPACES_REGION}/${_BUCKET_NAME}\"\n      ;;\n    gcs)\n      _load_credentials \"gcs\" \"${_user}\"\n      _construct_bucket_name \"gcs\" \"${_user}\"\n      export _BACKUP_TARGET=\"gs://${_BUCKET_NAME}\"\n      ;;\n    ibm)\n      _load_credentials \"ibm\" \"${_user}\"\n      _construct_bucket_name \"ibm\" \"${_user}\"\n      export _BACKUP_TARGET=\"ibmcos://${IBM_API_KEY_ID}:${IBM_SERVICE_INSTANCE_ID}@${IBM_REGION}/${_BUCKET_NAME}\"\n      ;;\n    linode)\n      _load_credentials \"linode\" \"${_user}\"\n      _construct_bucket_name \"linode\" \"${_user}\"\n      export _BACKUP_TARGET=\"s3://${LINODE_ACCESS_KEY}:${LINODE_SECRET_KEY}@${LINODE_REGION}/${_BUCKET_NAME}\"\n      ;;\n    wasabi)\n      _load_credentials \"wasabi\" \"${_user}\"\n      _construct_bucket_name \"wasabi\" \"${_user}\"\n      export _BACKUP_TARGET=\"s3://${WASABI_ACCESS_KEY}:${WASABI_SECRET_KEY}@${WASABI_REGION}/${_BUCKET_NAME}\"\n      ;;\n    *)\n      echo \"Error: Unknown service ${_service}\"\n      exit 1\n      ;;\n  esac\n\n  
_print_env \"multiback_set_backup_target\"\n}\n\n# Main script\nif [ \"$#\" -lt 3 ]; then\n  _usage\nfi\n\nexport _LOGPTH=\"/var/log/boa\"\n_NOW=$(date +%y%m%d-%H%M%S)\nexport _NOW=${_NOW//[^0-9-]/}\n_TODAY=$(date +%y%m%d)\nexport _TODAY=${_TODAY//[^0-9]/}\n_DOW=$(date +%u)\nexport _DOW=${_DOW//[^1-7]/}\n_DOM=$(date +%e)\nexport _DOM=${_DOM//[^0-9]/}\n_HST=${_hName//[^a-zA-Z0-9-.]/}\n_HST=$(echo -n ${_HST} | tr A-Z a-z 2>&1)\nexport _HST_DASH=$(echo -n ${_HST} | tr . - 2>&1)\n\nexport _ACTION=$1\nexport _SERVICE=$2\nexport _USER=$3\nexport _RESTORE_TARGET=\"${4:-/var/backups/restored/}\"\nexport _RESTORE_PATH=\"${5:-}\"\nexport _RESTORE_TIME=\"${6:-}\"\nexport _PIDFILE=\"/run/duplicity_${_SERVICE}_${_USER}.pid\"\n# Default values\nexport _DEFAULT_KEEP_WITHIN=\"3M\"            # Default: 3 months\nexport _DEFAULT_FULL_BACKUP_FREQUENCY=\"28D\" # Default: 28 days\n\n# Log file for validation issues\nexport _VALIDATION_LOG_FILE=\"/var/log/backup_validation_issues.log\"\nexport _SANITIZATION_TMP_DIR=\"/var/tmp/backup_sanitization\"\nmkdir -p \"${_SANITIZATION_TMP_DIR}\"\nchmod 700 \"${_SANITIZATION_TMP_DIR}\"\n\n_print_env \"multiback_main\"\n\n# Create the PID file\n_create_pid_file \"${_PIDFILE}\"\ntrap \"rm -f ${_PIDFILE}; exit\" EXIT\n\n# Remove stale multiback PID file if necessary\n_remove_stale_multiback_pid \"${_SERVICE}\" \"${_USER}\"\n\n# Load paths configuration\n_load_paths \"${_USER}\"\n\ncase \"${_ACTION}\" in\n  test)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    ;;\n  backup)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _backup\n    ;;\n  cleanup)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _cleanup\n    ;;\n  list)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _list\n    ;;\n  purge)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _purge\n    ;;\n  status)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _status\n    ;;\n  
repair)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _repair\n    ;;\n  restore)\n    _set_backup_target \"${_SERVICE}\" \"${_USER}\"\n    _test\n    _restore \"${_RESTORE_TARGET}\" \"${_RESTORE_PATH}\" \"${_RESTORE_TIME}\"\n    ;;\n  *)\n    _usage\n    ;;\nesac\n\n_remove_pid_file \"${_PIDFILE}\"\n\n# Wipe out any exported variables to clean up env after running the backup\n  export FULL_BACKUP_FREQUENCY=\n  export PASSPHRASE=\n  export _ACTION=\n  export _BACKUP_TARGET=\n  export _BUCKET_NAME=\n  export _DCY_BUP_CMD=\n  export _DCY_UTL_CMD=\n  export _EXCLUDE_LIST=\n  export _FBF_BUP_CMD=\n  export _INCLUDE_LIST=\n  export _LST_EXCLUDE=\n  export _LST_INCLUDE=\n  export _MODE=\n  export _NAME=\n  export _NOE_BUP_CMD=\n  export _NOE_UTL_CMD=\n  export _PIDFILE=\n  export _RESTORE_PATH=\n  export _RESTORE_TARGET=\n  export _RESTORE_TIME=\n  export _SERVICE=\n  export _SOURCE=\n  export _SRC_INCLUDE=\n  export _USER=\n  export _USER_EXCLUDE_PATHS=\n  export _USER_INCLUDE_PATHS=\n  export _USER_MERGED_ALL=\n  export _cached=\n  export _credentials_file=\n  export _paths_file=\n  export _secret_file=\n  export _value=\n  export _varname=\n\n_print_env \"multiback_exit\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/mybackup",
"content": "#!/bin/bash\n\n# Environment setup\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n# Function to print env for debugging\n_print_env() {\n  if [ \"$(id -u)\" -eq 0 ] && [ -e \"/root/.dev.server.cnf\" ]; then\n    _ENV=$(env 2>&1)\n    echo\n    echo \"_ENV in $1 start\"\n    echo \"${_ENV}\"\n    echo \"_ENV in $1 end\"\n    echo\n    _ENV=\n  fi\n}\n\n# Base directory for user commands\n_BASE_DIR=\"/data/disk\"\n_RUN_SUBDIR=\"static/control/.run\"\n_CMD_FILE=\"command.txt\"\n\n# Function to display usage information\n_usage() {\n  echo \"Usage: $0 restore <SERVICE> [RESTORE_PATH] [RESTORE_TIME]\"\n  echo\n  echo \"Example commands:\"\n  echo \"  Restore:\"\n  echo \"  $0 restore aws static/projects 1D\"\n  echo \"  $0 restore b2 static/another-project 2W\"\n  echo\n  echo \"Supported services:\"\n  echo \"  aws, aws_one_zone, aws_standard_ia, azure, b2, cloudflare, do_spaces, gcs, ibm, linode, wasabi\"\n  echo\n  echo \"NOTE: [RESTORE_PATH] must be an absolute path (no leading slash) of the file or directory to restore\"\n  echo\n  exit 1\n}\n\n# Function to create PID file\n_create_pid_file() {\n  local _pidfile=$1\n  if [ -e \"${_pidfile}\" ]; then\n    echo \"Process already running with PID file ${_pidfile}\"\n    exit 1\n  else\n    echo $$ > \"${_pidfile}\"\n  fi\n}\n\n# Function to remove PID file\n_remove_pid_file() {\n  local _pidfile=$1\n  if [ -f \"${_pidfile}\" ]; then\n    rm -f \"${_pidfile}\" || {\n      echo \"Warning: Failed to remove PID file: ${_pidfile}\"\n    }\n  fi\n}\n\n# Function to log validation issues\n_log_issue() {\n  local _type=$1\n  local _file=$2\n  local _message=$3\n  echo \"[$(date)] Validation issue type: [${_type}] in file: [${_file}] with error: ${_message}\" >> \"${_LOGFILE}\"\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    # Alert the admin\n    echo \"Sending Backup 
Validation Alert to ${_MY_EMAIL} on $(date)\" >> ${_LOGFILE}\n    s-nail -s \"Backup Validation Alert for [$(hostname)] on $(date)\" ${_MY_EMAIL} < ${_LOGFILE}\n  fi\n}\n\n# Helper function to URL-encode using jq\n_url_encode() {\n  echo -n \"$1\" | jq -s -R -r @uri\n}\n\n# Function to escape values\n_escape_value() {\n  printf '%q' \"$1\"\n}\n\n# Function to sanitize and validate credentials file\n_validate_credentials() {\n  local _cred_file=\"$1\"\n  local _service=\"$2\"\n  local _line_number=0\n\n  while IFS= read -r _line || [ -n \"${_line}\" ]; do\n    _line_number=$(( _line_number + 1 ))\n    # Trim leading and trailing whitespace\n    _line=\"${_line#\"${_line%%[![:space:]]*}\"}\"\n    _line=\"${_line%\"${_line##*[![:space:]]}\"}\"\n\n    # Skip comments and empty lines\n    if [[ -z \"${_line}\" || \"${_line}\" == \\#* ]]; then\n      continue\n    fi\n\n    # Remove 'export ' prefix if present\n    _line=\"${_line#export }\"\n\n    # Validate the variable assignment\n    if [[ \"${_line}\" =~ ^([A-Za-z_][A-Za-z0-9_]*)=(\\\".*\\\"|'.*'|[^[:space:]]+)$ ]]; then\n      local _varname=\"${BASH_REMATCH[1]}\"\n      local _value=\"${BASH_REMATCH[2]}\"\n\n      # Remove surrounding quotes if present\n      if [[ \"${_value}\" =~ ^\\\".*\\\"$ || \"${_value}\" =~ ^\\'.*\\'$ ]]; then\n        _value=\"${_value:1:-1}\"\n      fi\n\n      # Check for forbidden characters in value\n      if echo \"${_value}\" | grep -q -E '[$`(){};&|<>]'; then\n        _log_issue \"credentials\" \"${_cred_file}\" \"Forbidden characters in value at line ${_line_number}: ${_line}\"\n        continue\n      fi\n\n      # Safely export the variable (URL-encode if needed)\n      if [ \"${_service}\" = \"b2\" ]; then\n        export ${_varname}=$(_url_encode \"${_value}\")\n      else\n        export ${_varname}=\"${_value}\"\n      fi\n    else\n      _log_issue \"credentials\" \"${_cred_file}\" \"Invalid syntax at line ${_line_number}: ${_line}\"\n    fi\n  done < \"${_cred_file}\"\n  
_print_env \"mybackup_validate_credentials\"\n}\n\n# Function to load credentials\n_load_credentials() {\n  local _service=\"$1\"\n  local _user=\"$2\"\n  if [ \"${_user}\" != \"arch\" ] \\\n    && [ \"${_user}\" != \"data\" ] \\\n    && [ \"${_user}\" != \"global\" ] \\\n    && [ \"${_user}\" != \"static\" ] \\\n    && [ \"${_user}\" != \"custom\" ]; then\n    _cred_file=\"/data/disk/${_user}/static/control/remote_backups/credentials/${_service}.txt\"\n    _secret_file=\"/data/disk/${_user}/remote_backups/.secret.txt\"\n  fi\n  if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    _cred_file=\"/root/.remote_backups/credentials/${_service}.txt\"\n    _secret_file=\"/root/.remote_backups/.secret.txt\"\n  fi\n\n  if [ -f \"${_secret_file}\" ]; then\n    export PASSPHRASE=$(cat \"${_secret_file}\")\n  else\n    echo \"Secret file ${_secret_file} not found. Unable to proceed.\"\n    _remove_pid_file \"${_PIDFILE}\"\n    exit 1\n  fi\n\n  if [ ! -f \"${_cred_file}\" ]; then\n    echo \"Error: Credentials file '${_cred_file}' not found.\"\n    _remove_pid_file \"${_PIDFILE}\"\n    exit 1\n  fi\n\n  _validate_credentials \"${_cred_file}\" \"${_service}\"\n  _print_env \"mybackup_load_credentials\"\n}\n\n# Function to load paths configuration\n_load_paths() {\n  local _user=\"$1\"\n  if [ \"${_user}\" != \"arch\" ] \\\n    && [ \"${_user}\" != \"data\" ] \\\n    && [ \"${_user}\" != \"global\" ] \\\n    && [ \"${_user}\" != \"static\" ] \\\n    && [ \"${_user}\" != \"custom\" ]; then\n    local _paths_file=\"/data/disk/${_user}/remote_backups/paths/paths.txt\"\n  fi\n  if [ \"${_user}\" = \"global\" ] || [ \"${_user}\" = \"data\" ] || [ \"${_user}\" = \"custom\" ]; then\n    local _paths_file=\"/root/.remote_backups/paths/${_user}_paths.txt\"\n  fi\n\n  if [ ! 
-f \"${_paths_file}\" ]; then\n    echo \"Error: Paths configuration file '${_paths_file}' not found.\"\n    _remove_pid_file \"${_PIDFILE}\"\n    exit 1\n  fi\n\n  if [ \"${_user}\" != \"arch\" ]; then\n    source \"${_paths_file}\"\n  fi\n  _print_env \"mybackup_load_paths\"\n}\n\n# Function to construct _BUCKET_NAME\n_construct_bucket_name() {\n  local _service_abbr=$1\n  local _user=$2\n  _service_dash=$(echo -n ${_service_abbr} | tr _ - 2>&1)\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n  _hst_dash=$(echo -n ${_hName} | tr . -)\n  export _BUCKET_NAME=\"back-to-${_user}-${_hst_dash}-${_service_dash}\"\n  export _NAME=\"${_user}-${_service_dash}\"\n}\n\n# Function to notify user about completed restore\n_send_restore_confirmation_email() {\n  local _recipient=$1\n  local _subject=$2\n\n  if [ -n \"${_recipient}\" ]; then\n    echo \"Sending email report on $(date)\" >> ${_LOGFILE}\n    s-nail -s \"${_subject}\" \"${_recipient}\" < ${_LOGFILE}\n  fi\n}\n\n# Function to restore backup\n_restore() {\n  local _restore_path=$1\n  local _restore_time=$2\n  local _restore_command=\"${_DCY_MN_CMD} restore --no-restore-owner\"\n  local _restore_target=\"/data/disk/${_user}/static/restores\"\n\n  # Ensure _restore_target exists\n  if [ ! 
-d \"${_restore_target}\" ]; then\n    echo \"Creating restore target directory: ${_restore_target}\"\n    mkdir -p ${_restore_target}\n    chown ${_user}:users ${_restore_target}\n  fi\n\n  # Remove any trailing slash from _restore_path for proper basename extraction\n  _clean_restore_path=\"${_restore_path%/}\"\n\n  # Extract the last part (basename) of the restore path.\n  _last_part=$(basename \"${_clean_restore_path}\")\n\n  # Combine _restore_target with the extracted basename.\n  # Also, remove any trailing slash from _restore_target to avoid double slashes.\n  _final_restore_target=\"${_restore_target%/}/${_last_part}\"\n\n  if [ -n \"${_restore_time}\" ]; then\n    _restore_command=\"${_restore_command} --time ${_restore_time}\"\n  fi\n  _restore_command=\"${_restore_command} ${_BACKUP_TARGET}\"\n  if [ -n \"${_restore_path}\" ]; then\n    _restore_command=\"${_restore_command} --path-to-restore ${_restore_path}\"\n  fi\n  _restore_command=\"${_restore_command} ${_final_restore_target}\"\n\n  echo \"Command is ${_restore_command}\"\n\n  echo \"MyBackup Restore for [${_user}] on [$(date)]\" >> ${_LOGFILE}\n  eval \"${_restore_command}\" >> ${_LOGFILE}\n  if [ -n \"${_userEmail}\" ]; then\n    _subject=\"MyBackup Restore Notification [${_user}] on [$(date)]\"\n    _send_restore_confirmation_email \"${_userEmail}\" \"${_subject}\"\n  fi\n}\n\n# Function to set backup target based on service\n_set_backup_target() {\n  local _service=$1\n  local _user=$2\n\n  case \"${_service}\" in\n    aws|aws_one_zone|aws_standard_ia)\n      _load_credentials \"${_service}\" \"${_user}\"\n      _construct_bucket_name \"${_service}\" \"${_user}\"\n\n      # Define S3-specific options\n      _s3_endpoint=\"https://s3.dualstack.${AWS_REGION}.amazonaws.com\"\n      _s3_options=\"--s3-endpoint-url ${_s3_endpoint} --s3-region-name ${AWS_REGION}\"\n\n      # Use intelligent-tiering options for specific services\n      if [ \"${_service}\" = \"aws_standard_ia\" ] || [ 
\"${_service}\" = \"aws_one_zone\" ]; then\n        _s3_options=\"${_s3_options} --s3-use-ia\"\n      fi\n\n      _BACKUP_TARGET=\"boto3+s3://${_BUCKET_NAME} ${_s3_options}\"\n      ;;\n    azure)\n      _load_credentials \"azure\" \"${_user}\"\n      _construct_bucket_name \"azure\" \"${_user}\"\n      _BACKUP_TARGET=\"azure://${AZURE_STORAGE_ACCOUNT}@${_BUCKET_NAME}\"\n      ;;\n    b2)\n      _load_credentials \"b2\" \"${_user}\"\n      _construct_bucket_name \"b2\" \"${_user}\"\n      _BACKUP_TARGET=\"b2://${B2_ACCOUNT_ID}:${B2_APPLICATION_KEY}@${_BUCKET_NAME}\"\n      ;;\n    cloudflare)\n      _load_credentials \"cloudflare\" \"${_user}\"\n      _construct_bucket_name \"cloudflare\" \"${_user}\"\n\n      # Custom endpoint for Cloudflare R2\n      _r2_endpoint=\"https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com\"\n\n      # Configure the S3 backup target\n      _BACKUP_TARGET=\"boto3+s3://${R2_ACCESS_KEY_ID}:${R2_SECRET_ACCESS_KEY}@${_r2_endpoint}/${_BUCKET_NAME}\"\n      ;;\n    do_spaces)\n      _load_credentials \"do_spaces\" \"${_user}\"\n      _construct_bucket_name \"do_spaces\" \"${_user}\"\n      _BACKUP_TARGET=\"s3://${DO_SPACES_KEY}:${DO_SPACES_SECRET}@${DO_SPACES_REGION}/${_BUCKET_NAME}\"\n      ;;\n    gcs)\n      _load_credentials \"gcs\" \"${_user}\"\n      _construct_bucket_name \"gcs\" \"${_user}\"\n      _BACKUP_TARGET=\"gs://${_BUCKET_NAME}\"\n      ;;\n    ibm)\n      _load_credentials \"ibm\" \"${_user}\"\n      _construct_bucket_name \"ibm\" \"${_user}\"\n      _BACKUP_TARGET=\"ibmcos://${IBM_API_KEY_ID}:${IBM_SERVICE_INSTANCE_ID}@${IBM_REGION}/${_BUCKET_NAME}\"\n      ;;\n    linode)\n      _load_credentials \"linode\" \"${_user}\"\n      _construct_bucket_name \"linode\" \"${_user}\"\n      _BACKUP_TARGET=\"s3://${LINODE_ACCESS_KEY}:${LINODE_SECRET_KEY}@${LINODE_REGION}/${_BUCKET_NAME}\"\n      ;;\n    wasabi)\n      _load_credentials \"wasabi\" \"${_user}\"\n      _construct_bucket_name \"wasabi\" \"${_user}\"\n      
_BACKUP_TARGET=\"s3://${WASABI_ACCESS_KEY}:${WASABI_SECRET_KEY}@${WASABI_REGION}/${_BUCKET_NAME}\"\n      ;;\n    *)\n      echo \"Error: Unknown service ${_service}\"\n      _remove_pid_file \"${_PIDFILE}\"\n      exit 1\n      ;;\n  esac\n}\n\n# Function to validate restore command\n_validate_restore_command() {\n  # Store all arguments in an array\n  local args=(\"$@\")\n\n  # Now use the array indices to assign variables.\n  # Note: Array indices start at 0.\n  local _action=\"${args[0]}\"\n  local _service=\"${args[1]}\"\n  local _restore_path=\"${args[2]}\"\n  local _restore_time=\"${args[3]}\"\n\n  # Ensure the command starts with \"restore\"\n  if [[ \"${_action}\" != \"restore\" ]]; then\n    echo \"Error: Invalid command action '${_action}'. Only 'restore' is allowed.\"\n    return 1\n  fi\n\n  # Validate service\n  case \"${_service}\" in\n    aws|aws_one_zone|aws_standard_ia|azure|b2|cloudflare|do_spaces|gcs|ibm|linode|wasabi)\n      # Valid services\n      ;;\n    *)\n      echo \"Error: Invalid service '${_service}'.\"\n      return 1\n      ;;\n  esac\n\n  # Validate restore path (optional, no leading slash)\n  if [[ -n \"${_restore_path}\" && \"${_restore_path}\" =~ ^/ ]]; then\n    echo \"Error: Restore path '${_restore_path}' must not have a leading slash.\"\n    return 1\n  fi\n\n  # Validate restore time (optional, must match supported formats: 1D, 7D, 2W, etc.)\n  if [[ -n \"${_restore_time}\" && ! \"${_restore_time}\" =~ ^[0-9]+[DWMY]$ ]]; then\n    echo \"Error: Invalid restore time format '${_restore_time}'. 
Supported formats: 1D, 7D, 2W, etc.\"\n    return 1\n  fi\n\n  # Command is valid\n  _print_env \"mybackup_validate_restore_command\"\n  return 0\n}\n\n# Check if run as root\nexport _run_as_user=$(whoami)\n\n# Main script\n_main_script() {\n  _CNT=$(pgrep -fc duplicity)\n  if (( _CNT > 0 )); then\n    echo \"[$(date)] Active duplicity process detected, will try again later...\" >> /var/log/mybackup_waiting_queue.log\n    exit 1\n  fi\n  # Process queued commands for all users\n  for _user_dir in \"${_BASE_DIR}\"/*; do\n    if [[ -d \"${_user_dir}\" ]]; then\n      export _user=$(basename \"${_user_dir}\")\n      export _user_run_dir=\"${_user_dir}/${_RUN_SUBDIR}\"\n      export _user_cmd_file=\"${_user_run_dir}/${_CMD_FILE}\"\n      if [[ -f \"${_user_cmd_file}\" ]]; then\n        export _command=$(cat \"${_user_cmd_file}\")\n        echo \"Executing backup restore for user ${_user}.\"\n        chmod 755 /data/disk/${_user}/remote_backups/paths\n        local _user_credentials_dir=\"/data/disk/${_user}/static/control/remote_backups/credentials\"\n        chmod 755 ${_user_credentials_dir}\n        chmod 644 ${_user_credentials_dir}/*.txt\n        chattr -i /data/disk/${_user}/remote_backups/.secret.txt\n        chmod 644 /data/disk/${_user}/remote_backups/.secret.txt\n        su -s /bin/bash - ${_user} -c \"mybackup ${_command}\"\n        wait\n        # Clean up the command file\n        echo \"[$(date)] Command was run for user ${_user}: ${_command}\" >> /var/log/mybackup_commands_history.log\n        echo \"[$(date)] Command was run for user ${_user}: ${_command}\" >> /data/disk/${_user}/log/mybackup_commands_history.log\n        rm -f \"${_user_cmd_file}\"\n        chmod 700 ${_user_credentials_dir}\n        chmod 600 ${_user_credentials_dir}/*.txt\n        chmod 600 /data/disk/${_user}/remote_backups/.secret.txt\n        chattr +i /data/disk/${_user}/remote_backups/.secret.txt\n        sleep 10\n      fi\n    fi\n  _print_env \"mybackup_run_as_user\"\n  
export _user=\n  export _user_run_dir=\n  export _user_cmd_file=\n  done\n  exit 0\n}\n\nif [[ \"$#\" -eq 0 ]] && [ \"${_run_as_user}\" = \"root\" ]; then\n  _main_script\nfi\n\n# Extract the current user\nexport _user=$(whoami | sed 's/\\.ftp$//')\n\n_userEmail=\"\"\n[ -e \"/data/disk/${_user}/log/email.txt\" ] && _userEmail=\"$(cat /data/disk/${_user}/log/email.txt)\"\n[ -n \"${_userEmail}\" ] && export _userEmail=$(echo -n ${_userEmail} | tr -d \"\\n\" 2>&1)\n\nif [[ $(whoami) == *.ftp ]]; then\n  _user_run_dir=\"/data/disk/${_user}/static/control/.run\"\n  [ ! -e \"${_user_run_dir}\" ] && mkdir -p \"${_user_run_dir}\"\n  # Check if there is any task queued already; do not overwrite it\n  if [ -e \"${_user_run_dir}/${_CMD_FILE}\" ]; then\n    echo \"Previous task is still in the queue. Please try again later.\"\n    exit 1\n  fi\n  # Validate the command\n  if ! _validate_restore_command \"$@\"; then\n    echo \"Error: Invalid command. The command has not been queued.\"\n    # Optionally log invalid commands\n    echo \"[$(date)] Invalid command attempted by $(whoami): $@\" >> ${_user_run_dir}/mybackup_invalid_commands.log\n    exit 1\n  fi\n\n  # If invoked as USER.ftp, queue the command\n  echo \"$@\" > \"${_user_run_dir}/${_CMD_FILE}\"\n  echo \"Command queued for user ${_user}...\"\n  echo \"It will be executed by the system shortly...\"\n  echo \"Bye!\"\n  exit 0\nfi\n\nexport _ACTION=$1\nexport _SERVICE=$2\nexport _RESTORE_PATH=\"${3:-}\"\nexport _RESTORE_TIME=\"${4:-}\"\nexport _PIDFILE=\"/data/disk/${_user}/log/mybackup_${_SERVICE}_${_user}.pid\"\nexport _LOGPTH=\"/data/disk/${_user}/log\"\nexport _LOGFILE=\"${_LOGPTH}/mybackup_${_SERVICE}_${_user}.log\"\n\nexport _DCY_MN_CMD=\"/usr/local/bin/duplicity \\\n    --name=${_NAME} \\\n    --allow-source-mismatch \\\n    --concurrency 4\"\n\n_create_pid_file \"${_PIDFILE}\"\n\n# Load paths configuration\n_load_paths \"${_user}\"\n\n_print_env \"mybackup_main\"\n\ncase \"${_ACTION}\" in\n  restore)\n    _set_backup_target \"${_SERVICE}\" \"${_user}\"\n    
_restore \"${_RESTORE_PATH}\" \"${_RESTORE_TIME}\"\n    ;;\n  *)\n    _usage\n    ;;\nesac\n\n_remove_pid_file \"${_PIDFILE}\"\n\n# Wipe out any exported variables to clean up env after running the backup\n  export PASSPHRASE=\n  export _ACTION=\n  export _BACKUP_TARGET=\n  export _BUCKET_NAME=\n  export _DCY_MN_CMD=\n  export _NAME=\n  export _PIDFILE=\n  export _RESTORE_PATH=\n  export _RESTORE_TIME=\n  export _SERVICE=\n  export _cred_file=\n  export _paths_file=\n  export _r2_endpoint=\n  export _run_as_user=\n  export _secret_file=\n  export _user=\n  export _userEmail=\n\n_print_env \"mybackup_exit\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/mycnfup",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_AEGIR_VERSION=\"${_tRee}\"\n_BRANCH_BOA=\"5.x-${_tRee}\"\n_X_VERSION=\"BOA-5.9.1-${_tRee}\"\n_MYSQLTUNER_VRN=1.9.4\n_PHP_FPM_WORKERS=AUTO\n_RESERVED_RAM=0\n\n###\n### Commands shortcuts\n###\n_aptAllow=\"--allow-unauthenticated\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n_INSTALL_DIST=\"/usr/bin/apt-get ${_dstUpArg} install\"\n_INSTALL_NRML=\"/usr/bin/apt-get ${_nrmUpArg} install\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_vBs=\"/var/backups\"\n_tVr=\"0.7\"\n_OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n_OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\nif [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n  _DPKG_CNF=\"confold\"\nelse\n  _DPKG_CNF=\"confnew\"\nfi\n_INSTAPP=\"/usr/bin/aptitude -f -y -q \\\n  --allow-untrusted \\\n  -o Dpkg::Options::=--force-confmiss \\\n  -o Dpkg::Options::=--force-confdef \\\n  -o Dpkg::Options::=--force-${_DPKG_CNF} install\"\nif [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n  _INSTAPP=\"${_INSTALL_DIST}\"\nfi\n\n\n###-------------SYSTEM-----------------###\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! 
-z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n_locales_check_fix() {\n  ${_INITINS} locales &> /dev/null\n  if [ -e \"/etc/ssh/sshd_config\" ]; then\n    _SSH_LC_TEST=$(grep \"^AcceptEnv LANG LC_\" /etc/ssh/sshd_config 2>&1)\n    if [[ \"${_SSH_LC_TEST}\" =~ \"AcceptEnv LANG LC_\" ]]; then\n      _DO_NOTHING=YES\n    else\n      sed -i \"s/.*AcceptEnv.*//g\" /etc/ssh/sshd_config\n      wait\n      echo \"AcceptEnv LANG LC_*\" >> /etc/ssh/sshd_config\n    fi\n  fi\n  _LOC_TEST=$(locale 2>&1)\n  if [[ \"${_LOC_TEST}\" =~ LANG=.*UTF-8 ]]; then\n    _LOCALE_TEST=OK\n  fi\n  if [ -n \"${STY+x}\" ]; then\n    _LOCALE_TEST=OK\n  fi\n  if [[ \"${_LOC_TEST}\" =~ \"Cannot\" ]]; then\n    _LOCALE_TEST=BROKEN\n  fi\n  if [ \"${_LOCALE_TEST}\" = \"BROKEN\" ]; then\n    echo \"WARNING!\"\n    cat <<EOF\n\n  Locales on this system are broken, not installed, or not yet\n  configured correctly. This is a known issue on systems and hosts\n  which either don't configure locales at all or don't use\n  UTF-8 compatible locales during initial OS setup.\n\n  We will fix this problem for you now by enforcing en_US.UTF-8\n  locale settings on the fly during install, and as system\n  defaults in /etc/default/locale for future sessions. This\n  overrides any locale settings passed by your SSH client.\n\n  You should log out once this installer has finished all its\n  tasks and displayed its last line, \"BYE!\", and then log in\n  again to see the result.\n\n  We will continue in 5 seconds...\n\nEOF\n    sleep 5\n    if [ \"${_OS_DIST}\" = \"Debian\" ] || [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n      if [[ ! 
\"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n        echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n      fi\n      sed -i \"/^$/d\" /etc/locale.gen\n      locale-gen &> /dev/null\n    else\n      locale-gen en_US.UTF-8 &> /dev/null\n    fi\n    # Explicitly enforce all locale settings\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_TIME=en_US.UTF-8 \\\n      LC_MONETARY=en_US.UTF-8 \\\n      LC_MESSAGES=en_US.UTF-8 \\\n      LC_PAPER=en_US.UTF-8 \\\n      LC_NAME=en_US.UTF-8 \\\n      LC_ADDRESS=en_US.UTF-8 \\\n      LC_TELEPHONE=en_US.UTF-8 \\\n      LC_MEASUREMENT=en_US.UTF-8 \\\n      LC_IDENTIFICATION=en_US.UTF-8 \\\n      LC_ALL= &> /dev/null\n    # Define all locale settings on the fly to prevent unnecessary\n    # warnings during installation of packages.\n    export LANG=en_US.UTF-8\n    export LC_CTYPE=en_US.UTF-8\n    export LC_COLLATE=POSIX\n    export LC_NUMERIC=POSIX\n    export LC_TIME=en_US.UTF-8\n    export LC_MONETARY=en_US.UTF-8\n    export LC_MESSAGES=en_US.UTF-8\n    export LC_PAPER=en_US.UTF-8\n    export LC_NAME=en_US.UTF-8\n    export LC_ADDRESS=en_US.UTF-8\n    export LC_TELEPHONE=en_US.UTF-8\n    export LC_MEASUREMENT=en_US.UTF-8\n    export LC_IDENTIFICATION=en_US.UTF-8\n    export LC_ALL=\n  else\n    if [ \"${_OS_DIST}\" = \"Debian\" ] || [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n      if [[ ! 
\"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n        echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n      fi\n      sed -i \"/^$/d\" /etc/locale.gen\n      locale-gen &> /dev/null\n    else\n      locale-gen en_US.UTF-8 &> /dev/null\n    fi\n    # Explicitly enforce locale settings required for consistency\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_ALL= &> /dev/null\n    # Define locale settings required for consistency also on the fly\n    if [ \"${_STATUS}\" != \"INIT\" ]; then\n      # On initial install it usually causes a warning:\n      # setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8):\n      # No such file or directory\n      export LC_CTYPE=en_US.UTF-8\n    fi\n    export LC_COLLATE=POSIX\n    export LC_NUMERIC=POSIX\n    export LC_ALL=\n  fi\n  _LOCALES_BASHRC_TEST=$(grep LC_COLLATE /root/.bashrc 2>&1)\n  if [[ ! \"${_LOCALES_BASHRC_TEST}\" =~ \"LC_COLLATE\" ]]; then\n    printf \"\\n\" >> /root/.bashrc\n    echo \"export LANG=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_CTYPE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_COLLATE=POSIX\" >> /root/.bashrc\n    echo \"export LC_NUMERIC=POSIX\" >> /root/.bashrc\n    echo \"export LC_TIME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MONETARY=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MESSAGES=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_PAPER=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_NAME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ADDRESS=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_TELEPHONE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MEASUREMENT=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_IDENTIFICATION=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ALL=\" >> /root/.bashrc\n    printf \"\\n\" >> /root/.bashrc\n  fi\n}\n\n_remove_systemd() {\n  if [ -x \"/lib/systemd/systemd\" ]; then\n    ls 
-la /lib/systemd/systemd\n    _apt_clean_update\n    echo \"sysvinit-core install\" | dpkg --set-selections\n    echo \"sysvinit-utils install\" | dpkg --set-selections\n    ${_INSTAPP} sysvinit-core\n    ${_INSTAPP} sysvinit-utils\n    ls -la /usr/share/sysvinit/inittab\n    if [ -e \"/usr/share/sysvinit/inittab\" ]; then\n      cp -af /usr/share/sysvinit/inittab /etc/inittab\n    fi\n    apt-get purge systemd libnss-systemd -y -qq 2> /dev/null\n    if [ ! -e \"/etc/apt/preferences.d/offsystemd\" ]; then\n      rm -f /etc/apt/preferences.d/systemd\n      echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n      echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n      _apt_clean_update\n    fi\n    apt-get autoremove --purge -y -qq 2> /dev/null\n    apt-get autoclean -y -qq 2> /dev/null\n    echo \"sysvinit-core hold\" | dpkg --set-selections &> /dev/null\n    echo \"sysvinit-utils hold\" | dpkg --set-selections &> /dev/null\n  fi\n}\n\n_check_root() {\n  echo \"running ${_tVr} _check_root procedure...\"\n  if [ \"$(id -u)\" -eq 0 ]; then\n    _locales_check_fix\n    _os_detection_minimal\n    _remove_systemd\n    sed -i \"s/.*du.sql.*//gi\"  /etc/crontab\n    echo \"22 22   * * *   root    du -s /var/lib/mysql/* > /root/.du.sql\" >> /etc/crontab\n    sed -i \"/^$/d\"  /etc/crontab\n    grep du.sql /etc/crontab\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n\n    # Sanitize to allow only digits and minus sign\n    export _B_NICE=${_B_NICE//[^0-9-]/}\n\n    # Validate and set default if necessary\n    if ! 
[[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n      _B_NICE=0\n    fi\n\n    # Clamp the value within -20 to 19\n    if (( _B_NICE < -20 )); then\n      _B_NICE=-20\n    elif (( _B_NICE > 19 )); then\n      _B_NICE=19\n    fi\n\n    renice ${_B_NICE} -p $$ &> /dev/null\n\n    chmod a+w /dev/null\n    if [ ! -e \"/dev/fd\" ]; then\n      if [ -e \"/proc/self/fd\" ]; then\n        rm -rf /dev/fd\n        ln -sfn /proc/self/fd /dev/fd\n      fi\n    fi\n    _VM_TEST=\"$(uname -a)\"\n    if [ -e \"/proc/bean_counters\" ]; then\n      _VMFAMILY=\"VZ\"\n    elif [ -e \"/root/.tg.cnf\" ]; then\n      _VMFAMILY=\"TG\"\n    else\n      _VMFAMILY=\"XEN\"\n    fi\n    if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n      _VMFAMILY=\"VS\"\n    fi\n    # Check if dmidecode is available\n    if ! command -v dmidecode &> /dev/null; then\n      _apt_clean_update\n      ${_INITINS} dmidecode &> /dev/null\n    fi\n    # Check for Amazon EC2 in the system manufacturer field\n    if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n      _VMFAMILY=\"AWS\"\n    fi\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    if [ ! -e \"/opt/apt/apt.conf.noi.dist\" ] \\\n      || [ ! 
-e \"/opt/apt/apt.conf.noi.nrml\" ]; then\n      mkdir -p /opt/apt\n      echo \"APT::Get::Assume-Yes \\\"true\\\";\" > /opt/apt/apt.conf.noi.dist\n      echo \"APT::Get::Show-Upgraded \\\"true\\\";\" >> /opt/apt/apt.conf.noi.dist\n      echo \"APT::Quiet \\\"true\\\";\" >> /opt/apt/apt.conf.noi.dist\n      echo \"DPkg::Options {\\\"--force-confnew\\\";\\\"--force-confmiss\\\";};\" >> /opt/apt/apt.conf.noi.dist\n      echo \"DPkg::Pre-Install-Pkgs {\\\"/usr/sbin/dpkg-preconfigure --apt\\\";};\" >> /opt/apt/apt.conf.noi.dist\n      echo \"Dir::Etc::SourceList \\\"/etc/apt/sources.list\\\";\" >> /opt/apt/apt.conf.noi.dist\n      echo \"APT::Get::Assume-Yes \\\"true\\\";\" > /opt/apt/apt.conf.noi.nrml\n      echo \"APT::Get::Show-Upgraded \\\"true\\\";\" >> /opt/apt/apt.conf.noi.nrml\n      echo \"APT::Quiet \\\"true\\\";\" >> /opt/apt/apt.conf.noi.nrml\n      echo \"DPkg::Options {\\\"--force-confdef\\\";\\\"--force-confmiss\\\";\\\"--force-confold\\\"};\" >> /opt/apt/apt.conf.noi.nrml\n      echo \"DPkg::Pre-Install-Pkgs {\\\"/usr/sbin/dpkg-preconfigure --apt\\\";};\" >> /opt/apt/apt.conf.noi.nrml\n      echo \"Dir::Etc::SourceList \\\"/etc/apt/sources.list\\\";\" >> /opt/apt/apt.conf.noi.nrml\n    fi\n    _LSB_TEST=\"$(which lsb_release)\"\n    if [ ! 
-x \"${_LSB_TEST}\" ]; then\n      _apt_clean_update\n      ${_INSTAPP} lsb-release\n    fi\n    _LSB_TEST=\"$(which lsb_release)\"\n    if [ -x \"${_LSB_TEST}\" ]; then\n      if [ \"${_OS_DIST}\" != \"Debian\" ]; then\n        echo \"ERROR: Unsupported OS detected: ${_OS_DIST}/${_OS_CODE}\"\n        exit 1\n      fi\n      echo\n      if [ \"${_OS_CODE}\" = \"buster\" ] || [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n        # Archived releases live on archive.debian.org\n        _APT_MIRROR=\"archive.debian.org/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-backports\"\n        _SEC_MIRROR=\"archive.debian.org/debian-security\"\n        _SEC_REPSRC=\"${_OS_CODE}/updates\"\n      elif [ \"${_OS_CODE}\" = \"bullseye\" ] || [ \"${_OS_CODE}\" = \"bookworm\" ] || [ \"${_OS_CODE}\" = \"trixie\" ]; then\n        # bullseye and newer use the codename-security suite layout\n        _APT_MIRROR=\"${_MIRROR}/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-updates\"\n        _SEC_MIRROR=\"security.debian.org/debian-security\"\n        _SEC_REPSRC=\"${_OS_CODE}-security\"\n      fi\n      echo \"## DEBIAN MAIN REPOSITORIES\" > ${_aptLiSys}\n      echo \"deb http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      echo \"## MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n      echo \"deb http://${_APT_MIRROR} ${_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_APT_MIRROR} ${_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      
echo \"## DEBIAN SECURITY UPDATES\" >> ${_aptLiSys}\n      echo \"deb http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      echo \"deb-src http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      echo\n      _apt_clean_update\n    fi\n    if [ \"${_OS_DIST}\" != \"Debian\" ]; then\n      echo \"ERROR: Not supported OS detected: ${_OS_DIST}/${_OS_CODE}\"\n      exit 1\n    fi\n    if [ \"${_VMFAMILY}\" = \"VS\" ]; then\n      if [ ! -e \"/etc/apt/preferences.d/fuse\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: fuse\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/fuse\n        _apt_clean_update\n      fi\n      if [ ! -e \"/etc/apt/preferences.d/udev\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: udev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/udev\n        _apt_clean_update\n      fi\n      if [ ! -e \"/etc/apt/preferences.d/makedev\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: makedev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/makedev\n        _apt_clean_update\n      fi\n      apt-get remove fuse -y --purge --auto-remove -qq 2> /dev/null\n      apt-get remove udev -y --purge --auto-remove -qq 2> /dev/null\n      apt-get remove makedev -y --purge --auto-remove -qq 2> /dev/null\n      _PTMX=OK\n      if [ -e \"/sbin/hdparm\" ]; then\n        apt-get remove hdparm -y --purge --auto-remove -qq 2> /dev/null\n      fi\n      _REMOVE_LINKS=\"buagent \\\n                     checkroot.sh \\\n                     fancontrol \\\n                     halt \\\n                     hwclock.sh \\\n                     hwclockfirst.sh \\\n                     ifupdown \\\n                     ifupdown-clean \\\n                     kerneloops \\\n                     klogd \\\n                     mountall-bootclean.sh \\\n                  
   mountall.sh \\\n                     mountdevsubfs.sh \\\n                     mountkernfs.sh \\\n                     mountnfs-bootclean.sh \\\n                     mountnfs.sh \\\n                     mountoverflowtmp \\\n                     mountvirtfs \\\n                     mtab.sh \\\n                     networking \\\n                     procps \\\n                     reboot \\\n                     sendsigs \\\n                     setserial \\\n                     svscan \\\n                     sysstat \\\n                     umountfs \\\n                     umountnfs.sh \\\n                     umountroot \\\n                     urandom \\\n                     vnstat\"\n      for _link in ${_REMOVE_LINKS}; do\n        if [ -e \"/etc/init.d/${_link}\" ]; then\n          update-rc.d -f ${_link} remove &> /dev/null\n          mv -f /etc/init.d/${_link} /var/backups/init.d.${_link}\n        fi\n      done\n      for s in cron dbus ssh; do\n        if [ -e \"/etc/init.d/${s}\" ]; then\n          sed -rn -e 's/^(# Default-Stop:).*$/\\1 0 1 6/' -e '/^### BEGIN INIT INFO/,/^### END INIT INFO/p' /etc/init.d/${s} > /etc/insserv/overrides/${s}\n        fi\n      done\n      /sbin/insserv -v -d &> /dev/null\n    fi\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  if [ -e \"${_barCnf}\" ]; then\n    source ${_barCnf}\n  fi\n  [ \"$1\" = \"check\" ] && exit 0\n}\n\n[ \"$1\" != \"check\" ] && _check_root\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! 
-x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    ${_INSTAPP} netcat-traditional &> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! -e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n_find_fast_mirror_early\n\n_sysctl_update() {\n  if [ ! -e \"/root/.no.sysctl.update.cnf\" ] \\\n    && [ ! -e \"/var/backups/sysctl.conf-${_xSrl}.log\" ]; then\n    _find_fast_mirror_early\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    cd /var/backups\n    rm -f /var/backups/sysctl.conf\n    curl ${_crlGet} \"${_urlHmr}/conf/var/sysctl.conf\" -o sysctl.conf\n    if [ -e \"/var/backups/sysctl.conf\" ]; then\n      cp -af /var/backups/sysctl.conf /etc/sysctl.conf\n    fi\n    if [ -e \"/etc/security/limits.conf\" ]; then\n      _IF_NF=$(grep '2097152' /etc/security/limits.conf 2>&1)\n      if [ ! 
-z \"${_IF_NF}\" ]; then\n        sed -i \"s/.*2097152.*//g\" /etc/security/limits.conf\n        wait\n      fi\n      _IF_NF=$(grep '524288' /etc/security/limits.conf 2>&1)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"*         soft    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"root      hard    nofile      1048576\" >> /etc/security/limits.conf\n        echo \"root      soft    nofile      1048576\" >> /etc/security/limits.conf\n      fi\n      _IF_NF=$(grep '65556' /etc/security/limits.conf 2>&1)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nproc       65556\"   >> /etc/security/limits.conf\n        echo \"*         soft    nproc       65556\"   >> /etc/security/limits.conf\n      fi\n    fi\n    if [ -e \"/boot/grub/grub.cfg\" ] || [ -e \"/boot/grub/menu.lst\" ]; then\n      #echo never > /sys/kernel/mm/transparent_hugepage/enabled\n      if [ -e \"/etc/sysctl.conf\" ]; then\n        sysctl -p /etc/sysctl.conf &> /dev/null\n      fi\n    else\n      if [ -e \"/etc/sysctl.conf\" ]; then\n        sysctl -p /etc/sysctl.conf &> /dev/null\n      fi\n    fi\n    touch /var/backups/sysctl.conf-${_xSrl}.log\n  fi\n}\n_sysctl_update\n\nif [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n  _SQL_PSWD=$(cat /root/.my.cluster_root_pwd.txt 2>/dev/null | tr -d '\\n')\nelif [ -e \"/root/.my.pass.txt\" ]; then\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\nfi\n\n_SQL_HOST=\"127.0.0.1\"\n_SQL_PORT=\"3306\"\n\nif [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n  _SQL_PORT=\"6033\"\n  _SQL_HOST=\"127.0.0.1\"\nelse\n  if [ -e \"/root/.my.cluster_write_node.txt\" ]; then\n    _SQL_HOST=$(cat /root/.my.cluster_write_node.txt 2>&1)\n    _SQL_HOST=$(echo -n ${_SQL_HOST} | tr -d \"\\n\" 2>&1)\n  fi\nfi\n\n_C_SQL=\"mysql --user=root --password=${_SQL_PSWD} --host=${_SQL_HOST} --port=${_SQL_PORT} --protocol=tcp\"\n\necho \"SQL 
--host=${_SQL_HOST} --port=${_SQL_PORT}\"\n\n_DATE=$(date +%y%m%d-%H%M%S)\n_TODAY=$(date +%y%m%d)\n_TODAY=${_TODAY//[^0-9]/}\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n_LOG_DIR=\"${_vBs}/reports/up/$(basename \"$0\")/${_TODAY}\"\n_UP_LOG=\"${_LOG_DIR}/$(basename \"$0\")-up-${_NOW}.log\"\n\n_create_locks() {\n  echo \"running ${_tVr} ${_hName} _create_locks procedure...\"\n  if [ ! -e \"/run/mysql_restart_running.pid\" ]; then\n    echo \"Creating locks...\"\n    touch /run/mysql_restart_running.pid\n  fi\n}\n\n_remove_locks() {\n  echo \"running ${_tVr} ${_hName} _remove_locks procedure...\"\n  if [ -e \"/run/mysql_restart_running.pid\" ]; then\n    echo \"Removing locks...\"\n    rm -f /run/mysql_restart_running.pid\n  fi\n}\n\n_check_restart_if_running() {\n  echo \"running ${_tVr} ${_hName} _check_restart_if_running procedure...\"\n  if [ -e \"/run/mysql_restart_running.pid\" ]; then\n    echo \"MySQLD restart procedure in progress?\"\n    echo \"Nothing to do, let's quit now. Bye!\"\n    exit 1\n  fi\n}\n\n_start_sql() {\n  echo \"running ${_tVr} ${_hName} _start_sql procedure...\"\n  _check_restart_if_running\n  _create_locks\n\n  _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n  if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n    echo \"MySQLD already running?\"\n    echo \"Nothing to do. Bye!\"\n    _remove_locks\n    [ \"$1\" != \"chain\" ] && exit 1\n  fi\n\n  echo \"Starting MySQLD again...\"\n  renice ${_B_NICE} -p $$ &> /dev/null\n  update-rc.d -f mysql remove &> /dev/null\n  if [ \"$1\" = \"init\" ]; then\n    _MASTER_START=YES\n    sed -i \"s/^safe_to_bootstrap.*/safe_to_bootstrap: 1/g\"  /var/lib/mysql/grastate.dat\n    service mysql bootstrap-pxc\n  else\n    _MASTER_START=NO\n    service mysql start\n  fi\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! 
-e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"Waiting for MySQLD graceful start...\"\n    sleep 3\n  done\n  echo \"MySQLD started\"\n\n  _remove_locks\n  echo \"MySQLD start procedure completed\"\n  mysql -u root -e \"SET GLOBAL optimizer_switch='derived_merge=off';\"\n  [ \"$1\" != \"chain\" ] && exit 0\n}\n\n_stop_sql() {\n  echo \"running ${_tVr} ${_hName} _stop_sql procedure...\"\n  _check_restart_if_running\n  _create_locks\n\n  _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n  if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n    echo \"Preparing MySQLD for quick shutdown...\"\n    _DBS_TEST=\"$(which mysql)\"\n    if [ ! -z \"${_DBS_TEST}\" ]; then\n      _DB_SERVER_TEST=$(mysql -V 2>&1)\n    fi\n    if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n      _DB_V=8.4\n    elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n      _DB_V=8.0\n    elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n      _DB_V=5.7\n    fi\n    mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n    fi\n    mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n    echo \"Stopping MySQLD now...\"\n    update-rc.d -f mysql remove &> /dev/null\n    service mysql stop\n  else\n    echo \"MySQLD already stopped?\"\n    echo \"Nothing to do. 
Bye!\"\n    _remove_locks\n    [ \"$1\" != \"chain\" ] && exit 1\n  fi\n\n  until [ -z \"${_IS_MYSQLD_RUNNING}\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"Waiting for MySQLD graceful shutdown...\"\n    sleep 3\n  done\n  echo \"MySQLD stopped\"\n\n  _remove_locks\n  echo \"MySQLD stop procedure completed\"\n  [ \"$1\" != \"chain\" ] && exit 0\n}\n\n_re_start_sql() {\n  echo \"running ${_tVr} ${_hName} _re_start_sql procedure...\"\n  _stop_sql \"chain\"\n  _start_sql \"chain\"\n  exit 0\n}\n\n_if_re_start_sql() {\n  echo \"running ${_tVr} ${_hName} _if_re_start_sql procedure...\"\n  _DB_SERVER=Percona\n  _myCnf=\"/etc/mysql/my.cnf\"\n  _preCnf=\"${_vBs}/dragon/t/my.cnf-pre-${_NOW}\"\n  if [ -f \"${_myCnf}\" ]; then\n    _myCnfUpdate=NO\n    _myRstrd=NO\n    if [ ! -f \"${_preCnf}\" ]; then\n      mkdir -p ${_vBs}/dragon/t/\n      cp -af ${_myCnf} ${_preCnf}\n    fi\n    _diffMyTest=$(diff -w -B \\\n      -I tmp_table_size \\\n      -I max_heap_table_size \\\n      -I myisam_sort_buffer_size \\\n      -I key_buffer_size ${_myCnf} ${_preCnf} 2>&1)\n    if [ -z \"${_diffMyTest}\" ]; then\n      _myCnfUpdate=NO\n      echo \"INFO: ${_DB_SERVER} diff0 empty\"\n    else\n      _myCnfUpdate=YES\n      echo \"INFO: ${_DB_SERVER} diff1 ${_diffMyTest}\"\n    fi\n    if [[ \"${_diffMyTest}\" =~ \"innodb_buffer_pool_size\" ]]; then\n      _myCnfUpdate=YES\n      echo \"INFO: ${_DB_SERVER} diff2 ${_diffMyTest}\"\n    fi\n    if [[ \"${_diffMyTest}\" =~ \"No such file or directory\" ]]; then\n      _myCnfUpdate=NO\n      echo \"INFO: ${_DB_SERVER} diff3 ${_diffMyTest}\"\n    fi\n  fi\n  _myUptime=$(mysqladmin -u root version | grep -i uptime 2>&1)\n  _myUptime=$(echo -n ${_myUptime} | fmt -su -w 2500 2>&1)\n  echo \"INFO: ${_DB_SERVER} ${_myUptime}\"\n  if [ \"${_myCnfUpdate}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: Restarting ${_DB_SERVER} server...\"\n    fi\n    _re_start_sql\n    if [ 
\"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: ${_DB_SERVER} server restart completed\"\n    fi\n    _myRstrd=YES\n  fi\n}\n\n_check_mysql_up() {\n  echo \"running ${_tVr} ${_hName} _check_mysql_up procedure...\"\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"Waiting for MySQLD availability...\"\n    sleep 15\n    _start_sql \"chain\"\n  done\n}\n\n_truncate_cache_tables() {\n  echo \"running ${_tVr} ${_hName} _truncate_cache_tables procedure...\"\n  _check_mysql_up\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^cache | uniq | sort 2>&1)\n  for C in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${C};\nEOFMYSQL\n    sleep 1\n  done\n}\n\n_truncate_watchdog_tables() {\n  echo \"running ${_tVr} ${_hName} _truncate_watchdog_tables procedure...\"\n  _check_mysql_up\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^watchdog$ 2>&1)\n  for A in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${A};\nEOFMYSQL\n    sleep 1\n  done\n}\n\n_truncate_accesslog_tables() {\n  echo \"running ${_tVr} ${_hName} _truncate_accesslog_tables procedure...\"\n  _check_mysql_up\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^accesslog$ 2>&1)\n  for A in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${A};\nEOFMYSQL\n    sleep 1\n  done\n}\n\n_truncate_queue_tables() {\n  echo \"running ${_tVr} ${_hName} _truncate_queue_tables procedure...\"\n  _check_mysql_up\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^queue$ 2>&1)\n  for Q in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${Q};\nEOFMYSQL\n    sleep 1\n  done\n}\n\n_repair_this_database() {\n  echo \"running ${_tVr} ${_hName} _repair_this_database procedure...\"\n  _check_mysql_up\n  mysqlcheck --host=${_SQL_HOST} --port=${_SQL_PORT} --protocol=tcp -u root --auto-repair --silent ${_DB}\n}\n\n_optimize_this_database() {\n  echo \"running ${_tVr} 
${_hName} _optimize_this_database procedure...\"\n  _check_mysql_up\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | uniq | sort 2>&1)\n  for T in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nOPTIMIZE TABLE ${T};\nEOFMYSQL\n  done\n}\n\n_convert_to_innodb() {\n  echo \"running ${_tVr} ${_hName} _convert_to_innodb procedure...\"\n  _check_mysql_up\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | uniq | sort 2>&1)\n  for T in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nALTER TABLE ${T} ENGINE=INNODB;\nEOFMYSQL\n  done\n}\n\n#\n# Update innodb_log_file_size.\n_innodb_log_file_size_update() {\n  _check_mysql_version\n  echo \"running ${_tVr} ${_hName} _innodb_log_file_size_update procedure...\"\n  echo \"INFO: InnoDB log file will be set to ${_INNODB_LOG_FILE_SIZE_MB}...\"\n  mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 0;\" &> /dev/null\n  _stop_sql \"chain\"\n  _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n  if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n    mkdir -p ${_vBs}/old-sql-ib-log-${_NOW}\n    sleep 1\n    mv -f /var/lib/mysql/ib_logfile0 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n    mv -f /var/lib/mysql/ib_logfile1 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      sed -i \"s/.*innodb_redo_log_capacity.*/#innodb_redo_log_capacity    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n      wait\n      sed -i \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n      wait\n      echo \"innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.cluster_innodb_log_file_size.txt\n    else\n      sed -i \"s/.*innodb_redo_log_capacity.*/innodb_redo_log_capacity    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n      wait\n      sed -i \"s/.*innodb_log_file_size.*/#innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n      wait\n      echo \"innodb_redo_log_capacity    = ${_INNODB_LOG_FILE_SIZE_MB}\" > 
/root/.my.cluster_innodb_log_file_size.txt\n    fi\n    wait\n    _start_sql \"chain\"\n  else\n    echo \"INFO: Waiting 180s for ${_DB_SERVER} clean shutdown...\"\n    _stop_sql \"chain\"\n    sleep 180\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n      mkdir -p ${_vBs}/old-sql-ib-log-${_NOW}\n      sleep 1\n      mv -f /var/lib/mysql/ib_logfile0 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      mv -f /var/lib/mysql/ib_logfile1 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        sed -i \"s/.*innodb_redo_log_capacity.*/#innodb_redo_log_capacity    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n        wait\n        sed -i \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n        wait\n        echo \"innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.cluster_innodb_log_file_size.txt\n      else\n        sed -i \"s/.*innodb_redo_log_capacity.*/innodb_redo_log_capacity    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n        wait\n        sed -i \"s/.*innodb_log_file_size.*/#innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n        wait\n        echo \"innodb_redo_log_capacity    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.cluster_innodb_log_file_size.txt\n      fi\n      _start_sql \"chain\"\n    else\n      echo \"WARN: ${_DB_SERVER} refused to stop, InnoDB log file size not updated\"\n      sleep 5\n    fi\n  fi\n}\n\n#\n# Update SQL Config.\n_sql_conf_update() {\n  echo \"running ${_tVr} ${_hName} _sql_conf_update procedure...\"\n  sed -i \"s/.*innodb_force_recovery/#innodb_force_recovery/g\" /etc/mysql/my.cnf\n  wait\n  sed -i \"s/.*innodb_corrupt_table_action/#innodb_corrupt_table_action/g\" /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^thread_concurrency.*//g\" /etc/mysql/my.cnf\n  wait\n  if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n    
_INNODB_LOG_FILE_SIZE=${_INNODB_LOG_FILE_SIZE//[^0-9]/}\n    if [ ! -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -ge 50 ]; then\n        _INNODB_LOG_FILE_SIZE_MB=\"${_INNODB_LOG_FILE_SIZE}M\"\n        _INNODB_LOG_FILE_SIZE_TEST=$(grep \"innodb_log_file_size\" \\\n          ${_vBs}/dragon/t/my.cnf-pre-${_NOW} 2>&1)\n        if [[ \"${_INNODB_LOG_FILE_SIZE_TEST}\" =~ \"= ${_INNODB_LOG_FILE_SIZE_MB}\" ]]; then\n          _INNODB_LOG_FILE_SIZE_SAME=YES\n        else\n          _INNODB_LOG_FILE_SIZE_SAME=NO\n        fi\n      fi\n    fi\n    sed -i \"s/.*slow_query_log/#slow_query_log/g\" /etc/mysql/my.cnf\n    wait\n    sed -i \"s/.*long_query_time/#long_query_time/g\" /etc/mysql/my.cnf\n    wait\n    sed -i \"s/.*slow_query_log_file/#slow_query_log_file/g\" /etc/mysql/my.cnf\n    wait\n    echo \"skip-name-resolve\" > /etc/mysql/skip-name-resolve.txt\n    if [ ! -e \"/etc/mysql/skip-name-resolve.txt\" ]; then\n      sed -i \"s/.*skip-name-resolve/#skip-name-resolve/g\" /etc/mysql/my.cnf\n      wait\n    fi\n  fi\n  mv -f /etc/mysql/my.cnf-pre* ${_vBs}/dragon/t/ &> /dev/null\n  sed -i \"s/.*default-table-type/#default-table-type/g\" /etc/mysql/my.cnf\n  wait\n  sed -i \"s/.*language/#language/g\" /etc/mysql/my.cnf\n  wait\n  sed -i \"s/.*innodb_lazy_drop_table.*//g\" /etc/mysql/my.cnf\n  wait\n  if [ ! 
-z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n    if [ \"${_INNODB_LOG_FILE_SIZE}\" -ge 50 ]; then\n      _INNODB_LOG_FILE_SIZE_MB=\"${_INNODB_LOG_FILE_SIZE}M\"\n      _INNODB_LOG_FILE_SIZE_TEST=$(grep \"innodb_log_file_size\" \\\n        /root/.my.cluster_innodb_log_file_size.txt 2>&1)\n      if [[ \"${_INNODB_LOG_FILE_SIZE_TEST}\" =~ \"= ${_INNODB_LOG_FILE_SIZE_MB}\" ]]; then\n        echo \"No changes on ${_hName} for ${_INNODB_LOG_FILE_SIZE_MB}\"\n      else\n        if [ \"${_INNODB_LOG_FILE_SIZE_SAME}\" = \"YES\" ]; then\n          sed -i \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" /etc/mysql/my.cnf\n          echo \"innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.cluster_innodb_log_file_size.txt\n          wait\n        else\n          echo \"Changes required on ${_hName} for ${_INNODB_LOG_FILE_SIZE_MB}\"\n          echo \"innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.cluster_innodb_log_file_size.txt\n          _innodb_log_file_size_update\n        fi\n      fi\n    fi\n  fi\n}\n\n#\n# Tune memory limits for SQL server.\n_tune_sql_memory_limits() {\n  _check_mysql_up\n  echo \"running ${_tVr} ${_hName} _tune_sql_memory_limits procedure...\"\n  if [ ! -e \"${_vBs}/dragon/t/my.cnf-pre-${_NOW}\" ]; then\n    mkdir -p ${_vBs}/dragon/t/\n    if [ -e \"/etc/mysql/my.cnf\" ]; then\n      cp -af /etc/mysql/my.cnf ${_vBs}/dragon/t/my.cnf-pre-${_NOW}\n    fi\n  fi\n  # https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl\n  _pthTun=\"/var/opt/mysqltuner.pl\"\n  _outTun=\"/var/opt/mysqltuner-${_NOW}.txt\"\n  if [ ! -e \"${_outTun}\" ]; then\n    echo \"INFO: Running MySQLTuner check on all databases\"\n    echo \"WAIT: This may take a while, please wait...\"\n    _MYSQLTUNER_TEST_RESULT=OK\n    rm -f /var/opt/mysqltuner*\n    curl ${_crlGet} \"${_urlDev}/mysqltuner.pl.${_MYSQLTUNER_VRN}\" -o ${_pthTun}\n    if [ ! 
-e \"${_pthTun}\" ]; then\n      curl ${_crlGet} \"${_urlDev}/mysqltuner.pl\" -o ${_pthTun}\n    fi\n    if [ -e \"${_pthTun}\" ]; then\n      perl ${_pthTun} > ${_outTun} 2>&1\n    fi\n  fi\n  if [ -e \"${_pthTun}\" ] \\\n    && [ -e \"${_outTun}\" ]; then\n    _REC_MYISAM_MEM=$(cat ${_outTun} \\\n      | grep \"Data in MyISAM tables\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $1}' 2>&1)\n    _REC_INNODB_MEM=$(cat ${_outTun} \\\n      | grep \"data size:\" \\\n      | cut -d/ -f3 \\\n      | awk '{ print $1}' 2>&1)\n    _MYSQLTUNER_TEST=$(cat ${_outTun} 2>&1)\n    cp -a ${_outTun} ${_pthLog}/\n    if [ -z \"${_REC_INNODB_MEM}\" ] \\\n      || [[ \"${_MYSQLTUNER_TEST}\" =~ \"Cannot calculate MyISAM index\" ]] \\\n      || [[ \"${_MYSQLTUNER_TEST}\" =~ \"InnoDB is enabled but isn\" ]]; then\n      _MYSQLTUNER_TEST_RESULT=FAIL\n      echo \"ALRT! The MySQLTuner test failed!\"\n      echo \"ALRT! Please review ${_outTun}\"\n      echo \"ALRT! We will use some sane SQL defaults instead, do not worry!\"\n    fi\n    ###--------------------###\n    if [ ! 
-z \"${_REC_MYISAM_MEM}\" ] \\\n      && [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"OK\" ]; then\n      _RAW_MYISAM_MEM=$(echo ${_REC_MYISAM_MEM} | sed \"s/[A-Z]//g\" 2>&1)\n      if [[ \"${_REC_MYISAM_MEM}\" =~ \"G\" ]]; then\n        _RAW_MYISAM_MEM=$(echo ${_RAW_MYISAM_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_MYISAM_MEM=$(echo \"${_RAW_MYISAM_MEM} * 1024\" | bc -l 2>&1)\n      elif [[ \"${_REC_MYISAM_MEM}\" =~ \"M\" ]]; then\n        _RAW_MYISAM_MEM=$(echo ${_RAW_MYISAM_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_MYISAM_MEM=$(echo \"${_RAW_MYISAM_MEM} * 1\" | bc -l 2>&1)\n      fi\n      _RAW_MYISAM_MEM=$(echo \"(${_RAW_MYISAM_MEM}+0.5)/1\" | bc 2>&1)\n      if [ \"${_RAW_MYISAM_MEM}\" -gt \"${_USE_SQL}\" ]; then\n        _USE_MYISAM_MEM=\"${_USE_SQL}\"\n      else\n        _RAW_MYISAM_MEM=$(echo \"scale=2; (${_RAW_MYISAM_MEM} * 1.1)\" | bc 2>&1)\n        _USE_MYISAM_MEM=$(echo \"(${_RAW_MYISAM_MEM}+0.5)/1\" | bc 2>&1)\n      fi\n      if [ \"${_USE_MYISAM_MEM}\" -lt 256 ] \\\n        || [ -z \"${_USE_MYISAM_MEM}\" ]; then\n        _USE_MYISAM_MEM=\"${_USE_SQL}\"\n      fi\n      _USE_MYISAM_MEM=\"${_USE_MYISAM_MEM}M\"\n      sed -i \"s/^key_buffer_size.*/key_buffer_size         = ${_USE_MYISAM_MEM}/g\"  /etc/mysql/my.cnf\n      wait\n    else\n      _USE_MYISAM_MEM=\"${_USE_SQL}M\"\n      if [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"FAIL\" ]; then\n        echo \"NOTE: _USE_MYISAM_MEM is ${_USE_MYISAM_MEM} because _REC_MYISAM_MEM was empty!\"\n      fi\n      sed -i \"s/^key_buffer_size.*/key_buffer_size         = ${_USE_MYISAM_MEM}/g\"  /etc/mysql/my.cnf\n      wait\n    fi\n    ###--------------------###\n    if [ ! 
-z \"${_REC_INNODB_MEM}\" ] \\\n      && [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"OK\" ]; then\n      echo _REC_INNODB_MEM is ${_REC_INNODB_MEM}\n      _RAW_INNODB_MEM=$(echo ${_REC_INNODB_MEM} | sed \"s/[A-Z]//g\" 2>&1)\n      if [[ \"${_REC_INNODB_MEM}\" =~ \"G\" ]]; then\n        _RAW_INNODB_MEM=$(echo ${_RAW_INNODB_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_INNODB_MEM=$(echo \"${_RAW_INNODB_MEM} * 1024\" | bc -l 2>&1)\n      elif [[ \"${_REC_INNODB_MEM}\" =~ \"M\" ]]; then\n        _RAW_INNODB_MEM=$(echo ${_RAW_INNODB_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_INNODB_MEM=$(echo \"${_RAW_INNODB_MEM} * 1\" | bc -l 2>&1)\n      fi\n      _RAW_INNODB_MEM=$(echo \"(${_RAW_INNODB_MEM}+0.5)/1\" | bc 2>&1)\n      echo _RAW_INNODB_MEM is ${_RAW_INNODB_MEM}\n      if [ \"${_RAW_INNODB_MEM}\" -gt \"${_USE_SQL}\" ] \\\n        || [ -z \"${_RAW_INNODB_MEM}\" ] \\\n        || [ \"${_RAW_INNODB_MEM}\" -lt 512 ]; then\n        _USE_INNODB_MEM=\"${_USE_SQL}\"\n      else\n        _RAW_INNODB_MEM=$(echo \"scale=2; (${_RAW_INNODB_MEM} * 1.1)\" | bc 2>&1)\n        _USE_INNODB_MEM=$(echo \"(${_RAW_INNODB_MEM}+0.5)/1\" | bc 2>&1)\n      fi\n      echo _RAW_INNODB_MEM is ${_RAW_INNODB_MEM}\n      echo _USE_INNODB_MEM is ${_USE_INNODB_MEM}\n      _INNODB_BPI=$(echo \"scale=0; ${_USE_INNODB_MEM}/1024/2\" | bc 2>&1)\n      echo Initial _INNODB_BPI is ${_INNODB_BPI}\n      if [ \"${_INNODB_BPI}\" -lt 1 ] || [ -z \"${_INNODB_BPI}\" ]; then\n        _INNODB_BPI=\"1\"\n        echo Forced _INNODB_BPI is ${_INNODB_BPI}\n      fi\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/^innodb_buffer_pool_instances.*/innodb_buffer_pool_instances = ${_INNODB_BPI}/g\" /etc/mysql/my.cnf\n        wait\n        sed -i \"s/^innodb_page_cleaners.*/innodb_page_cleaners = ${_INNODB_BPI}/g\" /etc/mysql/my.cnf\n        wait\n      fi\n      _INNODB_LOG_FILE_SIZE=$(echo \"scale=0; ${_USE_INNODB_MEM}/4/40*40\" | bc 2>&1)\n      _DB_COUNT=$(ls /var/lib/mysql/ | wc -l 2>&1)\n
      if [ \"${_DB_COUNT}\" -gt 3 ]; then\n        if [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 64 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 256 ]; then\n          _INNODB_LOG_FILE_SIZE=256\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 256 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 512 ]; then\n          _INNODB_LOG_FILE_SIZE=512\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 512 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 1024 ]; then\n          _INNODB_LOG_FILE_SIZE=1024\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 1024 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 2048 ]; then\n          _INNODB_LOG_FILE_SIZE=2048\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 2048 ]; then\n          _INNODB_LOG_FILE_SIZE=2048\n        fi\n      fi\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -le 64 ] \\\n        || [ -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n        _INNODB_LOG_FILE_SIZE=64\n      fi\n      _USE_INNODB_MEM=\"${_USE_INNODB_MEM}M\"\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/^innodb_buffer_pool_size.*/innodb_buffer_pool_size = ${_USE_INNODB_MEM}/g\"  /etc/mysql/my.cnf\n      fi\n      wait\n    else\n      _USE_INNODB_MEM=\"${_USE_SQL}M\"\n      _INNODB_LOG_FILE_SIZE=$(echo \"scale=0; ${_USE_SQL}/4/40*40\" | bc 2>&1)\n      _DB_COUNT=$(ls /var/lib/mysql/ | wc -l 2>&1)\n      if [ \"${_DB_COUNT}\" -gt 3 ]; then\n        if [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 64 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 256 ]; then\n          _INNODB_LOG_FILE_SIZE=256\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 256 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 512 ]; then\n          _INNODB_LOG_FILE_SIZE=512\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 512 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 1024 ]; then\n          _INNODB_LOG_FILE_SIZE=1024\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 1024 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 2048 ]; then\n          
_INNODB_LOG_FILE_SIZE=2048\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 2048 ]; then\n          _INNODB_LOG_FILE_SIZE=2048\n        fi\n      fi\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -le 64 ] \\\n        || [ -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n        _INNODB_LOG_FILE_SIZE=64\n      fi\n      echo \"NOTE: _USE_INNODB_MEM is ${_USE_INNODB_MEM} because _REC_INNODB_MEM was empty!\"\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/^innodb_buffer_pool_size.*/innodb_buffer_pool_size = ${_USE_INNODB_MEM}/g\"  /etc/mysql/my.cnf\n      fi\n      wait\n    fi\n  else\n    _THIS_USE_MEM=\"${_USE_SQL}M\"\n    if [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"FAIL\" ]; then\n      echo \"NOTE: _USE_MYISAM_MEM is ${_THIS_USE_MEM} because _REC_MYISAM_MEM was empty!\"\n      echo \"NOTE: _USE_INNODB_MEM is ${_THIS_USE_MEM} because _REC_INNODB_MEM was empty!\"\n    fi\n    _INNODB_LOG_FILE_SIZE=$(echo \"scale=0; ${_USE_SQL}/4/40*40\" | bc 2>&1)\n    _DB_COUNT=$(ls /var/lib/mysql/ | wc -l 2>&1)\n    if [ \"${_DB_COUNT}\" -gt 3 ]; then\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 64 ] \\\n        && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 256 ]; then\n        _INNODB_LOG_FILE_SIZE=256\n      elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 256 ] \\\n        && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 512 ]; then\n        _INNODB_LOG_FILE_SIZE=512\n      elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 512 ] \\\n        && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 1024 ]; then\n        _INNODB_LOG_FILE_SIZE=1024\n      elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 1024 ] \\\n        && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 2048 ]; then\n        _INNODB_LOG_FILE_SIZE=2048\n      elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 2048 ]; then\n        _INNODB_LOG_FILE_SIZE=2048\n      fi\n    fi\n    if [ \"${_INNODB_LOG_FILE_SIZE}\" -le 64 ] \\\n      || [ -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n      _INNODB_LOG_FILE_SIZE=64\n    fi\n  fi\n}\n\n#\n# Tune memory limits for Percona.\n_tune_memory_limits() {\n  echo 
\"running ${_tVr} ${_hName} _tune_memory_limits procedure...\"\n  _RAM=$(free -mt | grep Mem: | awk '{ print $2 }' 2>&1)\n  if [ \"${_RESERVED_RAM}\" -gt 0 ]; then\n    _RAM=$(( _RAM - _RESERVED_RAM ))\n  else\n    _RESERVED_RAM=$(( _RAM / 4 ))\n    _RAM=$(( _RAM - _RESERVED_RAM ))\n  fi\n  _USE=$(( _RAM / 4 ))\n  if [ \"${_VMFAMILY}\" = \"VS\" ] \\\n    || [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n    if [ \"${_VMFAMILY}\" = \"VS\" ]; then\n      if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n        _USE_SQL=$(( _RAM / 24 ))\n      elif [ -e \"/root/.tg.cnf\" ]; then\n        _USE_SQL=$(( _RAM / 12 ))\n      else\n        _USE_SQL=$(( _RAM / 24 ))\n      fi\n    else\n      _USE_SQL=$(( _RAM / 8 ))\n    fi\n  else\n    _USE_SQL=$(( _RAM / 8 ))\n  fi\n  if [ \"${_USE_SQL}\" -lt 64 ]; then\n    _USE_SQL=64\n  fi\n  _TMP_SQL=\"${_USE_SQL}M\"\n  _SRT_SQL=$(( _USE_SQL * 2 ))\n  _SRT_SQL=\"${_SRT_SQL}K\"\n  if [ \"${_USE}\" -ge 512 ] && [ \"${_USE}\" -lt 2048 ]; then\n    _USE_PHP=1024\n    _USE_OPC=1024\n    _QCE_SQL=64M\n    _RND_SQL=8M\n    _JBF_SQL=4M\n    if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n      _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n    else\n      _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n    fi\n    _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n  elif [ \"${_USE}\" -ge 2048 ]; then\n    if [ \"${_VMFAMILY}\" = \"XEN\" ] || [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n      _USE_PHP=2048\n      _USE_OPC=2048\n      _QCE_SQL=64M\n      _RND_SQL=8M\n      _JBF_SQL=4M\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n      else\n        _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n      fi\n      _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n    elif [ \"${_VMFAMILY}\" = \"VS\" ] || [ \"${_VMFAMILY}\" = \"TG\" ]; then\n      if [ -e \"/root/.my.cluster_root_pwd.txt\" ] \\\n        || [ -e \"/root/.tg.cnf\" ]; then\n        _USE_PHP=2048\n        _USE_OPC=2048\n        _QCE_SQL=64M\n        _RND_SQL=8M\n        
_JBF_SQL=4M\n        if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n          _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n        else\n          _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n        fi\n        _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n        if [ \"${_MXC_SQL}\" -lt 10 ]; then\n          _MXC_SQL=10\n        fi\n      else\n        _USE_PHP=2048\n        _USE_OPC=2048\n        _QCE_SQL=64M\n        _RND_SQL=2M\n        _JBF_SQL=2M\n        if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n          _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n        else\n          _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n        fi\n        _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n      fi\n    else\n      _USE_PHP=512\n      _USE_OPC=512\n      _QCE_SQL=32M\n      _RND_SQL=2M\n      _JBF_SQL=2M\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n      else\n        _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n      fi\n      _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n    fi\n  else\n    _USE_PHP=\"${_USE}\"\n    _USE_OPC=\"${_USE}\"\n    _QCE_SQL=32M\n    _RND_SQL=1M\n    _JBF_SQL=1M\n    if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n      _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n    else\n      _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n    fi\n    _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n  fi\n  _tune_sql_memory_limits\n  _sql_conf_update\n  _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n  _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n  _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n    || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n    || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n    || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n    || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n    || [ -e 
\"/root/.my.cluster_root_pwd.txt\" ]; then\n    _UXC_SQL=\"${_MXC_SQL}\"\n  else\n    _UXC_SQL=$(echo \"scale=0; ${_MXC_SQL}/2\" | bc 2>&1)\n  fi\n  sed -i \"s/^max_connect_errors.*/max_connect_errors      = ${_UXC_SQL}/g\"      /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^max_user_connections.*/max_user_connections    = ${_UXC_SQL}/g\"    /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^max_connections.*/max_connections         = ${_MXC_SQL}/g\"         /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^tmp_table_size.*/tmp_table_size          = ${_TMP_SQL}/g\"          /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^max_heap_table_size.*/max_heap_table_size     = ${_TMP_SQL}/g\"     /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^myisam_sort_buffer_size.*/myisam_sort_buffer_size = ${_SRT_SQL}/g\" /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^read_rnd_buffer_size.*/read_rnd_buffer_size    = ${_RND_SQL}/g\"    /etc/mysql/my.cnf\n  wait\n  sed -i \"s/^join_buffer_size.*/join_buffer_size        = ${_JBF_SQL}/g\"        /etc/mysql/my.cnf\n  wait\n  echo _USE_SQL is ${_USE_SQL}\n  echo _USE_PHP is ${_USE_PHP}\n  echo _UXC_SQL is ${_UXC_SQL}\n  echo _MXC_SQL is ${_MXC_SQL}\n  echo _TMP_SQL is ${_TMP_SQL}\n  echo _QCE_SQL is ${_QCE_SQL}\n  echo _RND_SQL is ${_RND_SQL}\n  echo _JBF_SQL is ${_JBF_SQL}\n  echo _SRT_SQL is ${_SRT_SQL}\n  echo _INNODB_BPI is ${_INNODB_BPI}\n  echo _PHP_FPM_WORKERS is ${_PHP_FPM_WORKERS}\n}\n\n_tune_sql() {\n  echo \"running ${_tVr} ${_hName} _tune_sql procedure...\"\n  rm -f /run/mysql_restart_running.pid\n  _check_mysql_up\n  _tune_memory_limits\n  _if_re_start_sql\n  exit 0\n}\n\n_db_head_to_stretch_first() {\n  if [ -e \"/run/sshd.pid\" ] && [ -e \"/run/crond.pid\" ]; then\n    _HOST_HOSTNAME=`hostname 2>&1`\n    if [[ \"${_OS_CODE}\" =~ \"stretch\" ]]; then\n      echo \"The ${_HOST_HOSTNAME} is already running Stretch\"\n    elif [[ \"${_OS_CODE}\" =~ \"jessie\" ]]; then\n  \t  cat /etc/resolv.conf > /var/backups/resolv.conf.pre-dist-upgrade\n  \t  chattr -i /etc/resolv.conf\n  
\t  rm -f /etc/resolv.conf\n      echo \"### BOA-DNS-Config ###\" > /etc/resolv.conf\n      echo \"nameserver 1.1.1.1\" >> /etc/resolv.conf\n      echo \"nameserver 8.8.8.8\" >> /etc/resolv.conf\n      echo \"nameserver 9.9.9.9\" >> /etc/resolv.conf\n      chmod 0644 /etc/resolv.conf\n  \t  cat /etc/resolv.conf\n  \t  if [ ! -e \"/etc/apt/preferences.d/offsystemd\" ]; then\n  \t    rm -f /etc/apt/preferences.d/systemd\n        echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n        echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n      fi\n  \t  _apt_clean_update\n      echo \"curl install\"           | dpkg --set-selections &> /dev/null\n      echo \"git install\"            | dpkg --set-selections &> /dev/null\n      echo \"git-core install\"       | dpkg --set-selections &> /dev/null\n      echo \"git-man install\"        | dpkg --set-selections &> /dev/null\n      echo \"libssl-dev install\"     | dpkg --set-selections &> /dev/null\n      echo \"nginx install\"          | dpkg --set-selections &> /dev/null\n      echo \"nginx-common install\"   | dpkg --set-selections &> /dev/null\n      echo \"openssh-client install\" | dpkg --set-selections &> /dev/null\n      echo \"openssh-server install\" | dpkg --set-selections &> /dev/null\n      echo \"openssh-sftp-server install\" | dpkg --set-selections &> /dev/null\n      echo \"openssl install\"        | dpkg --set-selections &> /dev/null\n      echo \"ssh install\"            | dpkg --set-selections &> /dev/null\n      echo \"sysvinit-core install\"  | dpkg --set-selections &> /dev/null\n      echo \"sysvinit-utils install\" | dpkg --set-selections &> /dev/null\n      echo \"zlib1g install\"         | dpkg --set-selections &> /dev/null\n      echo \"zlib1g-dev install\"     | dpkg --set-selections &> /dev/null\n      echo \"zlibc install\"          | dpkg --set-selections &> /dev/null\n  \t  apt-get 
upgrade ${_nrmUpArg}\n  \t  apt-get install lsb-release ${_nrmUpArg}\n      ### Check if we can continue\n      _AUDIT_DPKG=$(dpkg --audit 2>&1)\n      if [ ! -z \"${_AUDIT_DPKG}\" ]; then\n        echo \"ALRT! I can not continue until dpkg --audit is clean\"\n        echo \"ALRT! ${_AUDIT_DPKG}\"\n        echo \"ALRT! Aborting installer NOW!\"\n        exit 1\n      fi\n      _HOLD_TEST_DPKG=$(dpkg --get-selections | grep 'hold$' 2>&1)\n      if [ ! -z \"${_HOLD_TEST_DPKG}\" ]; then\n        echo \"ALRT! I can not continue until these packages are un-hold\"\n        echo \"ALRT! ${_HOLD_TEST_DPKG}\"\n        echo \"ALRT! Aborting installer NOW!\"\n        exit 1\n      fi\n      _HOLD_TEST_ATE=$(aptitude search \"~ahold\" 2>&1)\n      if [ ! -z \"${_HOLD_TEST_ATE}\" ]; then\n        echo \"ALRT! I can not continue until these packages are un-hold\"\n        echo \"ALRT! ${_HOLD_TEST_ATE}\"\n        echo \"ALRT! Aborting installer NOW!\"\n        exit 1\n      fi\n      sed -i \"s/.*DEBIAN LTS.*//g\" /etc/apt/sources.list\n      wait\n      sed -i \"s/.*jessie-lts.*//g\" /etc/apt/sources.list\n      wait\n      sed -i \"s/.*PROPOSED.*//g\"   /etc/apt/sources.list\n      wait\n      sed -i \"s/.*proposed.*//g\"   /etc/apt/sources.list\n      wait\n      sed -i \"s/jessie/stretch/g\"   /etc/apt/sources.list\n      wait\n      sed -i \"s/jessie/stretch/g\"   /etc/apt/sources.list.d/*\n      wait\n      if [ -e \"/etc/apt/apt.conf\" ]; then\n        rm -f /etc/apt/apt.conf\n      fi\n  \t  cat /etc/apt/sources.list\n  \t  _apt_clean_update\n  \t  apt-get install apt -t stretch ${_dstUpArg}\n  \t  apt-get upgrade ${_dstUpArg}\n  \t  apt-get install apt dpkg aptitude util-linux ${_dstUpArg}\n  \t  apt-get upgrade ${_dstUpArg}\n  \t  apt-get dist-upgrade ${_dstUpArg}\n  \t  apt-get dist-upgrade ${_dstUpArg}\n  \t  [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  \t  apt-get install lsb-release ${_dstUpArg}\n  \t  if [ -x 
\"/lib/systemd/systemd\" ]; then\n        ls -la /lib/systemd/systemd\n        _apt_clean_update\n        ${_INSTAPP} sysvinit-core\n        ${_INSTAPP} sysvinit-utils\n        ls -la /usr/share/sysvinit/inittab\n        if [ -e \"/usr/share/sysvinit/inittab\" ]; then\n          cp -af /usr/share/sysvinit/inittab /etc/inittab\n        fi\n        if [ ! -e \"/etc/apt/preferences.d/offsystemd\" ]; then\n          rm -f /etc/apt/preferences.d/systemd\n          echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n          echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n          _apt_clean_update\n        fi\n  \t  fi\n      echo \"Upgrade OS to Stretch 1st stage complete!\"\n      echo \"Bye!\"\n    fi\n  fi\n}\n\n_db_head_to_stretch_second() {\n  if [ -e \"/run/sshd.pid\" ] && [ -e \"/run/crond.pid\" ]; then\n    _HOST_HOSTNAME=`hostname 2>&1`\n    if [[ \"${_OS_CODE}\" =~ \"stretch\" ]]; then\n  \t  if [ -x \"/lib/systemd/systemd\" ]; then\n        ls -la /lib/systemd/systemd\n        _apt_clean_update\n        ${_INSTAPP} sysvinit-core\n        ${_INSTAPP} sysvinit-utils\n        ls -la /usr/share/sysvinit/inittab\n        if [ -e \"/usr/share/sysvinit/inittab\" ]; then\n          cp -af /usr/share/sysvinit/inittab /etc/inittab\n        fi\n        apt-get purge systemd libnss-systemd -y -qq 2> /dev/null\n        apt-get autoremove --purge -y -qq 2> /dev/null\n        apt-get autoclean -y -qq 2> /dev/null\n        if [ ! 
-e \"/etc/apt/preferences.d/offsystemd\" ]; then\n          rm -f /etc/apt/preferences.d/systemd\n          echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n          echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n        fi\n  \t  fi\n  \t  _apt_clean_update\n  \t  apt-get upgrade ${_nrmUpArg}\n  \t  apt-get dist-upgrade ${_nrmUpArg}\n  \t  apt-get dist-upgrade ${_nrmUpArg}\n  \t  [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  \t  apt-get install lsb-release ${_nrmUpArg}\n  \t  echo \"Upgrade OS to Stretch 2nd stage complete!\"\n      echo \"Bye!\"\n    fi\n  fi\n}\n\ncase \"$1\" in\n  stretch-first) _db_head_to_stretch_first ;;\n  stretch-second) _db_head_to_stretch_second ;;\n  restart) _re_start_sql ;;\n  start)   _start_sql \"only\" ;;\n  stop)    _stop_sql  \"only\" ;;\n  init)    _start_sql \"init\" ;;\n  check)   _check_root \"check\" ;;\n  tune)    _tune_sql ;;\n  *)       _tune_sql\n  ;;\nesac\n\nexit 0\n\n\n"
  },
  {
    "path": "aegir/tools/bin/octopus",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_TODAY=$(date +%y%m%d)\nexport _TODAY=${_TODAY//[^0-9]/}\n\n_NOW=$(date +%y%m%d-%H%M%S)\nexport _NOW=${_NOW//[^0-9-]/}\n\n_barCnf=\"/root/.barracuda.cnf\"\n_barName=\"BARRACUDA.sh.txt\"\n_bldPth=\"/opt/tmp/boa\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_filIncB=\"barracuda.sh.cnf\"\n_filIncO=\"octopus.sh.cnf\"\n_gCb=\"git clone --branch\"\n_octName=\"OCTOPUS.sh.txt\"\n_pthIncB=\"lib/settings/${_filIncB}\"\n_pthIncO=\"lib/settings/${_filIncO}\"\n_vBs=\"/var/backups\"\n\n_LOG_DIR=\"${_vBs}/reports/up/$(basename \"$0\")/${_TODAY}\"\n_VMFAMILY=XEN\n_VM_TEST=\"$(uname -a)\"\nif [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n  _VMFAMILY=\"VS\"\nfi\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n_clean_pid_exit() {\n  if [ -n \"${1}\" ]; then\n    echo \"REASON ${1} on $(date)\" >> /root/.octopus.exit.exceptions.log\n    [ -e \"/opt/tmp/boa\" ] && rm -rf /opt/tmp/*\n  fi\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/tmp/aegir_backup_mode.txt\" ] && rm -f /tmp/aegir_backup_mode.txt\n  service cron start &> /dev/null\n  exit 1\n}\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      
_APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n_check_sql_running() {\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"INFO: Waiting for MySQLD availability...\"\n    sleep 3\n  done\n}\n\n_check_sql_access() {\n  if [ -e \"/root/.my.pass.txt\" ] && [ -e \"/root/.my.cnf\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_SYNC_SQL_PSWD=$(grep \"${_SQL_PSWD}\" /root/.my.cnf 2>&1)\n  else\n    echo \"ALERT: /root/.my.cnf or /root/.my.pass.txt not found.\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_a\n  fi\n  if [ -z \"${_IS_SYNC_SQL_PSWD}\" ] \\\n    || [[ ! 
\"${_IS_SYNC_SQL_PSWD}\" =~ \"password=${_SQL_PSWD}\" ]]; then\n    echo \"ALERT: SQL password is out of sync between\"\n    echo \"ALERT: /root/.my.cnf and /root/.my.pass.txt\"\n    echo \"ALERT: Please fix this before trying again, giving up.\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _check_sql_access_b\n  else\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n      echo \"ALERT: SQL server on this system is not running at all.\"\n      echo \"ALERT: Please fix this before trying again, giving up.\"\n      echo \"Bye\"\n      echo \" \"\n      _clean_pid_exit _check_sql_access_c\n    else\n      _MYSQL_CONN_TEST=$(mysql -u root -e \"status\" 2>&1)\n      if [ -z \"${_MYSQL_CONN_TEST}\" ] \\\n        || [[ \"${_MYSQL_CONN_TEST}\" =~ \"Access denied\" ]]; then\n        echo \"ALERT: SQL password in /root/.my.cnf does not work.\"\n        echo \"ALERT: Please fix this before trying again, giving up.\"\n        echo \"Bye\"\n        echo \" \"\n        _clean_pid_exit _check_sql_access_d\n      fi\n    fi\n  fi\n}\n\n_fix_dns_settings() {\n  [ ! -d \"${_vBs}\" ] && mkdir -p ${_vBs}\n  rm -f ${_vBs}/resolv.conf.tmp\n  if ! grep -q \"nameserver 127.0.0.1\" /etc/resolv.conf; then\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      _FORCE_RESOLV_UPDATE=YES\n    else\n      _FORCE_RESOLV_UPDATE=NO\n    fi\n  fi\n  if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf || [ \"${_FORCE_RESOLV_UPDATE}\" = \"YES\" ]; then\n    echo \"### BOA-DNS-Config ###\" > ${_vBs}/resolv.conf.tmp\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      echo \"nameserver 127.0.0.1\" >> ${_vBs}/resolv.conf.tmp\n    fi\n    echo \"nameserver 1.1.1.1\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 8.8.8.8\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 9.9.9.9\" >> ${_vBs}/resolv.conf.tmp\n  fi\n  if [ -e \"${_vBs}/resolv.conf.tmp\" ]; then\n    chattr -i /etc/resolv.conf\n    rm -f /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp /etc/resolv.conf\n    chmod 0644 /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp ${_vBs}/resolv.conf.vanilla\n  fi\n  if [ -x \"/usr/sbin/unbound-control\" ] \\\n    && [ -e \"/etc/resolvconf/run/interface/lo.unbound\" ]; then\n    unbound-control reload &> /dev/null\n  fi\n}\n\n_check_dns_settings() {\n  if [ -L \"/etc/resolv.conf\" ]; then\n    _fix_dns_settings\n    return 1  # Exit the function but continue the script\n  fi\n  if [ -e \"/root/.use.default.nameservers.cnf\" ]; then\n    if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n      rm -f /root/.use.local.nameservers.cnf\n    fi\n    _USE_DEFAULT_DNS=YES\n    if ! grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n    _USE_PROVIDER_DNS=YES\n  else\n    _REMOTE_DNS_TEST=$(host files.aegir.cc 1.1.1.1 -w 10 2>&1)\n    if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [[ \"${_REMOTE_DNS_TEST}\" =~ \"no servers could be reached\" ]] \\\n    || [[ \"${_REMOTE_DNS_TEST}\" =~ \"Host files.aegir.cc not found\" ]] \\\n    || [ \"${_USE_PROVIDER_DNS}\" = \"YES\" ]; then\n    _fix_dns_settings\n  fi\n}\n\n_send_report() {\n  if [ -e \"${_barCnf}\" ]; then\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _MY_EMAIL=\"$(basename \"$0\")@omega8.cc\"\n    fi\n    if [ ! -z \"${_MY_EMAIL}\" ]; then\n      _repSub=\"Successful Octopus upgrade for ${_octUsr}\"\n      _repSub=\"REPORT: ${_repSub} on ${_hName}\"\n      _repSub=$(echo -n ${_repSub} | fmt -su -w 2500 2>&1)\n      cat ${_UP_LOG} | s-nail -s \"${_repSub} at ${_NOW}\" ${_MY_EMAIL}\n      echo \"${_repSub} sent to ${_MY_EMAIL}\"\n    fi\n  fi\n}\n\n_send_alert() {\n  if [ -e \"${_barCnf}\" ]; then\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _MY_EMAIL=\"$(basename \"$0\")@omega8.cc\"\n    fi\n    if [ ! 
-z \"${_MY_EMAIL}\" ]; then\n      _repSub=\"ALERT: Failed Octopus upgrade for ${_octUsr} on ${_hName}\"\n      _repSub=$(echo -n ${_repSub} | fmt -su -w 2500 2>&1)\n      cat ${_UP_LOG} | s-nail -s \"${_repSub} at ${_NOW}\" ${_MY_EMAIL}\n      echo \"${_repSub} sent to ${_MY_EMAIL}\"\n    fi\n  fi\n}\n\n_check_report() {\n  _SEND_ALERT=NO\n  _RESULT_ALRT=$(grep \"ALRT\" ${_UP_LOG} 2>&1)\n  _RESULT_FATAL=$(grep \"FATAL ERROR\" ${_UP_LOG} 2>&1)\n  _RESULT_ALREADY=$(grep \"This Ægir Instance is already up\" ${_UP_LOG} 2>&1)\n  _RESULT_BYE=$(grep \"BYE\" ${_UP_LOG} 2>&1)\n  # On binary logs grep prints \"Binary file ... matches\" (capital B, with\n  # the file name in the middle), so match on the \"inary file\" substring.\n  if [[ \"${_RESULT_ALRT}\" =~ \"ALRT\" ]] \\\n    || [[ \"${_RESULT_ALRT}\" =~ \"inary file\" ]]; then\n    _SEND_ALERT=YES\n  fi\n  if [[ \"${_RESULT_FATAL}\" =~ \"FATAL ERROR\" ]] \\\n    || [[ \"${_RESULT_FATAL}\" =~ \"inary file\" ]]; then\n    _SEND_ALERT=YES\n  fi\n  if [[ \"${_RESULT_ALREADY}\" =~ \"This Ægir Instance is already up\" ]] \\\n    || [[ \"${_RESULT_ALREADY}\" =~ \"inary file\" ]]; then\n    _SEND_ALERT=NO\n  fi\n  if [[ \"${_RESULT_BYE}\" =~ \"BYE\" ]] \\\n    || [[ \"${_RESULT_BYE}\" =~ \"inary file\" ]]; then\n    _SEND_ALERT=NO\n  else\n    _SEND_ALERT=YES\n  fi\n  if [ \"${_SEND_ALERT}\" = \"YES\" ]; then\n    _send_alert\n  else\n    _send_report\n  fi\n}\n\n_up_mode() {\n  if [ \"${_mCmd}\" = \"aegir\" ]; then\n    sed -i \"s/^_HM_ONLY=NO/_HM_ONLY=YES/g\"                   ${_octCnf}\n    wait\n    sed -i \"s/^_HM_ONLY=NO/_HM_ONLY=YES/g\"                   ${_vBs}/${_tocIncO}\n    wait\n    sed -i \"s/^_PLATFORMS_ONLY=YES/_PLATFORMS_ONLY=NO/g\"     ${_octCnf}\n    wait\n    sed -i \"s/^_PLATFORMS_ONLY=YES/_PLATFORMS_ONLY=NO/g\"     ${_vBs}/${_tocIncO}\n    wait\n    bash  ${_vBs}/${_tocName}\n    touch ${_usEr}/log/up-${_TODAY}\n  elif [ \"${_mCmd}\" = \"platforms\" ]; then\n    sed -i \"s/^_PLATFORMS_ONLY=NO/_PLATFORMS_ONLY=YES/g\"     ${_octCnf}\n    wait\n    sed -i \"s/^_PLATFORMS_ONLY=NO/_PLATFORMS_ONLY=YES/g\"     
${_vBs}/${_tocIncO}\n    wait\n    sed -i \"s/^_HM_ONLY=YES/_HM_ONLY=NO/g\"                   ${_octCnf}\n    wait\n    sed -i \"s/^_HM_ONLY=YES/_HM_ONLY=NO/g\"                   ${_vBs}/${_tocIncO}\n    wait\n    bash  ${_vBs}/${_tocName}\n    touch ${_usEr}/log/up-${_TODAY}\n  elif [ \"${_mCmd}\" = \"both\" ] || [ \"${_mCmd}\" = \"force\" ]; then\n    sed -i \"s/^_HM_ONLY=YES/_HM_ONLY=NO/g\"                   ${_octCnf}\n    wait\n    sed -i \"s/^_HM_ONLY=YES/_HM_ONLY=NO/g\"                   ${_vBs}/${_tocIncO}\n    wait\n    sed -i \"s/^_PLATFORMS_ONLY=YES/_PLATFORMS_ONLY=NO/g\"     ${_octCnf}\n    wait\n    sed -i \"s/^_PLATFORMS_ONLY=YES/_PLATFORMS_ONLY=NO/g\"     ${_vBs}/${_tocIncO}\n    wait\n    bash  ${_vBs}/${_tocName}\n    touch ${_usEr}/log/up-${_TODAY}\n  else\n    sed -i \"s/^_HM_ONLY=YES/_HM_ONLY=NO/g\"                   ${_octCnf}\n    wait\n    sed -i \"s/^_HM_ONLY=YES/_HM_ONLY=NO/g\"                   ${_vBs}/${_tocIncO}\n    wait\n    sed -i \"s/^_PLATFORMS_ONLY=YES/_PLATFORMS_ONLY=NO/g\"     ${_octCnf}\n    wait\n    sed -i \"s/^_PLATFORMS_ONLY=YES/_PLATFORMS_ONLY=NO/g\"     ${_vBs}/${_tocIncO}\n    wait\n    bash  ${_vBs}/${_tocName}\n    touch ${_usEr}/log/up-${_TODAY}\n  fi\n}\n\n_satellite_downgrade_protection() {\n  _usEr=\"/data/disk/${_sCnd}\"\n  _octUsr=\"${_sCnd}\"\n  if [ -e \"${_usEr}/log/octopus_log.txt\" ] \\\n    && [ \"${_cmNd}\" = \"up-lts\" ]; then\n    _SERIES_TEST=$(cat ${_usEr}/log/octopus_log.txt 2>&1)\n    if [[ \"${_SERIES_TEST}\" =~ \"Octopus ${_rLsn}-pro\" ]]; then\n      echo\n      echo \"ERROR: Your Octopus has already been upgraded to ${_rLsn}-pro\"\n      echo \"You cannot downgrade to a previous/older/lts BOA version\"\n      echo \"Please use 'octopus up-pro ${_octUsr} force' to upgrade\"\n      echo \"Bye\"\n      echo\n      _clean_pid_exit _satellite_downgrade_protection_a\n    elif [[ \"${_SERIES_TEST}\" =~ \"Octopus ${_rLsn}-dev\" ]]; then\n      echo\n      echo \"ERROR: Your Octopus has already been 
upgraded to ${_rLsn}-dev\"\n      echo \"You cannot downgrade to a previous/older/lts BOA version\"\n      echo \"Please use 'octopus up-dev ${_octUsr} force' to upgrade\"\n      echo \"Bye\"\n      echo\n      _clean_pid_exit _satellite_downgrade_protection_b\n    fi\n  fi\n  if [ -e \"${_usEr}/log/octopus_log.txt\" ] \\\n    && [ \"${_cmNd}\" != \"up-dev\" ] \\\n    && [ \"${_cmNd}\" != \"up-pro\" ] \\\n    && [ \"${_cmNd}\" != \"up-lts\" ] \\\n    && [ \"${_cmNd}\" != \"info\" ] \\\n    && [ \"${_cmNd}\" != \"help\" ]; then\n    _SERIES_TEST=$(cat ${_usEr}/log/octopus_log.txt 2>&1)\n    _BOA_SERIES_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n    if [[ \"${_SERIES_TEST}\" =~ \"Octopus ${_rLsn}\" ]] \\\n      || [[ \"${_BOA_SERIES_TEST}\" =~ \"Barracuda ${_rLsn}\" ]]; then\n      echo\n      echo \"ERROR: Your system has already been upgraded to ${_rLsn}\"\n      echo \"You cannot downgrade to a previous/older BOA version\"\n      echo \"Please use 'octopus up-lts/pro/dev ${_octUsr} force' to upgrade\"\n      echo \"Display all supported commands with: $(basename \"$0\") help\"\n      echo \"Bye\"\n      echo\n      _clean_pid_exit _satellite_downgrade_protection_c\n    fi\n  fi\n}\n\n_satellite_check_if_already_upgraded() {\n  if [ \"${_cmNd}\" = \"up-${_tRee}\" ]; then\n    _fMdy=\n    _ifSkip=\n    _SERIES_TEST=\n    _mCmd=\"${_oMcm}\"\n    if [ -e \"${_usEr}/log/octopus_log.txt\" ]; then\n      _SERIES_TEST=$(cat ${_usEr}/log/octopus_log.txt 2>&1)\n      if [[ \"${_SERIES_TEST}\" =~ \"${_rlsE}\" ]]; then\n        if [ \"${_oMcm}\" != \"force\" ]; then\n          _mCmd=\n          _fMdy=force\n          if [ ! 
-e \"/root/.silent-octopus-upgrade.cnf\" ]; then\n            echo\n            echo \"This Ægir Instance ${_octUsr} is already up to date!\"\n            echo \"If you wish to run/force the upgrade again,\"\n            echo \"please use the forced upgrade mode, as shown below\"\n            echo\n            echo \"Usage: $(basename \"$0\") ${_oRcm} ${_sCnd} force\"\n            echo\n            _ifSkip=YES\n          fi\n        fi\n      fi\n    fi\n  fi\n  _SERIES_TEST=\n}\n\n_up_one() {\n  if [ -e \"${_vBs}/${_tocIncO}\" ] && [ -e \"${_vBs}/${_tocName}\" ] && [ -e \"${_octCnf}\" ]; then\n    if [ \"${_cmNd}\" = \"up-dev\" ] \\\n      || [ \"${_cmNd}\" = \"up-pro\" ] \\\n      || [ \"${_cmNd}\" = \"up-lts\" ]; then\n\n      _satellite_check_if_already_upgraded\n\n      if [ \"${_ifSkip}\" = \"YES\" ]; then\n        return 1  # Exit the function but continue the script\n      fi\n\n      sed -i \"s/^_BRANCH_PRN=.*/_BRANCH_PRN=${_bRnh}/g\"      ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_AEGIR_VERSION.*/_AEGIR_VERSION=${_tRee}/g\" ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_AEGIR_XTS_VRN.*/_AEGIR_XTS_VRN=${_tRee}/g\" ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_X_VERSION=.*/_X_VERSION=${_rlsE}/g\"        ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_BRANCH_BOA=.*/_BRANCH_BOA=${_bRnh}/g\"      ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_AUTOPILOT=NO/_AUTOPILOT=YES/g\"             ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_DNS_SETUP_TEST=YES/_DNS_SETUP_TEST=NO/g\"   ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_USER=o1/_USER=${_octUsr}/g\"                ${_vBs}/${_tocName}\n      wait\n      sed -i \"s/^_STRONG_PASSW.*/_STRONG_PASSWORDS=YES/g\"    ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_PLATFORMS_LIST.*/_PLATFORMS_LIST=none/g\"   ${_vBs}/${_tocIncO}\n      wait\n      sed -i \"s/^_PLATFORMS_LIST=.*/_PLATFORMS_LIST=none/g\"           ${_octCnf}\n      wait\n      sed -i 
\"s/^_AUTOPILOT=NO/_AUTOPILOT=YES/g\"                      ${_octCnf}\n      wait\n      sed -i \"s/^_DNS_SETUP_TEST=YES/_DNS_SETUP_TEST=NO/g\"            ${_octCnf}\n      wait\n      sed -i \"s/^_STRONG_PASSW.*/_STRONG_PASSWORDS=YES/g\"             ${_octCnf}\n      wait\n    fi\n\n    if [ -e \"${_vBs}/${_tocName}\" ]; then\n      if [ -e \"${_usEr}/.drush/sys/provision/http\" ]; then\n        _IS_OLD=$(find ${_usEr}/.drush/sys/provision/ \\\n          -maxdepth 1 -mindepth 1 -mtime +0 -type d | grep example)\n      elif [ -e \"${_usEr}/.drush/provision\" ]; then\n        _IS_OLD=$(find ${_usEr}/.drush/provision/ \\\n          -maxdepth 1 -mindepth 1 -mtime +0 -type d | grep example)\n      fi\n      if [ -e \"${_usEr}/.drush/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n        _IS_OLD=\n      else\n        _IS_OLD=OLD\n      fi\n      if [ -z \"${_IS_OLD}\" ] \\\n        && [ -z \"${_mCmd}\" ] \\\n        && [ -e \"${_usEr}/.drush/sys/provision/http\" ]; then\n        if [ -z \"${_fMdy}\" ]; then\n          echo\n          echo \"This Ægir Instance is already up to date!\"\n          echo \"If you wish to run/force the upgrade again,\"\n          echo \"please specify desired upgrade mode:\"\n          echo \"aegir, platforms or both - as shown below\"\n          echo\n          echo \"Usage: $(basename \"$0\") ${_cmNd} ${_sCnd} {aegir|platforms|both}\"\n          echo\n          return 1  # Exit the function but continue the script\n        fi\n      else\n        if [ -e \"${_usEr}/.drush/sys/provision/http\" ]; then\n          _up_mode\n        elif [ -e \"${_usEr}/.drush/provision\" ]; then\n          _up_mode\n        else\n          echo \"${_usEr}/.drush/sys/provision does not exist!\"\n          rm -rf ${_usEr}/.drush/{sys,xts,usr}\n          rm -rf ${_usEr}/.drush/{provision,drush_make}\n          mkdir -p ${_usEr}/.drush/{sys,xts,usr}\n          ${_gCb} ${_bRnh} ${_gitHub}/provision.git \\\n            ${_usEr}/.drush/sys/provision &> /dev/null\n          
chown -R ${_octUsr}:users ${_usEr}/.drush/sys\n          _up_mode\n        fi\n      fi\n    else\n      if [ ! -e \"${_vBs}/${_tocIncO}\" ]; then\n        echo\n        echo \"${_vBs}/${_tocIncO} installer not available, exit\"\n        echo\n      fi\n      if [ ! -e \"${_vBs}/${_tocName}\" ]; then\n        echo\n        echo \"${_vBs}/${_tocName} installer not available, exit\"\n        echo\n      fi\n      if [ ! -e \"${_octCnf}\" ]; then\n        echo\n        echo \"${_octCnf} file not available, exit\"\n        echo\n      fi\n      _clean_pid_exit\n    fi\n  fi\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n_get_load() {\n  read -r _one _five _rest <<< \"$(cat /proc/loadavg)\"\n  _O_LOAD=$(awk -v _load_value=\"${_one}\" -v _cpus=\"${_CPU_NR}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n}\n\n_load_control() {\n  : \"${_CPU_TASK_RATIO:=3.1}\"\n  _CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n  _O_LOAD_MAX=$(echo \"${_CPU_TASK_RATIO} * 100\" | bc -l)\n  _get_load\n}\n\n_up_action_all() {\n  for _usEr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n    _count_cpu\n    _load_control\n    if [ -d \"${_usEr}/config/server_master/nginx/vhost.d\" ] \\\n      && [ -e \"${_usEr}/log/cores.txt\" ] \\\n      && [ ! 
-e \"${_usEr}/log/CANCELLED\" ]; then\n      if (( $(echo \"${_O_LOAD} < ${_O_LOAD_MAX}\" | bc -l) )); then\n        _octUsr=$(echo ${_usEr} | cut -d'/' -f4 | awk '{ print $1}')\n        _tocName=\"${_octName}.${_octUsr}\"\n        _tocIncO=\"${_filIncO}.${_octUsr}\"\n        if [ -e \"${_vBs}/${_octName}\" ]; then\n          cp -af ${_vBs}/${_octName} ${_vBs}/${_tocName}\n        fi\n        if [ -e \"${_vBs}/${_filIncO}\" ]; then\n          cp -af ${_vBs}/${_filIncO} ${_vBs}/${_tocIncO}\n        fi\n        _octCnf=\"/root/.${_octUsr}.octopus.cnf\"\n        echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n        echo \"Octopus upgrade for User ${_usEr}\"\n        _n=$((RANDOM%9+2))\n        echo \"Waiting ${_n} seconds...\"\n        sleep ${_n}\n\n        ### Enable debugging if requested\n        if [ -e \"/root/.debug-boa-installer.cnf\" ] \\\n          || [ -e \"/root/.debug-octopus-installer.cnf\" ]; then\n          sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"       ${_vBs}/${_tocIncO}\n          wait\n          sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"                ${_octCnf}\n          wait\n        fi\n\n        if [ \"${_outP}\" = \"log\" ]; then\n          _UP_LOG=\"${_LOG_DIR}/$(basename \"$0\")-up-${_octUsr}-${_NOW}.log\"\n          echo\n          echo \"Preparing the upgrade in silent mode...\"\n          echo\n          echo \"NOTE: There will be no progress displayed in the console,\"\n          echo \"but you will receive an email once the upgrade is complete\"\n          echo\n          sleep ${_n}\n          echo \"You can watch the progress in another window with the command:\"\n          echo \"  tail -f ${_UP_LOG}\"\n          echo \"or wait until you see the line: OCTOPUS upgrade completed\"\n          echo\n          echo \"Starting the upgrade in silent mode now...\"\n          echo\n          if [ -e \"${_vBs}/${_octName}\" ]; then\n            sed -i \"s/^_SPINNER=YES/_SPINNER=NO/g\"           ${_vBs}/${_octName}\n            
wait\n            sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"       ${_vBs}/${_tocIncO}\n            wait\n            sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"                ${_octCnf}\n            wait\n          fi\n          _up_one >>${_UP_LOG} 2>&1\n          _check_report\n        else\n          sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=NO/g\"                   ${_octCnf}\n          wait\n          _up_one 2>&1 | tee ${_UP_LOG}\n        fi\n        _n=$((RANDOM%9+2))\n        echo \"Waiting ${_n} seconds...\"\n        sleep ${_n}\n        rm -f ${_vBs}/${_tocName}\n        rm -f ${_vBs}/${_tocIncO}\n        rm -rf ${_usEr}/.tmp/cache\n        echo \"Done for ${_usEr}\"\n      else\n        echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n        echo \"...we have to wait...\"\n      fi\n    fi\n  done\n}\n\n_up_action_one() {\n  _usEr=\"/data/disk/${_sCnd}\"\n  _count_cpu\n  _load_control\n  if [ -d \"${_usEr}/config/server_master/nginx/vhost.d\" ] \\\n    && [ -e \"${_usEr}/log/cores.txt\" ] \\\n    && [ ! 
-e \"${_usEr}/log/CANCELLED\" ]; then\n    if (( $(echo \"${_O_LOAD} < ${_O_LOAD_MAX}\" | bc -l) )); then\n      _octUsr=\"${_sCnd}\"\n      _tocName=\"${_octName}.${_octUsr}\"\n      _tocIncO=\"${_filIncO}.${_octUsr}\"\n      if [ -e \"${_vBs}/${_octName}\" ]; then\n        cp -af ${_vBs}/${_octName} ${_vBs}/${_tocName}\n      fi\n      if [ -e \"${_vBs}/${_filIncO}\" ]; then\n        cp -af ${_vBs}/${_filIncO} ${_vBs}/${_tocIncO}\n      fi\n      _octCnf=\"/root/.${_octUsr}.octopus.cnf\"\n      echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n      echo \"Octopus upgrade for User ${_usEr}\"\n      _n=$((RANDOM%9+2))\n\n      ### Enable debugging if requested\n      if [ -e \"/root/.debug-boa-installer.cnf\" ] \\\n        || [ -e \"/root/.debug-octopus-installer.cnf\" ]; then\n        sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"         ${_vBs}/${_tocIncO}\n        wait\n        sed -i \"s/^_DEBUG_MODE=.*/_DEBUG_MODE=YES/g\"                  ${_octCnf}\n        wait\n      fi\n\n      if [ \"${_outP}\" = \"log\" ]; then\n        _UP_LOG=\"${_LOG_DIR}/$(basename \"$0\")-up-${_octUsr}-${_NOW}.log\"\n        echo\n        echo \"Preparing the upgrade in silent mode...\"\n        echo\n        echo \"NOTE: There will be no progress displayed in the console,\"\n        echo \"but you will receive an email once the upgrade is complete\"\n        echo\n        sleep ${_n}\n        echo \"You can watch the progress in another window with the command:\"\n        echo \"  tail -f ${_UP_LOG}\"\n        echo \"or wait until you see the line: OCTOPUS upgrade completed\"\n        echo\n        echo \"Starting the upgrade in silent mode now...\"\n        echo\n        if [ -e \"${_vBs}/${_octName}\" ]; then\n          sed -i \"s/^_SPINNER=YES/_SPINNER=NO/g\"             ${_vBs}/${_octName}\n          wait\n          sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"         ${_vBs}/${_tocIncO}\n          wait\n          sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"              
    ${_octCnf}\n          wait\n        fi\n        _up_one >>${_UP_LOG} 2>&1\n        _check_report\n      else\n        sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=NO/g\"                     ${_octCnf}\n        wait\n        _up_one 2>&1 | tee ${_UP_LOG}\n      fi\n      sleep 3\n      rm -f ${_vBs}/${_tocName}\n      rm -f ${_vBs}/${_tocIncO}\n      rm -rf ${_usEr}/.tmp/cache\n      echo \"Done for ${_usEr}\"\n    else\n      echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n      echo \"please try again later\"\n    fi\n  fi\n}\n\n_up_start() {\n  if [ -e \"/run/boa_run.pid\" ]; then\n    echo\n    echo \"  Another BOA installer is probably running,\"\n    echo \"  because /run/boa_run.pid exists\"\n    echo\n    exit 1\n  elif [ -e \"/run/boa_wait.pid\" ]; then\n    echo\n    echo \"  An important system task is probably running,\"\n    echo \"  because /run/boa_wait.pid exists\"\n    echo\n    exit 1\n  else\n    touch /run/boa_run.pid\n    touch /run/boa_wait.pid\n    touch /run/octopus_install_run.pid\n    [ -e \"/tmp/aegir_backup_mode.txt\" ] && rm -f /tmp/aegir_backup_mode.txt\n    mkdir -p ${_LOG_DIR}\n    cd ${_vBs}\n    rm -f ${_vBs}/OCTOPUS.sh*\n    ## rm -f ${_vBs}/*.sh.cnf*\n  fi\n  if [ \"${_mOde}\" = \"log\" ]; then\n    _outP=\"${_mOde}\"\n  fi\n  if [ ! 
-z \"${_mOde}\" ] && [ \"${_mOde}\" != \"log\" ]; then\n    _mCmd=\"${_mOde}\"\n    _oMcm=\"${_mCmd}\"\n  fi\n  if [ -e \"/opt/local/bin/php\" ] || [ -e \"/usr/local/bin/php\" ]; then\n    rm -f /opt/local/bin/php\n    rm -f /usr/local/bin/php\n  fi\n  _oRcm=\"${_cmNd}\"\n}\n\n_up_finish() {\n  rm -f /root/BOA.sh*\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n  [ -e \"/run/manage_ruby_users.pid\" ] && rm -f /run/manage_ruby_users.pid\n  [ -e \"/run/octopus_install_run.pid\" ] && rm -f /run/octopus_install_run.pid\n  [ -e \"/tmp/aegir_backup_mode.txt\" ] && rm -f /tmp/aegir_backup_mode.txt\n  rm -f ${_vBs}/*.sh.cnf*\n  rm -f ${_vBs}/OCTOPUS.sh*\n  [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n  mv -f /var/log/php/* /var/backups/php-logs/${_NOW}/ &> /dev/null\n  if [ -e \"/opt/local/bin/php\" ] || [ -e \"/usr/local/bin/php\" ]; then\n    rm -f /opt/local/bin/php\n    rm -f /usr/local/bin/php\n  fi\n  echo\n  echo OCTOPUS upgrade completed\n  echo Bye\n  echo\n  exit 0\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        export _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && export _USE_MIR=\"files.aegir.cc\"\n      else\n        export _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      export _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    export _USE_MIR=\"files.aegir.cc\"\n  fi\n  export _urlDev=\"http://${_USE_MIR}/dev\"\n  export _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_if_reinstall_curl() {\n  _CURL_VRN=8.20.0\n  if ! command -v lsb_release &> /dev/null; then\n    apt-get update -qq &> /dev/null\n    apt-get install lsb-release ${_aptYesUnth} -qq &> /dev/null\n  fi\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  [ \"${_OS_CODE}\" = \"jessie\" ] && _CURL_VRN=7.71.1\n  [ \"${_OS_CODE}\" = \"stretch\" ] && _CURL_VRN=8.2.1\n  _isCurl=$(curl --version 2>&1)\n  if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n    echo \"OOPS: cURL is broken! Re-installing..\"\n    if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    echo \"curl install\" | dpkg --set-selections 2> /dev/null\n    _apt_clean_update\n    # Check for libssl1.0-dev and remove conditionally\n    if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n      apt-get remove libssl1.0-dev -y --purge --auto-remove -qq 2>/dev/null\n    fi\n    apt-get autoremove -y 2> /dev/null\n    apt-get install libssl-dev ${_aptYesUnth} -qq 2> /dev/null\n    apt-get build-dep curl ${_aptYesUnth} 2> /dev/null\n    if [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      apt-get install curl --reinstall ${_aptYesUnth} -qq 2> /dev/null\n    fi\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      echo \"INFO: Installing curl from sources...\"\n      mkdir -p /var/opt\n      rm -rf /var/opt/curl*\n      cd /var/opt\n      wget ${_wgetGet} http://files.aegir.cc/dev/src/curl-${_CURL_VRN}.tar.gz &> /dev/null\n      tar -xzf curl-${_CURL_VRN}.tar.gz &> /dev/null\n      if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n        && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n        _SSL_BINARY=/usr/local/ssl3/bin/openssl\n      else\n        _SSL_BINARY=/usr/local/ssl/bin/openssl\n      fi\n      if [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n        _SSL_PATH=\"/usr/local/ssl3\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n      else\n        _SSL_PATH=\"/usr/local/ssl\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n      fi\n      _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n\n      if [ -e \"${_PKG_CONFIG_PATH}\" ] \\\n        && [ -e \"/var/opt/curl-${_CURL_VRN}\" ]; then\n        cd /var/opt/curl-${_CURL_VRN}\n        LIBS=\"-ldl -lpthread\" PKG_CONFIG_PATH=\"${_PKG_CONFIG_PATH}\" ./configure \\\n          --with-openssl \\\n          --with-zlib=/usr \\\n          
--prefix=/usr/local &> /dev/null\n        make -j $(nproc) --quiet &> /dev/null\n        make --quiet install &> /dev/null\n        ldconfig 2> /dev/null\n      fi\n    fi\n    if [ -f \"/usr/local/bin/curl\" ]; then\n      _isCurl=$(/usr/local/bin/curl --version 2>&1)\n      if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n        echo \"ERRR: curl is still broken, please install it and debug manually\"\n        _clean_pid_exit _if_reinstall_curl\n      else\n        echo \"GOOD: /usr/local/bin/curl works\"\n      fi\n    fi\n  fi\n}\n\n_check_dns_curl() {\n  _check_dns_settings\n  _find_fast_mirror_early\n  _if_reinstall_curl\n  _CURL_TEST=$(curl -L -k -s \\\n    --max-redirs 10 \\\n    --retry 3 \\\n    --retry-delay 10 \\\n    -I \"http://${_USE_MIR}\" 2> /dev/null)\n  if [[ ! \"${_CURL_TEST}\" =~ \"200 OK\" ]]; then\n    if [[ \"${_CURL_TEST}\" =~ \"unknown option was passed in to libcurl\" ]]; then\n      echo \"ERROR: cURL libs are out of sync! Re-installing again..\"\n      _if_reinstall_curl\n    else\n      echo \"ERROR: ${_USE_MIR} is not available, please try later\"\n      _clean_pid_exit _check_dns_curl_a\n    fi\n  fi\n}\n\n_check_root_direct() {\n  _U_TEST=DENY\n  [ \"${SUDO_USER}\" ] && _U_TEST_SDO=${SUDO_USER} || _U_TEST_SDO=$(whoami)\n  _U_TEST_WHO=$(who am i | awk '{print $1}' 2>&1)\n  _U_TEST_LNE=$(logname 2>&1)\n  if [ \"${_U_TEST_SDO}\" = \"root\" ] || [ \"${_U_TEST_LNE}\" = \"root\" ]; then\n    if [ -z \"${_U_TEST_WHO}\" ]; then\n      _U_TEST=ALLOW\n      ### normal for root scripts running from cron\n    else\n      if [ \"${_U_TEST_WHO}\" = \"root\" ]; then\n        _U_TEST=ALLOW\n      fi\n    fi\n  fi\n  if [ \"${_U_TEST}\" = \"DENY\" ]; then\n    echo\n    echo \"ERROR: This script must be run as root directly,\"\n    echo \"ERROR: without sudo/su switch from regular system user\"\n    echo \"ERROR: Please add and test your SSH (ed25519) keys for root account\"\n    echo \"ERROR: with direct access before 
trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 (modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy manually to the ~/.ssh/authorized_keys file on the server\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure that each key is not split across more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HINT:  You can always restrict access later, or\"\n    echo \"       allow only SSH (ed25519) keys for root with the directive\"\n    echo \"         PermitRootLogin prohibit-password\"\n    echo \"       in the /etc/ssh/sshd_config file\"\n    echo \"Bye\"\n    _clean_pid_exit\n  fi\n}\n\n_check_root_keys_pwd() {\n  # Check if root's password is locked\n  _ROOT_PWD_LOCKED=\"NO\"\n  _S_TEST=$(grep 'root:\\*:' /etc/shadow 2>&1)\n  if [[ \"${_S_TEST}\" =~ root:\\*: ]]; then\n    _ROOT_PWD_LOCKED=\"YES\"\n  fi\n\n  # Check for presence of SSH keys\n  _SSH_KEYS_OK=\"NO\"\n  if [ -e \"/root/.ssh/authorized_keys\" ]; then\n    if grep -qE '^(ssh-rsa|ssh-ed25519|ecdsa-sha2)' /root/.ssh/authorized_keys; then\n      _SSH_KEYS_OK=\"YES\"\n    fi\n  fi\n\n  if [[ \"${_ROOT_PWD_LOCKED}\" == \"NO\" ]] && [[ \"${_SSH_KEYS_OK}\" == \"NO\" ]]; then\n    echo\n    echo \"ERROR: BOA requires working SSH keys present for system root\"\n    echo \"ERROR: Please add and test your SSH keys for 
root account\"\n    echo \"ERROR: before trying again\"\n    echo\n    echo \"HOWTO: Run one of these commands on your local PC machine:\"\n    echo \"  ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 (modern, recommended)\"\n    echo \"  ssh-keygen -t ecdsa -b 256 -N '' -f ~/.ssh/id_ecdsa (good, but dated)\"\n    echo \"  ssh-keygen -b 4096 -t rsa -N '' -f ~/.ssh/id_rsa (legacy and slow)\"\n    echo\n    echo \"HOWTO: Copy the public key to the server's ~/.ssh/authorized_keys file:\"\n    echo \"  ssh-copy-id -i ~/.ssh/id_ed25519 root@your_server_ip\"\n    echo\n    echo \"HOWTO: Or copy manually to the ~/.ssh/authorized_keys file on the server\"\n    echo \"  cat ~/.ssh/id_ed25519.pub\"\n    echo \"  cat ~/.ssh/id_ecdsa.pub\"\n    echo \"  cat ~/.ssh/id_rsa.pub\"\n    echo\n    echo \"HOWTO: Ensure that each key is not split across more than one line\"\n    echo\n    echo \"HOWTO: Ensure the authorized_keys file has the correct permissions:\"\n    echo \"  chmod 600 ~/.ssh/authorized_keys\"\n    echo \"  chmod 700 ~/.ssh\"\n    echo\n    echo \"HOWTO: You can prioritize your keys by adding these lines to ~/.ssh/config:\"\n    echo \" Host *\"\n    echo \"   IdentityFile ~/.ssh/id_ed25519\"\n    echo \"   IdentityFile ~/.ssh/id_ecdsa\"\n    echo \"   IdentityFile ~/.ssh/id_rsa\"\n    echo\n    echo \"Bye\"\n    echo\n    _clean_pid_exit\n  fi\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! 
-z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n  if [ -n \"${_LOC_IP}\" ] && grep -qE \"${_LOC_IP}\\s\" /etc/hosts; then\n    cp -af /etc/hosts /etc/.was.hosts\n    sed -i \"s/^${_LOC_IP}.*//g\" /etc/hosts\n  fi\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n    #\n    # Protect from any octopus updates when _SKYNET_MODE=OFF\n    if [ ! -z \"${_SKYNET_MODE}\" ] && [ \"${_SKYNET_MODE}\" = \"OFF\" ]; then\n      if [ -n \"${SSH_TTY+x}\" ]; then\n        echo\n        echo \"STATUS: BOA Skynet Agent is Inactive!\"\n        echo\n        echo \"NOTE: With _SKYNET_MODE=OFF octopus can not upgrade your Ægir.\"\n        echo\n        echo \"HINT: Please remove the _SKYNET_MODE=OFF line from\"\n        echo \"HINT: ${_barCnf} to enable it.\"\n        echo \"HINT: Wait 10 minutes before trying octopus upgrade again.\"\n        echo \"Bye!\"\n        echo\n      fi\n      _clean_pid_exit\n    fi\n\n    _find_correct_ip\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    # Sanitize to allow only digits and minus sign\n    export _B_NICE=${_B_NICE//[^0-9-]/}\n\n    # Validate and set default if necessary\n    if ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n      _B_NICE=0\n    fi\n\n    # Clamp the value within -20 to 19\n    if (( _B_NICE < -20 )); then\n      _B_NICE=-20\n    elif (( _B_NICE > 19 )); then\n      _B_NICE=19\n    fi\n\n    renice ${_B_NICE} -p $$ &> /dev/null\n    ionice -c2 -n7 -p $$\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    _clean_pid_exit\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! 
-z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full: ${_DF_TEST}/100 used\"\n    echo \"ERROR: We cannot proceed until it is below 90/100\"\n    _clean_pid_exit\n  fi\n}\n\n_check_no_systemd() {\n  if [ -e \"/lib/systemd/systemd\" ]; then\n    echo \"ERROR: This script cannot be used with systemd\"\n    echo \"ERROR: Please run 'autoinit' first\"\n    _clean_pid_exit _check_no_systemd\n  fi\n}\n\n_if_start_screen() {\n  if [[ -n \"$SSH_CONNECTION\" || -n \"$SSH_CLIENT\" ]]; then\n    # Check if the user is inside a screen session\n    if [[ ! \"${_ARGS}\" =~ (^|[[:space:]])(info|help)([[:space:]]|$) ]]; then\n      if [ -z \"$STY\" ]; then\n        # If not in screen, start a new screen session with the same script\n        echo \"You are not inside a screen session. Starting screen...\"\n        sleep 5\n        screen -S session_octopus bash -c \"$0 ${_ARGS}\"\n        exit\n      else\n        # If already inside screen, continue the script\n        echo \"You are in a screen session now\"\n        sleep 3\n      fi\n    fi\n  fi\n}\n\n_if_display_help() {\n  if [ \"${_cmNd}\" = \"help\" ] || [ \"${_cmNd}\" = \"info\" ]; then\n    echo\n    echo \"Usage: $(basename \"$0\") {version} {instance} {mode}\"\n    cat <<EOF\n\n    Accepted keywords and values in every option:\n\n    {version}\n      up-lts <------- upgrade to Octopus LTS release (no license)\n      up-pro <------- upgrade to Octopus PRO release (requires license)\n      up-dev <------- upgrade to Octopus Cutting Edge (requires license)\n\n    {instance}\n      all <---------- upgrade all active Octopus instances in a batch\n      o1 <----------- upgrade only listed Octopus instance (system name)\n\n    {mode}\n      aegir <-------- upgrade only Ægir Hostmaster\n      platforms <---- upgrade only Ægir Platforms\n      both <--------- upgrade both Ægir Hostmaster and Platforms\n      force <-------- force upgrade of either specified instance or 
all\n\n    See docs/UPGRADE.md for more details.\n\nEOF\n    _clean_pid_exit\n  fi\n}\n\n_up_proceed() {\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ] \\\n    || [ \"${_cmNd}\" = \"info\" ] \\\n    || [ \"${_cmNd}\" = \"help\" ]; then\n    _CHECK_SQL=NO\n  elif [ \"${_cmNd}\" = \"up-lts\" ] \\\n    || [ \"${_cmNd}\" = \"up-pro\" ] \\\n    || [ \"${_cmNd}\" = \"up-dev\" ]; then\n    _CHECK_SQL=YES\n  else\n    _CHECK_SQL=NO\n  fi\n  if [ \"${_CHECK_SQL}\" = \"YES\" ]; then\n    _check_sql_running\n    _check_sql_access\n  fi\n  if [ \"${_cmNd}\" = \"up-dev\" ]; then\n    export _tRee=dev\n  elif [ \"${_cmNd}\" = \"up-pro\" ]; then\n    export _tRee=pro\n  elif [ \"${_cmNd}\" = \"up-lts\" ]; then\n    export _tRee=lts\n  elif [ \"${_cmNd}\" = \"help\" ] || [ \"${_cmNd}\" = \"info\" ]; then\n    export _tRee=dev\n  else\n    echo\n    echo \"Sorry, this is not a supported command.\"\n    echo \"Display supported commands with: $(basename \"$0\") help\"\n    echo\n    _clean_pid_exit\n  fi\n\n  export _tRee=\"${_tRee}\"\n  export _rLsn=\"BOA-5.9.1\"\n  export _rlsE=\"${_rLsn}-${_tRee}\"\n  export _bRnh=\"5.x-${_tRee}\"\n  export _rgUrl=\"http://files.aegir.cc/versions/${_tRee}/boa\"\n\n  _satellite_downgrade_protection\n  _up_start\n\n  curl ${_crlGet} \"${_rgUrl}/${_octName}\" -o \"${_vBs}/${_octName}\"\n  curl ${_crlGet} \"${_rgUrl}/${_pthIncO}\" -o \"${_vBs}/${_filIncO}\"\n\n  if [ \"${_sCnd}\" = \"all\" ]; then\n    _up_action_all\n  else\n    _up_action_one\n  fi\n  _up_finish\n}\n\nexport _cmNd=\"$1\"\nexport _sCnd=\"$2\"\nexport _mOde=\"$3\"\nexport _outP=\"$4\"\nexport _ARGS=\"$@\"\n\n_if_display_help\n_check_root_direct\n_check_root_keys_pwd\n_check_root\n_check_no_systemd\n_os_detection_minimal\n_check_dns_curl\n[[ ! 
\"${_ARGS}\" =~ noscreen ]] && _if_start_screen\n[[ \"${_ARGS}\" =~ noscreen ]] && export _ARGS=$(echo \"${_ARGS}\" | sed -E \"s/noscreen//g\")\n_up_proceed\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/perftest",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport LC_ALL=C\nexport LANG=C\n\n###\n_PT_VERSION=\"v73\"\n\n### Avoid too many questions\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\n\n# Tunables\n_PHP_BENCH_ITERATIONS=50000 # per process\n_PHP_BENCH_RUNS=5 # single-thread runs for median\n_PHP_BENCH_CONCURRENCY_MAX=8 # cap on concurrent PHP workers\n_SYSBENCH_CPU_TIME=30 # seconds\n_SYSBENCH_CPU_THREADS_SAT=8 # cap MT CPU threads (informational; avoids regression/noise on large vCPU VMs)\n\n# MySQL durability mode hint (used to interpret disk fsync tail).\n# If MySQL is configured with innodb_flush_log_at_trx_commit!=1 or sync_binlog!=1,\n# fsync-per-commit behavior is relaxed and a high fio fsync p99 can be less predictive of DB OLTP results.\n_DB_DURABILITY_RELAXED=\"NO\"\n_DB_DURABILITY_PROBED=\"NO\"\n_SYSBENCH_MEM_TIME=30 # seconds\n_DB_SYSBENCH_TIME_RO=30 # seconds (read-only OLTP)\n_DB_SYSBENCH_TIME_RW=30 # seconds (read-write OLTP)\n_DB_SYSBENCH_TABLES=4 # number of tables\n_DB_SYSBENCH_TABLE_SIZE=100000 # rows per table (light)\n_DB_SYSBENCH_TABLE_SIZE_HEAVY=1000000 # rows per table (heavy)\n_DB_SYSBENCH_THREADS_MAX=16 # cap threads\n_DB_SYSBENCH_THREADS_MIN=2 # min threads\n_DB_SYSBENCH_THREADS_SAT=8 # saturation cap (avoid penalizing bigger VMs due to concurrency regression)\n_MEM_BENCH_TIME=10 # seconds (DRAM-focused membench, per run)\n_MEM_BENCH_REPEATS=3 # repeats for stability; median-of-3 is used\n_MEM_BENCH_SIZE_MIN_MIB=128 # MiB working set (min; must exceed LLC)\n_MEM_BENCH_SIZE_MAX_MIB=1024 # MiB working set (cap to avoid OOM)\n_MEM_BENCH_THREADS_MAX=8 # cap threads used for memory bandwidth\n_MEM_BENCH_THREADS_SAT=4 # cap threads for membench (avoid bandwidth dip on >4 vCPU VMs; keeps grading comparable)\n_MEM_BENCH_LAT_SIZE_MIB=256 # MiB pointer-chase working set (DRAM 
latency)\n_FIO_RAND_RUNTIME=60 # seconds\n_FIO_SEQ_RUNTIME=60 # seconds\n_FIO_QD1_RUNTIME=20 # seconds (QD=1 random latency for tiering)\n_FIO_QD1_SIZE_M=256 # MiB per QD=1 test\n_FIO_QD1_FSYNC_RUNTIME=15 # seconds (QD=1 randwrite with fsync for DB realism)\n_FIO_QD1_FSYNC_SIZE_M=128 # MiB (file size for fsync test)\n_FIO_QD1_REPEATS=3 # repeats for QD=1 tests to reduce noise (median-of-N used)\n_FIO_QD1_FSYNC_REPEATS=3 # repeats for QD=1 fsync tests to reduce noise (median-of-N used)\n_METADATA_FILE_COUNT=5000 # small-file ops per mount\n_GRADE_NEAR_BAND_PCT=0.10 # top-of-band promotion window (10%); keeps grades strict but not harsh near upper boundaries\n_NET_TIMEOUT=10 # seconds for curl speedtest\n_NET_IPERF_TIME=8 # seconds per iperf3 direction\n_NET_IPERF_TIMEOUT=15 # seconds hard timeout wrapper for iperf3\n_NET_IPERF_PARALLEL=1 # iperf3 parallel streams\n_NET_IPERF_MAX_SERVERS=3 # max iperf3 servers to test\n_NET_IPERF_ENABLED=\"${_NET_IPERF_ENABLED:-NO}\" # YES to enable iperf3 test + CSF edits + iperf3 install\n_NET_HTTP_OUTLIER_RATIO=\"${_NET_HTTP_OUTLIER_RATIO:-0.30}\" # ignore HTTP results < (median*ratio) to drop very weak endpoints\n_NET_IPERF_OUTLIER_RATIO=\"${_NET_IPERF_OUTLIER_RATIO:-0.30}\" # ignore iPerf3 results < (median*ratio) to drop very weak remotes\n_NET_APAC_LOCAL_ONLY=\"${_NET_APAC_LOCAL_ONLY:-YES}\" # if YES, AU/NZ hosts test only AU/NZ endpoints\n_NET_IPERF_MIN_Mbps=\"${_NET_IPERF_MIN_Mbps:-1.0}\" # treat < this as failure (filters \"0.00\" pseudo-results)\n_AES_TEST_MIB=500 # MiB encrypted for AES throughput test\n_IOPING_SECONDS=5 # seconds for ioping checks\n\n# Output mode\n_VERBOSE=0   # raw output without grading\n_COMPACT=0   # compact BOA-focused mode with grading\n_FANCY=1     # fancy BOA-focused mode with grading\n\n# Test selection (must be explicit: use --all or one/more test flags)\n_DO_SYSINFO=NO\n_DO_CPU=NO\n_DO_MEM=NO\n_DO_AES=NO\n_DO_DISK=NO\n_DO_NET=NO\n_DO_PHP=NO\n_DO_DB=NO\n_DO_FS=NO\n\n# Ensure maps used with 
mountpoint keys are associative arrays (prevents bash treating \"/\" as arithmetic index).\ndeclare -A _ioping_seek_line\ndeclare -A _fio_seqwrite_bw_write\ndeclare -A _fio_seqread_bw_read\ndeclare -A _fio_rand_iops_read\ndeclare -A _fio_rand_iops_write\ndeclare -A _fio_rand_bw_read\ndeclare -A _fio_rand_bw_write\ndeclare -A _fio_rand_latency_read\ndeclare -A _fio_rand_latency_write\ndeclare -A _fio_qd1_p95_read\ndeclare -A _fio_qd1_p95_write\ndeclare -A _fio_qd1_p99_read\ndeclare -A _fio_qd1_p99_write\ndeclare -A _fio_qd1_fsync_p95_write\ndeclare -A _fio_qd1_fsync_p99_write\ndeclare -A _meta_ops\n\n_list_tests() {\n  cat <<'TESTS'\nAvailable tests:\n  sysinfo   Basic system info (hostname, CPU model, vCPUs, block devices, PCI summary)\n  cpu       CPU benchmark (sysbench cpu)\n  mem       Memory benchmark (membench)\n  aes       AES throughput (openssl enc)\n  disk      Disk latency + fio I/O (ioping + fio)\n  net       Network tests (HTTP download; add --iperf to also run iperf3 TCP test)\n  php       PHP CLI micro-benchmark\n  db        MySQL sysbench OLTP benchmark\n  fs        Filesystems: fstype/options + /etc/fstab hints + space/inodes sanity\n\nNote: --iperf is a modifier for --net/--all, not a standalone test.\n      --all does not include iperf3; use --all --iperf to add it.\nTESTS\n}\n\n# DB benchmark (sysbench OLTP)\n_DBTEST_ENABLED=YES\n_DBTEST_HEAVY=NO\n_DBTEST_KEEP_DB=NO\n\n# CSV output\n# Printed only in --verbose mode (no CSV in compact mode).\n\n# ----\n# Usage\n# ----\n_usage() {\n  cat <<'USAGE'\nUsage: perftest (--all | one/more test flags) [--compact|--fancy|--verbose] [other options] [-h|--help]\n\n  --fancy             Print BOA-focused fancy summary with grading (default)\n  --compact           Print BOA-focused compact summary with grading\n  --verbose           Print full detailed output or just CSV summary with | grep CSV_\n  --all               Run all tests (excludes iperf3; use --all --iperf to include it)\n  --only TEST         Run 
only one test (see --list-tests)\n  --sysinfo           Run system info test\n  --cpu               Run CPU benchmark\n  --mem               Run memory benchmark\n  --aes               Run AES throughput benchmark\n  --disk              Run disk benchmarks (ioping + fio)\n  --net               Run network benchmarks (HTTP download only by default)\n  --iperf             Add iperf3 TCP throughput test to --net or --all (opt-in)\n  --php               Run PHP CLI micro-benchmark\n  --db                Run MySQL sysbench OLTP benchmark\n  --fs                Run filesystem + fstab/inodes sanity checks\n  --list-tests        List available tests and exit\n  --no-dbtest         Skip MySQL sysbench OLTP benchmark (even if --db/--all)\n  --dbtest-heavy      Use larger dataset for DB test (more disk pressure)\n  --keep-db           Keep sysbench DB after run (for debugging)\n  -h, --help          Show this help\n\nBOA context: This script evaluates VM vendor/host quality for BOA stack\n(Drupal + APC/Valkey/OPcache/Nginx microcaching). Compact mode highlights\nthe metrics that matter most for in-memory caching performance.\nUSAGE\n}\n\n# ----\n# Helpers\n# ----\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script must be run as root\"\n    exit 1\n  fi\n}\n\n_log() {\n  [ \"${_VERBOSE}\" -eq 1 ] && echo \"$@\"\n}\n\n_cron_stop_quiet() {\n  if [ -x \"/etc/init.d/cron\" ] && command -v service >/dev/null 2>&1; then\n    service cron stop >/dev/null 2>&1 || true\n  fi\n}\n\n_cron_restart_quiet() {\n  if [ -x \"/etc/init.d/cron\" ] && command -v service >/dev/null 2>&1; then\n    service cron restart >/dev/null 2>&1 || true\n  fi\n}\n\n_find_fast_mirror_early() {\n  _USE_MIR=\n  _urlDld=\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\n      fi\n    fi\n  fi\n  if [ -n \"${_USE_MIR}\" ]; then\n    _urlDld=\"http://${_USE_MIR}/dev/solr-4.2.1.tgz\"\n  fi\n}\n\n# Function to check if a package is installed\n_check_and_install_package() {\n  # $1 = package name\n  if [ \"${1}\" = \"iperf3\" ]; then\n    printf 'iperf3 iperf3/start_daemon boolean false\\n' | debconf-set-selections >/dev/null 2>&1 || true\n  fi\n  if ! dpkg -s \"$1\" >/dev/null 2>&1; then\n    echo \"Installing package: $1\"\n    apt-get install -y -qq --no-install-recommends \\\n      -o Dpkg::Options::=--force-confdef \\\n      -o Dpkg::Options::=--force-confold \\\n      \"$1\" </dev/null >/dev/null 2>&1 || true\n  fi\n}\n\n# Ensure sysbench build dependencies are present (always run)\n_ensure_sysbench_build_deps() {\n  _check_and_install_package \"git\"\n  _check_and_install_package \"make\"\n  _check_and_install_package \"gcc\"\n  _check_and_install_package \"automake\"\n  _check_and_install_package \"libtool\"\n  _check_and_install_package \"pkg-config\"\n  _check_and_install_package \"libaio-dev\"\n  _check_and_install_package \"pciutils\"\n}\n\n# Extract sysbench version (first X.Y[.Z...] 
token)\n_sysbench_get_version() {\n  sysbench --version 2>/dev/null | grep -Eo '[0-9]+(\\.[0-9]+)+' | head -n 1\n}\n\n# Return 0 if version A >= version B (dot-separated numeric)\n_version_ge() {\n  # Usage: _version_ge \"1.0.20\" \"1.0.18\"\n  _A=\"${1}\"\n  _B=\"${2}\"\n\n  # Empty versions => fail safe\n  if [ -z \"${_A}\" ] || [ -z \"${_B}\" ]; then\n    return 1\n  fi\n\n  # shellcheck disable=SC2206\n  _A_ARR=(${_A//./ })\n  # shellcheck disable=SC2206\n  _B_ARR=(${_B//./ })\n\n  _I=0\n  _MAX=\"${#_A_ARR[@]}\"\n  if [ \"${#_B_ARR[@]}\" -gt \"${_MAX}\" ]; then\n    _MAX=\"${#_B_ARR[@]}\"\n  fi\n\n  while [ \"${_I}\" -lt \"${_MAX}\" ]; do\n    _AI=\"${_A_ARR[${_I}]:-0}\"\n    _BI=\"${_B_ARR[${_I}]:-0}\"\n    # Force base-10 numeric compare (avoid 08/09 octal edge)\n    if [ \"$((10#${_AI}))\" -gt \"$((10#${_BI}))\" ]; then\n      return 0\n    fi\n    if [ \"$((10#${_AI}))\" -lt \"$((10#${_BI}))\" ]; then\n      return 1\n    fi\n    _I=$((_I + 1))\n  done\n\n  return 0\n}\n\n# Detect whether sysbench was built with mysql driver\n_sysbench_has_mysql_driver() {\n  # Never allow this check to hang perftest (and don't let sysbench read from stdin).\n  if command -v timeout >/dev/null 2>&1; then\n    _SB_HELP=\"$(timeout 3s sysbench --help </dev/null 2>/dev/null | tr '\\n' ' ' | tr -s ' ')\"\n  else\n    _SB_HELP=\"$(sysbench --help </dev/null 2>/dev/null | tr '\\n' ' ' | tr -s ' ')\"\n  fi\n\n  # If sysbench was killed by timeout or produced nothing, treat as \"no mysql driver\"\n  if [ -z \"${_SB_HELP}\" ]; then\n    return 1\n  fi\n\n  # Match either:\n  # - the \"Compiled-in database drivers:\" section mentioning mysql (even if it was multiline originally), OR\n  # - the explicit \"mysql - MySQL driver\" entry\n  printf '%s\\n' \"${_SB_HELP}\" | grep -Eqi 'Compiled-in database drivers:.*(^|[[:space:],])mysql([[:space:],]|$)' && return 0\n  printf '%s\\n' \"${_SB_HELP}\" | grep -Eqi '(^|[[:space:]])mysql[[:space:]]*-[[:space:]]*MySQL driver([[:space:]]|$)' && 
return 0\n\n  return 1\n}\n\n_mysql_is_present() {\n  if command -v mysqld >/dev/null 2>&1 || [ -d \"/var/lib/mysql/mysql\" ]; then\n    return 0\n  fi\n  return 1\n}\n\n_mysql_build_support_ok() {\n  if ! command -v mysql_config >/dev/null 2>&1; then\n    return 1\n  fi\n  mysql_config --cflags >/dev/null 2>&1 || return 1\n  mysql_config --libs   >/dev/null 2>&1 || return 1\n  return 0\n}\n\n# Prefer sysbench build-from-source without MySQL, but rebuild with mysql support when appropriate.\n# Rebuild triggers:\n# - sysbench missing\n# - sysbench version < ${_SYSBENCH_MIN_VER} (defaults below)\n# - mysql is present + mysql_config works + sysbench lacks mysql driver\n_ensure_sysbench() {\n  # Default minimum version; override _SYSBENCH_MIN_VER elsewhere if needed.\n  if [ -z \"${_SYSBENCH_MIN_VER}\" ]; then\n    _SYSBENCH_MIN_VER=\"1.1.0\"\n  fi\n\n  _MYSQL_SUPPORT_POSSIBLE=\"NO\"\n  _NEEDS_BUILD=\"NO\"\n\n  # Always ensure prerequisites (even if we end up doing nothing)\n  _ensure_sysbench_build_deps\n\n  if _mysql_is_present && _mysql_build_support_ok; then\n    _MYSQL_SUPPORT_POSSIBLE=\"YES\"\n  fi\n\n  if ! command -v sysbench >/dev/null 2>&1; then\n    _NEEDS_BUILD=\"YES\"\n    _log \"sysbench is not installed yet, building from source...\"\n  else\n    _SYSBENCH_CUR_VER=\"$(_sysbench_get_version)\"\n\n    # Version gate: rebuild if current version cannot be read or is too old\n    if ! _version_ge \"${_SYSBENCH_CUR_VER}\" \"${_SYSBENCH_MIN_VER}\"; then\n      _NEEDS_BUILD=\"YES\"\n      _log \"sysbench version is ${_SYSBENCH_CUR_VER} while the minimum is ${_SYSBENCH_MIN_VER}, building from source...\"\n    fi\n\n    # MySQL driver gate: rebuild only when MySQL support is actually possible\n    if [ \"${_NEEDS_BUILD}\" != \"YES\" ] && [ \"${_MYSQL_SUPPORT_POSSIBLE}\" = \"YES\" ]; then\n      if ! 
_sysbench_has_mysql_driver; then\n        _NEEDS_BUILD=\"YES\"\n        _log \"The sysbench MySQL support is missing, compiling from source...\"\n      fi\n    fi\n  fi\n\n  if [ \"${_NEEDS_BUILD}\" != \"YES\" ]; then\n    return 0\n  fi\n\n  if command -v sysbench >/dev/null 2>&1; then\n    _log \"Rebuilding sysbench from source (min ${_SYSBENCH_MIN_VER}; MySQL support possible: ${_MYSQL_SUPPORT_POSSIBLE})...\"\n  else\n    _log \"Compiling sysbench from source (min ${_SYSBENCH_MIN_VER})...\"\n  fi\n\n  rm -rf /tmp/sysbench-build >/dev/null 2>&1\n  mkdir -p /tmp/sysbench-build\n  cd /tmp/sysbench-build\n\n  git clone --depth 1 https://github.com/omega8cc/sysbench.git >/dev/null 2>&1\n  cd sysbench\n\n  ./autogen.sh >/dev/null 2>&1\n\n  # Default: without mysql. Enable mysql only when we proved build support is present.\n  if [ \"${_MYSQL_SUPPORT_POSSIBLE}\" = \"YES\" ]; then\n    ./configure >/dev/null 2>&1\n  else\n    ./configure --without-mysql >/dev/null 2>&1\n  fi\n\n  make -j\"$(nproc)\" >/dev/null 2>&1\n  make install >/dev/null 2>&1\n\n  cd /root\n  rm -rf /tmp/sysbench-build >/dev/null 2>&1\n\n  # Ensure shell picks up any newly installed sysbench in PATH\n  hash -r >/dev/null 2>&1\n}\n\n# Best-effort C compiler (for DRAM-focused membench)\n_ensure_cc() {\n  if command -v gcc >/dev/null 2>&1; then\n    return 0\n  fi\n  if command -v cc >/dev/null 2>&1; then\n    return 0\n  fi\n  _check_and_install_package \"gcc\"\n}\n\n_pci_init() {\n  _HAS_LSPCI=\"NO\"\n  _LSPCI_CACHE=\"\"\n  _PCI_STORAGE_SUMMARY=\"\"\n  _PCI_NIC_SUMMARY=\"\"\n  _PCI_NET_STORAGE_HINT=\"NO\"\n  _PCI_VIRTIO_STORAGE_HINT=\"NO\"\n  _PCI_NVME_CTRL_HINT=\"NO\"\n  _HAS_NVME_SYS=\"NO\"\n\n  # Linux-visible NVMe devices (guest-level)\n  if [ -d /sys/class/nvme ] && [ \"$(ls -1 /sys/class/nvme 2>/dev/null | wc -l)\" -gt 0 ]; then\n    _HAS_NVME_SYS=\"YES\"\n  fi\n\n  if command -v lspci >/dev/null 2>&1; then\n    _HAS_LSPCI=\"YES\"\n    _LSPCI_CACHE=\"$(lspci -nn 2>/dev/null)\"\n\n    if printf 
\"%s\\n\" \"${_LSPCI_CACHE}\" | grep -Eqi 'Non-Volatile memory controller|NVMe'; then\n      _PCI_NVME_CTRL_HINT=\"YES\"\n    fi\n    if printf \"%s\\n\" \"${_LSPCI_CACHE}\" | grep -Eqi 'Virtio.*(block|SCSI)'; then\n      _PCI_VIRTIO_STORAGE_HINT=\"YES\"\n    fi\n\n    # Storage-ish controllers (best-effort, first match)\n    _PCI_STORAGE_SUMMARY=\"$(printf \"%s\\n\" \"${_LSPCI_CACHE}\" | \\\n      grep -Eim1 'Non-Volatile memory controller|SATA controller|RAID bus controller|SCSI storage controller|Mass storage controller|IDE interface|Virtio.*block|Virtio.*SCSI' | \\\n      sed 's/^[0-9a-fA-F:.]*[[:space:]]*//')\"\n\n    # NIC-ish controllers (best-effort, first match)\n    _PCI_NIC_SUMMARY=\"$(printf \"%s\\n\" \"${_LSPCI_CACHE}\" | \\\n      grep -Eim1 'Ethernet controller|Network controller' | \\\n      sed 's/^[0-9a-fA-F:.]*[[:space:]]*//')\"\n\n    # Strong hint for network-backed “NVMe” storage (extend patterns as you like)\n    if printf \"%s\\n\" \"${_LSPCI_CACHE}\" | grep -Eqi 'NVMe.*EBS|EBS Controller|Elastic Block Store'; then\n      _PCI_NET_STORAGE_HINT=\"YES\"\n    fi\n  fi\n}\n\n_get_l3_cache_mib() {\n  # Best-effort detection of L3 cache size (MiB) for sizing the memory benchmark.\n  # Returns empty string if unknown.\n  local _idx _lvl _sz _val\n\n  # Prefer sysfs (works in most Linux guests).\n  for _idx in /sys/devices/system/cpu/cpu0/cache/index*; do\n    [ -d \"${_idx}\" ] || continue\n    _lvl=\"$(cat \"${_idx}/level\" 2>/dev/null)\"\n    if [ \"${_lvl}\" = \"3\" ]; then\n      _sz=\"$(cat \"${_idx}/size\" 2>/dev/null)\"\n      case \"${_sz}\" in\n        *K) _val=\"${_sz%K}\"; echo $(( _val / 1024 )); return 0 ;;\n        *M) _val=\"${_sz%M}\"; echo \"${_val}\"; return 0 ;;\n      esac\n    fi\n  done\n\n  # Fallback to lscpu output.\n  if command -v lscpu >/dev/null 2>&1; then\n    _sz=\"$(lscpu 2>/dev/null | awk -F: '/L3 cache/ {gsub(/^[ \t]+/,\"\",$2); print $2; exit}')\"\n    case \"${_sz}\" in\n      *K) _val=\"${_sz%K}\"; echo $(( _val 
/ 1024 )); return 0 ;;\n      *M) _val=\"${_sz%M}\"; echo \"${_val}\"; return 0 ;;\n    esac\n  fi\n\n  echo \"\"\n}\n\n_calc_mem_bench_size_mib() {\n  # Choose a working set intended to exceed LLC but avoid OOM.\n  # NOTE: membench allocates *two* buffers of this size (src+dst), so the\n  # actual resident footprint is roughly 2x.\n  local _avail_kb _avail_mib _mib _l3_mib _target _max_by_avail\n\n  _avail_kb=\"$(awk '/MemAvailable:/ {print $2}' /proc/meminfo 2>/dev/null)\"\n  if [ -z \"${_avail_kb}\" ]; then\n    _avail_kb=\"$(awk '/MemFree:/ {print $2}' /proc/meminfo 2>/dev/null)\"\n  fi\n  _avail_mib=0\n  if [ -n \"${_avail_kb}\" ]; then\n    _avail_mib=$(( _avail_kb / 1024 ))\n  fi\n\n  _l3_mib=\"$(_get_l3_cache_mib)\"\n\n  # Start from 4x L3 (if known), otherwise 25% of MemAvailable.\n  if [ -n \"${_l3_mib}\" ] && [ \"${_l3_mib}\" -gt 0 ]; then\n    _target=$(( _l3_mib * 4 ))\n  else\n    if [ \"${_avail_mib}\" -gt 0 ]; then\n      _target=$(( _avail_mib / 4 ))\n    else\n      _target=\"${_MEM_BENCH_SIZE_MIN_MIB}\"\n    fi\n  fi\n\n  # Clamp to configured min/max.\n  _mib=\"${_target}\"\n  if [ \"${_mib}\" -lt \"${_MEM_BENCH_SIZE_MIN_MIB}\" ]; then\n    _mib=\"${_MEM_BENCH_SIZE_MIN_MIB}\"\n  fi\n  if [ \"${_mib}\" -gt \"${_MEM_BENCH_SIZE_MAX_MIB}\" ]; then\n    _mib=\"${_MEM_BENCH_SIZE_MAX_MIB}\"\n  fi\n\n  # Headroom guard: keep 2x buffers under ~60% of MemAvailable.\n  # (2 buffers + overhead + other daemons)\n  if [ \"${_avail_mib}\" -gt 0 ]; then\n    _max_by_avail=$(( (_avail_mib * 60 / 100) / 2 ))\n    if [ \"${_max_by_avail}\" -gt 0 ] && [ \"${_mib}\" -gt \"${_max_by_avail}\" ]; then\n      _mib=\"${_max_by_avail}\"\n      if [ \"${_mib}\" -lt \"${_MEM_BENCH_SIZE_MIN_MIB}\" ]; then\n        _mib=\"${_MEM_BENCH_SIZE_MIN_MIB}\"\n      fi\n    fi\n  fi\n\n  echo \"${_mib}\"\n}\n\n_build_membench() {\n  # Builds /tmp/perftest_membench if missing.\n  local _bin _c _cc\n  _bin=\"/tmp/perftest_membench\"\n  [ -x \"${_bin}\" ] && return 0\n\n  _ensure_cc\n\n  
if command -v gcc >/dev/null 2>&1; then\n    _cc=\"gcc\"\n  else\n    _cc=\"cc\"\n  fi\n\n  _c=\"/tmp/perftest_membench.c\"\n\n  cat > \"${_c}\" <<'C'\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdint.h>\n#include <string.h>\n#include <time.h>\n#include <pthread.h>\n#include <unistd.h>\n\ntypedef struct {\n  uint8_t *src;\n  uint8_t *dst;\n  size_t offset;\n  size_t len;\n  double seconds;\n  uint64_t bytes;\n} worker_t;\n\nstatic inline double now_sec(void) {\n  struct timespec ts;\n  clock_gettime(CLOCK_MONOTONIC, &ts);\n  return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;\n}\n\nstatic void *bw_worker(void *arg) {\n  worker_t *w = (worker_t *)arg;\n  uint8_t *src = w->src + w->offset;\n  uint8_t *dst = w->dst + w->offset;\n  size_t len = w->len;\n  uint64_t bytes = 0;\n\n  // Warm-up touch\n  for (size_t i = 0; i < len; i += 4096) {\n    dst[i] = (uint8_t)(src[i] ^ 0x5a);\n  }\n\n  const double end = now_sec() + w->seconds;\n  while (1) {\n    // Do a small batch of copies before re-checking time (reduces timer overhead)\n    for (int k = 0; k < 8; k++) {\n      memcpy(dst, src, len);\n      // Prevent overly aggressive optimization / dead-store elimination\n      dst[0] ^= src[0];\n      bytes += (uint64_t)len;\n    }\n    if (now_sec() >= end) break;\n  }\n\n  w->bytes = bytes;\n  return NULL;\n}\n\nstatic uint64_t xorshift64(uint64_t *state) {\n  uint64_t x = *state;\n  x ^= x << 13;\n  x ^= x >> 7;\n  x ^= x << 17;\n  *state = x;\n  return x;\n}\n\nstatic double pointer_chase_latency_ns(size_t bytes, double seconds) {\n  // Pointer chasing over a large random cycle to approximate DRAM latency.\n  // bytes should exceed LLC to avoid cache-only results.\n\n  size_t n = bytes / sizeof(uint32_t);\n  if (n < 1024) n = 1024;\n\n  uint32_t *next = NULL;\n  if (posix_memalign((void **)&next, 64, n * sizeof(uint32_t)) != 0) {\n    return 0.0;\n  }\n\n  // Build a random permutation (single cycle)\n  uint32_t *perm = NULL;\n  if (posix_memalign((void 
**)&perm, 64, n * sizeof(uint32_t)) != 0) {\n    free(next);\n    return 0.0;\n  }\n\n  for (size_t i = 0; i < n; i++) perm[i] = (uint32_t)i;\n\n  uint64_t seed = 0x12345678abcdef00ULL;\n  for (size_t i = n - 1; i > 0; i--) {\n    size_t j = (size_t)(xorshift64(&seed) % (i + 1));\n    uint32_t tmp = perm[i];\n    perm[i] = perm[j];\n    perm[j] = tmp;\n  }\n\n  for (size_t i = 0; i < n; i++) {\n    next[perm[i]] = perm[(i + 1) % n];\n  }\n\n  // Warm-up: walk a bit\n  volatile uint32_t idx = 0;\n  for (size_t i = 0; i < (n < 100000 ? n : 100000); i++) {\n    idx = next[idx];\n  }\n\n  const double end = now_sec() + seconds;\n  uint64_t ops = 0;\n  double t0 = now_sec();\n\n  while (1) {\n    // Batch pointer derefs to reduce timer overhead\n    for (int k = 0; k < 4096; k++) {\n      idx = next[idx];\n    }\n    ops += 4096;\n    if (now_sec() >= end) break;\n  }\n\n  double t1 = now_sec();\n  (void)idx;\n\n  free(perm);\n  free(next);\n\n  double elapsed = t1 - t0;\n  if (ops == 0 || elapsed <= 0.0) return 0.0;\n  return (elapsed * 1e9) / (double)ops;\n}\n\nint main(int argc, char **argv) {\n  long size_mib = 256;\n  long lat_mib = 0;\n  int threads = 1;\n  double seconds = 5.0;\n\n  for (int i = 1; i < argc; i++) {\n    if (!strcmp(argv[i], \"--size-mib\") && i + 1 < argc) {\n      size_mib = atol(argv[++i]);\n    } else if (!strcmp(argv[i], \"--threads\") && i + 1 < argc) {\n      threads = atoi(argv[++i]);\n    } else if (!strcmp(argv[i], \"--lat-mib\") && i + 1 < argc) {\n      lat_mib = atol(argv[++i]);\n    } else if (!strcmp(argv[i], \"--seconds\") && i + 1 < argc) {\n      seconds = atof(argv[++i]);\n    }\n  }\n\n  if (size_mib < 64) size_mib = 64;\n  if (threads < 1) threads = 1;\n\n  size_t total = (size_t)size_mib * 1024 * 1024;\n\n  uint8_t *src = NULL;\n  uint8_t *dst = NULL;\n  if (posix_memalign((void **)&src, 64, total) != 0) return 2;\n  if (posix_memalign((void **)&dst, 64, total) != 0) return 2;\n\n  // Fill src with a pattern to avoid special 
zero-page behaviors\n  for (size_t i = 0; i < total; i += 4096) {\n    src[i] = (uint8_t)((i >> 12) ^ 0xa5);\n  }\n\n  // Partition work among threads\n  pthread_t *t = calloc((size_t)threads, sizeof(pthread_t));\n  worker_t *w = calloc((size_t)threads, sizeof(worker_t));\n  if (!t || !w) return 3;\n\n  size_t chunk = total / (size_t)threads;\n  // Align chunk to 64 bytes\n  chunk = (chunk / 64) * 64;\n\n  double start = now_sec();\n  for (int i = 0; i < threads; i++) {\n    w[i].src = src;\n    w[i].dst = dst;\n    w[i].offset = (size_t)i * chunk;\n    w[i].len = chunk;\n    w[i].seconds = seconds;\n    w[i].bytes = 0;\n    pthread_create(&t[i], NULL, bw_worker, &w[i]);\n  }\n  for (int i = 0; i < threads; i++) pthread_join(t[i], NULL);\n  double end = now_sec();\n\n  uint64_t total_bytes = 0;\n  for (int i = 0; i < threads; i++) total_bytes += w[i].bytes;\n\n  double elapsed = end - start;\n  double mibps = 0.0;\n  if (elapsed > 0.0) mibps = ((double)total_bytes / 1024.0 / 1024.0) / elapsed;\n\n  // Latency test: pointer-chase over a large random cycle\n  // Default: derive from bandwidth working set; override with --lat-mib\n  size_t lat_bytes;\n  if (lat_mib > 0) {\n    lat_bytes = (size_t)lat_mib * 1024 * 1024;\n  } else {\n    lat_bytes = total;\n  }\n  if (lat_bytes > (size_t)512 * 1024 * 1024) lat_bytes = (size_t)512 * 1024 * 1024;\n  if (lat_bytes < (size_t)128 * 1024 * 1024) lat_bytes = (size_t)128 * 1024 * 1024;\n  double lat_ns = pointer_chase_latency_ns(lat_bytes, 1.5);\n\n  printf(\"BW_MIBPS=%.2f\\n\", mibps);\n  printf(\"LAT_NS=%.2f\\n\", lat_ns);\n  printf(\"SIZE_MIB=%ld\\n\", size_mib);\n  printf(\"THREADS=%d\\n\", threads);\n\n  free(w);\n  free(t);\n  free(dst);\n  free(src);\n\n  return 0;\n}\nC\n\n  \"${_cc}\" -O3 -pthread \"${_c}\" -o \"${_bin}\" >/dev/null 2>&1\n  if [ ! 
-x \"${_bin}\" ]; then\n    return 1\n  fi\n  return 0\n}\n\n_run_membench() {\n  # Outputs key=value lines on success.\n  local _size_mib=\"$1\"\n  local _threads=\"$2\"\n  local _seconds=\"$3\"\n  local _lat_mib=\"$4\"\n  local _bin=\"/tmp/perftest_membench\"\n\n  _build_membench || return 1\n\n  if [ -n \"${_lat_mib}\" ]; then\n    \"${_bin}\" --size-mib \"${_size_mib}\" --threads \"${_threads}\" --seconds \"${_seconds}\" --lat-mib \"${_lat_mib}\" 2>/dev/null\n  else\n    \"${_bin}\" --size-mib \"${_size_mib}\" --threads \"${_threads}\" --seconds \"${_seconds}\" 2>/dev/null\n  fi\n}\n\n_run_mem_benchmarks() {\n  # Sets globals:\n  #   _mem_transfer_speed (\"NNNNN MiB/s\")\n  #   _mem_latency        (\"NN.NN\" in ns)\n  #   _mem_latency_unit   (\"ns\")\n  #   _mem_bench_size_mib (per-buffer size)\n  #   _mem_bench_threads\n  #   _mem_bw_min/_mem_bw_max (numeric MiB/s, for variance hint)\n  #   _mem_lat_min/_mem_lat_max (numeric ns, for variance hint)\n  local _size_mib _threads _lat_mib _rep _out _bw _lat\n  local _bw_vals _lat_vals _bw_med _lat_med\n\n  _size_mib=\"$(_calc_mem_bench_size_mib)\"\n  _mem_bench_size_mib=\"${_size_mib}\"\n  _mem_bench_total_mib=$(( _size_mib * 2 ))\n\n  _threads=\"$(nproc 2>/dev/null || echo 1)\"\n  if [ \"${_threads}\" -gt \"${_MEM_BENCH_THREADS_MAX}\" ]; then\n    _threads=\"${_MEM_BENCH_THREADS_MAX}\"\n  fi\n  # Many VPSes won't improve (and can even *lose*) bandwidth past ~4 threads due to contention/boost behavior.\n  # Cap the bandwidth run at _MEM_BENCH_THREADS_SAT to keep results comparable across VM sizes.\n  if [ -n \"${_MEM_BENCH_THREADS_SAT:-}\" ] && [ \"${_MEM_BENCH_THREADS_SAT}\" -gt 0 ] && [ \"${_threads}\" -gt \"${_MEM_BENCH_THREADS_SAT}\" ]; then\n    _threads=\"${_MEM_BENCH_THREADS_SAT}\"\n  fi\n  if [ \"${_threads}\" -lt 1 ]; then\n    _threads=1\n  fi\n  _mem_bench_threads=\"${_threads}\"\n  _mem_bench_threads_grade=\"${_threads}\"\n\n  _lat_mib=\"${_MEM_BENCH_LAT_SIZE_MIB}\"\n  if [ \"${_lat_mib}\" -gt 
\"${_size_mib}\" ]; then\n    _lat_mib=\"${_size_mib}\"\n  fi\n  if [ \"${_lat_mib}\" -gt 512 ]; then\n    _lat_mib=512\n  fi\n  if [ \"${_lat_mib}\" -lt 128 ]; then\n    _lat_mib=128\n  fi\n\n  _bw_vals=\"\"\n  _lat_vals=\"\"\n\n  _rep=1\n  while [ \"${_rep}\" -le \"${_MEM_BENCH_REPEATS}\" ]; do\n    _out=\"$(_run_membench \"${_size_mib}\" \"${_threads}\" \"${_MEM_BENCH_TIME}\" \"${_lat_mib}\")\"\n    _bw=\"$(printf '%s\\n' \"${_out}\" | awk -F= '/^BW_MIBPS=/{print $2}' | head -n1)\"\n    _lat=\"$(printf '%s\\n' \"${_out}\" | awk -F= '/^LAT_NS=/{print $2}' | head -n1)\"\n\n    if [ -n \"${_bw}\" ]; then\n      _bw_vals=\"${_bw_vals}${_bw}\\n\"\n    fi\n    if [ -n \"${_lat}\" ]; then\n      _lat_vals=\"${_lat_vals}${_lat}\\n\"\n    fi\n\n    _rep=$((_rep + 1))\n  done\n\n  # Median-of-N to reduce noise (best-of was too optimistic and hides contention).\n  _bw_med=\"\"\n  _lat_med=\"\"\n  _mem_bw_min=\"\"\n  _mem_bw_max=\"\"\n  _mem_lat_min=\"\"\n  _mem_lat_max=\"\"\n\n  if [ -n \"${_bw_vals}\" ]; then\n    _bw_med=\"$(printf '%b' \"${_bw_vals}\" | sort -n | awk '{a[NR]=$1} END{ if(NR==0) exit; if(NR%2==1) print a[(NR+1)/2]; else print (a[NR/2]+a[NR/2+1])/2 }')\"\n    _mem_bw_min=\"$(printf '%b' \"${_bw_vals}\" | sort -n | head -n1)\"\n    _mem_bw_max=\"$(printf '%b' \"${_bw_vals}\" | sort -n | tail -n1)\"\n  fi\n\n  if [ -n \"${_lat_vals}\" ]; then\n    _lat_med=\"$(printf '%b' \"${_lat_vals}\" | sort -n | awk '{a[NR]=$1} END{ if(NR==0) exit; if(NR%2==1) print a[(NR+1)/2]; else print (a[NR/2]+a[NR/2+1])/2 }')\"\n    _mem_lat_min=\"$(printf '%b' \"${_lat_vals}\" | sort -n | head -n1)\"\n    _mem_lat_max=\"$(printf '%b' \"${_lat_vals}\" | sort -n | tail -n1)\"\n  fi\n\n  _mem_bench_tool=\"membench\"\n\n  if [ -n \"${_bw_med}\" ]; then\n    _mem_transfer_speed=\"${_bw_med} MiB/s\"\n  else\n    _mem_bench_tool=\"sysbench\"\n\n    # Fallback to sysbench if membench failed\n    # IMPORTANT: keep thread count consistent with membench grading logic (avoid nproc-penalty on 
larger VMs).\n    # sysbench memory throughput is not perfectly comparable to membench, but it is still useful as a fallback signal.\n    _mem_test=\"$(sysbench memory \\\n      --memory-block-size=1M \\\n      --memory-oper=write \\\n      --memory-access-mode=seq \\\n      --memory-total-size=100G \\\n      --threads=\"${_mem_bench_threads:-1}\" \\\n      --time=\"${_SYSBENCH_MEM_TIME}\" run </dev/null 2>/dev/null)\"\n    _mem_transfer_speed=\"$(echo \"${_mem_test}\" | awk '/transferred/ {gsub(\"[()]\",\"\",$(NF-1)); print $(NF-1)\" \"$(NF)}')\"\n  fi\n\n  _mem_latency_unit=\"ns\"\n  if [ -n \"${_lat_med}\" ]; then\n    _mem_latency=\"${_lat_med}\"\n  else\n    _mem_latency=\"\"\n  fi\n}\n\n# Convert bytes/sec to MiB/s (numeric only)\n_bps_to_mibps_num() {\n  awk '{ if (NR>0) printf \"%.2f\", $0 / 1024 / 1024 }'\n}\n\n# Convert bytes/sec to pretty \"X.XX MiB/s\"\n_bps_to_mibps_pretty() {\n  awk '{ if (NR>0) printf \"%.2f MiB/s\", $0 / 1024 / 1024; else printf \"error\" }'\n}\n\n# Convert bytes/sec to Mbit/s (numeric only; decimal megabits, 10^6)\n_bps_to_mbitps_num() {\n  awk '{ if (NR>0) printf \"%.2f\", ($0*8)/1000000 }'\n}\n\n# Convert bytes/sec to pretty \"X.XX Mbit/s\"\n_bps_to_mbitps_pretty() {\n  awk '{ if (NR>0) printf \"%.2f Mbit/s\", ($0*8)/1000000; else printf \"error\" }'\n}\n\n_redact_ip() {\n  case \"$1\" in\n    *.*) printf '%s.xxxx' \"$(printf '%s\\n' \"$1\" | cut -d . 
-f 1-3)\" ;;\n    *:*) printf '%s:xxxx' \"$(printf '%s\\n' \"$1\" | cut -d : -f 1-3)\" ;;\n    *) printf '%s' \"$1\" ;;\n  esac\n}\n\n_get_public_ipv4() {\n  curl -4 -s --max-time 5 http://icanhazip.com/ 2>/dev/null | tr -d '\\r\\n'\n}\n\n_download_benchmark_v4() {\n  # Single-stream IPv4 download speed probe (curl).\n  #\n  # Uses a byte-range request to keep it fast and to avoid false positives from\n  # empty/placeholder responses.\n  #\n  # Args:\n  #   $1 = URL\n  #   $2 = expected_min_bytes (optional, default 5 MiB)\n  #   $3 = range_end_byte (optional, default ~100 MiB)\n  #\n  # Output (on success):\n  #   speed_bps|size_bytes|time_total|http_code\n  #\n  # Output (on failure): empty line\n  local url=\"$1\"\n  local min_bytes=\"${2:-5242880}\"   # ~5 MiB\n  local range_end=\"${3:-104857590}\" # ~100 MiB\n\n  local out http_code size_download speed_bps time_total\n\n  out=\"$(curl -4 -A \"perftest/${_PT_VERSION}\" -sL --retry 1 --retry-delay 0 --max-time \"${_NET_TIMEOUT}\" --range \"0-${range_end}\" -o /dev/null \\\n    -w '%{http_code} %{size_download} %{speed_download} %{time_total}\\n' \"${url}\" 2>/dev/null)\"\n\n  [ -n \"${out}\" ] || return 0\n\n  http_code=\"$(printf '%s\\n' \"${out}\" | awk '{print $1}')\"\n  size_download=\"$(printf '%s\\n' \"${out}\" | awk '{print $2}')\"\n  speed_bps=\"$(printf '%s\\n' \"${out}\" | awk '{print $3}')\"\n  time_total=\"$(printf '%s\\n' \"${out}\" | awk '{print $4}')\"\n\n  # Validate HTTP status (200 OK or 206 Partial Content are expected)\n  case \"${http_code}\" in\n    200|206) : ;;\n    *) return 0 ;;\n  esac\n\n  # Validate payload size (protects against empty/placeholder/truncated responses)\n  if ! printf '%s' \"${size_download}\" | grep -Eq '^[0-9]+$'; then\n    return 0\n  fi\n  if [ \"${size_download}\" -lt \"${min_bytes}\" ]; then\n    _log \"Network test INVALID (${url}): got ${size_download} bytes, expected >=${min_bytes}\"\n    return 0\n  fi\n\n  # Validate speed numeric\n  if ! 
printf '%s' \"${speed_bps}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n    return 0\n  fi\n\n  printf '%s|%s|%s|%s\\n' \"${speed_bps}\" \"${size_download}\" \"${time_total}\" \"${http_code}\"\n}\n\n_iperf3_benchmark_v4() {\n  # Best-effort iPerf3 TCP throughput probe (IPv4), download + upload.\n  #\n  # Notes:\n  #  - Leaseweb speedtest endpoints allow only one connection per port at a time (5201-5210),\n  #    so we probe ports sequentially and force single stream on those hosts.\n  #\n  # Args:\n  #   $1 = host\n  #   $2 = port (optional; default 5201; used as a starting port for probing on Leaseweb)\n  #\n  # Output (on success):\n  #   dl_mbps|ul_mbps\n  #\n  # Output (on failure): empty line\n  local host=\"${1}\"\n  local port_start=\"${2:-5201}\"\n\n  [ -n \"${host}\" ] || return 0\n  [ \"${_NET_IPERF_ENABLED}\" = \"YES\" ] || return 0\n  command -v iperf3 >/dev/null 2>&1 || return 0\n\n  local parallel=\"${_NET_IPERF_PARALLEL}\"\n  local ports=\"${port_start}\"\n\n  if printf '%s' \"${host}\" | grep -q '\\.leaseweb\\.net$'; then\n    parallel=1\n    ports=\"5201 5202 5203 5204 5205 5206 5207 5208 5209 5210\"\n  fi\n\n  local p _out dl ul\n\n  for p in ${ports}; do\n    # Download (client receives) uses reverse mode (-R)\n    if command -v timeout >/dev/null 2>&1; then\n      _out=\"$(timeout \"${_NET_IPERF_TIMEOUT}s\" iperf3 -4 -f m -c \"${host}\" -p \"${p}\" -P \"${parallel}\" -t \"${_NET_IPERF_TIME}\" -R 2>/dev/null)\"\n    else\n      _out=\"$(iperf3 -4 -f m -c \"${host}\" -p \"${p}\" -P \"${parallel}\" -t \"${_NET_IPERF_TIME}\" -R 2>/dev/null)\"\n    fi\n    dl=\"$(printf '%s\\n' \"${_out}\" | awk '/receiver/ {v=$(NF-2)} END{print v}')\"\n\n    # Upload (client sends)\n    if command -v timeout >/dev/null 2>&1; then\n      _out=\"$(timeout \"${_NET_IPERF_TIMEOUT}s\" iperf3 -4 -f m -c \"${host}\" -p \"${p}\" -P \"${parallel}\" -t \"${_NET_IPERF_TIME}\" 2>/dev/null)\"\n    else\n      _out=\"$(iperf3 -4 -f m -c \"${host}\" -p \"${p}\" -P \"${parallel}\" -t 
\"${_NET_IPERF_TIME}\" 2>/dev/null)\"\n    fi\n    ul=\"$(printf '%s\\n' \"${_out}\" | awk '/receiver/ {v=$(NF-2)} END{print v}')\"\n\n    # numeric sanity\n    printf '%s' \"${dl}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$' || continue\n    printf '%s' \"${ul}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$' || continue\n\n    # filter obvious failures (e.g., 0.00)\n    awk -v dl=\"${dl}\" -v ul=\"${ul}\" -v min=\"${_NET_IPERF_MIN_Mbps}\" 'BEGIN{exit !(dl>=min && ul>=min)}' 2>/dev/null || continue\n\n    printf '%s|%s\\n' \"${dl}\" \"${ul}\"\n    return 0\n  done\n\n  return 0\n}\n\n# Best-effort ioping setup\n_ensure_ioping() {\n  if command -v ioping >/dev/null 2>&1; then\n    _ioping_cmd=\"ioping\"\n    return 0\n  fi\n\n  # Try package install first (preferred and safer than fetching binaries).\n  if command -v apt-get >/dev/null 2>&1; then\n    apt-get install -y ioping >/dev/null 2>&1 || true\n    if command -v ioping >/dev/null 2>&1; then\n      _ioping_cmd=\"ioping\"\n      return 0\n    fi\n  fi\n\n  # Fallback: fetch static binary (HTTPS first, then HTTP).\n  local dst=\"/tmp/ioping.static\"\n  if command -v curl >/dev/null 2>&1; then\n    if curl -fsSL --max-time 10 -o \"${dst}\" https://wget.racing/ioping.static 2>/dev/null || curl -fsSL --max-time 10 -o \"${dst}\" http://wget.racing/ioping.static 2>/dev/null; then\n      chmod +x \"${dst}\" 2>/dev/null || true\n      _ioping_cmd=\"${dst}\"\n      return 0\n    fi\n  fi\n\n  _ioping_cmd=\"\"\n  return 1\n}\n\n# Parse ioping \"min/avg/max/mdev = ...\" line and extract AVG.\n# Returns:\n#  - _parse_ioping_avg_us: numeric microseconds (rounded, empty if unknown)\n#  - _parse_ioping_avg_display: \"N.NN us|ms\" (empty if unknown)\n_parse_ioping_avg_us() {\n  # $1 = seek line\n  local line=\"$1\"\n  [ -n \"${line}\" ] || return 0\n  local avg_token\n  avg_token=\"$(printf \"%s\" \"${line}\" | awk -F'=' '{print $2}' | awk -F'/' '{print $2}' | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')\"\n  [ -n \"${avg_token}\" ] || return 0\n  
local v u\n  v=\"$(printf \"%s\" \"${avg_token}\" | awk '{print $1}')\"\n  u=\"$(printf \"%s\" \"${avg_token}\" | awk '{print $2}')\"\n  v=\"$(_num_or_empty \"${v}\")\"\n  [ -n \"${v}\" ] || return 0\n\n  case \"${u}\" in\n    ms|msec|msecs)\n      awk -v v=\"${v}\" 'BEGIN{printf \"%.0f\", v*1000}' 2>/dev/null\n      ;;\n    us|usec|usecs|'')\n      # Default to microseconds if unit is missing but looks like ioping output\n      awk -v v=\"${v}\" 'BEGIN{printf \"%.0f\", v}' 2>/dev/null\n      ;;\n    *)\n      # Unknown unit\n      ;;\n  esac\n}\n\n_parse_ioping_avg_display() {\n  # $1 = seek line\n  local line=\"$1\"\n  [ -n \"${line}\" ] || return 0\n  local avg_token\n  avg_token=\"$(printf \"%s\" \"${line}\" | awk -F'=' '{print $2}' | awk -F'/' '{print $2}' | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')\"\n  [ -n \"${avg_token}\" ] || return 0\n  local v u\n  v=\"$(printf \"%s\" \"${avg_token}\" | awk '{print $1}')\"\n  u=\"$(printf \"%s\" \"${avg_token}\" | awk '{print $2}')\"\n  v=\"$(_num_or_empty \"${v}\")\"\n  [ -n \"${v}\" ] || return 0\n\n  case \"${u}\" in\n    ms|msec|msecs)\n      printf \"%.2f ms\" \"${v}\"\n      ;;\n    us|usec|usecs|'')\n      printf \"%.1f us\" \"${v}\"\n      ;;\n    *)\n      # Fall back to raw token if it looks sane\n      printf \"%s\" \"${avg_token}\"\n      ;;\n  esac\n}\n\n_run_ioping_quick() {\n  _ioping_seek=\"N/A\"\n  _ioping_seqread=\"N/A\"\n  _ensure_ioping || return 0\n  [ -n \"${_ioping_cmd}\" ] || return 0\n  # Keep it short; these lines match nench-style output snippets\n  _ioping_seek=\"$(\"${_ioping_cmd}\" -DR -w \"${_IOPING_SECONDS}\" . 2>/dev/null | tail -n 1)\"\n  _ioping_seqread=\"$(\"${_ioping_cmd}\" -DRL -w \"${_IOPING_SECONDS}\" . 
2>/dev/null | tail -n 2 | head -n 1)\"\n}\n\n_run_ioping_quick_mounts() {\n  # Requires _test_mounts[] already populated.\n  # Populates:\n  #   _ioping_seek_line[\"/mount\"]    (last line of ioping -DR output)\n  #   _ioping_seqread_line[\"/mount\"] (seq read snippet)\n  # Keeps legacy globals _ioping_seek/_ioping_seqread as root (/) values.\n  _ioping_seek=\"N/A\"\n  _ioping_seqread=\"N/A\"\n  _ensure_ioping || return 0\n  [ -n \"${_ioping_cmd}\" ] || return 0\n\n  local _mp\n  for _mp in \"${_test_mounts[@]}\"; do\n    if [ -d \"${_mp}\" ]; then\n      _ioping_seek_line[\"${_mp}\"]=\"$(\"${_ioping_cmd}\" -DR -w \"${_IOPING_SECONDS}\" \"${_mp}\" 2>/dev/null | tail -n 1)\"\n      _ioping_seqread_line[\"${_mp}\"]=\"$(\"${_ioping_cmd}\" -DRL -w \"${_IOPING_SECONDS}\" \"${_mp}\" 2>/dev/null | tail -n 2 | head -n 1)\"\n    else\n      _ioping_seek_line[\"${_mp}\"]=\"N/A\"\n      _ioping_seqread_line[\"${_mp}\"]=\"N/A\"\n    fi\n  done\n\n  # Backward-compatible globals (root only)\n  if [ -n \"${_ioping_seek_line[\"/\"]:-}\" ]; then\n    _ioping_seek=\"${_ioping_seek_line[\"/\"]}\"\n  fi\n  if [ -n \"${_ioping_seqread_line[\"/\"]:-}\" ]; then\n    _ioping_seqread=\"${_ioping_seqread_line[\"/\"]}\"\n  fi\n}\n\n# AES throughput test (captures AES-NI issues better than sysbench cpu)\n_run_aes_throughput_test() {\n  _aes_mibps=\"N/A\"\n  _aes_seconds=\"N/A\"\n  if ! 
command -v openssl >/dev/null 2>&1; then\n    return 0\n  fi\n  local start end elapsed\n  start=\"$(date +%s.%N)\"\n  # Use fixed key/iv to avoid KDF overhead; measures raw AES speed better\n  dd if=/dev/zero bs=1M count=\"${_AES_TEST_MIB}\" 2>/dev/null \\\n    | openssl enc -aes-256-cbc \\\n      -K 0000 \\\n      -iv 0000 \\\n      -nosalt 2>/dev/null \\\n    > /dev/null\n  end=\"$(date +%s.%N)\"\n  elapsed=\"$(echo \"${end} - ${start}\" | bc -l 2>/dev/null || echo 0)\"\n  if [ \"${elapsed}\" != \"0\" ]; then\n    _aes_seconds=\"$(printf '%.3f' \"${elapsed}\")\"\n    _aes_mibps=\"$(echo \"${_AES_TEST_MIB} / ${elapsed}\" | bc -l 2>/dev/null | awk '{printf \"%.2f MiB/s\",$0}')\"\n  fi\n}\n\n_net_region_from_city() {\n  # Determine rough region based on ipinfo city name (best-effort).\n  # Returns: EU | NA | APAC | GLOBAL\n  local _city=\"${1:-}\"\n  _city=\"$(printf '%s' \"${_city}\" | tr '+' ' ' | tr '[:upper:]' '[:lower:]' 2>/dev/null)\"\n\n  # Normalize some common UTF-8 city spellings to ASCII-ish forms (best effort)\n  # so that patterns like \"nuernberg\" match even if ipinfo returns \"nürnberg\".\n  _city=\"$(printf '%s' \"${_city}\" | sed     -e 's/ü/ue/g' -e 's/ö/oe/g' -e 's/ä/ae/g' -e 's/ß/ss/g'     2>/dev/null)\"\n\n  [ -n \"${_city}\" ] || { echo \"GLOBAL\"; return 0; }\n\n  case \"${_city}\" in\n    # APAC\n    *sydney*|*melbourne*|*auckland*|*wellington*|*singapore*|*tokyo*|*osaka*|*seoul*|*hong*kong*|*taipei*|*manila*|*bangkok*|*jakarta*|*kuala*lumpur*|*mumbai*|*delhi*|*bangalore*|*perth*|*brisbane*|*adelaide*|*chennai*)\n      echo \"APAC\"\n      return 0\n      ;;\n    # North America\n    *new*york*|*nyc*|*chicago*|*kansas*|*dallas*|*atlanta*|*miami*|*los*angeles*|*san*francisco*|*seattle*|*boston*|*washington*|*ashburn*|*manassas*|*orangeburg*|*woodbury*|*phoenix*|*denver*|*toronto*|*montreal*|*vancouver*|*ottawa*|*kelowna*)\n      echo \"NA\"\n      return 0\n      ;;\n    # Europe (incl. 
UK)\n    *amsterdam*|*frankfurt*|*london*|*paris*|*warsaw*|*berlin*|*munich*|*nürnberg*|*nuremberg*|*nuernberg*|*nurnberg*|*vienna*|*prague*|*brussels*|*zurich*|*geneva*|*milan*|*rome*|*madrid*|*lisbon*|*stockholm*|*oslo*|*helsinki*|*copenhagen*|*dublin*|*manchester*|*barcelona*|*budapest*|*bucharest*|*sofia*|*zagreb*|*ljubljana*|*coventry*|*solihull*|*birmingham*|*vilnius*|*dronten*)\n      echo \"EU\"\n      return 0\n      ;;\n  esac\n\n  echo \"GLOBAL\"\n}\n\n_net_apac_zone_from_city() {\n  # For APAC, distinguish ANZ (Australia/New Zealand) vs Asia so we don't test Tokyo/Singapore from AU/NZ.\n  # Returns: ANZ | ASIA\n  local _city=\"${1:-}\"\n  _city=\"$(printf '%s' \"${_city}\" | tr '+' ' ' | tr '[:upper:]' '[:lower:]' 2>/dev/null)\"\n  # Best-effort normalize umlauts\n  _city=\"$(printf '%s' \"${_city}\" | sed -e 's/ü/ue/g' -e 's/ö/oe/g' -e 's/ä/ae/g' -e 's/ß/ss/g' 2>/dev/null)\"\n\n  case \"${_city}\" in\n    *sydney*|*melbourne*|*auckland*|*wellington*|*perth*|*brisbane*|*adelaide*|*canberra*|*christchurch*|*hobart*)\n      echo \"ANZ\"\n      ;;\n    *singapore*|*tokyo*|*osaka*|*seoul*|*hong*kong*|*taipei*|*manila*|*bangkok*|*jakarta*|*kuala*lumpur*|*hanoi*|*ho*chi*minh*|*mumbai*|*delhi*|*bangalore*|*chennai*)\n      echo \"ASIA\"\n      ;;\n    *)\n      # Default to ANZ for BOA fleet (AU/NZ).\n      echo \"ANZ\"\n      ;;\n  esac\n}\n\n_run_ipv4_net_check() {\n  _net_ipv4=\"N/A\"\n  _net_download=\"N/A\"\n  _net_download_mibps_num=\"\"\n  _net_download_mbps_num=\"\"\n  _net_http_ok=\"0\"\n  _net_http_total=\"0\"\n  _net_iperf_dl=\"N/A\"\n  _net_iperf_ul=\"N/A\"\n  _net_iperf_dl_mbps_med=\"\"\n  _net_iperf_ul_mbps_med=\"\"\n  _net_region=\"GLOBAL\"\n  _net_region_pretty=\"GLOBAL (unknown)\"\n\n  # Per-test HTTP results (used later by output sections)\n  _net_dl_name=()\n  _net_dl_url=()\n  _net_dl_pretty=()\n  _net_dl_bps=()\n  _net_dl_bytes=()\n  _net_dl_time=()\n  _net_dl_http=()\n\n  # Per-test iPerf3 results\n  _net_iperf_name=()\n  
_net_iperf_host=()\n  _net_iperf_port=()\n  _net_iperf_dl_mbps=()\n  _net_iperf_ul_mbps=()\n\n  local ip\n  ip=\"$(_get_public_ipv4)\"\n  [ -n \"${ip}\" ] || return 0\n  _net_ipv4=\"$(_redact_ip \"${ip}\")\"\n\n  # Determine region early to avoid \"NYC from Australia\" nonsense.\n  local _city\n  _city=\"\"\n  _find_server_city\n  if [ -n \"${_LOC_CITY}\" ]; then\n    _city=\"$(printf '%s' \"${_LOC_CITY}\" | tr '+' ' ' 2>/dev/null)\"\n  fi\n  [ -z \"${_city}\" ] && _city=\"unknown\"\n  _net_region=\"$(_net_region_from_city \"${_city}\")\"\n  _net_region_pretty=\"${_net_region} (${_city})\"\n\n  _net_apac_zone=\"\"\n  if [ \"${_net_region}\" = \"APAC\" ] && [ \"${_NET_APAC_LOCAL_ONLY}\" = \"YES\" ]; then\n    _net_apac_zone=\"$(_net_apac_zone_from_city \"${_city}\")\"\n  fi\n\n  # Stable HTTP endpoints (Range pulls; avoids \"speedtest\" bot blocks)\n  # Hetzner region endpoints\n  local h_nbg h_fsn h_ash\n  h_nbg=\"https://nbg1-speed.hetzner.com/100MB.bin\"\n  h_fsn=\"https://fsn1-speed.hetzner.com/100MB.bin\"\n  h_ash=\"https://ash-speed.hetzner.com/100MB.bin\"\n\n  # OVH proof file (EU)\n  local ovh_100m\n  ovh_100m=\"https://proof.ovh.net/files/100Mb.dat\"\n\n  # Leaseweb speedtest files (docs list HTTP; HTTPS often fails)\n  local lw_ams lw_fra lw_lon lw_wdc lw_nyc lw_chi lw_mtl lw_syd lw_sin lw_hkg lw_tyo\n  lw_ams=\"http://speedtest.ams1.nl.leaseweb.net/100mb.bin\"\n  lw_fra=\"http://speedtest.fra1.de.leaseweb.net/100mb.bin\"\n  lw_lon=\"http://speedtest.lon1.uk.leaseweb.net/100mb.bin\"\n  lw_wdc=\"http://speedtest.wdc2.us.leaseweb.net/100mb.bin\"\n  lw_nyc=\"http://speedtest.nyc1.us.leaseweb.net/100mb.bin\"\n  lw_chi=\"http://speedtest.chi11.us.leaseweb.net/100mb.bin\"\n  lw_mtl=\"http://speedtest.mtl2.ca.leaseweb.net/100mb.bin\"\n  lw_syd=\"http://speedtest.syd12.au.leaseweb.net/100mb.bin\"\n  lw_sin=\"http://speedtest.sin1.sg.leaseweb.net/100mb.bin\"\n  lw_hkg=\"http://speedtest.hkg12.hk.leaseweb.net/100mb.bin\"\n  
lw_tyo=\"http://speedtest.tyo11.jp.leaseweb.net/100mb.bin\"\n\n  # Akamai Connected Cloud / Linode speedtest files (multi-region)\n  # (Simple file URLs used by their speedtest pages; generally stable for curl.)\n  local a_ams a_fra a_lon a_chi a_dal a_new a_syd a_sin a_tyo2\n  a_ams=\"https://speedtest.amsterdam.linode.com/100MB-amsterdam.bin\"\n  a_fra=\"https://speedtest.frankfurt.linode.com/100MB-frankfurt.bin\"\n  a_lon=\"https://speedtest.london.linode.com/100MB-london.bin\"\n  a_chi=\"https://speedtest.chicago.linode.com/100MB-chicago.bin\"\n  a_dal=\"https://speedtest.dallas.linode.com/100MB-dallas.bin\"\n  a_new=\"https://speedtest.newark.linode.com/100MB-newark.bin\"\n  a_syd=\"https://speedtest.sydney.linode.com/100MB-sydney.bin\"\n  a_sin=\"https://speedtest.singapore.linode.com/100MB-singapore.bin\"\n  a_tyo2=\"https://speedtest.tokyo2.linode.com/100MB-tokyo2.bin\"\n\n  # Optional BOA mirror (Akamai-backed; should pick closest)\n  local boa_url\n  boa_url=\"\"\n  _find_fast_mirror_early\n  if [ -n \"${_urlDld}\" ]; then\n    boa_url=\"${_urlDld}\"\n  fi\n\n  # Require at least 5 MiB downloaded within timeout; request up to ~100 MiB\n  local min_bytes range_end\n  min_bytes=5242880\n  range_end=104857590\n\n  # Build regional target list (HTTP)\n  case \"${_net_region}\" in\n    EU)\n      _net_dl_name+=(\"Hetzner NBG1\"); _net_dl_url+=(\"${h_nbg}\")\n      _net_dl_name+=(\"Hetzner FSN1\"); _net_dl_url+=(\"${h_fsn}\")\n      _net_dl_name+=(\"Leaseweb AMS\");  _net_dl_url+=(\"${lw_ams}\")\n      _net_dl_name+=(\"Leaseweb FRA\");  _net_dl_url+=(\"${lw_fra}\")\n      _net_dl_name+=(\"Leaseweb LON\");  _net_dl_url+=(\"${lw_lon}\")\n      _net_dl_name+=(\"Akamai AMS\");    _net_dl_url+=(\"${a_ams}\")\n      _net_dl_name+=(\"Akamai FRA\");    _net_dl_url+=(\"${a_fra}\")\n      _net_dl_name+=(\"Akamai LON\");    _net_dl_url+=(\"${a_lon}\")\n      _net_dl_name+=(\"OVH proof\");     _net_dl_url+=(\"${ovh_100m}\")\n      _net_dl_name+=(\"BOA mirror\");    
_net_dl_url+=(\"${boa_url}\")\n      ;;\n    NA)\n      _net_dl_name+=(\"Hetzner ASH\");   _net_dl_url+=(\"${h_ash}\")\n      _net_dl_name+=(\"Leaseweb WDC\");  _net_dl_url+=(\"${lw_wdc}\")\n      _net_dl_name+=(\"Leaseweb NYC\");  _net_dl_url+=(\"${lw_nyc}\")\n      _net_dl_name+=(\"Leaseweb CHI\");  _net_dl_url+=(\"${lw_chi}\")\n      _net_dl_name+=(\"Leaseweb MTL\");  _net_dl_url+=(\"${lw_mtl}\")\n      _net_dl_name+=(\"Akamai CHI\");    _net_dl_url+=(\"${a_chi}\")\n      _net_dl_name+=(\"Akamai DAL\");    _net_dl_url+=(\"${a_dal}\")\n      _net_dl_name+=(\"Akamai NEW\");    _net_dl_url+=(\"${a_new}\")\n      _net_dl_name+=(\"BOA mirror\");    _net_dl_url+=(\"${boa_url}\")\n      ;;\n    APAC)\n    if [ \"${_NET_APAC_LOCAL_ONLY}\" = \"YES\" ] && [ \"${_net_apac_zone}\" = \"ANZ\" ]; then\n      _net_dl_name+=(\"Leaseweb SYD\");  _net_dl_url+=(\"${lw_syd}\")\n      _net_dl_name+=(\"Akamai SYD\");    _net_dl_url+=(\"${a_syd}\")\n      _net_dl_name+=(\"BOA mirror\");    _net_dl_url+=(\"${boa_url}\")\n    else\n      # Asia-wide APAC (only if the server itself is in Asia region)\n      _net_dl_name+=(\"Leaseweb SIN\");  _net_dl_url+=(\"${lw_sin}\")\n      _net_dl_name+=(\"Leaseweb TYO\");  _net_dl_url+=(\"${lw_tyo}\")\n      _net_dl_name+=(\"Leaseweb HKG\");  _net_dl_url+=(\"${lw_hkg}\")\n      _net_dl_name+=(\"Akamai SIN\");    _net_dl_url+=(\"${a_sin}\")\n      _net_dl_name+=(\"Akamai TYO2\");   _net_dl_url+=(\"${a_tyo2}\")\n      _net_dl_name+=(\"BOA mirror\");    _net_dl_url+=(\"${boa_url}\")\n    fi\n    ;;\n    *)\n      # Conservative fallback when city is unknown.\n      _net_dl_name+=(\"Hetzner FSN1\");  _net_dl_url+=(\"${h_fsn}\")\n      _net_dl_name+=(\"Leaseweb AMS\");  _net_dl_url+=(\"${lw_ams}\")\n      _net_dl_name+=(\"Leaseweb WDC\");  _net_dl_url+=(\"${lw_wdc}\")\n      _net_dl_name+=(\"Leaseweb SYD\");  _net_dl_url+=(\"${lw_syd}\")\n      _net_dl_name+=(\"Akamai FRA\");    _net_dl_url+=(\"${a_fra}\")\n      _net_dl_name+=(\"Akamai CHI\");    
_net_dl_url+=(\"${a_chi}\")\n      _net_dl_name+=(\"Akamai SYD\");    _net_dl_url+=(\"${a_syd}\")\n      _net_dl_name+=(\"BOA mirror\");    _net_dl_url+=(\"${boa_url}\")\n      ;;\n  esac\n\n  local i name url line bps bytes ttotal code pretty\n  local _ok_bps=()\n  i=0\n  while [ \"${i}\" -lt \"${#_net_dl_name[@]}\" ]; do\n    name=\"${_net_dl_name[${i}]}\"\n    url=\"${_net_dl_url[${i}]}\"\n    if [ -z \"${url}\" ]; then\n      _net_dl_pretty+=(\"N/A\")\n      _net_dl_bps+=(\"\")\n      _net_dl_bytes+=(\"\")\n      _net_dl_time+=(\"\")\n      _net_dl_http+=(\"\")\n      i=$((i + 1))\n      continue\n    fi\n\n    _log \"Network HTTP test: ${name} (${url})\"\n    line=\"$(_download_benchmark_v4 \"${url}\" \"${min_bytes}\" \"${range_end}\")\"\n    if [ -n \"${line}\" ]; then\n      bps=\"$(printf '%s\\n' \"${line}\" | cut -d'|' -f1)\"\n      bytes=\"$(printf '%s\\n' \"${line}\" | cut -d'|' -f2)\"\n      ttotal=\"$(printf '%s\\n' \"${line}\" | cut -d'|' -f3)\"\n      code=\"$(printf '%s\\n' \"${line}\" | cut -d'|' -f4)\"\n      pretty=\"$(printf '%s\\n' \"${bps}\" | _bps_to_mbitps_pretty)\"\n      _net_dl_pretty+=(\"${pretty}\")\n      _net_dl_bps+=(\"${bps}\")\n      _net_dl_bytes+=(\"${bytes}\")\n      _net_dl_time+=(\"${ttotal}\")\n      _net_dl_http+=(\"${code}\")\n      if printf '%s' \"${bps}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n        _ok_bps+=(\"${bps}\")\n      fi\n    else\n      _net_dl_pretty+=(\"error\")\n      _net_dl_bps+=(\"\")\n      _net_dl_bytes+=(\"\")\n      _net_dl_time+=(\"\")\n      _net_dl_http+=(\"\")\n    fi\n    i=$((i + 1))\n  done\n\n  _net_http_total=\"${#_net_dl_name[@]}\"\n  _net_http_ok=\"${#_ok_bps[@]}\"\n\n  # Score = filtered median of successful HTTP pulls (bytes/sec)\n  # Drop only the *very weak* endpoints relative to the unfiltered median.\n  # This avoids (a) punishing a healthy link because one remote is slow, but also (b) inflating the score\n  # by effectively keeping only the single fastest mirror.\n  local 
med_bps raw_med_bps thr_bps\n  local _ok_bps_filt=()\n  raw_med_bps=\"$(_median_num \"${_ok_bps[@]}\")\"\n  if printf '%s' \"${raw_med_bps}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n    thr_bps=\"$(awk -v m=\"${raw_med_bps}\" -v r=\"${_NET_HTTP_OUTLIER_RATIO}\" 'BEGIN{printf \"%.6f\", m*r}')\"\n    local _j\n    _j=0\n    while [ \"${_j}\" -lt \"${#_ok_bps[@]}\" ]; do\n      if awk -v v=\"${_ok_bps[${_j}]}\" -v t=\"${thr_bps}\" 'BEGIN{exit !(v>=t)}'; then\n        _ok_bps_filt+=(\"${_ok_bps[${_j}]}\")\n      fi\n      _j=$((_j + 1))\n    done\n  fi\n  if [ \"${#_ok_bps_filt[@]}\" -ge 2 ]; then\n    med_bps=\"$(_median_num \"${_ok_bps_filt[@]}\")\"\n  else\n    med_bps=\"${raw_med_bps}\"\n  fi\n\n  if printf '%s' \"${med_bps}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n    _net_download_mibps_num=\"$(printf '%s\\n' \"${med_bps}\" | _bps_to_mibps_num)\"\n    _net_download_mbps_num=\"$(printf '%s\\n' \"${med_bps}\" | _bps_to_mbitps_num)\"\n    _net_download=\"$(printf '%s\\n' \"${med_bps}\" | _bps_to_mbitps_pretty) (median ${_net_http_ok}/${_net_http_total})\"\n  else\n    _net_download_mibps_num=\"\"\n    _net_download_mbps_num=\"\"\n    _net_download=\"error\"\n  fi\n\n  # iPerf3 port throughput (best-effort; region-filtered; median of successes)\n  # Guarded by _NET_IPERF_ENABLED — even if iperf3 is already installed,\n  # we do not run it unless explicitly opted in via --iperf.\n  if [ \"${_NET_IPERF_ENABLED}\" = \"YES\" ] && command -v iperf3 >/dev/null 2>&1; then\n    local _cand entry _n _h _p res dl_mbps ul_mbps ok\n    ok=0\n    _cand=()\n\n    case \"${_net_region}\" in\n      EU)\n        _cand+=(\"LW-AMS|speedtest.ams1.nl.leaseweb.net|5201\")\n        _cand+=(\"LW-FRA|speedtest.fra1.de.leaseweb.net|5201\")\n        _cand+=(\"LW-LON|speedtest.lon1.uk.leaseweb.net|5201\")\n        _cand+=(\"LW-L12|speedtest.lon12.uk.leaseweb.net|5201\")\n        ;;\n      NA)\n        _cand+=(\"LW-WDC|speedtest.wdc2.us.leaseweb.net|5201\")\n        
_cand+=(\"LW-NYC|speedtest.nyc1.us.leaseweb.net|5201\")\n        _cand+=(\"LW-CHI|speedtest.chi11.us.leaseweb.net|5201\")\n        _cand+=(\"LW-MTL|speedtest.mtl2.ca.leaseweb.net|5201\")\n        ;;\n      APAC)\n      if [ \"${_NET_APAC_LOCAL_ONLY}\" = \"YES\" ] && [ \"${_net_apac_zone}\" = \"ANZ\" ]; then\n        _cand+=(\"LW-SYD|speedtest.syd12.au.leaseweb.net|5201\")\n      else\n        _cand+=(\"LW-SIN|speedtest.sin1.sg.leaseweb.net|5201\")\n        _cand+=(\"LW-TYO|speedtest.tyo11.jp.leaseweb.net|5201\")\n        _cand+=(\"LW-HKG|speedtest.hkg12.hk.leaseweb.net|5201\")\n      fi\n      ;;\n      *)\n        _cand+=(\"LW-AMS|speedtest.ams1.nl.leaseweb.net|5201\")\n        _cand+=(\"LW-WDC|speedtest.wdc2.us.leaseweb.net|5201\")\n        _cand+=(\"LW-SYD|speedtest.syd12.au.leaseweb.net|5201\")\n        ;;\n    esac\n\n    for entry in \"${_cand[@]}\"; do\n      [ \"${ok}\" -ge \"${_NET_IPERF_MAX_SERVERS}\" ] && break\n      _n=\"$(printf '%s' \"${entry}\" | cut -d'|' -f1)\"\n      _h=\"$(printf '%s' \"${entry}\" | cut -d'|' -f2)\"\n      _p=\"$(printf '%s' \"${entry}\" | cut -d'|' -f3)\"\n\n      _log \"Network iPerf3 test: ${_n} (${_h}:${_p})\"\n      res=\"$(_iperf3_benchmark_v4 \"${_h}\" \"${_p}\")\"\n      if [ -n \"${res}\" ]; then\n        dl_mbps=\"$(printf '%s\\n' \"${res}\" | cut -d'|' -f1)\"\n        ul_mbps=\"$(printf '%s\\n' \"${res}\" | cut -d'|' -f2)\"\n        _net_iperf_name+=(\"${_n}\")\n        _net_iperf_host+=(\"${_h}\")\n        _net_iperf_port+=(\"${_p}\")\n        _net_iperf_dl_mbps+=(\"${dl_mbps}\")\n        _net_iperf_ul_mbps+=(\"${ul_mbps}\")\n        ok=$((ok + 1))\n      fi\n    done\n\n    if [ \"${#_net_iperf_dl_mbps[@]}\" -gt 0 ]; then\n      # Filter out very weak remote results relative to the *best* DL result, then take the median\n      # of the remaining set. 
This prevents a single distant/poor route from dragging the capacity\n      # estimate down (TCP here is used as the primary \"port capacity\" signal).\n      local _dl_med _ul_med _thr_dl _best_dl\n      local _dl_filt=()\n      local _ul_filt=()\n      _best_dl=\"$(printf '%s\\n' \"${_net_iperf_dl_mbps[@]}\" | sort -n | tail -n 1)\"\n      _thr_dl=\"$(awk -v b=\"${_best_dl}\" -v r=\"${_NET_IPERF_OUTLIER_RATIO}\" 'BEGIN{printf \"%.3f\", b*r}')\"\n      local _k\n      _k=0\n      while [ \"${_k}\" -lt \"${#_net_iperf_dl_mbps[@]}\" ]; do\n        if awk -v v=\"${_net_iperf_dl_mbps[${_k}]}\" -v t=\"${_thr_dl}\" 'BEGIN{exit !(v>=t)}'; then\n          _dl_filt+=(\"${_net_iperf_dl_mbps[${_k}]}\")\n          _ul_filt+=(\"${_net_iperf_ul_mbps[${_k}]}\")\n        fi\n        _k=$((_k + 1))\n      done\n      if [ \"${#_dl_filt[@]}\" -ge 2 ]; then\n        _dl_med=\"$(_median_num \"${_dl_filt[@]}\")\"\n        _ul_med=\"$(_median_num \"${_ul_filt[@]}\")\"\n      elif [ \"${#_dl_filt[@]}\" -eq 1 ]; then\n        _dl_med=\"${_dl_filt[0]}\"\n        _ul_med=\"${_ul_filt[0]}\"\n      else\n        _dl_med=\"$(_median_num \"${_net_iperf_dl_mbps[@]}\")\"\n        _ul_med=\"$(_median_num \"${_net_iperf_ul_mbps[@]}\")\"\n      fi\n      if printf '%s' \"${_dl_med}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n        _net_iperf_dl_mbps_med=\"${_dl_med}\"\n        _net_iperf_dl=\"${_dl_med} Mbit/s\"\n      fi\n      if printf '%s' \"${_ul_med}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n        _net_iperf_ul_mbps_med=\"${_ul_med}\"\n        _net_iperf_ul=\"${_ul_med} Mbit/s\"\n      fi\n    fi\n  fi\n}\n\n# Parse mpstat output and return average %steal.\n# Works across sysstat versions where column order may vary (do NOT assume last column).\n_parse_mpstat_steal_avg() {\n  # Reads from stdin by default; or from a file if $1 is provided.\n  local _src=\"${1:-/dev/stdin}\"\n  awk '\n    BEGIN { col = 0 }\n    /%steal/ {\n      # Locate the %steal column dynamically from the header line.\n 
     for (i = 1; i <= NF; i++) {\n        if ($i ~ /%steal/) { col = i }\n      }\n    }\n    /^[0-9]/ {\n      if (col > 0) { steal_sum += $(col); steal_cnt++ }\n    }\n    END {\n      if (steal_cnt > 0) { printf \"%.2f\", steal_sum / steal_cnt }\n    }' \"${_src}\" 2>/dev/null\n}\n\n# Return average %steal while running for N seconds (requires mpstat)\n_measure_cpu_steal() {\n  # $1 = seconds\n  local _seconds=\"$1\"\n  if ! command -v mpstat >/dev/null 2>&1; then\n    echo \"\"\n    return 0\n  fi\n  mpstat 1 \"${_seconds}\" 2>/dev/null | _parse_mpstat_steal_avg\n}\n\n# Decide which mounts to test:\n# - include \"/\" always if it is a \"real\" filesystem type\n# - include other mounted filesystems (separate partitions/volumes)\n# - exclude /boot*, pseudo fs, and tiny mounts\n_get_test_mounts() {\n  df -P -T -B1 \\\n    | awk 'NR>1 {print $2 \"|\" $7 \"|\" $3}' \\\n    | while IFS=\"|\" read -r _fstype _mount _size_bytes; do\n      case \"${_fstype}\" in\n        ext4|xfs|btrfs|zfs|f2fs) : ;;\n        *) continue ;;\n      esac\n      case \"${_mount}\" in\n        /boot|/boot/*|/boot/efi|/boot/efi/*) continue ;;\n        /proc|/proc/*|/sys|/sys/*|/dev|/dev/*|/run|/run/*) continue ;;\n        /snap|/snap/*) continue ;;\n      esac\n      if [ \"${_size_bytes}\" -lt $((2 * 1024 * 1024 * 1024)) ]; then\n        continue\n      fi\n      case \"${_mount}\" in\n        /*) echo \"${_mount}\" ;;\n        *) : ;;\n      esac\n    done \\\n    | awk '!seen[$0]++ { print $0 }'\n}\n\n# Extract Use% for a mount (robust across df variants / extra columns).\n_df_get_used_pct() {\n  # $1 = mountpoint\n  local _mp=\"$1\"\n  command df -P -l \"${_mp}\" 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { if (u>0) { gsub(/%/,\"\",$u); print $u } }\n  '\n}\n\n# Extract inode Use% for a mount.\n_df_get_iused_pct() {\n  # $1 = mountpoint\n  local _mp=\"$1\"\n  command df -P -i -l \"${_mp}\" 2>/dev/null | awk '\n    NR==1 { 
for (i=1; i<=NF; i++) if ($i==\"IUse%\") u=i }\n    NR==2 { if (u>0) { gsub(/%/,\"\",$u); print $u } }\n  '\n}\n\n# List relevant mounted filesystems (real disk mounts only).\n_list_real_mounts() {\n  # Output: mountpoint|source|fstype|options\n  command findmnt -rno TARGET,SOURCE,FSTYPE,OPTIONS 2>/dev/null \\\n    | awk -F' ' '{\n        mp=$1; src=$2; fs=$3; opt=$4;\n        if (fs ~ /^(ext4|xfs|btrfs|f2fs|zfs)$/) {\n          if (mp ~ /^\\/boot(\\/|$)/) next;\n          print mp \"|\" src \"|\" fs \"|\" opt;\n        }\n      }' \\\n    | awk '!seen[$0]++{print $0}'\n}\n\n# Analyze /etc/fstab for performance/operational hints.\n_analyze_fstab_mount_opts() {\n  # Output lines: mountpoint|fstype|opts|spec\n  [ -e /etc/fstab ] || return 0\n  awk '\n    BEGIN{OFS=\"|\"}\n    /^[[:space:]]*#/ {next}\n    /^[[:space:]]*$/ {next}\n    NF<4 {next}\n    {\n      spec=$1; mp=$2; fs=$3; opt=$4;\n      if (mp ~ /^\\// && fs ~ /^(ext4|xfs|btrfs|f2fs|zfs)$/) {\n        print mp,fs,opt,spec;\n      }\n    }' /etc/fstab 2>/dev/null\n}\n\n_print_fs_section() {\n  local _mp _src _fs _opt\n  local _used _iused\n  local _fstab_cache _fstab_line _fstab_opt _fstab_fs _fstab_spec\n  local _notes _had\n  local _first\n  local _mp_has_noatime _mp_has_discard\n  local _fs_has_noatime _fs_has_discard\n  local _want_noatime _want_nofail _want_rm_growfs\n  local _rootflags_discard\n\n  echo \" =============================================================================\"\n  echo \" == 3. FILESYSTEMS (type/options + space/inodes sanity)                     ==\"\n  echo \" =============================================================================\"\n\n  if ! 
command -v findmnt >/dev/null 2>&1; then\n    echo \"   Note: findmnt not found - skipping filesystem details\"\n    echo \"\"\n    return 0\n  fi\n\n  _rootflags_discard=\"NO\"\n  if [ -r /proc/cmdline ] && grep -qE '(^|[[:space:]])rootflags=[^[:space:]]*discard' /proc/cmdline 2>/dev/null; then\n    _rootflags_discard=\"YES\"\n  fi\n\n  _fstab_cache=\"$(mktemp /tmp/perftest.fstab.XXXXXX 2>/dev/null)\"\n  if [ -n \"${_fstab_cache}\" ]; then\n    _analyze_fstab_mount_opts > \"${_fstab_cache}\" 2>/dev/null || true\n  fi\n\n  _had=\"NO\"\n  _first=\"YES\"\n  while IFS='|' read -r _mp _src _fs _opt; do\n    [ -n \"${_mp}\" ] || continue\n    _had=\"YES\"\n\n    _used=\"$(_df_get_used_pct \"${_mp}\")\"\n    _used=\"${_used//[^0-9]/}\"\n    _iused=\"$(_df_get_iused_pct \"${_mp}\")\"\n    _iused=\"${_iused//[^0-9]/}\"\n\n    echo \"\"\n    echo \"   ${_mp}\"\n    echo \"     type: ${_fs}  src: ${_src}\"\n    echo \"     usage: ${_used:-N/A}%  inodes: ${_iused:-N/A}%\"\n\n    # Flags from live mount options.\n    _mp_has_noatime=\"NO\"\n    _mp_has_discard=\"NO\"\n\n    # Normalize: remove spaces, wrap in commas once.\n    _this_opt=\",${_opt//[[:space:]]/},\"\n\n    case \"${_this_opt}\" in *,noatime,*) _mp_has_noatime=\"YES\";; esac\n    case \"${_this_opt}\" in *,discard,*) _mp_has_discard=\"YES\";; esac\n\n    # Alerts (space / inodes) attached to mount.\n    if [ -n \"${_used}\" ] && [ \"${_used}\" -ge 90 ] 2>/dev/null; then\n      echo \"     - ⚠️ ALERT: filesystem is >90% used (risk of failures)\"\n    fi\n    if [ -n \"${_iused}\" ] && [ \"${_iused}\" -ge 90 ] 2>/dev/null; then\n      echo \"     - ⚠️ ALERT: inode usage is >90% (many-small-files risk; Composer/vendor trees can trigger this)\"\n    fi\n\n    # /etc/fstab lookup for this mount (so hints can be attached and deduplicated).\n    _fstab_line=\"\"\n    _fstab_opt=\"\"\n    _fstab_fs=\"\"\n    _fstab_spec=\"\"\n    _fs_has_noatime=\"NO\"\n    _fs_has_discard=\"NO\"\n    _want_noatime=\"NO\"\n    
_want_nofail=\"NO\"\n    _want_rm_growfs=\"NO\"\n    if [ -n \"${_fstab_cache}\" ] && [ -e \"${_fstab_cache}\" ]; then\n      _fstab_line=\"$(awk -F'|' -v mp=\"${_mp}\" '$1==mp {print $0; exit}' \"${_fstab_cache}\" 2>/dev/null)\"\n      if [ -n \"${_fstab_line}\" ]; then\n        _fstab_fs=\"$(echo \"${_fstab_line}\" | awk -F'|' '{print $2}')\"\n        _fstab_opt=\"$(echo \"${_fstab_line}\" | awk -F'|' '{print $3}')\"\n        _fstab_spec=\"$(echo \"${_fstab_line}\" | awk -F'|' '{print $4}')\"\n\n        # Normalize: remove spaces, wrap in commas once.\n        _opts=\",${_fstab_opt//[[:space:]]/},\"\n\n        case \",${_opts},\" in *,noatime,*) _fs_has_noatime=\"YES\";; esac\n        case \",${_opts},\" in *,discard,*) _fs_has_discard=\"YES\";; esac\n\n        if [ \"${_fs_has_noatime}\" != \"YES\" ]; then\n          _want_noatime=\"YES\"\n        fi\n        if [ \"${_mp}\" != \"/\" ]; then\n          case \",${_fstab_opt},\" in\n            *,nofail,*) : ;;\n            *) _want_nofail=\"YES\";;\n          esac\n        fi\n        case \",${_fstab_opt},\" in\n          *,x-systemd.growfs,*|*,x-systemd.growfs|x-systemd.growfs,*|x-systemd.growfs)\n            _want_rm_growfs=\"YES\"\n            ;;\n        esac\n      fi\n    fi\n\n    # Mount-level hints (only when we can't attach a clear fstab hint).\n    if [ -z \"${_fstab_line}\" ]; then\n      if [ \"${_mp_has_noatime}\" != \"YES\" ]; then\n        echo \"     hint: consider adding 'noatime' to reduce metadata writes\"\n      fi\n    fi\n\n    # 'discard' note (print once, even if both live mount and fstab include it).\n    if [ \"${_mp_has_discard}\" = \"YES\" ] || [ \"${_fs_has_discard}\" = \"YES\" ]; then\n      if [ \"${_mp}\" = \"/\" ] && [ \"${_rootflags_discard}\" = \"YES\" ]; then\n        echo \"     note: 'discard' is enforced for / via kernel cmdline (rootflags=discard); remove it in GRUB to disable\"\n      else\n        echo \"     note: 'discard' is enabled (online TRIM); periodic fstrim is 
often preferred\"\n      fi\n    fi\n\n    # /etc/fstab hints for this mount (attached; print only when actionable).\n    if [ -n \"${_fstab_line}\" ]; then\n      if [ \"${_want_noatime}\" = \"YES\" ] || [ \"${_want_nofail}\" = \"YES\" ] || [ \"${_want_rm_growfs}\" = \"YES\" ]; then\n        echo \"     fstab: opts=${_fstab_opt}\"\n        if [ \"${_want_noatime}\" = \"YES\" ]; then\n          echo \"       - consider adding: noatime (reduces metadata writes)\"\n        fi\n        if [ \"${_want_nofail}\" = \"YES\" ]; then\n          echo \"       - consider adding: nofail (boot continues even if this disk is missing)\"\n        fi\n        if [ \"${_want_rm_growfs}\" = \"YES\" ]; then\n          echo \"       - consider removing: x-systemd.growfs (systemd-only; not useful on Devuan)\"\n        fi\n      fi\n    fi\n  done < <(_list_real_mounts)\n\n  if [ \"${_had}\" = \"NO\" ]; then\n    echo \"   No disk-backed ext4/xfs/btrfs/f2fs mounts detected\"\n    echo \"\"\n  else\n    echo \"\"\n  fi\n\n  [ -n \"${_fstab_cache}\" ] && [ -e \"${_fstab_cache}\" ] && rm -f \"${_fstab_cache}\" >/dev/null 2>&1 || true\n}\n\n\n# Run fio random 4k randrw test in a mount and write JSON output to a file\n_run_fio_rand_test() {\n  # $1 = mountpoint\n  # $2 = output_file\n  local _mount_point=\"$1\"\n  local _output_file=\"$2\"\n  local _safe_mount_point\n  _safe_mount_point=\"$(echo \"${_mount_point}\" | sed 's#/#_#g')\"\n  local _test_dir=\"${_mount_point}/.perftest_fio_rand_${_safe_mount_point}_$$\"\n  mkdir -p \"${_test_dir}\" || return 1\n  fio \\\n    --name=randreadwrite \\\n    --directory=\"${_test_dir}\" \\\n    --ioengine=libaio \\\n    --iodepth=64 \\\n    --rw=randrw \\\n    --rwmixread=50 \\\n    --bs=4k \\\n    --direct=1 \\\n    --size=1G \\\n    --numjobs=4 \\\n    --runtime=\"${_FIO_RAND_RUNTIME}\" \\\n    --time_based \\\n    --group_reporting \\\n    --output-format=json \\\n    > \"${_output_file}\" 2>/dev/null\n  rm -rf \"${_test_dir}\" >/dev/null 
2>&1\n}\n\n# Run fio sequential read test\n_run_fio_seq_read_test() {\n  # $1 = mountpoint\n  # $2 = output_file\n  local _mount_point=\"$1\"\n  local _output_file=\"$2\"\n  local _safe_mount_point\n  _safe_mount_point=\"$(echo \"${_mount_point}\" | sed 's#/#_#g')\"\n  local _test_dir=\"${_mount_point}/.perftest_fio_seqread_${_safe_mount_point}_$$\"\n  mkdir -p \"${_test_dir}\" || return 1\n  fio \\\n    --name=seqread \\\n    --directory=\"${_test_dir}\" \\\n    --ioengine=libaio \\\n    --iodepth=32 \\\n    --rw=read \\\n    --bs=128k \\\n    --direct=1 \\\n    --size=2G \\\n    --runtime=\"${_FIO_SEQ_RUNTIME}\" \\\n    --time_based \\\n    --group_reporting \\\n    --output-format=json \\\n    > \"${_output_file}\" 2>/dev/null\n  rm -rf \"${_test_dir}\" >/dev/null 2>&1\n}\n\n# Run fio sequential write test\n_run_fio_seq_write_test() {\n  # $1 = mountpoint\n  # $2 = output_file\n  local _mount_point=\"$1\"\n  local _output_file=\"$2\"\n  local _safe_mount_point\n  _safe_mount_point=\"$(echo \"${_mount_point}\" | sed 's#/#_#g')\"\n  local _test_dir=\"${_mount_point}/.perftest_fio_seqwrite_${_safe_mount_point}_$$\"\n  mkdir -p \"${_test_dir}\" || return 1\n  fio \\\n    --name=seqwrite \\\n    --directory=\"${_test_dir}\" \\\n    --ioengine=libaio \\\n    --iodepth=32 \\\n    --rw=write \\\n    --bs=128k \\\n    --direct=1 \\\n    --size=2G \\\n    --runtime=\"${_FIO_SEQ_RUNTIME}\" \\\n    --time_based \\\n    --group_reporting \\\n    --output-format=json \\\n    > \"${_output_file}\" 2>/dev/null\n  rm -rf \"${_test_dir}\" >/dev/null 2>&1\n}\n\n# Run fio random 4k randrw QD=1 test (latency-focused)\n_run_fio_qd1_test() {\n  # $1 = mountpoint\n  # $2 = output_file\n  local _mount_point=\"$1\"\n  local _output_file=\"$2\"\n  local _safe_mount_point\n  _safe_mount_point=\"$(echo \"${_mount_point}\" | sed 's#/#_#g')\"\n  local _test_dir=\"${_mount_point}/.perftest_fio_qd1_${_safe_mount_point}_$$\"\n  mkdir -p \"${_test_dir}\" || return 1\n  fio \\\n    
--name=randrw_qd1 \\\n    --directory=\"${_test_dir}\" \\\n    --ioengine=libaio \\\n    --iodepth=1 \\\n    --rw=randrw \\\n    --rwmixread=50 \\\n    --bs=4k \\\n    --direct=1 \\\n    --size=\"${_FIO_QD1_SIZE_M}M\" \\\n    --numjobs=1 \\\n    --runtime=\"${_FIO_QD1_RUNTIME}\" \\\n    --time_based \\\n    --group_reporting \\\n    --output-format=json \\\n    > \"${_output_file}\" 2>/dev/null\n  rm -rf \"${_test_dir}\" >/dev/null 2>&1\n}\n\n# Run fio random 4k randwrite QD=1 with fsync (durability-focused)\n_run_fio_qd1_fsync_test() {\n  # $1 = mountpoint\n  # $2 = output_file\n  local _mount_point=\"$1\"\n  local _output_file=\"$2\"\n  local _safe_mount_point\n  _safe_mount_point=\"$(echo \"${_mount_point}\" | sed 's#/#_#g')\"\n  local _test_dir=\"${_mount_point}/.perftest_fio_qd1_fsync_${_safe_mount_point}_$$\"\n  mkdir -p \"${_test_dir}\" || return 1\n  fio \\\n    --name=randwrite_qd1_fsync \\\n    --directory=\"${_test_dir}\" \\\n    --ioengine=libaio \\\n    --iodepth=1 \\\n    --rw=randwrite \\\n    --bs=4k \\\n    --direct=1 \\\n    --fdatasync=1 \\\n    --size=\"${_FIO_QD1_FSYNC_SIZE_M}M\" \\\n    --numjobs=1 \\\n    --runtime=\"${_FIO_QD1_FSYNC_RUNTIME}\" \\\n    --time_based \\\n    --group_reporting \\\n    --output-format=json \\\n    > \"${_output_file}\" 2>/dev/null\n  rm -rf \"${_test_dir}\" >/dev/null 2>&1\n}\n# Parse fio JSON and return values via stdout (key=value lines)\n_parse_fio_json() {\n  # $1 = json file\n  local _json_file=\"$1\"\n  local _read_iops _write_iops _read_bw_kib _write_bw_kib\n  _read_iops=\"$(jq -r '[.jobs[].read.iops] | add' \"${_json_file}\" 2>/dev/null)\"\n  _write_iops=\"$(jq -r '[.jobs[].write.iops] | add' \"${_json_file}\" 2>/dev/null)\"\n  _read_bw_kib=\"$(jq -r '[.jobs[].read.bw] | add' \"${_json_file}\" 2>/dev/null)\" # KiB/s\n  _write_bw_kib=\"$(jq -r '[.jobs[].write.bw] | add' \"${_json_file}\" 2>/dev/null)\" # KiB/s\n\n  local _read_clat_ns_mean _write_clat_ns_mean\n  _read_clat_ns_mean=\"$(jq -r 
'([.jobs[].read.clat_ns.mean?] | map(select(.!=null)) | (if length>0 then (add/length) else empty end))' \"${_json_file}\" 2>/dev/null)\"\n  _write_clat_ns_mean=\"$(jq -r '([.jobs[].write.clat_ns.mean?] | map(select(.!=null)) | (if length>0 then (add/length) else empty end))' \"${_json_file}\" 2>/dev/null)\"\n\n  local _read_lat_ms _write_lat_ms\n  if [ -n \"${_read_clat_ns_mean}\" ] && [ -n \"${_write_clat_ns_mean}\" ]; then\n    _read_lat_ms=\"$(awk -v ns=\"${_read_clat_ns_mean}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n    _write_lat_ms=\"$(awk -v ns=\"${_write_clat_ns_mean}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n  else\n    local _read_clat_usec_mean _write_clat_usec_mean\n    _read_clat_usec_mean=\"$(jq -r '([.jobs[].read.clat.mean?] | map(select(.!=null)) | (if length>0 then (add/length) else empty end))' \"${_json_file}\" 2>/dev/null)\"\n    _write_clat_usec_mean=\"$(jq -r '([.jobs[].write.clat.mean?] | map(select(.!=null)) | (if length>0 then (add/length) else empty end))' \"${_json_file}\" 2>/dev/null)\"\n    _read_lat_ms=\"$(awk -v us=\"${_read_clat_usec_mean:-0}\" 'BEGIN{printf \"%.2f\", us/1000.0}')\"\n    _write_lat_ms=\"$(awk -v us=\"${_write_clat_usec_mean:-0}\" 'BEGIN{printf \"%.2f\", us/1000.0}')\"\n  fi\n\n  local _read_bw_mb _write_bw_mb\n  _read_bw_mb=\"$(awk -v kib=\"${_read_bw_kib:-0}\" 'BEGIN{printf \"%.2f\", kib/1024.0}')\"\n  _write_bw_mb=\"$(awk -v kib=\"${_write_bw_kib:-0}\" 'BEGIN{printf \"%.2f\", kib/1024.0}')\"\n\n  # Optional percentiles (fio3+): clat_ns.percentile\n  local _read_p95_ns _read_p99_ns _write_p95_ns _write_p99_ns\n  local _read_p95_ms _read_p99_ms _write_p95_ms _write_p99_ms\n  local _sync_p95_ns _sync_p99_ns _sync_p95_ms _sync_p99_ms\n\n  _read_p95_ns=\"$(jq -r '([.jobs[].read.clat_ns.percentile[\"95.000000\"]?] | map(select(.!=null)) | max) // empty' \"${_json_file}\" 2>/dev/null)\"\n  _read_p99_ns=\"$(jq -r '([.jobs[].read.clat_ns.percentile[\"99.000000\"]?] 
| map(select(.!=null)) | max) // empty' \"${_json_file}\" 2>/dev/null)\"\n  _write_p95_ns=\"$(jq -r '([.jobs[].write.clat_ns.percentile[\"95.000000\"]?] | map(select(.!=null)) | max) // empty' \"${_json_file}\" 2>/dev/null)\"\n  _write_p99_ns=\"$(jq -r '([.jobs[].write.clat_ns.percentile[\"99.000000\"]?] | map(select(.!=null)) | max) // empty' \"${_json_file}\" 2>/dev/null)\"\n\n  if [ -n \"${_read_p95_ns}\" ]; then\n    _read_p95_ms=\"$(awk -v ns=\"${_read_p95_ns}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n  fi\n  if [ -n \"${_read_p99_ns}\" ]; then\n    _read_p99_ms=\"$(awk -v ns=\"${_read_p99_ns}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n  fi\n  if [ -n \"${_write_p95_ns}\" ]; then\n    _write_p95_ms=\"$(awk -v ns=\"${_write_p95_ns}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n  fi\n  if [ -n \"${_write_p99_ns}\" ]; then\n    _write_p99_ms=\"$(awk -v ns=\"${_write_p99_ns}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n  fi\n\n  # Optional percentiles for sync/fsync (fio JSON uses a separate sync section)\n  _sync_p95_ns=\"$(jq -r '([.jobs[].sync.lat_ns.percentile[\"95.000000\"]?] | map(select(.!=null)) | max) // ([.jobs[].sync.clat_ns.percentile[\"95.000000\"]?] | map(select(.!=null)) | max) // empty' \"${_json_file}\" 2>/dev/null)\"\n  _sync_p99_ns=\"$(jq -r '([.jobs[].sync.lat_ns.percentile[\"99.000000\"]?] | map(select(.!=null)) | max) // ([.jobs[].sync.clat_ns.percentile[\"99.000000\"]?] 
| map(select(.!=null)) | max) // empty' \"${_json_file}\" 2>/dev/null)\"\n  if [ -n \"${_sync_p95_ns}\" ]; then\n    _sync_p95_ms=\"$(awk -v ns=\"${_sync_p95_ns}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n  fi\n  if [ -n \"${_sync_p99_ns}\" ]; then\n    _sync_p99_ms=\"$(awk -v ns=\"${_sync_p99_ns}\" 'BEGIN{printf \"%.2f\", ns/1000000.0}')\"\n  fi\n\n  echo \"READ_IOPS=${_read_iops}\"\n  echo \"WRITE_IOPS=${_write_iops}\"\n  echo \"READ_BW_MB=${_read_bw_mb}\"\n  echo \"WRITE_BW_MB=${_write_bw_mb}\"\n  echo \"READ_LAT_MS=${_read_lat_ms}\"\n  echo \"WRITE_LAT_MS=${_write_lat_ms}\"\n  echo \"READ_P95_MS=${_read_p95_ms}\"\n  echo \"READ_P99_MS=${_read_p99_ms}\"\n  echo \"WRITE_P95_MS=${_write_p95_ms}\"\n  echo \"WRITE_P99_MS=${_write_p99_ms}\"\n  echo \"SYNC_P95_MS=${_sync_p95_ms}\"\n  echo \"SYNC_P99_MS=${_sync_p99_ms}\"\n}\n\n# Simple metadata / small file operations test\n_run_metadata_test() {\n  # $1 = mountpoint\n  # $2 = output var name (global)\n  local _mount_point=\"$1\"\n  local _outvar=\"$2\"\n  local _safe_mount_point\n  _safe_mount_point=\"$(echo \"${_mount_point}\" | sed 's#/#_#g')\"\n  local _test_dir=\"${_mount_point}/.perftest_meta_${_safe_mount_point}_$$\"\n  mkdir -p \"${_test_dir}\" || { eval \"${_outvar}=\\\"ERROR\\\"\"; return; }\n\n  local _count=\"${_METADATA_FILE_COUNT}\"\n  local _start _end _elapsed _ops _ops_per_sec\n\n  _start=\"$(date +%s.%N)\"\n  local _i\n  for _i in $(seq 1 \"${_count}\"); do\n    echo \"x\" > \"${_test_dir}/file_${_i}\" || break\n  done\n  for _i in $(seq 1 \"${_count}\"); do\n    cat \"${_test_dir}/file_${_i}\" >/dev/null 2>&1 || break\n  done\n  rm -f \"${_test_dir}\"/file_* >/dev/null 2>&1\n  rmdir \"${_test_dir}\" >/dev/null 2>&1 || true\n  _end=\"$(date +%s.%N)\"\n\n  _elapsed=\"$(echo \"${_end} - ${_start}\" | bc -l 2>/dev/null || echo 0)\"\n  _ops=$((_count * 2))\n  if [ \"${_elapsed}\" != \"0\" ]; then\n    _ops_per_sec=\"$(echo \"${_ops} / ${_elapsed}\" | bc -l 2>/dev/null || echo 0)\"\n    eval 
\"${_outvar}=\\\"$(printf '%.2f' \"${_ops_per_sec}\") ops/s\\\"\"\n  else\n    eval \"${_outvar}=\\\"ERROR\\\"\"\n  fi\n}\n\n# Run sysbench CPU and extract events/sec and avg latency\n_run_sysbench_cpu() {\n  # $1 = threads\n  # $2 = label (for variable names)\n  local _threads=\"$1\"\n  local _label=\"$2\"\n  local _test\n  _test=\"$(sysbench cpu --threads=\"${_threads}\" --time=\"${_SYSBENCH_CPU_TIME}\" run 2>/dev/null)\"\n  local _events _lat\n  _events=\"$(echo \"${_test}\" | awk '/events per second:/ {print $4}')\"\n  _lat=\"$(echo \"${_test}\" | awk '/avg:/ {print $2}')\"\n  eval \"_cpu_events_${_label}=\\\"${_events}\\\"\"\n  eval \"_cpu_latency_${_label}=\\\"${_lat}\\\"\"\n}\n\n# PHP micro-benchmark: multi-run median + concurrent throughput\n_run_php_benchmarks() {\n  if ! command -v php >/dev/null 2>&1; then\n    _php_time_median=\"N/A (php not installed)\"\n    _php_time_runs=\"\"\n    _php_concurrency_wall=\"N/A\"\n    _php_concurrency_throughput=\"N/A\"\n    return\n  fi\n\n  local _script=\"/tmp/phpbench.php\"\n  cat << 'EOF' > \"${_script}\"\n<?php\n$iterations = getenv('PHP_BENCH_ITERATIONS') ? intval(getenv('PHP_BENCH_ITERATIONS')) : 50000;\n$start = microtime(true);\n$acc = 0;\nfor ($i = 0; $i < $iterations; $i++) {\n  $s = \"the quick brown fox jumps over the lazy dog \" . 
$i;\n  $h = hash('sha256', $s, false);\n  $acc += ord($h[0]);\n}\n$elapsed = microtime(true) - $start;\nprintf(\"%.6f\\n\", $elapsed);\nEOF\n\n  local _runs=\"${_PHP_BENCH_RUNS}\"\n  local _times=()\n  local _i\n  echo \"Running PHP single-thread benchmark (${_runs} runs, iterations=${_PHP_BENCH_ITERATIONS})...\"\n  for _i in $(seq 1 \"${_runs}\"); do\n    local _t\n    _t=\"$(PHP_BENCH_ITERATIONS=\"${_PHP_BENCH_ITERATIONS}\" php \"${_script}\" 2>/dev/null || echo 0)\"\n    _log \" Run ${_i}: ${_t} seconds\"\n    _times+=(\"${_t}\")\n  done\n\n  # Median (middle element after sort; lower middle when the run count is even)\n  local _sorted\n  _sorted=($(printf '%s\\n' \"${_times[@]}\" | sort -n))\n  _php_time_median=\"${_sorted[(_runs - 1) / 2]}\"\n  _php_time_runs=\"$(printf '%s ' \"${_times[@]}\")\"\n\n  # Concurrent benchmark\n  local _conc\n  local _nproc\n  _nproc=\"$(nproc)\"\n  if [ \"${_nproc}\" -le 0 ]; then\n    _nproc=1\n  fi\n  if [ \"${_nproc}\" -gt \"${_PHP_BENCH_CONCURRENCY_MAX}\" ]; then\n    _conc=\"${_PHP_BENCH_CONCURRENCY_MAX}\"\n  else\n    _conc=\"${_nproc}\"\n  fi\n\n  echo \"Running PHP concurrent benchmark (${_conc} processes, iterations=${_PHP_BENCH_ITERATIONS} each)...\"\n  local _start _end _elapsed\n  _start=\"$(date +%s.%N)\"\n  for _i in $(seq 1 \"${_conc}\"); do\n    PHP_BENCH_ITERATIONS=\"${_PHP_BENCH_ITERATIONS}\" php \"${_script}\" >/dev/null 2>&1 &\n  done\n  wait\n  _end=\"$(date +%s.%N)\"\n  _elapsed=\"$(echo \"${_end} - ${_start}\" | bc -l 2>/dev/null || echo 0)\"\n  _php_concurrency_wall=\"$(printf '%.6f' \"${_elapsed}\")\"\n\n  if [ \"${_elapsed}\" != \"0\" ]; then\n    local _total_ops\n    _total_ops=\"$(echo \"${_conc} * ${_PHP_BENCH_ITERATIONS}\" | bc -l 2>/dev/null || echo 0)\"\n    local _thr\n    _thr=\"$(echo \"${_total_ops} / ${_elapsed}\" | bc -l 2>/dev/null || echo 0)\"\n    _php_concurrency_throughput=\"$(printf '%.2f ops/sec' \"${_thr}\")\"\n  else\n    _php_concurrency_throughput=\"N/A\"\n  fi\n\n  rm -f \"${_script}\" >/dev/null 2>&1\n}\n\n# 
----\n# BOA-specific grading helpers\n# ----\n_num_or_empty() {\n  printf '%s' \"$1\" | grep -Eo '^[0-9]+(\\.[0-9]+)?' || true\n}\n\n_median_num() {\n  # Median of numeric arguments (prints empty if no valid inputs).\n  # Proper even/odd handling:\n  #  - odd  N: middle element\n  #  - even N: average of the two middle elements\n  if [ \"$#\" -eq 0 ]; then\n    echo \"\"\n    return\n  fi\n  printf '%s\\n' \"$@\" | awk 'NF{print}' | sort -n | awk '{\n      a[NR]=$1\n    }\n    END{\n      if(NR==0){print \"\"; exit}\n      if(NR%2==1){\n        mid=(NR+1)/2\n        printf \"%.3f\", a[mid]\n      } else {\n        mid=NR/2\n        printf \"%.3f\", (a[mid]+a[mid+1])/2\n      }\n    }'\n}\n\n_max_num() {\n  # Max of numeric arguments (prints empty if no valid inputs)\n  if [ \"$#\" -eq 0 ]; then\n    echo \"\"\n    return\n  fi\n  printf '%s\\n' \"$@\" | awk 'NF{print}' | sort -n | tail -n 1\n}\n\n_boa_mem_thresholds() {\n  # Prints: thr_avg thr_good thr_exc thr_ultra (MiB/s)\n  # Soft-scaling for very small VMs (threads 1-3) to avoid unfairly strict grading,\n  # but WITHOUT giving 2-vCPU VMs an outsized advantage over larger instances.\n  local threads=\"${1:-4}\"\n  local scale=\"1.00\"\n\n  if [ -n \"${threads}\" ] && awk -v t=\"${threads}\" 'BEGIN{exit !(t<4)}' 2>/dev/null; then\n    # 1 -> 0.85, 2 -> 0.90, 3 -> 0.95 (clamped)\n    scale=\"$(awk -v t=\"${threads}\" 'BEGIN{\n      s = 0.80 + (0.05 * t);\n      if (s < 0.85) s = 0.85;\n      if (s > 0.95) s = 0.95;\n      printf \"%.2f\", s\n    }')\"\n  fi\n\n  local thr_avg thr_good thr_exc thr_ultra\n  # Tuned for membench (>=128 MiB buffers) on typical VPS hardware.\n  # NOTE: BOA tends to \"feel\" latency more than raw memcpy bandwidth, so thresholds are conservative.\n  thr_avg=\"$(awk -v s=\"${scale}\" 'BEGIN{printf \"%.0f\", 12000*s}')\"\n  thr_good=\"$(awk -v s=\"${scale}\" 'BEGIN{printf \"%.0f\", 18000*s}')\"\n  thr_exc=\"$(awk -v s=\"${scale}\" 'BEGIN{printf \"%.0f\", 28000*s}')\"\n  thr_ultra=\"$(awk 
-v s=\"${scale}\" 'BEGIN{printf \"%.0f\", 42000*s}')\"\n\n  printf '%s %s %s %s\\n' \"${thr_avg}\" \"${thr_good}\" \"${thr_exc}\" \"${thr_ultra}\"\n}\n\n_boa_grade_mem_bw() {\n  # Bandwidth in MiB/s; optionally pass number of threads used.\n  local bw=\"$1\"\n  local threads=\"${2:-4}\"\n  local thr_avg thr_good thr_exc thr_ultra\n\n  set -- $(_boa_mem_thresholds \"${threads}\")\n  thr_avg=\"$1\"; thr_good=\"$2\"; thr_exc=\"$3\"; thr_ultra=\"$4\"\n\n  if [ -z \"${bw}\" ]; then\n    echo \"low\"\n    return\n  fi\n\n  if awk -v bw=\"${bw}\" -v t=\"${thr_ultra}\" 'BEGIN{exit !(bw>=t)}' 2>/dev/null; then\n    echo \"ultra\"; return\n  fi\n  if awk -v bw=\"${bw}\" -v t=\"${thr_exc}\" 'BEGIN{exit !(bw>=t)}' 2>/dev/null; then\n    echo \"excellent\"; return\n  fi\n  if awk -v bw=\"${bw}\" -v t=\"${thr_good}\" 'BEGIN{exit !(bw>=t)}' 2>/dev/null; then\n    echo \"good\"; return\n  fi\n  if awk -v bw=\"${bw}\" -v t=\"${thr_avg}\" 'BEGIN{exit !(bw>=t)}' 2>/dev/null; then\n    echo \"average\"; return\n  fi\n\n  echo \"low\"\n}\n\n_boa_grade_mem_final() {\n  local mem_bw_mibps=\"$1\"\n  local mem_latency_ns=\"$2\"\n  local threads_used=\"$3\"\n  local grade\n  local thr_avg thr_good thr_exc thr_ultra\n\n  grade=\"$(_boa_grade_mem_bw \"${mem_bw_mibps}\" \"${threads_used}\")\"\n\n  set -- $(_boa_mem_thresholds \"${threads_used}\")\n  thr_avg=\"$1\"; thr_good=\"$2\"; thr_exc=\"$3\"; thr_ultra=\"$4\"\n\n  # BOA is latency-sensitive (Valkey/Redis, MySQL buffer pool, Opcache/APCu):\n  #   - allow low latency to lift borderline bandwidth\n  #   - apply latency-led caps so higher grades reflect snappy cache-heavy workloads\n  if [ -n \"${mem_latency_ns}\" ]; then\n    # Excellent-latency rescue: avoid \"low\" when bandwidth is only slightly under the average cutoff.\n    # This prevents confusing outcomes where a larger VM (using more threads) scores worse than a smaller VM\n    # despite having comparable real-world caching behavior.\n    if awk -v l=\"${mem_latency_ns}\" 
'BEGIN{exit !(l<=115)}' 2>/dev/null; then\n      if [ \"${grade}\" = \"low\" ] && awk -v b=\"${mem_bw_mibps}\" -v t=\"${thr_avg}\" 'BEGIN{exit !(b>=(t*0.90))}' 2>/dev/null; then\n        grade=\"average\"\n      fi\n    fi\n\n    # If latency is excellent, ensure at least \"excellent\" for average/good bandwidth.\n    if awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l<=115)}' 2>/dev/null; then\n      if [ \"${grade}\" = \"average\" ] || [ \"${grade}\" = \"good\" ]; then\n        grade=\"excellent\"\n      fi\n    fi\n\n    # Near-excellent latency grace window:\n    # Small measurement variance (e.g., 115ns vs 120ns) should not swing two full tiers\n    # when bandwidth is solidly in the upper part of the current band.\n    # If latency is close to excellent (<=130ns) and bandwidth is in the upper ~55% of the \"average\" band,\n    # treat it as excellent.\n    if awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l<=130)}' 2>/dev/null; then\n      if [ \"${grade}\" = \"average\" ] || [ \"${grade}\" = \"good\" ]; then\n        if awk -v b=\"${mem_bw_mibps}\" -v a=\"${thr_avg}\" -v g=\"${thr_good}\" 'BEGIN{\n          thr = a + (0.45 * (g-a));\n          exit !(b>=thr);\n        }' 2>/dev/null; then\n          grade=\"excellent\"\n        fi\n      fi\n    fi\n\n    # Latency-led caps (widened to avoid harsh demotions on high-throughput / many-core hosts):\n    #   - >230ns: cap to average (typically contention/NUMA/host noise)\n    #   - 165-230ns: cap ultra/excellent to good, BUT allow \"excellent\" when bandwidth is ultra-class\n    #   - 140-165ns: cap ultra -> excellent (keep excellent)\n    #   - >=260ns: treat as low ONLY when bandwidth is also weak\n    if awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l>=260)}' 2>/dev/null; then\n      # If bandwidth is strong, this is likely a contention outlier rather than \"bad RAM\".\n      if awk -v b=\"${mem_bw_mibps}\" -v t=\"${thr_exc}\" 'BEGIN{exit !(b>=t)}' 2>/dev/null; then\n        grade=\"average\"\n      else\n        
grade=\"low\"\n      fi\n    elif awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l>230)}' 2>/dev/null; then\n      if [ \"${grade}\" != \"low\" ]; then\n        grade=\"average\"\n      fi\n    elif awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l>165)}' 2>/dev/null; then\n      if [ \"${grade}\" = \"ultra\" ] || [ \"${grade}\" = \"excellent\" ]; then\n        # If bandwidth is clearly ultra-class, keep \"excellent\" even with 165-200ns latency.\n        if awk -v b=\"${mem_bw_mibps}\" -v t=\"${thr_ultra}\" 'BEGIN{exit !(b>=t)}' 2>/dev/null; then\n          grade=\"excellent\"\n        else\n          grade=\"good\"\n        fi\n      fi\n    elif awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l>140)}' 2>/dev/null; then\n      if [ \"${grade}\" = \"ultra\" ]; then\n        grade=\"excellent\"\n      fi\n    fi\n  fi\n\n  # Soft promotion near upper bandwidth boundaries (strict by default; forgiving only near the top of a band)\n  if [ -n \"${mem_bw_mibps}\" ]; then\n    if [ \"${grade}\" = \"good\" ]; then\n      # Promote good->excellent only when latency isn't the limiting factor.\n      if [ -z \"${mem_latency_ns}\" ] || awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l<=165)}' 2>/dev/null; then\n        if awk -v b=\"${mem_bw_mibps}\" -v g=\"${thr_good}\" -v e=\"${thr_exc}\" -v pct=\"${_GRADE_NEAR_BAND_PCT}\" 'BEGIN{\n          thr = e - (pct * (e-g));\n          exit !(b>=thr);\n        }' 2>/dev/null; then\n          grade=\"excellent\"\n        fi\n      fi\n    fi\n\n    if [ \"${grade}\" = \"excellent\" ]; then\n      # Promote excellent->ultra only when latency isn't the limiting factor.\n      if [ -z \"${mem_latency_ns}\" ] || awk -v l=\"${mem_latency_ns}\" 'BEGIN{exit !(l<=140)}' 2>/dev/null; then\n        if awk -v b=\"${mem_bw_mibps}\" -v e=\"${thr_exc}\" -v u=\"${thr_ultra}\" -v pct=\"${_GRADE_NEAR_BAND_PCT}\" 'BEGIN{\n          thr = u - (pct * (u-e));\n          exit !(b>=thr);\n        }' 2>/dev/null; then\n          grade=\"ultra\"\n        fi\n      
fi\n    fi\n  fi\n\n  echo \"${grade}\"\n}\n\n_boa_grade_cpu_steal() {\n  # input: percent\n  awk -v s=\"$1\" 'BEGIN{\n    if (s==\"\" || s==\"N/A\") print \"unknown\";\n    else if (s<1) print \"excellent\";\n    else if (s<3) print \"good\";\n    else if (s<7) print \"warning\";\n    else print \"bad\";\n  }'\n}\n\n_boa_grade_cpu_speed() {\n  # input: single-thread events/sec\n  # Based on typical BOA workloads (Drupal PHP)\n  # Soft promotion near upper boundaries (keeps grades strict, avoids harsh demotions when \"almost there\"):\n  #  - promote excellent->ultra in the top ${_GRADE_NEAR_BAND_PCT} of [4000..5000)\n  #  - promote good->excellent in the top ${_GRADE_NEAR_BAND_PCT} of [3000..4000)\n  awk -v eps=\"$1\" -v pct=\"${_GRADE_NEAR_BAND_PCT}\" 'BEGIN{\n    if (eps==\"\" || eps==\"N/A\") { print \"unknown\"; exit }\n    band = 1000.0\n    near_ultra = 5000.0 - (pct * band)\n    near_exc   = 4000.0 - (pct * band)\n\n    if (eps>=5000) print \"ultra\";\n    else if (eps>=near_ultra) print \"ultra\";\n    else if (eps>=4000) print \"excellent\";\n    else if (eps>=near_exc) print \"excellent\";\n    else if (eps>=3000) print \"good\";\n    else if (eps>=2000) print \"average\";\n    else if (eps>=1000) print \"weak\";\n    else print \"poor\";\n  }'\n}\n\n_detect_mysql_durability_relaxed() {\n  # Sets global _DB_DURABILITY_RELAXED=\"YES\"/\"NO\".\n  if [ \"${_DB_DURABILITY_PROBED}\" = \"YES\" ]; then\n    return\n  fi\n  _DB_DURABILITY_PROBED=\"YES\"\n  _DB_DURABILITY_RELAXED=\"NO\"\n\n  if ! command -v mysql >/dev/null 2>&1; then\n    return\n  fi\n  if [ ! 
-e \"/root/.my.pass.txt\" ]; then\n    return\n  fi\n\n  local _pass _sock _vals _trx _bin\n  _pass=\"$(head -n 1 /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\"\n  if [ -z \"${_pass}\" ]; then\n    return\n  fi\n\n  _sock=\"$(_guess_mysql_socket)\"\n  if [ -z \"${_sock}\" ]; then\n    return\n  fi\n\n  # Query in one round-trip; keep it simple and quiet.\n  _vals=\"$(mysql --protocol=socket -uroot -p\"${_pass}\" -S \"${_sock}\" -Nse \"SELECT @@innodb_flush_log_at_trx_commit, @@sync_binlog;\" 2>/dev/null | head -n 1)\"\n  _trx=\"$(echo \"${_vals}\" | awk '{print $1}')\"\n  _bin=\"$(echo \"${_vals}\" | awk '{print $2}')\"\n\n  # Treat anything other than strict 1/1 as relaxed durability.\n  if [ -n \"${_trx}\" ] && [ -n \"${_bin}\" ]; then\n    if [ \"${_trx}\" != \"1\" ] || [ \"${_bin}\" != \"1\" ]; then\n      _DB_DURABILITY_RELAXED=\"YES\"\n    fi\n  fi\n}\n\n_boa_grade_db_oltp() {\n  # Inputs:\n  #  $1 = rw_tps, $2 = rw_p95_ms, $3 = vcpus, $4 = ro_p95_ms (optional), $5 = threads_used (optional)\n  #\n  # This is a lightweight \"experience proxy\" for Drupal workloads (cache misses + writes).\n  # We grade primarily on RW TPS per vCPU, with latency caps on RW p95.\n  #\n  # Soft promotion near upper boundaries is allowed ONLY when the limiting metric is very close\n  # to the next grade threshold (top ${_GRADE_NEAR_BAND_PCT} of the band).\n  local rw_tps rw_p95 vcpus ro_p95 threads_used norm_cores per_core grade\n  rw_tps=\"$(_num_or_empty \"$1\")\"\n  rw_p95=\"$(_num_or_empty \"$2\")\"\n  vcpus=\"$(_num_or_empty \"$3\")\"\n  ro_p95=\"$(_num_or_empty \"$4\")\"\n  threads_used=\"$(_num_or_empty \"$5\")\"\n\n  [ -n \"${vcpus}\" ] || vcpus=\"1\"\n\n  if [ -z \"${rw_tps}\" ] || [ -z \"${rw_p95}\" ]; then\n    echo \"unknown\"\n    return\n  fi\n\n  # Normalize TPS to a bounded \"effective core\" count so larger VMs aren't penalized\n  # when OLTP stops scaling linearly with concurrency.\n  #\n  # - If threads_used is known, approximate \"cores exercised\" as 
ceil(threads/2)\n  # - Cap normalization at 4 cores (experience-focused; avoids punishing large VMs)\n  norm_cores=\"$(awk -v v=\"${vcpus}\" -v th=\"${threads_used}\" 'BEGIN{\n  nc=v;\n  if(th>0){\n    tc=int((th+1)/2); # ceil(th/2)\n    if(tc<1) tc=1;\n    if(tc<nc) nc=tc;\n  }\n  if(nc>4) nc=4;\n  if(nc<1) nc=1;\n  printf \"%d\", nc;\n}')\"\n\n  per_core=\"$(awk -v t=\"${rw_tps}\" -v c=\"${norm_cores}\" 'BEGIN{if(c>0) printf \"%.2f\", t/c; else print \"\"}')\"\n\n  # Base grade (strict thresholds)\n  if awk -v p=\"${per_core}\" -v l=\"${rw_p95}\" 'BEGIN{exit !(p<250 || l>=25)}' 2>/dev/null; then\n    grade=\"poor\"\n  elif awk -v p=\"${per_core}\" -v l=\"${rw_p95}\" 'BEGIN{exit !(p<400 || l>=15)}' 2>/dev/null; then\n    grade=\"average\"\n  elif awk -v p=\"${per_core}\" -v l=\"${rw_p95}\" 'BEGIN{exit !(p<600 || l>=10)}' 2>/dev/null; then\n    grade=\"good\"\n  else\n    # Excellent vs Ultra split (strict)\n    if awk -v p=\"${per_core}\" -v l=\"${rw_p95}\" 'BEGIN{exit !(p>=900 && l<7)}' 2>/dev/null; then\n      grade=\"ultra\"\n    else\n      grade=\"excellent\"\n    fi\n  fi\n\n  # Soft promotion windows near upper boundaries (strict by default; forgiving only at the very top of a band)\n  if [ \"${grade}\" = \"good\" ]; then\n    # Promote good->excellent if TPS/core is within top pct of [400..600) AND latency is already in the excellent range (<10ms).\n    if awk -v p=\"${per_core}\" -v l=\"${rw_p95}\" -v pct=\"${_GRADE_NEAR_BAND_PCT}\" 'BEGIN{\n      p_near = 600 - (pct * (600-400));\n      exit !(p>=p_near && l<10);\n    }' 2>/dev/null; then\n      grade=\"excellent\"\n    fi\n  fi\n\n  if [ \"${grade}\" = \"excellent\" ]; then\n    # Promote excellent->ultra if very close to the ultra split on BOTH dimensions:\n    # - TPS/core in top pct of [600..900)\n    # - latency within a small band above 7ms\n    if awk -v p=\"${per_core}\" -v l=\"${rw_p95}\" -v pct=\"${_GRADE_NEAR_BAND_PCT}\" 'BEGIN{\n      p_near = 900 - (pct * (900-600));\n      l_near = 7 + (pct 
* (10-7));\n      exit !(p>=p_near && l<l_near);\n    }' 2>/dev/null; then\n      grade=\"ultra\"\n    fi\n  fi\n\n  echo \"${grade}\"\n}\n\n_boa_grade_disk_final() {\n  # Inputs (best-effort numeric):\n  #  $1=ioping_avg_us, $2=seq_read_mbps, $3=seq_write_mbps, $4=read_iops, $5=write_iops, $6=meta_ops,\n  #  $7=qd1_p99_read_ms, $8=qd1_p99_write_ms, $9=qd1_fsync_p99_write_ms, $10=qd1_fsync_p95_write_ms\n  #\n  # Disk grading improvements (v57):\n  # - Produce three grades:\n  #     overall: BOA-weighted (default) or mode-dependent\n  #     throughput: bulk transfer (backups/rsync/archives)\n  #     tail: stall risk (QD1 p99 + fsync p99 + ioping)\n  #\n  # Modes:\n  #   DISK_GRADE_MODE=BOA (default): overall = 70% throughput + 30% tail\n  #   DISK_GRADE_MODE=THROUGHPUT:    overall = throughput only\n  #   DISK_GRADE_MODE=OLTP_STRICT:   overall = min(throughput, strict tail)\n  #\n  # Output (space-separated):\n  #   \"<overall> <throughput> <tail> <tp_gmean> <imbalance> <meta> <fsync_p99> <qd1p99r> <qd1p99w> <ioping_us>\"\n\n  local ioping_us=\"$1\"\n  local seq_read_mbps=\"$2\"\n  local seq_write_mbps=\"$3\"\n  local read_iops=\"$4\"\n  local write_iops=\"$5\"\n  local meta_ops=\"$6\"\n  local qd1_p99_read_ms=\"$7\"\n  local qd1_p99_write_ms=\"$8\"\n  local qd1_fsync_p99_write_ms=\"$9\"\n  local qd1_fsync_p95_write_ms=\"${10}\"\n\n  # Extract numeric values\n  ioping_us=\"$(_num_or_empty \"${ioping_us}\")\"\n  seq_read_mbps=\"$(_num_or_empty \"${seq_read_mbps}\")\"\n  seq_write_mbps=\"$(_num_or_empty \"${seq_write_mbps}\")\"\n  read_iops=\"$(_num_or_empty \"${read_iops}\")\"\n  write_iops=\"$(_num_or_empty \"${write_iops}\")\"\n  meta_ops=\"$(_num_or_empty \"${meta_ops}\")\"\n  qd1_p99_read_ms=\"$(_num_or_empty \"${qd1_p99_read_ms}\")\"\n  qd1_p99_write_ms=\"$(_num_or_empty \"${qd1_p99_write_ms}\")\"\n  qd1_fsync_p99_write_ms=\"$(_num_or_empty \"${qd1_fsync_p99_write_ms}\")\"\n  qd1_fsync_p95_write_ms=\"$(_num_or_empty \"${qd1_fsync_p95_write_ms}\")\"\n\n  # Mode 
switch (default: BOA)\n  local _mode=\"${DISK_GRADE_MODE:-BOA}\"\n\n  # Helper rank mapping (kept local on purpose)\n  _grade_rank() {\n    case \"$1\" in\n      bad) echo 0 ;;\n      poor) echo 1 ;;\n      average) echo 2 ;;\n      good) echo 3 ;;\n      excellent) echo 4 ;;\n      ultra) echo 5 ;;\n      *) echo 1 ;;\n    esac\n  }\n\n  _rank_to_grade() {\n    case \"$1\" in\n      0) echo bad ;;\n      1) echo poor ;;\n      2) echo average ;;\n      3) echo good ;;\n      4) echo excellent ;;\n      5) echo ultra ;;\n      *) echo poor ;;\n    esac\n  }\n\n  # CRITICAL: Sequential write throttled/failing\n  if [ -n \"${seq_write_mbps}\" ] && awk -v sw=\"${seq_write_mbps}\" 'BEGIN{exit !(sw<100)}' 2>/dev/null; then\n    echo \"bad poor poor N/A N/A ${meta_ops:-N/A} ${qd1_fsync_p99_write_ms:-N/A} ${qd1_p99_read_ms:-N/A} ${qd1_p99_write_ms:-N/A} ${ioping_us:-N/A}\"\n    return\n  fi\n\n  #\n  # 1) Throughput grade\n  #\n  local tp_grade=\"poor\"\n  local tp_gmean=\"\"\n  local imbalance=\"\"\n\n  if [ -n \"${seq_write_mbps}\" ] && [ -n \"${seq_read_mbps}\" ]; then\n    tp_gmean=\"$(awk -v sw=\"${seq_write_mbps}\" -v sr=\"${seq_read_mbps}\" 'BEGIN{if (sw>0 && sr>0) printf \"%.0f\", sqrt(sw*sr); else print \"\"}' 2>/dev/null)\"\n    imbalance=\"$(awk -v sw=\"${seq_write_mbps}\" -v sr=\"${seq_read_mbps}\" 'BEGIN{if (sw>0 && sr>0){r=sr/sw; if(r<1) r=1/r; printf \"%.2f\", r}else print \"\"}' 2>/dev/null)\"\n  fi\n\n  # Map throughput gmean to grade (more discrimination than an AND gate)\n  if [ -n \"${tp_gmean}\" ]; then\n    if awk -v g=\"${tp_gmean}\" 'BEGIN{exit !(g>=4800)}' 2>/dev/null; then\n      tp_grade=\"ultra\"\n    elif awk -v g=\"${tp_gmean}\" 'BEGIN{exit !(g>=2400)}' 2>/dev/null; then\n      tp_grade=\"excellent\"\n    elif awk -v g=\"${tp_gmean}\" 'BEGIN{exit !(g>=1200)}' 2>/dev/null; then\n      tp_grade=\"good\"\n    elif awk -v g=\"${tp_gmean}\" 'BEGIN{exit !(g>=600)}' 2>/dev/null; then\n      tp_grade=\"average\"\n    else\n      tp_grade=\"poor\"\n   
 fi\n  fi\n\n  # Imbalance penalty: if read/write are >=2.5x apart, drop one throughput rank\n  if [ -n \"${imbalance}\" ] && awk -v r=\"${imbalance}\" 'BEGIN{exit !(r>=2.5)}' 2>/dev/null; then\n    local _tr\n    _tr=\"$(_grade_rank \"${tp_grade}\")\"\n    [ \"${_tr}\" -gt 1 ] && _tr=$((_tr-1))\n    tp_grade=\"$(_rank_to_grade \"${_tr}\")\"\n  fi\n\n  # Metadata influence: if metadata ops are notably low, drop one rank (deploys/caches)\n  if [ -n \"${meta_ops}\" ] && awk -v m=\"${meta_ops}\" 'BEGIN{exit !(m>0 && m<2000)}' 2>/dev/null; then\n    local _tr\n    _tr=\"$(_grade_rank \"${tp_grade}\")\"\n    [ \"${_tr}\" -gt 2 ] && _tr=$((_tr-1))\n    tp_grade=\"$(_rank_to_grade \"${_tr}\")\"\n  fi\n\n  # Gentle IOPS influence\n  if [ -n \"${read_iops}\" ] && [ -n \"${write_iops}\" ]; then\n    local tp_r\n    tp_r=\"$(_grade_rank \"${tp_grade}\")\"\n\n    if awk -v ri=\"${read_iops}\" -v wi=\"${write_iops}\" 'BEGIN{exit !(ri>=100000 && wi>=100000)}' 2>/dev/null; then\n      [ \"${tp_r}\" -lt 5 ] && tp_r=$((tp_r+1))\n    elif awk -v ri=\"${read_iops}\" -v wi=\"${write_iops}\" 'BEGIN{exit !(ri>=50000 && wi>=50000)}' 2>/dev/null; then\n      [ \"${tp_r}\" -lt 4 ] && tp_r=$((tp_r+1))\n    elif awk -v ri=\"${read_iops}\" -v wi=\"${write_iops}\" 'BEGIN{exit !(ri>=20000 && wi>=20000)}' 2>/dev/null; then\n      [ \"${tp_r}\" -lt 3 ] && tp_r=$((tp_r+1))\n    fi\n\n    tp_grade=\"$(_rank_to_grade \"${tp_r}\")\"\n  fi\n\n  #\n  # 2) Tail grade\n  #\n  local tail_grade=\"unknown\"\n  local strict_tail_grade=\"unknown\"\n\n  if [ -n \"${qd1_fsync_p99_write_ms}\" ] && [ -n \"${qd1_p99_read_ms}\" ] && [ -n \"${qd1_p99_write_ms}\" ] && [ -n \"${ioping_us}\" ]; then\n\n    # Strict (historical cap behavior)\n    if awk -v f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" \\\n      'BEGIN{exit !(f<=1.5 && r<=0.3 && w<=0.3 && io<=150)}' 2>/dev/null; then\n      strict_tail_grade=\"ultra\"\n    elif awk -v 
f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" \\\n      'BEGIN{exit !(f<=2 && r<=0.5 && w<=0.5 && io<=200)}' 2>/dev/null; then\n      strict_tail_grade=\"excellent\"\n    elif awk -v f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" \\\n      'BEGIN{exit !(f<=5 && r<=1.0 && w<=1.0)}' 2>/dev/null; then\n      strict_tail_grade=\"good\"\n    elif awk -v f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" \\\n      'BEGIN{exit !(f<=12 && r<=3.0 && w<=3.0)}' 2>/dev/null; then\n      strict_tail_grade=\"average\"\n    else\n      strict_tail_grade=\"poor\"\n    fi\n\n    # Relaxed (BOA-weighted)\n    if awk -v f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" \\\n      'BEGIN{exit !(f<=1.5 && r<=0.30 && w<=0.30 && io<=150)}' 2>/dev/null; then\n      tail_grade=\"ultra\"\n    elif awk -v f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" \\\n      'BEGIN{exit !(f<=2.5 && r<=1.00 && w<=0.80 && io<=250)}' 2>/dev/null; then\n      tail_grade=\"excellent\"\n    elif awk -v f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" \\\n      'BEGIN{exit !(f<=5.0 && r<=2.00 && w<=2.00 && io<=400)}' 2>/dev/null; then\n      tail_grade=\"good\"\n    elif awk -v f=\"${qd1_fsync_p99_write_ms}\" -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" \\\n      'BEGIN{exit !(f<=12.0 && r<=5.00 && w<=5.00 && io<=800)}' 2>/dev/null; then\n      tail_grade=\"average\"\n    else\n      tail_grade=\"poor\"\n    fi\n\n    # Durability-relaxed softening (keep original intent)\n    if [ \"${_DB_DURABILITY_RELAXED}\" = \"YES\" ]; then\n\n      # For relaxed tail grade\n      if [ \"${tail_grade}\" = \"average\" ] && [ -n \"${qd1_fsync_p95_write_ms}\" ]; 
then\n        if awk -v p95=\"${qd1_fsync_p95_write_ms}\" 'BEGIN{exit !(p95<=3.5)}' 2>/dev/null \\\n          && awk -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" 'BEGIN{exit !(r<=1.0 && w<=2.0 && io<=250)}' 2>/dev/null; then\n          if awk -v ri=\"${read_iops}\" -v wi=\"${write_iops}\" 'BEGIN{exit !(ri>=30000 && wi>=30000)}' 2>/dev/null \\\n            || awk -v sw=\"${seq_write_mbps}\" 'BEGIN{exit !(sw>=900)}' 2>/dev/null; then\n            tail_grade=\"good\"\n          fi\n        fi\n      fi\n\n      # For strict tail grade\n      if [ \"${strict_tail_grade}\" = \"average\" ] && [ -n \"${qd1_fsync_p95_write_ms}\" ]; then\n        if awk -v p95=\"${qd1_fsync_p95_write_ms}\" 'BEGIN{exit !(p95<=3.5)}' 2>/dev/null \\\n          && awk -v r=\"${qd1_p99_read_ms}\" -v w=\"${qd1_p99_write_ms}\" -v io=\"${ioping_us}\" 'BEGIN{exit !(r<=1.0 && w<=2.0 && io<=250)}' 2>/dev/null; then\n          if awk -v ri=\"${read_iops}\" -v wi=\"${write_iops}\" 'BEGIN{exit !(ri>=30000 && wi>=30000)}' 2>/dev/null \\\n            || awk -v sw=\"${seq_write_mbps}\" 'BEGIN{exit !(sw>=900)}' 2>/dev/null; then\n            strict_tail_grade=\"good\"\n          fi\n        fi\n      fi\n    fi\n  fi\n\n  # If tail metrics missing, fall back so we don't produce \"unknown\"\n  [ \"${tail_grade}\" = \"unknown\" ] && tail_grade=\"${tp_grade}\"\n  [ \"${strict_tail_grade}\" = \"unknown\" ] && strict_tail_grade=\"${tail_grade}\"\n\n  #\n  # 3) Overall grade\n  #\n  local overall_grade=\"poor\"\n\n  if [ \"${_mode}\" = \"THROUGHPUT\" ]; then\n    overall_grade=\"${tp_grade}\"\n  elif [ \"${_mode}\" = \"OLTP_STRICT\" ]; then\n    local tr sr\n    tr=\"$(_grade_rank \"${tp_grade}\")\"\n    sr=\"$(_grade_rank \"${strict_tail_grade}\")\"\n    if [ \"${sr}\" -le \"${tr}\" ]; then\n      overall_grade=\"${strict_tail_grade}\"\n    else\n      overall_grade=\"${tp_grade}\"\n    fi\n  else\n    local tr sr weighted\n    tr=\"$(_grade_rank \"${tp_grade}\")\"\n    
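The BOA-mode blend (70% throughput rank + 30% tail rank, rounded to the nearest grade and clamped to the poor..ultra range) can be sketched standalone. This is an illustrative re-implementation under the same bad=0..ultra=5 rank scale; the `_demo_*` names are not part of the script:

```shell
# Illustrative sketch of the BOA-mode weighted blend:
# overall = round(0.7 * throughput_rank + 0.3 * tail_rank), clamped to [1..5].
_demo_rank() {
  case "$1" in
    bad) echo 0 ;; poor) echo 1 ;; average) echo 2 ;;
    good) echo 3 ;; excellent) echo 4 ;; ultra) echo 5 ;;
    *) echo 1 ;;
  esac
}
_demo_rank_to_grade() {
  case "$1" in
    0) echo bad ;; 1) echo poor ;; 2) echo average ;;
    3) echo good ;; 4) echo excellent ;; 5) echo ultra ;;
    *) echo poor ;;
  esac
}
_demo_blend() {
  # $1 = throughput grade, $2 = tail grade
  local tr sr w
  tr="$(_demo_rank "$1")"
  sr="$(_demo_rank "$2")"
  # The tiny epsilon mirrors the script's guard against float rounding at x.5
  w="$(awk -v tr="${tr}" -v sr="${sr}" 'BEGIN{printf "%.0f", (0.7*tr)+(0.3*sr)+0.0000001}')"
  [ "${w}" -lt 1 ] && w=1
  [ "${w}" -gt 5 ] && w=5
  _demo_rank_to_grade "${w}"
}
# A strong bulk-transfer disk with a weak tail still lands mid-band:
# good(3) + poor(1) -> 0.7*3 + 0.3*1 = 2.4 -> average
_demo_blend good poor
```

Note how the 70/30 weighting lets a slow tail pull the overall grade down without dominating it: `ultra` throughput with only `good` tails lands at `excellent` (4.4 rounds to 4), matching the intent that tail stalls penalize but do not gate bulk performance.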
sr=\"$(_grade_rank \"${tail_grade}\")\"\n    weighted=\"$(awk -v tr=\"${tr}\" -v sr=\"${sr}\" 'BEGIN{w=(0.7*tr)+(0.3*sr); printf \"%.0f\", (w+0.0000001)}' 2>/dev/null)\"\n    [ -n \"${weighted}\" ] || weighted=\"${tr}\"\n    [ \"${weighted}\" -lt 1 ] && weighted=1\n    [ \"${weighted}\" -gt 5 ] && weighted=5\n    overall_grade=\"$(_rank_to_grade \"${weighted}\")\"\n  fi\n\n  # Ultra overall requires ultra throughput from sequential (no IOPS-only ultra)\n  if [ \"${overall_grade}\" = \"ultra\" ] && [ \"${tp_grade}\" != \"ultra\" ]; then\n    overall_grade=\"excellent\"\n  fi\n\n  # Fsync stall penalty (BOA mode): if fsync p99 is very high, reduce overall by one rank\n  if [ \"${_mode}\" = \"BOA\" ] && [ -n \"${qd1_fsync_p99_write_ms}\" ] && awk -v f=\"${qd1_fsync_p99_write_ms}\" 'BEGIN{exit !(f>=10.0)}' 2>/dev/null; then\n    local or\n    or=\"$(_grade_rank \"${overall_grade}\")\"\n    [ \"${or}\" -gt 1 ] && overall_grade=\"$(_rank_to_grade \"$((or-1))\")\"\n  fi\n\n  # Safety: if throughput is poor, do not inflate overall beyond average\n  if [ \"${tp_grade}\" = \"poor\" ]; then\n    local or\n    or=\"$(_grade_rank \"${overall_grade}\")\"\n    [ \"${or}\" -gt 2 ] && overall_grade=\"average\"\n  fi\n\n  echo \"${overall_grade} ${tp_grade} ${tail_grade} ${tp_gmean:-N/A} ${imbalance:-N/A} ${meta_ops:-N/A} ${qd1_fsync_p99_write_ms:-N/A} ${qd1_p99_read_ms:-N/A} ${qd1_p99_write_ms:-N/A} ${ioping_us:-N/A}\"\n}\n\n_boa_grade_network_http() {\n  # input: numeric Mbit/s (HTTP range pull median)\n  #\n  # NOTE: HTTP here is a *single-flow* short download (~100 MiB via Range) from a few public endpoints.\n  # It is a useful sanity check, but it is often limited by the *remote endpoint* (per-connection caps,\n  # busy servers), CDN routing variance, and TCP slow-start on short transfers.\n  # Therefore HTTP grades use different thresholds than iPerf3/TCP capacity grades.\n  awk -v n=\"$1\" 'BEGIN{\n    if (n==\"\" || n==\"N/A\") print \"unknown\";\n    else if (n>=900) print 
\"ultra\";     # ~900 Mbit/s+\n    else if (n>=300) print \"excellent\"; # ~300 Mbit/s+\n    else if (n>=100) print \"good\";      # ~100 Mbit/s+\n    else if (n>=50)  print \"average\";   #  ~50 Mbit/s+\n    else print \"poor\";\n  }'\n}\n\n_boa_grade_network_tcp() {\n  # input: numeric Mbit/s (iPerf3 DL median)\n  #\n  # iPerf3 measures *raw TCP port capacity* and is the primary indicator of link capability for things\n  # like rsync/scp/ssh transfers and sustained bulk traffic. It is far less sensitive to CDN/WAF\n  # variance than HTTP pulls, so we grade TCP with higher expectations.\n  awk -v n=\"$1\" 'BEGIN{\n    if (n==\"\" || n==\"N/A\") print \"unknown\";\n    else if (n>=1500) print \"ultra\";      # ~1500 Mbit/s+\n    else if (n>=800)  print \"excellent\";  #  ~800 Mbit/s+\n    else if (n>=300)  print \"good\";       #  ~300 Mbit/s+\n    else if (n>=150)  print \"average\";    #  ~150 Mbit/s+\n    else print \"poor\";\n  }'\n}\n\n_boa_grade_network_rank() {\n  case \"$1\" in\n    ultra) echo 5 ;;\n    excellent) echo 4 ;;\n    good) echo 3 ;;\n    average) echo 2 ;;\n    poor) echo 1 ;;\n    *) echo 0 ;;\n  esac\n}\n\n_boa_grade_network_from_rank() {\n  case \"$1\" in\n    5) echo \"ultra\" ;;\n    4) echo \"excellent\" ;;\n    3) echo \"good\" ;;\n    2) echo \"average\" ;;\n    1) echo \"poor\" ;;\n    *) echo \"unknown\" ;;\n  esac\n}\n\n_boa_guess_storage_tier() {\n  # Behavior-first storage guess for BOA/Drupal.\n  # We *can* usually infer LOCAL vs NET from latency/variance/caps.\n  # We *cannot* reliably infer the physical media behind network block storage,\n  # so network labels use *_CLASS suffix to mean \"behaves like\".\n  #\n  # Inputs (numeric, best-effort):\n  # $1 = ioping_avg_us\n  # $2 = seq_read_mbps\n  # $3 = seq_write_mbps\n  # $4 = qd1_p99_read_ms\n  # $5 = qd1_p99_write_ms\n  # $6 = qd1_fsync_p99_write_ms\n  local ioping_us=\"$1\"\n  local sr=\"$2\"\n  local sw=\"$3\"\n  local p99r=\"$4\"\n  local p99w=\"$5\"\n  local 
p99wf=\"$6\"\n\n  # Hard fail / severely throttled\n  if [ -n \"${sw}\" ] && awk -v sw=\"${sw}\" 'BEGIN{exit !(sw<100)}' 2>/dev/null; then\n    echo \"CRITICAL_DEGRADED\"\n    return\n  fi\n\n  # Normalize unknowns\n  [ -n \"${ioping_us}\" ] || ioping_us=\"\"\n  [ -n \"${sr}\" ] || sr=\"\"\n  [ -n \"${sw}\" ] || sw=\"\"\n  [ -n \"${p99r}\" ] || p99r=\"\"\n  [ -n \"${p99w}\" ] || p99w=\"\"\n  [ -n \"${p99wf}\" ] || p99wf=\"\"\n\n  # ----\n  # Step 0: DEGRADED detection (HDD-ish / ancient SSD / extremely contended)\n  # ----\n  local _degraded=\"NO\"\n  if [ -n \"${sw}\" ] && awk -v sw=\"${sw}\" 'BEGIN{exit !(sw<150)}' 2>/dev/null; then\n    _degraded=\"YES\"\n  fi\n  if [ \"${_degraded}\" = \"NO\" ] && [ -n \"${p99wf}\" ] && awk -v f=\"${p99wf}\" 'BEGIN{exit !(f>50)}' 2>/dev/null; then\n    _degraded=\"YES\"\n  fi\n  if [ \"${_degraded}\" = \"NO\" ] && [ -n \"${p99w}\" ] && awk -v w=\"${p99w}\" 'BEGIN{exit !(w>10)}' 2>/dev/null; then\n    _degraded=\"YES\"\n  fi\n  if [ \"${_degraded}\" = \"NO\" ] && [ -n \"${ioping_us}\" ] && awk -v u=\"${ioping_us}\" 'BEGIN{exit !(u>1500)}' 2>/dev/null; then\n    _degraded=\"YES\"\n  fi\n\n  # ----\n  # Step 1: LOCAL vs NET guess\n  # ----\n  local _net=\"NO\"\n\n  # Strong signal: ioping in high hundreds of microseconds usually means network/storage fabric.\n  if [ -n \"${ioping_us}\" ] && awk -v u=\"${ioping_us}\" 'BEGIN{exit !(u>=350)}' 2>/dev/null; then\n    _net=\"YES\"\n  fi\n\n  # Durability tail + random tail suggests network block, even if sequential is OK.\n  if [ \"${_net}\" = \"NO\" ] && [ -n \"${p99wf}\" ] && [ -n \"${p99w}\" ] && \\\n     awk -v f=\"${p99wf}\" -v w=\"${p99w}\" 'BEGIN{exit !(f>=8 && (w>=1.0))}' 2>/dev/null; then\n    _net=\"YES\"\n  fi\n\n  # Classic capped-block signature (e.g., ~500-600 MB/s ceilings)\n  if [ \"${_net}\" = \"NO\" ] && [ -n \"${sr}\" ] && [ -n \"${sw}\" ] && [ -n \"${ioping_us}\" ] && \\\n     awk -v sr=\"${sr}\" -v sw=\"${sw}\" -v u=\"${ioping_us}\" 'BEGIN{exit !(sr>=450 && sr<=700 
&& sw>=350 && sw<=700 && u>=250)}' 2>/dev/null; then\n    _net=\"YES\"\n  fi\n\n  # Strong vendor/controller hint for network-backed NVMe\n  if [ \"${_PCI_NET_STORAGE_HINT:-NO}\" = \"YES\" ]; then\n    _net=\"YES\"\n  fi\n\n  # ----\n  # Step 2: Performance class (NVMe vs SSD) based on observed behavior\n  # ----\n  local _class=\"UNKNOWN\"\n\n  # NVMe-class: very low QD=1 tail + low ioping\n  if [ -n \"${p99r}\" ] && [ -n \"${p99w}\" ] && [ -n \"${ioping_us}\" ] && \\\n     awk -v pr=\"${p99r}\" -v pw=\"${p99w}\" -v u=\"${ioping_us}\" 'BEGIN{exit !(pr<=0.20 && pw<=0.20 && u<=200)}' 2>/dev/null; then\n    _class=\"NVME\"\n  fi\n\n  # NVMe-class (throughput override): some providers cap per-VM throughput/latency,\n  # but multi-GB/s sequential strongly indicates NVMe-class when not network.\n  if [ \"${_class}\" = \"UNKNOWN\" ] && [ \"${_net}\" = \"NO\" ] && [ \"${_degraded}\" = \"NO\" ] && \\\n     [ -n \"${sr}\" ] && [ -n \"${sw}\" ] && [ -n \"${ioping_us}\" ] && \\\n     awk -v sr=\"${sr}\" -v sw=\"${sw}\" -v u=\"${ioping_us}\" 'BEGIN{exit !((sr>=1500 || sw>=1200) && u<=350)}' 2>/dev/null; then\n    _class=\"NVME\"\n  fi\n\n  # SSD-class: still decent tails, but not NVMe-like\n  if [ \"${_class}\" = \"UNKNOWN\" ] && [ -n \"${p99w}\" ] && [ -n \"${p99wf}\" ] && \\\n     awk -v w=\"${p99w}\" -v f=\"${p99wf}\" 'BEGIN{exit !(w<=2.0 && f<=20)}' 2>/dev/null; then\n    _class=\"SSD\"\n  fi\n\n  # Fallback when tails are missing: use seq + ioping\n  if [ \"${_class}\" = \"UNKNOWN\" ] && [ -n \"${sr}\" ] && [ -n \"${sw}\" ] && [ -n \"${ioping_us}\" ] && \\\n     awk -v sr=\"${sr}\" -v sw=\"${sw}\" -v u=\"${ioping_us}\" 'BEGIN{exit !(sw>=1200 && sr>=1800 && u<=350)}' 2>/dev/null; then\n    _class=\"NVME\"\n  fi\n  if [ \"${_class}\" = \"UNKNOWN\" ] && [ -n \"${sr}\" ] && [ -n \"${sw}\" ] && \\\n     awk -v sr=\"${sr}\" -v sw=\"${sw}\" 'BEGIN{exit !(sw>=300 && sr>=600)}' 2>/dev/null; then\n    _class=\"SSD\"\n  fi\n\n  # ----\n  # Final label\n  # ----\n  if [ 
\"${_degraded}\" = \"YES\" ]; then\n    if [ \"${_net}\" = \"YES\" ]; then\n      echo \"NET_DEGRADED\"\n    else\n      echo \"LOCAL_DEGRADED\"\n    fi\n    return\n  fi\n\n  # LOCAL NVMe that is clearly capped by policy/limits (common on some vendors).\n  # We infer this when latency looks local+NVMe-like, but sequential sits in a well-known ceiling band.\n  if [ \"${_net}\" = \"NO\" ] && [ -n \"${sr}\" ] && [ -n \"${sw}\" ] && [ -n \"${ioping_us}\" ] && [ -n \"${p99wf}\" ] && [ -n \"${p99w}\" ] && \\\n     awk -v sr=\"${sr}\" -v sw=\"${sw}\" -v u=\"${ioping_us}\" -v f=\"${p99wf}\" -v w=\"${p99w}\" \\\n       'BEGIN{exit !((sr>=450 && sr<=750) && (sw>=350 && sw<=750) && (u<=250) && (f<=5) && (w<=1.0))}' 2>/dev/null; then\n    echo \"LOCAL_NVME_CAPPED\"\n    return\n  fi\n\n  if [ \"${_net}\" = \"YES\" ]; then\n    if [ \"${_class}\" = \"NVME\" ]; then\n      echo \"NET_NVME_CLASS\"\n    else\n      # default for net block: SSD-class behavior is most common\n      echo \"NET_SSD_CLASS\"\n    fi\n  else\n    # Virtual controllers (virtio) cannot prove \"local\" even if the backing is NVMe.\n    # Avoid asserting locality; report behavior class instead.\n    if [ \"${_PCI_VIRTIO_STORAGE_HINT:-NO}\" = \"YES\" ] && [ \"${_HAS_NVME_SYS:-NO}\" = \"NO\" ] && [ \"${_PCI_NVME_CTRL_HINT:-NO}\" = \"NO\" ]; then\n      if [ \"${_class}\" = \"NVME\" ]; then\n        echo \"VIRTIO_NVME_CLASS\"\n      else\n        echo \"VIRTIO_SSD_CLASS\"\n      fi\n    else\n      if [ \"${_class}\" = \"NVME\" ]; then\n        echo \"LOCAL_NVME\"\n      else\n        echo \"LOCAL_SSD\"\n      fi\n    fi\n  fi\n}\n\n_boa_disk_profile_note() {\n  # Human-readable hint for \"poor\" disk grade, focusing on DB/CMS impact.\n  # Inputs:\n  #  $1=ioping_avg_display, $2=ioping_avg_us, $3=qd1_fsync_p99_display, $4=qd1_fsync_p99_ms,\n  #  $5=qd1_p99_write_display, $6=qd1_p99_write_ms, $7=seq_write_display, $8=seq_write_mbps\n  local io_disp=\"$1\"\n  local io_us=\"$2\"\n  local fs_disp=\"$3\"\n  local 
fs_ms=\"$4\"\n  local w_disp=\"$5\"\n  local w_ms=\"$6\"\n  local sw_disp=\"$7\"\n  local sw_mbps=\"$8\"\n\n  io_us=\"$(_num_or_empty \"${io_us}\")\"\n  fs_ms=\"$(_num_or_empty \"${fs_ms}\")\"\n  w_ms=\"$(_num_or_empty \"${w_ms}\")\"\n  sw_mbps=\"$(_num_or_empty \"${sw_mbps}\")\"\n\n  if [ -n \"${sw_mbps}\" ] && awk -v sw=\"${sw_mbps}\" 'BEGIN{exit !(sw<150)}' 2>/dev/null; then\n    printf \"low sequential write (%s)\" \"${sw_disp}\"\n    return\n  fi\n  if [ -n \"${fs_ms}\" ] && awk -v f=\"${fs_ms}\" 'BEGIN{exit !(f>=8)}' 2>/dev/null; then\n    printf \"high durable-write tail (QD1 fsync p99 %s)\" \"${fs_disp}\"\n    return\n  fi\n  if [ -n \"${io_us}\" ] && awk -v u=\"${io_us}\" 'BEGIN{exit !(u>=1000)}' 2>/dev/null; then\n    printf \"high latency (ioping avg %s)\" \"${io_disp}\"\n    return\n  fi\n  if [ -n \"${w_ms}\" ] && awk -v w=\"${w_ms}\" 'BEGIN{exit !(w>=3)}' 2>/dev/null; then\n    printf \"high random-write tail (QD1 write p99 %s)\" \"${w_disp}\"\n    return\n  fi\n\n  printf \"inconsistent latency/tails for DB/CMS workloads\"\n}\n\n_print_disk_compact_section() {\n  # $1 = mountpoint, $2 = short description (header hint)\n  local _mp=\"$1\"\n  local _desc=\"$2\"\n\n  local _seqw _seqr _riops _wiops _meta\n  local _rbw _wbw _rlat _wlat\n  local _seek_line _ioping_avg _ioping_avg_us\n  local _seqw_num _seqr_num _riops_num _wiops_num _meta_num\n  local _disk_grade\n\n  _seqw=\"${_fio_seqwrite_bw_write[\"${_mp}\"]:-N/A}\"\n  _seqr=\"${_fio_seqread_bw_read[\"${_mp}\"]:-N/A}\"\n  _riops=\"${_fio_rand_iops_read[\"${_mp}\"]:-N/A}\"\n  _wiops=\"${_fio_rand_iops_write[\"${_mp}\"]:-N/A}\"\n  _meta=\"${_meta_ops[\"${_mp}\"]:-N/A}\"\n\n  _rbw=\"${_fio_rand_bw_read[\"${_mp}\"]:-N/A}\"\n  _wbw=\"${_fio_rand_bw_write[\"${_mp}\"]:-N/A}\"\n  _rlat=\"${_fio_rand_latency_read[\"${_mp}\"]:-N/A}\"\n  _wlat=\"${_fio_rand_latency_write[\"${_mp}\"]:-N/A}\"\n\n  _seek_line=\"${_ioping_seek_line[\"${_mp}\"]:-N/A}\"\n  _ioping_avg=\"$(_parse_ioping_avg_display 
\"${_seek_line}\")\"\n  _ioping_avg_us=\"$(_parse_ioping_avg_us \"${_seek_line}\")\"\n  [ -n \"${_ioping_avg}\" ] || _ioping_avg=\"N/A\"\n\n  _seqw_num=\"$(_num_or_empty \"$(echo \"${_seqw}\" | awk '{print $1}')\")\"\n  _seqr_num=\"$(_num_or_empty \"$(echo \"${_seqr}\" | awk '{print $1}')\")\"\n  _riops_num=\"$(_num_or_empty \"${_riops}\")\"\n  _wiops_num=\"$(_num_or_empty \"${_wiops}\")\"\n  _meta_num=\"$(_num_or_empty \"$(echo \"${_meta}\" | awk '{print $1}')\")\"\n\n  _qd1_p95_r=\"${_fio_qd1_p95_read[\"${_mp}\"]:-N/A}\"\n  _qd1_p95_w=\"${_fio_qd1_p95_write[\"${_mp}\"]:-N/A}\"\n  _qd1_p99_r=\"${_fio_qd1_p99_read[\"${_mp}\"]:-N/A}\"\n  _qd1_p99_w=\"${_fio_qd1_p99_write[\"${_mp}\"]:-N/A}\"\n  _qd1_fsync_p95_w=\"${_fio_qd1_fsync_p95_write[\"${_mp}\"]:-N/A}\"\n  _qd1_fsync_p99_w=\"${_fio_qd1_fsync_p99_write[\"${_mp}\"]:-N/A}\"\n\n  _qd1_p99_r_num=\"$(_num_or_empty \"$(echo \"${_qd1_p99_r}\" | awk \"{print \\$1}\")\")\"\n  _qd1_p99_w_num=\"$(_num_or_empty \"$(echo \"${_qd1_p99_w}\" | awk \"{print \\$1}\")\")\"\n  _qd1_fsync_p95_w_num=\"$(_num_or_empty \"$(echo \"${_qd1_fsync_p95_w}\" | awk \"{print \\$1}\")\")\"\n  _qd1_fsync_p99_w_num=\"$(_num_or_empty \"$(echo \"${_qd1_fsync_p99_w}\" | awk \"{print \\$1}\")\")\"\n\n  local _disk_grade_tp _disk_grade_tail\n  local _disk_gmean _disk_imbalance _disk_meta _disk_fsync_p99 _disk_qd1p99r _disk_qd1p99w _disk_ioping\n  read -r _disk_grade _disk_grade_tp _disk_grade_tail _disk_gmean _disk_imbalance _disk_meta _disk_fsync_p99 _disk_qd1p99r _disk_qd1p99w _disk_ioping <<< \"$(_boa_grade_disk_final \"${_ioping_avg_us}\" \"${_seqr_num}\" \"${_seqw_num}\" \"${_riops_num}\" \"${_wiops_num}\" \"${_meta_num}\" \"${_qd1_p99_r_num}\" \"${_qd1_p99_w_num}\" \"${_qd1_fsync_p99_w_num}\" \"${_qd1_fsync_p95_w_num}\")\"\n  _storage_tier=\"$(_boa_guess_storage_tier \"${_ioping_avg_us}\" \"${_seqr_num}\" \"${_seqw_num}\" \"${_qd1_p99_r_num}\" \"${_qd1_p99_w_num}\" \"${_qd1_fsync_p99_w_num}\")\"\n  _disk_note=\"$(_boa_disk_profile_note 
\"${_ioping_avg}\" \"${_ioping_avg_us}\" \"${_qd1_fsync_p99_w}\" \"${_qd1_fsync_p99_w_num}\" \"${_qd1_p99_w}\" \"${_qd1_p99_w_num}\" \"${_seqw}\" \"${_seqw_num}\")\"\n\n  echo \" =============================================================================\"\n  if [ \"${_mp}\" = \"/\" ]; then\n    echo \" == 4. DISK (Root partition / main filesystem)                              ==\"\n  else\n    printf \" == 4. DISK (%s) (Mounted extra filesystem) \\n\" \"${_mp}\"\n  fi\n  echo \" =============================================================================\"\n  if [ \"${_FANCY}\" -eq 1 ]; then\n    echo \"\"\n    printf \"   Sequential write (CRITICAL): %s\\n\" \"${_seqw}\"\n    printf \"   Sequential read:             %s\\n\" \"${_seqr}\"\n    printf \"   Random 4K IOPS:              %s read / %s write\\n\" \"${_riops}\" \"${_wiops}\"\n    printf \"   Real latency (ioping avg):   %s\\n\" \"${_ioping_avg}\"\n    printf \"   Metadata ops:                %s\\n\" \"${_meta}\"\n    printf \"   Random 4K QD1 p95/p99:       %s / %s read, %s / %s write\\n\" \"${_qd1_p95_r}\" \"${_qd1_p99_r}\" \"${_qd1_p95_w}\" \"${_qd1_p99_w}\"\n    printf \"   Random 4K QD1 fsync p95/p99: %s / %s write\\n\" \"${_qd1_fsync_p95_w}\" \"${_qd1_fsync_p99_w}\"\n    printf \"   Storage tier guess:          %s\\n\" \"${_storage_tier}\"\n    echo \"\"\n    printf \"   Details (fio synthetic latency):\\n\"\n    printf \"     Random 4K read:   %s @ %s\\n\" \"${_rbw}\" \"${_rlat}\"\n    printf \"     Random 4K write:  %s @ %s\\n\" \"${_wbw}\" \"${_wlat}\"\n  fi\n  echo \"\"\n\n  if [ \"${_disk_grade}\" = \"bad\" ]; then\n    echo \"   ❌ CRITICAL: Sequential write < 100 MB/s - disk is throttled or failing!\"\n    echo \"   This will severely impact BOA performance (cache writes, logs, DB)\"\n  elif [ \"${_disk_grade}\" = \"poor\" ]; then\n    echo \"   ⚠️ Disk profile is inconsistent: ${_disk_note}\"\n  elif [ \"${_disk_grade}\" = \"average\" ]; then\n    echo \"   ☑️ Average disk performance for 
BOA workloads\"\n  elif [ \"${_disk_grade}\" = \"good\" ]; then\n    echo \"   🌟 Good disk performance for BOA workloads\"\n  elif [ \"${_disk_grade}\" = \"excellent\" ]; then\n    echo \"   🏆 Excellent disk performance (NVMe-class)\"\n  elif [ \"${_disk_grade}\" = \"ultra\" ]; then\n    echo \"   🚀 Ultra disk performance (top-tier NVMe)\"\n  fi\n\n  # v54: show grading breakdown (overall vs throughput vs tail) in verbose output\n  if [ \"${_FANCY}\" -eq 1 ]; then\n      printf \"   Breakdown:                 %s overall (mode=%s)\\n\" \"${_disk_grade}\" \"${DISK_GRADE_MODE:-BOA}\"\n      printf \"     - Throughput (bulk):     %s (gmean=%s MB/s, R/W ratio=%sx)\\n\" \"${_disk_grade_tp}\" \"${_disk_gmean:-N/A}\" \"${_disk_imbalance:-N/A}\"\n      printf \"     - Tail (stall risk):     %s (fsync p99=%sms, QD1 p99 R/W=%s/%sms, ioping=%sus)\\n\" \"${_disk_grade_tail}\" \"${_disk_fsync_p99:-N/A}\" \"${_disk_qd1p99r:-N/A}\" \"${_disk_qd1p99w:-N/A}\" \"${_disk_ioping:-N/A}\"\n  fi\n  echo \"\"\n}\n\n_find_server_city() {\n  if [ -e \"/root/.found_correct_city.cnf\" ]; then\n    _LOC_CITY=$(cat /root/.found_correct_city.cnf 2>/dev/null | tr -d '\\n')\n  else\n    if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n      _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n      _LOC_CITY=$(curl ${_crlGet} ipinfo.io/${_LOC_IP}/city 2>&1)\n      _LOC_CITY=$(echo -n ${_LOC_CITY} | tr -d \"\\n\" 2>&1)\n    fi\n    if [ ! 
-z \"${_LOC_CITY}\" ]; then\n      _LOC_CITY=$(echo \"${_LOC_CITY}\" | tr ' ' '+' 2>&1)\n      echo ${_LOC_CITY} > /root/.found_correct_city.cnf\n    fi\n  fi\n}\n\n_print_host_info() {\n  _find_server_city\n  [ -n \"${_LOC_CITY}\" ] && _LOC_CITY=\"$(echo \"${_LOC_CITY}\" | tr '+' ' ')\"\n  [ -z \"${_LOC_CITY}\" ] && _LOC_CITY=\"Cicely\"\n  echo \"${_LOC_CITY}\"\n}\n\n_print_grading_summary() {\n  echo \" =============================================================================\"\n  echo \" ==                 BOA Performance Test - Compact Summary                  ==\"\n  echo \" ==              Priority: Memory > CPU > Disk > DB > Network               ==\"\n  echo \" =============================================================================\"\n  echo \"\"\n  echo \"Host and system details\"\n  echo \"\"\n  echo \" Perftest ${_PT_VERSION}\"\n  echo \" Date: [$(date)]\"\n  echo \" City: $(_print_host_info)\"\n  echo \" Info: $(boa version)\"\n  echo \"\"\n  if [ -x \"/opt/local/bin/screenfetch\" ]; then\n    bash /opt/local/bin/screenfetch -n\n    if [ \"${_HAS_LSPCI:-NO}\" = \"YES\" ]; then\n      [ -n \"${_PCI_STORAGE_SUMMARY}\" ] && echo \" PCI storage: ${_PCI_STORAGE_SUMMARY}\"\n      [ -n \"${_PCI_NIC_SUMMARY}\" ] && echo \" PCI NIC: ${_PCI_NIC_SUMMARY}\"\n    fi\n  else\n    echo \"Host: ${_hostname}\"\n    echo \"CPU: ${_cpu_info}\"\n    echo \"vCPUs: ${_cpu_cores}\"\n    _total_ram_mb=\"$(free -m | awk '/^Mem:/ {print $2}')\"\n    echo \"RAM:   ${_total_ram_mb} MB\"\n    if [ \"${_HAS_LSPCI:-NO}\" = \"YES\" ]; then\n      [ -n \"${_PCI_STORAGE_SUMMARY}\" ] && echo \"PCI storage: ${_PCI_STORAGE_SUMMARY}\"\n      [ -n \"${_PCI_NIC_SUMMARY}\" ] && echo \"PCI NIC: ${_PCI_NIC_SUMMARY}\"\n    fi\n  fi\n  echo \"\"\n\n  # Extract numeric values\n  local mem_mibps mem_lat cpu_steal st_eps mt_eps aes_mibps net_mibps net_mbps net_tcp_dl_mbps _mt_note\n  local rand_read_lat seq_read_mbps seq_write_mbps ioping_avg_us meta_ops_num\n  local read_iops 
write_iops\n\n  mem_mibps=\"$(_num_or_empty \"$(echo \"${_mem_transfer_speed}\" | awk '{print $1}')\")\"\n  mem_lat=\"$(_num_or_empty \"${_mem_latency}\")\"\n  cpu_steal=\"${_cpu_steal_mt}\"\n  st_eps=\"$(_num_or_empty \"${_cpu_events_ST}\")\"\n  mt_eps=\"$(_num_or_empty \"${_cpu_events_MT_NPROC}\")\"\n  aes_mibps=\"$(_num_or_empty \"$(echo \"${_aes_mibps}\" | awk '{print $1}')\")\"\n  net_mibps=\"$(_num_or_empty \"${_net_download_mibps_num}\")\"\n  net_mbps=\"$(_num_or_empty \"${_net_download_mbps_num}\")\"\n  net_tcp_dl_mbps=\"$(_num_or_empty \"${_net_iperf_dl_mbps_med}\")\"\n\n  if [ \"${_DO_DISK}\" = \"YES\" ]; then\n    rand_read_lat=\"$(_num_or_empty \"$(echo \"${_root_rand_lat_read}\" | awk '{print $1}')\")\"\n    seq_read_mbps=\"$(_num_or_empty \"$(echo \"${_root_seqread_bw_read}\" | awk '{print $1}')\")\"\n    seq_write_mbps=\"$(_num_or_empty \"$(echo \"${_root_seqwrite_bw_write}\" | awk '{print $1}')\")\"\n    meta_ops_num=\"$(_num_or_empty \"$(echo \"${_root_meta_ops}\" | awk '{print $1}')\")\"\n    read_iops=\"$(_num_or_empty \"${_fio_rand_iops_read[\"/\"]}\")\"\n    write_iops=\"$(_num_or_empty \"${_fio_rand_iops_write[\"/\"]}\")\"\n\n    # Extract ioping avg latency (format: \"min/avg/max/mdev = 28.3 us / 126.5 us / ...\")\n    ioping_avg_us=\"$(_parse_ioping_avg_us \"${_ioping_seek}\")\"\n  fi\n\n  # Calculate grades\n  local mem_grade steal_grade cpu_grade disk_grade db_grade net_grade net_http_grade net_tcp_grade net_note net_grade_rank _http_rank _tcp_rank\n  mem_grade=\"$(_boa_grade_mem_final \"${mem_mibps:-0}\" \"${mem_lat}\" \"${_mem_bench_threads_grade:-${_mem_bench_threads}}\")\"\n  steal_grade=\"$(_boa_grade_cpu_steal \"${cpu_steal}\")\"\n  cpu_grade=\"$(_boa_grade_cpu_speed \"${st_eps}\")\"\n  if [ \"${_DO_DISK}\" = \"YES\" ]; then\n    root_qd1_p99_r_num=\"$(_num_or_empty \"$(echo \"${_root_qd1_p99_read}\" | awk \"{print \\$1}\")\")\"\n    root_qd1_p99_w_num=\"$(_num_or_empty \"$(echo \"${_root_qd1_p99_write}\" | awk \"{print 
\\$1}\")\")\"\n    root_qd1_fsync_p95_w_num=\"$(_num_or_empty \"$(echo \"${_root_qd1_fsync_p95_write}\" | awk \"{print \\$1}\")\")\"\n    root_qd1_fsync_p99_w_num=\"$(_num_or_empty \"$(echo \"${_root_qd1_fsync_p99_write}\" | awk \"{print \\$1}\")\")\"\n    local disk_grade_tp disk_grade_tail\n    local disk_gmean disk_imbalance disk_meta disk_fsync_p99 disk_qd1p99r disk_qd1p99w disk_ioping\n    read -r disk_grade disk_grade_tp disk_grade_tail disk_gmean disk_imbalance disk_meta disk_fsync_p99 disk_qd1p99r disk_qd1p99w disk_ioping <<< \"$(_boa_grade_disk_final \"${ioping_avg_us}\" \"${seq_read_mbps}\" \"${seq_write_mbps}\" \"${read_iops}\" \"${write_iops}\" \"${meta_ops_num}\" \"${root_qd1_p99_r_num}\" \"${root_qd1_p99_w_num}\" \"${root_qd1_fsync_p99_w_num}\" \"${root_qd1_fsync_p95_w_num}\")\"\n  else\n    disk_grade=\"unknown\"\n  fi\n  net_http_grade=\"$(_boa_grade_network_http \"${net_mbps}\")\"\n  net_tcp_grade=\"$(_boa_grade_network_tcp \"${net_tcp_dl_mbps}\")\"\n\n  _http_rank=\"$(_boa_grade_network_rank \"${net_http_grade}\")\"\n  _tcp_rank=\"$(_boa_grade_network_rank \"${net_tcp_grade}\")\"\n\n  # Overall network grade: primarily based on iPerf3 (port throughput).\n  # HTTP pulls are a sanity check and can be skewed by remote/CDN variance; we annotate but do not downgrade hard.\n  net_grade_rank=\"0\"\n  net_note=\"\"\n  if [ \"${_tcp_rank}\" -gt 0 ]; then\n    net_grade_rank=\"${_tcp_rank}\"\n    if [ \"${_http_rank}\" -gt 0 ] && [ \"${_http_rank}\" -lt \"${_tcp_rank}\" ]; then\n      net_note=\"HTTP slower than TCP (endpoint variance)\"\n    fi\n  else\n    net_grade_rank=\"${_http_rank}\"\n  fi\n  net_grade=\"$(_boa_grade_network_from_rank \"${net_grade_rank}\")\"\n\n  # Database grade (optional; only if DB test ran)\n  db_grade=\"unknown\"\n  if [ \"${_db_test_status}\" = \"OK\" ] || [ \"${_db_test_status}\" = \"WARN\" ]; then\n    local _vcpus_num\n    _vcpus_num=\"$(_num_or_empty \"${_cpu_cores}\")\"\n    [ -n \"${_vcpus_num}\" ] || _vcpus_num=\"1\"\n  
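The overall network grade above prefers the iPerf3/TCP rank and only falls back to the HTTP rank when no TCP measurement exists; a trailing HTTP result is annotated, not penalized. A minimal standalone sketch of that selection rule (the `_demo_net_*` names are illustrative, not part of the script):

```shell
# Illustrative sketch: overall network grade is the iPerf3/TCP grade when
# available (rank > 0); the HTTP grade is only a fallback, and HTTP trailing
# TCP is reported as a note (endpoint/CDN variance), not as a downgrade.
_demo_net_rank() {
  case "$1" in
    ultra) echo 5 ;; excellent) echo 4 ;; good) echo 3 ;;
    average) echo 2 ;; poor) echo 1 ;; *) echo 0 ;;
  esac
}
_demo_net_overall() {
  # $1 = TCP grade (iPerf3 DL median), $2 = HTTP grade (range-pull median)
  local tcp_r http_r
  tcp_r="$(_demo_net_rank "$1")"
  http_r="$(_demo_net_rank "$2")"
  if [ "${tcp_r}" -gt 0 ]; then
    if [ "${http_r}" -gt 0 ] && [ "${http_r}" -lt "${tcp_r}" ]; then
      echo "$1 (HTTP slower than TCP)"
    else
      echo "$1"
    fi
  else
    # No usable TCP result: fall back to the HTTP sanity check.
    echo "$2"
  fi
}
_demo_net_overall excellent good
_demo_net_overall unknown average
```

Running the two calls prints `excellent (HTTP slower than TCP)` and `average`, showing both the annotation path and the fallback path.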
  db_grade=\"$(_boa_grade_db_oltp \"${_db_rw_tps}\" \"${_db_rw_p95_ms}\" \"${_vcpus_num}\" \"${_db_ro_p95_ms}\" \"${_db_threads_used:-0}\")\"\n  fi\n\n  if [ \"${_DO_MEM}\" = \"YES\" ]; then\n    echo \" =============================================================================\"\n    echo \" == 1. MEMORY (Critical for APC/Valkey/OPcache + Nginx microcaching)        ==\"\n    echo \" =============================================================================\"\n    if [ \"${_FANCY}\" -eq 1 ]; then\n      echo \"\"\n      printf \"   Bandwidth:  %-20s Grade: [%s]%s\\n\" \"${_mem_transfer_speed}\" \"${mem_grade}\" \"$( [ \"${_mem_bench_tool:-membench}\" = \"sysbench\" ] && echo \" (fallback: sysbench)\" || echo \"\" )\"\n      if [ -n \"${_mem_latency}\" ]; then\n        printf \"   Latency:    %-20s\\n\" \"${_mem_latency} ${_mem_latency_unit}\"\n      else\n        printf \"   Latency:    %-20s\\n\" \"N/A\"\n      fi\n      if [ -n \"${_mem_bench_size_mib:-}\" ]; then\n        echo \"   Bench:      ${_mem_bench_size_mib} MiB per buffer x2 (~${_mem_bench_total_mib} MiB total), ${_mem_bench_threads} threads (median of ${_MEM_BENCH_REPEATS})\"\n        if [ -n \"${_mem_bw_min}\" ] && [ -n \"${_mem_bw_max}\" ]; then\n          echo \"   Variance:   ${_mem_bw_min}-${_mem_bw_max} MiB/s (bw), ${_mem_lat_min}-${_mem_lat_max} ns (lat)\"\n        fi\n      fi\n    fi\n    echo \"\"\n\n    if [ \"${mem_grade}\" = \"low\" ]; then\n      if [ -n \"${mem_lat}\" ] && awk -v l=\"${mem_lat}\" 'BEGIN{exit !(l>230)}' 2>/dev/null; then\n        echo \"   ⚠️ High/variable memory latency - may hurt cache-heavy BOA workloads\"\n      else\n        echo \"   ⚠️ Low memory bandwidth - may limit BOA caching under heavy load\"\n      fi\n    elif [ \"${mem_grade}\" = \"average\" ]; then\n      if [ -n \"${mem_lat}\" ] && awk -v l=\"${mem_lat}\" 'BEGIN{exit !(l>230)}' 2>/dev/null; then\n        echo \"   ☑️ Strong memory throughput with elevated latency (contention/NUMA variance)\"\n      
else\n        echo \"   ☑️ Average memory performance for in-memory caching\"\n      fi\n    elif [ \"${mem_grade}\" = \"good\" ]; then\n      echo \"   🌟 Good memory performance for in-memory caching\"\n    elif [ \"${mem_grade}\" = \"excellent\" ]; then\n      echo \"   🏆 Excellent memory performance for in-memory caching\"\n    elif [ \"${mem_grade}\" = \"ultra\" ]; then\n      echo \"   🚀 Ultra memory performance for in-memory caching\"\n    fi\n    echo \"\"\n  fi\n\n  if [ \"${_DO_CPU}\" = \"YES\" ] || [ \"${_DO_AES}\" = \"YES\" ]; then\n    echo \" =============================================================================\"\n    echo \" == 2. CPU (Drupal/PHP runtime; watch steal on oversold nodes)              ==\"\n    echo \" =============================================================================\"\n    if [ \"${_FANCY}\" -eq 1 ]; then\n      echo \"\"\n      printf \"   Single-thread:  %-15s events/sec  [%s] (most important)\\n\" \"${_cpu_events_ST}\" \"${cpu_grade}\"\n      if [ -n \"${_cpu_mt_threads_used:-}\" ] && [ -n \"${_cpu_cores:-}\" ] && [ \"${_cpu_mt_threads_used}\" -lt \"${_cpu_cores}\" ] 2>/dev/null; then\n        _mt_note=\"(${_cpu_cores} vCPUs; measured @ ${_cpu_mt_threads_used} threads cap)\"\n      else\n        _mt_note=\"(${_cpu_cores} vCPUs)\"\n      fi\n      printf \"   Multi-thread:   %-15s events/sec  %s\\n\" \"${_cpu_events_MT_NPROC}\" \"${_mt_note}\"\n      printf \"   CPU Steal:      %-15s%%  [%s]\\n\" \"${cpu_steal:-N/A}\" \"${steal_grade}\"\n      printf \"   AES Throughput: %-15s (crypto/compression)\\n\" \"${_aes_mibps}\"\n    fi\n    echo \"\"\n\n    if [ -n \"${cpu_steal}\" ] && [ \"${cpu_steal}\" != \"N/A\" ]; then\n      if awk -v s=\"${cpu_steal}\" 'BEGIN{exit !(s>=7)}' 2>/dev/null; then\n        echo \"   ❌ HIGH CPU steal - oversold host, NOT recommended for production\"\n        steal_grade=bad\n      elif awk -v s=\"${cpu_steal}\" 'BEGIN{exit !(s>=3)}' 2>/dev/null; then\n        echo \"   ⚠️ Elevated CPU 
steal - will cause latency spikes under load\"\n        steal_grade=warning\n      elif awk -v s=\"${cpu_steal}\" 'BEGIN{exit !(s<1)}' 2>/dev/null; then\n        echo \"   🏆 Excellent CPU steal - minimal noisy neighbor impact\"\n      elif awk -v s=\"${cpu_steal}\" 'BEGIN{exit !(s<3)}' 2>/dev/null; then\n        echo \"   ☑️ Minimal CPU steal - but may cause latency spikes under load\"\n      fi\n    fi\n\n    if [ \"${cpu_grade}\" = \"poor\" ]; then\n      echo \"   ❌ CPU performance too slow for BOA\"\n    elif [ \"${cpu_grade}\" = \"weak\" ]; then\n      echo \"   ⚠️ Weak CPU performance, not optimal\"\n    elif [ \"${cpu_grade}\" = \"average\" ]; then\n      echo \"   ☑️ Average CPU performance\"\n    elif [ \"${cpu_grade}\" = \"good\" ]; then\n      echo \"   🌟 Good CPU performance \"\n    elif [ \"${cpu_grade}\" = \"excellent\" ]; then\n      echo \"   🏆 Excellent CPU performance\"\n    elif [ \"${cpu_grade}\" = \"ultra\" ]; then\n      echo \"   🚀 Ultra CPU performance\"\n    fi\n    echo \"\"\n  fi\n\n  # Filesystem details + fstab hints + space/inodes sanity (informational)\n  if [ \"${_DO_FS}\" = \"YES\" ]; then\n    _print_fs_section\n  fi\n\n  if [ \"${_DO_DISK}\" = \"YES\" ]; then\n    # DISK (root is used for overall rating; other mounts are informational)\n    _detect_mysql_durability_relaxed\n    _print_disk_compact_section \"/\" \"Root partition / main filesystem\"\n    local _mp\n    for _mp in \"${_test_mounts[@]}\"; do\n      if [ \"${_mp}\" != \"/\" ]; then\n        _print_disk_compact_section \"${_mp}\" \"Mounted extra filesystem\"\n      fi\n    done\n  fi\n\n  if [ \"${_DO_DB}\" = \"YES\" ]; then\n    echo \" =============================================================================\"\n    echo \" == 5. 
DATABASE (Percona/MySQL OLTP proxy via sysbench)                     ==\"\n    echo \" =============================================================================\"\n    echo \"\"\n    if [ \"${_db_test_status}\" = \"OK\" ] || [ \"${_db_test_status}\" = \"WARN\" ]; then\n      if [ \"${_FANCY}\" -eq 1 ]; then\n        printf \"   Threads used:            %s\\n\" \"${_db_threads_used:-N/A}\"\n        printf \"   Read-only (tps / p95):   %-10s / %s ms\\n\" \"${_db_ro_tps:-N/A}\" \"${_db_ro_p95_ms:-N/A}\"\n        printf \"   Read-write (tps / p95):  %-10s / %s ms  [%s]\\n\" \"${_db_rw_tps:-N/A}\" \"${_db_rw_p95_ms:-N/A}\" \"${db_grade}\"\n        echo \"\"\n      fi\n      if [ \"${db_grade}\" = \"poor\" ]; then\n        echo \"   ❌ Slow DB OLTP - cache misses and admin tasks may feel sluggish\"\n      elif [ \"${db_grade}\" = \"average\" ]; then\n        echo \"   ☑️ Average DB OLTP performance\"\n      elif [ \"${db_grade}\" = \"good\" ]; then\n        echo \"   🌟 Good DB OLTP performance\"\n      elif [ \"${db_grade}\" = \"excellent\" ]; then\n        echo \"   🏆 Excellent DB OLTP performance\"\n      elif [ \"${db_grade}\" = \"ultra\" ]; then\n        echo \"   🚀 Ultra DB OLTP performance\"\n      fi\n      if [ \"${_db_test_status}\" = \"WARN\" ] && [ -n \"${_db_test_reason}\" ]; then\n        echo \"   ⚠️ Note: ${_db_test_reason}\"\n      fi\n    else\n      printf \"   Status: %s\\n\" \"${_db_test_status}\"\n      if [ -n \"${_db_test_reason}\" ]; then\n        printf \"   Reason: %s\\n\" \"${_db_test_reason}\"\n      fi\n    fi\n    echo \"\"\n  fi\n\n  if [ \"${_DO_NET}\" = \"YES\" ]; then\n    echo \" =============================================================================\"\n    echo \" == 6. 
NETWORK (Sanity check)                                               ==\"\n    echo \" =============================================================================\"\n    if [ \"${_FANCY}\" -eq 1 ]; then\n      echo \"\"\n      printf \"   Public IPv4:        %s\\n\" \"${_net_ipv4}\"\n      printf \"   Region:             %s\\n\" \"${_net_region_pretty}\"\n      _http_show_grade=\"${net_http_grade}\"\n      if [ -z \"${_http_show_grade}\" ] || [ \"${_http_show_grade}\" = \"unknown\" ]; then\n        _http_show_grade=\"${net_grade}\"\n      fi\n      printf \"   HTTP download:      %-15s [%s]\\n\" \"${_net_download}\" \"${_http_show_grade}\"\n      printf \"   Overall grade:      %s\\n\" \"${net_grade}\"\n      if [ -n \"${net_http_grade}\" ] && [ -n \"${net_tcp_grade}\" ] && [ \"${net_tcp_grade}\" != \"unknown\" ]; then\n        printf \"   Grades:             HTTP %s | TCP %s\\n\" \"${net_http_grade}\" \"${net_tcp_grade}\"\n      fi\n      if [ -n \"${net_note}\" ]; then\n        printf \"   Note:               %s\\n\" \"${net_note}\"\n      fi\n      echo \"\"\n      printf \"   Score: median of %s/%s single-stream Range pulls (~100 MiB, timeout %ss)\\n\" \"${_net_http_ok}\" \"${_net_http_total}\" \"${_NET_TIMEOUT}\"\n      if [ \"${#_net_dl_name[@]}\" -gt 0 ]; then\n        local _i\n        _i=0\n        while [ \"${_i}\" -lt \"${#_net_dl_name[@]}\" ]; do\n          printf \"     - %-10s %s\\n\" \"${_net_dl_name[${_i}]}:\" \"${_net_dl_pretty[${_i}]}\"\n          _i=$((_i + 1))\n        done\n        if [ \"${_NET_IPERF_ENABLED}\" = \"YES\" ]; then\n          echo \"\"\n          if [ \"${#_net_iperf_name[@]}\" -gt 0 ]; then\n            printf \"   iPerf3 (TCP) median: DL %s | UL %s (%s servers, %s streams, %ss)\\n\" \"${_net_iperf_dl}\" \"${_net_iperf_ul}\" \"${#_net_iperf_name[@]}\" \"${_NET_IPERF_PARALLEL}\" \"${_NET_IPERF_TIME}\"\n            _i=0\n            while [ \"${_i}\" -lt \"${#_net_iperf_name[@]}\" ]; do\n              printf \"     - %-10s DL %s 
Mbit/s | UL %s Mbit/s\\n\" \"${_net_iperf_name[${_i}]}:\" \"${_net_iperf_dl_mbps[${_i}]}\" \"${_net_iperf_ul_mbps[${_i}]}\"\n              _i=$((_i + 1))\n            done\n          else\n            printf \"   iPerf3 (TCP):       no data (all tests failed or busy)\\n\"\n          fi\n        else\n          printf \"   iPerf3 (TCP):       disabled\\n\"\n        fi\n      fi\n    fi\n    echo \"\"\n\n    if [ \"${net_grade}\" = \"poor\" ]; then\n      echo \"   ⚠️ Slow network - may affect external API calls and package updates\"\n      _warnings=$((_warnings + 1))\n    elif [ \"${net_grade}\" = \"average\" ]; then\n      echo \"   ☑️ Average network speed\"\n    elif [ \"${net_grade}\" = \"good\" ]; then\n      echo \"   🌟 Good network speed\"\n    elif [ \"${net_grade}\" = \"excellent\" ]; then\n      echo \"   🏆 Excellent network speed\"\n    elif [ \"${net_grade}\" = \"ultra\" ]; then\n      echo \"   🚀 Ultra network speed\"\n    fi\n    echo \"\"\n  fi\n\n  if [ \"${_DO_MEM}\" = \"YES\" ] && [ \"${_DO_CPU}\" = \"YES\" ] && [ \"${_DO_DISK}\" = \"YES\" ] && [ \"${_DO_NET}\" = \"YES\" ]; then\n    echo \" =============================================================================\"\n    echo \" == BOA SUITABILITY SUMMARY                                                 ==\"\n    echo \" =============================================================================\"\n    echo \"\"\n\n    local _issues=0\n    local _warnings=0\n\n    # Memory\n    # Treat \"low\" as a critical issue only when bandwidth is genuinely weak.\n    # Higher latency alone is a warning (VM contention/NUMA effects), not a hard fail.\n    if [ \"${mem_grade}\" = \"low\" ]; then\n      if [ -n \"${mem_mibps}\" ] && awk -v b=\"${mem_mibps}\" 'BEGIN{exit !(b>=20000)}' 2>/dev/null; then\n        echo \"   ⚠️ Memory latency/variance looks high (bandwidth is strong) - monitor under load\"\n        _warnings=$((_warnings + 1))\n      else\n        echo \"   ❌ Memory bandwidth too low for BOA\"\n 
       _issues=$((_issues + 1))\n      fi\n    elif [ \"${mem_grade}\" = \"average\" ]; then\n      if [ -n \"${mem_lat}\" ] && awk -v l=\"${mem_lat}\" 'BEGIN{exit !(l>230)}' 2>/dev/null; then\n        echo \"   ⚠️ Memory latency higher than expected - may reduce cache hit performance\"\n        _warnings=$((_warnings + 1))\n      fi\n    fi\n\n    # CPU speed\n    if [ \"${cpu_grade}\" = \"poor\" ]; then\n      echo \"   ❌ CPU performance too slow for BOA\"\n      _issues=$((_issues + 1))\n    elif [ \"${cpu_grade}\" = \"weak\" ]; then\n      echo \"   ⚠️ CPU performance weak, not optimal\"\n      _warnings=$((_warnings + 1))\n    fi\n\n    # CPU steal\n    if [ \"${steal_grade}\" = \"bad\" ]; then\n      echo \"   ❌ CPU steal critically high - oversold host, NOT recommended\"\n      _issues=$((_issues + 1))\n    elif [ \"${steal_grade}\" = \"warning\" ]; then\n      echo \"   ⚠️ CPU steal elevated - may cause latency spikes\"\n      _warnings=$((_warnings + 1))\n    fi\n\n    # Disk\n    if [ \"${disk_grade}\" = \"bad\" ]; then\n      echo \"   ❌ CRITICAL: Sequential write < 100 MB/s - disk is throttled or failing!\"\n      _issues=$((_issues + 1))\n    elif [ \"${disk_grade}\" = \"poor\" ]; then\n      echo \"   ⚠️ Slow disk - may impact deployments\"\n      _warnings=$((_warnings + 1))\n    fi\n\n    # Database (optional)\n    if [ \"${_db_test_status}\" = \"OK\" ] || [ \"${_db_test_status}\" = \"WARN\" ]; then\n      if [ \"${db_grade}\" = \"poor\" ]; then\n        echo \"   ❌ Slow DB OLTP - cache misses and admin tasks may feel sluggish\"\n        _issues=$((_issues + 1))\n      fi\n    fi\n\n    # Network\n    if [ \"${net_grade}\" = \"poor\" ]; then\n      echo \"   ❌ Slow network - not suitable for high traffic sites\"\n      _issues=$((_issues + 1))\n    fi\n\n    # Summary\n    if [ \"${_issues}\" -eq 0 ] && [ \"${_warnings}\" -eq 0 ]; then\n      echo \"   ✅ This host is well-suited for BOA deployment\"\n      echo \"   ✅ All critical metrics meet 
recommended thresholds\"\n    elif [ \"${_issues}\" -eq 0 ]; then\n      echo \"   ⚠️ This host is acceptable for BOA with ${_warnings} warning(s)\"\n      echo \"   Should work fine for most BOA workloads\"\n    else\n      echo \"   ❌ This host has ${_issues} critical issue(s)\"\n      echo \"   NOT recommended for production BOA deployment\"\n    fi\n\n    echo \"\"\n    echo \" =============================================================================\"\n    echo \"\"\n    echo \"For detailed metrics, run: perftest --verbose\"\n    echo \"\"\n  fi\n}\n\n# ----\n# DB benchmark (sysbench OLTP, MySQL root)\n# Runs only in --verbose mode by default.\n# ----\n_db_ro_tps=\"\"\n_db_ro_p95_ms=\"\"\n_db_rw_tps=\"\"\n_db_rw_p95_ms=\"\"\n_db_test_status=\"SKIPPED\"\n_db_test_reason=\"\"\n\n_read_mysql_root_pass() {\n  if [ ! -e \"/root/.my.pass.txt\" ]; then\n    echo \"\"\n    return\n  fi\n  head -n 1 /root/.my.pass.txt 2>/dev/null | tr -d '\n'\n}\n\n_guess_mysql_socket() {\n  if [ -S \"/var/run/mysqld/mysqld.sock\" ]; then\n    echo \"/var/run/mysqld/mysqld.sock\"\n    return\n  fi\n  if [ -S \"/run/mysqld/mysqld.sock\" ]; then\n    echo \"/run/mysqld/mysqld.sock\"\n    return\n  fi\n  if [ -S \"/var/lib/mysql/mysql.sock\" ]; then\n    echo \"/var/lib/mysql/mysql.sock\"\n    return\n  fi\n  echo \"\"\n}\n\n_sysbench_lua() {\n  # Args: file name (e.g. 
oltp_read_only.lua)\n  if [ -f \"/usr/share/sysbench/$1\" ]; then\n    echo \"/usr/share/sysbench/$1\"\n    return\n  fi\n  # Older locations (rare)\n  if [ -f \"/usr/local/share/sysbench/$1\" ]; then\n    echo \"/usr/local/share/sysbench/$1\"\n    return\n  fi\n  echo \"\"\n}\n\n_parse_sysbench_tps() {\n  awk '/transactions:/ { for (i=1;i<=NF;i++) { if ($i==\"per\") { v=$(i-1); gsub(/[()]/,\"\",v); print v; exit } } }'\n}\n\n_parse_sysbench_p95() {\n  awk '/95th percentile:/ { print $3; exit }'\n}\n\n_run_db_sysbench() {\n  # Args: mode(ro|rw), dbname, threads, table_size, tables, time\n  local mode=\"$1\"\n  local dbname=\"$2\"\n  local threads=\"$3\"\n  local table_size=\"$4\"\n  local tables=\"$5\"\n  local time_s=\"$6\"\n\n  local pass sock lua out tps p95\n  pass=\"$(_read_mysql_root_pass)\"\n  [ -n \"${pass}\" ] || return 2\n\n  sock=\"$(_guess_mysql_socket)\"\n\n  if [ \"${mode}\" = \"ro\" ]; then\n    lua=\"$(_sysbench_lua \"oltp_read_only.lua\")\"\n  else\n    lua=\"$(_sysbench_lua \"oltp_read_write.lua\")\"\n  fi\n  [ -n \"${lua}\" ] || return 3\n\n  # Common args\n  local args=(\n    \"--db-driver=mysql\"\n    \"--mysql-user=root\"\n    \"--mysql-password=${pass}\"\n    \"--mysql-db=${dbname}\"\n    \"--tables=${tables}\"\n    \"--table-size=${table_size}\"\n    \"--threads=${threads}\"\n    \"--time=${time_s}\"\n    \"--report-interval=0\"\n  )\n\n  # Prefer socket if available\n  if [ -n \"${sock}\" ]; then\n    args+=(\"--mysql-socket=${sock}\")\n  else\n    args+=(\"--mysql-host=127.0.0.1\" \"--mysql-port=3306\")\n  fi\n\n  out=\"$(sysbench \"${lua}\" \"${args[@]}\" run 2>/dev/null)\"\n\n  tps=\"$(printf '%s\n' \"${out}\" | _parse_sysbench_tps | head -n1)\"\n  p95=\"$(printf '%s\n' \"${out}\" | _parse_sysbench_p95 | head -n1)\"\n\n  # Validate numeric\n  if ! printf '%s' \"${tps}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n    tps=\"\"\n  fi\n  if ! 
printf '%s' \"${p95}\" | grep -Eq '^[0-9]+(\\.[0-9]+)?$'; then\n    p95=\"\"\n  fi\n\n  printf '%s|%s\n' \"${tps}\" \"${p95}\"\n}\n\n_db_benchmark_sysbench() {\n  # Runs when enabled and MySQL root access is available.\n  if [ \"${_DBTEST_ENABLED}\" != \"YES\" ]; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"disabled\"\n    return\n  fi\n  if ! command -v sysbench >/dev/null 2>&1; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"sysbench not installed\"\n    return\n  fi\n  if ! command -v mysql >/dev/null 2>&1; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"mysql client not installed\"\n    return\n  fi\n\n  if ! _sysbench_has_mysql_driver; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"sysbench built without mysql driver\"\n    return\n  fi\n\n  local pass\n  pass=\"$(_read_mysql_root_pass)\"\n  if [ -z \"${pass}\" ]; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"/root/.my.pass.txt missing/empty\"\n    return\n  fi\n\n  # Ensure mysqld responds\n  local sock extra\n  sock=\"$(_guess_mysql_socket)\"\n  extra=\"\"\n  if [ -n \"${sock}\" ]; then\n    extra=\"--socket=${sock}\"\n  fi\n\n  if ! 
mysql -uroot -p\"${pass}\" ${extra} -e 'SELECT 1' >/dev/null 2>&1; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"cannot connect to MySQL as root\"\n    return\n  fi\n\n  echo \"Running MySQL OLTP (sysbench: RO+RW, ${_DB_SYSBENCH_TIME_RO}s+${_DB_SYSBENCH_TIME_RW}s)...\"\n\n  # Threads = min(vCPU*2, cap)\n  local threads\n  threads=$(( $(nproc 2>/dev/null || echo 1) * 2 ))\n  if [ \"${threads}\" -gt \"${_DB_SYSBENCH_THREADS_MAX}\" ]; then\n    threads=\"${_DB_SYSBENCH_THREADS_MAX}\"\n  fi\n  # Many DB OLTP workloads (and VPS hosts) stop scaling before vCPU*2 threads.\n  # Cap threads for grading consistency so bigger VMs are not penalized by concurrency regression.\n  if [ -n \"${_DB_SYSBENCH_THREADS_SAT:-}\" ] && [ \"${_DB_SYSBENCH_THREADS_SAT}\" -gt 0 ] && [ \"${threads}\" -gt \"${_DB_SYSBENCH_THREADS_SAT}\" ]; then\n    threads=\"${_DB_SYSBENCH_THREADS_SAT}\"\n  fi\n  if [ \"${threads}\" -lt \"${_DB_SYSBENCH_THREADS_MIN}\" ]; then\n    threads=\"${_DB_SYSBENCH_THREADS_MIN}\"\n  fi\n\n  _db_threads_used=\"${threads}\"\n\n  local table_size tables time_ro time_rw\n  tables=\"${_DB_SYSBENCH_TABLES}\"\n  time_ro=\"${_DB_SYSBENCH_TIME_RO}\"\n  time_rw=\"${_DB_SYSBENCH_TIME_RW}\"\n  if [ \"${_DBTEST_HEAVY}\" = \"YES\" ]; then\n    table_size=\"${_DB_SYSBENCH_TABLE_SIZE_HEAVY}\"\n  else\n    table_size=\"${_DB_SYSBENCH_TABLE_SIZE}\"\n  fi\n\n  # Unique db name (<=64 chars)\n  local dbname\n  dbname=\"perftest_sb_${_hostname}_$$\"\n  dbname=\"$(printf '%s' \"${dbname}\" | tr -c 'A-Za-z0-9_' '_' | cut -c1-63)\"\n\n  # Create db\n  if ! 
mysql -uroot -p\"${pass}\" ${extra} -e \"CREATE DATABASE IF NOT EXISTS ${dbname}\" >/dev/null 2>&1; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"failed to create db\"\n    return\n  fi\n\n  # Prepare tables once (read-only/read-write share dataset)\n  local lua_prepare\n  lua_prepare=\"$(_sysbench_lua \"oltp_read_write.lua\")\"\n  if [ -z \"${lua_prepare}\" ]; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"sysbench lua not found\"\n    return\n  fi\n\n  local sb_args=(\n    \"--db-driver=mysql\"\n    \"--mysql-user=root\"\n    \"--mysql-password=${pass}\"\n    \"--mysql-db=${dbname}\"\n    \"--tables=${tables}\"\n    \"--table-size=${table_size}\"\n    \"--threads=${threads}\"\n    \"--report-interval=0\"\n  )\n  if [ -n \"${sock}\" ]; then\n    sb_args+=(\"--mysql-socket=${sock}\")\n  else\n    sb_args+=(\"--mysql-host=127.0.0.1\" \"--mysql-port=3306\")\n  fi\n\n  if ! sysbench \"${lua_prepare}\" \"${sb_args[@]}\" prepare >/dev/null 2>&1; then\n    _db_test_status=\"SKIPPED\"\n    _db_test_reason=\"sysbench prepare failed\"\n    [ \"${_DBTEST_KEEP_DB}\" = \"YES\" ] && return\n    mysql -uroot -p\"${pass}\" ${extra} -e \"DROP DATABASE IF EXISTS ${dbname}\" >/dev/null 2>&1 || true\n    return\n  fi\n\n  # Run RO and RW, capture tps + p95 latency\n  local ro rw\n  ro=\"$(_run_db_sysbench ro \"${dbname}\" \"${threads}\" \"${table_size}\" \"${tables}\" \"${time_ro}\")\"\n  rw=\"$(_run_db_sysbench rw \"${dbname}\" \"${threads}\" \"${table_size}\" \"${tables}\" \"${time_rw}\")\"\n\n  _db_ro_tps=\"$(printf '%s' \"${ro}\" | cut -d'|' -f1)\"\n  _db_ro_p95_ms=\"$(printf '%s' \"${ro}\" | cut -d'|' -f2)\"\n  _db_rw_tps=\"$(printf '%s' \"${rw}\" | cut -d'|' -f1)\"\n  _db_rw_p95_ms=\"$(printf '%s' \"${rw}\" | cut -d'|' -f2)\"\n\n  if [ -n \"${_db_ro_tps}\" ] && [ -n \"${_db_rw_tps}\" ]; then\n    _db_test_status=\"OK\"\n    _db_test_reason=\"\"\n  else\n    _db_test_status=\"WARN\"\n    _db_test_reason=\"could not parse results\"\n  fi\n\n  # 
Cleanup tables\n  sysbench \"${lua_prepare}\" \"${sb_args[@]}\" cleanup >/dev/null 2>&1 || true\n\n  if [ \"${_DBTEST_KEEP_DB}\" != \"YES\" ]; then\n    mysql -uroot -p\"${pass}\" ${extra} -e \"DROP DATABASE IF EXISTS ${dbname}\" >/dev/null 2>&1 || true\n  fi\n}\n# ----\n# Main\n# ----\n\n# Parse arguments (explicit test selection required)\nif [ \"$#\" -eq 0 ]; then\n  _usage\n  exit 1\nfi\n\nwhile [ \"$#\" -gt 0 ]; do\n  case \"$1\" in\n    --fancy) _FANCY=1; _VERBOSE=0; _COMPACT=0 ;;\n    --compact) _FANCY=0; _VERBOSE=0; _COMPACT=1 ;;\n    --verbose) _FANCY=0; _VERBOSE=1; _COMPACT=0 ;;\n    --no-dbtest) _DBTEST_ENABLED=NO ;;\n    --dbtest-heavy) _DBTEST_HEAVY=YES ;;\n    --keep-db) _DBTEST_KEEP_DB=YES ;;\n    --list-tests) _list_tests; exit 0 ;;\n    --all)\n      _DO_SYSINFO=YES\n      _DO_CPU=YES\n      _DO_MEM=YES\n      _DO_AES=YES\n      _DO_DISK=YES\n      _DO_NET=YES\n      _DO_PHP=YES\n      _DO_DB=YES\n      _DO_FS=YES\n      ;;\n    --only)\n      shift\n      [ -n \"${1:-}\" ] || { echo \"ERROR: --only requires a test name\"; _usage; exit 1; }\n      _DO_SYSINFO=NO\n      _DO_CPU=NO\n      _DO_MEM=NO\n      _DO_AES=NO\n      _DO_DISK=NO\n      _DO_NET=NO\n      _DO_PHP=NO\n      _DO_DB=NO\n      _DO_FS=NO\n      case \"$1\" in\n        sysinfo) _DO_SYSINFO=YES ;;\n        cpu) _DO_CPU=YES ;;\n        mem) _DO_MEM=YES ;;\n        aes) _DO_AES=YES ;;\n        disk) _DO_DISK=YES ;;\n        net) _DO_NET=YES ;;\n        php) _DO_PHP=YES ;;\n        db) _DO_DB=YES ;;\n        fs) _DO_FS=YES ;;\n        *) echo \"ERROR: Unknown test for --only: $1\"; _list_tests; exit 1 ;;\n      esac\n      ;;\n    --sysinfo) _DO_SYSINFO=YES ;;\n    --cpu) _DO_CPU=YES ;;\n    --mem) _DO_MEM=YES ;;\n    --aes) _DO_AES=YES ;;\n    --disk) _DO_DISK=YES ;;\n    --net) _DO_NET=YES ;;\n    --iperf) _NET_IPERF_ENABLED=YES ;;\n    --php) _DO_PHP=YES ;;\n    --db) _DO_DB=YES ;;\n    --fs) _DO_FS=YES ;;\n    -h|--help) _usage; exit 0 ;;\n    *)\n      echo \"Unknown option: $1\"\n   
   _usage\n      exit 1\n      ;;\n  esac\n  shift\ndone\n\n# Require explicit test selection\nif [ \"${_DO_SYSINFO}\" != \"YES\" ] \\\n  && [ \"${_DO_CPU}\" != \"YES\" ] \\\n  && [ \"${_DO_MEM}\" != \"YES\" ] \\\n  && [ \"${_DO_AES}\" != \"YES\" ] \\\n  && [ \"${_DO_DISK}\" != \"YES\" ] \\\n  && [ \"${_DO_NET}\" != \"YES\" ] \\\n  && [ \"${_DO_PHP}\" != \"YES\" ] \\\n  && [ \"${_DO_DB}\" != \"YES\" ] \\\n  && [ \"${_DO_FS}\" != \"YES\" ]; then\n  echo \"ERROR: You must select tests explicitly with --all or one/more test flags.\"\n  echo \"\"\n  _usage\n  exit 1\nfi\n\n# DB test runs only when requested (unless explicitly forced off)\nif [ \"${_DO_DB}\" != \"YES\" ]; then\n  _DBTEST_ENABLED=NO\nfi\n\n_check_root\n\necho \"\"\necho \"Starting Perftest ${_PT_VERSION}, please wait..\"\necho \"System Time is [$(date)]\"\necho \"Disabling cron temporarily to avoid auto-healing intervention..\"\n_cron_stop_quiet\n\n# Ensure outbound TCP allows iperf3 public ports (Leaseweb uses 5201-5210).\n# Only touch CSF when: iperf3 test is enabled, CSF is present and enabled,\n# and the ports are not already open. A csf -r inside a VM on routed-IP\n# setups can cause a brief ARP blackout that poisons the\n# upstream router's ARP cache for the VM's IP.\nif [ \"${_NET_IPERF_ENABLED}\" = \"YES\" ] \\\n  && [ -f /etc/csf/csf.conf ] \\\n  && csf -v &>/dev/null \\\n  && ! grep -q \"^TESTING\\s*=\\s*\\\"1\\\"\" /etc/csf/csf.conf; then\n  if ! 
grep -q \"5201:5210\" /etc/csf/csf.conf; then\n    sed -i -E 's/^(TCP_OUT[[:space:]]*=[[:space:]]*\"[^\"]*)\"/\\1,5201:5210\"/' /etc/csf/csf.conf\n    csf -r >/dev/null 2>&1\n  fi\nfi\n\n# Give BOA auto-healing a moment to calm down after cron changes,\n# so it doesn't interfere with perftest under load.\nsleep 15\n\n# Required packages (install only what is needed for selected tests)\n_DO_NEED_PKGS=\"NO\"\nif [ \"${_DO_NET}\" = \"YES\" ]; then _DO_NEED_PKGS=\"YES\"; fi\nif [ \"${_DO_AES}\" = \"YES\" ]; then _DO_NEED_PKGS=\"YES\"; fi\nif [ \"${_DO_DISK}\" = \"YES\" ]; then _DO_NEED_PKGS=\"YES\"; fi\nif [ \"${_DO_CPU}\" = \"YES\" ] || [ \"${_DO_DB}\" = \"YES\" ] || [ \"${_DO_MEM}\" = \"YES\" ]; then _DO_NEED_PKGS=\"YES\"; fi\nif [ \"${_DO_PHP}\" = \"YES\" ]; then _DO_NEED_PKGS=\"YES\"; fi\nif [ \"${_DO_SYSINFO}\" = \"YES\" ]; then _DO_NEED_PKGS=\"YES\"; fi\n\n# Update package list silently only if we may need to install anything\nif [ \"${_DO_NEED_PKGS}\" = \"YES\" ]; then\n  apt-get update -qq >/dev/null 2>&1 || true\nfi\n\n# Minimal deps per test group\nif [ \"${_DO_NET}\" = \"YES\" ]; then\n  _check_and_install_package \"curl\"\n  if [ \"${_NET_IPERF_ENABLED}\" = \"YES\" ]; then\n    _check_and_install_package \"iperf3\"\n    # Ensure iperf3 is not running as a local daemon/socket after install.\n    # A local iperf3 listener on 5201 would intercept outbound client\n    # connections to remote iperf3 servers, producing invalid results.\n    # Use service (sysvinit/Devuan) with systemctl as fallback (systemd).\n    if command -v service >/dev/null 2>&1 && [ -x \"/etc/init.d/iperf3\" ]; then\n      service iperf3 stop >/dev/null 2>&1 || true\n    elif command -v systemctl >/dev/null 2>&1; then\n      systemctl stop iperf3.service iperf3.socket >/dev/null 2>&1 || true\n      systemctl disable iperf3.service iperf3.socket >/dev/null 2>&1 || true\n    fi\n    # Kill any remaining iperf3 server process regardless of init system\n    pkill -x iperf3 >/dev/null 2>&1 || 
true\n  fi\nfi\n\nif [ \"${_DO_AES}\" = \"YES\" ]; then\n  _check_and_install_package \"openssl\"\nfi\n\nif [ \"${_DO_DISK}\" = \"YES\" ]; then\n  _check_and_install_package \"fio\"\n  _check_and_install_package \"jq\"\nfi\n\n# bc is used by grading/math across several tests\nif [ \"${_DO_CPU}\" = \"YES\" ] || [ \"${_DO_DB}\" = \"YES\" ] || [ \"${_DO_MEM}\" = \"YES\" ] || [ \"${_DO_DISK}\" = \"YES\" ] || [ \"${_DO_NET}\" = \"YES\" ] || [ \"${_DO_AES}\" = \"YES\" ] || [ \"${_DO_PHP}\" = \"YES\" ]; then\n  _check_and_install_package \"bc\"\nfi\n\nif [ \"${_DO_CPU}\" = \"YES\" ] || [ \"${_DO_DB}\" = \"YES\" ]; then\n  _check_and_install_package \"sysstat\" # mpstat\n  _ensure_sysbench\nfi\n\n# membench is compiled on demand; ensure toolchain only if memory test is requested\nif [ \"${_DO_MEM}\" = \"YES\" ]; then\n  _check_and_install_package \"gcc\" >/dev/null 2>&1 || true\n  _check_and_install_package \"make\" >/dev/null 2>&1 || true\nfi\n\n# PCI inventory is optional; install only when sysinfo is requested\nif [ \"${_DO_SYSINFO}\" = \"YES\" ]; then\n  _check_and_install_package \"pciutils\" >/dev/null 2>&1 || true\nfi\n\n\n# Basic system info\n_hostname=\"$(hostname)\"\n_cpu_info=\"$(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2- | sed 's/^ *//')\"\n_cpu_cores=\"$(nproc 2>/dev/null || echo N/A)\"\nif [ \"${_DO_SYSINFO}\" = \"YES\" ]; then\n  _pci_init\nfi\n\n# DB bench outputs (sysbench OLTP)\n_db_test_status=\"SKIPPED\"\n_db_test_reason=\"\"\n_db_ro_tps=\"\"\n_db_ro_p95_ms=\"\"\n_db_rw_tps=\"\"\n_db_rw_p95_ms=\"\"\n\nif [ \"${_VERBOSE}\" -eq 1 ] && [ \"${_DO_SYSINFO}\" = \"YES\" ]; then\n  echo \"Host: ${_hostname}\"\n  echo \"CPU: ${_cpu_info}\"\n  echo \"vCPUs: ${_cpu_cores}\"\n  echo \"\"\n  echo \"Block devices (lsblk):\"\n  lsblk -o NAME,TYPE,SIZE,ROTA | sed 's/^/  /'\n  echo \"\"\nfi\n\nif command -v php >/dev/null 2>&1 && [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"PHP version:\"\n  php -v | sed 's/^/  /'\n  echo \"\"\nfi\n\n# CPU Performance Tests\nif [ 
\"${_DO_CPU}\" = \"YES\" ]; then\n  echo \"Running CPU performance tests with sysbench...\"\n  _cpu_steal_mt=\"$(_measure_cpu_steal 1)\" # initial check\n\n  # Multi-threaded: nproc\n  if command -v mpstat >/dev/null 2>&1; then\n    mpstat 1 \"${_SYSBENCH_CPU_TIME}\" >/tmp/perftest_mpstat_mt.txt 2>/dev/null &\n    _mpstat_pid=\"$!\"\n  fi\n\n  _cpu_mt_threads_raw=\"$(nproc 2>/dev/null || echo 1)\"\n  _cpu_mt_threads_used=\"${_cpu_mt_threads_raw}\"\n  if [ -n \"${_SYSBENCH_CPU_THREADS_SAT:-}\" ] && [ \"${_SYSBENCH_CPU_THREADS_SAT}\" -gt 0 ] && [ \"${_cpu_mt_threads_used}\" -gt \"${_SYSBENCH_CPU_THREADS_SAT}\" ]; then\n    _cpu_mt_threads_used=\"${_SYSBENCH_CPU_THREADS_SAT}\"\n  fi\n  if [ \"${_cpu_mt_threads_used}\" -lt 1 ]; then\n    _cpu_mt_threads_used=1\n  fi\n\n  _run_sysbench_cpu \"${_cpu_mt_threads_used}\" \"MT_NPROC\"\n\n  if [ -n \"${_mpstat_pid:-}\" ]; then\n    wait \"${_mpstat_pid}\" >/dev/null 2>&1\n    _cpu_steal_mt=\"$(_parse_mpstat_steal_avg /tmp/perftest_mpstat_mt.txt)\"\n    rm -f /tmp/perftest_mpstat_mt.txt >/dev/null 2>&1\n  fi\n\n  # Single-threaded\n  echo \"Running Single-threaded CPU performance tests...\"\n  _run_sysbench_cpu 1 \"ST\"\n\n  # 4-thread\n  echo \"Running Multi-threaded CPU tests...\"\n  _run_sysbench_cpu 4 \"MT4\"\nelse\n  if [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"Skipping CPU test (--cpu not selected)\"\n  fi\n  _cpu_events_MT_NPROC=\"SKIPPED\"\n  _cpu_latency_MT_NPROC=\"SKIPPED\"\n  _cpu_events_ST=\"SKIPPED\"\n  _cpu_latency_ST=\"SKIPPED\"\n  _cpu_events_MT4=\"SKIPPED\"\n  _cpu_latency_MT4=\"SKIPPED\"\nfi\n\n# Memory Performance Test (DRAM-focused)\nif [ \"${_DO_MEM}\" = \"YES\" ]; then\n  echo \"Running Memory performance test (DRAM-focused membench)...\"\n  _run_mem_benchmarks\nelse\n  if [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"Skipping memory test (--mem not selected)\"\n  fi\n  _mem_transfer_speed=\"SKIPPED\"\n  _mem_latency=\"SKIPPED\"\nfi\n\n# AES throughput (nench-inspired)\nif [ \"${_DO_AES}\" = \"YES\" ]; then\n 
 echo \"Running AES throughput test (openssl enc, ${_AES_TEST_MIB} MiB)...\"\n  _run_aes_throughput_test\nelse\n  if [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"Skipping AES test (--aes not selected)\"\n  fi\n  _aes_seconds=\"SKIPPED\"\n  _aes_mibps=\"SKIPPED\"\nfi\n\n# PHP Performance Tests (single + concurrent)\nif [ \"${_DO_PHP}\" = \"YES\" ]; then\n  echo \"Running PHP performance tests...\"\n  _run_php_benchmarks\nelse\n  if [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"Skipping PHP test (--php not selected)\"\n  fi\n  _php_time_runs=\"SKIPPED\"\n  _php_time_median=\"SKIPPED\"\n  _php_concurrency_wall=\"SKIPPED\"\n  _php_concurrency_throughput=\"SKIPPED\"\nfi\n\n# Database OLTP experience proxy (sysbench + MySQL root)\nif [ \"${_DO_DB}\" = \"YES\" ] && [ \"${_DBTEST_ENABLED}\" = \"YES\" ]; then\n  echo \"Running Database performance tests...\"\n  _db_benchmark_sysbench\nelse\n  if [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"Skipping DB test (--db not selected or disabled)\"\n  fi\n  _db_test_status=\"SKIPPED\"\n  _db_test_reason=\"\"\nfi\n\n# Disk I/O Performance Tests\nif [ \"${_DO_DISK}\" = \"YES\" ]; then\n  _log \"Detecting mounted filesystems to test (root + real mounted partitions; excluding /boot*)...\"\n  mapfile -t _test_mounts < <(_get_test_mounts)\n\n  # ioping latency (nench-inspired)\n  echo \"Running ioping quick latency checks (${_IOPING_SECONDS}s)...\"\n  declare -A _ioping_seek_line\n  declare -A _ioping_seqread_line\n  _run_ioping_quick_mounts\nelse\n  if [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"Skipping disk tests (--disk not selected)\"\n  fi\n  _test_mounts=()\n  _ioping_seek=\"SKIPPED\"\n  _ioping_seqread=\"SKIPPED\"\nfi\n\n# IPv4 network sanity check (CDN with fallback)\nif [ \"${_DO_NET}\" = \"YES\" ]; then\n  echo \"Running IPv4 network check (4 tests, ${_NET_TIMEOUT}s timeout)...\"\n  _run_ipv4_net_check\nelse\n  if [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"Skipping network tests (--net not selected)\"\n  fi\n  _net_ipv4=\"SKIPPED\"\n  
_net_download=\"SKIPPED\"\n  _net_http_ok=\"0\"\n  _net_http_total=\"0\"\n  _net_iperf_dl=\"SKIPPED\"\n  _net_iperf_ul=\"SKIPPED\"\nfi\n\ndeclare -A _fio_rand_iops_read\ndeclare -A _fio_rand_iops_write\ndeclare -A _fio_rand_bw_read\ndeclare -A _fio_rand_bw_write\ndeclare -A _fio_rand_latency_read\ndeclare -A _fio_rand_latency_write\ndeclare -A _fio_seqread_bw_read\ndeclare -A _fio_seqread_latency_read\ndeclare -A _fio_seqwrite_bw_write\ndeclare -A _fio_seqwrite_latency_write\ndeclare -A _fio_qd1_p95_read\ndeclare -A _fio_qd1_p95_write\ndeclare -A _fio_qd1_p99_read\ndeclare -A _fio_qd1_p99_write\ndeclare -A _fio_qd1_fsync_p95_write\ndeclare -A _fio_qd1_fsync_p99_write\ndeclare -A _meta_ops\n\n# For CSV summary (root mount)\n_root_rand_bw_read=\"\"\n_root_rand_bw_write=\"\"\n_root_rand_lat_read=\"\"\n_root_rand_lat_write=\"\"\n_root_seqread_bw_read=\"\"\n_root_seqread_lat_read=\"\"\n_root_seqwrite_bw_write=\"\"\n_root_seqwrite_lat_write=\"\"\n_root_meta_ops=\"\"\n_root_qd1_p95_read=\"\"\n_root_qd1_p95_write=\"\"\n_root_qd1_p99_read=\"\"\n_root_qd1_p99_write=\"\"\n_root_qd1_fsync_p95_write=\"\"\n_root_qd1_fsync_p99_write=\"\"\n\nfor _mount_point in \"${_test_mounts[@]}\"; do\n  [ -z \"${_mount_point}\" ] && continue\n  echo \"Running Disk I/O tests on ${_mount_point}...\"\n  _safe_mount_point=\"$(echo \"${_mount_point}\" | sed 's#/#_#g')\"\n\n  # Random 4k read/write\n  _rand_output=\"/tmp/fio_rand_${_safe_mount_point}_$$.json\"\n  if _run_fio_rand_test \"${_mount_point}\" \"${_rand_output}\"; then\n    while IFS=\"=\" read -r _k _v; do\n      case \"${_k}\" in\n        READ_IOPS) _fio_rand_iops_read[\"${_mount_point}\"]=\"${_v}\" ;;\n        WRITE_IOPS) _fio_rand_iops_write[\"${_mount_point}\"]=\"${_v}\" ;;\n        READ_BW_MB) _fio_rand_bw_read[\"${_mount_point}\"]=\"${_v} MB/s\" ;;\n        WRITE_BW_MB) _fio_rand_bw_write[\"${_mount_point}\"]=\"${_v} MB/s\" ;;\n        READ_LAT_MS) _fio_rand_latency_read[\"${_mount_point}\"]=\"${_v} ms\" ;;\n        WRITE_LAT_MS) 
_fio_rand_latency_write[\"${_mount_point}\"]=\"${_v} ms\" ;;\n      esac\n    done < <(_parse_fio_json \"${_rand_output}\")\n  else\n    _fio_rand_iops_read[\"${_mount_point}\"]=\"ERROR\"\n    _fio_rand_iops_write[\"${_mount_point}\"]=\"ERROR\"\n    _fio_rand_bw_read[\"${_mount_point}\"]=\"ERROR\"\n    _fio_rand_bw_write[\"${_mount_point}\"]=\"ERROR\"\n    _fio_rand_latency_read[\"${_mount_point}\"]=\"ERROR\"\n    _fio_rand_latency_write[\"${_mount_point}\"]=\"ERROR\"\n  fi\n  rm -f \"${_rand_output}\" >/dev/null 2>&1\n\n  # Sequential read\n  _seqread_output=\"/tmp/fio_seqread_${_safe_mount_point}_$$.json\"\n  if _run_fio_seq_read_test \"${_mount_point}\" \"${_seqread_output}\"; then\n    while IFS=\"=\" read -r _k _v; do\n      case \"${_k}\" in\n        READ_BW_MB) _fio_seqread_bw_read[\"${_mount_point}\"]=\"${_v} MB/s\" ;;\n        READ_LAT_MS) _fio_seqread_latency_read[\"${_mount_point}\"]=\"${_v} ms\" ;;\n      esac\n    done < <(_parse_fio_json \"${_seqread_output}\")\n  else\n    _fio_seqread_bw_read[\"${_mount_point}\"]=\"ERROR\"\n    _fio_seqread_latency_read[\"${_mount_point}\"]=\"ERROR\"\n  fi\n  rm -f \"${_seqread_output}\" >/dev/null 2>&1\n\n  # Sequential write\n  _seqwrite_output=\"/tmp/fio_seqwrite_${_safe_mount_point}_$$.json\"\n  if _run_fio_seq_write_test \"${_mount_point}\" \"${_seqwrite_output}\"; then\n    while IFS=\"=\" read -r _k _v; do\n      case \"${_k}\" in\n        WRITE_BW_MB) _fio_seqwrite_bw_write[\"${_mount_point}\"]=\"${_v} MB/s\" ;;\n        WRITE_LAT_MS) _fio_seqwrite_latency_write[\"${_mount_point}\"]=\"${_v} ms\" ;;\n      esac\n    done < <(_parse_fio_json \"${_seqwrite_output}\")\n  else\n    _fio_seqwrite_bw_write[\"${_mount_point}\"]=\"ERROR\"\n    _fio_seqwrite_latency_write[\"${_mount_point}\"]=\"ERROR\"\n  fi\n  rm -f \"${_seqwrite_output}\" >/dev/null 2>&1\n\n  # Random 4k randrw QD=1 (latency percentiles for tiering) - repeated for stability\n  echo \"Running QD=1 random latency test (x${_FIO_QD1_REPEATS}) on 
${_mount_point}...\"\n  _qd1_rep=\"\"\n  _qd1_p95_r_list=() _qd1_p95_w_list=() _qd1_p99_r_list=() _qd1_p99_w_list=()\n  _qd1_ok=\"YES\"\n  for _qd1_rep in $(seq 1 \"${_FIO_QD1_REPEATS}\"); do\n    _qd1_output=\"/tmp/fio_qd1_${_safe_mount_point}_${_qd1_rep}_$$.json\"\n    if _run_fio_qd1_test \"${_mount_point}\" \"${_qd1_output}\"; then\n      _tmp_r95=\"\" _tmp_w95=\"\" _tmp_r99=\"\" _tmp_w99=\"\"\n      while IFS=\"=\" read -r _k _v; do\n        case \"${_k}\" in\n          READ_P95_MS) _tmp_r95=\"${_v}\" ;;\n          WRITE_P95_MS) _tmp_w95=\"${_v}\" ;;\n          READ_P99_MS) _tmp_r99=\"${_v}\" ;;\n          WRITE_P99_MS) _tmp_w99=\"${_v}\" ;;\n        esac\n      done < <(_parse_fio_json \"${_qd1_output}\")\n      [ -n \"${_tmp_r95}\" ] && _qd1_p95_r_list+=(\"${_tmp_r95}\")\n      [ -n \"${_tmp_w95}\" ] && _qd1_p95_w_list+=(\"${_tmp_w95}\")\n      [ -n \"${_tmp_r99}\" ] && _qd1_p99_r_list+=(\"${_tmp_r99}\")\n      [ -n \"${_tmp_w99}\" ] && _qd1_p99_w_list+=(\"${_tmp_w99}\")\n    else\n      _qd1_ok=\"NO\"\n    fi\n    rm -f \"${_qd1_output}\" >/dev/null 2>&1\n  done\n\n  if [ \"${_qd1_ok}\" = \"YES\" ] && [ \"${#_qd1_p99_r_list[@]}\" -gt 0 ] && [ \"${#_qd1_p99_w_list[@]}\" -gt 0 ]; then\n    _fio_qd1_p95_read[\"${_mount_point}\"]=\"$(_median_num \"${_qd1_p95_r_list[@]}\") ms\"\n    _fio_qd1_p95_write[\"${_mount_point}\"]=\"$(_median_num \"${_qd1_p95_w_list[@]}\") ms\"\n    _fio_qd1_p99_read[\"${_mount_point}\"]=\"$(_median_num \"${_qd1_p99_r_list[@]}\") ms\"\n    _fio_qd1_p99_write[\"${_mount_point}\"]=\"$(_median_num \"${_qd1_p99_w_list[@]}\") ms\"\n  else\n    _fio_qd1_p95_read[\"${_mount_point}\"]=\"ERROR\"\n    _fio_qd1_p95_write[\"${_mount_point}\"]=\"ERROR\"\n    _fio_qd1_p99_read[\"${_mount_point}\"]=\"ERROR\"\n    _fio_qd1_p99_write[\"${_mount_point}\"]=\"ERROR\"\n  fi\n\n  # Random 4k randwrite QD=1 with fsync (durability realism) - repeated for stability\n  echo \"Running QD=1 fsync write test (x${_FIO_QD1_FSYNC_REPEATS}) on ${_mount_point}...\"\n  
_fs_rep=\"\"\n  _fs_p95_list=() _fs_p99_list=()\n  _fs_ok=\"YES\"\n  for _fs_rep in $(seq 1 \"${_FIO_QD1_FSYNC_REPEATS}\"); do\n    _qd1_fsync_output=\"/tmp/fio_qd1_fsync_${_safe_mount_point}_${_fs_rep}_$$.json\"\n    if _run_fio_qd1_fsync_test \"${_mount_point}\" \"${_qd1_fsync_output}\"; then\n      _tmp_fs95=\"\" _tmp_fs99=\"\"\n      while IFS=\"=\" read -r _k _v; do\n        case \"${_k}\" in\n          SYNC_P95_MS) _tmp_fs95=\"${_v}\" ;;\n          SYNC_P99_MS) _tmp_fs99=\"${_v}\" ;;\n          WRITE_P95_MS) [ -z \"${_tmp_fs95}\" ] && _tmp_fs95=\"${_v}\" ;;\n          WRITE_P99_MS) [ -z \"${_tmp_fs99}\" ] && _tmp_fs99=\"${_v}\" ;;\n        esac\n      done < <(_parse_fio_json \"${_qd1_fsync_output}\")\n      [ -n \"${_tmp_fs95}\" ] && _fs_p95_list+=(\"${_tmp_fs95}\")\n      [ -n \"${_tmp_fs99}\" ] && _fs_p99_list+=(\"${_tmp_fs99}\")\n    else\n      _fs_ok=\"NO\"\n    fi\n    rm -f \"${_qd1_fsync_output}\" >/dev/null 2>&1\n  done\n\n  if [ \"${_fs_ok}\" = \"YES\" ] && [ \"${#_fs_p99_list[@]}\" -gt 0 ]; then\n    _fio_qd1_fsync_p95_write[\"${_mount_point}\"]=\"$(_median_num \"${_fs_p95_list[@]}\") ms\"\n    _fio_qd1_fsync_p99_write[\"${_mount_point}\"]=\"$(_median_num \"${_fs_p99_list[@]}\") ms\"\n  else\n    _fio_qd1_fsync_p95_write[\"${_mount_point}\"]=\"ERROR\"\n    _fio_qd1_fsync_p99_write[\"${_mount_point}\"]=\"ERROR\"\n  fi\n\n  # Metadata test\n  _run_metadata_test \"${_mount_point}\" \"_meta_ops[\\\"${_mount_point}\\\"]\"\n\n  # Capture root mount metrics for CSV\n  if [ \"${_mount_point}\" = \"/\" ]; then\n    _root_rand_bw_read=\"${_fio_rand_bw_read[\"${_mount_point}\"]}\"\n    _root_rand_bw_write=\"${_fio_rand_bw_write[\"${_mount_point}\"]}\"\n    _root_rand_lat_read=\"${_fio_rand_latency_read[\"${_mount_point}\"]}\"\n    _root_rand_lat_write=\"${_fio_rand_latency_write[\"${_mount_point}\"]}\"\n    _root_seqread_bw_read=\"${_fio_seqread_bw_read[\"${_mount_point}\"]}\"\n    
_root_seqread_lat_read=\"${_fio_seqread_latency_read[\"${_mount_point}\"]}\"\n    _root_seqwrite_bw_write=\"${_fio_seqwrite_bw_write[\"${_mount_point}\"]}\"\n    _root_seqwrite_lat_write=\"${_fio_seqwrite_latency_write[\"${_mount_point}\"]}\"\n    _root_meta_ops=\"${_meta_ops[\"${_mount_point}\"]}\"\n    _root_qd1_p95_read=\"${_fio_qd1_p95_read[\"${_mount_point}\"]}\"\n    _root_qd1_p95_write=\"${_fio_qd1_p95_write[\"${_mount_point}\"]}\"\n    _root_qd1_p99_read=\"${_fio_qd1_p99_read[\"${_mount_point}\"]}\"\n    _root_qd1_p99_write=\"${_fio_qd1_p99_write[\"${_mount_point}\"]}\"\n    _root_qd1_fsync_p95_write=\"${_fio_qd1_fsync_p95_write[\"${_mount_point}\"]}\"\n    _root_qd1_fsync_p99_write=\"${_fio_qd1_fsync_p99_write[\"${_mount_point}\"]}\"\n  fi\ndone\n\n# ----\n# Display Results\n# ----\necho \"\"\n\nif [ \"${_FANCY}\" -eq 1 ] || [ \"${_COMPACT}\" -eq 1 ]; then\n  _print_grading_summary\nelse\n  echo \"\"\n  echo \"==== Perftest ${_PT_VERSION} Summary ====\"\n  echo \"\"\n  echo \"Host and system details\"\n  echo \"\"\n  echo \" Date: [$(date)]\"\n  echo \" City: $(_print_host_info)\"\n  echo \" Info: $(boa version)\"\n  echo \"\"\n  if [ -x \"/opt/local/bin/screenfetch\" ]; then\n    bash /opt/local/bin/screenfetch -n\n    if [ \"${_HAS_LSPCI:-NO}\" = \"YES\" ]; then\n      [ -n \"${_PCI_STORAGE_SUMMARY}\" ] && echo \" PCI storage: ${_PCI_STORAGE_SUMMARY}\"\n      [ -n \"${_PCI_NIC_SUMMARY}\" ] && echo \" PCI NIC: ${_PCI_NIC_SUMMARY}\"\n    fi\n  else\n    echo \"  - Hostname: ${_hostname}\"\n    echo \"  - CPU: ${_cpu_info}\"\n    echo \"  - vCPUs: ${_cpu_cores}\"\n    _total_ram_mb=\"$(free -m | awk '/^Mem:/ {print $2}')\"\n    echo \"  - RAM: ${_total_ram_mb} MB\"\n    if [ \"${_HAS_LSPCI:-NO}\" = \"YES\" ]; then\n      [ -n \"${_PCI_STORAGE_SUMMARY}\" ] && echo \"  - PCI storage: ${_PCI_STORAGE_SUMMARY}\"\n      [ -n \"${_PCI_NIC_SUMMARY}\" ] && echo \"  - PCI NIC: ${_PCI_NIC_SUMMARY}\"\n    fi\n  fi\n  echo \"\"\n  echo \"CPU Performance (Multi-threaded, 
nproc=${_cpu_mt_threads_raw:-$(nproc)}, used=${_cpu_mt_threads_used:-$(nproc)}):\"\n  echo \"  - Events per Second: ${_cpu_events_MT_NPROC}\"\n  echo \"  - Average Latency: ${_cpu_latency_MT_NPROC} ms\"\n  if [ -n \"${_cpu_steal_mt}\" ]; then\n    echo \"  - Avg CPU Steal: ${_cpu_steal_mt} %\"\n  fi\n  echo \"\"\n  echo \"CPU Performance (4 threads):\"\n  echo \"  - Events per Second: ${_cpu_events_MT4}\"\n  echo \"  - Average Latency: ${_cpu_latency_MT4} ms\"\n  echo \"\"\n  echo \"CPU Performance (Single-threaded):\"\n  echo \"  - Events per Second: ${_cpu_events_ST}\"\n  echo \"  - Average Latency: ${_cpu_latency_ST} ms\"\n  echo \"\"\n  echo \"Memory Performance:\"\n  echo \"  - Transfer Speed: ${_mem_transfer_speed}\"\n  if [ -n \"${_mem_latency}\" ]; then\n    echo \"  - Pointer-chase latency: ${_mem_latency} ${_mem_latency_unit}\"\n  else\n    echo \"  - Pointer-chase latency: N/A\"\n  fi\n  if [ -n \"${_mem_bench_size_mib:-}\" ]; then\n    echo \"  - Working set: ${_mem_bench_size_mib} MiB per buffer x2 (~${_mem_bench_total_mib} MiB total), Threads: ${_mem_bench_threads} (median of ${_MEM_BENCH_REPEATS})\"\n    if [ -n \"${_mem_bw_min}\" ] && [ -n \"${_mem_bw_max}\" ]; then\n      echo \"  - Variance: ${_mem_bw_min}-${_mem_bw_max} MiB/s (bw), ${_mem_lat_min}-${_mem_lat_max} ns (lat)\"\n    fi\n  fi\n  echo \"\"\n  echo \"AES Performance (openssl enc):\"\n  echo \"  - Time: ${_aes_seconds} s\"\n  echo \"  - Throughput: ${_aes_mibps}\"\n  echo \"\"\n  echo \"Disk Latency (ioping):\"\n  echo \"  - Seek rate: ${_ioping_seek}\"\n  echo \"  - Sequential read: ${_ioping_seqread}\"\n  echo \"\"\n  echo \"Network (IPv4 HTTP Range pulls):\"\n  echo \"  - Public IPv4: ${_net_ipv4}\"\n  echo \"  - HTTP download (median ${_net_http_ok}/${_net_http_total}): ${_net_download}\"\n  if [ \"${_NET_IPERF_ENABLED}\" = \"YES\" ]; then\n    echo \"  - iPerf3 TCP median: DL ${_net_iperf_dl} | UL ${_net_iperf_ul}\"\n  fi\n  if [ \"${#_net_dl_name[@]}\" -gt 0 ]; then\n    echo \"  - 
Tests:\"\n    _i=0\n    while [ \"${_i}\" -lt \"${#_net_dl_name[@]}\" ]; do\n      echo \"    - ${_net_dl_name[${_i}]}: ${_net_dl_pretty[${_i}]}\"\n      _i=$((_i + 1))\n    done\n  fi\n  echo \"\"\n  echo \"PHP Performance (CLI Micro-benchmark):\"\n  if command -v php >/dev/null 2>&1; then\n    echo \"  - Runs (seconds): ${_php_time_runs}\"\n    echo \"  - Median Time: ${_php_time_median} seconds\"\n    echo \"  - Concurrent Procs: $( [ \"$(nproc)\" -gt \"${_PHP_BENCH_CONCURRENCY_MAX}\" ] && echo \"${_PHP_BENCH_CONCURRENCY_MAX}\" || nproc )\"\n    echo \"  - Wall Time (conc.): ${_php_concurrency_wall} seconds\"\n    echo \"  - Throughput (conc.): ${_php_concurrency_throughput}\"\n  else\n    echo \"  - PHP not installed\"\n  fi\n\n  echo \"\"\n  echo \"Database OLTP (sysbench, MySQL root):\"\n  if [ \"${_db_test_status}\" = \"OK\" ] || [ \"${_db_test_status}\" = \"WARN\" ]; then\n    echo \"  - Threads: ${_db_threads_used:-N/A}\"\n    echo \"  - Read-only:  ${_db_ro_tps} TPS, p95 ${_db_ro_p95_ms} ms\"\n    echo \"  - Read-write: ${_db_rw_tps} TPS, p95 ${_db_rw_p95_ms} ms\"\n    if [ \"${_db_test_status}\" = \"WARN\" ]; then\n      echo \"  - Note: results parsed with warning (${_db_test_reason})\"\n    fi\n  else\n    if [ -n \"${_db_test_reason}\" ]; then\n      echo \"  - ${_db_test_status}: ${_db_test_reason}\"\n    else\n      echo \"  - ${_db_test_status}\"\n    fi\n  fi\n\n  for _mount_point in \"${_test_mounts[@]}\"; do\n    [ -z \"${_mount_point}\" ] && continue\n    echo \"\"\n    echo \"Disk I/O Performance on ${_mount_point}:\"\n    echo \"\"\n    echo \"  Random 4k randrw:\"\n    echo \"    - Read IOPS: ${_fio_rand_iops_read[\"${_mount_point}\"]}\"\n    echo \"    - Write IOPS: ${_fio_rand_iops_write[\"${_mount_point}\"]}\"\n    echo \"    - Read Bandwidth: ${_fio_rand_bw_read[\"${_mount_point}\"]}\"\n    echo \"    - Write Bandwidth: ${_fio_rand_bw_write[\"${_mount_point}\"]}\"\n    echo \"    - Avg Read Latency: 
${_fio_rand_latency_read[\"${_mount_point}\"]}\"\n    echo \"    - Avg Write Latency: ${_fio_rand_latency_write[\"${_mount_point}\"]}\"\n    echo \"\"\n    echo \"  QD=1 randrw (p95/p99):\"\n    echo \"    - Read p95/p99: ${_fio_qd1_p95_read[\"${_mount_point}\"]} / ${_fio_qd1_p99_read[\"${_mount_point}\"]}\"\n    echo \"    - Write p95/p99: ${_fio_qd1_p95_write[\"${_mount_point}\"]} / ${_fio_qd1_p99_write[\"${_mount_point}\"]}\"\n    _tier_ioping_us=\"$(_parse_ioping_avg_us \"${_ioping_seek_line[\"${_mount_point}\"]:-}\")\"\n    _tier_ioping_us=\"$(_num_or_empty \"${_tier_ioping_us}\")\"\n    _tier_srw_num=\"$(_num_or_empty \"$(echo \"${_fio_seqread_bw_read[\"${_mount_point}\"]}\" | awk '{print $1}')\")\"\n    _tier_sww_num=\"$(_num_or_empty \"$(echo \"${_fio_seqwrite_bw_write[\"${_mount_point}\"]}\" | awk '{print $1}')\")\"\n    _tier_p99r_num=\"$(_num_or_empty \"$(echo \"${_fio_qd1_p99_read[\"${_mount_point}\"]}\" | awk '{print $1}')\")\"\n    _tier_p99w_num=\"$(_num_or_empty \"$(echo \"${_fio_qd1_p99_write[\"${_mount_point}\"]}\" | awk '{print $1}')\")\"\n    _tier_fsync_p95w_num=\"$(_num_or_empty \"$(echo \"${_fio_qd1_fsync_p95_write[\"${_mount_point}\"]}\" | awk '{print $1}')\")\"\n    _tier_fsync_p99w_num=\"$(_num_or_empty \"$(echo \"${_fio_qd1_fsync_p99_write[\"${_mount_point}\"]}\" | awk '{print $1}')\")\"\n    _tier_storage_tier=\"$(_boa_guess_storage_tier \"${_tier_ioping_us}\" \"${_tier_srw_num}\" \"${_tier_sww_num}\" \"${_tier_p99r_num}\" \"${_tier_p99w_num}\" \"${_tier_fsync_p99w_num}\")\"\n    echo \"    - Storage tier guess: ${_tier_storage_tier}\"\n    echo \"\"\n    echo \"  Sequential Read (128k):\"\n    echo \"    - Read Bandwidth: ${_fio_seqread_bw_read[\"${_mount_point}\"]}\"\n    echo \"    - Avg Read Latency: ${_fio_seqread_latency_read[\"${_mount_point}\"]}\"\n    echo \"\"\n    echo \"  Sequential Write (128k):\"\n    echo \"    - Write Bandwidth: ${_fio_seqwrite_bw_write[\"${_mount_point}\"]}\"\n    echo \"    - Avg Write Latency: 
${_fio_seqwrite_latency_write[\"${_mount_point}\"]}\"\n    echo \"\"\n    echo \"  Metadata (small file create+read+delete):\"\n    echo \"    - Ops per second: ${_meta_ops[\"${_mount_point}\"]}\"\n\n    # CSV per-mount disk line (verbose mode only)\n    # Format:\n    # CSV_DISK_V1:hostname,\"/mount\",seq_read_MBps,seq_write_MBps,rand_iops_r,rand_iops_w,meta_ops,ioping_us,qd1_p99_r,qd1_p99_w,qd1_fsync_p99_w,\"tier\",\"grade\"\n    if [ \"${_VERBOSE}\" -eq 1 ]; then\n      _csv_seqr_num=\"${_tier_srw_num}\"\n      _csv_seqw_num=\"${_tier_sww_num}\"\n      _csv_riops_num=\"$(_num_or_empty \"${_fio_rand_iops_read[\"${_mount_point}\"]}\")\"\n      _csv_wiops_num=\"$(_num_or_empty \"${_fio_rand_iops_write[\"${_mount_point}\"]}\")\"\n      _csv_meta_num=\"$(_num_or_empty \"$(echo \"${_meta_ops[\"${_mount_point}\"]}\" | awk '{print $1}')\")\"\n      _csv_ioping_avg_us=\"${_tier_ioping_us}\"\n      _csv_p99r_num=\"${_tier_p99r_num}\"\n      _csv_p99w_num=\"${_tier_p99w_num}\"\n      _csv_fsync_p95w_num=\"${_tier_fsync_p95w_num}\"\n      _csv_fsync_p99w_num=\"${_tier_fsync_p99w_num}\"\n      read -r _csv_disk_grade _csv_disk_grade_tp _csv_disk_grade_tail _csv_disk_gmean _csv_disk_imbalance _csv_disk_meta _csv_disk_fsync_p99 _csv_disk_qd1p99r _csv_disk_qd1p99w _csv_disk_ioping <<< \"$(_boa_grade_disk_final \"${_csv_ioping_avg_us}\" \"${_csv_seqr_num}\" \"${_csv_seqw_num}\" \"${_csv_riops_num}\" \"${_csv_wiops_num}\" \"${_csv_meta_num}\" \"${_csv_p99r_num}\" \"${_csv_p99w_num}\" \"${_csv_fsync_p99w_num}\" \"${_csv_fsync_p95w_num}\")\"\n      echo \"CSV_DISK_V1:${_hostname},\\\"${_mount_point}\\\",${_csv_seqr_num},${_csv_seqw_num},${_csv_riops_num},${_csv_wiops_num},${_csv_meta_num},${_csv_ioping_avg_us},${_csv_p99r_num},${_csv_p99w_num},${_csv_fsync_p99w_num},\\\"${_tier_storage_tier}\\\",\\\"${_csv_disk_grade}\\\"\"\n    fi\n  done\n\n  echo \"\"\n  echo \"Performance tests completed.\"\n  echo \"\"\nfi\n\n# ----\n# CSV summary line (focus on root mount) for easy ingest\n# 
----\n# Fields (comma-separated, no header):\n# hostname,cpu_model,vcpus,cpu_mt_eps,cpu_4t_eps,cpu_st_eps,mem_bw,mem_lat_ns,php_median_s,php_conc_thr_ops,\n# root_rand_bw_read,root_rand_bw_write,root_rand_lat_r,root_rand_lat_w,\n# root_seqread_bw,root_seqread_lat_r,root_seqwrite_bw,root_seqwrite_lat_w,root_meta_ops,\n# root_qd1_p95_r,root_qd1_p95_w,root_qd1_p99_r,root_qd1_p99_w,root_qd1_fsync_p95_w,root_qd1_fsync_p99_w,storage_tier,\n# aes_mibps,net_mibps\n\n# Strip units from some disk fields for CSV\n_strip_unit() {\n  # $1 = value like \"195.67 MB/s\" or \"1.23 ms\"\n  echo \"$1\" | awk '{print $1}'\n}\n\n_csv_mem_bw=\"$(_strip_unit \"${_mem_transfer_speed}\")\"\n_csv_root_rand_bw_read=\"$(_strip_unit \"${_root_rand_bw_read}\")\"\n_csv_root_rand_bw_write=\"$(_strip_unit \"${_root_rand_bw_write}\")\"\n_csv_root_rand_lat_read=\"$(_strip_unit \"${_root_rand_lat_read}\")\"\n_csv_root_rand_lat_write=\"$(_strip_unit \"${_root_rand_lat_write}\")\"\n_csv_root_seqread_bw_read=\"$(_strip_unit \"${_root_seqread_bw_read}\")\"\n_csv_root_seqread_lat_read=\"$(_strip_unit \"${_root_seqread_lat_read}\")\"\n_csv_root_seqwrite_bw_write=\"$(_strip_unit \"${_root_seqwrite_bw_write}\")\"\n_csv_root_seqwrite_lat_write=\"$(_strip_unit \"${_root_seqwrite_lat_write}\")\"\n_csv_root_meta_ops=\"$(_strip_unit \"${_root_meta_ops}\")\"\n_csv_aes_mibps=\"$(_strip_unit \"${_aes_mibps}\")\"\n_csv_net_download=\"$(_strip_unit \"${_net_download}\")\"\n\n# Root-only extras for grading/tiering\n_csv_root_qd1_p95_read=\"$(_strip_unit \"${_root_qd1_p95_read}\")\"\n_csv_root_qd1_p95_write=\"$(_strip_unit \"${_root_qd1_p95_write}\")\"\n_csv_root_qd1_p99_read=\"$(_strip_unit \"${_root_qd1_p99_read}\")\"\n_csv_root_qd1_p99_write=\"$(_strip_unit \"${_root_qd1_p99_write}\")\"\n_csv_root_qd1_fsync_p95_write=\"$(_strip_unit \"${_root_qd1_fsync_p95_write}\")\"\n_csv_root_qd1_fsync_p99_write=\"$(_strip_unit \"${_root_qd1_fsync_p99_write}\")\"\n_csv_root_ioping_avg_us=\"$(_parse_ioping_avg_us \"${_ioping_seek}\")\"\n_csv_root_ioping_avg_us=\"$(_num_or_empty \"${_csv_root_ioping_avg_us}\")\"\n_csv_root_seqr_num=\"$(_num_or_empty 
\"${_csv_root_seqread_bw_read}\")\"\n_csv_root_seqw_num=\"$(_num_or_empty \"${_csv_root_seqwrite_bw_write}\")\"\n_csv_root_p99r_num=\"$(_num_or_empty \"${_csv_root_qd1_p99_read}\")\"\n_csv_root_p99w_num=\"$(_num_or_empty \"${_csv_root_qd1_p99_write}\")\"\n_csv_root_fsync_p99w_num=\"$(_num_or_empty \"${_csv_root_qd1_fsync_p99_write}\")\"\n_csv_root_storage_tier=\"$(_boa_guess_storage_tier \"${_csv_root_ioping_avg_us}\" \"${_csv_root_seqr_num}\" \"${_csv_root_seqw_num}\" \"${_csv_root_p99r_num}\" \"${_csv_root_p99w_num}\" \"${_csv_root_fsync_p99w_num}\")\"\n\n# DRAM pointer-chase latency (ns) helps distinguish true RAM vs cache effects\n_csv_mem_lat_ns=\"${_mem_latency}\"\n[ -z \"${_csv_mem_lat_ns}\" ] && _csv_mem_lat_ns=\"N/A\"\n\nif [ \"${_VERBOSE}\" -eq 1 ]; then\n  echo \"CSV_SUMMARY_V4:${_hostname},\\\"${_cpu_info}\\\",${_cpu_cores},${_cpu_events_MT_NPROC},${_cpu_events_MT4},${_cpu_events_ST},${_csv_mem_bw},${_csv_mem_lat_ns},${_php_time_median},\\\"${_php_concurrency_throughput}\\\",${_csv_root_rand_bw_read},${_csv_root_rand_bw_write},${_csv_root_rand_lat_read},${_csv_root_rand_lat_write},${_csv_root_seqread_bw_read},${_csv_root_seqread_lat_read},${_csv_root_seqwrite_bw_write},${_csv_root_seqwrite_lat_write},${_csv_root_meta_ops},${_csv_root_qd1_p95_read},${_csv_root_qd1_p95_write},${_csv_root_qd1_p99_read},${_csv_root_qd1_p99_write},${_csv_root_qd1_fsync_p95_write},${_csv_root_qd1_fsync_p99_write},\\\"${_csv_root_storage_tier}\\\",${_csv_aes_mibps},${_csv_net_download}\"\n  echo \"CSV_DB_V1:${_hostname},${_db_ro_tps},${_db_ro_p95_ms},${_db_rw_tps},${_db_rw_p95_ms},${_db_test_status}\"\nfi\n_cron_restart_quiet\n"
  },
  {
    "path": "aegir/tools/bin/proxysql_galera_checker",
    "content": "#!/bin/bash\n## inspired by Percona clustercheck.sh\n\n\n#-------------------------------------------------------------------------------\n#\n# Step 1 : Bash internal configuration\n#\n\nset -o nounset    # no undefined variables\n\n#-------------------------------------------------------------------------------\n#\n# Step 2 : Global variables\n#\n\n#\n# Script parameters/constants\n#\ndeclare  -i DEBUG=0\nreadonly    PROXYSQL_ADMIN_VERSION=\"1.4.16\"\n\n# Timeout exists for instances where mysqld may be hung\ndeclare  -i TIMEOUT=10\n\ndeclare     RED=\"\"\ndeclare     NRED=\"\"\n\n\n#\n# Global variables used by the script\n#\ndeclare     ERR_FILE=\"/dev/stderr\"\ndeclare     NODE_MONITOR_LOG_FILE=\"/var/lib/proxysql/c1r_galera_proxysql_node_monitor.log\"\ndeclare     CONFIG_FILE=\"/etc/proxysql-admin.cnf\"\ndeclare     HOST_PRIORITY_FILE=\"\"\ndeclare     CHECKER_PIDFILE=\"\"\n\n# Set to send output here when DEBUG is set\ndeclare     DEBUG_ERR_FILE=\"/dev/null\"\n\n\ndeclare -i  HOSTGROUP_WRITER_ID=10\ndeclare -i  HOSTGROUP_READER_ID=11\ndeclare -i  HOSTGROUP_SLAVEREADER_ID=11\ndeclare -i  NUMBER_WRITERS=0\n\ndeclare     WRITER_IS_READER=\"ondemand\"\n\ndeclare     SLAVE_IS_WRITER=\"yes\"\n\ndeclare     P_MODE=\"loadbal\"\ndeclare     P_PRIORITY=\"\"\n\ndeclare     MYSQL_USERNAME\ndeclare     MYSQL_PASSWORD\ndeclare     PROXYSQL_USERNAME\ndeclare     PROXYSQL_PASSWORD\ndeclare     PROXYSQL_HOSTNAME\ndeclare     PROXYSQL_PORT\n\ndeclare     PROXYSQL_DATADIR='/var/lib/proxysql'\n\n# Set to 1 if slave readers are being used\ndeclare -i  HAVE_SLAVEREADERS=0\ndeclare -i  HAVE_SLAVEWRITERS=0\n\n# How far behind can a slave be before it's put into OFFLINE_SOFT state\ndeclare     SLAVE_SECONDS_BEHIND=3600\n\n# Some extra text that will be logged\n# (useful for debugging)\ndeclare     LOG_TEXT=\"Maestro\"\n\n# Default value for max_connections in mysql_servers\ndeclare     
MAX_CONNECTIONS=1000\n\n\n#-------------------------------------------------------------------------------\n#\n# Step 3 : Helper functions\n#\n\nfunction log() {\n  local lineno=$1\n  shift\n\n  if [[ -n $ERR_FILE ]]; then\n    if [[ -n $lineno && $DEBUG -ne 0 ]]; then\n      echo -e \"[$(date +%Y-%m-%d\\ %H:%M:%S)] $$ (line $lineno) $*\" >> $ERR_FILE\n    else\n      echo -e \"[$(date +%Y-%m-%d\\ %H:%M:%S)] $$ $*\" >> $ERR_FILE\n    fi\n  fi\n}\n\n# Checks the return value of the most recent command\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: the lineno where the error occurred\n#   2: the error code of the most recent command\n#   3: the error message if the error code is non-zero\n#\nfunction check_cmd() {\n  local lineno=$1\n  local retcode=$2\n  local errmsg=$3\n  shift 3\n\n  if [[ ${retcode} -ne 0 ]]; then\n    error \"$lineno\" $errmsg\n    if [[ -n \"$*\" ]]; then\n      log \"$lineno\" $*\n    fi\n  fi\n  return $retcode\n}\n\n# Checks the return value of the most recent command\n# Exits the program if the command fails (non-zero return codes)\n#\n# This should not be used with commands that can fail for legitimate\n# reasons.\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: the lineno where the error occurred\n#   2: the error code of the most recent command\n#   3: the error message if the error code is non-zero\n#\nfunction check_cmd_and_exit() {\n  check_cmd \"$@\"\n  local retcode=$?\n  if [[ $retcode -ne 0 ]]; then\n    exit 1\n  fi\n  return $retcode\n}\n\nfunction log_if_success() {\n  local lineno=$1\n  local rc=$2\n  shift 2\n\n  if [[ $rc -eq 0 ]]; then\n    log \"$lineno\" \"$*\"\n  fi\n  return $rc\n}\n\nfunction error() {\n  local lineno=$1\n  shift\n\n  log \"$lineno\" \"proxysql_galera_checker : Error ($lineno): $*\"\n}\n\nfunction warning() {\n  local lineno=$1\n  shift\n\n  log \"$lineno\" \"Warning: $*\"\n}\n\nfunction debug() {\n  if [[ $DEBUG -eq 0 ]]; then\n    return\n  fi\n\n  local lineno=$1\n  shift\n\n  log \"$lineno\" 
\"${RED}debug: $*${NRED}\"\n}\n\nfunction usage() {\n  local path=$0\n  cat << EOF\nUsage: ${path##*/} --write-hg=10 --read-hg=11 --config-file=/etc/proxysql-admin.cnf --log=/var/lib/proxysql/pxc_test_proxysql_galera_check.log\n\nOptions:\n  -w, --write-hg=<NUMBER>             Specify ProxySQL write hostgroup.\n  -r, --read-hg=<NUMBER>              Specify ProxySQL read hostgroup.\n  -c, --config-file=PATH              Specify ProxySQL-admin configuration file.\n  -l, --log=PATH                      Specify proxysql_galera_checker log file.\n  --log-text=TEXT                     This is text that will be written to the log file\n                                      whenever this script is run (useful for debugging).\n  --node-monitor-log=PATH             Specify proxysql_node_monitor log file.\n  -n, --writer-count=<NUMBER>         Maximum number of write hostgroup_id nodes\n                                      that can be marked ONLINE\n                                      When 0 (default), all nodes can be marked ONLINE\n  -p, --priority=<HOST_LIST>          Can accept comma delimited list of write nodes priority\n  -m, --mode=[loadbal|singlewrite]    ProxySQL read/write configuration mode,\n                                      currently supporting: 'loadbal' and 'singlewrite'\n  --writer-is-reader=<value>          Defines if the writer node also accepts reads.\n                                      Possible values are 'always', 'never', and 'ondemand'.\n                                      'ondemand' means that the writer node only accepts reads\n                                      if there are no other readers.\n                                      (default: 'ondemand')\n  --use-slave-as-writer=<yes/no>      If this is 'yes' then slave nodes may\n                                      be added to the write hostgroup if all other\n                                      cluster nodes are down.\n                                      (default: 'yes')\n  
--max-connections=<NUMBER>          Value for max_connections in the mysql_servers table.\n                                      This is the maximum number of connections that\n                                      ProxySQL will open to the backend servers.\n                                      (default: 1000)\n  --debug                             Enables additional debug logging.\n  -h, --help                          Display script usage information\n  -v, --version                       Print version info\n\nNotes about the mysql_servers in ProxySQL:\n\n- NODE STATUS   * Nodes that are in status OFFLINE_HARD will not be checked\n                  nor will their status be changed\n                * SHUNNED nodes are not to be used with Galera based systems,\n                  they will be checked and their status will be changed\n                  to either ONLINE or OFFLINE_SOFT.\n\n\nWhen no nodes were found to be in wsrep_local_state=4 (SYNCED) for either\nread or write nodes, then the script will try 5 times for each node to try\nto find nodes wsrep_local_state=4 (SYNCED) or wsrep_local_state=2 (DONOR/DESYNC)\nEOF\n}\n\n\n# Check the permissions for a file or directory\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: the bash test to be applied to the file\n#   2: the lineno where this call is invoked (used for errors)\n#   3: the path to the file\n#   4: (optional) description of the path (mostly used for existence checks)\n#\n# Exits the script if the permissions test fails.\n#\nfunction check_permission() {\n  local permission=$1\n  local lineno=$2\n  local path_to_check=$3\n  local description=\"\"\n  if [[ $# -gt 3 ]]; then\n    description=\"$4\"\n  fi\n\n  if [ ! 
$permission \"$path_to_check\" ]; then\n    if [[ $permission == \"-r\" ]]; then\n      error $lineno \"You do not have READ permission for: $path_to_check\"\n    elif [[ $permission == \"-w\" ]]; then\n      error $lineno \"You do not have WRITE permission for: $path_to_check\"\n    elif [[ $permission == \"-x\" ]]; then\n      error $lineno \"You do not have EXECUTE permission for: $path_to_check\"\n    elif [[ $permission == \"-e\" ]]; then\n      if [[ -n $description ]]; then\n        error $lineno \"Could not find the $description: $path_to_check\"\n      else\n        error $lineno \"Could not find: $path_to_check\"\n      fi\n    elif [[ $permission == \"-d\" ]]; then\n      if [[ -n $description ]]; then\n        error $lineno \"Could not find the $description: $path_to_check\"\n      else\n        error $lineno \"Could not find the directory: $path_to_check\"\n      fi\n    elif [[ $permission == \"-f\" ]]; then\n      if [[ -n $description ]]; then\n        error $lineno \"Could not find the $description: $path_to_check\"\n      else\n        error $lineno \"Could not find the file: $path_to_check\"\n      fi\n    else\n      error $lineno \"You do not have the correct permissions for: $path_to_check\"\n    fi\n    exit 1\n  fi\n}\n\n# Executes a SQL query with the (fully) specified server\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: lineno\n#   2: the name of the user\n#   3: the user's password\n#   4: the hostname of the server\n#   5: the port used to connect to the server\n#   6: the query to be run\n#   7: arguments to the mysql client\n#   8: timeout in secs\n#   9: additional options, space separated\n#      Available options:\n#       \"hide_output\"\n#         This will not show the output of the query when DEBUG is set.\n#         Used to keep sensitive information (such as passwords)\n#         from being displayed when debugging.\n#\nfunction exec_sql() {\n  local lineno=$1\n  local user=$2\n  local password=$3\n  local 
hostname=$4\n  local port=$5\n  local query=$6\n  local args=$7\n  local timeout_secs=$8\n  local more_options=$9\n  local retvalue\n  local retoutput\n\n  debug \"$lineno\" \"exec_sql : $user@$hostname:$port ($args) ==> $query\"\n\n  retoutput=$(printf \"[client]\\nuser=${user}\\npassword=\\\"${password}\\\"\\nhost=${hostname}\\nport=${port}\"  \\\n      | timeout ${timeout_secs} mysql --defaults-file=/dev/stdin --protocol=tcp \\\n              ${args} -e \"$query\")\n  retvalue=$?\n\n  if [[ $DEBUG -eq 1 ]]; then\n    local number_of_newlines=0\n    local dbgoutput=$retoutput\n\n    if [[ \" $more_options \" =~ [[:space:]]hide_output[[:space:]] ]]; then\n      dbgoutput=\"**** data hidden ****\"\n    fi\n\n    if [[ -n $dbgoutput ]]; then\n      number_of_newlines=$(printf \"%s\" \"${dbgoutput}\" | wc -l)\n    fi\n\n    if [[  $retvalue -ne 0 ]]; then\n      debug \"\" \"--> query failed $retvalue\"\n    elif [[ -z $dbgoutput ]]; then\n      debug \"\" \"--> query returned $retvalue : <query returned no data>\"\n    elif [[ ${number_of_newlines} -eq 0 ]]; then\n      debug \"\" \"--> query returned $retvalue : ${dbgoutput}\"\n    else\n      debug \"\" \"--> query returned $retvalue : <data follows>\"\n      printf \"${dbgoutput//%/%%}\\n\" | while IFS= read -r line; do\n        debug \"\" \"----> $line\"\n      done\n    fi\n  fi\n  printf \"${retoutput//%/%%}\"\n  return $retvalue\n}\n\n\n# Executes a SQL query on proxysql (with a timeout of $TIMEOUT seconds)\n#\n# Globals:\n#   PROXYSQL_USERNAME\n#   PROXYSQL_PASSWORD\n#   PROXYSQL_HOSTNAME\n#   PROXYSQL_PORT\n#\n# Arguments:\n#   1: lineno (used for debugging/output, may be blank)\n#   2: Additional arguments to the mysql client for the query\n#   3: The SQL query\n#   4: (optional) see the additional options for exec_sql\n#\nfunction proxysql_exec() {\n  local lineno=$1\n  local args=$2\n  local query=\"$3\"\n  local more_options=\"\"\n  local retoutput\n\n  if [[ $# -ge 4 ]]; then\n    more_options=$4\n  
fi\n\n  exec_sql \"$lineno\" \"$PROXYSQL_USERNAME\" \"$PROXYSQL_PASSWORD\" \\\n           \"$PROXYSQL_HOSTNAME\" \"$PROXYSQL_PORT\" \\\n           \"$query\" \"$args\" \"$TIMEOUT\" \"$more_options\"\n  retoutput=$?\n  if [[ $retoutput -eq 124 ]]; then\n    error $lineno \"TIMEOUT: SQL query ($PROXYSQL_HOSTNAME:$PROXYSQL_PORT) : $query\"\n  fi\n  return $retoutput\n}\n\n# Executes a SQL query on mysql (with a timeout of $TIMEOUT secs)\n#\n# Globals:\n#   MYSQL_USERNAME\n#   MYSQL_PASSWORD\n#\n# Arguments:\n#   1: lineno (used for debugging/output, may be blank)\n#   2: the hostname of the server\n#   3: the port used to connect to the server\n#   4: arguments to the mysql client\n#   5: the query to be run\n#   6: (optional) more options; see exec_sql\n#\nfunction mysql_exec() {\n  local lineno=$1\n  local hostname=$2\n  local port=$3\n  local args=$4\n  local query=$5\n  local more_options=\"\"\n  local retoutput\n\n  if [[ $# -ge 6 ]]; then\n    more_options=$6\n  fi\n\n  exec_sql \"$lineno\" \"$MYSQL_USERNAME\" \"$MYSQL_PASSWORD\" \\\n           \"$hostname\" \"$port\" \\\n           \"$query\" \"$args\" \"$TIMEOUT\" \"$more_options\"\n  retoutput=$?\n  if [[ $retoutput -eq 124 ]]; then\n    error $lineno \"TIMEOUT: SQL query ($hostname:$port) : $query\"\n  fi\n  return $retoutput\n}\n\n\n# Separates the IP address from the port in a network address\n# Works for IPv4 and IPv6\n#\n# Globals:\n#   None\n#\n# Params:\n#   1. 
The network address to be parsed\n#\n# Outputs:\n#   A string with a space separating the IP address from the port\n#\nfunction separate_ip_port_from_address()\n{\n  #\n  # Break address string into host:port/path parts\n  #\n  local address=$1\n  local ip_addr\n  local port\n\n  # Has to have at least one ':' to separate the port from the ip address\n  if [[ $address =~ : ]]; then\n    ip_addr=${address%:*}\n    port=${address##*:}\n  else\n    ip_addr=$address\n    port=\"\"\n  fi\n\n  # Remove any brackets that surround the ip address portion\n  ip_addr=${ip_addr#\\[}\n  ip_addr=${ip_addr%\\]}\n\n  echo \"${ip_addr} ${port}\"\n}\n\n# Combines the IP address and port into a network address\n# Works for IPv4 and IPv6\n# (If the IP address is IPv6, the IP portion will have brackets)\n#\n# Globals:\n#   None\n#\n# Params:\n#   1: The IP address portion\n#   2: The port\n#\n# Outputs:\n#   A string containing the full network address\n#\nfunction combine_ip_port_into_address()\n{\n  local ip_addr=$1\n  local port=$2\n  local addr\n\n  if [[ ! $ip_addr =~ \\[.*\\] && $ip_addr =~ .*:.* ]]; then\n    # If there are no brackets and it does have a ':', then add the brackets\n    # because this is an unbracketed IPv6 address\n    addr=\"[${ip_addr}]:${port}\"\n  else\n    addr=\"${ip_addr}:${port}\"\n  fi\n  echo \"$addr\"\n}\n\n\n# upgrade scheduler from old layout to new layout\n#\n# Globals:\n#   PROXYSQL_DATADIR\n#   TIMEOUT\n#   HOST_PRIORITY_FILE\n#\n# Arguments:\n#   None\nfunction upgrade_scheduler(){\n  log $LINENO \"**** Scheduler upgrade started ****\"\n  if [[ -f /etc/proxysql-admin.cnf ]]; then\n    # shellcheck disable=SC1091\n    source /etc/proxysql-admin.cnf\n  else\n    error $LINENO \"Assert! 
proxysql-admin configuration file : /etc/proxysql-admin.cnf does not exist. Terminating!\"\n    exit 1\n  fi\n\n  # For this function, use a shorter timeout than normal\n  TIMEOUT=2\n\n  local scheduler_rows\n  local -i rows_found=0\n  local -i rows_modified=0\n\n  scheduler_rows=$(proxysql_exec $LINENO \"-Ns\" \"SELECT * FROM scheduler\")\n  check_cmd_and_exit $LINENO $? \"Could not retrieve rows from scheduler (query failed). Exiting.\"\n\n  while read i; do\n    if [[ -z $i ]]; then continue; fi\n\n    rows_found+=1\n\n    # Extract fields from the line\n    local id=$(echo \"$i\" | awk '{print $1}')\n    local s_write_hg=$(echo \"$i\" | awk '{print $5}')\n    local s_read_hg=$(echo \"$i\" | awk '{print $6}')\n    local s_number_of_writes=$(echo \"$i\" | awk '{print $7}')\n    local s_log=$(echo \"$i\" | awk '{print $9}')\n    local s_cluster_name=$(echo \"$i\" | awk '{print $10}')\n    local s_mode=\"\"\n\n    log $LINENO \"Modifying the scheduler for write hostgroup: 10\"\n\n    # Get the mode for this cluster\n    local proxysql_mode_file\n    if [[ -z $s_cluster_name ]]; then\n      proxysql_mode_file=\"${PROXYSQL_DATADIR}/mode\"\n    else\n      proxysql_mode_file=\"${PROXYSQL_DATADIR}/${s_cluster_name}_mode\"\n    fi\n\n    if [[ -f ${proxysql_mode_file} && -r ${proxysql_mode_file} ]]; then\n      s_mode=$(cat ${proxysql_mode_file})\n    else\n      log $LINENO \".. Cannot find the ${proxysql_mode_file} file\"\n      if [[ $s_read_hg == \"-1\" ]]; then\n        log $LINENO \".. Assuming mode='loadbal'\"\n        s_mode=\"loadbal\"\n      else\n        log $LINENO \".. 
Assuming mode='singlewrite'\"\n        s_mode=\"singlewrite\"\n      fi\n    fi\n\n    # TODO: kennt\n    # This will fail in the multi-cluster case, but may be a non-issue\n    # since those nodes should appear in one cluster (may cause extra work\n    # for the other clusters though).\n\n    # Get the host priority file\n    local s_host_priority=''\n    if [[ -n $HOST_PRIORITY_FILE && -f $HOST_PRIORITY_FILE ]]; then\n      debug $LINENO \"Found a host priority file: $HOST_PRIORITY_FILE\"\n      local p_priority_hosts=\"\"\n\n      # Get the list of hosts from the host_priority file ignoring blanks\n      # and any lines that start with '#'\n      p_priority_hosts=$(cat $HOST_PRIORITY_FILE | grep '^[^#]' | sed ':a;N;$!ba;s/\\n/,/g')\n      if [[ ! -z $p_priority_hosts ]]; then\n        s_host_priority=\"--priority=$p_priority_hosts\"\n      fi\n    fi\n\n    # Make the changes\n    s_log=\"/var/lib/proxysql/c1r_galera_proxysql_galera_check.log\"\n    if [[ ! -z $s_log ]]; then\n      proxysql_exec $LINENO -Ns \"UPDATE scheduler SET arg1='--config-file=/etc/proxysql-admin.cnf --writer-is-reader=ondemand --write-hg=10 --read-hg=11 --writer-count=0 --mode=loadbal --log=$s_log', arg2=NULL, arg3=NULL, arg4=NULL, arg5=NULL WHERE id=$id\"\n      check_cmd_and_exit $LINENO $? \"Could not update the scheduler (query failed). Exiting.\"\n      rows_modified+=1\n    fi\n  done< <(printf \"${scheduler_rows}\\n\")\n\n  if [[ $rows_modified -gt 0 ]]; then\n    proxysql_exec $LINENO -Ns \"LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK;\"\n    check_cmd_and_exit $LINENO $? \"Could not save scheduler changes to runtime or disk (query failed). Exiting.\"\n    log_if_success $LINENO $? 
\"Scheduler changes saved to runtime and disk\"\n  fi\n\n  log $LINENO \"$rows_modified row(s) modified / $rows_found row(s) found\"\n  log $LINENO \"**** Scheduler upgrade finished ****\"\n}\n\n\n# Update the status of a node\n# This may also perform an INSERT (depending on the reader_status)\n#\n# Globals:\n#   TIMEOUT\n#\n# Arguments:\n#   1: lineno\n#   2: hostgroup\n#   3: server address\n#   4: port\n#   5: new status\n#   6: reader_status\n#   7: comment\n#\nfunction change_server_status() {\n  local lineno=$1\n  local hostgroup=$2\n  local server=$3\n  local port=$4\n  local status=$5\n  local reader_status=$6\n  local comment=$7\n  local address\n\n  address=$(combine_ip_port_into_address \"$server\" \"$port\")\n\n  # If we have a PRIORITY_NODE, then we don't have a WRITER entry\n  # to update, but we do have a READER entry, so update that\n  if [[ $reader_status == \"PRIORITY_NODE\" ]]; then\n    proxysql_exec $lineno -Ns \"INSERT INTO mysql_servers\n          (hostname,hostgroup_id,port,weight,status,comment,max_connections)\n          VALUES ('$server',$hostgroup,$port,1000000,'$status','WRITE',$MAX_CONNECTIONS);\"\n    check_cmd_and_exit $LINENO $? \"Could not create new mysql_servers row (query failed). Exiting.\"\n    log \"$lineno\" \"Adding server $hostgroup:$address with status $status. Reason: $comment\"\n  else\n    proxysql_exec $lineno -Ns \"UPDATE mysql_servers\n          set status = '$status' WHERE hostgroup_id = $hostgroup AND hostname = '$server' AND port = $port;\" 2>>${ERR_FILE}\n    check_cmd_and_exit $LINENO $? \"Could not update mysql_servers row (query failed). Exiting.\"\n    log \"$lineno\" \"Changing server $hostgroup:$address to status $status. Reason: $comment\"\n  fi\n}\n\n\n# Arguments:\n#   The arguments to the script.\n#\nfunction parse_args() {\n  local go_out=\"\"\n\n  # TODO: kennt, what happens if we don't have a functional getopt()?\n  # Check if we have a functional getopt(1)\n  if ! 
getopt --test; then\n    go_out=\"$(getopt --options=w:r:c:l:n:m:p:vh --longoptions=write-hg:,read-hg:,config-file:,log:,node-monitor-log:,writer-count:,mode:,priority:,writer-is-reader:,use-slave-as-writer:,log-text:,max-connections:,version,debug,help \\\n    --name=\"$(basename \"$0\")\" -- \"$@\")\"\n    if [[ $? -ne 0 ]]; then\n      # no place to send output\n      echo \"proxysql_galera_checker : Script error: getopt() failed\" >&2\n      exit 1\n    fi\n    eval set -- \"$go_out\"\n  fi\n\n  if [[ $go_out == \" --\" ]];then\n    usage\n    exit 1\n  fi\n\n  #\n  # We iterate through the command-line options twice\n  # (1) to handle options that don't need permissions (such as --help)\n  # (2) to handle options that need to be done before other\n  #     options, such as loading the config file\n  #\n  for arg\n  do\n    case \"$arg\" in\n      -- ) shift; break;;\n      --config-file )\n        CONFIG_FILE=\"$2\"\n        check_permission -e $LINENO \"$CONFIG_FILE\" \"proxysql-admin configuration file\"\n        debug $LINENO  \"--config-file specified, using : $CONFIG_FILE\"\n        shift 2\n        ;;\n      --help)\n        usage\n        exit 0\n        ;;\n      -v | --version)\n        echo \"proxysql_galera_checker version $PROXYSQL_ADMIN_VERSION\"\n        exit 0\n        ;;\n      --debug)\n        DEBUG=1\n        shift\n        ;;\n      *)\n        shift\n        ;;\n    esac\n  done\n\n  #\n  # Load the config file before reading in the command-line options\n  #\n  readonly CONFIG_FILE\n  if [ ! 
-e \"$CONFIG_FILE\" ]; then\n      warning \"\" \"Could not locate the configuration file: $CONFIG_FILE\"\n  else\n      check_permission -r $LINENO \"$CONFIG_FILE\"\n      debug $LINENO \"Loading $CONFIG_FILE\"\n      source \"$CONFIG_FILE\"\n  fi\n\n  if [[ $DEBUG -ne 0 ]]; then\n    # For now\n    if [[ -t 1 ]]; then\n      ERR_FILE=/dev/stderr\n    fi\n  fi\n\n  # Reset the command line for the next invocation\n  eval set -- \"$go_out\"\n\n  for arg\n  do\n    case \"$arg\" in\n      -- ) shift; break;;\n      -w | --write-hg )\n        HOSTGROUP_WRITER_ID=$2\n        shift 2\n      ;;\n      -r | --read-hg )\n        HOSTGROUP_READER_ID=$2\n        shift 2\n      ;;\n      --config-file )\n        # Do no processing of config-file here, it is processed\n        # before this loop (see above)\n        shift 2\n      ;;\n      -l | --log )\n        ERR_FILE=\"$2\"\n        shift 2\n        # Test if stderr is open to a terminal\n        # We cannot use stdout as the log output, since it is used\n        # to return values.\n        if [[ $ERR_FILE == \"/dev/stderr\" ]]; then\n          RED=$(tput setaf 1)\n          NRED=$(tput sgr0)\n        else\n          RED=\"\"\n          NRED=\"\"\n        fi\n      ;;\n      --node-monitor-log )\n        NODE_MONITOR_LOG_FILE=\"$2\"\n        shift 2\n      ;;\n      -n | --writer-count )\n        NUMBER_WRITERS=\"$2\"\n        shift 2\n      ;;\n      -p | --priority )\n        P_PRIORITY=\"$2\"\n        shift 2\n      ;;\n      -m | --mode )\n        P_MODE=\"$2\"\n        shift 2\n        if [ \"$P_MODE\" != \"loadbal\" ] && [ \"$P_MODE\" != \"singlewrite\" ]; then\n          echo \"ERROR: Invalid --mode passed:\"\n          echo \"  Please choose one of these modes: loadbal, singlewrite\"\n          exit 1\n        fi\n      ;;\n      --writer-is-reader )\n        WRITER_IS_READER=\"$2\"\n        shift 2\n      ;;\n      --use-slave-as-writer )\n        SLAVE_IS_WRITER=\"$2\"\n        shift 2\n      ;;\n      
--max-connections )\n        MAX_CONNECTIONS=\"$2\"\n        shift 2\n      ;;\n      --debug )\n        # Not handled here, see above\n        shift\n      ;;\n      --log-text )\n        LOG_TEXT=\"$2\"\n        shift 2\n      ;;\n      -v | --version )\n        # Not handled here, see above\n        shift\n      ;;\n      -h | --help )\n        # Not handled here, see above\n        shift\n      ;;\n    esac\n  done\n\n  if [[ $DEBUG -eq 1 ]]; then\n    DEBUG_ERR_FILE=$ERR_FILE\n  fi\n\n  #\n  # Argument validation\n  #\n  test $HOSTGROUP_WRITER_ID -ge 0 &>/dev/null\n  if [[ $? -ne 0 ]]; then\n    echo \"ERROR: writer hostgroup_id is not an integer\"\n    usage\n    exit 1\n  fi\n\n  test $HOSTGROUP_READER_ID -ge -1 &>/dev/null\n  if [[ $? -ne 0 ]]; then\n    echo \"ERROR: reader hostgroup_id is not an integer\"\n    usage\n    exit 1\n  fi\n\n  HOSTGROUP_SLAVEREADER_ID=$HOSTGROUP_READER_ID\n  if [ $HOSTGROUP_SLAVEREADER_ID -eq $HOSTGROUP_WRITER_ID ];then\n    let HOSTGROUP_SLAVEREADER_ID+=1\n  fi\n\n  if [[ $NUMBER_WRITERS -lt 0 ]]; then\n    echo \"ERROR: The number of writers should either be 0 to enable all possible nodes ONLINE\"\n    echo \"       or be larger than 0 to limit the number of writers\"\n    usage\n    exit 1\n  fi\n\n  if [[ ! $WRITER_IS_READER =~ ^(always|never|ondemand)$ ]]; then\n    error \"\" \"Invalid --writer-is-reader option: '$WRITER_IS_READER'\"\n    echo \"Please choose one of these values: always, never, or ondemand\"\n    exit 1\n  fi\n\n  if [[ ! $SLAVE_IS_WRITER =~ ^(yes|YES|no|NO)$ ]]; then\n    error \"\" \"Invalid --use-slave-as-writer option: '$SLAVE_IS_WRITER'\"\n    echo \"Please choose either yes or no\"\n    exit 1\n  fi\n  if [[ $SLAVE_IS_WRITER =~ ^(yes|YES)$ ]]; then\n    SLAVE_IS_WRITER=\"yes\"\n  else\n    SLAVE_IS_WRITER=\"no\"\n  fi\n\n  # These may get set in the config file, but they are not used\n  # by this script.  
So to avoid confusion and problems, remove\n  # these variables explicitly.\n  unset WRITE_HOSTGROUP_ID\n  unset READ_HOSTGROUP_ID\n  unset SLAVEREAD_HOSTGROUP_ID\n\n\n  # Verify that we have an integer\n  if [[ -n $MAX_CONNECTIONS ]]; then\n    if ! [ \"$MAX_CONNECTIONS\" -eq \"$MAX_CONNECTIONS\" ] 2>/dev/null\n    then\n      error \"\" \"option '--max-connections' parameter (must be a number) : $MAX_CONNECTIONS\"\n      exit 1\n    fi\n  fi\n\n  readonly ERR_FILE\n  readonly CONFIG_FILE\n  readonly DEBUG_ERR_FILE\n\n  readonly HOSTGROUP_WRITER_ID\n  readonly HOSTGROUP_READER_ID\n  readonly HOSTGROUP_SLAVEREADER_ID\n  readonly NUMBER_WRITERS\n\n  readonly WRITER_IS_READER\n  readonly SLAVE_IS_WRITER\n\n  readonly P_PRIORITY\n  readonly P_MODE\n\n  readonly MAX_CONNECTIONS\n}\n\n\n# Checks to see if another instance of the script is running.\n# We want only one instance of this script to be running at a time.\n#\n# Globals:\n#   PROXYSQL_DATADIR\n#   ERR_FILE\n#\n# Arguments:\n#   1: the cluster name\n#\n# This function will exit if another instance of the script is running.\n#\n# With thanks, http://bencane.com/2015/09/22/preventing-duplicate-cron-job-executions/\n#\nfunction check_is_galera_checker_running() {\n  local cluster_name=$1\n\n  if [[ -z $cluster_name ]]; then\n    CHECKER_PIDFILE=${PROXYSQL_DATADIR}/galera_checker.pid\n  else\n    CHECKER_PIDFILE=${PROXYSQL_DATADIR}/${cluster_name}_galera_checker.pid\n  fi\n\n  if [[ -f $CHECKER_PIDFILE && -r $CHECKER_PIDFILE ]]; then\n    local GPID\n    GPID=$(cat \"$CHECKER_PIDFILE\")\n    if ps -p $GPID -o args=ARGS | grep $ERR_FILE | grep -o proxysql_galera_check >/dev/null 2>&1 ; then\n      ps -p $GPID > /dev/null 2>&1\n      if [[ $? -eq 0 ]]; then\n        log \"$LINENO\" \"ProxySQL galera checker process already running. 
(pid:$GPID  this pid:$$)\"\n\n        # We don't want to remove this file on cleanup\n        CHECKER_PIDFILE=\"\"\n        exit 1\n      else\n        echo $$ > $CHECKER_PIDFILE\n        if [[ $? -ne 0 ]]; then\n          warning \"$LINENO\" \"Could not create galera checker PID file\"\n          exit 1\n        fi\n        debug \"$LINENO\" \"Created PID file at $CHECKER_PIDFILE\"\n      fi\n    else\n      warning \"$LINENO\" \"Existing PID($GPID) belongs to some other process. Creating new PID file.\"\n      echo $$ > $CHECKER_PIDFILE\n      if [[ $? -ne 0 ]]; then\n        warning \"$LINENO\" \"Could not create galera checker PID file\"\n        exit 1\n      fi\n      debug \"$LINENO\" \"Created PID file at $CHECKER_PIDFILE\"\n    fi\n  else\n    echo \"$$\" > \"$CHECKER_PIDFILE\"\n    if [[ $? -ne 0 ]]; then\n      warning \"$LINENO\" \"Could not create galera checker PID file\"\n      exit 1\n    fi\n    debug \"$LINENO\" \"Created PID file at $CHECKER_PIDFILE\"\n  fi\n}\n\nfunction cleanup_handler() {\n  if [[ -n $CHECKER_PIDFILE ]]; then\n    rm -f $CHECKER_PIDFILE\n  fi\n}\n\n\n# Checks to see that the READ hostgroup entries have been created/deleted\n# This function has no meaning if mode=\"loadbal\".\n#\n# Globals:\n#   HOSTGROUP_READER_ID\n#   HOSTGROUP_WRITER_ID\n#   WRITER_IS_READER\n#   MODE\n#\n# Arguments:\n#   1: the name of the reload_check_file\n#   2: the number of online-readers\n#\nfunction writer_is_reader_check() {\n  local reload_check_file=$1\n  local number_readers_online=$2\n  local servers\n\n  if [[ $MODE == \"loadbal\" ]]; then\n    return\n  fi\n\n  servers=$(proxysql_exec $LINENO -Ns \"SELECT hostgroup_id, hostname, port, status\n                                       FROM mysql_servers\n                                       WHERE hostgroup_id IN ($HOSTGROUP_READER_ID,$HOSTGROUP_WRITER_ID) AND\n                                       status <> 'OFFLINE_HARD'\n                                        AND comment <> 'SLAVEREAD'\n    
                                   ORDER BY hostname, port, hostgroup_id\")\n  check_cmd_and_exit $LINENO $? \"Could not retrieve data from mysql_servers (query failed). Exiting.\"\n\n  debug $LINENO \"writer_is_reader_check : number_readers_online:$number_readers_online\"\n  # The extra test at the end is to ensure that the very last line read in\n  # is also handled (the read may be returning EOF)\n  printf \"${servers}\" | while read hostgroup server port stat || [ -n \"$stat\" ]\n  do\n    debug $LINENO \"Examining $hostgroup:$server:$port\"\n\n    # Only look at writer nodes, so skip non-writers\n    if [[ $hostgroup -ne $HOSTGROUP_WRITER_ID ]]; then\n      continue\n    fi\n\n    local regex=\"(^|[[:space:]])${HOSTGROUP_READER_ID}[[:blank:]]${server}[[:blank:]]${port}[[:blank:]]\"\n    local address\n    address=$(combine_ip_port_into_address \"$server\" \"$port\")\n\n    if [[ $WRITER_IS_READER == \"always\" || $WRITER_IS_READER == \"ondemand\" ]]; then\n      #\n      # Ensure that there is a corresponding entry in mysql_servers\n      #\n      local reader_row=$(echo \"$servers\" | grep -E \"$regex\")\n      if [[ -z $reader_row ]]; then\n        # We did not find a matching row, insert the corresponding READER entry\n\n        # If we are to always have a reader, or if there are no other\n        # readers, add it as an ONLINE reader\n        local comment\n        if [[ $WRITER_IS_READER == \"always\" || $number_readers_online -eq 0 ]]; then\n          comment=\"ONLINE\"\n        else\n          comment=\"OFFLINE_SOFT\"\n        fi\n        proxysql_exec $LINENO -Ns \"INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,status,comment,max_connections) VALUES ('$server',$HOSTGROUP_READER_ID,$port,1000,'$comment','READ',$MAX_CONNECTIONS);\"\n        check_cmd_and_exit $LINENO $? \"writer-is-reader($WRITER_IS_READER): Failed to add $HOSTGROUP_READER_ID:$address (query failed). Exiting.\"\n        log_if_success $LINENO $? 
\"writer-is-reader($WRITER_IS_READER): Adding reader $HOSTGROUP_READER_ID:$address with status $comment\"\n        echo \"1\" > ${reload_check_file}\n      elif [[ $WRITER_IS_READER == \"ondemand\" && $stat == \"ONLINE\" && $number_readers_online -gt 1 ]]; then\n        # If the reader node is ONLINE and there is another reader\n        # Then we can deactivate this reader node (if ondemand)\n\n        # Only do this if there is a corresponding READ entry that is NOT \"OFFLINE_SOFT\"\n        # Search through the $servers for an entry corresponding to the READ data\n        local reader_status=$(echo -e \"$reader_row\" | cut -f4)\n        if [[ $reader_status != \"OFFLINE_SOFT\" ]]; then\n          proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostgroup_id=$HOSTGROUP_READER_ID AND hostname='$server' AND port=$port\"\n          check_cmd_and_exit $LINENO $? \"Could not update mysql_servers (query failed). Exiting.\"\n          log_if_success $LINENO $? \"Updating $hostgroup:$address to OFFLINE_SOFT because writers prefer to not be readers (writer-is-reader:ondemand)\"\n          echo \"1\" > ${reload_check_file}\n        fi\n      fi\n    elif [[ $WRITER_IS_READER == \"never\" ]]; then\n      #\n      # Ensure that there is NO corresponding entry in mysql_servers\n      #\n      if [[ $servers =~ $regex ]]; then\n        # Delete the corresponding READER entry\n        proxysql_exec $LINENO -Ns \"DELETE FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_READER_ID and hostname='$server' and port=$port\"\n        check_cmd_and_exit $LINENO $? \"writer-is-reader(never): Failed to remove $HOSTGROUP_READER_ID:$address (query failed). Exiting.\"\n        log_if_success $LINENO $? 
\"writer-is-reader(never): Removing reader $HOSTGROUP_READER_ID:$address\"\n        echo \"1\" > ${reload_check_file}\n      fi\n    fi\n  done\n}\n\n\n# Returns the address of an available (online) cluster host\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   None\n#\n# Returns:\n#   0 : if the function succeeds (this means that no errors occurred in the SQL queries)\n#     The function can return nothing and still return success\n#   1 : if an error occurs\n#\nfunction find_online_cluster_host() {\n  # Query the proxysql database for hosts,ports in use\n  # Then just go through the list until we reach one that responds\n  local hosts\n  hosts=$(proxysql_exec $LINENO -Ns \"SELECT hostname,port FROM mysql_servers where hostgroup_id in ($HOSTGROUP_WRITER_ID, $HOSTGROUP_READER_ID) and comment <> 'SLAVEREAD' and status='ONLINE'\")\n  if [[ $? -ne 0 ]]; then\n    return 1\n  fi\n  printf \"$hosts\" | while read server port || [[ -n $port ]]\n  do\n    debug $LINENO \"Trying to contact $server:$port...\"\n    mysql_exec \"$LINENO\" \"$server\" \"$port\" -Bs \"select @@port\" 1>/dev/null 2>>${DEBUG_ERR_FILE}\n    if [[ $? 
-eq 0 ]]; then\n      printf \"$server $port\"\n      return 0\n    fi\n  done\n\n  # No cluster host available (cannot contact any)\n  return 0\n}\n\n# Checks the number of online writers and promotes a reader to a writer\n# if needed\n#\n# Globals:\n#   HOSTGROUP_WRITER_ID\n#   CHECK_STATUS\n#\n# Arguments:\n#   1: The path to the reload_check_file\n#\nfunction ensure_one_writer_node() {\n  local reload_check_file=$1\n\n  log $LINENO \"No ONLINE writers found, looking for readers to promote\"\n\n  # We are trying to find a reader node that can be promoted to\n  # a writer.\n  # We ORDER BY writer.status ASC because, by accident,\n  # ONLINE sorts last in the list\n  local reader_proxysql_query=\"SELECT\n           reader.hostname,\n           reader.port,\n           writer.status\n    FROM mysql_servers as reader\n    LEFT JOIN mysql_servers as writer\n      ON writer.hostgroup_id = $HOSTGROUP_WRITER_ID\n      AND writer.hostname = reader.hostname\n      AND writer.port = reader.port\n    WHERE reader.hostgroup_id = $HOSTGROUP_READER_ID\n      AND reader.status = 'ONLINE'\n      AND reader.comment = 'READ'\n    ORDER BY writer.status ASC,\n             reader.weight DESC,\n             reader.hostname,\n             reader.port\"\n\n  local possible_hosts\n  possible_hosts=$(proxysql_exec $LINENO -Ns \"$reader_proxysql_query\")\n  check_cmd_and_exit $LINENO $? \"Could not get data from mysql_servers (query failed). 
Exiting.\"\n  if [[ -z $possible_hosts ]]; then\n    log $LINENO \"Cannot find a reader that can be promoted to a writer\"\n    return\n  fi\n\n  while read line; do\n    if [[ -z $line ]]; then\n      continue\n    fi\n\n    local host=$(echo $line | awk '{ print $1 }')\n    local port=$(echo $line | awk '{ print $2 }')\n\n    # Have to check that we can actually access the node\n    local wsrep_status\n    local pxc_main_mode\n    local result\n    local address\n    address=$(combine_ip_port_into_address \"$host\" \"$port\")\n    result=$(mysql_exec $LINENO \"$host\" \"$port\" -Nns \"SHOW STATUS LIKE 'wsrep_local_state'; SHOW VARIABLES LIKE 'pxc_maint_mode';\" 2>>${DEBUG_ERR_FILE})\n    wsrep_status=$(echo \"$result\" | grep \"wsrep_local_state\" | awk '{ print $2 }')\n    pxc_main_mode=$(echo \"$result\" | grep \"pxc_maint_mode\" | awk '{ print $2 }')\n\n    if [[ -z $pxc_main_mode ]]; then\n      pxc_main_mode=\"DISABLED\"\n    fi\n\n    if [[ $wsrep_status -ne 4 || $pxc_main_mode != \"DISABLED\" ]]; then\n      continue\n    fi\n\n    local write_stat=$(echo \"$line\" | awk '{print $3}')\n    debug $LINENO \"Looking at $address write_status:$write_stat\"\n\n    log $LINENO \"Promoting $address as writer node...\"\n\n    # We have an ONLINE reader node\n    # if ondemand or always keep the reader node (may need to move it to OFFLINE_SOFT)\n    #   if we have a writer, move to ONLINE\n    #   if no writer, create it\n\n    if [[ $WRITER_IS_READER == \"ondemand\" || $WRITER_IS_READER == \"always\" ]]; then\n      if [[ -z $write_stat || $write_stat == \"NULL\" ]]; then\n        # Writer does not exist, add an ONLINE writer\n        proxysql_exec $LINENO -Ns \\\n          \"INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,status,comment,max_connections)\n           VALUES ('$host',$HOSTGROUP_WRITER_ID,$port,1000000,'ONLINE','WRITE',$MAX_CONNECTIONS);\"\n        check_cmd_and_exit $LINENO $? \"Cannot add a PXC writer node to ProxySQL (query failed). 
Exiting.\"\n        log_if_success $LINENO $? \"Added $HOSTGROUP_WRITER_ID:$address as a writer.\"\n      else\n        # Writer exists, move writer to ONLINE\n        proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status='ONLINE',comment='WRITE',weight=1000000 WHERE hostgroup_id=$HOSTGROUP_WRITER_ID AND hostname='$host' AND port=$port\"\n        check_cmd_and_exit $LINENO $? \"Cannot update the PXC node $HOSTGROUP_WRITER_ID:$address (query failed). Exiting.\"\n        log_if_success $LINENO $? \"Updated ${HOSTGROUP_WRITER_ID}:${host}:${port} node in ProxySQL.\"\n      fi\n    else\n      if [[ -z $write_stat || $write_stat == \"NULL\" ]]; then\n        # Writer does not exist, move reader to writer\n        proxysql_exec $LINENO -Ns \\\n            \"UPDATE mysql_servers set status='ONLINE',hostgroup_id=$HOSTGROUP_WRITER_ID, comment='WRITE', weight=1000000 WHERE hostgroup_id=$HOSTGROUP_READER_ID and hostname='$host' and port=$port\"\n        check_cmd_and_exit $LINENO $? \"Cannot update PXC writer in ProxySQL (query failed). Exiting.\"\n        log_if_success $LINENO $? \"Added $HOSTGROUP_WRITER_ID:$address as a writer node.\"\n      else\n        # Writer exists, move writer to ONLINE, remove reader\n        proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status='ONLINE',comment='WRITE',weight=1000000 WHERE hostgroup_id=$HOSTGROUP_WRITER_ID AND hostname='$host' AND port=$port\"\n        check_cmd_and_exit $LINENO $? \"Cannot update the PXC node $HOSTGROUP_WRITER_ID:$address (query failed). Exiting.\"\n        log_if_success $LINENO $? \"Updated ${HOSTGROUP_WRITER_ID}:${host}:${port} node in ProxySQL.\"\n\n        proxysql_exec $LINENO -Ns \"DELETE FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_READER_ID and hostname='$host' and port=$port\"\n        check_cmd_and_exit $LINENO $? \"writer-is-reader(never): Failed to remove $HOSTGROUP_READER_ID:$host:$port (query failed). Exiting.\"\n        log_if_success $LINENO $? 
\"writer-is-reader(never): Removing reader $HOSTGROUP_READER_ID:$host:$port\"\n      fi\n    fi\n\n    echo '1' > ${reload_check_file}\n    break\n\n  done< <(printf \"${possible_hosts}\\n\")\n}\n\n# Returns the host priority list defined by the user\n#\n# Globals:\n#   HOST_PRIORITY_FILE\n#   P_PRIORITY\n#   PROXYSQL_DATADIR\n#   CLUSTER_NAME\n#\n# Arguments:\n#   1: cluster name\n#\nfunction get_host_priority_list() {\n  local cluster_name=$1\n  local priority_hosts=\"\"\n\n  if [[ -z $HOST_PRIORITY_FILE || ! -f $HOST_PRIORITY_FILE ]]; then\n    HOST_PRIORITY_FILE=${PROXYSQL_DATADIR}/${cluster_name}_host_priority\n  fi\n\n  if [[ ! -z \"$P_PRIORITY\" ]]; then\n    IFS=',' read -r -a priority_hosts <<< \"$P_PRIORITY\"\n    debug $LINENO \"host_priority = ${priority_hosts[@]}\"\n  elif [[ -f $HOST_PRIORITY_FILE ]];then\n    # Get the list of hosts from the host_priority file ignoring blanks and\n    # any lines that start with '#'\n    debug $LINENO \"Found a host priority file: $HOST_PRIORITY_FILE\"\n    priority_hosts=$(cat \"$HOST_PRIORITY_FILE\" | grep '^[^#]')\n    if [[ -n $priority_hosts ]]; then\n      priority_hosts=($(echo $priority_hosts))\n    fi\n  fi\n  #   File sample:\n  #   10.11.12.21:3306\n  #   10.21.12.21:3306\n  #   10.31.12.21:3306\n  #\n  echo \"${priority_hosts[@]}\"\n}\n\n\n# Builds the proxysql host list and merges it with the priority list.\n# The entries from the priority list are put at the top in order.\n#\n# If they have an entry in the proxysql_list, the data is copied over.\n# If it does not exist in the proxysql_list, then it is added\n# with status=OFFLINE_SOFT.\n#\n# This ensures that the entry will not be added as\n# a writer if the node is really offline.\n#\n# Globals:\n#   None\n#\n# Argument:\n#   1: field separator\n#   2: the proxysql list (array of nodes from proxysql)\n#   3: the priority list\n#\nfunction build_proxysql_list_with_priority() {\n  local sep=$1\n  local proxysql_list=$(echo \"$2\" | tr ' ' '\\n')\n  
local priority_list=($3)\n  local new_proxysql_list=()\n  local reader_list=\"\"\n  local reader_query=0\n\n  for prio in \"${priority_list[@]}\"; do\n    local psql_entry\n    local prio_entry\n    local prio_ip\n    local prio_port\n    prio_entry=$(separate_ip_port_from_address \"$prio\")\n    prio_ip=$(echo \"$prio_entry\" | cut -d' ' -f1)\n    prio_port=$(echo \"$prio_entry\" | cut -d' ' -f2)\n    prio_entry=\"$prio_ip$sep$prio_port\"\n    # properly escape the characters for grep\n    prio_entry_re=\"$(printf '%s' \"$prio_entry\" | sed 's/[.[\\*^$]/\\\\&/g')\"\n\n    psql_entry=$(echo \"$proxysql_list\" | grep \"^${prio_entry_re}[^[:space:]]*\")\n    if [[ $? -eq 0 && -n ${psql_entry} ]]; then\n      # Add entries that are in the priority list and in the proxysql list\n      new_proxysql_list+=($psql_entry)\n    else\n      # Add entries that are in the priority list but not in proxysql\n      # (but only if there is a reader entry for the address)\n      if [[ $reader_query -eq 0 ]]; then\n        reader_list=$(proxysql_exec $LINENO -Ns \"SELECT hostname, port\n                          FROM mysql_servers\n                          WHERE hostgroup_id IN ($HOSTGROUP_READER_ID)\n                            AND status <> 'OFFLINE_HARD'\n                            AND comment <> 'SLAVEREAD'\n                            ORDER BY status DESC, hostgroup_id, weight DESC, hostname, port\")\n        check_cmd_and_exit $LINENO $? \"Unable to obtain list of nodes from ProxySQL (query failed). 
Exiting.\"\n        reader_list=$(echo \"$reader_list\" | tr '\\t' \"$sep\" | tr '\\n' ' ')\n        reader_query=1\n      fi\n      if [[ -n $reader_list ]]; then\n        if echo $reader_list | grep -E -q \"(^|[[:space:]])${prio_entry_re}($|[[:space:]])\"; then\n          # If this address is a reader, add a fake writer entry\n          # If detected to be online, the writer check algorithm\n          # will insert an entry\n          new_proxysql_list+=(\"${prio_entry}${sep}${HOSTGROUP_WRITER_ID}${sep}OFFLINE_SOFT${sep}WRITE${sep}PRIORITY_NODE\")\n        fi\n      fi\n    fi\n  done\n\n  # Add the rest of the entries (not in the priority list but in proxysql)\n\n  # For this magic, see:\n  # https://stackoverflow.com/questions/7577052/bash-empty-array-expansion-with-set-u\n  if [[ -n ${priority_list[@]+\"${priority_list[@]}\"} ]]; then\n    local priority_string=\"${priority_list[@]}\"\n    for row in ${proxysql_list}; do\n      local host=$(echo $row | cut -d \"$sep\" -f 1)\n      local port=$(echo $row | cut -d \"$sep\" -f 2)\n      local addr=$(combine_ip_port_into_address \"$host\" \"$port\")\n\n      # properly escape the characters for grep -E\n      addr_re=\"$(printf '%s' \"$addr\" | sed 's/[.[\\*^$()+?{|]/\\\\&/g')\"\n\n      if ! 
echo $priority_string | grep -E -q \"(^|[[:space:]])${addr_re}($|[[:space:]])\"; then\n        new_proxysql_list+=($row)\n      fi\n    done\n  fi\n\n  if [[ -n ${new_proxysql_list[@]+\"${new_proxysql_list[@]}\"} ]]; then\n    debug $LINENO \"new priority list = ${new_proxysql_list[@]}\"\n    echo \"${new_proxysql_list[@]}\"\n  fi\n}\n\n\nfunction update_writers() {\n  local reload_check_file=$1\n  local proxysql_list=\"\"\n  local priority_list=\"\"\n\n  # Nodes are ordered by status DESC first; this allows ONLINE nodes to always\n  # be processed first.\n  writer_proxysql_query=\"SELECT\n            writer.hostname,\n            writer.port,\n            writer.hostgroup_id,\n            writer.status,\n            writer.comment,\n            reader.status\n    FROM mysql_servers as writer\n    LEFT JOIN mysql_servers as reader\n      ON reader.hostgroup_id = $HOSTGROUP_READER_ID\n      AND reader.hostname = writer.hostname\n      AND reader.port = writer.port\n    WHERE writer.hostgroup_id = $HOSTGROUP_WRITER_ID\n      AND writer.status <> 'OFFLINE_HARD'\n    ORDER BY writer.status DESC\"\n\n  proxysql_list=$(proxysql_exec $LINENO -Ns \"$writer_proxysql_query\")\n  check_cmd_and_exit $LINENO $? \"Unable to obtain list of nodes from ProxySQL (query failed). 
Exiting.\"\n\n  if [[ $proxysql_list =~ [[:space:]]ONLINE[[:space:]]SLAVEREAD[[:space:]] ]]; then\n    HAVE_SLAVEWRITERS=1\n  fi\n\n  # Now that we have the proxysql list, order it by the priority list\n  # (with unprioritized nodes at the end)\n\n  # Using the priority list only makes sense for non-load balancing\n  if [[ $MODE != \"loadbal\" ]]; then\n    priority_list=$(get_host_priority_list \"$cluster_name\")\n  fi\n  if [[ -n $priority_list ]]; then\n    #\n    # Need to do some processing of the lists so that we can pass them down\n    # Assume that the fields cannot contain the ';' character\n    # Can't use \"local -n\" because of CentOS 6 (needs newer bash version)\n    #\n    local prio_list=${priority_list}\n    local pxsql_list=$(echo \"${proxysql_list[@]}\" | tr '\\t' ';' | tr '\\n' ' ')\n\n    proxysql_list=$(build_proxysql_list_with_priority \";\" \"$pxsql_list\" \"$prio_list\")\n    proxysql_list=$(echo \"${proxysql_list}\" | tr ' ' '\\n' | tr ';' '\\t')\n  fi\n\n  if [[ -z ${proxysql_list[@]+\"${proxysql_list[@]}\"} ]]; then\n    log $LINENO \"No writers found.\"\n    return\n  fi\n\n  # Go through the list\n  #\n  while read line; do\n    if [[ -z $line ]]; then\n      continue\n    fi\n\n    server=$(echo -e \"$line\" | cut -f1)\n    port=$(echo -e \"$line\" | cut -f2)\n    hostgroup=$(echo -e \"$line\" | cut -f3)\n    stat=$(echo -e \"$line\" | cut -f4)\n    comment=$(echo -e \"$line\" | cut -f5)\n    rdstat=$(echo -e \"$line\" | cut -f6)\n\n    if [[ $comment == \"SLAVEREAD\" ]]; then\n      set_slave_status \"${reload_check_file}\" $hostgroup $server $port $stat\n      continue\n    fi\n\n    local wsrep_status\n    local pxc_main_mode\n    local result\n    local address\n    address=$(combine_ip_port_into_address \"$server\" \"$port\")\n    result=$(mysql_exec $LINENO \"$server\" \"$port\" -Nns \"SHOW STATUS LIKE 'wsrep_local_state'; SHOW VARIABLES LIKE 'pxc_maint_mode';\" 2>>${DEBUG_ERR_FILE})\n    wsrep_status=$(echo \"$result\" | grep 
\"wsrep_local_state\" | awk '{ print $2 }')\n    pxc_main_mode=$(echo \"$result\" | grep \"pxc_maint_mode\" | awk '{ print $2 }')\n\n    if [[ -z $wsrep_status ]]; then\n      wsrep_status=\"<unknown:query failed>\"\n    fi\n\n    # For PXC 5.6 there is no pxc_maint_mode, so assume DISABLED\n    if [[ -z $pxc_main_mode ]]; then\n      pxc_main_mode=\"DISABLED\"\n    fi\n\n    # If this node was added because of the priority list and the\n    # node is not reporting SYNCED, then just skip it\n    if [[ $rdstat == \"PRIORITY_NODE\" && $wsrep_status != \"4\" ]]; then\n      continue\n    fi\n\n    log \"\" \"--> Checking WRITE server $hostgroup:$address, current status $stat, wsrep_local_state $wsrep_status\"\n\n    # we have to limit the number of writers; WSREP status OK, AND node is marked ONLINE\n    # PXC and ProxySQL agree on the status\n    # wsrep:ok  pxc:--  status:online\n    if [ $NUMBER_WRITERS -gt 0 -a \"${wsrep_status}\" = \"4\" -a \"$stat\" == \"ONLINE\" -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n      if [[ $number_writers_online -lt $NUMBER_WRITERS ]]; then\n        number_writers_online=$(( $number_writers_online + 1 ))\n        log \"\" \"server $hostgroup:$address is already ONLINE: ${number_writers_online} of ${NUMBER_WRITERS} write nodes\"\n      else\n        number_writers_online=$(( $number_writers_online + 1 ))\n        change_server_status $LINENO $HOSTGROUP_WRITER_ID \"$server\" \"$port\" \"OFFLINE_SOFT\" \"$rdstat\" \\\n                             \"max write nodes reached (${NUMBER_WRITERS})\"\n        echo \"1\" > ${reload_check_file}\n      fi\n    fi\n\n    # WSREP status OK, but node is not marked ONLINE\n    # Make the node ONLINE if possible\n    # wsrep:ok  pxc:ok  status:not online\n    if [ \"${wsrep_status}\" = \"4\" -a \"$stat\" != \"ONLINE\" -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n      # we have to limit the number of writers\n      if [[ $NUMBER_WRITERS -gt 0 ]]; then\n        if [[ $number_writers_online -lt 
$NUMBER_WRITERS ]]; then\n          number_writers_online=$(( $number_writers_online + 1 ))\n          change_server_status $LINENO $HOSTGROUP_WRITER_ID \"$server\" \"$port\" \"ONLINE\" \"$rdstat\" \\\n                               \"${number_writers_online} of ${NUMBER_WRITERS} write nodes\"\n          echo \"1\" > ${reload_check_file}\n        else\n          number_writers_online=$(( $number_writers_online + 1 ))\n          if [ \"$stat\" != \"OFFLINE_SOFT\" ]; then\n            change_server_status $LINENO $HOSTGROUP_WRITER_ID \"$server\" \"$port\" \"OFFLINE_SOFT\" \"\" \\\n                                 \"max write nodes reached (${NUMBER_WRITERS})\"\n            echo \"1\" > ${reload_check_file}\n          elif [[ $rdstat != \"PRIORITY_NODE\" ]]; then\n             log $LINENO \"server $hostgroup:$address is already OFFLINE_SOFT, max write nodes reached (${NUMBER_WRITERS})\"\n          fi\n        fi\n      # we do not have to limit\n      elif [[ $NUMBER_WRITERS -eq 0 ]]; then\n        # TODO: kennt, What if node is SHUNNED?\n        change_server_status $LINENO $HOSTGROUP_WRITER_ID \"$server\" \"$port\" \"ONLINE\" \"$rdstat\"\\\n                             \"Changed state, marking write node ONLINE\"\n        echo \"1\" > ${reload_check_file}\n      fi\n    fi\n\n    # WSREP status is not ok, but the node is marked online, we should put it offline\n    # wsrep:not ok  pxc:--  status:online\n    if [ \"${wsrep_status}\" != \"4\" -a \"$stat\" = \"ONLINE\" ]; then\n      change_server_status $LINENO $HOSTGROUP_WRITER_ID \"$server\" \"$port\" \"OFFLINE_SOFT\" \"\" \\\n                           \"WSREP status is ${wsrep_status} which is not ok\"\n      echo \"1\" > ${reload_check_file}\n    # wsrep:--  pxc:not ok  status:online\n    elif [ \"${pxc_main_mode}\" != \"DISABLED\" -a \"$stat\" = \"ONLINE\" ]; then\n      change_server_status $LINENO $HOSTGROUP_WRITER_ID \"$server\" \"$port\" \"OFFLINE_SOFT\" \"\" \\\n                           \"pxc_maint_mode 
is $pxc_main_mode\" 2>>${ERR_FILE}\n      echo \"1\" > ${reload_check_file}\n    # wsrep:not ok  pxc:--  status:offline soft\n    elif [ \"${wsrep_status}\" != \"4\" -a \"$stat\" = \"OFFLINE_SOFT\" -a \"$rdstat\" != \"PRIORITY_NODE\" ]; then\n      log \"\" \"server $hostgroup:$address is already OFFLINE_SOFT, WSREP status is ${wsrep_status} which is not ok\"\n    # wsrep:--  pxc:not ok  status:offline soft\n    elif [ \"${pxc_main_mode}\" != \"DISABLED\" -a \"$stat\" = \"OFFLINE_SOFT\" -a \"$rdstat\" != \"PRIORITY_NODE\" ]; then\n      log \"\" \"server $hostgroup:$address is already OFFLINE_SOFT, pxc_maint_mode is ${pxc_main_mode} which is not ok\"\n    fi\n  done< <(printf \"${proxysql_list[@]}\\n\")\n}\n\n#\n# This function checks the status of slave machines and sets their status field\n#\n# Globals:\n#   PROXYSQL_DATADIR\n#   SLAVE_SECONDS_BEHIND\n#   HOSTGROUP_SLAVEREADER_ID\n#\n# Arguments:\n#   1: Path to the reload_check_file\n#   2: Slave hostgroup\n#   3: Slave IP address\n#   4: Slave port\n#   5: Slave status in ProxySQL\n#\nfunction set_slave_status() {\n  debug $LINENO \"START set_slave_status\"\n  local reload_check_file=$1\n  local ws_hg_id=$2\n  local ws_ip=$3\n  local ws_port=$4\n  local ws_status=$5\n  local ws_address\n  ws_address=$(combine_ip_port_into_address \"$ws_ip\" \"$ws_port\")\n  local node_id=\"${ws_hg_id}:${ws_address}\"\n\n  # This function will get and return a status of a slave node, 4=GOOD, 2=BEHIND, 0=OTHER\n  local slave_status\n\n  log $LINENO \"--> Checking SLAVE server ${node_id}\"\n\n  slave_status=$(mysql_exec $LINENO \"$ws_ip\" \"$ws_port\" -nsE \"SHOW SLAVE STATUS\")\n  check_cmd $LINENO $? \"Cannot get status from the slave $ws_address. Please check cluster login credentials\"\n\n  slave_status=$(echo \"$slave_status\" | sed 's/ //g')\n  echo \"$slave_status\" | grep \"^Master_Host:\" >/dev/null\n  if [ $? 
-ne 0 ];then\n    #\n    # No status was found, this is not replicating\n    # Only changing the status here as another node might be in the writer hostgroup\n    #\n    proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status = 'OFFLINE_HARD'  WHERE hostname='$ws_ip' and port=$ws_port;\"\n    check_cmd_and_exit $LINENO $? \"Cannot update Percona XtraDB Cluster node $ws_address to ProxySQL database (query failed). Exiting\"\n    log_if_success $LINENO $? \"slave server ${ws_address} set to OFFLINE_HARD status in ProxySQL (cannot determine slave status).\"\n    echo \"1\" > ${reload_check_file}\n  else\n    local slave_master_host slave_io_running slave_sql_running seconds_behind\n    slave_master_host=$(echo \"$slave_status\" | grep \"^Master_Host:\" | cut -d: -f2)\n    slave_io_running=$(echo \"$slave_status\" | grep \"^Slave_IO_Running:\" | cut -d: -f2)\n    slave_sql_running=$(echo \"$slave_status\" | grep \"^Slave_SQL_Running:\" | cut -d: -f2)\n    seconds_behind=$(echo \"$slave_status\" | grep \"^Seconds_Behind_Master:\" | cut -d: -f2)\n\n    if [ \"$seconds_behind\" == \"NULL\" ];then\n      #\n      # When slave_io is not working, the seconds behind value will read 'NULL',\n      # convert this to a number higher than the max\n      #\n      let seconds_behind=SLAVE_SECONDS_BEHIND+1\n    fi\n\n    if [ \"$slave_sql_running\" != \"Yes\" ];then\n      #\n      # Slave is not replicating, so set to OFFLINE_HARD\n      #\n      if [ \"$ws_status\" != \"OFFLINE_HARD\" ];then\n        proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status = 'OFFLINE_HARD' WHERE hostname='$ws_ip' and port=$ws_port;\"\n        check_cmd_and_exit $LINENO $? \"Cannot update Percona XtraDB Cluster node $ws_address to ProxySQL database (query failed). Exiting.\"\n        log_if_success $LINENO $? 
\"slave server ${ws_address} set to OFFLINE_HARD status in ProxySQL (io:$slave_io_running sql:$slave_sql_running).\"\n        echo \"1\" > ${reload_check_file}\n      else\n        log $LINENO \"slave server (${ws_hg_id}:${ws_address}) current status '$ws_status' in ProxySQL. (io:$slave_io_running sql:$slave_sql_running)\"\n      fi\n    elif [[ $slave_io_running == \"Yes\" ]]; then\n      #\n      # The slave is replicating (and the cluster is up)\n      # So set the status accordingly\n      #\n      if [ $seconds_behind -gt $SLAVE_SECONDS_BEHIND ];then\n        # Slave is more than the set number of seconds behind, return status 2\n        if [ \"$ws_status\" != \"OFFLINE_SOFT\" ];then\n          proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status = 'OFFLINE_SOFT' WHERE hostname='$ws_ip' and port=$ws_port;\"\n          check_cmd_and_exit $LINENO $? \"Cannot update Percona XtraDB Cluster node $ws_address to ProxySQL database (query failed). Exiting.\"\n          log_if_success $LINENO $? \"slave server ${ws_address} set to OFFLINE_SOFT status in ProxySQL (slave is too far behind:$seconds_behind). (io:$slave_io_running sql:$slave_sql_running)\"\n          echo \"1\" > ${reload_check_file}\n        else\n          log $LINENO \"slave server (${node_id}) current status '$ws_status' in ProxySQL. (io:$slave_io_running sql:$slave_sql_running)\"\n        fi\n      else\n        #\n        # The slave is replicating and is caught up (relatively)\n        # So it is ok for READs\n        #\n        if [ \"$ws_status\" != \"ONLINE\" ];then\n          proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status = 'ONLINE' WHERE hostgroup_id=$HOSTGROUP_SLAVEREADER_ID AND hostname='$ws_ip' and port=$ws_port;\"\n          check_cmd_and_exit $LINENO $? \"Cannot update Percona XtraDB Cluster node $ws_address in ProxySQL (query failed). Exiting.\"\n          log_if_success $LINENO $? \"slave server $HOSTGROUP_SLAVEREADER_ID:${ws_address} set to ONLINE status in ProxySQL. 
(io:$slave_io_running sql:$slave_sql_running)\"\n          if [[ $ws_hg_id -ne $HOSTGROUP_SLAVEREADER_ID ]]; then\n            log $LINENO \"slave server (${node_id}) current status '$ws_status' in ProxySQL. (io:$slave_io_running sql:$slave_sql_running)\"\n          fi\n          echo \"1\" > ${reload_check_file}\n        else\n          log $LINENO \"slave server (${node_id}) current status '$ws_status' in ProxySQL. (io:$slave_io_running sql:$slave_sql_running)\"\n        fi\n      fi\n    else\n      #\n      # Note: if slave_sql_running is YES and slave_io_running is NO\n      # This may indicate that the cluster is down, so we may move a\n      # slave to the WRITE hostgroup (but that is not done here).\n      # So leave it ONLINE in this state.\n      #\n      if [[ $ws_status == \"ONLINE\" ]]; then\n        log $LINENO \"slave server ${ws_address} status:'${ws_status}' maintained in case the cluster is down. (io:$slave_io_running sql:$slave_sql_running)\"\n      else\n        proxysql_exec $LINENO -Ns \"UPDATE mysql_servers SET status = 'OFFLINE_SOFT' WHERE hostname='$ws_ip' and port=$ws_port;\"\n        check_cmd_and_exit $LINENO $? \"Unable to update mysql_servers (query failed). Exiting.\"\n        log $LINENO \"slave server ${ws_address} status:'OFFLINE_SOFT'. 
(io:$slave_io_running sql:$slave_sql_running)\"\n        echo \"1\" > ${reload_check_file}\n      fi\n    fi\n  fi\n  debug $LINENO \"END set_slave_status\"\n}\n\n\nfunction update_readers() {\n  local reload_check_file=$1\n  if [[ $WRITER_IS_READER != \"ondemand\" ]]; then\n    reader_proxysql_query=\"SELECT hostgroup_id,\n                                  hostname,\n                                  port,\n                                  status,\n                                  comment,\n                                  'NULL'\n                            FROM mysql_servers\n                            WHERE hostgroup_id IN ($HOSTGROUP_READER_ID)\n                              AND status <> 'OFFLINE_HARD'\n                            ORDER BY weight DESC, hostname, port\"\n  elif [[ $WRITER_IS_READER == \"ondemand\" ]]; then\n    # We will not try to change the reader state of nodes that are writer ONLINE,\n    # so we ORDER BY writer.status ASC, which happens to sort ONLINE last\n    reader_proxysql_query=\"SELECT reader.hostgroup_id,\n           reader.hostname,\n           reader.port,\n           reader.status,\n           reader.comment,\n           writer.status\n    FROM mysql_servers as reader\n    LEFT JOIN mysql_servers as writer\n      ON writer.hostgroup_id = $HOSTGROUP_WRITER_ID\n      AND writer.hostname = reader.hostname\n      AND writer.port = reader.port\n    WHERE reader.hostgroup_id = $HOSTGROUP_READER_ID\n    ORDER BY writer.status ASC,\n             reader.weight DESC,\n             reader.hostname,\n             reader.port\"\n  fi\n\n  # This is the count of nodes that have readers with ONLINE status\n  # and writers that are not ONLINE (either no entry or !ONLINE)\n  local online_readonly_nodes_found=0\n  local query_result\n\n  query_result=$(proxysql_exec $LINENO -Ns \"$reader_proxysql_query\")\n  check_cmd_and_exit $LINENO $? \"Unable to obtain list of nodes from ProxySQL (query failed). 
Exiting.\"\n\n  if [[ $query_result =~ [[:space:]]SLAVEREAD[[:space:]] ]]; then\n    HAVE_SLAVEREADERS=1\n  fi\n\n  while read line; do\n    if [[ -z $line ]]; then\n      continue\n    fi\n\n    hostgroup=$(echo \"$line\" | cut -f1)\n    server=$(echo \"$line\" | cut -f2)\n    port=$(echo \"$line\" | cut -f3)\n    stat=$(echo \"$line\" | cut -f4)\n    comment=$(echo \"$line\" | cut -f5)\n    writer_stat=$(echo \"$line\" | cut -f6)\n\n    if [[ $comment == \"SLAVEREAD\" ]]; then\n      set_slave_status \"${reload_check_file}\" $hostgroup $server $port $stat\n      continue\n    fi\n    if [[ $stat == \"OFFLINE_HARD\" ]]; then\n      continue\n    fi\n\n    local wsrep_status\n    local pxc_main_mode\n    local result\n    local address\n    address=$(combine_ip_port_into_address \"$server\" \"$port\")\n    result=$(mysql_exec $LINENO \"$server\" \"$port\" -Nns \"SHOW STATUS LIKE 'wsrep_local_state'; SHOW VARIABLES LIKE 'pxc_maint_mode';\" 2>>${DEBUG_ERR_FILE})\n    wsrep_status=$(echo \"$result\" | grep \"wsrep_local_state\" | awk '{ print $2 }')\n    pxc_main_mode=$(echo \"$result\" | grep \"pxc_maint_mode\" | awk '{ print $2 }')\n\n    if [[ -z $wsrep_status ]]; then\n      wsrep_status=\"<unknown:query failed>\"\n    fi\n\n    # For PXC 5.6 there is no pxc_maint_mode, so assume DISABLED\n    if [[ -z $pxc_main_mode ]]; then\n      pxc_main_mode=\"DISABLED\"\n    fi\n\n    log \"\" \"--> Checking READ server $hostgroup:$address, current status $stat, wsrep_local_state $wsrep_status\"\n\n    if [[ $WRITER_IS_READER == \"ondemand\" && $writer_stat == \"ONLINE\" ]]; then\n      if [ $online_readonly_nodes_found -eq 0 ]; then\n        # WSREP:ok  PXC:ok  STATUS:online\n        if [ \"${wsrep_status}\" = \"4\" -a \"$stat\" == \"ONLINE\" -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n          log \"\" \"server $hostgroup:$address is already ONLINE, is also write node in ONLINE state, not enough non-ONLINE readers found\"\n        fi\n\n        # WSREP:ok  PXC:ok  
STATUS:not online\n        if [ \"${wsrep_status}\" = \"4\" -a \"$stat\" != \"ONLINE\"  -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n          #\n          # Enable the first one found as ONLINE\n          # (when writer-is-reader=ondemand)\n          #\n          change_server_status $LINENO $HOSTGROUP_READER_ID \"$server\" \"$port\" \"ONLINE\" \"\"\\\n                               \"marking ONLINE write node as read ONLINE state, not enough non-ONLINE readers found\"\n          echo \"1\" > ${reload_check_file}\n        fi\n      else\n        # WSREP:ok  PXC:ok  STATUS:online\n        if [ \"${wsrep_status}\" = \"4\" -a \"$stat\" == \"ONLINE\" -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n          # Else disable the other READ nodes\n          change_server_status $LINENO $HOSTGROUP_READER_ID \"$server\" \"$port\" \"OFFLINE_SOFT\" \"\"\\\n                               \"making ONLINE writer node read OFFLINE_SOFT as well because writers should not be readers\"\n          echo \"1\" > ${reload_check_file}\n        fi\n        # WSREP:ok  PXC:ok  STATUS:not online\n        if [ \"${wsrep_status}\" = \"4\" -a \"$stat\" != \"ONLINE\" -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n          log \"\" \"server $hostgroup:$address is $stat, keeping node as $stat, as it is an ONLINE writer and we prefer not to have writers as readers (writer-is-reader:ondemand)\"\n        fi\n      fi\n    else\n      # WSREP:ok  PXC:ok  STATUS:online\n      if [ \"${wsrep_status}\" = \"4\" -a \"$stat\" == \"ONLINE\" -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n        log \"\" \"server $hostgroup:$address is already ONLINE\"\n        online_readonly_nodes_found=$(( $online_readonly_nodes_found + 1 ))\n      # WSREP:ok  PXC:not ok  STATUS:not online\n      elif [ \"${wsrep_status}\" = \"4\" -a \"$stat\" != \"ONLINE\" -a \"${pxc_main_mode}\" != \"DISABLED\" ]; then\n        log \"\" \"server $hostgroup:$address is $stat\"\n      fi\n\n      # WSREP status OK, but node is 
not marked ONLINE\n      # WSREP:ok  PXC:ok  STATUS:not online\n      if [ \"${wsrep_status}\" = \"4\" -a \"$stat\" != \"ONLINE\" -a \"${pxc_main_mode}\" == \"DISABLED\" ]; then\n        change_server_status $LINENO $HOSTGROUP_READER_ID \"$server\" \"$port\" \"ONLINE\" \"\"\\\n                              \"changed state\"\n        echo \"1\" > ${reload_check_file}\n        online_readonly_nodes_found=$(( $online_readonly_nodes_found + 1 ))\n      fi\n    fi\n\n    # WSREP status is not ok, but the node is marked online, we should put it offline\n    # WSREP:not ok  STATUS:online\n    if [ \"${wsrep_status}\" != \"4\" -a \"$stat\" = \"ONLINE\" ]; then\n      change_server_status $LINENO $HOSTGROUP_READER_ID \"$server\" \"$port\" \"OFFLINE_SOFT\" \"\"\\\n                           \"WSREP status is ${wsrep_status} which is not ok\"\n      echo \"1\" > ${reload_check_file}\n    # PXC:not ok  STATUS:online\n    elif [ \"${pxc_main_mode}\" != \"DISABLED\" -a \"$stat\" = \"ONLINE\" ];then\n      change_server_status $LINENO $HOSTGROUP_READER_ID \"$server\" \"$port\" \"OFFLINE_SOFT\" \"\"\\\n                           \"pxc_maint_mode is $pxc_main_mode\" 2>>${ERR_FILE}\n      echo \"1\" > ${reload_check_file}\n    # WSREP:not ok  STATUS:offline soft\n    elif [ \"${wsrep_status}\" != \"4\" -a \"$stat\" = \"OFFLINE_SOFT\" ]; then\n      log \"\" \"server $hostgroup:$address is already OFFLINE_SOFT, WSREP status is ${wsrep_status} which is not ok\"\n    fi\n  done< <(printf \"$query_result\\n\")\n}\n\n\n# Looks specifically for nodes that are in the DONOR/DESYNCED(2) state\n#\n# Globals:\n#   NUMBER_WRITERS\n#   HOSTGROUP_WRITER_ID\n#\n# Arguments:\n#   None\n#\nfunction search_for_desynced_writers() {\n  local reload_check_file=$1\n  local cnt=0\n  local sort_order=\"ASC\"\n\n  # We want the writers to come before the readers\n  if [[ $HOSTGROUP_WRITER_ID -gt $HOSTGROUP_READER_ID ]]; then\n    sort_order=\"DESC\"\n  fi\n\n  local writer_query=\"SELECT hostgroup_id, 
hostname, port, status\n                      FROM mysql_servers\n                      WHERE hostgroup_id IN ($HOSTGROUP_WRITER_ID,$HOSTGROUP_READER_ID)\n                        AND status <> 'OFFLINE_HARD'\n                        AND comment <> 'SLAVEREAD'\n                      ORDER BY hostgroup_id ${sort_order}\"\n\n  query_result=$(proxysql_exec $LINENO -Ns \"$writer_query\")\n  check_cmd_and_exit $LINENO $? \"Could not get the list of nodes (query failed). Exiting.\"\n\n  while read line; do\n    if [[ -z $line ]]; then\n      continue\n    fi\n\n    hostgroup=$(echo \"$line\" | cut -f1)\n    server=$(echo \"$line\" | cut -f2)\n    port=$(echo \"$line\" | cut -f3)\n    stat=$(echo \"$line\" | cut -f4)\n\n    safety_cnt=0\n\n    while [ ${cnt} -lt $NUMBER_WRITERS -a ${safety_cnt} -lt 5 ]\n      do\n        local wsrep_status\n        local pxc_main_mode\n        local result\n        local address\n        address=$(combine_ip_port_into_address \"$server\" \"$port\")\n        result=$(mysql_exec $LINENO \"$server\" \"$port\" -Nns \"SHOW STATUS LIKE 'wsrep_local_state'; SHOW VARIABLES LIKE 'pxc_maint_mode';\" 2>>${DEBUG_ERR_FILE})\n        wsrep_status=$(echo \"$result\" | grep \"wsrep_local_state\" | awk '{ print $2 }')\n        pxc_main_mode=$(echo \"$result\" | grep \"pxc_maint_mode\" | awk '{ print $2 }')\n\n        if [[ -z $wsrep_status ]]; then\n          wsrep_status=\"<unknown:query failed>\"\n        fi\n\n        # PXC 5.6 does not have this, so default to DISABLED\n        if [[ -z $pxc_main_mode ]]; then\n          pxc_main_mode=\"DISABLED\"\n        fi\n\n        # Nodes in maintenance are not allowed\n        if [[ $pxc_main_mode != \"DISABLED\" ]]; then\n          log \"\" \"Skipping $hostgroup:$address node is in pxc_maint_mode:$pxc_main_mode\"\n          break\n        fi\n\n        log \"\" \"Checking $hostgroup:$address for node in DONOR state, status $stat , wsrep_local_state $wsrep_status\"\n        if [ \"${wsrep_status}\" = \"2\" -a 
\"$stat\" != \"ONLINE\" ]; then\n          # if we are on Donor/Desync and not online in mysql_servers -> proceed\n\n          # If we do not have a writer row, we have to add it\n          local writer_count=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_WRITER_ID AND hostname='$server' AND port=$port\")\n          check_cmd_and_exit $LINENO $? \"Could not get writer count (query failed). Exiting.\"\n\n          if [[ $writer_count -eq 0 ]]; then\n            proxysql_exec $LINENO -Ns \\\n              \"INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,comment,max_connections)\n                  VALUES ('$server',$HOSTGROUP_WRITER_ID,$port,1000000,'WRITE',$MAX_CONNECTIONS);\"\n            check_cmd_and_exit $LINENO $? \"Could not add writer node (query failed). Exiting.\"\n            log $LINENO \"Adding server $HOSTGROUP_WRITER_ID:$address with status ONLINE. Reason: WSREP status is DESYNC/DONOR, as this is the only node we will put this one online\"\n          else\n            change_server_status $LINENO $HOSTGROUP_WRITER_ID \"$server\" \"$port\" \"ONLINE\" \"\"\\\n                                 \"WSREP status is DESYNC/DONOR, as this is the only node we will put this one online\"\n          fi\n          cnt=$(( $cnt + 1 ))\n\n          local proxy_runtime_status\n          proxy_runtime_status=$(proxysql_exec $LINENO -Ns \"SELECT status FROM runtime_mysql_servers WHERE hostname='${server}' AND port='${port}' AND hostgroup_id='${hostgroup}'\")\n          check_cmd_and_exit $LINENO $? \"Could not get writer node status (query failed). Exiting.\"\n          if [ \"${proxy_runtime_status}\" != \"ONLINE\" ]; then\n            # if we are not online in runtime_mysql_servers, proceed to change\n            # the server status and reload mysql_servers\n            echo \"1\" > ${reload_check_file}\n          fi\n\n          # TODO: kennt, this doesn't work. 
We will go through the upper\n          # loop and move the writer to OFFLINE_SOFT (since it's desynced).\n          # We would have to detect that state (then we could move it to 0).\n          # Or we could only force the update if the tables are the same.\n          # (But only for the desynced node case).\n          # Maybe we can compare the table to itself, no updates needed if it\n          # hasn't changed.\n\n          # Note: If the node in the runtime is already ONLINE, then we don't\n          # need to change the reload_check_file to 1\n          # So if there was no change to the state, we will skip uploading\n          # the in-memory tables and nothing will change.  (This only makes\n          # sense if all the nodes are already down).  If any of the nodes are\n          # up, this node will be moved to OFFLINE_SOFT since it is not in\n          # the SYNCED(4) state.\n        fi\n        safety_cnt=$(( $safety_cnt + 1 ))\n    done\n  done< <(printf \"$query_result\\n\")\n}\n\n\n#\n# Looks for reader nodes that are in the DONOR/DESYNCED(2) state\n#\n# Globals:\n#   HOSTGROUP_READER_ID\n#\n# Arguments:\n#   None\n#\nfunction search_for_desynced_readers() {\n  local reload_check_file=$1\n  local cnt=0\n\n  query_result=$(proxysql_exec $LINENO -Ns \"SELECT hostgroup_id, hostname, port, status FROM mysql_servers WHERE hostgroup_id IN ($HOSTGROUP_READER_ID) AND status <> 'OFFLINE_HARD' AND comment <> 'SLAVEREAD'\")\n  check_cmd_and_exit $LINENO $? \"Could not get the list of nodes (query failed). 
Exiting.\"\n\n  while read line; do\n    if [[ -z $line ]]; then\n      continue\n    fi\n\n    hostgroup=$(echo \"$line\" | cut -f1)\n    server=$(echo \"$line\" | cut -f2)\n    port=$(echo \"$line\" | cut -f3)\n    stat=$(echo \"$line\" | cut -f4)\n\n    local safety_cnt=0\n    while [[ ${cnt} -eq 0 && ${safety_cnt} -lt 5 ]]\n      do\n        local wsrep_status\n        local pxc_main_mode\n        local result\n        local address\n        address=$(combine_ip_port_into_address \"$server\" \"$port\")\n        result=$(mysql_exec $LINENO \"$server\" \"$port\" -Nns \"SHOW STATUS LIKE 'wsrep_local_state'; SHOW VARIABLES LIKE 'pxc_maint_mode';\" 2>>${DEBUG_ERR_FILE})\n        wsrep_status=$(echo \"$result\" | grep \"wsrep_local_state\" | awk '{ print $2 }')\n        pxc_main_mode=$(echo \"$result\" | grep \"pxc_maint_mode\" | awk '{ print $2 }')\n\n        if [[ -z $wsrep_status ]]; then\n          wsrep_status=\"<unknown:query failed>\"\n        fi\n\n        # PXC 5.6 does not have this, so default to DISABLED\n        if [[ -z $pxc_main_mode ]]; then\n          pxc_main_mode=\"DISABLED\"\n        fi\n\n        # Nodes in maintenance are not allowed\n        if [[ $pxc_main_mode != \"DISABLED\" ]]; then\n          log \"\" \"Skipping $hostgroup:$address node is in pxc_maint_mode:$pxc_main_mode\"\n          break\n        fi\n\n        log \"\" \"Checking $hostgroup:$address for node in DONOR state, status $stat , wsrep_local_state $wsrep_status\"\n        if [ \"${wsrep_status}\" = \"2\" -a \"$stat\" != \"ONLINE\" ];then\n          # if we are on Donor/Desync and not online in mysql_servers -> proceed\n          change_server_status $LINENO $HOSTGROUP_READER_ID \"$server\" \"$port\" \"ONLINE\" \"\"\\\n                               \"WSREP status is DESYNC/DONOR, as this is the only node we will put this one online\"\n          cnt=$(( $cnt + 1 ))\n\n          local proxy_runtime_status\n          proxy_runtime_status=$(proxysql_exec $LINENO -Ns \"SELECT status 
FROM runtime_mysql_servers WHERE hostname='${server}' AND port='${port}' AND hostgroup_id='${hostgroup}'\")\n          check_cmd_and_exit $LINENO $? \"Could not get the runtime server status (query failed). Exiting.\"\n          if [ \"${proxy_runtime_status}\" != \"ONLINE\" ]; then\n            # if we are not online in runtime_mysql_servers,\n            # proceed to change the server status and reload mysql_servers\n            echo \"1\" > ${reload_check_file}\n          fi\n          # Note: If the node in the runtime is already ONLINE, then we don't\n          # need to change the reload_check_file to 1\n          # So if there was no change to the state, we will skip uploading\n          # the in-memory tables and nothing will change.  (This only makes\n          # sense if all the nodes are already down).  If any of the nodes are\n          # up, this node will be moved to OFFLINE_SOFT since it is not in\n          # the SYNCED(4) state.\n          break 2\n        fi\n        safety_cnt=$(( $safety_cnt + 1 ))\n    done\n  done< <(printf \"$query_result\\n\")\n}\n\n\n# Saves the reload_check_file value and resets it to 0\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: path to the reload_check_file\n#\nfunction save_reload_check() {\n  local reload_check_file=$1\n  local save_state\n\n  save_state=$(cat \"${reload_check_file}\")\n  echo 0 > \"${reload_check_file}\"\n  echo \"${save_state}\"\n}\n\n# Resets the restore_reload_check value if needed\n# If nothing has changed, restores the value\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: Path to the reload_check_file\n#   2: The saved (previous) value\n#\n# Returns:\n#   Returns 0 (success) if something changed\n#   Else returns 1\n#\nfunction restore_reload_check() {\n  local reload_check_file=$1\n  local save_state=$2\n  # Default return value is failure (non-zero)\n  local rc=1\n\n  if [[ $(cat ${reload_check_file}) -ne 0 ]]; then\n    save_state=1\n\n    # A success means that something changed\n    
rc=0\n  fi\n  echo \"${save_state}\" > \"${reload_check_file}\"\n  return $rc\n}\n\n\n# Adds slaves to the write hostgroup if needed\n#\n# Globals:\n#   HOSTGROUP_WRITER_ID\n#   HOSTGROUP_SLAVEREADER_ID\n#\n# Arguments:\n#   None\n#\nfunction add_slave_to_write_hostgroup() {\n  # General outline:\n  #   Delete all non-ONLINE slaves from the write hostgroup\n  #   Check for an ONLINE slave in the write hostgroup\n  #   If there is none, add a random slave to the write hostgroup\n  #   If no ONLINE slave can be found, look for OFFLINE_SOFT (emergency only)\n  local reload_check_file=$1\n  local -i online_count=0\n  local -i offline_count=0\n  local query_result=\"\"\n\n  query_result=$(proxysql_exec $LINENO -Ns \"SELECT status FROM mysql_servers\n                                            WHERE hostgroup_id=$HOSTGROUP_WRITER_ID\n                                            AND comment = 'SLAVEREAD'\")\n  check_cmd_and_exit $LINENO $? \"Could not get the list of slave writers (query failed). Exiting.\"\n\n  # Count # of online vs offline slave nodes\n  local status\n  while read line; do\n    if [[ -z $line ]]; then\n      continue\n    fi\n\n    status=$(echo $line | awk '{ print $1 }')\n    if [[ $status = 'ONLINE' ]]; then\n      online_count+=1\n    else\n      offline_count+=1\n    fi\n  done< <(printf \"$query_result\\n\")\n\n  # Remove any nodes that have been moved OFFLINE\n  if [[ $offline_count -gt 0 ]]; then\n    proxysql_exec $LINENO -Ns \"DELETE FROM mysql_servers\n                                WHERE hostgroup_id=$HOSTGROUP_WRITER_ID\n                                  AND status != 'ONLINE'\n                                  AND comment = 'SLAVEREAD'\"\n    check_cmd_and_exit $LINENO $? \"Failed to remove all non-ONLINE slaves acting as writers (query failed). Exiting.\"\n    log_if_success $LINENO $? 
\"Removed all non-ONLINE slaves acting as writers\"\n    echo 1 > \"${reload_check_file}\"\n  fi\n\n  # If there is an active slave writer node, no need to add another node\n  if [[ $online_count -eq 1 ]]; then\n    log $LINENO \"Async-slave already ONLINE\"\n    return\n  fi\n\n  if [[ $online_count -eq 0 ]]; then\n    # There are no ONLINE slaves, so add a random one from the\n    # list of ONLINE reader slaves\n    local next_host  host  port\n    next_host=$(proxysql_exec $LINENO -Ns \"select hostname,port FROM mysql_servers\n                                           WHERE status='ONLINE'\n                                           AND comment = 'SLAVEREAD'\n                                           AND hostgroup_id='$HOSTGROUP_SLAVEREADER_ID'\n                                           ORDER BY random() LIMIT 1\")\n    check_cmd_and_exit $LINENO $? \"Could not get info for a slave node (query failed). Exiting.\"\n\n    # Emergency situation, if there are no ONLINE hosts,\n    # look for OFFLINE_SOFT hosts\n    if [[ -z $next_host ]]; then\n      next_host=$(proxysql_exec $LINENO -Ns \"select hostname,port FROM mysql_servers\n                                           WHERE status='OFFLINE_SOFT'\n                                           AND comment = 'SLAVEREAD'\n                                           AND hostgroup_id='$HOSTGROUP_SLAVEREADER_ID'\n                                           ORDER BY random() LIMIT 1\")\n      check_cmd_and_exit $LINENO $? \"Could not get info for a slave node (query failed). 
Exiting.\"\n      if [[ -n $next_host ]]; then\n        local host=$(echo \"$next_host\" | awk '{ print $1 }')\n        local port=$(echo \"$next_host\" | awk '{ print $2 }')\n        local address=$(combine_ip_port_into_address \"$host\" \"$port\")\n\n        # Found a node, update the READER to ONLINE\n        proxysql_exec $LINENO -Ns \"UPDATE mysql_servers\n                                    SET status='ONLINE'\n                                    WHERE hostgroup_id=$HOSTGROUP_SLAVEREADER_ID\n                                      AND hostname='$host'\n                                      AND port=$port\"\n        check_cmd_and_exit $LINENO $? \"Could not update the info for a slave node (query failed). Exiting.\"\n        log_if_success $LINENO $? \"slave server ${HOSTGROUP_SLAVEREADER_ID}:$address set to ONLINE status in ProxySQL.\"\n        echo 1 > \"${reload_check_file}\"\n      fi\n    fi\n\n    if [[ -n $next_host ]]; then\n      local host=$(echo \"$next_host\" | awk '{ print $1 }')\n      local port=$(echo \"$next_host\" | awk '{ print $2 }')\n      local address=$(combine_ip_port_into_address \"$host\" \"$port\")\n\n      proxysql_exec $LINENO -Ns \"INSERT INTO mysql_servers\n            (hostname,hostgroup_id,port,weight,status,comment,max_connections)\n            VALUES ('$host',$HOSTGROUP_WRITER_ID,$port,1000000,'ONLINE','SLAVEREAD',$MAX_CONNECTIONS);\"\n      check_cmd_and_exit $LINENO $? \"Cannot add Percona XtraDB Cluster node $address to ProxySQL (query failed). Exiting.\"\n      log_if_success $LINENO $? 
\"slave server ${HOSTGROUP_WRITER_ID}:$address set to ONLINE status in ProxySQL.\"\n      echo 1 > \"${reload_check_file}\"\n    else\n      log $LINENO \"Could not find any slave readers to promote to a writer\"\n    fi\n  fi\n}\n\n# Removes slaves from the write hostgroup\n# (Actually moves the writers to OFFLINE_SOFT)\n#\n# Globals:\n#   HOSTGROUP_WRITER_ID\n#\n# Arguments:\n#   None\n#\nfunction remove_slave_from_write_hostgroup() {\n  local reload_check_file=$1\n  log $LINENO \"Removing async-slaves from the writegroup\"\n  # Move all writer slaves to OFFLINE_SOFT\n  proxysql_exec $LINENO -Ns \"UPDATE mysql_servers\n                             SET status = 'OFFLINE_SOFT'\n                             WHERE hostgroup_id = $HOSTGROUP_WRITER_ID\n                             AND comment = 'SLAVEREAD'\"\n  check_cmd_and_exit $LINENO $? \"Failed to move slave writers to OFFLINE_SOFT (query failed). Exiting.\"\n  log_if_success $LINENO $? \"Moved slave writers to OFFLINE_SOFT\"\n   echo 1 > \"${reload_check_file}\"\n}\n\n\n#\n# Globals:\n#   TIMEOUT\n#   MYSQL_USERNAME MYSQL_PASSWORD MYSQL_HOSTNAME MYSQL_PORT\n#   PROXYSQL_DATADIR\n#   HOSTGROUP_READER_ID HOSTGROUP_WRITER_ID\n#   CONFIG_FILE\n#   NUMBER_WRITERS\n#\nfunction main() {\n  local cluster_name=\"\"\n  local mysql_credentials\n  local scheduler_id\n\n  # If this call fails, exit out.  We can't do anything without the monitor\n  # credentials.  (Or if we can't connect to proxysql).\n  mysql_credentials=$(proxysql_exec $LINENO -Ns \"SELECT variable_value FROM global_variables WHERE variable_name IN ('mysql-monitor_username','mysql-monitor_password') ORDER BY variable_name DESC\" \"hide_output\")\n  check_cmd_and_exit $LINENO $? \"Unable to obtain MySQL credentials from ProxySQL (query failed). 
Exiting.\"\n\n  MYSQL_USERNAME=$(echo $mysql_credentials | awk '{print $1}')\n  MYSQL_PASSWORD=$(echo $mysql_credentials | awk '{print $2}')\n\n  # Search for the scheduler id that corresponds to our write hostgroup\n  scheduler_id=$(proxysql_exec $LINENO -Ns \"SELECT id FROM scheduler where arg1 LIKE '%--write-hg=$HOSTGROUP_WRITER_ID %' OR arg1 LIKE '%-w $HOSTGROUP_WRITER_ID %'\")\n  check_cmd_and_exit $LINENO $? \"Could not retrieve scheduler row (query failed). Exiting.\"\n  if [[ -z $scheduler_id ]]; then\n    error $LINENO \"Cannot find the scheduler row with write hostgroup : $HOSTGROUP_WRITER_ID\"\n  else\n    cluster_name=$(proxysql_exec $LINENO -Ns \"SELECT comment FROM scheduler WHERE id=${scheduler_id}\" 2>>$ERR_FILE)\n    check_cmd_and_exit $LINENO $? \"Cannot get the cluster name from ProxySQL (query failed). Exiting.\"\n  fi\n\n  if [[ -z $cluster_name ]]; then\n    #\n    # Could not find the cluster name from the scheduler,\n    # contact the cluster directly\n    #\n    local available_host=\"\"\n    local server port\n\n    # Find a PXC host that we can connect to\n    available_host=$(find_online_cluster_host)\n    check_cmd_and_exit $LINENO $? \"Cannot get the list of nodes in the cluster from ProxySQL (query failed). Exiting.\"\n\n    if [[ -z $available_host ]]; then\n      error $LINENO \"Cannot contact a PXC host in hostgroups($HOSTGROUP_WRITER_ID, $HOSTGROUP_READER_ID)\"\n    else\n      server=$(echo $available_host | awk '{print $1}')\n      port=$(echo $available_host | awk '{print $2}')\n\n      cluster_name=$(mysql_exec $LINENO \"$server\" \"$port\" -Nn \\\n          \"SELECT @@wsrep_cluster_name\" 2>>${ERR_FILE} | tail -1)\n    fi\n\n    if [[ ! 
-z $cluster_name ]]; then\n      #\n      # We've found the cluster name so update the scheduler args\n      #\n      if [[ -z $scheduler_id ]]; then\n        warning \"$LINENO\" \"Cannot update scheduler due to missing scheduler_id\"\n      else\n        local arg1 my_path\n        log \"$LINENO\" \"Updating scheduler for cluster:${cluster_name} write_hg:$HOSTGROUP_WRITER_ID\"\n        arg1=$(proxysql_exec $LINENO -Ns \"SELECT arg1 from scheduler where id=${scheduler_id}\")\n        check_cmd_and_exit $LINENO $? \"Could not get arg1 from the scheduler (query failed)\" \"Please check ProxySQL credentials and status.\"\n        if [[ -z $arg1 ]]; then\n          warning $LINENO \"Cannot update scheduler due to missing arguments (arg1 is empty)\"\n        else\n          my_path=$(echo ${PROXYSQL_DATADIR}/${cluster_name}_proxysql_galera_check.log | sed  's#\\/#\\\\\\/#g')\n          arg1=$(echo $arg1 | sed \"s/--log=.*/--log=$my_path/g\")\n\n          proxysql_exec $LINENO -Ns \"UPDATE scheduler set comment='$cluster_name',arg1='$arg1' where id=${scheduler_id};load scheduler to runtime;save scheduler to disk\"\n          check_cmd_and_exit $LINENO $? \"Could not update the scheduler (query failed). Exiting.\"\n        fi\n      fi\n    fi\n  fi\n\n  # Check to see if there are other proxysql_galera_checkers running\n  # Before we can check, we need the cluster name\n  check_is_galera_checker_running \"$cluster_name\"\n\n  local mode=$MODE\n  if [[ ! -z $P_MODE ]]; then\n    mode=$P_MODE\n  else\n    local proxysql_mode_file\n    if [[ -z $cluster_name ]]; then\n      proxysql_mode_file=\"${PROXYSQL_DATADIR}/mode\"\n    else\n      proxysql_mode_file=\"${PROXYSQL_DATADIR}/${cluster_name}_mode\"\n    fi\n\n    if [[ ! 
-f ${proxysql_mode_file} ]]; then\n      local mode_check\n      mode_check=$(proxysql_exec $LINENO -Ns \"SELECT comment from mysql_servers where comment='WRITE' and hostgroup_id in ($HOSTGROUP_WRITER_ID, $HOSTGROUP_READER_ID)\")\n      if [[ \"$mode_check\" == \"WRITE\" ]]; then\n        echo \"singlewrite\" > ${proxysql_mode_file}\n      else\n        echo \"loadbal\" > ${proxysql_mode_file}\n      fi\n    fi\n\n    if [[ -r $proxysql_mode_file ]]; then\n      mode=$(cat ${proxysql_mode_file})\n    fi\n  fi\n  MODE=$mode\n\n  # Running proxysql_node_monitor script.\n  # First try the same directory as this script\n  local monitor_dir\n  monitor_dir=$(cd $(dirname $0) && pwd)\n\n  if [[ ! -f $monitor_dir/proxysql_node_monitor ]]; then\n    # Cannot find it in same directory, try default location\n    monitor_dir=\"/usr/bin\"\n    if [[ ! -f $monitor_dir/proxysql_node_monitor ]]; then\n      log \"\" \"ERROR! Could not find proxysql_node_monitor. Terminating\"\n      exit 1\n    fi\n  fi\n\n  # If the reload_check_file contains 1, then there was a change\n  # made to the state and we need to upload the MYSQL SERVERS to runtime.\n  # This is used because changes are made to the MYSQL SERVERS table\n  # in other scripts/subshells.\n  local reload_check_file\n  if [[ -z $cluster_name ]]; then\n    reload_check_file=\"${PROXYSQL_DATADIR}/reload\"\n  else\n    reload_check_file=\"${PROXYSQL_DATADIR}/${cluster_name}_reload\"\n  fi\n  echo \"0\" > ${reload_check_file}\n\n  # Run the monitor script\n  local proxysql_monitor_log\n  if [[ -n $NODE_MONITOR_LOG_FILE ]]; then\n    proxysql_monitor_log=$NODE_MONITOR_LOG_FILE\n  else\n    if [[ -z $cluster_name ]]; then\n      proxysql_monitor_log=\"${PROXYSQL_DATADIR}/proxysql_node_monitor.log\"\n    else\n      proxysql_monitor_log=\"${PROXYSQL_DATADIR}/${cluster_name}_proxysql_node_monitor.log\"\n    fi\n  fi\n\n  local more_monitor_options=\"\"\n  if [[ $DEBUG -ne 0 ]]; then\n    more_monitor_options+=\" --debug \"\n  fi\n\n  
# TODO: kennt, do we need to check the return code?  do we care?\n  $monitor_dir/proxysql_node_monitor --config-file=$CONFIG_FILE \\\n                                     --write-hg=$HOSTGROUP_WRITER_ID \\\n                                     --read-hg=$HOSTGROUP_READER_ID \\\n                                     --mode=$mode \\\n                                     --reload-check-file=\"$reload_check_file\" \\\n                                     --log-text=\"$LOG_TEXT\" \\\n                                     --max-connections=\"$MAX_CONNECTIONS\" \\\n                                     --log=\"$proxysql_monitor_log\" $more_monitor_options\n\n  # print information prior to a run if ${ERR_FILE} is defined\n  log \"\" \"###### proxysql_galera_checker.sh SUMMARY ######\"\n  log \"\" \"Hostgroup writers $HOSTGROUP_WRITER_ID\"\n  log \"\" \"Hostgroup readers $HOSTGROUP_READER_ID\"\n  log \"\" \"Number of writers $NUMBER_WRITERS\"\n  log \"\" \"Writers are readers $WRITER_IS_READER\"\n  log \"\" \"Log file          $ERR_FILE\"\n  log \"\" \"Mode              $MODE\"\n  if [[ -n $P_PRIORITY ]]; then\n    log \"\" \"Priority          $P_PRIORITY\"\n  fi\n  if [[ -n $LOG_TEXT ]]; then\n    log \"\" \"Extra notes       $LOG_TEXT\"\n  fi\n\n  local number_readers_online=0\n  local number_writers_online=0\n  local save_reload_state=0\n\n\n  log \"\" \"###### HANDLE WRITER NODES ######\"\n  update_writers \"${reload_check_file}\"\n  number_writers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_WRITER_ID AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\" 2>>${ERR_FILE})\n  check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). 
Exiting.\"\n\n  # Check to see if we need to add readers for any writer nodes\n  # (depends on the writer-is-reader setting)\n  number_readers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id IN ($HOSTGROUP_READER_ID) AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\")\n  check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). Exiting.\"\n\n  save_reload_state=$(save_reload_check \"${reload_check_file}\")\n\n  writer_is_reader_check \"$reload_check_file\" \"$number_readers_online\"\n\n  # Something changed\n  if restore_reload_check \"${reload_check_file}\" $save_reload_state; then\n    number_readers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id IN ($HOSTGROUP_READER_ID) AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\")\n    check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). Exiting.\"\n  fi\n\n\n  if [ ${HOSTGROUP_READER_ID} -ne -1 ]; then\n    log \"\" \"###### HANDLE READER NODES ######\"\n    save_reload_state=$(save_reload_check \"${reload_check_file}\")\n\n    update_readers \"${reload_check_file}\"\n\n    # Something changed\n    if restore_reload_check \"${reload_check_file}\" $save_reload_state; then\n      number_readers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id IN ($HOSTGROUP_READER_ID) AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\")\n      check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). 
Exiting.\"\n    fi\n  fi\n\n  if [[ $MODE != \"loadbal\" ]]; then\n    # If we have no writers, check if we can create one (just one)\n    if [[ $number_writers_online -eq 0 ]]; then\n      save_reload_state=$(save_reload_check \"${reload_check_file}\")\n\n      ensure_one_writer_node \"${reload_check_file}\"\n\n      # Something changed\n      if restore_reload_check \"${reload_check_file}\" $save_reload_state; then\n        # Check to see if we need to add readers for any writer nodes\n        # (depends on the writer-is-reader setting)\n        writer_is_reader_check \"$reload_check_file\" \"$number_readers_online\"\n        echo 1 > \"${reload_check_file}\"\n        number_readers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id IN ($HOSTGROUP_READER_ID) AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\")\n        check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). Exiting.\"\n        number_writers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_WRITER_ID AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\" 2>>${ERR_FILE})\n        check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). Exiting.\"\n      fi\n    else\n      debug $LINENO \"writer nodes found: $number_writers_online, no need to add more\"\n    fi\n  fi\n\n\n  log \"\" \"###### SUMMARY ######\"\n  log \"\" \"--> Number of writers that are 'ONLINE': ${number_writers_online} : hostgroup: ${HOSTGROUP_WRITER_ID}\"\n  [[ ${HOSTGROUP_READER_ID} -ne -1 ]] && log \"\" \"--> Number of readers that are 'ONLINE': ${number_readers_online} : hostgroup: ${HOSTGROUP_READER_ID}\"\n\n\n  # We don't have any writers... 
alert, try to bring some online!\n  # This includes bringing a DONOR online\n  if [[ ${number_writers_online} -eq 0 ]]; then\n    log \"\" \"###### TRYING TO FIX MISSING WRITERS ######\"\n    log \"\" \"No writers found, Trying to enable last available node of the cluster (in Donor/Desync state)\"\n    save_reload_state=$(save_reload_check \"${reload_check_file}\")\n\n    search_for_desynced_writers \"${reload_check_file}\"\n\n    # Something changed\n    if restore_reload_check \"${reload_check_file}\" $save_reload_state; then\n      number_writers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id=$HOSTGROUP_WRITER_ID AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\" 2>>${ERR_FILE})\n      check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). Exiting.\"\n    fi\n  fi\n\n\n  # We don't have any readers... alert, try to bring some online!\n  if [[  ${HOSTGROUP_READER_ID} -ne -1 && ${number_readers_online} -eq 0 ]]; then\n    log \"\" \"###### TRYING TO FIX MISSING READERS ######\"\n    log \"\" \"--> No readers found, Trying to enable last available node of the cluster (in Donor/Desync state) or pick the master\"\n    save_reload_state=$(save_reload_check \"${reload_check_file}\")\n\n    search_for_desynced_readers \"${reload_check_file}\"\n\n    # Something changed\n    if restore_reload_check \"${reload_check_file}\" $save_reload_state; then\n      number_readers_online=$(proxysql_exec $LINENO -Ns \"SELECT count(*) FROM mysql_servers WHERE hostgroup_id IN ($HOSTGROUP_READER_ID) AND status = 'ONLINE' AND comment <> 'SLAVEREAD'\")\n      check_cmd_and_exit $LINENO $? \"Could not get node count (query failed). 
Exiting.\"\n    fi\n  fi\n\n  # Check to see if we need to enable the slaves as readers/writers\n  if [[ $HAVE_SLAVEREADERS -eq 1 ]]; then\n    save_reload_state=$(save_reload_check \"${reload_check_file}\")\n\n    log $LINENO \"###### ASYNC-SLAVE ACTIVITY ######\"\n\n    # If use-slave-as-writer is set, then we do not allow slaves to\n    # be added, however we still call the remove slave writers\n    # in case the option was just changed.\n\n    if [[ $SLAVE_IS_WRITER == \"yes\" && $number_readers_online -eq 0 && $number_writers_online -eq 0 ]]; then\n      # No cluster nodes active, add a slave to the writer hostgroup\n      debug $LINENO \"Nothing in the cluster, checking slavereaders\"\n      add_slave_to_write_hostgroup \"${reload_check_file}\"\n    elif [[ $HAVE_SLAVEWRITERS -eq 1 ]]; then\n      # Active cluster nodes discovered, remove all write nodes\n      debug $LINENO \"Found a cluster, removing slavereaders (acting as writers)\"\n      remove_slave_from_write_hostgroup \"${reload_check_file}\"\n    fi\n\n    local query_result\n    local -i num_slave_writers=0\n    local -i num_slave_readers=0\n\n    query_result=$(proxysql_exec $LINENO -Ns \"SELECT hostgroup_id FROM mysql_servers\n                                            WHERE hostgroup_id IN ($HOSTGROUP_WRITER_ID, $HOSTGROUP_READER_ID, $HOSTGROUP_SLAVEREADER_ID)\n                                            AND comment = 'SLAVEREAD'\n                                            AND status = 'ONLINE'\")\n    check_cmd_and_exit $LINENO $? \"Could not get the list of slave nodes (query failed). 
Exiting.\"\n    while read line; do\n      if [[ -z $line ]]; then\n        continue\n      fi\n\n      if [[ $line = \"$HOSTGROUP_WRITER_ID\" ]]; then\n        num_slave_writers+=1\n      elif [[ $line = \"$HOSTGROUP_READER_ID\" ]]; then\n        num_slave_readers+=1\n      fi\n    done< <(printf \"$query_result\\n\")\n\n    log \"\" \"--> Number of slave writers that are 'ONLINE': ${num_slave_writers} : hostgroup: ${HOSTGROUP_WRITER_ID}\"\n    [[ ${HOSTGROUP_READER_ID} -ne -1 ]] && log \"\" \"--> Number of slave readers that are 'ONLINE': ${num_slave_readers} : hostgroup: ${HOSTGROUP_READER_ID}\"\n\n    restore_reload_check \"${reload_check_file}\" $save_reload_state\n  fi\n\n\n  if [[ $(cat ${reload_check_file}) -ne 0 ]]; then\n      log \"\" \"###### Loading mysql_servers config into runtime ######\"\n      proxysql_exec $LINENO -Ns \"LOAD MYSQL SERVERS TO RUNTIME;\" 2>>${ERR_FILE}\n      check_cmd_and_exit $LINENO $? \"Could not update the mysql_servers table in ProxySQL. Exiting.\"\n  else\n      log \"\" \"###### Not loading mysql_servers, no change needed ######\"\n  fi\n}\n\n\n#-------------------------------------------------------------------------------\n#\n# Step 4 : Begin script execution\n#\n\n#\n# In the initial version, ProxySQL has 5 slots for command-line arguments\n# and would send them separately (each enclosed in double quotes,\n# such as \"arg1\" \"arg2\" etc..).\n#\n# We now configure all parameters in the arg1 field (since we need more\n# than 5 arguments).  
This means that we now receive all the arguments\n# in one parameter \"arg1 arg2 arg3 arg4 arg5\"\n#\n# This means that anytime this script is called with > 1 argument, we\n# assume that the old method is in use and we need to upgrade the script\n# to the new method (in upgrade_scheduler).\n#\n\nif [ \"$#\" -eq 1 ]; then\n  # Below set will reshuffle parameters.\n  # example arg1=\" --one=1 --two=2\" will result in:\n  # $1 = --one=1\n  # $2 = --two=2\n  #\n  # The eval is needed here to preserve whitespace in the arguments\n  eval set -- $1\nelse\n  # Parse the arguments\n  declare param value\n\n  # We don't need all the options here, just the log/debug options\n  while [[ $# -gt 0 && \"$1\" != \"\" ]]; do\n      param=`echo $1 | awk -F= '{print $1}'`\n      value=`echo $1 | awk -F= '{print $2}'`\n\n      # Assume that all options start with a '-'\n      # otherwise treat as a positional parameter and skip over it\n      # (the shift here avoids an infinite loop on positional parameters)\n      if [[ ! $param =~ ^- ]]; then\n        shift\n        continue\n      fi\n      case $param in\n        -h | --help)\n          usage\n          exit 0\n          ;;\n        --debug)\n          DEBUG=1\n          ;;\n        -v | --version)\n          echo \"proxysql_galera_checker version $PROXYSQL_ADMIN_VERSION\"\n          exit 0\n          ;;\n        --log)\n          if [[ -n $value ]]; then\n            ERR_FILE=$value\n          fi\n          ;;\n      esac\n      shift\n  done\n\n  echo \"Old config detected (more than one parameter)... trying to upgrade\"\n\n  if [[ $DEBUG -eq 1 ]]; then\n    # For now\n    if [[ -z $ERR_FILE && -t 1 ]]; then\n      ERR_FILE=/dev/stderr\n    fi\n  fi\n\n  upgrade_scheduler\n  exit 1\nfi\n\ntrap cleanup_handler EXIT\n\nparse_args \"$@\"\nmain\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/proxysql_node_monitor",
    "content": "#!/bin/bash -u\n# This script will assist to setup Percona XtraDB cluster ProxySQL monitoring script.\n#####################################################################################\n\n\n#-------------------------------------------------------------------------------\n#\n# Step 1 : Bash internal configuration\n#\n\nset -o nounset    # no undefined variables\nset -o pipefail   # internal pipe failures cause an exit\n\n#bash prompt internal configuration\ndeclare RED=\"\"\ndeclare NRED=\"\"\n\n#-------------------------------------------------------------------------------\n#\n# Step 2 : Global variables\n#\n\ndeclare -i  DEBUG=0\nreadonly    PROXYSQL_ADMIN_VERSION=\"1.4.16\"\n\ndeclare     CONFIG_FILE=\"/etc/proxysql-admin.cnf\"\ndeclare     ERR_FILE=\"/dev/null\"\ndeclare     RELOAD_CHECK_FILE=\"/var/lib/proxysql/reload\"\n\n# Set to send output here when DEBUG is set\ndeclare     DEBUG_ERR_FILE=\"/dev/null\"\n\ndeclare -i  WRITE_HOSTGROUP_ID=10\ndeclare -i  READ_HOSTGROUP_ID=11\ndeclare -i  SLAVEREAD_HOSTGROUP_ID=11\n\n# This is the hostgroup that new nodes will be added to\ndeclare -i  DEFAULT_HOSTGROUP_ID=10\n\ndeclare     MODE=\"loadbal\"\n\ndeclare     CHECK_STATUS=0\n\ndeclare     PROXYSQL_DATADIR='/var/lib/proxysql'\n\ndeclare -i  TIMEOUT=10\n\n# Maximum time to wait for cluster status\ndeclare -i  CLUSTER_TIMEOUT=3\n\n# Extra text that will be logged with the output\n# (useful for debugging/testing)\ndeclare     LOG_TEXT=\"Vivaldi\"\n\n# Default value for max_connections in mysql_servers\ndeclare     MAX_CONNECTIONS=\"1000\"\n\n\n#-------------------------------------------------------------------------------\n#\n# Step 3 : Helper functions\n#\n\nfunction log() {\n  local lineno=$1\n  shift\n\n  if [[ -n $ERR_FILE ]]; then\n    if [[ -n $lineno && $DEBUG -ne 0 ]]; then\n      echo \"[$(date +%Y-%m-%d\\ %H:%M:%S)] (line $lineno) $*\" >> $ERR_FILE\n    else\n      echo \"[$(date +%Y-%m-%d\\ %H:%M:%S)] $*\" >> $ERR_FILE\n    fi\n  
fi\n}\n\nfunction log_if_success() {\n  local lineno=$1\n  local rc=$2\n  shift 2\n\n  if [[ $rc -eq 0 ]]; then\n    log \"$lineno\" \"$*\"\n  fi\n}\n\nfunction error() {\n  local lineno=$1\n  shift\n\n  log \"$lineno\" \"ERROR: $*\"\n}\n\nfunction warning() {\n  local lineno=$1\n  shift\n\n  log \"$lineno\" \"WARNING: $*\"\n}\n\nfunction debug() {\n  if [[ $DEBUG -eq 0 ]]; then\n    return\n  fi\n\n  local lineno=$1\n  shift\n\n  log \"$lineno\" \"${RED}debug: $*${NRED}\"\n}\n\n\nfunction usage () {\n  local path=$0\n  cat << EOF\nUsage: ${path##*/} [ options ]\n\nExample:\n  proxysql_node_monitor --write-hg=10 --read-hg=11 --config-file=/etc/proxysql-admin.cnf --log=/var/lib/proxysql/pxc_test_proxysql_galera_check.log\n\nOptions:\n  -w, --write-hg=<NUMBER>             Specify ProxySQL write hostgroup.\n  -r, --read-hg=<NUMBER>              Specify ProxySQL read hostgroup.\n  -m, --mode=[loadbal|singlewrite]    ProxySQL read/write configuration mode, currently supporting the 'loadbal' (default) and 'singlewrite' modes\n  -p, --priority=<HOST_LIST>          Comma-delimited list of write nodes in priority order\n  -c, --config-file=PATH              Specify ProxySQL-admin configuration file.\n  -l, --log=PATH                      Specify proxysql_node_monitor log file.\n  --log-text=TEXT                     This is text that will be written to the log file\n                                      whenever this script is run (useful for debugging).\n  --reload-check-file=PATH            Specify file used to notify proxysql_galera_checker\n                                      of a change in server configuration\n  --max-connections=<NUMBER>          Value for max_connections in the mysql_servers table.\n                                      This is the maximum number of connections that\n                                      ProxySQL will open to the backend servers.\n                                      (default: 1000)\n  --debug                             Enables 
additional debug logging.\n  -h, --help                          Display script usage information\n  -v, --version                       Print version info\nEOF\n}\n\n\n# Check the permissions for a file or directory\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: the bash test to be applied to the file\n#   2: the lineno where this call is invoked (used for errors)\n#   3: the path to the file\n#   4: (optional) description of the path (mostly used for existence checks)\n#\n# Exits the script if the permissions test fails.\n#\nfunction check_permission() {\n  local permission=$1\n  local lineno=$2\n  local path_to_check=$3\n  local description=\"\"\n  if [[ $# -gt 3 ]]; then\n    description=\"$4\"\n  fi\n\n  if [ ! $permission \"$path_to_check\" ]; then\n    if [[ $permission == \"-r\" ]]; then\n      error $lineno \"You do not have READ permission for: $path_to_check\"\n    elif [[ $permission == \"-w\" ]]; then\n      error $lineno \"You do not have WRITE permission for: $path_to_check\"\n    elif [[ $permission == \"-x\" ]]; then\n      error $lineno \"You do not have EXECUTE permission for: $path_to_check\"\n    elif [[ $permission == \"-e\" ]]; then\n      if [[ -n $description ]]; then\n        error $lineno \"Could not find the $description: $path_to_check\"\n      else\n        error $lineno \"Could not find: $path_to_check\"\n      fi\n    elif [[ $permission == \"-d\" ]]; then\n      if [[ -n $description ]]; then\n        error $lineno \"Could not find the $description: $path_to_check\"\n      else\n        error $lineno \"Could not find the directory: $path_to_check\"\n      fi\n    elif [[ $permission == \"-f\" ]]; then\n      if [[ -n $description ]]; then\n        error $lineno \"Could not find the $description: $path_to_check\"\n      else\n        error $lineno \"Could not find the file: $path_to_check\"\n      fi\n    else\n      error $lineno \"You do not have the correct permissions for: $path_to_check\"\n    fi\n    exit 1\n  
fi\n}\n\n\n# Checks the return code from a command and logs any failure\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: the lineno\n#   2: the return value that is being checked\n#   3: the error message\n#   4: Additional information (only used if an error occurred) (optional)\n#\n# Returns:\n#   Returns the return value that is passed in.\n#   This allows the code that follows to check the return value.\n#\n# Note that this will NOT exit the script.\n#\nfunction check_cmd() {\n  local lineno=$1\n  local mpid=$2\n  local error_msg=$3\n  local error_info=\"\"\n\n  if [[ $# -ge 4 ]]; then\n    error_info=$4\n  fi\n\n  if [ \"$mpid\" == \"124\" ]; then\n    error $lineno \"TIMEOUT: Connection terminated due to timeout.\"\n  fi\n  if [ ${mpid} -ne 0 ]; then\n    warning $lineno \"$error_msg.\"\n    if [[ ! -z  $error_info ]]; then\n      log $lineno \"$error_info.\"\n    fi\n  fi\n  return $mpid\n}\n\n# Executes a SQL query against the (fully) specified server\n#\n# Globals:\n#   None\n#\n# Arguments:\n#   1: lineno\n#   2: the name of the user\n#   3: the user's password\n#   4: the hostname of the server\n#   5: the port used to connect to the server\n#   6: timeout in secs\n#   7: arguments to the mysql client\n#   8: additional options to the [client] config\n#   9: the query to be run\n#   10: additional options, space separated\n#      Available options:\n#       \"hide_output\"\n#         This will not show the output of the query when DEBUG is set.\n#         Used to keep sensitive information (such as passwords)\n#         from being displayed when debugging.\n#\nfunction exec_sql() {\n  local lineno=$1\n  local user=$2\n  local password=$3\n  local hostname=$4\n  local port=$5\n  local timeout_secs=$6\n  local args=$7\n  local client_options=$8\n  local query=\"$9\"\n  local more_options=\"${10}\"\n  local retvalue\n  local retoutput\n\n  debug \"$lineno\" \"exec_sql : $user@$hostname:$port ==> $query\"\n\n  retoutput=$(printf 
\"[client]\\n${client_options}\\nuser=${user}\\npassword=\\\"${password}\\\"\\nhost=${hostname}\\nport=${port}\"  \\\n      | timeout ${timeout_secs} mysql --defaults-file=/dev/stdin --protocol=tcp \\\n              ${args} -e \"$query\")\n  retvalue=$?\n\n  if [[ $DEBUG -eq 1 ]]; then\n    local number_of_newlines=0\n    local dbgoutput=$retoutput\n\n    if [[ \" $more_options \" =~ [[:space:]]hide_output[[:space:]] ]]; then\n      dbgoutput=\"**** data hidden ****\"\n    fi\n\n    if [[ -n $dbgoutput ]]; then\n      number_of_newlines=$(printf \"%s\" \"${dbgoutput}\" | wc -l)\n    fi\n\n    if [[  $retvalue -ne 0 ]]; then\n      debug \"\" \"--> query failed $retvalue\"\n    elif [[ -z $dbgoutput ]]; then\n      debug \"\" \"--> query returned $retvalue : <query returned no data>\"\n    elif [[ ${number_of_newlines} -eq 0 ]]; then\n      debug \"\" \"--> query returned $retvalue : ${dbgoutput}\"\n    else\n      debug \"\" \"--> query returned $retvalue : <data follows>\"\n      printf \"${dbgoutput//%/%%}\\n\" | while IFS= read -r line; do\n        debug \"\" \"----> $line\"\n      done\n    fi\n  fi\n\n  printf \"${retoutput//%/%%}\"\n  return $retvalue\n}\n\n\n# Executes a SQL query on proxysql (with a timeout of $TIMEOUT seconds)\n#\n# Globals:\n#   PROXYSQL_USERNAME\n#   PROXYSQL_PASSWORD\n#   PROXYSQL_HOSTNAME\n#   PROXYSQL_PORT\n#   TIMEOUT\n#\n# Arguments:\n#   1: lineno (used for debugging/output, may be blank)\n#   2: The SQL query\n#   3: (optional) more options, see exec_sql\n#\nfunction proxysql_exec() {\n  local lineno=$1\n  local query=\"$2\"\n  local more_options=\"\"\n  local retoutput\n\n  if [[ $# -ge 3 ]]; then\n    more_options=$3\n  fi\n\n  exec_sql \"$lineno\" \"$PROXYSQL_USERNAME\" \"$PROXYSQL_PASSWORD\" \\\n           \"$PROXYSQL_HOSTNAME\" \"$PROXYSQL_PORT\" \\\n           \"$TIMEOUT\" \"-Bs\" \"\" \"$query\" \"$more_options\"\n  retoutput=$?\n  return $retoutput\n}\n\n# Executes a SQL query on mysql (with a timeout of $TIMEOUT 
secs)\n#\n# Globals:\n#   CLUSTER_USERNAME\n#   CLUSTER_PASSWORD\n#   CLUSTER_HOSTNAME\n#   CLUSTER_PORT\n#   CLUSTER_TIMEOUT\n#\n# Arguments:\n#   1: lineno (used for debugging/output, may be blank)\n#   2: the query to be run\n#   3: (optional) more options, see exec_sql\n#\nfunction mysql_exec() {\n  local lineno=$1\n  local query=$2\n  local more_options=\"\"\n  local retoutput\n\n  if [[ $# -ge 3 ]]; then\n    more_options=$3\n  fi\n\n  exec_sql \"$lineno\" \"$CLUSTER_USERNAME\" \"$CLUSTER_PASSWORD\" \\\n           \"$CLUSTER_HOSTNAME\" \"$CLUSTER_PORT\" \\\n           \"$TIMEOUT\" \"-Bs\" \"connect-timeout=${CLUSTER_TIMEOUT}\" \"$query\" \"$more_options\"\n  retoutput=$?\n  return $retoutput\n}\n\n\n# Executes a SQL query on mysql (with a timeout of $TIMEOUT secs)\n#\n# Globals:\n#   CLUSTER_USERNAME\n#   CLUSTER_PASSWORD\n#   CLUSTER_TIMEOUT\n#\n# Arguments:\n#   1: lineno (used for debugging/output, may be blank)\n#   2: the hostname of the server\n#   3: the port used to connect to the server\n#   4: the query to be run\n#   5: (optional) more options, see exec_sql\n#\nfunction slave_exec() {\n  local lineno=$1\n  local hostname=$2\n  local port=$3\n  local query=$4\n  local more_options=\"\"\n  local timeout_secs=$TIMEOUT\n  local retoutput\n\n  if [[ $# -ge 5 ]]; then\n    more_options=$5\n  fi\n\n\n  exec_sql \"$lineno\" \"$CLUSTER_USERNAME\" \"$CLUSTER_PASSWORD\" \\\n           \"$hostname\" \"$port\" \\\n           \"$timeout_secs\" \"-Bs\" \"\" \"$query\" \"$more_options\"\n  retoutput=$?\n  return $retoutput\n}\n\n# Separates the IP address from the port in a network address\n# Works for IPv4 and IPv6\n#\n# Globals:\n#   None\n#\n# Params:\n#   1. 
The network address to be parsed\n#\n# Outputs:\n#   A string with a space separating the IP address from the port\n#\nfunction separate_ip_port_from_address()\n{\n  #\n  # Break address string into host:port/path parts\n  #\n  local address=$1\n\n  # Has to have at least one ':' to separate the port from the ip address\n  if [[ $address =~ : ]]; then\n    ip_addr=${address%:*}\n    port=${address##*:}\n  else\n    ip_addr=$address\n    port=\"\"\n  fi\n\n  # Remove any braces that surround the ip address portion\n  ip_addr=${ip_addr#\\[}\n  ip_addr=${ip_addr%\\]}\n\n  echo \"${ip_addr} ${port}\"\n}\n\n# Combines the IP address and port into a network address\n# Works for IPv4 and IPv6\n# (If the IP address is IPv6, the IP portion will have brackets)\n#\n# Globals:\n#   None\n#\n# Params:\n#   1: The IP address portion\n#   2: The port\n#\n# Outputs:\n#   A string containing the full network address\n#\nfunction combine_ip_port_into_address()\n{\n  local ip_addr=$1\n  local port=$2\n  local addr\n\n  if [[ ! $ip_addr =~ \\[.*\\] && $ip_addr =~ .*:.* ]]; then\n    # If there are no brackets and it does have a ':', then add the brackets\n    # because this is an unbracketed IPv6 address\n    addr=\"[${ip_addr}]:${port}\"\n  else\n    addr=\"${ip_addr}:${port}\"\n  fi\n  echo $addr\n}\n\n\n# Update Percona XtraDB Cluster nodes in ProxySQL database\n# This will take care of nodes that have gone up or gone down\n# (i.e. 
if the ProxySQL and PXC memberships differ).\n#\n# This does not take care of the policy issues, it does not\n# ensure there is a writer.\n#\n# Globals:\n#   WRITE_HOSTGROUP_ID\n#   READ_HOSTGROUP_ID\n#   SLAVEREAD_HOSTGROUP_ID\n#   MODE\n#   MODE_COMMENT\n#   CHECK_STATUS\n#\n# Arguments:\n#   1: active cluster host (may be empty if cluster is offline)\n#   2: active cluster port (may be empty if cluster is offline)\n#\nfunction update_cluster() {\n  debug $LINENO \"START update_cluster\"\n  local cluster_host=$1\n  local cluster_port=$2\n  local host_info=\"\"\n  local current_hosts=\"\"\n  local is_current_hosts_empty=0\n  local wsrep_address=\"\"\n  local ws_address\n  local ws_ip\n  local ws_port\n  local ws_hg_status\n  local ws_hg_id\n  local ws_status\n  local ws_comment\n\n  # get all nodes from ProxySQL in use by hostgroups\n  host_info=$(proxysql_exec $LINENO \"SELECT DISTINCT hostname || ':' || port,hostgroup_id,status FROM mysql_servers where status != 'OFFLINE_HARD' and hostgroup_id in ( $WRITE_HOSTGROUP_ID, $READ_HOSTGROUP_ID, $SLAVEREAD_HOSTGROUP_ID )\" | tr '\\t' ' ')\n  if [[ -n $host_info ]]; then\n    # Extract the hostname and port from the rows\n    # Creates a string of \"host:port\" separated by spaces\n    current_hosts=\"\"\n\n    while read line; do\n      if [[ -z $line ]]; then\n        continue\n      fi\n      net_address=$(echo $line | cut -d' ' -f1)\n      net_address=$(separate_ip_port_from_address $net_address)\n      local ip_addr=$(echo \"$net_address\" | cut -d' ' -f1)\n      local port=$(echo \"$net_address\" | cut -d' ' -f2)\n      net_address=$(combine_ip_port_into_address \"$ip_addr\" \"$port\")\n      current_hosts+=\"$net_address \"\n    done< <(printf \"$host_info\\n\")\n\n    current_hosts=${current_hosts% }\n  fi\n\n  if [[ -n $cluster_host && -n $cluster_port ]]; then\n    # First, find a host that is online from ProxySQL\n    ws_ip=$cluster_host\n    ws_port=$cluster_port\n\n    # Second, get the 
wsrep_incoming_addresses from the cluster\n    wsrep_address=$(slave_exec $LINENO \"${ws_ip}\" \"${ws_port}\" \\\n          \"SHOW STATUS LIKE 'wsrep_incoming_addresses'\" | awk '{print $2}' | sed 's|,| |g')\n  fi\n\n  if [[ -z $wsrep_address && -z $current_hosts ]]; then\n    debug $LINENO \"Returning from update_cluster(), both PXC and ProxySQL have no active nodes\"\n    return\n  fi\n\n  #\n  # Given the WSREP members, compare to ProxySQL\n  # If missing from ProxySQL, add to ProxySQL as a reader.\n  #\n  debug $LINENO \"Looking for PXC nodes not in ProxySQL\"\n  for i in ${wsrep_address}; do\n    # if we have a match, then the PXC node is in ProxySQL and we can skip\n    if [[ -n $current_hosts && \" ${current_hosts} \" =~ \" ${i} \" ]]; then\n      continue\n    fi\n\n    log $LINENO \"Cluster node (${i}) does not exist in ProxySQL, adding as a $MODE_COMMENT node\"\n    ws_address=$(separate_ip_port_from_address \"$i\")\n    ws_ip=$(echo \"$ws_address\" | cut -d' ' -f1)\n    ws_port=$(echo \"$ws_address\" | cut -d' ' -f2)\n\n    # Add the node as a reader\n    local hostgroup\n\n    # Before inserting, check if a previous READ entry exists (it may be in OFFLINE_HARD state)\n    hostgroup=$(proxysql_exec $LINENO \"SELECT hostgroup_id FROM mysql_servers WHERE hostgroup_id=${DEFAULT_HOSTGROUP_ID} AND hostname='${ws_ip}' AND port=${ws_port}\")\n\n    if [[ -n $hostgroup ]]; then\n      # Update reader to OFFLINE_SOFT if new PXC node in ProxySQL\n      proxysql_exec $LINENO \"UPDATE mysql_servers SET status='OFFLINE_SOFT',weight=1000,comment='$MODE_COMMENT' WHERE hostname='${ws_ip}' AND port=${ws_port} AND hostgroup_id=${hostgroup}\"\n      check_cmd $LINENO $? \"Cannot update Percona XtraDB Cluster node $ws_address (hostgroup $hostgroup) to ProxySQL database, Please check ProxySQL login credentials\"\n      log_if_success $LINENO $? 
\"Updated ${hostgroup}:${i} node in the ProxySQL database.\"\n    else\n      # Insert a reader if new PXC node not in ProxySQL\n      proxysql_exec $LINENO \"INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,comment,max_connections) VALUES ('$ws_ip',$DEFAULT_HOSTGROUP_ID,$ws_port,1000,'$MODE_COMMENT',$MAX_CONNECTIONS);\"\n      check_cmd $LINENO $? \"Cannot add Percona XtraDB Cluster node $ws_address (hostgroup $DEFAULT_HOSTGROUP_ID) to ProxySQL database, Please check ProxySQL login credentials\"\n      log_if_success $LINENO $? \"Added ${DEFAULT_HOSTGROUP_ID}:${i} node into ProxySQL database.\"\n    fi\n\n    CHECK_STATUS=1\n  done\n\n  #\n  # Given the ProxySQL members, compare to WSREP\n  # If not in WSREP, mark as OFFLINE_HARD\n  #\n  debug $LINENO \"Looking for ProxySQL nodes not in PXC\"\n  for i in $current_hosts; do\n    # if we have a match, then the proxysql node is in PXC\n    # so we can skip it\n    if [[ -n ${wsrep_address} && \" ${wsrep_address} \" =~ \" ${i} \" ]]; then\n      continue\n    fi\n\n    debug $LINENO \"ProxySQL host $i not found in cluster membership\"\n    #\n    # The current host in current_hosts was not found in cluster membership,\n    # set it OFFLINE_SOFT unless its a slave node\n    #\n    ws_address=$(separate_ip_port_from_address \"$i\")\n    ws_ip=$(echo \"$ws_address\" | cut -d' ' -f1)\n    ws_port=$(echo \"$ws_address\" | cut -d' ' -f2)\n\n    # This is supported by status, so OFFLINE should come before ONLINE\n    # Note that the status is in DESC order, so \"ONLINE : OFFLINE_SOFT : OFFLINE_HARD\"\n    # This is needed because there may be multiple entries\n    ws_hg_status=$(proxysql_exec $LINENO \"SELECT hostgroup_id,status,comment from mysql_servers WHERE hostname='$ws_ip' and port=$ws_port ORDER BY status DESC LIMIT 1\")\n    ws_hg_id=$(echo -e \"$ws_hg_status\" | cut -f1)\n    ws_status=$(echo -e \"$ws_hg_status\" | cut -f2)\n    ws_comment=$(echo -e \"$ws_hg_status\" | cut -f3)\n\n    if [ 
\"$ws_comment\" == \"SLAVEREAD\" ]; then\n      # This update now happens in proxysql_galera_checker\n      continue\n    fi\n\n    if [ \"$ws_status\" == \"OFFLINE_SOFT\" ]; then\n      #\n      # If OFFLINE_SOFT, move to OFFLINE_HARD\n      #\n      log $LINENO \"Cluster node ${ws_hg_id}:${i} does not exist in PXC! Changing status from OFFLINE_SOFT to OFFLINE_HARD\"\n      proxysql_exec $LINENO \"UPDATE mysql_servers set status='OFFLINE_HARD' WHERE hostname='$ws_ip' and port=$ws_port\"\n      check_cmd $LINENO $? \"Cannot update Percona XtraDB Cluster writer node in ProxySQL database, Please check ProxySQL login credentials\"\n      CHECK_STATUS=1\n    elif [[ $ws_status == \"ONLINE\" ]]; then\n      #\n      # else if ONLINE, move to OFFLINE_SOFT\n      # It will take another iteration to get it to OFFLINE_HARD\n      #\n      log $LINENO \"Cluster node ${ws_hg_id}:${i} does not exist in PXC! Changing status to OFFLINE_SOFT\"\n      # Set all entries to OFFLINE_SOFT\n      proxysql_exec $LINENO \"UPDATE mysql_servers set status='OFFLINE_SOFT' WHERE hostname='$ws_ip' and port=$ws_port\"\n      check_cmd $LINENO $? 
\"Cannot update Percona XtraDB Cluster writer node in ProxySQL database, Please check ProxySQL login credentials\"\n      CHECK_STATUS=1\n    fi\n\n    node_status=$(proxysql_exec $LINENO \"SELECT status from mysql_servers WHERE hostname='$ws_ip' and port=$ws_port ORDER BY status LIMIT 1\")\n    log $LINENO \"Non-PXC node (${i}) current status '$node_status' in ProxySQL.\"\n  done\n\n  # Update the ProxySQL status for the new nodes\n  for i in ${wsrep_address}; do\n    if [[ -n $current_hosts && \" ${current_hosts} \" =~ \" ${i} \" ]]; then\n      # Lookup the status in the host_info\n      local host\n\n      ws_address=$(separate_ip_port_from_address \"$i\")\n      ws_ip=$(echo \"$ws_address\" | cut -d' ' -f1)\n      ws_port=$(echo \"$ws_address\" | cut -d' ' -f2)\n\n      # properly escape the characters for grep\n      local re_i=\"$(printf '%s' \"$ws_ip:$ws_port\" | sed 's/[.[\\*^$]/\\\\&/g')\"\n      host=$(echo \"$host_info\" | grep \"${re_i}\" | head -1)\n\n      ws_hg_id=$(echo $host | cut -d' ' -f2)\n      ws_status=$(echo $host | cut -d' ' -f3)\n      log \"\" \"Cluster node (${ws_hg_id}:${i}) current status '$ws_status' in ProxySQL.\"\n    else\n      ws_address=$(separate_ip_port_from_address \"$i\")\n      ws_ip=$(echo \"$ws_address\" | cut -d' ' -f1)\n      ws_port=$(echo \"$ws_address\" | cut -d' ' -f2)\n      ws_hg_status=$(proxysql_exec $LINENO \"SELECT hostgroup_id,status from mysql_servers WHERE hostname='$ws_ip' and port=$ws_port\")\n      ws_hg_id=$(echo $ws_hg_status | cut -d' ' -f1)\n      ws_status=$(echo $ws_hg_status | cut -d' ' -f2)\n\n      log $LINENO \"Cluster node (${ws_hg_id}:${i}) current status '$ws_status' in ProxySQL database!\"\n      if [ \"$ws_status\" == \"OFFLINE_HARD\" ]; then\n        # The node was OFFLINE_HARD, but its now in the cluster list\n        # so lets make it OFFLINE_SOFT\n        proxysql_exec $LINENO \"UPDATE mysql_servers set status = 'OFFLINE_SOFT', weight=1000 WHERE hostname='$ws_ip' and 
port=$ws_port;\"\n        check_cmd $LINENO $? \"Cannot update Percona XtraDB Cluster node $i in the ProxySQL database, Please check the ProxySQL login credentials\"\n        log_if_success $LINENO $? \"${ws_hg_id}:${i} node set to OFFLINE_SOFT status to ProxySQL database.\"\n        CHECK_STATUS=1\n      fi\n    fi\n  done\n  debug $LINENO \"END update_cluster\"\n}\n\n\n# Move the entries in the list from writers to readers\n#\n# Globals:\n#   READ_HOSTGROUP_ID\n#   WRITE_HOSTGROUP_ID\n#\n# Arguments:\n#   1: A list of nodes to move to readers (entries are 'server port hostgroup')\n#\nfunction move_writers_to_readers() {\n  debug $LINENO \"START move_writers_to_readers($*)\"\n  local offline_writers=$1\n\n  debug $LINENO \"$offline_writers\"\n  printf \"$offline_writers\" | while read host port hostgroup || [ -n \"$hostgroup\" ]\n  do\n    local read_count\n\n    debug $LINENO \"mode_change_check: Found OFFLINE_SOFT writer, changing to READ status and hostgroup $READ_HOSTGROUP_ID\"\n\n    read_count=$(proxysql_exec $LINENO \"SELECT COUNT(*) FROM mysql_servers WHERE hostgroup_id=$READ_HOSTGROUP_ID AND hostname='$host' AND port=$port\")\n    if [[ $read_count -ne 0 ]]; then\n      # If node is already a READER, update the READER\n      proxysql_exec $LINENO \"UPDATE mysql_servers SET status='OFFLINE_SOFT',hostgroup_id=$READ_HOSTGROUP_ID, comment='READ', weight=1000 WHERE hostgroup_id=$READ_HOSTGROUP_ID AND hostname='$host' AND port=$port\"\n      check_cmd $LINENO $? \"Cannot update Percona XtraDB Cluster writer node in ProxySQL database, Please check ProxySQL login credentials\"\n      log_if_success $LINENO $? 
\"Changed OFFLINE_SOFT writer to a reader ($READ_HOSTGROUP_ID:$host:$port)\"\n\n      # Delete the WRITER (so that we don't get here again)\n      proxysql_exec $LINENO \"DELETE FROM mysql_servers WHERE hostgroup_id=$hostgroup AND hostname='$host' AND port=$port\"\n    else\n      # If node is not a reader, change from WRITER to READER\n      proxysql_exec $LINENO \"UPDATE mysql_servers SET status='OFFLINE_SOFT',hostgroup_id=$READ_HOSTGROUP_ID, comment='READ', weight=1000 WHERE hostgroup_id=$hostgroup AND hostname='$host' AND port=$port\"\n      check_cmd $LINENO $? \"Cannot update Percona XtraDB Cluster writer node in ProxySQL database, Please check ProxySQL login credentials\"\n      log_if_success $LINENO $? \"Changed OFFLINE_SOFT writer to a reader ($READ_HOSTGROUP_ID:$host:$port)\"\n    fi\n  done\n}\n\n\n#\n# Globals:\n#   PROXYSQL_DATADIR\n#   CLUSTER_NAME\n#   WRITE_HOSTGROUP_ID  READ_HOSTGROUP_ID\n#   MODE\n#   CHECK_STATUS\n#\n# Arguments:\n#   None\n#\nfunction mode_change_check(){\n  debug $LINENO \"START mode_change_check\"\n\n  # Check if the current writer is in an OFFLINE_SOFT state\n  local offline_writers\n  offline_writers=$(proxysql_exec $LINENO \"SELECT hostname,port,hostgroup_id from mysql_servers where comment in ('WRITE', 'READWRITE') and status <> 'ONLINE' and hostgroup_id in ($WRITE_HOSTGROUP_ID)\")\n  if [[ -n $offline_writers ]]; then\n    #\n    # Found a writer node that was in 'OFFLINE_SOFT' state,\n    # move it to the READ hostgroup unless the MODE is 'loadbal'\n    #\n    if [ \"$MODE\" != \"loadbal\" ]; then\n      move_writers_to_readers \"$offline_writers\"\n      CHECK_STATUS=1\n    fi\n  fi\n\n  debug $LINENO \"END mode_change_check\"\n}\n\n\n#\n# Globals:\n#   DEBUG\n#   CONFIG_FILE\n#   WRITE_HOSTGROUP_ID  READ_HOSTGROUP_ID\n#   DEFAULT_HOSTGROUP_ID\n#   MODE\n#   ERR_FILE\n#   PROXYSQL_ADMIN_VERSION\n#   MODE_COMMENT\n#   WRITE_WEIGHT\n#\n# Arguments:\n#\nfunction parse_args() {\n  # Check if we have a functional 
getopt(1)\n  if ! getopt --test; then\n    go_out=\"$(getopt --options=w:r:c:l:m:p:vh --longoptions=write-hg:,read-hg:,mode:,priority:,config-file:,log:,reload-check-file:,log-text:,max-connections:,debug,version,help \\\n    --name=\"$(basename \"$0\")\" -- \"$@\")\"\n    if [[ $? -ne 0 ]]; then\n      # no place to send output\n      echo \"Script error: getopt() failed\" >&2\n      exit 1\n    fi\n    eval set -- \"$go_out\"\n  fi\n\n  if [[ $go_out == \" --\" ]];then\n    usage\n    exit 1\n  fi\n\n  #\n  # We iterate through the command-line options twice\n  # (1) to handle options that don't need permissions (such as --help)\n  # (2) to handle options that need to be done before other\n  #     options, such as loading the config file\n  #\n  for arg\n  do\n    case \"$arg\" in\n      -- ) shift; break;;\n      --config-file )\n        CONFIG_FILE=\"$2\"\n        check_permission -e $LINENO \"$CONFIG_FILE\" \"proxysql-admin configuration file\"\n        debug $LINENO  \"--config-file specified, using : $CONFIG_FILE\"\n        shift 2\n        ;;\n      --help)\n        usage\n        exit 0\n        ;;\n      -v | --version)\n        echo \"proxysql_node_monitor version $PROXYSQL_ADMIN_VERSION\"\n        exit 0\n        ;;\n      --debug)\n        DEBUG=1\n        shift\n        ;;\n      *)\n        shift\n        ;;\n    esac\n  done\n\n  #\n  # Load the config file before reading in the command-line options\n  #\n  readonly CONFIG_FILE\n  if [ ! 
-e \"$CONFIG_FILE\" ]; then\n      warning \"\" \"Could not locate the configuration file: $CONFIG_FILE\"\n  else\n      check_permission -r $LINENO \"$CONFIG_FILE\"\n      debug $LINENO \"Loading $CONFIG_FILE\"\n      source \"$CONFIG_FILE\"\n  fi\n\n\n  if [[ $DEBUG -ne 0 ]]; then\n    # For now\n    if [[ -t 1 ]]; then\n      ERR_FILE=/dev/stdout\n    fi\n  fi\n\n  local p_mode=\"\"\n\n  # Reset the command line for the next invocation\n  eval set -- \"$go_out\"\n\n  for arg\n  do\n    case \"$arg\" in\n      -- ) shift; break;;\n      -w | --write-hg )\n        WRITE_HOSTGROUP_ID=$2\n        shift 2\n      ;;\n      -r | --read-hg )\n        READ_HOSTGROUP_ID=$2\n        shift 2\n      ;;\n      -m | --mode )\n        p_mode=\"$2\"\n        shift 2\n        if [ \"$p_mode\" != \"loadbal\" ] && [ \"$p_mode\" != \"singlewrite\" ]; then\n          echo \"ERROR: Invalid --mode passed:\"\n          echo \"  Please choose any of these modes: loadbal, singlewrite\"\n          exit 1\n        fi\n      ;;\n      -p | --priority )\n        # old parameter\n        shift 2\n      ;;\n      --config-file )\n        shift 2\n        # The config-file is loaded before the command-line\n        # arguments are handled.\n      ;;\n      -l | --log )\n        ERR_FILE=\"$2\"\n        shift 2\n\n        # Test if stdout and stderr are open to a terminal\n        if [[ $ERR_FILE == \"/dev/stdout\" || $ERR_FILE == \"/dev/stderr\" ]]; then\n          RED=$(tput setaf 1)\n          NRED=$(tput sgr0)\n        fi\n      ;;\n      --reload-check-file )\n        RELOAD_CHECK_FILE=\"$2\"\n        shift 2\n      ;;\n      --log-text )\n        LOG_TEXT=\"$2\"\n        shift 2\n      ;;\n      --max-connections )\n        MAX_CONNECTIONS=\"$2\"\n        shift 2\n      ;;\n      --debug )\n        shift;\n      ;;\n      -v | --version )\n        shift;\n      ;;\n      -h | --help )\n        shift;\n      ;;\n    esac\n  done\n\n  if [[ $DEBUG -eq 1 ]]; then\n    
DEBUG_ERR_FILE=$ERR_FILE\n  fi\n\n  #Timeout exists for instances where mysqld/proxysql may be hung\n  TIMEOUT=5\n\n  SLAVEREAD_HOSTGROUP_ID=$READ_HOSTGROUP_ID\n  if [ $SLAVEREAD_HOSTGROUP_ID -eq $WRITE_HOSTGROUP_ID ];then\n    let SLAVEREAD_HOSTGROUP_ID+=1\n  fi\n\n  DEFAULT_HOSTGROUP_ID=$READ_HOSTGROUP_ID\n  if [[ $DEFAULT_HOSTGROUP_ID -eq -1 ]]; then\n    DEFAULT_HOSTGROUP_ID=$WRITE_HOSTGROUP_ID\n  fi\n\n  CHECK_STATUS=0\n\n  debug $LINENO \"#### PROXYSQL NODE MONITOR ARGUMENT CHECKING\"\n  debug $LINENO \"MODE: $MODE\"\n  debug $LINENO \"check mode name from proxysql data directory \"\n  CLUSTER_NAME=$(proxysql_exec $LINENO \"SELECT comment from scheduler where arg1 LIKE '%--write-hg=$WRITE_HOSTGROUP_ID %' OR arg1 LIKE '%-w $WRITE_HOSTGROUP_ID %'\")\n  check_cmd $LINENO $? \"Cannot connect to ProxySQL at $PROXYSQL_HOSTNAME:$PROXYSQL_PORT\"\n  if [[ ! -z $p_mode ]]; then\n    MODE=$p_mode\n    debug $LINENO \"command-line: setting MODE to $MODE\"\n  else\n    # Get the name of the mode file\n    local proxysql_mode_file\n    if [[ -z $CLUSTER_NAME ]]; then\n      proxysql_mode_file=\"${PROXYSQL_DATADIR}/mode\"\n    else\n      proxysql_mode_file=\"${PROXYSQL_DATADIR}/${CLUSTER_NAME}_mode\"\n    fi\n\n    if [[ -f \"$proxysql_mode_file\" && -r \"$proxysql_mode_file\" ]]; then\n      MODE=$(cat ${proxysql_mode_file})\n      debug $LINENO \"file: $proxysql_mode_file: setting MODE to $MODE\"\n    fi\n  fi\n\n\n  if [ \"$MODE\" == \"loadbal\" ]; then\n    MODE_COMMENT=\"READWRITE\"\n    WRITE_WEIGHT=\"1000\"\n  else\n    MODE_COMMENT=\"READ\"\n    WRITE_WEIGHT=\"1000000\"\n  fi\n\n  if [[ -z $RELOAD_CHECK_FILE ]]; then\n    error $LINENO \"The --reload-check-file option is required.\"\n    exit 1\n  fi\n  check_permission -r $LINENO \"$RELOAD_CHECK_FILE\"\n\n  # Verify that we have an integer\n  if ! 
[ \"$MAX_CONNECTIONS\" -eq \"$MAX_CONNECTIONS\" ] 2>/dev/null\n  then\n    error $LINENO \"Invalid --max-connections value (must be a number) : $MAX_CONNECTIONS\"\n    exit 1\n  fi\n\n  readonly WRITE_HOSTGROUP_ID\n  readonly READ_HOSTGROUP_ID\n  readonly SLAVEREAD_HOSTGROUP_ID\n  readonly MODE\n  readonly MODE_COMMENT\n  readonly WRITE_WEIGHT\n  readonly CLUSTER_NAME\n  readonly RELOAD_CHECK_FILE\n  readonly MAX_CONNECTIONS\n}\n\n# Returns the address of an available (online) cluster host\n#\n# Globals:\n#   WRITE_HOSTGROUP_ID\n#   READ_HOSTGROUP_ID\n#\n# Arguments:\n#   None\n#\nfunction find_online_cluster_host() {\n  # Query the proxysql database for hosts,ports in use\n  # Then just go through the list until we reach one that responds\n  local hosts\n  hosts=$(proxysql_exec $LINENO \"SELECT DISTINCT hostname,port FROM mysql_servers WHERE comment<>'SLAVEREAD' AND hostgroup_id in ($WRITE_HOSTGROUP_ID, $READ_HOSTGROUP_ID)\")\n  printf \"$hosts\" | while read server port || [[ -n $port ]]\n  do\n    debug $LINENO \"Trying to contact $server:$port...\"\n    slave_exec \"$LINENO\" \"$server\" \"$port\" \"select @@port\" 1>/dev/null 2>>${DEBUG_ERR_FILE}\n    if [[ $? -eq 0 ]]; then\n      printf \"$server $port\"\n      return 0\n    fi\n  done\n\n  # No cluster host available (cannot contact any)\n  return 1\n}\n\nfunction main() {\n  # Monitoring user needs 'REPLICATION CLIENT' privilege\n  log $LINENO \"###### Percona XtraDB Cluster status ######\"\n  if [[ -n $LOG_TEXT ]]; then\n    log $LINENO \"Extra notes        : $LOG_TEXT\"\n  fi\n  debug $LINENO \"write hostgroup id : $WRITE_HOSTGROUP_ID\"\n  debug $LINENO \"read hostgroup id  : $READ_HOSTGROUP_ID\"\n  debug $LINENO \"mode               : $MODE\"\n\n  CLUSTER_USERNAME=$(proxysql_exec $LINENO \"SELECT variable_value FROM global_variables WHERE variable_name='mysql-monitor_username'\")\n  check_cmd $LINENO $? \"Could not retrieve cluster login info from ProxySQL. 
Please check ProxySQL login credentials\"\n\n  CLUSTER_PASSWORD=$(proxysql_exec $LINENO \"SELECT variable_value FROM global_variables WHERE variable_name='mysql-monitor_password'\" \"hide_output\")\n  check_cmd $LINENO $? \"Could not retrieve cluster login info from ProxySQL. Please check ProxySQL login credentials\"\n\n  CLUSTER_TIMEOUT=$(proxysql_exec $LINENO \"SELECT MAX(MAX(interval_ms / 1000 - 1, 1)) FROM scheduler\")\n\n  local cluster_host_info\n  cluster_host_info=$(find_online_cluster_host)\n\n  local host=\"\"\n  local port=\"\"\n  if [[ -n $cluster_host_info ]]; then\n    host=$(echo $cluster_host_info | awk '{ print $1 }')\n    port=$(echo $cluster_host_info | awk '{ print $2 }')\n  fi\n\n  update_cluster \"$host\" \"$port\"\n  mode_change_check\n\n  if [ $CHECK_STATUS -eq 0 ]; then\n    if [[ -n $cluster_host_info ]]; then\n      log $LINENO \"Percona XtraDB Cluster membership looks good\"\n    else\n      log $LINENO \"Percona XtraDB Cluster is offline!\"\n    fi\n  else\n    echo \"1\" > ${RELOAD_CHECK_FILE}\n    log $LINENO \"###### MYSQL SERVERS was updated ######\"\n  fi\n}\n\n\n#-------------------------------------------------------------------------------\n#\n# Step 4 : Begin script execution\n#\n\nparse_args \"$@\"\ndebug $LINENO \"#### START PROXYSQL NODE MONITOR\"\nmain\ndebug $LINENO \"#### END PROXYSQL NODE MONITOR\"\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/randpass",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_randpass() {\n  if [ \"${_integer}\" -ge 32 ]; then\n    _rkey=\"${_integer}\"\n  else\n    _rkey=32\n  fi\n  if [ \"${_kind}\" = \"graph\" ]; then\n    _CHAR=\"[:graph:]\"\n    cat /dev/urandom \\\n      | tr -cd \"${_CHAR}\" \\\n      | head -c ${1:-${_rkey}} \\\n      | tr -d \"\\n\"\n  elif [ \"${_kind}\" = \"esc\" ]; then\n    _CHAR=\"[:graph:]\"\n    cat /dev/urandom \\\n      | tr -cd \"${_CHAR}\" \\\n      | head -c ${1:-${_rkey}} \\\n      | tr -d \"\\n\" \\\n      | sed 's/[\\\\\\/\\^\\?\\>\\`\\#\\\"\\{\\(\\$\\@\\&\\|\\*]//g; s/\\(['\"'\"'\\]\\)//g'\n  elif [ \"${_kind}\" = \"hash\" ]; then\n    _CHAR=\"[:alnum:]\"\n    cat /dev/urandom \\\n      | tr -cd \"${_CHAR}\" \\\n      | head -c ${1:-${_rkey}} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\"\n  else\n    _CHAR=\"[:alnum:]\"\n    cat /dev/urandom \\\n      | tr -cd \"${_CHAR}\" \\\n      | head -c ${1:-${_rkey}} \\\n      | tr -d \"\\n\"\n  fi\n  echo\n}\n\ncase \"$2\" in\n  alnum) _integer=\"$1\"\n         _kind=\"$2\"\n         _randpass\n  ;;\n  graph) _integer=\"$1\"\n         _kind=\"$2\"\n         _randpass\n  ;;\n  hash)  _integer=\"$1\"\n         _kind=\"$2\"\n         _randpass\n  ;;\n  esc)   _integer=\"$1\"\n         _kind=\"$2\"\n         _randpass\n  ;;\n  *)     echo \"Usage: randpass {32-128} {alnum|graph|hash|esc}\"\n         exit 1\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/renameaegirhost",
    "content": "#!/bin/bash\n\n###\n### renameaegirhost — in-place rename of the Aegir hostname on BOA\n###\n### Usage: renameaegirhost --aegir-root PATH [--dry-run] [--force-old OLDFQDN]\n###\n###   --aegir-root PATH  Aegir home directory (required); validated as a real\n###                      Aegir root before any changes are made.\n###      Examples:\n###      --aegir-root /var/aegir    (BOA master)\n###      --aegir-root /data/disk/o1 (BOA octopus account)\n###   --dry-run    Print what would be changed; make no modifications\n###   --force-old FQDN   Override auto-detected old hostname with FQDN\n###\n### What it does:\n###   §1  Detect old hostname from <aegir-root>/.drush/server_master.alias.drushrc.php\n###   §2  Detect new hostname from system FQDN (hostname -f)\n###   §3  Rewrite all Drush alias files in <aegir-root>/.drush/\n###   §4  Rename + rewrite vhost files in <aegir-root>/config/server_master/nginx/vhost.d/\n###   §5  Dump Aegir DB → sed replace → re-import\n###   §6  Reload nginx\n###\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\n\n###-------------DEFAULTS------------------###\n\n_AEGIR_ROOT=\"\"\n_DRY_RUN=NO\n_FORCE_OLD=\"\"\n\n###-------------ARGUMENT PARSING----------###\n\nwhile [ $# -gt 0 ]; do\n  case \"$1\" in\n    --aegir-root)\n      _AEGIR_ROOT=\"$2\"\n      shift 2\n      ;;\n    --dry-run)\n      _DRY_RUN=YES\n      shift\n      ;;\n    --force-old)\n      _FORCE_OLD=\"$2\"\n      shift 2\n      ;;\n    -h|--help)\n      sed -n '/^###/{ s/^### \\?//; p }' \"$0\"\n      exit 0\n      ;;\n    *)\n      echo \"ERROR: Unknown argument: $1\"\n      echo \"Usage: $0 --aegir-root PATH [--dry-run] [--force-old OLDFQDN]\"\n      exit 1\n      ;;\n  esac\ndone\n\nif [ -z \"${_AEGIR_ROOT}\" ]; then\n  echo \"ERROR: --aegir-root PATH is required\"\n  echo \"Usage: $0 --aegir-root PATH [--dry-run] [--force-old OLDFQDN]\"\n  exit 1\nfi\n\n# Derive all paths 
and username from _AEGIR_ROOT now that args are fully parsed\n_DRUSH_DIR=${_AEGIR_ROOT}/.drush\n_VHOSTD=${_AEGIR_ROOT}/config/server_master/nginx/vhost.d\n_BACKUP_DIR=${_AEGIR_ROOT}/backups/rename-hostname\n_AEGIR_USER=$(basename \"${_AEGIR_ROOT}\")\n\n###-------------HELPERS-------------------###\n\n_info() {\n  echo \"INFO: $*\"\n}\n\n_alrt() {\n  echo \"ALRT: $*\"\n}\n\n_die() {\n  echo \"ERROR: $*\"\n  exit 1\n}\n\n_maybe_run() {\n  # Execute command unless --dry-run; always echo it\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo \"DRY:  $*\"\n  else\n    eval \"$@\"\n  fi\n}\n\n###-------------CHECKS--------------------###\n\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    _die \"This script must be run as root\"\n  fi\n\n  _DF_TEST=\"$(df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ -n \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    _die \"Disk usage is at ${_DF_TEST}% — aborting (must be below 90%)\"\n  fi\n\n  if [ ! -d \"${_AEGIR_ROOT}\" ]; then\n    _die \"Aegir root not found: ${_AEGIR_ROOT}\"\n  fi\n\n  # Validate the directory actually looks like an Aegir root before touching anything\n  _aegir_root_ok=YES\n  for _required_subdir in \\\n    .drush \\\n    config/server_master \\\n    backups \\\n  ; do\n    if [ ! -d \"${_AEGIR_ROOT}/${_required_subdir}\" ]; then\n      _alrt \"Expected Aegir subdir missing: ${_AEGIR_ROOT}/${_required_subdir}\"\n      _aegir_root_ok=NO\n    fi\n  done\n  if [ \"${_aegir_root_ok}\" = \"NO\" ]; then\n    _die \"'${_AEGIR_ROOT}' does not look like a valid Aegir root (missing subdirs above)\"\n  fi\n\n  if [ ! 
-f \"${_DRUSH_DIR}/server_master.alias.drushrc.php\" ]; then\n    _die \"server_master.alias.drushrc.php not found in ${_DRUSH_DIR}\"\n  fi\n\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  if [ -z \"${_SQL_PSWD}\" ]; then\n    _die \"/root/.my.pass.txt missing or empty — cannot connect to MySQL\"\n  fi\n\n  _info \"Aegir root: ${_AEGIR_ROOT}\"\n}\n\n###-------------§1 DETECT OLD HOSTNAME---###\n\n_detect_old_hostname() {\n  if [ -n \"${_FORCE_OLD}\" ]; then\n    _OLD_HOST=\"${_FORCE_OLD}\"\n    _info \"Old hostname forced via --force-old: ${_OLD_HOST}\"\n    return\n  fi\n\n  # Extract remote_host from server_master alias — this is the authoritative\n  # hostname stored in the Aegir drush config\n  _OLD_HOST=$(grep \"remote_host\" \\\n    \"${_DRUSH_DIR}/server_master.alias.drushrc.php\" \\\n    | head -1 \\\n    | sed \"s/.*remote_host.*=>//; s/[' ,;]//g\")\n\n  _OLD_HOST=${_OLD_HOST//[^a-zA-Z0-9.\\-]/}\n\n  if [ -z \"${_OLD_HOST}\" ]; then\n    _die \"Could not detect old hostname from server_master.alias.drushrc.php\"\n  fi\n\n  _info \"Detected old hostname: ${_OLD_HOST}\"\n}\n\n###-------------§2 DETECT NEW HOSTNAME---###\n\n_detect_new_hostname() {\n  _NEW_HOST=$(hostname -f 2>/dev/null | tr -d '\\n')\n\n  if [ -z \"${_NEW_HOST}\" ]; then\n    _die \"Could not determine system FQDN via hostname -f\"\n  fi\n\n  # Sanity: must contain at least one dot (real FQDN)\n  if [[ \"${_NEW_HOST}\" != *.* ]]; then\n    _die \"System hostname '${_NEW_HOST}' does not look like a FQDN (no dot). 
\" \\\n       \"Set it correctly in /etc/hostname and /etc/hosts first.\"\n  fi\n\n  _info \"New hostname (system FQDN): ${_NEW_HOST}\"\n\n  if [ \"${_OLD_HOST}\" = \"${_NEW_HOST}\" ]; then\n    _info \"Old and new hostname are identical — nothing to do.\"\n    exit 0\n  fi\n}\n\n###-------------BACKUP--------------------###\n\n_backup_files() {\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    _info \"(dry-run) Would create backup in ${_BACKUP_DIR}\"\n    return\n  fi\n\n  mkdir -p \"${_BACKUP_DIR}\"\n  _TS=$(date +%Y%m%d_%H%M%S)\n\n  cp -a \"${_DRUSH_DIR}\" \"${_BACKUP_DIR}/drush_${_TS}\"\n  _info \"Drush dir backed up to ${_BACKUP_DIR}/drush_${_TS}\"\n\n  if [ -d \"${_VHOSTD}\" ]; then\n    cp -a \"${_VHOSTD}\" \"${_BACKUP_DIR}/vhost.d_${_TS}\"\n    _info \"vhost.d backed up to ${_BACKUP_DIR}/vhost.d_${_TS}\"\n  fi\n\n  _info \"Backup timestamp: ${_TS}\"\n}\n\n###-------------§3 REWRITE DRUSH FILES---###\n\n_rewrite_drush_files() {\n  _info \"§3 Rewriting Drush alias files in ${_DRUSH_DIR} ...\"\n\n  for _F in $(find \"${_DRUSH_DIR}\" -maxdepth 1 -type f -name '*.drushrc.php' | sort); do\n    if grep -q \"${_OLD_HOST}\" \"${_F}\" 2>/dev/null; then\n      _info \"  sed in-place: ${_F}\"\n      _maybe_run sed -i \"s/${_OLD_HOST}/${_NEW_HOST}/g\" \\'\"${_F}\"\\'\n    fi\n  done\n}\n\n###-------------§4 RENAME + REWRITE VHOSTS-###\n\n_rewrite_vhosts() {\n  if [ ! 
-d \"${_VHOSTD}\" ]; then\n    _alrt \"vhost.d dir not found: ${_VHOSTD} — skipping vhost step\"\n    return\n  fi\n\n  _info \"§4 Rewriting vhost files in ${_VHOSTD} ...\"\n\n  for _F in $(find \"${_VHOSTD}\" -maxdepth 1 -type f | sort); do\n    _BASENAME=$(basename \"${_F}\")\n\n    # Rewrite content regardless of filename\n    if grep -q \"${_OLD_HOST}\" \"${_F}\" 2>/dev/null; then\n      _info \"  sed in-place content: ${_F}\"\n      _maybe_run sed -i \"s/${_OLD_HOST}/${_NEW_HOST}/g\" \\'\"${_F}\"\\'\n    fi\n\n    # Rename the file if old hostname appears in the filename\n    if [[ \"${_BASENAME}\" == *\"${_OLD_HOST}\"* ]]; then\n      _NEW_BASENAME=\"${_BASENAME//${_OLD_HOST}/${_NEW_HOST}}\"\n      _NEW_F=\"${_VHOSTD}/${_NEW_BASENAME}\"\n      _info \"  rename: ${_BASENAME} -> ${_NEW_BASENAME}\"\n      _maybe_run mv -f \\'\"${_F}\"\\' \\'\"${_NEW_F}\"\\'\n    fi\n  done\n}\n\n###-------------§5 DATABASE RENAME-------###\n\n_detect_aegir_db() {\n  # The Aegir hostmaster DB name lives in the hostmaster site's drushrc.php\n  # Path is derived from the site_path in hostmaster.alias.drushrc.php\n  _HM_SITE_PATH=$(grep \"site_path'\" \\\n    \"${_DRUSH_DIR}/hostmaster.alias.drushrc.php\" \\\n    | head -1 \\\n    | sed \"s/.*=>//; s/[' ,;]//g\" \\\n    | tr -d ' ')\n\n  if [ -z \"${_HM_SITE_PATH}\" ]; then\n    _die \"Could not extract site_path from hostmaster.alias.drushrc.php\"\n  fi\n\n  _HM_DRUSHRC=\"${_HM_SITE_PATH}/drushrc.php\"\n\n  if [ ! 
-f \"${_HM_DRUSHRC}\" ]; then\n    _die \"Hostmaster drushrc.php not found: ${_HM_DRUSHRC}\"\n  fi\n\n  _AEGIR_DB=$(grep \"options\\['db_name'\\]\" \"${_HM_DRUSHRC}\" \\\n    | head -1 \\\n    | sed \"s/.*=//; s/[' ,;]//g\" \\\n    | tr -d ' ')\n  _AEGIR_DB=${_AEGIR_DB//[^a-zA-Z0-9_]/}\n\n  if [ -z \"${_AEGIR_DB}\" ]; then\n    _die \"Could not detect Aegir DB name from ${_HM_DRUSHRC}\"\n  fi\n\n  _info \"Detected Aegir database: ${_AEGIR_DB}\"\n}\n\n_rename_in_database() {\n  _info \"§5 Renaming hostname in Aegir database (${_AEGIR_DB}) ...\"\n\n  _TS=$(date +%Y%m%d_%H%M%S)\n  _DUMP_ORIG=\"${_BACKUP_DIR}/aegirdb_${_TS}_orig.sql\"\n  _DUMP_PATCHED=\"${_BACKUP_DIR}/aegirdb_${_TS}_patched.sql\"\n\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    _info \"(dry-run) Would dump ${_AEGIR_DB} to ${_DUMP_ORIG}\"\n    _info \"(dry-run) Would sed s/${_OLD_HOST}/${_NEW_HOST}/g -> ${_DUMP_PATCHED}\"\n    _info \"(dry-run) Would re-import ${_DUMP_PATCHED} into ${_AEGIR_DB}\"\n    return\n  fi\n\n  mkdir -p \"${_BACKUP_DIR}\"\n\n  _info \"  Dumping ${_AEGIR_DB} ...\"\n  mysqldump \\\n    -u root \\\n    --password=\"${_SQL_PSWD}\" \\\n    --single-transaction \\\n    --quick \\\n    --no-autocommit \\\n    --skip-add-locks \\\n    --no-tablespaces \\\n    --hex-blob \\\n    \"${_AEGIR_DB}\" \\\n    > \"${_DUMP_ORIG}\"\n\n  if [ ! 
-s \"${_DUMP_ORIG}\" ]; then\n    _die \"mysqldump produced an empty file: ${_DUMP_ORIG}\"\n  fi\n\n  _info \"  Dump size: $(du -sh \"${_DUMP_ORIG}\" | cut -f1)\"\n\n  _info \"  Applying sed replacement ...\"\n  # Use | as sed delimiter to avoid colliding with dots in FQDNs\n  sed \"s|${_OLD_HOST}|${_NEW_HOST}|g\" \"${_DUMP_ORIG}\" > \"${_DUMP_PATCHED}\"\n\n  _CHANGED=$(diff \"${_DUMP_ORIG}\" \"${_DUMP_PATCHED}\" | grep -c \"^[<>]\" || true)\n  _info \"  Lines changed in dump: ${_CHANGED}\"\n\n  if [ \"${_CHANGED}\" -eq 0 ]; then\n    _info \"  No occurrences of '${_OLD_HOST}' found in DB dump — skipping re-import\"\n    return\n  fi\n\n  _info \"  Re-importing patched dump into ${_AEGIR_DB} ...\"\n  mysql \\\n    -u root \\\n    --password=\"${_SQL_PSWD}\" \\\n    \"${_AEGIR_DB}\" \\\n    < \"${_DUMP_PATCHED}\"\n\n  _info \"  DB rename complete. Originals kept in ${_BACKUP_DIR}\"\n}\n\n###-------------§6 RELOAD NGINX----------###\n\n_reload_nginx() {\n  _info \"§6 Reloading nginx ...\"\n  _maybe_run nginx -t\n  _maybe_run sudo /etc/init.d/nginx reload\n}\n\n###-------------SUMMARY------------------###\n\n_print_summary() {\n  echo \"\"\n  echo \"======================================================\"\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo \"  DRY-RUN complete — no files were modified\"\n  else\n    echo \"  Rename complete\"\n  fi\n  echo \"  Aegir root   : ${_AEGIR_ROOT}\"\n  echo \"  Old hostname : ${_OLD_HOST}\"\n  echo \"  New hostname : ${_NEW_HOST}\"\n  echo \"======================================================\"\n  echo \"\"\n  if [ \"${_DRY_RUN}\" = \"NO\" ]; then\n    echo \"Next steps:\"\n    echo \"  1. Verify nginx config:  nginx -t\"\n    echo \"  2. Check Aegir UI:     http://master.${_NEW_HOST}/\"\n    echo \"  3. 
Run hostmaster verify via drush if needed:\"\n    echo \"     su -s /bin/bash - ${_AEGIR_USER} -c 'drush8 @hm hosting-task @server_master verify --force'\"\n    echo \"     su -s /bin/bash - ${_AEGIR_USER} -c 'drush8 @hm hosting-dispatch'\"\n    echo \"\"\n  fi\n}\n\n###-------------MAIN---------------------###\n\n_check_root\n_detect_old_hostname\n_detect_new_hostname\n_backup_files\n_rewrite_drush_files\n_rewrite_vhosts\n_detect_aegir_db\n_rename_in_database\n_reload_nginx\n_print_summary\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/screenfetch",
    "content": "#!/usr/bin/env bash\n\n# screenFetch - a CLI Bash script to show system/theme info in screenshots\n\n# Copyright (c) 2010-2019 Brett Bohnenkamper <kittykatt@kittykatt.us>\n\n#  This program is free software: you can redistribute it and/or modify\n#  it under the terms of the GNU General Public License as published by\n#  the Free Software Foundation, either version 3 of the License, or\n#  (at your option) any later version.\n#\n#  This program is distributed in the hope that it will be useful,\n#  but WITHOUT ANY WARRANTY; without even the implied warranty of\n#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n#  GNU General Public License for more details.\n#\n#  You should have received a copy of the GNU General Public License\n#  along with this program.  If not, see <http://www.gnu.org/licenses/>.\n\n# Yes, I do realize some of this is horribly ugly coding. Any ideas/suggestions would be\n# appreciated by emailing me or by stopping by http://github.com/KittyKatt/screenFetch. You\n# could also drop in on the IRC channel at irc://irc.rizon.net/screenFetch.\n# to put forth suggestions/ideas. Thank you.\n\n# Requires: bash 4.0+\n#           bc\n# Optional dependencies: xorg-xdpyinfo (resoluton detection)\n#                        xorg-xprop (desktop environment detection)\n#                        scrot (screenshot taking)\n#                        curl (screenshot uploading)\n\n\n#LANG=C\n#LANGUAGE=C\n#LC_ALL=C\n\n\nscriptVersion=\"3.9.9\"\n\n######################\n# Settings for fetcher\n######################\n\n# This setting controls what ASCII logo is displayed.\n# distro=\"Linux\"\n\n# This sets the information to be displayed. 
Available: distro, Kernel, DE, WM, Win_theme, Theme, Icons, Font, Background, ASCII.\n# To get just the information, and not a text-art logo, you would take \"ASCII\" out of the below variable.\nvalid_display=(\n\t\t'distro'\n\t\t'host'\n\t\t'kernel'\n\t\t'uptime'\n\t\t'pkgs'\n\t\t'shell'\n\t\t'res'\n\t\t'de'\n\t\t'wm'\n\t\t'wmtheme'\n\t\t'gtk'\n\t\t'disk'\n\t\t'cpu'\n\t\t'gpu'\n\t\t'mem'\n)\ndisplay=(\n\t\t'distro'\n\t\t'host'\n\t\t'kernel'\n\t\t'uptime'\n\t\t'pkgs'\n\t\t'shell'\n\t\t'res'\n\t\t'de'\n\t\t'wm'\n\t\t'wmtheme'\n\t\t'gtk'\n\t\t'cpu'\n\t\t'disk'\n\t\t'gpu'\n\t\t'mem'\n)\n# Display Type: ASCII or Text\ndisplay_type=\"ASCII\"\n# Plain logo\ndisplay_logo=\"no\"\n\n# Colors to use for the information found. These are set below according to distribution.\n# If you would like to set your OWN color scheme for these, uncomment the lines below and edit them to your heart's content.\n# textcolor=\"\\e[0m\"\n# labelcolor=\"\\e[1;34m\"\n\n# Array: WM Process Names\n# Order matters!\n# Removed WM's: compiz\nwmnames=(\n\t\t'fluxbox'\n\t\t'openbox'\n\t\t'blackbox'\n\t\t'xfwm4'\n\t\t'metacity'\n\t\t'kwin'\n\t\t'twin'\n\t\t'icewm'\n\t\t'pekwm'\n\t\t'flwm'\n\t\t'flwm_topside'\n\t\t'fvwm'\n\t\t'dwm'\n\t\t'awesome'\n\t\t'wmaker'\n\t\t'stumpwm'\n\t\t'musca'\n\t\t'xmonad.*'\n\t\t'i3'\n\t\t'ratpoison'\n\t\t'scrotwm'\n\t\t'spectrwm'\n\t\t'wmfs'\n\t\t'wmii'\n\t\t'beryl'\n\t\t'subtle'\n\t\t'e16'\n\t\t'enlightenment'\n\t\t'sawfish'\n\t\t'emerald'\n\t\t'monsterwm'\n\t\t'dminiwm'\n\t\t'compiz'\n\t\t'Finder'\n\t\t'herbstluftwm'\n\t\t'howm'\n\t\t'notion'\n\t\t'bspwm'\n\t\t'cinnamon'\n\t\t'2bwm'\n\t\t'echinus'\n\t\t'swm'\n\t\t'budgie-wm'\n\t\t'dtwm'\n\t\t'9wm'\n\t\t'chromeos-wm'\n\t\t'deepin-wm'\n\t\t'sway'\n\t\t'mwm'\n)\n\n# Array with full Ubuntu release codenames, used so that the script will display\n# i.e. 
\"Ubuntu 18.04 LTS (Bionic Beaver)\" instead of \"Ubuntu 18.04 bionic\"\nubuntu_codenames=(\n\t\t'(Warty Warthog)'\t\t#  4.10\n\t\t'(Hoary Hedgehog)'\t\t#  5.04\n\t\t'(Breezy Badger)'\t\t#  5.10\n\t\t'LTS (Dapper Drake)'\t\t#  6.06\n\t\t'(Edgy Eft)'\t\t\t#  6.10\n\t\t'(Feisty Fawn)'\t\t\t#  7.04\n\t\t'(Gutsy Gibbon)'\t\t#  7.10\n\t\t'LTS (Hardy Heron)'\t\t#  8.04\n\t\t'(Intrepid Ibex)'\t\t#  8.10\n\t\t'(Jaunty Jackalope)'\t\t#  9.04\n\t\t'(Karmic Koala)'\t\t#  9.10\n\t\t'LTS (Lucid Lynx)'\t\t# 10.04\n\t\t'(Maverick Meerkat)'\t\t# 10.10\n\t\t'(Natty Narwhal)'\t\t# 11.04\n\t\t'(Oneiric Ocelot)'\t\t# 11.10\n\t\t'LTS (Precise Pangolin)'\t# 12.04\n\t\t'(Quantal Quetzal)'\t\t# 12.10\n\t\t'(Raring Ringtail)'\t\t# 13.04\n\t\t'(Saucy Salamander)'\t\t# 13.10\n\t\t'LTS (Trusty Tahr)'\t\t# 14.04\n\t\t'(Utopic Unicorn)'\t\t# 14.10\n\t\t'(Vivid Vervet)'\t\t# 15.04\n\t\t'(Wily Werewolf)'\t\t# 15.10\n\t\t'LTS (Xenial Xerus)'\t\t# 16.04\n\t\t'(Yakkety Yak)'\t\t\t# 16.10\n\t\t'(Zesty Zapus)'\t\t\t# 17.04\n\t\t'(Artful Aardvark)'\t\t# 17.10\n\t\t'LTS (Bionic Beaver)'\t\t# 18.04\n\t\t'(Cosmic Cuttlefish)'\t\t# 18.10\n\t\t'(Disco Dingo)'\t\t\t# 19.04\n\t\t'(Eoan Ermine)'\t\t\t# 19.10\n\t\t'LTS (Focal Fossa)'\t\t# 20.04\n\t\t'(Groovy Gorilla)'\t\t# 20.10\n\t\t'(Hirsute Hippo)'\t\t# 21.04\n\t\t'LTS (Jammy Jellyfish)'\t\t# 22.04\n\t\t'(Lunar Lobster)'\t\t# 23.04\n\t\t'(Mantic Minotaur)'\t\t# 23.10\n)\n\n# Screenshot Settings\n# This setting lets the script know if you want to take a screenshot or not. 1=Yes 0=No\nscreenshot=\n# This setting lets the script know if you want to upload the screenshot to a filehost. 1=Yes 0=No\nupload=\n# This setting lets the script know where you would like to upload the file to.\n# Valid hosts are: teknik, mediacrush, imgur, hmp, and a configurable local.\nuploadLoc=\n# You can specify a custom screenshot command here. 
Just uncomment and edit.\n# Otherwise, we'll be using the default command: scrot -cd3.\n# screenCommand=\"scrot -cd5\"\nshotfile=$(printf \"screenFetch-%s.png\" \"$(date +'%Y-%m-%d_%H-%M-%S')\")\n\n# Verbose Setting - Set to 1 for verbose output.\nverbosity=\n\n# Custom lines example:\n# screenfetch -C \"IP 'WAN'=192.168.0.12,IP 'BRIDGED'=10.1.1.10,IP 'HOST-ONLY'=172.16.25.4,,Web terminal interface=http://10.1.1.10\"\n#\n#... will produce:\n#\n# IP 'WAN': 192.168.0.12\n# IP 'BRIDGED': 10.1.1.10\n# IP 'HOST-ONLY': 172.16.25.4\n#\n# Web terminal interface: http://10.1.1.10\n\ncustomlines () {\n\tOLD_IFS=\"${IFS}\"\n\tIFS=,\n\n\tfor custom_line in ${custom_lines_string} ; do\n\t\twhile read -r key; do\n\t\t\tread -r value\n\t\t\tif [[ -z \"${key}\" ]]; then\n\t\t\t\tcustom=\"\" # print an empty line\n\t\t\telse\n\t\t\t\tcustom=$(echo -e \" ${labelcolor}${key}: ${textcolor}${value}\");\n\t\t\tfi\n\t\t\tout_array=( \"${out_array[@]}\" \"${custom}\" ); ((display_index++));\n\t\tdone <<< \"$(echo \"${custom_line}\" | tr \"=\" \"\\n\")\"\n\tdone\n\n\tIFS=\"${OLD_IFS}\"\n}\n\n\n#############################################\n#### CODE No need to edit past here CODE ####\n#############################################\n\n# https://github.com/KittyKatt/screenFetch/issues/549\nif [[ \"${OSTYPE}\" =~ \"linux\" || \"${OSTYPE}\" == \"gnu\" ]]; then\n\t# issue seems to affect Ubuntu; add LSB directories if it appears on other distros too\n\texport GIO_EXTRA_MODULES=\"/usr/lib/x86_64-linux-gnu/gio/modules:/usr/lib/i686-linux-gnu/gio/modules:$GIO_EXTRA_MODULES\"\nfi\n\n#########################################\n# Static Variables and Common Functions #\n#########################################\nc0=$'\\033[0m' # Reset Text\nbold=$'\\033[1m' # Bold Text\nunderline=$'\\033[4m' # Underline Text\ndisplay_index=0\n\n# User options\ngtk_2line=\"no\"\n\n# Static Color Definitions\ncolorize () {\n\tprintf $'\\033[0m\\033[38;5;%sm' \"$1\"\n}\ngetColor () {\n\tlocal tmp_color=\"\"\n\tif [[ -n 
\"$1\" ]]; then\n\t\tif [[ ${BASH_VERSINFO[0]} -ge 4 ]]; then\n\t\t\tif [[ ${BASH_VERSINFO[0]} -eq 4 && ${BASH_VERSINFO[1]} -gt 1 ]] || [[ ${BASH_VERSINFO[0]} -gt 4 ]]; then\n\t\t\t\ttmp_color=${1,,}\n\t\t\telse\n\t\t\t\ttmp_color=\"$(tr '[:upper:]' '[:lower:]' <<< \"${1}\")\"\n\t\t\tfi\n\t\telse\n\t\t\ttmp_color=\"$(tr '[:upper:]' '[:lower:]' <<< \"${1}\")\"\n\t\tfi\n\t\tcase \"${tmp_color}\" in\n\t\t\t# Standards\n\t\t\t'black')\t\t\t\t\tcolor_ret='\\033[0m\\033[30m';;\n\t\t\t'red')\t\t\t\t\t\tcolor_ret='\\033[0m\\033[31m';;\n\t\t\t'green')\t\t\t\t\tcolor_ret='\\033[0m\\033[32m';;\n\t\t\t'brown')\t\t\t\t\tcolor_ret='\\033[0m\\033[33m';;\n\t\t\t'blue')\t\t\t\t\t\tcolor_ret='\\033[0m\\033[34m';;\n\t\t\t'purple')\t\t\t\t\tcolor_ret='\\033[0m\\033[35m';;\n\t\t\t'cyan')\t\t\t\t\t\tcolor_ret='\\033[0m\\033[36m';;\n\t\t\t'yellow')\t\t\t\t\tcolor_ret='\\033[0m\\033[1;33m';;\n\t\t\t'white')\t\t\t\t\tcolor_ret='\\033[0m\\033[1;37m';;\n\t\t\t# Bolds\n\t\t\t'dark grey'|'dark gray')\tcolor_ret='\\033[0m\\033[1;30m';;\n\t\t\t'light red')\t\t\t\tcolor_ret='\\033[0m\\033[1;31m';;\n\t\t\t'light green')\t\t\t\tcolor_ret='\\033[0m\\033[1;32m';;\n\t\t\t'light blue')\t\t\t\tcolor_ret='\\033[0m\\033[1;34m';;\n\t\t\t'light purple')\t\t\t\tcolor_ret='\\033[0m\\033[1;35m';;\n\t\t\t'light cyan')\t\t\t\tcolor_ret='\\033[0m\\033[1;36m';;\n\t\t\t'light grey'|'light gray')\tcolor_ret='\\033[0m\\033[37m';;\n\t\t\t# Some 256 colors\n\t\t\t'orange')\t\t\t\t\tcolor_ret=\"$(colorize '202')\";; #DarkOrange\n\t\t\t'light orange') \t\t\tcolor_ret=\"$(colorize '214')\";; #Orange1\n\t\t\t# HaikuOS\n\t\t\t'black_haiku') \t\t\t\tcolor_ret=\"$(colorize '7')\";;\n\t\t\t#ROSA color\n\t\t\t'rosa_blue') \t\t\t\tcolor_ret='\\033[01;38;05;25m';;\n\t\t\t# ArcoLinux\n\t\t\t'arco_blue') color_ret='\\033[1;38;05;111m';;\n\t\tesac\n\t\t[[ -n \"${color_ret}\" ]] && echo \"${color_ret}\"\n\tfi\n}\n\nverboseOut () {\n\tif [[ \"$verbosity\" -eq \"1\" ]]; then\n\t\tprintf '\\033[1;31m:: \\033[0m%s\\n' 
\"$1\"\n\tfi\n}\n\nerrorOut () {\n\tprintf '\\033[1;37m[[ \\033[1;31m! \\033[1;37m]] \\033[0m%s\\n' \"$1\"\n}\nstderrOut () {\n\twhile IFS='' read -r line; do\n\t\tprintf '\\033[1;37m[[ \\033[1;31m! \\033[1;37m]] \\033[0m%s\\n' \"$line\"\n\tdone\n}\n\n\n####################\n#  Color Defines\n####################\n\ncolorNumberToCode () {\n\tlocal number=\"$1\"\n\tif [[ \"${number}\" == \"na\" ]]; then\n\t\tunset code\n\telif [[ $(tput colors) -eq \"256\" ]]; then\n\t\tcode=$(colorize \"${number}\")\n\telse\n\t\tcase \"$number\" in\n\t\t\t0|00) code=$(getColor 'black');;\n\t\t\t1|01) code=$(getColor 'red');;\n\t\t\t2|02) code=$(getColor 'green');;\n\t\t\t3|03) code=$(getColor 'brown');;\n\t\t\t4|04) code=$(getColor 'blue');;\n\t\t\t5|05) code=$(getColor 'purple');;\n\t\t\t6|06) code=$(getColor 'cyan');;\n\t\t\t7|07) code=$(getColor 'light grey');;\n\t\t\t8|08) code=$(getColor 'dark grey');;\n\t\t\t9|09) code=$(getColor 'light red');;\n\t\t\t  10) code=$(getColor 'light green');;\n\t\t\t  11) code=$(getColor 'yellow');;\n\t\t\t  12) code=$(getColor 'light blue');;\n\t\t\t  13) code=$(getColor 'light purple');;\n\t\t\t  14) code=$(getColor 'light cyan');;\n\t\t\t  15) code=$(getColor 'white');;\n\t\t\t*) unset code;;\n\t\tesac\n\tfi\n\techo -n \"${code}\"\n}\n\n\ndetectColors () {\n\tmy_colors=$(sed 's/^,/na,/;s/,$/,na/;s/,/ /' <<< \"${OPTARG}\")\n\tmy_lcolor=$(\"${AWK}\" -F' ' '{print $1}' <<< \"${my_colors}\")\n\tmy_lcolor=$(colorNumberToCode \"${my_lcolor}\")\n\tmy_hcolor=$(\"${AWK}\" -F' ' '{print $2}' <<< \"${my_colors}\")\n\tmy_hcolor=$(colorNumberToCode \"${my_hcolor}\")\n}\n\nsupported_distros=\"ALDOS, Alpine Linux, AlmaLinux, Alter Linux, Amazon Linux, Antergos, Arch Linux (Old and Current Logos), Arch Linux 32, ArcoLinux, Artix Linux, \\\nblackPanther OS, BLAG, BunsenLabs, CentOS, Chakra, Chapeau, Chrome OS, Chromium OS, CrunchBang, CRUX, \\\nDebian, Deepin, DesaOS,Devuan, Dragora, DraugerOS, elementary OS, EuroLinux, Evolve OS, Sulin, Exherbo, Fedora(Old 
and Current Logos), Frugalware, Fuduntu, Funtoo, \\\nFux, Gentoo, gNewSense, Guix System, Hyperbola GNU/Linux-libre, januslinux, Jiyuu Linux, Kali Linux, KaOS, KDE neon, Kogaion, Korora, \\\nLinuxDeepin, Linux Mint, LMDE, Logos, Mageia, Mandriva/Mandrake, Manjaro, Mer, Netrunner, NixOS, OBRevenge, openSUSE, \\\nOS Elbrus, Oracle Linux, Parabola GNU/Linux-libre, Pardus, Parrot Security, PCLinuxOS, PeppermintOS, Proxmox VE, PureOS, Quirinux, Qubes OS, \\\nRaspbian, Red Hat Enterprise Linux, Rocky Linux, ROSA, Sabayon, SailfishOS, Scientific Linux, Siduction, Slackware, Solus, Source Mage GNU/Linux, \\\nSparkyLinux, SteamOS, SUSE Linux Enterprise, SwagArch, TeArch, TinyCore, Trisquel, Ubuntu, Viperr, Void and Zorin OS and EndeavourOS\"\n\nsupported_other=\"Dragonfly/Free/Open/Net BSD, Haiku, macOS, Windows+Cygwin and Windows+MSYS2.\"\n\nsupported_dms=\"KDE, GNOME, Unity, Xfce, LXDE, Cinnamon, MATE, Deepin, CDE, RazorQt and Trinity.\"\n\nsupported_wms=\"2bwm, 9wm, Awesome, Beryl, Blackbox, Cinnamon, chromeos-wm, Compiz, deepin-wm, \\\ndminiwm, dwm, dtwm, E16, E17, echinus, Emerald, FluxBox, FLWM, FVWM, herbstluftwm, howm, IceWM, KWin, \\\nMetacity, monsterwm, Musca, Gala, Mutter, Muffin, Notion, OpenBox, PekWM, Ratpoison, Sawfish, ScrotWM, SpectrWM, \\\nStumpWM, subtle, sway, TWin, WindowMaker, WMFS, wmii, Xfwm4, XMonad and i3.\"\n\ndisplayHelp () {\n\techo \"${underline}Usage${c0}:\"\n\techo \"  ${0} [OPTIONAL FLAGS]\"\n\techo\n\techo \"screenFetch - a CLI Bash script to show system/theme info in screenshots.\"\n\techo\n\techo \"${underline}Supported GNU/Linux Distributions${c0}:\"\n\techo \"${supported_distros}\" | fold -s | sed 's/^/\\t/g'\n\techo\n\techo \"${underline}Other Supported Systems${c0}:\"\n\techo \"${supported_other}\" | fold -s | sed 's/^/\\t/g'\n\techo\n\techo \"${underline}Supported Desktop Managers${c0}:\"\n\techo \"${supported_dms}\" | fold -s | sed 's/^/\\t/g'\n\techo\n\techo \"${underline}Supported Window Managers${c0}:\"\n\techo 
\"${supported_wms}\" | fold -s | sed 's/^/\\t/g'\n\techo\n\techo \"${underline}Supported Information Displays${c0}:\"\n\techo \"${valid_display[@]}\" | fold -s | sed 's/^/\\t/g'\n\techo\n\techo \"${underline}Options${c0}:\"\n\techo \"   ${bold}-v${c0}                 Verbose output.\"\n\techo \"   ${bold}-o 'OPTIONS'${c0}       Allows for setting script variables on the\"\n\techo \"                      command line. Must be in the following format...\"\n\techo \"                      'OPTION1=\\\"OPTIONARG1\\\";OPTION2=\\\"OPTIONARG2\\\"'\"\n\techo \"   ${bold}-d '+var;-var;var'${c0} Allows for setting what information is displayed\"\n\techo \"                      on the command line. You can add displays with +var,var. You\"\n\techo \"                      can delete displays with -var,var. Setting without + or - will\"\n\techo \"                      set display to that explicit combination. Add and delete statements\"\n\techo \"                      may be used in conjunction by placing a ; between them as so:\"\n\techo \"                      +var,var,var;-var,var. See above to find supported display names.\"\n\techo \"   ${bold}-n${c0}                 Do not display ASCII distribution logo.\"\n\techo \"   ${bold}-L${c0}                 Display ASCII distribution logo only.\"\n\techo \"   ${bold}-N${c0}                 Strip all color from output.\"\n\techo \"   ${bold}-w${c0}                 Wrap long lines.\"\n\techo \"   ${bold}-t${c0}                 Truncate output based on terminal width (Experimental!).\"\n\techo \"   ${bold}-p${c0}                 Portrait output.\"\n\techo \"   ${bold}-s [-u IMGHOST]${c0}    Using this flag tells the script that you want it\"\n\techo \"                      to take a screenshot. Use the -u flag if you would like\"\n\techo \"                      to upload the screenshots to one of the pre-configured\"\n\techo \"                      locations. 
These include: teknik, imgur, mediacrush and hmp.\"\n\techo \"   ${bold}-c string${c0}          You may change the outputted colors with -c. The format is\"\n\techo \"                      as follows: [0-9][0-9],[0-9][0-9]. The first argument controls the\"\n\techo \"                      ASCII logo colors and the label colors. The second argument\"\n\techo \"                      controls the colors of the information found. One argument may be\"\n\techo \"                      used without the other. For terminals supporting 256 colors argument\"\n\techo \"                      may also contain other terminal control codes for bold, underline etc.\"\n\techo \"                      separated by semicolon. For example -c \\\"4;1,1;2\\\" will produce bold\"\n\techo \"                      blue and dim red.\"\n\techo \"   ${bold}-a 'PATH'${c0}          You can specify a custom ASCII art by passing the path\"\n\techo \"                      to a Bash script, defining \\`startline\\` and \\`fulloutput\\`\"\n\techo \"                      variables, and optionally \\`labelcolor\\` and \\`textcolor\\`.\"\n\techo \"                      See the \\`asciiText\\` function in the source code for more\"\n\techo \"                      information on the variables format.\"\n\techo \"   ${bold}-S 'COMMAND'${c0}       Here you can specify a custom screenshot command for\"\n\techo \"                      the script to execute. Surrounding quotes are required.\"\n\techo \"   ${bold}-D 'DISTRO'${c0}        Here you can specify your distribution for the script\"\n\techo \"                      to use. Surrounding quotes are required.\"\n\techo \"   ${bold}-A 'DISTRO'${c0}        Here you can specify the distribution art that you want\"\n\techo \"                      displayed. 
This is for when you want your distro\"\n\techo \"                      detected but want to display a different logo.\"\n\techo \"   ${bold}-E${c0}                 Suppress output of errors.\"\n\techo \"   ${bold}-C${c0}                 Add custom (extra) lines.\"\n\techo \"                      For example:\"\n\techo \"                      \tscreenfetch -C 'IP WAN=192.168.0.12,IP BRIDGED=10.1.1.10'\"\n\techo \"                      ... will add two extra lines:\"\n\techo \"                      \tIP WAN: 192.168.0.12\"\n\techo \"                      \tIP BRIDGED: 10.1.1.10\"\n\techo \"   ${bold}-V, --version${c0}      Display current script version.\"\n\techo \"   ${bold}-h, --help${c0}         Display this help.\"\n}\n\n\ndisplayVersion () {\n\techo \"${underline}screenFetch${c0} - Version ${scriptVersion}\"\n\techo \"Created by and licensed to Brett Bohnenkamper <kittykatt@kittykatt.us>\"\n\techo \"OS X porting done almost solely by shrx (https://github.com/shrx) and John D. Duncan, III (https://github.com/JohnDDuncanIII).\"\n\techo\n\techo \"This is free software; see the source for copying conditions.  There is NO warranty; not even MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\"\n}\n\n\n#####################\n# Begin Flags Phase\n#####################\n\ncase $1 in\n\t--help) displayHelp; exit 0;;\n\t--version) displayVersion; exit 0;;\nesac\n\n# Detect which awk to use (unless already specified in environment)\nif [[ -z \"${AWK}\" ]]; then\n\tfor awk in awk gawk mawk; do\n\t\tif command -v \"${awk}\" > /dev/null; then\n\t\t\tAWK=\"${awk}\"\n\t\t\tbreak\n\t\tfi\n\tdone\nfi\n\nif ! 
command -v \"${AWK}\" > /dev/null; then\n\terrorOut \"No awk interpreter available (AWK=\\\"${AWK}\\\").\"\n\texit 1\nfi\n\n# Parse the rest of the flags (some of these functions require awk)\nwhile getopts \":hsu:evVEnLNtlS:A:D:o:c:d:pa:w:C:\" flags; do\n\tcase $flags in\n\t\th) displayHelp; exit 0 ;;\n\t\ts) screenshot='1' ;;\n\t\tS) screenCommand=\"${OPTARG}\" ;;\n\t\tu) upload='1'; uploadLoc=\"${OPTARG}\" ;;\n\t\tv) verbosity=1 ;;\n\t\tV) displayVersion; exit 0 ;;\n\t\tE) errorSuppress='1' ;;\n\t\tD) distro=\"${OPTARG}\" ;;\n\t\tA) asc_distro=\"${OPTARG}\" ;;\n\t\tt) truncateSet='Yes' ;;\n\t\tn) display_type='Text' ;;\n\t\tL) display_type='ASCII'; display_logo='Yes' ;;\n\t\to) overrideOpts=\"${OPTARG}\" ;;\n\t\tc) detectColors \"${OPTARGS}\" ;;\n\t\td) overrideDisplay=\"${OPTARG}\" ;;\n\t\tN) no_color='1' ;;\n\t\tp) portraitSet='Yes' ;;\n\t\ta) art=\"${OPTARG}\" ;;\n\t\tw) lineWrap='Yes' ;;\n\t\tC) custom_lines_string=\"${OPTARG}\" ;;\n\t\t:) errorOut \"Error: You're missing an argument somewhere. Exiting.\"; exit 1 ;;\n\t\t?) errorOut \"Error: Invalid flag somewhere. Exiting.\"; exit 1 ;;\n\t\t*) errorOut \"Error\"; exit 1 ;;\n\tesac\ndone\n\n###################\n# End Flags Phase\n###################\n\n\n############################\n# Override Options/Display\n############################\n\nif [[ \"$overrideOpts\" ]]; then\n\tverboseOut \"Found 'o' flag in syntax. 
Overriding some script variables...\"\n\teval \"${overrideOpts}\"\nfi\n\n\n#########################\n# Begin Detection Phase\n#########################\n\n# Distro Detection - Begin\ndetectdistro () {\n\tlocal distro_detect=\"\"\n\tif [[ -z \"${distro}\" ]]; then\n\t\tdistro=\"Unknown\"\n\t\t# LSB Release or MCST Version Check\n\t\tif [[ -e \"/etc/mcst_version\" ]]; then\n\t\t\tdistro=\"OS Elbrus\"\n\t\t\tdistro_release=\"$(tail -n 1 /etc/mcst_version)\"\n\t\t\tif [[ -n ${distro_release} ]]; then\n\t\t\t\tdistro_more=\"$distro_release\"\n\t\t\tfi\n\t\telif type -p lsb_release >/dev/null 2>&1; then\n\t\t\t# read distro_detect distro_release distro_codename <<< $(lsb_release -sirc)\n\t\t\t#OLD_IFS=$IFS\n\t\t\t#IFS=\" \"\n\t\t\t#read -r -a distro_detect <<< \"$(lsb_release -sirc)\"\n\t\t\t#IFS=$OLD_IFS\n\t\t\t#if [[ ${#distro_detect[@]} -eq 3 ]]; then\n\t\t\t#\tdistro_codename=${distro_detect[2]}\n\t\t\t#\tdistro_release=${distro_detect[1]}\n\t\t\t#\tdistro_detect=${distro_detect[0]}\n\t\t\t#else\n\t\t\t#\tfor i in \"${!distro_detect[@]}\"; do\n\t\t\t#\t\tif [[ ${distro_detect[$i]} =~ ^[[:digit:]]+((.[[:digit:]]+|[[:digit:]]+|)+)$ ]]; then\n\t\t\t#\t\t\tdistro_release=${distro_detect[$i]}\n\t\t\t#\t\t\tdistro_codename=${distro_detect[*]:$((i+1))}\n\t\t\t#\t\t\tdistro_detect=${distro_detect[*]:0:${i}}\n\t\t\t#\t\t\tbreak 1\n\t\t\t#\t\telif [[ ${distro_detect[$i]} =~ [Nn]/[Aa] || ${distro_detect[$i]} == \"rolling\" ]]; then\n\t\t\t#\t\t\tdistro_release=${distro_detect[$i]}\n\t\t\t#\t\t\tdistro_codename=${distro_detect[*]:$((i+1))}\n\t\t\t#\t\t\tdistro_detect=${distro_detect[*]:0:${i}}\n\t\t\t#\t\t\tbreak 1\n\t\t\t#\t\tfi\n\t\t\t#\tdone\n\t\t\t#fi\n\t\t\tdistro_detect=\"$(lsb_release -si)\"\n\t\t\tdistro_release=\"$(lsb_release -sr)\"\n\t\t\tdistro_codename=\"$(lsb_release -sc)\"\n\t\t\tcase \"${distro_detect}\" in\n\t\t\t\t\"archlinux\"|\"Arch Linux\"|\"arch\"|\"Arch\"|\"archarm\")\n\t\t\t\t\tdistro=\"Arch Linux\"\n\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\tif [ -f 
/etc/os-release ]; then\n\t\t\t\t\t\tos_release=\"/etc/os-release\";\n\t\t\t\t\telif [ -f /usr/lib/os-release ]; then\n\t\t\t\t\t\tos_release=\"/usr/lib/os-release\";\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ -n ${os_release} ]]; then\n\t\t\t\t\t\tif grep -q 'antergos' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"Antergos\"\n\t\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif grep -q -i 'logos' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"Logos\"\n\t\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif grep -q -i 'swagarch' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"SwagArch\"\n\t\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif grep -q -i 'obrevenge' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"OBRevenge\"\n\t\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif grep -q -i 'Alter' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"Alter Linux\"\n\t\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\t\"ALDOS\"|\"Aldos\")\n\t\t\t\t\tdistro=\"ALDOS\"\n\t\t\t\t\t;;\n\t\t\t\t\"ArcoLinux\")\n\t\t\t\t\tdistro=\"ArcoLinux\"\n\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t;;\n\t\t\t\t\"artixlinux\"|\"Artix Linux\"|\"artix\"|\"Artix\"|\"Artix release\")\n\t\t\t\t\tdistro=\"Artix\"\n\t\t\t\t\t;;\n\t\t\t\t\"blackPantherOS\"|\"blackPanther\"|\"blackpanther\"|\"blackpantheros\")\n\t\t\t\t\tdistro=$(source /etc/lsb-release; echo \"$DISTRIB_ID\")\n\t\t\t\t\tdistro_release=$(source /etc/lsb-release; echo \"$DISTRIB_RELEASE\")\n\t\t\t\t\tdistro_codename=$(source /etc/lsb-release; echo \"$DISTRIB_CODENAME\")\n\t\t\t\t\t;;\n\t\t\t\t\"BLAG\")\n\t\t\t\t\tdistro=\"BLAG\"\n\t\t\t\t\tdistro_more=\"$(head -n1 /etc/fedora-release)\"\n\t\t\t\t\t;;\n\t\t\t\t\"Chakra\")\n\t\t\t\t\tdistro=\"Chakra\"\n\t\t\t\t\tdistro_release=\"\"\n\t\t\t\t\t;;\n\t\t\t\t\"BunsenLabs\")\n\t\t\t\t\tdistro=$(source /etc/lsb-release; echo \"$DISTRIB_ID\")\n\t\t\t\t\tdistro_release=$(source /etc/lsb-release; echo 
\"$DISTRIB_RELEASE\")\n\t\t\t\t\tdistro_codename=$(source /etc/lsb-release; echo \"$DISTRIB_CODENAME\")\n\t\t\t\t\t;;\n\t\t\t\t\"Debian\")\n\t\t\t\t\tif [[ -f /etc/crunchbang-lsb-release || -f /etc/lsb-release-crunchbang ]]; then\n\t\t\t\t\t\tdistro=\"CrunchBang\"\n\t\t\t\t\t\tdistro_release=$(\"${AWK}\" -F'=' '/^DISTRIB_RELEASE=/ {print $2}' /etc/lsb-release-crunchbang)\n\t\t\t\t\t\tdistro_codename=$(\"${AWK}\" -F'=' '/^DISTRIB_DESCRIPTION=/ {print $2}' /etc/lsb-release-crunchbang)\n\t\t\t\t\telif [[ -f /etc/siduction-version ]]; then\n\t\t\t\t\t\tdistro=\"Siduction\"\n\t\t\t\t\t\tdistro_release=\"(Debian Sid)\"\n\t\t\t\t\t\tdistro_codename=\"\"\n\t\t\t\t\telif [[ -f /usr/bin/pveversion ]]; then\n\t\t\t\t\t\tdistro=\"Proxmox VE\"\n\t\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\t\tdistro_release=\"$(/usr/bin/pveversion | grep -oP 'pve-manager\\/\\K\\d+\\.\\d+')\"\n\t\t\t\t\telif [[ -f /etc/os-release ]]; then\n\t\t\t\t\t\tif grep -q -i 'Raspbian' /etc/os-release ; then\n\t\t\t\t\t\t\tdistro=\"Raspbian\"\n\t\t\t\t\t\t\tdistro_release=$(\"${AWK}\" -F'=' '/^PRETTY_NAME=/ {print $2}' /etc/os-release)\n\t\t\t\t\t\telif grep -q -i 'BlankOn' /etc/os-release ; then\n\t\t\t\t\t\t\tdistro='BlankOn'\n\t\t\t\t\t\t\tdistro_release=$(\"${AWK}\" -F'=' '/^PRETTY_NAME=/ {print $2}' /etc/os-release)\n\t\t\t\t\t\telif grep -q -i 'Quirinux' /etc/os-release ; then\n\t\t\t\t\t\t\tdistro='Quirinux'\n\t\t\t\t\t\t\tdistro_release=$(\"${AWK}\" -F'=' '/^PRETTY_NAME=/ {print $2}' /etc/os-release)\t\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tdistro=\"Debian\"\n\t\t\t\t\t\tfi\n\t\t\t\t\telse\n\t\t\t\t\t\tdistro=\"Debian\"\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t  \"DraugerOS\")\n\t\t\t    distro = \"DraugerOS\"\n\t\t\t    fake_distro=\"${distro}\"\n\t\t\t    ;;\n\t\t\t\t\"elementary\"|\"elementary OS\")\n\t\t\t\t\tdistro=\"elementary OS\"\n\t\t\t\t\t;;\n\t\t\t\t\"EvolveOS\")\n\t\t\t\t\tdistro=\"Evolve OS\"\n\t\t\t\t\t;;\n\t\t\t\t\"Sulin\")\n\t\t\t\t\tdistro=\"Sulin\"\n\t\t\t\t\tdistro_release=$(\"${AWK}\" 
-F'=' '/^ID_LIKE=/ {print $2}' /etc/os-release)\n\t\t\t\t\tdistro_codename=\"Roolling donkey\" # this is not wrong :D\n\t\t\t\t\t;;\n\t\t\t\t\"KaOS\"|\"kaos\")\n\t\t\t\t\tdistro=\"KaOS\"\n\t\t\t\t\t;;\n\t\t\t\t\"frugalware\")\n\t\t\t\t\tdistro=\"Frugalware\"\n\t\t\t\t\tdistro_codename=null\n\t\t\t\t\tdistro_release=null\n\t\t\t\t\t;;\n\t\t\t\t\"Fuduntu\")\n\t\t\t\t\tdistro=\"Fuduntu\"\n\t\t\t\t\tdistro_codename=null\n\t\t\t\t\t;;\n\t\t\t\t\"Fux\")\n\t\t\t\t\tdistro=\"Fux\"\n\t\t\t\t\tdistro_codename=null\n\t\t\t\t\t;;\n\t\t\t\t\"Gentoo\")\n\t\t\t\t\tif [[ \"$(lsb_release -sd)\" =~ \"Funtoo\" ]]; then\n\t\t\t\t\t\tdistro=\"Funtoo\"\n\t\t\t\t\telse\n\t\t\t\t\t\tdistro=\"Gentoo\"\n\t\t\t\t\tfi\n\n\t\t\t\t\t#detecting release stable/testing/experimental\n\t\t\t\t\tif [[ -f /etc/portage/make.conf ]]; then\n\t\t\t\t\t\tsource /etc/portage/make.conf\n\t\t\t\t\telif [[ -d /etc/portage/make.conf ]]; then\n\t\t\t\t\t\tsource /etc/portage/make.conf/*\n\t\t\t\t\tfi\n\n\t\t\t\t\tcase $ACCEPT_KEYWORDS in\n\t\t\t\t\t\t[a-z]*) distro_release=stable       ;;\n\t\t\t\t\t\t~*)     distro_release=testing      ;;\n\t\t\t\t\t\t'**')   distro_release=experimental ;; #experimental usually includes git-versions.\n\t\t\t\t\tesac\n\t\t\t\t\t;;\n\t\t\t\t\"Hyperbola GNU/Linux-libre\"|\"Hyperbola\")\n\t\t\t\t\tdistro=\"Hyperbola GNU/Linux-libre\"\n\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t;;\n\t\t\t\t\"januslinux\"|\"janus\")\n\t\t\t\t\tdistro=\"januslinux\"\n\t\t\t\t\t;;\n\t\t\t\t\"LinuxDeepin\")\n\t\t\t\t\tdistro=\"LinuxDeepin\"\n\t\t\t\t\tdistro_codename=null\n\t\t\t\t\t;;\n\t\t\t\t\"Uos\")\n\t\t\t\t\tdistro=\"Uos\"\n\t\t\t\t\tdistro_codename=null\n\t\t\t\t\t;;\n\t\t\t\t\"Kali\"|\"Debian Kali Linux\")\n\t\t\t\t\tdistro=\"Kali Linux\"\n\t\t\t\t\tif [[ \"${distro_codename}\" =~ \"kali-rolling\" ]]; then\n\t\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\t\"Lunar 
Linux\"|\"lunar\")\n\t\t\t\t\tdistro=\"Lunar Linux\"\n\t\t\t\t\t;;\n\t\t\t\t\"MandrivaLinux\")\n\t\t\t\t\tdistro=\"Mandriva\"\n\t\t\t\t\tcase \"${distro_codename}\" in\n\t\t\t\t\t\t\"turtle\"|\"Henry_Farman\"|\"Farman\"|\"Adelie\"|\"pauillac\")\n\t\t\t\t\t\t\tdistro=\"Mandriva-${distro_release}\"\n\t\t\t\t\t\t\tdistro_codename=null\n\t\t\t\t\t\t\t;;\n\t\t\t\t\tesac\n\t\t\t\t\t;;\n\t\t\t\t\"ManjaroLinux\")\n\t\t\t\t\tdistro=\"Manjaro\"\n\t\t\t\t\t;;\n\t\t\t\t\"Mer\")\n\t\t\t\t\tdistro=\"Mer\"\n\t\t\t\t\tif [[ -f /etc/os-release ]]; then\n\t\t\t\t\t\tif grep -q 'SailfishOS' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"SailfishOS\"\n\t\t\t\t\t\t\tdistro_codename=\"$(grep 'VERSION=' /etc/os-release | cut -d '(' -f2 | cut -d ')' -f1)\"\n\t\t\t\t\t\t\tdistro_release=\"$(\"${AWK}\" -F'=' '/^VERSION=/ {print $2}' /etc/os-release)\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\t\"neon\"|\"KDE neon\")\n\t\t\t\t\tdistro=\"KDE neon\"\n\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\tif [[ -f /etc/issue ]]; then\n\t\t\t\t\t\tif grep -q '^KDE neon' /etc/issue ; then\n\t\t\t\t\t\t\tdistro_release=\"$(grep '^KDE neon' /etc/issue | cut -d ' ' -f3)\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\t\"Ol\"|\"ol\"|\"Oracle Linux\")\n\t\t\t\t\tdistro=\"Oracle Linux\"\n\t\t\t\t\t[ -f /etc/oracle-release ] && distro_release=\"$(sed 's/Oracle Linux //' /etc/oracle-release)\"\n\t\t\t\t\t;;\n\t\t\t\t\"LinuxMint\")\n\t\t\t\t\tdistro=\"Mint\"\n\t\t\t\t\tif [[ \"${distro_codename}\" == \"debian\" ]]; then\n\t\t\t\t\t\tdistro=\"LMDE\"\n\t\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t#adding support for LMDE 3\n\t\t\t\t\telif [[ \"$(lsb_release -sd)\" =~ \"LMDE\" ]]; then\n\t\t\t\t\t\tdistro=\"LMDE\"\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\t\"openSUSE\"|\"openSUSE project\"|\"SUSE LINUX\" | \"SUSE\")\n\t\t\t\t\tdistro=\"openSUSE\"\n\t\t\t\t\tif [ -f /etc/os-release ]; then\n\t\t\t\t\t\tif grep -q -i 'SUSE Linux Enterprise' 
/etc/os-release ; then\n\t\t\t\t\t\t\tdistro=\"SUSE Linux Enterprise\"\n\t\t\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\t\t\tdistro_release=$(\"${AWK}\" -F'=' '/^VERSION_ID=/ {print $2}' /etc/os-release | tr -d '\"')\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"${distro_codename}\" == \"Tumbleweed\" ]]; then\n\t\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\t\"Parabola GNU/Linux-libre\"|\"Parabola\")\n\t\t\t\t\tdistro=\"Parabola GNU/Linux-libre\"\n\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t;;\n\t\t\t\t\"Parrot\"|\"Parrot Security\")\n\t\t\t\t\tdistro=\"Parrot Security\"\n\t\t\t\t\t;;\n\t\t\t\t\"PCLinuxOS\")\n\t\t\t\t\tdistro=\"PCLinuxOS\"\n\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\tdistro_release=\"n/a\"\n\t\t\t\t\t;;\n\t\t\t\t\"Peppermint\")\n\t\t\t\t\tdistro=\"Peppermint\"\n\t\t\t\t\tdistro_codename=null\n\t\t\t\t\t;;\n\t\t\t\t\"rhel\")\n\t\t\t\t\tdistro=\"Red Hat Enterprise Linux\"\n\t\t\t\t\t;;\n\t\t\t\t\"RosaDesktopFresh\")\n\t\t\t\t\tdistro=\"ROSA\"\n\t\t\t\t\tdistro_release=$(grep 'VERSION=' /etc/os-release | cut -d ' ' -f3 | cut -d \"\\\"\" -f1)\n\t\t\t\t\tdistro_codename=$(grep 'PRETTY_NAME=' /etc/os-release | cut -d ' ' -f4,4)\n\t\t\t\t\t;;\n\t\t\t\t\"SailfishOS\")\n\t\t\t\t\tdistro=\"SailfishOS\"\n\t\t\t\t\tif [[ -f /etc/os-release ]]; then\n\t\t\t\t\t\tdistro_codename=\"$(grep 'VERSION=' /etc/os-release | cut -d '(' -f2 | cut -d ')' -f1)\"\n\t\t\t\t\t\tdistro_release=\"$(\"${AWK}\" -F'=' '/^VERSION=/ {print $2}' /etc/os-release)\"\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\t\"Sparky\"|\"SparkyLinux\")\n\t\t\t\t\tdistro=\"SparkyLinux\"\n\t\t\t\t\t;;\n\t\t\t\t\"TeArch\"|\"TeArchLinux\"|\"\")\n\t\t\t\t\tdistro=\"TeArch\"\n\t\t\t\t\t;;\n\t\t\t\t\"Ubuntu\")\n\t\t\t\t\tfor each in \"${ubuntu_codenames[@]}\"; do\n\t\t\t\t\t\tif [[ \"${each,,}\" =~ \"${distro_codename,,}\" ]]; 
then\n\t\t\t\t\t\t\tdistro_codename=\"$each\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tdone\n\t\t\t\t\t;;\n\t\t\t\t\"Viperr\")\n\t\t\t\t\tdistro=\"Viperr\"\n\t\t\t\t\tdistro_codename=null\n\t\t\t\t\t;;\n\t\t\t\t\"Void\"|\"VoidLinux\")\n\t\t\t\t\tdistro=\"Void Linux\"\n\t\t\t\t\tdistro_codename=\"\"\n\t\t\t\t\tdistro_release=\"\"\n\t\t\t\t\t;;\n\t\t\t\t\"Zorin\")\n\t\t\t\t\tdistro=\"Zorin OS\"\n\t\t\t\t\tdistro_codename=\"\"\n\t\t\t\t\t;;\n\t\t\t\t*)\n\t\t\t\t\t# Red Star OS: the hex string below is the UTF-8 encoding of its Korean name, 붉은별\n\t\t\t\t\tif [ \"x$(printf '%s' \"${distro_detect}\" | od -t x1 | sed -e 's/^\\w*\\ *//' | tr '\\n' ' ' | grep 'eb b6 89 ec 9d 80 eb b3 84 ')\" != \"x\" ]; then\n\t\t\t\t\t\tdistro=\"Red Star OS\"\n\t\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\t\tdistro_release=$(printf '%s' \"${distro_release}\" | grep -o '[0-9.]' | tr -d '\\n')\n\t\t\t\t\telse\n\t\t\t\t\t\tdistro=\"${distro_detect}\"\n\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\tesac\n\t\t\tif [[ \"${distro_detect}\" =~ \"CentOSStream\" ]]; then\n\t\t\t\tdistro=\"CentOS Stream\"\n\t\t\tfi\n\t\t\tif [[ \"${distro_detect}\" =~ \"RedHatEnterprise\" ]]; then\n\t\t\t\tdistro=\"Red Hat Enterprise Linux\"\n\t\t\tfi\n\t\t\tif [[ \"${distro_detect}\" =~ \"Rocky Linux\" ]]; then\n\t\t\t\tdistro=\"Rocky Linux\"\n\t\t\tfi\n\t\t\tif [[ \"${distro_detect}\" =~ \"SUSELinuxEnterprise\" ]]; then\n\t\t\t\tdistro=\"SUSE Linux Enterprise\"\n\t\t\tfi\n\t\t\tif [[ -n ${distro_release} && ${distro_release} != \"n/a\" ]]; then\n\t\t\t\tdistro_more=\"$distro_release\"\n\t\t\tfi\n\t\t\tif [[ -n ${distro_codename} && ${distro_codename} != \"n/a\" ]]; then\n\t\t\t\tdistro_more=\"$distro_more $distro_codename\"\n\t\t\tfi\n\t\tfi\n\n\t\t# Existing File Check\n\t\tif [ \"$distro\" == \"Unknown\" ]; then\n\t\t\tif [ \"$(uname -o 2>/dev/null)\" ]; then\n\t\t\t\tos=\"$(uname -o)\"\n\t\t\t\tcase \"$os\" 
in\n\t\t\t\t\t\"Cygwin\"|\"FreeBSD\"|\"OpenBSD\"|\"NetBSD\")\n\t\t\t\t\t\tdistro=\"$os\"\n\t\t\t\t\t\tfake_distro=\"${distro}\"\n\t\t\t\t\t;;\n\t\t\t\t\t\"DragonFly\")\n\t\t\t\t\t\tdistro=\"DragonFlyBSD\"\n\t\t\t\t\t\tfake_distro=\"${distro}\"\n\t\t\t\t\t;;\n\t\t\t\t\t\"EndeavourOS\")\n\t\t\t\t\t\tdistro=\"EndeavourOS\"\n\t\t\t\t\t\tfake_distro=\"${distro}\"\n\t\t\t\t\t;;\n\t\t\t\t\t\"Msys\")\n\t\t\t\t\t\tdistro=\"Msys\"\n\t\t\t\t\t\tfake_distro=\"${distro}\"\n\t\t\t\t\t\tdistro_more=\"${distro} $(uname -r | head -c 1)\"\n\t\t\t\t\t;;\n\t\t\t\t\t\"Haiku\")\n\t\t\t\t\t\tdistro=\"Haiku\"\n\t\t\t\t\t\tdistro_more=\"$(uname -v | \"${AWK}\" '/^hrev/ {print $1}')\"\n\t\t\t\t\t;;\n\t\t\t\t\t\"GNU/Linux\")\n\t\t\t\t\t\tif type -p crux >/dev/null 2>&1; then\n\t\t\t\t\t\t\tdistro=\"CRUX\"\n\t\t\t\t\t\t\tdistro_more=\"$(crux | \"${AWK}\" '{print $3}')\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif type -p nixos-version >/dev/null 2>&1; then\n\t\t\t\t\t\t\tdistro=\"NixOS\"\n\t\t\t\t\t\t\tdistro_more=\"$(nixos-version)\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif type -p sorcery >/dev/null 2>&1; then\n\t\t\t\t\t\t\tdistro=\"SMGL\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif (type -p guix && type -p herd) >/dev/null 2>&1; then\n\t\t\t\t\t\t\tdistro=\"Guix System\"\n\t\t\t\t\t\tfi\n\t\t\t\t\t;;\n\t\t\t\tesac\n\t\t\tfi\n\t\t\tif [[ \"${distro}\" == \"Cygwin\" || \"${distro}\" == \"Msys\" ]]; then\n\t\t\t\t# https://msdn.microsoft.com/en-us/library/ms724832%28VS.85%29.aspx\n\t\t\t\tif wmic os get version | grep -q '^\\(6\\.[23]\\|10\\)'; then\n\t\t\t\t\tfake_distro=\"Windows - Modern\"\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [[ \"${distro}\" == \"Unknown\" ]]; then\n\t\t\t\tif [ -f /etc/os-release ]; then\n\t\t\t\t\tos_release=\"/etc/os-release\";\n\t\t\t\telif [ -f /usr/lib/os-release ]; then\n\t\t\t\t\tos_release=\"/usr/lib/os-release\";\n\t\t\t\tfi\n\t\t\t\tif [[ -n ${os_release} ]]; then\n\t\t\t\t\tdistrib_id=$(<${os_release});\n\t\t\t\t\tfor l in $distrib_id; do\n\t\t\t\t\t\tif [[ ${l} =~ ^ID= ]]; 
then\n\t\t\t\t\t\t\tdistrib_id=${l//*=}\n\t\t\t\t\t\t\tdistrib_id=${distrib_id//\\\"/}\n\t\t\t\t\t\t\tbreak 1\n\t\t\t\t\t\tfi\n\t\t\t\t\tdone\n\t\t\t\t\tif [[ -n ${distrib_id} ]]; then\n\t\t\t\t\t\tif [[ ${BASH_VERSINFO[0]} -ge 4 ]]; then\n\t\t\t\t\t\t\tdistrib_id=$(for i in ${distrib_id}; do echo -n \"${i^} \"; done)\n\t\t\t\t\t\t\tdistro=${distrib_id% }\n\t\t\t\t\t\t\tunset distrib_id\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tdistrib_id=$(for i in ${distrib_id}; do FIRST_LETTER=$(echo -n \"${i:0:1}\" | tr \"[:lower:]\" \"[:upper:]\"); echo -n \"${FIRST_LETTER}${i:1} \"; done)\n\t\t\t\t\t\t\tdistro=${distrib_id% }\n\t\t\t\t\t\t\tunset distrib_id\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\n\t\t\t\t\t# Hotfixes\n\t\t\t\t\t[[ \"${distro}\" == \"Opensuse-tumbleweed\" ]] && distro=\"openSUSE\" && distro_more=\"Tumbleweed\"\n\t\t\t\t\t[[ \"${distro}\" == \"Opensuse-leap\" ]] && distro=\"openSUSE\"\n\t\t\t\t\t[[ \"${distro}\" == \"void\" ]] && distro=\"Void Linux\"\n\t\t\t\t\t[[ \"${distro}\" == \"evolveos\" ]] && distro=\"Evolve OS\"\n\t\t\t\t\t[[ \"${distro}\" == \"Sulin\" ]] && distro=\"Sulin\"\n\t\t\t\t\t[[ \"${distro}\" == \"antergos\" ]] && distro=\"Antergos\"\n\t\t\t\t\t[[ \"${distro}\" == \"logos\" ]] && distro=\"Logos\"\n\t\t\t\t\t[[ \"${distro}\" == \"alter\" || \"${distro}\" == \"Alter\" ]] && distro=\"Alter Linux\"\n\t\t\t\t\t[[ \"${distro}\" == \"Arch\" || \"${distro}\" == \"Archarm\" || \"${distro}\" == \"archarm\" ]] && distro=\"Arch Linux\"\n\t\t\t\t\t[[ \"${distro}\" == \"elementary\" ]] && distro=\"elementary OS\"\n\t\t\t\t\t[[ \"${distro}\" == \"Fedora\" && -d /etc/qubes-rpc ]] && distro=\"qubes\" # Inner VM\n\t\t\t\t\t[[ \"${distro}\" == \"Ol\" || \"${distro}\" == \"ol\" ]] && distro=\"Oracle Linux\"\n\t\t\t\t\tif [[ \"${distro}\" == \"Oracle Linux\" && -f /etc/oracle-release ]]; then\n\t\t\t\t\t\tdistro_more=\"$(sed 's/Oracle Linux //' /etc/oracle-release)\"\n\t\t\t\t\tfi\n\t\t\t\t\t# Upstream problem: Scientific Linux and EuroLinux use the rhel ID in os-release\n\t\t\t\t\tif 
[[ \"${distro}\" == \"rhel\" ]] || [[ \"${distro}\" == \"Rhel\" ]]; then\n\t\t\t\t\t\tdistro=\"Red Hat Enterprise Linux\"\n\t\t\t\t\t\tif grep -q 'Scientific' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"Scientific Linux\"\n\t\t\t\t\t\telif grep -q 'EuroLinux' /etc/os-release; then\n\t\t\t\t\t\t\tdistro=\"EuroLinux\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\n\t\t\t\t\t[[ \"${distro}\" == \"Neon\" ]] && distro=\"KDE neon\"\n\t\t\t\t\t[[ \"${distro}\" == \"SLED\" || \"${distro}\" == \"sled\" || \"${distro}\" == \"SLES\" || \"${distro}\" == \"sles\" ]] && distro=\"SUSE Linux Enterprise\"\n\t\t\t\t\tif [[ \"${distro}\" == \"SUSE Linux Enterprise\" && -f /etc/os-release ]]; then\n\t\t\t\t\t\tdistro_more=\"$(\"${AWK}\" -F'=' '/^VERSION_ID=/ {print $2}' /etc/os-release | tr -d '\"')\"\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"${distro}\" == \"Debian\" && -f /usr/bin/pveversion ]]; then\n\t\t\t\t\t\tdistro=\"Proxmox VE\"\n\t\t\t\t\t\tdistro_codename=\"n/a\"\n\t\t\t\t\t\tdistro_release=\"$(/usr/bin/pveversion | grep -oP 'pve-manager\\/\\K\\d+\\.\\d+')\"\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"${distro}\" == \"Almalinux\" && -f /etc/almalinux-release ]]; then\n\t\t\t\t\t\tdistro=\"AlmaLinux\"\n\t\t\t\t\t\tdistro_release=$(sed 's/AlmaLinux release //' /etc/almalinux-release | cut -f1 -d' ')\n\t\t\t\t\t\tdistro_codename=$(cut -f2 -d'(' /etc/almalinux-release | cut -f1 -d')')\n\t\t\t\t\t\tdistro_more=$(cut -d' ' -f3,4,5,6 /etc/almalinux-release)\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\tfi\n\n\t\t\tif [[ \"${distro}\" == \"Unknown\" && \"${OSTYPE}\" =~ \"linux\" && -f /etc/lsb-release ]]; then\n\t\t\t\tLSB_RELEASE=$(</etc/lsb-release)\n\t\t\t\tdistro=$(echo \"${LSB_RELEASE}\" | \"${AWK}\" 'BEGIN {\n\t\t\t\t\tdistro = \"Unknown\"\n\t\t\t\t}\n\t\t\t\t{\n\t\t\t\t\tif ($0 ~ /[Uu][Bb][Uu][Nn][Tt][Uu]/) {\n\t\t\t\t\t\tdistro = \"Ubuntu\"\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t\telse if ($0 ~ /[Mm][Ii][Nn][Tt]/ && $0 ~ /[Dd][Ee][Bb][Ii][Aa][Nn]/) {\n\t\t\t\t\t\tdistro = 
\"LMDE\"\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t\telse if ($0 ~ /[Mm][Ii][Nn][Tt]/) {\n\t\t\t\t\t\tdistro = \"Mint\"\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t} END {\n\t\t\t\t\tprint distro\n\t\t\t\t}')\n\t\t\tfi\n\n\t\t\tif [[ \"${distro}\" == \"Unknown\" ]] && [[ \"${OSTYPE}\" =~ \"linux\" || \"${OSTYPE}\" == \"gnu\" ]]; then\n\t\t\t\tfor di in arch chakra crunchbang-lsb evolveos exherbo fedora \\\n\t\t\t\t\t\t\tfrugalware fux gentoo kogaion mageia obarun oracle \\\n\t\t\t\t\t\t\tpardus pclinuxos redhat redstar rosa SuSe; do\n\t\t\t\t\tif [ -f /etc/$di-release ]; then\n\t\t\t\t\t\tdistro=$di\n\t\t\t\t\t\tbreak\n\t\t\t\t\tfi\n\t\t\t\tdone\n\t\t\t\tif [[ \"${distro}\" == \"crunchbang-lsb\" ]]; then\n\t\t\t\t\tdistro=\"Crunchbang\"\n\t\t\t\telif [[ \"${distro}\" == \"gentoo\" ]]; then\n\t\t\t\t\tgrep -q -i 'Funtoo' /etc/gentoo-release && distro=\"Funtoo\"\n\t\t\t\telif [[ \"${distro}\" == \"mandrake\" ]] || [[ \"${distro}\" == \"mandriva\" ]]; then\n\t\t\t\t\tgrep -q -i 'PCLinuxOS' /etc/${distro}-release && distro=\"PCLinuxOS\"\n\t\t\t\telif [[ \"${distro}\" == \"fedora\" ]]; then\n\t\t\t\t\tgrep -q -i 'Korora' /etc/fedora-release && distro=\"Korora\"\n\t\t\t\t\tgrep -q -i 'BLAG' /etc/fedora-release && distro=\"BLAG\" && distro_more=\"$(head -n1 /etc/fedora-release)\"\n\t\t\t\telif [[ \"${distro}\" == \"oracle\" ]]; then\n\t\t\t\t\tdistro_more=\"$(sed 's/Oracle Linux //' /etc/oracle-release)\"\n\t\t\t\telif [[ \"${distro}\" == \"SuSe\" ]]; then\n\t\t\t\t\tdistro=\"openSUSE\"\n\t\t\t\t\tif [ -f /etc/os-release ]; then\n\t\t\t\t\t\tif grep -q -i 'SUSE Linux Enterprise' /etc/os-release ; then\n\t\t\t\t\t\t\tdistro=\"SUSE Linux Enterprise\"\n\t\t\t\t\t\t\tdistro_more=$(\"${AWK}\" -F'=' '/^VERSION_ID=/ {print $2}' /etc/os-release | tr -d '\"')\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"${distro_more}\" =~ \"Tumbleweed\" ]]; then\n\t\t\t\t\t\tdistro_more=\"Tumbleweed\"\n\t\t\t\t\tfi\n\t\t\t\telif [[ \"${distro}\" == \"redstar\" ]]; then\n\t\t\t\t\tdistro_more=$(grep 
-o '[0-9.]' /etc/redstar-release | tr -d '\\n')\n\t\t\t\telif [[ \"${distro}\" == \"redhat\" ]]; then\n\t\t\t\t\tgrep -q -i 'CentOS' /etc/redhat-release && distro=\"CentOS\"\n\t\t\t\t\tgrep -q -i 'Rocky Linux' /etc/redhat-release && distro=\"Rocky Linux\"\n\t\t\t\t\tgrep -q -i 'Almalinux' /etc/redhat-release && distro=\"AlmaLinux\"\n\t\t\t\t\tgrep -q -i 'Scientific' /etc/redhat-release && distro=\"Scientific Linux\"\n\t\t\t\t\tgrep -q -i 'EuroLinux' /etc/redhat-release && distro=\"EuroLinux\"\n\t\t\t\t\tgrep -q -i 'PCLinuxOS' /etc/redhat-release && distro=\"PCLinuxOS\"\n\t\t\t\t\tif [ \"x$(od -t x1 /etc/redhat-release | sed -e 's/^\\w*\\ *//' | tr '\\n' ' ' | grep 'eb b6 89 ec 9d 80 eb b3 84 ')\" != \"x\" ]; then\n\t\t\t\t\t\tdistro=\"Red Star OS\"\n\t\t\t\t\t\tdistro_more=$(grep -o '[0-9.]' /etc/redhat-release | tr -d '\\n')\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\tfi\n\n\t\t\tif [[ \"${distro}\" == \"Unknown\" ]]; then\n\t\t\t\tif [[ \"${OSTYPE}\" =~ \"linux\" || \"${OSTYPE}\" == \"gnu\" ]]; then\n\t\t\t\t\tif [ -f /etc/debian_version ]; then\n\t\t\t\t\t\tif [ -f /etc/issue ]; then\n\t\t\t\t\t\t\tif grep -q -i 'gNewSense' /etc/issue ; then\n\t\t\t\t\t\t\t\tdistro=\"gNewSense\"\n\t\t\t\t\t\t\telif grep -q -i 'KDE neon' /etc/issue ; then\n\t\t\t\t\t\t\t\tdistro=\"KDE neon\"\n\t\t\t\t\t\t\t\tdistro_more=\"$(cut -d ' ' -f3 /etc/issue)\"\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tdistro=\"Debian\"\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\tfi\n\t\t\t\t\t\tif grep -q -i 'Kali' /etc/debian_version ; then\n\t\t\t\t\t\t\tdistro=\"Kali Linux\"\n\t\t\t\t\t\tfi\n\t\t\t\t\telif [ -f /etc/NIXOS ]; then distro=\"NixOS\"\n\t\t\t\t\telif [ -f /etc/dragora-version ]; then\n\t\t\t\t\t\tdistro=\"Dragora\"\n\t\t\t\t\t\tdistro_more=\"$(cut -d, -f1 /etc/dragora-version)\"\n\t\t\t\t\telif [ -f /etc/slackware-version ]; then distro=\"Slackware\"\n\t\t\t\t\telif [ -f /usr/share/doc/tc/release.txt ]; then\n\t\t\t\t\t\tdistro=\"TinyCore\"\n\t\t\t\t\t\tdistro_more=\"$(cat 
/usr/share/doc/tc/release.txt)\"\n\t\t\t\t\telif [ -f /etc/sabayon-edition ]; then distro=\"Sabayon\"\n\t\t\t\t\tfi\n\t\t\t\telse\n\t\t\t\t\tif [[ -x /usr/bin/sw_vers ]] && /usr/bin/sw_vers | grep -i 'Mac OS X' >/dev/null; then\n\t\t\t\t\t\tdistro=\"Mac OS X\"\n\t\t\t\t\telif [[ -x /usr/bin/sw_vers ]] && /usr/bin/sw_vers | grep -i 'macOS' >/dev/null; then\n\t\t\t\t\t\tdistro=\"macOS\"\n\t\t\t\t\telif [[ -f /var/run/dmesg.boot ]]; then\n\t\t\t\t\t\tdistro=$(\"${AWK}\" 'BEGIN {\n\t\t\t\t\t\t\tdistro = \"Unknown\"\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif ($0 ~ /DragonFly/) {\n\t\t\t\t\t\t\t\tdistro = \"DragonFlyBSD\"\n\t\t\t\t\t\t\t\texit\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if ($0 ~ /FreeBSD/) {\n\t\t\t\t\t\t\t\tdistro = \"FreeBSD\"\n\t\t\t\t\t\t\t\texit\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if ($0 ~ /NetBSD/) {\n\t\t\t\t\t\t\t\tdistro = \"NetBSD\"\n\t\t\t\t\t\t\t\texit\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\telse if ($0 ~ /OpenBSD/) {\n\t\t\t\t\t\t\t\tdistro = \"OpenBSD\"\n\t\t\t\t\t\t\t\texit\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} END {\n\t\t\t\t\t\t\tprint distro\n\t\t\t\t\t\t}' /var/run/dmesg.boot)\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\tfi\n\n\t\t\tif [[ \"${distro}\" == \"Unknown\" ]] && [[ \"${OSTYPE}\" =~ \"linux\" || \"${OSTYPE}\" == \"gnu\" ]]; then\n\t\t\t\tif [[ -f /etc/issue ]]; then\n\t\t\t\t\tdistro=$(\"${AWK}\" 'BEGIN {\n\t\t\t\t\t\tdistro = \"Unknown\"\n\t\t\t\t\t}\n\t\t\t\t\t{\n\t\t\t\t\t\tif ($0 ~ /\"Hyperbola GNU\\/Linux-libre\"/) {\n\t\t\t\t\t\t\tdistro = \"Hyperbola GNU/Linux-libre\"\n\t\t\t\t\t\t\texit\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if ($0 ~ /\"LinuxDeepin\"/) {\n\t\t\t\t\t\t\tdistro = \"LinuxDeepin\"\n\t\t\t\t\t\t\texit\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if ($0 ~ /\"Obarun\"/) {\n\t\t\t\t\t\t\tdistro = \"Obarun\"\n\t\t\t\t\t\t\texit\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if ($0 ~ /\"Parabola GNU\\/Linux-libre\"/) {\n\t\t\t\t\t\t\tdistro = \"Parabola GNU/Linux-libre\"\n\t\t\t\t\t\t\texit\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if ($0 ~ /\"Solus\"/) {\n\t\t\t\t\t\t\tdistro 
= \"Solus\"\n\t\t\t\t\t\t\texit\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse if ($0 ~ /\"ALDOS\"/) {\n\t\t\t\t\t\t\tdistro = \"ALDOS\"\n\t\t\t\t\t\t\texit\n\t\t\t\t\t\t}\n\t\t\t\t\t} END {\n\t\t\t\t\t\tprint distro\n\t\t\t\t\t}' /etc/issue)\n\t\t\t\tfi\n\t\t\tfi\n\n\t\t\tif [[ \"${distro}\" == \"Unknown\" ]] && [[ \"${OSTYPE}\" =~ \"linux\" || \"${OSTYPE}\" == \"gnu\" ]]; then\n\t\t\t\tif [[ -f /etc/system-release ]]; then\n\t\t\t\t\tif grep -q -i 'Scientific Linux' /etc/system-release; then\n\t\t\t\t\t\tdistro=\"Scientific Linux\"\n\t\t\t\t\telif grep -q -i 'Oracle Linux' /etc/system-release; then\n\t\t\t\t\t\tdistro=\"Oracle Linux\"\n\t\t\t\t\tfi\n\t\t\t\telif [[ -f /etc/lsb-release ]]; then\n\t\t\t\t\tif grep -q -i 'CHROMEOS_RELEASE_NAME' /etc/lsb-release; then\n\t\t\t\t\t\tdistro=\"$(\"${AWK}\" -F'=' '/^CHROMEOS_RELEASE_NAME=/ {print $2}' /etc/lsb-release)\"\n\t\t\t\t\t\tdistro_more=\"$(\"${AWK}\" -F'=' '/^CHROMEOS_RELEASE_VERSION=/ {print $2}' /etc/lsb-release)\"\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\tfi\n\t\tfi\n\tfi\n\n\tif [[ -n ${distro_more} ]]; then\n\t\tdistro_more=\"${distro} ${distro_more}\"\n\tfi\n\n\tif [[ \"${distro}\" != \"Haiku\" ]]; then\n\t\tif [[ ${BASH_VERSINFO[0]} -ge 4 ]]; then\n\t\t\tif [[ ${BASH_VERSINFO[0]} -eq 4 && ${BASH_VERSINFO[1]} -gt 1 ]] || [[ ${BASH_VERSINFO[0]} -gt 4 ]]; then\n\t\t\t\tdistro=${distro,,}\n\t\t\telse\n\t\t\t\tdistro=\"$(tr '[:upper:]' '[:lower:]' <<< \"${distro}\")\"\n\t\t\tfi\n\t\telse\n\t\t\tdistro=\"$(tr '[:upper:]' '[:lower:]' <<< \"${distro}\")\"\n\t\tfi\n\tfi\n\n\tcase $distro in\n\t\taldos) distro=\"ALDOS\";;\n\t\talpine) distro=\"Alpine Linux\" ;;\n\t\talmalinux) distro=\"AlmaLinux\" ;;\n\t\talter*linux|alter) distro=\"Alter Linux\" ;;\n\t\tamzn|amazon|amazon*linux) distro=\"Amazon Linux\" ;;\n\t\tantergos) distro=\"Antergos\" ;;\n\t\tarch*linux*old) distro=\"Arch Linux - Old\" ;;\n\t\tarch|arch*linux) distro=\"Arch Linux\" ;;\n\t\tarch32) distro=\"Arch Linux 32\" ;;\n\t\tarcolinux|arcolinux*) distro=\"ArcoLinux\" 
;;\n\t\tartix|artix*linux) distro=\"Artix Linux\" ;;\n\t\tblackpantheros|black*panther*) distro=\"blackPanther OS\" ;;\n\t\tblag) distro=\"BLAG\" ;;\n\t\tbunsenlabs) distro=\"BunsenLabs\" ;;\n\t\tcentos) distro=\"CentOS\" ;;\n\t\tcentos*stream) distro=\"CentOS Stream\" ;;\n\t\tchakra) distro=\"Chakra\" ;;\n\t\tchapeau) distro=\"Chapeau\" ;;\n\t\tchrome*|chromium*) distro=\"Chrome OS\" ;;\n\t\tcrunchbang) distro=\"CrunchBang\" ;;\n\t\tcrux) distro=\"CRUX\" ;;\n\t\tcygwin) distro=\"Cygwin\" ;;\n\t\tdebian) distro=\"Debian\" ;;\n\t\tdevuan) distro=\"Devuan\" ;;\n\t\tdeepin) distro=\"Deepin\" ;;\n\t\tuos) distro=\"Uos\" ;;\n\t\tdesaos) distro=\"DesaOS\" ;;\n\t\tdragonflybsd) distro=\"DragonFlyBSD\" ;;\n\t\tdragora) distro=\"Dragora\" ;;\n\t\tdrauger*) distro=\"DraugerOS\" ;;\n\t\telementary|'elementary os') distro=\"elementary OS\";;\n\t\teurolinux) distro=\"EuroLinux\" ;;\n\t\tevolveos) distro=\"Evolve OS\" ;;\n\t\tsulin) distro=\"Sulin\" ;;\n\t\texherbo|exherbo*linux) distro=\"Exherbo\" ;;\n\t\tfedora) distro=\"Fedora\" ;;\n\t\tfedora*old) distro=\"Fedora - Old\" ;;\n\t\tfreebsd) distro=\"FreeBSD\" ;;\n\t\tfreebsd*old) distro=\"FreeBSD - Old\" ;;\n\t\tfrugalware) distro=\"Frugalware\" ;;\n\t\tfuduntu) distro=\"Fuduntu\" ;;\n\t\tfuntoo) distro=\"Funtoo\" ;;\n\t\tfux) distro=\"Fux\" ;;\n\t\tgentoo) distro=\"Gentoo\" ;;\n\t\tgnewsense) distro=\"gNewSense\" ;;\n\t\tguix*system) distro=\"Guix System\" ;;\n\t\thaiku) distro=\"Haiku\" ;;\n\t\thyperbolagnu|hyperbolagnu/linux-libre|'hyperbola gnu/linux-libre'|hyperbola) distro=\"Hyperbola GNU/Linux-libre\" ;;\n\t\tjanuslinux) distro=\"januslinux\" ;;\n\t\tkali*linux) distro=\"Kali Linux\" ;;\n\t\tkaos) distro=\"KaOS\";;\n\t\tkde*neon|neon) distro=\"KDE neon\" ;;\n\t\tkogaion) distro=\"Kogaion\" ;;\n\t\tkorora) distro=\"Korora\" ;;\n\t\tlinuxdeepin) distro=\"LinuxDeepin\" ;;\n\t\tlmde) distro=\"LMDE\" ;;\n\t\tlogos) distro=\"Logos\" ;;\n\t\tlunar|lunar*linux) distro=\"Lunar Linux\";;\n\t\tmac*os*x|os*x) distro=\"Mac OS X\" 
;;\n\t\tmacos) distro=\"macOS\" ;;\n\t\tmanjaro) distro=\"Manjaro\" ;;\n\t\tmageia) distro=\"Mageia\" ;;\n\t\tmandrake) distro=\"Mandrake\" ;;\n\t\tmandriva) distro=\"Mandriva\" ;;\n\t\tmer) distro=\"Mer\" ;;\n\t\tmint|linux*mint) distro=\"Mint\" ;;\n\t\tmsys|msys2) distro=\"Msys\" ;;\n\t\tnetbsd) distro=\"NetBSD\" ;;\n\t\tnetrunner) distro=\"Netrunner\" ;;\n\t\tnix|nix*os) distro=\"NixOS\" ;;\n\t\tobarun) distro=\"Obarun\" ;;\n\t\tobrevenge) distro=\"OBRevenge\" ;;\n\t\tol|oracle*linux) distro=\"Oracle Linux\" ;;\n\t\topenbsd) distro=\"OpenBSD\" ;;\n\t\topensuse) distro=\"openSUSE\" ;;\n\t\tos*elbrus) distro=\"OS Elbrus\" ;;\n\t\tparabolagnu|parabolagnu/linux-libre|'parabola gnu/linux-libre'|parabola) distro=\"Parabola GNU/Linux-libre\" ;;\n\t\tpardus) distro=\"Pardus\" ;;\n\t\tparrot|parrot*security) distro=\"Parrot Security\" ;;\n\t\tpclinuxos|pclos) distro=\"PCLinuxOS\" ;;\n\t\tpeppermint) distro=\"Peppermint\" ;;\n\t\tproxmox|proxmox*ve) distro=\"Proxmox VE\" ;;\n\t\tpureos) distro=\"PureOS\" ;;\n\t\tquirinux) distro=\"Quirinux\" ;;\n\t\tqubes) distro=\"Qubes OS\" ;;\n\t\traspbian) distro=\"Raspbian\" ;;\n\t\tred*hat*|rhel) distro=\"Red Hat Enterprise Linux\" ;;\n\t\trosa) distro=\"ROSA\" ;;\n\t\tred*star|red*star*os) distro=\"Red Star OS\" ;;\n\t\trocky) distro=\"Rocky Linux\" ;;\n\t\tsabayon) distro=\"Sabayon\" ;;\n\t\tsailfish|sailfish*os) distro=\"SailfishOS\" ;;\n\t\tscientific*) distro=\"Scientific Linux\" ;;\n\t\tsiduction) distro=\"Siduction\" ;;\n\t\tslackware) distro=\"Slackware\" ;;\n\t\tsmgl|source*mage|source*mage*gnu*linux) distro=\"Source Mage GNU/Linux\" ;;\n\t\tsolus) distro=\"Solus\" ;;\n\t\tsparky|sparky*linux) distro=\"SparkyLinux\" ;;\n\t\tsteam|steam*os) distro=\"SteamOS\" ;;\n\t\tsuse*linux*enterprise) distro=\"SUSE Linux Enterprise\" ;;\n\t\tswagarch) distro=\"SwagArch\" ;;\n\t\ttearch*) distro=\"TeArch\" ;;\n\t\ttinycore|tinycore*linux) distro=\"TinyCore\" ;;\n\t\ttrisquel) distro=\"Trisquel\";;\n\t\tgrombyangos) 
distro=\"GrombyangOS\" ;;\n\t\tubuntu) distro=\"Ubuntu\";;\n\t\tviperr) distro=\"Viperr\" ;;\n\t\tvoid*linux) distro=\"Void Linux\" ;;\n\t\tzorin*) distro=\"Zorin OS\" ;;\n\t\tendeavour*) distro=\"EndeavourOS\" ;;\n\tesac\n\tif grep -q -i 'Microsoft' /proc/version 2>/dev/null || \\\n\t\tgrep -q -i 'Microsoft' /proc/sys/kernel/osrelease 2>/dev/null\n\tthen\n\t\twsl=\"(on the Windows Subsystem for Linux)\"\n\tfi\n\tverboseOut \"Finding distro...found as '${distro} ${distro_release}'\"\n}\n# Distro Detection - End\n\n# Host and User detection - Begin\ndetecthost () {\n\tmyUser=${USER}\n\tmyHost=${HOSTNAME}\n\tif [[ -z \"$USER\" ]]; then\n\t\tmyUser=$(whoami)\n\tfi\n\tif [[ \"${distro}\" == \"Mac OS X\" || \"${distro}\" == \"macOS\" ]]; then\n\t\tmyHost=${myHost/.local}\n\tfi\n\tverboseOut \"Finding hostname and user...found as '${myUser}@${myHost}'\"\n}\n\n# Find Number of Running Processes\n# processnum=\"$(( $( ps aux | wc -l ) - 1 ))\"\n\n# Kernel Version Detection - Begin\ndetectkernel () {\n\tif [[ \"$distro\" == \"OpenBSD\" ]]; then\n\t\tkernel=$(uname -a | cut -f 3- -d ' ')\n\telse\n\t\t# compatibility for older versions of OS X:\n\t\tkernel=$(uname -m && uname -sr)\n\t\tkernel=${kernel//$'\\n'/ }\n\t\t#kernel=( $(uname -srm) )\n\t\t#kernel=\"${kernel[${#kernel[@]}-1]} ${kernel[@]:0:${#kernel[@]}-1}\"\n\t\tverboseOut \"Finding kernel version...found as '${kernel}'\"\n\tfi\n}\n# Kernel Version Detection - End\n\n\n# Uptime Detection - Begin\ndetectuptime () {\n\tunset uptime\n\tif [[ \"${distro}\" == \"Mac OS X\" || \"${distro}\" == \"macOS\" || \"${distro}\" == \"FreeBSD\" || \"${distro}\" == \"DragonFlyBSD\" ]]; then\n\t\tboot=$(sysctl -n kern.boottime | cut -d \"=\" -f 2 | cut -d \",\" -f 1)\n\t\tnow=$(date +%s)\n\t\tuptime=$((now-boot))\n\telif [[ \"${distro}\" == \"OpenBSD\" ]]; then\n\t\tboot=$(sysctl -n kern.boottime)\n\t\tnow=$(date +%s)\n\t\tuptime=$((now - boot))\n\telif [[ \"${distro}\" == \"Haiku\" ]]; then\n\t\tuptime=$(uptime | \"${AWK}\" -F', up ' 
'{gsub(/ *hours?,/, \"h\"); gsub(/ *minutes?/, \"m\"); print $2;}')\n\telse\n\t\tif [[ -f /proc/uptime ]]; then\n\t\t\tuptime=$(</proc/uptime)\n\t\t\tuptime=${uptime//.*}\n\t\tfi\n\tfi\n\n\tif [[ -n ${uptime} ]] && [[ \"${distro}\" != \"Haiku\" ]]; then\n\t\tmins=$((uptime/60%60))\n\t\thours=$((uptime/3600%24))\n\t\tdays=$((uptime/86400))\n\t\tuptime=\"${mins}m\"\n\t\tif [ \"${hours}\" -ne \"0\" ]; then\n\t\t\tuptime=\"${hours}h ${uptime}\"\n\t\tfi\n\t\tif [ \"${days}\" -ne \"0\" ]; then\n\t\t\tuptime=\"${days}d ${uptime}\"\n\t\tfi\n\telse\n\t\tif [[ \"$distro\" =~ \"NetBSD\" ]]; then\n\t\t\tuptime=$(\"${AWK}\" -F. '{print $1}' /proc/uptime)\n\t\telif [[ \"$distro\" =~ \"BSD\" ]]; then\n\t\t\tuptime=$(uptime | \"${AWK}\" '{$1=$2=$(NF-6)=$(NF-5)=$(NF-4)=$(NF-3)=$(NF-2)=$(NF-1)=$NF=\"\"; sub(\" days\",\"d\");sub(\",\",\"\");sub(\":\",\"h \");sub(\",\",\"m\"); print}')\n\t\tfi\n\tfi\n\tverboseOut \"Finding current uptime...found as '${uptime}'\"\n}\n# Uptime Detection - End\n\n\n# Package Count - Begin\ndetectpkgs () {\n\tpkgs=\"Unknown\"\n\tlocal offset=0\n\tcase \"${distro}\" in\n\t\t'Alpine Linux') pkgs=$(apk info | wc -l) ;;\n\t\t'Arch Linux'|'Arch Linux 32'|'ArcoLinux'|'Parabola GNU/Linux-libre'|'Hyperbola GNU/Linux-libre'|'Chakra'|'Manjaro'|'Antergos'| \\\n\t\t'Netrunner'|'KaOS'|'Obarun'|'SwagArch'|'OBRevenge'|'Artix Linux'|'EndeavourOS'|'Alter Linux'|'TeArch')\n\t\t\tpkgs=$(pacman -Qq | wc -l) ;;\n\t\t'Chrome OS')\n\t\t\tif [ -d \"/usr/local/lib/crew/packages\" ]; then\n\t\t\t\tpkgs=$(ls -l /usr/local/etc/crew/meta/*.filelist | wc -l)\n\t\t\telse\n\t\t\t\tpkgs=$(ls -d /var/db/pkg/*/* | wc -l)\n\t\t\tfi\n\t\t;;\n\t\t'Dragora')\n\t\t\tpkgs=$(ls -1 /var/db/pkg | wc -l) ;;\n\t\t'Frugalware')\n\t\t\tpkgs=$(pacman-g2 -Q | wc -l) ;;\n\t\t'Debian'|'Ubuntu'|'Mint'|'Fuduntu'|'KDE neon'|'Devuan'|'OS Elbrus'|'Raspbian'|'Quirinux'|'LMDE'|'CrunchBang'|'Peppermint'| \\\n\t\t'LinuxDeepin'|'Deepin'|'Kali Linux'|'Trisquel'|'elementary 
OS'|'gNewSense'|'BunsenLabs'|'SteamOS'|'Parrot Security'| \\\n\t\t'GrombyangOS'|'DesaOS'|'Zorin OS'|'Proxmox VE'|'PureOS'|'DraugerOS')\n\t\t\tpkgs=$(dpkg -l | grep -c '^i') ;;\n\t\t'Slackware')\n\t\t\tpkgs=$(ls -1 /var/log/packages | wc -l) ;;\n\t\t'Gentoo'|'Sabayon'|'Funtoo'|'Kogaion')\n\t\t\tpkgs=$(ls -d /var/db/pkg/*/* | wc -l) ;;\n\t\t'NixOS')\n\t\t\tpkgs=$(nix-store --query --requisites /run/current-system | wc -l) ;;\n\t\t'Guix System')\n\t\t\tpkgs=$(guix package --list-installed | wc -l) ;;\n\t\t'ALDOS'|'Fedora'|'Fux'|'Korora'|'BLAG'|'Chapeau'|'openSUSE'|'SUSE Linux Enterprise'|'Red Hat Enterprise Linux'| \\\n\t\t'ROSA'|'Oracle Linux'|'Scientific Linux'|'EuroLinux'|'CentOS'|'CentOS Stream'|'Mandriva'|'Mandrake'|'Mageia'|'Mer'|'Rocky Linux'|'SailfishOS'|'PCLinuxOS'|'Viperr'|'Qubes OS'|'AlmaLinux'| \\\n\t\t'Red Star OS'|'blackPanther OS'|'Amazon Linux')\n\t\t\tpkgs=$(rpm -qa | wc -l) ;;\n\t\t'Void Linux')\n\t\t\tpkgs=$(xbps-query -l | wc -l) ;;\n\t\t'Evolve OS')\n\t\t\tpkgs=$(pisi list-installed | wc -l) ;;\n\t\t'Sulin')\n\t\t    #alternative method : (inary li | wc -l)\n\t\t\tpkgs=$(ls /var/lib/inary/package/ | wc -l) ;;\n\t\t'Solus')\n\t\t\tpkgs=$(eopkg list-installed | wc -l) ;;\n\t\t'Source Mage GNU/Linux')\n\t\t\tpkgs=$(gaze installed | wc -l) ;;\n\t\t'CRUX'|'januslinux')\n\t\t\tpkgs=$(pkginfo -i | wc -l) ;;\n\t\t'Lunar Linux')\n\t\t\tpkgs=$(lvu installed | wc -l) ;;\n\t\t'TinyCore')\n\t\t\tpkgs=$(tce-status -i | wc -l) ;;\n\t\t'Exherbo')\n\t\t\txpkgs=$(ls -d -1 /var/db/paludis/repositories/cross-installed/*/data/* | wc -l)\n\t\t\tpkgs=$(ls -d -1 /var/db/paludis/repositories/installed/data/* | wc -l)\n\t\t\tpkgs=$((pkgs + xpkgs))\n\t\t;;\n\t\t'Mac OS X'|'macOS')\n\t\t\toffset=1\n\t\t\tif [ -d \"/usr/local/bin\" ]; then\n\t\t\t\tloc_pkgs=$(ls -l /usr/local/bin/ | grep -cv \"\\(../Cellar/\\|brew\\)\")\n\t\t\t\tpkgs=$((loc_pkgs - offset));\n\t\t\tfi\n\t\t\tif type -p port >/dev/null 2>&1; then\n\t\t\t\tport_pkgs=$(port installed 2>/dev/null | wc 
-l)\n\t\t\t\tpkgs=$((pkgs + (port_pkgs - offset)))\n\t\t\tfi\n\t\t\tif type -p brew >/dev/null 2>&1; then\n\t\t\t\tif [ -d \"/opt/homebrew/Cellar\" ]; then\n\t\t\t\t\tbrew_pkgs=$(ls -1 /opt/homebrew/Cellar/ | wc -l)\n\t\t\t\telse\n\t\t\t\t\tbrew_pkgs=$(ls -1 /usr/local/Cellar/ | wc -l)\n\t\t\t\tfi\n\t\t\t\tpkgs=$((pkgs + brew_pkgs))\n\t\t\tfi\n\t\t\tif type -p pkgin >/dev/null 2>&1; then\n\t\t\t\tpkgsrc_pkgs=$(pkgin list 2>/dev/null | wc -l)\n\t\t\t\tpkgs=$((pkgs + pkgsrc_pkgs))\n\t\t\tfi\n\t\t;;\n\t\t'DragonFlyBSD')\n\t\t\tif TMPDIR=/dev/null ASSUME_ALWAYS_YES=1 PACKAGESITE=file:///nonexistent pkg info pkg >/dev/null 2>&1; then\n\t\t\t\tpkgs=$(pkg info | grep -c .)\n\t\t\telse\n\t\t\t\tpkgs=$(pkg_info | grep -c .)\n\t\t\tfi\n\t\t;;\n\t\t'OpenBSD'|'NetBSD')\n\t\t\tpkgs=$(pkg_info | grep -c .)\n\t\t;;\n\t\t'FreeBSD')\n\t\t\tpkgs=$(pkg info | grep -c .)\n\t\t;;\n\t\t'Cygwin')\n\t\t\toffset=2\n\t\t\tpkgs=$(($(cygcheck -cd | wc -l) - offset))\n\t\t\tif [ -d \"/cygdrive/c/ProgramData/chocolatey/lib\" ]; then\n\t\t\t\tchocopkgs=$(ls -1 /cygdrive/c/ProgramData/chocolatey/lib | wc -l)\n\t\t\t\tpkgs=$((pkgs + chocopkgs))\n\t\t\tfi\n\t\t;;\n\t\t'Msys')\n\t\t\tpkgs=$(pacman -Qq | wc -l)\n\t\t\tif [ -d \"/c/ProgramData/chocolatey/lib\" ]; then\n\t\t\t\tchocopkgs=$(ls -1 /c/ProgramData/chocolatey/lib | wc -l)\n\t\t\t\tpkgs=$((pkgs + chocopkgs))\n\t\t\tfi\n\t\t;;\n\t\t'Haiku')\n\t\t\thaikualpharelease=\"no\"\n\t\t\tif [ -d /boot/system/package-links ]; then\n\t\t\t\tpkgs=$(ls /boot/system/package-links | wc -l)\n\t\t\telif type -p installoptionalpackage >/dev/null 2>&1; then\n\t\t\t\thaikualpharelease=\"yes\"\n\t\t\t\tpkgs=$(installoptionalpackage -l | sed -n '3p' | wc -w)\n\t\t\tfi\n\t\t;;\n\tesac\n\tif [[ \"${OSTYPE}\" =~ \"linux\" && -z \"${wsl}\" ]] && snap list >/dev/null 2>&1; then\n\t\toffset=1\n\t\tsnappkgs=$(($(snap list 2>/dev/null | wc -l) - offset))\n\t\tif [ $snappkgs -lt 0 ]; then\n\t\t\tsnappkgs=0\n\t\tfi\n\t\tpkgs=$((pkgs + snappkgs))\n\tfi\n\tverboseOut 
\"Finding current package count...found as '$pkgs'\"\n}\n\n\n\n\n# CPU Detection - Begin\ndetectcpu () {\n\tlocal REGEXP=\"-r\"\n\tif [[ \"$distro\" == \"Mac OS X\" || \"$distro\" == \"macOS\" ]]; then\n\t\tcpu=$(machine)\n\t\tif [[ $cpu == \"ppc750\" ]]; then\n\t\t\tcpu=\"IBM PowerPC G3\"\n\t\telif [[ $cpu == \"ppc7400\" || $cpu == \"ppc7450\" ]]; then\n\t\t\tcpu=\"IBM PowerPC G4\"\n\t\telif [[ $cpu == \"ppc970\" ]]; then\n\t\t\tcpu=\"IBM PowerPC G5\"\n\t\telse\n\t\t\tcpu=$(sysctl -n machdep.cpu.brand_string)\n\t\tfi\n\t\tREGEXP=\"-E\"\n\telif [ \"$OSTYPE\" == \"gnu\" ]; then\n\t\t# no /proc/cpuinfo on GNU/Hurd\n\t\tif uname -m | grep -q 'i.86'; then\n\t\t\tcpu=\"Unknown x86\"\n\t\telse\n\t\t\tcpu=\"Unknown\"\n\t\tfi\n\telif [ \"$distro\" == \"FreeBSD\" ]; then\n\t\tcpu=$(dmesg | \"${AWK}\" -F': ' '/^CPU/ {gsub(/ +/,\" \"); gsub(/\\([^\\(\\)]*\\)|CPU /,\"\", $2); print $2; exit}')\n\telif [ \"$distro\" == \"DragonFlyBSD\" ]; then\n\t\tcpu=$(sysctl -n hw.model)\n\telif [ \"$distro\" == \"OpenBSD\" ]; then\n\t\tcpu=$(sysctl -n hw.model | sed 's/@.*//')\n\telif [ \"$distro\" == \"Haiku\" ]; then\n\t\tcpu=$(sysinfo -cpu | \"${AWK}\" -F': ' '/^CPU #0/ {gsub(/ +/,\" \"); gsub(/\\([^\\(\\)]*\\)|CPU /,\"\", $2); print $2; exit}')\n\telse\n\t\tcpu=$(\"${AWK}\" -F':' '/^model name/ {split($2, A, \" @\"); print A[1]; exit}' /proc/cpuinfo)\n\t\tcpun=$(grep -c '^processor' /proc/cpuinfo)\n\t\tif [ -z \"$cpu\" ]; then\n\t\t\tcpu=$(\"${AWK}\" 'BEGIN{FS=\":\"} /Hardware/ { print $2; exit }' /proc/cpuinfo)\n\t\tfi\n\t\tif [ -z \"$cpu\" ]; then\n\t\t\tcpu=$(\"${AWK}\" 'BEGIN{FS=\":\"} /^cpu/ { gsub(/  +/,\" \",$2); print $2; exit}' /proc/cpuinfo | sed 's/, altivec supported//;s/^ //')\n\t\t\tif [[ $cpu =~ ^(PPC)*9.+ ]]; then\n\t\t\t\tmodel=\"IBM PowerPC G5 \"\n\t\t\telif [[ $cpu =~ 740/750 ]]; then\n\t\t\t\tmodel=\"IBM PowerPC G3 \"\n\t\t\telif [[ $cpu =~ ^74.+ ]]; then\n\t\t\t\tmodel=\"Motorola PowerPC G4 \"\n\t\t\telif [[ $cpu =~ ^POWER.* ]]; then\n\t\t\t\tmodel=\"IBM POWER 
\"\n\t\t\telif grep -q -i 'BCM2708' /proc/cpuinfo ; then\n\t\t\t\tmodel=\"Broadcom BCM2835 ARM1176JZF-S\"\n\t\t\telse\n\t\t\t\tarch=$(uname -m)\n\t\t\t\tif [[ \"$arch\" == \"s390x\" || \"$arch\" == \"s390\" ]]; then\n\t\t\t\t\tcpu=\"\"\n\t\t\t\t\targs=$(grep 'machine' /proc/cpuinfo | sed 's/^.*://g; s/ //g; s/,/\\n/g' | grep '^machine=.*')\n\t\t\t\t\teval \"$args\"\n\t\t\t\t\tcase \"$machine\" in\n\t\t\t\t\t\t# information taken from https://github.com/SUSE/s390-tools/blob/master/cputype\n\t\t\t\t\t\t2064) model=\"IBM eServer zSeries 900\" ;;\n\t\t\t\t\t\t2066) model=\"IBM eServer zSeries 800\" ;;\n\t\t\t\t\t\t2084) model=\"IBM eServer zSeries 990\" ;;\n\t\t\t\t\t\t2086) model=\"IBM eServer zSeries 890\" ;;\n\t\t\t\t\t\t2094) model=\"IBM System z9 Enterprise Class\" ;;\n\t\t\t\t\t\t2096) model=\"IBM System z9 Business Class\" ;;\n\t\t\t\t\t\t2097) model=\"IBM System z10 Enterprise Class\" ;;\n\t\t\t\t\t\t2098) model=\"IBM System z10 Business Class\" ;;\n\t\t\t\t\t\t2817) model=\"IBM zEnterprise 196\" ;;\n\t\t\t\t\t\t2818) model=\"IBM zEnterprise 114\" ;;\n\t\t\t\t\t\t2827) model=\"IBM zEnterprise EC12\" ;;\n\t\t\t\t\t\t2828) model=\"IBM zEnterprise BC12\" ;;\n\t\t\t\t\t\t2964) model=\"IBM z13\" ;;\n\t\t\t\t\t\t   *) model=\"IBM S/390 machine type $machine\" ;;\n\t\t\t\t\tesac\n\t\t\t\telif [[ \"$arch\" == \"aarch64\" ]]; then\n\t\t\t\t\tcpu_vendor=$(lscpu | grep ^Vendor | sed 's/^.*://g; s/ //g; s/,/\\n/g')\n\t\t\t\t\tcpu=$(lscpu | grep ^Model\\ name: | sed 's/^.*://g; s/ //g; s/,/\\n/g')\n\t\t\t\t\tcpu=\"${cpu_vendor} ${cpu}\"\n\t\t\t\telse\n\t\t\t\t\tmodel=\"Unknown\"\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tcpu=\"${model}${cpu}\"\n\t\tfi\n\t\tloc=\"/sys/devices/system/cpu/cpu0/cpufreq\"\n\t\tbl=\"${loc}/bios_limit\"\n\t\tsmf=\"${loc}/scaling_max_freq\"\n\t\tif [ -f \"$bl\" ] && [ -r \"$bl\" ]; then\n\t\t\tcpu_mhz=$(\"${AWK}\" '{print $1/1000}' \"$bl\")\n\t\telif [ -f \"$smf\" ] && [ -r \"$smf\" ]; then\n\t\t\tcpu_mhz=$(\"${AWK}\" '{print $1/1000}' 
\"$smf\")\n\t\telse\n\t\t\tcpu_mhz=$(\"${AWK}\" -F':' '/cpu MHz/{ print int($2+.5) }' /proc/cpuinfo | head -n 1)\n\t\tfi\n\t\tif [ -n \"$cpu_mhz\" ]; then\n\t\t\tif [ \"${cpu_mhz%.*}\" -ge 1000 ]; then\n\t\t\t\tcpu_ghz=$(\"${AWK}\" '{print $1/1000}' <<< \"${cpu_mhz}\")\n\t\t\t\tcpufreq=\"${cpu_ghz}GHz\"\n\t\t\telse\n\t\t\t\tcpufreq=\"${cpu_mhz}MHz\"\n\t\t\tfi\n\t\tfi\n\tfi\n\tif [[ \"${cpun}\" -gt \"1\" ]]; then\n\t\tcpun=\"${cpun}x \"\n\telse\n\t\tcpun=\"\"\n\tfi\n\tif [ -z \"$cpufreq\" ]; then\n\t\tcpu=\"${cpun}${cpu}\"\n\telse\n\t\tcpu=\"$cpu @ ${cpun}${cpufreq}\"\n\tfi\n\tif [ -d '/sys/class/hwmon/' ]; then\n\t\tfor dir in /sys/class/hwmon/* ; do\n\t\t\thwmonfile=\"\"\n\t\t\t[ -e \"$dir/name\" ] && hwmonfile=$dir/name\n\t\t\t[ -e \"$dir/device/name\" ] && hwmonfile=$dir/device/name\n\t\t\t[ -n \"$hwmonfile\" ] && if grep -q 'coretemp' \"$hwmonfile\"; then\n\t\t\t\tthermal=\"$dir/temp1_input\"\n\t\t\t\tbreak\n\t\t\tfi\n\t\tdone\n\t\tif [ -e \"$thermal\" ] && [ \"${thermal:+isSetToNonNull}\" = 'isSetToNonNull' ]; then\n\t\t\ttemperature=$(bc <<< \"scale=1; $(cat \"$thermal\")/1000\")\n\t\tfi\n\tfi\n\tif [ -n \"$temperature\" ]; then\n\t\tcpu=\"$cpu [${temperature}°C]\"\n\tfi\n\tcpu=$(sed $REGEXP 's/\\([tT][mM]\\)|\\([Rr]\\)|[pP]rocessor|CPU//g' <<< \"${cpu}\" | xargs)\n\tverboseOut \"Finding current CPU...found as '$cpu'\"\n}\n# CPU Detection - End\n\n\n# GPU Detection - Begin (EXPERIMENTAL!)\ndetectgpu () {\n\tif [[ \"${distro}\" == \"FreeBSD\" || \"${distro}\" == \"DragonFlyBSD\" ]]; then\n\t\tnvisettexist=$(which nvidia-settings)\n\t\tif [ -x \"$nvisettexist\" ]; then\n\t\t\tgpu=\"$(nvidia-settings -t -q gpus | grep \\( | sed 's/.*(\\(.*\\))/\\1/')\"\n\t\telse\n\t\t\tgpu_info=$(pciconf -lv 2> /dev/null | grep -B 4 VGA)\n\t\t\tgpu_info=$(grep -E 'device.*=.*' <<< \"${gpu_info}\")\n\t\t\tgpu=\"${gpu_info##*device*= }\"\n\t\t\tgpu=\"${gpu//\\'}\"\n\t\t\t# gpu=$(sed 's/.*device.*= //' <<< \"${gpu_info}\" | sed \"s/'//g\")\n\t\tfi\n\telif [[ \"${distro}\" == 
\"OpenBSD\" ]]; then\n\t\tgpu=$(glxinfo 2> /dev/null | \"${AWK}\" '/OpenGL renderer string/ { sub(/OpenGL renderer string: /,\"\"); print }')\n\telif [[ \"${distro}\" == \"Mac OS X\" || \"${distro}\" == \"macOS\" ]]; then\n\t\tgpu=$(system_profiler SPDisplaysDataType | \"${AWK}\" -F': ' '/^ *Chipset Model:/ {print $2}' | \"${AWK}\" '{ printf \"%s / \", $0 }' | sed -e 's/\\/ $//g')\n\telif [[ \"${distro}\" == \"Cygwin\" || \"${distro}\" == \"Msys\" ]]; then\n\t\tgpu=$(wmic path Win32_VideoController get caption | sed -n '2p')\n\telif [[ \"${distro}\" == \"Haiku\" ]]; then\n\t\tgpu=\"$(listdev | grep -A2 -e 'device Display controller' | \"${AWK}\" -F': ' '/^ +device/ {print $2}')\"\n\telse\n\t\tif [[ -n \"$(PATH=\"/opt/bin:$PATH\" type -p nvidia-smi)\" ]]; then\n\t\t\tgpu=$($(PATH=\"/opt/bin:$PATH\" type -p nvidia-smi | cut -f1) -q | \"${AWK}\" -F':' '/Product Name/ {gsub(/: /,\":\"); print $2}' | sed ':a;N;$!ba;s/\\n/, /g')\n\t\telif [[ -n \"$(PATH=\"/usr/sbin:$PATH\" type -p glxinfo)\" && -z \"${gpu}\" ]]; then\n\t\t\tgpu_info=$($(PATH=\"/usr/sbin:$PATH\" type -p glxinfo | cut -f1) 2>/dev/null)\n\t\t\tgpu=$(grep \"OpenGL renderer string\" <<< \"${gpu_info}\" | cut -d ':' -f2 | sed -n -e '1h;2,$H;${g;s/\\n/, /g' -e 'p' -e '}')\n\t\t\tgpu=\"${gpu:1}\"\n\t\t\tgpu_info=$(grep \"OpenGL vendor string\" <<< \"${gpu_info}\")\n\t\telif [[ -n \"$(PATH=\"/usr/sbin:$PATH\" type -p lspci)\" && -z \"$gpu\" ]]; then\n\t\t\tgpu_info=$($(PATH=\"/usr/bin:$PATH\" type -p lspci | cut -f1) 2> /dev/null | grep VGA)\n\t\t\tgpu=$(grep -oE '\\[.*\\]' <<< \"${gpu_info}\" | sed 's/\\[//;s/\\]//' | sed -n -e '1h;2,$H;${g;s/\\n/, /g' -e 'p' -e '}')\n\t\tfi\n\tfi\n\n\tif [ -n \"$gpu\" ];then\n\t\tif grep -q -i 'nvidia' <<< \"${gpu_info}\"; then\n\t\t\tgpu_info=\"NVidia \"\n\t\telif grep -q -i 'intel' <<< \"${gpu_info}\"; then\n\t\t\tgpu_info=\"Intel \"\n\t\telif grep -q -i 'amd' <<< \"${gpu_info}\"; then\n\t\t\tgpu_info=\"AMD \"\n\t\telif grep -q -i 'ati' <<< \"${gpu_info}\" || grep -q -i 
'radeon' <<< \"${gpu_info}\"; then\n\t\t\tgpu_info=\"ATI \"\n\t\telse\n\t\t\tgpu_info=$(cut -d ':' -f2 <<< \"${gpu_info}\")\n\t\t\tgpu_info=\"${gpu_info:1} \"\n\t\tfi\n\t\tgpu=\"${gpu}\"\n\telse\n\t\tgpu=\"Not Found\"\n\tfi\n\n\tverboseOut \"Finding current GPU...found as '$gpu'\"\n}\n# GPU Detection - End\n\n# Detect Intel GPU  #works in dash\n# Run it only on Intel Processors if GPU is unknown\nDetectIntelGPU() {\n\tif [ -r /proc/fb ]; then\n\t\tgpu=$(\"${AWK}\" 'BEGIN {ORS = \" &\"} {$1=\"\";print}' /proc/fb | sed  -r s/'^\\s+|\\s*&$'//g)\n\tfi\n\n\tcase $gpu in\n\t\t*mfb)\n\t\t\tgpu=$(lspci | grep -i vga | \"${AWK}\" -F \": \" '{print $2}')\n\t\t\t;;\n\t\t*intel*)\n\t\t\tgpu=\"intel\"\n\t\t\t;;\n\t\t*)\n\t\t\tgpu=\"Not Found\"\n\t\t\t;;\n\tesac\n\n\tif [[ \"$gpu\" = \"intel\" ]]; then\n\t\t#Detect CPU\n\t\tlocal CPU=$(uname -p | \"${AWK}\" '{print $3}')\n\t\tCPU=${CPU#*'-'}; #Detect CPU number\n\n\t\t#Detect Intel GPU\n\t\tcase $CPU in\n\t\t\t[3-6][3-9][0-5]|[3-6][3-9][0-5][K-Y])\n\t\t\t\tgpu='Intel HD Graphics'\n\t\t\t\t;; #1st\n\t\t\t2[1-5][0-3][0-2]*|2390T|2600S)\n\t\t\t\tgpu='Intel HD Graphics 2000'\n\t\t\t\t;; #2nd\n\t\t\t2[1-5][1-7][0-8]*|2105|2500K)\n\t\t\t\tgpu='Intel HD Graphics 3000'\n\t\t\t\t;; #2nd\n\t\t\t32[1-5]0*|3[4-5][5-7]0*|33[3-4]0*)\n\t\t\t\tgpu='Intel HD Graphics 2500'\n\t\t\t\t;; #3rd\n\t\t\t3570K|3427U)\n\t\t\t\tgpu='Intel HD Graphics 4000'\n\t\t\t\t;; #3rd\n\t\t\t4[3-7][0-9][0-5]*)\n\t\t\t\tgpu='Intel HD Graphics 4600'\n\t\t\t\t;; #4th Haswell\n\t\t\t5[5-6]75[C-R]|5350H)\n\t\t\t\tgpu='Intel Iris Pro Graphics 6200'\n\t\t\t\t;; #5th Broadwell\n\t\t\t\t#6th Skylake\n\t\t\t\t#7th Kabylake\n\t\t\t\t#8th Cannonlake\n\t\t\t*)\n\t\t\t\tgpu='Unknown'\n\t\t\t\t;; #Unknown GPU model\n\t\tesac\n\tfi\n}\n\n# Disk Usage Detection - Begin\ndetectdisk () {\n\tdiskusage=\"Unknown\"\n\tif type -p df >/dev/null 2>&1; then\n\t\tif [[ \"${distro}\" =~ (Free|Net|DragonFly)BSD ]]; then\n\t\t\ttotaldisk=$(df -h -c 2>/dev/null | tail -1)\n\t\telif [[ 
\"${distro}\" == \"OpenBSD\" ]]; then\n\t\t\ttotaldisk=$(df -Pk 2> /dev/null | \"${AWK}\" '\n\t\t\t\t/^\\// {total+=$2; used+=$3; avail+=$4}\n\t\t\t\tEND{printf(\"total %.1fG %.1fG %.1fG %d%%\\n\", total/1048576, used/1048576, avail/1048576, used*100/total)}')\n\t\telif [[ \"${distro}\" == \"Mac OS X\" || \"${distro}\" == \"macOS\" ]]; then\n\t\t\tmajorVers=$(sw_vers -productVersion | cut -d ':' -f 2 | \"${AWK}\" -F \".\" '{print $1}') # Major version\n\t\t\tminorVers=$(sw_vers -productVersion | cut -d ':' -f 2 | \"${AWK}\" -F \".\" '{print $2}') # Minor version\n\t\t\tif [[ \"${minorVers}\" -ge \"15\" || \"${majorVers}\" -ge \"11\" ]]; then # Catalina or newer\n\t\t\t\ttotaldisk=$(df -H /System/Volumes/Data 2>/dev/null | tail -1)\n\t\t\telse\n\t\t\t\ttotaldisk=$(df -H / 2>/dev/null | tail -1)\n\t\t\tfi\n\t\telse\n\t\t\ttotaldisk=$(df -h -x aufs -x tmpfs -x overlay -x drvfs -x devtmpfs --total 2>/dev/null | tail -1)\n\t\tfi\n\t\tdisktotal=$(\"${AWK}\" '{print $2}' <<< \"${totaldisk}\")\n\t\tdiskused=$(\"${AWK}\" '{print $3}' <<< \"${totaldisk}\")\n\t\tdiskusedper=$(\"${AWK}\" '{print $5}' <<< \"${totaldisk}\")\n\t\tdiskusage=\"${diskused} / ${disktotal} (${diskusedper})\"\n\t\tdiskusage_verbose=$(sed 's/%/%%/' <<< \"$diskusage\")\n\tfi\n\tverboseOut \"Finding current disk usage...found as '$diskusage_verbose'\"\n}\n# Disk Usage Detection - End\n\n\n# Memory Detection - Begin\ndetectmem () {\n\tif [[ \"$distro\" == \"Mac OS X\" || \"$distro\" == \"macOS\" ]]; then\n\t\ttotalmem=$(echo \"$(sysctl -n hw.memsize)\" / 1024^2 | bc)\n\t\twiredmem=$(vm_stat | grep wired | \"${AWK}\" '{ print $4 }' | sed 's/\\.//')\n\t\tactivemem=$(vm_stat | grep ' active' | \"${AWK}\" '{ print $3 }' | sed 's/\\.//')\n\t\tcompressedmem=$(vm_stat | grep occupied | \"${AWK}\" '{ print $5 }' | sed 's/\\.//')\n\t\tif [[ ! 
\"$compressedmem\" =~ ^[0-9]+$ ]]; then  # vm_stat gave no numeric value: default to 0\n\t\t\tcompressedmem=0\n\t\tfi\n\t\tusedmem=$(((wiredmem + activemem + compressedmem) * 4 / 1024))\n\telif [[ \"${distro}\" == \"Cygwin\" || \"${distro}\" == \"Msys\" ]]; then\n\t\ttotal_mem=$(\"${AWK}\" '/MemTotal/ { print $2 }' /proc/meminfo)\n\t\ttotalmem=$((total_mem / 1024))\n\t\tfree_mem=$(\"${AWK}\" '/MemFree/ { print $2 }' /proc/meminfo)\n\t\tused_mem=$((total_mem - free_mem))\n\t\tusedmem=$((used_mem / 1024))\n\telif [[ \"$distro\" == \"FreeBSD\"  || \"$distro\" == \"DragonFlyBSD\" ]]; then\n\t\tphys_mem=$(sysctl -n hw.physmem)\n\t\tsize_mem=$phys_mem\n\t\tsize_chip=1\n\t\tguess_chip=$(echo \"$size_mem / 8 - 1\" | bc)\n\t\twhile [ \"$guess_chip\" != 0 ]; do\n\t\t\tguess_chip=$(echo \"$guess_chip / 2\" | bc)\n\t\t\tsize_chip=$(echo \"$size_chip * 2\" | bc)\n\t\tdone\n\t\tround_mem=$(echo \"( $size_mem / $size_chip + 1 ) * $size_chip \" | bc)\n\t\ttotalmem=$((round_mem / 1024 / 1024))\n\t\tpagesize=$(sysctl -n hw.pagesize)\n\t\tinactive_count=$(sysctl -n vm.stats.vm.v_inactive_count)\n\t\tinactive_mem=$((inactive_count * pagesize))\n\t\tcache_count=$(sysctl -n vm.stats.vm.v_cache_count)\n\t\tcache_mem=$((cache_count * pagesize))\n\t\tfree_count=$(sysctl -n vm.stats.vm.v_free_count)\n\t\tfree_mem=$((free_count * pagesize))\n\t\tavail_mem=$((inactive_mem + cache_mem + free_mem))\n\t\tused_mem=$((round_mem - avail_mem))\n\t\tusedmem=$((used_mem / 1024 / 1024))\n\telif [ \"$distro\" == \"OpenBSD\" ]; then\n\t\ttotalmem=$(($(sysctl -n hw.physmem) / 1024 / 1024))\n\t\tusedmem=$(vmstat | \"${AWK}\" '!/[a-z]/{gsub(\"M\",\"\"); print $3}')\n\telif [ \"$distro\" == \"NetBSD\" ]; then\n\t\tphys_mem=$(\"${AWK}\" '/MemTotal/ { print $2 }' /proc/meminfo)\n\t\ttotalmem=$((phys_mem / 1024))\n\t\tif grep -q 'Cached' /proc/meminfo; then\n\t\t\tcache=$(\"${AWK}\" '/Cached/ {print $2}' /proc/meminfo)\n\t\t\tusedmem=$((cache / 1024))\n\t\telse\n\t\t\tfree_mem=$(\"${AWK}\" '/MemFree/ { print $2 }' 
/proc/meminfo)\n\t\t\tused_mem=$((phys_mem - free_mem))\n\t\t\tusedmem=$((used_mem / 1024))\n\t\tfi\n\telif [ \"$distro\" == \"Haiku\" ]; then\n\t\ttotalmem=$(sysinfo -mem | \"${AWK}\" 'NR == 1 {gsub(/[\\(\\)\\/]/, \"\"); printf(\"%d\", $6/1024**2)}')\n\t\tusedmem=$(sysinfo -mem | \"${AWK}\" 'NR == 1 {gsub(/[\\(\\)\\/]/, \"\"); printf(\"%d\", $5/1024**2)}')\n\telse\n\t\t# MemUsed = Memtotal + Shmem - MemFree - Buffers - Cached - SReclaimable\n\t\t# Source: https://github.com/dylanaraps/neofetch/pull/391/files#diff-e863270127ca6116fd30e708cdc582fc\n\t\t#mem_info=$(</proc/meminfo)\n\t\t#mem_info=$(echo $(echo $(mem_info=${mem_info// /}; echo ${mem_info//kB/})))\n\t\t#for m in $mem_info; do\n\t\t#\tcase ${m//:*} in\n\t\t#\t\t\"MemTotal\") usedmem=$((usedmem+=${m//*:})); totalmem=${m//*:} ;;\n\t\t#\t\t\"Shmem\") usedmem=$((usedmem+=${m//*:})) ;;\n\t\t#\t\t\"MemFree\"|\"Buffers\"|\"Cached\"|\"SReclaimable\") usedmem=$((usedmem-=${m//*:})) ;;\n\t\t#\tesac\n\t\t#done\n\t\t#usedmem=$((usedmem / 1024))\n\t\t#totalmem=$((totalmem / 1024))\n\t\tmem=$(free -b | \"${AWK}\" -F ':' 'NR==2{print $2}' | \"${AWK}\" '{print $1\"-\"$6}')\n\t\tusedmem=$((mem / 1024 / 1024))\n\t\ttotalmem=$((${mem//-*} / 1024 / 1024))\n\tfi\n\tmem=\"${usedmem}MiB / ${totalmem}MiB\"\n\tverboseOut \"Finding current RAM usage...found as '$mem'\"\n}\n# Memory Detection - End\n\n\n# Shell Detection - Begin\ndetectshell_ver () {\n\tlocal version_data='' version='' get_version='--version'\n\n\tcase $1 in\n\t\t# ksh sends version to stderr. 
Weeeeeeird.\n\t\tksh)\n\t\t\tversion_data=\"$( $1 $get_version 2>&1 )\"\n\t\t\t;;\n\t\t*)\n\t\t\tversion_data=\"$( $1 $get_version 2>/dev/null )\"\n\t\t\t;;\n\tesac\n\n\tif [[ -n $version_data ]];then\n\t\tversion=$(\"${AWK}\" '\n\t\tBEGIN {\n\t\t\tIGNORECASE=1\n\t\t}\n\t\t/'$2'/ {\n\t\t\tgsub(/(,|v|V)/, \"\",$'$3')\n\t\t\tif ($2 ~ /[Bb][Aa][Ss][Hh]/) {\n\t\t\t\tgsub(/\\(.*|-release|-version\\)/,\"\",$4)\n\t\t\t}\n\t\t\tprint $'$3'\n\t\t\texit # quit after first match prints\n\t\t}' <<< \"$version_data\")\n\tfi\n\techo \"$version\"\n}\ndetectshell () {\n\tif [[ ! \"${shell_type}\" ]]; then\n\t\tif [[ \"${distro}\" == \"Cygwin\" || \"${distro}\" == \"Msys\" || \"${distro}\" == \"Haiku\" || \"${distro}\" == \"Alpine Linux\" ||\n\t\t\t\"${distro}\" == \"Mac OS X\" || \"${distro}\" == \"macOS\" || \"${distro}\" == \"TinyCore\" || \"${distro}\" == \"Raspbian\" || \"${OSTYPE}\" == \"gnu\" ]]; then\n\t\t\tshell_type=$(echo \"$SHELL\" | \"${AWK}\" -F'/' '{print $NF}')\n\t\telif readlink -f \"$SHELL\" 2>&1 | grep -q -i 'busybox'; then\n\t\t\tshell_type=\"BusyBox\"\n\t\telse\n\t\t\tif [[ \"${OSTYPE}\" =~ \"linux\" ]]; then\n\t\t\t\tshell_type=$(realpath /proc/$PPID/exe | \"${AWK}\" -F'/' '{print $NF}')\n\t\t\telif [[ \"${distro}\" =~ \"BSD\" ]]; then\n\t\t\t\tshell_type=$(ps -p $PPID -o command | tail -1)\n\t\t\telse\n\t\t\t\tshell_type=$(ps -p \"$(ps -p $PPID | \"${AWK}\" '$1 !~ /PID/ {print $1}')\" | \"${AWK}\" 'FNR>1 {print $1}')\n\t\t\tfi\n\t\t\tshell_type=${shell_type/-}\n\t\t\tshell_type=${shell_type//*\\/}\n\t\tfi\n\tfi\n\n\tcase $shell_type in\n\t\tbash)\n\t\t\tshell_version_data=$( detectshell_ver \"$shell_type\" \"^GNU.bash,.version\" \"4\" )\n\t\t\t;;\n\t\tBusyBox)\n\t\t\tshell_version_data=$( busybox | head -n1 | cut -d ' ' -f2 )\n\t\t\t;;\n\t\tcsh)\n\t\t\tshell_version_data=$( detectshell_ver \"$shell_type\" \"$shell_type\" \"3\" )\n\t\t\t;;\n\t\tdash)\n\t\t\tshell_version_data=$( detectshell_ver \"$shell_type\" \"$shell_type\" \"3\" 
)\n\t\t\t;;\n\t\tksh)\n\t\t\tshell_version_data=$( detectshell_ver \"$shell_type\" \"version\" \"5\" )\n\t\t\t;;\n\t\ttcsh)\n\t\t\tshell_version_data=$( detectshell_ver \"$shell_type\" \"^tcsh\" \"2\" )\n\t\t\t;;\n\t\tzsh)\n\t\t\tshell_version_data=$( detectshell_ver \"$shell_type\" \"^zsh\" \"2\" )\n\t\t\t;;\n\t\tfish)\n\t\t\tshell_version_data=$( fish --version | \"${AWK}\" '{print $3}' )\n\t\t\t;;\n\t\tpwsh)\n\t\t\tshell_version_data=$( pwsh -c '$PSVersionTable.PSVersion.ToString()' )\n\t\t\t;;\n\tesac\n\n\tif [[ -n $shell_version_data ]];then\n\t\tshell_type=\"$shell_type $shell_version_data\"\n\tfi\n\n\tmyShell=${shell_type}\n\tverboseOut \"Finding current shell...found as '$myShell'\"\n}\n# Shell Detection - End\n\n\n# Resolution Detection - Begin\ndetectres () {\n\txResolution=\"No X Server\"\n\tif [[ ${distro} == \"Mac OS X\" || $distro == \"macOS\" ]]; then\n\t\txResolution=$(system_profiler SPDisplaysDataType | \"${AWK}\" '/Resolution:/ {print $2\"x\"$4\" \"}')\n\t\tif [[ \"$(echo \"$xResolution\" | wc -l)\" -ge 1 ]]; then\n\t\t\txResolution=$(echo \"$xResolution\" | tr \" \\\\n\" \", \" | sed 's/\\(.*\\),/\\1/')\n\t\tfi\n\telif [[ \"${distro}\" == \"Cygwin\" || \"${distro}\" == \"Msys\" ]]; then\n\t\txResolution=$(wmic path Win32_VideoController get CurrentHorizontalResolution,CurrentVerticalResolution | \"${AWK}\" 'NR==2 {print $1\"x\"$2}')\n\telif [[ \"${distro}\" == \"Haiku\" ]]; then\n\t\txResolution=\"$(screenmode | grep Resolution | \"${AWK}\" '{gsub(/,/,\"\"); print $2\"x\"$3}')\"\n\telif [[ -n ${DISPLAY} ]]; then\n\t\tif type -p xdpyinfo >/dev/null 2>&1; then\n\t\t\txResolution=$(xdpyinfo | \"${AWK}\" '/^ +dimensions/ {print $2}')\n\t\tfi\n\tfi\n\tverboseOut \"Finding current resolution(s)...found as '$xResolution'\"\n}\n# Resolution Detection - End\n\n\n# DE Detection - Begin\ndetectde () {\n\tDE=\"Not Present\"\n\tif [[ \"${distro}\" == \"Mac OS X\" || \"${distro}\" == \"macOS\" ]]; then\n\t\tif ps -U \"${USER}\" | grep -q -i 'finder'; 
then\n\t\t\tDE=\"Aqua\"\n\t\tfi\n\telif [[ \"${distro}\" == \"Cygwin\" || \"${distro}\" == \"Msys\" ]]; then\n\t\t# https://msdn.microsoft.com/en-us/library/ms724832%28VS.85%29.aspx\n\t\tif wmic os get version | grep -q '^\\(6\\.[01]\\)'; then\n\t\t\tDE=\"Aero\"\n\t\telif wmic os get version | grep -q '^\\(6\\.[23]\\|10\\)'; then\n\t\t\tDE=\"Modern UI/Metro\"\n\t\telse\n\t\t\tDE=\"Luna\"\n\t\tfi\n\telif [[ -n ${DISPLAY} ]]; then\n\t\tif type -p xprop >/dev/null 2>&1;then\n\t\t\txprop_root=\"$(xprop -root 2>/dev/null)\"\n\t\t\tif [[ -n ${xprop_root} ]]; then\n\t\t\t\tDE=$(echo \"${xprop_root}\" | \"${AWK}\" 'BEGIN {\n\t\t\t\t\tde = \"Not Present\"\n\t\t\t\t}\n\t\t\t\t{\n\t\t\t\t\tif ($1 ~ /^_DT_SAVE_MODE/) {\n\t\t\t\t\t\tde = $NF\n\t\t\t\t\t\tgsub(/\"/,\"\",de)\n\t\t\t\t\t\tde = toupper(de)\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t\telse if ($1 ~/^KDE_SESSION_VERSION/) {\n\t\t\t\t\t\tde = \"KDE\"$NF\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t\telse if ($1 ~ /^_MUFFIN/) {\n\t\t\t\t\t\tde = \"Cinnamon\"\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t\telse if ($1 ~ /^TDE_FULL_SESSION/) {\n\t\t\t\t\t\tde = \"Trinity\"\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t\telse if ($0 ~ /\"xfce4\"/) {\n\t\t\t\t\t\tde = \"Xfce4\"\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t\telse if ($0 ~ /\"xfce5\"/) {\n\t\t\t\t\t\tde = \"Xfce5\"\n\t\t\t\t\t\texit\n\t\t\t\t\t}\n\t\t\t\t} END {\n\t\t\t\t\tprint de\n\t\t\t\t}')\n\t\t\tfi\n\t\tfi\n\n\t\tif [[ ${DE} == \"Not Present\" ]]; then\n\t\t\t# Let's use xdg-open code for GNOME/Enlightment/KDE/LXDE/MATE/Xfce detection\n\t\t\t# http://bazaar.launchpad.net/~vcs-imports/xdg-utils/master/view/head:/scripts/xdg-utils-common.in#L251\n\t\t\tif [ -n \"${XDG_CURRENT_DESKTOP}\" ]; then\n\t\t\t\tcase \"${XDG_CURRENT_DESKTOP,,}\" 
in\n\t\t\t\t\t'enlightenment')\n\t\t\t\t\t\tDE=\"Enlightenment\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'gnome')\n\t\t\t\t\t\tDE=\"GNOME\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'kde')\n\t\t\t\t\t\tDE=\"KDE\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'lumina')\n\t\t\t\t\t\tDE=\"Lumina\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'lxde')\n\t\t\t\t\t\tDE=\"LXDE\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'mate')\n\t\t\t\t\t\tDE=\"MATE\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'xfce')\n\t\t\t\t\t\tDE=\"Xfce\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'x-cinnamon')\n\t\t\t\t\t\tDE=\"Cinnamon\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'unity')\n\t\t\t\t\t\tDE=\"Unity\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'lxqt')\n\t\t\t\t\t\tDE=\"LXQt\"\n\t\t\t\t\t\t;;\n\t\t\t\tesac\n\t\t\tfi\n\n\t\t\tif [[ -z \"$DE\" || \"$DE\" = \"Not Present\" ]]; then\n\t\t\t\t# classic fallbacks\n\t\t\t\tif [ -n \"$KDE_FULL_SESSION\" ]; then\n\t\t\t\t\tDE=\"KDE\"\n\t\t\t\telif [ -n \"$TDE_FULL_SESSION\" ]; then\n\t\t\t\t\tDE=\"Trinity\"\n\t\t\t\telif [ -n \"$GNOME_DESKTOP_SESSION_ID\" ]; then\n\t\t\t\t\tDE=\"GNOME\"\n\t\t\t\telif [ -n \"$MATE_DESKTOP_SESSION_ID\" ]; then\n\t\t\t\t\tDE=\"MATE\"\n\t\t\t\telif dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus \\\n\t\t\t\t\torg.freedesktop.DBus.GetNameOwner string:org.gnome.SessionManager >/dev/null 2>&1 ; then\n\t\t\t\t\tDE=\"GNOME\"\n\t\t\t\telif xprop -root _DT_SAVE_MODE 2> /dev/null | grep -q -i ' = \"xfce4\"$'; then\n\t\t\t\t\tDE=\"Xfce\"\n\t\t\t\telif xprop -root 2> /dev/null | grep -q -i '^xfce_desktop_window'; then\n\t\t\t\t\tDE=\"Xfce\"\n\t\t\t\telif echo \"$DESKTOP\" | grep -q -i '^Enlightenment'; then\n\t\t\t\t\tDE=\"Enlightenment\"\n\t\t\t\tfi\n\t\t\tfi\n\n\t\t\tif [[ -z \"$DE\" || \"$DE\" = \"Not Present\" ]]; then\n\t\t\t\t# fallback to checking $DESKTOP_SESSION\n\t\t\t\tlocal _DESKTOP_SESSION=\n\t\t\t\tif [[ ${BASH_VERSINFO[0]} -ge 4 ]]; then\n\t\t\t\t\tif [[ ${BASH_VERSINFO[0]} -eq 4 && ${BASH_VERSINFO[1]} -gt 1 ]] || [[ ${BASH_VERSINFO[0]} -gt 4 ]]; then\n\t\t\t\t\t\t_DESKTOP_SESSION=${DESKTOP_SESSION,,}\n\t\t\t\t\telse\n\t\t\t\t\t\t_DESKTOP_SESSION=\"$(tr 
'[:upper:]' '[:lower:]' <<< \"${DESKTOP_SESSION}\")\"\n\t\t\t\t\tfi\n\t\t\t\telse\n\t\t\t\t\t_DESKTOP_SESSION=\"$(tr '[:upper:]' '[:lower:]' <<< \"${DESKTOP_SESSION}\")\"\n\t\t\t\tfi\n\t\t\t\tcase \"${_DESKTOP_SESSION}\" in\n\t\t\t\t\t'gnome'*)\n\t\t\t\t\t\tDE=\"GNOME\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'deepin')\n\t\t\t\t\t\tDE=\"Deepin\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'lumina')\n\t\t\t\t\t\tDE=\"Lumina\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'lxde'|'lubuntu')\n\t\t\t\t\t\tDE=\"LXDE\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'mate')\n\t\t\t\t\t\tDE=\"MATE\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'xfce'*)\n\t\t\t\t\t\tDE=\"Xfce\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'budgie-desktop')\n\t\t\t\t\t\tDE=\"Budgie\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'cinnamon')\n\t\t\t\t\t\tDE=\"Cinnamon\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'trinity')\n\t\t\t\t\t\tDE=\"Trinity\"\n\t\t\t\t\t\t;;\n\t\t\t\tesac\n\t\t\tfi\n\n\t\t\tif [[ -z \"$DE\" || \"$DE\" = \"Not Present\" ]]; then\n\t\t\t\t# fallback to checking $GDMSESSION\n\t\t\t\tcase \"${GDMSESSION,,}\" in\n\t\t\t\t\t'lumina'*)\n\t\t\t\t\t\tDE=\"Lumina\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t'mate')\n\t\t\t\t\t\tDE=\"MATE\"\n\t\t\t\t\t\t;;\n\t\t\t\tesac\n\t\t\tfi\n\n\t\t\tif [[ ${DE} == \"GNOME\" ]]; then\n\t\t\t\tif type -p xprop >/dev/null 2>&1; then\n\t\t\t\t\tif xprop -name \"unity-launcher\" >/dev/null 2>&1; then\n\t\t\t\t\t\tDE=\"Unity\"\n\t\t\t\t\telif xprop -name \"launcher\" >/dev/null 2>&1 &&\n\t\t\t\t\t\txprop -name \"panel\" >/dev/null 2>&1; then\n\t\t\t\t\t\tDE=\"Unity\"\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\tfi\n\n\t\t\tif [[ ${DE} == \"KDE\" ]]; then\n\t\t\t\tif [[ -n ${KDE_SESSION_VERSION} ]]; then\n\t\t\t\t\tif [[ ${KDE_SESSION_VERSION} == '5' ]]; then\n\t\t\t\t\t\tDE=\"KDE5\"\n\t\t\t\t\telif [[ ${KDE_SESSION_VERSION} == '4' ]]; then\n\t\t\t\t\t\tDE=\"KDE4\"\n\t\t\t\t\tfi\n\t\t\t\telif [[ ${KDE_FULL_SESSION} == 'true' ]]; then\n\t\t\t\t\tDE=\"KDE\"\n\t\t\t\t\tDEver_data=$(kded --version 2>/dev/null)\n\t\t\t\t\tDEver=$(grep -si '^KDE:' <<< \"$DEver_data\" | \"${AWK}\" '{print $2}')\n\t\t\t\tfi\n\t\t\tfi\n\t\tfi\n\n\t\tif [[ ${DE} != 
\"Not Present\" ]]; then\n\t\t\tif [[ ${DE} == \"Cinnamon\" ]]; then\n\t\t\t\tif type -p cinnamon >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(cinnamon --version)\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"GNOME\" ]]; then\n\t\t\t\tif type -p gnome-control-center >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(gnome-control-center --version 2> /dev/null)\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\telif type -p gnome-session-properties >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(gnome-session-properties --version 2> /dev/null)\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\telif type -p gnome-session >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(gnome-session --version 2> /dev/null)\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"KDE4\" || ${DE} == \"KDE5\" ]]; then\n\t\t\t\tif type -p kded${DE#KDE} >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(kded${DE#KDE} --version)\n\t\t\t\t\tif [[ $(( $(echo \"$DEver\" | wc -w) )) -eq 2 ]] && [[ \"$(echo \"$DEver\" | cut -d ' ' -f1)\" == \"kded${DE#KDE}\" ]]; then\n\t\t\t\t\t\tDEver=$(echo \"$DEver\" | cut -d ' ' -f2)\n\t\t\t\t\t\tDE=\"KDE ${DEver}\"\n\t\t\t\t\telse\n\t\t\t\t\t\tfor l in $(echo \"${DEver// /_}\"); do\n\t\t\t\t\t\t\tif [[ ${l//:*} == \"KDE_Development_Platform\" ]]; then\n\t\t\t\t\t\t\t\tDEver=${l//*:_}\n\t\t\t\t\t\t\t\tDE=\"KDE ${DEver//_*}\"\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\tdone\n\t\t\t\t\tfi\n\t\t\t\t\tif pgrep -U ${UID} plasmashell >/dev/null 2>&1; then\n\t\t\t\t\t\tDEver=$(plasmashell --version | cut -d ' ' -f2)\n\t\t\t\t\t\tDE=\"$DE / Plasma $DEver\"\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"Lumina\" ]]; then\n\t\t\t\tif type -p Lumina-DE.real >/dev/null 2>&1; then\n\t\t\t\t\tlumina=\"$(type -p Lumina-DE.real)\"\n\t\t\t\telif type -p Lumina-DE >/dev/null 2>&1; then\n\t\t\t\t\tlumina=\"$(type -p Lumina-DE)\"\n\t\t\t\tfi\n\t\t\t\tif [ -n \"$lumina\" ]; then\n\t\t\t\t\tif grep -q '--version' \"$lumina\"; then\n\t\t\t\t\t\tDEver=$(\"$lumina\" --version 2>&1 | tr -d 
\\\")\n\t\t\t\t\t\tDE=\"${DE} ${DEver}\"\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"LXQt\" ]]; then\n\t\t\t\tif type -p lxqt-about >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(lxqt-about --version | \"${AWK}\" '/^liblxqt/ {print $2}')\n\t\t\t\t\tDE=\"${DE} ${DEver}\"\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"MATE\" ]]; then\n\t\t\t\tif type -p mate-session >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(mate-session --version)\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"Unity\" ]]; then\n\t\t\t\tif type -p unity >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(unity --version)\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"Deepin\" ]]; then\n\t\t\t\tif [[ -f /etc/deepin-version ]]; then\n\t\t\t\t\tDEver=\"$(\"${AWK}\" -F '=' '/Version/ {print $2}' /etc/deepin-version)\"\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\tfi\n\t\t\telif [[ ${DE} == \"Trinity\" ]]; then\n\t\t\t\tif type -p tde-config >/dev/null 2>&1; then\n\t\t\t\t\tDEver=\"$(tde-config --version | \"${AWK}\" -F ' ' '/TDE:/ {print $2}')\"\n\t\t\t\t\tDE=\"${DE} ${DEver//* }\"\n\t\t\t\tfi\n\t\t\tfi\n\t\tfi\n\n\t\tif [[ \"${DE}\" == \"Not Present\" ]]; then\n\t\t\tif pgrep -U ${UID} lxsession >/dev/null 2>&1; then\n\t\t\t\tDE=\"LXDE\"\n\t\t\t\tif type -p lxpanel >/dev/null 2>&1; then\n\t\t\t\t\tDEver=$(lxpanel -v)\n\t\t\t\t\tDE=\"${DE} $DEver\"\n\t\t\t\tfi\n\t\t\telif pgrep -U ${UID} lxqt-session >/dev/null 2>&1; then\n\t\t\t\tDE=\"LXQt\"\n\t\t\telif pgrep -U ${UID} razor-session >/dev/null 2>&1; then\n\t\t\t\tDE=\"RazorQt\"\n\t\t\telif pgrep -U ${UID} dtsession >/dev/null 2>&1; then\n\t\t\t\tDE=\"CDE\"\n\t\t\tfi\n\t\tfi\n\tfi\n\tverboseOut \"Finding desktop environment...found as '$DE'\"\n}\n### DE Detection - End\n\n\n# WM Detection - Begin\ndetectwm () {\n\tWM=\"Not Found\"\n\tif [[ ${distro} == \"Mac OS X\" || ${distro} == \"macOS\" ]]; then\n\t\tif ps -U \"${USER}\" | grep -q -i 'finder'; then\n\t\t\tWM=\"Quartz Compositor\"\n\t\tfi\n\telif [[ \"${distro}\" == 
\"Cygwin\" || \"${distro}\" == \"Msys\" ]]; then\n\t\tif [ \"$(tasklist | grep -o 'bugn' | tr -d '\\r \\n')\" = \"bugn\" ]; then\n\t\t\tWM=\"bug.n\"\n\t\telif [ \"$(tasklist | grep -o 'Windawesome' | tr -d '\\r \\n')\" = \"Windawesome\" ]; then\n\t\t\tWM=\"Windawesome\"\n\t\telif [ \"$(tasklist | grep -o 'blackbox' | tr -d '\\r \\n')\" = \"blackbox\" ]; then\n\t\t\tWM=\"Blackbox\"\n\t\telse\n\t\t\tWM=\"DWM/Explorer\"\n\t\tfi\n\telif [[ -n ${DISPLAY} ]]; then\n\t\tif [[ \"${distro}\" == \"FreeBSD\" ]]; then\n\t\t\tpgrep_flags=\"-aU\"\n\t\telse\n\t\t\tpgrep_flags=\"-U\"\n\t\tfi\n\t\tfor each in \"${wmnames[@]}\"; do\n\t\t\tPID=\"$(pgrep ${pgrep_flags} ${UID} \"^$each$\")\"\n\t\t\tif [ \"$PID\" ]; then\n\t\t\t\tcase $each in\n\t\t\t\t\t'2bwm') WM=\"2bwm\";;\n\t\t\t\t\t'9wm') WM=\"9wm\";;\n\t\t\t\t\t'awesome') WM=\"Awesome\";;\n\t\t\t\t\t'beryl') WM=\"Beryl\";;\n\t\t\t\t\t'blackbox') WM=\"BlackBox\";;\n\t\t\t\t\t'bspwm') WM=\"bspwm\";;\n\t\t\t\t\t'budgie-wm') WM=\"BudgieWM\";;\n\t\t\t\t\t'chromeos-wm') WM=\"chromeos-wm\";;\n\t\t\t\t\t'cinnamon') WM=\"Muffin\";;\n\t\t\t\t\t'compiz') WM=\"Compiz\";;\n\t\t\t\t\t'deepin-wm') WM=\"deepin-wm\";;\n\t\t\t\t\t'dminiwm') WM=\"dminiwm\";;\n\t\t\t\t\t'dtwm') WM=\"dtwm\";;\n\t\t\t\t\t'dwm') WM=\"dwm\";;\n\t\t\t\t\t'e16') WM=\"E16\";;\n\t\t\t\t\t'emerald') WM=\"Emerald\";;\n\t\t\t\t\t'enlightenment') WM=\"E17\";;\n\t\t\t\t\t'fluxbox') WM=\"FluxBox\";;\n\t\t\t\t\t'flwm'|'flwm_topside') WM=\"FLWM\";;\n\t\t\t\t\t'fvwm') WM=\"FVWM\";;\n\t\t\t\t\t'herbstluftwm') WM=\"herbstluftwm\";;\n\t\t\t\t\t'howm') WM=\"howm\";;\n\t\t\t\t\t'i3') WM=\"i3\";;\n\t\t\t\t\t'icewm') WM=\"IceWM\";;\n\t\t\t\t\t'kwin') WM=\"KWin\";;\n\t\t\t\t\t'metacity') WM=\"Metacity\";;\n\t\t\t\t\t'monsterwm') WM=\"monsterwm\";;\n\t\t\t\t\t'musca') WM=\"Musca\";;\n\t\t\t\t\t'mwm') WM=\"MWM\";;\n\t\t\t\t\t'notion') WM=\"Notion\";;\n\t\t\t\t\t'openbox') WM=\"OpenBox\";;\n\t\t\t\t\t'pekwm') WM=\"PekWM\";;\n\t\t\t\t\t'ratpoison') WM=\"Ratpoison\";;\n\t\t\t\t\t'sawfish') 
WM=\"Sawfish\";;\n\t\t\t\t\t'scrotwm') WM=\"ScrotWM\";;\n\t\t\t\t\t'spectrwm') WM=\"SpectrWM\";;\n\t\t\t\t\t'stumpwm') WM=\"StumpWM\";;\n\t\t\t\t\t'subtle') WM=\"subtle\";;\n\t\t\t\t\t'sway') WM=\"sway\";;\n\t\t\t\t\t'swm') WM=\"swm\";;\n\t\t\t\t\t'twin') WM=\"TWin\";;\n\t\t\t\t\t'wmaker') WM=\"WindowMaker\";;\n\t\t\t\t\t'wmfs') WM=\"WMFS\";;\n\t\t\t\t\t'wmii') WM=\"wmii\";;\n\t\t\t\t\t'xfwm4') WM=\"Xfwm4\";;\n\t\t\t\t\t'xmonad.*') WM=\"XMonad\";;\n\t\t\t\tesac\n\t\t\tfi\n\t\t\tif [[ ${WM} != \"Not Found\" ]]; then\n\t\t\t\tbreak 1\n\t\t\tfi\n\t\tdone\n\n\t\tif [[ ${WM} == \"Not Found\" ]]; then\n\t\t\tif type -p xprop >/dev/null 2>&1; then\n\t\t\t\tWM=$(xprop -root _NET_SUPPORTING_WM_CHECK)\n\t\t\t\tif [[ \"$WM\" =~ 'not found' ]]; then\n\t\t\t\t\tWM=\"Not Found\"\n\t\t\t\telif [[ \"$WM\" =~ 'Not found' ]]; then\n\t\t\t\t\tWM=\"Not Found\"\n\t\t\t\telif [[ \"$WM\" =~ '[Ii]nvalid window id format' ]]; then\n\t\t\t\t\tWM=\"Not Found\"\n\t\t\t\telif [[ \"$WM\" =~ \"no such\" ]]; then\n\t\t\t\t\tWM=\"Not Found\"\n\t\t\t\telse\n\t\t\t\t\tWM=${WM//* }\n\t\t\t\t\tWM=$(xprop -id \"${WM}\" 8s _NET_WM_NAME)\n\t\t\t\t\tWM=$(echo \"$(WM=${WM//*= }; echo \"${WM//\\\"}\")\")\n\t\t\t\tfi\n\t\t\tfi\n\t\tfi\n\n\t\t# Proper format WM names that need it.\n\t\tif [[ ${BASH_VERSINFO[0]} -ge 4 ]]; then\n\t\t\tif [[ ${BASH_VERSINFO[0]} -eq 4 && ${BASH_VERSINFO[1]} -gt 1 ]] || [[ ${BASH_VERSINFO[0]} -gt 4 ]]; then\n\t\t\t\tWM_lower=${WM,,}\n\t\t\telse\n\t\t\t\tWM_lower=\"$(tr '[:upper:]' '[:lower:]' <<< \"${WM}\")\"\n\t\t\tfi\n\t\telse\n\t\t\tWM_lower=\"$(tr '[:upper:]' '[:lower:]' <<< \"${WM}\")\"\n\t\tfi\n\t\tcase ${WM_lower} in\n\t\t\t*'gala'*) WM=\"Gala\";;\n\t\t\t'2bwm') WM=\"2bwm\";;\n\t\t\t'awesome') WM=\"Awesome\";;\n\t\t\t'beryl') WM=\"Beryl\";;\n\t\t\t'blackbox') WM=\"BlackBox\";;\n\t\t\t'budgiewm') WM=\"BudgieWM\";;\n\t\t\t'chromeos-wm') WM=\"chromeos-wm\";;\n\t\t\t'cinnamon') WM=\"Cinnamon\";;\n\t\t\t'compiz') WM=\"Compiz\";;\n\t\t\t'deepin-wm') WM=\"Deepin 
WM\";;\n\t\t\t'dminiwm') WM=\"dminiwm\";;\n\t\t\t'dwm') WM=\"dwm\";;\n\t\t\t'e16') WM=\"E16\";;\n\t\t\t'echinus') WM=\"echinus\";;\n\t\t\t'emerald') WM=\"Emerald\";;\n\t\t\t'enlightenment') WM=\"E17\";;\n\t\t\t'fluxbox') WM=\"FluxBox\";;\n\t\t\t'flwm'|'flwm_topside') WM=\"FLWM\";;\n\t\t\t'fvwm') WM=\"FVWM\";;\n\t\t\t'gnome shell'*) WM=\"Mutter\";;\n\t\t\t'herbstluftwm') WM=\"herbstluftwm\";;\n\t\t\t'howm') WM=\"howm\";;\n\t\t\t'i3') WM=\"i3\";;\n\t\t\t'icewm') WM=\"IceWM\";;\n\t\t\t'kwin') WM=\"KWin\";;\n\t\t\t'metacity') WM=\"Metacity\";;\n\t\t\t'monsterwm') WM=\"monsterwm\";;\n\t\t\t'muffin') WM=\"Muffin\";;\n\t\t\t'musca') WM=\"Musca\";;\n\t\t\t'mutter'*) WM=\"Mutter\";;\n\t\t\t'mwm') WM=\"MWM\";;\n\t\t\t'notion') WM=\"Notion\";;\n\t\t\t'openbox') WM=\"OpenBox\";;\n\t\t\t'pekwm') WM=\"PekWM\";;\n\t\t\t'ratpoison') WM=\"Ratpoison\";;\n\t\t\t'sawfish') WM=\"Sawfish\";;\n\t\t\t'scrotwm') WM=\"ScrotWM\";;\n\t\t\t'spectrwm') WM=\"SpectrWM\";;\n\t\t\t'stumpwm') WM=\"StumpWM\";;\n\t\t\t'subtle') WM=\"subtle\";;\n\t\t\t'sway') WM=\"sway\";;\n\t\t\t'swm') WM=\"swm\";;\n\t\t\t'twin') WM=\"TWin\";;\n\t\t\t'wmaker') WM=\"WindowMaker\";;\n\t\t\t'wmfs') WM=\"WMFS\";;\n\t\t\t'wmii') WM=\"wmii\";;\n\t\t\t'xfwm4') WM=\"Xfwm4\";;\n\t\t\t'xmonad') WM=\"XMonad\";;\n\t\tesac\n\tfi\n\tverboseOut \"Finding window manager...found as '$WM'\"\n}\n# WM Detection - End\n\n\n# WM Theme Detection - BEGIN\ndetectwmtheme () {\n\tWin_theme=\"Not Found\"\n\tif [[ \"${distro}\" == \"Mac OS X\" || \"${distro}\" == \"macOS\" ]]; then\n\t\tthemeNumber=\"$(defaults read NSGlobalDomain AppleAquaColorVariant 2>/dev/null)\"\n\t\taccentColorNumber=\"$(defaults read NSGlobalDomain AppleAccentColor 2>/dev/null)\"\n\t\tinterfaceStyle=\"$(defaults read NSGlobalDomain AppleInterfaceStyle 2>/dev/null)\"\n\t\tif [ \"${themeNumber}\" == \"1\" ] || [ \"${themeNumber}x\" == \"x\" ]; then\n\t\t\tcase \"${accentColorNumber}\" 
in\n\t\t\t\"5\")\n\t\t\t\tWin_theme=\"Purple\"\n\t\t\t\t;;\n\t\t\t\"6\")\n\t\t\t\tWin_theme=\"Pink\"\n\t\t\t\t;;\n\t\t\t\"0\")\n\t\t\t\tWin_theme=\"Red\"\n\t\t\t\t;;\n\t\t\t\"1\")\n\t\t\t\tWin_theme=\"Orange\"\n\t\t\t\t;;\n\t\t\t\"2\")\n\t\t\t\tWin_theme=\"Yellow\"\n\t\t\t\t;;\n\t\t\t\"3\")\n\t\t\t\tWin_theme=\"Green\"\n\t\t\t\t;;\n\t\t\t*)\n\t\t\t\tWin_theme=\"Blue\"\n\t\t\t\t;;\n\t\t\tesac\n\t\telse\n\t\t\tWin_theme=\"Graphite\"\n\t\tfi\n\t\tif [ \"${interfaceStyle}\" == \"Dark\" ]; then\n\t\t\tWin_theme=\"${Win_theme} (Dark)\"\n\t\tfi\n\telif [[ \"${distro}\" == \"Cygwin\" || \"${distro}\" == \"Msys\" ]]; then\n\t\tif [ \"${WM}\" == \"Blackbox\" ]; then\n\t\t\tif [ \"${distro}\" == \"Msys\" ]; then\n\t\t\t\tBlackbox_loc=$(reg query 'HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\WinLogon' //v 'Shell')\n\t\t\telse\n\t\t\t\tBlackbox_loc=$(reg query 'HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\WinLogon' /v 'Shell')\n\t\t\tfi\n\t\t\tBlackbox_loc=\"$(echo \"${Blackbox_loc}\" | sed 's/.*REG_SZ//' | sed -e 's/^[ \\t]*//' | sed 's/.\\{4\\}$//')\"\n\t\t\tWin_theme=$(grep 'session.styleFile' \"${Blackbox_loc}.rc\" | sed 's/ //g' | sed 's/session\\.styleFile://g' | sed 's/.*\\\\//g')\n\t\telse\n\t\t\tif [[ \"${distro}\" == \"Msys\" ]]; then\n\t\t\t\tthemeFile=\"$(reg query 'HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Themes' //v 'CurrentTheme')\"\n\t\t\telse\n\t\t\t\tthemeFile=\"$(reg query 'HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Themes' /v 'CurrentTheme')\"\n\t\t\tfi\n\t\t\tWin_theme=$(echo \"$themeFile\" | \"${AWK}\" -F\"\\\\\" '{print $NF}' | sed 's|\\.theme$||')\n\t\tfi\n\telse\n\t\tcase $WM in\n\t\t\t'2bwm'|'9wm'|'Beryl'|'bspwm'|'dminiwm'|'dwm'|'echinus'|'FVWM'|'howm'|'i3'|'monsterwm'|'Musca'|\\\n\t\t\t'Notion'|'Ratpoison'|'ScrotWM'|'SpectrWM'|'swm'|'subtle'|'WindowMaker'|'WMFS'|'wmii'|'XMonad')\n\t\t\t\tWin_theme=\"Not Applicable\"\n\t\t\t;;\n\t\t\t'Awesome')\n\t\t\t\tif [ -f \"/usr/bin/awesome-client\" ]; 
then\n\t\t\t\t\tWin_theme=\"$(/usr/bin/awesome-client \"return require('beautiful').theme_path\" | grep -oP '[^/]*(?=/\"$)')\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'BlackBox')\n\t\t\t\tif [ -f \"$HOME/.blackboxrc\" ]; then\n\t\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\"/\" '/styleFile/ {print $NF}' \"$HOME/.blackboxrc\")\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'BudgieWM')\n\t\t\t\tWin_theme=\"$(gsettings get org.gnome.desktop.wm.preferences theme)\"\n\t\t\t\tWin_theme=\"${Win_theme//\\'}\"\n\t\t\t;;\n\t\t\t'Cinnamon'|'Muffin')\n\t\t\t\tde_theme=\"$(gsettings get org.cinnamon.theme name)\"\n\t\t\t\tde_theme=${de_theme//\"'\"}\n\t\t\t\twin_theme=\"$(gsettings get org.cinnamon.desktop.wm.preferences theme)\"\n\t\t\t\twin_theme=${win_theme//\"'\"}\n\t\t\t\tWin_theme=\"${de_theme} (${win_theme})\"\n\t\t\t;;\n\t\t\t'Compiz'|'Mutter'*|'GNOME Shell'|'Gala')\n\t\t\t\tif type -p gsettings >/dev/null 2>&1; then\n\t\t\t\t\tWin_theme=\"$(gsettings get org.gnome.shell.extensions.user-theme name 2>/dev/null)\"\n\t\t\t\t\tif [[ -z \"$Win_theme\" ]]; then\n\t\t\t\t\t\tWin_theme=\"$(gsettings get org.gnome.desktop.wm.preferences theme)\"\n\t\t\t\t\tfi\n\t\t\t\t\tWin_theme=${Win_theme//\"'\"}\n\t\t\t\telif type -p gconftool-2 >/dev/null 2>&1; then\n\t\t\t\t\tWin_theme=$(gconftool-2 -g /apps/metacity/general/theme)\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'Deepin WM')\n\t\t\t\tif type -p gsettings >/dev/null 2>&1; then\n\t\t\t\t\tWin_theme=\"$(gsettings get com.deepin.wrap.gnome.desktop.wm.preferences theme)\"\n\t\t\t\t\tWin_theme=${Win_theme//\"'\"}\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'E16')\n\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\"= \" '/theme.name/ {print $2}' $HOME/.e16/e_config--*.cfg)\"\n\t\t\t;;\n\t\t\t'E17'|'Enlightenment')\n\t\t\t\tif [ \"$(which eet 2>/dev/null)\" ]; then\n\t\t\t\t\teconfig=\"$(eet -d \"$HOME/.e/e/config/standard/e.cfg\" config | \"${AWK}\" '/value \\\"file\\\" string.*.edj/{ print $4 }')\"\n\t\t\t\t\teconfigend=\"${econfig##*/}\"\n\t\t\t\t\tWin_theme=${econfigend%.*}\n\t\t\t\telif [ -n 
\"${E_CONF_PROFILE}\" ]; then\n\t\t\t\t\t#E17 doesn't store cfg files in text format so for now get the profile as opposed to theme. atyoung\n\t\t\t\t\t#TODO: Find a way to extract and read E17 .cfg files ( google seems to have nothing ). atyoung\n\t\t\t\t\tWin_theme=\"${E_CONF_PROFILE}\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'Emerald')\n\t\t\t\tif [ -f \"$HOME/.emerald/theme/theme.ini\" ]; then\n\t\t\t\t\tWin_theme=\"$(for a in /usr/share/emerald/themes/* $HOME/.emerald/themes/*; do cmp \"$HOME/.emerald/theme/theme.ini\" \"$a/theme.ini\" &>/dev/null && basename \"$a\"; done)\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'FluxBox'|'Fluxbox')\n\t\t\t\tif [ -f \"$HOME/.fluxbox/init\" ]; then\n\t\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\"/\" '/styleFile/ {print $NF}' \"$HOME/.fluxbox/init\")\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'IceWM')\n\t\t\t\tif [ -f \"$HOME/.icewm/theme\" ]; then\n\t\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\"[\\\",/]\" '!/#/ {print $2}' \"$HOME/.icewm/theme\")\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'KWin'*)\n\t\t\t\tif [[ -z $KDE_CONFIG_DIR ]]; then\n\t\t\t\t\tif type -p kde5-config >/dev/null 2>&1; then\n\t\t\t\t\t\tKDE_CONFIG_DIR=$(kde5-config --localprefix)\n\t\t\t\t\telif type -p kde4-config >/dev/null 2>&1; then\n\t\t\t\t\t\tKDE_CONFIG_DIR=$(kde4-config --localprefix)\n\t\t\t\t\telif type -p kde-config >/dev/null 2>&1; then\n\t\t\t\t\t\tKDE_CONFIG_DIR=$(kde-config --localprefix)\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t\tif [[ -n $KDE_CONFIG_DIR ]]; then\n\t\t\t\t\tWin_theme=\"Not Applicable\"\n\t\t\t\t\tKDE_CONFIG_DIR=${KDE_CONFIG_DIR%/}\n\t\t\t\t\tif [[ -f $KDE_CONFIG_DIR/share/config/kwinrc ]]; then\n\t\t\t\t\t\tWin_theme=\"$(\"${AWK}\" '/PluginLib=kwin3_/{gsub(/PluginLib=kwin3_/,\"\",$0); print $0; exit}' \"$KDE_CONFIG_DIR/share/config/kwinrc\")\"\n\t\t\t\t\t\tif [[ -z \"$Win_theme\" ]]; then\n\t\t\t\t\t\t\tWin_theme=\"Not Applicable\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"$Win_theme\" == \"Not Applicable\" ]]; then\n\t\t\t\t\t\tif [[ -f $KDE_CONFIG_DIR/share/config/kdebugrc ]]; 
then\n\t\t\t\t\t\t\tWin_theme=\"$(\"${AWK}\" '/(decoration)/ {gsub(/\\[/,\"\",$1); print $1; exit}' \"$KDE_CONFIG_DIR/share/config/kdebugrc\")\"\n\t\t\t\t\t\t\tif [[ -z \"$Win_theme\" ]]; then\n\t\t\t\t\t\t\t\tWin_theme=\"Not Applicable\"\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"$Win_theme\" == \"Not Applicable\" ]]; then\n\t\t\t\t\t\tif [[ -f $KDE_CONFIG_DIR/share/config/kdeglobals ]]; then\n\t\t\t\t\t\t\tWin_theme=\"$(\"${AWK}\" '/\\[General\\]/ {flag=1;next} /^$/{flag=0} flag {print}' \"$KDE_CONFIG_DIR/share/config/kdeglobals\" | grep -oP 'Name=\\K.*')\"\n\t\t\t\t\t\t\tif [[ -z \"$Win_theme\" ]]; then\n\t\t\t\t\t\t\t\tWin_theme=\"Not Applicable\"\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"$Win_theme\" != \"Not Applicable\" ]]; then\n\t\t\t\t\t\tif [[ ${BASH_VERSINFO[0]} -ge 4 ]]; then\n\t\t\t\t\t\t\tif [[ ${BASH_VERSINFO[0]} -eq 4 && ${BASH_VERSINFO[1]} -gt 1 ]] || [[ ${BASH_VERSINFO[0]} -gt 4 ]]; then\n\t\t\t\t\t\t\t\tWin_theme=\"${Win_theme^}\"\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tWin_theme=\"$(tr '[:lower:]' '[:upper:]' <<< \"${Win_theme:0:1}\")${Win_theme:1}\"\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tWin_theme=\"$(tr '[:lower:]' '[:upper:]' <<< \"${Win_theme:0:1}\")${Win_theme:1}\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'Marco'|'Metacity (Marco)')\n\t\t\t\tWin_theme=\"$(gsettings get org.mate.Marco.general theme)\"\n\t\t\t\tWin_theme=${Win_theme//\"'\"}\n\t\t\t;;\n\t\t\t'Metacity')\n\t\t\t\tif [ \"$(gconftool-2 -g /apps/metacity/general/theme)\" ]; then\n\t\t\t\t\tWin_theme=\"$(gconftool-2 -g /apps/metacity/general/theme)\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'OpenBox'|'Openbox')\n\t\t\t\tif [ -f \"${XDG_CONFIG_HOME:-${HOME}/.config}/openbox/rc.xml\" ]; then\n\t\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\"[<,>]\" '/<theme/ { getline; print $3 }' \"${XDG_CONFIG_HOME:-${HOME}/.config}/openbox/rc.xml\")\";\n\t\t\t\telif [[ -f ${XDG_CONFIG_HOME:-${HOME}/.config}/openbox/lxde-rc.xml && \"${DE}\" == 
\"LXDE\" ]]; then\n\t\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\"[<,>]\" '/<theme/ { getline; print $3 }' \"${XDG_CONFIG_HOME:-${HOME}/.config}/openbox/lxde-rc.xml\")\";\n\t\t\t\telif [[ -f ${XDG_CONFIG_HOME:-${HOME}/.config}/openbox/lxqt-rc.xml && \"${DE}\" =~ \"LXQt\" ]]; then\n\t\t\t\t\tWin_theme=\"$(\"${AWK}\" -F'=' '/^theme/ {print $2}' ${HOME}/.config/lxqt/lxqt.conf)\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'PekWM')\n\t\t\t\tif [ -f \"$HOME/.pekwm/config\" ]; then\n\t\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\"/\" '/Theme/ {gsub(/\\\"/,\"\"); print $NF}' \"$HOME/.pekwm/config\")\"\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'Sawfish')\n\t\t\t\tWin_theme=\"$(\"${AWK}\" -F\")\" '/\\(quote default-frame-style/{print $2}' \"$HOME/.sawfish/custom\" | sed 's/ (quote //')\"\n\t\t\t;;\n\t\t\t'TWin')\n\t\t\t\tif [[ -z $TDE_CONFIG_DIR ]]; then\n\t\t\t\t\tif type -p tde-config >/dev/null 2>&1; then\n\t\t\t\t\t\tTDE_CONFIG_DIR=$(tde-config --localprefix)\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t\tif [[ -n $TDE_CONFIG_DIR ]]; then\n\t\t\t\t\tTDE_CONFIG_DIR=${TDE_CONFIG_DIR%/}\n\t\t\t\t\tif [[ -f $TDE_CONFIG_DIR/share/config/kcmthememanagerrc ]]; then\n\t\t\t\t\t\tWin_theme=$(\"${AWK}\" '/CurrentTheme=/ {gsub(/CurrentTheme=/,\"\",$0); print $0; exit}' \"$TDE_CONFIG_DIR/share/config/kcmthememanagerrc\")\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ -z $Win_theme ]]; then\n\t\t\t\t\t\tWin_theme=\"Not Applicable\"\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'Xfwm4')\n\t\t\t\tif [ -f \"${XDG_CONFIG_HOME:-${HOME}/.config}/xfce4/xfconf/xfce-perchannel-xml/xfwm4.xml\" ]; then\n\t\t\t\t\tWin_theme=\"$(xfconf-query -c xfwm4 -p /general/theme)\"\n\t\t\t\tfi\n\t\t\t;;\n\t\tesac\n\tfi\n\tverboseOut \"Finding window manager theme...found as '$Win_theme'\"\n}\n# WM Theme Detection - END\n\n# GTK Theme\\Icon\\Font Detection - BEGIN\ndetectgtk () {\n\tgtk2Theme=\"Not Found\"\n\tgtk3Theme=\"Not Found\"\n\tgtkIcons=\"Not Found\"\n\tgtkFont=\"Not Found\"\n\t# Font detection (OS X)\n\tif [[ ${distro} == \"Mac OS X\" || ${distro} == \"macOS\" ]]; 
then\n\t\tgtk2Theme=\"Not Applicable\"\n\t\tgtk3Theme=\"Not Applicable\"\n\t\tgtkIcons=\"Not Applicable\"\n\t\tif ps -U \"${USER}\" | grep -q -i 'finder'; then\n\t\t\tif [[ ${TERM_PROGRAM} == \"iTerm.app\" ]] && [ -f ~/Library/Preferences/com.googlecode.iterm2.plist ]; then\n\t\t\t\t# iTerm2\n\n\t\t\t\titerm2_theme_uuid=$(defaults read com.googlecode.iTerm2 \"Default Bookmark Guid\")\n\n\t\t\t\tOLD_IFS=$IFS\n\t\t\t\tIFS=$'\\n'\n\t\t\t\titerm2_theme_info=($(defaults read com.googlecode.iTerm2 \"New Bookmarks\" | grep -e 'Guid\\s*=\\s*\\w+' -e 'Normal Font'))\n\t\t\t\tIFS=$OLD_IFS\n\n\t\t\t\tfor i in $(seq 0 $((${#iterm2_theme_info[*]}/2-1))); do\n\t\t\t\t\tfound_uuid=$(str1=${iterm2_theme_info[$i*2]};echo \"${str1}\")\n\t\t\t\t\tif [[ $found_uuid == $iterm2_theme_info ]]; then\n\t\t\t\t\t\tgtkFont=$(str2=${iterm2_theme_info[$i*2+1]};echo ${str2:25:${#str2}-25-2} | sed 's/ [0-9]*$//')\n\t\t\t\t\t\tbreak\n\t\t\t\t\tfi\n\t\t\t\tdone\n\t\t\telse\n\t\t\t\t# Terminal.app\n\n\t\t\t\ttermapp_theme_name=$(defaults read com.apple.Terminal \"Default Window Settings\")\n\n\t\t\t\tOLD_IFS=$IFS\n\t\t\t\tIFS=$'\\n'\n\t\t\t\ttermapp_theme_info=$(/usr/libexec/PlistBuddy -c \\\n\t\t\t\t\t\"print ':Window Settings:${termapp_theme_name}:Font'\" \\\n\t\t\t\t\t~/Library/Preferences/com.apple.Terminal.plist \\\n\t\t\t\t\t| xxd -p |tr -d '\\n\\t\\s')\n\t\t\t\tIFS=$OLD_IFS\n\n\t\t\t\tgtkFont=$(echo \"${termapp_theme_info:288:60}\" | xxd -r -p | perl -pe 'binmode(STDIN, \":bytes\"); tr/A-Za-z0-9_\\!\\@\\#\\$\\%\\&\\^\\*\\(\\)-+=//dc;')\n\t\t\t\tgtkFont=$(echo \"${gtkFont:1}\" | sed 's/Z\\$.*//')\n\t\t\tfi\n\t\tfi\n\telse\n\t\tcase $DE in\n\t\t\t'KDE'*) # Desktop Environment found as \"KDE\"\n\t\t\t\tif type - p kde4-config >/dev/null 2>&1; then\n\t\t\t\t\tKDE_CONFIG_DIR=$(kde4-config --localprefix)\n\t\t\t\t\tif [[ -d ${KDE_CONFIG_DIR} ]]; then\n\t\t\t\t\t\tif [[ -f \"${KDE_CONFIG_DIR}/share/config/kdeglobals\" ]]; 
then\n\t\t\t\t\t\t\tKDE_CONFIG_FILE=\"${KDE_CONFIG_DIR}/share/config/kdeglobals\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\telif type -p kde5-config >/dev/null 2>&1; then\n\t\t\t\t\tKDE_CONFIG_DIR=$(kde5-config --localprefix)\n\t\t\t\t\tif [[ -d ${KDE_CONFIG_DIR} ]]; then\n\t\t\t\t\t\tif [[ -f \"${KDE_CONFIG_DIR}/share/config/kdeglobals\" ]]; then\n\t\t\t\t\t\t\tKDE_CONFIG_FILE=\"${KDE_CONFIG_DIR}/share/config/kdeglobals\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\telif type -p kde-config >/dev/null 2>&1; then\n\t\t\t\t\tKDE_CONFIG_DIR=$(kde-config --localprefix)\n\t\t\t\t\tif [[ -d ${KDE_CONFIG_DIR} ]]; then\n\t\t\t\t\t\tif [[ -f \"${KDE_CONFIG_DIR}/share/config/kdeglobals\" ]]; then\n\t\t\t\t\t\t\tKDE_CONFIG_FILE=\"${KDE_CONFIG_DIR}/share/config/kdeglobals\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\tfi\n\n\t\t\t\tif [[ -n ${KDE_CONFIG_FILE} ]]; then\n\t\t\t\t\tif grep -q 'widgetStyle=' \"${KDE_CONFIG_FILE}\"; then\n\t\t\t\t\t\tgtk2Theme=$(\"${AWK}\" -F\"=\" '/widgetStyle=/ {print $2}' \"${KDE_CONFIG_FILE}\")\n\t\t\t\t\telif grep -q 'colorScheme=' \"${KDE_CONFIG_FILE}\"; then\n\t\t\t\t\t\tgtk2Theme=$(\"${AWK}\" -F\"=\" '/colorScheme=/ {print $2}' \"${KDE_CONFIG_FILE}\")\n\t\t\t\t\tfi\n\n\t\t\t\t\tif grep -q 'Theme=' \"${KDE_CONFIG_FILE}\"; then\n\t\t\t\t\t\tgtkIcons=$(\"${AWK}\" -F\"=\" '/Theme=/ {print $2}' \"${KDE_CONFIG_FILE}\")\n\t\t\t\t\tfi\n\n\t\t\t\t\tif grep -q 'Font=' \"${KDE_CONFIG_FILE}\"; then\n\t\t\t\t\t\tgtkFont=$(\"${AWK}\" -F\"=\" '/font=/ {print $2}' \"${KDE_CONFIG_FILE}\")\n\t\t\t\t\tfi\n\t\t\t\tfi\n\n\t\t\t\tif [[ -f $HOME/.gtkrc-2.0 ]]; then\n\t\t\t\t\tgtk2Theme=$(grep '^gtk-theme-name' \"$HOME\"/.gtkrc-2.0 | \"${AWK}\" -F'=' '{print $2}')\n\t\t\t\t\tgtk2Theme=${gtk2Theme//\\\"/}\n\t\t\t\t\tgtkIcons=$(grep '^gtk-icon-theme-name' \"$HOME\"/.gtkrc-2.0 | \"${AWK}\" -F'=' '{print $2}')\n\t\t\t\t\tgtkIcons=${gtkIcons//\\\"/}\n\t\t\t\t\tgtkFont=$(grep 'font_name' \"$HOME\"/.gtkrc-2.0 | \"${AWK}\" -F'=' '{print 
$2}')\n\t\t\t\t\tgtkFont=${gtkFont//\\\"/}\n\t\t\t\tfi\n\n\t\t\t\tif [[ -f $HOME/.config/gtk-3.0/settings.ini ]]; then\n\t\t\t\t\tgtk3Theme=$(grep '^gtk-theme-name=' \"$HOME\"/.config/gtk-3.0/settings.ini | \"${AWK}\" -F'=' '{print $2}')\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'Cinnamon'*) # Desktop Environment found as \"Cinnamon\"\n\t\t\t\tif type -p gsettings >/dev/null 2>&1; then\n\t\t\t\t\tgtk3Theme=$(gsettings get org.cinnamon.desktop.interface gtk-theme)\n\t\t\t\t\tgtk3Theme=${gtk3Theme//\"'\"}\n\t\t\t\t\tgtk2Theme=${gtk3Theme}\n\n\t\t\t\t\tgtkIcons=$(gsettings get org.cinnamon.desktop.interface icon-theme)\n\t\t\t\t\tgtkIcons=${gtkIcons//\"'\"}\n\t\t\t\t\tgtkFont=$(gsettings get org.cinnamon.desktop.interface font-name)\n\t\t\t\t\tgtkFont=${gtkFont//\"'\"}\n\t\t\t\t\tif [ \"$background_detect\" == \"1\" ]; then gtkBackground=$(gsettings get org.gnome.desktop.background picture-uri); fi\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'GNOME'*|'Unity'*|'Budgie') # Desktop Environment found as \"GNOME\"\n\t\t\t\tif type -p gsettings >/dev/null 2>&1; then\n\t\t\t\t\tgtk3Theme=$(gsettings get org.gnome.desktop.interface gtk-theme)\n\t\t\t\t\tgtk3Theme=${gtk3Theme//\"'\"}\n\t\t\t\t\tgtk2Theme=${gtk3Theme}\n\t\t\t\t\tgtkIcons=$(gsettings get org.gnome.desktop.interface icon-theme)\n\t\t\t\t\tgtkIcons=${gtkIcons//\"'\"}\n\t\t\t\t\tgtkFont=$(gsettings get org.gnome.desktop.interface font-name)\n\t\t\t\t\tgtkFont=${gtkFont//\"'\"}\n\t\t\t\t\tif [ \"$background_detect\" == \"1\" ]; then gtkBackground=$(gsettings get org.gnome.desktop.background picture-uri); fi\n\t\t\t\telif type -p gconftool-2 >/dev/null 2>&1; then\n\t\t\t\t\tgtk2Theme=$(gconftool-2 -g /desktop/gnome/interface/gtk_theme)\n\t\t\t\t\tgtkIcons=$(gconftool-2 -g /desktop/gnome/interface/icon_theme)\n\t\t\t\t\tgtkFont=$(gconftool-2 -g /desktop/gnome/interface/font_name)\n\t\t\t\t\tif [ \"$background_detect\" == \"1\" ]; then\n\t\t\t\t\t\tgtkBackgroundFull=$(gconftool-2 -g 
/desktop/gnome/background/picture_filename)\n\t\t\t\t\t\tgtkBackground=$(echo \"$gtkBackgroundFull\" | \"${AWK}\" -F\"/\" '{print $NF}')\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'MATE'*) # MATE desktop environment\n\t\t\t\tif type -p gsettings >/dev/null 2>&1; then\n\t\t\t\t\tgtk3Theme=$(gsettings get org.mate.interface gtk-theme)\n\t\t\t\t\tgtk3Theme=${gtk3Theme//\"'\"}\n\t\t\t\t\tgtk2Theme=${gtk3Theme}\n\t\t\t\t\tgtkIcons=$(gsettings get org.mate.interface icon-theme)\n\t\t\t\t\tgtkIcons=${gtkIcons//\"'\"}\n\t\t\t\t\tgtkFont=$(gsettings get org.mate.interface font-name)\n\t\t\t\t\tgtkFont=${gtkFont//\"'\"}\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'Xfce'*) # Desktop Environment found as \"Xfce\"\n\t\t\t\tif [ \"$distro\" == \"BunsenLabs\" ] ; then\n\t\t\t\t\tgtk2Theme=$(\"${AWK}\" -F'\"' '/^gtk-theme/ {print $2}' \"$HOME\"/.gtkrc-2.0)\n\t\t\t\t\tgtk3Theme=$(\"${AWK}\" -F'=' '/^gtk-theme-name/ {print $2}' \"$HOME\"/.config/gtk-3.0/settings.ini)\n\t\t\t\t\tgtkIcons=$(\"${AWK}\" -F'\"' '/^gtk-icon-theme/ {print $2}' \"$HOME\"/.gtkrc-2.0)\n\t\t\t\t\tgtkFont=$(\"${AWK}\" -F'\"' '/^gtk-font-name/ {print $2}' \"$HOME\"/.gtkrc-2.0)\n\t\t\t\telse\n\t\t\t\t\tif type -p xfconf-query >/dev/null 2>&1; then\n\t\t\t\t\t\tgtk2Theme=$(xfconf-query -c xsettings -p /Net/ThemeName 2>/dev/null)\n\t\t\t\t\t\t[ -z \"$gtk2Theme\" ] && gtk2Theme=\"Not Found\"\n\t\t\t\t\tfi\n\n\t\t\t\t\tif type -p xfconf-query >/dev/null 2>&1; then\n\t\t\t\t\t\tgtkIcons=$(xfconf-query -c xsettings -p /Net/IconThemeName 2>/dev/null)\n\t\t\t\t\t\t[ -z \"$gtkIcons\" ] && gtkIcons=\"Not Found\"\n\t\t\t\t\tfi\n\n\t\t\t\t\tif type -p xfconf-query >/dev/null 2>&1; then\n\t\t\t\t\t\tgtkFont=$(xfconf-query -c xsettings -p /Gtk/FontName 2>/dev/null)\n\t\t\t\t\t\t[ -z \"$gtkFont\" ] && gtkFont=\"Not Identified\"\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t;;\n\t\t\t'LXDE'*)\n\t\t\t\tconfig_home=\"${XDG_CONFIG_HOME:-${HOME}/.config}\"\n\t\t\t\tif [ -f \"$config_home/lxde/config\" ]; then\n\t\t\t\t\tlxdeconf=\"/lxde/config\"\n\t\t\t\telif 
[ \"$distro\" == \"Trisquel\" ] || [ \"$distro\" == \"FreeBSD\" ]; then\n\t\t\t\t\tlxdeconf=\"\"\n\t\t\t\telif [ -f \"$config_home/lxsession/Lubuntu/desktop.conf\" ]; then\n\t\t\t\t\tlxdeconf=\"/lxsession/Lubuntu/desktop.conf\"\n\t\t\t\telse\n\t\t\t\t\tlxdeconf=\"/lxsession/LXDE/desktop.conf\"\n\t\t\t\tfi\n\n\t\t\t\tif grep -q 'sNet\\/ThemeName' \"${config_home}${lxdeconf}\" 2>/dev/null; then\n\t\t\t\t\tgtk2Theme=$(\"${AWK}\" -F'=' '/sNet\\/ThemeName/ {print $2}' \"${config_home}${lxdeconf}\")\n\t\t\t\tfi\n\n\t\t\t\tif grep -q 'IconThemeName' \"${config_home}${lxdeconf}\" 2>/dev/null; then\n\t\t\t\t\tgtkIcons=$(\"${AWK}\" -F'=' '/sNet\\/IconThemeName/ {print $2}' \"${config_home}${lxdeconf}\")\n\t\t\t\tfi\n\n\t\t\t\tif grep -q 'FontName' \"${config_home}${lxdeconf}\" 2>/dev/null; then\n\t\t\t\t\tgtkFont=$(\"${AWK}\" -F'=' '/sGtk\\/FontName/ {print $2}' \"${config_home}${lxdeconf}\")\n \t\t\t\tfi\n\t\t\t;;\n\n\t\t\t# /home/me/.config/rox.sourceforge.net/ROX-Session/Settings.xml\n\n\t\t\t*)\t# Lightweight or No DE Found\n\t\t\t\tif [ -f \"$HOME/.gtkrc-2.0\" ]; then\n\t\t\t\t\tif grep -q 'gtk-theme' \"$HOME/.gtkrc-2.0\"; then\n\t\t\t\t\t\tgtk2Theme=$(\"${AWK}\" -F'\"' '/^gtk-theme/ {print $2}' \"$HOME/.gtkrc-2.0\")\n\t\t\t\t\tfi\n\n\t\t\t\t\tif grep -q 'icon-theme' \"$HOME/.gtkrc-2.0\"; then\n\t\t\t\t\t\tgtkIcons=$(\"${AWK}\" -F'\"' '/^gtk-icon-theme/ {print $2}' \"$HOME/.gtkrc-2.0\")\n\t\t\t\t\tfi\n\n\t\t\t\t\tif grep -q 'font' \"$HOME/.gtkrc-2.0\"; then\n\t\t\t\t\t\tgtkFont=$(\"${AWK}\" -F'\"' '/^gtk-font-name/ {print $2}' \"$HOME/.gtkrc-2.0\")\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t\t# $HOME/.gtkrc.mine theme detect only\n\t\t\t\tif [[ -f \"$HOME/.gtkrc.mine\" ]]; then\n\t\t\t\t\tminegtkrc=\"$HOME/.gtkrc.mine\"\n\t\t\t\telif [[ -f \"$HOME/.gtkrc-2.0.mine\" ]]; then\n\t\t\t\t\tminegtkrc=\"$HOME/.gtkrc-2.0.mine\"\n\t\t\t\tfi\n\t\t\t\tif [ -f \"$minegtkrc\" ]; then\n\t\t\t\t\tif grep -q '^include' \"$minegtkrc\"; then\n\t\t\t\t\t\tgtk2Theme=$(grep '^include.*gtkrc' 
\"$minegtkrc\" | \"${AWK}\" -F \"/\" '{ print $5 }')\n\t\t\t\t\tfi\n\t\t\t\t\tif grep -q '^gtk-icon-theme-name' \"$minegtkrc\"; then\n\t\t\t\t\t\tgtkIcons=$(grep '^gtk-icon-theme-name' \"$minegtkrc\" | \"${AWK}\" -F '\"' '{print $2}')\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t\t# /etc/gtk-2.0/gtkrc compatibility\n\t\t\t\tif [[ -f /etc/gtk-2.0/gtkrc && ! -f \"$HOME/.gtkrc-2.0\" && ! -f \"$HOME/.gtkrc.mine\" && ! -f \"$HOME/.gtkrc-2.0.mine\" ]]; then\n\t\t\t\t\tif grep -q 'gtk-theme-name' /etc/gtk-2.0/gtkrc; then\n\t\t\t\t\t\tgtk2Theme=$(\"${AWK}\" -F'\"' '/^gtk-theme-name/ {print $2}' /etc/gtk-2.0/gtkrc)\n\t\t\t\t\tfi\n\t\t\t\t\tif grep -q 'gtk-fallback-theme-name' /etc/gtk-2.0/gtkrc  && ! [ \"x$gtk2Theme\" = \"x\" ]; then\n\t\t\t\t\t\tgtk2Theme=$(\"${AWK}\" -F'\"' '/^gtk-fallback-theme-name/ {print $2}' /etc/gtk-2.0/gtkrc)\n\t\t\t\t\tfi\n\n\t\t\t\t\tif grep -q 'icon-theme' /etc/gtk-2.0/gtkrc; then\n\t\t\t\t\t\tgtkIcons=$(\"${AWK}\" -F'\"' '/^icon-theme/ {print $2}' /etc/gtk-2.0/gtkrc)\n\t\t\t\t\tfi\n\t\t\t\t\tif  grep -q 'gtk-fallback-icon-theme' /etc/gtk-2.0/gtkrc  && ! 
[ \"x$gtkIcons\" = \"x\" ]; then\n\t\t\t\t\t\tgtkIcons=$(\"${AWK}\" -F'\"' '/^gtk-fallback-icon-theme/ {print $2}' /etc/gtk-2.0/gtkrc)\n\t\t\t\t\tfi\n\n\t\t\t\t\tif grep -q 'font' /etc/gtk-2.0/gtkrc; then\n\t\t\t\t\t\tgtkFont=$(\"${AWK}\" -F'\"' '/^gtk-font-name/ {print $2}' /etc/gtk-2.0/gtkrc)\n\t\t\t\t\tfi\n\t\t\t\tfi\n\n\t\t\t\t# EXPERIMENTAL gtk3 Theme detection\n\t\t\t\tif [[ \"$gtk3Theme\" = \"Not Found\" && -f \"$HOME/.config/gtk-3.0/settings.ini\" ]]; then\n\t\t\t\t\tif grep -q 'gtk-theme-name' \"$HOME/.config/gtk-3.0/settings.ini\"; then\n\t\t\t\t\t\tgtk3Theme=$(\"${AWK}\" -F'=' '/^gtk-theme-name/ {print $2}' \"$HOME/.config/gtk-3.0/settings.ini\")\n\t\t\t\t\tfi\n\t\t\t\tfi\n\n\t\t\t\t# Proper gtk3 Theme detection\n\t\t\t\tif type -p gsettings >/dev/null 2>&1; then\n\t\t\t\t\tif [[ -z \"$gtk3Theme\"  || \"$gtk3Theme\" = \"Not Found\" ]]; then\n\t\t\t\t\t\tgtk3Theme=$(gsettings get org.gnome.desktop.interface gtk-theme 2>/dev/null)\n\t\t\t\t\t\tgtk3Theme=${gtk3Theme//\"'\"}\n\t\t\t\t\tfi\n\t\t\t\tfi\n\n\t\t\t\t# ROX-Filer icon detect only\n\t\t\t\tif [ -a \"${XDG_CONFIG_HOME:-${HOME}/.config}/rox.sourceforge.net/ROX-Filer/Options\" ]; then\n\t\t\t\t\tgtkIcons=$(\"${AWK}\" -F'[>,<]' '/icon_theme/ {print $3}' \"${XDG_CONFIG_HOME:-${HOME}/.config}/rox.sourceforge.net/ROX-Filer/Options\")\n\t\t\t\tfi\n\n\t\t\t\t# E17 detection\n\t\t\t\tif [ \"$E_ICON_THEME\" ]; then\n\t\t\t\t\tgtkIcons=${E_ICON_THEME}\n\t\t\t\t\tgtk2Theme=\"Not available.\"\n\t\t\t\t\tgtkFont=\"Not available.\"\n\t\t\t\tfi\n\n\t\t\t\t# Background Detection (feh, nitrogen)\n\t\t\t\tif [ \"$background_detect\" == \"1\" ]; then\n\t\t\t\t\tif [ -a \"$HOME/.fehbg\" ]; then\n\t\t\t\t\t\tgtkBackgroundFull=$(\"${AWK}\" -F\"'\" '/feh --bg/{print $2}' \"$HOME/.fehbg\" 2>/dev/null)\n\t\t\t\t\t\tgtkBackground=$(echo \"$gtkBackgroundFull\" | \"${AWK}\" -F\"/\" '{print $NF}')\n\t\t\t\t\telif [ -a \"${XDG_CONFIG_HOME:-${HOME}/.config}/nitrogen/bg-saved.cfg\" ]; then\n\t\t\t\t\t\tgtkBackground=$(\"${AWK}\" 
-F\"/\" '/file=/ {print $NF}' \"${XDG_CONFIG_HOME:-${HOME}/.config}/nitrogen/bg-saved.cfg\")\n\t\t\t\t\tfi\n\t\t\t\tfi\n\n\t\t\t\tif [[ \"$distro\" == \"Cygwin\" || \"$distro\" == \"Msys\" ]]; then\n\t\t\t\t\tif [ \"$gtkFont\" == \"Not Found\" ]; then\n\t\t\t\t\t\tif [ -f \"$HOME/.minttyrc\" ]; then\n\t\t\t\t\t\t\tgtkFont=\"$(grep '^Font=.*' \"$HOME/.minttyrc\" | grep -o '[0-9A-z ]*$')\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\t;;\n\t\tesac\n\tfi\n\tverboseOut \"Finding GTK2 theme...found as '$gtk2Theme'\"\n\tverboseOut \"Finding GTK3 theme...found as '$gtk3Theme'\"\n\tverboseOut \"Finding icon theme...found as '$gtkIcons'\"\n\tverboseOut \"Finding user font...found as '$gtkFont'\"\n\tif [[ -n \"$gtkBackground\" ]]; then\n\t\tverboseOut \"Finding background...found as '$gtkBackground'\"\n\tfi\n}\n# GTK Theme\\Icon\\Font Detection - END\n\n# Android-specific detections\ndetectdroid () {\n\tdistro_ver=$(getprop ro.build.version.release)\n\thostname=$(getprop net.hostname)\n\tdevice=\"$(getprop ro.product.model) ($(getprop ro.product.device))\"\n\tif [[ $(getprop ro.build.host) == \"cyanogenmod\" ]]; then\n\t\trom=$(getprop ro.cm.version)\n\telse\n\t\trom=$(getprop ro.build.display.id)\n\tfi\n\tbaseband=$(getprop ro.baseband)\n\tcpu=$(\"${AWK}\" -F': ' '/^Processor/ {P=$2} /^Hardware/ {H=$2} END {print H != \"\" ? H : P}' /proc/cpuinfo)\n}\n\n\n#######################\n# End Detection Phase\n#######################\n\ntakeShot () {\n\tif [[ -n \"$screenCommand\" ]]; then\n\t\t$screenCommand\n\telse\n\t\tshotfiles[1]=${shotfile}\n\t\tif [[ \"$distro\" == \"Mac OS X\" || \"$distro\" == \"macOS\" ]]; then\n\t\t\tdisplays=\"$(system_profiler SPDisplaysDataType | grep -c 'Resolution:' | tr -d ' ')\"\n\t\t\tfor (( i=2; i<=displays; i++))\n\t\t\tdo\n\t\t\t\tshotfiles[$i]=\"$(echo ${shotfile} | sed \"s/\\(.*\\)\\./\\1_${i}./\")\"\n\t\t\tdone\n\t\t\tprintf \"Taking shot in 3.. \"; sleep 1\n\t\t\tprintf \"2.. \"; sleep 1\n\t\t\tprintf \"1.. 
\"; sleep 1\n\t\t\tprintf \"0.\\n\"\n\t\t\tscreencapture -x ${shotfiles[@]} &> /dev/null\n\t\telse\n\t\t\tif type -p scrot >/dev/null 2>&1; then\n\t\t\t\tscrot -cd3 \"${shotfile}\"\n\t\t\telse\n\t\t\t\terrorOut \"Cannot take screenshot! \\`scrot' not in \\$PATH\"\n\t\t\tfi\n\t\tfi\n\t\tif [ -f \"${shotfile}\" ]; then\n\t\t\tverboseOut \"Screenshot saved at '${shotfiles[*]}'\"\n\t\t\tif [[ \"${upload}\" == \"1\" ]]; then\n\t\t\t\tif type -p curl >/dev/null 2>&1; then\n\t\t\t\t\tprintf \"${bold}==>${c0}  Uploading your screenshot now...\"\n\t\t\t\t\tcase \"${uploadLoc}\" in\n\t\t\t\t\t\t'teknik')\n\t\t\t\t\t\t\tbaseurl='https://u.teknik.io'\n\t\t\t\t\t\t\tuploadurl='https://api.teknik.io/upload/post'\n\t\t\t\t\t\t\tret=$(curl -sf -F file=\"@${shotfiles[*]}\" ${uploadurl})\n\t\t\t\t\t\t\tdesturl=\"${ret##*url\\\":\\\"}\"\n\t\t\t\t\t\t\tdesturl=\"${desturl%%\\\"*}\"\n\t\t\t\t\t\t\tdesturl=\"${desturl//\\\\}\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t\t'mediacrush')\n\t\t\t\t\t\t\tbaseurl='https://mediacru.sh'\n\t\t\t\t\t\t\tuploadurl='https://mediacru.sh/api/upload/file'\n\t\t\t\t\t\t\tret=$(curl -sf -F file=\"@${shotfiles[*]};type=image/png\" ${uploadurl})\n\t\t\t\t\t\t\tfilehash=$(echo \"${ret}\" | grep 'hash' | cut -d '\"' -f4)\n\t\t\t\t\t\t\tdesturl=\"${baseurl}/${filehash}\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t\t'imgur')\n\t\t\t\t\t\t\tbaseurl='http://imgur.com'\n\t\t\t\t\t\t\tuploadurl='http://imgur.com/upload'\n\t\t\t\t\t\t\tret=$(curl -sf -F file=\"@${shotfiles[*]}\" ${uploadurl})\n\t\t\t\t\t\t\tfilehash=\"${ret##*hash\\\":\\\"}\"\n\t\t\t\t\t\t\tfilehash=\"${filehash%%\\\"*}\"\n\t\t\t\t\t\t\tdesturl=\"${baseurl}/${filehash}\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t\t'hmp')\n\t\t\t\t\t\t\tbaseurl='http://i.hmp.me/m'\n\t\t\t\t\t\t\tuploadurl='http://hmp.me/ap/?uf=1'\n\t\t\t\t\t\t\tret=$(curl -sf -F a=\"@${shotfiles[*]};type=image/png\" 
${uploadurl})\n\t\t\t\t\t\t\tdesturl=\"${ret##*img_path\\\":\\\"}\"\n\t\t\t\t\t\t\tdesturl=\"${desturl%%\\\"*}\"\n\t\t\t\t\t\t\tdesturl=\"${desturl//\\\\}\"\n\t\t\t\t\t\t;;\n\t\t\t\t\t\t'local-example')\n\t\t\t\t\t\t\tbaseurl=\"http://www.example.com\"\n\t\t\t\t\t\t\tserveraddr=\"www.example.com\"\n\t\t\t\t\t\t\tscptimeout=\"20\"\n\t\t\t\t\t\t\tserverdir=\"/path/to/directory\"\n\t\t\t\t\t\t\tscp -qo ConnectTimeout=\"${scptimeout}\" \"${shotfiles[*]}\" \"${serveraddr}:${serverdir}\"\n\t\t\t\t\t\t\tdesturl=\"${baseurl}/${shotfile}\"\n\t\t\t\t\t\t;;\n\t\t\t\t\tesac\n\t\t\t\t\tprintf \"your screenshot can be viewed at ${desturl}\\n\"\n\t\t\t\telse\n\t\t\t\t\terrorOut \"Cannot upload screenshot! \\`curl' not in \\$PATH\"\n\t\t\t\tfi\n\t\t\tfi\n\t\telse\n\t\t\tif type -p scrot >/dev/null 2>&1; then\n\t\t\t\terrorOut \"ERROR: Problem saving screenshot to ${shotfiles[*]}\"\n\t\t\tfi\n\t\tfi\n\tfi\n}\n\n\nasciiText () {\n# Distro logos and ASCII outputs\n\tif [[ \"$asc_distro\" ]]; then\n\t\tmyascii=\"${asc_distro}\"\n\telif [[ \"$art\" ]]; then\n\t\tmyascii=\"custom\"\n\telif [[ \"$fake_distro\" ]]; then\n\t\tmyascii=\"${fake_distro}\"\n\telse\n\t\tmyascii=\"${distro}\"\n\tfi\n\tcase ${myascii} in\n\t\t\"custom\")\n\t\t\tsource \"$art\"\n\t\t;;\n\n\t\t\"ALDOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light grey') # light grey\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"27\"\n\t\t\tfulloutput=(\n\"${c1}                           %s\"\n\"${c1}           # ## #          %s\"\n\"${c1}        # ######## #       %s\"\n\"${c1}      # ### ######## #     %s\"\n\"${c1}     # #### ######### #    %s\"\n\"${c1}   # #### # # # # #### #   %s\"\n\"${c1}  # ##### #       ##### #  %s\"\n\"${c1}   # ###### ##### #### #   %s\"\n\"${c1}    # ############### #    %s\"\n\"${c1}                           %s\"\n\"${c2}        _ ___   ___  ___   %s\"\n\"${c2}   __ _| |   \\ / _ 
\\/ __|  %s\"\n\"${c2}  / _' | | |) | (_) \\__ \\  %s\"\n\"${c2}  \\__,_|_|___/ \\___/|___/  %s\"\n\"${c1}                           %s\"\n\"${c1}                           %s\")\n\t\t;;\n\n\t\t\"Alpine Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light\n\t\t\t\tc2=$(getColor 'blue') # Dark\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"34\"\n\t\t\tfulloutput=(\n\"${c1}        ................          %s\"\n\"${c1}       ∴::::::::::::::::∴         %s\"\n\"${c1}      ∴::::::::::::::::::∴        %s\"\n\"${c1}     ∴::::::::::::::::::::∴       %s\"\n\"${c1}    ∴:::::::. :::::':::::::∴      %s\"\n\"${c1}   ∴:::::::.   ;::; ::::::::∴     %s\"\n\"${c1}  ∴::::::;      ∵     :::::::∴    %s\"\n\"${c1} ∴:::::.     .         .::::::∴   %s\"\n\"${c1} ::::::     :::.    .    ::::::   %s\"\n\"${c1} ∵::::     ::::::.  ::.   ::::∵   %s\"\n\"${c1}  ∵:..   .:;::::::: :::.  :::∵    %s\"\n\"${c1}   ∵::::::::::::::::::::::::∵     %s\"\n\"${c1}    ∵::::::::::::::::::::::∵      %s\"\n\"${c1}     ∵::::::::::::::::::::∵       %s\"\n\"${c1}      ::::::::::::::::::::        %s\"\n\"${c1}       ∵::::::::::::::::∵         %s\")\n\t\t;;\n\n\t\t\"AlmaLinux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'red') # White\n\t\t\t\tc2=$(getColor 'light orange') # Light Red\n\t\t\t\tc3=$(getColor 'purple')\n\t\t\t\tc4=$(getColor 'green')\n\t\t\t\tc5=$(getColor 'cyan')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then\n\t\t\t\tc1=\"${my_lcolor}\"\n\t\t\t\tc2=\"${my_lcolor}\"\n\t\t\t\tc3=\"${my_lcolor}\"\n\t\t\t\tc4=\"${my_lcolor}\"\n\t\t\t\tc5=\"${my_lcolor}\"\n\t\t\tfi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"40\"\n\t\t\tfulloutput=(\n\"${c1}         'c:.                             %s\"\n\"${c1}        lkkkx, ..       ${c2}..   
,cc,         %s\"\n\"${c1}        okkkk:ckkx'  ${c2}.lxkkx.okkkkd        %s\"\n\"${c1}        .:llcokkx'  ${c2}:kkkxkko:xkkd,        %s\"\n\"${c1}      .xkkkkdood:  ${c2};kx,  .lkxlll;         %s\"\n\"${c1}       xkkx.       ${c2}xk'     xkkkkk:        %s\"\n\"${c1}       'xkx.       ${c2}xd      .....,.        %s\"\n\"${c3}      .. ${c1}:xkl'     ${c2}:c      ..''..         %s\"\n\"${c3}    .dkx'  ${c1}.:ldl:'. ${c2}'  ${c4}':lollldkkxo;      %s\"\n\"${c3}  .''lkkko'                     ${c4}ckkkx.    %s\"\n\"${c3}'xkkkd:kkd.       ..  ${c5};'        ${c4}:kkxo.    %s\"\n\"${c3},xkkkd;kk'      ,d;    ${c5}ld.   ${c4}':dkd::cc,   %s\"\n\"${c3} .,,.;xkko'.';lxo.      ${c5}dx,  ${c4}:kkk'xkkkkc  %s\"\n\"${c3}     'dkkkkkxo:.        ${c5};kx  ${c4}.kkk:;xkkd.  %s\"\n\"${c3}       .....   ${c5}.;dk:.   ${c5}lkk.  ${c4}:;,         %s\"\n\"             ${c5}:kkkkkkkdoxkkx    %s\"\n\"              ${c5},c,,;;;:xkkd.    %s\"\n\"                ${c5};kkkkl...      %s\"\n\"                ${c5};kkkkl         %s\"\n\"                 ${c5},od; %s\")\n\t\t;;\n\n\t\t\"Arch Linux - Old\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"37\"\n\t\t\tfulloutput=(\n\"${c1}              __                     %s\"\n\"${c1}          _=(SDGJT=_                 %s\"\n\"${c1}        _GTDJHGGFCVS)                %s\"\n\"${c1}       ,GTDJGGDTDFBGX0               %s\"\n\"${c1}      JDJDIJHRORVFSBSVL${c2}-=+=,_        %s\"\n\"${c1}     IJFDUFHJNXIXCDXDSV,${c2}  \\\"DEBL      %s\"\n\"${c1}    [LKDSDJTDU=OUSCSBFLD.${c2}   '?ZWX,   %s\"\n\"${c1}   ,LMDSDSWH'     \\`DCBOSI${c2}     DRDS], %s\"\n\"${c1}   SDDFDFH'         !YEWD,${c2}   )HDROD  %s\"\n\"${c1}  !KMDOCG            &GSU|${c2}\\_GFHRGO\\'  %s\"\n\"${c1}  HKLSGP'${c2}           __${c1}\\TKM0${c2}\\GHRBV)'  
%s\"\n\"${c1} JSNRVW'${c2}       __+MNAEC${c1}\\IOI,${c2}\\BN'     %s\"\n\"${c1} HELK['${c2}    __,=OFFXCBGHC${c1}\\FD)         %s\"\n\"${c1} ?KGHE ${c2}\\_-#DASDFLSV='${c1}    'EF         %s\"\n\"${c1} 'EHTI                    !H         %s\"\n\"${c1}  \\`0F'                    '!         %s\"\n\"${c1}                                     %s\"\n\"${c1}                                     %s\")\n\t\t;;\n\n\t\t\"Arch Linux\"|\"Arch Linux 32\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light cyan') # Light\n\t\t\t\tc2=$(getColor 'cyan') # Dark\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c1}                   -\\`                 \"\n\"${c1}                  .o+\\`                %s\"\n\"${c1}                 \\`ooo/                %s\"\n\"${c1}                \\`+oooo:               %s\"\n\"${c1}               \\`+oooooo:              %s\"\n\"${c1}               -+oooooo+:             %s\"\n\"${c1}             \\`/:-:++oooo+:            %s\"\n\"${c1}            \\`/++++/+++++++:           %s\"\n\"${c1}           \\`/++++++++++++++:          %s\"\n\"${c1}          \\`/+++o${c2}oooooooo${c1}oooo/\\`        %s\"\n\"${c2}         ${c1}./${c2}ooosssso++osssssso${c1}+\\`       %s\"\n\"${c2}        .oossssso-\\`\\`\\`\\`/ossssss+\\`      %s\"\n\"${c2}       -osssssso.      :ssssssso.     %s\"\n\"${c2}      :osssssss/        osssso+++.    %s\"\n\"${c2}     /ossssssss/        +ssssooo/-    %s\"\n\"${c2}   \\`/ossssso+/:-        -:/+osssso+-  %s\"\n\"${c2}  \\`+sso+:-\\`                 \\`.-/+oso: %s\"\n\"${c2} \\`++:.                           
\\`-/+/%s\"\n\"${c2} .\\`                                 \\`/%s\")\n\t\t;;\n\n\t\t\"Artix Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'cyan')\n\t\t\t\tc2=$(getColor 'blue')\n\t\t\t\tc3=$(getColor 'green')\n\t\t\t\tc4=$(getColor 'dark gray')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then\n\t\t\t\tc1=\"${my_lcolor}\"\n\t\t\t\tc2=\"${my_lcolor}\"\n\t\t\t\tc3=\"${my_lcolor}\"\n\t\t\t\tc4=\"${my_lcolor}\"\n\t\t\tfi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\"\"\n\"${c1}                        d${c2}c.           %s\"\n\"${c1}                       x${c2}dc.           %s\"\n\"${c1}                  '.${c4}.${c1} d${c2}dlc.           %s\"\n\"${c1}                 c${c2}0d:${c1}o${c2}xllc;           %s\"\n\"${c1}                :${c2}0ddlolc,lc,          %s\"\n\"${c1}           :${c1}ko${c4}.${c1}:${c2}0ddollc..dlc.         %s\"\n\"${c1}          ;${c1}K${c2}kxoOddollc'  cllc.        %s\"\n\"${c1}         ,${c1}K${c2}kkkxdddllc,   ${c4}.${c2}lll:        %s\"\n\"${c1}        ,${c1}X${c2}kkkddddlll;${c3}...';${c1}d${c2}llll${c3}dxk:   %s\"\n\"${c1}       ,${c1}X${c2}kkkddddllll${c3}oxxxddo${c2}lll${c3}oooo,   %s\"\n\"${c3}    xxk${c1}0${c2}kkkdddd${c1}o${c2}lll${c1}o${c3}ooooooolooooc;${c1}.   %s\"\n\"${c3}    ddd${c2}kkk${c1}d${c2}ddd${c1}ol${c2}lc:${c3}:;,'.${c3}... .${c2}lll;     %s\"\n\"${c1}   .${c3}xd${c1}x${c2}kk${c1}xd${c2}dl${c1}'cl:${c4}.           ${c2}.llc,    %s\"\n\"${c1}   .${c1}0${c2}kkkxddl${c4}. ${c2};'${c4}.             ${c2};llc.   %s\"\n\"${c1}  .${c1}K${c2}Okdcddl${c4}.                   ${c2}cllc${c4}.  %s\"\n\"${c1}  0${c2}Okd''dc.                    .cll;  %s\"\n\"${c1} k${c2}Okd'                          .llc, %s\"\n\"${c1} d${c2}Od,                            'lc. %s\"\n\"${c1} :,${c4}.                              ${c2}... 
%s\"\n\"                                                   %s\")\n\t\t;;\n\n\t\t\"blackPanther OS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'yellow') # Bold Yellow\n\t\t\t\tc2=$(getColor 'white') # White\n\t\t\t\tc3=$(getColor 'light red') # Light Red\n\t\t\t\tc4=$(getColor 'dark grey')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then\n\t\t\t\tc1=\"${my_lcolor}\"\n\t\t\t\tc2=\"${my_lcolor}\"\n\t\t\t\tc3=\"${my_lcolor}\"\n\t\t\t\tc4=\"${my_lcolor}\"\n\t\t\tfi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c4}                oxoo              %s\"\n\"${c4}           ooooooxxxxxxxx         %s\"\n\"${c4}      oooooxxxxxxxxxx${c3}O${c1}o.${c4}xx        %s\"\n\"${c4}    oo# ###xxxxxxxxxxx###xxx      %s\"\n\"${c4}  oo .oooooxxxxxxxxx##   #oxx     %s\"\n\"${c4} o  ##xxxxxxxxx###x##   .o###     %s\"\n\"${c4}  .oxxxxxxxx###   ox  .           %s\"\n\"${c4} ooxxxx#xxxxxx     o##            %s\"\n\"${c4}.oxx# #oxxxxx#                    %s\"\n\"${c4}ox#  ooxxxxxx#                  o %s\"\n\"${c4}x#  ooxxxxxxxx           ox     ox%s\"\n\"${c4}x# .oxxxxxxxxxxx        o#     oox%s\"\n\"${c4}#  oxxxxx##xxxxxxooooooo#      o# %s\"\n\"${c4}  .oxxxxxooxxxxxx######       ox# %s\"\n\"${c4}  oxxxxxo oxxxxxxxx         oox## %s\"\n\"${c4}  oxxxxxx  oxxxxxxxxxo   oooox##  %s\"\n\"${c4}   o#xxxxx  oxxxxxxxxxxxxxxxx##   %s\"\n\"${c4}    ##xxxxx  o#xxxxxxxxxxxxx##    %s\"\n\"${c4}      ##xxxx   o#xxxxxxxxx##      %s\"\n\"${c4}         ###xo.  
o##xxx###        %s\"\n\"${c4}                                  %s\")\n\t\t;;\n\n\t\t\"ArcoLinux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'arco_blue') # dark\n\t\t\t\tc2=$(getColor 'white') # light\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"41\"\n\t\t\tfulloutput=(\n\"${c1}                    /-                   \"\n\"${c1}                   ooo:                  %s\"\n\"${c1}                  yoooo/                 %s\"\n\"${c1}                 yooooooo                %s\"\n\"${c1}                yooooooooo               %s\"\n\"${c1}               yooooooooooo              %s\"\n\"${c1}             .yooooooooooooo             %s\"\n\"${c1}            .oooooooooooooooo            %s\"\n\"${c1}           .oooooooarcoooooooo           %s\"\n\"${c1}          .ooooooooo-oooooooooo          %s\"\n\"${c1}         .ooooooooo-  oooooooooo         %s\"\n\"${c1}        :ooooooooo.    :ooooooooo        %s\"\n\"${c1}       :ooooooooo.      :ooooooooo       %s\"\n\"${c1}      :oooarcooo         .oooarcooo      %s\"\n\"${c1}     :ooooooooy           .ooooooooo     %s\"\n\"${c1}    :ooooooooo   ${c2}/ooooooooooooooooooo${c1}    %s\"\n\"${c1}   :ooooooooo      ${c2}.-ooooooooooooooooo.${c1}  %s\"\n\"${c1}  ooooooooo-             ${c2}-ooooooooooooo.${c1} %s\"\n\"${c1} ooooooooo-                 ${c2}.-oooooooooo.${c1}%s\"\n\"${c1}ooooooooo.                     
${c2}-ooooooooo${c1}%s\")\n\t\t;;\n\n\t\t\"Alter Linux\"|\"Alter\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light cyan') # Light\n\t\t\t\tc2=$(getColor 'cyan') # Dark\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c1}                      &,                       \"\n\"${c1}                    ^WWWw                      %s\"\n\"${c1}                   !wwwwww                     %s\"\n\"${c1}                  !wwwwwwww                    %s\"\n\"${c1}                 #wwwwwwwwww                   %s\"\n\"${c1}                @wwwwwwwwwwww                  %s\"\n\"${c1}               wwwwwwwwwwwwwww                 %s\"\n\"${c1}              wwwwwwwwwwwwwwwww                %s\"\n\"${c1}             wwwwwwwwwwwwwwwwwww               %s\"\n\"${c1}            wwwwwwwwwwwwwwwwwwww,              %s\"\n\"${c1}           w~1i.wwwwwwwwwwwwwwwww,             %s\"\n\"${c1}         3~:~1lli.wwwwwwwwwwwwwwww.            %s\"\n\"${c1}        :~~:~?ttttzwwwwwwwwwwwwwwww            %s\"\n\"${c1}       #<~:~~~~?llllltO-.wwwwwwwwwww           %s\"\n\"${c1}      #~:~~:~:~~?ltlltlttO-.wwwwwwwww          %s\"\n\"${c1}     @~:~~:~:~:~~(zttlltltlOda.wwwwwww         %s\"\n\"${c1}    @~:~~: ~:~~:~:(zltlltlO    a,wwwwww        %s\"\n\"${c1}   8~~:~~:~~~~:~~~~_1ltltu          ,www       %s\"\n\"${c1}  5~~:~~:~~:~~:~~:~~~_1ltq             N,,     %s\"\n\"${c1} g~:~~:~~~:~~:~~:~:~~~~1q                N,    %s\" )\n\t\t;;\n\n\t\t\"Mint\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light green') # Bold Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c2}                                      %s\"\n\"${c2} MMMMMMMMMMMMMMMMMMMMMMMMMmds+.       
%s\"\n\"${c2} MMm----::-://////////////oymNMd+\\`    %s\"\n\"${c2} MMd      ${c1}/++                ${c2}-sNMd:   %s\"\n\"${c2} MMNso/\\`  ${c1}dMM    \\`.::-. .-::.\\` ${c2}.hMN:  %s\"\n\"${c2} ddddMMh  ${c1}dMM   :hNMNMNhNMNMNh: ${c2}\\`NMm  %s\"\n\"${c2}     NMm  ${c1}dMM  .NMN/-+MMM+-/NMN\\` ${c2}dMM  %s\"\n\"${c2}     NMm  ${c1}dMM  -MMm  \\`MMM   dMM. ${c2}dMM  %s\"\n\"${c2}     NMm  ${c1}dMM  -MMm  \\`MMM   dMM. ${c2}dMM  %s\"\n\"${c2}     NMm  ${c1}dMM  .mmd  \\`mmm   yMM. ${c2}dMM  %s\"\n\"${c2}     NMm  ${c1}dMM\\`  ..\\`   ...   ydm. ${c2}dMM  %s\"\n\"${c2}     hMM- ${c1}+MMd/-------...-:sdds  ${c2}dMM  %s\"\n\"${c2}     -NMm- ${c1}:hNMNNNmdddddddddy/\\`  ${c2}dMM  %s\"\n\"${c2}      -dMNs-${c1}\\`\\`-::::-------.\\`\\`    ${c2}dMM  %s\"\n\"${c2}       \\`/dMNmy+/:-------------:/yMMM  %s\"\n\"${c2}          ./ydNMMMMMMMMMMMMMMMMMMMMM  %s\"\n\"${c2}             \\.MMMMMMMMMMMMMMMMMMM    %s\"\n\"${c2}                                      %s\")\n\t\t;;\n\n\t\t\"LMDE\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light green') # Bold Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"31\"\n\t\t\tfulloutput=(\n\"${c1}          \\`.-::---..           %s\"\n\"${c2}       .:++++ooooosssoo:.      %s\"\n\"${c2}     .+o++::.      \\`.:oos+.    %s\"\n\"${c2}    :oo:.\\`             -+oo${c1}:   %s\"\n\"${c2}  ${c1}\\`${c2}+o/\\`    .${c1}::::::${c2}-.    .++-${c1}\\`  %s\"\n\"${c2} ${c1}\\`${c2}/s/    .yyyyyyyyyyo:   +o-${c1}\\`  %s\"\n\"${c2} ${c1}\\`${c2}so     .ss       ohyo\\` :s-${c1}:  %s\"\n\"${c2} ${c1}\\`${c2}s/     .ss  h  m  myy/ /s\\`${c1}\\`  %s\"\n\"${c2} \\`s:     \\`oo  s  m  Myy+-o:\\`   %s\"\n\"${c2} \\`oo      :+sdoohyoydyso/.     %s\"\n\"${c2}  :o.      
.:////////++:       %s\"\n\"${c2}  \\`/++        ${c1}-:::::-          %s\"\n\"${c2}   ${c1}\\`${c2}++-                        %s\"\n\"${c2}    ${c1}\\`${c2}/+-                       %s\"\n\"${c2}      ${c1}.${c2}+/.                     %s\"\n\"${c2}        ${c1}.${c2}:+-.                  %s\"\n\"${c2}           \\`--.\\`\\`              %s\"\n\"${c2}                               %s\")\n\t\t;;\n\n\t\t\"Quirinux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'purple') # Purple\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"31\"\n         fulloutput=(\n\"$c2          @=++++++++++=@            %s\"\n\"$c2       =++++++++++++++++++=        %s\"\n\"$c2     *++++++++++++++++++++++*      %s\"\n\"$c2   =++++++++++++++++++++++++++=    %s\"\n\"$c2  *++++++++$c1-..........-$c2++++++++*   %s\"\n\"$c2 =++++++++$c1..............$c2++++++++=  %s\"\n\"$c2@++++++++$c1:.....$c2:++$c1:.....:$c2++++++++@ %s\"\n\"$c2=++++++++$c1:.....$c2++++$c1.....:$c2++++++++= %s\"\n\"$c2=++++++++$c1:.....$c2++++$c1.....:$c2++++++++= %s\"\n\"$c2#++++++++$c1:.....$c2++++$c1.....:$c2++++++++# %s\"\n\"$c2 +++++++++$c1......$c2--$c1......$c2+++++++++  %s\"\n\"$c2 @++++++++$c1:............:$c2++++++++@  %s\"\n\"$c2  @+++++++++++$c1-....-$c2+++++++++++@   %s\"\n\"$c2    *++++++++++$c1::::$c2++++++++++*     %s\"\n\"$c2      *++++++++++++++++++++*       %s\"\n\"$c2        @*++++++++++++++*@         %s\"\n\"$c2             @#====#@ %s\")\n\t\t;;\t\t\n\n\t\t\"Ubuntu\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\t\tc3=$(getColor 'yellow') # Bold Yellow\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c2}                         
 ./+o+-      %s\"\n\"${c1}                  yyyyy- ${c2}-yyyyyy+     %s\"\n\"${c1}               ${c1}://+//////${c2}-yyyyyyo     %s\"\n\"${c3}           .++ ${c1}.:/++++++/-${c2}.+sss/\\`     %s\"\n\"${c3}         .:++o:  ${c1}/++++++++/:--:/-     %s\"\n\"${c3}        o:+o+:++.${c1}\\`..\\`\\`\\`.-/oo+++++/    %s\"\n\"${c3}       .:+o:+o/.${c1}          \\`+sssoo+/   %s\"\n\"${c1}  .++/+:${c3}+oo+o:\\`${c1}             /sssooo.  %s\"\n\"${c1} /+++//+:${c3}\\`oo+o${c1}               /::--:.  %s\"\n\"${c1} \\+/+o+++${c3}\\`o++o${c2}               ++////.  %s\"\n\"${c1}  .++.o+${c3}++oo+:\\`${c2}             /dddhhh.  %s\"\n\"${c3}       .+.o+oo:.${c2}          \\`oddhhhh+   %s\"\n\"${c3}        \\+.++o+o\\`${c2}\\`-\\`\\`\\`\\`.:ohdhhhhh+    %s\"\n\"${c3}         \\`:o+++ ${c2}\\`ohhhhhhhhyo++os:     %s\"\n\"${c3}           .o:${c2}\\`.syhhhhhhh/${c3}.oo++o\\`     %s\"\n\"${c2}               /osyyyyyyo${c3}++ooo+++/    %s\"\n\"${c2}                   \\`\\`\\`\\`\\` ${c3}+oo+++o\\:    %s\"\n\"${c3}                          \\`oo++.      %s\")\n\t\t;;\n\n\t\t\"KDE neon\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light green') # Bold Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"43\"\n\t\t\tfulloutput=(\n\"${c1}              \\`..---+/---..\\`               %s\"\n\"${c1}          \\`---.\\`\\`   \\`\\`   \\`.---.\\`           %s\"\n\"${c1}       .--.\\`        \\`\\`        \\`-:-.        %s\"\n\"${c1}     \\`:/:     \\`.----//----.\\`     :/-       %s\"\n\"${c1}    .:.    \\`---\\`          \\`--.\\`    .:\\`     %s\"\n\"${c1}   .:\\`   \\`--\\`                .:-    \\`:.    %s\"\n\"${c1}  \\`/    \\`:.      \\`.-::-.\\`      -:\\`   \\`/\\`   %s\"\n\"${c1}  /.    /.     \\`:++++++++:\\`     .:    .:   %s\"\n\"${c1} \\`/    .:     \\`+++++++++++/      /\\`   \\`+\\`  %s\"\n\"${c1} /+\\`   --     .++++++++++++\\`     :.   
.+:  %s\"\n\"${c1} \\`/    .:     \\`+++++++++++/      /\\`   \\`+\\`  %s\"\n\"${c1}  /\\`    /.     \\`:++++++++:\\`     .:    .:   %s\"\n\"${c1}  ./    \\`:.      \\`.:::-.\\`      -:\\`   \\`/\\`   %s\"\n\"${c1}   .:\\`   \\`--\\`                .:-    \\`:.    %s\"\n\"${c1}    .:.    \\`---\\`          \\`--.\\`    .:\\`     %s\"\n\"${c1}     \\`:/:     \\`.----//----.\\`     :/-       %s\"\n\"${c1}       .-:.\\`        \\`\\`        \\`-:-.        %s\"\n\"${c1}          \\`---.\\`\\`   \\`\\`   \\`.---.\\`           %s\"\n\"${c1}              \\`..---+/---..\\`               %s\")\n\t\t;;\n\n\t\t\"Debian\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"32\"\n\t\t\tfulloutput=(\n\"${c1}         _,met\\$\\$\\$\\$\\$gg.          %s\"\n\"${c1}      ,g\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$P.       %s\"\n\"${c1}    ,g\\$\\$P\\\"\\\"       \\\"\\\"\\\"Y\\$\\$.\\\".     %s\"\n\"${c1}   ,\\$\\$P'              \\`\\$\\$\\$.     %s\"\n\"${c1}  ',\\$\\$P       ,ggs.     \\`\\$\\$b:   %s\"\n\"${c1}  \\`d\\$\\$'     ,\\$P\\\"\\'   ${c2}.${c1}    \\$\\$\\$    %s\"\n\"${c1}   \\$\\$P      d\\$\\'     ${c2},${c1}    \\$\\$P    %s\"\n\"${c1}   \\$\\$:      \\$\\$.   ${c2}-${c1}    ,d\\$\\$'    %s\"\n\"${c1}   \\$\\$\\;      Y\\$b._   _,d\\$P'     %s\"\n\"${c1}   Y\\$\\$.    ${c2}\\`.${c1}\\`\\\"Y\\$\\$\\$\\$P\\\"'         %s\"\n\"${c1}   \\`\\$\\$b      ${c2}\\\"-.__              %s\"\n\"${c1}    \\`Y\\$\\$                        %s\"\n\"${c1}     \\`Y\\$\\$.                      %s\"\n\"${c1}       \\`\\$\\$b.                    %s\"\n\"${c1}         \\`Y\\$\\$b.                 
%s\"\n\"${c1}            \\`\\\"Y\\$b._             %s\"\n\"${c1}                \\`\\\"\\\"\\\"\\\"           %s\"\n\"${c1}                                %s\")\n\t\t;;\n\n\t\t\"Proxmox VE\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white')\n\t\t\t\tc2=$(getColor 'orange')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"48\"\n\t\t\tfulloutput=(\n\"${c1}           .://:\\`              \\`://:.           %s\"\n\"${c1}         \\`hMMMMMMd/          /dMMMMMMh\\`         %s\"\n\"${c1}          \\`sMMMMMMMd:      :mMMMMMMMs\\`          %s\"\n\"${c2}  \\`-/+oo+/:${c1}\\`.yMMMMMMMh-  -hMMMMMMMy.\\`${c2}:/+oo+/-\\`  %s\"\n\"${c2}  \\`:oooooooo/${c1}\\`-hMMMMMMMyyMMMMMMMh-\\`${c2}/oooooooo:\\`  %s\"\n\"${c2}    \\`/oooooooo:${c1}\\`:mMMMMMMMMMMMMm:\\`${c2}:oooooooo/\\`    %s\"\n\"${c2}      ./ooooooo+-${c1} +NMMMMMMMMN+ ${c2}-+ooooooo/.      %s\"\n\"${c2}        .+ooooooo+-${c1}\\`oNMMMMNo\\`${c2}-+ooooooo+.        %s\"\n\"${c2}          -+ooooooo/.${c1}\\`sMMs\\`${c2}./ooooooo+-          %s\"\n\"${c2}            :oooooooo/${c1}\\`..\\`${c2}/oooooooo:            %s\"\n\"${c2}            :oooooooo/\\`${c1}..${c2}\\`/oooooooo:            %s\"\n\"${c2}          -+ooooooo/.${c1}\\`sMMs${c2}\\`./ooooooo+-          %s\"\n\"${c2}        .+ooooooo+-\\`${c1}oNMMMMNo${c2}\\`-+ooooooo+.        %s\"\n\"${c2}      ./ooooooo+-${c1} +NMMMMMMMMN+ ${c2}-+ooooooo/.      
%s\"\n\"${c2}    \\`/oooooooo:\\`${c1}:mMMMMMMMMMMMMm:${c2}\\`:oooooooo/\\`    %s\"\n\"${c2}  \\`:oooooooo/\\`${c1}-hMMMMMMMyyMMMMMMMh-${c2}\\`/oooooooo:\\`  %s\"\n\"${c2}  \\`-/+oo+/:\\`${c1}.yMMMMMMMh-  -hMMMMMMMy.${c2}\\`:/+oo+/-\\`  %s\"\n\"${c1}          \\`sMMMMMMMm:      :dMMMMMMMs          %s\"\n\"${c1}         \\`hMMMMMMd/          /dMMMMMMh         %s\"\n\"${c1}           \\`://:\\`              \\`://:\\`           %s\")\n\t\t;;\n\n\t\t\"Siduction\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"35\"\n\t\t\tfulloutput=(\n\"${c1}               _ass,,                %s\"\n\"${c1}              jmk  dm.               %s\"\n\"${c1}              3##qwm#\\`               %s\"\n\"${c1}          .    \\\"9XZ?\\` _aas,          %s\"\n\"${c1}        ap!!n,      _dW(--\\$a         %s\"\n\"${c1}       )#hc_m#      ]mmwaam#\\`        %s\"\n\"${c1}        ?##WZ^      -4#####! _as,.   %s\"\n\"${c1}  _ais,   -   _au11a. -\\\"\\\"\\\" <m#\\\"\\\"\\\"Wc  %s\"\n\"${c1} )m6_]m,      m#c__m6     :m#m,_<m#> %s\"\n\"${c1} -Y#m#!       4###m#r     -\\$##mBm#Z\\` %s\"\n\"${c1}    -    _as,. \\\"???~ _aawa,.!S##Z?\\`  %s\"\n\"${c1}        ym= 3h.     <##' -Wo         %s\"\n\"${c1}        \\$#mm#D\\`     ]B#qww##         %s\"\n\"${c1}         \\\"?!\\\"\\`  _s,.-?#m##T'         %s\"\n\"${c1}              _dZ\\\"\\\"4a  --            %s\"\n\"${c1}              3Wmaam#;               %s\"\n\"${c1}              -9###Z!                
%s\")\n\n\t\t;;\n\n\t\t\"Devuan\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light purple') # Light purple\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"36\"\n\t\t\tfulloutput=(\n\"${c1}                                    %s\"\n\"${c1}     ..,,;;;::;,..                  %s\"\n\"${c1}             \\`':ddd;:,.             %s\"\n\"${c1}                   \\`'dPPd:,.        %s\"\n\"${c1}                       \\`:b\\$\\$b\\`.     %s\"\n\"${c1}                          'P\\$\\$\\$d\\`   %s\"\n\"${c1}                           .\\$\\$\\$\\$\\$\\`  %s\"\n\"${c1}                           ;\\$\\$\\$\\$\\$P  %s\"\n\"${c1}                        .:P\\$\\$\\$\\$\\$\\$\\`  %s\"\n\"${c1}                    .,:b\\$\\$\\$\\$\\$\\$\\$;'   %s\"\n\"${c1}               .,:dP\\$\\$\\$\\$\\$\\$\\$\\$b:'     %s\"\n\"${c1}        .,:;db\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$Pd'\\`        %s\"\n\"${c1}   ,db\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$b:'\\`            %s\"\n\"${c1}  :\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$b:'\\`                 %s\"\n\"${c1}   \\`\\$\\$\\$\\$\\$bd:''\\`                     %s\"\n\"${c1}     \\`'''\\`                          %s\"\n\"${c1}                                    %s\")\n\t\t;;\n\n\t\t\"Raspbian\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light green') # Light Green\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"32\"\n\t\t\tfulloutput=(\n\"${c1}    .',;:cc;,'.    .,;::c:,,.   %s\"\n\"${c1}   ,ooolcloooo:  'oooooccloo:   %s\"\n\"${c1}   .looooc;;:ol  :oc;;:ooooo'   %s\"\n\"${c1}     ;oooooo:      ,ooooooc.    %s\"\n\"${c1}       .,:;'.       .;:;'.      %s\"\n\"${c2}       .dQ. .d0Q0Q0. '0Q.       %s\"\n\"${c2}     .0Q0'   'Q0Q0Q'  'Q0Q.     %s\"\n\"${c2}     ''  .odo.    .odo.  
''     %s\"\n\"${c2}    .  .0Q0Q0Q'  .0Q0Q0Q.  .    %s\"\n\"${c2}  ,0Q .0Q0Q0Q0Q  'Q0Q0Q0b. 0Q.  %s\"\n\"${c2}  :Q0  Q0Q0Q0Q    'Q0Q0Q0  Q0'  %s\"\n\"${c2}  '0    '0Q0' .0Q0. '0'    'Q'  %s\"\n\"${c2}    .oo.     .0Q0Q0.    .oo.    %s\"\n\"${c2}    'Q0Q0.  '0Q0Q0Q0. .Q0Q0b    %s\"\n\"${c2}     'Q0Q0.  '0Q0Q0' .d0Q0Q'    %s\"\n\"${c2}      'Q0Q'    ..    '0Q.'      %s\"\n\"${c2}            .0Q0Q0Q.            %s\"\n\"${c2}             '0Q0Q'             %s\")\n\t\t;;\n\n\t\t\"CrunchBang\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c1}                                      %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}  ████████████████████████████   ███  %s\"\n\"${c1}  ████████████████████████████   ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}  ████████████████████████████   ███  %s\"\n\"${c1}  ████████████████████████████   ███  %s\"\n\"${c1}         ███        ███               %s\"\n\"${c1}         ███        ███               %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}         ███        ███          ███  %s\"\n\"${c1}                                      %s\")\n\t\t;;\n\n\t\t\"CRUX\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light cyan')\n\t\t\t\tc2=$(getColor 'yellow')\n\t\t\t\tc3=$(getColor 'white')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; 
fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"27\"\n\t\t\tfulloutput=(\"\"\n\"${c1}          odddd            \"\n\"${c1}       oddxkkkxxdoo        %s\"\n\"${c1}      ddcoddxxxdoool       %s\"\n\"${c1}      xdclodod  olol       %s\"\n\"${c1}      xoc  xdd  olol       %s\"\n\"${c1}      xdc  ${c2}k00${c1}Okdlol       %s\"\n\"${c1}      xxd${c2}kOKKKOkd${c1}ldd       %s\"\n\"${c1}      xdco${c2}xOkdlo${c1}dldd       %s\"\n\"${c1}      ddc:cl${c2}lll${c1}oooodo      %s\"\n\"${c1}    odxxdd${c3}xkO000kx${c1}ooxdo    %s\"\n\"${c1}   oxdd${c3}x0NMMMMMMWW0od${c1}kkxo  %s\"\n\"${c1}  oooxd${c3}0WMMMMMMMMMW0o${c1}dxkx  %s\"\n\"${c1} docldkXW${c3}MMMMMMMWWN${c1}Odolco  %s\"\n\"${c1} xx${c2}dx${c1}kxxOKN${c3}WMMWN${c1}0xdoxo::c  %s\"\n\"${c2} xOkkO${c1}0oo${c3}odOW${c2}WW${c1}XkdodOxc:l  %s\"\n\"${c2} dkkkxkkk${c3}OKX${c2}NNNX0Oxx${c1}xc:cd  %s\"\n\"${c2}  odxxdx${c3}xllod${c2}ddooxx${c1}dc:ldo  %s\"\n\"${c2}    lodd${c1}dolccc${c2}ccox${c1}xoloo    %s\"\n\"${c1}                           %s\")\n\t\t;;\n\n\t\t\"Chrome OS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'green') # Green\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\t\tc3=$(getColor 'yellow') # Bold Yellow\n\t\t\t\tc4=$(getColor 'light blue') # Light Blue\n\t\t\t\tc5=$(getColor 'white') # White\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then\n\t\t\t\tc1=\"${my_lcolor}\"\n\t\t\t\tc2=\"${my_lcolor}\"\n\t\t\t\tc3=\"${my_lcolor}\"\n\t\t\t\tc4=\"${my_lcolor}\"\n\t\t\t\tc5=\"${my_lcolor}\"\n\t\t\tfi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c2}             .,:loool:,.              %s\"\n\"${c2}         .,coooooooooooooc,.          %s\"\n\"${c2}      .,lllllllllllllllllllll,.       %s\"\n\"${c2}     ;ccccccccccccccccccccccccc;      %s\"\n\"${c1}   '${c2}ccccccccccccccccccccccccccccc.    %s\"\n\"${c1}  ,oo${c2}c::::::::okO${c5}000${c3}0OOkkkkkkkkkkk:   %s\"\n\"${c1} .ooool${c2};;;;:x${c5}K0${c4}kxxxxxk${c5}0X${c3}K0000000000.  
%s\"\n\"${c1} :oooool${c2};,;O${c5}K${c4}ddddddddddd${c5}KX${c3}000000000d  %s\"\n\"${c1} lllllool${c2};l${c5}N${c4}dllllllllllld${c5}N${c3}K000000000  %s\"\n\"${c1} lllllllll${c2}o${c5}M${c4}dccccccccccco${c5}W${c3}K000000000  %s\"\n\"${c1} ;cllllllllX${c5}X${c4}c:::::::::c${c5}0X${c3}000000000d  %s\"\n\"${c1} .ccccllllllO${c5}Nk${c4}c;,,,;cx${c5}KK${c3}0000000000.  %s\"\n\"${c1}  .cccccclllllxOO${c5}OOO${c1}Okx${c3}O0000000000;   %s\"\n\"${c1}   .:ccccccccllllllllo${c3}O0000000OOO,    %s\"\n\"${c1}     ,:ccccccccclllcd${c3}0000OOOOOOl.     %s\"\n\"${c1}       '::ccccccccc${c3}dOOOOOOOkx:.       %s\"\n\"${c1}         ..,::cccc${c3}xOOOkkko;.          %s\"\n\"${c1}             ..,:${c3}dOkxl:.              %s\")\n\t\t;;\n\n\t\t\"DesaOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light green') # Light Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"33\"\n\t\t\tfulloutput=(\n\"${c1} ███████████████████████         %s\"\n\"${c1} ███████████████████████         %s\"\n\"${c1} ███████████████████████         %s\"\n\"${c1} ███████████████████████         %s\"\n\"${c1} ████████               ███████  %s\"\n\"${c1} ████████               ███████  %s\"\n\"${c1} ████████               ███████  %s\"\n\"${c1} ████████               ███████  %s\"\n\"${c1} ████████               ███████  %s\"\n\"${c1} ████████               ███████  %s\"\n\"${c1} ████████               ███████  %s\"\n\"${c1} ██████████████████████████████  %s\"\n\"${c1} ██████████████████████████████  %s\"\n\"${c1} ████████████████████████        %s\"\n\"${c1} ████████████████████████        %s\"\n\"${c1} ████████████████████████        %s\"\n\"                                      %s\")\n\t\t;;\n\n\t\t\"Gentoo\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light purple') # Light Purple\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; 
c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"37\"\n\t\t\tfulloutput=(\n\"${c2}         -/oyddmdhs+:.               %s\"\n\"${c2}     -o${c1}dNMMMMMMMMNNmhy+${c2}-\\`            %s\"\n\"${c2}   -y${c1}NMMMMMMMMMMMNNNmmdhy${c2}+-          %s\"\n\"${c2} \\`o${c1}mMMMMMMMMMMMMNmdmmmmddhhy${c2}/\\`       %s\"\n\"${c2} om${c1}MMMMMMMMMMMN${c2}hhyyyo${c1}hmdddhhhd${c2}o\\`     %s\"\n\"${c2}.y${c1}dMMMMMMMMMMd${c2}hs++so/s${c1}mdddhhhhdm${c2}+\\`   %s\"\n\"${c2} oy${c1}hdmNMMMMMMMN${c2}dyooy${c1}dmddddhhhhyhN${c2}d.  %s\"\n\"${c2}  :o${c1}yhhdNNMMMMMMMNNNmmdddhhhhhyym${c2}Mh  %s\"\n\"${c2}    .:${c1}+sydNMMMMMNNNmmmdddhhhhhhmM${c2}my  %s\"\n\"${c2}       /m${c1}MMMMMMNNNmmmdddhhhhhmMNh${c2}s:  %s\"\n\"${c2}    \\`o${c1}NMMMMMMMNNNmmmddddhhdmMNhs${c2}+\\`   %s\"\n\"${c2}  \\`s${c1}NMMMMMMMMNNNmmmdddddmNMmhs${c2}/.     %s\"\n\"${c2} /N${c1}MMMMMMMMNNNNmmmdddmNMNdso${c2}:\\`       %s\"\n\"${c2}+M${c1}MMMMMMNNNNNmmmmdmNMNdso${c2}/-          %s\"\n\"${c2}yM${c1}MNNNNNNNmmmmmNNMmhs+/${c2}-\\`            %s\"\n\"${c2}/h${c1}MMNNNNNNNNMNdhs++/${c2}-\\`               %s\"\n\"${c2}\\`/${c1}ohdmmddhys+++/:${c2}.\\`                  %s\"\n\"${c2}  \\`-//////:--.                       
%s\")\n\t\t;;\n\n\t\t\"Funtoo\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light purple') # Light Purple\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"52\"\n\t\t\tfulloutput=(\n\"${c1}                                                    %s\"\n\"${c1}                                                    %s\"\n\"${c1}                                                    %s\"\n\"${c1}                                                    %s\"\n\"${c1}     _______               ____                     %s\"\n\"${c1}    /MMMMMMM/             /MMMM| _____  _____       %s\"\n\"${c1} __/M${c2}.MMM.${c1}M/_____________|M${c2}.M${c1}MM|/MMMMM\\/MMMMM\\      %s\"\n\"${c1}|MMMM${c2}MM'${c1}MMMMMMMMMMMMMMMMMMM${c2}MM${c1}MMMM${c2}.MMMM..MMMM.${c1}MM\\    %s\"\n\"${c1}|MM${c2}MMMMMMM${c1}/m${c2}MMMMMMMMMMMMMMMMMMMMMM${c1}MMMM${c2}MM${c1}MMMM${c2}MM${c1}MM|   %s\"\n\"${c1}|MMMM${c2}MM${c1}MMM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MM${c1}MMMMM${c2}\\MMM${c1}MMM${c2}MM${c1}MMMM${c2}MM${c1}MMMM${c2}MM${c1}MM|   %s\"\n\"${c1}  |MM${c2}MM${c1}MMM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MMM${c1}MMMM${c2}'MMMM''MMMM'${c1}MM/    %s\"\n\"${c1}  |MM${c2}MM${c1}MMM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MMM${c1}MMM\\MMMMM/\\MMMMM/      %s\"\n\"${c1}  |MM${c2}MM${c1}MMM${c2}MM${c1}MMMMMM${c2}MM${c1}MM${c2}MM${c1}MM${c2}MMMMM'${c1}M|                  %s\"\n\"${c1}  |MM${c2}MM${c1}MMM${c2}MMMMMMMMMMMMMMMMM MM'${c1}M/                   %s\"\n\"${c1}  |MMMMMMMMMMMMMMMMMMMMMMMMMMMM/                    %s\"\n\"${c1}                                                    %s\"\n\"${c1}                                                    %s\"\n\"${c1}                                                    %s\")\n\t\t;;\n\n\t\t\"Kogaion\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light 
Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"41\"\n\t\t\tfulloutput=(\n\"${c1}                  ;;      ,;             %s\"\n\"${c1}                 ;;;     ,;;             %s\"\n\"${c1}               ,;;;;     ;;;;            %s\"\n\"${c1}            ,;;;;;;;;    ;;;;            %s\"\n\"${c1}           ;;;;;;;;;;;   ;;;;;           %s\"\n\"${c1}          ,;;;;;;;;;;;;  ';;;;;,         %s\"\n\"${c1}          ;;;;;;;;;;;;;;, ';;;;;;;       %s\"\n\"${c1}          ;;;;;;;;;;;;;;;;;, ';;;;;      %s\"\n\"${c1}      ;    ';;;;;;;;;;;;;;;;;;, ;;;      %s\"\n\"${c1}      ;;;,  ';;;;;;;;;;;;;;;;;;;,;;      %s\"\n\"${c1}      ;;;;;,  ';;;;;;;;;;;;;;;;;;,       %s\"\n\"${c1}      ;;;;;;;;,  ';;;;;;;;;;;;;;;;,      %s\"\n\"${c1}      ;;;;;;;;;;;;, ';;;;;;;;;;;;;;      %s\"\n\"${c1}      ';;;;;;;;;;;;; ';;;;;;;;;;;;;      %s\"\n\"${c1}       ';;;;;;;;;;;;;, ';;;;;;;;;;;      %s\"\n\"${c1}        ';;;;;;;;;;;;;  ;;;;;;;;;;       %s\"\n\"${c1}          ';;;;;;;;;;;; ;;;;;;;;         %s\"\n\"${c1}              ';;;;;;;; ;;;;;;           %s\"\n\"${c1}                 ';;;;; ;;;;             %s\"\n\"${c1}                   ';;; ;;               %s\")\n\t\t;;\n\n\t\t\"Fedora - Old\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"37\"\n\t\t\tfulloutput=(\n\"${c2}           /:-------------:\\         %s\"\n\"${c2}        :-------------------::       %s\"\n\"${c2}      :-----------${c1}/shhOHbmp${c2}---:\\\\     %s\"\n\"${c2}    /-----------${c1}omMMMNNNMMD  ${c2}---:    %s\"\n\"${c2}   :-----------${c1}sMMMMNMNMP${c2}.    
---:   %s\"\n\"${c2}  :-----------${c1}:MMMdP${c2}-------    ---\\  %s\"\n\"${c2} ,------------${c1}:MMMd${c2}--------    ---:  %s\"\n\"${c2} :------------${c1}:MMMd${c2}-------    .---:  %s\"\n\"${c2} :----    ${c1}oNMMMMMMMMMNho${c2}     .----:  %s\"\n\"${c2} :--     .${c1}+shhhMMMmhhy++${c2}   .------/  %s\"\n\"${c2} :-    -------${c1}:MMMd${c2}--------------:   %s\"\n\"${c2} :-   --------${c1}/MMMd${c2}-------------;    %s\"\n\"${c2} :-    ------${c1}/hMMMy${c2}------------:     %s\"\n\"${c2} :--${c1} :dMNdhhdNMMNo${c2}------------;      %s\"\n\"${c2} :---${c1}:sdNMMMMNds:${c2}------------:       %s\"\n\"${c2} :------${c1}:://:${c2}-------------::         %s\"\n\"${c2} :---------------------://           %s\"\n\"${c2}                                     %s\")\n\t\t;;\n\n\t\t\"Fedora\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\t\tc2=$(getColor 'white') # White\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c1}             .',;::::;,'.\t\t %s\"\n\"${c1}         .';:cccccccccccc:;,.        \t %s\"\n\"${c1}      .;cccccccccccccccccccccc;.     \t %s\"\n\"${c1}    .:cccccccccccccccccccccccccc:.   \t %s\"\n\"${c1}  .;ccccccccccccc;${c2}.:dddl:.${c1};ccccccc;.     %s\"\n\"${c1} .:ccccccccccccc;${c2}OWMKOOXMWd${c1};ccccccc:.    %s\"\n\"${c1}.:ccccccccccccc;${c2}KMMc${c1};cc;${c2}xMMc${c1};ccccccc:.   
%s\"\n\"${c1},cccccccccccccc;${c2}MMM.${c1};cc;${c2};WW:${c1};cccccccc,   %s\"\n\"${c1}:cccccccccccccc;${c2}MMM.${c1};cccccccccccccccc:   %s\"\n\"${c1}:ccccccc;${c2}oxOOOo${c1};${c2}MMM0OOk.${c1};cccccccccccc:   %s\"\n\"${c1}cccccc;${c2}0MMKxdd:${c1};${c2}MMMkddc.${c1};cccccccccccc;   %s\"\n\"${c1}ccccc;${c2}XM0'${c1};cccc;${c2}MMM.${c1};cccccccccccccccc'   %s\"\n\"${c1}ccccc;${c2}MMo${c1};ccccc;${c2}MMW.${c1};ccccccccccccccc;    %s\"\n\"${c1}ccccc;${c2}0MNc.${c1}ccc${c2}.xMMd${c1};ccccccccccccccc;     %s\"\n\"${c1}cccccc;${c2}dNMWXXXWM0:${c1};cccccccccccccc:,      %s\"\n\"${c1}cccccccc;${c2}.:odl:.${c1};cccccccccccccc:,.       %s\"\n\"${c1}:cccccccccccccccccccccccccccc:'.         %s\"\n\"${c1}.:cccccccccccccccccccccc:;,..      %s\"\n\"${c1}  '::cccccccccccccc::;,.\t\t %s\"\n\"${c1}\t\t\t\t\t %s\")\n\t\t;;\n\n\t\t\"Fux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tfulloutput=(\n\"${c2}           --/+osssso+/--           %s\"\n\"${c2}        -/oshhhhhhhhhhhhso/-        %s\"\n\"${c2}      :oyhhhhhso+//+oshhhhhso:      %s\"\n\"${c2}    -+yhhhh+.   ss+/   .+hhhhs+-    %s\"\n\"${c2}   :/hhhh/     shhhy/     /hhhh/:   %s\"\n\"${c2}  ./hhhh- .++:..dhhb..:++. -hhhh/.  %s\"\n\"${c2}  +ohhh: -hoyhohhoohhohyoh- :hhho+  %s\"\n\"${c2}  /hhhh   shhy-ohyyho-yhhs   hhhh/  %s\"\n\"${c2}  /hhhh    shy\\+hhhh+/yhs    hhhh/  %s\"\n\"${c2}  +ohhh:  .:d. +:ys:+ .b:.  :hhho+  %s\"\n\"${c2}  ./hhhh- do  /  oo  \\  ob -hhhh/.  %s\"\n\"${c2}   :/hhhh/   -   ss   -   /hhhh/:   %s\"\n\"${c2}    -+shhhh+.    
//    .+hhhhs+-    %s\"\n\"${c2}      :oshhhhhso+//+oshhhhhso:      %s\"\n\"${c2}        -/oshhhhhhhhhhhhso/-        %s\"\n\"${c2}           --/+osssso+/--           %s\")\n\t\t;;\n\n\t\t\"Chapeau\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light green') # Light Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"35\"\n\t\t\tfulloutput=(\n\"${c2}               .-/-.               %s\"\n\"${c2}             ////////.             %s\"\n\"${c2}           ////////${c1}y+${c2}//.           %s\"\n\"${c2}         ////////${c1}mMN${c2}/////.         %s\"\n\"${c2}       ////////${c1}mMN+${c2}////////.       %s\"\n\"${c2}     ////////////////////////.     %s\"\n\"${c2}   /////////+${c1}shhddhyo${c2}+////////.    %s\"\n\"${c2}  ////////${c1}ymMNmdhhdmNNdo${c2}///////.   %s\"\n\"${c2} ///////+${c1}mMms${c2}////////${c1}hNMh${c2}///////.  %s\"\n\"${c2} ///////${c1}NMm+${c2}//////////${c1}sMMh${c2}///////  %s\"\n\"${c2} //////${c1}oMMNmmmmmmmmmmmmMMm${c2}///////  %s\"\n\"${c2} //////${c1}+MMmssssssssssssss+${c2}///////  %s\"\n\"${c2} \\`//////${c1}yMMy${c2}////////////////////   %s\"\n\"${c2}  \\`//////${c1}smMNhso++oydNm${c2}////////    %s\"\n\"${c2}   \\`///////${c1}ohmNMMMNNdy+${c2}///////     %s\"\n\"${c2}     \\`//////////${c1}++${c2}//////////       %s\"\n\"${c2}        \\`////////////////.         
%s\"\n\"${c2}            -////////-             %s\"\n\"${c2}                                   %s\")\n\t\t;;\n\n\t\t\"Korora\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white')\n\t\t\t\tc2=$(getColor 'light blue')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"32\"\n\t\t\tfulloutput=(\n\"${c1}                 ____________   %s\"\n\"${c1}              _add55555555554${c2}:  %s\"\n\"${c1}            _w?'${c2}\\`\\`\\`\\`\\`\\`\\`\\`\\`\\`'${c1})k${c2}:  %s\"\n\"${c1}           _Z'${c2}\\`${c1}            ]k${c2}:  %s\"\n\"${c1}           m(${c2}\\`${c1}             )k${c2}:  %s\"\n\"${c1}      _.ss${c2}\\`${c1}m[${c2}\\`${c1},            ]e${c2}:  %s\"\n\"${c1}    .uY\\\"^\\`${c2}\\`${c1}Xc${c2}\\`${c1}?Ss.         d(${c2}\\`  %s\"\n\"${c1}   jF'${c2}\\`${c1}    \\`@.  ${c2}\\`${c1}Sc      .jr${c2}\\`   %s\"\n\"${c1}  jr${c2}\\`${c1}       \\`?n_ ${c2}\\`${c1}$;   _a2\\\"${c2}\\`    %s\"\n\"${c1} .m${c2}:${c1}          \\`~M${c2}\\`${c1}1k${c2}\\`${c1}5?!\\`${c2}\\`      %s\"\n\"${c1} :#${c2}:${c1}             ${c2}\\`${c1})e${c2}\\`\\`\\`         %s\"\n\"${c1} :m${c2}:${c1}             ,#'${c2}\\`           %s\"\n\"${c1} :#${c2}:${c1}           .s2'${c2}\\`            %s\"\n\"${c1} :m,________.aa7^${c2}\\`              %s\"\n\"${c1} :#baaaaaaas!J'${c2}\\`                %s\"\n\"${c2}  \\`\\`\\`\\`\\`\\`\\`\\`\\`\\`\\`                   %s\")\n\t\t;;\n\n\t\t\"gNewSense\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"52\"\n\t\t\tfulloutput=(\n\"${c1}                      ..,,,,..                      %s\"\n\"${c1}                .oocchhhhhhhhhhccoo.                %s\"\n\"${c1}         .ochhlllllllc hhhhhh ollllllhhco.          
%s\"\n\"${c1}     ochlllllllllll hhhllllllhhh lllllllllllhco     %s\"\n\"${c1}  .cllllllllllllll hlllllo  +hllh llllllllllllllc.  %s\"\n\"${c1} ollllllllllhco\\'\\'  hlllllo  +hllh  \\`\\`ochllllllllllo %s\"\n\"${c1} hllllllllc\\'       hllllllllllllh       \\`cllllllllh %s\"\n\"${c1} ollllllh          +llllllllllll+          hllllllo %s\"\n\"${c1}  \\`cllllh.           ohllllllho           .hllllc\\'  %s\"\n\"${c1}     ochllc.            ++++            .cllhco     %s\"\n\"${c1}        \\`+occooo+.                .+ooocco+\\'        %s\"\n\"${c1}               \\`+oo++++      ++++oo+\\'               %s\"\n\"${c1}                                                    %s\")\n\t\t;;\n\n\t\t\"BLAG\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light purple')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"36\"\n\t\t\tfulloutput=(\n\"${c1}              d                     %s\"\n\"${c1}             ,MK:                   %s\"\n\"${c1}             xMMMX:                 %s\"\n\"${c1}            .NMMMMMX;               %s\"\n\"${c1}            lMMMMMMMM0clodkO0KXWW:  %s\"\n\"${c1}            KMMMMMMMMMMMMMMMMMMX'   %s\"\n\"${c1}       .;d0NMMMMMMMMMMMMMMMMMMK.    %s\"\n\"${c1}  .;dONMMMMMMMMMMMMMMMMMMMMMMx      %s\"\n\"${c1} 'dKMMMMMMMMMMMMMMMMMMMMMMMMl       %s\"\n\"${c1}    .:xKWMMMMMMMMMMMMMMMMMMM0.      %s\"\n\"${c1}        .:xNMMMMMMMMMMMMMMMMMK.     %s\"\n\"${c1}           lMMMMMMMMMMMMMMMMMMK.    %s\"\n\"${c1}           ,MMMMMMMMWkOXWMMMMMM0    %s\"\n\"${c1}           .NMMMMMNd.     
\\`':ldko   %s\"\n\"${c1}            OMMMK:                  %s\"\n\"${c1}            oWk,                    %s\"\n\"${c1}            ;:                      %s\")\n\t\t;;\n\n\t\t\"FreeBSD\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # white\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"37\"\n\t\t\tfulloutput=(\n\"${c1}                                     %s\"\n\"${c1}   \\`\\`\\`                        ${c2}\\`      %s\"\n\"${c1}  \\` \\`.....---...${c2}....--.\\`\\`\\`   -/      %s\"\n\"${c1}  +o   .--\\`         ${c2}/y:\\`      +.     %s\"\n\"${c1}   yo\\`:.            ${c2}:o      \\`+-      %s\"\n\"${c1}    y/               ${c2}-/\\`   -o/       %s\"\n\"${c1}   .-                  ${c2}::/sy+:.      %s\"\n\"${c1}   /                     ${c2}\\`--  /      %s\"\n\"${c1}  \\`:                          ${c2}:\\`     %s\"\n\"${c1}  \\`:                          ${c2}:\\`     %s\"\n\"${c1}   /                          ${c2}/      %s\"\n\"${c1}   .-                        ${c2}-.      %s\"\n\"${c1}    --                      ${c2}-.       %s\"\n\"${c1}     \\`:\\`                  ${c2}\\`:\\`        %s\"\n\"${c2}       .--             \\`--.          %s\"\n\"${c2}          .---.....----.             
%s\"\n\"${c2}                                     %s\"\n\"${c2}                                     %s\")\n\t\t;;\n\n\t\t\"FreeBSD - Old\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # white\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"34\"\n\t\t\tfulloutput=(\n\"${c2}              ,        ,          %s\"\n\"${c2}             /(        )\\`         %s\"\n\"${c2}             \\ \\___   / |         %s\"\n\"${c2}             /- ${c1}_${c2}  \\`-/  '         %s\"\n\"${c2}            (${c1}/\\/ \\ ${c2}\\   /\\\\         %s\"\n\"${c1}            / /   |${c2} \\`    \\\\        %s\"\n\"${c1}            O O   )${c2} /    |        %s\"\n\"${c1}            \\`-^--'\\`${c2}<     '        %s\"\n\"${c2}           (_.)  _  )   /         %s\"\n\"${c2}            \\`.___/\\`    /          %s\"\n\"${c2}              \\`-----' /           %s\"\n\"${c1} <----.     
${c2}__/ __   \\\\            %s\"\n\"${c1} <----|====${c2}O}}}${c1}==${c2}} \\} \\/${c1}====      %s\"\n\"${c1} <----'    ${c2}\\`--' \\`.__,' \\\\          %s\"\n\"${c2}              |        |          %s\"\n\"${c2}               \\       /       /\\\\ %s\"\n\"${c2}          ______( (_  / \\______/  %s\"\n\"${c2}        ,'  ,-----'   |           %s\"\n\"${c2}        \\`--{__________)           %s\"\n\"${c2}                                  %s\")\n\t\t;;\n\n\t\t\"OpenBSD\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'yellow') # Light Yellow\n\t\t\t\tc2=$(getColor 'brown') # Bold Yellow\n\t\t\t\tc3=$(getColor 'light cyan') # Light Cyan\n\t\t\t\tc4=$(getColor 'light red') # Light Red\n\t\t\t\tc5=$(getColor 'dark grey')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then\n\t\t\t\tc1=\"${my_lcolor}\"\n\t\t\t\tc2=\"${my_lcolor}\"\n\t\t\t\tc3=\"${my_lcolor}\"\n\t\t\t\tc4=\"${my_lcolor}\"\n\t\t\t\tc5=\"${my_lcolor}\"\n\t\t\tfi\n\t\t\tstartline=\"3\"\n\t\t\tlogowidth=\"44\"\n\t\t\tfulloutput=(\n\"${c3}                                        _   \"\n\"${c3}                                       (_)  \"\n\"${c1}              |    .                        \"\n\"${c1}          .   |L  /|   .         ${c3} _         %s\"\n\"${c1}      _ . |\\ _| \\--+._/| .       ${c3}(_)        %s\"\n\"${c1}     / ||\\| Y J  )   / |/| ./               %s\"\n\"${c1}    J  |)'( |        \\` F\\`.'/       ${c3} _       %s\"\n\"${c1}  -<|  F         __     .-<        ${c3}(_)      %s\"\n\"${c1}    | /       .-'${c3}. ${c1}\\`.  /${c3}-. ${c1}L___             %s\"\n\"${c1}    J \\      <    ${c3}\\ ${c1} | | ${c5}O${c3}\\\\\\\\${c1}|.-' ${c3} _          %s\"\n\"${c1}  _J \\  .-    \\\\\\\\${c3}/ ${c5}O ${c3}| ${c1}| \\  |${c1}F    ${c3}(_)         %s\"\n\"${c1} '-F  -<_.     \\   .-'  \\`-' L__             %s\"\n\"${c1}__J  _   _.     >-'  ${c2})${c4}._.   ${c1}|-'             %s\"\n\"${c1} \\`-|.'   /_.          ${c4}\\_|  ${c1} F               %s\"\n\"${c1}  /.-   .       
         _.<                %s\"\n\"${c1} /'    /.'             .'  \\`\\               %s\"\n\"${c1}  /L  /'   |/      _.-'-\\                   %s\"\n\"${c1} /'J       ___.---'\\|                       %s\"\n\"${c1}   |\\  .--' V  | \\`. \\`                       %s\"\n\"${c1}   |/\\`. \\`-.     \\`._)                        %s\"\n\"${c1}      / .-.\\                                %s\"\n\"${c1}      \\ (  \\`\\                               %s\"\n\"${c1}       \\`.\\                                  %s\")\n\t\t;;\n\n\t\t\"DragonFlyBSD\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light red') # Red\n\t\t\t\tc2=$(getColor 'white') # White\n\t\t\t\tc3=$(getColor 'yellow')\n\t\t\t\tc4=$(getColor 'light red')\n\t\t\tfi\n            if [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; c4=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"43\"\n\t\t\tfulloutput=(\n\"${c1}                      |                    %s\"\n\"${c1}                     .-.                   %s\"\n\"${c3}                    ()${c1}I${c3}()                  %s\"\n\"${c1}               \\\"==.__:-:__.==\\\"             %s\"\n\"${c1}              \\\"==.__/~|~\\__.==\\\"            %s\"\n\"${c1}              \\\"==._(  Y  )_.==\\\"            %s\"\n\"${c2}   .-'~~\\\"\\\"~=--...,__${c1}\\/|\\/${c2}__,...--=~\\\"\\\"~~'-. %s\"\n\"${c2}  (               ..=${c1}\\\\\\\\=${c1}/${c2}=..               )%s\"\n\"${c2}   \\`'-.        ,.-\\\"\\`;${c1}/=\\\\\\\\${c2} ;\\\"-.,_        .-'\\`%s\"\n\"${c2}       \\`~\\\"-=-~\\` .-~\\` ${c1}|=|${c2} \\`~-. \\`~-=-\\\"~\\`     %s\"\n\"${c2}            .-~\\`    /${c1}|=|${c2}\\    \\`~-.          %s\"\n\"${c2}         .~\\`       / ${c1}|=|${c2} \\       \\`~.       %s\"\n\"${c2}     .-~\\`        .'  ${c1}|=|${c2}  \\\\\\\\\\`.        \\`~-.  
%s\"\n\"${c2}   (\\`     _,.-=\\\"\\`    ${c1}|=|${c2}    \\`\\\"=-.,_     \\`) %s\"\n\"${c2}    \\`~\\\"~\\\"\\`           ${c1}|=|${c2}           \\`\\\"~\\\"~\\`  %s\"\n\"${c1}                     /=\\                   %s\"\n\"${c1}                     \\=/                   %s\"\n\"${c1}                      ^                    %s\")\n\t\t;;\n\n\t\t\"NetBSD\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'orange') # Orange\n\t\t\t\tc2=$(getColor 'white') # White\n\t\t\tfi\n            if [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"60\"\n\t\t\tfulloutput=(\n\"${c1}                                  __,gnnnOCCCCCOObaau,_     %s\"\n\"${c2}   _._                    ${c1}__,gnnCCCCCCCCOPF\\\"''              %s\"\n\"${c2}  (N\\\\\\\\\\\\\\\\${c1}XCbngg,._____.,gnnndCCCCCCCCCCCCF\\\"___,,,,___          %s\"\n\"${c2}   \\\\\\\\N\\\\\\\\\\\\\\\\${c1}XCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCOOOOPYvv.     %s\"\n\"${c2}    \\\\\\\\N\\\\\\\\\\\\\\\\${c1}XCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCPF\\\"''               %s\"\n\"${c2}     \\\\\\\\N\\\\\\\\\\\\\\\\${c1}XCCCCCCCCCCCCCCCCCCCCCCCCCOF\\\"'                     %s\"\n\"${c2}      \\\\\\\\N\\\\\\\\\\\\\\\\${c1}XCCCCCCCCCCCCCCCCCCCCOF\\\"'                         %s\"\n\"${c2}       \\\\\\\\N\\\\\\\\\\\\\\\\${c1}XCCCCCCCCCCCCCCCPF\\\"'                             %s\"\n\"${c2}        \\\\\\\\N\\\\\\\\\\\\\\\\${c1}\\\"PCOCCCOCCFP\\\"\\\"                                  %s\"\n\"${c2}         \\\\\\\\N\\                                                %s\"\n\"${c2}          \\\\\\\\N\\                                               %s\"\n\"${c2}           \\\\\\\\N\\                                              %s\"\n\"${c2}            \\\\\\\\NN\\                                            %s\"\n\"${c2}             \\\\\\\\NN\\                                           %s\"\n\"${c2}              \\\\\\\\NNA.                                 
        %s\"\n\"${c2}               \\\\\\\\NNA,                                        %s\"\n\"${c2}                \\\\\\\\NNN,                                       %s\"\n\"${c2}                 \\\\\\\\NNN\\                                      %s\"\n\"${c2}                  \\\\\\\\NNN\\                                     %s\"\n\"${c2}                   \\\\\\\\NNNA                                    %s\")\n\t\t;;\n\n\t\t\"Mandriva\"|\"Mandrake\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\t\tc2=$(getColor 'yellow') # Bold Yellow\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"41\"\n\t\t\tfulloutput=(\n\"${c2}                                         %s\"\n\"${c2}                         \\`\\`              %s\"\n\"${c2}                        \\`-.              %s\"\n\"${c1}       \\`               ${c2}.---              %s\"\n\"${c1}     -/               ${c2}-::--\\`             %s\"\n\"${c1}   \\`++    ${c2}\\`----...\\`\\`\\`-:::::.             %s\"\n\"${c1}  \\`os.      ${c2}.::::::::::::::-\\`\\`\\`     \\`  \\` %s\"\n\"${c1}  +s+         ${c2}.::::::::::::::::---...--\\` %s\"\n\"${c1} -ss:          ${c2}\\`-::::::::::::::::-.\\`\\`.\\`\\` %s\"\n\"${c1} /ss-           ${c2}.::::::::::::-.\\`\\`   \\`    %s\"\n\"${c1} +ss:          ${c2}.::::::::::::-            %s\"\n\"${c1} /sso         ${c2}.::::::-::::::-            %s\"\n\"${c1} .sss/       ${c2}-:::-.\\`   .:::::            %s\"\n\"${c1}  /sss+.    ${c2}..\\`${c1}  \\`--\\`    ${c2}.:::            %s\"\n\"${c1}   -ossso+/:://+/-\\`        ${c2}.:\\`           %s\"\n\"${c1}     -/+ooo+/-.              
${c2}\\`           %s\"\n\"${c1}                                         %s\"\n\"${c1}                                         %s\")\n\t\t;;\n\n\t\t\"openSUSE\"|\"SUSE Linux Enterprise\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light green') # Bold Green\n\t\t\t\tc2=$c0$bold\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"44\"\n\t\t\tfulloutput=(\n\"${c2}             .;ldkO0000Okdl;.               %s\"\n\"${c2}         .;d00xl:^''''''^:ok00d;.           %s\"\n\"${c2}       .d00l'                'o00d.         %s\"\n\"${c2}     .d0K^'${c1}  Okxoc;:,.          ${c2}^O0d.       %s\"\n\"${c2}    .OVV${c1}AK0kOKKKKKKKKKKOxo:,      ${c2}lKO.      %s\"\n\"${c2}   ,0VV${c1}AKKKKKKKKKKKKK0P^${c2},,,${c1}^dx:${c2}    ;00,     %s\"\n\"${c2}  .OVV${c1}AKKKKKKKKKKKKKk'${c2}.oOPPb.${c1}'0k.${c2}   cKO.    %s\"\n\"${c2}  :KV${c1}AKKKKKKKKKKKKKK: ${c2}kKx..dd ${c1}lKd${c2}   'OK:    %s\"\n\"${c2}  lKl${c1}KKKKKKKKKOx0KKKd ${c2}^0KKKO' ${c1}kKKc${c2}   lKl    %s\"\n\"${c2}  lKl${c1}KKKKKKKKKK;.;oOKx,..${c2}^${c1}..;kKKK0.${c2}  lKl    %s\"\n\"${c2}  :KA${c1}lKKKKKKKKK0o;...^cdxxOK0O/^^'  ${c2}.0K:    %s\"\n\"${c2}   kKA${c1}VKKKKKKKKKKKK0x;,,......,;od  ${c2}lKP     %s\"\n\"${c2}   '0KA${c1}VKKKKKKKKKKKKKKKKKK00KKOo^  ${c2}c00'     %s\"\n\"${c2}    'kKA${c1}VOxddxkOO00000Okxoc;''   ${c2}.dKV'      %s\"\n\"${c2}      l0Ko.                    .c00l'       %s\"\n\"${c2}       'l0Kk:.              
.;xK0l'         %s\"\n\"${c2}          'lkK0xc;:,,,,:;odO0kl'            %s\"\n\"${c2}              '^:ldxkkkkxdl:^'              %s\")\n\t\t;;\n\n\t\t\"Slackware\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\t\tc2=$(getColor 'white') # Bold White\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"46\"\n\t\t\tfulloutput=(\n\"${c1}                   :::::::                    \"\n\"${c1}             :::::::::::::::::::              %s\"\n\"${c1}          :::::::::::::::::::::::::           %s\"\n\"${c1}        ::::::::${c2}cllcccccllllllll${c1}::::::        %s\"\n\"${c1}     :::::::::${c2}lc               dc${c1}:::::::      %s\"\n\"${c1}    ::::::::${c2}cl   clllccllll    oc${c1}:::::::::    %s\"\n\"${c1}   :::::::::${c2}o   lc${c1}::::::::${c2}co   oc${c1}::::::::::   %s\"\n\"${c1}  ::::::::::${c2}o    cccclc${c1}:::::${c2}clcc${c1}::::::::::::  %s\"\n\"${c1}  :::::::::::${c2}lc        cclccclc${c1}:::::::::::::  %s\"\n\"${c1} ::::::::::::::${c2}lcclcc          lc${c1}:::::::::::: %s\"\n\"${c1} ::::::::::${c2}cclcc${c1}:::::${c2}lccclc     oc${c1}::::::::::: %s\"\n\"${c1} ::::::::::${c2}o    l${c1}::::::::::${c2}l    lc${c1}::::::::::: %s\"\n\"${c1}  :::::${c2}cll${c1}:${c2}o     clcllcccll     o${c1}:::::::::::  %s\"\n\"${c1}  :::::${c2}occ${c1}:${c2}o                  clc${c1}:::::::::::  %s\"\n\"${c1}   ::::${c2}ocl${c1}:${c2}ccslclccclclccclclc${c1}:::::::::::::   %s\"\n\"${c1}    :::${c2}oclcccccccccccccllllllllllllll${c1}:::::    %s\"\n\"${c1}     ::${c2}lcc1lcccccccccccccccccccccccco${c1}::::     %s\"\n\"${c1}       ::::::::::::::::::::::::::::::::       %s\"\n\"${c1}         ::::::::::::::::::::::::::::         %s\"\n\"${c1}            ::::::::::::::::::::::            %s\"\n\"${c1}                 ::::::::::::                 %s\")\n\t\t;;\n\n\t\t\"ROSA\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; 
then\n\t\t\t\tc1=$(getColor 'rosa_blue') # special blue color from ROSA\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"3\"\n\t\t\tlogowidth=\"41\"\n\t\t\tfulloutput=(\n\"${c1}            ROSAROSAROSAROSAR            \"\n\"${c1}         ROSA               AROS         \"\n\"${c1}       ROS   SAROSAROSAROSAR   AROS      \"\n\"${c1}     RO   ROSAROSAROSAROSAROSAR   RO     %s\"\n\"${c1}   ARO  AROSAROSAROSARO      AROS  ROS   %s\"\n\"${c1}  ARO  ROSAROS         OSAR   ROSA  ROS  %s\"\n\"${c1}  RO  AROSA   ROSAROSAROSA    ROSAR  RO  %s\"\n\"${c1} RO  ROSAR  ROSAROSAROSAR  R  ROSARO  RO %s\"\n\"${c1} RO  ROSA  AROSAROSAROSA  AR  ROSARO  AR %s\"\n\"${c1} RO AROS  ROSAROSAROSA   ROS  AROSARO AR %s\"\n\"${c1} RO AROS  ROSAROSARO   ROSARO  ROSARO AR %s\"\n\"${c1} RO  ROS  AROSAROS   ROSAROSA AROSAR  AR %s\"\n\"${c1} RO  ROSA  ROS     ROSAROSAR  ROSARO  RO %s\"\n\"${c1}  RO  ROS     AROSAROSAROSA  ROSARO  AR  %s\"\n\"${c1}  ARO  ROSA   ROSAROSAROS   AROSAR  ARO  %s\"\n\"${c1}   ARO  OROSA      R      ROSAROS  ROS   %s\"\n\"${c1}     RO   AROSAROS   AROSAROSAR   RO     %s\"\n\"${c1}      AROS   AROSAROSAROSARO   AROS      %s\"\n\"${c1}         ROSA               SARO         %s\"\n\"${c1}            ROSAROSAROSAROSAR            %s\")\n\t\t;;\n\n\t\t\"Red Hat Enterprise Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light red') # Light Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"42\"\n\t\t\tfulloutput=(\n\"${c1}            .MMM..:MMMMMMM                 %s\"\n\"${c1}           MMMMMMMMMMMMMMMMMM              %s\"\n\"${c1}           MMMMMMMMMMMMMMMMMMMM.           
%s\"\n\"${c1}          MMMMMMMMMMMMMMMMMMMMMM           %s\"\n\"${c1}         ,MMMMMMMMMMMMMMMMMMMMMM:          %s\"\n\"${c1}         MMMMMMMMMMMMMMMMMMMMMMMM          %s\"\n\"${c1}   .MMMM'  MMMMMMMMMMMMMMMMMMMMMM          %s\"\n\"${c1}  MMMMMM    \\`MMMMMMMMMMMMMMMMMMMM.         %s\"\n\"${c1} MMMMMMMM      MMMMMMMMMMMMMMMMMM .        %s\"\n\"${c1} MMMMMMMMM.       \\`MMMMMMMMMMMMM' MM.      %s\"\n\"${c1} MMMMMMMMMMM.                     MMMM     %s\"\n\"${c1} \\`MMMMMMMMMMMMM.                 ,MMMMM.   %s\"\n\"${c1}  \\`MMMMMMMMMMMMMMMMM.          ,MMMMMMMM.  %s\"\n\"${c1}     MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM  %s\"\n\"${c1}       MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM:  %s\"\n\"${c1}          MMMMMMMMMMMMMMMMMMMMMMMMMMMMMM   %s\"\n\"${c1}             \\`MMMMMMMMMMMMMMMMMMMMMMMM:    %s\"\n\"${c1}                 \\`\\`MMMMMMMMMMMMMMMMM'      %s\")\n\t\t;;\n\n\t\t\"Frugalware\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"3\"\n\t\t\tlogowidth=\"50\"\n\t\t\tfulloutput=(\n\"${c2}          \\`++/::-.\\`                               \"\n\"${c2}         /o+++++++++/::-.\\`                        \"\n\"${c2}        \\`o+++++++++++++++o++/::-.\\`                \"\n\"${c2}        /+++++++++++++++++++++++oo++/:-.\\`\\`        %s\"\n\"${c2}       .o+ooooooooooooooooooosssssssso++oo++/:-\\`  %s\"\n\"${c2}       ++osoooooooooooosssssssssssssyyo+++++++o:  %s\"\n\"${c2}      -o+ssoooooooooooosssssssssssssyyo+++++++s\\`  %s\"\n\"${c2}      o++ssoooooo++++++++++++++sssyyyyo++++++o:   %s\"\n\"${c2}     :o++ssoooooo${c1}/-------------${c2}+syyyyyo+++++oo    %s\"\n\"${c2}    \\`o+++ssoooooo${c1}/-----${c2}+++++ooosyyyyyyo++++os:    %s\"\n\"${c2}    /o+++ssoooooo${c1}/-----${c2}ooooooosyyyyyyyo+oooss     %s\"\n\"${c2}   .o++++ssooooos${c1}/------------${c2}syyyyyyhsosssy-     
%s\"\n\"${c2}   ++++++ssooooss${c1}/-----${c2}+++++ooyyhhhhhdssssso      %s\"\n\"${c2}  -s+++++syssssss${c1}/-----${c2}yyhhhhhhhhhhhddssssy.      %s\"\n\"${c2}  sooooooyhyyyyyh${c1}/-----${c2}hhhhhhhhhhhddddyssy+       %s\"\n\"${c2} :yooooooyhyyyhhhyyyyyyhhhhhhhhhhdddddyssy\\`       %s\"\n\"${c2} yoooooooyhyyhhhhhhhhhhhhhhhhhhhddddddysy/        %s\"\n\"${c2}-ysooooooydhhhhhhhhhhhddddddddddddddddssy         %s\"\n\"${c2} .-:/+osssyyyysyyyyyyyyyyyyyyyyyyyyyyssy:         %s\"\n\"${c2}       \\`\\`.-/+oosysssssssssssssssssssssss          %s\"\n\"${c2}               \\`\\`.:/+osyysssssssssssssh.          %s\"\n\"${c2}                        \\`-:/+osyyssssyo           %s\"\n\"${c2}                                .-:+++\\`           %s\")\n\t\t;;\n\n\t\t\"Peppermint\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"3\"\n\t\t\tlogowidth=\"40\"\n\t\t\tfulloutput=(\n\"${c2}             PPPPPPPPPPPPPP             \"\n\"${c2}         PPPP${c1}MMMMMMM${c2}PPPPPPPPPPP         \"\n\"${c2}       PPPP${c1}MMMMMMMMMM${c2}PPPPPPPP${c1}MM${c2}PP       \"\n\"${c2}     PPPPPPPP${c1}MMMMMMM${c2}PPPPPPPP${c1}MMMMM${c2}PP     %s\"\n\"${c2}   PPPPPPPPPPPP${c1}MMMMMM${c2}PPPPPPP${c1}MMMMMMM${c2}PP   %s\"\n\"${c2}  PPPPPPPPPPPP${c1}MMMMMMM${c2}PPPP${c1}M${c2}P${c1}MMMMMMMMM${c2}PP  %s\"\n\"${c2} PP${c1}MMMM${c2}PPPPPPPPPP${c1}MMM${c2}PPPPP${c1}MMMMMMM${c2}P${c1}MM${c2}PPPP %s\"\n\"${c2} P${c1}MMMMMMMMMM${c2}PPPPPP${c1}MM${c2}PPPPP${c1}MMMMMM${c2}PPPPPPPP 
%s\"\n\"${c2}P${c1}MMMMMMMMMMMM${c2}PPPPP${c1}MM${c2}PP${c1}M${c2}P${c1}MM${c2}P${c1}MM${c2}PPPPPPPPPPP%s\"\n\"${c2}P${c1}MMMMMMMMMMMMMMMM${c2}PP${c1}M${c2}P${c1}MMM${c2}PPPPPPPPPPPPPPPP%s\"\n\"${c2}P${c1}MMM${c2}PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP${c1}MMMMM${c2}P%s\"\n\"${c2}PPPPPPPPPPPPPPPP${c1}MMM${c2}P${c1}M${c2}P${c1}MMMMMMMMMMMMMMMM${c2}PP%s\"\n\"${c2}PPPPPPPPPPP${c1}MM${c2}P${c1}MM${c2}PPPP${c1}MM${c2}PPPPP${c1}MMMMMMMMMMM${c2}PP%s\"\n\"${c2} PPPPPPPP${c1}MMMMMM${c2}PPPPP${c1}MM${c2}PPPPPP${c1}MMMMMMMMM${c2}PP %s\"\n\"${c2} PPPP${c1}MM${c2}P${c1}MMMMMMM${c2}PPPPPP${c1}MM${c2}PPPPPPPPPP${c1}MMMM${c2}PP %s\"\n\"${c2}  PP${c1}MMMMMMMMM${c2}P${c1}M${c2}PPPP${c1}MMMMMM${c2}PPPPPPPPPPPPP  %s\"\n\"${c2}   PP${c1}MMMMMMM${c2}PPPPPPP${c1}MMMMMM${c2}PPPPPPPPPPPP   %s\"\n\"${c2}     PP${c1}MMMM${c2}PPPPPPPPP${c1}MMMMMMM${c2}PPPPPPPP     %s\"\n\"${c2}       PP${c1}MM${c2}PPPPPPPP${c1}MMMMMMMMMM${c2}PPPP       %s\"\n\"${c2}         PPPPPPPPPP${c1}MMMMMMMM${c2}PPPP         %s\"\n\"${c2}             PPPPPPPPPPPPPP             %s\")\n\t\t;;\n\n\"Grombyang\"|\"GrombyangOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue')\n\t\t\t\tc2=$(getColor 'light green')\n\t\t\t\tc3=$(getColor 'light red')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tfulloutput=(\n\"${c1}             eeeeeeeeeeee                           %s\"\n\"${c1}          eeeeeeeeeeeeeeeee          %s\"\n\"${c1}       eeeeeeeeeeeeeeeeeeeeeee       %s\"\n\"${c1}     eeeee       ${c2}.o+       ${c1}eeee      %s\"\n\"${c1}   eeee         ${c2}\\`ooo/         ${c1}eeee   %s\"\n\"${c1}  eeee         ${c2}\\`+oooo:         ${c1}eeee  %s\"\n\"${c1} eee          ${c2}\\`+oooooo:          ${c1}eee %s\"\n\"${c1} eee          ${c2}-+oooooo+:         ${c1}eee %s\"\n\"${c1} ee         ${c2}\\`/:oooooooo+:         ${c1}ee %s\"\n\"${c1} ee        ${c2}\\`/+   +++    +:        ${c1}ee %s\"\n\"${c1} ee              
${c2}+o+\\             ${c1}ee %s\"\n\"${c1} eee             ${c2}+o+\\            ${c1}eee %s\"\n\"${c1} eee        ${c2}//  \\\\ooo/  \\\\\\         ${c1}eee %s\"\n\"${c1}  eee      ${c2}//++++oooo++++\\\\\\      ${c1}eee  %s\"\n\"${c1}   eeee    ${c2}::::++oooo+:::::   ${c1}eeee   %s\"\n\"${c1}     eeeee   ${c3}Grombyang OS ${c1}  eeee     %s\"\n\"${c1}       eeeeeeeeeeeeeeeeeeeeeee       %s\"\n\"${c1}          eeeeeeeeeeeeeeeee          %s\"\n\"                                     %s\"\n\"                                     %s\")\n\t;;\n\n\t\t\"Solus\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'blue') # Blue\n\t\t\t\tc3=$(getColor 'black') # Black\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"36\"\n\t\t\tfulloutput=(\n\"${c3}               ......               %s\"\n\"${c3}         .'${c1}D${c3}lddddddddddd'.          %s\"\n\"${c3}      .'ddd${c1}XM${c3}xdddddddddddddd.       %s\"\n\"${c3}    .dddddx${c1}MMM0${c3};dddddddddddddd.     %s\"\n\"${c3}   'dddddl${c1}MMMMMN${c3}cddddddddddddddd.   %s\"\n\"${c3}  ddddddc${c1}WMMMMMMW${c3}lddddddddddddddd.  %s\"\n\"${c3} ddddddc${c1}WMMMMMMMMO${c3}ddoddddddddddddd. %s\"\n\"${c3}.ddddd:${c1}NMMMMMMMMMK${c3}dd${c1}NX${c3}od;c${c1}lxl${c3}dddddd %s\"\n\"${c3}dddddc${c1}WMMMMMMMMMMNN${c3}dd${c1}MMXl${c3};d${c1}00xl;${c3}ddd.%s\"\n\"${c3}ddddl${c1}WMMMMMMMMMMMMM${c3}d;${c1}MMMM0${c3}:dl${c1}XMMXk:${c3}'%s\"\n\"${c3}dddo${c1}WMMMMMMMMMMMMMM${c3}dd${c1}MMMMMW${c3}od${c3};${c1}XMMMOd%s\"\n\"${c3}.dd${c1}MMMMMMMMMMMMMMMM${c3}d:${c1}MMMMMMM${c3}kd${c1}lMKll %s\"\n\"${c3}.;dk0${c1}KXNWWMMMMMMMMM${c3}dx${c1}MMMMMMM${c3}Xl;lxK; %s\"\n\"${c3}  'dddddddd;:cclodcddxddolloxO0O${c1}d'  %s\"\n\"${c1}   ckkxxxddddddddxxkOOO000Okdool.   %s\"\n\"${c2}    .lddddxxxxxxddddooooooooood     %s\"\n\"${c2}      .:oooooooooooooooooooc'       %s\"\n\"${c2}         .,:looooooooooc;.  
%s\")\n\t\t;;\n\n\t\t\"Mageia\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t \tc2=$(getColor 'light cyan') # Light Cyan\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"33\"\n\t\t\tfulloutput=(\n\"${c2}               .°°.              %s\"\n\"${c2}                °°   .°°.        %s\"\n\"${c2}                .°°°. °°         %s\"\n\"${c2}                .   .            %s\"\n\"${c2}                 °°° .°°°.       %s\"\n\"${c2}             .°°°.   '___'       %s\"\n\"${c1}            .${c2}'___'     ${c1}   .      %s\"\n\"${c1}          :dkxc;'.  ..,cxkd;     %s\"\n\"${c1}        .dkk. kkkkkkkkkk .kkd.   %s\"\n\"${c1}       .dkk.  ';cloolc;.  .kkd   %s\"\n\"${c1}       ckk.                .kk;  %s\"\n\"${c1}       xO:                  cOd  %s\"\n\"${c1}       xO:                  lOd  %s\"\n\"${c1}       lOO.                .OO:  %s\"\n\"${c1}       .k00.              .00x   %s\"\n\"${c1}        .k00;            ;00O.   %s\"\n\"${c1}         .lO0Kc;,,,,,,;c0KOc.    %s\"\n\"${c1}            ;d00KKKKKK00d;       %s\"\n\"${c1}               .,KKKK,.          %s\")\n\t\t;;\n\n\t\t\"Hyperbola GNU/Linux-libre\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light grey') # light grey\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"25\"\n\t\t\tfulloutput=(\n\"${c1}                                    %s\"\n\"${c1}                  ..              , %s\"\n\"${c1}                  a;           ._#  %s\"\n\"${c1}                 )##        _au#?   
%s\"\n\"${c1}                 ]##s,.__a_w##e^    %s\"\n\"${c1}                 :###########(      %s\"\n\"${c1}                  ^!#####?!^        %s\"\n\"${c1}                  ._                %s\"\n\"${c1}             _au######a,            %s\"\n\"${c1}           sa###########,           %s\"\n\"${c1}        _a##############o           %s\"\n\"${c1}      .a#####?!^^^^^-####_          %s\"\n\"${c1}     j####^           ~##i          %s\"\n\"${c1}   _de!^               -#i          %s\"\n\"${c1} _#e^                   ]+          %s\"\n\"${c1} ^                      ^           %s\"\n\"${c1}                                    %s\")\n\t\t\t;;\n\n\t\t\"Parabola GNU/Linux-libre\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'purple') # Purple\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"33\"\n\t\t\tfulloutput=(\n\"${c1}                                       %s\"\n\"${c1}                          _,,     _    %s\"\n\"${c1}                   _,   ,##'    ,##;   %s\"\n\"${c1}             _, ,##'  ,##'    ,#####;  %s\"\n\"${c1}         _,;#',##'  ,##'    ,#######'  %s\"\n\"${c1}     _,#**^'         \\`    ,#########   %s\"\n\"${c1} .-^\\`                    \\`#########    %s\"\n\"${c1}                          ########     %s\"\n\"${c1}                          ;######      %s\"\n\"${c1}                          ;####*       %s\"\n\"${c1}                          ####'        %s\"\n\"${c1}                         ;###          %s\"\n\"${c1}                        ,##'           %s\"\n\"${c1}                        ##             %s\"\n\"${c1}                       #'              %s\"\n\"${c1}                      /                %s\"\n\"${c1}                     '                 %s\"\n\"${c1}                                       %s\")\n\t\t;;\n\n\t\t\"Viperr\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # 
White\n\t\t\t\tc2=$(getColor 'dark grey') # Dark Gray\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"31\"\n\t\t\tfulloutput=(\n\"${c1}    wwzapd         dlzazw      %s\"\n\"${c1}   an${c2}#${c1}zncmqzepweeirzpas${c2}#${c1}xz     %s\"\n\"${c1} apez${c2}##${c1}qzdkawweemvmzdm${c2}##${c1}dcmv   %s\"\n\"${c1}zwepd${c2}####${c1}qzdweewksza${c2}####${c1}ezqpa  %s\"\n\"${c1}ezqpdkapeifjeeazezqpdkazdkwqz  %s\"\n\"${c1} ezqpdksz${c2}##${c1}wepuizp${c2}##${c1}wzeiapdk   %s\"\n\"${c1}  zqpakdpa${c2}#${c1}azwewep${c2}#${c1}zqpdkqze    %s\"\n\"${c1}    apqxalqpewenwazqmzazq      %s\"\n\"${c1}     mn${c2}##${c1}==${c2}#######${c1}==${c2}##${c1}qp       %s\"\n\"${c1}      qw${c2}##${c1}=${c2}#######${c1}=${c2}##${c1}zl        %s\"\n\"${c1}      z0${c2}######${c1}=${c2}######${c1}0a        %s\"\n\"${c1}       qp${c2}#####${c1}=${c2}#####${c1}mq         %s\"\n\"${c1}       az${c2}####${c1}===${c2}####${c1}mn         %s\"\n\"${c1}        ap${c2}#########${c1}qz          %s\"\n\"${c1}         9qlzskwdewz           %s\"\n\"${c1}          zqwpakaiw            %s\"\n\"${c1}            qoqpe              %s\"\n\"${c1}                               %s\")\n\t\t;;\n\n\t\t\"LinuxDeepin\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light green') # Bold Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"33\"\n\t\t\tfulloutput=(\n\"${c1}  eeeeeeeeeeeeeeeeeeeeeeeeeeee   %s\"\n\"${c1} eee  eeeeeee          eeeeeeee  %s\"\n\"${c1}ee   eeeeeeeee      eeeeeeeee ee %s\"\n\"${c1}e   eeeeeeeee     eeeeeeeee    e %s\"\n\"${c1}e   eeeeeee    eeeeeeeeee      e %s\"\n\"${c1}e   eeeeee    eeeee            e %s\"\n\"${c1}e    eeeee    eee  eee         e %s\"\n\"${c1}e     eeeee   ee eeeeee        e %s\"\n\"${c1}e      eeeee   eee   eee       e %s\"\n\"${c1}e       eeeeeeeeee  eeee       e %s\"\n\"${c1}e         eeeee    eeee 
       e %s\"\n\"${c1}e               eeeeee         e %s\"\n\"${c1}e            eeeeeee           e %s\"\n\"${c1}e eee     eeeeeeee             e %s\"\n\"${c1}eeeeeeeeeeeeeeee               e %s\"\n\"${c1}eeeeeeeeeeeee                 ee %s\"\n\"${c1} eeeeeeeeeee                eee  %s\"\n\"${c1}  eeeeeeeeeeeeeeeeeeeeeeeeeeee   %s\"\n\"${c1}                                 %s\")\n\t\t;;\n\n\t\t\"Deepin\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'cyan') # Cyan\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"41\"\n\t\t\tfulloutput=(\n\"${c1}              ............               %s\"\n\"${c1}          .';;;;;.       .,;,.           %s\"\n\"${c1}       .,;;;;;;;.       ';;;;;;;.        %s\"\n\"${c1}     .;::::::::'     .,::;;,''''',.      %s\"\n\"${c1}    ,'.::::::::    .;;'.          ';     %s\"\n\"${c1}   ;'  'cccccc,   ,' :: '..        .:    %s\"\n\"${c1}  ,,    :ccccc.  ;: .c, '' :.       ,;   %s\"\n\"${c1} .l.     cllll' ., .lc  :; .l'       l.  %s\"\n\"${c1} .c       :lllc  ;cl:  .l' .ll.      :'  %s\"\n\"${c1} .l        'looc. .   ,o:  'oo'      c,  %s\"\n\"${c1} .o.         .:ool::coc'  .ooo'      o.  %s\"\n\"${c1}  ::            .....   .;dddo      ;c   %s\"\n\"${c1}   l:...            .';lddddo.     ,o    %s\"\n\"${c1}    lxxxxxdoolllodxxxxxxxxxc      :l     %s\"\n\"${c1}     ,dxxxxxxxxxxxxxxxxxxl.     'o,      %s\"\n\"${c1}       ,dkkkkkkkkkkkkko;.    .;o;        %s\"\n\"${c1}         .;okkkkkdl;.    .,cl:.          %s\"\n\"${c1}             .,:cccccccc:,.              
%s\")\n\t\t;;\n\n\t\t\"Uos\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'blue') # Blue\n\t\t\t\tc2=$(getColor 'orange') # Orange\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"52\"\n\t\t\tfulloutput=(\n\"${c1}                                   %s\"\n\"${c1}                                   %s\"\n\"${c1}           ############            %s\"\n\"${c1}         ##############            %s\"\n\"${c1}         ############    ${c2}oo        %s\"\n\"${c1}           ########    ${c2}oooooo      %s\"\n\"${c1}     ##      ####    ${c2}oooooooo      %s\"\n\"${c1}     ####      ####    ${c2}oooooo      %s\"\n\"${c1}     ######      ####    ${c2}oooo      %s\"\n\"${c1}     ####          ####    ${c2}oo      %s\"\n\"${c1}     ##      ############          %s\"\n\"${c1}           ################        %s\"\n\"${c1}         ################          %s\"\n\"${c1}             ########              %s\"\n\"${c1}                                   %s\"\n\"${c1}                                   %s\")\n\t\t;;\n\n\t\t\"Chakra\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c1}      _ _ _        \\\"kkkkkkkk.         %s\"\n\"${c1}    ,kkkkkkkk.,    \\'kkkkkkkkk,        %s\"\n\"${c1}    ,kkkkkkkkkkkk., \\'kkkkkkkkk.       %s\"\n\"${c1}   ,kkkkkkkkkkkkkkkk,\\'kkkkkkkk,       %s\"\n\"${c1}  ,kkkkkkkkkkkkkkkkkkk\\'kkkkkkk.       
%s\"\n\"${c1}   \\\"\\'\\'\\\"\\'\\'\\',;::,,\\\"\\'\\'kkk\\'\\'kkkkk;   __   %s\"\n\"${c1}       ,kkkkkkkkkk, \\\"k\\'\\'kkkkk\\' ,kkkk  %s\"\n\"${c1}     ,kkkkkkk\\' ., \\' .: \\'kkkk\\',kkkkkk  %s\"\n\"${c1}   ,kkkkkkkk\\'.k\\'   ,  ,kkkk;kkkkkkkkk %s\"\n\"${c1}  ,kkkkkkkk\\';kk \\'k  \\\"\\'k\\',kkkkkkkkkkkk %s\"\n\"${c1} .kkkkkkkkk.kkkk.\\'kkkkkkkkkkkkkkkkkk\\' %s\"\n\"${c1} ;kkkkkkkk\\'\\'kkkkkk;\\'kkkkkkkkkkkkk\\'\\'   %s\"\n\"${c1} \\'kkkkkkk; \\'kkkkkkkk.,\\\"\\\"\\'\\'\\\"\\'\\'\\\"\\\"       %s\"\n\"${c1}   \\'\\'kkkk;  \\'kkkkkkkkkk.,             %s\"\n\"${c1}      \\';\\'    \\'kkkkkkkkkkkk.,          %s\"\n\"${c1}              ';kkkkkkkkkk\\'           %s\"\n\"${c1}                ';kkkkkk\\'             %s\"\n\"${c1}                   \\\"\\'\\'\\\"               %s\")\n\t\t;;\n\n\t\t\"Fuduntu\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'dark grey') # Dark Gray\n\t\t\t\tc2=$(getColor 'yellow') # Bold Yellow\n\t\t\t\tc3=$(getColor 'light red') # Light Red\n\t\t\t\tc4=$(getColor 'white') # White\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; c4=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"49\"\n\t\t\tfulloutput=(\n\"${c1}       \\`dwoapfjsod\\`${c2}           \\`dwoapfjsod\\`       \"\n\"${c1}    \\`xdwdsfasdfjaapz\\`${c2}       \\`dwdsfasdfjaapzx\\`    %s\"\n\"${c1}  \\`wadladfladlafsozmm\\`${c2}     \\`wadladfladlafsozmm\\`  %s\"\n\"${c1} \\`aodowpwafjwodisosoaas\\`${c2} \\`odowpwafjwodisosoaaso\\` %s\"\n\"${c1} \\`adowofaowiefawodpmmxs\\`${c2} \\`dowofaowiefawodpmmxso\\` %s\"\n\"${c1} \\`asdjafoweiafdoafojffw\\`${c2} \\`sdjafoweiafdoafojffwq\\` %s\"\n\"${c1}  \\`dasdfjalsdfjasdlfjdd\\`${c2} \\`asdfjalsdfjasdlfjdda\\`  %s\"\n\"${c1}   \\`dddwdsfasdfjaapzxaw\\`${c2} \\`ddwdsfasdfjaapzxawo\\`   %s\"\n\"${c1}     \\`dddwoapfjsowzocmw\\`${c2} \\`ddwoapfjsowzocmwp\\`     %s\"\n\"${c1}       \\`ddasowjfowiejao\\`${c2} \\`dasowjfowiejaow\\`       
%s\"\n\"${c1}                                                 %s\"\n\"${c3}       \\`ddasowjfowiejao\\`${c4} \\`dasowjfowiejaow\\`       %s\"\n\"${c3}     \\`dddwoapfjsowzocmw\\`${c4} \\`ddwoapfjsowzocmwp\\`     %s\"\n\"${c3}   \\`dddwdsfasdfjaapzxaw\\`${c4} \\`ddwdsfasdfjaapzxawo\\`   %s\"\n\"${c3}  \\`dasdfjalsdfjasdlfjdd\\`${c4} \\`asdfjalsdfjasdlfjdda\\`  %s\"\n\"${c3} \\`asdjafoweiafdoafojffw\\`${c4} \\`sdjafoweiafdoafojffwq\\` %s\"\n\"${c3} \\`adowofaowiefawodpmmxs\\`${c4} \\`dowofaowiefawodpmmxso\\` %s\"\n\"${c3} \\`aodowpwafjwodisosoaas\\`${c4} \\`odowpwafjwodisosoaaso\\` %s\"\n\"${c3}   \\`wadladfladlafsozmm\\`${c4}     \\`wadladfladlafsozmm\\` %s\"\n\"${c3}     \\`dwdsfasdfjaapzx\\`${c4}       \\`dwdsfasdfjaapzx\\`   %s\"\n\"${c3}        \\`woapfjsod\\`${c4}             \\`woapfjsod\\`      %s\")\n\t\t;;\n\n\t\t\"Zorin OS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tfulloutput=(\n\"${c1}           ...................          %s\"\n\"${c1}          :ooooooooooooooooooo/         %s\"\n\"${c1}         /ooooooooooooooooooooo+        %s\"\n\"${c1}        ''''''''''''''''''''''''        %s\"\n\"${c1}                                        %s\"\n\"${c1}    .++++++++++++++++++/.       :++-    %s\"\n\"${c1}   -oooooooooooooooo/-       :+ooooo:   %s\"\n\"${c1}  :oooooooooooooo/-       :+ooooooooo:  %s\"\n\"${c1} .oooooooooooo+-       :+ooooooooooooo- %s\"\n\"${c1}  -oooooooo/-       -+ooooooooooooooo:  %s\"\n\"${c1}   .oooo+-       -+ooooooooooooooooo-   %s\"\n\"${c1}    .--        .-------------------.    
%s\"\n\"${c1}                                        %s\"\n\"${c1}        .//////////////////////-        %s\"\n\"${c1}         :oooooooooooooooooooo/         %s\"\n\"${c1}          :oooooooooooooooooo:          %s\"\n\"${c1}           ''''''''''''''''''           %s\")\n\t\t;;\n\n\t\t\"Mac OS X\"|\"macOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'green') # Green\n\t\t\t\tc2=$(getColor 'brown') # Yellow\n\t\t\t\tc3=$(getColor 'light red') # Orange\n\t\t\t\tc4=$(getColor 'red') # Red\n\t\t\t\tc5=$(getColor 'purple') # Purple\n\t\t\t\tc6=$(getColor 'blue') # Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then\n\t\t\t\tc1=\"${my_lcolor}\"\n\t\t\t\tc2=\"${my_lcolor}\"\n\t\t\t\tc3=\"${my_lcolor}\"\n\t\t\t\tc4=\"${my_lcolor}\"\n\t\t\t\tc5=\"${my_lcolor}\"\n\t\t\t\tc6=\"${my_lcolor}\"\n\t\t\tfi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"31\"\n\t\t\tfulloutput=(\n\"${c1}                               \"\n\"${c1}                 -/+:.         %s\"\n\"${c1}                :++++.         %s\"\n\"${c1}               /+++/.          %s\"\n\"${c1}       .:-::- .+/:-\\`\\`.::-      %s\"\n\"${c1}    .:/++++++/::::/++++++/:\\`   %s\"\n\"${c2}  .:///////////////////////:\\`  %s\"\n\"${c2}  ////////////////////////\\`    %s\"\n\"${c3} -+++++++++++++++++++++++\\`     %s\"\n\"${c3} /++++++++++++++++++++++/      %s\"\n\"${c4} /sssssssssssssssssssssss.     %s\"\n\"${c4} :ssssssssssssssssssssssss-    %s\"\n\"${c5}  osssssssssssssssssssssssso/\\` %s\"\n\"${c5}  \\`syyyyyyyyyyyyyyyyyyyyyyyy+\\` %s\"\n\"${c6}   \\`ossssssssssssssssssssss/   %s\"\n\"${c6}     :ooooooooooooooooooo+.    
%s\"\n\"${c6}      \\\`:+oo+/:-..-:/+o+/-      %s\"\n\"${c6}                               %s\")\n\t\t;;\n\n\t\t\"Mac OS X - Classic\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'blue') # Blue\n\t\t\t\tc2=$(getColor 'light blue') # Light blue\n\t\t\t\tc3=$(getColor 'light grey') # Gray\n\t\t\t\tc4=$(getColor 'dark grey') # Dark Gray\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; c4=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"39\"\n\t\t\tfulloutput=(\n\"${c3}                                       \"\n\"${c3}                        ..             %s\"\n\"${c3}                       dWc             %s\"\n\"${c3}                     ,X0'              %s\"\n\"${c1}  ;;;;;;;;;;;;;;;;;;${c3}0Mk${c2}::::::::::::::: %s\"\n\"${c1}  ;;;;;;;;;;;;;;;;;${c3}KWo${c2}:::::::::::::::: %s\"\n\"${c1}  ;;;;;;;;;${c4}NN${c1};;;;;${c3}KWo${c2}:::::${c3}NN${c2}:::::::::: %s\"\n\"${c1}  ;;;;;;;;;${c4}NN${c1};;;;${c3}0Md${c2}::::::${c3}NN${c2}:::::::::: %s\"\n\"${c1}  ;;;;;;;;;${c4}NN${c1};;;${c3}xW0${c2}:::::::${c3}NN${c2}:::::::::: %s\"\n\"${c1}  ;;;;;;;;;;;;;;${c3}KMc${c2}::::::::::::::::::: %s\"\n\"${c1}  ;;;;;;;;;;;;;${c3}lWX${c2}:::::::::::::::::::: %s\"\n\"${c1}  ;;;;;;;;;;;;;${c3}xWWXXXXNN7${c2}::::::::::::: %s\"\n\"${c1}  ;;;;;;;;;;;;;;;;;;;;${c3}WK${c2}:::::::::::::: %s\"\n\"${c1}  ;;;;;${c4}TKX0ko.${c1};;;;;;;${c3}kMx${c2}:::${c3}.cOKNF${c2}::::: %s\"\n\"${c1}  ;;;;;;;;${c4}\\`kO0KKKKKKK${c3}NMNXK0OP*${c2}:::::::: %s\"\n\"${c1}  ;;;;;;;;;;;;;;;;;;;${c3}kMx${c2}:::::::::::::: %s\"\n\"${c1}  ;;;;;;;;;;;;;;;;;;;;${c3}WX${c2}:::::::::::::: %s\"\n\"${c3}                      lMc              %s\"\n\"${c3}                       kN.             
%s\"\n\"${c3}                        o'             %s\"\n\"${c3}                                       %s\")\n\t\t;;\n\n\t\t\"Windows\"|\"Cygwin\"|\"Msys\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light red') # Red\n\t\t\t\tc2=$(getColor 'light green') # Green\n\t\t\t\tc3=$(getColor 'light blue') # Blue\n\t\t\t\tc4=$(getColor 'yellow') # Yellow\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; c4=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"37\"\n\t\t\tfulloutput=(\n\"${c1}        ,.=:!!t3Z3z.,                %s\"\n\"${c1}       :tt:::tt333EE3                %s\"\n\"${c1}       Et:::ztt33EEEL${c2} @Ee.,      .., %s\"\n\"${c1}      ;tt:::tt333EE7${c2} ;EEEEEEttttt33# %s\"\n\"${c1}     :Et:::zt333EEQ.${c2} \\$EEEEEttttt33QL %s\"\n\"${c1}     it::::tt333EEF${c2} @EEEEEEttttt33F  %s\"\n\"${c1}    ;3=*^\\`\\`\\`\\\"*4EEV${c2} :EEEEEEttttt33@.  %s\"\n\"${c3}    ,.=::::!t=., ${c1}\\`${c2} @EEEEEEtttz33QF   %s\"\n\"${c3}   ;::::::::zt33)${c2}   \\\"4EEEtttji3P*    %s\"\n\"${c3}  :t::::::::tt33.${c4}:Z3z..${c2}  \\`\\`${c4} ,..g.    
%s\"\n\"${c3}  i::::::::zt33F${c4} AEEEtttt::::ztF     %s\"\n\"${c3} ;:::::::::t33V${c4} ;EEEttttt::::t3      %s\"\n\"${c3} E::::::::zt33L${c4} @EEEtttt::::z3F      %s\"\n\"${c3}{3=*^\\`\\`\\`\\\"*4E3)${c4} ;EEEtttt:::::tZ\\`      %s\"\n\"${c3}             \\`${c4} :EEEEtttt::::z7       %s\"\n\"${c4}                 \\\"VEzjt:;;z>*\\`       %s\")\n\t\t;;\n\n\t\t\"Windows - Modern\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c1}                                  .., %s\"\n\"${c1}                      ....,,:;+ccllll %s\"\n\"${c1}        ...,,+:;  cllllllllllllllllll %s\"\n\"${c1}  ,cclllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}                                      %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  llllllllllllll  lllllllllllllllllll %s\"\n\"${c1}  \\`'ccllllllllll  lllllllllllllllllll %s\"\n\"${c1}         \\`'\\\"\\\"*::  :ccllllllllllllllll %s\"\n\"${c1}                        \\`\\`\\`\\`''\\\"*::cll %s\"\n\"${c1}                                   \\`\\` %s\")\n\t\t;;\n\n\t\t\"Haiku\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tif [ \"$haikualpharelease\" == \"yes\" ]; then\n\t\t\t\t\tc1=$(getColor 'black_haiku') # Black\n\t\t\t\t\tc2=$(getColor 'light grey') # Light Gray\n\t\t\t\telse\n\t\t\t\t\tc1=$(getColor 'black') # Black\n\t\t\t\t\tc2=${c1}\n\t\t\t\tfi\n\t\t\t\tc3=$(getColor 'green') # Green\n\t\t\tfi\n\t\t\tif [ -n 
\"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"36\"\n\t\t\tfulloutput=(\n\"${c1}          :dc'                      %s\"\n\"${c1}       'l:;'${c2},${c1}'ck.    .;dc:.         %s\"\n\"${c1}       co    ${c2}..${c1}k.  .;;   ':o.       %s\"\n\"${c1}       co    ${c2}..${c1}k. ol      ${c2}.${c1}0.       %s\"\n\"${c1}       co    ${c2}..${c1}k. oc     ${c2}..${c1}0.       %s\"\n\"${c1}       co    ${c2}..${c1}k. oc     ${c2}..${c1}0.       %s\"\n\"${c1}.Ol,.  co ${c2}...''${c1}Oc;kkodxOdddOoc,.    %s\"\n\"${c1} ';lxxlxOdxkxk0kd${c3}oooll${c1}dl${c3}ccc:${c1}clxd;   %s\"\n\"${c1}     ..${c3}oOolllllccccccc:::::${c1}od;      %s\"\n\"${c1}       cx:ooc${c3}:::::::;${c1}cooolcX.       %s\"\n\"${c1}       cd${c2}.${c1}''cloxdoollc' ${c2}...${c1}0.       %s\"\n\"${c1}       cd${c2}......${c1}k;${c2}.${c1}xl${c2}....  .${c1}0.       %s\"\n\"${c1}       .::c${c2};..${c1}cx;${c2}.${c1}xo${c2}..... .${c1}0.       %s\"\n\"${c1}          '::c'${c2}...${c1}do${c2}..... .${c1}K,       %s\"\n\"${c1}                  cd,.${c2}....:${c1}O,${c2}...... %s\"\n\"${c1}                    ':clod:'${c2}......  %s\"\n\"${c1}                        ${c2}.           
%s\")\n\t\t;;\n\n\t\t\"Trisquel\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\t\tc2=$(getColor 'light cyan') # Light Cyan\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c1}                          ▄▄▄▄▄▄      %s\"\n\"${c1}                       ▄█████████▄    %s\"\n\"${c1}       ▄▄▄▄▄▄         ████▀   ▀████   %s\"\n\"${c1}    ▄██████████▄     ████▀   ▄▄ ▀███  %s\"\n\"${c1}  ▄███▀▀   ▀▀████     ███▄   ▄█   ███ %s\"\n\"${c1} ▄███   ▄▄▄   ████▄    ▀██████   ▄███ %s\"\n\"${c1} ███   █▀▀██▄  █████▄     ▀▀   ▄████  %s\"\n\"${c1} ▀███      ███  ███████▄▄  ▄▄██████   %s\"\n\"${c1}  ▀███▄   ▄███  █████████████${c2}████▀    %s\"\n\"${c1}   ▀█████████    ███████${c2}███▀▀▀        %s\"\n\"${c1}     ▀▀███▀▀     ██${c2}████▀▀             %s\"\n\"${c2}                ██████▀   ▄▄▄▄        %s\"\n\"${c2}               █████▀   ████████      %s\"\n\"${c2}               █████   ███▀  ▀███     %s\"\n\"${c2}                ████▄   ██▄▄▄  ███    %s\"\n\"${c2}                 █████▄   ▀▀  ▄██     %s\"\n\"${c2}                   ██████▄▄▄████      %s\"\n\"${c2}                      █████▀▀         %s\")\n\t\t;;\n\n\t\t\"Manjaro\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light green') # Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"33\"\n\t\t\tfulloutput=(\"\"\n\"${c1} ██████████████████  ████████    %s\"\n\"${c1} ██████████████████  ████████    %s\"\n\"${c1} ██████████████████  ████████    %s\"\n\"${c1} ██████████████████  ████████    %s\"\n\"${c1} ████████            ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"${c1} 
████████  ████████  ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"${c1} ████████  ████████  ████████    %s\"\n\"                                 %s\")\n\t\t;;\n\n\t\t\"Netrunner\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"43\"\n\t\t\tfulloutput=(\n\"${c1} nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn  %s\"\n\"${c1} nnnnnnnnnnnnnn            nnnnnnnnnnnnnn  %s\"\n\"${c1} nnnnnnnnnn     nnnnnnnnnn     nnnnnnnnnn  %s\"\n\"${c1} nnnnnnn   nnnnnnnnnnnnnnnnnnnn   nnnnnnn  %s\"\n\"${c1} nnnn   nnnnnnnnnnnnnnnnnnnnnnnnnn   nnnn  %s\"\n\"${c1} nnn  nnnnnnnnnnnnnnnnnnnnnnnnnnnnnn  nnn  %s\"\n\"${c1} nn  nnnnnnnnnnnnnnnnnnnnnn  nnnnnnnn  nn  %s\"\n\"${c1} n  nnnnnnnnnnnnnnnnn       nnnnnnnnnn  n  %s\"\n\"${c1} n nnnnnnnnnnn              nnnnnnnnnnn n  %s\"\n\"${c1} n nnnnnn                  nnnnnnnnnnnn n  %s\"\n\"${c1} n nnnnnnnnnnn             nnnnnnnnnnnn n  %s\"\n\"${c1} n nnnnnnnnnnnnn           nnnnnnnnnnnn n  %s\"\n\"${c1} n nnnnnnnnnnnnnnnn       nnnnnnnnnnnnn n  %s\"\n\"${c1} n nnnnnnnnnnnnnnnnn      nnnnnnnnnnnnn n  %s\"\n\"${c1} n nnnnnnnnnnnnnnnnnn    nnnnnnnnnnnn   n  %s\"\n\"${c1} nn  nnnnnnnnnnnnnnnnn   nnnnnnnnnnnn  nn  %s\"\n\"${c1} nnn   nnnnnnnnnnnnnnn  nnnnnnnnnnn   nnn  %s\"\n\"${c1} nnnnn   nnnnnnnnnnnnnn nnnnnnnnn   nnnnn  %s\"\n\"${c1} nnnnnnn   nnnnnnnnnnnnnnnnnnnn   nnnnnnn  %s\"\n\"${c1} nnnnnnnnnn     nnnnnnnnnn     nnnnnnnnnn  %s\"\n\"${c1} nnnnnnnnnnnnnn            nnnnnnnnnnnnnn  %s\"\n\"${c1} nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn  %s\"\n\"${c1}                                           %s\")\n\t\t;;\n\n\t\t\t\"Logos\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'green') # Green\n\t\t\tfi\n            if [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; 
fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"25\"\n\t\t\tfulloutput=(\n\"${c1}    ..:.:.               %s\"\n\"${c1}   ..:.:.:.:.            %s\"\n\"${c1}  ..:.:.:.:.:.:.         %s\"\n\"${c1} ..:.:.:.:.:.:.:.:.      %s\"\n\"${c1}   .:.::;.::::..:.:.:.   %s\"\n\"${c1}      .:.:.::.::.::.;;/  %s\"\n\"${c1}         .:.::.::://///  %s\"\n\"${c1}            ..;;///////  %s\"\n\"${c1}            ///////////  %s\"\n\"${c1}         //////////////  %s\"\n\"${c1}      /////////////////  %s\"\n\"${c1}   ///////////////////   %s\"\n\"${c1} //////////////////      %s\"\n\"${c1}  //////////////         %s\"\n\"${c1}   //////////            %s\"\n\"${c1}    //////               %s\"\n\"${c1}     //                  %s\")\n\t\t;;\n\n\t\t\t\"Manjaro-tree\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=\"\\e[1;32m\" # Green\n\t\t\t\tc2=\"\\e[1;33m\" # Yellow\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"33\"\n\t\t\tfulloutput=(\n\"${c1}                         ###     %s\"\n\"${c1}     ###             ####        %s\"\n\"${c1}        ###       ####           %s\"\n\"${c1}         ##### #####             %s\"\n\"${c1}      #################          %s\"\n\"${c1}    ###     #####    ####        %s\"\n\"${c1}   ##        ${c2}OOO       ${c1}###       %s\"\n\"${c1}  #          ${c2}WW         ${c1}##       %s\"\n\"${c1}            ${c2}WW            ${c1}#      %s\"\n\"${c2}            WW                   %s\"\n\"${c2}            WW                   %s\"\n\"${c2}           WW                    %s\"\n\"${c2}           WW                    %s\"\n\"${c2}           WW                    %s\"\n\"${c2}          WW                     %s\"\n\"${c2}          WW                     %s\"\n\"${c2}          WW                     %s\"\n\"${c2}                                 %s\")\n\t\t;;\n\n\t\t\"elementary OS\"|\"elementary os\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; 
then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"36\"\n\t\t\tfulloutput=(\n\"${c1}                                    %s\"\n\"${c1}         eeeeeeeeeeeeeeeee          %s\"\n\"${c1}      eeeeeeeeeeeeeeeeeeeeeee       %s\"\n\"${c1}    eeeee  eeeeeeeeeeee   eeeee     %s\"\n\"${c1}  eeee   eeeee       eee     eeee   %s\"\n\"${c1} eeee   eeee          eee     eeee  %s\"\n\"${c1}eee    eee            eee       eee %s\"\n\"${c1}eee   eee            eee        eee %s\"\n\"${c1}ee    eee           eeee       eeee %s\"\n\"${c1}ee    eee         eeeee      eeeeee %s\"\n\"${c1}ee    eee       eeeee      eeeee ee %s\"\n\"${c1}eee   eeee   eeeeee      eeeee  eee %s\"\n\"${c1}eee    eeeeeeeeee     eeeeee    eee %s\"\n\"${c1} eeeeeeeeeeeeeeeeeeeeeeee    eeeee  %s\"\n\"${c1}  eeeeeeee eeeeeeeeeeee      eeee   %s\"\n\"${c1}    eeeee                 eeeee     %s\"\n\"${c1}      eeeeeee         eeeeeee       %s\"\n\"${c1}         eeeeeeeeeeeeeeeee          %s\")\n\t;;\n\n\t\t\"Android\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light green') # Bold Green\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"2\"\n\t\t\tlogowidth=\"24\"\n\t\t\tfulloutput=(\n\"${c1}       ╲ ▁▂▂▂▁ ╱        \"\n\"${c1}       ▄███████▄        \"\n\"${c1}      ▄██ ███ ██▄       %s\"\n\"${c1}     ▄███████████▄      %s\"\n\"${c1}  ▄█ ▄▄▄▄▄▄▄▄▄▄▄▄▄ █▄   %s\"\n\"${c1}  ██ █████████████ ██   %s\"\n\"${c1}  ██ █████████████ ██   %s\"\n\"${c1}  ██ █████████████ ██   %s\"\n\"${c1}  ██ █████████████ ██   %s\"\n\"${c1}     █████████████      %s\"\n\"${c1}      ███████████       %s\"\n\"${c1}       ██     ██        %s\"\n\"${c1}       ██     ██        %s\")\n\t\t;;\n\n\t\t\"Scientific Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue')\n\t\t\t\tc2=$(getColor 'light red')\n\t\t\t\tc3=$(getColor 
'white')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"44\"\n\t\t\tfulloutput=(\n\"${c1}                  =/;;/-                    \"\n\"${c1}                 +:    //                   %s\"\n\"${c1}                /;      /;                  %s\"\n\"${c1}               -X        H.                 %s\"\n\"${c1} .//;;;:;;-,   X=        :+   .-;:=;:;#;.   %s\"\n\"${c1} M-       ,=;;;#:,      ,:#;;:=,       ,@   %s\"\n\"${c1} :#           :#.=/++++/=.$=           #=   %s\"\n\"${c1}  ,#;         #/:+/;,,/++:+/         ;+.    %s\"\n\"${c1}    ,+/.    ,;@+,        ,#H;,    ,/+,      %s\"\n\"${c1}       ;+;;/= @.  ${c2}.H${c3}#${c2}#X   ${c1}-X :///+;         %s\"\n\"${c1}       ;+=;;;.@,  ${c3}.X${c2}M${c3}@$.  ${c1}=X.//;=#/.        %s\"\n\"${c1}    ,;:      :@#=        =\\$H:     .+#-      %s\"\n\"${c1}  ,#=         #;-///==///-//         =#,    %s\"\n\"${c1} ;+           :#-;;;:;;;;-X-           +:   %s\"\n\"${c1} @-      .-;;;;M-        =M/;;;-.      -X   %s\"\n\"${c1}  :;;::;;-.    #-        :+    ,-;;-;:==    %s\"\n\"${c1}               ,X        H.                 %s\"\n\"${c1}                ;/      #=                  %s\"\n\"${c1}                 //    +;                   %s\"\n\"${c1}                  '////'                    %s\")\n\t\t;;\n\n\t\t\"BackTrack Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light red') # Light Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"48\"\n\t\t\tfulloutput=(\n\"${c1}..............                                  \"\n\"${c1}            ..,;:ccc,.                          %s\"\n\"${c1}          ......''';lxO.                        
%s\"\n\"${c1}.....''''..........,:ld;                        %s\"\n\"${c1}           .';;;:::;,,.x,                       %s\"\n\"${c1}      ..'''.            0Xxoc:,.  ...           %s\"\n\"${c1}  ....                ,ONkc;,;cokOdc',.         %s\"\n\"${c1} .                   OMo           ':${c2}dd${c1}o.       %s\"\n\"${c1}                    dMc               :OO;      %s\"\n\"${c1}                    0M.                 .:o.    %s\"\n\"${c1}                    ;Wd                         %s\"\n\"${c1}                     ;XO,                       %s\"\n\"${c1}                       ,d0Odlc;,..              %s\"\n\"${c1}                           ..',;:cdOOd::,.      %s\"\n\"${c1}                                    .:d;.':;.   %s\"\n\"${c1}                                       'd,  .'  %s\"\n\"${c1}                                         ;l   ..%s\"\n\"${c1}                                          .o    %s\"\n\"${c1}                                            c   %s\"\n\"${c1}                                            .'  %s\"\n\"${c1}                                             .  %s\")\n\t\t;;\n\n\t\t\"Kali Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\t\tc2=$(getColor 'black') # Black\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"48\"\n\t\t\tfulloutput=(\n\"${c1}..............                                  \"\n\"${c1}            ..,;:ccc,.                          %s\"\n\"${c1}          ......''';lxO.                        %s\"\n\"${c1}.....''''..........,:ld;                        %s\"\n\"${c1}           .';;;:::;,,.x,                       %s\"\n\"${c1}      ..'''.            0Xxoc:,.  ...           %s\"\n\"${c1}  ....                ,ONkc;,;cokOdc',.         %s\"\n\"${c1} .                   OMo           ':${c2}dd${c1}o.       
%s\"\n\"${c1}                    dMc               :OO;      %s\"\n\"${c1}                    0M.                 .:o.    %s\"\n\"${c1}                    ;Wd                         %s\"\n\"${c1}                     ;XO,                       %s\"\n\"${c1}                       ,d0Odlc;,..              %s\"\n\"${c1}                           ..',;:cdOOd::,.      %s\"\n\"${c1}                                    .:d;.':;.   %s\"\n\"${c1}                                       'd,  .'  %s\"\n\"${c1}                                         ;l   ..%s\"\n\"${c1}                                          .o    %s\"\n\"${c1}                                            c   %s\"\n\"${c1}                                            .'  %s\"\n\"${c1}                                             .  %s\")\n\t\t;;\n\n\t\t\"Sabayon\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light blue') # Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"38\"\n\t\t\tfulloutput=(\n\"${c2}            ...........               %s\"\n\"${c2}         ..             ..            %s\"\n\"${c2}      ..                   ..         %s\"\n\"${c2}    ..           ${c1}o           ${c2}..       %s\"\n\"${c2}  ..            ${c1}:W'            ${c2}..     %s\"\n\"${c2} ..             ${c1}.d.             ${c2}..    %s\"\n\"${c2}:.             ${c1}.KNO              ${c2}.:   %s\"\n\"${c2}:.             ${c1}cNNN.             ${c2}.:   %s\"\n\"${c2}:              ${c1}dXXX,              ${c2}:   %s\"\n\"${c2}:   ${c1}.          dXXX,       .cd,   ${c2}:   %s\"\n\"${c2}:   ${c1}'kc ..     dKKK.    ,ll;:'    ${c2}:   %s\"\n\"${c2}:     ${c1}.xkkxc;..dkkkc',cxkkl       ${c2}:   %s\"\n\"${c2}:.     ${c1}.,cdddddddddddddo:.       ${c2}.:   %s\"\n\"${c2} ..         ${c1}:lllllll:           ${c2}..    %s\"\n\"${c2}   ..         ${c1}',,,,,          ${c2}..      
%s\"\n\"${c2}     ..                     ..        %s\"\n\"${c2}        ..               ..           %s\"\n\"${c2}          ...............             %s\")\n\t\t;;\n\n\t\t\"KaOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"35\"\n\t\t\tfulloutput=(\n\"${c1}                     ..            %s\"\n\"${c1}  .....         ..OSSAAAAAAA..     %s\"\n\"${c1} .KKKKSS.     .SSAAAAAAAAAAA.      %s\"\n\"${c1}.KKKKKSO.    .SAAAAAAAAAA...       %s\"\n\"${c1}KKKKKKS.   .OAAAAAAAA.             %s\"\n\"${c1}KKKKKKS.  .OAAAAAA.                %s\"\n\"${c1}KKKKKKS. .SSAA..                   %s\"\n\"${c1}.KKKKKS..OAAAAAAAAAAAA........     %s\"\n\"${c1} DKKKKO.=AA=========A===AASSSO..   %s\"\n\"${c1}  AKKKS.==========AASSSSAAAAAASS.  %s\"\n\"${c1}  .=KKO..========ASS.....SSSSASSSS.%s\"\n\"${c1}    .KK.       .ASS..O.. =SSSSAOSS:%s\"\n\"${c1}     .OK.      .ASSSSSSSO...=A.SSA.%s\"\n\"${c1}       .K      ..SSSASSSS.. ..SSA. %s\"\n\"${c1}                 .SSS.AAKAKSSKA.   %s\"\n\"${c1}                    .SSS....S..    %s\")\n\t\t;;\n\n\t\t\"CentOS\"|\"CentOS Stream\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'yellow')\n\t\t\t\tc2=$(getColor 'light green')\n\t\t\t\tc3=$(getColor 'light blue')\n\t\t\t\tc4=$(getColor 'light purple')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; c4=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"40\"\n\t\t\tfulloutput=(\n\"${c1}                   ..                   %s\"\n\"${c1}                 .PLTJ.                 %s\"\n\"${c1}                <><><><>                %s\"\n\"${c2}       KKSSV' 4KKK ${c1}LJ${c4} KKKL.'VSSKK       %s\"\n\"${c2}       KKV' 4KKKKK ${c1}LJ${c4} KKKKAL 'VKK       %s\"\n\"${c2}       V' ' 'VKKKK ${c1}LJ${c4} KKKKV' ' 'V       %s\"\n\"${c2}       .4MA.' 
'VKK ${c1}LJ${c4} KKV' '.4Mb.       %s\"\n\"${c4}     . ${c2}KKKKKA.' 'V ${c1}LJ${c4} V' '.4KKKKK ${c3}.     %s\"\n\"${c4}   .4D ${c2}KKKKKKKA.'' ${c1}LJ${c4} ''.4KKKKKKK ${c3}FA.   %s\"\n\"${c4}  <QDD ++++++++++++  ${c3}++++++++++++ GFD>  %s\"\n\"${c4}   'VD ${c3}KKKKKKKK'.. ${c2}LJ ${c1}..'KKKKKKKK ${c3}FV    %s\"\n\"${c4}     ' ${c3}VKKKKK'. .4 ${c2}LJ ${c1}K. .'KKKKKV ${c3}'     %s\"\n\"${c3}        'VK'. .4KK ${c2}LJ ${c1}KKA. .'KV'        %s\"\n\"${c3}       A. . .4KKKK ${c2}LJ ${c1}KKKKA. . .4       %s\"\n\"${c3}       KKA. 'KKKKK ${c2}LJ ${c1}KKKKK' .4KK       %s\"\n\"${c3}       KKSSA. VKKK ${c2}LJ ${c1}KKKV .4SSKK       %s\"\n\"${c2}                <><><><>                %s\"\n\"${c2}                 'MKKM'                 %s\"\n\"${c2}                   ''                   %s\")\n\t\t;;\n\n\t\t\"Jiyuu Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"31\"\n\t\t\tfulloutput=(\n\"${c1}+++++++++++++++++++++++.       %s\"\n\"${c1}ss:-......-+so/:----.os-       %s\"\n\"${c1}ss        +s/        os-       %s\"\n\"${c1}ss       :s+         os-       %s\"\n\"${c1}ss       os.         os-       %s\"\n\"${c1}ss      .so          os-       %s\"\n\"${c1}ss      :s+          os-       %s\"\n\"${c1}ss      /s/          os-       %s\"\n\"${c1}ss      /s:          os-       %s\"\n\"${c1}ss      +s-          os-       %s\"\n\"${c1}ss-.....os:..........os-       %s\"\n\"${c1}++++++++os+++++++++oooo.       %s\"\n\"${c1}        os.     ./oo/.         %s\"\n\"${c1}        os.   ./oo:            %s\"\n\"${c1}        os. ./oo:              %s\"\n\"${c1}        os oo+-                %s\"\n\"${c1}        os+-                   %s\"\n\"${c1}        /.                     
%s\")\n\t\t;;\n\n\t\t\"Antergos\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'blue') # Light Blue\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"41\"\n\t\t\tfulloutput=(\n\"${c1}               \\`.-/::/-\\`\\`                \"\n\"${c1}            .-/osssssssso/.              %s\"\n\"${c1}           :osyysssssssyyys+-            %s\"\n\"${c1}        \\`.+yyyysssssssssyyyyy+.          %s\"\n\"${c1}       \\`/syyyyyssssssssssyyyyys-\\`        %s\"\n\"${c1}      \\`/yhyyyyysss${c2}++${c1}ssosyyyyhhy/\\`        %s\"\n\"${c1}     .ohhhyyyys${c2}o++/+o${c1}so${c2}+${c1}syy${c2}+${c1}shhhho.      %s\"\n\"${c1}    .shhhhys${c2}oo++//+${c1}sss${c2}+++${c1}yyy${c2}+s${c1}hhhhs.     %s\"\n\"${c1}   -yhhhhs${c2}+++++++o${c1}ssso${c2}+++${c1}yyy${c2}s+o${c1}hhddy:    %s\"\n\"${c1}  -yddhhy${c2}o+++++o${c1}syyss${c2}++++${c1}yyy${c2}yooy${c1}hdddy-   %s\"\n\"${c1} .yddddhs${c2}o++o${c1}syyyyys${c2}+++++${c1}yyhh${c2}sos${c1}hddddy\\`  %s\"\n\"${c1}\\`odddddhyosyhyyyyyy${c2}++++++${c1}yhhhyosddddddo  %s\"\n\"${c1}.dmdddddhhhhhhhyyyo${c2}+++++${c1}shhhhhohddddmmh. %s\"\n\"${c1}ddmmdddddhhhhhhhso${c2}++++++${c1}yhhhhhhdddddmmdy %s\"\n\"${c1}dmmmdddddddhhhyso${c2}++++++${c1}shhhhhddddddmmmmh %s\"\n\"${c1}-dmmmdddddddhhys${c2}o++++o${c1}shhhhdddddddmmmmd- %s\"\n\"${c1} .smmmmddddddddhhhhhhhhhdddddddddmmmms.  %s\"\n\"${c1}   \\`+ydmmmdddddddddddddddddddmmmmdy/.    
%s\"\n\"${c1}      \\`.:+ooyyddddddddddddyyso+:.\\`       %s\")\n\t\t;;\n\n\t\t\"Void Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'green')       # Dark Green\n\t\t\t\tc2=$(getColor 'light green') # Light Green\n\t\t\t\tc3=$(getColor 'dark grey')   # Black\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"47\"\n\t\t\tfulloutput=(\n\"${c2}                 __.;=====;.__                 %s\"\n\"${c2}             _.=+==++=++=+=+===;.              %s\"\n\"${c2}              -=+++=+===+=+=+++++=_            %s\"\n\"${c1}         .     ${c2}-=:\\`\\`     \\`--==+=++==.          %s\"\n\"${c1}        _vi,    ${c2}\\`            --+=++++:         %s\"\n\"${c1}       .uvnvi.       ${c2}_._       -==+==+.        %s\"\n\"${c1}      .vvnvnI\\`    ${c2}.;==|==;.     :|=||=|.       %s\"\n\"${c3} +QmQQm${c1}pvvnv; ${c3}_yYsyQQWUUQQQm #QmQ#${c2}:${c3}QQQWUV\\$QQmL %s\"\n\"${c3}  -QQWQW${c1}pvvo${c3}wZ?.wQQQE${c2}==<${c3}QWWQ/QWQW.QQWW${c2}(: ${c3}jQWQE %s\"\n\"${c3}   -\\$QQQQmmU'  jQQQ@${c2}+=<${c3}QWQQ)mQQQ.mQQQC${c2}+;${c3}jWQQ@' %s\"\n\"${c3}    -\\$WQ8Y${c1}nI:   ${c3}QWQQwgQQWV${c2}\\`${c3}mWQQ.jQWQQgyyWW@!   %s\"\n\"${c1}      -1vvnvv.     ${c2}\\`~+++\\`        ++|+++        %s\"\n\"${c1}       +vnvnnv,                 ${c2}\\`-|===         %s\"\n\"${c1}        +vnvnvns.           .      ${c2}:=-         %s\"\n\"${c1}         -Invnvvnsi..___..=sv=.     ${c2}\\`          %s\"\n\"${c1}           +Invnvnvnnnnnnnnvvnn;.              
%s\"\n\"${c1}             ~|Invnvnvvnvvvnnv}+\\`              %s\"\n\"${c1}                -~\\\"|{*l}*|\\\"\\\"~                  %s\")\n\t\t;;\n\n\t\t\"NixOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'blue')\n\t\t\t\tc2=$(getColor 'light blue')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"45\"\n\t\t\tfulloutput=(\n\"${c1}          ::::.    ${c2}':::::     ::::'          %s\"\n\"${c1}          ':::::    ${c2}':::::.  ::::'           %s\"\n\"${c1}            :::::     ${c2}'::::.:::::            %s\"\n\"${c1}      .......:::::..... ${c2}::::::::             %s\"\n\"${c1}     ::::::::::::::::::. ${c2}::::::    ${c1}::::.     %s\"\n\"${c1}    ::::::::::::::::::::: ${c2}:::::.  ${c1}.::::'     %s\"\n\"${c2}           .....           ::::' ${c1}:::::'      %s\"\n\"${c2}          :::::            '::' ${c1}:::::'       %s\"\n\"${c2} ........:::::               ' ${c1}:::::::::::.  %s\"\n\"${c2}:::::::::::::                 ${c1}:::::::::::::  %s\"\n\"${c2} ::::::::::: ${c1}..              :::::           %s\"\n\"${c2}     .::::: ${c1}.:::            :::::            %s\"\n\"${c2}    .:::::  ${c1}:::::          '''''    ${c2}.....    %s\"\n\"${c2}    :::::   ${c1}':::::.  ${c2}......:::::::::::::'    %s\"\n\"${c2}     :::     ${c1}::::::. ${c2}':::::::::::::::::'     %s\"\n\"${c1}            .:::::::: ${c2}'::::::::::            %s\"\n\"${c1}           .::::''::::.     ${c2}'::::.           %s\"\n\"${c1}          .::::'   ::::.     ${c2}'::::.          %s\"\n\"${c1}         .::::      ::::      ${c2}'::::.         
%s\")\n\t\t;;\n\n\t\t\"Guix System\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'orange')\n\t\t\t\tc2=$(getColor 'light orange')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"40\"\n\t\t\tfulloutput=(\n\"${c1} +                                    ? %s\"\n\"${c1} ??                                  ?I %s\"\n\"${c1}  ??I?   I??N              ${c2}???    ${c1}????  %s\"\n\"${c1}   ?III7${c2}???????          ??????${c1}7III?Z   %s\"\n\"${c1}     OI77\\$${c2}?????         ?????${c1}$77IIII      %s\"\n\"${c1}           ?????        ${c2}????            %s\"\n\"${c1}            ???ID      ${c2}????             %s\"\n\"${c1}             IIII     ${c2}+????             %s\"\n\"${c1}             IIIII    ${c2}????              %s\"\n\"${c1}              IIII   ${c2}?????              %s\"\n\"${c1}              IIIII  ${c2}????               %s\"\n\"${c1}               II77 ${c2}????$               %s\"\n\"${c1}               7777+${c2}????                %s\"\n\"${c1}                77++?${c2}??$                %s\"\n\"${c1}                N?+???${c2}?                 
%s\")\n\t\t\t;;\n\t\t\"BunsenLabs\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'blue')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"5\"\n\t\t\tlogowidth=\"25\"\n\t\t\tfulloutput=(\n\"${c1}            HC]          \"\n\"${c1}          H]]]]          \"\n\"${c1}        H]]]]]]4         \"\n\"${c1}      @C]]]]]]]]*        \"\n\"${c1}     @]]]]]]]]]]xd       \"\n\"${c1}    @]]]]]]]]]]]]]d      %s\"\n\"${c1}   0]]]]]]]]]]]]]]]]     %s\"\n\"${c1}   kx]]]]]]x]]x]]]]]%%    %s\"\n\"${c1}  #x]]]]]]]]]]]]]x]]]d   %s\"\n\"${c1}  #]]]]]]qW  x]]x]]]]]4  %s\"\n\"${c1}  k]x]]xg     %%x]]]]]]%%  %s\"\n\"${c1}  Wx]]]W       x]]]]]]]  %s\"\n\"${c1}  #]]]4         xx]]x]]  %s\"\n\"${c1}   px]           ]]]]]x  %s\"\n\"${c1}   Wx]           x]]x]]  %s\"\n\"${c1}    &x           x]]]]   %s\"\n\"${c1}     m           x]]]]   %s\"\n\"${c1}                 x]x]    %s\"\n\"${c1}                 x]]]    %s\"\n\"${c1}                ]]]]     %s\"\n\"${c1}                x]x      %s\"\n\"${c1}               x]q       %s\"\n\"${c1}               ]g        %s\"\n\"${c1}              q          %s\")\n\t\t;;\n\n\t\t\"SteamOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'grey') # Gray\n\t\t\t\tc2=$(getColor 'purple') # Dark Purple\n\t\t\t\tc3=$(getColor 'light purple') # Light Purple\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"37\"\n\t\t\tfulloutput=(\n\"${c2}               .,,,,.                %s\"\n\"${c2}         .,'onNMMMMMNNnn',.          %s\"\n\"${c2}      .'oNM${c3}ANK${c2}MMMMMMMMMMMNNn'.       %s\"\n\"${c3}    .'ANMMMMMMMXK${c2}NNWWWPFFWNNMNn.     %s\"\n\"${c3}   ;NNMMMMMMMMMMNWW'' ${c2},.., 'WMMM,    %s\"\n\"${c3}  ;NMMMMV+##+VNWWW' ${c3}.+;'':+, 'WM${c2}W,   %s\"\n\"${c3} ,VNNWP+${c1}######${c3}+WW,  ${c1}+:    ${c3}:+, +MMM,  %s\"\n\"${c3} '${c1}+#############,   +.    
,+' ${c3}+NMMM  %s\"\n\"${c1}   '*#########*'     '*,,*' ${c3}.+NMMMM. %s\"\n\"${c1}      \\`'*###*'          ,.,;###${c3}+WNM, %s\"\n\"${c1}          .,;;,      .;##########${c3}+W  %s\"\n\"${c1} ,',.         ';  ,+##############'  %s\"\n\"${c1}  '###+. :,. .,; ,###############'   %s\"\n\"${c1}   '####.. \\`'' .,###############'    %s\"\n\"${c1}     '#####+++################'      %s\"\n\"${c1}       '*##################*'        %s\"\n\"${c1}          ''*##########*''           %s\"\n\"${c1}               ''''''                %s\")\n\t\t;;\n\n\t\t\"SailfishOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'dark grey') # Grey\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"32\"\n\t\t\tfulloutput=(\n\"${c1}                 _a@b            %s\"\n\"${c1}              _#b (b             %s\"\n\"${c1}            _@@   @_         _,  %s\"\n\"${c1}          _#^@ _#*^^*gg,aa@^^    %s\"\n\"${c1}          #- @@^  _a@^^          %s\"\n\"${c1}          @_  *g#b               %s\"\n\"${c1}          ^@_   ^@_              %s\"\n\"${c1}            ^@_   @              %s\"\n\"${c1}             @(b (b              %s\"\n\"${c1}            #b(b#^               %s\"\n\"${c1}          _@_#@^                 %s\"\n\"${c1}       _a@a*^                    %s\"\n\"${c1}   ,a@*^                         %s\")\n\t\t;;\n\n\t\t\"Qubes OS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'cyan')\n\t\t\t\tc2=$(getColor 'blue')\n\t\t\t\tc3=$(getColor 'light blue')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"47\"\n\t\t\tfulloutput=(\n\"${c3}                      ####                     %s\"\n\"${c3}                    ########                   %s\"\n\"${c3}                  ############                 %s\"\n\"${c3}                #######  #######               
%s\"\n\"${c1}              #${c3}######      ######${c2}#             %s\"\n\"${c1}            ####${c3}###          ###${c2}####           %s\"\n\"${c1}          ######        ${c2}        ######         %s\"\n\"${c1}          ######        ${c2}        ######         %s\"\n\"${c1}          ######        ${c2}        ######         %s\"\n\"${c1}          ######        ${c2}        ######         %s\"\n\"${c1}          ######        ${c2}        ######         %s\"\n\"${c1}            #######     ${c2}     #######           %s\"\n\"${c1}              #######   ${c2}   #########           %s\"\n\"${c1}                ####### ${c2} ##############        %s\"\n\"${c1}                  ######${c2}######  ######         %s\"\n\"${c1}                    ####${c2}####     ###           %s\"\n\"${c1}                      ##${c2}##                     %s\"\n\"${c1}                                               %s\")\n\t\t;;\n\n\t\t\"PCLinuxOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'blue') # Blue\n\t\t\t\tc2=$(getColor 'light grey') # White\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"50\"\n\t\t\tfulloutput=(\n\"${c1}                                                  %s\"\n\"${c1}                                             <NNN>%s\"\n\"${c1}                                           <NNY   %s\"\n\"${c1}                 <ooooo>--.               ((      %s\"\n\"${c1}               Aoooooooooooo>--.           \\\\\\\\\\\\     %s\"\n\"${c1}              AooodNNNNNNNNNNNNNNNN>--.     
))    %s\"\n\"${c2}          (${c1}  AoodNNNNNNNNNNNNNNNNNNNNNNN>-///'    %s\"\n\"${c2}          \\\\\\\\\\\\\\\\${c1}AodNNNNNNNNNNNNNNNNNNNNNNNNNNNY/      %s\"\n\"${c1}           AodNNNNNNNNNNNNNNNNNNNNNNNNNNNNN       %s\"\n\"${c1}          AdNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNA       %s\"\n\"${c1}         (${c2}/)${c1}NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNA      %s\"\n\"${c2}         //${c1}<NNNNNNNNNNNNNNNNNY'   YNNY YNNNN      %s\"\n\"${c2} ,====#Y//${c1}   \\`<NNNNNNNNNNNY       ANY     YNA     %s\"\n\"${c1}               ANY<NNNNYYN       .NY        YN.   %s\"\n\"${c1}             (NNY       NN      (NND       (NND   %s\"\n\"${c1}                      (NNU                        %s\"\n\"${c1}                                                  %s\")\n\t\t;;\n\n\t\t\"Exherbo\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'dark grey')  # Black\n\t\t\t\tc2=$(getColor 'light blue') # Blue\n\t\t\t\tc3=$(getColor 'light red')  # Beige\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"46\"\n\t\t\tfulloutput=(\n\"${c1}  ,                                           %s\"\n\"${c1}  OXo.                                        %s\"\n\"${c1}  NXdX0:    .cok0KXNNXXK0ko:.                 %s\"\n\"${c1}  KX  '0XdKMMK;.xMMMk, .0MMMMMXx;  ...        %s\"\n\"${c1}  'NO..xWkMMx   kMMM    cMMMMMX,NMWOxOXd.     %s\"\n\"${c1}    cNMk  NK    .oXM.   OMMMMO. 0MMNo  kW.    %s\"\n\"${c1}    lMc   o:       .,   .oKNk;   ;NMMWlxW'    %s\"\n\"${c1}   ;Mc    ..   .,,'    .0M${c2}g;${c1}WMN'dWMMMMMMO     %s\"\n\"${c1}   XX        ,WMMMMW.  cM${c2}cfli${c1}WMKlo.   .kMk    %s\"\n\"${c1}  .Mo        .WM${c2}GD${c1}MW.   XM${c2}WO0${c1}MMk        oMl   %s\"\n\"${c1}  ,M:         ,XMMWx::,''oOK0x;          NM.  %s\"\n\"${c1}  'Ml      ,kNKOxxxxxkkO0XXKOd:.         
oMk  %s\"\n\"${c1}   NK    .0Nxc${c3}:::::::::::::::${c1}fkKNk,      .MW  %s\"\n\"${c1}   ,Mo  .NXc${c3}::${c1}qXWXb${c3}::::::::::${c1}oo${c3}::${c1}lNK.    .MW  %s\"\n\"${c1}    ;Wo oMd${c3}:::${c1}oNMNP${c3}::::::::${c1}oWMMMx${c3}:${c1}c0M;   lMO  %s\"\n\"${c1}     'NO;W0c${c3}:::::::::::::::${c1}dMMMMO${c3}::${c1}lMk  .WM'  %s\"\n\"${c1}       xWONXdc${c3}::::::::::::::${c1}oOOo${c3}::${c1}lXN. ,WMd   %s\"\n\"${c1}        'KWWNXXK0Okxxo,${c3}:::::::${c1},lkKNo  xMMO    %s\"\n\"${c1}          :XMNxl,';:lodxkOO000Oxc. .oWMMo     %s\"\n\"${c1}            'dXMMXkl;,.        .,o0MMNo'      %s\"\n\"${c1}               ':d0XWMMMMWNNNNMMMNOl'         %s\"\n\"${c1}                     ':okKXWNKkl'             %s\")\n\t\t;;\n\n\t\t\"Red Star OS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light red')  # Red\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"45\"\n\t\t\tfulloutput=(\n\"${c1}                      ..                     %s\"\n\"${c1}                    .oK0l                    %s\"\n\"${c1}                   :0KKKKd.                  %s\"\n\"${c1}                 .xKO0KKKKd                  %s\"\n\"${c1}                ,Od' .d0000l                 %s\"\n\"${c1}               .c;.   .'''...           ..'. %s\"\n\"${c1}  .,:cloddxxxkkkkOOOOkkkkkkkkxxxxxxxxxkkkx:  %s\"\n\"${c1}  ;kOOOOOOOkxOkc'...',;;;;,,,'',;;:cllc:,.   %s\"\n\"${c1}   .okkkkd,.lko  .......',;:cllc:;,,'''''.   %s\"\n\"${c1}     .cdo. :xd' cd:.  ..';'',,,'',,;;;,'.    %s\"\n\"${c1}        . .ddl.;doooc'..;oc;'..';::;,'.      %s\"\n\"${c1}          coo;.oooolllllllcccc:'.  .         %s\"\n\"${c1}         .ool''lllllccccccc:::::;.           %s\"\n\"${c1}         ;lll. .':cccc:::::::;;;;'           %s\"\n\"${c1}         :lcc:'',..';::::;;;;;;;,,.          %s\"\n\"${c1}         :cccc::::;...';;;;;,,,,,,.          %s\"\n\"${c1}         ,::::::;;;,'.  ..',,,,'''.          %s\"\n\"${c1}          ........      
    ......           %s\"\n\"${c1}                                             %s\")\n\t\t;;\n\n\t\t\"SparkyLinux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light gray') # Gray\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"48\"\n\t\t\tfulloutput=(\n\"${c1}             .            \\`-:-\\`                %s\"\n\"${c1}            .o\\`       .-///-\\`                  %s\"\n\"${c1}           \\`oo\\`    .:/++:.                     %s\"\n\"${c1}           os+\\`  -/+++:\\` \\`\\`.........\\`\\`\\`        %s\"\n\"${c1}          /ys+\\`./+++/-.-::::::----......\\`\\`     %s\"\n\"${c1}         \\`syyo\\`++o+--::::-::/+++/-\\`\\`           %s\"\n\"${c1}         -yyy+.+o+\\`:/:-:sdmmmmmmmmdy+-\\`        %s\"\n\"${c1}  ::-\\`   :yyy/-oo.-+/\\`ymho++++++oyhdmdy/\\`      %s\"\n\"${c1}  \\`/yy+-\\`.syyo\\`+o..o--h..osyhhddhs+//osyy/\\`    %s\"\n\"${c1}    -ydhs+-oyy/.+o.-: \\` \\`  :/::+ydhy+\\`\\`\\`-os-   %s\"\n\"${c1}     .sdddy::syo--/:.     \\`.:dy+-ohhho    ./:  %s\"\n\"${c1}       :yddds/:+oo+//:-\\`- /+ +hy+.shhy:     \\`\\` %s\"\n\"${c1}        \\`:ydmmdysooooooo-.ss\\`/yss--oyyo        %s\"\n\"${c1}          \\`./ossyyyyo+:-/oo:.osso- .oys        %s\"\n\"${c1}         \\`\\`..-------::////.-oooo/   :so        %s\"\n\"${c1}      \\`...----::::::::--.\\`/oooo:    .o:        %s\"\n\"${c1}             \\`\\`\\`\\`\\`\\`\\`     ++o+:\\`     \\`:\\`        %s\"\n\"${c1}                       ./+/-\\`        \\`         %s\"\n\"${c1}                     \\`-:-.                     
%s\"\n\"${c1}                     \\`\\`                        %s\")\n\t\t;;\n\n\t\t\"Pardus\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'yellow') # Light Yellow\n\t\t\t\tc2=$(getColor 'light gray') # Light Gray\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"45\"\n\t\t\tfulloutput=(\n\"\"\n\"${c1}   .smNdy+-    \\`.:/osyyso+:.\\`    -+ydmNs.   %s\"\n\"${c1}  /Md- -/ymMdmNNdhso/::/oshdNNmdMmy/. :dM/  %s\"\n\"${c1}  mN.     oMdyy- -y          \\`-dMo     .Nm  %s\"\n\"${c1}  .mN+\\`  sMy hN+ -:             yMs  \\`+Nm.  %s\"\n\"${c1}   \\`yMMddMs.dy \\`+\\`               sMddMMy\\`   %s\"\n\"${c1}     +MMMo  .\\`  .                 oMMM+     %s\"\n\"${c1}     \\`NM/    \\`\\`\\`\\`\\`.\\`    \\`.\\`\\`\\`\\`\\`    +MN\\`     %s\"\n\"${c1}     yM+   \\`.-:yhomy    ymohy:-.\\`   +My     %s\"\n\"${c1}     yM:          yo    oy          :My     %s\"\n\"${c1}     +Ms         .N\\`    \\`N.      +h sM+     %s\"\n\"${c1}     \\`MN      -   -::::::-   : :o:+\\`NM\\`     %s\"\n\"${c1}      yM/    sh   -dMMMMd-   ho  +y+My      %s\"\n\"${c1}      .dNhsohMh-//: /mm/ ://-yMyoshNd\\`      %s\"\n\"${c1}        \\`-ommNMm+:/. oo ./:+mMNmmo:\\`        %s\"\n\"${c1}       \\`/o+.-somNh- :yy: -hNmos-.+o/\\`       %s\"\n\"${c1}      ./\\` .s/\\`s+sMdd+\\`\\`+ddMs+s\\`/s. \\`/.      %s\"\n\"${c1}          : -y.  -hNmddmNy.  
.y- :          %s\"\n\"${c1}           -+       \\`..\\`       +-           %s\"\n\"%s\")\n\t\t;;\n\n\t\t\"Sulin\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light gray') # Light GRAY\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"45\"\n\t\t\tfulloutput=(\n\"\"\n\"${c1}                         /\\          /\\ %s\"\n\"${c1}                        ( \\\\        // ) %s\"\n\"${c1}                         \\ \\\\      // /  %s\"\n\"${c1}                          \\_\\\\||||//_/   %s\"\n\"${c1}                           \\/ _  _ \\    %s\"\n\"${c1}                          \\/|(O)(O)|    %s\"\n\"${c1}                         \\/ |      |    %s\"\n\"${c1}     ___________________\\/  \\      /    %s\"\n\"${c1}    //                //     |____|     %s\"\n\"${c1}   //                ||     /      \\    %s\"\n\"${c1}  //|                \\|     \\ 0  0 /    %s\"\n\"${c1} // \\       )         V    / \\____/     %s\"\n\"${c1}//   \\     /        (     /             %s\"\n\"${c1}      \\   /_________|  |_/              %s\"\n\"${c1}      /  /\\   /     |  ||               %s\"\n\"${c1}     /  / /  /      \\  ||               %s\"\n\"${c1}     | |  | |        | ||               %s\"\n\"${c1}     | |  | |        | ||               %s\"\n\"${c1}     |_|  |_|        |_||               %s\"\n\"${c1}     \\_\\  \\_\\        \\_\\\\               %s\"\n\"\")\n\t\t;;\n\n\t\t\"SwagArch\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"48\"\n\t\t\tfulloutput=(\n\"${c1}                                               %s\"\n\"${c1}          .;ldkOKXXNNNNXXK0Oxoc,.              %s\"\n\"${c1}     ,lkXMMNK0OkkxkkOKWMMMMMMMMMM;             %s\"\n\"${c1}   'K0xo  ..,;:c:.     
\\`'lKMMMMM0              %s\"\n\"${c1}       .lONMMMMMM'         \\`lNMk'              %s\"\n\"${c1}      ;WMMMMMMMMMO.              ${c2}....::...     %s\"\n\"${c1}      OMMMMMMMMMMMMKl.       ${c2}.,;;;;;ccccccc,   %s\"\n\"${c1}      \\`0MMMMMMMMMMMMMM0:         ${c2}.. .ccccccc.  %s\"\n\"${c1}        'kWMMMMMMMMMMMMMNo.   ${c2}.,:'  .ccccccc.  %s\"\n\"${c1}          \\`c0MMMMMMMMMMMMMN,${c2},:c;    :cccccc:   %s\"\n\"${c1}   ckl.      \\`lXMMMMMMMMMX${c2}occcc:.. ;ccccccc.   %s\"\n\"${c1}  dMMMMXd,     \\`OMMMMMMWk${c2}ccc;:''\\` ,ccccccc:    %s\"\n\"${c1}  XMMMMMMMWKkxxOWMMMMMNo${c2}ccc;     .cccccccc.    %s\"\n\"${c1}   \\`':ldxO0KXXXXXK0Okdo${c2}cccc.     :cccccccc.    %s\"\n\"${c2}                      :ccc:'     \\`cccccccc:,   %s\"\n\"${c2}                                     ''        %s\"\n\"${c2}                                               %s\")\n\t\t;;\n\n\t\t\"EuroLinux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\";fi;\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"39\"\n\t\t\tfulloutput=(\n\n\"${c1}                                       %s\"\n\"${c1}           DZZZZZZZZZZZZZ              %s\"\n\"${c1}         ZZZZZZZZZZZZZZZZZZZ           %s\"\n\"${c1}          ZZZZZZZZZZZZZZZZZZZZ         %s\"\n\"${c1}           OZZZZZZZZZZZZZZZZZZZZ       %s\"\n\"${c1}   Z         ZZZ    8ZZZZZZZZZZZZ      %s\"\n\"${c1}  ZZZ                   ZZZZZZZZZZ     %s\"\n\"${c1} ZZZZZN                   ZZZZZZZZZ    %s\"\n\"${c1} ZZZZZZZ                    ZZZZZZZZ   %s\"\n\"${c1}ZZZZZZZZ                    OZZZZZZZ   %s\"\n\"${c1}ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ  %s\"\n\"${c1}ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ  %s\"\n\"${c1}ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ  %s\"\n\"${c1}ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ   %s\"\n\"${c1}ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ    %s\"\n\"${c1}ZZZZZZZZ                               %s\"\n\"${c1} ZZZZZZZZ                       
       %s\"\n\"${c1} OZZZZZZZZO                            %s\"\n\"${c1}  ZZZZZZZZZZZ                          %s\"\n\"${c1}   OZZZZZZZZZZZZZZZN                   %s\"\n\"${c1}     ZZZZZZZZZZZZZZZZ                  %s\"\n\"${c1}      DZZZZZZZZZZZZZZZ                 %s\"\n\"${c1}         ZZZZZZZZZZZZZ                 %s\"\n\"${c1}            NZZZZZZZZ                  %s\"\n\"${c1}                                       %s\")\n\t\t;;\n\n\t\t\"OBRevenge\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'red') # Red\n\t\t\t\tc2=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"48\"\n\t\t\tfulloutput=(\n\"${c1}       _@@@@   @@@g_      \t%s\"\n\"${c1}     _@@@@@@   @@@@@@     \t%s\"\n\"${c1}    _@@@@@@M   W@@@@@@_   \t%s\"\n\"${c1}   j@@@@P        ^W@@@@   \t%s\"\n\"${c1}   @@@@L____  _____Q@@@@  \t%s\"\n\"${c1}  Q@@@@@@@@@@j@@@@@@@@@@  \t%s\"\n\"${c1}  @@@@@    T@j@    T@@@@@\t%s\"\n\"${c1}  @@@@@ ___Q@J@    _@@@@@ \t%s\"\n\"${c1}  @@@@@fMMM@@j@jggg@@@@@@ \t%s\"\n\"${c1}  @@@@@    j@j@^MW@P @@@@ \t%s\"\n\"${c1}  Q@@@@@ggg@@f@   @@@@@@L \t%s\"\n\"${c1}  ^@@@@WWMMP  @    Q@@@@  \t%s\"\n\"${c1}   @@@@@_         _@@@@l  \t%s\"\n\"${c1}    W@@@@@g_____g@@@@@P   \t%s\"\n\"${c1}     @@@@@@@@@@@@@@@@l    \t%s\"\n\"${c1}      ^W@@@@@@@@@@@P      \t%s\"\n\"${c1}         ^TMMMMTll   \t\t%s\"\n\"${c1}                                  %s\")\n\t\t;;\n\n\t\t\"Parrot Security\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"43\"\n\t\t\tfulloutput=(\n\"${c1}    ,:oho/-.                              %s\"\n\"${c1}   mMMMMMMMMMMMNmmdhy-                    %s\"\n\"${c1}   dMMMMMMMMMMMMMMMMMMs.                  
%s\"\n\"${c1}   +MMsohNMMMMMMMMMMMMMm/                 %s\"\n\"${c1}   .My   .+dMMMMMMMMMMMMMh.               %s\"\n\"${c1}    +       :NMMMMMMMMMMMMNo              %s\"\n\"${c1}             \\`yMMMMMMMMMMMMMm:            %s\"\n\"${c1}               /NMMMMMMMMMMMMMy.          %s\"\n\"${c1}                .hMMMMMMMMMMMMMN+         %s\"\n\"${c1}                    \\`\\`-NMMMMMMMMMd-       %s\"\n\"${c1}                       /MMMMMMMMMMMs.     %s\"\n\"${c1}                        mMMMMMMMsyNMN/    %s\"\n\"${c1}                        +MMMMMMMo  :sNh.  %s\"\n\"${c1}                        \\`NMMMMMMm     -o/ %s\"\n\"${c1}                         oMMMMMMM.        %s\"\n\"${c1}                         \\`NMMMMMM+        %s\"\n\"${c1}                          +MMd/NMh        %s\"\n\"${c1}                           mMm -mN\\`       %s\"\n\"${c1}                           /MM  \\`h:       %s\"\n\"${c1}                            dM\\`   .       %s\"\n\"${c1}                            :M-           %s\"\n\"${c1}                             d:           %s\"\n\"${c1}                             -+           %s\"\n\"${c1}                              -           %s\")\n\t\t;;\n\n\t\t\"Amazon Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light orange') # Orange\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"0\"\n\t\t\tlogowidth=\"40\"\n\t\t\tfulloutput=(\n\"${c1}               .,:cc:,.              %s\"\n\"${c1}          .:okXWMMMMMMWXko:.         %s\"\n\"${c1}      .:kNMMMMMMMMMMMMMMMMMMNkc.     %s\"\n\"${c1}   cc,.    \\`':ox0XWWXOxo:'\\`    .,c;  %s\"\n\"${c1}   KMMMMXOdc,.    ''    .,cdOXWMMMO  %s\"\n\"${c1}   KMMMMMMMMMMWXO.  
.OXWMMMMMMMMMMO  %s\"\n\"${c1}   KMMMMMMMMMMMMM,  ,MMMMMMMMMMMMMO  %s\"\n\"${c1}   KMMMMMMMMMMMMM,  ,MMMMMMMMMMMMMO  %s\"\n\"${c1}   KMMMMMMMMMMMMM,  ,MMMMMMMMMMMMMO  %s\"\n\"${c1}   KMMMMMMMMMMMMM,  ,MMMMMMMMMMMMMO  %s\"\n\"${c1}   KMMMMMMMMMMMMM,  ,MMMMMMMMMMMMMO  %s\"\n\"${c1}   KMMMMMMMMMMMMM,  ,MMMMMMMMMMMMMk  %s\"\n\"${c1}   KMMMMMMMMMMMMM,  ,MMMMMMMMMMMMMd  %s\"\n\"${c1}   \\`:lx0WMMMMMMMM,  ,MMMMMMMMW0xl:\\`  %s\"\n\"${c1}         \\`'lx0NMM,  ,MMN0xc'\\`        %s\"\n\"${c1}               \\`''  ''\\`              %s\")\n\t\t;;\n\n\t\t\"Source Mage GNU/Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'dark gray')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"40\"\n\t\t\tfulloutput=(\n\"${c1}                              \"\n\"${c1}       -sdNMNds:              %s\"\n\"${c1} .shmNMMMMMMNNNNh.            %s\"\n\"${c1}  \\` \\':sNNNNNNNNNNm-           %s\"\n\"${c1}      .NNNNNNmmmmmdo.         %s\"\n\"${c1}     -mNmmmmmmmmmmddd:        %s\"\n\"${c1}     +mmmmmmmddddddddh-       %s\"\n\"${c1}     :mmdddddddddhhhhhy.      %s\"\n\"${c1}     -ddddddhhhhhhhhyyyo      %s\"\n\"${c1}     .hyhhhhhhhyyyyyyyys:     %s\"\n\"${c1}      .\\`shyyyyyyyyyssssso     %s\"\n\"${c1}        \\`/yyyysssssssoooo.    %s\"\n\"${c1}          .osssssooooo+++/    %s\"\n\"${c1}           \\`:+oooo+++++///.   %s\"\n\"${c1}            \\`://++//////::-   %s\"\n\"${c1}        ..-///  .//::::::--.  
%s\"\n\"${c1}       \`\`\`\` \`\`\`  :::--------\` %s\"\n\"${c1}                 \`------....\` %s\"\n\"${c1}                  \`.........\` %s\"\n\"${c1}                  \`......\` %s\")\n\t\t;;\n\n\t\t\"OS Elbrus\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'light blue') # Light Blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"33\"\n\t\t\tfulloutput=(\"\"\n\"${c1}   ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄   %s\"\n\"${c1}   ██▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀██   %s\"\n\"${c1}   ██                       ██   %s\"\n\"${c1}   ██   ███████   ███████   ██   %s\"\n\"${c1}   ██   ██   ██   ██   ██   ██   %s\"\n\"${c1}   ██   ██   ██   ██   ██   ██   %s\"\n\"${c1}   ██   ██   ██   ██   ██   ██   %s\"\n\"${c1}   ██   ██   ██   ██   ██   ██   %s\"\n\"${c1}   ██   ██   ███████   ███████   %s\"\n\"${c1}   ██   ██                  ██   %s\"\n\"${c1}   ██   ██▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄██   %s\"\n\"${c1}   ██   ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀██   %s\"\n\"${c1}   ██                       ██   %s\"\n\"${c1}   ███████████████████████████   %s\"\n\"                                 %s\")\n\t\t;;\n\n\t\t\"PureOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'dark grey') # \"Black\"\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"44\"\n\t\t\tfulloutput=(\"\"\n\"                                            %s\"\n\"${c1}  dmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmd  %s\"\n\"${c1}  dNm//////////////////////////////////mNd  %s\"\n\"${c1}  dNd                                  dNd  %s\"\n\"${c1}  dNd                                  dNd  %s\"\n\"${c1}  dNd                                  dNd  %s\"\n\"${c1}  dNd                                  dNd  %s\"\n\"${c1}  dNd                                  dNd  %s\"\n\"${c1}  dNd                                  dNd  %s\"\n\"${c1}  dNd                                  dNd  %s\"\n\"${c1}  dNd   
                               dNd  %s\"\n\"${c1}  dNm//////////////////////////////////mNd  %s\"\n\"${c1}  dmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmd  %s\"\n\"                                            %s\")\n\t\t;;\n\n    \"DraugerOS\")\n      if [[ \"$no_color\" != \"1\" ]]; then\n        c1=$(getColor 'red') # red\n      fi\n      if [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n      startline=\"0\"\n      logowidth=\"52\"\n      fulloutput=(\n\"${c1}                                                 %s\"\n\"${c1}                      &   &                      %s\"\n\"${c1}                     %(   (#                     %s\"\n\"${c1}                    (((   (((                    %s\"\n\"${c1}                   ((((   ((((                   %s\"\n\"${c1}                  ((((     ((((                  %s\"\n\"${c1}                 ((((       ((((                 %s\"\n\"${c1}                ((((         ((((                %s\"\n\"${c1}              %((((           ((((%              %s\"\n\"${c1}             ((((               ((((             %s\"\n\"${c1}            ((((                 ((((            %s\"\n\"${c1}           ((((                   ((((           %s\"\n\"${c1}         &((((                     ((((&         %s\"\n\"${c1}        #((((                       ((((#        %s\"\n\"${c1}       (((((                         (((((       %s\"\n\"${c1}      ((((#                           #((((      %s\"\n\"${c1}     (#  (((((((((((((((((((((((((((((((  #(     %s\"\n\"${c1}       (((((((((((((((((((((((((((((((((((       %s\"\n\"                                                      %s\")\n\t\t;;\n\n\t\t\"januslinux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'white') # white\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"25\"\n\t\t\tfulloutput=(\"\"\n\"                         %s\"\n\"${c1}   ________________      %s\"\n\"${c1}  |\\               
\\     %s\"\n\"${c1}  | \\               \\    %s\"\n\"${c1}  |  \\               \\   %s\"\n\"${c1}  |   \\ ______________\\  %s\"\n\"${c1}  |    |              |  %s\"\n\"${c1}  |    |              |  %s\"\n\"${c1}  |    |              |  %s\"\n\"${c1}   \\   |  januslinux  |  %s\"\n\"${c1}    \\  |              |  %s\"\n\"${c1}     \\ |              |  %s\"\n\"${c1}      \\|______________|  %s\"\n\"                         %s\")\n\t\t;;\n\t\t\"EndeavourOS\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'yellow')\n\t\t\t\tc3=$(getColor 'purple')\n\t\t\t\tc5=$(getColor 'cyan')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; c5=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"44\"\n\t\t\tfulloutput=(\"\"\n\"${c1}                  +${c3}I${c5}+\t\t        %s\"\n\"${c1}                 +${c3}777${c5}+                  %s\"\n\"${c1}\t        +${c3}77777${c5}++\t\t%s\"\n\"${c1}\t       +${c3}7777777${c5}++\t\t%s\"\n\"${c1}\t      +${c3}7777777777${c5}++\t\t%s\"\n\"${c1}\t    ++${c3}7777777777777${c5}++\t\t%s\"\n\"${c1}\t   ++${c3}777777777777777${c5}+++       \t%s\"\n\"${c1}\t ++${c3}77777777777777777${c5}++++        %s\"\n\"${c1}\t++${c3}7777777777777777777${c5}++++       %s\"\n\"${c1}      +++${c3}777777777777777777777${c5}++++\t%s\"\n\"${c1}    ++++${c3}7777777777777777777777${c5}+++++  \t%s\"\n\"${c1}   ++++${c3}77777777777777777777777${c5}+++++  \t%s\"\n\"${c1}  +++++${c3}777777777777777777777777${c5}+++++\t%s\"\n\"${c5}       +++++++${c3}7777777777777777${c5}++++++\t%s\"\n\"${c5}      +++++++++++++++++++++++++++++     %s\"\n\"${c5}     +++++++++++++++++++++++++++        %s\"\n\"                                      \t%s\"\n)               ;;\n                \"TeArch\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'blue') # blue\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; 
fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"45\"\n\t\t\tfulloutput=(\"\"\n\"${c1}                                          %s\"\n\"${c1}                @@@@@@@@@@@@@             %s\"\n\"${c1}       @@@@@@@@@             @@@@@@@      %s\"\n\"${c1}      @@@@@                     @@@@@     %s\"\n\"${c1}      @@                           @@     %s\"\n\"${c1}       @@                         @@      %s\"\n\"${c1}        @                         @       %s\"\n\"${c1}        @@@@@@@@@@@@@@@@@@@@@@@@ @@       %s\"\n\"${c1}        .@@@@@@@@@@@@/@@@@@@@@@@@@        %s\"\n\"${c1}        @@@@@@@@@@@@///@@@@@@@@@@@@       %s\"\n\"${c1}       @@@@@@@@@@@@@((((@@@@@@@@@@@@      %s\"\n\"${c1}      @@@@@@@@@@@#(((((((#@@@@@@@@@@@     %s\"\n\"${c1}     @@@@@@@@@@@#//////////@@@@@@@@@@&    %s\"\n\"${c1}     @@@@@@@@@@////@@@@@////@@@@@@@@@@    %s\"\n\"${c1}     @@@@@@@@//////@@@@@/////@@@@@@@@@    %s\"\n\"${c1}     @@@@@@@//@@@@@@@@@@@@@@@//@@@@@@@    %s\"\n\"${c1}  @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ %s\"\n\"${c1} @@     .@@@@@@@@@@@@@@@@@@@@@@@@@      @ %s\"\n\"${c1}  @@@@@@           @@@.           
@@@@@@@ %s\"\n\"${c1}    @@@@@@@&@@@@@@@#  #@@@@@@@@@@@@@@@@   %s\"\n\"${c1}       @@@@@@@@@@@@@@@@@@@@@@@@@@@@@      %s\"\n\"${c1}           @@@@@@@@@@@@@@@@@@@@@          %s\"\n\"                                               %s\")\n\t\t;;\n\t\t\"Rocky Linux\")\n\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\tc1=$(getColor 'green')\n\t\t\tfi\n\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\tstartline=\"1\"\n\t\t\tlogowidth=\"37\"\n\t\t\tfulloutput=(\"\"\n\"${c1}             //////////              %s\"\n\"${c1}          (/(/(/(/(/(/(/(/           %s\"\n\"${c1}      ,//////////////////////        %s\"\n\"${c1}    (/(/(/(/(/(/(/(/(/(/(/(/(/(*     %s\"\n\"${c1}   //////////////////////////////    %s\"\n\"${c1}  (/(/(/(/(/(/(/(/(/(///(/(/(/(/(/   %s\"\n\"${c1} ///////////////////      /////////  %s\"\n\"${c1} /(/(/(/(/(/(/(/(/          //(/(/(  %s\"\n\"${c1} //////////////                ////  %s\"\n\"${c1}  (/(/(/(/(//        /(/(        /   %s\"\n\"${c1}   ///////        .////////.         
%s\"\n\"${c1}    //(/        (/(/(/(/(/(/(/       %s\"\n\"${c1}             .///////////////        %s\"\n\"${c1}           /(/(/(/(/(/(/(,           %s\"\n\"${c1}             //////////              %s\"\n\"                                          %s\")\n\t\t;;\n\t\t*)\n\t\t\tif [[ \"${kernel}\" =~ \"Linux\" ]]; then\n\t\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\t\tc1=$(getColor 'white') # White\n\t\t\t\t\tc2=$(getColor 'dark grey') # Light Gray\n\t\t\t\t\tc3=$(getColor 'yellow') # Light Yellow\n\t\t\t\tfi\n\t\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; c2=\"${my_lcolor}\"; c3=\"${my_lcolor}\"; fi\n\t\t\t\tstartline=\"0\"\n\t\t\t\tlogowidth=\"28\"\n\t\t\t\tfulloutput=(\n\"${c2}                            %s\"\n\"${c2}                            %s\"\n\"${c2}                            %s\"\n\"${c2}         #####              %s\"\n\"${c2}        #######             %s\"\n\"${c2}        ##${c1}O${c2}#${c1}O${c2}##             %s\"\n\"${c2}        #${c3}#####${c2}#             %s\"\n\"${c2}      ##${c1}##${c3}###${c1}##${c2}##           %s\"\n\"${c2}     #${c1}##########${c2}##          %s\"\n\"${c2}    #${c1}############${c2}##         %s\"\n\"${c2}    #${c1}############${c2}###        %s\"\n\"${c3}   ##${c2}#${c1}###########${c2}##${c3}#        %s\"\n\"${c3} ######${c2}#${c1}#######${c2}#${c3}######      %s\"\n\"${c3} #######${c2}#${c1}#####${c2}#${c3}#######      %s\"\n\"${c3}   #####${c2}#######${c3}#####        %s\"\n\"${c2}                            %s\"\n\"${c2}                            %s\"\n\"${c2}                            %s\")\n\n\t\t\telif [[ \"${kernel}\" =~ \"Hurd\" || \"${kernel}\" =~ \"GNU\" || \"${OSTYPE}\" == \"gnu\" ]]; then\n\t\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\t\tc1=$(getColor 'dark grey') # Light Gray\n\t\t\t\tfi\n\t\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\t\tstartline=\"0\"\n\t\t\t\tlogowidth=\"37\"\n\t\t\t\tfulloutput=(\n\"${c1}    _-\\`\\`\\`\\`\\`-,           ,- '- . 
     %s\"\n\"${c1}   .'   .- - |          | - -.  \\`.   %s\"\n\"${c1}  /.'  /                     \\`.   \\\\  %s\"\n\"${c1} :/   :      _...   ..._      \\`\\`   : %s\"\n\"${c1} ::   :     /._ .\\`:'_.._\\\\.    ||   : %s\"\n\"${c1} ::    \\`._ ./  ,\\`  :    \\\\ . _.''   . %s\"\n\"${c1} \\`:.      /   |  -.  \\\\-. \\\\\\\\\\_      /  %s\"\n\"${c1}   \\\\:._ _/  .'   .@)  \\\\@) \\` \\`\\\\ ,.'   %s\"\n\"${c1}      _/,--'       .- .\\\\,-.\\`--\\`.     %s\"\n\"${c1}        ,'/''     (( \\\\ \\`  )          %s\"\n\"${c1}         /'/'  \\\\    \\`-'  (           %s\"\n\"${c1}          '/''  \\`._,-----'           %s\"\n\"${c1}           ''/'    .,---'            %s\"\n\"${c1}            ''/'      ;:             %s\"\n\"${c1}              ''/''  ''/             %s\"\n\"${c1}                ''/''/''             %s\"\n\"${c1}                  '/'/'              %s\"\n\"${c1}                   \\`;                %s\")\n# Source: https://www.gnu.org/graphics/alternative-ascii.en.html\n# Copyright (C) 2003, Vijay Kumar\n# Permission is granted to copy, distribute and/or modify this image under the\n# terms of the GNU General Public License as published by the Free Software\n# Foundation; either version 2 of the License, or (at your option) any later\n# version.\n\n\t\t\telse\n\t\t\t\tif [[ \"$no_color\" != \"1\" ]]; then\n\t\t\t\t\tc1=$(getColor 'light green') # Light Green\n\t\t\t\tfi\n\t\t\t\tif [ -n \"${my_lcolor}\" ]; then c1=\"${my_lcolor}\"; fi\n\t\t\t\tstartline=\"0\"\n\t\t\t\tlogowidth=\"44\"\n\t\t\t\tfulloutput=(\n\"${c1}                                            %s\"\n\"${c1}                                            %s\"\n\"${c1} UUU     UUU NNN      NNN IIIII XXX     XXXX%s\"\n\"${c1} UUU     UUU NNNN     NNN  III    XX   xXX  %s\"\n\"${c1} UUU     UUU NNNNN    NNN  III     XX xXX   %s\"\n\"${c1} UUU     UUU NNN NN   NNN  III      XXXX    %s\"\n\"${c1} UUU     UUU NNN  NN  NNN  III      xXX     %s\"\n\"${c1} UUU     UUU NNN   NN NNN  III     xXXXX    
%s\"\n\"${c1} UUU     UUU NNN    NNNNN  III    xXX  XX   %s\"\n\"${c1}  UUUuuuUUU  NNN     NNNN  III   xXX    XX  %s\"\n\"${c1}    UUUUU    NNN      NNN IIIII xXXx    xXXx%s\"\n\"${c1}                                            %s\"\n\"${c1}                                            %s\"\n\"${c1}                                            %s\"\n\"${c1}                                            %s\")\n\t\t\tfi\n\t\t;;\n\tesac\n\n\n\t# Truncate lines based on terminal width.\n\tif [ \"$truncateSet\" == \"Yes\" ]; then\n\t\tmissinglines=$((${#out_array[*]} + startline - ${#fulloutput[*]}))\n\t\tfor ((i=0; i<missinglines; i++)); do\n\t\t\tfulloutput+=(\"${c1}$(printf '%*s' \"$logowidth\")%s\")\n\t\tdone\n\t\tfor ((i=0; i<${#fulloutput[@]}; i++)); do\n\t\t\tmy_out=$(printf \"${fulloutput[i]}$c0\\n\" \"${out_array}\")\n\t\t\tmy_out_full=$(echo \"$my_out\" | cat -v)\n\t\t\ttermWidth=$(tput cols)\n\t\t\tSHOPT_EXTGLOB_STATE=$(shopt -p extglob)\n\t\t\tread SHOPT_CMD SHOPT_STATE SHOPT_OPT <<< \"${SHOPT_EXTGLOB_STATE}\"\n\t\t\tif [[ ${SHOPT_STATE} == \"-u\" ]]; then\n\t\t\t\tshopt -s extglob\n\t\t\tfi\n\n\t\t\tstringReal=\"${my_out_full//\\^\\[\\[@([0-9]|[0-9];[0-9][0-9])m}\"\n\n\t\t\tif [[ ${SHOPT_STATE} == \"-u\" ]]; then\n\t\t\t\tshopt -u extglob\n\t\t\tfi\n\n\t\t\tif [[ \"${#stringReal}\" -le \"${termWidth}\" ]]; then\n\t\t\t\techo -e \"${my_out}\"$c0\n\t\t\telif [[ \"${#stringReal}\" -gt \"${termWidth}\" ]]; then\n\t\t\t\t((NORMAL_CHAR_COUNT=0))\n\t\t\t\tfor ((j=0; j<=${#my_out_full}; j++)); do\n\t\t\t\t\tif [[ \"${my_out_full:${j}:3}\" == '^[[' ]]; then\n\t\t\t\t\t\tif [[ \"${my_out_full:${j}:5}\" =~ ^\\^\\[\\[[[:digit:]]m$ ]]; then\n\t\t\t\t\t\t\tif [[ ${j} -eq 0 ]]; then\n\t\t\t\t\t\t\t\tj=$((j + 5))\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tj=$((j + 4))\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\telif [[ \"${my_out_full:${j}:8}\" =~ ^\\^\\[\\[[[:digit:]]\\;[[:digit:]][[:digit:]]m ]]; then\n\t\t\t\t\t\t\tif [[ ${j} -eq 0 ]]; then\n\t\t\t\t\t\t\t\tj=$((j + 
8))\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tj=$((j + 7))\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\tfi\n\t\t\t\t\telse\n\t\t\t\t\t\t((NORMAL_CHAR_COUNT++))\n\t\t\t\t\t\tif [[ ${NORMAL_CHAR_COUNT} -ge ${termWidth} ]]; then\n\t\t\t\t\t\t\techo -e \"${my_out:0:$((j - 5))}\"$c0\n\t\t\t\t\t\t\tbreak 1\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\tdone\n\t\t\tfi\n\n\t\t\tif [[ \"$i\" -ge \"$startline\" ]]; then\n\t\t\t\tunset 'out_array[0]'\n\t\t\t\tout_array=( \"${out_array[@]}\" )\n\t\t\tfi\n\t\tdone\n\telif [[ \"$portraitSet\" = \"Yes\" ]]; then\n\t\tfor i in \"${!fulloutput[@]}\"; do\n\t\t\tprintf \"${fulloutput[$i]}$c0\\n\"\n\t\tdone\n\n\t\tprintf \"\\n\"\n\n\t\tfor ((i=0; i<${#fulloutput[*]}; i++)); do\n\t\t\t[[ -z \"${out_array[0]}\" ]] && continue\n\t\t\tprintf \"%s\\n\" \"${out_array[0]}\"\n\t\t\tunset 'out_array[0]'\n\t\t\tout_array=( \"${out_array[@]}\" )\n\t\tdone\n\n\telif [[ \"$display_logo\" == \"Yes\" ]]; then\n\t\tfor i in \"${!fulloutput[@]}\"; do\n\t\t\tprintf \"${fulloutput[i]}$c0\\n\"\n\t\tdone\n\telse\n\t\tif [[ \"$lineWrap\" = \"Yes\" ]]; then\n\t\t\tavailablespace=$(($(tput cols) - logowidth + 16)) # Unclear why the +16 offset is needed, but it works\n\t\t\tnew_out_array=(\"${out_array[0]}\")\n\t\t\tfor ((i=1; i<${#out_array[@]}; i++)); do\n\t\t\t\tlines=$(echo \"${out_array[i]}\" | fmt -w $availablespace)\n\t\t\t\tIFS=$'\\n' read -rd '' -a splitlines <<<\"$lines\"\n\t\t\t\tnew_out_array+=(\"${splitlines[0]}\")\n\t\t\t\tfor ((j=1; j<${#splitlines[*]}; j++)); do\n\t\t\t\t\tline=$(echo -e \"$labelcolor $textcolor  ${splitlines[j]}\")\n\t\t\t\t\tnew_out_array=( \"${new_out_array[@]}\" \"$line\" );\n\t\t\t\tdone\n\t\t\tdone\n\t\t\tout_array=(\"${new_out_array[@]}\")\n\t\tfi\n\t\tmissinglines=$((${#out_array[*]} + startline - ${#fulloutput[*]}))\n\t\tfor ((i=0; i<missinglines; i++)); do\n\t\t\tfulloutput+=(\"${c1}$(printf '%*s' \"$logowidth\")%s\")\n\t\tdone\n\t\t#n=${#fulloutput[*]}\n\t\tfor ((i=0; i<${#fulloutput[*]}; i++)); do\n\t\t\t# echo \"${out_array[@]}\"\n\t\t\tcase $(\"${AWK}\" 
'BEGIN{srand();print int(rand()*(1000-1))+1 }') in\n\t\t\t\t411|188|15|166|609)\n\t\t\t\t\tf_size=${#fulloutput[*]}\n\t\t\t\t\to_size=${#out_array[*]}\n\t\t\t\t\tf_max=$(( 32768 / f_size * f_size ))\n\t\t\t\t\t#o_max=$(( 32768 / o_size * o_size ))\n\t\t\t\t\tfor ((a=f_size-1; a>0; a--)); do\n\t\t\t\t\t\twhile (( (rand=RANDOM) >= f_max )); do :; done\n\t\t\t\t\t\trand=$(( rand % (a+1) ))\n\t\t\t\t\t\ttmp=${fulloutput[a]} fulloutput[a]=${fulloutput[rand]} fulloutput[rand]=$tmp\n\t\t\t\t\tdone\n\t\t\t\t\tfor ((b=o_size-1; b>0; b--)); do\n\t\t\t\t\t\trand=$(( rand % (b+1) ))\n\t\t\t\t\t\ttmp=${out_array[b]} out_array[b]=${out_array[rand]} out_array[rand]=$tmp\n\t\t\t\t\tdone\n\t\t\t\t;;\n\t\t\tesac\n\t\t\tprintf \"${fulloutput[i]}$c0\\n\" \"${out_array[0]}\"\n\t\t\tif [[ \"$i\" -ge \"$startline\" ]]; then\n\t\t\t\tunset 'out_array[0]'\n\t\t\t\tout_array=( \"${out_array[@]}\" )\n\t\t\tfi\n\t\tdone\n\tfi\n\t# Done with ASCII output\n}\n\ninfoDisplay () {\n\ttextcolor=\"\\033[0m\"\n\t[[ \"$my_hcolor\" ]] && textcolor=\"${my_hcolor}\"\n\t#TODO: Centralize colors and use them across the board so we only change them one place.\n\tmyascii=\"${distro}\"\n\t[[ \"${asc_distro}\" ]] && myascii=\"${asc_distro}\"\n\tcase ${myascii} in\n\t\t\"Alpine Linux\"|\"Arch Linux - Old\"|\"ArcoLinux\"|\"blackPanther OS\"|\"Fedora\"|\"Fedora - Old\"|\"Korora\"|\"Chapeau\"|\"Mandriva\"|\"Mandrake\"| \\\n\t\t\"Chakra\"|\"ChromeOS\"|\"Sabayon\"|\"Slackware\"|\"Mac OS X\"|\"macOS\"|\"Trisquel\"|\"Kali Linux\"|\"Jiyuu Linux\"|\"Antergos\"|\"Alter Linux\"| \\\n\t\t\"KaOS\"|\"Logos\"|\"gNewSense\"|\"Netrunner\"|\"NixOS\"|\"SailfishOS\"|\"Qubes OS\"|\"Kogaion\"|\"PCLinuxOS\"| \\\n\t\t\"Obarun\"|\"Siduction\"|\"Solus\"|\"SwagArch\"|\"Parrot Security\"|\"Zorin OS\"|\"Uos\"|\"TeArch\")\n\t\t\tlabelcolor=$(getColor 'light blue')\n\t\t;;\n\t\t\"Arch Linux\"|\"Arch Linux 32\"|\"Artix Linux\"|\"Frugalware\"|\"Mageia\"|\"Deepin\"|\"CRUX\"|\"OS Elbrus\"|\"EndeavourOS\")\n\t\t\tlabelcolor=$(getColor 'light 
cyan')\n\t\t;;\n\t\t\"Mint\"|\"LMDE\"|\"KDE neon\"|\"openSUSE\"|\"SUSE Linux Enterprise\"|\"LinuxDeepin\"|\"DragonflyBSD\"|\"Manjaro\"| \\\n\t\t\"Manjaro-tree\"|\"Android\"|\"Void Linux\"|\"DesaOS\"|\"Rocky Linux\")\n\t\t\tlabelcolor=$(getColor 'light green')\n\t\t;;\n\t\t\"Ubuntu\"|\"FreeBSD\"|\"FreeBSD - Old\"|\"Debian\"|\"Raspbian\"|\"BSD\"|\"Red Hat Enterprise Linux\"|\"Oracle Linux\"| \\\n\t\t\"Peppermint\"|\"Cygwin\"|\"Msys\"|\"Fuduntu\"|\"Scientific Linux\"|\"DragonFlyBSD\"|\"BackTrack Linux\"|\"Red Star OS\"| \\\n\t\t\"SparkyLinux\"|\"OBRevenge\"|\"Source Mage GNU/Linux\")\n\t\t\tlabelcolor=$(getColor 'light red')\n\t\t;;\n\t\t\"ROSA\"|\"januslinux\")\n\t\t\tlabelcolor=$(getColor 'white')\n\t\t;;\n\t\t\"CrunchBang\"|\"Viperr\"|\"elementary\"*)\n\t\t\tlabelcolor=$(getColor 'dark grey')\n\t\t;;\n\t\t\"Gentoo\"|\"Parabola GNU/Linux-libre\"|\"Funtoo\"|\"Funtoo-text\"|\"BLAG\"|\"SteamOS\"|\"Devuan\")\n\t\t\tlabelcolor=$(getColor 'light purple')\n\t\t;;\n\t\t\"Haiku\")\n\t\t\tlabelcolor=$(getColor 'green')\n\t\t;;\n\t\t\"NetBSD\"|\"Amazon Linux\"|\"Proxmox VE\")\n\t\t\tlabelcolor=$(getColor 'orange')\n\t\t;;\n\t\t\"AlmaLinux\")\n\t\t\tlabelcolor=$(getColor 'light orange')\n\t\t;;\n\t\t\"CentOS\"|\"CentOS Stream\")\n\t\t\tlabelcolor=$(getColor 'yellow')\n\t\t;;\n\t\t\"Hyperbola GNU/Linux-libre\"|\"PureOS\"|*)\n\t\t\tlabelcolor=$(getColor 'light grey')\n\t\t;;\n\tesac\n\t[[ \"$my_lcolor\" ]] && labelcolor=\"${my_lcolor}\"\n\tif [[ \"$art\" ]]; then\n\t\tsource \"$art\"\n\tfi\n\tif [[ \"$no_color\" == \"1\" ]]; then\n\t\tlabelcolor=\"\"\n\t\tbold=\"\"\n\t\tc0=\"\"\n\t\ttextcolor=\"\"\n\tfi\n\t# Some verbosity stuff\n\t[[ \"$screenshot\" == \"1\" ]] && verboseOut \"Screenshot will be taken after info is displayed.\"\n\t[[ \"$upload\" == \"1\" ]] && verboseOut \"Screenshot will be transferred/uploaded to specified location.\"\n\t#########################\n\t# Info Variable Setting #\n\t#########################\n\tif [[ \"${distro}\" == \"Android\" ]]; 
then\n\t\tmyhostname=$(echo -e \"${labelcolor} ${hostname}\"); out_array=( \"${out_array[@]}\" \"$myhostname\" )\n\t\tmydistro=$(echo -e \"$labelcolor OS:$textcolor $distro $distro_ver\"); out_array=( \"${out_array[@]}\" \"$mydistro\" )\n\t\tmydevice=$(echo -e \"$labelcolor Device:$textcolor $device\"); out_array=( \"${out_array[@]}\" \"$mydevice\" )\n\t\tmyrom=$(echo -e \"$labelcolor ROM:$textcolor $rom\"); out_array=( \"${out_array[@]}\" \"$myrom\" )\n\t\tmybaseband=$(echo -e \"$labelcolor Baseband:$textcolor $baseband\"); out_array=( \"${out_array[@]}\" \"$mybaseband\" )\n\t\tmykernel=$(echo -e \"$labelcolor Kernel:$textcolor $kernel\"); out_array=( \"${out_array[@]}\" \"$mykernel\" )\n\t\tmyuptime=$(echo -e \"$labelcolor Uptime:$textcolor $uptime\"); out_array=( \"${out_array[@]}\" \"$myuptime\" )\n\t\tmycpu=$(echo -e \"$labelcolor CPU:$textcolor $cpu\"); out_array=( \"${out_array[@]}\" \"$mycpu\" )\n\t\tmygpu=$(echo -e \"$labelcolor GPU:$textcolor $gpu\"); out_array=( \"${out_array[@]}\" \"$mygpu\" )\n\t\tmymem=$(echo -e \"$labelcolor RAM:$textcolor $mem\"); out_array=( \"${out_array[@]}\" \"$mymem\" )\n\telse\n\t\tif [[ \"${display[@]}\" =~ \"host\" ]]; then\n\t\t\tmyinfo=$(echo -e \"${labelcolor} ${myUser}$textcolor${bold}@${c0}${labelcolor}${myHost}\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$myinfo\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"distro\" ]]; then\n\t\t\tif [[ \"$distro\" == \"Mac OS X\" || \"$distro\" == \"macOS\" ]]; then\n\t\t\t\tsysArch=\"$(getconf LONG_BIT)bit\"\n\t\t\t\tprodVers=$(sw_vers -productVersion)\n\t\t\t\tbuildVers=$(sw_vers -buildVersion)\n\t\t\t\tif [ -n \"$distro_more\" ]; then\n\t\t\t\t\tmydistro=$(echo -e \"$labelcolor OS:$textcolor $distro_more $sysArch\")\n\t\t\t\telse\n\t\t\t\t\tmydistro=$(echo -e \"$labelcolor OS:$textcolor $sysArch $distro $prodVers $buildVers\")\n\t\t\t\tfi\n\t\t\telif [[ \"$distro\" == \"Cygwin\" || 
\"$distro\" == \"Msys\" ]]; then\n\t\t\t\tdistro=\"$(wmic os get caption | sed 's/\\r//g; s/[ \\t]*$//g; 2!d')\"\n\t\t\t\tif [[ \"$(wmic os get version | grep -o '^10\\.')\" == \"10.\" ]]; then\n\t\t\t\t\tdistro=\"$distro (v$(wmic os get version | grep '^10\\.' | tr -d ' '))\"\n\t\t\t\tfi\n\t\t\t\tsysArch=$(wmic os get OSArchitecture | sed 's/\\r//g; s/[ \\t]*$//g; 2!d')\n\t\t\t\tmydistro=$(echo -e \"$labelcolor OS:$textcolor $distro $sysArch\")\n\t\t\telse\n\t\t\t\tif [ -n \"$distro_more\" ]; then\n\t\t\t\t\tmydistro=$(echo -e \"$labelcolor OS:$textcolor $distro_more\")\n\t\t\t\telse\n\t\t\t\t\tmydistro=$(echo -e \"$labelcolor OS:$textcolor $distro $sysArch\")\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tout_array=( \"${out_array[@]}\" \"$mydistro$wsl\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"kernel\" ]]; then\n\t\t\tmykernel=$(echo -e \"$labelcolor Kernel:$textcolor $kernel\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$mykernel\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"uptime\" ]]; then\n\t\t\tmyuptime=$(echo -e \"$labelcolor Uptime:$textcolor $uptime\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$myuptime\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"pkgs\" ]]; then\n\t\t\tmypkgs=$(echo -e \"$labelcolor Packages:$textcolor $pkgs\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$mypkgs\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"shell\" ]]; then\n\t\t\tmyshell=$(echo -e \"$labelcolor Shell:$textcolor $myShell\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$myshell\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ -n \"$DISPLAY\" || \"$distro\" == \"Mac OS X\" || \"$distro\" == \"macOS\" ]]; then\n\t\t\tif [ -n \"${xResolution}\" ]; then\n\t\t\t\tif [[ \"${display[@]}\" =~ \"res\" ]]; then\n\t\t\t\t\tmyres=$(echo -e \"$labelcolor Resolution:${textcolor} $xResolution\")\n\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$myres\" 
)\n\t\t\t\t\t((display_index++))\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"de\" ]]; then\n\t\t\t\tif [[ \"${DE}\" != \"Not Present\" ]]; then\n\t\t\t\t\tmyde=$(echo -e \"$labelcolor DE:$textcolor $DE\")\n\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$myde\" )\n\t\t\t\t\t((display_index++))\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"wm\" ]]; then\n\t\t\t\tmywm=$(echo -e \"$labelcolor WM:$textcolor $WM\")\n\t\t\t\tout_array=( \"${out_array[@]}\" \"$mywm\" )\n\t\t\t\t((display_index++))\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"wmtheme\" ]]; then\n\t\t\t\tif [[ \"${Win_theme}\" != \"Not Applicable\" && \"${Win_theme}\" != \"Not Found\" ]]; then\n\t\t\t\t\tmywmtheme=$(echo -e \"$labelcolor WM Theme:$textcolor $Win_theme\")\n\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$mywmtheme\" )\n\t\t\t\t\t((display_index++))\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"gtk\" ]]; then\n\t\t\t\tif [[ \"$distro\" == \"Mac OS X\" || \"$distro\" == \"macOS\" ]]; then\n\t\t\t\t\tif [[ \"$gtkFont\" != \"Not Applicable\" && \"$gtkFont\" != \"Not Found\" ]]; then\n\t\t\t\t\t\tif [ -n \"$gtkFont\" ]; then\n\t\t\t\t\t\t\tmyfont=$(echo -e \"$labelcolor Font:$textcolor $gtkFont\")\n\t\t\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$myfont\" )\n\t\t\t\t\t\t\t((display_index++))\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\telse\n\t\t\t\t\tif [[ \"$gtk2Theme\" != \"Not Applicable\" && \"$gtk2Theme\" != \"Not Found\" ]]; then\n\t\t\t\t\t\tif [ -n \"$gtk2Theme\" ]; then\n\t\t\t\t\t\t\tmygtk2=\"${gtk2Theme} [GTK2]\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"$gtk3Theme\" != \"Not Applicable\" && \"$gtk3Theme\" != \"Not Found\" ]]; then\n\t\t\t\t\t\tif [ -n \"$mygtk2\" ]; then\n\t\t\t\t\t\t\tmygtk3=\", ${gtk3Theme} [GTK3]\"\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tmygtk3=\"${gtk3Theme} [GTK3]\"\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"$gtk_2line\" == \"yes\" ]]; then\n\t\t\t\t\t\tmygtk2=$(echo -e \"$labelcolor GTK2 Theme:$textcolor 
$gtk2Theme\")\n\t\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$mygtk2\" )\n\t\t\t\t\t\t((display_index++))\n\t\t\t\t\t\tmygtk3=$(echo -e \"$labelcolor GTK3 Theme:$textcolor $gtk3Theme\")\n\t\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$mygtk3\" )\n\t\t\t\t\t\t((display_index++))\n\t\t\t\t\telse\n\t\t\t\t\t\tif [[ \"$gtk2Theme\" == \"$gtk3Theme\" ]]; then\n\t\t\t\t\t\t\tif [[ \"$gtk2Theme\" != \"Not Applicable\" && \"$gtk2Theme\" != \"Not Found\" ]]; then\n\t\t\t\t\t\t\t\tmygtk=$(echo -e \"$labelcolor GTK Theme:$textcolor ${gtk2Theme} [GTK2/3]\")\n\t\t\t\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$mygtk\" )\n\t\t\t\t\t\t\t\t((display_index++))\n\t\t\t\t\t\t\tfi\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tmygtk=$(echo -e \"$labelcolor GTK Theme:$textcolor ${mygtk2}${mygtk3}\")\n\t\t\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$mygtk\" )\n\t\t\t\t\t\t\t((display_index++))\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"$gtkIcons\" != \"Not Applicable\" && \"$gtkIcons\" != \"Not Found\" ]]; then\n\t\t\t\t\t\tif [ -n \"$gtkIcons\" ]; then\n\t\t\t\t\t\t\tmyicons=$(echo -e \"$labelcolor Icon Theme:$textcolor $gtkIcons\")\n\t\t\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$myicons\" )\n\t\t\t\t\t\t\t((display_index++))\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\t\tif [[ \"$gtkFont\" != \"Not Applicable\" && \"$gtkFont\" != \"Not Found\" ]]; then\n\t\t\t\t\t\tif [ -n \"$gtkFont\" ]; then\n\t\t\t\t\t\t\tmyfont=$(echo -e \"$labelcolor Font:$textcolor $gtkFont\")\n\t\t\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$myfont\" )\n\t\t\t\t\t\t\t((display_index++))\n\t\t\t\t\t\tfi\n\t\t\t\t\tfi\n\t\t\t\tfi\n\t\t\tfi\n\t\telif [[ \"$fake_distro\" == \"Cygwin\" || \"$fake_distro\" == \"Msys\" || \"$fake_distro\" == \"Windows - Modern\" ]]; then\n\t\t\tif [[ \"${display[@]}\" =~ \"res\" && -n \"$xResolution\" ]]; then\n\t\t\t\tmyres=$(echo -e \"$labelcolor Resolution:${textcolor} $xResolution\")\n\t\t\t\tout_array=( \"${out_array[@]}\" \"$myres\" )\n\t\t\t\t((display_index++))\n\t\t\tfi\n\t\t\tif [[ 
\"${display[@]}\" =~ \"de\" ]]; then\n\t\t\t\tif [[ \"${DE}\" != \"Not Present\" ]]; then\n\t\t\t\t\tmyde=$(echo -e \"$labelcolor DE:$textcolor $DE\")\n\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$myde\" )\n\t\t\t\t\t((display_index++))\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"wm\" ]]; then\n\t\t\t\tmywm=$(echo -e \"$labelcolor WM:$textcolor $WM\")\n\t\t\t\tout_array=( \"${out_array[@]}\" \"$mywm\" )\n\t\t\t\t((display_index++))\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"wmtheme\" ]]; then\n\t\t\t\tif [[ \"${Win_theme}\" != \"Not Applicable\" && \"${Win_theme}\" != \"Not Found\" ]]; then\n\t\t\t\t\tmywmtheme=$(echo -e \"$labelcolor WM Theme:$textcolor $Win_theme\")\n\t\t\t\t\tout_array=( \"${out_array[@]}\" \"$mywmtheme\" )\n\t\t\t\t\t((display_index++))\n\t\t\t\tfi\n\t\t\tfi\n\t\telif [[ \"$distro\" == \"Haiku\" && -n \"${xResolution}\" && \"${display[@]}\" =~ \"res\" ]]; then\n\t\t\tmyres=$(echo -e \"$labelcolor Resolution:${textcolor} $xResolution\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$myres\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${fake_distro}\" != \"Cygwin\" && \"${fake_distro}\" != \"Msys\" && \"${fake_distro}\" != \"Windows - Modern\" && \"${display[@]}\" =~ \"disk\" ]]; then\n\t\t\tmydisk=$(echo -e \"$labelcolor Disk:$textcolor $diskusage\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$mydisk\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"cpu\" ]]; then\n\t\t\tmycpu=$(echo -e \"$labelcolor CPU:$textcolor $cpu\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$mycpu\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"gpu\" ]] && [[ \"$gpu\" != \"Not Found\" ]]; then\n\t\t\tmygpu=$(echo -e \"$labelcolor GPU:$textcolor $gpu\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$mygpu\" )\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ \"${display[@]}\" =~ \"mem\" ]]; then\n\t\t\tmymem=$(echo -e \"$labelcolor RAM:$textcolor $mem\")\n\t\t\tout_array=( \"${out_array[@]}\" \"$mymem\" 
)\n\t\t\t((display_index++))\n\t\tfi\n\t\tif [[ -n \"$custom_lines_string\" ]]; then\n\t\t\tcustomlines\n\t\tfi\n\tfi\n\tif [[ \"$display_type\" == \"ASCII\" ]]; then\n\t\tasciiText\n\telse\n\t\tif [[ \"${display[@]}\" =~ \"host\" ]]; then echo -e \"$myinfo\"; fi\n\t\tif [[ \"${display[@]}\" =~ \"distro\" ]]; then echo -e \"$mydistro\"; fi\n\t\tif [[ \"${display[@]}\" =~ \"kernel\" ]]; then echo -e \"$mykernel\"; fi\n\t\tif [[ \"${distro}\" == \"Android\" ]]; then\n\t\t\techo -e \"$mydevice\"\n\t\t\techo -e \"$myrom\"\n\t\t\techo -e \"$mybaseband\"\n\t\t\techo -e \"$mykernel\"\n\t\t\techo -e \"$myuptime\"\n\t\t\techo -e \"$mycpu\"\n\t\t\techo -e \"$mymem\"\n\t\telse\n\t\t\tif [[ \"${display[@]}\" =~ \"uptime\" ]]; then echo -e \"$myuptime\"; fi\n\t\t\tif [[ \"${display[@]}\" =~ \"pkgs\" && \"$mypkgs\" != \"Unknown\" ]]; then echo -e \"$mypkgs\"; fi\n\t\t\tif [[ \"${display[@]}\" =~ \"shell\" ]]; then echo -e \"$myshell\"; fi\n\t\t\tif [[ \"${display[@]}\" =~ \"res\" ]]; then\n\t\t\t\ttest -z \"$myres\" || echo -e \"$myres\"\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"de\" ]]; then\n\t\t\t\tif [[ \"${DE}\" != \"Not Present\" ]]; then echo -e \"$myde\"; fi\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"wm\" ]]; then\n\t\t\t\ttest -z \"$mywm\" || echo -e \"$mywm\"\n\t\t\t\tif [[ \"${Win_theme}\" != \"Not Applicable\" && \"${Win_theme}\" != \"Not Found\" ]]; then\n\t\t\t\t\ttest -z \"$mywmtheme\" || echo -e \"$mywmtheme\"\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"gtk\" ]]; then\n\t\t\t\tif [[ \"$gtk_2line\" == \"yes\" ]]; then\n\t\t\t\t\ttest -z \"$mygtk2\" || echo -e \"$mygtk2\"\n\t\t\t\t\ttest -z \"$mygtk3\" || echo -e \"$mygtk3\"\n\t\t\t\telse\n\t\t\t\t\ttest -z \"$mygtk\" || echo -e \"$mygtk\"\n\t\t\t\tfi\n\t\t\t\ttest -z \"$myicons\" || echo -e \"$myicons\"\n\t\t\t\ttest -z \"$myfont\" || echo -e \"$myfont\"\n\t\t\tfi\n\t\t\tif [[ \"${display[@]}\" =~ \"disk\" ]]; then echo -e \"$mydisk\"; fi\n\t\t\tif [[ \"${display[@]}\" =~ \"cpu\" ]]; then echo 
-e \"$mycpu\"; fi\n\t\t\tif [[ \"${display[@]}\" =~ \"gpu\" ]]; then echo -e \"$mygpu\"; fi\n\t\t\tif [[ \"${display[@]}\" =~ \"mem\" ]]; then echo -e \"$mymem\"; fi\n\t\tfi\n\tfi\n}\n\n##################\n# Let's Do This!\n##################\n\nif [[ -f \"$HOME/.screenfetchOR\" ]]; then\n\tsource \"$HOME/.screenfetchOR\"\nfi\n\nif [[ \"$overrideDisplay\" ]]; then\n\tverboseOut \"Found 'd' flag in syntax. Overriding display...\"\n\tOLDIFS=$IFS\n\tIFS=';'\n\tfor i in ${overrideDisplay}; do\n\t\tmodchar=\"${i:0:1}\"\n\t\tif [[ \"${modchar}\" == \"-\" ]]; then\n\t\t\ti=${i/${modchar}}\n\t\t\t_OLDIFS=$IFS\n\t\t\tIFS=,\n\t\t\tfor n in $i; do\n\t\t\t\tif [[ ! \"${display[@]}\" =~ \"$n\" ]]; then\n\t\t\t\t\techo \"The var $n is not currently being displayed.\"\n\t\t\t\telse\n\t\t\t\t\tfor e in \"${!display[@]}\"; do\n\t\t\t\t\t\tif [[ ${display[e]} = \"$n\" ]]; then\n\t\t\t\t\t\t\tunset 'display[e]'\n\t\t\t\t\t\tfi\n\t\t\t\t\tdone\n\t\t\t\tfi\n\t\t\tdone\n\t\t\tIFS=$_OLDIFS\n\t\telif [[ \"${modchar}\" == \"+\" ]]; then\n\t\t\ti=${i/${modchar}}\n\t\t\t_OLDIFS=$IFS\n\t\t\tIFS=,\n\t\t\tfor n in $i; do\n\t\t\t\tif [[ \"${valid_display[@]}\" =~ \"$n\" ]]; then\n\t\t\t\t\tif [[ \"${display[@]}\" =~ \"$n\" ]]; then\n\t\t\t\t\t\techo \"The $n var is already being displayed.\"\n\t\t\t\t\telse\n\t\t\t\t\t\tdisplay+=(\"$n\")\n\t\t\t\t\tfi\n\t\t\t\telse\n\t\t\t\t\techo \"The var $n is not a valid display var.\"\n\t\t\t\tfi\n\t\t\tdone\n\t\t\tIFS=$_OLDIFS\n\t\telse\n\t\t\tIFS=$OLDIFS\n\t\t\ti=\"${i//,/ }\"\n\t\t\tdisplay=( \"$i\" )\n\t\tfi\n\tdone\n\tIFS=$OLDIFS\nfi\n\nfor i in \"${display[@]}\"; do\n\tif [[ -n \"$i\" ]]; then\n\t\tif [[ $i =~ wm ]]; then\n\t\t\ttest -z \"$WM\" && detectwm\n\t\t\ttest -z \"$Win_theme\" && detectwmtheme\n\t\telse\n\t\t\tif [[ \"${display[*]}\" =~ \"$i\" ]]; then\n\t\t\t\tif [[ \"$errorSuppress\" == \"1\" ]]; then\n\t\t\t\t\tdetect\"${i}\" 2>/dev/null\n\t\t\t\telse\n\t\t\t\t\tdetect\"${i}\"\n\t\t\t\tfi\n\t\t\tfi\n\t\tfi\n\tfi\ndone\n\n# Check for 
android\nif [[ -f /system/build.prop  && \"${distro}\" != \"SailfishOS\" ]]; then\n    distro=\"Android\"\n    detectmem\n    detectuptime\n    detectkernel\n    detectdroid\n    infoDisplay\n    exit 0\nfi\n\nif [ \"$gpu\" = 'Not Found' ] ; then\n\tDetectIntelGPU\nfi\n\n\ninfoDisplay\n[ \"$screenshot\" == \"1\" ] && takeShot\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/setprio",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# ======================================================================\n# Script Name: setprio\n# Description: Sets CPU and I/O priorities for critical services to ensure\n#  higher priority for web services and lower priority for backups.\n#  Designed for systems using SysVinit (e.g., Devuan).\n# ======================================================================\n\n# Ensure the script is run as root\nif [[ ${EUID} -ne 0 ]]; then\n   echo \"This script must be run as root. Exiting.\"\n   exit 1\nfi\n\n# Optional: Enable verbose mode if needed\n_VERBOSITY=0\nwhile [[ \"$#\" -gt 0 ]]; do\n  case $1 in\n    -v|--verbose) _VERBOSITY=1 ;;\n    *) echo \"Unknown parameter passed: $1\"; exit 1 ;;\n  esac\n  shift\ndone\n\n# Define services and their desired priorities\n# Format: \"service_name:renice_value:ionice_class:ionice_priority\"\ndeclare -A _services_priorities=(\n  [\"nginx\"]=\"0:2:4\"\n  [\"redis-server\"]=\"0:2:4\"\n  [\"sshd\"]=\"-5:2:3\"\n  [\"php-fpm\"]=\"0:2:4\"\n  [\"mysqld\"]=\"0:2:4\"  # Handle all MySQL processes\n)\n\n# Function to get current nice value\n_get_current_nice() {\n  local _pid=$1\n  # Extract the nice value using ps\n  _nice=$(ps -o ni= -p \"${_pid}\" 2>/dev/null | xargs)\n  echo \"${_nice}\"\n}\n\n# Function to get current ionice values using Regex for accurate parsing\n_get_current_ionice() {\n  local _pid=$1\n  # Capture the output of ionice\n  _ionice_output=$(ionice -p \"${_pid}\" 2>/dev/null)\n\n  # Check if ionice_output contains expected information\n  if [[ $? 
-ne 0 || -z \"${_ionice_output}\" ]]; then\n    echo \"Unknown:Unknown\"\n    return\n  fi\n\n  # Use Regex to parse the ionice output\n  if [[ \"${_ionice_output}\" =~ ^([a-z-]+):[[:space:]]+prio[[:space:]]+([0-9]+)$ ]]; then\n    _description=\"${BASH_REMATCH[1]}\"\n    _priority=\"${BASH_REMATCH[2]}\"\n  elif [[ \"${_ionice_output}\" =~ ^([a-z-]+)$ ]]; then\n    _description=\"${BASH_REMATCH[1]}\"\n    _priority=\"N/A\"\n  else\n    # Handle unexpected formats\n    _description=\"Unknown\"\n    _priority=\"Unknown\"\n  fi\n\n  # Map description to class number\n  case \"${_description}\" in\n    \"realtime\")\n      _class=1\n      ;;\n    \"best-effort\")\n      _class=2\n      ;;\n    \"idle\")\n      _class=3\n      ;;\n    *)\n      _class=\"Unknown\"\n      ;;\n  esac\n\n  echo \"${_class}:${_priority}\"\n}\n\n# Function to set priorities\n_set_priorities() {\n  local _pid=$1\n  _service=$2\n  local _renice_val=$3\n  local _ionice_class=$4\n  local _ionice_priority=$5\n\n  # Get current priorities\n  _current_nice=$(_get_current_nice \"${_pid}\")\n  _current_ionice=$(_get_current_ionice \"${_pid}\")\n  _current_class=$(echo \"${_current_ionice}\" | cut -d':' -f1)\n  _current_priority=$(echo \"${_current_ionice}\" | cut -d':' -f2)\n\n  # Apply renice\n  _renice_output=$(renice \"${_renice_val}\" -p \"${_pid}\" 2>&1)\n  if [[ $? -ne 0 ]]; then\n    echo \"Failed to renice PID ${_pid} (${_service}): ${_renice_output}\"\n    return\n  fi\n\n  # Apply ionice\n  _ionice_output=$(ionice -c\"${_ionice_class}\" -n\"${_ionice_priority}\" -p \"${_pid}\" 2>&1)\n  if [[ $? 
-ne 0 ]]; then\n    echo \"Failed to ionice PID ${_pid} (${_service}): ${_ionice_output}\"\n    return\n  fi\n\n  # Get new priorities\n  _new_nice=$(_get_current_nice \"${_pid}\")\n  _new_ionice=$(_get_current_ionice \"${_pid}\")\n  _new_class=$(echo \"${_new_ionice}\" | cut -d':' -f1)\n  _new_priority=$(echo \"${_new_ionice}\" | cut -d':' -f2)\n\n  # Display the changes if verbose or if changes were made\n  if [[ ${_VERBOSITY} -eq 1 || \"${_current_nice}\" != \"${_new_nice}\" || \"${_current_class}\" != \"${_new_class}\" || \"${_current_priority}\" != \"${_new_priority}\" ]]; then\n    echo \"Service: ${_service} | PID: ${_pid}\"\n    echo \"  CPU Priority (nice): ${_current_nice} -> ${_new_nice}\"\n    echo \"  I/O Priority (ionice): Class ${_current_class} Priority ${_current_priority} -> Class ${_new_class} Priority ${_new_priority}\"\n    echo \"---------------------------------------------------------------------\"\n  fi\n}\n\n# Function to set priorities for MySQL processes using a more robust method\n_set_mysql_priorities() {\n  _service=\"mysqld\"\n  local _renice_val=\"0\"\n  local _ionice_class=\"2\"\n  local _ionice_priority=\"4\"\n\n  # Use pgrep with -f to match the full command line for all mysqld processes\n  _all_mysql_pids=$(pgrep -u mysql -f 'mysqld')\n\n  if [[ -z \"${_all_mysql_pids}\" ]]; then\n    echo \"No running mysqld processes found.\"\n    echo \"---------------------------------------------------------------------\"\n    return\n  fi\n\n  # Iterate over each PID and set priorities\n  for _pid in ${_all_mysql_pids}; do\n    # Set priorities for the process\n    _set_priorities \"${_pid}\" \"${_service}\" \"${_renice_val}\" \"${_ionice_class}\" \"${_ionice_priority}\"\n  done\n}\n\n# Iterate over each service and apply priorities\nfor _service in \"${!_services_priorities[@]}\"; do\n  # Extract desired priorities\n  IFS=':' read -r _renice_val _ionice_class _ionice_priority <<< \"${_services_priorities[${_service}]}\"\n\n  # Find PIDs 
matching the service\n  case \"${_service}\" in\n    \"php-fpm\")\n      # Handle multiple php-fpm versions\n      _pids=$(pgrep -f 'php-fpm: pool')\n      ;;\n    \"sshd\")\n      # Exact match for sshd to exclude child processes\n      _pids=$(pgrep -x sshd)\n      ;;\n    \"nginx\")\n      # Handle multiple nginx worker processes\n      _pids=$(pgrep -f 'nginx: worker process')\n      ;;\n    \"redis-server\")\n      # Exact match for redis-server processes\n      _pids=$(pgrep -x redis-server)\n      ;;\n    \"mysqld\")\n      # Handled separately below\n      _pids=\"\"\n      ;;\n    *)\n      _pids=$(pgrep -x \"${_service}\")\n      ;;\n  esac\n\n  # Skip if the service is mysqld (handled separately)\n  if [[ \"${_service}\" == \"mysqld\" ]]; then\n    continue\n  fi\n\n  if [[ -z \"${_pids}\" ]]; then\n    echo \"No running processes found for service: ${_service}\"\n    echo \"---------------------------------------------------------------------\"\n    continue\n  fi\n\n  # Iterate over each PID and set priorities\n  for _pid in ${_pids}; do\n    # Avoid changing priority of this script itself\n    if [[ \"${_pid}\" -eq $$ ]]; then\n      continue\n    fi\n\n    # Set priorities for the process\n    _set_priorities \"${_pid}\" \"${_service}\" \"${_renice_val}\" \"${_ionice_class}\" \"${_ionice_priority}\"\n  done\ndone\n\n# Handle MySQL processes separately\n_set_mysql_priorities\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/showdepend",
    "content": "#!/bin/bash\n\n# Check if a package name is provided as an argument\nif [ -z \"$1\" ]; then\n  echo \"Usage: $0 <package-name>\"\n  exit 1\nfi\n\n# Store the package name\nPACKAGE=$1\n\n# Find installed packages that depend on the given package\n# (the dpkg status database is local, so no 'apt-get update' is needed)\ndpkg-query -W -f='${Package} ${Depends}\\n' | grep -w \"$PACKAGE\" | awk -v p=\"$PACKAGE\" '$1 != p {print $1}'\n"
  },
  {
    "path": "aegir/tools/bin/smtpgapps",
    "content": "#!/bin/bash\n\n# smtpgapps\n# A script to install and configure msmtp on Devuan\n# Usage: smtpgapps your-email@gmail.com your-app-password\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_usage() {\n  echo\n  echo \"Usage: smtpgapps your-email@gmail.com your-app-password\"\n  echo \"\"\n  echo \"Arguments:\"\n  echo \"  your-email@gmail.com    Your Gmail address\"\n  echo \"  your-app-password       Your Gmail App Password\"\n  echo \"\"\n  echo \"Example:\"\n  echo \"  smtpgapps foobar@gmail.com yourgapppassword\"\n  echo\n  exit 1\n}\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"GA SMTP Setup v.${_tRee} [$(date +%T)] ==> $*\"\n}\n\n# Function to check if running as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo\n    _msg \"Error: This script must be run as root.\"\n    _usage\n    exit 1\n  fi\n}\n\n# Function to check if a package is installed\n_is_installed() {\n  dpkg -l | grep -qw \"$1\"\n}\n\n# Function to validate email address format\n_validate_email() {\n  
_EMAIL_REGEX=\"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$\"\n  if [[ ! \"${_GMAIL_USER}\" =~ ${_EMAIL_REGEX} ]]; then\n    echo\n    _msg \"Error: Invalid email address format.\"\n    _usage\n  fi\n}\n\n# Function to validate password format\n_validate_password() {\n  _PASS_REGEX=\"^[a-z]{16}$\"\n  if [[ ! \"${_GMAIL_PASS}\" =~ ${_PASS_REGEX} ]]; then\n    echo\n    _msg \"Error: Password must be exactly 16 lowercase letters (a-z).\"\n    _usage\n  fi\n}\n\n# -----------------------------\n# Initial Setup\n# -----------------------------\n\n# Check if running as root\n_check_root\n_os_detection_minimal\n\n# Check for correct number of arguments\nif [ \"$#\" -ne 2 ]; then\n  echo\n  _msg \"Incorrect number of arguments.\"\n  _usage\nfi\n\n_GMAIL_USER=\"$1\"\n_GMAIL_PASS=\"$2\"\n\n# Validate email and password\n_validate_email\n_validate_password\n\n# Install msmtp if not already installed\nif _is_installed \"msmtp\"; then\n  _msg \"Package 'msmtp' is already installed.\"\nelse\n  _msg \"Installing msmtp and necessary packages...\"\n  _apt_clean_update\n  apt-get install msmtp ca-certificates -y &> /dev/null\n  _msg \"'msmtp' installed successfully.\"\nfi\n\n_msg \"Backing up existing /etc/msmtprc if it exists...\"\nif [ -f /etc/msmtprc ]; then\n  cp -a /etc/msmtprc /etc/msmtprc.bak\n  _msg \"Existing /etc/msmtprc backed up to /etc/msmtprc.bak\"\nfi\n\n_msg \"Creating /etc/msmtprc configuration file...\"\n\ncat <<EOF > /etc/msmtprc\n# msmtp configuration file\n\n# Set default values for all following accounts.\ndefaults\nauth           on\ntls            on\ntls_trust_file /etc/ssl/certs/ca-certificates.crt\nlogfile        /var/log/msmtp.log\n\n# Gmail account\naccount        gmail\nhost           smtp.gmail.com\nport           587\nfrom           ${_GMAIL_USER}\nuser           ${_GMAIL_USER}\npassword       ${_GMAIL_PASS}\n\n# Set a default account\naccount default : gmail\nEOF\n\n_msg \"/etc/msmtprc configuration file created.\"\n\n# Secure the msmtprc 
file\nchmod 600 /etc/msmtprc\nchown root:root /etc/msmtprc\n\n_msg \"Setting up log file for msmtp...\"\ntouch /var/log/msmtp.log\nchmod 640 /var/log/msmtp.log\nchown root:adm /var/log/msmtp.log\n\n_msg \"Configuring log rotation for msmtp.log...\"\ncat <<EOF > /etc/logrotate.d/msmtp\n/var/log/msmtp.log {\n    weekly\n    rotate 4\n    compress\n    missingok\n    notifempty\n    create 640 root adm\n}\nEOF\n\n_msg \"Log rotation configured.\"\n\n_msg \"Installation and configuration of msmtp completed successfully!\"\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/sqlclean",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Define directories\n_MYSQL_DB_DIR=\"/var/lib/mysql\"\n_VHOSTS_DIRS=(\"/var/aegir/config/server_master/nginx/vhost.d\" \"/data/disk/*/config/server_master/nginx/vhost.d\")\n_GHOST_DBS_LOG=\"/var/log/db_cleanup.log\"\n_ACTIVE_DBS_LOG=\"/var/log/dbs_to_remain.log\"\n\nexport _LIVE_MODE=$1\n: \"${_LIVE_MODE:=DRY}\"\n\n# Create/empty the log files\n> \"${_GHOST_DBS_LOG}\"\n> \"${_ACTIVE_DBS_LOG}\"\n\n# Step 1: Get the list of existing MySQL databases\nmapfile -t _mysql_databases < <(find \"${_MYSQL_DB_DIR}\" -maxdepth 1 -type d -exec basename {} \\; | grep -v \"^mysql$\\|^information_schema$\\|^performance_schema$\\|^sys$\")\n\n# Step 2: Get the list of databases from the vhost configurations\ndeclare -A _vhost_db_map\n_vhost_files=()\nfor _dir_pattern in \"${_VHOSTS_DIRS[@]}\"; do\n  # Expand pattern and check if there are any matching directories\n  for _dir in ${_dir_pattern}; do\n    if [ -d \"${_dir}\" ]; then\n      # Grep for db_name only if the directory exists and contains files\n      while IFS= read -r _line; do\n        _db_name=$(echo \"${_line}\" | awk '{print $NF}' | tr -d ';')\n        _vhost_file=$(echo \"${_line}\" | awk -F':' '{print $1}')\n\n        # Track vhost files and map db_name to vhosts\n        _vhost_files+=(\"${_vhost_file}:${_db_name}\")\n        if ! 
[[ \" ${_vhost_db_map[\"${_db_name}\"]} \" =~ \" ${_vhost_file} \" ]]; then\n          _vhost_db_map[\"${_db_name}\"]+=\"${_vhost_file} \"  # Append vhost file to the list for this db_name\n        fi\n\n      done < <(grep -h \"fastcgi_param db_name\" \"${_dir}\"/* 2>/dev/null)\n    else\n      echo \"Directory ${_dir} does not exist, skipping...\" | tee -a \"${_GHOST_DBS_LOG}\"\n    fi\n  done\ndone\n\n# Step 3: Compare and separate ghost and active databases\n_ghost_databases=()\n_active_databases=()\nfor _db in \"${_mysql_databases[@]}\"; do\n  if [[ \" ${!_vhost_db_map[*]} \" =~ \" ${_db} \" ]]; then\n    _active_databases+=(\"${_db}\")\n  else\n    _ghost_databases+=(\"${_db}\")\n  fi\ndone\n\n# Step 4: Log active databases\necho \"Databases to remain (referenced in vhosts):\" | tee -a \"${_ACTIVE_DBS_LOG}\"\nfor _active_db in \"${_active_databases[@]}\"; do\n  echo \"Active database: ${_active_db}\" | tee -a \"${_ACTIVE_DBS_LOG}\"\ndone\n\n# Function to drop a MySQL user and all host entries\n_drop_mysql_user() {\n  local _db_user=$1\n  # Get all user@host combinations for the user (-N suppresses the header row)\n  mapfile -t _user_hosts < <(mysql -N -e \"SELECT CONCAT(User, '@', Host) FROM mysql.user WHERE user='${_db_user}';\")\n\n  # Drop each user@host entry, quoting the user and host parts separately\n  for _user_host in \"${_user_hosts[@]}\"; do\n    echo \"Dropping user: ${_user_host}\" | tee -a \"${_GHOST_DBS_LOG}\"\n    if [ \"${_LIVE_MODE}\" == \"LIVE\" ]; then\n      mysql -e \"DROP USER '${_user_host%@*}'@'${_user_host#*@}';\" 2>>\"${_GHOST_DBS_LOG}\"\n    fi\n  done\n}\n\n# Step 5: Prompt admin to confirm ghost database deletion\n_confirm_deletion() {\n  local _ghost_db=$1\n  while true; do\n    echo -e \"\\nGhost database found: ${_ghost_db}\"\n    read -p \"Type the name of the ghost database '${_ghost_db}' to confirm deletion, or type 'NO' to skip: \" _confirmation\n    if [ \"${_confirmation}\" == \"${_ghost_db}\" ]; then\n      echo \"Confirmed. 
Proceeding with deletion of database ${_ghost_db}.\" | tee -a \"${_GHOST_DBS_LOG}\"\n      return 0  # Confirm deletion\n    elif [ \"${_confirmation}\" == \"NO\" ]; then\n      echo \"Skipping deletion of database ${_ghost_db}.\" | tee -a \"${_GHOST_DBS_LOG}\"\n      return 1  # Skip deletion\n    else\n      echo \"Invalid input. Please type the database name or 'NO' to skip.\" | tee -a \"${_GHOST_DBS_LOG}\"\n    fi\n  done\n}\n\n# Step 6: Drop ghost databases and their associated users after _confirmation\necho \"Databases to drop (not referenced in any vhost):\" | tee -a \"${_GHOST_DBS_LOG}\"\nfor _ghost_db in \"${_ghost_databases[@]}\"; do\n  if _confirm_deletion \"${_ghost_db}\"; then\n    # Drop the database\n    echo \"Dropping database: ${_ghost_db}\" | tee -a \"${_GHOST_DBS_LOG}\"\n    if [ \"${_LIVE_MODE}\" == \"LIVE\" ]; then\n      mysql -e \"DROP DATABASE ${_ghost_db};\" 2>>\"${_GHOST_DBS_LOG}\"\n    fi\n\n    # Drop the associated db_user (assuming db_user matches db_name)\n    _drop_mysql_user \"${_ghost_db}\"\n  fi\ndone\n\necho\nif [ \"${_LIVE_MODE}\" == \"LIVE\" ]; then\n  echo \"Percona MySQL Server databases LIVE cleanup completed.\"\n  echo \"Check SQL cleanup actions logs in ${_GHOST_DBS_LOG}\"\n  echo \"Remaining active databases are listed in ${_ACTIVE_DBS_LOG}\"\nelse\n  echo \"Percona MySQL Server databases DRY cleanup completed.\"\n  echo \"To launch real cleanup use 'sqlclean LIVE' command.\"\nfi\necho\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/sqlmagic",
    "content": "#!/bin/bash\n\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_sqlmagic_fix() {\n  if [ -f \"${_sOurce}\" ]; then\n    cat ${_sOurce} \\\n      | sed 's|/\\*!50001 CREATE ALGORITHM=UNDEFINED \\*/|/\\*!50001 CREATE \\*/|g;\n      s|/\\*!50017 DEFINER=`[^`]*`@`[^`]*`\\s*\\*/||g' \\\n      | sed '/\\*!50013 DEFINER=.*/ d' > ${_tarGet}\n    sed -i \"s/^INSERT INTO/INSERT IGNORE INTO/g\" ${_tarGet}\n    wait\n    echo \"Fixed database dump stored as ${_tarGet}\"\n    echo \"Bye\"\n    exit 0\n  else\n    echo \"ERROR: specified file ${_sOurce} does not exist\"\n    echo \"Bye\"\n    exit 1\n  fi\n}\n\n_sqlmagic_convert() {\n  if [ ! -L \"${HOME}/static\" ]; then\n    echo \"ERROR: you must be logged in as a main user, typically o1.ftp\"\n    echo \"Bye\"\n    exit 1\n  fi\n  if [ -z \"${_aliAs}\" ]; then\n    echo \"You must specify a correct drush alias for this site, for example:\"\n    echo \"  sqlmagic convert @sitename to-innodb\"\n    echo \"Bye\"\n    exit 1\n  fi\n  if drush ${_aliAs} status | grep \"Connected\" 2>&1 \\\n    && [ -e \"${HOME}/static\" ]; then\n    _THIS_VN=$(drush ${_aliAs} status \\\n      | grep \"Drupal version\" \\\n      | cut -d: -f2 | awk '{ print $1}')\n    if [ -z \"${_THIS_VN}\" ]; then\n      echo \"ERROR: Drupal version couldn't be determined, so we can't proceed\"\n      echo \"Bye\"\n      exit 1\n    fi\n    if [[ \"${_THIS_VN}\" =~ (^)\"6\" ]]; then\n      echo \"Drupal ${_THIS_VN} detected, ${_kiNd} mode allowed\"\n    else\n      _kiNd=\"to-innodb\"\n      echo \"Drupal ${_THIS_VN} detected, ${_kiNd} mode forced\"\n    fi\n    echo \"It may take a long time, please wait...\"\n    if [ \"${_kiNd}\" = \"to-myisam\" ]; then\n      _THIS_DB=$(drush ${_aliAs} status \\\n        | grep \"Database name\" \\\n        | cut -d: -f2 | awk '{ print $1}')\n      if [ -z \"${_THIS_DB}\" ]; then\n        echo \"ERROR: Database name couldn't be 
determined, so we can't proceed\"\n        echo \"Bye\"\n        exit 1\n      fi\n      _THIS_SHOW=\"select TABLE_NAME FROM information_schema.TABLES \\\n        WHERE TABLE_SCHEMA = '${_THIS_DB}' and TABLE_TYPE = 'BASE TABLE'\"\n      drush ${_aliAs} sql-query \"${_THIS_SHOW}\" \\\n        | tail -n +2 \\\n        | xargs -I '{}' echo \"ALTER TABLE {} ENGINE=MYISAM;\" > \\\n        ${HOME}/static/to_myisam_alter_table.sql\n      sed -i \"s/.*TABLE_NAME.*//g; s/ *$//g; /^$/d\" \\\n        ${HOME}/static/to_myisam_alter_table.sql &> /dev/null\n      wait\n      # Keep the cache/sessions/users/watchdog/accesslog tables on InnoDB;\n      # the regex must stay on one line, because a backslash-newline inside\n      # single quotes reaches perl literally and would break the pattern\n      perl -p -i -e \\\n        's/(ALTER TABLE (cache_[a-z_]+|cache|sessions|users|watchdog|accesslog) ENGINE=)MYISAM/\\1INNODB/g' \\\n        ${HOME}/static/to_myisam_alter_table.sql &> /dev/null\n      wait\n      if [ -s \"${HOME}/static/to_myisam_alter_table.sql\" ]; then\n        drush ${_aliAs} sqlc < ${HOME}/static/to_myisam_alter_table.sql\n        echo \"Site ${_aliAs} status: database converted to MyISAM\"\n        echo \"Bye\"\n      else\n        echo \"ERROR: resulting sql file is empty, so we can't proceed!\"\n        echo \"Bye\"\n        exit 1\n      fi\n      rm -f ${HOME}/static/to_myisam_alter_table.sql\n      exit 0\n    elif [ \"${_kiNd}\" = \"to-innodb\" ]; then\n      _THIS_DB=$(drush ${_aliAs} status \\\n        | grep \"Database name\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $1}')\n      if [ -z \"${_THIS_DB}\" ]; then\n        echo \"ERROR: Database name couldn't be determined, so we can't proceed\"\n        echo \"Bye\"\n        exit 1\n      fi\n      _THIS_SHOW=\"select TABLE_NAME FROM information_schema.TABLES \\\n        WHERE TABLE_SCHEMA = '${_THIS_DB}' and TABLE_TYPE = 'BASE TABLE'\"\n      drush ${_aliAs} sql-query \"${_THIS_SHOW}\" \\\n        | tail -n +2 \\\n        | xargs -I '{}' echo \"ALTER TABLE {} ENGINE=INNODB;\" > \\\n        ${HOME}/static/to_innodb_alter_table.sql\n      sed -i \"s/.*TABLE_NAME.*//g; s/ *$//g; /^$/d\" \\\n       
 ${HOME}/static/to_innodb_alter_table.sql &> /dev/null\n      wait\n      if [ -s \"${HOME}/static/to_innodb_alter_table.sql\" ]; then\n        drush ${_aliAs} sqlc < ${HOME}/static/to_innodb_alter_table.sql\n        echo \"Site ${_aliAs} status: database converted to InnoDB\"\n        echo \"Bye\"\n      else\n        echo \"ERROR: resulting sql file is empty, so we can't proceed!\"\n        echo \"Bye\"\n        exit 1\n      fi\n      rm -f ${HOME}/static/to_innodb_alter_table.sql\n      exit 0\n    else\n      echo \"Invalid target format - use either to-myisam or to-innodb\"\n      echo \"Bye\"\n      exit 1\n    fi\n  else\n    echo \"ERROR: Drush couldn't determine this site's status\"\n    echo \"Bye\"\n    exit 1\n  fi\n}\n\ncase \"$1\" in\n  fix)     _sOurce=\"./$2\"\n           _tarGet=\"./fixed-$2\"\n           _sqlmagic_fix\n  ;;\n  convert) _aliAs=\"$2\"\n           _kiNd=\"$3\"\n           _sqlmagic_convert\n  ;;\n  *)       echo \"Usage: sqlmagic { fix file.sql | convert @sitename ( to-myisam | to-innodb ) }\"\n           exit 1\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/syncpass",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n_vBs=\"/var/backups\"\n_THIS_DB_PORT=3306\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n\n# Remove dangerous stuff from the string.\n_sanitize_string() {\n  echo \"$1\" | sed 's/[\\\\\\/\\^\\?\\>\\`\\#\\\"\\{\\(\\&\\|\\*]//g; s/\\(['\"'\"'\\]\\)//g'\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! -z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n}\n\n_check_generate() {\n  rm -f ${_L_SYS}\n  if [ -e \"${_L_SYS}\" ]; then\n    _ESC_PASS=$(cat ${_L_SYS} 2>&1)\n  else\n    echo \"INFO: Expected file ${_L_SYS} does not exist\"\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n      _PWD_CHARS=64\n    elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n      _PWD_CHARS=32\n    else\n      _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n      if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n        && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n        _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n      else\n        _PWD_CHARS=32\n      fi\n      if [ ! 
-z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n        _PWD_CHARS=128\n      fi\n    fi\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] \\\n      || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n      echo \"INFO: We will generate new random strong password (${_PWD_CHARS})\"\n      if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n        _ESC_PASS=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n      else\n        _RANDPASS_TEST=$(randpass -V 2>&1)\n        if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n          _ESC_PASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n        else\n          _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n          _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n          _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n        fi\n      fi\n      _isPythonTwo=\"$(which python2)\"\n      _isPythonThree=\"$(which python3)\"\n      _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n      if [ -x \"${_isPythonThree}\" ]; then\n        _ENC_PASS=$(python3 -c \"import urllib.parse; print(urllib.parse.quote('''${_ESC_PASS}'''))\")\n      elif [ -x \"${_isPythonTwo}\" ]; then\n        _ENC_PASS=$(python2 -c \"import urllib; print urllib.quote('''${_ESC_PASS}''')\")\n      fi\n      echo \"${_ESC_PASS}\" > ${_L_SYS}\n    else\n       echo \"INFO: We will generate new random password using shuf tool\"\n      _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n      _ENC_PASS=\"${_ESC_PASS}\"\n      echo \"${_ESC_PASS}\" > ${_L_SYS}\n      chmod 0600 ${_L_SYS}\n    fi\n  fi\n  _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n  if [ -z \"${_ESC_PASS}\" ] || [ \"${_LEN_PASS}\" -lt 9 ]; then\n     echo \"WARN: The random password=${_ESC_PASS} does not look good\"\n     echo \"INFO: We will generate new random password using shuf tool\"\n    _ESC_PASS=$(shuf 
-zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n    _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n    _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n    _ENC_PASS=\"${_ESC_PASS}\"\n    echo \"${_ESC_PASS}\" > ${_L_SYS}\n    chmod 0600 ${_L_SYS}\n  fi\n}\n\n_do_syncpass() {\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  if [ ! -z \"${_uname}\" ]; then\n    _find_correct_ip\n    _prH=\"/var/aegir/.drush\"\n    if [ \"${_uname}\" = \"aegir\" ] && [ -e \"/var/aegir/backups\" ]; then\n      _L_SYS=\"/var/aegir/backups/system/.aegir_root.pass.txt\"\n      cp ${_prH}/server_localhost.alias.drushrc.php \\\n        ${_vBs}/server_localhost.alias.drushrc.php.${_uname}-${_NOW} &> /dev/null\n      cp ${_prH}/server_master.alias.drushrc.php \\\n        ${_vBs}/server_master.alias.drushrc.php.${_uname}-${_NOW} &> /dev/null\n      _check_generate\n      chown ${_uname}:${_uname} ${_L_SYS} &> /dev/null\n      if [ ! -z \"${_ESC_PASS}\" ] && [ ! -z \"${_ENC_PASS}\" ]; then\n        mysqladmin -u root flush-hosts &> /dev/null\n        su -s /bin/bash - ${_uname} -c \"drush8 @hostmaster \\\n          sqlq \\\"UPDATE hosting_db_server \\\n          SET db_passwd='${_ESC_PASS}' \\\n          WHERE db_user='aegir_root'\\\"\" &> /dev/null\n        wait\n        _ESC=\"*.*\"\n        _USE_DB_USER=\"aegir_root\"\n        _USE_AEGIR_HOST=\"${_hName}\"\n        _USE_RESOLVEIP=\"${_LOC_IP}\"\n        [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL1 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n        if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n          _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n          mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) 
VALUES ('${_USE_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n        fi\n        _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_AEGIR_HOST}';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_RESOLVEIP}';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'localhost';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'127.0.0.1';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'%';\" &> /dev/null\n        mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n        sed -i \\\n          \"s/mysql:\\/\\/aegir_root:.*/mysql:\\/\\/aegir_root:${_ENC_PASS}@${_SQL_CONNECT}',/g\" \\\n          ${_prH}/server_*.alias.drushrc.php &> /dev/null\n        wait\n        mysqladmin -u root flush-privileges &> /dev/null\n      else\n        echo \"ERROR: Auto-generated password for aegir_root system user\"\n        echo \"ERROR: did not work as expected, please try again\"\n        exit 1\n      fi\n      echo \"INFO: Fixed Ægir Master Instance system user=aegir_root\"\n      echo \"INFO: New system 
password=${_ESC_PASS} encoded=${_ENC_PASS}\"\n      echo \"BYE!\"\n    else\n      if [ -e \"/data/disk/${_uname}\" ]; then\n        _L_SYS=\"/data/disk/${_uname}/.${_uname}.pass.txt\"\n        cp /data/disk/${_uname}/.drush/server_localhost.alias.drushrc.php \\\n          ${_vBs}/server_localhost.alias.drushrc.php.${_uname}-${_NOW} &> /dev/null\n        cp /data/disk/${_uname}/.drush/server_master.alias.drushrc.php \\\n          ${_vBs}/server_master.alias.drushrc.php.${_uname}-${_NOW} &> /dev/null\n        _check_generate\n        chown ${_uname}:users ${_L_SYS} &> /dev/null\n        if [ ! -z \"${_ESC_PASS}\" ] && [ ! -z \"${_ENC_PASS}\" ]; then\n          if [ -e \"/data/conf/${_uname}_use_proxysql.txt\" ]; then\n            _SQL_CONNECT=127.0.0.1\n            _THIS_DB_PORT=6033\n            mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-hosts &> /dev/null\n          else\n            mysqladmin -u root flush-hosts &> /dev/null\n          fi\n          su -s /bin/bash - ${_uname} -c \"drush8 @hostmaster \\\n            sqlq \\\"UPDATE hosting_db_server SET db_passwd='${_ESC_PASS}' \\\n            WHERE db_user='${_uname}'\\\"\" &> /dev/null\n          wait\n          _ESC=\"*.*\"\n          _USE_DB_USER=\"${_uname}\"\n          _USE_AEGIR_HOST=\"${_hName}\"\n          _USE_RESOLVEIP=\"${_LOC_IP}\"\n          [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL2 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n          if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n            _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n            mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_USE_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO 
RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n          fi\n          _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n          ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_AEGIR_HOST}';\" &> /dev/null\n          ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_RESOLVEIP}';\" &> /dev/null\n          ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'localhost';\" &> /dev/null\n          ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'127.0.0.1';\" &> /dev/null\n          ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'%';\" &> /dev/null\n          mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n          sed -i \\\n            \"s/mysql:\\/\\/${_uname}:.*/mysql:\\/\\/${_uname}:${_ENC_PASS}@${_SQL_CONNECT}',/g\" \\\n            /data/disk/${_uname}/.drush/server_*.alias.drushrc.php &> /dev/null\n          wait\n          if [ -e \"/data/conf/${_uname}_use_proxysql.txt\" ]; then\n            _SQL_CONNECT=127.0.0.1\n            _THIS_DB_PORT=6033\n            mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges &> /dev/null\n          else\n            mysqladmin -u root flush-privileges &> /dev/null\n          fi\n        else\n          echo \"ERROR: 
Auto-generated password for ${_uname} system user\"\n          echo \"ERROR: did not work as expected, please try again\"\n          exit 1\n        fi\n        _L_SYS_PHP=\"/data/disk/${_uname}/.${_uname}.pass.php\"\n        echo \"<?php\" > ${_L_SYS_PHP}\n        echo \"\\$oct_db_user = \\\"${_uname}\\\";\" >> ${_L_SYS_PHP}\n        echo \"\\$oct_db_pass = \\\"${_ESC_PASS}\\\";\" >> ${_L_SYS_PHP}\n        echo \"\\$oct_db_host = \\\"${_THIS_DB_HOST}\\\";\" >> ${_L_SYS_PHP}\n        echo \"\\$oct_db_port = \\\"${_THIS_DB_PORT}\\\";\" >> ${_L_SYS_PHP}\n        echo \"\\$oct_db_dirs = \\\"/data/disk/${_uname}/backups\\\";\" >> ${_L_SYS_PHP}\n        chown ${_uname}:users ${_L_SYS_PHP}\n        chmod 0600 ${_L_SYS_PHP}\n        echo \"INFO: Fixed Ægir Satellite Instance system user=${_uname}\"\n        echo \"INFO: New system password=${_ESC_PASS} encoded=${_ENC_PASS}\"\n        echo \"INFO: With Satellite oct_db_host=${_THIS_DB_HOST}\"\n        echo \"INFO: With Satellite oct_db_port=${_THIS_DB_PORT}\"\n        echo \"BYE!\"\n      else\n        echo \"ERROR: You must specify the existing Ægir \\\n          instance username to fix\"\n        exit 1\n      fi\n    fi\n    exit 0\n  else\n    echo \"ERROR: You must specify the existing Ægir instance username to fix\"\n    exit 1\n  fi\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    chmod a+w /dev/null\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _SQL_CONNECT=localhost\n    elif [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n      _SQL_CONNECT=127.0.0.1\n    else\n      _SQL_CONNECT=\"${_THIS_DB_HOST}\"\n    fi\n    if [ \"${_THIS_DB_HOST}\" = \"${_MY_OWNIP}\" ]; then\n      _SQL_CONNECT=localhost\n    fi\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n 
 fi\n  _hName=\"$(tr -d '\\n' < /etc/hostname 2>/dev/null || hostname -f 2>/dev/null)\"\n}\n\ncase \"$1\" in\n  fix) _uname=\"$2\"\n       _check_root\n       _do_syncpass\n  ;;\n  *)   echo \"Usage: syncpass fix {aegir|o1}\"\n       exit 1\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/synproxy",
    "content": "#!/bin/bash\n\n#\n# synproxy — SYNPROXY (TCP/443/80) + QUIC limiter (UDP/443) for CSF\n#\n# - Hooks CSF post scripts to call reassert tool on every reload\n# - No background loops, removes NEW->DROP, purges QUIC when disabled\n# - Adds lo bypass + optional trusted bypass, adds ACK-only NEW accept shim\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\nset -Eeuo pipefail\ntrap '_rc=$?; echo \"[INSTALL] line ${LINENO}: exit ${_rc}\" >&2; exit ${_rc}' ERR\n\n# -------------------- TUNABLES --------------------\n_TCP_PORTS=\"${_TCP_PORTS:-443 80}\"           # Protect these TCP ports with SYNPROXY\n_ENABLE_QUIC_LIMITER=\"${_ENABLE_QUIC_LIMITER:-NO}\"  # YES to rate-limit UDP/${_QUIC_PORT}\n_QUIC_PORT=\"${_QUIC_PORT:-443}\"              # QUIC UDP port (ignored if limiter disabled)\n_RATE_PER_IP=\"${_RATE_PER_IP:-80/second}\"    # QUIC per-IP\n_BURST_PER_IP=\"${_BURST_PER_IP:-160}\"\n_RATE_PER_24=\"${_RATE_PER_24:-800/second}\"   # QUIC per-/24\n_BURST_PER_24=\"${_BURST_PER_24:-1600}\"\n_TRUSTED_SRC=\"${_TRUSTED_SRC:-}\"             # space-separated CIDRs to bypass SYNPROXY (optional)\n_APPLY_NOW=\"${_APPLY_NOW:-YES}\"              # Run a CSF reload + reassert at the end\n_SYSCTL_FILE=\"/etc/sysctl.d/60-synproxy.conf\"\n# -------------------------------------------------\n\n# -------------------- prerequisites & helpers ------------------------\n_need(){ command -v \"$1\" >/dev/null 2>&1 || { echo \"[INSTALL] Missing: $1\" >&2; exit 1; }; }\n_need iptables; _need sed; _need grep; _need awk; _need modprobe; _need sysctl\n_CSF_BIN=\"csf\"; command -v \"${_CSF_BIN}\" >/dev/null 2>&1 || _CSF_BIN=\"\"\n\n_IPT_HAS_W=\"NO\"; iptables -w -L -n >/dev/null 2>&1 && 
_IPT_HAS_W=\"YES\"\n_ipt(){ if [ \"${_IPT_HAS_W}\" = \"YES\" ]; then iptables -w \"$@\"; else iptables \"$@\"; fi; }\n\n_LOG=\"/var/log/synproxy-install.log\"; : > \"${_LOG}\"\n_msg(){ echo \"[INSTALL] $*\" | tee -a \"${_LOG}\"; }\n\n_CSFCONF=\"/etc/csf/csf.conf\"\n_CSFPOST=\"/etc/csf/csfpost.sh\"\n_INC_DIR=\"/etc/csf/csfpost.d\"\n\n_REASSERT=\"/opt/local/bin/synproxy_reassert\"\n_ADDON=\"${_INC_DIR}/99-synproxy-reassert.sh\"\n_WATCH_BIN=\"/usr/local/sbin/watch_synproxy\"\n_ROLLBACK=\"/opt/local/bin/synproxy_rollback\"\n\n# Old artifacts we’ll clean up\n_OLD_SYN_ADDON=\"${_INC_DIR}/synproxy.sh\"\n_OLD_QUIC_ADDONS_GLOB=\"${_INC_DIR}/quic_udp*_limit.sh\"\n_ASSERT_BIN=\"/usr/local/sbin/synproxy_assert\"\n_INITD=\"/etc/init.d/synproxy-assert\"\n_CRON=\"/etc/cron.d/synproxy-assert\"\n_MONITOR_OLD=\"/usr/local/sbin/mon_synproxy\"\n_WATCHER_OLD=\"/usr/local/sbin/watch_synproxy\"\n_OLD_ROLLBACK=\"/usr/local/sbin/synproxy_rollback\"\n# ---------------------------------------------------------------------\n\n_msg \"1) Load SYNPROXY modules & persist nf_synproxy_core\"\nmodprobe xt_CT 2>/dev/null || true\nmodprobe xt_conntrack 2>/dev/null || true\nmodprobe xt_SYNPROXY 2>/dev/null || true\nmodprobe nf_synproxy_core 2>/dev/null || true\ngrep -qs '^nf_synproxy_core$' /etc/modules || echo nf_synproxy_core >> /etc/modules\n\n_msg \"2) Minimal TCP sysctls (non-clobber of other files)\"\nmkdir -p /etc/sysctl.d\n: > \"${_SYSCTL_FILE}\"\n_set_sys(){ local _k=\"$1\" _v=\"$2\" _cur; _cur=\"$(sysctl -n \"${_k}\" 2>/dev/null || echo)\"; [ \"${_cur}\" = \"${_v}\" ] || sysctl -w \"${_k}=${_v}\" >/dev/null; printf '%s = %s\\n' \"${_k}\" \"${_v}\" >> \"${_SYSCTL_FILE}\"; }\n_set_sys net.ipv4.tcp_syncookies 1\n_set_sys net.ipv4.tcp_timestamps 1\n_set_sys net.ipv4.tcp_window_scaling 1\nsysctl -p \"${_SYSCTL_FILE}\" >/dev/null || true\n\n_msg \"3) Ensure csfpost executes ${_INC_DIR}/*.sh with bash (not source)\"\nmkdir -p \"${_INC_DIR}\"\nif [ ! 
-e \"${_CSFPOST}\" ]; then echo \"#!/bin/bash\" > \"${_CSFPOST}\"; chmod 700 \"${_CSFPOST}\"; fi\ncp -a \"${_CSFPOST}\" \"${_CSFPOST}.bak.$(date +%s)\"\n# remove any prior include block we added\nsed -i '/^# ===== BEGIN synproxy include/,/^# ===== END synproxy include/d' \"${_CSFPOST}\"\ncat >> \"${_CSFPOST}\" <<'EOF'\n\n# ===== BEGIN synproxy include (bash exec) =====\nif [ -d /etc/csf/csfpost.d ]; then\n  for _inc in /etc/csf/csfpost.d/*.sh; do\n    [ -r \"${_inc}\" ] && /bin/bash \"${_inc}\"\n  done\nfi\n# ===== END synproxy include =====\nEOF\nchmod 700 \"${_CSFPOST}\"\n\n_msg \"4) Disable CSF SYNFLOOD (SYNPROXY supersedes it); warn for PORTFLOOD 80/443\"\nif [ -f \"${_CSFCONF}\" ]; then\n  cp -a \"${_CSFCONF}\" \"${_CSFCONF}.bak.$(date +%s)\"\n  sed -i 's/^SYNFLOOD *= *\".*\"/SYNFLOOD = \"0\"/' \"${_CSFCONF}\" || true\n  _pf=\"$(grep -E '^PORTFLOOD' \"${_CSFCONF}\" || true)\"\n  if echo \"${_pf}\" | grep -Eq '(^|,)(80|443);tcp;'; then\n    _msg \"WARNING: PORTFLOOD includes 80/443 — consider removing those to avoid top-of-chain inserts above SYNPROXY.\"\n  fi\nfi\n\n_msg \"5) Remove older/broken artifacts (safe if absent)\"\nrm -f \"${_OLD_SYN_ADDON}\" 2>/dev/null || true\n[ -d \"${_INC_DIR}\" ] && find \"${_INC_DIR}\" -maxdepth 1 -type f -name \"$(basename \"${_OLD_QUIC_ADDONS_GLOB}\")\" -exec rm -f {} + 2>/dev/null || true\nrm -f \"${_ASSERT_BIN}\" \"${_CRON}\" \"${_MONITOR_OLD}\" \"${_OLD_ROLLBACK}\" 2>/dev/null || true\nif [ -x \"${_INITD}\" ]; then\n  \"${_INITD}\" stop >/dev/null 2>&1 || true\n  command -v update-rc.d >/dev/null 2>&1 && update-rc.d synproxy-assert remove >/dev/null 2>&1 || true\n  rm -f \"${_INITD}\" 2>/dev/null || true\nfi\n\n_msg \"6) Write CSF add-on to call reassert on every CSF reload: ${_ADDON}\"\ncat > \"${_ADDON}\" <<EOF\n#!/bin/bash\n# CSF post hook — call SYNPROXY reassert tool (idempotent, safe quoting)\nset -u\n\n_REASSERT=\"${_REASSERT}\"\n\n# Params (can be edited later right here if 
needed)\n_TCP_PORTS=\"${_TCP_PORTS}\"\n_ENABLE_QUIC_LIMITER=\"${_ENABLE_QUIC_LIMITER}\"\n_QUIC_PORT=\"${_QUIC_PORT}\"\n_RATE_PER_IP=\"${_RATE_PER_IP}\"\n_BURST_PER_IP=\"${_BURST_PER_IP}\"\n_RATE_PER_24=\"${_RATE_PER_24}\"\n_BURST_PER_24=\"${_BURST_PER_24}\"\n_TRUSTED_SRC=\"${_TRUSTED_SRC}\"\n\n# Build argv safely\ndeclare -a _ARGS\n_ARGS=( -p \"\\${_TCP_PORTS}\" )\nif [ \"\\${_ENABLE_QUIC_LIMITER}\" != \"YES\" ]; then\n  _ARGS+=( --no-quic )\nelse\n  [ -n \"\\${_QUIC_PORT}\" ] && _ARGS+=( --quic-port \"\\${_QUIC_PORT}\" )\n  _ARGS+=( --rate-ip \"\\${_RATE_PER_IP}\" --burst-ip \"\\${_BURST_PER_IP}\" --rate-24 \"\\${_RATE_PER_24}\" --burst-24 \"\\${_BURST_PER_24}\" )\nfi\n[ -n \"\\${_TRUSTED_SRC}\" ] && _ARGS+=( --trust \"\\${_TRUSTED_SRC}\" )\n_ARGS+=( -q )\n\n# Exec the tool with proper quoting\nexec \"\\${_REASSERT}\" \"\\${_ARGS[@]}\"\nEOF\nchmod 700 \"${_ADDON}\"\n\n_msg \"7) Install the new watch tool: ${_WATCH_BIN}\"\ncat > \"${_WATCH_BIN}\" <<'EOWATCH'\n#!/bin/bash\n# /usr/local/sbin/watch_synproxy — Devuan-friendly live view for SYNPROXY.\n# Shows: INPUT top, LOCALINPUT (if any), raw/PREROUTING NOTRACK hits, SYNPROXY_EARLY with counters.\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\nset -u\n\n_INTERVAL=\"${_INTERVAL:-2}\"\n_TCP_PORTS=\"${_TCP_PORTS:-443 80}\"\n_SHOW_INPUT_LINES=\"${_SHOW_INPUT_LINES:-20}\"\n\n_usage(){\n  cat <<USAGE\nUsage: watch_synproxy [options]\n  -n SEC         Refresh interval (default: ${_INTERVAL})\n  -p \"443 80\"    Ports to highlight in raw NOTRACK view (default: \"${_TCP_PORTS}\")\n  -l NUM         How many INPUT lines to show (default: ${_SHOW_INPUT_LINES})\n  -h             Help\nTip: run after a CSF change: csf -r && 
synproxy_reassert -p \"443 80\" --no-quic -q\nUSAGE\n}\n\nwhile [ $# -gt 0 ]; do\n  case \"$1\" in\n    -n) _INTERVAL=\"$2\"; shift 2;;\n    -p) _TCP_PORTS=\"$2\"; shift 2;;\n    -l) _SHOW_INPUT_LINES=\"$2\"; shift 2;;\n    -h|--help) _usage; exit 0;;\n    *) echo \"Unknown option: $1\" >&2; _usage; exit 2;;\n  esac\ndone\n\n_has(){ command -v \"$1\" >/dev/null 2>&1; }\n_IPT_HAS_W=\"NO\"; iptables -w -L -n >/dev/null 2>&1 && _IPT_HAS_W=\"YES\"\n_ipt(){ if [ \"${_IPT_HAS_W}\" = \"YES\" ]; then iptables -w \"$@\"; else iptables \"$@\"; fi; }\n_chain_exists(){ _ipt -L \"$1\" -n >/dev/null 2>&1; }\n\nwhile :; do\n  clear\n  echo \"=== SYNPROXY monitor @ $(date '+%F %T %Z')  interval=${_INTERVAL}s  ports: ${_TCP_PORTS}\"\n  echo\n\n  echo \"-- INPUT (top ${_SHOW_INPUT_LINES})\"\n  _ipt -L INPUT -n -v --line-numbers | sed -n \"1,${_SHOW_INPUT_LINES}p\"\n  if _chain_exists LOCALINPUT; then\n    echo; echo \"-- LOCALINPUT (top ${_SHOW_INPUT_LINES})\"\n    _ipt -L LOCALINPUT -n -v --line-numbers | sed -n \"1,${_SHOW_INPUT_LINES}p\"\n  fi\n\n  echo; echo \"-- raw/PREROUTING (NOTRACK hits for web ports)\"\n  _ipt -t raw -L PREROUTING -n -v --line-numbers \\\n    | awk -v PORTS=\"${_TCP_PORTS}\" '\n        BEGIN{ split(PORTS,a,\" \"); for(i in a) p[a[i]]=1 }\n        /CT --notrack|NOTRACK/ && /--syn/ && /--dport/ {\n          port=\"\"; for(i=1;i<=NF;i++){ if($i==\"dpt:\" || $i==\"--dport\"){ port=$(i+1) } }\n          if(port in p){ print }\n        }'\n\n  echo; echo \"-- SYNPROXY_EARLY (full chain with counters)\"\n  if _chain_exists SYNPROXY_EARLY; then\n    _ipt -L SYNPROXY_EARLY -n -v --line-numbers\n  else\n    echo \"SYNPROXY_EARLY not found\"\n  fi\n\n  echo; echo \"(q to quit)\"; read -t \"${_INTERVAL}\" -n 1 _k && [ \"${_k:-}\" = \"q\" ] && exit 0\ndone\nEOWATCH\nchmod 755 \"${_WATCH_BIN}\"\n\nif [ -n \"${_CSF_BIN}\" ]; then\n  _msg \"8) Reload CSF once to pick up add-on\"\n  ${_CSF_BIN} -r || true\nelse\n  _msg \"8) CSF not found; skipping CSF reload\"\nfi\n\nif [ 
\"${_APPLY_NOW}\" = \"YES\" ]; then\n  _msg \"9) Apply reassert now (twice to beat lfd churn)\"\n  \"${_REASSERT}\" -p \"${_TCP_PORTS}\" $([ \"${_ENABLE_QUIC_LIMITER}\" = \"YES\" ] && echo \"--quic-port ${_QUIC_PORT}\" || echo \"--no-quic\") \\\n     --rate-ip \"${_RATE_PER_IP}\" --burst-ip \"${_BURST_PER_IP}\" --rate-24 \"${_RATE_PER_24}\" --burst-24 \"${_BURST_PER_24}\" \\\n     $([ -n \"${_TRUSTED_SRC}\" ] && printf \"%s\" \"--trust \\\"${_TRUSTED_SRC}\\\"\") -q || true\n  sleep 1\n  \"${_REASSERT}\" -p \"${_TCP_PORTS}\" $([ \"${_ENABLE_QUIC_LIMITER}\" = \"YES\" ] && echo \"--quic-port ${_QUIC_PORT}\" || echo \"--no-quic\") -q || true\nfi\n\necho\necho \"---------------------------------------------------------------------\"\necho \"SYNPROXY installed with updated logic.\"\necho \" Reassert tool: ${_REASSERT}\"\necho \" CSF hook:      ${_ADDON}\"\necho \" Watch tool:    ${_WATCH_BIN}    (e.g., ${_WATCH_BIN} -n 2 -p \\\"${_TCP_PORTS}\\\")\"\necho\necho \"Use after ANY CSF mutation (example):\"\necho \"  csf -r && ${_REASSERT} -p \\\"${_TCP_PORTS}\\\" --no-quic -q\"\n[ \"${_ENABLE_QUIC_LIMITER}\" = \"YES\" ] && echo \"QUIC limiter:   ENABLED on UDP/${_QUIC_PORT}  (rates: ${_RATE_PER_IP}, ${_RATE_PER_24})\" || echo \"QUIC limiter:   DISABLED (no UDP/${_QUIC_PORT} drops left)\"\n[ -n \"${_TRUSTED_SRC}\" ] && echo \"Trusted bypass: ${_TRUSTED_SRC}\" || true\necho\necho \"Verify:\"\necho \"  iptables -S INPUT | sed -n '1,25p'     # INPUT#1 should be -j SYNPROXY_EARLY ; no udp/${_QUIC_PORT} INVALID drops if disabled\"\necho \"  iptables -S SYNPROXY_EARLY | sed -n '1,120p'  # lo RETURN; EST/REL ACCEPT; SYNPROXY; ACK-NEW ACCEPT; INVALID DROP\"\necho \"Log: ${_LOG}\"\necho\necho \" Uninstall tool: ${_ROLLBACK}\"\necho \"---------------------------------------------------------------------\"\n"
  },
  {
    "path": "aegir/tools/bin/synproxy_hook_fix",
    "content": "#!/bin/bash\n\n# synproxy_hook_fix — replace CSF post-hook with safe array-based call\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\nset -Eeuo pipefail\ntrap '_rc=$?; echo \"[FIX] line ${LINENO}: exit ${_rc}\" >&2; exit ${_rc}' ERR\n\n# ---- tunables (same defaults you showed; override via env if you like) ----\n_TCP_PORTS=\"${_TCP_PORTS:-443 80}\"\n_ENABLE_QUIC_LIMITER=\"${_ENABLE_QUIC_LIMITER:-NO}\"   # YES/NO\n_QUIC_PORT=\"${_QUIC_PORT:-443}\"\n_RATE_PER_IP=\"${_RATE_PER_IP:-80/second}\"\n_BURST_PER_IP=\"${_BURST_PER_IP:-160}\"\n_RATE_PER_24=\"${_RATE_PER_24:-800/second}\"\n_BURST_PER_24=\"${_BURST_PER_24:-1600}\"\n_TRUSTED_SRC=\"${_TRUSTED_SRC:-}\"                     # optional space-separated CIDRs\n\n# ---- paths ----\n_INC_DIR=\"/etc/csf/csfpost.d\"\nmkdir -p \"${_INC_DIR}\"\n\n# Prefer your moved tool; fall back to the default location.\n_REASSERT_PATH=\"/opt/local/bin/synproxy_reassert\"\n[ -x \"${_REASSERT_PATH}\" ] || _REASSERT_PATH=\"/usr/local/sbin/csf_synproxy_reassert.sh\"\n\n_HOOK=\"${_INC_DIR}/99-synproxy-reassert.sh\"\n\n# ---- write the new hook (uses a bash array to preserve the \"443 80\" as one arg) ----\ncat > \"${_HOOK}\" <<EOF\n#!/bin/bash\n# CSF post hook — call SYNPROXY reassert tool (idempotent, safe quoting)\nset -u\n\n_REASSERT=\"${_REASSERT_PATH}\"\n\n# Params (can be edited later right here if needed)\n_TCP_PORTS=\"${_TCP_PORTS}\"\n_ENABLE_QUIC_LIMITER=\"${_ENABLE_QUIC_LIMITER}\"\n_QUIC_PORT=\"${_QUIC_PORT}\"\n_RATE_PER_IP=\"${_RATE_PER_IP}\"\n_BURST_PER_IP=\"${_BURST_PER_IP}\"\n_RATE_PER_24=\"${_RATE_PER_24}\"\n_BURST_PER_24=\"${_BURST_PER_24}\"\n_TRUSTED_SRC=\"${_TRUSTED_SRC}\"\n\n# Build argv safely\ndeclare -a 
_ARGS\n_ARGS=( -p \"\\${_TCP_PORTS}\" )\nif [ \"\\${_ENABLE_QUIC_LIMITER}\" != \"YES\" ]; then\n  _ARGS+=( --no-quic )\nelse\n  [ -n \"\\${_QUIC_PORT}\" ] && _ARGS+=( --quic-port \"\\${_QUIC_PORT}\" )\n  _ARGS+=( --rate-ip \"\\${_RATE_PER_IP}\" --burst-ip \"\\${_BURST_PER_IP}\" --rate-24 \"\\${_RATE_PER_24}\" --burst-24 \"\\${_BURST_PER_24}\" )\nfi\n[ -n \"\\${_TRUSTED_SRC}\" ] && _ARGS+=( --trust \"\\${_TRUSTED_SRC}\" )\n_ARGS+=( -q )\n\n# Exec the tool with proper quoting\nexec \"\\${_REASSERT}\" \"\\${_ARGS[@]}\"\nEOF\n\nchmod 700 \"${_HOOK}\"\n\n# ---- sanity: show what we wrote ----\necho \"[FIX] Wrote ${_HOOK} -> uses: ${_REASSERT_PATH}\"\nsed -n '1,80p' \"${_HOOK}\"\n\n# ---- optional: reload CSF once so hook runs automatically next time ----\nif command -v csf >/dev/null 2>&1; then\n  echo \"[FIX] Reload CSF now (csf -r) so the hook is in effect...\"\n  csf -r || true\nfi\n\necho \"[FIX] Done. After any CSF change, the hook will call the tool as:\"\necho \"      ${_REASSERT_PATH} -p \\\"${_TCP_PORTS}\\\" $([ \"${_ENABLE_QUIC_LIMITER}\" = \"YES\" ] && echo \"--quic-port ${_QUIC_PORT}\" || echo \"--no-quic\") -q\"\n"
  },
  {
    "path": "aegir/tools/bin/synproxy_monitor",
    "content": "#!/bin/bash\n\n#\n# synproxy_monitor — live view for SYNPROXY.\n# Shows: INPUT/OUTPUT top, LOCALINPUT (if any), raw/PREROUTING NOTRACK hits, SYNPROXY_EARLY with counters.\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\nset -u\n\n_INTERVAL=\"${_INTERVAL:-2}\"\n_TCP_PORTS=\"${_TCP_PORTS:-443 80}\"\n_SHOW_INPUT_LINES=\"${_SHOW_INPUT_LINES:-8}\"\n_SHOW_OUTPUT_LINES=\"${_SHOW_OUTPUT_LINES:-8}\"\n\n_usage(){\n  cat <<USAGE\nUsage: synproxy_monitor [options]\n  -n SEC         Refresh interval (default: ${_INTERVAL})\n  -p \"443 80\"    Ports to highlight in raw NOTRACK view (default: \"${_TCP_PORTS}\")\n  -l NUM         How many INPUT lines to show (default: ${_SHOW_INPUT_LINES})\n  -h             Help\nTip: run after a CSF change: csf -r && synproxy_reassert -p \"443 80\" --no-quic -q\nUSAGE\n}\n\nwhile [ $# -gt 0 ]; do\n  case \"$1\" in\n    -n) _INTERVAL=\"$2\"; shift 2;;\n    -p) _TCP_PORTS=\"$2\"; shift 2;;\n    -l) _SHOW_INPUT_LINES=\"$2\"; shift 2;;\n    -h|--help) _usage; exit 0;;\n    *) echo \"Unknown option: $1\" >&2; _usage; exit 2;;\n  esac\ndone\n\n_has(){ command -v \"$1\" >/dev/null 2>&1; }\n_IPT_HAS_W=\"NO\"; iptables -w -L -n >/dev/null 2>&1 && _IPT_HAS_W=\"YES\"\n_ipt(){ if [ \"${_IPT_HAS_W}\" = \"YES\" ]; then iptables -w \"$@\"; else iptables \"$@\"; fi; }\n_chain_exists(){ _ipt -L \"$1\" -n >/dev/null 2>&1; }\n\nwhile :; do\n  clear\n  echo \"=== SYNPROXY monitor @ $(date '+%F %T %Z')  interval=${_INTERVAL}s  ports: ${_TCP_PORTS}\"\n  echo\n\n  echo \"-- INPUT (top ${_SHOW_INPUT_LINES})\"\n  _ipt -L INPUT -n -v --line-numbers | sed -n \"1,${_SHOW_INPUT_LINES}p\"\n  if _chain_exists LOCALINPUT; then\n    echo; echo \"-- LOCALINPUT (top ${_SHOW_INPUT_LINES})\"\n    _ipt -L LOCALINPUT -n -v --line-numbers | sed -n \"1,${_SHOW_INPUT_LINES}p\"\n  fi\n\n  echo\n  echo \"-- OUTPUT (top ${_SHOW_OUTPUT_LINES})\"\n  _ipt -L OUTPUT -n -v --line-numbers | sed -n \"1,${_SHOW_OUTPUT_LINES}p\"\n  if _chain_exists LOCALOUTPUT; then\n    echo; echo \"-- LOCALOUTPUT (top ${_SHOW_OUTPUT_LINES})\"\n    _ipt -L LOCALOUTPUT -n -v --line-numbers | sed -n \"1,${_SHOW_OUTPUT_LINES}p\"\n  fi\n\n  echo; echo \"-- raw/PREROUTING (NOTRACK hits for web ports)\"\n  _ipt -t raw -L PREROUTING -n -v --line-numbers \\\n    | awk -v PORTS=\"${_TCP_PORTS}\" '\n        BEGIN{ split(PORTS,a,\" \"); for(i in a) p[a[i]]=1 }\n        /CT --notrack|NOTRACK/ && /--syn/ && /--dport/ {\n          port=\"\"; for(i=1;i<=NF;i++){ if($i==\"dpt:\" || $i==\"--dport\"){ port=$(i+1) } }\n          if(port in p){ print }\n        }'\n\n  echo; echo \"-- SYNPROXY_EARLY (full chain with counters)\"\n  if _chain_exists SYNPROXY_EARLY; then\n    _ipt -L SYNPROXY_EARLY -n -v --line-numbers\n  else\n    echo \"SYNPROXY_EARLY not found\"\n  fi\n\n  echo; echo \"(q to quit)\"; read -t \"${_INTERVAL}\" -n 1 _k && [ \"${_k:-}\" = \"q\" ] && exit 0\ndone\n"
  },
  {
    "path": "aegir/tools/bin/synproxy_reassert",
    "content": "#!/bin/bash\n\n#\n# SYNPROXY limiter for CSF/Devuan (IPv4).\n# QUIC limiter flags accepted (OFF by default).\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\nset -u\n\n# ---------- Defaults ----------\n_TCP_PORTS=\"${_TCP_PORTS:-443 80}\"\n_MSS=\"${_MSS:-1400}\"\n_TRUSTED_SRC=\"${_TRUSTED_SRC:-}\"\n_REBUILD_CHAIN=\"${_REBUILD_CHAIN:-YES}\"\n_STRICT=\"${_STRICT:-NO}\"\n\n_ENABLE_QUIC_LIMITER=\"${_ENABLE_QUIC_LIMITER:-NO}\"\n_QUIC_PORT=\"${_QUIC_PORT:-443}\"\n_RATE_PER_IP=\"${_RATE_PER_IP:-80/second}\"\n_BURST_PER_IP=\"${_BURST_PER_IP:-160}\"\n_RATE_PER_24=\"${_RATE_PER_24:-800/second}\"\n_BURST_PER_24=\"${_BURST_PER_24:-1600}\"\n\n_RETRY_MAX=\"${_RETRY_MAX:-120}\"     # ~30s with default sleep\n_RETRY_SLEEP=\"${_RETRY_SLEEP:-0.25}\"\n\n_QUIET=\"${_QUIET:-NO}\"\n_VERBOSE=\"${_VERBOSE:-NO}\"\n\n# ---------- Logging ----------\n_msg(){ [ \"${_QUIET}\" = \"YES\" ] && return 0; 
echo \"[REASSERT] $*\"; }\n_v(){ [ \"${_VERBOSE}\" = \"YES\" ] && _msg \"$@\"; }\n_die(){ echo \"[REASSERT][ERROR] $*\" >&2; exit 1; }\n\n# ---------- Args ----------\nwhile [ $# -gt 0 ]; do\n  case \"$1\" in\n    -p|--ports)      _TCP_PORTS=\"$2\"; shift 2;;\n    --mss)           _MSS=\"$2\"; shift 2;;\n    --trust)         _TRUSTED_SRC=\"${2:-}\"; shift 2;;\n    --rebuild-chain) _REBUILD_CHAIN=\"YES\"; shift 1;;\n    --no-rebuild)    _REBUILD_CHAIN=\"NO\"; shift 1;;\n    --strict)        _STRICT=\"YES\"; shift 1;;\n    --no-strict)     _STRICT=\"NO\"; shift 1;;\n    --no-quic)       _ENABLE_QUIC_LIMITER=\"NO\"; shift 1;;\n    --quic-port)     _QUIC_PORT=\"$2\"; shift 2;;\n    --rate-ip)       _RATE_PER_IP=\"$2\"; shift 2;;\n    --burst-ip)      _BURST_PER_IP=\"$2\"; shift 2;;\n    --rate-24)       _RATE_PER_24=\"$2\"; shift 2;;\n    --burst-24)      _BURST_PER_24=\"$2\"; shift 2;;\n    -q|--quiet)      _QUIET=\"YES\"; _VERBOSE=\"NO\"; shift 1;;\n    -v|--verbose)    _VERBOSE=\"YES\"; _QUIET=\"NO\"; shift 1;;\n    -h|--help)\n      cat <<USAGE\nUsage: synproxy_reassert [options]\n  -p, --ports \"443 80\"   TCP ports to protect (default: \"443 80\")\n      --mss 1400         MSS for SYNPROXY cookie (default: 1400)\n      --trust \"CIDR ...\" Bypass SYNPROXY for sources (space-separated)\n      --rebuild-chain    Flush & rebuild SYNPROXY_EARLY/EGRESS (default)\n      --no-rebuild       Ensure-only (no flush)\n      --strict           Add per-port NEW !SYN -> DROP (default: off)\n      --no-strict        Disable the above\n  QUIC limiter (OFF unless enabled):\n      --no-quic          Disable limiter (default)\n      --quic-port 443    QUIC UDP port\n      --rate-ip X/sec    Per-IP rate (default: 80/second)\n      --burst-ip N       Per-IP burst (default: 160)\n      --rate-24 X/sec    Per-/24 rate (default: 800/second)\n      --burst-24 N       Per-/24 burst (default: 1600)\n  -q, --quiet            Minimal output\n  -v, --verbose          Extra output\nExamples:\n  
_ENABLE_QUIC_LIMITER=YES synproxy_reassert -p \"443\" --quic-port 443 -v\n  synproxy_reassert -p \"443 80\" --no-quic -q\nUSAGE\n      exit 0;;\n    *) _die \"Unknown option: $1\";;\n  esac\ndone\n\n# ---------- Tools ----------\ncommand -v iptables >/dev/null 2>&1 || { _msg \"iptables not found\"; exit 0; }\ncommand -v modprobe >/dev/null 2>&1 || true\ncommand -v sleep >/dev/null 2>&1 || _die \"sleep not found\"\n\n_IPT_HAS_W=\"NO\"; iptables -w -L -n >/dev/null 2>&1 && _IPT_HAS_W=\"YES\"\n_ipt(){ if [ \"${_IPT_HAS_W}\" = \"YES\" ]; then iptables -w \"$@\"; else iptables \"$@\"; fi; }\n_chain_exists(){ _ipt -L \"$1\" -n >/dev/null 2>&1; }\n\n# -------- modules (best-effort) ----------------------------------------------\nmodprobe xt_CT 2>/dev/null || true\nmodprobe xt_conntrack 2>/dev/null || true\nmodprobe xt_state 2>/dev/null || true\nmodprobe xt_SYNPROXY 2>/dev/null || true\nmodprobe nf_synproxy_core 2>/dev/null || true\nmodprobe xt_hashlimit 2>/dev/null || true\n\ngrep -wq SYNPROXY /proc/net/ip_tables_targets 2>/dev/null \\\n  || _die \"Kernel target SYNPROXY not available (xt_SYNPROXY/nf_synproxy_core not loaded?)\"\n\n# ---------- Detect NOTRACK flavor ----------\n_NOTRACK_KIND=\"\"\nif iptables -t raw -I PREROUTING 1 -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport 9 -j CT --notrack 2>/dev/null; then\n  iptables -t raw -D PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport 9 -j CT --notrack >/dev/null 2>&1 || true\n  _NOTRACK_KIND=\"CT\"\nelif iptables -t raw -I PREROUTING 1 -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport 9 -j NOTRACK 2>/dev/null; then\n  iptables -t raw -D PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport 9 -j NOTRACK >/dev/null 2>&1 || true\n  _NOTRACK_KIND=\"LEGACY\"\nelse\n  _die \"Kernel lacks CT --notrack/NOTRACK support\"\nfi\n_v \"NOTRACK flavor: ${_NOTRACK_KIND}\"\n\n# ---------- Helpers ----------\n_add_or_die(){\n  # Run iptables once, capturing stderr, so a failure is not re-run just to log it\n  local _out\n  if ! _out=\"$(iptables \"$@\" 2>&1)\"; then\n    echo \"[REASSERT][iptables] iptables $* failed:\" >&2\n    printf '%s\\n' \"${_out}\" | sed 's/^/[REASSERT][iptables] /' >&2 || true\n    _die \"iptables insertion failed (see above)\"\n  fi\n}\n\n# ensure raw NOTRACK for SYN to a port (insert at top; dedupe)\n_ensure_raw_notrack_port(){\n  local _p=\"$1\"\n  if [ \"${_NOTRACK_KIND}\" = \"CT\" ]; then\n    while _ipt -t raw -C PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j CT --notrack 2>/dev/null; do\n      _ipt -t raw -D PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j CT --notrack || break\n    done\n    _add_or_die -t raw -I PREROUTING 1 -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j CT --notrack\n  else\n    while _ipt -t raw -C PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j NOTRACK 2>/dev/null; do\n      _ipt -t raw -D PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j NOTRACK || break\n    done\n    _add_or_die -t raw -I PREROUTING 1 -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j NOTRACK\n  fi\n}\n\n# ensure INPUT/LOCALINPUT has early jump (we try to be #1 but verify presence)\n_ensure_jump_order_INPUT(){\n  local _i=0\n  _chain_exists SYNPROXY_EARLY || _ipt -N SYNPROXY_EARLY\n  while [ \"${_i}\" -lt \"${_RETRY_MAX}\" ]; do\n    # prefer LOCALINPUT first hop if CSF uses it\n    if _ipt -C INPUT -j LOCALINPUT 2>/dev/null && _chain_exists LOCALINPUT; then\n      _ipt -C LOCALINPUT -j SYNPROXY_EARLY 2>/dev/null && _ipt -D LOCALINPUT -j SYNPROXY_EARLY || true\n      _ipt -I LOCALINPUT 1 -j SYNPROXY_EARLY 2>/dev/null || true\n      _ipt -C LOCALINPUT -j SYNPROXY_EARLY 2>/dev/null && return 0\n    else\n      _ipt -C INPUT -j SYNPROXY_EARLY 2>/dev/null && _ipt -D INPUT -j SYNPROXY_EARLY || true\n      _ipt -I INPUT 1 -j SYNPROXY_EARLY 2>/dev/null || true\n      _ipt -C INPUT -j SYNPROXY_EARLY 2>/dev/null && return 0\n    fi\n    sleep \"${_RETRY_SLEEP}\"; _i=$((_i+1))\n  done\n  
return 1\n}\n\n# (re)build contents of SYNPROXY_EARLY (—tcp-flags SYN form) + post-SYN DROP\n_populate_filter_early(){\n  [ \"${_REBUILD_CHAIN}\" = \"YES\" ] && _ipt -F SYNPROXY_EARLY || true\n\n  # loopback/trusted bypass\n  _ipt -C SYNPROXY_EARLY -i lo -j RETURN 2>/dev/null || _ipt -I SYNPROXY_EARLY 1 -i lo -j RETURN\n  if [ -n \"${_TRUSTED_SRC}\" ]; then\n    local _c\n    for _c in ${_TRUSTED_SRC}; do\n      _ipt -C SYNPROXY_EARLY -s \"${_c}\" -j RETURN 2>/dev/null || _ipt -I SYNPROXY_EARLY 2 -s \"${_c}\" -j RETURN\n    done\n  fi\n\n  # established/related accept (both matchers for broad compatibility)\n  _ipt -C SYNPROXY_EARLY -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT 2>/dev/null \\\n    || _ipt -A SYNPROXY_EARLY -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT\n  _ipt -C SYNPROXY_EARLY -m state --state ESTABLISHED,RELATED -j ACCEPT 2>/dev/null \\\n    || _ipt -A SYNPROXY_EARLY -m state --state ESTABLISHED,RELATED -j ACCEPT\n\n  # per-port guards + SYNPROXY + post-SYN DROP\n  local _p\n  for _p in ${_TCP_PORTS}; do\n    if [ \"${_STRICT}\" = \"YES\" ]; then\n      _ipt -C SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m conntrack --ctstate NEW -m tcp '!' --syn -j DROP 2>/dev/null \\\n        || _ipt -A SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m conntrack --ctstate NEW -m tcp '!' --syn -j DROP\n      _ipt -C SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m state --state NEW -m tcp '!' --syn -j DROP 2>/dev/null \\\n        || _ipt -A SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m state --state NEW -m tcp '!' 
--syn -j DROP\n    fi\n    # ANY SYN -> SYNPROXY (exact spec we also use for -C checks)\n    _ipt -C SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m tcp --tcp-flags FIN,SYN,RST,ACK SYN \\\n         -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss \"${_MSS}\" 2>/dev/null \\\n      || _ipt -A SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m tcp --tcp-flags FIN,SYN,RST,ACK SYN \\\n         -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss \"${_MSS}\"\n    # Drop the original SYN so it can't fall through into CSF paths\n    _ipt -C SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j DROP 2>/dev/null \\\n      || _ipt -A SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j DROP\n  done\n\n  # INVALID -> DROP (both matchers)\n  _ipt -C SYNPROXY_EARLY -m conntrack --ctstate INVALID -j DROP 2>/dev/null \\\n    || _ipt -A SYNPROXY_EARLY -m conntrack --ctstate INVALID -j DROP\n  _ipt -C SYNPROXY_EARLY -m state --state INVALID -j DROP 2>/dev/null \\\n    || _ipt -A SYNPROXY_EARLY -m state --state INVALID -j DROP\n}\n\n# OUTPUT: own chain + single jump (we verify with -C only)\n_ensure_output_egress_chain(){\n  _chain_exists SYNPROXY_EGRESS || _ipt -N SYNPROXY_EGRESS\n  [ \"${_REBUILD_CHAIN}\" = \"YES\" ] && _ipt -F SYNPROXY_EGRESS || true\n\n  # Canonical egress allows (SYNPROXY replies + first data)\n  _ipt -C SYNPROXY_EGRESS -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT 2>/dev/null \\\n    || _ipt -A SYNPROXY_EGRESS -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT\n  _ipt -C SYNPROXY_EGRESS -m state --state ESTABLISHED,RELATED -j ACCEPT 2>/dev/null \\\n    || _ipt -A SYNPROXY_EGRESS -m state --state ESTABLISHED,RELATED -j ACCEPT\n\n  local _p\n  for _p in ${_TCP_PORTS}; do\n    _ipt -C SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags ACK ACK -j ACCEPT 2>/dev/null \\\n      || _ipt -A SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags ACK ACK -j ACCEPT\n    _ipt -C SYNPROXY_EGRESS -p tcp 
--sport \"${_p}\" -m tcp --tcp-flags SYN,ACK SYN,ACK -j ACCEPT 2>/dev/null \\\n      || _ipt -A SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags SYN,ACK SYN,ACK -j ACCEPT\n    _ipt -C SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags RST RST -j ACCEPT 2>/dev/null \\\n      || _ipt -A SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags RST RST -j ACCEPT\n  done\n\n  # Ensure OUTPUT jump exists and is near top; we verify presence, not position.\n  _ipt -C OUTPUT -j SYNPROXY_EGRESS 2>/dev/null || _ipt -I OUTPUT 1 -j SYNPROXY_EGRESS\n}\n\n# QUIC limiter (optional)\n_purge_quic_rules(){\n  [ -n \"${_QUIC_PORT}\" ] || return 0\n  local _p=\"${_QUIC_PORT}\"\n  while _ipt -C INPUT -p udp --dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_24 -j DROP 2>/dev/null; do\n    _ipt -D INPUT -p udp --dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_24 -j DROP || break\n  done\n  while _ipt -C INPUT -p udp --dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_ip -j DROP 2>/dev/null; do\n    _ipt -D INPUT -p udp --dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_ip -j DROP || break\n  done\n  while _ipt -C INPUT -p udp --dport \"${_p}\" -m conntrack --ctstate INVALID -j DROP 2>/dev/null; do\n    _ipt -D INPUT -p udp --dport \"${_p}\" -m conntrack --ctstate INVALID -j DROP || break\n  done\n}\n_ensure_quic_limits(){\n  [ -n \"${_QUIC_PORT}\" ] || return 0\n  if [ \"${_ENABLE_QUIC_LIMITER}\" != \"YES\" ]; then\n    _purge_quic_rules; _v \"QUIC limiter disabled; purged for UDP:${_QUIC_PORT}\"; return 0\n  fi\n  local _p=\"${_QUIC_PORT}\"\n  # Place after early jump (positions 2/3/4 typically); verify by presence only.\n  _ipt -C INPUT -p udp --dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_24 \\\n       --hashlimit-above \"${_RATE_PER_24}\" --hashlimit-burst \"${_BURST_PER_24}\" \\\n       --hashlimit-mode srcip --hashlimit-srcmask 24 --hashlimit-htable-expire 60000 -j DROP 2>/dev/null \\\n    || _ipt -I INPUT 2 -p udp 
--dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_24 \\\n       --hashlimit-above \"${_RATE_PER_24}\" --hashlimit-burst \"${_BURST_PER_24}\" \\\n       --hashlimit-mode srcip --hashlimit-srcmask 24 --hashlimit-htable-expire 60000 -j DROP\n\n  _ipt -C INPUT -p udp --dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_ip \\\n       --hashlimit-above \"${_RATE_PER_IP}\" --hashlimit-burst \"${_BURST_PER_IP}\" \\\n       --hashlimit-mode srcip --hashlimit-htable-expire 60000 -j DROP 2>/dev/null \\\n    || _ipt -I INPUT 3 -p udp --dport \"${_p}\" -m hashlimit --hashlimit-name quic${_p}_ip \\\n       --hashlimit-above \"${_RATE_PER_IP}\" --hashlimit-burst \"${_BURST_PER_IP}\" \\\n       --hashlimit-mode srcip --hashlimit-htable-expire 60000 -j DROP\n\n  _ipt -C INPUT -p udp --dport \"${_p}\" -m conntrack --ctstate INVALID -j DROP 2>/dev/null \\\n    || _ipt -I INPUT 4 -p udp --dport \"${_p}\" -m conntrack --ctstate INVALID -j DROP\n\n  _v \"QUIC limiter enabled on UDP:${_p}\"\n}\n\n# ---------- Verify using iptables -C only ----------\n_verify_once(){\n  local _ok=1 _why=\"\"\n\n  # Early jump present (either INPUT or LOCALINPUT topology)\n  if _ipt -C INPUT -j LOCALINPUT 2>/dev/null && _chain_exists LOCALINPUT; then\n    _ipt -C LOCALINPUT -j SYNPROXY_EARLY 2>/dev/null || { _why=\"${_why}\\n - LOCALINPUT lacks jump to SYNPROXY_EARLY\"; _ok=0; }\n  else\n    _ipt -C INPUT -j SYNPROXY_EARLY 2>/dev/null || { _why=\"${_why}\\n - INPUT lacks jump to SYNPROXY_EARLY\"; _ok=0; }\n  fi\n\n  # raw NOTRACK per port\n  local _p\n  for _p in ${_TCP_PORTS}; do\n    if [ \"${_NOTRACK_KIND}\" = \"CT\" ]; then\n      _ipt -t raw -C PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j CT --notrack 2>/dev/null \\\n        || { _why=\"${_why}\\n - raw/PREROUTING missing CT --notrack for dport ${_p}\"; _ok=0; }\n    else\n      _ipt -t raw -C PREROUTING -p tcp --tcp-flags FIN,SYN,RST,ACK SYN --dport \"${_p}\" -j NOTRACK 2>/dev/null \\\n        || { 
_why=\"${_why}\\n - raw/PREROUTING missing NOTRACK for dport ${_p}\"; _ok=0; }\n    fi\n  done\n\n  # SYNPROXY + post-SYN DROP present for each port\n  for _p in ${_TCP_PORTS}; do\n    _ipt -C SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m tcp --tcp-flags FIN,SYN,RST,ACK SYN \\\n         -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss \"${_MSS}\" 2>/dev/null \\\n      || { _why=\"${_why}\\n - SYNPROXY_EARLY missing SYNPROXY (dport ${_p})\"; _ok=0; }\n    _ipt -C SYNPROXY_EARLY -p tcp --dport \"${_p}\" -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -j DROP 2>/dev/null \\\n      || { _why=\"${_why}\\n - SYNPROXY_EARLY missing post-SYN DROP (dport ${_p})\"; _ok=0; }\n  done\n\n  # OUTPUT: either has our jump or inline egress rules — we require the jump\n  _ipt -C OUTPUT -j SYNPROXY_EGRESS 2>/dev/null || { _why=\"${_why}\\n - OUTPUT lacks jump to SYNPROXY_EGRESS\"; _ok=0; }\n  # And the egress chain must have its accepts\n  _ipt -C SYNPROXY_EGRESS -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT 2>/dev/null \\\n    || { _why=\"${_why}\\n - SYNPROXY_EGRESS lacks EST/REL accept\"; _ok=0; }\n  _ipt -C SYNPROXY_EGRESS -m state --state ESTABLISHED,RELATED -j ACCEPT 2>/dev/null \\\n    || { _why=\"${_why}\\n - SYNPROXY_EGRESS lacks state EST/REL accept\"; _ok=0; }\n  for _p in ${_TCP_PORTS}; do\n    _ipt -C SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags ACK ACK -j ACCEPT 2>/dev/null \\\n      || { _why=\"${_why}\\n - SYNPROXY_EGRESS lacks ACK allow (sport ${_p})\"; _ok=0; }\n    _ipt -C SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags SYN,ACK SYN,ACK -j ACCEPT 2>/dev/null \\\n      || { _why=\"${_why}\\n - SYNPROXY_EGRESS lacks SYN,ACK allow (sport ${_p})\"; _ok=0; }\n    _ipt -C SYNPROXY_EGRESS -p tcp --sport \"${_p}\" -m tcp --tcp-flags RST RST -j ACCEPT 2>/dev/null \\\n      || { _why=\"${_why}\\n - SYNPROXY_EGRESS lacks RST allow (sport ${_p})\"; _ok=0; }\n  done\n\n  # QUIC (if enabled)\n  if [ \"${_ENABLE_QUIC_LIMITER}\" = \"YES\" ] && [ -n 
\"${_QUIC_PORT}\" ]; then\n    _ipt -C INPUT -p udp --dport \"${_QUIC_PORT}\" -m hashlimit --hashlimit-name \"quic${_QUIC_PORT}_24\" -j DROP 2>/dev/null \\\n      || { _why=\"${_why}\\n - QUIC /24 hashlimit missing (udp ${_QUIC_PORT})\"; _ok=0; }\n    _ipt -C INPUT -p udp --dport \"${_QUIC_PORT}\" -m hashlimit --hashlimit-name \"quic${_QUIC_PORT}_ip\" -j DROP 2>/dev/null \\\n      || { _why=\"${_why}\\n - QUIC per-IP hashlimit missing (udp ${_QUIC_PORT})\"; _ok=0; }\n    _ipt -C INPUT -p udp --dport \"${_QUIC_PORT}\" -m conntrack --ctstate INVALID -j DROP 2>/dev/null \\\n      || { _why=\"${_why}\\n - QUIC INVALID drop missing (udp ${_QUIC_PORT})\"; _ok=0; }\n  fi\n\n  if [ \"${_ok}\" -ne 1 ]; then _why=\"${_why#\\\\n }\"; echo \"${_why}\"; return 1; fi\n  return 0\n}\n\n_verify_with_retry(){\n  local _i=0\n  while [ \"${_i}\" -lt \"${_RETRY_MAX}\" ]; do\n    if _verify_once >/dev/null; then return 0; fi\n    # attempt repairs during churn\n    local _p\n    for _p in ${_TCP_PORTS}; do _ensure_raw_notrack_port \"${_p}\"; done\n    _ensure_jump_order_INPUT || true\n    _ensure_output_egress_chain || true\n    _ensure_quic_limits || true\n    sleep \"${_RETRY_SLEEP}\"; _i=$((_i+1))\n  done\n  local _why=\"$(_verify_once || true)\"\n  _die \"One or more invariants failed; SYNPROXY not active:${_why:+ ${_why}}\"\n}\n\n# ---------- Execute ----------\n_msg \"Reassert SYNPROXY (ports: ${_TCP_PORTS}, mss: ${_MSS}, strict: ${_STRICT}, quic: ${_ENABLE_QUIC_LIMITER} on udp/${_QUIC_PORT})\"\n\n# 1) raw NOTRACK per protected port\nfor _pp in ${_TCP_PORTS}; do _ensure_raw_notrack_port \"${_pp}\"; done\n\n# 2) early jump placement (INPUT/LOCALINPUT)\n_ensure_jump_order_INPUT || _die \"Could not place jump to SYNPROXY_EARLY in INPUT/LOCALINPUT\"\n\n# 3) canonical early chain with SYNPROXY + post-SYN DROP\n_populate_filter_early\n\n# 4) egress chain + OUTPUT jump\n_ensure_output_egress_chain\n\n# 5) QUIC limiter (optional)\n_ensure_quic_limits\n\n# 6) verify with 
retries\n_verify_with_retry\n\n_v \"Done.\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/synproxy_rollback",
    "content": "#!/bin/bash\n\n#\n# Remove SYNPROXY/QUIC add-ons & rules without touching CSF\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Enable strict error handling for debugging only\n# set -euo pipefail\n\ntrap '_rc=$?; echo \"[ROLLBACK] line ${LINENO}: exit ${_rc}\" >&2; exit ${_rc}' ERR\n\n# --- utils ---\n_LOG=\"/var/log/synproxy-rollback.log\"\n_msg(){ echo \"[ROLLBACK] $*\" | tee -a \"${_LOG}\"; }\n_silent(){ \"$@\" >/dev/null 2>&1 || true; }\n_need(){ command -v \"$1\" >/dev/null 2>&1 || { echo \"[ROLLBACK] Missing: $1\" >&2; exit 1; }; }\n\n_need iptables\n_need sed\n_need grep\n_need date\n\n# iptables wrapper (-w if supported)\n_IPT_HAS_W=\"NO\"; iptables -w -L -n >/dev/null 2>&1 && _IPT_HAS_W=\"YES\"\n_ipt(){ if [ \"${_IPT_HAS_W}\" = \"YES\" ]; then iptables -w \"$@\"; else iptables \"$@\"; fi; }\n\n# Files we created\n_INC_DIR=\"/etc/csf/csfpost.d\"\n_SYN_ADDON=\"${_INC_DIR}/99-synproxy-reassert.sh\"\n_OLD_SYN_ADDON=\"${_INC_DIR}/synproxy.sh\"\n_MONITOR=\"/usr/local/sbin/mon_synproxy\"\n_WATCHER=\"/usr/local/sbin/watch_synproxy\"\n_ASSERT_BIN=\"/usr/local/sbin/synproxy_assert\"\n_CRON=\"/etc/cron.d/synproxy-assert\"\n_INITD=\"/etc/init.d/synproxy-assert\"\n_OLD_ROLLBACK=\"/usr/local/sbin/synproxy_rollback\"\n_SYSCTL_FILE=\"/etc/sysctl.d/60-synproxy.conf\"\n_ETC_MODULES=\"/etc/modules\"\n\n: > \"${_LOG}\"\necho\n_msg \"Starting rollback at $(date -u +'%F %T UTC')\"\necho\n\n# 1) INPUT/LOCALINPUT jumps to SYNPROXY_EARLY\n_msg \"1) Remove INPUT/LOCALINPUT jumps to SYNPROXY_EARLY\"\n_silent iptables -D INPUT -j SYNPROXY_EARLY\n_silent iptables -D LOCALINPUT -j SYNPROXY_EARLY\n\n# 2) Flush & delete SYNPROXY_EARLY (if not referenced)\n_msg \"2) Flush 
& delete SYNPROXY_EARLY (if not referenced)\"\nif iptables -L SYNPROXY_EARLY -n >/dev/null 2>&1; then\n  _silent iptables -F SYNPROXY_EARLY\n  # Delete only if no references\n  if ! iptables -S | grep -q -- \"-j SYNPROXY_EARLY\"; then\n    _silent iptables -X SYNPROXY_EARLY\n  fi\nfi\n\n# 3) Remove old direct SYNPROXY & NEW!SYN rules (defensive cleanup)\n_msg \"3) Remove old direct SYNPROXY & NEW!SYN rules (defensive cleanup)\"\nwhile iptables -S INPUT 2>/dev/null | egrep -q -- \"-A INPUT .* -p tcp .* --dport (80|443) .* -j SYNPROXY\"; do\n  spec=\"$(iptables -S INPUT | egrep -- \"-A INPUT .* -p tcp .* --dport (80|443) .* -j SYNPROXY\" | head -n1)\"\n  spec=\"${spec#-A INPUT }\"; _silent iptables -D INPUT ${spec}\ndone\nwhile iptables -S INPUT 2>/dev/null | egrep -q -- \"-A INPUT .* -m (conntrack --ctstate NEW|state --state NEW) .* -m tcp ! --syn .* -j DROP\"; do\n  spec=\"$(iptables -S INPUT | egrep -- \"-A INPUT .* -m (conntrack --ctstate NEW|state --state NEW) .* -m tcp ! --syn .* -j DROP\" | head -n1)\"\n  spec=\"${spec#-A INPUT }\"; _silent iptables -D INPUT ${spec}\ndone\n\n# 4) raw PREROUTING NOTRACK for 80/443 (and any other protected ports if present)\n_msg \"4) Remove raw PREROUTING NOTRACK for 80/443 (and any other protected ports if present)\"\nfor p in 80 443; do\n  while iptables -t raw -S PREROUTING 2>/dev/null | egrep -q -- \"-A PREROUTING .* -p tcp .* --dport ${p} .* (CT --notrack| -j NOTRACK)\"; do\n    spec=\"$(iptables -t raw -S PREROUTING | egrep -- \"-A PREROUTING .* -p tcp .* --dport ${p} .* (CT --notrack| -j NOTRACK)\" | head -n1)\"\n    spec=\"${spec#-A PREROUTING }\"; _silent iptables -t raw -D PREROUTING ${spec}\n  done\ndone\n\n# 5) QUIC limiter rules (UDP/443 default)\n_msg \"5) Remove QUIC limiter rules (UDP/443 default)\"\nfor p in 443; do\n  while iptables -S INPUT 2>/dev/null | egrep -q -- \"-A INPUT .* -p udp .* --dport ${p} .*hashlimit-name quic${p}_(24|ip)\"; do\n    spec=\"$(iptables -S INPUT | egrep -- \"-A INPUT .* -p udp .* 
--dport ${p} .*hashlimit-name quic${p}_(24|ip)\" | head -n1)\"\n    spec=\"${spec#-A INPUT }\"; _silent iptables -D INPUT ${spec}\n  done\n  while iptables -S INPUT 2>/dev/null | egrep -q -- \"-A INPUT .* -p udp .* --dport ${p} .*(-m conntrack --ctstate INVALID|-m state --state INVALID) .* -j DROP\"; do\n    spec=\"$(iptables -S INPUT | egrep -- \"-A INPUT .* -p udp .* --dport ${p} .*(-m conntrack --ctstate INVALID|-m state --state INVALID) .* -j DROP\" | head -n1)\"\n    spec=\"${spec#-A INPUT }\"; _silent iptables -D INPUT ${spec}\n  done\ndone\n\n# 6) OUTPUT jump & egress chain\n_msg \"6) Remove OUTPUT jump & egress chain\"\n_silent iptables -D OUTPUT -j SYNPROXY_EGRESS\nif iptables -L SYNPROXY_EGRESS -n >/dev/null 2>&1; then\n  _silent iptables -F SYNPROXY_EGRESS\n  if ! iptables -S | grep -q -- \"-j SYNPROXY_EGRESS\"; then\n    _silent iptables -X SYNPROXY_EGRESS\n  fi\nfi\n\n# 7) Remove generated files (leave CSF core files untouched)\n_msg \"7) Remove generated scripts (add-ons/cron/init.d)\"\nrm -f \"${_OLD_SYN_ADDON}\" 2>/dev/null || true\nrm -f \"${_SYN_ADDON}\" 2>/dev/null || true\nif [ -d \"${_INC_DIR}\" ]; then\n  find \"${_INC_DIR}\" -maxdepth 1 -type f -name 'quic_udp*_limit.sh' -exec rm -f {} + 2>/dev/null || true\nfi\nrm -f \"${_MONITOR}\" \"${_WATCHER}\" \"${_ASSERT_BIN}\" \"${_OLD_ROLLBACK}\" 2>/dev/null || true\nrm -f \"${_CRON}\" 2>/dev/null || true\nif [ -x \"${_INITD}\" ]; then\n  \"${_INITD}\" stop >/dev/null 2>&1 || true\n  command -v update-rc.d >/dev/null 2>&1 && update-rc.d synproxy-assert remove >/dev/null 2>&1 || true\n  rm -f \"${_INITD}\" 2>/dev/null || true\nfi\n\n# 8) Clean persistence knobs we added (non-CSF)\n_msg \"8) Remove nf_synproxy_core from /etc/modules; remove our sysctl file (live values unchanged)\"\n[ -f \"${_ETC_MODULES}\" ] && sed -i '/^nf_synproxy_core$/d' \"${_ETC_MODULES}\" 2>/dev/null || true\n[ -f \"${_SYSCTL_FILE}\" ] && rm -f \"${_SYSCTL_FILE}\" 2>/dev/null || true\n\necho\n_msg \"Rollback complete. 
CSF was not modified or reloaded.\"\n\necho\necho \"Verify (should show no SYNPROXY_EARLY, no quic* hashlimits, and no CT --notrack on web ports):\"\necho \"  iptables -L INPUT -n -v --line-numbers | sed -n '1,60p'\"\necho \"  iptables -L LOCALINPUT -n -v --line-numbers 2>/dev/null | sed -n '1,60p'\"\necho \"  iptables -t raw -L PREROUTING -n -v --line-numbers | sed -n '1,40p'\"\necho\necho \"Removed files (if present):\"\necho \"  ${_OLD_SYN_ADDON}\"\necho \"  ${_SYN_ADDON}\"\necho \"  ${_INC_DIR}/quic_udp*_limit.sh\"\necho \"  ${_MONITOR} , ${_WATCHER} , ${_ASSERT_BIN}\"\necho \"  ${_CRON} , ${_INITD} , ${_OLD_ROLLBACK}\"\necho \"  ${_SYSCTL_FILE}  (live sysctls not reverted)\"\necho \"Persistence cleaned: 'nf_synproxy_core' line removed from ${_ETC_MODULES}\"\necho\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/synproxy_snapshot",
    "content": "#!/bin/bash\n\n#\n# One-shot snapshot of SYNPROXY state: chain order, counters, raw NOTRACK hits, OUTPUT accept/drop posture.\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\nset -u\n_TCP_PORTS=\"${_TCP_PORTS:-443 80}\"\n\necho \"=== $(date -u) ===\"\necho\necho \"## Policies\"\niptables -S | egrep '^-P (INPUT|OUTPUT|FORWARD) ' || true\necho\n\necho \"## INPUT first rule (actual)\"; iptables -S INPUT | awk '$1==\"-A\" && $2==\"INPUT\"{print; exit}'\nif iptables -S LOCALINPUT >/dev/null 2>&1; then\n  echo \"## LOCALINPUT first rule (actual)\"; iptables -S LOCALINPUT | awk '$1==\"-A\" && $2==\"LOCALINPUT\"{print; exit}'\nelse\n  echo \"## LOCALINPUT: (absent)\"\nfi\necho\n\necho \"## SYNPROXY_EARLY (top 40 lines)\"\niptables -S SYNPROXY_EARLY 2>/dev/null | sed -n '1,40p' || echo \"(missing)\"\necho\necho \"## SYNPROXY_EARLY counters\"\niptables -L SYNPROXY_EARLY -n -v --line-numbers 2>/dev/null || echo \"(missing)\"\necho\n\necho \"## raw PREROUTING (line-numbers + counters)\"\niptables -t raw -L PREROUTING -n -v --line-numbers 2>/dev/null || true\necho\necho \"## raw PREROUTING rules\"\niptables -t raw -S PREROUTING 2>/dev/null || true\necho\n\necho \"## OUTPUT (top 40 lines)\"\niptables -S OUTPUT | sed -n '1,40p'\necho\necho \"## OUTPUT counters (first 20 lines)\"\niptables -L OUTPUT -n -v --line-numbers | sed -n '1,20p'\necho\n\n# Quick checks for each protected port\nfor p in ${_TCP_PORTS}; do\n  echo \"== Port ${p} checks ==\"\n  if iptables -t raw -C PREROUTING -p tcp --syn --dport \"${p}\" -j CT --notrack 2>/dev/null; then\n    echo \"raw: CT --notrack present for SYN dport ${p}\"\n  elif iptables -t raw -C PREROUTING -p tcp --syn --dport 
\"${p}\" -j NOTRACK 2>/dev/null; then\n    echo \"raw: NOTRACK present for SYN dport ${p}\"\n  else\n    echo \"raw: NOTRACK MISSING for SYN dport ${p}\"\n  fi\n\n  if iptables -C SYNPROXY_EARLY -p tcp --dport \"${p}\" -m tcp --syn -j SYNPROXY 2>/dev/null; then\n    echo \"SYNPROXY_EARLY: SYN->SYNPROXY present for dport ${p}\"\n  else\n    echo \"SYNPROXY_EARLY: SYN->SYNPROXY MISSING for dport ${p}\"\n  fi\n\n  if iptables -C OUTPUT -p tcp --sport \"${p}\" -m tcp --tcp-flags SYN,ACK SYN,ACK -j ACCEPT 2>/dev/null; then\n    echo \"OUTPUT: allow SYN,ACK from --sport ${p} present\"\n  else\n    echo \"OUTPUT: allow SYN,ACK from --sport ${p} MISSING\"\n  fi\n\n  if iptables -C OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT 2>/dev/null || iptables -C OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT 2>/dev/null; then\n    echo \"OUTPUT: EST/REL ACCEPT present\"\n  else\n    echo \"OUTPUT: EST/REL ACCEPT MISSING\"\n  fi\n  echo\ndone\n"
  },
  {
    "path": "aegir/tools/bin/synproxy_status",
    "content": "#!/bin/bash\n\n#\n# Simple wrapper around synproxy_snapshot for a live dashboard.\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\nexec /usr/bin/watch -n 2 -d /opt/local/bin/synproxy_snapshot\n"
  },
  {
    "path": "aegir/tools/bin/thinkdifferent",
    "content": "#!/bin/bash\n\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\ncase \"$1\" in\n  dummy) kind=\"$2\"\n    echo \" This command is not available directly!\"\n    exit 1\n  ;;\n  *)  echo\n      echo \" This command is not available directly!\"\n      echo\n      echo \" You should use Ægir for code and db updates,\"\n      echo \" but if you really know what you are doing,\"\n      echo \" you could use these aliases instead:\"\n      echo\n      echo \"   drush dbup  (alias for drush updatedb)\"\n      echo \"   drush mup   (alias for drush up)\"\n      echo \"   drush mupc  (alias for drush upc)\"\n      echo\n      exit 1\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/updatesymlinks",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_logDir=/var/log/boa\n_actionTemporaryLog=\"${_logDir}/autosymlink.tmp.log\"\n_actionPermanentLog=\"${_logDir}/autosymlink.update.log\"\n_actionVerboseArchiveLog=\"${_logDir}/autosymlink.verbose.archive.log\"\n_stateFile=\"${_logDir}/autosymlink.state\"\n_autoBin=/opt/local/bin/autosymlink\n_pauseFile=/root/.pause_tasks_maint.cnf\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n_log() {\n  echo \"[$(date '+%Y-%m-%d %H:%M:%S')] $*\" >> ${_actionPermanentLog}\n}\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\ntrap '_clean_lock' EXIT INT TERM\n\n_clean_lock() {\n  [ -e \"${_pauseFile}\" ] && rm -f \"${_pauseFile}\"\n  if command -v _single_instance_unlock >/dev/null 2>&1; then\n    _single_instance_unlock\n  fi\n  [ -n \"${_lockDir:-}\" ] && [ -d \"${_lockDir}\" ] && rm -rf \"${_lockDir}\"\n}\n\n_action_email_report() {\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_actionReport}\" = \"TRUE\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    _log \"INFO: Sending AutoSymlink Report Email (${1})\"\n    s-nail -s \"AutoSymlink Actions Report: ${1} on ${_hName} at $(date)\" \"${_MY_EMAIL}\" < ${_actionPermanentLog}\n    s-nail -s \"AutoSymlink Details Report: ${1} on ${_hName} at $(date)\" \"${_MY_EMAIL}\" < ${_actionTemporaryLog}\n  fi\n}\n\n_is_protected_run() {\n  _protectedRun=FALSE\n  _optBin=\"/opt/local/bin\"\n  _boaBins=\"autosymlink autoupboa barracuda boa octopus\"\n\n  for _cbn in ${_boaBins}; do\n    if [ -e \"${_optBin}/${_cbn}\" ]; then\n      _CNT=$(pgrep -fc /local/bin/${_cbn})\n      if (( _CNT > 0 )); then\n        _log \"The ${_cbn} is running!\"\n        _protectedRun=TRUE\n      fi\n    fi\n  done\n\n  [ -e \"/run/octopus_install_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_wait.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/max_load.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/critical_load.pid\" ] && 
_protectedRun=TRUE\n\n  _IS_PROVISION_RUNNING=$(pgrep -fc provision)\n  if (( _IS_PROVISION_RUNNING > 0 )); then\n    _protectedRun=TRUE\n  fi\n  _IS_DUPLICITY_RUNNING=$(pgrep -fc duplicity)\n  if (( _IS_DUPLICITY_RUNNING > 0 )); then\n    _protectedRun=TRUE\n  fi\n}\n\n_read_state_var() {\n  _key=\"$1\"\n  if [ -e \"${_stateFile}\" ]; then\n    grep \"^${_key}=\" \"${_stateFile}\" 2>/dev/null | head -n 1 | cut -d= -f2-\n  fi\n}\n\n_autosymlink_run() {\n  _actionReport=FALSE\n  _orphansFound=FALSE\n  _applyCount=0\n  _lastAction=NO\n\n  [ -d \"${_logDir}\" ] || mkdir -p \"${_logDir}\"\n  : > ${_actionTemporaryLog}\n\n  if [ ! -x \"${_autoBin}\" ]; then\n    _log \"ERROR: Missing autosymlink: ${_autoBin}\"\n    return 1\n  fi\n\n  # Pause tasks to avoid race, then re-check\n  touch \"${_pauseFile}\"\n  sleep 5\n  _is_protected_run\n  if [ \"${_protectedRun}\" = \"TRUE\" ]; then\n    _log \"INFO: Protected run detected after pausing tasks; exiting.\"\n    return 0\n  fi\n\n  # Cron-safe: DRY -> if CLEAN -> BATCH in one run\n  _log \"INFO: Running autosymlink --batch-if-clean\"\n  ${_autoBin} --batch-if-clean > ${_actionTemporaryLog} 2>&1\n  _rcBatch=\"$?\"\n\n  _lastStatus=\"$(_read_state_var \"_LAST_STATUS\")\"\n  _lastAction=\"$(_read_state_var \"_LAST_ACTION\")\"\n  _applyCount=\"$(_read_state_var \"_LAST_APPLY_COUNT\")\"\n  [ -z \"${_applyCount}\" ] && _applyCount=\"0\"\n\n  if [ \"${_rcBatch}\" -eq 0 ]; then\n    if [ \"${_lastAction}\" = \"YES\" ] && [ \"${_applyCount}\" -gt 0 ]; then\n      _actionReport=TRUE\n      _subject=\"APPLIED (${_applyCount} changes)\"\n    else\n      # No action taken -> never email\n      _subject=\"\"\n    fi\n  elif [ \"${_rcBatch}\" -eq 10 ]; then\n    _actionReport=TRUE\n    _subject=\"NOT CLEAN (manual review required)\"\n  else\n    _actionReport=TRUE\n    _subject=\"FAILED (rc=${_rcBatch})\"\n  fi\n\n  # Always run report; email if ORPHANS detected\n  _log \"INFO: Running autosymlink --report\"\n  ${_autoBin} --report >> 
${_actionTemporaryLog} 2>&1\n  _rcReport=\"$?\"\n\n  if grep -q \"\\[REPORT\\] ORPHAN\" ${_actionTemporaryLog} 2>/dev/null; then\n    _orphansFound=\"TRUE\"\n    _actionReport=TRUE\n    if [ -z \"${_subject}\" ]; then\n      _subject=\"ORPHANS detected\"\n    else\n      _subject=\"${_subject} + ORPHANS\"\n    fi\n  fi\n\n  if [ \"${_rcReport}\" -ne 0 ]; then\n    _actionReport=TRUE\n    if [ -z \"${_subject}\" ]; then\n      _subject=\"FAILED report (rc=${_rcReport})\"\n    else\n      _subject=\"${_subject} + report rc=${_rcReport}\"\n    fi\n  fi\n\n  # Archive the full temporary log (verbose), so details are preserved even if email delivery fails.\n  {\n    echo \"===== $(date '+%Y-%m-%d %H:%M:%S') autosymlink run (batch_rc=${_rcBatch}, report_rc=${_rcReport}) subject=${_subject} =====\"\n    cat ${_actionTemporaryLog}\n    echo\n  } >> ${_actionVerboseArchiveLog}\n\n  if [ \"${_actionReport}\" = \"TRUE\" ] && [ -n \"${_subject}\" ]; then\n    _action_email_report \"${_subject}\"\n    echo >> ${_actionPermanentLog}\n  fi\n\n  return 0\n}\n\n_is_protected_run\nif [ \"${_protectedRun}\" = \"TRUE\" ]; then\n  exit 0\nelse\n _autosymlink_run\n exit 0\nfi\n"
  },
  {
    "path": "aegir/tools/bin/verifyvhostsdns",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Configuration\n_SERVER_IPV4_FILE=\"/root/.found_correct_ipv4.cnf\"\n_SERVER_IPV4=$(cat \"${_SERVER_IPV4_FILE}\" | tr -d '[:space:]')\n_VHOSTS_DIR=\"/data/disk/*/config/server_master/nginx/vhost.d\"\n_ACCESS_LOG=\"/var/log/nginx/access.log\"\n_REPORT_FILE=\"/root/dns_check_report.txt\"\n\n# Ensure the report file is empty\n> \"${_REPORT_FILE}\"\n\n# Function to clean domain names (remove trailing semicolons)\n_clean_domain() {\n  echo \"$1\" | sed 's/;$//'\n}\n\n# Function to check if a domain resolves to the server's IP or Cloudflare\n_check_dns() {\n  local _domain=$1\n  local _resolved_ips\n  _resolved_ips=$(dig +short \"$(_clean_domain \"${_domain}\")\" | grep -E '^[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+$')\n\n  # Match resolved IPs with server IP\n  for _ip in ${_resolved_ips}; do\n    if [[ \"${_ip}\" == \"${_SERVER_IPV4}\" ]]; then\n      return 0\n    fi\n  done\n\n  # If no direct match, use curl to verify via access log\n  if curl -sI \"https://${_domain}\" &> /dev/null; then\n    if grep -q \"$(_clean_domain \"${_domain}\")\" \"${_ACCESS_LOG}\"; then\n      return 0\n    fi\n  fi\n\n  return 1\n}\n\n# Scan vhost files\necho \"Scanning vhosts in ${_VHOSTS_DIR}...\"\nfor _vhost in $(find ${_VHOSTS_DIR} -type f -name \"*.com\"); do\n  # Extract unique server_name entries\n  _domains=$(grep -E '^\\s*server_name' \"${_vhost}\" | awk '{$1=\"\"; print}' | tr -s ' ' '\\n' | sort -u)\n\n  for _domain in ${_domains}; do\n    # Clean and skip empty or invalid domains\n    _cleaned_domain=$(_clean_domain \"${_domain}\")\n    [[ -z \"${_cleaned_domain}\" ]] && continue\n\n    echo \"Checking domain: ${_cleaned_domain}...\"\n    if ! 
_check_dns \"${_cleaned_domain}\"; then\n      echo \"Unverified: ${_cleaned_domain} (vhost: ${_vhost})\" >> \"${_REPORT_FILE}\"\n    fi\n  done\ndone\n\n# Generate report\nif [[ -s \"${_REPORT_FILE}\" ]]; then\n  echo \"Report of unverified domains:\"\n  cat \"${_REPORT_FILE}\"\nelse\n  echo \"All domains verified successfully!\"\nfi\n"
  },
  {
    "path": "aegir/tools/bin/vhostcheck",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Define directories\n_VHOSTS_DIRS=(\"/var/aegir/config/server_master/nginx/vhost.d\" \"/data/disk/*/config/server_master/nginx/vhost.d\")\n_VHOSTS_CLEANUP_LOG=\"/var/log/vhost_cleanup.log\"\n_GHOST_VHOSTS_LOG=\"/var/log/ghost_vhosts.log\"\n_DUPLICATE_VHOSTS_LOG=\"/var/log/duplicate_vhosts.log\"\n\n# Create/empty the log files\n> \"${_VHOSTS_CLEANUP_LOG}\"\n> \"${_GHOST_VHOSTS_LOG}\"\n> \"${_DUPLICATE_VHOSTS_LOG}\"\n\n# Step 1: Get the list of existing MySQL databases\nmapfile -t _mysql_databases < <(mysql -u root -e \"SHOW DATABASES;\" | grep -Ev \"(Database|information_schema|performance_schema|mysql|sys)\")\n\n# Step 2: Get the list of vhosts and their db_name references\ndeclare -A _vhost_db_map\nfor _dir_pattern in \"${_VHOSTS_DIRS[@]}\"; do\n  for _dir in ${_dir_pattern}; do\n    if [ -d \"${_dir}\" ]; then\n      for _vhost_file in \"${_dir}\"/*; do\n        if [ -f \"${_vhost_file}\" ]; then\n          while IFS= read -r _line; do\n            # Extract db_name by looking for the exact _line containing it\n            if echo \"${_line}\" | grep -q \"fastcgi_param db_name\"; then\n              _db_name=$(echo \"${_line}\" | awk '{print $NF}' | tr -d ';')\n              # Map db_name to the list of vhost files (ensure no duplicate entries within the same file)\n              if ! 
[[ \" ${_vhost_db_map[\"${_db_name}\"]} \" =~ \" ${_vhost_file} \" ]]; then\n                _vhost_db_map[\"${_db_name}\"]+=\"${_vhost_file} \"\n              fi\n            fi\n          done < \"${_vhost_file}\"\n        fi\n      done\n    else\n      echo \"Directory ${_dir} does not exist, skipping...\" | tee -a \"${_VHOSTS_CLEANUP_LOG}\"\n    fi\n  done\ndone\n\n# Step 3: Identify and log ghost vhosts (vhosts referencing non-existing databases)\necho \"Vhosts referencing non-existing databases (ghost vhosts):\" | tee -a \"${_GHOST_VHOSTS_LOG}\"\nfor _db_name in \"${!_vhost_db_map[@]}\"; do\n  if ! [[ \" ${_mysql_databases[*]} \" =~ \" ${_db_name} \" ]]; then\n    # Log the vhost as a ghost vhost if the database doesn't exist\n    for _vhost_file in ${_vhost_db_map[${_db_name}]}; do\n      echo \"Ghost vhost found: ${_vhost_file} (references non-existing database: ${_db_name})\" | tee -a \"${_GHOST_VHOSTS_LOG}\"\n    done\n  fi\ndone\n\n# Step 4: Check for duplicate/conflicting vhosts referencing the same database (across different files)\necho \"Checking for duplicate/conflicting vhosts referencing the same database...\" | tee -a \"${_DUPLICATE_VHOSTS_LOG}\"\nfor _db_name in \"${!_vhost_db_map[@]}\"; do\n  # Get the list of unique vhost files for this db_name\n  _vhost_list=(${_vhost_db_map[${_db_name}]})\n\n  # Remove duplicates within the same file and ensure we compare between different files\n  _unique_vhost_files=$(echo \"${_vhost_list[@]}\" | tr ' ' '\n' | sort | uniq)\n\n  # Only flag as duplicate if db_name is referenced in different vhost files\n  if [ $(echo \"${_unique_vhost_files}\" | wc -l) -gt 1 ]; then\n    echo \"Duplicate vhosts found for database ${_db_name} (in different vhost files):\" | tee -a \"${_DUPLICATE_VHOSTS_LOG}\"\n    for _vhost_file in ${_unique_vhost_files}; do\n      echo \"  - ${_vhost_file}\" | tee -a \"${_DUPLICATE_VHOSTS_LOG}\"\n    done\n  fi\ndone\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/vmnetfix",
    "content": "#!/bin/bash\n\n#\n# vmnetfix — Make Devuan networking survive reboot after upgrade from Debian.\n#\n# Behavior (AUTO by default):\n#   • If existing config looks sane, only ensure init wiring (networking start at boot),\n#     install a boot-time route guard, and ALWAYS enforce /etc/resolv.conf (forced DNS).\n#   • Otherwise perform full remediation: optionally disable cloud-init networking,\n#     write /etc/network/interfaces from live values (handles /32 off-link GW via onlink),\n#     sanitize stray/bogus stanzas (bonding etc.), enforce /etc/resolv.conf, and test now.\n#   • DigitalOcean-aware: uses metadata for GW/netmask if missing; prefers public NIC.\n#   • Legacy DO layout auto-switch: if the file uses alias syntax (eth0:1 etc.), install\n#     classic ifupdown (non-interactive) and keep its canonical /etc/init.d/networking.\n#\n# Usage:\n#   vmnetfix [--debug] [--dry-run] [--test-now]\n#            [--disable-cloud-init | --preserve-cloud-init]\n#            [--dns \"1.1.1.1 9.9.9.9\"]\n#            [--mode auto|init-only|full]\n#\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_DEBUG_MODE=\"NO\"\n_DRY_RUN=\"NO\"\n_TEST_NOW=\"NO\"\n_DISABLE_CLOUD_INIT=\"YES\"\n_FORCE_DNS_LIST=\"1.1.1.1 8.8.8.8 9.9.9.9\"\n_MODE=\"auto\"    # auto | init-only | full\n\n_DEF_WAS_DHCP=\"NO\"\n\n# ----------------------------- Core helpers -----------------------------------\n\n_dt() { date +%Y%m%d-%H%M%S; }\n_msg() { [ \"${_DEBUG_MODE}\" = \"YES\" ] && printf '%s %s\\n' \"$(_dt)\" \"$*\"; }\n_err() { printf 'ERROR: %s\\n' \"$*\" >&2; }\n_cmd_exists() { command -v \"$1\" >/dev/null 2>&1; }\n_need_root() { [ \"$(id -u)\" -eq 0 ] || { _err \"Please run as root.\"; exit 1; }; }\n\n_parse_args() {\n  while [ \"$#\" -gt 0 ]; do\n    case \"$1\" in\n      --debug) _DEBUG_MODE=\"YES\" ;;\n      --dry-run) _DRY_RUN=\"YES\" ;;\n      --test-now) _TEST_NOW=\"YES\" ;;\n   
   --disable-cloud-init) _DISABLE_CLOUD_INIT=\"YES\" ;;\n      --preserve-cloud-init) _DISABLE_CLOUD_INIT=\"NO\" ;;\n      --dns) shift; _FORCE_DNS_LIST=\"$1\" ;;\n      --mode) shift; _MODE=\"$1\" ;;\n      --help|-h)\n        cat <<EOF\nUsage: vmnetfix [options]\n\n  --debug               Verbose logging\n  --dry-run             Do not change anything, just log what would be done\n  --test-now            After config, attempt immediate bring-up and GW reachability test\n  --disable-cloud-init  Disable cloud-init networking (write 99-disable-network-config.cfg)\n  --preserve-cloud-init Leave cloud-init networking config alone\n  --dns \"A B C\"         Force /etc/resolv.conf nameservers (default: 1.1.1.1 8.8.8.8 9.9.9.9)\n  --mode auto           Only remediate when config is not sane (default)\n  --mode init-only      Only ensure init wiring + boot guard + DNS; no config rewrite\n  --mode full           Always rewrite /etc/network/interfaces from live values, disable\n                        cloud-init networking (unless --preserve-cloud-init), and enforce\n                        DNS, boot guard, etc.\n\nEOF\n        exit 0\n        ;;\n      *)\n        _err \"Unknown argument: $1\"\n        exit 1\n        ;;\n    esac\n    shift\n  done\n}\n\n_backup_file() {\n  _FILE=\"$1\"\n  if [ -f \"${_FILE}\" ] || [ -L \"${_FILE}\" ]; then\n    cp -a \"${_FILE}\" \"${_FILE}.bak.$(_dt)\" 2>/dev/null || true\n    _msg \"Backup saved: ${_FILE}.bak.$(_dt)\"\n  fi\n}\n\n_dedupe_words() {\n  # echo unique words in original order\n  _SEEN=\"\"\n  for _W in $*; do\n    case \" ${_SEEN} \" in\n      *\" ${_W} \"*) : ;;\n      *) _SEEN=\"${_SEEN} ${_W}\" ;;\n    esac\n  done\n  printf '%s\\n' \"$(echo \"${_SEEN}\" | sed 's/^ *//')\"\n}\n\n_noninteractive_apt() {\n  # usage: _noninteractive_apt install -y pkg1 pkg2 ...\n  DEBIAN_FRONTEND=noninteractive \\\n  apt-get -o Dpkg::Options::=\"--force-confdef\" \\\n          -o Dpkg::Options::=\"--force-confnew\" \"$@\"\n}\n\n###\n### Atomic 
unlock to prevent TOCTOU race\n###\n_single_instance_unlock() {\n  _FD=\"$1\"; _PATH=\"$2\"\n  if command -v flock >/dev/null 2>&1; then\n    flock -u \"${_FD}\" 2>/dev/null || true\n    eval \"exec ${_FD}>&-\"\n    rm -f \"${_PATH}\" 2>/dev/null || true\n  else\n    rm -rf \"${_PATH}\" 2>/dev/null || true\n  fi\n}\n\n###\n### Atomic lock to prevent TOCTOU race\n###\n_single_instance_lock() {\n  # Ensure not too many instances are running\n  # usage: _single_instance_lock [lockfile_path] [fd]\n  # default lock: /run/<script>.lock (falls back to /tmp)\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  _LOCK_FD=\"${2:-9}\"\n  if [ -n \"${1:-}\" ]; then\n    _LOCK_PATH=\"$1\"\n  else\n    _DIR=\"/run\"; [ -w \"$_DIR\" ] || _DIR=\"/tmp\"\n    _LOCK_PATH=\"${_DIR}/${_SELF_NAME%.sh}.lock\"\n  fi\n\n  if command -v flock >/dev/null 2>&1; then\n    eval \"exec ${_LOCK_FD}>\\\"${_LOCK_PATH}\\\"\"\n    if ! flock -n \"${_LOCK_FD}\"; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    printf '%s\\n' \"$$\" 1>&\"${_LOCK_FD}\" 2>/dev/null || true   # optional: PID note\n    trap \"_single_instance_unlock ${_LOCK_FD} '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  else\n    # mkdir is atomic; directory presence == lock held\n    if ! 
mkdir \"${_LOCK_PATH}\" 2>/dev/null; then\n      echo \"${_SELF_NAME}: another instance is running; exiting.\"\n      exit 0\n    fi\n    echo \"$$\" > \"${_LOCK_PATH}/pid\" 2>/dev/null || true\n    trap \"rm -rf '${_LOCK_PATH}'\" EXIT INT TERM HUP\n  fi\n}\n\n# ----------------------------- Detection --------------------------------------\n\n_ifup_variant() {\n  _V=\"$(ifup --version 2>&1 | head -n1 || true)\"\n  case \"${_V}\" in\n    ifupdown2:*) echo \"ifupdown2\" ;;\n    \"ifup version \"*) echo \"ifupdown\" ;;\n    *) echo \"unknown\" ;;\n  esac\n}\n\n_default_route_if() { ip -4 route show default 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i==\"dev\"){print $(i+1); exit}}'; }\n_default_gw_v4()    { ip -4 route show default 2>/dev/null | awk '{print $3; exit}'; }\n_live_v4_list_if()  { ip -o -4 addr show dev \"$1\" scope global 2>/dev/null | awk '{print $4}'; }\n_live_v4_cidr()     { _live_v4_list_if \"$1\" | head -n1; }\n_iface_v4_is_dynamic() {\n  # Returns 0 if the interface has a DHCP/dynamic IPv4 address (as reported by iproute2).\n  # This is common on clouds where the \"public IP\" is implemented via NAT outside the guest.\n  _IF=\"$1\"\n  ip -o -4 addr show dev \"${_IF}\" scope global 2>/dev/null | grep -q '[[:space:]]dynamic[[:space:]]'\n}\n\n_default_route_is_dhcp() {\n  ip -4 route show default 2>/dev/null | grep -q '[[:space:]]proto[[:space:]]dhcp[[:space:]]'\n}\n\n_live_public_if() {\n  # Use for loops (not a piped while) so \"return\" exits the function, not a subshell.\n  for _I in $(ls -1 /sys/class/net 2>/dev/null | awk '$0!=\"lo\"'); do\n    for _C in $(ip -o -4 addr show dev \"${_I}\" scope global 2>/dev/null | awk '{print $4}'); do\n      _IP=\"${_C%/*}\"\n      case \"${_IP}\" in 10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) : ;; *) printf '%s\\n' \"$_I\"; return 0 ;; esac\n    done\n  done\n}\n\n_iface_has_dhcp_for_if() {\n  _IF=\"$1\"\n  grep -RqsE \"^[[:space:]]*iface[[:space:]]+${_IF}[[:space:]]+inet[[:space:]]+dhcp\" \\\n    /etc/network/interfaces /etc/network/interfaces.d 2>/dev/null\n}\n\n_has_ci_dhcp_for_if() {\n 
 _IF=\"$1\"\n  _iface_has_dhcp_for_if \"${_IF}\" || return 1\n  grep -Rqs 'cloud-init' /etc/network/interfaces /etc/network/interfaces.d 2>/dev/null\n}\n\n_iface_static_match_live() {\n  # 0 if /etc/network/interfaces* has static stanza for IF that matches live IPv4\n  _IF=\"$1\"\n  _LIVE=\"$2\"\n  _IP=\"${_LIVE%/*}\"\n\n  # Build a list of files to scan: /etc/network/interfaces plus any snippets.\n  _FILES=\"\"\n  if [ -f /etc/network/interfaces ]; then\n    _FILES=\"/etc/network/interfaces\"\n  fi\n  if ls /etc/network/interfaces.d/* >/dev/null 2>&1; then\n    _FILES=\"${_FILES} /etc/network/interfaces.d/*\"\n  fi\n  [ -n \"${_FILES}\" ] || return 1\n\n  awk -v IFACE=\"${_IF}\" -v IP=\"${_IP}\" '\n    BEGIN{inif=0; ok=0}\n    /^[[:space:]]*iface[[:space:]]+/{\n      # iface <name> inet static\n      inif=($2==IFACE && $3==\"inet\" && $4==\"static\")\n    }\n    inif && /^[[:space:]]*address[[:space:]]+/{\n      # address can be with or without /mask\n      val=$2\n      gsub(/#.*/,\"\",val)\n      split(val,a,\"/\")\n      if (a[1]==IP) ok=1\n    }\n    END{exit ok?0:1}\n  ' ${_FILES} 2>/dev/null\n}\n\n_iface_has_onlink_default_helpers() {\n  _IF=\"$1\"\n  _GW=\"$2\"\n\n  # Look for gateway/onlink helpers in both main file and snippets.\n  _FILES=\"\"\n  if [ -f /etc/network/interfaces ]; then\n    _FILES=\"/etc/network/interfaces\"\n  fi\n  if ls /etc/network/interfaces.d/* >/dev/null 2>&1; then\n    _FILES=\"${_FILES} /etc/network/interfaces.d/*\"\n  fi\n  [ -n \"${_FILES}\" ] || return 1\n\n  # Direct gateway line\n  grep -qsE \"^[[:space:]]*gateway[[:space:]]+${_GW}\\b\" ${_FILES} 2>/dev/null && return 0\n\n  # post-up ip route add ... <GW> ... 
onlink\n  grep -qsE \"post-up[[:space:]]+ip[[:space:]]+route[[:space:]]+(add|replace)[[:space:]].*${_GW}.*onlink\" ${_FILES} 2>/dev/null && return 0\n\n  return 1\n}\n\n# ----------------------------- IP math & helpers -------------------------------\n\n_cidr_prefix() { printf '%s\\n' \"${1#*/}\"; }\n_rfc1918() { case \"$1\" in 10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) printf YES;; *) printf NO;; esac; }\n\n_is_nat_only_setup() {\n  # Returns 0 (true) if ALL interfaces with IPv4 have only RFC1918 (private) IPs.\n  # This indicates a NAT scenario (e.g., LunaNode) where the public IP is outside the VM.\n  _HAS_ANY_IPV4=\"NO\"\n  for _I in $(_list_ifaces); do\n    _ALL_V4=\"$(_live_v4_list_if \"${_I}\")\"\n    [ -z \"${_ALL_V4}\" ] && continue\n    _HAS_ANY_IPV4=\"YES\"\n    for _CIDR in ${_ALL_V4}; do\n      _IP=\"${_CIDR%/*}\"\n      if [ \"$(_rfc1918 \"${_IP}\")\" = \"NO\" ]; then\n        # Found a public IP, so not NAT-only\n        return 1\n      fi\n    done\n  done\n  # If we found IPv4 addresses and all were private, it's NAT-only\n  [ \"${_HAS_ANY_IPV4}\" = \"YES\" ] && return 0\n  return 1\n}\n\n_ip_to_int() { IFS=. read -r a b c d <<EOF\n$1\nEOF\nprintf '%u\\n' \"$(( (a<<24) + (b<<16) + (c<<8) + d ))\"; }\n_int_to_ip() { i=\"$1\"; printf '%d.%d.%d.%d\\n' $(( (i>>24)&255 )) $(( (i>>16)&255 )) $(( (i>>8)&255 )) $(( i&255 )); }\n_prefix_to_mask() { p=\"$1\"; m=$(( (0xFFFFFFFF << (32-p)) & 0xFFFFFFFF )); _int_to_ip \"$m\"; }\n_mask_to_prefix() { IFS=. 
read -r a b c d <<EOF\n$1\nEOF\nn=0; for o in $a $b $c $d; do case \"$o\" in 255) n=$((n+8));; 254) n=$((n+7));; 252) n=$((n+6));; 248) n=$((n+5));; 240) n=$((n+4));; 224) n=$((n+3));; 192) n=$((n+2));; 128) n=$((n+1));; 0) :;; *) n=-1;; esac; done; printf '%s\\n' \"${n:-0}\"; }\n\n_calc_base_plus_one() { ip=\"$1\"; pfx=\"$2\"; mask=\"$(_prefix_to_mask \"$pfx\")\"; ipi=\"$(_ip_to_int \"$ip\")\"; mi=\"$(_ip_to_int \"$mask\")\"; base=$(( ipi & mi )); _int_to_ip $(( base + 1 )); }\n\n# ----------------------------- Metadata (DigitalOcean) -------------------------\n\n_do_meta_get() {\n  _URL=\"$1\"\n  if _cmd_exists curl; then curl -fs --max-time 2 \"${_URL}\"\n  elif _cmd_exists wget; then wget -qO- \"${_URL}\"\n  else return 1\n  fi\n}\n\n_do_meta_gateway_for_if() {\n  _IF=\"$1\"\n  ip route replace 169.254.169.254 dev \"${_IF}\" 2>/dev/null || true\n  _do_meta_get \"http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/gateway\"\n}\n\n_do_meta_netmask_for_if() {\n  _IF=\"$1\"\n  ip route replace 169.254.169.254 dev \"${_IF}\" 2>/dev/null || true\n  _do_meta_get \"http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/netmask\"\n}\n\n# ----------------------------- DNS enforcement ---------------------------------\n\n_collect_forced_dns() {\n  _SEEN=\"\"; for _W in ${_FORCE_DNS_LIST}; do case \" ${_SEEN} \" in *\" ${_W} \"*) : ;; *) _SEEN=\"${_SEEN} ${_W}\" ;; esac; done\n  printf '%s\\n' \"${_SEEN# }\"\n}\n\n_fix_resolv_conf() {\n  _DNS=\"$1\"\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would write /etc/resolv.conf: %s\\n' \"${_DNS}\"\n    return 0\n  fi\n  chattr -i /etc/resolv.conf 2>/dev/null || true\n  rm -f /etc/resolv.conf\n  : > /etc/resolv.conf\n  for _NS in ${_DNS}; do printf 'nameserver %s\\n' \"${_NS}\" >> /etc/resolv.conf; done\n  chmod 0644 /etc/resolv.conf\n  chattr +i /etc/resolv.conf 2>/dev/null || true\n  _msg \"Wrote real /etc/resolv.conf\"\n}\n\n_install_dhclient_dns_block() {\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would install 
dhclient enter-hook to block resolv.conf changes\\n'\n    return 0\n  fi\n  mkdir -p /etc/dhcp/dhclient-enter-hooks.d\n  cat > /etc/dhcp/dhclient-enter-hooks.d/99-vmnetfix-nodns <<'EOF'\n# vmnetfix: prevent dhclient from overwriting /etc/resolv.conf\nmake_resolv_conf() { :; }\nEOF\n  chmod 0644 /etc/dhcp/dhclient-enter-hooks.d/99-vmnetfix-nodns\n  _msg \"Installed dhclient hook to block resolv.conf edits\"\n}\n\n_enforce_dns_always() {\n  _DNS=\"$(_collect_forced_dns)\"\n  _fix_resolv_conf \"${_DNS}\"\n  _install_dhclient_dns_block\n}\n\n# ----------------------------- Cloud-init & tooling ----------------------------\n\n_disable_cloudinit_networking() {\n  [ \"${_DISABLE_CLOUD_INIT}\" = \"YES\" ] || return 0\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would write /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg\\n'\n    return 0\n  fi\n  mkdir -p /etc/cloud/cloud.cfg.d\n  printf 'network: {config: disabled}\\n' >/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg\n  _msg \"cloud-init networking disabled.\"\n}\n\n_ensure_ifupdown() {\n  _cmd_exists ifup && return 0\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo 'DRY-RUN: would install ifupdown (or ifupdown2)'\n    return 0\n  fi\n  apt-get update -y || true\n  _noninteractive_apt install -y ifupdown || _noninteractive_apt install -y ifupdown2\n}\n\n_ensure_dhcp_client() {\n  # For classic ifupdown on Devuan we want isc-dhcp-client (dhclient) available.\n  _cmd_exists dhclient && return 0\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo 'DRY-RUN: would install isc-dhcp-client'\n    return 0\n  fi\n  _noninteractive_apt install -y isc-dhcp-client || true\n}\n\n_ensure_sysv_prereqs() {\n  # On Debian images with systemd-sysv, installing classic SysV plumbing packages\n  # can conflict and break apt resolution. 
We only need these on Devuan (no systemd),\n  # so skip when systemd-sysv is present.\n  if dpkg -s systemd-sysv >/dev/null 2>&1; then\n    _msg \"systemd-sysv present; skipping SysV prereqs install on Debian to avoid conflicts.\"\n    return 0\n  fi\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo 'DRY-RUN: would install initscripts sysv-rc startpar insserv'\n    return 0\n  fi\n  _noninteractive_apt install -y initscripts sysv-rc startpar insserv || true\n}\n\n# ----------------------------- Init hook (ifupdown2) ---------------------------\n\n_write_networking_init_if_missing() {\n  # For ifupdown2 only; add minimal boot hook if missing, with safe LSB headers.\n  [ \"$(_ifup_variant)\" = \"ifupdown2\" ] || { _msg \"ifup variant not ifupdown2 — no networking shim needed.\"; return 0; }\n\n  # Ensure /etc/default/networking exists and CONFIGURE_INTERFACES is enabled\n  if [ ! -f /etc/default/networking ]; then\n    [ \"${_DRY_RUN}\" = \"YES\" ] || echo \"# This file can be used to override the default networking behavior\" > /etc/default/networking\n  fi\n\n  # Ensure CONFIGURE_INTERFACES is set to yes (critical for ifupdown)\n  if ! 
grep -q \"^CONFIGURE_INTERFACES=yes\" /etc/default/networking 2>/dev/null; then\n    if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n      echo \"[DRY-RUN] Would enable CONFIGURE_INTERFACES in /etc/default/networking\"\n    else\n      # Remove any commented or incorrect lines\n      sed -i '/^[#[:space:]]*CONFIGURE_INTERFACES/d' /etc/default/networking\n      echo \"CONFIGURE_INTERFACES=yes\" >> /etc/default/networking\n      _msg \"Enabled CONFIGURE_INTERFACES in /etc/default/networking\"\n    fi\n  fi\n\n  if [ -x \"/etc/init.d/networking\" ]; then\n    _msg \"/etc/init.d/networking already present.\"\n    chmod 0755 /etc/init.d/networking || true\n    update-rc.d networking defaults >/dev/null 2>&1 || true\n    return 0\n  fi\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would create /etc/init.d/networking (ifupdown2 shim) and enable it\\n'\n    return 0\n  fi\n  cat >/etc/init.d/networking <<'EOF'\n#!/usr/bin/env bash\n### BEGIN INIT INFO\n# Provides:          networking\n# Required-Start:    $remote_fs $syslog\n# Required-Stop:\n# Should-Stop:       $remote_fs $syslog\n# Default-Start:     2 3 4 5\n# Default-Stop:      0 1 6\n# Short-Description: Bring up/down networking via ifup/ifdown (ifupdown2)\n### END INIT INFO\n_L(){ rm -f /run/network/ifupdown2/ifupdown2.lock 2>/dev/null || true; }\nU=\"$(command -v ifup || true)\"; D=\"$(command -v ifdown || true)\"\nstart(){ _L; [ -x \"$U\" ] && \"$U\" -a || true; }\nstop(){  [ -x \"$D\" ] && \"$D\" -a || true; }\ncase \"$1\" in start) start;; stop) stop;; restart|force-reload) stop; start;; status) ip -br a; ip r;; *) echo \"Usage: $0 {start|stop|restart|force-reload|status}\"; exit 2;; esac\nEOF\n  # minimal sysv prerequisites so insserv has the virtual facilities it expects\n  dpkg -s initscripts >/dev/null 2>&1 || _noninteractive_apt install -y initscripts\n  dpkg -s sysv-rc     >/dev/null 2>&1 || _noninteractive_apt install -y sysv-rc startpar insserv\n  chmod 0755 /etc/init.d/networking\n  
update-rc.d networking defaults >/dev/null 2>&1 || true\n  _msg \"Created and enabled /etc/init.d/networking shim for ifupdown2.\"\n}\n\n_ensure_networking_exec_and_runlevels() {\n  # Ensure /etc/init.d/networking is executable and runlevels match the LSB header.\n  if [ -f /etc/init.d/networking ]; then\n    chmod 0755 /etc/init.d/networking || true\n  fi\n  # Make any dpkg-old copy non-executable so insserv doesn't get confused\n  chmod 0644 /etc/init.d/networking.dpkg-old 2>/dev/null || true\n\n  # Reset symlinks to header defaults (fixes \"overrides LSB defaults (S)\" warnings)\n  update-rc.d -f networking remove >/dev/null 2>&1 || true\n  insserv -r networking >/dev/null 2>&1 || true\n  update-rc.d networking defaults >/dev/null 2>&1 || true\n  insserv -v networking >/dev/null 2>&1 || true\n}\n\n# ----------------------------- Boot guard (route sanity) -----------------------\n\n_install_boot_guard() {\n  # Runs on each ifup; ensures default route exists (handles /32/off-link).\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo '[DRY-RUN] Would install /etc/network/if-up.d/99-vmnetfix-guard'\n    return 0\n  fi\n  mkdir -p /etc/network/if-up.d\n  cat > /etc/network/if-up.d/99-vmnetfix-guard <<'EOF'\n#!/bin/sh\n# vmnetfix guard: ensure default route for $IFACE (handles /32/off-link GWs).\n[ \"$IFACE\" = \"lo\" ] && exit 0\n\n# If a default route already exists via a different interface, leave it alone.\nDEFDEV=\"$(ip -4 route show default 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i==\"dev\"){print $(i+1); exit}}')\"\n[ -n \"$DEFDEV\" ] && [ \"$DEFDEV\" != \"$IFACE\" ] && exit 0\n\n# Read gateway for this IFACE from /etc/network/interfaces (if any).\nGW=\"$(awk -v IFACE=\"$IFACE\" 'BEGIN{inif=0}\n  /^[[:space:]]*iface[[:space:]]+/ { inif=($2==IFACE && $3==\"inet\") }\n  inif && /^[[:space:]]*gateway[[:space:]]+/ { print $2; exit }' /etc/network/interfaces 2>/dev/null)\"\n\n[ -z \"$GW\" ] && exit 0\n\nCIDR=\"$(ip -o -4 addr show dev \"$IFACE\" scope global 2>/dev/null | 
awk '{print $4}' | head -n1)\"\nPFX=\"${CIDR#*/}\"\n\n# If gateway is on-link (and not /32), a regular default route suffices.\nif ip route get \"$GW\" 2>/dev/null | grep -q \"dev $IFACE\" && [ \"$PFX\" != \"32\" ]; then\n  ip route replace default via \"$GW\" dev \"$IFACE\" >/dev/null 2>&1 || true\n  exit 0\nfi\n\n# Otherwise add a host route to the GW and set an onlink default.\nip route add \"$GW\" dev \"$IFACE\" >/dev/null 2>&1 || true\nip route replace default via \"$GW\" dev \"$IFACE\" onlink >/dev/null 2>&1 || true\nexit 0\nEOF\n  chmod 0755 /etc/network/if-up.d/99-vmnetfix-guard\n  _msg \"Installed if-up guard\"\n}\n\n# ----------------------------- Interface filters -------------------------------\n\n_is_dir_netdev() { _IF=\"$1\"; [ -d \"/sys/class/net/${_IF}\" ] && ip -o link show dev \"${_IF}\" >/dev/null 2>&1; }\n\n_is_pseudo_if() {\n  case \"$1\" in\n    lo|bonding_masters|*:*|docker*|veth*|br*|virbr*|tun*|tap*|wg*|tailscale*|zt*|nlmon*|ifb*|gre*|gretap*|erspan*|sit*|ip6tnl*|tunl*) return 0 ;;\n    bond*) return 0 ;;  # treat as pseudo unless it already has live IPv4 (writer handles)\n  esac\n  return 1\n}\n\n_list_ifaces() {\n  for _IF in $(ls -1 /sys/class/net 2>/dev/null); do\n    [ \"$_IF\" = \"lo\" ] && continue\n    _is_dir_netdev \"$_IF\" || continue\n    _is_pseudo_if \"$_IF\" && continue\n    printf '%s\\n' \"$_IF\"\n  done\n}\n\n_iface_v6_list() { ip -o -6 addr show dev \"$1\" scope global 2>/dev/null | awk '{print $4}'; }\n\n# ----------------------------- Writers & tests ---------------------------------\n\n_pick_primary_v4_for_if() {\n  # Sets globals: _PRIMARY_V4, _OTHERS_V4 (space-separated), prefer public IP\n  _IF=\"$1\"; _PRIMARY_V4=\"\"; _OTHERS_V4=\"\"\n  _ALL_V4=\"$(_live_v4_list_if \"${_IF}\")\"\n  for _CIDR in ${_ALL_V4}; do\n    _IP=\"${_CIDR%/*}\"\n    [ -z \"${_PRIMARY_V4}\" ] && _PRIMARY_V4=\"${_CIDR}\"\n    [ \"$(_rfc1918 \"${_IP}\")\" = \"NO\" ] && _PRIMARY_V4=\"${_CIDR}\"\n  done\n  for _CIDR in ${_ALL_V4}; do\n    [ 
\"${_CIDR}\" = \"${_PRIMARY_V4}\" ] && continue\n    _OTHERS_V4=\"${_OTHERS_V4} ${_CIDR}\"\n  done\n  _OTHERS_V4=\"${_OTHERS_V4# }\"\n}\n\n_write_interfaces_all() {\n  _DEF=\"$1\"; _GW=\"$2\"; _DNS=\"$3\"\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo 'DRY-RUN: would write /etc/network/interfaces'\n    return 0\n  fi\n  _backup_file /etc/network/interfaces\n  {\n    echo \"auto lo\"\n    echo \"iface lo inet loopback\"\n    echo\n    for _IF in $(_list_ifaces); do\n      echo \"auto ${_IF}\"\n      echo \"allow-hotplug ${_IF}\"\n\n      _ALL=\"$(_live_v4_list_if \"${_IF}\")\"\n      if [ -n \"${_ALL}\" ]; then\n        _pick_primary_v4_for_if \"${_IF}\"\n        _PRIMARY=\"${_PRIMARY_V4}\"\n        _OTHERS=\"${_OTHERS_V4}\"\n\n        if [ \"${_IF}\" = \"${_DEF}\" ] && [ \"${_DEF_WAS_DHCP}\" = \"YES\" ]; then\n          echo \"iface ${_IF} inet dhcp\"\n          # DNS is enforced via /etc/resolv.conf by vmnetfix, but keeping this hint is harmless.\n          [ -n \"${_DNS}\" ] && { printf '  dns-nameservers'; for _NS in ${_DNS}; do printf ' %s' \"${_NS}\"; done; echo; }\n          # For /32 with off-link GW, dhclient won't add the host route needed to\n          # reach the gateway. 
Add explicit onlink helpers so routing works after reboot.\n          _PFX=\"$(_cidr_prefix \"${_PRIMARY}\")\"\n          if [ \"${_PFX}\" = \"32\" ] && [ -n \"${_GW}\" ]; then\n            echo \"  post-up ip route add ${_GW} dev ${_IF} || true\"\n            echo \"  post-up ip route replace default via ${_GW} dev ${_IF} onlink || true\"\n            echo \"  pre-down ip route del default via ${_GW} dev ${_IF} || true\"\n            echo \"  pre-down ip route del ${_GW} dev ${_IF} || true\"\n          fi\n        else\n          echo \"iface ${_IF} inet static\"\n          echo \"  address ${_PRIMARY}\"\n\n          if [ \"${_IF}\" = \"${_DEF}\" ] && [ -n \"${_GW}\" ]; then\n            _PFX=\"$(_cidr_prefix \"${_PRIMARY}\")\"\n            if ip route get \"${_GW}\" 2>/dev/null | grep -q \"dev ${_IF}\" && [ \"${_PFX}\" != \"32\" ]; then\n              echo \"  gateway ${_GW}\"\n            else\n              # /32 or off-link default GW: add explicit onlink route on up\n              echo \"  post-up ip route add ${_GW} dev ${_IF} || true\"\n              echo \"  post-up ip route replace default via ${_GW} dev ${_IF} onlink || true\"\n              echo \"  pre-down ip route del default via ${_GW} dev ${_IF} || true\"\n              echo \"  pre-down ip route del ${_GW} dev ${_IF} || true\"\n            fi\n            [ -n \"${_DNS}\" ] && { printf '  dns-nameservers'; for _NS in ${_DNS}; do printf ' %s' \"${_NS}\"; done; echo; }\n          fi\n\n          for _C in ${_OTHERS}; do\n            echo \"  post-up ip addr add ${_C} dev ${_IF} || true\"\n            echo \"  pre-down ip addr del ${_C} dev ${_IF} || true\"\n          done\n        fi\n\n      else\n        # No live IPv4 -> do not emit a DHCP stanza; leave it unmanaged.\n        true\n      fi\n\n      _V6_LIST=\"$(_iface_v6_list \"${_IF}\")\"\n      if [ -n \"${_V6_LIST}\" ]; then\n        # Skip adding static IPv6 if IPv4 on this interface is DHCP (avoid mixed config)\n        _SKIP_V6=\"NO\"\n    
    if [ \"${_IF}\" = \"${_DEF}\" ] && [ \"${_DEF_WAS_DHCP}\" = \"YES\" ]; then\n          _SKIP_V6=\"YES\"\n          _msg \"Skipping static IPv6 on ${_IF} because IPv4 is DHCP (avoiding mixed config)\"\n        fi\n\n        if [ \"${_SKIP_V6}\" = \"NO\" ]; then\n          _V6P=\"$(printf '%s\\n' \"${_V6_LIST}\" | head -n1)\"\n          echo\n          echo \"iface ${_IF} inet6 static\"\n          echo \"  address ${_V6P}\"\n          # (No default IPv6 gateway by default; add manually if needed)\n        fi\n      fi\n\n      echo\n    done\n\n    # If preserving cloud-init, include vendor snippets too:\n    [ \"${_DISABLE_CLOUD_INIT}\" = \"NO\" ] && echo 'source /etc/network/interfaces.d/*'\n  } >/etc/network/interfaces\n  _msg \"Wrote /etc/network/interfaces\"\n}\n\n_sanitize_interfaces_file() {\n  # Remove stray bond/bonding_masters blocks (idempotent)\n  [ -f /etc/network/interfaces ] || return 0\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo '[DRY-RUN] Would sanitize bonding stanzas in /etc/network/interfaces'\n    return 0\n  fi\n  sed -i \\\n    -e '/^allow-hotplug[[:space:]]\\+bonding_masters$/,/^$/d' \\\n    -e '/^iface[[:space:]]\\+bonding_masters[[:space:]]\\+inet[[:space:]]/,/^$/d' \\\n    -e '/^allow-hotplug[[:space:]]\\+bond[0-9]\\+$/,/^$/d' \\\n    -e '/^iface[[:space:]]\\+bond[0-9]\\+[[:space:]]\\+inet[[:space:]]/,/^$/d' \\\n    /etc/network/interfaces\n}\n\n_ephemeral_static_up() {\n  # Apply an immediate static config: address CIDR + default via GW (onlink safe)\n  _IF=\"$1\"; _CIDR=\"$2\"; _GW=\"$3\"\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    printf '[DRY-RUN] Would apply ephemeral static on %s: %s via %s\\n' \"$_IF\" \"$_CIDR\" \"$_GW\"\n    return 0\n  fi\n  ip addr flush dev \"${_IF}\" || true\n  ip link set \"${_IF}\" up || true\n  ip addr add \"${_CIDR}\" dev \"${_IF}\" || true\n  _PFX=\"$(_cidr_prefix \"${_CIDR}\")\"\n  if [ \"${_PFX}\" = \"32\" ]; then\n    ip route add \"${_GW}\" dev \"${_IF}\" 2>/dev/null || true\n    ip 
route replace default via \"${_GW}\" dev \"${_IF}\" onlink || true\n  else\n    ip route replace default via \"${_GW}\" dev \"${_IF}\" || true\n  fi\n}\n\n_unload_bonding_if_unused() {\n  lsmod | grep -q '^bonding' || return 0\n  ip -o -4 addr show | grep -q ' bond[0-9]\\+ ' && return 0\n  modprobe -r bonding 2>/dev/null || true\n}\n\n_ephemeral_bringup_all() {\n  [ \"${_TEST_NOW}\" = \"YES\" ] || return 0\n  for _IF in $(_list_ifaces); do\n    ip link set \"${_IF}\" up || true\n    ip -o -4 addr show dev \"${_IF}\" scope global >/dev/null 2>&1 || {\n      _cmd_exists dhclient && dhclient -1 \"${_IF}\" || true\n    }\n  done\n  # Non-fatal reachability check for default GW\n  _DEF=\"$(_default_route_if)\"; _GW=\"$(_default_gw_v4)\"\n  [ -n \"${_DEF}\" ] && [ -n \"${_GW}\" ] && ping -c1 -w4 \"${_GW}\" >/dev/null 2>&1 && _msg \"GW reachable in test.\" || _msg \"GW not reachable in test.\"\n}\n\n# ----------------------------- Sanity classifier -------------------------------\n\n_config_is_sane() {\n  # Sane if:\n  # 1) default route & IF exist, and\n  # 2a) cloud-init DHCP manages that IF, or\n  # 2b) interfaces has matching static stanza (and for /32, has onlink helpers), or\n  # 2c) NAT-only setup with working DHCP (default route exists).\n\n  _DEF_IF=\"$(_default_route_if)\"; _GW=\"$(_default_gw_v4)\"\n  [ -n \"${_DEF_IF}\" ] || { _msg \"No default IF -> not sane\"; return 1; }\n  [ -n \"${_GW}\" ]  || { _msg \"No default GW -> not sane\"; return 1; }\n  _LIVE=\"$(_live_v4_cidr \"${_DEF_IF}\")\"; [ -n \"${_LIVE}\" ] || { _msg \"No live IPv4 -> not sane\"; return 1; }\n\n  # NAT-only scenario: if all IPs are private and DHCP is active with a default route, consider it sane\n  if _is_nat_only_setup; then\n    if _iface_v4_is_dynamic \"${_DEF_IF}\" || _default_route_is_dhcp; then\n      _msg \"NAT-only setup with working DHCP and default route — sane.\"\n      return 0\n    fi\n  fi\n\n  _LIVE=\"$(_live_v4_cidr \"${_DEF_IF}\")\"; [ -n \"${_LIVE}\" ] || { _msg \"No 
live IPv4 -> not sane\"; return 1; }\n\n  if _has_ci_dhcp_for_if \"${_DEF_IF}\"; then\n    _msg \"cloud-init DHCP on ${_DEF_IF} with default route — sane.\"\n    return 0\n  fi\n\n  if _iface_has_dhcp_for_if \"${_DEF_IF}\"; then\n    _msg \"DHCP stanza for ${_DEF_IF} with default route — sane.\"\n    return 0\n  fi\n\n  if _iface_static_match_live \"${_DEF_IF}\" \"${_LIVE}\"; then\n    _PFX=\"$(_cidr_prefix \"${_LIVE}\")\"\n    if [ \"${_PFX}\" = \"32\" ]; then\n      _iface_has_onlink_default_helpers \"${_DEF_IF}\" \"$(_default_gw_v4)\" && { _msg \"Static /32 with onlink helpers — sane.\"; return 0; }\n      _msg \"Static /32 missing onlink helpers — not sane.\"; return 1\n    fi\n    _msg \"Static matches live — sane.\"\n    return 0\n  fi\n\n  _msg \"Live IP does not match static config — not sane.\"\n  return 1\n}\n\n# ----------------------------- Fallback derivation -----------------------------\n\n_choose_candidate_if() {\n  # Prefer public IF with IPv4, else any IF with IPv4\n  _PUB=\"$(_live_public_if)\"\n  [ -n \"${_PUB}\" ] && { printf '%s\\n' \"${_PUB}\"; return 0; }\n  for _I in $(_list_ifaces); do\n    if [ -n \"$(_live_v4_cidr \"${_I}\")\" ]; then printf '%s\\n' \"${_I}\"; return 0; fi\n  done\n  printf '%s\\n' \"\"\n}\n\n_find_gateway_for_if() {\n  _IF=\"$1\"\n  # 1) Current default\n  _GW=\"$(_default_gw_v4)\"; [ -n \"${_GW}\" ] && { printf '%s\\n' \"${_GW}\"; return 0; }\n\n  # 2) Look in /etc/network/interfaces*\n  if [ -f /etc/network/interfaces ]; then\n    _GW=\"$(awk -v IFACE=\"${_IF}\" '\n      BEGIN{inif=0}\n      /^[[:space:]]*iface[[:space:]]+/ { inif=($2==IFACE && $3==\"inet\") }\n      inif && /^[[:space:]]*gateway[[:space:]]+/ { print $2; exit }\n    ' /etc/network/interfaces)\"\n    [ -n \"${_GW}\" ] && { printf '%s\\n' \"${_GW}\"; return 0; }\n  fi\n\n  # 3) DigitalOcean metadata\n  _GW=\"$(_do_meta_gateway_for_if \"${_IF}\")\"\n  [ -n \"${_GW}\" ] && { printf '%s\\n' \"${_GW}\"; return 0; }\n\n  # 4) Guess base+1 from live CIDR\n  
_CIDR=\"$(_live_v4_cidr \"${_IF}\")\"\n  if [ -n \"${_CIDR}\" ]; then\n    _IP=\"${_CIDR%/*}\"; _PFX=\"${_CIDR#*/}\"\n    _GW=\"$(_calc_base_plus_one \"${_IP}\" \"${_PFX}\")\"\n    printf '%s\\n' \"${_GW}\"\n    return 0\n  fi\n\n  printf '%s\\n' \"\"\n}\n\n_find_prefix_for_if() {\n  _IF=\"$1\"\n  _CIDR=\"$(_live_v4_cidr \"${_IF}\")\"\n  if [ -n \"${_CIDR}\" ]; then printf '%s\\n' \"${_CIDR#*/}\"; return 0; fi\n\n  # From DO metadata (netmask -> prefix)\n  _NM=\"$(_do_meta_netmask_for_if \"${_IF}\")\"\n  if [ -n \"${_NM}\" ]; then _mask_to_prefix \"${_NM}\"; return 0; fi\n\n  printf '%s\\n' \"\"\n}\n\n# ----------------------- ifupdown2 -> ifupdown --------------------------------\n\n_has_legacy_alias_layout() {\n  # Detect classic alias syntax and dotted-decimal netmasks used by DO vanilla.\n  # Returns 0 (true) if legacy layout is present.\n  grep -Esq '^[[:space:]]*auto[[:space:]]+[a-z0-9]+:[0-9]+' /etc/network/interfaces 2>/dev/null && return 0\n  grep -Esq '^[[:space:]]*iface[[:space:]]+[a-z0-9]+:[0-9]+' /etc/network/interfaces 2>/dev/null && return 0\n  grep -Esq '^[[:space:]]*netmask[[:space:]]+[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+' /etc/network/interfaces 2>/dev/null && return 0\n  grep -Esq '^[[:space:]]*broadcast[[:space:]]+[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+' /etc/network/interfaces 2>/dev/null && return 0\n  grep -Esq 'post-up[[:space:]]+if(up|config)[[:space:]]+[a-z0-9]+:[0-9]+' /etc/network/interfaces 2>/dev/null && return 0\n  return 1\n}\n\n_switch_to_ifupdown_when_legacy_detected() {\n  # If ifupdown2 is installed AND legacy layout is present, switch to classic ifupdown.\n  _cmd_exists ifup || true  # for logging consistency\n  dpkg -s ifupdown2 >/dev/null 2>&1 || return 0\n  _has_legacy_alias_layout || return 0\n\n  _msg \"Legacy alias/DO layout detected; switching from ifupdown2 -> ifupdown.\"\n\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo \"[DRY-RUN] _noninteractive_apt install -y ifupdown (replaces ifupdown2)\"\n    echo \"[DRY-RUN] 
update-rc.d networking defaults; insserv -v networking\"\n    return 0\n  fi\n\n  apt-get update -y || true\n  # ifupdown conflicts with ifupdown2; this will replace it cleanly\n  _noninteractive_apt install -y ifupdown || { _err \"ifupdown install failed\"; return 1; }\n\n  # Ensure the canonical init script is enabled and ordered\n  _ensure_networking_exec_and_runlevels\n\n  # Clear any ifupdown2 lock left around and start networking now\n  rm -f /run/network/ifupdown2/ifupdown2.lock 2>/dev/null || true\n  service networking restart >/dev/null 2>&1 || /etc/init.d/networking start >/dev/null 2>&1 || true\n\n  _msg \"Switched to classic ifupdown and refreshed networking.\"\n}\n\n# ---------------------- systemd-networkd handoff ------------------------------\n\n_disable_systemd_networkd_for_next_boot() {\n  # When migrating a Debian image (systemd) to classic ifupdown, systemd-networkd\n  # must not stay enabled, otherwise it can race with ifupdown at boot and leave\n  # the VM offline.\n  #\n  # IMPORTANT: We disable/mask networkd for next boot, then start networking.service\n  # to take over immediately (safe even over SSH since config is already written).\n\n  # Only meaningful if systemd is PID 1.\n  [ -d /run/systemd/system ] || return 0\n  _cmd_exists systemctl || return 0\n\n  if [ \"${_DRY_RUN}\" = \"YES\" ]; then\n    echo \"[DRY-RUN] Would enable networking.service\"\n    echo \"[DRY-RUN] Would disable systemd-networkd + systemd-networkd-wait-online\"\n    echo \"[DRY-RUN] Would mask systemd-networkd + systemd-networkd-wait-online\"\n    echo \"[DRY-RUN] Would start networking.service to take over\"\n    return 0\n  fi\n\n  # Ensure classic networking is enabled under systemd.\n  systemctl enable networking >/dev/null 2>&1 || true\n\n  # Prevent networkd from coming back on reboot (common on cloud images).\n  systemctl disable systemd-networkd systemd-networkd-wait-online >/dev/null 2>&1 || true\n  systemctl mask systemd-networkd 
systemd-networkd-wait-online >/dev/null 2>&1 || true\n\n  # Start networking.service now to take over from systemd-networkd immediately.\n  # This is safe because /etc/network/interfaces is already written with correct config.\n  systemctl start networking >/dev/null 2>&1 || true\n  _msg \"Started networking.service to take over from systemd-networkd\"\n}\n\n# ----------------------------- Main flow --------------------------------------\n\n_main() {\n  _need_root\n  _single_instance_lock\n  _parse_args \"$@\"\n\n  _ensure_sysv_prereqs\n  _ensure_ifupdown\n  _ensure_dhcp_client\n\n  # If the current file looks like legacy DO alias layout, switch to classic ifupdown first.\n  _switch_to_ifupdown_when_legacy_detected\n\n  _DEF=\"$(_default_route_if)\"; _GW=\"$(_default_gw_v4)\"; _LIVE=\"$(_live_v4_cidr \"${_DEF}\")\"\n  _msg \"Default IF: ${_DEF:-<none>}  GW4: ${_GW:-<none>}  LIVE: ${_LIVE:-<none>}\"\n\n  case \"${_MODE}\" in\n    init-only)\n      _write_networking_init_if_missing\n      _ensure_networking_exec_and_runlevels\n      _install_boot_guard\n      _enforce_dns_always\n      printf 'Init-only mode: boot hook + guard ensured; DNS enforced. 
No config changes.\\n'\n      exit 0\n      ;;\n    full)\n      [ \"${_DISABLE_CLOUD_INIT}\" = \"YES\" ] && _disable_cloudinit_networking\n      # Derive defaults if missing\n      [ -z \"${_DEF}\" ] && _DEF=\"$(_choose_candidate_if)\"\n      [ -z \"${_LIVE}\" ] && _LIVE=\"$(_live_v4_cidr \"${_DEF}\")\"\n      [ -z \"${_GW}\" ] && _GW=\"$(_find_gateway_for_if \"${_DEF}\")\"\n      _PFX=\"$(_cidr_prefix \"${_LIVE}\")\"\n      [ -z \"${_PFX}\" ] && _PFX=\"$(_find_prefix_for_if \"${_DEF}\")\"\n      if [ -z \"${_DEF}\" ] || [ -z \"${_LIVE}\" ] || [ -z \"${_GW}\" ] || [ -z \"${_PFX}\" ]; then\n        _err \"Cannot derive interface/gateway/prefix.\"; exit 1\n      fi\n\n      # Decide whether the default interface is DHCP-managed (do NOT pin a dynamic address as static).\n      _DEF_WAS_DHCP=\"NO\"\n      _iface_v4_is_dynamic \"${_DEF}\" && _DEF_WAS_DHCP=\"YES\"\n      _default_route_is_dhcp && _DEF_WAS_DHCP=\"YES\"\n      if [ \"${_DEF_WAS_DHCP}\" = \"YES\" ]; then\n        _msg \"Detected DHCP-managed IPv4 on ${_DEF}; generating ifupdown DHCP stanza (no static pinning).\"\n      else\n        # Apply ephemeral static now (so this session has networking)\n        _ephemeral_static_up \"${_DEF}\" \"${_LIVE}\" \"${_GW}\"\n      fi\n\n      _DNS=\"$(_collect_forced_dns)\"\n      _sanitize_interfaces_file\n      _write_interfaces_all \"${_DEF}\" \"${_GW}\" \"${_DNS}\"\n      _fix_resolv_conf \"${_DNS}\"\n      _write_networking_init_if_missing\n      _ensure_networking_exec_and_runlevels\n      _disable_systemd_networkd_for_next_boot\n      _install_boot_guard\n      _unload_bonding_if_unused\n      _ephemeral_bringup_all\n      _enforce_dns_always\n      printf 'Full fix applied. Safe to reboot.\\n'\n      exit 0\n      ;;\n    auto|*)\n      if _config_is_sane; then\n        _msg \"AUTO: config looks sane — ensuring boot hook + guard + DNS.\"\n        _write_networking_init_if_missing\n        _ensure_networking_exec_and_runlevels\n        _install_boot_guard\n        _enforce_dns_always\n        printf 'Config sane. 
Boot hook + guard ensured; DNS enforced. No config changes.\\n'\n        exit 0\n      else\n        _msg \"AUTO: config NOT sane — performing remediation.\"\n\n        [ \"${_DISABLE_CLOUD_INIT}\" = \"YES\" ] && _disable_cloudinit_networking\n\n        # Pick an interface: prefer public IPv4 if default missing\n        [ -z \"${_DEF}\" ] && _DEF=\"$(_choose_candidate_if)\"\n        [ -z \"${_DEF}\" ] && { _err \"No candidate interface with IPv4 found.\"; exit 1; }\n\n        # Live CIDR & prefix\n        _LIVE=\"$(_live_v4_cidr \"${_DEF}\")\"\n        _PFX=\"$(_cidr_prefix \"${_LIVE}\")\"\n        [ -z \"${_PFX}\" ] && _PFX=\"$(_find_prefix_for_if \"${_DEF}\")\"\n\n        # Gateway\n        _GW=\"$(_find_gateway_for_if \"${_DEF}\")\"\n\n        if [ -z \"${_LIVE}\" ] || [ -z \"${_GW}\" ]; then\n          _err \"Insufficient data (LIVE=${_LIVE:-none} GW=${_GW:-none}). Cannot fix safely.\"\n          exit 1\n        fi\n\n        # Decide whether the default interface is DHCP-managed (do NOT pin a dynamic address as static).\n        _DEF_WAS_DHCP=\"NO\"\n        _iface_v4_is_dynamic \"${_DEF}\" && _DEF_WAS_DHCP=\"YES\"\n        _default_route_is_dhcp && _DEF_WAS_DHCP=\"YES\"\n        if [ \"${_DEF_WAS_DHCP}\" = \"YES\" ]; then\n          _msg \"Detected DHCP-managed IPv4 on ${_DEF}; generating ifupdown DHCP stanza (no static pinning).\"\n        else\n          # Bring up now (ephemeral) so we immediately regain connectivity\n          _ephemeral_static_up \"${_DEF}\" \"${_LIVE}\" \"${_GW}\"\n        fi\n\n        _DNS=\"$(_collect_forced_dns)\"\n        _sanitize_interfaces_file\n        _write_interfaces_all \"${_DEF}\" \"${_GW}\" \"${_DNS}\"\n        _fix_resolv_conf \"${_DNS}\"\n        _write_networking_init_if_missing\n        _ensure_networking_exec_and_runlevels\n        _disable_systemd_networkd_for_next_boot\n        _install_boot_guard\n        _unload_bonding_if_unused\n        _ephemeral_bringup_all\n        _enforce_dns_always\n        printf 
'Remediation applied. Safe to reboot.\\n'\n        exit 0\n      fi\n      ;;\n  esac\n}\n\n_main \"$@\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/weblogx",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_USE_GITHUB=/root/.goaccess.use.github.txt\n_GOACCESS_VRN=1.9.4\n_sPmrs=\"https://raw.githubusercontent.com/matomo-org/referrer-spam-blacklist/master/spammers.txt\"\n[ -e \"/root/.goaccess.use.github.txt\" ] && rm -f /root/.goaccess.use.github.txt\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We cannot proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -e \"/root/.pause_tasks_maint.cnf\" ] && exit 0\n\nif [ -n \"${_ENABLE_GOACCESS}\" ] && [ \"${_ENABLE_GOACCESS}\" = \"YES\" ]; then\n  _GOACCESS=YES\nelse\n  if [ -d \"/var/www/adminer/access/archive\" ]; then\n    rm -rf /var/www/adminer/access\n    rm -f /root/.goaccessrc*\n  fi\n  exit 0\nfi\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_fetch_geoip() {\n  mkdir -p /usr/share/GeoIP\n  chmod 755 /usr/share/GeoIP\n  mkdir -p /opt/tmp\n  cd /opt/tmp\n  rm -f /opt/tmp/Geo*\n\n  # For GeoIP2 ASN database:\n  wget -q -U iCab ${_urlDev}/src/GeoLite2-ASN.mmdb.gz\n  gunzip GeoLite2-ASN.mmdb.gz &> /dev/null\n  cp -af GeoLite2-ASN.mmdb /usr/share/GeoIP/\n\n  # For GeoIP2 City database:\n  wget -q -U iCab ${_urlDev}/src/GeoLite2-City.mmdb.gz\n  gunzip GeoLite2-City.mmdb.gz &> /dev/null\n  cp -af GeoLite2-City.mmdb /usr/share/GeoIP/\n\n  # For GeoIP2 Country database:\n  wget -q -U iCab ${_urlDev}/src/GeoLite2-Country.mmdb.gz\n  gunzip GeoLite2-Country.mmdb.gz &> /dev/null\n  cp -af GeoLite2-Country.mmdb /usr/share/GeoIP/\n\n  chmod 644 /usr/share/GeoIP/*\n  rm -f /opt/tmp/Geo*\n  cd\n}\n\n_install_goaccess() {\n  echo \"Installing GoAccess ${_GOACCESS_VRN} with dependencies...\"\n  cd\n  if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n  _apt_clean_update\n  aptitude purge goaccess -y\n  apt-get install libmaxminddb-dev ${_aptYesUnth}\n  apt-get install libncurses5-dev ${_aptYesUnth}\n  apt-get install libncursesw5-dev ${_aptYesUnth}\n  apt-get install autopoint ${_aptYesUnth}\n  _find_fast_mirror_early\n  mkdir -p /var/opt\n  cd /var/opt\n  rm -rf /var/opt/goaccess*\n  if [ -e \"${_USE_GITHUB}\" ]; then\n    echo \"Downloading from https://github.com/omega8cc/goaccess.git\"\n    git clone https://github.com/omega8cc/goaccess.git\n    cd /var/opt/goaccess\n    autoreconf -fi\n  else\n    echo \"Downloading from ${_urlDev}/src/goaccess-${_GOACCESS_VRN}.tar.gz\"\n    curl -I ${_urlDev}/src/goaccess-${_GOACCESS_VRN}.tar.gz\n    curl ${_crlGet} \"${_urlDev}/src/goaccess-${_GOACCESS_VRN}.tar.gz\" | tar -xzf -\n    cd /var/opt/goaccess-${_GOACCESS_VRN}\n  fi\n  bash ./configure --prefix=/usr --enable-utf8 --enable-geoip=mmdb --with-getline\n  make --quiet\n  make --quiet install\n  rm -rf /var/opt/goaccess*\n  cd\n}\n\n_if_install() {\n  _isGoacs=\"$(which goaccess)\"\n  if [ ! -x \"${_isGoacs}\" ] \\\n    || [ -z \"${_isGoacs}\" ]; then\n    _install_goaccess\n  else\n    _GOACCESS_ITD=$(goaccess --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | tr -d \"v=\" \\\n      | tr -d \"For\" \\\n      | cut -d\" \" -f3 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_GOACCESS_ITD}\" = \"${_GOACCESS_VRN}.\" ]; then\n      echo \"Latest GoAccess ${_GOACCESS_VRN} already installed\"\n    else\n      _install_goaccess\n    fi\n  fi\n  if [ -e \"/usr/etc/goaccess/goaccess.conf\" ]; then\n    if [ ! -e \"/root/.goaccessrc\" ] || [ ! 
-e \"/var/log/boa/.goaccessrc.fix.019.pid\" ]; then\n      echo 'date-format %d/%b/%Y' > /root/.goaccessrc\n      echo 'time-format %H:%M:%S' >> /root/.goaccessrc\n      echo 'log-format \"~h{, }\" %v [%d:%t %^] \"%r\" %s %^ %^ %b \"%R\" \"%u\" %T \"%^\"' >> /root/.goaccessrc\n      echo '444-as-404 true' >> /root/.goaccessrc\n      echo '4xx-to-unique-count false' >> /root/.goaccessrc\n      echo 'agent-list false' >> /root/.goaccessrc\n      echo 'all-static-files true' >> /root/.goaccessrc\n      echo 'anonymize-ip true' >> /root/.goaccessrc\n      echo 'browsers-file /usr/etc/goaccess/browsers.list' >> /root/.goaccessrc\n      echo 'double-decode true' >> /root/.goaccessrc\n      echo 'enable-panel BROWSERS' >> /root/.goaccessrc\n      echo 'enable-panel GEO_LOCATION' >> /root/.goaccessrc\n      echo 'enable-panel HOSTS' >> /root/.goaccessrc\n      echo 'enable-panel KEYPHRASES' >> /root/.goaccessrc\n      echo 'enable-panel NOT_FOUND' >> /root/.goaccessrc\n      echo 'enable-panel OS' >> /root/.goaccessrc\n      echo 'enable-panel REFERRERS' >> /root/.goaccessrc\n      echo 'enable-panel REFERRING_SITES' >> /root/.goaccessrc\n      echo 'enable-panel REMOTE_USER' >> /root/.goaccessrc\n      echo 'enable-panel REQUESTS_STATIC' >> /root/.goaccessrc\n      echo 'enable-panel REQUESTS' >> /root/.goaccessrc\n      echo 'enable-panel STATUS_CODES' >> /root/.goaccessrc\n      echo 'enable-panel VIRTUAL_HOSTS' >> /root/.goaccessrc\n      echo 'enable-panel VISIT_TIMES' >> /root/.goaccessrc\n      echo 'enable-panel VISITORS' >> /root/.goaccessrc\n      echo 'exclude-ip 127.0.0.1' >> /root/.goaccessrc\n      echo 'ignore-crawlers true' >> /root/.goaccessrc\n      if [ ! 
-e \"/usr/etc/goaccess/spammers.txt\" ]; then\n        ### curl ${_crlGet} \"${_sPmrs}\" -o /usr/etc/goaccess/spammers.txt\n        curl ${_crlGet} \"${_urlDev}/src/spammers.txt\" -o /usr/etc/goaccess/spammers.txt\n      fi\n      if [ -e \"/usr/etc/goaccess/spammers.txt\" ]; then\n        echo 'ignore-referrer /usr/etc/goaccess/spammers.txt' >> /root/.goaccessrc\n      fi\n      echo 'ignore-statics req' >> /root/.goaccessrc\n      echo 'ignore-status 301' >> /root/.goaccessrc\n      echo 'ignore-status 302' >> /root/.goaccessrc\n      echo 'real-os true' >> /root/.goaccessrc\n      echo 'sort-panel BROWSERS,BY_VISITORS,DESC' >> /root/.goaccessrc\n      echo 'sort-panel GEO_LOCATION,BY_VISITORS,DESC' >> /root/.goaccessrc\n      echo 'sort-panel HOSTS,BY_VISITORS,DESC' >> /root/.goaccessrc\n      echo 'sort-panel OS,BY_VISITORS,DESC' >> /root/.goaccessrc\n      echo 'sort-panel REFERRERS,BY_VISITORS,DESC' >> /root/.goaccessrc\n      echo 'sort-panel REFERRING_SITES,BY_VISITORS,DESC' >> /root/.goaccessrc\n      echo 'sort-panel REQUESTS_STATIC,BY_VISITORS,DESC' >> /root/.goaccessrc\n      echo 'sort-panel REQUESTS,BY_VISITORS,DESC' >> /root/.goaccessrc\n      if [ ! 
-e \"/usr/share/GeoIP/GeoLite2-City.mmdb\" ]; then\n        _fetch_geoip\n      fi\n      if [ -e \"/usr/share/GeoIP/GeoLite2-ASN.mmdb\" ]; then\n        echo 'geoip-database /usr/share/GeoIP/GeoLite2-ASN.mmdb' >> /root/.goaccessrc\n      fi\n      if [ -e \"/usr/share/GeoIP/GeoLite2-City.mmdb\" ]; then\n        echo 'geoip-database /usr/share/GeoIP/GeoLite2-City.mmdb' >> /root/.goaccessrc\n      fi\n      if [ -e \"/usr/share/GeoIP/GeoLite2-Country.mmdb\" ]; then\n        echo 'geoip-database /usr/share/GeoIP/GeoLite2-Country.mmdb' >> /root/.goaccessrc\n      fi\n      echo 'html-prefs {\"theme\":\"darkBlue\",\"perPage\":10,\"visitors\":{\"plot\":{\"chartType\":\"bar\"}},\"visit_time\":{\"plot\":{\"chartType\":\"bar\"}}}' >> /root/.goaccessrc\n      rm -f /var/log/boa/.goaccessrc.fix*\n      touch /var/log/boa/.goaccessrc.fix.019.pid\n      sleep 1\n    fi\n  else\n    echo \"ERROR: GoAccess was not found...\"\n    exit 1\n  fi\n}\n_if_install\n\nfor i in \"$@\"; do\n  case $i in\n    -s=*|--site=*)\n        _SITE=\"${i#*=}\"\n        shift # --site=SiteName\n    ;;\n    -e=*|--env=*)\n        _ENV=\"${i#*=}\"\n        shift # --env=[dev|stage|prod]\n    ;;\n    -u=*|--url=*)\n        _URL=\"${i#*=}\"\n        shift # --url=https://site-d9.foo.bar.aegir.cc/\n    ;;\n    -d=*|--dir=*)\n        _DIR=\"${i#*=}\"\n        shift # --dir=foo/bar\n    ;;\n    -c=*|--ga_conf=*)\n        _GACONF=\"${i#*=}\"\n        shift # --ga_conf=/etc/goaccess.conf\n    ;;\n    *)\n        # nope\n    ;;\n  esac\ndone\n\nif [ -z \"${_URL}\" ]; then\n  if [ -z \"${_SITE}\" ] && [ -z \"${_ENV}\" ]; then\n    echo \"[-] --site and --env must be specified\"\n    exit 1;\n  fi\nelse\n  # Strip the scheme and any path/trailing slash first, so the\n  # dev-/stage-/prod- and .aegir.cc patterns can match the documented\n  # form --url=https://site-d9.foo.bar.aegir.cc/\n  _URL_BARE=${_URL#*://}\n  _URL_BARE=${_URL_BARE%%/*}\n  _SITE=${_URL_BARE%.aegir.cc}\n  _SITE=${_SITE#dev-}\n  _SITE=${_SITE#stage-}\n  _SITE=${_SITE#prod-}\n  _ENV=\"$( cut -d '-' -f 1 <<< \"${_URL_BARE}\" )\"\nfi\n\necho \"[+] _SITE NAME: ${_SITE} / _ENV: ${_ENV}\"\n\nif [ -z \"${_ARCH}\" ]; then\n  _ARCHLOGS=/var/www/adminer/access/archive\n  
mkdir -p ${_ARCHLOGS}/unzip\nelse\n  _ARCHLOGS=${_ARCH}\n  mkdir -p ${_ARCH}/unzip\nfi\n\nif [ -z \"${_DIR}\" ]; then\n  mkdir -p /var/www/adminer/access/${_ENV}/${_SITE}\n  _TARGET=/var/www/adminer/access/${_ENV}/${_SITE}\nelse\n  mkdir -p ${_DIR}\n  _TARGET=${_DIR}\nfi\n\nif [ ! -e \"${_ARCHLOGS}/unzip/.global.pid\" ]; then\n  echo \"[+] SYNCING LOGS TO: ${_ARCHLOGS}\"\n  rsync -rlvz --size-only --progress /var/log/nginx/access* ${_ARCHLOGS}/\n  echo \"[+] COPYING LOGS TO: ${_ARCHLOGS}/unzip/\"\n  cp -af ${_ARCHLOGS}/access* ${_ARCHLOGS}/unzip/\n  echo \"[+] DECOMPRESSING GZ FILES\"\n  find ${_ARCHLOGS}/unzip -name \"*.gz\" -exec gunzip -f {} \\;\n  echo \"[+] RENAMING RAW FILES\"\n  for _log in `find ${_ARCHLOGS}/unzip \\\n    -maxdepth 1 -mindepth 1 -type f | sort`; do\n    mv -f ${_log} ${_log}.txt;\n  done\n  if [ -e \"${_ARCHLOGS}/unzip/.global.pid.txt\" ]; then\n    mv -f ${_ARCHLOGS}/unzip/.global.pid.txt ${_ARCHLOGS}/unzip/.global.pid\n  fi\n  rm -f ${_ARCHLOGS}/unzip/*.txt.txt*\nfi\n\necho \"[+] MERGING AND FILTERING NGINX LOGS\"\nif [ -e \"${_TARGET}/mrgd_nginx.log\" ]; then\n  rm -rf ${_TARGET}/mrgd_nginx.log\nfi\n\nif [[ \"${_SITE}\" =~ \"ALL\" ]]; then\n  _SITE_REGEX=\" /\"\nelse\n  _SITE_REGEX=\" ${_SITE} \"\nfi\n\nfind ${_ARCHLOGS}/unzip -name \"access*\" -exec cat {} \\; | grep \"${_SITE_REGEX}\" >> ${_TARGET}/mrgd_nginx.log\n\necho \"[+] EXPORTING GOACCESS REPORT HTML\"\n_GOVER=$(goaccess -V | awk 'NR == 1 { print substr ($3,0,1)}')\n\n[[ ${_GACONF} ]] && conf=(-p \"${_GACONF}\")\nif [[ ${_GOVER} = 1 ]]; then\n  goaccess -p /root/.goaccessrc --no-global-config --persist --no-query-string -f \"${_TARGET}/mrgd_nginx.log\" \"${conf[@]}\" -a -o \"${_TARGET}/index.html\"\nelse\n  goaccess -p /root/.goaccessrc --no-global-config --persist --no-query-string -f \"${_TARGET}/mrgd_nginx.log\" \"${conf[@]}\" > \"${_TARGET}/index.html\"\nfi\n\nif [ -e \"${_TARGET}/mrgd_nginx.log\" ]; then\n  rm -rf ${_TARGET}/mrgd_nginx.log\nfi\n\necho 
\"${_TARGET}/index.html\"\necho \"[+] DONE!\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/webserver",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n_valkey_health_check_fix() {\n  if ! pgrep -f /usr/bin/valkey-server \\\n    || [ ! -e \"/run/valkey/valkey.sock\" ] \\\n    || [ ! 
-e \"/run/valkey/valkey.pid\" ]; then\n    mkdir -p /run/valkey\n    chown -R valkey:valkey /run/valkey\n    killall -9 valkey-server &> /dev/null\n    rm -f /var/lib/valkey/*\n    service valkey-server restart\n    wait\n  fi\n}\n\n_fpm_health_check_fix() {\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -x \"/etc/init.d/php${e}-fpm\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n      _pat=\"php-fpm: master process.*/opt/php${e}/etc/php${e}-fpm.conf\"\n      _TestPhp=\"$(pgrep -f \"${_pat}\")\"\n      if ! pgrep -f \"${_pat}\" \\\n        || [ ! -S \"/run/www${e}.fpm.socket\" ] \\\n        || [ ! -s \"/run/php${e}-fpm.pid\" ]; then\n        : > /run/fmp_wait.pid\n        : > /run/restarting_fmp_wait.pid\n        sleep 1\n        service \"php${e}-fpm\" restart\n        wait\n        sleep 1\n        rm -f /run/fmp_wait.pid /run/restarting_fmp_wait.pid\n      fi\n    fi\n  done\n}\n\n_nginx_health_check_fix() {\n  if [ -x \"/etc/init.d/nginx\" ]; then\n    if ! pgrep -f 'nginx: master process' \\\n      || [ ! -e \"/run/nginx.pid\" ]; then\n      pkill -9 -f nginx: || true\n      service nginx restart\n      wait\n    fi\n  fi\n}\n\n_web_up() {\n  if [ ! -e \"/var/tmp/fpm\" ]; then\n    mkdir -p /var/tmp/fpm\n    chmod 777 /var/tmp/fpm\n  fi\n  _valkey_health_check_fix\n  _fpm_health_check_fix\n  _nginx_health_check_fix\n}\n\n# Simple CLI\ncase \"$1\" in\n  enforce) _web_up ;;\n  up) _web_up ;;\n  *)\n    echo \"Usage: $0 {enforce|up}\"\n    exit 2\n    ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/websh",
    "content": "#!/bin/bash\n\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec:/usr/local/ssl3\n\n_DEST_DRUSH=\"/opt/tools/drush/8/drush/drush.php\"\n\n_forward_to_dash() {\n  if [[ \"${_ARGS}\" =~ \"sudo_noexec.so\" ]] && [[ \"${_LTD_GID}\" =~ \"ltd-shell-more\"($) ]]; then\n    ### echo FWD 0 DIRECT\n    _R_M=`echo -n ${_ARGS} | sed 's|LD_PRELOAD.*.so||'`\n    ### echo _R_M is ${_R_M}\n    exec /bin/dash -c \"${_R_M}\"\n    exit 0\n  fi\n  # Path to the underlying shell (dash in this case)\n  _shell=\"/bin/dash\"\n  # Name under which the shell should think it was invoked\n  _shell_name=\"sh\"\n  # Arrays to hold options and positional parameters\n  _options=()\n  _positional=()\n  # Variables to hold command strings or script files\n  _command=\"\"\n  _script=\"\"\n\n  # Parse the options and arguments\n  while [[ $# -gt 0 ]]; do\n    case \"$1\" in\n      --)\n        # End of options\n        shift\n        _positional+=(\"$@\")\n        break\n        ;;\n      -c)\n        # Command option\n        if [[ -n \"$2\" ]]; then\n          _options+=(\"-c\" \"$2\")\n          shift 2\n          _positional+=(\"$@\")\n          break\n        else\n          echo \"sh: option requires an argument -- 'c'\" >&2\n          exit 1\n        fi\n        ;;\n      -i|-l|-s)\n        # Other common options\n        _options+=(\"$1\")\n        shift\n        ;;\n      -*)\n        # Unrecognized options\n        _options+=(\"$1\")\n        shift\n        ;;\n      *)\n        # First non-option argument is the script file\n        _script=\"$1\"\n        shift\n        _positional+=(\"$@\")\n        break\n        ;;\n    esac\n  done\n\n  # Prepare to execute the underlying shell\n  if [[ -n \"${_script}\" ]]; then\n    # Execute a script file\n    ### echo FWD 1 PARSED\n    exec -a \"${_shell_name}\" \"${_shell}\" \"${_options[@]}\" \"${_script}\" \"${_positional[@]}\"\n    exit 0\n  else\n    # Execute commands or 
start an interactive shell\n    ### echo FWD 2 PARSED\n    exec -a \"${_shell_name}\" \"${_shell}\" \"${_options[@]}\" \"${_positional[@]}\"\n    exit 0\n  fi\n}\n\n# Capture all arguments\n_ALL=\"$@\"\n### echo \"_ALL is ${_ALL}\"\n\n# Capture environment variables\n_ENV=$(env 2>&1)\n### echo \"_ENV is ${_ENV}\"\n\n# Determine _ARGS based on the first argument\nif [ \"${1}\" = \"-c\" ]; then\n  _ARGS=\"${2}\"\nelse\n  _ARGS=\"${1}\"\nfi\n\n### echo \"_ARGS is ${_ARGS}\"\n\nif [[ \"${_ARGS}\" =~ \"--php=\" ]]; then\n  PHP_FWD=YES\nelse\n  PHP_FWD=\nfi\n\nif [[ \"${_ARGS}\" =~ \"true COLUMNS=\" ]]; then\n  _R_M=`echo -n \"${_ARGS}\" | grep -o \"true COLUMNS=[0-9]\\+ \"`\n  _ARGS=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\nfi\n\nif [[ \"${_ARGS}\" =~ \"'\" ]] && [[ \"${_ARGS}\" =~ \"drush\" ]]; then\n  ### echo _ARGS RAW is ${_ARGS}\n  _ARGS=$(echo -n ${_ARGS} | tr -d \"'\" 2>&1)\n  ### echo _ARGS CLEAN is ${_ARGS}\nfi\n\nif [[ ! \"${_ARGS}\" =~ \"composer\" ]] \\\n  && [[ ! \"${_ARGS}\" =~ \"git\" ]] \\\n  && [[ ! \"${_ARGS}\" =~ \"cd \" ]] \\\n  && [[ ! \"${_ARGS}\" =~ \"mysql\" ]] \\\n  && [[ ! 
\"${_ARGS}\" =~ \"sudo\" ]]; then\n  _ARR=\n  if [[ \"${_ARGS}\" =~ \"/vendor/drush/drush/drush.php \" ]]; then\n    _R_M=`echo -n ${_ARGS}  | grep -o \".*/vendor/drush/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n    _R_M=`echo -n ${_ARGS}  | grep -o \"vendor/drush/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush8 \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush8\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush8\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array 
for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush10 \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush10\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush10\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"drush11 \" ]]; then\n    if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n      _R_M=`echo -n ${_ARGS}  | grep -o \"set -m\\; drush11\"`\n    else\n      _R_M=`echo -n ${_ARGS}  | grep -o \"drush11\"`\n    fi\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS}  | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n      esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"php /opt/tools/\" ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /opt/tools/drush/.*/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n\t  esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ \"php /data/disk/\" ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /data/disk/.*/tools/drush/drush.php\"`\n    ### echo 
_R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n\t  esac\n    done\n    ### echo _ARR is ${_ARR}\n  elif [[ \"${_ARGS}\" =~ php\\ /mnt/.*/data/disk/ ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /mnt/.*/data/disk/.*/tools/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _ARR+=(\"$arg\") ;;\n\t  esac\n    done\n    ### echo _ARR is ${_ARR}\n  fi\nfi\n\n_C_ARR=\nif [[ \"${_ARGS}\" =~ \"composer \" ]] && [[ ! \"${_ARGS}\" =~ \"git remote add composer\" ]]; then\n  if [[ \"${_ARGS}\" =~ \"set -m\" ]]; then\n    _C_RM=`echo -n ${_ARGS}  | grep -o \"set -m\\; composer \"`\n  else\n    _C_RM=`echo -n ${_ARGS}  | grep -o \"composer \"`\n  fi\n  ### echo _C_RM is ${_C_RM}\n  _CLR=`echo ${_ARGS}  | sed \"s/\\${_C_RM}//g\"`\n  _C_ARR=() # the buffer array for filtered parameters\n  for arg in \"${_CLR}\"; do\n    case ${_C_RM} in\n      $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n      *) _C_ARR+=(\"$arg\") ;;\n    esac\n  done\n  ### echo _C_ARR is ${_C_ARR}\nfi\n\n_INTERNAL=NO\n_LTD_GID=$(id -nG ${USER} 2>&1)\n_LTD_UID=$(id -nu ${USER} 2>&1)\nif [ -z \"${USER}\" ]; then\n  USER=$(id -nu ${USER} 2>&1)\n  _LTD_QQQ=YES\nfi\n_X_USR=\".*\"\nif [ \"${USER}\" = \"aegir\" ] \\\n  || [ \"${HOME}\" = \"/var/aegir\" ]; then\n  _Y_USR=aegir\n  _DRUSH_CLI_CTRL=\"/var/aegir/static/control\"\n  ### echo _DRUSH_CLI_CTRL is ${_DRUSH_CLI_CTRL}\nelse\n  _Y_USR=${USER%${_X_USR}}\n  _DRUSH_CLI_CTRL=\"/data/disk/${_Y_USR}/static/control\"\n  ### 
echo _DRUSH_CLI_CTRL is ${_DRUSH_CLI_CTRL}\nfi\nif [ -z \"${HOME}\" ]; then\n  if [ -d \"/home/${USER}/.tmp\" ]; then\n    HOME=\"/home/${USER}\"\n  elif [ -d \"/data/disk/${_Y_USR}/.tmp\" ]; then\n    HOME=\"/data/disk/${_Y_USR}\"\n  elif [ -d \"/var/${_Y_USR}/.tmp\" ]; then\n    HOME=\"/var/${_Y_USR}\"\n  fi\nfi\n\nif [[ \"${_ARR}\" =~ \" aliases\" ]] || [[ \"${_ARR}\" =~ \" sa\" ]]; then\n  if [[ \"${_ALL}\" =~ \"drush10 \" ]] || [[ \"${_ALL}\" =~ \"drush11 \" ]]; then\n    _ARR=\"sa --format=list | egrep -v \\\"(none|hostmaster|hm|server_|platform_|@none|@self)\\\"\"\n  elif [[ \"${_ALL}\" =~ \"drush \" ]] || [[ \"${_ALL}\" =~ \"drush8 \" ]]; then\n    _ARR=\"sa | egrep -v \\\"(none|hostmaster|hm|server_|platform_|@none|@self)\\\"\"\n  fi\nfi\n\nif [[ \"${_ARR}\" =~ \"-c \" ]]; then\n  _R_M=`echo -n \"${_ARR}\" | grep -o \"\\-c \"`\n  _ARR=`echo ${_ARR} | sed \"s/\\${_R_M}//g\"`\nfi\n\nif [[ \"${HOME}\" =~ (^)\"/data/disk/\" ]] \\\n  && [ -z \"${PHP_FWD}\" ] \\\n  && [[ \"${_ARGS}\" =~ (^)\"php ${HOME}\" ]]; then\n  _OCTO_SYS=\"${USER}\"\n  _OCTO_SYS_ARR=\n  if [[ \"${_ARGS}\" =~ \"tools/drush/drush.php\" ]]; then\n    _R_M=`echo -n ${_ARGS} | grep -o \"php /data/disk/.*/tools/drush/drush.php\"`\n    ### echo _R_M is ${_R_M}\n    _R_M=${_R_M//\\//\\\\\\/}\n    ### echo _R_M is ${_R_M}\n    _CLR=`echo ${_ARGS} | sed \"s/\\${_R_M}//g\"`\n    _OCTO_SYS_ARR=() # the buffer array for filtered parameters\n    for arg in \"${_CLR}\"; do\n      case ${_R_M} in\n        $arg\\ * | *\\ $arg | *\\ $arg\\ *) ;;\n        *) _OCTO_SYS_ARR+=(\"$arg\") ;;\n\t  esac\n    done\n    ### echo _OCTO_SYS_ARR is ${_OCTO_SYS_ARR}\n  fi\nelse\n  _OCTO_SYS=\nfi\n\nif [[ \"${HOME}\" =~ (^)\"/yyydata/disk/\" ]]; then\n  _DRUSH_CLI_CTRL=\n  ### echo _DRUSH_CLI_CTRL has been disabled\nfi\n\nif [ -d \"/home/${USER}/.tmp\" ]; then\n  export TMP=\"/home/${USER}/.tmp\"\n  export TMPDIR=\"/home/${USER}/.tmp\"\n  export TEMP=\"/home/${USER}/.tmp\"\n  if [[ \"${_ARGS}\" =~ \" id \" ]] \\\n    || 
[[ \"${_ARGS}\" =~ (^)\"id \" ]]; then\n    exit 1\n  elif [[ \"${_ARGS}\" =~ (^)\"newrelic\" ]] \\\n    || [[ \"${_ARGS}\" =~ (^)\"nrsysm\" ]]; then\n    exit 1\n  fi\nelif [ -d \"/data/disk/${_Y_USR}/.tmp\" ]; then\n  export TMP=\"/data/disk/${_Y_USR}/.tmp\"\n  export TMPDIR=\"/data/disk/${_Y_USR}/.tmp\"\n  export TEMP=\"/data/disk/${_Y_USR}/.tmp\"\nelif [ -d \"/var/${_Y_USR}/.tmp\" ]; then\n  export TMP=\"/var/${_Y_USR}/.tmp\"\n  export TMPDIR=\"/var/${_Y_USR}/.tmp\"\n  export TEMP=\"/var/${_Y_USR}/.tmp\"\nelse\n  export TMP=\"/tmp\"\n  export TMPDIR=\"/tmp\"\n  export TEMP=\"/tmp\"\nfi\n\nexport HOME=${HOME}\nexport TEMP=${TEMP}\nexport USER=${USER}\n\n### echo HOME is ${HOME}\n### echo TEMP is ${TEMP}\n### echo USER is ${USER}\n#\n### echo _ALL is ${_ALL}\n### echo _ARGS is ${_ARGS}\n### echo _LTD_GID is ${_LTD_GID}\n### echo _LTD_QQQ is ${_LTD_QQQ}\n### echo _LTD_UID is ${_LTD_UID}\n### echo _Y_USR is ${_Y_USR}\n#\n### echo 0 is $0\n### echo 1 is $1\n### echo 2 is $2\n### echo 3 is $3\n### echo 4 is $4\n### echo 5 is $5\n### echo 6 is $6\n### echo 7 is $7\n### echo 8 is $8\n### echo 9 is $9\n\n# Check PHP CLI version defined.\ncheck_php_cli_version() {\n  ### echo CHK start check_php_cli_version\n  if [ \"${HOME}\" = \"/var/aegir\" ] && [ -f \"/var/aegir/drush/drush.php\" ]; then\n    _PHP_CLI=$(grep \"/opt/php\" /var/aegir/drush/drush.php 2>&1)\n  else\n    if [ -f \"/data/disk/${_Y_USR}/tools/drush/drush.php\" ]; then\n      _PHP_CLI=$(grep \"/opt/php\" /data/disk/${_Y_USR}/tools/drush/drush.php 2>&1)\n    elif [ -f \"/data/disk/${_Y_USR}/static/control/cli.info\" ]; then\n      _PHP_CLI=\"php$(tr -d '.\\n' < /data/disk/${_Y_USR}/static/control/cli.info)\"\n    fi\n  fi\n  ### echo CHK 1 _PHP_CLI is ${_PHP_CLI}\n\n  _PHP_V=\"56 70 71 72 73 74 80 81 82 83 84 85\"\n  for e in ${_PHP_V}; do\n    if [[ \"${_PHP_CLI}\" =~ \"php${e}\" ]] && [ -x \"/opt/php${e}/bin/php\" ]; then\n      DRUSH_PHP=\"/opt/php${e}/bin/php\"\n      
PHP_INI=\"/opt/php${e}/lib/php.ini\"\n      PHPRC=\"/opt/php${e}/lib\"\n      if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n        PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n        PHPRC=\"${HOME}/.drush/php${e}\"\n      fi\n    fi\n  done\n  ### echo CHK 2 DRUSH_PHP is ${DRUSH_PHP}\n  ### echo CHK 2 PHP_INI is ${PHP_INI}\n  ### echo CHK 2 PHPRC is ${PHPRC}\n\n  for e in ${_PHP_V}; do\n    if [ -e \"${_DRUSH_CLI_CTRL}/php${e}.info\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n      DRUSH_PHP=\"/opt/php${e}/bin/php\"\n      PHP_INI=\"/opt/php${e}/lib/php.ini\"\n      PHPRC=\"/opt/php${e}/lib\"\n      if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n        PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n        PHPRC=\"${HOME}/.drush/php${e}\"\n      fi\n    fi\n  done\n  ### echo CHK 3 DRUSH_PHP is ${DRUSH_PHP}\n  ### echo CHK 3 PHP_INI is ${PHP_INI}\n  ### echo CHK 3 PHPRC is ${PHPRC}\n\n  if [ ! -z \"${PHP_INI}\" ]; then\n    export DRUSH_PHP;export PHP_INI;export PHPRC;\n  else\n    DRUSH_PHP=\"/usr/bin/php\"\n    export DRUSH_PHP;\n    ### echo CHK 4 DRUSH_PHP is ${DRUSH_PHP}\n  fi\n\n  ### echo CHK fin check_php_cli_version\n}\n\nif [ -n \"${JENKINS_HOME}\" ] \\\n  || [ -n \"${JENKINS_NODE_COOKIE}\" ] \\\n  || [ -n \"${WORKSPACE_TMP}\" ]; then\n  _IS_JENKINS=TRUE\nelse\n  _IS_JENKINS=FALSE\nfi\n\nif [ \"${_IS_JENKINS}\" = \"FALSE\" ]; then\n  check_php_cli_version\nfi\n\nif [ \"${_LTD_GID}\" = \"www-data users\" ] \\\n  || [[ \"${HOME}\" =~ (^)\"/var/aegir\" ]] \\\n  || [[ \"${HOME}\" =~ (^)\"/data/disk/\" ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"lshellg\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"ltd-shell-more\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"lshellg rvm\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"ltd-shell\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"ltd-shell rvm\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ \"rvm ltd-shell\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ (^)\"users www-data\"($) ]] \\\n  || [[ \"${_LTD_GID}\" =~ (^)\"aegir www-data users\"($) ]]; then\n  
if [ \"${1}\" = \"-c\" ]; then\n    _IS_SH_PATH=NO\n    if [ \"$0\" = \"/bin/sh\" ] || [ \"$0\" = \"/usr/bin/sh\" ]; then\n      _IS_SH_PATH=YES\n    else\n      echo\n      echo \"  ERROR: Unauthorized path\"\n      echo\n      exit 1\n    fi\n    if [[ $(whoami) == *.ftp ]] \\\n      || [[ \"${2}\" =~ \"drush\" ]] \\\n      || [[ \"${2}\" =~ \"mysql \" ]]; then\n      _IN_PATH=YES\n      _INTERNAL=YES\n      if [[ \"${_ARGS}\" =~ \"mysql \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush8 \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush10 \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush11 \" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n        _PWD=$(pwd 2>&1)\n        _DEST_DRUSH=\"/opt/tools/drush/8/drush/drush.php\"\n        if [[ \"${_ARGS}\" =~ \"vendor/bin/drush \" ]]; then\n          _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n        fi\n        if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush \" ]]; then\n          _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n        fi\n        if [[ \"${_ARGS}\" =~ \"/vendor/bin/drush \" ]]; then\n          _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n        fi\n        if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n          # Detect if PWD ends with /web, /html, or /docroot\n          if [[ \"$PWD\" =~ /(web|html|docroot)$ ]]; then\n            _DEST_DRUSH=\"../vendor/drush/drush/drush.php\"\n          else\n            _DEST_DRUSH=\"vendor/drush/drush/drush.php\"\n          fi\n          ### echo INF 0 _DEST_DRUSH is ${_DEST_DRUSH}\n        fi\n        if [[ \"${_ARGS}\" =~ \"drush11 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush10 \" ]]; then\n          if [[ \"${_ARGS}\" =~ \"drush11 \" ]]; then\n            _DEST_DRUSH=\"/usr/bin/drush11\"\n          elif [[ \"${_ARGS}\" =~ \"drush10 \" ]]; then\n            _DEST_DRUSH=\"/usr/bin/drush10\"\n          fi\n          if [[ ! 
\"${HOME}\" =~ (^)\"/data/disk/\" ]]; then\n            _PHP_V=\"74 80 81 82 83 84 85\"\n            for e in ${_PHP_V}; do\n              if [ -e \"${_DRUSH_CLI_CTRL}/php${e}.info\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n                DRUSH_PHP=\"/opt/php${e}/bin/php\"\n                PHP_INI=\"/opt/php${e}/lib/php.ini\"\n                PHPRC=\"/opt/php${e}/lib\"\n                if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n                  PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n                  PHPRC=\"${HOME}/.drush/php${e}\"\n                fi\n              fi\n            done\n          fi\n          if [ ! -z \"${DRUSH_PHP}\" ] && [ ! -z \"${PHP_INI}\" ]; then\n            export DRUSH_PHP;export PHP_INI;export PHPRC;\n            ### echo INF 3 DRUSH_PHP is ${DRUSH_PHP}\n            ### echo INF 3 PHP_INI is ${PHP_INI}\n            ### echo INF 3 PHPRC is ${PHPRC}\n            ### echo INF 3 _DEST_DRUSH is ${_DEST_DRUSH}\n          else\n            echo\n            echo \"  Drush 11 and Drush 10 require at least PHP 7.4\"\n            echo \"  Please create empty control file:\"\n            echo\n            echo \"  ${_DRUSH_CLI_CTRL}/php74.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php81.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php82.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php83.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php84.info\"\n            echo \"  or\"\n            echo \"  ${_DRUSH_CLI_CTRL}/php85.info\"\n            echo\n            echo \"  NOTE: If you create more than one,\"\n            echo \"        the highest version wins.\"\n            echo \"  Bye\"\n            echo\n            exit 0\n          fi\n        elif [[ \"${_ARGS}\" =~ \"drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush8 \" ]]; then\n          _DEST_DRUSH=\"/opt/tools/drush/8/drush/drush.php\"\n          if 
[[ \"${_ARGS}\" =~ \"vendor/bin/drush \" ]]; then\n            _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n          fi\n          if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush \" ]]; then\n            _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n          fi\n          if [[ \"${_ARGS}\" =~ \"/vendor/bin/drush \" ]]; then\n            _DEST_DRUSH=\"${_R_M}/vendor/drush/drush/drush.php\"\n          fi\n          if [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php \" ]]; then\n            # Detect if PWD ends with /web, /html, or /docroot\n            if [[ \"$PWD\" =~ /(web|html|docroot)$ ]]; then\n              _DEST_DRUSH=\"../vendor/drush/drush/drush.php\"\n            else\n              _DEST_DRUSH=\"vendor/drush/drush/drush.php\"\n            fi\n            ### echo INF 1 _DEST_DRUSH is ${_DEST_DRUSH}\n          fi\n          if [[ ! \"${HOME}\" =~ (^)\"/data/disk/\" ]] && [ \"${_IS_JENKINS}\" = \"FALSE\" ]; then\n            _PHP_V=\"56 70 71 72 73 74 80 81 82 83 84 85\"\n            for e in ${_PHP_V}; do\n              if [ -e \"${_DRUSH_CLI_CTRL}/php${e}.info\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n                DRUSH_PHP=\"/opt/php${e}/bin/php\"\n                PHP_INI=\"/opt/php${e}/lib/php.ini\"\n                PHPRC=\"/opt/php${e}/lib\"\n                if [ -f \"${HOME}/.drush/php${e}/php.ini\" ]; then\n                  PHP_INI=\"${HOME}/.drush/php${e}/php.ini\"\n                  PHPRC=\"${HOME}/.drush/php${e}\"\n                fi\n              fi\n            done\n          fi\n          if [ ! -z \"${DRUSH_PHP}\" ] && [ ! 
-z \"${PHP_INI}\" ] && [ \"${_IS_JENKINS}\" = \"FALSE\" ]; then\n            export DRUSH_PHP;export PHP_INI;export PHPRC;\n            ### echo INF 4 DRUSH_PHP is ${DRUSH_PHP}\n            ### echo INF 4 PHP_INI is ${PHP_INI}\n            ### echo INF 4 PHPRC is ${PHPRC}\n            ### echo INF 4 _DEST_DRUSH is ${_DEST_DRUSH}\n          fi\n        fi\n        if [[ \"${_ARGS}\" =~ \"drush make\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush8 make\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush cc drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush8 cc drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush10 cr drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush11 cr drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php cr drush\" ]]; then\n          if [[ \"${_PWD}\" =~ \"/static\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush cc drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 cc drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 cr drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 cr drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php cr drush\" ]]; then\n            _CORRECT=YES\n            _CORRECT_PWD_R=$(pwd 2>&1)\n            _CORRECT_ARGS_R=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_R is ${_CORRECT_PWD_R}\n            ### echo _CORRECT_ARGS_R is ${_CORRECT_ARGS_R}\n          else\n            if [[ \"${_ARGS}\" =~ \"make-generate\" ]] \\\n              && [ -f \"${_PWD}/settings.php\" ]; then\n              _CORRECT=YES\n              _CORRECT_PWD_S=$(pwd 2>&1)\n              _CORRECT_ARGS_S=\"${_ARGS}\"\n              ### echo _CORRECT_PWD_S is ${_CORRECT_PWD_S}\n              ### echo _CORRECT_ARGS_S is ${_CORRECT_ARGS_S}\n            else\n              echo\n              echo \" This drush command cannot be run in ${_PWD}\"\n              if [[ \"${2}\" =~ \"make-generate\" ]]; then\n                echo \" Please cd to a valid sites/foo.com directory 
first\"\n                echo \" or use a valid @alias, like: drush @foo.com status\"\n                echo \" Hint: Use 'drush aliases' to display all Drush 8 aliases\"\n                echo \" Hint: Use 'drush11 aliases' to display all Drush 10+ aliases\"\n              else\n                echo \" Please cd ~/static first\"\n              fi\n              echo\n              exit 0\n            fi\n          fi\n        else\n          if [[ \"${_ARGS}\" =~ \"drush @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php -vvv @\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush -vvv @\" ]]; then\n            if [[ \"${2}\" =~ \"restore\"($) ]] \\\n              || [[ \"${2}\" =~ \"arr\"($) ]] \\\n              || [[ \"${2}\" =~ \"cli\"($) ]] \\\n              || [[ \"${2}\" =~ \"conf\"($) ]] \\\n              || [[ \"${2}\" =~ \"config\"($) ]] \\\n              || [[ \"${2}\" =~ \"execute\"($) ]] \\\n              || [[ \"${2}\" =~ \"core-quick-drupal\"($) ]] \\\n              || [[ \"${2}\" =~ \"exec\"($) ]] \\\n              || [[ \"${2}\" =~ \"xstatus\"($) ]] \\\n              || [[ \"${2}\" =~ \"redis-flush\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"qd\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"rs\"($) ]] \\\n              || [[ \"${2}\" =~ \"runserver\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"scr\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"sha\"($) ]] \\\n              || 
[[ \"${2}\" =~ \"shell-alias\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"si\"($) ]] \\\n              || [[ \"${2}\" =~ \"sql-create\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"ssh\"($) ]] \\\n              || [[ \"${2}\" =~ (^)\"sup\"($) ]]; then\n              echo\n              echo \" This drush command is not available (A)\"\n              echo\n              exit 0\n            else\n              _CORRECT=YES\n              ### LSHELL ==> ALL vdrush commands WITH @alias START here\n              ### LSHELL ==> ALL drush8 commands WITH @alias START here\n              _CORRECT_PWD_T=$(pwd 2>&1)\n              _CORRECT_ARGS_T=\"${_ARGS}\"\n              ### echo _CORRECT_PWD_T is ${_CORRECT_PWD_T}\n              ### echo _CORRECT_ARGS_T is ${_CORRECT_ARGS_T}\n            fi\n            ### LSHELL ==> BASIC vdrush commands WITH @alias END here\n            ### LSHELL ==> ALL drush8 commands WITH @alias END here\n            ### LSHELL ==> RESPAWNED vdrush commands WITH @alias like updatedb CONTINUE here\n            _CORRECT_PWD_U=$(pwd 2>&1)\n            _CORRECT_ARGS_U=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_U is ${_CORRECT_PWD_U}\n            ### echo _CORRECT_ARGS_U is ${_CORRECT_ARGS_U}\n          elif [[ \"${_ARGS}\" =~ \"cc drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"cr drush\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush aliases\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush dl\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 aliases\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush10 sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 aliases\" ]] \\\n          
  || [[ \"${_ARGS}\" =~ \"drush11 help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush11 sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 aliases\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 dl\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 pm-download\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush8 sa\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"drush pm-download\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/bin/drush help\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"/data/disk/\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php --version\" ]] \\\n            || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php help\" ]]; then\n            _CORRECT=YES\n            ### LSHELL ==> RESPAWNED vdrush commands WITH @alias like updatedb END here\n            ### LSHELL ==> Commands like drush11 aliases START and END here\n            _CORRECT_PWD_V=$(pwd 2>&1)\n            _CORRECT_ARGS_V=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_V is ${_CORRECT_PWD_V}\n            ### echo _CORRECT_ARGS_V is ${_CORRECT_ARGS_V}\n          else\n            ### LSHELL ==> ALL drush8 commands WITHOUT @alias START here\n            _CORRECT_PWD_X=$(pwd 2>&1)\n            _CORRECT_ARGS_X=\"${_ARGS}\"\n            ### echo _CORRECT_PWD_X is ${_CORRECT_PWD_X}\n            ### echo _CORRECT_ARGS_X is ${_CORRECT_ARGS_X}\n            if [ -f \"${_PWD}/settings.php\" ]; then\n              if [[ \"${_ARGS}\" =~ \"drush \" ]] \\\n                || [[ \"${_ARGS}\" =~ \"drush8 \" ]] \\\n                || [[ \"${_ARGS}\" =~ \"drush10 \" ]] \\\n                || [[ \"${_ARGS}\" =~ \"drush11 \" ]]; then\n                _CORRECT=YES\n                ### LSHELL ==> ALL drush8 commands WITHOUT @alias END here\n                _CORRECT_PWD_Y=$(pwd 2>&1)\n  
              _CORRECT_ARGS_Y=\"${_ARGS}\"\n                ### echo _CORRECT_PWD_Y is ${_CORRECT_PWD_Y}\n                ### echo _CORRECT_ARGS_Y is ${_CORRECT_ARGS_Y}\n              fi\n            fi\n          fi\n        fi\n      fi\n    else\n      if [[ \"${_ARGS}\" =~ \"drush @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush8 @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush10 @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush11 @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/bin/drush @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush8 -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush10 -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"drush11 -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php -vvv @\" ]] \\\n        || [[ \"${_ARGS}\" =~ \"vendor/bin/drush -vvv @\" ]]; then\n        if [[ \"${2}\" =~ \"restore\"($) ]] \\\n          || [[ \"${2}\" =~ \"arr\"($) ]] \\\n          || [[ \"${2}\" =~ \"cli\"($) ]] \\\n          || [[ \"${2}\" =~ \"conf\"($) ]] \\\n          || [[ \"${2}\" =~ \"config\"($) ]] \\\n          || [[ \"${2}\" =~ \"execute\"($) ]] \\\n          || [[ \"${2}\" =~ \"core-quick-drupal\"($) ]] \\\n          || [[ \"${2}\" =~ \"exec\"($) ]] \\\n          || [[ \"${2}\" =~ \"xstatus\"($) ]] \\\n          || [[ \"${2}\" =~ \"redis-flush\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"qd\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"rs\"($) ]] \\\n          || [[ \"${2}\" =~ \"runserver\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"scr\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"sha\"($) ]] \\\n          || [[ \"${2}\" =~ \"shell-alias\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"si\"($) ]] \\\n          || [[ \"${2}\" =~ \"sql-create\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"ssh\"($) ]] \\\n          || [[ \"${2}\" =~ (^)\"sup\"($) ]]; then\n          echo\n          echo \" This drush command 
is not available (B)\"\n          echo\n          exit 0\n        fi\n        _DEBUG_PWD_X=$(pwd 2>&1)\n        _DEBUG_ARGS_X=\"${_ARGS}\"\n        ### echo _DEBUG_PWD_X is ${_DEBUG_PWD_X}\n        ### echo _DEBUG_ARGS_X is ${_DEBUG_ARGS_X}\n      fi\n      _RAW_IN_PATH=${2//[^a-z/]/}\n      if [[ \"${2}\" =~ (^)\"/usr/\" ]] \\\n        || [[ \"${2}\" =~ (^)\"/bin/\" ]] \\\n        || [[ \"${2}\" =~ (^)\"/opt/\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"(/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"/var/${_Y_USR}/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"(/var/${_Y_USR}/drush/drush.php\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltopdf\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltoimage\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/local/bin/wkhtmltopdf\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/local/bin/wkhtmltoimage\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltopdf-0.12.4\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/wkhtmltoimage-0.12.4\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/local/bin/composer\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/composer\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/unzip\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/convert\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${_RAW_IN_PATH}\" =~ \"/usr/bin/gs\" ]]; then\n        _IN_PATH=YES\n      elif [[ \"${2}\" =~ (^)\"/home/\" ]] \\\n        || [[ \"${2}\" =~ (^)\"/data/\" ]] \\\n 
       || [[ \"${2}\" =~ (^)\"/tmp/\" ]]; then\n        if [ -e \"${2}\" ]; then\n          _IN_PATH=NO\n        fi\n      else\n        _WHICH_TEST=\"$(which ${2})\"\n        if [[ \"${_WHICH_TEST}\" =~ (^)\"/usr/\" ]] \\\n          || [[ \"${_WHICH_TEST}\" =~ (^)\"/bin/\" ]] \\\n          || [[ \"${_WHICH_TEST}\" =~ (^)\"/opt/\" ]]; then\n          _IN_PATH=YES\n        else\n          _IN_PATH=NO\n        fi\n      fi\n    fi\n  else\n    if [[ \"${_ARGS}\" =~ \"drush @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush8 @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush10 @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush11 @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/bin/drush @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush8 -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush10 -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"drush11 -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/drush/drush/drush.php -vvv @\" ]] \\\n      || [[ \"${_ARGS}\" =~ \"vendor/bin/drush -vvv @\" ]]; then\n      if [[ \"${2}\" =~ \"restore\"($) ]] \\\n        || [[ \"${2}\" =~ \"arr\"($) ]] \\\n        || [[ \"${2}\" =~ \"cli\"($) ]] \\\n        || [[ \"${2}\" =~ \"conf\"($) ]] \\\n        || [[ \"${2}\" =~ \"config\"($) ]] \\\n        || [[ \"${2}\" =~ \"execute\"($) ]] \\\n        || [[ \"${2}\" =~ \"core-quick-drupal\"($) ]] \\\n        || [[ \"${2}\" =~ \"exec\"($) ]] \\\n        || [[ \"${2}\" =~ \"xstatus\"($) ]] \\\n        || [[ \"${2}\" =~ \"redis-flush\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"qd\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"rs\"($) ]] \\\n        || [[ \"${2}\" =~ \"runserver\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"scr\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"sha\"($) ]] \\\n        || [[ \"${2}\" =~ \"shell-alias\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"si\"($) ]] \\\n        || [[ \"${2}\" =~ \"sql-create\"($) ]] \\\n        || [[ \"${2}\" 
=~ (^)\"ssh\"($) ]] \\\n        || [[ \"${2}\" =~ (^)\"sup\"($) ]]; then\n        echo\n        echo \" This drush command is not available (C)\"\n        echo\n        exit 0\n      fi\n      _DEBUG_PWD_Y=$(pwd 2>&1)\n      _DEBUG_ARGS_Y=\"${_ARGS}\"\n      ### echo _DEBUG_PWD_Y is ${_DEBUG_PWD_Y}\n      ### echo _DEBUG_ARGS_Y is ${_DEBUG_ARGS_Y}\n    fi\n    if [[ \"${1}\" =~ (^)\"/usr/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/bin/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/opt/\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"(/data/disk/${_Y_USR}/tools/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"/var/${_Y_USR}/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"(/var/${_Y_USR}/drush/drush.php\" ]]; then\n      _IN_PATH=YES\n    elif [[ \"${1}\" =~ (^)\"/home/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/data/\" ]] \\\n      || [[ \"${1}\" =~ (^)\"/tmp/\" ]]; then\n      if [ -e \"${1}\" ]; then\n        _IN_PATH=NO\n      fi\n    else\n      _WHICH_TEST=\"$(which ${1})\"\n      if [[ \"${_WHICH_TEST}\" =~ (^)\"/usr/\" ]] \\\n        || [[ \"${_WHICH_TEST}\" =~ (^)\"/bin/\" ]] \\\n        || [[ \"${_WHICH_TEST}\" =~ (^)\"/opt/\" ]]; then\n        _IN_PATH=YES\n      else\n        _IN_PATH=NO\n      fi\n    fi\n  fi\n  if [[ \"${_LTD_GID}\" =~ \"lshellg\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"ltd-shell-more\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"lshellg rvm\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"ltd-shell\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"rvm ltd-shell\"($) ]] \\\n    || [[ \"${_LTD_GID}\" =~ \"ltd-shell rvm\"($) ]]; then\n    if [[ \"${_ARGS}\" =~ \"*\" ]]; then\n      if [[ $(whoami) == *.ftp ]]; then\n        _SILENT=YES\n        ####\n        #### The [[ $(whoami) == *.ftp ]] ### OK for Drush and Drupal\n        #### The [[ \"${_ARGS}\" =~ \"set -m; \" ]] ### Legacy defunct 
method\n        ####\n      else\n        if [[ \"${_ARGS}\" =~ \"__build__\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"_tmp_\" ]] \\\n          || [[ \"${_ARGS}\" =~ \".tmp\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"avconv\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"bzr \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"chdir \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"compass \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"composer \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"convert \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"curl \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"drush\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"ffmpeg \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"flvtool \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"git \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"is_\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"java\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"logger \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php56 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php74 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php81 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php82 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php83 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php84 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"php85 \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"unzip \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"rename \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"rrdtool \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"rsync \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"sass \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"scp \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"scss \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"sendmail \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"ssh \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"svn \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"tar \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"wget \" ]] \\\n          || [[ \"${_ARGS}\" =~ \"wkhtmltoimage\" ]] \\\n          || [[ \"${_ARGS}\" =~ \"wkhtmltopdf\" ]]; then\n          
_SILENT=YES\n        else\n          echo\n        fi\n      fi\n    fi\n  fi\n  if [ \"${_IN_PATH}\" = \"YES\" ]; then\n    if [ -x \"/usr/local/bin/ruby\" ] && [ -x \"/usr/local/bin/gem\" ]; then\n      if [[ $(whoami) == *.ftp ]] || [ ! -z \"${SSH_CLIENT}\" ]; then\n        _RUBY_ALLOW=YES\n      fi\n    fi\n    if [ \"${_RUBY_ALLOW}\" = \"YES\" ]; then\n      if [ -d \"/opt/user/gems/${USER}\" ]; then\n        export GEM_HOME=\"/opt/user/gems/${USER}\"\n        export GEM_PATH=\"/opt/user/gems/${USER}\"\n        export PATH=\"/opt/user/gems/${USER}/bin:$PATH\"\n      fi\n    fi\n    if [ -x \"/usr/bin/npm\" ] && [ -e \"/home/${USER}/.npmrc\" ]; then\n      if [[ $(whoami) == *.ftp ]] || [ ! -z \"${SSH_CLIENT}\" ]; then\n        _NPM_ALLOW=YES\n      fi\n    fi\n    if [ \"${_NPM_ALLOW}\" = \"YES\" ]; then\n      if [ -d \"/opt/user/npm/${USER}\" ]; then\n        export NPM_PACKAGES=\"/opt/user/npm/${USER}/.npm-packages\"\n        export PATH=\"${NPM_PACKAGES}/bin:${PATH}\"\n        export NODE_PATH=\"${NPM_PACKAGES}/lib/node_modules:${NODE_PATH}\"\n      fi\n    fi\n    if [ \"$0\" = \"/bin/sh\" ] \\\n      || [ \"$0\" = \"/usr/bin/sh\" ] \\\n      || [ \"$0\" = \"/opt/local/bin/websh\" ] \\\n      || [ \"$0\" = \"/bin/websh\" ]; then\n      if [ -x \"/bin/dash\" ]; then\n        if [ ! -z \"${_ARR}\" ] && [ -z \"${_OCTO_SYS_ARR}\" ]; then\n          _DEST_DRUSH=${_DEST_DRUSH//\\\\/}\n          ### echo EXD 1 DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXD 1 PHP_INI is ${PHP_INI}\n          ### echo EXD 1 _DEST_DRUSH is ${_DEST_DRUSH}\n          ### echo EXD 1 _ARR is ${_ARR}\n          ### echo EXD 1 ${DRUSH_PHP} ${_DEST_DRUSH} ${_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} ${_DEST_DRUSH} ${_ARR}\"\n          exit 0\n        elif [ ! -z \"${PHP_FWD}\" ] && [ ! 
-z \"${_OCTO_SYS_ARR}\" ]; then\n          _DEST_DRUSH=${_DEST_DRUSH//\\\\/}\n          ### echo EXD 3-PHP_FWD DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXD 3-PHP_FWD PHP_INI is ${PHP_INI}\n          ### echo EXD 3-PHP_FWD _DEST_DRUSH is ${_DEST_DRUSH}\n          ### echo EXD 3-PHP_FWD _OCTO_SYS_ARR is ${_OCTO_SYS_ARR}\n          ### echo EXD 3-PHP_FWD ${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\"\n          exit 0\n        elif [ -z \"${PHP_FWD}\" ] && [ ! -z \"${_OCTO_SYS_ARR}\" ]; then\n          _DEST_DRUSH=${_DEST_DRUSH//\\\\/}\n          ### echo EXD 3-NO-PHP_FWD DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXD 3-NO-PHP_FWD PHP_INI is ${PHP_INI}\n          ### echo EXD 3-NO-PHP_FWD _DEST_DRUSH is ${_DEST_DRUSH}\n          ### echo EXD 3-NO-PHP_FWD _OCTO_SYS_ARR is ${_OCTO_SYS_ARR}\n          ### echo EXD 3-NO-PHP_FWD ${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} ${_DEST_DRUSH} ${_OCTO_SYS_ARR}\"\n          exit 0\n        elif [ ! 
-z \"${_C_ARR}\" ]; then\n          ### echo EXC 1 DRUSH_PHP is ${DRUSH_PHP}\n          ### echo EXC 1 _C_ARR is ${_C_ARR}\n          ### echo EXC 1 _F_ARR is \"$@\"\n          ### echo EXC 1 ${DRUSH_PHP} /usr/local/bin/composer ${_C_ARR}\n          exec /bin/dash -c \"${DRUSH_PHP} /usr/local/bin/composer ${_C_ARR}\"\n          exit 0\n        else\n          ### echo EXH 1 _forward_to_dash \"$@\"\n          _forward_to_dash \"$@\"\n          exit 0\n        fi\n      else\n        ### echo EXH 3 _F_ARR is \"$@\"\n        ### echo EXH 3 /bin/bash \"$@\"\n        exec /bin/bash \"$@\"\n        exit 0\n      fi\n    else\n      ### echo EXO 1 _F_ARR is \"$@\"\n      ### echo EXO 1 $0 \"$@\"\n      exec $0 \"$@\"\n      exit 0\n    fi\n  else\n    exit 1\n  fi\nelse\n  if [ \"${USER}\" = \"root\" ]; then\n    if [[ \"${1}\" =~ \"drush\" ]] \\\n      || [[ \"${2}\" =~ \"drush\" ]]; then\n      if [[ \"${2}\" =~ \"uli\" ]] \\\n        || [[ \"${2}\" =~ \"vget\" ]] \\\n        || [[ \"${2}\" =~ \"config-list\" ]] \\\n        || [[ \"${2}\" =~ \"config-edit\" ]] \\\n        || [[ \"${2}\" =~ \"config-get\" ]] \\\n        || [[ \"${2}\" =~ \"config-set\" ]] \\\n        || [[ \"${2}\" =~ \"--version\" ]] \\\n        || [[ \"${2}\" =~ \"vset\" ]] \\\n        || [[ \"${2}\" =~ \"status\" ]]; then\n        _ALLOW=YES\n      else\n        echo\n        echo \" Drush should never be run as root!\"\n        echo \" Please su to some non-root account\"\n        echo\n        exit 0\n      fi\n    fi\n  fi\n  if [ \"$0\" = \"/bin/sh\" ] \\\n    || [ \"$0\" = \"/usr/bin/sh\" ] \\\n    || [ \"$0\" = \"/opt/local/bin/websh\" ] \\\n    || [ \"$0\" = \"/bin/websh\" ]; then\n    if [ -x \"/bin/dash\" ]; then\n      ### echo EXH 4 _F_ARR is \"$@\"\n      ### echo EXH 4 /bin/dash \"$@\"\n      exec /bin/dash \"$@\"\n      exit 0\n    else\n      ### echo EXH 6 _F_ARR is \"$@\"\n      ### echo EXH 6 /bin/bash \"$@\"\n      exec /bin/bash \"$@\"\n      exit 0\n    fi\n  else\n    ### echo 
EXO 2 _F_ARR is \"$@\"\n    ### echo EXO 2 $0 \"$@\"\n    exec $0 \"$@\"\n    exit 0\n  fi\n  exit 0\nfi\n"
  },
  {
    "path": "aegir/tools/bin/xboa",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_WEBG=www-data\n_THIS_DB_PORT=3306\n_ieni=\"--ignore-errors\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n###-------------SYSTEM-----------------###\n\n_send_notice_migration_complete() {\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} -s \"IMPORTANT: Completed migration to ${_tgt} for ${_DOMAIN}\" ${_CLIENT_EMAIL}\nHello,\n\nThis is a notification to inform you that the migration of your Ægir hosting system has been successfully completed. Below are important details and instructions regarding the changes.\n\nYOUR NEW SYSTEM IP ADDRESS IS: [${_tgt}]\n\nImportant: Update DNS for All Your Sites if applicable (see below and contact your Ægir host for clarification if needed)\n\nPlease update DNS for all your sites as soon as possible so traffic goes directly to the new IP address (or to the new pseudo-HA master/mirror pair, where applicable). After migration, the old server may be kept online as an HTTP proxy, depending on the host policy communicated separately (this may be unlimited or strictly limited to a specific number of days). In pseudo-HA master/mirror setups, keeping the existing IP as a proxy can be recommended (if the hosts allow it) to smooth the transition, unless you prefer to disable the proxy sooner for maximum speed and fewer network hops to the new location.\n\n---\n\nImportant Changes to Note:\n\n1. New Ægir Control Panel URL\n   Your new welcome email will include details about the updated Ægir URL, including SQL Adminer access.\n\n2. 
Access Credentials\n   Ægir Control Panel access credentials for all current admins remain the same. Updated SSH/SFTP credentials (with a new password) are in your welcome email.\n\n3. SSH Key Transfer\n   Only the main SSH account keys have been transferred.\n   - SSH sub-accounts (Clients) will need to be recreated, so their old passwords and SSH keys will no longer work.\n\n4. Main SSH Account Directory\n   The contents of your main SSH account’s home directory have NOT been migrated. Please download any necessary files from the old server.\n\n5. Ægir Backend Key Update\n   The public SSH key (USER.id_ed25519.pub) for the Ægir backend has changed. This is important if you use integration with a remote Git repository.\n\n6. SSL Certificates\n   SSL/Let’s Encrypt certificates have been migrated automatically. Renewals will be handled on the new server, so updating DNS promptly is advised.\n\n7. Paused Cron Tasks\n   All scheduled tasks were paused during migration and will resume within 1-3 hours.\n\n8. Old Ægir Control Panel\n   The old control panel has been disabled to secure proxy configurations.\n\n---\n\nThank you for your cooperation during this migration. 
If you have any questions or need assistance, please feel free to reach out.\n\nKind regards,\nBOA Servers Sysadmins\n\nEOF\n  fi\n  echo \"INFO: Migration complete notice sent to ${_CLIENT_EMAIL} [${_THIS_HM_USER}]: OK\"\n}\n\n_send_notice_migration_start() {\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} -s \"STATUS: Running migration to ${_tgt} for ${_DOMAIN}\" ${_CLIENT_EMAIL}\nHello,\n\nThis notification is to inform you that the migration of your Ægir hosting system to the new platform has started.\n\nYOUR NEW SYSTEM IP ADDRESS IS: [${_tgt}]\n\nYou will receive another message once the migration is complete, along with a separate New Welcome Email containing important details about your updated system.\n\nPlease do not run any tasks in your old Ægir Control Panel during this migration, as it may interfere with special proxy configurations set up to ensure service continuity.\n\nImportant: Update DNS for All Your Sites if applicable (see below and contact your Ægir host for clarification if needed)\n\nOnce migrated, update DNS for all your sites as soon as possible so traffic goes directly to the new IP address (or to the new pseudo-HA master/mirror pair, where applicable). After migration, the old server may be kept online as an HTTP proxy, depending on the host policy communicated separately (this may be unlimited or strictly limited to a specific number of days). In pseudo-HA master/mirror setups, keeping the existing IP as a proxy can be recommended (if the hosts allow it) to smooth the transition, unless you prefer to disable the proxy sooner for maximum speed and fewer network hops to the new location.\n\n---\n\nImportant Changes to Note:\n\n1. New Ægir Control Panel URL\n   Your New Welcome Email will include details about the updated Ægir URL, including SQL Adminer access.\n\n2. 
Access Credentials\n   Ægir Control Panel credentials for all current admins will remain the same. Updated SSH/SFTP credentials (with a new password) will be provided in your welcome email.\n\n3. SSH Key Transfer\n   Only the main SSH account keys will be transferred.\n   - SSH sub-accounts (Clients) will need to be recreated, so their old passwords and SSH keys will no longer work.\n\n4. Main SSH Account Directory\n   The contents of your main SSH account’s home directory will NOT be migrated. Please download any necessary files from the old server.\n\n5. Ægir Backend Key Update\n   The public SSH key (USER.id_ed25519.pub) for the Ægir backend will change. This is important if you use integration with a remote Git repository.\n\n6. SSL Certificates\n   SSL/Let’s Encrypt certificates will be migrated automatically. Renewals will be handled on the new server, so updating DNS promptly is advised.\n\n7. Paused Cron Tasks\n   All scheduled tasks will be paused during migration and for an additional 1-3 hours afterward.\n\n8. Old Ægir Control Panel\n   The old control panel will be disabled to secure proxy configurations.\n\n---\n\nThank you for your cooperation during this migration. If you have any questions or need assistance, please feel free to reach out.\n\nKind regards,\nBOA Servers Sysadmins\n\nEOF\n  fi\n  echo \"INFO: Migration start notice sent to ${_CLIENT_EMAIL} [${_THIS_HM_USER}]: OK\"\n}\n\n_enable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! 
-z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ \"$1\" != \"${_THIS_HM_USER}.ftp\" ]; then\n      chattr +i /home/$1/\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr +i /home/$1/platforms/\n        chattr +i /home/$1/platforms/* &> /dev/null\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr +i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr +i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr +i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr +i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n_disable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ \"$1\" != \"${_THIS_HM_USER}.ftp\" ]; then\n      if [ -d \"/home/$1/\" ]; then\n        chattr -i /home/$1/\n      fi\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr -i /home/$1/platforms/\n        chattr -i /home/$1/platforms/* &> /dev/null\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr -i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr -i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr -i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr -i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n_run_drush8_hmr_cmd() {\n  su -s /bin/bash - ${_THIS_HM_USER} -c \"drush8 @hostmaster $1\"\n  wait\n}\n\n_run_drush8_nosilent_cmd() {\n  su -s /bin/bash - ${_THIS_HM_USER} -c \"drush8 @${_Dom} $1\"\n  wait\n}\n\n_migrate_sites() {\n  for _Site in `find ${_USR}/config/server_master/nginx/vhost.d -maxdepth 1 -mindepth 1 -type f | sort`\n  do\n    _Dom=`echo ${_Site} | cut -d'/' -f9 | awk '{ print $1}'`\n    _Vht=\"${_USR}/config/server_master/nginx/vhost.d/${_Dom}\"\n    
_Vhd=\"${_USR}/config/server_master/nginx/vhost.d/.${_Dom}\"\n    _Vhs=\"${_USR}/config/server_master/nginx/vhost.d/https.${_Dom}\"\n    if [ -e \"${_Vht}\" ] && [ ! -z \"${_oct}\" ]; then\n      chown ${_oct}:users ${_Vht}\n      chmod 600 ${_Vht}\n    fi\n    if [ -e \"${_Vhd}\" ] && [ ! -z \"${_oct}\" ]; then\n      chown ${_oct}:users ${_Vhd}\n      chmod 600 ${_Vhd}\n    fi\n    if [ -e \"${_USR}/.drush/${_Dom}.alias.drushrc.php\" ]; then\n      echo Dom is ${_Dom}\n      _Dir=`cat ${_USR}/.drush/${_Dom}.alias.drushrc.php | grep \"site_path'\" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`\n      _Plr=`cat ${_USR}/.drush/${_Dom}.alias.drushrc.php | grep \"root'\" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`\n      if [ \"${_cmd}\" = \"import\" ] || [ \"${_cmd}\" = \"export\" ]; then\n        _DBN=`cat ${_Dir}/drushrc.php | grep \"options\\['db_name'\\] = \" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,';]//g\"`\n        echo \"site ${_Dom} raw _DBN is ${_DBN}\"\n        _DBN=${_DBN//[^a-zA-Z0-9_]/}\n        echo \"site ${_Dom} clean _DBN is ${_DBN}\"\n      fi\n      if [ \"${_cmd}\" = \"import\" ]; then\n        _DBP=`cat ${_Dir}/drushrc.php | grep \"options\\['db_passwd'\\] = \" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,';]//g\"`\n        echo \"site ${_Dom} raw _DBP is ${_DBP}\"\n        _DBP=${_DBP//[^a-zA-Z0-9_]/}\n        echo \"site ${_Dom} clean _DBP is ${_DBP}\"\n        if [ -e \"${_Vhd}\" ]; then\n          mv -f ${_Vhd} ${_Vht}\n        fi\n        if [ ! 
-z \"${_dst}\" ]; then\n          _TGT_HM_USER=\"${_dst}\"\n        else\n          _TGT_HM_USER=\"${_oct}\"\n        fi\n        chown ${_TGT_HM_USER}:users ${_Vht}\n        chmod 600 ${_Vht}\n        if [ -e \"${_Dir}/drushrc.php\" ]; then\n          sed -i \"s/\\$options\\['db_host'\\] = '.*/\\$options[\\'db_host\\'] = \\'localhost\\';/g\" ${_Dir}/drushrc.php\n          wait\n        fi\n        if [ -e \"${_Vht}\" ]; then\n          sed -i \"s/.*fastcgi_param.*db_host.*;/  fastcgi_param db_host   localhost;/g\" ${_Vht}\n          wait\n        fi\n        echo \"INFO: site ${_Dom} db_host on ${_TGT_HM_USER} fixed\"\n        if [ ! -z \"${_DBN}\" ] && [ ! -z \"${_DBP}\" ]; then\n          if [ ! -d \"/var/lib/mysql/${_DBN}\" ]; then\n            if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n              _SQL_CONNECT=127.0.0.1\n              _THIS_DB_PORT=6033\n              mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges &> /dev/null\n            else\n              mysqladmin -u root flush-privileges &> /dev/null\n            fi\n            [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL3 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n            if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n              _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n              mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_DBN}';\nDELETE FROM mysql_query_rules WHERE username='${_DBN}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_DBN}','${_DBP}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_DBN}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_DBN}',11,1);\nLOAD MYSQL QUERY 
RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n            fi\n            mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp mysql<<EOFMYSQL\nCREATE DATABASE ${_DBN} CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;\nCREATE USER IF NOT EXISTS '${_DBN}'@'localhost';\nCREATE USER IF NOT EXISTS '${_DBN}'@'%';\nGRANT ALL ON ${_DBN}.* TO '${_DBN}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_DBN}.* TO '${_DBN}'@'%' WITH GRANT OPTION;\nALTER USER '${_DBN}'@'localhost' IDENTIFIED BY '${_DBP}';\nALTER USER '${_DBN}'@'%' IDENTIFIED BY '${_DBP}';\nEOFMYSQL\n            if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n              _SQL_CONNECT=127.0.0.1\n              _THIS_DB_PORT=6033\n              mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges &> /dev/null\n            else\n              mysqladmin -u root flush-privileges &> /dev/null\n            fi\n            echo \"INFO: site ${_Dom} db setup on ${_oct} complete\"\n          else\n            echo \"ALRT: site ${_Dom} db ${_DBN} already exists!\"\n          fi\n          if [ -e \"${_USR}/src/${_DBN}\" ]; then\n            if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n              _SQL_CONNECT=127.0.0.1\n              _THIS_DB_PORT=6033\n            fi\n            [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL4 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp\"\n            if [ -x \"/usr/local/bin/mydumper\" ]; then\n              _MYQUICK_ITD=$(mydumper -V 2>&1 \\\n                | tr -d \"\\n\" \\\n                | tr -d \",\" \\\n                | tr -d \"v\" \\\n                | cut -d\" \" -f2 \\\n                | awk '{ print $1}' 2>&1)\n              _DB_V=$(mysql -V 2>&1 \\\n                | tr -d \"\\n\" \\\n                | cut -d\" \" -f6 \\\n                | awk '{ print $1}' \\\n                | cut -d\"-\" -f1 \\\n                | awk '{ print $1}' \\\n                
| sed \"s/[\\,']//g\" 2>&1)\n              if [ \"${_DB_V}\" = \"Linux\" ]; then\n                _DB_V=$(mysql -V 2>&1 \\\n                  | tr -d \"\\n\" \\\n                  | cut -d\" \" -f4 \\\n                  | awk '{ print $1}' \\\n                  | cut -d\"-\" -f1 \\\n                  | awk '{ print $1}' \\\n                  | sed \"s/[\\,']//g\" 2>&1)\n              fi\n              _MD_V=$(mydumper --version 2>&1 \\\n                | tr -d \"\\n\" \\\n                | cut -d\" \" -f6 \\\n                | awk '{ print $1}' \\\n                | cut -d\"-\" -f1 \\\n                | awk '{ print $1}' \\\n                | sed \"s/[\\,']//g\" 2>&1)\n              if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n                echo \"INFO: Installed MyQuick ${_MYQUICK_ITD} for ${_MD_V} (${_DB_V})\"\n              fi\n            fi\n            myloader \\\n              --database=${_DBN} \\\n              --host=localhost \\\n              --user=root \\\n              --password=${_SQL_PSWD} \\\n              --port=3306 \\\n              --directory=${_USR}/src/${_DBN}/ \\\n              --threads=4 \\\n              --overwrite-tables \\\n              --verbose=1\n            echo \"INFO: site ${_Dom} db import on ${_oct} complete\"\n            ###\n            ### Activate readonly on source via global.inc and not via drush\n            ###\n            ### _run_drush8_nosilent_cmd \"variable-set --always-set site_readonly 0\"\n            ### _run_drush8_nosilent_cmd \"dis readonlymode -y\"\n            ### echo \"INFO: site ${_Dom} readonlymode on ${_oct} disabled\"\n            ###\n          else\n            echo \"ALRT: site ${_Dom} db failure due to missing ${_USR}/src/${_DBN}\"\n          fi\n          if [ -e \"${_USR}/.drush/${_Dom}.alias.drushrc.php\" ]; then\n            _run_drush8_hmr_cmd \"hosting-task @${_Dom} verify --force\"\n            echo \"INFO: site ${_Dom} verify on ${_oct} scheduled\"\n          else\n            echo 
\"ALRT: site ${_Dom} verify failure due to missing ${_USR}/.drush/${_Dom}.alias.drushrc.php\"\n          fi\n        else\n          echo \"ALRT: site ${_Dom} db failure due to empty _DBN/${_DBN} or _DBP/${_DBP}\"\n        fi\n      elif [ \"${_cmd}\" = \"export\" ]; then\n        _run_drush8_nosilent_cmd \"en readonlymode -y\"\n        ###\n        ### Activate readonly on source via global.inc and not via drush\n        ###\n        ### _run_drush8_nosilent_cmd \"variable-set --always-set site_readonly 1\"\n        ###\n        echo \"INFO: site ${_Dom} readonlymode module on ${_oct} enabled\"\n        if [ -x \"/usr/local/bin/mydumper\" ]; then\n          _MYQUICK_ITD=$(mydumper -V 2>&1 \\\n            | tr -d \"\\n\" \\\n            | tr -d \",\" \\\n            | tr -d \"v\" \\\n            | cut -d\" \" -f2 \\\n            | awk '{ print $1}' 2>&1)\n          _DB_V=$(mysql -V 2>&1 \\\n            | tr -d \"\\n\" \\\n            | cut -d\" \" -f6 \\\n            | awk '{ print $1}' \\\n            | cut -d\"-\" -f1 \\\n            | awk '{ print $1}' \\\n            | sed \"s/[\\,']//g\" 2>&1)\n          if [ \"${_DB_V}\" = \"Linux\" ]; then\n            _DB_V=$(mysql -V 2>&1 \\\n              | tr -d \"\\n\" \\\n              | cut -d\" \" -f4 \\\n              | awk '{ print $1}' \\\n              | cut -d\"-\" -f1 \\\n              | awk '{ print $1}' \\\n              | sed \"s/[\\,']//g\" 2>&1)\n          fi\n          _MD_V=$(mydumper --version 2>&1 \\\n            | tr -d \"\\n\" \\\n            | cut -d\" \" -f6 \\\n            | awk '{ print $1}' \\\n            | cut -d\"-\" -f1 \\\n            | awk '{ print $1}' \\\n            | sed \"s/[\\,']//g\" 2>&1)\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            echo \"INFO: Installed MyQuick ${_MYQUICK_ITD} for ${_MD_V} (${_DB_V})\"\n          fi\n        fi\n        mkdir -p ${_USR}/src/${_DBN}\n        mydumper \\\n          --database=${_DBN} \\\n          --host=localhost \\\n          
--user=root \\\n          --password=${_SQL_PSWD} \\\n          --port=3306 \\\n          --outputdir=${_USR}/src/${_DBN}/ \\\n          --rows=50000 \\\n          --build-empty-files \\\n          --threads=4 \\\n          --long-query-guard=900 \\\n          --clear \\\n          --verbose=1\n        echo \"INFO: site ${_Dom} db export on ${_oct} complete\"\n      elif [ \"${_cmd}\" = \"proxy\" ]; then\n        if [ -e \"${_USR}/log/exported.pid\" ] && [ ! -z \"${_tgt}\" ]; then\n          if [ -e \"${_Vht}\" ]; then\n            _NMS=`cat ${_Vht} | grep \"server_name\" | sed \"s/server_name//g; s/;//g\" | sort | uniq | tr -d \"\\n\" | sed \"s/  / /g; s/  / /g; s/  / /g\" | sort | uniq`\n            if [ -e \"${_USR}/config/server_master/ssl.d/${_Dom}/openssl.key\" ] \\\n              && [ -e \"/var/xdrago/conf/https_proxy_le.conf\" ]; then\n              cp -a /var/xdrago/conf/https_proxy_le.conf ${_Vhs}\n              sed -i \"s/.*server_name.*;/  server_name  ${_NMS};/g\" ${_Vhs}\n              wait\n              sed -i \"s/_target_ip/${_tgt}/g\" ${_Vhs}\n              wait\n              sed -i \"s/_oct_uid/${_oct}/g\" ${_Vhs}\n              wait\n              sed -i \"s/_domain_name/${_Dom}/g\" ${_Vhs}\n              wait\n              chown ${_oct}:users ${_Vhs}\n              chmod 600 ${_Vhs}\n            fi\n            mv -f ${_Vht} ${_Vhd}\n            cp -a /var/xdrago/conf/proxy.conf ${_Vht}\n            sed -i \"s/.*server_name.*;/  server_name  ${_NMS};/g\" ${_Vht}\n            wait\n            sed -i \"s/_target_ip/${_tgt}/g\" ${_Vht}\n            wait\n            chown ${_oct}:users ${_Vht}\n            chmod 600 ${_Vht}\n            echo \"INFO: site ${_Dom} proxy conversion on ${_oct} complete\"\n          else\n            echo \"ALRT: site ${_Dom} proxy conversion on ${_oct} failed due to missing ${_Vht}\"\n          fi\n        else\n          echo \"ALRT: site ${_Dom} proxy conversion on ${_oct} failed due to empty tgt/${_tgt}\"\n      
  fi\n      fi\n    fi\n  done\n}\n\n_delete_this_platform() {\n  _run_drush8_hmr_cmd \"hosting-task @platform_${_T_PFM_NAME} delete --force\"\n  echo \"Old empty platform_${_T_PFM_NAME} will be deleted\"\n  _run_drush8_hmr_cmd \"hosting-dispatch\"\n  sleep 5\n  _run_drush8_hmr_cmd \"hosting-tasks --force\"\n  sleep 1\n}\n\n_xboa_runtime_oct() {\n  if [ \"${_cmd}\" = \"import\" ] && [ ! -z \"${_dst}\" ]; then\n    echo \"${_dst}\"\n  else\n    echo \"${_oct}\"\n  fi\n}\n\n_xboa_rewrite_target_account_refs() {\n  _X_SRC_OCT=\"$1\"\n  _X_DST_OCT=\"$2\"\n  _X_TGT_HOST=\"$3\"\n  _X_DST_USR=\"$4\"\n\n  if [ -z \"${_X_SRC_OCT}\" ] || [ -z \"${_X_DST_OCT}\" ] || [ -z \"${_X_TGT_HOST}\" ] || [ -z \"${_X_DST_USR}\" ]; then\n    return 0\n  fi\n  if [ \"${_X_SRC_OCT}\" = \"${_X_DST_OCT}\" ]; then\n    return 0\n  fi\n\n  echo \"INFO: rename mode active on target ${_X_TGT_HOST}: rewriting refs ${_X_SRC_OCT} -> ${_X_DST_OCT} in selected text configs\"\n  ssh root@${_X_TGT_HOST} \"    _D='${_X_DST_USR}';     for _F in       \\$(find \"\\${_D}/.drush\" -maxdepth 1 -type f -name '*.drushrc.php' 2>/dev/null | sort)       \\$(find \"\\${_D}/config/server_master/nginx/vhost.d\" -maxdepth 1 -type f 2>/dev/null | sort)       \\$(find \"\\${_D}/config/server_master/nginx/pre.d\" -maxdepth 1 -type f 2>/dev/null | sort)       \\$(find \"\\${_D}/config/server_master/nginx/post.d\" -maxdepth 1 -type f 2>/dev/null | sort)       \\$(find \"\\${_D}/config/server_master/nginx/subdir.d\" -maxdepth 1 -type f 2>/dev/null | sort)       \\$(find \"\\${_D}/config/server_master/nginx/platform.d\" -maxdepth 1 -type f 2>/dev/null | sort)       \\$(find \"\\${_D}/config/ssl.d\" -maxdepth 1 -type f 2>/dev/null | sort)       \\$(find \"\\${_D}/config/server_master/ssl.d\" -maxdepth 1 -type f 2>/dev/null | sort)       ; do       [ -f \"\\${_F}\" ] || continue;       grep -q '/data/disk/${_X_SRC_OCT}\\|/home/${_X_SRC_OCT}\\.ftp' \"\\${_F}\" 2>/dev/null || continue;       sed -i 
's#/data/disk/${_X_SRC_OCT}#/data/disk/${_X_DST_OCT}#g; s#/home/${_X_SRC_OCT}\\.ftp#/home/${_X_DST_OCT}.ftp#g' \"\\${_F}\";     done\"\n}\n\n_migrate_prepare() {\n  _THIS_HM_USER=$(_xboa_runtime_oct)\n  _USR=/data/disk/${_THIS_HM_USER}\n  _THIS_HM_SITE=`cat ${_USR}/.drush/hostmaster.alias.drushrc.php \\\n    | grep \"site_path'\" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,']//g\"`\n  echo _THIS_HM_USER ${_THIS_HM_USER}\n  echo _USR ${_USR}\n  echo _THIS_HM_SITE ${_THIS_HM_SITE}\n  if [ -e \"${_USR}/log/option.txt\" ]; then\n    _CLIENT_OPTION=`cat ${_USR}/log/option.txt`\n    _CLIENT_OPTION=`echo -n ${_CLIENT_OPTION} | tr -d \"\\n\"`\n  fi\n  if [ -e \"${_USR}/log/cores.txt\" ]; then\n    _CLIENT_CORES=`cat ${_USR}/log/cores.txt`\n    _CLIENT_CORES=`echo -n ${_CLIENT_CORES} | tr -d \"\\n\"`\n  fi\n  if [ -e \"${_USR}/log/subscr.txt\" ]; then\n    _CLIENT_SUBSCR=`cat ${_USR}/log/subscr.txt`\n    _CLIENT_SUBSCR=`echo -n ${_CLIENT_SUBSCR} | tr -d \"\\n\"`\n  fi\n  if [ -e \"${_USR}/log/email.txt\" ]; then\n    _CLIENT_EMAIL=`cat ${_USR}/log/email.txt`\n    _CLIENT_EMAIL=`echo -n ${_CLIENT_EMAIL} | tr -d \"\\n\"`\n    if [[ \"${_CLIENT_EMAIL}\" =~ \"@\" ]]; then\n      _DO_NOTHING=YES\n    else\n      _CLIENT_EMAIL=\"omega8cc@gmail.com\"\n    fi\n    _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n  fi\n\n  [ -z \"${_THIS_DB_PORT}\" ] && _THIS_DB_PORT=3306\n  [ -z \"${_CLIENT_OPTION}\" ] && _CLIENT_OPTION=EDGE\n  [ -z \"${_CLIENT_SUBSCR}\" ] && _CLIENT_SUBSCR=M\n  [ -z \"${_CLIENT_CORES}\" ]  && _CLIENT_CORES=1\n  [ -z \"${_CLIENT_EMAIL}\" ]  && _CLIENT_EMAIL=\"omega8cc@gmail.com\"\n\n  echo \"_CLIENT_EMAIL is ${_CLIENT_EMAIL}\"\n  echo \"_THIS_HM_USER is ${_THIS_HM_USER}\"\n  echo \"_CLIENT_OPTION is ${_CLIENT_OPTION}\"\n  echo \"_CLIENT_SUBSCR is ${_CLIENT_SUBSCR}\"\n  echo \"_CLIENT_CORES is ${_CLIENT_CORES}\"\n\n  if [ \"${_cmd}\" = \"export\" ] && [ ! 
-z \"${_CLIENT_EMAIL}\" ]; then\n    _send_notice_migration_start\n  fi\n\n  su -s /bin/bash ${_THIS_HM_USER} -c \"drush8 cc drush\" &> /dev/null\n  wait\n  _disable_chattr ${_THIS_HM_USER}.ftp &> /dev/null\n  rm -rf /home/${_THIS_HM_USER}.ftp/drush-backups\n  _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_task WHERE task_type='delete' AND task_status='-1'\\\"\"\n  _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_task WHERE task_type='delete' AND task_status='0' AND executed='0'\\\"\"\n  rm -rf ${_USR}/clients/*/backups\n  symlinks -dr ${_USR}/clients &> /dev/null\n  if [ -d \"/home/${_THIS_HM_USER}.ftp\" ]; then\n    symlinks -dr /home/${_THIS_HM_USER}.ftp &> /dev/null\n    rm -f /home/${_THIS_HM_USER}.ftp/{.profile,.bash_logout,.bash_profile,.bashrc}\n  fi\n  if [ -e \"${_THIS_HM_SITE}/drushrc.php\" ]; then\n    _HDB=`cat ${_THIS_HM_SITE}/drushrc.php \\\n      | grep \"options\\['db_name'\\] = \" \\\n      | cut -d: -f2 \\\n      | awk '{ print $3}' \\\n      | sed \"s/[\\,';]//g\"`\n    echo \"raw _HDB is ${_HDB}\"\n    _HDB=${_HDB//[^a-zA-Z0-9_]/}\n    echo \"clean _HDB is ${_HDB}\"\n  else\n    echo \"no ${_THIS_HM_SITE}/drushrc.php found\"\n  fi\n  mkdir -p ${_USR}/src\n  if [ \"${_cmd}\" = \"export\" ] \\\n    && [ ! -z \"${_HDB}\" ] \\\n    && [ ! -e \"${_USR}/src/prev_hostmaster.sql\" ] \\\n    && [ ! 
-e \"${_USR}/log/exported.pid\" ]; then\n      mysqldump -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp \\\n        --single-transaction \\\n        --quick \\\n        --no-autocommit \\\n        --skip-add-locks \\\n        --no-tablespaces \\\n        --hex-blob ${_HDB} \\\n        comment \\\n        date_format_locale \\\n        date_format_type \\\n        date_formats \\\n        fe_block_boxes \\\n        features_signature \\\n        field_config \\\n        field_config_instance \\\n        field_data_body \\\n        field_data_comment_body \\\n        field_data_field_composer_git_docroot \\\n        field_data_field_composer_git_path \\\n        field_data_field_composer_git_project_url \\\n        field_data_field_composer_git_version \\\n        field_data_field_composer_project_docroot \\\n        field_data_field_composer_project_package \\\n        field_data_field_composer_project_path \\\n        field_data_field_composer_project_version \\\n        field_data_field_deployment_strategy \\\n        field_data_field_git_docroot \\\n        field_data_field_git_reference \\\n        field_data_field_git_repository_path \\\n        field_data_field_git_repository_url \\\n        field_revision_body \\\n        field_revision_comment_body \\\n        field_revision_field_composer_git_docroot \\\n        field_revision_field_composer_git_path \\\n        field_revision_field_composer_git_project_url \\\n        field_revision_field_composer_git_version \\\n        field_revision_field_composer_project_docroot \\\n        field_revision_field_composer_project_package \\\n        field_revision_field_composer_project_path \\\n        field_revision_field_composer_project_version \\\n        field_revision_field_deployment_strategy \\\n        field_revision_field_git_docroot \\\n        field_revision_field_git_reference \\\n        field_revision_field_git_repository_path \\\n        field_revision_field_git_repository_url \\\n        filter 
\\\n        filter_format \\\n        history \\\n        hosting_backup_gc_sites \\\n        hosting_client \\\n        hosting_client_user \\\n        hosting_context \\\n        hosting_cron \\\n        hosting_git \\\n        hosting_git_pull \\\n        hosting_http_basic_auth \\\n        hosting_package \\\n        hosting_package_instance \\\n        hosting_package_languages \\\n        hosting_platform \\\n        hosting_platform_client_access \\\n        hosting_service \\\n        hosting_site \\\n        hosting_site_alias \\\n        hosting_site_backups \\\n        hosting_ssl_cert \\\n        hosting_ssl_cert_ips \\\n        hosting_ssl_server \\\n        hosting_ssl_site \\\n        hosting_task \\\n        hosting_task_arguments \\\n        hosting_task_log \\\n        menu_custom \\\n        menu_links \\\n        menu_router \\\n        node \\\n        node_access \\\n        node_comment_statistics \\\n        node_revision \\\n        node_type \\\n        role \\\n        role_permission \\\n        url_alias \\\n        userprotect \\\n        users \\\n        users_roles \\\n        > ${_USR}/src/prev_hostmaster.sql\n    _run_drush8_hmr_cmd \"variable-set --always-set maintenance_mode 1\"\n    if [ -e \"${_USR}/src/prev_hostmaster.sql\" ]; then\n      echo \"INFO: hostmaster ${_oct} db export complete\"\n      touch ${_USR}/log/exported.pid\n    else\n      echo \"ALRT: no ${_USR}/src/prev_hostmaster.sql exists!\"\n      exit 1\n    fi\n  fi\n  if [ \"${_cmd}\" = \"import\" ] \\\n    && [ -e \"${_USR}/config/includes/nginx_vhost_common.conf\" ] \\\n    && [ ! -z \"${_HDB}\" ] \\\n    && [ -e \"${_USR}/src/prev_hostmaster.sql\" ] \\\n    && [ ! -e \"${_USR}/log/imported.pid\" ]; then\n    if [ ! 
-z \"${_dst}\" ] && [ \"${_oct}\" != \"${_dst}\" ]; then\n      echo \"INFO: rename mode import for source ${_oct} into target account ${_dst}\"\n      [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL5R -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp\"\n    else\n      [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL6 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp\"\n    fi\n    mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp ${_HDB} < ${_USR}/src/prev_hostmaster.sql\n    _run_drush8_hmr_cmd \"variable-set --always-set site_frontpage 'hosting/sites'\"\n    touch ${_USR}/log/imported.pid\n\n    if [ \"${_fix}\" = \"fix\" ]; then\n      if [ ! -e \"${_USR}/log/post-merge-fix.pid\" ]; then\n        echo \"HOTFIX B: Hostmaster STATUS: Fix for migrated/merged instance 1/2 start\"\n        _DOMAIN=`cat ${_USR}/log/domain.txt`\n        _DOMAIN=`echo -n ${_DOMAIN} | tr -d \"\\n\"`\n        _USE_AEGIR_HOST=`uname -n`\n\n        ### Pre-Fix for migrated/merged instances\n        if [ -e \"${_USR}/log/imported.pid\" ] || [ -e \"${_USR}/log/exported.pid\" ]; then\n          if [ -e \"${_USR}/aegir/distro/002/sites/${_DOMAIN}/drushrc.php\" ]; then\n            sed -i \"s/platform_0.*'/platform_002'/g\"           ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/0.*\\/sites/distro\\/002\\/sites/g\" ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/01.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/02.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/03.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/04.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed 
-i \"s/distro\\/05.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/platform_0.*'/platform_002'/g\"           ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/0.*\\/sites/distro\\/002\\/sites/g\" ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/01.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/02.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/03.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/04.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/05.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            for _Platform in `find ${_USR}/.drush/platform_* -maxdepth 1 -type f | sort`; do\n              _T_PFM_NAME=$(echo \"${_Platform}\" \\\n                | sed \"s/.*platform_//g; s/.alias.drushrc.php//g\" \\\n                | awk '{ print $1}' 2>&1)\n              _T_PFM_ROOT=$(cat ${_Platform} \\\n                | grep \"root'\" \\\n                | cut -d: -f2 \\\n                | awk '{ print $3}' \\\n                | sed \"s/[\\,']//g\" 2>&1)\n              _T_PFM_SITE=$(grep \"${_T_PFM_ROOT}/sites/\" \\\n                ${_USR}/.drush/*.drushrc.php \\\n                | grep site_path 2>&1)\n              if [ ! 
-e \"${_T_PFM_ROOT}/sites/all\" ] \\\n                && [ \"${_T_PFM_NAME}\" != \"hostmaster\" ]; then\n                _delete_this_platform\n                mkdir -p ${_USR}/undo\n                mv -f ${_USR}/.drush/platform_${_T_PFM_NAME}.alias.drushrc.php \\\n                  ${_USR}/undo/ &> /dev/null\n                echo \"GHOST platform ${_T_PFM_ROOT} detected and moved to ${_USR}/undo/\"\n              fi\n              if [[ \"${_T_PFM_SITE}\" =~ \".restore\" ]]; then\n                echo \"WARNING: ghost site leftover found: ${_T_PFM_SITE}\"\n              fi\n              if [ -z \"${_T_PFM_SITE}\" ] \\\n                && [ \"${_T_PFM_NAME}\" != \"hostmaster\" ] \\\n                && [ -e \"${_T_PFM_ROOT}/sites/all\" ]; then\n                _delete_this_platform\n              fi\n            done\n          fi\n        fi\n\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO hosting_context (nid, name) VALUES ('4', 'server_localhost'), ('2', 'server_master')\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO hosting_package (vid, nid, package_type, short_name, old_short_name, description) VALUES ('6', '6', 'platform', 'drupal', '', '')\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO node_revision (nid, vid, uid, title, body, teaser, log, timestamp, format) VALUES ('6', '6', '1', 'drupal', '', '', '', '1412168340', '0')\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO node (nid, vid, type, language, title, uid, status, created, changed, comment, promote, moderate, sticky, tnid, translate) VALUES ('6', '6', 'package', '', 'drupal', '1', '1', '1412168321', '1412168340', '0', '0', '0', '0', '0', '0')\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_package WHERE nid=2 AND short_name='drupal'\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_package WHERE nid=4 AND short_name='drupal'\\\"\" &> /dev/null\n        
_run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM node WHERE nid=8 AND type='site'\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM node_revision WHERE nid=8\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET type='server' WHERE nid=2\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET type='server' WHERE nid=4\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET title='${_USE_AEGIR_HOST}' WHERE nid=2\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET title='localhost' WHERE nid=4\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET title='${_USE_AEGIR_HOST}' WHERE nid=2\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET title='localhost' WHERE nid=4\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET db_server=4 WHERE db_server=2\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform SET web_server=2 WHERE web_server=0\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE users_roles SET rid=7 WHERE rid=5\\\"\" &> /dev/null\n        su -s /bin/bash ${_THIS_HM_USER} -c \"drush8 cc drush\" &> /dev/null\n        wait\n        rm -rf ${_USR}/.tmp/cache\n        _run_drush8_hmr_cmd \"hosting-task @server_localhost verify --force\" &> /dev/null\n        _run_drush8_hmr_cmd \"hosting-dispatch\" &> /dev/null\n        sleep 5\n        _run_drush8_hmr_cmd \"hosting-tasks --force\" &> /dev/null\n        echo \"HOTFIX B: Hostmaster STATUS: Fix for migrated/merged instance 1/2 complete\"\n      fi\n      _vSet=\"variable-set --always-set\"\n      _run_drush8_hmr_cmd \"${_vSet} hosting_client_send_welcome 0\" &> /dev/null\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET client=1 WHERE profile=7\\\"\" &> /dev/null\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET client=1 WHERE profile=9\\\"\" &> /dev/null\n      
_run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET client=1 WHERE client=0\\\"\" &> /dev/null\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform SET web_server=2 WHERE web_server=0\\\"\" &> /dev/null\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET uid=1 WHERE uid=0\\\"\" &> /dev/null\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET uid=1 WHERE uid=0\\\"\" &> /dev/null\n      _HM_NID=$(_run_drush8_hmr_cmd \"sqlq \\\"SELECT MIN(site.nid) AS lowest_nid FROM hosting_site site JOIN hosting_package_instance pkgi ON pkgi.rid=site.nid JOIN hosting_package pkg ON pkg.nid=pkgi.package_id WHERE pkg.short_name='hostmaster';\\\"\" 2>&1)\n      _HM_NID=${_HM_NID//[^0-9]/}\n      if [ ! -z \"${_HM_NID}\" ]; then\n        echo \"HOTFIX B: Hostmaster STATUS: Fix 1/2 hosting_context ${_HM_NID}\"\n        if [ -e \"${_USR}/aegir/distro/002/sites/${_DOMAIN}/drushrc.php\" ]; then\n          _HM_PLF=$(_run_drush8_hmr_cmd \"sqlq \\\"SELECT platform FROM hosting_site WHERE nid=${_HM_NID}\\\"\" 2>&1)\n          _HM_PLF=${_HM_PLF//[^0-9]/}\n          _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_context WHERE name='platform_002' AND nid != ${_HM_PLF}\\\"\" &> /dev/null\n          _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_context SET name='platform_002' WHERE nid=${_HM_PLF}\\\"\" &> /dev/null\n        fi\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_context SET name='hostmaster' WHERE nid=${_HM_NID}\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\\\"\" &> /dev/null\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site_alias SET alias='www.${_DOMAIN}' WHERE nid=${_HM_NID}\\\"\" &> /dev/null\n        echo FIXED > ${_USR}/log/post-merge-fix.pid\n      else\n        echo \"HOTFIX B: Hostmaster STATUS: Fix 1/2 hosting_context _HM_NID empty!\"\n      
fi\n      if [ -e \"${_USR}/aegir/distro/002/sites/${_DOMAIN}/drushrc.php\" ] \\\n        && [ ! -e \"${_USR}/log/hmpathfix.pid\" ]; then\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform SET publish_path='${_USR}/aegir/distro/002' WHERE publish_path LIKE '%/aegir/distro/%'\\\"\" &> /dev/null\n        touch ${_USR}/log/hmpathfix.pid\n      fi\n    fi\n\n  fi\n  if [ \"${_cmd}\" = \"create\" ]; then\n    if [ ! -z \"${_dst}\" ]; then\n      _TGT_HM_USER=\"${_dst}\"\n    else\n      _TGT_HM_USER=\"${_oct}\"\n    fi\n    if [ ! -z \"${_CLIENT_EMAIL}\" ] \\\n      && [ ! -z \"${_TGT_HM_USER}\" ] \\\n      && [ ! -z \"${_CLIENT_OPTION}\" ] \\\n      && [ ! -z \"${_CLIENT_SUBSCR}\" ] \\\n      && [ ! -z \"${_CLIENT_CORES}\" ]; then\n      ssh root@${_tgt} \"/opt/local/bin/boa in-octopus ${_CLIENT_EMAIL} ${_TGT_HM_USER} ${_tRee} ${_CLIENT_OPTION} ${_CLIENT_SUBSCR} ${_CLIENT_CORES} noscreen\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/tools/le/.ctrl/ssl-demo-mode.pid\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/tools/le/config\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/tools/le/config.sh\"\n      echo \"Waiting 8 minutes before attempting to run enforced post-install upgrade...\"\n      sleep 500\n      ssh root@${_tgt} \"test ! -e /data/disk/${_TGT_HM_USER}/log/upgrademail.txt && /bin/echo UP >> /data/disk/${_TGT_HM_USER}/static/control/platforms.info\"\n      ssh root@${_tgt} \"test ! 
-e /data/disk/${_TGT_HM_USER}/log/upgrademail.txt && /bin/touch /data/disk/${_TGT_HM_USER}/static/control/run-upgrade.pid\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/static/control/*.pid\"\n      echo \"INFO: new octopus ${_TGT_HM_USER} setup on ${_tgt} complete\"\n    else\n      echo \"ALRT: new octopus ${_TGT_HM_USER} setup on ${_tgt} failed because of missing arguments\"\n      exit 1\n    fi\n  fi\n  if [ \"${_cmd}\" = \"import\" ] \\\n    || [ \"${_cmd}\" = \"export\" ] \\\n    || [ \"${_cmd}\" = \"proxy\" ]; then\n    _migrate_sites\n  fi\n  echo Done ${_cmd} for ${_USR}\n  _enable_chattr ${_THIS_HM_USER}.ftp &> /dev/null\n}\n\n_xboa_transfer_static_symlink_safe() {\n  _X_SRC_USR=\"$1\"\n  _X_TGT_HOST=\"$2\"\n  _X_DST_USR=\"$3\"\n  _X_IENI=\"$4\"\n\n  _X_SRC_STATIC=\"${_X_SRC_USR}/static\"\n  _X_SRC_STATIC_FILES=\"${_X_SRC_USR}/static/files\"\n  _X_DST_STATIC=\"${_X_DST_USR}/static\"\n  _X_DST_STATIC_FILES=\"${_X_DST_USR}/static/files\"\n\n  if [ ! -d \"${_X_SRC_STATIC}\" ]; then\n    return 0\n  fi\n\n  echo \"INFO: Static sync pass 1/2 (preserve symlinks, exclude static/files)\"\n\n  mkdir -p \"${_X_SRC_USR}/static\"\n  rsync -aEAXqu ${_X_IENI} --exclude=files -e ssh \"${_X_SRC_USR}/static/\" \"root@${_X_TGT_HOST}:${_X_DST_STATIC}/\"\n\n  if [ -L \"${_X_SRC_STATIC_FILES}\" ]; then\n    _X_SRC_STATIC_FILES_REAL=\"$(readlink -f \"${_X_SRC_STATIC_FILES}\" 2>/dev/null)\"\n    echo \"INFO: Source static/files is symlink -> ${_X_SRC_STATIC_FILES_REAL}\"\n  elif [ -d \"${_X_SRC_STATIC_FILES}\" ]; then\n    _X_SRC_STATIC_FILES_REAL=\"${_X_SRC_STATIC_FILES}\"\n    echo \"INFO: Source static/files is a real directory\"\n  else\n    _X_SRC_STATIC_FILES_REAL=\"\"\n    echo \"INFO: Source static/files missing, nothing to materialize\"\n  fi\n\n  if [ -n \"${_X_SRC_STATIC_FILES_REAL}\" ] && [ -d \"${_X_SRC_STATIC_FILES_REAL}\" ]; then\n    echo \"INFO: Static sync pass 2/2 (materialize static/files as real directory on destination)\"\n    ssh 
root@\"${_X_TGT_HOST}\" \"if [ -L '${_X_DST_STATIC_FILES}' ]; then rm -f '${_X_DST_STATIC_FILES}'; elif [ -e '${_X_DST_STATIC_FILES}' ] && [ ! -d '${_X_DST_STATIC_FILES}' ]; then rm -f '${_X_DST_STATIC_FILES}'; fi; mkdir -p '${_X_DST_STATIC_FILES}'\"\n    rsync -aEAXqu ${_X_IENI} -e ssh \"${_X_SRC_STATIC_FILES_REAL}/\" \"root@${_X_TGT_HOST}:${_X_DST_STATIC_FILES}/\"\n  fi\n}\n\n_migrate_init() {\n  _RUNTIME_OCT=$(_xboa_runtime_oct)\n  _USR=/data/disk/${_RUNTIME_OCT}\n  if [ \"${_oct}\" = \"arch\" ] \\\n    || [ \"${_oct}\" = \"all\" ] \\\n    || [ \"${_oct}\" = \"lost+found\" ] \\\n    || [ \"${_oct}\" = \"lostfound\" ] \\\n    || [ \"${_oct}\" = \"backups\" ] \\\n    || [ \"${_oct}\" = \"codebases-cleanup\" ]; then\n    echo \"_USR ${_oct} is not eligible for migration: system dir\"\n    exit 1\n  fi\n  if [ ! -e \"${_USR}/tools/le/dehydrated\" ] && [ \"${_oct}\" != \"shared\" ]; then\n    echo\n    echo \"ERROR: This version of xboa migration tool supports only BOA-3.2.2 or newer\"\n    echo \"ERROR: Please upgrade ${_oct} instance to current BOA head before trying again\"\n    echo \"Bye.\"\n    echo\n    exit 1\n  fi\n  if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  fi\n  if [ \"${_oct}\" = \"xusage\" ]; then\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} start\"\n    if [ -e \"/var/log/boa/usage\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/log/boa/usage/* root@${_tgt}:/var/log/boa/usage/\n      echo \"INFO: ${_cmd} for ${_oct} /var/log/boa/usage complete\"\n    fi\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} complete\"\n  elif [ \"${_oct}\" = \"shared\" ]; then\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} start\"\n    if [ -d \"/data/all\" ] && [ ! 
-L \"/data/all\" ]; then\n      rsync -aEAXqu ${_ieni} -e ssh /data/all  root@${_tgt}:/data/\n      echo \"INFO: ${_cmd} for ${_oct} /data/all complete\"\n    fi\n    if [ -d \"/data/disk/all\" ]; then\n      rsync -aEAXqu ${_ieni} -e ssh /data/disk/all  root@${_tgt}:/data/\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/all complete\"\n    fi\n    if [ -e \"/data/disk/arch\" ]; then\n      rsync -aEAXqu ${_ieni} -e ssh /data/disk/arch  root@${_tgt}:/data/disk/\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/arch complete\"\n    fi\n    if [ -e \"/opt/solr4\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /opt/solr4 root@${_tgt}:/opt/\n      echo \"INFO: ${_cmd} for ${_oct} /opt/solr4 complete\"\n    fi\n    if [ -e \"/var/solr7/data\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/solr7/data root@${_tgt}:/var/solr7/\n      echo \"INFO: ${_cmd} for ${_oct} /var/solr7/data complete\"\n    fi\n    if [ -e \"/var/solr9/data\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/solr9/data root@${_tgt}:/var/solr9/\n      echo \"INFO: ${_cmd} for ${_oct} /var/solr9/data complete\"\n    fi\n    if [ -e \"/var/www/static\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/www/static root@${_tgt}:/var/www/\n      echo \"INFO: ${_cmd} for ${_oct} /var/www/static complete\"\n    fi\n    if [ -e \"/etc/bind\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /etc/bind root@${_tgt}:/etc/\n      echo \"INFO: ${_cmd} for ${_oct} /etc/bind complete\"\n    fi\n    if [ -e \"/var/log/boa/usage\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/log/boa/usage root@${_tgt}:/var/log/boa/\n      echo \"INFO: ${_cmd} for ${_oct} /var/log/boa/usage complete\"\n    fi\n    if [ -e \"/data/disk/legacy\" ]; then\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/legacy start\"\n      rsync -aEAXq ${_ieni} -e ssh /data/disk/legacy root@${_tgt}:/data/disk/\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/legacy complete\"\n    fi\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} complete\"\n  
else\n    _RUNTIME_OCT=$(_xboa_runtime_oct)\n    _USR=/data/disk/${_RUNTIME_OCT}\n    if [ -e \"${_USR}/log/email.txt\" ]; then\n      _CHECK_EMAIL=`cat ${_USR}/log/email.txt`\n      _CHECK_EMAIL=`echo -n ${_CHECK_EMAIL} | tr -d \"\\n\"`\n      _CHECK_EMAIL=${_CHECK_EMAIL//\\\\\\@/\\@}\n      if [ \"${_CHECK_EMAIL}\" = \"omega8cc@gmail.com\" ] \\\n        || [[ \"${_CHECK_EMAIL}\" =~ \"emaylx@\" ]] \\\n        || [[ \"${_CHECK_EMAIL}\" =~ \"mixomax@\" ]] \\\n        || [[ \"${_CHECK_EMAIL}\" =~ \"@omega8.cc\" ]]; then\n        _RESULT_EMAIL=OUR\n      else\n        _RESULT_EMAIL=OK\n      fi\n    fi\n    if [ \"${_oct}\" = \"arch\" ] \\\n      || [ \"${_oct}\" = \"all\" ] \\\n      || [ \"${_oct}\" = \"lost+found\" ] \\\n      || [ \"${_oct}\" = \"lostfound\" ] \\\n      || [ \"${_oct}\" = \"backups\" ] \\\n      || [ \"${_oct}\" = \"codebases-cleanup\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: system dir\"\n      exit 1\n    elif [ ! -e \"${_USR}/log/cores.txt\" ] \\\n      || [ ! -e \"${_USR}/log/option.txt\" ] \\\n      || [ ! 
-e \"${_USR}/log/email.txt\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: broken\"\n      exit 1\n    elif [ -e \"${_USR}/log/CANCELLED\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: cancelled\"\n      exit 1\n    elif [ -e \"${_USR}/log/proxied.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already migrated\"\n      exit 1\n    elif [ \"${_cmd}\" = \"create\" ] && [ -e \"${_USR}/log/exported.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already created\"\n      exit 1\n    elif [ \"${_cmd}\" = \"export\" ] && [ -e \"${_USR}/log/exported.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already exported\"\n      exit 1\n    elif [ \"${_cmd}\" = \"transfer\" ] && [ -e \"${_USR}/log/transferred.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already transferred\"\n      exit 1\n    elif [ \"${_cmd}\" = \"import\" ] && [ -e \"${_USR}/log/imported.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already imported\"\n      exit 1\n    elif [ \"${_cmd}\" = \"proxy\" ] && [ -e \"${_USR}/log/proxied.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already proxied\"\n      exit 1\n    elif [ \"${_RESULT_EMAIL}\" = \"OUR\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: internal only\"\n      exit 1\n    else\n      echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} ${_dst} start\"\n      if [ \"${_cmd}\" = \"transfer\" ] || [ \"${_cmd}\" = \"pretransfer\" ]; then\n        if [ ! 
-z \"${_dst}\" ]; then\n          _DST=/data/disk/${_dst}\n        else\n          _DST=/data/disk/${_oct}\n        fi\n        if [ -e \"${_USR}\" ]; then\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/distro/*         root@${_tgt}:${_DST}/distro/\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/src/*            root@${_tgt}:${_DST}/src/\n          _xboa_transfer_static_symlink_safe \"${_USR}\" \"${_tgt}\" \"${_DST}\" \"${_ieni}\"\n          #\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/clients/*        root@${_tgt}:${_DST}/clients/        &> /dev/null\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/log/*            root@${_tgt}:${_DST}/log/\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/static/control/* root@${_tgt}:${_DST}/static/control/ &> /dev/null\n          #\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/backups/*        root@${_tgt}:${_DST}/backups/        &> /dev/null\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/undo/*           root@${_tgt}:${_DST}/undo/           &> /dev/null\n          if [ ! 
-z \"${_dst}\" ]; then\n            rsync -aEAXqu ${_ieni} -e ssh /home/${_oct}.ftp/.ssh  root@${_tgt}:/home/${_dst}.ftp/\n          else\n            rsync -aEAXqu ${_ieni} -e ssh /home/${_oct}.ftp/.ssh  root@${_tgt}:/home/${_oct}.ftp/\n          fi\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            --exclude=server_ \\\n            --exclude=hostmaster \\\n            -e ssh ${_USR}/.drush/*.alias.drushrc.php \\\n            root@${_tgt}:${_DST}/.drush/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            --exclude=host8.biz \\\n            --exclude=boa.io \\\n            --exclude=aegir.cc \\\n            -e ssh ${_USR}/config/server_master/nginx/vhost.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/vhost.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/nginx/pre.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/pre.d/ &> /dev/null\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/nginx/post.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/post.d/ &> /dev/null\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/nginx/subdir.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/subdir.d/ &> /dev/null\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/nginx/platform.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/platform.d/ &> /dev/null\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/ssl.d/* \\\n            root@${_tgt}:${_DST}/config/ssl.d/ &> /dev/null\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/ssl.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/ssl.d/ &> /dev/null\n\n          rsync -aEAXqu \\\n            
${_ieni} \\\n            -e ssh ${_USR}/tools/le \\\n            root@${_tgt}:${_DST}/tools/ &> /dev/null\n\n          if [ ! -z \"${_dst}\" ] && [ \"${_dst}\" != \"${_oct}\" ]; then\n            _xboa_rewrite_target_account_refs \"${_oct}\" \"${_dst}\" \"${_tgt}\" \"${_DST}\"\n          fi\n\n          echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} ${_dst} complete\"\n          touch ${_USR}/log/transferred.pid\n        fi\n      else\n        # shellcheck disable=SC1091\n        [ -e \"/root/.${_oct}.octopus.cnf\" ] && source /root/.${_oct}.octopus.cnf\n        ### We are using max verbose mode by default\n        _migrate_prepare\n          ### _NOW=$(date +%y%m%d-%H%M)\n          ### mkdir -p /var/log/boa/migrate\n          ### _migrate_prepare >/var/log/boa/migrate/migrate-${_oct}-${_cmd}-${_NOW}.log 2>&1\n        if [ \"${_cmd}\" = \"proxy\" ]; then\n          service nginx reload\n          _send_notice_migration_complete\n          echo COMPLETE > ${_USR}/log/proxied.pid\n        fi\n      fi\n      echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} ${_dst} complete\"\n      exit 0\n    fi\n  fi\n}\n\n_post_mig() {\n  if [ -e \"/vservers\" ]; then\n    echo You can not run _post_mig on the parent system\n    echo Exit now\n    exit 1\n  else\n    echo Running _post_mig inside vms ${_hName}\n    if [ -x \"/etc/init.d/solr9\" ] && [ -e \"/etc/default/solr9.in.sh\" ]; then\n      pkill -9 -f solr9\n      service solr9 start\n    fi\n    if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/etc/default/solr7.in.sh\" ]; then\n      pkill -9 -f solr7\n      service solr7 start\n    fi\n    pkill -9 -f jetty9\n    rm -rf /tmp/{drush*,pear,jetty*}\n    rm -f /var/log/jetty9/*\n    if [ -e \"/etc/default/jetty9\" ] && [ -e \"/etc/init.d/jetty9\" ]; then\n      service jetty9 start\n    fi\n    service nginx reload\n    [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n    if [ ! 
-e \"/var/xdrago/runner.sh\" ] && [ -e \"/var/xdrago/.runner.sh.off\" ]; then\n      mv -f /var/xdrago/.runner.sh.off /var/xdrago/runner.sh\n    fi\n    if [ ! -e \"/var/xdrago/daily.sh\" ] && [ -e \"/var/xdrago/.daily.sh.off\" ]; then\n      mv -f /var/xdrago/.daily.sh.off /var/xdrago/daily.sh\n    fi\n    if [ ! -e \"/var/xdrago/usage.sh\" ] && [ -e \"/var/xdrago/.usage.sh.off\" ]; then\n      mv -f /var/xdrago/.usage.sh.off /var/xdrago/usage.sh\n    fi\n    if [ ! -e \"/var/xdrago/graceful.sh\" ] && [ -e \"/var/xdrago/.graceful.sh.off\" ]; then\n      mv -f /var/xdrago/.graceful.sh.off /var/xdrago/graceful.sh\n    fi\n    if [ ! -e \"/var/xdrago/manage_ltd_users.sh\" ] && [ -e \"/var/xdrago/.manage_ltd_users.sh.off\" ]; then\n      mv -f /var/xdrago/.manage_ltd_users.sh.off /var/xdrago/manage_ltd_users.sh\n    fi\n    exit 0\n  fi\n}\n\n_pre_mig() {\n  if [ -e \"/vservers\" ]; then\n    echo You can not run _pre_mig on the parent system\n    echo Exit now\n    exit 1\n  else\n    echo Running _pre_mig inside vms ${_hName}\n    if [ -e \"/var/xdrago/runner.sh\" ]; then\n      mv -f /var/xdrago/runner.sh /var/xdrago/.runner.sh.off\n      pkill -9 -f runner.sh\n    fi\n    if [ -e \"/var/xdrago/daily.sh\" ]; then\n      mv -f /var/xdrago/daily.sh /var/xdrago/.daily.sh.off\n      pkill -9 -f daily.sh\n    fi\n    if [ -e \"/var/xdrago/usage.sh\" ]; then\n      mv -f /var/xdrago/usage.sh /var/xdrago/.usage.sh.off\n      pkill -9 -f usage.sh\n    fi\n    if [ -e \"/var/xdrago/graceful.sh\" ]; then\n      mv -f /var/xdrago/graceful.sh /var/xdrago/.graceful.sh.off\n      pkill -9 -f graceful.sh\n    fi\n    if [ -e \"/var/xdrago/manage_ltd_users.sh\" ]; then\n      mv -f /var/xdrago/manage_ltd_users.sh /var/xdrago/.manage_ltd_users.sh.off\n      pkill -9 -f manage_ltd_users.sh\n    fi\n    if [ \"${_hName}\" = \"${_hst}\" ]; then\n      echo Preparing source ${_hName} for outgoing migration...\n      if [ ! 
-e \"/root/.ssh/id_ed25519.pub\" ]; then\n        echo \"Generating SSH (ed25519) keys for root...\"\n        ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519\n      fi\n      if [ -e \"/root/.ssh/id_ed25519.pub\" ]; then\n        mkdir -p /var/www/nginx-default\n        cp -af /root/.ssh/id_ed25519.pub /var/www/nginx-default/auth_undefined_keys.txt\n      else\n        echo Ops.. /root/.ssh/id_ed25519.pub does not exist in ${_hName}\n      fi\n    else\n      echo Preparing target ${_hName} for incoming migration...\n      echo Add remote id_ed25519.pub to authorized_keys here...\n      mkdir -p /root/.ssh\n      chmod 700 /root/.ssh\n      rm -f /root/.ssh/auth_undefined_keys.txt*\n      rm -f /var/www/nginx-default/auth_undefined_keys.txt*\n      curl -s -A iCab \"http://undefined.${_hst}/auth_undefined_keys.txt\" -o /root/.ssh/auth_undefined_keys.txt\n      if [ -e \"/root/.ssh/auth_undefined_keys.txt\" ]; then\n        echo >> /root/.ssh/authorized_keys\n        cat /root/.ssh/auth_undefined_keys.txt >> /root/.ssh/authorized_keys\n        chmod 600 /root/.ssh/authorized_keys\n      else\n        echo Ops.. /root/.ssh/auth_undefined_keys.txt does not exist in ${_hName}\n      fi\n    fi\n  fi\n  echo All done!\n}\n\n_sub_ssl_gen() {\n  IFS=$'\\12'\n  for p in `cat /root/.ssl.proxy.cnf`;do\n    _domain_name=`echo $p | cut -d' ' -f1 | awk '{ print $1}'`\n    _target_ip=`echo $p | cut -d' ' -f2 | awk '{ print $1}'`\n    _oct_uid=`echo $p | cut -d' ' -f3 | awk '{ print $1}'`\n    _oct_mail=`echo $p | cut -d' ' -f4 | awk '{ print $1}'`\n    _dedicated_ip=`echo $p | cut -d' ' -f5 | awk '{ print $1}'`\n    echo _domain_name.${_domain_name} _target_ip.${_target_ip} _oct_uid.${_oct_uid} _oct_mail.${_oct_mail} _dedicated_ip.${_dedicated_ip}\n    if [ ! -z \"${_domain_name}\" ] \\\n      && [ ! -z \"${_target_ip}\" ] \\\n      && [ ! -z \"${_oct_uid}\" ] \\\n      && [ ! -z \"${_oct_mail}\" ] \\\n      && [ ! 
-z \"${_dedicated_ip}\" ]; then\n      _oct_mail=`echo ${_oct_mail} | sed \"s/\\@/\\\\\\@/g\"`;\n      if [ \"${_target_ip}\" = \"${_dedicated_ip}\" ]; then\n        _dedicated_ip=\"*\"\n        if [ -e \"/data/disk/${_oct_uid}/log/extra_domain.txt\" ]; then\n          _hmFrontExtra=$(cat /data/disk/${_oct_uid}/log/extra_domain.txt 2>&1)\n          _hmFrontExtra=$(echo -n ${_hmFrontExtra} | tr -d \"\\n\" 2>&1)\n        fi\n        if [ ! -z \"${_hmFrontExtra}\" ]; then\n          _dedicated_sn=\"${_domain_name} ${_hmFrontExtra} www.${_domain_name}\"\n        else\n          _dedicated_sn=\"${_domain_name} www.${_domain_name}\"\n        fi\n        _single=YES\n      else\n        _dedicated_sn=\"_\"\n        _single=NO\n      fi\n      ###\n      _Pln=\"/var/aegir/config/server_master/nginx/pre.d/z_${_domain_name}_pln_proxy.conf\"\n      _Ssl=\"/var/aegir/config/server_master/nginx/pre.d/z_${_domain_name}_ssl_proxy.conf\"\n      ###\n      if [ \"${_single}\" = \"YES\" ]; then\n        rm -f ${_Pln}\n      else\n        cp -af /var/xdrago/conf/pln_proxy.conf      ${_Pln}\n        sed -i \"s/_domain_name/${_domain_name}/g\"   ${_Pln}\n        wait\n        sed -i \"s/_target_ip/${_target_ip}/g\"       ${_Pln}\n        wait\n        sed -i \"s/_oct_uid/${_oct_uid}/g\"           ${_Pln}\n        wait\n        sed -i \"s/_oct_mail/${_oct_mail}/g\"         ${_Pln}\n        wait\n        sed -i \"s/_dedicated_ip/${_dedicated_ip}/g\" ${_Pln}\n        wait\n        sed -i \"s/_dedicated_sn/${_dedicated_sn}/g\" ${_Pln}\n        wait\n        echo OK created ${_Pln}\n      fi\n      ###\n      cp -af /var/xdrago/conf/ssl_proxy.conf        ${_Ssl}\n      sed -i \"s/_domain_name/${_domain_name}/g\"     ${_Ssl}\n      wait\n      sed -i \"s/_target_ip/${_target_ip}/g\"         ${_Ssl}\n      wait\n      sed -i \"s/_oct_uid/${_oct_uid}/g\"             ${_Ssl}\n      wait\n      sed -i \"s/_oct_mail/${_oct_mail}/g\"           ${_Ssl}\n      wait\n      sed -i 
\"s/_dedicated_ip/${_dedicated_ip}/g\"   ${_Ssl}\n      wait\n      sed -i \"s/_dedicated_sn/${_dedicated_sn}/g\"   ${_Ssl}\n      wait\n      ###\n      if [ -e \"${_Ssl}\" ]; then\n        _sslFile=\"/etc/ssl/private/${_domain_name}.dhp\"\n        if [ ! -e \"${_sslFile}\" ]; then\n          echo \"We will generate .dhp file now, please wait...\"\n          openssl dhparam -out ${_sslFile} 2048 &> /dev/null\n        else\n          _PFS_TEST=$(grep \"DH PARAMETERS\" ${_sslFile} 2>&1)\n          if [[ ! \"${_PFS_TEST}\" =~ \"DH PARAMETERS\" ]]; then\n            echo \"We will generate .dhp file now, please wait...\"\n            openssl dhparam -out ${_sslFile} 2048 &> /dev/null\n          fi\n        fi\n      fi\n      ###\n      echo OK created ${_Ssl}\n      service nginx reload\n    else\n      echo some variables missing in the record: $p\n    fi\n  done\n}\n\n_ssl_gen() {\n  if [ ! -e \"/root/.ssl.proxy.cnf\" ]; then\n    echo \"please create /root/.ssl.proxy.cnf first\"\n    exit 1\n  elif [ ! -e \"/var/xdrago/conf/pln_proxy.conf\" ]; then\n    echo \"file /var/xdrago/conf/pln_proxy.conf does not exist\"\n    exit 1\n  elif [ ! -e \"/var/xdrago/conf/ssl_proxy.conf\" ]; then\n    echo \"file /var/xdrago/conf/ssl_proxy.conf does not exist\"\n    exit 1\n  elif [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    echo \"vms ${_vms} aegir master not ready\"\n    exit 1\n  else\n    [ -e \"/root/.ssl.proxy.cnf\" ] && _sub_ssl_gen\n  fi\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    chmod a+w /dev/null\n    if [ ! 
-e \"/dev/fd\" ]; then\n      if [ -e \"/proc/self/fd\" ]; then\n        rm -rf /dev/fd\n        ln -sfn /proc/self/fd /dev/fd\n      fi\n    fi\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    _ADM_EMAIL=${_MY_EMAIL//\\\\\\@/\\@}\n    _BCC_EMAIL=${_MY_EMAIL//\\\\\\@/\\@}\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _SQL_CONNECT=localhost\n    elif [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n      _SQL_CONNECT=127.0.0.1\n    else\n      _SQL_CONNECT=\"${_THIS_DB_HOST}\"\n    fi\n    if [ \"${_THIS_DB_HOST}\" = \"${_MY_OWNIP}\" ]; then\n      _SQL_CONNECT=localhost\n    fi\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n}\n\ncase \"$1\" in\n  export)   _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  create)   _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  import)   _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _fix=\"fix\"\n            _check_root\n            _migrate_init\n  ;;\n  pretransfer) _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  transfer) _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  proxy)    _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  pre-mig)  _cmd=\"$1\"\n            _hst=\"$2\"\n            _check_root\n            _pre_mig\n  ;;\n  post-mig) _cmd=\"$1\"\n            _hst=\"$2\"\n            _check_root\n            _post_mig\n  ;;\n  ssl-gen)  _cmd=\"$1\"\n            _vms=\"$2\"\n            _check_root\n            _ssl_gen\n  ;;\n  *)        echo\n            echo \"Usage: xboa {pre-mig} {fqdn} (source+target vms)\"\n            echo \"Usage: xboa {export|create|pretransfer|transfer|proxy} {o1|shared} {target-ip} {o2}\"\n            echo \"Usage: xboa {import} {o1} {target-ip} {o2} (target vms, optional target account rename)\"\n            echo \"Usage: xboa {post-mig} {fqdn} (source+target vms)\"\n            echo\n            echo \"Usage: xboa {ssl-gen}\"\n            echo\n            exit 1\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/bin/xcopy",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_WEBG=www-data\n_THIS_DB_PORT=3306\n_ieni=\"--ignore-errors\"\n\n###\n### Avoid too many questions\n###\nexport DEBIAN_FRONTEND=noninteractive\nexport APT_LISTCHANGES_FRONTEND=none\nif [ -z \"${TERM+x}\" ]; then\n  export TERM=vt100\nfi\n\n###-------------SYSTEM-----------------###\n\n_enable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ \"$1\" != \"${_THIS_HM_USER}.ftp\" ]; then\n      chattr +i /home/$1/\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr +i /home/$1/platforms/\n        chattr +i /home/$1/platforms/*\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr +i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr +i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr +i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr +i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n_disable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! 
-z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ \"$1\" != \"${_THIS_HM_USER}.ftp\" ]; then\n      if [ -d \"/home/$1/\" ]; then\n        chattr -i /home/$1/\n      fi\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr -i /home/$1/platforms/\n        chattr -i /home/$1/platforms/*\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr -i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr -i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr -i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr -i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n_run_drush8_hmr_cmd() {\n  su -s /bin/bash - ${_THIS_HM_USER} -c \"drush8 @hostmaster $1\"\n  wait\n}\n\n_run_drush8_nosilent_cmd() {\n  su -s /bin/bash - ${_THIS_HM_USER} -c \"drush8 @${_Dom} $1\"\n  wait\n}\n\n_migrate_sites() {\n  for _Site in `find ${_USR}/config/server_master/nginx/vhost.d -maxdepth 1 -mindepth 1 -type f | sort`\n  do\n    _Dom=`echo ${_Site} | cut -d'/' -f9 | awk '{ print $1}'`\n    _Vht=\"${_USR}/config/server_master/nginx/vhost.d/${_Dom}\"\n    _Vhd=\"${_USR}/config/server_master/nginx/vhost.d/.${_Dom}\"\n    _Vhs=\"${_USR}/config/server_master/nginx/vhost.d/https.${_Dom}\"\n    if [ -e \"${_Vht}\" ] && [ ! -z \"${_oct}\" ]; then\n      chown ${_oct}:users ${_Vht}\n      chmod 600 ${_Vht}\n    fi\n    if [ -e \"${_Vhd}\" ] && [ ! 
-z \"${_oct}\" ]; then\n      chown ${_oct}:users ${_Vhd}\n      chmod 600 ${_Vhd}\n    fi\n    if [ -e \"${_USR}/.drush/${_Dom}.alias.drushrc.php\" ]; then\n      echo Dom is ${_Dom}\n      _Dir=`cat ${_USR}/.drush/${_Dom}.alias.drushrc.php | grep \"site_path'\" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`\n      _Plr=`cat ${_USR}/.drush/${_Dom}.alias.drushrc.php | grep \"root'\" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`\n      if [ \"${_cmd}\" = \"import\" ] || [ \"${_cmd}\" = \"export\" ]; then\n        _DBN=`cat ${_Dir}/drushrc.php | grep \"options\\['db_name'\\] = \" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,';]//g\"`\n        echo \"site ${_Dom} raw _DBN is ${_DBN}\"\n        _DBN=${_DBN//[^a-zA-Z0-9_]/}\n        echo \"site ${_Dom} clean _DBN is ${_DBN}\"\n      fi\n      if [ \"${_cmd}\" = \"import\" ]; then\n        _DBP=`cat ${_Dir}/drushrc.php | grep \"options\\['db_passwd'\\] = \" | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,';]//g\"`\n        echo \"site ${_Dom} raw _DBP is ${_DBP}\"\n        _DBP=${_DBP//[^a-zA-Z0-9_]/}\n        echo \"site ${_Dom} clean _DBP is ${_DBP}\"\n        if [ -e \"${_Vhd}\" ]; then\n          mv -f ${_Vhd} ${_Vht}\n        fi\n        if [ ! -z \"${_dst}\" ]; then\n          _TGT_HM_USER=\"${_dst}\"\n        else\n          _TGT_HM_USER=\"${_oct}\"\n        fi\n        chown ${_TGT_HM_USER}:users ${_Vht}\n        chmod 600 ${_Vht}\n        if [ -e \"${_Dir}/drushrc.php\" ]; then\n          sed -i \"s/\\$options\\['db_host'\\] = '.*/\\$options[\\'db_host\\'] = \\'localhost\\';/g\" ${_Dir}/drushrc.php\n          wait\n        fi\n        if [ -e \"${_Vht}\" ]; then\n          sed -i \"s/.*fastcgi_param.*db_host.*;/  fastcgi_param db_host   localhost;/g\" ${_Vht}\n          wait\n        fi\n        echo \"INFO: site ${_Dom} db_host on ${_TGT_HM_USER} fixed\"\n        if [ ! -z \"${_DBN}\" ] && [ ! -z \"${_DBP}\" ]; then\n          if [ ! 
-d \"/var/lib/mysql/${_DBN}\" ]; then\n            if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n              _SQL_CONNECT=127.0.0.1\n              _THIS_DB_PORT=6033\n              mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges\n            else\n              mysqladmin -u root flush-privileges\n            fi\n            [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL3 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n            if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n              _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n              mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_DBN}';\nDELETE FROM mysql_query_rules WHERE username='${_DBN}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_DBN}','${_DBP}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_DBN}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_DBN}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n            fi\n            mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE DATABASE ${_DBN} CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;\nCREATE USER IF NOT EXISTS '${_DBN}'@'localhost';\nCREATE USER IF NOT EXISTS '${_DBN}'@'%';\nGRANT ALL ON ${_DBN}.* TO '${_DBN}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_DBN}.* TO '${_DBN}'@'%' WITH GRANT OPTION;\nALTER USER '${_DBN}'@'localhost' IDENTIFIED BY '${_DBP}';\nALTER USER '${_DBN}'@'%' IDENTIFIED BY '${_DBP}';\nEOFMYSQL\n            if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n              _SQL_CONNECT=127.0.0.1\n              
_THIS_DB_PORT=6033\n              mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges\n            else\n              mysqladmin -u root flush-privileges\n            fi\n            echo \"INFO: site ${_Dom} db setup on ${_oct} complete\"\n          else\n            echo \"ALRT: site ${_Dom} db ${_DBN} already exists!\"\n          fi\n          if [ -e \"${_USR}/src/${_DBN}\" ]; then\n            if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n              _SQL_CONNECT=127.0.0.1\n              _THIS_DB_PORT=6033\n            fi\n            [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL4 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp\"\n            if [ -x \"/usr/local/bin/mydumper\" ]; then\n              _MYQUICK_ITD=$(mydumper -V 2>&1 \\\n                | tr -d \"\\n\" \\\n                | tr -d \",\" \\\n                | tr -d \"v\" \\\n                | cut -d\" \" -f2 \\\n                | awk '{ print $1}' 2>&1)\n              _DB_V=$(mysql -V 2>&1 \\\n                | tr -d \"\\n\" \\\n                | cut -d\" \" -f6 \\\n                | awk '{ print $1}' \\\n                | cut -d\"-\" -f1 \\\n                | awk '{ print $1}' \\\n                | sed \"s/[\\,']//g\" 2>&1)\n              if [ \"${_DB_V}\" = \"Linux\" ]; then\n                _DB_V=$(mysql -V 2>&1 \\\n                  | tr -d \"\\n\" \\\n                  | cut -d\" \" -f4 \\\n                  | awk '{ print $1}' \\\n                  | cut -d\"-\" -f1 \\\n                  | awk '{ print $1}' \\\n                  | sed \"s/[\\,']//g\" 2>&1)\n              fi\n              _MD_V=$(mydumper --version 2>&1 \\\n                | tr -d \"\\n\" \\\n                | cut -d\" \" -f6 \\\n                | awk '{ print $1}' \\\n                | cut -d\"-\" -f1 \\\n                | awk '{ print $1}' \\\n                | sed \"s/[\\,']//g\" 2>&1)\n              if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n                
echo \"INFO: Installed MyQuick ${_MYQUICK_ITD} for ${_MD_V} (${_DB_V})\"\n              fi\n            fi\n            myloader \\\n              --database=${_DBN} \\\n              --host=localhost \\\n              --user=root \\\n              --password=${_SQL_PSWD} \\\n              --port=3306 \\\n              --directory=${_USR}/src/${_DBN}/ \\\n              --threads=4 \\\n              --overwrite-tables \\\n              --verbose=1\n            echo \"INFO: site ${_Dom} db import on ${_oct} complete\"\n          else\n            echo \"ALRT: site ${_Dom} db failure due to missing ${_USR}/src/${_DBN}\"\n          fi\n          if [ -e \"${_USR}/.drush/${_Dom}.alias.drushrc.php\" ]; then\n            _run_drush8_hmr_cmd \"hosting-task @${_Dom} verify --force\"\n            echo \"INFO: site ${_Dom} verify on ${_oct} scheduled\"\n          else\n            echo \"ALRT: site ${_Dom} verify failure due to missing ${_USR}/.drush/${_Dom}.alias.drushrc.php\"\n          fi\n        else\n          echo \"ALRT: site ${_Dom} db failure due to empty _DBN/${_DBN} or _DBP/${_DBP}\"\n        fi\n      elif [ \"${_cmd}\" = \"export\" ]; then\n        if [ -x \"/usr/local/bin/mydumper\" ]; then\n          _MYQUICK_ITD=$(mydumper -V 2>&1 \\\n            | tr -d \"\\n\" \\\n            | tr -d \",\" \\\n            | tr -d \"v\" \\\n            | cut -d\" \" -f2 \\\n            | awk '{ print $1}' 2>&1)\n          _DB_V=$(mysql -V 2>&1 \\\n            | tr -d \"\\n\" \\\n            | cut -d\" \" -f6 \\\n            | awk '{ print $1}' \\\n            | cut -d\"-\" -f1 \\\n            | awk '{ print $1}' \\\n            | sed \"s/[\\,']//g\" 2>&1)\n          if [ \"${_DB_V}\" = \"Linux\" ]; then\n            _DB_V=$(mysql -V 2>&1 \\\n              | tr -d \"\\n\" \\\n              | cut -d\" \" -f4 \\\n              | awk '{ print $1}' \\\n              | cut -d\"-\" -f1 \\\n              | awk '{ print $1}' \\\n              | sed \"s/[\\,']//g\" 2>&1)\n          
fi\n          _MD_V=$(mydumper --version 2>&1 \\\n            | tr -d \"\\n\" \\\n            | cut -d\" \" -f6 \\\n            | awk '{ print $1}' \\\n            | cut -d\"-\" -f1 \\\n            | awk '{ print $1}' \\\n            | sed \"s/[\\,']//g\" 2>&1)\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            echo \"INFO: Installed MyQuick ${_MYQUICK_ITD} for ${_MD_V} (${_DB_V})\"\n          fi\n        fi\n        mkdir -p ${_USR}/src/${_DBN}\n        mydumper \\\n          --database=${_DBN} \\\n          --host=localhost \\\n          --user=root \\\n          --password=${_SQL_PSWD} \\\n          --port=3306 \\\n          --outputdir=${_USR}/src/${_DBN}/ \\\n          --rows=50000 \\\n          --build-empty-files \\\n          --threads=4 \\\n          --long-query-guard=900 \\\n          --clear \\\n          --verbose=1\n        echo \"INFO: site ${_Dom} db export on ${_oct} complete\"\n      elif [ \"${_cmd}\" = \"fooproxy\" ]; then\n        if [ -e \"${_USR}/log/exported.pid\" ] && [ ! 
-z \"${_tgt}\" ]; then\n          if [ -e \"${_Vht}\" ]; then\n            _NMS=`cat ${_Vht} | grep \"server_name\" | sed \"s/server_name//g; s/;//g\" | sort | uniq | tr -d \"\\n\" | sed \"s/  / /g; s/  / /g; s/  / /g\" | sort | uniq`\n            if [ -e \"${_USR}/config/server_master/ssl.d/${_Dom}/openssl.key\" ] \\\n              && [ -e \"/var/xdrago/conf/https_proxy_le.conf\" ]; then\n              cp -a /var/xdrago/conf/https_proxy_le.conf ${_Vhs}\n              sed -i \"s/.*server_name.*;/  server_name  ${_NMS};/g\" ${_Vhs}\n              wait\n              sed -i \"s/_target_ip/${_tgt}/g\" ${_Vhs}\n              wait\n              sed -i \"s/_oct_uid/${_oct}/g\" ${_Vhs}\n              wait\n              sed -i \"s/_domain_name/${_Dom}/g\" ${_Vhs}\n              wait\n              chown ${_oct}:users ${_Vhs}\n              chmod 600 ${_Vhs}\n            fi\n            mv -f ${_Vht} ${_Vhd}\n            cp -a /var/xdrago/conf/proxy.conf ${_Vht}\n            sed -i \"s/.*server_name.*;/  server_name  ${_NMS};/g\" ${_Vht}\n            wait\n            sed -i \"s/_target_ip/${_tgt}/g\" ${_Vht}\n            wait\n            chown ${_oct}:users ${_Vht}\n            chmod 600 ${_Vht}\n            echo \"INFO: site ${_Dom} proxy conversion on ${_oct} complete\"\n          else\n            echo \"ALRT: site ${_Dom} proxy conversion on ${_oct} failed due to missing ${_Vht}\"\n          fi\n        else\n          echo \"ALRT: site ${_Dom} proxy conversion on ${_oct} failed due to empty tgt/${_tgt}\"\n        fi\n      fi\n    fi\n  done\n}\n\n_migrate_prepare() {\n  _THIS_HM_USER=${_oct}\n  _USR=/data/disk/${_oct}\n  _THIS_HM_SITE=`cat ${_USR}/.drush/hostmaster.alias.drushrc.php \\\n    | grep \"site_path'\" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,']//g\"`\n  echo _THIS_HM_USER ${_THIS_HM_USER}\n  echo _USR ${_USR}\n  echo _THIS_HM_SITE ${_THIS_HM_SITE}\n  if [ -e \"${_USR}/log/option.txt\" ]; then\n    _CLIENT_OPTION=`cat 
${_USR}/log/option.txt`\n    _CLIENT_OPTION=`echo -n ${_CLIENT_OPTION} | tr -d \"\n\"`\n  fi\n  if [ -e \"${_USR}/log/cores.txt\" ]; then\n    _CLIENT_CORES=`cat ${_USR}/log/cores.txt`\n    _CLIENT_CORES=`echo -n ${_CLIENT_CORES} | tr -d \"\n\"`\n  fi\n  if [ -e \"${_USR}/log/subscr.txt\" ]; then\n    _CLIENT_SUBSCR=`cat ${_USR}/log/subscr.txt`\n    _CLIENT_SUBSCR=`echo -n ${_CLIENT_SUBSCR} | tr -d \"\n\"`\n  fi\n  if [ -e \"${_USR}/log/email.txt\" ]; then\n    _CLIENT_EMAIL=`cat ${_USR}/log/email.txt`\n    _CLIENT_EMAIL=`echo -n ${_CLIENT_EMAIL} | tr -d \"\n\"`\n    _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\@/\@}\n  fi\n\n  [ -z \"${_THIS_DB_PORT}\" ] && _THIS_DB_PORT=3306\n  [ -z \"${_CLIENT_OPTION}\" ] && _CLIENT_OPTION=EDGE\n  [ -z \"${_CLIENT_SUBSCR}\" ] && _CLIENT_SUBSCR=M\n  [ -z \"${_CLIENT_CORES}\" ]  && _CLIENT_CORES=1\n  [ -z \"${_CLIENT_EMAIL}\" ]  && _CLIENT_EMAIL=\"omega8cc@gmail.com\"\n\n  echo \"_CLIENT_EMAIL is ${_CLIENT_EMAIL}\"\n  echo \"_THIS_HM_USER is ${_THIS_HM_USER}\"\n  echo \"_CLIENT_OPTION is ${_CLIENT_OPTION}\"\n  echo \"_CLIENT_SUBSCR is ${_CLIENT_SUBSCR}\"\n  echo \"_CLIENT_CORES is ${_CLIENT_CORES}\"\n\n  su -s /bin/bash ${_THIS_HM_USER} -c \"drush8 cc drush\"\n  wait\n  _disable_chattr ${_THIS_HM_USER}.ftp\n  rm -rf /home/${_THIS_HM_USER}.ftp/drush-backups\n  _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_task WHERE task_type='delete' AND task_status='-1'\\\"\"\n  _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_task WHERE task_type='delete' AND task_status='0' AND executed='0'\\\"\"\n  rm -rf ${_USR}/clients/*/backups\n  symlinks -dr ${_USR}/clients\n  if [ -d \"/home/${_THIS_HM_USER}.ftp\" ]; then\n    symlinks -dr /home/${_THIS_HM_USER}.ftp\n    rm -f /home/${_THIS_HM_USER}.ftp/{.profile,.bash_logout,.bash_profile,.bashrc}\n  fi\n  if [ -e \"${_THIS_HM_SITE}/drushrc.php\" ]; then\n    _HDB=`cat ${_THIS_HM_SITE}/drushrc.php \\\n      | grep \"options\\['db_name'\\] = \" \\\n      | cut -d: -f2 \\\n      | awk '{ print $3}' \\\n      | sed \"s/[\\,';]//g\"`\n    echo \"raw _HDB is ${_HDB}\"\n    
_HDB=${_HDB//[^a-zA-Z0-9_]/}\n    echo \"clean _HDB is ${_HDB}\"\n  else\n    echo \"no ${_THIS_HM_SITE}/drushrc.php found\"\n  fi\n  mkdir -p ${_USR}/src\n  if [ \"${_cmd}\" = \"export\" ] \\\n    && [ ! -z \"${_HDB}\" ] \\\n    && [ ! -e \"${_USR}/src/prev_hostmaster.sql\" ] \\\n    && [ ! -e \"${_USR}/log/exported.pid\" ]; then\n      mysqldump -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp \\\n        --single-transaction \\\n        --quick \\\n        --no-autocommit \\\n        --skip-add-locks \\\n        --no-tablespaces \\\n        --hex-blob ${_HDB} \\\n        comment \\\n        date_format_locale \\\n        date_format_type \\\n        date_formats \\\n        fe_block_boxes \\\n        features_signature \\\n        field_config \\\n        field_config_instance \\\n        field_data_body \\\n        field_data_comment_body \\\n        field_data_field_composer_git_docroot \\\n        field_data_field_composer_git_path \\\n        field_data_field_composer_git_project_url \\\n        field_data_field_composer_git_version \\\n        field_data_field_composer_project_docroot \\\n        field_data_field_composer_project_package \\\n        field_data_field_composer_project_path \\\n        field_data_field_composer_project_version \\\n        field_data_field_deployment_strategy \\\n        field_data_field_git_docroot \\\n        field_data_field_git_reference \\\n        field_data_field_git_repository_path \\\n        field_data_field_git_repository_url \\\n        field_revision_body \\\n        field_revision_comment_body \\\n        field_revision_field_composer_git_docroot \\\n        field_revision_field_composer_git_path \\\n        field_revision_field_composer_git_project_url \\\n        field_revision_field_composer_git_version \\\n        field_revision_field_composer_project_docroot \\\n        field_revision_field_composer_project_package \\\n        field_revision_field_composer_project_path \\\n        
field_revision_field_composer_project_version \\\n        field_revision_field_deployment_strategy \\\n        field_revision_field_git_docroot \\\n        field_revision_field_git_reference \\\n        field_revision_field_git_repository_path \\\n        field_revision_field_git_repository_url \\\n        filter \\\n        filter_format \\\n        history \\\n        hosting_backup_gc_sites \\\n        hosting_client \\\n        hosting_client_user \\\n        hosting_context \\\n        hosting_cron \\\n        hosting_git \\\n        hosting_git_pull \\\n        hosting_http_basic_auth \\\n        hosting_package \\\n        hosting_package_instance \\\n        hosting_package_languages \\\n        hosting_platform \\\n        hosting_platform_client_access \\\n        hosting_service \\\n        hosting_site \\\n        hosting_site_alias \\\n        hosting_site_backups \\\n        hosting_ssl_cert \\\n        hosting_ssl_cert_ips \\\n        hosting_ssl_server \\\n        hosting_ssl_site \\\n        hosting_task \\\n        hosting_task_arguments \\\n        hosting_task_log \\\n        menu_custom \\\n        menu_links \\\n        menu_router \\\n        node \\\n        node_access \\\n        node_comment_statistics \\\n        node_revision \\\n        node_type \\\n        role \\\n        role_permission \\\n        url_alias \\\n        userprotect \\\n        users \\\n        users_roles \\\n        > ${_USR}/src/prev_hostmaster.sql\n    if [ -e \"${_USR}/src/prev_hostmaster.sql\" ]; then\n      echo \"INFO: hostmaster ${_oct} db export complete\"\n      touch ${_USR}/log/exported.pid\n    else\n      echo \"ALRT: no ${_USR}/src/prev_hostmaster.sql exists!\"\n      exit 1\n    fi\n  fi\n  if [ \"${_cmd}\" = \"import\" ] \\\n    && [ -e \"${_USR}/config/includes/nginx_vhost_common.conf\" ] \\\n    && [ ! -z \"${_HDB}\" ] \\\n    && [ -e \"${_USR}/src/prev_hostmaster.sql\" ] \\\n    && [ ! -e \"${_USR}/log/imported.pid\" ]; then\n    if [ ! 
-z \"${_dst}\" ]; then\n      if [ \"${_oct}\" = \"${_dst}\" ]; then\n        [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL5 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp\"\n        mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp ${_HDB} < ${_USR}/src/prev_hostmaster.sql\n        _run_drush8_hmr_cmd \"variable-set --always-set site_frontpage 'hosting/sites'\"\n        touch ${_USR}/log/imported.pid\n      fi\n    else\n      [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL6 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp\"\n      mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp ${_HDB} < ${_USR}/src/prev_hostmaster.sql\n      _run_drush8_hmr_cmd \"variable-set --always-set site_frontpage 'hosting/sites'\"\n      touch ${_USR}/log/imported.pid\n    fi\n\n    if [ \"${_fix}\" = \"fix\" ]; then\n      if [ ! -e \"${_USR}/log/post-merge-fix.pid\" ]; then\n        echo \"HOTFIX B: Hostmaster STATUS: Fix for migrated/merged instance 1/2 start\"\n        _DOMAIN=`cat ${_USR}/log/domain.txt`\n        _DOMAIN=`echo -n ${_DOMAIN} | tr -d \"\\n\"`\n        _USE_AEGIR_HOST=`uname -n`\n\n        ### Pre-Fix for migrated/merged instances\n        if [ -e \"${_USR}/log/imported.pid\" ] || [ -e \"${_USR}/log/exported.pid\" ]; then\n          if [ -e \"${_USR}/aegir/distro/002/sites/${_DOMAIN}/drushrc.php\" ]; then\n            sed -i \"s/platform_0.*'/platform_002'/g\"           ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/0.*\\/sites/distro\\/002\\/sites/g\" ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/01.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/02.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/03.*',/distro\\/002',/g\"          
${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/04.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/05.*',/distro\\/002',/g\"          ${_USR}/.drush/hostmaster.alias.drushrc.php\n            wait\n            sed -i \"s/platform_0.*'/platform_002'/g\"           ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/0.*\\/sites/distro\\/002\\/sites/g\" ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/01.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/02.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/03.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/04.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n            wait\n            sed -i \"s/distro\\/05.*',/distro\\/002',/g\"          ${_USR}/.drush/hm.alias.drushrc.php\n          fi\n        fi\n\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO hosting_context (nid, name) VALUES ('4', 'server_localhost'), ('2', 'server_master')\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO hosting_package (vid, nid, package_type, short_name, old_short_name, description) VALUES ('6', '6', 'platform', 'drupal', '', '')\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO node_revision (nid, vid, uid, title, body, teaser, log, timestamp, format) VALUES ('6', '6', '1', 'drupal', '', '', '', '1412168340', '0')\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"REPLACE INTO node (nid, vid, type, language, title, uid, status, created, changed, comment, promote, moderate, sticky, tnid, translate) VALUES ('6', '6', 'package', '', 'drupal', '1', '1', '1412168321', '1412168340', '0', '0', '0', '0', '0', 
'0')\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_package WHERE nid=2 AND short_name='drupal'\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_package WHERE nid=4 AND short_name='drupal'\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM node WHERE nid=8 AND type='site'\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM node_revision WHERE nid=8\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET type='server' WHERE nid=2\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET type='server' WHERE nid=4\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET title='${_USE_AEGIR_HOST}' WHERE nid=2\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET title='localhost' WHERE nid=4\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET title='${_USE_AEGIR_HOST}' WHERE nid=2\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET title='localhost' WHERE nid=4\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET db_server=4 WHERE db_server=2\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform SET web_server=2 WHERE web_server=0\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE users_roles SET rid=7 WHERE rid=5\\\"\"\n        su -s /bin/bash ${_THIS_HM_USER} -c \"drush8 cc drush\"\n        wait\n        rm -rf ${_USR}/.tmp/cache\n        _run_drush8_hmr_cmd \"hosting-task @server_localhost verify --force\"\n        _run_drush8_hmr_cmd \"hosting-dispatch\"\n        sleep 5\n        _run_drush8_hmr_cmd \"hosting-tasks --force\"\n        echo \"HOTFIX B: Hostmaster STATUS: Fix for migrated/merged instance 1/2 complete\"\n      fi\n      _vSet=\"variable-set --always-set\"\n      _run_drush8_hmr_cmd \"${_vSet} hosting_client_send_welcome 0\"\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET client=1 WHERE profile=7\\\"\"\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET client=1 WHERE profile=9\\\"\"\n   
   _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site SET client=1 WHERE client=0\\\"\"\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform SET web_server=2 WHERE web_server=0\\\"\"\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET uid=1 WHERE uid=0\\\"\"\n      _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET uid=1 WHERE uid=0\\\"\"\n      _HM_NID=$(_run_drush8_hmr_cmd \"sqlq \\\"SELECT MIN(site.nid) AS lowest_nid FROM hosting_site site JOIN hosting_package_instance pkgi ON pkgi.rid=site.nid JOIN hosting_package pkg ON pkg.nid=pkgi.package_id WHERE pkg.short_name='hostmaster';\\\"\" 2>&1)\n      _HM_NID=${_HM_NID//[^0-9]/}\n      if [ ! -z \"${_HM_NID}\" ]; then\n        echo \"HOTFIX B: Hostmaster STATUS: Fix 1/2 hosting_context ${_HM_NID}\"\n        if [ -e \"${_USR}/aegir/distro/002/sites/${_DOMAIN}/drushrc.php\" ]; then\n          _HM_PLF=$(_run_drush8_hmr_cmd \"sqlq \\\"SELECT platform FROM hosting_site WHERE nid=${_HM_NID}\\\"\" 2>&1)\n          _HM_PLF=${_HM_PLF//[^0-9]/}\n          _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_context WHERE name='platform_002' AND nid != ${_HM_PLF}\\\"\"\n          _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_context SET name='platform_002' WHERE nid=${_HM_PLF}\\\"\"\n        fi\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_context SET name='hostmaster' WHERE nid=${_HM_NID}\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE node_revision SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_site_alias SET alias='www.${_DOMAIN}' WHERE nid=${_HM_NID}\\\"\"\n        echo FIXED > ${_USR}/log/post-merge-fix.pid\n      else\n        echo \"HOTFIX B: Hostmaster STATUS: Fix 1/2 hosting_context _HM_NID empty!\"\n      fi\n      if [ -e \"${_USR}/aegir/distro/002/sites/${_DOMAIN}/drushrc.php\" ] \\\n        && [ ! 
-e \"${_USR}/log/hmpathfix.pid\" ]; then\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform SET publish_path='${_USR}/aegir/distro/002' WHERE publish_path LIKE '%/aegir/distro/%'\\\"\"\n        touch ${_USR}/log/hmpathfix.pid\n      fi\n    fi\n\n  fi\n  if [ \"${_cmd}\" = \"create\" ]; then\n    if [ ! -z \"${_dst}\" ]; then\n      _TGT_HM_USER=\"${_dst}\"\n    else\n      _TGT_HM_USER=\"${_oct}\"\n    fi\n    if [ ! -z \"${_CLIENT_EMAIL}\" ] \\\n      && [ ! -z \"${_TGT_HM_USER}\" ] \\\n      && [ ! -z \"${_CLIENT_OPTION}\" ] \\\n      && [ ! -z \"${_CLIENT_SUBSCR}\" ] \\\n      && [ ! -z \"${_CLIENT_CORES}\" ]; then\n      ssh root@${_tgt} \"/opt/local/bin/boa in-octopus ${_CLIENT_EMAIL} ${_TGT_HM_USER} ${_tRee} ${_CLIENT_OPTION} ${_CLIENT_SUBSCR} ${_CLIENT_CORES} noscreen\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/tools/le/.ctrl/ssl-demo-mode.pid\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/tools/le/config\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/tools/le/config.sh\"\n      echo \"Waiting 8 minutes before attempting to run enforced post-install upgrade...\"\n      sleep 500\n      ssh root@${_tgt} \"test ! -e /data/disk/${_TGT_HM_USER}/log/upgrademail.txt && /bin/echo UP >> /data/disk/${_TGT_HM_USER}/static/control/platforms.info\"\n      ssh root@${_tgt} \"test ! 
-e /data/disk/${_TGT_HM_USER}/log/upgrademail.txt && /bin/touch /data/disk/${_TGT_HM_USER}/static/control/run-upgrade.pid\"\n      ssh root@${_tgt} \"/bin/rm -f /data/disk/${_TGT_HM_USER}/static/control/*.pid\"\n      echo \"INFO: new octopus ${_TGT_HM_USER} setup on ${_tgt} complete\"\n    else\n      echo \"ALRT: new octopus ${_TGT_HM_USER} setup on ${_tgt} failed because of missing arguments\"\n      exit 1\n    fi\n  fi\n  if [ \"${_cmd}\" = \"import\" ] \\\n    || [ \"${_cmd}\" = \"export\" ] \\\n    || [ \"${_cmd}\" = \"fooproxy\" ]; then\n    _migrate_sites\n  fi\n  echo Done ${_cmd} for ${_USR}\n  _enable_chattr ${_THIS_HM_USER}.ftp\n}\n\n_xboa_transfer_static_symlink_safe() {\n  _X_SRC_USR=\"$1\"\n  _X_TGT_HOST=\"$2\"\n  _X_DST_USR=\"$3\"\n  _X_IENI=\"$4\"\n\n  _X_SRC_STATIC=\"${_X_SRC_USR}/static\"\n  _X_SRC_STATIC_FILES=\"${_X_SRC_USR}/static/files\"\n  _X_DST_STATIC=\"${_X_DST_USR}/static\"\n  _X_DST_STATIC_FILES=\"${_X_DST_USR}/static/files\"\n\n  if [ ! -d \"${_X_SRC_STATIC}\" ]; then\n    return 0\n  fi\n\n  echo \"INFO: Static sync pass 1/2 (preserve symlinks, exclude static/files)\"\n\n  mkdir -p \"${_X_SRC_USR}/static\"\n  rsync -aEAXqu ${_X_IENI} --exclude=files -e ssh \"${_X_SRC_USR}/static/\" \"root@${_X_TGT_HOST}:${_X_DST_STATIC}/\"\n\n  if [ -L \"${_X_SRC_STATIC_FILES}\" ]; then\n    _X_SRC_STATIC_FILES_REAL=\"$(readlink -f \"${_X_SRC_STATIC_FILES}\" 2>/dev/null)\"\n    echo \"INFO: Source static/files is symlink -> ${_X_SRC_STATIC_FILES_REAL}\"\n  elif [ -d \"${_X_SRC_STATIC_FILES}\" ]; then\n    _X_SRC_STATIC_FILES_REAL=\"${_X_SRC_STATIC_FILES}\"\n    echo \"INFO: Source static/files is a real directory\"\n  else\n    _X_SRC_STATIC_FILES_REAL=\"\"\n    echo \"INFO: Source static/files missing, nothing to materialize\"\n  fi\n\n  if [ -n \"${_X_SRC_STATIC_FILES_REAL}\" ] && [ -d \"${_X_SRC_STATIC_FILES_REAL}\" ]; then\n    echo \"INFO: Static sync pass 2/2 (materialize static/files as real directory on destination)\"\n    ssh 
root@\"${_X_TGT_HOST}\" \"if [ -L '${_X_DST_STATIC_FILES}' ]; then rm -f '${_X_DST_STATIC_FILES}'; elif [ -e '${_X_DST_STATIC_FILES}' ] && [ ! -d '${_X_DST_STATIC_FILES}' ]; then rm -f '${_X_DST_STATIC_FILES}'; fi; mkdir -p '${_X_DST_STATIC_FILES}'\"\n    rsync -aEAXqu ${_X_IENI} -e ssh \"${_X_SRC_STATIC_FILES_REAL}/\" \"root@${_X_TGT_HOST}:${_X_DST_STATIC_FILES}/\"\n  fi\n}\n\n_migrate_init() {\n  _USR=/data/disk/${_oct}\n  if [ \"${_oct}\" = \"arch\" ] \\\n    || [ \"${_oct}\" = \"all\" ] \\\n    || [ \"${_oct}\" = \"lost+found\" ] \\\n    || [ \"${_oct}\" = \"lostfound\" ] \\\n    || [ \"${_oct}\" = \"backups\" ] \\\n    || [ \"${_oct}\" = \"codebases-cleanup\" ]; then\n    echo \"_USR ${_oct} is not eligible for migration: system dir\"\n    exit 1\n  fi\n  if [ ! -e \"${_USR}/tools/le/dehydrated\" ] && [ \"${_oct}\" != \"shared\" ]; then\n    echo\n    echo \"ERROR: This version of xboa migration tool supports only BOA-3.2.2 or newer\"\n    echo \"ERROR: Please upgrade ${_oct} instance to current BOA head before trying again\"\n    echo \"Bye.\"\n    echo\n    exit 1\n  fi\n  if [ -e \"/data/conf/${_USR}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  fi\n  if [ \"${_oct}\" = \"xusage\" ]; then\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} start\"\n    if [ -e \"/var/log/boa/usage\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/log/boa/usage/* root@${_tgt}:/var/log/boa/usage/\n      echo \"INFO: ${_cmd} for ${_oct} /var/log/boa/usage complete\"\n    fi\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} complete\"\n  elif [ \"${_oct}\" = \"shared\" ]; then\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} start\"\n    if [ -d \"/data/all\" ] && [ ! 
-L \"/data/all\" ]; then\n      rsync -aEAXqu ${_ieni} -e ssh /data/all  root@${_tgt}:/data/\n      echo \"INFO: ${_cmd} for ${_oct} /data/all complete\"\n    fi\n    if [ -d \"/data/disk/all\" ]; then\n      rsync -aEAXqu ${_ieni} -e ssh /data/disk/all  root@${_tgt}:/data/\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/all complete\"\n    fi\n    if [ -e \"/data/disk/arch\" ]; then\n      rsync -aEAXqu ${_ieni} -e ssh /data/disk/arch  root@${_tgt}:/data/disk/\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/arch complete\"\n    fi\n    if [ -e \"/opt/solr4\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /opt/solr4 root@${_tgt}:/opt/\n      echo \"INFO: ${_cmd} for ${_oct} /opt/solr4 complete\"\n    fi\n    if [ -e \"/var/solr7/data\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/solr7/data root@${_tgt}:/var/solr7/\n      echo \"INFO: ${_cmd} for ${_oct} /var/solr7/data complete\"\n    fi\n    if [ -e \"/var/solr9/data\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/solr9/data root@${_tgt}:/var/solr9/\n      echo \"INFO: ${_cmd} for ${_oct} /var/solr9/data complete\"\n    fi\n    if [ -e \"/var/www/static\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/www/static root@${_tgt}:/var/www/\n      echo \"INFO: ${_cmd} for ${_oct} /var/www/static complete\"\n    fi\n    if [ -e \"/etc/bind\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /etc/bind root@${_tgt}:/etc/\n      echo \"INFO: ${_cmd} for ${_oct} /etc/bind complete\"\n    fi\n    if [ -e \"/var/log/boa/usage\" ]; then\n      rsync -aEAXq ${_ieni} -e ssh /var/log/boa/usage root@${_tgt}:/var/log/boa/\n      echo \"INFO: ${_cmd} for ${_oct} /var/log/boa/usage complete\"\n    fi\n    if [ -e \"/data/disk/legacy\" ]; then\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/legacy start\"\n      rsync -aEAXq ${_ieni} -e ssh /data/disk/legacy root@${_tgt}:/data/disk/\n      echo \"INFO: ${_cmd} for ${_oct} /data/disk/legacy complete\"\n    fi\n    echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} complete\"\n  
else\n    _USR=/data/disk/${_oct}\n    if [ -e \"${_USR}/log/email.txt\" ]; then\n      _CHECK_EMAIL=`cat ${_USR}/log/email.txt`\n      _CHECK_EMAIL=`echo -n ${_CHECK_EMAIL} | tr -d \"\\n\"`\n      _CHECK_EMAIL=${_CHECK_EMAIL//\\\\\\@/\\@}\n      if [ \"${_CHECK_EMAIL}\" = \"omega8cc@gmail.com\" ] \\\n        || [[ \"${_CHECK_EMAIL}\" =~ \"emaylx@\" ]] \\\n        || [[ \"${_CHECK_EMAIL}\" =~ \"mixomax@\" ]] \\\n        || [[ \"${_CHECK_EMAIL}\" =~ \"@omega8.cc\" ]]; then\n        _RESULT_EMAIL=OUR\n      else\n        _RESULT_EMAIL=OK\n      fi\n    fi\n    if [ \"${_oct}\" = \"arch\" ] \\\n      || [ \"${_oct}\" = \"all\" ] \\\n      || [ \"${_oct}\" = \"lost+found\" ] \\\n      || [ \"${_oct}\" = \"lostfound\" ] \\\n      || [ \"${_oct}\" = \"backups\" ] \\\n      || [ \"${_oct}\" = \"codebases-cleanup\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: system dir\"\n      exit 1\n    elif [ ! -e \"${_USR}/log/cores.txt\" ] \\\n      || [ ! -e \"${_USR}/log/option.txt\" ] \\\n      || [ ! 
-e \"${_USR}/log/email.txt\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: broken\"\n      exit 1\n    elif [ -e \"${_USR}/log/CANCELLED\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: cancelled\"\n      exit 1\n    elif [ -e \"${_USR}/log/proxied.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already migrated\"\n      exit 1\n    elif [ \"${_cmd}\" = \"create\" ] && [ -e \"${_USR}/log/exported.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already created\"\n      exit 1\n    elif [ \"${_cmd}\" = \"export\" ] && [ -e \"${_USR}/log/exported.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already exported\"\n      exit 1\n    elif [ \"${_cmd}\" = \"transfer\" ] && [ -e \"${_USR}/log/transferred.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already transferred\"\n      exit 1\n    elif [ \"${_cmd}\" = \"import\" ] && [ -e \"${_USR}/log/imported.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already imported\"\n      exit 1\n    elif [ \"${_cmd}\" = \"fooproxy\" ] && [ -e \"${_USR}/log/proxied.pid\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: already proxied\"\n      exit 1\n    elif [ \"${_RESULT_EMAIL}\" = \"OUR\" ]; then\n      echo \"_USR ${_oct} is not eligible for migration: internal only\"\n      exit 1\n    else\n      echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} ${_dst} start\"\n      if [ \"${_cmd}\" = \"transfer\" ] || [ \"${_cmd}\" = \"pretransfer\" ]; then\n        if [ ! 
-z \"${_dst}\" ]; then\n          _DST=/data/disk/${_dst}\n        else\n          _DST=/data/disk/${_oct}\n        fi\n        if [ -e \"${_USR}\" ]; then\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/distro/*         root@${_tgt}:${_DST}/distro/\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/src/*            root@${_tgt}:${_DST}/src/\n          _xboa_transfer_static_symlink_safe \"${_USR}\" \"${_tgt}\" \"${_DST}\" \"${_ieni}\"\n          #\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/clients/*        root@${_tgt}:${_DST}/clients/\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/log/*            root@${_tgt}:${_DST}/log/\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/static/control/* root@${_tgt}:${_DST}/static/control/\n          #\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/backups/*        root@${_tgt}:${_DST}/backups/\n          rsync -aEAXqu ${_ieni} -e ssh ${_USR}/undo/*           root@${_tgt}:${_DST}/undo/\n          if [ ! -z \"${_dst}\" ]; then\n            rsync -aEAXqu ${_ieni} -e ssh /home/${_oct}.ftp/.ssh  root@${_tgt}:/home/${_dst}.ftp/\n          else\n            rsync -aEAXqu ${_ieni} -e ssh /home/${_oct}.ftp/.ssh  root@${_tgt}:/home/${_oct}.ftp/\n          fi\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            --exclude=server_ \\\n            --exclude=hostmaster \\\n            -e ssh ${_USR}/.drush/*.alias.drushrc.php \\\n            root@${_tgt}:${_DST}/.drush/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            --exclude=host8.biz \\\n            --exclude=boa.io \\\n            --exclude=aegir.cc \\\n            -e ssh ${_USR}/config/server_master/nginx/vhost.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/vhost.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/nginx/pre.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/pre.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh 
${_USR}/config/server_master/nginx/post.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/post.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/nginx/subdir.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/subdir.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/nginx/platform.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/nginx/platform.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/ssl.d/* \\\n            root@${_tgt}:${_DST}/config/ssl.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/config/server_master/ssl.d/* \\\n            root@${_tgt}:${_DST}/config/server_master/ssl.d/\n\n          rsync -aEAXqu \\\n            ${_ieni} \\\n            -e ssh ${_USR}/tools/le \\\n            root@${_tgt}:${_DST}/tools/\n\n          echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} ${_dst} complete\"\n          touch ${_USR}/log/transferred.pid\n        fi\n      else\n        # shellcheck disable=SC1091\n        [ -e \"/root/.${_oct}.octopus.cnf\" ] && source /root/.${_oct}.octopus.cnf\n        ### We are using max verbose mode by default\n        _migrate_prepare\n          ### _NOW=$(date +%y%m%d-%H%M)\n          ### mkdir -p /var/log/boa/migrate\n          ### _migrate_prepare >/var/log/boa/migrate/migrate-${_oct}-${_cmd}-${_NOW}.log 2>&1\n        if [ \"${_cmd}\" = \"fooproxy\" ]; then\n          service nginx reload\n          echo COMPLETE > ${_USR}/log/proxied.pid\n        fi\n      fi\n      echo \"INFO: ${_cmd} for ${_oct} to ${_tgt} ${_dst} complete\"\n      exit 0\n    fi\n  fi\n}\n\n_post_mig() {\n  if [ -e \"/vservers\" ]; then\n    echo You can not run _post_mig on the parent system\n    echo Exit now\n    exit 1\n  else\n    echo Running _post_mig inside vms ${_hName}\n    if [ -x \"/etc/init.d/solr9\" ] && [ -e 
\"/etc/default/solr9.in.sh\" ]; then\n      pkill -9 -f solr9\n      service solr9 start\n    fi\n    if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/etc/default/solr7.in.sh\" ]; then\n      pkill -9 -f solr7\n      service solr7 start\n    fi\n    pkill -9 -f jetty9\n    rm -rf /tmp/{drush*,pear,jetty*}\n    rm -f /var/log/jetty9/*\n    if [ -e \"/etc/default/jetty9\" ] && [ -e \"/etc/init.d/jetty9\" ]; then\n      service jetty9 start\n    fi\n    service nginx reload\n    [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n    if [ ! -e \"/var/xdrago/runner.sh\" ] && [ -e \"/var/xdrago/.runner.sh.off\" ]; then\n      mv -f /var/xdrago/.runner.sh.off /var/xdrago/runner.sh\n    fi\n    if [ ! -e \"/var/xdrago/daily.sh\" ] && [ -e \"/var/xdrago/.daily.sh.off\" ]; then\n      mv -f /var/xdrago/.daily.sh.off /var/xdrago/daily.sh\n    fi\n    if [ ! -e \"/var/xdrago/usage.sh\" ] && [ -e \"/var/xdrago/.usage.sh.off\" ]; then\n      mv -f /var/xdrago/.usage.sh.off /var/xdrago/usage.sh\n    fi\n    if [ ! -e \"/var/xdrago/graceful.sh\" ] && [ -e \"/var/xdrago/.graceful.sh.off\" ]; then\n      mv -f /var/xdrago/.graceful.sh.off /var/xdrago/graceful.sh\n    fi\n    if [ ! 
-e \"/var/xdrago/manage_ltd_users.sh\" ] && [ -e \"/var/xdrago/.manage_ltd_users.sh.off\" ]; then\n      mv -f /var/xdrago/.manage_ltd_users.sh.off /var/xdrago/manage_ltd_users.sh\n    fi\n    exit 0\n  fi\n}\n\n_pre_mig() {\n  if [ -e \"/vservers\" ]; then\n    echo You can not run _pre_mig on the parent system\n    echo Exit now\n    exit 1\n  else\n    echo Running _pre_mig inside vms ${_hName}\n    if [ -e \"/var/xdrago/runner.sh\" ]; then\n      mv -f /var/xdrago/runner.sh /var/xdrago/.runner.sh.off\n      pkill -9 -f runner.sh\n    fi\n    if [ -e \"/var/xdrago/daily.sh\" ]; then\n      mv -f /var/xdrago/daily.sh /var/xdrago/.daily.sh.off\n      pkill -9 -f daily.sh\n    fi\n    if [ -e \"/var/xdrago/usage.sh\" ]; then\n      mv -f /var/xdrago/usage.sh /var/xdrago/.usage.sh.off\n      pkill -9 -f usage.sh\n    fi\n    if [ -e \"/var/xdrago/graceful.sh\" ]; then\n      mv -f /var/xdrago/graceful.sh /var/xdrago/.graceful.sh.off\n      pkill -9 -f graceful.sh\n    fi\n    if [ -e \"/var/xdrago/manage_ltd_users.sh\" ]; then\n      mv -f /var/xdrago/manage_ltd_users.sh /var/xdrago/.manage_ltd_users.sh.off\n      pkill -9 -f manage_ltd_users.sh\n    fi\n    if [ \"${_hName}\" = \"${_hst}\" ]; then\n      echo Preparing source ${_hName} for outgoing migration...\n      if [ ! -e \"/root/.ssh/id_ed25519.pub\" ]; then\n        echo \"Generating SSH (ed25519) keys for root...\"\n        ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519\n      fi\n      if [ -e \"/root/.ssh/id_ed25519.pub\" ]; then\n        mkdir -p /var/www/nginx-default\n        cp -af /root/.ssh/id_ed25519.pub /var/www/nginx-default/auth_undefined_keys.txt\n      else\n        echo Ops.. 
/root/.ssh/id_ed25519.pub does not exist in ${_hName}\n      fi\n    else\n      echo Preparing target ${_hName} for incoming migration...\n      echo Add remote id_ed25519.pub to authorized_keys here...\n      mkdir -p /root/.ssh\n      chmod 700 /root/.ssh\n      rm -f /root/.ssh/auth_undefined_keys.txt*\n      rm -f /var/www/nginx-default/auth_undefined_keys.txt*\n      curl -s -A iCab \"http://undefined.${_hst}/auth_undefined_keys.txt\" -o /root/.ssh/auth_undefined_keys.txt\n      if [ -e \"/root/.ssh/auth_undefined_keys.txt\" ]; then\n        echo >> /root/.ssh/authorized_keys\n        cat /root/.ssh/auth_undefined_keys.txt >> /root/.ssh/authorized_keys\n        chmod 600 /root/.ssh/authorized_keys\n      else\n        echo Ops.. /root/.ssh/auth_undefined_keys.txt does not exist in ${_hName}\n      fi\n    fi\n  fi\n  echo All done!\n}\n\n_sub_ssl_gen() {\n  IFS=$'\\12'\n  for p in `cat /root/.ssl.proxy.cnf`;do\n    _domain_name=`echo $p | cut -d' ' -f1 | awk '{ print $1}'`\n    _target_ip=`echo $p | cut -d' ' -f2 | awk '{ print $1}'`\n    _oct_uid=`echo $p | cut -d' ' -f3 | awk '{ print $1}'`\n    _oct_mail=`echo $p | cut -d' ' -f4 | awk '{ print $1}'`\n    _dedicated_ip=`echo $p | cut -d' ' -f5 | awk '{ print $1}'`\n    echo _domain_name.${_domain_name} _target_ip.${_target_ip} _oct_uid.${_oct_uid} _oct_mail.${_oct_mail} _dedicated_ip.${_dedicated_ip}\n    if [ ! -z \"${_domain_name}\" ] \\\n      && [ ! -z \"${_target_ip}\" ] \\\n      && [ ! -z \"${_oct_uid}\" ] \\\n      && [ ! -z \"${_oct_mail}\" ] \\\n      && [ ! -z \"${_dedicated_ip}\" ]; then\n      _oct_mail=`echo ${_oct_mail} | sed \"s/\\@/\\\\\\@/g\"`;\n      if [ \"${_target_ip}\" = \"${_dedicated_ip}\" ]; then\n        _dedicated_ip=\"*\"\n        if [ -e \"/data/disk/${_oct_uid}/log/extra_domain.txt\" ]; then\n          _hmFrontExtra=$(cat /data/disk/${_oct_uid}/log/extra_domain.txt 2>&1)\n          _hmFrontExtra=$(echo -n ${_hmFrontExtra} | tr -d \"\\n\" 2>&1)\n        fi\n        if [ ! 
-z \"${_hmFrontExtra}\" ]; then\n          _dedicated_sn=\"${_domain_name} ${_hmFrontExtra} www.${_domain_name}\"\n        else\n          _dedicated_sn=\"${_domain_name} www.${_domain_name}\"\n        fi\n        _single=YES\n      else\n        _dedicated_sn=\"_\"\n        _single=NO\n      fi\n      ###\n      _Pln=\"/var/aegir/config/server_master/nginx/pre.d/z_${_domain_name}_pln_proxy.conf\"\n      _Ssl=\"/var/aegir/config/server_master/nginx/pre.d/z_${_domain_name}_ssl_proxy.conf\"\n      ###\n      if [ \"${_single}\" = \"YES\" ]; then\n        rm -f ${_Pln}\n      else\n        cp -af /var/xdrago/conf/pln_proxy.conf      ${_Pln}\n        sed -i \"s/_domain_name/${_domain_name}/g\"   ${_Pln}\n        wait\n        sed -i \"s/_target_ip/${_target_ip}/g\"       ${_Pln}\n        wait\n        sed -i \"s/_oct_uid/${_oct_uid}/g\"           ${_Pln}\n        wait\n        sed -i \"s/_oct_mail/${_oct_mail}/g\"         ${_Pln}\n        wait\n        sed -i \"s/_dedicated_ip/${_dedicated_ip}/g\" ${_Pln}\n        wait\n        sed -i \"s/_dedicated_sn/${_dedicated_sn}/g\" ${_Pln}\n        wait\n        echo OK created ${_Pln}\n      fi\n      ###\n      cp -af /var/xdrago/conf/ssl_proxy.conf        ${_Ssl}\n      sed -i \"s/_domain_name/${_domain_name}/g\"     ${_Ssl}\n      wait\n      sed -i \"s/_target_ip/${_target_ip}/g\"         ${_Ssl}\n      wait\n      sed -i \"s/_oct_uid/${_oct_uid}/g\"             ${_Ssl}\n      wait\n      sed -i \"s/_oct_mail/${_oct_mail}/g\"           ${_Ssl}\n      wait\n      sed -i \"s/_dedicated_ip/${_dedicated_ip}/g\"   ${_Ssl}\n      wait\n      sed -i \"s/_dedicated_sn/${_dedicated_sn}/g\"   ${_Ssl}\n      wait\n      ###\n      if [ -e \"${_Ssl}\" ]; then\n        _sslFile=\"/etc/ssl/private/${_domain_name}.dhp\"\n        if [ ! 
-e \"${_sslFile}\" ]; then\n          echo \"We will generate .dhp file now, please wait...\"\n          openssl dhparam -out ${_sslFile} 2048\n        else\n          _PFS_TEST=$(grep \"DH PARAMETERS\" ${_sslFile} 2>&1)\n          if [[ ! \"${_PFS_TEST}\" =~ \"DH PARAMETERS\" ]]; then\n            echo \"We will generate .dhp file now, please wait...\"\n            openssl dhparam -out ${_sslFile} 2048\n          fi\n        fi\n      fi\n      ###\n      echo OK created ${_Ssl}\n      service nginx reload\n    else\n      echo some variables missing in the record: $p\n    fi\n  done\n}\n\n_ssl_gen() {\n  if [ ! -e \"/root/.ssl.proxy.cnf\" ]; then\n    echo \"please create /root/.ssl.proxy.cnf first\"\n    exit 1\n  elif [ ! -e \"/var/xdrago/conf/pln_proxy.conf\" ]; then\n    echo \"file /var/xdrago/conf/pln_proxy.conf does not exist\"\n    exit 1\n  elif [ ! -e \"/var/xdrago/conf/ssl_proxy.conf\" ]; then\n    echo \"file /var/xdrago/conf/ssl_proxy.conf does not exist\"\n    exit 1\n  elif [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    echo \"vms ${_vms} aegir master not ready\"\n    exit 1\n  else\n    [ -e \"/root/.ssl.proxy.cnf\" ] && _sub_ssl_gen\n  fi\n}\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    chmod a+w /dev/null\n    if [ ! 
-e \"/dev/fd\" ]; then\n      if [ -e \"/proc/self/fd\" ]; then\n        rm -rf /dev/fd\n        ln -sfn /proc/self/fd /dev/fd\n      fi\n    fi\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    _ADM_EMAIL=${_MY_EMAIL//\\\\\\@/\\@}\n    _BCC_EMAIL=${_MY_EMAIL//\\\\\\@/\\@}\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _SQL_CONNECT=localhost\n    elif [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n      _SQL_CONNECT=127.0.0.1\n    else\n      _SQL_CONNECT=\"${_THIS_DB_HOST}\"\n    fi\n    if [ \"${_THIS_DB_HOST}\" = \"${_MY_OWNIP}\" ]; then\n      _SQL_CONNECT=localhost\n    fi\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! 
${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  _hName=\"$( { cat /etc/hostname 2>/dev/null || hostname -f 2>/dev/null; } | tr -d '\\n')\"\n}\n\ncase \"$1\" in\n  export)   _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  create)   _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  import)   _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _fix=\"fix\"\n            _check_root\n            _migrate_init\n  ;;\n  pretransfer) _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  transfer) _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  noproxy)  _cmd=\"$1\"\n            _oct=\"$2\"\n            _tgt=\"$3\"\n            _dst=\"$4\"\n            _check_root\n            _migrate_init\n  ;;\n  pre-mig)  _cmd=\"$1\"\n            _hst=\"$2\"\n            _check_root\n            _pre_mig\n  ;;\n  post-mig) _cmd=\"$1\"\n            _hst=\"$2\"\n            _check_root\n            _post_mig\n  ;;\n  ssl-gen)  _cmd=\"$1\"\n            _vms=\"$2\"\n            _check_root\n            _ssl_gen\n  ;;\n  *)        echo\n            echo \"Usage: xboa {pre-mig} {fqdn} (source+target vms)\"\n            echo \"Usage: xboa {export|create|pretransfer|transfer|noproxy} {o1|shared} {target-ip} {o2}\"\n            echo \"Usage: xboa {import} {o1} {target-ip} (target vms)\"\n            echo \"Usage: xboa {post-mig} {fqdn} (source+target vms)\"\n            echo\n            echo \"Usage: xboa {ssl-gen}\"\n            echo\n            exit 1\n  ;;\nesac\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/host/host-fire.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n_guest_proc_monitor() {\n  for i in `dir -d /vservers/*`; do\n    _THIS_VM=`echo $i | cut -d'/' -f3 | awk '{ print $1}'`\n    _VS_NAME=`echo ${_THIS_VM} | cut -d'/' -f3 | awk '{ print $1}'`\n    if [ -e \"${i}/var/xdrago/proc_num_ctrl.pl\" ] \\\n      && [ ! -e \"${i}/run/fmp_wait.pid\" ] \\\n      && [ ! -e \"${i}/run/boa_wait.pid\" ] \\\n      && [ ! -e \"${i}/run/boa_run.pid\" ] \\\n      && [ ! -e \"${i}/run/mysql_restart_running.pid\" ] \\\n      && [ -e \"/usr/run${i}\" ]; then\n      vserver ${_VS_NAME} exec perl /var/xdrago/proc_num_ctrl.pl &\n    fi\n  done\n}\n###_guest_proc_monitor\n\n_guest_guard() {\nif [ ! -e \"/run/fire.pid\" ] && [ ! 
-e \"/run/water.pid\" ]; then\n  touch /run/fire.pid\n  echo start $(date)\n  for i in `dir -d /vservers/*`; do\n    if [ -e \"${i}/var/xdrago/monitor/log/ssh.log\" ] && [ -e \"/usr/run${i}\" ]; then\n      for _IP in `cat ${i}/var/xdrago/monitor/log/ssh.log | cut -d '#' -f1 | sort`; do\n        _FW_TEST=\n        _FF_TEST=\n        _FW_TEST=$(csf -g ${_IP} 2>&1)\n        _FF_TEST=$(grep \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n        if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n          echo \"${_IP} already denied or allowed on port 22\"\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n            csf -dr ${_IP}\n            csf -tr ${_IP}\n          fi\n        else\n          echo \"Deny ${_IP} on ports 21,22,443,80 in the next 1h\"\n          csf -td ${_IP} 3600 -p 21\n          csf -td ${_IP} 3600 -p 22\n          csf -td ${_IP} 3600 -p 443\n          csf -td ${_IP} 3600 -p 80\n        fi\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      done\n    fi\n    if [ -e \"${i}/var/xdrago/monitor/log/web.log\" ] && [ -e \"/usr/run${i}\" ]; then\n      for _IP in `cat ${i}/var/xdrago/monitor/log/web.log | cut -d '#' -f1 | sort`; do\n        _FW_TEST=\n        _FF_TEST=\n        _FW_TEST=$(csf -g ${_IP} 2>&1)\n        _FF_TEST=$(grep \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n        if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n          echo \"${_IP} already denied or allowed on port 80\"\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n            csf -dr ${_IP}\n            csf -tr ${_IP}\n          fi\n        else\n          echo \"Deny ${_IP} on ports 21,22,443,80 in the next 1h\"\n          csf -td ${_IP} 3600 -p 21\n          csf -td ${_IP} 3600 -p 22\n          csf -td ${_IP} 3600 -p 443\n          csf -td ${_IP} 3600 -p 80\n       
 fi\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      done\n    fi\n    if [ -e \"${i}/var/xdrago/monitor/log/ftp.log\" ] && [ -e \"/usr/run${i}\" ]; then\n      for _IP in `cat ${i}/var/xdrago/monitor/log/ftp.log | cut -d '#' -f1 | sort`; do\n        _FW_TEST=\n        _FF_TEST=\n        _FW_TEST=$(csf -g ${_IP} 2>&1)\n        _FF_TEST=$(grep \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n        if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n          echo \"${_IP} already denied or allowed on port 21\"\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n            csf -dr ${_IP}\n            csf -tr ${_IP}\n          fi\n        else\n          echo \"Deny ${_IP} on ports 21,22,443,80 in the next 1h\"\n          csf -td ${_IP} 3600 -p 21\n          csf -td ${_IP} 3600 -p 22\n          csf -td ${_IP} 3600 -p 443\n          csf -td ${_IP} 3600 -p 80\n        fi\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      done\n    fi\n    echo Completed for $i $(date)\n  done\n  echo fin $(date)\n  rm -f /run/fire.pid\nfi\n}\n\nif [ -e \"/vservers\" ] \\\n  && [ -e \"/etc/csf/csf.deny\" ] \\\n  && [ ! -e \"/run/water.pid\" ] \\\n  && [ -x \"/usr/sbin/csf\" ]; then\n  [ ! -e \"/run/water.pid\" ] && _guest_guard\n  sleep 10\n  [ ! -e \"/run/water.pid\" ] && _guest_guard\n  sleep 10\n  [ ! -e \"/run/water.pid\" ] && _guest_guard\n  sleep 10\n  [ ! -e \"/run/water.pid\" ] && _guest_guard\n  sleep 10\n  [ ! -e \"/run/water.pid\" ] && _guest_guard\n  rm -f /run/fire.pid\nfi\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/host/host-water.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n[ -d \"/var/backups/csf/water\" ] || mkdir -p /var/backups/csf/water\n\n_whitelist_ip_pingdom() {\n  # Pingdom provides probe IPs in multiple formats:\n  #   Plain IPv4 list: https://my.pingdom.com/probes/ipv4  (preferred - no parsing needed)\n  #   RSS feed:        https://my.pingdom.com/probes/feed  (fallback - XML parsing required)\n  # The plain list is simpler and less fragile; RSS is kept as fallback.\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing pingdom ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-pingdom-${_NOW}\n    sed -i \"s/.*pingdom.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://my.pingdom.com/probes/ipv4 \\\n    | grep -o '[0-9]\\+\\.[0-9]\\+\\.[0-9]\\+\\.[0-9]\\+' \\\n    | sort \\\n    | uniq 2>&1)\n  if [ -z \"${_IPS}\" ]; then\n    echo \"pingdom ipv4 endpoint failed, falling back to RSS feed\"\n    _IPS=$(curl -k -s https://my.pingdom.com/probes/feed \\\n      | grep '<pingdom:ip>' \\\n      | sed 's/.*::.*//g' \\\n      | sed 's/[^0-9\\.]//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n  echo _IPS pingdom list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow pingdom ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # pingdom ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n}\n\n_whitelist_ip_cloudflare() {\n  # Cloudflare publishes IPv4 ranges at two endpoints (both return identical data):\n  #   
Plain text: https://www.cloudflare.com/ips-v4  (primary)\n  #   JSON API:   https://api.cloudflare.com/client/v4/ips  (fallback, no auth needed)\n  # Reference: https://www.cloudflare.com/ips/\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing cloudflare ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-cloudflare-${_NOW}\n    sed -i \"s/.*cloudflare.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -sL https://www.cloudflare.com/ips-v4 \\\n    | grep -o '[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*' \\\n    | sort \\\n    | uniq 2>&1)\n  if [ -z \"${_IPS}\" ]; then\n    echo \"cloudflare ips-v4 endpoint failed, falling back to JSON API\"\n    _IPS=$(curl -k -s https://api.cloudflare.com/client/v4/ips \\\n      | grep -o '\"[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*\"' \\\n      | sed 's/\"//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n  echo _IPS cloudflare list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow cloudflare ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # cloudflare ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n}\n\n_whitelist_ip_imperva() {\n  # Imperva Cloud WAF IP ranges API - no authentication required:\n  # https://my.imperva.com/api/integration/v1/ips\n  # Formats: text | json | apache | nginx | iptables (default: json)\n  # Reference: https://docs.imperva.com/howto/c85245b7\n  # Current ranges (as of 2024): 199.83.128.0/21, 198.143.32.0/19, 149.126.72.0/21,\n  #   103.28.248.0/22, 185.11.124.0/22, 192.230.64.0/18, 45.64.64.0/22, 107.154.0.0/16,\n  #   
45.60.0.0/16, 45.223.0.0/16, 131.125.128.0/17 (added May 2023)\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing imperva ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-imperva-${_NOW}\n    sed -i \"s/.*imperva.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s --data \"resp_format=text\" https://my.imperva.com/api/integration/v1/ips \\\n    | grep -o '[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*' \\\n    | sort \\\n    | uniq 2>&1)\n  if [ -z \"${_IPS}\" ]; then\n    echo \"imperva text endpoint failed, falling back to JSON format\"\n    _IPS=$(curl -k -s --data \"resp_format=json\" https://my.imperva.com/api/integration/v1/ips \\\n      | grep -o '\"[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*\"' \\\n      | sed 's/\"//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n  echo _IPS imperva list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow imperva ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # imperva ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  # Clean up Imperva ranges from csf.deny\n  # All current Imperva ranges by significant octets:\n  sed -i \"/^199\\.83\\./d\" /etc/csf/csf.deny\n  sed -i \"/^198\\.143\\./d\" /etc/csf/csf.deny\n  sed -i \"/^149\\.126\\./d\" /etc/csf/csf.deny\n  sed -i \"/^103\\.28\\./d\" /etc/csf/csf.deny\n  sed -i \"/^185\\.11\\./d\" /etc/csf/csf.deny\n  sed -i \"/^192\\.230\\./d\" /etc/csf/csf.deny\n  sed -i \"/^45\\.64\\./d\" /etc/csf/csf.deny\n  sed -i \"/^107\\.154\\./d\" /etc/csf/csf.deny\n  sed -i \"/^45\\.60\\./d\" /etc/csf/csf.deny\n  sed 
-i \"/^45\\.223\\./d\" /etc/csf/csf.deny\n  sed -i \"/^131\\.125\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_googlebot() {\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing googlebot ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-googlebot-${_NOW}\n    sed -i \"s/.*googlebot.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://developers.google.com/static/search/apis/ipranges/googlebot.json \\\n    | grep -o '\"ipv4Prefix\": *\"[^\"]*\"' \\\n    | sed 's/\"ipv4Prefix\": *\"//g' \\\n    | sed 's/\"//g' \\\n    | sort \\\n    | uniq 2>&1)\n  echo _IPS googlebot list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow googlebot ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # googlebot ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  sed -i \"/^66\\.249\\./d\" /etc/csf/csf.deny\n  sed -i \"/^192\\.178\\./d\" /etc/csf/csf.deny\n  sed -i \"/^34\\.\\(22\\|64\\|65\\|80\\|88\\|89\\|96\\|100\\|101\\|118\\|126\\|146\\|147\\|151\\|152\\|154\\|155\\|165\\|175\\|176\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^35\\.247\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_microsoft() {\n  if [ ! 
-e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing microsoft ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-microsoft-${_NOW}\n    sed -i \"s/.*microsoft.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://www.bing.com/toolbox/bingbot.json \\\n    | grep -o '\"ipv4Prefix\": *\"[^\"]*\"' \\\n    | sed 's/\"ipv4Prefix\": *\"//g' \\\n    | sed 's/\"//g' \\\n    | sort \\\n    | uniq 2>&1)\n  echo _IPS microsoft list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow microsoft ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # microsoft ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  # Remove all current Bingbot ranges from csf.deny\n  # Legacy ranges (no longer in JSON but may be in older deny rules)\n  sed -i \"/^65\\.5[2-5]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^199\\.30\\./d\" /etc/csf/csf.deny\n  # Current Azure-based Bingbot ranges\n  sed -i \"/^13\\.\\(66\\|67\\|69\\|71\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^20\\.\\(15\\|36\\|43\\|74\\|79\\|125\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^40\\.77\\./d\" /etc/csf/csf.deny\n  sed -i \"/^40\\.79\\./d\" /etc/csf/csf.deny\n  sed -i \"/^51\\.105\\./d\" /etc/csf/csf.deny\n  sed -i \"/^52\\.\\(167\\|231\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^139\\.217\\./d\" /etc/csf/csf.deny\n  sed -i \"/^157\\.55\\./d\" /etc/csf/csf.deny\n  sed -i \"/^191\\.233\\./d\" /etc/csf/csf.deny\n  sed -i \"/^207\\.46\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_sucuri() {\n  # Sucuri does not publish a machine-readable IP list endpoint.\n  # IP ranges are maintained as static 
documentation at:\n  # https://docs.sucuri.net/website-firewall/sucuri-firewall-troubleshooting-guide/\n  # Review that page periodically and update _IPS below if ranges change.\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing sucuri ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-sucuri-${_NOW}\n    sed -i \"s/.*sucuri.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=\"192.88.134.0/23 185.93.228.0/22 66.248.200.0/22 208.109.0.0/22\"\n  echo _IPS sucuri list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow sucuri ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # sucuri ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  sed -i \"/^192\\.88\\.13[4-5]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^185\\.93\\.22[89]\\.\\|^185\\.93\\.23[01]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^66\\.248\\.20[0-3]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^208\\.109\\.[0-3]\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_authzero() {\n  # Auth0 publishes a machine-readable IP list with region breakdown and changelog at:\n  # https://cdn.auth0.com/ip-ranges.json\n  # The list is updated ahead of any functional changes; check last_updated_at to detect changes.\n  # Only whitelist regions relevant to your Auth0 tenant(s). Currently fetching all regions.\n  if [ ! 
-e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing authzero ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-authzero-${_NOW}\n    sed -i \"s/.*authzero.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://cdn.auth0.com/ip-ranges.json \\\n    | grep -o '\"[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*\"' \\\n    | grep -v ':' \\\n    | sed 's/\"//g' \\\n    | sort \\\n    | uniq 2>&1)\n  echo _IPS authzero list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow authzero ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # authzero ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  # Clean up any authzero IPs from csf.deny (current + previously known retired IPs)\n  # Since all Auth0 IPs are /32 host routes, we match on the specific addresses from\n  # the changelog (both active and historically removed entries) to ensure old deny\n  # rules don't linger. 
The fetch above handles csf.allow; deny cleanup is best-effort\n  # by known prefix patterns from Auth0's AWS IP space.\n  for _DENY_IP in $(echo \"${_IPS}\" | sed 's|/32||g'); do\n    sed -i \"/^${_DENY_IP//./\\\\.}$/d\" /etc/csf/csf.deny\n    sed -i \"/^${_DENY_IP//./\\\\.}\\/32$/d\" /etc/csf/csf.deny\n  done\n  wait\n}\n\n_whitelist_ip_site24x7_extra() {\n  # These ranges cover Site24x7 backend/infrastructure IPs (not monitoring probes).\n  # Monitoring probe IPs are handled dynamically via DNS in _whitelist_ip_site24x7().\n  # No machine-readable endpoint exists for these ranges; review periodically at:\n  # https://www.site24x7.com/community/filter/announcements/\n  _IPS=\"87.252.213.0/24 89.36.170.0/24 185.172.199.0/27 185.172.199.128/26 185.230.214.0/23\"\n  echo _IPS site24x7_extra list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow site24x7_extra ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # site24x7_extra ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  if [ -e \"/root/.ignore.site24x7.firewall.cnf\" ]; then\n    for _IP in ${_IPS}; do\n      echo checking csf.ignore site24x7_extra ${_IP} now...\n      _IP_CHECK=$(cat /etc/csf/csf.ignore \\\n        | cut -d '#' -f1 \\\n        | sort \\\n        | uniq \\\n        | tr -d \"\\s\" \\\n        | grep -F \"${_IP}\" 2>&1)\n      if [ -z \"${_IP_CHECK}\" ]; then\n        echo \"${_IP} not yet listed in /etc/csf/csf.ignore\"\n        echo \"${_IP} # site24x7_extra ips\" >> /etc/csf/csf.ignore\n      else\n        echo \"${_IP} already listed in /etc/csf/csf.ignore\"\n      fi\n    done\n  fi\n}\n\n_whitelist_ip_site24x7() {\n  if [ ! 
-e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing site24x7 ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-site24x7-${_NOW}\n    sed -i \"s/.*site24x7.*//g\" /etc/csf/csf.allow\n    wait\n    echo removing site24x7 ips from csf.ignore\n    sed -i \"s/.*site24x7.*//g\" /etc/csf/csf.ignore\n    wait\n  fi\n\n  _IPS=$(host site24x7.enduserexp.com 1.1.1.1  \\\n    | grep 'has address' \\\n    | cut -d ' ' -f4 \\\n    | sed 's/[^0-9\\.]//g' \\\n    | sort \\\n    | uniq 2>&1)\n\n  if [ -z \"${_IPS}\" ] \\\n    || [[ ! \"${_IPS}\" =~ \"104.236.16.22\" ]] \\\n    || [[ \"${_IPS}\" =~ \"HINFO\" ]]; then\n    _IPS=$(dig site24x7.enduserexp.com \\\n      | grep 'IN.*A' \\\n      | cut -d 'A' -f2 \\\n      | sed 's/[^0-9\\.]//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n\n  echo _IPS site24x7 list..\n  echo ${_IPS}\n\n  for _IP in ${_IPS}; do\n    echo checking csf.allow site24x7 ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # site24x7 ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n\n  if [ -e \"/root/.ignore.site24x7.firewall.cnf\" ]; then\n    for _IP in ${_IPS}; do\n      echo checking csf.ignore site24x7 ${_IP} now...\n      _IP_CHECK=$(cat /etc/csf/csf.ignore \\\n        | cut -d '#' -f1 \\\n        | sort \\\n        | uniq \\\n        | tr -d \"\\s\" \\\n        | grep -F \"${_IP}\" 2>&1)\n      if [ -z \"${_IP_CHECK}\" ]; then\n        echo \"${_IP} not yet listed in /etc/csf/csf.ignore\"\n        echo \"${_IP} # site24x7 ips\" >> /etc/csf/csf.ignore\n      else\n        echo \"${_IP} already listed in /etc/csf/csf.ignore\"\n      fi\n    done\n  fi\n\n 
 if [ ! -e \"/root/.whitelist.site24x7.cnf\" ]; then\n    csf -tf\n    wait\n    csf -df\n    wait\n    touch /root/.whitelist.site24x7.cnf\n    [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  fi\n}\n\n_local_ip_rg() {\n  if [ -e \"/root/.local.IP.list\" ]; then\n    echo \"the file /root/.local.IP.list already exists\"\n    for _IP in `hostname -I`; do\n      _IP_CHECK=$(cat /root/.local.IP.list \\\n        | cut -d '#' -f1 \\\n        | sort \\\n        | uniq \\\n        | tr -d \"\\s\" \\\n        | grep -F \"${_IP}\" 2>&1)\n      if [ -z \"${_IP_CHECK}\" ]; then\n        echo \"${_IP} not yet listed in /root/.local.IP.list\"\n        echo \"${_IP} # local IP address\" >> /root/.local.IP.list\n      else\n        echo \"${_IP} already listed in /root/.local.IP.list\"\n      fi\n    done\n    for _IP in `cat /root/.local.IP.list \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\"`; do\n      if [ ! -z \"${_IP}\" ]; then\n        echo removing ${_IP} from d/t firewall rules\n        csf -ar ${_IP} &> /dev/null\n        csf -dr ${_IP} &> /dev/null\n        csf -tr ${_IP} &> /dev/null\n      fi\n      if [ ! -e \"/root/.local.IP.csf.listed\" ] && [ ! 
-z \"${_IP}\" ]; then\n        echo removing ${_IP} from csf.ignore\n        sed -i \"s/^${_IP} .*//g\" /etc/csf/csf.ignore\n        wait\n        echo removing ${_IP} from csf.allow\n        _NOW=$(date +%y%m%d-%H%M%S)\n        cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-local-${_NOW}\n        sed -i \"s/^${_IP} .*//g\" /etc/csf/csf.allow\n        wait\n        echo adding ${_IP} to csf.ignore\n        echo \"${_IP} # local.IP.list\" >> /etc/csf/csf.ignore\n        wait\n        echo adding ${_IP} to csf.allow\n        echo \"${_IP} # local.IP.list\" >> /etc/csf/csf.allow\n        wait\n      fi\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n    touch /root/.local.IP.csf.listed\n  else\n    echo \"the file /root/.local.IP.list does not exist\"\n    rm -f /root/.tmp.IP.list*\n    rm -f /root/.local.IP.list*\n    for _IP in `hostname -I`;do echo ${_IP} >> /root/.tmp.IP.list;done\n    for _IP in `cat /root/.tmp.IP.list \\\n      | sort \\\n      | uniq`;do echo \"${_IP} # local IP address\" >> /root/.local.IP.list;done\n    rm -f /root/.tmp.IP.list*\n  fi\n}\n\n_guard_stats() {\n  for i in `dir -d /vservers/*`; do\n    if [ -e \"/root/.local.IP.list\" ]; then\n      cp -af /root/.local.IP.list ${i}/root/.local.IP.list\n    fi\n    if [ ! -e \"${i}/${_HX}\" ] && [ -e \"${i}/${_HA}\" ]; then\n      mv -f ${i}/${_HA} ${i}/${_HX}\n    fi\n    if [ ! -e \"${i}/${_WX}\" ] && [ -e \"${i}/${_WA}\" ]; then\n      mv -f ${i}/${_WA} ${i}/${_WX}\n    fi\n    if [ ! 
-e \"${i}/${_FX}\" ] && [ -e \"${i}/${_FA}\" ]; then\n      mv -f ${i}/${_FA} ${i}/${_FX}\n    fi\n    if [ -e \"${i}/${_HA}\" ] && [ -e \"/usr/run${i}\" ]; then\n      for _IP in `cat ${i}/${_HA} | cut -d '#' -f1 | sort | uniq`; do\n        _IP_RV=\n        _NR_TEST=\"0\"\n        _NR_TEST=$(tr -s ' ' '\\n' < ${i}/${_HA} | grep -cF \"${_IP}\" 2>&1)\n        if [ -e \"/root/.local.IP.list\" ]; then\n          _IP_CHECK=$(cat /root/.local.IP.list \\\n            | cut -d '#' -f1 \\\n            | sort \\\n            | uniq \\\n            | tr -d \"\\s\" \\\n            | grep -F \"${_IP}\" 2>&1)\n          if [ ! -z \"${_IP_CHECK}\" ]; then\n            _NR_TEST=\"0\"\n            echo \"${_IP} is a local IP address, ignoring ${i}/${_HA}\"\n          fi\n        fi\n        if [ ! -z \"${_NR_TEST}\" ] && [ \"${_NR_TEST}\" -ge 12 ]; then\n          echo ${_IP} ${_NR_TEST}\n          _FW_TEST=\n          _FF_TEST=\n          _FW_TEST=$(csf -g ${_IP} 2>&1)\n          _FF_TEST=$(grep -F \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n            echo \"${_IP} already denied or allowed on port 22\"\n            if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n              csf -dr ${_IP}\n              csf -tr ${_IP}\n            fi\n          else\n            _IP_RV=$(host -s ${_IP} 2>&1)\n            if [ \"${_NR_TEST}\" -ge 24 ]; then\n              echo \"Deny ${_IP} permanently ${_NR_TEST} ${_IP_RV}\"\n              csf -d ${_IP} do not delete Brute force SSH Server ${_NR_TEST} attacks ${_IP_RV}\n            else\n              echo \"Deny ${_IP} until limits rotation ${_NR_TEST} ${_IP_RV}\"\n              csf -d ${_IP} Brute force SSH Server ${_NR_TEST} attacks ${_IP_RV}\n            fi\n          fi\n        fi\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      done\n    fi\n   
 if [ -e \"${i}/${_WA}\" ] && [ -e \"/usr/run${i}\" ]; then\n      for _IP in `cat ${i}/${_WA} | cut -d '#' -f1 | sort | uniq`; do\n        _IP_RV=\n        _NR_TEST=\"0\"\n        _NR_TEST=$(tr -s ' ' '\\n' < ${i}/${_WA} | grep -cF \"${_IP}\" 2>&1)\n        if [ -e \"/root/.local.IP.list\" ]; then\n          _IP_CHECK=$(cat /root/.local.IP.list \\\n            | cut -d '#' -f1 \\\n            | sort \\\n            | uniq \\\n            | tr -d \"\\s\" \\\n            | grep -F \"${_IP}\" 2>&1)\n          if [ ! -z \"${_IP_CHECK}\" ]; then\n            _NR_TEST=\"0\"\n            echo \"${_IP} is a local IP address, ignoring ${i}/${_WA}\"\n          fi\n        fi\n        if [ ! -z \"${_NR_TEST}\" ] && [ \"${_NR_TEST}\" -ge 12 ]; then\n          echo ${_IP} ${_NR_TEST}\n          _FW_TEST=\n          _FF_TEST=\n          _FW_TEST=$(csf -g ${_IP} 2>&1)\n          _FF_TEST=$(grep -F \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n            echo \"${_IP} already denied or allowed on port 80\"\n            if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n              csf -dr ${_IP}\n              csf -tr ${_IP}\n            fi\n          else\n            _IP_RV=$(host -s ${_IP} 2>&1)\n            if [ \"${_NR_TEST}\" -ge 24 ]; then\n              echo \"Deny ${_IP} permanently ${_NR_TEST} ${_IP_RV}\"\n              csf -d ${_IP} do not delete Brute force Web Server ${_NR_TEST} attacks ${_IP_RV}\n            else\n              echo \"Deny ${_IP} until limits rotation ${_NR_TEST} ${_IP_RV}\"\n              csf -d ${_IP} Brute force Web Server ${_NR_TEST} attacks ${_IP_RV}\n            fi\n          fi\n        fi\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      done\n    fi\n    if [ -e \"${i}/${_FA}\" ] && [ -e \"/usr/run${i}\" ]; then\n      for _IP in `cat ${i}/${_FA} | cut 
-d '#' -f1 | sort | uniq`; do\n        _IP_RV=\n        _NR_TEST=\"0\"\n        _NR_TEST=$(tr -s ' ' '\\n' < ${i}/${_FA} | grep -cF \"${_IP}\" 2>&1)\n        if [ -e \"/root/.local.IP.list\" ]; then\n          _IP_CHECK=$(cat /root/.local.IP.list \\\n            | cut -d '#' -f1 \\\n            | sort \\\n            | uniq \\\n            | tr -d \"\\s\" \\\n            | grep -F \"${_IP}\" 2>&1)\n          if [ ! -z \"${_IP_CHECK}\" ]; then\n            _NR_TEST=\"0\"\n            echo \"${_IP} is a local IP address, ignoring ${i}/${_FA}\"\n          fi\n        fi\n        if [ ! -z \"${_NR_TEST}\" ] && [ \"${_NR_TEST}\" -ge 12 ]; then\n          echo ${_IP} ${_NR_TEST}\n          _FW_TEST=\n          _FF_TEST=\n          _FW_TEST=$(csf -g ${_IP} 2>&1)\n          _FF_TEST=$(grep -F \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n            echo \"${_IP} already denied or allowed on port 21\"\n            if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n              csf -dr ${_IP}\n              csf -tr ${_IP}\n            fi\n          else\n            _IP_RV=$(host -s ${_IP} 2>&1)\n            if [ \"${_NR_TEST}\" -ge 24 ]; then\n              echo \"Deny ${_IP} permanently ${_NR_TEST} ${_IP_RV}\"\n              csf -d ${_IP} do not delete Brute force FTP Server ${_NR_TEST} attacks ${_IP_RV}\n            else\n              echo \"Deny ${_IP} until limits rotation ${_NR_TEST} ${_IP_RV}\"\n              csf -d ${_IP} Brute force FTP Server ${_NR_TEST} attacks ${_IP_RV}\n            fi\n          fi\n        fi\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      done\n    fi\n  done\n}\n\n_whitelist_ip_dns() {\n  csf -tr 1.1.1.1\n  csf -tr 8.8.8.8\n  csf -tr 9.9.9.9\n  csf -dr 1.1.1.1\n  csf -dr 8.8.8.8\n  csf -dr 9.9.9.9\n  [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && 
synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  sed -i \"s/.*1.1.1.1.*//g\"  /etc/csf/csf.allow\n  sed -i \"s/.*1.1.1.1.*//g\"  /etc/csf/csf.ignore\n  sed -i \"s/.*8.8.8.8.*//g\"  /etc/csf/csf.allow\n  sed -i \"s/.*8.8.8.8.*//g\"  /etc/csf/csf.ignore\n  sed -i \"s/.*9.9.9.9.*//g\"  /etc/csf/csf.allow\n  sed -i \"s/.*9.9.9.9.*//g\"  /etc/csf/csf.ignore\n  echo \"tcp|out|d=53|d=1.1.1.1 # Cloudflare DNS\" >> /etc/csf/csf.allow\n  echo \"tcp|out|d=53|d=8.8.8.8 # Google DNS\" >> /etc/csf/csf.allow\n  echo \"tcp|out|d=53|d=9.9.9.9 # Cleaner DNS\" >> /etc/csf/csf.allow\n  sed -i \"/^$/d\" /etc/csf/csf.ignore\n  sed -i \"/^$/d\" /etc/csf/csf.allow\n}\n\nif [ -e \"/vservers\" ] \\\n  && [ -e \"/etc/csf/csf.deny\" ] \\\n  && [ -x \"/usr/sbin/csf\" ]; then\n  if [ -e \"/root/.local.IP.list\" ]; then\n    echo local dr/tr start $(date)\n    for _IP in `cat /root/.local.IP.list \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\"`; do\n      csf -dr ${_IP} &> /dev/null\n      csf -tr ${_IP} &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n\n  _n=$((RANDOM%120+90))\n  touch /run/water.pid\n  echo Waiting ${_n} seconds...\n  sleep ${_n}\n\n  _whitelist_ip_dns\n  _whitelist_ip_pingdom\n  _whitelist_ip_cloudflare\n  _whitelist_ip_googlebot\n  _whitelist_ip_microsoft\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_imperva\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_sucuri\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_authzero\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_site24x7_extra\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_site24x7\n\n  if [ -e \"/root/.full.csf.cleanup.cnf\" ]; then\n    sed -i \"s/.*do not delete.*//g\" /etc/csf/csf.deny\n    wait\n    sed -i \"/^$/d\" /etc/csf/csf.deny\n    wait\n  fi\n\n  pkill 
-9 -f ConfigServer\n  killall sleep &> /dev/null\n  rm -f /etc/csf/csf.error\n  if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n    csf -ra &> /dev/null\n    synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  else\n    csf -r &> /dev/null\n  fi\n  csf -tf\n  ### Linux kernel TCP SACK CVEs mitigation\n  ### CVE-2019-11477 SACK Panic\n  ### CVE-2019-11478 SACK Slowness\n  ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n    _SACK_TEST=$(ip6tables --list | grep tcpmss)\n    if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n      sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n      iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    fi\n  fi\n\n  echo local start $(date)\n  _local_ip_rg\n\n  _HA=var/xdrago/monitor/log/hackcheck.archive.log\n  _HX=var/xdrago/monitor/log/hackcheck.archive.x3.log\n  _WA=var/xdrago/monitor/log/scan_nginx.archive.log\n  _WX=var/xdrago/monitor/log/scan_nginx.archive.x3.log\n  _FA=var/xdrago/monitor/log/hackftp.archive.log\n  _FX=var/xdrago/monitor/log/hackftp.archive.x3.log\n\n  echo guard start $(date)\n  _guard_stats\n\n  rm -f /vservers/*/var/xdrago/monitor/log/ssh.log\n  rm -f /vservers/*/var/xdrago/monitor/log/web.log\n  rm -f /vservers/*/var/xdrago/monitor/log/ftp.log\n\n  pkill -9 -f ConfigServer\n  killall sleep &> /dev/null\n  rm -f /etc/csf/csf.error\n  service lfd restart\n  sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n  wait\n  sed -i \"/^$/d\" /etc/csf/csf.allow\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _DHCP_LOG=\"/var/log/daemon.log\"\n  else\n    _DHCP_LOG=\"/var/log/syslog\"\n  fi\n  grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n    if [[ ${_IP} =~ 
^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n      IFS='.' read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n      if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n        echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n      fi\n    fi\n  done\n  if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n    csf -ra &> /dev/null\n    synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  else\n    csf -r &> /dev/null\n  fi\n  ### Linux kernel TCP SACK CVEs mitigation\n  ### CVE-2019-11477 SACK Panic\n  ### CVE-2019-11478 SACK Slowness\n  ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n    _SACK_TEST=$(ip6tables --list | grep tcpmss)\n    if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n      sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n      iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    fi\n  fi\n  rm -f /run/water.pid\n  echo guard fin $(date)\nfi\nntpdate pool.ntp.org > /dev/null 2>&1 &\n_IF_BCP=\"$(pgrep -f duplicity)\"\nif [ ! -e \"/root/.no.swap.clear.cnf\" ]; then\n  swapoff -a\n  if [ -z \"${_IF_BCP}\" ]; then\n    swapon -a\n  fi\nfi\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/checksql.pl",
    "content": "#!/usr/bin/perl\n\n### TODO - rewrite this legacy script in bash\n\n$ENV{'HOME'} = '/root';\n$ENV{'PATH'} = '/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';\n\nuse warnings;\nuse File::Spec;\n\n$| = 1;\n\nif (-f \"/root/.proxy.cnf\") {\n  exit;\n}\n$mailx_test=`s-nail -V 2>&1`;\n$status=\"CLEAN\";\n$fixfile = \"/var/xdrago/acrashsql.sh\";\nsystem(\"rm -f $fixfile\");\n$server=`uname -n`;\nchomp($server);\n$timedate=`date +%y%m%d-%H%M`;\nchomp($timedate);\n$logfile=\"/var/log/boa/mysqlcheck.log\";\nsleep(90);\n$mysqlrootpass=`cat /root/.my.pass.txt`;\nchomp($mysqlrootpass);\nsystem(\"/usr/bin/mysqlcheck -u root -Aa > $logfile\");\n&makeactions;\nsystem(\"touch /var/log/boa/last-run-acrashsql\");\nmy $email;\nif (open my $fh, '<', '/root/.barracuda.cnf') {\n  while (<$fh>) {\n    if (/^\\s*_MY_EMAIL\\s*=\\s*(\\S+)/) {\n      $email = $1;\n      last;\n    }\n  }\n  close $fh;\n}\n$email =~ s/\\\\+@/@/g;\nif ($email && $mailx_test =~ /(built for Linux)/i) {\n  if ($status ne \"CLEAN\") {\n    system('s-nail',\n      '-s', \"SQL check ERROR [$server] $timedate\",\n      $email,\n      '<', $logfile);\n    system(\"bash $fixfile | s-nail -s \\\"SQL REPAIR done [$server] $timedate\\\" $email\");\n  }\n  if ($status ne \"ERROR\") {\n    system('s-nail',\n      '-s', \"SQL check CLEAN [$server] $timedate\",\n      $email,\n      '<', $logfile);\n  }\n}\nsystem(\"rm -f $logfile\");\nexit;\n\n#############################################################################\nsub makeactions\n{\n  if (!-e \"$fixfile\") {\n    system(\"echo \\\"#!/bin/bash\\\" > $fixfile\");\n    system(\"echo \\\" \\\" >> $fixfile\");\n  }\n  local(@MYARR)=`tail --lines=999999999 $logfile 2>&1`;\n  local($maxnumber) = 0;\n  local($sumar) = 0;\n  foreach $line (@MYARR) {\n    if ($line =~ /(Table \\'\\.\\/)/i) {\n      $status=\"ERROR\";\n      local($a, $b, $c, $TABLEX, $rest) = split(/\\s+/,$line);\n      chomp($TABLEX);\n      local($a, $TABLE, $b) = 
split(/\\//,$TABLEX);\n      $TABLE =~ s/[^a-z0-9\\_]//g;\n      if ($TABLE =~ /^[a-z0-9]/) {\n        chomp($line);\n        $li_cnt{$TABLE}++;\n      }\n    }\n  }\n  foreach $TABLE (sort keys %li_cnt) {\n    $sumar = $sumar + $li_cnt{$TABLE};\n    local($thissumar) = $li_cnt{$TABLE};\n    if ($thissumar > $maxnumber) {\n      &repair_this_action($TABLE,$thissumar);\n    }\n  }\n  print \"\\n===[$sumar]\\tGLOBAL===\\n\\n\";\n  undef (%li_cnt);\n}\n\n#############################################################################\nsub repair_this_action\n{\n  local($FIXTABLE,$COUNTER) = @_;\n  print \"$FIXTABLE [$COUNTER] recorded...\\n\";\n  system(\"echo \\\"#-- BELOW --# $FIXTABLE [$COUNTER] recorded...\\\" >> $fixfile\");\n  system(\"echo \\\"/usr/bin/mysqlcheck -u root -r $FIXTABLE\\\" >> $fixfile\");\n  system(\"echo \\\"/usr/bin/mysqlcheck -u root -o $FIXTABLE\\\" >> $fixfile\");\n  system(\"echo \\\"/usr/bin/mysqlcheck -u root -a $FIXTABLE\\\" >> $fixfile\");\n  system(\"echo \\\" \\\" >> $fixfile\");\n}\n\n"
  },
  {
    "path": "aegir/tools/system/clear.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\n\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n    # Sanitize to allow only digits and minus sign\n    export _B_NICE=${_B_NICE//[^0-9-]/}\n\n    # Validate and set default if necessary\n    if ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n      _B_NICE=0\n    fi\n\n    # Clamp the value within -20 to 19\n    if (( _B_NICE < -20 )); then\n      _B_NICE=-20\n    elif (( _B_NICE > 19 )); then\n      _B_NICE=19\n    fi\n\n    renice ${_B_NICE} -p $$ &> /dev/null\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_apt_clean_update() {\n  _os_detection_minimal\n  ${_APT_UPDATE} -qq 2> /dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_FIVE_MINUTES=$(date --date '5 minutes ago' +\"%Y-%m-%d %H:%M:%S\")\nfind /run/fmp_wait.pid -mtime +0 -type f -not -newermt \"${_FIVE_MINUTES}\" -exec rm -rf {} \\; &> /dev/null\nfind /run/restarting_fmp_wait.pid  -mtime +0 -type f -not -newermt \"${_FIVE_MINUTES}\" -exec rm 
-rf {} \\; &> /dev/null\nfind /run/boa_cron_wait.pid  -mtime +0 -type f -not -newermt \"${_FIVE_MINUTES}\" -exec rm -rf {} \\; &> /dev/null\nfind /run/boa*_auto_healing.pid -mtime +0 -type f -not -newermt \"${_FIVE_MINUTES}\" -exec rm -rf {} \\; &> /dev/null\n\n_ONE_HOUR=$(date --date '1 hour ago' +\"%Y-%m-%d %H:%M:%S\")\nfind /run/mysql_restart_running.pid -mtime +0 -type f -not -newermt \"${_ONE_HOUR}\" -exec rm -rf {} \\; &> /dev/null\nfind /run/boa_wait.pid -mtime +0 -type f -not -newermt \"${_ONE_HOUR}\" -exec rm -rf {} \\; &> /dev/null\nfind /run/manage*users.pid  -mtime +0 -type f -not -newermt \"${_ONE_HOUR}\" -exec rm -rf {} \\; &> /dev/null\nfind /run/speed_cleanup.pid  -mtime +0 -type f -not -newermt \"${_ONE_HOUR}\" -exec rm -rf {} \\; &> /dev/null\n\n_THR_HOURS=$(date --date '3 hours ago' +\"%Y-%m-%d %H:%M:%S\")\nfind /run/boa_run.pid -mtime +0 -type f -not -newermt \"${_THR_HOURS}\" -exec rm -rf {} \\; &> /dev/null\nfind /run/*_backup.pid -mtime +0 -type f -not -newermt \"${_THR_HOURS}\" -exec rm -rf {} \\; &> /dev/null\nfind /run/daily-fix.pid -mtime +0 -type f -not -newermt \"${_THR_HOURS}\" -exec rm -rf {} \\; &> /dev/null\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n\n#\n# Find the fastest mirror.\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    [ ! -e \"/run/clear_m.pid\" ] && _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n      if [ -e \"/etc/csf/csf.allow\" ]; then\n        sed -i \"s/.*aegir.*//g\" /etc/csf/csf.allow\n        csf -a 172.235.166.69  eu.files.aegir.cc &> /dev/null\n        csf -a 172.233.219.37  us.files.aegir.cc &> /dev/null\n        csf -a 172.105.168.103 ao.files.aegir.cc &> /dev/null\n        if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n          csf -ra &> /dev/null\n          synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n        else\n          csf -r &> /dev/null\n        fi\n      fi\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_if_reinstall_curl_src() {\n  _CURL_VRN=8.20.0\n  if ! command -v lsb_release &> /dev/null; then\n    apt-get update -qq &> /dev/null\n    apt-get install lsb-release ${_aptYesUnth} -qq &> /dev/null\n  fi\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  [ \"${_OS_CODE}\" = \"wheezy\" ] && _CURL_VRN=7.50.1\n  [ \"${_OS_CODE}\" = \"jessie\" ] && _CURL_VRN=7.71.1\n  [ \"${_OS_CODE}\" = \"stretch\" ] && _CURL_VRN=8.2.1\n  _isCurl=$(curl --version 2>&1)\n  if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n    echo \"OOPS: cURL is broken! Re-installing..\"\n    if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    echo \"curl install\" | dpkg --set-selections 2> /dev/null\n    [ ! -e \"/run/clear_m.pid\" ] && _apt_clean_update\n    # Check for libssl1.0-dev and remove conditionally\n    if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n      apt-get remove libssl1.0-dev -y --purge --auto-remove -qq 2>/dev/null\n    fi\n    apt-get autoremove -y 2> /dev/null\n    apt-get install libssl-dev ${_aptYesUnth} -qq 2> /dev/null\n    apt-get build-dep curl -y 2> /dev/null\n    if [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      apt-get install curl --reinstall ${_aptYesUnth} -qq 2> /dev/null\n    fi\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      echo \"INFO: Installing curl from sources...\"\n      mkdir -p /var/opt\n      rm -rf /var/opt/curl*\n      cd /var/opt\n      wget ${_wgetGet} http://files.aegir.cc/dev/src/curl-${_CURL_VRN}.tar.gz &> /dev/null\n      tar -xzf curl-${_CURL_VRN}.tar.gz &> /dev/null\n      if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n        && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n        _SSL_BINARY=/usr/local/ssl3/bin/openssl\n      else\n        _SSL_BINARY=/usr/local/ssl/bin/openssl\n      fi\n      if [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n        _SSL_PATH=\"/usr/local/ssl3\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n      else\n        _SSL_PATH=\"/usr/local/ssl\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n      fi\n      _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n\n      if [ -e \"${_PKG_CONFIG_PATH}\" ] \\\n        && [ -e \"/var/opt/curl-${_CURL_VRN}\" ]; then\n        cd /var/opt/curl-${_CURL_VRN}\n        LIBS=\"-ldl -lpthread\" PKG_CONFIG_PATH=\"${_PKG_CONFIG_PATH}\" ./configure \\\n          --with-openssl \\\n          
--with-zlib=/usr \\\n          --prefix=/usr/local &> /dev/null\n        make -j $(nproc) --quiet &> /dev/null\n        make --quiet install &> /dev/null\n        ldconfig 2> /dev/null\n      fi\n    fi\n    if [ -f \"/usr/local/bin/curl\" ]; then\n      _isCurl=$(/usr/local/bin/curl --version 2>&1)\n      if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n        echo \"ERRR: /usr/local/bin/curl is broken\"\n      else\n        echo \"GOOD: /usr/local/bin/curl works\"\n      fi\n    fi\n  fi\n}\n\n_check_dns_curl() {\n  _find_fast_mirror_early\n  _if_reinstall_curl_src\n  _CURL_TEST=$(curl -L -k -s \\\n    --max-redirs 10 \\\n    --retry 3 \\\n    --retry-delay 10 \\\n    -I \"http://${_USE_MIR}\" 2> /dev/null)\n  if [[ ! \"${_CURL_TEST}\" =~ \"200 OK\" ]]; then\n    if [[ \"${_CURL_TEST}\" =~ \"unknown option was passed in to libcurl\" ]]; then\n      echo \"ERROR: cURL libs are out of sync! Re-installing again..\"\n      _if_reinstall_curl_src\n    else\n      echo \"ERROR: ${_USE_MIR} is not available, please try later\"\n    fi\n  fi\n}\n\nif [ ! 
-e \"/run/boa_run.pid\" ]; then\n  _check_dns_curl\n  rm -f /tmp/*error*\n  wget -qO- http://${_USE_MIR}/versions/${_tRee}/boa/BOA.sh.txt | bash\n  wait\n  bash /opt/local/bin/autoupboa\n  wait\nfi\n\n_OCT_NR=$(ls /data/disk | wc -l)\n_OCT_NR=$(( _OCT_NR - 1 ))\nfor _OCT in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n  _SITES_NR=0\n  if [ -e \"${_OCT}/config/server_master/nginx/vhost.d\" ]; then\n    _SITES_NR=$(ls ${_OCT}/config/server_master/nginx/vhost.d | wc -l)\n    if [ \"${_SITES_NR}\" -gt 0 ]; then\n      if [ -z \"${_chckSts}\" ]; then\n        _chckSts=\"SNR ${_OCT} ${_SITES_NR} \"\n      else\n        _chckSts=\"SNR ${_OCT} ${_SITES_NR} ${_chckSts} \"\n      fi\n    else\n      _OCT_NR=$(( _OCT_NR - 1 ))\n    fi\n  fi\ndone\nif [ -d \"/data/u\" ]; then\n  _chckSts=\"OCT ${_OCT_NR} ${_chckSts} \"\n  _ALL_SITES_NR=$(ls /data/disk/*/config/server_master/nginx/vhost.d | wc -l)\n  _ALL_SITES_NR=$(( _ALL_SITES_NR - _OCT_NR ))\n  _chckSts=\"SST ${_ALL_SITES_NR} ${_chckSts}\"\n  _chckHst=$(hostname 2>&1)\n  _chckIps=$(hostname -I 2>&1)\n  _checkVn=$(/opt/local/bin/boa version | tr -d \"\\n\" 2>&1)\n  if [[ \"${_checkVn}\" =~ \"===\" ]] || [ -z \"${_checkVn}\" ]; then\n    if [ -e \"/var/log/barracuda_log.txt\" ]; then\n      _checkVn=$(tail --lines=1 /var/log/barracuda_log.txt | tr -d \"\\n\" 2>&1)\n    else\n      _checkVn=\"whereis barracuda_log.txt\"\n    fi\n  fi\n  _crlHead=\"-I -k -s --retry 3 --retry-delay 3\"\n  _urlBpth=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir/tools/bin\"\n  curl ${_crlHead} -A \"${_chckHst} ${_chckIps} ${_checkVn} ${_chckSts}\" \"${_urlBpth}/thinkdifferent\" &> /dev/null\n  wait\nfi\n\n_if_fix_locked_sshd() {\n  _SSH_LOG=\"/var/log/auth.log\"\n  if [ `tail --lines=30 ${_SSH_LOG} \\\n    | grep --count \"error: Bind to port 22\"` -gt 0 ]; then\n    pkill -9 -f /usr/sbin/sshd || true\n    service ssh start\n  fi\n}\n_if_fix_locked_sshd\n\n#setprio &> /dev/null\n\nif [ -e 
\"/root/.remote_backups/schedule/backup_schedule.txt\" ]; then\n  if [ -e \"/root/.remote_backups/paths/paths.txt\" ]; then\n    rm -f /root/.remote_backups/paths/*\n    rm -f /root/.remote_backups/paths/.*\n    [ -x \"/usr/local/bin/dcysetup\" ] && bash /usr/local/bin/dcysetup update &> /dev/null\n  fi\n  _CNT=$(pgrep -fc duplicity)\n  if (( _CNT > 0 )); then\n    echo \"[$(date)] Active duplicity process detected, will try again later...\" >> /var/log/mybackup_waiting_queue.log\n  else\n    [ -x \"/usr/local/bin/dcysetup\" ] && bash /usr/local/bin/dcysetup update &> /dev/null\n    wait\n    [ -x \"/usr/local/bin/mybackup\" ] && nohup /usr/local/bin/mybackup > /dev/null 2>&1 &\n  fi\nfi\n\ntouch /var/log/boa/clear.done.pid\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/conf/SA-CORE-2014-005-D7.patch",
    "content": "diff --git a/includes/database/database.inc b/includes/database/database.inc\nindex f78098b..01b6385 100644\n--- a/includes/database/database.inc\n+++ b/includes/database/database.inc\n@@ -736,7 +736,7 @@ abstract class DatabaseConnection extends PDO {\n     // to expand it out into a comma-delimited set of placeholders.\n     foreach (array_filter($args, 'is_array') as $key => $data) {\n       $new_keys = array();\n-      foreach ($data as $i => $value) {\n+      foreach (array_values($data) as $i => $value) {\n         // This assumes that there are no other placeholders that use the same\n         // name.  For example, if the array placeholder is defined as :example\n         // and there is already an :example_2 placeholder, this will generate\n"
  },
  {
    "path": "aegir/tools/system/conf/control-readme.txt",
"content": "\n###\n### Ægir upgrade on-demand\n###\n### You can now launch an Ægir upgrade to (re)install the platforms listed in\n### the file ~/static/control/platforms.info (see further below) by creating\n### an empty file:\n###\n###   ~/static/control/run-upgrade.pid\n###\n### This file, if it exists, will launch your Ægir upgrade in just a few minutes,\n### and will be automatically deleted afterwards. This means that you can\n### easily upgrade your Ægir instance to install supported platforms\n### even if you don't have root access or are on a hosted BOA system.\n###\n### Note that this pid file will be ignored if there is no platforms.info\n### file, as explained further below.\n###\n\n\n###\n### Super fast site cloning and migration\n###\n### It is now possible to enable blazing fast migrations and cloning, even for\n### sites with complex and giant databases, with this empty control file:\n###\n###  ~/static/control/MyQuick.info\n###\n### By the way, how fast is super-fast? It's faster than you would expect!\n### We have seen it speed up clone and migrate tasks normally taking\n### 1-2 hours to... even 3-6 minutes! 
Yes, that's how fast it is!\n###\n### This file, if it exists, will enable a super fast per-table and parallel DB\n### dump and import, but without leaving a conventional complete database\n### dump file in the site archive normally created by Ægir when you run\n### the backup task as well as the clone, migrate and delete tasks; as a result,\n### the restore task will no longer work.\n###\n### We need to emphasise this again: with this control file present, all normally\n### super slow tasks will become blazing fast, but at the cost of not keeping\n### a complete database dump file in the archive of the site directory\n### where it would otherwise be included.\n###\n### Of course the system still maintains nightly backups of all your sites\n### using the new split sql dump archives, but with this control file present\n### you won't be able to use the restore task in Ægir, because the site archive\n### won't include the database dump -- you can still find that sql dump, split\n### into per-table files, in the backups directory, though, in the subdirectory\n### with a timestamp added, so you can still access it manually, if needed.\n###\n\n\n###\n### Even faster site cloning and migration\n###\n### It is now possible to speed up the already blazing fast migrations and\n### cloning with this empty control file:\n###\n###  ~/static/control/FastTrack.info\n###\n### This file, if it exists, will drastically reduce the number of tasks otherwise\n### launched automatically in preparation for clone and migrate, namely:\n###\n###  1. Both source and target platforms will no longer be verified\n###  2. 
The site will no longer be verified before running clone or migrate\n###\n### Please carefully consider the implications, though, because there are very\n### good reasons for these extra tasks to be launched before running clone or\n### migrate: they make sure that any issues are detected and fixed for you\n### early, and not during the migration or clone, which could otherwise break\n### the site and leave it in a state not easy to fix, especially without root\n### access to the system.\n###\n### The potential reasons to disable these extra tasks with the help of\n### this new control file are twofold:\n###\n###  1. To restore Ægir's own default and much faster behaviour\n###  2. To help those running mass migrations avoid running duplicate tasks\n###\n### It is then your responsibility to run these extra verify tasks when you\n### need to migrate or clone just a single site. If you prefer to have them run\n### for you automatically as before, you can easily restore the previous\n### behaviour:\n###\n###  1. Create an empty ~/static/control/ClassicTrack.info\n###  2. Delete ~/static/control/FastTrack.info\n###\n\n\n###\n### The Ægir version provided by BOA is now fully compatible with PHP 8.5 and\n### 8.4, so both can be used as default versions in the Ægir PHP configuration\n### files ~/static/control/cli.info and ~/static/control/fpm.info\n###\n### !!! 
>>> PHP CAVEATS for Drupal core 7-10 versions:\n###\n###   => https://www.drupal.org/docs/7/system-requirements/php-requirements\n###   => https://www.drupal.org/docs/system-requirements/php-requirements\n###\n###\n### Support for PHP-FPM version switch per Octopus instance (also per site)\n###\n###  ~/static/control/fpm.info\n###\n### This file, if it exists and contains a supported and installed PHP-FPM\n### version, will be used by a system agent running every 2-3 minutes to switch\n### the PHP-FPM version used for serving web requests by this Octopus instance.\n###\n### IMPORTANT: If used, it will switch PHP-FPM for all Drupal sites\n### hosted on the instance, unless the multi-fpm.info control file also exists.\n###\n### Supported values for single PHP-FPM mode which can be written in this file:\n###\n### 8.5\n### 8.4\n### 8.3\n### 8.2\n### 8.1\n### 8.0\n### 7.4\n### 7.3\n### 7.2\n### 7.1\n### 7.0\n### 5.6\n###\n### NOTE: There must be only one line and one value (like: 8.1) in this file.\n### Otherwise it will be ignored.\n###\n### NOTE: If the file doesn't exist, the system will create it and set it to the\n### lowest available PHP version installed, not to the system default version.\n### This is to guarantee backward compatibility for instances installed\n### before the upgrade to BOA-4.1.3, when the default PHP version was 5.6;\n### otherwise, after the upgrade the system would automatically switch such\n### accounts to the new default PHP version, which is 8.1, and this could break\n### most of the hosted sites, never before tested for PHP 8.1 compatibility.\n###\n\n\n###\n### It is now possible to make all installed PHP-FPM versions available\n### simultaneously for sites on the Octopus instance with an additional\n### control file:\n###\n###  ~/static/control/multi-fpm.info\n###\n### This file, if it exists, will switch all sites listed in it to their\n### respective PHP-FPM versions as shown in the example below, while all\n### other sites not listed in multi-fpm.info will 
continue to use the PHP-FPM\n### version defined in fpm.info instead, which can be modified independently.\n###\n### foo.com 8.5\n### bar.com 7.4\n### old.com 5.6\n###\n### NOTE: Each line in the multi-fpm.info file must start with the main site\n### name, followed by a single space, and then the PHP-FPM version to use.\n###\n\n\n###\n### Support for PHP-CLI version switch per Octopus instance (all sites)\n###\n###  ~/static/control/cli.info\n###\n### This file is similar to fpm.info: if it exists and contains a supported\n### and installed PHP version, it will be used by a system agent running every\n### 2-3 minutes to switch the PHP-CLI version for this Octopus instance, but\n### it will do this for all hosted sites. There is no option to switch or\n### override this per hosted site.\n###\n### Supported values which can be written in this file:\n###\n### 8.5\n### 8.4\n### 8.3\n### 8.2\n### 8.1\n### 8.0\n### 7.4\n### 7.3\n### 7.2\n### 7.1\n### 7.0\n### 5.6\n###\n### There must be only one line and one value (like: 8.4) in this control file.\n### Otherwise it will be ignored.\n###\n### NOTE: If the file doesn't exist, the system will create it and set it to the\n### lowest available PHP version installed, not to the system default version.\n### This is to guarantee backward compatibility for instances installed\n### before the upgrade to BOA-4.1.3, when the default PHP version was 5.6;\n### otherwise, after the upgrade the system would automatically switch such\n### accounts to the new default PHP version, which is 8.1, and this could break\n### most of the hosted sites, never before tested for PHP 8.1 compatibility.\n###\n### IMPORTANT: This file will affect only Drush on the command line and Drush\n### in the Ægir backend, used for all tasks on hosted sites, but it will not\n### affect the PHP-CLI version used by Composer on the command line, because\n### Composer is installed globally and not per Octopus account, so it will use\n### the system default PHP version, which is, since BOA-5.0.0, PHP 8.1 and can 
be\n### changed only by changing the system default _PHP_CLI_VERSION in the file\n### /root/.barracuda.cnf and running barracuda upgrade.\n###\n\n\n###\n### Customize Octopus platform list via control file\n###\n###  ~/static/control/platforms.info\n###\n### This file, if it exists and contains a list of symbols used to define\n### supported platforms, allows you to control/override the value of the\n### _PLATFORMS_LIST variable normally defined in the /root/.${_USER}.octopus.cnf\n### file, which can't be modified by the Ægir instance owner without system\n### root access.\n###\n### IMPORTANT: If used, it will replace/override the value defined on initial\n### instance install and all previous upgrades. It takes effect on every future\n### Octopus instance upgrade, which means that you will miss all newly added\n### distributions if they are not also listed in this control file.\n###\n### Supported values which can be written in this file, listed in a single line\n### or one per line:\n###\n\n### Drupal 11.3\n#\n# DE3 — Drupal 11.3 prod/stage/dev\n# CK3 — Commerce v.3\n# CMS — Drupal CMS\n# SCR — Sector\n# THR — Thunder\n# VBX — Varbase 10\n\n### Drupal 11.2\n#\n# DE2 — Drupal 11.2 prod/stage/dev\n\n### Drupal 11.1\n#\n# DE1 — Drupal 11.1 prod/stage/dev\n\n### Drupal 10.6\n#\n# DX6 — Drupal 10.6 prod/stage/dev\n# FOS — farmOS\n# LGV — LocalGov\n# VB9 — Varbase 9\n\n### Drupal 10.5\n#\n# DX5 — Drupal 10.5 prod/stage/dev\n# OCS — OpenCulturas\n\n### Drupal 10.4\n#\n# DX4 — Drupal 10.4 prod/stage/dev\n\n### Drupal 10.3\n#\n# DX3 — Drupal 10.3 prod/stage/dev\n# DXP — DXPR Marketing\n# EZC — EzContent\n\n### Drupal 10.2\n#\n# DX2 — Drupal 10.2 prod/stage/dev\n# OFD — OpenFed\n# SOC — Social\n\n### Drupal 10.1\n#\n# DX1 — Drupal 10.1 prod/stage/dev\n# CK2 — Commerce v.2\n\n### Drupal 10.0\n#\n# DX0 — Drupal 10.0 prod/stage/dev\n\n### Drupal 9\n#\n# DL9 — Drupal 9 prod/stage/dev\n# OLS — OpenLucius\n# OPG — Opigno LMS\n\n### Drupal 7\n#\n# DL7 — Drupal 7 prod/stage/dev\n# CK1 — Commerce 
v.1\n# UC7 — Ubercart\n\n### Drupal 6\n#\n# DL6 — Pressflow (LTS) prod/stage/dev\n# UC6 — Ubercart\n\n### You can also use the special keyword 'ALL' instead of any other symbols to\n### have all available platforms installed, including those newly added in all\n### future BOA system releases.\n###\n### Examples:\n#\n# DX2 DX3 SOC UC7\n# (or)\n# ALL\n\n###\n### IMPORTANT: Supported Drupal core versions and distributions have different\n### PHP version requirements, while not all of the ten currently supported\n### PHP versions are installed by default.\n###\n### Ensure that you have the corresponding PHP versions installed with barracuda\n### before attempting to install older Drupal versions and distributions.\n###\n### On hosted BOA, contact your host if you need any legacy PHP installed again.\n###\n\n\n###\n### Support for forced Drush cache clear in the Ægir backend\n###\n###  ~/static/control/clear-drush-cache.info\n###\n### The Octopus instance will pause all scheduled tasks in its queue if it\n### detects a platform build from a makefile in progress, to make sure\n### that no other running task could break the build.\n###\n### This is great, until there is a broken build and Drush fails\n### to clean up all leftovers from its .tmp/cache directory, which in turn\n### will pause all tasks in the queue for up to 24-48 hours, until the cache\n### directory is automatically purged by the daily cleanup tasks, which are\n### designed to not touch anything not old enough (24 hours at minimum)\n### so as not to break any running builds.\n###\n### If you need to unlock the task queue by forcefully removing everything\n### from the Ægir backend Drush cache, you can create an empty control file:\n### ~/static/control/clear-drush-cache.info\n###\n\n\n###\n### Support for New Relic monitoring with per Octopus instance license key\n###\n###  ~/static/control/newrelic.info\n###\n### This feature will disable global New Relic monitoring by deactivating\n### the server-level license key, so the system can safely auto-enable or\n### auto-disable monitoring every 5 minutes per Octopus instance -- for all\n### sites hosted on the given instance -- when a valid license key is present\n### in the special ~/static/control/newrelic.info control file.\n###\n### Please note that a valid license key is a 40-character hexadecimal string\n### that New Relic provides when you sign up for an account.\n###\n### To disable New Relic monitoring for the Octopus instance, simply delete\n### its ~/static/control/newrelic.info control file and wait a few minutes.\n###\n### Please note that on a self-hosted BOA you still need to add your valid\n### license key as _NEWRELIC_KEY in the /root/.barracuda.cnf file and run\n### a system upgrade with at least 'barracuda up-lts' first. This step is\n### not required on the Omega8.cc hosted service, where the New Relic agent\n### is already pre-installed for you.\n###\n\n\n###\n### Support for Ruby Gems to install Compass or NPM to install Gulp/Bower\n###\n###  ~/static/control/compass.info\n###\n### Details: https://github.com/omega8cc/boa/blob/5.x-dev/docs/GEM.md\n###\n"
  },
  {
    "path": "aegir/tools/system/conf/https_proxy_le.conf",
    "content": "###\n### Secure HTTPS proxy for _domain_name (START) _oct_uid\n###\nserver {\n  listen                       *:443 ssl;\n  listen                       *:443 quic;\n  http2                        on;\n  http3                        on;\n  http3_hq                     on;\n  server_name                  _;\n  ssl_dhparam                  /etc/ssl/private/nginx-wild-ssl.dhp;\n  ssl_certificate_key          /data/disk/_oct_uid/config/server_master/ssl.d/_domain_name/openssl.key;\n  ssl_certificate              /data/disk/_oct_uid/config/server_master/ssl.d/_domain_name/openssl_chain.crt;\n  ssl_trusted_certificate      /data/disk/_oct_uid/tools/le/certs/_domain_name/chain.pem;\n  access_log                   off;\n  log_not_found                off;\n  location / {\n    proxy_pass                 https://_target_ip;\n    proxy_redirect             off;\n    gzip_vary                  off;\n    proxy_buffering            off;\n    proxy_set_header           Host              $host;\n    proxy_set_header           X-Real-IP         $remote_addr;\n    proxy_set_header           X-Forwarded-By    $server_addr:$server_port;\n    proxy_set_header           X-Forwarded-For   $proxy_add_x_forwarded_for;\n    proxy_set_header           X-Local-Proxy     $scheme;\n    proxy_set_header           X-Forwarded-Proto $scheme;\n    proxy_pass_header          Set-Cookie;\n    proxy_pass_header          Cookie;\n    proxy_pass_header          X-Accel-Expires;\n    proxy_pass_header          X-Accel-Redirect;\n    proxy_pass_header          X-This-Proto;\n    proxy_connect_timeout      180;\n    proxy_send_timeout         180;\n    proxy_read_timeout         180;\n  }\n}\n###\n### Secure HTTPS proxy (END)\n###\n"
  },
  {
    "path": "aegir/tools/system/conf/lshell.conf",
    "content": "# lshell.py configuration file\n#\n# $Id: lshell.conf,v 1.27 2010-10-18 19:05:17 ghantoos Exp $\n\n[global]\n##  log directory (default /var/log/lshell/ )\nlogpath         : /var/log/lsh/\n##  set log level to 0, 1, 2, 3 or 4  (0: no logs, 1: least verbose,\n##                                                 4: log all commands)\nloglevel        : 4\n##  configure log file name (default is %u i.e. username.log)\n#logfilename     : %y%m%d-%u\n#logfilename     : syslog\n\n##  in case you are using syslog, you can choose your logname\n#syslogname      : myapp\n\n##  Set path to sudo noexec library. This path is usually autodetected, only\n##  set this variable to use alternate path. If set and the shared object is\n##  not found, lshell will exit immediately. Otherwise, please check your logs\n##  to verify that a standard path is detected.\n##\n##  while this should not be a common practice, setting this variable to an empty\n##  string will disable LD_PRELOAD prepend of the commands. This is done at your\n##  own risk, as lshell becomes easily breached using some commands like find(1)\n##  using the -exec flag.\n#path_noexec     : '/usr/libexec/sudo_noexec.so'\n\n## include a directory containing multiple configuration files. These files\n## can only contain default/user/group configuration. The global configuration will\n## only be loaded from the default configuration file.\n## e.g. splitting users into separate files\n#include_dir     : /etc/lshell.d/*.conf\n\n[default]\n##  a list of the allowed commands without execution privileges or 'all' to\n##  allow all commands in user's PATH\n##\n##  if  sudo(8) is installed and sudo_noexec.so is available, it will be loaded\n##  before running every command, preventing it from  running  further  commands\n##  itself. If not available, beware of commands like vim/find/more/etc. that\n##  will allow users to execute code (e.g. /bin/sh) from within the application,\n##  thus easily escaping lshell. 
See variable 'path_noexec' to use an alternative\n##  path to library.\n#allowed         : ['echo test'] # this will allow only the command 'echo test'\nallowed         : ['bower', 'bundle', 'bzip2', 'bzr', 'cat', 'cd', 'chmod', 'clear', 'compass', 'composer', 'cp', 'curl', 'cvs', 'diff', 'drush', 'drush10', 'drush11', 'drush8', 'du', 'echo', 'env', 'find', 'gem-dependency', 'gem-environment', 'gem-list', 'gem-query', 'gem-search', 'gem', 'git-receive-pack', 'git-upload-archive', 'git-upload-pack', 'git', 'grep', 'grunt', 'guard', 'gulp', 'gunzip', 'gzip', 'help', 'history', 'll', 'lpath', 'ls', 'mc', 'mkdir', 'mv', 'mydumper', 'myloader', 'mysql', 'mysqldump', 'nano', 'node', 'npm', 'npx', 'openssl', 'passwd', 'patch', 'ping', 'pwd', 'mybackup', 'rm', 'rmdir', 'rsync', 's4cmd', 'sass-convert', 'sass', 'scp', 'scss', 'sed', 'sqlmagic', 'ssh-keygen', 'ssh', 'svn', 'tar', 'touch', 'true', 'unzip', 'vdrush', 'vendor/drush/drush/drush.php', 'vi', 'vim', 'wget', 'whoami', 'zstd', '1']\n\n##  A list of the allowed commands that are permitted to execute other\n##  programs (e.g. shell scripts with exec(3)). Setting this variable to 'all'\n##  is NOT allowed. Warning do not put here any command that can execute\n##  arbitrary commands (e.g. 
find, vim, xargs)\n##\n##  Important: commands defined in 'allowed_shell_escape' override their\n##  definition in the 'allowed' variable\nallowed_shell_escape          : ['bower', 'bundle', 'bzip2', 'compass', 'composer', 'curl', 'drush', 'drush10', 'drush11', 'drush8', 'env', 'gem-dependency', 'gem-environment', 'gem-list', 'gem-query', 'gem-search', 'gem', 'git-receive-pack', 'git-upload-archive', 'git-upload-pack', 'git', 'grunt', 'guard', 'gulp', 'gunzip', 'gzip', 'mysql', 'mysqldump', 'node', 'npm', 'npx', 'mybackup', 'rsync', 'sass-convert', 'sass', 'scss', 'sqlmagic', 'ssh', 'tar', 'true', 'unzip', 'vdrush', 'vendor/drush/drush/drush.php', 'zstd', '1']\n\n##  A list of allowed file extensions that can be provided in the command line.\n##  If a list of allowed extensions is provided, all other file extensions will be disallowed.\n#allowed_file_extensions : ['.tmp', '.log']\n\n##  a list of forbidden character or commands\nforbidden       : ['--alias-path', '--use-existing', ';', '`', '$(', '${', 'core-cli', 'drush archive-restore', 'drush arr', 'drush core-config', 'drush core-execute', 'drush core-quick-drupal', 'drush ev', 'drush exec', 'drush php', 'drush qd', 'drush rs', 'drush runserver', 'drush scr', 'drush sha', 'drush shell-alias', 'drush si', 'drush site-ssh', 'drush sql-create', 'drush ssh', 'drush sup', 'drush10 archive-restore', 'drush10 arr', 'drush10 core-config', 'drush10 core-execute', 'drush10 core-quick-drupal', 'drush10 ev', 'drush10 exec', 'drush10 php', 'drush10 qd', 'drush10 rs', 'drush10 runserver', 'drush10 scr', 'drush10 sha', 'drush10 shell-alias', 'drush10 si', 'drush10 site-ssh', 'drush10 sql-create', 'drush10 ssh', 'drush10 sup', 'drush11 archive-restore', 'drush11 arr', 'drush11 core-config', 'drush11 core-execute', 'drush11 core-quick-drupal', 'drush11 ev', 'drush11 exec', 'drush11 php', 'drush11 qd', 'drush11 rs', 'drush11 runserver', 'drush11 scr', 'drush11 sha', 'drush11 shell-alias', 'drush11 si', 'drush11 site-ssh', 
'drush11 sql-create', 'drush11 ssh', 'drush11 sup', 'drush8 archive-restore', 'drush8 arr', 'drush8 core-config', 'drush8 core-execute', 'drush8 core-quick-drupal', 'drush8 ev', 'drush8 exec', 'drush8 php', 'drush8 qd', 'drush8 rs', 'drush8 runserver', 'drush8 scr', 'drush8 sha', 'drush8 shell-alias', 'drush8 si', 'drush8 site-ssh', 'drush8 sql-create', 'drush8 ssh', 'drush8 sup', 'hosting_db_server', 'hostmaster', 'master_db', 'os.system', 'php-cli', 'php-script', 'pm-updatecode', 'self-update', 'selfupdate', 'server_localhost', 'server_master', 'shell', 'site-install', 'site-upgrade']\n\n##  a list of allowed command to use with sudo(8)\n##  if set to ´all', all the 'allowed' commands will be accessible through sudo(8)\n#sudo_commands   : ['ls', 'more']\n\n##  number of warnings when user enters a forbidden value before getting\n##  exited from lshell, set to -1 to disable.\nwarning_counter : 3\n\n##  command aliases list (similar to bash’s alias directive)\naliases         : {'1':'true', '2':'true', 'drush dbup':'drush8 updatedb', 'drush mup':'drush8 pm-update', 'drush mupc':'drush8 pm-updatecode', 'drush mups':'drush8 pm-updatestatus', 'drush up':'thinkdifferent', 'drush upc':'thinkdifferent', 'drush updatedb':'thinkdifferent', 'drush updb':'thinkdifferent', 'drush':'drush8', 'du':'du -s -h', 'env':'true', 'gem-dependency':'gem dependency', 'gem-environment':'gem environment', 'gem-list':'gem list', 'gem-query':'gem query', 'gem-search':'gem search', 'll':'ls -l --color=auto', 'mc':'mc -u', 'nano':'rnano', 'vdrush':'vendor/drush/drush/drush.php', 'vi':'rvim', 'vim':'rvim'}\n\n##  introduction text to print (when entering lshell)\nintro           : \"\\n      ======== Welcome to the Ægir, Drush and Compass Shell ========\\n\\n         Type '?' or 'help' to get the list of allowed commands\\n             Note that not all Drush commands are available\\n\\n       Use Gem and Bundler to manage all your Compass gems! 
Example:\\n                   `gem install --conservative compass`\\n\\n              Use NPM to manage all your packages! Example:\\n                        `npm install -g gulp`\\n\\n      To initialize Ruby use control file and re-login after 5 minutes\\n                 `touch ~/static/control/compass.info`\\n\"\n\n##  configure your prompt using %u or %h (default: username)\n#prompt          : \"%u@%h\"\n\n##  set sort prompt current directory update (default: 0)\nprompt_short    : 1\n\n##  a value in seconds for the session timer\n#timer           : 5\n\n##  list of path to restrict the user \"geographicaly\"\n##  warning: many commands like vi and less allow to break this restriction\n#path            : ['/home/bla/','/etc']\n\n##  set the home folder of your user. If not specified the home_path is set to\n##  the $HOME environment variable\n#home_path       : '/home/bla/'\n\n##  update the environment variable $PATH of the user\n#env_path        : ':/usr/local/bin'\n\n##  a list of path; all executable files inside these path will be allowed\n#allowed_cmd_path: ['/home/bla/bin','/home/bla/stuff/libexec']\n\n##  add environment variables\n#env_vars        : {'TERM':'xterm+256color'}\n\n##  add environment variables from file\n##  file format is: export key=value, one per line\n#env_vars_files  : ['$HOME/.lshell.env']\n\n##  allow or forbid the use of scp (set to 1 or 0)\nscp             : 1\n\n## forbid scp upload\nscp_upload      : 1\n\n## forbid scp download\nscp_download    : 1\n\n##  allow of forbid the use of sftp (set to 1 or 0)\n##  this option will not work if you are using OpenSSH's internal-sftp service\nsftp            : 1\n\n##  list of command allowed to execute over ssh (e.g. 
rsync, rdiff-backup, etc.)\noverssh         : ['cd', 'compass', 'composer', 'cp', 'drush', 'drush10', 'drush11', 'drush8', 'env', 'git-receive-pack', 'git-upload-archive', 'git-upload-pack', 'git', 'grep', 'ls', 'mv', 'mydumper', 'myloader', 'mysql', 'mysqldump', 'rm', 'rsync', 'scp', 'ssh-add', 'true', '1', '2']\n\n##  logging strictness. If set to 1, any unknown command is considered as\n##  forbidden, and user's warning counter is increased. If set to 0, command is\n##  considered as unknown, and user is only warned (i.e. *** unknown syntax)\nstrict          : 1\n\n##  force files sent through scp to a specific directory\n#scpforce        : '/home/bla/uploads/'\n\n##  Enable support for WinSCP with scp mode (NOT sftp)\n##  When enabled, the following parameters will be overridden:\n##    - scp_upload: 1 (uses scp(1) from within session)\n##    - scp_download: 1 (uses scp(1) from within session)\n##    - scpforce - Ignore (uses scp(1) from within session)\n##    - forbidden: -[';']\n##    - allowed: +['scp', 'env', 'pwd', 'groups', 'unset', 'unalias']\n#winscp: 0\n\n##  history file maximum size\n#history_size     : 100\n\n##  set history file name (default is /home/%u/.lhistory)\n#history_file     : \"/home/%u/.lshell_history\"\n\n##  define the script to run at user login\n#login_script     : \"/path/to/myscript.sh\"\n\n## disable user exit, this could be useful when lshell is spawned from another\n## non-restricted shell (e.g. 
bash)\n#disable_exit      : 0\n\n[grp:ltd-shell]\nforbidden       : + ['--destination']\n\n[grp:ltd-shell-more]\nallowed         : + ['php56', 'php74', 'php81', 'php82', 'php83', 'php84', 'php85']\naliases         : {'1':'true', '2':'true', 'drush dbup':'drush8 updatedb', 'drush mup':'drush8 pm-update', 'drush mupc':'drush8 pm-updatecode', 'drush mups':'drush8 pm-updatestatus', 'drush up':'thinkdifferent', 'drush upc':'thinkdifferent', 'drush updatedb':'thinkdifferent', 'drush updb':'thinkdifferent', 'drush':'drush8', 'du':'du -s -h', 'env':'true', 'gem-dependency':'gem dependency', 'gem-environment':'gem environment', 'gem-list':'gem list', 'gem-query':'gem query', 'gem-search':'gem search', 'll':'ls -l --color=auto', 'mc':'mc -u', 'nano':'rnano', 'php56':'/opt/php56/bin/php', 'php74':'/opt/php74/bin/php', 'php81':'/opt/php81/bin/php', 'php82':'/opt/php82/bin/php', 'php83':'/opt/php83/bin/php', 'php84':'/opt/php84/bin/php', 'php85':'/opt/php85/bin/php', 'vdrush':'vendor/drush/drush/drush.php', 'vi':'rvim', 'vim':'rvim'}\n"
  },
  {
    "path": "aegir/tools/system/conf/pln_proxy.conf",
    "content": "###\n### Plain HTTP proxy (START) _oct_uid _oct_mail\n###\nserver {\n  listen                       _dedicated_ip:80;\n  server_name                  _dedicated_sn;\n  access_log                   on;\n  log_not_found                off;\n  ###\n  ### Optional permanent redirect to HTTPS per domain/regex\n  ###\n  if ($host ~* ^(www\\.)?(foo\\.com)$) {\n    return 301 https://$host$request_uri;\n  }\n  location / {\n    proxy_pass                 http://_target_ip;\n    proxy_redirect             off;\n    gzip_vary                  off;\n    proxy_buffering            off;\n    proxy_set_header           Host              $host;\n    proxy_set_header           X-Real-IP         $remote_addr;\n    proxy_set_header           X-Forwarded-By    $server_addr:$server_port;\n    proxy_set_header           X-Forwarded-For   $proxy_add_x_forwarded_for;\n    proxy_set_header           X-Local-Proxy     $scheme;\n    proxy_pass_header          Set-Cookie;\n    proxy_pass_header          Cookie;\n    proxy_pass_header          X-Accel-Expires;\n    proxy_pass_header          X-Accel-Redirect;\n    proxy_pass_header          X-This-Proto;\n    proxy_connect_timeout      180;\n    proxy_send_timeout         180;\n    proxy_read_timeout         180;\n  }\n}\n###\n### Plain HTTP proxy (END)\n###\n"
  },
  {
    "path": "aegir/tools/system/conf/proxy.conf",
    "content": "###\n### Plain HTTP proxy (START)\n###\nserver {\n  listen                       *:80;\n  server_name                  _;\n  access_log                   off;\n  log_not_found                off;\n  location / {\n    proxy_pass                 http://_target_ip;\n    proxy_redirect             off;\n    gzip_vary                  off;\n    proxy_buffering            off;\n    proxy_set_header           Host              $host;\n    proxy_set_header           X-Real-IP         $remote_addr;\n    proxy_set_header           X-Forwarded-By    $server_addr:$server_port;\n    proxy_set_header           X-Forwarded-For   $proxy_add_x_forwarded_for;\n    proxy_set_header           X-Local-Proxy     $scheme;\n    proxy_pass_header          Set-Cookie;\n    proxy_pass_header          Cookie;\n    proxy_pass_header          X-Accel-Expires;\n    proxy_pass_header          X-Accel-Redirect;\n    proxy_pass_header          X-This-Proto;\n    proxy_connect_timeout      180;\n    proxy_send_timeout         180;\n    proxy_read_timeout         180;\n  }\n}\n###\n### Plain HTTP proxy (END)\n###\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. The item IDs are in general constructed as follows:\n   Search API:\n     $document->id = $index_id . '-' . $item_id;\n   Apache Solr Search Integration:\n     $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #1 first in searches for \"example query\": -->\n<!--\n <query text=\"example query\">\n  <doc id=\"default_node_index-1\" />\n  <doc id=\"7v3jsc/node/1\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/mapping-ISOLatin1Accent.txt",
    "content": "# This file contains character mappings for the default fulltext field type.\n# The source characters (on the left) will be replaced by the respective target\n# characters before any other processing takes place.\n# Lines starting with a pound character # are ignored.\n#\n# For sensible defaults, use the mapping-ISOLatin1Accent.txt file distributed\n# with the example application of your Solr version.\n#\n# Examples:\n#   \"À\" => \"A\"\n#   \"\\u00c4\" => \"A\"\n#   \"\\u00c4\" => \"\\u0041\"\n#   \"æ\" => \"ae\"\n#   \"\\n\" => \" \"\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/protwords.txt",
    "content": "#-----------------------------------------------------------------------\n# This file blocks words from being operated on by the stemmer and word delimiter.\n&amp;\n&lt;\n&gt;\n&#039;\n&quot;\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n For more information, on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n-->\n\n<schema name=\"drupal-4.3-solr-4.x\" version=\"1.3\">\n    <!-- attribute \"name\" is the name of this schema and is only used for display purposes.\n         Applications should change this to reflect the nature of the search collection.\n         version=\"1.2\" is Solr's version number for the schema syntax and semantics.  It should\n         not normally be changed by applications.\n         1.0: multiValued attribute did not exist, all fields are multiValued by nature\n         1.1: multiValued attribute introduced, false by default\n         1.2: omitTermFreqAndPositions attribute introduced, true by default except for text fields.\n         1.3: removed optional field compress feature\n       -->\n  <types>\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in the\n       org.apache.solr.analysis package.\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       - StrField and TextField support an optional compressThreshold which\n       limits compression (if enabled in the derived fields) to values which\n       exceed a certain size (in characters).\n    -->\n    <fieldType name=\"string\" class=\"solr.StrField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->\n    <fieldtype name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The optional sortMissingLast and sortMissingFirst attributes are\n         currently supported on types that are sorted internally as strings.\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!-- numeric field types that can be sorted, but are not optimized for range queries -->\n    <fieldType name=\"integer\" class=\"solr.TrieIntField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    
<fieldType name=\"float\" class=\"solr.TrieFloatField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"long\" class=\"solr.TrieLongField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"double\" class=\"solr.TrieDoubleField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <!--\n      Note:\n      These should only be used for compatibility with existing indexes (created with older Solr versions)\n      or if \"sortMissingFirst\" or \"sortMissingLast\" functionality is needed. Use Trie based fields instead.\n\n      Numeric field types that manipulate the value into\n      a string value that isn't human-readable in its internal form,\n      but with a lexicographic ordering the same as the numeric ordering,\n      so that range queries work correctly.\n    -->\n    <fieldType name=\"sint\" class=\"solr.TrieIntField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"sfloat\" class=\"solr.TrieFloatField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"slong\" class=\"solr.TrieLongField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"sdouble\" class=\"solr.TrieDoubleField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!--\n     Numeric field types that index each value at various levels of precision\n     to accelerate range queries when the number of values between the range\n     endpoints is large. 
See the javadoc for NumericRangeQuery for internal\n     implementation details.\n\n     Smaller precisionStep values (specified in bits) will lead to more tokens\n     indexed per value, slightly larger index size, and faster range queries.\n     A precisionStep of 0 disables indexing at different precision levels.\n    -->\n    <fieldType name=\"tint\" class=\"solr.TrieIntField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tfloat\" class=\"solr.TrieFloatField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tlong\" class=\"solr.TrieLongField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tdouble\" class=\"solr.TrieDoubleField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"pfloat\" class=\"solr.FloatField\" omitNorms=\"true\"/>\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\" valType=\"pfloat\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... 
Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the DateField javadocs for more information.\n      -->\n    <fieldType name=\"date\" class=\"solr.DateField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- A Trie based date field for faster date range queries and date faceting. -->\n    <fieldType name=\"tdate\" class=\"solr.TrieDateField\" omitNorms=\"true\" precisionStep=\"6\" positionIncrementGap=\"0\"/>\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- A text field that uses WordDelimiterFilter to enable splitting and 
matching of\n        words on case-change, alpha numeric boundaries, and non-alphanumeric chars,\n        so that a query of \"wifi\" or \"wi fi\" could match a document containing \"Wi-Fi\".\n        Synonyms and stopwords are customized by external files, and stemming is enabled.\n        Duplicate tokens at the same position (which may result from Stemmed Synonyms or\n        WordDelim parts) are removed.\n        -->\n    <fieldType name=\"text\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <!-- in this example, we will only use synonyms at query time\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"index_synonyms.txt\" ignoreCase=\"true\" expand=\"false\"/>\n        -->\n        <!-- Case insensitive stop word removal.\n          add enablePositionIncrements=true in both the index and query\n          analyzers to leave a 'gap' for more accurate phrase queries.\n        -->\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter 
class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                
generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"1\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- An unstemmed text field - good if one does not know the language of the field -->\n    <fieldType name=\"text_und\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\" enablePositionIncrements=\"true\" />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n     
           generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- Edge N gram type - for example for matching against queries with results\n        KeywordTokenizer leaves input string intact as a single term.\n        see: http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/\n      -->\n    <fieldType name=\"edge_n2_kw_text\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.EdgeNGramFilterFactory\" minGramSize=\"2\" maxGramSize=\"25\" />\n      </analyzer>\n      <analyzer 
type=\"query\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n    <!--  Setup simple analysis for spell checking -->\n\n    <fieldType name=\"textSpell\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\" />\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"4\" max=\"20\" />\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\" />\n      </analyzer>\n    </fieldType>\n\n    <!-- This is an example of using the KeywordTokenizer along\n         With various TokenFilterFactories to produce a sortable field\n         that does not include some properties of the source text\n      -->\n    <fieldType name=\"sortString\" class=\"solr.TextField\" sortMissingLast=\"true\" omitNorms=\"true\">\n      <analyzer>\n        <!-- KeywordTokenizer does no actual tokenizing, so the entire\n            input string is preserved as a single token\n          -->\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <!-- The LowerCase TokenFilter does what you expect, which can be\n            when you want your sorting to be case insensitive\n          -->\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <!-- The TrimFilter removes any leading or trailing whitespace -->\n        <filter class=\"solr.TrimFilterFactory\" />\n        <!-- The PatternReplaceFilter gives you the flexibility to use\n            Java Regular expression to replace any sequence of characters\n            matching a pattern with an arbitrary replacement string,\n            which may include back refrences to portions of the orriginal\n            string matched by the pattern.\n\n            See the Java Regular 
Expression documentation for more\n            information on pattern and replacement string syntax.\n\n            http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html\n\n        <filter class=\"solr.PatternReplaceFilterFactory\"\n               pattern=\"(^\\p{Punct}+)\" replacement=\"\" replace=\"all\"\n        />\n        -->\n      </analyzer>\n    </fieldType>\n\n    <!-- A random sort type -->\n    <fieldType name=\"rand\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- since fields of this type are by default not stored or indexed, any data added to\n         them will be ignored outright\n      -->\n    <fieldtype name=\"ignored\" stored=\"false\" indexed=\"false\" class=\"solr.StrField\" />\n\n    <!-- Begin added types to use features in Solr 3.4+ -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"tdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. 
-->\n    <fieldType name=\"location\" class=\"solr.LatLonType\" subFieldType=\"tdouble\"/>\n\n    <!-- A Geohash is a compact representation of a latitude longitude pair in a single field.\n         See http://wiki.apache.org/solr/SpatialSearch\n     -->\n    <fieldtype name=\"geohash\" class=\"solr.GeoHashField\"/>\n    <!-- End added Solr 3.4+ types -->\n\n  </types>\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  <xi:include href=\"schema_extra_types.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <fields>\n    <!-- Valid attributes for fields:\n      name: mandatory - the name for the field\n      type: mandatory - the name of a previously defined type from the <types> section\n      indexed: true if this field should be indexed (searchable or sortable)\n      stored: true if this field should be retrievable\n      compressed: [false] if this field should be stored using gzip compression\n       (this will only apply if the field type is compressable; among\n       the standard field types, only TextField and StrField are)\n      multiValued: true if this field may contain multiple values per document\n      omitNorms: (expert) set to true to omit the norms associated with\n       this field (this disables length normalization and index-time\n       boosting for the field, and saves some memory).  Only full-text\n       fields or fields that need an index-time boost need norms.\n    -->\n\n    <!-- The document id is usually derived from a site-spcific key (hash) and the\n      entity type and ID like:\n      Search Api :\n        The format used is $document->id = $index_id . '-' . $item_id\n      Apache Solr Search Integration\n        The format used is $document->id = $site_hash . '/' . $entity_type . '/' . 
$entity->id;\n    -->\n    <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" />\n\n    <!-- Add Solr Cloud version field as mentioned in\n         http://wiki.apache.org/solr/SolrCloud#Required_Config\n    -->\n    <field name=\"_version_\" type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n\n    <!-- Search Api specific fields -->\n    <!-- item_id contains the entity ID, e.g. a node's nid. -->\n    <field name=\"item_id\"  type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- index_id is the machine name of the search index this entry belongs to. -->\n    <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- Since sorting by ID is explicitly allowed, store item_id also in a sortable way. -->\n    <copyField source=\"item_id\" dest=\"sort_search_api_id\" />\n\n    <!-- Apache Solr Search Integration specific fields -->\n    <!-- entity_id is the numeric object ID, e.g. Node ID, File ID -->\n    <field name=\"entity_id\"  type=\"long\" indexed=\"true\" stored=\"true\" />\n    <!-- entity_type is 'node', 'file', 'user', or some other Drupal object type -->\n    <field name=\"entity_type\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- bundle is a node type, or as appropriate for other entity types -->\n    <field name=\"bundle\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"bundle_name\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"url\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <!-- label is the default field for a human-readable string for this entity (e.g. 
the title of a node) -->\n    <field name=\"label\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- The string version of the title is used for sorting -->\n    <copyField source=\"label\" dest=\"sort_label\"/>\n\n    <!-- content is the default field for full text search - dump crap here -->\n    <field name=\"content\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n    <field name=\"teaser\" type=\"text\" indexed=\"false\" stored=\"true\"/>\n    <field name=\"path\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"path_alias\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n\n    <!-- These are the fields that correspond to a Drupal node. The beauty of having\n      Lucene store title, body, type, etc., is that we retrieve them with the search\n      result set and don't need to go to the database with a node_load. -->\n    <field name=\"tid\"  type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n    <field name=\"taxonomy_names\" type=\"text\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n    <!-- Copy terms to a single field that contains all taxonomy term names -->\n    <copyField source=\"tm_vid_*\" dest=\"taxonomy_names\"/>\n\n    <!-- Here, default is used to create a \"timestamp\" field indicating\n         when each document was indexed.-->\n    <field name=\"timestamp\" type=\"tdate\" indexed=\"true\" stored=\"true\" default=\"NOW\" multiValued=\"false\"/>\n\n    <!-- This field is used to build the spellchecker index -->\n    <field name=\"spell\" type=\"textSpell\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- copyField commands copy one field to another at the time a document\n         is added to the index.  
It's used either to index the same field differently,\n         or to add multiple fields to the same field for easier/faster searching.  -->\n    <copyField source=\"label\" dest=\"spell\"/>\n    <copyField source=\"content\" dest=\"spell\"/>\n\n    <copyField source=\"ts_*\" dest=\"spell\"/>\n    <copyField source=\"tm_*\" dest=\"spell\"/>\n\n    <!-- Dynamic field definitions.  If a field name is not found, dynamicFields\n         will be used if the name matches any of the patterns.\n         RESTRICTION: the glob-like pattern in the name attribute must have\n         a \"*\" only at the start or the end.\n         EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n         Longer patterns will be matched first.  If equal size patterns\n         both match, the first appearing in the schema will be used.  -->\n\n    <!-- A set of fields to contain text extracted from HTML tag contents which we\n         can boost at query time. -->\n    <dynamicField name=\"tags_*\" type=\"text\"   indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n\n    <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n         the last letter is 's' for single valued, 'm' for multi-valued -->\n\n    <!-- We use long for integer since 64 bit ints are now common in PHP. 
-->\n    <dynamicField name=\"is_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"im_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of floats can be saved in a regular float field -->\n    <dynamicField name=\"fs_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fm_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of doubles can be saved in a regular double field -->\n    <dynamicField name=\"ps_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pm_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of booleans can be saved in a regular boolean field -->\n    <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <!-- Regular text (without processing) can be stored in a string field-->\n    <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Normal text fields are for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"ts_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    <dynamicField name=\"tm_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- Unstemmed text fields for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"tus_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    
<dynamicField name=\"tum_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- These text fields omit norms - useful for extracted text like taxonomy_names -->\n    <dynamicField name=\"tos_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n    <dynamicField name=\"tom_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- Special-purpose text fields -->\n    <dynamicField name=\"tes_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"false\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tem_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"true\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- trie dates are preferred, so give them the 2 letter prefix -->\n    <dynamicField name=\"ds_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"dm_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"its_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"itm_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"fts_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ftm_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pts_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ptm_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" 
multiValued=\"true\"/>\n    <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n         a small image in a search result using the data URI scheme -->\n    <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a date rather than tdate is needed for sortMissingLast -->\n    <dynamicField name=\"dds_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ddm_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Sortable fields, good for sortMissingLast support &\n         We use long for integer since 64 bit ints are now common in PHP. -->\n    <dynamicField name=\"iss_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ism_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a sfloat rather than tfloat is needed for sortMissingLast -->\n    <dynamicField name=\"fss_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fsm_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pss_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"psm_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 
32 bit on 64 arch -->\n    <dynamicField name=\"hs_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hm_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hss_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hsm_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hts_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"htm_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n    <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Begin added fields to use features in Solr 3.4+\n         http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n    <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"geos_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"geom_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- External file fields -->\n    <dynamicField name=\"eff_*\" type=\"file\"/>\n    <!-- End added fields for Solr 3.4+ -->\n\n    <!-- Sortable version of the dynamic string field -->\n    
<dynamicField name=\"sort_*\" type=\"sortString\" indexed=\"true\" stored=\"false\"/>\n    <copyField source=\"ss_*\" dest=\"sort_*\"/>\n    <!-- A random sort field -->\n    <dynamicField name=\"random_*\" type=\"rand\" indexed=\"true\" stored=\"true\"/>\n    <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n    <dynamicField name=\"access_*\" type=\"integer\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n    <!-- The following causes solr to ignore any fields that don't already match an existing\n         field name or dynamic field, rather than reporting them as an error.\n         Alternately, change the type=\"ignored\" to some other type e.g. \"text\" if you want\n         unknown fields indexed and/or stored by default -->\n    <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n  </fields>\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  <xi:include href=\"schema_extra_fields.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- field for the QueryParser to use when an explicit fieldname is absent -->\n  <defaultSearchField>content</defaultSearchField>\n\n  <!-- SolrQueryParser configuration: defaultOperator=\"AND|OR\" -->\n  <solrQueryParser defaultOperator=\"AND\"/>\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/schema_extra_fields.xml",
    "content": "<fields>\n<!--\n  Adding German dynamic field types to our Solr Schema\n  If you enable this, make sure you have a folder called lang with stopwords_de.txt\n  and synonyms_de.txt in there\n  This also requires to enable the content in schema_extra_types.xml\n-->\n<!--\n   <field name=\"label_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"content_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n   <field name=\"teaser_de\" type=\"text_de\" indexed=\"false\" stored=\"true\"/>\n   <field name=\"path_alias_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"taxonomy_names_de\" type=\"text_de\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n   <field name=\"spell_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n   <copyField source=\"label_de\" dest=\"spell_de\"/>\n   <copyField source=\"content_de\" dest=\"spell_de\"/>\n   <dynamicField name=\"tags_de_*\" type=\"text_de\" indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n   <dynamicField name=\"ts_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n   <dynamicField name=\"tm_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n   <dynamicField name=\"tos_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n   <dynamicField name=\"tom_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n-->\n</fields>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/schema_extra_types.xml",
    "content": "<types>\n<!--\n  Adding German language to our Solr Schema German\n  If you enable this, make sure you have a folder called lang with stopwords_de.txt\n  and synonyms_de.txt in there\n-->\n<!--\n    <fieldType name=\"text_de\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\" enablePositionIncrements=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"1\" catenateNumbers=\"1\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"lang/synonyms_de.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\" enablePositionIncrements=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"0\" catenateNumbers=\"0\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter 
class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n-->\n</types>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n\n<!--\n     For more details about configurations options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-4.3-solr-4.x\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered an severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false using by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  
Generally, you want to use the latest version to\n       get all bug fixes and improvements. It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_40}</luceneMatchVersion>\n\n  <!-- lib directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n       \n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A dir option by itself adds any files found in the directory to\n       the classpath, this is useful for including all jars in a\n       directory.\n    -->\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/extraction/lib\" />\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/clustering/lib/\" />\n\n  <!-- The velocity library has been known to crash Solr in some\n       instances when deployed as a war file to Tomcat. 
Therefore all\n       references have been removed from the default configuration.\n       @see http://drupal.org/node/1612556\n  -->\n  <!-- <lib dir=\"../../contrib/velocity/lib\" /> -->\n\n  <!-- When a regex is specified in addition to a directory, only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n    -->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-cell-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-clustering-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-dataimporthandler-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-langid-\\d.*\\.jar\" />-->\n  <!-- <lib dir=\"../../dist/\" regex=\"apache-solr-velocity-\\d.*\\.jar\" /> -->\n\n  <!-- If a dir option (with or without a regex) is used and nothing\n       is found that matches, it will be ignored\n    -->\n  <!--<lib dir=\"../../contrib/clustering/lib/\" />-->\n  <!--<lib dir=\"/total/crap/dir/ignored\" />-->\n\n  <!-- an exact path can be used to specify a specific file.  This\n       will cause a serious error to be logged if it can't be loaded.\n    -->\n  <!--\n  <lib path=\"../a-jar-that-does-not-exist.jar\" /> \n  -->\n  \n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <!-- <dataDir>${solr.data.dir:}</dataDir> -->\n\n\n  <!-- The DirectoryFactory to use for indexes.\n       \n       solr.StandardDirectoryFactory, the default, is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  
One can force a particular implementation\n       via solr.MMapDirectoryFactory, solr.NIOFSDirectoryFactory, or\n       solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based, not\n       persistent, and doesn't work with replication.\n    -->\n  <directoryFactory name=\"DirectoryFactory\" \n                    class=\"${solr.directoryFactory:solr.StandardDirectoryFactory}\"/>\n\n  <!-- Index Defaults\n\n       Values here affect all index writers and act as a default\n       unless overridden.\n\n       WARNING: See also the <mainIndex> section below for parameters\n       that override the values here for Solr's main Lucene index.\n    -->\n  <indexConfig>\n\n    <useCompoundFile>false</useCompoundFile>\n\n    <mergeFactor>4</mergeFactor>\n    <!-- Sets the amount of RAM that may be used by Lucene indexing\n         for buffering added documents and deletions before they are\n         flushed to the Directory.  -->\n    <ramBufferSizeMB>32</ramBufferSizeMB>\n    <!-- If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.  \n      -->\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <maxMergeDocs>2147483647</maxMergeDocs>\n    <maxFieldLength>100000</maxFieldLength>\n    <writeLockTimeout>1000</writeLockTimeout>\n\n    <!-- Expert: Merge Policy \n\n         The Merge Policy in Lucene controls how merging is handled by\n         Lucene.  The default in Solr 3.3 is TieredMergePolicy.\n         \n         The default in 2.3 was the LogByteSizeMergePolicy,\n         previous versions used LogDocMergePolicy.\n         \n         LogByteSizeMergePolicy chooses segments to merge based on\n         their size.  
The Lucene 2.2 default, LogDocMergePolicy chose\n         when to merge based on number of documents\n         \n         Other implementations of MergePolicy must have a no-argument\n         constructor\n      -->\n    <mergePolicy class=\"org.apache.lucene.index.LogByteSizeMergePolicy\"/>\n\n    <!-- Expert: Merge Scheduler\n\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!-- \n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\t  \n    <!-- LockFactory \n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n      \n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         (For backwards compatibility with Solr 1.2, 'simple' is the\n         default if not specified.)\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>single</lockType>\n\n    <!-- Expert: Controls how often Lucene loads terms into memory\n         Default is 128 and is likely good for most everyone.\n      -->\n    <!-- <termIndexInterval>256</termIndexInterval> -->\n\n    <!-- Unlock On Startup\n\n         If true, unlock any held write or commit locks on startup.\n         This defeats the locking mechanism that allows multiple\n         processes to 
safely access a lucene index, and should be used\n         with care.\n\n         This is not needed if lock type is 'none' or 'single'\n     -->\n    <unlockOnStartup>false</unlockOnStartup>\n    \n    <!-- If true, IndexReaders will be reopened (often more efficient)\n         instead of closed and then opened.\n      -->\n    <reopenReaders>true</reopenReaders>\n\n    <!-- Commit Deletion Policy\n\n         Custom deletion policies can be specified here. The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         http://lucene.apache.org/java/2_9_1/api/all/org/apache/lucene/index/IndexDeletionPolicy.html\n\n         The standard Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n         \n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n      <!-- The number of commit points to be kept -->\n      <str name=\"maxCommitsToKeep\">1</str>\n      <!-- The number of optimized commit points to be kept -->\n      <str name=\"maxOptimizedCommitsToKeep\">0</str>\n      <!--\n          Delete all commit points once they have reached the given age.\n          Supports DateMathParser syntax e.g.\n        -->\n      <!--\n         <str name=\"maxCommitAge\">30MINUTES</str>\n         <str name=\"maxCommitAge\">1DAY</str>\n      -->\n    </deletionPolicy>\n\n    <!-- Lucene Infostream\n       \n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its debugging info to the specified file\n      -->\n     <infoStream file=\"INFOSTREAM.txt\">false</infoStream> \n\n  </indexConfig>\n\n  <!-- JMX\n       \n       This example enables JMX if and only if an 
existing MBeanServer\n       is found, use this if you want to configure JMX through JVM\n       parameters. Remove this to disable exposing Solr configuration\n       and statistics to JMX.\n\n       For more details see http://wiki.apache.org/solr/SolrJmx\n    -->\n  <!-- <jmx /> -->\n  <!-- If you want to connect to a particular server, specify the\n       agentId \n    -->\n  <!-- <jmx agentId=\"myAgent\" /> -->\n  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->\n  <!-- <jmx serviceUrl=\"service:jmx:rmi:///jndi/rmi://localhost:9999/solr\"/>\n    -->\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- AutoCommit\n\n         Perform a <commit/> automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents. \n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:10000}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:120000}</maxTime>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n    -->\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:2000}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:10000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n         \n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns. \n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n    <!-- Enables a transaction log, currently used for real-time get.\n         \"dir\" - the target directory for transaction logs, defaults to the\n         solr data directory.  
-->\n    <updateLog>\n      <str name=\"dir\">${solr.data.dir:}</str>\n      <!-- if you want to take control of the synchronization you may specify\n           the syncLevel as one of the following where ''flush'' is the default.\n           Fsync will reduce throughput.\n           <str name=\"syncLevel\">flush|fsync|none</str>\n      -->\n    </updateLog>\n  </updateHandler>\n  \n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n  <!-- By explicitly declaring the Factory, the termIndexDivisor can\n       be specified.\n    -->\n  <!--\n     <indexReaderFactory name=\"IndexReaderFactory\" \n                         class=\"solr.StandardIndexReaderFactory\">\n       <int name=\"setTermIndexDivisor\">12</int>\n     </indexReaderFactory >\n    -->\n\n\n  <query>\n    <!-- Max Boolean Clauses\n\n         Maximum number of clauses in each BooleanQuery,  an exception\n         is thrown if exceeded.\n\n         ** WARNING **\n         \n         This option actually modifies a global Lucene property that\n         will affect all SolrCores.  
If multiple solrconfig.xml files\n         disagree on this property, the value at any given moment will\n         be based on the last SolrCore to be initialized.\n         \n      -->\n    <maxBooleanClauses>4096</maxBooleanClauses>\n\n\n    <!-- Solr Internal Query Caches\n\n         There are two implementations of cache available for Solr,\n         LRUCache, based on a synchronized LinkedHashMap, and\n         FastLRUCache, based on a ConcurrentHashMap.  \n\n         FastLRUCache has faster gets and slower puts in single\n         threaded operation and thus is generally faster than LRUCache\n         when the hit ratio of the cache is high (> 75%), and may be\n         faster under other scenarios on multi-cpu systems.\n    -->\n\n    <!-- Filter Cache\n\n         Cache used by SolrIndexSearcher for filters (DocSets),\n         unordered sets of *all* documents that match a query.  When a\n         new searcher is opened, its caches may be prepopulated or\n         \"autowarmed\" using data from caches in the old searcher.\n         autowarmCount is the number of items to prepopulate.  For\n         LRUCache, the autowarmed items will be the most recently\n         accessed items.\n\n         Parameters:\n           class - the SolrCache implementation\n               (LRUCache or FastLRUCache)\n           size - the maximum number of entries in the cache\n           initialSize - the initial capacity (number of entries) of\n               the cache.  (see java.util.HashMap)\n           autowarmCount - the number of entries to prepopulate from\n               an old cache.  \n      -->\n    <filterCache class=\"solr.FastLRUCache\"\n                 size=\"512\"\n                 initialSize=\"512\"\n                 autowarmCount=\"0\"/>\n\n    <!-- Query Result Cache\n         \n         Caches results of searches - ordered lists of document ids\n         (DocList) based on a query, a sort, and the range of documents requested.  
\n      -->\n    <queryResultCache class=\"solr.LRUCache\"\n                     size=\"512\"\n                     initialSize=\"512\"\n                     autowarmCount=\"32\"/>\n   \n    <!-- Document Cache\n\n         Caches Lucene Document objects (the stored fields for each\n         document).  Since Lucene internal document ids are transient,\n         this cache will not be autowarmed.  \n      -->\n    <documentCache class=\"solr.LRUCache\"\n                   size=\"512\"\n                   initialSize=\"512\"\n                   autowarmCount=\"0\"/>\n    \n    <!-- Field Value Cache\n         \n         Cache used to hold field values that are quickly accessible\n         by document id.  The fieldValueCache is created by default\n         even if not configured here.\n      -->\n    <!--\n       <fieldValueCache class=\"solr.FastLRUCache\"\n                        size=\"512\"\n                        autowarmCount=\"128\"\n                        showItems=\"32\" />\n      -->\n\n    <!-- Custom Cache\n\n         Example of a generic cache.  These caches may be accessed by\n         name through SolrIndexSearcher.getCache(),cacheLookup(), and\n         cacheInsert().  The purpose is to enable easy caching of\n         user/application level data.  The regenerator argument should\n         be specified as an implementation of solr.CacheRegenerator \n         if autowarming is desired.  \n      -->\n    <!--\n       <cache name=\"myUserCache\"\n              class=\"solr.LRUCache\"\n              size=\"4096\"\n              initialSize=\"1024\"\n              autowarmCount=\"1024\"\n              regenerator=\"com.mycompany.MyRegenerator\"\n              />\n      -->\n\n\n    <!-- Lazy Field Loading\n\n         If true, stored fields that are not requested will be loaded\n         lazily.  
This can result in a significant speed improvement\n         if the usual case is to not load all stored fields,\n         especially if the skipped fields are large compressed text\n         fields.\n    -->\n    <enableLazyFieldLoading>true</enableLazyFieldLoading>\n\n   <!-- Use Filter For Sorted Query\n\n        A possible optimization that attempts to use a filter to\n        satisfy a search.  If the requested sort does not include\n        score, then the filterCache will be checked for a filter\n        matching the query. If found, the filter will be used as the\n        source of document ids, and then the sort will be applied to\n        that.\n\n        For most situations, this will not be useful unless you\n        frequently get the same search repeatedly with different sort\n        options, and none of them ever use \"score\"\n     -->\n   <!--\n      <useFilterForSortedQuery>true</useFilterForSortedQuery>\n     -->\n\n   <!-- Result Window Size\n\n        An optimization for use with the queryResultCache.  When a search\n        is requested, a superset of the requested number of document ids\n        are collected.  For example, if a search for a particular query\n        requests matching documents 10 through 19, and queryWindowSize is 50,\n        then documents 0 through 49 will be collected and cached.  Any further\n        requests in that range can be satisfied via the cache.  \n     -->\n   <queryResultWindowSize>20</queryResultWindowSize>\n\n   <!-- Maximum number of documents to cache for any entry in the\n        queryResultCache. \n     -->\n   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>\n\n   <!-- Query Related Event Listeners\n\n        Various IndexSearcher related events can trigger Listeners to\n        take actions.\n\n        newSearcher - fired whenever a new searcher is being prepared\n        and there is a current searcher handling requests (aka\n        registered).  
It can be used to prime certain caches to\n        prevent long request times for certain requests.\n\n        firstSearcher - fired whenever a new searcher is being\n        prepared but there is no current registered searcher to handle\n        requests or to gain autowarming data from.\n\n        \n     -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence. \n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">solr rocks</str><str name=\"start\">0</str><str name=\"rows\">10</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n    <!-- Max Warming Searchers\n         \n         Maximum number of searchers that may be warming in the\n         background concurrently.  
An error is returned if this limit\n         is exceeded.\n\n         Recommend values of 1-2 for read-only slaves, higher for\n         masters w/o cache warming.\n      -->\n    <maxWarmingSearchers>2</maxWarmingSearchers>\n\n  </query>\n\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n\n       handleSelect affects the behavior of requests such as /select?qt=XXX\n\n       handleSelect=\"true\" will cause the SolrDispatchFilter to process\n       the request and will result in consistent error handling and\n       formatting for all types of requests.\n\n       handleSelect=\"false\" will cause the SolrDispatchFilter to\n       ignore \"/select\" requests and fall back to using the legacy\n       SolrServlet and its Solr 1.1 style error formatting\n    -->\n  <requestDispatcher handleSelect=\"true\" >\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size of\n         Multipart File Uploads that Solr will allow in a Request.\n         \n         *** WARNING ***\n         The settings below authorize Solr to fetch remote files. You\n         should make sure your system has some authentication before\n         using enableRemoteStreaming=\"true\"\n\n      --> \n    <requestParsers enableRemoteStreaming=\"true\" \n                    multipartUploadLimitInKB=\"2048000\" />\n\n    <!-- HTTP Caching\n\n         Set HTTP caching related parameters (for proxy caches and clients).\n\n         The options below instruct Solr not to output any HTTP Caching\n         related headers\n      -->\n    <httpCaching 
never304=\"true\" />\n    <!-- If you include a <cacheControl> directive, it will be used to\n         generate a Cache-Control header (as well as an Expires header\n         if the value contains \"max-age=\")\n         \n         By default, no Cache-Control header is generated.\n         \n         You can use the <cacheControl> option even if you have set\n         never304=\"true\"\n      -->\n    <!--\n       <httpCaching never304=\"true\" >\n         <cacheControl>max-age=30, public</cacheControl> \n       </httpCaching>\n      -->\n    <!-- To enable Solr to respond with automatically generated HTTP\n         Caching headers, and to response to Cache Validation requests\n         correctly, set the value of never304=\"false\"\n         \n         This will cause Solr to generate Last-Modified and ETag\n         headers based on the properties of the Index.\n\n         The following options can also be specified to affect the\n         values of these headers...\n\n         lastModFrom - the default value is \"openTime\" which means the\n         Last-Modified value (and validation against If-Modified-Since\n         requests) will all be relative to when the current Searcher\n         was opened.  
You can change it to lastModFrom="dirLastMod" if
         you want the value to exactly correspond to when the physical
         index was last modified.

         etagSeed="..." is an option you can change to force the ETag
         header (and validation against If-None-Match requests) to be
         different even if the index has not changed (ie: when making
         significant changes to your config file)

         (lastModFrom and etagSeed are both ignored if you use
         the never304="true" option)
      -->
    <!--
       <httpCaching lastModFrom="openTime"
                    etagSeed="Solr">
         <cacheControl>max-age=30, public</cacheControl>
       </httpCaching>
      -->
  </requestDispatcher>

  <!-- Request Handlers

       http://wiki.apache.org/solr/SolrRequestHandler

       incoming queries will be dispatched to the correct handler
       based on the path or the qt (query type) param.

       Names starting with a '/' are accessed with a path equal to
       the registered name.  
Names without a leading '/' are accessed
       with: http://host/app/[core/]select?qt=name

       If a /select request is processed without a qt param
       specified, the requestHandler that declares default="true" will
       be used.

       If a Request Handler is declared with startup="lazy", then it will
       not be initialized until the first request that uses it.

    -->
  <!-- SearchHandler

       http://wiki.apache.org/solr/SearchHandler

       For processing Search Queries, the primary Request Handler
       provided with Solr is "SearchHandler". It delegates to a sequence
       of SearchComponents (see below) and supports distributed
       queries across multiple shards
    -->
  <!--<requestHandler name="search" class="solr.SearchHandler" default="true">-->
    <!-- default values for query parameters can be specified; these
         will be overridden by parameters in the request
      -->
     <!--<lst name="defaults">
       <str name="echoParams">explicit</str>
       <int name="rows">10</int>
     </lst>-->
    <!-- In addition to defaults, "appends" params can be specified
         to identify values which should be appended to the list of
         multi-val params from the query (or the existing "defaults").
      -->
    <!-- In this example, the param "fq=inStock:true" would be appended to
         any query time fq params the user may specify, as a mechanism for
         partitioning the index, independent of any user selected filtering
         that may also be desired (perhaps as a result of faceted searching).

         NOTE: there is *absolutely* nothing a client can do to prevent these
         "appends" values from being used, so don't use this mechanism
         unless you are sure you always want it.
      -->
    <!--
       <lst name="appends">
         <str name="fq">inStock:true</str>
       </lst>
      -->
    <!-- "invariants" are a way of letting the Solr maintainer lock down
         the options available to Solr clients.  Any param values
         specified here are used regardless of what values may be specified
         in either the query, the "defaults", or the "appends" params.

         In this example, the facet.field and facet.query params would
         be fixed, limiting the facets clients can use.  Faceting is
         not turned on by default - but if the client does specify
         facet=true in the request, these are the only facets they
         will be able to see counts for, regardless of what other
         facet.field or facet.query params they may specify.

         NOTE: there is *absolutely* nothing a client can do to prevent these
         "invariants" values from being used, so don't use this mechanism
         unless you are sure you always want it.
      -->
    <!--
       <lst name="invariants">
         <str name="facet.field">cat</str>
         <str name="facet.field">manu_exact</str>
         <str name="facet.query">price:[* TO 500]</str>
         <str name="facet.query">price:[500 TO *]</str>
       </lst>
      -->
    <!-- If the default list of SearchComponents is not desired, that
         list can either be overridden completely, or components can be
         prepended or appended to the default list.  
(see below)\n      -->\n    <!--\n       <arr name=\"components\">\n         <str>nameOfCustomComponent1</str>\n         <str>nameOfCustomComponent2</str>\n       </arr>\n      -->\n    <!--</requestHandler>-->\n\n  <!-- A Robust Example\n\n       This example SearchHandler declaration shows off usage of the\n       SearchHandler with many defaults declared\n\n       Note that multiple instances of the same Request Handler\n       (SearchHandler) can be registered multiple times with different\n       names (and different init parameters)\n    -->\n  <!--\n  <requestHandler name=\"/browse\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>-->\n\n       <!-- VelocityResponseWriter settings -->\n       <!--<str name=\"wt\">velocity</str>\n\n       <str name=\"v.template\">browse</str>\n       <str name=\"v.layout\">layout</str>\n       <str name=\"title\">Solritas</str>\n\n       <str name=\"defType\">edismax</str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n       <str name=\"mlt.qf\">\n         text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"mlt.fl\">text,features,name,sku,id,manu,cat</str>\n       <int name=\"mlt.count\">3</int>\n\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n\n       <str name=\"facet\">on</str>\n       <str name=\"facet.field\">cat</str>\n       <str name=\"facet.field\">manu_exact</str>\n       <str name=\"facet.query\">ipod</str>\n       <str name=\"facet.query\">GB</str>\n       <str name=\"facet.mincount\">1</str>\n       <str name=\"facet.pivot\">cat,inStock</str>\n       <str name=\"facet.range.other\">after</str>\n       <str name=\"facet.range\">price</str>\n       <int name=\"f.price.facet.range.start\">0</int>\n       <int name=\"f.price.facet.range.end\">600</int>\n       <int 
name=\"f.price.facet.range.gap\">50</int>\n       <str name=\"facet.range\">popularity</str>\n       <int name=\"f.popularity.facet.range.start\">0</int>\n       <int name=\"f.popularity.facet.range.end\">10</int>\n       <int name=\"f.popularity.facet.range.gap\">3</int>\n       <str name=\"facet.range\">manufacturedate_dt</str>\n       <str name=\"f.manufacturedate_dt.facet.range.start\">NOW/YEAR-10YEARS</str>\n       <str name=\"f.manufacturedate_dt.facet.range.end\">NOW</str>\n       <str name=\"f.manufacturedate_dt.facet.range.gap\">+1YEAR</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">before</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">after</str>-->\n\n\n       <!-- Highlighting defaults -->\n       <!--<str name=\"hl\">on</str>\n       <str name=\"hl.fl\">text features name</str>\n       <str name=\"f.name.hl.fragsize\">0</str>\n       <str name=\"f.name.hl.alternateField\">name</str>\n     </lst>\n     <arr name=\"last-components\">\n       <str>spellcheck</str>\n     </arr>-->\n     <!--\n     <str name=\"url-scheme\">httpx</str>\n     -->\n  <!--</requestHandler>-->\n  <!-- trivia: the name pinkPony requestHandler was an agreement between the Search API and the\n    apachesolr maintainers. 
The decision was taken during the Drupalcon Munich codesprint.
    -->
  <requestHandler name="pinkPony" class="solr.SearchHandler" default="true">
    <lst name="defaults">
      <str name="defType">edismax</str>
      <str name="echoParams">explicit</str>
      <bool name="omitHeader">true</bool>
      <float name="tie">0.01</float>
      <!-- Don't abort searches for the pinkPony request handler (set in solrcore.properties) -->
      <int name="timeAllowed">${solr.pinkPony.timeAllowed:-1}</int>
      <str name="q.alt">*:*</str>

      <!-- By default, don't spell check -->
      <str name="spellcheck">false</str>
      <!-- Defaults for the spell checker when used -->
      <str name="spellcheck.onlyMorePopular">true</str>
      <str name="spellcheck.extendedResults">false</str>
      <!-- The number of suggestions to return -->
      <str name="spellcheck.count">1</str>
    </lst>
    <arr name="last-components">
      <str>spellcheck</str>
      <str>elevator</str>
    </arr>
  </requestHandler>

  <!-- The more like this handler offers many advantages over the standard handler
       when performing moreLikeThis requests. -->
  <requestHandler name="mlt" class="solr.MoreLikeThisHandler">
    <lst name="defaults">
      <str name="mlt.mintf">1</str>
      <str name="mlt.mindf">1</str>
      <str name="mlt.minwl">3</str>
      <str name="mlt.maxwl">15</str>
      <str name="mlt.maxqt">20</str>
      <str name="mlt.match.include">false</str>
      <!-- Abort any searches longer than 2 seconds (set in solrcore.properties) -->
      <int name="timeAllowed">${solr.mlt.timeAllowed:2000}</int>
    </lst>
  </requestHandler>

  <!-- A minimal query type for doing Lucene queries -->
  <requestHandler name="standard" class="solr.SearchHandler">
     <lst name="defaults">
       <str name="echoParams">explicit</str>
       <bool name="omitHeader">true</bool>
     
</lst>
  </requestHandler>

  <!-- XML Update Request Handler.

       http://wiki.apache.org/solr/UpdateXmlMessages

       The canonical Request Handler for Modifying the Index through
       commands specified using XML.

       Note: Since Solr 1.1, request handlers require a valid content
       type header when content is posted in the body. For example, curl
       now requires: -H 'Content-type:text/xml; charset=utf-8'
    -->
  <requestHandler name="/update"
                  class="solr.UpdateRequestHandler">
    <!-- See below for information on defining
         updateRequestProcessorChains that can be used by name
         on each Update Request
      -->
    <!--
       <lst name="defaults">
         <str name="update.chain">dedupe</str>
       </lst>
       -->
  </requestHandler>
  <!-- Binary Update Request Handler
       http://wiki.apache.org/solr/javabin
    -->
  <requestHandler name="/update/javabin"
                  class="solr.UpdateRequestHandler" />

  <!-- CSV Update Request Handler
       http://wiki.apache.org/solr/UpdateCSV
    -->
  <requestHandler name="/update/csv"
                  class="solr.CSVRequestHandler"
                  startup="lazy" />

  <!-- JSON Update Request Handler
       http://wiki.apache.org/solr/UpdateJSON
    -->
  <requestHandler name="/update/json"
                  class="solr.JsonUpdateRequestHandler"
                  startup="lazy" />

  <!-- Solr Cell Update Request Handler

       http://wiki.apache.org/solr/ExtractingRequestHandler

    -->
  <requestHandler name="/update/extract"
                  startup="lazy"
                  class="solr.extraction.ExtractingRequestHandler" >
    <lst name="defaults">
      <!-- All the main content goes into "text"... if you need to return
           the extracted text or do highlighting, use a stored field. 
-->
      <str name="fmap.content">text</str>
      <str name="lowernames">true</str>
      <str name="uprefix">ignored_</str>

      <!-- capture link hrefs but ignore div attributes -->
      <str name="captureAttr">true</str>
      <str name="fmap.a">links</str>
      <str name="fmap.div">ignored_</str>
    </lst>
  </requestHandler>

  <!-- XSLT Update Request Handler
       Transforms incoming XML with stylesheet identified by tr=
  -->
  <requestHandler name="/update/xslt"
                   startup="lazy"
                   class="solr.XsltUpdateRequestHandler"/>

  <!-- Field Analysis Request Handler

       RequestHandler that provides much the same functionality as
       analysis.jsp. Provides the ability to specify multiple field
       types and field names in the same request and outputs
       index-time and query-time analysis for each of them.

       Request parameters are:
       analysis.fieldname - field name whose analyzers are to be used

       analysis.fieldtype - field type whose analyzers are to be used
       analysis.fieldvalue - text for index-time analysis
       q (or analysis.q) - text for query time analysis
       analysis.showmatch (true|false) - When set to true and when
           query analysis is performed, the produced tokens of the
           field value analysis will be marked as "matched" for every
           token that is produced by the query analysis
   -->
  <requestHandler name="/analysis/field"
                  startup="lazy"
                  class="solr.FieldAnalysisRequestHandler" />


  <!-- Document Analysis Handler

       http://wiki.apache.org/solr/AnalysisRequestHandler

       An analysis handler that provides a breakdown of the analysis
       process of provided documents. 
This handler expects a (single)
       content stream with the following format:

       <docs>
         <doc>
           <field name="id">1</field>
           <field name="name">The Name</field>
           <field name="text">The Text Value</field>
         </doc>
         <doc>...</doc>
         <doc>...</doc>
         ...
       </docs>

    Note: Each document must contain a field which serves as the
    unique key. This key is used in the returned response to associate
    an analysis breakdown to the analyzed document.

    Like the FieldAnalysisRequestHandler, this handler also supports
    query analysis by sending either an "analysis.query" or "q"
    request parameter that holds the query text to be analyzed. It
    also supports the "analysis.showmatch" parameter which, when set to
    true, causes all field tokens that match the query tokens to be
    marked as a "match".
  -->
  <requestHandler name="/analysis/document"
                  class="solr.DocumentAnalysisRequestHandler"
                  startup="lazy" />

  <!-- Admin Handlers

       Admin Handlers - This will register all the standard admin
       RequestHandlers.
    -->
  <requestHandler name="/admin/" class="solr.admin.AdminHandlers" />
  <!-- This single handler is equivalent to the following... 
-->\n  <!--\n     <requestHandler name=\"/admin/luke\"       class=\"solr.admin.LukeRequestHandler\" />\n     <requestHandler name=\"/admin/system\"     class=\"solr.admin.SystemInfoHandler\" />\n     <requestHandler name=\"/admin/plugins\"    class=\"solr.admin.PluginInfoHandler\" />\n     <requestHandler name=\"/admin/threads\"    class=\"solr.admin.ThreadDumpHandler\" />\n     <requestHandler name=\"/admin/properties\" class=\"solr.admin.PropertiesRequestHandler\" />\n     <requestHandler name=\"/admin/file\"       class=\"solr.admin.ShowFileRequestHandler\" >\n    -->\n  <!-- If you wish to hide files under ${solr.home}/conf, explicitly\n       register the ShowFileRequestHandler using: \n    -->\n  <!--\n     <requestHandler name=\"/admin/file\" \n                     class=\"solr.admin.ShowFileRequestHandler\" >\n       <lst name=\"invariants\">\n         <str name=\"hidden\">synonyms.txt</str> \n         <str name=\"hidden\">anotherfile.txt</str> \n       </lst>\n     </requestHandler>\n    -->\n\n  <!-- ping/healthcheck -->\n  <requestHandler name=\"/admin/ping\" class=\"solr.PingRequestHandler\">\n    <lst name=\"invariants\">\n      <str name=\"qt\">pinkPony</str>\n      <str name=\"q\">solrpingquery</str>\n      <str name=\"omitHeader\">false</str>\n    </lst>\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">all</str>\n    </lst>\n    <!-- An optional feature of the PingRequestHandler is to configure the \n         handler with a \"healthcheckFile\" which can be used to enable/disable \n         the PingRequestHandler.\n         relative paths are resolved against the data dir \n    -->\n    <!-- <str name=\"healthcheckFile\">server-enabled.txt</str> -->\n  </requestHandler>\n\n  <!-- Echo the request contents back to the client -->\n  <requestHandler name=\"/debug/dump\" class=\"solr.DumpRequestHandler\" >\n    <lst name=\"defaults\">\n     <str name=\"echoParams\">explicit</str> \n     <str name=\"echoHandler\">true</str>\n    </lst>\n  
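    <!-- Example (illustrative, not part of the stock config): with Solr
         running on its default port, a request such as
         http://localhost:8983/solr/debug/dump?q=hello
         simply echoes the parsed parameters (and, with echoHandler=true
         as set above, the handler name) back to the client, which is
         handy for debugging request encoding issues.
      -->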
</requestHandler>\n  \n  <!-- Solr Replication\n\n       The SolrReplicationHandler supports replicating indexes from a\n       \"master\" used for indexing and \"slaves\" used for queries.\n\n       http://wiki.apache.org/solr/SolrReplication\n\n       In the example below, remove the <lst name=\"master\"> section if\n       this is just a slave and remove  the <lst name=\"slave\"> section\n       if this is just a master.\n  -->\n  <requestHandler name=\"/replication\" class=\"solr.ReplicationHandler\" >\n    <lst name=\"master\">\n      <str name=\"enable\">${solr.replication.master:false}</str>\n      <str name=\"replicateAfter\">commit</str>\n      <str name=\"replicateAfter\">startup</str>\n      <str name=\"confFiles\">${solr.replication.confFiles:schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml}</str>\n    </lst>\n    <lst name=\"slave\">\n      <str name=\"enable\">${solr.replication.slave:false}</str>\n      <str name=\"masterUrl\">${solr.replication.masterUrl:http://localhost:8983/solr}/replication</str>\n      <str name=\"pollInterval\">${solr.replication.pollInterval:00:00:60}</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Realtime get handler, guaranteed to return the latest stored fields of\n       any document, without the need to commit or open a new searcher.  
The\n       current implementation relies on the updateLog feature being enabled.\n  -->\n  <requestHandler name=\"/get\" class=\"solr.RealTimeGetHandler\">\n    <lst name=\"defaults\">\n      <str name=\"omitHeader\">true</str>\n      <str name=\"wt\">json</str>\n      <str name=\"indent\">true</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of the standard names, \n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n    \n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n    \n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\" \n       \n     -->\n\n  <!-- A request handler for demonstrating the spellcheck component.  \n\n       NOTE: This is purely as an example.  
The whole purpose of the
       SpellCheckComponent is to hook it into the request handler that
       handles your normal user queries so that a separate request is
       not needed to get suggestions.

       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!

       See http://wiki.apache.org/solr/SpellCheckComponent for details
       on the request parameters.
    -->
  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="spellcheck.onlyMorePopular">false</str>
      <str name="spellcheck.extendedResults">false</str>
      <str name="spellcheck.count">1</str>
    </lst>
    <arr name="last-components">
      <str>spellcheck</str>
    </arr>
  </requestHandler>

  <!-- Term Vector Component

       http://wiki.apache.org/solr/TermVectorComponent
    -->
  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>

  <!-- A request handler for demonstrating the term vector component

       This is purely as an example.

       In reality you will likely want to add the component to your
       already specified request handlers.
    -->
  <requestHandler name="tvrh" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <bool name="tv">true</bool>
    </lst>
    <arr name="last-components">
      <str>tvComponent</str>
    </arr>
  </requestHandler>

  <!-- Clustering Component

       http://wiki.apache.org/solr/ClusteringComponent

       This relies on third party jars which are not included in the
       release.  
To use this component (and the "/clustering" handler),
       those jars will need to be downloaded, and you'll need to set
       the solr.clustering.enabled system property when running solr...

          java -Dsolr.clustering.enabled=true -jar start.jar
    -->
  <!-- <searchComponent name="clustering"
                   enable="${solr.clustering.enabled:false}"
                   class="solr.clustering.ClusteringComponent" > -->
    <!-- Declare an engine -->
    <!--<lst name="engine">-->
      <!-- The name, only one can be named "default" -->
      <!--<str name="name">default</str>-->

      <!-- Class name of Carrot2 clustering algorithm.

           Currently available algorithms are:

           * org.carrot2.clustering.lingo.LingoClusteringAlgorithm
           * org.carrot2.clustering.stc.STCClusteringAlgorithm
           * org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm

           See http://project.carrot2.org/algorithms.html for the
           algorithm's characteristics.
        -->
      <!--<str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>-->

      <!-- Overriding values for Carrot2 default algorithm attributes.

           For a description of all available attributes, see:
           http://download.carrot2.org/stable/manual/#chapter.components.
           Use attribute key as name attribute of str elements
           below. These can be further overridden for individual
           requests by specifying attribute key as request parameter
           name and attribute value as parameter value.
        -->
      <!--<str name="LingoClusteringAlgorithm.desiredClusterCountBase">20</str>-->

      <!-- Location of Carrot2 lexical resources.

           A directory from which to load Carrot2-specific stop words
           and stop labels. 
Absolute or relative to Solr config directory.\n           If a specific resource (e.g. stopwords.en) is present in the\n           specified dir, it will completely override the corresponding\n           default one that ships with Carrot2.\n\n           For an overview of Carrot2 lexical resources, see:\n           http://download.carrot2.org/head/manual/#chapter.lexical-resources\n        -->\n      <!--<str name=\"carrot.lexicalResourcesDir\">clustering/carrot2</str>-->\n\n      <!-- The language to assume for the documents.\n           \n           For a list of allowed values, see:\n           http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage\n       -->\n      <!--<str name=\"MultilingualClustering.defaultLanguage\">ENGLISH</str>\n    </lst>\n    <lst name=\"engine\">\n      <str name=\"name\">stc</str>\n      <str name=\"carrot.algorithm\">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>\n    </lst>\n  </searchComponent>-->\n\n  <!-- A request handler for demonstrating the clustering component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your \n       already specified request handlers. 
\n    -->\n  <!--<requestHandler name=\"/clustering\"\n                  startup=\"lazy\"\n                  enable=\"${solr.clustering.enabled:false}\"\n                  class=\"solr.SearchHandler\">\n    <lst name=\"defaults\">\n      <bool name=\"clustering\">true</bool>\n      <str name=\"clustering.engine\">default</str>\n      <bool name=\"clustering.results\">true</bool>-->\n      <!-- The title field -->\n      <!--<str name=\"carrot.title\">name</str>-->\n      <!--<str name=\"carrot.url\">id</str>-->\n      <!-- The field to cluster on -->\n       <!--<str name=\"carrot.snippet\">features</str>-->\n       <!-- produce summaries -->\n       <!--<bool name=\"carrot.produceSummary\">true</bool>-->\n       <!-- the maximum number of labels per cluster -->\n       <!--<int name=\"carrot.numDescriptions\">5</int>-->\n       <!-- produce sub clusters -->\n       <!--<bool name=\"carrot.outputSubClusters\">false</bool>-->\n       \n       <!--<str name=\"defType\">edismax</str>\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n    </lst>     \n    <arr name=\"last-components\">\n      <str>clustering</str>\n    </arr>\n  </requestHandler>-->\n  \n  <!-- Terms Component\n\n       http://wiki.apache.org/solr/TermsComponent\n\n       A component to return terms and document frequency of those\n       terms\n    -->\n  <searchComponent name=\"terms\" class=\"solr.TermsComponent\"/>\n\n  <!-- A request handler for demonstrating the terms component -->\n  <requestHandler name=\"/terms\" class=\"solr.SearchHandler\" startup=\"lazy\">\n     <lst name=\"defaults\">\n      <bool name=\"terms\">true</bool>\n    </lst>     \n    <arr name=\"components\">\n      <str>terms</str>\n    </arr>\n  </requestHandler>\n\n\n  <!-- Query Elevation Component\n\n       
http://wiki.apache.org/solr/QueryElevationComponent\n\n       a search component that enables you to configure the top\n       results for a given query regardless of the normal lucene\n       scoring.\n    -->\n  <searchComponent name=\"elevator\" class=\"solr.QueryElevationComponent\" >\n    <!-- pick a fieldType to analyze queries -->\n    <str name=\"queryFieldType\">string</str>\n    <str name=\"config-file\">elevate.xml</str>\n  </searchComponent>\n\n  <!-- A request handler for demonstrating the elevator component -->\n  <requestHandler name=\"/elevate\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">explicit</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\" \n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter \n           (for sentence extraction) \n        -->\n      <fragmenter name=\"regex\" \n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      
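        <!-- Illustrative note (not part of the stock config): because the
             "gap" fragmenter above is marked default="true", this regex
             fragmenter is only applied when a request selects it
             explicitly, e.g. by passing hl.fragmenter=regex along with
             the usual hl=true and hl.fl parameters.
          -->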
</fragmenter>

      <!-- Configure the standard formatter -->
      <formatter name="html"
                 default="true"
                 class="solr.highlight.HtmlFormatter">
        <lst name="defaults">
          <str name="hl.simple.pre"><![CDATA[<strong>]]></str>
          <str name="hl.simple.post"><![CDATA[</strong>]]></str>
        </lst>
      </formatter>

      <!-- Configure the standard encoder -->
      <encoder name="html"
               class="solr.highlight.HtmlEncoder" />

      <!-- Configure the standard fragListBuilder -->
      <fragListBuilder name="simple"
                       default="true"
                       class="solr.highlight.SimpleFragListBuilder"/>

      <!-- Configure the single fragListBuilder -->
      <fragListBuilder name="single"
                       class="solr.highlight.SingleFragListBuilder"/>

      <!-- default tag FragmentsBuilder -->
      <fragmentsBuilder name="default"
                        default="true"
                        class="solr.highlight.ScoreOrderFragmentsBuilder">
        <!--
        <lst name="defaults">
          <str name="hl.multiValuedSeparatorChar">/</str>
        </lst>
        -->
      </fragmentsBuilder>

      <!-- multi-colored tag FragmentsBuilder -->
      <fragmentsBuilder name="colored"
                        class="solr.highlight.ScoreOrderFragmentsBuilder">
        <lst name="defaults">
          <str name="hl.tag.pre"><![CDATA[
               <b style="background:yellow">,<b style="background:lawngreen">,
               <b style="background:aquamarine">,<b style="background:magenta">,
               <b style="background:palegreen">,<b style="background:coral">,
               <b style="background:wheat">,<b style="background:khaki">,
               <b style="background:lime">,<b style="background:deepskyblue">]]></str>
          <str name="hl.tag.post"><![CDATA[</b>]]></str>
        </lst>
      </fragmentsBuilder>

      <boundaryScanner name="default"
                       default="true"
                       class="solr.highlight.SimpleBoundaryScanner">
        <lst name="defaults">
          <str name="hl.bs.maxScan">10</str>
          <str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
        </lst>
      </boundaryScanner>

      <boundaryScanner name="breakIterator"
                       class="solr.highlight.BreakIteratorBoundaryScanner">
        <lst name="defaults">
          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
          <str name="hl.bs.type">WORD</str>
          <!-- language and country are used when constructing the Locale object.  -->
          <!-- And the Locale object will be used when getting an instance of BreakIterator -->
          <str name="hl.bs.language">en</str>
          <str name="hl.bs.country">US</str>
        </lst>
      </boundaryScanner>
    </highlighting>
  </searchComponent>

  <!-- Update Processors

       Chains of Update Processor Factories for dealing with Update
       Requests can be declared, and then used by name in Update
       Request Processors

       http://wiki.apache.org/solr/UpdateRequestProcessor

    -->
  <!-- Deduplication

       An example dedup update processor that creates the "id" field
       on the fly based on the hash code of some other fields.  This
       example has overwriteDupes set to false since we are using the
       id field as the signatureField and Solr will maintain
       uniqueness based on that anyway.  
\n       \n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n    <!--\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n    <!--\n     <updateRequestProcessorChain name=\"langid\">\n       <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n         <str name=\"langid.fl\">text,title,subject,description</str>\n         <str name=\"langid.langField\">language_s</str>\n         <str name=\"langid.fallback\">en</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n \n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will 
be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     <queryResponseWriter name=\"xml\" \n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n    -->\n\n  <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n     <!-- For the purposes of the tutorial, JSON responses are written as\n      plain text so that they are easy to read in *any* browser.\n      If you expect a MIME type of \"application/json\" just remove this override.\n     -->\n    <str name=\"content-type\">text/plain; charset=UTF-8</str>\n  </queryResponseWriter>\n  \n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!-- <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" enable=\"${solr.velocity.enabled:true}\"/> -->\n  \n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.  
\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       http://wiki.apache.org/solr/SolrQuerySyntax\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\" \n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n  <!-- Legacy config for the admin interface -->\n  <admin>\n    <defaultQuery>*:*</defaultQuery>\n\n    <!-- configure a healthcheck file for servers behind a\n         loadbalancer \n      -->\n    <!--\n       <healthcheck type=\"file\">server-enabled</healthcheck>\n      -->\n  </admin>\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  <xi:include href=\"solrconfig_extra.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback>\n    <!-- Spell Check\n\n        The spell check component can return a list of alternative spelling\n        suggestions. 
This component must be defined in\n        solrconfig_extra.xml if present, since it's used in the search handler.\n\n        http://wiki.apache.org/solr/SpellCheckComponent\n     -->\n    <searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n    <str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n    <!-- a spellchecker built from a field of the main index -->\n      <lst name=\"spellchecker\">\n        <str name=\"name\">default</str>\n        <str name=\"field\">spell</str>\n        <str name=\"spellcheckIndexDir\">spellchecker</str>\n        <str name=\"buildOnOptimize\">true</str>\n      </lst>\n    </searchComponent>\n    </xi:fallback>\n  </xi:include>\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/solrconfig_extra.xml",
    "content": "<!-- Spell Check\n\n    The spell check component can return a list of alternative spelling\n    suggestions.\n\n    http://wiki.apache.org/solr/SpellCheckComponent\n -->\n<searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n<str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n<!-- Multiple \"Spell Checkers\" can be declared and used by this\n     component\n  -->\n\n<!-- a spellchecker built from a field of the main index, and\n     written to disk\n  -->\n<lst name=\"spellchecker\">\n  <str name=\"name\">default</str>\n  <str name=\"field\">spell</str>\n  <str name=\"spellcheckIndexDir\">spellchecker</str>\n  <str name=\"buildOnOptimize\">true</str>\n  <!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary\n    <float name=\"thresholdTokenFrequency\">.01</float>\n  -->\n</lst>\n\n<!--\n  Adding a German spellchecker index to our Solr index\n  This also requires enabling the content in schema_extra_types.xml and schema_extra_fields.xml\n-->\n<!--\n<lst name=\"spellchecker\">\n  <str name=\"name\">spellchecker_de</str>\n  <str name=\"field\">spell_de</str>\n  <str name=\"spellcheckIndexDir\">./spellchecker_de</str>\n  <str name=\"buildOnOptimize\">true</str>\n</lst>\n-->\n\n<!-- a spellchecker that uses a different distance measure -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">jarowinkler</str>\n     <str name=\"field\">spell</str>\n     <str name=\"distanceMeasure\">\n       org.apache.lucene.search.spell.JaroWinklerDistance\n     </str>\n     <str name=\"spellcheckIndexDir\">spellcheckerJaro</str>\n   </lst>\n -->\n\n<!-- a spellchecker that uses an alternate comparator\n\n     comparatorClass can be one of:\n      1. score (default)\n      2. freq (Frequency first, then score)\n      3. 
A fully qualified class name\n  -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">freq</str>\n     <str name=\"field\">lowerfilt</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFreq</str>\n     <str name=\"comparatorClass\">freq</str>\n     <str name=\"buildOnCommit\">true</str>\n   </lst>\n  -->\n\n<!-- A spellchecker that reads the list of words from a file -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"classname\">solr.FileBasedSpellChecker</str>\n     <str name=\"name\">file</str>\n     <str name=\"sourceLocation\">spellings.txt</str>\n     <str name=\"characterEncoding\">UTF-8</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFile</str>\n   </lst>\n  -->\n</searchComponent>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:8099/solr\nsolr.replication.confFiles=schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml\nsolr.mlt.timeAllowed=2000\n# You should not set your luceneMatchVersion to anything lower than your Solr\n# Version.\nsolr.luceneMatchVersion=LUCENE_40\nsolr.pinkPony.timeAllowed=-1\n# autoCommit after 10000 docs\nsolr.autoCommit.MaxDocs=10000\n# autoCommit after 2 minutes\nsolr.autoCommit.MaxTime=120000\n# autoSoftCommit after 2000 docs\nsolr.autoSoftCommit.MaxDocs=2000\n# autoSoftCommit after 10 seconds\nsolr.autoSoftCommit.MaxTime=10000\nsolr.contrib.dir=../../../contrib\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/stopwords.txt",
    "content": "# Contains words which shouldn't be indexed for fulltext fields, e.g., because\n# they're too common. For documentation of the format, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.StopFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal6/synonyms.txt",
    "content": "# Contains synonyms to use for your index. For the format used, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. The item IDs are in general constructed as follows:\n   Search API:\n     $document->id = $index_id . '-' . $item_id;\n   Apache Solr Search Integration:\n     $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #1 first in searches for \"example query\": -->\n<!--\n <query text=\"example query\">\n  <doc id=\"default_node_index-1\" />\n  <doc id=\"7v3jsc/node/1\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/mapping-ISOLatin1Accent.txt",
    "content": "# This file contains character mappings for the default fulltext field type.\n# The source characters (on the left) will be replaced by the respective target\n# characters before any other processing takes place.\n# Lines starting with a pound character # are ignored.\n#\n# For sensible defaults, use the mapping-ISOLatin1Accent.txt file distributed\n# with the example application of your Solr version.\n#\n# Examples:\n#   \"À\" => \"A\"\n#   \"\\u00c4\" => \"A\"\n#   \"\\u00c4\" => \"\\u0041\"\n#   \"æ\" => \"ae\"\n#   \"\\n\" => \" \"\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/protwords.txt",
    "content": "#-----------------------------------------------------------------------\n# This file blocks words from being operated on by the stemmer and word delimiter.\n&amp;\n&lt;\n&gt;\n&#039;\n&quot;\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n For more information, on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n-->\n\n<schema name=\"drupal-4.3-solr-4.x\" version=\"1.3\">\n    <!-- attribute \"name\" is the name of this schema and is only used for display purposes.\n         Applications should change this to reflect the nature of the search collection.\n         version=\"1.2\" is Solr's version number for the schema syntax and semantics.  It should\n         not normally be changed by applications.\n         1.0: multiValued attribute did not exist, all fields are multiValued by nature\n         1.1: multiValued attribute introduced, false by default\n         1.2: omitTermFreqAndPositions attribute introduced, true by default except for text fields.\n         1.3: removed optional field compress feature\n       -->\n  <types>\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in the\n       org.apache.solr.analysis package.\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       - StrField and TextField support an optional compressThreshold which\n       limits compression (if enabled in the derived fields) to values which\n       exceed a certain size (in characters).\n    -->\n    <fieldType name=\"string\" class=\"solr.StrField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->\n    <fieldtype name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The optional sortMissingLast and sortMissingFirst attributes are\n         currently supported on types that are sorted internally as strings.\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!-- numeric field types that can be sorted, but are not optimized for range queries -->\n    <fieldType name=\"integer\" class=\"solr.TrieIntField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    
<fieldType name=\"float\" class=\"solr.TrieFloatField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"long\" class=\"solr.TrieLongField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"double\" class=\"solr.TrieDoubleField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <!--\n      Note:\n      These should only be used for compatibility with existing indexes (created with older Solr versions)\n      or if \"sortMissingFirst\" or \"sortMissingLast\" functionality is needed. Use Trie based fields instead.\n\n      Numeric field types that manipulate the value into\n      a string value that isn't human-readable in its internal form,\n      but with a lexicographic ordering the same as the numeric ordering,\n      so that range queries work correctly.\n    -->\n    <fieldType name=\"sint\" class=\"solr.TrieIntField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"sfloat\" class=\"solr.TrieFloatField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"slong\" class=\"solr.TrieLongField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"sdouble\" class=\"solr.TrieDoubleField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!--\n     Numeric field types that index each value at various levels of precision\n     to accelerate range queries when the number of values between the range\n     endpoints is large. 
See the javadoc for NumericRangeQuery for internal\n     implementation details.\n\n     Smaller precisionStep values (specified in bits) will lead to more tokens\n     indexed per value, slightly larger index size, and faster range queries.\n     A precisionStep of 0 disables indexing at different precision levels.\n    -->\n    <fieldType name=\"tint\" class=\"solr.TrieIntField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tfloat\" class=\"solr.TrieFloatField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tlong\" class=\"solr.TrieLongField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tdouble\" class=\"solr.TrieDoubleField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"pfloat\" class=\"solr.FloatField\" omitNorms=\"true\"/>\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\" valType=\"pfloat\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... 
Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the DateField javadocs for more information.\n      -->\n    <fieldType name=\"date\" class=\"solr.DateField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- A Trie based date field for faster date range queries and date faceting. -->\n    <fieldType name=\"tdate\" class=\"solr.TrieDateField\" omitNorms=\"true\" precisionStep=\"6\" positionIncrementGap=\"0\"/>\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- A text field that uses WordDelimiterFilter to enable splitting and 
matching of\n        words on case-change, alpha numeric boundaries, and non-alphanumeric chars,\n        so that a query of \"wifi\" or \"wi fi\" could match a document containing \"Wi-Fi\".\n        Synonyms and stopwords are customized by external files, and stemming is enabled.\n        Duplicate tokens at the same position (which may result from Stemmed Synonyms or\n        WordDelim parts) are removed.\n        -->\n    <fieldType name=\"text\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <!-- in this example, we will only use synonyms at query time\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"index_synonyms.txt\" ignoreCase=\"true\" expand=\"false\"/>\n        -->\n        <!-- Case insensitive stop word removal.\n          add enablePositionIncrements=true in both the index and query\n          analyzers to leave a 'gap' for more accurate phrase queries.\n        -->\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter 
class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                
generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"1\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- An unstemmed text field - good if one does not know the language of the field -->\n    <fieldType name=\"text_und\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\" enablePositionIncrements=\"true\" />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n     
           generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- Edge N gram type - for example for matching against queries with results\n        KeywordTokenizer leaves input string intact as a single term.\n        see: http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/\n      -->\n    <fieldType name=\"edge_n2_kw_text\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.EdgeNGramFilterFactory\" minGramSize=\"2\" maxGramSize=\"25\" />\n      </analyzer>\n      <analyzer 
type=\"query\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n    <!--  Setup simple analysis for spell checking -->\n\n    <fieldType name=\"textSpell\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\" />\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"4\" max=\"20\" />\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\" />\n      </analyzer>\n    </fieldType>\n\n    <!-- This is an example of using the KeywordTokenizer along\n         with various TokenFilterFactories to produce a sortable field\n         that does not include some properties of the source text\n      -->\n    <fieldType name=\"sortString\" class=\"solr.TextField\" sortMissingLast=\"true\" omitNorms=\"true\">\n      <analyzer>\n        <!-- KeywordTokenizer does no actual tokenizing, so the entire\n            input string is preserved as a single token\n          -->\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <!-- The LowerCase TokenFilter does what you expect, which can be\n            helpful when you want your sorting to be case insensitive\n          -->\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <!-- The TrimFilter removes any leading or trailing whitespace -->\n        <filter class=\"solr.TrimFilterFactory\" />\n        <!-- The PatternReplaceFilter gives you the flexibility to use\n            Java Regular expression to replace any sequence of characters\n            matching a pattern with an arbitrary replacement string,\n            which may include back references to portions of the original\n            string matched by the pattern.\n\n            See the Java Regular 
Expression documentation for more\n            information on pattern and replacement string syntax.\n\n            http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html\n\n        <filter class=\"solr.PatternReplaceFilterFactory\"\n               pattern=\"(^\\p{Punct}+)\" replacement=\"\" replace=\"all\"\n        />\n        -->\n      </analyzer>\n    </fieldType>\n\n    <!-- A random sort type -->\n    <fieldType name=\"rand\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- since fields of this type are by default not stored or indexed, any data added to\n         them will be ignored outright\n      -->\n    <fieldtype name=\"ignored\" stored=\"false\" indexed=\"false\" class=\"solr.StrField\" />\n\n    <!-- Begin added types to use features in Solr 3.4+ -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"tdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. 
-->\n    <fieldType name=\"location\" class=\"solr.LatLonType\" subFieldType=\"tdouble\"/>\n\n    <!-- A Geohash is a compact representation of a latitude longitude pair in a single field.\n         See http://wiki.apache.org/solr/SpatialSearch\n     -->\n    <fieldtype name=\"geohash\" class=\"solr.GeoHashField\"/>\n    <!-- End added Solr 3.4+ types -->\n\n  </types>\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  <xi:include href=\"schema_extra_types.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <fields>\n    <!-- Valid attributes for fields:\n      name: mandatory - the name for the field\n      type: mandatory - the name of a previously defined type from the <types> section\n      indexed: true if this field should be indexed (searchable or sortable)\n      stored: true if this field should be retrievable\n      compressed: [false] if this field should be stored using gzip compression\n       (this will only apply if the field type is compressible; among\n       the standard field types, only TextField and StrField are)\n      multiValued: true if this field may contain multiple values per document\n      omitNorms: (expert) set to true to omit the norms associated with\n       this field (this disables length normalization and index-time\n       boosting for the field, and saves some memory).  Only full-text\n       fields or fields that need an index-time boost need norms.\n    -->\n\n    <!-- The document id is usually derived from a site-specific key (hash) and the\n      entity type and ID like:\n      Search Api :\n        The format used is $document->id = $index_id . '-' . $item_id\n      Apache Solr Search Integration\n        The format used is $document->id = $site_hash . '/' . $entity_type . '/' . 
$entity->id;\n    -->\n    <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" />\n\n    <!-- Add Solr Cloud version field as mentioned in\n         http://wiki.apache.org/solr/SolrCloud#Required_Config\n    -->\n    <field name=\"_version_\" type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n\n    <!-- Search Api specific fields -->\n    <!-- item_id contains the entity ID, e.g. a node's nid. -->\n    <field name=\"item_id\"  type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- index_id is the machine name of the search index this entry belongs to. -->\n    <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- Since sorting by ID is explicitly allowed, store item_id also in a sortable way. -->\n    <copyField source=\"item_id\" dest=\"sort_search_api_id\" />\n\n    <!-- Apache Solr Search Integration specific fields -->\n    <!-- entity_id is the numeric object ID, e.g. Node ID, File ID -->\n    <field name=\"entity_id\"  type=\"long\" indexed=\"true\" stored=\"true\" />\n    <!-- entity_type is 'node', 'file', 'user', or some other Drupal object type -->\n    <field name=\"entity_type\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- bundle is a node type, or as appropriate for other entity types -->\n    <field name=\"bundle\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"bundle_name\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"url\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <!-- label is the default field for a human-readable string for this entity (e.g. 
the title of a node) -->\n    <field name=\"label\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- The string version of the title is used for sorting -->\n    <copyField source=\"label\" dest=\"sort_label\"/>\n\n    <!-- content is the default field for full text search - dump crap here -->\n    <field name=\"content\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n    <field name=\"teaser\" type=\"text\" indexed=\"false\" stored=\"true\"/>\n    <field name=\"path\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"path_alias\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n\n    <!-- These are the fields that correspond to a Drupal node. The beauty of having\n      Lucene store title, body, type, etc., is that we retrieve them with the search\n      result set and don't need to go to the database with a node_load. -->\n    <field name=\"tid\"  type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n    <field name=\"taxonomy_names\" type=\"text\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n    <!-- Copy terms to a single field that contains all taxonomy term names -->\n    <copyField source=\"tm_vid_*\" dest=\"taxonomy_names\"/>\n\n    <!-- Here, default is used to create a \"timestamp\" field indicating\n         when each document was indexed.-->\n    <field name=\"timestamp\" type=\"tdate\" indexed=\"true\" stored=\"true\" default=\"NOW\" multiValued=\"false\"/>\n\n    <!-- This field is used to build the spellchecker index -->\n    <field name=\"spell\" type=\"textSpell\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- copyField commands copy one field to another at the time a document\n         is added to the index.  
It's used either to index the same field differently,\n         or to add multiple fields to the same field for easier/faster searching.  -->\n    <copyField source=\"label\" dest=\"spell\"/>\n    <copyField source=\"content\" dest=\"spell\"/>\n\n    <copyField source=\"ts_*\" dest=\"spell\"/>\n    <copyField source=\"tm_*\" dest=\"spell\"/>\n\n    <!-- Dynamic field definitions.  If a field name is not found, dynamicFields\n         will be used if the name matches any of the patterns.\n         RESTRICTION: the glob-like pattern in the name attribute must have\n         a \"*\" only at the start or the end.\n         EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n         Longer patterns will be matched first.  If equal size patterns\n         both match, the first appearing in the schema will be used.  -->\n\n    <!-- A set of fields to contain text extracted from HTML tag contents which we\n         can boost at query time. -->\n    <dynamicField name=\"tags_*\" type=\"text\"   indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n\n    <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n         the last letter is 's' for single valued, 'm' for multi-valued -->\n\n    <!-- We use long for integer since 64 bit ints are now common in PHP. 
-->\n    <dynamicField name=\"is_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"im_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of floats can be saved in a regular float field -->\n    <dynamicField name=\"fs_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fm_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of doubles can be saved in a regular double field -->\n    <dynamicField name=\"ps_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pm_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of booleans can be saved in a regular boolean field -->\n    <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <!-- Regular text (without processing) can be stored in a string field-->\n    <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Normal text fields are for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"ts_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    <dynamicField name=\"tm_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- Unstemmed text fields for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"tus_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    
<dynamicField name=\"tum_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- These text fields omit norms - useful for extracted text like taxonomy_names -->\n    <dynamicField name=\"tos_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n    <dynamicField name=\"tom_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- Special-purpose text fields -->\n    <dynamicField name=\"tes_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"false\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tem_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"true\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- trie dates are preferred, so give them the 2 letter prefix -->\n    <dynamicField name=\"ds_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"dm_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"its_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"itm_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"fts_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ftm_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pts_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ptm_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" 
multiValued=\"true\"/>\n    <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n         a small image in a search result using the data URI scheme -->\n    <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a date rather than tdate is needed for sortMissingLast -->\n    <dynamicField name=\"dds_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ddm_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Sortable fields, good for sortMissingLast support &\n         We use long for integer since 64 bit ints are now common in PHP. -->\n    <dynamicField name=\"iss_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ism_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a sfloat rather than tfloat is needed for sortMissingLast -->\n    <dynamicField name=\"fss_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fsm_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pss_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"psm_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 
32 bit on 64 arch -->\n    <dynamicField name=\"hs_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hm_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hss_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hsm_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hts_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"htm_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n    <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Begin added fields to use features in Solr 3.4+\n         http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n    <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"geos_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"geom_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- External file fields -->\n    <dynamicField name=\"eff_*\" type=\"file\"/>\n    <!-- End added fields for Solr 3.4+ -->\n\n    <!-- Sortable version of the dynamic string field -->\n    
<dynamicField name=\"sort_*\" type=\"sortString\" indexed=\"true\" stored=\"false\"/>\n    <copyField source=\"ss_*\" dest=\"sort_*\"/>\n    <!-- A random sort field -->\n    <dynamicField name=\"random_*\" type=\"rand\" indexed=\"true\" stored=\"true\"/>\n    <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n    <dynamicField name=\"access_*\" type=\"integer\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n    <!-- The following causes solr to ignore any fields that don't already match an existing\n         field name or dynamic field, rather than reporting them as an error.\n         Alternately, change the type=\"ignored\" to some other type e.g. \"text\" if you want\n         unknown fields indexed and/or stored by default -->\n    <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n  </fields>\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  <xi:include href=\"schema_extra_fields.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- field for the QueryParser to use when an explicit fieldname is absent -->\n  <defaultSearchField>content</defaultSearchField>\n\n  <!-- SolrQueryParser configuration: defaultOperator=\"AND|OR\" -->\n  <solrQueryParser defaultOperator=\"AND\"/>\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/schema_extra_fields.xml",
    "content": "<fields>\n<!--\n  Adding German dynamic field types to our Solr Schema\n  If you enable this, make sure you have a folder called lang with stopwords_de.txt\n  and synonyms_de.txt in there\n  This also requires to enable the content in schema_extra_types.xml\n-->\n<!--\n   <field name=\"label_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"content_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n   <field name=\"teaser_de\" type=\"text_de\" indexed=\"false\" stored=\"true\"/>\n   <field name=\"path_alias_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"taxonomy_names_de\" type=\"text_de\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n   <field name=\"spell_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n   <copyField source=\"label_de\" dest=\"spell_de\"/>\n   <copyField source=\"content_de\" dest=\"spell_de\"/>\n   <dynamicField name=\"tags_de_*\" type=\"text_de\" indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n   <dynamicField name=\"ts_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n   <dynamicField name=\"tm_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n   <dynamicField name=\"tos_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n   <dynamicField name=\"tom_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n-->\n</fields>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/schema_extra_types.xml",
    "content": "<types>\n<!--\n  Adding German language to our Solr Schema German\n  If you enable this, make sure you have a folder called lang with stopwords_de.txt\n  and synonyms_de.txt in there\n-->\n<!--\n    <fieldType name=\"text_de\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\" enablePositionIncrements=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"1\" catenateNumbers=\"1\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"lang/synonyms_de.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\" enablePositionIncrements=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"0\" catenateNumbers=\"0\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter 
class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n-->\n</types>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n\n<!--\n     For more details about configuration options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-4.3-solr-4.x\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered a severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  
Generally, you want to use the latest version to\n       get all bug fixes and improvements. It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_40}</luceneMatchVersion>\n\n  <!-- lib directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n       \n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A dir option by itself adds any files found in the directory to\n       the classpath, this is useful for including all jars in a\n       directory.\n    -->\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/extraction/lib\" />\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/clustering/lib/\" />\n\n  <!-- The velocity library has been known to crash Solr in some\n       instances when deployed as a war file to Tomcat. 
Therefore all\n       references have been removed from the default configuration.\n       @see http://drupal.org/node/1612556\n  -->\n  <!-- <lib dir=\"../../contrib/velocity/lib\" /> -->\n\n  <!-- When a regex is specified in addition to a directory, only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n    -->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-cell-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-clustering-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-dataimporthandler-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-langid-\\d.*\\.jar\" />-->\n  <!-- <lib dir=\"../../dist/\" regex=\"apache-solr-velocity-\\d.*\\.jar\" /> -->\n\n  <!-- If a dir option (with or without a regex) is used and nothing\n       is found that matches, it will be ignored\n    -->\n  <!--<lib dir=\"../../contrib/clustering/lib/\" />-->\n  <!--<lib dir=\"/total/crap/dir/ignored\" />-->\n\n  <!-- an exact path can be used to specify a specific file.  This\n       will cause a serious error to be logged if it can't be loaded.\n    -->\n  <!--\n  <lib path=\"../a-jar-that-does-not-exist.jar\" /> \n  -->\n  \n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <!-- <dataDir>${solr.data.dir:}</dataDir> -->\n\n\n  <!-- The DirectoryFactory to use for indexes.\n       \n       solr.StandardDirectoryFactory, the default, is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  
One can force a particular implementation\n       via solr.MMapDirectoryFactory, solr.NIOFSDirectoryFactory, or\n       solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based, not\n       persistent, and doesn't work with replication.\n    -->\n  <directoryFactory name=\"DirectoryFactory\" \n                    class=\"${solr.directoryFactory:solr.StandardDirectoryFactory}\"/>\n\n  <!-- Index Defaults\n\n       Values here affect all index writers and act as a default\n       unless overridden.\n\n       WARNING: See also the <mainIndex> section below for parameters\n       that override these values for Solr's main Lucene index.\n    -->\n  <indexConfig>\n\n    <useCompoundFile>false</useCompoundFile>\n\n    <mergeFactor>4</mergeFactor>\n    <!-- Sets the amount of RAM that may be used by Lucene indexing\n         for buffering added documents and deletions before they are\n         flushed to the Directory.  -->\n    <ramBufferSizeMB>32</ramBufferSizeMB>\n    <!-- If both ramBufferSizeMB and maxBufferedDocs is set, then\n         Lucene will flush based on whichever limit is hit first.  \n      -->\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <maxMergeDocs>2147483647</maxMergeDocs>\n    <maxFieldLength>100000</maxFieldLength>\n    <writeLockTimeout>1000</writeLockTimeout>\n\n    <!-- Expert: Merge Policy \n\n         The Merge Policy in Lucene controls how merging is handled by\n         Lucene.  The default in Solr 3.3 is TieredMergePolicy.\n         \n         The default in 2.3 was the LogByteSizeMergePolicy,\n         previous versions used LogDocMergePolicy.\n         \n         LogByteSizeMergePolicy chooses segments to merge based on\n         their size.  
The Lucene 2.2 default, LogDocMergePolicy chose\n         when to merge based on number of documents\n         \n         Other implementations of MergePolicy must have a no-argument\n         constructor\n      -->\n    <mergePolicy class=\"org.apache.lucene.index.LogByteSizeMergePolicy\"/>\n\n    <!-- Expert: Merge Scheduler\n\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!-- \n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\t  \n    <!-- LockFactory \n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n      \n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         (For backwards compatibility with Solr 1.2, 'simple' is the\n         default if not specified.)\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>single</lockType>\n\n    <!-- Expert: Controls how often Lucene loads terms into memory\n         Default is 128 and is likely good for most everyone.\n      -->\n    <!-- <termIndexInterval>256</termIndexInterval> -->\n\n    <!-- Unlock On Startup\n\n         If true, unlock any held write or commit locks on startup.\n         This defeats the locking mechanism that allows multiple\n         processes to 
safely access a lucene index, and should be used\n         with care.\n\n         This is not needed if lock type is 'none' or 'single'\n     -->\n    <unlockOnStartup>false</unlockOnStartup>\n    \n    <!-- If true, IndexReaders will be reopened (often more efficient)\n         instead of closed and then opened.\n      -->\n    <reopenReaders>true</reopenReaders>\n\n    <!-- Commit Deletion Policy\n\n         Custom deletion policies can be specified here. The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         http://lucene.apache.org/java/2_9_1/api/all/org/apache/lucene/index/IndexDeletionPolicy.html\n\n         The standard Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n         \n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n      <!-- The number of commit points to be kept -->\n      <str name=\"maxCommitsToKeep\">1</str>\n      <!-- The number of optimized commit points to be kept -->\n      <str name=\"maxOptimizedCommitsToKeep\">0</str>\n      <!--\n          Delete all commit points once they have reached the given age.\n          Supports DateMathParser syntax e.g.\n        -->\n      <!--\n         <str name=\"maxCommitAge\">30MINUTES</str>\n         <str name=\"maxCommitAge\">1DAY</str>\n      -->\n    </deletionPolicy>\n\n    <!-- Lucene Infostream\n       \n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its debugging info to the specified file\n      -->\n     <infoStream file=\"INFOSTREAM.txt\">false</infoStream> \n\n  </indexConfig>\n\n  <!-- JMX\n       \n       This example enables JMX if and only if an 
existing MBeanServer\n       is found; use this if you want to configure JMX through JVM\n       parameters. Remove this to disable exposing Solr configuration\n       and statistics to JMX.\n\n       For more details see http://wiki.apache.org/solr/SolrJmx\n    -->\n  <!-- <jmx /> -->\n  <!-- If you want to connect to a particular server, specify the\n       agentId \n    -->\n  <!-- <jmx agentId=\"myAgent\" /> -->\n  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->\n  <!-- <jmx serviceUrl=\"service:jmx:rmi:///jndi/rmi://localhost:9999/solr\"/>\n    -->\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- AutoCommit\n\n         Perform a <commit/> automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents. \n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:10000}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:120000}</maxTime>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n    -->\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:2000}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:10000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n         \n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns. \n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n    <!-- Enables a transaction log, currently used for real-time get.\n         \"dir\" - the target directory for transaction logs, defaults to the\n         solr data directory.  
-->\n    <updateLog>\n      <str name=\"dir\">${solr.data.dir:}</str>\n      <!-- if you want to take control of the synchronization you may specify\n           the syncLevel as one of the following where ''flush'' is the default.\n           Fsync will reduce throughput.\n           <str name=\"syncLevel\">flush|fsync|none</str>\n      -->\n    </updateLog>\n  </updateHandler>\n  \n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n  <!-- By explicitly declaring the Factory, the termIndexDivisor can\n       be specified.\n    -->\n  <!--\n     <indexReaderFactory name=\"IndexReaderFactory\" \n                         class=\"solr.StandardIndexReaderFactory\">\n       <int name=\"setTermIndexDivisor\">12</int>\n     </indexReaderFactory >\n    -->\n\n\n  <query>\n    <!-- Max Boolean Clauses\n\n         Maximum number of clauses in each BooleanQuery,  an exception\n         is thrown if exceeded.\n\n         ** WARNING **\n         \n         This option actually modifies a global Lucene property that\n         will affect all SolrCores.  
If multiple solrconfig.xml files\n         disagree on this property, the value at any given moment will\n         be based on the last SolrCore to be initialized.\n         \n      -->\n    <maxBooleanClauses>1024</maxBooleanClauses>\n\n\n    <!-- Solr Internal Query Caches\n\n         There are two implementations of cache available for Solr,\n         LRUCache, based on a synchronized LinkedHashMap, and\n         FastLRUCache, based on a ConcurrentHashMap.  \n\n         FastLRUCache has faster gets and slower puts in single\n         threaded operation and thus is generally faster than LRUCache\n         when the hit ratio of the cache is high (> 75%), and may be\n         faster under other scenarios on multi-cpu systems.\n    -->\n\n    <!-- Filter Cache\n\n         Cache used by SolrIndexSearcher for filters (DocSets),\n         unordered sets of *all* documents that match a query.  When a\n         new searcher is opened, its caches may be prepopulated or\n         \"autowarmed\" using data from caches in the old searcher.\n         autowarmCount is the number of items to prepopulate.  For\n         LRUCache, the autowarmed items will be the most recently\n         accessed items.\n\n         Parameters:\n           class - the SolrCache implementation to use\n               (LRUCache or FastLRUCache)\n           size - the maximum number of entries in the cache\n           initialSize - the initial capacity (number of entries) of\n               the cache.  (see java.util.HashMap)\n           autowarmCount - the number of entries to prepopulate from\n               an old cache.  \n      -->\n    <filterCache class=\"solr.FastLRUCache\"\n                 size=\"512\"\n                 initialSize=\"512\"\n                 autowarmCount=\"0\"/>\n\n    <!-- Query Result Cache\n         \n         Caches results of searches - ordered lists of document ids\n         (DocList) based on a query, a sort, and the range of documents requested.  
\n      -->\n    <queryResultCache class=\"solr.LRUCache\"\n                     size=\"512\"\n                     initialSize=\"512\"\n                     autowarmCount=\"32\"/>\n   \n    <!-- Document Cache\n\n         Caches Lucene Document objects (the stored fields for each\n         document).  Since Lucene internal document ids are transient,\n         this cache will not be autowarmed.  \n      -->\n    <documentCache class=\"solr.LRUCache\"\n                   size=\"512\"\n                   initialSize=\"512\"\n                   autowarmCount=\"0\"/>\n    \n    <!-- Field Value Cache\n         \n         Cache used to hold field values that are quickly accessible\n         by document id.  The fieldValueCache is created by default\n         even if not configured here.\n      -->\n    <!--\n       <fieldValueCache class=\"solr.FastLRUCache\"\n                        size=\"512\"\n                        autowarmCount=\"128\"\n                        showItems=\"32\" />\n      -->\n\n    <!-- Custom Cache\n\n         Example of a generic cache.  These caches may be accessed by\n         name through SolrIndexSearcher.getCache(),cacheLookup(), and\n         cacheInsert().  The purpose is to enable easy caching of\n         user/application level data.  The regenerator argument should\n         be specified as an implementation of solr.CacheRegenerator \n         if autowarming is desired.  \n      -->\n    <!--\n       <cache name=\"myUserCache\"\n              class=\"solr.LRUCache\"\n              size=\"4096\"\n              initialSize=\"1024\"\n              autowarmCount=\"1024\"\n              regenerator=\"com.mycompany.MyRegenerator\"\n              />\n      -->\n\n\n    <!-- Lazy Field Loading\n\n         If true, stored fields that are not requested will be loaded\n         lazily.  
This can result in a significant speed improvement\n         if the usual case is to not load all stored fields,\n         especially if the skipped fields are large compressed text\n         fields.\n    -->\n    <enableLazyFieldLoading>true</enableLazyFieldLoading>\n\n   <!-- Use Filter For Sorted Query\n\n        A possible optimization that attempts to use a filter to\n        satisfy a search.  If the requested sort does not include\n        score, then the filterCache will be checked for a filter\n        matching the query. If found, the filter will be used as the\n        source of document ids, and then the sort will be applied to\n        that.\n\n        For most situations, this will not be useful unless you\n        frequently get the same search repeatedly with different sort\n        options, and none of them ever use \"score\"\n     -->\n   <!--\n      <useFilterForSortedQuery>true</useFilterForSortedQuery>\n     -->\n\n   <!-- Result Window Size\n\n        An optimization for use with the queryResultCache.  When a search\n        is requested, a superset of the requested number of document ids\n        are collected.  For example, if a search for a particular query\n        requests matching documents 10 through 19, and queryWindowSize is 50,\n        then documents 0 through 49 will be collected and cached.  Any further\n        requests in that range can be satisfied via the cache.  \n     -->\n   <queryResultWindowSize>20</queryResultWindowSize>\n\n   <!-- Maximum number of documents to cache for any entry in the\n        queryResultCache. \n     -->\n   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>\n\n   <!-- Query Related Event Listeners\n\n        Various IndexSearcher related events can trigger Listeners to\n        take actions.\n\n        newSearcher - fired whenever a new searcher is being prepared\n        and there is a current searcher handling requests (aka\n        registered).  
It can be used to prime certain caches to\n        prevent long request times for certain requests.\n\n        firstSearcher - fired whenever a new searcher is being\n        prepared but there is no current registered searcher to handle\n        requests or to gain autowarming data from.\n\n        \n     -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence. \n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">solr rocks</str><str name=\"start\">0</str><str name=\"rows\">10</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n    <!-- Max Warming Searchers\n         \n         Maximum number of searchers that may be warming in the\n         background concurrently.  
An error is returned if this limit\n         is exceeded.\n\n         Recommended values of 1-2 for read-only slaves, higher for\n         masters w/o cache warming.\n      -->\n    <maxWarmingSearchers>2</maxWarmingSearchers>\n\n  </query>\n\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n\n       handleSelect affects the behavior of requests such as /select?qt=XXX\n\n       handleSelect=\"true\" will cause the SolrDispatchFilter to process\n       the request and will result in consistent error handling and\n       formatting for all types of requests.\n\n       handleSelect=\"false\" will cause the SolrDispatchFilter to\n       ignore \"/select\" requests and fall back to using the legacy\n       SolrServlet and its Solr 1.1 style error formatting\n    -->\n  <requestDispatcher handleSelect=\"true\" >\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size of\n         Multipart File Uploads that Solr will allow in a Request.\n         \n         *** WARNING ***\n         The settings below authorize Solr to fetch remote files. You\n         should make sure your system has some authentication before\n         using enableRemoteStreaming=\"true\"\n\n      --> \n    <requestParsers enableRemoteStreaming=\"true\" \n                    multipartUploadLimitInKB=\"2048000\" />\n\n    <!-- HTTP Caching\n\n         Set HTTP caching related parameters (for proxy caches and clients).\n\n         The options below instruct Solr not to output any HTTP Caching\n         related headers\n      -->\n    <httpCaching 
never304=\"true\" />\n    <!-- If you include a <cacheControl> directive, it will be used to\n         generate a Cache-Control header (as well as an Expires header\n         if the value contains \"max-age=\")\n         \n         By default, no Cache-Control header is generated.\n         \n         You can use the <cacheControl> option even if you have set\n         never304=\"true\"\n      -->\n    <!--\n       <httpCaching never304=\"true\" >\n         <cacheControl>max-age=30, public</cacheControl> \n       </httpCaching>\n      -->\n    <!-- To enable Solr to respond with automatically generated HTTP\n         Caching headers, and to response to Cache Validation requests\n         correctly, set the value of never304=\"false\"\n         \n         This will cause Solr to generate Last-Modified and ETag\n         headers based on the properties of the Index.\n\n         The following options can also be specified to affect the\n         values of these headers...\n\n         lastModFrom - the default value is \"openTime\" which means the\n         Last-Modified value (and validation against If-Modified-Since\n         requests) will all be relative to when the current Searcher\n         was opened.  
You can change it to lastModFrom=\"dirLastMod\" if\n         you want the value to exactly correspond to when the physical\n         index was last modified.\n\n         etagSeed=\"...\" is an option you can change to force the ETag\n         header (and validation against If-None-Match requests) to be\n         different even if the index has not changed (ie: when making\n         significant changes to your config file)\n\n         (lastModifiedFrom and etagSeed are both ignored if you use\n         the never304=\"true\" option)\n      -->\n    <!--\n       <httpCaching lastModifiedFrom=\"openTime\"\n                    etagSeed=\"Solr\">\n         <cacheControl>max-age=30, public</cacheControl> \n       </httpCaching>\n      -->\n  </requestDispatcher>\n\n  <!-- Request Handlers \n\n       http://wiki.apache.org/solr/SolrRequestHandler\n\n       Incoming queries will be dispatched to the correct handler\n       based on the path or the qt (query type) param.\n\n       Names starting with a '/' are accessed with a path equal to\n       the registered name.  
Names without a leading '/' are accessed\n       with: http://host/app/[core/]select?qt=name\n\n       If a /select request is processed without a qt param\n       specified, the requestHandler that declares default=\"true\" will\n       be used.\n       \n       If a Request Handler is declared with startup=\"lazy\", then it will\n       not be initialized until the first request that uses it.\n\n    -->\n  <!-- SearchHandler\n\n       http://wiki.apache.org/solr/SearchHandler\n\n       For processing Search Queries, the primary Request Handler\n       provided with Solr is \"SearchHandler\". It delegates to a sequence\n       of SearchComponents (see below) and supports distributed\n       queries across multiple shards\n    -->\n  <!--<requestHandler name=\"search\" class=\"solr.SearchHandler\" default=\"true\">-->\n    <!-- default values for query parameters can be specified; these\n         will be overridden by parameters in the request\n      -->\n     <!--<lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>\n       <int name=\"rows\">10</int>\n     </lst>-->\n    <!-- In addition to defaults, \"appends\" params can be specified\n         to identify values which should be appended to the list of\n         multi-val params from the query (or the existing \"defaults\").\n      -->\n    <!-- In this example, the param \"fq=instock:true\" would be appended to\n         any query time fq params the user may specify, as a mechanism for\n         partitioning the index, independent of any user selected filtering\n         that may also be desired (perhaps as a result of faceted searching).\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"appends\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"appends\">\n         <str name=\"fq\">inStock:true</str>\n       </lst>\n      -->\n    <!-- \"invariants\" are a 
way of letting the Solr maintainer lock down\n         the options available to Solr clients.  Any params values\n         specified here are used regardless of what values may be specified\n         in either the query, the \"defaults\", or the \"appends\" params.\n\n         In this example, the facet.field and facet.query params would\n         be fixed, limiting the facets clients can use.  Faceting is\n         not turned on by default - but if the client does specify\n         facet=true in the request, these are the only facets they\n         will be able to see counts for; regardless of what other\n         facet.field or facet.query params they may specify.\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"invariants\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"invariants\">\n         <str name=\"facet.field\">cat</str>\n         <str name=\"facet.field\">manu_exact</str>\n         <str name=\"facet.query\">price:[* TO 500]</str>\n         <str name=\"facet.query\">price:[500 TO *]</str>\n       </lst>\n      -->\n    <!-- If the default list of SearchComponents is not desired, that\n         list can either be overridden completely, or components can be\n         prepended or appended to the default list.  
(see below)\n      -->\n    <!--\n       <arr name=\"components\">\n         <str>nameOfCustomComponent1</str>\n         <str>nameOfCustomComponent2</str>\n       </arr>\n      -->\n    <!--</requestHandler>-->\n\n  <!-- A Robust Example\n\n       This example SearchHandler declaration shows off usage of the\n       SearchHandler with many defaults declared\n\n       Note that multiple instances of the same Request Handler\n       (SearchHandler) can be registered multiple times with different\n       names (and different init parameters)\n    -->\n  <!--\n  <requestHandler name=\"/browse\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>-->\n\n       <!-- VelocityResponseWriter settings -->\n       <!--<str name=\"wt\">velocity</str>\n\n       <str name=\"v.template\">browse</str>\n       <str name=\"v.layout\">layout</str>\n       <str name=\"title\">Solritas</str>\n\n       <str name=\"defType\">edismax</str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n       <str name=\"mlt.qf\">\n         text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"mlt.fl\">text,features,name,sku,id,manu,cat</str>\n       <int name=\"mlt.count\">3</int>\n\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n\n       <str name=\"facet\">on</str>\n       <str name=\"facet.field\">cat</str>\n       <str name=\"facet.field\">manu_exact</str>\n       <str name=\"facet.query\">ipod</str>\n       <str name=\"facet.query\">GB</str>\n       <str name=\"facet.mincount\">1</str>\n       <str name=\"facet.pivot\">cat,inStock</str>\n       <str name=\"facet.range.other\">after</str>\n       <str name=\"facet.range\">price</str>\n       <int name=\"f.price.facet.range.start\">0</int>\n       <int name=\"f.price.facet.range.end\">600</int>\n       <int 
name=\"f.price.facet.range.gap\">50</int>\n       <str name=\"facet.range\">popularity</str>\n       <int name=\"f.popularity.facet.range.start\">0</int>\n       <int name=\"f.popularity.facet.range.end\">10</int>\n       <int name=\"f.popularity.facet.range.gap\">3</int>\n       <str name=\"facet.range\">manufacturedate_dt</str>\n       <str name=\"f.manufacturedate_dt.facet.range.start\">NOW/YEAR-10YEARS</str>\n       <str name=\"f.manufacturedate_dt.facet.range.end\">NOW</str>\n       <str name=\"f.manufacturedate_dt.facet.range.gap\">+1YEAR</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">before</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">after</str>-->\n\n\n       <!-- Highlighting defaults -->\n       <!--<str name=\"hl\">on</str>\n       <str name=\"hl.fl\">text features name</str>\n       <str name=\"f.name.hl.fragsize\">0</str>\n       <str name=\"f.name.hl.alternateField\">name</str>\n     </lst>\n     <arr name=\"last-components\">\n       <str>spellcheck</str>\n     </arr>-->\n     <!--\n     <str name=\"url-scheme\">httpx</str>\n     -->\n  <!--</requestHandler>-->\n  <!-- trivia: the name pinkPony requestHandler was an agreement between the Search API and the\n    apachesolr maintainers. 
The decision was taken during the Drupalcon Munich codesprint.\n    -->\n  <requestHandler name=\"pinkPony\" class=\"solr.SearchHandler\" default=\"true\">\n    <lst name=\"defaults\">\n      <str name=\"defType\">edismax</str>\n      <str name=\"echoParams\">explicit</str>\n      <bool name=\"omitHeader\">true</bool>\n      <float name=\"tie\">0.01</float>\n      <!-- Don't abort searches for the pinkPony request handler (set in solrcore.properties) -->\n      <int name=\"timeAllowed\">${solr.pinkPony.timeAllowed:-1}</int>\n      <str name=\"q.alt\">*:*</str>\n\n      <!-- By default, don't spell check -->\n      <str name=\"spellcheck\">false</str>\n      <!-- Defaults for the spell checker when used -->\n      <str name=\"spellcheck.onlyMorePopular\">true</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <!--  The number of suggestions to return -->\n      <str name=\"spellcheck.count\">1</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- The more like this handler offers many advantages over the standard handler\n     when performing moreLikeThis requests.-->\n  <requestHandler name=\"mlt\" class=\"solr.MoreLikeThisHandler\">\n    <lst name=\"defaults\">\n      <str name=\"mlt.mintf\">1</str>\n      <str name=\"mlt.mindf\">1</str>\n      <str name=\"mlt.minwl\">3</str>\n      <str name=\"mlt.maxwl\">15</str>\n      <str name=\"mlt.maxqt\">20</str>\n      <str name=\"mlt.match.include\">false</str>\n      <!-- Abort any searches longer than 2 seconds (set in solrcore.properties) -->\n      <int name=\"timeAllowed\">${solr.mlt.timeAllowed:2000}</int>\n    </lst>\n  </requestHandler>\n\n  <!-- A minimal query type for doing Lucene queries -->\n  <requestHandler name=\"standard\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>\n       <bool name=\"omitHeader\">true</bool>\n     
</lst>\n  </requestHandler>\n\n  <!-- XML Update Request Handler.  \n       \n       http://wiki.apache.org/solr/UpdateXmlMessages\n\n       The canonical Request Handler for Modifying the Index through\n       commands specified using XML.\n\n       Note: Since Solr 1.1, request handlers require a valid content\n       type header if posted in the body. For example, curl now\n       requires: -H 'Content-type:text/xml; charset=utf-8'\n    -->\n  <requestHandler name=\"/update\" \n                  class=\"solr.UpdateRequestHandler\">\n    <!-- See below for information on defining \n         updateRequestProcessorChains that can be used by name \n         on each Update Request\n      -->\n    <!--\n       <lst name=\"defaults\">\n         <str name=\"update.chain\">dedupe</str>\n       </lst>\n       -->\n    </requestHandler>\n  <!-- Binary Update Request Handler\n       http://wiki.apache.org/solr/javabin\n    -->\n  <requestHandler name=\"/update/javabin\" \n                  class=\"solr.UpdateRequestHandler\" />\n\n  <!-- CSV Update Request Handler\n       http://wiki.apache.org/solr/UpdateCSV\n    -->\n  <requestHandler name=\"/update/csv\" \n                  class=\"solr.CSVRequestHandler\" \n                  startup=\"lazy\" />\n\n  <!-- JSON Update Request Handler\n       http://wiki.apache.org/solr/UpdateJSON\n    -->\n  <requestHandler name=\"/update/json\" \n                  class=\"solr.JsonUpdateRequestHandler\" \n                  startup=\"lazy\" />\n\n  <!-- Solr Cell Update Request Handler\n\n       http://wiki.apache.org/solr/ExtractingRequestHandler \n\n    -->\n  <requestHandler name=\"/update/extract\" \n                  startup=\"lazy\"\n                  class=\"solr.extraction.ExtractingRequestHandler\" >\n    <lst name=\"defaults\">\n      <!-- All the main content goes into \"text\"... if you need to return\n           the extracted text or do highlighting, use a stored field. 
-->\n      <str name=\"fmap.content\">text</str>\n      <str name=\"lowernames\">true</str>\n      <str name=\"uprefix\">ignored_</str>\n\n      <!-- capture link hrefs but ignore div attributes -->\n      <str name=\"captureAttr\">true</str>\n      <str name=\"fmap.a\">links</str>\n      <str name=\"fmap.div\">ignored_</str>\n    </lst>\n  </requestHandler>\n\n  <!-- XSLT Update Request Handler\n       Transforms incoming XML with stylesheet identified by tr=\n  -->\n  <requestHandler name=\"/update/xslt\"\n                   startup=\"lazy\"\n                   class=\"solr.XsltUpdateRequestHandler\"/>\n\n  <!-- Field Analysis Request Handler\n\n       RequestHandler that provides much the same functionality as\n       analysis.jsp. Provides the ability to specify multiple field\n       types and field names in the same request and outputs\n       index-time and query-time analysis for each of them.\n\n       Request parameters are:\n       analysis.fieldname - field name whose analyzers are to be used\n\n       analysis.fieldtype - field type whose analyzers are to be used\n       analysis.fieldvalue - text for index-time analysis\n       q (or analysis.q) - text for query time analysis\n       analysis.showmatch (true|false) - When set to true and when\n           query analysis is performed, the produced tokens of the\n           field value analysis will be marked as \"matched\" for every\n           token that is produced by the query analysis\n   -->\n  <requestHandler name=\"/analysis/field\" \n                  startup=\"lazy\"\n                  class=\"solr.FieldAnalysisRequestHandler\" />\n\n\n  <!-- Document Analysis Handler\n\n       http://wiki.apache.org/solr/AnalysisRequestHandler\n\n       An analysis handler that provides a breakdown of the analysis\n       process of provided documents. 
This handler expects a (single)\n       content stream with the following format:\n\n       <docs>\n         <doc>\n           <field name=\"id\">1</field>\n           <field name=\"name\">The Name</field>\n           <field name=\"text\">The Text Value</field>\n         </doc>\n         <doc>...</doc>\n         <doc>...</doc>\n         ...\n       </docs>\n\n    Note: Each document must contain a field which serves as the\n    unique key. This key is used in the returned response to associate\n    an analysis breakdown to the analyzed document.\n\n    Like the FieldAnalysisRequestHandler, this handler also supports\n    query analysis by sending either an \"analysis.query\" or \"q\"\n    request parameter that holds the query text to be analyzed. It\n    also supports the \"analysis.showmatch\" parameter; when set to\n    true, all field tokens that match the query tokens will be marked\n    as a \"match\". \n  -->\n  <requestHandler name=\"/analysis/document\" \n                  class=\"solr.DocumentAnalysisRequestHandler\" \n                  startup=\"lazy\" />\n\n  <!-- Admin Handlers\n\n       Admin Handlers - This will register all the standard admin\n       RequestHandlers.  \n    -->\n  <requestHandler name=\"/admin/\" class=\"solr.admin.AdminHandlers\" />\n  <!-- This single handler is equivalent to the following... 
-->\n  <!--\n     <requestHandler name=\"/admin/luke\"       class=\"solr.admin.LukeRequestHandler\" />\n     <requestHandler name=\"/admin/system\"     class=\"solr.admin.SystemInfoHandler\" />\n     <requestHandler name=\"/admin/plugins\"    class=\"solr.admin.PluginInfoHandler\" />\n     <requestHandler name=\"/admin/threads\"    class=\"solr.admin.ThreadDumpHandler\" />\n     <requestHandler name=\"/admin/properties\" class=\"solr.admin.PropertiesRequestHandler\" />\n     <requestHandler name=\"/admin/file\"       class=\"solr.admin.ShowFileRequestHandler\" >\n    -->\n  <!-- If you wish to hide files under ${solr.home}/conf, explicitly\n       register the ShowFileRequestHandler using: \n    -->\n  <!--\n     <requestHandler name=\"/admin/file\" \n                     class=\"solr.admin.ShowFileRequestHandler\" >\n       <lst name=\"invariants\">\n         <str name=\"hidden\">synonyms.txt</str> \n         <str name=\"hidden\">anotherfile.txt</str> \n       </lst>\n     </requestHandler>\n    -->\n\n  <!-- ping/healthcheck -->\n  <requestHandler name=\"/admin/ping\" class=\"solr.PingRequestHandler\">\n    <lst name=\"invariants\">\n      <str name=\"qt\">pinkPony</str>\n      <str name=\"q\">solrpingquery</str>\n      <str name=\"omitHeader\">false</str>\n    </lst>\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">all</str>\n    </lst>\n    <!-- An optional feature of the PingRequestHandler is to configure the \n         handler with a \"healthcheckFile\" which can be used to enable/disable \n         the PingRequestHandler.\n         relative paths are resolved against the data dir \n    -->\n    <!-- <str name=\"healthcheckFile\">server-enabled.txt</str> -->\n  </requestHandler>\n\n  <!-- Echo the request contents back to the client -->\n  <requestHandler name=\"/debug/dump\" class=\"solr.DumpRequestHandler\" >\n    <lst name=\"defaults\">\n     <str name=\"echoParams\">explicit</str> \n     <str name=\"echoHandler\">true</str>\n    </lst>\n  
</requestHandler>\n  \n  <!-- Solr Replication\n\n       The SolrReplicationHandler supports replicating indexes from a\n       \"master\" used for indexing and \"slaves\" used for queries.\n\n       http://wiki.apache.org/solr/SolrReplication\n\n       In the example below, remove the <lst name=\"master\"> section if\n       this is just a slave and remove  the <lst name=\"slave\"> section\n       if this is just a master.\n  -->\n  <requestHandler name=\"/replication\" class=\"solr.ReplicationHandler\" >\n    <lst name=\"master\">\n      <str name=\"enable\">${solr.replication.master:false}</str>\n      <str name=\"replicateAfter\">commit</str>\n      <str name=\"replicateAfter\">startup</str>\n      <str name=\"confFiles\">${solr.replication.confFiles:schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml}</str>\n    </lst>\n    <lst name=\"slave\">\n      <str name=\"enable\">${solr.replication.slave:false}</str>\n      <str name=\"masterUrl\">${solr.replication.masterUrl:http://localhost:8983/solr}/replication</str>\n      <str name=\"pollInterval\">${solr.replication.pollInterval:00:00:60}</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Realtime get handler, guaranteed to return the latest stored fields of\n       any document, without the need to commit or open a new searcher.  
The\n       current implementation relies on the updateLog feature being enabled.\n  -->\n  <requestHandler name=\"/get\" class=\"solr.RealTimeGetHandler\">\n    <lst name=\"defaults\">\n      <str name=\"omitHeader\">true</str>\n      <str name=\"wt\">json</str>\n      <str name=\"indent\">true</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of the standard names, \n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n    \n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n    \n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\" \n       \n     -->\n\n  <!-- A request handler for demonstrating the spellcheck component.  \n\n       NOTE: This is purely as an example.  
The whole purpose of the\n       SpellCheckComponent is to hook it into the request handler that\n       handles your normal user queries so that a separate request is\n       not needed to get suggestions.\n\n       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS\n       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!\n       \n       See http://wiki.apache.org/solr/SpellCheckComponent for details\n       on the request parameters.\n    -->\n  <requestHandler name=\"/spell\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <str name=\"spellcheck.onlyMorePopular\">false</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <str name=\"spellcheck.count\">1</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Term Vector Component\n\n       http://wiki.apache.org/solr/TermVectorComponent\n    -->\n  <searchComponent name=\"tvComponent\" class=\"solr.TermVectorComponent\"/>\n\n  <!-- A request handler for demonstrating the term vector component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your \n       already specified request handlers. \n    -->\n  <requestHandler name=\"tvrh\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <bool name=\"tv\">true</bool>\n    </lst>\n    <arr name=\"last-components\">\n      <str>tvComponent</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Clustering Component\n\n       http://wiki.apache.org/solr/ClusteringComponent\n\n       This relies on third party jars which are not included in the\n       release.  To use this component (and the \"/clustering\" handler),\n       those jars will need to be downloaded, and you'll need to set\n       the solr.clustering.enabled system property when running solr...\n\n          java -Dsolr.clustering.enabled=true -jar start.jar\n    -->\n  <!-- <searchComponent name=\"clustering\"\n                   enable=\"${solr.clustering.enabled:false}\"\n                   class=\"solr.clustering.ClusteringComponent\" > -->\n    <!-- Declare an engine -->\n    <!--<lst name=\"engine\">-->\n      <!-- The name, only one can be named \"default\" -->\n      <!--<str name=\"name\">default</str>-->\n\n      <!-- Class name of Carrot2 clustering algorithm. \n           \n           Currently available algorithms are:\n           \n           * org.carrot2.clustering.lingo.LingoClusteringAlgorithm\n           * org.carrot2.clustering.stc.STCClusteringAlgorithm\n           * org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm\n           \n           See http://project.carrot2.org/algorithms.html for the\n           algorithm's characteristics.\n        -->\n      <!--<str name=\"carrot.algorithm\">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>-->\n\n      <!-- Overriding values for Carrot2 default algorithm attributes.\n\n           For a description of all available attributes, see:\n           http://download.carrot2.org/stable/manual/#chapter.components.\n           Use attribute key as name attribute of str elements\n           below. These can be further overridden for individual\n           requests by specifying attribute key as request parameter\n           name and attribute value as parameter value.\n        -->\n      <!--<str name=\"LingoClusteringAlgorithm.desiredClusterCountBase\">20</str>-->\n      \n      <!-- Location of Carrot2 lexical resources.\n\n           A directory from which to load Carrot2-specific stop words\n           and stop labels. 
Absolute or relative to Solr config directory.\n           If a specific resource (e.g. stopwords.en) is present in the\n           specified dir, it will completely override the corresponding\n           default one that ships with Carrot2.\n\n           For an overview of Carrot2 lexical resources, see:\n           http://download.carrot2.org/head/manual/#chapter.lexical-resources\n        -->\n      <!--<str name=\"carrot.lexicalResourcesDir\">clustering/carrot2</str>-->\n\n      <!-- The language to assume for the documents.\n           \n           For a list of allowed values, see:\n           http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage\n       -->\n      <!--<str name=\"MultilingualClustering.defaultLanguage\">ENGLISH</str>\n    </lst>\n    <lst name=\"engine\">\n      <str name=\"name\">stc</str>\n      <str name=\"carrot.algorithm\">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>\n    </lst>\n  </searchComponent>-->\n\n  <!-- A request handler for demonstrating the clustering component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your \n       already specified request handlers. 
\n    -->\n  <!--<requestHandler name=\"/clustering\"\n                  startup=\"lazy\"\n                  enable=\"${solr.clustering.enabled:false}\"\n                  class=\"solr.SearchHandler\">\n    <lst name=\"defaults\">\n      <bool name=\"clustering\">true</bool>\n      <str name=\"clustering.engine\">default</str>\n      <bool name=\"clustering.results\">true</bool>-->\n      <!-- The title field -->\n      <!--<str name=\"carrot.title\">name</str>-->\n      <!--<str name=\"carrot.url\">id</str>-->\n      <!-- The field to cluster on -->\n       <!--<str name=\"carrot.snippet\">features</str>-->\n       <!-- produce summaries -->\n       <!--<bool name=\"carrot.produceSummary\">true</bool>-->\n       <!-- the maximum number of labels per cluster -->\n       <!--<int name=\"carrot.numDescriptions\">5</int>-->\n       <!-- produce sub clusters -->\n       <!--<bool name=\"carrot.outputSubClusters\">false</bool>-->\n       \n       <!--<str name=\"defType\">edismax</str>\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n    </lst>     \n    <arr name=\"last-components\">\n      <str>clustering</str>\n    </arr>\n  </requestHandler>-->\n  \n  <!-- Terms Component\n\n       http://wiki.apache.org/solr/TermsComponent\n\n       A component to return terms and document frequency of those\n       terms\n    -->\n  <searchComponent name=\"terms\" class=\"solr.TermsComponent\"/>\n\n  <!-- A request handler for demonstrating the terms component -->\n  <requestHandler name=\"/terms\" class=\"solr.SearchHandler\" startup=\"lazy\">\n     <lst name=\"defaults\">\n      <bool name=\"terms\">true</bool>\n    </lst>     \n    <arr name=\"components\">\n      <str>terms</str>\n    </arr>\n  </requestHandler>\n\n\n  <!-- Query Elevation Component\n\n       
http://wiki.apache.org/solr/QueryElevationComponent\n\n       a search component that enables you to configure the top\n       results for a given query regardless of the normal lucene\n       scoring.\n    -->\n  <searchComponent name=\"elevator\" class=\"solr.QueryElevationComponent\" >\n    <!-- pick a fieldType to analyze queries -->\n    <str name=\"queryFieldType\">string</str>\n    <str name=\"config-file\">elevate.xml</str>\n  </searchComponent>\n\n  <!-- A request handler for demonstrating the elevator component -->\n  <requestHandler name=\"/elevate\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">explicit</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\" \n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter \n           (for sentence extraction) \n        -->\n      <fragmenter name=\"regex\" \n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      
</fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\" \n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<strong>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</strong>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure the standard encoder -->\n      <encoder name=\"html\" \n               class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\" \n                       default=\"true\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\" \n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\" \n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!-- \n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\" \n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawgreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str 
name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n      \n      <boundaryScanner name=\"default\" \n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? &#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n      \n      <boundaryScanner name=\"breakIterator\" \n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing Locale object.  -->\n          <!-- And the Locale object will be used when getting instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    --> \n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.  
\n       \n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n    <!--\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n    <!--\n     <updateRequestProcessorChain name=\"langid\">\n       <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n         <str name=\"langid.fl\">text,title,subject,description</str>\n         <str name=\"langid.langField\">language_s</str>\n         <str name=\"langid.fallback\">en</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n \n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will 
be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     <queryResponseWriter name=\"xml\" \n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n    -->\n\n  <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n     <!-- For the purposes of the tutorial, JSON responses are written as\n      plain text so that they are easy to read in *any* browser.\n      If you expect a MIME type of \"application/json\" just remove this override.\n     -->\n    <str name=\"content-type\">text/plain; charset=UTF-8</str>\n  </queryResponseWriter>\n  \n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!-- <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" enable=\"${solr.velocity.enabled:true}\"/> -->\n  \n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.  
\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       http://wiki.apache.org/solr/SolrQuerySyntax\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\" \n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n  <!-- Legacy config for the admin interface -->\n  <admin>\n    <defaultQuery>*:*</defaultQuery>\n\n    <!-- configure a healthcheck file for servers behind a\n         loadbalancer \n      -->\n    <!--\n       <healthcheck type=\"file\">server-enabled</healthcheck>\n      -->\n  </admin>\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  <xi:include href=\"solrconfig_extra.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback>\n    <!-- Spell Check\n\n        The spell check component can return a list of alternative spelling\n        suggestions. 
This component must be defined in\n        solrconfig_extra.xml if present, since it's used in the search handler.\n\n        http://wiki.apache.org/solr/SpellCheckComponent\n     -->\n    <searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n    <str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n    <!-- a spellchecker built from a field of the main index -->\n      <lst name=\"spellchecker\">\n        <str name=\"name\">default</str>\n        <str name=\"field\">spell</str>\n        <str name=\"spellcheckIndexDir\">spellchecker</str>\n        <str name=\"buildOnOptimize\">true</str>\n      </lst>\n    </searchComponent>\n    </xi:fallback>\n  </xi:include>\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/solrconfig_extra.xml",
    "content": "<!-- Spell Check\n\n    The spell check component can return a list of alternative spelling\n    suggestions.\n\n    http://wiki.apache.org/solr/SpellCheckComponent\n -->\n<searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n<str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n<!-- Multiple \"Spell Checkers\" can be declared and used by this\n     component\n  -->\n\n<!-- a spellchecker built from a field of the main index, and\n     written to disk\n  -->\n<lst name=\"spellchecker\">\n  <str name=\"name\">default</str>\n  <str name=\"field\">spell</str>\n  <str name=\"spellcheckIndexDir\">spellchecker</str>\n  <str name=\"buildOnOptimize\">true</str>\n  <!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary\n    <float name=\"thresholdTokenFrequency\">.01</float>\n  -->\n</lst>\n\n<!--\n  Adding a German spellchecker index to our Solr index\n  This also requires enabling the content in schema_extra_types.xml and schema_extra_fields.xml\n-->\n<!--\n<lst name=\"spellchecker\">\n  <str name=\"name\">spellchecker_de</str>\n  <str name=\"field\">spell_de</str>\n  <str name=\"spellcheckIndexDir\">./spellchecker_de</str>\n  <str name=\"buildOnOptimize\">true</str>\n</lst>\n-->\n\n<!-- a spellchecker that uses a different distance measure -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">jarowinkler</str>\n     <str name=\"field\">spell</str>\n     <str name=\"distanceMeasure\">\n       org.apache.lucene.search.spell.JaroWinklerDistance\n     </str>\n     <str name=\"spellcheckIndexDir\">spellcheckerJaro</str>\n   </lst>\n -->\n\n<!-- a spellchecker that uses an alternate comparator\n\n     comparatorClass can be one of:\n      1. score (default)\n      2. freq (Frequency first, then score)\n      3. A fully qualified class name\n  -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">freq</str>\n     <str name=\"field\">lowerfilt</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFreq</str>\n     <str name=\"comparatorClass\">freq</str>\n     <str name=\"buildOnCommit\">true</str>\n   </lst>\n  -->\n\n<!-- A spellchecker that reads the list of words from a file -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"classname\">solr.FileBasedSpellChecker</str>\n     <str name=\"name\">file</str>\n     <str name=\"sourceLocation\">spellings.txt</str>\n     <str name=\"characterEncoding\">UTF-8</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFile</str>\n   </lst>\n  -->\n</searchComponent>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:8099/solr\nsolr.replication.confFiles=schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml\nsolr.mlt.timeAllowed=2000\n# You should not set your luceneMatchVersion to anything lower than your Solr\n# Version.\nsolr.luceneMatchVersion=LUCENE_40\nsolr.pinkPony.timeAllowed=-1\n# autoCommit after 10000 docs\nsolr.autoCommit.MaxDocs=10000\n# autoCommit after 2 minutes\nsolr.autoCommit.MaxTime=120000\n# autoSoftCommit after 2000 docs\nsolr.autoSoftCommit.MaxDocs=2000\n# autoSoftCommit after 10 seconds\nsolr.autoSoftCommit.MaxTime=10000\nsolr.contrib.dir=../../../contrib\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/stopwords.txt",
    "content": "# Contains words which shouldn't be indexed for fulltext fields, e.g., because\n# they're too common. For documentation of the format, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.StopFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr4_drupal7/synonyms.txt",
    "content": "# Contains synonyms to use for your index. For the format used, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. The item IDs are in general constructed as follows:\n   Search API:\n     $document->id = $index_id . '-' . $item_id;\n   Apache Solr Search Integration:\n     $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #1 first in searches for \"example query\": -->\n<!--\n <query text=\"example query\">\n  <doc id=\"default_node_index-1\" />\n  <doc id=\"7v3jsc/node/1\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/mapping-ISOLatin1Accent.txt",
    "content": "# This file contains character mappings for the default fulltext field type.\n# The source characters (on the left) will be replaced by the respective target\n# characters before any other processing takes place.\n# Lines starting with a pound character # are ignored.\n#\n# For sensible defaults, use the mapping-ISOLatin1Accent.txt file distributed\n# with the example application of your Solr version.\n#\n# Examples:\n#   \"À\" => \"A\"\n#   \"\\u00c4\" => \"A\"\n#   \"\\u00c4\" => \"\\u0041\"\n#   \"æ\" => \"ae\"\n#   \"\\n\" => \" \"\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/protwords.txt",
    "content": "#-----------------------------------------------------------------------\n# This file blocks words from being operated on by the stemmer and word delimiter.\n&amp;\n&lt;\n&gt;\n&#039;\n&quot;\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n For more information on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n-->\n\n<schema name=\"drupal-4.4-solr-7.x\" version=\"1.5\">\n    <!-- attribute \"name\" is the name of this schema and is only used for\n         display purposes. Applications should change this to reflect the nature\n         of the search collection.\n         version=\"1.5\" is Solr's version number for the schema syntax and\n         semantics. It should not normally be changed by applications.\n\n         1.0: multiValued attribute did not exist, all fields are multiValued by\n              nature\n         1.1: multiValued attribute introduced, false by default\n         1.2: omitTermFreqAndPositions attribute introduced, true by default\n              except for text fields.\n         1.3: removed optional field compress feature\n         1.4: autoGeneratePhraseQueries attribute introduced to drive\n              QueryParser behavior when a single string produces multiple\n              tokens. Defaults to off for version >= 1.4\n         1.5: omitNorms defaults to true for primitive field types\n              (int, float, boolean, string...)\n       -->\n\n  <types>\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in the\n       org.apache.solr.analysis package.\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       - StrField and TextField support an optional compressThreshold which\n       limits compression (if enabled in the derived fields) to values which\n       exceed a certain size (in characters).\n    -->\n    <fieldType name=\"string\" class=\"solr.StrField\" sortMissingLast=\"true\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\" sortMissingLast=\"true\"/>\n    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->\n    <fieldtype name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The optional sortMissingLast and sortMissingFirst attributes are\n         currently supported on types that are sorted internally as strings.\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!-- numeric field types that can be sorted, but are not optimized for range queries -->\n    <fieldType name=\"integer\" class=\"solr.TrieIntField\" precisionStep=\"0\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"float\" class=\"solr.TrieFloatField\" 
precisionStep=\"0\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"long\" class=\"solr.TrieLongField\" precisionStep=\"0\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"double\" class=\"solr.TrieDoubleField\" precisionStep=\"0\" positionIncrementGap=\"0\"/>\n\n    <!--\n      Note:\n      These should only be used for compatibility with existing indexes (created with older Solr versions)\n      or if \"sortMissingFirst\" or \"sortMissingLast\" functionality is needed. Use Trie based fields instead.\n\n      Numeric field types that manipulate the value into\n      a string value that isn't human-readable in its internal form,\n      but with a lexicographic ordering the same as the numeric ordering,\n      so that range queries work correctly.\n    -->\n    <fieldType name=\"sint\" class=\"solr.TrieIntField\" sortMissingLast=\"true\"/>\n    <fieldType name=\"sfloat\" class=\"solr.TrieFloatField\" sortMissingLast=\"true\"/>\n    <fieldType name=\"slong\" class=\"solr.TrieLongField\" sortMissingLast=\"true\"/>\n    <fieldType name=\"sdouble\" class=\"solr.TrieDoubleField\" sortMissingLast=\"true\"/>\n\n    <!--\n     Numeric field types that index each value at various levels of precision\n     to accelerate range queries when the number of values between the range\n     endpoints is large. 
See the javadoc for NumericRangeQuery for internal\n     implementation details.\n\n     Smaller precisionStep values (specified in bits) will lead to more tokens\n     indexed per value, slightly larger index size, and faster range queries.\n     A precisionStep of 0 disables indexing at different precision levels.\n    -->\n    <fieldType name=\"tint\" class=\"solr.TrieIntField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tfloat\" class=\"solr.TrieFloatField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tlong\" class=\"solr.TrieLongField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tdouble\" class=\"solr.TrieDoubleField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 
6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the TrieDateField javadocs for more information.\n\n         Note: For faster range queries, consider the tdate type\n      -->\n    <fieldType name=\"date\" class=\"solr.TrieDateField\" precisionStep=\"0\" positionIncrementGap=\"0\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- A Trie based date field for faster date range queries and date faceting. -->\n    <fieldType name=\"tdate\" class=\"solr.TrieDateField\" precisionStep=\"6\" positionIncrementGap=\"0\"/>\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- A text field that uses WordDelimiterFilter to enable splitting and matching of\n        words on case-change, alpha numeric boundaries, and 
non-alphanumeric chars,\n        so that a query of \"wifi\" or \"wi fi\" could match a document containing \"Wi-Fi\".\n        Synonyms and stopwords are customized by external files, and stemming is enabled.\n        Duplicate tokens at the same position (which may result from Stemmed Synonyms or\n        WordDelim parts) are removed.\n        -->\n    <fieldType name=\"text\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <!-- in this example, we will only use synonyms at query time\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"index_synonyms.txt\" ignoreCase=\"true\" expand=\"false\"/>\n        -->\n        <!-- Case insensitive stop word removal. -->\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter 
class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"1\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter 
class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- An unstemmed text field - good if one does not know the language of the field -->\n    <fieldType name=\"text_und\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\" />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <tokenizer 
class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- Edge N gram type - for example for matching against queries with results\n        KeywordTokenizer leaves input string intact as a single term.\n        see: http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/\n      -->\n    <fieldType name=\"edge_n2_kw_text\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.EdgeNGramFilterFactory\" minGramSize=\"2\" maxGramSize=\"25\" />\n      </analyzer>\n      <analyzer type=\"query\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n    <!--  Setup simple analysis for spell checking -->\n\n    <fieldType name=\"textSpell\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\" />\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\"/>\n   
     <filter class=\"solr.LengthFilterFactory\" min=\"4\" max=\"20\" />\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\" />\n      </analyzer>\n    </fieldType>\n\n    <!-- This is an example of using the KeywordTokenizer along\n         with various TokenFilterFactories to produce a sortable field\n         that does not include some properties of the source text\n      -->\n    <fieldType name=\"sortString\" class=\"solr.TextField\" sortMissingLast=\"true\" omitNorms=\"true\">\n      <analyzer>\n        <!-- KeywordTokenizer does no actual tokenizing, so the entire\n            input string is preserved as a single token\n          -->\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <!-- The LowerCase TokenFilter does what you expect, which can be\n            helpful when you want your sorting to be case insensitive\n          -->\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <!-- The TrimFilter removes any leading or trailing whitespace -->\n        <filter class=\"solr.TrimFilterFactory\" />\n        <!-- The PatternReplaceFilter gives you the flexibility to use\n            Java Regular expressions to replace any sequence of characters\n            matching a pattern with an arbitrary replacement string,\n            which may include back references to portions of the original\n            string matched by the pattern.\n\n            See the Java Regular Expression documentation for more\n            information on pattern and replacement string syntax.\n\n            http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html\n\n        <filter class=\"solr.PatternReplaceFilterFactory\"\n               pattern=\"(^\\p{Punct}+)\" replacement=\"\" replace=\"all\"\n        />\n        -->\n      </analyzer>\n    </fieldType>\n\n    <!-- The \"RandomSortField\" is not used to store or search any\n         data.  
You can declare fields of this type in your schema\n         to generate pseudo-random orderings of your docs for sorting\n         or function purposes.  The ordering is generated based on the field\n         name and the version of the index. As long as the index version\n         remains unchanged, and the same field name is reused,\n         the ordering of the docs will be consistent.\n         If you want different pseudo-random orderings of documents,\n         for the same version of the index, use a dynamicField and\n         change the field name in the request.\n      -->\n    <fieldType name=\"rand\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- Fulltext type for matching words based on how they sound – i.e.,\n         \"phonetic matching\".\n      -->\n    <fieldType name=\"phonetic\" class=\"solr.TextField\" >\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\"/>\n        <filter class=\"solr.DoubleMetaphoneFilterFactory\" inject=\"false\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- since fields of this type are by default not stored or indexed,\n         any data added to them will be ignored outright.  -->\n    <fieldType name=\"ignored\" stored=\"false\" indexed=\"false\" multiValued=\"true\" class=\"solr.StrField\" />\n\n    <!-- This point type indexes the coordinates as separate fields (subFields)\n      If subFieldType is defined, it references a type, and a dynamic field\n      definition is created matching *___<typename>.  
Alternately, if\n      subFieldSuffix is defined, that is used to create the subFields.\n      Example: if subFieldType=\"double\", then the coordinates would be\n        indexed in fields myloc_0___double,myloc_1___double.\n      Example: if subFieldSuffix=\"_d\" then the coordinates would be indexed\n        in fields myloc_0_d,myloc_1_d\n      The subFields are an implementation detail of the fieldType, and end\n      users normally should not need to know about them.\n     -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"tdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->\n    <fieldType name=\"location\" class=\"solr.LatLonType\" subFieldType=\"tdouble\"/>\n\n    <!-- A Geohash is a compact representation of a latitude longitude pair in a single field.\n         See http://wiki.apache.org/solr/SpatialSearch\n     -->\n    <fieldtype name=\"geohash\" class=\"solr.GeoHashField\"/>\n\n    <!-- Improved location type which supports advanced functionality like\n         filtering by polygons or other shapes, indexing shapes, multi-valued\n         fields, etc.\n      -->\n    <fieldType name=\"location_rpt\" class=\"solr.SpatialRecursivePrefixTreeFieldType\"\n        geo=\"true\" distErrPct=\"0.025\" maxDistErr=\"0.001\" distanceUnits=\"kilometers\" />\n\n    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has\n     special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for\n     relevancy. 
-->\n    <fieldType name=\"bbox\" class=\"solr.BBoxField\"\n               geo=\"true\" distanceUnits=\"kilometers\" numberType=\"_bbox_coord\" />\n    <fieldType name=\"_bbox_coord\" class=\"solr.TrieDoubleField\" precisionStep=\"8\" docValues=\"true\" stored=\"false\"/>\n\n  </types>\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  <xi:include href=\"schema_extra_types.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n   <!-- Valid attributes for fields:\n     name: mandatory - the name for the field\n     type: mandatory - the name of a field type from the <types> fieldType\n       section\n     indexed: true if this field should be indexed (searchable or sortable)\n     stored: true if this field should be retrievable\n     docValues: true if this field should have doc values. Doc values are\n       useful for faceting, grouping, sorting and function queries. Although not\n       required, doc values will make the index faster to load, more\n       NRT-friendly and more memory-efficient. They however come with some\n       limitations: they are currently only supported by StrField, UUIDField\n       and all Trie*Fields, and depending on the field type, they might\n       require the field to be single-valued, be required or have a default\n       value (check the documentation of the field type you're interested in\n       for more information)\n     multiValued: true if this field may contain multiple values per document\n     omitNorms: (expert) set to true to omit the norms associated with\n       this field (this disables length normalization and index-time\n       boosting for the field, and saves some memory).  
Only full-text\n       fields or fields that need an index-time boost need norms.\n       Norms are omitted for primitive (non-analyzed) types by default.\n     termVectors: [false] set to true to store the term vector for a\n       given field.\n       When using MoreLikeThis, fields used for similarity should be\n       stored for best performance.\n     termPositions: Store position information with the term vector.\n       This will increase storage costs.\n     termOffsets: Store offset information with the term vector. This\n       will increase storage costs.\n     required: The field is required.  It will throw an error if the\n       value does not exist\n     default: a value that should be used if no value is specified\n       when adding a document.\n   -->\n  <fields>\n\n    <!-- The document id is usually derived from a site-specific key (hash) and the\n      entity type and ID like:\n      Search Api :\n        The format used is $document->id = $index_id . '-' . $item_id\n      Apache Solr Search Integration\n        The format used is $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n    -->\n    <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" />\n\n    <!-- Add Solr Cloud version field as mentioned in\n         http://wiki.apache.org/solr/SolrCloud#Required_Config\n    -->\n    <field name=\"_version_\" type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n\n    <!-- Search Api specific fields -->\n    <!-- item_id contains the entity ID, e.g. a node's nid. -->\n    <field name=\"item_id\"  type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- index_id is the machine name of the search index this entry belongs to. -->\n    <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"true\" />\n  <!-- copyField commands copy one field to another at the time a document\n        is added to the index.  
It's used either to index the same field differently,\n        or to add multiple fields to the same field for easier/faster searching.  -->\n    <!-- Since sorting by ID is explicitly allowed, store item_id also in a sortable way. -->\n    <copyField source=\"item_id\" dest=\"sort_search_api_id\" />\n\n    <!-- Apache Solr Search Integration specific fields -->\n    <!-- entity_id is the numeric object ID, e.g. Node ID, File ID -->\n    <field name=\"entity_id\"  type=\"long\" indexed=\"true\" stored=\"true\" />\n    <!-- entity_type is 'node', 'file', 'user', or some other Drupal object type -->\n    <field name=\"entity_type\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- bundle is a node type, or as appropriate for other entity types -->\n    <field name=\"bundle\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"bundle_name\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"url\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <!-- label is the default field for a human-readable string for this entity (e.g. 
the title of a node) -->\n    <field name=\"label\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- The string version of the title is used for sorting -->\n    <copyField source=\"label\" dest=\"sort_label\"/>\n\n    <!-- content is the default field for full text search - dump crap here -->\n    <field name=\"content\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n    <field name=\"teaser\" type=\"text\" indexed=\"false\" stored=\"true\"/>\n    <field name=\"path\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"path_alias\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n\n    <!-- These are the fields that correspond to a Drupal node. The beauty of having\n      Lucene store title, body, type, etc., is that we retrieve them with the search\n      result set and don't need to go to the database with a node_load. -->\n    <field name=\"tid\"  type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n    <field name=\"taxonomy_names\" type=\"text\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n    <!-- Copy terms to a single field that contains all taxonomy term names -->\n    <copyField source=\"tm_vid_*\" dest=\"taxonomy_names\"/>\n\n    <!-- Here, default is used to create a \"timestamp\" field indicating\n         when each document was indexed.-->\n    <field name=\"timestamp\" type=\"tdate\" indexed=\"true\" stored=\"true\" default=\"NOW\" multiValued=\"false\"/>\n\n    <!-- This field is used to build the spellchecker index -->\n    <field name=\"spell\" type=\"textSpell\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- copyField commands copy one field to another at the time a document\n         is added to the index.  
It's used either to index the same field differently,\n         or to add multiple fields to the same field for easier/faster searching.  -->\n    <copyField source=\"label\" dest=\"spell\"/>\n    <copyField source=\"content\" dest=\"spell\"/>\n\n    <copyField source=\"ts_*\" dest=\"spell\"/>\n    <copyField source=\"tm_*\" dest=\"spell\"/>\n\n    <!-- Dynamic field definitions.  If a field name is not found, dynamicFields\n         will be used if the name matches any of the patterns.\n         RESTRICTION: the glob-like pattern in the name attribute must have\n         a \"*\" only at the start or the end.\n         EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n         Longer patterns will be matched first.  If equal size patterns\n         both match, the first appearing in the schema will be used.  -->\n\n    <!-- A set of fields to contain text extracted from HTML tag contents which we\n         can boost at query time. -->\n    <dynamicField name=\"tags_*\" type=\"text\"   indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n\n    <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n         the last letter is 's' for single valued, 'm' for multi-valued -->\n\n    <!-- We use long for integer since 64 bit ints are now common in PHP. 
-->\n    <dynamicField name=\"is_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"im_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of floats can be saved in a regular float field -->\n    <dynamicField name=\"fs_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fm_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of doubles can be saved in a regular double field -->\n    <dynamicField name=\"ps_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pm_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of booleans can be saved in a regular boolean field -->\n    <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <!-- Regular text (without processing) can be stored in a string field-->\n    <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Normal text fields are for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"ts_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    <dynamicField name=\"tm_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- Unstemmed text fields for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"tus_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    
<dynamicField name=\"tum_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- These text fields omit norms - useful for extracted text like taxonomy_names -->\n    <dynamicField name=\"tos_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n    <dynamicField name=\"tom_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- Special-purpose text fields -->\n    <dynamicField name=\"tes_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"false\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tem_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"true\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- trie dates are preferred, so give them the 2 letter prefix -->\n    <dynamicField name=\"ds_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"dm_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"its_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"itm_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"fts_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ftm_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pts_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ptm_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" 
multiValued=\"true\"/>\n    <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n         a small image in a search result using the data URI scheme -->\n    <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a date rather than tdate is needed for sortMissingLast -->\n    <dynamicField name=\"dds_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ddm_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Sortable fields, good for sortMissingLast support &\n         We use long for integer since 64 bit ints are now common in PHP. -->\n    <dynamicField name=\"iss_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ism_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a sfloat rather than tfloat is needed for sortMissingLast -->\n    <dynamicField name=\"fss_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fsm_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pss_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"psm_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 
32 bit on 64 arch -->\n    <dynamicField name=\"hs_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hm_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hss_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hsm_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hts_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"htm_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n    <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Fields for location searches.\n         http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n    <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"geos_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"geom_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"bboxs_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n    <dynamicField name=\"bboxm_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n    
<dynamicField name=\"rpts_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n    <dynamicField name=\"rptm_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n    <!-- Special fields for Solr 5 functionality. -->\n    <dynamicField name=\"phons_*\" type=\"phonetic\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n    <dynamicField name=\"phonm_*\" type=\"phonetic\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n    <!-- External file fields -->\n    <dynamicField name=\"eff_*\" type=\"file\"/>\n\n    <!-- Sortable version of the dynamic string field -->\n    <dynamicField name=\"sort_*\" type=\"sortString\" indexed=\"true\" stored=\"false\"/>\n    <copyField source=\"ss_*\" dest=\"sort_*\"/>\n\n    <!-- A random sort field -->\n    <dynamicField name=\"random_*\" type=\"rand\" indexed=\"true\" stored=\"true\"/>\n\n    <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n    <dynamicField name=\"access_*\" type=\"integer\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n    <!-- The following causes solr to ignore any fields that don't already match an existing\n         field name or dynamic field, rather than reporting them as an error.\n         Alternately, change the type=\"ignored\" to some other type e.g. 
\"text\" if you want\n         unknown fields indexed and/or stored by default -->\n    <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n  </fields>\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  <xi:include href=\"schema_extra_fields.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- Similarity is the scoring routine for each document vs. a query.\n       A custom Similarity or SimilarityFactory may be specified here, but\n       the default is fine for most applications.\n       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity\n    -->\n  <!--\n     <similarity class=\"com.example.solr.CustomSimilarityFactory\">\n       <str name=\"paramkey\">param value</str>\n     </similarity>\n    -->\n\n  <!-- DEPRECATED: The defaultSearchField is consulted by various query parsers\n    when parsing a query string that isn't explicit about the field.  Machine\n    (non-user) generated queries are best made explicit, or they can use the\n    \"df\" request parameter which takes precedence over this.\n    Note: Un-commenting defaultSearchField will be insufficient if your request\n    handler in solrconfig.xml defines \"df\", which takes precedence. That would\n    need to be removed.\n  <defaultSearchField>content</defaultSearchField> -->\n\n  <!-- DEPRECATED: The defaultOperator (AND|OR) is consulted by various query\n    parsers when parsing a query string to determine if a clause of the query\n    should be marked as required or optional, assuming the clause isn't already\n    marked by some operator. The default is OR, which is generally assumed so it\n    is not a good idea to change it globally here. 
The \"q.op\" request parameter\n    takes precedence over this.\n  <solrQueryParser defaultOperator=\"OR\"/> -->\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/schema_extra_fields.xml",
    "content": "<fields>\n<!--\n  Example: Adding German dynamic field types to our Solr Schema.\n  If you enable this, make sure you have a folder called lang containing\n  stopwords_de.txt and synonyms_de.txt.\n  This also requires to enable the content in schema_extra_types.xml.\n-->\n<!--\n   <field name=\"label_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"content_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n   <field name=\"teaser_de\" type=\"text_de\" indexed=\"false\" stored=\"true\"/>\n   <field name=\"path_alias_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"taxonomy_names_de\" type=\"text_de\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n   <field name=\"spell_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n   <copyField source=\"label_de\" dest=\"spell_de\"/>\n   <copyField source=\"content_de\" dest=\"spell_de\"/>\n   <dynamicField name=\"tags_de_*\" type=\"text_de\" indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n   <dynamicField name=\"ts_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n   <dynamicField name=\"tm_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n   <dynamicField name=\"tos_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n   <dynamicField name=\"tom_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n-->\n</fields>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/schema_extra_types.xml",
    "content": "<types>\n<!--\n  Example: Adding German language field types to our Solr Schema.\n  If you enable this, make sure you have a folder called lang containing\n  stopwords_de.txt and synonyms_de.txt.\n\n  For examples from other languages, see\n  ./server/solr/configsets/sample_techproducts_configs/conf/schema.xml\n  from your Solr installation.\n-->\n<!--\n    <fieldType name=\"text_de\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"1\" catenateNumbers=\"1\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"lang/synonyms_de.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"0\" catenateNumbers=\"0\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter 
class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n-->\n</types>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/solrconfig.xml",
"content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n\n<!--\n     For more details about configuration options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-4.4-solr-7.x\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered a severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  
Generally, you want to use the latest version to\n       get all bug fixes and improvements. It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_60}</luceneMatchVersion>\n\n  <!-- <lib/> directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       Please note that <lib/> directives are processed in the order\n       that they appear in your solrconfig.xml file, and are \"stacked\"\n       on top of each other when building a ClassLoader - so if you have\n       plugin jars with dependencies on other jars, the \"lower level\"\n       dependency jars should be loaded first.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n\n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A \"dir\" option by itself adds any files found in the directory to the\n       classpath, this is useful for including all jars in a directory.\n    -->\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/extraction/lib\" />\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/clustering/lib/\" />\n\n  <!-- The velocity library has been known to crash Solr in some\n       instances when deployed as a war file to Tomcat. 
Therefore all\n       references have been removed from the default configuration.\n       @see http://drupal.org/node/1612556\n  -->\n  <!-- <lib dir=\"../../contrib/velocity/lib\" /> -->\n\n  <!-- When a regex is specified in addition to a directory, only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n    -->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-cell-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-clustering-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-dataimporthandler-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-langid-\\d.*\\.jar\" />-->\n  <!-- <lib dir=\"../../dist/\" regex=\"apache-solr-velocity-\\d.*\\.jar\" /> -->\n\n  <!-- If a dir option (with or without a regex) is used and nothing\n       is found that matches, it will be ignored\n    -->\n  <!--<lib dir=\"../../contrib/clustering/lib/\" />-->\n  <!--<lib dir=\"/total/crap/dir/ignored\" />-->\n\n  <!-- an exact path can be used to specify a specific file.  This\n       will cause a serious error to be logged if it can't be loaded.\n    -->\n  <!--\n  <lib path=\"../a-jar-that-does-not-exist.jar\" />\n  -->\n\n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <dataDir>${solr.data.dir:}</dataDir>\n\n  <!-- The DirectoryFactory to use for indexes.\n\n       solr.StandardDirectoryFactory is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  
solr.NRTCachingDirectoryFactory, the default,\n       wraps solr.StandardDirectoryFactory and caches small files in memory\n       for better NRT performance.\n\n       One can force a particular implementation via solr.MMapDirectoryFactory,\n       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based, not\n       persistent, and doesn't work with replication.\n    -->\n  <directoryFactory name=\"DirectoryFactory\"\n                    class=\"${solr.directoryFactory:solr.NRTCachingDirectoryFactory}\"/>\n\n  <!-- The CodecFactory for defining the format of the inverted index.\n       The default implementation is SchemaCodecFactory, which is the official\n       Lucene index format, but hooks into the schema to provide per-field\n       customization of the postings lists and per-document values in the\n       fieldType element (postingsFormat/docValuesFormat). Note that most of the\n       alternative implementations are experimental, so if you choose to\n       customize the index format, it's a good idea to convert back to the\n       official format e.g. via IndexWriter.addIndexes(IndexReader) before\n       upgrading to a newer version to avoid unnecessary reindexing.\n  -->\n  <codecFactory class=\"solr.SchemaCodecFactory\"/>\n\n  <!-- To enable dynamic schema REST APIs, use the following for <schemaFactory>:\n\n       <schemaFactory class=\"ManagedIndexSchemaFactory\">\n         <bool name=\"mutable\">true</bool>\n         <str name=\"managedSchemaResourceName\">managed-schema</str>\n       </schemaFactory>\n\n       When ManagedIndexSchemaFactory is specified, Solr will load the schema from\n       the resource named in 'managedSchemaResourceName', rather than from schema.xml.\n       Note that the managed schema resource CANNOT be named schema.xml.  
If the managed\n       schema does not exist, Solr will create it after reading schema.xml, then rename\n       'schema.xml' to 'schema.xml.bak'.\n\n       Do NOT hand edit the managed schema - external modifications will be ignored and\n       overwritten as a result of schema modification REST API calls.\n\n       When ManagedIndexSchemaFactory is specified with mutable = true, schema\n       modification REST API calls will be allowed; otherwise, error responses will be\n       sent back for these requests.\n  -->\n  <schemaFactory class=\"ClassicIndexSchemaFactory\"/>\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Index Config - These settings control low-level behavior of indexing\n       Most example settings here show the default value, but are commented\n       out, to more easily see where customizations have been made.\n\n       Note: This replaces <indexDefaults> and <mainIndex> from older versions\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <indexConfig>\n    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a\n         LimitTokenCountFilterFactory in your fieldType definition. E.g.\n     <filter class=\"solr.LimitTokenCountFilterFactory\" maxTokenCount=\"10000\"/>\n    -->\n    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->\n    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->\n\n    <!-- The maximum number of simultaneous threads that may be\n         indexing documents at once in IndexWriter; if more than this\n         many threads arrive they will wait for others to finish.\n         Default in Solr/Lucene is 8. -->\n    <!-- <maxIndexingThreads>8</maxIndexingThreads>  -->\n\n    <!-- Expert: Enabling compound file will use fewer files for the index,\n         using fewer file descriptors at the expense of decreased performance.\n         Default in Lucene is \"true\". 
Default in Solr is \"false\" (since 3.6) -->\n    <!-- <useCompoundFile>false</useCompoundFile> -->\n\n    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene\n         indexing for buffering added documents and deletions before they are\n         flushed to the Directory.\n         maxBufferedDocs sets a limit on the number of documents buffered\n         before flushing.\n         If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.\n         The default is 100 MB.  -->\n    <ramBufferSizeMB>32</ramBufferSizeMB>\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <!-- Expert: Merge Policy\n\n         The Merge Policy in Lucene controls how merging is handled by\n         Lucene.  The default in Solr 3.3 is TieredMergePolicy.\n\n         The default in 2.3 was the LogByteSizeMergePolicy,\n         previous versions used LogDocMergePolicy.\n\n         LogByteSizeMergePolicy chooses segments to merge based on\n         their size.  The Lucene 2.2 default, LogDocMergePolicy, chose\n         when to merge based on the number of documents.\n\n         Other implementations of MergePolicy must have a no-argument\n         constructor.\n      -->\n\n    <mergePolicyFactory class=\"org.apache.solr.index.LogByteSizeMergePolicyFactory\">\n      <!-- Merge Factor\n           The merge factor controls how many segments will get merged at a time.\n           For TieredMergePolicy, mergeFactor is a convenience parameter which\n           will set both MaxMergeAtOnce and SegmentsPerTier at once.\n           For LogByteSizeMergePolicy, mergeFactor decides how many new segments\n           will be allowed before they are merged into one.\n        -->\n      <int name=\"mergeFactor\">4</int>\n    </mergePolicyFactory>\n\n    <!-- Expert: Merge Scheduler\n\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  
The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!--\n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\n    <!-- LockFactory\n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n\n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         Defaults: 'native' is default for Solr3.6 and later, otherwise\n                   'simple' is the default\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>${solr.lock.type:native}</lockType>\n\n    <!-- Expert: Controls how often Lucene loads terms into memory\n         Default is 128 and is likely good for most everyone.\n      -->\n    <!-- <termIndexInterval>256</termIndexInterval> -->\n\n    <!-- If true, IndexReaders will be reopened (often more efficient)\n         instead of closed and then opened.\n      -->\n    <reopenReaders>true</reopenReaders>\n\n    <!-- Commit Deletion Policy\n\n         Custom deletion policies can be specified here. 
The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         http://lucene.apache.org/java/2_9_1/api/all/org/apache/lucene/index/IndexDeletionPolicy.html\n\n         The default Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n\n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n      <!-- The number of commit points to be kept -->\n      <str name=\"maxCommitsToKeep\">1</str>\n      <!-- The number of optimized commit points to be kept -->\n      <str name=\"maxOptimizedCommitsToKeep\">0</str>\n      <!--\n          Delete all commit points once they have reached the given age.\n          Supports DateMathParser syntax e.g.\n        -->\n      <!--\n         <str name=\"maxCommitAge\">30MINUTES</str>\n         <str name=\"maxCommitAge\">1DAY</str>\n      -->\n    </deletionPolicy>\n\n    <!-- Lucene Infostream\n\n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its info stream to solr's log. By default,\n         this is enabled here, and controlled through log4j.properties.\n      -->\n     <infoStream>true</infoStream>\n\n  </indexConfig>\n\n  <!-- JMX\n\n       This example enables JMX if and only if an existing MBeanServer\n       is found, use this if you want to configure JMX through JVM\n       parameters. 
Remove this to disable exposing Solr configuration\n       and statistics to JMX.\n\n       For more details see http://wiki.apache.org/solr/SolrJmx\n    -->\n  <!-- <jmx /> -->\n  <!-- If you want to connect to a particular server, specify the\n       agentId\n    -->\n  <!-- <jmx agentId=\"myAgent\" /> -->\n  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->\n  <!-- <jmx serviceUrl=\"service:jmx:rmi:///jndi/rmi://localhost:9999/solr\"/>\n    -->\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- AutoCommit\n\n         Perform a <commit/> automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents.\n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:10000}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:120000}</maxTime>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n    -->\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:2000}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:10000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n\n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns.\n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n    <!-- Enables a transaction log, currently used for real-time get.\n         \"dir\" - the target directory for transaction logs, defaults to the\n         solr data directory.  
-->\n    <updateLog>\n      <str name=\"dir\">${solr.data.dir:}</str>\n      <!-- if you want to take control of the synchronization you may specify\n           the syncLevel as one of the following where ''flush'' is the default.\n           Fsync will reduce throughput.\n           <str name=\"syncLevel\">flush|fsync|none</str>\n      -->\n    </updateLog>\n  </updateHandler>\n\n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Query section - these settings control query time things like caches\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <query>\n    <!-- Max Boolean Clauses\n\n         Maximum number of clauses in each BooleanQuery,  an exception\n         is thrown if exceeded.\n\n         ** WARNING **\n\n         This option actually modifies a global Lucene property that\n         will affect all SolrCores.  
If multiple solrconfig.xml files\n         disagree on this property, the value at any given moment will\n         be based on the last SolrCore to be initialized.\n\n      -->\n    <maxBooleanClauses>1024</maxBooleanClauses>\n\n    <!-- Slow Query Threshold (in millis)\n\n         At high request rates, logging all requests can become a bottleneck\n         and therefore INFO logging is often turned off. However, it is still\n         useful to be able to set a latency threshold above which a request\n         is considered \"slow\" and log that request at WARN level so we can\n         easily identify slow queries.\n    -->\n    <slowQueryThresholdMillis>-1</slowQueryThresholdMillis>\n\n    <!-- Solr Internal Query Caches\n\n         There are two implementations of cache available for Solr,\n         LRUCache, based on a synchronized LinkedHashMap, and\n         FastLRUCache, based on a ConcurrentHashMap.\n\n         FastLRUCache has faster gets and slower puts in single\n         threaded operation and thus is generally faster than LRUCache\n         when the hit ratio of the cache is high (> 75%), and may be\n         faster under other scenarios on multi-cpu systems.\n    -->\n\n    <!-- Filter Cache\n\n         Cache used by SolrIndexSearcher for filters (DocSets),\n         unordered sets of *all* documents that match a query.  When a\n         new searcher is opened, its caches may be prepopulated or\n         \"autowarmed\" using data from caches in the old searcher.\n         autowarmCount is the number of items to prepopulate.  For\n         LRUCache, the autowarmed items will be the most recently\n         accessed items.\n\n         Parameters:\n           class - the SolrCache implementation\n               (LRUCache or FastLRUCache)\n           size - the maximum number of entries in the cache\n           initialSize - the initial capacity (number of entries) of\n               the cache.  
(see java.util.HashMap)\n           autowarmCount - the number of entries to prepopulate from\n               an old cache.\n      -->\n    <filterCache class=\"solr.FastLRUCache\"\n                 size=\"512\"\n                 initialSize=\"512\"\n                 autowarmCount=\"0\"/>\n\n    <!-- Query Result Cache\n\n         Caches results of searches - ordered lists of document ids\n         (DocList) based on a query, a sort, and the range of documents requested.\n      -->\n    <queryResultCache class=\"solr.LRUCache\"\n                     size=\"512\"\n                     initialSize=\"512\"\n                     autowarmCount=\"32\"/>\n\n    <!-- Document Cache\n\n         Caches Lucene Document objects (the stored fields for each\n         document).  Since Lucene internal document ids are transient,\n         this cache will not be autowarmed.\n      -->\n    <documentCache class=\"solr.LRUCache\"\n                   size=\"512\"\n                   initialSize=\"512\"\n                   autowarmCount=\"0\"/>\n\n    <!-- Field Value Cache\n\n         Cache used to hold field values that are quickly accessible\n         by document id.  The fieldValueCache is created by default\n         even if not configured here.\n      -->\n    <!--\n       <fieldValueCache class=\"solr.FastLRUCache\"\n                        size=\"512\"\n                        autowarmCount=\"128\"\n                        showItems=\"32\" />\n      -->\n\n    <!-- Custom Cache\n\n         Example of a generic cache.  These caches may be accessed by\n         name through SolrIndexSearcher.getCache(), cacheLookup(), and\n         cacheInsert().  The purpose is to enable easy caching of\n         user/application level data.  
The regenerator argument should\n         be specified as an implementation of solr.CacheRegenerator\n         if autowarming is desired.\n      -->\n    <!--\n       <cache name=\"myUserCache\"\n              class=\"solr.LRUCache\"\n              size=\"4096\"\n              initialSize=\"1024\"\n              autowarmCount=\"1024\"\n              regenerator=\"com.mycompany.MyRegenerator\"\n              />\n      -->\n\n    <!-- Lazy Field Loading\n\n         If true, stored fields that are not requested will be loaded\n         lazily.  This can result in a significant speed improvement\n         if the usual case is to not load all stored fields,\n         especially if the skipped fields are large compressed text\n         fields.\n    -->\n    <enableLazyFieldLoading>true</enableLazyFieldLoading>\n\n   <!-- Use Filter For Sorted Query\n\n        A possible optimization that attempts to use a filter to\n        satisfy a search.  If the requested sort does not include\n        score, then the filterCache will be checked for a filter\n        matching the query. If found, the filter will be used as the\n        source of document ids, and then the sort will be applied to\n        that.\n\n        For most situations, this will not be useful unless you\n        frequently get the same search repeatedly with different sort\n        options, and none of them ever use \"score\"\n     -->\n   <!--\n      <useFilterForSortedQuery>true</useFilterForSortedQuery>\n     -->\n\n   <!-- Result Window Size\n\n        An optimization for use with the queryResultCache.  When a search\n        is requested, a superset of the requested number of document ids\n        are collected.  For example, if a search for a particular query\n        requests matching documents 10 through 19, and queryResultWindowSize is 50,\n        then documents 0 through 49 will be collected and cached.  
Any further\n        requests in that range can be satisfied via the cache.\n     -->\n   <queryResultWindowSize>20</queryResultWindowSize>\n\n   <!-- Maximum number of documents to cache for any entry in the\n        queryResultCache.\n     -->\n   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>\n\n   <!-- Query Related Event Listeners\n\n        Various IndexSearcher related events can trigger Listeners to\n        take actions.\n\n        newSearcher - fired whenever a new searcher is being prepared\n        and there is a current searcher handling requests (aka\n        registered).  It can be used to prime certain caches to\n        prevent long request times for certain requests.\n\n        firstSearcher - fired whenever a new searcher is being\n        prepared but there is no current registered searcher to handle\n        requests or to gain autowarming data from.\n\n     -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence.\n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">solr rocks</str><str name=\"start\">0</str><str name=\"rows\">10</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  
If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n    <!-- Max Warming Searchers\n\n         Maximum number of searchers that may be warming in the\n         background concurrently.  An error is returned if this limit\n         is exceeded.\n\n         Recommend values of 1-2 for read-only slaves, higher for\n         masters w/o cache warming.\n      -->\n    <maxWarmingSearchers>2</maxWarmingSearchers>\n\n  </query>\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n\n       handleSelect affects the behavior of requests such as /select?qt=XXX\n\n       handleSelect=\"true\" will cause the SolrDispatchFilter to process\n       the request and will result in consistent error handling and\n       formatting for all types of requests.\n\n       handleSelect=\"false\" will cause the SolrDispatchFilter to\n       ignore \"/select\" requests and fall back to using the legacy\n       SolrServlet and its Solr 1.1 style error formatting\n    -->\n  <requestDispatcher handleSelect=\"true\" >\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size (in KiB) of\n         Multipart File Uploads that Solr will allow in a Request.\n\n         formdataUploadLimitInKB - specifies the max size (in KiB) of\n         form data (application/x-www-form-urlencoded) sent via\n         POST. 
You can use POST to pass request parameters not\n         fitting into the URL.\n\n         addHttpRequestToContext - if set to true, it will instruct\n         the requestParsers to include the original HttpServletRequest\n         object in the context map of the SolrQueryRequest under the\n         key \"httpRequest\". It will not be used by any of the existing\n         Solr components, but may be useful when developing custom\n         plugins.\n\n         *** WARNING ***\n         The settings below authorize Solr to fetch remote files. You\n         should make sure your system has some authentication in place\n         before using enableRemoteStreaming=\"true\"\n\n      -->\n    <requestParsers enableRemoteStreaming=\"true\"\n                    multipartUploadLimitInKB=\"2048000\"\n                    formdataUploadLimitInKB=\"2048\"\n                    addHttpRequestToContext=\"false\"/>\n\n    <!-- HTTP Caching\n\n         Set HTTP caching related parameters (for proxy caches and clients).\n\n         The options below instruct Solr not to output any HTTP Caching\n         related headers\n      -->\n    <httpCaching never304=\"true\" />\n    <!-- If you include a <cacheControl> directive, it will be used to\n         generate a Cache-Control header (as well as an Expires header\n         if the value contains \"max-age=\")\n\n         By default, no Cache-Control header is generated.\n\n         You can use the <cacheControl> option even if you have set\n         never304=\"true\"\n      -->\n    <!--\n       <httpCaching never304=\"true\" >\n         <cacheControl>max-age=30, public</cacheControl>\n       </httpCaching>\n      -->\n    <!-- To enable Solr to respond with automatically generated HTTP\n         Caching headers, and to respond to Cache Validation requests\n         correctly, set the value of never304=\"false\"\n\n         This will cause Solr to generate Last-Modified and ETag\n         headers based on the properties of the Index.\n\n         
The following options can also be specified to affect the\n         values of these headers...\n\n         lastModFrom - the default value is \"openTime\" which means the\n         Last-Modified value (and validation against If-Modified-Since\n         requests) will all be relative to when the current Searcher\n         was opened.  You can change it to lastModFrom=\"dirLastMod\" if\n         you want the value to exactly correspond to when the physical\n         index was last modified.\n\n         etagSeed=\"...\" is an option you can change to force the ETag\n         header (and validation against If-None-Match requests) to be\n         different even if the index has not changed (ie: when making\n         significant changes to your config file)\n\n         (lastModFrom and etagSeed are both ignored if you use\n         the never304=\"true\" option)\n      -->\n    <!--\n       <httpCaching lastModFrom=\"openTime\"\n                    etagSeed=\"Solr\">\n         <cacheControl>max-age=30, public</cacheControl>\n       </httpCaching>\n      -->\n  </requestDispatcher>\n\n  <!-- Request Handlers\n\n       http://wiki.apache.org/solr/SolrRequestHandler\n\n       Incoming queries will be dispatched to a specific handler by name\n       based on the path specified in the request.\n\n       Legacy behavior: If the request path uses \"/select\" but no Request\n       Handler has that name, and if handleSelect=\"true\" has been specified in\n       the requestDispatcher, then the Request Handler is dispatched based on\n       the qt parameter.  
Handlers without a leading '/' are accessed\n       like so: http://host/app/[core/]select?qt=name  If no qt is\n       given, then the requestHandler that declares default=\"true\" will be\n       used or the one named \"standard\".\n\n       If a Request Handler is declared with startup=\"lazy\", then it will\n       not be initialized until the first request that uses it.\n\n    -->\n  <!-- SearchHandler\n\n       http://wiki.apache.org/solr/SearchHandler\n\n       For processing Search Queries, the primary Request Handler\n       provided with Solr is \"SearchHandler\". It delegates to a sequence\n       of SearchComponents (see below) and supports distributed\n       queries across multiple shards\n    -->\n  <!--<requestHandler name=\"/select\" class=\"solr.SearchHandler\">-->\n    <!-- default values for query parameters can be specified, these\n         will be overridden by parameters in the request\n      -->\n     <!--<lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>\n       <int name=\"rows\">10</int>\n     </lst>-->\n    <!-- In addition to defaults, \"appends\" params can be specified\n         to identify values which should be appended to the list of\n         multi-val params from the query (or the existing \"defaults\").\n      -->\n    <!-- In this example, the param \"fq=inStock:true\" would be appended to\n         any query time fq params the user may specify, as a mechanism for\n         partitioning the index, independent of any user selected filtering\n         that may also be desired (perhaps as a result of faceted searching).\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"appends\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"appends\">\n         <str name=\"fq\">inStock:true</str>\n       </lst>\n      -->\n    <!-- \"invariants\" are a way of letting the Solr 
maintainer lock down\n         the options available to Solr clients.  Any param values\n         specified here are used regardless of what values may be specified\n         in either the query, the \"defaults\", or the \"appends\" params.\n\n         In this example, the facet.field and facet.query params would\n         be fixed, limiting the facets clients can use.  Faceting is\n         not turned on by default - but if the client does specify\n         facet=true in the request, these are the only facets they\n         will be able to see counts for; regardless of what other\n         facet.field or facet.query params they may specify.\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"invariants\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"invariants\">\n         <str name=\"facet.field\">cat</str>\n         <str name=\"facet.field\">manu_exact</str>\n         <str name=\"facet.query\">price:[* TO 500]</str>\n         <str name=\"facet.query\">price:[500 TO *]</str>\n       </lst>\n      -->\n    <!-- If the default list of SearchComponents is not desired, that\n         list can either be overridden completely, or components can be\n         prepended or appended to the default list.  
(see below)\n      -->\n    <!--\n       <arr name=\"components\">\n         <str>nameOfCustomComponent1</str>\n         <str>nameOfCustomComponent2</str>\n       </arr>\n      -->\n    <!--</requestHandler>-->\n\n  <!-- A request handler that returns indented JSON by default -->\n  <requestHandler name=\"/query\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>\n       <str name=\"wt\">json</str>\n       <str name=\"indent\">true</str>\n       <str name=\"df\">text</str>\n     </lst>\n  </requestHandler>\n\n  <!--\n    The export request handler is used to export full sorted result sets.\n    Do not change these defaults.\n  -->\n\n  <requestHandler name=\"/export\" class=\"solr.SearchHandler\">\n    <lst name=\"invariants\">\n      <str name=\"rq\">{!xport}</str>\n      <str name=\"wt\">xsort</str>\n      <str name=\"distrib\">false</str>\n    </lst>\n\n    <arr name=\"components\">\n      <str>query</str>\n    </arr>\n  </requestHandler>\n\n  <!-- A Robust Example\n\n       This example SearchHandler declaration shows off usage of the\n       SearchHandler with many defaults declared\n\n       Note that multiple instances of the same Request Handler\n       (SearchHandler) can be registered multiple times with different\n       names (and different init parameters)\n    -->\n  <!--\n  <requestHandler name=\"/browse\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>-->\n\n       <!-- VelocityResponseWriter settings -->\n       <!--<str name=\"wt\">velocity</str>\n\n       <str name=\"v.template\">browse</str>\n       <str name=\"v.layout\">layout</str>\n       <str name=\"title\">Solritas</str>\n\n       <str name=\"defType\">edismax</str>\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n          title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0\n       </str>\n       <str 
name=\"mm\">100%</str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n       <str name=\"mlt.qf\">\n         text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"mlt.fl\">text,features,name,sku,id,manu,cat</str>\n       <int name=\"mlt.count\">3</int>\n\n       <str name=\"facet\">on</str>\n       <str name=\"facet.field\">cat</str>\n       <str name=\"facet.field\">manu_exact</str>\n       <str name=\"facet.query\">ipod</str>\n       <str name=\"facet.query\">GB</str>\n       <str name=\"facet.mincount\">1</str>\n       <str name=\"facet.pivot\">cat,inStock</str>\n       <str name=\"facet.range.other\">after</str>\n       <str name=\"facet.range\">price</str>\n       <int name=\"f.price.facet.range.start\">0</int>\n       <int name=\"f.price.facet.range.end\">600</int>\n       <int name=\"f.price.facet.range.gap\">50</int>\n       <str name=\"facet.range\">popularity</str>\n       <int name=\"f.popularity.facet.range.start\">0</int>\n       <int name=\"f.popularity.facet.range.end\">10</int>\n       <int name=\"f.popularity.facet.range.gap\">3</int>\n       <str name=\"facet.range\">manufacturedate_dt</str>\n       <str name=\"f.manufacturedate_dt.facet.range.start\">NOW/YEAR-10YEARS</str>\n       <str name=\"f.manufacturedate_dt.facet.range.end\">NOW</str>\n       <str name=\"f.manufacturedate_dt.facet.range.gap\">+1YEAR</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">before</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">after</str>-->\n\n       <!-- Highlighting defaults -->\n       <!--<str name=\"hl\">on</str>\n       <str name=\"hl.fl\">text features name</str>\n       <str name=\"f.name.hl.fragsize\">0</str>\n       <str name=\"f.name.hl.alternateField\">name</str>\n     </lst>\n     <arr 
name=\"last-components\">\n       <str>spellcheck</str>\n     </arr>-->\n     <!--\n     <str name=\"url-scheme\">httpx</str>\n     -->\n  <!--</requestHandler>-->\n  <!-- Trivia: the pinkPony requestHandler name was agreed upon between the Search API and the\n    apachesolr module maintainers. The decision was taken during the DrupalCon Munich code sprint.\n    -->\n  <requestHandler name=\"pinkPony\" class=\"solr.SearchHandler\" default=\"true\">\n    <lst name=\"defaults\">\n      <str name=\"defType\">edismax</str>\n      <str name=\"df\">content</str>\n      <str name=\"echoParams\">explicit</str>\n      <bool name=\"omitHeader\">true</bool>\n      <float name=\"tie\">0.01</float>\n      <!-- Don't abort searches for the pinkPony request handler (set in solrcore.properties) -->\n      <int name=\"timeAllowed\">${solr.pinkPony.timeAllowed:-1}</int>\n      <str name=\"q.alt\">*:*</str>\n\n      <!-- By default, don't spell check -->\n      <str name=\"spellcheck\">false</str>\n      <!-- Defaults for the spell checker when used -->\n      <str name=\"spellcheck.onlyMorePopular\">true</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <!-- The number of suggestions to return -->\n      <str name=\"spellcheck.count\">1</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- The MoreLikeThis handler offers many advantages over the standard handler\n     when performing moreLikeThis requests. -->\n  <requestHandler name=\"mlt\" class=\"solr.MoreLikeThisHandler\">\n    <lst name=\"defaults\">\n      <str name=\"df\">content</str>\n      <str name=\"mlt.mintf\">1</str>\n      <str name=\"mlt.mindf\">1</str>\n      <str name=\"mlt.minwl\">3</str>\n      <str name=\"mlt.maxwl\">15</str>\n      <str name=\"mlt.maxqt\">20</str>\n      <str name=\"mlt.match.include\">false</str>\n      <!-- Abort any searches longer than 2 seconds (set in solrcore.properties) 
-->\n      <int name=\"timeAllowed\">${solr.mlt.timeAllowed:2000}</int>\n    </lst>\n  </requestHandler>\n\n  <!-- A minimal query type for doing Lucene queries -->\n  <requestHandler name=\"standard\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"df\">content</str>\n       <str name=\"echoParams\">explicit</str>\n       <bool name=\"omitHeader\">true</bool>\n     </lst>\n  </requestHandler>\n\n  <!-- Update Request Handler.\n\n       http://wiki.apache.org/solr/UpdateXmlMessages\n\n       The canonical Request Handler for Modifying the Index through\n       commands specified using XML, JSON, CSV, or JAVABIN\n\n       Note: Since Solr 1.1, requestHandlers require a valid content\n       type header if posted in the body. For example, curl now\n       requires: -H 'Content-type:text/xml; charset=utf-8'\n\n       To override the request content type and force a specific\n       Content-type, use the request parameter:\n         ?update.contentType=text/csv\n\n       This handler will pick a response format to match the input\n       if the 'wt' parameter is not explicit\n    -->\n  <!--<requestHandler name=\"/update\" class=\"solr.UpdateRequestHandler\">\n  </requestHandler>-->\n  <initParams path=\"/update/**,/query,/select,/tvrh,/elevate,/spell,/browse\">\n    <lst name=\"defaults\">\n      <str name=\"df\">text</str>\n    </lst>\n  </initParams>\n\n  <initParams path=\"/update/json/docs\">\n    <lst name=\"defaults\">\n      <!--this ensures that the entire json doc will be stored verbatim into one field-->\n      <str name=\"srcField\">_src_</str>\n      <!--This means the uniqueKeyField will be extracted from the fields and\n       all fields go into the 'df' field. 
In this config df is already configured to be 'text'\n        -->\n      <str name=\"mapUniqueKeyOnly\">true</str>\n    </lst>\n\n  </initParams>\n\n  <!-- CSV Update Request Handler\n       http://wiki.apache.org/solr/UpdateCSV\n    -->\n  <requestHandler name=\"/update/csv\"\n                  class=\"solr.CSVRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- JSON Update Request Handler\n       http://wiki.apache.org/solr/UpdateJSON\n    -->\n  <requestHandler name=\"/update/json\"\n                  class=\"solr.JsonUpdateRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- Solr Cell Update Request Handler\n\n       http://wiki.apache.org/solr/ExtractingRequestHandler\n\n    -->\n  <requestHandler name=\"/update/extract\"\n                  startup=\"lazy\"\n                  class=\"solr.extraction.ExtractingRequestHandler\" >\n    <lst name=\"defaults\">\n      <!-- All the main content goes into \"text\"... if you need to return\n           the extracted text or do highlighting, use a stored field. -->\n      <str name=\"fmap.content\">text</str>\n      <str name=\"lowernames\">true</str>\n      <str name=\"uprefix\">ignored_</str>\n\n      <!-- capture link hrefs but ignore div attributes -->\n      <str name=\"captureAttr\">true</str>\n      <str name=\"fmap.a\">links</str>\n      <str name=\"fmap.div\">ignored_</str>\n    </lst>\n  </requestHandler>\n\n  <!-- XSLT Update Request Handler\n       Transforms incoming XML with stylesheet identified by tr=\n  -->\n  <requestHandler name=\"/update/xslt\"\n                   startup=\"lazy\"\n                   class=\"solr.XsltUpdateRequestHandler\"/>\n\n  <!-- Field Analysis Request Handler\n\n       RequestHandler that provides much the same functionality as\n       analysis.jsp. 
Provides the ability to specify multiple field\n       types and field names in the same request and outputs\n       index-time and query-time analysis for each of them.\n\n       Request parameters are:\n       analysis.fieldname - field name whose analyzers are to be used\n\n       analysis.fieldtype - field type whose analyzers are to be used\n       analysis.fieldvalue - text for index-time analysis\n       q (or analysis.q) - text for query time analysis\n       analysis.showmatch (true|false) - When set to true and when\n           query analysis is performed, the produced tokens of the\n           field value analysis will be marked as \"matched\" for every\n           token that is produced by the query analysis\n   -->\n  <requestHandler name=\"/analysis/field\"\n                  startup=\"lazy\"\n                  class=\"solr.FieldAnalysisRequestHandler\" />\n\n  <!-- Document Analysis Handler\n\n       http://wiki.apache.org/solr/AnalysisRequestHandler\n\n       An analysis handler that provides a breakdown of the analysis\n       process of provided documents. This handler expects a (single)\n       content stream with the following format:\n\n       <docs>\n         <doc>\n           <field name=\"id\">1</field>\n           <field name=\"name\">The Name</field>\n           <field name=\"text\">The Text Value</field>\n         </doc>\n         <doc>...</doc>\n         <doc>...</doc>\n         ...\n       </docs>\n\n    Note: Each document must contain a field which serves as the\n    unique key. This key is used in the returned response to associate\n    an analysis breakdown to the analyzed document.\n\n    Like the FieldAnalysisRequestHandler, this handler also supports\n    query analysis by sending either an \"analysis.query\" or \"q\"\n    request parameter that holds the query text to be analyzed. 
It\n    also supports the \"analysis.showmatch\" parameter; when it is set to\n    true, all field tokens that match the query tokens will be marked\n    as a \"match\".\n  -->\n  <requestHandler name=\"/analysis/document\"\n                  class=\"solr.DocumentAnalysisRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- Admin Handlers\n\n       As of Solr 5.0.0, the \"/admin/\" handlers are registered implicitly.\n    -->\n  <!-- <requestHandler name=\"/admin/\" class=\"solr.admin.AdminHandlers\" /> -->\n  <!-- This single handler is equivalent to the following... -->\n  <!--\n     <requestHandler name=\"/admin/luke\"       class=\"solr.admin.LukeRequestHandler\" />\n     <requestHandler name=\"/admin/system\"     class=\"solr.admin.SystemInfoHandler\" />\n     <requestHandler name=\"/admin/plugins\"    class=\"solr.admin.PluginInfoHandler\" />\n     <requestHandler name=\"/admin/threads\"    class=\"solr.admin.ThreadDumpHandler\" />\n     <requestHandler name=\"/admin/properties\" class=\"solr.admin.PropertiesRequestHandler\" />\n     <requestHandler name=\"/admin/file\"       class=\"solr.admin.ShowFileRequestHandler\" >\n    -->\n  <!-- If you wish to hide files under ${solr.home}/conf, explicitly\n       register the ShowFileRequestHandler using the definition below.\n       NOTE: The glob pattern ('*') is the only pattern supported at present; *.xml will\n             not exclude all files ending in '.xml'. Use '*' to hide _all_ files.\n    -->\n  <!--\n     <requestHandler name=\"/admin/file\"\n                     class=\"solr.admin.ShowFileRequestHandler\" >\n       <lst name=\"invariants\">\n         <str name=\"hidden\">synonyms.txt</str>\n         <str name=\"hidden\">anotherfile.txt</str>\n         <str name=\"hidden\">*</str>\n       </lst>\n     </requestHandler>\n    -->\n  <!--\n    Enabling this request handler (which is NOT a default part of the admin handler) will allow the Solr UI to edit\n    all the config files. 
This is intended for secure/development use ONLY! Leaving it available and publicly\n    accessible is a security vulnerability, so enable it only with extreme caution!\n  -->\n  <!--\n  <requestHandler name=\"/admin/fileedit\" class=\"solr.admin.EditFileRequestHandler\" />\n    -->\n\n  <!-- Ping/healthcheck handler -->\n  <requestHandler name=\"/admin/ping\" class=\"solr.PingRequestHandler\">\n    <lst name=\"invariants\">\n      <str name=\"qt\">pinkPony</str>\n      <str name=\"q\">solrpingquery</str>\n      <str name=\"omitHeader\">false</str>\n    </lst>\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">all</str>\n    </lst>\n    <!-- An optional feature of the PingRequestHandler is to configure the\n         handler with a \"healthcheckFile\" which can be used to enable/disable\n         the PingRequestHandler.\n         Relative paths are resolved against the data dir.\n      -->\n    <!-- <str name=\"healthcheckFile\">server-enabled.txt</str> -->\n  </requestHandler>\n\n  <!-- Echo the request contents back to the client -->\n  <requestHandler name=\"/debug/dump\" class=\"solr.DumpRequestHandler\" >\n    <lst name=\"defaults\">\n     <str name=\"echoParams\">explicit</str>\n     <str name=\"echoHandler\">true</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Solr Replication\n\n       The SolrReplicationHandler supports replicating indexes from a\n       \"master\" used for indexing and \"slaves\" used for queries.\n\n       http://wiki.apache.org/solr/SolrReplication\n\n       In the example below, remove the <lst name=\"master\"> section if\n       this is just a slave and remove the <lst name=\"slave\"> section\n       if this is just a master.\n  -->\n  <requestHandler name=\"/replication\" class=\"solr.ReplicationHandler\" >\n    <lst name=\"master\">\n      <str name=\"enable\">${solr.replication.master:false}</str>\n      <str name=\"replicateAfter\">commit</str>\n      <str name=\"replicateAfter\">startup</str>\n      <str 
name=\"confFiles\">${solr.replication.confFiles:schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml}</str>\n    </lst>\n    <lst name=\"slave\">\n      <str name=\"enable\">${solr.replication.slave:false}</str>\n      <str name=\"masterUrl\">${solr.replication.masterUrl:http://localhost:8983/solr}/replication</str>\n      <str name=\"pollInterval\">${solr.replication.pollInterval:00:00:60}</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Realtime get handler, guaranteed to return the latest stored fields of\n       any document, without the need to commit or open a new searcher.  The\n       current implementation relies on the updateLog feature being enabled.\n  -->\n  <requestHandler name=\"/get\" class=\"solr.RealTimeGetHandler\">\n    <lst name=\"defaults\">\n      <str name=\"omitHeader\">true</str>\n      <str name=\"wt\">json</str>\n      <str name=\"indent\">true</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of 
the standard names,\n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n\n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n\n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\"\n\n     -->\n\n  <!-- A request handler for demonstrating the spellcheck component.\n\n       NOTE: This is purely as an example.  The whole purpose of the\n       SpellCheckComponent is to hook it into the request handler that\n       handles your normal user queries so that a separate request is\n       not needed to get suggestions.\n\n       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS\n       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!\n\n       See http://wiki.apache.org/solr/SpellCheckComponent for details\n       on the request parameters.\n    -->\n  <requestHandler name=\"/spell\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <!-- Solr will use suggestions from both the 'default' spellchecker\n           and from the 'wordbreak' spellchecker and combine them.\n           Collations (re-written queries) can include a combination of\n           corrections from both spellcheckers -->\n      <str name=\"spellcheck.dictionary\">default</str>\n      <str name=\"spellcheck.dictionary\">wordbreak</str>\n      <str name=\"spellcheck.onlyMorePopular\">false</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <str name=\"spellcheck.count\">1</str>\n      <str name=\"spellcheck.alternativeTermCount\">5</str>\n      <str name=\"spellcheck.maxResultsForSuggest\">5</str>\n      <str name=\"spellcheck.collate\">true</str>\n      <str name=\"spellcheck.collateExtendedResults\">true</str>\n      <str 
name=\"spellcheck.maxCollationTries\">10</str>\n      <str name=\"spellcheck.maxCollations\">5</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n    </arr>\n  </requestHandler>\n\n  <!-- This is disabled by default because it currently causes long startup times on\n       big indexes, even when never used.  See SOLR-6679 for background.\n\n       To use this suggester, set the \"solr.suggester.enabled=true\" system property\n    -->\n  <searchComponent name=\"suggest\" class=\"solr.SuggestComponent\"\n                   enable=\"${solr.suggester.enabled:false}\"     >\n    <lst name=\"suggester\">\n      <str name=\"name\">mySuggester</str>\n      <str name=\"lookupImpl\">FuzzyLookupFactory</str>\n      <str name=\"dictionaryImpl\">DocumentDictionaryFactory</str>\n      <str name=\"field\">cat</str>\n      <str name=\"weightField\">price</str>\n      <str name=\"suggestAnalyzerFieldType\">string</str>\n    </lst>\n  </searchComponent>\n\n  <requestHandler name=\"/suggest\" class=\"solr.SearchHandler\"\n                  startup=\"lazy\" enable=\"${solr.suggester.enabled:false}\" >\n    <lst name=\"defaults\">\n      <str name=\"suggest\">true</str>\n      <str name=\"suggest.count\">10</str>\n    </lst>\n    <arr name=\"components\">\n      <str>suggest</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Term Vector Component\n\n       http://wiki.apache.org/solr/TermVectorComponent\n    -->\n  <searchComponent name=\"tvComponent\" class=\"solr.TermVectorComponent\"/>\n\n  <!-- A request handler for demonstrating the term vector component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your\n       already specified request handlers.\n    -->\n  <requestHandler name=\"/tvrh\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <bool name=\"tv\">true</bool>\n    </lst>\n    <arr name=\"last-components\">\n      <str>tvComponent</str>\n    </arr>\n  
</requestHandler>\n\n  <!-- Clustering Component\n\n       http://wiki.apache.org/solr/ClusteringComponent\n\n       This relies on third-party jars which are not included in the\n       release.  To use this component (and the \"/clustering\" handler),\n       those jars will need to be downloaded, and you'll need to set\n       the solr.clustering.enabled system property when running solr...\n\n          java -Dsolr.clustering.enabled=true -jar start.jar\n    -->\n  <!-- <searchComponent name=\"clustering\"\n                   enable=\"${solr.clustering.enabled:false}\"\n                   class=\"solr.clustering.ClusteringComponent\" > -->\n    <!-- Declare an engine -->\n    <!--<lst name=\"engine\">-->\n      <!-- The name, only one can be named \"default\" -->\n      <!--<str name=\"name\">default</str>-->\n\n      <!-- Class name of Carrot2 clustering algorithm.\n\n           Currently available algorithms are:\n\n           * org.carrot2.clustering.lingo.LingoClusteringAlgorithm\n           * org.carrot2.clustering.stc.STCClusteringAlgorithm\n           * org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm\n\n           See http://project.carrot2.org/algorithms.html for the\n           algorithm's characteristics.\n        -->\n      <!--<str name=\"carrot.algorithm\">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>-->\n\n      <!-- Overriding values for Carrot2 default algorithm attributes.\n\n           For a description of all available attributes, see:\n           http://download.carrot2.org/stable/manual/#chapter.components.\n           Use attribute key as name attribute of str elements\n           below. 
These can be further overridden for individual\n           requests by specifying attribute key as request parameter\n           name and attribute value as parameter value.\n        -->\n      <!--<str name=\"LingoClusteringAlgorithm.desiredClusterCountBase\">20</str>-->\n\n      <!-- Location of Carrot2 lexical resources.\n\n           A directory from which to load Carrot2-specific stop words\n           and stop labels. Absolute or relative to Solr config directory.\n           If a specific resource (e.g. stopwords.en) is present in the\n           specified dir, it will completely override the corresponding\n           default one that ships with Carrot2.\n\n           For an overview of Carrot2 lexical resources, see:\n           http://download.carrot2.org/head/manual/#chapter.lexical-resources\n        -->\n      <!--<str name=\"carrot.lexicalResourcesDir\">clustering/carrot2</str>-->\n\n      <!-- The language to assume for the documents.\n\n           For a list of allowed values, see:\n           http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage\n       -->\n      <!--<str name=\"MultilingualClustering.defaultLanguage\">ENGLISH</str>\n    </lst>\n    <lst name=\"engine\">\n      <str name=\"name\">stc</str>\n      <str name=\"carrot.algorithm\">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>\n    </lst>\n  </searchComponent>-->\n\n  <!-- A request handler for demonstrating the clustering component\n\n       This is purely an example.\n\n       In reality you will likely want to add the component to your\n       already specified request handlers.\n    -->\n  <!--<requestHandler name=\"/clustering\"\n                  startup=\"lazy\"\n                  enable=\"${solr.clustering.enabled:false}\"\n                  class=\"solr.SearchHandler\">\n    <lst name=\"defaults\">\n      <bool name=\"clustering\">true</bool>\n      <str name=\"clustering.engine\">default</str>\n      <bool 
name=\"clustering.results\">true</bool>-->\n      <!-- The title field -->\n      <!--<str name=\"carrot.title\">name</str>-->\n      <!--<str name=\"carrot.url\">id</str>-->\n      <!-- The field to cluster on -->\n       <!--<str name=\"carrot.snippet\">features</str>-->\n       <!-- produce summaries -->\n       <!--<bool name=\"carrot.produceSummary\">true</bool>-->\n       <!-- the maximum number of labels per cluster -->\n       <!--<int name=\"carrot.numDescriptions\">5</int>-->\n       <!-- produce sub clusters -->\n       <!--<bool name=\"carrot.outputSubClusters\">false</bool>-->\n\n       <!--<str name=\"defType\">edismax</str>\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>clustering</str>\n    </arr>\n  </requestHandler>-->\n\n  <!-- Terms Component\n\n       http://wiki.apache.org/solr/TermsComponent\n\n       A component to return terms and document frequency of those\n       terms\n    -->\n  <searchComponent name=\"terms\" class=\"solr.TermsComponent\"/>\n\n  <!-- A request handler for demonstrating the terms component -->\n  <requestHandler name=\"/terms\" class=\"solr.SearchHandler\" startup=\"lazy\">\n     <lst name=\"defaults\">\n      <bool name=\"terms\">true</bool>\n    </lst>\n    <arr name=\"components\">\n      <str>terms</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Query Elevation Component\n\n       http://wiki.apache.org/solr/QueryElevationComponent\n\n       a search component that enables you to configure the top\n       results for a given query regardless of the normal lucene\n       scoring.\n    -->\n  <searchComponent name=\"elevator\" class=\"solr.QueryElevationComponent\" >\n    <!-- pick a fieldType to analyze queries -->\n    <str name=\"queryFieldType\">string</str>\n    <str 
name=\"config-file\">elevate.xml</str>\n  </searchComponent>\n\n  <!-- A request handler for demonstrating the elevator component -->\n  <requestHandler name=\"/elevate\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">explicit</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\"\n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter\n           (for sentence extraction)\n        -->\n      <fragmenter name=\"regex\"\n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      </fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\"\n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<strong>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</strong>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure 
the standard encoder -->\n      <encoder name=\"html\"\n               class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\"\n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\"\n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!--\n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawngreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n\n      <boundaryScanner name=\"default\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? 
&#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n\n      <boundaryScanner name=\"breakIterator\"\n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD (default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing a Locale object.  -->\n          <!-- And the Locale object will be used when getting an instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    -->\n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  
This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.\n\n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Language identification\n\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. 
No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n    <!--\n     <updateRequestProcessorChain name=\"langid\">\n       <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n         <str name=\"langid.fl\">text,title,subject,description</str>\n         <str name=\"langid.langField\">language_s</str>\n         <str name=\"langid.fallback\">en</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     <queryResponseWriter name=\"xml\"\n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n    -->\n\n  
<queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n     <!-- For the purposes of the tutorial, JSON responses are written as\n      plain text so that they are easy to read in *any* browser.\n      If you expect a MIME type of \"application/json\" just remove this override.\n     -->\n    <str name=\"content-type\">text/plain; charset=UTF-8</str>\n  </queryResponseWriter>\n\n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!-- <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" enable=\"${solr.velocity.enabled:true}\"/> -->\n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       http://wiki.apache.org/solr/SolrQuerySyntax\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\"\n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n  
<!-- Legacy config for the admin interface -->\n  <admin>\n    <defaultQuery>*:*</defaultQuery>\n\n    <!-- configure a healthcheck file for servers behind a\n         loadbalancer\n      -->\n    <!--\n       <healthcheck type=\"file\">server-enabled</healthcheck>\n      -->\n  </admin>\n\n  <!-- The following is a dynamic way to include other components or any customized solrconfig.xml additions provided by other contrib modules -->\n  <xi:include href=\"solrconfig_extra.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback>\n    <!-- Spell Check\n\n        The spell check component can return a list of alternative spelling\n        suggestions. This component must be defined in\n        solrconfig_extra.xml if present, since it's used in the search handler.\n\n        http://wiki.apache.org/solr/SpellCheckComponent\n     -->\n    <searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n    <str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n    <!-- a spellchecker built from a field of the main index -->\n      <lst name=\"spellchecker\">\n        <str name=\"name\">default</str>\n        <str name=\"field\">spell</str>\n        <str name=\"spellcheckIndexDir\">spellchecker</str>\n        <str name=\"buildOnOptimize\">true</str>\n      </lst>\n    </searchComponent>\n    </xi:fallback>\n  </xi:include>\n\n</config>\n
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/solrconfig_extra.xml",
    "content": "<!-- Spell Check\n\n    The spell check component can return a list of alternative spelling\n    suggestions.\n\n    http://wiki.apache.org/solr/SpellCheckComponent\n -->\n<searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n<str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n<!-- Multiple \"Spell Checkers\" can be declared and used by this\n     component\n  -->\n\n<!-- a spellchecker built from a field of the main index, and\n     written to disk\n  -->\n<lst name=\"spellchecker\">\n  <str name=\"name\">default</str>\n  <str name=\"field\">spell</str>\n  <str name=\"spellcheckIndexDir\">spellchecker</str>\n  <str name=\"buildOnOptimize\">true</str>\n  <!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary\n    <float name=\"thresholdTokenFrequency\">.01</float>\n  -->\n</lst>\n\n<!--\n  Adding a German spellchecker index to our Solr index.\n  This also requires enabling the content in schema_extra_types.xml and schema_extra_fields.xml\n-->\n<!--\n<lst name=\"spellchecker\">\n  <str name=\"name\">spellchecker_de</str>\n  <str name=\"field\">spell_de</str>\n  <str name=\"spellcheckIndexDir\">./spellchecker_de</str>\n  <str name=\"buildOnOptimize\">true</str>\n</lst>\n-->\n\n<!-- a spellchecker that uses a different distance measure -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">jarowinkler</str>\n     <str name=\"field\">spell</str>\n     <str name=\"distanceMeasure\">\n       org.apache.lucene.search.spell.JaroWinklerDistance\n     </str>\n     <str name=\"spellcheckIndexDir\">spellcheckerJaro</str>\n   </lst>\n -->\n\n<!-- a spellchecker that uses an alternate comparator\n\n     comparatorClass can be one of:\n      1. score (default)\n      2. freq (Frequency first, then score)\n      3. 
A fully qualified class name\n  -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">freq</str>\n     <str name=\"field\">lowerfilt</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFreq</str>\n     <str name=\"comparatorClass\">freq</str>\n     <str name=\"buildOnCommit\">true</str>\n   </lst>\n  -->\n\n<!-- A spellchecker that reads the list of words from a file -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"classname\">solr.FileBasedSpellChecker</str>\n     <str name=\"name\">file</str>\n     <str name=\"sourceLocation\">spellings.txt</str>\n     <str name=\"characterEncoding\">UTF-8</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFile</str>\n   </lst>\n  -->\n</searchComponent>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:8099/solr\nsolr.replication.confFiles=schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml\nsolr.mlt.timeAllowed=2000\n# You should not set your luceneMatchVersion to anything lower than your Solr\n# version.\nsolr.luceneMatchVersion=6.0\nsolr.pinkPony.timeAllowed=-1\n# autoCommit after 10000 docs\nsolr.autoCommit.MaxDocs=10000\n# autoCommit after 2 minutes\nsolr.autoCommit.MaxTime=120000\n# autoSoftCommit after 2000 docs\nsolr.autoSoftCommit.MaxDocs=2000\n# autoSoftCommit after 10 seconds\nsolr.autoSoftCommit.MaxTime=10000\nsolr.contrib.dir=../../../contrib\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/stopwords.txt",
    "content": "# Contains words which shouldn't be indexed for fulltext fields, e.g., because\n# they're too common. For documentation of the format, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.StopFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/apachesolr/solr7_drupal7/synonyms.txt",
    "content": "# Contains synonyms to use for your index. For the format used, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. The item IDs are in general constructed as follows:\n   Search API:\n     $document->id = $index_id . '-' . $item_id;\n   Apache Solr Search Integration:\n     $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #1 first in searches for \"example query\": -->\n<!--\n <query text=\"example query\">\n  <doc id=\"default_node_index-1\" />\n  <doc id=\"7v3jsc/node/1\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/mapping-ISOLatin1Accent.txt",
    "content": "# This file contains character mappings for the default fulltext field type.\n# The source characters (on the left) will be replaced by the respective target\n# characters before any other processing takes place.\n# Lines starting with a pound character # are ignored.\n#\n# For sensible defaults, use the mapping-ISOLatin1Accent.txt file distributed\n# with the example application of your Solr version.\n#\n# Examples:\n#   \"À\" => \"A\"\n#   \"\\u00c4\" => \"A\"\n#   \"\\u00c4\" => \"\\u0041\"\n#   \"æ\" => \"ae\"\n#   \"\\n\" => \" \"\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/protwords.txt",
    "content": "#-----------------------------------------------------------------------\n# This file blocks words from being operated on by the stemmer and word delimiter.\n&amp;\n&lt;\n&gt;\n&#039;\n&quot;\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n For more information on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n-->\n\n<schema name=\"drupal-4.3-solr-4.x\" version=\"1.3\">\n    <!-- attribute \"name\" is the name of this schema and is only used for display purposes.\n         Applications should change this to reflect the nature of the search collection.\n         version=\"1.3\" is Solr's version number for the schema syntax and semantics.  It should\n         not normally be changed by applications.\n         1.0: multiValued attribute did not exist, all fields are multiValued by nature\n         1.1: multiValued attribute introduced, false by default\n         1.2: omitTermFreqAndPositions attribute introduced, true by default except for text fields.\n         1.3: removed optional field compress feature\n       -->\n  <types>\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in the\n       org.apache.solr.analysis package.\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       - StrField and TextField support an optional compressThreshold which\n       limits compression (if enabled in the derived fields) to values which\n       exceed a certain size (in characters).\n    -->\n    <fieldType name=\"string\" class=\"solr.StrField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->\n    <fieldtype name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The optional sortMissingLast and sortMissingFirst attributes are\n         currently supported on types that are sorted internally as strings.\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!-- numeric field types that can be sorted, but are not optimized for range queries -->\n    <fieldType name=\"integer\" class=\"solr.TrieIntField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    
<fieldType name=\"float\" class=\"solr.TrieFloatField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"long\" class=\"solr.TrieLongField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"double\" class=\"solr.TrieDoubleField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <!--\n      Note:\n      These should only be used for compatibility with existing indexes (created with older Solr versions)\n      or if \"sortMissingFirst\" or \"sortMissingLast\" functionality is needed. Use Trie based fields instead.\n\n      Numeric field types that manipulate the value into\n      a string value that isn't human-readable in its internal form,\n      but with a lexicographic ordering the same as the numeric ordering,\n      so that range queries work correctly.\n    -->\n    <fieldType name=\"sint\" class=\"solr.TrieIntField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"sfloat\" class=\"solr.TrieFloatField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"slong\" class=\"solr.TrieLongField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n    <fieldType name=\"sdouble\" class=\"solr.TrieDoubleField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!--\n     Numeric field types that index each value at various levels of precision\n     to accelerate range queries when the number of values between the range\n     endpoints is large. 
See the javadoc for NumericRangeQuery for internal\n     implementation details.\n\n     Smaller precisionStep values (specified in bits) will lead to more tokens\n     indexed per value, slightly larger index size, and faster range queries.\n     A precisionStep of 0 disables indexing at different precision levels.\n    -->\n    <fieldType name=\"tint\" class=\"solr.TrieIntField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tfloat\" class=\"solr.TrieFloatField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tlong\" class=\"solr.TrieLongField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tdouble\" class=\"solr.TrieDoubleField\" precisionStep=\"8\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"pfloat\" class=\"solr.FloatField\" omitNorms=\"true\"/>\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\" valType=\"pfloat\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... 
Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the DateField javadocs for more information.\n      -->\n    <fieldType name=\"date\" class=\"solr.DateField\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- A Trie based date field for faster date range queries and date faceting. -->\n    <fieldType name=\"tdate\" class=\"solr.TrieDateField\" omitNorms=\"true\" precisionStep=\"6\" positionIncrementGap=\"0\"/>\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- A text field that uses WordDelimiterFilter to enable splitting and 
matching of\n        words on case-change, alpha numeric boundaries, and non-alphanumeric chars,\n        so that a query of \"wifi\" or \"wi fi\" could match a document containing \"Wi-Fi\".\n        Synonyms and stopwords are customized by external files, and stemming is enabled.\n        Duplicate tokens at the same position (which may result from Stemmed Synonyms or\n        WordDelim parts) are removed.\n        -->\n    <fieldType name=\"text\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <!-- in this example, we will only use synonyms at query time\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"index_synonyms.txt\" ignoreCase=\"true\" expand=\"false\"/>\n        -->\n        <!-- Case insensitive stop word removal.\n          add enablePositionIncrements=true in both the index and query\n          analyzers to leave a 'gap' for more accurate phrase queries.\n        -->\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter 
class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                
generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"1\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- An unstemmed text field - good if one does not know the language of the field -->\n    <fieldType name=\"text_und\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\" enablePositionIncrements=\"true\" />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n     
           generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                enablePositionIncrements=\"true\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- Edge N gram type - for example for matching against queries with results\n        KeywordTokenizer leaves input string intact as a single term.\n        see: http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/\n      -->\n    <fieldType name=\"edge_n2_kw_text\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.EdgeNGramFilterFactory\" minGramSize=\"2\" maxGramSize=\"25\" />\n      </analyzer>\n      <analyzer 
type=\"query\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n    <!--  Setup simple analysis for spell checking -->\n\n    <fieldType name=\"textSpell\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\" />\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"4\" max=\"20\" />\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\" />\n      </analyzer>\n    </fieldType>\n\n    <!-- This is an example of using the KeywordTokenizer along\n         with various TokenFilterFactories to produce a sortable field\n         that does not include some properties of the source text\n      -->\n    <fieldType name=\"sortString\" class=\"solr.TextField\" sortMissingLast=\"true\" omitNorms=\"true\">\n      <analyzer>\n        <!-- KeywordTokenizer does no actual tokenizing, so the entire\n            input string is preserved as a single token\n          -->\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <!-- The LowerCase TokenFilter does what you expect, which can be useful\n            when you want your sorting to be case insensitive\n          -->\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <!-- The TrimFilter removes any leading or trailing whitespace -->\n        <filter class=\"solr.TrimFilterFactory\" />\n        <!-- The PatternReplaceFilter gives you the flexibility to use\n            Java Regular expression to replace any sequence of characters\n            matching a pattern with an arbitrary replacement string,\n            which may include back references to portions of the original\n            string matched by the pattern.\n\n            See the Java Regular 
Expression documentation for more\n            information on pattern and replacement string syntax.\n\n            http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html\n\n        <filter class=\"solr.PatternReplaceFilterFactory\"\n               pattern=\"(^\\p{Punct}+)\" replacement=\"\" replace=\"all\"\n        />\n        -->\n      </analyzer>\n    </fieldType>\n\n    <!-- A random sort type -->\n    <fieldType name=\"rand\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- since fields of this type are by default not stored or indexed, any data added to\n         them will be ignored outright\n      -->\n    <fieldtype name=\"ignored\" stored=\"false\" indexed=\"false\" class=\"solr.StrField\" />\n\n    <!-- Begin added types to use features in Solr 3.4+ -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"tdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. 
-->\n    <fieldType name=\"location\" class=\"solr.LatLonType\" subFieldType=\"tdouble\"/>\n\n    <!-- A Geohash is a compact representation of a latitude longitude pair in a single field.\n         See http://wiki.apache.org/solr/SpatialSearch\n     -->\n    <fieldtype name=\"geohash\" class=\"solr.GeoHashField\"/>\n    <!-- End added Solr 3.4+ types -->\n\n  </types>\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  <xi:include href=\"schema_extra_types.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <fields>\n    <!-- Valid attributes for fields:\n      name: mandatory - the name for the field\n      type: mandatory - the name of a previously defined type from the <types> section\n      indexed: true if this field should be indexed (searchable or sortable)\n      stored: true if this field should be retrievable\n      compressed: [false] if this field should be stored using gzip compression\n       (this will only apply if the field type is compressible; among\n       the standard field types, only TextField and StrField are)\n      multiValued: true if this field may contain multiple values per document\n      omitNorms: (expert) set to true to omit the norms associated with\n       this field (this disables length normalization and index-time\n       boosting for the field, and saves some memory).  Only full-text\n       fields or fields that need an index-time boost need norms.\n    -->\n\n    <!-- The document id is usually derived from a site-specific key (hash) and the\n      entity type and ID like:\n      Search Api :\n        The format used is $document->id = $index_id . '-' . $item_id\n      Apache Solr Search Integration\n        The format used is $document->id = $site_hash . '/' . $entity_type . '/' . 
$entity->id;\n    -->\n    <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" />\n\n    <!-- Add Solr Cloud version field as mentioned in\n         http://wiki.apache.org/solr/SolrCloud#Required_Config\n    -->\n    <field name=\"_version_\" type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n\n    <!-- Search Api specific fields -->\n    <!-- item_id contains the entity ID, e.g. a node's nid. -->\n    <field name=\"item_id\"  type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- index_id is the machine name of the search index this entry belongs to. -->\n    <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- Since sorting by ID is explicitly allowed, store item_id also in a sortable way. -->\n    <copyField source=\"item_id\" dest=\"sort_search_api_id\" />\n\n    <!-- Apache Solr Search Integration specific fields -->\n    <!-- entity_id is the numeric object ID, e.g. Node ID, File ID -->\n    <field name=\"entity_id\"  type=\"long\" indexed=\"true\" stored=\"true\" />\n    <!-- entity_type is 'node', 'file', 'user', or some other Drupal object type -->\n    <field name=\"entity_type\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- bundle is a node type, or as appropriate for other entity types -->\n    <field name=\"bundle\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"bundle_name\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"url\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <!-- label is the default field for a human-readable string for this entity (e.g. 
the title of a node) -->\n    <field name=\"label\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- The string version of the title is used for sorting -->\n    <copyField source=\"label\" dest=\"sort_label\"/>\n\n    <!-- content is the default field for full text search - dump crap here -->\n    <field name=\"content\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n    <field name=\"teaser\" type=\"text\" indexed=\"false\" stored=\"true\"/>\n    <field name=\"path\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"path_alias\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n\n    <!-- These are the fields that correspond to a Drupal node. The beauty of having\n      Lucene store title, body, type, etc., is that we retrieve them with the search\n      result set and don't need to go to the database with a node_load. -->\n    <field name=\"tid\"  type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n    <field name=\"taxonomy_names\" type=\"text\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n    <!-- Copy terms to a single field that contains all taxonomy term names -->\n    <copyField source=\"tm_vid_*\" dest=\"taxonomy_names\"/>\n\n    <!-- Here, default is used to create a \"timestamp\" field indicating\n         when each document was indexed.-->\n    <field name=\"timestamp\" type=\"tdate\" indexed=\"true\" stored=\"true\" default=\"NOW\" multiValued=\"false\"/>\n\n    <!-- This field is used to build the spellchecker index -->\n    <field name=\"spell\" type=\"textSpell\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- copyField commands copy one field to another at the time a document\n         is added to the index.  
It's used either to index the same field differently,\n         or to add multiple fields to the same field for easier/faster searching.  -->\n    <copyField source=\"label\" dest=\"spell\"/>\n    <copyField source=\"content\" dest=\"spell\"/>\n\n    <copyField source=\"ts_*\" dest=\"spell\"/>\n    <copyField source=\"tm_*\" dest=\"spell\"/>\n\n    <!-- Dynamic field definitions.  If a field name is not found, dynamicFields\n         will be used if the name matches any of the patterns.\n         RESTRICTION: the glob-like pattern in the name attribute must have\n         a \"*\" only at the start or the end.\n         EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n         Longer patterns will be matched first.  If equal size patterns\n         both match, the first appearing in the schema will be used.  -->\n\n    <!-- A set of fields to contain text extracted from HTML tag contents which we\n         can boost at query time. -->\n    <dynamicField name=\"tags_*\" type=\"text\"   indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n\n    <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n         the last letter is 's' for single valued, 'm' for multi-valued -->\n\n    <!-- We use long for integer since 64 bit ints are now common in PHP. 
-->\n    <dynamicField name=\"is_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"im_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of floats can be saved in a regular float field -->\n    <dynamicField name=\"fs_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fm_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of doubles can be saved in a regular double field -->\n    <dynamicField name=\"ps_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pm_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of booleans can be saved in a regular boolean field -->\n    <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <!-- Regular text (without processing) can be stored in a string field-->\n    <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Normal text fields are for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"ts_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    <dynamicField name=\"tm_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- Unstemmed text fields for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"tus_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    
<dynamicField name=\"tum_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- These text fields omit norms - useful for extracted text like taxonomy_names -->\n    <dynamicField name=\"tos_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n    <dynamicField name=\"tom_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- Special-purpose text fields -->\n    <dynamicField name=\"tes_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"false\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tem_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"true\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- trie dates are preferred, so give them the 2 letter prefix -->\n    <dynamicField name=\"ds_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"dm_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"its_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"itm_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"fts_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ftm_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pts_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ptm_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" 
multiValued=\"true\"/>\n    <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n         a small image in a search result using the data URI scheme -->\n    <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a date rather than tdate is needed for sortMissingLast -->\n    <dynamicField name=\"dds_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ddm_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Sortable fields, good for sortMissingLast support &\n         We use long for integer since 64 bit ints are now common in PHP. -->\n    <dynamicField name=\"iss_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ism_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a sfloat rather than tfloat is needed for sortMissingLast -->\n    <dynamicField name=\"fss_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fsm_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pss_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"psm_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 
32 bit on 64 arch -->\n    <dynamicField name=\"hs_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hm_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hss_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hsm_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hts_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"htm_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n    <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Begin added fields to use features in Solr 3.4+\n         http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n    <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"geos_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"geom_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- External file fields -->\n    <dynamicField name=\"eff_*\" type=\"file\"/>\n    <!-- End added fields for Solr 3.4+ -->\n\n    <!-- Sortable version of the dynamic string field -->\n    
<dynamicField name=\"sort_*\" type=\"sortString\" indexed=\"true\" stored=\"false\"/>\n    <copyField source=\"ss_*\" dest=\"sort_*\"/>\n    <!-- A random sort field -->\n    <dynamicField name=\"random_*\" type=\"rand\" indexed=\"true\" stored=\"true\"/>\n    <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n    <dynamicField name=\"access_*\" type=\"integer\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n    <!-- The following causes solr to ignore any fields that don't already match an existing\n         field name or dynamic field, rather than reporting them as an error.\n         Alternately, change the type=\"ignored\" to some other type e.g. \"text\" if you want\n         unknown fields indexed and/or stored by default -->\n    <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n  </fields>\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  <xi:include href=\"schema_extra_fields.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- field for the QueryParser to use when an explicit fieldname is absent -->\n  <defaultSearchField>content</defaultSearchField>\n\n  <!-- SolrQueryParser configuration: defaultOperator=\"AND|OR\" -->\n  <solrQueryParser defaultOperator=\"AND\"/>\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/schema_extra_fields.xml",
    "content": "<fields>\n<!--\n  Adding German dynamic field types to our Solr Schema\n  If you enable this, make sure you have a folder called lang with stopwords_de.txt\n  and synonyms_de.txt in there\n  This also requires to enable the content in schema_extra_types.xml\n-->\n<!--\n   <field name=\"label_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"content_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n   <field name=\"teaser_de\" type=\"text_de\" indexed=\"false\" stored=\"true\"/>\n   <field name=\"path_alias_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"taxonomy_names_de\" type=\"text_de\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n   <field name=\"spell_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n   <copyField source=\"label_de\" dest=\"spell_de\"/>\n   <copyField source=\"content_de\" dest=\"spell_de\"/>\n   <dynamicField name=\"tags_de_*\" type=\"text_de\" indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n   <dynamicField name=\"ts_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n   <dynamicField name=\"tm_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n   <dynamicField name=\"tos_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n   <dynamicField name=\"tom_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n-->\n</fields>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/schema_extra_types.xml",
    "content": "<types>\n<!--\n  Adding German language to our Solr Schema German\n  If you enable this, make sure you have a folder called lang with stopwords_de.txt\n  and synonyms_de.txt in there\n-->\n<!--\n    <fieldType name=\"text_de\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\" enablePositionIncrements=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"1\" catenateNumbers=\"1\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"lang/synonyms_de.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\" enablePositionIncrements=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"0\" catenateNumbers=\"0\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter 
class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n-->\n</types>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n\n<!--\n     For more details about configurations options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-4.3-solr-4.x\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered an severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false using by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  
Generally, you want to use the latest version to\n       get all bug fixes and improvements. It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_40}</luceneMatchVersion>\n\n  <!-- lib directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n\n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A dir option by itself adds any files found in the directory to\n       the classpath, this is useful for including all jars in a\n       directory.\n    -->\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/extraction/lib\" />\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/clustering/lib/\" />\n\n  <!-- The velocity library has been known to crash Solr in some\n       instances when deployed as a war file to Tomcat. 
Therefore all\n       references have been removed from the default configuration.\n       @see http://drupal.org/node/1612556\n  -->\n  <!-- <lib dir=\"../../contrib/velocity/lib\" /> -->\n\n  <!-- When a regex is specified in addition to a directory, only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n    -->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-cell-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-clustering-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-dataimporthandler-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-langid-\\d.*\\.jar\" />-->\n  <!-- <lib dir=\"../../dist/\" regex=\"apache-solr-velocity-\\d.*\\.jar\" /> -->\n\n  <!-- If a dir option (with or without a regex) is used and nothing\n       is found that matches, it will be ignored\n    -->\n  <!--<lib dir=\"../../contrib/clustering/lib/\" />-->\n  <!--<lib dir=\"/total/crap/dir/ignored\" />-->\n\n  <!-- an exact path can be used to specify a specific file.  This\n       will cause a serious error to be logged if it can't be loaded.\n    -->\n  <!--\n  <lib path=\"../a-jar-that-does-not-exist.jar\" />\n  -->\n\n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <!-- <dataDir>${solr.data.dir:}</dataDir> -->\n\n\n  <!-- The DirectoryFactory to use for indexes.\n\n       solr.StandardDirectoryFactory, the default, is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  
One can force a particular implementation\n       via solr.MMapDirectoryFactory, solr.NIOFSDirectoryFactory, or\n       solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based, not\n       persistent, and doesn't work with replication.\n    -->\n  <directoryFactory name=\"DirectoryFactory\"\n                    class=\"${solr.directoryFactory:solr.StandardDirectoryFactory}\"/>\n\n  <!-- Index Defaults\n\n       Values here affect all index writers and act as a default\n       unless overridden.\n\n       WARNING: See also the <mainIndex> section below for parameters\n       that override these for Solr's main Lucene index.\n    -->\n  <indexConfig>\n\n    <useCompoundFile>false</useCompoundFile>\n\n    <mergeFactor>4</mergeFactor>\n    <!-- Sets the amount of RAM that may be used by Lucene indexing\n         for buffering added documents and deletions before they are\n         flushed to the Directory.  -->\n    <ramBufferSizeMB>32</ramBufferSizeMB>\n    <!-- If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.\n      -->\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <maxMergeDocs>2147483647</maxMergeDocs>\n    <maxFieldLength>100000</maxFieldLength>\n    <writeLockTimeout>1000</writeLockTimeout>\n\n    <!-- Expert: Merge Policy\n\n         The Merge Policy in Lucene controls how merging is handled by\n         Lucene.  The default in Solr 3.3 is TieredMergePolicy.\n\n         The default in 2.3 was the LogByteSizeMergePolicy,\n         previous versions used LogDocMergePolicy.\n\n         LogByteSizeMergePolicy chooses segments to merge based on\n         their size.  
The Lucene 2.2 default, LogDocMergePolicy chose\n         when to merge based on number of documents\n\n         Other implementations of MergePolicy must have a no-argument\n         constructor\n      -->\n    <mergePolicy class=\"org.apache.lucene.index.LogByteSizeMergePolicy\"/>\n\n    <!-- Expert: Merge Scheduler\n\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!--\n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\n    <!-- LockFactory\n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n\n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         (For backwards compatibility with Solr 1.2, 'simple' is the\n         default if not specified.)\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>single</lockType>\n\n    <!-- Expert: Controls how often Lucene loads terms into memory\n         Default is 128 and is likely good for most everyone.\n      -->\n    <!-- <termIndexInterval>256</termIndexInterval> -->\n\n    <!-- Unlock On Startup\n\n         If true, unlock any held write or commit locks on startup.\n         This defeats the locking mechanism that allows multiple\n         processes to safely access a lucene 
index, and should be used\n         with care.\n\n         This is not needed if lock type is 'none' or 'single'\n     -->\n    <unlockOnStartup>false</unlockOnStartup>\n\n    <!-- If true, IndexReaders will be reopened (often more efficient)\n         instead of closed and then opened.\n      -->\n    <reopenReaders>true</reopenReaders>\n\n    <!-- Commit Deletion Policy\n\n         Custom deletion policies can be specified here. The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         http://lucene.apache.org/java/2_9_1/api/all/org/apache/lucene/index/IndexDeletionPolicy.html\n\n         The standard Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n\n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n      <!-- The number of commit points to be kept -->\n      <str name=\"maxCommitsToKeep\">1</str>\n      <!-- The number of optimized commit points to be kept -->\n      <str name=\"maxOptimizedCommitsToKeep\">0</str>\n      <!--\n          Delete all commit points once they have reached the given age.\n          Supports DateMathParser syntax e.g.\n        -->\n      <!--\n         <str name=\"maxCommitAge\">30MINUTES</str>\n         <str name=\"maxCommitAge\">1DAY</str>\n      -->\n    </deletionPolicy>\n\n    <!-- Lucene Infostream\n\n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its debugging info to the specified file\n      -->\n     <infoStream file=\"INFOSTREAM.txt\">false</infoStream>\n\n  </indexConfig>\n\n  <!-- JMX\n\n       This example enables JMX if and only if an existing MBeanServer\n       is found, use this if you 
want to configure JMX through JVM\n       parameters. Remove this to disable exposing Solr configuration\n       and statistics to JMX.\n\n       For more details see http://wiki.apache.org/solr/SolrJmx\n    -->\n  <!-- <jmx /> -->\n  <!-- If you want to connect to a particular server, specify the\n       agentId\n    -->\n  <!-- <jmx agentId=\"myAgent\" /> -->\n  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->\n  <!-- <jmx serviceUrl=\"service:jmx:rmi:///jndi/rmi://localhost:9999/solr\"/>\n    -->\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- AutoCommit\n\n         Perform a <commit/> automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents.\n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:10000}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:120000}</maxTime>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n    -->\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:2000}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:10000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n\n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns.\n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n    <!-- Enables a transaction log, currently used for real-time get.\n         \"dir\" - the target directory for transaction logs, defaults to the\n         solr data directory.  
-->\n    <updateLog>\n      <str name=\"dir\">${solr.data.dir:}</str>\n      <!-- if you want to take control of the synchronization you may specify\n           the syncLevel as one of the following where ''flush'' is the default.\n           Fsync will reduce throughput.\n           <str name=\"syncLevel\">flush|fsync|none</str>\n      -->\n    </updateLog>\n  </updateHandler>\n\n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n  <!-- By explicitly declaring the Factory, the termIndexDivisor can\n       be specified.\n    -->\n  <!--\n     <indexReaderFactory name=\"IndexReaderFactory\"\n                         class=\"solr.StandardIndexReaderFactory\">\n       <int name=\"setTermIndexDivisor\">12</int>\n     </indexReaderFactory >\n    -->\n\n\n  <query>\n    <!-- Max Boolean Clauses\n\n         Maximum number of clauses in each BooleanQuery,  an exception\n         is thrown if exceeded.\n\n         ** WARNING **\n\n         This option actually modifies a global Lucene property that\n         will affect all SolrCores.  
If multiple solrconfig.xml files\n         disagree on this property, the value at any given moment will\n         be based on the last SolrCore to be initialized.\n\n      -->\n    <maxBooleanClauses>1024</maxBooleanClauses>\n\n\n    <!-- Solr Internal Query Caches\n\n         There are two implementations of cache available for Solr,\n         LRUCache, based on a synchronized LinkedHashMap, and\n         FastLRUCache, based on a ConcurrentHashMap.\n\n         FastLRUCache has faster gets and slower puts in single\n         threaded operation and thus is generally faster than LRUCache\n         when the hit ratio of the cache is high (> 75%), and may be\n         faster under other scenarios on multi-cpu systems.\n    -->\n\n    <!-- Filter Cache\n\n         Cache used by SolrIndexSearcher for filters (DocSets),\n         unordered sets of *all* documents that match a query.  When a\n         new searcher is opened, its caches may be prepopulated or\n         \"autowarmed\" using data from caches in the old searcher.\n         autowarmCount is the number of items to prepopulate.  For\n         LRUCache, the autowarmed items will be the most recently\n         accessed items.\n\n         Parameters:\n           class - the SolrCache implementation to use\n               (LRUCache or FastLRUCache)\n           size - the maximum number of entries in the cache\n           initialSize - the initial capacity (number of entries) of\n               the cache.  
(see java.util.HashMap)\n           autowarmCount - the number of entries to prepopulate from\n               an old cache.\n      -->\n    <filterCache class=\"solr.FastLRUCache\"\n                 size=\"512\"\n                 initialSize=\"512\"\n                 autowarmCount=\"0\"/>\n\n    <!-- Query Result Cache\n\n         Caches results of searches - ordered lists of document ids\n         (DocList) based on a query, a sort, and the range of documents requested.\n      -->\n    <queryResultCache class=\"solr.LRUCache\"\n                     size=\"512\"\n                     initialSize=\"512\"\n                     autowarmCount=\"32\"/>\n\n    <!-- Document Cache\n\n         Caches Lucene Document objects (the stored fields for each\n         document).  Since Lucene internal document ids are transient,\n         this cache will not be autowarmed.\n      -->\n    <documentCache class=\"solr.LRUCache\"\n                   size=\"512\"\n                   initialSize=\"512\"\n                   autowarmCount=\"0\"/>\n\n    <!-- Field Value Cache\n\n         Cache used to hold field values that are quickly accessible\n         by document id.  The fieldValueCache is created by default\n         even if not configured here.\n      -->\n    <!--\n       <fieldValueCache class=\"solr.FastLRUCache\"\n                        size=\"512\"\n                        autowarmCount=\"128\"\n                        showItems=\"32\" />\n      -->\n\n    <!-- Custom Cache\n\n         Example of a generic cache.  These caches may be accessed by\n         name through SolrIndexSearcher.getCache(), cacheLookup(), and\n         cacheInsert().  The purpose is to enable easy caching of\n         user/application level data.  
The regenerator argument should\n         be specified as an implementation of solr.CacheRegenerator\n         if autowarming is desired.\n      -->\n    <!--\n       <cache name=\"myUserCache\"\n              class=\"solr.LRUCache\"\n              size=\"4096\"\n              initialSize=\"1024\"\n              autowarmCount=\"1024\"\n              regenerator=\"com.mycompany.MyRegenerator\"\n              />\n      -->\n\n\n    <!-- Lazy Field Loading\n\n         If true, stored fields that are not requested will be loaded\n         lazily.  This can result in a significant speed improvement\n         if the usual case is to not load all stored fields,\n         especially if the skipped fields are large compressed text\n         fields.\n    -->\n    <enableLazyFieldLoading>true</enableLazyFieldLoading>\n\n   <!-- Use Filter For Sorted Query\n\n        A possible optimization that attempts to use a filter to\n        satisfy a search.  If the requested sort does not include\n        score, then the filterCache will be checked for a filter\n        matching the query. If found, the filter will be used as the\n        source of document ids, and then the sort will be applied to\n        that.\n\n        For most situations, this will not be useful unless you\n        frequently get the same search repeatedly with different sort\n        options, and none of them ever use \"score\"\n     -->\n   <!--\n      <useFilterForSortedQuery>true</useFilterForSortedQuery>\n     -->\n\n   <!-- Result Window Size\n\n        An optimization for use with the queryResultCache.  When a search\n        is requested, a superset of the requested number of document ids\n        are collected.  For example, if a search for a particular query\n        requests matching documents 10 through 19, and queryResultWindowSize is 50,\n        then documents 0 through 49 will be collected and cached.  
Any further\n        requests in that range can be satisfied via the cache.\n     -->\n   <queryResultWindowSize>20</queryResultWindowSize>\n\n   <!-- Maximum number of documents to cache for any entry in the\n        queryResultCache.\n     -->\n   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>\n\n   <!-- Query Related Event Listeners\n\n        Various IndexSearcher related events can trigger Listeners to\n        take actions.\n\n        newSearcher - fired whenever a new searcher is being prepared\n        and there is a current searcher handling requests (aka\n        registered).  It can be used to prime certain caches to\n        prevent long request times for certain requests.\n\n        firstSearcher - fired whenever a new searcher is being\n        prepared but there is no current registered searcher to handle\n        requests or to gain autowarming data from.\n\n\n     -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence.\n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">solr rocks</str><str name=\"start\">0</str><str name=\"rows\">10</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  
If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n    <!-- Max Warming Searchers\n\n         Maximum number of searchers that may be warming in the\n         background concurrently.  An error is returned if this limit\n         is exceeded.\n\n         Recommended values of 1-2 for read-only slaves, higher for\n         masters w/o cache warming.\n      -->\n    <maxWarmingSearchers>2</maxWarmingSearchers>\n\n  </query>\n\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n\n       handleSelect affects the behavior of requests such as /select?qt=XXX\n\n       handleSelect=\"true\" will cause the SolrDispatchFilter to process\n       the request and will result in consistent error handling and\n       formatting for all types of requests.\n\n       handleSelect=\"false\" will cause the SolrDispatchFilter to\n       ignore \"/select\" requests and fall back to using the legacy\n       SolrServlet and its Solr 1.1 style error formatting\n    -->\n  <requestDispatcher handleSelect=\"true\" >\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size of\n         Multipart File Uploads that Solr will allow in a Request.\n\n         *** WARNING ***\n         The settings below authorize Solr to fetch remote files. You\n         should make sure your system has some authentication before\n         using enableRemoteStreaming=\"true\"\n\n      -->\n    <requestParsers enableRemoteStreaming=\"true\"\n                    
multipartUploadLimitInKB=\"2048000\" />\n\n    <!-- HTTP Caching\n\n         Set HTTP caching related parameters (for proxy caches and clients).\n\n         The options below instruct Solr not to output any HTTP Caching\n         related headers\n      -->\n    <httpCaching never304=\"true\" />\n    <!-- If you include a <cacheControl> directive, it will be used to\n         generate a Cache-Control header (as well as an Expires header\n         if the value contains \"max-age=\")\n\n         By default, no Cache-Control header is generated.\n\n         You can use the <cacheControl> option even if you have set\n         never304=\"true\"\n      -->\n    <!--\n       <httpCaching never304=\"true\" >\n         <cacheControl>max-age=30, public</cacheControl>\n       </httpCaching>\n      -->\n    <!-- To enable Solr to respond with automatically generated HTTP\n         Caching headers, and to respond to Cache Validation requests\n         correctly, set the value of never304=\"false\"\n\n         This will cause Solr to generate Last-Modified and ETag\n         headers based on the properties of the Index.\n\n         The following options can also be specified to affect the\n         values of these headers...\n\n         lastModFrom - the default value is \"openTime\" which means the\n         Last-Modified value (and validation against If-Modified-Since\n         requests) will all be relative to when the current Searcher\n         was opened.  
You can change it to lastModFrom=\"dirLastMod\" if\n         you want the value to exactly correspond to when the physical\n         index was last modified.\n\n         etagSeed=\"...\" is an option you can change to force the ETag\n         header (and validation against If-None-Match requests) to be\n         different even if the index has not changed (ie: when making\n         significant changes to your config file)\n\n         (lastModifiedFrom and etagSeed are both ignored if you use\n         the never304=\"true\" option)\n      -->\n    <!--\n       <httpCaching lastModifiedFrom=\"openTime\"\n                    etagSeed=\"Solr\">\n         <cacheControl>max-age=30, public</cacheControl>\n       </httpCaching>\n      -->\n  </requestDispatcher>\n\n  <!-- Request Handlers\n\n       http://wiki.apache.org/solr/SolrRequestHandler\n\n       incoming queries will be dispatched to the correct handler\n       based on the path or the qt (query type) param.\n\n       Names starting with a '/' are accessed with a path equal to\n       the registered name.  
Names without a leading '/' are accessed\n       with: http://host/app/[core/]select?qt=name\n\n       If a /select request is processed without a qt param\n       specified, the requestHandler that declares default=\"true\" will\n       be used.\n\n       If a Request Handler is declared with startup=\"lazy\", then it will\n       not be initialized until the first request that uses it.\n\n    -->\n  <!-- SearchHandler\n\n       http://wiki.apache.org/solr/SearchHandler\n\n       For processing Search Queries, the primary Request Handler\n       provided with Solr is \"SearchHandler\". It delegates to a sequence\n       of SearchComponents (see below) and supports distributed\n       queries across multiple shards\n    -->\n  <!--<requestHandler name=\"search\" class=\"solr.SearchHandler\" default=\"true\">-->\n    <!-- default values for query parameters can be specified, these\n         will be overridden by parameters in the request\n      -->\n     <!--<lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>\n       <int name=\"rows\">10</int>\n     </lst>-->\n    <!-- In addition to defaults, \"appends\" params can be specified\n         to identify values which should be appended to the list of\n         multi-val params from the query (or the existing \"defaults\").\n      -->\n    <!-- In this example, the param \"fq=instock:true\" would be appended to\n         any query time fq params the user may specify, as a mechanism for\n         partitioning the index, independent of any user selected filtering\n         that may also be desired (perhaps as a result of faceted searching).\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"appends\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"appends\">\n         <str name=\"fq\">inStock:true</str>\n       </lst>\n      -->\n    <!-- \"invariants\" are a way of 
letting the Solr maintainer lock down\n         the options available to Solr clients.  Any params values\n         specified here are used regardless of what values may be specified\n         in either the query, the \"defaults\", or the \"appends\" params.\n\n         In this example, the facet.field and facet.query params would\n         be fixed, limiting the facets clients can use.  Faceting is\n         not turned on by default - but if the client does specify\n         facet=true in the request, these are the only facets they\n         will be able to see counts for; regardless of what other\n         facet.field or facet.query params they may specify.\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"invariants\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"invariants\">\n         <str name=\"facet.field\">cat</str>\n         <str name=\"facet.field\">manu_exact</str>\n         <str name=\"facet.query\">price:[* TO 500]</str>\n         <str name=\"facet.query\">price:[500 TO *]</str>\n       </lst>\n      -->\n    <!-- If the default list of SearchComponents is not desired, that\n         list can either be overridden completely, or components can be\n         prepended or appended to the default list.  
(see below)\n      -->\n    <!--\n       <arr name=\"components\">\n         <str>nameOfCustomComponent1</str>\n         <str>nameOfCustomComponent2</str>\n       </arr>\n      -->\n    <!--</requestHandler>-->\n\n  <!-- A Robust Example\n\n       This example SearchHandler declaration shows off usage of the\n       SearchHandler with many defaults declared\n\n       Note that multiple instances of the same Request Handler\n       (SearchHandler) can be registered multiple times with different\n       names (and different init parameters)\n    -->\n  <!--\n  <requestHandler name=\"/browse\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>-->\n\n       <!-- VelocityResponseWriter settings -->\n       <!--<str name=\"wt\">velocity</str>\n\n       <str name=\"v.template\">browse</str>\n       <str name=\"v.layout\">layout</str>\n       <str name=\"title\">Solritas</str>\n\n       <str name=\"defType\">edismax</str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n       <str name=\"mlt.qf\">\n         text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"mlt.fl\">text,features,name,sku,id,manu,cat</str>\n       <int name=\"mlt.count\">3</int>\n\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n\n       <str name=\"facet\">on</str>\n       <str name=\"facet.field\">cat</str>\n       <str name=\"facet.field\">manu_exact</str>\n       <str name=\"facet.query\">ipod</str>\n       <str name=\"facet.query\">GB</str>\n       <str name=\"facet.mincount\">1</str>\n       <str name=\"facet.pivot\">cat,inStock</str>\n       <str name=\"facet.range.other\">after</str>\n       <str name=\"facet.range\">price</str>\n       <int name=\"f.price.facet.range.start\">0</int>\n       <int name=\"f.price.facet.range.end\">600</int>\n       <int 
name=\"f.price.facet.range.gap\">50</int>\n       <str name=\"facet.range\">popularity</str>\n       <int name=\"f.popularity.facet.range.start\">0</int>\n       <int name=\"f.popularity.facet.range.end\">10</int>\n       <int name=\"f.popularity.facet.range.gap\">3</int>\n       <str name=\"facet.range\">manufacturedate_dt</str>\n       <str name=\"f.manufacturedate_dt.facet.range.start\">NOW/YEAR-10YEARS</str>\n       <str name=\"f.manufacturedate_dt.facet.range.end\">NOW</str>\n       <str name=\"f.manufacturedate_dt.facet.range.gap\">+1YEAR</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">before</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">after</str>-->\n\n\n       <!-- Highlighting defaults -->\n       <!--<str name=\"hl\">on</str>\n       <str name=\"hl.fl\">text features name</str>\n       <str name=\"f.name.hl.fragsize\">0</str>\n       <str name=\"f.name.hl.alternateField\">name</str>\n     </lst>\n     <arr name=\"last-components\">\n       <str>spellcheck</str>\n     </arr>-->\n     <!--\n     <str name=\"url-scheme\">httpx</str>\n     -->\n  <!--</requestHandler>-->\n  <!-- trivia: the name pinkPony requestHandler was an agreement between the Search API and the\n    apachesolr maintainers. 
The decision was taken during the Drupalcon Munich codesprint.\n    -->\n  <requestHandler name=\"pinkPony\" class=\"solr.SearchHandler\" default=\"true\">\n    <lst name=\"defaults\">\n      <str name=\"defType\">edismax</str>\n      <str name=\"echoParams\">explicit</str>\n      <bool name=\"omitHeader\">true</bool>\n      <float name=\"tie\">0.01</float>\n      <!-- Don't abort searches for the pinkPony request handler (set in solrcore.properties) -->\n      <int name=\"timeAllowed\">${solr.pinkPony.timeAllowed:-1}</int>\n      <str name=\"q.alt\">*:*</str>\n\n      <!-- By default, don't spell check -->\n      <str name=\"spellcheck\">false</str>\n      <!-- Defaults for the spell checker when used -->\n      <str name=\"spellcheck.onlyMorePopular\">true</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <!--  The number of suggestions to return -->\n      <str name=\"spellcheck.count\">1</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- The more like this handler offers many advantages over the standard handler,\n     when performing moreLikeThis requests.-->\n  <requestHandler name=\"mlt\" class=\"solr.MoreLikeThisHandler\">\n    <lst name=\"defaults\">\n      <str name=\"mlt.mintf\">1</str>\n      <str name=\"mlt.mindf\">1</str>\n      <str name=\"mlt.minwl\">3</str>\n      <str name=\"mlt.maxwl\">15</str>\n      <str name=\"mlt.maxqt\">20</str>\n      <str name=\"mlt.match.include\">false</str>\n      <!-- Abort any searches longer than 2 seconds (set in solrcore.properties) -->\n      <int name=\"timeAllowed\">${solr.mlt.timeAllowed:2000}</int>\n    </lst>\n  </requestHandler>\n\n  <!-- A minimal query type for doing lucene queries -->\n  <requestHandler name=\"standard\" class=\"solr.SearchHandler\">\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">explicit</str>\n      <bool name=\"omitHeader\">true</bool>\n    </lst>\n  
</requestHandler>\n\n  <!-- XML Update Request Handler.\n\n       http://wiki.apache.org/solr/UpdateXmlMessages\n\n       The canonical Request Handler for Modifying the Index through\n       commands specified using XML.\n\n       Note: Since solr1.1 requestHandlers require a valid content\n       type header if posted in the body. For example, curl now\n       requires: -H 'Content-type:text/xml; charset=utf-8'\n    -->\n  <requestHandler name=\"/update\"\n                  class=\"solr.UpdateRequestHandler\">\n    <!-- See below for information on defining\n         updateRequestProcessorChains that can be used by name\n         on each Update Request\n      -->\n    <!--\n       <lst name=\"defaults\">\n         <str name=\"update.chain\">dedupe</str>\n       </lst>\n       -->\n    </requestHandler>\n  <!-- Binary Update Request Handler\n       http://wiki.apache.org/solr/javabin\n    -->\n  <requestHandler name=\"/update/javabin\"\n                  class=\"solr.UpdateRequestHandler\" />\n\n  <!-- CSV Update Request Handler\n       http://wiki.apache.org/solr/UpdateCSV\n    -->\n  <requestHandler name=\"/update/csv\"\n                  class=\"solr.CSVRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- JSON Update Request Handler\n       http://wiki.apache.org/solr/UpdateJSON\n    -->\n  <requestHandler name=\"/update/json\"\n                  class=\"solr.JsonUpdateRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- Solr Cell Update Request Handler\n\n       http://wiki.apache.org/solr/ExtractingRequestHandler\n\n    -->\n  <requestHandler name=\"/update/extract\"\n                  startup=\"lazy\"\n                  class=\"solr.extraction.ExtractingRequestHandler\" >\n    <lst name=\"defaults\">\n      <!-- All the main content goes into \"text\"... if you need to return\n           the extracted text or do highlighting, use a stored field. 
-->\n      <str name=\"fmap.content\">text</str>\n      <str name=\"lowernames\">true</str>\n      <str name=\"uprefix\">ignored_</str>\n\n      <!-- capture link hrefs but ignore div attributes -->\n      <str name=\"captureAttr\">true</str>\n      <str name=\"fmap.a\">links</str>\n      <str name=\"fmap.div\">ignored_</str>\n    </lst>\n  </requestHandler>\n\n  <!-- XSLT Update Request Handler\n       Transforms incoming XML with stylesheet identified by tr=\n  -->\n  <requestHandler name=\"/update/xslt\"\n                   startup=\"lazy\"\n                   class=\"solr.XsltUpdateRequestHandler\"/>\n\n  <!-- Field Analysis Request Handler\n\n       RequestHandler that provides much the same functionality as\n       analysis.jsp. Provides the ability to specify multiple field\n       types and field names in the same request and outputs\n       index-time and query-time analysis for each of them.\n\n       Request parameters are:\n       analysis.fieldname - field name whose analyzers are to be used\n\n       analysis.fieldtype - field type whose analyzers are to be used\n       analysis.fieldvalue - text for index-time analysis\n       q (or analysis.q) - text for query time analysis\n       analysis.showmatch (true|false) - When set to true and when\n           query analysis is performed, the produced tokens of the\n           field value analysis will be marked as \"matched\" for every\n           token that is produced by the query analysis\n   -->\n  <requestHandler name=\"/analysis/field\"\n                  startup=\"lazy\"\n                  class=\"solr.FieldAnalysisRequestHandler\" />\n\n\n  <!-- Document Analysis Handler\n\n       http://wiki.apache.org/solr/AnalysisRequestHandler\n\n       An analysis handler that provides a breakdown of the analysis\n       process of provided documents. 
This handler expects a (single)\n       content stream with the following format:\n\n       <docs>\n         <doc>\n           <field name=\"id\">1</field>\n           <field name=\"name\">The Name</field>\n           <field name=\"text\">The Text Value</field>\n         </doc>\n         <doc>...</doc>\n         <doc>...</doc>\n         ...\n       </docs>\n\n    Note: Each document must contain a field which serves as the\n    unique key. This key is used in the returned response to associate\n    an analysis breakdown to the analyzed document.\n\n    Like the FieldAnalysisRequestHandler, this handler also supports\n    query analysis by sending either an \"analysis.query\" or \"q\"\n    request parameter that holds the query text to be analyzed. It\n    also supports the \"analysis.showmatch\" parameter which, when set to\n    true, causes all field tokens that match the query tokens to be marked\n    as a \"match\".\n  -->\n  <requestHandler name=\"/analysis/document\"\n                  class=\"solr.DocumentAnalysisRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- Admin Handlers\n\n       Admin Handlers - This will register all the standard admin\n       RequestHandlers.\n    -->\n  <requestHandler name=\"/admin/\" class=\"solr.admin.AdminHandlers\" />\n  <!-- This single handler is equivalent to the following... 
-->\n  <!--\n     <requestHandler name=\"/admin/luke\"       class=\"solr.admin.LukeRequestHandler\" />\n     <requestHandler name=\"/admin/system\"     class=\"solr.admin.SystemInfoHandler\" />\n     <requestHandler name=\"/admin/plugins\"    class=\"solr.admin.PluginInfoHandler\" />\n     <requestHandler name=\"/admin/threads\"    class=\"solr.admin.ThreadDumpHandler\" />\n     <requestHandler name=\"/admin/properties\" class=\"solr.admin.PropertiesRequestHandler\" />\n     <requestHandler name=\"/admin/file\"       class=\"solr.admin.ShowFileRequestHandler\" >\n    -->\n  <!-- If you wish to hide files under ${solr.home}/conf, explicitly\n       register the ShowFileRequestHandler using:\n    -->\n  <!--\n     <requestHandler name=\"/admin/file\"\n                     class=\"solr.admin.ShowFileRequestHandler\" >\n       <lst name=\"invariants\">\n         <str name=\"hidden\">synonyms.txt</str>\n         <str name=\"hidden\">anotherfile.txt</str>\n       </lst>\n     </requestHandler>\n    -->\n\n  <!-- ping/healthcheck -->\n  <requestHandler name=\"/admin/ping\" class=\"solr.PingRequestHandler\">\n    <lst name=\"invariants\">\n      <str name=\"qt\">pinkPony</str>\n      <str name=\"q\">solrpingquery</str>\n      <str name=\"omitHeader\">false</str>\n    </lst>\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">all</str>\n    </lst>\n    <!-- An optional feature of the PingRequestHandler is to configure the\n         handler with a \"healthcheckFile\" which can be used to enable/disable\n         the PingRequestHandler.\n         relative paths are resolved against the data dir\n    -->\n    <!-- <str name=\"healthcheckFile\">server-enabled.txt</str> -->\n  </requestHandler>\n\n  <!-- Echo the request contents back to the client -->\n  <requestHandler name=\"/debug/dump\" class=\"solr.DumpRequestHandler\" >\n    <lst name=\"defaults\">\n     <str name=\"echoParams\">explicit</str>\n     <str name=\"echoHandler\">true</str>\n    </lst>\n  
</requestHandler>\n\n  <!-- Solr Replication\n\n       The SolrReplicationHandler supports replicating indexes from a\n       \"master\" used for indexing and \"slaves\" used for queries.\n\n       http://wiki.apache.org/solr/SolrReplication\n\n       In the example below, remove the <lst name=\"master\"> section if\n       this is just a slave and remove  the <lst name=\"slave\"> section\n       if this is just a master.\n    -->\n  <requestHandler name=\"/replication\" class=\"solr.ReplicationHandler\" >\n    <lst name=\"master\">\n      <str name=\"enable\">${solr.replication.master:false}</str>\n      <str name=\"replicateAfter\">commit</str>\n      <str name=\"replicateAfter\">startup</str>\n      <str name=\"confFiles\">${solr.replication.confFiles:schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml}</str>\n    </lst>\n    <lst name=\"slave\">\n      <str name=\"enable\">${solr.replication.slave:false}</str>\n      <str name=\"masterUrl\">${solr.replication.masterUrl:http://localhost:8983/solr}/replication</str>\n      <str name=\"pollInterval\">${solr.replication.pollInterval:00:00:60}</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Realtime get handler, guaranteed to return the latest stored fields of\n       any document, without the need to commit or open a new searcher.  
The\n       current implementation relies on the updateLog feature being enabled.\n  -->\n  <requestHandler name=\"/get\" class=\"solr.RealTimeGetHandler\">\n    <lst name=\"defaults\">\n      <str name=\"omitHeader\">true</str>\n      <str name=\"wt\">json</str>\n      <str name=\"indent\">true</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of the standard names,\n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n\n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n\n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\"\n\n     -->\n\n  <!-- A request handler for demonstrating the spellcheck component.\n\n       NOTE: This is purely as an example.  
The whole purpose of the\n       SpellCheckComponent is to hook it into the request handler that\n       handles your normal user queries so that a separate request is\n       not needed to get suggestions.\n\n       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS\n       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!\n\n       See http://wiki.apache.org/solr/SpellCheckComponent for details\n       on the request parameters.\n    -->\n  <requestHandler name=\"/spell\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <str name=\"spellcheck.onlyMorePopular\">false</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <str name=\"spellcheck.count\">1</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Term Vector Component\n\n       http://wiki.apache.org/solr/TermVectorComponent\n    -->\n  <searchComponent name=\"tvComponent\" class=\"solr.TermVectorComponent\"/>\n\n  <!-- A request handler for demonstrating the term vector component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your\n       already specified request handlers.\n    -->\n  <requestHandler name=\"tvrh\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <bool name=\"tv\">true</bool>\n    </lst>\n    <arr name=\"last-components\">\n      <str>tvComponent</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Clustering Component\n\n       http://wiki.apache.org/solr/ClusteringComponent\n\n       This relies on third party jars which are not included in the\n       release.  
To use this component (and the \"/clustering\" handler),\n       those jars will need to be downloaded, and you'll need to set\n       the solr.clustering.enabled system property when running solr...\n\n          java -Dsolr.clustering.enabled=true -jar start.jar\n    -->\n  <!-- <searchComponent name=\"clustering\"\n                   enable=\"${solr.clustering.enabled:false}\"\n                   class=\"solr.clustering.ClusteringComponent\" > -->\n    <!-- Declare an engine -->\n    <!--<lst name=\"engine\">-->\n      <!-- The name, only one can be named \"default\" -->\n      <!--<str name=\"name\">default</str>-->\n\n      <!-- Class name of Carrot2 clustering algorithm.\n\n           Currently available algorithms are:\n\n           * org.carrot2.clustering.lingo.LingoClusteringAlgorithm\n           * org.carrot2.clustering.stc.STCClusteringAlgorithm\n           * org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm\n\n           See http://project.carrot2.org/algorithms.html for the\n           algorithm's characteristics.\n        -->\n      <!--<str name=\"carrot.algorithm\">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>-->\n\n      <!-- Overriding values for Carrot2 default algorithm attributes.\n\n           For a description of all available attributes, see:\n           http://download.carrot2.org/stable/manual/#chapter.components.\n           Use attribute key as name attribute of str elements\n           below. These can be further overridden for individual\n           requests by specifying attribute key as request parameter\n           name and attribute value as parameter value.\n        -->\n      <!--<str name=\"LingoClusteringAlgorithm.desiredClusterCountBase\">20</str>-->\n\n      <!-- Location of Carrot2 lexical resources.\n\n           A directory from which to load Carrot2-specific stop words\n           and stop labels. Absolute or relative to Solr config directory.\n           If a specific resource (e.g. 
stopwords.en) is present in the\n           specified dir, it will completely override the corresponding\n           default one that ships with Carrot2.\n\n           For an overview of Carrot2 lexical resources, see:\n           http://download.carrot2.org/head/manual/#chapter.lexical-resources\n        -->\n      <!--<str name=\"carrot.lexicalResourcesDir\">clustering/carrot2</str>-->\n\n      <!-- The language to assume for the documents.\n\n           For a list of allowed values, see:\n           http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage\n       -->\n      <!--<str name=\"MultilingualClustering.defaultLanguage\">ENGLISH</str>\n    </lst>\n    <lst name=\"engine\">\n      <str name=\"name\">stc</str>\n      <str name=\"carrot.algorithm\">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>\n    </lst>\n  </searchComponent>-->\n\n  <!-- A request handler for demonstrating the clustering component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your\n       already specified request handlers.\n    -->\n  <!--<requestHandler name=\"/clustering\"\n                  startup=\"lazy\"\n                  enable=\"${solr.clustering.enabled:false}\"\n                  class=\"solr.SearchHandler\">\n    <lst name=\"defaults\">\n      <bool name=\"clustering\">true</bool>\n      <str name=\"clustering.engine\">default</str>\n      <bool name=\"clustering.results\">true</bool>-->\n      <!-- The title field -->\n      <!--<str name=\"carrot.title\">name</str>-->\n      <!--<str name=\"carrot.url\">id</str>-->\n      <!-- The field to cluster on -->\n       <!--<str name=\"carrot.snippet\">features</str>-->\n       <!-- produce summaries -->\n       <!--<bool name=\"carrot.produceSummary\">true</bool>-->\n       <!-- the maximum number of labels per cluster -->\n       <!--<int name=\"carrot.numDescriptions\">5</int>-->\n       <!-- produce sub 
clusters -->\n       <!--<bool name=\"carrot.outputSubClusters\">false</bool>-->\n\n       <!--<str name=\"defType\">edismax</str>\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>clustering</str>\n    </arr>\n  </requestHandler>-->\n\n  <!-- Terms Component\n\n       http://wiki.apache.org/solr/TermsComponent\n\n       A component to return terms and document frequency of those\n       terms\n    -->\n  <searchComponent name=\"terms\" class=\"solr.TermsComponent\"/>\n\n  <!-- A request handler for demonstrating the terms component -->\n  <requestHandler name=\"/terms\" class=\"solr.SearchHandler\" startup=\"lazy\">\n     <lst name=\"defaults\">\n      <bool name=\"terms\">true</bool>\n    </lst>\n    <arr name=\"components\">\n      <str>terms</str>\n    </arr>\n  </requestHandler>\n\n\n  <!-- Query Elevation Component\n\n       http://wiki.apache.org/solr/QueryElevationComponent\n\n       a search component that enables you to configure the top\n       results for a given query regardless of the normal lucene\n       scoring.\n    -->\n  <searchComponent name=\"elevator\" class=\"solr.QueryElevationComponent\" >\n    <!-- pick a fieldType to analyze queries -->\n    <str name=\"queryFieldType\">string</str>\n    <str name=\"config-file\">elevate.xml</str>\n  </searchComponent>\n\n  <!-- A request handler for demonstrating the elevator component -->\n  <requestHandler name=\"/elevate\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">explicit</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  
<searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\"\n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter\n           (for sentence extraction)\n        -->\n      <fragmenter name=\"regex\"\n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      </fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\"\n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<strong>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</strong>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure the standard encoder -->\n      <encoder name=\"html\"\n               class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\"\n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- 
default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\"\n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!--\n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawngreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n\n      <boundaryScanner name=\"default\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? &#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n\n      <boundaryScanner name=\"breakIterator\"\n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing Locale object.  
-->\n          <!-- And the Locale object will be used when getting instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    -->\n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.\n\n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n    <!--\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. 
No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n    <!--\n     <updateRequestProcessorChain name=\"langid\">\n       <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n         <str name=\"langid.fl\">text,title,subject,description</str>\n         <str name=\"langid.langField\">language_s</str>\n         <str name=\"langid.fallback\">en</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     <queryResponseWriter name=\"xml\"\n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n    -->\n\n  
<queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n     <!-- For the purposes of the tutorial, JSON responses are written as\n      plain text so that they are easy to read in *any* browser.\n      If you expect a MIME type of \"application/json\" just remove this override.\n     -->\n    <str name=\"content-type\">text/plain; charset=UTF-8</str>\n  </queryResponseWriter>\n\n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!-- <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" enable=\"${solr.velocity.enabled:true}\"/> -->\n\n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       http://wiki.apache.org/solr/SolrQuerySyntax\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\"\n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n 
 <!-- Legacy config for the admin interface -->\n  <admin>\n    <defaultQuery>*:*</defaultQuery>\n\n    <!-- configure a healthcheck file for servers behind a\n         loadbalancer\n      -->\n    <!--\n       <healthcheck type=\"file\">server-enabled</healthcheck>\n      -->\n  </admin>\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  <xi:include href=\"solrconfig_extra.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback>\n      <!-- Spell Check\n\n          The spell check component can return a list of alternative spelling\n          suggestions. This component must be defined in\n          solrconfig_extra.xml if present, since it's used in the search handler.\n\n          http://wiki.apache.org/solr/SpellCheckComponent\n       -->\n      <searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n        <str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n        <!-- a spellchecker built from a field of the main index -->\n        <lst name=\"spellchecker\">\n          <str name=\"name\">default</str>\n          <str name=\"field\">spell</str>\n          <str name=\"spellcheckIndexDir\">spellchecker</str>\n          <str name=\"buildOnOptimize\">true</str>\n        </lst>\n      </searchComponent>\n    </xi:fallback>\n  </xi:include>\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/solrconfig_extra.xml",
    "content": "<!-- Spell Check\n\n    The spell check component can return a list of alternative spelling\n    suggestions.\n\n    http://wiki.apache.org/solr/SpellCheckComponent\n -->\n<searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n<str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n<!-- Multiple \"Spell Checkers\" can be declared and used by this\n     component\n  -->\n\n<!-- a spellchecker built from a field of the main index, and\n     written to disk\n  -->\n<lst name=\"spellchecker\">\n  <str name=\"name\">default</str>\n  <str name=\"field\">spell</str>\n  <str name=\"spellcheckIndexDir\">spellchecker</str>\n  <str name=\"buildOnOptimize\">true</str>\n  <!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary\n    <float name=\"thresholdTokenFrequency\">.01</float>\n  -->\n</lst>\n\n<!--\n  Adding a German spellchecker index to our Solr index\n  This also requires enabling the content in schema_extra_types.xml and schema_extra_fields.xml\n-->\n<!--\n<lst name=\"spellchecker\">\n  <str name=\"name\">spellchecker_de</str>\n  <str name=\"field\">spell_de</str>\n  <str name=\"spellcheckIndexDir\">./spellchecker_de</str>\n  <str name=\"buildOnOptimize\">true</str>\n</lst>\n-->\n\n<!-- a spellchecker that uses a different distance measure -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">jarowinkler</str>\n     <str name=\"field\">spell</str>\n     <str name=\"distanceMeasure\">\n       org.apache.lucene.search.spell.JaroWinklerDistance\n     </str>\n     <str name=\"spellcheckIndexDir\">spellcheckerJaro</str>\n   </lst>\n -->\n\n<!-- a spellchecker that uses an alternate comparator\n\n     comparatorClass must be one of:\n      1. score (default)\n      2. freq (Frequency first, then score)\n      3. 
A fully qualified class name\n  -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">freq</str>\n     <str name=\"field\">lowerfilt</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFreq</str>\n     <str name=\"comparatorClass\">freq</str>\n     <str name=\"buildOnCommit\">true</str>\n  -->\n\n<!-- A spellchecker that reads the list of words from a file -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"classname\">solr.FileBasedSpellChecker</str>\n     <str name=\"name\">file</str>\n     <str name=\"sourceLocation\">spellings.txt</str>\n     <str name=\"characterEncoding\">UTF-8</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFile</str>\n   </lst>\n  -->\n</searchComponent>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:8099/solr\nsolr.replication.confFiles=schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml\nsolr.mlt.timeAllowed=2000\n# You should not set your luceneMatchVersion to anything lower than your Solr\n# Version.\nsolr.luceneMatchVersion=LUCENE_40\nsolr.pinkPony.timeAllowed=-1\n# autoCommit after 10000 docs\nsolr.autoCommit.MaxDocs=10000\n# autoCommit after 2 minutes\nsolr.autoCommit.MaxTime=120000\n# autoSoftCommit after 2000 docs\nsolr.autoSoftCommit.MaxDocs=2000\n# autoSoftCommit after 10 seconds\nsolr.autoSoftCommit.MaxTime=10000\nsolr.contrib.dir=../../../contrib\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/stopwords.txt",
    "content": "# Contains words which shouldn't be indexed for fulltext fields, e.g., because\n# they're too common. For documentation of the format, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.StopFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr4_drupal7/synonyms.txt",
    "content": "# Contains synonyms to use for your index. For the format used, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal10/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. Search API generally constructs item IDs (esp. for entities) as:\n     $document->id = \"$site_hash-$index_id-$datasource:$entity_id:$language_id\";\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #789 first in searches for \"example query\": -->\n<!--\n  <query text=\"example query\">\n    <doc id=\"ab12cd34-site_index-entity:789:en\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal10/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n<!DOCTYPE schema [\n  <!ENTITY extrafields SYSTEM \"schema_extra_fields.xml\">\n  <!ENTITY extratypes SYSTEM \"schema_extra_types.xml\">\n]>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n This example schema is the recommended starting point for users.\n It should be kept correct and concise, usable out-of-the-box.\n\n For more information, on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n\n PERFORMANCE NOTE: this schema includes many optional features and should not\n be used for benchmarking.  
To improve performance one could\n  - set stored=\"false\" for all fields possible (esp large fields) when you\n    only need to search on the field but don't need to return the original\n    value.\n  - set indexed=\"false\" if you don't need to search on the field, but only\n    return the field as a result of searching on other indexed fields.\n  - remove all unneeded copyField statements\n  - for best index size and searching performance, set \"index\" to false\n    for all general text fields, use copyField to copy them to the\n    catchall \"text\" field, and use that for searching.\n  - For maximum indexing performance, use the ConcurrentUpdateSolrServer\n    java client.\n  - Remember to run the JVM in server mode, and use a higher logging level\n    that avoids logging every request\n-->\n\n<schema name=\"drupal-SEARCH_API_SOLR_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" version=\"1.6\">\n  <!-- attribute \"name\" is the name of this schema and is only used for display purposes.\n       version=\"x.y\" is Solr's version number for the schema syntax and\n       semantics.  It should not normally be changed by applications.\n\n       1.0: multiValued attribute did not exist, all fields are multiValued\n            by nature\n       1.1: multiValued attribute introduced, false by default\n       1.2: omitTermFreqAndPositions attribute introduced, true by default\n            except for text fields.\n       1.3: removed optional field compress feature\n       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser\n            behavior when a single string produces multiple tokens.  
Defaults\n            to off for version >= 1.4\n       1.5: omitNorms defaults to true for primitive field types\n            (int, float, boolean, string...)\n       1.6: useDocValuesAsStored defaults to true.\n     -->\n\n\n  <!-- Valid attributes for fields:\n    name: mandatory - the name for the field\n    type: mandatory - the name of a field type from the\n      fieldTypes\n    indexed: true if this field should be indexed (searchable or sortable)\n    stored: true if this field should be retrievable\n    docValues: true if this field should have doc values. Doc values are\n      useful (required, if you are using *Point fields) for faceting,\n      grouping, sorting and function queries. Doc values will make the index\n      faster to load, more NRT-friendly and more memory-efficient.\n      They however come with some limitations: they are currently only\n      supported by StrField, UUIDField, all *PointFields, and depending\n      on the field type, they might require the field to be single-valued,\n      be required or have a default value (check the documentation\n      of the field type you're interested in for more information)\n    multiValued: true if this field may contain multiple values per document\n    omitNorms: (expert) set to true to omit the norms associated with\n      this field (this disables length normalization and index-time\n      boosting for the field, and saves some memory).  Only full-text\n      fields or fields that need an index-time boost need norms.\n      Norms are omitted for primitive (non-analyzed) types by default.\n    termVectors: [false] set to true to store the term vector for a\n      given field.\n      When using MoreLikeThis, fields used for similarity should be\n      stored for best performance.\n    termPositions: Store position information with the term vector.\n      This will increase storage costs.\n    termOffsets: Store offset information with the term vector. 
This\n      will increase storage costs.\n    termPayloads: Store payload information with the term vector. This\n      will increase storage costs.\n    required: The field is required.  It will throw an error if the\n      value does not exist\n    default: a value that should be used if no value is specified\n      when adding a document.\n  -->\n\n  <!-- field names should consist of alphanumeric or underscore characters only and\n     not start with a digit.  This is not currently strictly enforced,\n     but other field names will not have first class support from all components\n     and back compatibility is not guaranteed.  Names with both leading and\n     trailing underscores (e.g. _version_) are reserved.\n  -->\n\n  <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml\n     or Solr won't start. _version_ and update log are required for SolrCloud\n  -->\n  <!-- doc values are enabled by default for primitive types such as long so we don't index the version field  -->\n  <field name=\"_version_\" type=\"plong\" indexed=\"false\" stored=\"false\"/>\n\n  <!-- points to the root document of a block of nested documents. Required for nested\n     document support, may be removed otherwise\n  -->\n  <field name=\"_root_\" type=\"string\" indexed=\"true\" stored=\"false\" docValues=\"false\"/>\n\n  <!-- Only remove the \"id\" field if you have a very good reason to. While not strictly\n  required, it is highly recommended. A <uniqueKey> is present in almost all Solr\n  installations. See the <uniqueKey> declaration below where <uniqueKey> is set to \"id\".\n  -->\n  <!-- The document id is usually derived from a site-specific key (hash) and the\n    entity type and ID like:\n    Search Api 7.x:\n      The format used is $document->id = $index_id . '-' . $item_id\n    Search Api 8.x:\n      The format used is $document->id = $site_hash . '-' . $index_id . '-' . 
$item_id\n    Apache Solr Search Integration 7.x:\n      The format used is $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n  -->\n  <!-- The Highlighter Component requires the id field to be \"stored\" even if docValues are set. -->\n  <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Search Api specific fields -->\n  <!-- index_id is the machine name of the search index this entry belongs to. -->\n  <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Here, default is used to create a \"timestamp\" field indicating\n       when each document was indexed.-->\n  <field name=\"timestamp\" type=\"pdate\" indexed=\"true\" stored=\"false\" default=\"NOW\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"boost_document\" type=\"pfloat\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"boost_term\" type=\"boost_term_payload\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n  <!-- Currently the suggester context filter query (suggest.cfq) accesses the tags using the stored values, not the indexed terms or the docValues.\n       Therefore the dynamicField sm_* isn't suitable at the moment -->\n  <field name=\"sm_context_tags\" type=\"string\" indexed=\"true\" stored=\"true\" multiValued=\"true\" docValues=\"false\"/>\n\n  <!-- Dynamic field definitions.  
If a field name is not found, dynamicFields\n       will be used if the name matches any of the patterns.\n       RESTRICTION: the glob-like pattern in the name attribute must have\n       a \"*\" only at the start or the end.\n       EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n       Longer patterns will be matched first.  if equal size patterns\n       both match, the first appearing in the schema will be used.  -->\n\n  <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n       the last letter is 's' for single valued, 'm' for multi-valued -->\n\n  <!-- We use plong for integer since 64 bit ints are now common in PHP. -->\n  <dynamicField name=\"is_*\"  type=\"plong\"    indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"im_*\"  type=\"plong\"    indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- List of floats can be saved in a regular float field -->\n  <dynamicField name=\"fs_*\"  type=\"pfloat\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"fm_*\"  type=\"pfloat\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- List of doubles can be saved in a regular double field -->\n  <dynamicField name=\"ps_*\"  type=\"pdouble\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"pm_*\"  type=\"pdouble\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- List of booleans can be saved in a regular boolean field -->\n  <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" 
termVectors=\"true\"/>\n  <!-- Regular text (without processing) can be stored in a string field -->\n  <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- For field types using SORTED_SET, multiple identical entries are collapsed into a single value.\n       Thus if I insert values 4, 5, 2, 4, 1, my return will be 1, 2, 4, 5 when enabling docValues.\n       If you need to preserve the order and duplicate entries, consider storing the values as zm_* (twice). -->\n  <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- Special-purpose text fields -->\n  <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n  <dynamicField name=\"ds_*\"  type=\"pdate\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"dm_*\"  type=\"pdate\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- This field is used to store date ranges -->\n  <dynamicField name=\"drs_*\" type=\"date_range\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"drm_*\" type=\"date_range\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"its_*\" type=\"plong\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"itm_*\" type=\"plong\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"fts_*\" type=\"pfloat\"  indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ftm_*\" type=\"pfloat\"  indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <dynamicField name=\"pts_*\" type=\"pdouble\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ptm_*\" type=\"pdouble\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n       a small image in a search result using the data URI scheme -->\n  <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. -->\n  <dynamicField name=\"dds_*\" type=\"pdate\"    indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ddm_*\" type=\"pdate\"    indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 32 bit on 64 arch -->\n  <dynamicField name=\"hs_*\" type=\"pint\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"hm_*\" type=\"pint\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- Trie fields are deprecated. 
Point fields solve all needs. But we keep the dedicated field names for backward compatibility. -->\n  <dynamicField name=\"hts_*\" type=\"pint\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"htm_*\" type=\"pint\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n\n  <!-- Unindexed string fields that can be used to store values that won't be searchable but have docValues -->\n  <dynamicField name=\"zdvs_*\" type=\"string\" indexed=\"false\" stored=\"true\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"zdvm_*\" type=\"string\" indexed=\"false\" stored=\"true\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n  <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n  <!-- Fields for location searches.\n       http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n  <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <!-- GeoHash fields are deprecated. LatLonPointSpatial fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"geos_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"geom_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"bboxs_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"bboxm_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n  <dynamicField name=\"rpts_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"rptm_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n  <!-- External file fields -->\n  <dynamicField name=\"eff_*\" type=\"file\"/>\n\n  <!-- A random sort field -->\n  <dynamicField name=\"random_*\" type=\"random\" indexed=\"true\" stored=\"true\"/>\n\n  <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n  <dynamicField name=\"access_*\" type=\"pint\" indexed=\"true\" stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n\n  <!-- The following causes solr to ignore any fields that don't already match an existing\n       field name or dynamic field, rather than reporting them as an error.\n       Alternately, change the type=\"ignored\" to some other type e.g. \"text\" if you want\n       unknown fields indexed and/or stored by default -->\n  <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in a\n       standard package such as org.apache.solr.analysis\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       It supports doc values but in that case the field needs to be\n       single-valued and either required or have a default value.\n      -->\n    <fieldType name=\"string\" class=\"solr.StrField\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\"/>\n\n    <!-- sortMissingLast and sortMissingFirst are optional attributes\n         currently supported on types that are sorted internally as strings\n         and on numeric types.\n         This includes \"string\", \"boolean\", \"pint\", \"pfloat\", \"plong\", \"pdate\", \"pdouble\".\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!--\n      Numeric field types that index values using KD-trees.\n      Point fields don't support FieldCache, so they must have docValues=\"true\" if needed for sorting, faceting, functions, etc.\n    -->\n    <fieldType name=\"pint\" class=\"solr.IntPointField\" docValues=\"true\"/>\n    <fieldType name=\"pfloat\" class=\"solr.FloatPointField\" docValues=\"true\"/>\n 
   <fieldType name=\"plong\" class=\"solr.LongPointField\" docValues=\"true\"/>\n    <fieldType name=\"pdouble\" class=\"solr.DoublePointField\" docValues=\"true\"/>\n\n    <fieldType name=\"pints\" class=\"solr.IntPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pfloats\" class=\"solr.FloatPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"plongs\" class=\"solr.LongPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pdoubles\" class=\"solr.DoublePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 
6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the DatePointField javadocs for more information.\n      -->\n    <!-- KD-tree versions of date fields -->\n    <fieldType name=\"pdate\" class=\"solr.DatePointField\" docValues=\"true\"/>\n    <fieldType name=\"pdates\" class=\"solr.DatePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!-- A date range field -->\n    <fieldType name=\"date_range\" class=\"solr.DateRangeField\"/>\n\n    <!-- Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->\n    <fieldType name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The \"RandomSortField\" is not used to store or search any\n         data.  You can declare fields of this type in your schema\n         to generate pseudo-random orderings of your docs for sorting\n         or function purposes.  The ordering is generated based on the field\n         name and the version of the index. As long as the index version\n         remains unchanged, and the same field name is reused,\n         the ordering of the docs will be consistent.\n         If you want different pseudo-random orderings of documents,\n         for the same version of the index, use a dynamicField and\n         change the field name in the request.\n     -->\n    <fieldType name=\"random\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. 
Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element.\n         Example:\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <fieldType name=\"boost_term_payload\" stored=\"false\" indexed=\"true\" class=\"solr.TextField\" >\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n        <!--\n        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,\n        a token of \"foo|1.4\"  would be indexed as \"foo\" with a payload of 1.4f\n        Attributes of the DelimitedPayloadTokenFilterFactory :\n         \"delimiter\" - a one character delimiter. 
Default is | (pipe)\n         \"encoder\" - how to encode the following value into a payload\n           float -> org.apache.lucene.analysis.payloads.FloatEncoder,\n           integer -> o.a.l.a.p.IntegerEncoder\n           identity -> o.a.l.a.p.IdentityEncoder\n             Fully Qualified class name implementing PayloadEncoder, Encoder must have a no arg constructor.\n         -->\n        <filter class=\"solr.DelimitedPayloadTokenFilterFactory\" encoder=\"float\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- since fields of this type are by default not stored or indexed,\n         any data added to them will be ignored outright.  -->\n    <fieldType name=\"ignored\" stored=\"false\" indexed=\"false\" multiValued=\"true\" class=\"solr.StrField\" />\n\n    <!-- This point type indexes the coordinates as separate fields (subFields)\n      If subFieldType is defined, it references a type, and a dynamic field\n      definition is created matching *___<typename>.  Alternately, if\n      subFieldSuffix is defined, that is used to create the subFields.\n      Example: if subFieldType=\"double\", then the coordinates would be\n        indexed in fields myloc_0___double,myloc_1___double.\n      Example: if subFieldSuffix=\"_d\" then the coordinates would be indexed\n        in fields myloc_0_d,myloc_1_d\n      The subFields are an implementation detail of the fieldType, and end\n      users normally should not need to know about them.\n     -->\n    <!-- In Drupal we only use prefixes for dynamic fields. That might change in\n      the future but for now we keep this pattern.\n    -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"pdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->\n    <fieldType name=\"location\" class=\"solr.LatLonPointSpatialField\" docValues=\"true\"/>\n\n    <!-- An alternative geospatial field type new to Solr 4.  
It supports multiValued and polygon shapes.\n      For more information about this and other Spatial fields new to Solr 4, see:\n      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4\n    -->\n    <fieldType name=\"location_rpt\" class=\"solr.SpatialRecursivePrefixTreeFieldType\"\n        geo=\"true\" distErrPct=\"0.025\" maxDistErr=\"0.001\" distanceUnits=\"kilometers\" />\n\n    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has\n    special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for\n    relevancy. -->\n    <fieldType name=\"bbox\" class=\"solr.BBoxField\"\n               geo=\"true\" distanceUnits=\"kilometers\" numberType=\"_bbox_coord\" />\n    <fieldType name=\"_bbox_coord\" class=\"solr.DoublePointField\" docValues=\"true\" stored=\"false\"/>\n\n  <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType\n       Parameters:\n           amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field.\n                               The dynamic field must have a field type that extends LongValueFieldType.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.\n                               The dynamic field must have a field type that extends StrField.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           defaultCurrency:  Specifies the default currency if none specified. 
Defaults to \"USD\"\n           providerClass:    Lets you plug in other exchange provider backends:\n                             solr.FileExchangeRateProvider is the default and takes one parameter:\n                               currencyConfig: name of an xml file holding exchange rates\n                             solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:\n                               ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)\n                               refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)\n  -->\n<!--  <fieldType name=\"currency\" class=\"solr.CurrencyFieldType\" amountLongSuffix=\"_l_ns\" codeStrSuffix=\"_s_ns\"\n                 defaultCurrency=\"USD\" currencyConfig=\"currency.xml\" /> -->\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  &extrafields;\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  &extratypes;\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- Similarity is the scoring routine for each document vs. a query.\n       A custom Similarity or SimilarityFactory may be specified here, but\n       the default is fine for most applications.\n       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity\n    -->\n  <!--\n     <similarity class=\"com.example.solr.CustomSimilarityFactory\">\n       <str name=\"paramkey\">param value</str>\n     </similarity>\n    -->\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal10/solrconfig.xml",
"content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!DOCTYPE config [\n  <!ENTITY extra SYSTEM \"solrconfig_extra.xml\">\n  <!ENTITY index SYSTEM \"solrconfig_index.xml\">\n  <!ENTITY query SYSTEM \"solrconfig_query.xml\">\n  <!ENTITY requestdispatcher SYSTEM \"solrconfig_requestdispatcher.xml\">\n]>\n\n<!--\n     For more details about configuration options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-SEARCH_API_SOLR_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered a severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  Generally, you want to use the latest version to\n       get all bug fixes and improvements. 
It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_70}</luceneMatchVersion>\n\n  <!-- <lib/> directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       Please note that <lib/> directives are processed in the order\n       that they appear in your solrconfig.xml file, and are \"stacked\"\n       on top of each other when building a ClassLoader - so if you have\n       plugin jars with dependencies on other jars, the \"lower level\"\n       dependency jars should be loaded first.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n\n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A 'dir' option by itself adds any files found in the directory\n       to the classpath, this is useful for including all jars in a\n       directory.\n\n       When a 'regex' is specified in addition to a 'dir', only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n\n       If a 'dir' option (with or without a regex) is used and nothing\n       is found that matches, a warning will be logged.\n\n       The examples below can be used to load some solr-contribs along\n       with their external dependencies.\n    -->\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-dataimporthandler-.*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/extraction/lib\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-cell-\\d.*\\.jar\" />\n\n  
<lib dir=\"${solr.install.dir:../../../..}/contrib/langid/lib/\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-langid-\\d.*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/analysis-extras/lib\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-analysis-extras-\\d.*\\.jar\" />\n\n  <!-- an exact 'path' can be used instead of a 'dir' to specify a\n       specific jar file.  This will cause a serious error to be logged\n       if it can't be loaded.\n    -->\n  <!--\n     <lib path=\"../a-jar-that-does-not-exist.jar\" />\n  -->\n\n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <dataDir>${solr.data.dir:}</dataDir>\n\n\n  <!-- The DirectoryFactory to use for indexes.\n\n       solr.StandardDirectoryFactory is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,\n       wraps solr.StandardDirectoryFactory and caches small files in memory\n       for better NRT performance.\n\n       One can force a particular implementation via solr.MMapDirectoryFactory,\n       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based and not persistent.\n    -->\n  <directoryFactory name=\"DirectoryFactory\"\n                    class=\"${solr.directoryFactory:solr.NRTCachingDirectoryFactory}\">\n\n\n    <!-- These will be used if you are using the solr.HdfsDirectoryFactory,\n         otherwise they will be ignored. If you don't plan on using hdfs,\n         you can safely remove this section. 
-->\n    <!-- The root directory that collection data should be written to. -->\n    <str name=\"solr.hdfs.home\">${solr.hdfs.home:}</str>\n    <!-- The hadoop configuration files to use for the hdfs client. -->\n    <str name=\"solr.hdfs.confdir\">${solr.hdfs.confdir:}</str>\n    <!-- Enable/Disable the hdfs cache. -->\n    <str name=\"solr.hdfs.blockcache.enabled\">${solr.hdfs.blockcache.enabled:true}</str>\n    <!-- Enable/Disable using one global cache for all SolrCores.\n         The settings used will be from the first HdfsDirectoryFactory created. -->\n    <str name=\"solr.hdfs.blockcache.global\">${solr.hdfs.blockcache.global:true}</str>\n\n  </directoryFactory>\n\n  <!-- The CodecFactory for defining the format of the inverted index.\n       The default implementation is SchemaCodecFactory, which is the official Lucene\n       index format, but hooks into the schema to provide per-field customization of\n       the postings lists and per-document values in the fieldType element\n       (postingsFormat/docValuesFormat). Note that most of the alternative implementations\n       are experimental, so if you choose to customize the index format, it's a good\n       idea to convert back to the official format e.g. 
via IndexWriter.addIndexes(IndexReader)\n       before upgrading to a newer version to avoid unnecessary reindexing.\n  -->\n  <codecFactory class=\"solr.SchemaCodecFactory\"/>\n\n  <!-- To enable dynamic schema REST APIs, remove the following <schemaFactory>.\n  -->\n  <schemaFactory class=\"ClassicIndexSchemaFactory\"/>\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Index Config - These settings control low-level behavior of indexing\n       Most example settings here show the default value, but are commented\n       out, to more easily see where customizations have been made.\n\n       Note: This replaces <indexDefaults> and <mainIndex> from older versions\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <indexConfig>\n    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a\n         LimitTokenCountFilterFactory in your fieldType definition. E.g.\n     <filter class=\"solr.LimitTokenCountFilterFactory\" maxTokenCount=\"10000\"/>\n    -->\n    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->\n    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->\n\n    <!-- Expert: Enabling compound file will use fewer files for the index,\n         using fewer file descriptors at the expense of a performance decrease.\n         Default in Lucene is \"true\". Default in Solr is \"false\" (since 3.6) -->\n    <!-- <useCompoundFile>false</useCompoundFile> -->\n\n    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene\n         indexing for buffering added documents and deletions before they are\n         flushed to the Directory.\n         maxBufferedDocs sets a limit on the number of documents buffered\n         before flushing.\n         If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.\n         The default is 100 MB.  
-->\n    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <!-- Expert: Merge Policy\n         The Merge Policy in Lucene controls how merging of segments is done.\n         The default since Solr/Lucene 3.3 is TieredMergePolicy.\n         The default since Lucene 2.3 was the LogByteSizeMergePolicy,\n         Even older versions of Lucene used LogDocMergePolicy.\n      -->\n    <!--\n        <mergePolicyFactory class=\"solr.TieredMergePolicyFactory\">\n          <int name=\"maxMergeAtOnce\">10</int>\n          <int name=\"segmentsPerTier\">10</int>\n        </mergePolicyFactory>\n     -->\n\n    <!-- Expert: Merge Scheduler\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!--\n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\n    <!-- LockFactory\n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n\n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         Defaults: 'native' is default for Solr3.6 and later, otherwise\n                   'simple' is the default\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>${solr.lock.type:native}</lockType>\n\n    <!-- Commit 
Deletion Policy\n         Custom deletion policies can be specified here. The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         The default Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n\n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <!--\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n    -->\n    <!-- The number of commit points to be kept -->\n    <!-- <str name=\"maxCommitsToKeep\">1</str> -->\n    <!-- The number of optimized commit points to be kept -->\n    <!-- <str name=\"maxOptimizedCommitsToKeep\">0</str> -->\n    <!--\n        Delete all commit points once they have reached the given age.\n        Supports DateMathParser syntax e.g.\n      -->\n    <!--\n       <str name=\"maxCommitAge\">30MINUTES</str>\n       <str name=\"maxCommitAge\">1DAY</str>\n    -->\n    <!--\n    </deletionPolicy>\n    -->\n\n    <!-- Lucene Infostream\n\n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its info stream to solr's log. By default,\n         this is enabled here, and controlled through log4j2.xml\n      -->\n    <infoStream>true</infoStream>\n\n    <!-- Let the config generator easily inject additional stuff. -->\n    &index;\n\n  </indexConfig>\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- Enables a transaction log, used for real-time get, durability,\n         and solr cloud replica recovery.  
The log can grow as big as\n         uncommitted changes to the index, so use of a hard autoCommit\n         is recommended (see below).\n         \"dir\" - the target directory for transaction logs, defaults to the\n                solr data directory.  -->\n    <updateLog>\n      <str name=\"dir\">${solr.ulog.dir:}</str>\n    </updateLog>\n\n    <!-- AutoCommit\n\n         Perform a hard commit automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents.\n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time in ms that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n         openSearcher - if false, the commit causes recent index changes\n           to be flushed to stable storage, but does not cause a new\n           searcher to be opened to make those changes visible.\n\n         If the updateLog is enabled, then it's highly recommended to\n         have some sort of hard autoCommit to limit the log size.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:15000}</maxTime>\n      <openSearcher>false</openSearcher>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n      -->\n\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:5000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n\n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns.\n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n\n  </updateHandler>\n\n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. 
The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Query section - these settings control query time things like caches\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <query>\n    <!-- Let the config generator easily inject additional stuff. -->\n    &query;\n\n    <!-- Query Related Event Listeners\n\n         Various IndexSearcher related events can trigger Listeners to\n         take actions.\n\n         newSearcher - fired whenever a new searcher is being prepared\n         and there is a current searcher handling requests (aka\n         registered).  
It can be used to prime certain caches to\n         prevent long request times for certain requests.\n\n         firstSearcher - fired whenever a new searcher is being\n         prepared but there is no current registered searcher to handle\n         requests or to gain autowarming data from.\n\n\n      -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence.\n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">static firstSearcher warming in solrconfig.xml</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  
If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n  </query>\n\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n    -->\n  <requestDispatcher>\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size (in KiB) of\n         Multipart File Uploads that Solr will allow in a Request.\n\n         formdataUploadLimitInKB - specifies the max size (in KiB) of\n         form data (application/x-www-form-urlencoded) sent via\n         POST. You can use POST to pass request parameters not\n         fitting into the URL.\n\n         addHttpRequestToContext - if set to true, it will instruct\n         the requestParsers to include the original HttpServletRequest\n         object in the context map of the SolrQueryRequest under the\n         key \"httpRequest\". It will not be used by any of the existing\n         Solr components, but may be useful when developing custom\n         plugins.\n\n         *** WARNING ***\n         Before enabling remote streaming, you should make sure your\n         system has authentication enabled.\n\n    <requestParsers enableRemoteStreaming=\"false\"\n                    multipartUploadLimitInKB=\"-1\"\n                    formdataUploadLimitInKB=\"-1\"\n                    addHttpRequestToContext=\"false\"/>\n      -->\n    <!-- Let the config generator easily inject additional stuff. 
-->\n    &requestdispatcher;\n\n  </requestDispatcher>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of the standard names,\n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n\n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n\n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\"\n\n     -->\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  &extra;\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could 
most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\"\n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter\n           (for sentence extraction)\n        -->\n      <fragmenter name=\"regex\"\n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      </fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\"\n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<em>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</em>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure the standard encoder -->\n      <encoder name=\"html\"\n               class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\"\n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- Configure the weighted fragListBuilder -->\n      <fragListBuilder name=\"weighted\"\n                       default=\"true\"\n                       class=\"solr.highlight.WeightedFragListBuilder\"/>\n\n      <!-- 
default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\"\n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!--\n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawgreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n\n      <boundaryScanner name=\"default\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? &#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n\n      <boundaryScanner name=\"breakIterator\"\n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing Locale object.  
-->\n          <!-- And the Locale object will be used when getting instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    -->\n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.\n\n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Language identification\n\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. 
No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n  <!--\n   <updateRequestProcessorChain name=\"langid\">\n     <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n       <str name=\"langid.fl\">text,title,subject,description</str>\n       <str name=\"langid.langField\">language_s</str>\n       <str name=\"langid.fallback\">en</str>\n     </processor>\n     <processor class=\"solr.LogUpdateProcessorFactory\" />\n     <processor class=\"solr.RunUpdateProcessorFactory\" />\n   </updateRequestProcessorChain>\n  -->\n\n  <!-- Script update processor\n\n    This example hooks in an update processor implemented using JavaScript.\n\n    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor\n  -->\n  <!--\n    <updateRequestProcessorChain name=\"script\">\n      <processor class=\"solr.StatelessScriptUpdateProcessorFactory\">\n        <str name=\"script\">update-script.js</str>\n        <lst name=\"params\">\n          <str name=\"config_param\">example config parameter</str>\n        </lst>\n      </processor>\n      <processor class=\"solr.RunUpdateProcessorFactory\" />\n    </updateRequestProcessorChain>\n  -->\n\n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     
<queryResponseWriter name=\"xml\"\n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n     <queryResponseWriter name=\"schema.xml\" class=\"solr.SchemaXmlResponseWriter\"/>\n    -->\n\n  <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n  </queryResponseWriter>\n\n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!--\n    <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" startup=\"lazy\">\n      <str name=\"template.base.dir\">${velocity.template.base.dir:}</str>\n    </queryResponseWriter>\n    -->\n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  
Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\"\n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n\n  <!-- Document Transformers\n       http://wiki.apache.org/solr/DocTransformers\n    -->\n  <!--\n     Could be something like:\n     <transformer name=\"db\" class=\"com.mycompany.LoadFromDatabaseTransformer\" >\n       <int name=\"connection\">jdbc://....</int>\n     </transformer>\n\n     To add a constant value to all docs, use:\n     <transformer name=\"mytrans2\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <int name=\"value\">5</int>\n     </transformer>\n\n     If you want the user to still be able to change it with _value:something_ use this:\n     <transformer name=\"mytrans3\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <double name=\"defaultValue\">5</double>\n     </transformer>\n\n      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  
The\n      EditorialMarkerFactory will do exactly that:\n     <transformer name=\"qecBooster\" class=\"org.apache.solr.response.transform.EditorialMarkerFactory\" />\n    -->\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal10/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:9077/solr\nsolr.replication.confFiles=schema.xml,schema_extra_types.xml,schema_extra_fields.xml,elevate.xml\nsolr.mlt.timeAllowed=2000\nsolr.selectSearchHandler.timeAllowed=-1\n# don't autoCommit after x docs\nsolr.autoCommit.MaxDocs=-1\n# autoCommit after 15 seconds\nsolr.autoCommit.MaxTime=15000\n# don't autoSoftCommit after x docs\nsolr.autoSoftCommit.MaxDocs=-1\n# autoSoftCommit after 5 seconds\nsolr.autoSoftCommit.MaxTime=5000\nsolr.install.dir=/opt/solr7\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. The item IDs are in general constructed as follows:\n   Search API:\n     $document->id = $index_id . '-' . $item_id;\n   Apache Solr Search Integration:\n     $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #1 first in searches for \"example query\": -->\n<!--\n <query text=\"example query\">\n  <doc id=\"default_node_index-1\" />\n  <doc id=\"7v3jsc/node/1\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/mapping-ISOLatin1Accent.txt",
    "content": "# This file contains character mappings for the default fulltext field type.\n# The source characters (on the left) will be replaced by the respective target\n# characters before any other processing takes place.\n# Lines starting with a pound character # are ignored.\n#\n# For sensible defaults, use the mapping-ISOLatin1Accent.txt file distributed\n# with the example application of your Solr version.\n#\n# Examples:\n#   \"À\" => \"A\"\n#   \"\\u00c4\" => \"A\"\n#   \"\\u00c4\" => \"\\u0041\"\n#   \"æ\" => \"ae\"\n#   \"\\n\" => \" \"\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/protwords.txt",
    "content": "#-----------------------------------------------------------------------\n# This file blocks words from being operated on by the stemmer and word delimiter.\n&amp;\n&lt;\n&gt;\n&#039;\n&quot;\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n For more information, on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n-->\n\n<schema name=\"drupal-4.4-solr-7.x\" version=\"1.5\">\n    <!-- attribute \"name\" is the name of this schema and is only used for\n         display purposes. Applications should change this to reflect the nature\n         of the search collection.\n         version=\"1.2\" is Solr's version number for the schema syntax and\n         semantics. It should not normally be changed by applications.\n\n         1.0: multiValued attribute did not exist, all fields are multiValued by\n              nature\n         1.1: multiValued attribute introduced, false by default\n         1.2: omitTermFreqAndPositions attribute introduced, true by default\n              except for text fields.\n         1.3: removed optional field compress feature\n         1.4: autoGeneratePhraseQueries attribute introduced to drive\n              QueryParser behavior when a single string produces multiple\n              tokens. Defaults to off for version >= 1.4\n         1.5: omitNorms defaults to true for primitive field types\n              (int, float, boolean, string...)\n       -->\n\n  <types>\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in the\n       org.apache.solr.analysis package.\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       - StrField and TextField support an optional compressThreshold which\n       limits compression (if enabled in the derived fields) to values which\n       exceed a certain size (in characters).\n    -->\n    <fieldType name=\"string\" class=\"solr.StrField\" sortMissingLast=\"true\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\" sortMissingLast=\"true\"/>\n    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->\n    <fieldtype name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The optional sortMissingLast and sortMissingFirst attributes are\n         currently supported on types that are sorted internally as strings.\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!-- numeric field types that can be sorted, but are not optimized for range queries -->\n    <fieldType name=\"integer\" class=\"solr.TrieIntField\" precisionStep=\"0\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"float\" class=\"solr.TrieFloatField\" 
precisionStep=\"0\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"long\" class=\"solr.TrieLongField\" precisionStep=\"0\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"double\" class=\"solr.TrieDoubleField\" precisionStep=\"0\" positionIncrementGap=\"0\"/>\n\n    <!--\n      Note:\n      These should only be used for compatibility with existing indexes (created with older Solr versions)\n      or if \"sortMissingFirst\" or \"sortMissingLast\" functionality is needed. Use Trie based fields instead.\n\n      Numeric field types that manipulate the value into\n      a string value that isn't human-readable in its internal form,\n      but with a lexicographic ordering the same as the numeric ordering,\n      so that range queries work correctly.\n    -->\n    <fieldType name=\"sint\" class=\"solr.TrieIntField\" sortMissingLast=\"true\"/>\n    <fieldType name=\"sfloat\" class=\"solr.TrieFloatField\" sortMissingLast=\"true\"/>\n    <fieldType name=\"slong\" class=\"solr.TrieLongField\" sortMissingLast=\"true\"/>\n    <fieldType name=\"sdouble\" class=\"solr.TrieDoubleField\" sortMissingLast=\"true\"/>\n\n    <!--\n     Numeric field types that index each value at various levels of precision\n     to accelerate range queries when the number of values between the range\n     endpoints is large. 
See the javadoc for NumericRangeQuery for internal\n     implementation details.\n\n     Smaller precisionStep values (specified in bits) will lead to more tokens\n     indexed per value, slightly larger index size, and faster range queries.\n     A precisionStep of 0 disables indexing at different precision levels.\n    -->\n    <fieldType name=\"tint\" class=\"solr.TrieIntField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tfloat\" class=\"solr.TrieFloatField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tlong\" class=\"solr.TrieLongField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"tdouble\" class=\"solr.TrieDoubleField\" precisionStep=\"8\" positionIncrementGap=\"0\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 
6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the TrieDateField javadocs for more information.\n\n         Note: For faster range queries, consider the tdate type\n      -->\n    <fieldType name=\"date\" class=\"solr.TrieDateField\" precisionStep=\"0\" positionIncrementGap=\"0\" sortMissingLast=\"true\" omitNorms=\"true\"/>\n\n    <!-- A Trie based date field for faster date range queries and date faceting. -->\n    <fieldType name=\"tdate\" class=\"solr.TrieDateField\" precisionStep=\"6\" positionIncrementGap=\"0\"/>\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- A text field that uses WordDelimiterFilter to enable splitting and matching of\n        words on case-change, alpha numeric boundaries, and 
non-alphanumeric chars,\n        so that a query of \"wifi\" or \"wi fi\" could match a document containing \"Wi-Fi\".\n        Synonyms and stopwords are customized by external files, and stemming is enabled.\n        Duplicate tokens at the same position (which may result from Stemmed Synonyms or\n        WordDelim parts) are removed.\n        -->\n    <fieldType name=\"text\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <!-- in this example, we will only use synonyms at query time\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"index_synonyms.txt\" ignoreCase=\"true\" expand=\"false\"/>\n        -->\n        <!-- Case insensitive stop word removal. -->\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter 
class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"1\"\n                preserveOriginal=\"1\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter 
class=\"solr.SnowballPorterFilterFactory\" language=\"English\" protected=\"protwords.txt\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- An unstemmed text field - good if one does not know the language of the field -->\n    <fieldType name=\"text_und\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\" />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"1\"\n                catenateNumbers=\"1\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"multiterm\">\n        <tokenizer 
class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"synonyms.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\"\n                ignoreCase=\"true\"\n                words=\"stopwords.txt\"\n                />\n        <filter class=\"solr.WordDelimiterFilterFactory\"\n                protected=\"protwords.txt\"\n                generateWordParts=\"1\"\n                generateNumberParts=\"1\"\n                catenateWords=\"0\"\n                catenateNumbers=\"0\"\n                catenateAll=\"0\"\n                splitOnCaseChange=\"0\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\" />\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- Edge N gram type - for example for matching against queries with results\n        KeywordTokenizer leaves input string intact as a single term.\n        see: http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/\n      -->\n    <fieldType name=\"edge_n2_kw_text\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.EdgeNGramFilterFactory\" minGramSize=\"2\" maxGramSize=\"25\" />\n      </analyzer>\n      <analyzer type=\"query\">\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n    <!--  Setup simple analysis for spell checking -->\n\n    <fieldType name=\"textSpell\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\" />\n        <filter class=\"solr.StopFilterFactory\" ignoreCase=\"true\" words=\"stopwords.txt\"/>\n   
     <filter class=\"solr.LengthFilterFactory\" min=\"4\" max=\"20\" />\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\" />\n      </analyzer>\n    </fieldType>\n\n    <!-- This is an example of using the KeywordTokenizer along\n         with various TokenFilterFactories to produce a sortable field\n         that does not include some properties of the source text\n      -->\n    <fieldType name=\"sortString\" class=\"solr.TextField\" sortMissingLast=\"true\" omitNorms=\"true\">\n      <analyzer>\n        <!-- KeywordTokenizer does no actual tokenizing, so the entire\n            input string is preserved as a single token\n          -->\n        <tokenizer class=\"solr.KeywordTokenizerFactory\"/>\n        <!-- The LowerCase TokenFilter does what you expect, which can be useful\n            when you want your sorting to be case insensitive\n          -->\n        <filter class=\"solr.LowerCaseFilterFactory\" />\n        <!-- The TrimFilter removes any leading or trailing whitespace -->\n        <filter class=\"solr.TrimFilterFactory\" />\n        <!-- The PatternReplaceFilter gives you the flexibility to use\n            Java Regular expression to replace any sequence of characters\n            matching a pattern with an arbitrary replacement string,\n            which may include back references to portions of the original\n            string matched by the pattern.\n\n            See the Java Regular Expression documentation for more\n            information on pattern and replacement string syntax.\n\n            http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html\n\n        <filter class=\"solr.PatternReplaceFilterFactory\"\n               pattern=\"(^\\\p{Punct}+)\" replacement=\"\" replace=\"all\"\n        />\n        -->\n      </analyzer>\n    </fieldType>\n\n    <!-- The \"RandomSortField\" is not used to store or search any\n         data.  
You can declare fields of this type in your schema\n         to generate pseudo-random orderings of your docs for sorting\n         or function purposes.  The ordering is generated based on the field\n         name and the version of the index. As long as the index version\n         remains unchanged, and the same field name is reused,\n         the ordering of the docs will be consistent.\n         If you want different pseudo-random orderings of documents,\n         for the same version of the index, use a dynamicField and\n         change the field name in the request.\n      -->\n    <fieldType name=\"rand\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- Fulltext type for matching words based on how they sound – i.e.,\n         \"phonetic matching\".\n      -->\n    <fieldType name=\"phonetic\" class=\"solr.TextField\" >\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\"/>\n        <filter class=\"solr.DoubleMetaphoneFilterFactory\" inject=\"false\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- since fields of this type are by default not stored or indexed,\n         any data added to them will be ignored outright.  -->\n    <fieldType name=\"ignored\" stored=\"false\" indexed=\"false\" multiValued=\"true\" class=\"solr.StrField\" />\n\n    <!-- This point type indexes the coordinates as separate fields (subFields)\n      If subFieldType is defined, it references a type, and a dynamic field\n      definition is created matching *___<typename>.  
Alternately, if\n      subFieldSuffix is defined, that is used to create the subFields.\n      Example: if subFieldType=\"double\", then the coordinates would be\n        indexed in fields myloc_0___double,myloc_1___double.\n      Example: if subFieldSuffix=\"_d\" then the coordinates would be indexed\n        in fields myloc_0_d,myloc_1_d\n      The subFields are an implementation detail of the fieldType, and end\n      users normally should not need to know about them.\n     -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"tdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->\n    <fieldType name=\"location\" class=\"solr.LatLonType\" subFieldType=\"tdouble\"/>\n\n    <!-- A Geohash is a compact representation of a latitude longitude pair in a single field.\n         See http://wiki.apache.org/solr/SpatialSearch\n     -->\n    <fieldtype name=\"geohash\" class=\"solr.GeoHashField\"/>\n\n    <!-- Improved location type which supports advanced functionality like\n         filtering by polygons or other shapes, indexing shapes, multi-valued\n         fields, etc.\n      -->\n    <fieldType name=\"location_rpt\" class=\"solr.SpatialRecursivePrefixTreeFieldType\"\n        geo=\"true\" distErrPct=\"0.025\" maxDistErr=\"0.001\" distanceUnits=\"kilometers\" />\n\n    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has\n     special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for\n     relevancy. 
-->\n    <fieldType name=\"bbox\" class=\"solr.BBoxField\"\n               geo=\"true\" distanceUnits=\"kilometers\" numberType=\"_bbox_coord\" />\n    <fieldType name=\"_bbox_coord\" class=\"solr.TrieDoubleField\" precisionStep=\"8\" docValues=\"true\" stored=\"false\"/>\n\n  </types>\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  <xi:include href=\"schema_extra_types.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n   <!-- Valid attributes for fields:\n     name: mandatory - the name for the field\n     type: mandatory - the name of a field type from the <types> fieldType\n       section\n     indexed: true if this field should be indexed (searchable or sortable)\n     stored: true if this field should be retrievable\n     docValues: true if this field should have doc values. Doc values are\n       useful for faceting, grouping, sorting and function queries. Although not\n       required, doc values will make the index faster to load, more\n       NRT-friendly and more memory-efficient. They however come with some\n       limitations: they are currently only supported by StrField, UUIDField\n       and all Trie*Fields, and depending on the field type, they might\n       require the field to be single-valued, be required or have a default\n       value (check the documentation of the field type you're interested in\n       for more information)\n     multiValued: true if this field may contain multiple values per document\n     omitNorms: (expert) set to true to omit the norms associated with\n       this field (this disables length normalization and index-time\n       boosting for the field, and saves some memory).  
Only full-text\n       fields or fields that need an index-time boost need norms.\n       Norms are omitted for primitive (non-analyzed) types by default.\n     termVectors: [false] set to true to store the term vector for a\n       given field.\n       When using MoreLikeThis, fields used for similarity should be\n       stored for best performance.\n     termPositions: Store position information with the term vector.\n       This will increase storage costs.\n     termOffsets: Store offset information with the term vector. This\n       will increase storage costs.\n     required: The field is required.  It will throw an error if the\n       value does not exist\n     default: a value that should be used if no value is specified\n       when adding a document.\n   -->\n  <fields>\n\n    <!-- The document id is usually derived from a site-specific key (hash) and the\n      entity type and ID like:\n      Search Api :\n        The format used is $document->id = $index_id . '-' . $item_id\n      Apache Solr Search Integration\n        The format used is $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n    -->\n    <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" />\n\n    <!-- Add Solr Cloud version field as mentioned in\n         http://wiki.apache.org/solr/SolrCloud#Required_Config\n    -->\n    <field name=\"_version_\" type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n\n    <!-- Search Api specific fields -->\n    <!-- item_id contains the entity ID, e.g. a node's nid. -->\n    <field name=\"item_id\"  type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- index_id is the machine name of the search index this entry belongs to. -->\n    <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"true\" />\n  <!-- copyField commands copy one field to another at the time a document\n        is added to the index.  
It's used either to index the same field differently,\n        or to add multiple fields to the same field for easier/faster searching.  -->\n    <!-- Since sorting by ID is explicitly allowed, store item_id also in a sortable way. -->\n    <copyField source=\"item_id\" dest=\"sort_search_api_id\" />\n\n    <!-- Apache Solr Search Integration specific fields -->\n    <!-- entity_id is the numeric object ID, e.g. Node ID, File ID -->\n    <field name=\"entity_id\"  type=\"long\" indexed=\"true\" stored=\"true\" />\n    <!-- entity_type is 'node', 'file', 'user', or some other Drupal object type -->\n    <field name=\"entity_type\" type=\"string\" indexed=\"true\" stored=\"true\" />\n    <!-- bundle is a node type, or as appropriate for other entity types -->\n    <field name=\"bundle\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"bundle_name\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"url\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <!-- label is the default field for a human-readable string for this entity (e.g. 
the title of a node) -->\n    <field name=\"label\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- The string version of the title is used for sorting -->\n    <copyField source=\"label\" dest=\"sort_label\"/>\n\n    <!-- content is the default field for full text search - dump crap here -->\n    <field name=\"content\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n    <field name=\"teaser\" type=\"text\" indexed=\"false\" stored=\"true\"/>\n    <field name=\"path\" type=\"string\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"path_alias\" type=\"text\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n\n    <!-- These are the fields that correspond to a Drupal node. The beauty of having\n      Lucene store title, body, type, etc., is that we retrieve them with the search\n      result set and don't need to go to the database with a node_load. -->\n    <field name=\"tid\"  type=\"long\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n    <field name=\"taxonomy_names\" type=\"text\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n    <!-- Copy terms to a single field that contains all taxonomy term names -->\n    <copyField source=\"tm_vid_*\" dest=\"taxonomy_names\"/>\n\n    <!-- Here, default is used to create a \"timestamp\" field indicating\n         when each document was indexed.-->\n    <field name=\"timestamp\" type=\"tdate\" indexed=\"true\" stored=\"true\" default=\"NOW\" multiValued=\"false\"/>\n\n    <!-- This field is used to build the spellchecker index -->\n    <field name=\"spell\" type=\"textSpell\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- copyField commands copy one field to another at the time a document\n         is added to the index.  
It's used either to index the same field differently,\n         or to add multiple fields to the same field for easier/faster searching.  -->\n    <copyField source=\"label\" dest=\"spell\"/>\n    <copyField source=\"content\" dest=\"spell\"/>\n\n    <copyField source=\"ts_*\" dest=\"spell\"/>\n    <copyField source=\"tm_*\" dest=\"spell\"/>\n\n    <!-- Dynamic field definitions.  If a field name is not found, dynamicFields\n         will be used if the name matches any of the patterns.\n         RESTRICTION: the glob-like pattern in the name attribute must have\n         a \"*\" only at the start or the end.\n         EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n         Longer patterns will be matched first.  if equal size patterns\n         both match, the first appearing in the schema will be used.  -->\n\n    <!-- A set of fields to contain text extracted from HTML tag contents which we\n         can boost at query time. -->\n    <dynamicField name=\"tags_*\" type=\"text\"   indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n\n    <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n         the last letter is 's' for single valued, 'm' for multi-valued -->\n\n    <!-- We use long for integer since 64 bit ints are now common in PHP. 
-->\n    <dynamicField name=\"is_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"im_*\"  type=\"long\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of floats can be saved in a regular float field -->\n    <dynamicField name=\"fs_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fm_*\"  type=\"float\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of doubles can be saved in a regular double field -->\n    <dynamicField name=\"ps_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pm_*\"  type=\"double\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- List of booleans can be saved in a regular boolean field -->\n    <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <!-- Regular text (without processing) can be stored in a string field-->\n    <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Normal text fields are for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"ts_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    <dynamicField name=\"tm_*\"  type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- Unstemmed text fields for full text - the relevance of a match depends on the length of the text -->\n    <dynamicField name=\"tus_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n    
<dynamicField name=\"tum_*\" type=\"text_und\" indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n    <!-- These text fields omit norms - useful for extracted text like taxonomy_names -->\n    <dynamicField name=\"tos_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n    <dynamicField name=\"tom_*\" type=\"text\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n    <!-- Special-purpose text fields -->\n    <dynamicField name=\"tes_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"false\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tem_*\" type=\"edge_n2_kw_text\" indexed=\"true\" stored=\"true\" multiValued=\"true\" omitTermFreqAndPositions=\"true\" />\n    <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n    <!-- trie dates are preferred, so give them the 2 letter prefix -->\n    <dynamicField name=\"ds_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"dm_*\"  type=\"tdate\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"its_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"itm_*\" type=\"tlong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"fts_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ftm_*\" type=\"tfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pts_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ptm_*\" type=\"tdouble\" indexed=\"true\"  stored=\"true\" 
multiValued=\"true\"/>\n    <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n         a small image in a search result using the data URI scheme -->\n    <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a date rather than tdate is needed for sortMissingLast -->\n    <dynamicField name=\"dds_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ddm_*\" type=\"date\"    indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- Sortable fields, good for sortMissingLast support &\n         We use long for integer since 64 bit ints are now common in PHP. -->\n    <dynamicField name=\"iss_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"ism_*\" type=\"slong\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In rare cases a sfloat rather than tfloat is needed for sortMissingLast -->\n    <dynamicField name=\"fss_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"fsm_*\" type=\"sfloat\"  indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"pss_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"psm_*\" type=\"sdouble\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 
32 bit on 64 arch -->\n    <dynamicField name=\"hs_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hm_*\" type=\"integer\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hss_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"hsm_*\" type=\"sint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"hts_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"htm_*\" type=\"tint\"   indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n    <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n    <!-- Fields for location searches.\n         http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n    <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"geos_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n    <dynamicField name=\"geom_*\" type=\"geohash\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n    <dynamicField name=\"bboxs_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n    <dynamicField name=\"bboxm_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n    
<dynamicField name=\"rpts_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n    <dynamicField name=\"rptm_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n    <!-- Special fields for Solr 5 functionality. -->\n    <dynamicField name=\"phons_*\" type=\"phonetic\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n    <dynamicField name=\"phonm_*\" type=\"phonetic\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n    <!-- External file fields -->\n    <dynamicField name=\"eff_*\" type=\"file\"/>\n\n    <!-- Sortable version of the dynamic string field -->\n    <dynamicField name=\"sort_*\" type=\"sortString\" indexed=\"true\" stored=\"false\"/>\n    <copyField source=\"ss_*\" dest=\"sort_*\"/>\n\n    <!-- A random sort field -->\n    <dynamicField name=\"random_*\" type=\"rand\" indexed=\"true\" stored=\"true\"/>\n\n    <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n    <dynamicField name=\"access_*\" type=\"integer\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n    <!-- The following causes solr to ignore any fields that don't already match an existing\n         field name or dynamic field, rather than reporting them as an error.\n         Alternately, change the type=\"ignored\" to some other type e.g. 
\"text\" if you want\n         unknown fields indexed and/or stored by default -->\n    <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n  </fields>\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  <xi:include href=\"schema_extra_fields.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback></xi:fallback>\n  </xi:include>\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- Similarity is the scoring routine for each document vs. a query.\n       A custom Similarity or SimilarityFactory may be specified here, but\n       the default is fine for most applications.\n       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity\n    -->\n  <!--\n     <similarity class=\"com.example.solr.CustomSimilarityFactory\">\n       <str name=\"paramkey\">param value</str>\n     </similarity>\n    -->\n\n  <!-- DEPRECATED: The defaultSearchField is consulted by various query parsers\n    when parsing a query string that isn't explicit about the field.  Machine\n    (non-user) generated queries are best made explicit, or they can use the\n    \"df\" request parameter which takes precedence over this.\n    Note: Un-commenting defaultSearchField will be insufficient if your request\n    handler in solrconfig.xml defines \"df\", which takes precedence. That would\n    need to be removed.\n  <defaultSearchField>content</defaultSearchField> -->\n\n  <!-- DEPRECATED: The defaultOperator (AND|OR) is consulted by various query\n    parsers when parsing a query string to determine if a clause of the query\n    should be marked as required or optional, assuming the clause isn't already\n    marked by some operator. The default is OR, which is generally assumed so it\n    is not a good idea to change it globally here. 
The \"q.op\" request parameter\n    takes precedence over this.\n  <solrQueryParser defaultOperator=\"OR\"/> -->\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/schema_extra_fields.xml",
    "content": "<fields>\n<!--\n  Example: Adding German dynamic field types to our Solr Schema.\n  If you enable this, make sure you have a folder called lang containing\n  stopwords_de.txt and synonyms_de.txt.\n  This also requires to enable the content in schema_extra_types.xml.\n-->\n<!--\n   <field name=\"label_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"content_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\"/>\n   <field name=\"teaser_de\" type=\"text_de\" indexed=\"false\" stored=\"true\"/>\n   <field name=\"path_alias_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n   <field name=\"taxonomy_names_de\" type=\"text_de\" indexed=\"true\" stored=\"false\" termVectors=\"true\" multiValued=\"true\" omitNorms=\"true\"/>\n   <field name=\"spell_de\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n   <copyField source=\"label_de\" dest=\"spell_de\"/>\n   <copyField source=\"content_de\" dest=\"spell_de\"/>\n   <dynamicField name=\"tags_de_*\" type=\"text_de\" indexed=\"true\" stored=\"false\" omitNorms=\"true\"/>\n   <dynamicField name=\"ts_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\"/>\n   <dynamicField name=\"tm_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\"/>\n   <dynamicField name=\"tos_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"false\" termVectors=\"true\" omitNorms=\"true\"/>\n   <dynamicField name=\"tom_de_*\" type=\"text_de\" indexed=\"true\" stored=\"true\" multiValued=\"true\" termVectors=\"true\" omitNorms=\"true\"/>\n-->\n</fields>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/schema_extra_types.xml",
    "content": "<types>\n<!--\n  Example: Adding German language field types to our Solr Schema.\n  If you enable this, make sure you have a folder called lang containing\n  stopwords_de.txt and synonyms_de.txt.\n\n  For examples from other languages, see\n  ./server/solr/configsets/sample_techproducts_configs/conf/schema.xml\n  from your Solr installation.\n-->\n<!--\n    <fieldType name=\"text_de\" class=\"solr.TextField\" positionIncrementGap=\"100\">\n      <analyzer type=\"index\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"1\" catenateNumbers=\"1\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n      <analyzer type=\"query\">\n        <charFilter class=\"solr.MappingCharFilterFactory\" mapping=\"mapping-ISOLatin1Accent.txt\"/>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.SynonymFilterFactory\" synonyms=\"lang/synonyms_de.txt\" ignoreCase=\"true\" expand=\"true\"/>\n        <filter class=\"solr.StopFilterFactory\" words=\"lang/stopwords_de.txt\" format=\"snowball\" ignoreCase=\"true\"/>\n        <filter class=\"solr.WordDelimiterFilterFactory\" generateWordParts=\"1\" generateNumberParts=\"1\" splitOnCaseChange=\"1\" splitOnNumerics=\"1\" catenateWords=\"0\" catenateNumbers=\"0\" catenateAll=\"0\" protected=\"protwords.txt\" preserveOriginal=\"1\"/>\n        <filter 
class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.GermanLightStemFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n-->\n</types>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n\n<!--\n     For more details about configuration options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-4.4-solr-7.x\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered a severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  
Generally, you want to use the latest version to\n       get all bug fixes and improvements. It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_60}</luceneMatchVersion>\n\n  <!-- <lib/> directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       Please note that <lib/> directives are processed in the order\n       that they appear in your solrconfig.xml file, and are \"stacked\"\n       on top of each other when building a ClassLoader - so if you have\n       plugin jars with dependencies on other jars, the \"lower level\"\n       dependency jars should be loaded first.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n\n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A \"dir\" option by itself adds any files found in the directory to the\n       classpath, this is useful for including all jars in a directory.\n    -->\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/extraction/lib\" />\n  <lib dir=\"${solr.contrib.dir:../../../contrib}/clustering/lib/\" />\n\n  <!-- The velocity library has been known to crash Solr in some\n       instances when deployed as a war file to Tomcat. 
Therefore all\n       references have been removed from the default configuration.\n       @see http://drupal.org/node/1612556\n  -->\n  <!-- <lib dir=\"../../contrib/velocity/lib\" /> -->\n\n  <!-- When a regex is specified in addition to a directory, only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n    -->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-cell-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-clustering-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-dataimporthandler-\\d.*\\.jar\" />-->\n  <!--<lib dir=\"../../dist/\" regex=\"apache-solr-langid-\\d.*\\.jar\" />-->\n  <!-- <lib dir=\"../../dist/\" regex=\"apache-solr-velocity-\\d.*\\.jar\" /> -->\n\n  <!-- If a dir option (with or without a regex) is used and nothing\n       is found that matches, it will be ignored\n    -->\n  <!--<lib dir=\"../../contrib/clustering/lib/\" />-->\n  <!--<lib dir=\"/total/crap/dir/ignored\" />-->\n\n  <!-- an exact path can be used to specify a specific file.  This\n       will cause a serious error to be logged if it can't be loaded.\n    -->\n  <!--\n  <lib path=\"../a-jar-that-does-not-exist.jar\" />\n  -->\n\n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <dataDir>${solr.data.dir:}</dataDir>\n\n  <!-- The DirectoryFactory to use for indexes.\n\n       solr.StandardDirectoryFactory is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  
solr.NRTCachingDirectoryFactory, the default,\n       wraps solr.StandardDirectoryFactory and caches small files in memory\n       for better NRT performance.\n\n       One can force a particular implementation via solr.MMapDirectoryFactory,\n       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based, not\n       persistent, and doesn't work with replication.\n    -->\n  <directoryFactory name=\"DirectoryFactory\"\n                    class=\"${solr.directoryFactory:solr.NRTCachingDirectoryFactory}\"/>\n\n  <!-- The CodecFactory for defining the format of the inverted index.\n       The default implementation is SchemaCodecFactory, which is the official\n       Lucene index format, but hooks into the schema to provide per-field\n       customization of the postings lists and per-document values in the\n       fieldType element (postingsFormat/docValuesFormat). Note that most of the\n       alternative implementations are experimental, so if you choose to\n       customize the index format, it's a good idea to convert back to the\n       official format e.g. via IndexWriter.addIndexes(IndexReader) before\n       upgrading to a newer version to avoid unnecessary reindexing.\n  -->\n  <codecFactory class=\"solr.SchemaCodecFactory\"/>\n\n  <!-- To enable dynamic schema REST APIs, use the following for <schemaFactory>:\n\n       <schemaFactory class=\"ManagedIndexSchemaFactory\">\n         <bool name=\"mutable\">true</bool>\n         <str name=\"managedSchemaResourceName\">managed-schema</str>\n       </schemaFactory>\n\n       When ManagedIndexSchemaFactory is specified, Solr will load the schema from\n       the resource named in 'managedSchemaResourceName', rather than from schema.xml.\n       Note that the managed schema resource CANNOT be named schema.xml.  
If the managed\n       schema does not exist, Solr will create it after reading schema.xml, then rename\n       'schema.xml' to 'schema.xml.bak'.\n\n       Do NOT hand edit the managed schema - external modifications will be ignored and\n       overwritten as a result of schema modification REST API calls.\n\n       When ManagedIndexSchemaFactory is specified with mutable = true, schema\n       modification REST API calls will be allowed; otherwise, error responses will be\n       sent back for these requests.\n  -->\n  <schemaFactory class=\"ClassicIndexSchemaFactory\"/>\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Index Config - These settings control low-level behavior of indexing\n       Most example settings here show the default value, but are commented\n       out, to more easily see where customizations have been made.\n\n       Note: This replaces <indexDefaults> and <mainIndex> from older versions\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <indexConfig>\n    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a\n         LimitTokenCountFilterFactory in your fieldType definition. E.g.\n     <filter class=\"solr.LimitTokenCountFilterFactory\" maxTokenCount=\"10000\"/>\n    -->\n    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->\n    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->\n\n    <!-- The maximum number of simultaneous threads that may be\n         indexing documents at once in IndexWriter; if more than this\n         many threads arrive they will wait for others to finish.\n         Default in Solr/Lucene is 8. -->\n    <!-- <maxIndexingThreads>8</maxIndexingThreads>  -->\n\n    <!-- Expert: Enabling compound file will use fewer files for the index,\n         using fewer file descriptors at the expense of a performance decrease.\n         Default in Lucene is \"true\". 
Default in Solr is \"false\" (since 3.6) -->\n    <!-- <useCompoundFile>false</useCompoundFile> -->\n\n    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene\n         indexing for buffering added documents and deletions before they are\n         flushed to the Directory.\n         maxBufferedDocs sets a limit on the number of documents buffered\n         before flushing.\n         If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.\n         The default is 100 MB.  -->\n    <ramBufferSizeMB>32</ramBufferSizeMB>\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <!-- Expert: Merge Policy\n\n         The Merge Policy in Lucene controls how merging is handled by\n         Lucene.  The default in Solr 3.3 is TieredMergePolicy.\n\n         The default in 2.3 was the LogByteSizeMergePolicy,\n         previous versions used LogDocMergePolicy.\n\n         LogByteSizeMergePolicy chooses segments to merge based on\n         their size.  The Lucene 2.2 default, LogDocMergePolicy chose\n         when to merge based on number of documents\n\n         Other implementations of MergePolicy must have a no-argument\n         constructor\n      -->\n\n    <mergePolicyFactory class=\"org.apache.solr.index.LogByteSizeMergePolicyFactory\">\n      <!-- Merge Factor\n           The merge factor controls how many segments will get merged at a time.\n           For TieredMergePolicy, mergeFactor is a convenience parameter which\n           will set both MaxMergeAtOnce and SegmentsPerTier at once.\n           For LogByteSizeMergePolicy, mergeFactor decides how many new segments\n           will be allowed before they are merged into one.\n        -->\n      <int name=\"mergeFactor\">4</int>\n    </mergePolicyFactory>\n\n    <!-- Expert: Merge Scheduler\n\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  
The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!--\n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\n    <!-- LockFactory\n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n\n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         Defaults: 'native' is default for Solr3.6 and later, otherwise\n                   'simple' is the default\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>${solr.lock.type:native}</lockType>\n\n    <!-- Expert: Controls how often Lucene loads terms into memory\n         Default is 128 and is likely good for most everyone.\n      -->\n    <!-- <termIndexInterval>256</termIndexInterval> -->\n\n    <!-- If true, IndexReaders will be reopened (often more efficient)\n         instead of closed and then opened.\n      -->\n    <reopenReaders>true</reopenReaders>\n\n    <!-- Commit Deletion Policy\n\n         Custom deletion policies can be specified here. 
The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         http://lucene.apache.org/java/2_9_1/api/all/org/apache/lucene/index/IndexDeletionPolicy.html\n\n         The default Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n\n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n      <!-- The number of commit points to be kept -->\n      <str name=\"maxCommitsToKeep\">1</str>\n      <!-- The number of optimized commit points to be kept -->\n      <str name=\"maxOptimizedCommitsToKeep\">0</str>\n      <!--\n          Delete all commit points once they have reached the given age.\n          Supports DateMathParser syntax e.g.\n        -->\n      <!--\n         <str name=\"maxCommitAge\">30MINUTES</str>\n         <str name=\"maxCommitAge\">1DAY</str>\n      -->\n    </deletionPolicy>\n\n    <!-- Lucene Infostream\n\n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its info stream to solr's log. By default,\n         this is enabled here, and controlled through log4j.properties.\n      -->\n     <infoStream>true</infoStream>\n\n  </indexConfig>\n\n  <!-- JMX\n\n       This example enables JMX if and only if an existing MBeanServer\n       is found, use this if you want to configure JMX through JVM\n       parameters. 
Remove this to disable exposing Solr configuration\n       and statistics to JMX.\n\n       For more details see http://wiki.apache.org/solr/SolrJmx\n    -->\n  <!-- <jmx /> -->\n  <!-- If you want to connect to a particular server, specify the\n       agentId\n    -->\n  <!-- <jmx agentId=\"myAgent\" /> -->\n  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->\n  <!-- <jmx serviceUrl=\"service:jmx:rmi:///jndi/rmi://localhost:9999/solr\"/>\n    -->\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- AutoCommit\n\n         Perform a <commit/> automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents.\n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:10000}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:120000}</maxTime>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n    -->\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:2000}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:10000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n\n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns.\n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n    <!-- Enables a transaction log, currently used for real-time get.\n         \"dir\" - the target directory for transaction logs, defaults to the\n         solr data directory.  
-->\n    <updateLog>\n      <str name=\"dir\">${solr.data.dir:}</str>\n      <!-- if you want to take control of the synchronization you may specify\n           the syncLevel as one of the following where ''flush'' is the default.\n           Fsync will reduce throughput.\n           <str name=\"syncLevel\">flush|fsync|none</str>\n      -->\n    </updateLog>\n  </updateHandler>\n\n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Query section - these settings control query time things like caches\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <query>\n    <!-- Max Boolean Clauses\n\n         Maximum number of clauses in each BooleanQuery,  an exception\n         is thrown if exceeded.\n\n         ** WARNING **\n\n         This option actually modifies a global Lucene property that\n         will affect all SolrCores.  
If multiple solrconfig.xml files\n         disagree on this property, the value at any given moment will\n         be based on the last SolrCore to be initialized.\n\n      -->\n    <maxBooleanClauses>1024</maxBooleanClauses>\n\n    <!-- Slow Query Threshold (in millis)\n\n         At high request rates, logging all requests can become a bottleneck\n         and therefore INFO logging is often turned off. However, it is still\n         useful to be able to set a latency threshold above which a request\n         is considered \"slow\" and log that request at WARN level so we can\n         easily identify slow queries.\n    -->\n    <slowQueryThresholdMillis>-1</slowQueryThresholdMillis>\n\n    <!-- Solr Internal Query Caches\n\n         There are two implementations of cache available for Solr,\n         LRUCache, based on a synchronized LinkedHashMap, and\n         FastLRUCache, based on a ConcurrentHashMap.\n\n         FastLRUCache has faster gets and slower puts in single\n         threaded operation and thus is generally faster than LRUCache\n         when the hit ratio of the cache is high (> 75%), and may be\n         faster under other scenarios on multi-cpu systems.\n    -->\n\n    <!-- Filter Cache\n\n         Cache used by SolrIndexSearcher for filters (DocSets),\n         unordered sets of *all* documents that match a query.  When a\n         new searcher is opened, its caches may be prepopulated or\n         \"autowarmed\" using data from caches in the old searcher.\n         autowarmCount is the number of items to prepopulate.  For\n         LRUCache, the autowarmed items will be the most recently\n         accessed items.\n\n         Parameters:\n           class - the SolrCache implementation to use\n               (LRUCache or FastLRUCache)\n           size - the maximum number of entries in the cache\n           initialSize - the initial capacity (number of entries) of\n               the cache.  
(see java.util.HashMap)\n           autowarmCount - the number of entries to prepopulate from\n               an old cache.\n      -->\n    <filterCache class=\"solr.FastLRUCache\"\n                 size=\"512\"\n                 initialSize=\"512\"\n                 autowarmCount=\"0\"/>\n\n    <!-- Query Result Cache\n\n         Caches results of searches - ordered lists of document ids\n         (DocList) based on a query, a sort, and the range of documents requested.\n      -->\n    <queryResultCache class=\"solr.LRUCache\"\n                     size=\"512\"\n                     initialSize=\"512\"\n                     autowarmCount=\"32\"/>\n\n    <!-- Document Cache\n\n         Caches Lucene Document objects (the stored fields for each\n         document).  Since Lucene internal document ids are transient,\n         this cache will not be autowarmed.\n      -->\n    <documentCache class=\"solr.LRUCache\"\n                   size=\"512\"\n                   initialSize=\"512\"\n                   autowarmCount=\"0\"/>\n\n    <!-- Field Value Cache\n\n         Cache used to hold field values that are quickly accessible\n         by document id.  The fieldValueCache is created by default\n         even if not configured here.\n      -->\n    <!--\n       <fieldValueCache class=\"solr.FastLRUCache\"\n                        size=\"512\"\n                        autowarmCount=\"128\"\n                        showItems=\"32\" />\n      -->\n\n    <!-- Custom Cache\n\n         Example of a generic cache.  These caches may be accessed by\n         name through SolrIndexSearcher.getCache(), cacheLookup(), and\n         cacheInsert().  The purpose is to enable easy caching of\n         user/application level data.  
The regenerator argument should\n         be specified as an implementation of solr.CacheRegenerator\n         if autowarming is desired.\n      -->\n    <!--\n       <cache name=\"myUserCache\"\n              class=\"solr.LRUCache\"\n              size=\"4096\"\n              initialSize=\"1024\"\n              autowarmCount=\"1024\"\n              regenerator=\"com.mycompany.MyRegenerator\"\n              />\n      -->\n\n    <!-- Lazy Field Loading\n\n         If true, stored fields that are not requested will be loaded\n         lazily.  This can result in a significant speed improvement\n         if the usual case is to not load all stored fields,\n         especially if the skipped fields are large compressed text\n         fields.\n    -->\n    <enableLazyFieldLoading>true</enableLazyFieldLoading>\n\n   <!-- Use Filter For Sorted Query\n\n        A possible optimization that attempts to use a filter to\n        satisfy a search.  If the requested sort does not include\n        score, then the filterCache will be checked for a filter\n        matching the query. If found, the filter will be used as the\n        source of document ids, and then the sort will be applied to\n        that.\n\n        For most situations, this will not be useful unless you\n        frequently get the same search repeatedly with different sort\n        options, and none of them ever use \"score\"\n     -->\n   <!--\n      <useFilterForSortedQuery>true</useFilterForSortedQuery>\n     -->\n\n   <!-- Result Window Size\n\n        An optimization for use with the queryResultCache.  When a search\n        is requested, a superset of the requested number of document ids\n        are collected.  For example, if a search for a particular query\n        requests matching documents 10 through 19, and queryResultWindowSize is 50,\n        then documents 0 through 49 will be collected and cached.  
Any further\n        requests in that range can be satisfied via the cache.\n     -->\n   <queryResultWindowSize>20</queryResultWindowSize>\n\n   <!-- Maximum number of documents to cache for any entry in the\n        queryResultCache.\n     -->\n   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>\n\n   <!-- Query Related Event Listeners\n\n        Various IndexSearcher related events can trigger Listeners to\n        take actions.\n\n        newSearcher - fired whenever a new searcher is being prepared\n        and there is a current searcher handling requests (aka\n        registered).  It can be used to prime certain caches to\n        prevent long request times for certain requests.\n\n        firstSearcher - fired whenever a new searcher is being\n        prepared but there is no current registered searcher to handle\n        requests or to gain autowarming data from.\n\n     -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence.\n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">solr rocks</str><str name=\"start\">0</str><str name=\"rows\">10</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  
If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n    <!-- Max Warming Searchers\n\n         Maximum number of searchers that may be warming in the\n         background concurrently.  An error is returned if this limit\n         is exceeded.\n\n         Recommend values of 1-2 for read-only slaves, higher for\n         masters w/o cache warming.\n      -->\n    <maxWarmingSearchers>2</maxWarmingSearchers>\n\n  </query>\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n\n       handleSelect affects the behavior of requests such as /select?qt=XXX\n\n       handleSelect=\"true\" will cause the SolrDispatchFilter to process\n       the request and will result in consistent error handling and\n       formatting for all types of requests.\n\n       handleSelect=\"false\" will cause the SolrDispatchFilter to\n       ignore \"/select\" requests and fall back to using the legacy\n       SolrServlet and its Solr 1.1 style error formatting\n    -->\n  <requestDispatcher handleSelect=\"true\" >\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size (in KiB) of\n         Multipart File Uploads that Solr will allow in a Request.\n\n         formdataUploadLimitInKB - specifies the max size (in KiB) of\n         form data (application/x-www-form-urlencoded) sent via\n         POST. 
You can use POST to pass request parameters not\n         fitting into the URL.\n\n         addHttpRequestToContext - if set to true, it will instruct\n         the requestParsers to include the original HttpServletRequest\n         object in the context map of the SolrQueryRequest under the\n         key \"httpRequest\". It will not be used by any of the existing\n         Solr components, but may be useful when developing custom\n         plugins.\n\n         *** WARNING ***\n         The settings below authorize Solr to fetch remote files. You\n         should make sure your system has some authentication before\n         using enableRemoteStreaming=\"true\"\n\n      -->\n    <requestParsers enableRemoteStreaming=\"true\"\n                    multipartUploadLimitInKB=\"2048000\"\n                    formdataUploadLimitInKB=\"2048\"\n                    addHttpRequestToContext=\"false\"/>\n\n    <!-- HTTP Caching\n\n         Set HTTP caching related parameters (for proxy caches and clients).\n\n         The options below instruct Solr not to output any HTTP Caching\n         related headers\n      -->\n    <httpCaching never304=\"true\" />\n    <!-- If you include a <cacheControl> directive, it will be used to\n         generate a Cache-Control header (as well as an Expires header\n         if the value contains \"max-age=\")\n\n         By default, no Cache-Control header is generated.\n\n         You can use the <cacheControl> option even if you have set\n         never304=\"true\"\n      -->\n    <!--\n       <httpCaching never304=\"true\" >\n         <cacheControl>max-age=30, public</cacheControl>\n       </httpCaching>\n      -->\n    <!-- To enable Solr to respond with automatically generated HTTP\n         Caching headers, and to respond to Cache Validation requests\n         correctly, set the value of never304=\"false\"\n\n         This will cause Solr to generate Last-Modified and ETag\n         headers based on the properties of the Index.\n\n         
The following options can also be specified to affect the\n         values of these headers...\n\n         lastModFrom - the default value is \"openTime\" which means the\n         Last-Modified value (and validation against If-Modified-Since\n         requests) will all be relative to when the current Searcher\n         was opened.  You can change it to lastModFrom=\"dirLastMod\" if\n         you want the value to exactly correspond to when the physical\n         index was last modified.\n\n         etagSeed=\"...\" is an option you can change to force the ETag\n         header (and validation against If-None-Match requests) to be\n         different even if the index has not changed (ie: when making\n         significant changes to your config file)\n\n         (lastModFrom and etagSeed are both ignored if you use\n         the never304=\"true\" option)\n      -->\n    <!--\n       <httpCaching lastModFrom=\"openTime\"\n                    etagSeed=\"Solr\">\n         <cacheControl>max-age=30, public</cacheControl>\n       </httpCaching>\n      -->\n  </requestDispatcher>\n\n  <!-- Request Handlers\n\n       http://wiki.apache.org/solr/SolrRequestHandler\n\n       Incoming queries will be dispatched to a specific handler by name\n       based on the path specified in the request.\n\n       Legacy behavior: If the request path uses \"/select\" but no Request\n       Handler has that name, and if handleSelect=\"true\" has been specified in\n       the requestDispatcher, then the Request Handler is dispatched based on\n       the qt parameter.  
Handlers without a leading '/' are accessed\n       like so: http://host/app/[core/]select?qt=name  If no qt is\n       given, then the requestHandler that declares default=\"true\" will be\n       used or the one named \"standard\".\n\n       If a Request Handler is declared with startup=\"lazy\", then it will\n       not be initialized until the first request that uses it.\n\n    -->\n  <!-- SearchHandler\n\n       http://wiki.apache.org/solr/SearchHandler\n\n       For processing Search Queries, the primary Request Handler\n       provided with Solr is \"SearchHandler\". It delegates to a sequence\n       of SearchComponents (see below) and supports distributed\n       queries across multiple shards\n    -->\n  <!--<requestHandler name=\"/select\" class=\"solr.SearchHandler\">-->\n    <!-- default values for query parameters can be specified, these\n         will be overridden by parameters in the request\n      -->\n     <!--<lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>\n       <int name=\"rows\">10</int>\n     </lst>-->\n    <!-- In addition to defaults, \"appends\" params can be specified\n         to identify values which should be appended to the list of\n         multi-val params from the query (or the existing \"defaults\").\n      -->\n    <!-- In this example, the param \"fq=inStock:true\" would be appended to\n         any query time fq params the user may specify, as a mechanism for\n         partitioning the index, independent of any user selected filtering\n         that may also be desired (perhaps as a result of faceted searching).\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"appends\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"appends\">\n         <str name=\"fq\">inStock:true</str>\n       </lst>\n      -->\n    <!-- \"invariants\" are a way of letting the Solr 
maintainer lock down\n         the options available to Solr clients.  Any params values\n         specified here are used regardless of what values may be specified\n         in either the query, the \"defaults\", or the \"appends\" params.\n\n         In this example, the facet.field and facet.query params would\n         be fixed, limiting the facets clients can use.  Faceting is\n         not turned on by default - but if the client does specify\n         facet=true in the request, these are the only facets they\n         will be able to see counts for; regardless of what other\n         facet.field or facet.query params they may specify.\n\n         NOTE: there is *absolutely* nothing a client can do to prevent these\n         \"invariants\" values from being used, so don't use this mechanism\n         unless you are sure you always want it.\n      -->\n    <!--\n       <lst name=\"invariants\">\n         <str name=\"facet.field\">cat</str>\n         <str name=\"facet.field\">manu_exact</str>\n         <str name=\"facet.query\">price:[* TO 500]</str>\n         <str name=\"facet.query\">price:[500 TO *]</str>\n       </lst>\n      -->\n    <!-- If the default list of SearchComponents is not desired, that\n         list can either be overridden completely, or components can be\n         prepended or appended to the default list.  
(see below)\n      -->\n    <!--\n       <arr name=\"components\">\n         <str>nameOfCustomComponent1</str>\n         <str>nameOfCustomComponent2</str>\n       </arr>\n      -->\n    <!--</requestHandler>-->\n\n  <!-- A request handler that returns indented JSON by default -->\n  <requestHandler name=\"/query\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>\n       <str name=\"wt\">json</str>\n       <str name=\"indent\">true</str>\n       <str name=\"df\">text</str>\n     </lst>\n  </requestHandler>\n\n  <!--\n    The export request handler is used to export full sorted result sets.\n    Do not change these defaults.\n  -->\n\n  <requestHandler name=\"/export\" class=\"solr.SearchHandler\">\n    <lst name=\"invariants\">\n      <str name=\"rq\">{!xport}</str>\n      <str name=\"wt\">xsort</str>\n      <str name=\"distrib\">false</str>\n    </lst>\n\n    <arr name=\"components\">\n      <str>query</str>\n    </arr>\n  </requestHandler>\n\n  <!-- A Robust Example\n\n       This example SearchHandler declaration shows off usage of the\n       SearchHandler with many defaults declared\n\n       Note that multiple instances of the same Request Handler\n       (SearchHandler) can be registered multiple times with different\n       names (and different init parameters)\n    -->\n  <!--\n  <requestHandler name=\"/browse\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"echoParams\">explicit</str>-->\n\n       <!-- VelocityResponseWriter settings -->\n       <!--<str name=\"wt\">velocity</str>\n\n       <str name=\"v.template\">browse</str>\n       <str name=\"v.layout\">layout</str>\n       <str name=\"title\">Solritas</str>\n\n       <str name=\"defType\">edismax</str>\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n          title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0\n       </str>\n       <str 
name=\"mm\">100%</str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n       <str name=\"mlt.qf\">\n         text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"mlt.fl\">text,features,name,sku,id,manu,cat</str>\n       <int name=\"mlt.count\">3</int>\n\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n\n       <str name=\"facet\">on</str>\n       <str name=\"facet.field\">cat</str>\n       <str name=\"facet.field\">manu_exact</str>\n       <str name=\"facet.query\">ipod</str>\n       <str name=\"facet.query\">GB</str>\n       <str name=\"facet.mincount\">1</str>\n       <str name=\"facet.pivot\">cat,inStock</str>\n       <str name=\"facet.range.other\">after</str>\n       <str name=\"facet.range\">price</str>\n       <int name=\"f.price.facet.range.start\">0</int>\n       <int name=\"f.price.facet.range.end\">600</int>\n       <int name=\"f.price.facet.range.gap\">50</int>\n       <str name=\"facet.range\">popularity</str>\n       <int name=\"f.popularity.facet.range.start\">0</int>\n       <int name=\"f.popularity.facet.range.end\">10</int>\n       <int name=\"f.popularity.facet.range.gap\">3</int>\n       <str name=\"facet.range\">manufacturedate_dt</str>\n       <str name=\"f.manufacturedate_dt.facet.range.start\">NOW/YEAR-10YEARS</str>\n       <str name=\"f.manufacturedate_dt.facet.range.end\">NOW</str>\n       <str name=\"f.manufacturedate_dt.facet.range.gap\">+1YEAR</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">before</str>\n       <str name=\"f.manufacturedate_dt.facet.range.other\">after</str>-->\n\n       <!-- Highlighting defaults -->\n       <!--<str name=\"hl\">on</str>\n       <str name=\"hl.fl\">text features name</str>\n       <str name=\"f.name.hl.fragsize\">0</str>\n       <str name=\"f.name.hl.alternateField\">name</str>\n     </lst>\n     <arr 
name=\"last-components\">\n       <str>spellcheck</str>\n     </arr>-->\n     <!--\n     <str name=\"url-scheme\">httpx</str>\n     -->\n  <!--</requestHandler>-->\n  <!-- trivia: the requestHandler name \"pinkPony\" was agreed between the Search API and the\n    apachesolr maintainers. The decision was taken during the DrupalCon Munich code sprint.\n    -->\n  <requestHandler name=\"pinkPony\" class=\"solr.SearchHandler\" default=\"true\">\n    <lst name=\"defaults\">\n      <str name=\"defType\">edismax</str>\n      <str name=\"df\">content</str>\n      <str name=\"echoParams\">explicit</str>\n      <bool name=\"omitHeader\">true</bool>\n      <float name=\"tie\">0.01</float>\n      <!-- Don't abort searches for the pinkPony request handler (set in solrcore.properties) -->\n      <int name=\"timeAllowed\">${solr.pinkPony.timeAllowed:-1}</int>\n      <str name=\"q.alt\">*:*</str>\n\n      <!-- By default, don't spell check -->\n      <str name=\"spellcheck\">false</str>\n      <!-- Defaults for the spell checker when used -->\n      <str name=\"spellcheck.onlyMorePopular\">true</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <!--  The number of suggestions to return -->\n      <str name=\"spellcheck.count\">1</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- The more like this handler offers many advantages over the standard handler\n     when performing moreLikeThis requests.-->\n  <requestHandler name=\"mlt\" class=\"solr.MoreLikeThisHandler\">\n    <lst name=\"defaults\">\n      <str name=\"df\">content</str>\n      <str name=\"mlt.mintf\">1</str>\n      <str name=\"mlt.mindf\">1</str>\n      <str name=\"mlt.minwl\">3</str>\n      <str name=\"mlt.maxwl\">15</str>\n      <str name=\"mlt.maxqt\">20</str>\n      <str name=\"mlt.match.include\">false</str>\n      <!-- Abort any searches longer than 2 seconds (set in solrcore.properties) 
-->\n      <int name=\"timeAllowed\">${solr.mlt.timeAllowed:2000}</int>\n    </lst>\n  </requestHandler>\n\n  <!-- A minimal query type for doing Lucene queries -->\n  <requestHandler name=\"standard\" class=\"solr.SearchHandler\">\n     <lst name=\"defaults\">\n       <str name=\"df\">content</str>\n       <str name=\"echoParams\">explicit</str>\n       <bool name=\"omitHeader\">true</bool>\n     </lst>\n  </requestHandler>\n\n  <!-- Update Request Handler.\n\n       http://wiki.apache.org/solr/UpdateXmlMessages\n\n       The canonical Request Handler for Modifying the Index through\n       commands specified using XML, JSON, CSV, or JAVABIN\n\n       Note: Since Solr 1.1, requestHandlers require a valid content\n       type header if posted in the body. For example, curl now\n       requires: -H 'Content-type:text/xml; charset=utf-8'\n\n       To override the request content type and force a specific\n       Content-type, use the request parameter:\n         ?update.contentType=text/csv\n\n       This handler will pick a response format to match the input\n       if the 'wt' parameter is not explicit\n    -->\n  <!--<requestHandler name=\"/update\" class=\"solr.UpdateRequestHandler\">\n  </requestHandler>-->\n  <initParams path=\"/update/**,/query,/select,/tvrh,/elevate,/spell,/browse\">\n    <lst name=\"defaults\">\n      <str name=\"df\">text</str>\n    </lst>\n  </initParams>\n\n  <initParams path=\"/update/json/docs\">\n    <lst name=\"defaults\">\n      <!--this ensures that the entire json doc will be stored verbatim into one field-->\n      <str name=\"srcField\">_src_</str>\n      <!--This means the uniqueKeyField will be extracted from the fields and\n       all fields go into the 'df' field. 
In this config df is already configured to be 'text'\n        -->\n      <str name=\"mapUniqueKeyOnly\">true</str>\n    </lst>\n\n  </initParams>\n\n  <!-- CSV Update Request Handler\n       http://wiki.apache.org/solr/UpdateCSV\n    -->\n  <requestHandler name=\"/update/csv\"\n                  class=\"solr.CSVRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- JSON Update Request Handler\n       http://wiki.apache.org/solr/UpdateJSON\n    -->\n  <requestHandler name=\"/update/json\"\n                  class=\"solr.JsonUpdateRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- Solr Cell Update Request Handler\n\n       http://wiki.apache.org/solr/ExtractingRequestHandler\n\n    -->\n  <requestHandler name=\"/update/extract\"\n                  startup=\"lazy\"\n                  class=\"solr.extraction.ExtractingRequestHandler\" >\n    <lst name=\"defaults\">\n      <!-- All the main content goes into \"text\"... if you need to return\n           the extracted text or do highlighting, use a stored field. -->\n      <str name=\"fmap.content\">text</str>\n      <str name=\"lowernames\">true</str>\n      <str name=\"uprefix\">ignored_</str>\n\n      <!-- capture link hrefs but ignore div attributes -->\n      <str name=\"captureAttr\">true</str>\n      <str name=\"fmap.a\">links</str>\n      <str name=\"fmap.div\">ignored_</str>\n    </lst>\n  </requestHandler>\n\n  <!-- XSLT Update Request Handler\n       Transforms incoming XML with stylesheet identified by tr=\n  -->\n  <requestHandler name=\"/update/xslt\"\n                   startup=\"lazy\"\n                   class=\"solr.XsltUpdateRequestHandler\"/>\n\n  <!-- Field Analysis Request Handler\n\n       RequestHandler that provides much the same functionality as\n       analysis.jsp. 
Provides the ability to specify multiple field\n       types and field names in the same request and outputs\n       index-time and query-time analysis for each of them.\n\n       Request parameters are:\n       analysis.fieldname - field name whose analyzers are to be used\n\n       analysis.fieldtype - field type whose analyzers are to be used\n       analysis.fieldvalue - text for index-time analysis\n       q (or analysis.q) - text for query time analysis\n       analysis.showmatch (true|false) - When set to true and when\n           query analysis is performed, the produced tokens of the\n           field value analysis will be marked as \"matched\" for every\n           token that is produces by the query analysis\n   -->\n  <requestHandler name=\"/analysis/field\"\n                  startup=\"lazy\"\n                  class=\"solr.FieldAnalysisRequestHandler\" />\n\n  <!-- Document Analysis Handler\n\n       http://wiki.apache.org/solr/AnalysisRequestHandler\n\n       An analysis handler that provides a breakdown of the analysis\n       process of provided documents. This handler expects a (single)\n       content stream with the following format:\n\n       <docs>\n         <doc>\n           <field name=\"id\">1</field>\n           <field name=\"name\">The Name</field>\n           <field name=\"text\">The Text Value</field>\n         </doc>\n         <doc>...</doc>\n         <doc>...</doc>\n         ...\n       </docs>\n\n    Note: Each document must contain a field which serves as the\n    unique key. This key is used in the returned response to associate\n    an analysis breakdown to the analyzed document.\n\n    Like the FieldAnalysisRequestHandler, this handler also supports\n    query analysis by sending either an \"analysis.query\" or \"q\"\n    request parameter that holds the query text to be analyzed. 
It\n    also supports the \"analysis.showmatch\" parameter which when set to\n    true, all field tokens that match the query tokens will be marked\n    as a \"match\".\n  -->\n  <requestHandler name=\"/analysis/document\"\n                  class=\"solr.DocumentAnalysisRequestHandler\"\n                  startup=\"lazy\" />\n\n  <!-- Admin Handlers\n\n       As of Solr 5.0.0, the \"/admin/\" handlers are registered implicitly.\n    -->\n  <!-- <requestHandler name=\"/admin/\" class=\"solr.admin.AdminHandlers\" /> -->\n  <!-- This single handler is equivalent to the following... -->\n  <!--\n     <requestHandler name=\"/admin/luke\"       class=\"solr.admin.LukeRequestHandler\" />\n     <requestHandler name=\"/admin/system\"     class=\"solr.admin.SystemInfoHandler\" />\n     <requestHandler name=\"/admin/plugins\"    class=\"solr.admin.PluginInfoHandler\" />\n     <requestHandler name=\"/admin/threads\"    class=\"solr.admin.ThreadDumpHandler\" />\n     <requestHandler name=\"/admin/properties\" class=\"solr.admin.PropertiesRequestHandler\" />\n     <requestHandler name=\"/admin/file\"       class=\"solr.admin.ShowFileRequestHandler\" >\n    -->\n  <!-- If you wish to hide files under ${solr.home}/conf, explicitly\n       register the ShowFileRequestHandler using the definition below.\n       NOTE: The glob pattern ('*') is the only pattern supported at present, *.xml will\n             not exclude all files ending in '.xml'. Use it to exclude _all_ updates\n    -->\n  <!--\n     <requestHandler name=\"/admin/file\"\n                     class=\"solr.admin.ShowFileRequestHandler\" >\n       <lst name=\"invariants\">\n         <str name=\"hidden\">synonyms.txt</str>\n         <str name=\"hidden\">anotherfile.txt</str>\n         <str name=\"hidden\">*</str>\n       </lst>\n     </requestHandler>\n    -->\n  <!--\n    Enabling this request handler (which is NOT a default part of the admin handler) will allow the Solr UI to edit\n    all the config files. 
This is intended for secure/development use ONLY! Leaving available and publically\n    accessible is a security vulnerability and should be done with extreme caution!\n  -->\n  <!--\n  <requestHandler name=\"/admin/fileedit\" class=\"solr.admin.EditFileRequestHandler\" >\n    <lst name=\"invariants\">\n      <str name=\"qt\">pinkPony</str>\n      <str name=\"q\">solrpingquery</str>\n      <str name=\"omitHeader\">false</str>\n    </lst>\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">all</str>\n    </lst>\n    <!- An optional feature of the PingRequestHandler is to configure the\n         handler with a \"healthcheckFile\" which can be used to enable/disable\n         the PingRequestHandler.\n         relative paths are resolved against the data dir\n    -->\n    <!-- <str name=\"healthcheckFile\">server-enabled.txt</str> -->\n  <!-- </requestHandler>\n  -->\n\n  <!-- Echo the request contents back to the client -->\n  <requestHandler name=\"/debug/dump\" class=\"solr.DumpRequestHandler\" >\n    <lst name=\"defaults\">\n     <str name=\"echoParams\">explicit</str>\n     <str name=\"echoHandler\">true</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Solr Replication\n\n       The SolrReplicationHandler supports replicating indexes from a\n       \"master\" used for indexing and \"slaves\" used for queries.\n\n       http://wiki.apache.org/solr/SolrReplication\n\n       In the example below, remove the <lst name=\"master\"> section if\n       this is just a slave and remove  the <lst name=\"slave\"> section\n       if this is just a master.\n  -->\n  <requestHandler name=\"/replication\" class=\"solr.ReplicationHandler\" >\n    <lst name=\"master\">\n      <str name=\"enable\">${solr.replication.master:false}</str>\n      <str name=\"replicateAfter\">commit</str>\n      <str name=\"replicateAfter\">startup</str>\n      <str 
name=\"confFiles\">${solr.replication.confFiles:schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml}</str>\n    </lst>\n    <lst name=\"slave\">\n      <str name=\"enable\">${solr.replication.slave:false}</str>\n      <str name=\"masterUrl\">${solr.replication.masterUrl:http://localhost:8983/solr}/replication</str>\n      <str name=\"pollInterval\">${solr.replication.pollInterval:00:00:60}</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Realtime get handler, guaranteed to return the latest stored fields of\n       any document, without the need to commit or open a new searcher.  The\n       current implementation relies on the updateLog feature being enabled.\n  -->\n  <requestHandler name=\"/get\" class=\"solr.RealTimeGetHandler\">\n    <lst name=\"defaults\">\n      <str name=\"omitHeader\">true</str>\n      <str name=\"wt\">json</str>\n      <str name=\"indent\">true</str>\n    </lst>\n  </requestHandler>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of 
the standard names,\n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n\n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n\n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\"\n\n     -->\n\n  <!-- A request handler for demonstrating the spellcheck component.\n\n       NOTE: This is purely as an example.  The whole purpose of the\n       SpellCheckComponent is to hook it into the request handler that\n       handles your normal user queries so that a separate request is\n       not needed to get suggestions.\n\n       IN OTHER WORDS, THERE IS REALLY GOOD CHANCE THE SETUP BELOW IS\n       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!\n\n       See http://wiki.apache.org/solr/SpellCheckComponent for details\n       on the request parameters.\n    -->\n  <requestHandler name=\"/spell\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <!-- Solr will use suggestions from both the 'default' spellchecker\n           and from the 'wordbreak' spellchecker and combine them.\n           collations (re-written queries) can include a combination of\n           corrections from both spellcheckers -->\n      <str name=\"spellcheck.dictionary\">default</str>\n      <str name=\"spellcheck.dictionary\">wordbreak</str>\n      <str name=\"spellcheck.onlyMorePopular\">false</str>\n      <str name=\"spellcheck.extendedResults\">false</str>\n      <str name=\"spellcheck.count\">1</str>\n      <str name=\"spellcheck.alternativeTermCount\">5</str>\n      <str name=\"spellcheck.maxResultsForSuggest\">5</str>\n      <str name=\"spellcheck.collate\">true</str>\n      <str name=\"spellcheck.collateExtendedResults\">true</str>\n      <str 
name=\"spellcheck.maxCollationTries\">10</str>\n      <str name=\"spellcheck.maxCollations\">5</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>spellcheck</str>\n    </arr>\n  </requestHandler>\n\n  <!-- This is disabled by default because it currently causes long startup times on\n       big indexes, even when never used.  See SOLR-6679 for background.\n\n       To use this suggester, set the \"solr.suggester.enabled=true\" system property\n    -->\n  <searchComponent name=\"suggest\" class=\"solr.SuggestComponent\"\n                   enable=\"${solr.suggester.enabled:false}\"     >\n    <lst name=\"suggester\">\n      <str name=\"name\">mySuggester</str>\n      <str name=\"lookupImpl\">FuzzyLookupFactory</str>\n      <str name=\"dictionaryImpl\">DocumentDictionaryFactory</str>\n      <str name=\"field\">cat</str>\n      <str name=\"weightField\">price</str>\n      <str name=\"suggestAnalyzerFieldType\">string</str>\n    </lst>\n  </searchComponent>\n\n  <requestHandler name=\"/suggest\" class=\"solr.SearchHandler\"\n                  startup=\"lazy\" enable=\"${solr.suggester.enabled:false}\" >\n    <lst name=\"defaults\">\n      <str name=\"suggest\">true</str>\n      <str name=\"suggest.count\">10</str>\n    </lst>\n    <arr name=\"components\">\n      <str>suggest</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Term Vector Component\n\n       http://wiki.apache.org/solr/TermVectorComponent\n    -->\n  <searchComponent name=\"tvComponent\" class=\"solr.TermVectorComponent\"/>\n\n  <!-- A request handler for demonstrating the term vector component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your\n       already specified request handlers.\n    -->\n  <requestHandler name=\"/tvrh\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <bool name=\"tv\">true</bool>\n    </lst>\n    <arr name=\"last-components\">\n      <str>tvComponent</str>\n    </arr>\n  
</requestHandler>\n\n  <!-- Clustering Component\n\n       http://wiki.apache.org/solr/ClusteringComponent\n\n       This relies on third party jars which are notincluded in the\n       release.  To use this component (and the \"/clustering\" handler)\n       Those jars will need to be downloaded, and you'll need to set\n       the solr.cluster.enabled system property when running solr...\n\n          java -Dsolr.clustering.enabled=true -jar start.jar\n    -->\n  <!-- <searchComponent name=\"clustering\"\n                   enable=\"${solr.clustering.enabled:false}\"\n                   class=\"solr.clustering.ClusteringComponent\" > -->\n    <!-- Declare an engine -->\n    <!--<lst name=\"engine\">-->\n      <!-- The name, only one can be named \"default\" -->\n      <!--<str name=\"name\">default</str>-->\n\n      <!-- Class name of Carrot2 clustering algorithm.\n\n           Currently available algorithms are:\n\n           * org.carrot2.clustering.lingo.LingoClusteringAlgorithm\n           * org.carrot2.clustering.stc.STCClusteringAlgorithm\n           * org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm\n\n           See http://project.carrot2.org/algorithms.html for the\n           algorithm's characteristics.\n        -->\n      <!--<str name=\"carrot.algorithm\">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>-->\n\n      <!-- Overriding values for Carrot2 default algorithm attributes.\n\n           For a description of all available attributes, see:\n           http://download.carrot2.org/stable/manual/#chapter.components.\n           Use attribute key as name attribute of str elements\n           below. 
These can be further overridden for individual\n           requests by specifying attribute key as request parameter\n           name and attribute value as parameter value.\n        -->\n      <!--<str name=\"LingoClusteringAlgorithm.desiredClusterCountBase\">20</str>-->\n\n      <!-- Location of Carrot2 lexical resources.\n\n           A directory from which to load Carrot2-specific stop words\n           and stop labels. Absolute or relative to Solr config directory.\n           If a specific resource (e.g. stopwords.en) is present in the\n           specified dir, it will completely override the corresponding\n           default one that ships with Carrot2.\n\n           For an overview of Carrot2 lexical resources, see:\n           http://download.carrot2.org/head/manual/#chapter.lexical-resources\n        -->\n      <!--<str name=\"carrot.lexicalResourcesDir\">clustering/carrot2</str>-->\n\n      <!-- The language to assume for the documents.\n\n           For a list of allowed values, see:\n           http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage\n       -->\n      <!--<str name=\"MultilingualClustering.defaultLanguage\">ENGLISH</str>\n    </lst>\n    <lst name=\"engine\">\n      <str name=\"name\">stc</str>\n      <str name=\"carrot.algorithm\">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>\n    </lst>\n  </searchComponent>-->\n\n  <!-- A request handler for demonstrating the clustering component\n\n       This is purely as an example.\n\n       In reality you will likely want to add the component to your\n       already specified request handlers.\n    -->\n  <!--<requestHandler name=\"/clustering\"\n                  startup=\"lazy\"\n                  enable=\"${solr.clustering.enabled:false}\"\n                  class=\"solr.SearchHandler\">\n    <lst name=\"defaults\">\n      <bool name=\"clustering\">true</bool>\n      <str name=\"clustering.engine\">default</str>\n      <bool 
name=\"clustering.results\">true</bool>-->\n      <!-- The title field -->\n      <!--<str name=\"carrot.title\">name</str>-->\n      <!--<str name=\"carrot.url\">id</str>-->\n      <!-- The field to cluster on -->\n       <!--<str name=\"carrot.snippet\">features</str>-->\n       <!-- produce summaries -->\n       <!--<bool name=\"carrot.produceSummary\">true</bool>-->\n       <!-- the maximum number of labels per cluster -->\n       <!--<int name=\"carrot.numDescriptions\">5</int>-->\n       <!-- produce sub clusters -->\n       <!--<bool name=\"carrot.outputSubClusters\">false</bool>-->\n\n       <!--<str name=\"defType\">edismax</str>\n       <str name=\"qf\">\n          text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4\n       </str>\n       <str name=\"q.alt\">*:*</str>\n       <str name=\"rows\">10</str>\n       <str name=\"fl\">*,score</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>clustering</str>\n    </arr>\n  </requestHandler>-->\n\n  <!-- Terms Component\n\n       http://wiki.apache.org/solr/TermsComponent\n\n       A component to return terms and document frequency of those\n       terms\n    -->\n  <searchComponent name=\"terms\" class=\"solr.TermsComponent\"/>\n\n  <!-- A request handler for demonstrating the terms component -->\n  <requestHandler name=\"/terms\" class=\"solr.SearchHandler\" startup=\"lazy\">\n     <lst name=\"defaults\">\n      <bool name=\"terms\">true</bool>\n    </lst>\n    <arr name=\"components\">\n      <str>terms</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Query Elevation Component\n\n       http://wiki.apache.org/solr/QueryElevationComponent\n\n       a search component that enables you to configure the top\n       results for a given query regardless of the normal lucene\n       scoring.\n    -->\n  <searchComponent name=\"elevator\" class=\"solr.QueryElevationComponent\" >\n    <!-- pick a fieldType to analyze queries -->\n    <str name=\"queryFieldType\">string</str>\n    <str 
name=\"config-file\">elevate.xml</str>\n  </searchComponent>\n\n  <!-- A request handler for demonstrating the elevator component -->\n  <requestHandler name=\"/elevate\" class=\"solr.SearchHandler\" startup=\"lazy\">\n    <lst name=\"defaults\">\n      <str name=\"echoParams\">explicit</str>\n    </lst>\n    <arr name=\"last-components\">\n      <str>elevator</str>\n    </arr>\n  </requestHandler>\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\"\n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter\n           (for sentence extraction)\n        -->\n      <fragmenter name=\"regex\"\n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      </fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\"\n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<strong>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</strong>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure 
the standard encoder -->\n      <encoder name=\"html\"\n               class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\"\n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\"\n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!--\n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawgreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n\n      <boundaryScanner name=\"default\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? 
&#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n\n      <boundaryScanner name=\"breakIterator\"\n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing Locale object.  -->\n          <!-- And the Locale object will be used when getting instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    -->\n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  
This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.\n\n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Language identification\n\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. 
No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages form full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n    <!--\n     <updateRequestProcessorChain name=\"langid\">\n       <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n         <str name=\"langid.fl\">text,title,subject,description</str>\n         <str name=\"langid.langField\">language_s</str>\n         <str name=\"langid.fallback\">en</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     <queryResponseWriter name=\"xml\"\n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n    -->\n\n  
<queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n     <!-- For the purposes of the tutorial, JSON responses are written as\n      plain text so that they are easy to read in *any* browser.\n      If you expect a MIME type of \"application/json\" just remove this override.\n     -->\n    <str name=\"content-type\">text/plain; charset=UTF-8</str>\n  </queryResponseWriter>\n\n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!-- <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" enable=\"${solr.velocity.enabled:true}\"/> -->\n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       http://wiki.apache.org/solr/SolrQuerySyntax\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\"\n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n  
<!-- Legacy config for the admin interface -->\n  <admin>\n    <defaultQuery>*:*</defaultQuery>\n\n    <!-- configure a healthcheck file for servers behind a\n         loadbalancer\n      -->\n    <!--\n       <healthcheck type=\"file\">server-enabled</healthcheck>\n      -->\n  </admin>\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  <xi:include href=\"solrconfig_extra.xml\" xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n    <xi:fallback>\n    <!-- Spell Check\n\n        The spell check component can return a list of alternative spelling\n        suggestions. This component must be defined in\n        solrconfig_extra.xml if present, since it's used in the search handler.\n\n        http://wiki.apache.org/solr/SpellCheckComponent\n     -->\n    <searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n    <str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n    <!-- a spellchecker built from a field of the main index -->\n      <lst name=\"spellchecker\">\n        <str name=\"name\">default</str>\n        <str name=\"field\">spell</str>\n        <str name=\"spellcheckIndexDir\">spellchecker</str>\n        <str name=\"buildOnOptimize\">true</str>\n      </lst>\n    </searchComponent>\n    </xi:fallback>\n  </xi:include>\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/solrconfig_extra.xml",
    "content": "<!-- Spell Check\n\n    The spell check component can return a list of alternative spelling\n    suggestions.\n\n    http://wiki.apache.org/solr/SpellCheckComponent\n -->\n<searchComponent name=\"spellcheck\" class=\"solr.SpellCheckComponent\">\n\n<str name=\"queryAnalyzerFieldType\">textSpell</str>\n\n<!-- Multiple \"Spell Checkers\" can be declared and used by this\n     component\n  -->\n\n<!-- a spellchecker built from a field of the main index, and\n     written to disk\n  -->\n<lst name=\"spellchecker\">\n  <str name=\"name\">default</str>\n  <str name=\"field\">spell</str>\n  <str name=\"spellcheckIndexDir\">spellchecker</str>\n  <str name=\"buildOnOptimize\">true</str>\n  <!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary\n    <float name=\"thresholdTokenFrequency\">.01</float>\n  -->\n</lst>\n\n<!--\n  Adding German spellhecker index to our Solr index\n  This also requires to enable the content in schema_extra_types.xml and schema_extra_fields.xml\n-->\n<!--\n<lst name=\"spellchecker\">\n  <str name=\"name\">spellchecker_de</str>\n  <str name=\"field\">spell_de</str>\n  <str name=\"spellcheckIndexDir\">./spellchecker_de</str>\n  <str name=\"buildOnOptimize\">true</str>\n</lst>\n-->\n\n<!-- a spellchecker that uses a different distance measure -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">jarowinkler</str>\n     <str name=\"field\">spell</str>\n     <str name=\"distanceMeasure\">\n       org.apache.lucene.search.spell.JaroWinklerDistance\n     </str>\n     <str name=\"spellcheckIndexDir\">spellcheckerJaro</str>\n   </lst>\n -->\n\n<!-- a spellchecker that use an alternate comparator\n\n     comparatorClass be one of:\n      1. score (default)\n      2. freq (Frequency first, then score)\n      3. 
A fully qualified class name\n  -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"name\">freq</str>\n     <str name=\"field\">lowerfilt</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFreq</str>\n     <str name=\"comparatorClass\">freq</str>\n     <str name=\"buildOnCommit\">true</str>\n  -->\n\n<!-- A spellchecker that reads the list of words from a file -->\n<!--\n   <lst name=\"spellchecker\">\n     <str name=\"classname\">solr.FileBasedSpellChecker</str>\n     <str name=\"name\">file</str>\n     <str name=\"sourceLocation\">spellings.txt</str>\n     <str name=\"characterEncoding\">UTF-8</str>\n     <str name=\"spellcheckIndexDir\">spellcheckerFile</str>\n   </lst>\n  -->\n</searchComponent>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:9077/solr\nsolr.replication.confFiles=schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml\nsolr.mlt.timeAllowed=2000\n# You should not set your luceneMatchVersion to anything lower than your Solr\n# version.\nsolr.luceneMatchVersion=6.0\nsolr.pinkPony.timeAllowed=-1\n# autoCommit after 10000 docs\nsolr.autoCommit.MaxDocs=10000\n# autoCommit after 2 minutes\nsolr.autoCommit.MaxTime=120000\n# autoSoftCommit after 2000 docs\nsolr.autoSoftCommit.MaxDocs=2000\n# autoSoftCommit after 10 seconds\nsolr.autoSoftCommit.MaxTime=10000\nsolr.contrib.dir=../../../contrib\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/stopwords.txt",
    "content": "# Contains words which shouldn't be indexed for fulltext fields, e.g., because\n# they're too common. For documentation of the format, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.StopFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal7/synonyms.txt",
    "content": "# Contains synonyms to use for your index. For the format used, see\n# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory\n# (Lines starting with a pound character # are ignored.)\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal8/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. Search API generally constructs item IDs (esp. for entities) as:\n     $document->id = \"$site_hash-$index_id-$datasource:$entity_id:$language_id\";\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #789 first in searches for \"example query\": -->\n<!--\n  <query text=\"example query\">\n    <doc id=\"ab12cd34-site_index-entity:789:en\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal8/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n<!DOCTYPE schema [\n  <!ENTITY extrafields SYSTEM \"schema_extra_fields.xml\">\n  <!ENTITY extratypes SYSTEM \"schema_extra_types.xml\">\n]>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n This example schema is the recommended starting point for users.\n It should be kept correct and concise, usable out-of-the-box.\n\n For more information, on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n\n PERFORMANCE NOTE: this schema includes many optional features and should not\n be used for benchmarking.  
To improve performance one could\n  - set stored=\"false\" for all fields possible (esp large fields) when you\n    only need to search on the field but don't need to return the original\n    value.\n  - set indexed=\"false\" if you don't need to search on the field, but only\n    return the field as a result of searching on other indexed fields.\n  - remove all unneeded copyField statements\n  - for best index size and searching performance, set \"index\" to false\n    for all general text fields, use copyField to copy them to the\n    catchall \"text\" field, and use that for searching.\n  - For maximum indexing performance, use the ConcurrentUpdateSolrServer\n    java client.\n  - Remember to run the JVM in server mode, and use a higher logging level\n    that avoids logging every request\n-->\n\n<schema name=\"drupal-SEARCH_API_SOLR_MIN_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" version=\"1.6\">\n  <!-- attribute \"name\" is the name of this schema and is only used for display purposes.\n       version=\"x.y\" is Solr's version number for the schema syntax and\n       semantics.  It should not normally be changed by applications.\n\n       1.0: multiValued attribute did not exist, all fields are multiValued\n            by nature\n       1.1: multiValued attribute introduced, false by default\n       1.2: omitTermFreqAndPositions attribute introduced, true by default\n            except for text fields.\n       1.3: removed optional field compress feature\n       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser\n            behavior when a single string produces multiple tokens.  
Defaults\n            to off for version >= 1.4\n       1.5: omitNorms defaults to true for primitive field types\n            (int, float, boolean, string...)\n       1.6: useDocValuesAsStored defaults to true.\n     -->\n\n\n  <!-- Valid attributes for fields:\n    name: mandatory - the name for the field\n    type: mandatory - the name of a field type from the\n      fieldTypes\n    indexed: true if this field should be indexed (searchable or sortable)\n    stored: true if this field should be retrievable\n    docValues: true if this field should have doc values. Doc values are\n      useful (required, if you are using *Point fields) for faceting,\n      grouping, sorting and function queries. Doc values will make the index\n      faster to load, more NRT-friendly and more memory-efficient.\n      They however come with some limitations: they are currently only\n      supported by StrField, UUIDField, all *PointFields, and depending\n      on the field type, they might require the field to be single-valued,\n      be required or have a default value (check the documentation\n      of the field type you're interested in for more information)\n    multiValued: true if this field may contain multiple values per document\n    omitNorms: (expert) set to true to omit the norms associated with\n      this field (this disables length normalization and index-time\n      boosting for the field, and saves some memory).  Only full-text\n      fields or fields that need an index-time boost need norms.\n      Norms are omitted for primitive (non-analyzed) types by default.\n    termVectors: [false] set to true to store the term vector for a\n      given field.\n      When using MoreLikeThis, fields used for similarity should be\n      stored for best performance.\n    termPositions: Store position information with the term vector.\n      This will increase storage costs.\n    termOffsets: Store offset information with the term vector. 
This\n      will increase storage costs.\n    termPayloads: Store payload information with the term vector. This\n      will increase storage costs.\n    required: The field is required.  It will throw an error if the\n      value does not exist\n    default: a value that should be used if no value is specified\n      when adding a document.\n  -->\n\n  <!-- field names should consist of alphanumeric or underscore characters only and\n     not start with a digit.  This is not currently strictly enforced,\n     but other field names will not have first class support from all components\n     and back compatibility is not guaranteed.  Names with both leading and\n     trailing underscores (e.g. _version_) are reserved.\n  -->\n\n  <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml\n     or Solr won't start. _version_ and update log are required for SolrCloud\n  -->\n  <!-- doc values are enabled by default for primitive types such as long so we don't index the version field  -->\n  <field name=\"_version_\" type=\"plong\" indexed=\"false\" stored=\"false\"/>\n\n  <!-- points to the root document of a block of nested documents. Required for nested\n     document support, may be removed otherwise\n  -->\n  <field name=\"_root_\" type=\"string\" indexed=\"true\" stored=\"false\" docValues=\"false\"/>\n\n  <!-- Only remove the \"id\" field if you have a very good reason to. While not strictly\n  required, it is highly recommended. A <uniqueKey> is present in almost all Solr\n  installations. See the <uniqueKey> declaration below where <uniqueKey> is set to \"id\".\n  -->\n  <!-- The document id is usually derived from a site-specific key (hash) and the\n    entity type and ID like:\n    Search Api 7.x:\n      The format used is $document->id = $index_id . '-' . $item_id\n    Search Api 8.x:\n      The format used is $document->id = $site_hash . '-' . $index_id . '-' . 
$item_id\n    Apache Solr Search Integration 7.x:\n      The format used is $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n  -->\n  <!-- The Highlighter Component requires the id field to be \"stored\" even if docValues are set. -->\n  <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Search Api specific fields -->\n  <!-- index_id is the machine name of the search index this entry belongs to. -->\n  <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Here, default is used to create a \"timestamp\" field indicating\n       when each document was indexed.-->\n  <field name=\"timestamp\" type=\"pdate\" indexed=\"true\" stored=\"false\" default=\"NOW\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"boost_document\" type=\"pfloat\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"boost_term\" type=\"boost_term_payload\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n  <!-- Currently the suggester context filter query (suggest.cfq) accesses the tags using the stored values, neither the indexed terms nor the docValues.\n       Therefore the dynamicField sm_* isn't suitable at the moment -->\n  <field name=\"sm_context_tags\" type=\"string\" indexed=\"true\" stored=\"true\" multiValued=\"true\" docValues=\"false\"/>\n\n  <!-- Dynamic field definitions.  
If a field name is not found, dynamicFields\n       will be used if the name matches any of the patterns.\n       RESTRICTION: the glob-like pattern in the name attribute must have\n       a \"*\" only at the start or the end.\n       EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n       Longer patterns will be matched first.  if equal size patterns\n       both match, the first appearing in the schema will be used.  -->\n\n  <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n       the last letter is 's' for single valued, 'm' for multi-valued -->\n\n  <!-- We use plong for integer since 64 bit ints are now common in PHP. -->\n  <dynamicField name=\"is_*\"  type=\"plong\"    indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"im_*\"  type=\"plong\"    indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- List of floats can be saved in a regular float field -->\n  <dynamicField name=\"fs_*\"  type=\"pfloat\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"fm_*\"  type=\"pfloat\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- List of doubles can be saved in a regular double field -->\n  <dynamicField name=\"ps_*\"  type=\"pdouble\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"pm_*\"  type=\"pdouble\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- List of booleans can be saved in a regular boolean field -->\n  <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" 
termVectors=\"true\"/>\n  <!-- Regular text (without processing) can be stored in a string field-->\n  <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- For field types using SORTED_SET, multiple identical entries are collapsed into a single value.\n       Thus if I insert values 4, 5, 2, 4, 1, my return will be 1, 2, 4, 5 when enabling docValues.\n       If you need to preserve the order and duplicate entries, consider to store the values as zm_* (twice). -->\n  <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- Special-purpose text fields -->\n  <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"ds_*\"  type=\"pdate\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"dm_*\"  type=\"pdate\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- This field is used to store date ranges -->\n  <dynamicField name=\"drs_*\" type=\"date_range\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"drm_*\" type=\"date_range\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"its_*\" type=\"plong\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"itm_*\" type=\"plong\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"fts_*\" type=\"pfloat\"  indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ftm_*\" type=\"pfloat\"  indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <dynamicField name=\"pts_*\" type=\"pdouble\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ptm_*\" type=\"pdouble\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n       a small image in a search result using the data URI scheme -->\n  <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"dds_*\" type=\"pdate\"    indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ddm_*\" type=\"pdate\"    indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 32 bit on 64 arch -->\n  <dynamicField name=\"hs_*\" type=\"pint\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"hm_*\" type=\"pint\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. -->\n  <dynamicField name=\"hts_*\" type=\"pint\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"htm_*\" type=\"pint\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n\n  <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n  <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n  <!-- Fields for location searches.\n       http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n  <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <!-- GeoHash fields are deprecated. LatLonPointSpatial fields solve all needs. 
But we keep the dedicated field names for backward compatibility. -->\n  <dynamicField name=\"geos_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"geom_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"bboxs_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"bboxm_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n  <dynamicField name=\"rpts_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"rptm_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n  <!-- External file fields -->\n  <dynamicField name=\"eff_*\" type=\"file\"/>\n\n  <!-- A random sort field -->\n  <dynamicField name=\"random_*\" type=\"random\" indexed=\"true\" stored=\"true\"/>\n\n  <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n  <dynamicField name=\"access_*\" type=\"pint\" indexed=\"true\" stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n\n  <!-- The following causes solr to ignore any fields that don't already match an existing\n       field name or dynamic field, rather than reporting them as an error.\n       Alternately, change the type=\"ignored\" to some other type e.g. \"text\" if you want\n       unknown fields indexed and/or stored by default -->\n  <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in a\n       standard package such as org.apache.solr.analysis\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       It supports doc values but in that case the field needs to be\n       single-valued and either required or have a default value.\n      -->\n    <fieldType name=\"string\" class=\"solr.StrField\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\"/>\n\n    <!-- sortMissingLast and sortMissingFirst attributes are optional attributes are\n         currently supported on types that are sorted internally as strings\n         and on numeric types.\n         This includes \"string\", \"boolean\", \"pint\", \"pfloat\", \"plong\", \"pdate\", \"pdouble\".\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!--\n      Numeric field types that index values using KD-trees.\n      Point fields don't support FieldCache, so they must have docValues=\"true\" if needed for sorting, faceting, functions, etc.\n    -->\n    <fieldType name=\"pint\" class=\"solr.IntPointField\" docValues=\"true\"/>\n    <fieldType name=\"pfloat\" class=\"solr.FloatPointField\" docValues=\"true\"/>\n 
   <fieldType name=\"plong\" class=\"solr.LongPointField\" docValues=\"true\"/>\n    <fieldType name=\"pdouble\" class=\"solr.DoublePointField\" docValues=\"true\"/>\n\n    <fieldType name=\"pints\" class=\"solr.IntPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pfloats\" class=\"solr.FloatPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"plongs\" class=\"solr.LongPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pdoubles\" class=\"solr.DoublePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 
6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the DatePointField javadocs for more information.\n      -->\n    <!-- KD-tree versions of date fields -->\n    <fieldType name=\"pdate\" class=\"solr.DatePointField\" docValues=\"true\"/>\n    <fieldType name=\"pdates\" class=\"solr.DatePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!-- A date range field -->\n    <fieldType name=\"date_range\" class=\"solr.DateRangeField\"/>\n\n    <!-- Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->\n    <fieldType name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The \"RandomSortField\" is not used to store or search any\n         data.  You can declare fields of this type in your schema\n         to generate pseudo-random orderings of your docs for sorting\n         or function purposes.  The ordering is generated based on the field\n         name and the version of the index. As long as the index version\n         remains unchanged, and the same field name is reused,\n         the ordering of the docs will be consistent.\n         If you want different pseudo-random orderings of documents,\n         for the same version of the index, use a dynamicField and\n         change the field name in the request.\n     -->\n    <fieldType name=\"random\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. 
Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element.\n         Example:\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <fieldType name=\"boost_term_payload\" stored=\"false\" indexed=\"true\" class=\"solr.TextField\" >\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n        <!--\n        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,\n        a token of \"foo|1.4\"  would be indexed as \"foo\" with a payload of 1.4f\n        Attributes of the DelimitedPayloadTokenFilterFactory :\n         \"delimiter\" - a one character delimiter. 
Default is | (pipe)\n         \"encoder\" - how to encode the following value into a payload\n           float -> org.apache.lucene.analysis.payloads.FloatEncoder,\n           integer -> o.a.l.a.p.IntegerEncoder\n           identity -> o.a.l.a.p.IdentityEncoder\n             Fully Qualified class name implementing PayloadEncoder, Encoder must have a no arg constructor.\n         -->\n        <filter class=\"solr.DelimitedPayloadTokenFilterFactory\" encoder=\"float\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- since fields of this type are by default not stored or indexed,\n         any data added to them will be ignored outright.  -->\n    <fieldType name=\"ignored\" stored=\"false\" indexed=\"false\" multiValued=\"true\" class=\"solr.StrField\" />\n\n    <!-- This point type indexes the coordinates as separate fields (subFields)\n      If subFieldType is defined, it references a type, and a dynamic field\n      definition is created matching *___<typename>.  Alternately, if\n      subFieldSuffix is defined, that is used to create the subFields.\n      Example: if subFieldType=\"double\", then the coordinates would be\n        indexed in fields myloc_0___double,myloc_1___double.\n      Example: if subFieldSuffix=\"_d\" then the coordinates would be indexed\n        in fields myloc_0_d,myloc_1_d\n      The subFields are an implementation detail of the fieldType, and end\n      users normally should not need to know about them.\n     -->\n    <!-- In Drupal we only use prefixes for dynamic fields. That might change in\n      the future but for now we keep this pattern.\n    -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"pdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. 
-->\n    <fieldType name=\"location\" class=\"solr.LatLonPointSpatialField\" docValues=\"true\"/>\n\n    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.\n      For more information about this and other Spatial fields new to Solr 4, see:\n      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4\n    -->\n    <fieldType name=\"location_rpt\" class=\"solr.SpatialRecursivePrefixTreeFieldType\"\n        geo=\"true\" distErrPct=\"0.025\" maxDistErr=\"0.001\" distanceUnits=\"kilometers\" />\n\n    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has\n    special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for\n    relevancy. -->\n    <fieldType name=\"bbox\" class=\"solr.BBoxField\"\n               geo=\"true\" distanceUnits=\"kilometers\" numberType=\"_bbox_coord\" />\n    <fieldType name=\"_bbox_coord\" class=\"solr.DoublePointField\" docValues=\"true\" stored=\"false\"/>\n\n  <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType\n       Parameters:\n           amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field.\n                               The dynamic field must have a field type that extends LongValueFieldType.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.\n                               The dynamic field must have a field type that extends StrField.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           defaultCurrency:  Specifies the default currency if none specified. 
Defaults to \"USD\"\n           providerClass:    Lets you plug in other exchange provider backend:\n                             solr.FileExchangeRateProvider is the default and takes one parameter:\n                               currencyConfig: name of an xml file holding exchange rates\n                             solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:\n                               ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)\n                               refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)\n  -->\n<!--  <fieldType name=\"currency\" class=\"solr.CurrencyFieldType\" amountLongSuffix=\"_l_ns\" codeStrSuffix=\"_s_ns\"\n                 defaultCurrency=\"USD\" currencyConfig=\"currency.xml\" /> -->\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  &extrafields;\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  &extratypes;\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- Similarity is the scoring routine for each document vs. a query.\n       A custom Similarity or SimilarityFactory may be specified here, but\n       the default is fine for most applications.\n       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity\n    -->\n  <!--\n     <similarity class=\"com.example.solr.CustomSimilarityFactory\">\n       <str name=\"paramkey\">param value</str>\n     </similarity>\n    -->\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal8/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!DOCTYPE config [\n  <!ENTITY extra SYSTEM \"solrconfig_extra.xml\">\n  <!ENTITY index SYSTEM \"solrconfig_index.xml\">\n  <!ENTITY query SYSTEM \"solrconfig_query.xml\">\n  <!ENTITY requestdispatcher SYSTEM \"solrconfig_requestdispatcher.xml\">\n]>\n\n<!--\n     For more details about configuration options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-SEARCH_API_SOLR_MIN_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered a severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  Generally, you want to use the latest version to\n       get all bug fixes and improvements. 
It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_70}</luceneMatchVersion>\n\n  <!-- <lib/> directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       Please note that <lib/> directives are processed in the order\n       that they appear in your solrconfig.xml file, and are \"stacked\"\n       on top of each other when building a ClassLoader - so if you have\n       plugin jars with dependencies on other jars, the \"lower level\"\n       dependency jars should be loaded first.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n\n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A 'dir' option by itself adds any files found in the directory\n       to the classpath, this is useful for including all jars in a\n       directory.\n\n       When a 'regex' is specified in addition to a 'dir', only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n\n       If a 'dir' option (with or without a regex) is used and nothing\n       is found that matches, a warning will be logged.\n\n       The examples below can be used to load some solr-contribs along\n       with their external dependencies.\n    -->\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-dataimporthandler-.*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/extraction/lib\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-cell-\\d.*\\.jar\" />\n\n  
<lib dir=\"${solr.install.dir:../../../..}/contrib/langid/lib/\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-langid-\\d.*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/analysis-extras/lib\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-analysis-extras-\\d.*\\.jar\" />\n\n  <!-- an exact 'path' can be used instead of a 'dir' to specify a\n       specific jar file.  This will cause a serious error to be logged\n       if it can't be loaded.\n    -->\n  <!--\n     <lib path=\"../a-jar-that-does-not-exist.jar\" />\n  -->\n\n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <dataDir>${solr.data.dir:}</dataDir>\n\n\n  <!-- The DirectoryFactory to use for indexes.\n\n       solr.StandardDirectoryFactory is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,\n       wraps solr.StandardDirectoryFactory and caches small files in memory\n       for better NRT performance.\n\n       One can force a particular implementation via solr.MMapDirectoryFactory,\n       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based and not persistent.\n    -->\n  <directoryFactory name=\"DirectoryFactory\"\n                    class=\"${solr.directoryFactory:solr.NRTCachingDirectoryFactory}\">\n\n\n    <!-- These will be used if you are using the solr.HdfsDirectoryFactory,\n         otherwise they will be ignored. If you don't plan on using hdfs,\n         you can safely remove this section. 
-->\n    <!-- The root directory that collection data should be written to. -->\n    <str name=\"solr.hdfs.home\">${solr.hdfs.home:}</str>\n    <!-- The hadoop configuration files to use for the hdfs client. -->\n    <str name=\"solr.hdfs.confdir\">${solr.hdfs.confdir:}</str>\n    <!-- Enable/Disable the hdfs cache. -->\n    <str name=\"solr.hdfs.blockcache.enabled\">${solr.hdfs.blockcache.enabled:true}</str>\n    <!-- Enable/Disable using one global cache for all SolrCores.\n         The settings used will be from the first HdfsDirectoryFactory created. -->\n    <str name=\"solr.hdfs.blockcache.global\">${solr.hdfs.blockcache.global:true}</str>\n\n  </directoryFactory>\n\n  <!-- The CodecFactory for defining the format of the inverted index.\n       The default implementation is SchemaCodecFactory, which is the official Lucene\n       index format, but hooks into the schema to provide per-field customization of\n       the postings lists and per-document values in the fieldType element\n       (postingsFormat/docValuesFormat). Note that most of the alternative implementations\n       are experimental, so if you choose to customize the index format, it's a good\n       idea to convert back to the official format e.g. 
via IndexWriter.addIndexes(IndexReader)\n       before upgrading to a newer version to avoid unnecessary reindexing.\n  -->\n  <codecFactory class=\"solr.SchemaCodecFactory\"/>\n\n  <!-- To enable dynamic schema REST APIs, remove the following <schemaFactory>.\n  -->\n  <schemaFactory class=\"ClassicIndexSchemaFactory\"/>\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Index Config - These settings control low-level behavior of indexing\n       Most example settings here show the default value, but are commented\n       out, to more easily see where customizations have been made.\n\n       Note: This replaces <indexDefaults> and <mainIndex> from older versions\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <indexConfig>\n    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a\n         LimitTokenCountFilterFactory in your fieldType definition. E.g.\n     <filter class=\"solr.LimitTokenCountFilterFactory\" maxTokenCount=\"10000\"/>\n    -->\n    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->\n    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->\n\n    <!-- Expert: Enabling compound file will use fewer files for the index,\n         using fewer file descriptors at the expense of a performance decrease.\n         Default in Lucene is \"true\". Default in Solr is \"false\" (since 3.6) -->\n    <!-- <useCompoundFile>false</useCompoundFile> -->\n\n    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene\n         indexing for buffering added documents and deletions before they are\n         flushed to the Directory.\n         maxBufferedDocs sets a limit on the number of documents buffered\n         before flushing.\n         If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.\n         The default is 100 MB.  
-->\n    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <!-- Expert: Merge Policy\n         The Merge Policy in Lucene controls how merging of segments is done.\n         The default since Solr/Lucene 3.3 is TieredMergePolicy.\n         The default since Lucene 2.3 was the LogByteSizeMergePolicy,\n         Even older versions of Lucene used LogDocMergePolicy.\n      -->\n    <!--\n        <mergePolicyFactory class=\"solr.TieredMergePolicyFactory\">\n          <int name=\"maxMergeAtOnce\">10</int>\n          <int name=\"segmentsPerTier\">10</int>\n        </mergePolicyFactory>\n     -->\n\n    <!-- Expert: Merge Scheduler\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!--\n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\n    <!-- LockFactory\n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n\n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         Defaults: 'native' is default for Solr3.6 and later, otherwise\n                   'simple' is the default\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>${solr.lock.type:native}</lockType>\n\n    <!-- Commit 
Deletion Policy\n         Custom deletion policies can be specified here. The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         The default Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n\n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <!--\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n    -->\n    <!-- The number of commit points to be kept -->\n    <!-- <str name=\"maxCommitsToKeep\">1</str> -->\n    <!-- The number of optimized commit points to be kept -->\n    <!-- <str name=\"maxOptimizedCommitsToKeep\">0</str> -->\n    <!--\n        Delete all commit points once they have reached the given age.\n        Supports DateMathParser syntax e.g.\n      -->\n    <!--\n       <str name=\"maxCommitAge\">30MINUTES</str>\n       <str name=\"maxCommitAge\">1DAY</str>\n    -->\n    <!--\n    </deletionPolicy>\n    -->\n\n    <!-- Lucene Infostream\n\n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its info stream to solr's log. By default,\n         this is enabled here, and controlled through log4j2.xml\n      -->\n    <infoStream>true</infoStream>\n\n    <!-- Let the config generator easily inject additional stuff. -->\n    &index;\n\n  </indexConfig>\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- Enables a transaction log, used for real-time get, durability, and\n         solr cloud replica recovery.  
The log can grow as big as\n         uncommitted changes to the index, so use of a hard autoCommit\n         is recommended (see below).\n         \"dir\" - the target directory for transaction logs, defaults to the\n                solr data directory.  -->\n    <updateLog>\n      <str name=\"dir\">${solr.ulog.dir:}</str>\n    </updateLog>\n\n    <!-- AutoCommit\n\n         Perform a hard commit automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents.\n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time in ms that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n         openSearcher - if false, the commit causes recent index changes\n           to be flushed to stable storage, but does not cause a new\n           searcher to be opened to make those changes visible.\n\n         If the updateLog is enabled, then it's highly recommended to\n         have some sort of hard autoCommit to limit the log size.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:15000}</maxTime>\n      <openSearcher>false</openSearcher>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n      -->\n\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:-1}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n\n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns.\n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n\n  </updateHandler>\n\n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. 
The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Query section - these settings control query time things like caches\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <query>\n    <!-- Let the config generator easily inject additional stuff. -->\n    &query;\n\n    <!-- Query Related Event Listeners\n\n         Various IndexSearcher related events can trigger Listeners to\n         take actions.\n\n         newSearcher - fired whenever a new searcher is being prepared\n         and there is a current searcher handling requests (aka\n         registered).  
It can be used to prime certain caches to\n         prevent long request times for certain requests.\n\n         firstSearcher - fired whenever a new searcher is being\n         prepared but there is no current registered searcher to handle\n         requests or to gain autowarming data from.\n\n\n      -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence.\n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">static firstSearcher warming in solrconfig.xml</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  
If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n  </query>\n\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n    -->\n  <requestDispatcher>\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size (in KiB) of\n         Multipart File Uploads that Solr will allow in a Request.\n\n         formdataUploadLimitInKB - specifies the max size (in KiB) of\n         form data (application/x-www-form-urlencoded) sent via\n         POST. You can use POST to pass request parameters not\n         fitting into the URL.\n\n         addHttpRequestToContext - if set to true, it will instruct\n         the requestParsers to include the original HttpServletRequest\n         object in the context map of the SolrQueryRequest under the\n         key \"httpRequest\". It will not be used by any of the existing\n         Solr components, but may be useful when developing custom\n         plugins.\n\n         *** WARNING ***\n         Before enabling remote streaming, you should make sure your\n         system has authentication enabled.\n\n    <requestParsers enableRemoteStreaming=\"false\"\n                    multipartUploadLimitInKB=\"-1\"\n                    formdataUploadLimitInKB=\"-1\"\n                    addHttpRequestToContext=\"false\"/>\n      -->\n    <!-- Let the config generator easily inject additional stuff. 
-->\n    &requestdispatcher;\n\n  </requestDispatcher>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of the standard names,\n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n\n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n\n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\"\n\n     -->\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  &extra;\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could 
most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\"\n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter\n           (for sentence extraction)\n        -->\n      <fragmenter name=\"regex\"\n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      </fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\"\n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<em>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</em>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure the standard encoder -->\n      <encoder name=\"html\"\n               class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\"\n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- Configure the weighted fragListBuilder -->\n      <fragListBuilder name=\"weighted\"\n                       default=\"true\"\n                       class=\"solr.highlight.WeightedFragListBuilder\"/>\n\n      <!-- 
default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\"\n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!--\n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawngreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n\n      <boundaryScanner name=\"default\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? &#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n\n      <boundaryScanner name=\"breakIterator\"\n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing Locale object.  
-->\n          <!-- And the Locale object will be used when getting instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    -->\n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.\n\n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Language identification\n\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. 
No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n  <!--\n   <updateRequestProcessorChain name=\"langid\">\n     <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n       <str name=\"langid.fl\">text,title,subject,description</str>\n       <str name=\"langid.langField\">language_s</str>\n       <str name=\"langid.fallback\">en</str>\n     </processor>\n     <processor class=\"solr.LogUpdateProcessorFactory\" />\n     <processor class=\"solr.RunUpdateProcessorFactory\" />\n   </updateRequestProcessorChain>\n  -->\n\n  <!-- Script update processor\n\n    This example hooks in an update processor implemented using JavaScript.\n\n    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor\n  -->\n  <!--\n    <updateRequestProcessorChain name=\"script\">\n      <processor class=\"solr.StatelessScriptUpdateProcessorFactory\">\n        <str name=\"script\">update-script.js</str>\n        <lst name=\"params\">\n          <str name=\"config_param\">example config parameter</str>\n        </lst>\n      </processor>\n      <processor class=\"solr.RunUpdateProcessorFactory\" />\n    </updateRequestProcessorChain>\n  -->\n\n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     
<queryResponseWriter name=\"xml\"\n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n     <queryResponseWriter name=\"schema.xml\" class=\"solr.SchemaXmlResponseWriter\"/>\n    -->\n\n  <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n  </queryResponseWriter>\n\n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!--\n    <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" startup=\"lazy\">\n      <str name=\"template.base.dir\">${velocity.template.base.dir:}</str>\n    </queryResponseWriter>\n    -->\n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  
Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\"\n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n\n  <!-- Document Transformers\n       http://wiki.apache.org/solr/DocTransformers\n    -->\n  <!--\n     Could be something like:\n     <transformer name=\"db\" class=\"com.mycompany.LoadFromDatabaseTransformer\" >\n       <int name=\"connection\">jdbc://....</int>\n     </transformer>\n\n     To add a constant value to all docs, use:\n     <transformer name=\"mytrans2\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <int name=\"value\">5</int>\n     </transformer>\n\n     If you want the user to still be able to change it with _value:something_ use this:\n     <transformer name=\"mytrans3\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <double name=\"defaultValue\">5</double>\n     </transformer>\n\n      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  
The\n      EditorialMarkerFactory will do exactly that:\n     <transformer name=\"qecBooster\" class=\"org.apache.solr.response.transform.EditorialMarkerFactory\" />\n    -->\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal8/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:9077/solr\nsolr.replication.confFiles=schema.xml,schema_extra_types.xml,schema_extra_fields.xml,elevate.xml\nsolr.mlt.timeAllowed=2000\nsolr.selectSearchHandler.timeAllowed=-1\n# don't autoCommit after x docs\nsolr.autoCommit.MaxDocs=-1\n# autoCommit after 15 seconds\nsolr.autoCommit.MaxTime=15000\n# don't autoSoftCommit after x docs\nsolr.autoSoftCommit.MaxDocs=-1\n# don't autoSoftCommit after x seconds\nsolr.autoSoftCommit.MaxTime=-1\nsolr.install.dir=/opt/solr7\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal9/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. Search API generally constructs item IDs (esp. for entities) as:\n     $document->id = \"$site_hash-$index_id-$datasource:$entity_id:$language_id\";\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #789 first in searches for \"example query\": -->\n<!--\n  <query text=\"example query\">\n    <doc id=\"ab12cd34-site_index-entity:789:en\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal9/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n<!DOCTYPE schema [\n  <!ENTITY extrafields SYSTEM \"schema_extra_fields.xml\">\n  <!ENTITY extratypes SYSTEM \"schema_extra_types.xml\">\n]>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n This example schema is the recommended starting point for users.\n It should be kept correct and concise, usable out-of-the-box.\n\n For more information, on how to customize this file, please see\n http://wiki.apache.org/solr/SchemaXml\n\n PERFORMANCE NOTE: this schema includes many optional features and should not\n be used for benchmarking.  
To improve performance one could\n  - set stored=\"false\" for all fields possible (esp large fields) when you\n    only need to search on the field but don't need to return the original\n    value.\n  - set indexed=\"false\" if you don't need to search on the field, but only\n    return the field as a result of searching on other indexed fields.\n  - remove all unneeded copyField statements\n  - for best index size and searching performance, set \"index\" to false\n    for all general text fields, use copyField to copy them to the\n    catchall \"text\" field, and use that for searching.\n  - For maximum indexing performance, use the ConcurrentUpdateSolrServer\n    java client.\n  - Remember to run the JVM in server mode, and use a higher logging level\n    that avoids logging every request\n-->\n\n<schema name=\"drupal-SEARCH_API_SOLR_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" version=\"1.6\">\n  <!-- attribute \"name\" is the name of this schema and is only used for display purposes.\n       version=\"x.y\" is Solr's version number for the schema syntax and\n       semantics.  It should not normally be changed by applications.\n\n       1.0: multiValued attribute did not exist, all fields are multiValued\n            by nature\n       1.1: multiValued attribute introduced, false by default\n       1.2: omitTermFreqAndPositions attribute introduced, true by default\n            except for text fields.\n       1.3: removed optional field compress feature\n       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser\n            behavior when a single string produces multiple tokens.  
Defaults\n            to off for version >= 1.4\n       1.5: omitNorms defaults to true for primitive field types\n            (int, float, boolean, string...)\n       1.6: useDocValuesAsStored defaults to true.\n     -->\n\n\n  <!-- Valid attributes for fields:\n    name: mandatory - the name for the field\n    type: mandatory - the name of a field type from the\n      fieldTypes\n    indexed: true if this field should be indexed (searchable or sortable)\n    stored: true if this field should be retrievable\n    docValues: true if this field should have doc values. Doc values are\n      useful (required, if you are using *Point fields) for faceting,\n      grouping, sorting and function queries. Doc values will make the index\n      faster to load, more NRT-friendly and more memory-efficient.\n      They however come with some limitations: they are currently only\n      supported by StrField, UUIDField, all *PointFields, and depending\n      on the field type, they might require the field to be single-valued,\n      be required or have a default value (check the documentation\n      of the field type you're interested in for more information)\n    multiValued: true if this field may contain multiple values per document\n    omitNorms: (expert) set to true to omit the norms associated with\n      this field (this disables length normalization and index-time\n      boosting for the field, and saves some memory).  Only full-text\n      fields or fields that need an index-time boost need norms.\n      Norms are omitted for primitive (non-analyzed) types by default.\n    termVectors: [false] set to true to store the term vector for a\n      given field.\n      When using MoreLikeThis, fields used for similarity should be\n      stored for best performance.\n    termPositions: Store position information with the term vector.\n      This will increase storage costs.\n    termOffsets: Store offset information with the term vector. 
This\n      will increase storage costs.\n    termPayloads: Store payload information with the term vector. This\n      will increase storage costs.\n    required: The field is required.  It will throw an error if the\n      value does not exist\n    default: a value that should be used if no value is specified\n      when adding a document.\n  -->\n\n  <!-- field names should consist of alphanumeric or underscore characters only and\n     not start with a digit.  This is not currently strictly enforced,\n     but other field names will not have first class support from all components\n     and back compatibility is not guaranteed.  Names with both leading and\n     trailing underscores (e.g. _version_) are reserved.\n  -->\n\n  <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml\n     or Solr won't start. _version_ and update log are required for SolrCloud\n  -->\n  <!-- doc values are enabled by default for primitive types such as long so we don't index the version field  -->\n  <field name=\"_version_\" type=\"plong\" indexed=\"false\" stored=\"false\"/>\n\n  <!-- points to the root document of a block of nested documents. Required for nested\n     document support, may be removed otherwise\n  -->\n  <field name=\"_root_\" type=\"string\" indexed=\"true\" stored=\"false\" docValues=\"false\"/>\n\n  <!-- Only remove the \"id\" field if you have a very good reason to. While not strictly\n  required, it is highly recommended. A <uniqueKey> is present in almost all Solr\n  installations. See the <uniqueKey> declaration below where <uniqueKey> is set to \"id\".\n  -->\n  <!-- The document id is usually derived from a site-specific key (hash) and the\n    entity type and ID like:\n    Search Api 7.x:\n      The format used is $document->id = $index_id . '-' . $item_id\n    Search Api 8.x:\n      The format used is $document->id = $site_hash . '-' . $index_id . '-' . 
$item_id\n    Apache Solr Search Integration 7.x:\n      The format used is $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n  -->\n  <!-- The Highlighter Component requires the id field to be \"stored\" even if docValues are set. -->\n  <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Search Api specific fields -->\n  <!-- index_id is the machine name of the search index this entry belongs to. -->\n  <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Here, default is used to create a \"timestamp\" field indicating\n       when each document was indexed.-->\n  <field name=\"timestamp\" type=\"pdate\" indexed=\"true\" stored=\"false\" default=\"NOW\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"boost_document\" type=\"pfloat\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"boost_term\" type=\"boost_term_payload\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n  <!-- Currently the suggester context filter query (suggest.cfq) accesses the tags using the stored values, neither the indexed terms nor the docValues.\n       Therefore the dynamicField sm_* isn't suitable at the moment -->\n  <field name=\"sm_context_tags\" type=\"string\" indexed=\"true\" stored=\"true\" multiValued=\"true\" docValues=\"false\"/>\n\n  <!-- Dynamic field definitions.  
If a field name is not found, dynamicFields\n       will be used if the name matches any of the patterns.\n       RESTRICTION: the glob-like pattern in the name attribute must have\n       a \"*\" only at the start or the end.\n       EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n       Longer patterns will be matched first.  if equal size patterns\n       both match, the first appearing in the schema will be used.  -->\n\n  <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n       the last letter is 's' for single valued, 'm' for multi-valued -->\n\n  <!-- We use plong for integer since 64 bit ints are now common in PHP. -->\n  <dynamicField name=\"is_*\"  type=\"plong\"    indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"im_*\"  type=\"plong\"    indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- List of floats can be saved in a regular float field -->\n  <dynamicField name=\"fs_*\"  type=\"pfloat\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"fm_*\"  type=\"pfloat\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- List of doubles can be saved in a regular double field -->\n  <dynamicField name=\"ps_*\"  type=\"pdouble\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"pm_*\"  type=\"pdouble\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- List of booleans can be saved in a regular boolean field -->\n  <dynamicField name=\"bm_*\"  type=\"boolean\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"bs_*\"  type=\"boolean\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" 
termVectors=\"true\"/>\n  <!-- Regular text (without processing) can be stored in a string field-->\n  <dynamicField name=\"ss_*\"  type=\"string\"  indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- For field types using SORTED_SET, multiple identical entries are collapsed into a single value.\n       Thus if I insert values 4, 5, 2, 4, 1, my return will be 1, 2, 4, 5 when enabling docValues.\n       If you need to preserve the order and duplicate entries, consider to store the values as zm_* (twice). -->\n  <dynamicField name=\"sm_*\"  type=\"string\"  indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- Special-purpose text fields -->\n  <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n  <dynamicField name=\"ds_*\"  type=\"pdate\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"dm_*\"  type=\"pdate\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- This field is used to store date ranges -->\n  <dynamicField name=\"drs_*\" type=\"date_range\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"drm_*\" type=\"date_range\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"its_*\" type=\"plong\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"itm_*\" type=\"plong\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"fts_*\" type=\"pfloat\"  indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ftm_*\" type=\"pfloat\"  indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <dynamicField name=\"pts_*\" type=\"pdouble\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ptm_*\" type=\"pdouble\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n       a small image in a search result using the data URI scheme -->\n  <dynamicField name=\"xs_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"xm_*\"  type=\"binary\"  indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. -->\n  <dynamicField name=\"dds_*\" type=\"pdate\"    indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ddm_*\" type=\"pdate\"    indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 32 bit on 64 arch -->\n  <dynamicField name=\"hs_*\" type=\"pint\" indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"hm_*\" type=\"pint\" indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n  <!-- Trie fields are deprecated. 
Point fields solve all needs. But we keep the dedicated field names for backward compatibility. -->\n  <dynamicField name=\"hts_*\" type=\"pint\"   indexed=\"true\"  stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"htm_*\" type=\"pint\"   indexed=\"true\"  stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n\n  <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n  <dynamicField name=\"zs_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"zm_*\" type=\"string\"   indexed=\"false\"  stored=\"true\" multiValued=\"true\"/>\n\n  <!-- Fields for location searches.\n       http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n  <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <!-- GeoHash fields are deprecated. LatLonPointSpatial fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"geos_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"geom_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"bboxs_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"bboxm_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n  <dynamicField name=\"rpts_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"rptm_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n  <!-- External file fields -->\n  <dynamicField name=\"eff_*\" type=\"file\"/>\n\n  <!-- A random sort field -->\n  <dynamicField name=\"random_*\" type=\"random\" indexed=\"true\" stored=\"true\"/>\n\n  <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n  <dynamicField name=\"access_*\" type=\"pint\" indexed=\"true\" stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n\n  <!-- The following causes solr to ignore any fields that don't already match an existing\n       field name or dynamic field, rather than reporting them as an error.\n       Alternately, change the type=\"ignored\" to some other type e.g. \"text\" if you want\n       unknown fields indexed and/or stored by default -->\n  <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in a\n       standard package such as org.apache.solr.analysis\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       It supports doc values but in that case the field needs to be\n       single-valued and either required or have a default value.\n      -->\n    <fieldType name=\"string\" class=\"solr.StrField\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\"/>\n\n    <!-- sortMissingLast and sortMissingFirst attributes are optional attributes are\n         currently supported on types that are sorted internally as strings\n         and on numeric types.\n         This includes \"string\", \"boolean\", \"pint\", \"pfloat\", \"plong\", \"pdate\", \"pdouble\".\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!--\n      Numeric field types that index values using KD-trees.\n      Point fields don't support FieldCache, so they must have docValues=\"true\" if needed for sorting, faceting, functions, etc.\n    -->\n    <fieldType name=\"pint\" class=\"solr.IntPointField\" docValues=\"true\"/>\n    <fieldType name=\"pfloat\" class=\"solr.FloatPointField\" docValues=\"true\"/>\n 
   <fieldType name=\"plong\" class=\"solr.LongPointField\" docValues=\"true\"/>\n    <fieldType name=\"pdouble\" class=\"solr.DoublePointField\" docValues=\"true\"/>\n\n    <fieldType name=\"pints\" class=\"solr.IntPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pfloats\" class=\"solr.FloatPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"plongs\" class=\"solr.LongPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pdoubles\" class=\"solr.DoublePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 
6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the DatePointField javadocs for more information.\n      -->\n    <!-- KD-tree versions of date fields -->\n    <fieldType name=\"pdate\" class=\"solr.DatePointField\" docValues=\"true\"/>\n    <fieldType name=\"pdates\" class=\"solr.DatePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!-- A date range field -->\n    <fieldType name=\"date_range\" class=\"solr.DateRangeField\"/>\n\n    <!-- Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->\n    <fieldType name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The \"RandomSortField\" is not used to store or search any\n         data.  You can declare fields of this type in your schema\n         to generate pseudo-random orderings of your docs for sorting\n         or function purposes.  The ordering is generated based on the field\n         name and the version of the index. As long as the index version\n         remains unchanged, and the same field name is reused,\n         the ordering of the docs will be consistent.\n         If you want different pseudo-random orderings of documents,\n         for the same version of the index, use a dynamicField and\n         change the field name in the request.\n     -->\n    <fieldType name=\"random\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. 
Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element.\n         Example:\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <fieldType name=\"boost_term_payload\" stored=\"false\" indexed=\"true\" class=\"solr.TextField\" >\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n        <!--\n        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,\n        a token of \"foo|1.4\"  would be indexed as \"foo\" with a payload of 1.4f\n        Attributes of the DelimitedPayloadTokenFilterFactory :\n         \"delimiter\" - a one character delimiter. 
Default is | (pipe)\n         \"encoder\" - how to encode the following value into a payload\n           float -> org.apache.lucene.analysis.payloads.FloatEncoder,\n           integer -> o.a.l.a.p.IntegerEncoder\n           identity -> o.a.l.a.p.IdentityEncoder\n             Fully Qualified class name implementing PayloadEncoder, Encoder must have a no arg constructor.\n         -->\n        <filter class=\"solr.DelimitedPayloadTokenFilterFactory\" encoder=\"float\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- since fields of this type are by default not stored or indexed,\n         any data added to them will be ignored outright.  -->\n    <fieldType name=\"ignored\" stored=\"false\" indexed=\"false\" multiValued=\"true\" class=\"solr.StrField\" />\n\n    <!-- This point type indexes the coordinates as separate fields (subFields)\n      If subFieldType is defined, it references a type, and a dynamic field\n      definition is created matching *___<typename>.  Alternately, if\n      subFieldSuffix is defined, that is used to create the subFields.\n      Example: if subFieldType=\"double\", then the coordinates would be\n        indexed in fields myloc_0___double,myloc_1___double.\n      Example: if subFieldSuffix=\"_d\" then the coordinates would be indexed\n        in fields myloc_0_d,myloc_1_d\n      The subFields are an implementation detail of the fieldType, and end\n      users normally should not need to know about them.\n     -->\n    <!-- In Drupal we only use prefixes for dynamic fields. That might change in\n      the future but for now we keep this pattern.\n    -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"pdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->\n    <fieldType name=\"location\" class=\"solr.LatLonPointSpatialField\" docValues=\"true\"/>\n\n    <!-- An alternative geospatial field type new to Solr 4.  
It supports multiValued and polygon shapes.\n      For more information about this and other Spatial fields new to Solr 4, see:\n      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4\n    -->\n    <fieldType name=\"location_rpt\" class=\"solr.SpatialRecursivePrefixTreeFieldType\"\n        geo=\"true\" distErrPct=\"0.025\" maxDistErr=\"0.001\" distanceUnits=\"kilometers\" />\n\n    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has\n    special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for\n    relevancy. -->\n    <fieldType name=\"bbox\" class=\"solr.BBoxField\"\n               geo=\"true\" distanceUnits=\"kilometers\" numberType=\"_bbox_coord\" />\n    <fieldType name=\"_bbox_coord\" class=\"solr.DoublePointField\" docValues=\"true\" stored=\"false\"/>\n\n  <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType\n       Parameters:\n           amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field.\n                               The dynamic field must have a field type that extends LongValueFieldType.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.\n                               The dynamic field must have a field type that extends StrField.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           defaultCurrency:  Specifies the default currency if none specified. 
Defaults to \"USD\"\n           providerClass:    Lets you plug in other exchange provider backend:\n                             solr.FileExchangeRateProvider is the default and takes one parameter:\n                               currencyConfig: name of an xml file holding exchange rates\n                             solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:\n                               ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)\n                               refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)\n  -->\n<!--  <fieldType name=\"currency\" class=\"solr.CurrencyFieldType\" amountLongSuffix=\"_l_ns\" codeStrSuffix=\"_s_ns\"\n                 defaultCurrency=\"USD\" currencyConfig=\"currency.xml\" /> -->\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  &extrafields;\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  &extratypes;\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- Similarity is the scoring routine for each document vs. a query.\n       A custom Similarity or SimilarityFactory may be specified here, but\n       the default is fine for most applications.\n       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity\n    -->\n  <!--\n     <similarity class=\"com.example.solr.CustomSimilarityFactory\">\n       <str name=\"paramkey\">param value</str>\n     </similarity>\n    -->\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal9/solrconfig.xml",
"content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!DOCTYPE config [\n  <!ENTITY extra SYSTEM \"solrconfig_extra.xml\">\n  <!ENTITY index SYSTEM \"solrconfig_index.xml\">\n  <!ENTITY query SYSTEM \"solrconfig_query.xml\">\n  <!ENTITY requestdispatcher SYSTEM \"solrconfig_requestdispatcher.xml\">\n]>\n\n<!--\n     For more details about configuration options that may appear in\n     this file, see http://wiki.apache.org/solr/SolrConfigXml.\n-->\n<config name=\"drupal-SEARCH_API_SOLR_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered a severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  Generally, you want to use the latest version to\n       get all bug fixes and improvements. 
It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_70}</luceneMatchVersion>\n\n  <!-- <lib/> directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       Please note that <lib/> directives are processed in the order\n       that they appear in your solrconfig.xml file, and are \"stacked\"\n       on top of each other when building a ClassLoader - so if you have\n       plugin jars with dependencies on other jars, the \"lower level\"\n       dependency jars should be loaded first.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n\n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A 'dir' option by itself adds any files found in the directory\n       to the classpath, this is useful for including all jars in a\n       directory.\n\n       When a 'regex' is specified in addition to a 'dir', only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n\n       If a 'dir' option (with or without a regex) is used and nothing\n       is found that matches, a warning will be logged.\n\n       The examples below can be used to load some solr-contribs along\n       with their external dependencies.\n    -->\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-dataimporthandler-.*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/extraction/lib\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-cell-\\d.*\\.jar\" />\n\n  
<lib dir=\"${solr.install.dir:../../../..}/contrib/langid/lib/\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-langid-\\d.*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/analysis-extras/lib\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs\" regex=\".*\\.jar\" />\n  <lib dir=\"${solr.install.dir:../../../..}/dist/\" regex=\"solr-analysis-extras-\\d.*\\.jar\" />\n\n  <!-- an exact 'path' can be used instead of a 'dir' to specify a\n       specific jar file.  This will cause a serious error to be logged\n       if it can't be loaded.\n    -->\n  <!--\n     <lib path=\"../a-jar-that-does-not-exist.jar\" />\n  -->\n\n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <dataDir>${solr.data.dir:}</dataDir>\n\n\n  <!-- The DirectoryFactory to use for indexes.\n\n       solr.StandardDirectoryFactory is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,\n       wraps solr.StandardDirectoryFactory and caches small files in memory\n       for better NRT performance.\n\n       One can force a particular implementation via solr.MMapDirectoryFactory,\n       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based and not persistent.\n    -->\n  <directoryFactory name=\"DirectoryFactory\"\n                    class=\"${solr.directoryFactory:solr.NRTCachingDirectoryFactory}\">\n\n\n    <!-- These will be used if you are using the solr.HdfsDirectoryFactory,\n         otherwise they will be ignored. If you don't plan on using hdfs,\n         you can safely remove this section. 
-->\n    <!-- The root directory that collection data should be written to. -->\n    <str name=\"solr.hdfs.home\">${solr.hdfs.home:}</str>\n    <!-- The hadoop configuration files to use for the hdfs client. -->\n    <str name=\"solr.hdfs.confdir\">${solr.hdfs.confdir:}</str>\n    <!-- Enable/Disable the hdfs cache. -->\n    <str name=\"solr.hdfs.blockcache.enabled\">${solr.hdfs.blockcache.enabled:true}</str>\n    <!-- Enable/Disable using one global cache for all SolrCores.\n         The settings used will be from the first HdfsDirectoryFactory created. -->\n    <str name=\"solr.hdfs.blockcache.global\">${solr.hdfs.blockcache.global:true}</str>\n\n  </directoryFactory>\n\n  <!-- The CodecFactory for defining the format of the inverted index.\n       The default implementation is SchemaCodecFactory, which is the official Lucene\n       index format, but hooks into the schema to provide per-field customization of\n       the postings lists and per-document values in the fieldType element\n       (postingsFormat/docValuesFormat). Note that most of the alternative implementations\n       are experimental, so if you choose to customize the index format, it's a good\n       idea to convert back to the official format e.g. 
via IndexWriter.addIndexes(IndexReader)\n       before upgrading to a newer version to avoid unnecessary reindexing.\n  -->\n  <codecFactory class=\"solr.SchemaCodecFactory\"/>\n\n  <!-- To enable dynamic schema REST APIs, remove the following <schemaFactory>.\n  -->\n  <schemaFactory class=\"ClassicIndexSchemaFactory\"/>\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Index Config - These settings control low-level behavior of indexing\n       Most example settings here show the default value, but are commented\n       out, to more easily see where customizations have been made.\n\n       Note: This replaces <indexDefaults> and <mainIndex> from older versions\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <indexConfig>\n    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a\n         LimitTokenCountFilterFactory in your fieldType definition. E.g.\n     <filter class=\"solr.LimitTokenCountFilterFactory\" maxTokenCount=\"10000\"/>\n    -->\n    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->\n    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->\n\n    <!-- Expert: Enabling compound file will use fewer files for the index,\n         using fewer file descriptors at the expense of a performance decrease.\n         Default in Lucene is \"true\". Default in Solr is \"false\" (since 3.6) -->\n    <!-- <useCompoundFile>false</useCompoundFile> -->\n\n    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene\n         indexing for buffering added documents and deletions before they are\n         flushed to the Directory.\n         maxBufferedDocs sets a limit on the number of documents buffered\n         before flushing.\n         If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.\n         The default is 100 MB.  
-->\n    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <!-- Expert: Merge Policy\n         The Merge Policy in Lucene controls how merging of segments is done.\n         The default since Solr/Lucene 3.3 is TieredMergePolicy.\n         The default since Lucene 2.3 was the LogByteSizeMergePolicy,\n         Even older versions of Lucene used LogDocMergePolicy.\n      -->\n    <!--\n        <mergePolicyFactory class=\"solr.TieredMergePolicyFactory\">\n          <int name=\"maxMergeAtOnce\">10</int>\n          <int name=\"segmentsPerTier\">10</int>\n        </mergePolicyFactory>\n     -->\n\n    <!-- Expert: Merge Scheduler\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!--\n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\n    <!-- LockFactory\n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n\n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         Defaults: 'native' is default for Solr3.6 and later, otherwise\n                   'simple' is the default\n\n         More details on the nuances of each LockFactory...\n         http://wiki.apache.org/lucene-java/AvailableLockFactories\n    -->\n    <lockType>${solr.lock.type:native}</lockType>\n\n    <!-- Commit 
Deletion Policy\n         Custom deletion policies can be specified here. The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         The default Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n\n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <!--\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n    -->\n    <!-- The number of commit points to be kept -->\n    <!-- <str name=\"maxCommitsToKeep\">1</str> -->\n    <!-- The number of optimized commit points to be kept -->\n    <!-- <str name=\"maxOptimizedCommitsToKeep\">0</str> -->\n    <!--\n        Delete all commit points once they have reached the given age.\n        Supports DateMathParser syntax e.g.\n      -->\n    <!--\n       <str name=\"maxCommitAge\">30MINUTES</str>\n       <str name=\"maxCommitAge\">1DAY</str>\n    -->\n    <!--\n    </deletionPolicy>\n    -->\n\n    <!-- Lucene Infostream\n\n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its info stream to solr's log. By default,\n         this is enabled here, and controlled through log4j2.xml\n      -->\n    <infoStream>true</infoStream>\n\n    <!-- Let the config generator easily inject additional stuff. -->\n    &index;\n\n  </indexConfig>\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- Enables a transaction log, used for real-time get, durability,\n         and solr cloud replica recovery.  
The log can grow as big as\n         uncommitted changes to the index, so use of a hard autoCommit\n         is recommended (see below).\n         \"dir\" - the target directory for transaction logs, defaults to the\n                solr data directory.  -->\n    <updateLog>\n      <str name=\"dir\">${solr.ulog.dir:}</str>\n    </updateLog>\n\n    <!-- AutoCommit\n\n         Perform a hard commit automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents.\n\n         http://wiki.apache.org/solr/UpdateXmlMessages\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time in ms that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n         openSearcher - if false, the commit causes recent index changes\n           to be flushed to stable storage, but does not cause a new\n           searcher to be opened to make those changes visible.\n\n         If the updateLog is enabled, then it's highly recommended to\n         have some sort of hard autoCommit to limit the log size.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:15000}</maxTime>\n      <openSearcher>false</openSearcher>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n      -->\n\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:5000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n\n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns.\n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n\n  </updateHandler>\n\n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. 
The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Query section - these settings control query time things like caches\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <query>\n    <!-- Let the config generator easily inject additional stuff. -->\n    &query;\n\n    <!-- Query Related Event Listeners\n\n         Various IndexSearcher related events can trigger Listeners to\n         take actions.\n\n         newSearcher - fired whenever a new searcher is being prepared\n         and there is a current searcher handling requests (aka\n         registered).  
It can be used to prime certain caches to\n         prevent long request times for certain requests.\n\n         firstSearcher - fired whenever a new searcher is being\n         prepared but there is no current registered searcher to handle\n         requests or to gain autowarming data from.\n\n\n      -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence.\n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">static firstSearcher warming in solrconfig.xml</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  
If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n  </query>\n\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n    -->\n  <requestDispatcher>\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         multipartUploadLimitInKB - specifies the max size (in KiB) of\n         Multipart File Uploads that Solr will allow in a Request.\n\n         formdataUploadLimitInKB - specifies the max size (in KiB) of\n         form data (application/x-www-form-urlencoded) sent via\n         POST. You can use POST to pass request parameters not\n         fitting into the URL.\n\n         addHttpRequestToContext - if set to true, it will instruct\n         the requestParsers to include the original HttpServletRequest\n         object in the context map of the SolrQueryRequest under the\n         key \"httpRequest\". It will not be used by any of the existing\n         Solr components, but may be useful when developing custom\n         plugins.\n\n         *** WARNING ***\n         Before enabling remote streaming, you should make sure your\n         system has authentication enabled.\n\n    <requestParsers enableRemoteStreaming=\"false\"\n                    multipartUploadLimitInKB=\"-1\"\n                    formdataUploadLimitInKB=\"-1\"\n                    addHttpRequestToContext=\"false\"/>\n      -->\n    <!-- Let the config generator easily inject additional stuff. 
-->\n    &requestdispatcher;\n\n  </requestDispatcher>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of the standard names,\n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n\n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n\n       <arr name=\"last-components\">\n         <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\"\n\n     -->\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  &extra;\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could 
most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\"\n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter\n           (for sentence extraction)\n        -->\n      <fragmenter name=\"regex\"\n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      </fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\"\n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<em>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</em>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure the standard encoder -->\n      <encoder name=\"html\"\n               class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\"\n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- Configure the weighted fragListBuilder -->\n      <fragListBuilder name=\"weighted\"\n                       default=\"true\"\n                       class=\"solr.highlight.WeightedFragListBuilder\"/>\n\n      <!-- 
default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\"\n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!--\n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawngreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n\n      <boundaryScanner name=\"default\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? &#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n\n      <boundaryScanner name=\"breakIterator\"\n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing a Locale object.  
-->\n          <!-- And the Locale object will be used when getting instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    -->\n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.\n\n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Language identification\n\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. 
No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n  <!--\n   <updateRequestProcessorChain name=\"langid\">\n     <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n       <str name=\"langid.fl\">text,title,subject,description</str>\n       <str name=\"langid.langField\">language_s</str>\n       <str name=\"langid.fallback\">en</str>\n     </processor>\n     <processor class=\"solr.LogUpdateProcessorFactory\" />\n     <processor class=\"solr.RunUpdateProcessorFactory\" />\n   </updateRequestProcessorChain>\n  -->\n\n  <!-- Script update processor\n\n    This example hooks in an update processor implemented using JavaScript.\n\n    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor\n  -->\n  <!--\n    <updateRequestProcessorChain name=\"script\">\n      <processor class=\"solr.StatelessScriptUpdateProcessorFactory\">\n        <str name=\"script\">update-script.js</str>\n        <lst name=\"params\">\n          <str name=\"config_param\">example config parameter</str>\n        </lst>\n      </processor>\n      <processor class=\"solr.RunUpdateProcessorFactory\" />\n    </updateRequestProcessorChain>\n  -->\n\n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     
<queryResponseWriter name=\"xml\"\n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n     <queryResponseWriter name=\"schema.xml\" class=\"solr.SchemaXmlResponseWriter\"/>\n    -->\n\n  <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n  </queryResponseWriter>\n\n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!--\n    <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" startup=\"lazy\">\n      <str name=\"template.base.dir\">${velocity.template.base.dir:}</str>\n    </queryResponseWriter>\n    -->\n\n  <!-- XSLT response writer transforms the XML output by any xslt file found\n       in Solr's conf/xslt directory.  
Changes to xslt files are checked for\n       every xsltCacheLifetimeSeconds.\n    -->\n  <queryResponseWriter name=\"xslt\" class=\"solr.XSLTResponseWriter\">\n    <int name=\"xsltCacheLifetimeSeconds\">5</int>\n  </queryResponseWriter>\n\n  <!-- Query Parsers\n\n       https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\"\n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n\n  <!-- Document Transformers\n       http://wiki.apache.org/solr/DocTransformers\n    -->\n  <!--\n     Could be something like:\n     <transformer name=\"db\" class=\"com.mycompany.LoadFromDatabaseTransformer\" >\n       <int name=\"connection\">jdbc://....</int>\n     </transformer>\n\n     To add a constant value to all docs, use:\n     <transformer name=\"mytrans2\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <int name=\"value\">5</int>\n     </transformer>\n\n     If you want the user to still be able to change it with _value:something_ use this:\n     <transformer name=\"mytrans3\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <double name=\"defaultValue\">5</double>\n     </transformer>\n\n      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  
The\n      EditorialMarkerFactory will do exactly that:\n     <transformer name=\"qecBooster\" class=\"org.apache.solr.response.transform.EditorialMarkerFactory\" />\n    -->\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr7_drupal9/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:9077/solr\nsolr.replication.confFiles=schema.xml,schema_extra_types.xml,schema_extra_fields.xml,elevate.xml\nsolr.mlt.timeAllowed=2000\nsolr.selectSearchHandler.timeAllowed=-1\n# don't autoCommit after x docs\nsolr.autoCommit.MaxDocs=-1\n# autoCommit after 15 seconds\nsolr.autoCommit.MaxTime=15000\n# don't autoSoftCommit after x docs\nsolr.autoSoftCommit.MaxDocs=-1\n# autoSoftCommit after 5 seconds\nsolr.autoSoftCommit.MaxTime=5000\nsolr.install.dir=/opt/solr7\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr9_drupal10/elevate.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!--\n This file allows you to boost certain search items to the top of search\n results. You can find out an item's ID by searching directly on the Solr\n server. Search API generally constructs item IDs (esp. for entities) as:\n     $document->id = \"$site_hash-$index_id-$datasource:$entity_id:$language_id\";\n\n If you want this file to be automatically re-loaded when a Solr commit takes\n place (e.g., if you have an automatic script active which updates elevate.xml\n according to newly-indexed data), place it into Solr's data/ directory.\n Otherwise, place it with the other configuration files into the conf/\n directory.\n\n See http://wiki.apache.org/solr/QueryElevationComponent for more information.\n-->\n\n<elevate>\n<!-- Example for ranking the node #789 first in searches for \"example query\": -->\n<!--\n  <query text=\"example query\">\n    <doc id=\"ab12cd34-site_index-entity:789:en\" />\n </query>\n-->\n<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->\n<!-- <query text=\"...\">...</query> -->\n</elevate>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr9_drupal10/schema.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n<!DOCTYPE schema [\n  <!ENTITY extrafields SYSTEM \"schema_extra_fields.xml\">\n  <!ENTITY extratypes SYSTEM \"schema_extra_types.xml\">\n]>\n\n<!--\n This is the Solr schema file. This file should be named \"schema.xml\" and\n should be in the conf directory under the solr home\n (i.e. ./solr/conf/schema.xml by default)\n or located where the classloader for the Solr webapp can find it.\n\n This example schema is the recommended starting point for users.\n It should be kept correct and concise, usable out-of-the-box.\n\n For more information on how to customize this file, please see\n https://solr.apache.org/guide/solr/latest/indexing-guide/schema-elements.html\n\n PERFORMANCE NOTE: this schema includes many optional features and should not\n be used for benchmarking.  
To improve performance one could\n  - set stored=\"false\" for all fields possible (esp large fields) when you\n    only need to search on the field but don't need to return the original\n    value.\n  - set indexed=\"false\" if you don't need to search on the field, but only\n    return the field as a result of searching on other indexed fields.\n  - remove all unneeded copyField statements\n  - for best index size and searching performance, set \"index\" to false\n    for all general text fields, use copyField to copy them to the\n    catchall \"text\" field, and use that for searching.\n  - For maximum indexing performance, use the ConcurrentUpdateSolrServer\n    java client.\n  - Remember to run the JVM in server mode, and use a higher logging level\n    that avoids logging every request\n-->\n\n<schema name=\"drupal-SEARCH_API_SOLR_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" version=\"1.6\">\n  <!-- attribute \"name\" is the name of this schema and is only used for display purposes.\n       version=\"x.y\" is Solr's version number for the schema syntax and\n       semantics.  It should not normally be changed by applications.\n\n       1.0: multiValued attribute did not exist, all fields are multiValued\n            by nature\n       1.1: multiValued attribute introduced, false by default\n       1.2: omitTermFreqAndPositions attribute introduced, true by default\n            except for text fields.\n       1.3: removed optional field compress feature\n       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser\n            behavior when a single string produces multiple tokens.  
Defaults\n            to off for version >= 1.4\n       1.5: omitNorms defaults to true for primitive field types\n            (int, float, boolean, string...)\n       1.6: useDocValuesAsStored defaults to true.\n     -->\n\n\n  <!-- Valid attributes for fields:\n    name: mandatory - the name for the field\n    type: mandatory - the name of a field type from the\n      fieldTypes\n    indexed: true if this field should be indexed (searchable or sortable)\n    stored: true if this field should be retrievable\n    docValues: true if this field should have doc values. Doc values are\n      useful (required, if you are using *Point fields) for faceting,\n      grouping, sorting and function queries. Doc values will make the index\n      faster to load, more NRT-friendly and more memory-efficient.\n      They however come with some limitations: they are currently only\n      supported by StrField, UUIDField, all *PointFields, and depending\n      on the field type, they might require the field to be single-valued,\n      be required or have a default value (check the documentation\n      of the field type you're interested in for more information)\n    multiValued: true if this field may contain multiple values per document\n    omitNorms: (expert) set to true to omit the norms associated with\n      this field (this disables length normalization and index-time\n      boosting for the field, and saves some memory).  Only full-text\n      fields or fields that need an index-time boost need norms.\n      Norms are omitted for primitive (non-analyzed) types by default.\n    termVectors: [false] set to true to store the term vector for a\n      given field.\n      When using MoreLikeThis, fields used for similarity should be\n      stored for best performance.\n    termPositions: Store position information with the term vector.\n      This will increase storage costs.\n    termOffsets: Store offset information with the term vector. 
This\n      will increase storage costs.\n    termPayloads: Store payload information with the term vector. This\n      will increase storage costs.\n    required: The field is required.  It will throw an error if the\n      value does not exist\n    default: a value that should be used if no value is specified\n      when adding a document.\n  -->\n\n  <!-- field names should consist of alphanumeric or underscore characters only and\n     not start with a digit.  This is not currently strictly enforced,\n     but other field names will not have first class support from all components\n     and back compatibility is not guaranteed.  Names with both leading and\n     trailing underscores (e.g. _version_) are reserved.\n  -->\n\n  <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml\n     or Solr won't start. _version_ and update log are required for SolrCloud\n  -->\n  <!-- doc values are enabled by default for primitive types such as long so we don't index the version field  -->\n  <field name=\"_version_\" type=\"plong\" indexed=\"false\" stored=\"false\"/>\n\n  <!-- points to the root document of a block of nested documents. Required for nested\n     document support, may be removed otherwise\n  -->\n  <field name=\"_root_\" type=\"string\" indexed=\"true\" stored=\"true\" docValues=\"false\" />\n  <fieldType name=\"_nest_path_\" class=\"solr.NestPathField\" />\n  <field name=\"_nest_path_\" type=\"_nest_path_\" />\n\n  <!-- Only remove the \"id\" field if you have a very good reason to. While not strictly\n  required, it is highly recommended. A <uniqueKey> is present in almost all Solr\n  installations. See the <uniqueKey> declaration below where <uniqueKey> is set to \"id\".\n  -->\n  <!-- The document id is usually derived from a site-specific key (hash) and the\n    entity type and ID like:\n    Search Api 7.x:\n      The format used is $document->id = $index_id . '-' . 
$item_id\n    Search Api 8.x:\n      The format used is $document->id = $site_hash . '-' . $index_id . '-' . $item_id\n    Apache Solr Search Integration 7.x:\n      The format used is $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;\n  -->\n  <!-- The Highlighter Component requires the id field to be \"stored\" even if docValues are set. -->\n  <field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Search Api specific fields -->\n  <!-- index_id is the machine name of the search index this entry belongs to. -->\n  <field name=\"index_id\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <!-- Here, default is used to create a \"timestamp\" field indicating\n       when each document was indexed.-->\n  <field name=\"timestamp\" type=\"pdate\" indexed=\"true\" stored=\"false\" default=\"NOW\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"site\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"hash\" type=\"string\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n\n  <field name=\"boost_document\" type=\"pfloat\" indexed=\"true\" stored=\"false\" multiValued=\"false\" docValues=\"true\"/>\n  <field name=\"boost_term\" type=\"boost_term_payload\" indexed=\"true\" stored=\"false\" multiValued=\"true\"/>\n\n  <!-- Currently the suggester context filter query (suggest.cfq) accesses the tags using the stored values, neither the indexed terms nor the docValues.\n       Therefore the dynamicField sm_* isn't suitable at the moment -->\n  <field name=\"sm_context_tags\" type=\"strings\" indexed=\"true\" stored=\"true\" docValues=\"false\"/>\n\n  <!-- Dynamic field definitions.  
If a field name is not found, dynamicFields\n       will be used if the name matches any of the patterns.\n       RESTRICTION: the glob-like pattern in the name attribute must have\n       a \"*\" only at the start or the end.\n       EXAMPLE:  name=\"*_i\" will match any field ending in _i (like myid_i, z_i)\n       Longer patterns will be matched first.  if equal size patterns\n       both match, the first appearing in the schema will be used.  -->\n\n  <!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and\n       the last letter is 's' for single valued, 'm' for multi-valued -->\n\n  <!-- We use plong for integer since 64 bit ints are now common in PHP. -->\n  <dynamicField name=\"is_*\" type=\"plong\" indexed=\"true\" stored=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"im_*\" type=\"plongs\" indexed=\"true\" stored=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- List of floats can be saved in a regular float field -->\n  <dynamicField name=\"fs_*\" type=\"pfloat\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"fm_*\" type=\"pfloats\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <!-- List of doubles can be saved in a regular double field -->\n  <dynamicField name=\"ps_*\" type=\"pdouble\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"pm_*\" type=\"pdoubles\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <!-- List of booleans can be saved in a regular boolean field -->\n  <dynamicField name=\"bm_*\" type=\"booleans\" indexed=\"true\" stored=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"bs_*\" type=\"boolean\" indexed=\"true\" stored=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- Regular text (without processing) can be stored in a string field-->\n  <dynamicField name=\"ss_*\" type=\"string\" indexed=\"true\" stored=\"false\" docValues=\"true\" 
termVectors=\"true\"/>\n  <!-- For field types using SORTED_SET, multiple identical entries are collapsed into a single value.\n       Thus if I insert values 4, 5, 2, 4, 1, my return will be 1, 2, 4, 5 when enabling docValues.\n       If you need to preserve the order and duplicate entries, consider to store the values as zm_* (twice). -->\n  <dynamicField name=\"sm_*\" type=\"strings\" indexed=\"true\" stored=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <!-- Special-purpose text fields -->\n  <dynamicField name=\"tws_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"twm_*\" type=\"text_ws\" indexed=\"true\" stored=\"true\" multiValued=\"true\"/>\n\n  <dynamicField name=\"ds_*\" type=\"pdate\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"dm_*\" type=\"pdates\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <!-- This field is used to store date ranges -->\n  <dynamicField name=\"drs_*\" type=\"date_range\" indexed=\"true\" stored=\"true\"/>\n  <dynamicField name=\"drm_*\" type=\"date_ranges\" indexed=\"true\" stored=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"its_*\" type=\"plong\" indexed=\"true\" stored=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"itm_*\" type=\"plongs\" indexed=\"true\" stored=\"false\" docValues=\"true\" termVectors=\"true\"/>\n  <dynamicField name=\"fts_*\" type=\"pfloat\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ftm_*\" type=\"pfloats\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"pts_*\" type=\"pdouble\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ptm_*\" type=\"pdoubles\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding\n       a small image in a search result using the data URI scheme -->\n  <dynamicField name=\"xs_*\" type=\"binary\" indexed=\"false\" stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"xm_*\" type=\"binary\" indexed=\"false\" stored=\"true\" multiValued=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. -->\n  <dynamicField name=\"dds_*\" type=\"pdate\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"ddm_*\" type=\"pdates\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <!-- In case a 32 bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 32 bit on 64 arch -->\n  <dynamicField name=\"hs_*\" type=\"pint\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"hm_*\" type=\"pints\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <!-- Trie fields are deprecated. Point fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"hts_*\" type=\"pint\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n  <dynamicField name=\"htm_*\" type=\"pints\" indexed=\"true\" stored=\"false\" docValues=\"true\"/>\n\n  <!-- Unindexed string fields that can be used to store values that won't be searchable but have docValues -->\n  <dynamicField name=\"zdvs_*\" type=\"string\" indexed=\"false\" stored=\"true\" docValues=\"true\"/>\n  <dynamicField name=\"zdvm_*\" type=\"strings\" indexed=\"false\" stored=\"true\" docValues=\"true\"/>\n  <!-- Unindexed string fields that can be used to store values that won't be searchable -->\n  <dynamicField name=\"zs_*\" type=\"string\" indexed=\"false\" stored=\"true\"/>\n  <dynamicField name=\"zm_*\" type=\"strings\" indexed=\"false\" stored=\"true\"/>\n\n  <!-- Fields for location searches.\n       http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->\n  <dynamicField name=\"points_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"pointm_*\" type=\"point\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"locs_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"locm_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <!-- GeoHash fields are deprecated. LatLonPointSpatial fields solve all needs. But we keep the dedicated field names for backward compatibility. 
-->\n  <dynamicField name=\"geos_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"false\"/>\n  <dynamicField name=\"geom_*\" type=\"location\" indexed=\"true\"  stored=\"true\" multiValued=\"true\"/>\n  <dynamicField name=\"bboxs_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"bboxm_*\" type=\"bbox\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n  <dynamicField name=\"rpts_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"false\" />\n  <dynamicField name=\"rptm_*\" type=\"location_rpt\" indexed=\"true\" stored=\"true\" multiValued=\"true\" />\n\n  <!-- External file fields -->\n  <dynamicField name=\"eff_*\" type=\"file\"/>\n\n  <!-- A random sort field -->\n  <dynamicField name=\"random_*\" type=\"random\" indexed=\"true\" stored=\"true\"/>\n\n  <!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->\n  <dynamicField name=\"access_*\" type=\"pint\" indexed=\"true\" stored=\"false\" multiValued=\"true\" docValues=\"true\"/>\n\n  <!-- The following causes solr to ignore any fields that don't already match an existing\n       field name or dynamic field, rather than reporting them as an error.\n       Alternately, change the type=\"ignored\" to some other type e.g. \"text\" if you want\n       unknown fields indexed and/or stored by default -->\n  <dynamicField name=\"*\" type=\"ignored\" multiValued=\"true\" />\n\n\n    <!-- field type definitions. The \"name\" attribute is\n       just a label to be used by field definitions.  
The \"class\"\n       attribute and any other attributes determine the real\n       behavior of the fieldType.\n         Class names starting with \"solr\" refer to java classes in a\n       standard package such as org.apache.solr.analysis\n    -->\n\n    <!-- The StrField type is not analyzed, but indexed/stored verbatim.\n       It supports doc values but in that case the field needs to be\n       single-valued and either required or have a default value.\n      -->\n    <fieldType name=\"string\" class=\"solr.StrField\"/>\n    <fieldType name=\"strings\" class=\"solr.StrField\" multiValued=\"true\"/>\n\n    <!-- boolean type: \"true\" or \"false\" -->\n    <fieldType name=\"boolean\" class=\"solr.BoolField\"/>\n    <fieldType name=\"booleans\" class=\"solr.BoolField\" multiValued=\"true\"/>\n\n    <!-- sortMissingLast and sortMissingFirst attributes are optional attributes are\n         currently supported on types that are sorted internally as strings\n         and on numeric types.\n         This includes \"string\", \"boolean\", \"pint\", \"pfloat\", \"plong\", \"pdate\", \"pdouble\".\n       - If sortMissingLast=\"true\", then a sort on this field will cause documents\n         without the field to come after documents with the field,\n         regardless of the requested sort order (asc or desc).\n       - If sortMissingFirst=\"true\", then a sort on this field will cause documents\n         without the field to come before documents with the field,\n         regardless of the requested sort order.\n       - If sortMissingLast=\"false\" and sortMissingFirst=\"false\" (the default),\n         then default lucene sorting will be used which places docs without the\n         field first in an ascending sort and last in a descending sort.\n    -->\n\n    <!--\n      Numeric field types that index values using KD-trees.\n      Point fields don't support FieldCache, so they must have docValues=\"true\" if needed for sorting, faceting, functions, etc.\n    -->\n   
 <fieldType name=\"pint\" class=\"solr.IntPointField\" docValues=\"true\"/>\n    <fieldType name=\"pfloat\" class=\"solr.FloatPointField\" docValues=\"true\"/>\n    <fieldType name=\"plong\" class=\"solr.LongPointField\" docValues=\"true\"/>\n    <fieldType name=\"pdouble\" class=\"solr.DoublePointField\" docValues=\"true\"/>\n\n    <fieldType name=\"pints\" class=\"solr.IntPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pfloats\" class=\"solr.FloatPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"plongs\" class=\"solr.LongPointField\" docValues=\"true\" multiValued=\"true\"/>\n    <fieldType name=\"pdoubles\" class=\"solr.DoublePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!--\n     The ExternalFileField type gets values from an external file instead of the\n     index. This is useful for data such as rankings that might change frequently\n     and require different update frequencies than the documents they are\n     associated with.\n    -->\n    <fieldType name=\"file\" keyField=\"id\" defVal=\"1\" stored=\"false\" indexed=\"false\" class=\"solr.ExternalFileField\"/>\n\n    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and\n         is a more restricted form of the canonical representation of dateTime\n         http://www.w3.org/TR/xmlschema-2/#dateTime\n         The trailing \"Z\" designates UTC time and is mandatory.\n         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z\n         All other components are mandatory.\n\n         Expressions can also be used to denote calculations that should be\n         performed relative to \"NOW\" to determine the value, ie...\n\n               NOW/HOUR\n                  ... Round to the start of the current hour\n               NOW-1DAY\n                  ... Exactly 1 day prior to now\n               NOW/DAY+6MONTHS+3DAYS\n                  ... 
6 months and 3 days in the future from the start of\n                      the current day\n\n         Consult the DatePointField javadocs for more information.\n      -->\n    <!-- KD-tree versions of date fields -->\n    <fieldType name=\"pdate\" class=\"solr.DatePointField\" docValues=\"true\"/>\n    <fieldType name=\"pdates\" class=\"solr.DatePointField\" docValues=\"true\" multiValued=\"true\"/>\n\n    <!-- A date range field -->\n    <fieldType name=\"date_range\" class=\"solr.DateRangeField\"/>\n    <fieldType name=\"date_ranges\" class=\"solr.DateRangeField\" multiValued=\"true\"/>\n\n    <!--Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->\n    <fieldType name=\"binary\" class=\"solr.BinaryField\"/>\n\n    <!-- The \"RandomSortField\" is not used to store or search any\n         data.  You can declare fields of this type in your schema\n         to generate pseudo-random orderings of your docs for sorting\n         or function purposes.  The ordering is generated based on the field\n         name and the version of the index. As long as the index version\n         remains unchanged, and the same field name is reused,\n         the ordering of the docs will be consistent.\n         If you want different pseudo-random orderings of documents,\n         for the same version of the index, use a dynamicField and\n         change the field name in the request.\n     -->\n    <fieldType name=\"random\" class=\"solr.RandomSortField\" indexed=\"true\" />\n\n    <!-- solr.TextField allows the specification of custom text analyzers\n         specified as a tokenizer and a list of token filters. 
Different\n         analyzers may be specified for indexing and querying.\n\n         The optional positionIncrementGap puts space between multiple fields of\n         this type on the same document, with the purpose of preventing false phrase\n         matching across fields.\n\n         For more info on customizing your analyzer chain, please see\n         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters\n     -->\n\n    <!-- One can also specify an existing Analyzer class that has a\n         default constructor via the class attribute on the analyzer element.\n         Example:\n    <fieldType name=\"text_greek\" class=\"solr.TextField\">\n      <analyzer class=\"org.apache.lucene.analysis.el.GreekAnalyzer\"/>\n    </fieldType>\n    -->\n\n    <!-- A text field that only splits on whitespace for exact matching of words -->\n    <fieldType name=\"text_ws\" class=\"solr.TextField\" omitNorms=\"true\" positionIncrementGap=\"100\" storeOffsetsWithPositions=\"true\">\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n      </analyzer>\n    </fieldType>\n\n    <fieldType name=\"boost_term_payload\" stored=\"false\" indexed=\"true\" class=\"solr.TextField\" >\n      <analyzer>\n        <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>\n        <filter class=\"solr.LengthFilterFactory\" min=\"2\" max=\"100\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.RemoveDuplicatesTokenFilterFactory\"/>\n        <!--\n        The DelimitedPayloadTokenFilter can put payloads on tokens... for example,\n        a token of \"foo|1.4\"  would be indexed as \"foo\" with a payload of 1.4f\n        Attributes of the DelimitedPayloadTokenFilterFactory :\n         \"delimiter\" - a one character delimiter. 
Default is | (pipe)\n         \"encoder\" - how to encode the following value into a payload\n           float -> org.apache.lucene.analysis.payloads.FloatEncoder,\n           integer -> o.a.l.a.p.IntegerEncoder\n           identity -> o.a.l.a.p.IdentityEncoder\n             Fully Qualified class name implementing PayloadEncoder, Encoder must have a no arg constructor.\n         -->\n        <filter class=\"solr.DelimitedPayloadTokenFilterFactory\" encoder=\"float\"/>\n      </analyzer>\n    </fieldType>\n\n    <!-- since fields of this type are by default not stored or indexed,\n         any data added to them will be ignored outright.  -->\n    <fieldType name=\"ignored\" stored=\"false\" indexed=\"false\" multiValued=\"true\" class=\"solr.StrField\" />\n\n    <!-- This point type indexes the coordinates as separate fields (subFields)\n      If subFieldType is defined, it references a type, and a dynamic field\n      definition is created matching *___<typename>.  Alternately, if\n      subFieldSuffix is defined, that is used to create the subFields.\n      Example: if subFieldType=\"double\", then the coordinates would be\n        indexed in fields myloc_0___double,myloc_1___double.\n      Example: if subFieldSuffix=\"_d\" then the coordinates would be indexed\n        in fields myloc_0_d,myloc_1_d\n      The subFields are an implementation detail of the fieldType, and end\n      users normally should not need to know about them.\n     -->\n    <!-- In Drupal we only use prefixes for dynamic fields. That might change in\n      the future but for now we keep this pattern.\n    -->\n    <fieldType name=\"point\" class=\"solr.PointType\" dimension=\"2\" subFieldType=\"pdouble\"/>\n\n    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->\n    <fieldType name=\"location\" class=\"solr.LatLonPointSpatialField\" docValues=\"true\"/>\n\n    <!-- An alternative geospatial field type new to Solr 4.  
It supports multiValued and polygon shapes.\n      For more information about this and other Spatial fields new to Solr 4, see:\n      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4\n    -->\n    <fieldType name=\"location_rpt\" class=\"solr.SpatialRecursivePrefixTreeFieldType\"\n        geo=\"true\" distErrPct=\"0.025\" maxDistErr=\"0.001\" distanceUnits=\"kilometers\" />\n\n    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has\n    special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for\n    relevancy. -->\n    <fieldType name=\"bbox\" class=\"solr.BBoxField\"\n               geo=\"true\" distanceUnits=\"kilometers\" numberType=\"_bbox_coord\" />\n    <fieldType name=\"_bbox_coord\" class=\"solr.DoublePointField\" docValues=\"true\" stored=\"false\"/>\n\n  <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType\n       Parameters:\n           amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field.\n                               The dynamic field must have a field type that extends LongValueFieldType.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.\n                               The dynamic field must have a field type that extends StrField.\n                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.\n           defaultCurrency:  Specifies the default currency if none specified. 
Defaults to \"USD\"\n           providerClass:    Lets you plug in other exchange provider backend:\n                             solr.FileExchangeRateProvider is the default and takes one parameter:\n                               currencyConfig: name of an xml file holding exchange rates\n                             solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:\n                               ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)\n                               refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)\n  -->\n<!--  <fieldType name=\"currency\" class=\"solr.CurrencyFieldType\" amountLongSuffix=\"_l_ns\" codeStrSuffix=\"_s_ns\"\n                 defaultCurrency=\"USD\" currencyConfig=\"currency.xml\" /> -->\n\n  <!-- Following is a dynamic way to include other fields, added by other contrib modules -->\n  &extrafields;\n\n  <!-- Following is a dynamic way to include other types, added by other contrib modules -->\n  &extratypes;\n\n  <!-- Field to use to determine and enforce document uniqueness.\n       Unless this field is marked with required=\"false\", it will be a required field\n    -->\n  <uniqueKey>id</uniqueKey>\n\n  <!-- Similarity is the scoring routine for each document vs. a query.\n       A custom Similarity or SimilarityFactory may be specified here, but\n       the default is fine for most applications.\n       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity\n    -->\n  <!--\n     <similarity class=\"com.example.solr.CustomSimilarityFactory\">\n       <str name=\"paramkey\">param value</str>\n     </similarity>\n    -->\n\n</schema>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr9_drupal10/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n\n<!DOCTYPE config [\n  <!ENTITY extra SYSTEM \"solrconfig_extra.xml\">\n  <!ENTITY index SYSTEM \"solrconfig_index.xml\">\n  <!ENTITY query SYSTEM \"solrconfig_query.xml\">\n  <!ENTITY requestdispatcher SYSTEM \"solrconfig_requestdispatcher.xml\">\n]>\n\n<!--\n     For more details about configuration options that may appear in\n     this file, see https://solr.apache.org/guide/solr/latest/configuration-guide/configuring-solrconfig-xml.html\n-->\n<config name=\"drupal-SEARCH_API_SOLR_SCHEMA_VERSION-solr-SEARCH_API_SOLR_BRANCH-SEARCH_API_SOLR_JUMP_START_CONFIG_SET\" >\n  <!-- In all configuration below, a prefix of \"solr.\" for class names\n       is an alias that causes solr to search appropriate packages,\n       including org.apache.solr.(search|update|request|core|analysis)\n\n       You may also specify a fully qualified Java classname if you\n       have your own custom plugins.\n    -->\n\n  <!-- Set this to 'false' if you want solr to continue working after\n       it has encountered a severe configuration error.  In a\n       production environment, you may want solr to keep working even\n       if one handler is mis-configured.\n\n       You may also set this to false by setting the system\n       property:\n\n         -Dsolr.abortOnConfigurationError=false\n    -->\n  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>\n\n  <!-- Controls what version of Lucene various components of Solr\n       adhere to.  Generally, you want to use the latest version to\n       get all bug fixes and improvements. 
It is highly recommended\n       that you fully re-index after changing this setting as it can\n       affect both how text is indexed and queried.\n    -->\n  <luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_90}</luceneMatchVersion>\n\n  <!-- <lib/> directives can be used to instruct Solr to load any Jars\n       identified and use them to resolve any \"plugins\" specified in\n       your solrconfig.xml or schema.xml (ie: Analyzers, Request\n       Handlers, etc...).\n\n       All directories and paths are resolved relative to the\n       instanceDir.\n\n       Please note that <lib/> directives are processed in the order\n       that they appear in your solrconfig.xml file, and are \"stacked\"\n       on top of each other when building a ClassLoader - so if you have\n       plugin jars with dependencies on other jars, the \"lower level\"\n       dependency jars should be loaded first.\n\n       If a \"./lib\" directory exists in your instanceDir, all files\n       found in it are included as if you had used the following\n       syntax...\n\n              <lib dir=\"./lib\" />\n    -->\n\n  <!-- A 'dir' option by itself adds any files found in the directory\n       to the classpath, this is useful for including all jars in a\n       directory.\n\n       When a 'regex' is specified in addition to a 'dir', only the\n       files in that directory which completely match the regex\n       (anchored on both ends) will be included.\n\n       If a 'dir' option (with or without a regex) is used and nothing\n       is found that matches, a warning will be logged.\n\n       The examples below can be used to load some Solr Modules along\n       with their external dependencies as an alternative to using a\n       startup parameter to define the modules to load globally.  
For more info\n       see https://solr.apache.org/guide/solr/latest/configuration-guide/solr-modules.html\n    -->\n  <lib dir=\"${solr.install.dir:../../../..}/modules/extraction/lib\" regex=\".*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/modules/langid/lib/\" regex=\".*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/modules/ltr/lib/\" regex=\".*\\.jar\" />\n\n  <lib dir=\"${solr.install.dir:../../../..}/modules/analysis-extras/lib/\" regex=\".*\\.jar\" />\n\n  <!-- an exact 'path' can be used instead of a 'dir' to specify a\n       specific jar file.  This will cause a serious error to be logged\n       if it can't be loaded.\n    -->\n  <!--\n     <lib path=\"../a-jar-that-does-not-exist.jar\" />\n  -->\n\n  <!-- Data Directory\n\n       Used to specify an alternate directory to hold all index data\n       other than the default ./data under the Solr home.  If\n       replication is in use, this should match the replication\n       configuration.\n    -->\n  <dataDir>${solr.data.dir:}</dataDir>\n\n\n  <!-- The DirectoryFactory to use for indexes.\n\n       solr.StandardDirectoryFactory is filesystem\n       based and tries to pick the best implementation for the current\n       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,\n       wraps solr.StandardDirectoryFactory and caches small files in memory\n       for better NRT performance.\n\n       One can force a particular implementation via solr.MMapDirectoryFactory,\n       solr.NIOFSDirectoryFactory.\n\n       solr.RAMDirectoryFactory is memory based and not persistent.\n    -->\n  <directoryFactory name=\"DirectoryFactory\"\n                    class=\"${solr.directoryFactory:solr.NRTCachingDirectoryFactory}\">\n\n\n    <!-- These will be used if you are using the solr.HdfsDirectoryFactory,\n         otherwise they will be ignored. If you don't plan on using hdfs,\n         you can safely remove this section. 
-->\n    <!-- The root directory that collection data should be written to. -->\n    <str name=\"solr.hdfs.home\">${solr.hdfs.home:}</str>\n    <!-- The hadoop configuration files to use for the hdfs client. -->\n    <str name=\"solr.hdfs.confdir\">${solr.hdfs.confdir:}</str>\n    <!-- Enable/Disable the hdfs cache. -->\n    <str name=\"solr.hdfs.blockcache.enabled\">${solr.hdfs.blockcache.enabled:true}</str>\n    <!-- Enable/Disable using one global cache for all SolrCores.\n         The settings used will be from the first HdfsDirectoryFactory created. -->\n    <str name=\"solr.hdfs.blockcache.global\">${solr.hdfs.blockcache.global:true}</str>\n\n  </directoryFactory>\n\n  <!-- The CodecFactory for defining the format of the inverted index.\n       The default implementation is SchemaCodecFactory, which is the official Lucene\n       index format, but hooks into the schema to provide per-field customization of\n       the postings lists and per-document values in the fieldType element\n       (postingsFormat/docValuesFormat). Note that most of the alternative implementations\n       are experimental, so if you choose to customize the index format, it's a good\n       idea to convert back to the official format e.g. 
via IndexWriter.addIndexes(IndexReader)\n       before upgrading to a newer version to avoid unnecessary reindexing.\n  -->\n  <codecFactory class=\"solr.SchemaCodecFactory\"/>\n\n  <!-- To enable dynamic schema REST APIs, remove the following <schemaFactory>.\n  -->\n  <schemaFactory class=\"ClassicIndexSchemaFactory\"/>\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Index Config - These settings control low-level behavior of indexing\n       Most example settings here show the default value, but are commented\n       out, to more easily see where customizations have been made.\n\n       Note: This replaces <indexDefaults> and <mainIndex> from older versions\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <indexConfig>\n    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a\n         LimitTokenCountFilterFactory in your fieldType definition. E.g.\n     <filter class=\"solr.LimitTokenCountFilterFactory\" maxTokenCount=\"10000\"/>\n    -->\n    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->\n    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->\n\n    <!-- Expert: Enabling compound file will use fewer files for the index,\n         using fewer file descriptors at the expense of a performance decrease.\n         Default in Lucene is \"true\". Default in Solr is \"false\" (since 3.6) -->\n    <!-- <useCompoundFile>false</useCompoundFile> -->\n\n    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene\n         indexing for buffering added documents and deletions before they are\n         flushed to the Directory.\n         maxBufferedDocs sets a limit on the number of documents buffered\n         before flushing.\n         If both ramBufferSizeMB and maxBufferedDocs are set, then\n         Lucene will flush based on whichever limit is hit first.\n         The default is 100 MB.  
-->\n    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->\n    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->\n\n    <!-- Expert: Merge Policy\n         The Merge Policy in Lucene controls how merging of segments is done.\n         The default since Solr/Lucene 3.3 is TieredMergePolicy.\n         The default since Lucene 2.3 was the LogByteSizeMergePolicy,\n         Even older versions of Lucene used LogDocMergePolicy.\n      -->\n    <!--\n        <mergePolicyFactory class=\"solr.TieredMergePolicyFactory\">\n          <int name=\"maxMergeAtOnce\">10</int>\n          <int name=\"segmentsPerTier\">10</int>\n        </mergePolicyFactory>\n     -->\n\n    <!-- Expert: Merge Scheduler\n         The Merge Scheduler in Lucene controls how merges are\n         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)\n         can perform merges in the background using separate threads.\n         The SerialMergeScheduler (Lucene 2.2 default) does not.\n     -->\n    <!--\n       <mergeScheduler class=\"org.apache.lucene.index.ConcurrentMergeScheduler\"/>\n       -->\n\n    <!-- LockFactory\n\n         This option specifies which Lucene LockFactory implementation\n         to use.\n\n         single = SingleInstanceLockFactory - suggested for a\n                  read-only index or when there is no possibility of\n                  another process trying to modify the index.\n         native = NativeFSLockFactory - uses OS native file locking.\n                  Do not use when multiple solr webapps in the same\n                  JVM are attempting to share a single index.\n         simple = SimpleFSLockFactory  - uses a plain file for locking\n\n         Defaults: 'native' is default for Solr3.6 and later, otherwise\n                   'simple' is the default\n\n         More details on the nuances of each LockFactory...\n         https://cwiki.apache.org/confluence/display/lucene/AvailableLockFactories\n    -->\n    <lockType>${solr.lock.type:native}</lockType>\n\n  
  <!-- Commit Deletion Policy\n         Custom deletion policies can be specified here. The class must\n         implement org.apache.lucene.index.IndexDeletionPolicy.\n\n         The default Solr IndexDeletionPolicy implementation supports\n         deleting index commit points on number of commits, age of\n         commit point and optimized status.\n\n         The latest commit point should always be preserved regardless\n         of the criteria.\n    -->\n    <!--\n    <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n    -->\n    <!-- The number of commit points to be kept -->\n    <!-- <str name=\"maxCommitsToKeep\">1</str> -->\n    <!-- The number of optimized commit points to be kept -->\n    <!-- <str name=\"maxOptimizedCommitsToKeep\">0</str> -->\n    <!--\n        Delete all commit points once they have reached the given age.\n        Supports DateMathParser syntax e.g.\n      -->\n    <!--\n       <str name=\"maxCommitAge\">30MINUTES</str>\n       <str name=\"maxCommitAge\">1DAY</str>\n    -->\n    <!--\n    </deletionPolicy>\n    -->\n\n    <!-- Lucene Infostream\n\n         To aid in advanced debugging, Lucene provides an \"InfoStream\"\n         of detailed information when indexing.\n\n         Setting the value to true will instruct the underlying Lucene\n         IndexWriter to write its info stream to solr's log. By default,\n         this is enabled here, and controlled through log4j2.xml\n      -->\n    <infoStream>true</infoStream>\n\n    <!-- Let the config generator easily inject additional stuff. -->\n    &index;\n\n  </indexConfig>\n\n  <!-- The default high-performance update handler -->\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n\n    <!-- Enables a transaction log, used for real-time get, durability,\n         and solr cloud replica recovery.  
The log can grow as big as\n         uncommitted changes to the index, so use of a hard autoCommit\n         is recommended (see below).\n         \"dir\" - the target directory for transaction logs, defaults to the\n                solr data directory.  -->\n    <updateLog>\n      <str name=\"dir\">${solr.ulog.dir:}</str>\n      <int name=\"numVersionBuckets\">${solr.ulog.numVersionBuckets:65536}</int>\n    </updateLog>\n\n    <!-- AutoCommit\n\n         Perform a hard commit automatically under certain conditions.\n         Instead of enabling autoCommit, consider using \"commitWithin\"\n         when adding documents.\n\n         https://solr.apache.org/guide/solr/latest/indexing-guide/indexing-with-update-handlers.html\n\n         maxDocs - Maximum number of documents to add since the last\n                   commit before automatically triggering a new commit.\n\n         maxTime - Maximum amount of time in ms that is allowed to pass\n                   since a document was added before automatically\n                   triggering a new commit.\n         openSearcher - if false, the commit causes recent index changes\n           to be flushed to stable storage, but does not cause a new\n           searcher to be opened to make those changes visible.\n\n         If the updateLog is enabled, then it's highly recommended to\n         have some sort of hard autoCommit to limit the log size.\n      -->\n    <autoCommit>\n      <maxDocs>${solr.autoCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoCommit.MaxTime:15000}</maxTime>\n      <openSearcher>false</openSearcher>\n    </autoCommit>\n\n    <!-- softAutoCommit is like autoCommit except it causes a\n         'soft' commit which only ensures that changes are visible\n         but does not ensure that data is synced to disk.  
This is\n         faster and more near-realtime friendly than a hard commit.\n      -->\n\n    <autoSoftCommit>\n      <maxDocs>${solr.autoSoftCommit.MaxDocs:-1}</maxDocs>\n      <maxTime>${solr.autoSoftCommit.MaxTime:5000}</maxTime>\n    </autoSoftCommit>\n\n    <!-- Update Related Event Listeners\n\n         Various IndexWriter related events can trigger Listeners to\n         take actions.\n\n         postCommit - fired after every commit or optimize command\n         postOptimize - fired after every optimize command\n      -->\n    <!-- The RunExecutableListener executes an external command from a\n         hook such as postCommit or postOptimize.\n\n         exe - the name of the executable to run\n         dir - dir to use as the current working directory. (default=\".\")\n         wait - the calling thread waits until the executable returns.\n                (default=\"true\")\n         args - the arguments to pass to the program.  (default is none)\n         env - environment variables to set.  (default is none)\n      -->\n    <!-- This example shows how RunExecutableListener could be used\n         with the script based replication...\n         http://wiki.apache.org/solr/CollectionDistribution\n      -->\n    <!--\n       <listener event=\"postCommit\" class=\"solr.RunExecutableListener\">\n         <str name=\"exe\">solr/bin/snapshooter</str>\n         <str name=\"dir\">.</str>\n         <bool name=\"wait\">true</bool>\n         <arr name=\"args\"> <str>arg1</str> <str>arg2</str> </arr>\n         <arr name=\"env\"> <str>MYVAR=val1</str> </arr>\n       </listener>\n      -->\n\n  </updateHandler>\n\n  <!-- IndexReaderFactory\n\n       Use the following format to specify a custom IndexReaderFactory,\n       which allows for alternate IndexReader implementations.\n\n       ** Experimental Feature **\n\n       Please note - Using a custom IndexReaderFactory may prevent\n       certain other features from working. 
The API to\n       IndexReaderFactory may change without warning or may even be\n       removed from future releases if the problems cannot be\n       resolved.\n\n\n       ** Features that may not work with custom IndexReaderFactory **\n\n       The ReplicationHandler assumes a disk-resident index. Using a\n       custom IndexReader implementation may cause incompatibility\n       with ReplicationHandler and may cause replication to not work\n       correctly. See SOLR-1366 for details.\n\n    -->\n  <!--\n  <indexReaderFactory name=\"IndexReaderFactory\" class=\"package.class\">\n    <str name=\"someArg\">Some Value</str>\n  </indexReaderFactory >\n  -->\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n       Query section - these settings control query time things like caches\n       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <query>\n    <!-- Let the config generator easily inject additional stuff. -->\n    &query;\n\n    <!-- Query Related Event Listeners\n\n         Various IndexSearcher related events can trigger Listeners to\n         take actions.\n\n         newSearcher - fired whenever a new searcher is being prepared\n         and there is a current searcher handling requests (aka\n         registered).  
It can be used to prime certain caches to\n         prevent long request times for certain requests.\n\n         firstSearcher - fired whenever a new searcher is being\n         prepared but there is no current registered searcher to handle\n         requests or to gain autowarming data from.\n\n\n      -->\n    <!-- QuerySenderListener takes an array of NamedList and executes a\n         local query request for each NamedList in sequence.\n      -->\n    <listener event=\"newSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <!--\n           <lst><str name=\"q\">solr</str><str name=\"sort\">price asc</str></lst>\n           <lst><str name=\"q\">rocks</str><str name=\"sort\">weight asc</str></lst>\n          -->\n      </arr>\n    </listener>\n    <listener event=\"firstSearcher\" class=\"solr.QuerySenderListener\">\n      <arr name=\"queries\">\n        <lst>\n          <str name=\"q\">static firstSearcher warming in solrconfig.xml</str>\n        </lst>\n      </arr>\n    </listener>\n\n    <!-- Use Cold Searcher\n\n         If a search request comes in and there is no current\n         registered searcher, then immediately register the still\n         warming searcher and use it.  If \"false\" then all requests\n         will block until the first searcher is done warming.\n      -->\n    <useColdSearcher>false</useColdSearcher>\n\n  </query>\n\n  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n     Circuit Breaker Section - This section consists of configurations for\n     circuit breakers\n     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->\n  <!-- Circuit breakers are designed to allow stability and predictable query\n     execution. 
They prevent operations that can take down the node and cause\n     noisy neighbour issues.\n\n     The CircuitBreakerManager is the default manager for all circuit breakers.\n     The enabled flag here controls the activation/deactivation of all circuit\n     breakers specified within.\n  -->\n  <circuitBreaker class=\"solr.CircuitBreakerManager\" enabled=\"true\">\n    <!-- Memory Circuit Breaker\n\n     Specific configuration for max JVM heap usage circuit breaker. This configuration defines\n     whether the circuit breaker is enabled and the threshold percentage of maximum heap allocated\n     beyond which queries will be rejected until the current JVM usage goes below the threshold.\n     The valid value for this range is 50-95.\n\n     Consider a scenario where the max heap allocated is 4 GB and memThreshold is defined as 75.\n     Threshold JVM usage will be 4 * 0.75 = 3 GB. It's generally a good idea to keep this value\n     between 75 - 80% of maximum heap allocated.\n\n     If, at any point, the current JVM heap usage goes above 3 GB, queries will be rejected until\n     the heap usage goes below 3 GB again. If you see queries getting rejected with a 503 error code,\n     check for \"Circuit Breakers tripped\" in logs and the corresponding error message should tell\n     you what transpired (if the failure was caused by tripped circuit breakers).\n    -->\n    <!--\n    <str name=\"memEnabled\">true</str>\n    <str name=\"memThreshold\">75</str>\n    -->\n\n    <!-- CPU Circuit Breaker Configuration\n\n     Specific configuration for CPU utilization based circuit breaker. 
This configuration defines\n     whether the circuit breaker is enabled and the average load over the last minute at which the\n     circuit breaker should start rejecting queries.\n    -->\n    <!--\n    <str name=\"cpuEnabled\">true</str>\n    <str name=\"cpuThreshold\">75</str>\n    -->\n  </circuitBreaker>\n\n  <!-- Request Dispatcher\n\n       This section contains instructions for how the SolrDispatchFilter\n       should behave when processing requests for this SolrCore.\n    -->\n  <requestDispatcher>\n    <!-- Request Parsing\n\n         These settings indicate how Solr Requests may be parsed, and\n         what restrictions may be placed on the ContentStreams from\n         those requests\n\n         enableRemoteStreaming - enables use of the stream.file\n         and stream.url parameters for specifying remote streams.\n\n         enableStreamBody - This attribute controls whether streaming\n         content from the HTTP parameter stream.body is allowed.\n\n         multipartUploadLimitInKB - specifies the max size (in KiB) of\n         Multipart File Uploads that Solr will allow in a Request.\n\n         formdataUploadLimitInKB - specifies the max size (in KiB) of\n         form data (application/x-www-form-urlencoded) sent via\n         POST. You can use POST to pass request parameters not\n         fitting into the URL.\n\n         addHttpRequestToContext - if set to true, it will instruct\n         the requestParsers to include the original HttpServletRequest\n         object in the context map of the SolrQueryRequest under the\n         key \"httpRequest\". 
It will not be used by any of the existing\n         Solr components, but may be useful when developing custom\n         plugins.\n\n         *** WARNING ***\n         Before enabling remote streaming, you should make sure your\n         system has authentication enabled.\n\n    <requestParsers enableRemoteStreaming=\"true\"\n                    enableStreamBody=\"true\"\n                    multipartUploadLimitInKB=\"-1\"\n                    formdataUploadLimitInKB=\"-1\"\n                    addHttpRequestToContext=\"false\"/>\n      -->\n    <!-- Let the config generator easily inject additional stuff. -->\n    &requestdispatcher;\n\n  </requestDispatcher>\n\n  <!-- Search Components\n\n       Search components are registered to SolrCore and used by\n       instances of SearchHandler (which can access them by name)\n\n       By default, the following components are available:\n\n       <searchComponent name=\"query\"     class=\"solr.QueryComponent\" />\n       <searchComponent name=\"facet\"     class=\"solr.FacetComponent\" />\n       <searchComponent name=\"mlt\"       class=\"solr.MoreLikeThisComponent\" />\n       <searchComponent name=\"highlight\" class=\"solr.HighlightComponent\" />\n       <searchComponent name=\"stats\"     class=\"solr.StatsComponent\" />\n       <searchComponent name=\"debug\"     class=\"solr.DebugComponent\" />\n\n       Default configuration in a requestHandler would look like:\n\n       <arr name=\"components\">\n         <str>query</str>\n         <str>facet</str>\n         <str>mlt</str>\n         <str>highlight</str>\n         <str>stats</str>\n         <str>debug</str>\n       </arr>\n\n       If you register a searchComponent to one of the standard names,\n       that will be used instead of the default.\n\n       To insert components before or after the 'standard' components, use:\n\n       <arr name=\"first-components\">\n         <str>myFirstComponentName</str>\n       </arr>\n\n       <arr name=\"last-components\">\n    
     <str>myLastComponentName</str>\n       </arr>\n\n       NOTE: The component registered with the name \"debug\" will\n       always be executed after the \"last-components\"\n\n     -->\n\n  <!-- Following is a dynamic way to include other components or any customized solrconfig.xml stuff, added by other contrib modules -->\n  &extra;\n\n  <!-- Highlighting Component\n\n       http://wiki.apache.org/solr/HighlightingParameters\n    -->\n  <searchComponent class=\"solr.HighlightComponent\" name=\"highlight\">\n    <highlighting>\n      <!-- Configure the standard fragmenter -->\n      <!-- This could most likely be commented out in the \"default\" case -->\n      <fragmenter name=\"gap\"\n                  default=\"true\"\n                  class=\"solr.highlight.GapFragmenter\">\n        <lst name=\"defaults\">\n          <int name=\"hl.fragsize\">100</int>\n        </lst>\n      </fragmenter>\n\n      <!-- A regular-expression-based fragmenter\n           (for sentence extraction)\n        -->\n      <fragmenter name=\"regex\"\n                  class=\"solr.highlight.RegexFragmenter\">\n        <lst name=\"defaults\">\n          <!-- slightly smaller fragsizes work better because of slop -->\n          <int name=\"hl.fragsize\">70</int>\n          <!-- allow 50% slop on fragment sizes -->\n          <float name=\"hl.regex.slop\">0.5</float>\n          <!-- a basic sentence pattern -->\n          <str name=\"hl.regex.pattern\">[-\\w ,/\\n\\&quot;&apos;]{20,200}</str>\n        </lst>\n      </fragmenter>\n\n      <!-- Configure the standard formatter -->\n      <formatter name=\"html\"\n                 default=\"true\"\n                 class=\"solr.highlight.HtmlFormatter\">\n        <lst name=\"defaults\">\n          <str name=\"hl.simple.pre\"><![CDATA[<em>]]></str>\n          <str name=\"hl.simple.post\"><![CDATA[</em>]]></str>\n        </lst>\n      </formatter>\n\n      <!-- Configure the standard encoder -->\n      <encoder name=\"html\"\n              
 class=\"solr.highlight.HtmlEncoder\" />\n\n      <!-- Configure the standard fragListBuilder -->\n      <fragListBuilder name=\"simple\"\n                       class=\"solr.highlight.SimpleFragListBuilder\"/>\n\n      <!-- Configure the single fragListBuilder -->\n      <fragListBuilder name=\"single\"\n                       class=\"solr.highlight.SingleFragListBuilder\"/>\n\n      <!-- Configure the weighted fragListBuilder -->\n      <fragListBuilder name=\"weighted\"\n                       default=\"true\"\n                       class=\"solr.highlight.WeightedFragListBuilder\"/>\n\n      <!-- default tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"default\"\n                        default=\"true\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <!--\n        <lst name=\"defaults\">\n          <str name=\"hl.multiValuedSeparatorChar\">/</str>\n        </lst>\n        -->\n      </fragmentsBuilder>\n\n      <!-- multi-colored tag FragmentsBuilder -->\n      <fragmentsBuilder name=\"colored\"\n                        class=\"solr.highlight.ScoreOrderFragmentsBuilder\">\n        <lst name=\"defaults\">\n          <str name=\"hl.tag.pre\"><![CDATA[\n               <b style=\"background:yellow\">,<b style=\"background:lawngreen\">,\n               <b style=\"background:aquamarine\">,<b style=\"background:magenta\">,\n               <b style=\"background:palegreen\">,<b style=\"background:coral\">,\n               <b style=\"background:wheat\">,<b style=\"background:khaki\">,\n               <b style=\"background:lime\">,<b style=\"background:deepskyblue\">]]></str>\n          <str name=\"hl.tag.post\"><![CDATA[</b>]]></str>\n        </lst>\n      </fragmentsBuilder>\n\n      <boundaryScanner name=\"default\"\n                       default=\"true\"\n                       class=\"solr.highlight.SimpleBoundaryScanner\">\n        <lst name=\"defaults\">\n          <str name=\"hl.bs.maxScan\">10</str>\n          <str name=\"hl.bs.chars\">.,!? &#9;&#10;&#13;</str>\n        </lst>\n      </boundaryScanner>\n\n      <boundaryScanner name=\"breakIterator\"\n                       class=\"solr.highlight.BreakIteratorBoundaryScanner\">\n        <lst name=\"defaults\">\n          <!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->\n          <str name=\"hl.bs.type\">WORD</str>\n          <!-- language and country are used when constructing Locale object.  -->\n          <!-- And the Locale object will be used when getting instance of BreakIterator -->\n          <str name=\"hl.bs.language\">en</str>\n          <str name=\"hl.bs.country\">US</str>\n        </lst>\n      </boundaryScanner>\n    </highlighting>\n  </searchComponent>\n\n  <!-- Update Processors\n\n       Chains of Update Processor Factories for dealing with Update\n       Requests can be declared, and then used by name in Update\n       Request Processors\n\n       http://wiki.apache.org/solr/UpdateRequestProcessor\n\n    -->\n  <!-- Deduplication\n\n       An example dedup update processor that creates the \"id\" field\n       on the fly based on the hash code of some other fields.  
This\n       example has overwriteDupes set to false since we are using the\n       id field as the signatureField and Solr will maintain\n       uniqueness based on that anyway.\n\n    -->\n  <!--\n     <updateRequestProcessorChain name=\"dedupe\">\n       <processor class=\"solr.processor.SignatureUpdateProcessorFactory\">\n         <bool name=\"enabled\">true</bool>\n         <str name=\"signatureField\">id</str>\n         <bool name=\"overwriteDupes\">false</bool>\n         <str name=\"fields\">name,features,cat</str>\n         <str name=\"signatureClass\">solr.processor.Lookup3Signature</str>\n       </processor>\n       <processor class=\"solr.LogUpdateProcessorFactory\" />\n       <processor class=\"solr.RunUpdateProcessorFactory\" />\n     </updateRequestProcessorChain>\n    -->\n\n  <!-- Language identification\n\n       This example update chain identifies the language of the incoming\n       documents using the langid contrib. The detected language is\n       written to field language_s. 
No field name mapping is done.\n       The fields used for detection are text, title, subject and description,\n       making this example suitable for detecting languages from full-text\n       rich documents injected via ExtractingRequestHandler.\n       See more about langId at http://wiki.apache.org/solr/LanguageDetection\n    -->\n  <!--\n   <updateRequestProcessorChain name=\"langid\">\n     <processor class=\"org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory\">\n       <str name=\"langid.fl\">text,title,subject,description</str>\n       <str name=\"langid.langField\">language_s</str>\n       <str name=\"langid.fallback\">en</str>\n     </processor>\n     <processor class=\"solr.LogUpdateProcessorFactory\" />\n     <processor class=\"solr.RunUpdateProcessorFactory\" />\n   </updateRequestProcessorChain>\n  -->\n\n  <!-- Script update processor\n\n    This example hooks in an update processor implemented using JavaScript.\n\n    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor\n  -->\n  <!--\n    <updateRequestProcessorChain name=\"script\">\n      <processor class=\"solr.StatelessScriptUpdateProcessorFactory\">\n        <str name=\"script\">update-script.js</str>\n        <lst name=\"params\">\n          <str name=\"config_param\">example config parameter</str>\n        </lst>\n      </processor>\n      <processor class=\"solr.RunUpdateProcessorFactory\" />\n    </updateRequestProcessorChain>\n  -->\n\n  <!-- Response Writers\n\n       http://wiki.apache.org/solr/QueryResponseWriter\n\n       Request responses will be written using the writer specified by\n       the 'wt' request parameter matching the name of a registered\n       writer.\n\n       The \"default\" writer is the default and will be used if 'wt' is\n       not specified in the request.\n    -->\n  <!-- The following response writers are implicitly configured unless\n       overridden...\n    -->\n  <!--\n     
<queryResponseWriter name=\"xml\"\n                          default=\"true\"\n                          class=\"solr.XMLResponseWriter\" />\n     <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\"/>\n     <queryResponseWriter name=\"python\" class=\"solr.PythonResponseWriter\"/>\n     <queryResponseWriter name=\"ruby\" class=\"solr.RubyResponseWriter\"/>\n     <queryResponseWriter name=\"php\" class=\"solr.PHPResponseWriter\"/>\n     <queryResponseWriter name=\"phps\" class=\"solr.PHPSerializedResponseWriter\"/>\n     <queryResponseWriter name=\"csv\" class=\"solr.CSVResponseWriter\"/>\n     <queryResponseWriter name=\"schema.xml\" class=\"solr.SchemaXmlResponseWriter\"/>\n    -->\n\n  <queryResponseWriter name=\"json\" class=\"solr.JSONResponseWriter\">\n  </queryResponseWriter>\n\n  <!--\n     Custom response writers can be declared as needed...\n    -->\n    <!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not\n         loaded (causing an error if contrib/velocity has not been built fully) -->\n    <!--\n    <queryResponseWriter name=\"velocity\" class=\"solr.VelocityResponseWriter\" startup=\"lazy\">\n      <str name=\"template.base.dir\">${velocity.template.base.dir:}</str>\n    </queryResponseWriter>\n    -->\n\n  <!-- Query Parsers\n\n       https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html\n\n       Multiple QParserPlugins can be registered by name, and then\n       used in either the \"defType\" param for the QueryComponent (used\n       by SearchHandler) or in LocalParams\n    -->\n  <!-- example of registering a query parser -->\n  <!--\n     <queryParser name=\"myparser\" class=\"com.mycompany.MyQParserPlugin\"/>\n    -->\n\n  <!-- Function Parsers\n\n       http://wiki.apache.org/solr/FunctionQuery\n\n       Multiple ValueSourceParsers can be registered by name, and then\n       used as function names when using the \"func\" QParser.\n    -->\n  <!-- example of 
registering a custom function parser  -->\n  <!--\n     <valueSourceParser name=\"myfunc\"\n                        class=\"com.mycompany.MyValueSourceParser\" />\n    -->\n\n\n  <!-- Document Transformers\n       http://wiki.apache.org/solr/DocTransformers\n    -->\n  <!--\n     Could be something like:\n     <transformer name=\"db\" class=\"com.mycompany.LoadFromDatabaseTransformer\" >\n       <int name=\"connection\">jdbc://....</int>\n     </transformer>\n\n     To add a constant value to all docs, use:\n     <transformer name=\"mytrans2\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <int name=\"value\">5</int>\n     </transformer>\n\n     If you want the user to still be able to change it with _value:something_ use this:\n     <transformer name=\"mytrans3\" class=\"org.apache.solr.response.transform.ValueAugmenterFactory\" >\n       <double name=\"defaultValue\">5</double>\n     </transformer>\n\n      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  The\n      EditorialMarkerFactory will do exactly that:\n     <transformer name=\"qecBooster\" class=\"org.apache.solr.response.transform.EditorialMarkerFactory\" />\n    -->\n\n</config>\n"
  },
  {
    "path": "aegir/tools/system/conf/solr/search_api_solr/solr9_drupal10/solrcore.properties",
    "content": "# Defines Solr properties for this specific core.\nsolr.replication.master=false\nsolr.replication.slave=false\nsolr.replication.pollInterval=00:00:60\nsolr.replication.masterUrl=http://localhost:9099/solr\nsolr.replication.confFiles=schema.xml,schema_extra_types.xml,schema_extra_fields.xml,elevate.xml\nsolr.mlt.timeAllowed=2000\nsolr.selectSearchHandler.timeAllowed=-1\n# don't autoCommit after x docs\nsolr.autoCommit.MaxDocs=-1\n# autoCommit after 15 seconds\nsolr.autoCommit.MaxTime=15000\n# don't autoSoftCommit after x docs\nsolr.autoSoftCommit.MaxDocs=-1\n# don't autoSoftCommit after x seconds\nsolr.autoSoftCommit.MaxTime=5000\nsolr.install.dir=/opt/solr9\n"
  },
  {
    "path": "aegir/tools/system/conf/ssl_proxy.conf",
    "content": "###\n### Secure HTTPS proxy (START) _oct_uid _oct_mail\n###\nserver {\n  listen                       _dedicated_ip:443 ssl;\n  listen                       _dedicated_ip:443 quic;\n  http2                        on;\n  http3                        on;\n  http3_hq                     on;\n  server_name                  _dedicated_sn;\n  ssl_dhparam                  /etc/ssl/private/_domain_name.dhp;\n  ssl_certificate              /etc/ssl/private/_domain_name.crt;\n  ssl_certificate_key          /etc/ssl/private/_domain_name.key;\n  ssl_session_timeout          5m;\n  ssl_prefer_server_ciphers    on;\n  ssl_conf_command Options     KTLS;\n  keepalive_timeout            70;\n  access_log                   on;\n  log_not_found                off;\n\n  ###\n  ### Allow access to SQL Adminer css.\n  ###\n  location ^~ /sqladmin/adminer.css {\n    alias /var/www/adminer/adminer.css;\n    default_type text/css;\n    try_files $uri =404;\n  }\n\n  ###\n  ### Allow access to SQL Adminer.\n  ###\n  location ^~ /sqladmin {\n    location ~* ^/sqladmin {\n      alias /var/www/adminer;\n      set_real_ip_from 127.0.0.1;\n      set_real_ip_from _target_ip;\n      real_ip_header X-Forwarded-For;\n      real_ip_recursive on;\n      include /var/aegir/config/includes/ip_access/sqladmin*;\n      index  index.php;\n      try_files $uri /sqladmin/index.php?$query_string;\n    }\n  }\n\n  ###\n  ### Send all non-static requests to php-fpm.\n  ###\n  location = /sqladmin/index.php {\n    set_real_ip_from 127.0.0.1;\n    set_real_ip_from _target_ip;\n    real_ip_header X-Forwarded-For;\n    real_ip_recursive on;\n    include /var/aegir/config/includes/ip_access/sqladmin*;\n    alias /var/www/adminer/index.php;\n    include fastcgi_params;\n    # Block https://httpoxy.org/ attacks.\n    fastcgi_param  HTTP_PROXY \"\";\n    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;\n    fastcgi_param  SCRIPT_URL /sqladmin/;\n    fastcgi_param  SCRIPT_URI 
$scheme://$host/sqladmin/;\n    fastcgi_param  REDIRECT_STATUS 200;\n    fastcgi_index  index.php;\n    fastcgi_intercept_errors on;\n    try_files $uri =404; ### check for existence of php file first\n    fastcgi_pass 127.0.0.1:9000;\n  }\n\n  location / {\n    proxy_pass                 http://_target_ip;\n    proxy_redirect             off;\n    gzip_vary                  off;\n    proxy_buffering            off;\n    proxy_set_header           Host              $host;\n    proxy_set_header           X-Real-IP         $remote_addr;\n    proxy_set_header           X-Forwarded-By    $server_addr:$server_port;\n    proxy_set_header           X-Forwarded-For   $proxy_add_x_forwarded_for;\n    proxy_set_header           X-Local-Proxy     $scheme;\n    proxy_set_header           X-Forwarded-Proto $scheme;\n    proxy_pass_header          Set-Cookie;\n    proxy_pass_header          Cookie;\n    proxy_pass_header          X-Accel-Expires;\n    proxy_pass_header          X-Accel-Redirect;\n    proxy_pass_header          X-This-Proto;\n    proxy_connect_timeout      180;\n    proxy_send_timeout         180;\n    proxy_read_timeout         180;\n  }\n}\n###\n### Secure HTTPS proxy (END)\n###\n"
  },
  {
    "path": "aegir/tools/system/cron/crontabs/root",
    "content": "# DO NOT EDIT THIS FILE - edit the master and reinstall.\n# (/tmp/hm.cronBTAzqF installed on Thu Jul 22 19:24:53 2010)\n# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)\n* * * * *   bash /var/xdrago/second.sh        >/dev/null 2>&1\n* * * * *   bash /var/xdrago/minute.sh        >/dev/null 2>&1\n* * * * *   bash /var/xdrago/guest-fire.sh    >/dev/null 2>&1\n01 5 * * *  bash /var/xdrago/guest-water.sh   >/dev/null 2>&1\n*/2 * * * * bash /var/xdrago/ip_access.sh     >/dev/null 2>&1\n*/5 * * * * bash /var/xdrago/clear.sh         >/dev/null 2>&1\n* * * * *   /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/runner.sh                >/dev/null 2>&1\n*/3 * * * * /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/manage_ltd_users.sh      >/dev/null 2>&1\n*/4 * * * * /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/manage_solr_config.sh    >/dev/null 2>&1\n01 * * * *  /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/purge_binlogs.sh         >/dev/null 2>&1\n30 * * * *  /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/mysql_cleanup.sh         >/dev/null 2>&1\n15 1 * * *  /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/mysql_backup.sh          >/dev/null 2>&1\n15 2 * * *  /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/mysql_cluster_backup.sh  >/dev/null 2>&1\n01 3 * * *  /usr/bin/nice -n0 /usr/bin/ionice -c2 -n7 bash /var/xdrago/graceful.sh              >/dev/null 2>&1\n15 3 * * *  /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /opt/local/bin/backboa backup        >/dev/null 2>&1\n15 4 * * *  /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /var/xdrago/daily.sh                 >/dev/null 2>&1\n15 5 * * *  /usr/bin/nice -n5 /usr/bin/ionice -c2 -n7 bash /opt/local/bin/duobackboa backup     >/dev/null 2>&1\n"
  },
  {
    "path": "aegir/tools/system/daily.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n}\n_check_root\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n_WEBG=www-data\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_cGet=\"config-get user.settings\"\n_cSet=\"config-set user.settings\"\n_vGet=\"variable-get\"\n_vSet=\"variable-set --always-set\"\n\n###-------------SYSTEM-----------------###\n\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  
_INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n_enable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ \"$1\" != \"${_HM_U}.ftp\" ]; then\n      chattr +i /home/$1/\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr +i /home/$1/platforms/\n        chattr +i /home/$1/platforms/* &> /dev/null\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr +i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr +i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr +i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr +i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n_disable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! 
-z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ \"$1\" != \"${_HM_U}.ftp\" ]; then\n      if [ -d \"/home/$1/\" ]; then\n        chattr -i /home/$1/\n      fi\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr -i /home/$1/platforms/\n        chattr -i /home/$1/platforms/* &> /dev/null\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr -i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr -i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr -i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr -i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n_run_drush8_cmd() {\n  if [ -e \"/root/.debug_daily.info\" ]; then\n    _nOw=$(date +%y%m%d-%H%M%S)\n    echo \"${_nOw} ${_HM_U} running drush8 @${_Dom} $1\"\n  fi\n  if [ -x \"/opt/php74/bin/php\" ]; then\n    su -s /bin/bash - ${_HM_U} -c \"/opt/php74/bin/php /usr/bin/drush @${_Dom} $1\" &> /dev/null\n  else\n    su -s /bin/bash - ${_HM_U} -c \"drush8 @${_Dom} $1\" &> /dev/null\n  fi\n  wait\n}\n\n_run_drush8_hmr_cmd() {\n  if [ -e \"/root/.debug_daily.info\" ]; then\n    _nOw=$(date +%y%m%d-%H%M%S)\n    echo \"${_nOw} ${_HM_U} running drush8 @hostmaster $1\"\n  fi\n  su -s /bin/bash - ${_HM_U} -c \"drush8 @hostmaster $1\" &> /dev/null\n  wait\n}\n\n_run_drush8_hmr_master_cmd() {\n  if [ -e \"/root/.debug_daily.info\" ]; then\n    _nOw=$(date +%y%m%d-%H%M%S)\n    echo \"${_nOw} aegir running drush8 @hostmaster $1\"\n  fi\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster $1\" &> /dev/null\n  wait\n}\n\n_run_drush8_nosilent_cmd() {\n  if [ -e \"/root/.debug_daily.info\" ]; then\n    _nOw=$(date +%y%m%d-%H%M%S)\n    echo \"${_nOw} ${_HM_U} running drush8 @${_Dom} $1\"\n  fi\n  if [ -x \"/opt/php74/bin/php\" ]; then\n    su -s /bin/bash - ${_HM_U} -c \"/opt/php74/bin/php /usr/bin/drush @${_Dom} $1\"\n  else\n    su -s /bin/bash - ${_HM_U} -c \"drush8 @${_Dom} $1\"\n  fi\n 
 wait\n}\n\n_check_if_required_with_drush8() {\n  _REQ=YES\n  _REI_TEST=$(_run_drush8_nosilent_cmd \"pmi $1 --fields=required_by\" 2>&1)\n  _REL_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by\" 2>&1)\n  if [[ \"${_REL_TEST}\" =~ \"was not found\" ]]; then\n    _REQ=NULL\n    echo \"_REQ for $1 is ${_REQ} in ${_Dom} == null == via ${_REL_TEST}\"\n  else\n    echo \"CTRL _REL_TEST _REQ for $1 is ${_REQ} in ${_Dom} == init == via ${_REL_TEST}\"\n    _REN_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*:.*none\" 2>&1)\n    if [[ \"${_REN_TEST}\" =~ \"Required by\" ]]; then\n      _REQ=NO\n      echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 0 == via ${_REN_TEST}\"\n    else\n      echo \"CTRL _REN_TEST _REQ for $1 is ${_REQ} in ${_Dom} == 1 == via ${_REN_TEST}\"\n      _REM_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*minimal\" 2>&1)\n      if [[ \"${_REM_TEST}\" =~ \"Required by\" ]]; then\n        _REQ=NO\n        echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 2 == via ${_REM_TEST}\"\n      fi\n      _RES_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*standard\" 2>&1)\n      if [[ \"${_RES_TEST}\" =~ \"Required by\" ]]; then\n        _REQ=NO\n        echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 3 == via ${_RES_TEST}\"\n      fi\n      _RET_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*testing\" 2>&1)\n      if [[ \"${_RET_TEST}\" =~ \"Required by\" ]]; then\n        _REQ=NO\n        echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 4 == via ${_RET_TEST}\"\n      fi\n      _REH_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*hacked\" 2>&1)\n      if [[ \"${_REH_TEST}\" =~ \"Required by\" ]]; then\n        _REQ=NO\n        echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 5 == via ${_REH_TEST}\"\n      fi\n      _RED_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*devel\" 2>&1)\n      if [[ \"${_RED_TEST}\" =~ \"Required by\" ]]; then\n        _REQ=NO\n        echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 6 == via ${_RED_TEST}\"\n      fi\n      _REW_TEST=$(echo 
\"${_REI_TEST}\" | grep \"Required by.*watchdog_live\" 2>&1)\n      if [[ \"${_REW_TEST}\" =~ \"Required by\" ]]; then\n        _REQ=NO\n        \"echo _REQ for $1 is ${_REQ} in ${_Dom} == 7 == via ${_REW_TEST}\"\n      fi\n    fi\n    _Profile=$(_run_drush8_nosilent_cmd \"${_vGet} ^install_profile$\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $1}' \\\n      | sed \"s/['\\\"]//g\" \\\n      | tr -d \"\\n\" 2>&1)\n    _Profile=${_Profile//[^a-z_]/}\n    echo \"_Profile is == ${_Profile} ==\"\n    if [ ! -z \"${_Profile}\" ]; then\n      _REP_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*:.*${_Profile}\" 2>&1)\n      if [[ \"${_REP_TEST}\" =~ \"Required by\" ]]; then\n        _REQ=NO\n        echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 8 == via ${_REP_TEST}\"\n      else\n        echo \"CTRL _REP_TEST _REQ for $1 is ${_REQ} in ${_Dom} == 9 == via ${_REP_TEST}\"\n      fi\n    fi\n    _REA_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*apps\" 2>&1)\n    if [[ \"${_REA_TEST}\" =~ \"Required by\" ]]; then\n      _REQ=YES\n      echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 10 == via ${_REA_TEST}\"\n    fi\n    _REF_TEST=$(echo \"${_REI_TEST}\" | grep \"Required by.*features\" 2>&1)\n    if [[ \"${_REF_TEST}\" =~ \"Required by\" ]]; then\n      _REQ=YES\n      echo \"_REQ for $1 is ${_REQ} in ${_Dom} == 11 == via ${_REF_TEST}\"\n    fi\n  fi\n}\n\n_check_if_skip() {\n  for s in ${_MODULES_SKIP}; do\n    if [ ! -z \"$1\" ] && [ \"$s\" = \"$1\" ]; then\n      _SKIP=YES\n      #echo $1 is whitelisted and will not be disabled in ${_Dom}\n    fi\n  done\n}\n\n_check_if_force() {\n  for s in ${_MODULES_FORCE}; do\n    if [ ! -z \"$1\" ] && [ \"$s\" = \"$1\" ]; then\n      _FORCE=YES\n      echo $1 is blacklisted and will be forcefully disabled in ${_Dom}\n    fi\n  done\n}\n\n_disable_modules_with_drush8() {\n  for m in $1; do\n    _SKIP=NO\n    _FORCE=NO\n    if [ ! -z \"${_MODULES_SKIP}\" ]; then\n      _check_if_skip \"$m\"\n    fi\n    if [ ! 
-z \"${_MODULES_FORCE}\" ]; then\n      _check_if_force \"$m\"\n    fi\n    if [ \"${_SKIP}\" = \"NO\" ]; then\n      _MODULE_T=$(_run_drush8_nosilent_cmd \"pml --status=enabled \\\n        --type=module | grep \\($m\\)\" 2>&1)\n      if [[ \"${_MODULE_T}\" =~ \"($m)\" ]]; then\n        if [ \"${_FORCE}\" = \"NO\" ]; then\n          _check_if_required_with_drush8 \"$m\"\n        else\n          echo \"$m dependencies not checked in ${_Dom} action forced\"\n          _REQ=FCE\n        fi\n        if [ \"${_REQ}\" = \"FCE\" ]; then\n          _run_drush8_cmd \"dis $m -y\"\n          echo \"$m FCE disabled in ${_Dom}\"\n        elif [ \"${_REQ}\" = \"NO\" ]; then\n          _run_drush8_cmd \"dis $m -y\"\n          echo \"$m disabled in ${_Dom}\"\n        elif [ \"${_REQ}\" = \"NULL\" ]; then\n          echo \"$m is not used in ${_Dom}\"\n        else\n          echo \"$m is required and can not be disabled in ${_Dom}\"\n        fi\n      fi\n    fi\n  done\n}\n\n_enable_modules_with_drush8() {\n  for m in $1; do\n    _MODULE_T=$(_run_drush8_nosilent_cmd \"pml --status=enabled \\\n      --type=module | grep \\($m\\)\" 2>&1)\n    if [[ \"${_MODULE_T}\" =~ \"($m)\" ]]; then\n      _DO_NOTHING=YES\n    else\n      _run_drush8_cmd \"en $m -y\"\n      echo \"$m enabled in ${_Dom}\"\n    fi\n  done\n}\n\n_sync_user_register_protection_ini_vars() {\n  _IGNORE_USER_REGISTER_PROTECTION=NO\n  _ENABLE_STRICT_USER_REGISTER_PROTECTION=NO\n  if [ -e \"/data/conf/default.boa_platform_control.ini\" ] \\\n    && [ ! 
-e \"${_PLR_CTRL_F}\" ]; then\n    cp -af /data/conf/default.boa_platform_control.ini \\\n      ${_PLR_CTRL_F} &> /dev/null\n    chown ${_HM_U}:users ${_PLR_CTRL_F} &> /dev/null\n    chmod 0664 ${_PLR_CTRL_F} &> /dev/null\n  fi\n  if [ -e \"${_PLR_CTRL_F}\" ]; then\n    _EN_URP_T_S=$(grep \"^enable_strict_user_register_protection = TRUE\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    _EN_URP_T=$(grep \"^enable_user_register_protection = TRUE\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_EN_URP_T_S}\" =~ \"enable_strict_user_register_protection = TRUE\" ]] \\\n      || [[ \"${_EN_URP_T}\" =~ \"enable_user_register_protection = TRUE\" ]]; then\n      _ENABLE_STRICT_USER_REGISTER_PROTECTION=YES\n    fi\n    _DIS_URP_T=$(grep \"^disable_user_register_protection = TRUE\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    _DIS_URP_T_I=$(grep \"^ignore_user_register_protection = TRUE\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_DIS_URP_T}\" =~ \"disable_user_register_protection = TRUE\" ]] \\\n      || [[ \"${_DIS_URP_T_I}\" =~ \"ignore_user_register_protection = TRUE\" ]]; then\n      _IGNORE_USER_REGISTER_PROTECTION=YES\n    fi\n  fi\n  if [ -e \"${_usEr}/static/control/enable_user_register_protection.info\" ]; then\n    mv -f ${_usEr}/static/control/enable_user_register_protection.info \\\n      ${_usEr}/static/control/enable_strict_user_register_protection.info\n  fi\n  if [ -e \"${_usEr}/static/control/disable_user_register_protection.info\" ]; then\n    mv -f ${_usEr}/static/control/disable_user_register_protection.info \\\n      ${_usEr}/static/control/ignore_user_register_protection.info\n  fi\n  if [ \"${_ENABLE_STRICT_USER_REGISTER_PROTECTION}\" = \"NO\" ] \\\n    && [ -e \"${_usEr}/static/control/enable_strict_user_register_protection.info\" ]; then\n    sed -i \"s/.*enable.*user_register_protection.*/enable_strict_user_register_protection = TRUE/g\" \\\n      ${_PLR_CTRL_F} &> /dev/null\n    wait\n    _ENABLE_STRICT_USER_REGISTER_PROTECTION=YES\n  fi\n  if [ 
\"${_ENABLE_STRICT_USER_REGISTER_PROTECTION}\" = \"YES\" ] \\\n    && [ -e \"${_usEr}/static/control/ignore_user_register_protection.info\" ]; then\n    sed -i \"s/.*enable.*user_register_protection.*/enable_strict_user_register_protection = FALSE/g\" \\\n      ${_PLR_CTRL_F} &> /dev/null\n    wait\n    _IGNORE_USER_REGISTER_PROTECTION=YES\n  fi\n  if [ -e \"/data/conf/default.boa_site_control.ini\" ] \\\n    && [ ! -e \"${_DIR_CTRL_F}\" ]; then\n    cp -af /data/conf/default.boa_site_control.ini ${_DIR_CTRL_F} &> /dev/null\n    chown ${_HM_U}:users ${_DIR_CTRL_F} &> /dev/null\n    chmod 0664 ${_DIR_CTRL_F} &> /dev/null\n  fi\n  if [ -e \"${_DIR_CTRL_F}\" ]; then\n    _DIS_URP_T=$(grep \"^disable_user_register_protection = TRUE\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    _DIS_URP_T_I=$(grep \"^ignore_user_register_protection = TRUE\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_DIS_URP_T}\" =~ \"disable_user_register_protection = TRUE\" ]] \\\n      || [[ \"${_DIS_URP_T_I}\" =~ \"ignore_user_register_protection = TRUE\" ]]; then\n      _IGNORE_USER_REGISTER_PROTECTION=YES\n    fi\n  fi\n  if [ -e \"${_usEr}/static/control/ignore_user_register_protection.info\" ]; then\n    _IGNORE_USER_REGISTER_PROTECTION=YES\n  fi\n}\n\n_fix_site_readonlymode() {\n  if [ -e \"${_usEr}/log/imported.pid\" ] \\\n    || [ -e \"${_usEr}/log/exported.pid\" ]; then\n    if [ -e \"${_Dir}/modules/readonlymode_fix.info\" ]; then\n      touch ${_usEr}/log/ctrl/site.${_Dom}.rom-fix.info\n      rm -f ${_Dir}/modules/readonlymode_fix.info\n    fi\n    if [ ! -e \"${_usEr}/log/ctrl/site.${_Dom}.rom-fix.info\" ]; then\n      _run_drush8_cmd \"${_vSet} site_readonly 0\"\n      touch ${_usEr}/log/ctrl/site.${_Dom}.rom-fix.info\n    fi\n  fi\n}\n\n_fix_user_register_protection_with_vSet() {\n  _sync_user_register_protection_ini_vars\n  if [ \"${_IGNORE_USER_REGISTER_PROTECTION}\" = \"NO\" ] \\\n    && [ ! 
-e \"${_Plr}/core\" ]; then\n    _Prm=$(_run_drush8_nosilent_cmd \"${_vGet} ^user_register$\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $1}' \\\n      | sed \"s/['\\\"]//g\" \\\n      | tr -d \"\\n\" 2>&1)\n    _Prm=${_Prm//[^0-2]/}\n    echo \"_Prm user_register for ${_Dom} is ${_Prm}\"\n    if [ \"${_ENABLE_STRICT_USER_REGISTER_PROTECTION}\" = \"YES\" ]; then\n      _run_drush8_cmd \"${_vSet} user_register 0\"\n      echo \"_Prm user_register for ${_Dom} set to 0\"\n    else\n      if [ \"${_Prm}\" = \"1\" ] || [ -z \"${_Prm}\" ]; then\n        _run_drush8_cmd \"${_vSet} user_register 2\"\n        echo \"_Prm user_register for ${_Dom} set to 2\"\n      fi\n      _run_drush8_cmd \"${_vSet} user_email_verification 1\"\n      echo \"_Prm user_email_verification for ${_Dom} set to 1\"\n    fi\n  fi\n  _fix_site_readonlymode\n}\n\n_fix_llms_txt() {\n  find ${_Dir}/files/llms.txt -mtime +6 -exec rm -f {} \\; &> /dev/null\n  if [ ! -e \"${_Dir}/files/llms.txt\" ] \\\n    && [ ! -e \"${_Plr}/profiles/hostmaster\" ]; then\n    curl -L --max-redirs 10 -k -s --retry 2 --retry-delay 5 \\\n      -A iCab \"http://${_Dom}/llms.txt?nocache=1&noredis=1\" \\\n      -o ${_Dir}/files/llms.txt\n    if [ -e \"${_Dir}/files/llms.txt\" ]; then\n      echo >> ${_Dir}/files/llms.txt\n    fi\n  fi\n  _VAR_IF_PRESENT=\n  if [ -f \"${_Dir}/files/llms.txt\" ]; then\n    _VAR_IF_PRESENT=$(grep \"##\" ${_Dir}/files/llms.txt 2>&1)\n  fi\n  if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"##\" ]]; then\n    rm -f ${_Dir}/files/llms.txt\n  else\n    chown ${_HM_U}:www-data ${_Dir}/files/llms.txt &> /dev/null\n    chmod 0664 ${_Dir}/files/llms.txt &> /dev/null\n    if [ -f \"${_Plr}/llms.txt\" ] || [ -L \"${_Plr}/llms.txt\" ]; then\n      rm -f ${_Plr}/llms.txt\n    fi\n  fi\n}\n\n_fix_robots_txt() {\n  find ${_Dir}/files/robots.txt -mtime +6 -exec rm -f {} \\; &> /dev/null\n  if [ ! -e \"${_Dir}/files/robots.txt\" ] \\\n    && [ ! 
-e \"${_Plr}/profiles/hostmaster\" ]; then\n    curl -L --max-redirs 10 -k -s --retry 2 --retry-delay 5 \\\n      -A iCab \"http://${_Dom}/robots.txt?nocache=1&noredis=1\" \\\n      -o ${_Dir}/files/robots.txt\n    if [ -e \"${_Dir}/files/robots.txt\" ]; then\n      echo >> ${_Dir}/files/robots.txt\n    fi\n  fi\n  _VAR_IF_PRESENT=\n  if [ -f \"${_Dir}/files/robots.txt\" ]; then\n    _VAR_IF_PRESENT=$(grep \"Disallow:\" ${_Dir}/files/robots.txt 2>&1)\n  fi\n  if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"Disallow:\" ]]; then\n    rm -f ${_Dir}/files/robots.txt\n  else\n    chown ${_HM_U}:www-data ${_Dir}/files/robots.txt &> /dev/null\n    chmod 0664 ${_Dir}/files/robots.txt &> /dev/null\n    if [ -f \"${_Plr}/robots.txt\" ] || [ -L \"${_Plr}/robots.txt\" ]; then\n      rm -f ${_Plr}/robots.txt\n    fi\n  fi\n}\n\n_fix_boost_cache() {\n  if [ -e \"${_Plr}/cache\" ]; then\n    rm -rf ${_Plr}/cache/*\n    rm -f ${_Plr}/cache/{.boost,.htaccess}\n  else\n    if [ -e \"${_Plr}/sites/all/drush/drushrc.php\" ]; then\n      mkdir -p ${_Plr}/cache\n    fi\n  fi\n  if [ -e \"${_Plr}/cache\" ]; then\n    chown ${_HM_U}:www-data ${_Plr}/cache &> /dev/null\n    chmod 02775 ${_Plr}/cache &> /dev/null\n  fi\n}\n\n_fix_o_contrib_symlink() {\n  if [ \"${_O_CONTRIB_SEVEN}\" != \"NO\" ]; then\n    symlinks -d ${_Plr}/modules &> /dev/null\n    if [ -e \"${_Plr}/web.config\" ] \\\n      && [ -e \"${_O_CONTRIB_SEVEN}\" ] \\\n      && [ ! -e \"${_Plr}/core\" ]; then\n      if [ ! -e \"${_Plr}/modules/o_contrib_seven\" ]; then\n        ln -sfn ${_O_CONTRIB_SEVEN} ${_Plr}/modules/o_contrib_seven &> /dev/null\n      fi\n    elif [ -e \"${_Plr}/core\" ] \\\n      && [ ! -e \"${_Plr}/core/themes/olivero\" ] \\\n      && [ ! -e \"${_Plr}/core/themes/stable9\" ] \\\n      && [ ! 
-e \"${_Plr}/core/modules/workspaces_ui\" ] \\\n      && [ -e \"${_O_CONTRIB_EIGHT}\" ]; then\n      if [ -e \"${_Plr}/modules/o_contrib_nine\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_nine_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_nine\n        rm -f ${_Plr}/modules/.o_contrib_nine_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_ten\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_ten_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_ten\n        rm -f ${_Plr}/modules/.o_contrib_ten_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_eleven\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_eleven_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_eleven\n        rm -f ${_Plr}/modules/.o_contrib_eleven_dont_use\n      fi\n      if [ ! -e \"${_Plr}/modules/o_contrib_eight\" ]; then\n        ln -sfn ${_O_CONTRIB_EIGHT} ${_Plr}/modules/o_contrib_eight &> /dev/null\n      fi\n    elif [ -e \"${_Plr}/core/themes/olivero\" ] \\\n      && [ -e \"${_Plr}/core/themes/classy\" ] \\\n      && [ ! -e \"${_Plr}/core/modules/workspaces_ui\" ] \\\n      && [ -e \"${_O_CONTRIB_NINE}\" ]; then\n      if [ -e \"${_Plr}/modules/o_contrib_eight\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_eight_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_eight\n        rm -f ${_Plr}/modules/.o_contrib_eight_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_ten\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_ten_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_ten\n        rm -f ${_Plr}/modules/.o_contrib_ten_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_eleven\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_eleven_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_eleven\n        rm -f ${_Plr}/modules/.o_contrib_eleven_dont_use\n      fi\n      if [ ! 
-e \"${_Plr}/modules/o_contrib_nine\" ]; then\n        ln -sfn ${_O_CONTRIB_NINE} ${_Plr}/modules/o_contrib_nine &> /dev/null\n      fi\n    elif [ -e \"${_Plr}/core/themes/olivero\" ] \\\n      && [ ! -e \"${_Plr}/core/themes/classy\" ] \\\n      && [ ! -e \"${_Plr}/core/modules/workspaces_ui\" ] \\\n      && [ -e \"${_O_CONTRIB_TEN}\" ]; then\n      if [ -e \"${_Plr}/modules/o_contrib_eight\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_eight_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_eight\n        rm -f ${_Plr}/modules/.o_contrib_eight_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_nine\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_nine_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_nine\n        rm -f ${_Plr}/modules/.o_contrib_nine_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_eleven\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_eleven_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_eleven\n        rm -f ${_Plr}/modules/.o_contrib_eleven_dont_use\n      fi\n      if [ ! -e \"${_Plr}/modules/o_contrib_ten\" ]; then\n        ln -sfn ${_O_CONTRIB_TEN} ${_Plr}/modules/o_contrib_ten &> /dev/null\n      fi\n    elif [ -e \"${_Plr}/core/themes/olivero\" ] \\\n      && [ ! 
-e \"${_Plr}/core/themes/classy\" ] \\\n      && [ -e \"${_Plr}/core/modules/workspaces_ui\" ] \\\n      && [ -e \"${_O_CONTRIB_ELEVEN}\" ]; then\n      if [ -e \"${_Plr}/modules/o_contrib_eight\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_eight_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_eight\n        rm -f ${_Plr}/modules/.o_contrib_eight_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_nine\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_nine_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_nine\n        rm -f ${_Plr}/modules/.o_contrib_nine_dont_use\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_ten\" ] \\\n        || [ -e \"${_Plr}/modules/.o_contrib_ten_dont_use\" ]; then\n        rm -f ${_Plr}/modules/o_contrib_ten\n        rm -f ${_Plr}/modules/.o_contrib_ten_dont_use\n      fi\n      if [ ! -e \"${_Plr}/modules/o_contrib_eleven\" ]; then\n        ln -sfn ${_O_CONTRIB_ELEVEN} ${_Plr}/modules/o_contrib_eleven &> /dev/null\n      fi\n    else\n      if [ -e \"${_Plr}/modules/watchdog\" ]; then\n        if [ -e \"${_Plr}/modules/o_contrib\" ]; then\n          rm -f ${_Plr}/modules/o_contrib &> /dev/null\n        fi\n      else\n        if [ ! -e \"${_Plr}/modules/o_contrib\" ] \\\n          && [ -e \"${_O_CONTRIB}\" ]; then\n          ln -sfn ${_O_CONTRIB} ${_Plr}/modules/o_contrib &> /dev/null\n        fi\n      fi\n    fi\n  fi\n}\n\n_sql_convert() {\n  sudo -u ${_HM_U}.ftp -H /opt/local/bin/sqlmagic convert @${_Dom} to-${_SQL_CONVERT}\n}\n\n_send_shutdown_notice() {\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"ALERT! 
Shutdown of Hacked ${_Dom} Site on ${_hName}\" \\\n    ${_ALRT_EMAIL}\nHello,\n\nBecause you have not fixed this site despite the several alerts\nsent previously, it is scheduled for automated shutdown\nto prevent further damage to the site owner and visitors.\n\nOnce the site is disabled, the only way to re-enable it\nis to run the Verify task in your Ægir control panel.\n\nBut if you re-enable the site without fixing it immediately,\nit will be shut down automatically again.\n\nCommon signatures of an attack which triggered this alert:\n\n${_DETECTED}\n\nThe platform root directory for this site is:\n\n  ${_Plr}\n\nThe system hostname is:\n\n  ${_hName}\n\nTo learn more about what happened, how it was possible, and\nhow to survive #Drupageddon, please read:\n\n  https://omega8.cc/drupageddon-psa-2014-003-342\n\n--\nThis email has been sent by your Ægir automatic system monitor.\n\nEOF\n  fi\n  echo \"ALERT: SHUTDOWN notice sent to ${_CLIENT_EMAIL} [${_HM_U}]: OK\"\n}\n\n_send_hacked_alert() {\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"URGENT: The ${_Dom} site on ${_hName} has been HACKED!\" \\\n    ${_ALRT_EMAIL}\nHello,\n\nOur monitoring detected that the site ${_Dom} has been hacked!\n\nCommon signatures of an attack which triggered this alert:\n\n${_DETECTED}\n\nThe platform root directory for this site is:\n\n  ${_Plr}\n\nThe system hostname is:\n\n  ${_hName}\n\nTo learn more about what happened, how it was possible, and\nhow to survive #Drupageddon, please read:\n\n  https://omega8.cc/drupageddon-psa-2014-003-342\n\nWe restarted these daily checks on May 7, 2016 to make sure that\nno one stays on an outdated Drupal version with many known security\nvulnerabilities.\n\nYou will receive a Drupageddon alert for every site with an outdated,\ninsecure codebase, even if it was not affected by the Drupageddon bug\ndirectly.\n\nPlease be a good web citizen and upgrade to the latest Drupal core 
provided\nby BOA-5.9.1-dev. As a bonus, you will be able to speed up your sites\nconsiderably by switching PHP-FPM to 8.3.\n\nWe recommend following this upgrade how-to:\n\n  https://omega8.cc/your-drupal-site-upgrade-safe-workflow-298\n\nThe how-to for the PHP-FPM version switch can be found at:\n\n  https://omega8.cc/how-to-quickly-switch-php-to-newer-version-330\n\n--\nThis email has been sent by your Ægir automatic system monitor.\n\nEOF\n  fi\n  echo \"ALERT: HACKED notice sent to ${_CLIENT_EMAIL} [${_HM_U}]: OK\"\n}\n\n_send_core_alert() {\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"URGENT: The ${_Dom} site on ${_hName} runs on an insecure Drupal core!\" \\\n    ${_ALRT_EMAIL}\nHello,\n\nOur monitoring detected that this site runs on an insecure Drupal core:\n\n  ${_Dom}\n\nThe Drupageddon check result which triggered this alert:\n\n${_DETECTED}\n\nThe platform root directory for this site is:\n\n  ${_Plr}\n\nThe system hostname is:\n\n  ${_hName}\n\nDoes it mean that your site is vulnerable to the Drupageddon attack,\nrecently made famous again by the Panama Papers leak?\n\n  https://www.drupal.org/node/2718467\n\nIt depends on the Drupal core version you are using, and whether it has\nalready been patched to close the known attack vectors. 
You can find more\ndetails on our website at:\n\n  https://omega8.cc/drupageddon-psa-2014-003-342\n\nEven if the Drupal core version used by this site is not vulnerable\nto the Drupageddon attack, it is still vulnerable to other attacks,\nbecause you have missed Drupal core security release(s).\n\nWe restarted these daily checks on May 7, 2016 to make sure that\nno one stays on an outdated Drupal version with many known security\nvulnerabilities.\n\nYou will receive a Drupageddon alert for every site with an outdated,\ninsecure codebase, even if it was not affected by the Drupageddon bug\ndirectly.\n\nPlease be a good web citizen and upgrade to the latest Drupal core provided\nby BOA-5.9.1-dev. As a bonus, you will be able to speed up your sites\nconsiderably by switching PHP-FPM to 8.3.\n\nWe recommend following this upgrade how-to:\n\n  https://omega8.cc/your-drupal-site-upgrade-safe-workflow-298\n\nThe how-to for the PHP-FPM version switch can be found at:\n\n  https://omega8.cc/how-to-quickly-switch-php-to-newer-version-330\n\n--\nThis email has been sent by your Ægir automatic system monitor.\n\nEOF\n  fi\n  echo \"ALERT: Core notice sent to ${_CLIENT_EMAIL} [${_HM_U}]: OK\"\n}\n\n_check_site_status_with_drush8() {\n  _SITE_TEST=$(_run_drush8_nosilent_cmd \"status\" 2>&1)\n  if [[ \"${_SITE_TEST}\" =~ \"Error:\" ]] \\\n    || [[ \"${_SITE_TEST}\" =~ \"Drush was attempting to connect\" ]]; then\n    _SITE_TEST_RESULT=ERROR\n  else\n    _SITE_TEST_RESULT=OK\n  fi\n  if [ \"${_SITE_TEST_RESULT}\" = \"OK\" ]; then\n    _STATUS_BOOTSTRAP=$(_run_drush8_nosilent_cmd \"status bootstrap \\\n      | grep 'Drupal bootstrap.*:.*'\" 2>&1)\n    _STATUS_STATUS=$(_run_drush8_nosilent_cmd \"status status \\\n      | grep 'Database.*:.*'\" 2>&1)\n    if [[ \"${_STATUS_BOOTSTRAP}\" =~ \"Drupal bootstrap\" ]] \\\n      && [[ \"${_STATUS_STATUS}\" =~ \"Database\" ]]; then\n      _STATUS=OK\n      _RUN_DGN=NO\n      if [ -e \"${_usEr}/static/control/drupalgeddon.info\" ]; then\n        
_RUN_DGN=YES\n      else\n        if [ -e \"/root/.force.drupalgeddon.cnf\" ]; then\n          _RUN_DGN=YES\n        fi\n      fi\n      if [ -e \"${_Plr}/modules/o_contrib_seven\" ] \\\n        && [ \"${_RUN_DGN}\" = \"YES\" ]; then\n        if [ -L \"/home/${_HM_U}.ftp/.drush/usr/drupalgeddon\" ]; then\n          _run_drush8_cmd \"en update -y\"\n          _DGDD_T=$(_run_drush8_nosilent_cmd \"drupalgeddon-test\" 2>&1)\n          if [[ \"${_DGDD_T}\" =~ \"No evidence of known Drupalgeddon\" ]]; then\n            _DO_NOTHING=YES\n          elif [[ \"${_DGDD_T}\" =~ \"The drush command\" ]] \\\n            && [[ \"${_DGDD_T}\" =~ \"could not be found\" ]]; then\n            _DO_NOTHING=YES\n          elif [[ \"${_DGDD_T}\" =~ \"has a uid that is\" ]] \\\n            && [[ ! \"${_DGDD_T}\" =~ \"has security vulnerabilities\" ]] \\\n            && [[ \"${_DGDD_T}\" =~ \"higher than\" ]]; then\n            _DO_NOTHING=YES\n          elif [[ \"${_DGDD_T}\" =~ \"has a created timestamp before\" ]] \\\n            && [[ ! \"${_DGDD_T}\" =~ \"has security vulnerabilities\" ]]; then\n            _DO_NOTHING=YES\n          elif [ -z \"${_DGDD_T}\" ]; then\n            _DO_NOTHING=YES\n          elif [[ \"${_DGDD_T}\" =~ \"Drush command terminated\" ]]; then\n            echo \"ALERT: THIS SITE IS PROBABLY BROKEN! ${_Dir}\"\n            echo \"${_DGDD_T}\"\n          else\n            echo \"ALERT: THIS SITE HAS BEEN HACKED! 
${_Dir}\"\n            _DETECTED=\"${_DGDD_T}\"\n            if [ -n \"${_ALRT_EMAIL}\" ]; then\n              if [[ \"${_DGDD_T}\" =~ \"Role \\\"megauser\\\" discovered\" ]] \\\n                || [[ \"${_DGDD_T}\" =~ \"User \\\"drupaldev\\\" discovered\" ]] \\\n                || [[ \"${_DGDD_T}\" =~ \"User \\\"owned\\\" discovered\" ]] \\\n                || [[ \"${_DGDD_T}\" =~ \"User \\\"system\\\" discovered\" ]] \\\n                || [[ \"${_DGDD_T}\" =~ \"User \\\"configure\\\" discovered\" ]] \\\n                || [[ \"${_DGDD_T}\" =~ \"User \\\"drplsys\\\" discovered\" ]]; then\n                if [ -e \"${_usEr}/config/server_master/nginx/vhost.d/${_Dom}\" ]; then\n                  ### mv -f ${_usEr}/config/server_master/nginx/vhost.d/${_Dom} ${_usEr}/config/server_master/nginx/vhost.d/.${_Dom}\n                  _send_shutdown_notice\n                fi\n              else\n                if [[ \"${_DGDD_T}\" =~ \"has security vulnerabilities\" ]]; then\n                  _send_core_alert\n                else\n                  _send_hacked_alert\n                fi\n              fi\n            fi\n          fi\n        else\n          _DGMR_TEST=$(_run_drush8_nosilent_cmd \\\n            \"sqlq \\\"SELECT * FROM menu_router WHERE access_callback \\\n            = 'file_put_contents'\\\" | grep 'file_put_contents'\" 2>&1)\n          if [[ \"${_DGMR_TEST}\" =~ \"file_put_contents\" ]]; then\n            echo \"ALERT: THIS SITE HAS BEEN HACKED! 
${_Dir}\"\n            _DETECTED=\"file_put_contents as access_callback detected \\\n              in menu_router table\"\n            if [ -n \"${_ALRT_EMAIL}\" ]; then\n              _send_hacked_alert\n            fi\n          fi\n          _DGMR_TEST=$(_run_drush8_nosilent_cmd \\\n            \"sqlq \\\"SELECT * FROM menu_router WHERE access_callback \\\n            = 'assert'\\\" | grep 'assert'\" 2>&1)\n          if [[ \"${_DGMR_TEST}\" =~ \"assert\" ]]; then\n            echo \"ALERT: THIS SITE HAS BEEN HACKED! ${_Dir}\"\n            _DETECTED=\"assert as access_callback detected in menu_router table\"\n            if [ -n \"${_ALRT_EMAIL}\" ]; then\n              _send_hacked_alert\n            fi\n          fi\n        fi\n      fi\n    else\n      _STATUS=BROKEN\n      echo \"WARNING: THIS SITE IS BROKEN! ${_Dir}\"\n    fi\n  else\n    _STATUS=UNKNOWN\n    echo \"WARNING: THIS SITE IS PROBABLY BROKEN? ${_Dir}\"\n  fi\n}\n\n_check_file_with_wildcard_path() {\n  _WILDCARD_TEST=$(ls $1 2>&1)\n  if [ -z \"${_WILDCARD_TEST}\" ]; then\n    _FILE_EXISTS=NO\n  else\n    _FILE_EXISTS=YES\n  fi\n}\n\n_fix_modules() {\n  _AUTO_CONFIG_ADVAGG=NO\n  if [ -e \"${_Plr}/modules/o_contrib/advagg\" ] \\\n    || [ -e \"${_Plr}/modules/o_contrib_seven/advagg\" ]; then\n    _MODULE_T=$(_run_drush8_nosilent_cmd \"pml --status=enabled \\\n      --type=module | grep \\(advagg\\)\" 2>&1)\n    if [[ \"${_MODULE_T}\" =~ \"(advagg)\" ]]; then\n      _AUTO_CONFIG_ADVAGG=YES\n    fi\n  fi\n  if [ \"${_AUTO_CONFIG_ADVAGG}\" = \"YES\" ]; then\n    if [ -e \"/data/conf/default.boa_site_control.ini\" ] \\\n      && [ ! 
-e \"${_DIR_CTRL_F}\" ]; then\n      cp -af /data/conf/default.boa_site_control.ini \\\n        ${_DIR_CTRL_F} &> /dev/null\n      chown ${_HM_U}:users ${_DIR_CTRL_F} &> /dev/null\n      chmod 0664 ${_DIR_CTRL_F} &> /dev/null\n    fi\n    if [ -e \"${_DIR_CTRL_F}\" ]; then\n      _AGG_P=$(grep \"advagg_auto_configuration\" ${_DIR_CTRL_F} 2>&1)\n      _AGG_T=$(grep \"^advagg_auto_configuration = TRUE\" ${_DIR_CTRL_F} 2>&1)\n      if [[ \"${_AGG_T}\" =~ \"advagg_auto_configuration = TRUE\" ]]; then\n        _DO_NOTHING=YES\n      else\n        ###\n        ### Do this only for the site level ini file.\n        ###\n        if [[ \"${_AGG_P}\" =~ \"advagg_auto_configuration\" ]]; then\n          sed -i \"s/.*advagg_auto_c.*/advagg_auto_configuration = TRUE/g\" \\\n      ${_DIR_CTRL_F} &> /dev/null\n          wait\n        else\n          echo \"advagg_auto_configuration = TRUE\" >> ${_DIR_CTRL_F}\n        fi\n      fi\n    fi\n  else\n    if [ -e \"/data/conf/default.boa_site_control.ini\" ] \\\n      && [ ! -e \"${_DIR_CTRL_F}\" ]; then\n      cp -af /data/conf/default.boa_site_control.ini \\\n        ${_DIR_CTRL_F} &> /dev/null\n      chown ${_HM_U}:users ${_DIR_CTRL_F} &> /dev/null\n      chmod 0664 ${_DIR_CTRL_F} &> /dev/null\n    fi\n    if [ -e \"${_DIR_CTRL_F}\" ]; then\n      _AGG_P=$(grep \"advagg_auto_configuration\" ${_DIR_CTRL_F} 2>&1)\n      _AGG_T=$(grep \"^advagg_auto_configuration = FALSE\" \\\n        ${_DIR_CTRL_F} 2>&1)\n      if [[ \"${_AGG_T}\" =~ \"advagg_auto_configuration = FALSE\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [[ \"${_AGG_P}\" =~ \"advagg_auto_configuration\" ]]; then\n          sed -i \"s/.*advagg_auto_c.*/advagg_auto_configuration = FALSE/g\" \\\n      ${_DIR_CTRL_F} &> /dev/null\n          wait\n        else\n          echo \";advagg_auto_configuration = FALSE\" >> ${_DIR_CTRL_F}\n        fi\n      fi\n    fi\n  fi\n\n  if [ -e \"${_Plr}/modules/o_contrib_seven\" ] \\\n    && [ ! 
-e \"${_Plr}/core\" ]; then\n    _PRIV_TEST=$(_run_drush8_nosilent_cmd \"${_vGet} ^file_default_scheme$\" 2>&1)\n    if [[ \"${_PRIV_TEST}\" =~ \"No matching variable\" ]]; then\n      _PRIV_TEST_RESULT=NONE\n    else\n      _PRIV_TEST_RESULT=OK\n    fi\n    _AUTO_CNF_PF_DL=NO\n    if [ \"${_PRIV_TEST_RESULT}\" = \"OK\" ]; then\n      _Pri=$(_run_drush8_nosilent_cmd \"${_vGet} ^file_default_scheme$\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $1}' \\\n        | sed \"s/['\\\"]//g\" \\\n        | tr -d \"\\n\" 2>&1)\n      _Pri=${_Pri//[^a-z]/}\n      if [ \"${_Pri}\" = \"private\" ] || [ \"${_Pri}\" = \"public\" ]; then\n        echo _Pri file_default_scheme for ${_Dom} is ${_Pri}\n      fi\n      if [ \"${_Pri}\" = \"private\" ]; then\n        _AUTO_CNF_PF_DL=YES\n      fi\n    fi\n    if [ \"${_AUTO_CNF_PF_DL}\" = \"YES\" ]; then\n      if [ -e \"/data/conf/default.boa_site_control.ini\" ] \\\n        && [ ! -e \"${_DIR_CTRL_F}\" ]; then\n        cp -af /data/conf/default.boa_site_control.ini \\\n          ${_DIR_CTRL_F} &> /dev/null\n        chown ${_HM_U}:users ${_DIR_CTRL_F} &> /dev/null\n        chmod 0664 ${_DIR_CTRL_F} &> /dev/null\n      fi\n      if [ -e \"${_DIR_CTRL_F}\" ]; then\n        _AC_PFD_T=$(grep \"^allow_private_file_downloads = TRUE\" \\\n          ${_DIR_CTRL_F} 2>&1)\n        if [[ \"${_AC_PFD_T}\" =~ \"allow_private_file_downloads = TRUE\" ]]; then\n          _DO_NOTHING=YES\n        else\n          ###\n          ### Do this only for the site level ini file.\n          ###\n          sed -i \"s/.*allow_private_f.*/allow_private_file_downloads = TRUE/g\" \\\n      ${_DIR_CTRL_F} &> /dev/null\n          wait\n        fi\n      fi\n    else\n      if [ -e \"/data/conf/default.boa_site_control.ini\" ] \\\n        && [ ! 
-e \"${_DIR_CTRL_F}\" ]; then\n        cp -af /data/conf/default.boa_site_control.ini \\\n          ${_DIR_CTRL_F} &> /dev/null\n        chown ${_HM_U}:users ${_DIR_CTRL_F} &> /dev/null\n        chmod 0664 ${_DIR_CTRL_F} &> /dev/null\n      fi\n      if [ -e \"${_DIR_CTRL_F}\" ]; then\n        _AC_PFD_T=$(grep \"^allow_private_file_downloads = FALSE\" \\\n          ${_DIR_CTRL_F} 2>&1)\n        if [[ \"${_AC_PFD_T}\" =~ \"allow_private_file_downloads = FALSE\" ]]; then\n          _DO_NOTHING=YES\n        else\n          sed -i \"s/.*allow_private_f.*/allow_private_file_downloads = FALSE/g\" \\\n      ${_DIR_CTRL_F} &> /dev/null\n          wait\n        fi\n      fi\n    fi\n  fi\n\n  _AUTO_DT_FB_INT=NO\n  if [ -e \"${_Plr}/sites/all/modules/fb/fb_settings.inc\" ] \\\n    || [ -e \"${_Plr}/sites/all/modules/contrib/fb/fb_settings.inc\" ]; then\n    _AUTO_DT_FB_INT=YES\n  else\n    _check_file_with_wildcard_path \"${_Plr}/profiles/*/modules/fb/fb_settings.inc\"\n    if [ \"${_FILE_EXISTS}\" = \"YES\" ]; then\n      _AUTO_DT_FB_INT=YES\n    else\n      _check_file_with_wildcard_path \"${_Plr}/profiles/*/modules/contrib/fb/fb_settings.inc\"\n      if [ \"${_FILE_EXISTS}\" = \"YES\" ]; then\n        _AUTO_DT_FB_INT=YES\n      fi\n    fi\n  fi\n  if [ \"${_AUTO_DT_FB_INT}\" = \"YES\" ]; then\n    if [ -e \"/data/conf/default.boa_platform_control.ini\" ] \\\n      && [ ! 
-e \"${_PLR_CTRL_F}\" ]; then\n      cp -af /data/conf/default.boa_platform_control.ini \\\n        ${_PLR_CTRL_F} &> /dev/null\n      chown ${_HM_U}:users ${_PLR_CTRL_F} &> /dev/null\n      chmod 0664 ${_PLR_CTRL_F} &> /dev/null\n    fi\n    if [ -e \"${_PLR_CTRL_F}\" ]; then\n      _AD_FB_T=$(grep \"^auto_detect_facebook_integration = TRUE\" \\\n        ${_PLR_CTRL_F} 2>&1)\n      if [[ \"${_AD_FB_T}\" =~ \"auto_detect_facebook_integration = TRUE\" ]]; then\n        _DO_NOTHING=YES\n      else\n        ###\n        ### Do this only for the platform level ini file, so the site\n        ### level ini file can disable this check by setting it\n        ### explicitly to auto_detect_facebook_integration = FALSE\n        ###\n        sed -i \"s/.*auto_detect_face.*/auto_detect_facebook_integration = TRUE/g\" \\\n          ${_PLR_CTRL_F} &> /dev/null\n        wait\n      fi\n    fi\n  else\n    if [ -e \"/data/conf/default.boa_platform_control.ini\" ] \\\n      && [ ! -e \"${_PLR_CTRL_F}\" ]; then\n      cp -af /data/conf/default.boa_platform_control.ini \\\n        ${_PLR_CTRL_F} &> /dev/null\n      chown ${_HM_U}:users ${_PLR_CTRL_F} &> /dev/null\n      chmod 0664 ${_PLR_CTRL_F} &> /dev/null\n    fi\n    if [ -e \"${_PLR_CTRL_F}\" ]; then\n      _AD_FB_T=$(grep \"^auto_detect_facebook_integration = FALSE\" \\\n        ${_PLR_CTRL_F} 2>&1)\n      if [[ \"${_AD_FB_T}\" =~ \"auto_detect_facebook_integration = FALSE\" ]]; then\n        _DO_NOTHING=YES\n      else\n        sed -i \"s/.*auto_detect_face.*/auto_detect_facebook_integration = FALSE/g\" \\\n          ${_PLR_CTRL_F} &> /dev/null\n        wait\n      fi\n    fi\n  fi\n\n  _AUTO_DETECT_DOMAIN_ACCESS_INTEGRATION=NO\n  if [ -e \"${_Plr}/sites/all/modules/domain/settings.inc\" ] \\\n    || [ -e \"${_Plr}/sites/all/modules/contrib/domain/settings.inc\" ]; then\n    _AUTO_DETECT_DOMAIN_ACCESS_INTEGRATION=YES\n  else\n    _check_file_with_wildcard_path \"${_Plr}/profiles/*/modules/domain/settings.inc\"\n    if [ 
\"${_FILE_EXISTS}\" = \"YES\" ]; then\n      _AUTO_DETECT_DOMAIN_ACCESS_INTEGRATION=YES\n    else\n      _check_file_with_wildcard_path \"${_Plr}/profiles/*/modules/contrib/domain/settings.inc\"\n      if [ \"${_FILE_EXISTS}\" = \"YES\" ]; then\n        _AUTO_DETECT_DOMAIN_ACCESS_INTEGRATION=YES\n      fi\n    fi\n  fi\n  if [ \"${_AUTO_DETECT_DOMAIN_ACCESS_INTEGRATION}\" = \"YES\" ]; then\n    if [ -e \"/data/conf/default.boa_platform_control.ini\" ] \\\n      && [ ! -e \"${_PLR_CTRL_F}\" ]; then\n      cp -af /data/conf/default.boa_platform_control.ini \\\n        ${_PLR_CTRL_F} &> /dev/null\n      chown ${_HM_U}:users ${_PLR_CTRL_F} &> /dev/null\n      chmod 0664 ${_PLR_CTRL_F} &> /dev/null\n    fi\n    if [ -e \"${_PLR_CTRL_F}\" ]; then\n      _AD_DA_T=$(grep \"^auto_detect_domain_access_integration = TRUE\" \\\n        ${_PLR_CTRL_F} 2>&1)\n      if [[ \"${_AD_DA_T}\" =~ \"auto_detect_domain_access_integration = TRUE\" ]]; then\n        _DO_NOTHING=YES\n      else\n        ###\n        ### Do this only for the platform level ini file, so the site\n        ### level ini file can disable this check by setting it\n        ### explicitly to auto_detect_domain_access_integration = FALSE\n        ###\n        sed -i \"s/.*auto_detect_domain.*/auto_detect_domain_access_integration = TRUE/g\" \\\n          ${_PLR_CTRL_F} &> /dev/null\n        wait\n      fi\n    fi\n  else\n    if [ -e \"/data/conf/default.boa_platform_control.ini\" ] \\\n      && [ ! 
-e \"${_PLR_CTRL_F}\" ]; then\n      cp -af /data/conf/default.boa_platform_control.ini \\\n        ${_PLR_CTRL_F} &> /dev/null\n      chown ${_HM_U}:users ${_PLR_CTRL_F} &> /dev/null\n      chmod 0664 ${_PLR_CTRL_F} &> /dev/null\n    fi\n    if [ -e \"${_PLR_CTRL_F}\" ]; then\n      _AD_DA_T=$(grep \"^auto_detect_domain_access_integration = FALSE\" \\\n        ${_PLR_CTRL_F} 2>&1)\n      if [[ \"${_AD_DA_T}\" =~ \"auto_detect_domain_access_integration = FALSE\" ]]; then\n        _DO_NOTHING=YES\n      else\n        sed -i \"s/.*auto_detect_domain.*/auto_detect_domain_access_integration = FALSE/g\" \\\n          ${_PLR_CTRL_F} &> /dev/null\n        wait\n      fi\n    fi\n  fi\n\n  ###\n  ### Add new INI variables if missing\n  ###\n  if [ -e \"${_PLR_CTRL_F}\" ]; then\n    _VAR_IF_PRESENT=$(grep \"session_cookie_ttl\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"session_cookie_ttl\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";session_cookie_ttl = 86400\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"session_gc_eol\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"session_gc_eol\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";session_gc_eol = 86400\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"enable_newrelic_integration\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"enable_newrelic_integration\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";enable_newrelic_integration = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_old_nine_mode\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_old_nine_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_old_nine_mode = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_old_eight_mode\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_old_eight_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_old_eight_mode = FALSE\" 
>> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_flush_forced_mode\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_flush_forced_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_flush_forced_mode = TRUE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_lock_enable\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_lock_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_lock_enable = TRUE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_path_enable\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_path_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_path_enable = TRUE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_scan_enable\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_scan_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_scan_enable = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_exclude_bins\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_exclude_bins\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_exclude_bins = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"speed_booster_anon_cache_ttl\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"speed_booster_anon_cache_ttl\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";speed_booster_anon_cache_ttl = 10\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"disable_drupal_page_cache\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"disable_drupal_page_cache\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";disable_drupal_page_cache = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"allow_private_file_downloads\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"allow_private_file_downloads\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo 
\";allow_private_file_downloads = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"entitycache_dont_enable\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"entitycache_dont_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";entitycache_dont_enable = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"views_cache_bully_dont_enable\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"views_cache_bully_dont_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";views_cache_bully_dont_enable = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"views_content_cache_dont_enable\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"views_content_cache_dont_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";views_content_cache_dont_enable = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"set_composer_manager_vendor_dir\" ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"set_composer_manager_vendor_dir\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";set_composer_manager_vendor_dir = FALSE\" >> ${_PLR_CTRL_F}\n    fi\n  fi\n  if [ -e \"${_DIR_CTRL_F}\" ]; then\n     _VAR_IF_PRESENT=$(grep \"session_cookie_ttl\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"session_cookie_ttl\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";session_cookie_ttl = 86400\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"session_gc_eol\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"session_gc_eol\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";session_gc_eol = 86400\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"enable_newrelic_integration\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"enable_newrelic_integration\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";enable_newrelic_integration = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep 
\"redis_old_nine_mode\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_old_nine_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_old_nine_mode = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_old_eight_mode\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_old_eight_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_old_eight_mode = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_flush_forced_mode\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_flush_forced_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_flush_forced_mode = TRUE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_lock_enable\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_lock_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_lock_enable = TRUE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_path_enable\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_path_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_path_enable = TRUE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_scan_enable\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_scan_enable\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_scan_enable = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"redis_exclude_bins\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"redis_exclude_bins\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";redis_exclude_bins = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep \"speed_booster_anon_cache_ttl\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"speed_booster_anon_cache_ttl\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";speed_booster_anon_cache_ttl = 10\" >> ${_DIR_CTRL_F}\n    fi\n    _VAR_IF_PRESENT=$(grep 
\"disable_drupal_page_cache\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"disable_drupal_page_cache\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";disable_drupal_page_cache = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n     _VAR_IF_PRESENT=$(grep \"allow_private_file_downloads\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"allow_private_file_downloads\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";allow_private_file_downloads = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n     _VAR_IF_PRESENT=$(grep \"set_composer_manager_vendor_dir\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_VAR_IF_PRESENT}\" =~ \"set_composer_manager_vendor_dir\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";set_composer_manager_vendor_dir = FALSE\" >> ${_DIR_CTRL_F}\n    fi\n  fi\n\n  if [ -e \"${_PLR_CTRL_F}\" ]; then\n    _EC_DE_T=$(grep \"^entitycache_dont_enable = TRUE\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_EC_DE_T}\" =~ \"entitycache_dont_enable = TRUE\" ]] \\\n      || [ -e \"${_Plr}/profiles/commons\" ]; then\n      _ENTITYCACHE_DONT_ENABLE=YES\n    else\n      _ENTITYCACHE_DONT_ENABLE=NO\n    fi\n  else\n    _ENTITYCACHE_DONT_ENABLE=NO\n  fi\n\n  if [ -e \"${_PLR_CTRL_F}\" ]; then\n    _VCB_DE_T=$(grep \"^views_cache_bully_dont_enable = TRUE\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VCB_DE_T}\" =~ \"views_cache_bully_dont_enable = TRUE\" ]]; then\n      _VIEWS_CACHE_BULLY_DONT_ENABLE=YES\n    else\n      _VIEWS_CACHE_BULLY_DONT_ENABLE=NO\n    fi\n  else\n    _VIEWS_CACHE_BULLY_DONT_ENABLE=NO\n  fi\n\n  if [ -e \"${_PLR_CTRL_F}\" ]; then\n    _VCC_DE_T=$(grep \"^views_content_cache_dont_enable = TRUE\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_VCC_DE_T}\" =~ \"views_content_cache_dont_enable = TRUE\" ]]; then\n      _VIEWS_CONTENT_CACHE_DONT_ENABLE=YES\n    else\n      _VIEWS_CONTENT_CACHE_DONT_ENABLE=NO\n    fi\n  else\n    _VIEWS_CONTENT_CACHE_DONT_ENABLE=NO\n  fi\n\n  if [ -e \"${_Plr}/modules/o_contrib\" ]; then\n    if [ 
! -e \"${_Plr}/modules/user\" ] \\\n      || [ ! -e \"${_Plr}/sites/all/modules\" ] \\\n      || [ ! -e \"${_Plr}/profiles\" ]; then\n      echo \"WARNING: THIS PLATFORM IS BROKEN! ${_Plr}\"\n    elif [ ! -e \"${_Plr}/modules/path_alias_cache\" ]; then\n      echo \"WARNING: THIS PLATFORM IS NOT A VALID PRESSFLOW PLATFORM! ${_Plr}\"\n    elif [ -e \"${_Plr}/modules/path_alias_cache\" ] \\\n      && [ -e \"${_Plr}/modules/user\" ]; then\n      _MODX=ON\n      if [ ! -z \"${_MODULES_OFF_SIX}\" ]; then\n        _disable_modules_with_drush8 \"${_MODULES_OFF_SIX}\"\n      fi\n      if [ ! -z \"${_MODULES_ON_SIX}\" ]; then\n        _enable_modules_with_drush8 \"${_MODULES_ON_SIX}\"\n      fi\n      _run_drush8_cmd \"sqlq \\\"UPDATE system SET weight = '-1' \\\n        WHERE type = 'module' AND name = 'path_alias_cache'\\\"\"\n    fi\n  elif [ -e \"${_Plr}/modules/o_contrib_seven\" ]; then\n    if [ ! -e \"${_Plr}/modules/user\" ] \\\n      || [ ! -e \"${_Plr}/sites/all/modules\" ] \\\n      || [ ! -e \"${_Plr}/profiles\" ]; then\n      echo \"WARNING: THIS PLATFORM IS BROKEN! ${_Plr}\"\n    else\n      _MODX=ON\n      if [ ! -z \"${_MODULES_OFF_SEVEN}\" ]; then\n        _disable_modules_with_drush8 \"${_MODULES_OFF_SEVEN}\"\n      fi\n      if [ \"${_ENTITYCACHE_DONT_ENABLE}\" = \"NO\" ]; then\n        _enable_modules_with_drush8 \"entitycache\"\n      fi\n      if [ ! 
-z \"${_MODULES_ON_SEVEN}\" ]; then\n        _enable_modules_with_drush8 \"${_MODULES_ON_SEVEN}\"\n      fi\n    fi\n  fi\n}\n\n_if_site_db_conversion() {\n  ###\n  ### Detect db conversion mode, if set per platform or per site.\n  ###\n  if [ -e \"${_PLR_CTRL_F}\" ]; then\n    _SQL_INDB_P=$(grep \"sql_conversion_mode\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_SQL_INDB_P}\" =~ \"sql_conversion_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";sql_conversion_mode = NO\" >> ${_PLR_CTRL_F}\n    fi\n    _SQL_INDB_T=$(grep \"^sql_conversion_mode = innodb\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_SQL_INDB_T}\" =~ \"sql_conversion_mode = innodb\" ]]; then\n      _SQL_CONVERT=innodb\n    fi\n    _SQL_MYSM_T=$(grep \"^sql_conversion_mode = myisam\" \\\n      ${_PLR_CTRL_F} 2>&1)\n    if [[ \"${_SQL_MYSM_T}\" =~ \"sql_conversion_mode = myisam\" ]]; then\n      _SQL_CONVERT=myisam\n    fi\n  fi\n  if [ -e \"${_DIR_CTRL_F}\" ]; then\n    _SQL_INDB_P=$(grep \"sql_conversion_mode\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SQL_INDB_P}\" =~ \"sql_conversion_mode\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";sql_conversion_mode = NO\" >> ${_DIR_CTRL_F}\n    fi\n    _SQL_INDB_T=$(grep \"^sql_conversion_mode = innodb\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SQL_INDB_T}\" =~ \"sql_conversion_mode = innodb\" ]]; then\n      _SQL_CONVERT=innodb\n    fi\n    _SQL_MYSM_T=$(grep \"^sql_conversion_mode = myisam\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SQL_MYSM_T}\" =~ \"sql_conversion_mode = myisam\" ]]; then\n      _SQL_CONVERT=myisam\n    fi\n  fi\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    _DENY_SQL_CONVERT=YES\n    _SQL_CONVERT=\n  fi\n  if [ -z \"${_DENY_SQL_CONVERT}\" ] \\\n    && [ ! 
-z \"${_SQL_CONVERT}\" ] \\\n    && [ \"${_DOW}\" = \"2\" ]; then\n    if [ \"${_SQL_CONVERT}\" = \"YES\" ]; then\n      _SQL_CONVERT=innodb\n    elif [ \"${_SQL_CONVERT}\" = \"NO\" ]; then\n      _SQL_CONVERT=\n    fi\n    if [ \"${_SQL_CONVERT}\" = \"myisam\" ] \\\n      || [ \"${_SQL_CONVERT}\" = \"innodb\" ]; then\n      _TIMP=$(date +%y%m%d-%H%M%S)\n      echo \"${_TIMP} sql conversion to-${_SQL_CONVERT} \\\n        for ${_Dom} started\"\n      _sql_convert\n      _TIMP=$(date +%y%m%d-%H%M%S)\n      echo \"${_TIMP} sql conversion to-${_SQL_CONVERT} \\\n        for ${_Dom} completed\"\n    fi\n  fi\n}\n\n_cleanup_ghost_platforms() {\n  if [ -e \"${_Plr}\" ]; then\n    if [ ! -e \"${_Plr}/index.php\" ] || [ ! -e \"${_Plr}/profiles\" ]; then\n      if [ ! -e \"${_Plr}/vendor\" ]; then\n        mkdir -p ${_usEr}/undo\n        ### mv -f ${_Plr} ${_usEr}/undo/ &> /dev/null\n        echo \"GHOST platform ${_Plr} detected and moved to ${_usEr}/undo/\"\n      fi\n    fi\n  fi\n}\n\n_fix_seven_core_patch() {\n  if [ ! -f \"${_Plr}/profiles/SA-CORE-2014-005-D7-fix.info\" ]; then\n    _PATCH_TEST=$(grep \"foreach (array_values(\\$data)\" \\\n      ${_Plr}/includes/database/database.inc 2>&1)\n    if [[ \"${_PATCH_TEST}\" =~ \"array_values\" ]]; then\n      echo fixed > ${_Plr}/profiles/SA-CORE-2014-005-D7-fix.info\n    else\n      cd ${_Plr}\n      patch -p1 < /var/xdrago/conf/SA-CORE-2014-005-D7.patch\n      chown ${_HM_U}:users ${_Plr}/includes/database/*.inc &> /dev/null\n      chmod 0664 ${_Plr}/includes/database/*.inc &> /dev/null\n      echo fixed > ${_Plr}/profiles/SA-CORE-2014-005-D7-fix.info\n    fi\n    chown ${_HM_U}:users ${_Plr}/profiles/*-fix.info &> /dev/null\n    chmod 0664 ${_Plr}/profiles/*-fix.info &> /dev/null\n  fi\n}\n\n_fix_static_permissions() {\n  _cleanup_ghost_platforms\n  if [ -e \"${_Plr}/profiles\" ]; then\n    if [ -e \"${_Plr}/web.config\" ] && [ ! 
-e \"${_Plr}/core\" ]; then\n      _fix_seven_core_patch\n    fi\n    if [ -e \"${_Plr}/core/lib/Drupal.php\" ] \\\n      && [ -e \"${_Plr}/../vendor/autoload.php\" ] \\\n      && grep -q '\"drupal/core\"' \"${_Plr}/../composer.json\" 2>/dev/null; then\n      _use_Plr=\"$(cd \"${_Plr}/..\" && pwd -P)\"\n    else\n      _use_Plr=\"${_Plr}\"\n    fi\n    if [ ! -e \"${_usEr}/static/control/unlock.info\" ] \\\n      && [ ! -e \"${_use_Plr}/skip.info\" ]; then\n      if [ ! -e \"${_usEr}/log/ctrl/plr.${_PlrID}.ctm-lock-${_NOW}.info\" ]; then\n        chown -R ${_HM_U} ${_use_Plr} &> /dev/null\n        touch ${_usEr}/log/ctrl/plr.${_PlrID}.ctm-lock-${_NOW}.info\n      fi\n    elif [ -e \"${_usEr}/static/control/unlock.info\" ] \\\n      && [ ! -e \"${_use_Plr}/skip.info\" ]; then\n      if [ ! -e \"${_usEr}/log/ctrl/plr.${_PlrID}.ctm-unlock-${_NOW}.info\" ]; then\n        chown -R ${_HM_U}.ftp ${_use_Plr} &> /dev/null\n        touch ${_usEr}/log/ctrl/plr.${_PlrID}.ctm-unlock-${_NOW}.info\n      fi\n    fi\n    if [ ! -f \"${_usEr}/log/ctrl/plr.${_PlrID}.perm-fix-${_NOW}.info\" ]; then\n      find ${_use_Plr} -type d -exec chmod 0775 {} \\; &> /dev/null\n      find ${_use_Plr} -type f -exec chmod 0664 {} \\; &> /dev/null\n      if [ -e \"${_use_Plr}/vendor/drush\" ]; then\n        chmod 0400 ${_use_Plr}/vendor/drush\n      fi\n      if [ -e \"${_use_Plr}/vendor/symfony/console/Input\" ]; then\n        chmod 0400 ${_use_Plr}/vendor/symfony/console/Input\n      fi\n      if [ -e \"${_use_Plr}/vendor/symfony/console/Style\" ]; then\n        chmod 0400 ${_use_Plr}/vendor/symfony/console/Style\n      fi\n    fi\n  fi\n}\n\n_fix_expected_symlinks() {\n  if [ ! 
-e \"${_Plr}/js.php\" ] && [ -e \"${_Plr}\" ]; then\n    if [ -e \"${_Plr}/modules/o_contrib_seven\" ] \\\n      && [ -e \"${_O_CONTRIB_SEVEN}/js/js.php\" ]; then\n      ln -sfn ${_O_CONTRIB_SEVEN}/js/js.php ${_Plr}/js.php &> /dev/null\n    elif [ -e \"${_Plr}/modules/o_contrib\" ] \\\n      && [ -e \"${_O_CONTRIB}/js/js.php\" ]; then\n      ln -sfn ${_O_CONTRIB}/js/js.php ${_Plr}/js.php &> /dev/null\n    fi\n  fi\n}\n\n_fix_permissions() {\n  ### modules,themes,libraries - profile level in ~/static\n  searchStringT=\"/static/\"\n  case ${_Plr} in\n    *\"$searchStringT\"*)\n      _fix_static_permissions\n      ;;\n  esac\n  ### modules,themes,libraries - platform level\n  if [ -f \"${_Plr}/profiles/core-permissions-update-fix.info\" ]; then\n    rm -f ${_Plr}/profiles/*permissions*.info\n    rm -f ${_Plr}/sites/all/permissions-fix*\n  fi\n  if [ ! -f \"${_usEr}/log/ctrl/plr.${_PlrID}.perm-fix-${_NOW}.info\" ] \\\n    && [ -e \"${_Plr}\" ]; then\n    mkdir -p ${_Plr}/sites/all/{modules,themes,libraries,drush}\n    find ${_Plr}/sites/all/{modules,themes,libraries,drush}/*{.tar,.tar.gz,.zip} \\\n      -type f -exec rm -f {} \\; &> /dev/null\n    if [ ! -e \"${_usEr}/static/control/unlock.info\" ] \\\n      && [ ! -e \"${_Plr}/skip.info\" ]; then\n      if [ ! -e \"${_usEr}/log/ctrl/plr.${_PlrID}.lock-${_NOW}.info\" ]; then\n        chown -R ${_HM_U}:users \\\n          ${_Plr}/sites/all/{modules,themes,libraries}/* &> /dev/null\n        touch ${_usEr}/log/ctrl/plr.${_PlrID}.lock-${_NOW}.info\n      fi\n    elif [ -e \"${_usEr}/static/control/unlock.info\" ] \\\n      && [ ! -e \"${_Plr}/skip.info\" ]; then\n      if [ ! 
-e \"${_usEr}/log/ctrl/plr.${_PlrID}.unlock-${_NOW}.info\" ]; then\n        chown -R ${_HM_U}.ftp:users \\\n          ${_Plr}/sites/all/{modules,themes,libraries}/* &> /dev/null\n        touch ${_usEr}/log/ctrl/plr.${_PlrID}.unlock-${_NOW}.info\n      fi\n    fi\n    chown ${_HM_U}:users \\\n      ${_Plr}/sites/all/drush/drushrc.php \\\n      ${_Plr}/sites \\\n      ${_Plr}/sites/* \\\n      ${_Plr}/sites/sites.php \\\n      ${_Plr}/sites/all \\\n      ${_Plr}/sites/all/{modules,themes,libraries,drush} &> /dev/null\n    chmod 0751 ${_Plr}/sites &> /dev/null\n    chmod 0755 ${_Plr}/sites/* &> /dev/null\n    chmod 0644 ${_Plr}/sites/*.php &> /dev/null\n    chmod 0664 ${_Plr}/autoload.php &> /dev/null\n    chmod 0644 ${_Plr}/sites/*.txt &> /dev/null\n    chmod 0644 ${_Plr}/sites/*.yml &> /dev/null\n    chmod 0755 ${_Plr}/sites/all/drush &> /dev/null\n    find ${_Plr}/sites/all/{modules,themes,libraries} -type d -exec \\\n      chmod 02775 {} \\; &> /dev/null\n    find ${_Plr}/sites/all/{modules,themes,libraries} -type f -exec \\\n      chmod 0664 {} \\; &> /dev/null\n    ### expected symlinks\n    _fix_expected_symlinks\n    ### known exceptions\n    chmod -R 775 ${_Plr}/sites/all/libraries/tcpdf/cache &> /dev/null\n    chown -R ${_HM_U}:www-data \\\n      ${_Plr}/sites/all/libraries/tcpdf/cache &> /dev/null\n    touch ${_usEr}/log/ctrl/plr.${_PlrID}.perm-fix-${_NOW}.info\n  fi\n  if [ -e \"${_Dir}\" ] \\\n    && [ -e \"${_Dir}/drushrc.php\" ] \\\n    && [ -e \"${_Dir}/files\" ] \\\n    && [ -e \"${_Dir}/private\" ]; then\n    ### Cleanup\n    rm ${_Dir}/*.{codebasecheck*,hm-fix-*,ctm-lock-*,lock-*,perm-fix-*}.info &> /dev/null\n    ### directory and settings files - site level\n    if [ ! 
-e \"${_Dir}/modules\" ]; then\n      mkdir ${_Dir}/modules\n    fi\n    if [ -e \"${_Dir}/aegir.services.yml\" ]; then\n      rm -f ${_Dir}/aegir.services.yml\n    fi\n    chown ${_HM_U}:users ${_Dir} &> /dev/null\n    chown ${_HM_U}:www-data \\\n      ${_Dir}/{local.settings.php,settings.php,civicrm.settings.php,solr.php} &> /dev/null\n    find ${_Dir}/*.php -type f -exec chmod 0440 {} \\; &> /dev/null\n    chmod 0640 ${_Dir}/civicrm.settings.php &> /dev/null\n    ### modules,themes,libraries - site level\n    find ${_Dir}/{modules,themes,libraries}/*{.tar,.tar.gz,.zip} -type f -exec \\\n      rm -f {} \\; &> /dev/null\n    rm -f ${_Dir}/modules/local-allow.info\n    if [ ! -e \"${_usEr}/static/control/unlock.info\" ] \\\n      && [ ! -e \"${_Plr}/skip.info\" ]; then\n      chown -R ${_HM_U}:users \\\n        ${_Dir}/{modules,themes,libraries}/* &> /dev/null\n    elif [ -e \"${_usEr}/static/control/unlock.info\" ] \\\n      && [ ! -e \"${_Plr}/skip.info\" ]; then\n      chown -R ${_HM_U}.ftp:users \\\n        ${_Dir}/{modules,themes,libraries}/* &> /dev/null\n    fi\n    chown ${_HM_U}:users \\\n      ${_Dir}/drushrc.php \\\n      ${_Dir}/{modules,themes,libraries} &> /dev/null\n    find ${_Dir}/{modules,themes,libraries} -type d -exec \\\n      chmod 02775 {} \\; &> /dev/null\n    find ${_Dir}/{modules,themes,libraries} -type f -exec \\\n      chmod 0664 {} \\; &> /dev/null\n    ### files - site level\n    chown -L -R ${_HM_U}:www-data ${_Dir}/files &> /dev/null\n    find ${_Dir}/files/ -type d -exec chmod 02775 {} \\; &> /dev/null\n    find ${_Dir}/files/ -type f -exec chmod 0664 {} \\; &> /dev/null\n    chmod 02775 ${_Dir}/files &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/files &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/files/{tmp,images,pictures,css,js} &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/files/{advagg_css,advagg_js,ctools} &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/files/{ctools/css,imagecache,locations} &> /dev/null\n    
chown ${_HM_U}:www-data ${_Dir}/files/{xmlsitemap,deployment,styles,private} &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/files/{civicrm,civicrm/templates_c} &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/files/{civicrm/upload,civicrm/persist} &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/files/{civicrm/custom,civicrm/dynamic} &> /dev/null\n    ### private - site level\n    chown -L -R ${_HM_U}:www-data ${_Dir}/private &> /dev/null\n    find ${_Dir}/private/ -type d -exec chmod 02775 {} \\; &> /dev/null\n    find ${_Dir}/private/ -type f -exec chmod 0664 {} \\; &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/private &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/private/{files,temp} &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/private/files/backup_migrate &> /dev/null\n    chown ${_HM_U}:www-data ${_Dir}/private/files/backup_migrate/{manual,scheduled} &> /dev/null\n    chown -L -R ${_HM_U}:www-data ${_Dir}/private/config &> /dev/null\n    _DB_HOST_PRESENT=$(grep \"^\\$_SERVER\\['db_host'\\] = \\$options\\['db_host'\\];\" \\\n      ${_Dir}/drushrc.php 2>&1)\n    if [[ \"${_DB_HOST_PRESENT}\" =~ \"db_host\" ]]; then\n      if [ \"${_FORCE_SITES_VERIFY}\" = \"YES\" ]; then\n        _run_drush8_hmr_cmd \"hosting-task @${_Dom} verify --force\"\n      fi\n    else\n      echo \"\\$_SERVER['db_host'] = \\$options['db_host'];\" >> ${_Dir}/drushrc.php\n      _run_drush8_hmr_cmd \"hosting-task @${_Dom} verify --force\"\n    fi\n  fi\n}\n\n_convert_controls_orig() {\n  if [ -e \"${_CTRL_DIR}/$1.info\" ] \\\n    || [ -e \"${_usEr}/static/control/$1.info\" ]; then\n    if [ ! -e \"${_CTRL_F}\" ] && [ -e \"${_CTRL_F_TPL}\" ]; then\n      cp -af ${_CTRL_F_TPL} ${_CTRL_F}\n    fi\n    sed -i \"s/.*$1.*/$1 = TRUE/g\" ${_CTRL_F} &> /dev/null\n    wait\n    rm -f ${_CTRL_DIR}/$1.info\n  fi\n}\n\n_convert_controls_orig_no_global() {\n  if [ -e \"${_CTRL_DIR}/$1.info\" ]; then\n    if [ ! 
-e \"${_CTRL_F}\" ] && [ -e \"${_CTRL_F_TPL}\" ]; then\n      cp -af ${_CTRL_F_TPL} ${_CTRL_F}\n    fi\n    sed -i \"s/.*$1.*/$1 = TRUE/g\" ${_CTRL_F} &> /dev/null\n    wait\n    rm -f ${_CTRL_DIR}/$1.info\n  fi\n}\n\n_convert_controls_value() {\n  if [ -e \"${_CTRL_DIR}/$1.info\" ] \\\n    || [ -e \"${_usEr}/static/control/$1.info\" ]; then\n    if [ ! -e \"${_CTRL_F}\" ] && [ -e \"${_CTRL_F_TPL}\" ]; then\n      cp -af ${_CTRL_F_TPL} ${_CTRL_F}\n    fi\n    if [ \"$1\" = \"nginx_cache_day\" ]; then\n      _TTL=86400\n    elif [ \"$1\" = \"nginx_cache_hour\" ]; then\n      _TTL=3600\n    elif [ \"$1\" = \"nginx_cache_quarter\" ]; then\n      _TTL=900\n    fi\n    sed -i \"s/.*speed_booster_anon.*/speed_booster_anon_cache_ttl = ${_TTL}/g\" \\\n      ${_CTRL_F} &> /dev/null\n    wait\n    rm -f ${_CTRL_DIR}/$1.info\n  fi\n}\n\n_convert_controls_renamed() {\n  if [ -e \"${_CTRL_DIR}/$1.info\" ]; then\n    if [ ! -e \"${_CTRL_F}\" ] && [ -e \"${_CTRL_F_TPL}\" ]; then\n      cp -af ${_CTRL_F_TPL} ${_CTRL_F}\n    fi\n    if [ \"$1\" = \"cookie_domain\" ]; then\n      sed -i \"s/.*server_name_cookie.*/server_name_cookie_domain = TRUE/g\" \\\n        ${_CTRL_F} &> /dev/null\n      wait\n    fi\n    rm -f ${_CTRL_DIR}/$1.info\n  fi\n}\n\n_fix_control_settings() {\n  _CTRL_NAME_ORIG=\"redis_lock_enable \\\n    redis_cache_disable \\\n    disable_admin_dos_protection \\\n    allow_anon_node_add \\\n    allow_private_file_downloads\"\n  _CTRL_NAME_VALUE=\"nginx_cache_day \\\n    nginx_cache_hour \\\n    nginx_cache_quarter\"\n  _CTRL_NAME_RENAMED=\"cookie_domain\"\n  for ctrl in ${_CTRL_NAME_ORIG}; do\n    _convert_controls_orig \"$ctrl\"\n  done\n  for ctrl in ${_CTRL_NAME_VALUE}; do\n    _convert_controls_value \"$ctrl\"\n  done\n  for ctrl in ${_CTRL_NAME_RENAMED}; do\n    _convert_controls_renamed \"$ctrl\"\n  done\n}\n\n_fix_platform_system_control_settings() {\n  _CTRL_NAME_ORIG=\"enable_user_register_protection \\\n     entitycache_dont_enable \\\n     
views_cache_bully_dont_enable \\\n     views_content_cache_dont_enable\"\n  for ctrl in ${_CTRL_NAME_ORIG}; do\n    _convert_controls_orig \"$ctrl\"\n  done\n}\n\n_fix_site_system_control_settings() {\n  _CTRL_NAME_ORIG=\"disable_user_register_protection\"\n  for ctrl in ${_CTRL_NAME_ORIG}; do\n    _convert_controls_orig_no_global \"$ctrl\"\n  done\n}\n\n_cleanup_ini() {\n  if [ -e \"${_CTRL_F}\" ]; then\n    sed -i \"s/^;;.*//g\"   ${_CTRL_F} &> /dev/null\n    wait\n    sed -i \"s/^ .*//g\"    ${_CTRL_F} &> /dev/null\n    wait\n    sed -i \"s/^#.*//g\"    ${_CTRL_F} &> /dev/null\n    wait\n    sed -i \"/^$/d\"        ${_CTRL_F} &> /dev/null\n    wait\n    sed -i \"s/^\\[/\\n\\[/g\" ${_CTRL_F} &> /dev/null\n    wait\n  fi\n}\n\n_add_note_platform_ini() {\n  if [ -e \"${_CTRL_F}\" ]; then\n    echo \"\" >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n    echo \";;  This is a platform level ACTIVE INI file which can be used to modify\"     >> ${_CTRL_F}\n    echo \";;  default BOA system behaviour for all sites hosted on this platform.\"      >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n    echo \";;  Please review complete documentation included in this file TEMPLATE:\"     >> ${_CTRL_F}\n    echo \";;  default.boa_platform_control.ini, since this ACTIVE INI file\"             >> ${_CTRL_F}\n    echo \";;  may not include all options available after upgrade to BOA-${_xSrl}\"      >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n    echo \";;  Note that it takes ~60 seconds to see any modification results in action\" >> ${_CTRL_F}\n    echo \";;  due to opcode caching enabled in PHP-FPM for all non-dev sites.\"          >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n  fi\n}\n\n_add_note_site_ini() {\n  if [ -e \"${_CTRL_F}\" ]; then\n    echo \"\" >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n    echo \";;  This is a site level ACTIVE INI file which can be used to modify\"         >> ${_CTRL_F}\n    echo \";;  default BOA system behaviour for this site only.\"             
            >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n    echo \";;  Please review complete documentation included in this file TEMPLATE:\"     >> ${_CTRL_F}\n    echo \";;  default.boa_site_control.ini, since this ACTIVE INI file\"                 >> ${_CTRL_F}\n    echo \";;  may not include all options available after upgrade to BOA-${_xSrl}\"      >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n    echo \";;  Note that it takes ~60 seconds to see any modification results in action\" >> ${_CTRL_F}\n    echo \";;  due to opcode caching enabled in PHP-FPM for all non-dev sites.\"          >> ${_CTRL_F}\n    echo \";;\" >> ${_CTRL_F}\n  fi\n}\n\n_fix_platform_control_files() {\n  if [ -e \"/data/conf/default.boa_platform_control.ini\" ]; then\n    if [ ! -e \"${_Plr}/sites/all/modules/default.boa_platform_control.ini\" ] \\\n      || [ \"${_CTRL_TPL_FORCE_UPDATE}\" = \"YES\" ]; then\n      cp -af /data/conf/default.boa_platform_control.ini \\\n        ${_Plr}/sites/all/modules/ &> /dev/null\n      chown ${_HM_U}:users ${_Plr}/sites/all/modules/default.boa_platform_control.ini &> /dev/null\n      chmod 0664 ${_Plr}/sites/all/modules/default.boa_platform_control.ini &> /dev/null\n    fi\n    _CTRL_F_TPL=\"${_Plr}/sites/all/modules/default.boa_platform_control.ini\"\n    _CTRL_F=\"${_Plr}/sites/all/modules/boa_platform_control.ini\"\n    _CTRL_DIR=\"${_Plr}/sites/all/modules\"\n    _fix_control_settings\n    _fix_platform_system_control_settings\n    _cleanup_ini\n    _add_note_platform_ini\n  fi\n}\n\n_fix_site_control_files() {\n  if [ -e \"/data/conf/default.boa_site_control.ini\" ]; then\n    if [ ! 
-e \"${_Dir}/modules/default.boa_site_control.ini\" ] \\\n      || [ \"${_CTRL_TPL_FORCE_UPDATE}\" = \"YES\" ]; then\n      cp -af /data/conf/default.boa_site_control.ini ${_Dir}/modules/ &> /dev/null\n      chown ${_HM_U}:users ${_Dir}/modules/default.boa_site_control.ini &> /dev/null\n      chmod 0664 ${_Dir}/modules/default.boa_site_control.ini &> /dev/null\n    fi\n    _CTRL_F_TPL=\"${_Dir}/modules/default.boa_site_control.ini\"\n    _CTRL_F=\"${_Dir}/modules/boa_site_control.ini\"\n    _CTRL_DIR=\"${_Dir}/modules\"\n    _fix_control_settings\n    _fix_site_system_control_settings\n    _cleanup_ini\n    _add_note_site_ini\n  fi\n}\n\n_cleanup_ghost_vhosts() {\n  for _Site in `find ${_usEr}/config/server_master/nginx/vhost.d -maxdepth 1 \\\n    -mindepth 1 -type f | sort`; do\n    _Dom=$(echo ${_Site} | cut -d'/' -f9 | awk '{ print $1}' 2>&1)\n    if [[ \"${_Dom}\" =~ \".restore\"($) ]]; then\n      mkdir -p ${_usEr}/undo\n      ### mv -f ${_usEr}/.drush/${_Dom}.alias.drushrc.php ${_usEr}/undo/ &> /dev/null\n      ### mv -f ${_usEr}/config/server_master/nginx/vhost.d/${_Dom} ${_usEr}/undo/ &> /dev/null\n      echo \"GHOST vhost for ${_Dom} detected and moved to ${_usEr}/undo/\"\n    fi\n    if [ -e \"${_usEr}/config/server_master/nginx/vhost.d/${_Dom}\" ]; then\n      local _thisVhost=\"${_usEr}/config/server_master/nginx/vhost.d/${_Dom}\"\n      local _fixHttpRequired=NO\n      if grep -q -e \"ssl http2\" \"${_thisVhost}\"; then\n        _fixHttpRequired=YES\n      elif ! 
grep -q -E '^\\s*http2\\s+on;$' \"${_thisVhost}\"; then\n        _fixHttpRequired=YES\n      elif grep -q -E '^\\s+listen.*443\\s+quic;$' \"${_thisVhost}\"; then\n        _fixHttpRequired=YES\n      fi\n      if [ \"${_fixHttpRequired}\" = \"YES\" ]; then\n        echo \"FIXING vhost for ${_Dom}\"\n        # Remove 'http2' from 'listen' directives with varying spaces\n        sed -i -E 's/(listen\\s+[^;]*\\s+ssl)\\s+http2;$/\\1;/' \"${_thisVhost}\"\n        # Remove existing 'http2 on;' lines with varying spaces\n        sed -i -E '/^\\s*http2\\s+on;/d' \"${_thisVhost}\"\n        # Remove existing 'quic' lines with varying spaces\n        sed -i -E '/^\\s+listen.*443\\s+quic;/d' \"${_thisVhost}\"\n        # Remove unwanted directives with varying spaces\n        sed -i -E \\\n          -e '/^\\s*ssl_stapling\\b/d' \\\n          -e '/^\\s*ssl_stapling_verify\\b/d' \\\n          -e '/^\\s*resolver\\b/d' \\\n          -e '/^\\s*resolver_timeout\\b/d' \\\n          \"${_thisVhost}\"\n        # Update 'ssl_prefer_server_ciphers' directive, handling spaces\n        sed -i -E 's/^\\s*ssl_prefer_server_ciphers\\s+.*$/ssl_prefer_server_ciphers on;/' \"${_thisVhost}\"\n        # Update 'http3_hq' directive, handling spaces\n        sed -i -E 's/http3_hq\\s+on;$/http3_hq on;/' \"${_thisVhost}\"\n        if grep -q 'ssl_prefer_server_ciphers' \"${_thisVhost}\"; then\n          # Add 'http2 on;' after 'ssl_prefer_server_ciphers on;', only if not already present\n          if ! grep -q -E '^\\s*http2\\s+on;$' \"${_thisVhost}\"; then\n            sed -i '/ssl_prefer_server_ciphers on;/ a\\  http2 on;' \"${_thisVhost}\"\n          fi\n        elif grep -q -E '^\\s*#http3_hq\\s+on;$' \"${_thisVhost}\"; then\n          # Add 'http2 on;' after 'http3_hq on;', only if not already present\n          if ! 
grep -q -E '^\\s*http2\\s+on;$' \"${_thisVhost}\"; then\n            sed -i '/http3_hq on;/ a\\  http2 on;' \"${_thisVhost}\"\n          fi\n        fi\n      fi\n      _Plx=$(cat ${_usEr}/config/server_master/nginx/vhost.d/${_Dom} \\\n        | grep \"root \" \\\n        | cut -d: -f2 \\\n        | awk '{ print $2}' \\\n        | sed \"s/[\\;]//g\" 2>&1)\n      if [[ \"${_Plx}\" =~ \"aegir/distro\" ]] \\\n        || [[ \"${_Dom}\" =~ (^)\"https.\" ]] \\\n        || [[ \"${_Dom}\" =~ \"--CDN\"($) ]]; then\n        _SKIP_VHOST=YES\n      else\n        if [ ! -e \"${_usEr}/.drush/${_Dom}.alias.drushrc.php\" ]; then\n          mkdir -p ${_usEr}/undo\n          ### mv -f ${_Site} ${_usEr}/undo/ &> /dev/null\n          echo \"GHOST vhost for ${_Dom} with no drushrc detected and moved to ${_usEr}/undo/\"\n        fi\n      fi\n    fi\n  done\n}\n\n_cleanup_ghost_drushrc() {\n  for _thisAlias in `find ${_usEr}/.drush/*.alias.drushrc.php -maxdepth 1 -type f \\\n    | sort`; do\n    _aliasName=$(echo \"${_thisAlias}\" | cut -d'/' -f6 | awk '{ print $1}' 2>&1)\n    _aliasName=$(echo \"${_aliasName}\" \\\n      | sed \"s/.alias.drushrc.php//g\" \\\n      | awk '{ print $1}' 2>&1)\n    if [[ \"${_aliasName}\" =~ (^)\"server_\" ]] \\\n      || [[ \"${_aliasName}\" =~ (^)\"hostmaster\" ]]; then\n      _IS_SITE=NO\n    elif [[ \"${_aliasName}\" =~ (^)\"platform_\" ]]; then\n      _Plm=$(cat ${_thisAlias} \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      if [ -d \"${_Plm}\" ]; then\n        if [ ! -e \"${_Plm}/index.php\" ] || [ ! -e \"${_Plm}/profiles\" ]; then\n          if [ ! 
-e \"${_Plm}/vendor\" ]; then\n            mkdir -p ${_usEr}/undo\n            ### mv -f ${_Plm} ${_usEr}/undo/ &> /dev/null\n            echo \"GHOST broken platform dir ${_Plm} detected and moved to ${_usEr}/undo/\"\n            ### mv -f ${_thisAlias} ${_usEr}/undo/ &> /dev/null\n            echo \"GHOST broken platform alias ${_thisAlias} detected and moved to ${_usEr}/undo/\"\n          fi\n        fi\n      else\n        mkdir -p ${_usEr}/undo\n        ### mv -f ${_thisAlias} ${_usEr}/undo/ &> /dev/null\n        echo \"GHOST nodir platform alias ${_thisAlias} detected and moved to ${_usEr}/undo/\"\n      fi\n    else\n      _T_SITE_NAME=\"${_aliasName}\"\n      if [[ \"${_T_SITE_NAME}\" =~ \".restore\"($) ]]; then\n        _IS_SITE=NO\n        mkdir -p ${_usEr}/undo\n        ### mv -f ${_usEr}/.drush/${_T_SITE_NAME}.alias.drushrc.php ${_usEr}/undo/ &> /dev/null\n        ### mv -f ${_usEr}/config/server_master/nginx/vhost.d/${_T_SITE_NAME} ${_usEr}/undo/ &> /dev/null\n        echo \"GHOST drushrc and vhost for ${_T_SITE_NAME} detected and moved to ${_usEr}/undo/\"\n      else\n        _T_SITE_FDIR=$(cat ${_thisAlias} \\\n          | grep \"site_path'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        if [ -e \"${_T_SITE_FDIR}/drushrc.php\" ] \\\n          && [ -e \"${_T_SITE_FDIR}/files\" ] \\\n          && [ -e \"${_T_SITE_FDIR}/private\" ]; then\n          if [ ! -e \"${_Dir}/modules\" ]; then\n            mkdir ${_Dir}/modules\n          fi\n          _IS_SITE=YES\n        else\n          mkdir -p ${_usEr}/undo\n          ### mv -f ${_usEr}/.drush/${_T_SITE_NAME}.alias.drushrc.php ${_usEr}/undo/ &> /dev/null\n          echo \"GHOST drushrc for ${_T_SITE_NAME} detected and moved to ${_usEr}/undo/\"\n          if [[ ! 
\"${_T_SITE_FDIR}\" =~ \"aegir/distro\" ]]; then\n            ### mv -f ${_usEr}/config/server_master/nginx/vhost.d/${_T_SITE_NAME} ${_usEr}/undo/ghost-vhost-${_T_SITE_NAME} &> /dev/null\n            echo \"GHOST vhost for ${_T_SITE_NAME} detected and moved to ${_usEr}/undo/\"\n          fi\n          if [ -d \"${_T_SITE_FDIR}\" ]; then\n            ### mv -f ${_T_SITE_FDIR} ${_usEr}/undo/ghost-site-${_T_SITE_NAME} &> /dev/null\n            echo \"GHOST site dir for ${_T_SITE_NAME} detected and moved from ${_T_SITE_FDIR} to ${_usEr}/undo/\"\n          fi\n        fi\n      fi\n    fi\n  done\n}\n\n_if_le_hm_ssl_old() {\n  # Get the current time in seconds since epoch\n  _current_time=$(date +%s)\n\n  # Path to the file you want to check\n  _filePath=\"$1\"\n\n  # Define the thresholds\n  _recent_threshold_days=60  # 60 days to consider for new updates\n  _update_check_days=30      # Don't update NEW if it was already set within the last 30 days\n\n  # Check if the path is a symlink\n  if [ -L \"${_filePath}\" ]; then\n    _target_file=\"$(readlink -f \"${_filePath}\")\"\n    # Get the file's modification time in seconds since epoch\n    _file_mod_time=$(stat -c %Y \"${_target_file}\")\n  else\n    # Get the file's modification time in seconds since epoch\n    _file_mod_time=$(stat -c %Y \"${_filePath}\")\n  fi\n\n  # Calculate the time difference in minutes\n  _time_diff_minutes=$(( (_current_time - _file_mod_time) / 60 ))\n\n  # Calculate the time difference in days\n  _time_diff_days=$(( _time_diff_minutes / 1440 ))\n\n  # Calculate the last update check time (from some state file, if exists)\n  if [ -f \"${_filePath}.lastupdate\" ]; then\n    _last_update_time=$(cat \"${_filePath}.lastupdate\")\n  else\n    _last_update_time=0\n  fi\n\n  _last_update_diff_days=$(( (_current_time - _last_update_time) / 86400 ))  # 86400 seconds in a day\n\n  # Check if the file was modified within the last 30 minutes\n  if [ \"${_time_diff_minutes}\" -lt 30 ]; then\n    
_crtLastMod=NEW\n  # Check if the file was modified within the last 60 days and not marked NEW in the last 30 days\n  elif [ \"${_time_diff_days}\" -le \"${_recent_threshold_days}\" ] && [ \"${_last_update_diff_days}\" -ge \"${_update_check_days}\" ]; then\n    _crtLastMod=NEW\n    echo ${_current_time} > \"${_filePath}.lastupdate\"\n  else\n    _crtLastMod=OLD\n  fi\n}\n\n_if_le_hm_ssl_crt_key_copy() {\n  if [ -e \"${_leCrtPath}/fullchain.pem\" ]; then\n    _crtPath=\"${_leCrtPath}/fullchain.pem\"\n  elif [ -e \"${_leCrtPath}/cert.pem\" ]; then\n    _crtPath=\"${_leCrtPath}/cert.pem\"\n  fi\n  if [ -e \"${_crtPath}\" ]; then\n    if [ -L \"${_crtPath}\" ]; then\n      _crtPathR=\"$(readlink -n \"${_crtPath}\")\"\n      if [ -f \"${_leCrtPath}/${_crtPathR}\" ]; then\n        rm -f /etc/ssl/private/${_hmFront}.crt\n        cp -a ${_leCrtPath}/${_crtPathR} /etc/ssl/private/${_hmFront}.crt\n      fi\n    else\n      rm -f /etc/ssl/private/${_hmFront}.crt\n      cp -a ${_crtPath} /etc/ssl/private/${_hmFront}.crt\n    fi\n  fi\n  _keyPath=\"${_leCrtPath}/privkey.pem\"\n  if [ -e \"${_keyPath}\" ]; then\n    if [ -L \"${_keyPath}\" ]; then\n      _keyPathR=\"$(readlink -n \"${_keyPath}\")\"\n      if [ -f \"${_leCrtPath}/${_keyPathR}\" ]; then\n        rm -f /etc/ssl/private/${_hmFront}.key\n        cp -a ${_leCrtPath}/${_keyPathR} /etc/ssl/private/${_hmFront}.key\n      fi\n    else\n      rm -f /etc/ssl/private/${_hmFront}.key\n      cp -a ${_keyPath} /etc/ssl/private/${_hmFront}.key\n    fi\n  fi\n}\n\n_le_hm_ssl_check_update() {\n  _leCrtPath=\n  _exeLe=\"${_usEr}/tools/le/dehydrated\"\n  if [ -e \"${_usEr}/log/domain.txt\" ]; then\n    _hmFront=$(cat ${_usEr}/log/domain.txt 2>&1)\n    _hmFront=$(echo -n ${_hmFront} | tr -d \"\\n\" 2>&1)\n  fi\n  if [ -e \"${_usEr}/log/extra_domain.txt\" ]; then\n    _hmFrontExtra=$(cat ${_usEr}/log/extra_domain.txt 2>&1)\n    _hmFrontExtra=$(echo -n ${_hmFrontExtra} | tr -d \"\\n\" 2>&1)\n  fi\n  if [ -z \"${_hmFront}\" ]; then\n    
if [ -e \"${_usEr}/.drush/hostmaster.alias.drushrc.php\" ]; then\n      _hmFront=$(cat ${_usEr}/.drush/hostmaster.alias.drushrc.php \\\n        | grep \"uri'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n    fi\n  fi\n  if [ ! -z \"${_hmFront}\" ]; then\n    _leCrtPath=\"${_usEr}/tools/le/certs/${_hmFront}\"\n  fi\n  if [ -x \"${_exeLe}\" ] \\\n    && [ ! -z \"${_hmFront}\" ] \\\n    && [ -e \"${_leCrtPath}/fullchain.pem\" ]; then\n    _DOM=$(date +%e)\n    _DOM=${_DOM//[^0-9]/}\n    _RDM=$((RANDOM%25+6))\n    if [ \"${_DOM}\" = \"${_RDM}\" ] || [ -e \"${_usEr}/static/control/force-ssl-certs-rebuild.info\" ]; then\n      if [ ! -e \"${_usEr}/log/ctrl/site.${_hmFront}.cert-x1-rebuilt.info\" ]; then\n        _leParams=\"--cron --ipv4 --preferred-chain 'ISRG Root X1' --force\"\n        mkdir -p ${_usEr}/log/ctrl\n        touch ${_usEr}/log/ctrl/site.${_hmFront}.cert-x1-rebuilt.info\n      else\n        _leParams=\"--cron --ipv4 --preferred-chain 'ISRG Root X1'\"\n      fi\n    else\n      _leParams=\"--cron --ipv4 --preferred-chain 'ISRG Root X1'\"\n    fi\n    if [ ! 
-z \"${_hmFrontExtra}\" ]; then\n      echo \"Running LE cert check directly for hostmaster ${_HM_U} with ${_hmFrontExtra}\"\n      su -s /bin/bash - ${_HM_U} -c \"${_exeLe} ${_leParams} --domain ${_hmFront} --domain ${_hmFrontExtra}\"\n      wait\n    else\n      echo \"Running LE cert check directly for hostmaster ${_HM_U}\"\n      su -s /bin/bash - ${_HM_U} -c \"${_exeLe} ${_leParams} --domain ${_hmFront}\"\n      wait\n    fi\n  fi\n  _crtLastMod=OLD\n  _if_le_hm_ssl_old \"${_leCrtPath}/fullchain.pem\"\n  if [ \"${_crtLastMod}\" = \"NEW\" ]; then\n    echo \"Copying NEW LE cert for hostmaster ${_hmFront} to /etc/ssl/private/\"\n    _if_le_hm_ssl_crt_key_copy\n  else\n    echo \"No new LE cert for hostmaster ${_hmFront} to copy\"\n  fi\n}\n\n_le_ssl_check_update() {\n  _exeLe=\"${_usEr}/tools/le/dehydrated\"\n  _Vht=\"${_usEr}/config/server_master/nginx/vhost.d/${_Dom}\"\n  if [ -x \"${_exeLe}\" ] && [ -e \"${_Vht}\" ]; then\n    _SSL_ON_TEST=$(cat ${_Vht} | grep \"443 ssl\" 2>&1)\n    if [[ \"${_SSL_ON_TEST}\" =~ \"443 ssl\" ]]; then\n      if [ -e \"${_usEr}/tools/le/certs/${_Dom}/fullchain.pem\" ]; then\n        echo \"Running LE cert check directly for ${_Dom}\"\n        _usEaliases=\"\"\n        _siTealiases=`cat ${_Vht} \\\n          | grep \"server_name\" \\\n          | sed \"s/server_name//g; s/;//g\" \\\n          | sort | uniq \\\n          | tr -d \"\\n\" \\\n          | sed \"s/  / /g; s/  / /g; s/  / /g\" \\\n          | sort | uniq`\n        for _aliAs in `echo \"${_siTealiases}\"`; do\n          if [ -e \"${_usEr}/static/control/wildcard-enable-${_Dom}.info\" ]; then\n            _Dom=$(echo ${_Dom} | sed 's/^www.//g' 2>&1)\n            if [ -z \"${_usEaliases}\" ] \\\n              && [ ! -z \"${_aliAs}\" ] \\\n              && [[ ! \"${_aliAs}\" =~ \".nodns.\" ]] \\\n              && [[ ! 
\"${_aliAs}\" =~ \"${_Dom}\" ]]; then\n              _usEaliases=\"--domain ${_aliAs}\"\n              echo \"--domain ${_aliAs}\"\n            else\n              if [ ! -z \"${_aliAs}\" ] \\\n                && [[ ! \"${_aliAs}\" =~ \".nodns.\" ]] \\\n                && [[ ! \"${_aliAs}\" =~ \"${_Dom}\" ]]; then\n                _usEaliases=\"${_usEaliases} --domain ${_aliAs}\"\n                echo \"--domain ${_aliAs}\"\n              fi\n            fi\n          else\n            if [[ ! \"${_aliAs}\" =~ \".nodns.\" ]]; then\n              echo \"--domain ${_aliAs}\"\n              if [ -z \"${_usEaliases}\" ] && [ ! -z \"${_aliAs}\" ]; then\n                _usEaliases=\"--domain ${_aliAs}\"\n              else\n                if [ ! -z \"${_aliAs}\" ]; then\n                  _usEaliases=\"${_usEaliases} --domain ${_aliAs}\"\n                fi\n              fi\n            else\n              echo \"ignored alias ${_aliAs}\"\n            fi\n          fi\n        done\n        _DOM=$(date +%e)\n        _DOM=${_DOM//[^0-9]/}\n        _RDM=$((RANDOM%25+6))\n        if [ \"${_DOM}\" = \"${_RDM}\" ] || [ -e \"${_usEr}/static/control/force-ssl-certs-rebuild.info\" ]; then\n          if [ ! 
-e \"${_usEr}/log/ctrl/site.${_Dom}.cert-x1-rebuilt.info\" ]; then\n            _leParams=\"--cron --ipv4 --preferred-chain 'ISRG Root X1' --force\"\n            mkdir -p ${_usEr}/log/ctrl\n            touch ${_usEr}/log/ctrl/site.${_Dom}.cert-x1-rebuilt.info\n          else\n            _leParams=\"--cron --ipv4 --preferred-chain 'ISRG Root X1'\"\n          fi\n        else\n          _leParams=\"--cron --ipv4 --preferred-chain 'ISRG Root X1'\"\n        fi\n        _dhArgs=\"--domain ${_Dom} ${_usEaliases}\"\n        if [ -e \"${_usEr}/static/control/wildcard-enable-${_Dom}.info\" ]; then\n          _Dom=$(echo ${_Dom} | sed 's/^www.//g' 2>&1)\n          echo \"--domain *.${_Dom}\"\n          if [ -e \"${_usEr}/static/control/cloudflare-dns-ssl-py.info\" ] \\\n            || [ -e \"${_usEr}/static/control/cloudflare-dns-ssl-sh.info\" ]; then\n            [ -e \"${_usEr}/static/control/cloudflare-dns-ssl-py.info\" ] && chattr +i ${_usEr}/static/control/cloudflare-dns-ssl-py.info\n            [ -e \"${_usEr}/static/control/cloudflare-dns-ssl-sh.info\" ] && chattr +i ${_usEr}/static/control/cloudflare-dns-ssl-sh.info\n            export CF_DNS_SERVERS='8.8.8.8 8.8.4.4'\n            export CF_SETTLE_TIME='30'\n            export CF_DEBUG='true'\n            if [ ! -e \"${_usEr}/tools/le/hooks/cloudflare-sh/cf-hook.sh\" ]; then\n              _apt_clean_update\n              apt-get install gawk jq publicsuffix ldnsutils ${_aptYesUnth} 2> /dev/null\n              mkdir -p ${_usEr}/tools/le/hooks\n              cd ${_usEr}/tools/le\n              git clone https://github.com/omega8cc/dehydrated-hook-cloudflare hooks/cloudflare-sh 2> /dev/null\n              chmod 755 ${_usEr}/tools/le/hooks/cloudflare-sh/cf-hook.sh\n            fi\n            if [ ! 
-e \"${_usEr}/tools/le/hooks/cloudflare-py/hook.py\" ]; then\n              _apt_clean_update\n              apt-get install python3-pip python-is-python3 ${_aptYesUnth} 2> /dev/null\n              mkdir -p ${_usEr}/tools/le/hooks\n              cd ${_usEr}/tools/le\n              git clone https://github.com/omega8cc/letsencrypt-cloudflare-hook hooks/cloudflare-py 2> /dev/null\n              chmod 755 ${_usEr}/tools/le/hooks/cloudflare-py/hook.py\n              pip3 install -r hooks/cloudflare-py/requirements.txt 2> /dev/null\n            fi\n            if [ -e \"${_usEr}/static/control/cloudflare-dns-ssl-py.info\" ]; then\n              _thisHook=\"${_usEr}/tools/le/hooks/cloudflare-py/hook.py\"\n            elif [ -e \"${_usEr}/static/control/cloudflare-dns-ssl-sh.info\" ]; then\n              _thisHook=\"${_usEr}/tools/le/hooks/cloudflare-sh/cf-hook.sh\"\n            fi\n            if [ -e \"${_thisHook}\" ] && [ -e \"${_usEr}/tools/le/config\" ]; then\n              chattr +i ${_usEr}/tools/le/config\n              _dhArgs=\"--alias ${_Dom} --domain *.${_Dom} --domain ${_Dom} ${_usEaliases}\"\n              _dhArgs=\"${_dhArgs} --challenge dns-01 --hook '${_thisHook}'\"\n            fi\n          else\n            _dhArgs=\"--alias ${_Dom} --domain *.${_Dom} --domain ${_Dom} ${_usEaliases}\"\n          fi\n        fi\n        echo \"_leParams is ${_leParams}\"\n        echo \"_dhArgs is ${_dhArgs}\"\n        su -s /bin/bash - ${_HM_U} -c \"${_exeLe} ${_leParams} ${_dhArgs}\"\n        wait\n        if [ -e \"${_usEr}/static/control/wildcard-enable-${_Dom}.info\" ]; then\n          sleep 30\n        else\n          sleep 3\n        fi\n        echo ${_MOMENT} >> /var/log/boa/le/${_Dom}\n      fi\n    fi\n  fi\n}\n\n_if_gen_goaccess() {\n  _PrTestPower=$(grep \"POWER\" /root/.${_HM_U}.octopus.cnf 2>&1)\n  _PrTestPhantom=$(grep \"PHANTOM\" /root/.${_HM_U}.octopus.cnf 2>&1)\n  _PrTestUltra=$(grep \"ULTRA\" /root/.${_HM_U}.octopus.cnf 2>&1)\n  
_PrTestMonster=$(grep \"MONSTER\" /root/.${_HM_U}.octopus.cnf 2>&1)\n  _PrTestCluster=$(grep \"CLUSTER\" /root/.${_HM_U}.octopus.cnf 2>&1)\n  if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n    || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n    || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n    || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n    || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]]; then\n    _isWblgx=\"$(which weblogx)\"\n    if [ -x \"${_isWblgx}\" ]; then\n      ${_isWblgx} --site=\"${1}\" --env=\"${_HM_U}\"\n      wait\n      if [ ! -e \"/data/disk/${_HM_U}/static/goaccess\" ]; then\n        mkdir -p /data/disk/${_HM_U}/static/goaccess\n      fi\n      if [ -e \"/var/www/adminer/access/${_HM_U}/${1}/index.html\" ]; then\n        cp -af /var/www/adminer/access/${_HM_U}/${1} /data/disk/${_HM_U}/static/goaccess/\n      else\n        rm -rf /var/www/adminer/access/${_HM_U}/${1}\n        rm -rf /data/disk/${_HM_U}/static/goaccess/${1}\n      fi\n    fi\n  fi\n}\n\n_daily_process() {\n  _cleanup_ghost_vhosts\n  _cleanup_ghost_drushrc\n  for _Site in `find ${_usEr}/config/server_master/nginx/vhost.d \\\n    -maxdepth 1 -mindepth 1 -type f | sort`; do\n    _MOMENT=$(date +%y%m%d-%H%M%S)\n    echo ${_MOMENT} Start Counting Site ${_Site}\n    _Dom=$(echo ${_Site} | cut -d'/' -f9 | awk '{ print $1}' 2>&1)\n    _Dan=\n    _Plx=\n    _Plr=\n    _Dir=\n    _codeBaseCheckDir=\n    _codeBaseCheckFile=\n    _codeBaseCheckCtrl=\n    if [ -e \"${_usEr}/config/server_master/nginx/vhost.d/${_Dom}\" ]; then\n      _Plx=$(cat ${_usEr}/config/server_master/nginx/vhost.d/${_Dom} \\\n        | grep \"root \" \\\n        | cut -d: -f2 \\\n        | awk '{ print $2}' \\\n        | sed \"s/[\\;]//g\" 2>&1)\n      if [[ \"${_Plx}\" =~ \"aegir/distro\" ]]; then\n        _Dan=hostmaster\n      else\n        _Dan=\"${_Dom}\"\n      fi\n    fi\n    _STATUS_DISABLED=NO\n    _STATUS_TEST=$(grep \"Do not reveal Aegir front-end URL here\" \\\n      
${_usEr}/config/server_master/nginx/vhost.d/${_Dom} 2>&1)\n    if [[ \"${_STATUS_TEST}\" =~ \"Do not reveal Aegir front-end URL here\" ]]; then\n      _STATUS_DISABLED=YES\n      echo \"${_Dom} site is DISABLED\"\n    fi\n    if [ -e \"${_usEr}/.drush/${_Dan}.alias.drushrc.php\" ] \\\n      && [ \"${_STATUS_DISABLED}\" = \"NO\" ]; then\n      echo \"Dom is ${_Dom}\"\n      _Dir=$(cat ${_usEr}/.drush/${_Dan}.alias.drushrc.php \\\n        | grep \"site_path'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      _DIR_CTRL_F=\"${_Dir}/modules/boa_site_control.ini\"\n      _Plr=$(cat ${_usEr}/.drush/${_Dan}.alias.drushrc.php \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      _PLR_CTRL_F=\"${_Plr}/sites/all/modules/boa_platform_control.ini\"\n      if [ -e \"${_Plr}\" ]; then\n        _PlrID=$(echo ${_Plr} \\\n          | openssl md5 \\\n          | awk '{ print $2}' \\\n          | tr -d \"\\n\" 2>&1)\n        if [ -e \"/root/.allow-codebasecheck.cnf\" ]; then\n          _codeBaseCheckDir=\"${_usEr}/log/ctrl\"\n          _codeBaseCheckFile=\"plr.${_PlrID}.codebasecheck-${_NOW}.info\"\n          _codeBaseCheckCtrl=\"${_codeBaseCheckDir}/${_codeBaseCheckFile}\"\n          [ ! -e \"${_codeBaseCheckDir}\" ] && mkdir \"${_codeBaseCheckDir}\"\n          if [ -x \"/opt/local/bin/codebasecheck\" ] \\\n            && [ -e \"${_codeBaseCheckDir}\" ] \\\n            && [ ! -e \"${_codeBaseCheckCtrl}\" ]; then\n            codebasecheck \"${_Plr}\"\n            wait\n            touch \"${_codeBaseCheckCtrl}\"\n          fi\n        fi\n        _fix_platform_control_files\n        _fix_o_contrib_symlink\n        if [ -e \"${_Dir}/drushrc.php\" ]; then\n          cd ${_Dir}\n          if [ \"${_Dan}\" = \"hostmaster\" ]; then\n            _STATUS=OK\n            if [ ! 
-f \"${_usEr}/log/ctrl/plr.${_PlrID}.hm-fix-${_NOW}.info\" ]; then\n              su -s /bin/bash - ${_HM_U} -c \"drush8 cc drush\" &> /dev/null\n              wait\n              rm -rf ${_usEr}/.tmp/cache\n              _run_drush8_hmr_cmd \"dis update syslog dblog -y\"\n              _run_drush8_hmr_cmd \"cron\"\n              _run_drush8_hmr_cmd \"cache-clear all\"\n              _run_drush8_hmr_cmd \"cache-clear all\"\n              _run_drush8_hmr_cmd \"utf8mb4-convert-databases -y\"\n              touch ${_usEr}/log/ctrl/plr.${_PlrID}.hm-fix-${_NOW}.info\n            fi\n          else\n            if [ -e \"${_Plr}/modules/o_contrib_seven\" ] \\\n              || [ -e \"${_Plr}/modules/o_contrib\" ]; then\n              _check_site_status_with_drush8\n            fi\n          fi\n          if [ ! -z \"${_Dan}\" ] \\\n            && [ \"${_Dan}\" != \"hostmaster\" ]; then\n            _if_site_db_conversion\n            searchStringB=\".dev.\"\n            searchStringC=\".devel.\"\n            searchStringD=\".temp.\"\n            searchStringE=\".tmp.\"\n            searchStringF=\".temporary.\"\n            searchStringG=\".test.\"\n            searchStringH=\".testing.\"\n            case ${_Dom} in\n              *\"$searchStringB\"*) ;;\n              *\"$searchStringC\"*) ;;\n              *\"$searchStringD\"*) ;;\n              *\"$searchStringE\"*) ;;\n              *\"$searchStringF\"*) ;;\n              *\"$searchStringG\"*) ;;\n              *\"$searchStringH\"*) ;;\n              *)\n              if [ \"${_MODULES_FIX}\" = \"YES\" ]; then\n                _CHECK_IS=OFF\n                #if [ \"${_STATUS}\" = \"OK\" ]; then\n                  _fix_modules\n                #fi\n                _fix_robots_txt\n                _fix_llms_txt\n              fi\n              _le_ssl_check_update\n              if [ \"${_ENABLE_GOACCESS}\" = \"YES\" ] && [ -e \"${_usEr}/static/control/goaccess/${_Dom}.info\" ]; then\n                
_noPrefixDom=\"${_Dom#www.}\"\n                _if_gen_goaccess ${_noPrefixDom}\n                _if_gen_goaccess ${_Dom}\n              fi\n              ;;\n            esac\n            _fix_site_control_files\n            if [ -e \"${_Plr}/modules/o_contrib_seven\" ] \\\n              || [ -e \"${_Plr}/modules/o_contrib\" ]; then\n              if [ \"${_CLEAR_BOOST}\" = \"YES\" ]; then\n                _fix_boost_cache\n              fi\n              _fix_user_register_protection_with_vSet\n              if [[ \"${_xSrl}\" =~ \"OFF\" ]]; then\n                _run_drush8_cmd \"advagg-force-new-aggregates\"\n                _run_drush8_cmd \"cache-clear all\"\n                _run_drush8_cmd \"cache-clear all\"\n              fi\n            fi\n          fi\n        fi\n        ###\n        ### Detect permissions fix overrides, if set per platform.\n        ###\n        _DONT_TOUCH_PERMISSIONS=NO\n        if [ -e \"${_PLR_CTRL_F}\" ]; then\n          _FIX_PERMISSIONS_PRESENT=$(grep \"fix_files_permissions_daily\" \\\n            ${_PLR_CTRL_F} 2>&1)\n          if [[ \"${_FIX_PERMISSIONS_PRESENT}\" =~ \"fix_files_permissions_daily\" ]]; then\n            _DO_NOTHING=YES\n          else\n            echo \";fix_files_permissions_daily = TRUE\" >> ${_PLR_CTRL_F}\n          fi\n          _FIX_PERMISSIONS_TEST=$(grep \"^fix_files_permissions_daily = FALSE\" \\\n            ${_PLR_CTRL_F} 2>&1)\n          if [[ \"${_FIX_PERMISSIONS_TEST}\" =~ \"fix_files_permissions_daily = FALSE\" ]]; then\n            _DONT_TOUCH_PERMISSIONS=YES\n          fi\n        fi\n        if [ -e \"${_Plr}/profiles\" ] \\\n          && [ -e \"${_Plr}/web.config\" ] \\\n          && [ ! -e \"${_Plr}/core\" ] \\\n          && [ ! 
-f \"${_Plr}/profiles/SA-CORE-2014-005-D7-fix.info\" ]; then\n          _PATCH_TEST=$(grep \"foreach (array_values(\\$data)\" \\\n            ${_Plr}/includes/database/database.inc 2>&1)\n          if [[ ! \"${_PATCH_TEST}\" =~ \"array_values\" ]]; then\n            _DONT_TOUCH_PERMISSIONS=NO\n          fi\n        fi\n        if [ -e \"/root/.dont.touch.permissions.cnf\" ]; then\n          _DONT_TOUCH_PERMISSIONS=YES\n        fi\n        if [ \"${_DONT_TOUCH_PERMISSIONS}\" = \"NO\" ] \\\n          && [ \"${_PERMISSIONS_FIX}\" = \"YES\" ]; then\n          _fix_permissions\n        fi\n      fi\n      _MOMENT=$(date +%y%m%d-%H%M%S)\n      echo ${_MOMENT} End Counting Site ${_Site}\n    fi\n  done\n}\n\n_delete_this_empty_hostmaster_platform() {\n  _run_drush8_hmr_master_cmd \"hosting-task @platform_${_T_PFM_NAME} delete --force\"\n  echo \"Old empty platform_${_T_PFM_NAME} will be deleted\"\n}\n\n_check_old_empty_hostmaster_platforms() {\n  if [ ! -z \"${_DEL_OLD_EMPTY_PLATFORMS}\" ] \\\n    && [ \"${_DEL_OLD_EMPTY_PLATFORMS}\" -gt 0 ]; then\n    _DO_NOTHING=YES\n  else\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _DEL_OLD_EMPTY_PLATFORMS=\"3\"\n    else\n      _DEL_OLD_EMPTY_PLATFORMS=\"7\"\n    fi\n  fi\n  if [ ! 
-z \"${_DEL_OLD_EMPTY_PLATFORMS}\" ]; then\n    if [ \"${_DEL_OLD_EMPTY_PLATFORMS}\" -gt 0 ]; then\n      echo \"_DEL_OLD_EMPTY_PLATFORMS is set to \\\n        ${_DEL_OLD_EMPTY_PLATFORMS} days on /var/aegir instance\"\n      for _Platform in `find /var/aegir/.drush/platform_* -maxdepth 1 -mtime \\\n        +${_DEL_OLD_EMPTY_PLATFORMS} -type f | sort`; do\n        _T_PFM_NAME=$(echo \"${_Platform}\" \\\n          | sed \"s/.*platform_//g; s/.alias.drushrc.php//g\" \\\n          | awk '{ print $1}' 2>&1)\n        _T_PFM_ROOT=$(cat ${_Platform} \\\n          | grep \"root'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        _T_PFM_SITE=$(grep \"${_T_PFM_ROOT}/sites/\" \\\n          /var/aegir/.drush/*.drushrc.php \\\n          | grep site_path 2>&1)\n        if [ ! -e \"${_T_PFM_ROOT}/sites/all\" ] \\\n          || [ ! -e \"${_T_PFM_ROOT}/index.php\" ]; then\n          mkdir -p /var/aegir/undo\n          ### mv -f /var/aegir/.drush/platform_${_T_PFM_NAME}.alias.drushrc.php /var/aegir/undo/ &> /dev/null\n          echo \"GHOST platform ${_T_PFM_ROOT} detected; would be moved to /var/aegir/undo/\"\n        fi\n        if [[ \"${_T_PFM_SITE}\" =~ \".restore\" ]]; then\n          echo \"WARNING: ghost site leftover found: ${_T_PFM_SITE}\"\n        fi\n        if [ -z \"${_T_PFM_SITE}\" ] \\\n          && [ -e \"${_T_PFM_ROOT}/sites/all\" ]; then\n          _delete_this_empty_hostmaster_platform\n        fi\n      done\n    fi\n  fi\n}\n\n_delete_this_platform() {\n  _run_drush8_hmr_cmd \"hosting-task @platform_${_T_PFM_NAME} delete --force\"\n  echo \"Old empty platform_${_T_PFM_NAME} will be deleted\"\n}\n\n_check_old_empty_platforms() {\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [[ \"${_hName}\" =~ \"demo.aegir.cc\" ]] \\\n      || [ -e \"${_usEr}/static/control/platforms.info\" ]; then\n      _DO_NOTHING=YES\n    else\n      if [ ! -z \"${_DEL_OLD_EMPTY_PLATFORMS}\" ] \\\n        && [ \"${_DEL_OLD_EMPTY_PLATFORMS}\" -gt 0 ]; then\n        _DO_NOTHING=YES\n      else\n        _DEL_OLD_EMPTY_PLATFORMS=\"60\"\n      fi\n    fi\n  fi\n  if [ ! -z \"${_DEL_OLD_EMPTY_PLATFORMS}\" ]; then\n    if [ \"${_DEL_OLD_EMPTY_PLATFORMS}\" -gt 0 ]; then\n      echo \"_DEL_OLD_EMPTY_PLATFORMS is set to \\\n        ${_DEL_OLD_EMPTY_PLATFORMS} days on ${_HM_U} instance\"\n      for _Platform in `find ${_usEr}/.drush/platform_* -maxdepth 1 -mtime \\\n        +${_DEL_OLD_EMPTY_PLATFORMS} -type f | sort`; do\n        _T_PFM_NAME=$(echo \"${_Platform}\" \\\n          | sed \"s/.*platform_//g; s/.alias.drushrc.php//g\" \\\n          | awk '{ print $1}' 2>&1)\n        _T_PFM_ROOT=$(cat ${_Platform} \\\n          | grep \"root'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        _T_PFM_SITE=$(grep \"${_T_PFM_ROOT}/sites/\" \\\n          ${_usEr}/.drush/*.drushrc.php \\\n          | grep site_path 2>&1)\n        if [ ! -e \"${_T_PFM_ROOT}/sites/all\" ] \\\n          || [ ! -e \"${_T_PFM_ROOT}/index.php\" ]; then\n          if [ ! -e \"${_T_PFM_ROOT}/vendor\" ]; then\n            mkdir -p ${_usEr}/undo\n            ### mv -f ${_usEr}/.drush/platform_${_T_PFM_NAME}.alias.drushrc.php ${_usEr}/undo/ &> /dev/null\n            echo \"GHOST platform ${_T_PFM_ROOT} detected; would be moved to ${_usEr}/undo/\"\n          fi\n        fi\n        if [[ \"${_T_PFM_SITE}\" =~ \".restore\" ]]; then\n          echo \"WARNING: ghost site leftover found: ${_T_PFM_SITE}\"\n        fi\n        if [ -z \"${_T_PFM_SITE}\" ] \\\n          && [ -e \"${_T_PFM_ROOT}/sites/all\" ]; then\n          _delete_this_platform\n        fi\n      done\n    fi\n  fi\n}\n\n_purge_cruft_machine() {\n\n  if [ ! -z \"${_DEL_OLD_TMP}\" ] && [ \"${_DEL_OLD_TMP}\" -gt 0 ]; then\n    _PURGE_TMP=\"${_DEL_OLD_TMP}\"\n  else\n    _PURGE_TMP=\"0\"\n  fi\n\n  if [ ! 
-z \"${_DEL_OLD_BACKUPS}\" ] && [ \"${_DEL_OLD_BACKUPS}\" -gt 0 ]; then\n    _PURGE_BACKUPS=\"${_DEL_OLD_BACKUPS}\"\n  else\n    _PURGE_BACKUPS=\"14\"\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _PURGE_BACKUPS=\"7\"\n    fi\n  fi\n\n  _LOW_NR=\"2\"\n  _PURGE_CTRL=\"14\"\n\n  find ${_usEr}/log/ctrl/*cert-x1-rebuilt.info \\\n    -mtime +${_PURGE_CTRL} -type f -exec rm -rf {} \\; &> /dev/null\n\n  find ${_usEr}/log/ctrl/plr* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n\n  find ${_usEr}/log/ctrl/*rom-fix.info \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n\n  find ${_usEr}/backups/* -mtime +${_PURGE_BACKUPS} -exec \\\n    rm -rf {} \\; &> /dev/null\n  find ${_usEr}/clients/*/backups/* -mtime +${_PURGE_BACKUPS} -exec \\\n    rm -rf {} \\; &> /dev/null\n  find ${_usEr}/backup-exports/* -mtime +${_PURGE_TMP} -type f -exec \\\n    rm -rf {} \\; &> /dev/null\n\n  find /var/aegir/backups/* -mtime +${_PURGE_BACKUPS} -exec \\\n    rm -rf {} \\; &> /dev/null\n  find /var/aegir/clients/*/backups/* -mtime +${_PURGE_BACKUPS} -exec \\\n    rm -rf {} \\; &> /dev/null\n  find /var/aegir/backup-exports/* -mtime +${_PURGE_TMP} -type f -exec \\\n    rm -rf {} \\; &> /dev/null\n\n  find ${_usEr}/distro/*/*/sites/*/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/distro/*/*/sites/*/private/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n\n  find ${_usEr}/static/*/*/*/*/*/sites/*/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/*/sites/*/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/sites/*/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/sites/*/files/backup_migrate/*/* 
\\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/sites/*/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n\n  find ${_usEr}/static/*/*/*/*/*/sites/*/private/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/*/sites/*/private/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/sites/*/private/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/sites/*/private/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/sites/*/private/files/backup_migrate/*/* \\\n    -mtime +${_PURGE_BACKUPS} -type f -exec rm -rf {} \\; &> /dev/null\n\n  find ${_usEr}/distro/*/*/sites/*/files/tmp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/distro/*/*/sites/*/private/temp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/*/*/sites/*/files/tmp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/*/*/sites/*/private/temp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/*/sites/*/files/tmp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/*/sites/*/private/temp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/sites/*/files/tmp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/*/sites/*/private/temp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/sites/*/files/tmp/* \\\n    -mtime 
+${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/*/sites/*/private/temp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/sites/*/files/tmp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/static/*/sites/*/private/temp/* \\\n    -mtime +${_PURGE_TMP} -type f -exec rm -rf {} \\; &> /dev/null\n\n  find /home/${_HM_U}.ftp/.tmp/* \\\n    -mtime +${_PURGE_TMP} -exec rm -rf {} \\; &> /dev/null\n  find /home/${_HM_U}.ftp/tmp/* \\\n    -mtime +${_PURGE_TMP} -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/.tmp/* \\\n    -mtime +${_PURGE_TMP} -exec rm -rf {} \\; &> /dev/null\n  find ${_usEr}/tmp/* \\\n    -mtime +${_PURGE_TMP} -exec rm -rf {} \\; &> /dev/null\n\n  chown -R ${_HM_U}:users ${_usEr}/tools/le\n  mkdir -p ${_usEr}/static/trash\n  chown ${_HM_U}.ftp:users ${_usEr}/static/trash &> /dev/null\n  find ${_usEr}/static/trash/* \\\n    -mtime +${_PURGE_TMP} -exec rm -rf {} \\; &> /dev/null\n\n  for i in $(dir -d /home/${_HM_U}.ftp/platforms/* 2>/dev/null); do\n    if [ -e \"${i}\" ]; then\n      _RevisionTest=$(ls ${i} \\\n        | wc -l \\\n        | tr -d \"\\n\" 2>&1)\n      if [ \"${_RevisionTest}\" -lt \"${_LOW_NR}\" ] \\\n        && [ ! -z \"${_RevisionTest}\" ]; then\n        if [ -d \"/home/${_HM_U}.ftp/platforms\" ]; then\n          chattr -i /home/${_HM_U}.ftp/platforms\n          chattr -i /home/${_HM_U}.ftp/platforms/* &> /dev/null\n        fi\n        _NOW=$(date +%y%m%d-%H%M%S)\n        [ -d \"/var/backups/ghost/${_HM_U}/${_NOW}\" ] || mkdir -p /var/backups/ghost/${_HM_U}/${_NOW}\n        echo \"Moving ${i} to /var/backups/ghost/${_HM_U}/${_NOW}\"\n        mv -f ${i} /var/backups/ghost/${_HM_U}/${_NOW}/\n      fi\n    fi\n  done\n\n  for i in $(dir -d ${_usEr}/distro/* 2>/dev/null); do\n    if [ -d \"${i}\" ]; then\n      if [ ! 
-d \"${i}/keys\" ]; then\n        mkdir -p ${i}/keys\n      fi\n      _RevisionTest=$(ls ${i} | wc -l 2>&1)\n      if [ \"${_RevisionTest}\" -lt 2 ] && [ ! -z \"${_RevisionTest}\" ]; then\n        echo \"_RevisionTest is ${_RevisionTest}\"\n        _NOW=$(date +%y%m%d-%H%M%S)\n        mkdir -p ${_usEr}/undo/dist/${_NOW}\n        mv -f ${i} ${_usEr}/undo/dist/${_NOW}/ &> /dev/null\n        echo \"GHOST revision ${i} detected and moved to ${_usEr}/undo/dist/${_NOW}/\"\n      fi\n    fi\n  done\n\n  for i in $(dir -d ${_usEr}/distro/* 2>/dev/null); do\n    if [ -e \"${i}\" ]; then\n      _distTrNr=$(echo ${i} \\\n        | cut -d'/' -f6 \\\n        | awk '{ print $1}' 2> /dev/null)\n      if [ -d \"/home/${_HM_U}.ftp/platforms\" ]; then\n        chattr -i /home/${_HM_U}.ftp/platforms\n        chattr -i /home/${_HM_U}.ftp/platforms/* &> /dev/null\n      fi\n      if [ ! -e \"${i}/keys\" ]; then\n        mkdir -p ${i}/keys\n        chown ${_HM_U}.ftp:${_WEBG} ${i}/keys &> /dev/null\n        chmod 02775 ${i}/keys &> /dev/null\n      fi\n      if [ ! -e \"/home/${_HM_U}.ftp/platforms/${_distTrNr}\" ]; then\n        mkdir -p /home/${_HM_U}.ftp/platforms/${_distTrNr}\n      fi\n      if [ -e \"${i}/keys\" ] && [ ! 
-e \"/home/${_HM_U}.ftp/platforms/${_distTrNr}/keys\" ]; then\n        ln -sfn ${i}/keys /home/${_HM_U}.ftp/platforms/${_distTrNr}/keys\n      fi\n      if [ -e \"/home/${_HM_U}.ftp/platforms/data\" ]; then\n        _NOW=$(date +%y%m%d-%H%M%S)\n        [ -d \"/var/backups/ghost/${_HM_U}/${_NOW}\" ] || mkdir -p /var/backups/ghost/${_HM_U}/${_NOW}\n        mv -f /home/${_HM_U}.ftp/platforms/data /var/backups/ghost/${_HM_U}/${_NOW}/platforms_data\n      fi\n      for _Codebase in `find ${i}/* \\\n        -maxdepth 1 \\\n        -mindepth 1 \\\n        -type d \\\n        | grep \"/sites$\" 2>&1`; do\n        _CodebaseName=$(echo ${_Codebase} \\\n          | cut -d'/' -f7 \\\n          | awk '{ print $1}' 2> /dev/null)\n        ln -sfn ${_Codebase} /home/${_HM_U}.ftp/platforms/${_distTrNr}/${_CodebaseName}\n        echo \"Fixed ${_CodebaseName} in ${_distTrNr} symlink to ${_Codebase} for ${_HM_U}.ftp\"\n      done\n    fi\n  done\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! 
-z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n  echo ${_CPU_NR} > /data/all/cpuinfo\n  chmod 644 /data/all/cpuinfo &> /dev/null\n}\n\n_get_load() {\n  read -r _one _five _rest <<< \"$(cat /proc/loadavg)\"\n  _O_LOAD=$(awk -v _load_value=\"${_one}\" -v _cpus=\"${_CPU_NR}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n}\n\n_load_control() {\n  : \"${_CPU_TASK_RATIO:=3.1}\"\n  [ -e \"/root/.force.sites.verify.cnf\" ] && _CPU_TASK_RATIO=4.1\n  _CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n  _O_LOAD_MAX=$(echo \"${_CPU_TASK_RATIO} * 100\" | bc -l)\n  _get_load\n}\n\n_shared_codebases_cleanup() {\n  if [ -L \"/data/all\" ]; then\n    _CLD=\"/data/disk/codebases-cleanup\"\n  else\n    _CLD=\"/var/backups/codebases-cleanup\"\n  fi\n  for i in `dir -d /data/all/*/`; do\n    if [ -d \"${i}o_contrib\" ]; then\n      for _Codebase in `find ${i}* -maxdepth 1 -mindepth 1 -type d \\\n        | grep \"/profiles$\" 2>&1`; do\n        _CodebaseDir=$(echo ${_Codebase} \\\n          | sed 's/\\/profiles//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        _CodebaseTest=$(find /data/disk/*/distro/*/*/ -maxdepth 1 -mindepth 1 \\\n          -type l -lname ${_Codebase} | sort 2>&1)\n        if [[ \"${_CodebaseTest}\" =~ \"No such file or directory\" ]] \\\n          || [ -z \"${_CodebaseTest}\" ]; then\n          mkdir -p ${_CLD}${i}\n          echo \"Moving no longer used ${_CodebaseDir} to ${_CLD}${i}\"\n          ### mv -f ${_CodebaseDir} ${_CLD}${i}\n        fi\n      done\n    fi\n  done\n}\n\n_ghost_codebases_cleanup() {\n  _CLD=\"/var/backups/ghost-codebases-cleanup\"\n  for i in `dir -d /data/disk/*/distro/*/*/`; do\n    _CodebaseTest=$(find ${i} -maxdepth 1 -mindepth 1 \\\n      -type d -name vendor | sort 2>&1)\n    for _vendor in ${_CodebaseTest}; do\n 
     _ParentDir=`echo ${_vendor} | sed \"s/\\/vendor//g\"`\n      if [ -d \"${_ParentDir}/docroot/sites/all\" ] \\\n        || [ -d \"${_ParentDir}/html/sites/all\" ] \\\n        || [ -d \"${_ParentDir}/web/sites/all\" ]; then\n        _CLEAN_THIS=SKIP\n      else\n        _CLEAN_THIS=\"${_ParentDir}\"\n        _TSTAMP=$(date +%y%m%d-%H%M%S)\n        mkdir -p ${_CLD}${i}${_TSTAMP}\n        echo \"Moving ghost ${_CLEAN_THIS} to ${_CLD}${i}${_TSTAMP}/\"\n        ### mv -f ${_CLEAN_THIS} ${_CLD}${i}${_TSTAMP}/\n      fi\n    done\n  done\n}\n\n_prepare_weblogx() {\n  _ARCHLOGS=/var/www/adminer/access/archive\n  mkdir -p ${_ARCHLOGS}/unzip\n  echo \"[+] SYNCING LOGS TO: ${_ARCHLOGS}\"\n  rsync -rlvz --size-only --progress /var/log/nginx/access* ${_ARCHLOGS}/\n  echo \"[+] COPYING LOGS TO: ${_ARCHLOGS}/unzip/\"\n  cp -af ${_ARCHLOGS}/access* ${_ARCHLOGS}/unzip/\n  echo \"[+] DECOMPRESSING GZ FILES\"\n  find ${_ARCHLOGS}/unzip -name \"*.gz\" -exec gunzip -f {} \\;\n  echo \"[+] RENAMING RAW FILES\"\n  for _log in `find ${_ARCHLOGS}/unzip \\\n    -maxdepth 1 -mindepth 1 -type f | sort`; do\n    mv -f ${_log} ${_log}.txt;\n  done\n  rm -f ${_ARCHLOGS}/unzip/*.txt.txt*\n  touch ${_ARCHLOGS}/unzip/.global.pid\n}\n\n_cleanup_weblogx() {\n  _ARCHLOGS=/var/www/adminer/access/archive\n  if [ -e \"${_ARCHLOGS}/unzip\" ]; then\n    rm -f ${_ARCHLOGS}/unzip/access*\n    rm -f ${_ARCHLOGS}/unzip/.global.pid\n  fi\n}\n\n_incident_email_report() {\n  if [ -e \"/root/.barracuda.cnf\" ]; then\n    _MY_EMAIL=\n    # shellcheck disable=SC1091\n    source /root/.barracuda.cnf\n    export _INCIDENT_REPORT=${_INCIDENT_REPORT//[^A-Z]/}\n    : \"${_INCIDENT_REPORT:=MINI}\"\n  fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" != \"OFF\" ]; then\n    echo \"Sending Incident Report Email on $(date)\" >> ${_thisLog}\n    s-nail -s \"Incident Report during daily.sh: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_thisLog}\n  fi\n}\n\n_incident_detection() {\n  # Array of errors to 
search for\n  declare -a _errors=(\n    \"urn:ietf:params:acme:error:unauthorized\"\n    \"urn:ietf:params:acme:error:badNonce\"\n    \"urn:ietf:params:acme:error:rateLimited\"\n    \"urn:ietf:params:acme:error:dns\"\n    \"urn:acme:error:serverInternal\"\n    \"Remote PerformValidation RPC failed\"\n    \"ModuleNotFoundError\"\n    \"Traceback\"\n    \"Drush command terminated abnormally\"\n    \"ArgumentCountError\"\n  )\n\n  # Loop through errors and check if any exist in the log file\n  for _error in \"${_errors[@]}\"; do\n    if grep -q \"${_error}\" \"${_thisLog}\"; then\n      _incident_email_report \"${_error}\"\n      break  # Exit the loop after the first detected error\n    fi\n  done\n}\n\n_daily_action() {\n  if [ -n \"${_ENABLE_GOACCESS}\" ] && [ \"${_ENABLE_GOACCESS}\" = \"YES\" ]; then\n    _prepare_weblogx\n  fi\n  for _usEr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n    _count_cpu\n    _load_control\n    if [ -e \"${_usEr}/config/server_master/nginx/vhost.d\" ] \\\n      && [ ! -e \"${_usEr}/log/proxied.pid\" ] \\\n      && [ ! 
-e \"${_usEr}/log/CANCELLED\" ]; then\n      if (( $(echo \"${_O_LOAD} < ${_O_LOAD_MAX}\" | bc -l) )); then\n        _HM_U=$(echo ${_usEr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n        _THIS_HM_SITE=$(cat ${_usEr}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"site_path'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n        echo \"User ${_usEr}\"\n        mkdir -p ${_usEr}/log/ctrl\n        su -s /bin/bash ${_HM_U} -c \"drush8 cc drush\" &> /dev/null\n        wait\n        rm -rf ${_usEr}/.tmp/cache\n        chage -M 99999 ${_HM_U}.ftp &> /dev/null\n        su -s /bin/bash - ${_HM_U}.ftp -c \"drush8 cc drush\" &> /dev/null\n        wait\n        chage -M 90 ${_HM_U}.ftp &> /dev/null\n        rm -rf /home/${_HM_U}.ftp/.tmp/cache\n        _SQL_CONVERT=NO\n        _DEL_OLD_EMPTY_PLATFORMS=\"0\"\n        if [ -e \"/root/.${_HM_U}.octopus.cnf\" ]; then\n          if [ -x \"/usr/bin/drush10\" ]; then\n            su -s /bin/bash - ${_HM_U} -c \"rm -f ~/.drush/sites/*.yml\"\n            wait\n            su -s /bin/bash - ${_HM_U} -c \"rm -f ~/.drush/sites/.checksums/*.md5\"\n            wait\n            su -s /bin/bash - ${_HM_U} -c \"drush10 core:init --yes\" &> /dev/null\n            wait\n            su -s /bin/bash - ${_HM_U} -c \"drush10 site:alias-convert ~/.drush/sites --yes\" &> /dev/null\n            wait\n          fi\n\n          _MY_OCTO_EMAIL=\n          _CLIENT_EMAIL=\n          _MY_EMAIL=\n\n          # shellcheck disable=SC1091\n          [ -e \"/root/.${_HM_U}.octopus.cnf\" ] && source /root/.${_HM_U}.octopus.cnf\n\n          [ -n \"${_MY_OCTO_EMAIL}\" ] && _MY_OCTO_EMAIL=${_MY_OCTO_EMAIL//\\\\\\@/\\@}\n          [ -n \"${_CLIENT_EMAIL}\" ] && _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n          [ -n \"${_MY_EMAIL}\" ] && _MY_EMAIL=${_MY_EMAIL//\\\\\\@/\\@}\n\n          _MY_OCTO_EMAIL_TEST=$(grep 
\"^_MY_OCTO_EMAIL=\" /root/.${_HM_U}.octopus.cnf 2>&1)\n          _MY_EMAIL_TEST=$(grep \"^_MY_EMAIL=\" /root/.${_HM_U}.octopus.cnf 2>&1)\n\n          if [[ ! \"${_MY_OCTO_EMAIL_TEST}\" =~ \"_MY_OCTO_EMAIL\" ]] \\\n            && [[ \"${_MY_EMAIL_TEST}\" =~ \"_MY_EMAIL\" ]] \\\n            && [ -n \"${_MY_EMAIL}\" ]; then\n            _MY_OCTO_EMAIL=\"${_MY_EMAIL}\"\n            sed -i \"s/^_MY_EMAIL=.*/_MY_OCTO_EMAIL=\\\"${_MY_EMAIL}\\\"/g\" /root/.${_HM_U}.octopus.cnf\n            _MY_EMAIL=\n          fi\n\n          if [ ! -z \"${_CLIENT_EMAIL}\" ] \\\n            && [[ ! \"${_CLIENT_EMAIL}\" =~ \"${_MY_OCTO_EMAIL}\" ]]; then\n            _ALRT_EMAIL=\"${_CLIENT_EMAIL}\"\n          else\n            _ALRT_EMAIL=\"${_MY_OCTO_EMAIL}\"\n          fi\n\n          if [ \"${_hostedSys}\" = \"YES\" ]; then\n            _BCC_EMAIL=\"omega8cc@gmail.com\"\n          else\n            _BCC_EMAIL=\"${_MY_OCTO_EMAIL}\"\n          fi\n\n          _DEL_OLD_EMPTY_PLATFORMS=${_DEL_OLD_EMPTY_PLATFORMS//[^0-9]/}\n\n          if [ -e \"${_usEr}/log/email.txt\" ]; then\n            _F_CLIENT_EMAIL=$(cat ${_usEr}/log/email.txt 2>&1)\n            _F_CLIENT_EMAIL=$(echo -n ${_F_CLIENT_EMAIL} | tr -d \"\\n\" 2>&1)\n            _F_CLIENT_EMAIL=${_F_CLIENT_EMAIL//\\\\\\@/\\@}\n          fi\n\n          if [ ! 
-z \"${_F_CLIENT_EMAIL}\" ]; then\n            _CLIENT_EMAIL_TEST=$(grep \"^_CLIENT_EMAIL=\\\"${_F_CLIENT_EMAIL}\\\"\" /root/.${_HM_U}.octopus.cnf 2>&1)\n            if [[ \"${_CLIENT_EMAIL_TEST}\" =~ \"${_F_CLIENT_EMAIL}\" ]]; then\n              _DO_NOTHING=YES\n            else\n              sed -i \"s/^_CLIENT_EMAIL=.*/_CLIENT_EMAIL=\\\"${_F_CLIENT_EMAIL}\\\"/g\" /root/.${_HM_U}.octopus.cnf\n              wait\n              _CLIENT_EMAIL=${_F_CLIENT_EMAIL}\n            fi\n          fi\n        fi\n        _disable_chattr ${_HM_U}.ftp\n        rm -rf /home/${_HM_U}.ftp/drush-backups\n        if [ -e \"${_THIS_HM_SITE}\" ]; then\n          cd ${_THIS_HM_SITE}\n          su -s /bin/bash ${_HM_U} -c \"drush8 cc drush\" &> /dev/null\n          wait\n          rm -rf ${_usEr}/.tmp/cache\n          _run_drush8_hmr_cmd \"${_vSet} hosting_cron_default_interval 3600\"\n          _run_drush8_hmr_cmd \"${_vSet} hosting_queue_cron_frequency 1\"\n          _run_drush8_hmr_cmd \"${_vSet} hosting_civicrm_cron_queue_frequency 60\"\n          _run_drush8_hmr_cmd \"${_vSet} hosting_queue_task_gc_frequency 300\"\n          if [ -e \"${_usEr}/log/hosting_cron_use_backend.txt\" ]; then\n            _run_drush8_hmr_cmd \"${_vSet} hosting_cron_use_backend 1\"\n          else\n            _run_drush8_hmr_cmd \"${_vSet} hosting_cron_use_backend 0\"\n          fi\n          _run_drush8_hmr_cmd \"${_vSet} hosting_ignore_default_profiles 0\"\n          _run_drush8_hmr_cmd \"${_vSet} hosting_queue_tasks_frequency 1\"\n          _run_drush8_hmr_cmd \"${_vSet} hosting_queue_tasks_items 1\"\n          _run_drush8_hmr_cmd \"${_vSet} hosting_delete_force 0\"\n          _run_drush8_hmr_cmd \"${_vSet} aegir_backup_export_path ${_usEr}/backup-exports\"\n          _run_drush8_hmr_cmd \"fr hosting_custom_settings -y\"\n          _run_drush8_hmr_cmd \"cache-clear all\"\n          _run_drush8_hmr_cmd \"cache-clear all\"\n          if [ -e \"${_usEr}/log/imported.pid\" ] \\\n            || [ -e 
\"${_usEr}/log/exported.pid\" ]; then\n            if [ ! -e \"${_usEr}/log/hosting_context.pid\" ]; then\n              _HM_NID=$(_run_drush8_hmr_cmd \"sqlq \\\n                \\\"SELECT MIN(site.nid) AS lowest_nid FROM hosting_site site JOIN \\\n                hosting_package_instance pkgi ON pkgi.rid=site.nid JOIN \\\n                hosting_package pkg ON pkg.nid=pkgi.package_id \\\n                WHERE pkg.short_name='hostmaster'\\\" 2>&1\")\n              _HM_NID=${_HM_NID//[^0-9]/}\n              if [ ! -z \"${_HM_NID}\" ]; then\n                _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_context \\\n                  SET name='hostmaster' WHERE nid=${_HM_NID}\\\"\"\n                echo ${_HM_NID} > ${_usEr}/log/hosting_context.pid\n              fi\n            fi\n          fi\n        fi\n        _daily_process\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_task \\\n          WHERE task_type='delete' AND task_status='-1'\\\"\"\n        _run_drush8_hmr_cmd \"sqlq \\\"DELETE FROM hosting_task \\\n          WHERE task_type='delete' AND task_status='0' AND executed='0'\\\"\"\n        _run_drush8_hmr_cmd \"${_vSet} hosting_delete_force 0\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform \\\n          SET status=1 WHERE publish_path LIKE '%/aegir/distro/%'\\\"\"\n        _check_old_empty_platforms\n        _run_drush8_hmr_cmd \"${_vSet} hosting_delete_force 0\"\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform \\\n          SET status=-2 WHERE publish_path LIKE '%/aegir/distro/%'\\\"\"\n        _THIS_HM_PLR=$(cat ${_usEr}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"root'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        _run_drush8_hmr_cmd \"sqlq \\\"UPDATE hosting_platform \\\n          SET status=1 WHERE publish_path LIKE '${_THIS_HM_PLR}'\\\"\"\n        _purge_cruft_machine\n        if [ \"${_hostedSys}\" = \"YES\" ]; then\n        
  rm -rf ${_usEr}/clients/admin &> /dev/null\n          rm -rf ${_usEr}/clients/omega8ccgmailcom &> /dev/null\n          rm -rf ${_usEr}/clients/nocomega8cc &> /dev/null\n        fi\n        rm -rf ${_usEr}/clients/*/backups &> /dev/null\n        symlinks -dr ${_usEr}/clients &> /dev/null\n        if [ -d \"/home/${_HM_U}.ftp\" ]; then\n          symlinks -dr /home/${_HM_U}.ftp &> /dev/null\n          rm -f /home/${_HM_U}.ftp/{.profile,.bash_logout,.bash_profile,.bashrc}\n        fi\n        _le_hm_ssl_check_update ${_HM_U}\n        if [ \"${_ENABLE_GOACCESS}\" = \"YES\" ] && [ -e \"/root/.goaccess.all.cnf\" ]; then\n          _if_gen_goaccess \"ALL\"\n        fi\n        echo \"Done for ${_usEr}\"\n        _enable_chattr ${_HM_U}.ftp\n      else\n        echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n        echo \"...we have to wait...\"\n      fi\n      echo\n      echo\n    fi\n  done\n  _shared_codebases_cleanup\n  _ghost_codebases_cleanup\n  _check_old_empty_hostmaster_platforms\n  if [ -n \"${_ENABLE_GOACCESS}\" ] && [ \"${_ENABLE_GOACCESS}\" = \"YES\" ]; then\n    _cleanup_weblogx\n  fi\n}\n\n###--------------------###\n[ ! 
-d \"/data/u\" ] && exit 1\necho \"INFO: Daily maintenance start\"\nwhile [ -e \"/run/boa_run.pid\" ]; do\n  echo \"Waiting for BOA queue availability...\"\n  sleep 5\ndone\n#\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n_DOW=$(date +%u)\n_DOW=${_DOW//[^1-7]/}\n#\nif [ -e \"/root/.force.sites.verify.cnf\" ]; then\n  _FORCE_SITES_VERIFY=YES\nelse\n  _FORCE_SITES_VERIFY=NO\nfi\n#\n_MODULES_FORCE=\"automated_cron \\\n  backup_migrate \\\n  coder \\\n  cookie_cache_bypass \\\n  hacked \\\n  poormanscron \\\n  security_review \\\n  site_audit \\\n  syslog \\\n  watchdog_live \\\n  xhprof\"\n#\nif [ \"${_DOW}\" = \"2\" ]; then\n  _MODULES_ON_SEVEN=\n  _MODULES_ON_SIX=\n  _MODULES_OFF_SEVEN=\"coder \\\n    devel \\\n    filefield_nginx_progress \\\n    hacked \\\n    l10n_update \\\n    linkchecker \\\n    performance \\\n    security_review \\\n    site_audit \\\n    watchdog_live \\\n    xhprof\"\n  _MODULES_OFF_SIX=\"coder \\\n    cookie_cache_bypass \\\n    devel \\\n    hacked \\\n    l10n_update \\\n    linkchecker \\\n    performance \\\n    poormanscron \\\n    security_review \\\n    supercron \\\n    watchdog_live \\\n    xhprof\"\nelse\n  _MODULES_ON_SEVEN=\"robotstxt\"\n  _MODULES_ON_SIX=\"path_alias_cache robotstxt\"\n  _MODULES_OFF_SEVEN=\"dblog syslog backup_migrate\"\n  _MODULES_OFF_SIX=\"dblog syslog backup_migrate\"\nfi\n#\n_CTRL_TPL_FORCE_UPDATE=YES\n#\n# Check for last all nr\nif [ -e \"/data/all\" ]; then\n  cd /data/all\n  _listl=([0-9]*)\n  _LAST_ALL=${_listl[@]: -1}\n  _O_CONTRIB=\"/data/all/${_LAST_ALL}/o_contrib\"\n  _O_CONTRIB_SEVEN=\"/data/all/${_LAST_ALL}/o_contrib_seven\"\n  _O_CONTRIB_EIGHT=\"/data/all/${_LAST_ALL}/o_contrib_eight\"\n  _O_CONTRIB_NINE=\"/data/all/${_LAST_ALL}/o_contrib_nine\"\n  _O_CONTRIB_TEN=\"/data/all/${_LAST_ALL}/o_contrib_ten\"\n  _O_CONTRIB_ELEVEN=\"/data/all/${_LAST_ALL}/o_contrib_eleven\"\nelif [ -e \"/data/disk/all\" ]; then\n  cd /data/disk/all\n  _listl=([0-9]*)\n  _LAST_ALL=${_listl[@]: -1}\n  
_O_CONTRIB=\"/data/disk/all/${_LAST_ALL}/o_contrib\"\n  _O_CONTRIB_SEVEN=\"/data/disk/all/${_LAST_ALL}/o_contrib_seven\"\n  _O_CONTRIB_EIGHT=\"/data/disk/all/${_LAST_ALL}/o_contrib_eight\"\n  _O_CONTRIB_NINE=\"/data/disk/all/${_LAST_ALL}/o_contrib_nine\"\n  _O_CONTRIB_TEN=\"/data/disk/all/${_LAST_ALL}/o_contrib_ten\"\n  _O_CONTRIB_ELEVEN=\"/data/disk/all/${_LAST_ALL}/o_contrib_eleven\"\nelse\n  _O_CONTRIB=NO\n  _O_CONTRIB_SEVEN=NO\n  _O_CONTRIB_EIGHT=NO\n  _O_CONTRIB_NINE=NO\n  _O_CONTRIB_TEN=NO\n  _O_CONTRIB_ELEVEN=NO\nfi\n#\nmkdir -p /var/log/boa/daily\nmkdir -p /var/log/boa/le\n#\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n#\n_find_fast_mirror_early\n#\n###--------------------###\nif [ -z \"${_SKYNET_MODE}\" ] || [ \"${_SKYNET_MODE}\" = \"ON\" ]; then\n  echo \"INFO: Checking BARRACUDA version\"\n  rm -f /opt/tmp/barracuda-release.txt*\n  curl -L -k -s \\\n    --max-redirs 10 \\\n    --retry 3 \\\n    --retry-delay 15 -A iCab \\\n    \"${_urlHmr}/conf/version/barracuda-release.txt\" \\\n    -o /opt/tmp/barracuda-release.txt\nelse\n  rm -f /opt/tmp/barracuda-release.txt*\nfi\nif [ -e \"/opt/tmp/barracuda-release.txt\" ]; then\n  _X_VERSION=$(cat /opt/tmp/barracuda-release.txt 2>&1)\n  _VERSIONS_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n  if [ ! 
-z \"${_X_VERSION}\" ]; then\n    _MY_OCTO_EMAIL=${_MY_OCTO_EMAIL//\\\\\\@/\\@}\n    if [[ \"${_VERSIONS_TEST}\" =~ \"${_X_VERSION}\" ]]; then\n      _VERSIONS_TEST_RESULT=OK\n      echo \"INFO: Version test result: OK\"\n    else\n      sT=\"release available, upgrade now!\"\n      cat <<EOF | s-nail -s \"New ${_X_VERSION} ${sT}\" ${_MY_OCTO_EMAIL}\n\n There is a new ${_X_VERSION} release available!\n\n Please review the changelog and upgrade as soon as possible to receive all security updates and new features.\n\n BOA Changelog: https://bit.ly/boa-changelog\n\n BOA Upgrade: https://bit.ly/boa-upgrade-docs\n\n ---\n This email has been sent by your BOA system release monitor\n\nEOF\n      echo \"INFO: Update notice sent: OK\"\n    fi\n  fi\nfi\n#\nif [ -e \"/run/daily-fix.pid\" ]; then\n  touch /var/log/boa/wait-for-daily\n  exit 1\nelse\n  touch /run/daily-fix.pid\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  _if_hosted_sys\n  if [ -z \"${_PERMISSIONS_FIX}\" ]; then\n    _PERMISSIONS_FIX=YES\n  fi\n  if [ -z \"${_MODULES_FIX}\" ]; then\n    _MODULES_FIX=YES\n  fi\n  if [ -z \"${_CLEAR_BOOST}\" ]; then\n    _CLEAR_BOOST=YES\n  fi\n  if [ -e \"/data/all\" ]; then\n    if [ ! -e \"/data/all/permissions-fix-post-up-${_xSrl}.info\" ]; then\n      rm -f /data/all/permissions-fix*\n      find /data/disk/*/distro/*/*/sites/all/{libraries,modules,themes} \\\n        -type d -exec chmod 02775 {} \\; &> /dev/null\n      find /data/disk/*/distro/*/*/sites/all/{libraries,modules,themes} \\\n        -type f -exec chmod 0664 {} \\; &> /dev/null\n      echo fixed > /data/all/permissions-fix-post-up-${_xSrl}.info\n    fi\n  elif [ -e \"/data/disk/all\" ]; then\n    if [ ! 
-e \"/data/disk/all/permissions-fix-post-up-${_xSrl}.info\" ]; then\n      rm -f /data/disk/all/permissions-fix*\n      find /data/disk/*/distro/*/*/sites/all/{libraries,modules,themes} \\\n        -type d -exec chmod 02775 {} \\; &> /dev/null\n      find /data/disk/*/distro/*/*/sites/all/{libraries,modules,themes} \\\n        -type f -exec chmod 0664 {} \\; &> /dev/null\n      echo fixed > /data/disk/all/permissions-fix-post-up-${_xSrl}.info\n    fi\n  fi\n\n  su -s /bin/bash - aegir -c \"drush8 cc drush\" &> /dev/null\n  wait\n  rm -rf /var/aegir/.tmp/cache\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster dis update syslog dblog -y\" &> /dev/null\n  wait\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster cron\" &> /dev/null\n  wait\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster cache-clear all\" &> /dev/null\n  wait\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster cache-clear all\" &> /dev/null\n  wait\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster utf8mb4-convert-databases -y\" &> /dev/null\n  wait\n\n  _thisLog=\"/var/log/boa/daily/daily-${_NOW}.log\"\n\n  _daily_action > ${_thisLog} 2>&1\n\n  _incident_detection\n\n  _dhpWildPath=\"/etc/ssl/private/nginx-wild-ssl.dhp\"\n  if [ -e \"/etc/ssl/private/4096.dhp\" ]; then\n    _dhpPath=\"/etc/ssl/private/4096.dhp\"\n    _DIFF_T=$(diff -w -B ${_dhpPath} ${_dhpWildPath} 2>&1)\n    if [ ! -z \"${_DIFF_T}\" ]; then\n      cp -af ${_dhpPath} ${_dhpWildPath}\n    fi\n  fi\n\n  if [ \"${_NGINX_FORWARD_SECRECY}\" = \"YES\" ]; then\n    if [ ! -e \"/etc/ssl/private/4096.dhp\" ]; then\n      echo \"Generating 4096.dhp -- it may take a very long time...\"\n      openssl dhparam -out /etc/ssl/private/4096.dhp 4096 > /dev/null 2>&1 &\n    fi\n    for f in `find /etc/ssl/private/*.crt -type f`; do\n      _sslName=$(echo ${f} | cut -d'/' -f5 | awk '{ print $1}' | sed \"s/.crt//g\")\n      _sslFile=\"/etc/ssl/private/${_sslName}.dhp\"\n      _sslFileZ=${_sslFile//\\//\\\\\\/}\n      if [ -e \"${f}\" ] && [ ! 
-z \"${_sslName}\" ]; then\n        if [ ! -e \"${_sslFile}\" ]; then\n          openssl dhparam -out ${_sslFile} 2048 &> /dev/null\n        else\n          _PFS_TEST=$(grep \"DH PARAMETERS\" ${_sslFile} 2>&1)\n          if [[ ! \"${_PFS_TEST}\" =~ \"DH PARAMETERS\" ]]; then\n            openssl dhparam -out ${_sslFile} 2048 &> /dev/null\n          fi\n          _sslRootd=\"/var/aegir/config/server_master/nginx/pre.d\"\n          _sslFileX=\"${_sslRootd}/z_${_sslName}_ssl_proxy.conf\"\n          _sslFileY=\"${_sslRootd}/${_sslName}_ssl_proxy.conf\"\n          if [ -e \"${_sslFileX}\" ]; then\n            _DHP_TEST=$(grep \"_sslFile\" ${_sslFileX} 2>&1)\n            if [[ \"${_DHP_TEST}\" =~ \"_sslFile\" ]]; then\n              sed -i \"s/.*_sslFile.*//g\" ${_sslFileX} &> /dev/null\n              wait\n              sed -i \"s/ *$//g; /^$/d\" ${_sslFileX} &> /dev/null\n              wait\n            fi\n          fi\n          if [ -e \"${_sslFileY}\" ]; then\n            _DHP_TEST=$(grep \"_sslFile\" ${_sslFileY} 2>&1)\n            if [[ \"${_DHP_TEST}\" =~ \"_sslFile\" ]]; then\n              sed -i \"s/.*_sslFile.*//g\" ${_sslFileY} &> /dev/null\n              wait\n              sed -i \"s/ *$//g; /^$/d\" ${_sslFileY} &> /dev/null\n              wait\n            fi\n          fi\n          if [ -e \"${_sslFileX}\" ]; then\n            _DHP_TEST=$(grep \"ssl_dhparam\" ${_sslFileX} 2>&1)\n            if [[ ! \"${_DHP_TEST}\" =~ \"ssl_dhparam\" ]]; then\n              sed -i \"s/ssl_session_timeout .*/ssl_session_timeout          5m;\\n  ssl_dhparam                  ${_sslFileZ};/g\" ${_sslFileX} &> /dev/null\n              wait\n              sed -i \"s/ *$//g; /^$/d\" ${_sslFileX} &> /dev/null\n              wait\n            fi\n          fi\n          if [ -e \"${_sslFileY}\" ]; then\n            _DHP_TEST=$(grep \"ssl_dhparam\" ${_sslFileY} 2>&1)\n            if [[ ! 
\"${_DHP_TEST}\" =~ \"ssl_dhparam\" ]]; then\n              sed -i \"s/ssl_session_timeout .*/ssl_session_timeout          5m;\\n  ssl_dhparam                  ${_sslFileZ};/g\" ${_sslFileY} &> /dev/null\n              wait\n              sed -i \"s/ *$//g; /^$/d\" ${_sslFileY} &> /dev/null\n              wait\n            fi\n          fi\n        fi\n      fi\n    done\n    if [ -e \"/var/aegir/config\" ]; then\n      sed -i \"s/.*ssl_stapling .*//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*ssl_stapling_verify .*//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*resolver .*//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*resolver_timeout .*//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*http2.*on;//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/ssl_prefer_server_ciphers .*/ssl_prefer_server_ciphers on;\\n  http2 on;/g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/ *$//g; /^$/d\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n    fi\n    if [ -d \"/data/u\" ]; then\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /data/disk/*/config/server_*/nginx/vhost.d/*\n    fi\n    if [ -e \"/var/aegir/config\" ]; then\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx.conf\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/vhost.d/*\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf\n    fi\n    service nginx reload\n  fi\nfi\n\nif [ \"${_PERMISSIONS_FIX}\" = \"YES\" ] \\\n  && [ ! 
-z \"${_X_VERSION}\" ] \\\n  && [ -e \"/opt/tmp/barracuda-release.txt\" ] \\\n  && [ ! -e \"/data/all/permissions-fix-${_xSrl}-${_X_VERSION}-fixed-dz.info\" ]; then\n  echo \"INFO: Fixing permissions in the /data/all tree...\"\n  find /data/conf -type d -exec chmod 0755 {} \\; &> /dev/null\n  find /data/conf -type f -exec chmod 0644 {} \\; &> /dev/null\n  chown -R root:root /data/conf &> /dev/null\n  if [ -e \"/data/all\" ]; then\n    find /data/all -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /data/all -type f -exec chmod 0644 {} \\; &> /dev/null\n    chmod 02775 /data/all/*/*/sites/all/{modules,libraries,themes} &> /dev/null\n    chmod 02775 /data/all/000/core/*/sites/all/{modules,libraries,themes} &> /dev/null\n    chown -R root:root /data/all &> /dev/null\n    chown -R root:users /data/all/*/*/sites &> /dev/null\n    chown -R root:users /data/all/000/core/*/sites &> /dev/null\n  elif [ -e \"/data/disk/all\" ]; then\n    find /data/disk/all -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /data/disk/all -type f -exec chmod 0644 {} \\; &> /dev/null\n    chmod 02775 /data/disk/all/*/*/sites/all/{modules,libraries,themes} &> /dev/null\n    chmod 02775 /data/disk/all/000/core/*/sites/all/{modules,libraries,themes} &> /dev/null\n    chown -R root:root /data/disk/all &> /dev/null\n    chown -R root:users /data/disk/all/*/*/sites &> /dev/null\n    chown -R root:users /data/disk/all/000/core/*/sites &> /dev/null\n  fi\n  chmod 02775 /data/disk/*/distro/*/*/sites/all/{modules,libraries,themes} &> /dev/null\n  echo fixed > /data/all/permissions-fix-${_xSrl}-${_X_VERSION}-fixed-dz.info\nfi\nif [ ! 
-e \"/var/backups/fix-sites-all-permissions-${_xSrl}.txt\" ]; then\n  chmod 0751  /data/disk/*/distro/*/*/sites &> /dev/null\n  chmod 0755  /data/disk/*/distro/*/*/sites/all &> /dev/null\n  chmod 02775 /data/disk/*/distro/*/*/sites/all/{modules,libraries,themes} &> /dev/null\n  echo FIXED > /var/backups/fix-sites-all-permissions-${_xSrl}.txt\n  echo \"Permissions in the sites/all tree just fixed\"\nfi\nfind /var/backups/old-sql* -mtime +1 -exec rm -rf {} \\; &> /dev/null\nfind /var/backups/ltd/*/* -mtime +0 -type f -exec rm -rf {} \\; &> /dev/null\nfind /var/backups/solr/*/* -mtime +0 -type f -exec rm -rf {} \\; &> /dev/null\nfind /var/backups/jetty* -mtime +0 -exec rm -rf {} \\; &> /dev/null\nfind /var/backups/dragon/* -mtime +7 -exec rm -rf {} \\; &> /dev/null\nif [ \"${_hostedSys}\" = \"YES\" ]; then\n  if [ -d \"/var/backups/codebases-cleanup\" ]; then\n    find /var/backups/codebases-cleanup/* -mtime +7 -exec rm -rf {} \\; &> /dev/null\n  elif [ -d \"/data/disk/codebases-cleanup\" ]; then\n    find /data/disk/codebases-cleanup/* -mtime +7 -exec rm -rf {} \\; &> /dev/null\n  fi\nfi\nrm -f /tmp/.cron.*.pid\nrm -f /tmp/.busy.*.pid\nrm -f /data/disk/*/.tmp/.cron.*.pid\nrm -f /data/disk/*/.tmp/.busy.*.pid\n\n###\n### Delete duplicity ghost pid files older than 2 days\n###\nfind /run/*_backup.pid -mtime +1 -exec rm -rf {} \\; &> /dev/null\nrm -f /run/daily-fix.pid\necho \"INFO: Daily maintenance complete\"\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/graceful.sh",
    "content": "#!/bin/bash\n\n# Environment setup\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n# Function to check if the script is run as root\n_check_root() {\n  if [ \"$(id -u)\" -ne 0 ]; then\n    echo \"ERROR: This script should be run as root\"\n    exit 1\n  else\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    # Sanitize to allow only digits and minus sign\n    export _B_NICE=${_B_NICE//[^0-9-]/}\n\n    # Validate and set default if necessary\n    if ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n      _B_NICE=0\n    fi\n\n    # Clamp the value within -20 to 19\n    if (( _B_NICE < -20 )); then\n      _B_NICE=-20\n    elif (( _B_NICE > 19 )); then\n      _B_NICE=19\n    fi\n\n    renice ${_B_NICE} -p $$ &> /dev/null\n    chmod a+w /dev/null\n  fi\n  # Get the hostname; fall back to hostname -f if /etc/hostname is missing or empty\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n')\"\n  [ -n \"${_hName}\" ] || _hName=\"$(hostname -f 2>/dev/null)\"\n}\n_check_root\n\n# Exit if certain config files exist\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n# Function to calculate RAM usage percentage as an integer\n_calculate_ram_usage_percent() {\n  _total_ram_kb=$1\n  _available_ram_kb=$2\n  used_ram_kb=$((_total_ram_kb - _available_ram_kb))\n\n  # Using integer division to get a whole number percentage\n  echo $(( (used_ram_kb * 100) / _total_ram_kb ))\n}\n\n# Function to read RAM stats from /proc/meminfo and compute usage\n_check_system_ram() {\n  # Get the total and available RAM in KB\n  _total_ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')\n  _available_ram_kb=$(grep MemAvailable /proc/meminfo | awk '{print $2}')\n\n  # Calculate RAM usage percentage\n  _ram_usage_percent=$(_calculate_ram_usage_percent ${_total_ram_kb} ${_available_ram_kb})\n}\n\n# Function to check and optimize RAM and disk caches\n_optimize_ram() {\n  _check_system_ram\n  if [ 
\"${_ram_usage_percent}\" -gt 90 ]; then\n    sync && echo 3 | tee /proc/sys/vm/drop_caches\n  fi\n}\n\n_find_fast_mirror_early() {\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! -e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  export _urlDev=\"http://${_USE_MIR}/dev\"\n}\n\n# Function to download GeoLite2 databases\n_fetch_geoip() {\n\n  # Prepare GeoIP directories\n  echo \"Setting up GeoIP directories...\"\n  mkdir -p /usr/share/GeoIP\n  chmod 755 /usr/share/GeoIP\n\n  # Download and install GeoIP databases\n  echo \"Downloading GeoIP databases...\"\n  mkdir -p /opt/tmp\n  cd /opt/tmp\n  rm -f /opt/tmp/Geo*\n\n  # For GeoIP2 ASN database:\n  wget -q -U iCab ${_urlDev}/src/GeoLite2-ASN.mmdb.gz\n  gunzip GeoLite2-ASN.mmdb.gz &> /dev/null\n  cp -af GeoLite2-ASN.mmdb /usr/share/GeoIP/\n\n  # For GeoIP2 City database:\n  wget -q -U iCab ${_urlDev}/src/GeoLite2-City.mmdb.gz\n  gunzip GeoLite2-City.mmdb.gz &> /dev/null\n  cp -af GeoLite2-City.mmdb /usr/share/GeoIP/\n\n  # For GeoIP2 Country database:\n  wget -q -U iCab ${_urlDev}/src/GeoLite2-Country.mmdb.gz\n  gunzip GeoLite2-Country.mmdb.gz &> /dev/null\n  cp -af GeoLite2-Country.mmdb /usr/share/GeoIP/\n\n 
 chmod 644 /usr/share/GeoIP/*\n  rm -f /opt/tmp/Geo*\n  cd\n}\n\n# Main action function\n_graceful_action() {\n  echo \"Starting system maintenance tasks...\"\n\n  # Update Devuan mirrors daily\n  _ffDevuan=\"$(which ffdevuan)\"\n  _pthLog=\"/var/log/boa/ffdevuan.update.log\"\n  if [ -x \"${_ffDevuan}\" ]; then\n    bash ${_ffDevuan} >> ${_pthLog}\n    wait\n    echo \"Devuan Mirrors Updated on [$(date)]\" >> ${_pthLog}\n    echo >> ${_pthLog}\n  fi\n\n  # Clean up postfix queue to get rid of bounced emails\n  echo \"Cleaning up postfix queue...\"\n  postsuper -d ALL &> /dev/null\n\n  # Restart syslog service\n  echo \"Restarting syslog service...\"\n  if [ -e \"/etc/init.d/rsyslog\" ]; then\n    pkill -9 rsyslogd\n    service rsyslog start\n  elif [ -e \"/etc/init.d/sysklogd\" ]; then\n    pkill -9 sysklogd\n    service sysklogd start\n  elif [ -e \"/etc/init.d/inetutils-syslogd\" ]; then\n    pkill -9 syslogd\n    service inetutils-syslogd start\n  fi\n\n  # Swap, RAM and disk cache management\n  _IF_BCP=\"$(pgrep -f duplicity)\"\n  if [ -d \"/dev/disk\" ]; then\n    if [ ! -e \"/root/.no.swap.clear.cnf\" ]; then\n      echo \"Resetting swap...\"\n      swapoff -a\n      if [ -z \"${_IF_BCP}\" ]; then\n        swapon -a\n      fi\n    fi\n    echo \"Optimizing RAM usage...\"\n    _optimize_ram\n  fi\n\n  # Setup GeoIP directories\n  if [ ! -e \"/usr/share/GeoIP/GeoLite2-City.mmdb\" ]; then\n    _find_fast_mirror_early\n    _fetch_geoip\n  fi\n\n  # Cleanup for /opt/tmp/sess*\n  rm -f /opt/tmp/sess*\n\n  # Speed cleanup\n  _IF_BCP=\"$(pgrep -f duplicity)\"\n  if [ -z \"${_IF_BCP}\" ] && [ ! -e \"/run/speed_cleanup.pid\" ] && [ ! 
-e \"/root/.giant_traffic.cnf\" ]; then\n    echo \"Performing speed cleanup...\"\n    touch /run/speed_cleanup.pid\n    echo \" \" >> /var/log/nginx/speed_cleanup.log\n    sed -i \"s/levels=2:2:2/levels=2:2/g\" /var/aegir/config/server_master/nginx.conf\n    service nginx reload &> /dev/null\n    echo \"speed_purge start $(date)\" >> /var/log/nginx/speed_cleanup.log\n    nice -n 9 ionice -c2 -n7 find /var/lib/nginx/speed/ -mtime +1 -exec rm -rf {} \\; &> /dev/null\n    echo \"speed_purge complete $(date)\" >> /var/log/nginx/speed_cleanup.log\n    service nginx reload &> /dev/null\n    rm -f /run/speed_cleanup.pid\n  fi\n\n  touch /var/log/boa/graceful.done.pid\n  echo \"System maintenance tasks completed.\"\n}\n\n# Main script execution\n\n# Check for ongoing operations or skip configurations\nif [ -e \"/run/boa_run.pid\" ] || [ -e \"/root/.skip_cleanup.cnf\" ]; then\n  echo \"Cleanup skipped due to ongoing operations or configuration settings.\"\n  exit 0\nelse\n  _graceful_action\n  exit 0\nfi\n"
  },
  {
    "path": "aegir/tools/system/guest-fire.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n# ----------------------------\n# Configuration Section\n# ----------------------------\n\n# Default logging mode, can be SILENT (none), NORMAL or VERBOSE\n_NGINX_DOS_LOG=SILENT\n\n# ==============================\n# Load Configuration File\n# ==============================\n\n_CONFIG_FILE=\"/root/.barracuda.cnf\"\n\nif [[ -e \"${_CONFIG_FILE}\" ]]; then\n  source \"${_CONFIG_FILE}\" ### to read _NGINX_DOS_LOG value used in other scripts too\nfi\n\n# ----------------------------\n# Function Definitions\n# ----------------------------\n\n# Function for logging in verbose mode\n_verbose_log() {\n  _reason=\"${1}\"\n  _message=\"${2}\"\n  _log_file=\"/dev/null\"\n\n  # Define log file paths\n  _csf_dry_log=\"/var/log/csf_dry_debug.log\"\n  _csf_fail_log=\"/var/log/csf_fail_debug.log\"\n  _csf_deny_log=\"/var/log/csf_deny_debug.log\"\n  _csf_denied_log=\"/var/log/csf_denied_debug.log\"\n  _csf_allow_log=\"/var/log/csf_allow_debug.log\"\n\n  # Check if logging is enabled\n  if [[ -e \"/root/.debug.monitor.log.cnf\" || 
\"${_NGINX_DOS_LOG}\" =~ ^(NORMAL|VERBOSE)$ ]]; then\n    if [[ \"${_reason}\" =~ ^(DRY|NORMAL|DEBUG)$ && \"${_NGINX_DOS_LOG}\" = VERBOSE ]]; then\n      _log_file=\"${_csf_dry_log}\"\n    elif [[ \"${_reason}\" =~ ^(FAIL|INVALID|ERROR)$ ]]; then\n      _log_file=\"${_csf_fail_log}\"\n    elif [[ \"${_reason}\" =~ ^DENY$ && \"${_NGINX_DOS_LOG}\" =~ ^(NORMAL|VERBOSE)$ ]]; then\n      _log_file=\"${_csf_deny_log}\"\n    elif [[ \"${_reason}\" =~ ^DENIED$ && \"${_NGINX_DOS_LOG}\" =~ ^(NORMAL|VERBOSE)$ ]]; then\n      _log_file=\"${_csf_denied_log}\"\n    elif [[ \"${_reason}\" =~ ^(ALLOWED|CLEAN)$ && \"${_NGINX_DOS_LOG}\" = VERBOSE ]]; then\n      _log_file=\"${_csf_allow_log}\"\n    else\n      # Unrecognized _reason; skip logging to prevent unbound variable\n      return\n    fi\n\n    # Generate timestamp\n    _timestamp=$(date)\n\n    # Write to the appropriate log file using printf\n    printf \"%s %s REASON: %s\\n\" \"${_timestamp}\" \"${_reason}\" \"${_message}\" >> \"${_log_file}\"\n  fi\n}\n\n# Function to run procedure in a loop\n_guest_guard() {\n\n  if [ -e \"/var/xdrago/monitor/log/ssh.log\" ]; then\n    # Process each unique IP from the log file\n    cut -d '#' -f1 \"/var/xdrago/monitor/log/ssh.log\" | sort | uniq | while read -r _IP; do\n      # Reset control variables\n      _FW_CLEAN=\n      _FW_TEST=\n      _FF_TEST=\n      # Retrieve CSF status for the IP\n      _FW_TEST=$(csf -g ${_IP} 2>&1)\n      # Check if the IP is allowed in csf.allow for TCP port 22\n      _FF_TEST=$(grep -E \"^tcp\\|in\\|d=22\\|s=${_IP}\\b\" \"/etc/csf/csf.allow\" || true)\n      # Determine if the IP is allowed or needs to be denied\n      if [[ \"${_FF_TEST}\" =~ ${_IP} ]] || [[ \"${_FW_TEST}\" =~ ALLOW.*ACCEPT.*dpt:22 ]]; then\n        echo \"${_IP} is allowed on port 22\"\n        _verbose_log \"ALLOWED\" \"${_IP} is allowed on port 22\"\n        _FW_CLEAN=\"YES\"\n        if [[ \"${_FW_CLEAN}\" == \"YES\" ]]; then\n          echo \"Removing ${_IP} potential blocks on 
port 22\"\n          csf -dr ${_IP}\n          csf -tr ${_IP}\n          _verbose_log \"CLEAN\" \"Removing ${_IP} potential blocks on port 22\"\n        fi\n      elif [[ \"${_FW_TEST}\" =~ DENY.*DROP.*dpt:22 ]]; then\n        echo \"${_IP} already denied on port 22\"\n        _verbose_log \"DENIED\" \"${_IP} already denied on port 22\"\n      else\n        echo \"Denying ${_IP} on port 22 in the next 15 min\"\n        csf -td ${_IP} 900 -p 22\n        _verbose_log \"DENY\" \"Denying ${_IP} on port 22 in the next 15 min\"\n      fi\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n\n  if [ -e \"/var/xdrago/monitor/log/web.log\" ]; then\n    # Process each unique IP from the log file\n    cut -d '#' -f1 \"/var/xdrago/monitor/log/web.log\" | sort | uniq | while read -r _IP; do\n      # Reset control variables\n      _FW_CLEAN=\n      _FW_TEST=\n      _FF_TEST=\n      # Retrieve CSF status for the IP\n      _FW_TEST=$(csf -g ${_IP} 2>&1)\n      # Check if the IP is allowed in csf.allow for TCP port 80\n      _FF_TEST=$(grep -E \"^tcp\\|in\\|d=80\\|s=${_IP}\\b\" \"/etc/csf/csf.allow\" || true)\n      # Determine if the IP is allowed or needs to be denied\n      if [[ \"${_FF_TEST}\" =~ ${_IP} ]] || [[ \"${_FW_TEST}\" =~ ALLOW.*ACCEPT.*dpt:80 ]]; then\n        echo \"${_IP} is allowed on port 80\"\n        _verbose_log \"ALLOWED\" \"${_IP} is allowed on port 80\"\n        _FW_CLEAN=\"YES\"\n        if [[ \"${_FW_CLEAN}\" == \"YES\" ]]; then\n          echo \"Removing ${_IP} potential blocks on ports 443,80\"\n          csf -dr ${_IP}\n          csf -tr ${_IP}\n          _verbose_log \"CLEAN\" \"Removing ${_IP} potential blocks on ports 443,80\"\n        fi\n      elif [[ \"${_FW_TEST}\" =~ DENY.*DROP.*dpt:80 ]]; then\n        echo \"${_IP} already denied on port 80\"\n        _verbose_log \"DENIED\" \"${_IP} already denied on port 80\"\n      elif [[ \"${_FW_TEST}\" =~ DENY.*DROP.*dpt:443 ]]; 
then\n        echo \"${_IP} already denied on port 443\"\n        _verbose_log \"DENIED\" \"${_IP} already denied on port 443\"\n      else\n        echo \"Denying ${_IP} on ports 443,80 in the next 15 min\"\n        csf -td ${_IP} 900 -p 80\n        csf -td ${_IP} 900 -p 443\n        _verbose_log \"DENY\" \"Denying ${_IP} on ports 443,80 in the next 15 min\"\n      fi\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n\n  if [ -e \"/var/xdrago/monitor/log/ftp.log\" ]; then\n    # Process each unique IP from the log file\n    cut -d '#' -f1 \"/var/xdrago/monitor/log/ftp.log\" | sort | uniq | while read -r _IP; do\n      # Reset control variables\n      _FW_CLEAN=\n      _FW_TEST=\n      _FF_TEST=\n      # Retrieve CSF status for the IP\n      _FW_TEST=$(csf -g ${_IP} 2>&1)\n      # Check if the IP is allowed in csf.allow for TCP port 21\n      _FF_TEST=$(grep -E \"^tcp\\|in\\|d=21\\|s=${_IP}\\b\" \"/etc/csf/csf.allow\" || true)\n      # Determine if the IP is allowed or needs to be denied\n      if [[ \"${_FF_TEST}\" =~ ${_IP} ]] || [[ \"${_FW_TEST}\" =~ ALLOW.*ACCEPT.*dpt:21 ]]; then\n        echo \"${_IP} is allowed on port 21\"\n        _verbose_log \"ALLOWED\" \"${_IP} is allowed on port 21\"\n        _FW_CLEAN=\"YES\"\n        if [[ \"${_FW_CLEAN}\" == \"YES\" ]]; then\n          echo \"Removing ${_IP} potential blocks on port 21\"\n          csf -dr ${_IP}\n          csf -tr ${_IP}\n          _verbose_log \"CLEAN\" \"Removing ${_IP} potential blocks on port 21\"\n        fi\n      elif [[ \"${_FW_TEST}\" =~ DENY.*DROP.*dpt:21 ]]; then\n        echo \"${_IP} already denied on port 21\"\n        _verbose_log \"DENIED\" \"${_IP} already denied on port 21\"\n      else\n        echo \"Denying ${_IP} on port 21 in the next 15 min\"\n        csf -td ${_IP} 900 -p 21\n        _verbose_log \"DENY\" \"Denying ${_IP} on port 21 in the next 15 min\"\n      fi\n      [ -e 
\"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n\n}\n\n# Main execution\nif [ -x \"/usr/sbin/csf\" ]; then\n  for _iteration in {1..5}; do\n    echo \"----------------------------\"\n    echo \"Iteration ${_iteration}:\"\n    [ ! -e \"/run/water.pid\" ] && _guest_guard\n    sleep 10\n  done\nfi\n\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/guest-water.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n[ -d \"/var/backups/csf/water\" ] || mkdir -p /var/backups/csf/water\n\n_whitelist_ip_pingdom() {\n  # Pingdom provides probe IPs in multiple formats:\n  #   Plain IPv4 list: https://my.pingdom.com/probes/ipv4  (preferred - no parsing needed)\n  #   RSS feed:        https://my.pingdom.com/probes/feed  (fallback - XML parsing required)\n  # The plain list is simpler and less fragile; RSS is kept as fallback.\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing pingdom ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-pingdom-${_NOW}\n    sed -i \"s/.*pingdom.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://my.pingdom.com/probes/ipv4 \\\n    | grep -o '[0-9]\\+\\.[0-9]\\+\\.[0-9]\\+\\.[0-9]\\+' \\\n    | sort \\\n    | uniq 2>&1)\n  if [ -z \"${_IPS}\" ]; then\n    echo \"pingdom ipv4 endpoint failed, falling back to RSS feed\"\n    _IPS=$(curl -k -s https://my.pingdom.com/probes/feed \\\n      | grep '<pingdom:ip>' \\\n      | sed 's/.*::.*//g' \\\n      | sed 's/[^0-9\\.]//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n  echo _IPS pingdom list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow pingdom ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # pingdom ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n}\n\n_whitelist_ip_cloudflare() {\n  # Cloudflare publishes IPv4 ranges at two endpoints (both return identical data):\n  #   
Plain text: https://www.cloudflare.com/ips-v4  (primary)\n  #   JSON API:   https://api.cloudflare.com/client/v4/ips  (fallback, no auth needed)\n  # Reference: https://www.cloudflare.com/ips/\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing cloudflare ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-cloudflare-${_NOW}\n    sed -i \"s/.*cloudflare.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -sL https://www.cloudflare.com/ips-v4 \\\n    | grep -o '[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*' \\\n    | sort \\\n    | uniq 2>&1)\n  if [ -z \"${_IPS}\" ]; then\n    echo \"cloudflare ips-v4 endpoint failed, falling back to JSON API\"\n    _IPS=$(curl -k -s https://api.cloudflare.com/client/v4/ips \\\n      | grep -o '\"[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*\"' \\\n      | sed 's/\"//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n  echo _IPS cloudflare list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow cloudflare ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # cloudflare ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n}\n\n_whitelist_ip_imperva() {\n  # Imperva Cloud WAF IP ranges API - no authentication required:\n  # https://my.imperva.com/api/integration/v1/ips\n  # Formats: text | json | apache | nginx | iptables (default: json)\n  # Reference: https://docs.imperva.com/howto/c85245b7\n  # Current ranges (as of 2024): 199.83.128.0/21, 198.143.32.0/19, 149.126.72.0/21,\n  #   103.28.248.0/22, 185.11.124.0/22, 192.230.64.0/18, 45.64.64.0/22, 107.154.0.0/16,\n  #   
45.60.0.0/16, 45.223.0.0/16, 131.125.128.0/17 (added May 2023)\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing imperva ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-imperva-${_NOW}\n    sed -i \"s/.*imperva.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s --data \"resp_format=text\" https://my.imperva.com/api/integration/v1/ips \\\n    | grep -o '[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*' \\\n    | sort \\\n    | uniq 2>&1)\n  if [ -z \"${_IPS}\" ]; then\n    echo \"imperva text endpoint failed, falling back to JSON format\"\n    _IPS=$(curl -k -s --data \"resp_format=json\" https://my.imperva.com/api/integration/v1/ips \\\n      | grep -o '\"[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*\"' \\\n      | sed 's/\"//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n  echo _IPS imperva list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow imperva ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # imperva ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  # Clean up Imperva ranges from csf.deny\n  # All current Imperva ranges by significant octets:\n  sed -i \"/^199\\.83\\./d\" /etc/csf/csf.deny\n  sed -i \"/^198\\.143\\./d\" /etc/csf/csf.deny\n  sed -i \"/^149\\.126\\./d\" /etc/csf/csf.deny\n  sed -i \"/^103\\.28\\./d\" /etc/csf/csf.deny\n  sed -i \"/^185\\.11\\./d\" /etc/csf/csf.deny\n  sed -i \"/^192\\.230\\./d\" /etc/csf/csf.deny\n  sed -i \"/^45\\.64\\./d\" /etc/csf/csf.deny\n  sed -i \"/^107\\.154\\./d\" /etc/csf/csf.deny\n  sed -i \"/^45\\.60\\./d\" /etc/csf/csf.deny\n  sed 
-i \"/^45\\.223\\./d\" /etc/csf/csf.deny\n  sed -i \"/^131\\.125\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_googlebot() {\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing googlebot ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-googlebot-${_NOW}\n    sed -i \"s/.*googlebot.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://developers.google.com/static/search/apis/ipranges/googlebot.json \\\n    | grep -o '\"ipv4Prefix\": *\"[^\"]*\"' \\\n    | sed 's/\"ipv4Prefix\": *\"//g' \\\n    | sed 's/\"//g' \\\n    | sort \\\n    | uniq 2>&1)\n  echo _IPS googlebot list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow googlebot ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # googlebot ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  sed -i \"/^66\\.249\\./d\" /etc/csf/csf.deny\n  sed -i \"/^192\\.178\\./d\" /etc/csf/csf.deny\n  sed -i \"/^34\\.\\(22\\|64\\|65\\|80\\|88\\|89\\|96\\|100\\|101\\|118\\|126\\|146\\|147\\|151\\|152\\|154\\|155\\|165\\|175\\|176\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^35\\.247\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_microsoft() {\n  if [ ! 
-e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing microsoft ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-microsoft-${_NOW}\n    sed -i \"s/.*microsoft.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://www.bing.com/toolbox/bingbot.json \\\n    | grep -o '\"ipv4Prefix\": *\"[^\"]*\"' \\\n    | sed 's/\"ipv4Prefix\": *\"//g' \\\n    | sed 's/\"//g' \\\n    | sort \\\n    | uniq 2>&1)\n  echo _IPS microsoft list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow microsoft ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # microsoft ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  # Remove all current Bingbot ranges from csf.deny\n  # Legacy ranges (no longer in JSON but may be in older deny rules)\n  sed -i \"/^65\\.5[2-5]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^199\\.30\\./d\" /etc/csf/csf.deny\n  # Current Azure-based Bingbot ranges\n  sed -i \"/^13\\.\\(66\\|67\\|69\\|71\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^20\\.\\(15\\|36\\|43\\|74\\|79\\|125\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^40\\.77\\./d\" /etc/csf/csf.deny\n  sed -i \"/^40\\.79\\./d\" /etc/csf/csf.deny\n  sed -i \"/^51\\.105\\./d\" /etc/csf/csf.deny\n  sed -i \"/^52\\.\\(167\\|231\\)\\./d\" /etc/csf/csf.deny\n  sed -i \"/^139\\.217\\./d\" /etc/csf/csf.deny\n  sed -i \"/^157\\.55\\./d\" /etc/csf/csf.deny\n  sed -i \"/^191\\.233\\./d\" /etc/csf/csf.deny\n  sed -i \"/^207\\.46\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_sucuri() {\n  # Sucuri does not publish a machine-readable IP list endpoint.\n  # IP ranges are maintained as static 
documentation at:\n  # https://docs.sucuri.net/website-firewall/sucuri-firewall-troubleshooting-guide/\n  # Review that page periodically and update _IPS below if ranges change.\n  if [ ! -e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing sucuri ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-sucuri-${_NOW}\n    sed -i \"s/.*sucuri.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=\"192.88.134.0/23 185.93.228.0/22 66.248.200.0/22 208.109.0.0/22\"\n  echo _IPS sucuri list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow sucuri ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # sucuri ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  sed -i \"/^192\\.88\\.13[4-5]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^185\\.93\\.22[89]\\.\\|^185\\.93\\.23[01]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^66\\.248\\.20[0-3]\\./d\" /etc/csf/csf.deny\n  sed -i \"/^208\\.109\\.[0-3]\\./d\" /etc/csf/csf.deny\n  wait\n}\n\n_whitelist_ip_authzero() {\n  # Auth0 publishes a machine-readable IP list with region breakdown and changelog at:\n  # https://cdn.auth0.com/ip-ranges.json\n  # The list is updated ahead of any functional changes; check last_updated_at to detect changes.\n  # Only whitelist regions relevant to your Auth0 tenant(s). Currently fetching all regions.\n  if [ ! 
-e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing authzero ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-authzero-${_NOW}\n    sed -i \"s/.*authzero.*//g\" /etc/csf/csf.allow\n    wait\n  fi\n  _IPS=$(curl -k -s https://cdn.auth0.com/ip-ranges.json \\\n    | grep -o '\"[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*\\.[0-9][0-9]*/[0-9]*\"' \\\n    | grep -v ':' \\\n    | sed 's/\"//g' \\\n    | sort \\\n    | uniq 2>&1)\n  echo _IPS authzero list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow authzero ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # authzero ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  # Clean up any authzero IPs from csf.deny (current + previously known retired IPs)\n  # Since all Auth0 IPs are /32 host routes, we match on the specific addresses from\n  # the changelog (both active and historically removed entries) to ensure old deny\n  # rules don't linger. 
The fetch above handles csf.allow; deny cleanup is best-effort\n  # by known prefix patterns from Auth0's AWS IP space.\n  for _DENY_IP in $(echo \"${_IPS}\" | sed 's|/32||g'); do\n    sed -i \"/^${_DENY_IP//./\\\\.}$/d\" /etc/csf/csf.deny\n    sed -i \"/^${_DENY_IP//./\\\\.}\\/32$/d\" /etc/csf/csf.deny\n  done\n  wait\n}\n\n_whitelist_ip_site24x7_extra() {\n  # These ranges cover Site24x7 backend/infrastructure IPs (not monitoring probes).\n  # Monitoring probe IPs are handled dynamically via DNS in _whitelist_ip_site24x7().\n  # No machine-readable endpoint exists for these ranges; review periodically at:\n  # https://www.site24x7.com/community/filter/announcements/\n  _IPS=\"87.252.213.0/24 89.36.170.0/24 185.172.199.0/27 185.172.199.128/26 185.230.214.0/23\"\n  echo _IPS site24x7_extra list..\n  echo ${_IPS}\n  for _IP in ${_IPS}; do\n    echo checking csf.allow site24x7_extra ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # site24x7_extra ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n  if [ -e \"/root/.ignore.site24x7.firewall.cnf\" ]; then\n    for _IP in ${_IPS}; do\n      echo checking csf.ignore site24x7_extra ${_IP} now...\n      _IP_CHECK=$(cat /etc/csf/csf.ignore \\\n        | cut -d '#' -f1 \\\n        | sort \\\n        | uniq \\\n        | tr -d \"\\s\" \\\n        | grep -F \"${_IP}\" 2>&1)\n      if [ -z \"${_IP_CHECK}\" ]; then\n        echo \"${_IP} not yet listed in /etc/csf/csf.ignore\"\n        echo \"${_IP} # site24x7_extra ips\" >> /etc/csf/csf.ignore\n      else\n        echo \"${_IP} already listed in /etc/csf/csf.ignore\"\n      fi\n    done\n  fi\n}\n\n_whitelist_ip_site24x7() {\n  if [ ! 
-e \"/root/.whitelist.dont.cleanup.cnf\" ]; then\n    echo removing site24x7 ips from csf.allow\n    _NOW=$(date +%y%m%d-%H%M%S)\n    cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-site24x7-${_NOW}\n    sed -i \"s/.*site24x7.*//g\" /etc/csf/csf.allow\n    wait\n    echo removing site24x7 ips from csf.ignore\n    sed -i \"s/.*site24x7.*//g\" /etc/csf/csf.ignore\n    wait\n  fi\n\n  _IPS=$(host site24x7.enduserexp.com 1.1.1.1  \\\n    | grep 'has address' \\\n    | cut -d ' ' -f4 \\\n    | sed 's/[^0-9\\.]//g' \\\n    | sort \\\n    | uniq 2>&1)\n\n  if [ -z \"${_IPS}\" ] \\\n    || [[ ! \"${_IPS}\" =~ \"104.236.16.22\" ]] \\\n    || [[ \"${_IPS}\" =~ \"HINFO\" ]]; then\n    _IPS=$(dig site24x7.enduserexp.com \\\n      | grep 'IN.*A' \\\n      | cut -d 'A' -f2 \\\n      | sed 's/[^0-9\\.]//g' \\\n      | sort \\\n      | uniq 2>&1)\n  fi\n\n  echo _IPS site24x7 list..\n  echo ${_IPS}\n\n  for _IP in ${_IPS}; do\n    echo checking csf.allow site24x7 ${_IP} now...\n    _IP_CHECK=$(cat /etc/csf/csf.allow \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\" \\\n      | grep -F \"${_IP}\" 2>&1)\n    if [ -z \"${_IP_CHECK}\" ]; then\n      echo \"${_IP} not yet listed in /etc/csf/csf.allow\"\n      echo \"tcp|in|d=80|s=${_IP} # site24x7 ips\" >> /etc/csf/csf.allow\n    else\n      echo \"${_IP} already listed in /etc/csf/csf.allow\"\n    fi\n  done\n\n  if [ -e \"/root/.ignore.site24x7.firewall.cnf\" ]; then\n    for _IP in ${_IPS}; do\n      echo checking csf.ignore site24x7 ${_IP} now...\n      _IP_CHECK=$(cat /etc/csf/csf.ignore \\\n        | cut -d '#' -f1 \\\n        | sort \\\n        | uniq \\\n        | tr -d \"\\s\" \\\n        | grep -F \"${_IP}\" 2>&1)\n      if [ -z \"${_IP_CHECK}\" ]; then\n        echo \"${_IP} not yet listed in /etc/csf/csf.ignore\"\n        echo \"${_IP} # site24x7 ips\" >> /etc/csf/csf.ignore\n      else\n        echo \"${_IP} already listed in /etc/csf/csf.ignore\"\n      fi\n    done\n  fi\n\n 
 if [ ! -e \"/root/.whitelist.site24x7.cnf\" ]; then\n    csf -tf\n    wait\n    csf -df\n    wait\n    touch /root/.whitelist.site24x7.cnf\n    [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  fi\n}\n\n_local_ip_rg() {\n  if [ -e \"/root/.local.IP.list\" ]; then\n    echo \"the file /root/.local.IP.list already exists\"\n    for _IP in `hostname -I`; do\n      _IP_CHECK=$(cat /root/.local.IP.list \\\n        | cut -d '#' -f1 \\\n        | sort \\\n        | uniq \\\n        | tr -d \"\\s\" \\\n        | grep -F \"${_IP}\" 2>&1)\n      if [ -z \"${_IP_CHECK}\" ]; then\n        echo \"${_IP} not yet listed in /root/.local.IP.list\"\n        echo \"${_IP} # local IP address\" >> /root/.local.IP.list\n      else\n        echo \"${_IP} already listed in /root/.local.IP.list\"\n      fi\n    done\n    for _IP in `cat /root/.local.IP.list \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\"`; do\n      if [ ! -z \"${_IP}\" ]; then\n        echo removing ${_IP} from d/t firewall rules\n        csf -ar ${_IP} &> /dev/null\n        csf -dr ${_IP} &> /dev/null\n        csf -tr ${_IP} &> /dev/null\n      fi\n      if [ ! -e \"/root/.local.IP.csf.listed\" ] && [ ! 
-z \"${_IP}\" ]; then\n        echo removing ${_IP} from csf.ignore\n        sed -i \"s/^${_IP} .*//g\" /etc/csf/csf.ignore\n        wait\n        echo removing ${_IP} from csf.allow\n        _NOW=$(date +%y%m%d-%H%M%S)\n        cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-local-${_NOW}\n        sed -i \"s/^${_IP} .*//g\" /etc/csf/csf.allow\n        wait\n        echo adding ${_IP} to csf.ignore\n        echo \"${_IP} # local.IP.list\" >> /etc/csf/csf.ignore\n        wait\n        echo adding ${_IP} to csf.allow\n        echo \"${_IP} # local.IP.list\" >> /etc/csf/csf.allow\n        wait\n      fi\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n    touch /root/.local.IP.csf.listed\n  else\n    echo \"the file /root/.local.IP.list does not exist\"\n    rm -f /root/.tmp.IP.list*\n    rm -f /root/.local.IP.list*\n    for _IP in `hostname -I`;do echo ${_IP} >> /root/.tmp.IP.list;done\n    for _IP in `cat /root/.tmp.IP.list \\\n      | sort \\\n      | uniq`;do echo \"${_IP} # local IP address\" >> /root/.local.IP.list;done\n    rm -f /root/.tmp.IP.list*\n  fi\n}\n\n_guard_stats() {\n  if [ ! -e \"${_HX}\" ] && [ -e \"${_HA}\" ]; then\n    mv -f ${_HA} ${_HX}\n  fi\n  if [ ! -e \"${_WX}\" ] && [ -e \"${_WA}\" ]; then\n    mv -f ${_WA} ${_WX}\n  fi\n  if [ ! -e \"${_FX}\" ] && [ -e \"${_FA}\" ]; then\n    mv -f ${_FA} ${_FX}\n  fi\n  if [ -e \"${_HA}\" ]; then\n    for _IP in `cat ${_HA} | cut -d '#' -f1 | sort | uniq`; do\n      _IP_RV=\n      _NR_TEST=\"0\"\n      _NR_TEST=$(tr -s ' ' '\\n' < ${_HA} | grep -cF \"${_IP}\" 2>&1)\n      if [ -e \"/root/.local.IP.list\" ]; then\n        _IP_CHECK=$(cat /root/.local.IP.list \\\n          | cut -d '#' -f1 \\\n          | sort \\\n          | uniq \\\n          | tr -d \"\\s\" \\\n          | grep -F \"${_IP}\" 2>&1)\n        if [ ! 
-z \"${_IP_CHECK}\" ]; then\n          _NR_TEST=\"0\"\n          echo \"${_IP} is a local IP address, ignoring ${_HA}\"\n        fi\n      fi\n      if [ ! -z \"${_NR_TEST}\" ] && [ \"${_NR_TEST}\" -ge 12 ]; then\n        echo ${_IP} ${_NR_TEST}\n        _FW_TEST=\n        _FF_TEST=\n        _FW_TEST=$(csf -g ${_IP} 2>&1)\n        _FF_TEST=$(grep -F \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n        if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n          echo \"${_IP} already denied or allowed on port 22\"\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n            csf -dr ${_IP}\n            csf -tr ${_IP}\n          fi\n        else\n          _IP_RV=$(host -s ${_IP} 2>&1)\n          if [ \"${_NR_TEST}\" -ge 24 ]; then\n            echo \"Deny ${_IP} permanently ${_NR_TEST} ${_IP_RV}\"\n            csf -d ${_IP} do not delete Brute force SSH Server ${_NR_TEST} attacks ${_IP_RV}\n          else\n            echo \"Deny ${_IP} until limits rotation ${_NR_TEST} ${_IP_RV}\"\n            csf -d ${_IP} Brute force SSH Server ${_NR_TEST} attacks ${_IP_RV}\n          fi\n        fi\n      fi\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n  if [ -e \"${_WA}\" ]; then\n    for _IP in `cat ${_WA} | cut -d '#' -f1 | sort | uniq`; do\n      _IP_RV=\n      _NR_TEST=\"0\"\n      _NR_TEST=$(tr -s ' ' '\\n' < ${_WA} | grep -cF \"${_IP}\" 2>&1)\n      if [ -e \"/root/.local.IP.list\" ]; then\n        _IP_CHECK=$(cat /root/.local.IP.list \\\n          | cut -d '#' -f1 \\\n          | sort \\\n          | uniq \\\n          | tr -d \"\\s\" \\\n          | grep -F \"${_IP}\" 2>&1)\n        if [ ! -z \"${_IP_CHECK}\" ]; then\n          _NR_TEST=\"0\"\n          echo \"${_IP} is a local IP address, ignoring ${_WA}\"\n        fi\n      fi\n      if [ ! 
-z \"${_NR_TEST}\" ] && [ \"${_NR_TEST}\" -ge 12 ]; then\n        echo ${_IP} ${_NR_TEST}\n        _FW_TEST=\n        _FF_TEST=\n        _FW_TEST=$(csf -g ${_IP} 2>&1)\n        _FF_TEST=$(grep -F \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n        if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n          echo \"${_IP} already denied or allowed on port 80\"\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n            csf -dr ${_IP}\n            csf -tr ${_IP}\n          fi\n        else\n          _IP_RV=$(host -s ${_IP} 2>&1)\n          if [ \"${_NR_TEST}\" -ge 24 ]; then\n            echo \"Deny ${_IP} permanently ${_NR_TEST} ${_IP_RV}\"\n            csf -d ${_IP} do not delete Brute force Web Server ${_NR_TEST} attacks ${_IP_RV}\n          else\n            echo \"Deny ${_IP} until limits rotation ${_NR_TEST} ${_IP_RV}\"\n            csf -d ${_IP} Brute force Web Server ${_NR_TEST} attacks ${_IP_RV}\n          fi\n        fi\n      fi\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n  if [ -e \"${_FA}\" ]; then\n    for _IP in `cat ${_FA} | cut -d '#' -f1 | sort | uniq`; do\n      _IP_RV=\n      _NR_TEST=\"0\"\n      _NR_TEST=$(tr -s ' ' '\\n' < ${_FA} | grep -cF \"${_IP}\" 2>&1)\n      if [ -e \"/root/.local.IP.list\" ]; then\n        _IP_CHECK=$(cat /root/.local.IP.list \\\n          | cut -d '#' -f1 \\\n          | sort \\\n          | uniq \\\n          | tr -d \"\\s\" \\\n          | grep -F \"${_IP}\" 2>&1)\n        if [ ! -z \"${_IP_CHECK}\" ]; then\n          _NR_TEST=\"0\"\n          echo \"${_IP} is a local IP address, ignoring ${_FA}\"\n        fi\n      fi\n      if [ ! 
-z \"${_NR_TEST}\" ] && [ \"${_NR_TEST}\" -ge 12 ]; then\n        echo ${_IP} ${_NR_TEST}\n        _FW_TEST=\n        _FF_TEST=\n        _FW_TEST=$(csf -g ${_IP} 2>&1)\n        _FF_TEST=$(grep -F \"=${_IP} \" /etc/csf/csf.allow 2>&1)\n        if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]] || [[ \"${_FW_TEST}\" =~ \"DENY\" ]] || [[ \"${_FW_TEST}\" =~ \"ALLOW\" ]]; then\n          echo \"${_IP} already denied or allowed on port 21\"\n          if [[ \"${_FF_TEST}\" =~ \"${_IP}\" ]]; then\n            csf -dr ${_IP}\n            csf -tr ${_IP}\n          fi\n        else\n          _IP_RV=$(host -s ${_IP} 2>&1)\n          if [ \"${_NR_TEST}\" -ge 24 ]; then\n            echo \"Deny ${_IP} permanently ${_NR_TEST} ${_IP_RV}\"\n            csf -d ${_IP} do not delete Brute force FTP Server ${_NR_TEST} attacks ${_IP_RV}\n          else\n            echo \"Deny ${_IP} until limits rotation ${_NR_TEST} ${_IP_RV}\"\n            csf -d ${_IP} Brute force FTP Server ${_NR_TEST} attacks ${_IP_RV}\n          fi\n        fi\n      fi\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n}\n\n_whitelist_ip_dns() {\n  csf -tr 1.1.1.1\n  csf -tr 8.8.8.8\n  csf -tr 9.9.9.9\n  csf -dr 1.1.1.1\n  csf -dr 8.8.8.8\n  csf -dr 9.9.9.9\n  [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  sed -i \"s/.*1.1.1.1.*//g\"  /etc/csf/csf.allow\n  sed -i \"s/.*1.1.1.1.*//g\"  /etc/csf/csf.ignore\n  sed -i \"s/.*8.8.8.8.*//g\"  /etc/csf/csf.allow\n  sed -i \"s/.*8.8.8.8.*//g\"  /etc/csf/csf.ignore\n  sed -i \"s/.*9.9.9.9.*//g\"  /etc/csf/csf.allow\n  sed -i \"s/.*9.9.9.9.*//g\"  /etc/csf/csf.ignore\n  echo \"tcp|out|d=53|d=1.1.1.1 # Cloudflare DNS\" >> /etc/csf/csf.allow\n  echo \"tcp|out|d=53|d=8.8.8.8 # Google DNS\" >> /etc/csf/csf.allow\n  echo \"tcp|out|d=53|d=9.9.9.9 # Cleaner DNS\" >> /etc/csf/csf.allow\n  sed -i \"/^$/d\" /etc/csf/csf.ignore\n  sed -i \"/^$/d\" 
/etc/csf/csf.allow\n}\n\nif [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n  if [ -e \"/root/.local.IP.list\" ]; then\n    echo local dr/tr start $(date)\n    for _IP in `cat /root/.local.IP.list \\\n      | cut -d '#' -f1 \\\n      | sort \\\n      | uniq \\\n      | tr -d \"\\s\"`; do\n      csf -dr ${_IP} &> /dev/null\n      csf -tr ${_IP} &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    done\n  fi\n\n  _n=$((RANDOM%120+90))\n  touch /run/water.pid\n  echo Waiting ${_n} seconds...\n  sleep ${_n}\n\n  _NOW=$(date +%y%m%d-%H%M%S)\n  _NOW=${_NOW//[^0-9-]/}\n  _useCnfUpdate=NO\n  _vBs=\"/var/backups\"\n  _useCnf=\"/etc/csf/csf.allow\"\n  _preCnf=\"${_vBs}/dragon/t/csf.allow.backup-${_NOW}\"\n  _brkCnf=\"${_vBs}/dragon/t/csf.allow.broken-${_NOW}\"\n  if [ -f \"${_useCnf}\" ]; then\n    mkdir -p ${_vBs}/dragon/t/\n    cp -af ${_useCnf} ${_preCnf}\n  fi\n\n  _whitelist_ip_dns\n  _whitelist_ip_pingdom\n  _whitelist_ip_cloudflare\n  _whitelist_ip_googlebot\n  _whitelist_ip_microsoft\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_imperva\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_sucuri\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_authzero\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_site24x7_extra\n  [ -e \"/root/.extended.firewall.exceptions.cnf\" ] && _whitelist_ip_site24x7\n\n  if [ -f \"${_useCnf}\" ]; then\n    _diffCnfTest=$(diff -w -B \\\n      -I pingdom \\\n      -I cloudflare \\\n      -I googlebot \\\n      -I microsoft \\\n      -I imperva \\\n      -I sucuri \\\n      -I authzero \\\n      -I site24x7 \\\n      -I DHCP ${_useCnf} ${_preCnf} 2>&1)\n    if [ -z \"${_diffCnfTest}\" ]; then\n      _useCnfUpdate=YES\n      echo \"YES $(date) diff0 empty\" >> ${_vBs}/dragon/t/csf.log\n    else\n      _diffCnfTest=$(echo -n ${_diffCnfTest} | fmt -su -w 2500 2>&1)\n     
 echo \"NO $(date) diff1 ${_diffCnfTest}\" >> ${_vBs}/dragon/t/csf.log\n    fi\n    if [[ \"${_diffCnfTest}\" =~ \"No such file or directory\" ]]; then\n      echo \"NO $(date) diff3 ${_diffCnfTest}\" >> ${_vBs}/dragon/t/csf.log\n    fi\n  fi\n  if [ \"${_useCnfUpdate}\" = \"NO\" ]; then\n    cp -af ${_useCnf} ${_brkCnf}\n    cp -af ${_preCnf} ${_useCnf}\n  fi\n\n  if [ -e \"/root/.full.csf.cleanup.cnf\" ]; then\n    sed -i \"s/.*do not delete.*//g\" /etc/csf/csf.deny\n    wait\n    sed -i \"/^$/d\" /etc/csf/csf.deny\n    wait\n  fi\n\n  pkill -9 -f ConfigServer\n  killall sleep &> /dev/null\n  rm -f /etc/csf/csf.error\n  if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n    csf -ra &> /dev/null\n    synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  else\n    csf -r &> /dev/null\n  fi\n  csf -tf\n  ### Linux kernel TCP SACK CVEs mitigation\n  ### CVE-2019-11477 SACK Panic\n  ### CVE-2019-11478 SACK Slowness\n  ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n    _SACK_TEST=$(ip6tables --list | grep tcpmss)\n    if [[ ! 
\"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n      sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n      iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    fi\n  fi\n\n  echo local start $(date)\n  _local_ip_rg\n\n  _HA=/var/xdrago/monitor/log/hackcheck.archive.log\n  _HX=/var/xdrago/monitor/log/hackcheck.archive.x3.log\n  _WA=/var/xdrago/monitor/log/scan_nginx.archive.log\n  _WX=/var/xdrago/monitor/log/scan_nginx.archive.x3.log\n  _FA=/var/xdrago/monitor/log/hackftp.archive.log\n  _FX=/var/xdrago/monitor/log/hackftp.archive.x3.log\n\n  echo guard start $(date)\n  _guard_stats\n  rm -f /var/xdrago/monitor/log/ssh.log\n  rm -f /var/xdrago/monitor/log/web.log\n  rm -f /var/xdrago/monitor/log/ftp.log\n\n  pkill -9 -f ConfigServer\n  killall sleep &> /dev/null\n  rm -f /etc/csf/csf.error\n  service lfd restart\n  _NOW=$(date +%y%m%d-%H%M%S)\n  cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-dhcp-${_NOW}\n  sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n  wait\n  _NOW=$(date +%y%m%d-%H%M%S)\n  cp -a /etc/csf/csf.allow /var/backups/csf/water/csf.allow-clear-${_NOW}\n  sed -i \"/^$/d\" /etc/csf/csf.allow\n  wait\n  sed -i \"/^$/d\" /etc/csf/csf.ignore\n  wait\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _DHCP_LOG=\"/var/log/daemon.log\"\n  else\n    _DHCP_LOG=\"/var/log/syslog\"\n  fi\n  grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n    if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n      IFS='.' 
read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n      # Force base-10 so octets with leading zeros are not parsed as octal\n      if (( 10#${oct1} <= 255 && 10#${oct2} <= 255 && 10#${oct3} <= 255 && 10#${oct4} <= 255 )); then\n        echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n      fi\n    fi\n  done\n  if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n    csf -ra &> /dev/null\n    synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  else\n    csf -r &> /dev/null\n  fi\n  ### Linux kernel TCP SACK CVEs mitigation\n  ### CVE-2019-11477 SACK Panic\n  ### CVE-2019-11478 SACK Slowness\n  ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n  if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n    _SACK_TEST=$(ip6tables --list | grep tcpmss)\n    if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n      sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n      iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n      [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    fi\n  fi\n  rm -f /run/water.pid\n  echo guard fin $(date)\n  ntpdate pool.ntp.org > /dev/null 2>&1 &\nfi\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/ip_access.sh",
    "content": "#!/bin/bash\n\n# Define the file paths\n_aegir_health_check=\"/var/aegir/.drush/hm.alias.drushrc.php\"\n_drush_health_check=\"/var/aegir/drush/drush\"\n_ctrl_dir=\"/var/aegir/control/ip\"\n_input_file=\"${_ctrl_dir}/access.txt\"\n_nginx_access_path=\"/var/aegir/config/includes/ip_access\"\n_backup_dir=\"/var/aegir/undo\"\n_current_backup_file=\"${_backup_dir}/.nginx_access_conf.current.bak.tar.gz\"\n_last_good_backup_file=\"${_backup_dir}/.nginx_access_conf.last_good.bak.tar.gz\"\n_timestamp_file=\"${_nginx_access_path}/.access_last_mod_time\"\n_ssh_ips_hash_file=\"${_nginx_access_path}/.ssh_ips_hash\"\n_server_ip_file=\"/root/.found_correct_ipv4.cnf\"\n\n# Regular expression for validating IPv4 addresses and site names\n_ipv4_regex=\"^([0-9]{1,3}\\.){3}[0-9]{1,3}$\"\n_site_name_regex=\"^([a-zA-Z0-9_-]+\\.)*[a-zA-Z0-9_-]+\\.[a-zA-Z]{2,}$\"\n\nif [[ ! -f \"${_aegir_health_check}\" ]] || [[ ! -x \"${_drush_health_check}\" ]]; then\n  echo \"Server is not ready yet. Exiting.\"\n  exit 1\nfi\n\n# Ensure the ctrl, output and backup directories exist\nmkdir -p \"${_backup_dir}\"\nmkdir -p \"${_ctrl_dir}\"\nmkdir -p \"${_nginx_access_path}\"\n\n# Create a dummy input file if does not exist\n[[ ! -f \"${_input_file}\" ]] && echo \"sqladmin.com 192.168.1.1\" > ${_input_file}\n\nif [[ ! -f \"${_server_ip_file}\" ]]; then\n  echo \"Server IP file ${_server_ip_file} not found. 
Exiting.\"\n  exit 1\nfi\n\n# Get the server's own IP address from the configuration file\n_server_ip=$(cat \"${_server_ip_file}\" 2>/dev/null)\n\n# Function to get currently logged in SSH IPs\n_get_ssh_ips() {\n  # Use `netstat -tn` to get logged-in user IPs, filter out local sessions, and return unique, sorted IPs\n  netstat -tn | awk '$4 ~ /:22$/ && $6 == \"ESTABLISHED\" { split($5, a, \":\"); print a[1] }' | sort | uniq\n}\n\n# Store SSH IPs and compute hash\n_ssh_ips=$(_get_ssh_ips)\n_ssh_ips_hash=$(echo \"${_ssh_ips}\" | md5sum | awk '{print $1}')\n\n# Check if the timestamp file exists before trying to read it\nif [[ -f \"${_timestamp_file}\" ]]; then\n  _last_mod_time=$(cat \"${_timestamp_file}\" 2>/dev/null || echo 0)\nelse\n  _last_mod_time=0\nfi\n\n# Get the current modification time of the input file\n_current_mod_time=$(stat -c %Y \"${_input_file}\" 2>/dev/null)\nif [[ $? -ne 0 ]]; then\n  echo \"Failed to get modification time for ${_input_file}. Exiting.\"\n  exit 1\nfi\n\n# Check if the SSH IP hash file exists\nif [[ -f \"${_ssh_ips_hash_file}\" ]]; then\n  _previous_ssh_ips_hash=$(cat \"${_ssh_ips_hash_file}\" 2>/dev/null || echo \"\")\nelse\n  _previous_ssh_ips_hash=\"\"\nfi\n\n# Check if we need to update the whitelists based on file or SSH IP changes\nif [[ \"${_current_mod_time}\" -le \"${_last_mod_time}\" && \"${_ssh_ips_hash}\" == \"${_previous_ssh_ips_hash}\" ]]; then\n  echo \"No changes detected in ${_input_file} or SSH IPs. 
Exiting.\"\n  exit 0\nfi\n\n# Backup the current configuration files before making changes, if they exist\nif [[ -d \"${_nginx_access_path}\" ]]; then\n  tar -czf \"${_current_backup_file}\" -C \"${_nginx_access_path}\" .\nelse\n  echo \"No existing configuration directory to backup.\"\nfi\n\n# Function to generate the IP whitelist include files per vhost\n_generate_whitelists() {\n  while IFS= read -r _line; do\n    # Skip empty lines\n    [[ -z \"${_line}\" ]] && continue\n\n    # Split the line into an array\n    read -ra _fields <<< \"${_line}\"\n\n    # The first field is the site name\n    _site_name=\"${_fields[0]}\"\n\n    # Convert the site name to lowercase\n    _site_name=$(echo \"${_site_name}\" | tr '[:upper:]' '[:lower:]')\n\n    # Validate the site name\n    if [[ ! ${_site_name} =~ ${_site_name_regex} ]]; then\n      echo \"Invalid site name detected: ${_site_name}. Skipping.\"\n      continue\n    fi\n\n    # Prepare the whitelist include file path\n    whitelist_file=\"${_nginx_access_path}/${_site_name}.conf\"\n\n    # Collect IP addresses for this site\n    _ip_addresses=\"${_fields[@]:1}\"\n    _ip_list=()\n\n    # Always include loopback, server's own IP, and SSH logged-in IPs\n    _ip_list+=(\"127.0.0.1\")\n    [[ -n \"${_server_ip}\" ]] && _ip_list+=(\"${_server_ip}\")\n\n    # Add SSH IPs to the allowed list\n    for _ssh_ip in ${_ssh_ips}; do\n      _ip_list+=(\"${_ssh_ip}\")\n    done\n\n    for _ip in ${_ip_addresses}; do\n      # Validate the IP address\n      if [[ ${_ip} =~ ${_ipv4_regex} ]]; then\n        _ip_list+=(\"${_ip}\")\n      else\n        echo \"Invalid IP address format detected: ${_ip}. 
Skipping.\"\n      fi\n    done\n\n    # Remove duplicates and sort the IP list\n    _ip_list_sorted=$(printf \"%s\\n\" \"${_ip_list[@]}\" | sort | uniq)\n\n    # Write the IP whitelist configuration to the file\n    {\n      for _ip in ${_ip_list_sorted}; do\n        echo \"allow ${_ip};\"\n      done\n      echo \"deny all;\"\n    } > \"$whitelist_file\"\n\n  done < \"${_input_file}\"\n}\n\n# Generate the IP whitelist files\n_generate_whitelists\n\n# Test the new Nginx configuration\nnginx_configtest=$(service nginx configtest 2>&1)\nif [[ $? -ne 0 ]]; then\n  echo \"Nginx configuration test failed: $nginx_configtest\"\n  echo \"Reverting to the last known good configuration.\"\n  if [[ -f \"${_last_good_backup_file}\" ]]; then\n    tar -xzf \"${_last_good_backup_file}\" -C \"${_nginx_access_path}\"\n    service nginx reload\n  else\n    echo \"No backup found to revert to. Manual intervention required.\"\n  fi\n  exit 1\nfi\n\n# Reload Nginx if the configuration test passed\nservice nginx reload\nif [[ $? -ne 0 ]]; then\n  echo \"Nginx reload failed. Reverting to the last known good configuration.\"\n  if [[ -f \"${_last_good_backup_file}\" ]]; then\n    tar -xzf \"${_last_good_backup_file}\" -C \"${_nginx_access_path}\"\n    service nginx reload\n  else\n    echo \"No backup found to revert to. Manual intervention required.\"\n  fi\n  exit 1\nfi\n\n# If everything is successful, update the last known good backup\ntar -czf \"${_last_good_backup_file}\" -C \"${_nginx_access_path}\" .\n\n# Update the timestamp file and SSH IPs hash\necho \"${_current_mod_time}\" > \"${_timestamp_file}\"\necho \"${_ssh_ips_hash}\" > \"${_ssh_ips_hash_file}\"\n\n# Output the result\necho \"Nginx IP whitelist configuration updated and Nginx reloaded successfully.\"\n\n"
  },
  {
    "path": "aegir/tools/system/log/EMPTY.txt",
    "content": "\n"
  },
  {
    "path": "aegir/tools/system/manage_ltd_users.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n_OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n_hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n\n_usrGroup=users\n_WEBG=www-data\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_pthLog=\"/var/log/boa\"\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ -e \"/root/.pause_tasks_maint.cnf\" ] && exit 0\n\nif [ -x \"/usr/bin/gpg2\" ]; then\n  _GPG=gpg2\nelse\n  _GPG=gpg\nfi\n\n###-------------SYSTEM-----------------###\n\n_os_detection_minimal() {\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! 
-z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n_find_fast_mirror_early() {\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! -e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && _USE_MIR=\"files.aegir.cc\"\n      else\n        _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    _USE_MIR=\"files.aegir.cc\"\n  fi\n  _urlDev=\"http://${_USE_MIR}/dev\"\n  _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n###----------------------------###\n##    Manage ltd shell users    ##\n###----------------------------###\n#\n# Remove dangerous stuff from the string.\n_sanitize_string() {\n  echo \"$1\" | sed 's/[\\\\\\/\\^\\?\\>\\`\\#\\\"\\{\\(\\&\\|\\*]//g; 
s/\\(['\"'\"'\\]\\)//g'\n}\n#\n# Add allow-snail group if not exists.\n_add_allow_snail_if_not_exists() {\n  _SNAIL_EXISTS=$(getent group allow-snail 2>&1)\n  if [[ ! \"${_SNAIL_EXISTS}\" =~ \"allow-snail\" ]]; then\n    addgroup --system allow-snail &> /dev/null\n  fi\n}\n#\n# Add ltd-shell group if not exists.\n_add_ltd_group_if_not_exists() {\n  _LTD_EXISTS=$(getent group ltd-shell 2>&1)\n  if [[ ! \"${_LTD_EXISTS}\" =~ \"ltd-shell\" ]]; then\n    addgroup --system ltd-shell &> /dev/null\n  fi\n}\n#\n# Enable chattr.\n_enable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    _U_HD=\"/home/$1/.drush\"\n    _U_TP=\"/home/$1/.tmp\"\n    _U_II=\"${_U_HD}/php.ini\"\n    if [ ! -e \"${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" = \"YES\" ]; then\n        rm -rf ${_U_HD}/\n      else\n        rm -f ${_U_HD}/{drush_make,registry_rebuild,clean_missing_modules}\n        rm -f ${_U_HD}/{drupalgeddon,drush_ecl,make_local,safe_cache_form*}\n        rm -f ${_U_HD}/usr/{drush_make,registry_rebuild,clean_missing_modules}\n        rm -f ${_U_HD}/usr/{drupalgeddon,drush_ecl,make_local,safe_cache_form*}\n        rm -f ${_U_HD}/usr/{mydropwizard,utf8mb4_convert}\n        rm -f ${_U_HD}/.ctrl*\n        rm -rf ${_U_HD}/{cache,drush.ini,*drushrc*,*.inc}\n      fi\n      mkdir -p ${_U_HD}/usr\n      mkdir -p ${_U_TP}\n      touch ${_U_TP}\n      find ${_U_TP}/ -mtime +0 -exec rm -rf {} \\; &> /dev/null\n      chown $1:${_usrGroup} ${_U_TP}\n      chown $1:${_usrGroup} ${_U_HD}\n      chmod 02755 ${_U_TP}\n      chmod 02755 ${_U_HD}\n      if [ ! -L \"${_U_HD}/usr/registry_rebuild\" ] \\\n        && [ -e \"${_dscUsr}/.drush/usr/registry_rebuild\" ]; then\n        ln -sfn ${_dscUsr}/.drush/usr/registry_rebuild \\\n          ${_U_HD}/usr/registry_rebuild\n      fi\n      if [ ! 
-L \"${_U_HD}/usr/clean_missing_modules\" ] \\\n        && [ -e \"${_dscUsr}/.drush/usr/clean_missing_modules\" ]; then\n        ln -sfn ${_dscUsr}/.drush/usr/clean_missing_modules \\\n          ${_U_HD}/usr/clean_missing_modules\n      fi\n      if [ ! -L \"${_U_HD}/usr/drupalgeddon\" ] \\\n        && [ -e \"${_dscUsr}/.drush/usr/drupalgeddon\" ]; then\n        ln -sfn ${_dscUsr}/.drush/usr/drupalgeddon \\\n          ${_U_HD}/usr/drupalgeddon\n      fi\n      if [ ! -L \"${_U_HD}/usr/drush_ecl\" ] \\\n        && [ -e \"${_dscUsr}/.drush/usr/drush_ecl\" ]; then\n        ln -sfn ${_dscUsr}/.drush/usr/drush_ecl \\\n          ${_U_HD}/usr/drush_ecl\n      fi\n      if [ ! -L \"${_U_HD}/usr/safe_cache_form_clear\" ] \\\n        && [ -e \"${_dscUsr}/.drush/usr/safe_cache_form_clear\" ]; then\n        ln -sfn ${_dscUsr}/.drush/usr/safe_cache_form_clear \\\n          ${_U_HD}/usr/safe_cache_form_clear\n      fi\n      if [ ! -L \"${_U_HD}/usr/utf8mb4_convert\" ] \\\n        && [ -e \"${_dscUsr}/.drush/usr/utf8mb4_convert\" ]; then\n        ln -sfn ${_dscUsr}/.drush/usr/utf8mb4_convert \\\n          ${_U_HD}/usr/utf8mb4_convert\n      fi\n    fi\n\n    if [ -e \"${_dscUsr}/tools/drush/drush.php\" ]; then\n      _CHECK_USE_PHP_CLI=$(grep \"/opt/php\" ${_dscUsr}/tools/drush/drush.php 2>&1)\n    else\n      _CHECK_USE_PHP_CLI=php84\n    fi\n\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php${e}\" ]] \\\n        && [ ! -e \"${_U_HD}/.ctrl.php${e}.${_xSrl}.pid\" ]; then\n        _PHP_CLI_UPDATE=YES\n      fi\n    done\n    echo _PHP_CLI_UPDATE is ${_PHP_CLI_UPDATE} for $1\n\n    if [ \"${_PHP_CLI_UPDATE}\" = \"YES\" ] \\\n      || [ ! -e \"${_U_II}\" ] \\\n      || [ ! -e \"${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n      mkdir -p ${_U_HD}\n      rm -f ${_U_HD}/.ctrl.php*\n      rm -f ${_U_II}\n      if [ ! 
-z \"${_T_CLI_VRN}\" ]; then\n        _USE_PHP_CLI=\"${_T_CLI_VRN}\"\n        echo \"_USE_PHP_CLI is ${_USE_PHP_CLI} for $1 at ${_USER} WTF\"\n        echo \"_T_CLI_VRN is ${_T_CLI_VRN}\"\n      else\n        if [ -e \"${_dscUsr}/tools/drush/drush.php\" ]; then\n          _CHECK_USE_PHP_CLI=$(grep \"/opt/php\" ${_dscUsr}/tools/drush/drush.php 2>&1)\n        else\n          _CHECK_USE_PHP_CLI=php84\n        fi\n        echo \"_CHECK_USE_PHP_CLI is ${_CHECK_USE_PHP_CLI} for $1 at ${_USER}\"\n        if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php85\" ]]; then\n          _USE_PHP_CLI=8.5\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php84\" ]]; then\n          _USE_PHP_CLI=8.4\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php83\" ]]; then\n          _USE_PHP_CLI=8.3\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php82\" ]]; then\n          _USE_PHP_CLI=8.2\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php81\" ]]; then\n          _USE_PHP_CLI=8.1\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php80\" ]]; then\n          _USE_PHP_CLI=8.0\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php74\" ]]; then\n          _USE_PHP_CLI=7.4\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php73\" ]]; then\n          _USE_PHP_CLI=7.3\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php72\" ]]; then\n          _USE_PHP_CLI=7.2\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php71\" ]]; then\n          _USE_PHP_CLI=7.1\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php70\" ]]; then\n          _USE_PHP_CLI=7.0\n        elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php56\" ]]; then\n          _USE_PHP_CLI=5.6\n        fi\n      fi\n      echo _USE_PHP_CLI is ${_USE_PHP_CLI} for $1\n      if [ \"${_USE_PHP_CLI}\" = \"8.5\" ]; then\n        cp -af /opt/php85/lib/php.ini ${_U_II}\n        _U_INI=85\n      elif [ \"${_USE_PHP_CLI}\" = \"8.4\" ]; then\n        cp -af /opt/php84/lib/php.ini ${_U_II}\n        _U_INI=84\n      elif [ \"${_USE_PHP_CLI}\" = \"8.3\" ]; then\n        cp -af 
/opt/php83/lib/php.ini ${_U_II}\n        _U_INI=83\n      elif [ \"${_USE_PHP_CLI}\" = \"8.2\" ]; then\n        cp -af /opt/php82/lib/php.ini ${_U_II}\n        _U_INI=82\n      elif [ \"${_USE_PHP_CLI}\" = \"8.1\" ]; then\n        cp -af /opt/php81/lib/php.ini ${_U_II}\n        _U_INI=81\n      elif [ \"${_USE_PHP_CLI}\" = \"8.0\" ]; then\n        cp -af /opt/php80/lib/php.ini ${_U_II}\n        _U_INI=80\n      elif [ \"${_USE_PHP_CLI}\" = \"7.4\" ]; then\n        cp -af /opt/php74/lib/php.ini ${_U_II}\n        _U_INI=74\n      elif [ \"${_USE_PHP_CLI}\" = \"7.3\" ]; then\n        cp -af /opt/php73/lib/php.ini ${_U_II}\n        _U_INI=73\n      elif [ \"${_USE_PHP_CLI}\" = \"7.2\" ]; then\n        cp -af /opt/php72/lib/php.ini ${_U_II}\n        _U_INI=72\n      elif [ \"${_USE_PHP_CLI}\" = \"7.1\" ]; then\n        cp -af /opt/php71/lib/php.ini ${_U_II}\n        _U_INI=71\n      elif [ \"${_USE_PHP_CLI}\" = \"7.0\" ]; then\n        cp -af /opt/php70/lib/php.ini ${_U_II}\n        _U_INI=70\n      elif [ \"${_USE_PHP_CLI}\" = \"5.6\" ]; then\n        cp -af /opt/php56/lib/php.ini ${_U_II}\n        _U_INI=56\n      fi\n      if [ -e \"${_U_II}\" ]; then\n        _INI=\"open_basedir = \\\".: \\\n          /data/all:        \\\n          /data/conf:       \\\n          /data/disk/all:   \\\n          /home/$1:         \\\n          /opt/php56:       \\\n          /opt/php70:       \\\n          /opt/php71:       \\\n          /opt/php72:       \\\n          /opt/php73:       \\\n          /opt/php74:       \\\n          /opt/php80:       \\\n          /opt/php81:       \\\n          /opt/php82:       \\\n          /opt/php83:       \\\n          /opt/php84:       \\\n          /opt/php85:       \\\n          /opt/tika:        \\\n          /opt/tika7:       \\\n          /opt/tika8:       \\\n          /opt/tika9:       \\\n          /dev/urandom:     \\\n          /opt/tools/drush: \\\n          /usr/bin:         \\\n          /usr/local/bin:   \\\n          
${_dscUsr}/.drush/usr: \\\n          ${_dscUsr}/distro:     \\\n          ${_dscUsr}/platforms:  \\\n          ${_dscUsr}/static\\\"\"\n        _INI=$(echo \"${_INI}\" | sed \"s/ //g\" 2>&1)\n        _INI=$(echo \"${_INI}\" | sed \"s/open_basedir=/open_basedir = /g\" 2>&1)\n        _INI=${_INI//\\//\\\\\\/}\n        _QTP=${_U_TP//\\//\\\\\\/}\n        sed -i \"s/.*open_basedir =.*/${_INI}/g\"                              ${_U_II}\n        wait\n        sed -i \"s/.*error_reporting =.*/error_reporting = 1/g\"               ${_U_II}\n        wait\n        sed -i \"s/.*session.save_path =.*/session.save_path = ${_QTP}/g\"     ${_U_II}\n        wait\n        sed -i \"s/.*soap.wsdl_cache_dir =.*/soap.wsdl_cache_dir = ${_QTP}/g\" ${_U_II}\n        wait\n        sed -i \"s/.*sys_temp_dir =.*/sys_temp_dir = ${_QTP}/g\"               ${_U_II}\n        wait\n        sed -i \"s/.*upload_tmp_dir =.*/upload_tmp_dir = ${_QTP}/g\"           ${_U_II}\n        wait\n        echo > ${_U_HD}/.ctrl.php${_U_INI}.${_xSrl}.pid\n        echo > ${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\n      fi\n    fi\n\n    _UQ=\"$1\"\n    chage -M 99999 ${_UQ} &> /dev/null\n    _UPDATE_GEMS=NO\n    ###\n    ### Cleanup of no longer used/allowed Ruby Gems and NPM access leftovers\n    ###\n    [ -e \"/home/${_UQ}/.rvm\" ] && rm -rf /home/${_UQ}/.rvm*\n    [ -e \"/home/${_UQ}/.gem\" ] && rm -rf /home/${_UQ}/.gem*\n    [ -e \"/home/${_UQ}/.npm\" ] && rm -rf /home/${_UQ}/.npm*\n    [ -e \"/home/${_UQ}/.mkshrc\" ] && rm -rf /home/${_UQ}/.mkshrc\n    if [ \"${_UQ}\" = \"${_USER}.ftp\" ]; then\n      [ ! -d \"/home/${_UQ}/.composer\" ] && su -s /bin/bash - ${_UQ} -c \"mkdir ~/.composer\"\n    else\n      [ -d \"/home/${_UQ}/.composer\" ] && rm -rf /home/${_UQ}/.composer\n    fi\n    ###\n    ### Check if Ruby Gems and NPM access should be added or removed\n    ###\n    if [ -f \"${_dscUsr}/static/control/compass.info\" ]; then\n      ###\n      ### Check if Ruby Gems access needs an update\n      ###\n      if [ ! 
-e \"/opt/user/gems/${_UQ}/gems/oily_png-1.1.1\" ] \\\n        || [ ! -e \"${_dscUsr}/log/.gems.build.rb.${_UQ}.${_xSrl}.txt\" ]; then\n        _UPDATE_GEMS=YES\n      fi\n      if [ \"${_UQ}\" = \"${_USER}.ftp\" ] \\\n        && [ ! -e \"/opt/user/npm/${_UQ}/.npm-packages/bin\" ] \\\n        && [ -e \"/root/.allow.node.lshell.cnf\" ]; then\n        _UPDATE_GEMS=YES\n      fi\n    else\n      ###\n      ### Remove no longer used Ruby Gems and NPM access\n      ###\n      [ -e \"/home/${_UQ}/.npm\" ] && rm -rf /home/${_UQ}/.npm*\n      [ -e \"/opt/user/gems/${_UQ}\" ] && rm -rf /opt/user/gems/${_UQ}\n      [ -e \"/opt/user/npm/${_UQ}\" ] && rm -rf /opt/user/npm/${_UQ}\n      [ -e \"${_dscUsr}/log\" ] && rm -f ${_dscUsr}/log/.gems.build*\n      [ -e \"${_dscUsr}/log\" ] && rm -f ${_dscUsr}/log/.npm.build*\n    fi\n    if [ \"${_UPDATE_GEMS}\" = \"YES\" ]; then\n      ###\n      ### Ruby Gems are allowed for both main and client SSH accounts\n      ###\n      [ ! -d \"/opt/user/gems/${_UQ}\" ] && mkdir -p /opt/user/gems/${_UQ}\n      chmod 1777 /opt/user/gems\n      chown -R ${_UQ}:users /opt/user/gems/${_UQ}\n      chown root:root /opt/user/gems\n      if [ -d \"/opt/user/gems/${_UQ}\" ] \\\n        && [ -e \"/usr/local/lib/ruby/gems/3.3.0/gems/oily_png-1.1.1\" ] \\\n        && [ ! 
-e \"/opt/user/gems/${_UQ}/gems/oily_png-1.1.1\" ]; then\n        cp -a /usr/local/lib/ruby/gems/3.3.0/gems /opt/user/gems/${_UQ}/\n        cp -a /usr/local/lib/ruby/gems/3.3.0/specifications /opt/user/gems/${_UQ}/\n        cp -a /usr/local/lib/ruby/gems/3.3.0/extensions /opt/user/gems/${_UQ}/\n        cp -a /usr/local/lib/ruby/gems/3.3.0/doc /opt/user/gems/${_UQ}/\n        chown -R ${_UQ}:users /opt/user/gems/${_UQ}\n        [ -e \"${_dscUsr}/log\" ] && rm -f ${_dscUsr}/log/.gems.build*\n        touch ${_dscUsr}/log/.gems.build.rb.${_UQ}.${_xSrl}.txt\n      fi\n      ###\n      ### Check if NPM support is allowed and if needs an update\n      ### NOTE: It will be restricted to the main SSH account only\n      ###\n      if [ -e \"/root/.allow.node.lshell.cnf\" ] \\\n        && [ \"${_UQ}\" = \"${_USER}.ftp\" ] \\\n        && [ -x \"/usr/bin/node\" ] \\\n        && [ -e \"/home/${_UQ}/static/control\" ]; then\n        if [ ! -e \"/opt/user/npm/${_UQ}/.npm-packages/bin\" ] \\\n          || [ ! -e \"${_dscUsr}/log/.npm.build.${_UQ}.${_xSrl}.txt\" ]; then\n          [ ! -d \"/opt/user/npm\" ] && mkdir -p /opt/user/npm\n          chown root:root /opt/user/npm\n          chmod 1777 /opt/user/npm\n          [ ! -d \"/opt/user/npm/${_UQ}\" ] && mkdir -p /opt/user/npm/${_UQ}\n          [ ! 
-e \"/home/${_UQ}/.npmrc\" ] && su -s /bin/bash - ${_UQ} -c \"echo 'prefix = /opt/user/npm/${_UQ}/.npm-packages' > ~/.npmrc\"\n          [ -e \"/home/${_UQ}/.npmrc\" ] && chattr +i /home/${_UQ}/.npmrc\n          mkdir -p /opt/user/npm/${_UQ}/.bundle\n          mkdir -p /opt/user/npm/${_UQ}/.composer\n          mkdir -p /opt/user/npm/${_UQ}/.config\n          mkdir -p /opt/user/npm/${_UQ}/.npm\n          mkdir -p /opt/user/npm/${_UQ}/.npm-packages/bin\n          mkdir -p /opt/user/npm/${_UQ}/.npm-packages/lib/node_modules\n          mkdir -p /opt/user/npm/${_UQ}/.sass-cache\n          chown -R ${_UQ}:users /opt/user/npm/${_UQ}\n          [ -e \"${_dscUsr}/log\" ] && rm -f ${_dscUsr}/log/.npm.build*\n          touch ${_dscUsr}/log/.npm.build.${_UQ}.${_xSrl}.txt\n        fi\n      else\n        [ -e \"/home/${_UQ}/.npm\" ] && rm -rf /home/${_UQ}/.npm*\n        [ -e \"/opt/user/npm/${_UQ}\" ] && rm -rf /opt/user/npm/${_UQ}\n        [ -e \"${_dscUsr}/log\" ] && rm -f ${_dscUsr}/log/.npm.build*\n      fi\n    fi\n    rm -f /home/${_UQ}/{.profile,.bash_logout,.bash_profile,.bashrc,.z_login,.zshrc}\n    chage -M 90 ${_UQ} &> /dev/null\n\n    if [ \"$1\" != \"${_USER}.ftp\" ]; then\n      if [ -d \"/home/$1/\" ]; then\n        chattr +i /home/$1/\n      fi\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr +i /home/$1/platforms/\n        chattr +i /home/$1/platforms/* &> /dev/null\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr +i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr +i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr +i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr +i /home/$1/.bazaar/\n    fi\n  fi\n}\n#\n# Disable chattr.\n_disable_chattr() {\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! 
-z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ \"$1\" != \"${_USER}.ftp\" ]; then\n      if [ -d \"/home/$1/\" ]; then\n        chattr -i /home/$1/\n      fi\n    else\n      if [ -d \"/home/$1/platforms/\" ]; then\n        chattr -i /home/$1/platforms/\n        chattr -i /home/$1/platforms/* &> /dev/null\n      fi\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr -i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr -i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr -i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr -i /home/$1/.bazaar/\n    fi\n  fi\n}\n#\n# Kill zombies.\n_kill_zombies() {\n  for _Existing in `cat /etc/passwd | cut -d ':' -f1 | sort`; do\n    _SEC_IDY=$(id -nG ${_Existing} 2>&1)\n    if [[ \"${_SEC_IDY}\" =~ \"ltd-shell\" ]] \\\n      && [ ! -z \"${_Existing}\" ] \\\n      && [[ ! \"${_Existing}\" =~ \".ftp\"($) ]] \\\n      && [[ ! \"${_Existing}\" =~ \".web\"($) ]]; then\n      _usrParent=$(echo ${_Existing} | cut -d. -f1 | awk '{ print $1}' 2>&1)\n      _usrParentTest=${_usrParent//[^a-z0-9]/}\n      if [ ! -z \"${_usrParentTest}\" ]; then\n        _PAR_DIR=\"/data/disk/${_usrParent}/clients\"\n        _SEC_SYM=\"/home/${_Existing}/sites\"\n        _SEC_DIR=\"$(readlink -n \"${_SEC_SYM}\")\"\n        if [ ! -L \"${_SEC_SYM}\" ] || [ ! -e \"${_SEC_DIR}\" ] \\\n          || [ ! 
-e \"/home/${_usrParent}.ftp/users/${_Existing}\" ]; then\n          [ -d \"/var/backups/zombie/deleted/${_NOW}\" ] || mkdir -p /var/backups/zombie/deleted/${_NOW}\n          pkill -9 -f gpg-agent\n          _disable_chattr ${_Existing}\n          rm -rf /home/${_Existing}/.gnupg\n          deluser \\\n            --remove-home \\\n            --backup-to /var/backups/zombie/deleted/${_NOW} ${_Existing} &> /dev/null\n          rm -f /home/${_usrParent}.ftp/users/${_Existing}\n          echo Zombie from etc.passwd ${_Existing} killed\n          echo\n        fi\n      fi\n    fi\n  done\n  for _Existing in `ls /home | cut -d '/' -f1 | sort`; do\n    _isTest=${_Existing//[^a-z0-9]/}\n    if [ ! -z \"${_isTest}\" ]; then\n      _SEC_IDY=$(id -nG ${_Existing} 2>&1)\n      if [[ \"${_SEC_IDY}\" =~ \"No such user\" ]] \\\n        && [ ! -z \"${_Existing}\" ] \\\n        && [[ ! \"${_Existing}\" =~ \".ftp\"($) ]] \\\n        && [[ ! \"${_Existing}\" =~ \".web\"($) ]]; then\n        _disable_chattr ${_Existing}\n        [ -d \"/var/backups/zombie/deleted/${_NOW}\" ] || mkdir -p /var/backups/zombie/deleted/${_NOW}\n        mv /home/${_Existing} /var/backups/zombie/deleted/${_NOW}/.leftover-${_Existing}\n        _usrParent=$(echo ${_Existing} | cut -d. -f1 | awk '{ print $1}' 2>&1)\n        if [ -e \"/home/${_usrParent}.ftp/users/${_Existing}\" ]; then\n          rm -f /home/${_usrParent}.ftp/users/${_Existing}\n        fi\n        echo Zombie from home.dir ${_Existing} killed\n        echo\n      fi\n    fi\n  done\n}\n#\n# Fix dot dirs.\n_fix_dot_dirs() {\n  _usrLtdTest=${_usrLtd//[^a-z0-9]/}\n  if [ ! -z \"${_usrLtdTest}\" ]; then\n    _usrTmp=\"/home/${_usrLtd}/.tmp\"\n    if [ ! -d \"${_usrTmp}\" ]; then\n      mkdir -p ${_usrTmp}\n      chown ${_usrLtd}:${_usrGroup} ${_usrTmp}\n      chmod 02755 ${_usrTmp}\n    fi\n    _usrLftp=\"/home/${_usrLtd}/.lftp\"\n    if [ ! 
-d \"${_usrLftp}\" ]; then\n      mkdir -p ${_usrLftp}\n      chown ${_usrLtd}:${_usrGroup} ${_usrLftp}\n      chmod 02755 ${_usrLftp}\n    fi\n    _usrLhist=\"/home/${_usrLtd}/.lhistory\"\n    if [ ! -e \"${_usrLhist}\" ]; then\n      touch ${_usrLhist}\n      chown ${_usrLtd}:${_usrGroup} ${_usrLhist}\n      chmod 644 ${_usrLhist}\n    fi\n    _usrDrush=\"/home/${_usrLtd}/.drush\"\n    if [ ! -d \"${_usrDrush}\" ]; then\n      mkdir -p ${_usrDrush}\n      chown ${_usrLtd}:${_usrGroup} ${_usrDrush}\n      chmod 700 ${_usrDrush}\n    fi\n    _usrSsh=\"/home/${_usrLtd}/.ssh\"\n    if [ ! -d \"${_usrSsh}\" ]; then\n      mkdir -p ${_usrSsh}\n      chown -R ${_usrLtd}:${_usrGroup} ${_usrSsh}\n      chmod 700 ${_usrSsh}\n    fi\n    chmod 600 ${_usrSsh}/id_{r,d}sa &> /dev/null\n    chmod 600 ${_usrSsh}/known_hosts &> /dev/null\n    _usrBzr=\"/home/${_usrLtd}/.bazaar\"\n    if [ -x \"/usr/local/bin/bzr\" ]; then\n      if [ ! -z \"${_usrLtd}\" ] && [ ! -e \"${_usrBzr}/bazaar.conf\" ]; then\n        mkdir -p ${_usrBzr}\n        echo ignore_missing_extensions=True > ${_usrBzr}/bazaar.conf\n        chown -R ${_usrLtd}:${_usrGroup} ${_usrBzr}\n        chmod 700 ${_usrBzr}\n      fi\n    else\n      if [ ! -z \"${_usrLtd}\" ] && [ -d \"${_usrBzr}\" ]; then\n        rm -rf ${_usrBzr}\n      fi\n    fi\n  fi\n}\n#\n# Manage Drush _Aliases.\n_manage_sec_user_drush_aliases() {\n  if [ -e \"${_Client}\" ]; then\n    if [ -L \"${_usrLtdRoot}/sites\" ]; then\n      _symTgt=\"$(readlink -n \"${_usrLtdRoot}/sites\")\"\n    else\n      rm -f ${_usrLtdRoot}/sites\n    fi\n    if [ \"${_symTgt}\" != \"${_Client}\" ] \\\n      || [ ! -e \"${_usrLtdRoot}/sites\" ]; then\n      rm -f ${_usrLtdRoot}/sites\n      ln -sfn ${_Client} ${_usrLtdRoot}/sites\n    fi\n  fi\n  if [ ! -e \"${_usrLtdRoot}/.drush\" ]; then\n    mkdir -p ${_usrLtdRoot}/.drush\n  fi\n\n  _ALS_TEST=$(ls -la ${_usrLtdRoot}/.drush/*.alias.drushrc.php 2>&1)\n  if [[ ! 
\"${_ALS_TEST}\" =~ \"No such file\" ]]; then\n    for _Alias in `find ${_usrLtdRoot}/.drush/*.alias.drushrc.php \\\n      -maxdepth 1 -type f | sort`; do\n      _AliasName=$(echo \"${_Alias}\" | cut -d'/' -f5 | awk '{ print $1}' 2>&1)\n      _AliasName=$(echo \"${_AliasName}\" \\\n        | sed \"s/.alias.drushrc.php//g\" \\\n        | awk '{ print $1}' 2>&1)\n      if [ ! -z \"${_AliasName}\" ] \\\n        && [ ! -e \"${_usrLtdRoot}/sites/${_AliasName}\" ]; then\n        rm -f ${_usrLtdRoot}/.drush/${_AliasName}.alias.drushrc.php\n      fi\n    done\n  fi\n\n  for _Symlink in `find ${_usrLtdRoot}/sites/ \\\n    -maxdepth 1 -mindepth 1 | sort`; do\n    _SiteName=$(echo ${_Symlink}  \\\n      | cut -d'/' -f5 \\\n      | awk '{ print $1}' 2>&1)\n    _pthAliasMain=\"${_pthParen_tUsr}/.drush/${_SiteName}.alias.drushrc.php\"\n    _pthAliasCopy=\"${_usrLtdRoot}/.drush/${_SiteName}.alias.drushrc.php\"\n    if [ ! -z \"${_SiteName}\" ] && [ ! -e \"${_pthAliasCopy}\" ]; then\n      cp -af ${_pthAliasMain} ${_pthAliasCopy}\n      chmod 440 ${_pthAliasCopy}\n    elif [ ! -z \"${_SiteName}\" ]  && [ -e \"${_pthAliasCopy}\" ]; then\n      _DIFF_T=$(diff -w -B ${_pthAliasCopy} ${_pthAliasMain} 2>&1)\n      if [ ! -z \"${_DIFF_T}\" ]; then\n        cp -af ${_pthAliasMain} ${_pthAliasCopy}\n        chmod 440 ${_pthAliasCopy}\n      fi\n    fi\n  done\n}\n#\n# OK, create user.\n_ok_create_user() {\n  _usrLtdTest=${_usrLtd//[^a-z0-9]/}\n  if [ ! -z \"${_usrLtdTest}\" ]; then\n    _ADMIN=\"${_USER}.ftp\"\n    echo \"_ADMIN is == ${_ADMIN} == at _ok_create_user\"\n    _usrLtdRoot=\"/home/${_usrLtd}\"\n    _SEC_SYM=\"${_usrLtdRoot}/sites\"\n    _TMP=\"/var/tmp\"\n    if [ ! -L \"${_SEC_SYM}\" ]; then\n      [ -d \"/var/backups/zombie/deleted/${_NOW}\" ] || mkdir -p /var/backups/zombie/deleted/${_NOW}\n      mv -f ${_usrLtdRoot} /var/backups/zombie/deleted/${_NOW}/ &> /dev/null\n    fi\n    if [ ! 
-d \"${_usrLtdRoot}\" ]; then\n      if [ -e \"/usr/bin/mysecureshell\" ] && [ -e \"/etc/ssh/sftp_config\" ]; then\n        useradd -d ${_usrLtdRoot} -s /usr/bin/mysecureshell -m -N -r ${_usrLtd}\n        echo \"_usrLtdRoot is == ${_usrLtdRoot} == at _ok_create_user\"\n      else\n        useradd -d ${_usrLtdRoot} -s /usr/bin/lshell -m -N -r ${_usrLtd}\n      fi\n      adduser ${_usrLtd} ${_WEBG}\n      _ESC_LUPASS=\"\"\n      _LEN_LUPASS=0\n      if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]  ; then\n        _PWD_CHARS=64\n      elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n        _PWD_CHARS=32\n      else\n        _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n        if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n          && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n          _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n        else\n          _PWD_CHARS=32\n        fi\n        if [ ! -z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n          _PWD_CHARS=128\n        fi\n      fi\n      if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n        _RANDPASS_TEST=$(randpass -V 2>&1)\n        if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n          _ESC_LUPASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n        else\n          _ESC_LUPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n          _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n          _ESC_LUPASS=$(_sanitize_string \"${_ESC_LUPASS}\" 2>&1)\n        fi\n        _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n        _LEN_LUPASS=$(echo ${#_ESC_LUPASS} 2>&1)\n      fi\n      if [ -z \"${_ESC_LUPASS}\" ] || [ \"${_LEN_LUPASS}\" -lt 9 ]; then\n        _ESC_LUPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_LUPASS=$(_sanitize_string \"${_ESC_LUPASS}\" 2>&1)\n      fi\n      ph=$(mkpasswd -m sha-512 \"${_ESC_LUPASS}\" \\\n 
       $(openssl rand -base64 16 | tr -d '+=' | head -c 16) 2>&1)\n      usermod -p \"${ph}\" ${_usrLtd}\n      passwd -w 7 -x 90 ${_usrLtd}\n      usermod -aG lshellg ${_usrLtd}\n      usermod -aG ltd-shell ${_usrLtd}\n    fi\n    if [ ! -e \"/home/${_ADMIN}/users/${_usrLtd}\" ] \\\n      && [ ! -z \"${_ESC_LUPASS}\" ]; then\n      if [ -e \"/usr/bin/mysecureshell\" ] \\\n        && [ -e \"/etc/ssh/sftp_config\" ]; then\n        chsh -s /usr/bin/mysecureshell ${_usrLtd}\n      else\n        chsh -s /usr/bin/lshell ${_usrLtd}\n      fi\n      echo >> ${_THIS_LTD_CONF}\n      echo \"[${_usrLtd}]\" >> ${_THIS_LTD_CONF}\n      echo \"path : [${_ALLD_DIR}]\" >> ${_THIS_LTD_CONF}\n      chmod 700 ${_usrLtdRoot}\n      mkdir -p /home/${_ADMIN}/users\n      echo \"${_ESC_LUPASS}\" > /home/${_ADMIN}/users/${_usrLtd}\n    fi\n    _fix_dot_dirs\n    rm -f ${_usrLtdRoot}/{.profile,.bash_logout,.bash_profile,.bashrc}\n  fi\n}\n#\n# OK, update user.\n_ok_update_user() {\n  _usrLtdTest=${_usrLtd//[^a-z0-9]/}\n  if [ ! -z \"${_usrLtdTest}\" ]; then\n    _ADMIN=\"${_USER}.ftp\"\n    _usrLtdRoot=\"/home/${_usrLtd}\"\n    if [ -e \"/home/${_ADMIN}/users/${_usrLtd}\" ]; then\n      echo >> ${_THIS_LTD_CONF}\n      echo \"[${_usrLtd}]\" >> ${_THIS_LTD_CONF}\n      echo \"path : [${_ALLD_DIR}]\" >> ${_THIS_LTD_CONF}\n      _manage_sec_user_drush_aliases\n      chmod 700 ${_usrLtdRoot}\n    fi\n    _fix_dot_dirs\n    rm -f ${_usrLtdRoot}/{.profile,.bash_logout,.bash_profile,.bashrc}\n  fi\n}\n#\n# Add user if it does not exist.\n_add_user_if_not_exists() {\n  _usrLtdTest=${_usrLtd//[^a-z0-9]/}\n  if [ ! 
-z \"${_usrLtdTest}\" ]; then\n    _ID_EXISTS=$(getent passwd ${_usrLtd} 2>&1)\n    _ID_SHELLS=$(id -nG ${_usrLtd} 2>&1)\n    echo \"_ID_EXISTS is == ${_ID_EXISTS} == at _add_user_if_not_exists\"\n    echo \"_ID_SHELLS is == ${_ID_SHELLS} == at _add_user_if_not_exists\"\n    if [ -z \"${_ID_EXISTS}\" ]; then\n      echo \"We will create user == ${_usrLtd} ==\"\n      _ok_create_user\n      _manage_sec_user_drush_aliases\n      _enable_chattr ${_usrLtd}\n    elif [[ \"${_ID_EXISTS}\" =~ \"${_usrLtd}\" ]] \\\n      && [[ \"${_ID_SHELLS}\" =~ \"ltd-shell\" ]]; then\n      echo \"We will update user == ${_usrLtd} ==\"\n      _disable_chattr ${_usrLtd}\n      rm -rf /home/${_usrLtd}/drush-backups\n      _usrTmp=\"/home/${_usrLtd}/.tmp\"\n      if [ ! -d \"${_usrTmp}\" ]; then\n        mkdir -p ${_usrTmp}\n        chown ${_usrLtd}:${_usrGroup} ${_usrTmp}\n        chmod 02755 ${_usrTmp}\n      fi\n      # Purge stale tmp contents; -mindepth 1 protects the .tmp dir itself\n      find ${_usrTmp} -mindepth 1 -mtime +0 -exec rm -rf {} \\; &> /dev/null\n      _ok_update_user\n      _enable_chattr ${_usrLtd}\n    fi\n  fi\n}\n#\n# Manage Access Paths.\n_manage_sec_access_paths() {\n#for _Domain in `find ${_Client}/ -maxdepth 1 -mindepth 1 -type l -printf %P\\\\n | sort`\nfor _Domain in `find ${_Client}/ -maxdepth 1 -mindepth 1 -type l | sort`; do\n  _rawDom=$(echo ${_Domain} | cut -d'/' -f7 | awk '{ print $1}' 2>&1)\n  _STATIC_FILES=\"${_pthParen_tUsr}/static/files/${_rawDom}.files/\"\n  _STATIC_PRIVATE=\"${_pthParen_tUsr}/static/files/${_rawDom}.private/\"\n  _NEW_STATIC_FILES=\"${_pthParen_tUsr}/static/files/${_rawDom}/\"\n  _PATH_DOM=\"$(readlink -n \"${_Domain}\")\"\n  _mntPoint=$(find /mnt -mindepth 1 -maxdepth 1 -type d | grep \"\\.\" | head -n1) &&\n  _MNT_STATIC_FILES=\"${_mntPoint}/files/${_USER}/static/files/${_rawDom}/\"\n#   echo \"_ALLD_DIR is == ${_ALLD_DIR} == at _manage_sec_access_paths\"\n#   echo \"_rawDom is == ${_rawDom} == at _manage_sec_access_paths\"\n#   echo \"_STATIC_FILES is == ${_STATIC_FILES} == at _manage_sec_access_paths\"\n#   echo 
\"_STATIC_PRIVATE is == ${_STATIC_PRIVATE} == at _manage_sec_access_paths\"\n#   echo \"_NEW_STATIC_FILES is == ${_NEW_STATIC_FILES} == at _manage_sec_access_paths\"\n#   echo \"_PATH_DOM is == ${_PATH_DOM} == at _manage_sec_access_paths\"\n#   [ -n \"${_mntPoint}\" ] && echo \"_mntPoint is == ${_mntPoint} == at _manage_sec_access_paths\"\n#   [ -n \"${_mntPoint}\" ] && echo \"_MNT_STATIC_FILES is == ${_MNT_STATIC_FILES} == at _manage_sec_access_paths\"\n  [ -n \"${_mntPoint}\" ] && _ALLD_DIR=\"${_ALLD_DIR}, '${_PATH_DOM}', '${_STATIC_FILES}', '${_STATIC_PRIVATE}', '${_NEW_STATIC_FILES}', '${_MNT_STATIC_FILES}'\"\n  [ -z \"${_mntPoint}\" ] && _ALLD_DIR=\"${_ALLD_DIR}, '${_PATH_DOM}', '${_STATIC_FILES}', '${_STATIC_PRIVATE}', '${_NEW_STATIC_FILES}'\"\n  if [ -e \"${_PATH_DOM}\" ]; then\n    _ALLD_NUM=$(( _ALLD_NUM += 1 ))\n  fi\n  echo Done for ${_Domain} at ${_Client}\ndone\n}\n#\n# Manage Secondary Users.\n_manage_sec() {\nfor _Client in `find ${_pthParen_tUsr}/clients/ -maxdepth 1 -mindepth 1 -type d | sort`; do\n  _usrLtd=$(echo ${_Client} | cut -d'/' -f6 | awk '{ print $1}' 2>&1)\n  _usrLtd=${_usrLtd//[^a-zA-Z0-9]/}\n  _usrLtd=$(echo -n ${_usrLtd} | tr A-Z a-z 2>&1)\n  if [ ! -z \"${_usrLtd}\" ]; then\n    _usrLtd=\"${_USER}.${_usrLtd}\"\n    echo \"_usrLtd is == ${_usrLtd} == at _manage_sec\"\n    _ALLD_NUM=\"0\"\n    _ALLD_CTL=\"1\"\n    _ALLD_DIR=\"'${_Client}', '/opt/user/gems/${_usrLtd}'\"\n    cd ${_Client}\n    _manage_sec_access_paths\n    # _ALLD_DIR=\"${_ALLD_DIR}, '/home/${_usrLtd}'\"\n    if [ \"${_ALLD_NUM}\" -ge \"${_ALLD_CTL}\" ]; then\n      _add_user_if_not_exists\n      echo Done for ${_Client} at ${_pthParen_tUsr}\n    else\n      echo Empty ${_Client} at ${_pthParen_tUsr} - deleting now\n      if [ -e \"${_Client}\" ]; then\n        rmdir ${_Client}\n      fi\n    fi\n  fi\n  _usrLtd=\n  _ALLD_DIR=\ndone\n}\n#\n# Update local INI for PHP CLI on the Ægir Satellite Instance.\n_php_cli_local_ini_update() {\n  if [ ! 
-z \"${1}\" ]; then\n    _DRUSH_FILE=\"${_dscUsr}/tools/drush/${1}\"\n  else\n    _DRUSH_FILE=\"${_dscUsr}/tools/drush/drush.php\"\n  fi\n  _U_HD=\"${_dscUsr}/.drush\"\n  _U_TP=\"${_dscUsr}/.tmp\"\n  _U_II=\"${_U_HD}/php.ini\"\n  _PHP_CLI_UPDATE=NO\n  if [ ! -e \"${_DRUSH_FILE}\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _CHECK_USE_PHP_CLI=$(grep \"/opt/php\" ${_DRUSH_FILE} 2>&1)\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php${e}\" ]] \\\n      && [ ! -e \"${_U_HD}/.ctrl.php${e}.${_xSrl}.pid\" ]; then\n      _PHP_CLI_UPDATE=YES\n    fi\n  done\n  if [ \"${_PHP_CLI_UPDATE}\" = \"YES\" ] \\\n    || [ ! -e \"${_U_II}\" ] \\\n    || [ ! -d \"${_U_TP}\" ] \\\n    || [ ! -e \"${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mkdir -p ${_U_TP}\n    touch ${_U_TP}\n    find ${_U_TP}/ -mtime +0 -exec rm -rf {} \\; &> /dev/null\n    mkdir -p ${_U_HD}\n    chown ${_USER}:${_usrGroup} ${_U_TP}\n    chown ${_USER}:${_usrGroup} ${_U_HD}\n    chmod 755 ${_U_TP}\n    chmod 755 ${_U_HD}\n    chattr -i ${_U_II}\n    rm -f ${_U_HD}/.ctrl.php*\n    rm -f ${_U_II}\n    if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php85\" ]]; then\n      cp -af /opt/php85/lib/php.ini ${_U_II}\n      _U_INI=85\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php84\" ]]; then\n      cp -af /opt/php84/lib/php.ini ${_U_II}\n      _U_INI=84\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php83\" ]]; then\n      cp -af /opt/php83/lib/php.ini ${_U_II}\n      _U_INI=83\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php82\" ]]; then\n      cp -af /opt/php82/lib/php.ini ${_U_II}\n      _U_INI=82\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php81\" ]]; then\n      cp -af /opt/php81/lib/php.ini ${_U_II}\n      _U_INI=81\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php80\" ]]; then\n      cp -af /opt/php80/lib/php.ini ${_U_II}\n      _U_INI=80\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php74\" ]]; then\n      cp -af 
/opt/php74/lib/php.ini ${_U_II}\n      _U_INI=74\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php73\" ]]; then\n      cp -af /opt/php73/lib/php.ini ${_U_II}\n      _U_INI=73\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php72\" ]]; then\n      cp -af /opt/php72/lib/php.ini ${_U_II}\n      _U_INI=72\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php71\" ]]; then\n      cp -af /opt/php71/lib/php.ini ${_U_II}\n      _U_INI=71\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php70\" ]]; then\n      cp -af /opt/php70/lib/php.ini ${_U_II}\n      _U_INI=70\n    elif [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php56\" ]]; then\n      cp -af /opt/php56/lib/php.ini ${_U_II}\n      _U_INI=56\n    fi\n    _OPCD=\"/var/www/phpcache\"\n    if [ -e \"${_U_II}\" ]; then\n      _INI=\"open_basedir = \\\".: \\\n        /data/all:           \\\n        /data/conf:          \\\n        /data/disk/all:      \\\n        /opt/php56:          \\\n        /opt/php70:          \\\n        /opt/php71:          \\\n        /opt/php72:          \\\n        /opt/php73:          \\\n        /opt/php74:          \\\n        /opt/php80:          \\\n        /opt/php81:          \\\n        /opt/php82:          \\\n        /opt/php83:          \\\n        /opt/php84:          \\\n        /opt/php85:          \\\n        /opt/tika:           \\\n        /opt/tika7:          \\\n        /opt/tika8:          \\\n        /opt/tika9:          \\\n        /dev/urandom:        \\\n        /opt/tmp/make_local: \\\n        /opt/tools/drush:    \\\n        ${_dscUsr}:          \\\n        ${_OPCD}/${_USER}:   \\\n        /usr/local/bin:      \\\n        /usr/bin\\\"\"\n      _INI=$(echo \"${_INI}\" | sed \"s/ //g\" 2>&1)\n      _INI=$(echo \"${_INI}\" | sed \"s/open_basedir=/open_basedir = /g\" 2>&1)\n      _INI=${_INI//\\//\\\\\\/}\n      _QTP=${_U_TP//\\//\\\\\\/}\n      sed -i \"s/.*open_basedir =.*/${_INI}/g\"                              ${_U_II}\n      wait\n      sed -i \"s/.*error_reporting =.*/error_reporting = 1/g\"    
           ${_U_II}\n      wait\n      sed -i \"s/.*session.save_path =.*/session.save_path = ${_QTP}/g\"     ${_U_II}\n      wait\n      sed -i \"s/.*soap.wsdl_cache_dir =.*/soap.wsdl_cache_dir = ${_QTP}/g\" ${_U_II}\n      wait\n      sed -i \"s/.*sys_temp_dir =.*/sys_temp_dir = ${_QTP}/g\"               ${_U_II}\n      wait\n      sed -i \"s/.*upload_tmp_dir =.*/upload_tmp_dir = ${_QTP}/g\"           ${_U_II}\n      wait\n      echo > ${_U_HD}/.ctrl.php${_U_INI}.${_xSrl}.pid\n      echo > ${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\n    fi\n    chattr +i ${_U_II}\n  fi\n}\n#\n# Update PHP-CLI for Drush.\n_php_cli_drush_update() {\n  if [ ! -z \"${1}\" ]; then\n    _DRUSH_FILE=\"${_dscUsr}/tools/drush/${1}\"\n  else\n    _DRUSH_FILE=\"${_dscUsr}/tools/drush/drush.php\"\n  fi\n  if [ ! -e \"${_DRUSH_FILE}\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  if [ \"${_T_CLI_VRN}\" = \"8.5\" ] && [ -x \"/opt/php85/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php85\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php85/bin\n  elif [ \"${_T_CLI_VRN}\" = \"8.4\" ] && [ -x \"/opt/php84/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php84\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php84/bin\n  elif [ \"${_T_CLI_VRN}\" = \"8.3\" ] && [ -x \"/opt/php83/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php83\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php83/bin\n  elif [ \"${_T_CLI_VRN}\" = \"8.2\" ] && [ -x \"/opt/php82/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php82\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php82/bin\n  elif [ \"${_T_CLI_VRN}\" = \"8.1\" ] && [ -x \"/opt/php81/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php81\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php81/bin\n  elif [ \"${_T_CLI_VRN}\" = \"8.0\" ] && [ -x \"/opt/php80/bin/php\" ]; then\n    sed -i 
\"s/^#\\!\\/.*/#\\!\\/opt\\/php80\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php80/bin\n  elif [ \"${_T_CLI_VRN}\" = \"7.4\" ] && [ -x \"/opt/php74/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php74\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php74/bin\n  elif [ \"${_T_CLI_VRN}\" = \"7.3\" ] && [ -x \"/opt/php73/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php73\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php73/bin\n  elif [ \"${_T_CLI_VRN}\" = \"7.2\" ] && [ -x \"/opt/php72/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php72\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php72/bin\n  elif [ \"${_T_CLI_VRN}\" = \"7.1\" ] && [ -x \"/opt/php71/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php71\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php71/bin\n  elif [ \"${_T_CLI_VRN}\" = \"7.0\" ] && [ -x \"/opt/php70/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php70\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php70/bin\n  elif [ \"${_T_CLI_VRN}\" = \"5.6\" ] && [ -x \"/opt/php56/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php56\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n    _T_CLI=/opt/php56/bin\n  else\n    _T_CLI=/foo/bar\n  fi\n  if [ -x \"${_T_CLI}/php\" ]; then\n    #_DRUSH_HOSTING_TASKS_CMD=\"/usr/bin/drush @hostmaster hosting-tasks --force\"\n    _DRUSH_HOSTING_DISPATCH_CMD=\"${_T_CLI}/php ${_dscUsr}/tools/drush/drush.php @hostmaster hosting-dispatch\"\n    if [ -e \"${_dscUsr}/aegir.sh\" ]; then\n      rm -f ${_dscUsr}/aegir.sh\n    fi\n    touch ${_dscUsr}/aegir.sh\n    echo -e \"#!/bin/bash\\n\\nPATH=.:${_T_CLI}:/usr/sbin:/usr/bin:/sbin:/bin\\n \\\n      \\n${_DRUSH_HOSTING_DISPATCH_CMD} \\\n      \\ntouch ${_dscUsr}/${_USER}-task.done\" \\\n      | fmt -su -w 2500 | tee -a ${_dscUsr}/aegir.sh >/dev/null 2>&1\n    chown ${_USER}:${_usrGroup} ${_dscUsr}/aegir.sh &> /dev/null\n    
chmod 0700 ${_dscUsr}/aegir.sh &> /dev/null\n  fi\n  rm -f ${_dscUsr}/static/control/.ctrl.cli.*.pid\n  echo \"${_T_CLI_VRN}\" > ${_dscUsr}/static/control/.ctrl.cli.${_T_CLI_VRN}.${_xSrl}.pid\n}\n\n#\n# Set default FPM workers.\n_satellite_default_fpm_workers() {\n  _count_cpu\n\n  # Set _PHP_FPM_WORKERS to AUTO if it is empty\n  [ -z \"${_PHP_FPM_WORKERS}\" ] && _PHP_FPM_WORKERS=AUTO\n  # If _PHP_FPM_WORKERS is not AUTO and not empty, then check if it is less than 1\n  if [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && [ -n \"${_PHP_FPM_WORKERS}\" ]; then\n    if [ \"${_PHP_FPM_WORKERS}\" -lt 1 ] 2>/dev/null; then\n      _PHP_FPM_WORKERS=AUTO\n    fi\n  fi\n  # If _PHP_FPM_WORKERS is not AUTO, remove non-numeric characters\n  [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && _PHP_FPM_WORKERS=${_PHP_FPM_WORKERS//[^0-9]/}\n  # Fall back to AUTO if nothing numeric is left after stripping\n  [ -z \"${_PHP_FPM_WORKERS}\" ] && _PHP_FPM_WORKERS=AUTO\n\n  if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n    _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n  else\n    _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n  fi\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"DEBUG: _L_PHP_FPM_WORKERS is ${_L_PHP_FPM_WORKERS}\" >>/var/backups/ltd/log/users-${_NOW}.log\n  fi\n\n  # Set _PHP_FPM_TIMEOUT to AUTO if it is empty\n  [ -z \"${_PHP_FPM_TIMEOUT}\" ] && _PHP_FPM_TIMEOUT=AUTO\n  # If _PHP_FPM_TIMEOUT is not AUTO, clamp it to the allowed 60-180 range\n  if [ \"${_PHP_FPM_TIMEOUT}\" != \"AUTO\" ]; then\n    # Remove non-numeric characters\n    _PHP_FPM_TIMEOUT=${_PHP_FPM_TIMEOUT//[^0-9]/}\n    # Fall back to the default if nothing numeric is left after stripping\n    [ -z \"${_PHP_FPM_TIMEOUT}\" ] && _PHP_FPM_TIMEOUT=180\n    # If _PHP_FPM_TIMEOUT is outside of the allowed range, use either min or max allowed\n    if [ \"${_PHP_FPM_TIMEOUT}\" -lt 60 ]; then\n      _PHP_FPM_TIMEOUT=60\n    elif [ \"${_PHP_FPM_TIMEOUT}\" -gt 180 ]; then\n      _PHP_FPM_TIMEOUT=180\n    fi\n  else\n    _PHP_FPM_TIMEOUT=180\n  fi\n\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"DEBUG: _PHP_FPM_TIMEOUT is ${_PHP_FPM_TIMEOUT}\" 
>>/var/backups/ltd/log/users-${_NOW}.log\n  fi\n}\n\n#\n# Tune FPM workers.\n_satellite_tune_fpm_workers() {\n  _satellite_default_fpm_workers\n\n  _LIM_FPM=\"${_L_PHP_FPM_WORKERS}\"\n\n  if [ ! -z \"${_CLIENT_OPTION}\" ]; then\n    if [ \"${_CLIENT_OPTION}\" = \"MONSTER\" ] || [ \"${_CLIENT_OPTION}\" = \"CLUSTER\" ]; then\n      _CLIENT_OPTION=MONSTER\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=96\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"ULTRA\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=32\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"PHANTOM\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=16\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"POWER\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"BUS\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=8\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"EDGE\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"AGAIN\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"SSD\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"CLASSIC\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=2\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"MINI\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"MICRO\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"QUIET\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"HEADSPACE\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=1\n      fi\n    else\n      _LIM_FPM=2\n    fi\n  fi\n\n  if [ ! -z \"${_CLIENT_CORES}\" ] && [ \"${_CLIENT_CORES}\" -ge 1 ]; then\n    if [ -e \"${_dscUsr}/log/cores.txt\" ]; then\n      _CLIENT_CORES=$(cat ${_dscUsr}/log/cores.txt 2>&1)\n      _CLIENT_CORES=$(echo -n ${_CLIENT_CORES} | tr -d \"\\n\" 2>&1)\n    fi\n    _CLIENT_CORES=${_CLIENT_CORES//[^0-9]/}\n    if [ ! 
-z \"${_CLIENT_CORES}\" ] && [ \"${_CLIENT_CORES}\" -ge 1 ]; then\n      _LIM_FPM=$(( _LIM_FPM *= _CLIENT_CORES ))\n    fi\n  fi\n\n  if [ \"${_LIM_FPM}\" -gt 100 ]; then\n    _LIM_FPM=100\n  fi\n\n  if [ \"${_CLIENT_OPTION}\" != \"QUIET\" ]; then\n    _CHILD_MAX_FPM=$(( _LIM_FPM * 2 ))\n  fi\n\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"DEBUG: _LIM_FPM is ${_LIM_FPM}\" >>/var/backups/ltd/log/users-${_NOW}.log\n    echo \"DEBUG: _PHP_FPM_WORKERS is ${_PHP_FPM_WORKERS}\" >>/var/backups/ltd/log/users-${_NOW}.log\n    echo \"DEBUG: _CHILD_MAX_FPM is ${_CHILD_MAX_FPM}\" >>/var/backups/ltd/log/users-${_NOW}.log\n  fi\n}\n\n#\n# Disable New Relic per Octopus instance.\n_disable_newrelic() {\n  _THIS_POOL_TPL=\"/opt/php$1/etc/pool.d/$2.conf\"\n  if [ -e \"${_THIS_POOL_TPL}\" ]; then\n    _CHECK_NEW_RELIC_KEY=$(grep \"newrelic.enabled.*true\" ${_THIS_POOL_TPL} 2>&1)\n    if [[ \"${_CHECK_NEW_RELIC_KEY}\" =~ \"newrelic.enabled\" ]]; then\n      echo \"New Relic for $2 will be disabled because newrelic.info does not exist\"\n      sed -i \"s/^php_admin_value\\[newrelic.license\\].*/php_admin_value\\[newrelic.license\\] = \\\"\\\"/g\" ${_THIS_POOL_TPL}\n      wait\n      sed -i \"s/^php_admin_value\\[newrelic.enabled\\].*/php_admin_value\\[newrelic.enabled\\] = \\\"false\\\"/g\" ${_THIS_POOL_TPL}\n      wait\n      if [ \"$3\" = \"1\" ] && [ -e \"/etc/init.d/php$1-fpm\" ]; then\n        service php$1-fpm reload &> /dev/null\n      fi\n    fi\n  fi\n}\n#\n# Enable New Relic per Octopus instance.\n_enable_newrelic() {\n  _LOC_NEW_RELIC_KEY=$(cat ${_dscUsr}/static/control/newrelic.info 2>&1)\n  _LOC_NEW_RELIC_KEY=${_LOC_NEW_RELIC_KEY//[^0-9a-zA-Z]/}\n  _LOC_NEW_RELIC_KEY=$(echo -n ${_LOC_NEW_RELIC_KEY} | tr -d \"\\n\" 2>&1)\n  if [ -z \"${_LOC_NEW_RELIC_KEY}\" ]; then\n    _disable_newrelic $1 $2 $3\n  else\n    _THIS_POOL_TPL=\"/opt/php$1/etc/pool.d/$2.conf\"\n    if [ -e \"${_THIS_POOL_TPL}\" ]; then\n      _CHECK_NEW_RELIC_TPL=$(grep \"newrelic.license\" 
${_THIS_POOL_TPL} 2>&1)\n      _CHECK_NEW_RELIC_KEY=$(grep \"${_LOC_NEW_RELIC_KEY}\" ${_THIS_POOL_TPL} 2>&1)\n      if [[ \"${_CHECK_NEW_RELIC_KEY}\" =~ \"${_LOC_NEW_RELIC_KEY}\" ]]; then\n        echo \"New Relic integration is already active for $2\"\n      else\n        if [[ \"${_CHECK_NEW_RELIC_TPL}\" =~ \"newrelic.license\" ]]; then\n          echo \"New Relic for $2 update with key ${_LOC_NEW_RELIC_KEY} in php$1\"\n          sed -i \"s/^php_admin_value\\[newrelic.license\\].*/php_admin_value\\[newrelic.license\\] = \\\"${_LOC_NEW_RELIC_KEY}\\\"/g\" ${_THIS_POOL_TPL}\n          wait\n          sed -i \"s/^php_admin_value\\[newrelic.enabled\\].*/php_admin_value\\[newrelic.enabled\\] = \\\"true\\\"/g\" ${_THIS_POOL_TPL}\n          wait\n        else\n          echo \"New Relic for $2 setup with key ${_LOC_NEW_RELIC_KEY} in php$1\"\n          echo \"php_admin_value[newrelic.license] = \\\"${_LOC_NEW_RELIC_KEY}\\\"\" >> ${_THIS_POOL_TPL}\n          echo \"php_admin_value[newrelic.enabled] = \\\"true\\\"\" >> ${_THIS_POOL_TPL}\n        fi\n        if [ \"$3\" = \"1\" ] && [ -e \"/etc/init.d/php$1-fpm\" ]; then\n          service php$1-fpm reload &> /dev/null\n        fi\n      fi\n    fi\n  fi\n}\n#\n# Switch New Relic on or off per Octopus instance.\n_switch_newrelic() {\n  _isPhp=\"$1\"\n  _isPhp=${_isPhp//[^0-9]/}\n  _isUsr=\"$2\"\n  _isUsr=${_isUsr//[^a-z0-9]/}\n  _isRld=\"$3\"\n  _isRld=${_isRld//[^0-1]/}\n  if [ ! -z \"${_isPhp}\" ] && [ ! -z \"${_isUsr}\" ] && [ ! -z \"${_isRld}\" ]; then\n    if [ -e \"${_dscUsr}/static/control/newrelic.info\" ]; then\n      _enable_newrelic $1 $2 $3\n    else\n      _disable_newrelic $1 $2 $3\n    fi\n  fi\n}\n#\n# Update web user.\n_satellite_web_user_update() {\n  _isTest=\"${_WEB}\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [[ ! 
\"${_WEB}\" =~ \".ftp\"($) ]]; then\n    _T_HD=\"/home/${_WEB}/.drush\"\n    _T_TP=\"/home/${_WEB}/.tmp\"\n    _T_TS=\"/home/${_WEB}/.aws\"\n    _T_II=\"${_T_HD}/php.ini\"\n    if [ -d \"/home/${_WEB}\" ] && [ ! -e \"/home/${_WEB}/.lock\" ]; then\n      chattr -i /home/${_WEB}\n      if [ -d \"/home/${_WEB}/.drush\" ]; then\n        chattr -i /home/${_WEB}/.drush\n      fi\n      if [ -e \"${_T_II}\" ]; then\n        chattr -i ${_T_II}\n      fi\n      mkdir -p /home/${_WEB}/.{tmp,drush,aws}\n      touch /home/${_WEB}/.lock\n      _isTest=\"$1\"\n      _isTest=${_isTest//[^a-z0-9]/}\n      if [ ! -z \"${_isTest}\" ]; then\n        _T_PV=$1\n      fi\n      if [ ! -z \"${_T_PV}\" ] && [ -e \"/opt/php${_T_PV}/etc/php${_T_PV}.ini\" ]; then\n        cp -af /opt/php${_T_PV}/etc/php${_T_PV}.ini ${_T_II}\n      else\n        if [ -e \"/opt/php85/etc/php85.ini\" ]; then\n          cp -af /opt/php85/etc/php85.ini ${_T_II}\n          _T_PV=85\n        elif [ -e \"/opt/php84/etc/php84.ini\" ]; then\n          cp -af /opt/php84/etc/php84.ini ${_T_II}\n          _T_PV=84\n        elif [ -e \"/opt/php83/etc/php83.ini\" ]; then\n          cp -af /opt/php83/etc/php83.ini ${_T_II}\n          _T_PV=83\n        elif [ -e \"/opt/php82/etc/php82.ini\" ]; then\n          cp -af /opt/php82/etc/php82.ini ${_T_II}\n          _T_PV=82\n        elif [ -e \"/opt/php81/etc/php81.ini\" ]; then\n          cp -af /opt/php81/etc/php81.ini ${_T_II}\n          _T_PV=81\n        elif [ -e \"/opt/php80/etc/php80.ini\" ]; then\n          cp -af /opt/php80/etc/php80.ini ${_T_II}\n          _T_PV=80\n        elif [ -e \"/opt/php74/etc/php74.ini\" ]; then\n          cp -af /opt/php74/etc/php74.ini ${_T_II}\n          _T_PV=74\n        elif [ -e \"/opt/php73/etc/php73.ini\" ]; then\n          cp -af /opt/php73/etc/php73.ini ${_T_II}\n          _T_PV=73\n        elif [ -e \"/opt/php72/etc/php72.ini\" ]; then\n          cp -af /opt/php72/etc/php72.ini ${_T_II}\n          _T_PV=72\n        elif [ -e 
\"/opt/php71/etc/php71.ini\" ]; then\n          cp -af /opt/php71/etc/php71.ini ${_T_II}\n          _T_PV=71\n        elif [ -e \"/opt/php70/etc/php70.ini\" ]; then\n          cp -af /opt/php70/etc/php70.ini ${_T_II}\n          _T_PV=70\n        elif [ -e \"/opt/php56/etc/php56.ini\" ]; then\n          cp -af /opt/php56/etc/php56.ini ${_T_II}\n          _T_PV=56\n        fi\n      fi\n      if [ -e \"${_T_II}\" ]; then\n        _INI=\"open_basedir = \\\".: \\\n          /data/all:      \\\n          /data/conf:     \\\n          /data/disk/all: \\\n          /hdd:           \\\n          /mnt:           \\\n          /opt/php56:     \\\n          /opt/php70:     \\\n          /opt/php71:     \\\n          /opt/php72:     \\\n          /opt/php73:     \\\n          /opt/php74:     \\\n          /opt/php80:     \\\n          /opt/php81:     \\\n          /opt/php82:     \\\n          /opt/php83:     \\\n          /opt/php84:     \\\n          /opt/php85:     \\\n          /opt/tika:      \\\n          /opt/tika7:     \\\n          /opt/tika8:     \\\n          /opt/tika9:     \\\n          /dev/urandom:   \\\n          /srv:           \\\n          /usr/bin:       \\\n          /usr/local/bin: \\\n          /var/second/${_USER}:     \\\n          ${_dscUsr}/aegir:          \\\n          ${_dscUsr}/backup-exports: \\\n          ${_dscUsr}/distro:         \\\n          ${_dscUsr}/platforms:      \\\n          ${_dscUsr}/static:         \\\n          ${_T_HD}:                 \\\n          ${_T_TP}:                 \\\n          ${_T_TS}\\\"\"\n        _INI=$(echo \"${_INI}\" | sed \"s/ //g\" 2>&1)\n        _INI=$(echo \"${_INI}\" | sed \"s/open_basedir=/open_basedir = /g\" 2>&1)\n        _INI=${_INI//\\//\\\\\\/}\n        _QTP=${_T_TP//\\//\\\\\\/}\n        sed -i \"s/.*open_basedir =.*/${_INI}/g\"                              ${_T_II}\n        wait\n        sed -i \"s/.*session.save_path =.*/session.save_path = ${_QTP}/g\"     ${_T_II}\n        wait\n        sed -i 
\"s/.*soap.wsdl_cache_dir =.*/soap.wsdl_cache_dir = ${_QTP}/g\" ${_T_II}\n        wait\n        sed -i \"s/.*sys_temp_dir =.*/sys_temp_dir = ${_QTP}/g\"               ${_T_II}\n        wait\n        sed -i \"s/.*upload_tmp_dir =.*/upload_tmp_dir = ${_QTP}/g\"           ${_T_II}\n        wait\n        rm -f ${_T_HD}/.ctrl.php*\n        echo > ${_T_HD}/.ctrl.php${_T_PV}.${_xSrl}.pid\n      fi\n      chmod 700 /home/${_WEB}\n      chown -R ${_WEB}:${_WEBG} /home/${_WEB}\n      chmod 550 /home/${_WEB}/.drush\n      chmod 440 /home/${_WEB}/.drush/php.ini\n      rm -f /home/${_WEB}/.lock\n      if [ -d \"/home/${_WEB}\" ]; then\n        chattr +i /home/${_WEB}\n      fi\n      if [ -d \"/home/${_WEB}/.drush\" ]; then\n        chattr +i /home/${_WEB}/.drush\n      fi\n      if [ -e \"${_T_II}\" ]; then\n        chattr +i ${_T_II}\n      fi\n    fi\n  fi\n}\n#\n# Remove web user.\n_satellite_remove_web_user() {\n  _isTest=\"${_WEB}\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [[ ! \"${_WEB}\" =~ \".ftp\"($) ]]; then\n    if [ -d \"/home/${_WEB}/\" ] || [ \"$1\" = \"clean\" ]; then\n      chattr -i /home/${_WEB}/\n      if [ -d \"/home/${_WEB}/.drush/\" ]; then\n        chattr -i /home/${_WEB}/.drush/\n      fi\n      pkill -9 -f gpg-agent\n      deluser \\\n        --remove-home \\\n        --backup-to /var/backups/zombie/deleted ${_WEB} &> /dev/null\n      if [ -d \"/home/${_WEB}/\" ]; then\n        rm -rf /home/${_WEB}/ &> /dev/null\n      fi\n    fi\n  fi\n}\n#\n# Add web user.\n_satellite_create_web_user() {\n  _isTest=\"${_WEB}\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [[ ! \"${_WEB}\" =~ \".ftp\"($) ]]; then\n    _T_HD=\"/home/${_WEB}/.drush\"\n    _T_II=\"${_T_HD}/php.ini\"\n    _T_ID_EXISTS=$(getent passwd ${_WEB} 2>&1)\n    if [ ! -z \"${_T_ID_EXISTS}\" ] && [ -e \"${_T_II}\" ]; then\n      _satellite_web_user_update \"$1\"\n    elif [ -z \"${_T_ID_EXISTS}\" ] || [ ! 
-e \"${_T_II}\" ]; then\n      _satellite_remove_web_user \"clean\"\n      adduser --force-badname --system --ingroup www-data --home /home/${_WEB} ${_WEB} &> /dev/null\n      _satellite_web_user_update \"$1\"\n    fi\n  fi\n}\n#\n# Add site specific socket config include.\n_site_socket_inc_gen() {\n  _unlAeg=\"${_dscUsr}/static/control/unlock-aegir-php.info\"\n  _mltFpm=\"${_dscUsr}/static/control/multi-fpm.info\"\n  _preFpm=\"${_dscUsr}/static/control/.prev-multi-fpm.info\"\n  _mltNgx=\"${_dscUsr}/static/control/.multi-nginx-fpm.pid\"\n  _fpmPth=\"${_dscUsr}/config/server_master/nginx/post.d\"\n\n  _hmFront=$(cat ${_dscUsr}/log/domain.txt 2>&1)\n  _hmFront=$(echo -n ${_hmFront} | tr -d \"\\n\" 2>&1)\n  _hmstAls=\"${_dscUsr}/.drush/${_hmFront}.alias.drushrc.php\"\n\n  _hmstCli=$(cat ${_dscUsr}/log/cli.txt 2>&1)\n  _hmstCli=$(echo -n ${_hmstCli} | tr -d \"\\n\" 2>&1)\n\n  if [ ! -e \"${_hmstAls}\" ]; then\n    ln -sfn ${_dscUsr}/.drush/hostmaster.alias.drushrc.php ${_hmstAls}\n  fi\n\n  _PLACEHOLDER_TEST=$(grep \"place.holder.dont.remove\" ${_mltFpm} 2>&1)\n\n  if [ ! -e \"${_dscUsr}/log/no-lock-aegir-fpm.txt\" ] \\\n    || [[ ! 
\"${_PLACEHOLDER_TEST}\" =~ \"place.holder.dont.remove\" ]]; then\n    sed -i \"s/^${_hmFront} .*//g\" ${_mltFpm}\n    wait\n    sed -i \"s/^place.holder.dont.remove .*//g\" ${_mltFpm}\n    wait\n    _PHP_V=\"85 84 83 82 81 74\"\n    _phpFnd=NO\n    for e in ${_PHP_V}; do\n      if [ -x \"/opt/php${e}/bin/php\" ] && [ \"${_phpFnd}\" = \"NO\" ]; then\n        if [ \"${e}\" = 85 ]; then\n          _phpDot=8.5\n        elif [ \"${e}\" = 84 ]; then\n          _phpDot=8.4\n        elif [ \"${e}\" = 83 ]; then\n          _phpDot=8.3\n        elif [ \"${e}\" = 82 ]; then\n          _phpDot=8.2\n        elif [ \"${e}\" = 81 ]; then\n          _phpDot=8.1\n        elif [ \"${e}\" = 74 ]; then\n          _phpDot=7.4\n        fi\n        echo \"place.holder.dont.remove ${_phpDot}\" >> ${_mltFpm}\n        _phpFnd=YES\n      fi\n    done\n    sed -i \"s/ *$//g; /^$/d\" ${_mltFpm}\n    wait\n    touch ${_dscUsr}/log/no-lock-aegir-fpm.txt\n    rm -f ${_dscUsr}/log/locked-aegir-fpm.txt\n    touch ${_dscUsr}/log/unlocked-aegir-fpm.txt\n    _mltFpmUpdateForce=YES\n  fi\n\n  if [ -x \"/opt/php85/bin/php\" ] && [ ! -e \"/home/${_USER}.85.web\" ]; then\n    rm -f /data/disk/${_USER}/config/server_master/nginx/post.d/fpm_include_default.inc\n    _mltFpmUpdateForce=YES\n  elif [ -x \"/opt/php84/bin/php\" ] && [ ! -e \"/home/${_USER}.84.web\" ]; then\n    rm -f /data/disk/${_USER}/config/server_master/nginx/post.d/fpm_include_default.inc\n    _mltFpmUpdateForce=YES\n  elif [ -x \"/opt/php83/bin/php\" ] && [ ! -e \"/home/${_USER}.83.web\" ]; then\n    rm -f /data/disk/${_USER}/config/server_master/nginx/post.d/fpm_include_default.inc\n    _mltFpmUpdateForce=YES\n  elif [ -x \"/opt/php82/bin/php\" ] && [ ! -e \"/home/${_USER}.82.web\" ]; then\n    rm -f /data/disk/${_USER}/config/server_master/nginx/post.d/fpm_include_default.inc\n    _mltFpmUpdateForce=YES\n  elif [ -x \"/opt/php81/bin/php\" ] && [ ! 
-e \"/home/${_USER}.81.web\" ]; then\n    rm -f /data/disk/${_USER}/config/server_master/nginx/post.d/fpm_include_default.inc\n    _mltFpmUpdateForce=YES\n  fi\n\n  if [ -f \"${_mltFpm}\" ]; then\n    chown ${_USER}.ftp:${_usrGroup} ${_dscUsr}/static/control/*.info\n    _mltFpmUpdate=NO\n    if [ ! -f \"${_preFpm}\" ]; then\n      rm -rf ${_preFpm}\n      cp -af ${_mltFpm} ${_preFpm}\n    fi\n    _diffFpmTest=$(diff -w -B ${_mltFpm} ${_preFpm} 2>&1)\n    if [ ! -z \"${_diffFpmTest}\" ]; then\n      _mltFpmUpdate=YES\n    fi\n    if [ ! -f \"${_mltNgx}\" ] \\\n      || [ \"${_mltFpmUpdate}\" = \"YES\" ] \\\n      || [ \"${_mltFpmUpdateForce}\" = \"YES\" ]; then\n      rm -f ${_fpmPth}/fpm_include_site_*\n      IFS=$'\\12'\n      for p in `cat ${_mltFpm}`;do\n        _SITE_NAME=`echo $p | cut -d' ' -f1 | awk '{ print $1}'`\n        _SITE_NAME=${_SITE_NAME//[^a-zA-Z0-9-.]/}\n        _SITE_NAME=$(echo -n ${_SITE_NAME} | tr A-Z a-z 2>&1)\n        _SITE_NAME=$(echo -n ${_SITE_NAME} | tr -d \"\\n\" 2>&1)\n        _SITE_SOCKET=`echo $p | cut -d' ' -f2 | awk '{ print $1}'`\n        _SITE_SOCKET=${_SITE_SOCKET//[^0-9]/}\n        _SITE_SOCKET=$(echo -n ${_SITE_SOCKET} | tr -d \"\\n\" 2>&1)\n        _SOCKET_L_NAME=\"${_USER}.${_SITE_SOCKET}\"\n        if [ ! -z \"${_SITE_NAME}\" ] \\\n          && [ ! 
-z \"${_SITE_SOCKET}\" ] \\\n          && [ -x \"/opt/php${_SITE_SOCKET}/bin/php\" ] \\\n          && [ -e \"${_dscUsr}/.drush/${_SITE_NAME}.alias.drushrc.php\" ] \\\n          && [ -e \"/run/${_SOCKET_L_NAME}.fpm.socket\" ]; then\n          _fpmInc=\"${_fpmPth}/fpm_include_site_${_SITE_NAME}.inc\"\n          echo \"if ( \\$main_site_name = ${_SITE_NAME} ) {\" > ${_fpmInc}\n          echo \"  set \\$user_socket \\\"${_SOCKET_L_NAME}\\\";\" >> ${_fpmInc}\n          echo \"}\" >> ${_fpmInc}\n        fi\n      done\n      touch ${_mltNgx}\n      rm -rf ${_preFpm}\n      cp -af ${_mltFpm} ${_preFpm}\n      ### reload nginx\n      service nginx reload &> /dev/null\n    fi\n  else\n    if [ -f \"${_mltNgx}\" ]; then\n      rm -f ${_mltNgx}\n    fi\n    if [ -f \"${_preFpm}\" ]; then\n      rm -f ${_preFpm}\n    fi\n  fi\n}\n\n#\n# Switch PHP Version.\n_switch_php() {\n  _PHP_CLI_UPDATE=NO\n  _FORCE_FPM_SETUP=NO\n  _NEW_FPM_SETUP=NO\n  _T_CLI_VRN=\"\"\n\n  if [ -e \"${_dscUsr}/static/control/fpm.info\" ] || [ -e \"${_dscUsr}/static/control/cli.info\" ]; then\n    echo \"Custom FPM and CLI settings for ${_USER} exist, running _switch_php checks\"\n    if [ ! -e \"${_dscUsr}/log/un-chattr-ctrl.info\" ]; then\n      chattr -i ${_dscUsr}/static/control/fpm.info &> /dev/null\n      chattr -i ${_dscUsr}/static/control/cli.info &> /dev/null\n      chattr -i ${_dscUsr}/log/fpm.txt &> /dev/null\n      chattr -i ${_dscUsr}/log/cli.txt &> /dev/null\n      chattr -i ${_dscUsr}/config/server_master/nginx/post.d/fpm_include_default.inc &> /dev/null\n      touch ${_dscUsr}/log/un-chattr-ctrl.info\n    fi\n\n    if [ ! -e \"${_dscUsr}/static/control/.single-fpm.${_xSrl}.pid\" ]; then\n      rm -f ${_dscUsr}/static/control/.single-fpm*.pid\n      echo OK > ${_dscUsr}/static/control/.single-fpm.${_xSrl}.pid\n      _FORCE_FPM_SETUP=YES\n    fi\n\n    # Convert shorthand versions (e.g. 
\"83\" to \"8.3\")\n    fix_version_format() {\n      case \"$1\" in\n        85) echo \"8.5\";;\n        84) echo \"8.4\";;\n        83) echo \"8.3\";;\n        82) echo \"8.2\";;\n        81) echo \"8.1\";;\n        80) echo \"8.0\";;\n        74) echo \"7.4\";;\n        73) echo \"7.3\";;\n        72) echo \"7.2\";;\n        71) echo \"7.1\";;\n        70) echo \"7.0\";;\n        56) echo \"5.6\";;\n        *) echo \"$1\";;\n      esac\n    }\n\n    # Helper function to check if a given PHP version is available\n    check_version() {\n      [ -x \"/opt/php${1//./}/bin/php\" ]\n    }\n\n    # --- CLI portion ---\n    if [ -e \"${_dscUsr}/static/control/cli.info\" ]; then\n      # Extract numeric version from file\n      _T_CLI_VRN=\"$(tr -d '\\n' < \"${_dscUsr}/static/control/cli.info\" | tr -cd '0-9.')\"\n\n      # Convert shorthand versions (e.g. \"83\" to \"8.3\")\n      _T_CLI_VRN=\"$(fix_version_format \"${_T_CLI_VRN}\")\"\n\n      # Define fallback chains for PHP versions\n      declare -A fallback=(\n        [\"8.5\"]=\"8.4 8.3 8.2 8.1\"\n        [\"8.4\"]=\"8.3 8.2 8.1\"\n        [\"8.3\"]=\"8.2 8.1\"\n        [\"8.2\"]=\"8.1 8.3\"\n        [\"8.1\"]=\"8.2 8.3\"\n        [\"8.0\"]=\"8.1\"\n        [\"7.4\"]=\"8.1\"\n        [\"7.3\"]=\"7.4\"\n        [\"7.2\"]=\"7.4\"\n        [\"7.1\"]=\"7.4\"\n        [\"7.0\"]=\"7.4\"\n        [\"5.6\"]=\"7.4\"\n      )\n\n      # Attempt fallback if the chosen version doesn't exist\n      if [ -n \"${_T_CLI_VRN}\" ] && ! check_version \"${_T_CLI_VRN}\"; then\n        for fbv in ${fallback[\"$_T_CLI_VRN\"]}; do\n          if check_version \"${fbv}\"; then\n            _T_CLI_VRN=\"${fbv}\"\n            break\n          fi\n        done\n      fi\n\n      if [ -z \"${_T_CLI_VRN}\" ]; then\n        echo \"_T_CLI_VRN ELSE is EMPTY\"\n      else\n        echo \"_T_CLI_VRN is ${_T_CLI_VRN}\"\n        if [ \"${_T_CLI_VRN}\" != \"${_PHP_CLI_VERSION}\" ] || [ ! 
-e \"${_dscUsr}/static/control/.ctrl.cli.${_T_CLI_VRN}.${_xSrl}.pid\" ]; then\n          _PHP_CLI_UPDATE=YES\n          _DRUSH_FILES=\"drush.php drush\"\n          for _df in ${_DRUSH_FILES}; do\n            _php_cli_drush_update \"${_df}\"\n          done\n          if [ -x \"${_T_CLI}/php\" ]; then\n            _php_cli_local_ini_update\n            sed -i \"s/^_PHP_CLI_VERSION=.*/_PHP_CLI_VERSION=${_T_CLI_VRN}/g\" /root/.${_USER}.octopus.cnf &> /dev/null\n            echo \"${_T_CLI_VRN}\" > \"${_dscUsr}/log/cli.txt\"\n            echo \"${_T_CLI_VRN}\" > \"${_dscUsr}/static/control/cli.info\"\n            chown \"${_USER}.ftp:${_usrGroup}\" \"${_dscUsr}/static/control/cli.info\"\n          fi\n        fi\n      fi\n    fi\n\n    # --- FPM portion ---\n    if [ -e \"${_dscUsr}/static/control/fpm.info\" ] && [ -e \"/var/xdrago/conf/fpm-pool-foo-multi.conf\" ]; then\n      _PHP_FPM_MULTI=NO\n      if [ -f \"${_dscUsr}/static/control/multi-fpm.info\" ] && [ -d \"${_dscUsr}/tools/le\" ]; then\n        _PHP_FPM_MULTI=YES\n        if [ ! -e \"${_dscUsr}/static/control/.multi-fpm.${_xSrl}.pid\" ]; then\n          rm -f ${_dscUsr}/static/control/.multi-fpm*.pid\n          echo OK > ${_dscUsr}/static/control/.multi-fpm.${_xSrl}.pid\n          _FORCE_FPM_SETUP=YES\n        fi\n      else\n        if [ -e \"${_dscUsr}/config/server_master/nginx/post.d/fpm_include_default.inc\" ]; then\n          rm -f ${_dscUsr}/config/server_master/nginx/post.d/fpm_include_*\n          rm -f ${_dscUsr}/static/control/.multi-fpm*.pid\n          service nginx reload &> /dev/null\n        fi\n      fi\n\n      # Read and sanitize the FPM version\n      _T_FPM_VRN=$(tr -d '\\n' < ${_dscUsr}/static/control/fpm.info | tr -cd '0-9.')\n\n      # Convert shorthand versions (e.g. 
\"83\" to \"8.3\")\n      _T_FPM_VRN=\"$(fix_version_format \"${_T_FPM_VRN}\")\"\n\n      # Define fallback chains for PHP-FPM versions\n      # (like the CLI chains, except 8.0 and 7.4 fall back to 8.3 here)\n      declare -A fpm_fallback=(\n        [\"8.5\"]=\"8.4 8.3 8.2 8.1\"\n        [\"8.4\"]=\"8.3 8.2 8.1\"\n        [\"8.3\"]=\"8.2 8.1\"\n        [\"8.2\"]=\"8.1 8.3\"\n        [\"8.1\"]=\"8.2 8.3\"\n        [\"8.0\"]=\"8.3\"\n        [\"7.4\"]=\"8.3\"\n        [\"7.3\"]=\"7.4\"\n        [\"7.2\"]=\"7.4\"\n        [\"7.1\"]=\"7.4\"\n        [\"7.0\"]=\"7.4\"\n        [\"5.6\"]=\"7.4\"\n      )\n\n      # Attempt fallback if the chosen version doesn't exist;\n      # leave _T_FPM_VRN empty if no installed fallback is found\n      if [ -n \"${_T_FPM_VRN}\" ] && ! check_version \"${_T_FPM_VRN}\"; then\n        _fpmFbList=\"${fpm_fallback[\"$_T_FPM_VRN\"]}\"\n        _T_FPM_VRN=\"\"\n        for fbv in ${_fpmFbList}; do\n          if check_version \"${fbv}\"; then\n            _T_FPM_VRN=\"${fbv}\"\n            break\n          fi\n        done\n      fi\n\n      if [ \"${_T_FPM_VRN}\" != \"${_PHP_FPM_VERSION}\" ] || [ \"${_FORCE_FPM_SETUP}\" = \"YES\" ]; then\n        if [ -n \"${_T_FPM_VRN}\" ]; then\n          _NEW_FPM_SETUP=YES\n        fi\n      fi\n\n      ### Update fpm_include_default.inc if needed\n      _PHP_SV=${_T_FPM_VRN//[^0-9]/}\n      [ -z \"${_PHP_SV}\" ] && _PHP_SV=84\n      _FMP_D_INC=\"${_dscUsr}/config/server_master/nginx/post.d/fpm_include_default.inc\"\n\n      if [ \"${_PHP_FPM_MULTI}\" = \"YES\" ] && [ -d \"${_dscUsr}/tools/le\" ]; then\n        _PHP_M_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n        _D_POOL=\"${_USER}.${_PHP_SV}\"\n        if [ ! -e \"${_FMP_D_INC}\" ]; then\n          echo \"set \\$user_socket \\\"${_D_POOL}\\\";\" > ${_FMP_D_INC}\n          touch ${_dscUsr}/static/control/.multi-fpm.${_xSrl}.pid\n          _NEW_FPM_SETUP=YES\n        else\n          _CHECK_FMP_D=$(grep \"${_D_POOL}\" ${_FMP_D_INC} 2>&1)\n          if [[ ! 
\"${_CHECK_FMP_D}\" =~ \"${_D_POOL}\" ]]; then\n            echo \"${_D_POOL} must be updated in ${_FMP_D_INC}\"\n            echo \"set \\$user_socket \\\"${_D_POOL}\\\";\" > ${_FMP_D_INC}\n            touch ${_dscUsr}/static/control/.multi-fpm.${_xSrl}.pid\n            _NEW_FPM_SETUP=YES\n          fi\n        fi\n      else\n        _PHP_M_V=\"${_PHP_SV}\"\n        rm -f ${_dscUsr}/static/control/.multi-fpm*.pid\n        rm -f ${_FMP_D_INC}\n      fi\n\n      if [ -n \"${_T_FPM_VRN}\" ] && [ \"${_NEW_FPM_SETUP}\" = \"YES\" ]; then\n        _satellite_tune_fpm_workers\n        sed -i \"s/^_PHP_FPM_VERSION=.*/_PHP_FPM_VERSION=${_T_FPM_VRN}/g\" /root/.${_USER}.octopus.cnf &> /dev/null\n        echo \"${_T_FPM_VRN}\" > ${_dscUsr}/log/fpm.txt\n        if [ \"${_PHP_FPM_MULTI}\" = \"NO\" ]; then\n          echo \"${_T_FPM_VRN}\" > ${_dscUsr}/static/control/fpm.info\n        fi\n        chown ${_USER}.ftp:${_usrGroup} ${_dscUsr}/static/control/fpm.info\n\n        _PHP_OLD_SV=${_PHP_FPM_VERSION//[^0-9]/}\n        _PHP_SV=${_T_FPM_VRN//[^0-9]/}\n        [ -z \"${_PHP_SV}\" ] && _PHP_SV=84\n\n        # Update or create special system user if needed\n        if [ \"${_PHP_FPM_MULTI}\" = \"YES\" ] && [ -d \"${_dscUsr}/tools/le\" ]; then\n          _PHP_M_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n          _D_POOL=\"${_USER}.${_PHP_SV}\"\n          if [ ! -e \"${_FMP_D_INC}\" ] && [ -e \"/run/${_D_POOL}.fpm.socket\" ] && [ -x \"/opt/php${_PHP_SV}/bin/php\" ]; then\n            echo \"set \\$user_socket \\\"${_D_POOL}\\\";\" > ${_FMP_D_INC}\n            touch ${_dscUsr}/static/control/.multi-fpm.${_xSrl}.pid\n          else\n            _CHECK_FMP_D=$(grep \"${_D_POOL}\" ${_FMP_D_INC} 2>&1)\n            if [[ ! 
\"${_CHECK_FMP_D}\" =~ \"${_D_POOL}\" ]] && [ -e \"/run/${_D_POOL}.fpm.socket\" ] && [ -x \"/opt/php${_PHP_SV}/bin/php\" ]; then\n              echo \"${_D_POOL} must be updated in ${_FMP_D_INC}\"\n              echo \"set \\$user_socket \\\"${_D_POOL}\\\";\" > ${_FMP_D_INC}\n              touch ${_dscUsr}/static/control/.multi-fpm.${_xSrl}.pid\n            fi\n          fi\n        else\n          _PHP_M_V=\"${_PHP_SV}\"\n          rm -f ${_dscUsr}/static/control/.multi-fpm*.pid\n          rm -f ${_FMP_D_INC}\n        fi\n\n        # Update/create web users\n        for m in ${_PHP_M_V}; do\n          if [ -x \"/opt/php${m}/bin/php\" ]; then\n            if [ \"${_PHP_FPM_MULTI}\" = \"YES\" ] && [ -d \"${_dscUsr}/tools/le\" ]; then\n              _WEB=\"${_USER}.${m}.web\"\n              _POOL=\"${_USER}.${m}\"\n            else\n              _WEB=\"${_USER}.web\"\n              _POOL=\"${_USER}\"\n            fi\n            if [ -e \"/home/${_WEB}/.drush/php.ini\" ]; then\n              _OLD_PHP_IN_USE=$(grep \"/lib/php\" /home/${_WEB}/.drush/php.ini 2>&1)\n              _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n              for e in ${_PHP_V}; do\n                if [[ \"${_OLD_PHP_IN_USE}\" =~ \"php${e}\" ]]; then\n                  if [ \"${e}\" != \"${m}\" ] || [ ! 
-e \"/home/${_WEB}/.drush/.ctrl.php${m}.${_xSrl}.pid\" ]; then\n                    echo \"_OLD_PHP_IN_USE is ${_OLD_PHP_IN_USE} for ${_WEB}, updating to ${m}\"\n                    _satellite_web_user_update \"${m}\"\n                  fi\n                fi\n              done\n            else\n              echo \"_NEW_PHP_TO_USE is ${m} for ${_WEB}, creating\"\n              _satellite_create_web_user \"${m}\"\n            fi\n          fi\n        done\n\n        # Cleanup old pool files and set up new pools\n        if [ \"${_PHP_FPM_MULTI}\" = \"YES\" ] && [ -d \"${_dscUsr}/tools/le\" ]; then\n          _PHP_M_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n          rm -f /opt/php*/etc/pool.d/${_USER}.conf\n        else\n          _PHP_M_V=\"${_PHP_SV}\"\n          rm -f /opt/php*/etc/pool.d/${_USER}.*.conf\n          rm -f /opt/php*/etc/pool.d/${_USER}.conf\n        fi\n\n        for m in ${_PHP_M_V}; do\n          if [ -x \"/opt/php${m}/bin/php\" ]; then\n            if [ \"${_PHP_FPM_MULTI}\" = \"YES\" ] && [ -d \"${_dscUsr}/tools/le\" ]; then\n              _WEB=\"${_USER}.${m}.web\"\n              _POOL=\"${_USER}.${m}\"\n              cp -af /var/xdrago/conf/fpm-pool-foo-multi.conf /opt/php${m}/etc/pool.d/${_POOL}.conf\n            else\n              _WEB=\"${_USER}.web\"\n              _POOL=\"${_USER}\"\n              cp -af /var/xdrago/conf/fpm-pool-foo.conf /opt/php${m}/etc/pool.d/${_POOL}.conf\n            fi\n            sed -i \"s/.ftp/.web/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n            wait\n            sed -i \"s/\\/data\\/disk\\/foo\\/.tmp/\\/home\\/foo.web\\/.tmp/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n            wait\n            sed -i \"s/foo.web/${_WEB}/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n            wait\n            sed -i \"s/THISPOOL/${_POOL}/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n            wait\n            sed -i \"s/foo/${_USER}/g\" 
/opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n            wait\n\n            if [[ \"${m}\" == 8* ]] && [ -e \"/opt/etc/fpm/fpm-pool-common-modern.conf\" ]; then\n              sed -i \"s/fpm-pool-common.conf/fpm-pool-common-modern.conf/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n              wait\n            elif [[ \"${m}\" == 7* ]] && [ -e \"/opt/etc/fpm/fpm-pool-common-legacy.conf\" ]; then\n              sed -i \"s/fpm-pool-common.conf/fpm-pool-common-legacy.conf/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n              wait\n            fi\n\n            [ -n \"${_PHP_FPM_DENY}\" ] && sed -i \"s/passthru,/${_PHP_FPM_DENY},/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n            wait\n\n            if [ -n \"${_PHP_FPM_TIMEOUT}\" ] && [ \"${_PHP_FPM_TIMEOUT}\" -ge 60 ]; then\n              _PHP_TO=\"${_PHP_FPM_TIMEOUT}s\"\n              sed -i \"s/180s/${_PHP_TO}/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n              wait\n            fi\n\n            if [ -n \"${_CHILD_MAX_FPM}\" ] && [ \"${_CHILD_MAX_FPM}\" -ge 2 ]; then\n              sed -i \"s/pm.max_children =.*/pm.max_children = ${_CHILD_MAX_FPM}/g\" /opt/php${m}/etc/pool.d/${_POOL}.conf &> /dev/null\n              wait\n            fi\n\n            _switch_newrelic ${m} ${_POOL} 0\n\n            mkdir -p /var/www/phpcache/${_USER}/${_POOL}\n            chgrp www-data /var/www/phpcache/${_USER}/${_POOL}\n            chmod 770 /var/www/phpcache/${_USER}/${_POOL}\n\n            [ -e \"/etc/init.d/php${_PHP_OLD_SV}-fpm\" ] && service php${_PHP_OLD_SV}-fpm reload &> /dev/null\n            [ -e \"/etc/init.d/php${m}-fpm\" ] && service php${m}-fpm reload &> /dev/null\n          fi\n        done\n      fi\n    fi\n  fi\n}\n\n#\n# Manage mirroring of drush aliases.\n_manage_site_drush_alias_mirror() {\n\n  _ALS_TEST=$(ls -la /home/${_USER}.ftp/.drush/*.alias.drushrc.php 2>&1)\n  if [[ ! 
\"${_ALS_TEST}\" =~ \"No such file\" ]]; then\n    for _Alias in `find /home/${_USER}.ftp/.drush/*.alias.drushrc.php \\\n      -maxdepth 1 -type f | sort`; do\n      _AliasFile=$(echo \"${_Alias}\" | cut -d'/' -f5 | awk '{ print $1}' 2>&1)\n      if [ ! -e \"${_pthParen_tUsr}/.drush/${_AliasFile}\" ] \\\n        && [ ! -z \"${_AliasFile}\" ]; then\n        rm -f /home/${_USER}.ftp/.drush/${_AliasFile}\n     fi\n    done\n  fi\n\n  if [ -e \"/home/${_USER}.ftp/.drush/hm.alias.drushrc.php\" ]; then\n    rm -f /home/${_USER}.ftp/.drush/hm.alias.drushrc.php\n  fi\n  if [ -e \"/home/${_USER}.ftp/.drush/self.alias.drushrc.php\" ]; then\n    rm -f /home/${_USER}.ftp/.drush/self.alias.drushrc.php\n  fi\n  if [ -e \"${_dscUsr}/.drush/.alias.drushrc.php\" ]; then\n    rm -f ${_dscUsr}/.drush/.alias.drushrc.php\n  fi\n\n  _isAliasUpdate=NO\n  for _Alias in `find ${_pthParen_tUsr}/.drush/*.alias.drushrc.php \\\n    -maxdepth 1 -type f | sort`; do\n    ### echo Last_AliasName is ${_AliasName}\n    _SiteDir=\n    _SiteName=\n    _AliasName=\n    _AliasName=$(echo \"${_Alias}\" | cut -d'/' -f6 | awk '{ print $1}' 2>&1)\n    _AliasName=$(echo \"${_AliasName}\" \\\n      | sed \"s/.alias.drushrc.php//g\" \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_AliasName}\" = \"hm\" ] \\\n      || [ \"${_AliasName}\" = \"none\" ] \\\n      || [[ \"${_AliasName}\" =~ (^)\"platform_\" ]] \\\n      || [[ \"${_AliasName}\" =~ (^)\"server_\" ]] \\\n      || [[ \"${_AliasName}\" =~ (^)\"self\" ]] \\\n      || [[ \"${_AliasName}\" =~ (^)\"hostmaster\" ]] \\\n      || [ -z \"${_AliasName}\" ]; then\n      _IS_SITE=NO\n      _AliasName=\n      _SiteName=\n      _SiteDir=\n    else\n      _SiteName=\"${_AliasName}\"\n      ### echo _SiteName is \"${_SiteName}\"\n      _SiteDir=\n      if [[ \"${_SiteName}\" =~ \".restore\"($) ]]; then\n        _IS_SITE=NO\n        rm -f ${_pthParen_tUsr}/.drush/${_SiteName}.alias.drushrc.php\n      else\n        _SiteDir=$(cat ${_Alias} \\\n          | grep 
\"site_path'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        if [ -e \"${_SiteDir}/drushrc.php\" ] \\\n          && [ -e \"${_SiteDir}/files\" ] \\\n          && [ -e \"${_SiteDir}/private\" ]; then\n          ### echo _SiteDir is ${_SiteDir}\n          ### echo\n          _pthAliasMain=\"${_pthParen_tUsr}/.drush/${_SiteName}.alias.drushrc.php\"\n          _pthAliasCopy=\"/home/${_USER}.ftp/.drush/${_SiteName}.alias.drushrc.php\"\n          if [ ! -e \"${_pthAliasCopy}\" ]; then\n            cp -af ${_pthAliasMain} ${_pthAliasCopy}\n            chmod 440 ${_pthAliasCopy}\n            _isAliasUpdate=YES\n          else\n            _DIFF_T=$(diff -w -B ${_pthAliasCopy} ${_pthAliasMain} 2>&1)\n            if [ ! -z \"${_DIFF_T}\" ]; then\n              cp -af ${_pthAliasMain} ${_pthAliasCopy}\n              chmod 440 ${_pthAliasCopy}\n              _isAliasUpdate=YES\n            fi\n          fi\n        else\n          ### rm -f ${_pthAliasCopy}\n          echo \"ZOMBIE ${_SiteDir} detected\"\n          echo \"Moving GHOST ${_SiteName}.alias.drushrc.php to ${_pthParen_tUsr}/undo/\"\n          ### mv -f ${_pthParen_tUsr}/.drush/${_SiteName}.alias.drushrc.php ${_pthParen_tUsr}/undo/ &> /dev/null\n          echo\n        fi\n      fi\n    fi\n  done\n  if [ -x \"/usr/bin/drush10\" ]; then\n    if [ \"${_isAliasUpdate}\" = \"YES\" ] \\\n      || [ ! 
-e \"/home/${_USER}.ftp/.drush/sites/.checksums\" ]; then\n      chage -M 99999 ${_USER}.ftp &> /dev/null\n      su -s /bin/bash - ${_USER}.ftp -c \"rm -f ~/.drush/sites/*.yml\"\n      wait\n      su -s /bin/bash - ${_USER}.ftp -c \"rm -f ~/.drush/sites/.checksums/*.md5\"\n      wait\n      su -s /bin/bash - ${_USER}.ftp -c \"drush10 core:init --yes\" &> /dev/null\n      wait\n      su -s /bin/bash - ${_USER}.ftp -c \"drush10 site:alias-convert ~/.drush/sites --yes\" &> /dev/null\n      wait\n      chage -M 90 ${_USER}.ftp &> /dev/null\n      ### Update Drush yml sites aliases also for Ægir system user\n      su -s /bin/bash - ${_USER} -c \"rm -f ~/.drush/sites/*.yml\"\n      wait\n      su -s /bin/bash - ${_USER} -c \"rm -f ~/.drush/sites/.checksums/*.md5\"\n      wait\n      su -s /bin/bash - ${_USER} -c \"drush10 core:init --yes\" &> /dev/null\n      wait\n      su -s /bin/bash - ${_USER} -c \"drush10 site:alias-convert ~/.drush/sites --yes\" &> /dev/null\n      wait\n    fi\n  fi\n}\n#\n# Manage Primary Users.\n_manage_user() {\n  for _pthParen_tUsr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n    if [ -e \"${_pthParen_tUsr}/config/server_master/nginx/vhost.d\" ] \\\n      && [ -e \"${_pthParen_tUsr}/log/fpm.txt\" ] \\\n      && [ ! -e \"${_pthParen_tUsr}/log/proxied.pid\" ] \\\n      && [ ! -e \"${_pthParen_tUsr}/log/CANCELLED\" ]; then\n      _MNT_STATIC_FILES=\"\"\n      _mntPoint=\"\"\n      _USER=\"\"\n      _USER=$(echo ${_pthParen_tUsr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n      echo \"_USER is == ${_USER} == at _manage_user\"\n      if getent group allow-snail >/dev/null 2>&1 && \\\n        ! 
id -nG \"${_USER}\" 2>/dev/null | tr ' ' '\\n' | grep -qxF \"allow-snail\"; then\n        usermod -aG allow-snail \"${_USER}\"\n      fi\n      _WEB=\"${_USER}.web\"\n      _dscUsr=\"/data/disk/${_USER}\"\n      _octInc=\"${_dscUsr}/config/includes\"\n      _octTpl=\"${_dscUsr}/.drush/sys/provision/http/Provision/Config/Nginx\"\n      if [ -e \"${_dscUsr}/log/imported.pid\" ] \\\n        && [ -e \"${_dscUsr}/log/post-merge-fix.pid\" ]; then\n        [ -e \"${_dscUsr}/log/imported.pid\" ] && mv -f ${_dscUsr}/log/imported.pid ${_dscUsr}/src/\n        [ -e \"${_dscUsr}/log/exported.pid\" ] && mv -f ${_dscUsr}/log/exported.pid ${_dscUsr}/src/\n        [ -e \"${_dscUsr}/log/hmpathfix.pid\" ] && mv -f ${_dscUsr}/log/hmpathfix.pid ${_dscUsr}/src/\n        [ -e \"${_dscUsr}/log/post-merge-fix.pid\" ] && mv -f ${_dscUsr}/log/post-merge-fix.pid ${_dscUsr}/src/\n      fi\n      if [ ! -e \"${_dscUsr}/rector.php\" ]; then\n        rm -f ${_dscUsr}/*.php* &> /dev/null\n        rm -f ${_dscUsr}/composer.lock &> /dev/null\n        rm -f ${_dscUsr}/composer.json &> /dev/null\n        rm -f -r ${_dscUsr}/vendor &> /dev/null\n        rm -f -r ${_dscUsr}/static/vendor &> /dev/null\n        rm -f -r ${_dscUsr}/.cache/composer &> /dev/null\n        rm -f -r ${_dscUsr}/.config/composer &> /dev/null\n        rm -f -r ${_dscUsr}/.composer &> /dev/null\n      fi\n      chmod 0440 ${_dscUsr}/.drush/*.php &> /dev/null\n      chmod 0400 ${_dscUsr}/.drush/drushrc.php &> /dev/null\n      chmod 0400 ${_dscUsr}/.drush/hm.alias.drushrc.php &> /dev/null\n      chmod 0400 ${_dscUsr}/.drush/hostmaster*.php &> /dev/null\n      chmod 0400 ${_dscUsr}/.drush/platform_*.php &> /dev/null\n      chmod 0400 ${_dscUsr}/.drush/server_*.php &> /dev/null\n      chmod 0710 ${_dscUsr}/.drush &> /dev/null\n      find ${_dscUsr}/config/server_master \\\n        -type d -exec chmod 0700 {} \\; &> /dev/null\n      find ${_dscUsr}/config/server_master \\\n        -type f -exec chmod 0600 {} \\; &> /dev/null\n      
chmod +rx ${_dscUsr}/config{,/server_master{,/nginx{,/passwords.d}}} &> /dev/null\n      chmod +r ${_dscUsr}/config/server_master/nginx/passwords.d/* &> /dev/null\n      if [ ! -e \"${_dscUsr}/.tmp/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n        rm -rf ${_dscUsr}/.drush/cache\n        mkdir -p ${_dscUsr}/.tmp\n        touch ${_dscUsr}/.tmp\n        find ${_dscUsr}/.tmp/ -mtime +0 -exec rm -rf {} \\; &> /dev/null\n        chown ${_USER}:${_usrGroup} ${_dscUsr}/.tmp &> /dev/null\n        chmod 02755 ${_dscUsr}/.tmp &> /dev/null\n        echo OK > ${_dscUsr}/.tmp/.ctrl.${_tRee}.${_xSrl}.pid\n      fi\n      if [ ! -e \"${_dscUsr}/static/control/.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n        && [ -e \"/home/${_USER}.ftp/clients\" ]; then\n        mkdir -p ${_dscUsr}/static/control\n        chmod 755 ${_dscUsr}/static/control\n        if [ -e \"/var/xdrago/conf/control-readme.txt\" ]; then\n          cp -af /var/xdrago/conf/control-readme.txt \\\n            ${_dscUsr}/static/control/README.txt &> /dev/null\n          chmod 0644 ${_dscUsr}/static/control/README.txt\n        fi\n        chown -R ${_USER}.ftp:${_usrGroup} ${_dscUsr}/static/control\n        rm -f ${_dscUsr}/static/control/.ctrl.*\n        echo OK > ${_dscUsr}/static/control/.ctrl.${_tRee}.${_xSrl}.pid\n      fi\n      if [ -e \"${_dscUsr}/static/control/ssl-live-mode.info\" ]; then\n        if [ -e \"${_dscUsr}/tools/le/.ctrl/ssl-demo-mode.pid\" ]; then\n          rm -f ${_dscUsr}/tools/le/.ctrl/ssl-demo-mode.pid\n        fi\n      fi\n\n      # shellcheck disable=SC1091\n      [ -e \"/root/.${_USER}.octopus.cnf\" ] && source /root/.${_USER}.octopus.cnf\n\n      _THIS_HM_PLR=$(cat ${_dscUsr}/.drush/hostmaster.alias.drushrc.php \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      if [ -e \"${_THIS_HM_PLR}/modules/path_alias_cache\" ] \\\n        && [ -x \"/opt/tools/drush/8/drush/drush.php\" ]; then\n        if [ -x 
\"/opt/php56/bin/php\" ]; then\n          echo 5.6 > ${_dscUsr}/static/control/cli.info\n        fi\n      fi\n      _nrCheck=\n      _switch_php\n      ### reload nginx\n      service nginx reload &> /dev/null\n      if [ -z ${_nrCheck} ]; then\n        if [ -z ${_PHP_SV} ]; then\n          _PHP_SV=${_PHP_FPM_VERSION//[^0-9]/}\n          if [ -z \"${_PHP_SV}\" ]; then\n            _PHP_SV=84\n          fi\n        fi\n        if [ -f \"${_dscUsr}/static/control/multi-fpm.info\" ]; then\n          _PHP_M_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n          for m in ${_PHP_M_V}; do\n            if [ -x \"/opt/php${m}/bin/php\" ] \\\n              && [ -e \"/opt/php${m}/etc/pool.d/${_USER}.${m}.conf\" ]; then\n              _switch_newrelic ${m} ${_USER}.${m} 1\n            fi\n          done\n        else\n          if [ -x \"/opt/php${_PHP_SV}/bin/php\" ] \\\n            && [ -e \"/opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf\" ]; then\n            _switch_newrelic ${_PHP_SV} ${_USER} 1\n          fi\n        fi\n      fi\n      _site_socket_inc_gen\n      if [ -e \"${_pthParen_tUsr}/clients\" ] && [ ! 
-z ${_USER} ]; then\n        echo Managing Users for ${_pthParen_tUsr} Instance\n        rm -rf ${_pthParen_tUsr}/clients/admin &> /dev/null\n        rm -rf ${_pthParen_tUsr}/clients/omega8ccgmailcom &> /dev/null\n        rm -rf ${_pthParen_tUsr}/clients/nocomega8cc &> /dev/null\n        rm -rf ${_pthParen_tUsr}/clients/*/backups &> /dev/null\n        symlinks -dr ${_pthParen_tUsr}/clients &> /dev/null\n        if [ -d \"/home/${_USER}.ftp\" ]; then\n          _disable_chattr ${_USER}.ftp\n          symlinks -dr /home/${_USER}.ftp &> /dev/null\n          _mntPoint=$(find /mnt -mindepth 1 -maxdepth 1 -type d | grep \"\\.\" | head -n1) &&\n          _MNT_STATIC_FILES=\"${_mntPoint}/files/${_USER}/static/files\"\n          [ -n \"${_mntPoint}\" ] && echo \"_mntPoint is == ${_mntPoint} == at _manage_user\"\n          [ -n \"${_mntPoint}\" ] && echo \"_MNT_STATIC_FILES is == ${_MNT_STATIC_FILES} == at _manage_user\"\n          echo >> ${_THIS_LTD_CONF}\n          echo \"[${_USER}.ftp]\" >> ${_THIS_LTD_CONF}\n          [ -n \"${_mntPoint}\" ] && echo \"path : ['/opt/user/gems/${_USER}.ftp', \\\n                        '/opt/user/npm/${_USER}.ftp', \\\n                        '${_MNT_STATIC_FILES}', \\\n                        '${_dscUsr}/distro', \\\n                        '${_dscUsr}/static', \\\n                        '${_dscUsr}/backups', \\\n                        '${_dscUsr}/clients']\" \\\n                        | fmt -su -w 2500 >> ${_THIS_LTD_CONF}\n          [ -z \"${_mntPoint}\" ] && echo \"path : ['/opt/user/gems/${_USER}.ftp', \\\n                        '/opt/user/npm/${_USER}.ftp', \\\n                        '${_dscUsr}/distro', \\\n                        '${_dscUsr}/static', \\\n                        '${_dscUsr}/backups', \\\n                        '${_dscUsr}/clients']\" \\\n                        | fmt -su -w 2500 >> ${_THIS_LTD_CONF}\n          _manage_site_drush_alias_mirror\n          _manage_sec\n          if [ -d 
\"/home/${_USER}.ftp/users\" ]; then\n            chown -R ${_USER}.ftp:${_usrGroup} /home/${_USER}.ftp/users\n            chmod 700 /home/${_USER}.ftp/users\n            chmod 600 /home/${_USER}.ftp/users/*\n          fi\n          if [ ! -L \"/home/${_USER}.ftp/static\" ]; then\n            rm -f /home/${_USER}.ftp/{backups,clients,static}\n            ln -sfn ${_dscUsr}/backups /home/${_USER}.ftp/backups\n            ln -sfn ${_dscUsr}/clients /home/${_USER}.ftp/clients\n            ln -sfn ${_dscUsr}/static  /home/${_USER}.ftp/static\n          fi\n          if [ ! -e \"/home/${_USER}.ftp/.tmp/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n            rm -rf /home/${_USER}.ftp/.drush/cache\n            rm -rf /home/${_USER}.ftp/.tmp\n            mkdir -p /home/${_USER}.ftp/.tmp\n            chown ${_USER}.ftp:${_usrGroup} /home/${_USER}.ftp/.tmp &> /dev/null\n            chmod 700 /home/${_USER}.ftp/.tmp &> /dev/null\n            echo OK > /home/${_USER}.ftp/.tmp/.ctrl.${_tRee}.${_xSrl}.pid\n          fi\n          _enable_chattr ${_USER}.ftp\n          echo Done for ${_pthParen_tUsr}\n        else\n          echo Directory /home/${_USER}.ftp not available\n        fi\n        echo\n      else\n        echo Directory ${_pthParen_tUsr}/clients not available\n      fi\n      echo\n    fi\n  done\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! 
-z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n}\n\n#\n# Restrict node if needed.\n_fix_node_in_lshell_access() {\n  if [ -e \"/etc/lshell.conf\" ]; then\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    if [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [ -e \"/root/.allow.node.lshell.cnf\" ]; then\n      _ALLOW_NODE=YES\n    else\n      _ALLOW_NODE=NO\n      sed -i \\\n        -e \"s/, 'node', 'npm', 'npx',/,/gi\" \\\n        -e \"s/, 'scp',/,/gi\" \\\n        /etc/lshell.conf /var/xdrago/conf/lshell.conf\n    fi\n  fi\n}\n\n#\n# Restrict php if needed.\n_fix_php_in_lshell_access() {\n  if [ -e \"/etc/lshell.conf\" ]; then\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    if [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [ -e \"/root/.allow.php.lshell.cnf\" ]; then\n      _ALLOW_PHP=YES\n    else\n      _ALLOW_PHP=NO\n      sed -i \\\n        -e \"s/, 'php.*':.*php',/,/gi\" \\\n        -e \"s/, '\\/opt\\/php.*',/,/gi\" \\\n        /etc/lshell.conf /var/xdrago/conf/lshell.conf\n    fi\n  fi\n}\n\n###-------------SYSTEM-----------------###\n\nif [ ! 
-e \"/home/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n  chattr -i /home\n  chmod 0711 /home\n  chown root:root /home\n  rm -f /home/.ctrl.*\n  while IFS=':' read -r _login _pass _uid _gid _uname _homedir _shell; do\n    ### Only touch accounts whose home directory lives under /home\n    if [[ \"${_homedir}\" = *\"/home/\"* ]]; then\n      if [ -d \"${_homedir}\" ]; then\n        chattr -i ${_homedir}\n        chown ${_uid}:${_gid} ${_homedir} &> /dev/null\n        if [ -d \"${_homedir}/.ssh\" ]; then\n          chattr -i ${_homedir}/.ssh\n          chown -R ${_uid}:${_gid} ${_homedir}/.ssh &> /dev/null\n        fi\n        if [ -d \"${_homedir}/.tmp\" ]; then\n          chattr -i ${_homedir}/.tmp\n          chown -R ${_uid}:${_gid} ${_homedir}/.tmp &> /dev/null\n        fi\n        if [ -d \"${_homedir}/.drush\" ]; then\n          chattr +i ${_homedir}/.drush/usr\n          chattr +i ${_homedir}/.drush/*.ini\n          chattr +i ${_homedir}/.drush\n        fi\n        if [[ ! \"${_login}\" =~ \".ftp\"($) ]] \\\n          && [[ ! \"${_login}\" =~ \".web\"($) ]]; then\n          chattr +i ${_homedir}\n        fi\n      fi\n    fi\n  done < /etc/passwd\n  touch /home/.ctrl.${_tRee}.${_xSrl}.pid\nfi\n\nif [ ! 
-L \"/usr/bin/MySecureShell\" ] && [ -x \"/usr/bin/mysecureshell\" ]; then\n  ### Move the legacy binary aside first, if there is one to move\n  [ -e \"/usr/bin/MySecureShell\" ] && mv -f /usr/bin/MySecureShell /var/backups/legacy-MySecureShell-bin\n  ln -sfn /usr/bin/mysecureshell /usr/bin/MySecureShell\nfi\n\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\nmkdir -p /var/backups/ltd/{conf,log,old}\n[ -d \"/var/backups/zombie/deleted\" ] || mkdir -p /var/backups/zombie/deleted\n_THIS_LTD_CONF=\"/var/backups/ltd/conf/lshell.conf.${_NOW}\"\nif [ -e \"/run/manage_ruby_users.pid\" ] \\\n  || [ -e \"/run/manage_ltd_users.pid\" ] \\\n  || [ -e \"/run/boa_run.pid\" ] \\\n  || [ -e \"/run/octopus_install_run.pid\" ]; then\n  touch /var/log/boa/wait-manage-ltd-users.pid\n  echo \"Another BOA task is running, we have to wait\"\n  sleep 3\n  exit 0\nelif [ ! -e \"/var/xdrago/conf/lshell.conf\" ]; then\n  echo \"Missing /var/xdrago/conf/lshell.conf template\"\n  exit 0\nelse\n  rm -f /var/log/boa/wait-manage-ltd-users.pid\n  touch /run/manage_ltd_users.pid\n  _count_cpu\n  _find_fast_mirror_early\n  find /etc/[a-z]*\\.lock -maxdepth 1 -type f -exec rm -rf {} \\; &> /dev/null\n  if [ ! -e \"${_pthLog}/node.manage.lshell.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    _fix_node_in_lshell_access\n    touch ${_pthLog}/node.manage.lshell.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n#   if [ ! -e \"${_pthLog}/php.manage.lshell.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n#     _fix_php_in_lshell_access\n#     touch ${_pthLog}/php.manage.lshell.ctrl.${_tRee}.${_xSrl}.pid\n#   fi\n  cat /var/xdrago/conf/lshell.conf > ${_THIS_LTD_CONF}\n  _find_correct_ip\n  sed -i \"s/1.1.1.1/${_LOC_IP}/g\" ${_THIS_LTD_CONF}\n  wait\n  if [ ! -e \"/root/.allow.mc.cnf\" ]; then\n    sed -i \"s/'mc', //g\" ${_THIS_LTD_CONF}\n    wait\n    sed -i \"s/, 'mc':'mc -u'//g\" ${_THIS_LTD_CONF}\n    wait\n  fi\n  if [ ! 
-e \"/root/.allow.du.cnf\" ]; then\n    sed -i \"s/'du', //g\" ${_THIS_LTD_CONF}\n    wait\n    sed -i \"s/, 'du':'du -s -h'//g\" ${_THIS_LTD_CONF}\n    wait\n  fi\n  _add_ltd_group_if_not_exists\n  _add_allow_snail_if_not_exists\n  _kill_zombies >/var/backups/ltd/log/zombies-${_NOW}.log 2>&1\n  _manage_user >/var/backups/ltd/log/users-${_NOW}.log 2>&1\n  if [ -e \"${_THIS_LTD_CONF}\" ]; then\n    _DIFF_T=$(diff -w -B ${_THIS_LTD_CONF} /etc/lshell.conf 2>&1)\n    if [ ! -z \"${_DIFF_T}\" ]; then\n      cp -af /etc/lshell.conf /var/backups/ltd/old/lshell.conf-before-${_NOW}\n      cp -af ${_THIS_LTD_CONF} /etc/lshell.conf\n    else\n      rm -f ${_THIS_LTD_CONF}\n    fi\n  fi\n  if [ -L \"/bin/sh\" ] && [ ! -e \"/run/octopus_install_run.pid\" ]; then\n    _WEB_SH=\"$(readlink -n /bin/sh)\"\n    if [ -x \"/opt/local/bin/websh\" ] \\\n      && grep -i '_forward_to_dash' /opt/local/bin/websh &> /dev/null; then\n      if [ \"${_WEB_SH}\" != \"/opt/local/bin/websh\" ]; then\n        ln -sfn /opt/local/bin/websh /bin/sh\n        if [ -e \"/usr/bin/sh\" ]; then\n          ln -sfn /opt/local/bin/websh /usr/bin/sh\n        fi\n        [ -x \"/bin/websh\" ] && [ ! 
-L \"/bin/websh\" ] && ln -sfn /opt/local/bin/websh /bin/websh\n      fi\n    else\n      if [ -x \"/bin/dash\" ]; then\n        if [ \"${_WEB_SH}\" != \"/bin/dash\" ]; then\n          ln -sfn /bin/dash /bin/sh\n          if [ -e \"/usr/bin/sh\" ]; then\n            ln -sfn /bin/dash /usr/bin/sh\n          fi\n        fi\n      elif [ -x \"/usr/bin/dash\" ]; then\n        if [ \"${_WEB_SH}\" != \"/usr/bin/dash\" ]; then\n          ln -sfn /usr/bin/dash /bin/sh\n          if [ -e \"/usr/bin/sh\" ]; then\n            ln -sfn /usr/bin/dash /usr/bin/sh\n          fi\n        fi\n      elif [ -x \"/bin/bash\" ]; then\n        if [ \"${_WEB_SH}\" != \"/bin/bash\" ]; then\n          ln -sfn /bin/bash /bin/sh\n          if [ -e \"/usr/bin/sh\" ]; then\n            ln -sfn /bin/bash /usr/bin/sh\n          fi\n        fi\n      elif [ -x \"/usr/bin/bash\" ]; then\n        if [ \"${_WEB_SH}\" != \"/usr/bin/bash\" ]; then\n          ln -sfn /usr/bin/bash /bin/sh\n          if [ -e \"/usr/bin/sh\" ]; then\n            ln -sfn /usr/bin/bash /usr/bin/sh\n          fi\n        fi\n      fi\n      curl -s -A iCab \"${_urlHmr}/helpers/websh.sh.txt\" -o /opt/local/bin/websh\n      chmod 755 /opt/local/bin/websh\n    fi\n  fi\n  rm -f ${_TMP}/*.txt\n  if [ ! 
-e \"/root/.home.no.wildcard.chmod.cnf\" ]; then\n    chmod 700 /home/* &> /dev/null\n  fi\n  chmod 0600 /var/log/lsh/*\n  chmod 0440 /var/aegir/.drush/*.php &> /dev/null\n  chmod 0400 /var/aegir/.drush/drushrc.php &> /dev/null\n  chmod 0400 /var/aegir/.drush/hm.alias.drushrc.php &> /dev/null\n  chmod 0400 /var/aegir/.drush/hostmaster*.php &> /dev/null\n  chmod 0400 /var/aegir/.drush/platform_*.php &> /dev/null\n  chmod 0400 /var/aegir/.drush/server_*.php &> /dev/null\n  chmod 0710 /var/aegir/.drush &> /dev/null\n  find /var/aegir/config/server_master \\\n    -type d -exec chmod 0700 {} \\; &> /dev/null\n  find /var/aegir/config/server_master \\\n    -type f -exec chmod 0600 {} \\; &> /dev/null\n  sleep 5\n  [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n  exit 0\nfi\n\n"
  },
  {
    "path": "aegir/tools/system/manage_solr_config.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\nexport _tRee=dev\nexport _xSrl=591devT01\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n\n_crlGet=\"-L --max-redirs 3 -k -s --retry 9 --retry-delay 9 -A iCab\"\n_wgetGet=\"--max-redirect=3 --no-check-certificate -q --tries=9 --wait=9 --user-agent='iCab'\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n_vSet=\"variable-set --always-set\"\n\n###-------------SYSTEM-----------------###\n\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n_check_config_diff() {\n  # $1 is template path\n  # $2 is a path to core config\n  _preCnf=\"$1\"\n  _slrCnf=\"$2\"\n  if [ -f \"${_preCnf}\" ] && [ -f \"${_slrCnf}\" ]; then\n    _slrCnfUpdate=NO\n    _diffMyTest=$(diff -w -B ${_slrCnf} ${_preCnf} 2>&1)\n    if [ -z \"${_diffMyTest}\" ]; then\n      _slrCnfUpdate=\"\"\n      echo \"INFO: ${_slrCnf} diff0 empty -- nothing to update\"\n    else\n      _slrCnfUpdate=YES\n      # _diffMyTest=$(echo -n ${_diffMyTest} | fmt -su -w 2500)\n      echo \"INFO: ${_slrCnf} diff1 ${_diffMyTest}\"\n    fi\n  fi\n}\n\n_write_solr_config() {\n  # ${1} is module\n  # ${2} is a path to solr.php\n  # ${3} is Jetty/Solr version\n  if [ ! -z \"${1}\" ] \\\n    && [ ! -z \"${2}\" ] \\\n    && [ ! -z \"${3}\" ] \\\n    && [ ! 
-z \"${_SolrCoreID}\" ] \\\n    && [ -e \"${_Dir}\" ]; then\n    if [ \"${3}\" = \"solr9\" ]; then\n      _PRT=\"9099\"\n      _VRS=\"9.8.1\"\n    elif [ \"${3}\" = \"solr7\" ]; then\n      _PRT=\"9077\"\n      _VRS=\"7.7.3\"\n    else\n      _PRT=\"8099\"\n      _VRS=\"4.9.1\"\n    fi\n    if [ \"${1}\" = \"search_api_solr\" ] || [ \"${1}\" = \"search_api_solr7\" ] || [ \"${1}\" = \"search_api_solr9\" ]; then\n      _module=search_api_solr\n    fi\n    echo \"Your SOLR core access details for ${_Dom} site are as follows:\" > ${2}\n    echo                                                                 >> ${2}\n    echo \"  Drupal 8 and newer\"                                          >> ${2}\n    echo \"  Solr version .....: ${_VRS}\"                                 >> ${2}\n    echo \"  Solr host ........: 127.0.0.1\"                               >> ${2}\n    echo \"  Solr port ........: ${_PRT}\"                                 >> ${2}\n    echo \"  Solr path ........: leave empty\"                             >> ${2}\n    echo \"  Solr core ........: ${_SolrCoreID}\"                          >> ${2}\n    echo                                                                 >> ${2}\n    echo \"  Don't forget to manually upload the configuration files\"     >> ${2}\n    echo \"  (schema.xml, solrconfig.xml) under ${_Dom}/files/solr\"       >> ${2}\n    echo                                                                 >> ${2}\n    if [ \"${3}\" != \"solr9\" ]; then\n      echo \"  Drupal 7:\"                                                 >> ${2}\n      echo \"  Solr version .....: ${_VRS}\"                               >> ${2}\n      echo \"  Solr host ........: 127.0.0.1\"                             >> ${2}\n      echo \"  Solr port ........: ${_PRT}\"                               >> ${2}\n      echo \"  Solr path ........: /solr/${_SolrCoreID}\"                  >> ${2}\n      echo                                                               >> ${2}\n  
  fi\n    echo \"It has been auto-configured to work with the latest version\"   >> ${2}\n    echo \"of your integration module, but you must add the module\"       >> ${2}\n    echo \"to your site codebase before you can use Solr.\"               >> ${2}\n    echo                                                                 >> ${2}\n    echo \"To learn more, please check the module docs at:\"              >> ${2}\n    echo                                                                 >> ${2}\n    echo \"https://drupal.org/project/${_module}\"                         >> ${2}\n    chown ${_HM_U}:users ${2} &> /dev/null\n    chmod 440 ${2} &> /dev/null\n  fi\n}\n\n_reload_core_cnf() {\n  # ${1} is solr server port\n  # ${2} is solr core name\n  # Example: _reload_core_cnf 9077 ${_SolrCoreID}\n  # Example: _reload_core_cnf 9099 ${_SolrCoreID}\n  # Example: _reload_core_cnf 8099 ${_SolrCoreID}\n  curl \"http://127.0.0.1:${1}/solr/admin/cores?action=RELOAD&core=${2}\" &> /dev/null\n  echo \"Reloaded Solr core ${2} cnf on port ${1}\"\n  wait\n}\n\n_update_solr() {\n  # ${1} is module\n  # ${2} is solr core path (auto) == _SOLR_DIR\n  # ${3} is solr server version: solr9 or solr7 or jetty9\n  _SERV=\"${3}\"\n  if [ ! -z \"${1}\" ] && [ -e \"/data/conf/solr\" ]; then\n    if [ \"${1}\" = \"apachesolr\" ]; then\n      _SERV=\"jetty9\"\n      if [ -e \"${_Plr}/modules/o_contrib_seven\" ]; then\n        if [ ! -e \"${2}/conf/.protected.conf\" ] && [ -e \"${2}/conf\" ]; then\n          _slrCnfUpdate=\"\"\n          _check_config_diff \"/data/conf/solr/apachesolr/solr4_drupal7/schema.xml\" \"${2}/conf/schema.xml\"\n          if [ ! 
-z \"${_slrCnfUpdate}\" ]; then\n            rm -f ${2}/conf/*\n            cp -af /data/conf/solr/apachesolr/solr4_drupal7/* ${2}/conf/\n            chmod 644 ${2}/conf/*\n            chown ${_SERV}:${_SERV} ${2}/conf/*\n            touch ${2}/conf/.just-updated.pid\n          else\n            rm -f ${2}/conf/.just-updated.pid\n            rm -f ${2}/conf/.yes-update.txt\n          fi\n        fi\n      elif [ -e \"${_Plr}/modules/o_contrib\" ]; then\n        if [ ! -e \"${2}/conf/.protected.conf\" ] && [ -e \"${2}/conf\" ]; then\n          _slrCnfUpdate=\"\"\n          _check_config_diff \"/data/conf/solr/apachesolr/solr4_drupal6/schema.xml\" \"${2}/conf/schema.xml\"\n          if [ ! -z \"${_slrCnfUpdate}\" ]; then\n            rm -f ${2}/conf/*\n            cp -af /data/conf/solr/apachesolr/solr4_drupal6/* ${2}/conf/\n            chmod 644 ${2}/conf/*\n            chown ${_SERV}:${_SERV} ${2}/conf/*\n            touch ${2}/conf/.just-updated.pid\n          else\n            rm -f ${2}/conf/.just-updated.pid\n            rm -f ${2}/conf/.yes-update.txt\n          fi\n        fi\n      fi\n    elif [ ! -e \"${2}/conf/.protected.conf\" ] && [ -e \"${2}/conf\" ] && [ -e \"${_Plr}/modules/o_contrib_seven\" ]; then\n      if [ \"${1}\" = \"search_api_solr\" ] || [ \"${1}\" = \"search_api_solr7\" ]; then\n        _check_config_diff \"/data/conf/solr/search_api_solr/solr7_drupal7/schema.xml\" \"${2}/conf/schema.xml\"\n        if [ ! -z \"${_slrCnfUpdate}\" ]; then\n          rm -f ${2}/conf/*\n          cp -af /data/conf/solr/search_api_solr/solr7_drupal7/* ${2}/conf/\n          chmod 644 ${2}/conf/*\n          chown ${_SERV}:${_SERV} ${2}/conf/*\n          touch ${2}/conf/.just-updated.pid\n        else\n          rm -f ${2}/conf/.just-updated.pid\n          rm -f ${2}/conf/.yes-update.txt\n        fi\n        _check_config_diff \"/data/conf/solr/search_api_solr/solr7_drupal7/solrcore.properties\" \"${2}/conf/solrcore.properties\"\n        if [ ! 
-z \"${_slrCnfUpdate}\" ]; then\n          rm -f ${2}/conf/*\n          cp -af /data/conf/solr/search_api_solr/solr7_drupal7/* ${2}/conf/\n          chmod 644 ${2}/conf/*\n          chown ${_SERV}:${_SERV} ${2}/conf/*\n          touch ${2}/conf/.just-updated.pid\n        else\n          rm -f ${2}/conf/.just-updated.pid\n          rm -f ${2}/conf/.yes-update.txt\n        fi\n      fi\n    elif [ ! -e \"${_Plr}/modules/o_contrib_seven\" ] \\\n      && [ ! -e \"${_Plr}/modules/o_contrib\" ] \\\n      && [ ! -e \"${2}/conf/.protected.conf\" ] \\\n      && [ -e \"${2}/conf\" ] \\\n      && [ -e \"${_Plr}/sites/${_Dom}/files/solr/schema.xml\" ] \\\n      && [ -e \"${_Plr}/sites/${_Dom}/files/solr/solrconfig.xml\" ] \\\n      && [ -e \"${_Plr}/sites/${_Dom}/files/solr/solrcore.properties\" ]; then\n      if [ \"${1}\" = \"search_api_solr\" ] || [ \"${1}\" = \"search_api_solr7\" ] || [ \"${1}\" = \"search_api_solr9\" ]; then\n        _check_config_diff \"${_Plr}/sites/${_Dom}/files/solr/solrconfig.xml\" \"${2}/conf/solrconfig.xml\"\n        if [ ! -z \"${_slrCnfUpdate}\" ]; then\n          rm -f ${2}/conf/*\n          cp -af ${_Plr}/sites/${_Dom}/files/solr/* ${2}/conf/\n          chmod 644 ${2}/conf/*\n          chown ${_SERV}:${_SERV} ${2}/conf/*\n          rm -f ${_Plr}/sites/${_Dom}/files/solr/*\n          touch ${2}/conf/.yes-custom.txt\n          touch ${2}/conf/.just-updated.pid\n        else\n          rm -f ${2}/conf/.just-updated.pid\n          rm -f ${2}/conf/.yes-update.txt\n        fi\n      fi\n    elif [ \"${1}\" = \"search_api_solr\" ] \\\n      && [ -e \"${_Plr}/modules/o_contrib_eight\" ] \\\n      && [ ! -e \"${_Plr}/sites/${_Dom}/files/solr/schema.xml\" ]; then\n      if [ ! -e \"${2}/conf/.protected.conf\" ] \\\n        && [ ! -e \"${2}/conf/.yes-custom.txt\" ] \\\n        && [ -e \"${2}/conf\" ]; then\n        _check_config_diff \"/data/conf/solr/search_api_solr/solr7_drupal8/schema.xml\" \"${2}/conf/schema.xml\"\n        if [ ! 
-z \"${_slrCnfUpdate}\" ]; then\n          rm -f ${2}/conf/*\n          cp -af /data/conf/solr/search_api_solr/solr7_drupal8/* ${2}/conf/\n          chmod 644 ${2}/conf/*\n          chown ${_SERV}:${_SERV} ${2}/conf/*\n          touch ${2}/conf/.just-updated.pid\n        else\n          rm -f ${2}/conf/.just-updated.pid\n          rm -f ${2}/conf/.yes-update.txt\n        fi\n        _check_config_diff \"/data/conf/solr/search_api_solr/solr7_drupal8/solrcore.properties\" \"${2}/conf/solrcore.properties\"\n        if [ ! -z \"${_slrCnfUpdate}\" ]; then\n          rm -f ${2}/conf/*\n          cp -af /data/conf/solr/search_api_solr/solr7_drupal8/* ${2}/conf/\n          chmod 644 ${2}/conf/*\n          chown ${_SERV}:${_SERV} ${2}/conf/*\n          touch ${2}/conf/.just-updated.pid\n        else\n          rm -f ${2}/conf/.just-updated.pid\n          rm -f ${2}/conf/.yes-update.txt\n        fi\n      fi\n    fi\n    _fiLe=\"${_Dir}/solr.php\"\n    echo \"Info file for ${_Dom} is ${_fiLe}\"\n    echo \"Info _SERV is ${_SERV}\"\n    _SOLR_CONFIG_INFO_UPDATE=NO\n    if [ -e \"${_fiLe}\" ]; then\n      _SOLR_CONFIG_INFO_TEST=$(grep \"${_SolrCoreID}\" ${_fiLe} 2>&1)\n      if [[ ! \"${_SOLR_CONFIG_INFO_TEST}\" =~ \"${_SolrCoreID}\" ]]; then\n        _SOLR_CONFIG_INFO_UPDATE=YES\n      fi\n    fi\n    if [ ! 
-e \"${_fiLe}\" ] \\\n      || [ \"${_SOLR_CONFIG_INFO_UPDATE}\" = \"YES\" ] \\\n      || [ -e \"${2}/conf/.just-updated.pid\" ]; then\n      if [[ \"${2}\" =~ \"/opt/solr4\" ]] && [ \"${_SERV}\" = \"jetty9\" ]; then\n        _write_solr_config ${1} ${_fiLe} ${_SERV}\n        echo \"Updated ${_fiLe} with ${2} details\"\n        touch ${2}/conf/${_xSrl}.conf\n        _reload_core_cnf 8099 ${_SolrCoreID}\n      elif [[ \"${2}\" =~ \"/var/solr7/data\" ]] && [ \"${_SERV}\" = \"solr7\" ]; then\n        _write_solr_config ${1} ${_fiLe} ${_SERV}\n        echo \"Updated ${_fiLe} with ${2} details\"\n        touch ${2}/conf/${_xSrl}.conf\n        _reload_core_cnf 9077 ${_SolrCoreID}\n      elif [[ \"${2}\" =~ \"/var/solr9/data\" ]] && [ \"${_SERV}\" = \"solr9\" ]; then\n        _write_solr_config ${1} ${_fiLe} ${_SERV}\n        echo \"Updated ${_fiLe} with ${2} details\"\n        touch ${2}/conf/${_xSrl}.conf\n        _reload_core_cnf 9099 ${_SolrCoreID}\n      fi\n    fi\n  fi\n}\n\n_add_solr() {\n  # ${1} is module\n  # ${2} is solr core path\n  # ${3} is solr core version: solr9, solr7, jetty9\n  if [ \"${1}\" = \"apachesolr\" ]; then\n    _SOLR_BASE=\"/opt/solr4\"\n    _SOLR_VER=jetty9\n  elif [ \"${1}\" = \"search_api_solr\" ]; then\n    _SOLR_BASE=\"/var/solr7/data\"\n    _SOLR_VER=solr7\n  elif [ \"${1}\" = \"search_api_solr7\" ]; then\n    _SOLR_BASE=\"/var/solr7/data\"\n    _SOLR_VER=solr7\n  elif [ \"${1}\" = \"search_api_solr9\" ]; then\n    _SOLR_BASE=\"/var/solr9/data\"\n    _SOLR_VER=solr9\n  fi\n\n  echo in _add_solr _SolrCoreID is ${_SolrCoreID}\n  echo in _add_solr _SOLR_BASE is ${_SOLR_BASE}\n  echo in _add_solr _SOLR_VER is ${_SOLR_VER}\n\n  if [ ! -z \"${1}\" ] && [ ! -z \"${2}\" ] && [ -e \"/data/conf/solr\" ]; then\n    if [ ! 
-e \"${2}\" ]; then\n      if [ \"${_SOLR_BASE}\" = \"/var/solr9/data\" ] \\\n        && [ -x \"/opt/solr9/bin/solr\" ] \\\n        && [ -e \"/var/solr9/data\" ]; then\n        if [ -e \"${_Plr}/modules/o_contrib_ten\" ] \\\n          || [ -e \"${_Plr}/modules/o_contrib_eleven\" ]; then\n          echo in _add_solr create_core -p 9099 -c ${_SolrCoreID}\n          su -s /bin/bash - solr9 -c \"/opt/solr9/bin/solr create_core -p 9099 -c ${_SolrCoreID}\"\n          wait\n        else\n          echo \"Solr 9 is supported only for Drupal 10.2 and newer!\"\n        fi\n      elif [ \"${_SOLR_BASE}\" = \"/var/solr7/data\" ] \\\n        && [ -x \"/opt/solr7/bin/solr\" ] \\\n        && [ -e \"/var/solr7/data\" ]; then\n        if [ -e \"${_Plr}/modules/o_contrib_seven\" ]; then\n          echo in _add_solr o_contrib_seven create_core -p 9077 -c ${_SolrCoreID}\n          su -s /bin/bash - solr7 -c \"/opt/solr7/bin/solr create_core -p 9077 -c ${_SolrCoreID} -d /data/conf/solr/search_api_solr/solr7_drupal7\"\n          wait\n        elif [ -e \"${_Plr}/modules/o_contrib_eight\" ] \\\n          || [ -e \"${_Plr}/modules/o_contrib_nine\" ] \\\n          || [ -e \"${_Plr}/modules/o_contrib_ten\" ] \\\n          || [ -e \"${_Plr}/modules/o_contrib_eleven\" ]; then\n          echo in _add_solr create_core -p 9077 -c ${_SolrCoreID}\n          su -s /bin/bash - solr7 -c \"/opt/solr7/bin/solr create_core -p 9077 -c ${_SolrCoreID}\"\n          wait\n        else\n          echo \"Solr 7 is supported only for Drupal 7 and newer!\"\n        fi\n      elif [ \"${_SOLR_BASE}\" = \"/opt/solr4\" ]; then\n        echo in _add_solr solr4 create ${_SolrCoreID}\n        rm -rf ${_SOLR_BASE}/core0/data/*\n        cp -a ${_SOLR_BASE}/core0 ${2}\n        sed -i \"s/.*name=\\\"${_Legacy_SolrCoreID}\\\".*//g\" ${_SOLR_BASE}/solr.xml\n        wait\n        sed -i \"s/.*name=\\\"${_Old_SolrCoreID}\\\".*//g\" ${_SOLR_BASE}/solr.xml\n        wait\n        sed -i \"s/.*<core name=\\\"core0\\\" instan_ceDir=\\\"core0\\\" \\/>.*/<core name=\\\"core0\\\" instan_ceDir=\\\"core0\\\" \\/>\\n<core name=\\\"${_SolrCoreID}\\\" instan_ceDir=\\\"${_SolrCoreID}\\\" \\/>\\n/g\" ${_SOLR_BASE}/solr.xml\n        wait\n        sed -i \"/^$/d\" ${_SOLR_BASE}/solr.xml &> /dev/null\n        wait\n        pkill -9 -f jetty9\n        service jetty9 start &> /dev/null\n      fi\n      echo \"New Solr ${3} with ${1} for ${2} added\"\n    fi\n    _update_solr \"${1}\" \"${2}\" \"${3}\"\n  fi\n}\n\n_delete_solr() {\n  # ${1} is solr core path\n  if [[ \"${1}\" =~ \"solr4\" ]]; then\n    _SOLR_BASE=\"/opt/solr4\"\n  elif [[ \"${1}\" =~ \"solr7\" ]]; then\n    _SOLR_BASE=\"/var/solr7/data\"\n  elif [[ \"${1}\" =~ \"solr9\" ]]; then\n    _SOLR_BASE=\"/var/solr9/data\"\n  fi\n  if [ ! 
-z \"${1}\" ] && [ -e \"/data/conf/solr\" ] && [ -e \"${1}/conf\" ]; then\n    if [ \"${_SOLR_BASE}\" = \"/var/solr9/data\" ] \\\n      && [ -x \"/opt/solr9/bin/solr\" ] \\\n      && [ -e \"/var/solr9/data/solr.xml\" ]; then\n      if [ -e \"${_SOLR_BASE}/${_SolrCoreID}\" ]; then\n        su -s /bin/bash - solr9 -c \"/opt/solr9/bin/solr delete -p 9099 -c ${_SolrCoreID}\"\n        wait\n      fi\n      if [ -e \"${_SOLR_BASE}/${_Old_SolrCoreID}\" ]; then\n        su -s /bin/bash - solr9 -c \"/opt/solr9/bin/solr delete -p 9099 -c ${_Old_SolrCoreID}\"\n        wait\n      fi\n      if [ -e \"${_SOLR_BASE}/${_Legacy_SolrCoreID}\" ]; then\n        su -s /bin/bash - solr9 -c \"/opt/solr9/bin/solr delete -p 9099 -c ${_Legacy_SolrCoreID}\"\n        wait\n      fi\n      rm -f ${_Dir}/solr.php\n    elif [ \"${_SOLR_BASE}\" = \"/var/solr7/data\" ] \\\n      && [ -x \"/opt/solr7/bin/solr\" ] \\\n      && [ -e \"/var/solr7/data/solr.xml\" ]; then\n      if [ -e \"${_SOLR_BASE}/${_SolrCoreID}\" ]; then\n        su -s /bin/bash - solr7 -c \"/opt/solr7/bin/solr delete -p 9077 -c ${_SolrCoreID}\"\n        wait\n      fi\n      if [ -e \"${_SOLR_BASE}/${_Old_SolrCoreID}\" ]; then\n        su -s /bin/bash - solr7 -c \"/opt/solr7/bin/solr delete -p 9077 -c ${_Old_SolrCoreID}\"\n        wait\n      fi\n      if [ -e \"${_SOLR_BASE}/${_Legacy_SolrCoreID}\" ]; then\n        su -s /bin/bash - solr7 -c \"/opt/solr7/bin/solr delete -p 9077 -c ${_Legacy_SolrCoreID}\"\n        wait\n      fi\n      rm -f ${_Dir}/solr.php\n    elif [ \"${_SOLR_BASE}\" = \"/opt/solr4\" ]; then\n      sed -i \"s/.*instan_ceDir=\\\"${_SolrCoreID}\\\".*//g\" ${_SOLR_BASE}/solr.xml\n      wait\n      sed -i \"s/.*name=\\\"${_Legacy_SolrCoreID}\\\".*//g\"  ${_SOLR_BASE}/solr.xml\n      wait\n      sed -i \"s/.*name=\\\"${_Old_SolrCoreID}\\\".*//g\"     ${_SOLR_BASE}/solr.xml\n      wait\n      sed -i \"/^$/d\" ${_SOLR_BASE}/solr.xml &> /dev/null\n      wait\n      rm -rf ${1}\n      rm -f ${_Dir}/solr.php\n      
pkill -9 -f jetty9\n      service jetty9 start &> /dev/null\n    fi\n    echo \"Deleted Solr core in ${1}\"\n  fi\n}\n\n_check_solr() {\n  # ${1} is module\n  # ${2} is solr core path\n  # ${3} is solr server version\n  if [ ! -z \"${1}\" ] && [ ! -z \"${2}\" ] && [ ! -z \"${3}\" ] && [ -e \"/data/conf/solr\" ]; then\n    echo \"Checking Solr ${3} with ${1} for ${2}\"\n    if [ ! -e \"${2}\" ]; then\n      _add_solr \"${1}\" \"${2}\" \"${3}\"\n    else\n      _update_solr \"${1}\" \"${2}\" \"${3}\"\n    fi\n  fi\n}\n\n_setup_solr() {\n  if [ -e \"/data/conf/default.boa_site_control.ini\" ] \\\n    && [ ! -e \"${_DIR_CTRL_F}\" ]; then\n    cp -af /data/conf/default.boa_site_control.ini ${_DIR_CTRL_F} &> /dev/null\n    chown ${_HM_U}:users ${_DIR_CTRL_F} &> /dev/null\n    chmod 0664 ${_DIR_CTRL_F} &> /dev/null\n  fi\n  ###\n  ### Support for solr_integration_module directive\n  ###\n  if [ -e \"${_DIR_CTRL_F}\" ]; then\n    _SOLR_MODULE=\"your_module_name_here\"\n    _SOLR_IM_PT=$(grep \"solr_integration_module\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SOLR_IM_PT}\" =~ \"solr_integration_module\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";solr_integration_module = your_module_name_here\" >> ${_DIR_CTRL_F}\n    fi\n    _ASOLR_T=$(grep \"^solr_integration_module = apachesolr\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_ASOLR_T}\" =~ \"apachesolr\" ]]; then\n      _SOLR_MODULE=\"apachesolr\"\n    fi\n    _SAPI_SOLR_T=$(grep \"^solr_integration_module = search_api_solr\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SAPI_SOLR_T}\" =~ \"search_api_solr\" ]]; then\n      _SOLR_MODULE=\"search_api_solr\"\n    fi\n    _SAPI_SOLR_U=$(grep \"^solr_integration_module = search_api_solr7\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SAPI_SOLR_U}\" =~ \"search_api_solr7\" ]]; then\n      _SOLR_MODULE=\"search_api_solr7\"\n    fi\n    _SAPI_SOLR_V=$(grep \"^solr_integration_module = search_api_solr9\" \\\n      ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SAPI_SOLR_V}\" 
=~ \"search_api_solr9\" ]]; then\n      _SOLR_MODULE=\"search_api_solr9\"\n    fi\n    if [ \"${_SOLR_MODULE}\" = \"apachesolr\" ] && [ -e \"/opt/solr4\" ]; then\n      _SOLR_BASE=\"/opt/solr4\"\n      _SOLR_VER=jetty9\n    elif [ \"${_SOLR_MODULE}\" = \"search_api_solr\" ] && [ -e \"/var/solr7/data\" ]; then\n      _SOLR_BASE=\"/var/solr7/data\"\n      _SOLR_VER=solr7\n    elif [ \"${_SOLR_MODULE}\" = \"search_api_solr7\" ] && [ -e \"/var/solr7/data\" ]; then\n      _SOLR_BASE=\"/var/solr7/data\"\n      _SOLR_VER=solr7\n    elif [ \"${_SOLR_MODULE}\" = \"search_api_solr9\" ] && [ -e \"/var/solr9/data\" ]; then\n      _SOLR_BASE=\"/var/solr9/data\"\n      _SOLR_VER=solr9\n    else\n      _SOLR_MODULE=\n      _SOLR_BASE=\n      _SOLR_VER=\n    fi\n    _SOLR_DIR=\"${_SOLR_BASE}/${_SolrCoreID}\"\n    if [ \"${_SOLR_MODULE}\" = \"search_api_solr\" ] \\\n      || [ \"${_SOLR_MODULE}\" = \"search_api_solr7\" ] \\\n      || [ \"${_SOLR_MODULE}\" = \"search_api_solr9\" ] \\\n      || [ \"${_SOLR_MODULE}\" = \"apachesolr\" ]; then\n      [ -n \"${_SOLR_VER}\" ] && _check_solr \"${_SOLR_MODULE}\" \"${_SOLR_DIR}\" \"${_SOLR_VER}\"\n    else\n      if [ -n \"${_SOLR_VER}\" ]; then\n        _SOLR_DIR_DEL=\"/opt/solr4/${_SolrCoreID}\"\n        _delete_solr \"${_SOLR_DIR_DEL}\"\n        _SOLR_DIR_DEL=\"/var/solr7/data/${_SolrCoreID}\"\n        _delete_solr \"${_SOLR_DIR_DEL}\"\n        _SOLR_DIR_DEL=\"/var/solr9/data/${_SolrCoreID}\"\n        _delete_solr \"${_SOLR_DIR_DEL}\"\n        _SOLR_DIR_DEL=\"/opt/solr4/${_Legacy_SolrCoreID}\"\n        _delete_solr \"${_SOLR_DIR_DEL}\"\n        _SOLR_DIR_DEL=\"/var/solr7/data/${_Legacy_SolrCoreID}\"\n        _delete_solr \"${_SOLR_DIR_DEL}\"\n        _SOLR_DIR_DEL=\"/opt/solr4/${_Old_SolrCoreID}\"\n        _delete_solr \"${_SOLR_DIR_DEL}\"\n        _SOLR_DIR_DEL=\"/var/solr7/data/${_Old_SolrCoreID}\"\n        _delete_solr \"${_SOLR_DIR_DEL}\"\n        _SOLR_DIR_DEL=\"/var/solr9/data/${_Old_SolrCoreID}\"\n        _delete_solr 
\"${_SOLR_DIR_DEL}\"\n      fi\n    fi\n  fi\n  ###\n  ### Support for solr_custom_config directive\n  ###\n  if [ -e \"${_DIR_CTRL_F}\" ]; then\n    _SLR_CM_CFG_P=$(grep \"solr_custom_config\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SLR_CM_CFG_P}\" =~ \"solr_custom_config\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";solr_custom_config = NO\" >> ${_DIR_CTRL_F}\n    fi\n    _SLR_CM_CFG_RT=NO\n    _SOLR_PROTECT_CTRL=\"${_SOLR_DIR}/conf/.protected.conf\"\n    _SLR_CM_CFG_T=$(grep \"^solr_custom_config = YES\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SLR_CM_CFG_T}\" =~ \"solr_custom_config = YES\" ]]; then\n      _SLR_CM_CFG_RT=YES\n      if [ ! -e \"${_SOLR_PROTECT_CTRL}\" ]; then\n        touch ${_SOLR_PROTECT_CTRL}\n      fi\n      echo \"Solr config for ${_SOLR_DIR} is protected\"\n    else\n      if [ -e \"${_SOLR_PROTECT_CTRL}\" ]; then\n        rm -f ${_SOLR_PROTECT_CTRL}\n      fi\n    fi\n  fi\n  ###\n  ### Support for solr_update_config directive\n  ###\n  if [ -e \"${_DIR_CTRL_F}\" ]; then\n    _SOLR_UP_CFG_PT=$(grep \"solr_update_config\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SOLR_UP_CFG_PT}\" =~ \"solr_update_config\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \";solr_update_config = NO\" >> ${_DIR_CTRL_F}\n    fi\n    _SOLR_UP_CFG_TT=$(grep \"^solr_update_config = YES\" ${_DIR_CTRL_F} 2>&1)\n    if [[ \"${_SOLR_UP_CFG_TT}\" =~ \"solr_update_config = YES\" ]]; then\n      if [ \"${_SLR_CM_CFG_RT}\" = \"NO\" ] \\\n        && [ ! -e \"${_SOLR_PROTECT_CTRL}\" ]; then\n        _update_solr \"${_SOLR_MODULE}\" \"${_SOLR_DIR}\" \"${_SOLR_VER}\"\n      fi\n    fi\n  fi\n}\n\n_proceed_solr() {\n  if [ ! 
-z \"${_Dan}\" ] \\\n    && [ \"${_Dan}\" != \"hostmaster\" ]; then\n    CoreID=\"${_Dan}.${_HM_U}\"\n    CoreHS=$(echo ${CoreID} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    #_SolrCoreID=\"${_HM_U}-${_Dan}-${CoreHS}\"\n    _Legacy_SolrCoreID=\"${_HM_U}.${_Dan}\"\n    _Old_SolrCoreID=\"solr.${_HM_U}.${_Dan}\"\n    _SolrCoreID=\"oct.${_HM_U}.${_Dan}\"\n    _setup_solr\n  fi\n}\n\n_check_sites_list() {\n  for _Site in `find ${_usEr}/config/server_master/nginx/vhost.d \\\n    -maxdepth 1 -mindepth 1 -type f | sort`; do\n    _MOMENT=$(date +%y%m%d-%H%M%S)\n    echo ${_MOMENT} Start Checking Site ${_Site}\n    _Dom=$(echo ${_Site} | cut -d'/' -f9 | awk '{ print $1}' 2>&1)\n    if [ -e \"${_usEr}/config/server_master/nginx/vhost.d/${_Dom}\" ]; then\n      _Plx=$(cat ${_usEr}/config/server_master/nginx/vhost.d/${_Dom} \\\n        | grep \"root \" \\\n        | cut -d: -f2 \\\n        | awk '{ print $2}' \\\n        | sed \"s/[\\;]//g\" 2>&1)\n      if [[ \"${_Plx}\" =~ \"aegir/distro\" ]]; then\n        _Dan=\"hostmaster\"\n      else\n        _Dan=\"${_Dom}\"\n      fi\n    fi\n    _STATUS_DISABLED=NO\n    _STATUS_TEST=$(grep \"Do not reveal Aegir front-end URL here\" \\\n      ${_usEr}/config/server_master/nginx/vhost.d/${_Dom} 2>&1)\n    if [[ \"${_STATUS_TEST}\" =~ \"Do not reveal Aegir front-end URL here\" ]]; then\n      _STATUS_DISABLED=YES\n      echo \"${_Dom} site is DISABLED\"\n    fi\n    if [ -e \"${_usEr}/.drush/${_Dan}.alias.drushrc.php\" ] \\\n      && [ \"${_STATUS_DISABLED}\" = \"NO\" ]; then\n      _Dir=$(cat ${_usEr}/.drush/${_Dan}.alias.drushrc.php \\\n        | grep \"site_path'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      _DIR_CTRL_F=\"${_Dir}/modules/boa_site_control.ini\"\n      _Plr=$(cat ${_usEr}/.drush/${_Dan}.alias.drushrc.php \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed 
\"s/[\\,']//g\" 2>&1)\n      _PLR_CTRL_F=\"${_Plr}/sites/all/modules/boa_platform_control.ini\"\n      _proceed_solr\n    fi\n  done\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n  echo ${_CPU_NR} > /data/all/cpuinfo\n  chmod 644 /data/all/cpuinfo &> /dev/null\n}\n\n_get_load() {\n  read -r _one _five _rest <<< \"$(cat /proc/loadavg)\"\n  _O_LOAD=$(awk -v _load_value=\"${_one}\" -v _cpus=\"${_CPU_NR}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n}\n\n_load_control() {\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  : \"${_CPU_TASK_RATIO:=3.1}\"\n  _CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n  _O_LOAD_MAX=$(echo \"${_CPU_TASK_RATIO} * 100\" | bc -l)\n  _get_load\n}\n\n_fix_solr9_core() {\n  local file=\"$1\"\n  if [ -e \"${file}\" ]; then\n    local _test_id\n    _test_id=$(grep \"solr9\" \"${file}\" 2>&1)\n    _test_port=$(grep \"9099\" \"${file}\" 2>&1)\n    if [[ ! \"${_test_id}\" =~ \"solr9\" ]] || [[ ! 
\"${_test_port}\" =~ \"9099\" ]]; then\n      sed -i \"s/^solr\\.replication\\.masterUrl.*//g\" \"${file}\"\n      sed -i \"s/^solr\\.install\\.dir.*//g\" \"${file}\"\n      sed -i \"s/^solr\\.contrib\\.dir.*//g\" \"${file}\"\n      echo \"solr.replication.masterUrl=http://localhost:9099\" >> \"${file}\"\n      echo \"solr.install.dir=/opt/solr9\" >> \"${file}\"\n      sed -i \"/^$/d\" \"${file}\"\n      echo \"Fixed ${file}\"\n      _IF_RESTART_SOLR=YES\n    fi\n  fi\n}\n\n_fix_solr9_cnf() {\n  if [ -x \"/etc/init.d/solr9\" ] && [ -e \"/var/solr9/logs\" ]; then\n    _IF_RESTART_SOLR=NO\n    for _pRp in `find /var/solr9/data/oct.*/conf/solrcore.properties -maxdepth 1 | sort`; do\n      if [ -e \"${_pRp}\" ]; then\n        _PRP_TEST_ID=$(grep \"solr9\" ${_pRp} 2>&1)\n        _PRP_TEST_PORT=$(grep \"9099\" ${_pRp} 2>&1)\n        if [[ ! \"${_PRP_TEST_ID}\" =~ \"solr9\" ]] || [[ ! \"${_PRP_TEST_PORT}\" =~ \"9099\" ]]; then\n          sed -i \"s/^solr\\.replication\\.masterUrl.*//g\" ${_pRp}\n          sed -i \"s/^solr\\.install\\.dir.*//g\" ${_pRp}\n          sed -i \"s/^solr\\.contrib\\.dir.*//g\" ${_pRp}\n          echo \"solr.replication.masterUrl=http://localhost:9099\" >> ${_pRp}\n          echo \"solr.install.dir=/opt/solr9\" >> ${_pRp}\n          sed -i \"/^$/d\" ${_pRp}\n          echo \"Fixed ${_pRp}\"\n          _IF_RESTART_SOLR=YES\n        fi\n      fi\n    done\n    _solr9_paths=(\n      \"/var/xdrago/conf/solr/search_api_solr/solr9_drupal10/solrcore.properties\"\n      \"/data/conf/solr/search_api_solr/solr9_drupal10/solrcore.properties\"\n    )\n    for path in \"${_solr9_paths[@]}\"; do\n      _fix_solr9_core \"${path}\"\n    done\n    rStart=\"/var/solr9/logs/.restarted_new_fix_solr9_cnf.txt\"\n    if [ \"${_IF_RESTART_SOLR}\" = \"YES\" ] \\\n      || [ ! 
-e \"${rStart}\" ]; then\n      echo \"Restarting Solr 9...\"\n      service solr9 restart\n      wait\n      touch ${rStart}\n    fi\n  fi\n}\n\n_fix_solr7_core() {\n  local file=\"$1\"\n  if [ -e \"${file}\" ]; then\n    local _test_id\n    _test_id=$(grep \"solr7\" \"${file}\" 2>&1)\n    if [[ ! \"${_test_id}\" =~ \"solr7\" ]]; then\n      sed -i \"s/^solr\\.install\\.dir.*//g\" \"${file}\"\n      sed -i \"s/^solr\\.contrib\\.dir.*//g\" \"${file}\"\n      echo \"solr.install.dir=/opt/solr7\" >> \"${file}\"\n      sed -i \"/^$/d\" \"${file}\"\n      echo \"Fixed ${file}\"\n      _IF_RESTART_SOLR=YES\n    fi\n  fi\n}\n\n_fix_solr7_cnf() {\n  if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/var/solr7/logs\" ]; then\n    _IF_RESTART_SOLR=NO\n    for _pRp in `find /var/solr7/data/oct.*/conf/solrcore.properties -maxdepth 1 | sort`; do\n      if [ -e \"${_pRp}\" ]; then\n        _PRP_TEST_ID=$(grep \"solr7\" ${_pRp} 2>&1)\n        if [[ ! \"${_PRP_TEST_ID}\" =~ \"solr7\" ]]; then\n          sed -i \"s/^solr\\.install\\.dir.*//g\" ${_pRp}\n          sed -i \"s/^solr\\.contrib\\.dir.*//g\" ${_pRp}\n          echo \"solr.install.dir=/opt/solr7\" >> ${_pRp}\n          sed -i \"/^$/d\" ${_pRp}\n          echo \"Fixed ${_pRp}\"\n          _IF_RESTART_SOLR=YES\n        fi\n      fi\n    done\n    _solr7_paths=(\n      \"/var/xdrago/conf/solr/search_api_solr/solr7_drupal7/solrcore.properties\"\n      \"/var/xdrago/conf/solr/search_api_solr/solr7_drupal8/solrcore.properties\"\n      \"/var/xdrago/conf/solr/search_api_solr/solr7_drupal9/solrcore.properties\"\n      \"/var/xdrago/conf/solr/search_api_solr/solr7_drupal10/solrcore.properties\"\n      \"/data/conf/solr/search_api_solr/solr7_drupal7/solrcore.properties\"\n      \"/data/conf/solr/search_api_solr/solr7_drupal8/solrcore.properties\"\n      \"/data/conf/solr/search_api_solr/solr7_drupal9/solrcore.properties\"\n      \"/data/conf/solr/search_api_solr/solr7_drupal10/solrcore.properties\"\n    )\n    for path in 
\"${_solr7_paths[@]}\"; do\n      _fix_solr7_core \"${path}\"\n    done\n    rStart=\"/var/solr7/logs/.restarted_new_fix_solr7_cnf.txt\"\n    if [ \"${_IF_RESTART_SOLR}\" = \"YES\" ] \\\n      || [ ! -e \"${rStart}\" ]; then\n      echo \"Restarting Solr 7...\"\n      service solr7 restart\n      wait\n      touch ${rStart}\n    fi\n  fi\n}\n\n_sync_solr_config() {\n  local _rel_dir=\"$1\"\n  local _base_dir=\"/var/xdrago/conf/solr/${_rel_dir}\"\n\n  if [ -d \"${_base_dir}\" ]; then\n    local _baseCpy=\"${_base_dir}/schema.xml\"\n    local _liveCpy=\"/data/conf/solr/${_rel_dir}/schema.xml\"\n\n    _check_config_diff \"${_baseCpy}\" \"${_liveCpy}\"\n\n    if [ ! -e \"/data/conf/solr/${_rel_dir}/solrconfig.xml\" ] \\\n      || [ ! -e \"/data/conf/solr/.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n      || [ ! -z \"${_slrCnfUpdate}\" ]; then\n      rm -rf /data/conf/solr\n      cp -af /var/xdrago/conf/solr /data/conf/\n      rm -f /data/conf/solr/.ctrl*\n      touch /data/conf/solr/.ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n}\n\n###\n### Orphan core cleanup\n###\n### Three-tier classification for every oct.* / solr.* core found on disk:\n###\n###   TIER 1 — No vhost at all, OR vhost exists but no Aegir drush alias:\n###            Core is unknown to the provisioning layer. Apply short\n###            staleness threshold (_ORPHAN_STALE_DAYS, default 14d).\n###\n###   TIER 2 — Vhost exists AND Aegir alias exists (site is known but may be\n###            an abandoned clone, old staging env, renamed domain, etc.):\n###            Apply long staleness threshold (_ORPHAN_VHOST_STALE_DAYS,\n###            default 60d) on data/index/ mtime only — giving cloned/renamed\n###            sites plenty of time before their core is retired.\n###\n###   TIER 3 — Protected by conf/.protected.conf: never touched regardless.\n###\n### Staleness is measured on data/index/ (Lucene commit mtime — the cleanest\n### signal of real indexing activity). 
data/ itself is intentionally NOT used\n### as the primary signal because Solr keeps it perpetually fresh via tlog,\n### write.lock and other bookkeeping writes even on idle cores.\n###\n### Qualifying cores are NOT deleted — they are unloaded from Solr's registry\n### and moved to /var/backups/solrN/<timestamp>-<core>/ for easy recovery.\n###\n_ORPHAN_STALE_DAYS=14       # tier 1: no vhost / no alias\n_ORPHAN_VHOST_STALE_DAYS=60 # tier 2: vhost+alias present but index inactive\n\n_build_active_core_set() {\n  # Populates three global associative arrays:\n  #\n  #   _CORES_WITH_VHOST       — core has a matching enabled nginx vhost\n  #   _CORES_WITH_ALIAS       — core also has a matching Aegir drush alias\n  #   _CORES_WITH_SOLR_MODULE — site has solr_integration_module explicitly\n  #                             set in boa_site_control.ini — these are\n  #                             actively managed and must never be archived\n  #                             based on index age alone, even if stale\n  #\n  # All three historical name formats (oct.*, solr.*, <user>.<domain>) are\n  # registered for each site so transitional/renamed cores are never wrongly\n  # retired.\n  declare -gA _CORES_WITH_VHOST=()\n  declare -gA _CORES_WITH_ALIAS=()\n  declare -gA _CORES_WITH_SOLR_MODULE=()\n\n  for _usEr in $(find /data/disk/ -maxdepth 1 -mindepth 1 | sort); do\n    [ -e \"${_usEr}/config/server_master/nginx/vhost.d\" ] || continue\n    local _HM_U_TMP\n    _HM_U_TMP=$(echo \"${_usEr}\" | cut -d'/' -f4 | awk '{ print $1}')\n\n    for _Site in $(find \"${_usEr}/config/server_master/nginx/vhost.d\" \\\n        -maxdepth 1 -mindepth 1 -type f | sort); do\n      local _DomTmp _STATUS_T _PlxTmp\n      _DomTmp=$(echo \"${_Site}\" | cut -d'/' -f9 | awk '{ print $1}')\n      [ -z \"${_DomTmp}\" ] && continue\n\n      # Skip disabled/parked sites\n      _STATUS_T=$(grep \"Do not reveal Aegir front-end URL here\" \"${_Site}\" 2>&1)\n      [[ \"${_STATUS_T}\" =~ \"Do not reveal Aegir 
front-end URL here\" ]] && continue\n\n      # Skip hostmaster vhosts\n      _PlxTmp=$(grep \"root \" \"${_Site}\" | cut -d: -f2 | awk '{ print $2}' | sed \"s/[\\;]//g\" 2>&1)\n      [[ \"${_PlxTmp}\" =~ \"aegir/distro\" ]] && continue\n\n      # Register vhost presence for all three name formats\n      _CORES_WITH_VHOST[\"oct.${_HM_U_TMP}.${_DomTmp}\"]=1\n      _CORES_WITH_VHOST[\"solr.${_HM_U_TMP}.${_DomTmp}\"]=1\n      _CORES_WITH_VHOST[\"${_HM_U_TMP}.${_DomTmp}\"]=1\n\n      # Check for Aegir drush alias\n      local _aliasFile=\"${_usEr}/.drush/${_DomTmp}.alias.drushrc.php\"\n      if [ -f \"${_aliasFile}\" ]; then\n        _CORES_WITH_ALIAS[\"oct.${_HM_U_TMP}.${_DomTmp}\"]=1\n        _CORES_WITH_ALIAS[\"solr.${_HM_U_TMP}.${_DomTmp}\"]=1\n        _CORES_WITH_ALIAS[\"${_HM_U_TMP}.${_DomTmp}\"]=1\n\n        # Check for explicit solr_integration_module in boa_site_control.ini.\n        # A site with this set is actively managed by _check_sites_list and\n        # must not be archived based on index age — _add_solr will recreate\n        # the core empty if we archive it, losing the index permanently.\n        local _sitePath\n        _sitePath=$(grep \"site_path'\" \"${_aliasFile}\" \\\n          | cut -d: -f2 | awk '{print $3}' | sed \"s/[\\,']//g\" 2>/dev/null)\n        if [ -n \"${_sitePath}\" ]; then\n          local _ctrlFile=\"${_sitePath}/modules/boa_site_control.ini\"\n          if [ -f \"${_ctrlFile}\" ]; then\n            local _solrMod\n            _solrMod=$(grep \"^solr_integration_module\" \"${_ctrlFile}\" 2>/dev/null)\n            if [ -n \"${_solrMod}\" ]; then\n              _CORES_WITH_SOLR_MODULE[\"oct.${_HM_U_TMP}.${_DomTmp}\"]=1\n              _CORES_WITH_SOLR_MODULE[\"solr.${_HM_U_TMP}.${_DomTmp}\"]=1\n              _CORES_WITH_SOLR_MODULE[\"${_HM_U_TMP}.${_DomTmp}\"]=1\n            fi\n          fi\n        fi\n      fi\n    done\n  done\n}\n\n_core_is_stale() {\n  # Returns 0 (stale/archive-eligible) or 1 (fresh/keep).\n  # ${1} = core 
path\n  # ${2} = staleness threshold in days\n  #\n  # Primary signal: data/index/ mtime — updated only on Lucene segment commits.\n  # Fallback:       data/ mtime — catches tlog-only replicas mid-reindex.\n  # A core with no data/ at all was never indexed → immediately stale.\n  local _corePath=\"${1}\"\n  local _threshold=\"${2}\"\n  local _dataDir=\"${_corePath}/data\"\n  local _indexDir=\"${_corePath}/data/index\"\n\n  [ -d \"${_dataDir}\" ] || return 0  # no data dir → stale\n\n  if [ -d \"${_indexDir}\" ]; then\n    local _indexFresh\n    _indexFresh=$(find \"${_indexDir}\" -maxdepth 0 \\\n      -mtime \"-${_threshold}\" 2>/dev/null)\n    [ -n \"${_indexFresh}\" ] && return 1  # index recently committed → keep\n  fi\n\n  local _dataFresh\n  _dataFresh=$(find \"${_dataDir}\" -maxdepth 0 \\\n    -mtime \"-${_threshold}\" 2>/dev/null)\n  [ -n \"${_dataFresh}\" ] && return 1  # data dir recently touched → keep\n\n  return 0  # both older than threshold → stale\n}\n\n_core_last_activity_days() {\n  # Echoes age in days of the most recent of data/index/ and data/ mtimes.\n  # Used only for human-readable log output.\n  local _corePath=\"${1}\"\n  local _now _iMtime _dMtime _newest\n  _now=$(date +%s)\n  _iMtime=0\n  _dMtime=0\n  [ -d \"${_corePath}/data/index\" ] && \\\n    _iMtime=$(stat -c %Y \"${_corePath}/data/index\" 2>/dev/null || echo 0)\n  [ -d \"${_corePath}/data\" ] && \\\n    _dMtime=$(stat -c %Y \"${_corePath}/data\" 2>/dev/null || echo 0)\n  _newest=$(( _iMtime >= _dMtime ? 
_iMtime : _dMtime ))\n  echo $(( ( _now - _newest ) / 86400 ))\n}\n\n_archive_orphan_core() {\n  # Unloads the core from Solr's registry (without deleting any files) then\n  # moves the directory to the backup location for easy manual recovery.\n  #\n  # ${1} = port  ${2} = backup base dir  ${3} = core path  ${4} = timestamp\n  #\n  # RECOVERY — to restore an archived core:\n  #\n  #   port=9077  # or 9099 for solr9\n  #   core=\"oct.o1.example.com\"\n  #   bkp=\"/var/backups/solr7/20260418-222802-${core}\"   # adjust timestamp\n  #   dest=\"/var/solr7/data/${core}\"                     # or solr9\n  #\n  #   mv \"${bkp}\" \"${dest}\"\n  #   chown -R solr7:solr7 \"${dest}\"                     # or solr9:solr9\n  #   curl \"http://127.0.0.1:${port}/solr/admin/cores?action=CREATE&name=${core}&instanceDir=${dest}\"\n  #\n  # NOTE: use CREATE not RELOAD — RELOAD only works for already-registered\n  # cores. CREATE re-registers the existing directory without touching files.\n  local _port=\"${1}\" _bkpBase=\"${2}\" _corePath=\"${3}\" _ts=\"${4}\"\n  local _coreName\n  _coreName=$(basename \"${_corePath}\")\n\n  # Unload via admin API — deleteIndex/DataDir/InstanceDir all false so Solr\n  # releases its handle but leaves every file intact for the mv below\n  curl -s --max-time 15 \\\n    \"http://127.0.0.1:${_port}/solr/admin/cores?action=UNLOAD&core=${_coreName}&deleteIndex=false&deleteDataDir=false&deleteInstanceDir=false\" \\\n    > /dev/null 2>&1 || true\n  wait\n\n  local _bkpDest=\"${_bkpBase}/${_ts}-${_coreName}\"\n  mkdir -p \"${_bkpBase}\"\n  if mv \"${_corePath}\" \"${_bkpDest}\"; then\n    echo \"ORPHAN-ARCHIVED: ${_coreName} → ${_bkpDest}\"\n  else\n    echo \"ORPHAN-ERROR: mv failed for ${_corePath} — skipping\"\n  fi\n}\n\n_purge_orphan_cores() {\n  # ${1} = solr binary   ${2} = solr unix user  ${3} = port\n  # ${4} = data dir      ${5} = backup base dir\n  local _bin=\"${1}\" _usr=\"${2}\" _port=\"${3}\" _datadir=\"${4}\" _bkpBase=\"${5}\"\n\n  [ -x 
\"${_bin}\" ]     || return\n  [ -d \"${_datadir}\" ] || return\n\n  local _archived=0 _protected=0 _fresh=0 _skipped=0 _ts\n  _ts=$(date +%Y%m%d-%H%M%S)\n\n  for _corePath in $(find \"${_datadir}\" -maxdepth 1 -mindepth 1 -type d | sort); do\n    local _coreName _dot_count\n    _coreName=$(basename \"${_corePath}\")\n\n    # Only process cores matching our naming convention\n    if [[ ! \"${_coreName}\" =~ ^(oct|solr)\\. ]]; then\n      _dot_count=$(echo \"${_coreName}\" | tr -cd '.' | wc -c)\n      [ \"${_dot_count}\" -lt 1 ] && continue\n    fi\n\n    # Protection sentinel always wins — no logging spam for these\n    if [ -e \"${_corePath}/conf/.protected.conf\" ]; then\n      (( _protected++ ))\n      continue\n    fi\n\n    local _hasVhost _hasAlias _threshold _tier _age\n    _hasVhost=0\n    _hasAlias=0\n    [ -n \"${_CORES_WITH_VHOST[${_coreName}]+x}\" ] && _hasVhost=1\n    [ -n \"${_CORES_WITH_ALIAS[${_coreName}]+x}\" ] && _hasAlias=1\n\n    if [ \"${_hasVhost}\" -eq 1 ] && [ \"${_hasAlias}\" -eq 1 ]; then\n      # Extra gate: if site has solr_integration_module explicitly configured,\n      # _check_sites_list actively manages this core and will recreate it\n      # empty if we archive it.  Protect it unconditionally.\n      if [ -n \"${_CORES_WITH_SOLR_MODULE[${_coreName}]+x}\" ]; then\n        echo \"ORPHAN-SKIP: ${_coreName} — solr_integration_module set, actively managed\"\n        (( _protected++ ))\n        continue\n      fi\n      # Tier 2: properly provisioned site — use the long threshold\n      _threshold=\"${_ORPHAN_VHOST_STALE_DAYS}\"\n      _tier=\"tier2(vhost+alias)\"\n    else\n      # Tier 1: no vhost, or vhost with no Aegir alias (nginx leftover / dead clone)\n      _threshold=\"${_ORPHAN_STALE_DAYS}\"\n      _tier=\"tier1(no-vhost-or-alias)\"\n    fi\n\n    if ! 
_core_is_stale \"${_corePath}\" \"${_threshold}\"; then\n      _age=$(_core_last_activity_days \"${_corePath}\")\n      echo \"ORPHAN-FRESH: ${_coreName} [${_tier}] index ${_age}d ago (threshold ${_threshold}d) — keeping\"\n      (( _fresh++ ))\n      continue\n    fi\n\n    _age=$(_core_last_activity_days \"${_corePath}\")\n    echo \"ORPHAN-CANDIDATE: ${_coreName} [${_tier}] index ${_age}d ago — archiving\"\n    _archive_orphan_core \"${_port}\" \"${_bkpBase}\" \"${_corePath}\" \"${_ts}\"\n    (( _archived++ ))\n  done\n\n  echo \"Orphan cleanup port=${_port}: archived=${_archived} fresh(kept)=${_fresh} protected=${_protected}\"\n}\n\n_cleanup_orphan_cores() {\n  # Guard: never run during a BOA install or upgrade\n  [ \"${_protectedRun}\" = \"TRUE\" ] && return\n\n  # Throttle: run at most once every 6 hours.\n  # The sentinel file is touched on each successful run; we skip if it is\n  # younger than 6 hours (360 minutes — find -mmin returns a result = fresh).\n  local _sentinel=\"/var/backups/solr/.orphan_cleanup_last_run.pid\"\n  mkdir -p /var/backups/solr\n  if [ -e \"${_sentinel}\" ]; then\n    local _sentinel_fresh\n    _sentinel_fresh=$(find \"${_sentinel}\" -mmin \"-360\" 2>/dev/null)\n    if [ -n \"${_sentinel_fresh}\" ]; then\n      echo \"Orphan cleanup skipped -- last run less than 6 hours ago (${_sentinel})\"\n      return\n    fi\n  fi\n\n  echo \"=== Orphan core cleanup start ===\"\n\n  _build_active_core_set\n  echo \"Active cores with vhost: ${#_CORES_WITH_VHOST[@]}  with alias: ${#_CORES_WITH_ALIAS[@]}  with solr module: ${#_CORES_WITH_SOLR_MODULE[@]}\"\n\n  if [ -x \"/etc/init.d/solr7\" ] && [ -d \"/var/solr7/data\" ]; then\n    mkdir -p /var/backups/solr7\n    _purge_orphan_cores \\\n      \"/opt/solr7/bin/solr\" \"solr7\" \"9077\" \"/var/solr7/data\" \"/var/backups/solr7\"\n  fi\n\n  if [ -x \"/etc/init.d/solr9\" ] && [ -d \"/var/solr9/data\" ]; then\n    mkdir -p /var/backups/solr9\n    _purge_orphan_cores \\\n      \"/opt/solr9/bin/solr\" 
\"solr9\" \"9099\" \"/var/solr9/data\" \"/var/backups/solr9\"\n  fi\n\n  touch \"${_sentinel}\"\n  echo \"=== Orphan core cleanup end ===\"\n}\n\n_check_solr_core_health() {\n  # Queries the Solr admin STATUS API for every registered core and logs\n  # anomalies that could indicate memory pressure or a problematic core:\n  #\n  #   - Cores reporting initFailures (failed to load)\n  #   - High segment count (>50) -- many small unmerged segments hold open\n  #     file handles and contribute to Metaspace pressure via per-segment\n  #     FieldCache entries\n  #   - High deleted doc ratio (>20% of maxDoc) -- unmerged deletes inflate\n  #     heap and prevent segment GC\n  #   - Large index size (>500MB) -- informational\n  #\n  # Read-only diagnostic -- no changes are made.\n  # ${1} = port   e.g. 9077 or 9099\n  # ${2} = label  e.g. solr7 or solr9\n  local _port=\"${1}\" _label=\"${2}\"\n  local _api=\"http://127.0.0.1:${_port}/solr/admin/cores?action=STATUS&wt=json\"\n\n  if ! command -v python3 &>/dev/null; then\n    echo \"HEALTH-SKIP: python3 not available, skipping ${_label} core health check\"\n    return\n  fi\n\n  local _json\n  _json=$(curl -s --max-time 15 \"${_api}\" 2>/dev/null)\n  if [ -z \"${_json}\" ]; then\n    echo \"HEALTH-ERROR: no response from ${_label} on port ${_port}\"\n    return\n  fi\n\n  local _tmpjson\n  _tmpjson=$(mktemp /tmp/solr_health_XXXXXX.json)\n  echo \"${_json}\" > \"${_tmpjson}\"\n\n  echo \"=== Core health check ${_label} port=${_port} ===\"\n\n  python3 - \"${_tmpjson}\" <<'PYEOF'\nimport json, sys\n\ntry:\n    with open(sys.argv[1]) as f:\n        data = json.load(f)\nexcept Exception as e:\n    print(f\"HEALTH-ERROR: JSON parse failed: {e}\")\n    sys.exit(0)\n\ninit_failures = data.get(\"initFailures\", {})\nif init_failures:\n    for core, err in init_failures.items():\n        print(f\"HEALTH-INIT-FAIL: {core} -- {err}\")\n\ncores = data.get(\"status\", {})\nif not cores:\n    print(\"HEALTH-INFO: no cores registered\")\n 
   sys.exit(0)\n\nprint(f\"HEALTH-INFO: {len(cores)} cores registered\")\n\nfor name, info in sorted(cores.items()):\n    index    = info.get(\"index\", {})\n    num_docs = index.get(\"numDocs\", 0)\n    max_doc  = index.get(\"maxDoc\", 0)\n    deleted  = max_doc - num_docs\n    segments = index.get(\"segmentCount\", 0)\n    size_mb  = index.get(\"sizeInBytes\", 0) / 1048576\n\n    issues = []\n\n    if segments > 50:\n        issues.append(f\"high segment count={segments}\")\n\n    if max_doc > 0:\n        del_pct = (deleted / max_doc) * 100\n        if del_pct > 20:\n            issues.append(f\"deleted={deleted}/{max_doc} ({del_pct:.0f}%)\")\n\n    if size_mb > 500:\n        issues.append(f\"large index={size_mb:.0f}MB\")\n\n    if issues:\n        print(f\"HEALTH-WARN: {name}  docs={num_docs}  segs={segments}  \"\n              f\"size={size_mb:.1f}MB  issues: {', '.join(issues)}\")\nPYEOF\n\n  rm -f \"${_tmpjson}\"\n  echo \"=== Core health check end ===\"\n}\n\n\n_optimize_solr_cores() {\n  # Runs expungeDeletes on every registered core where the deleted doc ratio\n  # exceeds _OPTIMIZE_DEL_PCT_THRESHOLD, and a full optimize (maxSegments=1)\n  # on cores where the ratio exceeds _OPTIMIZE_FULL_THRESHOLD.\n  #\n  # Design decisions:\n  #\n  #   expungeDeletes vs full optimize:\n  #     expungeDeletes merges only segments whose deleted doc fraction exceeds\n  #     the merge policy threshold. It is cheap, non-blocking for queries, and\n  #     safe to run frequently. Full optimize (maxSegments=1) merges everything\n  #     into a single segment — expensive on large indexes but reclaims the\n  #     most space and gives the best query performance. We use a higher\n  #     threshold to trigger the full optimize and only do it when genuinely\n  #     needed.\n  #\n  #   waitFlush=false:\n  #     Both calls return immediately while the merge continues in the\n  #     background. This keeps the script runtime bounded regardless of index\n  #     size. 
The next health check run will reflect the improved state.\n  #\n  #   Protected cores (conf/.protected.conf):\n  #     These have custom solrconfig.xml and may have their own merge policy\n  #     already tuned (e.g. TieredMergePolicy with deletesPctAllowed=20).\n  #     We still run expungeDeletes on them if the ratio is very high, but\n  #     skip the full optimize to avoid interfering with their tuning.\n  #\n  #   Throttle:\n  #     Sentinel file at /var/backups/solr/.optimize_last_run.pid limits\n  #     execution to once every _OPTIMIZE_INTERVAL_HOURS hours (default 12).\n  #     This keeps the function rate-limited even though _start_up runs\n  #     every 4 minutes.\n  #\n  # ${1} = port   e.g. 9077 or 9099\n  # ${2} = label  e.g. solr7 or solr9\n  # ${3} = data dir  e.g. /var/solr7/data\n  local _port=\"${1}\" _label=\"${2}\" _datadir=\"${3}\"\n\n  [ -d \"${_datadir}\" ] || return\n\n  if ! command -v python3 &>/dev/null; then\n    echo \"OPTIMIZE-SKIP: python3 not available\"\n    return\n  fi\n\n  local _api=\"http://127.0.0.1:${_port}/solr/admin/cores?action=STATUS&wt=json\"\n  local _json\n  _json=$(curl -s --max-time 15 \"${_api}\" 2>/dev/null)\n  if [ -z \"${_json}\" ]; then\n    echo \"OPTIMIZE-ERROR: no response from ${_label} on port ${_port}\"\n    return\n  fi\n\n  local _tmpjson\n  _tmpjson=$(mktemp /tmp/solr_optimize_XXXXXX.json)\n  echo \"${_json}\" > \"${_tmpjson}\"\n\n  echo \"=== Index optimize check ${_label} port=${_port} ===\"\n\n  # Extract core names and their deleted doc ratios from the STATUS response,\n  # then decide per-core action. 
Output format: <action> <corename>\n  # where action is: skip | expunge | optimize\n  local _decisions\n  _decisions=$(python3 - \"${_tmpjson}\" \\\n    \"${_OPTIMIZE_DEL_PCT_THRESHOLD}\" \"${_OPTIMIZE_FULL_THRESHOLD}\" <<'PYEOF'\nimport json, sys\n\ntry:\n    with open(sys.argv[1]) as f:\n        data = json.load(f)\nexcept Exception as e:\n    print(f\"# parse error: {e}\", file=sys.stderr)\n    sys.exit(0)\n\nexpunge_threshold = float(sys.argv[2])\nfull_threshold    = float(sys.argv[3])\n\ncores = data.get(\"status\", {})\nfor name, info in sorted(cores.items()):\n    index    = info.get(\"index\", {})\n    num_docs = index.get(\"numDocs\", 0)\n    max_doc  = index.get(\"maxDoc\", 0)\n    deleted  = max_doc - num_docs\n    del_pct  = (deleted / max_doc * 100) if max_doc > 0 else 0\n    size_mb  = index.get(\"sizeInBytes\", 0) / 1048576\n\n    if del_pct >= full_threshold:\n        print(f\"optimize {name} {del_pct:.1f} {size_mb:.0f}\")\n    elif del_pct >= expunge_threshold:\n        print(f\"expunge {name} {del_pct:.1f} {size_mb:.0f}\")\n    else:\n        print(f\"skip {name} {del_pct:.1f} {size_mb:.0f}\")\nPYEOF\n)\n\n  rm -f \"${_tmpjson}\"\n\n  local _action _coreName _delPct _sizeMb _isProtected\n  while read -r _action _coreName _delPct _sizeMb; do\n    [ -z \"${_coreName}\" ] && continue\n\n    # Check protection sentinel\n    _isProtected=0\n    [ -e \"${_datadir}/${_coreName}/conf/.protected.conf\" ] && _isProtected=1\n\n    case \"${_action}\" in\n      skip)\n        echo \"OPTIMIZE-OK: ${_coreName} deleted=${_delPct}% — no action needed\"\n        ;;\n      expunge)\n        echo \"OPTIMIZE-EXPUNGE: ${_coreName} deleted=${_delPct}% size=${_sizeMb}MB — running expungeDeletes\"\n        curl -s --max-time 30 \\\n          \"http://127.0.0.1:${_port}/solr/${_coreName}/update?expungeDeletes=true&waitFlush=false\" \\\n          > /dev/null 2>&1 || true\n        ;;\n      optimize)\n        if [ \"${_isProtected}\" -eq 1 ]; then\n          # Protected 
core: TieredMergePolicy may already handle this.\n          # Run expungeDeletes only — do not force a full merge that could\n          # interfere with a custom merge policy tuned for this workload.\n          echo \"OPTIMIZE-EXPUNGE: ${_coreName} deleted=${_delPct}% size=${_sizeMb}MB (protected) — expungeDeletes only\"\n          curl -s --max-time 30 \\\n            \"http://127.0.0.1:${_port}/solr/${_coreName}/update?expungeDeletes=true&waitFlush=false\" \\\n            > /dev/null 2>&1 || true\n        else\n          echo \"OPTIMIZE-FULL: ${_coreName} deleted=${_delPct}% size=${_sizeMb}MB — running full optimize (background)\"\n          curl -s --max-time 30 \\\n            \"http://127.0.0.1:${_port}/solr/${_coreName}/update?optimize=true&maxSegments=1&waitFlush=false\" \\\n            > /dev/null 2>&1 || true\n        fi\n        ;;\n    esac\n  done <<< \"${_decisions}\"\n\n  echo \"=== Index optimize check end ===\"\n}\n\n_run_optimize_if_due() {\n  # Throttle wrapper for _optimize_solr_cores.\n  # Runs at most once every _OPTIMIZE_INTERVAL_HOURS hours using a sentinel\n  # file, then calls per-instance optimize checks.\n  [ \"${_protectedRun}\" = \"TRUE\" ] && return\n\n  local _sentinel=\"/var/backups/solr/.optimize_last_run.pid\"\n  mkdir -p /var/backups/solr\n\n  if [ -e \"${_sentinel}\" ]; then\n    local _sentinelFresh\n    _sentinelFresh=$(find \"${_sentinel}\" \\\n      -mmin \"-$(( _OPTIMIZE_INTERVAL_HOURS * 60 ))\" 2>/dev/null)\n    if [ -n \"${_sentinelFresh}\" ]; then\n      echo \"Index optimize skipped — last run less than ${_OPTIMIZE_INTERVAL_HOURS}h ago\"\n      return\n    fi\n  fi\n\n  echo \"=== Index optimize start (interval=${_OPTIMIZE_INTERVAL_HOURS}h expunge>=${_OPTIMIZE_DEL_PCT_THRESHOLD}% full>=${_OPTIMIZE_FULL_THRESHOLD}%) ===\"\n\n  if [ -x \"/etc/init.d/solr7\" ] && [ -d \"/var/solr7/data\" ]; then\n    _optimize_solr_cores \"9077\" \"solr7\" \"/var/solr7/data\"\n  fi\n\n  if [ -x \"/etc/init.d/solr9\" ] && [ -d 
\"/var/solr9/data\" ]; then\n    _optimize_solr_cores \"9099\" \"solr9\" \"/var/solr9/data\"\n  fi\n\n  touch \"${_sentinel}\"\n  echo \"=== Index optimize end ===\"\n}\n\n# Thresholds and interval — tune here without touching function logic.\n#\n# _OPTIMIZE_DEL_PCT_THRESHOLD: deleted doc % at which expungeDeletes fires.\n#   Lucene's default merge policy tolerates ~10% before merging naturally.\n#   20% is a safe threshold that catches accumulation without over-merging.\n#\n# _OPTIMIZE_FULL_THRESHOLD: deleted doc % at which a full optimize fires.\n#   Only triggered on genuinely problematic indexes. 30% chosen because at\n#   this level merge policy is clearly not keeping up with write rate.\n#\n# _OPTIMIZE_INTERVAL_HOURS: minimum hours between optimize runs.\n#   12h means at most 2 runs per day regardless of how often the script\n#   is invoked. Set to 6 for more aggressive maintenance.\n_OPTIMIZE_DEL_PCT_THRESHOLD=20\n_OPTIMIZE_FULL_THRESHOLD=30\n_OPTIMIZE_INTERVAL_HOURS=12\n\n_start_up() {\n  _fix_solr9_cnf\n  _fix_solr7_cnf\n\n  _solr_cnf_dirs=(\n    \"search_api_solr/solr7_drupal7\"\n    \"search_api_solr/solr7_drupal8\"\n    \"search_api_solr/solr7_drupal9\"\n    \"search_api_solr/solr7_drupal10\"\n    \"search_api_solr/solr9_drupal10\"\n    \"apachesolr/solr4_drupal6\"\n    \"apachesolr/solr4_drupal7\"\n  )\n\n  for dir in \"${_solr_cnf_dirs[@]}\"; do\n    _sync_solr_config \"$dir\"\n  done\n\n  for _usEr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n    _count_cpu\n    _load_control\n    if [ -e \"${_usEr}/config/server_master/nginx/vhost.d\" ] \\\n      && [ ! -e \"${_usEr}/log/proxied.pid\" ] \\\n      && [ ! 
-e \"${_usEr}/log/CANCELLED\" ]; then\n      if (( $(echo \"${_O_LOAD} < ${_O_LOAD_MAX}\" | bc -l) )); then\n        _HM_U=$(echo ${_usEr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n        _THIS_HM_SITE=$(cat ${_usEr}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"site_path'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n        echo \"User ${_usEr}\"\n        mkdir -p ${_usEr}/log/ctrl\n        # shellcheck disable=SC1091\n        [ -e \"/root/.${_HM_U}.octopus.cnf\" ] && source /root/.${_HM_U}.octopus.cnf\n        _check_sites_list\n      fi\n    fi\n  done\n\n  # Orphan cleanup runs AFTER _check_sites_list so that any core legitimately\n  # recreated by _add_solr in this same run is already on disk and registered\n  # before we evaluate what qualifies as an orphan. Running before\n  # _check_sites_list caused a race where a managed core with a stale index\n  # was archived and then immediately recreated empty by _add_solr.\n  _cleanup_orphan_cores\n  [ -x \"/etc/init.d/solr7\" ] && _check_solr_core_health \"9077\" \"solr7\"\n  [ -x \"/etc/init.d/solr9\" ] && _check_solr_core_health \"9099\" \"solr9\"\n  _run_optimize_if_due\n}\n\n_is_protected_run() {\n  _protectedRun=FALSE\n  _optBin=\"/opt/local/bin\"\n  _boaBins=\"autoinit automini barracuda boa octopus\"\n  for _cbn in ${_boaBins}; do\n    if [ -e \"${_optBin}/${_cbn}\" ]; then\n      _CNT=$(pgrep -fc /local/bin/${_cbn})\n      if (( _CNT > 0 )); then\n        echo \"The ${_cbn} is running!\"\n        _protectedRun=TRUE\n      fi\n    fi\n  done\n  [ -e \"/run/octopus_install_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_wait.pid\" ] && _protectedRun=TRUE\n}\n_is_protected_run\n\nif [ \"${_protectedRun}\" = \"FALSE\" ]; then\n  _NOW=$(date +%y%m%d-%H%M%S)\n  _NOW=${_NOW//[^0-9-]/}\n  [ -d 
\"/var/backups/solr/log\" ] || mkdir -p /var/backups/solr/log\n  find /var/backups/solr/*/* -mtime +0 -type f -exec rm -rf {} \\; &> /dev/null\n  _start_up >/var/backups/solr/log/solr-${_NOW}.log 2>&1\nfi\n\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/minute.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/oom.incident.log\"\n_oldOml=\"/var/log/boa/oom.incident.old.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n[ ! -d \"/var/xdrago/monitor/log\" ] && mkdir -p /var/xdrago/monitor/log\n\nif [ -e \"${_pthOml}\" ] && [ ! -e \"${_oldOml}\" ]; then\n  mv -f ${_pthOml} ${_oldOml}\nfi\n\n_second_flood_guard() {\n  _thisCountSec=$(pgrep -fc /var/xdrago/second.sh)\n  if [ \"${_thisCountSec}\" -gt 4 ]; then\n    echo \"$(date) Too many ${_thisCountSec} second.sh processes killed\" >> \\\n      /var/log/boa/sec-count.kill.log\n    pkill -9 -f second.sh\n  fi\n}\n\n# Protect from high load due to csf loop/flood\n_csf_flood_guard() {\n  _thisCountCsf=$(pgrep -fc /csf)\n  if [ ! 
-e \"/run/boa_run.pid\" ] && [ ${_thisCountCsf} -gt 4 ]; then\n    echo \"$(date) Too many ${_thisCountCsf} csf processes killed\" >> \\\n      /var/log/boa/csf-count.kill.log\n    pkill -9 -f csf\n    csf -tf\n    wait\n    csf -df\n    wait\n  fi\n  _thisCountFire=$(pgrep -fc /var/xdrago/guest-fire.sh)\n  if [ ! -e \"/run/boa_run.pid\" ] && [ ${_thisCountFire} -gt 9 ]; then\n    echo \"$(date) Too many ${_thisCountFire} fire.sh processes killed and rules purged\" >> \\\n      /var/log/boa/fire-purge.kill.log\n    csf -tf\n    wait\n    csf -df\n    wait\n    pkill -9 -f fire.sh\n  elif [ ! -e \"/run/boa_run.pid\" ] && [ ${_thisCountFire} -gt 7 ]; then\n    echo \"$(date) Too many ${_thisCountFire} fire.sh processes killed\" >> \\\n      /var/log/boa/fire-count.kill.log\n    csf -tf\n    wait\n    pkill -9 -f fire.sh\n  fi\n  [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n}\n\n_launch_auto_healing() {\n  nohup /var/xdrago/monitor/check/system.sh > /dev/null 2>&1 &\n  nohup /var/xdrago/monitor/check/unbound.sh > /dev/null 2>&1 &\n  if [ -e \"/etc/init.d/valkey-server\" ]; then\n    nohup /var/xdrago/monitor/check/valkey.sh > /dev/null 2>&1 &\n  elif [ -e \"/etc/init.d/redis-server\" ]; then\n    nohup /var/xdrago/monitor/check/redis.sh > /dev/null 2>&1 &\n  fi\n  nohup /var/xdrago/monitor/check/mysql.sh > /dev/null 2>&1 &\n  nohup /var/xdrago/monitor/check/php.sh > /dev/null 2>&1 &\n  nohup /var/xdrago/monitor/check/nginx.sh > /dev/null 2>&1 &\n  nohup /var/xdrago/monitor/check/nginx_guard.sh > /dev/null 2>&1 &\n  nohup /var/xdrago/monitor/check/java.sh > /dev/null 2>&1 &\n}\n\n[ ! 
-d \"/var/log/boa\" ] && mkdir -p /var/log/boa\n[ -e \"/var/log/sec-count.kill.log\" ] && mv -f /var/log/sec-count.kill.log /var/log/boa/\n[ -e \"/var/log/csf-count.kill.log\" ] && mv -f /var/log/csf-count.kill.log /var/log/boa/\n[ -e \"/var/log/fire-purge.kill.log\" ] && mv -f /var/log/fire-purge.kill.log /var/log/boa/\n[ -e \"/var/log/fire-count.kill.log\" ] && mv -f /var/log/fire-count.kill.log /var/log/boa/\n\n[ ! -e \"/run/boa_run.pid\" ] && _second_flood_guard\n[ -x \"/usr/sbin/csf\" ] && [ ! -e \"/run/water.pid\" ] && _csf_flood_guard\n\n# Main execution\nfor _iteration in {1..9}; do\n  echo \"----------------------------\"\n  echo \"Iteration ${_iteration}:\"\n  _launch_auto_healing\n  sleep 5\ndone\n\necho DONE!\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/escapecheck.pl",
    "content": "#!/usr/bin/perl\n\n### TODO - rewrite this legacy script in bash\n\n$ENV{'HOME'} = '/root';\n$ENV{'PATH'} = '/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';\n\nuse warnings;\nuse File::Spec;\n\n$| = 1;\n$status=\"CLEAN\";\n$server=`uname -n`;\nchomp($server);\n$date_is=`date +%Y-%m-%d`;\nchomp($date_is);\n$time_is=`date +%H:%M`;\nchomp($time_is);\n$now_is=\"$date_is $time_is\";\nchomp($now_is);\n$logfile=\"/var/log/boa/last-shell-escape-log\";\n`rm -f $logfile`;\n&makeactions;\nif ($status ne \"CLEAN\") {\n  my $email;\n  if (open my $fh, '<', '/root/.barracuda.cnf') {\n    while (<$fh>) {\n      if (/^\\s*_MY_EMAIL\\s*=\\s*(\\S+)/) {\n        $email = $1;\n        last;\n      }\n    }\n    close $fh;\n  }\n  $email =~ s/\\\\+@/@/g;\n  $mailx_test=`s-nail -V 2>&1`;\n  if ($email && $mailx_test =~ /(built for Linux)/i) {\n    if ($status ne \"CLEAN\") {\n      system('s-nail',\n        '-s', \"Shell Escape Alert [$server] $now_is\",\n        $email,\n        '<', $logfile);\n    }\n  }\n}\nexit;\n\n#############################################################################\nsub makeactions\n{\nlocal(@MYARR)=`grep -i forbidden /var/log/lsh/*.log | tail --lines=999 2>&1`;\n  foreach $line (@MYARR) {\n    $line =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,\\=]//g;\n    if ($line =~ /(syntax|path|command)/i || ($line =~ /(shell escape)/i && $line !~ /exit/i)) {\n      if ($line =~ /(var\\/log\\/lsh)/i) {\n        ($log, $line) = split(/.log:/,$line);\n      }\n      local($DATEQ, $TIMEQ, $rest) = split(/\\s+/,$line);\n      local($TIMEX, $rest) = split(/\\,/,$TIMEQ);\n      chomp($DATEQ);\n      chomp($TIMEX);\n      chomp($line);\n      $TIMEX =~ s/[^0-9\\:]//g;\n      if ($TIMEX =~ /^[0-9]/) {\n        local($HOUR, $MIN, $SEC) = split(/:/,$TIMEX);\n        $log_is=\"$DATEQ $HOUR:$MIN\";\n        if ($now_is eq $log_is) {\n          $status=\"ERROR\";\n          `echo \"$line\" >> $logfile`;\n          `echo 
\"[$now_is]:[$log_is]:[$line]\" >> /var/log/boa/last-shell-escape-y-problem`;\n        }\n#         else {\n#           `echo \"[$now_is]:[$log_is]\" >> /var/log/boa/last-shell-escape-n-problem`;\n#         }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/escapecheck.sh",
    "content": "#!/bin/bash\n\n# =============================================================================\n# escapecheck.sh — lsh shell escape monitor for BOA\n# Replaces: escapecheck.pl\n#\n# Scans /var/log/lsh/*.log for forbidden command attempts in the current or\n# previous minute, logs matches, and emails an alert via s-nail if any found.\n#\n# Improvements over the Perl original:\n#   - Accepts current AND previous minute (closes cron timing race window)\n#   - Fixes s-nail invocation: Perl used system(list) which passed '<' and the\n#     logfile path as literal arguments to s-nail rather than shell redirection,\n#     so the email body was always empty; bash uses proper stdin redirection\n#   - noclobber lock prevents overlapping cron runs\n#   - _MY_EMAIL unescaping (\\+ -> +) preserved from original\n#\n# Requires: bash >= 4.2, s-nail (for email alerts)\n# =============================================================================\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\n\n# -----------------------------------------------------------------------------\n# Configuration\n# -----------------------------------------------------------------------------\n\nreadonly _BOA_LOG_DIR=\"/var/log/boa\"\nreadonly _LSH_LOG_DIR=\"/var/log/lsh\"\nreadonly _LOGFILE=\"${_BOA_LOG_DIR}/last-shell-escape-log\"\nreadonly _DEBUG_LOG=\"${_BOA_LOG_DIR}/last-shell-escape-y-problem\"\nreadonly _BARRACUDA_CNF=\"/root/.barracuda.cnf\"\nreadonly _LOCK_FILE=\"/var/run/escapecheck.lock\"\n\n_SERVER=$(uname -n)\nreadonly _SERVER\n\n# -----------------------------------------------------------------------------\n# Lock\n# -----------------------------------------------------------------------------\n\n_acquire_lock() {\n  if ! 
( set -o noclobber; echo \"$$\" > \"${_LOCK_FILE}\" ) 2>/dev/null; then\n    local _existing_pid\n    _existing_pid=$(cat \"${_LOCK_FILE}\" 2>/dev/null)\n    if kill -0 \"${_existing_pid}\" 2>/dev/null; then\n      exit 0\n    else\n      echo \"$$\" > \"${_LOCK_FILE}\"\n    fi\n  fi\n  trap 'rm -f \"${_LOCK_FILE}\"' EXIT INT TERM\n}\n\n# -----------------------------------------------------------------------------\n# Timestamp helpers\n#\n# lsh log format: \"YYYY-MM-DD HH:MM:SS,mmm [pid] FORBIDDEN: ...\"\n# Comparator:     \"YYYY-MM-DD HH:MM\"\n#\n# Unlike auth.log this is a single fixed format — no dual-format detection\n# needed.  Current AND previous minute accepted (same cron race fix as\n# hackcheck.sh / hackftp.sh).\n# -----------------------------------------------------------------------------\n\n_NOW_IS=\"\"   # \"YYYY-MM-DD HH:MM\"\n_PREV_IS=\"\"  # \"YYYY-MM-DD HH:MM\"  (one minute ago)\n\n_set_timestamps() {\n  _NOW_IS=$(date +%Y-%m-%d\\ %H:%M)\n  _PREV_IS=$(date -d '1 minute ago' +%Y-%m-%d\\ %H:%M)\n}\n\n# Extract \"YYYY-MM-DD HH:MM\" from an lsh log line and check against window.\n# lsh line fields: DATE TIME,ms rest...\n#   field1: 2026-05-10\n#   field2: 12:35:28,123   → strip comma and anything after it → 12:35:28\n#           → strip non-numeric/colon  → keep HH:MM part\n_line_is_recent() {\n  local _line=\"${1}\"\n  local _dateq _timeq _timex _hour _min\n\n  read -r _dateq _timeq _ <<< \"${_line}\"\n\n  # Strip milliseconds: \"12:35:28,123\" -> \"12:35:28\"\n  _timex=\"${_timeq%%,*}\"\n  # Strip anything non-numeric or non-colon (mirrors Perl $TIMEX =~ s/[^0-9\\:]//g)\n  _timex=\"${_timex//[^0-9:]/}\"\n  [[ \"${_timex}\" =~ ^[0-9] ]] || return 1\n\n  _hour=\"${_timex%%:*}\"\n  _min=\"${_timex#*:}\"; _min=\"${_min%%:*}\"\n\n  local _log_is=\"${_dateq} ${_hour}:${_min}\"\n  [[ \"${_log_is}\" == \"${_NOW_IS}\" || \"${_log_is}\" == \"${_PREV_IS}\" ]]\n}\n\n# -----------------------------------------------------------------------------\n# Email 
helper\n#\n# Reads _MY_EMAIL from .barracuda.cnf (POSIX-class port of the Perl regex;\n# bash ERE has no \\s/\\S shorthands, so [[:space:]] classes are used instead).\n# Unescapes \\+ -> + in the address (BOA config convention).\n# Checks s-nail is available before attempting to send.\n#\n# FIX: Perl original used system('s-nail', '-s', subject, email, '<', logfile)\n# which passes '<' as a literal argument (system(list) bypasses the shell),\n# making the email body always empty.  Correct form is stdin redirection.\n# -----------------------------------------------------------------------------\n\n_send_alert() {\n  local _email=\"\" _line\n\n  if [[ -r \"${_BARRACUDA_CNF}\" ]]; then\n    while IFS= read -r _line; do\n      if [[ \"${_line}\" =~ ^[[:space:]]*_MY_EMAIL[[:space:]]*=[[:space:]]*([^[:space:]]+) ]]; then\n        _email=\"${BASH_REMATCH[1]}\"\n        break\n      fi\n    done < \"${_BARRACUDA_CNF}\"\n  fi\n\n  # Unescape \\+ -> + (BOA stores addresses as user\\+tag@domain in some configs)\n  _email=\"${_email//\\\\+/+}\"\n\n  [[ -n \"${_email}\" ]] || return 0\n  [[ -s \"${_LOGFILE}\" ]] || return 0\n\n  # Check s-nail is present and built for Linux (mirrors Perl $mailx_test check)\n  local _mailx_test\n  _mailx_test=$(s-nail -V 2>&1)\n  if [[ \"${_mailx_test,,}\" =~ \"built for linux\" ]]; then\n    s-nail -s \"Shell Escape Alert [${_SERVER}] ${_NOW_IS}\" \\\n           \"${_email}\" \\\n           < \"${_LOGFILE}\"\n  fi\n}\n\n# -----------------------------------------------------------------------------\n# Main detection logic  (replaces makeactions)\n# -----------------------------------------------------------------------------\n\n_makeactions() {\n  mkdir -p \"${_BOA_LOG_DIR}\"\n\n  # Fresh logfile each run — matches Perl's rm -f $logfile at startup\n  rm -f \"${_LOGFILE}\"\n\n  _set_timestamps\n\n  local _status=\"CLEAN\"\n\n  # grep across all lsh log files for \"forbidden\" entries, last 999 lines total\n  # (mirrors Perl: grep -i forbidden /var/log/lsh/*.log | tail --lines=999)\n  while IFS= read -r _line; do\n    # Sanitise — strip chars outside safe set 
(mirrors Perl regex, adds = for lsh;\n    # the / is backslash-escaped so it does not end the ${_line//.../} pattern)\n    _line=\"${_line//[^a-zA-Z0-9: $'\\t'\\/@_()*\\[\\].,\\-=]/}\"\n\n    # Strip the \"filename.log:\" prefix that grep -H adds when globbing\n    # Perl did: ($log, $line) = split(/.log:/, $line)  when line contains var/log/lsh\n    if [[ \"${_line}\" =~ var/log/lsh ]]; then\n      _line=\"${_line#*.log:}\"\n      # Trim leading whitespace left after stripping prefix\n      _line=\"${_line#\"${_line%%[![:space:]]*}\"}\"\n    fi\n\n    # Pattern filter (mirrors Perl if/elsif):\n    #   match: syntax|path|command\n    #   OR: \"shell escape\" present AND \"exit\" absent\n    local _match=0\n    if [[ \"${_line,,}\" =~ syntax|path|command ]]; then\n      _match=1\n    elif [[ \"${_line,,}\" =~ \"shell escape\" && ! \"${_line,,}\" =~ exit ]]; then\n      _match=1\n    fi\n    (( _match )) || continue\n\n    # Timestamp filter\n    _line_is_recent \"${_line}\" || continue\n\n    _status=\"ERROR\"\n    echo \"${_line}\" >> \"${_LOGFILE}\"\n    echo \"[${_NOW_IS}]:[$(date +%Y-%m-%d\\ %H:%M)]:[${_line}]\" >> \"${_DEBUG_LOG}\"\n\n  done < <(grep -i forbidden \"${_LSH_LOG_DIR}\"/*.log 2>/dev/null | tail --lines=999)\n\n  if [[ \"${_status}\" != \"CLEAN\" ]]; then\n    _send_alert\n  fi\n}\n\n# -----------------------------------------------------------------------------\n# Entry point\n# -----------------------------------------------------------------------------\n\n_acquire_lock\n_makeactions\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/hackcheck.pl",
    "content": "#!/usr/bin/perl\n\n### TODO - rewrite this legacy script in bash\n\n$ENV{'HOME'} = '/root';\n$ENV{'PATH'} = '/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';\n\n###\n### This is an auth abuse monitor for ssh.\n###\n\nuse warnings;\nuse File::Spec;\n\n$| = 1;\n$this_filename = \"hackcheck\";\n$times=`date +%y%m%d-%H%M%S`;\nchomp($times);\n$now_is=`date +%b:%d:%H:%M`;\nchomp($now_is);\n$timestamp=\"OLD\";\n&makeactions;\nprint \"CONTROL complete\\n\";\nexit;\n\n#############################################################################\nsub makeactions\n{\n  if (-e \"/var/xdrago/monitor/log/ssh.log\") {\n    $this_path = \"/var/xdrago/monitor/log/ssh.log\";\n    open (NOT,\"<$this_path\");\n    @banetable = <NOT>;\n    close (NOT);\n  }\n  local(@MYARR)=`tail --lines=9999 /var/log/auth.log 2>&1`;\n  local($sumar) = 0;\n  local($maxnumber) = 0;\n  foreach $line (@MYARR) {\n    $line =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,]//g;\n    if ($line =~ /(Failed password for root)/i) {\n      &verify_timestamp;\n      if ($timestamp eq \"NEW\") {\n        local($a, $b, $c, $d, $e, $f, $g, $h, $i, $j, $VISITOR, $rest) = split(/\\s+/,$line);\n        $VISITOR =~ s/[^0-9\\.]//g;\n        if ($VISITOR =~ /^([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})$/) {\n          chomp($line);\n          $li_cnt{$VISITOR}++;\n        }\n      }\n    }\n    elsif ($line =~ /(Failed password for invalid user)/i) {\n      &verify_timestamp;\n      if ($timestamp eq \"NEW\") {\n        local($a, $b, $c, $d, $e, $f, $g, $h, $i, $j, $k, $l, $VISITOR, $rest) = split(/\\s+/,$line);\n        $VISITOR =~ s/[^0-9\\.]//g;\n        if ($VISITOR =~ /^([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})$/) {\n          chomp($line);\n          $li_cnt{$VISITOR}++;\n        }\n      }\n    }\n    elsif ($line =~ /(Failed password for)/i && $line !~ /(invalid user)/i) {\n      &verify_timestamp;\n      if ($timestamp eq \"NEW\") 
{\n        local($a, $b, $c, $d, $e, $f, $g, $h, $i, $j, $VISITOR, $rest) = split(/\\s+/,$line);\n        $VISITOR =~ s/[^0-9\\.]//g;\n        if ($VISITOR =~ /^([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})$/) {\n          chomp($line);\n          $li_cnt{$VISITOR}++;\n        }\n      }\n    }\n    elsif ($line =~ /(Received disconnect)/i && $line !~ /(disconnected by user)/i) {\n      &verify_timestamp;\n      if ($timestamp eq \"NEW\") {\n        local($a, $b, $c, $d, $e, $f, $g, $h, $VISITOR, $rest) = split(/\\s+/,$line);\n        $VISITOR =~ s/[^0-9\\.]//g;\n        if ($VISITOR =~ /^([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})$/) {\n          chomp($line);\n          $li_cnt{$VISITOR}++;\n        }\n      }\n    }\n  }\n  foreach $VISITOR (sort keys %li_cnt) {\n    $sumar = $sumar + $li_cnt{$VISITOR};\n    local($thissumar) = $li_cnt{$VISITOR};\n    local($blocked) = 0;\n    &check_ip($VISITOR);\n    if ($thissumar > $maxnumber) {\n      if (!$blocked) {\n        `echo \"$VISITOR # [x$thissumar] $times\" >> /var/xdrago/monitor/log/ssh.log`;\n        `echo \"$VISITOR # [x$thissumar] $times\" >> /var/xdrago/monitor/log/$this_filename.archive.log`;\n        if (-e \"/etc/csf/csf.deny\" && -e \"/usr/sbin/csf\" && !-e \"/var/xdrago/guest-fire.sh\") {\n          `/usr/sbin/csf -td $VISITOR 3600 -p 22`;\n        }\n      }\n    }\n  }\n  print \"\\n===[$sumar]\\tGLOBAL===\\n\\n\";\n  undef (%li_cnt);\n}\n\n#############################################################################\nsub verify_timestamp\n{\n  local($MONTX, $DAYX, $TIMEX, $rest) = split(/\\s+/,$line);\n  if ($DAYX =~ /^\\s+/) {\n    $DAYX =~ s/[^0-9]//g;\n  }\n  if ($DAYX !~ /^0/ && $DAYX !~ /[0-9]{2}/) {\n    $DAYX = \"0$DAYX\";\n  }\n  chomp($TIMEX);\n  $TIMEX =~ s/[^0-9\\:]//g;\n  if ($TIMEX =~ /^[0-9]/) {\n    local($HOUR, $MIN, $SEC) = split(/:/,$TIMEX);\n    $log_is=\"$MONTX:$DAYX:$HOUR:$MIN\";\n    if ($now_is eq $log_is) {\n      $timestamp=\"NEW\";\n      
chomp($line);\n      print \"===NEW\\t[$now_is]\\t[$log_is]\\t[$line]===\\n\";\n    }\n    else {\n      chomp($line);\n      print \"===OLD\\t[$now_is]\\t[$log_is]\\t[$line]===\\n\";\n    }\n  }\n}\n\n#############################################################################\nsub check_ip\n{\n  local($IP) = @_;\n  if (-e \"/var/xdrago/monitor/log/ssh.log\") {\n    foreach $banerecord (@banetable) {\n      chomp ($banerecord);\n      local($ifbanned, $rest) = split(/\\s+/,$banerecord);\n      $ifbanned =~ s/[^0-9\\.]//g;\n      if ($ifbanned eq $IP) {\n        $blocked = 1;\n        last;\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/hackcheck.sh",
    "content": "#!/bin/bash\n\n# =============================================================================\n# hackcheck.sh — SSH auth abuse monitor for BOA\n# Replaces: hackcheck.pl\n#\n# Improvements over the Perl original:\n#   - Handles both classic syslog and ISO 8601 timestamps (Debian 12+ rsyslog)\n#   - Accepts current AND previous minute to close the cron timing race window\n#   - _already_logged() check prevents duplicate log/ban entries within TTL\n#   - noclobber lock prevents overlapping cron runs\n#   - Rolling log recycled after _BAN_SECONDS to re-arm expired bans\n#   - IP extracted via grep -oE (format-agnostic, not fixed field position)\n#\n# Requires: bash >= 4.2, csf\n# =============================================================================\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\n\n# -----------------------------------------------------------------------------\n# Configuration\n# -----------------------------------------------------------------------------\n\nreadonly _LOG_DIR=\"/var/xdrago/monitor/log\"\nreadonly _SSH_LOG=\"${_LOG_DIR}/ssh.log\"\nreadonly _SSH_ARCHIVE=\"${_LOG_DIR}/hackcheck.archive.log\"\nreadonly _LOCK_FILE=\"/var/run/hackcheck.lock\"\nreadonly _MAINTENANCE_FLAG=\"/var/xdrago/guest-fire.sh\"\nreadonly _CSF=\"/usr/sbin/csf\"\n\n# Must match the TTL passed to csf -td and the */N recycle cron if used.\n# BOA default: 900 (guest-fire.sh handles re-enforcement at 900s).\nreadonly _BAN_SECONDS=900\n\n\n# -----------------------------------------------------------------------------\n# Lock\n# -----------------------------------------------------------------------------\n\n_acquire_lock() {\n  if ! 
( set -o noclobber; echo \"$$\" > \"${_LOCK_FILE}\" ) 2>/dev/null; then\n    local _existing_pid\n    _existing_pid=$(cat \"${_LOCK_FILE}\" 2>/dev/null)\n    if kill -0 \"${_existing_pid}\" 2>/dev/null; then\n      exit 0\n    else\n      echo \"$$\" > \"${_LOCK_FILE}\"\n    fi\n  fi\n  trap 'rm -f \"${_LOCK_FILE}\"' EXIT INT TERM\n}\n\n# -----------------------------------------------------------------------------\n# Timestamp helpers\n# -----------------------------------------------------------------------------\n\n# Classic rsyslog:  \"May 10 12:35:28 host sshd: msg\"  → comparator \"May:10:12:35\"\n# ISO 8601 rsyslog: \"2026-05-10T12:35:28.612894+02:00 host sshd: msg\"\n#                    → comparator first 16 chars \"2026-05-10T12:35\"\n#\n# Both current AND previous minute accepted — closes the race where cron fires\n# at :01, scan takes 1s, but attacks logged at :28-:59 only appear in auth.log\n# when the NEXT cron run fires (at which point NOW has advanced by one minute).\n\n_NOW_IS=\"\"       # \"Mon:DD:HH:MM\"     classic, current minute\n_PREV_IS=\"\"      # \"Mon:DD:HH:MM\"     classic, previous minute\n_NOW_IS_ISO=\"\"   # \"YYYY-MM-DDTHH:MM\" ISO 8601, current minute\n_PREV_IS_ISO=\"\"  # \"YYYY-MM-DDTHH:MM\" ISO 8601, previous minute\n\n_set_timestamps() {\n  _NOW_IS=$(date +%b:%d:%H:%M)\n  _PREV_IS=$(date -d '1 minute ago' +%b:%d:%H:%M)\n  _NOW_IS_ISO=$(date +%Y-%m-%dT%H:%M)\n  _PREV_IS_ISO=$(date -d '1 minute ago' +%Y-%m-%dT%H:%M)\n}\n\n_line_is_recent() {\n  local _line=\"${1}\"\n  local _first_field\n  read -r _first_field _ <<< \"${_line}\"\n\n  if [[ \"${_first_field}\" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}T ]]; then\n    # ISO 8601 — compare first 16 chars\n    local _line_min=\"${_first_field:0:16}\"\n    [[ \"${_line_min}\" == \"${_NOW_IS_ISO}\" || \"${_line_min}\" == \"${_PREV_IS_ISO}\" ]]\n  else\n    # Classic syslog — reconstruct \"Mon:DD:HH:MM\"\n    local _mont _dayx _timex _hour _min\n    read -r _mont _dayx _timex _ <<< \"${_line}\"\n    
_dayx=\"${_dayx//[^0-9]/}\"\n    (( ${#_dayx} == 1 )) && _dayx=\"0${_dayx}\"\n    _timex=\"${_timex//[^0-9:]/}\"\n    [[ \"${_timex}\" =~ ^[0-9] ]] || return 1\n    _hour=\"${_timex%%:*}\"\n    _min=\"${_timex#*:}\"; _min=\"${_min%%:*}\"\n    local _log_is=\"${_mont}:${_dayx}:${_hour}:${_min}\"\n    [[ \"${_log_is}\" == \"${_NOW_IS}\" || \"${_log_is}\" == \"${_PREV_IS}\" ]]\n  fi\n}\n\n# -----------------------------------------------------------------------------\n# Helpers\n# -----------------------------------------------------------------------------\n\n_LOCAL_IPS=()\n\n_load_local_ips() {\n  local _ip\n  while IFS= read -r _ip; do\n    [[ -n \"${_ip}\" ]] && _LOCAL_IPS+=(\"${_ip}\")\n  done < <(ip -4 addr show | grep -oE 'inet ([0-9]{1,3}\\.){3}[0-9]{1,3}' | awk '{print $2}')\n}\n\n_is_ipv4() {\n  [[ \"${1}\" =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]] || return 1\n  local _local\n  for _local in \"${_LOCAL_IPS[@]}\"; do\n    [[ \"${1}\" == \"${_local}\" ]] && return 1\n  done\n  return 0\n}\n\n_already_logged() {\n  [[ -f \"${_SSH_LOG}\" ]] && grep -qE \"^${1} #\" \"${_SSH_LOG}\"\n}\n\n_csf_ban() {\n  local _ip=\"${1}\"\n  # Maintenance mode: guest-fire.sh presence disables direct CSF bans\n  # (guest-fire.sh itself will handle enforcement)\n  [[ -e \"${_MAINTENANCE_FLAG}\" ]] && return 0\n  [[ -x \"${_CSF}\" && -e \"/etc/csf/csf.deny\" ]] || return 0\n  \"${_CSF}\" -td \"${_ip}\" \"${_BAN_SECONDS}\" -p 22 &>/dev/null\n}\n\n# Recycle rolling log once IPs have served their ban TTL.\n# Prevents the log from permanently suppressing re-bans after expiry.\n# Note: BOA also does this via */N cron rm; this internal check is a safety net\n# that fires first and avoids the duplicate-ban-attempt window in that approach.\n_recycle_log() {\n  [[ -f \"${_SSH_LOG}\" ]] || return 0\n  local _first_line _mark _mark_epoch _age_sec\n  IFS= read -r _first_line < \"${_SSH_LOG}\"\n  _mark=\"${_first_line##* }\"\n  if [[ \"${_mark}\" =~ 
^([0-9]{2})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})$ ]]; then\n    _mark_epoch=$(date -d \"20${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${BASH_REMATCH[4]}:${BASH_REMATCH[5]}:${BASH_REMATCH[6]}\" +%s 2>/dev/null)\n    if [[ -n \"${_mark_epoch}\" ]]; then\n      _age_sec=$(( $(date +%s) - _mark_epoch ))\n      if (( _age_sec >= _BAN_SECONDS )); then\n        rm -f \"${_SSH_LOG}\"\n      fi\n    fi\n  fi\n}\n\n# -----------------------------------------------------------------------------\n# Main detection logic  (replaces makeactions + verify_timestamp + check_ip)\n# -----------------------------------------------------------------------------\n\n_makeactions() {\n  mkdir -p \"${_LOG_DIR}\"\n  _load_local_ips\n  _recycle_log\n  _set_timestamps\n\n  local _mark\n  _mark=$(date +%y%m%d-%H%M%S)\n\n  declare -A _hits=()\n\n  while IFS= read -r _line; do\n    # Sanitise — strip chars outside the safe set (mirrors Perl regex; the / is\n    # backslash-escaped so it does not end the ${_line//.../} pattern)\n    _line=\"${_line//[^a-zA-Z0-9: $'\\t'\\/@_()*\\[\\].,\\-]/}\"\n\n    local _ip=\"\"\n\n    if   [[ \"${_line}\" =~ \"Failed password for root\" ]] || \\\n         [[ \"${_line}\" =~ \"Failed password for invalid user\" ]] || \\\n         [[ \"${_line}\" =~ \"Failed password for\" && ! \"${_line}\" =~ \"invalid user\" ]]; then\n      _line_is_recent \"${_line}\" || continue\n      _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | tail -1)\n\n    elif [[ \"${_line}\" =~ \"Invalid user\" ]]; then\n      # Catches empty-username probes: \"Invalid user  from 1.2.3.4 port N\".\n      # These never produce a \"Failed password\" line, so without this branch\n      # they would slip through undetected.\n      _line_is_recent \"${_line}\" || continue\n      _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | tail -1)\n\n    elif [[ \"${_line}\" =~ \"Received disconnect\" && ! 
\"${_line}\" =~ \"disconnected by user\" ]]; then\n      _line_is_recent \"${_line}\" || continue\n      _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | tail -1)\n\n    elif [[ \"${_line}\" =~ \"Timeout before authentication\" ]]; then\n      # Client connected but never completed auth within LoginGraceTime —\n      # slow scanners or deliberate connection exhaustion.\n      # Line format: \"...from ATTACKER_IP to LOCAL_IP...\" — take first IP match.\n      _line_is_recent \"${_line}\" || continue\n      _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | head -1)\n\n    elif [[ \"${_line}\" =~ \"Connection reset by\" && \"${_line}\" =~ \\[preauth\\] ]]; then\n      # TCP RST during key exchange — masscan-style scanners probing for sshd\n      _line_is_recent \"${_line}\" || continue\n      _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | tail -1)\n\n    elif [[ \"${_line}\" =~ \"banner exchange\" && \"${_line}\" =~ \"invalid format\" ]]; then\n      # SSH protocol scanners that fail to complete the banner handshake\n      _line_is_recent \"${_line}\" || continue\n      _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | tail -1)\n\n    elif [[ \"${_line}\" =~ \"Connection closed by\" && \"${_line}\" =~ \\[preauth\\] && ! \"${_line}\" =~ \"authenticating user\" && ! \"${_line}\" =~ \"invalid user\" && ! 
\"${_line}\" =~ \"disconnected by user\" ]]; then\n      # Port scanners that connect and immediately close with no auth attempt\n      _line_is_recent \"${_line}\" || continue\n      _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | tail -1)\n    fi\n\n    _is_ipv4 \"${_ip}\" || continue\n    (( _hits[\"${_ip}\"]++ )) || true\n\n  done < <(tail --lines=9999 /var/log/auth.log 2>/dev/null)\n\n  local _sumar=0\n  for _ip in \"${!_hits[@]}\"; do\n    local _count=\"${_hits[${_ip}]}\"\n    (( _sumar += _count ))\n\n    # Skip if already logged within current TTL window (prevents duplicate bans)\n    _already_logged \"${_ip}\" && continue\n\n    echo \"${_ip} # [x${_count}] ${_mark}\" >> \"${_SSH_LOG}\"\n    echo \"${_ip} # [x${_count}] ${_mark}\" >> \"${_SSH_ARCHIVE}\"\n    _csf_ban \"${_ip}\"\n  done\n\n  printf \"\\n===[%s]\\tGLOBAL===\\n\\n\" \"${_sumar}\"\n}\n\n# -----------------------------------------------------------------------------\n# Entry point\n# -----------------------------------------------------------------------------\n\n_acquire_lock\n_makeactions\necho \"CONTROL complete\"\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/hackftp.pl",
    "content": "#!/usr/bin/perl\n\n### TODO - rewrite this legacy script in bash\n\n$ENV{'HOME'} = '/root';\n$ENV{'PATH'} = '/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';\n\n###\n### This is an auth abuse monitor for ftp.\n###\n\nuse warnings;\nuse File::Spec;\n\n$| = 1;\n$this_filename = \"hackftp\";\n$times=`date +%y%m%d-%H%M%S`;\nchomp($times);\n$now_is=`date +%b:%d:%H:%M`;\nchomp($now_is);\n$timestamp=\"OLD\";\n&makeactions;\nprint \"CONTROL complete\\n\";\nexit;\n\n#############################################################################\nsub makeactions\n{\n  if (-e \"/var/xdrago/monitor/log/ftp.log\") {\n    $this_path = \"/var/xdrago/monitor/log/ftp.log\";\n    open (NOT,\"<$this_path\");\n    @banetable = <NOT>;\n    close (NOT);\n  }\n  local(@MYARR)=`tail --lines=999 /var/log/messages 2>&1`;\n  local($sumar) = 0;\n  local($maxnumber) = 0;\n  foreach $line (@MYARR) {\n    $line =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,]//g;\n    if ($line =~ /(Authentication failed for user)/i || $line =~ /(Sorry, cleartext sessions are not accepted)/i) {\n      &verify_timestamp;\n      if ($timestamp eq \"NEW\") {\n        local($a, $b, $c, $d, $e, $VISITORX, $rest) = split(/\\s+/,$line);\n        chomp($VISITORX);\n        local($a, $VISITOR) = split(/\\@/,$VISITORX);\n        $VISITOR =~ s/[^0-9\\.]//g;\n        if ($VISITOR =~ /^[0-9]/) {\n          chomp($line);\n          $li_cnt{$VISITOR}++;\n        }\n      }\n    }\n  }\n  foreach $VISITOR (sort keys %li_cnt) {\n    $sumar = $sumar + $li_cnt{$VISITOR};\n    local($thissumar) = $li_cnt{$VISITOR};\n    local($blocked) = 0;\n    &check_ip($VISITOR);\n    if ($thissumar > $maxnumber) {\n      if (!$blocked) {\n        `echo \"$VISITOR # [x$thissumar] $times\" >> /var/xdrago/monitor/log/ftp.log`;\n        `echo \"$VISITOR # [x$thissumar] $times\" >> /var/xdrago/monitor/log/$this_filename.archive.log`;\n        if (-e \"/etc/csf/csf.deny\" && -e \"/usr/sbin/csf\" 
&& !-e \"/var/xdrago/guest-fire.sh\") {\n          `/usr/sbin/csf -td $VISITOR 3600 -p 21`;\n        }\n      }\n    }\n  }\n  print \"\\n===[$sumar]\\tGLOBAL===\\n\\n\";\n  undef (%li_cnt);\n}\n\n#############################################################################\nsub verify_timestamp\n{\n  local($MONTX, $DAYX, $TIMEX, $rest) = split(/\\s+/,$line);\n  if ($DAYX =~ /^\\s+/) {\n    $DAYX =~ s/[^0-9]//g;\n  }\n  if ($DAYX !~ /^0/ && $DAYX !~ /[0-9]{2}/) {\n    $DAYX = \"0$DAYX\";\n  }\n  chomp($TIMEX);\n  $TIMEX =~ s/[^0-9\\:]//g;\n  if ($TIMEX =~ /^[0-9]/) {\n    local($HOUR, $MIN, $SEC) = split(/:/,$TIMEX);\n    $log_is=\"$MONTX:$DAYX:$HOUR:$MIN\";\n    if ($now_is eq $log_is) {\n      $timestamp=\"NEW\";\n      chomp($line);\n      print \"===NEW\\t[$now_is]\\t[$log_is]\\t[$line]===\\n\";\n    }\n    else {\n      chomp($line);\n      print \"===OLD\\t[$now_is]\\t[$log_is]\\t[$line]===\\n\";\n    }\n  }\n}\n\n#############################################################################\nsub check_ip\n{\n  local($IP) = @_;\n  if (-e \"/var/xdrago/monitor/log/ftp.log\") {\n    foreach $banerecord (@banetable) {\n      chomp ($banerecord);\n      local($ifbanned, $rest) = split(/\\s+/,$banerecord);\n      $ifbanned =~ s/[^0-9\\.]//g;\n      if ($ifbanned eq $IP) {\n        $blocked = 1;\n        last;\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/hackftp.sh",
    "content": "#!/bin/bash\n\n# =============================================================================\n# hackftp.sh — FTP auth abuse monitor for BOA\n# Replaces: hackftp.pl\n#\n# Improvements over the Perl original:\n#   - Handles both classic syslog and ISO 8601 timestamps (Debian 12+ rsyslog)\n#   - Accepts current AND previous minute to close the cron timing race window\n#   - _already_logged() check prevents duplicate log/ban entries within TTL\n#   - noclobber lock prevents overlapping cron runs\n#   - Rolling log recycled after _BAN_SECONDS to re-arm expired bans\n#\n# Requires: bash >= 4.2, csf\n# =============================================================================\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin\n\n# -----------------------------------------------------------------------------\n# Configuration\n# -----------------------------------------------------------------------------\n\nreadonly _LOG_DIR=\"/var/xdrago/monitor/log\"\nreadonly _FTP_LOG=\"${_LOG_DIR}/ftp.log\"\nreadonly _FTP_ARCHIVE=\"${_LOG_DIR}/hackftp.archive.log\"\nreadonly _LOCK_FILE=\"/var/run/hackftp.lock\"\nreadonly _MAINTENANCE_FLAG=\"/var/xdrago/guest-fire.sh\"\nreadonly _CSF=\"/usr/sbin/csf\"\n\nreadonly _BAN_SECONDS=3600\n\n# -----------------------------------------------------------------------------\n# Lock\n# -----------------------------------------------------------------------------\n\n_acquire_lock() {\n  if ! 
( set -o noclobber; echo \"$$\" > \"${_LOCK_FILE}\" ) 2>/dev/null; then\n    local _existing_pid\n    _existing_pid=$(cat \"${_LOCK_FILE}\" 2>/dev/null)\n    if kill -0 \"${_existing_pid}\" 2>/dev/null; then\n      exit 0\n    else\n      echo \"$$\" > \"${_LOCK_FILE}\"\n    fi\n  fi\n  trap 'rm -f \"${_LOCK_FILE}\"' EXIT INT TERM\n}\n\n# -----------------------------------------------------------------------------\n# Timestamp helpers  (identical logic to hackcheck.sh — see comments there)\n# -----------------------------------------------------------------------------\n\n_NOW_IS=\"\"\n_PREV_IS=\"\"\n_NOW_IS_ISO=\"\"\n_PREV_IS_ISO=\"\"\n\n_set_timestamps() {\n  _NOW_IS=$(date +%b:%d:%H:%M)\n  _PREV_IS=$(date -d '1 minute ago' +%b:%d:%H:%M)\n  _NOW_IS_ISO=$(date +%Y-%m-%dT%H:%M)\n  _PREV_IS_ISO=$(date -d '1 minute ago' +%Y-%m-%dT%H:%M)\n}\n\n_line_is_recent() {\n  local _line=\"${1}\"\n  local _first_field\n  read -r _first_field _ <<< \"${_line}\"\n\n  if [[ \"${_first_field}\" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}T ]]; then\n    local _line_min=\"${_first_field:0:16}\"\n    [[ \"${_line_min}\" == \"${_NOW_IS_ISO}\" || \"${_line_min}\" == \"${_PREV_IS_ISO}\" ]]\n  else\n    local _mont _dayx _timex _hour _min\n    read -r _mont _dayx _timex _ <<< \"${_line}\"\n    _dayx=\"${_dayx//[^0-9]/}\"\n    (( ${#_dayx} == 1 )) && _dayx=\"0${_dayx}\"\n    _timex=\"${_timex//[^0-9:]/}\"\n    [[ \"${_timex}\" =~ ^[0-9] ]] || return 1\n    _hour=\"${_timex%%:*}\"\n    _min=\"${_timex#*:}\"; _min=\"${_min%%:*}\"\n    local _log_is=\"${_mont}:${_dayx}:${_hour}:${_min}\"\n    [[ \"${_log_is}\" == \"${_NOW_IS}\" || \"${_log_is}\" == \"${_PREV_IS}\" ]]\n  fi\n}\n\n# -----------------------------------------------------------------------------\n# Helpers\n# -----------------------------------------------------------------------------\n\n_is_ipv4() {\n  [[ \"${1}\" =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]\n}\n\n_already_logged() {\n  [[ -f \"${_FTP_LOG}\" ]] && grep -qE \"^${1} #\" 
\"${_FTP_LOG}\"\n}\n\n_csf_ban() {\n  local _ip=\"${1}\"\n  [[ -e \"${_MAINTENANCE_FLAG}\" ]] && return 0\n  [[ -x \"${_CSF}\" && -e \"/etc/csf/csf.deny\" ]] || return 0\n  \"${_CSF}\" -td \"${_ip}\" \"${_BAN_SECONDS}\" -p 21 &>/dev/null\n}\n\n_recycle_log() {\n  [[ -f \"${_FTP_LOG}\" ]] || return 0\n  local _first_line _mark _mark_epoch _age_sec\n  IFS= read -r _first_line < \"${_FTP_LOG}\"\n  _mark=\"${_first_line##* }\"\n  if [[ \"${_mark}\" =~ ^([0-9]{2})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})$ ]]; then\n    _mark_epoch=$(date -d \"20${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${BASH_REMATCH[4]}:${BASH_REMATCH[5]}:${BASH_REMATCH[6]}\" +%s 2>/dev/null)\n    if [[ -n \"${_mark_epoch}\" ]]; then\n      _age_sec=$(( $(date +%s) - _mark_epoch ))\n      if (( _age_sec >= _BAN_SECONDS )); then\n        rm -f \"${_FTP_LOG}\"\n      fi\n    fi\n  fi\n}\n\n# -----------------------------------------------------------------------------\n# Main detection logic  (replaces makeactions + verify_timestamp + check_ip)\n#\n# FTP log format in /var/log/messages:\n#   \"... host proftpd[N]: Authentication failed for user user@1.2.3.4\"\n#   \"... 
host proftpd[N]: Sorry, cleartext sessions are not accepted from user@1.2.3.4\"\n#\n# The Perl original extracts the 6th whitespace field (user@IP) then splits on @.\n# We do the same via positional read, keeping it format-faithful; with ISO 8601\n# timestamps the positions shift by two (three classic timestamp fields collapse\n# into one), so the grep fallback below recovers the IP there.\n# -----------------------------------------------------------------------------\n\n_makeactions() {\n  mkdir -p \"${_LOG_DIR}\"\n  _recycle_log\n  _set_timestamps\n\n  local _mark\n  _mark=$(date +%y%m%d-%H%M%S)\n\n  declare -A _hits=()\n\n  while IFS= read -r _line; do\n    # Sanitise — strip chars outside the safe set (mirrors Perl regex; the / is\n    # backslash-escaped so it does not end the ${_line//.../} pattern)\n    _line=\"${_line//[^a-zA-Z0-9: $'\\t'\\/@_()*\\[\\].,\\-]/}\"\n\n    if [[ \"${_line}\" =~ \"Authentication failed for user\" ]] || \\\n       [[ \"${_line}\" =~ \"Sorry, cleartext sessions are not accepted\" ]]; then\n\n      _line_is_recent \"${_line}\" || continue\n\n      # Extract user@IP field — position differs between classic and ISO syslog:\n      #   Classic:  \"Mon DD HH:MM:SS host proc[N]: ... user@1.2.3.4\"  → field 6\n      #   ISO 8601: \"YYYY-MM-DDTHH:MM:SS+TZ host proc[N]: ...\"  → two fields earlier,\n      # because the three classic timestamp fields collapse into a single ISO field.\n      # The positional read matches the classic layout; ISO lines fall back to grep.\n      local _f1 _f2 _f3 _f4 _f5 _visitorx _rest\n      read -r _f1 _f2 _f3 _f4 _f5 _visitorx _rest <<< \"${_line}\"\n\n      # Strip trailing punctuation, then split on @ to get the IP part\n      _visitorx=\"${_visitorx//[^a-zA-Z0-9.@]/}\"\n      local _ip=\"${_visitorx##*@}\"\n      _ip=\"${_ip//[^0-9.]/}\"\n\n      # Fallback: if field-based extraction didn't yield an IP, grep for it\n      if ! 
_is_ipv4 \"${_ip}\"; then\n        _ip=$(grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' <<< \"${_line}\" | tail -1)\n      fi\n\n      _is_ipv4 \"${_ip}\" || continue\n      (( _hits[\"${_ip}\"]++ )) || true\n    fi\n\n  done < <(tail --lines=999 /var/log/messages 2>/dev/null)\n\n  local _sumar=0\n  for _ip in \"${!_hits[@]}\"; do\n    local _count=\"${_hits[${_ip}]}\"\n    (( _sumar += _count ))\n\n    _already_logged \"${_ip}\" && continue\n\n    echo \"${_ip} # [x${_count}] ${_mark}\" >> \"${_FTP_LOG}\"\n    echo \"${_ip} # [x${_count}] ${_mark}\" >> \"${_FTP_ARCHIVE}\"\n    _csf_ban \"${_ip}\"\n  done\n\n  printf \"\\n===[%s]\\tGLOBAL===\\n\\n\" \"${_sumar}\"\n}\n\n# -----------------------------------------------------------------------------\n# Entry point\n# -----------------------------------------------------------------------------\n\n_acquire_lock\n_makeactions\necho \"CONTROL complete\"\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/java.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/java.incident.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_incident_email_report() {\n  if ! 
_check_uptime_grace_period >/dev/null; then return 1; fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n    s-nail -s \"Incident Report: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_pthOml}\n  fi\n}\n\n_jetty_restart() {\n  touch /run/boa_java_auto_healing.pid\n  sleep 3\n  pkill -9 -f jetty9\n  rm -f /var/log/jetty9/*\n  find /tmp -mindepth 1 -user jetty9 -exec rm -rf {} + 2>/dev/null\n  renice ${_B_NICE} -p $$ &> /dev/null\n  if [ -e \"/etc/default/jetty9\" ] && [ -e \"/etc/init.d/jetty9\" ]; then\n    service jetty9 start\n    wait\n  fi\n  _thisErrLog=\"$(date) Jetty service has been restarted\"\n  echo ${_thisErrLog} >> ${_pthOml}\n  _incident_email_report \"$1\"\n  echo >> ${_pthOml}\n  [ -e \"/run/boa_java_auto_healing.pid\" ] && rm -f /run/boa_java_auto_healing.pid\n  exit 0\n}\n\n_jetty_listen_conflict_detection() {\n  if [ -e \"/var/log/jetty9\" ]; then\n    if [ `tail --lines=500 /var/log/jetty9/*stderrout.log \\\n      | grep --count \"Address already in use\"` -gt 0 ]; then\n      _thisErrLog=\"$(date) BIND PORT error jetty9, service will be restarted\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _jetty_restart \"jetty9 zombie\"\n    fi\n  fi\n}\n\n_jenkins_health_check_fix() {\n  if ! pgrep -f java/jenkins \\\n    || [ ! -e \"/run/jenkins/jenkins.pid\" ]; then\n    pkill -9 -f java\n    sleep 3\n    service jenkins restart\n    wait\n    _thisErrLog=\"$(date) Jenkins Server was down, started\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    _incident_email_report \"Jenkins Server was down, started\"\n    echo >> ${_pthOml}\n  fi\n}\n\n_solr_health_check_fix() {\n  if [ -x \"/etc/init.d/solr9\" ]; then\n    _pidfile=\"/var/solr9/solr-9099.pid\"\n    if ! pgrep -f /var/solr9 || [ ! 
-e \"${_pidfile}\" ]; then\n      find /tmp -mindepth 1 -user solr9 -exec rm -rf {} + 2>/dev/null\n      service solr9 restart\n      wait\n      _thisErrLog=\"$(date) Solr9 Server was down, started\"\n      echo \"${_thisErrLog}\" >> ${_pthOml}\n      _incident_email_report \"Solr9 Server was down, started\"\n      echo >> ${_pthOml}\n    else\n      _pid=\"$(cat \"${_pidfile}\" 2>/dev/null | sed 's/[^0-9]//g')\"\n      if [ -n \"${_pid}\" ] && ! ps -p \"${_pid}\" >/dev/null 2>&1; then\n        find /tmp -mindepth 1 -user solr9 -exec rm -rf {} + 2>/dev/null\n        service solr9 restart\n        wait\n        _thisErrLog=\"$(date) Solr9 stale PID detected, restarted\"\n        echo \"${_thisErrLog}\" >> ${_pthOml}\n        _incident_email_report \"Solr9 stale PID detected, restarted\"\n        echo >> ${_pthOml}\n      fi\n    fi\n  fi\n  if [ -x \"/etc/init.d/solr7\" ]; then\n    _pidfile=\"/var/solr7/solr-9077.pid\"\n    if ! pgrep -f /var/solr7 || [ ! -e \"${_pidfile}\" ]; then\n      find /tmp -mindepth 1 -user solr7 -exec rm -rf {} + 2>/dev/null\n      service solr7 restart\n      wait\n      _thisErrLog=\"$(date) Solr7 Server was down, started\"\n      echo \"${_thisErrLog}\" >> ${_pthOml}\n      _incident_email_report \"Solr7 Server was down, started\"\n      echo >> ${_pthOml}\n    else\n      _pid=\"$(cat \"${_pidfile}\" 2>/dev/null | sed 's/[^0-9]//g')\"\n      if [ -n \"${_pid}\" ] && ! ps -p \"${_pid}\" >/dev/null 2>&1; then\n        find /tmp -mindepth 1 -user solr7 -exec rm -rf {} + 2>/dev/null\n        service solr7 restart\n        wait\n        _thisErrLog=\"$(date) Solr7 stale PID detected, restarted\"\n        echo \"${_thisErrLog}\" >> ${_pthOml}\n        _incident_email_report \"Solr7 stale PID detected, restarted\"\n        echo >> ${_pthOml}\n      fi\n    fi\n  fi\n  if [ -x \"/etc/init.d/jetty9\" ]; then\n    _pidfile=\"/run/jetty9.pid\"\n    if ! pgrep -f /opt/jetty9 || [ ! 
-e \"${_pidfile}\" ]; then\n      find /tmp -mindepth 1 -user jetty9 -exec rm -rf {} + 2>/dev/null\n      service jetty9 restart\n      wait\n      _thisErrLog=\"$(date) Solr4 Server was down, started\"\n      echo \"${_thisErrLog}\" >> ${_pthOml}\n      _incident_email_report \"Solr4 Server was down, started\"\n      echo >> ${_pthOml}\n    else\n      _pid=\"$(cat \"${_pidfile}\" 2>/dev/null | sed 's/[^0-9]//g')\"\n      if [ -n \"${_pid}\" ] && ! ps -p \"${_pid}\" >/dev/null 2>&1; then\n        find /tmp -mindepth 1 -user jetty9 -exec rm -rf {} + 2>/dev/null\n        service jetty9 restart\n        wait\n        _thisErrLog=\"$(date) Solr4 stale PID detected, restarted\"\n        echo \"${_thisErrLog}\" >> ${_pthOml}\n        _incident_email_report \"Solr4 stale PID detected, restarted\"\n        echo >> ${_pthOml}\n      fi\n    fi\n  fi\n}\n\n# Fire-and-forget launcher, cron-safe and interactive-safe\n_spawn_detached() {\n  _cmd=\"$1\"\n  if command -v nohup >/dev/null 2>&1; then\n    nohup bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  elif command -v setsid >/dev/null 2>&1; then\n    setsid bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  else\n    ( bash -c \"${_cmd}\" >/dev/null 2>&1 ) &\n  fi\n  # If interactive shell, drop it from the job table to mimic cron behavior\n  if [[ \"$-\" == *i* ]]; then disown; fi\n}\n\n_is_protected_run() {\n  _protectedRun=FALSE\n  _optBin=\"/opt/local/bin\"\n  _boaBins=\"autoinit automini barracuda boa octopus\"\n  for _cbn in ${_boaBins}; do\n    if [ -e \"${_optBin}/${_cbn}\" ]; then\n      _CNT=$(pgrep -fc /local/bin/${_cbn})\n      if (( _CNT > 0 )); then\n        echo \"The ${_cbn} is running!\"\n        _protectedRun=TRUE\n      fi\n    fi\n  done\n  [ -e \"/run/octopus_install_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_run.pid\" ] && _protectedRun=TRUE\n  [ -e \"/run/boa_wait.pid\" ] && _protectedRun=TRUE\n}\n_is_protected_run\n\nif [ \"${_protectedRun}\" = \"FALSE\" ]; then\n  if [ ! -e \"/run/max_load.pid\" ] && [ ! 
-e \"/run/critical_load.pid\" ]; then\n    [ ! -e \"/run/boa_java_auto_healing.pid\" ] && [ -x \"/etc/init.d/jenkins\" ] && _jenkins_health_check_fix\n    [ ! -e \"/run/boa_java_auto_healing.pid\" ] && _solr_health_check_fix\n    [ ! -e \"/run/boa_java_auto_healing.pid\" ] && _jetty_listen_conflict_detection\n  fi\nfi\n\necho DONE!\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/mysql.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/mysql.incident.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\nexport _SQL_MAX_TTL=${_SQL_MAX_TTL//[^0-9]/}\n: \"${_SQL_MAX_TTL:=3600}\"\n\nexport _SQL_LOW_MAX_TTL=${_SQL_LOW_MAX_TTL//[^0-9]/}\n: \"${_SQL_LOW_MAX_TTL:=60}\"\n\nexport _LOAD_THRESHOLD=${_LOAD_THRESHOLD//[^0-9.]/}\n: \"${_LOAD_THRESHOLD:=33.0}\" # Example: 1-minute load above 33 indicates high load\n\nexport _THREAD_THRESHOLD=${_THREAD_THRESHOLD//[^0-9]/}\n: \"${_THREAD_THRESHOLD:=99}\" # Example: More than 99 MySQL threads\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_incident_email_report() {\n  if ! 
_check_uptime_grace_period >/dev/null; then return 1; fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" != \"OFF\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n    s-nail -s \"Incident Report: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_pthOml}\n  fi\n}\n\n_valkey_cold_restart() {\n  killall -9 valkey-server &> /dev/null\n  rm -f /var/lib/valkey/*\n  service valkey-server start &> /dev/null\n  wait\n}\n\n_redis_cold_restart() {\n  killall -9 redis-server &> /dev/null\n  rm -f /var/lib/redis/*\n  service redis-server start &> /dev/null\n  wait\n}\n\n_sql_restart() {\n  touch /run/boa_mysql_auto_healing.pid\n  if [ ! -d \"/run/mysqld\" ]; then\n    mkdir -p /run/mysqld\n    chown -R mysql:root /run/mysqld\n  fi\n  sleep 3\n  echo \"$(date) $1 incident detected\" >> ${_pthOml}\n  killall sleep &> /dev/null\n  killall php\n  bash /var/xdrago/move_sql.sh\n  wait\n  echo \"$(date) $1 incident Percona server restarted\" >> ${_pthOml}\n  if [ -e \"/var/lib/valkey\" ]; then\n    _valkey_cold_restart\n    echo \"$(date) $1 incident Valkey server restarted\" >> ${_pthOml}\n  elif [ -e \"/var/lib/redis\" ]; then\n    _redis_cold_restart\n    echo \"$(date) $1 incident Redis server restarted\" >> ${_pthOml}\n  fi\n  echo \"$(date) $1 incident response completed\" >> ${_pthOml}\n  _incident_email_report \"$1\"\n  echo >> ${_pthOml}\n  [ -e \"/run/boa_mysql_auto_healing.pid\" ] && rm -f /run/boa_mysql_auto_healing.pid\n  exit 0\n}\n\n_sql_busy_detection() {\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _SQL_LOG=\"/var/log/daemon.log\"\n  else\n    _SQL_LOG=\"/var/log/syslog\"\n  fi\n  if [ -e \"${_SQL_LOG}\" ]; then\n    if [ `tail --lines=333 ${_SQL_LOG} \\\n      | grep --count \"Too many connections\"` -gt 111 ]; then\n      _IS_PROVISION_RUNNING=$(pgrep -f provision)\n      if [ -z \"${_IS_PROVISION_RUNNING}\" ]; then\n        
_sql_restart \"BUSY MySQL\"\n      fi\n    fi\n  fi\n  if [ -e \"/root/.instant.busy.mysql.action.cnf\" ]; then\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ] && [ ! -z \"${_SQL_PSWD}\" ]; then\n      _MYSQL_CONN_TEST=$(mysql -u root -e \"status\" 2>&1)\n      echo _MYSQL_CONN_TEST ${_MYSQL_CONN_TEST}\n      if [[ \"${_MYSQL_CONN_TEST}\" =~ \"Too many connections\" ]]; then\n        _sql_restart \"BUSY MySQL\"\n      fi\n    fi\n  fi\n}\n\n_mysql_proc_kill() {\n  _xtime=${_xtime//[^0-9]/}\n  echo \"Monitoring process ${_each} by ${_xuser} running for ${_xtime} seconds\"\n\n  if [[ -n \"${_xtime}\" && ${_xtime} -gt ${_limit} ]]; then\n    echo \"Killing process ${_each} by ${_xuser} after ${_xtime} seconds\"\n    _xkill=$(mysqladmin -u root kill ${_each} 2>&1)\n    _times=$(date)\n    _load=$(cat /proc/loadavg)\n\n    # Log the _load and the process killing details\n    echo \"${_load}\" >> /var/log/boa/sql_watch.log\n    echo \"${_times} ${_each} ${_xuser} ${_xtime} ${_xkill}\" >> /var/log/boa/sql_watch.log\n  fi\n}\n\n_mysql_proc_control() {\n  # Control file to enable _SQLMONITOR\n  if [ -e \"/root/.mysqladmin.monitor.cnf\" ]; then\n    _SQLMONITOR=YES\n  fi\n\n  # Log the MySQL process list if _SQLMONITOR is enabled\n  if [[ \"${_SQLMONITOR}\" == \"YES\" ]]; then\n    [ -e \"/var/xdrago/log/mysqladmin.monitor.log\" ] && mv -f /var/xdrago/log/mysqladmin.monitor.log /var/log/boa/\n    [ -e \"/root/.nodebug_slow_query.pid\" ] && rm -f /root/.nodebug_slow_query.pid\n    if [ ! 
-e \"/root/.debug_slow_query.pid\" ]; then\n      touch /root/.debug_slow_query.pid\n      mysql -u root -e \"SET GLOBAL slow_query_log = 'ON';\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL long_query_time = 5;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL slow_query_log_file = '/var/log/mysql/sql-slow-query.log';\" &> /dev/null\n    fi\n    echo \"$(date 2>&1)\" >> /var/log/boa/mysqladmin.monitor.log\n    echo \"$(mysqladmin -u root proc -v 2>&1)\" >> /var/log/boa/mysqladmin.monitor.log\n  else\n    [ -e \"/root/.debug_slow_query.pid\" ] && rm -f /root/.debug_slow_query.pid\n    if [ ! -e \"/root/.nodebug_slow_query.pid\" ]; then\n      mysql -u root -e \"SET GLOBAL slow_query_log = 'OFF';\" &> /dev/null\n      touch /root/.nodebug_slow_query.pid\n      [ -e \"/var/log/boa/mysqladmin.monitor.log\" ] && rm -f /var/log/boa/mysqladmin.monitor.log\n      [ -e \"/var/log/mysql/sql-slow-query.log\" ] && rm -f /var/log/mysql/sql-slow-query.log\n    fi\n  fi\n\n  # Default TTL _limit in seconds (can be adjusted)\n  _limit=${1:-3600}\n\n  # Get all MySQL processes and extract PID, user, and running time\n  _mysql_proc_list=$(mysqladmin -u root proc | awk 'NR>3 {print $2, $4, $12}')\n\n  # Iterate over _each process\n  echo \"${_mysql_proc_list}\" | while read -r _each _xuser _xtime; do\n    _each=${_each//[^0-9]/}\n    _xuser=${_xuser//[^0-9a-z_]/}\n    _xtime=${_xtime//[^0-9]/}\n\n    # Skip root user processes\n    if [[ \"${_xuser}\" == \"root\" ]]; then\n      echo \"Skipping root process: ${_each}\"\n      continue\n    fi\n\n    if [[ -n \"${_each}\" && \"${_each}\" -gt 5 && -n \"${_xtime}\" ]]; then\n      echo \"Process ID: ${_each}, User: ${_xuser}, Time: ${_xtime} seconds\"\n\n      # Reset the TTL _limit on every iteration so a lowered limit applied to\n      # one problematic user does not leak to the next process\n      _limit=${1:-${_SQL_MAX_TTL}}\n\n      # Check if the user is listed on the problematic users list\n      if [[ -e \"/root/.sql.problematic.users.cnf\" ]]; then\n        for _XQ in $(cat /root/.sql.problematic.users.cnf | cut -d '#' -f1 | sort | uniq); do\n          if [[ \"${_xuser}\" == \"${_XQ}\" ]]; then\n  
          echo \"Problematic user detected: ${_xuser}, applying lower limit\"\n            _limit=${_SQL_LOW_MAX_TTL}\n          fi\n        done\n      else\n        _limit=${_SQL_MAX_TTL}  # Default _limit for non-problematic users\n      fi\n\n      _mysql_proc_kill\n    fi\n  done\n}\n\n_mysql_high_load() {\n\n  # Get the current 1-minute load average\n  _LOAD=$(awk '{print $1}' /proc/loadavg)\n\n  # Get the mysqld process ID\n  _MYSQL_PID=$(pidof mysqld)\n\n  # Count threads for the mysqld process (subtracting the header)\n  _MYSQL_THREADS=$(ps -T -p \"${_MYSQL_PID}\" | tail -n +2 | wc -l)\n\n  echo \"Current load average: ${_LOAD}\"\n  echo \"Current MySQL thread count: ${_MYSQL_THREADS}\"\n\n  # Compare against thresholds; use bc for floating point comparison\n  if (( $(echo \"${_LOAD} > ${_LOAD_THRESHOLD}\" | bc -l) )) && [ \"${_MYSQL_THREADS}\" -gt \"${_THREAD_THRESHOLD}\" ]; then\n    echo \"High load and excessive MySQL threads detected. Restarting MySQL...\"\n    _sql_restart \"HIGH LOAD MySQL\"\n  else\n    echo \"System operating normally.\"\n  fi\n}\n\n_if_mydumper_is_locked() {\n  _OCT_NR=$(ls /data/disk 2>/dev/null | wc -l)\n  if [ -n \"${_OCT_NR}\" ] && [ \"${_OCT_NR}\" -ge 1 ]; then\n    if [ \"${_OCT_NR}\" -ge 6 ]; then\n      _MULTI_MX=$(( _OCT_NR * 3 ))\n    else\n      _MULTI_MX=$(( _OCT_NR * 5 ))\n    fi\n    if [ \"${_OCT_NR}\" -lt 4 ]; then\n      _MULTI_MX=$(( _OCT_NR + 10 ))\n    fi\n  fi\n  # Guard against an unset ceiling when /data/disk is empty or missing\n  : \"${_MULTI_MX:=10}\"\n  _AR_C=\"$(pgrep -fc aegir.sh)\"\n  _DR_C=\"$(pgrep -fc drush.php)\"\n  _MD_C=\"$(pgrep -fc mydumper)\"\n  if [ \"${_MD_C}\" -gt 0 ]; then\n    if [ \"${_AR_C}\" -gt \"${_MULTI_MX}\" ]; then\n      pkill -f mydumper\n      pkill -f aegir.sh\n      echo \"$(date) TOO MANY (${_AR_C}) aegir.sh required killing mydumper\" >> ${_pthOml}\n      echo >> ${_pthOml}\n      _incident_email_report \"TOO MANY (${_AR_C}) aegir.sh required killing mydumper\"\n    fi\n    if [ \"${_DR_C}\" -gt \"${_MULTI_MX}\" ]; then\n      pkill -f mydumper\n      pkill -f drush.php\n      
echo \"$(date) TOO MANY (${_DR_C}) drush.php required killing mydumper\" >> ${_pthOml}\n      echo >> ${_pthOml}\n      _incident_email_report \"TOO MANY (${_DR_C}) drush.php required killing mydumper\"\n    fi\n  fi\n}\n\n_mysql_flush_hosts() {\n  if pgrep -f /usr/sbin/mysqld \\\n    && [ -e \"/run/mysqld/mysqld.sock\" ] \\\n    && [ -e \"/run/mysqld/mysqld.pid\" ]; then\n    mysqladmin -u root flush-hosts &> /dev/null\n  fi\n}\n\n_mysql_health_check_fix() {\n  if ! pgrep -f /usr/sbin/mysqld \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.pid\" ]; then\n    _sql_restart \"DOWN MySQL\"\n  fi\n}\n\n# Fire-and-forget launcher, cron-safe and interactive-safe\n_spawn_detached() {\n  _cmd=\"$1\"\n  if command -v nohup >/dev/null 2>&1; then\n    nohup bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  elif command -v setsid >/dev/null 2>&1; then\n    setsid bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  else\n    ( bash -c \"${_cmd}\" >/dev/null 2>&1 ) &\n  fi\n  # If interactive shell, drop it from the job table to mimic cron behavior\n  if [[ \"$-\" == *i* ]]; then disown; fi\n}\n\n### Main start here\n\nif [ -x \"/etc/init.d/mysql\" ] \\\n  && [ -x \"/usr/sbin/mysqld\" ] \\\n  && [ ! -e \"/run/boa_mysql_auto_healing.pid\" ] \\\n  && [ ! -e \"/run/max_load.pid\" ] \\\n  && [ ! -e \"/run/critical_load.pid\" ] \\\n  && [ ! -e \"/run/mysql_restart_running.pid\" ]; then\n  _mysql_health_check_fix\nfi\n\nif [ -x \"/etc/init.d/mysql\" ] \\\n  && pgrep -f /usr/sbin/mysqld \\\n  && [ ! 
-e \"/run/mysql_restart_running.pid\" ]; then\n  _mysql_high_load\n  _sql_busy_detection\n  _mysql_flush_hosts\n  if (( $(pgrep -fc mydumper) > 0 )) && (( $(pgrep -fc mysql_backup.sh) > 0 )); then\n    sleep 5\n    _if_mydumper_is_locked\n  fi\n  _spawn_detached 'perl /var/xdrago/monitor/check/sqlcheck.pl'\nfi\n\nif [ -e \"/run/boa_sql_backup.pid\" ] \\\n  || [ -e \"/run/boa_sql_cluster_backup.pid\" ] \\\n  || [ -e \"/run/boa_mysql_auto_healing.pid\" ] \\\n  || [ -e \"/run/mysql_restart_running.pid\" ]; then\n  _SQL_CTRL=NO\nelse\n  _SQL_CTRL=YES\nfi\n\nif [ \"${_SQL_CTRL}\" = \"YES\" ] \\\n  && [ ! -e \"/run/max_load.pid\" ] \\\n  && [ ! -e \"/run/critical_load.pid\" ]; then\n  for _iteration in {1..3}; do\n    _mysql_proc_control \"${_SQL_MAX_TTL}\"\n    sleep 15\n  done\nfi\n\necho DONE!\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/nginx.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/nginx.incident.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n_cd=\"/run/nginx-monitor.cooldown\"\n: \"${_NGINX_COOLDOWN_SECS:=30}\"\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_incident_email_report() {\n  if ! 
_check_uptime_grace_period >/dev/null; then return 1; fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n    s-nail -s \"Incident Report: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_pthOml}\n  fi\n}\n\n_restart_nginx() {\n  touch /run/boa_nginx_auto_healing.pid\n  sleep 3\n  echo \"$(date) NGX $1 detected\" >> ${_pthOml}\n  echo \"Killing all Nginx processes and restarting Nginx...\"\n  pkill -9 -f nginx: || true\n  mv -f /var/log/nginx/error.log /var/log/nginx/$(date +%y%m%d-%H%M)-error.log\n  service nginx restart\n  wait\n  if pidof nginx > /dev/null; then\n    echo \"Nginx service restarted successfully.\"\n    _NGINX_RESTARTED=true\n    echo \"$(date) NGX $1 incident Nginx service restarted\" >> ${_pthOml}\n  else\n    echo \"Failed to restart Nginx.\"\n    echo \"$(date) NGX $1 incident Nginx restart failed\" >> ${_pthOml}\n  fi\n  echo \"$(date) NGX $1 incident response completed\" >> ${_pthOml}\n  _incident_email_report \"NGX $1\"\n  echo >> ${_pthOml}\n  [ -e \"/run/boa_nginx_auto_healing.pid\" ] && rm -f /run/boa_nginx_auto_healing.pid\n  exit 0\n}\n\n_nginx_oom_detection() {\n  if [ -e \"/var/log/nginx/error.log\" ]; then\n    if [ `tail --lines=500 /var/log/nginx/error.log \\\n      | grep --count \"Cannot allocate memory\"` -gt 0 ]; then\n      _thisErrLog=\"$(date) Nginx OOM error\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _restart_nginx \"Nginx OOM\"\n    fi\n  fi\n}\n\n_nginx_bind_check_fix() {\n  if [ `tail --lines=8 /var/log/nginx/error.log \\\n    | grep --count \"Address already in use\"` -gt 0 ]; then\n    _thisErrLog=\"$(date) Nginx BIND PORT error, service will be restarted\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    _restart_nginx \"Nginx BIND PORT error\"\n  fi\n}\n\n_nginx_health_check_fix() {\n  # Initialize a flag to indicate whether 
Nginx service has been restarted\n  _NGINX_RESTARTED=false\n  # Check if Nginx is running and capture the process details\n  _NGINX_PROCESSES=$(ps aux | grep 'nginx: ' | grep -v 'grep')\n  # Check for multiple master processes (shouldn't happen)\n  if [ \"${_NGINX_RESTARTED}\" = false ]; then\n    _MASTER_COUNT=$(pgrep -fc 'nginx: master process')\n    if [ \"${_MASTER_COUNT}\" -gt 1 ]; then\n      # Double-check after a short grace to avoid flapping\n      sleep 5\n      _MASTER_COUNT=$(pgrep -fc 'nginx: master process')\n      if [ \"${_MASTER_COUNT}\" -gt 1 ]; then\n        echo \"Multiple (${_MASTER_COUNT}) Nginx master processes detected. Possible stuck processes.\"\n        echo \"$(date) NGX multiple (${_MASTER_COUNT}) master processes detected\" >> ${_pthOml}\n        _restart_nginx \"_MASTER_COUNT ${_MASTER_COUNT}\"\n      fi\n    fi\n  fi\n  # Check the state of the master process\n  if [ \"${_NGINX_RESTARTED}\" = false ]; then\n    _MASTER_STATE=$(echo \"${_NGINX_PROCESSES}\" | grep 'nginx: master process' | awk '{print $8}')\n    if [ \"${_MASTER_STATE}\" = \"Z\" ] \\\n      || [ \"${_MASTER_STATE}\" = \"T\" ] \\\n      || [ \"${_MASTER_STATE}\" = \"D\" ]; then\n      echo \"Nginx master process is in an abnormal state: ${_MASTER_STATE}.\"\n      echo \"$(date) NGX master process is in an abnormal state: ${_MASTER_STATE}\" >> ${_pthOml}\n      echo \"$(date) NGX ${_NGINX_PROCESSES}\" >> ${_pthOml}\n      _restart_nginx \"_MASTER_STATE ${_MASTER_STATE}\"\n    fi\n  fi\n  # Check the state of the worker processes\n  if [ \"${_NGINX_RESTARTED}\" = false ]; then\n    _WORKER_STATE=$(echo \"${_NGINX_PROCESSES}\" | grep 'nginx: worker process' | awk '{print $8}')\n    if [[ \"${_WORKER_STATE}\" =~ \"Z\" ]] \\\n      || [[ \"${_WORKER_STATE}\" =~ \"T\" ]]; then\n      echo \"Nginx worker process is in an abnormal state: ${_WORKER_STATE}.\"\n      echo \"$(date) NGX worker process is in an abnormal state: ${_WORKER_STATE}\" >> ${_pthOml}\n      echo \"$(date) 
NGX ${_NGINX_PROCESSES}\" >> ${_pthOml}\n      _restart_nginx \"_WORKER_STATE ${_WORKER_STATE}\"\n    fi\n  fi\n  # Final status message\n  if [ \"${_NGINX_RESTARTED}\" = false ]; then\n    echo \"Nginx is running normally. No anomalies detected.\"\n  else\n    echo \"Nginx was restarted due to detected anomalies.\"\n    echo \"$(date) NGX service was restarted due to detected anomalies\" >> ${_pthOml}\n  fi\n}\n\n_nginx_if_up_check_fix() {\n  # Standard check first\n  if [ -x \"/etc/init.d/nginx\" ]; then\n    if ! pgrep -f 'nginx: master process' \\\n      || [ ! -e \"/run/nginx.pid\" ]; then\n      # Double-check after a short grace to avoid flapping\n      sleep 3\n      if ! pgrep -f 'nginx: master process' \\\n        || [ ! -e \"/run/nginx.pid\" ]; then\n        _now=$(date +%s)\n        if [ -s \"${_cd}\" ]; then\n          _ts=$(cat \"${_cd}\" 2>/dev/null | tr -d '\\n')\n          if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_NGINX_COOLDOWN_SECS}\" ]; then\n            echo \"$(date) INFO: Nginx unhealthy but in cooldown; skipping restart\" >> ${_pthOml}\n            return 0\n          fi\n        fi\n        pkill -9 -f nginx: || true\n        mv -f /var/log/nginx/error.log /var/log/nginx/$(date +%y%m%d-%H%M)-error.log\n        service nginx restart\n        wait\n        # Stamp cooldown after attempting recovery\n        date +%s > \"${_cd}\"\n        _thisErrLog=\"$(date) Nginx Server was down, restarted\"\n        echo ${_thisErrLog} >> ${_pthOml}\n        _incident_email_report \"Nginx Server was down, restarted\"\n        echo >> ${_pthOml}\n        exit 0\n      fi\n    fi\n  fi\n}\n\n_if_nginx_restart() {\n  _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n  _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n  _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n  ReTest=$(ls 
/data/disk/*/static/control/run-nginx-restart.pid 2>/dev/null | wc -l)\n  if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n    || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n    || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n    || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n    || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n    || [ -e \"/root/.allow.nginx.restart.cnf\" ]; then\n    if [ \"${ReTest}\" -ge 1 ]; then\n      rm -f /data/disk/*/static/control/run-nginx-restart.pid\n      _thisErrLog=\"$(date) Nginx Server Restart Requested\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _restart_nginx \"Nginx Server Restart Requested\"\n    fi\n  fi\n}\n\nif [ ! -e \"/run/max_load.pid\" ] && [ ! -e \"/run/critical_load.pid\" ]; then\n  _nginx_if_up_check_fix\n  _nginx_bind_check_fix\n  _nginx_oom_detection\n  _nginx_health_check_fix\n  [ -d \"/data/u\" ] && _if_nginx_restart\nfi\n\necho \"Done!\"\nexit 0\n
  },
  {
    "path": "aegir/tools/system/monitor/check/nginx_guard.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_monPath=\"/var/xdrago/monitor/check\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Fire-and-forget launcher, cron-safe and interactive-safe\n###\n_spawn_detached() {\n  _cmd=\"$1\"\n  if command -v nohup >/dev/null 2>&1; then\n    nohup bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  elif command -v setsid >/dev/null 2>&1; then\n    setsid bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  else\n    ( bash -c \"${_cmd}\" >/dev/null 2>&1 ) &\n  fi\n  # If interactive shell, drop it from the job table to mimic cron behavior\n  if [[ \"$-\" == *i* ]]; then disown; fi\n}\n\nif [ ! -e \"/run/max_load.pid\" ] && [ ! -e \"/run/critical_load.pid\" ]; then\n\n  # Reload nginx if access log is missing or empty\n  [ -s /var/log/nginx/access.log ] || service nginx reload\n\n  # Main execution\n  if [ -f \"${_monPath}/scan_nginx.sh\" ]; then\n    for _iteration in {1..10}; do\n      nohup ${_monPath}/scan_nginx.sh > /dev/null 2>&1 &\n      sleep 5\n    done\n  fi\nfi\n\necho \"Done!\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/php.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/php.incident.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n: \"${_FPM_COOLDOWN_SECS:=30}\"\n\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_incident_email_report() {\n  if ! 
_check_uptime_grace_period >/dev/null; then return 1; fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n    s-nail -s \"Incident Report: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_pthOml}\n  fi\n}\n\n_fpm_reload() {\n  : > /run/fmp_wait.pid\n  : > /run/restarting_fmp_wait.pid\n  sleep 3\n  renice ${_B_NICE} -p $$ &> /dev/null\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n      service \"php${e}-fpm\" reload\n    fi\n  done\n  echo \"$(date) $1 incident PHP-FPM reloaded\" >> ${_pthOml}\n  sleep 1\n  rm -f /run/fmp_wait.pid /run/restarting_fmp_wait.pid\n}\n\n_fpm_duplicate_instances_detection() {\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    # Count masters for this exact conf path\n    _pat=\"php-fpm: master process.*/opt/php${e}/etc/php${e}-fpm.conf\"\n    _cnt=$(pgrep -fc \"${_pat}\")\n    if (( _cnt > 1 )); then\n      _thisErrLog=\"$(date) Duplicate master for php${e}-fpm (count=${_cnt})\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n      mv -f /var/log/php/php${e}-fpm-error.log /var/backups/php-logs/${_NOW}/ &> /dev/null\n      service \"php${e}-fpm\" restart\n      wait\n    fi\n  done\n}\n\n_fpm_giant_log_detection() {\n  _PHPLOG_SIZE_TEST=$(du -s -h /var/log/php 2>/dev/null)\n  if echo \"${_PHPLOG_SIZE_TEST}\" | grep -q \"G\"; then\n    _thisErrLog=\"$(date) Too big PHP error logs detected: ${_PHPLOG_SIZE_TEST}\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    # No restart here; health checks will react if needed.\n  fi\n}\n\n_fpm_listen_conflict_detection() {\n  if [ -e \"/var/log/php\" ]; then\n    _hit=$(tail --lines=500 
/var/log/php/php*-fpm-error.log 2>/dev/null | grep -c \"already listen on\")\n    if [ \"${_hit}\" -gt 0 ]; then\n      sleep 2\n      _hit2=$(tail --lines=500 /var/log/php/php*-fpm-error.log 2>/dev/null | grep -c \"already listen on\")\n      if [ \"${_hit2}\" -gt 0 ]; then\n        [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n        mv -f /var/log/php/php*-fpm-error.log /var/backups/php-logs/${_NOW}/ &> /dev/null\n        _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n        for e in ${_PHP_V}; do\n          if [ ! -S \"/run/www${e}.fpm.socket\" ]; then\n            _thisErrLog=\"$(date) FPM listen conflict for php${e}, restarting\"\n            echo ${_thisErrLog} >> ${_pthOml}\n            service \"php${e}-fpm\" restart\n            wait\n          fi\n        done\n      fi\n    fi\n  fi\n}\n\n_fpm_proc_max_detection() {\n  _count=$(tail --lines=500 /var/log/php/php*-fpm-error.log 2>/dev/null | grep -c \"process.max\")\n  if [ \"${_count}\" -gt 0 ]; then\n    _thisErrLog=\"$(date) NOTE: process.max reached (${_count} hits). Consider raising pm.max_children\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    # No restart; capacity signal only.\n  fi\n}\n\n_fpm_sockets_healing() {\n  _hit=$(tail --lines=500 /var/log/php/php*-fpm-error.log 2>/dev/null | grep -c \"Address already in use\")\n  if [ \"${_hit}\" -gt 0 ]; then\n    sleep 2\n    _hit2=$(tail --lines=500 /var/log/php/php*-fpm-error.log 2>/dev/null | grep -c \"Address already in use\")\n    if [ \"${_hit2}\" -gt 0 ]; then\n      [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n      mv -f /var/log/php/php*-fpm-error.log /var/backups/php-logs/${_NOW}/ &> /dev/null\n      _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ ! 
-S \"/run/www${e}.fpm.socket\" ]; then\n          _thisErrLog=\"$(date) FPM socket conflict sustained for php${e}; restarting\"\n          echo ${_thisErrLog} >> ${_pthOml}\n          service \"php${e}-fpm\" restart\n          wait\n        fi\n      done\n    fi\n  fi\n}\n\n_fpm_fastcgi_temp() {\n  _FASTCGI_SIZE_TEST=$(du -s -h /usr/fastcgi_temp/*/*/* 2>/dev/null | grep G)\n  if [ -n \"${_FASTCGI_SIZE_TEST}\" ]; then\n    rm -f /usr/fastcgi_temp/*/*/* 2>/dev/null\n    _thisErrLog=\"$(date) PHP fastcgi_temp too big, cleaned\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    echo \"$(date) ${_FASTCGI_SIZE_TEST}\" >> ${_pthOml}\n    _incident_email_report \"PHP fastcgi_temp too big, cleaned\"\n    echo >> ${_pthOml}\n  fi\n}\n\n_fpm_health_check_fix() {\n  _thisErrLog=\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -x \"/opt/php${e}/bin/php\" ]; then\n      _pat=\"php-fpm: master process.*/opt/php${e}/etc/php${e}-fpm.conf\"\n\n      _ok_master=false\n      _ok_socket=false\n      _ok_pid=false\n\n      # First pass\n      pgrep -f \"${_pat}\" >/dev/null 2>&1 && _ok_master=true\n      [ -S \"/run/www${e}.fpm.socket\" ] && _ok_socket=true\n      [ -s \"/run/php${e}-fpm.pid\" ] && _ok_pid=true\n\n      # Second pass (grace for reloads)\n      if ! ${_ok_master} || ! ${_ok_socket} || ! ${_ok_pid}; then\n        sleep 2\n        _ok_master=false; _ok_socket=false; _ok_pid=false\n        pgrep -f \"${_pat}\" >/dev/null 2>&1 && _ok_master=true\n        [ -S \"/run/www${e}.fpm.socket\" ] && _ok_socket=true\n        [ -s \"/run/php${e}-fpm.pid\" ] && _ok_pid=true\n      fi\n\n      if ! ${_ok_master} || ! ${_ok_socket} || ! 
${_ok_pid}; then\n        # Per-version cooldown: /run/php<ver>-fpm.cooldown (30 seconds default via _FPM_COOLDOWN_SECS)\n        _cd=\"/run/php${e}-fpm.cooldown\"\n        _now=$(date +%s)\n        if [ -s \"${_cd}\" ]; then\n          _ts=$(cat \"${_cd}\" 2>/dev/null | tr -d '\\n')\n          if [ -n \"${_ts}\" ]; then\n            _delta=$(( _now - _ts ))\n            if [ \"${_delta}\" -lt \"${_FPM_COOLDOWN_SECS}\" ]; then\n              echo \"$(date) INFO: php${e}-fpm unhealthy but in cooldown (${_delta}s < ${_FPM_COOLDOWN_SECS}s); skipping restart\" >> ${_pthOml}\n              continue\n            fi\n          fi\n        fi\n\n        : > /run/fmp_wait.pid\n        : > /run/restarting_fmp_wait.pid\n\n        echo \"$(date) php${e}-fpm health failed (master=${_ok_master} socket=${_ok_socket} pid=${_ok_pid}) — restart\" >> ${_pthOml}\n        [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n        mv -f /var/log/php/php${e}-fpm-error.log /var/backups/php-logs/${_NOW}/ &> /dev/null\n        service \"php${e}-fpm\" restart\n        wait\n        sleep 1\n\n        # Re-check after restart\n        _ok_master=false; _ok_socket=false; _ok_pid=false\n        pgrep -f \"${_pat}\" >/dev/null 2>&1 && _ok_master=true\n        [ -S \"/run/www${e}.fpm.socket\" ] && _ok_socket=true\n        [ -s \"/run/php${e}-fpm.pid\" ] && _ok_pid=true\n\n        if ${_ok_master} && ${_ok_socket} && ${_ok_pid}; then\n          _thisErrLog=\"$(date) PHP-FPM ${e} was down, restarted\"\n          echo ${_thisErrLog} >> ${_pthOml}\n          date +%s > \"${_cd}\"\n        else\n          # As last resort: stop/start for only this version\n          echo \"$(date) php${e}-fpm still unhealthy after restart; stop/start\" >> ${_pthOml}\n          service \"php${e}-fpm\" stop\n          sleep 1\n          [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n          mv -f /var/log/php/php${e}-fpm-error.log /var/backups/php-logs/${_NOW}/ &> 
/dev/null\n          service \"php${e}-fpm\" start\n          date +%s > \"${_cd}\"\n        fi\n\n        rm -f /run/fmp_wait.pid /run/restarting_fmp_wait.pid\n      fi\n    fi\n  done\n  if [ -n \"${_thisErrLog}\" ]; then\n    _incident_email_report \"PHP-FPM was down, restarted\"\n    echo >> ${_pthOml}\n  fi\n}\n\n# Fire-and-forget launcher, cron-safe and interactive-safe\n_spawn_detached() {\n  _cmd=\"$1\"\n  if command -v nohup >/dev/null 2>&1; then\n    nohup bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  elif command -v setsid >/dev/null 2>&1; then\n    setsid bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  else\n    ( bash -c \"${_cmd}\" >/dev/null 2>&1 ) &\n  fi\n  # If interactive shell, drop it from the job table to mimic cron behavior\n  if [[ \"$-\" == *i* ]]; then disown; fi\n}\n\n_fpm_logs_empty() {\n  _LOG_NR=$(ls /var/log/php | wc -l)\n  if [ -n \"${_LOG_NR}\" ] && [ \"${_LOG_NR}\" -ge 3 ]; then\n    _LOGS=OK\n  else\n    _fpm_reload \"NOLOGS\"\n  fi\n}\n\nif [ ! -e \"/var/tmp/fpm\" ]; then\n  mkdir -p /var/tmp/fpm\n  chmod 777 /var/tmp/fpm\nfi\n\nif [ ! -e \"/run/max_load.pid\" ] && [ ! -e \"/run/critical_load.pid\" ]; then\n  _fpm_logs_empty\n  _fpm_duplicate_instances_detection\n  _fpm_listen_conflict_detection\n  _fpm_proc_max_detection\n  _fpm_sockets_healing\n  _fpm_fastcgi_temp\n  _fpm_giant_log_detection\n  _fpm_health_check_fix\n  if [ ! -e \"/root/.high_traffic.cnf\" ] \\\n    && [ ! -e \"/root/.giant_traffic.cnf\" ]; then\n    _spawn_detached 'perl /var/xdrago/monitor/check/segfault_alert.pl'\n  fi\nfi\n\necho DONE!\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/redis.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/redis.incident.log\"\n_cd=\"/run/redis-monitor.cooldown\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -d /run/redis ] || mkdir -p /run/redis\n[ -d /run/redis ] && chown -R redis:redis /run/redis\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n: \"${_REDIS_COOLDOWN_SECS:=30}\"\n\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_incident_email_report() {\n  if ! 
_check_uptime_grace_period >/dev/null; then return 1; fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n    s-nail -s \"Incident Report: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_pthOml}\n  fi\n}\n\n_redis_ping_ok() {\n  # Check if Redis responds to PING (authenticated or NOAUTH)\n  _sock=\"/run/redis/redis.sock\"\n  _cli=\"/usr/bin/redis-cli\"\n  _pass_file=\"/root/.redis.pass.txt\"\n  _out=\n  _pass=\n  if [ ! -x \"${_cli}\" ]; then\n    return 1\n  fi\n  if [ -r \"${_pass_file}\" ]; then\n    _pass=\"$(head -n1 \"${_pass_file}\" 2>/dev/null | tr -d '\\r\\n')\"\n  fi\n  if [ -n \"${_pass}\" ]; then\n    _out=\"$(${_cli} -s \"${_sock}\" -a \"${_pass}\" ping 2>&1)\"\n  else\n    _out=\"$(${_cli} -s \"${_sock}\" ping 2>&1)\"\n  fi\n  if echo \"${_out}\" | grep -qi '^PONG$'; then\n    return 0\n  fi\n  if echo \"${_out}\" | grep -qi 'NOAUTH'; then\n    return 0\n  fi\n  return 1\n}\n\n_fpm_reload() {\n  : > /run/fmp_wait.pid\n  : > /run/restarting_fmp_wait.pid\n  sleep 3\n  [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n  mv -f /var/log/php/* /var/backups/php-logs/${_NOW}/ &> /dev/null\n  renice ${_B_NICE} -p $$ &> /dev/null\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n      service \"php${e}-fpm\" reload\n    fi\n  done\n  echo \"$(date) $1 incident PHP-FPM reloaded\" >> ${_pthOml}\n  sleep 1\n  rm -f /run/fmp_wait.pid /run/restarting_fmp_wait.pid\n}\n\n_redis_restart() {\n  touch /run/boa_redis_auto_healing.pid\n  sleep 3\n  echo \"$(date) $1 incident detected\" >> ${_pthOml}\n  service redis-server stop &> /dev/null\n  wait\n  killall -9 redis-server &> /dev/null\n  rm -f /var/lib/redis/*\n  service redis-server start 
&> /dev/null\n  wait\n  echo \"$(date) $1 incident redis-server restarted\" >> ${_pthOml}\n  if [[ \"${1}\" =~ \"REFUSED\" ]] || [[ \"${1}\" =~ \"SLOW\" ]]; then\n    _fpm_reload \"$1\"\n  fi\n  echo \"$(date) $1 incident response completed\" >> ${_pthOml}\n  date +%s > \"${_cd}\"\n  _incident_email_report \"$1\"\n  echo >> ${_pthOml}\n  [ -e \"/run/boa_redis_auto_healing.pid\" ] && rm -f /run/boa_redis_auto_healing.pid\n  exit 0\n}\n\n_redis_bind_check_fix() {\n  # Bind/socket/address-in-use issues → verify twice; restart only if socket missing\n  _hits=$(tail -n 8 /var/log/redis/redis-server.log 2>/dev/null | egrep -ci \"Address already in use\")\n  if [ \"${_hits}\" -gt 0 ]; then\n    sleep 2\n    _hits2=$(tail -n 8 /var/log/redis/redis-server.log 2>/dev/null | egrep -ci \"Address already in use\")\n    if [ \"${_hits2}\" -gt 0 ] && [ ! -S \"/run/redis/redis.sock\" ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_REDIS_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Redis bind/socket conflict but in cooldown; skipping restart\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      echo \"$(date) Redis bind/socket conflict; restarting\" >> ${_pthOml}\n      _redis_restart \"RedisException BIND PORT\"\n    fi\n  fi\n}\n\n_redis_connection_check_fix() {\n  # Sustained connection/backlog issues → verify twice; cooldown then restart\n  _hits=$(tail -n 500 /var/log/php/error_log_* 2>/dev/null | egrep -ci \"RedisException: Connection refused\")\n  if [ \"${_hits}\" -gt 19 ]; then\n    sleep 2\n    _hits2=$(tail -n 500 /var/log/php/error_log_* 2>/dev/null | egrep -ci \"RedisException: Connection refused\")\n    if [ \"${_hits2}\" -gt 19 ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_REDIS_COOLDOWN_SECS}\" ]; then\n     
     echo \"$(date) INFO: Redis connection issues but in cooldown; skipping restart\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      echo \"$(date) Redis sustained connection issues (${_hits2} hits) — restart\" >> ${_pthOml}\n      _redis_restart \"RedisException REFUSED\"\n    fi\n  fi\n}\n\n_redis_slow_check_fix() {\n  # Sustained latency/slowlog/accept issues → verify twice; cooldown then restart\n  _hits=$(tail -n 500 /var/log/php/fpm-*-slow.log 2>/dev/null | egrep -ci \"PhpRedis.php\")\n  if [ \"${_hits}\" -gt 19 ]; then\n    sleep 2\n    _hits2=$(tail -n 500 /var/log/php/fpm-*-slow.log 2>/dev/null | egrep -ci \"PhpRedis.php\")\n    if [ \"${_hits2}\" -gt 19 ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_REDIS_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Redis latency symptoms but in cooldown; skipping restart\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      echo \"$(date) Redis sustained latency symptoms (${_hits2} hits) — restart\" >> ${_pthOml}\n      _redis_restart \"RedisException SLOW\"\n    fi\n  fi\n}\n\n_if_redis_restart() {\n  _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n  _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n  _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n  _VkTest=$(ls /data/disk/*/static/control/run-valkey-restart.pid | wc -l 2>&1)\n  _ReTest=$(ls /data/disk/*/static/control/run-redis-restart.pid | wc -l 2>&1)\n  if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n    || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n    || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n    || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n    || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n    || [ -e \"/root/.allow.redis.restart.cnf\" 
]; then\n    if [ \"${_VkTest}\" -ge 1 ] || [ \"${_ReTest}\" -ge 1 ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_REDIS_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Redis restart requested but in cooldown; skipped\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      rm -f /data/disk/*/static/control/run-valkey-restart.pid\n      rm -f /data/disk/*/static/control/run-redis-restart.pid\n      _thisErrLog=\"$(date) Redis Server Restart Requested\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _redis_restart \"Redis Server Restart Requested\"\n    fi\n  fi\n}\n\n_redis_health_check_fix() {\n\n  # Double-check health: process + socket PING\n  _ok_proc=false\n  _ok_ping=false\n\n  pgrep -f \"/usr/bin/redis-server\" >/dev/null 2>&1 && _ok_proc=true\n  if [ -x \"/usr/bin/redis-cli\" ]; then\n    if _redis_ping_ok; then\n      _ok_ping=true\n    fi\n  fi\n\n  if ! ${_ok_proc} || ! ${_ok_ping}; then\n    sleep 2\n    _ok_proc=false; _ok_ping=false\n    pgrep -f \"/usr/bin/redis-server\" >/dev/null 2>&1 && _ok_proc=true\n    if [ -x \"/usr/bin/redis-cli\" ]; then\n      if _redis_ping_ok; then\n        _ok_ping=true\n      fi\n    fi\n  fi\n\n  if ! ${_ok_proc} || ! 
${_ok_ping}; then\n    _now=$(date +%s)\n    if [ -s \"${_cd}\" ]; then\n      _ts=$(tr -d '\\n' < \"${_cd}\")\n      if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_REDIS_COOLDOWN_SECS}\" ]; then\n        echo \"$(date) INFO: Redis unhealthy but in cooldown; skipping restart\" >> ${_pthOml}\n        return 0\n      fi\n    fi\n\n    echo \"$(date) Redis health failed (proc=${_ok_proc} ping=${_ok_ping}) — restart\" >> ${_pthOml}\n    service redis-server restart\n    wait\n    sleep 3\n\n    # Post-restart verification\n    _ok_proc=false; _ok_ping=false\n    pgrep -f \"/usr/bin/redis-server\" >/dev/null 2>&1 && _ok_proc=true\n    if [ -x \"/usr/bin/redis-cli\" ]; then\n      if _redis_ping_ok; then\n        _ok_ping=true\n      fi\n    fi\n\n    date +%s > \"${_cd}\"\n\n    if ${_ok_proc} && ${_ok_ping}; then\n      echo \"$(date) Redis was down, restarted\" >> ${_pthOml}\n      _incident_email_report \"Redis was down, restarted\"\n      echo >> ${_pthOml}\n      exit 0\n    else\n      echo \"$(date) Redis still unhealthy after restart; forced stop/start\" >> ${_pthOml}\n      _redis_restart \"Redis required stop/start after failed restart\"\n    fi\n  fi\n}\n\nif [ -e \"/run/boa_redis_auto_healing.pid\" ]; then\n  _ALLOW_CTRL=NO\nelse\n  _ALLOW_CTRL=YES\nfi\n\nif [ ! -e \"/run/max_load.pid\" ] && [ ! -e \"/run/critical_load.pid\" ]; then\n  if [ -x \"/etc/init.d/redis-server\" ] \\\n    && [ -x \"/usr/bin/redis-server\" ]; then\n    _redis_health_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && _redis_slow_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && _redis_connection_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && _redis_bind_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && [ -d \"/data/u\" ] && _if_redis_restart\n  fi\nfi\n\necho DONE!\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/scan_nginx.sh",
    "content": "#!/bin/bash\n\n# ==============================================================================\n# Script to Monitor and Block Suspicious NGINX Activity (DoS and DDoS)\n# ==============================================================================\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n# ==============================\n# Configuration and Environment\n# ==============================\n\n# Enable verbose mode if debug configuration exists\nif [[ -e \"/root/.debug.monitor.cnf\" ]]; then\n  set -x\nfi\n\n# Enable strict error handling for debugging only\n# set -euo pipefail\n\n# Set environment variables\nexport HOME='/root'\nexport PATH='/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin'\n\n# Set Internal Field Separator for safe parsing\nIFS=$'\\n\\t'\n\n# Constants\n_TIMES=$(date +%y%m%d-%H%M%S)\n_MYIP=$(< /root/.found_correct_ipv4.cnf)\n\n# Function to perform rounded division\n_inc_round_division() {\n  local numerator=$1\n  local denominator=$2\n  echo $(( (numerator + (denominator / 2)) / denominator ))\n}\n\n# ==============================\n# Default Configuration Values\n# ==============================\n\n# Default number of lines to process from access.log (positive 
integer)\n_NGINX_DOS_LINES=1999\n\n# Default max allowed number for blocking (positive integer)\n_NGINX_DOS_LIMIT=399\n\n# Default mode (1 or 2)\n_NGINX_DOS_MODE=2\n\n# Default divisor for increments (positive integer)\n_NGINX_DOS_DIV_INC_NR=40\n_NGINX_DOS_DIV_INC_S_NR=$(( _NGINX_DOS_DIV_INC_NR * 2 ))\n\n# Default min allowed number for increments (positive integer)\n_NGINX_DOS_INC_MIN=3\n\n# Default logging mode, can be SILENT (none), NORMAL or VERBOSE\n_NGINX_DOS_LOG=VERBOSE\n\n# ---- DDoS / Shared-UA flood detection ----\n# Minimum number of distinct IPs sharing the same User-Agent string within the\n# current scan window that must be observed before the UA is considered an\n# attack fingerprint. Tune upward on high-traffic servers.\n_NGINX_DDOS_UA_IP_THRESHOLD=20\n\n# Minimum per-UA request count (across all IPs) in the scan window required to\n# declare a DDoS. This catches fast floods even when IPs are few but requests\n# are extreme.\n_NGINX_DDOS_UA_REQ_THRESHOLD=200\n\n# When a DDoS UA is confirmed, only block IPs that contributed at least this\n# many requests with that UA. Keeps incidental single-request hits (e.g. 
from\n# a shared Cloudflare egress) out of the block list.\n_NGINX_DDOS_IP_MIN_REQS=3\n\n# Default exclude keywords ('doccomment' unless overridden via the config file)\n_NGINX_DOS_IGNORE=\"doccomment\"\n\n# Default DoS stop patterns (SQL injection signatures unless overridden via the config file)\n_NGINX_DOS_STOP=\"WAITFOR.DELAY|DECLARE.*@x|/\\*\\*/|%27.*%29.*%3B|0x[0-9a-f]{6}\"\n\n# ==============================\n# Load Configuration File\n# ==============================\n\n_CONFIG_FILE=\"/root/.barracuda.cnf\"\n\nif [[ -e \"${_CONFIG_FILE}\" ]]; then\n  # shellcheck source=/dev/null\n  source \"${_CONFIG_FILE}\"\nfi\n\n# ==============================\n# Validate and adjust variables\n# ==============================\n\n# Config Constants\n_MAX_LIMIT=${_NGINX_DOS_LINES}\n_MIN_LIMIT=$(_inc_round_division \"${_MAX_LIMIT}\" \"40\")\n_DEFAULT_LIMIT=$(_inc_round_division \"${_MAX_LIMIT}\" \"5\")\n\n# Validate _NGINX_DOS_INC_MIN: must be a positive integer\nif ! [[ \"${_NGINX_DOS_INC_MIN}\" =~ ^[1-9][0-9]*$ ]]; then\n  echo \"Warning: Invalid _NGINX_DOS_INC_MIN ('${_NGINX_DOS_INC_MIN}'). Setting to default (3).\"\n  _NGINX_DOS_INC_MIN=3\nfi\n\n# Validate _NGINX_DOS_LIMIT: must be a number within the range\nif ! [[ \"${_NGINX_DOS_LIMIT}\" =~ ^[0-9]+$ ]] || (( _NGINX_DOS_LIMIT < _MIN_LIMIT || _NGINX_DOS_LIMIT > _MAX_LIMIT )); then\n  echo \"Warning: Invalid _NGINX_DOS_LIMIT ('${_NGINX_DOS_LIMIT}'). Setting to default (${_DEFAULT_LIMIT}).\"\n  _NGINX_DOS_LIMIT=${_DEFAULT_LIMIT}\nfi\n\n# Calculate increments with rounded division\n_INC_NR=$(_inc_round_division \"${_NGINX_DOS_LIMIT}\" \"${_NGINX_DOS_DIV_INC_NR}\")\n_INC_S_NR=$(_inc_round_division \"${_NGINX_DOS_LIMIT}\" \"${_NGINX_DOS_DIV_INC_S_NR}\")\n\n# Ensure increments are at least _NGINX_DOS_INC_MIN\n_INC_NR=$(( _INC_NR < _NGINX_DOS_INC_MIN ? _NGINX_DOS_INC_MIN : _INC_NR ))\n_INC_S_NR=$(( _INC_S_NR < _NGINX_DOS_INC_MIN ? 
_NGINX_DOS_INC_MIN : _INC_S_NR ))\n\necho \"CONFIG: _NGINX_DOS_LIMIT is ${_NGINX_DOS_LIMIT}\"\necho \"CONFIG: _NGINX_DOS_LINES is ${_NGINX_DOS_LINES}\"\necho \"CONFIG: _INC_NR is ${_INC_NR}\"\necho \"CONFIG: _INC_S_NR is ${_INC_S_NR}\"\n\n# ==============================\n# Declare Associative Arrays\n# ==============================\n\ndeclare -A _BANNED_IPS\ndeclare -A _ALLOWED_IPS\ndeclare -A _LOGGED_IN_IPS\ndeclare -A _COUNTERS\ndeclare -A _LI_CNT\ndeclare -A _PX_CNT\n\n# DDoS / Shared-UA detection arrays\n# _UA_IP_COUNT[ua]    = number of distinct real IPs seen with this UA\n# _UA_REQ_COUNT[ua]   = total requests seen with this UA\n# _UA_IP_SET[ua:ip]   = sentinel: marks that IP 'ip' was seen with UA 'ua'\n# _UA_IP_LIST[ua]     = space-separated list of real IPs using this UA\n# _UA_IP_REQS[ua:ip]  = request count per (UA, IP) pair\ndeclare -A _UA_IP_COUNT\ndeclare -A _UA_REQ_COUNT\ndeclare -A _UA_IP_SET\ndeclare -A _UA_IP_LIST\ndeclare -A _UA_IP_REQS\n\n# Debugging: Confirm associative arrays are declared\nif [[ -e \"/root/.debug.monitor.cnf\" ]]; then\n  declare -p _BANNED_IPS _ALLOWED_IPS _LOGGED_IN_IPS _COUNTERS _LI_CNT _PX_CNT\n  declare -p _UA_IP_COUNT _UA_REQ_COUNT _UA_IP_SET _UA_IP_LIST _UA_IP_REQS\n  echo \"DEBUG: Associative arrays declared (DoS + DDoS sets).\"\nfi\n\n# ==============================\n# Helper Functions\n# ==============================\n\n# Function for logging in verbose mode\n_verbose_log() {\n  local _reason=\"${1}\"\n  local _message=\"${2}\"\n  local _log_file=\"/dev/null\"\n\n  # Define log file paths\n  local _general_log=\"/var/log/scan_nginx_debug.log\"\n  local _flood_log=\"/var/log/scan_nginx_flood_debug.log\"\n  local _admin_log=\"/var/log/scan_nginx_admin_debug.log\"\n  local _other_log=\"/var/log/scan_nginx_other_debug.log\"\n\n  # Check if logging is enabled\n  if [[ -e \"/root/.debug.monitor.log.cnf\" || \"${_NGINX_DOS_LOG}\" =~ ^(NORMAL|VERBOSE)$ ]]; then\n    if [[ \"${_reason}\" =~ Counter && \"${_NGINX_DOS_LOG}\" 
=~ VERBOSE ]]; then\n      _log_file=\"${_flood_log}\"\n    elif [[ \"${_reason}\" =~ \"Admin URI To Ignore\" && \"${_NGINX_DOS_LOG}\" =~ VERBOSE ]]; then\n      _log_file=\"${_admin_log}\"\n    elif [[ \"${_reason}\" =~ \"Other URI To Ignore\" && \"${_NGINX_DOS_LOG}\" =~ VERBOSE ]]; then\n      _log_file=\"${_other_log}\"\n    else\n      _log_file=\"${_general_log}\"\n    fi\n\n    # Generate timestamp\n    _timestamp=$(date)\n\n    # Write to the appropriate log file using printf\n    printf \"%s %s REASON: %s\\n\" \"${_timestamp}\" \"${_reason}\" \"${_message}\" >> \"${_log_file}\"\n  fi\n}\n\n# Function to validate IP format\n_validate_ip() {\n  local _IP=\"$1\"\n  # Remove any trailing punctuation (comma, period)\n  _IP=\"${_IP%,}\"\n  _IP=\"${_IP%.}\"\n  if [[ \"${_IP}\" =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n    # Further validate each octet is between 0 and 255 (force base-10 so octets with leading zeros are not parsed as octal)\n    IFS='.' read -r _a _b _c _d <<< \"${_IP}\"\n    if (( 10#${_a} <= 255 && 10#${_b} <= 255 && 10#${_c} <= 255 && 10#${_d} <= 255 )); then\n      return 0\n    fi\n  fi\n  return 1\n}\n\n# NOTE: Removed _resolve_real_ip_traversal function (its logic is inlined in the main loop for performance)\n\n# Function to check if an IP is banned using associative array\n_is_banned_or_allowed() {\n  local _IP=\"$1\"\n  if [[ -n \"${_BANNED_IPS[\"${_IP}\"]}\" ]]; then\n    _verbose_log \"${_IP}\" \"_is_banned_or_allowed\"\n    echo \"=== _is_banned_or_allowed ${_IP} ===\"\n    return 0\n  fi\n  return 1\n}\n\n# Function to check if an IP is allowed (local) using associative array\n_is_allowed_local() {\n  local _IP=\"$1\"\n  if [[ -n \"${_ALLOWED_IPS[\"${_IP}\"]}\" ]]; then\n    _verbose_log \"${_IP}\" \"_is_allowed_local\"\n    echo \"=== _is_allowed_local ${_IP} ===\"\n    return 0\n  fi\n  return 1\n}\n\n# Function to check if an IP is logged in using associative array\n_is_logged_in() {\n  local _IP=\"$1\"\n  if [[ -n \"${_LOGGED_IN_IPS[\"${_IP}\"]}\" ]]; then\n    _verbose_log \"${_IP}\" \"_is_logged_in\"\n    echo \"=== 
_is_logged_in ${_IP} ===\"\n    return 0\n  fi\n  return 1\n}\n\n# Function to log and block an IP\n_block_ip() {\n  local _IP=\"$1\"\n  # Append to web.log if not already present (use in-memory cache to avoid grep each time)\n  if [[ -z \"${_BANNED_IPS[\"${_IP}\"]}\" ]]; then\n    _verbose_log \"${_IP} # [x${_sumar}] ${_TIMES}\" \"_block_ip\"\n    echo \"${_IP} # [x${_sumar}] ${_TIMES}\" >> /var/xdrago/monitor/log/web.log\n    echo \"${_IP} # [x${_sumar}] ${_TIMES}\" >> /var/xdrago/monitor/log/scan_nginx.archive.log\n    echo \"===[${_sumar}] ${_IP} ADDED TO BLOCK LIST monitor/log/web.log ===\"\n  else\n    echo \"===[${_sumar}] ${_IP} ALREADY LISTED IN monitor/log/web.log ===\"\n  fi\n  # Mark IP as banned in this run to prevent duplicate processing\n  _BANNED_IPS[\"${_IP}\"]=1\n\n  # Block the IP using csf instantly (temporary block for 15 minutes)\n  if [[ -x \"/usr/sbin/csf\" ]] && [[ -e \"/root/.instant.csf.block.cnf\" ]]; then\n    /usr/sbin/csf -td \"${_IP}\" 900 -p 80\n    /usr/sbin/csf -td \"${_IP}\" 900 -p 443\n    [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n  fi\n}\n\n# Function to increment counters based on specific suspicious log patterns\n_if_increment_counters() {\n  if [[ \"${_IP}\" = \"unknown\" ]]; then\n    (( _COUNTERS[\"${_IP}\"] += _INC_NR ))\n    _verbose_log \"Counter++ ${_INC_NR} for IP ${_IP}: ${_COUNTERS[\"${_IP}\"]}\" \"unknown\"\n  fi\n  # Combine checks for HTTP status 400, 404, 403, 410, 444, 500 to increment counters in one go\n  if [[ \"${_line}\" =~ \\\"\\ (400|404|403|410|444|500) ]]; then\n    local _code=\"${BASH_REMATCH[1]}\"\n    (( _COUNTERS[\"${_IP}\"] += _INC_NR ))\n    _verbose_log \"Counter++ ${_INC_NR} for IP ${_IP}: ${_COUNTERS[\"${_IP}\"]}\" \"${_code} flood protection\"\n  fi\n  if [[ \"${_line}\" =~ wp-(content|admin|includes|json) ]]; then\n    (( _COUNTERS[\"${_IP}\"] += _INC_NR ))\n    _verbose_log \"Counter++ ${_INC_NR} for IP ${_IP}: 
${_COUNTERS[\"${_IP}\"]}\" \"wp-x flood protection\"\n  fi\n  if [[ \"${_line}\" =~ (POST|GET)\\ /user/login ]]; then\n    (( _COUNTERS[\"${_IP}\"] += _INC_S_NR ))\n    _verbose_log \"Counter++ ${_INC_S_NR} for IP ${_IP}: ${_COUNTERS[\"${_IP}\"]}\" \"/user/login flood protection\"\n  fi\n}\n\n# Function to process each IP\n_process_ip() {\n  local _IP=\"$1\"\n  local _COUNT_REF=\"$2\"\n  local _line=\"$3\"\n  local _IGNORE_ADMIN=0\n  local _IGNORE_OTHER=0\n\n  # Validate that _COUNT_REF is a recognized associative array\n  if [[ \"${_COUNT_REF}\" != \"_LI_CNT\" && \"${_COUNT_REF}\" != \"_PX_CNT\" ]]; then\n    _verbose_log \"Error: _COUNT_REF '${_COUNT_REF}' is not a recognized associative array\" \"_process_ip\"\n    echo \"Error: _COUNT_REF '${_COUNT_REF}' is not a recognized associative array.\"\n    return\n  fi\n\n  # Reference the appropriate counter array\n  local -n _COUNTERS=${_COUNT_REF}\n\n  # Validate IP format\n  if ! _validate_ip \"${_IP}\"; then\n    _verbose_log \"Invalid IP format: ${_IP} -- Skipping\" \"_validate_ip\"\n    echo \"Invalid IP format: ${_IP} -- Skipping.\"\n    return\n  fi\n\n  # Skip private network and localhost IPs immediately\n  if [[ \"${_IP}\" =~ ^(10\\.|127\\.|169\\.254\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.) ]]; then\n    _verbose_log \"Private IP ${_IP} -- Skipping\" \"_process_ip\"\n    echo \"Private IP ${_IP} -- Skipping.\"\n    return\n  fi\n\n  # Only examine lines that are GET/HEAD/POST (ignore lines with \" 301\" redirect)\n  if [[ \"${_line}\" =~ (GET|HEAD|POST) && ! 
\"${_line}\" =~ \\\"\\ 301 ]]; then\n\n    # Define admin URIs to ignore (combine multiple patterns into one regex for efficiency)\n    if [[ \"${_line}\" =~ (GET|POST)\\ /([a-z]{2}/)?(admin/content|quickedit|node/add|node/[0-9]+/edit|entity_reference_autocomplete|(hosting|system|admin|app|ckeditor)/|entity-browser|contextual/render|views-bulk-operations|civicrm|batch|media/browser).*\\\"\\ (200|302) ]]; then\n      _IGNORE_ADMIN=1\n    fi\n    # If an admin request resulted in a 403 or contains typical WP paths, do not ignore (these might be attacks)\n    if [[ \"${_line}\" =~ (GET|HEAD|POST)\\ /.*\\\"\\ 403 ]] || [[ \"${_line}\" =~ wp-(content|admin|includes|json) ]]; then\n      _IGNORE_ADMIN=0\n    fi\n    if [[ \"${_IGNORE_ADMIN}\" -eq 1 ]]; then\n      _verbose_log \"Admin URI To Ignore\" \"${_line}\"\n      return\n    fi\n\n    # Define other patterns to skip (combined multiple checks into one conditional with OR)\n    if [[ \"${_line}\" =~ (GET|POST)\\ /([a-z]{2}/)?advagg.*\\\"\\ (200|302) || \"${_line}\" =~ /files/css/css_ || \"${_line}\" =~ /files/js/js_ || \"${_line}\" =~ /files/advagg_ || \"${_line}\" =~ /files/(imagecache|styles) || \"${_line}\" =~ (ajax|autocomplete|shs).*\\\"\\ (200|302) || \"${_line}\" =~ (plupload|json|api/rest).*\\\"\\ (200|302) || \"${_line}\" =~ GET\\ /(filefield/progress|files/progress|file/progress|elfinder/connector).*\\\"\\ (200|302) || \"${_line}\" =~ POST\\ /js/.*\\\"\\ (200|302) || \"${_line}\" =~ /files/media.*\\\"\\ (200|302) || \"${_line}\" =~ GET\\ /.*\\.(mp4|m4a|flv|avi|mpeg|mov|wmv|mp3|ogg|ogv|wav|midi|zip|tar|tgz|rar|dmg|exe|apk|pxl|ipa|jpe?g|gif|png|ico).*\\\"\\ (200|302) || \"${_line}\" =~ GET\\ /timemachine/[0-9]{4}/.*\\\"\\ (200|302) || \"${_line}\" =~ POST\\ /.*/(cart/checkout|embed/preview).*\\\"\\ (200|302) || \"${_line}\" =~ files\\.aegir\\.cc ]]; then\n      _IGNORE_OTHER=1\n    fi\n    # Exclude lines containing configured ignore keywords or default 'doccomment'\n    if [[ -n \"${_NGINX_DOS_IGNORE}\" ]]; 
then\n      if [[ \"${_line}\" =~ (${_NGINX_DOS_IGNORE}).*\\\"\\ (200|302) ]]; then\n        _IGNORE_OTHER=1\n      fi\n    else\n      if [[ \"${_line}\" =~ doccomment.*\\\"\\ (200|302) ]]; then\n        _IGNORE_OTHER=1\n      fi\n    fi\n    # If the request resulted in a 403 or contains WP paths, do not ignore (likely malicious traffic)\n    if [[ \"${_line}\" =~ (GET|HEAD|POST)\\ /.*\\\"\\ 403 ]] || [[ \"${_line}\" =~ wp-(content|admin|includes|json) ]]; then\n      _IGNORE_OTHER=0\n    fi\n    if [[ \"${_IGNORE_OTHER}\" -eq 1 ]]; then\n      _verbose_log \"Other URI To Ignore\" \"${_line}\"\n      return\n    fi\n\n    # Skip processing if IP is whitelisted in CSF allow list (cached in memory)\n    if [[ -n \"${_CSF_ALLOW_IPS[\"${_IP}\"]}\" ]]; then\n      return\n    fi\n\n    # Initialize or increment the counter safely for this IP\n    if [[ -v _COUNTERS[\"${_IP}\"] ]]; then\n      (( _COUNTERS[\"${_IP}\"]++ ))\n    else\n      _COUNTERS[\"${_IP}\"]=1\n    fi\n  fi\n\n  # Additional counting based on mode (only if not ignored by above filters)\n  if [[ \"${_IGNORE_OTHER}\" -eq 0 && \"${_IGNORE_ADMIN}\" -eq 0 ]]; then\n    _if_increment_counters\n    if [[ \"${_NGINX_DOS_MODE}\" -eq 1 ]]; then\n      if [[ \"${_line}\" =~ POST\\ /([a-z]{2}/)?(user|user/(register|pass|login)|node/add) ]]; then\n        (( _COUNTERS[\"${_IP}\"] += 5 ))\n        _verbose_log \"Counter++ 5 for IP ${_IP}: ${_COUNTERS[\"${_IP}\"]}\" \"/user/ and /node/add POST flood protection\"\n      fi\n      if [[ \"${_line}\" =~ GET\\ /([a-z]{2}/)?node/add ]]; then\n        (( _COUNTERS[\"${_IP}\"] += 3 ))\n        _verbose_log \"Counter++ 3 for IP ${_IP}: ${_COUNTERS[\"${_IP}\"]}\" \"/node/add GET flood protection\"\n      fi\n      if [[ -n \"${_NGINX_DOS_STOP}\" ]]; then\n        if [[ \"${_line}\" =~ (${_NGINX_DOS_STOP}) ]]; then\n          (( _COUNTERS[\"${_IP}\"] += _NGINX_DOS_LIMIT ))\n          _verbose_log \"Counter++ ${_NGINX_DOS_LIMIT} for IP ${_IP}: ${_COUNTERS[\"${_IP}\"]}\" 
\"_NGINX_DOS_STOP protection\"\n        fi\n      fi\n    else\n      if [[ -n \"${_NGINX_DOS_STOP}\" ]]; then\n        if [[ \"${_line}\" =~ (${_NGINX_DOS_STOP}) ]]; then\n          (( _COUNTERS[\"${_IP}\"] += _NGINX_DOS_LIMIT ))\n          _verbose_log \"Counter++ ${_NGINX_DOS_LIMIT} for IP ${_IP}: ${_COUNTERS[\"${_IP}\"]}\" \"_NGINX_DOS_STOP protection\"\n        fi\n      fi\n    fi\n  fi\n}\n\n# Function to handle blocking actions\n_handle_blocking() {\n  local -n _COUNTERS=$1\n  local _TYPE=$2\n\n  # Debug: confirm that _COUNTERS is referencing the intended array\n  if [[ -n \"${1}\" && -e \"/root/.debug.monitor.cnf\" ]]; then\n    declare -p _COUNTERS\n    echo \"DEBUG: _COUNTERS in _handle_blocking is referencing '${1}'\"\n  fi\n\n  for _IP in \"${!_COUNTERS[@]}\"; do\n    local _COUNT=\"${_COUNTERS[\"${_IP}\"]}\"\n    local _CRITNUMBER=\"${_NGINX_DOS_LIMIT}\"\n    local _MININUMBER=$(( (_CRITNUMBER + 1) / 2 ))  # handle integer division rounding\n\n    if (( _COUNT > _MININUMBER )); then\n      if _is_logged_in \"${_IP}\"; then\n        _CRITNUMBER=9999\n      fi\n      if [[ \"${_IP}\" == \"${_MYIP}\" ]]; then\n        _CRITNUMBER=9998\n      fi\n\n      echo \"===[${_CRITNUMBER}] MAX ${_TYPE} critnumber for ${_IP} ===\"\n      echo \"===[${_COUNT}] COUNTER ${_TYPE} counter for ${_IP} ===\"\n\n      # Skip blocking if IP is in local allow list\n      if _is_allowed_local \"${_IP}\"; then\n        continue\n      fi\n      # Skip if IP was already banned/processed\n      if _is_banned_or_allowed \"${_IP}\"; then\n        continue\n      fi\n\n      if (( _COUNT > _CRITNUMBER )); then\n        _sumar=\"${_COUNT}\"\n        echo \"=== block_ip ${_IP} ${_COUNT}/${_CRITNUMBER} ===\"\n        _block_ip \"${_IP}\"\n      fi\n    fi\n  done\n}\n\n# ==============================\n# DDoS / Shared-UA Detection\n# ==============================\n\n# _track_ua_ip IP UA\n# Called for every non-ignored request line during the main scan loop.\n# Builds per-UA 
statistics: distinct IP count, total request count, and\n# per-(UA,IP) request count. All work is done in-memory using associative\n# arrays; no subshells or external processes are spawned.\n_track_ua_ip() {\n  local _IP=\"$1\"\n  local _UA=\"$2\"\n\n  # Skip private/localhost IPs (they cannot be blocked anyway)\n  if [[ \"${_IP}\" =~ ^(10\\.|127\\.|169\\.254\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.) ]]; then\n    return\n  fi\n  # Skip already-banned IPs to avoid inflating counts needlessly\n  if [[ -n \"${_BANNED_IPS[\"${_IP}\"]}\" ]]; then\n    return\n  fi\n  # Skip whitelisted IPs\n  if [[ -n \"${_ALLOWED_IPS[\"${_IP}\"]}\" ]] || [[ -n \"${_CSF_ALLOW_IPS[\"${_IP}\"]}\" ]]; then\n    return\n  fi\n\n  local _UA_IP_KEY=\"${_UA}:${_IP}\"\n\n  # Increment total-request counter for this UA\n  if [[ -v _UA_REQ_COUNT[\"${_UA}\"] ]]; then\n    (( _UA_REQ_COUNT[\"${_UA}\"]++ ))\n  else\n    _UA_REQ_COUNT[\"${_UA}\"]=1\n  fi\n\n  # Increment per-(UA,IP) request counter\n  if [[ -v _UA_IP_REQS[\"${_UA_IP_KEY}\"] ]]; then\n    (( _UA_IP_REQS[\"${_UA_IP_KEY}\"]++ ))\n  else\n    _UA_IP_REQS[\"${_UA_IP_KEY}\"]=1\n  fi\n\n  # Track distinct IPs per UA (use sentinel key to avoid duplicates)\n  if [[ -z \"${_UA_IP_SET[\"${_UA_IP_KEY}\"]}\" ]]; then\n    _UA_IP_SET[\"${_UA_IP_KEY}\"]=1\n    if [[ -v _UA_IP_COUNT[\"${_UA}\"] ]]; then\n      (( _UA_IP_COUNT[\"${_UA}\"]++ ))\n    else\n      _UA_IP_COUNT[\"${_UA}\"]=1\n    fi\n    # Append IP to the list for this UA (space-separated; used during blocking phase)\n    _UA_IP_LIST[\"${_UA}\"]=\"${_UA_IP_LIST[\"${_UA}\"]:-}${_UA_IP_LIST[\"${_UA}\"]:+ }${_IP}\"\n  fi\n}\n\n# _handle_ddos_blocking\n# Iterates over all tracked User-Agents. 
When a UA meets both the distinct-IP\n# threshold and the total-request threshold it is declared a DDoS fingerprint\n# and every contributing IP (that sent at least _NGINX_DDOS_IP_MIN_REQS\n# requests with that UA) is individually blocked via _block_ip.\n_handle_ddos_blocking() {\n  local _UA _IP _ip_count _req_count _ip_reqs _UA_IP_KEY\n\n  for _UA in \"${!_UA_REQ_COUNT[@]}\"; do\n    _ip_count=\"${_UA_IP_COUNT[\"${_UA}\"]:-0}\"\n    _req_count=\"${_UA_REQ_COUNT[\"${_UA}\"]:-0}\"\n\n    # Evaluate thresholds\n    if (( _ip_count < _NGINX_DDOS_UA_IP_THRESHOLD && _req_count < _NGINX_DDOS_UA_REQ_THRESHOLD )); then\n      continue\n    fi\n\n    _verbose_log \"DDoS UA detected [${_ip_count} IPs / ${_req_count} reqs]: ${_UA}\" \"_handle_ddos_blocking\"\n    echo \"=== DDoS UA DETECTED [${_ip_count} distinct IPs | ${_req_count} total reqs] ===\"\n    echo \"=== UA fingerprint: ${_UA:0:120} ===\"\n\n    # Walk the IP list for this UA and block qualifying IPs\n    for _IP in ${_UA_IP_LIST[\"${_UA}\"]}; do\n      _UA_IP_KEY=\"${_UA}:${_IP}\"\n      _ip_reqs=\"${_UA_IP_REQS[\"${_UA_IP_KEY}\"]:-0}\"\n\n      if (( _ip_reqs < _NGINX_DDOS_IP_MIN_REQS )); then\n        continue\n      fi\n\n      # Skip already-banned, whitelisted, and logged-in IPs\n      if [[ -n \"${_BANNED_IPS[\"${_IP}\"]}\" ]]; then\n        echo \"===[${_ip_reqs}req] DDoS IP ${_IP} already banned -- skipping ===\"\n        continue\n      fi\n      if [[ -n \"${_ALLOWED_IPS[\"${_IP}\"]}\" ]] || [[ -n \"${_CSF_ALLOW_IPS[\"${_IP}\"]}\" ]]; then\n        echo \"===[${_ip_reqs}req] DDoS IP ${_IP} is whitelisted -- skipping ===\"\n        continue\n      fi\n      if _is_logged_in \"${_IP}\"; then\n        echo \"===[${_ip_reqs}req] DDoS IP ${_IP} is logged-in session -- skipping ===\"\n        continue\n      fi\n      if [[ \"${_IP}\" == \"${_MYIP}\" ]]; then\n        echo \"===[${_ip_reqs}req] DDoS IP ${_IP} is local server IP -- skipping ===\"\n        continue\n      fi\n\n      _sumar=\"${_ip_reqs}\"\n   
   echo \"=== DDoS block_ip ${_IP} [${_ip_reqs} reqs with attack UA] ===\"\n      _block_ip \"${_IP}\"\n    done\n  done\n}\n# ==============================\n\n# Load banned IPs from web.log into associative array (cache already blocked IPs)\n_WEB_LOG=\"/var/xdrago/monitor/log/web.log\"\nif [[ -e \"${_WEB_LOG}\" ]]; then\n  while IFS= read -r _line; do\n    _ip=\"${_line%% *}\"               # extract IP (before first space or comment)\n    _ip=\"${_ip//[^0-9.]/}\"           # clean any non-numeric characters from IP\n    if [[ -n \"${_ip}\" ]]; then\n      _BANNED_IPS[\"${_ip}\"]=1\n    fi\n  done < \"${_WEB_LOG}\"\nfi\n\n# Load allowed local IPs into associative array (IPs that should not be blocked)\n_LOCAL_IP_LIST=\"/root/.local.IP.list\"\nif [[ -e \"${_LOCAL_IP_LIST}\" ]]; then\n  while IFS= read -r _line; do\n    _ip=\"${_line%% *}\"\n    _ip=\"${_ip//[^0-9.]/}\"\n    if [[ -n \"${_ip}\" ]]; then\n      _ALLOWED_IPS[\"${_ip}\"]=1\n    fi\n  done < \"${_LOCAL_IP_LIST}\"\nfi\n\n# Load allowed IPs from CSF allow list (for port 80) into memory to avoid repeated grep operations\ndeclare -A _CSF_ALLOW_IPS\n_CSF_ALLOW_FILE=\"/etc/csf/csf.allow\"\nif [[ -f \"${_CSF_ALLOW_FILE}\" ]]; then\n  while IFS= read -r _line; do\n    if [[ \"${_line}\" =~ ^tcp\\|in\\|d=80\\|s=([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+)\\b ]]; then\n      _ip=\"${BASH_REMATCH[1]}\"\n      _CSF_ALLOW_IPS[\"${_ip}\"]=1\n    fi\n  done < \"${_CSF_ALLOW_FILE}\"\nfi\n\n# ==============================\n# Load Logged-In IPs\n# ==============================\n\n_get_ssh_ips() {\n  netstat -tn | awk '$4 ~ /:22$/ && $6 == \"ESTABLISHED\" { split($5, a, \":\"); print a[1] }' | sort | uniq\n}\n\nif command -v netstat &>/dev/null; then\n  while IFS= read -r _logged_ip; do\n    if _validate_ip \"${_logged_ip}\"; then\n      _LOGGED_IN_IPS[\"${_logged_ip}\"]=1\n    fi\n  done < <(_get_ssh_ips)\nfi\n\n# ==============================\n# Processing the Access Log\n# ==============================\n\n# Use byte offset 
tracking to read only new lines since last run (reduces redundant I/O)\n_OFFSET_FILE=\"/var/log/scan_nginx_lastpos\"\n_last_offset=0\nif [[ -f \"${_OFFSET_FILE}\" ]]; then\n  _last_offset=$(< \"${_OFFSET_FILE}\")\nfi\n_log_file=\"/var/log/nginx/access.log\"\n_current_size=0\nif [[ -f \"${_log_file}\" ]]; then\n  _current_size=$(stat -c %s \"${_log_file}\")\nfi\nif (( _current_size < _last_offset )); then\n  # Log file was rotated or truncated; reset offset to start from beginning\n  _last_offset=0\nfi\n\nif (( _last_offset == 0 )); then\n  # First run or reset: process the last $_NGINX_DOS_LINES lines as a baseline\n  exec 3< <(tail -n \"${_NGINX_DOS_LINES}\" \"${_log_file}\")\nelse\n  # Process only new log entries since the last recorded byte offset\n  exec 3< <(tail -c +$(( _last_offset + 1 )) \"${_log_file}\")\nfi\n\nwhile IFS= read -r _line <&3; do\n  # Extract the first quoted string from the log line (which contains the comma-separated IPs)\n  if [[ \"${_line}\" =~ \\\"([^\\\"]*)\\\" ]]; then\n    _ip_str=\"${BASH_REMATCH[1]}\"\n  else\n    _ip_str=\"\"\n  fi\n\n  # Split the IP string by commas and trim spaces\n  IFS=',' read -ra _ip_array <<< \"${_ip_str}\"\n  for i in \"${!_ip_array[@]}\"; do\n    _ip_array[i]=\"${_ip_array[i]## }\"\n    _ip_array[i]=\"${_ip_array[i]% }\"\n  done\n\n  # Collect only valid IPv4 addresses from the IP list\n  _IP_LIST=()\n  for _ip_candidate in \"${_ip_array[@]}\"; do\n    if [[ \"${_ip_candidate}\" =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n      _IP_LIST+=(\"${_ip_candidate}\")\n    fi\n  done\n\n  # Debug: Print extracted IPs if debug mode is enabled\n  if [[ -e \"/root/.debug.monitor.cnf\" ]]; then\n    echo \"DEBUG: Extracted IPs: ${_IP_LIST[*]}\"\n  fi\n\n  # Assign visitor IP and up to three proxy IPs (from X-Forwarded-For or similar header)\n  _VISITOR=\"${_IP_LIST[0]:-}\"\n  _PROXY1=\"${_IP_LIST[1]:-}\"\n  _PROXY2=\"${_IP_LIST[2]:-}\"\n  _PROXY3=\"${_IP_LIST[3]:-}\"\n\n  # Resolve the real IP by traversing proxies 
(inlined to avoid per-line subshell overhead)\n  _REAL_IP=\"${_VISITOR}\"\n  _PROXIES_TO_CHECK=()\n  for _proxy in \"${_PROXY1}\" \"${_PROXY2}\" \"${_PROXY3}\"; do\n    if [[ \"${_REAL_IP}\" =~ ^(10\\.|127\\.|169\\.254\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.) ]]; then\n      if [[ -n \"${_proxy}\" ]]; then\n        _PROXIES_TO_CHECK+=(\"${_REAL_IP}\")\n        _REAL_IP=\"${_proxy}\"\n      else\n        break\n      fi\n    else\n      break\n    fi\n  done\n  # If the final resolved IP is still private, treat it as a proxy as well and clear real IP\n  if [[ \"${_REAL_IP}\" =~ ^(10\\.|127\\.|169\\.254\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.) ]]; then\n    _PROXIES_TO_CHECK+=(\"${_REAL_IP}\")\n    _REAL_IP=\"\"\n  fi\n  _PROXIES_ARRAY=( \"${_PROXIES_TO_CHECK[@]}\" )\n\n  # Debug: Echo the determined real visitor IP and proxy IPs if debug mode is enabled\n  if [[ -n \"${_REAL_IP}\" && -e \"/root/.debug.monitor.cnf\" ]]; then\n    echo \"=== checking ${_REAL_IP} / _LI_CNT ===\"\n  fi\n  if [[ -e \"/root/.debug.monitor.cnf\" ]]; then\n    for _proxy_ip in \"${_PROXIES_ARRAY[@]}\"; do\n      [[ -n \"${_proxy_ip}\" ]] && echo \"=== checking ${_proxy_ip} / _PX_CNT ===\"\n    done\n  fi\n\n  # Process the real visitor IP (if determined)\n  if [[ -n \"${_REAL_IP}\" ]]; then\n    _process_ip \"${_REAL_IP}\" \"_LI_CNT\" \"${_line}\"\n  fi\n  # Process each proxy IP (if any were identified as needing blocking)\n  for _proxy_ip in \"${_PROXIES_ARRAY[@]}\"; do\n    if [[ -n \"${_proxy_ip}\" ]]; then\n      _process_ip \"${_proxy_ip}\" \"_PX_CNT\" \"${_line}\"\n    fi\n  done\n\n  # ---- DDoS / Shared-UA tracking ----\n  # Extract the User-Agent field (last quoted string in the log line).\n  # The log format ends with: ... 
\"UA\" upstream_time \"cache_status\" proto=...\n  # We match the final double-quoted token that precedes the numeric upstream time.\n  if [[ -n \"${_REAL_IP}\" && \"${_line}\" =~ \\\"([^\\\"]+)\\\"\\ [0-9]+\\.[0-9]+ ]]; then\n    _DDOS_UA=\"${BASH_REMATCH[1]}\"\n    # Only track if UA is non-trivial (longer than 10 chars) and the line is\n    # not a redirect (301) so we stay consistent with _process_ip filtering.\n    if [[ ${#_DDOS_UA} -gt 10 && ! \"${_line}\" =~ \\\"\\ 301 ]]; then\n      _track_ua_ip \"${_REAL_IP}\" \"${_DDOS_UA}\"\n    fi\n  fi\ndone\n\n# Close the file descriptor for the log input\nexec 3<&-\n\n# Record the new end-of-file offset for next run\nif [[ -f \"${_log_file}\" ]]; then\n  stat -c %s \"${_log_file}\" > \"${_OFFSET_FILE}\"\nfi\n\n# ==============================\n# Execute Blocking Logic\n# ==============================\n\n_handle_blocking _LI_CNT \"li_cnt\"\n_handle_blocking _PX_CNT \"px_cnt\"\n\n# DDoS / Shared-UA flood blocking (runs after per-IP counters so _BANNED_IPS\n# is already populated and we avoid double-blocking IPs caught by the DoS pass)\n_handle_ddos_blocking\n\necho \"CONTROL complete for ${_MYIP}\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/segfault_alert.pl",
"content": "#!/usr/bin/perl\n\n### TODO - rewrite this legacy script in bash\n\n$ENV{'HOME'} = '/root';\n$ENV{'PATH'} = '/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';\n\n###\n### This is a segfault monitor for php-fpm and nginx\n###\n\nuse warnings;\nuse File::Spec;\n\nif (!-e \"/data/u/\") {\n  exit;\n}\n$| = 1;\n$this_filename = \"segfault_alert\";\n$s=`uname -n`;\nchomp($s);\n&makeactions;\nprint \" CONTROL 1 done_______________________\\n ...\\n\";\nexit;\n\n#############################################################################\nsub makeactions\n{\n  $this_path = \"/var/xdrago/monitor/log/$this_filename.log\";\n  if (!-e \"$this_path\") {\n    $intro = <<INTRO\nThis report is generated automatically when a new segfault is discovered.\n\nIt may be the result of a bug in the PHP version used, but it is often\ncaused or exposed by something in the affected site's PHP code.\n\nSee the reports linked below to learn more:\n\nhttps://bugs.php.net/bug.php?id=48034\nhttps://drupal.org/node/1462984#comment-5790468\nhttps://drupal.org/node/1366084#comment-5877974\nINTRO\n;\n    $intrx = <<INTROX\nNote that any vhost file (not the site itself) listed below as causing\na segfault has been automatically (re)moved to the /var/backups/segfault/\ndirectory, so the affected site will display the standard UnderConstruction\npage until you run the Verify task in the Ægir control panel for this site.\n\nHowever, if the problem is not fixed and the site still causes a segfault,\nit will be disabled again immediately after the next one, to protect your\nweb server's availability.\nINTROX\n;\n    `echo \"$intro\" >> $this_path`;\n    #`echo \"$intrx\" >> $this_path`;\n  }\n  $this_archive=\"/var/xdrago/monitor/log/$this_filename.archive.log\";\n  if (!-f \"$this_archive\") {\n    `touch $this_archive`;\n  }\n  open (NOT,\"<$this_archive\");\n  @banetable = <NOT>;\n  close (NOT);\n  local(@MYARR)=`tail --lines=999 /var/log/syslog 2>&1`;\n  
local($sumar) = 0;\n  foreach $line (@MYARR) {\n    $line =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,]//g;\n    if ($line =~ /php\\-.*: segfault/i) {\n      local($MONTX, $DAYX, $TIMEX, $rest) = split(/\\s+/,$line);\n      chomp($TIMEX);\n      $TIMEX =~ s/[^0-9\\:]//g;\n      if ($TIMEX =~ /^[0-9]/) {\n        chomp($line);\n        $li_cnt{$TIMEX}++;\n      }\n    }\n  }\n  foreach $TIMEX (sort keys %li_cnt) {\n    $sumar = $sumar + $li_cnt{$TIMEX};\n    local($thissumar) = $li_cnt{$TIMEX};\n    local($blocked) = 0;\n    &check_ip($TIMEX);\n    if (!$blocked && $thissumar > 0) {\n      &trash_it_action($TIMEX,$thissumar);\n    }\n  }\n  print \"\\n===[$sumar]\\tGLOBAL===\\n\\n\";\n  undef (%li_cnt);\n}\n\n#############################################################################\nsub trash_it_action\n{\n  local($CRASH,$COUNTER) = @_;\n  $now_is=`date +%y%m%d-%H%M%S`;\n  chomp($now_is);\n  &find_domain($CRASH);\n  if ($found) {\n    print \"$CRASH [$COUNTER] recorded on [$now_is]\\n\";\n    `echo \"### PHP: CRASH at $CRASH [$COUNTER] discovered on $now_is for $dx\" >> $this_path`;\n    `echo \"### SYS: $sysl\" >> $this_path`;\n    `echo \"### NGX: $ngxl\" >> $this_path`;\n    `echo \"### PTH: $pthl\" >> $this_path`;\n    `echo \"### VHT: $disl\" >> $this_path`;\n    &_send_alert;\n  }\n}\n\n#############################################################################\nsub check_ip\n{\n  local($i) = @_;\n  foreach $line (@banetable) {\n    chomp ($line);\n    if ($line =~ /discovered/) {\n      local($a, $b, $c, $d, $e, $f) = split(/\\s+/,$line);\n      if ($e eq $i) {\n        $blocked = 1;\n        last;\n      }\n    }\n  }\n}\n\n#############################################################################\nsub find_domain\n{\n  local($CRASHED) = @_;\n  $lx = $d;\n  $ngxl=`grep \"$CRASHED.* 502 \" /var/log/nginx/access.log`;\n  $ngxl =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,\\\"]//g;\n  $sysl=`grep \"$CRASHED.*php\\-.*: 
segfault\" /var/log/syslog`;\n  $sysl =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,]//g;\n  local($a, $b, $c, $x, $y) = split(/\\\"\\s+/,\"$ngxl\");\n  local($d, $e) = split(/\\s+/,$b);\n  $d =~ s/[^a-z0-9\\.\\-]//g;\n  if ($d !~ /^$/) {\n    $found = 1;\n    $d =~ s/^www\\.//g;\n    $dx = $d;\n    $pthl=`cat /data/disk/*/.drush/$d.alias.drushrc.php | grep 'site_path' | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`;\n    if ($pthl =~ /(No such file or directory)/) {\n      $pthl=`cat /data/disk/*/.drush/www.$d.alias.drushrc.php | grep 'site_path' | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`;\n    }\n    if ($pthl =~ /(No such file or directory)/) {\n      $pthl=`cat /var/aegir/.drush/$d.alias.drushrc.php | grep 'site_path' | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`;\n    }\n    if ($pthl =~ /(No such file or directory)/) {\n      $pthl=`cat /var/aegir/.drush/www.$d.alias.drushrc.php | grep 'site_path' | cut -d: -f2 | awk '{ print $3}' | sed \"s/[\\,']//g\"`;\n    }\n    local($w, $x, $y, $z) = split(/\\s+/,$pthl);\n    $pthl = $z;\n    chomp ($pthl);\n    local($o, $p, $q, $r) = split(/\\//,$pthl);\n    $rx = $r;\n    $disla = \"/data/disk/$rx/config/server_master/nginx/vhost.d/$d\";\n    $dislb = \"/data/disk/$rx/config/server_master/nginx/vhost.d/www.$d\";\n    `mkdir -p /var/backups/segfault`;\n    if (-f \"$disla\" && $rx !~ /^$/) {\n      #`mv -f $disla /var/backups/segfault/`;\n      #`service nginx reload`;\n      $disl = $disla;\n    }\n    elsif (-f \"$dislb\" && $rx !~ /^$/) {\n      #`mv -f $dislb /var/backups/segfault/`;\n      #`service nginx reload`;\n      $disl = $dislb;\n    }\n    $ngxl =~ s/([\";])/\\\\$1/g;\n    chomp ($ngxl);\n    $sysl =~ s/([\";])/\\\\$1/g;\n    chomp ($sysl);\n  }\n}\n\n#############################################################################\nsub _send_alert\n{\n  my $email;\n  my $cmail;\n  my $this_email;\n  if (open my $fh, '<', '/root/.barracuda.cnf') {\n    while 
(<$fh>) {\n      if (/^\\s*_MY_EMAIL\\s*=\\s*(\\S+)/) {\n        $email = $1;\n        last;\n      }\n    }\n    close $fh;\n  }\n  # Guard: $email stays undefined if /root/.barracuda.cnf is missing or lacks _MY_EMAIL\n  $email =~ s/\\\\+@/@/g if defined $email;\n  $this_email=\"/data/disk/$rx/log/email.txt\";\n  if (-f \"$this_email\") {\n    open (FILE,\"<$this_email\");\n    while (<FILE>) {\n      $cmail = \"$_\";\n    }\n    close (FILE);\n    chomp ($cmail);\n    $cmail =~ s/\\\\+@/@/g;\n  }\n  $mailx_test=`s-nail -V 2>&1`;\n  $t=`date +%y%m%d-%H%M`;\n  chomp($t);\n  if ($email && $cmail && $mailx_test =~ /(built for Linux)/i) {\n    `cat $this_path | s-nail -b $email -s \"PHP Segfault Alert for [$dx] at [$s] on $t\" $cmail`;\n  }\n  `cat /var/xdrago/monitor/log/$this_filename.log >> /var/xdrago/monitor/log/$this_filename.archive.log`;\n  `rm -f /var/xdrago/monitor/log/$this_filename.log`;\n}\n\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/sqlcheck.pl",
    "content": "#!/usr/bin/perl\n\n### TODO - rewrite this legacy script in bash\n\n$ENV{'HOME'} = '/root';\n$ENV{'PATH'} = '/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';\n\nuse warnings;\nuse File::Spec;\n\n$| = 1;\nif (-f \"/run/boa_mysql_auto_healing.pid\") {exit;}\n$status=\"CLEAN\";\n$now_is=`date +%b:%d:%H:%M`;\nchomp($now_is);\n&makeactions;\nif ($status ne \"CLEAN\") {\n  `perl /var/xdrago/checksql.pl`;\n}\nelse {\n  `touch /var/log/boa/last-sqlcheck-clean`;\n}\nexit;\n\n#############################################################################\nsub makeactions\n{\nlocal(@MYARR)=`grep mysql /var/log/syslog | tail --lines=999 2>&1`;\n  foreach $line (@MYARR) {\n    $line =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,]//g;\n    if ($line =~ /(Checking table)/i || $line =~ /(is marked as crashed)/i) {\n      local($MONTX, $DAYX, $TIMEX, $rest) = split(/\\s+/,$line);\n      if ($DAYX =~ /^\\s+/) {\n        $DAYX =~ s/[^0-9]//g;\n      }\n      if ($DAYX !~ /^0/ && $DAYX !~ /[0-9]{2}/) {\n        $DAYX = \"0$DAYX\";\n      }\n      chomp($TIMEX);\n      $TIMEX =~ s/[^0-9\\:]//g;\n      if ($TIMEX =~ /^[0-9]/) {\n        local($HOUR, $MIN, $SEC) = split(/:/,$TIMEX);\n        $log_is=\"$MONTX:$DAYX:$HOUR:$MIN\";\n        if ($now_is eq $log_is) {\n          $status=\"ERROR\";\n          print \"===[$now_is]\\t[$log_is]===\\n\";\n          `echo \"[$now_is]:[$log_is]\" >> /var/log/boa/last-sqlcheck-y-problem`;\n        }\n#         else {\n#           `echo \"[$now_is]:[$log_is]\" >> /var/log/boa/last-sqlcheck-n-problem`;\n#         }\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/system.sh",
"content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/system.incident.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as the root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on a fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n: \"${_CRON_COOLDOWN_SECS:=30}\"\n: \"${_POSTFIX_COOLDOWN_SECS:=30}\"\n: \"${_LFD_COOLDOWN_SECS:=30}\"\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc \"${_SCRIPT}\")\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO   becomes OFF (see below)\n###   YES  becomes CRIT (see below)\n###   MINI becomes CRIT (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   CRIT == Only critical alerts with _lvl=ALERT (default)\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=CRIT}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  
_INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"CRIT\" ;;\n    MINI) _INCIDENT_REPORT=\"CRIT\" ;;\n    OFF|ALL|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"CRIT\" ;;\n  esac\n}\n_normalize_incident_report\n\n###\n### Function to send incident email report\n###\n_incident_email_report() {\n  _check_uptime_grace_period >/dev/null || return 1\n  local _subject=\"${1:-(no subject)}\"\n  local _lvl=\"${2:-INFO}\"\n  _lvl=\"${_lvl^^}\"\n  [ -n \"${_MY_EMAIL}\" ] || return 1\n  # Decide if we should send\n  case \"${_INCIDENT_REPORT}\" in\n    OFF)  return 1 ;;                            # always veto\n    CRIT) [ \"${_lvl}\" = \"ALERT\" ] || return 1 ;; # veto unless ALERT\n    ALL) : ;;                                    # allow\n  esac\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n  echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n  s-nail -s \"Incident Report on ${_hName}: ${_subject}\" \"${_MY_EMAIL}\" < ${_pthOml}\n}\n\n_wkhtmltopdf_php_cli_oom_kill() {\n  echo \"$(date) OOM $1 wkhtmltopdf detected\" >> ${_pthOml}\n  pkill -9 -f wkhtmltopdf\n  echo \"$(date) OOM wkhtmltopdf killed\" >> ${_pthOml}\n  killall -9 sleep &> /dev/null\n  echo \"$(date) OOM wkhtmltopdf incident response completed\" >> ${_pthOml}\n  _incident_email_report \"OOM $1 wkhtmltopdf\"\n  echo >> ${_pthOml}\n  exit 0\n}\n\n_oom_critical_restart() {\n  touch /run/boa_system_auto_healing.pid\n  echo \"$(date) OOM $1 detected\" >> ${_pthOml}\n  sleep 3\n  pkill -9 -f wkhtmltopdf\n  echo \"$(date) OOM wkhtmltopdf killed\" >> ${_pthOml}\n  killall -9 sleep &> /dev/null\n  killall -9 php\n  echo \"$(date) OOM php-cli killed\" >> ${_pthOml}\n  mv -f /var/log/nginx/error.log /var/log/nginx/$(date +%y%m%d-%H%M)-error.log\n  pkill -9 -f nginx\n  echo \"$(date) OOM nginx killed\" >> ${_pthOml}\n  pkill -9 -f 
php-fpm\n  echo \"$(date) OOM php-fpm killed\" >> ${_pthOml}\n  pkill -9 -f java\n  echo \"$(date) OOM solr/jetty killed\" >> ${_pthOml}\n  pkill -9 -f newrelic-daemon\n  echo \"$(date) OOM newrelic-daemon killed\" >> ${_pthOml}\n  if [ -e \"/etc/init.d/valkey-server\" ]; then\n    rm -f /var/lib/valkey/*\n    pkill -9 -f valkey-server\n    echo \"$(date) OOM valkey-server killed\" >> ${_pthOml}\n  elif [ -e \"/etc/init.d/redis-server\" ]; then\n    rm -f /var/lib/redis/*\n    pkill -9 -f redis-server\n    echo \"$(date) OOM redis-server killed\" >> ${_pthOml}\n  fi\n  bash /var/xdrago/move_sql.sh\n  wait\n  echo \"$(date) OOM Percona MySQL Server restarted\" >> ${_pthOml}\n  echo \"$(date) OOM incident response completed\" >> ${_pthOml}\n  _incident_email_report \"OOM $1 system\" \"ALERT\"\n  echo >> ${_pthOml}\n  sleep 3\n  [ -e \"/run/boa_system_auto_healing.pid\" ] && rm -f /run/boa_system_auto_healing.pid\n  exit 0\n}\n\n_system_oom_detection() {\n  _RAM_TOTAL=$(free -mt | grep Mem: | cut -d: -f2 | awk '{ print $1}' 2>&1)\n  _RAM_FREE_TEST=$(free -mt 2>&1)\n  if [[ \"${_RAM_FREE_TEST}\" =~ \"buffers/cache:\" ]]; then\n    _RAM_FREE=$(free -mt | grep /+ | cut -d: -f2 | awk '{ print $2}' 2>&1)\n  else\n    _RAM_FREE=$(free -mt | grep Mem: | cut -d: -f2 | awk '{ print $6}' 2>&1)\n  fi\n  _RAM_PCT_FREE=$(echo \"scale=0; $(bc -l <<< \"${_RAM_FREE} / ${_RAM_TOTAL} * 100\")/1\" | bc 2>&1)\n  _RAM_PCT_FREE=${_RAM_PCT_FREE//[^0-9]/}\n  echo _RAM_TOTAL is ${_RAM_TOTAL}\n  echo _RAM_PCT_FREE is ${_RAM_PCT_FREE}\n  if [ ! 
-z \"${_RAM_PCT_FREE}\" ]; then\n    if [ \"${_RAM_PCT_FREE}\" -le 5 ]; then\n      _oom_critical_restart \"RAM ${_RAM_PCT_FREE}/${_RAM_TOTAL}\"\n    elif [ \"${_RAM_PCT_FREE}\" -le 10 ]; then\n      _CNT=$(pgrep -fc wkhtmltopdf)\n      if (( _CNT > 2 )); then\n        _wkhtmltopdf_php_cli_oom_kill \"RAM ${_RAM_PCT_FREE}/${_RAM_TOTAL}\"\n      fi\n    fi\n  fi\n}\n\n# Function to calculate RAM usage percentage as an integer\n_calculate_ram_usage_percent() {\n  _total_ram_kb=$1\n  _available_ram_kb=$2\n  used_ram_kb=$((_total_ram_kb - _available_ram_kb))\n\n  # Using integer division to get a whole number percentage\n  echo $(( (used_ram_kb * 100) / _total_ram_kb ))\n}\n\n# Function to check and display system info\n_check_system_ram() {\n  # Get the total and available RAM in KB\n  _total_ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')\n  _available_ram_kb=$(grep MemAvailable /proc/meminfo | awk '{print $2}')\n\n  # Calculate RAM usage percentage\n  _ram_usage_percent=$(_calculate_ram_usage_percent ${_total_ram_kb} ${_available_ram_kb})\n}\n\n# Function to check and optimize RAM and disk caches\n_optimize_ram() {\n  _check_system_ram\n  if [ \"${_ram_usage_percent}\" -gt 90 ]; then\n    sync && echo 3 | tee /proc/sys/vm/drop_caches\n  fi\n}\n\n_if_fix_dhcp() {\n  # Determine the correct log file\n  if [ -e \"/var/log/daemon.log\" ]; then\n    _DHCP_LOG=\"/var/log/daemon.log\"\n  else\n    _DHCP_LOG=\"/var/log/syslog\"\n  fi\n\n  # Check if the log file exists\n  if [ -e \"${_DHCP_LOG}\" ]; then\n    # Count the number of DHCP failure entries in the last 3 lines\n    count=$(tail -n 3 \"${_DHCP_LOG}\" | grep -c \"dhclient:.*Failed\")\n\n    # Debugging: Log the count value\n    [ \"$count\" -gt 0 ] && echo \"DHCP failure count: $count\" >> ${_pthOml}\n\n    # Proceed only if there is at least one failure\n    if [ \"$count\" -gt 0 ]; then\n      # Clear existing DHCP entries in csf.allow\n      sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n      wait\n      
# Remove any empty lines\n      sed -i \"/^$/d\" /etc/csf/csf.allow\n\n      # Extract unique IPs from DHCP requests and add them to csf.allow\n      grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n        if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n          IFS='.' read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n          if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n            echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n          fi\n        fi\n      done\n\n      # Reload the firewall\n      if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n        csf -ra &> /dev/null\n        synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      else\n        csf -r &> /dev/null\n      fi\n\n      # Log the error and send an email report\n      _thisErrLog=\"$(date) DHCP error detected, firewall updated\"\n      echo \"${_thisErrLog}\" >> ${_pthOml}\n      _incident_email_report \"DHCP error detected, firewall updated\"\n      echo >> ${_pthOml}\n    fi\n  fi\n}\n\n_cron_duplicate_instances_detection() {\n  _CNT=$(pgrep -fc /usr/sbin/cron)\n  if (( _CNT > 1 )); then\n    # Double-check after a short grace to avoid flapping\n    sleep 3\n    _CNT2=$(pgrep -fc /usr/sbin/cron)\n    if (( _CNT2 > 1 )); then\n      _cd=\"/run/cron-monitor.cooldown\"\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(cat \"${_cd}\" 2>/dev/null | tr -d '\\n')\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_CRON_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Cron duplicates detected but in cooldown; skipping restart\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      killall -9 cron &> /dev/null\n      service cron start &> /dev/null\n      # Cooldown stamp\n      date +%s > \"${_cd}\"\n      _thisErrLog=\"$(date) Too many Cron instances, service restarted (count=${_CNT2})\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      
_incident_email_report \"Too many Cron instances, service restarted (count=${_CNT2})\"\n      echo >> ${_pthOml}\n    fi\n  fi\n}\n\n_syslog_giant_log_detection() {\n  if [ -e \"/etc/cron.daily/logrotate\" ]; then\n    _SYSLOG_SIZE_TEST=$(du -s -h /var/log/syslog 2>/dev/null)\n    if [[ \"${_SYSLOG_SIZE_TEST}\" =~ \"G\" ]]; then\n      echo ${_SYSLOG_SIZE_TEST} too big\n      bash /etc/cron.daily/logrotate &> /dev/null\n      wait\n      _thisErrLog=\"$(date) Syslog ${_SYSLOG_SIZE_TEST} too big, logrotate forced\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _incident_email_report \"Syslog ${_SYSLOG_SIZE_TEST} too big, logrotate forced\"\n      echo >> ${_pthOml}\n    fi\n  fi\n}\n\n_gpg_too_many_instances_detection() {\n  _CNT=$(pgrep -fc gpg-agent)\n  if (( _CNT > 5 )); then\n    pkill -9 -f gpg-agent\n    _thisErrLog=\"$(date) Too many gpg-agent processes killed (count=${_CNT})\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    _incident_email_report \"Too many gpg-agent processes killed (count=${_CNT})\"\n    echo >> ${_pthOml}\n  fi\n}\n\n_dirmngr_too_many_instances_detection() {\n  _CNT=$(pgrep -fc dirmngr)\n  if (( _CNT > 5 )); then\n    pkill -9 -f dirmngr\n    _thisErrLog=\"$(date) Too many dirmngr processes killed (count=${_CNT})\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    _incident_email_report \"Too many dirmngr processes killed (count=${_CNT})\"\n    echo >> ${_pthOml}\n  fi\n}\n\n_ftpd_health_check_fix() {\n  _ftpd_init=\"/usr/local/sbin/pure-config.pl\"\n  _ftpd_conf=\"/usr/local/etc/pure-ftpd.conf\"\n  _ftpd_bind=\"/usr/local/sbin/pure-ftpd\"\n  _ftpd_pid=\"/run/pure-ftpd.pid\"\n  _ftpd_restarted=NO\n  if [ -x \"/usr/local/sbin/pure-ftpd\" ] \\\n    || [ -x \"/usr/local/sbin/pure-config.pl\" ]; then\n    if ! pgrep -f pure-ftpd \\\n      || [ ! 
-e \"/run/pure-ftpd.pid\" ]; then\n      if [ -e \"${_ftpd_conf}\" ]; then\n        pkill -9 -f pure-ftpd || true\n        if [ -x \"${_ftpd_init}\" ]; then\n          ${_ftpd_init} ${_ftpd_conf}\n          _ftpd_restarted=YES\n        elif [ -x \"${_ftpd_bind}\" ]; then\n          ${_ftpd_bind} ${_ftpd_conf}\n          _ftpd_restarted=YES\n        fi\n        if [ \"${_ftpd_restarted}\" = \"YES\" ]; then\n          _thisErrLog=\"$(date) FTPS Server was down, restarted\"\n          echo ${_thisErrLog} >> ${_pthOml}\n          _incident_email_report \"FTPS Server was down, restarted\"\n          echo >> ${_pthOml}\n        fi\n      fi\n    fi\n  fi\n}\n\n_postfix_health_check_fix() {\n  if [ -x \"/etc/init.d/postfix\" ]; then\n    if ! pgrep -f /usr/lib/postfix \\\n      || [ ! -e \"/var/spool/postfix/pid/master.pid\" ]; then\n      # Double-check after a short grace\n      sleep 2\n      if ! pgrep -f /usr/lib/postfix \\\n        || [ ! -e \"/var/spool/postfix/pid/master.pid\" ]; then\n        _cd=\"/run/postfix-monitor.cooldown\"\n        _now=$(date +%s)\n        if [ -s \"${_cd}\" ]; then\n          _ts=$(cat \"${_cd}\" 2>/dev/null | tr -d '\\n')\n          if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_POSTFIX_COOLDOWN_SECS}\" ]; then\n            echo \"$(date) INFO: Postfix unhealthy but in cooldown; skipping restart\" >> ${_pthOml}\n            return 0\n          fi\n        fi\n        service postfix restart\n        wait\n        date +%s > \"${_cd}\"\n        _thisErrLog=\"$(date) Postfix Server was down, restarted\"\n        echo ${_thisErrLog} >> ${_pthOml}\n        _incident_email_report \"Postfix Server was down, restarted\"\n        echo >> ${_pthOml}\n      fi\n    fi\n  fi\n}\n\n_vnstat_health_check_fix() {\n  if [ -x \"/etc/init.d/vnstat\" ] && [ ! -e \"/run/vnstat.pid\" ]; then\n    if ! pgrep -f /usr/sbin/vnstatd \\\n      || [ ! 
-e \"/run/vnstat/vnstat.pid\" ]; then\n      service vnstat restart\n      wait\n      _thisErrLog=\"$(date) VNStat Monitor was down, restarted\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _incident_email_report \"VNStat Monitor was down, restarted\"\n      echo >> ${_pthOml}\n    fi\n  fi\n}\n\n_lfd_health_check_fix() {\n  if [ -x \"/etc/init.d/lfd\" ]; then\n    if ! pgrep -f lfd \\\n      || [ ! -e \"/run/lfd.pid\" ]; then\n      _cd=\"/run/lfd-monitor.cooldown\"\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(cat \"${_cd}\" 2>/dev/null | tr -d '\\n')\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_LFD_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: LFD unhealthy but in cooldown; skipping start\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      service lfd start\n      csf -e\n      # Set cooldown timestamp after attempting recovery\n      date +%s > \"${_cd}\"\n      _thisErrLog=\"$(date) LFD Monitor was down, started\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _incident_email_report \"LFD Monitor was down, started\"\n      echo >> ${_pthOml}\n    fi\n  fi\n}\n\n_if_fix_locked_sshd() {\n  _SSH_LOG=\"/var/log/auth.log\"\n  if [ `tail --lines=10 ${_SSH_LOG} \\\n    | grep --count \"error: Bind to port 22\"` -gt 0 ]; then\n    pkill -9 -f /usr/sbin/sshd || true\n    service ssh start\n    wait\n    _thisErrLog=\"$(date) SSHD BIND PORT error, service will be restarted\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    _incident_email_report \"SSHD BIND PORT error, service will be restarted\"\n    echo >> ${_pthOml}\n  fi\n}\n\n_sshd_health_check_fix() {\n  if [ -x \"/etc/init.d/ssh\" ]; then\n    if ! pgrep -f /usr/sbin/sshd \\\n      || [ ! 
-e \"/run/sshd.pid\" ]; then\n      service ssh start\n      wait\n      _thisErrLog=\"$(date) SSHD Server was down, started\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _incident_email_report \"SSHD Server was down, started\"\n      echo >> ${_pthOml}\n    fi\n  fi\n}\n\n_clamav_health_check_fix() {\n  # Define file paths as variables\n  _allow_conf=\"/root/.allow.clamav.cnf\"\n  _deny_conf=\"/root/.deny.clamav.cnf\"\n  _data_dir=\"/data/u\"\n  _freshclam_pid=\"/run/clamav/freshclam.pid\"\n  _clamd_pid=\"/run/clamav/clamd.pid\"\n  _clamd_service=\"/etc/init.d/clamav-daemon\"\n  _freshclam_service=\"/etc/init.d/clamav-freshclam\"\n  if [ -e \"/run/max_load.pid\" ] || [ -e \"/run/critical_load.pid\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  if [ -e \"${_allow_conf}\" ] \\\n    && [ ! -e \"${_deny_conf}\" ] \\\n    && [ -e \"${_data_dir}\" ] \\\n    && [ -e \"${_clamd_service}\" ] \\\n    && [ -e \"${_freshclam_service}\" ]; then\n    if [ -x \"/etc/init.d/clamav-daemon\" ]; then\n      if ! pgrep -f /usr/sbin/clamd \\\n        || [ ! -e \"/run/clamav/clamd.pid\" ]; then\n        pkill -9 -f /usr/sbin/clamd || true\n        service clamav-daemon start\n        wait\n        sleep 5\n        _thisErrLog=\"$(date) Clamav was down, started\"\n        echo ${_thisErrLog} >> ${_pthOml}\n        _incident_email_report \"Clamav was down, started\"\n        echo >> ${_pthOml}\n      fi\n    fi\n    if [ -x \"/etc/init.d/clamav-freshclam\" ]; then\n      if ! pgrep -f /usr/bin/freshclam \\\n        || [ ! 
-e \"/run/clamav/freshclam.pid\" ]; then\n        pkill -9 -f /usr/bin/freshclam || true\n        service clamav-freshclam start\n        wait\n        sleep 15\n        _thisErrLog=\"$(date) Freshclam was down, started\"\n        echo ${_thisErrLog} >> ${_pthOml}\n        _incident_email_report \"Freshclam was down, started\"\n        echo >> ${_pthOml}\n      fi\n    fi\n  fi\n}\n\n_rsyslog_health_check_fix() {\n  if [ -x \"/etc/init.d/rsyslog\" ]; then\n    if ! pgrep -f /usr/sbin/rsyslogd \\\n      || [ ! -e \"/run/rsyslogd.pid\" ]; then\n      pkill -9 -f /usr/sbin/rsyslogd || true\n      service rsyslog restart\n      wait\n      _thisErrLog=\"$(date) Rsyslog was down, restarted\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _incident_email_report \"Rsyslog was down, restarted\"\n      echo >> ${_pthOml}\n    fi\n  fi\n}\n\n_sshd_health_check_fix\n_if_fix_locked_sshd\n_if_fix_dhcp\n_rsyslog_health_check_fix\n_postfix_health_check_fix\n_cron_duplicate_instances_detection\n_syslog_giant_log_detection\n\nif [ -e \"/run/boa_system_auto_healing.pid\" ] \\\n  || [ -e \"/run/boa_mysql_auto_healing.pid\" ] \\\n  || [ -e \"/run/boa_sql_cluster_backup.pid\" ] \\\n  || [ -e \"/run/mysql_restart_running.pid\" ] \\\n  || [ -e \"/run/boa_sql_backup.pid\" ]; then\n  exit 0\nelse\n  _optimize_ram\n  _system_oom_detection\n  _lfd_health_check_fix\n  _ftpd_health_check_fix\n  _vnstat_health_check_fix\n  _gpg_too_many_instances_detection\n  _dirmngr_too_many_instances_detection\n  _clamav_health_check_fix\nfi\n\necho DONE!\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/unbound.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/unbound.incident.log\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n_cd=\"/run/unbound-monitor.cooldown\"\n: \"${_UNBOUND_COOLDOWN_SECS:=30}\"\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_incident_email_report() {\n  if ! 
_check_uptime_grace_period >/dev/null; then return 1; fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n    s-nail -s \"Incident Report: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_pthOml}\n  fi\n}\n\n_unbound_restart_with_cooldown() {\n  pkill -u unbound -x unbound &> /dev/null\n  service unbound restart &> /dev/null\n  wait\n  echo \"$(date) INFO: Unbound killed and restarted\" >> ${_pthOml}\n  date +%s > \"${_cd}\"\n  sleep 3\n}\n\n_unbound_check_cooldown_status() {\n  # Check cooldown status\n  _in_unbound_cooldown=false\n  if [ -s \"${_cd}\" ]; then\n    _cd_now=$(date +%s)\n    _cd_ts=$(cat \"${_cd}\" 2>/dev/null | tr -d '\\n')\n    if [ -n \"${_cd_ts}\" ] && [ $((_cd_now - _cd_ts)) -lt \"${_UNBOUND_COOLDOWN_SECS}\" ]; then\n      _in_unbound_cooldown=true\n    fi\n  fi\n}\n\n_unbound_config_fix() {\n  _unbound_check_cooldown_status\n  # Check if resolvconf interface is ready for Unbound\n  if [ -x \"/usr/sbin/unbound\" ] \\\n    && [ ! 
-e \"/etc/resolvconf/run/interface/lo.unbound\" ]; then\n    # === cooldown-wrapped restart ===\n    if [ \"${_in_unbound_cooldown}\" = \"true\" ]; then\n      echo \"$(date) INFO: Unbound restart skipped (cooldown active)\" >> ${_pthOml}\n    else\n      mkdir -p /etc/resolvconf/run/interface\n      echo \"nameserver 127.0.0.1\" > /etc/resolvconf/run/interface/lo.unbound\n      [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n      resolvconf -u &> /dev/null\n      _unbound_restart_with_cooldown\n      echo \"$(date) INFO: Unbound restarted after resolvconf interface update\" >> ${_pthOml}\n      echo >> ${_pthOml}\n      exit 0\n    fi\n  fi\n  # Confirm that DNS requests are working\n  if [ -e \"/etc/resolv.conf\" ]; then\n    _RESOLV_LOC=$(grep \"nameserver 127.0.0.1\" /etc/resolv.conf 2>&1)\n    if [[ \"${_RESOLV_LOC}\" =~ \"nameserver 127.0.0.1\" ]]; then\n      _THIS_DNS_TEST=$(host files.aegir.cc 127.0.0.1 -w 8 2>&1)\n      if [[ \"${_THIS_DNS_TEST}\" =~ \"no servers could be reached\" ]]; then\n        if [ \"${_in_unbound_cooldown}\" = \"true\" ]; then\n          echo \"$(date) INFO: Unbound restart skipped (cooldown active)\" >> ${_pthOml}\n        else\n          _unbound_restart_with_cooldown\n          echo \"$(date) INFO: Unbound restarted after DNS check failed\" >> ${_pthOml}\n          echo >> ${_pthOml}\n          exit 0\n        fi\n      fi\n    else\n      if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n        chattr -i /etc/resolv.conf\n        rm -f /etc/resolv.conf\n        echo \"### BOA-DNS-Config ###\" > /etc/resolv.conf\n        echo \"nameserver 127.0.0.1\" >> /etc/resolv.conf\n        echo \"nameserver 1.1.1.1\" >> /etc/resolv.conf\n        echo \"nameserver 8.8.8.8\" >> /etc/resolv.conf\n        echo \"nameserver 9.9.9.9\" >> /etc/resolv.conf\n        chmod 0644 /etc/resolv.conf\n        [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n        if [ \"${_in_unbound_cooldown}\" = \"true\" ]; then\n          echo \"$(date) INFO: Unbound restart skipped (cooldown active)\" >> ${_pthOml}\n        else\n          _unbound_restart_with_cooldown\n          echo \"$(date) INFO: Unbound restarted after /etc/resolv.conf update\" >> ${_pthOml}\n          echo >> ${_pthOml}\n          exit 0\n        fi\n      fi\n    fi\n  fi\n}\n\n_unbound_fix_nomail() {\n  _MAIN_CONF=\"$(unbound-checkconf 2>&1 | awk 'NR==1{print $NF}')\"\n  if [ -z \"${_MAIN_CONF}\" ]; then\n    _MAIN_CONF=\"/usr/etc/unbound/unbound.conf\"\n  fi\n  _CONF_DIR=\"$(dirname \"${_MAIN_CONF}\")/unbound.conf.d\"\n  _DROP_IN=\"${_CONF_DIR}/ci-nomail.conf\"\n  install -d -m 0755 -o root -g root \"${_CONF_DIR}\"\n\ncat >\"${_DROP_IN}\" <<'CONF'\nserver:\n  # SendGrid\n  local-zone: \"sendgrid.com.\" always_nxdomain\n  local-zone: \"sendgrid.net.\" always_nxdomain\n\n  # Mailgun\n  local-zone: \"mailgun.net.\" always_nxdomain\n\n  # SparkPost\n  local-zone: \"sparkpost.com.\" always_nxdomain\n\n  # Postmark\n  local-zone: \"postmarkapp.com.\" always_nxdomain\n\n  # Brevo (Sendinblue)\n  local-zone: \"brevo.com.\" always_nxdomain\n\n  # AWS SES\n  local-zone: \"amazonaws.com.\" always_nxdomain\nCONF\n\n  _INCLUDE_LINE=$(printf 'include-toplevel: \"%s/*.conf\"' \"${_CONF_DIR}\")\n  if ! 
grep -Fq \"${_INCLUDE_LINE}\" \"${_MAIN_CONF}\"; then\n    printf '\\n%s\\n' \"${_INCLUDE_LINE}\" >> \"${_MAIN_CONF}\"\n  fi\n\n  unbound-checkconf \"${_MAIN_CONF}\"\n  service unbound reload\n\n  if dig @127.0.0.1 api.sendgrid.com | grep -q 'status: NXDOMAIN'; then\n    echo \"OK: NXDOMAIN for api.sendgrid.com\"\n  else\n    echo \"WARN: api.sendgrid.com did not return NXDOMAIN\"\n  fi\n}\n\n_unbound_check_nomail() {\n  [ -e \"/etc/default/unbound\" ] && _isNxdEtc=$(grep \"always_nxdomain\" /etc/default/unbound 2>&1)\n  [ -e \"/etc/init.d/unbound\" ] && _isIntUnb=$(grep \"apply_ci_nomail\" /etc/init.d/unbound 2>&1)\n  if [[ \"${_isNxdEtc}\" =~ \"always_nxdomain\" ]] \\\n    && [[ \"${_isIntUnb}\" =~ \"apply_ci_nomail\" ]]; then\n    _isIncTop=$(grep \"include-toplevel\" /usr/etc/unbound/unbound.conf 2>&1)\n    if [ ! -e \"/usr/etc/unbound/unbound.conf.d/ci-nomail.conf\" ] \\\n      || [[ ! \"${_isIncTop}\" =~ \"include-toplevel\" ]]; then\n      _unbound_fix_nomail\n    fi\n    _isActiveCtrl=$(unbound-control list_local_zones | grep -E 'sendgrid' 2>&1)\n    if [[ ! \"${_isActiveCtrl}\" =~ \"sendgrid\" ]]; then\n      service unbound reload &> /dev/null\n    fi\n  fi\n}\n\n_unbound_health_check_fix() {\n  _unbound_check_cooldown_status\n  if ! pgrep -f /usr/sbin/unbound \\\n    || [ ! 
-e \"/run/unbound/unbound.pid\" ]; then\n    _now=$(date +%s)\n    if [ -s \"${_cd}\" ]; then\n      _ts=$(cat \"${_cd}\" 2>/dev/null | tr -d '\\n')\n      if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_UNBOUND_COOLDOWN_SECS}\" ]; then\n        echo \"$(date) INFO: Unbound unhealthy but in cooldown; skipping restart\" >> ${_pthOml}\n        return 0\n      fi\n    fi\n    [ -d /run/unbound ] || mkdir -p /run/unbound\n    [ -d /run/unbound ] && chown -R unbound:unbound /run/unbound\n    mkdir -p /etc/resolvconf/run/interface\n    echo \"nameserver 127.0.0.1\" > /etc/resolvconf/run/interface/lo.unbound\n    [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n    resolvconf -u &> /dev/null\n    _unbound_restart_with_cooldown\n    _thisErrLog=\"$(date) Unbound Server was down, restarted\"\n    echo ${_thisErrLog} >> ${_pthOml}\n    _incident_email_report \"Unbound Server was down, restarted\"\n    echo >> ${_pthOml}\n    exit 0\n  fi\n}\n\n_unbound_duplicate_fix() {\n  _unbound_check_cooldown_status\n  # Detect duplicate/multiple unbound masters and restart if needed\n  _CNT=$(pgrep -fc \"/usr/sbin/unbound\")\n  if (( _CNT > 1 )); then\n    # === cooldown-wrapped restart ===\n    if [ \"${_in_unbound_cooldown}\" = \"true\" ]; then\n      echo \"$(date) INFO: Unbound duplicate-masters restart skipped (cooldown active)\" >> ${_pthOml}\n    else\n      _unbound_restart_with_cooldown\n      echo \"$(date) INFO: Too many Unbound processes killed and service restarted (count=${_CNT})\" >> ${_pthOml}\n      echo >> ${_pthOml}\n      exit 0\n    fi\n  fi\n}\n\nif [ -x \"/etc/init.d/unbound\" ] && [ -x \"/usr/sbin/unbound\" ]; then\n  [ ! -e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n  _unbound_config_fix\n  _unbound_check_nomail\n  _unbound_health_check_fix\n  _unbound_duplicate_fix\nfi\n\necho DONE!\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/monitor/check/valkey.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_pthOml=\"/var/log/boa/valkey.incident.log\"\n_cd=\"/run/valkey-monitor.cooldown\"\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -d /run/valkey ] || mkdir -p /run/valkey\n[ -d /run/valkey ] && chown -R valkey:valkey /run/valkey\n\n# Run only on fully installed system\n[ ! -e \"/var/log/boa/reset_no_new_password.pid\" ] && exit 0\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n: \"${_VALKEY_COOLDOWN_SECS:=30}\"\n\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=MINI}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"MINI\" ;;\n    OFF|ALL|MINI|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"MINI\" ;;\n  esac\n}\n_normalize_incident_report\n\n_incident_email_report() {\n  if ! 
_check_uptime_grace_period >/dev/null; then return 1; fi\n  if [ -n \"${_MY_EMAIL}\" ] && [ \"${_INCIDENT_REPORT}\" = \"ALL\" ]; then\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n    s-nail -s \"Incident Report: ${1} on ${_hName} at $(date)\" ${_MY_EMAIL} < ${_pthOml}\n  fi\n}\n\n_valkey_ping_ok() {\n  # Check if Valkey responds to PING (authenticated or NOAUTH)\n  _sock=\"/run/valkey/valkey.sock\"\n  _cli=\"/usr/bin/valkey-cli\"\n  _pass_file=\"/root/.valkey.pass.txt\"\n  _out=\n  _pass=\n  if [ ! -x \"${_cli}\" ]; then\n    return 1\n  fi\n  if [ -r \"${_pass_file}\" ]; then\n    _pass=\"$(head -n1 \"${_pass_file}\" 2>/dev/null | tr -d '\\r\\n')\"\n  fi\n  if [ -n \"${_pass}\" ]; then\n    _out=\"$(${_cli} -s \"${_sock}\" -a \"${_pass}\" ping 2>&1)\"\n  else\n    _out=\"$(${_cli} -s \"${_sock}\" ping 2>&1)\"\n  fi\n  if echo \"${_out}\" | grep -qi '^PONG$'; then\n    return 0\n  fi\n  if echo \"${_out}\" | grep -qi 'NOAUTH'; then\n    return 0\n  fi\n  return 1\n}\n\n_fpm_reload() {\n  : > /run/fmp_wait.pid\n  : > /run/restarting_fmp_wait.pid\n  sleep 3\n  [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n  mv -f /var/log/php/* /var/backups/php-logs/${_NOW}/ &> /dev/null\n  renice ${_B_NICE} -p $$ &> /dev/null\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n      service \"php${e}-fpm\" reload\n    fi\n  done\n  echo \"$(date) $1 incident PHP-FPM reloaded\" >> ${_pthOml}\n  sleep 1\n  rm -f /run/fmp_wait.pid /run/restarting_fmp_wait.pid\n}\n\n_valkey_restart() {\n  touch /run/boa_valkey_auto_healing.pid\n  sleep 3\n  echo \"$(date) $1 incident detected\" >> ${_pthOml}\n  service valkey-server stop &> /dev/null\n  wait\n  killall -9 valkey-server &> /dev/null\n  rm -f /var/lib/valkey/*\n  service 
valkey-server start &> /dev/null\n  wait\n  echo \"$(date) $1 incident valkey-server restarted\" >> ${_pthOml}\n  if [[ \"${1}\" =~ \"REFUSED\" ]] || [[ \"${1}\" =~ \"SLOW\" ]]; then\n    _fpm_reload \"$1\"\n  fi\n  echo \"$(date) $1 incident response completed\" >> ${_pthOml}\n  date +%s > \"${_cd}\"\n  _incident_email_report \"$1\"\n  echo >> ${_pthOml}\n  [ -e \"/run/boa_valkey_auto_healing.pid\" ] && rm -f /run/boa_valkey_auto_healing.pid\n  exit 0\n}\n\n_valkey_bind_check_fix() {\n  # Bind/socket/address-in-use issues → verify twice; restart only if socket missing\n  _hits=$(tail -n 8 /var/log/valkey/valkey-server.log 2>/dev/null | egrep -ci \"Address already in use\")\n  if [ \"${_hits}\" -gt 0 ]; then\n    sleep 2\n    _hits2=$(tail -n 8 /var/log/valkey/valkey-server.log 2>/dev/null | egrep -ci \"Address already in use\")\n    if [ \"${_hits2}\" -gt 0 ] && [ ! -S \"/run/valkey/valkey.sock\" ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_VALKEY_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Valkey bind/socket conflict but in cooldown; skipping restart\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      echo \"$(date) Valkey bind/socket conflict; restarting\" >> ${_pthOml}\n      _valkey_restart \"ValkeyException BIND PORT\"\n    fi\n  fi\n}\n\n_valkey_connection_check_fix() {\n  # Sustained connection/backlog issues → verify twice; cooldown then restart\n  _hits=$(tail -n 500 /var/log/php/error_log_* 2>/dev/null | egrep -ci \"RedisException: Connection refused\")\n  if [ \"${_hits}\" -gt 19 ]; then\n    sleep 2\n    _hits2=$(tail -n 500 /var/log/php/error_log_* 2>/dev/null | egrep -ci \"RedisException: Connection refused\")\n    if [ \"${_hits2}\" -gt 19 ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt 
\"${_VALKEY_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Valkey connection issues but in cooldown; skipping restart\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      echo \"$(date) Valkey sustained connection issues (${_hits2} hits) — restart\" >> ${_pthOml}\n      _valkey_restart \"ValkeyException REFUSED\"\n    fi\n  fi\n}\n\n_valkey_slow_check_fix() {\n  # Sustained latency/slowlog/accept issues → verify twice; cooldown then restart\n  _hits=$(tail -n 500 /var/log/php/fpm-*-slow.log 2>/dev/null | egrep -ci \"PhpRedis.php\")\n  if [ \"${_hits}\" -gt 19 ]; then\n    sleep 2\n    _hits2=$(tail -n 500 /var/log/php/fpm-*-slow.log 2>/dev/null | egrep -ci \"PhpRedis.php\")\n    if [ \"${_hits2}\" -gt 19 ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_VALKEY_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Valkey latency symptoms but in cooldown; skipping restart\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      echo \"$(date) Valkey sustained latency symptoms (${_hits2} hits) — restart\" >> ${_pthOml}\n      _valkey_restart \"ValkeyException SLOW\"\n    fi\n  fi\n}\n\n_if_valkey_restart() {\n  _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n  _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n  _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n  _VkTest=$(ls /data/disk/*/static/control/run-valkey-restart.pid 2>/dev/null | wc -l)\n  _ReTest=$(ls /data/disk/*/static/control/run-redis-restart.pid 2>/dev/null | wc -l)\n  if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n    || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n    || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n    || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n    || [[ \"${_PrTestMonster}\" =~ \"MONSTER 
]] \\\n    || [ -e \"/root/.allow.valkey.restart.cnf\" ] \\\n    || [ -e \"/root/.allow.redis.restart.cnf\" ]; then\n    if [ \"${_VkTest}\" -ge 1 ] || [ \"${_ReTest}\" -ge 1 ]; then\n      _now=$(date +%s)\n      if [ -s \"${_cd}\" ]; then\n        _ts=$(tr -d '\\n' < \"${_cd}\")\n        if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_VALKEY_COOLDOWN_SECS}\" ]; then\n          echo \"$(date) INFO: Valkey restart requested but in cooldown; skipped\" >> ${_pthOml}\n          return 0\n        fi\n      fi\n      rm -f /data/disk/*/static/control/run-valkey-restart.pid\n      rm -f /data/disk/*/static/control/run-redis-restart.pid\n      _thisErrLog=\"$(date) Valkey Server Restart Requested\"\n      echo ${_thisErrLog} >> ${_pthOml}\n      _valkey_restart \"Valkey Server Restart Requested\"\n    fi\n  fi\n}\n\n_valkey_health_check_fix() {\n\n  # Double-check health: process + socket PING\n  _ok_proc=false\n  _ok_ping=false\n\n  pgrep -f \"/usr/bin/valkey-server\" >/dev/null 2>&1 && _ok_proc=true\n  if [ -x \"/usr/bin/valkey-cli\" ]; then\n    if _valkey_ping_ok; then\n      _ok_ping=true\n    fi\n  fi\n\n  if ! ${_ok_proc} || ! ${_ok_ping}; then\n    sleep 2\n    _ok_proc=false; _ok_ping=false\n    pgrep -f \"/usr/bin/valkey-server\" >/dev/null 2>&1 && _ok_proc=true\n    if [ -x \"/usr/bin/valkey-cli\" ]; then\n      if _valkey_ping_ok; then\n        _ok_ping=true\n      fi\n    fi\n  fi\n\n  if ! ${_ok_proc} || ! 
${_ok_ping}; then\n    _now=$(date +%s)\n    if [ -s \"${_cd}\" ]; then\n      _ts=$(tr -d '\\n' < \"${_cd}\")\n      if [ -n \"${_ts}\" ] && [ $((_now - _ts)) -lt \"${_VALKEY_COOLDOWN_SECS}\" ]; then\n        echo \"$(date) INFO: Valkey unhealthy but in cooldown; skipping restart\" >> ${_pthOml}\n        return 0\n      fi\n    fi\n\n    echo \"$(date) Valkey health failed (proc=${_ok_proc} ping=${_ok_ping}) — restart\" >> ${_pthOml}\n    service valkey-server restart\n    wait\n    sleep 3\n\n    # Post-restart verification\n    _ok_proc=false; _ok_ping=false\n    pgrep -f \"/usr/bin/valkey-server\" >/dev/null 2>&1 && _ok_proc=true\n    if [ -x \"/usr/bin/valkey-cli\" ]; then\n      if _valkey_ping_ok; then\n        _ok_ping=true\n      fi\n    fi\n\n    date +%s > \"${_cd}\"\n\n    if ${_ok_proc} && ${_ok_ping}; then\n      echo \"$(date) Valkey was down, restarted\" >> ${_pthOml}\n      _incident_email_report \"Valkey was down, restarted\"\n      echo >> ${_pthOml}\n      exit 0\n    else\n      echo \"$(date) Valkey still unhealthy after restart; forced stop/start\" >> ${_pthOml}\n      _valkey_restart \"Valkey required stop/start after failed restart\"\n    fi\n  fi\n}\n\nif [ -e \"/run/boa_valkey_auto_healing.pid\" ]; then\n  _ALLOW_CTRL=NO\nelse\n  _ALLOW_CTRL=YES\nfi\n\nif [ ! -e \"/run/max_load.pid\" ] && [ ! -e \"/run/critical_load.pid\" ]; then\n  if [ -x \"/etc/init.d/valkey-server\" ] \\\n    && [ -x \"/usr/bin/valkey-server\" ]; then\n    _valkey_health_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && _valkey_slow_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && _valkey_connection_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && _valkey_bind_check_fix\n    [ \"${_ALLOW_CTRL}\" = \"YES\" ] && [ -d \"/data/u\" ] && _if_valkey_restart\n  fi\nfi\n\necho DONE!\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/move_sql.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n\n_free_memory() {\n  echo \"Freeing memory...\"\n  sync && echo 3 | tee /proc/sys/vm/drop_caches\n}\n\n_create_locks() {\n  echo \"Creating locks...\"\n  : > /run/boa_wait.pid\n  : > /run/fmp_wait.pid\n  : > /run/restarting_fmp_wait.pid\n  : > /run/mysql_restart_running.pid\n  _free_memory\n}\n\n_remove_locks() {\n  echo \"Removing locks...\"\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  rm -f /run/fmp_wait.pid\n  rm -f /run/restarting_fmp_wait.pid\n  rm -f /run/mysql_restart_running.pid\n  _free_memory\n}\n\n_check_running() {\n  if [ -e \"/run/mysql_restart_running.pid\" ]; then\n    echo \"MySQLD restart procedure in progress?\"\n    echo \"Nothing to do, let's quit now. Bye!\"\n    exit 1\n  fi\n}\n\n_start_sql() {\n  _check_running\n  _create_locks\n\n  _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n  if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n    echo \"MySQLD already running?\"\n    echo \"Nothing to do. 
Bye!\"\n    _remove_locks\n    [ \"$1\" != \"chain\" ] && exit 1\n  fi\n\n  echo \"Starting MySQLD again...\"\n\n  if [ -e \"/run/mysqld/mysqld.pid\" ] \\\n    || [ -e \"/run/mysqld/mysqld.sock\" ] \\\n    || [ -e \"/run/mysqld/mysqlx.sock\" ]; then\n    rm -f /run/mysqld/mysql*\n  fi\n  renice ${_B_NICE} -p $$ &> /dev/null\n  service mysql start &> /dev/null\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"Waiting for MySQLD graceful start...\"\n    sleep 3\n  done\n  echo \"MySQLD started\"\n\n  _remove_locks\n  echo \"MySQLD start procedure completed\"\n  [ \"$1\" != \"chain\" ] && exit 0\n}\n\n_stop_sql() {\n  _check_running\n  _create_locks\n\n  echo \"Stopping Nginx now...\"\n  service nginx stop &> /dev/null\n  _IS_NGINX_RUNNING=$(pgrep -f 'nginx: ')\n  until [ -z \"${_IS_NGINX_RUNNING}\" ]; do\n    echo \"Waiting for Nginx graceful shutdown...\"\n    sleep 1\n    _IS_NGINX_RUNNING=$(pgrep -f 'nginx: ')\n  done\n  killall nginx &> /dev/null\n  echo \"Nginx stopped\"\n\n  echo \"Stopping all PHP-FPM instances now...\"\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56 55 54 53\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n      [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n      mv -f /var/log/php/php${e}-fpm-error.log /var/backups/php-logs/${_NOW}/ &> /dev/null\n      service \"php${e}-fpm\" force-quit &> /dev/null\n    fi\n  done\n  _IS_FPM_RUNNING=$(pgrep -f 'php-fpm: ')\n  until [ -z \"${_IS_FPM_RUNNING}\" ]; do\n    echo \"Waiting for PHP-FPM graceful shutdown...\"\n    sleep 1\n    _IS_FPM_RUNNING=$(pgrep -f 'php-fpm: ')\n  done\n  pkill -9 -f php-fpm\n  echo \"PHP-FPM stopped\"\n\n  _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n  if [ ! -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n    _DBS_TEST=\"$(which mysql)\"\n    if [ ! 
-z \"${_DBS_TEST}\" ]; then\n      _DB_SERVER_TEST=$(mysql -V 2>&1)\n    fi\n    if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n      _DB_V=8.4\n    elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n      _DB_V=8.0\n    elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n      _DB_V=5.7\n    fi\n    if [ ! -z \"${_DB_V}\" ]; then\n      echo \"Preparing MySQLD for quick shutdown...\"\n      _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n      mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n        mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n      fi\n    fi\n    mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n    echo \"Stopping MySQLD now...\"\n    service mysql stop &> /dev/null\n    wait\n    rm -f /run/mysqld/mysql*\n  else\n    echo \"MySQLD already stopped?\"\n    echo \"Nothing to do. 
Bye!\"\n    _remove_locks\n    [ \"$1\" != \"chain\" ] && exit 1\n  fi\n\n  until [ -z \"${_IS_MYSQLD_RUNNING}\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    echo \"Waiting for MySQLD graceful shutdown...\"\n    sleep 3\n  done\n  echo \"MySQLD stopped\"\n\n  _remove_locks\n  echo \"MySQLD stop procedure completed\"\n  [ \"$1\" != \"chain\" ] && exit 0\n}\n\n_re_start_sql() {\n  _stop_sql \"chain\"\n  _start_sql \"chain\"\n  _remove_locks\n  exit 0\n}\n\ncase \"$1\" in\n  restart) _re_start_sql ;;\n  start)   _start_sql \"only\" ;;\n  stop)    _stop_sql \"only\" ;;\n  *)       _re_start_sql\n  ;;\nesac\n\n\n"
  },
  {
    "path": "aegir/tools/system/mysql_backup.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n_IS_SQLBACKUP_RUNNING=$(pgrep -f mysql_cluster_backup.sh)\nif [ ! -z \"${_IS_SQLBACKUP_RUNNING}\" ]; then\n  exit 0\nfi\n\nif [ \"${1}\" = \"full\" ] || [ -z \"${1}\" ]; then\n  _THIS_MODE=\"full\"\nelif [ \"${1}\" = \"basic\" ]; then\n  _THIS_MODE=\"basic\"\nfi\n\nif [ \"${_THIS_MODE}\" = \"full\" ]; then\n  echo \"INFO: Starting silent usage report on $(date)\"\n  bash /var/xdrago/usage.sh silent\n  wait\n  echo \"INFO: Completing silent usage report on $(date)\"\nfi\n\n_VM_TEST=\"$(uname -a)\"\nif [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n  _VMFAMILY=\"VS\"\nelse\n  _VMFAMILY=\"XEN\"\nfi\n\nif [ \"${_VMFAMILY}\" = \"VS\" ] && [ \"${_THIS_MODE}\" = \"full\" ]; then\n  _n=$((RANDOM%600+8))\n  echo \"INFO: Waiting ${_n} seconds 1/2 on $(date) before running backup...\"\n  sleep ${_n}\n  _n=$((RANDOM%300+8))\n  echo \"INFO: Waiting ${_n} seconds 2/2 on $(date) before running backup...\"\n  sleep ${_n}\nfi\n\necho \"INFO: Starting dbs backup on $(date)\"\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n# Sanitize to allow only 
digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n_SQL_CACHE_EXC_DEF=\"cache_bootstrap cache_discovery cache_config\"\n\nif [ -e \"/root/.my.cache.exceptions.cnf\" ]; then\n  _SQL_CACHE_EXC_ADD=$(cat /root/.my.cache.exceptions.cnf 2>&1)\n  _SQL_CACHE_EXC=\"${_SQL_CACHE_EXC_DEF} ${_SQL_CACHE_EXC_ADD}\"\nelse\n  _SQL_CACHE_EXC=\"${_SQL_CACHE_EXC_DEF}\"\nfi\n\n_BACKUPDIR=/data/disk/arch/sql\n_DATE=$(date +%y%m%d-%H%M%S)\n_DOW=$(date +%u)\n_hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n_DOW=${_DOW//[^1-7]/}\n_DOM=$(date +%e)\n_DOM=${_DOM//[^0-9]/}\n_SAVELOCATION=${_BACKUPDIR}/${_hName}-${_DATE}\nif [ -e \"/root/.my.optimize.cnf\" ]; then\n  _OPTIM=YES\nelse\n  _OPTIM=NO\nfi\ntouch /run/boa_sql_backup.pid\n\n_SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n\n_free_memory() {\n  echo \"Freeing memory...\"\n  sync && echo 3 | tee /proc/sys/vm/drop_caches\n}\n\n_create_locks() {\n  echo \"INFO: Creating locks for $1\"\n  touch /run/mysql_backup_running.pid\n  _free_memory\n}\n\n_remove_locks() {\n  echo \"INFO: Removing locks for $1\"\n  rm -f /run/mysql_backup_running.pid\n  _free_memory\n}\n\n_check_running() {\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! 
-e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: Waiting for MySQLD availability...\"\n    fi\n    sleep 3\n  done\n}\n\n_truncate_cache_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^cache | uniq | sort 2>&1)\n  for C in ${_TABLES}; do\n    _IF_SKIP_C=\n    for X in ${_SQL_CACHE_EXC}; do\n      if [ \"${C}\" = \"${X}\" ]; then\n        _IF_SKIP_C=SKIP\n      fi\n    done\n    if [ -z \"${_IF_SKIP_C}\" ]; then\n      mysql ${_DB}<<EOFMYSQL\nTRUNCATE ${C};\nEOFMYSQL\n    fi\n  done\n}\n\n_truncate_watchdog_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^watchdog$ 2>&1)\n  for W in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${W};\nEOFMYSQL\n  done\n}\n\n_truncate_accesslog_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^accesslog$ 2>&1)\n  for A in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${A};\nEOFMYSQL\n  done\n}\n\n_truncate_batch_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^batch$ 2>&1)\n  for B in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${B};\nEOFMYSQL\n  done\n}\n\n_truncate_queue_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^queue$ 2>&1)\n  for Q in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${Q};\nEOFMYSQL\n  done\n}\n\n_truncate_views_data_export() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^views_data_export_index_ 2>&1)\n  for V in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nDROP TABLE ${V};\nEOFMYSQL\n  done\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE views_data_export_object_cache;\nEOFMYSQL\n}\n\n_repair_this_database() {\n  _check_running\n  mysqlcheck -u root --auto-repair --silent ${_DB}\n}\n\n_optimize_this_database() {\n  _check_running\n  _TABLES=$(mysql ${_DB} 
-u root -e \"show tables\" -s | uniq | sort 2>&1)\n  for T in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nOPTIMIZE TABLE ${T};\nEOFMYSQL\n  done\n}\n\n_convert_to_innodb() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | uniq | sort 2>&1)\n  for T in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nALTER TABLE ${T} ENGINE=INNODB;\nEOFMYSQL\n  done\n}\n\n_backup_this_database_with_mydumper() {\n  _check_running\n  if [ ! -d \"${_SAVELOCATION}/${_DB}\" ]; then\n    mkdir -p ${_SAVELOCATION}/${_DB}\n  fi\n  _MYDUMPER_LOCK_MODE=\"AUTO\"\n  if [[ \"${_DB_V}\" == \"5.7\" ]]; then\n    _MYDUMPER_LOCK_MODE=\"FTWRL\"\n  fi\n  mydumper \\\n    --database=${_DB} \\\n    --host=localhost \\\n    --user=root \\\n    --password=${_SQL_PSWD} \\\n    --port=3306 \\\n    --outputdir=${_SAVELOCATION}/${_DB}/ \\\n    --rows=50000 \\\n    --build-empty-files \\\n    --threads=4 \\\n    --long-query-guard=900 \\\n    --sync-thread-lock-mode=${_MYDUMPER_LOCK_MODE} \\\n    --verbose=1\n}\n\n_backup_this_database_with_mysqldump() {\n  _check_running\n  mysqldump \\\n    --single-transaction \\\n    --quick \\\n    --no-autocommit \\\n    --skip-add-locks \\\n    --no-tablespaces \\\n    --hex-blob ${_DB} \\\n    > ${_SAVELOCATION}/${_DB}.sql\n}\n\n_backup_mysql_schema() {\n  _check_running\n  # The mysql system schema uses MyISAM on Percona 5.7 and a mix on 8.x,\n  # so mydumper is never appropriate here. mysqldump handles mixed-engine\n  # system schemas correctly. 
--routines and --events are required to\n  # capture stored procedures and scheduled events which mydumper would miss.\n  # --single-transaction is a no-op for MyISAM tables but harmless and\n  # ensures InnoDB system tables (8.x) are captured consistently.\n  mysqldump \\\n    --single-transaction \\\n    --quick \\\n    --no-autocommit \\\n    --skip-add-locks \\\n    --no-tablespaces \\\n    --hex-blob \\\n    --routines \\\n    --events \\\n    mysql \\\n    > ${_SAVELOCATION}/mysql.sql\n}\n\n_compress_backup() {\n  if [ \"${_MYQUICK_USE}\" = \"YES\" ]; then\n    for DbPath in `find ${_SAVELOCATION}/ -maxdepth 1 -mindepth 1 | sort`; do\n      if [ -e \"${DbPath}/metadata\" ]; then\n        DbName=$(echo ${DbPath} | cut -d'/' -f7 | awk '{ print $1}' 2>&1)\n        cd ${_SAVELOCATION}\n        tar -c -p -I zstd -f ${DbName}-${_DATE}.tar.zst ${DbName} &> /dev/null\n        rm -f -r ${DbName}\n      fi\n    done\n    # mysql schema is always backed up with mysqldump regardless of _MYQUICK_USE,\n    # so compress it separately alongside the mydumper zst archives\n    if [ -e \"${_SAVELOCATION}/mysql.sql\" ]; then\n      gzip ${_SAVELOCATION}/mysql.sql\n    fi\n    chmod 600 ${_SAVELOCATION}/*\n    chmod 700 ${_SAVELOCATION}\n    chmod 700 /data/disk/arch\n    echo \"INFO: Permissions fixed\"\n  else\n    gzip ${_SAVELOCATION}/*.sql\n    chmod 600 ${_SAVELOCATION}/*.sql.gz\n    chmod 700 ${_SAVELOCATION}\n    chmod 700 /data/disk/arch\n    echo \"INFO: Permissions fixed\"\n  fi\n}\n\n[ ! -d \"${_SAVELOCATION}\" ] && mkdir -p \"${_SAVELOCATION}\"\n\n_check_mysql_version() {\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! -z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  fi\n  if [ ! 
-z \"${_DB_V}\" ]; then\n    mysql -u root -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n    mysql -u root -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n      mysql -u root -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n    fi\n    mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n  fi\n}\n\n_check_running\n_check_mysql_version\n\n_MYQUICK_USE=NO\nif [ -x \"/usr/local/bin/mydumper\" ]; then\n  _MYQUICK_ITD=$(mydumper -V 2>&1 \\\n    | tr -d \"\\n\" \\\n    | tr -d \",\" \\\n    | tr -d \"v\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  _DB_V=$(mysql -V 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f6 \\\n    | awk '{ print $1}' \\\n    | cut -d\"-\" -f1 \\\n    | awk '{ print $1}' \\\n    | sed \"s/[\\,']//g\" 2>&1)\n  if [ \"${_DB_V}\" = \"Linux\" ]; then\n    _DB_V=$(mysql -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f4 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n  fi\n  _MD_V=$(mydumper --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f6 \\\n    | awk '{ print $1}' \\\n    | cut -d\"-\" -f1 \\\n    | awk '{ print $1}' \\\n    | sed \"s/[\\,']//g\" 2>&1)\n  if [ ! 
-e \"/root/.mysql.force.legacy.backup.cnf\" ]; then\n    _MYQUICK_USE=YES\n    echo \"INFO: Installed MyQuick ${_MYQUICK_ITD} for ${_MD_V} (${_DB_V})\"\n  fi\nfi\n\nfor _DB in `mysql -e \"show databases\" -s | uniq | sort`; do\n  if [ \"${_DB}\" != \"Database\" ] \\\n    && [ \"${_DB}\" != \"information_schema\" ] \\\n    && [ \"${_DB}\" != \"performance_schema\" ]; then\n    _check_running\n    _create_locks ${_DB}\n    if [ \"${_DB}\" != \"mysql\" ]; then\n      if [ -e \"/var/lib/mysql/${_DB}/queue.ibd\" ] && [ ! -e \"/root/.disable_mysql_cleanup.cnf\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/queue.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"queue\" ]]; then\n          _truncate_queue_tables &> /dev/null\n          echo \"INFO: Truncated giant queue in ${_DB}\"\n        fi\n      fi\n      if [ -e \"/var/lib/mysql/${_DB}/batch.ibd\" ] && [ ! -e \"/root/.disable_mysql_cleanup.cnf\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/batch.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"batch\" ]]; then\n          _truncate_batch_tables &> /dev/null\n          echo \"INFO: Truncated giant batch in ${_DB}\"\n        fi\n      fi\n      if [ -e \"/var/lib/mysql/${_DB}/watchdog.ibd\" ] && [ ! -e \"/root/.disable_mysql_cleanup.cnf\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/watchdog.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"watchdog\" ]]; then\n          _truncate_watchdog_tables &> /dev/null\n          echo \"INFO: Truncated giant watchdog in ${_DB}\"\n        fi\n      fi\n      if [ -e \"/var/lib/mysql/${_DB}/accesslog.ibd\" ] && [ ! 
-e \"/root/.disable_mysql_cleanup.cnf\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/accesslog.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"accesslog\" ]]; then\n          _truncate_accesslog_tables &> /dev/null\n          echo \"INFO: Truncated giant accesslog in ${_DB}\"\n        fi\n      fi\n      _truncate_views_data_export &> /dev/null\n      echo \"INFO: Truncated not used views_data_export in ${_DB}\"\n      _CACHE_CLEANUP=NONE\n      if [ \"${_DOW}\" = \"6\" ] && [ -e \"/root/.my.batch_innodb.cnf\" ]; then\n        _repair_this_database &> /dev/null\n        echo \"INFO: Repair task for ${_DB} completed\"\n        _truncate_cache_tables &> /dev/null\n        echo \"INFO: All cache tables in ${_DB} truncated\"\n        _convert_to_innodb &> /dev/null\n        echo \"INFO: InnoDB conversion task for ${_DB} completed\"\n        _CACHE_CLEANUP=DONE\n      fi\n      if [ \"${_OPTIM}\" = \"YES\" ] \\\n        && [ \"${_DOW}\" = \"7\" ] \\\n        && [ \"${_THIS_MODE}\" = \"full\" ] \\\n        && [ \"${_DOM}\" -ge 24 ] \\\n        && [ \"${_DOM}\" -lt 31 ]; then\n        _repair_this_database &> /dev/null\n        echo \"INFO: Repair task for ${_DB} completed\"\n        _truncate_cache_tables &> /dev/null\n        echo \"INFO: All cache tables in ${_DB} truncated\"\n        _optimize_this_database &> /dev/null\n        echo \"INFO: Optimize task for ${_DB} completed\"\n        _CACHE_CLEANUP=DONE\n      fi\n      if [ \"${_CACHE_CLEANUP}\" != \"DONE\" ]; then\n        _truncate_cache_tables &> /dev/null\n        echo \"INFO: All cache tables in ${_DB} truncated\"\n      fi\n    fi\n    if [ \"${_DB}\" = \"mysql\" ]; then\n      _backup_mysql_schema &> /dev/null\n    elif [ \"${_MYQUICK_USE}\" = \"YES\" ]; then\n      _backup_this_database_with_mydumper &> /dev/null\n    else\n      _backup_this_database_with_mysqldump &> /dev/null\n    fi\n    _remove_locks ${_DB}\n    echo \"INFO: Backup completed for ${_DB}\"\n    echo\n  
fi\ndone\n\nif [ \"${_THIS_MODE}\" = \"full\" ]; then\n  echo \"INFO: Running all dbs usage report on $(date)\"\n  du -s /var/lib/mysql/* > /root/.du.local.sql\n  echo \"INFO: Completing all dbs usage report on $(date)\"\nfi\n\nif [ \"${_OPTIM}\" = \"YES\" ] \\\n  && [ \"${_DOW}\" = \"7\" ] \\\n  && [ \"${_THIS_MODE}\" = \"full\" ] \\\n  && [ \"${_DOM}\" -ge 24 ] \\\n  && [ \"${_DOM}\" -lt 31 ] \\\n  && [ -e \"/root/.my.restart_after_optimize.cnf\" ] \\\n  && [ ! -e \"/run/boa_run.pid\" ]; then\n  _check_running\n  _check_mysql_version\n  echo \"INFO: Running db server restart on $(date)\"\n  bash /var/xdrago/move_sql.sh\n  wait\n  echo \"INFO: Completing db server restart on $(date)\"\nfi\n\necho \"INFO: Completing all dbs backups on $(date)\"\nrm -f /run/boa_sql_backup.pid\ntouch /var/log/boa/last-run-backup\n\nif [ \"${_VMFAMILY}\" = \"VS\" ] && [ \"${_THIS_MODE}\" = \"full\" ]; then\n  _n=$((RANDOM%300+8))\n  echo \"INFO: Waiting ${_n} seconds on $(date) before running compress...\"\n  sleep ${_n}\nfi\necho \"INFO: Starting dbs backup compress on $(date)\"\n_compress_backup &> /dev/null\necho \"INFO: Completing dbs backup compress on $(date)\"\n\necho \"INFO: Starting dbs backup cleanup on $(date)\"\n_DB_BACKUPS_TTL=${_DB_BACKUPS_TTL//[^0-9]/}\nif [ -z \"${_DB_BACKUPS_TTL}\" ]; then\n  _DB_BACKUPS_TTL=\"14\"\nfi\nif [ \"${_THIS_MODE}\" = \"basic\" ]; then\n  _DB_BACKUPS_TTL=\"3\"\nfi\nfind ${_BACKUPDIR}/* -mtime +${_DB_BACKUPS_TTL} -type d -exec rm -rf {} \\;\necho \"INFO: Backups older than ${_DB_BACKUPS_TTL} days deleted\"\n\nif [ \"${_THIS_MODE}\" = \"full\" ]; then\n  if [ -x \"/opt/local/bin/copydbackup\" ]; then\n    echo \"INFO: Copying backups to users space\"\n    bash /opt/local/bin/copydbackup &> /dev/null\n    wait\n  fi\nfi\n\nif [ \"${_THIS_MODE}\" = \"full\" ]; then\n  echo \"INFO: Starting verbose usage report on $(date)\"\n  bash /var/xdrago/usage.sh verbose\n  wait\n  echo \"INFO: Completing verbose usage report on $(date)\"\nfi\n\necho 
\"INFO: ALL TASKS COMPLETED, BYE!\"\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/mysql_cleanup.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\nif [ -e \"/root/.disable_mysql_cleanup.cnf\" ]; then\n  exit 0\nfi\n\nif [ -e \"/root/.proxy.cnf\" ]; then\n  echo \"Ooops, that is a proxy server, we do not run this task on sql proxy\"\n  exit 0\nfi\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n_IS_SQLBACKUP_RUNNING=$(pgrep -f mysql_backup.sh)\nif [ ! -z \"${_IS_SQLBACKUP_RUNNING}\" ]; then\n  echo \"Ooops, another mysql procedure/backup is running at the moment\"\n  exit 0\nfi\n\n_ALL_DBS_NR=$(ls /var/lib/mysql | wc -l)\nif [ ! 
-z \"${_ALL_DBS_NR}\" ] && [ \"${_ALL_DBS_NR}\" -gt 100 ]; then\n  echo \"Sorry, too many databases (${_ALL_DBS_NR}) on this server for this frequent task\"\n  exit 0\nfi\n\necho \"INFO: Starting dbs cleanup on $(date)\"\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n_SQL_CACHE_EXC_DEF=\"cache_bootstrap cache_discovery cache_config\"\n\nif [ -e \"/root/.my.cache.exceptions.cnf\" ]; then\n  _SQL_CACHE_EXC_ADD=$(cat /root/.my.cache.exceptions.cnf 2>&1)\n  _SQL_CACHE_EXC=\"${_SQL_CACHE_EXC_DEF} ${_SQL_CACHE_EXC_ADD}\"\nelse\n  _SQL_CACHE_EXC=\"${_SQL_CACHE_EXC_DEF}\"\nfi\n\n_SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n\n_create_locks() {\n  echo \"INFO: Creating locks for $1\"\n  touch /run/mysql_backup_running.pid\n}\n\n_remove_locks() {\n  echo \"INFO: Removing locks for $1\"\n  rm -f /run/mysql_backup_running.pid\n}\n\n_check_running() {\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! 
-e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"INFO: Waiting for MySQLD availability...\"\n    fi\n    sleep 3\n  done\n}\n\n_truncate_cache_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^cache | uniq | sort 2>&1)\n  for C in ${_TABLES}; do\n    _IF_SKIP_C=\n    for X in ${_SQL_CACHE_EXC}; do\n      if [ \"${C}\" = \"${X}\" ]; then\n        _IF_SKIP_C=SKIP\n      fi\n    done\n    if [ -z \"${_IF_SKIP_C}\" ]; then\n      mysql ${_DB}<<EOFMYSQL\nTRUNCATE ${C};\nEOFMYSQL\n    fi\n  done\n}\n\n_truncate_watchdog_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^watchdog$ 2>&1)\n  for W in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${W};\nEOFMYSQL\n  done\n}\n\n_truncate_accesslog_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^accesslog$ 2>&1)\n  for A in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${A};\nEOFMYSQL\n  done\n}\n\n_truncate_batch_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^batch$ 2>&1)\n  for B in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${B};\nEOFMYSQL\n  done\n}\n\n_truncate_queue_tables() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^queue$ 2>&1)\n  for Q in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE ${Q};\nEOFMYSQL\n  done\n}\n\n_truncate_views_data_export() {\n  _check_running\n  _TABLES=$(mysql ${_DB} -u root -e \"show tables\" -s | grep ^views_data_export_index_ 2>&1)\n  for V in ${_TABLES}; do\nmysql ${_DB}<<EOFMYSQL\nDROP TABLE ${V};\nEOFMYSQL\n  done\nmysql ${_DB}<<EOFMYSQL\nTRUNCATE views_data_export_object_cache;\nEOFMYSQL\n}\n\nfor _DB in `mysql -e \"show databases\" -s | uniq | sort`; do\n  if [ \"${_DB}\" != \"Database\" ] \\\n    && [ \"${_DB}\" != \"information_schema\" ] \\\n    && [ 
\"${_DB}\" != \"performance_schema\" ]; then\n    _check_running\n    _create_locks ${_DB}\n    if [ \"${_DB}\" != \"mysql\" ]; then\n      if [ -e \"/var/lib/mysql/${_DB}/queue.ibd\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/queue.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"queue\" ]]; then\n          _truncate_queue_tables &> /dev/null\n          echo \"INFO: Truncated giant queue in ${_DB}\"\n        fi\n      fi\n      if [ -e \"/var/lib/mysql/${_DB}/batch.ibd\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/batch.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"batch\" ]]; then\n          _truncate_batch_tables &> /dev/null\n          echo \"INFO: Truncated giant batch in ${_DB}\"\n        fi\n      fi\n      if [ -e \"/var/lib/mysql/${_DB}/watchdog.ibd\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/watchdog.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"watchdog\" ]]; then\n          _truncate_watchdog_tables &> /dev/null\n          echo \"INFO: Truncated giant watchdog in ${_DB}\"\n        fi\n      fi\n      if [ -e \"/var/lib/mysql/${_DB}/accesslog.ibd\" ]; then\n        _IS_GB=$(du -s -h /var/lib/mysql/${_DB}/accesslog.ibd | grep \"G\" 2>/dev/null)\n        if [[ \"${_IS_GB}\" =~ \"accesslog\" ]]; then\n          _truncate_accesslog_tables &> /dev/null\n          echo \"INFO: Truncated giant accesslog in ${_DB}\"\n        fi\n      fi\n      _truncate_views_data_export &> /dev/null\n      echo \"INFO: Truncated unused views_data_export in ${_DB}\"\n      _truncate_cache_tables &> /dev/null\n      echo \"INFO: All cache tables in ${_DB} truncated\"\n    fi\n    _remove_locks ${_DB}\n    echo \"INFO: Cleanup completed for ${_DB}\"\n    echo\n  fi\ndone\n\necho \"INFO: Completed all dbs cleanup on $(date)\"\ntouch /var/log/boa/last-run-db-cleanup\nrm -f /run/mysql_backup_running.pid\n\necho \"INFO: ALL TASKS COMPLETED, BYE!\"\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/mysql_cluster_backup.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ ! -e \"/root/.my.cluster_write_node.txt\" ] && exit 0\n[ ! -e \"/root/.my.cluster_root_pwd.txt\" ] && exit 0\n\n_IS_SQLBACKUP_RUNNING=$(pgrep -f mysql_backup.sh)\nif [ ! 
-z \"${_IS_SQLBACKUP_RUNNING}\" ]; then\n  exit 0\nfi\n\nif [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n  _SQL_PSWD=$(cat /root/.my.cluster_root_pwd.txt 2>/dev/null | tr -d '\\n')\nfi\n\nif [ -e \"/root/.my.cluster_backup_proxysql.txt\" ]; then\n  _SQL_PORT=\"6033\"\n  _SQL_HOST=\"127.0.0.1\"\nelse\n  _SQL_PORT=\"3306\"\n  if [ -e \"/root/.my.cluster_write_node.txt\" ]; then\n    _SQL_HOST=$(cat /root/.my.cluster_write_node.txt 2>&1)\n    _SQL_HOST=$(echo -n ${_SQL_HOST} | tr -d \"\\n\" 2>&1)\n  fi\n  [ -z ${_SQL_HOST} ] && _SQL_HOST=\"127.0.0.1\" && _SQL_PORT=\"3306\"\nfi\n\n_C_SQL=\"mysql --user=root --password=${_SQL_PSWD} --host=${_SQL_HOST} --port=${_SQL_PORT} --protocol=tcp\"\n\necho \"SQL --host=${_SQL_HOST} --port=${_SQL_PORT}\"\n_n=$((RANDOM%600+8))\necho \"INFO: Waiting ${_n} seconds on $(date) before running backup...\"\nsleep ${_n}\necho \"INFO: Starting backup on $(date)\"\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! 
[[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n_SQL_CACHE_EXC_DEF=\"cache_bootstrap cache_discovery cache_config\"\n\nif [ -e \"/root/.my.cache.exceptions.cnf\" ]; then\n  _SQL_CACHE_EXC_ADD=$(cat /root/.my.cache.exceptions.cnf 2>&1)\n  _SQL_CACHE_EXC=\"${_SQL_CACHE_EXC_DEF} ${_SQL_CACHE_EXC_ADD}\"\nelse\n  _SQL_CACHE_EXC=\"${_SQL_CACHE_EXC_DEF}\"\nfi\n\n_BACKUPDIR=/data/disk/arch/cluster\n_DATE=$(date +%y%m%d-%H%M%S)\n_DOW=$(date +%u)\n_hName=\"$({ cat /etc/hostname 2>/dev/null || hostname -f 2>/dev/null; } | tr -d '\\n')\"\n_DOW=${_DOW//[^1-7]/}\n_DOM=$(date +%e)\n_DOM=${_DOM//[^0-9]/}\n_SAVELOCATION=${_BACKUPDIR}/${_hName}-${_DATE}\nif [ -e \"/root/.my.optimize.cnf\" ]; then\n  _OPTIM=YES\nelse\n  _OPTIM=NO\nfi\n_VM_TEST=\"$(uname -a)\"\nif [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n  _VMFAMILY=\"VS\"\nelse\n  _VMFAMILY=\"XEN\"\nfi\ntouch /run/boa_sql_cluster_backup.pid\n\n_create_locks() {\n  echo \"INFO: Creating locks for $1\"\n  touch /run/mysql_cluster_backup_running.pid\n}\n\n_remove_locks() {\n  echo \"INFO: Removing locks for $1\"\n  rm -f /run/mysql_cluster_backup_running.pid\n}\n\n_check_running() {\n  _IS_PROXYSQL_RUNNING=$(pgrep -f proxysql)\n  while [ -z \"${_IS_PROXYSQL_RUNNING}\" ] \\\n    || [ ! 
-e \"/var/lib/proxysql/proxysql.pid\" ]; do\n    _IS_PROXYSQL_RUNNING=$(pgrep -f proxysql)\n    echo \"INFO: Waiting for ProxySQL availability...\"\n    sleep 3\n  done\n}\n\n_truncate_cache_tables() {\n  _check_running\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^cache | uniq | sort 2>&1)\n  for C in ${_TABLES}; do\n    _IF_SKIP_C=\n    for X in ${_SQL_CACHE_EXC}; do\n      if [ \"${C}\" = \"${X}\" ]; then\n        _IF_SKIP_C=SKIP\n      fi\n    done\n    if [ -z \"${_IF_SKIP_C}\" ]; then\n      ${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${C};\nEOFMYSQL\n    fi\n  done\n}\n\n_truncate_watchdog_tables() {\n  _check_running\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^watchdog$ 2>&1)\n  for W in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${W};\nEOFMYSQL\n  done\n}\n\n_truncate_accesslog_tables() {\n  _check_running\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^accesslog$ 2>&1)\n  for A in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${A};\nEOFMYSQL\n  done\n}\n\n_truncate_batch_tables() {\n  _check_running\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^batch$ 2>&1)\n  for B in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${B};\nEOFMYSQL\n  done\n}\n\n_truncate_queue_tables() {\n  _check_running\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^queue$ 2>&1)\n  for Q in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE ${Q};\nEOFMYSQL\n  done\n}\n\n_truncate_views_data_export() {\n  _check_running\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | grep ^views_data_export_index_ 2>&1)\n  for V in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nDROP TABLE ${V};\nEOFMYSQL\n  done\n${_C_SQL} ${_DB}<<EOFMYSQL\nTRUNCATE views_data_export_object_cache;\nEOFMYSQL\n}\n\n_repair_this_database() {\n  _check_running\n  mysqlcheck --host=${_SQL_HOST} --port=${_SQL_PORT} --protocol=tcp -u root --password=${_SQL_PSWD} --auto-repair --silent ${_DB}\n}\n\n_optimize_this_database() {\n  _check_running\n  
_TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | uniq | sort 2>&1)\n  for T in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nOPTIMIZE TABLE ${T};\nEOFMYSQL\n  done\n}\n\n_convert_to_innodb() {\n  _check_running\n  _TABLES=$(${_C_SQL} ${_DB} -e \"show tables\" -s | uniq | sort 2>&1)\n  for T in ${_TABLES}; do\n${_C_SQL} ${_DB}<<EOFMYSQL\nALTER TABLE ${T} ENGINE=INNODB;\nEOFMYSQL\n  done\n}\n\n_backup_this_database_with_mydumper() {\n  _check_running\n  if [ ! -d \"${_SAVELOCATION}/${_DB}\" ]; then\n    mkdir -p ${_SAVELOCATION}/${_DB}\n  fi\n  _MYDUMPER_LOCK_MODE=\"AUTO\"\n  if [[ \"${_DB_V}\" == \"5.7\" ]]; then\n    _MYDUMPER_LOCK_MODE=\"FTWRL\"\n  fi\n  mydumper \\\n    --database=${_DB} \\\n    --host=localhost \\\n    --user=root \\\n    --password=${_SQL_PSWD} \\\n    --port=3306 \\\n    --outputdir=${_SAVELOCATION}/${_DB}/ \\\n    --rows=50000 \\\n    --build-empty-files \\\n    --threads=4 \\\n    --long-query-guard=900 \\\n    --sync-thread-lock-mode=${_MYDUMPER_LOCK_MODE} \\\n    --verbose=1\n}\n\n_backup_this_database_with_mysqldump() {\n  _check_running\n  mysqldump \\\n    --user=root \\\n    --password=${_SQL_PSWD} \\\n    --host=${_SQL_HOST} \\\n    --port=${_SQL_PORT} \\\n    --protocol=tcp \\\n    --single-transaction \\\n    --quick \\\n    --no-autocommit \\\n    --skip-add-locks \\\n    --no-tablespaces \\\n    --hex-blob ${_DB} \\\n    > ${_SAVELOCATION}/${_DB}.sql\n}\n\n_compress_backup() {\n  if [ \"${_MYQUICK_USE}\" = \"YES\" ]; then\n    for DbPath in `find ${_SAVELOCATION}/ -maxdepth 1 -mindepth 1 | sort`; do\n      if [ -e \"${DbPath}/metadata\" ]; then\n        DbName=$(echo ${DbPath} | cut -d'/' -f7 | awk '{ print $1}' 2>&1)\n        cd ${_SAVELOCATION}\n        tar -c -p -I zstd -f ${DbName}-${_DATE}.tar.zst ${DbName} &> /dev/null\n        rm -f -r ${DbName}\n      fi\n    done\n    chmod 600 ${_SAVELOCATION}/*\n    chmod 700 ${_SAVELOCATION}\n    chmod 700 /data/disk/arch\n    echo \"INFO: Permissions fixed\"\n  else\n    gzip 
${_SAVELOCATION}/*.sql\n    chmod 600 ${_BACKUPDIR}/*/*\n    chmod 700 ${_BACKUPDIR}/*\n    chmod 700 ${_BACKUPDIR}\n    chmod 700 /data/disk/arch\n    echo \"INFO: Permissions fixed\"\n  fi\n}\n\n[ ! -e \"${_SAVELOCATION}\" ] && mkdir -p \"${_SAVELOCATION}\";\n\n_check_mysql_version() {\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! -z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  fi\n  if [ ! -z \"${_DB_V}\" ]; then\n    ${_C_SQL} -e \"SET GLOBAL innodb_max_dirty_pages_pct = 0;\" &> /dev/null\n    ${_C_SQL} -e \"SET GLOBAL innodb_change_buffering = 'none';\" &> /dev/null\n    ${_C_SQL} -e \"SET GLOBAL innodb_buffer_pool_dump_at_shutdown = 1;\" &> /dev/null\n    ${_C_SQL} -e \"SET GLOBAL innodb_io_capacity=3000;\" &> /dev/null\n    ${_C_SQL} -e \"SET GLOBAL innodb_io_capacity_max=6000;\" &> /dev/null\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      ${_C_SQL} -e \"SET GLOBAL innodb_buffer_pool_dump_pct = 100;\" &> /dev/null\n      ${_C_SQL} -e \"SET GLOBAL innodb_buffer_pool_dump_now = ON;\" &> /dev/null\n    fi\n    ${_C_SQL} -e \"SET GLOBAL innodb_fast_shutdown = 1;\" &> /dev/null\n  fi\n}\n\n_check_running\n_check_mysql_version\n\n_MYQUICK_USE=NO\nif [ -x \"/usr/local/bin/mydumper\" ]; then\n  _MYQUICK_ITD=$(mydumper -V 2>&1 \\\n    | tr -d \"\\n\" \\\n    | tr -d \",\" \\\n    | tr -d \"v\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  _DB_V=$(mysql -V 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f6 \\\n    | awk '{ print $1}' \\\n    | cut -d\"-\" -f1 \\\n    | awk '{ print $1}' \\\n    | sed \"s/[\\,']//g\" 2>&1)\n  if [ \"${_DB_V}\" = \"Linux\" ]; then\n    _DB_V=$(mysql -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f4 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | 
awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n  fi\n  _MD_V=$(mydumper --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f6 \\\n    | awk '{ print $1}' \\\n    | cut -d\"-\" -f1 \\\n    | awk '{ print $1}' \\\n    | sed \"s/[\\,']//g\" 2>&1)\n  if [ ! -e \"/root/.mysql.force.legacy.backup.cnf\" ]; then\n    _MYQUICK_USE=YES\n    echo \"INFO: Installed MyQuick ${_MYQUICK_ITD} for ${_MD_V} (${_DB_V})\"\n  fi\nfi\n\nfor _DB in `${_C_SQL} -e \"show databases\" -s | uniq | sort`; do\n  if [ \"${_DB}\" != \"Database\" ] \\\n    && [ \"${_DB}\" != \"information_schema\" ] \\\n    && [ \"${_DB}\" != \"performance_schema\" ]; then\n    _check_running\n    _create_locks ${_DB}\n    if [ \"${_DB}\" != \"mysql\" ]; then\n      _IS_GB=$(${_C_SQL} --skip-column-names --silent -e \"SELECT table_name 'Table Name', round(((data_length + index_length)/1024/1024),0)\n'Table Size (MB)' FROM information_schema.TABLES WHERE table_schema = '${_DB}' AND table_name ='watchdog';\" | cut -d'/' -f1 | awk '{ print $2}' | sed \"s/[\\/\\s+]//g\" | bc 2>&1)\n      _IS_GB=${_IS_GB//[^0-9]/}\n      _SQL_MAX_LIMIT=\"1024\"\n      if [ ! 
-z \"${_IS_GB}\" ]; then\n        if [ \"${_IS_GB}\" -gt \"${_SQL_MAX_LIMIT}\" ]; then\n          _truncate_watchdog_tables &> /dev/null\n          echo \"INFO: Truncated giant ${_IS_GB} watchdog in ${_DB}\"\n        fi\n      fi\n      # _truncate_accesslog_tables &> /dev/null\n      # echo \"Truncated not used accesslog in ${_DB}\"\n      # _truncate_queue_tables &> /dev/null\n      # echo \"Truncated queue table in ${_DB}\"\n      _CACHE_CLEANUP=NONE\n      # if [ \"${_DOW}\" = \"6\" ] && [ -e \"/root/.my.batch_innodb.cnf\" ]; then\n      #   _repair_this_database &> /dev/null\n      #   echo \"Repair task for ${_DB} completed\"\n      #   _truncate_cache_tables &> /dev/null\n      #   echo \"All cache tables in ${_DB} truncated\"\n      #   _convert_to_innodb &> /dev/null\n      #   echo \"InnoDB conversion task for ${_DB} completed\"\n      #   _CACHE_CLEANUP=DONE\n      # fi\n      # if [ \"${_OPTIM}\" = \"YES\" ] \\\n      #   && [ \"${_DOW}\" = \"7\" ] \\\n      #   && [ \"${_DOM}\" -ge 24 ] \\\n      #   && [ \"${_DOM}\" -lt 31 ]; then\n      #   _repair_this_database &> /dev/null\n      #   echo \"Repair task for ${_DB} completed\"\n      #   _truncate_cache_tables &> /dev/null\n      #   echo \"All cache tables in ${_DB} truncated\"\n      #   _optimize_this_database &> /dev/null\n      #   echo \"Optimize task for ${_DB} completed\"\n      #   _CACHE_CLEANUP=DONE\n      # fi\n      if [ \"${_CACHE_CLEANUP}\" != \"DONE\" ]; then\n        _truncate_cache_tables &> /dev/null\n        echo \"INFO: All cache tables in ${_DB} truncated\"\n      fi\n    fi\n    if [ \"${_MYQUICK_USE}\" = \"YES\" ]; then\n      _backup_this_database_with_mydumper &> /dev/null\n    else\n      _backup_this_database_with_mysqldump &> /dev/null\n    fi\n    _remove_locks ${_DB}\n    echo \"INFO: Backup completed for ${_DB}\"\n    echo\n  fi\ndone\n\necho \"INFO: Completing all dbs backups on $(date)\"\nrm -f /run/boa_sql_cluster_backup.pid\ntouch 
/var/log/boa/last-run-cluster-backup\n\necho \"INFO: Starting dbs backup compress on $(date)\"\n_compress_backup &> /dev/null\necho \"INFO: Completed dbs backup compress on $(date)\"\n\necho \"INFO: Starting dbs backup cleanup on $(date)\"\n_DB_BACKUPS_TTL=${_DB_BACKUPS_TTL//[^0-9]/}\nif [ -z \"${_DB_BACKUPS_TTL}\" ]; then\n  _DB_BACKUPS_TTL=\"30\"\nfi\nfind ${_BACKUPDIR} -mindepth 1 -mtime +${_DB_BACKUPS_TTL} -type d -exec rm -rf {} \\;\necho \"INFO: Backups older than ${_DB_BACKUPS_TTL} days deleted\"\n\necho \"INFO: ALL TASKS COMPLETED, BYE!\"\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/mysql_repair.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\ndir=/var/log/boa/mysql_optimize\nmkdir -p $dir\n_SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n/usr/bin/mysqlcheck -u root -Aa >> $dir/all.a.$(date +%y%m%d-%H%M%S)\n/usr/bin/mysqlcheck -u root -A --auto-repair >> $dir/all.r.$(date +%y%m%d-%H%M%S)\n/usr/bin/mysqlcheck -u root -Ao >> $dir/all.o.$(date +%y%m%d-%H%M%S)\nexit 0\n\n"
  },
  {
    "path": "aegir/tools/system/proc_num_ctrl.pl",
    "content": "#!/usr/bin/perl\n\n### TODO - rewrite this legacy script in bash\n\n$ENV{'HOME'} = '/root';\n$ENV{'PATH'} = '/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin';\n\nuse warnings;\nuse File::Spec;\n\nmy $run_to_files = [ \"/root/.run-to-excalibur.cnf\", \"/root/.run-to-daedalus.cnf\", \"/root/.run-to-chimaera.cnf\", \"/root/.run-to-beowulf.cnf\", \"/run/boa_run.pid\" ];\n\n###\n### System Services Monitor running every 5 seconds\n###\n&cpu_count_load;\n&global_action;\n$sumar = 0;\nforeach $USER (sort keys %li_cnt) {\n  print \" $li_cnt{$USER}\\t$USER\\n\";\n  $sumar = $sumar + $li_cnt{$USER};\n  if ($USER eq \"mysql\") {$mysqlives = \"YES\"; $mysqlsumar = $li_cnt{$USER};}\n  if ($USER eq \"jetty9\") {$jetty9lives = \"YES\"; $jetty9sumar = $li_cnt{$USER};}\n  if ($USER eq \"solr7\") {$solr7lives = \"YES\"; $solr7sumar = $li_cnt{$USER};}\n  if ($USER eq \"solr9\") {$solr9lives = \"YES\"; $solr9sumar = $li_cnt{$USER};}\n}\nforeach $COMMAND (sort keys %li_cnt) {\n  if ($COMMAND =~ /lfd/) {$lfdlives = \"YES\"; $lfdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /named/) {$namedlives = \"YES\"; $namedsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /sbin\\/clamd/) {$clamdlives = \"YES\"; $clamdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /bin\\/freshclam/) {$freshclamlives = \"YES\"; $freshclamsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /buagent/) {$buagentlives = \"YES\"; $buagentsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /collectd/) {$collectdlives = \"YES\"; $collectdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /dhclient/) {$dhclientlives = \"YES\"; $dhclientsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /dhcpcd/) {$dhcpcdlives = \"YES\"; $dhcpcdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /nginx:/) {$nginxlives = \"YES\"; $nginxsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /sbin\\/unbound/) {$unboundlives = \"YES\"; $unboundsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /sbin\\/vnstatd/) {$vnstatdlives = \"YES\"; 
$vnstatdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /php-cgi/) {$phplives = \"YES\"; $phpsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /php-fpm:/) {$fpmlives = \"YES\"; $fpmsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /lib\\/postfix/) {$postfixlives = \"YES\"; $postfixsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /pure-ftpd/) {$ftplives = \"YES\"; $ftpsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /java\\/jenkins/) {$jenkinslives = \"YES\"; $jenkinssumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /bin\\/valkey-server/) {$valkeylives = \"YES\"; $valkeysumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /bin\\/redis-server/) {$redislives = \"YES\"; $redissumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /bin\\/newrelic-daemon/) {$newrelicdaemonlives = \"YES\"; $newrelicdaemonsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /nrsysmond/) {$newrelicsysmondlives = \"YES\"; $newrelicsysmondsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /sbin\\/rsyslogd/) {$rsyslogdlives = \"YES\"; $rsyslogdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /sbin\\/syslogd/ && -f \"/run/syslogd.pid\") {$sysklogdlives = \"YES\"; $sysklogdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /sbin\\/syslogd/ && -f \"/run/syslog.pid\") {$syslogdlives = \"YES\"; $syslogdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /xinetd/) {$xinetdlives = \"YES\"; $xinetdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /lsyncd/) {$lsyncdlives = \"YES\"; $lsyncdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /sshd:/) {$sshdlives = \"YES\"; $sshdsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /proxysql/) {$pxydlives = \"YES\"; $pxydsumar = $li_cnt{$COMMAND};}\n  if ($COMMAND =~ /droplet/) {$dpltlives = \"YES\"; $dpltsumar = $li_cnt{$COMMAND};}\n}\nforeach $X (sort keys %li_cnt) {\n  if ($X =~ /php85/) {$php85lives = \"YES\";}\n  if ($X =~ /php84/) {$php84lives = \"YES\";}\n  if ($X =~ /php83/) {$php83lives = \"YES\";}\n  if ($X =~ /php82/) {$php82lives = \"YES\";}\n  if ($X =~ /php81/) {$php81lives = \"YES\";}\n  if ($X =~ 
/php80/) {$php80lives = \"YES\";}\n  if ($X =~ /php74/) {$php74lives = \"YES\";}\n  if ($X =~ /php73/) {$php73lives = \"YES\";}\n  if ($X =~ /php72/) {$php72lives = \"YES\";}\n  if ($X =~ /php71/) {$php71lives = \"YES\";}\n  if ($X =~ /php70/) {$php70lives = \"YES\";}\n  if ($X =~ /php56/) {$php56lives = \"YES\";}\n}\n$convertsumar = 0;\nforeach $K (sort keys %li_cnt) {\n  if ($K =~ /convert/) {$convertlives = \"YES\"; $convertsumar = $li_cnt{$K};}\n}\nif ($convertsumar > 1)\n{\n  &convert_action;\n}\nprint \"\\n $sumar ALL procs\\t\\tGLOBAL\";\nprint \"\\n $lfdsumar LFD procs\\t\\tGLOBAL\" if ($lfdlives && -f \"/etc/init.d/lfd\");\nprint \"\\n $namedsumar Bind procs\\t\\tGLOBAL\" if ($namedlives);\nprint \"\\n $clamdsumar Clamd procs\\t\\tGLOBAL\" if ($clamdlives);\nprint \"\\n $freshclamsumar Freshclam\\t\\tGLOBAL\" if ($freshclamlives);\nprint \"\\n $buagentsumar Backup procs\\t\\tGLOBAL\" if ($buagentlives);\nprint \"\\n $collectdsumar Collectd\\t\\tGLOBAL\" if ($collectdlives);\nprint \"\\n $dhclientsumar DHCP procs\\t\\tGLOBAL\" if ($dhclientlives);\nprint \"\\n $dhcpcdsumar DHCP procs\\t\\tGLOBAL\" if ($dhcpcdlives);\nprint \"\\n $fpmsumar FPM procs\\t\\tGLOBAL\" if ($fpmlives);\nprint \"\\n 1 FPM85 procs\\t\\tGLOBAL\" if ($php85lives);\nprint \"\\n 1 FPM84 procs\\t\\tGLOBAL\" if ($php84lives);\nprint \"\\n 1 FPM83 procs\\t\\tGLOBAL\" if ($php83lives);\nprint \"\\n 1 FPM82 procs\\t\\tGLOBAL\" if ($php82lives);\nprint \"\\n 1 FPM81 procs\\t\\tGLOBAL\" if ($php81lives);\nprint \"\\n 1 FPM80 procs\\t\\tGLOBAL\" if ($php80lives);\nprint \"\\n 1 FPM74 procs\\t\\tGLOBAL\" if ($php74lives);\nprint \"\\n 1 FPM73 procs\\t\\tGLOBAL\" if ($php73lives);\nprint \"\\n 1 FPM72 procs\\t\\tGLOBAL\" if ($php72lives);\nprint \"\\n 1 FPM71 procs\\t\\tGLOBAL\" if ($php71lives);\nprint \"\\n 1 FPM70 procs\\t\\tGLOBAL\" if ($php70lives);\nprint \"\\n 1 FPM56 procs\\t\\tGLOBAL\" if ($php56lives);\nprint \"\\n $ftpsumar FTP procs\\t\\tGLOBAL\" if ($ftplives);\nprint \"\\n 
$mysqlsumar MySQL procs\\t\\tGLOBAL\" if ($mysqlives);\nprint \"\\n $nginxsumar Nginx procs\\t\\tGLOBAL\" if ($nginxlives);\nprint \"\\n $unboundsumar DNS procs\\t\\tGLOBAL\" if ($unboundlives);\nprint \"\\n $vnstatdsumar Vnstat procs\\t\\tGLOBAL\" if ($vnstatdlives);\nprint \"\\n $phpsumar PHP procs\\t\\tGLOBAL\" if ($phplives);\nprint \"\\n $postfixsumar Postfix procs\\tGLOBAL\" if ($postfixlives);\nprint \"\\n $valkeysumar Valkey procs\\t\\tGLOBAL\" if ($valkeylives);\nprint \"\\n $redissumar Redis procs\\t\\tGLOBAL\" if ($redislives);\nprint \"\\n $newrelicdaemonsumar New Relic Apps\\tGLOBAL\" if ($newrelicdaemonlives);\nprint \"\\n $newrelicsysmondsumar New Relic Server\\tGLOBAL\" if ($newrelicsysmondlives);\nprint \"\\n $jetty9sumar Jetty9 procs\\t\\tGLOBAL\" if ($jetty9lives);\nprint \"\\n $solr7sumar Solr7 procs\\t\\tGLOBAL\" if ($solr7lives);\nprint \"\\n $solr9sumar Solr9 procs\\t\\tGLOBAL\" if ($solr9lives);\nprint \"\\n $jenkinssumar Jenkins procs\\t\\tGLOBAL\" if ($jenkinslives);\nprint \"\\n $rsyslogdsumar Syslog procs\\t\\tGLOBAL\" if ($rsyslogdlives);\nprint \"\\n $sysklogdsumar Syslog procs\\t\\tGLOBAL\" if ($sysklogdlives);\nprint \"\\n $syslogdsumar Syslog procs\\t\\tGLOBAL\" if ($syslogdlives);\nprint \"\\n $convertsumar Convert procs\\t\\tGLOBAL\" if ($convertlives);\nprint \"\\n $xinetdsumar Xinetd procs\\t\\tGLOBAL\" if ($xinetdlives);\nprint \"\\n $lsyncdsumar Lsyncd procs\\t\\tGLOBAL\" if ($lsyncdlives);\nprint \"\\n $sshdsumar SSHd procs\\t\\tGLOBAL\" if ($sshdlives);\nprint \"\\n $pxydsumar PxySQL procs\\t\\tGLOBAL\" if ($pxydlives);\nprint \"\\n $dpltsumar Droplet procs\\tGLOBAL\" if ($dpltlives);\nprint \"\\n\";\n\nif (!any_file_exists($run_to_files)) {\n  system(\"service bind9 restart\") if (!$namedsumar && -f \"/etc/init.d/bind9\");\n  system(\"service proxysql restart\") if (!$pxydsumar && -f \"/etc/init.d/proxysql\");\n  system(\"service droplet-agent restart\") if (!$dpltsumar && -f \"/etc/init.d/droplet-agent\");\n  
system(\"service droplet-agent restart\") if (!-f \"/run/droplet-agent.pid\" && -f \"/etc/init.d/droplet-agent\");\n}\n\nif (!any_file_exists($run_to_files)) {\n  system(\"service newrelic-daemon restart\") if (!$newrelicdaemonsumar && -f \"/etc/init.d/newrelic-daemon\");\n  system(\"service newrelic-sysmond restart\") if (!$newrelicsysmondsumar && -f \"/etc/init.d/newrelic-sysmond\" && -f \"/root/.enable.newrelic.sysmond.cnf\");\n  system(\"service newrelic-sysmond stop\") if ($newrelicsysmondsumar && -f \"/etc/init.d/newrelic-sysmond\" && !-f \"/root/.enable.newrelic.sysmond.cnf\");\n}\n\nif (!any_file_exists($run_to_files)) {\n  system(\"service collectd start\") if (!$collectdsumar && -f \"/etc/init.d/collectd\");\n  system(\"service xinetd start\") if (!$xinetdsumar && -f \"/etc/init.d/xinetd\");\n  system(\"service lsyncd start\") if (!$lsyncdsumar && -f \"/etc/init.d/lsyncd\");\n}\n\nif ($dhcpcdlives || $dhclientlives) {\n  chomp(my $wanted = `cat /etc/hostname`);\n  chomp(my $current = `hostname`);\n  if ($current ne $wanted) {\n    system(\"hostname\", \"-b\", $wanted);\n  }\n}\nelsif (-f \"/etc/init.d/sysklogd\") {\n  if (!$sysklogdsumar || !-f \"/run/syslogd.pid\") {\n    system(\"killall -9 sysklogd\");\n    system(\"service sysklogd restart\");\n  }\n}\nelsif (-f \"/etc/init.d/inetutils-syslogd\") {\n  if (!$syslogdsumar || !-f \"/run/syslog.pid\") {\n    system(\"killall -9 syslogd\");\n    system(\"service inetutils-syslogd restart\");\n  }\n}\n\nsub any_file_exists {\n  my ($files) = @_;\n  for my $file (@$files) {\n    return 1 if -f $file;\n  }\n  return 0;\n}\n\nexit;\n\n#############################################################################\nsub global_action\n{\n  local(@MYARR)=`ps auxf 2>&1`;\n  foreach $line (@MYARR) {\n    $line =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,\\?\\=\\|\\\\\\+]//g;\n    local($USER, $PID, $CPU, $MEM, $VSZ, $RSS, $TTY, $STAT, $START, ${TIME}, $COMMAND, $B, $K, $X, $Y, $Z, $T) = 
split(/\\s+/,$line);\n    $PID =~ s/[^0-9]//g;\n    $li_cnt{$USER}++ if ($PID);\n    $li_cnt{$X}++ if ($PID && $COMMAND =~ /php-fpm/ && $X =~ /php/);\n    $li_cnt{$K}++ if ($PID && $COMMAND =~ /^(\\|)/ && $K =~ /convert/);\n\n    if ($PID)\n    {\n      local($HOUR, $MIN) = split(/:/,${TIME});\n      $MIN =~ s/^0//g;\n      if ($COMMAND !~ /^(\\\\)/ && $COMMAND !~ /^(\\|)/)\n      {\n        if ($COMMAND =~ /nginx/) {\n          if ($USER =~ /root/) {\n            $li_cnt{$COMMAND}++;\n          }\n        }\n        elsif ($COMMAND =~ /sendmail/) {\n          if ($USER =~ /root/) {\n            system(\"kill -9 $PID\");\n          }\n        }\n        else {\n          if ($COMMAND !~ /java/) {\n            $li_cnt{$COMMAND}++;\n          }\n        }\n      }\n    }\n  }\n}\n\n#############################################################################\nsub convert_action\n{\n  local(@MYARR)=`ps auxf 2>&1`;\n  foreach $line (@MYARR) {\n    $line =~ s/[^a-zA-Z0-9\\:\\s\\t\\/\\-\\@\\_\\(\\)\\*\\[\\]\\.\\,\\?\\=\\|\\\\\\+]//g;\n    local($USER, $PID, $CPU, $MEM, $VSZ, $RSS, $TTY, $STAT, $START, ${TIME}, $COMMAND, $B, $K, $X, $Y, $Z, $T) = split(/\\s+/,$line);\n    $PID =~ s/[^0-9]//g;\n    if ($PID)\n    {\n      local($HOUR, $MIN) = split(/:/,${TIME});\n      $MIN =~ s/^0//g;\n      if ($COMMAND =~ /^(\\|)/ && $K =~ /convert/ && $CPU > 10 && $MIN > 1 && ($STAT =~ /R/ || $STAT =~ /Z/))\n      {\n        $timedate=`date +%y%m%d-%H%M%S`;\n        chomp($timedate);\n        if ($convertsumar > 5 && $CPU > 50) {\n          system(\"kill -9 $PID\");\n         `echo \"$USER $CPU $STAT $START ${TIME} $timedate KILL Q $convertsumar\" >> /var/log/boa/convert.kill.log`;\n          $kill_convert = \"YES\";\n        }\n        else {\n         `echo \"$USER $CPU $STAT $START ${TIME} $timedate WATCH $convertsumar\" >> /var/log/boa/convert.watch.log`;\n        }\n      }\n\n      if ($kill_convert && $COMMAND =~ /^(\\|)/ && $K =~ /bin/ && $Y =~ /convert/)\n      {\n        
system(\"kill -9 $PID\");\n       `echo \"$USER $CPU $STAT $START ${TIME} $timedate KILL Z $convertsumar\" >> /var/log/boa/convert.kill.log`;\n      }\n    }\n  }\n}\n\n#############################################################################\nsub cpu_count_load\n{\n  local($PROCS)=`grep -c processor /proc/cpuinfo`;\n  chomp($PROCS);\n  $MAXSQLCPU = $PROCS.\"00\";\n  $MAXFPMCPU = $PROCS.\"00\";\n  if ($PROCS > 2)\n  {\n    $MAXSQLCPU = 600;\n  }\n  $MAXSQLCPU = $MAXSQLCPU - 5;\n  $MAXFPMCPU = $MAXFPMCPU - 5;\n}\n\n"
  },
  {
    "path": "aegir/tools/system/purge_binlogs.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    ionice -c2 -n7 -p $$\n    renice 19 -p $$\n    chmod a+w /dev/null\n    # shellcheck disable=SC1091\n    [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n}\n_check_root\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ -e \"/root/.pause_tasks_maint.cnf\" ] && exit 0\n\nif [ -z \"${_DB_BINARY_LOG}\" ] || [ \"${_DB_BINARY_LOG}\" != \"YES\" ]; then\n  exit 0\nfi\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! 
-z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] \\\n    || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n_get_load() {\n  read -r _one _five _rest <<< \"$(cat /proc/loadavg)\"\n  _O_LOAD=$(awk -v _load_value=\"${_one}\" -v _cpus=\"${_CPU_NR}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n}\n\n_load_control() {\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  : \"${_CPU_TASK_RATIO:=3.1}\"\n  _CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n  _O_LOAD_MAX=$(echo \"${_CPU_TASK_RATIO} * 100\" | bc -l)\n  _get_load\n}\n\n# How many hours of binlogs to keep\n: \"${_BINLOG_KEEP_HOURS:=24}\"\n\n_detect_mysql_major() {\n  # Returns \"5\" or \"8\" for Percona/MySQL variants\n  _VER_RAW=\"$(mysql -Nse \"SELECT VERSION();\" 2>/dev/null | head -n1)\"\n  # Examples: 5.7.44-48, 8.0.36-28, 8.4.6-6\n  _MYSQL_MAJOR=\"$(printf '%s' \"${_VER_RAW}\" | awk -F. 
'{print $1}')\"\n  printf '%s' \"${_MYSQL_MAJOR:-8}\"\n}\n\n_purge_binlogs() {\n  # Optional safety: skip if binary logging is off\n  _LOG_BIN=\"$(mysql -Nse \"SHOW VARIABLES LIKE 'log_bin';\" 2>/dev/null | awk '{print $2}')\"\n  if [ \"${_LOG_BIN}\" != \"ON\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"log_bin=OFF — nothing to purge\"\n    fi\n    return 0\n  fi\n\n  # Compute cutoff as a literal timestamp (local time)\n  # For UTC, use: date -u -d \"${_BINLOG_KEEP_HOURS} hour ago\" '+%F %T'\n  _CUTOFF=\"$(date -d \"${_BINLOG_KEEP_HOURS} hour ago\" '+%F %T')\"\n\n  _MAJOR=\"$(_detect_mysql_major)\"\n\n  if [ \"${_MAJOR}\" -ge 8 ]; then\n    # Percona/MySQL 8.x needs the 'BINARY' keyword and a literal timestamp\n    _SQL=\"PURGE BINARY LOGS BEFORE '${_CUTOFF}';\"\n  else\n    # MySQL/Percona 5.7 also accepts BINARY + a literal timestamp (portable),\n    # so the same statement works on both major versions.\n    _SQL=\"PURGE BINARY LOGS BEFORE '${_CUTOFF}';\"\n    # Legacy 5.x-style equivalent (not used now):\n    # _SQL=\"PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL ${_BINLOG_KEEP_HOURS} HOUR);\"\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \"Purging binlogs older than ${_CUTOFF} (keeping ~${_BINLOG_KEEP_HOURS}h)\"\n    echo \"SQL> ${_SQL}\"\n  fi\n\n  mysql -e \"${_SQL}\"\n}\n\n_purge_action() {\n  _count_cpu\n  _load_control\n  if (( $(echo \"${_O_LOAD} < ${_O_LOAD_MAX}\" | bc -l) )); then\n    echo \"Load is ${_O_LOAD}% while max load is ${_O_LOAD_MAX}%\"\n    _purge_binlogs\n    touch /var/log/boa/purge_binlogs.done\n  fi\n}\n\nif [ -e \"/run/boa_run.pid\" ]; then\n  touch /var/log/boa/wait-purge.pid\n  exit 0\nelse\n  _purge_action\n  touch /var/log/boa/last-run-purge\n  exit 0\nfi\n\n"
  },
  {
    "path": "aegir/tools/system/runner.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n###-------------SYSTEM-----------------###\n\n_check_root() {\n  if [ \"$(id -u)\" -eq 0 ]; then\n    chmod a+w /dev/null\n  else\n    echo \"ERROR: This script should be run as a root user\"\n    exit 1\n  fi\n  _DF_TEST=\"$(command df -P -l / 2>/dev/null | awk '\n    NR==1 { for (i=1; i<=NF; i++) if ($i==\"Use%\" || $i==\"Capacity\") u=i }\n    NR==2 { gsub(/%/,\"\",$u); print $u }')\"\n  if [ ! -z \"${_DF_TEST}\" ] && [ \"${_DF_TEST}\" -gt 90 ]; then\n    echo \"ERROR: Your disk space is almost full !!! ${_DF_TEST}/100\"\n    echo \"ERROR: We can not proceed until it is below 90/100\"\n    exit 1\n  fi\n}\n_check_root\n\n_disable_master_cron() {\n  _mCronOn=\"/var/spool/cron/crontabs/aegir\"\n  _mCronOff=\"/var/spool/cron/crontabs/.aegir\"\n  if [ -e \"${_mCronOn}\" ]; then\n    mv -f ${_mCronOn} ${_mCronOff}\n    service cron reload\n  fi\n}\n\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ -e \"/root/.pause_tasks_maint.cnf\" ] && exit 0\n[ -e \"/run/max_load.pid\" ] || [ -e \"/run/critical_load.pid\" ] && exit 0\n\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! 
-z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n_get_load() {\n  read -r _one _five _rest <<< \"$(cat /proc/loadavg)\"\n  _O_LOAD=$(awk -v _load_value=\"${_one}\" -v _cpus=\"${_CPU_NR}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n}\n\n_load_control() {\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  : \"${_CPU_TASK_RATIO:=3.1}\"\n  _CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n  _O_LOAD_MAX=$(echo \"${_CPU_TASK_RATIO} * 100\" | bc -l)\n  _get_load\n}\n\n_if_accelerated_queue() {\n  if [ -e \"/run/octopus_install_run.pid\" ]; then\n    _ACCELERATED=YES\n  elif [ -e \"/run/boa_run.pid\" ]; then\n    _ACCELERATED=NONE\n  else\n    _ACCELERATED=NORMAL\n  fi\n}\n\n_runner_action() {\n  for _Runner in $(find /var/xdrago -maxdepth 1 -mindepth 1 -type f \\\n    | grep run- \\\n    | uniq \\\n    | sort); do\n    _count_cpu\n    _load_control\n    if (( $(echo \"${_O_LOAD} < ${_O_LOAD_MAX}\" | bc -l) )); then\n      echo \"Load is ${_O_LOAD}% (below max load ${_O_LOAD_MAX}%). Running ${_Runner}\"\n      _if_accelerated_queue\n      if [ \"${_ACCELERATED}\" = \"YES\" ] || [ \"${_ACCELERATED}\" = \"NORMAL\" ]; then\n        echo \"Running ${_Runner}\"\n        bash \"${_Runner}\"\n        _n=$((RANDOM % 9 + 2))\n        echo \"Waiting ${_n} sec\"\n        sleep \"${_n}\"\n      else\n        echo \"Another BOA task is running, we have to wait...\"\n      fi\n    else\n      echo \"Load is ${_O_LOAD}% while max load is ${_O_LOAD_MAX}%. 
Waiting...\"\n    fi\n  done\n}\n\n_if_allow_aegir_queue() {\n  _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n  _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n  _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n  _ReTest=$(ls /data/disk/*/static/control/run-aegir-queue.info | wc -l 2>&1)\n  if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n    || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n    || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n    || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n    || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n    || [ -e \"/root/.allow.aegir.queue.cnf\" ]; then\n    if [ \"${_ReTest}\" -ge 1 ]; then\n      _ALLOW_AEGIR_QUEUE=TRUE\n    fi\n  fi\n}\n\n###-------------SYSTEM-----------------###\n\n_SQLBACKUP_RUNNING=NO\nif (( $(pgrep -fc mysql_backup.sh) > 0 )); then\n  _SQLBACKUP_RUNNING=YES\nelif (( $(pgrep -fc mysql_cluster_backup.sh) > 0 )); then\n  _SQLBACKUP_RUNNING=YES\nelif (( $(pgrep -fc mydumper) > 0 )); then\n  _SQLBACKUP_RUNNING=YES\nfi\n\n_DAILY_RUNNING=NO\nif (( $(pgrep -fc daily.sh) > 0 )); then\n  _DAILY_RUNNING=YES\nfi\n\n# Get total RAM in MB\n_TOTAL_RAM_MB=$(free -m | awk '/^Mem:/ {print $2}')\n\n# Compare with 4096 MB (4 GB)\nif [ \"${_TOTAL_RAM_MB}\" -le 4096 ]; then\n  if [ ! -e \"/root/.slow.cron.cnf\" ]; then\n    echo SLOW > /root/.slow.cron.cnf\n    chattr +i /root/.slow.cron.cnf\n  fi\n  if [ ! 
-e \"/root/.slow.cron.cnf.protected\" ] \\\n    && [ -e \"/root/.slow.cron.cnf\" ]; then\n    echo SLOW > /root/.slow.cron.cnf.protected\n  fi\nfi\n\nif [ -e \"/root/.slow.cron.cnf\" ]; then\n  _howMany=1\nelse\n  _howMany=8\nfi\n\nif [ \"$(pgrep -fc 'n7 bash /var/xdrago/runner.sh')\" -gt \"${_howMany}\" ] \\\n  || [ \"${_SQLBACKUP_RUNNING}\" = \"YES\" ] \\\n  || [ \"${_DAILY_RUNNING}\" = \"YES\" ] \\\n  || [ -e \"/run/mysql_restart_running.pid\" ] \\\n  || [ -e \"/run/boa_sql_cluster_backup.pid\" ] \\\n  || [ -e \"/run/boa_cron_wait.pid\" ]; then\n  touch /var/log/boa/wait-runner.pid\n  echo \"Another BOA task is running, we will try again later...\"\n  exit 0\nelse\n  _disable_master_cron\n  if [ -e \"/root/.look.like.jenkins.cnf\" ]; then\n    _ALLOW_AEGIR_QUEUE=FALSE\n    _if_allow_aegir_queue\n    if [ \"${_ALLOW_AEGIR_QUEUE}\" = \"TRUE\" ]; then\n      touch /run/boa_cron_wait.pid\n      _runner_action\n      sleep 5\n      rm -f /run/boa_cron_wait.pid\n    else\n      echo \"Automatic task queue is not allowed on CI instances by default\"\n      exit 0\n    fi\n  elif [ -e \"/root/.slow.cron.cnf\" ] && [ ! -e \"/root/.force.queue.runner.cnf\" ]; then\n    touch /run/boa_cron_wait.pid\n    sleep 15\n    _runner_action\n    sleep 15\n    rm -f /run/boa_cron_wait.pid\n  elif [ -e \"/root/.fast.cron.cnf\" ] || [ -e \"/root/.force.queue.runner.cnf\" ]; then\n    rm -f /run/boa_cron_wait.pid\n    for i in {1..10}; do\n      _runner_action\n      sleep 5\n    done\n  else\n    _runner_action\n  fi\n  exit 0\nfi\n\n"
  },
  {
    "path": "aegir/tools/system/second.sh",
    "content": "#!/bin/bash\n\n# Environment setup\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n_monPath=\"/var/xdrago/monitor/check\"\n\n# Exit if proxy config exists\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n\n# shellcheck disable=SC1091\n[ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n\n# Sanitize numeric variables (allow digits and decimal point)\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n# Paths\n_pthOml=\"/var/log/boa/high.load.incident.log\"\n\n# Load _RATIO defaults + sanitize\n_CPU_CRIT_RATIO=\"$(_sanitize_number \"${_CPU_CRIT_RATIO}\")\"\n_CPU_MAX_RATIO=\"$(_sanitize_number \"${_CPU_MAX_RATIO}\")\"\n_CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n_CPU_SPIDER_RATIO=\"$(_sanitize_number \"${_CPU_SPIDER_RATIO}\")\"\n\n# ===== Config (ratios per CPU) =====\n: \"${_CPU_CRIT_RATIO:=6.1}\"    # CRIT: pause web + kill long procs + block spiders\n: \"${_CPU_MAX_RATIO:=4.1}\"     # MAX:  pause web + block spiders\n: \"${_CPU_TASK_RATIO:=3.1}\"    # TASK: skip backend tasks (but web OK)\n: \"${_CPU_SPIDER_RATIO:=2.1}\"  # SPIDER: allow web; block spiders only\n\n# Sanitize to allow only digits and minus sign\nexport _B_NICE=${_B_NICE//[^0-9-]/}\n\n# Validate and set default if necessary\nif ! [[ \"${_B_NICE}\" =~ ^-?[0-9]+$ ]]; then\n  _B_NICE=0\nfi\n\n# Clamp the value within -20 to 19\nif (( _B_NICE < -20 )); then\n  _B_NICE=-20\nelif (( _B_NICE > 19 )); then\n  _B_NICE=19\nfi\n\nrenice ${_B_NICE} -p $$ &> /dev/null\n\n# Get CPU count\n_CPU_COUNT=$(nproc)\n[ -z \"${_CPU_COUNT}\" ] && _CPU_COUNT=1\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . 
\"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc \"${_SCRIPT}\")\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###\n### Load + normalize _INCIDENT_REPORT\n###\n### Legacy values:\n###   NO   becomes OFF (see below)\n###   YES  becomes CRIT (see below)\n###   MINI becomes CRIT (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   CRIT == Only critical alerts where _lvl=ALERT (default)\n###\n_normalize_incident_report() {\n  : \"${_INCIDENT_REPORT:=CRIT}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT^^}\"\n  _INCIDENT_REPORT=\"${_INCIDENT_REPORT//[^A-Z]/}\"\n  ###\n  ### Map legacy + validate\n  ###\n  case \"${_INCIDENT_REPORT}\" in\n    NO)   _INCIDENT_REPORT=\"OFF\"  ;;\n    YES)  _INCIDENT_REPORT=\"CRIT\" ;;\n    MINI) _INCIDENT_REPORT=\"CRIT\" ;;\n    OFF|ALL|CRIT) : ;;\n    *)    _INCIDENT_REPORT=\"CRIT\" ;;\n  esac\n}\n_normalize_incident_report\n\n###\n### Function to send incident email report\n###\n_incident_email_report() {\n  _check_uptime_grace_period >/dev/null || return 1\n  local _subject=\"${1:-(no subject)}\"\n  local _lvl=\"${2:-INFO}\"\n  _lvl=\"${_lvl^^}\"\n  [ -n \"${_MY_EMAIL}\" ] || return 1\n  # Decide if we should send\n  case \"${_INCIDENT_REPORT}\" in\n    OFF)  return 1 ;;                            # always veto\n    CRIT) [ \"${_lvl}\" = \"ALERT\" ] || return 1 ;; # veto unless ALERT\n    ALL) : ;;                                    # allow\n  esac\n  _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 
2>/dev/null)\"\n  echo \"Sending Incident Report Email on $(date)\" >> ${_pthOml}\n  s-nail -s \"Incident Report on ${_hName}: ${_subject}\" \"${_MY_EMAIL}\" < ${_pthOml}\n}\n\n# Function to pause web services\n_hold_services() {\n  local _current_load=\"$1\"\n  local _threshold=\"$2\"\n  local _load_period=\"$3\"\n  touch /run/boa_second_auto_healing.pid\n  sleep 3\n  service nginx stop\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n      service php${e}-fpm force-quit\n    fi\n  done\n  killall php-fpm\n  killall nginx\n  local _log_message\n  _log_message=\"$(date) System Load ${_current_load}% (${_load_period}) - Web Server Paused\"\n  echo \"${_log_message}\" >> ${_pthOml}\n  local _subject=\"Web Services Paused - ${_load_period} Load ${_current_load}% exceeded Max Load Threshold ${_threshold}%\"\n  _incident_email_report \"${_subject}\" \"ALERT\"\n  echo >> ${_pthOml}\n  echo \"Action Taken: Web services paused due to high load.\"\n  sleep 5\n  [ -e \"/run/boa_second_auto_healing.pid\" ] && rm -f /run/boa_second_auto_healing.pid\n}\n\n# Function to terminate long-running processes\n_terminate_processes() {\n  local _current_load=\"$1\"\n  local _threshold=\"$2\"\n  local _load_period=\"$3\"\n  killall -9 php drush.php wget curl &> /dev/null\n  local _log_message\n  _log_message=\"$(date) System Load ${_current_load}% (${_load_period}) - PHP/Wget/cURL terminated\"\n  echo \"${_log_message}\" >> ${_pthOml}\n  local _subject=\"Processes Terminated - ${_load_period} Load ${_current_load}% exceeded Critical Load Threshold ${_threshold}%\"\n  _incident_email_report \"${_subject}\" \"ALERT\"\n  echo >> ${_pthOml}\n  echo \"Action Taken: Long-running processes terminated due to critical load.\"\n}\n\n# Function to enable nginx high load configuration\n_nginx_high_load_on() {\n  local _current_load=\"$1\"\n  local _threshold=\"$2\"\n  local 
_load_period=\"$3\"\n  mv -f /data/conf/nginx_high_load_off.conf /data/conf/nginx_high_load.conf\n  service nginx reload &> /dev/null\n  local _log_message\n  _log_message=\"$(date) Enabled Spider Protection ${_load_period} Load: ${_current_load}%\"\n  echo \"${_log_message}\" >> ${_pthOml}\n# local _subject=\"Enabled Spider Protection - ${_load_period} Load ${_current_load}% exceeded Spider Protection Threshold ${_threshold}%\"\n# _incident_email_report \"${_subject}\" \"INFO\"\n# echo >> ${_pthOml}\n  echo \"Action Taken: Enabled protection from spiders (nginx high load configuration applied).\"\n}\n\n# Function to disable nginx high load configuration\n_nginx_high_load_off() {\n  mv -f /data/conf/nginx_high_load.conf /data/conf/nginx_high_load_off.conf\n  service nginx reload &> /dev/null\n  local _log_message\n  _log_message=\"$(date) Disabled Spider Protection Load: ${_O_LOAD}%\"\n  echo \"${_log_message}\" >> ${_pthOml}\n# local _subject=\"Disabled Spider Protection - Load decreased below Spider Protection Threshold ${_CPU_SPIDER_THRESHOLD}%\"\n# _incident_email_report \"${_subject}\" \"INFO\"\n# echo >> ${_pthOml}\n  echo \"Action Taken: Disabled protection from spiders (nginx high load configuration removed).\"\n}\n\n# Fire-and-forget launcher, cron-safe and interactive-safe\n_spawn_detached() {\n  _cmd=\"$1\"\n  if command -v nohup >/dev/null 2>&1; then\n    nohup bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  elif command -v setsid >/dev/null 2>&1; then\n    setsid bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  else\n    ( bash -c \"${_cmd}\" >/dev/null 2>&1 ) &\n  fi\n  # If interactive shell, drop it from the job table to mimic cron behavior\n  if [[ \"$-\" == *i* ]]; then disown; fi\n}\n\n# Function to control processes\n_proc_control() {\n  echo \"Running process control...\"\n  renice \"${_B_NICE}\" -p $$ &> /dev/null\n  if [ -e \"/var/xdrago/proc_num_ctrl.pl\" ]; then\n    _spawn_detached 'perl /var/xdrago/proc_num_ctrl.pl'\n  fi\n  touch 
/var/log/boa/proc_num_ctrl.done.pid\n  echo \"Process control done.\"\n}\n\n# Function to get system load averages\n_get_load() {\n  read -r _one _five _rest <<< \"$(cat /proc/loadavg)\"\n  _O_LOAD=$(awk -v _load_value=\"${_one}\" -v _cpus=\"${_CPU_COUNT}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n  _F_LOAD=$(awk -v _load_value=\"${_five}\" -v _cpus=\"${_CPU_COUNT}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n}\n\n# Function to control system load actions\n_load_control() {\n  _get_load\n\n  # Initialize the flags\n  _limits_exceeded=false\n  _skip_proc_control=false\n\n  # Thresholds in percentages (calculate using bc)\n  _CPU_SPIDER_THRESHOLD=$(echo \"${_CPU_SPIDER_RATIO} * 100\" | bc -l)\n  _CPU_MAX_THRESHOLD=$(echo \"${_CPU_MAX_RATIO} * 100\" | bc -l)\n  _CPU_CRIT_THRESHOLD=$(echo \"${_CPU_CRIT_RATIO} * 100\" | bc -l)\n\n  echo \"Current Load Averages:\"\n  echo \" - 1-minute Load (per CPU): ${_O_LOAD}%\"\n  echo \" - 5-minute Load (per CPU): ${_F_LOAD}%\"\n  echo \"Thresholds:\"\n  echo \" - Critical Load Threshold: ${_CPU_CRIT_THRESHOLD}%\"\n  echo \" - Max Load Threshold: ${_CPU_MAX_THRESHOLD}%\"\n  echo \" - Spider Protection Threshold: ${_CPU_SPIDER_THRESHOLD}%\"\n\n  # Check for critical load to terminate processes and hold services\n  if awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_CRIT_THRESHOLD} || ${_F_LOAD} > ${_CPU_CRIT_THRESHOLD})}\"; then\n    sleep 9\n    _get_load\n    # Confirm the critical load is sustained: re-check fresh readings after a brief cooldown\n    if awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_CRIT_THRESHOLD} || ${_F_LOAD} > ${_CPU_CRIT_THRESHOLD})}\"; then\n      touch /run/critical_load.pid\n      [ -e \"/run/max_load.pid\" ] && rm -f /run/max_load.pid\n      [ -e \"/run/normal_load.pid\" ] && rm -f /run/normal_load.pid\n      [ -e \"/run/spider_load.pid\" ] && rm -f /run/spider_load.pid\n      echo \"Load exceeds critical threshold. 
Terminating processes and pausing web services.\"\n      _limits_exceeded=true\n      _skip_proc_control=true\n      if awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_CRIT_THRESHOLD})}\"; then\n        _current_load=\"${_O_LOAD}\"\n        _load_period=\"1-minute\"\n      else\n        _current_load=\"${_F_LOAD}\"\n        _load_period=\"5-minute\"\n      fi\n      if [ ! -e \"/run/boa_second_auto_healing.pid\" ]; then\n        _terminate_processes \"${_current_load}\" \"${_CPU_CRIT_THRESHOLD}\" \"${_load_period}\"\n        _hold_services \"${_current_load}\" \"${_CPU_MAX_THRESHOLD}\" \"${_load_period}\"\n      fi\n    fi\n  # Check for max load to hold services\n  elif awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_MAX_THRESHOLD} || ${_F_LOAD} > ${_CPU_MAX_THRESHOLD})}\"; then\n    sleep 9\n    _get_load\n    # Confirm the max load is sustained: re-check fresh readings after a brief cooldown\n    if awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_MAX_THRESHOLD} || ${_F_LOAD} > ${_CPU_MAX_THRESHOLD})}\"; then\n      touch /run/max_load.pid\n      [ -e \"/run/critical_load.pid\" ] && rm -f /run/critical_load.pid\n      [ -e \"/run/normal_load.pid\" ] && rm -f /run/normal_load.pid\n      [ -e \"/run/spider_load.pid\" ] && rm -f /run/spider_load.pid\n      echo \"Load exceeds max threshold. Pausing web services.\"\n      _limits_exceeded=true\n      _skip_proc_control=true\n      if awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_MAX_THRESHOLD})}\"; then\n        _current_load=\"${_O_LOAD}\"\n        _load_period=\"1-minute\"\n      else\n        _current_load=\"${_F_LOAD}\"\n        _load_period=\"5-minute\"\n      fi\n      if [ ! -e \"/run/boa_second_auto_healing.pid\" ]; then\n        _hold_services \"${_current_load}\" \"${_CPU_MAX_THRESHOLD}\" \"${_load_period}\"\n      fi\n    fi\n  # Check for spider protection threshold\n  elif awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_SPIDER_THRESHOLD} && ${_O_LOAD} <= ${_CPU_MAX_THRESHOLD})}\"; then\n    sleep 9\n    _get_load\n    # Confirm the spider-level load is sustained: re-check fresh readings after a brief cooldown\n    if awk \"BEGIN {exit !(${_O_LOAD} > ${_CPU_SPIDER_THRESHOLD} && ${_O_LOAD} <= ${_CPU_MAX_THRESHOLD})}\"; then\n      touch /run/spider_load.pid\n      [ -e \"/run/critical_load.pid\" ] && rm -f /run/critical_load.pid\n      [ -e \"/run/max_load.pid\" ] && rm -f /run/max_load.pid\n      [ -e \"/run/normal_load.pid\" ] && rm -f /run/normal_load.pid\n      echo \"Load exceeds spider protection threshold but below max threshold.\"\n      _limits_exceeded=true\n      # Do not set _skip_proc_control to true here\n      _current_load=\"${_O_LOAD}\"\n      _load_period=\"1-minute\"\n      if [ -e \"/data/conf/nginx_high_load_off.conf\" ]; then\n        _nginx_high_load_on \"${_current_load}\" \"${_CPU_SPIDER_THRESHOLD}\" \"${_load_period}\"\n      fi\n    fi\n  elif awk \"BEGIN {exit !(${_F_LOAD} > ${_CPU_SPIDER_THRESHOLD} && ${_F_LOAD} <= ${_CPU_MAX_THRESHOLD})}\"; then\n    sleep 9\n    _get_load\n    # Confirm the spider-level load is sustained: re-check fresh readings after a brief cooldown\n    if awk \"BEGIN {exit !(${_F_LOAD} > ${_CPU_SPIDER_THRESHOLD} && ${_F_LOAD} <= ${_CPU_MAX_THRESHOLD})}\"; then\n      touch /run/spider_load.pid\n      [ -e \"/run/critical_load.pid\" ] && rm -f /run/critical_load.pid\n      [ -e \"/run/max_load.pid\" ] && rm -f /run/max_load.pid\n      [ -e \"/run/normal_load.pid\" ] && rm -f /run/normal_load.pid\n      echo \"Load exceeds spider protection threshold but below max threshold.\"\n      _limits_exceeded=true\n      # Do not set _skip_proc_control to true here\n      _current_load=\"${_F_LOAD}\"\n      _load_period=\"5-minute\"\n      if [ -e \"/data/conf/nginx_high_load_off.conf\" ]; then\n        _nginx_high_load_on \"${_current_load}\" \"${_CPU_SPIDER_THRESHOLD}\" \"${_load_period}\"\n      fi\n    fi\n  else\n    touch /run/normal_load.pid\n    [ -e \"/run/critical_load.pid\" ] && rm -f /run/critical_load.pid\n    [ -e \"/run/max_load.pid\" ] && rm -f /run/max_load.pid\n    [ -e \"/run/spider_load.pid\" ] && rm -f /run/spider_load.pid\n    # If load is below spider protection threshold, disable spider protection if it's enabled\n    if [ -e \"/data/conf/nginx_high_load.conf\" ] && \\\n       awk \"BEGIN {exit !(${_O_LOAD} <= ${_CPU_SPIDER_THRESHOLD} && ${_F_LOAD} <= ${_CPU_SPIDER_THRESHOLD})}\"; then\n      echo \"Load below spider protection threshold.\"\n      _nginx_high_load_off\n    else\n      echo \"Load within normal parameters.\"\n    fi\n  fi\n\n  # Decide whether to run _proc_control\n  if [ \"${_skip_proc_control}\" = false ]; then\n    _proc_control\n  else\n    echo \"Limits exceeded; skipping process control.\"\n  fi\n}\n\n# Main execution\nfor _iteration in {1..10}; do\n  echo \"----------------------------\"\n  echo \"Iteration ${_iteration}:\"\n  _load_control\n  nohup ${_monPath}/hackcheck.sh > /dev/null 2>&1 &\n  nohup ${_monPath}/hackftp.sh > /dev/null 2>&1 &\n  nohup ${_monPath}/escapecheck.sh > /dev/null 2>&1 &\n  sleep 5\ndone\n\necho \"Done!\"\nexit 0\n"
  },
  {
    "path": "aegir/tools/system/usage.sh",
    "content": "#!/bin/bash\n\nexport HOME=/root\nexport SHELL=/bin/bash\nexport PATH=/usr/local/bin:/usr/local/sbin:/opt/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/libexec\n\n[ -e \"/root/.look.like.jenkins.cnf\" ] && exit 0\n[ -e \"/root/.proxy.cnf\" ] && exit 0\n[ -e \"/root/.pause_heavy_tasks_maint.cnf\" ] && exit 0\n\n###\n### Atomic lock/unlock to prevent TOCTOU race\n###\n_manage_single_lock() {\n  _SELF_NAME=\"${_SELF_NAME:-$(basename \"$0\")}\"\n  for _L in \"/opt/local/bin/lock.inc\" \"/opt/local/lib/lock.inc\"; do\n    [ -r \"${_L}\" ] && . \"${_L}\" && break\n  done\n  if [ -n \"${_SINGLE_INSTANCE_LIB_VER:-}\" ] && command -v _single_instance_lock >/dev/null 2>&1; then\n    # use shared lock if available\n    _single_instance_lock\n  else\n    # -------- legacy pgrep guard ---------\n    # Exit if more than 2 instances of this script are running\n    _SCRIPT=$(basename \"$0\")\n    _CNT=$(pgrep -fc ${_SCRIPT})\n    if (( _CNT > 2 )); then\n      echo \"Too many ${_SCRIPT} running $(date) (count=${_CNT})\" >> /var/log/boa/too.many.log\n      exit 0\n    fi\n  fi\n}\n_manage_single_lock\n\n###-------------SYSTEM-----------------###\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n_sanitize_number() {\n  echo \"$1\" | sed 's/[^0-9.]//g'\n}\n\n_fix_clear_cache() {\n  if [ -e \"${_Plr}/profiles/hostmaster\" ]; then\n    su -s /bin/bash - ${_THIS_U} -c \"drush8 @hostmaster cache-clear all\" &> /dev/null\n    wait\n  fi\n}\n\n_check_account_exceptions() {\n  _DEV_EXC=NO\n  chckStringA=\"omega8.cc\"\n  chckStringB=\"omega8cc\"\n  chckStringC=\"mixomax\"\n  chckStringE=\"emaylx\"\n  case ${_CLIENT_EMAIL} in\n    *\"$chckStringA\"*) _DEV_EXC=YES ;;\n    *\"$chckStringB\"*) _DEV_EXC=YES ;;\n    *\"$chckStringC\"*) _DEV_EXC=YES ;;\n    *\"$chckStringE\"*) _DEV_EXC=YES ;;\n    *)\n    ;;\n  esac\n}\n\n_read_account_data() {\n  _CLIENT_CORES=\n 
 _EXTRA_ENGINE=\n  _ENGINE_NR=\n  _CLIENT_EMAIL=\n  _CLIENT_OPTION=\n  _DSK_CLU_LIMIT=1\n  if [ -e \"/data/disk/${_THIS_U}/log/email.txt\" ]; then\n    _CLIENT_EMAIL=$(cat /data/disk/${_THIS_U}/log/email.txt 2>&1)\n    _CLIENT_EMAIL=$(echo -n ${_CLIENT_EMAIL} | tr -d \"\\n\" 2>&1)\n    _check_account_exceptions\n  fi\n  if [ -e \"/root/.debug.email.txt\" ]; then\n    _CLIENT_EMAIL=\"omega8cc@gmail.com\"\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/log/cores.txt\" ]; then\n    _CLIENT_CORES=$(cat /data/disk/${_THIS_U}/log/cores.txt 2>&1)\n    _CLIENT_CORES=$(echo -n ${_CLIENT_CORES} | tr -d \"\\n\" 2>&1)\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/log/diskspace.txt\" ]; then\n    _DSK_CLU_LIMIT=$(cat /data/disk/${_THIS_U}/log/diskspace.txt 2>&1)\n    _DSK_CLU_LIMIT=$(echo -n ${_DSK_CLU_LIMIT} | tr -d \"\\n\" 2>&1)\n  fi\n  if [ -n \"${_CLIENT_CORES}\" ] && [ \"${_CLIENT_CORES}\" -gt 1 ]; then\n    _ENGINE_NR=\"Engines\"\n  else\n    _ENGINE_NR=\"Engine\"\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/log/option.txt\" ]; then\n    _CLIENT_OPTION=$(cat /data/disk/${_THIS_U}/log/option.txt 2>&1)\n    _CLIENT_OPTION=$(echo -n ${_CLIENT_OPTION} | tr -d \"\\n\" 2>&1)\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/log/extra.txt\" ]; then\n    mv -f /data/disk/${_THIS_U}/log/extra.txt /data/disk/${_THIS_U}/log/extra_edge.txt\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/log/extra_edge.txt\" ]; then\n    _EXTRA_ENGINE=$(cat /data/disk/${_THIS_U}/log/extra_edge.txt 2>&1)\n    _EXTRA_ENGINE=$(echo -n ${_EXTRA_ENGINE} | tr -d \"\\n\" 2>&1)\n    _ENGINE_NR=\"${_ENGINE_NR} + ${_EXTRA_ENGINE} x EDGE\"\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/log/extra_aero.txt\" ]; then\n    _EXTRA_ENGINE=$(cat /data/disk/${_THIS_U}/log/extra_aero.txt 2>&1)\n    _EXTRA_ENGINE=$(echo -n ${_EXTRA_ENGINE} | tr -d \"\\n\" 2>&1)\n    _ENGINE_NR=\"${_ENGINE_NR} + ${_EXTRA_ENGINE} x AERO\"\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/log/extra_power.txt\" ]; then\n    _EXTRA_ENGINE=$(cat /data/disk/${_THIS_U}/log/extra_power.txt 2>&1)\n    _EXTRA_ENGINE=$(echo 
-n ${_EXTRA_ENGINE} | tr -d \"\\n\" 2>&1)\n    _ENGINE_NR=\"${_ENGINE_NR} + ${_EXTRA_ENGINE} x POWER\"\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/static/control/cli.info\" ]; then\n    _CLIENT_CLI=$(cat /data/disk/${_THIS_U}/static/control/cli.info 2>&1)\n    _CLIENT_CLI=$(echo -n ${_CLIENT_CLI} | tr -d \"\\n\" 2>&1)\n  fi\n  if [ -e \"/data/disk/${_THIS_U}/static/control/fpm.info\" ]; then\n    _CLIENT_FPM=$(cat /data/disk/${_THIS_U}/static/control/fpm.info 2>&1)\n    _CLIENT_FPM=$(echo -n ${_CLIENT_FPM} | tr -d \"\\n\" 2>&1)\n  fi\n}\n\n_send_notice_php() {\n  if [ \"${_hostedSys}\" != \"YES\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _BCC_EMAIL=\"omega8cc@gmail.com\"\n  _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"URGENT: Please switch your Ægir instance to PHP 8.1 [${_THIS_U}]\" ${_CLIENT_EMAIL}\nHello,\n\nOur monitoring detected that you are still using deprecated\nand no longer supported PHP version: $1\n\nWe have provided a few years of extended support for\nthis PHP version, but now we can't extend it any further,\nbecause your system has to be upgraded to newest Debian version,\nwhich doesn't support many deprecated PHP versions.\n\nThe upgrade will happen in the first week of May, 2023,\nand there are no exceptions possible to avoid it.\n\nThis means that all Ægir instances still running PHP $1\nwill stop working if not switched to one of currently\nsupported versions: 8.1, 8.2, 8.3, 8.4, 8.5\n\nTo switch PHP-FPM version on command line, please type:\n\n  echo 8.1 > ~/static/control/fpm.info\n\nYou can find more details at: https://learn.omega8.cc/node/330\n\nWe are working hard to provide secure and fast hosting\nfor your Drupal sites, and we appreciate your efforts\nto meet the requirements, which are an integral part\nof the quality you can expect from Omega8.cc\n\n--\nThis email 
has been sent by your Ægir system monitor\n\nEOF\n  fi\n  echo \"INFO: PHP notice sent to ${_CLIENT_EMAIL} [${_THIS_U}]: OK\"\n}\n\n_detect_deprecated_php() {\n  _PHP_FPM_VERSION=\n  if [ -e \"${_usEr}/static/control/fpm.info\" ] \\\n    && [ ! -e \"${_usEr}/log/proxied.pid\" ] \\\n    && [ ! -e \"${_usEr}/log/CANCELLED\" ]; then\n    _PHP_FPM_VERSION=$(cat ${_usEr}/static/control/fpm.info 2>&1)\n    _PHP_FPM_VERSION=$(echo -n ${_PHP_FPM_VERSION} | tr -d \"\\n\" 2>&1)\n    if [ \"${_PHP_FPM_VERSION}\" = \"5.5\" ] \\\n      || [ \"${_PHP_FPM_VERSION}\" = \"5.4\" ] \\\n      || [ \"${_PHP_FPM_VERSION}\" = \"5.3\" ] \\\n      || [ \"${_PHP_FPM_VERSION}\" = \"5.2\" ]; then\n      echo Deprecated PHP-FPM ${_PHP_FPM_VERSION} detected in ${_THIS_U}\n      _read_account_data\n      if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n        _send_notice_php ${_PHP_FPM_VERSION}\n      fi\n    fi\n  fi\n}\n\n_send_notice_core() {\n  if [ \"${_hostedSys}\" != \"YES\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _BCC_EMAIL=\"omega8cc@gmail.com\"\n  _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"URGENT: Please migrate ${_Dom} site to Pressflow (LTS)\" ${_CLIENT_EMAIL}\nHello,\n\nOur system detected that you are using vanilla Drupal core\nfor site ${_Dom}.\n\nThe platform root directory for this site is:\n${_Plr}\n\nUsing non-Pressflow 5.x or 6.x core is not allowed\non our servers, unless it is a temporary result of your site\nimport, but every imported site should be migrated to Pressflow\nbased platform as soon as possible.\n\nIf the site is not migrated to Pressflow based platform\nin seven (7) days, it may cause service interruption.\n\nWe are working hard to deliver top performance hosting\nfor your Drupal sites and we appreciate your efforts\nto meet the requirements, which are an integral part\nof the 
quality you can expect from Omega8.cc.\n\n--\nThis email has been sent by your Ægir platform core monitor.\n\nEOF\n  fi\n  echo \"INFO: Pressflow notice sent to ${_CLIENT_EMAIL} [${_THIS_U}]: OK\"\n}\n\n_detect_vanilla_core() {\n  if [ ! -e \"${_Plr}/core\" ]; then\n    if [ -e \"${_Plr}/web.config\" ]; then\n      _DO_NOTHING=YES\n    else\n      if [ -e \"${_Plr}/modules/watchdog\" ]; then\n        if [ ! -e \"/boot/grub/grub.cfg\" ] \\\n          && [ ! -e \"/boot/grub/menu.lst\" ] \\\n          && [[ \"${_Plr}\" =~ \"static\" ]] \\\n          && [ ! -e \"${_Plr}/modules/cookie_cache_bypass\" ]; then\n          _if_hosted_sys\n          if [ \"${_hostedSys}\" = \"YES\" ]; then\n            echo Vanilla Drupal 5.x Platform detected in ${_Plr}\n            _read_account_data\n            if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n              _send_notice_core\n            fi\n          fi\n        fi\n      else\n        if [ ! -e \"${_Plr}/modules/path_alias_cache\" ] \\\n          && [ -e \"${_Plr}/modules/user\" ] \\\n          && [[ \"${_Plr}\" =~ \"static\" ]]; then\n          echo Vanilla Drupal 6.x Platform detected in ${_Plr}\n          if [ ! -e \"/boot/grub/grub.cfg\" ] \\\n            && [ ! 
-e \"/boot/grub/menu.lst\" ]; then\n            _if_hosted_sys\n            if [ \"${_hostedSys}\" = \"YES\" ]; then\n              _read_account_data\n              if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n                _send_notice_core\n              fi\n            fi\n          fi\n        fi\n      fi\n    fi\n  fi\n}\n\n_usage_count() {\n  for _Site in `find ${_usEr}/config/server_master/nginx/vhost.d \\\n    -maxdepth 1 -mindepth 1 -type f | sort`; do\n    #echo Counting Site ${_Site}\n    #echo \"${_THIS_U},${_Dom},vhost-exists\"\n    _Dom=$(echo ${_Site} | cut -d'/' -f9 | awk '{ print $1}' 2>&1)\n    _DEV_URL=NO\n    searchStringB=\".dev.\"\n    searchStringC=\".devel.\"\n    searchStringD=\".temp.\"\n    searchStringE=\".tmp.\"\n    searchStringF=\".temporary.\"\n    searchStringG=\".test.\"\n    searchStringH=\".testing.\"\n    searchStringI=\".stage.\"\n    searchStringJ=\".staging.\"\n    case ${_Dom} in\n      *\"$searchStringB\"*) _DEV_URL=YES ;;\n      *\"$searchStringC\"*) _DEV_URL=YES ;;\n      *\"$searchStringD\"*) _DEV_URL=YES ;;\n      *\"$searchStringE\"*) _DEV_URL=YES ;;\n      *\"$searchStringF\"*) _DEV_URL=YES ;;\n      *\"$searchStringG\"*) _DEV_URL=YES ;;\n      *\"$searchStringH\"*) _DEV_URL=YES ;;\n      *\"$searchStringI\"*) _DEV_URL=YES ;;\n      *\"$searchStringJ\"*) _DEV_URL=YES ;;\n      *)\n      ;;\n    esac\n    if [ -e \"${_usEr}/.drush/${_Dom}.alias.drushrc.php\" ]; then\n      #echo \"${_THIS_U},${_Dom},drushrc-exists\"\n      _Dir=$(cat ${_usEr}/.drush/${_Dom}.alias.drushrc.php \\\n        | grep \"site_path'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      _Plr=$(cat ${_usEr}/.drush/${_Dom}.alias.drushrc.php \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      _detect_vanilla_core\n      _fix_clear_cache\n      #echo Dir is ${_Dir}\n      if [ -e \"${_Dir}/drushrc.php\" ] \\\n    
    && [ -e \"${_Dir}/files\" ] \\\n        && [ -e \"${_Dir}/private\" ] \\\n        && [ ! -e \"${_Plr}/profiles/hostmaster\" ]; then\n        if [ ! -e \"${_Dir}/modules\" ]; then\n          mkdir ${_Dir}/modules\n        fi\n        #echo \"${_THIS_U},${_Dom},sitedir-exists\"\n        _Dat=$(cat ${_Dir}/drushrc.php \\\n          | grep \"options\\['db_name'\\] = \" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,';]//g\" 2>&1)\n        #echo Dat is ${_Dat}\n        if [ ! -z \"${_Dat}\" ] && [ -e \"${_Dir}\" ]; then\n          if [ -L \"${_Dir}/files\" ] \\\n            || [ -L \"${_Dir}/private\" ] \\\n            || [ -L \"${_usEr}/static/files\" ]; then\n            _DirSize=$(du -L -s ${_Dir} 2>/dev/null)\n          else\n            _DirSize=$(du -s ${_Dir} 2>/dev/null)\n          fi\n          _DirSize=$(echo \"${_DirSize}\" \\\n            | cut -d'/' -f1 \\\n            | awk '{ print $1}' \\\n            | sed \"s/[\\/\\s+]//g\" 2>&1)\n          _SumDir=$(( _SumDir + _DirSize ))\n          echo \"${_THIS_U},${_Dom},_DirSize:${_DirSize}\"\n          [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  ${_THIS_U},${_Dom},_DirSize:${_DirSize}\" >> \"${_uLogFil}\"\n        fi\n        if [ ! 
-z \"${_Dat}\" ]; then\n          if [ -e \"/root/.du.sql\" ]; then\n            _DatSize=$(grep \"/var/lib/mysql/${_Dat}$\" /root/.du.sql 2>&1)\n          elif [ -e \"/root/.du.local.sql\" ]; then\n            _DatSize=$(grep \"/var/lib/mysql/${_Dat}$\" /root/.du.local.sql 2>&1)\n          elif [ -e \"/var/lib/mysql/${_Dat}\" ]; then\n            _DatSize=$(du -s /var/lib/mysql/${_Dat} 2>/dev/null)\n          fi\n          _DatSize=$(echo \"${_DatSize}\" \\\n            | cut -d'/' -f1 \\\n            | awk '{ print $1}' \\\n            | sed \"s/[\\/\\s+]//g\" 2>&1)\n          if [ \"${_DEV_URL}\" = \"YES\" ]; then\n            _SkipDt=$(( _SkipDt + _DatSize ))\n            echo \"${_THIS_U},${_Dom},_DatSize:${_DatSize}:${_Dat},skip\"\n            [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  ${_THIS_U},${_Dom},_DatSize:${_DatSize}:${_Dat},skip\" >> \"${_uLogFil}\"\n          else\n            _SumDat=$(( _SumDat + _DatSize ))\n            echo \"${_THIS_U},${_Dom},_DatSize:${_DatSize}:${_Dat}\"\n            [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  ${_THIS_U},${_Dom},_DatSize:${_DatSize}:${_Dat}\" >> \"${_uLogFil}\"\n          fi\n        else\n          echo \"Database ${_Dat} for ${_Dom} does not exist\"\n        fi\n      fi\n    fi\n  done\n}\n\n_send_notice_sql() {\n  if [ \"${_hostedSys}\" != \"YES\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _MODE=$1\n  if [ \"${_MODE}\" = \"DEV\" ]; then\n    _SQL_LIM=${_SQL_DEV_LIMIT}\n    _SQL_NOW=${_SkipDtH}\n  else\n    _SQL_LIM=${_SQL_MIN_LIMIT}\n    _SQL_NOW=${_SumDatH}\n  fi\n  _BCC_EMAIL=\"omega8cc@gmail.com\"\n  _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"NOTICE: Your ${_MODE} DB Usage on [${_THIS_U}] is too high: ${_SQL_NOW} MB\" ${_CLIENT_EMAIL}\nHello,\n\nYou are using more resources than allocated in your subscription.\nYou 
currently have ${_CLIENT_CORES} Ægir ${_CLIENT_OPTION} ${_ENGINE_NR}.\n\nYour allowed database space for ${_MODE} sites is ${_SQL_LIM} MB,\nbut you are currently using ${_SQL_NOW} MB of database space.\n\nPlease reduce your usage by deleting no longer used sites, or purchase\nenough Ægir Engines to cover your current usage.\n\nYou can purchase more Ægir Engines easily online:\n\n  https://omega8.cc/pricing\n\nTo qualify as DEV/TEST with separate usage limits as specified\nin your subscription, the site should have in its main name\na special keyword with ==two dots== on ==both sides== like this:\n\n  .dev.\n  .devel.\n  .temp.\n  .tmp.\n  .temporary.\n  .test.\n  .testing.\n  .stage.\n  .staging.\n\nFor example, a site with the main name abc.test.foo.com is by default\nexcluded from your allocated database limits for LIVE sites.\n\nHowever, if we discover that anyone is using this method to hide real\nusage via listed keywords in the main site name and adding live domain(s)\nas aliases, such an account will be suspended without any warning.\n\n--\nThis email has been sent by your Ægir resources usage daily monitor.\n\nEOF\n  fi\n  echo \"INFO: Notice sent to ${_CLIENT_EMAIL} [${_THIS_U}]: OK\"\n  [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"INFO: Notice Your DB Usage sent to ${_CLIENT_EMAIL} [${_THIS_U}]: OK\" >> \"${_uLogFil}\"\n}\n\n_send_notice_disk() {\n  if [ \"${_hostedSys}\" != \"YES\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _BCC_EMAIL=\"omega8cc@gmail.com\"\n  _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"NOTICE: Your Disk Usage on [${_THIS_U}] is too high\" ${_CLIENT_EMAIL}\nHello,\n\nYou are using more resources than allocated in your subscription.\nYou currently have ${_CLIENT_CORES} Ægir ${_CLIENT_OPTION} ${_ENGINE_NR}.\n\nYour allowed disk space is ${_DSK_MIN_LIMIT} MB.\nYou are 
currently using ${_TotSizH} MB of disk space.\n\nPlease reduce your usage by deleting old backups, files,\nand no longer used sites, or purchase enough Ægir Engines\nto cover your current usage.\n\nYou can purchase more Ægir Engines easily online:\n\n  https://omega8.cc/buy\n\nNote that unlike with database space limits, for file-related disk space\nwe count all your sites, including all DEV/TEST sites, if they exist,\neven if they are marked as disabled in your Ægir control panel.\n\n--\nThis email has been sent by your Ægir resources usage daily monitor.\n\nEOF\n  fi\n  echo \"INFO: Notice sent to ${_CLIENT_EMAIL} [${_THIS_U}]: OK\"\n  [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"INFO: Notice Your Disk Usage sent to ${_CLIENT_EMAIL} [${_THIS_U}]: OK\" >> \"${_uLogFil}\"\n}\n\n\n_send_notice_gprd() {\n  if [ \"${_hostedSys}\" != \"YES\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _BCC_EMAIL=\"omega8cc@gmail.com\"\n  _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n  cat <<EOF | s-nail -b ${_BCC_EMAIL} \\\n    -s \"GDPR compliance for your Ægir account\" ${_CLIENT_EMAIL}\nHello,\n\nYes, yet another GDPR email, but it's important that you read and understand\nhow this law affects your hosting with us.\n\nThe General Data Protection Regulation (GDPR) is a European privacy law\nthat went into effect on May 25, 2018.\n\nThe GDPR replaced the EU Data Protection Directive, also known as\nDirective 95/46/EC, and applies a single data protection law\nthroughout the EU.\n\nData protection laws govern the way that businesses collect, use, and share\npersonal data about individuals. 
Among other things, they require businesses\nto process an individual’s personal data fairly and lawfully, allow individuals\nto exercise legal rights in respect of their personal data (for example,\nto access, correct or delete their personal data), and ensure appropriate\nsecurity protections are put in place to protect the personal data they process.\n\nWe have taken steps to ensure that we will be compliant with the GDPR\nby May 25, 2018.\n\nPlease read all details on our website at:\n\nhttps://omega8.cc/gdpr\nhttps://omega8.cc/gdpr-faq\nhttps://omega8.cc/gdpr-dpa\nhttps://omega8.cc/gdpr-portability\n\nPlease contact us if you have any questions: https://omega8.cc/contact\n\nThank you for your attention.\n\n---\nOmega8.cc\n\nEOF\n  fi\n  echo \"INFO: GDPR notice sent to ${_CLIENT_EMAIL} [${_THIS_U}]: OK\"\n}\n\n_check_limits() {\n  _SQL_MIN_LIMIT=0\n  _SQL_MAX_LIMIT=0\n  _SQL_DEV_LIMIT=0\n  _DSK_MIN_LIMIT=0\n  _DSK_MAX_LIMIT=0\n  _DSK_CLU_LIMIT=1\n  _read_account_data\n  if [ \"${_CLIENT_OPTION}\" = \"MONSTER\" ] || [ \"${_CLIENT_OPTION}\" = \"CLUSTER\" ]; then\n    _CLIENT_OPTION=MONSTER\n    if [ -z \"${_DSK_CLU_LIMIT}\" ]; then\n      _DSK_CLU_LIMIT=1\n    fi\n    _SQL_MIN_LIMIT=51200\n    _DSK_MIN_LIMIT=204800\n    _DSK_MAX_LIMIT=215040\n    _SQL_DEV_EXTRA=2\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 2048 ))\n    _DSK_MIN_LIMIT=$(( _DSK_MIN_LIMIT *= _DSK_CLU_LIMIT ))\n    _DSK_MAX_LIMIT=$(( _DSK_MAX_LIMIT *= _DSK_CLU_LIMIT ))\n  elif [ \"${_CLIENT_OPTION}\" = \"ULTRA\" ]; then\n    if [ -z \"${_DSK_CLU_LIMIT}\" ]; then\n      _DSK_CLU_LIMIT=1\n    fi\n    _SQL_MIN_LIMIT=5120\n    _DSK_MIN_LIMIT=102400\n    _DSK_MAX_LIMIT=107520\n    _SQL_DEV_EXTRA=2\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 1024 ))\n    _DSK_MIN_LIMIT=$(( _DSK_MIN_LIMIT *= _DSK_CLU_LIMIT ))\n    _DSK_MAX_LIMIT=$(( _DSK_MAX_LIMIT *= _DSK_CLU_LIMIT ))\n  elif [ \"${_CLIENT_OPTION}\" = \"PHANTOM\" ]; then\n    if [ -z \"${_DSK_CLU_LIMIT}\" ]; then\n      _DSK_CLU_LIMIT=1\n    fi\n    
_SQL_MIN_LIMIT=20480\n    _DSK_MIN_LIMIT=409600\n    _DSK_MAX_LIMIT=430080\n    _SQL_DEV_EXTRA=2\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 2048 ))\n    _DSK_MIN_LIMIT=$(( _DSK_MIN_LIMIT *= _DSK_CLU_LIMIT ))\n    _DSK_MAX_LIMIT=$(( _DSK_MAX_LIMIT *= _DSK_CLU_LIMIT ))\n  elif [ \"${_CLIENT_OPTION}\" = \"POWER\" ]; then\n    _SQL_MIN_LIMIT=10240\n    _DSK_MIN_LIMIT=204800\n    _SQL_DEV_EXTRA=2\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 1024 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 2560 ))\n  elif [ \"${_CLIENT_OPTION}\" = \"AERO\" ]; then\n    _SQL_MIN_LIMIT=5120\n    _DSK_MIN_LIMIT=102400\n    _SQL_DEV_EXTRA=2\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 1024 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 2560 ))\n  elif [ \"${_CLIENT_OPTION}\" = \"AGAIN\" ]; then\n    _SQL_MIN_LIMIT=40960\n    _DSK_MIN_LIMIT=409600\n    _SQL_DEV_EXTRA=0\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 512 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 1280 ))\n  elif [ \"${_CLIENT_OPTION}\" = \"HEADSPACE\" ]; then\n    _SQL_MIN_LIMIT=20480\n    _DSK_MIN_LIMIT=204800\n    _SQL_DEV_EXTRA=0\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 512 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 1280 ))\n  elif [ \"${_CLIENT_OPTION}\" = \"QUIET\" ]; then\n    _SQL_MIN_LIMIT=10240\n    _DSK_MIN_LIMIT=102400\n    _SQL_DEV_EXTRA=0\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 512 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 1280 ))\n  elif [ \"${_CLIENT_OPTION}\" = \"EDGE\" ] \\\n    || [ \"${_CLIENT_OPTION}\" = \"SSD\" ] \\\n    || [ \"${_CLIENT_OPTION}\" = \"CLASSIC\" ]; then\n    _CLIENT_OPTION=EDGE\n    _SQL_MIN_LIMIT=2048\n    _DSK_MIN_LIMIT=61440\n    _SQL_DEV_EXTRA=2\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 512 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 1280 ))\n  elif [ \"${_CLIENT_OPTION}\" = \"MINI\" ]; then\n    _SQL_MIN_LIMIT=2048\n    _DSK_MIN_LIMIT=30720\n    _SQL_DEV_EXTRA=1\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 512 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 1280 ))\n  elif [ \"${_CLIENT_OPTION}\" = \"MICRO\" 
]; then\n    _SQL_MIN_LIMIT=1024\n    _DSK_MIN_LIMIT=10240\n    _SQL_DEV_EXTRA=1\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 256 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 640 ))\n  else\n    _SQL_MIN_LIMIT=1024\n    _DSK_MIN_LIMIT=15360\n    _SQL_DEV_EXTRA=1\n    _SQL_MAX_LIMIT=$(( _SQL_MIN_LIMIT + 256 ))\n    _DSK_MAX_LIMIT=$(( _DSK_MIN_LIMIT + 640 ))\n  fi\n  _SQL_MIN_LIMIT=$(( _SQL_MIN_LIMIT *= _CLIENT_CORES ))\n  _DSK_MIN_LIMIT=$(( _DSK_MIN_LIMIT *= _CLIENT_CORES ))\n  _SQL_MAX_LIMIT=$(( _SQL_MAX_LIMIT *= _CLIENT_CORES ))\n  _DSK_MAX_LIMIT=$(( _DSK_MAX_LIMIT *= _CLIENT_CORES ))\n  _SQL_DEV_LIMIT=${_SQL_MIN_LIMIT}\n  _SQL_DEV_LIMIT=$(( _SQL_DEV_LIMIT *= _CLIENT_CORES ))\n  _SQL_DEV_LIMIT=$(( _SQL_DEV_LIMIT *= _SQL_DEV_EXTRA ))\n  if [ ! -z \"${_EXTRA_ENGINE}\" ]; then\n    if [ -e \"/data/disk/${_THIS_U}/log/extra_edge.txt\" ]; then\n      _SQL_ADD_LIMIT=2048\n      _DSK_ADD_LIMIT=61440\n    elif [ -e \"/data/disk/${_THIS_U}/log/extra_aero.txt\" ]; then\n      _SQL_ADD_LIMIT=5120\n      _DSK_ADD_LIMIT=102400\n    elif [ -e \"/data/disk/${_THIS_U}/log/extra_power.txt\" ]; then\n      _SQL_ADD_LIMIT=10240\n      _DSK_ADD_LIMIT=204800\n    fi\n    _SQL_ADD_LIMIT=$(( _SQL_ADD_LIMIT *= _EXTRA_ENGINE ))\n    _DSK_ADD_LIMIT=$(( _DSK_ADD_LIMIT *= _EXTRA_ENGINE ))\n    _SQL_MIN_LIMIT=$(( _SQL_MIN_LIMIT + _SQL_ADD_LIMIT ))\n    _DSK_MIN_LIMIT=$(( _DSK_MIN_LIMIT + _DSK_ADD_LIMIT ))\n    _SQL_MAX_LIMIT=$(( _SQL_MAX_LIMIT + _SQL_ADD_LIMIT ))\n    _DSK_MAX_LIMIT=$(( _DSK_MAX_LIMIT + _DSK_ADD_LIMIT ))\n    echo _EXTRA_ENGINE is ${_EXTRA_ENGINE}\n  fi\n  echo _CLIENT_CORES is ${_CLIENT_CORES}\n  echo _SQL_MIN_LIMIT is ${_SQL_MIN_LIMIT}\n  echo _SQL_MAX_LIMIT is ${_SQL_MAX_LIMIT}\n  echo _SQL_DEV_LIMIT is ${_SQL_DEV_LIMIT}\n  echo _DSK_MIN_LIMIT is ${_DSK_MIN_LIMIT}\n  echo _DSK_MAX_LIMIT is ${_DSK_MAX_LIMIT}\n\n  if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n    echo \"  SQL Usage Limit for Production Sites is ${_SQL_MIN_LIMIT} MB\" >> \"${_uLogFil}\"\n    echo \"  SQL Usage 
Limit for Dev/Test Sites is ${_SQL_DEV_LIMIT} MB\" >> \"${_uLogFil}\"\n    echo \"  Disk Usage Limit for Files and Solr is ${_DSK_MIN_LIMIT} MB\" >> \"${_uLogFil}\"\n    echo \" \" >> \"${_uLogFil}\"\n  fi\n\n  if [ \"${_SumDatH}\" -gt \"${_SQL_MAX_LIMIT}\" ]; then\n    if [ ! -e \"${_usEr}/log/CANCELLED\" ] \\\n      && [ ! -e \"${_usEr}/log/proxied.pid\" ]; then\n      if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n        _send_notice_sql \"LIVE\"\n      fi\n    fi\n    echo SQL Usage for ${_THIS_U} above limits\n    [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  SQL Usage for ${_THIS_U} above limits\" >> \"${_uLogFil}\"\n  elif [ \"${_SkipDtH}\" -gt \"${_SQL_DEV_LIMIT}\" ]; then\n    if [ ! -e \"${_usEr}/log/CANCELLED\" ] \\\n      && [ ! -e \"${_usEr}/log/proxied.pid\" ]; then\n      if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n        _send_notice_sql \"DEV\"\n      fi\n    fi\n    echo SQL Usage for ${_THIS_U} above limits\n    [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  SQL Usage for ${_THIS_U} above limits\" >> \"${_uLogFil}\"\n  else\n    echo SQL Usage for ${_THIS_U} below limits\n    [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  SQL Usage for ${_THIS_U} below limits\" >> \"${_uLogFil}\"\n  fi\n  if [ \"${_TotSizH}\" -gt \"${_DSK_MAX_LIMIT}\" ]; then\n    if [ ! -e \"${_usEr}/log/CANCELLED\" ] \\\n      && [ ! -e \"${_usEr}/log/proxied.pid\" ]; then\n      if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n        _send_notice_disk\n      fi\n    fi\n    echo Disk Usage for ${_THIS_U} above limits\n    [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  Disk Usage for ${_THIS_U} above limits\" >> \"${_uLogFil}\"\n  else\n    echo Disk Usage for ${_THIS_U} below limits\n    [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"  Disk Usage for ${_THIS_U} below limits\" >> \"${_uLogFil}\"\n  fi\n  if [ ! -e \"${_usEr}/log/GDPRsent.log\" ]; then\n    if [ ! -e \"${_usEr}/log/CANCELLED\" ] \\\n      && [ ! 
-e \"${_usEr}/log/proxied.pid\" ]; then\n      if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n        _send_notice_gprd\n        touch ${_usEr}/log/GDPRsent.log\n        echo GDPR info for ${_THIS_U} sent\n      fi\n    fi\n  fi\n}\n\n_count_cpu() {\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  _CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n_get_load() {\n  read -r _one _five _rest <<< \"$(cat /proc/loadavg)\"\n  _O_LOAD=$(awk -v _load_value=\"${_one}\" -v _cpus=\"${_CPU_NR}\" 'BEGIN { printf \"%.1f\", (_load_value / _cpus) * 100 }')\n}\n\n_load_control() {\n  # shellcheck disable=SC1091\n  [ -e \"/root/.barracuda.cnf\" ] && source /root/.barracuda.cnf\n  : \"${_CPU_TASK_RATIO:=3.1}\"\n  _CPU_TASK_RATIO=\"$(_sanitize_number \"${_CPU_TASK_RATIO}\")\"\n  _O_LOAD_MAX=$(echo \"${_CPU_TASK_RATIO} * 100\" | bc -l)\n  _get_load\n}\n\n_sub_count_usr_home() {\n  if [ -e \"$1\" ]; then\n    _HqmSiz=$(du -s $1 2>/dev/null)\n    _HqmSiz=$(echo \"${_HqmSiz}\" \\\n      | cut -d'/' -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\/\\s+]//g\" 2>&1)\n    _HxmSiz=$(( _HxmSiz + _HqmSiz ))\n    ### echo $1 disk usage is $_HqmSiz\n    ### echo _HxmSiz total is $_HxmSiz\n  fi\n}\n\n_usage_action() {\n  for _usEr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n    _count_cpu\n    _load_control\n    if [ -e \"${_usEr}/config/server_master/nginx/vhost.d\" ]; then\n      if (( $(echo \"${_O_LOAD} < ${_O_LOAD_MAX}\" | bc -l) )); then\n        _HomSiz=0\n        _HqmSiz=0\n        _HxmSiz=0\n        _SkipDt=0\n        
_SumDat=0\n        _SumDir=0\n        _TotSiz=0\n        _THIS_U=$(echo ${_usEr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n        _uLogDir=\"${_usEr}/static/usage\"\n        _uLogFil=\"${_uLogDir}/usage-${_NOW}.log\"\n        [ ! -e \"${_uLogDir}\" ] && mkdir -p \"${_uLogDir}\"\n        _THIS_HM_SITE=$(cat ${_usEr}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"site_path'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        _THIS_HM_PLR=$(cat ${_usEr}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"root'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n        echo load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\n        if [ ! -e \"${_usEr}/log/skip-force-cleanup.txt\" ]; then\n          cd ${_usEr}\n          echo \"Remove various tmp/dot files breaking du command\"\n          find . -name \"exclude.tag\" -type f | xargs rm -rf &> /dev/null\n          find . -name \".DS_Store\" -type f | xargs rm -rf &> /dev/null\n          find . -name \"*~\" -type f | xargs rm -rf &> /dev/null\n          find . -name \"*#\" -type f | xargs rm -rf &> /dev/null\n          find . -name \".#*\" -type f | xargs rm -rf &> /dev/null\n          find . -name \"*--\" -type f | xargs rm -rf &> /dev/null\n          find . -name \"._*\" -type f | xargs rm -rf &> /dev/null\n          find . -name \"*~\" -type l | xargs rm -rf &> /dev/null\n          find . -name \"*#\" -type l | xargs rm -rf &> /dev/null\n          find . -name \".#*\" -type l | xargs rm -rf &> /dev/null\n          find . -name \"*--\" -type l | xargs rm -rf &> /dev/null\n          find . 
-name \"._*\" -type l | xargs rm -rf &> /dev/null\n        fi\n        echo \"Counting User ${_usEr}\"\n        if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n          cat << EOF > \"${_uLogFil}\"\nCounting Usage for User ${_usEr} started on $(date)\n\nDetailed usage per site is shown first and usage summary further below.\n\nTo qualify as DEV/TEST with separate usage limits as specified\nin your subscription, the site should have in its main name\na special keyword with ==two dots== on ==both sides== like this:\n\n  .dev.\n  .devel.\n  .temp.\n  .tmp.\n  .temporary.\n  .test.\n  .testing.\n  .stage.\n  .staging.\n\nFor example, a site with the main name abc.test.foo.com is by default\nexcluded from your allocated database limits for LIVE sites.\n\nNote that unlike with database space limits, for file-related disk space\nwe count all your sites, including all DEV/TEST sites, if they exist,\neven if they are marked as disabled in your Ægir control panel.\n\n  _DirSize is the site files usage in kB\n  _DatSize is the site database usage in kB\n  The optional =skip= at the end of a line identifies a DEV/TEST site\n\nEOF\n        fi\n        _DOW=$(date +%u)\n        _DOW=${_DOW//[^1-7]/}\n        if [ \"${_DOW}\" = \"2\" ]; then\n          _detect_deprecated_php\n        fi\n        _usage_count\n        if [ -d \"/home/${_THIS_U}.ftp\" ]; then\n          for _uH in `find /home/${_THIS_U}.* -maxdepth 0 -mindepth 0 | sort`; do\n            if [ -d \"${_uH}\" ]; then\n              _sub_count_usr_home ${_uH}\n            fi\n          done\n          for _uR in `find /var/solr9/data/oct.${_THIS_U}.* -maxdepth 0 -mindepth 0 | sort`; do\n            if [ -d \"${_uR}\" ]; then\n              _sub_count_usr_home ${_uR}\n            fi\n          done\n          for _uR in `find /var/solr7/data/oct.${_THIS_U}.* -maxdepth 0 -mindepth 0 | sort`; do\n            if [ -d \"${_uR}\" ]; then\n              _sub_count_usr_home ${_uR}\n            fi\n          done\n          for 
_uO in `find /opt/solr4/${_THIS_U}.* -maxdepth 0 -mindepth 0 | sort`; do\n            if [ -d \"${_uO}\" ]; then\n              _sub_count_usr_home ${_uO}\n            fi\n          done\n        fi\n        if [ -L \"${_usEr}/backups\" ] \\\n          || [ -L \"${_usEr}/src\" ] \\\n          || [ -L \"${_usEr}/static/files\" ]; then\n          _HomSiz=$(du -L -s ${_usEr} 2>/dev/null)\n        else\n          _HomSiz=$(du -s ${_usEr} 2>/dev/null)\n        fi\n        _HomSiz=$(echo \"${_HomSiz}\" \\\n          | cut -d'/' -f1 \\\n          | awk '{ print $1}' \\\n          | sed \"s/[\\/\\s+]//g\" 2>&1)\n\n        _TotSiz=$(( _HomSiz + _HxmSiz ))\n\n        _TotSizH=$(echo \"scale=0; ${_TotSiz}/1024\" | bc 2>&1)\n        _SumDirH=$(echo \"scale=0; ${_SumDir}/1024\" | bc 2>&1)\n        _SumDatH=$(echo \"scale=0; ${_SumDat}/1024\" | bc 2>&1)\n        _SkipDtH=$(echo \"scale=0; ${_SkipDt}/1024\" | bc 2>&1)\n\n        echo _TotSiz is ${_TotSiz} kB or ${_TotSizH} MB\n        echo _SumDir is ${_SumDir} kB or ${_SumDirH} MB\n        echo _SumDat is ${_SumDat} kB or ${_SumDatH} MB\n        echo _SkipDt is ${_SkipDt} kB or ${_SkipDtH} MB\n\n        if [ \"${_THIS_MODE}\" = \"verbose\" ]; then\n          cat << EOF >> \"${_uLogFil}\"\n\n  LiveDb Memory Space Used is ${_SumDat} kB or ${_SumDatH} MB\n  DevDb Memory Space Used is ${_SkipDt} kB or ${_SkipDtH} MB\n  Total Disk Space (Files and Solr) Used is ${_TotSiz} kB or ${_TotSizH} MB\n\nEOF\n        fi\n\n        _if_hosted_sys\n        if [ \"${_hostedSys}\" = \"YES\" ]; then\n          if [ -e \"${_THIS_HM_SITE}\" ]; then\n            _check_limits\n            su -s /bin/bash - ${_THIS_U} -c \"drush8 @hostmaster \\\n              variable-set --always-set site_footer 'Usage on ${_DATE} \\\n              | Files <strong>${_TotSizH}</strong> MB \\\n              | LiveDb <strong>${_SumDatH}</strong> MB \\\n              | DevDb <strong>${_SkipDtH}</strong> MB \\\n              | <strong>${_CLIENT_CORES}</strong> \\\n        
      ${_CLIENT_OPTION} ${_ENGINE_NR} \\\n              | CLI <strong>${_CLIENT_CLI}</strong> \\\n              | FPM <strong>${_CLIENT_FPM}</strong>'\" &> /dev/null\n            wait\n            if [ ! -e \"${_usEr}/log/CANCELLED\" ] \\\n              && [ \"${_DEV_EXC}\" = \"NO\" ] \\\n              && [ ! -e \"${_usEr}/log/proxied.pid\" ]; then\n              _eMail=${_CLIENT_EMAIL//\\\\\\@/\\@}\n              _AegirUrl=$(cat ${_usEr}/log/domain.txt 2>&1)\n              if [ \"${_TotSizH}\" -gt \"${_DSK_MAX_LIMIT}\" ]; then\n                _Files=\"!x!FilesAll\"\n              else\n                _Files=\"FilesAll\"\n              fi\n              if [ \"${_SumDatH}\" -gt \"${_SQL_MAX_LIMIT}\" ]; then\n                _DbsL=\"!x!DbsLive\"\n              else\n                _DbsL=\"DbsLive\"\n              fi\n              if [ \"${_SkipDtH}\" -gt \"${_SQL_DEV_LIMIT}\" ]; then\n                _DbsD=\"!x!DbsDev\"\n              else\n                _DbsD=\"DbsDev\"\n              fi\n              if [ \"${_THIS_MODE}\" = \"verbose\" ] || [ -z \"${_THIS_MODE}\" ]; then\n                _LOG_FILE=\"usage-latest-verbose.log\"\n              elif [ \"${_THIS_MODE}\" = \"silent\" ]; then\n                _LOG_FILE=\"usage-latest-silent.log\"\n              fi\n              echo \"${_AegirUrl},${_Files}:${_TotSizH},${_DbsL}:${_SumDatH},${_DbsD}:${_SkipDtH},${_eMail},Subs:${_CLIENT_OPTION}:${_CLIENT_CORES},${_THIS_U}\" >> /var/log/boa/usage/${_LOG_FILE}\n            fi\n            _TmDir=\"${_THIS_HM_PLR}/profiles/hostmaster/themes/aegir/eldir\"\n            _PgTpl=\"${_TmDir}/page.tpl.php\"\n            _EldirF=\"0001-Print-site_footer-if-defined.patch\"\n            _TplPatch=\"/var/xdrago/conf/${_EldirF}\"\n            if [ -e \"${_PgTpl}\" ] && [ -e \"${_TplPatch}\" ]; then\n              _IS_SF=$(grep \"site_footer\" ${_PgTpl} 2>&1)\n              if [[ ! 
\"${_IS_SF}\" =~ \"site_footer\" ]]; then\n                cd ${_TmDir}\n                patch -p1 < ${_TplPatch} &> /dev/null\n                cd\n              fi\n            fi\n            su -s /bin/bash - ${_THIS_U} \\\n              -c \"drush8 @hostmaster cache-clear all\" &> /dev/null\n            wait\n          fi\n        else\n          if [ -e \"${_THIS_HM_SITE}\" ]; then\n            _check_limits\n            su -s /bin/bash - ${_THIS_U} -c \"drush8 @hostmaster \\\n              variable-set --always-set site_footer 'Usage on ${_DATE} \\\n              | Files <strong>${_TotSizH}</strong> MB \\\n              | LiveDb <strong>${_SumDatH}</strong> MB \\\n              | DevDb <strong>${_SkipDtH}</strong> MB \\\n              | <strong>${_CLIENT_CORES}</strong> \\\n              ${_CLIENT_OPTION} ${_ENGINE_NR} \\\n              | CLI <strong>${_CLIENT_CLI}</strong> \\\n              | FPM <strong>${_CLIENT_FPM}</strong>'\" &> /dev/null\n            wait\n            if [ ! -e \"${_usEr}/log/CANCELLED\" ] \\\n              && [ \"${_DEV_EXC}\" = \"NO\" ] \\\n              && [ ! 
-e \"${_usEr}/log/proxied.pid\" ]; then\n              _eMail=${_CLIENT_EMAIL//\\\\\\@/\\@}\n              _AegirUrl=$(cat ${_usEr}/log/domain.txt 2>&1)\n              if [ \"${_TotSizH}\" -gt \"${_DSK_MAX_LIMIT}\" ]; then\n                _Files=\"!x!FilesAll\"\n              else\n                _Files=\"FilesAll\"\n              fi\n              if [ \"${_SumDatH}\" -gt \"${_SQL_MAX_LIMIT}\" ]; then\n                _DbsL=\"!x!DbsLive\"\n              else\n                _DbsL=\"DbsLive\"\n              fi\n              if [ \"${_SkipDtH}\" -gt \"${_SQL_DEV_LIMIT}\" ]; then\n                _DbsD=\"!x!DbsDev\"\n              else\n                _DbsD=\"DbsDev\"\n              fi\n              if [ \"${_THIS_MODE}\" = \"verbose\" ] || [ -z \"${_THIS_MODE}\" ]; then\n                _LOG_FILE=\"usage-latest-verbose.log\"\n              elif [ \"${_THIS_MODE}\" = \"silent\" ]; then\n                _LOG_FILE=\"usage-latest-silent.log\"\n              fi\n              echo \"${_AegirUrl},${_Files}:${_TotSizH},${_DbsL}:${_SumDatH},${_DbsD}:${_SkipDtH},${_eMail},Subs:${_CLIENT_OPTION}:${_CLIENT_CORES},${_THIS_U}\" >> /var/log/boa/usage/${_LOG_FILE}\n            fi\n            su -s /bin/bash - ${_THIS_U} \\\n              -c \"drush8 @hostmaster variable-set \\\n              --always-set site_footer ''\" &> /dev/null\n            wait\n            su -s /bin/bash - ${_THIS_U} \\\n              -c \"drush8 @hostmaster cache-clear all\" &> /dev/null\n            wait\n          fi\n        fi\n        echo \"Done for ${_usEr}\"\n        [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \" \" >> \"${_uLogFil}\"\n        [ \"${_THIS_MODE}\" = \"verbose\" ] && echo \"Counting Usage for User ${_usEr} completed on $(date)\" >> \"${_uLogFil}\"\n      else\n        echo \"load is ${_O_LOAD} while maxload is ${_O_LOAD_MAX}\"\n        echo \"...we have to wait...\"\n      fi\n      echo\n      echo\n    fi\n  done\n}\n\n###--------------------###\necho \"INFO: Starting 
usage monitoring on $(date)\"\n_NOW=$(date +%y%m%d-%H%M%S)\n_NOW=${_NOW//[^0-9-]/}\n_DATE=$(date)\n_hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\nmkdir -p /var/log/boa/usage\nif [ \"${1}\" = \"verbose\" ] || [ -z \"${1}\" ]; then\n  _THIS_MODE=\"verbose\"\n  rm -f /var/log/boa/usage/usage-latest-verbose.log\nelif [ \"${1}\" = \"silent\" ]; then\n  _THIS_MODE=\"silent\"\n  rm -f /var/log/boa/usage/usage-latest-silent.log\nfi\n_usage_action >/var/log/boa/usage/usage-${_NOW}.log 2>&1\necho \"INFO: Completing usage monitoring on $(date)\"\nexit 0\n\n"
  },
  {
    "path": "docs/BACKUPS.md",
    "content": "\n# New PRO Backups are now available!\n\n  * New PRO Backups for BOA SysAdmin [docs/BACKUP_ROOT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_ROOT.md)\n  * New PRO Backups for Octopus Lshell User [docs/BACKUP_USER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md)\n  * New PRO Backups Retention Policy Configuration [docs/BACKUP_RETENTION.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_RETENTION.md)\n  * New PRO Backups Supported Regions and Bucket Creation Guidelines [docs/BACKUP_REGIONS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_REGIONS.md)\n\n# Automated, encrypted backups to Amazon S3 bucket (legacy for LTS)\n\n  * This legacy feature is available on self-hosted **BOA LTS** only.\n  * Note that the provided `backboa` tool uses symmetric password-only encryption.\n  * You can configure the AWS Region you prefer and the Backup Rotation policy.\n\n  It will archive all directories required to restore your data (site files,\n  database archives, Nginx configuration, and more) on a freshly installed BOA:\n\n```text\n    /etc /var/aegir /var/www /home /data\n```\n\n  It will start to run nightly at 3:15 AM (server time) only once you add\n  all required `_AWS_*` variables in the `/root/.barracuda.cnf` file and run the\n  special command `backboa install` while logged in as root.\n\n  Full backups are scheduled on Sunday, unless `_AWS_FLC` is set to a custom value.\n\n  To restore any file from backups created with the `backboa` tool, you can use\n  the same script on the same or any other **BOA** server.\n\n  Please read below for details.\n\n\n## CONFIGURATION\n\n  Add the four (4) required lines listed below to your `/root/.barracuda.cnf` file.\n  Required lines are marked with `[R]` and optional with `[O]`:\n\n```ini\n    _AWS_KEY='Your AWS Access Key ID'     ### [R] From your AWS S3 settings\n    _AWS_SEC='Your AWS Secret Access Key' ### [R] From your AWS S3 settings\n    _AWS_PWD='Your 
Secret Password'       ### [R] Generate with 'openssl rand -base64 32'\n    _AWS_REG='Your AWS Region ID'         ### [R] By default 'us-east-1'\n\n    _AWS_TTL='Your Backup Rotation'       ### [O] By default '30D'\n    _AWS_FLC='Your Backup Full Cycle'     ### [O] By default '7D'\n    _AWS_VLV='Your Backup Log Verbosity'  ### [O] By default 'warning' -- [ewnid]\n    _AWS_EXB='Exclude Ægir Backups'       ### [O] By default 'YES' -- can be YES/NO\n```\n\n    For more detailed exclude/include configuration, see the notes further below.\n\n    Supported values to use as `_AWS_REG` (the code shown after the `#` comment):\n\n```ini\n      Africa (Cape Town)         # af-south-1\n      Asia Pacific (Hong Kong)   # ap-east-1\n      Asia Pacific (Hyderabad)   # ap-south-2\n      Asia Pacific (Jakarta)     # ap-southeast-3\n      Asia Pacific (Melbourne)   # ap-southeast-4\n      Asia Pacific (Mumbai)      # ap-south-1\n      Asia Pacific (Osaka)       # ap-northeast-3\n      Asia Pacific (Seoul)       # ap-northeast-2\n      Asia Pacific (Singapore)   # ap-southeast-1\n      Asia Pacific (Sydney)      # ap-southeast-2\n      Asia Pacific (Tokyo)       # ap-northeast-1\n      Canada (Central)           # ca-central-1\n      Canada West (Calgary)      # ca-west-1\n      Europe (Frankfurt)         # eu-central-1\n      Europe (Ireland)           # eu-west-1\n      Europe (London)            # eu-west-2\n      Europe (Milan)             # eu-south-1\n      Europe (Paris)             # eu-west-3\n      Europe (Spain)             # eu-south-2\n      Europe (Stockholm)         # eu-north-1\n      Europe (Zurich)            # eu-central-2\n      Israel (Tel Aviv)          # il-central-1\n      Middle East (Bahrain)      # me-south-1\n      Middle East (UAE)          # me-central-1\n      South America (São Paulo)  # sa-east-1\n      US East (N. Virginia)      # us-east-1\n      US East (Ohio)             # us-east-2\n      US West (N. 
California)    # us-west-1\n      US West (Oregon)           # us-west-2\n\n      ### Special regions, see: https://aws.amazon.com/govcloud-us/\n\n      AWS GovCloud (US-East)     # us-gov-east-1\n      AWS GovCloud (US-West)     # us-gov-west-1\n```\n\n    Source: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region\n\n    You have to use the S3 Console at https://console.aws.amazon.com/s3/home\n    (before attempting to run the initial backup!) to create an S3 bucket in the\n    desired region with the correct name, as shown below.\n\n    Replace only the `srv-foo-bar` part after the static `daily-boa-` prefix with\n    your system hostname, typically displayed when you type `uname -n`,\n    with all dots replaced by dashes for compatibility reasons:\n\n      `daily-boa-srv-foo-bar`\n\n    While duplicity should be able to create a new bucket on demand, in practice\n    it almost never works due to typical delays between various AWS regions.\n\n    Please run: `backboa test` to make sure that the connection works.\n\n## INSTALLATION\n\n```sh\n  $ backboa install\n```\n\n## USAGE\n\n```sh\n  $ backboa backup\n  $ backboa cleanup\n  $ backboa list\n  $ backboa status\n  $ backboa test\n  $ backboa restore file [time] destination\n  $ backboa retrieve file [time] destination hostname\n```\n\n## RESTORE EXAMPLES\n\n  Note: Be careful while restoring not to prepend a slash to the path!\n\n  Restoring a single file to `tmp/`\n\n```sh\n  $ backboa restore data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz\n```\n\n  Restoring an older version of a directory to `tmp/`, using an interval or a full date\n\n```sh\n  $ backboa restore data/disk/o1/backups 7D8h8s tmp/backups\n  $ backboa restore data/disk/o1/backups 2014/11/11 tmp/backups\n```\n\n  Restoring data on a different server\n\n```sh\n  $ backboa retrieve data/disk/o1/backups/foo.tar.gz tmp/foo.tar.gz srv.foo.bar\n  $ backboa retrieve data/disk/o1/backups 2014/11/11 tmp/backups srv.foo.bar\n```\n\n## NOTES\n\n  The `srv.foo.bar` is 
the hostname of the BOA system that was backed up before.\n  In `retrieve` mode it will use the `_AWS_*` variables configured\n  in the current system's `/root/.barracuda.cnf` file - so make sure to edit\n  this file to temporarily set/replace all four required `_AWS_*` variables\n  with those used originally on the host you are retrieving data from! You\n  should keep them secret and manage them in your offline password manager app.\n\n  There is also another tool to run extra remote backups: `duobackboa`.\n  The only differences are listed below. If you wish to receive daily\n  backup reports generated by `duobackboa` via email, please also add the\n  `_MY_EMAIL=\"my@email\"` line in the `/root/.duobackboa.cnf` file, if used.\n\n  * The extra script filename and command: `duobackboa`\n  * Separate configuration file: `/root/.duobackboa.cnf`\n  * S3 bucket naming convention: `daily-remote-srv-foo-bar`\n  * Cron entry set to start at 5:15 AM (server time)\n  * Full backups are scheduled on Saturday\n\n  The `duobackboa` script also has a built-in how-to: just type `duobackboa`\n  when logged in as system root.\n\n  You can use a file that lists folders and files that should be included\n  or excluded from the backups.\n\n  * If `/root/.backboa.exclude` exists it will be passed as the\n    `--exclude-filelist` parameter of duplicity\n  * If `/root/.backboa.include` exists it will be passed as the\n    `--include-filelist` parameter of duplicity\n\n  Note: for `duobackboa` the optional files should use these filenames:\n\n  * If `/root/.duobackboa.exclude` exists it will be passed as the\n    `--exclude-filelist` parameter of duplicity\n  * If `/root/.duobackboa.include` exists it will be passed as the\n    `--include-filelist` parameter of duplicity\n\n  The format of both files should be as described in the duplicity\n  documentation.\n\n  See also: https://duplicity.gitlab.io\n"
  },
  {
    "path": "docs/BACKUP_REGIONS.md",
"content": "\n# Supported Regions and Bucket Creation Guidelines\n\nThis document outlines the supported regions and configuration guidelines for the `multiback` (used by root) and `mybackup` (used by regular users) backup scripts. It consolidates details about supported storage services, region IDs, bucket creation behavior, and user configuration steps.\n\n- New PRO Backups for BOA SysAdmin [docs/BACKUP_ROOT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_ROOT.md)\n- New PRO Backups for Octopus Lshell User [docs/BACKUP_USER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md)\n- New PRO Backups Retention Policy Configuration [docs/BACKUP_RETENTION.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_RETENTION.md)\n- New PRO Backups Supported Regions and Bucket Creation Guidelines (this document) [docs/BACKUP_REGIONS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_REGIONS.md)\n\n---\n\n## General Bucket Behavior\n\nMost providers allow **automatic bucket creation** if sufficient credentials and permissions are provided, so you don't need to figure it out yourself. However, some providers (e.g., **Linode**) require **manual bucket creation** before the first backup, and others (e.g., **Amazon S3**) are unreliable for automatic creation due to propagation delays between AWS regions. Manual bucket creation is recommended when your provider is known to be unreliable for automatic creation, and required where automatic creation is not supported. 
Below is a detailed breakdown for each provider.\n\n---\n\n### Supported Regions by Service\n\nThe following regions are supported across various storage services:\n\n---\n\n#### **Wasabi Hot Cloud Storage (wasabi)**\n\n- **Bucket Creation:** Supported automatically.\n- **Supported Regions:**\n\n| Data Center Location     | Region Code     |\n|--------------------------|-----------------|\n| Virginia, USA            | `us-east-1`     |\n| Virginia, USA            | `us-east-2`     |\n| Oregon, USA              | `us-west-1`     |\n| Plano, Texas, USA        | `us-central-1`  |\n| Toronto, Canada          | `ca-central-1`  |\n| London, England          | `eu-west-1`     |\n| Paris, France            | `eu-west-2`     |\n| Amsterdam, Netherlands   | `eu-central-1`  |\n| Frankfurt, Germany       | `eu-central-2`  |\n| Milan, Italy             | `eu-south-1`    |\n| Tokyo, Japan             | `ap-northeast-1`|\n| Osaka, Japan             | `ap-northeast-2`|\n| Singapore                | `ap-southeast-1`|\n| Sydney, Australia        | `ap-southeast-2`|\n\nFor more, refer to [Wasabi Regions](https://wasabi.com/company/storage-regions).\n\n---\n\n#### **Backblaze B2 (b2)**\n\n- **Bucket Creation:** Automatic with proper credentials.\n- **Supported Regions:**\n\n| Data Center Location    | Region Code  |\n|-------------------------|--------------|\n| Amsterdam, Netherlands  | `eu-central` |\n| Reston, Virginia        | `us-east`    |\n| Sacramento, California  | `us-west`    |\n| Stockton, California    | `us-west`    |\n| Phoenix, Arizona        | `us-west`    |\n| Toronto, Ontario        | `ca-east`    |\n\nBackblaze currently has data centers in Sacramento, California; Stockton, California; Phoenix, Arizona; Reston, Virginia; Amsterdam, Netherlands and Toronto, Ontario.\n\nAccounts that use the US-West region store data in both the Sacramento and Phoenix data centers.\n\nAccounts that use the EU-Central region store data in the Amsterdam data center. 
If you are in the European Union or otherwise in or near Europe, transfers for Backblaze Computer Backup and Backblaze B2 should have lower latency. As a result, you can get better transfer rates and more bandwidth per thread.\n\nAccounts that use the US-East region store data in the Reston, Virginia data center. The data center is operated by Coresite, a well-known and respected data center operator.\n\nThe newest region, known as CA East, is located in Toronto, Ontario.\n\nWhen you create a Backblaze B2 account, you choose whether that account’s data is stored in the US East region, the US West region, the EU Central region or the Canada East region. The choice that you make during account creation dictates where all of that account’s data is stored. After you create your Backblaze B2 account, you cannot change your selected region.\n\nThis means that you need separate accounts per region if needed, and the region codes in the table above are for informational purposes only.\n\nMore details available at the [Backblaze B2 Regions documentation](https://www.backblaze.com/docs/cloud-storage-data-regions).\n\n---\n\n#### **DigitalOcean Spaces (do-spaces)**\n\n- **Bucket Creation:** Automatic if credentials have write permissions.\n- **Supported Regions:**\n\n| Data Center Location       | Region Code |\n|----------------------------|-------------|\n| New York City, United States | `nyc3`    |\n| San Francisco, United States | `sfo2`    |\n| San Francisco, United States | `sfo3`    |\n| Amsterdam, Netherlands     | `ams3`      |\n| Singapore                  | `sgp1`      |\n| London, United Kingdom     | `lon1`      |\n| Frankfurt, Germany         | `fra1`      |\n| Toronto, Canada            | `tor1`      |\n| Bangalore, India           | `blr1`      |\n| Sydney, Australia          | `syd1`      |\n\nFor the most current and detailed information, please refer to DigitalOcean's [Regional Availability 
documentation](https://docs.digitalocean.com/platform/regional-availability/).\n\n---\n\n#### **Amazon Web Services (aws, aws-one-zone, aws-standard-ia)**\n\n- **Bucket Creation:** Supported but unreliable for automatic creation due to propagation delays between AWS regions. Manual bucket creation is recommended -- see Required Bucket Naming Convention below.\n- **Supported Regions:**\n\n| Region Name                  | Region Code       |\n|------------------------------|-------------------|\n| Africa (Cape Town)           | `af-south-1`      |\n| Asia Pacific (Hong Kong)     | `ap-east-1`       |\n| Asia Pacific (Hyderabad)     | `ap-south-2`      |\n| Asia Pacific (Jakarta)       | `ap-southeast-3`  |\n| Asia Pacific (Melbourne)     | `ap-southeast-4`  |\n| Asia Pacific (Mumbai)        | `ap-south-1`      |\n| Asia Pacific (Osaka)         | `ap-northeast-3`  |\n| Asia Pacific (Seoul)         | `ap-northeast-2`  |\n| Asia Pacific (Singapore)     | `ap-southeast-1`  |\n| Asia Pacific (Sydney)        | `ap-southeast-2`  |\n| Asia Pacific (Tokyo)         | `ap-northeast-1`  |\n| Canada (Central)             | `ca-central-1`    |\n| Canada West (Calgary)        | `ca-west-1`       |\n| Europe (Frankfurt)           | `eu-central-1`    |\n| Europe (Ireland)             | `eu-west-1`       |\n| Europe (London)              | `eu-west-2`       |\n| Europe (Milan)               | `eu-south-1`      |\n| Europe (Paris)               | `eu-west-3`       |\n| Europe (Spain)               | `eu-south-2`      |\n| Europe (Stockholm)           | `eu-north-1`      |\n| Europe (Zurich)              | `eu-central-2`    |\n| Israel (Tel Aviv)            | `il-central-1`    |\n| Middle East (Bahrain)        | `me-south-1`      |\n| Middle East (UAE)            | `me-central-1`    |\n| South America (São Paulo)    | `sa-east-1`       |\n| US East (N. 
Virginia)        | `us-east-1`       |\n| US East (Ohio)               | `us-east-2`       |\n| US West (N. California)      | `us-west-1`       |\n| US West (Oregon)             | `us-west-2`       |\n| AWS GovCloud (US-East)       | `us-gov-east-1`   |\n| AWS GovCloud (US-West)       | `us-gov-west-1`   |\n\nFor more details, refer to the [AWS Regional Endpoints documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).\n\n---\n\n#### **Google Cloud Storage (gcs)**\n\n- **Bucket Creation:** Automatic if credentials have write permissions.\n- **Supported Regions:** (not all are listed here)\n\n| Data Center Location                 | Region Code     |\n|--------------------------------------|-----------------|\n| Iowa (US Central)                    | `us-central1`   |\n| South Carolina (US East)             | `us-east1`      |\n| Northern Virginia (US East)          | `us-east4`      |\n| Oregon (US West)                     | `us-west1`      |\n| Los Angeles (US West)                | `us-west2`      |\n| Salt Lake City (US West)             | `us-west3`      |\n| Las Vegas (US West)                  | `us-west4`      |\n| Montreal (Canada)                    | `northamerica-northeast1` |\n| São Paulo (South America)            | `southamerica-east1`      |\n| Santiago (South America)             | `southamerica-west1`      |\n| Finland (Europe)                     | `europe-north1`           |\n| Belgium (Europe)                     | `europe-west1`            |\n| London (Europe)                      | `europe-west2`            |\n| Frankfurt (Europe)                   | `europe-west3`            |\n| Netherlands (Europe)                 | `europe-west4`            |\n| Zurich (Europe)                      | `europe-west6`            |\n| Warsaw (Europe)                      | `europe-central2`         |\n| Sydney (Australia)                   | `australia-southeast1`    |\n| Jakarta (Indonesia)                  | `asia-southeast2`     
    |\n| Singapore                            | `asia-southeast1`         |\n| Taiwan                               | `asia-east1`              |\n| Hong Kong                            | `asia-east2`              |\n| Tokyo                                | `asia-northeast1`         |\n| Osaka                                | `asia-northeast2`         |\n| Seoul                                | `asia-northeast3`         |\n| Mumbai                               | `asia-south1`             |\n| Delhi                                | `asia-south2`             |\n\nFor the complete list, please refer to the [Google Cloud Storage Locations documentation](https://cloud.google.com/storage/docs/locations).\n\n---\n\n#### **Microsoft Azure Blob Storage (azure)**\n\n- **Bucket Creation:** Supported automatically with appropriate contributor access.\n- **Supported Regions:** (not all are listed here)\n\n| Region Name                          | Region Code     |\n|--------------------------------------|-----------------|\n| East US                              | `eastus`        |\n| East US 2                            | `eastus2`       |\n| Central US                           | `centralus`     |\n| North Central US                     | `northcentralus`|\n| South Central US                     | `southcentralus`|\n| West US                              | `westus`        |\n| West US 2                            | `westus2`       |\n| West US 3                            | `westus3`       |\n| Canada Central                       | `canadacentral` |\n| Canada East                          | `canadaeast`    |\n| Brazil South                         | `brazilsouth`   |\n| Brazil Southeast                     | `brazilsoutheast`|\n| Europe North                         | `northeurope`   |\n| Europe West                          | `westeurope`    |\n| France Central                       | `francecentral` |\n| France South                         | `francesouth`   |\n| Germany North  
                      | `germanynorth`  |\n| Germany West Central                 | `germanywestcentral`|\n| Switzerland North                    | `switzerlandnorth`|\n| Switzerland West                     | `switzerlandwest`|\n| UK South                             | `uksouth`       |\n| UK West                              | `ukwest`        |\n| Australia East                       | `australiaeast` |\n| Australia Southeast                  | `australiasoutheast`|\n| Australia Central                    | `australiacentral`|\n\nFor detailed regions, refer to [Azure Blob Storage Regions](https://azure.microsoft.com/en-us/global-infrastructure/geographies/).\n\n---\n\n#### **Cloudflare R2 Object Storage (cloudflare)**\n\n- **Bucket Creation:** Must be **manually created** before use -- see Required Bucket Naming Convention below.\n- **Supported Regions:**\n\n| Region Name            | Location Hints  |\n|------------------------|-----------------|\n| Western North America  | `wnam`          |\n| Eastern North America  | `enam`          |\n| Western Europe         | `weur`          |\n| Eastern Europe         | `eeur`          |\n| Asia-Pacific           | `apac`          |\n| Oceania                | `oc`            |\n\nWhen you create a new bucket, the data location is set to Automatic by default. Currently, this option chooses a bucket location in the closest available region to the create bucket request based on the location of the caller.\n\nLocation Hints are optional parameters you can provide during bucket creation to indicate the primary geographical location you expect data will be accessed from.\n\nUsing Location Hints can be a good choice when you expect the majority of access to data in a bucket to come from a different location than where the create bucket request originates. 
Keep in mind Location Hints are a best effort and not a guarantee, and they should only be used as a way to optimize performance by placing regularly updated content closer to users.\n\nMore details available at the [Cloudflare R2 Object Storage documentation](https://developers.cloudflare.com/r2/reference/data-location/#location-hints).\n\n---\n\n#### **IBM Cloud Object Storage (ibm)**\n\n- **Bucket Creation:** Must be **manually created** before use -- see Required Bucket Naming Convention below.\n- **Supported Regions:** (not all are listed here)\n\n| Region Name             | Region Code  |\n|-------------------------|--------------|\n| US Cross Region         | `us`         |\n| US South (Dallas)       | `us-south`   |\n| US East (Washington DC) | `us-east`    |\n| EU Cross Region         | `eu`         |\n| EU Central (Frankfurt)  | `eu-de`      |\n| EU North (Oslo)         | `eu-north`   |\n| EU West (Milan)         | `eu-it`      |\n| Asia Pacific Cross Region | `ap`       |\n| Asia Pacific North (Tokyo) | `jp-tok`  |\n| Asia Pacific South (Sydney) | `au-syd` |\n\nFor more details, refer to the [IBM Cloud Regions documentation](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints).\n\n---\n\n#### **Akamai Object Storage (linode)**\n\n- **Bucket Creation:** Must be **manually created** before use -- see Required Bucket Naming Convention below.\n- **Supported Regions:**\n\n| Data Center Location       | Region Code      |\n|----------------------------|------------------|\n| Amsterdam, Netherlands     | `nl-ams-1`       |\n| Atlanta, GA, USA           | `us-southeast-1` |\n| Chennai, India             | `in-maa-1`       |\n| Chicago, IL, USA           | `us-ord-1`       |\n| Frankfurt, Germany         | `eu-central-1`   |\n| Jakarta, Indonesia         | `id-cgk-1`       |\n| Los Angeles, CA, USA       | `us-lax-1`       |\n| Madrid, Spain              | `es-mad-1`       |\n| Miami, FL, USA             | `us-mia-1`       |\n| 
Milan, Italy               | `it-mil-1`       |\n| Newark, NJ, USA            | `us-east-1`      |\n| Osaka, Japan               | `jp-osa-1`       |\n| Paris, France              | `fr-par-1`       |\n| São Paulo, Brazil          | `br-gru-1`       |\n| Seattle, WA, USA           | `us-sea-1`       |\n| Singapore                  | `ap-south-1`     |\n| Stockholm, Sweden          | `se-sto-1`       |\n| Washington, DC, USA        | `us-iad-1`       |\n\nFor more detailed information, please refer to Akamai's official [Object Storage documentation](https://techdocs.akamai.com/cloud-computing/docs/object-storage).\n\n---\n\n### Required Bucket Naming Convention\n\n#### Root (`multiback`) Behavior:\n- Ensure buckets are created for each service and region used.\n- Use `/root/.remote_backups/credentials/` to store credentials.\n\n#### User (`mybackup`) Behavior:\n- Buckets are associated with the Octopus system user running the command.\n- User-specific bucket names follow the convention: `back-to-USER-HOSTNAME-PROVIDER`.\n- The `USER` is your Ægir system user as visible in the `/data/disk/USER/static` path.\n- The `HOSTNAME` is your system hostname, but with dots replaced by hyphens.\n- The `PROVIDER` is the short name of the vendor, with underscores replaced by hyphens:\n\n```sh\n  aws -------------- Amazon S3 (Standard)\n  aws-one-zone ----- Amazon S3 (One Zone)\n  aws-standard-ia -- Amazon S3 (Standard-IA)\n  azure ------------ Azure Blob Storage\n  b2 --------------- Backblaze B2\n  cloudflare ------- Cloudflare R2 Object Storage\n  do-spaces -------- DigitalOcean Spaces\n  gcs -------------- Google Cloud Storage\n  ibm -------------- IBM Cloud Object Storage\n  linode ----------- Linode Object Storage by Akamai\n  wasabi ----------- Wasabi Hot Cloud Storage\n```\n\nHow do you determine the correct `HOSTNAME` and `USER` to use in your bucket name?\n\nIt's easy to find because your Ægir URL is actually `USER`.`HOSTNAME` -- for example, in `o123.fr8.eu.aegir.cc` the 
`o123` is the `USER` and `fr8.eu.aegir.cc` is the `HOSTNAME`.\n\nHowever, when used in the bucket name, it becomes `back-to-USER-HOSTNAME-PROVIDER`, so in this example: `back-to-o123-fr8-eu-aegir-cc-wasabi`\n\n---\n\n### Configuration Overview\n\n#### Root Configuration (`multiback`)\nFor system-wide backups managed by `multiback`, ensure that your configuration includes the necessary credentials in the `/root/.remote_backups/credentials/` directory. More details in New PRO Backups for BOA SysAdmin [docs/BACKUP_ROOT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_ROOT.md)\n\n#### User Configuration (`mybackup`)\nRegular users should place their backup configurations in the `~/static/control/remote_backups/credentials/` directory. The `mybackup` script automatically uses these credentials to restore backups for the current user. More details in New PRO Backups for Octopus Lshell User [docs/BACKUP_USER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md)\n\n---\n\nThis document summarizes the supported regions per service and the configuration required for the `multiback` and `mybackup` scripts. For further details, refer to the README files generated during installation or contact your system administrator.\n\nThe README files for non-root users are available in `~/static/control/remote_backups/credentials/README.txt` and `~/static/control/remote_backups/config/README.txt`\n\n"
  },
  {
    "path": "docs/BACKUP_RETENTION.md",
"content": "\n# Backup Retention Policy and Default Settings\n\nThe backup system is designed to reliably manage **live backups** while optimizing storage usage. This document explains the retention policies and settings, focusing entirely on managing **active backups** without disruptions.\n\n- New PRO Backups for BOA SysAdmin [docs/BACKUP_ROOT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_ROOT.md)\n- New PRO Backups for Octopus Lshell User [docs/BACKUP_USER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md)\n- New PRO Backups Retention Policy Configuration (this document) [docs/BACKUP_RETENTION.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_RETENTION.md)\n- New PRO Backups Supported Regions and Bucket Creation Guidelines [docs/BACKUP_REGIONS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_REGIONS.md)\n\n---\n\n#### Retention Policies Overview\n\nThe retention system for live backups relies entirely on **time-based retention** to manage backup cleanup. This approach ensures that older backups are safely removed without disrupting the integrity of active incremental chains.\n\n1. **Time-Based Retention (`KEEP_WITHIN`)**:\n   - Specifies how long backups are retained.\n   - Deletes all backups (full and incremental) older than the specified timeframe.\n   - **Example**: `KEEP_WITHIN=\"3M\"` retains all backups created within the last 3 months and deletes older ones.\n   - Only values specified in M (months) or Y (years) are accepted; otherwise the value will automatically default to 3M.\n\n2. 
**Full Backup Frequency (`FULL_BACKUP_FREQUENCY`)**:\n   - Defines how often a new full backup is created.\n   - Incremental backups are created between full backups to save storage and backup time.\n   - **Example**: `FULL_BACKUP_FREQUENCY=\"28D\"` creates a new full backup every 28 days.\n   - Only values specified in D (days) are accepted.\n   - The value must be between 7D and 60D; otherwise it will automatically default to 28D.\n\n---\n\n#### Default Retention Settings\n\nThe system is preconfigured with these default settings:\n\n```bash\nexport KEEP_WITHIN=\"3M\"             # Retain backups from the last 3 months\nexport FULL_BACKUP_FREQUENCY=\"28D\"  # Create a full backup every 28 days\n```\n\nThese settings ensure:\n\n1. **Live backups are retained for 3 months**:\n   - This timeframe is sufficient for most recovery scenarios.\n   - All backups older than 3 months are automatically removed.\n\n2. **A full backup every 28 days**:\n   - Full backups ensure the integrity of the backup chain.\n   - Incremental backups store changes between full backups, reducing storage usage.\n\n---\n\n#### How Cleanup Works\n\n1. **Time-Based Cleanup Only**:\n   - The system runs the following command to remove backups older than the specified timeframe:\n     ```bash\n     duplicity remove-older-than \"${KEEP_WITHIN}\" --force \"${_BACKUP_TARGET}\"\n     ```\n   - This deletes all backups (full and incremental) older than `KEEP_WITHIN`.\n\n2. **Incremental Chains Remain Intact**:\n   - The system retains all incremental backups linked to full backups within the `KEEP_WITHIN` period.\n   - This ensures backup chains remain valid for restoration.\n\n3. **No Use of `remove-all-but-n-full`**:\n   - The `remove-all-but-n-full` command is not used in this system, as it is unsuitable for live backups. 
It deletes full backups and their associated incremental chains, which can disrupt active backup sets.\n\n---\n\n#### Example Workflow\n\n**Scenario**:\n- Current Date: November 22\n- Full Backups: August 1, August 15, September 1, September 15, October 1, October 15, November 1, November 15\n- Incremental backups created every 6 hours between full backups.\n\n**Settings**:\n```bash\nKEEP_WITHIN=\"3M\"\nFULL_BACKUP_FREQUENCY=\"14D\"\n```\n\n**Retention Behavior**:\n1. **Time-Based Cleanup**:\n   - Deletes backups older than August 22.\n   - Retains full and incremental backups from August 22 onward.\n\n2. **Remaining Backups**:\n   - Retained full backups: September 1, September 15, October 1, October 15, November 1, November 15.\n   - Retained incremental backups: All backups between these full backups.\n\n---\n\n#### Modifying Retention Settings\n\nIf needed, you can adjust the retention settings to better fit your requirements.\n\n**Examples**:\n\n1. **Extend Retention Period**:\n   - To retain backups for 6 months:\n     ```bash\n     export KEEP_WITHIN=\"6M\"\n     ```\n\n2. **Create Full Backups Weekly**:\n   - To create a new full backup every 7 days:\n     ```bash\n     export FULL_BACKUP_FREQUENCY=\"7D\"\n     ```\n\n3. **Shorten Retention Period**:\n   - To retain backups for only 1 month:\n     ```bash\n     export KEEP_WITHIN=\"1M\"\n     ```\n\n---\n\n#### Important Considerations for Live Backups\n\n1. **Retention Period (`KEEP_WITHIN`)**:\n   - All backups older than the specified timeframe are deleted.\n   - Ensure the retention period is long enough to meet your recovery needs.\n\n2. **Full Backup Frequency**:\n   - Frequent full backups reduce the dependency on long incremental chains but use more storage.\n   - Balance the frequency of full backups (`FULL_BACKUP_FREQUENCY`) with your available storage capacity.\n\n3. 
**No Incremental Chain Disruption**:\n   - The system ensures incremental chains remain intact by avoiding commands that delete intermediate full backups (e.g., `remove-all-but-n-full`).\n\n---\n\n#### Verifying Cleanup Behavior (for sysadmins only)\n\nTo ensure your retention settings work as intended:\n\n1. **Simulate Cleanup**:\n   - Use the `--dry-run` option to preview which backups will be deleted:\n     ```bash\n     duplicity remove-older-than \"${KEEP_WITHIN}\" --force \"${_BACKUP_TARGET}\" --dry-run\n     ```\n\n2. **Monitor Storage**:\n   - Regularly check your storage usage to ensure retention settings align with available space.\n\n---\n\n#### FAQ\n\n1. **What happens if `KEEP_WITHIN` is too short?**\n   - Backups older than the `KEEP_WITHIN` period are deleted, which may reduce the number of available recovery points.\n\n2. **Why isn't `remove-all-but-n-full` used?**\n   - `remove-all-but-n-full` deletes full backups and their associated incremental chains, which can disrupt live backups.\n   - It is better suited for managing static, archived backups, which are outside the scope of this system.\n\n3. **Can I customize settings for different storage services?**\n   - Yes, you can define different retention settings for each service (e.g., AWS, B2) in their respective credentials files.\n\n---\n"
  },
  {
    "path": "docs/BACKUP_ROOT.md",
    "content": "\n# System Administrator Guide: Managing Global Backups\n\nThis guide explains the global backup system, its configuration, supported services, and best practices. It covers only the aspects managed by the system administrator (root access), including global backups, vendor selection, and service-specific details.\n\n- New PRO Backups for BOA SysAdmin (this document) [docs/BACKUP_ROOT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_ROOT.md)\n- New PRO Backups for Octopus Lshell User [docs/BACKUP_USER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md)\n- New PRO Backups Retention Policy Configuration [docs/BACKUP_RETENTION.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_RETENTION.md)\n- New PRO Backups Supported Regions and Bucket Creation Guidelines [docs/BACKUP_REGIONS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_REGIONS.md)\n\n---\n\n## **How the Global Backup System Works**\nThe global backup system is designed to securely back up system-wide data (e.g., `/data`, `/etc`, `/home`, `/opt/solr4`, `/var/aegir`, `/var/solr7`, `/var/solr9`, `/var/www`, `/var/xdrago`) and ensure data integrity and recoverability. 
It uses **Duplicity** to create encrypted, incremental, and versioned backups stored in remote cloud services.\n\n### **Features**\n- **Global Scope**: Includes critical directories like `/data`, `/etc`, `/home`, `/opt/solr4`, `/var/aegir`, `/var/solr7`, `/var/solr9`, `/var/www` and `/var/xdrago`.\n- **Encryption**: Ensures that all backups are protected against unauthorized access.\n- **Incremental Backups**: Reduces storage usage and bandwidth by saving only changes since the last backup.\n- **Retention Policies**: Automatically removes old backups based on administrator-defined retention rules.\n- **Vendor Flexibility**: Supports multiple storage services, including AWS S3, Backblaze B2, Wasabi, and more.\n\n---\n\n## **Installer Section**\n\nThe backup system includes an installer script to simplify setup and management. This script automates the installation of dependencies, configuration setup, and cron job generation.\n\n### **Installer Script: `dcysetup`**\n\nThe `dcysetup` script is located in `/opt/local/bin/dcysetup` and provides the following options:\n\n```bash\ndcysetup <command>\n```\n\n- **Commands**:\n  - `install`: Installs required dependencies for the backup system.\n  - `setup`: Configures global backups, generating default configuration files and cron jobs.\n  - `update`: Alias for `setup`.\n\n---\n\n### **How to Use the Installer**\n\n1. **Install Dependencies**:\n   ```bash\n   dcysetup install\n   ```\n\n2. **Set Up the Backup System**:\n   - Generates default configuration files and schedules cron jobs for global backups.\n   ```bash\n   dcysetup setup\n   ```\n\n3. **Update Configuration**:\n   - Run this command after making manual changes to the configuration files.\n   ```bash\n   dcysetup update\n   ```\n\n\n---\n\n## **Key Terms and Concepts**\n\n1. 
**Global Backup Scope**:\n   - Includes directories crucial for system operations:\n     - `/data`\n     - `/etc`\n     - `/home`\n     - `/opt/solr4`\n     - `/var/aegir`\n     - `/var/solr7`\n     - `/var/solr9`\n     - `/var/www`\n     - `/var/xdrago`\n\n2. **Absolute Path**:\n   - All paths in configuration files must be full absolute paths (e.g., `/var/aegir/`).\n\n3. **Retention Policies**:\n   - Defines how long backups are kept, how often full backups occur, and how many full backups are retained.\n\n---\n\n## **Supported Storage Services**\n\nThe system supports multiple storage providers. Credentials for these providers are stored in `/root/.remote_backups/credentials/`.\n\n### **Supported Services**\n- **Amazon S3** (Standard, One Zone, Standard-IA)\n- **Backblaze B2**\n- **Cloudflare R2**\n- **DigitalOcean Spaces**\n- **Google Cloud Storage**\n- **IBM Cloud Object Storage**\n- **Linode Object Storage by Akamai**\n- **Microsoft Azure Blob Storage**\n- **Wasabi Hot Cloud Storage**\n\n### **Technical Comparison Table**\n\n| **Service**                | **Storage Class**                     | **Redundancy**        | **Regions** | **Encryption**                         | **Interface**          |\n|----------------------------|---------------------------------------|-----------------------|-------------|----------------------------------------|------------------------|\n| **Amazon S3**              | Standard, One Zone-IA, Standard-IA    | Multi-AZ / Single AZ  | Global      | Server-side (AES-256) + Client-side    | S3 API (boto3)         |\n| **Backblaze B2**           | Hot                                   | Multi-region          | US, Europe  | Server-side (AES-256) + Client-side    | B2 API, S3 Compatible  |\n| **Cloudflare R2**          | Hot                                   | Multi (Regionless)    | Global      | Server-side (AES-256) + In-transit TLS | S3 API (boto3)         |\n| **DigitalOcean Spaces**    | Standard (Hot)                        | 
Multi-region          | Global      | Server-side (AES-256) + Client-side    | S3 API (boto3)         |\n| **Google Cloud Storage**   | Standard, Nearline, Coldline, Archive | Multi-region          | Global      | Server-side (AES-256) + Client-side    | Native, S3 Compatible  |\n| **IBM Cloud**              | Standard, Vault, Cold Vault, Archive  | Multi-region          | Global      | Server-side (AES-256) + Client-side    | S3 API (boto3)         |\n| **Linode Object Storage**  | Standard (Hot)                        | Multi-region          | Global      | Server-side (AES-256) + Client-side    | S3 API (boto3)         |\n| **Microsoft Azure**        | Hot, Cool, Archive                    | LRS, ZRS, GRS, RA-GRS | Global      | Server-side (AES-256) + Client-side    | Azure Blob API         |\n| **Wasabi**                 | Hot                                   | Multi-region          | Global      | Server-side (AES-256) + Client-side    | S3 API (boto3)         |\n\n---\n\n### **Pricing Comparison Table**\n\n| **Service**                | **Storage Cost (per GB)** | **Egress Cost (per GB)**  | **Free Tier**                     | **Notes**                                       |\n|----------------------------|---------------------------|---------------------------|-----------------------------------|-------------------------------------------------|\n| **Amazon S3**              | $0.0230 (Standard)        | $0.090                    | 5 GB (12 months)                  | Wide region availability, multiple classes      |\n| **Backblaze B2**           | $0.0050                   | $0.010                    | 10 GB storage + 1 GB/day download | Cost-effective, ideal for archival storage      |\n| **Cloudflare R2**          | $0.0150                   | Free                      | 10 GB storage + 1 TB egress/month | Zero egress fees, integrates with their network |\n| **DigitalOcean Spaces**    | $0.0200 beyond 250 GB     | $0.020 per GB beyond 1 TB | None         
                      | Free bandwidth up to the first 1 TB             |\n| **Google Cloud**           | $0.0200 (Standard)        | $0.120                    | 5 GB (12 months)                  | Multiple storage classes                        |\n| **IBM Cloud**              | $0.0200 (Standard)        | $0.090                    | Lite plan (25 GB)                 | Supports advanced archival options              |\n| **Linode Object Storage**  | $0.0050                   | $0.010                    | None                              | Affordable and Akamai-backed                    |\n| **Microsoft Azure**        | $0.0180 (Cool)            | $0.085                    | $200 credit for first 30 days     | Flexible tiering                                |\n| **Wasabi**                 | $0.0059                   | Free                      | None                              | Unlimited egress, good for high traffic         |\n\n---\n\n## **Global Configuration**\n\nThe global backup configuration files are stored in:\n\n```bash\n/root/.remote_backups/\n```\n\n### **Configuration Files Examples**\n\nThese top-level configuration files merge all other per-bucket configuration files when you run the `dcysetup update` command:\n\n1. **`/root/.remote_backups/paths/global_paths.txt`**:\n   - Defines which global directories are included in backups.\n   - Example:\n     ```bash\n     _SOURCE=\"/etc /opt/solr4 /var/aegir /var/solr7 /var/solr9 /var/www /var/xdrago\"\n     _INCLUDE_PATHS=\"--include /data/disk/arch --include-regexp '^/var/backups/barracuda.*'\"\n     _EXCLUDE_PATHS=\"--exclude /data/disk --exclude /var/aegir/backups\"\n     _INCLUDE_LIST=\"/root/.remote_backups/paths/.backboa.include.list\"\n     _EXCLUDE_LIST=\"/root/.remote_backups/paths/.backboa.exclude.list\"\n     ```\n\n2. 
**`/root/.remote_backups/paths/data_paths.txt`**:\n   - Defines which data directories are included in backups.\n   - Example:\n     ```bash\n     _SOURCE=\"\"\n     _INCLUDE_PATHS=\"--include /data/disk/o1 --include /data/disk/o2\"\n     _EXCLUDE_PATHS=\"--exclude-regexp '^/data/disk/.*/backups'\"\n     _INCLUDE_LIST=\"/root/.remote_backups/paths/.backboa.include.list\"\n     _EXCLUDE_LIST=\"/root/.remote_backups/paths/.backboa.exclude.list\"\n     ```\n\n3. **`/root/.remote_backups/paths/.backboa.*`**:\n   - Configuration files with sensible defaults. They are merged per system-user bucket and referenced in either `global_paths.txt` or `data_paths.txt` when you run the `dcysetup update` command; use them to include or exclude additional absolute paths and regex patterns for fine-grained control of the include/exclude logic:\n\n     ```bash\n     .backboa.data_exclude.merged.file\n     .backboa.data_include.merged.file\n     .backboa.exclude.file\n     .backboa.exclude.list\n     .backboa.exclude_data_regexp.file\n     .backboa.global_exclude.merged.file\n     .backboa.global_include.merged.file\n     .backboa.include_data.file\n     .backboa.include_global.file\n     .backboa.include_global_regexp.file\n     ```\n\n---\n\n### **Configuration Rules**\n\n1. **Absolute Paths Only**:\n   - All paths in configuration files (exclude/include) must be absolute paths starting from `/`.\n   - Example:\n     - Correct: `/root/projects`\n     - Incorrect: `~/projects`\n\n2. **Order of Precedence**:\n   - Exclude directives override include directives. If a file is listed in both, it will not be backed up.\n\n---\n\n## **Managing Credentials**\n\nCredentials for global backups are stored in `/root/.remote_backups/credentials/`. 
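\n\nBefore relying on those files, it can help to confirm that none of them is readable by other users. A minimal, hypothetical audit sketch (the `find`-based check is an illustration, not part of the shipped tooling; any file it prints still needs `chmod 600`):\n\n```bash\n# Print credential files whose mode is not exactly 0600 (if the directory exists).\ndir=/root/.remote_backups/credentials\nif [ -d \"$dir\" ]; then\n  find \"$dir\" -type f ! -perm 600 -print\nfi\n```\n\n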
Each file corresponds to a specific storage service.\n\n### **Backblaze B2 (`b2.txt`)**\n```bash\nexport B2_ACCOUNT_ID=\"your_b2_account_id\"\nexport B2_APPLICATION_KEY=\"your_b2_application_key\"\nexport KEEP_WITHIN=\"3M\"\nexport FULL_BACKUP_FREQUENCY=\"28D\"\n```\n\n### **Permissions**\nEnsure all credentials are secured:\n```bash\nchmod 600 /root/.remote_backups/credentials/*.txt\n```\n\n---\n\n## **Restoring Global Backups**\n\nRestoring global backups follows the same logic as [user backups](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md), except that administrators manage the entire system. Use the `multiback` command for global restores:\n\n```bash\nmultiback restore <SERVICE> <USER> <RESTORE_TARGET> [RESTORE_PATH] [RESTORE_TIME]\n```\n\n---\n\n### **Restore Examples**\n\n1. **Restore All Files per System User to Default Directory**:\n   ```bash\n   multiback restore b2 global /var/backups/restored\n   multiback restore b2 data /var/backups/restored\n   multiback restore b2 custom /var/backups/restored\n   ```\n\n2. **Restore a Specific Directory**:\n   ```bash\n   multiback restore b2 global /var/backups/restored var/www/example\n   multiback restore b2 data /var/backups/restored data/disk/o1\n   multiback restore b2 custom /var/backups/restored custom/path/foo/bar\n   ```\n\n3. **Restore from a Specific Time**:\n   ```bash\n   multiback restore b2 data /var/backups/restored data/disk/o1/static/platform 7D\n   ```\n\n---\n\n### **Key Rules for Restores**\n\n1. **Restore Path Must Be Absolute Without Leading Slash**:\n   - Paths must reflect the full directory structure used during backups, but cannot start with `/`.\n   - Example:\n     - Correct: `data/disk/o1/static/platform`\n     - Incorrect: `/data/disk/o1/static/platform`\n\n2. **Restore Target Directory Can Be Relative or Absolute**:\n   - Default: `/var/backups/restored/`\n   - You may specify a custom restore target directory.\n\n3. 
**Default Behavior**:\n   - If `[RESTORE_PATH]` is omitted, the entire backup is restored.\n   - If `[RESTORE_TIME]` is omitted, the latest backup is restored.\n\n---\n\n## **Best Practices**\n\n1. **Test Restores Regularly**:\n   - Verify your backups by restoring critical data periodically.\n\n2. **Monitor Storage Usage**:\n   - Use retention policies to control costs and prevent storage overuse.\n\n3. **Secure Credentials**:\n   - Regularly rotate access keys and audit credentials.\n\n4. **Automate Monitoring**:\n   - Use system logs or monitoring tools to ensure backups run as expected.\n\n---\n\n## **Conclusion**\n\nThe global backup system ensures system-wide data protection and disaster recovery capabilities. By using the configuration and management tools provided, administrators can manage backups efficiently while keeping costs under control.\n\nFor user-specific backups, refer to the separate [**User Backup Guide**](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md).\n\nFor assistance, contact the BOA developers team.\n"
  },
  {
    "path": "docs/BACKUP_USER.md",
    "content": "\n# User Guide: How the Backup System Works and How to Use It\n\nThis guide explains the backup system, including how it works, how to configure it for your needs, and how to restore your data. It also covers the supported storage services, key distinctions in path handling, and the default retention policies for your local database backups.\n\n- New PRO Backups for BOA SysAdmin [docs/BACKUP_ROOT.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_ROOT.md)\n- New PRO Backups for Octopus Lshell User (this document) [docs/BACKUP_USER.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_USER.md)\n- New PRO Backups Retention Policy Configuration [docs/BACKUP_RETENTION.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_RETENTION.md)\n- New PRO Backups Supported Regions and Bucket Creation Guidelines [docs/BACKUP_REGIONS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_REGIONS.md)\n\n---\n\n## **Important Note on Database Backups**\n\nAll **active sites databases** are now automatically backed up to the following directory in your account:\n\n```\n/data/disk/your_username/static/files/dbackup/\n```\n\nThese local database backups are retained for **14 days** by default. You can modify the retention period (in days) by creating or editing the file:\n\n```\n/data/disk/your_username/static/control/dBackupCycle.info\n```\n\nThis file should contain only digits (e.g., `7` for 7 days, `30` for 30 days, etc.). Keep in mind that **these database backup archives count toward your overall file-space usage limit**, according to your subscription plan if you are on hosted BOA.\n\n**Important**: Databases which belong to **disabled sites** are still backed up on the system level, but **will not be added to your archives** in `/data/disk/your_username/static/files/dbackup/`.\n\n---\n\n## **Basic Use (Simple Configuration)**\n\nThis section covers a quick-start approach, focusing on minimal setup.\n\n1. 
**Where DB Backups Are Stored by Default**\n   - Local database backups: `/data/disk/your_username/static/files/dbackup/`\n   - Retained for 14 days by default (modifiable via `/data/disk/your_username/static/control/dBackupCycle.info`).\n   - Local database backups count toward your file-space quota.\n\n2. **Enable or Verify That Backups Are Enabled**\n   - By default, backups for your account are typically enabled. If in doubt, contact support to confirm that scheduled backups are running.\n\n3. **Add Your Preferred Remote Storage Credentials**\n   - To send backups offsite (e.g., AWS S3, Wasabi, Backblaze B2), edit a credential file in:\n     ```\n     /data/disk/your_username/static/control/remote_backups/credentials/\n     ```\n   - Follow the specific format required by each service (see **AWS Example** in the advanced section for reference).\n   - Secure your credentials by running:\n     ```bash\n     chmod 600 /data/disk/your_username/static/control/remote_backups/credentials/*.txt\n     ```\n\n4. **Restore Basics**\n   - For quick restores, you can use:\n     ```bash\n     mybackup restore <SERVICE>\n     ```\n   - This command will restore everything to your `/data/disk/your_username/static/restores/` folder.\n   - If you need to restore just a specific directory or from a certain date, see the **Advanced Use** section below.\n\n5. **Monitor Usage**\n   - Remember that **all backup files** stored locally (`/data/disk/your_username/static/files/dbackup/`) **count toward your usage limit**.\n   - If you’re nearing your quota, consider:\n     - Shortening your retention period in `/data/disk/your_username/static/control/dBackupCycle.info`.\n     - Removing old backups.\n     - Upgrading your plan if you need more space.\n\nThat’s it for the basics! 
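\n\nSince `dBackupCycle.info` must contain only digits, you can validate a value before writing it there. A tiny sketch (the `valid_cycle` helper name is hypothetical):\n\n```bash\n# Accept only a plain number of days, e.g. 7 or 30.\nvalid_cycle() { printf '%s' \"$1\" | grep -qx '[0-9][0-9]*'; }\n\nvalid_cycle \"30\" && echo \"valid\"\nvalid_cycle \"30 days\" || echo \"invalid\"\n```\n\n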
If you need more control—like custom includes/excludes, restoring specific directories, or advanced scheduling—read on.\n\n---\n\n## **Advanced Use (Detailed Configuration and Instructions)**\n\nBelow is the full, detailed documentation that covers **how the backup system works**, how to **configure** it for more granular scenarios, **how to restore** data with precision, and how to manage and troubleshoot the system.\n\n### **How the Backup System Works**\n\nThe backup system automates the process of securely backing up your critical data and allows recovery when needed. It uses a tool called **Duplicity**, which is responsible for encrypting, storing, and managing backups.\n\n#### **What is Duplicity?**\nDuplicity is software designed for secure and efficient backups. It:\n1. **Encrypts Your Data**: Keeps your backups secure.\n2. **Manages Incremental Backups**: Saves space by only storing changes since the last backup.\n3. **Supports Versioning**: Enables restoration of files from specific points in time.\n\nDuplicity ensures your backups are both secure and efficient.\n\n---\n\n### **Supported Storage Services**\n\nThe system supports backups to the following storage services. Each service requires a properly formatted credential file stored in:\n\n```bash\n/data/disk/your_username/static/control/remote_backups/credentials/\n```\n\n**Supported Services:**\n- **Amazon S3 One Zone**\n- **Amazon S3 Standard-IA**\n- **Amazon S3**\n- **Backblaze B2**\n- **DigitalOcean Spaces**\n- **Google Cloud Storage**\n- **IBM Cloud Object Storage**\n- **Linode Object Storage by Akamai**\n- **Microsoft Azure Blob Storage**\n- **Wasabi Hot Cloud Storage**\n\nRefer to the **Managing Credentials** section for details on how to create and secure credential files for these services.\n\n---\n\n### **Key Terms and Concepts**\n\n1. 
**Backup Root**: The top-level directories included in your backups:\n   - `/data/disk/your_username/static/`: Contains your account-specific files, Drupal codebases, and configurations.\n   - `/home/your_username.ftp/`: Your FTP home directory.\n\n2. **Absolute Path**:\n   - A full system path starting from the root directory (`/`).\n   - **Duplicity ALWAYS requires absolute paths** in configuration files.\n\n3. **Restore Path (No-Leading-Slash Absolute Path)**:\n   - When using the `restore` command, the restore path must be absolute but **without a leading slash** (specific to Duplicity’s syntax).\n   - Example:\n     - Correct: `data/disk/your_username/static/projects`\n     - Incorrect: `/data/disk/your_username/static/projects`\n\n4. **Restore Target**: The directory where restored files will be placed. For this system, the default is:\n   - `/data/disk/your_username/static/restores/`\n\n---\n\n### **Backup Scope and Configuration**\n\nThe system automatically includes the following directories:\n\n1. **Default Inclusion**:\n   - Everything in `/data/disk/your_username/static/`.\n   - System-managed directories like `/home/your_username.ftp/platforms/`.\n   - Platforms without codebase access in `/data/disk/your_username/distro/`.\n\n2. **Default Exclusion**:\n   - Everything in `/data/disk/your_username/static/trash/`.\n   - Everything in `/data/disk/your_username/static/restores/`.\n\n3. **Customization**:\n   - You can include or exclude additional directories using configuration files located in:\n     ```bash\n     /data/disk/your_username/static/control/remote_backups/config/\n     ```\n\n---\n\n### **Required Bucket Naming Convention**\n\nMost providers allow **automatic bucket creation** if sufficient credentials and permissions are provided, so you don't need to figure it out yourself. 
However, some providers (e.g., **Linode**) require **manual bucket creation** before the first backup, while others (e.g., **Amazon S3**) are unreliable for automatic creation due to propagation delays between AWS regions. Manual bucket creation is recommended if your provider is known to be unreliable for automatic creation, and it is mandatory where the provider requires it -- check all the details in the Supported Regions and Bucket Creation Guidelines: [docs/BACKUP_REGIONS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/BACKUP_REGIONS.md).\n\n- User-specific bucket names follow the convention: `back-to-USER-HOSTNAME-PROVIDER`.\n- The `USER` is your Ægir system user as visible in the `/data/disk/USER/static` path.\n- The `HOSTNAME` is your system hostname, but with dots replaced by hyphens.\n- The `PROVIDER` is the short name of the vendor, with underscores replaced by hyphens:\n\n```sh\n  aws -------------- Amazon S3 (Standard)\n  aws-one-zone ----- Amazon S3 (One Zone)\n  aws-standard-ia -- Amazon S3 (Standard-IA)\n  azure ------------ Azure Blob Storage\n  b2 --------------- Backblaze B2\n  cloudflare ------- Cloudflare R2 Object Storage\n  do-spaces -------- DigitalOcean Spaces\n  gcs -------------- Google Cloud Storage\n  ibm -------------- IBM Cloud Object Storage\n  linode ----------- Linode Object Storage by Akamai\n  wasabi ----------- Wasabi Hot Cloud Storage\n```\n\nHow do you determine the correct `HOSTNAME` and `USER` for your bucket name?\n\nIt's easy to find, because your Ægir URL is actually `USER`.`HOSTNAME` -- for example, in `o123.fr8.eu.aegir.cc`, `o123` is the `USER` and `fr8.eu.aegir.cc` is the `HOSTNAME`.\n\nHowever, when used in the bucket name, it becomes `back-to-USER-HOSTNAME-PROVIDER`, so in this example: `back-to-o123-fr8-eu-aegir-cc-wasabi`.\n\n---\n\n### **Managing Credentials**\n\nTo enable backups and restores, you must provide valid credentials for your cloud storage service. 
Credential files are stored in:\n\n```bash\n/data/disk/your_username/static/control/remote_backups/credentials/\n```\n\nEach credential file corresponds to a specific cloud storage service and must follow the required format. For example:\n\n#### **AWS Example (`aws.txt`)**\n```bash\nexport AWS_ACCESS_KEY_ID=\"your_aws_access_key\"\nexport AWS_SECRET_ACCESS_KEY=\"your_aws_secret_key\"\nexport AWS_REGION=\"your_aws_region\"  # Example: \"us-east-1\"\nexport KEEP_WITHIN=\"3M\"              # Retain backups from the last 3 months\nexport FULL_BACKUP_FREQUENCY=\"28D\"   # Create a full backup every 28 days\n```\n\n**Key Variables**:\n- **`KEEP_WITHIN`**: Specifies how long backups are retained (e.g., `1M` for 1 month).\n- **`FULL_BACKUP_FREQUENCY`**: Specifies how often full backups are created.\n\n**Permissions**:\n```bash\nchmod 600 /data/disk/your_username/static/control/remote_backups/credentials/*.txt\n```\n\n**Credential Security Measures**:\n- **Avoid Forbidden Characters**: Credential values must not contain `$`, `` ` ``, `(`, `)`, `{`, `}`, `;`, `&`, `|`, `<`, `>`.\n- **Proper Syntax**: Ensure each line is a valid variable assignment in the form `VARIABLE=\"value\"`.\n\n---\n\n### **Configuring Your Backups**\n\nYou can customize what is included or excluded in your backups by editing configuration files stored in:\n\n```bash\n/data/disk/your_username/static/control/remote_backups/config/\n```\n\n#### **Configuration Files**\n\n1. **`include.txt`**:\n   - Use this file to include additional absolute paths in the backup.\n   - **Important**: Paths must be within your allowed directories.\n\n   - Example:\n     ```bash\n     --include /data/disk/your_username/static/custom_data\n     --include /home/your_username.ftp/documents\n     ```\n\n2. **`exclude.txt`**:\n   - Use this file to exclude specific absolute paths.\n   - Example:\n     ```bash\n     --exclude /data/disk/your_username/static/tmp\n     --exclude /home/your_username.ftp/logs\n     ```\n\n3. 
**`include_regexp.txt`**:\n   - Use regular expressions to include files or directories matching a pattern.\n   - **Important**: Regex patterns must start with `^` and match paths within your allowed directories.\n   - Example:\n     ```bash\n     --include-regexp '^/data/disk/your_username/static/documents/.*\\.pdf$'\n     --include-regexp '^/home/your_username\\.ftp/images/.*\\.(jpg|png)$'\n     ```\n\n4. **`exclude_regexp.txt`**:\n   - Use regular expressions to exclude files or directories matching a pattern.\n   - **Important**: Regex patterns must start with `^` and match paths within your allowed directories.\n   - Example:\n     ```bash\n     --exclude-regexp '^/data/disk/your_username/static/cache/.*'\n     --exclude-regexp '^/home/your_username\\.ftp/tmp/.*'\n     ```\n\n---\n\n#### **Configuration Rules**\n\n1. **Absolute Paths Only**:\n   - All paths in configuration files (exclude/include) must be absolute paths starting from `/`.\n   - Example:\n     - Correct: `/data/disk/your_username/static/projects`\n     - Incorrect: `~/static/projects`\n\n2. **Allowed Directories Only**:\n   - You are restricted to including paths within:\n     - `/data/disk/your_username/static/`\n     - `/home/your_username.ftp/`\n   - Attempts to include paths outside these directories will be rejected.\n   - Platforms without codebase access in `/data/disk/your_username/distro/` are included automatically.\n\n3. 
**Regex Patterns Must Start with Allowed Base Paths**:\n   - Regex patterns must begin with `^` followed by one of your allowed base paths.\n   - This ensures that the regex cannot match paths outside your permitted directories.\n\n   - **Valid Regex Pattern**:\n     ```bash\n     --include-regexp '^/data/disk/your_username/static/documents/.*\\.pdf$'\n     ```\n     - Starts with `^/data/disk/your_username/static`, which is an allowed base path.\n\n   - **Invalid Regex Pattern**:\n     ```bash\n     --include-regexp '^//data/disk/your_username/static/foo/.*'\n     ```\n     - Starts with `^//` (a doubled slash), which does not match any allowed base path.\n\n4. **Forbidden Characters**:\n   - Paths and regex patterns must not contain forbidden characters that could pose security risks:\n     - Forbidden characters: `$`, `` ` ``, `(`, `)`, `{`, `}`, `;`, `&`, `|`, `<`, `>`\n   - Lines containing these characters will be rejected.\n\n5. **Order of Precedence**:\n   - Exclude directives override include directives. If a file or directory is listed in both, it will not be backed up.\n\n6. **Customizing Defaults**:\n   - The entire `/data/disk/your_username/static/` directory and `/home/your_username.ftp/` are included by default. Use exclude files to prevent specific paths from being backed up.\n\n---\n\n### **Restoring Files**\n\nTo recover data, use the `mybackup` command. The restore process has specific rules for paths, which differ from configuration file paths.\n\n#### **Restore Command**\n\n```bash\nmybackup restore <SERVICE> [RESTORE_PATH] [RESTORE_TIME]\n```\n\n- `<SERVICE>`: The cloud storage service used for your backups (e.g., `aws`, `b2`, `wasabi`).\n- `[RESTORE_PATH]` (optional): The absolute path (no leading slash) of the file or directory to restore.\n- `[RESTORE_TIME]` (optional): The point in time for the restore, specified in human-readable formats like:\n  - `1D` (1 day ago)\n  - `7D` (7 days ago)\n  - `1M` (1 month ago)\n\n---\n\n#### **Restore Examples**\n\n1. 
**Restore All Files to Default Directory**:\n   ```bash\n   mybackup restore aws\n   ```\n   - Restores the entire backup to `/data/disk/your_username/static/restores/`.\n\n2. **Restore a Specific Directory**:\n   ```bash\n   mybackup restore aws data/disk/your_username/static/projects\n   ```\n   - Restores the `projects` directory.\n\n3. **Restore FTP Files**:\n   ```bash\n   mybackup restore aws home/your_username.ftp/documents\n   ```\n   - Restores files from your FTP home directory.\n\n4. **Restore from a Specific Time**:\n   ```bash\n   mybackup restore aws data/disk/your_username/static/projects 7D\n   ```\n   - Restores files as they were 7 days ago.\n\n---\n\n#### **Key Rules for Restores**\n\n1. **Restore Path Must Be Absolute Without Leading Slash**:\n   - Paths must reflect the full directory structure used during backups, but cannot start with `/`.\n   - Example:\n     - Correct: `data/disk/your_username/static/projects`\n     - Incorrect: `/data/disk/your_username/static/projects`\n\n2. **Default Behavior**:\n   - If `[RESTORE_PATH]` is omitted, the entire backup is restored.\n   - If `[RESTORE_TIME]` is omitted, the latest backup is restored.\n\n---\n\n### **Security Notes**\n\n1. **Backup Scope**:\n   - The system restricts user-configured backups to:\n     - `/data/disk/your_username/static/`\n     - `/home/your_username.ftp/`\n   - Attempts to include files outside these directories will fail.\n   - Platforms without codebase access in `/data/disk/your_username/distro/` are included automatically.\n\n2. **Path Validation and Security**:\n   - **Validation of Paths**: Strictly enforced to prevent unauthorized directories from being backed up.\n   - **Regex Patterns**:\n     - Must start with `^` and an allowed base path.\n     - Cannot contain forbidden characters.\n\n3. 
**Credential Security**:\n   - Protect your credentials by setting secure permissions:\n     ```bash\n     chmod 600 /data/disk/your_username/static/control/remote_backups/credentials/*.txt\n     ```\n   - Ensure credential files contain only valid variable assignments.\n\n4. **Restore Target Permissions**:\n   - Make sure your restore target directory is writable.\n\n5. **Forbidden Characters in Configurations**:\n   - `$`, `` ` ``, `(`, `)`, `{`, `}`, `;`, `&`, `|`, `<`, `>` are disallowed in paths/credentials.\n   - Lines containing these characters will be rejected.\n\n6. **No Execution of User-Provided Code**:\n   - The backup system does not execute user-provided code. It securely parses config files to prevent any code injection.\n\n---\n\n### **Troubleshooting**\n\nIf you encounter issues with your backups or restores:\n\n1. **Check Logs**:\n   - Latest backup actions are logged in:\n     ```bash\n     /data/disk/your_username/static/control/remote_backups/logs/\n     ```\n   - Validation issues are logged in:\n     ```bash\n     /var/log/backup_validation_issues.log\n     ```\n   - Ask your host to review this log if any lines in your config files were rejected.\n\n2. **Common Validation Errors**:\n   - **Unauthorized Path**: Attempting to include paths outside allowed directories.\n   - **Invalid Syntax**: Incorrectly formatted config files.\n   - **Forbidden Characters**: Using `$`, `` ` ``, `(`, `)`, `{`, `}`, `;`, `&`, `|`, `<`, `>`.\n\n3. **Correcting Validation Errors**:\n   - Ensure all paths are within `/data/disk/your_username/static/` or `/home/your_username.ftp/`.\n   - Verify that regex patterns start with `^` followed by an allowed base path.\n   - Remove any forbidden characters from paths and credential values.\n   - Confirm that credential files contain valid variable assignments.\n\n---\n\n### **Best Practices**\n\n1. 
**Review Configuration Files Regularly**:\n   - Keep your include and exclude lists up to date with your backup needs.\n\n2. **Secure Your Credentials**:\n   - Limit access to your credential files and update your credentials if you suspect they have been compromised.\n\n3. **Test Restores Periodically**:\n   - Perform test restores to ensure that your backups are functioning correctly and data can be recovered when needed.\n\n4. **Monitor Backup Logs**:\n   - Regularly check the backup logs to identify and address any issues promptly.\n\n5. **Use Regex with Caution**:\n   - Ensure patterns match only intended files/directories within allowed paths.\n\n---\n\n### **Conclusion**\n\nBy following the **Basic Use** section, you can quickly get your backups running—just add your preferred remote service credentials and rely on the default local database backups stored in `/data/disk/your_username/static/files/dbackup/`. For more granular control, use the **Advanced Use** section to configure includes, excludes, custom retention, advanced restore options, and more.\n\nRemember that **all local backups count toward your storage quota**, so adjust your retention period or remove old backups as needed. If you have questions or run into any issues, please contact your administrator or hosting support.\n\n---\n\n**Note**: Replace `your_username` with your actual Ægir **system** username (not lshell/FTP username) in all examples above.\n"
  },
  {
    "path": "docs/BLOWFISH.md",
    "content": "# SHA512\n\nYour Devuan or Debian system uses SHA512 for password encryption by default.\n\nThis is not bad, and for sure much better than MD5 by default used in BOA for all newly created SSH/FTPS accounts (both main and extra - for Ægir Clients) in all releases up to BOA-2.0.8.\n\nBut since BOA forces all users to update their passwords every 90 days, once the user updates their password, it is automatically encrypted with SHA512, so it no longer uses the completely insecure MD5 hashing.\n\nNote that BOA switched to SHA512 instead of MD5 by default in HEAD after BOA-2.0.8 Edition, and will use SHA512 by default starting with BOA-2.0.9.\n\n# WARNING!\n\n1. Make sure you have working SSH (ed25519) keys for direct root access without sudo.\n2. Make sure you have working SSH (ed25519) keys for direct root access without sudo.\n3. Make sure you have working SSH (ed25519) keys for direct root access without sudo.\n\n**REALLY. Don't even read anything below if you didn't set this up yet!**\nYou could lock yourself out of your server forever (almost), if your only access is password based and something will go wrong, because you didn't read and follow this how-to *precisely*. 
If you are interested in why this is so important, read the explanation further below.\n\n# BLOWFISH\n\nYou can easily switch your system to use the much more secure Bcrypt/Blowfish, using the simple steps listed below.\n\n```sh\n$ apt-get install libpam-unix2 -y\n\n$ cp -af /usr/share/pam-configs/unix /usr/share/pam-configs/unix2\n$ sed -i \"s/^Name: Unix/Name: Unix2/g\"  /usr/share/pam-configs/unix2\n$ sed -i \"s/pam_unix.so/pam_unix2.so/g\" /usr/share/pam-configs/unix2\n$ sed -i \"s/nullok_secure//g\"           /usr/share/pam-configs/unix2\n$ sed -i \"s/obscure//g\"                 /usr/share/pam-configs/unix2\n$ sed -i \"s/sha512//g\"                  /usr/share/pam-configs/unix2\n$ sed -i \"s/rounds//g\"                  /usr/share/pam-configs/unix2\n$ sed -i \"s/pam_unix.so/pam_unix2.so/g\" /etc/pam.d/pure-ftpd\n$ sed -i \"s/^CRYPT=des.*/CRYPT=blowfish/g\" /etc/security/pam_unix2.default\n$ sed -i \"s/^BLOWFISH_CRYPT_FILES=.*/BLOWFISH_CRYPT_FILES=8/g\" /etc/security/pam_unix2.default\n\n$ pam-auth-update\n\n[*] Unix2 authentication\n[*] Unix authentication\n```\n\nIn the displayed dialog box, please enable \"Unix2 authentication\" and **DO NOT** disable \"Unix authentication\". Both must remain enabled, or all existing SHA512 passwords, including your root password, will stop working!\n\nUse the arrow keys, then choose `<Ok>` with Tab and hit Enter to confirm.\n\n# TESTING\n\nNow, for testing, update your root password and any other account password with the standard `passwd` command. 
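You can tell at a glance which algorithm a stored hash uses by its crypt(3) prefix in `/etc/shadow`. A small sketch (the `classify_hash` helper is illustrative; `$2b$` and `$2y$` are newer bcrypt prefix variants you may also encounter):

```sh
# Classify a shadow hash string by its crypt(3) prefix
classify_hash() {
  case "$1" in
    '$1$'*)                  echo "MD5 (insecure)" ;;
    '$6$'*)                  echo "SHA512" ;;
    '$2a$'*|'$2b$'*|'$2y$'*) echo "Bcrypt/Blowfish" ;;
    *)                       echo "unknown" ;;
  esac
}

classify_hash '$2a$08$EeO3oNMsWxqtvCdWrZfeNeQhwxI0MxqJEDjvRqjZ1Cvc5Yu8XbTlK'
# prints: Bcrypt/Blowfish
```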
Even if you have disabled password-based root access, you should still keep the password working, because you will need it when accessing the system via a remote console, if available.\n\nYou will notice in the `/etc/shadow` file that instead of lines similar to:\n\n```\no1.ftp:$1$XVn3/oPw$Me6EZMC2A4/qAayQGRCh2/:15801::90:7:::\n=== if $1$ then it is *insecure* MD5 ===\n\no1.ftp:$6$N52KMMFm$m/CB/sQtgREx1TtlHNy7aBHUxUQMx6r3q8O39FDTbt6Etzfi2ZYqR/AjUWtRWHmz3IPjZQW8xtXJjwbee9dFk0:15822::90:7:::\n=== if $6$ then it is better SHA512 ===\n```\n\nNow it looks similar to:\n\n```\no1.ftp:$2a$08$EeO3oNMsWxqtvCdWrZfeNeQhwxI0MxqJEDjvRqjZ1Cvc5Yu8XbTlK:15822::90:7:::\n=== if $2a$ $08$ then it is the best Bcrypt/Blowfish with a work factor of 8 ===\n```\n\nTest whether the updated password for `o1.ftp` allows you to log in via SSH and FTPS.\n\nDone!\n\n# IMPORTANT!\n\nOnly MD5 passwords would still work if you enabled \"Unix2 authentication\" and disabled \"Unix authentication\", as recommended in many how-tos you can find on the net. Their authors even share horror stories where they locked themselves out completely and were forced to boot the system from a rescue CD, etc., 
because they didn't fully realize what they were doing.\n\nThe problem is that both the root password and any other account password, once updated after the initial setup (BOA previously used MD5 for non-root accounts), will use SHA512, which simply doesn't work when you have disabled \"Unix authentication\" and enabled only \"Unix2 authentication\".\n\nMake sure that you have enabled both!\n\nNote that BOA will still use SHA512 for all automatically created or updated extra accounts, but since it still forces you to update passwords every 90 days, all accounts on your system will switch to Bcrypt/Blowfish as soon as their passwords are updated with the standard `passwd` command, after you have added Bcrypt/Blowfish support using the how-to above.\n\n# REFERENCES\n\n- [Why LivingSocial's 50 Million Password Breach Is Graver Than You May Think](http://arstechnica.com/security/2013/04/why-livingsocials-50-million-password-breach-is-graver-than-you-may-think/)\n- [Passwords Under Assault](http://arstechnica.com/security/2012/08/passwords-under-assault/)\n- [How To Safely Store A Password](http://codahale.com/how-to-safely-store-a-password/)\n- [Use Bcrypt, Fool](http://yorickpeterse.com/articles/use-bcrypt-fool/)\n- [Choosing a Bcrypt Work Factor](http://wildlyinaccurate.com/bcrypt-choosing-a-work-factor)\n- [Gist: Bcrypt](https://gist.github.com/jkmickelson/3660219)\n- [Drupal.org: Password Hashing](https://drupal.org/node/1201444#comment-6448638)\n- [Drupal.org: PHPass](https://drupal.org/project/phpass)\n- [PHP.net: Crypt](http://www.php.net/manual/en/function.crypt.php)\n- [PHP.net: Blowfish](http://www.php.net/security/crypt_blowfish.php)\n"
  },
  {
    "path": "docs/BRANCHES.md",
    "content": "# Introducing the New BOA Branching Scheme\n\nTo streamline our development efforts and provide a clear distinction between the increasingly diverse requirements of our **LTS (free)**, **PRO (licensed)**, and **OMM (internal)** BOA branches, we are implementing a new branching scheme. This update is designed to accommodate our growing team and accelerate development while maintaining the highest quality standards. The plan includes incremental rewrites to modernize legacy components.\n\n### Goals of the New Branching Scheme\n\nThe updated scheme aims to:\n1. Support **rock-stable releases** for both LTS and PRO.\n2. Enable **rapid development** and **experimental deployments** through separate branches.\n3. Provide a framework for safely experimenting with upcoming features without impacting stable branches.\n\n### Key Changes\n\nWhile you will still primarily work with two main public branches—**LTS** (free, no commercial license required) and **PRO** (licensed)—the project’s workflow has changed. It's important to understand how these branches interact and how to safely test future releases.\n\n---\n\n### Branching Structure\n\nEach public main branch now includes two additional sub-branches with `-base` and `-edge` suffixes to clarify their roles. Here’s how the new structure works:\n\n#### **Development Branches**\n1. **5.x-dev**\n   Main development branch where the latest untested changes are committed.\n2. **5.x-dev-base**\n   Slower development branch, accepting only non-breaking commits from `5.x-dev`.\n3. **5.x-dev-edge**\n   Experimental branch for testing features from `5.x-dev`.\n\n#### **PRO (Licensed) Branches**\n1. **5.x-pro**\n   Main stable PRO branch, accepting commits only from `5.x-pro-base`.\n2. **5.x-pro-base**\n   Testing branch for PRO, accepting commits only from `5.x-dev-base`.\n3. **5.x-pro-edge**\n   Experimental branch supporting testing for `5.x-pro-base`.\n\n#### **LTS (Free) Branches**\n1. 
**5.x-lts**\n   Main stable LTS branch, accepting commits only from `5.x-lts-base`.\n2. **5.x-lts-base**\n   Testing branch for LTS, accepting commits only from `5.x-dev-base`.\n3. **5.x-lts-edge**\n   Experimental branch supporting testing for `5.x-lts-base`.\n\n---\n\n### Code Management Workflow\n\n- **LTS and PRO branches:** Both the `5.x-lts` and `5.x-pro` branches rely on their respective `-base` branches for code updates. These `-base` branches receive commits from `5.x-dev-base` after thorough testing.\n- **OMM branch:** Reserved for internal development and testing outside public workflows.\n\n### Guidelines for Tagging Releases\n\n- New **tags** should only be applied to the **5.x-pro** and **5.x-lts** branches.\n- Tags trigger the BOA SKYNET auto-self-update procedures, so it is crucial to ensure only important incremental batch updates and stable releases are tagged.\n\n### Key Points to Remember\n1. Use the **main LTS** and **PRO branches** for stable deployments.\n2. Experimentation and testing should be done in the respective `-base` and `-edge` branches.\n3. Always tag updates and releases in **5.x-pro** or **5.x-lts** to ensure compatibility with BOA SKYNET auto-update procedures.\n\n---\n\nThis new branching scheme ensures greater flexibility, improves stability, and allows for faster innovation across our LTS and PRO offerings. If you have any questions or require guidance on adapting to these changes, please feel free to reach out.\n\n"
  },
  {
    "path": "docs/BUILDTESTS.md",
    "content": "# How we build newer codebases for testing\n\n## Prepare environment\n\n```sh\n  su -s /bin/bash - o3x\n  mkdir -p ~/static/february-6/\n  cd ~/static/february-6/\n```\n\n## Visit for latest versions check\n\n```sh\n  https://www.drupal.org/project/commerce\n  https://www.drupal.org/project/farm\n  https://www.drupal.org/project/localgov\n  https://www.drupal.org/project/openculturas\n  https://www.drupal.org/project/sector\n  https://www.drupal.org/project/thunder\n  https://www.drupal.org/project/varbase\n```\n\n## Build them one by one and document results\n\n```sh\nfarmos     # farm-3.5.1-10.6.3\n           # visit: https://github.com/farmOS/farmOS/releases\n           # wget https://github.com/farmOS/farmOS/releases/download/3.5.1/farmOS-3.5.1.tar.gz\n           # tar -xzf farmOS-3.5.1.tar.gz\n           # mv farmOS farm-3.5.1-10.6.3\n           # (10.6.3)\n```\n\n```sh\ncms        # composer create-project drupal/cms drupal_cms_installer-2.0.0-11.3.3 --no-dev --no-interaction --no-scripts\n           # cd ~/static/february-6/drupal_cms_installer-2.0.0-11.3.3\n           # composer update --no-scripts\n           # composer install --no-dev\n           # (11.3.3)\n```\n\n```sh\nculturas   # composer create-project --remove-vcs drupal/openculturas_project openculturas-2.5.4-10.5.8 --no-dev --no-interaction --no-scripts\n           # cd ~/static/february-6/openculturas-2.5.4-10.5.8/web/profiles/contrib/openculturas-distribution\n           # mv profile openculturas\n           # mv openculturas ~/static/february-6/openculturas-2.5.4-10.5.8/web/profiles/contrib/\n           # mv * ~/static/february-6/openculturas-2.5.4-10.5.8/web/profiles/contrib/\n           # cd ~/static/february-6/openculturas-2.5.4-10.5.8/web/profiles/contrib/\n           # rm -rf openculturas-distribution\n           # cp ~/static/february-6/farm-3.5.1-10.6.3/web/sites/example.sites.php ~/static/february-6/openculturas-2.5.4-10.5.8/web/sites/\n           # 
(10.5.8)\n```\n\n```sh\ncommerce   # composer create-project -s dev centarro/commerce-kickstart-project commerce_kickstart-3.2.0-11.3.3 --no-dev --no-interaction --no-scripts\n           # cd ~/static/february-6/commerce_kickstart-3.2.0-11.3.3\n           # composer require centarro/certified-projects\n           # composer install --no-dev\n           # (11.3.3)\n```\n\n```sh\nlocalgov   # composer create-project localgovdrupal/localgov-project localgov-3.4.0-10.6.3 --no-dev --no-interaction --no-scripts\n           # cd ~/static/february-6/localgov-3.4.0-10.6.3\n           # (10.6.3)\n```\n\n```sh\nsector     # composer create-project drupal/sector_project_template:11.x-dev sector-11.0.x-dev-11.3.3 --no-dev --no-interaction --no-scripts\n           # cd ~/static/february-6/sector-11.0.x-dev-11.3.3\n           # composer update\n           # composer install --no-dev\n           # (11.3.3)\n```\n\n```sh\nthunder    # composer create-project thunder/thunder-project thunder-8.3.1-11.3.3 --no-dev --no-interaction --no-install --no-scripts\n           # cd /data/disk/o3x/static/february-6/thunder-8.3.1-11.3.3\n           # composer config --no-plugins allow-plugins.drupal/core-composer-scaffold true\n           # composer install --no-dev\n           # (11.3.3)\n```\n\n```sh\nvarbase    # composer create-project Vardot/varbase-project:~10 varbase-10.1.0-11.3.1 --no-dev --no-interaction --no-install --no-scripts\n           # ln -sf /opt/php84/bin/php /usr/bin/php\n           # cd ~/static/february-6/varbase-10.1.0-11.3.1\n           # composer config --no-plugins allow-plugins.drupal/core-composer-scaffold true\n           # composer install --no-dev\n           # cd ~/static/february-6/varbase-10.1.0-11.3.1/docroot\n           # find -name recipes | awk '{print $1\"/default/content\"}' | xargs -I {} mkdir -p {}\n           # (11.3.1)\n```\n\n```sh\nvarbase    # composer create-project Vardot/varbase-project:~9 varbase-9.1.13-10.6.1 --no-dev --no-interaction 
--no-install --no-scripts\n           # cd ~/static/february-6/varbase-9.1.13-10.6.1\n           # composer config --no-plugins allow-plugins.drupal/core-composer-scaffold true\n           # composer install --no-dev\n           # cd ~/static/february-6/varbase-9.1.13-10.6.1/docroot\n           # find -name recipes | awk '{print $1\"/default/content\"}' | xargs -I {} mkdir -p {}\n           # (10.6.1)\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:10.2.12 drupal-10.2.12 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-10.2.12\n           # composer require drush/drush\n           # composer audit\n           # (10.2.12)\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:10.3.14 drupal-10.3.14 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-10.3.14\n           # composer require drush/drush\n           # composer audit\n           # (10.3.14)\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:10.4.9 drupal-10.4.9 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-10.4.9\n           # composer require drush/drush\n           # composer audit\n           # (10.4.9)\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:10.5.8 drupal-10.5.8 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-10.5.8\n           # composer require drush/drush\n           # composer audit\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:10.6.3 drupal-10.6.3 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-10.6.3\n           # composer require drush/drush\n           # composer audit\n           # (10.6.3)\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:11.1.9 drupal-11.1.9 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-11.1.9\n           # composer require drush/drush\n           # composer 
audit\n           # (11.1.9)\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:11.2.10 drupal-11.2.10 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-11.2.10\n           # composer require drush/drush\n           # composer audit\n           # (11.2.10)\n```\n\n```sh\nvanilla    # composer create-project drupal/recommended-project:11.3.3 drupal-11.3.3 --no-dev --no-interaction\n           # cd ~/static/february-6/drupal-11.3.3\n           # composer require drush/drush\n           # composer audit\n           # (11.3.3)\n```\n\n\n## Check and compare versions built above\n\n```sh\no3x@modern:~/static/february-6$ ls -la\n-rw-r--r-- 1 o3x users 68M Feb  7 20:19 commerce_kickstart-3.2.0-11.3.3.tar.gz\n-rw-r--r-- 1 o3x users 20M Feb  7 20:12 drupal-10.2.12.tar.gz\n-rw-r--r-- 1 o3x users 22M Feb  7 20:12 drupal-10.3.14.tar.gz\n-rw-r--r-- 1 o3x users 23M Feb  7 20:11 drupal-10.4.9.tar.gz\n-rw-r--r-- 1 o3x users 23M Feb  7 20:11 drupal-10.5.8.tar.gz\n-rw-r--r-- 1 o3x users 23M Feb  7 20:11 drupal-10.6.3.tar.gz\n-rw-r--r-- 1 o3x users 21M Feb  7 20:10 drupal-11.1.9.tar.gz\n-rw-r--r-- 1 o3x users 22M Feb  7 20:10 drupal-11.2.10.tar.gz\n-rw-r--r-- 1 o3x users 22M Feb  7 20:09 drupal-11.3.3.tar.gz\n-rw-r--r-- 1 o3x users 89M Feb  7 20:15 drupal_cms_installer-2.0.0-11.3.3.tar.gz\n-rw-r--r-- 1 o3x users 25M Feb  7 20:14 farm-3.5.1-10.6.2.tar.gz\n-rw-r--r-- 1 o3x users 83M Feb  7 20:18 localgov-3.4.0-10.6.3.tar.gz\n-rw-r--r-- 1 o3x users 71M Feb  7 20:17 openculturas-2.5.4-10.5.8.tar.gz\n-rw-r--r-- 1 o3x users 34M Feb  7 20:17 sector-11.0.x-dev-11.3.3.tar.gz\n-rw-r--r-- 1 o3x users 34M Feb  7 20:16 thunder-8.3.1-11.3.3.tar.gz\n-rw-r--r-- 1 o3x users 97M Feb  7 20:13 varbase-10.1.0-11.3.1.tar.gz\n-rw-r--r-- 1 o3x users 53M Feb  7 20:16 varbase-9.1.13-10.6.1.tar.gz\no3x@modern:~/static/february-6$\n```\n\n## Add them all as platforms in Ægir\n\nUse paths like `february-6/drupal-11.3.3` and run tests for sites install, clone and 
migration.\n\n## Notes on non-standard issues\n\nSome codebases need manual fixes after the build. For example, `openculturas` ships with a wrong installation profile directory tree by default and lacks the required `sites/example.sites.php` file, which must be copied in manually before you can install sites.\n"
  },
  {
    "path": "docs/CAVEATS.md",
    "content": "# CAVEATS\n\n1. **Devuan-Based Systems Only**\n\nBOA maintainers currently use only 64-bit systems based on Devuan. While Debian may be used as an initial setup, it is mainly supported as a transition step before migrating to Devuan, which is free of systemd. We no longer support or use Ubuntu systems.\n\n2. **Amazon EC2 No Longer Supported**\n\nAmazon EC2 is no longer considered a BOA-friendly environment. With its strict reliance on systemd, it has caused unexpected crashes on BOA instances that previously ran smoothly on legacy Debian Stretch. Additionally, it now prevents upgrading to Devuan, which is systemd-free.\n\n3. **Server (Public) Install Mode Preferred**\n\nBOA maintainers primarily use the server (public) installation mode, which is considered stable and fully supported. The localhost (local) installation mode is rarely tested and remains highly experimental.\n\n4. **VMware Clarification**\n\nWhen we refer to VMware in this context, we are talking about the virtualization technology, not the company or its vCloud Air service. The vCloud Air service is known to cause issues with Drupal, including problems with CSS/JS aggregation, the AdvAgg module, and other features that require the site to connect to itself via its public IP. These issues are unrelated to BOA itself.\n"
  },
  {
    "path": "docs/CLUSTER.md",
    "content": "# About BOA Simple Cluster (deprecated)\n\nThe legacy version of this installer supports 3 or more Percona XtraDB Cluster Galera nodes, connected via ProxySQL load balancer to a single BOA web node, all running on the same machine, or (soon) multiple machines, leveraging Linux VServer guests to create separate VPS instances.\n\nIt's technically possible to install more BOA web nodes, each with its own ProxySQL load balancer connected to the same Percona XtraDB Cluster, and put some extra load balancers in front of these web nodes, along with auto-sync for files on the web nodes, for a full High Availability setup, but it is beyond the scope of this installer, at least for now.\n\nUnless you expect to outgrow hardware capabilities provided by a single machine with a reliable multi-drive RAID6 array and enough RAM and CPU power to handle the load generated by PHP-FPM, Redis, Nginx, and optionally Solr, the installer provides a scalable DB Cluster solution, which can be used as a starting point for future expansion.\n\nThe multi-machine cluster design is a requirement when either you have low-end hardware and drives without RAID, or when the expected traffic is too big to be handled by a single web node and/or by DB nodes all running on a single machine.\n\nWhile the installer supports only a single machine during initial setup, it's easy to move some of the created DB nodes to other machines in the same network if you can re-assign IP addresses between your machines on the fly.\n\nPlease note that the Galera cluster should always run an odd number of nodes, on an odd number of machines, and in an odd number of data centers.\n\nThis requirement can be ignored safely only if there is a Galera Arbitrator installed on the web node, or another node connected to the cluster. 
We plan to add it on the web node by default in the future, to improve DB cluster reliability.\n\nNote that the installer will create a classic, standalone Barracuda+Octopus pair on the web node, connected to the *locally* running DB server, which is not a part of the DB cluster. This adds more flexibility and is intended for testing future improvements and features, including the ability to convert existing standalone BOA servers to web+DB clusters.\n\nTo leverage the installed Percona XtraDB Cluster, you will need to install another Octopus instance on the web node, which will be configured on the fly to connect via ProxySQL load balancer to the Percona XtraDB Cluster.\n\nFor more information on Percona XtraDB Cluster integration please read:\n\n- [Percona XtraDB Cluster Documentation](https://www.percona.com/doc/percona-xtradb-cluster/5.7/index.html)\n- [Galera Cluster Documentation](http://galeracluster.com/documentation-webpages/index.html)\n- [ProxySQL Documentation](http://www.proxysql.com)\n- [Percona Blog on ProxySQL and Galera Integration](https://www.percona.com/blog/2016/09/15/proxysql-percona-cluster-galera-integration/)\n- [Galera Cluster Limitations](http://galeracluster.com/documentation-webpages/limitations.html)\n- [MySQL Galera Documentation](http://mysql.rjweb.org/doc.php/galera)\n- [Galera Arbitrator Documentation](http://galeracluster.com/documentation-webpages/arbitrator.html)\n\n# Requirements\n\n- Dedicated, single bare-metal machine (RAID & 32GB+ RAM recommended)\n- Five+ IPs assigned (1 public host, 1 public web + 3 private or public for DB)\n- Debian Stretch minimal OS on the host machine\n- SSH (ed25519) keys for root with direct access (not via sudo etc.)\n\n# BOA Simple Cluster Installer Setup\n\n```sh\n$ apt-get update -qq && apt-get install wget -y -qq\n$ cd; rm -f cluster.sh\n$ cd; wget -q -U iCab http://files.aegir.cc/cluster/cluster.sh\n$ mv -f cluster.sh /usr/local/bin/cluster\n$ chmod 700 /usr/local/bin/cluster\n```\n\n# Usage: 
Configuration and Examples\n\n### Example for Installing Linux VServer Based BOA on a Dedicated Machine without Any Cluster Configuration, Just a BOA VPS:\n\n```sh\n$ cluster in-host server.example.com\n$ shutdown -r now\n$ cluster up-host upgrade\n$ cluster in-vps v1 v1.example.com public.ip.address jessie head my@email\n```\n\n### Example for Installing Complete BOA Simple Cluster\n\nAdd the following required lines to your `/root/.cluster.cnf` file. Required lines are marked with [R] and optional with [O]:\n\n```ini\n#\n_CLUSTER_EMAIL=\"\"   ### [R] Technical contact email\n#\n# Public IP and hostname with working DNS for the main web node\n#\n_WEB_NODE_IP=       ### [R] Public IP address assigned to the machine\n_WEB_FQDN=          ### [R] Valid FQDN pointing to WEB_NODE_IP\n#\n# An odd number of DB nodes in the array: 3, 5, 7 etc. numbered from 0\n#\n_DB_NODE_IP[0]=     ### [R] Private or Public IP address to use\n_DB_NODE_IP[1]=     ### [R] Private or Public IP address to use\n_DB_NODE_IP[2]=     ### [R] Private or Public IP address to use\n#\n_CLUSTER_PREFIX=c1r ### [O] For Linux VServer guests short names\n_CLUSTER_SUFFIX=    ### [O] For DB nodes FQDN hostnames: example.com\n_CLUSTER_OS=jessie  ### [O] Debian version: jessie\n#\n```\n\n```sh\n$ cluster in-host server.example.com\n$ shutdown -r now\n$ cluster up-host upgrade\n$ cluster in-all head\n$ cluster in-oct em@il o2 mini head\n```\n\n### Example for Upgrading BOA Simple Cluster\n\n1. Update the manager script:\n   ```sh\n   $ cd; rm -f cluster.sh\n   $ cd; wget -q -U iCab http://files.aegir.cc/cluster/cluster.sh\n   $ mv -f cluster.sh /usr/local/bin/cluster\n   $ chmod 700 /usr/local/bin/cluster\n   ```\n\n2. Upgrade the host and all DB and web heads:\n   ```sh\n   $ cluster up-host update\n   $ cluster up-dbs head\n   $ cluster up-web head\n   ```\n\n3. Upgrade web head with master Ægir -- run again if upgrade fails:\n   ```sh\n   $ vserver c1rweb exec /opt/local/bin/barracuda up-lts\n   ```\n\n4. 
Upgrade all Octopus instances -- run again if upgrade fails:\n   ```sh\n   $ vserver c1rweb exec /opt/local/bin/octopus up-lts all force\n   ```\n\n### All Usage Options\n\n```sh\nUsage: cluster {in-host} {fqdn}\nUsage: cluster {up-host} {update|upgrade}\nUsage: cluster {in-vps} {id} {fqdn} {ip} {os} {stable|head|galera} {email}\nUsage: cluster {in-all} {stable|head}\nUsage: cluster {up-dbs} {stable|head}\nUsage: cluster {up-web} {stable|head}\nUsage: cluster {up-all} {stable|head}\nUsage: cluster {in-oct} {email} {o2} {lts|dev|pro}\nUsage: cluster {in-pxy} {id} {ip} force-reinstall\nUsage: cluster {check} {more|report} {backups|octopus}\n```\n"
  },
  {
    "path": "docs/COMPOSER.md",
    "content": "# Composer Usage in BOA Codebases\n\nThis document explains correct and incorrect Composer usage in the context of Ægir-powered Drupal 10 platforms managed by BOA (Barracuda Octopus Ægir), as well as safe Composer workflows for standalone Composer-based sites.\n\nComposer is powerful, but misuse can result in broken platforms, partial upgrades, or corrupted deployments.\n\nYou should think about Composer like it was Drush Make replacement, and you should not re-build nor upgrade the codebase on a platform with sites already hosted. Just use it to build new codebases and then add them as platforms when the build works without errors.\n\n---\n\n## Table of Contents\n\n1. [Immutable Codebase Workflow (Safe & Supported)](#1-immutable-codebase-workflow-safe--supported)\n2. [Quick & Dirty Composer Usage (Unsafe & Unsupported)](#2-quick--dirty-composer-usage-unsafe--unsupported)\n3. [Developer Shortcut: Safe-ish Platform Clone](#3-developer-shortcut-safe-ish-platform-clone)\n4. [Site-local Drush (vdrush) for Direct Updates](#4-site-local-drush-vdrush-for-direct-updates)\n5. [Composer Sites Managed Directly – Module Updates](#5-composer-sites-managed-directly--module-updates)\n6. [Composer Branch Switching – Safe Handling After Git Checkout](#6-composer-branch-switching--safe-handling-after-git-checkout)\n7. [Summary](#7-summary)\n\n---\n\n## 1. Immutable Codebase Workflow (Safe & Supported)\n\n**Use this method for all production work.** This is the **only officially supported** and reproducible Composer workflow for Ægir-managed platforms.\n\nBOA platforms are **immutable** once deployed. 
Composer must never be used on platforms already powering Ægir-hosted sites.\n\n### ✅ Allowed Composer usage (before adding to Ægir):\n\n```bash\ncomposer create-project drupal/recommended-project:^10 myplatform\ncd myplatform\n\ncomposer require drupal/module_name\ncomposer update drupal/module_name --with-dependencies\n\ncomposer install --no-interaction --optimize-autoloader\n```\n\nThen:\n- Commit `composer.json`, `composer.lock`, and any scaffolded files\n- Add to Ægir as a new platform\n- Migrate test sites\n- Migrate live sites after successful verification\n\n📚 Learn more: [Safe Upgrade Workflow](https://learn.omega8.cc/your-drupal-site-upgrade-safe-workflow-298)\n\n---\n\n## 2. Quick & Dirty Composer Usage (Unsafe & Unsupported)\n\n**NOT SUPPORTED — but documented for transparency.**\nUsed by some developers who ignore Ægir’s platform immutability.\n\n### ⚠️ Risks:\n- Overwrites `core`, `vendor`, scaffolded files\n- Breaks symlinks, permissions, autoloaders\n- Causes unpredictable site behavior\n\n### ⚠️ Example (DANGEROUS):\n\n```bash\ncd ~/static/path/to/platform-app-root\nrm -rf core vendor composer.lock\ncomposer install --no-dev\ncomposer require drupal/module_name\n```\n\nUse [vdrush](#4-site-local-drush-vdrush-for-direct-updates) to update individual sites afterward.\n\n---\n\n## 3. Developer Shortcut: Safe-ish Platform Clone\n\nTo test module updates or patches quickly:\n\n```bash\ncp -a ~/static/path/to/platform ~/static/path/to/platform-new\ncd ~/static/path/to/platform-new\n\ncomposer clear-cache\ncomposer require drupal/new_module\ncomposer update drupal/existing_module --with-dependencies\ncomposer install --no-interaction --optimize-autoloader\n```\n\n- Add new platform in Ægir\n- Migrate a test site\n- If stable, migrate remaining sites\n- Later remove the old platform\n\n---\n\n## 4. 
Site-local Drush (vdrush) for Direct Updates\n\nIf Composer is used outside Ægir, you must update each site manually:\n\n```bash\ncd ~/static/path/to/platform-app-root\nvdrush @site-alias updb\nvdrush @site-alias cr\n```\n\n📚 Learn more:\n[DRUSH-CLI.md – Site-local Drush](https://github.com/omega8cc/boa/blob/5.x-dev/docs/DRUSH-CLI.md#steps-to-use-site-local-drush)\n\n---\n\n## 5. Composer Sites Managed Directly – Module Updates\n\nFor Composer-managed Drupal 10 sites **not using Ægir workflow**, here's the simplest way to update one or two modules (e.g., for security releases).\n\n### ✅ Update process:\n\n```bash\ncomposer clear-cache\ncd ~/static/path/to/platform-app-root\ncomposer update drupal/module_name --with-dependencies\nvdrush @site-alias updb\nvdrush @site-alias cr\n```\n\n### 🔐 Lock to specific version (optional):\n```bash\ncomposer require drupal/module_name:^1.8 --update-with-dependencies\n```\n\n### 🧠 Tips:\n- Preview first:\n  ```bash\n  composer update drupal/module_name --with-dependencies --dry-run\n  ```\n- Check what’s outdated:\n  ```bash\n  composer outdated drupal/*\n  ```\n\n---\n\n## 6. Composer Branch Switching – Safe Handling After Git Checkout\n\nSwitching Git branches in a Composer-managed project can break dependencies if `vendor/` and `composer.lock` aren't reset.\n\n### ⚠️ Problem:\n\n```bash\ncomposer install\ngit checkout feature/new-ui\ncomposer install  # ⛔ may fail!\n```\n\nResult: version mismatches, broken autoloaders, partial installs.\n\n---\n\n### ✅ Correct workflow:\n\n```bash\nrm -rf vendor/\ncomposer clear-cache\ncomposer install --no-dev\n```\n\nThis ensures:\n- Clean dependency state\n- `composer.lock` matches `vendor/`\n- No leftover packages from previous branch\n\n💡 Avoid switching branches with uncommitted Composer changes.\n\n---\n\n## 7. Summary\n\n| Scenario                             | Composer Use Allowed? 
| Notes |\n|--------------------------------------|------------------------|-------|\n| Building a new platform              | ✅ Yes                | Fully supported |\n| Platform already in use by Ægir     | ❌ No                 | Never use Composer |\n| Cloned platform for dev/testing      | ⚠️ With care         | Register in Ægir separately |\n| Quick hacks on live platform         | ⚠️ Very risky         | Unsupported |\n| Updating standalone Composer site    | ✅ Yes                | Follow best practices |\n| Applying DB updates per site         | ✅ Use `vdrush`       | Required after Composer hacks |\n| Switching Git branches               | ⚠️ Requires cleanup   | Always clear cache + remove vendor |\n\n---\n\n**Composer is a build-time tool — not a runtime update manager.\nIn Ægir, platforms are immutable once deployed.\nAlways test changes before applying them to live sites.**\n\n_Last updated: 2025-04-03_\n\n\n"
  },
  {
    "path": "docs/CONTRIBUTING.md",
    "content": "# Contributing to BOA\n\nGuidelines for contributing code and reporting bugs and problems with BOA.\n\n## Bug, Feature, and Patch Submission\n\n- **Active issue queue:** [GitHub Issues](https://github.com/omega8cc/boa/issues)\n\nReporting bugs is a great way to contribute to BOA. Mis-reporting bugs or duplicating reports, however, can be a distraction to the development team and waste precious resources. So, help out by following these guidelines.\n\n**Important:**\n- Every bug report must include a Gist link to the output of the `boa info` command.\n- Any bug report failing to follow the guidelines will be ignored and closed.\n\nBefore reporting a bug, always search for a similar bug report before submitting your own, and include as much information about your context as possible, including your server/VPS parent system name (like Xen) and/or hosting provider name and URL.\n\nPlease always attach the output of the `boa info` command, or for a more detailed system configuration and history report, use the `boa info more` command.\n\n**Do not post your server or error logs directly in the issue.** Instead, use services like [Gist](http://gist.github.com) and post the link in your submission.\n\n**Hint:** Please enable debugging with `_DEBUG_MODE=YES` in the `/root/.barracuda.cnf` file before running an upgrade, so it will display more helpful details. 
You can find more verbose logs in the `/var/backups/` directory.\n\nIt is also a good idea to search our deprecated issue queues for Barracuda and Octopus projects on drupal.org:\n\n- **Legacy issue queue for Barracuda:** [Drupal.org Barracuda Issues](https://drupal.org/project/issues/barracuda)\n- **Legacy issue queue for Octopus:** [Drupal.org Octopus Issues](https://drupal.org/project/issues/octopus)\n\n## Help Options\n\n- **Documentation and How-to:** [Omega8.cc Library](https://omega8.cc/library/development)\n- **Gitter chat:** [Gitter Chat](https://gitter.im/omega8cc/boa)\n- **Commercial support:** [Omega8.cc](https://omega8.cc)\n"
  },
  {
    "path": "docs/DEVELOPMENT.md",
    "content": "# Notes Regarding Drupal Development on BOA\n\n## Drupal 8+\n\n**Note:** All commands/paths are relative to the site folder, not the platform.\n\n### Disable Redis\n\nRedis is automatically disabled if you use a dev domain and the following file exists:\n```sh\ntouch files/development.services.yml\n```\n\n### Theme Debug Mode (Twig)\n\nTo enable theme debugging for your Drupal 8+ site you need to use a dev domain (*.dev.*) and add the following in `files/development.services.yml`:\n\n```yaml\nservices:\n  cache.backend.null:\n    class: Drupal\\Core\\Cache\\NullBackendFactory\n\nparameters:\n  twig.config:\n    debug: true\n    auto_reload: true\n    cache: false\n```\n\n## Drupal 7\n\n**Note:** All commands/paths are relative to the site folder, not the platform.\n\n### Disable Redis\n\nEdit `modules/boa_site_control.ini`.\n\n### Theme Debug Mode\n\nAs of Drupal 7.33, you can enable theme debug mode via the variable: `theme_debug`. You can configure it using one of the methods below.\n\n#### Drush\n\n```sh\ndrush variable-set theme_debug 1\n```\n\n#### Configuration (local.settings.php)\n\n```php\n$conf['theme_debug'] = TRUE;\n```\n"
  },
  {
    "path": "docs/DISK_RESIZE.md",
    "content": "# Live Disk Resize How-To\n\nThis covers the common “plan upgraded, disk grew, but system still sees old sizes / fstab out of sync” situations on Debian/Devuan-like systems (GPT, ext4 root, EFI partition).\n\n---\n\n### Dependencies\n\nInstall only if missing:\n\n```bash\napt-get update\napt-get install -y gdisk cloud-guest-utils parted\n```\n\nProvides:\n\n* `sgdisk` (from `gdisk`) — GPT fixes/backup\n* `growpart` (from `cloud-guest-utils`) — extend partition safely\n* `partprobe` (from `parted`) — ask kernel to re-read partition table (optional)\n\nFilesystem grow for ext4:\n\n* `resize2fs` (usually already installed via `e2fsprogs`)\n\n---\n\n### Situation A: Partition already uses full disk, filesystem is smaller\n\n**Symptoms**\n\n* `lsblk` shows `/dev/sda1` already near disk size\n* `df -h /` shows smaller size than `lsblk`\n* `resize2fs` hasn’t been run yet\n\n**Steps**\n\n```bash\nlsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda\ndf -h /\nresize2fs -p /dev/sda1\ndf -h /\n```\n\nDone.\n\n---\n\n### Situation B: Disk grew, but GPT warnings + root partition still old size\n\n**Symptoms**\n\n* `fdisk -l /dev/sda` shows e.g. 100G disk but `sda1` still ~50G\n* Warnings like:\n\n  * `GPT PMBR size mismatch (...) 
will be corrected by write.`\n  * `The backup GPT table is not on the end of the device.`\n\n**Goal**\nFix GPT metadata, extend partition in-place, then grow ext4 — **no rebuild**.\n\n#### 1) Backup GPT (recommended)\n\n```bash\nsgdisk --backup=/root/sda.gpt.backup /dev/sda\n```\n\n#### 2) Fix GPT headers to match new disk end\n\n```bash\nsgdisk -e /dev/sda\n```\n\n#### 3) Extend root partition to fill disk\n\n```bash\ngrowpart /dev/sda 1\n```\n\n#### 4) Ask kernel to re-read partition table (optional)\n\nIf `partprobe` exists:\n\n```bash\npartprobe /dev/sda\n```\n\nIf not installed, you can skip if `lsblk` already shows the new size.\n\n#### 5) Verify partition grew\n\n```bash\nlsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda\n```\n\n#### 6) Grow ext4 filesystem online\n\n```bash\nresize2fs -p /dev/sda1\ndf -h /\n```\n\nDone.\n\n---\n\n### Situation C: `/etc/fstab` out of sync (remount fails)\n\n**Symptoms**\n\n* System is running, but:\n\n  * `mount -o remount /` fails with “can’t find PARTUUID=…”\n  * `/etc/fstab` references a stale `PARTUUID`/`UUID`\n\n**Steps**\n\n1. Identify the *actual* root device:\n\n```bash\nfindmnt -n -o SOURCE /\n```\n\n2. Get correct identifiers:\n\n```bash\nblkid /dev/sda1\n```\n\n3. Update `/etc/fstab` (recommend using filesystem UUID for `/`)\n   Example root line:\n\n```text\nUUID=<UUID-from-blkid> / ext4 rw,errors=remount-ro,noatime 0 1\n```\n\n4. Test:\n\n```bash\nmount -o remount /\nmount -a\n```\n\n---\n\n### Situation D: GRUB `root=` still points to old PARTUUID/UUID (reboot risk)\n\n**Symptoms**\n\n* `/proc/cmdline` shows `root=PARTUUID=<old>`\n* You fixed partitions/fstab but fear next boot will fail\n\n**Steps**\n\n1. Check current boot cmdline:\n\n```bash\ncat /proc/cmdline\n```\n\n2. Regenerate bootloader config:\n\n```bash\nupdate-grub\nupdate-initramfs -u\n```\n\n3. 
Verify new GRUB entries contain the correct root id:\n\n```bash\ngrep -R \"linux.*root=\" -n /boot/grub/grub.cfg | head\n```\n\n---\n\n### Quick verification block (use anytime)\n\n```bash\nlsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda\nfdisk -l /dev/sda | sed -n '1,40p'\nfindmnt -n -o SOURCE,FSTYPE,OPTIONS /\ndf -h /\nblkid /dev/sda1\n```\n\n---\n\n### Notes\n\n* “Partition table entries are not in disk order.” is normal on GPT cloud images (BIOS boot + EFI at the front).\n* If `partprobe` is missing but `lsblk` already shows the updated partition size, you’re fine.\n* For ext4 root on `/dev/sda1`, `resize2fs` works online while the filesystem is mounted.\n"
  },
  {
    "path": "docs/DRUPALGEDDON.md",
    "content": "# Drupalgeddon Daily Checks on D7 Sites\n\n## ~/static/control/drupalgeddon.info\n\nPreviously enabled by default, now requires this control file to still run daily, because it may generate some false positives not always possible to avoid or silence, so it no longer makes sense to run this check daily, especially after BOA has run it automatically for a month and finally even disabled automatically all clearly compromised sites.\n\nNote that your system administrator may still enable this with the root level control file `/root/.force.drupalgeddon.cnf`, so it will still run, even if you do not create the Octopus instance level empty control file:\n`~/static/control/drupalgeddon.info`\n\nPlease note that the current version of the Drupalgeddon Drush extension needs the 'update' module to be enabled to avoid even more false positives, so BOA will enable the 'update' module temporarily while running this check, which in turn will result in even more email notices sent to the site admin email, if these notices are enabled.\n"
  },
  {
    "path": "docs/DRUSH-CLI.md",
    "content": "# Drush and PHP-CLI Version Management in BOA\n\nBOA (Barracuda Octopus Ægir) provides robust tools for managing PHP-CLI and Drush versions, giving you control over how Drupal sites are maintained and updated. This document explains the process for **instant PHP-CLI switching** using **configuration files**, and how PHP-CLI interacts with Drush for site management.\n\n---\n\n## PHP-CLI Version Management in BOA\n\nBOA provides two mechanisms for managing the PHP-CLI version used in command-line operations (such as Drush and Composer):\n\n1. **`~/static/control/cli.info`**: This is the **main configuration file** that defines the **default PHP-CLI version** to use across the Octopus instance. If no instant configuration switches are present, this version will be used.\n2. **Instant Switch Configuration Files**: These files enable instant PHP-CLI version switching for command-line and Ægir backend tasks operations.\n\n### How Instant PHP-CLI Switching Works\n\nIn addition to the `cli.info` file, BOA supports **instant PHP-CLI switching** through **specific configuration files** located in `~/static/control/`. The filenames of these configuration files dictate the PHP version to use, and their content is irrelevant. This enables you to switch the PHP-CLI version for Drush, Composer, and other CLI operations, including Ægir tasks, instantly.\n\n#### Example Instant Switch Files:\n\n- `~/static/control/php85.info`\n- `~/static/control/php84.info`\n- `~/static/control/php83.info`\n- `~/static/control/php82.info`\n- `~/static/control/php81.info`\n- `~/static/control/php74.info`\n\nEach file corresponds to a specific PHP version. To switch the PHP-CLI version:\n\n1. **Create a configuration file** corresponding to the desired PHP version. For example, to switch to PHP 8.3, create a file named `php83.info` in `~/static/control/`. The content of this file does not matter and can be empty.\n2. 
The system will automatically detect the **highest available PHP version** based on the filenames of these files. You do not need to remove other files for lower PHP versions.\n\nIf none of these instant switch files are present, the system will default to the PHP version listed in `~/static/control/cli.info`.\n\n**Note:** These files will switch the PHP-CLI version used *instantly*, unlike the classic `~/static/control/cli.info`, which requires 3 minutes to take effect.\n\n### Supported PHP-CLI Versions:\n\n- 8.5, 8.4, 8.3, 8.2, 8.1, 8.0, 7.4, 7.3, 7.2, 7.1, 7.0, 5.6\n\n**However:** Some older PHP versions may no longer be available on your system, because BOA automatically deactivates versions not used by any hosted site. If you need to restore an older PHP version that was previously available, please open a support ticket with your BOA host, or, if you have root access, run the `barracuda php-idle enable` command. If you want to re-install all supported but disabled PHP versions, please run the `barracuda up-lts php-max` command. For more details, run the `barracuda help` command.\n\n### Important Notes:\n\n- The instant switch files' **content is irrelevant**—what matters is the **filename**. These files can be empty or contain any content.\n- The system will automatically select the **highest PHP version** based on the filenames of the switch files. No need to remove lower-version files.\n- The `cli.info` file serves as the **default** PHP-CLI version when no instant switch files are present, and it **must contain a valid PHP version** in its content (e.g., `8.1`).\n- This smart feature, like the classic `~/static/control/cli.info`, depends on the BOA special shell wrapper, which is temporarily deactivated during both barracuda and octopus upgrades so it does not interfere with complex procedures that depend on the system dash shell. 
While you or your host is running a barracuda or octopus upgrade, any Drush or Composer command you execute in the limited shell account will therefore revert to the version defined in the system-wide `/root/.barracuda.cnf` file.\n\n### Example of `cli.info`:\n```\n8.1\n```\nThis version will be used by default if no instant switch files (e.g., `php83.info`) are detected.\n\n---\n\n## Drush Management in BOA\n\nDrush is the primary tool for managing Drupal sites within BOA, allowing you to perform tasks such as installing, cloning, or migrating sites between Platforms, performing database updates, clearing caches, etc. Drush integrates seamlessly with BOA’s PHP-CLI management, ensuring that the correct PHP version is always used.\n\n### Key Highlights:\n\n1. Ægir no longer removes local Drush from any platform.\n2. Site-local Drush can be invoked using `vdrush`.\n3. PHP-CLI version switching for Drush and Composer is instantaneous using the **instant switch configuration files**.\n4. Using standalone Drush versions newer than version 8 is deprecated.\n5. Drush 8 remains available as `drush8` or simply `drush`.\n6. Drush 10 is available as standalone `drush10`.\n7. Drush 11 is available as standalone `drush11`.\n8. Drush 12 or newer is available only as **site-local**, invoked via `vdrush`.\n9. It is important to review specific **caveats** for managing Drush versions further below.\n\n---\n\n### Site-Local Drush is Preserved and Fully Supported\n\nIn BOA, Ægir no longer removes the local copy of Drush from platforms during the 'Platform Verify' task. Instead, it locks permissions on the `vendor/drush` directory if present.\n\nThis change allows you to easily unlock the local Drush using a new task named 'Unlock Local Drush', available on the platform node in the Ægir control panel. This task is now a required step before you use local `vdrush` or run any updates with `composer` on the command line.\n\n#### Steps to Use Site-Local Drush:\n\n1. 
Run the 'Unlock Local Drush' task on the site's Platform in Ægir.\n2. Find the correct Drush `@site-alias` with the `drush11 aliases` command.\n3. Switch to the Platform app root where `vendor` exists using `cd`.\n4. Run `vdrush --version` or install it with `composer require drush/drush`.\n5. Use `vdrush @site-alias updbst`, `vdrush @site-alias updb`, etc.\n6. Run the 'Platform Verify' task to restore compatibility with Drush 8.\n\n---\n\n### Supported Drush Versions:\n\n- **Drush 8**: Available as `drush8` or simply `drush`. It remains the global default version for most operations, compatible with legacy Drupal versions.\n- **Drush 10**: Available as `drush10`.\n- **Drush 11**: Available as `drush11`.\n- **Drush 12**: Available only as **site-local**, invoked with `vdrush`.\n- **Drush 13**: Available only as **site-local**, invoked with `vdrush`.\n\n---\n\n## PHP-CLI and Drush Integration\n\nSince Drush relies on the active PHP-CLI version, any changes made to the PHP-CLI version will directly affect Drush operations. The PHP-CLI version can be set either by the **instant switch configuration files** or by the **default `cli.info` file**.\n\n### Example:\n\n- To make Drush use PHP 8.3, create a configuration file named `php83.info` in the `~/static/control/` directory. 
The system will automatically detect the highest available PHP version, and all Drush operations will use that version instantly.\n- If no instant switch files are detected, Drush will default to the PHP version specified in `~/static/control/cli.info`.\n\n---\n\n## Caveats\n\n- When using standalone `drush8`, `drush10`, or `drush11`, please use Drush Aliases. We no longer test running standalone Drush commands in the site directory; it’s an old habit best avoided for standalone Drush, also because it can and will clash with local Drush if one is present.\n- Note that the Drush Alias name for the site with `drush10` is different than for `drush8`: all dots in the site name are replaced with hyphens, except the last dot before the domain's final extension. Example: `drush8 @sub.domain.top.org` becomes `drush10 @sub-domain-top.org`.\n- On every Ægir / Octopus upgrade, all platforms are automatically verified and thus local Drush is by default locked again in all existing platforms.\n\n### Stop Using Standalone System Drush 10 and 11\n\nWhy is using standalone system Drush other than Drush 8 deprecated, even if still possible?\n\nAll post-8 Drush versions up to 11 could theoretically be used as standalone (on command line but not with Ægir) if their numerous dependencies matched the managed Drupal site codebase.\n\nHowever, most of the time some dependencies shared by Drupal and Drush 10+ will clash when you use Drush in the standalone, system mode, because Composer can’t track their compatibility when Drush is not included in the Drupal platform's own codebase.\n\nThis makes using standalone system versions of Drush 10+ a very frustrating experience and sometimes even impossible.\n\nPlease add local Drush to your Drupal codebase with Composer and use it instead of system-wide `drush10` and `drush11` to avoid headaches.\n\nBOA still provides standalone `drush10` and `drush11`, 
though, because we still need them to convert Drush 8-type site aliases into Drush 10+ type site aliases, but otherwise these standalone Drush versions are of little use.\n\nDue to constant dependency version updates, you could also get quite different builds of the same Drush 10+ release depending on *when* you installed it. Sometimes they will be too old and sometimes too new for the Drupal codebase in question. This makes using them as standalone a completely unpredictable mess.\n\nThat’s also why Drush 12 has been officially announced as the first version which can’t be used as standalone at all, no matter how hard you try.\n\n"
  },
  {
    "path": "docs/FAQ.md",
    "content": "# FAQ\n\n**Q: Can I use BOA to host Drupal sites outside of Ægir?**\n\n**A:** Yes, but it is an unsupported feature, so you need to figure out how to do it properly and you should be prepared that things may explode without any warning after the next BOA upgrade. All custom vhosts must reside in the master vhosts directory: `/var/aegir/config/server_master/nginx/vhost.d/` to avoid GHOST vhost detection and auto-cleanup which runs daily, but only for all Octopus instances in `/data/disk` directory tree.\n\n---\n\n**Q: Can I use BOA to host sites with different engines, like WordPress?**\n\n**A:** Yes, but it is an unsupported feature, so you need to figure out how to do it properly and you should be prepared that things may explode without any warning after the next BOA upgrade. All custom vhosts must reside in the master vhosts directory: `/var/aegir/config/server_master/nginx/vhost.d/` to avoid GHOST vhost detection and auto-cleanup which runs daily.\n\nCheck also:\n\n- [Drupal Node 1416798](https://drupal.org/node/1416798)\n- [GitHub Issue 359](https://github.com/omega8cc/boa/issues/359)\n\n---\n\n**Q: Can I install services and apps not included in BOA?**\n\n**A:** It depends. BOA uses very aggressive upgrade procedures and if it is not aware of extra services installed and running, it may even uninstall them if the system packages dependency autoclean triggers such action, so you need to watch closely what happens during and after barracuda upgrade. Note that you can specify extra packages in the special `_EXTRA_PACKAGES` variable in the `/root/.barracuda.cnf` file -- This should help, but you should still watch closely what happens during and after barracuda upgrade.\n\n---\n\n**Q: Can I call Drush from PHP scripts running via PHP-FPM (web-based requests)?**\n\n**A:** Theoretically yes, but Drush should never be available for web requests, period. 
Not because we are telling you that it is bad and ugly, but because PHP-CLI and PHP-FPM are totally separate tools for many reasons, including privilege separation, security, cascades of various limits, etc. You should use a better, proper, and secure method to run PHP, and if you need to extend or interact with Drupal via web requests, you should use the Drupal API, along with contrib or custom modules, and never attempt to call Drush from PHP-FPM.\n\n---\n\n**Q: How to increase PHP-FPM `memory_limit`?**\n\n**A:** While limits are still auto-configured, depending on available RAM and CPU cores and written in the respective PHP ini files, the only place to modify `memory_limit` manually is the line with `php_admin_value[memory_limit]` in the files shared between all PHP-FPM pools across all running PHP versions: `/opt/etc/fpm/fpm-pool-common.conf`, `/opt/etc/fpm/fpm-pool-common-legacy.conf`, `/opt/etc/fpm/fpm-pool-common-modern.conf` -- of course you need to reload all running FPM versions to make the change active, for example: `service php74-fpm reload`, `service php83-fpm reload`, etc.\nCheck also: [Drupal Comment 8689745](https://drupal.org/comment/8689745#comment-8689745)\n\nThe same applies to some other hardcoded/enforced limits:\n\n```ini\nphp_admin_value[max_execution_time] = 180\nphp_admin_value[max_input_time] = 180\nphp_admin_value[default_socket_timeout] = 180\n```\n\nNote: You can modify these files, but your changes will be overwritten on every barracuda upgrade.\n"
  },
  {
    "path": "docs/FASTTRACK.md",
    "content": "# Even Faster Site Cloning and Migration\n\nIt is now possible to speed up the already blazing fast migrations and cloning with this empty control file:\n\n`~/static/control/FastTrack.info`\n\nThis file, if it exists, will drastically reduce the number of tasks otherwise launched automatically in preparation for clone and migrate, namely:\n\n1. Both source and target platforms will no longer be verified.\n2. The site will no longer be verified before running clone or migrate.\n\nPlease carefully consider the implications, though, because there are very good reasons for these extra tasks to be launched before running clone or migrate to make sure that any issues are detected and fixed for you early and not during migration or clone, which could otherwise break the site and leave it in a state not easy to fix, especially without root access to the system.\n\nThe potential reasons to disable these extra tasks with the help of this new control file can be twofold:\n\n1. To restore default and much faster Ægir own behaviour.\n2. To help those running mass migrations to avoid running duplicate tasks.\n\nStill, it's your responsibility to run these extra verify tasks when you need to migrate or clone just a single site, but you prefer to have them run for you automatically as before, you can easily restore the previous behaviour:\n\n1. Create empty `~/static/control/ClassicTrack.info`.\n2. Delete `~/static/control/FastTrack.info`.\n"
  },
  {
    "path": "docs/FIXME.md",
    "content": "# Troubleshooting Common Ægir Workflow Issues\n\nThis guide addresses common issues that may arise when working with Ægir, Drush, and Composer, and outlines steps to resolve them.\n\n### 1. **Task Failure: Error - Declaration of `Drupal\\Core\\Logger\\LoggerChannel`**\n\nThis error typically shouldn't occur in any `site-task` if you have run `Platform Verify + Lock Drush` before executing other tasks like `site clone`, `site migrate` or `site verify`, but may appear for example if you are trying to run `Unlock Local Drush` after it was already unlocked. The `Unlock Local Drush` task is required for `site-local` Drush or Composer to work on command line. However, forgetting this step or other underlying issues may cause tasks such as `site clone`, `site migrate` or `site verify` to fail with the PHP error.\n\n**Resolution Steps:**\n\nIf a task fails due to this error, it is crucial to follow the **full recovery cycle** to bring the platform or site back into a working state. The recovery process involves the following steps:\n\n1. **Platform Verify + Lock Drush:**\n   Start by running `Verify + Lock Drush` task to ensure the codebase is ready for `site-tasks`.\n\n2. **Unlock Local Drush:**\n   Execute `Unlock Local Drush` to remove any codebase permissions locks and un-patch Drupal core.\n\n3. **Platform Verify + Lock Drush Again:**\n   After unlocking Drush, run `Platform Verify + Lock Drush` once more to finalize the recovery.\n\nThis full cycle is necessary because certain tasks in Ægir may patch or unpatch the core on the fly or adjust file permissions. When an issue like a PHP version mismatch or a codebase error occurs, the platform can go out of sync, requiring multiple steps to fully restore it.\n\nBy following this process, you ensure that the platform and site are properly aligned, allowing future Ægir tasks to succeed.\n\n"
  },
  {
    "path": "docs/GEM.md",
    "content": "# How To: Enable Ruby Gems and Node/NPM\n\nTo enable Ruby Gems support you need to initialize your account and then use the `gem` command to install Compass and any other necessary gems for your theme. It takes only 5 minutes max and is now 3x faster than before.\n\nSimilarly, to enable Node/NPM support, you need to initialize your account to auto-install a user-level NPM packages directory. Afterward, you can use NPM to install Grunt, Gulp, and/or Bower.\n\n## Security Considerations for Node/NPM\n\nSince `node` can be used to bypass Limited Shell and create a significant security risk within the BOA system, it should not be enabled on any BOA system with multiple `lshell` users. Consequently, Node/NPM support is not enabled in BOA by default. To enable it, you must create an empty control file `/root/.allow.node.lshell.cnf`. Node/NPM support on hosted BOA is available only on dedicated systems like Phantom and Cluster.\n\nPlease note that Node/NPM support, if allowed with `/root/.allow.node.lshell.cnf` file, will be enabled only on the main Ægir Octopus `lshell` account. The `client` level sub-accounts will receive their own Ruby Gems access only.\n\n## How It Works\n\nIf you want to use Ruby Gems or Node/NPM to install Grunt, Gulp, or Bower, and you previously enabled Ruby Gems support, you will need to reinitialize Ruby/NPM on your account due to security and deployment improvements made in BOA-5.4.0.\n\n1. Delete the control file `~/static/control/compass.info`.\n2. Wait until you can no longer issue the `compass --version` command (5 minutes max)\n3. Add the control file `~/static/control/compass.info` again and wait (5 minutes max)\n4. 
Proceed with the further steps as usual.\n\n**NOTE on Ruby Gems:** You must add the non-default CSS keyword to the `_XTRAS_LIST` array in your `/root/.barracuda.cnf` file and then run the `barracuda up-lts system` command before initializing Ruby Gems support in your limited shell account.\n\n**NOTE on Node/NPM:** You must add the non-default NPM keyword to the `_XTRAS_LIST` array in your `/root/.barracuda.cnf` file, create empty `/root/.allow.node.lshell.cnf` file and then run the `barracuda up-lts system` command before initializing NPM support in your limited shell account.\n\nBundler allows you to manage different gem versions per theme, making it a valuable tool for gem installation and management. It's installed for you by default.\n\nWhen you log into your SSH account, you will be presented with a helpful intro:\n\n```\n\n      ======== Welcome to the Ægir, Drush and Compass Shell ========\n\n         Type '?' or 'help' to get the list of allowed commands\n             Note that not all Drush commands are available\n\n       Use Gem and Bundler to manage all your Compass gems! Example:\n                   `gem install --conservative compass`\n\n              Use NPM to manage all your packages! Example:\n                        `npm install -g gulp`\n\n      To initialize Ruby use control file and re-login after 5 minutes\n                 `touch ~/static/control/compass.info`\n\n```\n\nTo initialize your account for Ruby Gems and Node/NPM support, follow these steps:\n\n1. Create an empty control file: `touch ~/static/control/compass.info`\n2. Log out and wait 5 minutes.\n3. Log in and install Compass: `gem install --conservative compass`\n4. Navigate to your theme directory and run `bundle install`, or manually install gems as needed:\n   ```sh\n   gem install foo_bar\n   gem install --conservative toolkit\n   gem install --conservative --version 3.0.3 compass_radix\n   ```\n5. Install Grunt: `npm install -g grunt`\n6. 
Install Gulp: `npm install -g gulp`\n7. Install Bower: `npm install -g bower`\n\nThe special control file `~/static/control/compass.info` enables Ruby Gems support for the main Ægir Octopus SSH account and for all `client` level SSH sub-accounts on your instance. Deleting this file will remove all Ruby Gems from all SSH accounts on your Ægir Octopus Instance and will remove Node/NPM support from the main account.\n\nThe non-default Node/NPM support, if allowed with `/root/.allow.node.lshell.cnf` file, is initialized using the same `~/static/control/compass.info` file, but will be enabled only on the main Octopus `lshell` account. The `client` level sub-accounts will receive their own Ruby Gems access only.\n\nSome gems may require the ability to build their native binaries during installation, which is not possible in the limited shell. When you initialize your account to support Ruby Gems, a few known problematic gems will be pre-installed automatically to mitigate these issues.\n\nIf you encounter errors when attempting to install gems via Gem or Bundler, please let us know, and we will try to add them to the list of automatically pre-installed gems.\n\nYou can easily check the list of gems you have access to with the `gem-list` command. Note that if you haven’t initialized your account yet, this command may display a legacy list of gems previously installed system-wide. Initializing your account will ensure that only locally installed Ruby and gem versions are shown.\n\nPlease note that it is not possible to use Guard in a limited shell via Drush with commands like `drush omega-guard theme`, because it attempts to open a sub-shell, which does not work with the limited shell provided by BOA.\n\nYou need to use Guard and Compass tools directly, with commands like `compass watch` or `guard start`. Ensure that Compass and Guard gems are installed first, as they are not installed by default.\n\nThe initial Ruby Gems installation may take 5 minutes. 
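Since the initialization runs in the background, you can poll for readiness instead of guessing; `wait_for` below is a hypothetical helper sketch, not a BOA command:

```sh
# Hypothetical helper: poll until a command appears in PATH, e.g. after
# the background Ruby Gems initialization completes.
wait_for() {  # usage: wait_for <command> <max-tries> <sleep-seconds>
  tries=0
  until command -v "$1" >/dev/null 2>&1; do
    tries=$((tries + 1))
    [ "$tries" -ge "$2" ] && return 1
    sleep "$3"
  done
}

# e.g. check for gem every 15 seconds, giving up after ~5 minutes:
# wait_for gem 20 15 && gem --version
```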
Remember to wait until it is complete before re-logging in. Once the installation is finished, you will be able to run the `gem --version` command. If it is still unavailable, please wait a bit longer. The process may take a little longer if you have extra SSH sub-accounts, as the system installs separate Ruby Gems and some problematic gems in every sub-account.\n\nIf `bundle install` complains that it can’t build a native extension for the gem `foobar`, but the gem is already installed and listed when you type `gem-list`, first compare the gem versions.\n\nFor example, when Ruby Gems support is initialized on your account, it installs some problematic gems that can’t be installed in a limited shell: `bluecloth`, `eventmachine`, `ffi`, `hitimes`, `http_parser.rb`, `oily_png`, and `yajl-ruby`.\n\nThe installed version may differ from the version defined in your theme's `Gemfile.lock` file. To resolve this issue, update the gem version in the lock file and run `bundle install` again. If you require a different version of the problematic gem, reinitialize Ruby Gems support on your account by deleting the control file, waiting until you can no longer issue the `compass --version` command, and proceeding with the further steps as usual.\n"
  },
  {
    "path": "docs/INSTALL.md",
    "content": "# Preparations Before Installing BOA\n\n- Make sure that IPv6 is not activated -- it's not supported yet, so BOA will disable it anyway.\n- Add your SSH keys to your VPS root -- BOA will disable password for root over SSH.\n- BOA requires minimal, supported OS, with no web/sql services installed.\n- Don't run any installer via sudo. You must be logged in as root directly.\n- Don't run any system updates or modifications before installing BOA.\n- Please read [docs/NOTES.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/NOTES.md) for other related details.\n\n# BOA Installation Procedures Chain\n\n   **Don't reboot your VM until all procedures are finalized, including post-install auto-upgrades.**\n\n   When invoked via `boa` command, it will run installation is several steps, automatically:\n\n   1. The `autoinit` phase to upgrade vendor provided OS to Devuan Daedalus\n   2. The `barracuda install` phase to install BOA system and Ægir Master\n   3. The `barracuda upgrade` phase to complete system installation\n   4. The `octopus install` phase to install your first Ægir Satellite\n   5. The `octopus upgrade` phase to enable Let's Encrypt certificate for your Ægir\n   6. 
The `barracuda upgrade` phase again to install CSF firewall and DNS cache\n\n   **NOTE!** While steps 2-5 will be visible to you in your SSH terminal (unless you use silent mode, explained further below), the last step will happen within 30 minutes, launched from cron in the background, so it's important that you don't reboot and don't use the installed Ægir before the last step is complete.\n\n   **But how will you know it's ready?** Once all procedures are finalized you will see **three (3) lines** reported by this command:\n\n   ```sh\n   boa info | grep -c Percona\n   ```\n\n   **REMEMBER: don't reboot your VM until all procedures are finalized, including post-install auto-upgrades.**\n\n   Now it's safe and recommended to reboot your server to make sure it's running the correct Linux kernel supplied by Devuan -- either via your vendor control panel or directly via accelerated system reboot:\n\n   ```sh\n   boa reboot\n   ```\n\n# Installing BOA System on a Public Server/VPS\n\n1. Configure your domain DNS to point its wildcard-enabled A record to your server IP address, and make sure it has propagated on the Internet by trying the `host server.mydomain.org` or `getent hosts server.mydomain.org` command on any other server/system.\n\n   See our DNS wildcard configuration example for reference: [http://bit.ly/UM2nRb](http://bit.ly/UM2nRb)\n\n   **NOTE!** You shouldn't use anything like \"mydomain.org\" as your hostname. It should be some **subdomain**, like \"server.mydomain.org\".\n\n2. Configure your permanent hostname on the server before running the BOA installer, even if BOA will do that for you, automatically. We recommend this step in case the host/vendor VM enforces some placeholder hostname via cloud-init or other tools on reboot.\n\n   ```sh\n   hostname -b server.mydomain.org\n   echo server.mydomain.org > /etc/hostname\n   ```\n\n3. Download and run BOA Meta Installers.\n\n   ```sh\n   wget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n   ```\n\n4. 
Prepare your system by removing `systemd` and upgrading to Devuan 5 Daedalus from any compatible Debian version -- Buster, Bullseye, or Bookworm.\n\n   ```sh\n   autoinit\n   ```\n\n   **NOTE:** You can omit this step and run the `boa` install as explained in step 5. It will record your command, run `autoinit` for you, and then run your `boa` install command automatically. Once complete, you should receive an email from the system with all output details logged.\n\n   **NOTE:** It's recommended that you simply wait 10 minutes and then log back in to inspect the autoinit log to make sure there is a line at the bottom saying: \"The system is now ready for boa install\"\n\n   ```sh\n   cat /root/.autoinit.log\n   ```\n\n   There's also a verbose log of what happened, if you are interested:\n\n   ```sh\n   cat /root/.autoinit-verbose.log\n   ```\n\n   If the verbose log shows a warning at the end mentioning a `grub` configuration error (for example, an inability to configure `grub-pc`), you will need to fix GRUB before proceeding to the BOA stack installation. To do this, run:\n\n   ```sh\n   DEBIAN_FRONTEND=dialog dpkg --configure grub-pc\n   dpkg --configure -a\n   ```\n\n   Use the dialog to select the appropriate device (usually `/dev/sda`); once it completes successfully, you can proceed with the BOA installation steps.\n\n5. Install Barracuda and Octopus.\n\n   **NOTE:** Always start with a screen session!\n\n   ```sh\n   screen\n   ```\n\n   To make sure that you are using all available arguments in the correct order, please always check the built-in how-to:\n\n   ```sh\n   boa help\n   ```\n\n   You must specify the install version with `in-lts` and the kind with `public`, plus your `hostname` and `email` address, as shown further below.\n\n   Specifying the Octopus `username` is optional. 
It will use `o1` if empty.\n\n   The last `{percona-8.4|newrelickey|php-8.5|php-min|php-max|nodns}` part is optional and can be used either to install a Percona version other than the default 5.7 (can be `percona-8.0` or `percona-8.4`), or the New Relic Apps Monitor (you should replace the `newrelickey` keyword with a valid license key), or to define a single PHP version to install and use for both the Ægir Master and Satellite instances.\n\n   The `nodns` option allows skipping DNS and SMTP checks.\n\n   When `php-min` is defined, 3 versions will be installed: `8.5`, `8.4`, `8.3`, with `8.4` configured as the default.\n\n   When `php-max` is defined, all supported versions will be installed, with `8.4` configured as the default.\n\n   You can later install or modify the PHP versions active on your system during a `barracuda` upgrade with commands like:\n\n   `barracuda php-idle disable` -- disables versions not used by any site on the system\n\n   `barracuda php-idle enable` -- re-enables and re-builds versions previously disabled\n\n   `barracuda up-lts php-8.5` -- forces the system to use only a single version (will cause brief downtime for sites)\n\n   `barracuda up-lts php-max` -- installs all supported versions if not installed before\n\n   `barracuda up-lts php-min` -- installs PHP 8.5, 8.4, 8.3, and uses 8.4 by default\n\n   `barracuda up-lts percona-8.0` -- runs the upgrade to Percona 8.0 (production ready)\n\n   `barracuda up-lts percona-8.4` -- runs the upgrade to Percona 8.4 (production ready)\n\n   If you wish to later define your own set of installed PHP versions, you can do so by modifying variables in the `/root/.barracuda.cnf` file, where you can find `_PHP_MULTI_INSTALL`, `_PHP_CLI_VERSION`, and `_PHP_FPM_VERSION` -- note that the `_PHP_SINGLE_INSTALL` variable must be left empty so it does not override the other related variables. 
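\n\n   For example, a hand-edited set in `/root/.barracuda.cnf` could look like this (hypothetical values -- the exact format should match the existing entries already present in your file):\n\n   ```sh\n   _PHP_SINGLE_INSTALL=\n   _PHP_MULTI_INSTALL=\"8.3 8.4 8.5\"\n   _PHP_CLI_VERSION=8.4\n   _PHP_FPM_VERSION=8.4\n   ```\n\n   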
However, you also need to add dummy entries for versions not yet installed or used to the `~/static/control/multi-fpm.info` file of any Octopus instance, because otherwise `barracuda` will ignore versions not used yet and automatically remove them from `_PHP_MULTI_INSTALL` on upgrade. These dummy entries should look like this:\n\n   ```sh\n   place.holder1.dont.remove 7.3\n   place.holder2.dont.remove 8.0\n   place.holder3.dont.remove 5.6\n   ```\n\n   The same logic protects existing and used versions from being removed even if they are not listed in the `_PHP_MULTI_INSTALL` variable (they will be re-added automatically if needed).\n\n   You can enable much more verbose reporting in the console during installation and upgrades for either barracuda or octopus (or both with -boa-) by adding these control files before running the installation/upgrade:\n\n   ```sh\n   touch /root/.debug-barracuda-installer.cnf\n   touch /root/.debug-octopus-installer.cnf\n   touch /root/.debug-boa-installer.cnf\n   ```\n\n   **NOTE:** You should never use `/root/.debug-barracuda-installer.cnf` unless you need to debug barracuda without running the Ægir Master Instance upgrades, because this file automatically turns off updating the system Drush and the Ægir Master Instance on a barracuda upgrade.\n\n   Interestingly, while `/root/.debug-boa-installer.cnf` enables debugging mode for both barracuda and octopus, it will not prevent Ægir Master Instance and Drush updates.\n\n   ### Examples:\n\n   - Barracuda and Octopus with 3 PHP versions in silent non-interactive mode\n     ```sh\n     boa in-lts public server.mydomain.org my@email o1 php-min silent\n     ```\n\n   - Barracuda and Octopus with all 12 PHP versions\n     ```sh\n     boa in-lts public server.mydomain.org my@email o1 php-max\n     ```\n\n   - Barracuda and Octopus with 1 PHP version\n     ```sh\n     boa in-lts public server.mydomain.org my@email o1 php-8.5\n     ```\n\n   - Barracuda and Octopus with Percona 8.4 and 3 PHP 
versions\n     ```sh\n     boa in-lts public server.mydomain.org my@email o1 percona-8.4\n     ```\n\n   - Barracuda and Octopus with New Relic and 3 PHP versions\n     ```sh\n     boa in-lts public server.mydomain.org my@email o1 newrelickey\n     ```\n\n   - Barracuda without Octopus with 3 PHP versions in silent non-interactive mode\n     ```sh\n     boa in-lts public server.mydomain.org my@email system\n     ```\n\n   **NOTE:** Since BOA no longer installs all bundled Ægir platforms during the initial system installation, you will need to add some keywords to `~/static/control/platforms.info` and run an Octopus upgrade to have these platforms added, as explained in the docs you can find in the `~/static/control/README.txt` file within your Octopus account, or online at [docs/PLATFORMS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/PLATFORMS.md)\n\n# Post-install auto-upgrade and reboot\n\n   **Don't reboot your VM until all procedures are finalized, including post-install auto-upgrades.**\n\n   When invoked via the `boa` command, it runs the installation in several steps, automatically.\n\n   **But how will you know it's ready?** Once all procedures are finalized, you will see **three (3) lines** reported by this command:\n\n   ```sh\n   boa info | grep -c Percona\n   ```\n\n   **REMEMBER: don't reboot your VM until all procedures are finalized, including post-install auto-upgrades.**\n\n   Now it's safe and recommended to reboot your server to make sure it's running the correct Linux kernel supplied by Devuan -- either via your vendor control panel or directly via an accelerated system reboot:\n\n   ```sh\n   boa reboot\n   ```\n\n# Installing More Octopus Instances\n\nYou can add more Octopus instances easily:\n\n```sh\nboa in-octopus my@email o2 lts\n```\n\nLike above, but in silent non-interactive mode:\n\n```sh\nboa in-octopus my@email o2 lts silent\n```\n\n# Installing BOA System on Localhost (needs testing)\n\n1. 
Please read [docs/NOTES.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/NOTES.md).\n\n2. Download and run the BOA Meta Installers.\n\n   ```sh\n   wget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n   ```\n\n3. Install Barracuda and Octopus.\n\n   You must specify the install version with `in-lts`, the kind with `local`, and your `email` address, as shown below. For local installs, you don't need to specify a hostname or an Octopus username.\n\n   You can also specify the PHP version to install, as shown in the examples below.\n\n   - Barracuda and Octopus\n     ```sh\n     boa in-lts local my@email\n     ```\n\n   - Barracuda and Octopus with 12 PHP versions\n     ```sh\n     boa in-lts local my@email php-max\n     ```\n\n   - Barracuda and Octopus with 3 PHP versions\n     ```sh\n     boa in-lts local my@email php-min\n     ```\n\n   - Barracuda and Octopus with a single PHP version\n     ```sh\n     boa in-lts local my@email php-8.5\n     ```\n"
  },
  {
    "path": "docs/IPv6.md",
    "content": "# BOA disables IPv6 to avoid dual-stack problems\n\n## **Understanding IPv4 and IPv6**\n\n**IPv4** (Internet Protocol version 4) and **IPv6** (Internet Protocol version 6) are protocols used for identifying devices on a network through unique IP addresses. While IPv4 has been the dominant protocol for decades, the rapid growth of the internet has led to the adoption of IPv6 to address the limitations of IPv4, particularly the exhaustion of available IPv4 addresses.\n\n- **IPv4** uses 32-bit addresses, allowing for approximately 4.3 billion unique addresses.\n- **IPv6** uses 128-bit addresses, enabling a vastly larger number of unique addresses.\n\n## **Why IPv6 Might Affect Connection Speeds**\n\nWhile IPv6 offers numerous advantages, such as a larger address space and improved routing efficiency, its implementation can sometimes lead to unexpected issues that impact connection speeds. Here's how:\n\n### **1. Dual-Stack Configuration Complexity**\n\nMany modern systems and networks operate in a **dual-stack** mode, supporting both IPv4 and IPv6 simultaneously. While this is beneficial for compatibility, it introduces complexity:\n\n- **Connection Attempts:** When a device tries to connect to a server, it may attempt to establish an IPv6 connection first if both protocols are available.\n- **Fallback Mechanism:** If the IPv6 connection fails or is slow to respond, the system will then attempt an IPv4 connection. This fallback can introduce delays, especially if IPv6 is not properly configured.\n\n### **2. Misconfiguration or Incomplete IPv6 Setup**\n\nIf IPv6 is not fully or correctly configured on the server or within the network infrastructure, several issues can arise:\n\n- **DNS Resolution Issues:** Domain Name System (DNS) records may include both IPv4 (A records) and IPv6 (AAAA records) addresses. 
If the AAAA records point to misconfigured or unreachable IPv6 addresses, devices may struggle to establish a connection.\n\n- **Routing Problems:** Incorrect routing settings for IPv6 can lead to packets being misrouted or dropped, causing connection attempts to hang until they time out.\n\n- **Firewall Restrictions:** Firewalls not properly set up for IPv6 traffic can inadvertently block legitimate connection attempts, forcing the system to retry via IPv4.\n\n### **3. Latency and Timeouts**\n\nWhen a system attempts to connect via IPv6 and encounters issues (such as unreachable IPv6 routes or blocked traffic), it may experience:\n\n- **Increased Latency:** The time taken to attempt and fail the IPv6 connection before reverting to IPv4 can make the overall connection appear slower.\n\n- **Timeout Delays:** Prolonged attempts to establish an IPv6 connection can result in noticeable delays before the fallback to IPv4 occurs.\n\n## **Benefits of Disabling IPv6**\n\nBy **disabling IPv6**, we streamline the connection process, ensuring that devices connect using the more reliable and well-configured IPv4 protocol. Here's how this action can improve connection speeds:\n\n### **1. Eliminates IPv6 Connection Attempts**\n\nDisabling IPv6 ensures that all connection attempts are made directly via IPv4, bypassing any potential issues associated with IPv6 configurations. This eliminates the need for the system to attempt and fail IPv6 connections before falling back to IPv4, thereby reducing connection delays.\n\n### **2. Simplifies Network Configuration**\n\nFocusing solely on IPv4 simplifies the network setup, making it easier to manage and troubleshoot. It reduces the chances of misconfigurations affecting connection stability and performance.\n\n### **3. Enhances Reliability**\n\nWith only IPv4 in use, the network relies on a well-established and widely supported protocol. 
This can lead to more consistent and reliable connection performance, especially in environments where IPv6 support may be incomplete or problematic.\n\n## **Potential Downsides of Disabling IPv6**\n\nWhile disabling IPv6 can provide immediate performance improvements in certain scenarios, it's important to consider the long-term implications:\n\n- **Future-Proofing:** IPv6 adoption is steadily increasing. Disabling it now may require re-enabling and configuring it correctly in the future to accommodate growing internet infrastructure and IPv6-dependent services.\n\n- **Access to IPv6-Only Services:** Some modern services and platforms are beginning to adopt IPv6 exclusively. Disabling IPv6 might restrict access to these services.\n\n- **Efficiency Losses:** IPv6 offers benefits like simplified address assignment and potentially more efficient routing. Disabling it means missing out on these advantages.\n\n## **Conclusion and Recommendation**\n\n**Disabling IPv6** can be a **practical short-term solution** to address and mitigate **initially slow connection issues** caused by:\n\n- Misconfigurations in IPv6 settings\n- DNS resolution problems with AAAA records\n- Routing and firewall complexities\n\nHowever, for a **long-term strategy**, it's advisable to:\n\n1. **Properly Configure IPv6:** Ensure that IPv6 is correctly set up on your servers and network infrastructure to leverage its benefits without compromising performance.\n\n2. **Monitor and Test:** Regularly monitor IPv6 connectivity and performance to identify and resolve issues proactively.\n\n3. **Educate and Plan for Transition:** Prepare for a gradual transition to IPv6, aligning with evolving internet standards and ensuring compatibility with IPv6-only services.\n\nBy addressing the root causes of IPv6-related delays, you can enjoy the advantages of both IPv4 and IPv6 without sacrificing connection speeds or reliability.\n"
  },
  {
    "path": "docs/MAJORUPGRADE.md",
    "content": "# How To: Run a Major OS Upgrade\n\nUnlike non-major system upgrades, which can be run with **BARRACUDA** using the Self-Upgrade How To [docs/SELFUPGRADE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SELFUPGRADE.md), a major OS upgrade requires different procedures explained below. There are two options: Modern, which is very reliable and easy to use but gives you only two supported upgrade paths—to Devuan Daedalus, or Devuan Excalibur, and Classic, which allows you to upgrade just to the next supported Debian or Devuan OS version.\n\nIf you don’t mind several Classic procedures to get to the latest supported Devuan version, or if you wish to continue running your BOA on Debian (with systemd removed by yourself or by older BOA versions), then the Classic procedure is for you.\n\nBut if you prefer to have all major OS upgrade multi-steps and versions automated to get to the latest supported Devuan version, then the Modern procedure is for you.\n\n## How To: Launch Modern Major OS Auto-Upgrade\n\nYou can easily upgrade your system from any supported Debian version, starting with Debian Jessie, to Devuan Daedalus or Devuan Excalibur, which are both systemd-free equivalents of Debian Bookworm and Debian Trixie. 
You can upgrade from Devuan Beowulf or Chimaera to the recommended Devuan Daedalus using the same procedure.\n\n**NOTE:** While by default BOA installs Percona 5.7 on Devuan Daedalus, it will install Percona 8.4 on Devuan Excalibur.\n\n**NOTE:** You can upgrade from Percona 5.7 to Percona 8.0, and then from Percona 8.0 to Percona 8.4, only on Devuan Daedalus.\n\n**NOTE:** You can't upgrade from Percona 5.7 to Percona 8.4 directly, so you first run `barracuda up-lts system percona-8.0` and then `barracuda up-lts system percona-8.4`.\n\nPlease follow the required steps closely!\n\nFirst, update your BOA Meta Installers with:\n\n```sh\nwget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n```\n\nStart with a quick barracuda system upgrade for a smooth experience later.\n\n```sh\nscreen\nbarracuda up-lts system\n```\n\n**1. CREATE A FRESH VM BACKUP SNAPSHOT**\n\n**2. TEST the freshly created backup by using it to create a new test VM**\n\n**3. DO NOT CONTINUE UNTIL IT WORKS**\n\nReboot the server to make sure there are no issues with the boot process.\n\n```sh\nboa reboot\n```\n\nIf the reboot worked and there are no issues, you are ready for the automated magic...\n\n```sh\n# Recommended:\ntouch /root/.run-to-daedalus.cnf\n\n# Or (still too new to run in production):\ntouch /root/.run-to-excalibur.cnf\n\nservice clean-boa-env start\n```\n\nOnce started, the system will launch a series of `barracuda up-lts system` runs and reboots until it migrates any supported Debian or Devuan version to Devuan Daedalus or Devuan Excalibur.\n\n### Caveats for Unreliable Boot Process on Some Hosts Like Linode\n\nLinode (now owned by Akamai) is known for an unreliable system boot process. 
Unlike many other hosts, it relies on the Lassie watchdog service when you issue a reboot from the system rather than from your Linode control panel.\n\nUnfortunately, the Lassie watchdog may fail to mount the filesystem when it was previously restored from a backup, and the boot halts until you click the \"Reboot\" link in your control panel for the particular VPS. Worse yet, using the \"Reboot\" link doesn't always help either, and you have to use the \"Power Off\" and then \"Power On\" links.\n\nSometimes even those steps don't bring your server back with a successful boot, so unless you are prepared to try your luck with boot process management in the LISH console, your only rescue option will be restoring your VPS from the Linode backup -- assuming you enabled backups and created a fresh one before attempting the major upgrade.\n\n### Note on Legacy Systems\n\nServers running Debian Jessie or Debian Stretch must auto-upgrade to Devuan Chimaera first by using `touch /root/.run-to-chimaera.cnf` instead -- they cannot run the auto-upgrade to Devuan Daedalus directly. Once on Chimaera, they can auto-upgrade to Devuan Daedalus with `touch /root/.run-to-daedalus.cnf`.\n\n## How To: Launch Classic Major OS Upgrade\n\nYou can easily upgrade your BOA system from any supported Debian or Devuan version, starting with Debian Jessie, to any supported newer version.\n\nThe key difference between the Classic and Modern procedures is that the automated Modern procedure supports upgrades only to Chimaera, Daedalus, or Excalibur, while the Classic procedure allows you to select the target system flavor and version according to your preference. 
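\n\nAs a sketch, a Classic upgrade from Devuan Chimaera to Daedalus (using one of the control variables listed further below) could look like this:\n\n```sh\necho \"_CHIMAERA_TO_DAEDALUS=YES\" >> /root/.barracuda.cnf\nscreen\nbarracuda up-lts system\n```\n\n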
However, we do not recommend running BOA on Debian anymore, as it is no longer regularly tested.\n\nPlease follow the required steps closely!\n\nFirst, update your BOA Meta Installers with:\n\n```sh\nwget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n```\n\nThe procedure discussed above automates major OS upgrades by running them in the multi-step cycle, but you can still run the major OS upgrade with classic `barracuda up-lts system` command if you prefer, after adding the respective variable to `/root/.barracuda.cnf`.\n\n### Devuan to Devuan Major OS Upgrades\n\n- Devuan Daedalus => upgrade to Excalibur with `_DAEDALUS_TO_EXCALIBUR=YES`\n- Devuan Chimaera => upgrade to Daedalus with `_CHIMAERA_TO_DAEDALUS=YES`\n- Devuan Beowulf => upgrade to Chimaera with `_BEOWULF_TO_CHIMAERA=YES`\n\n### Debian to Devuan Major OS Upgrades\n\n- Debian 13 Trixie => upgrade to Excalibur with `_TRIXIE_TO_EXCALIBUR=YES`\n- Debian 12 Bookworm => upgrade to Daedalus with `_BOOKWORM_TO_DAEDALUS=YES`\n- Debian 11 Bullseye => upgrade to Chimaera with `_BULLSEYE_TO_CHIMAERA=YES`\n- Debian 10 Buster => upgrade to Beowulf with `_BUSTER_TO_BEOWULF=YES`\n- Debian 9 Stretch => upgrade to Beowulf with `_STRETCH_TO_BEOWULF=YES`\n- Debian 8 Jessie => upgrade to Beowulf with `_JESSIE_TO_BEOWULF=YES`\n\n### Debian to Debian Major OS Upgrades\n\n- Debian 12 Bookworm => upgrade to Trixie with `_BOOKWORM_TO_TRIXIE=YES`\n- Debian 11 Bullseye => upgrade to Bookworm with `_BULLSEYE_TO_BOOKWORM=YES`\n- Debian 10 Buster => upgrade to Bullseye with `_BUSTER_TO_BULLSEYE=YES`\n- Debian 9 Stretch => upgrade to Buster with `_STRETCH_TO_BUSTER=YES`\n- Debian 8 Jessie => upgrade to Stretch with `_JESSIE_TO_STRETCH=YES`\n\n### NOTE on unused PHP versions automatic deactivation\n\nBoth the automated major OS upgrade tools and the classic manual major OS upgrade with barracuda will automatically disable all installed but not used in any hosted site PHP versions, effectively enforcing an otherwise optional procedure 
normally triggered on a barracuda upgrade if the control file exists: `/root/.allow-php-multi-install-cleanup.cnf`.\n\nThis does not affect the migration/upgrade from Debian Bullseye to Devuan Chimaera (or newer), though, since that path doesn't involve re-installing all existing PHP versions as normally required in other major upgrades -- a step that otherwise significantly extends the procedure for no good reason (unused PHP versions should be skipped and deactivated).\n\nTo re-install disabled PHP versions after all upgrades are completed, run this command:\n\n```sh\nbarracuda php-idle enable\n```\n\nTo disable unused PHP versions again, run this command:\n\n```sh\nbarracuda php-idle disable\n```\n"
  },
  {
    "path": "docs/MIGRATE.md",
    "content": "# How To: Migrate All Sites Between Remote BOA\n\nWhile [docs/REMOTE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/REMOTE.md) provides a how-to for per-site migration between remote Octopus instances, it depends on some assumptions, namely:\n\n1. Remote Octopus instance must already exist\n2. Remote Octopus instance must use the same system username\n3. You need to either add proxy manually or hurry to update DNS\n4. There is no batch mode\n\nNew BOA tool (xboa), which is expected to mature into a more sophisticated Swiss Army Knife for BOA, resolves all those problems very easily.\n\nThe only requirement is that the remote BOA server should be installed with the same release/version. No Octopus instance is needed on the target system prior to migration.\n\nIt's a very safe and reliable (used in production) method when you need to:\n\n1. Upgrade to a newer major OS version without the fear that things will totally explode when running the system upgrade 'in place', like in this example:\n   [https://github.com/omega8cc/boa/issues/627](https://github.com/omega8cc/boa/issues/627)\n\n2. Move to a different provider without any visible interruption to your hosted sites visitors, especially when you have so many sites, so manual procedure is not an option.\n\n3. Just change the machine powering your BOA, magically, on the fly.\n\n## Steps to Follow\n\nThe 'source-host' is a placeholder for the source system FQDN hostname.\nThe 'target-host' is a placeholder for the target system FQDN hostname.\nThe 'source-ip' is a placeholder for the source system IP address.\nThe 'target-ip' is a placeholder for the target system IP address.\n\nWhile it is really easy when you have some experience with the procedure, we don't recommend using it on any live system without prior practicing a bit on test VPS instances.\n\nStill scared? We can help! 
Let us know via: [https://omega8.cc/sales](https://omega8.cc/sales)\n\n## Before You Begin\n\nFor Drupal 6 based sites which are configured to block IPs, you may need to whitelist source-ip at `/admin/user/rules` first, by adding an Allow rule of the Host rule type. Otherwise, the site may block its old IP address, and you will be forced to remove it via Chive from the `{access}` table.\n\n## On the Target Host\n\n```sh\necho \"source-ip # Legacy Proxy\" >> /etc/csf/csf.allow\necho \"source-ip # Legacy Proxy\" >> /etc/csf/csf.ignore\ncsf -ra\n```\n\n## On the Source Host\n\n```sh\nxboa pre-mig source-host\n```\n\n## On the Target Host\n\n```sh\nxboa pre-mig source-host\n```\n\n## On the Source Host\n\n### Test connection to target\n\n```sh\nssh root@target-ip\nexit\n```\n\n### Enable config_readonly/site_readonly globally\n\n```sh\ncp -af /data/conf/global/global-extra.inc /data/conf/global/global-extra.inc.bak\necho >> /data/conf/global/global-extra.inc\necho \"\\$settings['config_readonly'] = TRUE;\" >> /data/conf/global/global-extra.inc\necho \"\\$conf['site_readonly'] = 1;\" >> /data/conf/global/global-extra.inc\necho >> /data/conf/global/global-extra.inc\ngrep site_readonly /data/conf/global/global-extra.inc\n```\n\n## On the Source Host\n\n```sh\nrm -f /data/disk/o1/src/*.sql\nrm -f /data/disk/o1/log/*.pid\nxboa transfer shared target-ip\nxboa create o1 target-ip\nxboa pretransfer o1 target-ip\n```\n\n## On the Target Host\n\n```sh\nservice cron stop\nchmod 644 /data/all/cpuinfo\n# then wait 5 minutes\n```\n\n## On the Source Host\n\n```sh\nxboa export o1 target-ip\nxboa transfer o1 target-ip\nxboa transfer shared target-ip\n```\n\n## On the Target Host\n\n```sh\nln -sfn $(which websh) /bin/sh\nln -sfn $(which websh) /usr/bin/sh\nls -la /bin/sh\nxboa import o1 target-ip\nservice nginx reload\nxboa post-mig\nservice cron start\n```\n\n## On the Source Host\n\n```sh\nxboa proxy o1 target-ip\nservice nginx reload\nxboa post-mig\n```\n\n## Optional Target Account 
Rename Mode (`o2`)\n\nThe `xboa` tool supports an optional 4th argument which lets you migrate from one Octopus account name to a different account name on the target host.\n\nExample:\n\n```sh\nxboa create o1 target-ip o2\nxboa pretransfer o1 target-ip o2\nxboa transfer o1 target-ip o2\nxboa import o1 target-ip o2\nxboa proxy o1 target-ip o2\n```\n\nIn this form:\n\n- `o1` = source account name on the source host\n- `o2` = target account name on the target host\n\nThis is useful when the same username is not available (or not desired) on the target system.\n\n### Notes\n\n- `shared` transfer does not use `o2` because it is not account-specific.\n- When using rename mode, make sure you use the same `o2` value consistently for `create`, `pretransfer`, `transfer`, `import` and `proxy`.\n- The target account (`o2`) should not already exist before `xboa create ... o2`.\n\n## Important: `static/files` Symlink Handling During Transfer\n\nSome BOA systems store account-level `static/files` as a normal directory, while others may have it as a symlink to attached/extra storage.\n\nAt the same time, many site-level paths such as `sites/*/files` and `sites/*/private` are symlinks that point into the account `static/files` tree.\n\nDuring `xboa transfer`, BOA now handles this safely as a special case:\n\n1. It preserves symlinks globally during account transfer (so site symlinks remain symlinks).\n2. It excludes `static/files` from the main account rsync pass.\n3. It resolves the source `static/files` path (directory or symlink target).\n4. It creates `static/files` on the target as a real local directory.\n5. 
It syncs the resolved `static/files` content separately.\n\nThis keeps the expected BOA/Aegir site layout intact while normalizing the target account layout.\n\n### Why this matters\n\nDo **not** use `rsync -L` / `--copy-links` for the whole account migration, because that would dereference all symlinks and break the expected layout.\n\n## Optional Verification (Recommended for Large/Legacy Accounts)\n\nAfter `xboa transfer` (and before/after `xboa import`), you can verify the layout on the target host:\n\n```sh\n# account-level static/files should be a real directory\nif [ -d /data/disk/o1/static/files ] && [ ! -L /data/disk/o1/static/files ]; then\n  echo OK_static_files_real_dir\nfi\n\n# site-level files/private should remain symlinks\nfind /data/disk/o1/static/platforms -path '*/sites/*/files' -type l | head\nfind /data/disk/o1/static/platforms -path '*/sites/*/private' -type l | head\n```\n\nIf you used rename mode (`o2`), replace `/data/disk/o1` with `/data/disk/o2` in the verification commands above.\n"
  },
  {
    "path": "docs/MODULES.md",
    "content": "```ini\nThere are some useful and/or performance related modules\nadded to all 6.x and 7.x platforms -- even to your custom\nplatforms created in the ~/static directory tree.\n\nSome core and contrib modules are either enabled or disabled\nby default, by running weekly (on Saturday) maintenance monitor.\n\nNOTE: You can disable this feature with _MODULES_FIX=NO in the\n      standard Barracuda configuration file: /root/.barracuda.cnf\n\nThere are also modules supported by Octopus, but not bundled\nby default and/or not enabled.\n\nSome modules require custom rewrites on the web server level,\nbut since there is no .htaccess available/used in Nginx,\nwe have added all required rewrites and associated supported\nconfiguration settings on the system level. This is the real\nmeaning of [S]upported flag here.\n\nNote that while some of them are enabled by default on initial\ninstall of \"blank\" site in the supported platform, they are\nnot forced as enabled by the running weekly maintenance monitor,\nso we marked them as [S]oft[E]nabled.\n\nHere is a complete list with corresponding flags for every\nmodule/theme: [S]upported, [B]undled, [F]orce[E]nabled,\n[S]oft[E]nabled or [F]orce[D]isabled. [NA] means that\nthis module is used without the need to enable it.\n\nNOTE: Both [F]orce[E]nabled and [F]orce[D]isabled list can be skipped\n      with _MODULES_FIX=NO in /root/.barracuda.cnf (default is YES)\n      However, this procedure is now smart enough to check if the module\n      is defined as required by any other module or feature and will\n      skip such module automatically, to avoid disabling innocent modules\n      via feature or any other dependency. 
You can also use _MODULES_SKIP\n      variable to list modules which should never be disabled by\n      the running weekly maintenance agent.\n\nSupported core version is listed for every module or theme\nas [D6] and/or [D7].\n\nContrib [S]upported:\n\n ais ------------------------ [D7] ------ [S]\n ckeditor ------------------- [D6,D7] --- [S]\n fbconnect ------------------ [D6,D7] --- [S]\n fckeditor ------------------ [D6] ------ [S]\n imageapi_optimize ---------- [D6,D7] --- [S] when IMG XTRAS is installed\n imagecache ----------------- [D6,D7] --- [S]\n imagecache_external -------- [D6,D7] --- [S]\n responsive_images ---------- [D7] ------ [S]\n tinybrowser ---------------- [D6,D7] --- [S]\n tinymce -------------------- [D6] ------ [S]\n wysiwyg_spellcheck --------- [D6,D7] --- [S]\n\nContrib [S]upported and [B]undled:\n\n adminer -------------------- [D7] --------- [S] [B]\n advagg --------------------- [D6,D7] ------ [S] [B]\n autoslave ------------------ [D7] --------- [S] [B]\n blockcache_alter ----------- [D6,D7] ------ [S] [B]\n boost ---------------------- [D6,D7] ------ [S] [B]\n cache_consistent ----------- [D7] --------- [S] [B]\n cdn ------------------------ [D6,D7] ------ [S] [B]\n config_perms --------------- [D6,D7] ------ [S] [B]\n css_emimage ---------------- [D6,D7] ------ [S] [B]\n d7security_client ---------- [D7] --------- [S] [B]\n dbtuner -------------------- [D6] --------- [S] [B]\n display_cache -------------- [D7] --------- [S] [B]\n entity_print --------------- [D7] --------- [S] [B]\n esi ------------------------ [D6,D7] ------ [S] [B]\n file_resup ----------------- [D7] --------- [S] [B]\n flood_control -------------- [D7] --------- [S] [B]\n force_password_change ------ [D6,D7] ------ [S] [B]\n fpa ------------------------ [D6,D7] ------ [S] [B]\n httprl --------------------- [D6,D7] ------ [S] [B]\n js ------------------------- [D6,D7] ------ [S] [B]\n login_security ------------- [D6,D7] ------ [S] [B]\n nocurrent_pass 
------------- [D7] --------- [S] [B]\n panels_content_cache ------- [D6,D7] ------ [S] [B]\n phpass --------------------- [D6] --------- [S] [B]\n private_upload ------------- [D6] --------- [S] [B]\n readonlymode --------------- [D6-D10] ----- [S] [B]\n reroute_email -------------- [D6,D7] ------ [S] [B]\n securesite ----------------- [D6,D7] ------ [S] [B]\n session_expire ------------- [D6,D7] ------ [S] [B]\n site_verify ---------------- [D6,D7] ------ [S] [B]\n speedy --------------------- [D7] --------- [S] [B]\n taxonomy_edge -------------- [D6,D7] ------ [S] [B]\n variable_clean ------------- [D6,D7] ------ [S] [B]\n vars ----------------------- [D7] --------- [S] [B]\n views_accelerator ---------- [D7] --------- [S] [B]\n views_cache_bully ---------- [D6,D7] ------ [S] [B]\n views_content_cache -------- [D6,D7] ------ [S] [B]\n views404 ------------------- [D6,D7] ------ [S] [B]\n\nContrib [F]orce[E]nabled\n\n entitycache ---------------- [D7] --------- [S] [B] [FE] unless entitycache_dont_enable = TRUE\n robotstxt ------------------ [D6,D7] ------ [S] [B] [FE] static file is generated in sites/foo.com/files/robots.txt\n\nCore [F]orce[D]isabled:\n\n cookie_cache_bypass -------- [D6] -------------- [FD]\n dblog ---------------------- [D6,D7] ----------- [FD]\n syslog --------------------- [D6,D7] ----------- [FD]\n\nContrib [F]orce[D]isabled\n\n backup_migrate ------------- [D6,D7] ----------- [FD]\n coder ---------------------- [D6,D7] ----------- [FD]\n css_gzip ------------------- [D6] -------------- [FD]\n devel ---------------------- [D6,D7] ----------- [FD]\n filefield_nginx_progress --- [D7] -------------- [FD]\n hacked --------------------- [D6,D7] ----------- [FD]\n javascript_aggregator ------ [D6] -------------- [FD]\n l10n_update ---------------- [D6,D7] ----------- [FD]\n linkchecker ---------------- [D6,D7] ----------- [FD]\n memcache ------------------- [D6,D7] ----------- [FD]\n memcache_admin ------------- [D6,D7] ----------- [FD]\n 
performance ---------------- [D6,D7] ----------- [FD]\n poormanscron --------------- [D6] -------------- [FD]\n search_krumo --------------- [D6,D7] ----------- [FD]\n security_review ------------ [D6,D7] ----------- [FD]\n site_audit ----------------- [D7] -------------- [FD]\n stage_file_proxy ----------- [D6,D7] ----------- [FD]\n supercron ------------------ [D6] -------------- [FD]\n varnish -------------------- [D6,D7] ----------- [FD]\n watchdog_live -------------- [D6,D7] ----------- [FD]\n xhprof --------------------- [D6,D7] ----------- [FD]\n\nContrib [NA]:\n\n cache_backport ------------- [D6] --------- [S] [B] [NA]\n redis ---------------------- [D6-D10] ----- [S] [B] [NA]\n\nContrib [S]oft[E]nabled:\n\n admin ---------------------- [D6,D7] --- [S] [B] [SE]\n rubik ---------------------- [D6,D7] --- [S] [B] [SE]\n\nCore [F]orce[E]nabled:\n\n path_alias_cache ----------- [D6] -------------- [FE]\n\nDrush [E]xtensions [M]aster [S]atellite:\n\n clean_missing_modules ------ [D6,D7] --- [S] [B] [EM,ES]\n drupalgeddon --------------- [D7] ------ [S] [B] [EM,ES]\n drush_ecl ------------------ [D7] ------ [S] [B] [EM,ES]\n registry_rebuild ----------- [D6,D7] --- [S] [B] [EM,ES]\n safe_cache_form_clear ------ [D7] ------ [S] [B] [EM,ES]\n security_review ------------ [D6,D7] --- [S] [B] [EM,ES]\n utf8mb4_convert ------------ [D7] ------ [S] [B] [EM,ES]\n\nProvision [E]xtensions [M]aster [S]atellite:\n\n provision_boost ------------ [D7] ------ [S] [B] [EM,ES]\n\nHostmaster [E]xtensions [M]aster [S]atellite:\n\n aegir_objects -------------- [D7] ------ [S] [B] [FE] [ES]\n environment_indicator ------ [D7] ------ [S] [B] [FE] [ES]\n hosting_civicrm ------------ [D7] ------ [S] [B] [FE] [ES]\n hosting_custom_settings ---- [D7] ------ [S] [B] [FE] [ES]\n hosting_deploy ------------- [D7] ------ [S] [B] [FE] [ES]\n hosting_git ---------------- [D7] ------ [S] [B]      [ES]\n hosting_le ----------------- [D7] ------ [S] [B] [FE] [ES]\n hosting_remote_import ------ 
[D7] ------ [S] [B]      [ES]\n hosting_site_backup_manager  [D7] ------ [S] [B] [FE] [ES]\n hosting_tasks_extra -------- [D7] ------ [S] [B] [FE] [ES]\n idna_convert --------------- [D7] ------ [S] [B] [FE] [ES]\n revision_deletion ---------- [D7] ------ [S] [B] [FE] [ES]\n userprotect ---------------- [D7] ------ [S] [B] [FE] [ES]\n```\n"
  },
  {
    "path": "docs/MYQUICK.md",
    "content": "# Super Fast Site Cloning and Migration\n\nIt is now possible to enable blazing fast migrations and cloning, even for sites with complex and large databases, using this control file:\n\n`~/static/control/MyQuick.info`\n\n## How Fast is Super-Fast?\n\nIt's faster than you would expect! We have observed it speeding up clone and migration tasks that normally take 1-2 hours to just 3-6 minutes. Yes, that's how fast it is!\n\nThis file, if it exists, will enable a super fast per-table and parallel database dump and import. However, it will not leave a conventional complete database dump file in the site archive normally created by Ægir when you run not only the backup task, but also clone, migrate, and delete tasks. Consequently, the restore task will not work anymore.\n\nWe need to emphasize this again: with this control file present, all normally slow tasks will become blazing fast, but at the cost of not keeping an archived complete database dump file in the site directory archive where it would otherwise be included.\n\n## Important Considerations\n\nOf course, the system still maintains nightly backups of all your sites using the new split SQL dump archives. However, with this control file present, you won't be able to use the restore task in Ægir because the site archive won't include the database dump. You can still find that SQL dump split into per-table files in the backups directory, though, in a subdirectory with a timestamp added, so you can still access it manually if needed.\n\nFor more information, please visit the [documentation](https://github.com/omega8cc/boa/tree/5.x-dev/docs).\n\n"
  },
  {
    "path": "docs/NEWRELIC.md",
    "content": "# New Relic Monitoring Configuration\n\nThis feature disables global New Relic monitoring by deactivating the server-level license key. This allows it to safely auto-enable or auto-disable every 5 minutes, but per Octopus instance—for all sites hosted on the given instance or per platform or per site via INI directives—when a valid license key is present in the special `~/static/control/newrelic.info` control file.\n\n## INI (platform level) directive for New Relic\n\n```text\n;enable_newrelic_integration = FALSE\n;;\n;;  When set to TRUE it will enable New Relic monitoring for all sites on this\n;;  platform, but only if there is a valid New Relic license key present in the\n;;  ~/static/control/newrelic.info control file.\n```\n\nSee also [more details in platform/INI.md docs](https://github.com/omega8cc/boa/blob/5.x-dev/docs/ini/platform/INI.md)\n\n## INI (site level) directive for New Relic\n\n```text\n;enable_newrelic_integration = FALSE\n;;\n;;  When set to TRUE it will enable New Relic monitoring for this site only.\n;;  You still need a valid New Relic license key present in the control file:\n;;  ~/static/control/newrelic.info\n```\n\nSee also [more details in site/INI.md docs](https://github.com/omega8cc/boa/blob/5.x-dev/docs/ini/site/INI.md)\n\nPlease note that a valid license key is a 40-character hexadecimal string that New Relic provides when you sign up for an account.\n\n## Disabling New Relic Monitoring\n\nTo disable New Relic monitoring for all sites on the Octopus instance, simply delete its `~/static/control/newrelic.info` control file and wait a few minutes.\n\n## Important Considerations\n\nOn a self-hosted BOA, you still need to add your valid license key as `_NEWRELIC_KEY` in the `/root/.barracuda.cnf` file and run a system upgrade with `barracuda up-lts system` first. 
This step is not required on Omega8.cc hosted service, where the New Relic agent is already pre-installed for you.\n\nFor more information, please visit the [documentation](https://github.com/omega8cc/boa/tree/5.x-dev/docs).\n\n"
  },
  {
    "path": "docs/NOTES.md",
    "content": "# Barracuda _XTRAS_LIST and install mode explained\n\n## Add-ons configurable with _XTRAS_LIST in `/root/.barracuda.cnf`\n\n### Xtras Included with \"ALL\" Wildcard:\n\n- **ADM**: Adminer DB Manager (installed by default in LOCAL mode)\n- **CSF**: Firewall (installed by default in PUBLIC mode)\n- **FTP**: Pure-FTPd server with forced FTPS\n- **IMG**: Image Optimize binaries: `advdef`, `advpng`, `jpegoptim`, `jpegtran`, `optipng`, `pngcrush`, `pngquant`\n\n### Xtras Which Need to be Listed Explicitly:\n\n- **BND**: Bind9 DNS Server (deprecated)\n- **BZR**: Bazaar\n- **CGP**: Collectd Graph Panel\n- **CSS**: Ruby Gems for Compass\n- **FMG**: FFmpeg support (deprecated)\n- **NPM**: NPM for Gulp/Bower (requires also /root/.allow.node.lshell.cnf)\n- **SR4**: Apache Solr 4 with Jetty 9 (not supported on Devuan Excalibur)\n- **SR7**: Apache Solr 7 (not supported on Devuan Excalibur)\n- **SR9**: Apache Solr 9\n- **WMN**: Webmin Control Panel (deprecated)\n\n### Examples:\n\n```\n_XTRAS_LIST=\"\"\n_XTRAS_LIST=\"ALL\"\n_XTRAS_LIST=\"ALL SR9 CSS NPM\"\n```\n\n**NOTE**: The `_XTRAS_LIST` array is by default empty for `PUBLIC` and `LOCAL` mode, but `LOCAL` mode is automatically extended to include also `ADM` Adminer, while `PUBLIC` mode is automatically extended to include also `CSF` firewall so it doesn't matter if you have `ALL` or `CSF` keywords listed in the `_XTRAS_LIST` array in your system `/root/.barracuda.cnf` file -- CSF/LFD will be installed automatically.\n\n**NOTE**: The only optional xtra add-on which requires special attention is `NPM` for Gulp/Bower--it requires presence of `/root/.allow.node.lshell.cnf` control file, because **Node should NOT be installed on system with not trusted/shared accounts**.\n\n- Removing any item from this list once it is already installed, will NOT uninstall anything.\n\n- Configuration file template: [barracuda.cnf](https://github.com/omega8cc/boa/tree/5.x-dev/docs/cnf/barracuda.cnf)\n\n**NOTE**: Collectd will work 
only if the `cgp.master.f-q-d-n` subdomain points to your IP (we recommend using wildcard DNS to simplify it). But don't worry, you can add proper DNS entries for those subdomains later, if you didn't enable wildcard DNS before running the Barracuda installer. Only the system hostname must have proper DNS configuration before installing Barracuda.\n\n## Barracuda _EASY_SETUP options explained\n\n**NOTE**: `123.45.67.89` below is a placeholder for your server's public, real IP address.\n\n**NOTE**: `f-q-d-n` below is a placeholder for your real wildcard-enabled hostname.\nSee our [DNS wildcard configuration example](http://bit.ly/UM2nRb) for reference.\n\n**NOTE**: If your outgoing SMTP requires using a relayhost, define `_SMTP_RELAY_HOST` first.\n\n### Barracuda EASY_SETUP=PUBLIC\n\nWith the `_EASY_SETUP=PUBLIC` option (default), Barracuda will automatically install the extra services listed below:\n\n- Your Ægir Octopus Instance control panel will be available at `https://your-octopus-aegir-url/`\n- Your Adminer Percona Manager will be available at `https://your-octopus-aegir-url/sqladmin/`\n- Your Ægir Master Instance control panel will be available at `https://master.f-q-d-n`\n- Your CSF/LFD Firewall will support integrated Nginx Abuse Guard.\n\n- Your (optional) Collectd Graph Panel will be available at `https://cgp.master.f-q-d-n`\n- Your (optional) MultiCore Apache Solr 4.9.1 with Jetty 9 will listen on `127.0.0.1:8099`\n- Your (optional) MultiCore Apache Solr 7.7.3 will listen on `127.0.0.1:9077`\n- Your (optional) MultiCore Apache Solr 9.8.1 will listen on `127.0.0.1:9099`\n- Your (optional) Webmin Control Panel will be available at `https://f-q-d-n:10000` (deprecated)\n\n### Barracuda EASY_SETUP=LOCAL\n\nWith the `_EASY_SETUP=LOCAL` option (not enabled by default), Barracuda will configure your local DNS and hostname automatically. 
No external DNS configuration is needed.\n\nWith the `_EASY_SETUP=LOCAL` option (not enabled by default), Barracuda will automatically install only the services listed below:\n\n- Your Ægir Master Instance control panel will be available at `https://aegir.local`\n- Your Fast DNS Cache Server (unbound) will listen on `127.0.0.1:53`\n- Your Adminer Percona Manager will be available at `https://adminer.aegir.local`\n\n## Barracuda and Octopus Customized Install and Upgrades\n\nWhile the BOA system installed per [docs/INSTALL.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/INSTALL.md) comes with many options set by default to make it as easy as possible, you may want to customize it further on upgrade by editing various settings stored in the BOA config files, respectively:\n\n- `/root/.barracuda.cnf` - check the [barracuda.cnf](https://github.com/omega8cc/boa/tree/5.x-dev/docs/cnf/barracuda.cnf) template\n- `/root/.o1.octopus.cnf` - check the [octopus.cnf](https://github.com/omega8cc/boa/tree/5.x-dev/docs/cnf/octopus.cnf) template\n- `/root/.o2.octopus.cnf` - check the [octopus.cnf](https://github.com/omega8cc/boa/tree/5.x-dev/docs/cnf/octopus.cnf) template\n- etc.\n\nPlease read [docs/UPGRADE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/UPGRADE.md) for a simple upgrades how-to.\n"
  },
  {
    "path": "docs/PHP-FPM.md",
    "content": "# PHP-FPM Version Management in BOA\n\nThe Ægir version provided by BOA is now fully compatible with PHP 8.5, so it can be used as default version in the Ægir PHP configuration files:\n`~/static/control/cli.info` and `~/static/control/fpm.info`\n\n### Global PHP-FPM Version Control\n\nBOA allows you to manage the PHP-FPM version across all sites hosted on an Octopus instance using the `fpm.info` file.\n\n- The `~/static/control/fpm.info` file, if it exists and contains a supported and installed PHP-FPM version, will be used by a system agent running every 1-2 minutes to switch the PHP-FPM version used for all web requests on this Octopus instance.\n\n#### **IMPORTANT**:\n- If used, this will switch PHP-FPM for **all** Drupal sites hosted on the instance, unless a `multi-fpm.info` control file also exists.\n\n### Supported Values for Single PHP-FPM Mode:\n- 8.5, 8.4, 8.3\n\n#### **NOTE**:\n- Only one line and one value (e.g., `8.3`) should be present in this file; otherwise, the system will ignore it.\n- If the `fpm.info` file doesn’t exist, the system will create it and set it to the lowest available PHP version installed, not the system default version. This ensures backward compatibility for instances installed before upgrading to BOA-4.1.3 when the default PHP version was 5.6. 
Without this safeguard, upgrading could break most hosted sites that haven't been tested for PHP 8.1+ compatibility.\n\n---\n\n### Multi-PHP-FPM Support for Sites on Octopus Instance\n\nYou can enable multiple PHP versions for different sites using the `multi-fpm.info` file.\n\n- **File Location**: `~/static/control/multi-fpm.info`\n- If this file exists, it will override the default `fpm.info` configuration for the sites listed in the `multi-fpm.info` file.\n\nExample of `multi-fpm.info`:\n```\nfoo.com 8.5\nbar.com 7.4\nold.com 5.6\n```\n\n- **NOTE**: Each line in the `multi-fpm.info` file must start with the **main site name** (not an alias), followed by a single space, and then the PHP-FPM version to use.\n\n#### **IMPORTANT**: Supported Drupal core versions and distributions have different PHP version requirements, and not all of the twelve (12) currently supported PHP versions are installed by default. Ensure that you have the corresponding PHP versions installed with barracuda before attempting to install older Drupal versions and distributions. On hosted BOA, contact your host if you need any legacy PHP installed again.\n\n#### PHP CAVEATS for Drupal core 7-10 versions:\n\n- [Drupal 7 PHP Requirements](https://www.drupal.org/docs/7/system-requirements/php-requirements)\n- [Drupal System Requirements](https://www.drupal.org/docs/system-requirements/php-requirements)\n\n#### Please check regularly: [PHP Supported Versions](https://www.php.net/supported-versions.php)\n"
  },
  {
    "path": "docs/PLATFORMS.md",
    "content": "# Octopus Platforms\n\nOctopus can install and/or support the Ægir platforms listed below.\n\n## Note on required and supported PHP versions\n\nSupported Drupal core versions and distributions have different PHP version requirements, while not all PHP versions out of currently supported ten versions are installed by default.\n\nEnsure that you have corresponding PHP versions installed with barracuda before attempting to install older Drupal versions and distributions.\n\nOn hosted BOA contact your host if you need any legacy PHP installed again.\n\n## Drupal 11\n\n- [Commerce 3.2.0](https://github.com/centarro/commerce-kickstart-project) (11.3.3)\n- [Drupal 11.1.9](https://drupal.org/project/drupal/releases/11.1.9)\n- [Drupal 11.2.10](https://drupal.org/project/drupal/releases/11.2.10)\n- [Drupal 11.3.3](https://drupal.org/project/drupal/releases/11.3.3)\n- [Drupal CMS 2.0.0](https://new.drupal.org/drupal-cms) (11.3.3)\n- [Sector 11.0.x-dev](https://drupal.org/project/sector) (11.3.3)\n- [Thunder 8.3.1](https://drupal.org/project/thunder) (11.3.3)\n- [Varbase 10.1.0](https://drupal.org/project/varbase) (11.3.1)\n\n## Drupal 10\n\n- [Commerce v.2](https://drupal.org/project/commerce) (10.1.8)\n- [Drupal 10.0.11](https://drupal.org/project/drupal/releases/10.0.11)\n- [Drupal 10.1.8](https://drupal.org/project/drupal/releases/10.1.8)\n- [Drupal 10.2.12](https://drupal.org/project/drupal/releases/10.2.12)\n- [Drupal 10.3.14](https://drupal.org/project/drupal/releases/10.3.14)\n- [Drupal 10.4.9](https://drupal.org/project/drupal/releases/10.4.9)\n- [Drupal 10.5.8](https://drupal.org/project/drupal/releases/10.5.8)\n- [Drupal 10.6.3](https://drupal.org/project/drupal/releases/10.6.3)\n- [DXPR Marketing 10.3.0](https://drupal.org/project/dxpr_marketing_cms) (10.3.6)\n- [EzContent 2.2.15](https://drupal.org/project/ezcontent) (10.3.6)\n- [farmOS 3.5.1](https://drupal.org/project/farm) (10.6.2)\n- [LocalGov 3.4.0](https://drupal.org/project/localgov) 
(10.6.3)\n- [OpenCulturas 2.5.4](https://drupal.org/project/openculturas) (10.5.8)\n- [OpenFed 12.2.4](https://drupal.org/project/openfed) (10.2.10)\n- [Social 12.4.5](https://drupal.org/project/social) (10.2.10)\n- [Varbase 9.1.13](https://drupal.org/project/varbase) (10.6.1)\n\n## Drupal 9\n\n- [Drupal 9.5.11](https://drupal.org/project/drupal/releases/9.5.11)\n- [OpenLucius 2.0.0](https://drupal.org/project/openlucius) (9.5.11)\n- [Opigno LMS 3.1.0](https://drupal.org/project/opigno_lms) (9.5.11)\n\n## Drupal 7\n\n- [Commerce v.1](https://drupal.org/project/commerce_kickstart) (7.105.1)\n- [Drupal 7.105.1](https://docs.tag1.com/faqs/)\n- [Ubercart 3.13](https://drupal.org/project/ubercart) (7.105.1)\n\n## Drupal 6\n\n- [Pressflow 6.60.1](https://www.pressflow.org)\n- [Ubercart 2.15](https://drupal.org/project/ubercart) (6.60.1)\n\n* All D7 platforms have been enhanced using [Drupal 7.105.1 +Extra core](https://github.com/omega8cc/7x/tree/7.x-om8)\n\n* All D6 platforms have been enhanced using [Pressflow (LTS) 6.60.1 +Extra core](https://github.com/omega8cc/pressflow6/tree/pressflow-plus)\n\n* All D6 and D7 platforms include some useful and performance-related contrib modules. See [docs/MODULES.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/MODULES.md) for details.\n\n# Customize Octopus Platform List via Control File\n\n`~/static/control/platforms.info`\n\nThis file, if it exists and contains a list of symbols used to define supported platforms, allows control/override of the value of `_PLATFORMS_LIST` variable normally defined in the `/root/.${_USER}.octopus.cnf` file, which can't be modified by the Ægir instance owner with no system root access.\n\n**IMPORTANT**: If used, it will replace/override the value defined on initial instance install and all previous upgrades. 
It takes effect on every future Octopus instance upgrade, which means that you will miss all newly added distributions if they are not listed in this control file.\n\n## Supported Values\n\n### Drupal 11.3\n\n- `DE3` — Drupal 11.3 prod/stage/dev\n- `CK3` — Commerce v.3\n- `CMS` — Drupal CMS\n- `SCR` — Sector\n- `THR` — Thunder\n- `VBX` — Varbase 10\n\n### Drupal 11.2\n\n- `DE2` — Drupal 11.2 prod/stage/dev\n\n### Drupal 11.1\n\n- `DE1` — Drupal 11.1 prod/stage/dev\n\n### Drupal 10.6\n\n- `DX6` — Drupal 10.6 prod/stage/dev\n- `FOS` — farmOS\n- `LGV` — LocalGov\n- `VB9` — Varbase 9\n\n### Drupal 10.5\n\n- `DX5` — Drupal 10.5 prod/stage/dev\n- `OCS` — OpenCulturas\n\n### Drupal 10.4\n\n- `DX4` — Drupal 10.4 prod/stage/dev\n\n### Drupal 10.3\n\n- `DX3` — Drupal 10.3 prod/stage/dev\n- `DXP` — DXPR Marketing\n- `EZC` — EzContent\n\n### Drupal 10.2\n\n- `DX2` — Drupal 10.2 prod/stage/dev\n- `OFD` — OpenFed\n- `SOC` — Social\n\n### Drupal 10.1\n\n- `DX1` — Drupal 10.1 prod/stage/dev\n- `CK2` — Commerce v.2\n\n### Drupal 10.0\n\n- `DX0` — Drupal 10.0 prod/stage/dev\n\n### Drupal 9\n\n- `DL9` — Drupal 9 prod/stage/dev\n- `OLS` — OpenLucius\n- `OPG` — Opigno LMS\n\n### Drupal 7\n\n- `DL7` — Drupal 7 prod/stage/dev\n- `CK1` — Commerce v.1\n- `UC7` — Ubercart\n\n### Drupal 6\n\n- `DL6` — Pressflow (LTS) prod/stage/dev\n- `UC6` — Ubercart\n\nYou can also use the special keyword `ALL` instead of any other symbols to have all available platforms installed, including newly added platforms in all future BOA system releases.\n\n### Examples:\n\n```\nDE2 DX5 SOC UC7\n```\n\n```\nALL\n```\n"
  },
  {
    "path": "docs/PROVIDES.md",
    "content": "# Provides\n\n**Included/Enabled by Default** (See [docs/NOTES.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/NOTES.md) for details)\n\n1. PHP-FPM versions 8.5/4/3/2/1/0, 7.4/3/2/1/0, and 5.6, configurable per site.\n2. Latest release of Percona 5.7, 8.0 or 8.4 database server with Adminer manager.\n3. All libraries and tools required to install and run the Nginx-based Ægir system.\n4. Magic Speed Booster cache, functioning like a Boost + AuthCache, but per user.\n5. Entry-level XSS protection built into Nginx.\n6. Firewall csf/lfd integrated with Nginx abuse guard.\n7. Autonomous Maintenance and Auto-Healing scripts located in `/var/xdrago`.\n8. Local monitoring for uptime and self-healing every 3 seconds.\n9. Automated, rotated daily backups for all databases in `/data/disk/arch/sql`.\n10. Letsencrypt.org SSL support (See [docs/SSL.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SSL.md) for details).\n11. HTTP/2 or SPDY Nginx support.\n12. Perfect Forward Secrecy (PFS) support in Nginx.\n13. PHP extensions: Zend OPcache, PHPRedis, UploadProgress, MailParse, and ionCube.\n14. Fast Valkey Cache/Lock/Path with DB auto-failover for all Drupal core versions.\n15. Limited Shell, SFTP, and FTPS accounts per Ægir Client with per-site access.\n16. Drush access on the command line in all shell accounts.\n17. Composer and Drush Make access on the command line for the main shell account only.\n18. PHP error debugging, including WSOD, enabled on the fly on `.dev` aliases.\n19. Built-in collection of useful modules available on all platforms.\n20. Fast DNS Cache Server (unbound).\n21. Image Optimize toolkit binaries.\n\n**Optional Add-ons** (See [docs/NOTES.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/NOTES.md) for details)\n\n22. MultiCore Apache Solr 7 and Solr 4 (See [docs/SOLR.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SOLR.md) for details).\n23. New Relic Apps Monitor with per Octopus license and per site reporting.\n24. 
Ruby Gems and NPM (See [docs/GEM.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/GEM.md) for details).\n"
  },
  {
    "path": "docs/REMOTE.md",
    "content": "# How To: Migrate Single Site Between Remote Ægir\n\nThis is a detailed how-to for the `remote_import` Provision extension and `hosting_remote_import` module included by default in every Ægir Satellite Instance since BOA-2.0.3 Edition.\n\nWe assume that your Octopus system user is the default `o1`.\n\n**Important**: The system user must be the same on source and target, since Ægir doesn't allow (yet) to migrate from `o1` to `o2`, only from `o1` to `o1` or from `o2` to `o2`, etc.\n\n- The `source-host` is a placeholder for your source instance FQDN hostname.\n- The `target-host` is a placeholder for your target instance FQDN hostname.\n- The `plform-name` is a placeholder for the name of the imported codebase root directory.\n\n## Caveats\n\n1. The `ajax_comments` contrib module must be disabled and removed before site migration.\n2. The Verify task on the migrated site must run without errors before migration.\n\nYou need to run the required commands and perform tasks in the order listed below:\n\n## Commands to Run on Both Source and Target Server\n\n```sh\ntouch /data/disk/o1/static/control/MyClassic.info\nchsh -s /bin/bash o1\nsed -i \"s/^max_execution_time =.*/max_execution_time = 7200/g\" /opt/php*/lib/php.ini\nsed -i \"s/^max_input_time =.*/max_input_time = 7200/g\" /opt/php*/lib/php.ini\n```\n\n## Commands to Run on the Source Server Only\n\n```sh\nsu - o1\nnano ~/.ssh/authorized_keys ### Paste ~/.ssh/id_ed25519.pub from the same account on target\nchmod 600 ~/.ssh/*\n```\n\n## Commands to Run on the Target Server Only\n\n```sh\nsu - o1\nssh-keyscan -t rsa -H source-host >> ~/.ssh/known_hosts\ndrush @hostmaster en hosting_remote_import -y\nrsync -avzuL --exclude=plform-name/sites --ignore-errors -e ssh o1@source-host:/path/to/plform-name ~/static/\nmkdir -p ~/static/plform-name/sites\nrsync -avzuL --ignore-errors -e ssh o1@source-host:/path/to/plform-name/sites/all ~/static/plform-name/sites/\nrsync -avzuL --ignore-errors -e ssh 
o1@source-host:/path/to/plform-name/sites/default ~/static/plform-name/sites/\n```\n\nAdd the transferred platform codebase in the Ægir frontend with the path: `/data/disk/o1/static/plform-name`\n\n## Commands to Run on the Source Server Only\n\n```sh\nmv -f /var/xdrago/manage_ltd_users.sh /var/backups/\nservice clean-boa-env start\n```\n\n## Tasks to Perform on the Target Server Only\n\n**Note**: You will have to wait for cron to run after every step, or run the tasks cron manually with `bash /var/xdrago/run-o1`.\n\n1. Go to `/node/add/server` in the Ægir control panel on the target instance.\n2. Enter the FQDN of the source server as the 'Server hostname'.\n3. Choose only the 'hostmaster' option under 'Remote Import' and hit 'Save'.\n4. Go to the 'Import remote sites' tab on the newly added server node once it is verified.\n5. Click on the \"Retrieve a list...\" button and run cron.\n6. Choose the site to import from the list and hit 'Next'.\n7. Choose the platform the site should be hosted on, hit 'Import' and run cron.\n8. Wait... wait... wait... until it is done and the site is imported and verified.\n\nRepeat steps 4-8 to migrate additional sites.\n\n## Commands to Run on the Source Server Only\n\n```sh\nmv -f /var/backups/manage_ltd_users.sh /var/xdrago/\n```\n\n## Commands to Run on Both Source and Target Server\n\n```sh\nchsh -s /bin/false o1\n```\n"
  },
  {
    "path": "docs/REWRITES.md",
    "content": "# How To: Customize Rewrites and Locations in Nginx\n\nYou can include your custom rewrites/locations configuration to modify or add some custom settings taking precedence over other rules in the main Nginx configuration.\n\nNote that some locations will require using parent literal location to stop searching for / using other regex based locations.\n\nYour custom include file should have filename: `nginx_vhost_include.conf` for standard overrides and/or `nginx_force_include.conf` for high level overrides. The difference between both options is only the point where the extra config file is included, thus `nginx_force_include.conf` can override more than `nginx_vhost_include.conf` file.\n\nNginx will look for both files in the include directory specified below:\n\nFor Satellite Instances: `/data/disk/EDIT_USER/config/server_master/nginx/post.d/`\n\nFor Master Instance: `/var/aegir/config/includes/`\n\nThese files will be included if exist and will never be modified or touched by Ægir Provision backend system.\n\nNote: your custom rewrite rules will apply to *all* sites on the same Ægir Satellite Instance, unless you will use site/domain specific `if{}` embedded locations, as shown in the examples below.\n\n## Custom rewrites to map legacy content to the Drupal multisite.\n\n```nginx\nlocation ~* ^.+\\.(?:jpe?g|gif|png|ico|swf|pdf|ttf|html?)$ {\n  access_log off;\n  log_not_found off;\n  expires 30d;\n  rewrite ^/files/(.*)$     /sites/$server_name/files/$1 last;\n  rewrite ^/images/(.*)$    /sites/$server_name/files/images/$1 last;\n  rewrite ^/downloads/(.*)$ /sites/$server_name/files/downloads/$1 last;\n  rewrite ^/download/(.*)$  /sites/$server_name/files/download/$1 last;\n  rewrite ^/docs/(.*)$      /sites/$server_name/files/docs/$1 last;\n  rewrite ^/documents/(.*)$ /sites/$server_name/files/documents/$1 last;\n  rewrite ^/legacy/(.*)$    /sites/$server_name/files/legacy/$1 last;\n  try_files $uri =404;\n}\n```\n\n## Site specific 301 redirect 
with parent literal location to stop searching for (and using) other regex based locations.\n\n```nginx\nlocation ^~ /some-literal-path/no-regex-here {\n  location ~* ^/some-path/or-regex-here {\n    if ($host ~* ^(www\\.)?(domain\\.com)$) {\n      return 301 $scheme://$host/destination/url;\n    }\n    try_files $uri @cache;\n  }\n}\n```\n\n## 301 redirect for various legacy .php URIs with parent literal locations to stop searching for (and using) other regex based locations.\n\n```nginx\nlocation ^~ /services {\n  location ~* ^/services {\n    rewrite ^/services/accounting\\.php$ $scheme://$host/node/18 permanent;\n    rewrite ^/services/assurance\\.php$  $scheme://$host/node/11 permanent;\n    rewrite ^/services/audit\\.php$      $scheme://$host/node/11 permanent;\n    rewrite ^/services/taxation\\.php$   $scheme://$host/node/92 permanent;\n    rewrite ^/services/wealth\\.php$     $scheme://$host/node/15 permanent;\n    rewrite ^/services\\.php$            $scheme://$host/node/17 permanent;\n    try_files $uri @cache;\n  }\n  try_files $uri @cache;\n}\nlocation ^~ /our_team {\n  location ~* ^/our_team {\n    rewrite ^/our_team\\.php$ $scheme://$host/node/10 permanent;\n    rewrite ^/our_team$      $scheme://$host/node/10 permanent;\n    try_files $uri @cache;\n  }\n  try_files $uri @cache;\n}\n```\n\n## Domain specific 301 redirect for legacy .php URIs with literal location to stop searching for (and using) other regex based locations.\n\n```nginx\nlocation = /about_us.php {\n  if ($host ~* ^(www\\.)?(foo\\.com)$) {\n    return 301 $scheme://$host/node/19;\n  }\n  return 444;\n}\n```\n\n## Helper locations to avoid 404 on legacy image paths\n\n```nginx\nif ($main_site_name = '') {\n  set $main_site_name \"$server_name\";\n}\n\nlocation ^~ /sites/default/files {\n  location ~* ^/sites/default/files/imagecache {\n    access_log off;\n    log_not_found off;\n    expires 30d;\n    set $nocache_details \"Skip\";\n    rewrite ^/sites/default/files/imagecache/(.*)$ 
/sites/$main_site_name/files/imagecache/$1 last;\n    try_files $uri @drupal;\n  }\n  location ~* ^/sites/default/files/styles {\n    access_log off;\n    log_not_found off;\n    expires 30d;\n    set $nocache_details \"Skip\";\n    rewrite ^/sites/default/files/styles/(.*)$ /sites/$main_site_name/files/styles/$1 last;\n    try_files $uri @drupal;\n  }\n  location ~* ^/sites/default/files {\n    access_log off;\n    log_not_found off;\n    expires 30d;\n    rewrite ^/sites/default/files/(.*)$ /sites/$main_site_name/files/$1 last;\n    try_files $uri =404;\n  }\n}\n```\n"
  },
  {
    "path": "docs/SECURITY.md",
    "content": "# Security Considerations for Multi-Ægir Systems\n\nIn a multi-Ægir instance system, all instances utilize the same Nginx server. Consequently, installing a site with the same domain on multiple instances can cause conflicts. **The instances are not aware of each other**, so it is crucial to manage the system responsibly.\n\nIt is **imperative** to never grant anyone access to the Ægir **system user** on any Octopus instance. This user has near-root access to **all** sites databases hosted across **all** Octopus instances on the same BOA server. Only provide **restricted shell** access accounts and **non-admin** Ægir control panel accounts to end-users.\n\n# Security Considerations for Node/NPM Access\n\nGiven that `node` can be exploited to bypass Limited Shell and pose a significant security risk to the BOA system, it should not be enabled on any BOA system with multiple `lshell` users. Consequently, Node/NPM support is not enabled in BOA by default. To enable it, you must create an empty control file `/root/.allow.node.lshell.cnf` to lift the restriction. In hosted BOA environments, Node/NPM support is available only on dedicated systems such as Phantom and Cluster.\n\n# BOA System Security Features Explained\n\nBOA offers a highly secure hosting environment for Ægir and Drupal sites, featuring comprehensive built-in security monitoring and autonomous attack prevention systems. Below is a list of key features that collectively provide robust protection for all hosted sites. For additional information, consider reading about [running performance or load tests](https://learn.omega8.cc/how-to-run-performance-or-load-test-300).\n\n1. **Encrypted Connections**: Account access is restricted to SSH, SFTP (FTP over SSH), and FTPS (FTP over SSL).\n2. **Restricted PHP Scripts**: Only recognized Drupal PHP files are allowed in the BOA secure environment. 
The web server does not have write access to the website codebase, blocking common attack vectors even for sites with otherwise vulnerable codebases.\n3. **Web Server Monitoring**: IP addresses exhibiting DoS-like activity are temporarily blocked for one hour and permanently blocked after repeated offenses. You can [whitelist your IP on the fly](https://omega8.cc/how-firewall-works-is-my-ip-blocked-121) by maintaining an active SSH connection.\n4. **Firewall Monitoring**: Repeated failed login attempts for SSH, SFTP, or FTPS result in temporary one-hour blocks, escalating to permanent blocks. Whitelisted IPs are not exempt if abuse is detected.\n5. **Load Management**: The web server may be temporarily disabled during high system loads due to undetected DoS attacks. Normal service resumes within 10 seconds after load stabilization.\n6. **Port Scan and Flood Protection**: Detected port scans or floods result in temporary one-hour blocks, escalating to permanent blocks after repeated offenses. False positives are detailed in our [How Firewall Works article](https://omega8.cc/how-firewall-works-is-my-ip-blocked-121).\n7. **Resource Scaling**: Automated resource scaling on hosted BOA mitigates high load spikes, ensuring system stability during short-term traffic surges.\n8. **Perfect Forward Secrecy and HTTP/2**: All HTTPS services utilize Perfect Forward Secrecy and HTTP/2 for enhanced security and speed. Non-supportive browsers default to classic HTTPS with SSL and Perfect Forward Secrecy.\n9. **PHP Error Protection**: PHP errors are not displayed in browsers. Debugging can be performed using protected dev domain aliases.\n10. **Password Expiration Policy**: SSH, SFTP, and FTPS passwords expire every 90 days and must be updated, even if SSH keys are in use.\n11. **Restricted Admin Access**: Admin account access (uid=1) is unavailable in Ægir to prevent potential misuse. 
Non-admin main account access provides sufficient privileges for safe management in a multi-Ægir environment.\n12. **Restricted System Binaries Access**: BOA modifies access permissions to system binaries and commands that could potentially be used as attack vectors by web shells and other intrusion methods, significantly limiting damage potential even for sites running older Drupal versions.\n\n# Customizing PHP Function Restrictions\n\n## Option in `/root/.barracuda.cnf`\n\nYou can define a custom list of functions to disable in addition to those already denied in the system-level `disable_functions`.\n\n```ini\n_PHP_FPM_DENY=\"\"\n```\n\n**Note:** If this option is left empty, BOA will deny access to the function:\n\n```ini\npassthru\n```\n\nIf `_PHP_FPM_DENY` is **not** empty, its value will **replace** the default `passthru`, so any denied function must be listed explicitly.\n\n**WARNING!** Do not add `shell_exec` here, or you will break cron for all sites, including those hosted on all Satellite Instances. 
The `shell_exec` function is also required by Collectd Graph Panel, if installed.\n\nThis option affects only the Ægir Master Instance plus all scripts running outside of Octopus Satellite Instances.\n\n**Example:**\n\n```ini\n_PHP_FPM_DENY=\"passthru,popen,system\"\n```\n\nWhile this improves security, it can also break modules that rely on any disabled functions.\n\n## Option in `/root/.USER.octopus.cnf`\n\nYou can define a custom list of functions to disable in addition to those already denied in the system-level `disable_functions`.\n\n```ini\n_PHP_FPM_DENY=\"\"\n```\n\n**Note:** If this option is left empty, BOA will deny access to the function:\n\n```ini\npassthru\n```\n\nIf `_PHP_FPM_DENY` is **not** empty, its value will **replace** `passthru`, so any denied function must be listed explicitly.\n\nThis option affects only this Satellite Instance and is not influenced by the same option set in the Barracuda Master.\n\n**Example:**\n\n```ini\n_PHP_FPM_DENY=\"system,exec,shell_exec\"\n```\n\nWhile this improves security, it can also break modules that rely on any disabled functions.\n\n# Strict Binary Permissions\n\n## Option in `/root/.barracuda.cnf`\n\nWe highly recommend enabling this option to improve system security when certain PHP functions, especially `exec`, `passthru`, `shell_exec`, `system`, `proc_open`, `popen`, are not disabled via the `_PHP_FPM_DENY` option above.\n\n```ini\n_STRICT_BIN_PERMISSIONS=YES\n```\n\n**WARNING!** This option is very aggressive and can break any extra service or binary you have installed that BOA doesn't manage, if that binary's system group is set to 'root'. BOA will not touch any binary that has a non-root group or carries setgid or setuid permissions.\n\n**Recommended setting:**\n\n```ini\n_STRICT_BIN_PERMISSIONS=YES\n```\n"
  },
  {
    "path": "docs/SELFUPGRADE.md",
    "content": "# Ægir Upgrade via Octopus On-Demand without Root Access\n\nYou can now launch an Ægir upgrade to (re)install platforms listed in the file `~/static/control/platforms.info` (see below) by creating an empty PID file:\n\n```sh\n~/static/control/run-upgrade.pid\n```\n\nThis file, if it exists, will launch your Ægir upgrade in just a few minutes and will be automatically deleted afterward. This means that you can upgrade your Ægir instance easily to install supported platforms even if you don't have root access or are on a hosted BOA system.\n\nNote that this PID file will be ignored if there is no `platforms.info` file, as explained in [PLATFORMS.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/PLATFORMS.md).\n\n# Barracuda and Octopus Upgrade on Schedule with Root Access\n\nYou can launch a BOA after-midnight self-upgrade, either for the system only or also for the Ægir instances, by adding supported variables to the file:\n\n```sh\n/root/.barracuda.cnf\n```\n\nYou can configure BOA to run automated upgrades to the latest head version for both Barracuda and all Octopus instances with three variables, which are empty by default. All three variables must be defined to enable auto-upgrade.\n\nYou can set `_AUTO_UP_MONTH` and `_AUTO_UP_DAY` to any date in the past or future (e.g., `_AUTO_UP_MONTH=2` with `_AUTO_UP_DAY=29`) if you wish to enable only weekly system upgrades.\n\nRemember that day/month upgrades will include a complete upgrade to the latest BOA head for Barracuda and all Octopus instances, while weekly upgrades are designed to run only the `barracuda up-lts system` upgrade.\n\nYou can further modify the auto-upgrade by specifying either `head` or `dev` with the `_AUTO_VER` variable. 
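\n\nFor example, a hypothetical configuration that enables only the weekly system upgrade (with the one-time upgrade date parked on the rarely firing February 29, as suggested above) might look like:\n\n```ini\n_AUTO_UP_WEEKLY=1\n_AUTO_UP_MONTH=2\n_AUTO_UP_DAY=29\n_AUTO_VER=lts\n```\n\n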
Additionally, you can include all supported PHP versions with the `_AUTO_PHP` variable set to \"php-min\"; otherwise, it will be ignored.\n\nNote that weekly system upgrades will start shortly after midnight on the specified weekday, while the day/month upgrades for both Barracuda and all Octopus instances will start at approximately 3 AM for the system and Ægir Master instance, and at approximately 4 AM for all Octopus-based Ægir instances.\n\n> **NOTE:** All three main `_AUTO_UP_*` variables must be defined to enable auto-upgrade.\n\n```ini\n_AUTO_UP_WEEKLY=  # Day of week (1-7) for weekly system upgrades\n_AUTO_UP_MONTH=   # Month (1-12) to define the date of one-time upgrade\n_AUTO_UP_DAY=     # Day (1-31) to define the date of one-time upgrade\n_AUTO_VER=lts     # The BOA version to use (lts by default)\n_AUTO_PHP=        # Useful to force php-min, otherwise ignored\n```\n\n> **NOTE:** Extra `_AUTO_UP_*` variables can also be defined; otherwise, default values will be used.\n\n```ini\n_AUTO_UP_HOUR=    # Hour of the day (0-23) for barracuda upgrades\n_AUTO_UP_MINUTE=  # Minute of the hour (0-59) for barracuda upgrades\n```\n\n```ini\n_AUTO_OCT_UP_HOUR=    # Hour of the day (0-23) for octopus upgrades\n_AUTO_OCT_UP_MINUTE=  # Minute of the hour (0-59) for octopus upgrades\n```\n\n> **IMPORTANT:** Make sure to use values within the ranges listed above. Otherwise, you can break and lock up your system cron.\n"
  },
  {
    "path": "docs/SKYNET.md",
    "content": "# Importance of Keeping SKYNET Enabled in BOA\n\nThe `_SKYNET_MODE=ON` setting (enabled by default) is essential for maintaining BOA's auto-healing functionality. It ensures that BOA tools remain operational by performing critical checks on components such as cURL, Python, and Lshell, verifying that they function correctly.\n\n**We always have SKYNET enabled on all production servers**, which should give you confidence in its safety and reliability for production environments.\n\nWhile SKYNET does not send notifications for all of its actions, it logs activities in `/var/log/boa/` and `/var/xdrago/monitor/log/`. It also sends incident notifications for its system monitoring features, unless you disable this by setting `_INCIDENT_REPORT=NO` in the `/root/.barracuda.cnf` file.\n\nBOA is **designed to self-maintain** and even [**self-upgrade**](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SELFUPGRADE.md), provided that the optional cron entries are configured. It is built with the expectation that you are using a supported system and are not making changes beyond managing the hosted Ægir sites. When used as intended, BOA operates flawlessly.\n\nHowever, performing actions outside of the standard BOA upgrade processes—such as manually installing packages, altering default settings, or disabling `_SKYNET_MODE` by setting `_SKYNET_MODE=OFF`—means you assume full responsibility for any issues that may arise. Manual interventions can cause BOA to behave unpredictably, leading to problems that are beyond our control.\n\nIn summary, if you allow BOA to operate in its intended **zero-touch manner**, it will run smoothly for years. 
Disabling `_SKYNET_MODE` or making manual changes means proceeding at your own risk, and we may not be able to provide assistance.\n\n### For reference, here is a bit of history:\n\nThe BOA Skynet auto-updates were initially limited to checking for a new BOA release and notifying the system admin daily until the system was upgraded to the latest stable release.\n\nNext, since people tend to forget to update the meta-installers before running a barracuda or octopus upgrade, which generated a ton of unneeded tickets, confusion, and frustration, we automated these updates, so all your meta-installers were updated daily.\n\nThen #drupageddon happened, and we realized that we could make all existing BOA systems secure, auto-magically, within the first 60 minutes after the #drupageddon alert was published. If only we had a running mechanism in place to apply a very trivial but critically important patch to all your D7 sites/codebases while you were on vacation, out of town, or just AFK anywhere.\n\nSo we added Drupal core monitoring and auto-patching to make sure you never run a vulnerable codebase again. 
To make it effective, we scheduled these checks to run hourly.\n\nWe then also added hourly updates for a few key scripts responsible for your system security, self-monitoring, and self-healing.\n\nGradually it grew into its current incarnation, so at the moment BOA Skynet auto-updates do these things for you while you sleep:\n\n* Daily version/release check and notification\n* Update every 6 minutes for all meta-installers and related tools\n* Hourly check for D7 core vulnerability, with patching if detected\n* Hourly update for key BOA tools, monitors and self-healing agents\n* Hourly check that your DNS resolver works as expected, with repair if not\n\nWhile it is very convenient to have all this work done for you, and we\nbelieve that it should still be enabled by default, we should make it\npossible to opt out of all those auto-updates, if you prefer that your\nBOA system never calls home, and whatever happens is totally under\nyour control.\n\nNow you can disable this convenient magic by adding the line:\n\n  `_SKYNET_MODE=OFF`\n\nNOTE: Critically important BOA tools will still be auto-updated every 6 minutes to keep your system ready for upgrade if/when needed, as initially intended.\n\nBetter idea, though:\n\n  `_SKYNET_MODE=ON`\n"
  },
  {
    "path": "docs/SMTP_SSL_DEBUG.md",
    "content": "# SMTP SSL Error Debugging\n\nYou may see this error in a form similar to:\n\n> SMTP stopped working with: \"SMTP Error: Could not connect to SMTP host. Connection failed. stream_socket_enable_crypto(): SSL operation failed with code 1. OpenSSL Error messages: error:0A000086:SSL routines::certificate verify failed\"\n\nThat error almost always means “the TLS handshake started, but PHP/OpenSSL refused the server’s certificate.” Since both Symfony Mailer and the SMTP module fail, it’s not the Drupal module—it’s the connection or the trust store. Here’s a precise checklist to find (and fix) the culprit.\n\n## 1) Sanity checks (the easy wins)\n\n* **Use a hostname, not an IP** in the SMTP config. The cert’s CN/SAN must match the **hostname**.\n* **Pick the right port + mode**:\n\n  * Port **587** → “STARTTLS” (explicit TLS).\n  * Port **465** → “SSL/TLS” (implicit TLS).\n    Mixing these will fail.\n* **System clock**: if time/date is off, cert validation fails.\n\n  ```bash\n  ntpdate pool.ntp.org\n  ```\n\n## 2) Verify the server’s certificate chain from the shell\n\nReplace `smtp.example.com` and port as needed.\n\n**STARTTLS on 587**\n\n```bash\nopenssl s_client -starttls smtp -connect smtp.example.com:587 -servername smtp.example.com -CAfile /etc/ssl/certs/ca-certificates.crt -showcerts </dev/null | sed -n '/Server certificate/,/subject=/p; /Verify return code/p'\n```\n\n**Implicit TLS on 465**\n\n```bash\nopenssl s_client -connect smtp.example.com:465 -servername smtp.example.com -CAfile /etc/ssl/certs/ca-certificates.crt -showcerts </dev/null | sed -n '/Server certificate/,/subject=/p; /Verify return code/p'\n```\n\nYou want to see:\n\n```\nVerify return code: 0 (ok)\n```\n\nIf you see a different code/message (expired cert, hostname mismatch, unable to get local issuer certificate, etc.), that’s your root cause.\n\n## 3) Make sure PHP (the one your site is using) can see a valid CA bundle\n\nCheck what PHP 
thinks:\n\n```bash\n/opt/php83/bin/php -i | egrep -i 'openssl|default_socket|openssl\\.cafile|openssl\\.capath|curl\\.cainfo'\n```\n\nKey items:\n\n* `openssl.cafile` should be **empty** or point to **/etc/ssl/certs/ca-certificates.crt**.\n* `openssl.capath` usually **/etc/ssl/certs**.\n* If these point to a non-existent file (e.g., from an old BOA build), cert validation will fail.\n\nFix the CA bundle if needed:\n\n```bash\napt-get update\napt-get --reinstall install ca-certificates\nupdate-ca-certificates\n```\n\n## 4) Test with a known-good SMTP client\n\nInstall **swaks** (handy for SMTP + TLS):\n\n```bash\napt-get install swaks\nswaks --server smtp.example.com --port 587 --tls --tls-verify --protocol ESMTP\n# or for 465:\nswaks --server smtp.example.com --port 465 --tls --tls-verify --protocol ESMTP\n```\n\nIf swaks fails with the same verify error, the problem is definitely TLS/CA/hostname, not Drupal.\n\n## 5) Check TLS versions & ciphers (OpenSSL 3 vs old servers)\n\nSome mail servers still only offer legacy ciphers or old TLS. Your box runs OpenSSL 3, which drops a lot of legacy. See what the server offers:\n\n```bash\nopenssl s_client -starttls smtp -connect smtp.example.com:587 -servername smtp.example.com -cipher 'DEFAULT:@SECLEVEL=1' </dev/null | egrep -i 'Protocol|Cipher|Verify return code'\n```\n\n* If lowering security (`@SECLEVEL=1`) makes it work, the server is **too old**. Don’t keep this as a permanent fix—ask the provider to update.\n* In PHPMailer/Symfony-Mailer you can sometimes set crypto method to TLS 1.2+ explicitly. 
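\n\nBefore touching mailer settings, it may help to confirm that the server can negotiate TLS 1.2 at all (hypothetical hostname, adjust as needed):\n\n```bash\n# Force a TLS 1.2-only handshake; if this fails while the default\n# handshake succeeds, the server is stuck on older TLS versions.\nopenssl s_client -starttls smtp -connect smtp.example.com:587 -servername smtp.example.com -tls1_2 </dev/null | egrep -i 'Protocol|Cipher|Verify return code'\n```\n\n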
For PHPMailer (Drupal SMTP module) there’s an “Advanced” option for **SMTPOptions**—but you should fix the server or CA instead of weakening security.\n\n## 6) Drupal SMTP module settings (PHPMailer)\n\nIn **/admin/config/system/smtp**:\n\n* Ensure Encryption matches the port (see §1).\n* **Do not** enable “allow self-signed” unless you truly use a private CA.\n* If your provider uses an enterprise/private CA, import it:\n\n  ```bash\n  mkdir -p /usr/local/share/ca-certificates/custom\n  cp /path/to/provider-root-or-intermediate.crt /usr/local/share/ca-certificates/custom/provider.crt\n  update-ca-certificates\n  ```\n\n  Then (optionally) point “CA file” in SMTP settings to `/etc/ssl/certs/ca-certificates.crt`.\n\n## 7) Hostname/SNI pitfalls\n\n* If you connect by IP or override name in `/etc/hosts` **and** your mailer passes the IP as the peer name, SNI/hostname verification will fail. Always use the **public SMTP hostname** in the module config.\n* With `openssl s_client`, always include `-servername smtp.example.com` to test SNI correctly.\n\n## 8) IPv6 edge case\n\nIf DNS for the SMTP host has AAAA and your outbound prefers IPv6 but the provider’s IPv6 endpoint presents a different certificate/chain, you’ll get verify errors. 
Quick test:\n\n```bash\n# Force IPv4:\nopenssl s_client -starttls smtp -connect smtp.example.com:587 -servername smtp.example.com -4 -CAfile /etc/ssl/certs/ca-certificates.crt </dev/null\n```\n\nIf IPv4 succeeds but default fails, pin IPv4 (or ask the provider to fix their IPv6 TLS chain).\n\n## 9) If it still fails—collect the exact failure\n\nRun and share the tail lines:\n\n```bash\nopenssl s_client -starttls smtp -connect smtp.example.com:587 -servername smtp.example.com -CAfile /etc/ssl/certs/ca-certificates.crt -showcerts </dev/null | tail -n +1 | sed -n '/Certificate chain/,$p'\n```\n\nLook for:\n\n* **“unable to get local issuer certificate”** → missing intermediate CA → fix with updated `ca-certificates`.\n* **“certificate has expired”** → provider must renew.\n* **“IP address mismatch” / “hostname mismatch”** → use correct hostname.\n* **Non-zero verify code** with legacy cipher notes → server too old vs OpenSSL 3.\n\n"
  },
  {
    "path": "docs/SOLR.md",
    "content": "# SOLR Management Documentation\n\nYou can easily add, update, or delete Solr cores. This process is fully automated and can be managed via the site-level active INI file. Ensure Solr is already installed on the system with the `SR9` and/or `SR7` keywords in the `_XTRAS_LIST` in the `/root/.barracuda.cnf` file.\n\n> **NOTE:** New Solr 9 Support is available only in BOA PRO.\n\nThere are three INI variables you can use to control the Solr automated setup:\n- `solr_integration_module`\n- `solr_update_config`\n- `solr_custom_config`\n\nRefer to the documentation below, which is also available in every site's INI template. For more information on how to control BOA on site level via INI files, check our [documentation](https://github.com/omega8cc/boa/blob/5.x-dev/docs/ini/site/INI.md).\n\n> **NOTE:** This feature works only for site-level INI files because Solr cores belong to sites, not platforms.\n\n## Solr Core Configuration\n\nThis option allows you to activate Solr core configuration for the site. Solr 9 and Solr 7 are available if installed. 
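\n\nFor example, a site-level INI setup (hypothetical values) that enables a Solr 9 core and keeps its config files auto-updated might look like:\n\n```ini\nsolr_integration_module = search_api_solr9\nsolr_update_config = YES\nsolr_custom_config = NO\n```\n\n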
Supported integration modules are the latest versions of either `search_api_solr` or `apachesolr` (deprecated).\n\nCurrently supported versions are listed below:\n- [search_api_solr-4.3.8.tar.gz (D10.2+)](https://ftp.drupal.org/files/projects/search_api_solr-4.3.8.tar.gz)\n- [search_api_solr-4.2.12.tar.gz (D9.3+)](https://ftp.drupal.org/files/projects/search_api_solr-4.2.12.tar.gz)\n- [search_api_solr-4.1.12.tar.gz (D8.8+)](https://ftp.drupal.org/files/projects/search_api_solr-4.1.12.tar.gz)\n- [search_api_solr-7.x-1.17.tar.gz (D7)](https://ftp.drupal.org/files/projects/search_api_solr-7.x-1.17.tar.gz)\n- [apachesolr-7.x-1.12.tar.gz (D7)](https://ftp.drupal.org/files/projects/apachesolr-7.x-1.12.tar.gz)\n- [apachesolr-6.x-3.1.tar.gz (D6)](https://ftp.drupal.org/files/projects/apachesolr-6.x-3.1.tar.gz)\n\nNote that you still need to add the preferred integration module, along with any dependencies, to your codebase and enable it. This feature doesn't modify your platform or site - it only creates a Solr core with configuration files provided by the integration module: `schema.xml` and `solrconfig.xml`.\n\n> **Important:** `search_api_solr` for D8+ requires Composer to install the module and its dependencies. After installation, configure it and generate customized Solr core config files, which should be uploaded to the path: `sites/foo.com/files/solr/`. The changes will take effect within 5-10 minutes on the Solr core created by the system.\n>\n> **NOTE:** Set `solr_custom_config = NO` for the changes to take effect. With this setting, the auto-installer runs every 5-10 minutes, so you don't need to wait until the next morning to use the new Solr core.\n\nOnce the Solr core is ready, a special file, `sites/foo.com/solr.php`, will provide details on accessing your new Solr core with the correct credentials.\n\nSites with enabled Solr cores can be safely migrated between platforms. 
The integration module can be moved within your codebase and even upgraded, provided it uses compatible `schema.xml` and `solrconfig.xml` files.\n\nSupported values for the `solr_integration_module` variable:\n- `search_api_solr9` (Activates Solr 9 core if installed)\n- `search_api_solr7` (Activates Solr 7 core if installed)\n- `search_api_solr`  (Activates Solr 7 core if installed)\n- `apachesolr`       (Activates Solr 4 core if installed) (deprecated)\n\nTo delete an existing Solr core, simply comment out the relevant line. The system will delete the existing Solr core within 15 minutes.\n\n```text\n;solr_integration_module = your_module_name_here\n```\n\n## Auto-update Solr Core Configuration Files\n\nThis option allows the auto-update of your Solr core configuration files:\n- `schema.xml`\n- `solrconfig.xml`\n\nIf a new release is available for either `apachesolr` or `search_api_solr`, your Solr core will not be automatically upgraded to use the newer `schema.xml` and `solrconfig.xml` unless `solr_update_config` is set to `YES`.\n\nThis option will be ignored if `solr_custom_config` is set to `YES`.\n\n```text\n;solr_update_config = NO\n```\n\n## Custom Solr Core Configuration Files\n\nTo use customized versions of `schema.xml` or `solrconfig.xml`, set `solr_custom_config` to `YES`. If using a hosted Ægir service, submit a support ticket to update these files with your custom versions. On self-hosted BOA, update these files directly.\n\nEnsure you use Solr-compatible config files.\n\n> **IMPORTANT:** With this option enabled, you won't be able to follow the Drupal 8+ specific procedure for `search_api_solr` with config files generated and uploaded to the `files/solr/` directory in your site. You can still use this option to make your Solr core immutable between upgrades. 
However, disable this option briefly (5-10 minutes) for changes to take effect.\n\n```text\n;solr_custom_config = NO\n```\n\n> **NOTE:** The `solr.php` file is not used to connect to the Solr core; it is only for information on configuring Solr in the given site. Once you clone the site, the new clone will receive its own Solr core in a few minutes, with the `solr.php` file populated with unique, new credentials. Update the site admin area configuration to use the new Solr core on the cloned site. Cron is not enabled on the cloned site by default, preventing the overwriting of the original site index.\n\n## Handling Errors\n\nIf you encounter the error:\n\n```text\nApache Solr Attachments Java executable not found; Could not execute\na java command. You may need to set the path of the correct java\nexecutable as the variable 'apachesolr_attachments_java'\nin settings.php.\n```\n\nTo fix this, add the following line to the site's `local.settings.php` file:\n\n```php\n$conf['apachesolr_attachments_java'] = '/usr/bin/java11 -Xms32m -Xmx64m';\n```\n"
  },
  {
    "path": "docs/SOLR_OPTIMIZE.md",
    "content": "# BOA Solr — Optimization & Maintenance Guide\n\nThis document covers Solr performance diagnosis, orphan core cleanup, index\noptimization, and configuration tuning for BOA-managed hosting environments.\nIt is based on real operational experience and consolidates lessons learned\nfrom both typical Drupal search workloads and high-write non-standard use\ncases where Drupal is used as an API layer over a large operational dataset.\n\n## Table of Contents\n\n1. [GC Log Interpretation](#1-gc-log-interpretation)\n2. [Orphan Core Accumulation](#2-orphan-core-accumulation)\n3. [Orphan Core Cleanup — Manual Checks](#3-orphan-core-cleanup--manual-checks)\n4. [Backup Archive Inspection & Recovery](#4-backup-archive-inspection--recovery)\n5. [Core Health Checks](#5-core-health-checks)\n6. [Index Optimization](#6-index-optimization)\n7. [High-Write Core Tuning](#7-high-write-core-tuning)\n8. [Automated Cleanup — manage_solr_config.sh](#8-automated-cleanup--manage_solr_configsh)\n9. [Automated Index Optimization — manage_solr_config.sh](#9-automated-index-optimization--manage_solr_configsh)\n10. [solrconfig.xml Reference — High-Write Workloads](#10-solrconfigxml-reference--high-write-workloads)\n11. [Cron Maintenance Tasks](#11-cron-maintenance-tasks)\n\n## 1. GC Log Interpretation\n\n### Tailing the log\n```bash\ntail -f /var/solr7/logs/solr_gc.log\n# or for Solr 9:\ntail -f /var/solr9/logs/solr_gc.log\n```\n\n### Healthy pattern\nYoung gen (`ParNew`) collections every few minutes, Old gen declining or\nstable, CMS cycles infrequent:\n\n```\nGC(N)  ParNew: 740K->155K(784K)\nGC(N)  CMS: 1156K->1168K(2819K)\nGC(N)  Pause Young (Allocation Failure) 1852M->1293M(3519M) 27ms\n```\n\n### Warning signs\n\n**Frozen Old gen — the key indicator of orphan core accumulation:**\n```\nGC(N)  Old: 1642215K->1642215K(2759348K)   ← identical before/after\nGC(N)  Old: 1642215K->1642215K(2759348K)   ← every single cycle\n```\nCMS is cycling every ~2 seconds but freeing nothing. 
Each loaded Solr core —\neven one with an empty or untouched index — holds live heap references: field\ncaches, segment readers, filter caches, Lucene internal structures. CMS sees\nall of these as live objects and cannot reclaim them. The Old gen flatlines\nregardless of how many sweep cycles run.\n\n**Resolution:** removing orphan cores releases those references and allows CMS\nto resume normal reclamation. In one case, archiving ~260 orphan cores dropped\nOld gen by ~480MB within the same hour and ended a continuous 2-second CMS\nspin cycle.\n\n**Continuous 2-second CMS spin:**\nNormal CMS cycles every several minutes. If cycles are firing every 2 seconds\nand the sweep isn't freeing memory, orphan cores are the most likely cause.\nCheck `ls /var/solr7/data/ | wc -l` — a healthy server has a small number of\nactive cores.\n\n**Abortable Preclean running for seconds:**\n```\nGC(N)  Concurrent Abortable Preclean 6014.616ms\n```\nIndicates a sudden allocation burst — large indexing job, query spike, or\n(more commonly) many per-document commits causing searcher churn. See\n[Section 7](#7-high-write-core-tuning).\n\n**Overlapping onDeckSearchers warning in solr.log:**\n```\nPERFORMANCE WARNING: Overlapping onDeckSearchers=2\n```\nSearcher instances are being created faster than they can warm and close.\nCaused by commits firing too frequently relative to searcher warm time. See\n`maxWarmingSearchers` and `autoSoftCommit` tuning in\n[Section 10](#10-solrconfigxml-reference--high-write-workloads).\n\n### Post-restart GC behaviour\nAfter a fresh Solr restart on a small number of cores, CMS may appear to\n\"flood\" the log with 2-second cycles. This is normal startup behaviour — CMS\nis burning off startup allocation. 
Verify Old gen is *decreasing*, not frozen:\n```\nGC(512)  Old: 137,943K → 134,163K   ← dropping\nGC(514)  Old: 134,163K → 128,986K   ← still dropping\nGC(532)  Old: 136,608K → 123,930K   ← 13MB freed in one sweep\n```\nIf Metaspace is also decreasing after restart, classloader cleanup is working\ncorrectly — not a leak.\n\n## 2. Orphan Core Accumulation\n\n### What causes orphan cores\nWhen a Drupal site is deleted, renamed, or cloned in Aegir, the nginx vhost\nand drush alias are removed or updated, but the corresponding Solr core\ndirectory in `/var/solr7/data/` or `/var/solr9/data/` is never automatically\ndeleted. Over time these accumulate silently. A server with a long history\nof staging environments, client migrations, and development clones can\naccumulate hundreds of orphan cores.\n\nEach orphan core, even if never queried, consumes:\n- Heap (field caches, segment readers loaded on Solr startup)\n- File descriptors (open index files)\n- CMS mark time (reachable objects that can never be collected)\n\n### Naming convention\nBOA creates cores following the pattern `oct.<user>.<domain>`. Legacy formats\ninclude `solr.<user>.<domain>` and bare `<user>.<domain>`. Any directory in\n`/var/solr7/data/` not matching an active vhost+alias pair is an orphan\ncandidate.\n\n### Detecting accumulation\n```bash\n# Quick count — healthy servers have tens, not hundreds\nls /var/solr7/data/ | wc -l\n\n# Full inventory with index age and size\nfor d in /var/solr7/data/*/data/index; do\n  core=$(echo \"$d\" | cut -d'/' -f5)\n  age=$(( ( $(date +%s) - $(stat -c %Y \"$d\") ) / 86400 ))\n  size=$(du -sh \"${d%/data/index}\" 2>/dev/null | cut -f1)\n  echo \"${age}d  ${size}  $core\"\ndone | sort -n\n```\n\n## 3. Orphan Core Cleanup — Manual Checks\n\nThese commands are used to classify cores before taking action. 
Run them\nbefore any cleanup to understand what you're dealing with.\n\n### Full audit: age + vhost + alias + disabled state\n```bash\nfor d in /var/solr7/data/oct.*/; do\n  core=$(basename \"$d\")\n  user=$(echo \"$core\" | cut -d'.' -f2)\n  domain=$(echo \"$core\" | cut -d'.' -f3-)\n  idx_age=$(( ( $(date +%s) - $(stat -c %Y \"${d}data/index\") ) / 86400 ))\n  vhost=\"/data/disk/${user}/config/server_master/nginx/vhost.d/${domain}\"\n  alias_file=\"/data/disk/${user}/.drush/${domain}.alias.drushrc.php\"\n  has_vhost=$([ -f \"$vhost\" ]      && echo YES || echo NO)\n  has_alias=$([ -f \"$alias_file\" ] && echo YES || echo NO)\n  disabled=$(grep -q \"Do not reveal Aegir front-end URL here\" \"$vhost\" \\\n    2>/dev/null && echo YES || echo NO)\n  echo \"${idx_age}d  vhost=${has_vhost}  alias=${has_alias}  disabled=${disabled}  $domain\"\ndone | sort -n\n```\n\n### Check vhost enabled/disabled state\n```bash\ngrep \"Do not reveal Aegir front-end URL here\" \\\n  /data/disk/<user>/config/server_master/nginx/vhost.d/<domain>\n```\nTwo matching lines = site is disabled/parked.\n\n### Check Aegir alias (provisioning record)\n```bash\nls /data/disk/<user>/.drush/<domain>.alias.drushrc.php\n```\n\n### Check solr_integration_module per site\nA site with `solr_integration_module` explicitly set in `boa_site_control.ini`\nis actively managed — its core will be recreated by `manage_solr_config.sh`\nif archived. These should not be archived based on index age alone.\n```bash\nfor d in /var/solr7/data/oct.*/; do\n  core=$(basename \"$d\")\n  user=$(echo \"$core\" | cut -d'.' -f2)\n  domain=$(echo \"$core\" | cut -d'.' 
-f3-)\n  idx_age=$(( ( $(date +%s) - $(stat -c %Y \"${d}data/index\") ) / 86400 ))\n  alias_file=\"/data/disk/${user}/.drush/${domain}.alias.drushrc.php\"\n  if [ -f \"$alias_file\" ]; then\n    site_path=$(grep \"site_path'\" \"$alias_file\" \\\n      | cut -d: -f2 | awk '{print $3}' | sed \"s/[\\,']//g\")\n    ctrl=\"${site_path}/modules/boa_site_control.ini\"\n    solr_mod=$([ -f \"$ctrl\" ] \\\n      && grep \"^solr_integration_module\" \"$ctrl\" 2>/dev/null \\\n      || echo \"not-set\")\n  else\n    solr_mod=\"no-alias\"\n  fi\n  echo \"${idx_age}d  solr=${solr_mod}  $domain\"\ndone | sort -n\n```\n\n### Three-tier classification\nBefore taking action, classify each orphan candidate:\n\n| Tier | Condition | Safe threshold |\n|------|-----------|----------------|\n| 1 | No vhost, OR vhost with no Aegir alias | 14 days |\n| 2 | Vhost + alias present, no `solr_integration_module` | 60 days |\n| Protected | `conf/.protected.conf` present, OR `solr_integration_module` set | Never archive automatically |\n\nStaleness is measured on `data/index/` mtime — Lucene only updates this on\nactual segment commits. `data/` mtime is unreliable because Solr keeps it\nperpetually fresh via tlog and write.lock even on idle cores.\n\n## 4. 
Backup Archive Inspection & Recovery\n\nOrphan cores are never deleted — they are unloaded from Solr's registry and\nmoved to `/var/backups/solr7/` or `/var/backups/solr9/` with a timestamp\nprefix for easy identification and recovery.\n\n### List archived cores with age and size\n```bash\nfor bkp in /var/backups/solr7/*/  /var/backups/solr9/*/; do\n  [ -d \"$bkp\" ] || continue\n  archived_ts=$(basename \"$bkp\" | cut -d'-' -f1-2)\n  core=$(basename \"$bkp\" | cut -d'-' -f3-)\n  size=$(du -sh \"$bkp\" 2>/dev/null | cut -f1)\n  idx_age=\"\"\n  if [ -d \"${bkp}data/index\" ]; then\n    idx_mtime=$(stat -c %Y \"${bkp}data/index\" 2>/dev/null || echo 0)\n    idx_age=$(( ( $(date +%s) - idx_mtime ) / 86400 ))d\n  fi\n  echo \"${archived_ts}  idx=${idx_age}  size=${size}  ${core}\"\ndone | sort\n```\n\n### Count and total disk usage\n```bash\nls /var/backups/solr7/ | wc -l\ndu -sh /var/backups/solr7/\n```\n\n### Recovery procedure\n\n> **Critical:** Use `CREATE`, not `RELOAD`. `RELOAD` only works for cores\n> already registered in Solr's registry. An archived core was unloaded before\n> being moved, so Solr has no record of it — `CREATE` re-registers the\n> existing directory without touching any index files.\n\n```bash\nport=9077                        # 9099 for Solr 9\ncore=\"oct.o1.example.com\"\nts=\"20260418-222802\"             # timestamp prefix from backup dir name\nbkp=\"/var/backups/solr7/${ts}-${core}\"\ndest=\"/var/solr7/data/${core}\"   # /var/solr9/data/ for Solr 9\n\nmv \"${bkp}\" \"${dest}\"\nchown -R solr7:solr7 \"${dest}\"   # solr9:solr9 for Solr 9\ncurl \"http://127.0.0.1:${port}/solr/admin/cores?action=CREATE&name=${core}&instanceDir=${dest}\"\n```\n\nSuccessful response:\n```json\n{\"responseHeader\":{\"status\":0,\"QTime\":300},\"core\":\"oct.o1.example.com\"}\n```\n\n`status:400 \"No such core\"` means you used RELOAD instead of CREATE.\n\n## 5. 
Core Health Checks\n\n### List all registered cores\n```bash\ncurl -s \"http://127.0.0.1:9077/solr/admin/cores?action=STATUS&wt=json\" \\\n  | python3 -m json.tool | grep -E '\"name\"|\"instanceDir\"|\"uptime\"'\n```\n\n### Check key index metrics per core\n```bash\ncurl -s \"http://127.0.0.1:9077/solr/admin/cores?action=STATUS&core=<corename>&wt=json\" \\\n  | python3 -c \"\nimport json,sys\nd=json.load(sys.stdin)\nidx=d['status']['<corename>']['index']\nprint(f'docs:     {idx[\\\"numDocs\\\"]:,}')\nprint(f'maxDoc:   {idx[\\\"maxDoc\\\"]:,}')\nprint(f'deleted:  {idx[\\\"maxDoc\\\"]-idx[\\\"numDocs\\\"]:,}')\nprint(f'segments: {idx[\\\"segmentCount\\\"]}')\nprint(f'size:     {idx[\\\"sizeInBytes\\\"]/1073741824:.2f} GB')\n\"\n```\n\n### Warning thresholds\n\n| Metric | Warning threshold | Action |\n|--------|-------------------|--------|\n| Deleted docs | >20% of maxDoc | `expungeDeletes` or optimize |\n| Segment count | >50 | Review merge policy |\n| Index size | >500MB | Informational — monitor |\n\n### Check cache stats\n```bash\ncurl \"http://127.0.0.1:9077/solr/admin/mbeans?cat=CACHE&stats=true\"\n```\n\n### Reload a registered core's config\n```bash\ncurl \"http://127.0.0.1:9077/solr/admin/cores?action=RELOAD&core=<corename>\"\n```\n\n### Unload a core without deleting files\n```bash\ncurl \"http://127.0.0.1:9077/solr/admin/cores?action=UNLOAD&core=<corename>&deleteIndex=false&deleteDataDir=false&deleteInstanceDir=false\"\n```\n\n## 6. 
Index Optimization\n\n### When to optimize\n- Deleted doc ratio consistently above 20-30%\n- Segment count above 50 (indicates merge policy not keeping up)\n- After a large bulk delete/reindex operation\n\n### Check facet breakdown of what's indexed\nUseful for understanding why doc counts are high on non-standard deployments:\n```bash\ncurl -s \"http://127.0.0.1:9077/solr/<corename>/select?q=*:*&rows=0&facet=true&facet.field=ss_type&facet.limit=20&wt=json\" \\\n  | python3 -c \"\nimport json,sys\nd=json.load(sys.stdin)\ncounts=d['facet_counts']['facet_fields']['ss_type']\npairs=list(zip(counts[::2],counts[1::2]))\nfor name,count in sorted(pairs,key=lambda x:-x[1]):\n    print(f'{count:>12,}  {name}')\n\"\n```\n\n### expungeDeletes — preferred for live indexes\nMerges only segments with deleted docs above threshold. Much cheaper than\nfull optimize — can run during business hours on a live index:\n```bash\ncurl \"http://127.0.0.1:9077/solr/<corename>/update?expungeDeletes=true&waitFlush=false\"\n```\n\n### Full optimize — use with caution\nForces all segments to merge into one. Required when deleted doc ratio is\nvery high and `expungeDeletes` hasn't been keeping up. On large indexes (10GB+)\nthis can take 30-120 minutes and will use significant I/O. 
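Since a full merge temporarily needs roughly as much free space again as the index itself, it is worth checking headroom mechanically before starting. A minimal pre-flight sketch — the helper name is hypothetical (not part of BOA), and paths follow the placeholder convention used elsewhere in this document:

```shell
#!/bin/bash
# Hypothetical pre-flight helper (not part of manage_solr_config.sh):
# refuse to optimize when free space on the volume holding the index is
# smaller than the index itself, because a full merge temporarily writes
# the merged segments alongside the originals.
check_optimize_space() {
  local idx="$1" need_kb free_kb
  need_kb=$(du -sk "$idx" | cut -f1)
  free_kb=$(df -Pk "$idx" | awk 'NR==2 {print $4}')
  if [ "$free_kb" -lt "$need_kb" ]; then
    echo "SKIP: only ${need_kb}KB index but ${free_kb}KB free" >&2
    return 1
  fi
  echo "OK: ${free_kb}KB free for a ${need_kb}KB index"
}

# Usage (same <corename> placeholder convention as elsewhere in this doc):
#   check_optimize_space "/var/solr7/data/<corename>/data/index" || exit 1
```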
Run in `screen`.\n\nDuring the merge, disk usage temporarily doubles as Lucene writes the merged\noutput alongside the originals — ensure sufficient free space before starting.\nThe old files are atomically swapped and deleted only when the merge completes.\n\n```bash\nscreen -S solr-optimize\n# waitFlush=false returns immediately but merge continues in background\n# For a blocking call that waits for completion:\ncurl -v \"http://127.0.0.1:9077/solr/<corename>/update?optimize=true&maxSegments=1&waitFlush=true&waitSearcher=true\"\n```\n\nMonitor progress:\n```bash\nwatch -n10 \"ls -lht /var/solr7/data/<corename>/data/index/ | head -8 \\\n  && echo '---' \\\n  && du -sh /var/solr7/data/<corename>/data/index/\"\n```\n\n### Expected results after optimize\n\n| Metric | Typical before | Typical after |\n|--------|---------------|---------------|\n| Deleted docs | 20-35% of maxDoc | ~0% |\n| Segment count | 10-20 | 1-3 |\n| Index size | 100% | 60-80% (deleted space reclaimed) |\n\nDo not interrupt an in-progress optimize. Lucene writes are atomic — the old\nsegments are untouched until the merge completes. Interrupting wastes the work\ndone so far and may leave temporary merge files that need cleanup on restart.\n\n## 7. High-Write Core Tuning\n\nStandard BOA `solrconfig.xml` defaults are designed for typical Drupal content\nsites: moderate document counts, periodic indexing, read-heavy query patterns.\nSome sites use Drupal as an API layer over large operational datasets (ERP\nsystems, inventory databases, CRM backends) with very different characteristics:\n\n- Millions of documents (product variants, orders, accounts, stock entries)\n- Per-document commits from the application layer at sub-second frequency\n- Continuous writes throughout business hours\n- High deleted doc accumulation from record updates\n\nThese workloads require specific tuning. 
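Before tuning, it can help to quantify the write pressure by bucketing `start commit` lines in `solr.log` per minute. A small sketch, assuming log lines begin with `HH:MM:SS` timestamps such as `14:20:57.467` — the helper name and log path are illustrative, not BOA conventions:

```shell
#!/bin/bash
# Count commit log lines per minute from stdin. Assumes each line starts
# with an HH:MM:SS timestamp, so the first five characters identify the
# minute bucket; busiest minutes are printed first.
commits_per_minute() {
  grep -F 'start commit' \
    | cut -c1-5 \
    | sort | uniq -c | sort -rn
}

# Usage against a live log (path is an assumption -- adjust per install):
#   tail -n 20000 /var/solr7/logs/solr.log | commits_per_minute | head
```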
Apply to the core's `conf/solrconfig.xml`\n(which must have `conf/.protected.conf` set to prevent BOA from overwriting it).\n\n### Identifying a high-write core\n\nSigns in `solr.log`:\n```\n# Per-document commits firing continuously\n14:20:57.467  start commit{optimize=false,softCommit=true,...}\n14:20:57.487  start commit{optimize=false,softCommit=true,...}\n14:20:57.692  start commit{optimize=false,softCommit=true,...}\n\n# Searcher churn from commit frequency\nPERFORMANCE WARNING: Overlapping onDeckSearchers=2\n\n# Verbose merge logging (infoStream=true)\nRegistered new searcher Searcher@45edbdec[corename] main{ExitableDirectoryReader...\n  Uninverting(_dsjh0(7.7.3):C8106628/2311351:[diagnostics={source=merge,...\n```\n\nSigns in the GC log:\n- Old gen steadily climbing between hard commits\n- Metaspace growing (classloaders for new searchers accumulating)\n- `Pause Young (Allocation Failure)` firing frequently\n\n### Key parameters to change\n\nSee [Section 10](#10-solrconfigxml-reference--high-write-workloads) for a\ncomplete reference config. The most impactful changes, in priority order:\n\n**1. `ramBufferSizeMB` — increase from 32 to 256**\nAt 32MB Lucene flushes to disk very frequently, creating many tiny segments\nthat immediately need merging. 256MB dramatically reduces flush frequency,\nsegment count, and background merge pressure.\n\n**2. Switch to `TieredMergePolicy`**\nThe default `LogByteSizeMergePolicy` (mergeFactor=4) does not handle high\ndeleted doc ratios well. `TieredMergePolicy` with `deletesPctAllowed=20`\nactively triggers merges when deleted docs exceed 20% of a segment, keeping\nthe ratio self-healing without manual optimize calls.\n\n**3. Disable `infoStream`**\n`<infoStream>true</infoStream>` logs every merge operation in extreme detail.\nOn a high-write core this floods `solr.log` with kilobytes of output per\nsecond. Set to `false` in production.\n\n**4. 
`autoCommit` — add `openSearcher=false`**\nWhen the application issues its own commits, the autoCommit in solrconfig is\njust a safety net. Setting `openSearcher=false` means a hard commit no longer\nforces a searcher reopen, saving significant overhead on large indexes.\n\n**5. `autoSoftCommit` — remove `maxDocs` trigger**\nWith per-document commits from the app, a `maxDocs=2000` soft commit trigger\nfires constantly. Remove it and use time-only (`maxTime=10000`) which is\ncorrect for near-realtime search on a continuously updated index.\n\n**6. `maxWarmingSearchers` — increase from 2 to 4**\nWith frequent soft commits, 2 warming searchers is too restrictive and causes\nthe `Overlapping onDeckSearchers` warnings. 4 provides headroom without\nallowing unbounded searcher accumulation.\n\n## 8. Automated Cleanup — manage_solr_config.sh\n\nThe `manage_solr_config.sh` script (run every 4 minutes) includes automated\norphan core cleanup with the following behaviour:\n\n### Execution order (important)\nCleanup runs **after** `_check_sites_list`, not before. This prevents a race\ncondition where a core with a stale index is archived and then immediately\nrecreated empty by `_add_solr` in the same script execution.\n\n### Throttle\nCleanup runs at most once every 6 hours via a sentinel file at\n`/var/backups/solr/.orphan_cleanup_last_run.pid`.\n\n### Classification logic\n1. Build active core sets from current vhosts, drush aliases, and\n   `boa_site_control.ini` files\n2. For each core on disk, determine tier (see table in Section 3)\n3. Check `conf/.protected.conf` — always skip if present\n4. Check `solr_integration_module` — always skip if set (core is actively\n   managed and would be recreated if archived)\n5. Apply staleness check on `data/index/` mtime with tier-appropriate threshold\n6. 
Archive (unload + mv) if all gates pass\n\n### Reading the cleanup log\n```bash\n# Most recent log\ncat /var/backups/solr/log/$(ls -t /var/backups/solr/log/ | head -1)\n\n# Orphan decisions only\ngrep -E \"^ORPHAN-\" /var/backups/solr/log/$(ls -t /var/backups/solr/log/ | head -1)\n\n# Summary line\ngrep -E \"Orphan cleanup port|Active cores|=== Orphan\" \\\n  /var/backups/solr/log/$(ls -t /var/backups/solr/log/ | head -1)\n```\n\n### Log line reference\n\n| Prefix | Meaning |\n|--------|---------|\n| `ORPHAN-FRESH` | No qualifying match but index too recent — kept with age shown |\n| `ORPHAN-SKIP` | Protected by `.protected.conf` or `solr_integration_module` |\n| `ORPHAN-CANDIDATE` | Passed all gates — being archived |\n| `ORPHAN-ARCHIVED` | Successfully moved to backup dir |\n| `ORPHAN-ERROR` | `mv` failed — core left in place |\n\n### Staleness thresholds\n\n| Variable | Default | Applies to |\n|----------|---------|------------|\n| `_ORPHAN_STALE_DAYS` | 14 | Tier 1: no vhost, or vhost with no Aegir alias |\n| `_ORPHAN_VHOST_STALE_DAYS` | 60 | Tier 2: vhost + alias present |\n\n### Health check\nAfter each cleanup run the script queries the Solr STATUS API and logs:\n- `HEALTH-INFO: N cores registered`\n- `HEALTH-INIT-FAIL` — core failed to load (classloader held, index broken)\n- `HEALTH-WARN ... high segment count=N` — merge policy not keeping up\n- `HEALTH-WARN ... deleted=N/M (X%)` — unmerged deletes above 20%\n- `HEALTH-WARN ... large index=NMB` — informational, >500MB\n\n## 9. 
Automated Index Optimization — manage_solr_config.sh\n\nThe `manage_solr_config.sh` script includes automated index optimization that\nruns after the health check on every invocation, throttled by a sentinel file.\n\n### Execution order in _start_up\n```\n_check_sites_list → _cleanup_orphan_cores → _check_solr_core_health → _run_optimize_if_due\n```\nOptimization runs last — after cleanup has removed orphans (no point\noptimizing a core about to be archived) and after the health check has logged\nthe current state for before/after comparison in the log.\n\n### Throttle\nRuns at most once every `_OPTIMIZE_INTERVAL_HOURS` hours (default 12) via a\nsentinel file at `/var/backups/solr/.optimize_last_run.pid`. Independent of\nthe orphan cleanup sentinel.\n\n### Thresholds\nThree constants at the top of the new block, easy to adjust:\n\n| Variable | Default | Meaning |\n|----------|---------|---------|\n| `_OPTIMIZE_DEL_PCT_THRESHOLD` | 20 | Deleted doc % that triggers `expungeDeletes` |\n| `_OPTIMIZE_FULL_THRESHOLD` | 30 | Deleted doc % that triggers full optimize |\n| `_OPTIMIZE_INTERVAL_HOURS` | 12 | Minimum hours between runs |\n\n### Decision logic per core\n\n| Deleted % | Protected core (`.protected.conf`) | Unprotected core |\n|-----------|-----------------------------------|-----------------|\n| < 20% | skip | skip |\n| 20–30% | `expungeDeletes` | `expungeDeletes` |\n| > 30% | `expungeDeletes` only | full optimize |\n\n**Why protected cores never get a full optimize:**\nProtected cores have custom `solrconfig.xml`, likely including `TieredMergePolicy`\nwith `deletesPctAllowed` already tuned for their workload. Forcing\n`maxSegments=1` on top of that would fight the policy. `expungeDeletes` is\nsafe because it works within whatever merge policy is configured.\n\n### waitFlush=false\nAll curl calls return immediately while Solr continues merging in the\nbackground. 
This keeps the script's runtime bounded regardless of index\nsize — a 28GB optimize running in the background will not block the next\n4-minute script invocation.\n\n### Reading the optimize log\n```bash\n# All optimize decisions from most recent run\ngrep -E \"^OPTIMIZE-\" /var/backups/solr/log/$(ls -t /var/backups/solr/log/ | head -1)\n\n# Summary header line\ngrep \"=== Index optimize\" /var/backups/solr/log/$(ls -t /var/backups/solr/log/ | head -1)\n```\n\n### Log line reference\n\n| Prefix | Meaning |\n|--------|---------|\n| `OPTIMIZE-OK` | Deleted ratio below threshold — no action |\n| `OPTIMIZE-EXPUNGE` | `expungeDeletes` triggered (ratio ≥ 20%) |\n| `OPTIMIZE-FULL` | Full optimize triggered (ratio ≥ 30%, unprotected) |\n| `OPTIMIZE-SKIP` | `python3` not available |\n| `OPTIMIZE-ERROR` | No response from Solr API |\n\n### Verifying a background optimize completed\nAfter the script triggers a full optimize with `waitFlush=false`, verify\ncompletion in the next run's health check output, or manually:\n```bash\ncore=\"oct.o1.example.com\"\ncurl -s \"http://127.0.0.1:9077/solr/admin/cores?action=STATUS&core=${core}&wt=json\" \\\n  | python3 -c \"\nimport json,sys\nd=json.load(sys.stdin)\nidx=d['status']['${core}']['index']\nprint(f'docs:     {idx[\\\"numDocs\\\"]:,}')\nprint(f'deleted:  {idx[\\\"maxDoc\\\"]-idx[\\\"numDocs\\\"]:,}')\nprint(f'segments: {idx[\\\"segmentCount\\\"]}')\nprint(f'size:     {idx[\\\"sizeInBytes\\\"]/1073741824:.2f} GB')\n\"\n```\nA completed full optimize shows `deleted: 0` and `segments: 1` (plus a few\ntiny segments for documents written since the merge started).\n\n## 10. solrconfig.xml Reference — High-Write Workloads\n\nComplete configuration for cores serving high-write operational datasets.\nApply to `conf/solrconfig.xml` after setting `conf/.protected.conf` to prevent\nBOA from overwriting with standard defaults.\n\nChanges from BOA standard defaults are annotated inline. 
All other settings\nare unchanged from the standard `drupal-4.4-solr-7.x` config.\n\n```xml\n<indexConfig>\n  <!-- Increased from 32MB: larger buffer = fewer flushes = fewer small\n       segments = less merge work. Most impactful single change for\n       high-write workloads. -->\n  <ramBufferSizeMB>256</ramBufferSizeMB>\n\n  <!-- Switched from LogByteSizeMergePolicy (mergeFactor=4).\n       TieredMergePolicy handles large indexes with high delete ratios\n       far better. deletesPctAllowed=20 triggers merges when deleted\n       docs exceed 20% of a segment — keeps ratio self-healing. -->\n  <mergePolicyFactory class=\"org.apache.solr.index.TieredMergePolicyFactory\">\n    <int name=\"maxMergeAtOnce\">10</int>\n    <int name=\"segmentsPerTier\">10</int>\n    <double name=\"maxMergedSegmentMB\">8192</double>\n    <double name=\"deletesPctAllowed\">20</double>\n  </mergePolicyFactory>\n\n  <lockType>${solr.lock.type:native}</lockType>\n  <reopenReaders>true</reopenReaders>\n\n  <deletionPolicy class=\"solr.SolrDeletionPolicy\">\n    <str name=\"maxCommitsToKeep\">1</str>\n    <str name=\"maxOptimizedCommitsToKeep\">0</str>\n  </deletionPolicy>\n\n  <!-- Changed from true: was logging every merge operation verbatim,\n       flooding solr.log on high-write indexes. -->\n  <infoStream>false</infoStream>\n</indexConfig>\n\n<updateHandler class=\"solr.DirectUpdateHandler2\">\n  <autoCommit>\n    <maxDocs>${solr.autoCommit.MaxDocs:10000}</maxDocs>\n    <!-- Increased from 120000ms: application issues its own commits,\n         so this is a safety net only. -->\n    <maxTime>${solr.autoCommit.MaxTime:300000}</maxTime>\n    <!-- Added: prevents searcher reopen on every hard commit.\n         Soft commit below handles search visibility. -->\n    <openSearcher>false</openSearcher>\n  </autoCommit>\n\n  <autoSoftCommit>\n    <!-- maxDocs trigger removed: with per-document commits from app,\n         maxDocs=2000 fired constantly causing searcher churn. 
-->\n    <maxTime>${solr.autoSoftCommit.MaxTime:10000}</maxTime>\n  </autoSoftCommit>\n\n  <updateLog>\n    <str name=\"dir\">${solr.data.dir:}</str>\n  </updateLog>\n</updateHandler>\n\n<query>\n  <!-- ... standard settings ... -->\n\n  <!-- Increased from 2: frequent soft commits exhaust 2 warming\n       searchers, causing \"Overlapping onDeckSearchers\" warnings. -->\n  <maxWarmingSearchers>4</maxWarmingSearchers>\n</query>\n```\n\n### Deploying config changes without restart\n```bash\n# Back up first\ncp /var/solr7/data/<corename>/conf/solrconfig.xml \\\n   /var/solr7/data/<corename>/conf/solrconfig.xml.bak.$(date +%Y%m%d)\n\n# Copy new config, then reload\ncp /path/to/new/solrconfig.xml \\\n   /var/solr7/data/<corename>/conf/solrconfig.xml\n\ncurl \"http://127.0.0.1:9077/solr/admin/cores?action=RELOAD&core=<corename>\"\n```\n\n## 11. Cron Maintenance Tasks\n\n### expungeDeletes and full optimize\nThese are now handled automatically by `_run_optimize_if_due` in\n`manage_solr_config.sh` (see [Section 9](#9-automated-index-optimization--manage_solr_configsh)).\nManual cron jobs for these are no longer needed on servers running the\ncurrent version of the script.\n\nFor servers not yet running the updated script, a manual monthly job:\n```bash\n# /etc/cron.d/solr-maintenance — only needed on older script versions\n0 3 1 * * root curl -s \"http://127.0.0.1:9077/solr/<corename>/update?expungeDeletes=true&waitFlush=false\" > /dev/null\n```\n\n### Backup archive pruning\nArchives accumulate over time. 
Cores archived more than 90 days ago with\nvery old indexes are safe to remove permanently:\n```bash\n# Review before deleting — check idx= age in listing\nfor bkp in /var/backups/solr7/*/; do\n  archived_ts=$(basename \"$bkp\" | cut -d'-' -f1-2)\n  # Convert timestamp to epoch and compare\n  archived_epoch=$(date -d \"${archived_ts:0:8} ${archived_ts:9:2}:${archived_ts:11:2}:${archived_ts:13:2}\" +%s 2>/dev/null || echo 0)\n  age_days=$(( ( $(date +%s) - archived_epoch ) / 86400 ))\n  [ $age_days -gt 90 ] && echo \"${age_days}d  $(du -sh \"$bkp\" | cut -f1)  $bkp\"\ndone\n```\n\n## Appendix: Before/After Reference Case\n\nThis table documents the results from a single remediation session on a\nproduction BOA server that had accumulated orphan cores over several years\nand hosted one high-write operational Solr core.\n\n| Metric | Before | After |\n|--------|--------|-------|\n| Cores in `/var/solr7/data/` | 278 | ~15 active |\n| Old gen heap (CMS) | 1,642,215K frozen | ~1,150,000K and reclaiming |\n| CMS cycle frequency | Every 2 seconds | Normal (every few minutes) |\n| GC log pattern | Frozen Old gen, zero reclamation | Normal young-gen collections |\n| High-write core size | 28.4 GB | 16.7 GB |\n| Deleted docs | 15,924,319 (34%) | 17,217 (0.06%) |\n| Segment count | 15 | 8 |\n| `solr.log` noise | Continuous merge infoStream | Normal operational logs |\n"
  },
  {
    "path": "docs/SSL.md",
    "content": "# Let's Encrypt free SSL certificates in Ægir\n\n  You can find these important Let's Encrypt topics discussed below:\n\n - Introduction\n - Before we begin... what is the most common mistake and how to avoid it?\n - How it works?\n - How to add Letsencrypt.org SSL certificate to hosted site?\n - How to add Letsencrypt.org SSL certificate to the Ægir Hostmaster site?\n - How to modify/renew Letsencrypt.org SSL certificate for SSL enabled site?\n - How to rename a site with SSL enabled?\n - Are there any requirements, limitations or exceptions?\n - How to enable live mode?\n - How to replace Let's Encrypt certificate with custom certificate?\n - How to replace existing custom certificates with Let's Encrypt certificates?\n - How to use Let's Encrypt certificate on the old, dedicated IP address?\n\nBOA-3.1.0 release opens a new era in SSL support for all hosted Drupal sites.\nThe old method of creating SSL proxy vhosts is officially deprecated,\nas explained in this document further below.\n\n## Before we begin... 
what is the most common mistake and how to avoid it?\n\n  The most common mistake is assuming that you don't need to actually read\n  everything here -- quickly scanning this documentation and assuming that\n  you already know it. Specifically, people tend to ignore this line,\n  listed in *requirements* below:\n\n  * All aliases must have valid DNS names pointing to your server IP address\n\n  Note that it says *ALL ALIASES* -- not just the aliases you have added!\n  This means that the \"Auto Alias\" Ægir generates for you -- like\n  www.foo.com when your site's name is foo.com, or foo.com when the main\n  site name is www.foo.com -- must also have valid DNS records, already\n  pointing to your Ægir default IP address.\n\n## How it works?\n\n  BOA leverages the dehydrated utility to talk to Letsencrypt.org servers,\n  and on the Ægir side it uses the new `hosting_le` extension, which replaces\n  the self-signed SSL certificates generated by Ægir with Let's Encrypt ones.\n  You can find more information on both at these URLs:\n\n    https://github.com/lukas2511/dehydrated\n    https://github.com/omega8cc/hosting_le\n\n## How to add Letsencrypt.org SSL certificate to hosted site?\n\n  In your Ægir control panel, go to the site's node Edit tab, then under\n  `SSL Settings > Encryption` choose `Enabled`, or `Required` if you want\n  to enable on-the-fly HTTP->HTTPS redirection. Now click `Save` and wait\n  until you see the Verify task complete. Done!\n\n  NOTE: SSL Settings are not available in the Add Site form, only in Edit.\n\n## How to add Letsencrypt.org SSL certificate to the Ægir Hostmaster site?\n\n```text\n  !!! ATTENTION\n  !!! ###===>>> Don't enable SSL option for the Hostmaster site in Ægir\n  !!! 
ATTENTION\n```\n\n  Let's Encrypt SSL for the Ægir control panel is auto-managed in BOA outside\n  of the control panel, and you should never enable it within the control\n  panel.\n\n## How to modify/renew Letsencrypt.org SSL certificate for SSL enabled site?\n\n  When you modify aliases or redirections, Ægir will re-create the SSL\n  certificate on the fly to match the current settings and list of aliases.\n\n  BOA runs auto-renewal checks for you weekly, and forces renewal when there\n  are fewer than 30 days left until the certificate expiration date\n  (Let's Encrypt certs are valid for up to 90 days before they have to be\n  renewed).\n\n  Every Verify task against an SSL-enabled site also runs this check on the fly.\n\n## How to rename a site with SSL enabled?\n\n  If you need to rename a site, you must first disable SSL and alias redirection,\n  run the migrate task to rename the site, then re-enable SSL and alias redirection.\n\n  Owing to this requirement, workflows that require renaming sites often, e.g. for\n  dev/staging/production environments, are usually better served by moving aliases\n  between site clones per https://learn.omega8.cc/how-to-debug-failed-migrate-task-328.\n\n## Are there any requirements, limitations or exceptions?\n\n  Yes, there are some:\n\n  * Let's Encrypt leverages TLS/SNI, which works only with modern browsers\n  * All aliases must have valid DNS names pointing to your server IP address\n  * Even with alias redirection enabled, all aliases are listed as SAN names\n\n  NOTE: Subject Alternative Names (SAN) is a feature that allows issuing\n  multi-domain / multi-subdomain SSL certificates -- it is automated in BOA.\n\n  The Let's Encrypt API for live, real certificates has its own requirements\n  and limits you should be aware of. 
Please visit their website for details:\n\n    https://letsencrypt.org/docs/rate-limits/\n\n  NOTE: All sites with one or more of the keywords listed below in the site's\n  main name, or in their redirection target, if used, will be ignored,\n  and they will receive only self-signed SSL certificates generated by Ægir\n  once you switch their SSL settings to `Enabled` or `Required`.\n\n    `.(dev|devel|temp|tmp|temporary|test|testing).`\n\n  Examples: `foo.temp.bar.org`, `foo.test.bar.org`, `foo.dev.bar.org`\n\n  NOTE: This exception rule doesn't apply to aliases that are not used\n  as a redirection target. Even aliases with the listed special keywords in\n  their names will be listed as SAN entries, as long as they are valid DNS\n  names.\n\n## How to enable live mode?\n\n  Live mode has been enabled by default since BOA-5.4.0.\n\n  NOTE: You may find some helpful details in the Verify task log -- look for\n  lines with `[hosting_le]` prefix.\n\n## How to replace Let's Encrypt certificate with custom certificate?\n\n  1. Create an empty control file (replace `example.com` with your site name):\n\n     `[aegir_root]/tools/le/.ctrl/dont-overwrite-example.com.pid`\n\n  2. Replace the `privkey.pem` symlink with a single file containing your\n     custom certificate key -- use `privkey.pem` as the filename in the\n     directory:\n\n     `[aegir_root]/tools/le/certs/example.com/`\n\n  3. Replace the `fullchain.pem` symlink with a single file containing your\n     custom certificate and all intermediate certificates beneath it -- use\n     `fullchain.pem` as the filename in the same directory:\n\n     `[aegir_root]/tools/le/certs/example.com/`\n\n  4. Replace the `chain.pem` symlink with a single file containing your\n     custom intermediate certificates only -- use `chain.pem` as the filename\n     in the same directory:\n\n     `[aegir_root]/tools/le/certs/example.com/`\n\n  5. Run the Verify task for your site in the Ægir control panel. 
Done!\n\n  NOTE: If you are on hosted BOA, you don't have access to this location\n  on your system, so please open a ticket at: https://omega8.cc/contact\n\n## How to replace existing custom certificates with Let's Encrypt certificates?\n\n  Here are the steps to start using Let's Encrypt certificates on sites\n  previously running SSL on a dedicated IP, as well as on the shared (default)\n  IP via the legacy `xboa ssl-gen` command:\n\n  1. Disable all HTTPS redirects, if configured within these sites.\n\n  2. Update DNS for domains using custom certificates to point them to the\n     default instance/server IP address, if they have used a dedicated IP\n     before, and run the `service nginx reload` command once the DNS update\n     has propagated.\n\n  3. Move away previous HTTP/HTTPS proxy vhosts for those sites, if they\n     still exist in the `/var/aegir/config/server_master/nginx/pre.d/` directory,\n     but only if they already use the default IP address, and reload nginx.\n\n  4. Create an empty `~/static/control/ssl-live-mode.info` file and wait 5 min.\n\n  5. Enable SSL in Ægir for those sites.\n\n  NOTE: If you are on hosted BOA, all you need are steps 1, 2, 4 and 5\n  from the list above. You don't need to reload nginx, move vhosts, etc.\n\n## How to use Let's Encrypt certificate on the old, dedicated IP address?\n\n  If your old dedicated IP address is still configured within the same system,\n  you can keep using it as before, because HTTPS vhosts managed by Ægir use\n  the wildcard \"listen\" directive, which enables you to use any active IP\n  address available within the same system.\n\n  However, if the previous dedicated IP was configured within a different\n  system -- as Omega8.cc did for its deprecated SSL add-on, where the old\n  IP address was used in remote proxy mode -- you can't use it any longer,\n  because it will not be managed via Ægir, and we don't offer the ability\n  to use dedicated IP addresses with Let's Encrypt certificates. 
You need to migrate\n  to the built-in Let's Encrypt support mode, as explained in the previous\n  section above.\n"
  },
  {
    "path": "docs/UPGRADE.md",
"content": "# How To: Upgrade Your BOA System\n\nAll standard non-major system upgrades can be run with **BARRACUDA** and all Ægir instances can be upgraded with **OCTOPUS**, but we highly recommend using the fully automated procedure explained in the Self-Upgrade How To: [docs/SELFUPGRADE.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SELFUPGRADE.md)\n\n- Importance of Keeping SKYNET Enabled in BOA: [docs/SKYNET.md](https://github.com/omega8cc/boa/tree/5.x-dev/docs/SKYNET.md)\n\n## Important Notes for Standard/Manual BOA Upgrade\n\nIf you haven't run a full barracuda+octopus upgrade to the latest BOA edition yet, don't use any of the partial upgrade modes explained further below. Once the new BOA latest is available, you must run *full* upgrades with the commands:\n\n```sh\nscreen\nwget -qO- http://files.aegir.cc/BOA.sh.txt | bash\nbarracuda up-lts\noctopus up-lts all force\n```\n\nFor a silent, logged mode with an email message sent once the upgrade is complete, but no progress displayed in the terminal window, you can alternatively run:\n\n```sh\nscreen\nwget -qO- http://files.aegir.cc/BOA.sh.txt | bash\nbarracuda up-lts log\noctopus up-lts all force log\n```\n\nNote that the silent, non-interactive mode will automatically say Y/Yes to all prompts and is thus useful for running auto-upgrades scheduled in cron.\n\n**Important:** Do not run any installer via `sudo`. You must be logged in as root or use `sudo -i` first.\n\nAll commands will honor settings in their respective config files:\n\n- `/root/.barracuda.cnf`\n- `/root/.o1.octopus.cnf`\n\nHowever, arguments specified on the command line will take precedence. 
See the upgrade modes explained below.\n\nTo make sure that you are using all available arguments in the correct order, please always check the built-in how-to:\n\n```sh\nbarracuda help\n```\n\n```sh\noctopus help\n```\n\n## Available Standard Upgrade Modes\n\nDownload and run (as root) the BOA Meta Installers first:\n\n```sh\nwget -qO- http://files.aegir.cc/BOA.sh.txt | bash\n```\n\nTo upgrade the system and Ægir Master Instance to the latest version, use:\n\n```sh\nscreen\nbarracuda up-lts\n```\n\nTo upgrade a selected Ægir Satellite Instance to the latest version, use:\n\n```sh\nscreen\noctopus up-lts o1 force\n```\n\nTo upgrade *all* Ægir Satellite Instances to the latest version, use:\n\n```sh\nscreen\noctopus up-lts all force\n```\n\n## Available Custom Upgrade Modes\n\nYou can append `log` as the last argument to every command, and it will write the output to a file instead of the console, in these locations respectively:\n\n- `/var/backups/reports/up/barracuda/*`\n- `/var/backups/reports/up/octopus/*`\n\nExamples:\n\n```sh\nscreen\nbarracuda up-lts log\noctopus up-lts all force log\n```\n\nA detailed backend log of the barracuda upgrade is always stored in `/var/backups/`.\n\nYou can append `system` as the last argument to the barracuda command, and it will upgrade only the system without running the Ægir Master Instance upgrade. 
It will also write the output to a file instead of the console:\n\n- `/var/backups/reports/up/barracuda/*`\n\nExample:\n\n```sh\nscreen\nbarracuda up-lts system\n```\n\nNote that while both `log` and `system` modes are \"silent\" (they don't display anything in your console), they will send the log via email to the address specified in the config file: `/root/.barracuda.cnf`.\n\nIt is recommended that you start `screen` before running commands in the \"silent\" mode, to avoid confusion or incomplete tasks when your SSH connection drops for any reason.\n\nIt is possible to set/force the upgrade mode on the fly using optional arguments: `{aegir|platforms|both}`\n\nNote that `none` (omitting the mode argument) is similar to `both`; however, `both` will force the aegir plus platforms upgrade, while `none` will also honor settings from the octopus instance cnf file, where currently only the `aegir` mode is defined with the `_HM_ONLY=YES` option.\n\nExamples:\n\n```sh\nscreen\n\noctopus up-lts o1 aegir\noctopus up-lts o1 platforms log\noctopus up-lts all aegir log\noctopus up-lts all platforms\n```\n\n## NOTE on Percona SQL Server versions management\n\nYou can upgrade Percona from the default 5.7 to 8.0, or from 8.0 to 8.4 LTS once you already run 8.0, during the `barracuda` upgrade with commands like:\n\n`barracuda up-lts percona-8.0` -- upgrades Percona from 5.7 to 8.0 (production ready)\n\n`barracuda up-lts percona-8.4` -- upgrades Percona from 8.0 to 8.4 (production ready)\n\n## NOTE on PHP versions management\n\nYou can install or modify the PHP versions active on your system during the `barracuda` upgrade with commands like:\n\n`barracuda php-idle disable` -- disables versions not used by any site on the system\n\n`barracuda php-idle enable` -- re-enables and re-builds versions previously disabled\n\n`barracuda up-lts php-8.5` -- forces the system to use only a single version (will cause brief site downtime)\n\n`barracuda up-lts php-max` -- installs all supported versions if not installed before\n\n`barracuda up-lts php-min` -- 
installs PHP 8.5, 8.4, 8.3, and uses 8.4 by default\n\nIf you wish to define your own set of installed PHP versions, you can do so by modifying variables in the `/root/.barracuda.cnf` file before running the upgrade, where you can find `_PHP_MULTI_INSTALL`, `_PHP_CLI_VERSION`, and `_PHP_FPM_VERSION` -- note that the `_PHP_SINGLE_INSTALL` variable must be left empty so it does not override the other related variables.\n\nHowever, you will also need to add dummy entries for versions not installed and not used yet to the `~/static/control/multi-fpm.info` file (on any Octopus instance), because otherwise `barracuda` will ignore versions not used yet and will automatically remove them from `_PHP_MULTI_INSTALL` on upgrade. These dummy entries should look like this:\n\n```sh\nplace.holder1.dont.remove 7.3\nplace.holder2.dont.remove 8.0\nplace.holder3.dont.remove 5.6\n```\n\nThe same logic protects existing and used versions from being removed even if they are not listed in the `_PHP_MULTI_INSTALL` variable (they will be re-added automatically if needed).\n\n## NOTE on Ægir Platforms\n\nSince BOA no longer installs all bundled Ægir platforms during Octopus installation and upgrades, you will need to add some keywords to `~/static/control/platforms.info` and run the Octopus upgrade to have these platforms added, as explained in the [documentation](https://github.com/omega8cc/boa/tree/5.x-dev/docs), which you can also find in the file `~/static/control/README.txt` within your Octopus account.\n"
  },
  {
    "path": "docs/cnf/barracuda.cnf",
    "content": "###\n### Barracuda\n###\n### Configuration stored in the /root/.barracuda.cnf file.\n### This example is for public install mode - see docs/INSTALL.md\n###\n### NOTE: the group of settings displayed below will *not* be overridden\n### on upgrade by the Barracuda script nor by this configuration file.\n### They can be defined only on initial Barracuda install.\n###\n_EASY_HOSTNAME=\"f-q-d-n\" #------ Hostname auto-configured via _EASY_SETUP\n_LOCAL_NETWORK_HN=\"\" #---------- Hostname if in localhost mode - auto-conf\n_LOCAL_NETWORK_IP=\"\" #---------- Web server IP if in localhost mode - auto-conf\n_MY_FRONT=\"master.f-q-d-n\" #---- URL of the Ægir Master Instance control panel\n_MY_HOSTN=\"f-q-d-n\" #----------- Allows to define server hostname\n_MY_OWNIP=\"123.45.67.89\" #------ Allows to specify web server IP if not default\n_SMTP_RELAY_HOST=\"\" #----------- Allows to configure simple SMTP relay (w/o pwd)\n_SMTP_RELAY_TEST=YES #---------- Allows to skip SMTP availability tests when NO\n_THIS_DB_HOST=localhost #------- Allows to use hostname in DB grants when FQDN\n###\n### NOTE: the group of settings displayed below\n### will *override* all listed settings in the Barracuda script,\n### both on initial install and upgrade.\n###\n_XTRAS_LIST=\"\" #---------------- See docs/NOTES.md for details on add-ons\n###\n_AUTOPILOT=NO #----------------- Allows to skip all Yes/No questions when YES\n_DEBUG_MODE=NO #---------------- Allows to enable verbose BOA mode when YES\n_DL_MODE=BATCH #---------------- Allows to switch to GIT src or OLD static\n_INCIDENT_REPORT=MINI #--------- Control incidents reports via OFF/ALL/MINI/CRIT\n_MY_EMAIL=\"my@email\" #---------- System admin email address\n###\n_CPU_CRIT_RATIO=6.1 #----------- Max load per CPU core before killing PHP/Drush\n_CPU_MAX_RATIO=4.1 #------------ Max load per CPU core before disabling Nginx\n_CPU_TASK_RATIO=3.1 #----------- Max load per CPU core to launch tasks queue\n_CPU_SPIDER_RATIO=2.1 
#--------- Max load per CPU core before blocking spiders\n###\n_CUSTOM_COLLATION_SQL= #-------- By default on the DB server: utf8mb4_unicode_ci\n_DB_BINARY_LOG=NO #------------- Allows to enable binary logging when YES\n_DB_SERIES=5.7 #---------------- Supported values: 5.7 8.0 8.4\n_DB_SERVER=Percona #------------ Install Percona or MySQL Server (8.4 from Trixie)\n_SQL_LOW_MAX_TTL=60 #----------- Max TTL for mysql process per problematic user\n_SQL_MAX_TTL=3600 #------------- Max TTL for mysql process per user (seconds)\n_USE_MYSQLTUNER=NO #------------ Use MySQLTuner to configure SQL limits when YES\n###\n_DNS_SETUP_TEST=YES #----------- Allows to skip DNS testing when NO\n_EXTRA_PACKAGES=\"\" #------------ Installs listed extra packages with apt\n_FORCE_GIT_MIRROR=\"\" #---------- Allows to use different mirror (deprecated)\n_LOCAL_DEBIAN_MIRROR= #--------- Allows to force non-default Debian mirror\n_LOCAL_DEVUAN_MIRROR= #--------- Allows to force non-default Devuan mirror\n_NEWRELIC_KEY= #---------------- Installs New Relic when license key is set\n###\n_ENABLE_GOACCESS=NO #----------- Generate statistics with GoAccess when YES\n_MAGICK_FROM_SOURCES=NO #------- Builds ImageMagick from sources when YES\n###\n_NGINX_DOS_DIV_INC_NR=\"40\" #---- Default divisor for increments\n_NGINX_DOS_IGNORE=\"foo|bar\" #--- Keywords to ignore the requests if found\n_NGINX_DOS_INC_MIN=\"3\" #-------- Default min allowed number for increments\n_NGINX_DOS_LIMIT=399 #---------- Default 399/1999 limit of page views per IP\n_NGINX_DOS_LINES=1999 #--------- Default number of access.log lines to check\n_NGINX_DOS_LOG=SILENT #--------- Logging mode, can be SILENT, NORMAL or VERBOSE\n_NGINX_DOS_MODE=2 #------------- 1 or 2 (default)\n_NGINX_DOS_STOP=\"foo|bar\" #----- Keywords to trigger counter +5 increase if found\n###\n_NGINX_EXTRA_CONF=\"\" #---------- Allows to add custom options to Nginx build\n_NGINX_FORWARD_SECRECY=YES #---- Installs PFS Nginx support when YES 
(default)\n_NGINX_HEADERS=NO #------------- Installs Nginx Headers More support when YES\n_NGINX_LDAP=NO #---------------- Installs LDAP Nginx support when YES\n_NGINX_NAXSI=NO #--------------- Installs NAXSI WAF when YES - experimental\n_NGINX_SPDY=YES #--------------- Installs SPDY Nginx support when YES (default)\n_NGINX_WORKERS=AUTO #----------- Allows to override AUTO with a valid integer\n###\n_PHP_CLI_VERSION=8.4 #---------- PHP-CLI for Master Instance\n_PHP_EXTRA_CONF=\"\" #------------ Allows to add custom options to PHP build\n_PHP_FPM_DENY=\"\" #-------------- Modify disable_functions -- see info below\n_PHP_FPM_VERSION=8.4 #---------- PHP-FPM for Master Instance\n_PHP_FPM_WORKERS=AUTO #--------- Allows to override AUTO with a valid integer\n_PHP_GEOS=NO #------------------ Installs GEOS for all PHP versions when YES\n_PHP_IONCUBE=NO #--------------- Installs ionCube for all PHP versions when YES\n_PHP_MONGODB=NO #--------------- Installs MONGODB for all PHP versions when YES\n_PHP_MULTI_INSTALL=\"8.3 7.4\" #-- PHP versions to install 8.5/4/3/2/1/0 7.4/3/2/1/0 5.6\n_PHP_SINGLE_INSTALL=\"\" #-------- Allows to force single PHP version, like: 8.3\n###\n_VALKEY_LISTEN_MODE=SOCKET #---- Valkey listen mode: SOCKET (recommended) or PORT\n_VALKEY_MAJOR_RELEASE=9 #------- Valkey major release version: 9, 8, 7\n_REDIS_LISTEN_MODE=SOCKET #----- Redis listen mode: SOCKET (recommended) or PORT\n_REDIS_MAJOR_RELEASE=7 #-------- Redis major release version: 5, 6 or 7\n_RESERVED_RAM=0 #--------------- Allows to reserve RAM (in MB) for non-BOA apps\n_SPEED_VALID_MAX=3600 #--------- Defines Speed Booster hourly cache TTL in sec\n_SSH_ARMOUR=NO #---------------- Allows to enhance OpenSSH security when YES\n_SSH_FROM_SOURCES=YES #--------- Allows to build OpenSSH from sources (default)\n_SSH_PORT=22 #------------------ Allows to configure non-standard SSH port\n_STRICT_BIN_PERMISSIONS=YES #--- Aggressively protect all binaries when YES\n_STRONG_PASSWORDS=YES #--------- 
Configurable length: 32-128, YES (64), NO (32)\n###\n_CUSTOM_CONFIG_CSF=NO #--------- Protects custom CSF config when YES\n_CUSTOM_CONFIG_LSHELL=NO #------ Protects custom Limited Shell config when YES\n_CUSTOM_CONFIG_VALKEY=NO #------ Protects custom Valkey config when YES\n_CUSTOM_CONFIG_REDIS=NO #------- Protects custom Redis config when YES\n_CUSTOM_CONFIG_SQL=NO #--------- Protects custom SQL config when YES\n###\n_AEGIR_UPGRADE_ONLY=NO #-------- Managed on the fly with 'aegir' keyword\n_SYSTEM_UP_ONLY=NO #------------ Managed on the fly with 'system' keyword\n###\n_MODULES_FIX=YES #-------------- Allows to skip weekly modules en/dis when NO\n_MODULES_SKIP=\"\" #-------------- Modules (machine names) to never auto-disable\n_PERMISSIONS_FIX=YES #---------- Allows to skip daily permissions fix when NO\n###\n### Barracuda\n###\n\n###\n### HINT: Check also control files docs in: docs/ctrl/system.ctrl\n###\n\n###\n### Extra, special purpose settings are listed below.\n###\n\n###\n### You can configure BOA to run automated upgrades to latest head version\n### for both Barracuda and all Octopus instances with three variables, empty\n### by default. 
All three variables must be defined to enable auto-upgrade.\n###\n### You can set _AUTO_UP_MONTH and _AUTO_UP_DAY to any date in the past or\n### future (like _AUTO_UP_MONTH=2 with _AUTO_UP_DAY=29) if you wish to enable\n### only weekly system upgrades.\n###\n### Remember that day/month upgrades will include complete upgrade to latest BOA\n### head for Barracuda and all Octopus instances, while weekly upgrade is\n### designed to run only 'barracuda up-lts system' upgrade.\n###\n### You can further modify the auto-upgrade by specifying either head or dev\n### with _AUTO_VER variable, plus you can include all supported PHP versions\n### with _AUTO_PHP variable set to \"php-min\" -- otherwise it will be ignored.\n###\n### Note that weekly system upgrade will start shortly after midnight on the\n### specified weekday, while the day/month upgrades for both Barracuda\n### and all Octopus instances will start at ~3 AM for system and Ægir Master\n### instance, and ~4 AM for all Octopus based Ægir instances.\n###\n### NOTE: All three _AUTO_UP_* variables must be defined to enable auto-upgrade.\n###\n_AUTO_UP_WEEKLY= #-------------- Day of week (1-7) for weekly system upgrades\n_AUTO_UP_MONTH= #--------------- Month (1-12) to define date of one-time upgrade\n_AUTO_UP_DAY= #----------------- Day (1-31) to define date of one-time upgrade\n_AUTO_VER=dev #----------------- The BOA version to use (dev by default)\n_AUTO_PHP= #-------------------- Useful to force php-min, otherwise ignored\n\n###\n### You can whitelist extra binaries to make them available for web server\n### requests, in addition to already whitelisted, known as safe binaries.\n###\n### Please be aware that you could easily open security holes by whitelisting\n### commands which may provide access to otherwise not available parts of\n### the system, because the exec() in PHP doesn't respect other limitations\n### like open_basedir directive.\n###\n### You should list only filenames, not full paths, for 
example:\n###\n###   _BACKEND_ITEMS_LIST=\"git foo bar\"\n###\n_BACKEND_ITEMS_LIST=\n\n###\n### The BOA Skynet auto-updates were initially limited to checking for a new\n### BOA release and notifying the system admin daily, until the system had been\n### upgraded to the latest stable release.\n###\n### Next, since people tend to forget about running the meta-installers update\n### before running a barracuda or octopus upgrade, and it generated a ton of\n### unneeded tickets, confusion and frustration, we automated these updates,\n### so all your meta-installers were updated daily.\n###\n### Then #drupageddon happened, and we realized that we could make all existing\n### BOA systems secure, auto-magically, in the first 60 minutes after the\n### #drupageddon alert was published -- if only we had a mechanism in place\n### to apply a trivial but critically important patch to all your D7 sites/\n### /codebases while you were on vacation, out of town, or just AFK anywhere.\n###\n### So we added Drupal core monitoring and auto-patching to make sure you\n### never run a vulnerable codebase again. 
To make it effective, we have scheduled\n### these checks to run hourly.\n###\n### We have then also added hourly updates for a few key scripts responsible\n### for your system security, self-monitoring and self-healing.\n###\n### Gradually it grew into its current incarnation, so at the moment BOA Skynet\n### auto-updates do these things for you, while you sleep:\n###\n### * Daily version/release check and notification\n### * Every 6 minutes update for all meta-installers and related tools\n### * Hourly check for D7 core vulnerability and patching if detected\n### * Hourly update for key BOA tools, monitors and self-healing agents\n### * Hourly check if your DNS resolver works as expected and repair if not\n###\n### While it is very convenient to have all this work done for you, and we\n### believe that it should still be enabled by default, we should make it\n### possible to opt out of all those auto-updates, if you prefer that your\n### BOA system never calls home, and whatever happens is totally under\n### your control.\n###\n### Now you can disable this convenient magic by adding the line:\n###\n###   _SKYNET_MODE=OFF\n###\n### NOTE: Critically important BOA tools will still be auto-updated\n###       every 6 minutes to keep your system ready for upgrade\n###       if/when needed and as initially intended.\n###\n_SKYNET_MODE=ON\n\n###\n### NOTE: the group of settings displayed below is never stored\n### permanently in this config file, since they are intended to be used\n### only when required/useful for some reason, and while they can be added\n### manually before running the barracuda up-{stable|head} command,\n### they will be either removed automatically to not affect\n### normal upgrades, or ignored afterwards.\n###\n\n###\n### You can force Nginx, PHP and/or DB server\n### reinstall, even if there are no updates\n### available, when set to 
YES.\n###\n_NGX_FORCE_REINSTALL=NO\n_PHP_FORCE_REINSTALL=NO\n_SQL_FORCE_REINSTALL=NO\n_GIT_FORCE_REINSTALL=NO\n\n###\n### Use YES to force installing everything\n### from sources again, even if there are\n### no updates available.\n###\n_FULL_FORCE_REINSTALL=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Jessie to Debian Stretch.\n###\n_JESSIE_TO_STRETCH=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Stretch to Debian Buster.\n###\n_STRETCH_TO_BUSTER=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Buster to Debian Bullseye.\n###\n_BUSTER_TO_BULLSEYE=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Bullseye to Debian Bookworm.\n###\n_BULLSEYE_TO_BOOKWORM=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Bookworm to Debian Trixie.\n###\n_BOOKWORM_TO_TRIXIE=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Jessie to Devuan Beowulf.\n###\n_JESSIE_TO_BEOWULF=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Stretch to Devuan Beowulf.\n###\n_STRETCH_TO_BEOWULF=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Buster to Devuan Beowulf.\n###\n_BUSTER_TO_BEOWULF=NO\n\n###\n### Use YES to run major system upgrade\n### from Devuan Beowulf to Devuan Chimaera.\n###\n_BEOWULF_TO_CHIMAERA=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Bullseye to Devuan Chimaera.\n###\n_BULLSEYE_TO_CHIMAERA=NO\n\n###\n### Use YES to run major system upgrade\n### from Devuan Chimaera to Devuan Daedalus.\n###\n_CHIMAERA_TO_DAEDALUS=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Bookworm to Devuan Daedalus.\n###\n_BOOKWORM_TO_DAEDALUS=NO\n\n###\n### Use YES to run major system upgrade\n### from Devuan Daedalus to Devuan Excalibur.\n###\n_DAEDALUS_TO_EXCALIBUR=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Trixie to Devuan Excalibur.\n###\n_TRIXIE_TO_EXCALIBUR=NO\n\n###\n### Use YES to 
enable The Hourly Hot DB Server Backups with Percona XtraBackup\n###\n### Once enabled, the system will use XtraBackup to create complete and very\n### fast, non-blocking backups of all databases on the system, every hour.\n### These backups will be compressed and rotated after 2 days.\n###\n### The recovery procedure shown below uses the latest, hourly, complete backup\n### of all databases hosted on the system. It should be used only for global\n### data recovery, as there is no option to reliably recover data per database,\n### so this method should be used as a last resort, when trying to recover from\n### disaster or human error - see the GitLab horror story: http://bit.ly/2jvJ5YG\n###\n### In theory you could try to copy over the data only from the affected\n### database directory manually, but then there will be conflicts in the binary\n### log which may even prevent the db server from starting properly,\n### and another InnoDB recovery procedure may be required.\n###\n### If you are not sure what to do, and you have never tried this before\n### at least a few times with good results, it's probably better to ask someone\n### more experienced for assistance.\n###\n### You can use any other existing hourly backup you can find in the\n### /data/disk/arch/hourly/ directory and replace the \"latest\" keyword\n### with the correct filename, for example: \"server.name.foo-170218-1518\"\n###\n### $ cd /data/disk/arch/hourly/\n### $ tar xjf latest.tar.bz2\n### $ service cron stop\n### $ sleep 180\n### $ service mysql stop\n### $ mkdir /tmp/mysql\n### $ mv /var/lib/mysql/* /tmp/mysql/\n### $ innobackupex --copy-back /data/disk/arch/hourly/latest\n### $ chown -R mysql:mysql /var/lib/mysql\n### $ chown -R mysql:mysql /var/log/mysql\n### $ chown -R mysql:mysql /run/mysqld\n### $ service mysql start\n### $ service cron start\n###\n_HOURLY_DB_BACKUPS=NO\n"
  },
  {
    "path": "docs/cnf/octopus.cnf",
    "content": "###\n### Octopus\n###\n### Configuration stored in the /root/.${_USER}.octopus.cnf file.\n### This example is for public install mode - see docs/INSTALL.md\n###\n### NOTE: the group of settings displayed below\n### will *override* all listed here settings in the Octopus script.\n###\n_USER=\"o1\" #-------------------- Ægir Octopus Instance system account name\n_MY_OCTO_EMAIL=\"my@email\" #----- Ægir Octopus Instance owner email\n_PLATFORMS_LIST=ALL #----------- Platforms to install - see docs/PLATFORMS.md\n_AUTOPILOT=NO #----------------- Allows to skip all Yes/No questions when YES\n_HM_ONLY=NO #------------------- Allows to upgrade Ægir Hostmaster only\n_DEBUG_MODE=NO #---------------- Allows to enable Drush debugging when YES\n_DL_MODE=BATCH #---------------- Allows to switch to GIT src or OLD static\n_MY_OWNIP= #-------------------- Allows to specify web server IP if not default\n_FORCE_GIT_MIRROR=\"\" #---------- Allows to use different mirror (deprecated)\n_THIS_DB_HOST=localhost #------- DB host depends on Barracuda setting (FQDN)\n_DNS_SETUP_TEST=YES #----------- Allows to skip DNS testing when NO\n_HOT_SAUCE=NO #----------------- Forces new platforms tree on install when YES\n_USE_CURRENT=YES #-------------- Forces new platforms tree on upgrade when NO\n_DEL_OLD_EMPTY_PLATFORMS=\"0\" #-- Delete empty platforms if verified > X-days-ago\n_DEL_OLD_BACKUPS=0 #------------ Delete Ægir/b-migrate backups if > X-days-ago\n_DEL_OLD_TMP=0 #---------------- Delete sites temp files if > X-days-ago\n_LOCAL_NETWORK_IP= #------------ Web server IP if in localhost mode - auto-conf\n_PHP_FPM_VERSION=8.4 #---------- PHP-FPM for Satellite Instance\n_PHP_CLI_VERSION=8.4 #---------- PHP-CLI for Satellite Instance\n_PHP_FPM_WORKERS=AUTO #--------- Allows to override AUTO with a valid integer\n_PHP_FPM_TIMEOUT=AUTO #--------- Allows to override default 180 when 60-180\n_PHP_FPM_DENY=\"\" #-------------- Modify the disable_functions list per 
instance\n_STRONG_PASSWORDS=YES #--------- Configurable length: 32-128, YES (64), NO (32)\n_SQL_CONVERT=NO #--------------- DB conversion when innodb (or YES), or myisam\n_RESERVED_RAM=0 #--------------- Allows to reserve RAM (in MB) for non-BOA apps\n_SITES_COLLATION_SQL= #--------- By default in sites hosted: utf8mb4_unicode_ci\n###\n### NOTE: the group of settings displayed below will be *overridden*\n### by config files stored in the /data/disk/o1/log/ directory,\n### but only on upgrade.\n###\n_DOMAIN=\"o1.f-q-d-n\" #---------- URL of the Ægir control panel\n_CLIENT_EMAIL= #---------------- Create client user if different than _MY_OCTO_EMAIL\n_CLIENT_OPTION=\"POWER\" #-------- Currently not used\n_CLIENT_SUBSCR=\"M\" #------------ Currently not used\n_CLIENT_CORES=\"1\" #------------- Currently not used\n###\n### Octopus\n###\n\n###\n### HINT: Check also control files docs in: docs/ctrl/system.ctrl\n###\n\n###\n### Extra, special purpose control files are listed below.\n###\n### NOTE: the group of control files listed below are intended to be used\n### by the instance owner to *overwrite* some settings stored in the\n### /root/.${_USER}.octopus.cnf file without system admin (root) assistance.\n###\n\nÆgir version provided by BOA is now fully compatible with PHP 8.5 and 8.4,\nso both can be used as default versions in the Ægir PHP configuration files:\n~/static/control/cli.info and ~/static/control/fpm.info\n\n!!! 
>>> PHP CAVEATS for Drupal core 7-10 versions:\n\n  => https://www.drupal.org/docs/7/system-requirements/php-requirements\n  => https://www.drupal.org/docs/system-requirements/php-requirements\n\n###\n### /data/disk/${_USER}/static/control/fpm.info\n###\n### This file, if exists and contains supported and installed PHP-FPM version\n### will be used by running every minute /var/xdrago/manage_ltd_users.sh\n### maintenance script to switch PHP-FPM version for this Octopus instance,\n### if different than defined in the /root/.${_USER}.octopus.cnf file, in the\n### _PHP_FPM_VERSION variable. It will also overwrite _PHP_FPM_VERSION value\n### there to avoid doing it over and over again every 5 minutes.\n###\n### IMPORTANT: If used, it will switch PHP-FPM for all Drupal sites\n### hosted on the instance, unless multi-fpm.info control file exists.\n###\n### Supported values for single PHP-FPM mode which can be written in this file:\n###\n### 8.5\n### 8.4\n### 8.3\n### 8.2\n### 8.1\n### 8.0\n### 7.4\n### 7.3\n### 7.2\n### 7.1\n### 7.0\n### 5.6\n###\n### NOTE: There must be only one line and one value in this control file.\n### Otherwise it will be ignored.\n###\n### NOTE: if the file doesn't exist, the system will create it and set to the\n### lowest available PHP version installed, not to the system default version.\n### This is to guarantee backward compatibility for instances installed\n### before upgrade to BOA-4.1.3, when the default PHP version was 5.6,\n### as otherwise after the upgrade the system would automatically switch such\n### accounts to the new default PHP version which is 8.1, and this could break\n### most of the sites hosted, never before tested for PHP 8.1 compatibility.\n###\n\n###\n### /data/disk/${_USER}/static/control/multi-fpm.info\n###\n### It is now possible to make all installed PHP-FPM versions available\n### simultaneously for sites on the Octopus instance with additional\n### control file:\n###\n### This file, if exists, will switch all 
sites listed in it to their\n### respective PHP-FPM versions as shown in the example below, while all\n### other sites not listed in multi-fpm.info will continue to use PHP-FPM\n### version defined in fpm.info instead, which can be modified independently.\n###\n### foo.com 8.5\n### bar.com 7.4\n### old.com 5.6\n###\n### NOTE: Each line in the multi-fpm.info file must start with main site name,\n### followed by single space, and then the PHP-FPM version to use.\n###\n\n###\n### /data/disk/${_USER}/static/control/cli.info\n###\n### This file, if exists and contains supported and installed PHP version\n### will be used by running every minute /var/xdrago/manage_ltd_users.sh\n### maintenance script to switch PHP-CLI version for this Octopus instance,\n### if different than defined in the /root/.${_USER}.octopus.cnf file, in the\n### _PHP_CLI_VERSION variable. It will also overwrite _PHP_CLI_VERSION value\n### there to avoid doing it over and over again every 5 minutes.\n###\n### Supported values which can be written in this file:\n###\n### 8.5\n### 8.4\n### 8.3\n### 8.2\n### 8.1\n### 8.0\n### 7.4\n### 7.3\n### 7.2\n### 7.1\n### 7.0\n### 5.6\n###\n### There must be only one line and one value in this control file.\n### Otherwise it will be ignored.\n###\n### NOTE: if the file doesn't exist, the system will create it and set to the\n### lowest available PHP version installed, not to the system default version.\n### This is to guarantee backward compatibility for instances installed\n### before upgrade to BOA-4.1.3, when the default PHP version was 5.6,\n### as otherwise after the upgrade the system would automatically switch such\n### accounts to the new default PHP version which is 8.1, and this could break\n### most of the sites hosted, never before tested for PHP 8.1 compatibility.\n###\n### IMPORTANT: this file will affect only Drush on command line and Drush\n### in Ægir backend, used for all tasks on hosted sites, but it will not\n### affect PHP-CLI version used by 
Composer on command line, because Composer\n### is installed globally and not per Octopus account, so it will use system\n### default PHP version, which is, since BOA-5.0.0, PHP 8.1 and can be\n### changed only by changing system default _PHP_CLI_VERSION in the file\n### /root/.barracuda.cnf and running barracuda upgrade.\n###\n\n###\n### /data/disk/${_USER}/static/control/platforms.info\n###\n### This file, if exists and contains a list of symbols used to define supported\n### platforms, allows to control/override the value of _PLATFORMS_LIST variable\n### normally defined in the /root/.${_USER}.octopus.cnf file, which can't be\n### modified by the Ægir instance owner with no system root access.\n###\n### IMPORTANT: If used, it will replace/override the value defined on initial\n### instance install and all previous upgrades. It takes effect on every future\n### Octopus instance upgrade, which means that you will miss all newly added\n### distributions, if they will not be listed also in this control file.\n###\n### Supported values which can be written in this file, listed in a single line\n### or one per line:\n###\n\n### Drupal 11.3\n#\n# DE3 — Drupal 11.3 prod/stage/dev\n# CK3 — Commerce v.3\n# CMS — Drupal CMS\n# SCR — Sector\n# THR — Thunder\n# VBX — Varbase 10\n\n### Drupal 11.2\n#\n# DE2 — Drupal 11.2 prod/stage/dev\n\n### Drupal 11.1\n#\n# DE1 — Drupal 11.1 prod/stage/dev\n\n### Drupal 10.6\n#\n# DX6 — Drupal 10.6 prod/stage/dev\n# FOS — farmOS\n# LGV — LocalGov\n# VB9 — Varbase 9\n\n### Drupal 10.5\n#\n# DX5 — Drupal 10.5 prod/stage/dev\n# OCS — OpenCulturas\n\n### Drupal 10.4\n#\n# DX4 — Drupal 10.4 prod/stage/dev\n\n### Drupal 10.3\n#\n# DX3 — Drupal 10.3 prod/stage/dev\n# DXP — DXPR Marketing\n# EZC — EzContent\n\n### Drupal 10.2\n#\n# DX2 — Drupal 10.2 prod/stage/dev\n# OFD — OpenFed\n# SOC — Social\n\n### Drupal 10.1\n#\n# DX1 — Drupal 10.1 prod/stage/dev\n# CK2 — Commerce v.2\n\n### Drupal 10.0\n#\n# DX0 — Drupal 10.0 prod/stage/dev\n\n### Drupal 
9\n#\n# DL9 — Drupal 9 prod/stage/dev\n# OLS — OpenLucius\n# OPG — Opigno LMS\n\n### Drupal 7\n#\n# DL7 — Drupal 7 prod/stage/dev\n# CK1 — Commerce v.1\n# UC7 — Ubercart\n\n### Drupal 6\n#\n# DL6 — Pressflow (LTS) prod/stage/dev\n# UC6 — Ubercart\n\n### You can also use special keyword 'ALL' instead of any other symbols to have\n### all available platforms installed, including newly added in all future BOA\n### system releases.\n###\n### Examples:\n#\n# DX2 DX3 SOC UC7\n# (or)\n# ALL\n\n###\n### IMPORTANT: Supported Drupal core versions and distributions have different\n### PHP versions requirements, while not all PHP versions out of currently\n### supported ten versions are installed by default.\n###\n### Ensure that you have corresponding PHP versions installed with barracuda\n### before attempting to install older Drupal versions and distributions.\n###\n### On hosted BOA contact your host if you need any legacy PHP installed again.\n###\n"
  },
  {
    "path": "docs/ctrl/platform.ctrl",
    "content": "Almost all previously used control files have been replaced\nwith ini files, which, while used primarily for PHP related\nvariables, include also other system related variables,\nused to configure backend system tasks.\n\nPlease check detailed how-to with all options listed in the template file:\n\n  aegir/conf/default.boa_platform_control.ini\n"
  },
  {
    "path": "docs/ctrl/site.ctrl",
    "content": "Almost all previously used control files have been replaced\nwith ini files, which, while used primarily for PHP related\nvariables, include also other system related variables,\nused to configure backend system tasks.\n\nPlease check detailed how-to with all options listed in the template file:\n\n  aegir/conf/default.boa_site_control.ini\n"
  },
  {
    "path": "docs/ctrl/system.ctrl",
    "content": "You can modify some system defaults on the fly by using control files,\nwhich will affect either some system-wide services, maintenance agents\nbehaviours and/or all hosted Ægir instances, so these controls belong\nto the system root, and are located, if needed, in the root home directory.\n\nThese system-level control files use .cnf file extension, so for extra\nclarity we will list here also some other .cnf files which either have\nspecial purpose or should never be edited, because they are managed\nby the BOA install and upgrade tools exclusively.\n\n\n ### The .cnf file to never touch\n\n @=> /root/.my.cnf\n\n  This is a mysql specific file, which holds your database server\n  master (root) password. This file allows you to access your db server\n  with root privileges when you are logged in as a system root without\n  the need to type mysql root password. It also allows various BOA specific\n  maintenance agents to monitor your system status and perform auto-healing,\n  databases maintenance, repairs and backups.\n\n  By the way, there is also directly associated /root/.my.pass.txt file\n  which includes mysql root password generated. You should never touch\n  nor modify this file as well.\n\n\n ### Special purpose .cnf files\n\n @=> /root/.barracuda.cnf\n\n  This is your BOA system master configuration file. Please read its\n  template in docs/cnf/barracuda.cnf for more information.\n\n @=> /root/.USER.octopus.cnf\n\n  This is your Octopus Satellite instance configuration file. If you have\n  more Octopus instances, you have a separate file per instance, where\n  the USER is a keyword for the instance main system account. 
Please read\n  its template in docs/cnf/octopus.cnf for more information.\n\n @=> /root/.ip.protected.vhost.whitelist.cnf\n\n  This file, if exists, allows to whitelist IP addresses (one valid IP per line)\n  for access to vhosts protected via valid shell login (chive, cgp, sqlbuddy).\n\n\n ### All other (empty) .cnf control files\n\n @=> /root/.mysqladmin.monitor.cnf\n\n  This file, if exists, allows to log the output of the `mysqladmin proc` command\n  in the /var/log/boa/mysqladmin.monitor.log file, every 15 seconds.\n\n @=> /root/.fast.cron.cnf\n\n  This file, if exists, allows to speed up the Ægir tasks and sites cron queues\n  on all hosted Octopus instances, so instead of running every minute, it will\n  run every 10 seconds. Note that while it may be handy during development,\n  it will cause higher system load, even with its built-in prevention from\n  running more than two concurrent queues, so it is not recommended to use on\n  production systems.\n\n @=> /root/.force.drupalgeddon.cnf\n\n  This file, if exists, allows to force running drupalgeddon checks daily\n  on all Octopus instances, even if they are not enabled with the\n  instance-level control file ~/static/control/drupalgeddon.info\n\n @=> /root/.force.sites.verify.cnf\n\n  This file, if exists, will result in all sites hosted on all Octopus\n  instances on the same BOA system being re-verified daily. 
Note that it works\n  only if _PERMISSIONS_FIX=YES is set in /root/.barracuda.cnf (default)\n\n @=> /root/.enable.newrelic.sysmond.cnf\n\n  This file, if exists, allows to run newrelic-sysmond service, otherwise\n  disabled for security reasons, because it exposes too much system level\n  information/details in the New Relic control panel.\n\n @=> /root/.use.local.nameservers.cnf\n\n  This file, if exists, allows to use original (or custom) nameservers provided\n  on the system install by your hosting provider.\n\n  It depends on existence of another file with custom name servers to use\n  listed, one per line: /var/backups/resolv.conf.vanilla -- for example:\n\n    nameserver 12.34.56.78\n    nameserver 12.34.56.00\n\n  The change will take effect on barracuda upgrade.\n\n @=> /root/.use.default.nameservers.cnf\n\n  This file, if exists, allows to revert BOA DNS cache configuration\n  to use Unbound server and Cloudflare/Cleaner/Google DNS again (default).\n\n  Note that to restore default DNS cache configuration on barracuda upgrade,\n  you must delete the /root/.use.local.nameservers.cnf file, if still exists.\n\n @=> /root/.hr.monitor.cnf\n\n  This file, if exists, enables more aggressive Nginx abuse guard mode and\n  is recommended on systems often attacked by spambots and/or aggressive\n  crawlers with false UA identity.\n\n @=> /root/.no.fpm.cpu.limit.cnf\n\n  This file, if exists, allows to disable aggressive php-fpm processes\n  monitoring and killing if any is using really too much CPU power.\n\n @=> /root/.no.sql.cpu.limit.cnf\n\n  This file, if exists, allows to disable mysql processes monitoring\n  and restarting mysql server if any is using really too much CPU power.\n\n @=> /root/.no.swap.clear.cnf\n\n  This file, if exists, allows to skip the otherwise default procedure\n  designed to clear system (memory) swap daily.\n\n @=> /root/.no.sysctl.update.cnf\n\n  This file, if exists, allows to skip /etc/sysctl.conf update procedure\n  designed to adjust 
it on every barracuda upgrade, if needed.\n\n @=> /root/.mysql.yes.new.password.cnf\n\n  This file, if it exists, causes a new, random mysql root password to be\n  generated automatically on every barracuda upgrade. This file is ignored if\n  the /root/.mysql.no.new.password.cnf file also exists, but the behaviour\n  is enabled by default.\n\n @=> /root/.mysql.no.new.password.cnf\n\n  This file, if it exists, makes the system ignore the\n  /root/.mysql.yes.new.password.cnf file, as mentioned above.\n\n @=> /root/.mysql.force.legacy.backup.cnf\n\n  This file, if it exists, overrides the default mysql backup mode (separate\n  SQL dump files per table) and forces the legacy single-file mysqldump mode\n  for the automatic daily backups of all databases, stored in the\n  /data/disk/arch/sql/ directory. It will not affect any Octopus instance\n  configuration with a ~/static/control/MyQuick.info file present, since that\n  file is used only by Ægir tasks.\n\n @=> /root/.valkey.no.new.password.cnf\n\n  This file, if it exists, skips the otherwise default procedure designed\n  to generate a new Valkey password on every barracuda upgrade.\n\n @=> /root/.redis.no.new.password.cnf\n\n  This file, if it exists, skips the otherwise default procedure designed\n  to generate a new Redis password on every barracuda upgrade.\n\n @=> /root/.allow.mc.cnf\n\n  This file, if it exists, opens access for all limited shell users to\n  Midnight Commander (mc) -- a command line file manager, a Unix clone of the\n  Norton Commander known from the ancient DOS days.\n\n  While very useful for novice users, it can weaken your system access\n  separation, because it doesn't respect the built-in virtual chroot/jail\n  normally enforced both in the limited shell and in SFTP via MySecureShell.\n  Do not use it unless you never open shell access to untrusted users.\n\n @=> /root/.high_traffic.cnf\n\n  Recommended if you have very busy site(s) hosted. It prevents PHP-FPM\n  restarts when the system detects a segfault. 
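For example, you can enable this mode simply by creating the empty control file:\n\n    touch /root/.high_traffic.cnf\n\n  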
Note that unlike before,\n  even without this file, PHP-FPM will not be restarted at midnight, thanks\n  to some improvements elsewhere in the self-healing procedures.\n\n @=> /root/.giant_traffic.cnf\n\n  Recommended if you have extremely busy site(s) hosted. It prevents the\n  Speed Booster (Nginx) cache cleanup (for entries older than 1 day), which\n  happens hourly. However, even with very busy sites hosted, it will be very\n  rarely needed, since the cleanup procedure has been improved to not cause\n  load spikes, even under \"Giant Traffic\" pressure.\n\n @=> /root/.skip_cleanup.cnf\n\n  This file, if it exists, skips all the daily cleanup otherwise run via\n  /var/xdrago/graceful.sh -- it works like /root/.giant_traffic.cnf and\n  additionally disables the /tmp/ and /opt/tmp/ cleanup and the Jetty\n  restart.\n\n @=> /root/.skip_duplicity_monthly_cleanup.cnf\n\n  This file, if it exists, skips the forced duplicity backup cleanup, which\n  is otherwise run at the beginning (randomly on day 1-5) of each month.\n\n @=> /root/.randomize_duplicity_full_backup_day.cnf\n\n  This file, if it exists, randomizes the duplicity full backup schedule,\n  which is otherwise set to run on Sunday for backboa and Saturday for\n  duobackboa. 
The full backup day (Mon-Sun) will be randomized monthly, or\n  one time only if the /root/.skip_duplicity_monthly_cleanup.cnf file exists.\n  This feature is useful when you have many VM/BOA instances running\n  on the same machine.\n\n @=> /root/.home.no.wildcard.chmod.cnf\n\n  This file, if it exists, avoids setting restrictive (but recommended)\n  permissions on all directories in the /home/* directory tree, where\n  non-system users normally have their account home directories.\n\n  Without this file the system will run 'chmod 700 /home/*' every 5 minutes.\n\n @=> /root/.my.batch_innodb.cnf\n\n  This file, if it exists, enables a non-standard procedure during the\n  nightly (global) databases backup, which normally starts at 3:01 AM.\n\n  This special procedure will be run only once per week, on Saturday.\n\n  It will run a three-step sequence on every database: repair/rebuild all\n  tables, truncate all cache* tables, and convert all tables to InnoDB,\n  which in turn also runs optimize.\n\n @=> /root/.my.optimize.cnf\n\n  This file, if it exists, enables a non-standard procedure during the\n  nightly (global) databases backup, which normally starts at 3:01 AM.\n\n  This special procedure will be run only once per month, on the last Sunday.\n\n  It will run a three-step sequence on every database: repair/rebuild all\n  tables, truncate all cache* tables, and optimize all tables.\n\n  While it usually defragments and shrinks the binary space used by database\n  tables, it may easily cause serious I/O load and subsequent system load\n  (and slowdown) if you happen to host many sites on a weak VPS or a machine\n  with slow disks or a CPU that is not fast enough.\n\n  Note: this is a system-wide setting.\n\n  There is a similar option, _SQL_CONVERT (defaults to NO), available in every\n  Octopus instance /root/.USER.octopus.cnf file, which has a very similar\n  purpose, because it will trigger an automatic 'to-innodb' or 'to-myisam'\n  smart conversion performed via the 'sqlmagic' 
tool, and since it uses the 'ALTER TABLE' command,\n  it also performs auto-optimization on the fly.\n\n  This optional conversion starts every Saturday at 3:01 AM and runs until\n  the agent completes all tasks included in the /var/xdrago/daily.sh script.\n\n  If _SQL_CONVERT=NO is set, the conversion mode can be enabled individually\n  and configured more precisely with the variable:\n\n    sql_conversion_mode\n\n  if set in the site and/or platform level active INI files:\n\n    boa_platform_control.ini\n    boa_site_control.ini\n\n  More info: https://omega8.cc/node/293\n\n  Please note that if you change it to _SQL_CONVERT=YES, the system will\n  ignore the sql_conversion_mode variables set in the active INI files, and\n  will instead force conversion to InnoDB format in all sites hosted on this\n  instance.\n\n  By the way, conversion to MyISAM format will still keep some tables in\n  InnoDB; the exceptions are defined with the regex:\n\n  (cache_[a-z_]+|cache|sessions|users|watchdog|accesslog)\n\n @=> /root/.my.restart_after_optimize.cnf\n\n  This file, if it exists, triggers a database server restart once all tables\n  in all databases are optimized. It will be ignored unless the\n  /root/.my.optimize.cnf control file mentioned above also exists.\n\n @=> /root/.upstart.cnf\n\n  This is a rarely needed file which skips stopping cron during barracuda\n  upgrade, to limit downtime for any running services, so it allows the\n  auto-healing to run all the time. Note that it may break the upgrade if\n  the auto-healing acts too fast and brings up a service which is stopped\n  and started during the upgrade.\n"
  },
  {
    "path": "docs/ini/platform/INI.md",
    "content": "\n## INI (platform level) located in sites/all/modules/\n\n```text\n;;\n;;  DO NOT EDIT THIS FILE, it is just a TEMPLATE with documentation included!\n;;\n;;  This is a platform level INI file template which can be used to modify\n;;  default BOA system behaviour for all sites hosted on this platform.\n;;\n;;  Copy this file as boa_platform_control.ini into sites/all/modules directory,\n;;  then uncomment lines for any settings you want to modify, to make it active.\n;;  All settings are initially listed with system defaults, for reference.\n;;\n;;  Note that it takes ~60 seconds to see any modification results in action\n;;  due to opcode caching enabled in PHP-FPM for all non-dev sites.\n;;\n```\n\n### INI (platform level) for Session Control\n\n```text\n;session_cookie_ttl = 86400\n;;\n;;  You can control session cookies expiration (TTL) per site and per platform.\n;;  The value (in seconds) of the session_cookie_ttl variable is used as\n;;  session.cookie_lifetime value.\n;;\n;;  BOA default defined in the system level global.inc file is 86400 == 24h.\n;;\n;;  We also recommend that you enable and configure built-in session_expire\n;;  module, which allows you to keep the sessions DB table tidy. 
Make sure that the\n;;  TTL set via the session_cookie_ttl variable below is *lower* than the TTL\n;;  configured in the session_expire module, because the module does not care\n;;  about PHP settings and simply deletes old entries from the sessions table\n;;  on cron run.\n```\n\n```text\n;session_gc_eol = 86400\n;;\n;;  You can control the session garbage collector (EOL) per site and per\n;;  platform. The value (in seconds) of the session_gc_eol variable is used\n;;  as the session.gc_maxlifetime value and specifies the number of seconds\n;;  after which data will be seen as 'garbage' and potentially cleaned up,\n;;  resulting in the $_SESSION variable being discarded and affected\n;;  authenticated users being logged out.\n;;\n;;  The BOA default defined in the system level global.inc file is\n;;  86400 == 24h.\n```\n\n### INI (platform level) for Redis Cache Settings Control\n\n```text\n;redis_old_nine_mode = FALSE\n;;\n;;  If you are running a Drupal 9 version older than 9.3, you need to\n;;  uncomment the line above and change it to TRUE to make Redis work again.\n```\n\n```text\n;redis_old_eight_mode = FALSE\n;;\n;;  If you are running a Drupal 8 version older than 8.8, you need to\n;;  uncomment the line above and change it to TRUE to make Redis work again.\n```\n\n```text\n;redis_lock_enable = TRUE\n;;\n;;  The blazing fast Redis lock implementation is also enabled by default.\n```\n\n```text\n;redis_path_enable = TRUE\n;;\n;;  The blazing fast Redis path cache implementation is also enabled by\n;;  default.\n```\n\n```text\n;redis_scan_enable = FALSE\n;;\n;;  The blazing fast Redis method for wildcard cache deletes. It uses the\n;;  non-atomic, non-blocking, and concurrency friendly SCAN command instead\n;;  of KEYS to perform cache wildcard key deletions. 
Not enabled by default, because\n;;  it may cause serious yet random problems -- see this comment for details:\n;;  https://www.drupal.org/node/2851625#comment-11963867\n```\n\n### INI (platform level) for Redis Cache Advanced Settings Control\n\n```text\n;redis_flush_forced_mode = TRUE\n;;\n;;  The more aggressive cache flush mode is now enabled by default, since it\n;;  will further improve your site's performance, but you can still disable\n;;  it with FALSE below, if you wish, after some testing.\n;;\n;;  NOTE: This option, enabled by default, may cause mysterious and random\n;;        WSOD depending on the site's dependence on entries in the cache,\n;;        because it limits each cache entry TTL to 24 hours max, hence any\n;;        module using cacheBackendInterface::CACHE_PERMANENT will be\n;;        surprised by suddenly and mysteriously missing entries. If that\n;;        happens, uncomment this line and set it to FALSE below.\n;;\n;;  Remember to uncomment the line above if you want to modify the default\n;;  settings.\n;;\n;;  When enabled, it will automatically set a more aggressive cache flush\n;;  mode in general, and a very aggressive one for selected cache bins, as\n;;  listed below along with the redis integration module defaults, which are\n;;  active when this option is explicitly set to FALSE.\n;;\n```\n```php\n;;    $conf['redis_perm_ttl']                 = 86400; // 24 hours max\n;;    $conf['redis_flush_mode']               = 1; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_page']    = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_block']   = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_menu']    = 2; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_metatag'] = 2; // Redis default is 0\n```\n```text\n;;\n;;  Note that even with this option enabled, you can easily override these\n;;  values or configure completely custom modes, both for the wildcard option\n;;  redis_flush_mode and per cache bin, in the 
local.settings.php file.\n;;\n;;  Please refer to the module README for more information on all available\n;;  advanced flush modes: http://bit.ly/1drmi35\n```\n\n```text\n;redis_exclude_bins = FALSE\n;;\n;;  Sometimes you may want to exclude some problematic cache bins from Redis\n;;  so they will use the default SQL engine, at least until the related issue\n;;  is fixed either in your contrib code or in the Redis integration module.\n;;\n;;  Normally you had to edit the local.settings.php file, which is both\n;;  tedious and dangerous because of the extra steps involved\n;;  (https://omega8.cc/node/230), to add a line, for example:\n;;  $conf['cache_class_cache_foo'] = 'DrupalDatabaseCache';\n;;  Plus, it had to be done separately for every site.\n;;\n;;  Now you can simply list the cache bins to exclude below, comma separated,\n;;  either in the site or platform level active INI file.\n;;\n;;  Example: redis_exclude_bins = \"cache_form,cache_foo,cache_bar\"\n```\n\n```text\n;redis_cache_disable = FALSE\n;;\n;;  Normally you should never disable Redis, except for debugging rare\n;;  issues. If you are sure you need to disable Redis for all sites on this\n;;  platform, uncomment the line above and set the value to TRUE.\n```\n\n### INI (platform level) for Nginx Microcache Control\n\n```text\n;speed_booster_anon_cache_ttl = 10\n;;\n;;  Speed Booster uses the Nginx microcaching mode by default, with just\n;;  10 seconds both for anonymous visitors and logged in users. All known\n;;  robots/crawlers and search engine spiders are forced to accept up to\n;;  24 hours of cache TTL. Below you can modify the (10 seconds) default for\n;;  human, anonymous visitors. Uncomment the line above and set any numeric\n;;  value you prefer (in seconds) to override the system default\n;;  (10 seconds). 
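For example, to allow a 5-minute microcache for\n;;  human anonymous visitors (an illustrative value -- pick what fits your\n;;  traffic):\n;;\n;;    speed_booster_anon_cache_ttl = 300\n;;\n;;  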
You may want to enable the Purge\n;;  and Expire modules in all sites on this platform, so that any new or\n;;  modified node, added comment etc. will selectively auto-purge the related\n;;  cache entries, to avoid serving stale content for an extended time\n;;  (depending on the TTL configured). Note that the value must be higher\n;;  than 10 or it will be ignored.\n```\n\n```text\n;disable_drupal_page_cache = FALSE\n;;\n;;  With the default Speed Booster TTL set to just 10 seconds to achieve the\n;;  microcaching mode, disabling Drupal's own page cache will significantly\n;;  degrade your site's performance on every request not served via the Speed\n;;  Booster front-end cache, because Drupal will have to build the page from\n;;  scratch every 10 seconds, making your site SLOW for every visitor not\n;;  lucky enough to hit an already cached page. This is why BOA enables\n;;  Drupal's own page cache by default, even if Boost, if used, will complain\n;;  about it. It allows Drupal to keep its internal full-page cache in the\n;;  super-fast Redis backend, making even those every-10-seconds requests\n;;  not cached in Speed Booster blazingly fast.\n;;\n;;  If for some reason this default BOA configuration breaks something\n;;  important in your site, like some page which should display non-cached\n;;  results for anonymous visitors -- even if they don't have a cookie set in\n;;  the browser, didn't submit any form etc., so no other method to make the\n;;  displayed page dynamic on the fly could be triggered -- you could (very\n;;  carefully) consider changing this variable to TRUE.\n;;\n;;  But please think twice before using this variable. 
While Redis will still\n;;  improve performance for all other cache bins, the cache_page bin will\n;;  not be used, and this will make your site much slower, randomly, even if\n;;  you increase the tiny speed_booster_anon_cache_ttl value above.\n;;\n;;  If you really have to disable this on some problematic URI, to guarantee\n;;  that the page will be as dynamic as possible also for anonymous visitors,\n;;  you may want to use a more granular method, like adding an exception for\n;;  the affected URI in your site's local.settings.php file:\n```\n```php\n;;    if (preg_match(\"/^\\/(?:foo|bar)/\", $_SERVER['REQUEST_URI'])) {\n;;      header('X-Accel-Expires: 1'); // This disables Speed Booster\n;;      $conf['cache'] = 0; // This disables page caching on the fly\n;;    }\n```\n\n### INI (platform level) for Drupal Sites Access Control\n\n```text\n;allow_anon_node_add = FALSE\n;;\n;;  When set to TRUE, allows anonymous users to add content. Best practice,\n;;  and the default, is FALSE, which results in a redirect to the site's\n;;  homepage. Note that this option also opens access to node editing.\n```\n\n```text\n;disable_admin_dos_protection = FALSE\n;;\n;;  When set to TRUE, allows anonymous visitors to access /admin* URLs, even\n;;  if only to see the 403 Access Denied message. Best practice, and the\n;;  default, is FALSE, which results in a redirect to the site's homepage.\n;;  This allows you to protect the site from DoS attempts, since /admin*\n;;  requests are never cached and always hit Drupal directly. Some sites may\n;;  experience issues when your browser has an expired session/cookie, which\n;;  redirects you to the homepage even if you were logged in. 
If something like this happens,\n;;  you may want to disable this protection by changing it to TRUE below.\n;;  Remember to uncomment the line above if you want to use this feature.\n```\n\n### INI (platform level) for User Register Protection\n\n```text\n;enable_strict_user_register_protection = FALSE\n;;\n;;  When set to TRUE, forces protection on all sites globally, unless the\n;;  site has its own custom setting for the ignore_user_register_protection\n;;  variable in the site level boa_site_control.ini file, located in the\n;;  sites/foo.com/modules directory.\n;;\n;;  However, this setting will be ignored if you also set the opposite below:\n;;\n;;    ignore_user_register_protection = TRUE\n;;\n;;  It can be set to TRUE automatically on all platforms if there is an empty\n;;  control file:\n;;\n;;    ~/static/control/enable_strict_user_register_protection.info\n;;\n;;  NOTE: The *enable* file will be IGNORED if the *disable* file also exists!\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify the related settings in the site admin, and they can be\n;;  overridden, depending on the value defined below, when the maintenance\n;;  system runs again, which happens each morning (in the server time zone).\n```\n\n```text\n;ignore_user_register_protection = FALSE\n;;\n;;  Registration settings are now restricted by design to protect your sites\n;;  from being unintentionally turned into spam machines (which is allowed by\n;;  Drupal 6 default settings, sadly). 
Spambots targeting Drupal sites are\n;;  already a plague, so unless you have already set the stricter\n;;  'Administrators only' permission, we force a reasonable default policy\n;;  for new account registration: 'Visitors, but administrator approval is\n;;  required', plus 'Require email verification when a visitor creates an\n;;  account' enabled. If you wish to disable email verification or set 'Who\n;;  can register accounts' to 'Visitors', you must set it to TRUE below and\n;;  uncomment the line. You will then be able to permanently change these\n;;  settings in this site's admin area. Otherwise our default protection will\n;;  be enabled again the next day (early morning in the server time zone).\n;;  Note that we don't force 'Administrators only', because it could\n;;  immediately break essential features of many commerce or community sites.\n;;  But for other sites, 'Administrators only' is strongly suggested.\n;;\n;;  It can be set to TRUE automatically on all platforms AND all sites if\n;;  there is an empty *disable* control file:\n;;\n;;    ~/static/control/ignore_user_register_protection.info\n;;\n;;  NOTE: The *disable* file will make the *enable* file IGNORED if both\n;;  exist!\n;;\n;;    ~/static/control/enable_strict_user_register_protection.info\n;;\n;;  Note also that this setting affects only the maintenance system\n;;  behaviour. It doesn't affect the site behaviour directly in live mode,\n;;  so you can modify the related settings in the site admin, and they can be\n;;  overridden, depending on the value defined below, when the maintenance\n;;  system runs again -- which happens each morning.\n```\n\n### INI (platform level) Clarification on the User Register Protection Logic\n\n```text\n  Instead of the previously used, confusing enable/disable variables and\n  control files, we now use correct names corresponding to the actual\n  feature behaviour: enable_strict/ignore, with the non-strict enable being\n  the default mode.\n\n  There are actually three 
modes available, affecting only Drupal 6 and 7 sites:\n\n  * non-strict protection when no vars/control files are used, which by\n    default switches Drupal 6 and Drupal 7 sites to 'Visitors, but\n    administrator approval is required' plus 'Require email verification\n    when a visitor creates an account'\n\n  * strict protection when either a var or a control file of the\n    \"enable_strict\" type is used, which switches Drupal 6 and Drupal 7 sites\n    to 'Administrators only'\n\n  * no protection when either a var or a control file of the \"ignore\" type\n    is used, which simply disables the procedure altogether but doesn't\n    modify any settings in the Drupal 6 and Drupal 7 sites.\n```\n\n### INI (platform level) for New Relic, Composer, Private Files and Cookie Domain\n\n```text\n;enable_newrelic_integration = FALSE\n;;\n;;  When set to TRUE, it will enable New Relic monitoring for all sites on\n;;  this platform, but only if there is a valid New Relic license key present\n;;  in the ~/static/control/newrelic.info control file.\n;;  NOTE: The New Relic license key is a 40-character hexadecimal string.\n```\n\n```text\n;set_composer_manager_vendor_dir = FALSE\n;;\n;;  When set to TRUE, it will enforce a site specific Composer Manager\n;;  composer_manager_vendor_dir path: sites/domain/vendor, but only once the\n;;  site is already installed, so it will not override the variable on\n;;  install if it is set programmatically.\n```\n\n```text\n;allow_private_file_downloads = FALSE\n;;\n;;  When set to TRUE, allows the private files mode to be used, so it is\n;;  useful only for commerce sites which sell files for download, or for\n;;  intranet sites where you need to enforce strict access control. All\n;;  other sites should never ever use private files mode, for obvious\n;;  performance reasons.\n```\n\n```text\n;server_name_cookie_domain = FALSE\n;;\n;;  When set to TRUE, it forces the cookie_domain to always use the main\n;;  domain, also when the site is accessed via any domain alias. 
For an example use case,\n;;  please read: https://gist.github.com/omega8cc/5724528\n```\n\n### INI (platform level) for Files Permissions Daily Fix Logic\n\n```text\n;fix_files_permissions_daily = TRUE\n;;\n;;  When set to FALSE, allows you to skip the standard files permissions fix\n;;  on all sites on this platform, even if the global option in the system\n;;  level config file .barracuda.cnf is set to _PERMISSIONS_FIX=YES (the\n;;  default).\n;;\n;;  This feature can be useful when you prefer to manage a custom platform\n;;  as a monolithic codebase in Git, where forcing permissions could conflict\n;;  with your workflow or development tools. Otherwise you should never\n;;  disable this, to avoid issues with Ægir tasks related to sites on this\n;;  platform.\n;;\n;;  This setting affects only the daily maintenance system behaviour.\n;;\n;;  This option is available only in BOA-2.2.0 or newer.\n```\n\n### INI (platform level) for SQL Tables Conversions\n\n```text\n;sql_conversion_mode = NO\n;;\n;;  This option activates DB tables conversion for all sites hosted on this\n;;  platform, unless the site has its own custom setting for the\n;;  sql_conversion_mode variable in the site level boa_site_control.ini file,\n;;  located in the sites/foo.com/modules directory.\n;;\n;;  It can also be set (and forced) automatically for all sites on all\n;;  platforms if there is a special _SQL_CONVERT variable defined for this\n;;  Octopus instance in its .USER.octopus.cnf config file, but this may\n;;  require submitting a support request if you are using a hosted Ægir BOA\n;;  service without root access.\n;;\n;;  Supported values are: innodb and myisam (lowercase only!)\n;;\n;;  Note that this conversion, if enabled, will run daily even if all tables\n;;  have already been converted, so it will effectively run an OPTIMIZE task\n;;  on all tables.\n;;\n;;  This setting affects only the daily maintenance system behaviour.\n;;\n;;  This option is available only in BOA-2.1.3 or newer.\n```\n\n### 
INI (platform level) for Domain Access (domain) Module\n\n```text\n;auto_detect_domain_access_integration = FALSE\n;;\n;;  When set to TRUE, enables auto-detection and auto-include for the Domain\n;;  Access module. Supported locations, in order of precedence:\n;;\n;;    sites/all/modules/domain/\n;;    sites/all/modules/contrib/domain/\n;;    profiles/foo/modules/domain/\n;;    profiles/foo/modules/contrib/domain/\n;;\n;;  IMPORTANT!\n;;\n;;  This setting will be automatically set to TRUE on the platform level\n;;  (but not on the site level) during the daily maintenance procedure if\n;;  the module is detected. However, it will be completely ignored if there\n;;  is a boa_site_control.ini file present in the sites/foo.com/modules\n;;  directory with this setting set to FALSE there, to improve performance\n;;  when the module is not used in that site even if it exists in the\n;;  platform. Remember to uncomment the line above if you want to use this\n;;  feature.\n```\n\n### INI (platform level) for Drupal for Facebook (fb) Module\n\n```text\n;auto_detect_facebook_integration = FALSE\n;;\n;;  When set to TRUE, enables auto-detection and auto-include for the Drupal\n;;  for Facebook (fb) module. 
Supported locations, in order\n;;  of precedence:\n;;\n;;    sites/all/modules/fb/\n;;    sites/all/modules/contrib/fb/\n;;    profiles/foo/modules/fb/\n;;    profiles/foo/modules/contrib/fb/\n;;\n;;  IMPORTANT!\n;;\n;;  This setting will be automatically set to TRUE on the platform level\n;;  (but not on the site level) during the daily maintenance procedure if\n;;  the module is detected. However, it will be completely ignored if there\n;;  is a boa_site_control.ini file present in the sites/foo.com/modules\n;;  directory with this setting set to FALSE there, to improve performance\n;;  when the module is not used in that site even if it exists in the\n;;  platform. Remember to uncomment the line above if you want to use this\n;;  feature.\n```\n\n### INI (platform level) for Entity Cache (entitycache) Module\n\n```text\n;entitycache_dont_enable = FALSE\n;;\n;;  When set to TRUE, prevents the entitycache module, which is included by\n;;  default, from being auto-enabled during daily maintenance.\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify the related settings in the site admin, and they can be\n;;  overridden, depending on the value defined below, when the maintenance\n;;  system runs again, which happens each morning (in the server time zone).\n;;\n;;  This setting is available only on the platform level, because if the\n;;  distro or custom installation profile conflicts with entitycache, you\n;;  don't want to have it re-enabled on any site on such a platform, even if\n;;  it is a great performance improvement for any Drupal 7 based site, and\n;;  thus is highly recommended. 
Maybe you could fix your platform to make it compatible?\n```\n\n### INI (platform level) for Views Cache Bully (views_cache_bully) Module\n\n```text\n;views_cache_bully_dont_enable = FALSE\n;;\n;;  When set to TRUE, prevents the views_cache_bully module, which is\n;;  included by default, from being auto-enabled during daily maintenance,\n;;  but only if there is a special, global, non-default control file present:\n;;\n;;  ~/static/control/enable_views_cache_bully.info\n;;\n;;  If you didn't create this file to auto-enable views_cache_bully in all\n;;  your sites, the views_cache_bully_dont_enable variable below will be\n;;  completely ignored and the module will not be enabled. This feature has\n;;  been available since BOA-2.1.1, because BOA-2.1.0 forced this module to\n;;  be enabled by default.\n;;\n;;  But even if you create the special, global control file, you can still\n;;  stop the system from enabling the module per platform, by changing the\n;;  value of views_cache_bully_dont_enable to TRUE and activating the line.\n;;\n;;  This useful module automatically enables some default caching in all of\n;;  your views with no caching enabled yet, which is handy for busy\n;;  webmasters.\n;;\n;;  Since BOA-2.1.1 this module is no longer enabled by default, because it\n;;  may affect commerce based sites, resulting in a broken checkout. Because\n;;  of those possible issues, this module is automatically disabled if the\n;;  maintenance agent discovers that the commerce module is enabled.\n;;\n;;  You can still force views_cache_bully to be enabled if you have the\n;;  special aforementioned control file in place and the\n;;  views_cache_bully_dont_enable variable below is set to FALSE or left\n;;  commented out.\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify the related settings in the site admin, and they can be\n;;  overridden, depending on the value defined 
below, when the maintenance system runs\n;;  again, which happens each morning (in the server time zone).\n;;\n;;  This setting is available only on the platform level, because if the\n;;  distro or custom installation profile conflicts with this module, you\n;;  don't want to have it re-enabled on any site on such a platform, even if\n;;  it is a great performance improvement for any Drupal 6/7 site, and thus\n;;  is highly recommended. Maybe you could fix your platform to make it\n;;  compatible? If not, then make sure to configure caching in all your\n;;  views manually. You will make your sites' visitors and users happier!\n```\n\n### INI (platform level) for Views Content Cache (views_content_cache) Module\n\n```text\n;views_content_cache_dont_enable = FALSE\n;;\n;;  When set to TRUE, prevents the views_content_cache module, which is\n;;  included by default, from being auto-enabled during daily maintenance,\n;;  but only if there is a special, global, non-default control file present:\n;;\n;;  ~/static/control/enable_views_content_cache.info\n;;\n;;  If you didn't create this file to auto-enable views_content_cache in all\n;;  your sites, the views_content_cache_dont_enable variable below will be\n;;  completely ignored and the module will not be enabled. This feature has\n;;  been available since BOA-2.1.1, because BOA-2.1.0 forced this module to\n;;  be enabled by default.\n;;\n;;  But even if you create the special, global control file, you can still\n;;  stop the system from enabling the module per platform, by changing the\n;;  value of views_content_cache_dont_enable to TRUE and activating the\n;;  line.\n;;\n;;  This handy module allows you to configure caching for views and views\n;;  based blocks very precisely, which is great if you need better control\n;;  of the cache per view and per content type.\n;;\n;;  Note that this module requires that the Chaos tools (ctools) module is\n;;  already enabled. 
Otherwise views_content_cache will not be enabled.\n;;\n;;  Note that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify the related settings in the site admin, and they can be\n;;  overridden, depending on the value defined below, when the maintenance\n;;  system runs again, which happens each morning (in the server time zone).\n;;\n;;  This setting is available only on the platform level, because if the\n;;  distro or custom installation profile conflicts with this module, you\n;;  don't want to have it re-enabled on any site on such a platform, even if\n;;  it is a great performance improvement for any Drupal 6/7 site, and thus\n;;  is highly recommended. Maybe you could fix your platform to make it\n;;  compatible?\n;;  You will make your sites' visitors and users even happier!\n```\n"
  },
  {
    "path": "docs/ini/site/INI.md",
    "content": "\n## INI (site level) located in sites/foo.com/modules/\n\n```text\n;;\n;;  DO NOT EDIT THIS FILE, it is just a TEMPLATE with documentation included!\n;;\n;;  This is a site level INI file template which can be used to modify\n;;  default BOA system behaviour for this site only.\n;;\n;;  Copy this file as boa_site_control.ini into the sites/foo.com/modules\n;;  directory, then uncomment the lines for any settings you want to modify,\n;;  to make them active.\n;;  All settings are initially listed with system defaults, for reference.\n;;\n;;  Note that it takes ~60 seconds to see any modification results in action\n;;  due to opcode caching enabled in PHP-FPM for all non-dev sites.\n;;\n```\n\n### INI (site level) for Session Control\n\n```text\n;session_cookie_ttl = 86400\n;;\n;;  You can control session cookie expiration (TTL) per site and per platform.\n;;  The value (in seconds) of the session_cookie_ttl variable is used as\n;;  the session.cookie_lifetime value.\n;;\n;;  The BOA default defined in the system level global.inc file is 86400 == 24h.\n;;\n;;  We also recommend that you enable and configure the built-in session_expire\n;;  module, which allows you to keep the sessions DB table tidy. 
Make sure that the\n;;  TTL set via the session_cookie_ttl variable below is *lower* than the TTL\n;;  configured in the session_expire module, because the module does not care\n;;  about PHP settings and simply deletes old entries from the sessions table\n;;  on cron run.\n```\n\n```text\n;session_gc_eol = 86400\n;;\n;;  You can control the session garbage collector (EOL) per site and per\n;;  platform. The value (in seconds) of the session_gc_eol variable is used as\n;;  the session.gc_maxlifetime value and specifies the number of seconds after\n;;  which data will be seen as 'garbage' and potentially cleaned up, resulting\n;;  in the $_SESSION variable being discarded and affected authenticated users\n;;  being logged out.\n;;\n;;  The BOA default defined in the system level global.inc file is 86400 == 24h.\n```\n\n### INI (site level) for Redis Cache Settings Control\n\n```text\n;redis_old_nine_mode = FALSE\n;;\n;;  If you are running a Drupal 9 version older than 9.3, you need to uncomment\n;;  the line above and change it to TRUE to make Redis work again.\n```\n\n```text\n;redis_old_eight_mode = FALSE\n;;\n;;  If you are running a Drupal 8 version older than 8.8, you need to uncomment\n;;  the line above and change it to TRUE to make Redis work again.\n```\n\n```text\n;redis_lock_enable = TRUE\n;;\n;;  The blazing fast Redis lock implementation is also enabled by default.\n```\n\n```text\n;redis_path_enable = TRUE\n;;\n;;  The blazing fast Redis path cache implementation is also enabled by default.\n```\n\n```text\n;redis_scan_enable = FALSE\n;;\n;;  The blazing fast Redis method for wildcard cache deletes. It uses the\n;;  non-atomic, non-blocking, and concurrency friendly SCAN command instead of\n;;  KEYS to perform cache wildcard key deletions. 
Not enabled by default, because\n;;  it may cause serious yet random problems -- see the comment for details:\n;;  https://www.drupal.org/node/2851625#comment-11963867\n```\n\n### INI (site level) for Redis Cache Advanced Settings Control\n\n```text\n;redis_flush_forced_mode = TRUE\n;;\n;;  The more aggressive cache flush mode is now enabled by default, since it\n;;  will further improve your site's performance, but you can still disable it\n;;  with FALSE below after some testing, if you wish.\n;;\n;;  NOTE: This option, enabled by default, may cause mysterious and random WSOD\n;;        depending on the site's dependence on entries in the cache, because\n;;        it limits each cache entry TTL to 6 hours max, hence any module using\n;;        CacheBackendInterface::CACHE_PERMANENT will be surprised by suddenly\n;;        and mysteriously missing entries. If that happens, uncomment this line\n;;        and set it to FALSE below.\n;;\n;;  Remember to uncomment the line above if you want to modify default settings.\n;;\n;;  When enabled, it will automatically set a more aggressive cache flush mode\n;;  in general and a very aggressive one for selected cache bins, as listed\n;;  below, along with the redis integration module defaults, which are active\n;;  when this option is explicitly set to FALSE.\n;;\n```\n```php\n;;    $conf['redis_perm_ttl']                 = 86400; // 24 hours max\n;;    $conf['redis_flush_mode']               = 1; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_page']    = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_block']   = 2; // Redis default is 1\n;;    $conf['redis_flush_mode_cache_menu']    = 2; // Redis default is 0\n;;    $conf['redis_flush_mode_cache_metatag'] = 2; // Redis default is 0\n```\n```text\n;;\n;;  Note that even with this option enabled, you can easily override these\n;;  values or configure completely custom modes, both for the wildcard option\n;;  redis_flush_mode and per cache bin, in the local.settings.php 
file.\n;;\n;;  Please refer to the module README for more information on all available\n;;  advanced flush modes: http://bit.ly/1drmi35\n```\n\n```text\n;redis_exclude_bins = FALSE\n;;\n;;  Sometimes you may want to exclude some problematic cache bins from Redis\n;;  so they will use the default SQL engine, at least until the related issue\n;;  is fixed either in your contrib code or in the Redis integration module.\n;;\n;;  Previously you had to edit the local.settings.php file, which is both\n;;  tedious and dangerous because of extra steps (https://omega8.cc/node/230),\n;;  to add a line, for example: $conf['cache_class_cache_foo'] = 'DrupalDatabaseCache';\n;;  Plus, it had to be done for every site separately.\n;;\n;;  Now you can simply list the cache bins to exclude below, comma separated,\n;;  either in the site or platform level active INI file.\n;;\n;;  Example: redis_exclude_bins = \"cache_form,cache_foo,cache_bar\"\n```\n\n```text\n;redis_cache_disable = FALSE\n;;\n;;  Normally you should never disable Redis, except for debugging rare issues.\n;;  If you are sure you need to disable Redis for this site,\n;;  uncomment the line above and set the value to TRUE.\n```\n\n### INI (site level) for Nginx Microcache Control\n\n```text\n;speed_booster_anon_cache_ttl = 10\n;;\n;;  Speed Booster uses Nginx microcaching mode by default, with just 10 seconds\n;;  both for anonymous visitors and logged in users. All known robots/crawlers\n;;  and search engine spiders are forced to accept up to 24 hours cache TTL.\n;;  Below you can modify the (10 seconds) default for human, anonymous visitors.\n;;  Uncomment the line above and set any numeric value you prefer (in seconds)\n;;  to override the system default (10 seconds). 
You may want to enable the Purge and\n;;  Expire modules in this site's modules admin area, so any new or modified\n;;  node, comment, etc. will selectively auto-purge related cache entries to\n;;  avoid serving stale content for an extended time (depending on the TTL\n;;  configured). Note that the value must be higher than 10 or it will be\n;;  ignored.\n```\n\n```text\n;disable_drupal_page_cache = FALSE\n;;\n;;  With the default Speed Booster TTL set to just 10 seconds to achieve the\n;;  microcaching mode, disabling Drupal's own page cache will significantly\n;;  degrade your site's performance on every request not served via the Speed\n;;  Booster front-end cache, because Drupal will have to build the page from\n;;  scratch every 10 seconds, making your site SLOW for every visitor not lucky\n;;  enough to visit an already cached page. This is why BOA enables Drupal's\n;;  own page cache by default, even if Boost, if used, will complain about it.\n;;  It allows Drupal to keep its internal full-page cache in the super-fast\n;;  Redis backend, making even those every-10-seconds requests not cached by\n;;  Speed Booster blazingly fast.\n;;\n;;  If for some reason this default BOA configuration breaks something\n;;  important in your site, like some page which should display results not\n;;  cached in a full-page cache for anonymous visitors, even if they don't\n;;  have a cookie set in the browser, didn't submit any form, etc., so no other\n;;  method to make the displayed page dynamic on the fly could be triggered,\n;;  you could (very carefully) consider changing this variable to TRUE.\n;;\n;;  But please think twice before using this variable. 
While Redis will still\n;;  improve performance for all other cache bins, the cache_page bin will\n;;  not be used, and this will make your site much slower, randomly, even if\n;;  you increase the tiny speed_booster_anon_cache_ttl value above.\n;;\n;;  If you really have to disable this on some problematic URI, to guarantee\n;;  that the page will be as dynamic as possible also for anonymous visitors,\n;;  you may want to use a more granular method, like adding an exception for\n;;  the affected URI in your site's local.settings.php file:\n```\n```php\n;;    if (preg_match(\"/^\\/(?:foo|bar)/\", $_SERVER['REQUEST_URI'])) {\n;;      header('X-Accel-Expires: 1'); // This disables Speed Booster\n;;      $conf['cache'] = 0; // This disables page caching on the fly\n;;    }\n```\n\n### INI (site level) for Drupal Sites Access Control\n\n```text\n;allow_anon_node_add = FALSE\n;;\n;;  When set to TRUE it allows anonymous users to add content. Best practice and\n;;  the default is FALSE, which results in a redirect to the site's homepage.\n;;  Note that this option also opens access to node editing.\n```\n\n```text\n;disable_admin_dos_protection = FALSE\n;;\n;;  When set to TRUE it allows anonymous visitors to access the /admin* URL,\n;;  even if only to see the 403 Access Denied message. Best practice and the\n;;  default is FALSE, which results in a redirect to the site's homepage. This\n;;  allows you to protect the site from DoS attempts, since the /admin* requests\n;;  are never cached and always hit Drupal directly. Some sites may experience\n;;  issues when your browser has an expired session/cookie which redirects you\n;;  to the homepage even if you were logged in. 
If something like this happens,\n;;  you may want to disable this protection by changing it to TRUE below.\n;;  Remember to uncomment the line above if you want to use this feature.\n```\n\n### INI (site level) for User Register Protection\n\n```text\n;ignore_user_register_protection = FALSE\n;;\n;;  Registration settings are now restricted by design to protect your sites\n;;  from being unintentionally turned into spam machines (which is allowed by\n;;  Drupal 6 default settings, sadly). Spambots targeting Drupal sites are\n;;  already a plague, so unless you have already set the stricter permission\n;;  'Administrators only', we force a reasonable default policy for new account\n;;  registration: 'Visitors, but administrator approval is required' plus\n;;  'Require email verification when a visitor creates an account' enabled.\n;;  If you wish to disable email verification or set 'Who can register\n;;  accounts' to 'Visitors', you must set it to TRUE below and uncomment\n;;  the line. You will then be able to permanently change these settings\n;;  in this site's admin area. Otherwise our default protection will be enabled\n;;  again the next day (early morning in the server time zone). Note that\n;;  we don't force 'Administrators only', because it could immediately break\n;;  many commerce or community sites' essential features. 
But for other sites,\n;;  'Administrators only' is strongly suggested.\n;;\n;;  It can be set to TRUE automatically on all platforms AND all sites if\n;;  there is an empty *disable* control file:\n;;\n;;    ~/static/control/ignore_user_register_protection.info\n;;\n;;  NOTE: The *disable* file will make the *enable* file IGNORED if both exist!\n;;\n;;    ~/static/control/enable_strict_user_register_protection.info\n;;\n;;  Note also that this setting affects only the maintenance system behaviour.\n;;  It doesn't affect the site behaviour directly in live mode, so you can\n;;  modify related settings in the site admin, and they can be overridden,\n;;  depending on the value defined below, when the maintenance system runs\n;;  again -- which happens each morning.\n```\n\n### INI (site level) Clarification on the User Register Protection Logic\n\n```text\n  Instead of the previously used, confusing enable/disable variables and control\n  files, we now use names that correspond to the actual feature behaviour:\n  enable_strict/ignore, with the non-strict enable being the default mode.\n\n  There are actually three modes available, affecting only Drupal 6 and 7 sites:\n\n  * non-strict protection when no vars/control files are used, which by default\n    switches Drupal 6 and Drupal 7 sites to 'Visitors, but administrator\n    approval is required' plus 'Require email verification when a visitor\n    creates an account'\n\n  * strict protection when either a var or a control file of the\n    \"enable_strict\" type is used, which switches Drupal 6 and Drupal 7 sites\n    to 'Administrators only'\n\n  * no protection when either a var or a control file of the \"ignore\" type is\n    used, which simply disables the procedure altogether but doesn't modify\n    any settings in the Drupal 6 and Drupal 7 sites.\n```\n\n### INI (site level) for New Relic, Composer, Private Files and Cookie Domain\n\n```text\n;enable_newrelic_integration = FALSE\n;;\n;;  When set to TRUE it will enable New 
Relic monitoring for this site only.\n;;  You still need a valid New Relic license key present in the control file:\n;;  ~/static/control/newrelic.info\n;;  NOTE: The New Relic license key is a 40-character hexadecimal string.\n```\n\n```text\n;set_composer_manager_vendor_dir = FALSE\n;;\n;;  When set to TRUE it will enforce the site specific Composer Manager\n;;  composer_manager_vendor_dir path: sites/domain/vendor, but only once the\n;;  site is already installed, so it will not override the variable on install,\n;;  if set programmatically.\n```\n\n```text\n;allow_private_file_downloads = FALSE\n;;\n;;  When set to TRUE it allows the private files mode to be used, so it is\n;;  useful only for commerce sites which sell files for download or for\n;;  intranet sites where you need to enforce strict access control. All other\n;;  sites should never ever use the private files mode for obvious performance\n;;  reasons.\n```\n\n```text\n;server_name_cookie_domain = FALSE\n;;\n;;  When set to TRUE, it forces the cookie_domain to always use the main domain,\n;;  also when the site is accessed via any domain alias. 
For an example use case,\n;;  please read: https://gist.github.com/omega8cc/5724528\n```\n\n### INI (site level) for Solr Configuration\n\n```text\n;solr_integration_module = your_module_name_here\n;;\n;;  This option allows you to activate a Solr core configuration for the site.\n;;\n;;  Supported values for the solr_integration_module variable:\n;;\n;;    search_api_solr9 (Activates Solr 9 core if installed)\n;;    search_api_solr7 (Activates Solr 7 core if installed)\n;;    search_api_solr  (Activates Solr 7 core if installed)\n;;    apachesolr       (Activates Solr 4 core if installed) (deprecated)\n;;\n;;  Solr 9 and Solr 7 are available if installed.\n;;\n;;  Supported integration modules are the latest versions of either\n;;  search_api_solr or apachesolr, as listed below:\n;;\n;;   https://ftp.drupal.org/files/projects/search_api_solr-4.3.8.tar.gz (D10.2+)\n;;   https://ftp.drupal.org/files/projects/search_api_solr-4.2.12.tar.gz (D9.3+)\n;;   https://ftp.drupal.org/files/projects/search_api_solr-4.1.12.tar.gz (D8.8+)\n;;   https://ftp.drupal.org/files/projects/search_api_solr-7.x-1.17.tar.gz (D7)\n;;   https://ftp.drupal.org/files/projects/apachesolr-7.x-1.12.tar.gz (D7)\n;;   https://ftp.drupal.org/files/projects/apachesolr-6.x-3.1.tar.gz (D6)\n;;\n;;  Note that you still need to add the preferred integration module along with\n;;  any of its dependencies to your codebase, since this feature doesn't modify\n;;  your platform or site - it only creates a Solr core with the configuration\n;;  files provided by the integration module: schema.xml and solrconfig.xml\n;;\n;;  Important: search_api_solr for D8+ is different from all previous versions,\n;;  as it requires Composer to install the module and its dependencies, then\n;;  you will need to configure it, and only then you will be able to generate\n;;  customized Solr core config files, which you should upload to the path:\n;;  sites/foo.com/files/solr/ and wait 5-10 minutes to have them activated\n;;  on the Solr 7 core the system will 
create for you.\n;;\n;;  NOTE: You must set 'solr_custom_config = NO' for the changes to take effect.\n;;\n;;  This setting affects the auto-installer, which runs every 5-10 minutes, so\n;;  there is no need to wait until the next morning to use the new Solr core.\n;;  Win!\n;;\n;;  Once the Solr core is ready to use, you will find a special file in your\n;;  site directory: sites/foo.com/solr.php with details on how to access\n;;  your new Solr core with the correct credentials.\n;;\n;;  A site with an enabled Solr core can be safely migrated between platforms;\n;;  the integration module can be moved within your codebase and even upgraded,\n;;  as long as it is using compatible schema.xml and solrconfig.xml files.\n;;\n;;  To delete an existing Solr core, simply comment out this line.\n;;  The system will cleanly delete the existing Solr core within 15 minutes.\n```\n\n```text\n;solr_update_config = NO\n;;\n;;  This option allows you to auto-update your Solr core configuration files:\n;;\n;;    schema.xml\n;;    solrconfig.xml\n;;\n;;  If there is a new release for either apachesolr or search_api_solr, your\n;;  Solr core will not be automatically upgraded to use the newer schema.xml\n;;  and solrconfig.xml, unless allowed by switching solr_update_config to YES.\n;;\n;;  This option will be ignored if you set solr_custom_config to YES.\n```\n\n```text\n;solr_custom_config = NO\n;;\n;;  This option allows you to protect custom Solr core configuration files:\n;;\n;;    schema.xml\n;;    solrconfig.xml\n;;\n;;  To use a customized version of either schema.xml or solrconfig.xml, you\n;;  need to switch solr_custom_config to YES below and, if you are using the\n;;  hosted Ægir service, submit a support ticket to get these files updated\n;;  with your custom versions. On self-hosted BOA simply update these files\n;;  directly.\n;;\n;;  Please remember to use Solr compatible config files.\n;;\n;;  IMPORTANT! 
-- Please note that with this option enabled you won't be able\n;;  to follow the Drupal 8+ specific procedure for search_api_solr with config\n;;  files generated and uploaded to the files/solr/ directory in your site.\n;;  You could still use this option to keep your Solr core immutable between\n;;  upgrades, though, but you must remember to disable this option\n;;  briefly (5-10 minutes) for the changes to take effect.\n```\n\n### INI (site level) for SQL Tables Conversions\n\n```text\n;sql_conversion_mode = NO\n;;\n;;  This option allows you to activate and/or customize the DB tables\n;;  conversion mode for this site only, and the value defined here will\n;;  override the value of sql_conversion_mode set in the platform level\n;;  boa_platform_control.ini file located in the sites/all/modules directory.\n;;\n;;  It can also be set (and forced) automatically for all sites on all\n;;  platforms if there is a special _SQL_CONVERT variable defined for this\n;;  Octopus instance in its .USER.octopus.cnf config file, but it may require\n;;  submitting a support request if you are using the hosted Ægir BOA service\n;;  without root access.\n;;\n;;  Supported values are: innodb and myisam (lowercase only!)\n;;\n;;  Note that this conversion, if enabled, will run daily even if all tables\n;;  have already been converted, so it will effectively run the OPTIMIZE task\n;;  on all tables.\n;;\n;;  This setting affects only the daily maintenance system behaviour.\n;;\n;;  This option is available only in BOA-2.1.3 or newer.\n```\n\n### INI (site level) for AdvAgg (advagg) Module\n\n```text\n;advagg_auto_configuration = FALSE\n;;\n;;  When set to TRUE it enables auto-configuration for the AdvAgg module\n;;  on the global.inc level. 
Supported locations, in the order of precedence:\n;;\n;;    sites/all/modules/advagg/       (optional override on the platform level)\n;;    modules/o_contrib/advagg/       (included in all D6 platforms)\n;;    modules/o_contrib_seven/advagg/ (included in all D7 platforms)\n;;\n;;  IMPORTANT!\n;;\n;;  This setting will be automatically set to TRUE during the daily maintenance\n;;  procedure if the module is detected as enabled in the site, so while\n;;  you could enable or disable it temporarily below, this setting will be\n;;  overwritten again the next morning, depending on the module's actual\n;;  status. Of course, it will not affect sites with a .dev. or .devel.\n;;  keyword present in the main site name.\n```\n\n### INI (site level) for Domain Access (domain) Module\n\n```text\n;auto_detect_domain_access_integration = FALSE\n;;\n;;  When set to TRUE it enables auto-detection and auto-include for\n;;  the Domain Access module. Supported locations, in the order of precedence:\n;;\n;;    sites/all/modules/domain/\n;;    sites/all/modules/contrib/domain/\n;;    profiles/foo/modules/domain/\n;;    profiles/foo/modules/contrib/domain/\n;;\n;;  IMPORTANT!\n;;\n;;  While the same setting in the platform level boa_platform_control.ini\n;;  file located in the sites/all/modules directory will be automatically\n;;  set to TRUE during the daily maintenance procedure if the module\n;;  is detected, it will be completely ignored if there is also a\n;;  boa_site_control.ini file present in the sites/foo.com/modules\n;;  directory and this setting is set to FALSE below to improve performance,\n;;  in case this module is not used in this site. Remember to uncomment\n;;  the line above if you want to use this feature.\n```\n\n### INI (site level) for Drupal for Facebook (fb) Module\n\n```text\n;auto_detect_facebook_integration = FALSE\n;;\n;;  When set to TRUE it enables auto-detection and auto-include for\n;;  the Drupal for Facebook (fb) module. 
Supported locations, in the order\n;;  of precedence:\n;;\n;;    sites/all/modules/fb/\n;;    sites/all/modules/contrib/fb/\n;;    profiles/foo/modules/fb/\n;;    profiles/foo/modules/contrib/fb/\n;;\n;;  IMPORTANT!\n;;\n;;  While the same setting in the platform level boa_platform_control.ini\n;;  file located in the sites/all/modules directory will be automatically\n;;  set to TRUE during the daily maintenance procedure if the module\n;;  is detected, it will be completely ignored if there is also a\n;;  boa_site_control.ini file present in the sites/foo.com/modules\n;;  directory and this setting is set to FALSE below to improve performance,\n;;  in case this module is not used in this site. Remember to uncomment\n;;  the line above if you want to use this feature.\n```\n"
  },
  {
    "path": "lib/functions/dns.sh.inc",
    "content": "\nexport _tRee=dev\n\n_vBs=\"/var/backups\"\n\n#\n# Fix DNS settings.\n_fix_dns_settings() {\n  [ ! -d \"${_vBs}\" ] && mkdir -p ${_vBs}\n  rm -f ${_vBs}/resolv.conf.tmp\n  if ! grep -q \"nameserver 127.0.0.1\" /etc/resolv.conf; then\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      _FORCE_RESOLV_UPDATE=YES\n    else\n      _FORCE_RESOLV_UPDATE=NO\n    fi\n  fi\n  if ! grep -q \"BOA-DNS-Config\" /etc/resolv.conf || [ \"${_FORCE_RESOLV_UPDATE}\" = \"YES\" ]; then\n    echo \"### BOA-DNS-Config ###\" > ${_vBs}/resolv.conf.tmp\n    if [ -x \"/usr/sbin/unbound\" ] && [ -e \"/run/unbound/unbound.pid\" ]; then\n      echo \"nameserver 127.0.0.1\" >> ${_vBs}/resolv.conf.tmp\n    fi\n    echo \"nameserver 1.1.1.1\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 8.8.8.8\" >> ${_vBs}/resolv.conf.tmp\n    echo \"nameserver 9.9.9.9\" >> ${_vBs}/resolv.conf.tmp\n  fi\n  if [ -e \"${_vBs}/resolv.conf.tmp\" ]; then\n    chattr -i /etc/resolv.conf\n    rm -f /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp /etc/resolv.conf\n    chmod 0644 /etc/resolv.conf\n    cp -a ${_vBs}/resolv.conf.tmp ${_vBs}/resolv.conf.vanilla\n  fi\n  if [ -x \"/usr/sbin/unbound-control\" ] \\\n    && [ -e \"/etc/resolvconf/run/interface/lo.unbound\" ]; then\n    unbound-control reload &> /dev/null\n  fi\n}\n\n#\n# Check DNS settings.\n_check_dns_settings() {\n  if [ -L \"/etc/resolv.conf\" ]; then\n    _fix_dns_settings\n    return 1  # Exit the function but continue the script\n  fi\n  if [ -e \"/root/.use.default.nameservers.cnf\" ]; then\n    if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n      rm -f /root/.use.local.nameservers.cnf\n    fi\n    _USE_DEFAULT_DNS=YES\n    if ! 
grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [ -e \"/root/.use.local.nameservers.cnf\" ]; then\n    _USE_PROVIDER_DNS=YES\n  else\n    _REMOTE_DNS_TEST=$(host files.aegir.cc 1.1.1.1 -w 10 2>&1)\n    if ! grep -q \"BOA-DNS-Config\" /etc/resolv.conf; then\n      _fix_dns_settings\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n  if [[ \"${_REMOTE_DNS_TEST}\" =~ \"no servers could be reached\" ]] \\\n    || [[ \"${_REMOTE_DNS_TEST}\" =~ \"Host files.aegir.cc not found\" ]] \\\n    || [ \"${_USE_PROVIDER_DNS}\" = \"YES\" ]; then\n    _fix_dns_settings\n  fi\n}\n\n#\n# Check repo status.\n_check_git_repos() {\n  if [ \"${_DL_MODE}\" != \"GIT\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_git_repos\"\n  fi\n  if [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    _GITHUB_WORKS=NO\n    _GITLAB_WORKS=NO\n    if [ \"${_FORCE_GIT_MIRROR}\" = \"drupal\" ]; then\n      _FORCE_GIT_MIRROR=github\n    fi\n    if [ \"${_FORCE_GIT_MIRROR}\" = \"gitorious\" ]; then\n      _FORCE_GIT_MIRROR=gitlab\n    fi\n    if [ \"${_FORCE_GIT_MIRROR}\" = \"github\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: We will use forced GitHub repository without testing connection\"\n      fi\n      _GITHUB_WORKS=YES\n      _GITLAB_WORKS=NO\n      sleep 1\n    elif [ \"${_FORCE_GIT_MIRROR}\" = \"gitlab\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: We will use forced GitLab mirror without testing connection\"\n      fi\n      _GITHUB_WORKS=NO\n      _GITLAB_WORKS=YES\n      sleep 1\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Testing repository mirror servers availability...\"\n      fi\n      sleep 1\n      _GITHUB_WORKS=YES\n      _GITLAB_WORKS=YES\n      if ! 
command nc -w 10 -z github.com 443 >/dev/null 2>&1 ; then\n        _GITHUB_WORKS=NO\n        _msg \"WARN: The GitHub master repository server doesn't respond...\"\n      fi\n      if ! command nc -w 10 -z gitlab.com 443 >/dev/null 2>&1 ; then\n        _GITLAB_WORKS=NO\n        _msg \"WARN: The GitLab mirror repository server doesn't respond...\"\n      fi\n    fi\n    if [ \"${_GITHUB_WORKS}\" = \"YES\" ]; then\n      _BOA_REPO_NAME=\"boa\"\n      _BOA_REPO_GIT_URL=\"${_gitHub}\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: GitHub master repository will be used\"\n      fi\n    elif [ \"${_GITLAB_WORKS}\" = \"YES\" ]; then\n      _BOA_REPO_NAME=\"boa\"\n      _BOA_REPO_GIT_URL=\"${_gitLab}\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: GitLab mirror repository will be used\"\n      fi\n    else\n      cat <<EOF\n\n      None of the repository servers responded in 10 seconds,\n      so we can't continue this installation.\n\n      Please try again later or check if your firewall has port 443 open.\n\n      Bye.\n\nEOF\n      _clean_pid_exit _check_git_repos_a\n    fi\n  fi\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _find_correct_ip\"\n  fi\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! -z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n  if [ -n \"${_LOC_IP}\" ] && grep -qE \"${_LOC_IP}\\s\" /etc/hosts; then\n    cp -af /etc/hosts /etc/.was.hosts\n    sed -i \"s/^${_LOC_IP}.*//g\" /etc/hosts\n    [ -x \"/etc/init.d/unbound\" ] && [ ! 
-e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n    [ -x \"/etc/init.d/unbound\" ] && service unbound restart &> /dev/null\n  fi\n}\n\n#\n# Validate server public IP.\n_validate_public_ip() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _validate_public_ip\"\n  fi\n  if [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n    _LOC_DOM=\"${_MY_HOSTN}\"\n    if [ -z \"${_MY_OWNIP}\" ]; then\n      _find_correct_ip\n      _MY_OWNIP=\"${_LOC_IP}\"\n    else\n      _LOC_IP=\"${_MY_OWNIP}\"\n    fi\n  fi\n  if [ ! -z \"${_LOCAL_NETWORK_IP}\" ]; then\n    if [ -z \"${_LOCAL_NETWORK_HN}\" ]; then\n      _msg \"FATAL ERROR: you must specify also _LOCAL_NETWORK_HN\"\n      _clean_pid_exit _validate_public_ip_a\n    else\n      _MY_OWNIP=\"${_LOCAL_NETWORK_IP}\"\n      _MY_HOSTN=\"${_LOCAL_NETWORK_HN}\"\n      _MY_FRONT=\"${_LOCAL_NETWORK_HN}\"\n      _THISHTIP=\"${_LOCAL_NETWORK_IP}\"\n    fi\n  else\n    if [ \"${_DNS_SETUP_TEST}\" = \"YES\" ]; then\n      if [ -z \"${_MY_OWNIP}\" ]; then\n        _find_correct_ip\n        _THISHTIP=\"${_LOC_IP}\"\n      else\n        _THISHTIP=\"${_MY_OWNIP}\"\n      fi\n    else\n      if [ -z \"${_MY_OWNIP}\" ] && [ ! 
-z \"${_MY_HOSTN}\" ]; then\n        _LOC_DOM=\"${_MY_HOSTN}\"\n        _find_correct_ip\n        _THISHTIP=\"${_LOC_IP}\"\n      else\n        _THISHTIP=\"${_MY_OWNIP}\"\n      fi\n    fi\n  fi\n}\n\n#\n# Validate server IP for xtras.\n_validate_xtras_ip() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _validate_xtras_ip\"\n  fi\n  if [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n    _LOC_DOM=\"${_MY_HOSTN}\"\n    if [ -z \"${_MY_OWNIP}\" ]; then\n      _find_correct_ip\n      _MY_OWNIP=\"${_LOC_IP}\"\n    else\n      _LOC_IP=\"${_MY_OWNIP}\"\n    fi\n  fi\n  _XTRAS_THISHTIP=\"*\"\n}\n\n#\n# Validate server IP for purge vhost.\n_validate_purge_ip() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _validate_purge_ip\"\n  fi\n  if [ \"${_PURGE_ALL_THISHTIP}\" = \"YES\" ]; then\n    _PURGE_THISHTIP=\"0.0.0.0/0\"\n  else\n    if [ \"${_DNS_SETUP_TEST}\" = \"YES\" ]; then\n      if [ -z \"${_MY_OWNIP}\" ]; then\n        _find_correct_ip\n        _PURGE_THISHTIP=\"${_LOC_IP}\"\n      else\n        _PURGE_THISHTIP=\"${_MY_OWNIP}\"\n      fi\n    else\n      if [ -z \"${_MY_OWNIP}\" ]; then\n        if [ -e \"/usr/bin/sipcalc\" ]; then\n          if [ -z \"${_THISHTIP}\" ]; then\n            _LOC_DOM=\"${_THISHOST}\"\n            _find_correct_ip\n            _THISHTIP=\"${_LOC_IP}\"\n          fi\n          _IP_TEST=$(sipcalc ${_THISHTIP} 2>&1)\n          if [[ \"${_IP_TEST}\" =~ \"ERR\" ]]; then\n            _IP_TEST_RESULT=FAIL\n            _PURGE_THISHTIP=\"0.0.0.0/0\"\n          else\n            _IP_TEST_RESULT=OK\n            _PURGE_THISHTIP=\"${_THISHTIP}\"\n          fi\n        else\n          _PURGE_THISHTIP=\"${_THISHTIP}\"\n        fi\n      else\n        _PURGE_THISHTIP=\"${_MY_OWNIP}\"\n      fi\n    fi\n    if [ -z \"${_PURGE_THISHTIP}\" ]; then\n      _PURGE_THISHTIP=\"0.0.0.0/0\"\n    fi\n  fi\n}\n\n#\n# Validate local server IP.\n_validate_local_ip() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: 
_validate_local_ip\"\n  fi\n  _LOCAL_THISHTIP=all\n}\n\n#\n# Wait for connection.\n_wait_for_connection() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _wait_for_connection\"\n  fi\n  echo \" \"\n  _msg \"I can not connect to github.com on port 443 at the moment\"\n  _msg \"I will try again in 60 seconds, please wait...\"\n  _msg \"Waiting for attempt $1...\"\n  sleep 60\n}\n\n#\n# Check connection.\n_check_connection() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_connection\"\n  fi\n  if ! command nc -w 10 -z github.com 443 >/dev/null 2>&1 ; then\n    _wait_for_connection \"2/4\"\n    if ! command nc -w 10 -z github.com 443 >/dev/null 2>&1 ; then\n      _wait_for_connection \"3/4\"\n      if ! command nc -w 10 -z github.com 443 >/dev/null 2>&1 ; then\n        _wait_for_connection \"4/4\"\n        if ! command nc -w 10 -z github.com 443 >/dev/null 2>&1 ; then\n          echo \" \"\n          _msg \"Sorry, I gave up.\"\n          _msg \"EXIT on error due to GitHub git server at 443 downtime\"\n          _msg \"Please try to run this script again in a few minutes\"\n          _msg \"You may want to check https://www.githubstatus.com\"\n          _msg \"Also, make sure that the outgoing connections via port 443 work\"\n          _msg \"Bye\"\n          _clean_pid_exit _check_connection_a\n        fi\n      fi\n    fi\n  fi\n}\n\n#\n# Install Unbound from sources.\n_install_unbound_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_unbound_src\"\n  fi\n  _msg \"INFO: Installing Unbound ${_UNBOUND_VRN}...\"\n  ldconfig 2> /dev/null\n  cd /var/opt\n  rm -rf unbound*\n  _get_dev_src \"unbound-${_UNBOUND_VRN}.tar.gz\"\n  cd unbound-${_UNBOUND_VRN}\n  if [ -e \"/usr/local/ssl3/\" ]; then\n    _mrun \"bash ./configure \\\n      --prefix=/usr \\\n      --with-libevent \\\n      --with-pidfile=/run/unbound/unbound.pid \\\n      --with-ssl=/usr/local/ssl3\"\n  elif [ -e \"/usr/local/ssl/\" ]; then\n    
_mrun \"bash ./configure \\\n      --prefix=/usr \\\n      --with-libevent \\\n      --with-pidfile=/run/unbound/unbound.pid \\\n      --with-ssl=/usr/local/ssl\"\n  else\n    _mrun \"bash ./configure \\\n      --prefix=/usr \\\n      --with-libevent \\\n      --with-pidfile=/run/unbound/unbound.pid\"\n  fi\n  _mrun \"make -j $(nproc) --quiet\"\n  _mrun \"make --quiet install\"\n}\n\n#\n# DNS cache Unbound.\n_dns_unbound_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _dns_unbound_install_upgrade\"\n  fi\n  _isUnbnd=\"$(which unbound)\"\n  [ -x \"${_isUnbnd}\" ] && _isUnbndErr=$(${_isUnbnd} -V 2>&1)\n  [ -x \"${_isUnbnd}\" ] && _isUnbLibvt=$(ldd ${_isUnbnd} | grep libevent 2>&1)\n  if [[ \"${_isUnbndErr}\" =~ \"cannot open shared object file\" ]] \\\n    || [[ \"${_isUnbndErr}\" =~ \"No such file or directory\" ]] \\\n    || [[ ! \"${_isUnbLibvt}\" =~ \"libevent\" ]] \\\n    || [ -z \"${_isUnbndErr}\" ]; then\n    _isUnbndFix=YES\n  else\n    _isUnbndFix=NO\n    [ -x \"/etc/init.d/unbound\" ] && [ ! -e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n    _isUnbndBr=$(service unbound restart 2>&1)\n    [[ \"${_isUnbndBr}\" =~ \"fatal error\" ]] && _isUnbndFix=YES\n    [[ \"${_isUnbndBr}\" =~ \"not found\" ]] && _isUnbndFix=YES\n    [[ \"${_isUnbndBr}\" =~ \"address already in use\" ]] && _killPdnsd=YES\n  fi\n  if [ ! -x \"${_isUnbnd}\" ] \\\n    || [ -z \"${_isUnbnd}\" ] \\\n    || [ \"${_isUnbndFix}\" = \"YES\" ] \\\n    || [ -e \"/usr/etc/unbound/unbound.pid\" ] \\\n    || [ -e \"/etc/unbound/unbound.conf.d/remote-control.conf\" ] \\\n    || [ ! -e \"/run/unbound\" ] \\\n    || [ ! -x \"/usr/libexec/unbound-helper\" ] \\\n    || [ ! -x \"/etc/init.d/unbound\" ] \\\n    || [ ! 
-e \"/etc/resolvconf/run/interface/lo.unbound\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installing DNS cache Unbound server from packages...\"\n    fi\n    if [ -e \"/etc/resolv.conf\" ]; then\n      cp -af /etc/resolv.conf ${_vBs}/resolv.conf.pre-${_xSrl}-${_X_VERSION}-${_NOW}\n    fi\n    _check_dns_settings\n    _fix_dns_settings\n    if [ \"${_USE_PROVIDER_DNS}\" != \"YES\" ]; then\n      chattr -i /etc/resolv.conf\n      rm -f /etc/resolv.conf\n      echo \"### BOA-DNS-Config ###\" > /etc/resolv.conf\n      echo \"nameserver 1.1.1.1\" >> /etc/resolv.conf\n      echo \"nameserver 8.8.8.8\" >> /etc/resolv.conf\n      echo \"nameserver 9.9.9.9\" >> /etc/resolv.conf\n      chmod 0644 /etc/resolv.conf\n    fi\n    _apt_clean_update\n    [ -e \"/usr/etc/unbound/unbound.pid\" ] && _isUnbndFix=YES\n    if [ \"${_isUnbndFix}\" = \"YES\" ]; then\n      ###\n      for _PKG in unbound unbound-anchor unbound-host dns-root-data; do\n        if _pkg_installed \"${_PKG}\"; then\n          _mrun \"apt-get remove ${_PKG} -y --purge --auto-remove -qq\"\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"PCKG: ${_PKG} removed as requested.\"\n          fi\n        fi\n      done\n      ###\n      rm -rf /etc/unbound\n      rm -rf /usr/etc/unbound\n    fi\n    if dpkg-query -W -f='${Status}' resolvconf 2>/dev/null | grep -q \"install ok installed\"; then\n      _mrun \"apt-get remove resolvconf -y --purge --auto-remove -qq\"\n    fi\n    rm -rf /lib/init/rw/resolvconf\n    rm -rf /run/resolvconf\n    rm -rf /etc/resolvconf\n    if dpkg-query -W -f='${Status}' resolvconf 2>/dev/null | grep -q \"install ok installed\"; then\n      _mrun \"apt-get remove resolvconf -y --purge --auto-remove -qq\"\n    fi\n    if [ \"${_USE_PROVIDER_DNS}\" != \"YES\" ]; then\n      chattr -i /etc/resolv.conf\n      rm -f /etc/resolv.conf\n      echo \"### BOA-DNS-Config ###\" > /etc/resolv.conf\n      echo \"nameserver 1.1.1.1\" >> 
/etc/resolv.conf\n      echo \"nameserver 8.8.8.8\" >> /etc/resolv.conf\n      echo \"nameserver 9.9.9.9\" >> /etc/resolv.conf\n      chmod 0644 /etc/resolv.conf\n    fi\n    if [ -x \"/usr/sbin/aa-teardown\" ]; then\n      aa-teardown &> /dev/null\n    fi\n    for _PKG in libevent-dev resolvconf unbound unbound-anchor unbound-host dns-root-data; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    if [ -x \"/usr/sbin/aa-teardown\" ]; then\n      aa-teardown &> /dev/null\n    fi\n    ###\n    [ -d /run/unbound ] || mkdir -p /run/unbound\n    [ -d /run/unbound ] && chown -R unbound:unbound /run/unbound\n    [ -d /var/log/unbound ] || mkdir -p /var/log/unbound/\n    touch /var/log/unbound/unbound.log\n    chown -R unbound:unbound /var/log/unbound\n    [ -d /var/lib/unbound ] || mkdir -p /var/lib/unbound/\n    [ -d /usr/share/dns ] && cp -a /usr/share/dns/* /var/lib/unbound/\n    [ -d /var/lib/unbound ] && chown -R unbound:unbound /var/lib/unbound\n    [ ! -e \"/etc/unbound/unbound_control.key\" ] && _mrun \"unbound-control-setup\"\n    cp -af ${_locCnf}/dns/unbound /etc/init.d/unbound\n    chmod 755 /etc/init.d/unbound\n    _mrun \"update-rc.d unbound defaults\"\n    ###\n    if [ -e \"/etc/apparmor.d\" ] && [ ! -e \"/etc/apparmor.d/usr.sbin.unbound\" ]; then\n      cat ${_locCnf}/apparmor/usr.sbin.unbound > /etc/apparmor.d/usr.sbin.unbound\n      chmod 644 /etc/apparmor.d/usr.sbin.unbound\n    fi\n    ###\n    [ ! -d \"/usr/libexec\" ] && mkdir -p /usr/libexec\n    [ ! 
-e \"/usr/libexec/unbound-helper\" ] && cp -a ${_locCnf}/dns/unbound-helper /usr/libexec/\n    [ -e \"/usr/libexec/unbound-helper\" ] && chmod 755 /usr/libexec/unbound-helper\n    cat ${_locCnf}/dns/unbound.conf > /etc/unbound/unbound.conf.d/unbound.conf\n    if [ \"${_USE_PROVIDER_DNS}\" = \"YES\" ] \\\n      && [ -e \"${_vBs}/resolv.conf.vanilla\" ]; then\n      cat ${_vBs}/resolv.conf.vanilla > /etc/resolvconf/resolv.conf.d/base\n    fi\n    sed -i \"s/pdnsd/unbound/g\" /etc/resolvconf/interface-order\n    mkdir -p /etc/resolvconf/run/interface\n    echo \"nameserver 127.0.0.1\" > /etc/resolvconf/run/interface/lo.unbound\n    resolvconf -u &> /dev/null\n    [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n    [ -x \"/etc/init.d/unbound\" ] && [ ! -e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n    _mrun \"pkill -u unbound -x unbound\"\n    _mrun \"service unbound restart\"\n  fi\n  _check_dns_settings\n  if [ \"${_USE_PROVIDER_DNS}\" = \"YES\" ] \\\n    && [ -e \"${_vBs}/resolv.conf.vanilla\" ]; then\n    cat ${_vBs}/resolv.conf.vanilla > /etc/resolvconf/resolv.conf.d/base\n    sed -i \"s/pdnsd/unbound/g\" /etc/resolvconf/interface-order\n    mkdir -p /etc/resolvconf/run/interface\n    echo \"nameserver 127.0.0.1\" > /etc/resolvconf/run/interface/lo.unbound\n    resolvconf -u &> /dev/null\n  fi\n  if [ -e \"/etc/resolvconf/run/resolv.conf\" ] \\\n    || [ -e \"/run/resolvconf/resolv.conf\" ]; then\n    _RESOLV_LOC=$(grep \"nameserver 127.0.0.1\" /etc/resolv.conf 2>&1)\n    _RESOLV_ELN=$(grep \"nameserver 1.1.1.1\" /etc/resolv.conf 2>&1)\n    _RESOLV_EGT=$(grep \"nameserver 8.8.8.8\" /etc/resolv.conf 2>&1)\n    _RESOLV_NIN=$(grep \"nameserver 9.9.9.9\" /etc/resolv.conf 2>&1)\n    if [[ \"${_RESOLV_LOC}\" =~ \"nameserver 127.0.0.1\" ]] \\\n      && [[ \"${_RESOLV_ELN}\" =~ \"nameserver 1.1.1.1\" ]] \\\n      && [[ \"${_RESOLV_EGT}\" =~ \"nameserver 8.8.8.8\" ]] \\\n      && [[ 
\"${_RESOLV_NIN}\" =~ \"nameserver 9.9.9.9\" ]]; then\n      _DO_NOTHING=YES\n    else\n      chattr -i /etc/resolv.conf\n      rm -f /etc/resolv.conf\n      echo \"### BOA-DNS-Config ###\" > /etc/resolv.conf\n      echo \"nameserver 127.0.0.1\" >> /etc/resolv.conf\n      echo \"nameserver 1.1.1.1\" >> /etc/resolv.conf\n      echo \"nameserver 8.8.8.8\" >> /etc/resolv.conf\n      echo \"nameserver 9.9.9.9\" >> /etc/resolv.conf\n      chmod 0644 /etc/resolv.conf\n      [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n      [ -x \"/etc/init.d/unbound\" ] && [ ! -e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n      _mrun \"pkill -u unbound -x unbound\"\n      _mrun \"service unbound restart\"\n    fi\n  fi\n  if [ ! -f \"/etc/resolv.conf\" ]; then\n    chattr -i /etc/resolv.conf\n    rm -f /etc/resolv.conf\n    echo \"### BOA-DNS-Config ###\" > /etc/resolv.conf\n    echo \"nameserver 127.0.0.1\" >> /etc/resolv.conf\n    echo \"nameserver 1.1.1.1\" >> /etc/resolv.conf\n    echo \"nameserver 8.8.8.8\" >> /etc/resolv.conf\n    echo \"nameserver 9.9.9.9\" >> /etc/resolv.conf\n    chmod 0644 /etc/resolv.conf\n    [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n    [ -x \"/etc/init.d/unbound\" ] && [ ! 
-e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n    _mrun \"pkill -u unbound -x unbound\"\n    _mrun \"service unbound restart\"\n  fi\n  if [ -e \"/etc/NetworkManager/NetworkManager.conf\" ]; then\n    sed -i \"s/^dns=.*/dns=unbound/g\" \\\n      /etc/NetworkManager/NetworkManager.conf &> /dev/null\n    _mrun \"service network-manager restart\"\n  fi\n  if [ -x \"${_isUnbnd}\" ] && [ \"${_isUnbndFix}\" = \"NO\" ]; then\n    _UNBOUND_V_ITD=$(${_isUnbnd} -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' \\\n      | sed \"s/Configure//gi\" 2>&1)\n    if [ \"${_UNBOUND_V_ITD}\" = \"${_UNBOUND_VRN}\" ]; then\n      _UNBOUND_INSTALL_REQUIRED=NO\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Unbound version ${_UNBOUND_V_ITD}, OK\"\n      fi\n    else\n      _UNBOUND_INSTALL_REQUIRED=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Unbound version ${_UNBOUND_V_ITD}, upgrade required\"\n      fi\n    fi\n  else\n    _UNBOUND_INSTALL_REQUIRED=YES\n  fi\n  [ -e \"/usr/etc/unbound/unbound.pid\" ] && _UNBOUND_INSTALL_REQUIRED=YES\n  [ -e \"/usr/etc/unbound/unbound.pid\" ] && rm -f /usr/etc/unbound/unbound.pid\n  if [ \"${_UNBOUND_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    _install_unbound_src\n  fi\n  [ -e \"/usr/etc/unbound\" ] && cp -af /etc/unbound/*.key /usr/etc/unbound/\n  [ -e \"/usr/etc/unbound\" ] && cp -af /etc/unbound/*.pem /usr/etc/unbound/\n  [ -e \"/usr/etc/unbound\" ] && cp -af /etc/unbound/unbound.conf.d/unbound.conf /usr/etc/unbound/\n  _isUnbnd=\"$(which unbound)\"\n  _isUnbndErr=$(${_isUnbnd} -V 2>&1)\n  if [[ \"${_isUnbndErr}\" =~ \"cannot open shared object file\" ]] \\\n    || [[ \"${_isUnbndErr}\" =~ \"No such file or directory\" ]] \\\n    || [ -z \"${_isUnbndErr}\" ]; then\n    _isUnbndFix=YES\n  else\n    _isUnbndFix=NO\n  fi\n  [ -e \"/etc/unbound/unbound.conf.d/remote-control.conf\" ] && rm -f 
/etc/unbound/unbound.conf.d/remote-control.conf\n  if [ \"${_isUnbndFix}\" = \"NO\" ] || [ \"${_killPdnsd}\" = \"YES\" ]; then\n    _isPdnsd=\"$(which pdnsd)\"\n    if [ -x \"${_isPdnsd}\" ]; then\n      _mrun \"service pdnsd stop\"\n      _mrun \"update-rc.d -f pdnsd remove\"\n      _mrun \"killall -9 pdnsd\"\n      mv -f ${_isPdnsd} /var/backups\n    fi\n    [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n    _mrun \"pkill -u unbound -x unbound\"\n    _mrun \"service unbound restart\"\n  fi\n  [ -e \"/etc/default/unbound\" ] && _isNxdEtc=$(grep \"always_nxdomain\" /etc/default/unbound 2>&1)\n  [ -e \"/etc/init.d/unbound\" ] && _isIntUnb=$(grep \"apply_ci_nomail\" /etc/init.d/unbound 2>&1)\n  if [[ \"${_isNxdEtc}\" =~ \"always_nxdomain\" ]] \\\n    && [[ ! \"${_isIntUnb}\" =~ \"apply_ci_nomail\" ]]; then\n    cp -af ${_locCnf}/dns/unbound /etc/init.d/unbound\n    chmod 755 /etc/init.d/unbound\n    _mrun \"update-rc.d unbound defaults\"\n    _mrun \"service unbound reload\"\n  fi\n  if [ -x \"${_isUnbnd}\" ] \\\n    && [ \"${_isUnbndFix}\" = \"NO\" ] \\\n    && [ -e \"/etc/resolvconf/interface-order\" ]; then\n    sed -i \"s/pdnsd/unbound/g\" /etc/resolvconf/interface-order\n    [ -e \"/etc/resolvconf/update.d/unbound\" ] && chmod 644 /etc/resolvconf/update.d/unbound\n    _mrun \"pkill -u unbound -x unbound\"\n    _mrun \"service unbound restart\"\n  fi\n}\n\n_check_github_for_aegir_head_mode() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_github_for_aegir_head_mode\"\n  fi\n  if [ \"${_SYSTEM_UP_ONLY}\" = \"NO\" ] \\\n    && [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    rm -rf /opt/tmp/test-*\n    _check_connection\n    _GITHUB_TEST=$(git clone ${_gitHub}/provision.git \\\n      /opt/tmp/test-provision 2>&1)\n    if [[ \"${_GITHUB_TEST}\" =~ \"fatal\" ]]; then\n      echo \" \"\n      _msg \"EXIT on error (provision) due to GitHub downtime\"\n      _msg \"Please try to run this script again in a few 
minutes\"\n      _msg \"You may want to check https://www.githubstatus.com\"\n      _msg \"Bye\"\n      rm -rf /opt/tmp/test-*\n      _clean_pid_exit _check_github_for_aegir_head_mode_a\n    fi\n  fi\n}\n\n_check_db_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_db_src\"\n  fi\n  if ! command nc -w 10 -z repo.percona.com 443 >/dev/null 2>&1 ; then\n    echo \" \"\n    _msg \"EXIT on error due to repo.percona.com downtime\"\n    _msg \"Please try to run this script again in a few minutes\"\n    _msg \"or better yet, hours\"\n    _msg \"Bye\"\n    _clean_pid_exit _check_db_src_a\n  fi\n}\n\n_check_ip_hostname() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_ip_hostname\"\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    if [ ! -z \"${_LOCAL_NETWORK_IP}\" ]; then\n      if [ -z \"${_LOCAL_NETWORK_HN}\" ]; then\n        _msg \"FATAL ERROR: you must specify also _LOCAL_NETWORK_HN\"\n        _clean_pid_exit _check_ip_hostname_a\n      else\n        _DNS_SETUP_TEST=NO\n        _SMTP_RELAY_TEST=NO\n        _MY_OWNIP=\"${_LOCAL_NETWORK_IP}\"\n        _MY_HOSTN=\"${_LOCAL_NETWORK_HN}\"\n        _MY_FRONT=\"${_LOCAL_NETWORK_HN}\"\n      fi\n    fi\n    if [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n      _THIS_DB_HOST=localhost\n      _LOC_DOM=\"${_MY_HOSTN}\"\n      if [ -z \"${_MY_OWNIP}\" ]; then\n        _find_correct_ip\n        _MY_OWNIP=\"${_LOC_IP}\"\n      else\n        _LOC_IP=\"${_MY_OWNIP}\"\n      fi\n    fi\n    if [ ! -z \"${_MY_OWNIP}\" ]; then\n      if [ ! -z \"${_MY_HOSTN}\" ]; then\n        _S_N=${_MY_HOSTN}\n        _S_T=${_S_N#*.*}\n        _S_Q=${_S_N%%${_S_T}}\n        _S_E=${_S_Q%*.*}\n        if [ ! 
-z \"${_LOCAL_NETWORK_HN}\" ]; then\n          if [ \"${_EASY_SETUP}\" = \"LOCAL\" ]; then\n            _DO_NOTHING=YES\n          else\n            if [ -n \"${_MY_OWNIP}\" ] && grep -qE \"${_MY_OWNIP}\\s\" /etc/hosts; then\n              sed -i \"s/^${_MY_OWNIP}.*//g\" /etc/hosts &> /dev/null\n            fi\n            echo \"${_MY_OWNIP} ${_MY_HOSTN} chive.${_MY_HOSTN} sqlbuddy.${_MY_HOSTN} cgp.${_MY_HOSTN} ${_S_E}\" >> /etc/hosts\n          fi\n        fi\n        hostname -b ${_MY_HOSTN} ### force our custom FQDN/local hostname\n        echo \"${_MY_HOSTN}\" > /etc/hostname\n        echo \"${_MY_HOSTN}\" > /etc/mailname\n      fi\n      _THISHTIP=\"${_MY_OWNIP}\"\n      _THISHOST=\"${_MY_HOSTN}\"\n      _LOC_DOM=\"${_THISHOST}\"\n      _find_correct_ip\n      _THISRDIP=\"${_LOC_IP}\"\n      if [ \"${_THISRDIP}\" = \"${_THISHTIP}\" ]; then\n        _FQDNTEST=\"TRUE\"\n        _LOC_DOM=\"${_MY_FRONT}\"\n        _find_correct_ip\n        _THEFRDIP=\"${_LOC_IP}\"\n        if [ \"${_THEFRDIP}\" = \"${_THISHTIP}\" ]; then\n          _TESTHOST=\"$(hostname -f 2>/dev/null || uname -n)\"\n          _LOC_DOM=\"${_TESTHOST}\"\n          _find_correct_ip\n          _TESTRDIP=\"${_LOC_IP}\"\n          if [ \"${_TESTRDIP}\" = \"${_THISHTIP}\" ]; then\n            _FQDNTEST=\"TRUE\"\n            hostname -b ${_TESTHOST}\n          else\n           _FQDNTEST=\"FALSE\"\n          fi\n        else\n          _FQDNTEST=\"FALSE\"\n        fi\n      else\n        _FQDNTEST=\"FALSE\"\n      fi\n    else\n      _find_correct_ip\n      _THISHTIP=\"${_LOC_IP}\"\n      _FQDNPROB=\"$(hostname -f 2>/dev/null || uname -n)\"\n      _FQDNTEST=\"FALSE\"\n      _THISHOST=\"$(hostname -f 2>/dev/null || uname -n)\"\n      if [ ! 
-z \"${_FQDNPROB}\" ]; then\n        _THISHOST=\"$(hostname -f 2>/dev/null || uname -n)\"\n        _THISHOST=${_THISHOST//[^a-zA-Z0-9-.]/}\n        _THISHOST=$(echo -n ${_THISHOST} | tr A-Z a-z 2>&1)\n        _LOC_DOM=\"${_THISHOST}\"\n        _find_correct_ip\n        _THISRDIP=\"${_LOC_IP}\"\n        if [ \"${_THISRDIP}\" = \"${_THISHTIP}\" ]; then\n          _FQDNTEST=\"TRUE\"\n          hostname -b ${_THISHOST}\n        else\n          _FQDNTEST=\"FALSE\"\n          _REVHOSTN=$(host ${_THISHTIP} | cut -d: -f2 | awk '{ print $5}' 2>&1)\n          _REVHOSTN=$(echo -n ${_REVHOSTN} |sed 's/\\(.*\\)./\\1/' 2>&1)\n          _REVHOSTN=${_REVHOSTN//[^a-zA-Z0-9-.]/}\n          _REVHOSTN=$(echo -n ${_REVHOSTN} | tr A-Z a-z 2>&1)\n          _LOC_DOM=\"${_REVHOSTN}\"\n          _find_correct_ip\n          _REVHSTIP=\"${_LOC_IP}\"\n          if [ \"${_REVHSTIP}\" = \"${_THISHTIP}\" ]; then\n            hostname -b ${_REVHOSTN}\n            _THISHOST=\"${_REVHOSTN}\"\n            _FQDNTEST=\"TRUE\"\n          else\n            _FQDNTEST=\"FALSE\"\n          fi\n        fi\n      else\n        _REVHOSTN=$(host ${_THISHTIP} | cut -d: -f2 | awk '{ print $5}' 2>&1)\n        _REVHOSTN=$(echo -n ${_REVHOSTN} |sed 's/\\(.*\\)./\\1/' 2>&1)\n        _REVHOSTN=${_REVHOSTN//[^a-zA-Z0-9-.]/}\n        _REVHOSTN=$(echo -n ${_REVHOSTN} | tr A-Z a-z 2>&1)\n        _LOC_DOM=\"${_REVHOSTN}\"\n        _find_correct_ip\n        _REVHSTIP=\"${_LOC_IP}\"\n        if [ \"${_REVHSTIP}\" = \"${_THISHTIP}\" ]; then\n          hostname -b ${_REVHOSTN}\n          _THISHOST=\"${_REVHOSTN}\"\n          _FQDNTEST=\"TRUE\"\n        else\n         _FQDNTEST=\"FALSE\"\n        fi\n      fi\n    fi\n    if [ ! 
-z \"${_MY_FRONT}\" ]; then\n      _THIS_FRONT=\"${_MY_FRONT}\"\n    else\n      _THIS_FRONT=\"${_THISHOST}\"\n    fi\n    if [ \"${_DNS_SETUP_TEST}\" = \"NO\" ]; then\n      _FQDNTEST=TRUE\n    fi\n    if [ \"${_THISHOST}\" = \"localhost\" ]; then\n      _msg \"FATAL ERROR: you can't use localhost as your FQDN hostname\"\n      _msg \"Please try something like: aegir.local\"\n      _clean_pid_exit _check_ip_hostname_b\n    fi\n    if [ \"${_FQDNTEST}\" = \"FALSE\" ]; then\n      echo \" \"\n      _msg \"EXIT on error due to invalid DNS setup\"\n      if [ ! -z \"${_MY_OWNIP}\" ]; then\n        cat <<EOF\n\n    * Your custom _MY_OWNIP is set to \"${_MY_OWNIP}\"\n    * Your custom _MY_HOSTN is set to \"${_MY_HOSTN}\"\n    * Your custom _MY_FRONT is set to \"${_MY_FRONT}\"\n\n    * Your _MY_HOSTN and/or _MY_FRONT doesn't match your _MY_OWNIP,\n      or your hostname is not set properly yet.\n\n    * Please make sure that the command below returns your FQDN hostname \"${_MY_HOSTN}\":\n\n    $ uname -n\n\nEOF\n      fi\n      cat <<EOF\n\n    Your server needs a working FQDN hostname pointing to its IP address.\n    This means that you have to configure DNS for your hostname before\n    trying to install BOA. 
Reverse DNS is not required, though.\n    Make sure that DNS A record for ${_THISHOST} points to ${_THISHTIP} and\n    then allow some time for DNS propagation before trying this again.\n    Alternatively, disable this check with _DNS_SETUP_TEST=NO\n\nEOF\n      _msg \"EXIT on error due to invalid DNS setup\"\n      _clean_pid_exit _check_ip_hostname_c\n    else\n      echo \"${_THISHOST}\" > /etc/hostname\n      echo \"${_THISHOST}\" > /etc/mailname\n      hostname -b ${_THISHOST}\n      _msg \"INFO: DNS test: OK\"\n    fi\n    echo \" \"\n    : \"${_DISPLAY_CHECKPOINT:=YES}\"\n    if [ \"${_DISPLAY_CHECKPOINT}\" = \"YES\" ]; then\n      if [ -z \"${_THISHTIP}\" ]; then\n        _find_correct_ip\n        _THISHTIP=\"${_LOC_IP}\"\n      fi\n      _find_server_city\n      [ -n \"${_LOC_CITY}\" ] && _LOC_CITY=\"$(echo \"${_LOC_CITY}\" | tr '+' ' ')\"\n      [ -z \"${_LOC_CITY}\" ] && _LOC_CITY=\"Cicely\"\n      _VENDOR=\"$(dmidecode -s system-manufacturer 2>&1)\"\n      [ -z \"${_VENDOR}\" ] && _VENDOR=\"Maurice Minnifield\"\n      _msg \"INSTALL START -> checkpoint: \"\n      cat <<EOF\n\n      * Your sysadmin Email is ${_MY_EMAIL}\n      * Your server City is ${_LOC_CITY}\n      * Your server Vendor is ${_VENDOR}\n      * Your server System is ${_VIRT_IS}\n      * Your server IP is ${_THISHTIP}\n      * Your server Hostname is ${_THISHOST}\nEOF\n      echo \" \"\n    fi\n    if _prompt_yes_no \"Do you want to proceed with the install?\" ; then\n      true\n    else\n      echo \"Installation aborted by you\"\n      _clean_pid_exit _check_ip_hostname_d\n    fi\n  elif [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _THISHOST=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    _THISHOST=${_THISHOST//[^a-zA-Z0-9-.]/}\n    _THISHOST=$(echo -n ${_THISHOST} | tr A-Z a-z 2>&1)\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      _THIS_FRONT=$(cat /var/aegir/.drush/hm.alias.drushrc.php \\\n        | grep \"uri'\" \\\n        | 
cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n    elif [ ! -z \"${_MY_FRONT}\" ]; then\n      _THIS_FRONT=\"${_MY_FRONT}\"\n    else\n      _msg \"EXIT on error: Ægir domain not found and not specified\"\n      _msg \"Please specify your working Ægir domain as the value of _MY_FRONT\"\n      _clean_pid_exit _check_ip_hostname_e\n    fi\n    : \"${_DISPLAY_CHECKPOINT:=YES}\"\n    if [ \"${_DISPLAY_CHECKPOINT}\" = \"YES\" ]; then\n      if [ -z \"${_THISHTIP}\" ]; then\n        _find_correct_ip\n        _THISHTIP=\"${_LOC_IP}\"\n      fi\n      _find_server_city\n      [ -n \"${_LOC_CITY}\" ] && _LOC_CITY=\"$(echo \"${_LOC_CITY}\" | tr '+' ' ')\"\n      [ -z \"${_LOC_CITY}\" ] && _LOC_CITY=\"Cicely\"\n      _VENDOR=\"$(dmidecode -s system-manufacturer 2>&1)\"\n      [ -z \"${_VENDOR}\" ] && _VENDOR=\"Maurice Minnifield\"\n      echo \" \"\n      _msg \"UPGRADE START -> checkpoint: \"\n      cat <<EOF\n\n    * Your sysadmin Email is ${_MY_EMAIL}\n    * Your server City is ${_LOC_CITY}\n    * Your server Vendor is ${_VENDOR}\n    * Your server System is ${_VIRT_IS}\n    * Your server IP is ${_THISHTIP}\n    * Your server Hostname is ${_THISHOST}\nEOF\n      echo \" \"\n    fi\n    if _prompt_yes_no \"Do you want to proceed with the upgrade?\" ; then\n      true\n    else\n      echo \"Upgrade aborted by you\"\n      _clean_pid_exit _check_ip_hostname_f\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/functions/firewall.sh.inc",
    "content": "#\n# Fix csf.uidignore file to whitelist important system uids when UID_INTERVAL != 0\n_fix_lfd_uidignore() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_lfd_uidignore\"\n  fi\n  _THIS_FILE=/etc/csf/csf.uidignore\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _CSF_UIDIGNORE_TEST=$(grep \"unbound\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_CSF_UIDIGNORE_TEST}\" =~ \"unbound\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"#root\"          >> /etc/csf/csf.uidignore\n      echo `id -u root`     >> /etc/csf/csf.uidignore\n      if [ -r \"/etc/unbound/unbound.conf.d/unbound.conf\" ]; then\n        echo \"#unbound\"       >> /etc/csf/csf.uidignore\n        echo `id -u unbound`  >> /etc/csf/csf.uidignore\n      fi\n      echo \"#postfix\"       >> /etc/csf/csf.uidignore\n      echo `id -u postfix`  >> /etc/csf/csf.uidignore\n      echo \"#www-data\"      >> /etc/csf/csf.uidignore\n      echo `id -u www-data` >> /etc/csf/csf.uidignore\n    fi\n    if [ -e \"/usr/sbin/named\" ]; then\n      _CSF_UIDIGNORE_TEST=$(grep \"bind\" ${_THIS_FILE} 2>&1)\n      if [[ \"${_CSF_UIDIGNORE_TEST}\" =~ \"bind\" ]]; then\n        _DO_NOTHING=YES\n      else\n        echo \"#bind\"        >> /etc/csf/csf.uidignore\n        echo `id -u bind`   >> /etc/csf/csf.uidignore\n      fi\n    fi\n    sed -i \"/^$/d\" ${_THIS_FILE} &> /dev/null\n  fi\n}\n\n#\n# Fix csf.fignore file to whitelist /tmp/drush_*\n_fix_lfd_whitelist() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_lfd_whitelist\"\n  fi\n  _THIS_FILE=/etc/csf/csf.fignore\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _CSF_WHITELIST_TEST=$(grep \"jetty\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_CSF_WHITELIST_TEST}\" =~ \"jetty\" ]]; then\n      _DO_NOTHING=YES\n    else\n      sed -i \"s/.*\\/tmp\\/.*//g\" ${_THIS_FILE} &> /dev/null\n      wait\n      sed -i \"/^$/d\"            ${_THIS_FILE} &> /dev/null\n      wait\n      echo \"/tmp/drush_tmp.*\"      >> ${_THIS_FILE}\n      echo 
\"/tmp/drush_make_tmp.*\" >> ${_THIS_FILE}\n      echo \"/tmp/make_tmp.*\"       >> ${_THIS_FILE}\n      echo \"/tmp/hsperfdata.*\"     >> ${_THIS_FILE}\n      echo \"/tmp/jetty.*\"          >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Ensure /usr/sbin/ipset and /sbin/ipset both resolve to the actual ipset binary.\n_ensure_ipset_symlinks() {\n  _IPSET_REAL=\"$(command -v ipset 2>/dev/null || true)\"\n  if [ -z \"${_IPSET_REAL}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"ipset not installed; skipping symlink fixes\"\n    fi\n    return 0\n  fi\n\n  # Resolve through any intermediate symlinks.\n  if [ -L \"${_IPSET_REAL}\" ]; then\n    _IPSET_REAL=\"$(readlink -f \"${_IPSET_REAL}\")\"\n  fi\n\n  for _CAND in /usr/sbin/ipset /sbin/ipset; do\n    _PARENT=\"$(dirname \"${_CAND}\")\"\n    [ -d \"${_PARENT}\" ] || mkdir -p \"${_PARENT}\"\n\n    # If the candidate *is* the real file, nothing to do.\n    if [ \"${_CAND}\" = \"${_IPSET_REAL}\" ]; then\n      continue\n    fi\n\n    # If it exists, check whether it already resolves to the right target.\n    if [ -e \"${_CAND}\" ] || [ -L \"${_CAND}\" ]; then\n      _TARGET=\"$(readlink -f \"${_CAND}\" 2>/dev/null || true)\"\n      if [ \"${_TARGET}\" = \"${_IPSET_REAL}\" ]; then\n        continue\n      fi\n    fi\n\n    ln -sfn \"${_IPSET_REAL}\" \"${_CAND}\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"Linked ${_CAND} -> ${_IPSET_REAL}\"\n    fi\n  done\n}\n\n#\n# install csf/lfd firewall\n_csf_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _csf_install\"\n  fi\n  if [ \"${_CSF_MODE}\" = \"install\" ]; then\n    _msg \"INFO: Installing csf/lfd firewall...\"\n  else\n    _msg \"INFO: Upgrading csf/lfd firewall...\"\n  fi\n  cd /var/opt\n  _IPSET_TEST=\"$(which ipset)\"\n  if [ ! 
-x \"${_IPSET_TEST}\" ]; then\n    _apt_clean_update\n    if [ -L \"/sbin/ipset\" ]; then\n      rm -f /sbin/ipset\n    fi\n    if [ -L \"/usr/sbin/ipset\" ]; then\n      rm -f /usr/sbin/ipset\n    fi\n    _mrun \"apt-get install ipset ${_aptYesUnth}\"\n  fi\n\n  _ensure_ipset_symlinks\n\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n  if [ -x \"/sbin/iptables\" ] && [ ! -e \"/usr/sbin/iptables\" ]; then\n    ln -sfn /sbin/iptables /usr/sbin/iptables\n  fi\n  if [ -x \"/usr/sbin/iptables\" ] && [ ! -e \"/sbin/iptables\" ]; then\n    ln -sfn /usr/sbin/iptables /sbin/iptables\n  fi\n  if [ -x \"/sbin/iptables-save\" ] && [ ! -e \"/usr/sbin/iptables-save\" ]; then\n    ln -sfn /sbin/iptables-save /usr/sbin/iptables-save\n  fi\n  if [ -x \"/usr/sbin/iptables-save\" ] && [ ! -e \"/sbin/iptables-save\" ]; then\n    ln -sfn /usr/sbin/iptables-save /sbin/iptables-save\n  fi\n  if [ -x \"/sbin/iptables-restore\" ] && [ ! -e \"/usr/sbin/iptables-restore\" ]; then\n    ln -sfn /sbin/iptables-restore /usr/sbin/iptables-restore\n  fi\n  if [ -x \"/usr/sbin/iptables-restore\" ] && [ ! -e \"/sbin/iptables-restore\" ]; then\n    ln -sfn /usr/sbin/iptables-restore /sbin/iptables-restore\n  fi\n  if [ -x \"/sbin/ip6tables\" ] && [ ! -e \"/usr/sbin/ip6tables\" ]; then\n    ln -sfn /sbin/ip6tables /usr/sbin/ip6tables\n  fi\n  if [ -x \"/usr/sbin/ip6tables\" ] && [ ! -e \"/sbin/ip6tables\" ]; then\n    ln -sfn /usr/sbin/ip6tables /sbin/ip6tables\n  fi\n  if [ -x \"/sbin/ip6tables-save\" ] && [ ! -e \"/usr/sbin/ip6tables-save\" ]; then\n    ln -sfn /sbin/ip6tables-save /usr/sbin/ip6tables-save\n  fi\n  if [ -x \"/usr/sbin/ip6tables-save\" ] && [ ! -e \"/sbin/ip6tables-save\" ]; then\n    ln -sfn /usr/sbin/ip6tables-save /sbin/ip6tables-save\n  fi\n  if [ -x \"/sbin/ip6tables-restore\" ] && [ ! -e \"/usr/sbin/ip6tables-restore\" ]; then\n    ln -sfn /sbin/ip6tables-restore /usr/sbin/ip6tables-restore\n  fi\n  if [ -x \"/usr/sbin/ip6tables-restore\" ] && [ ! 
-e \"/sbin/ip6tables-restore\" ]; then\n    ln -sfn /usr/sbin/ip6tables-restore /sbin/ip6tables-restore\n  fi\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n  rm -f ${_pthLog}/lastFire\n  if [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    _CSF_VRN=13.08\n  elif [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _CSF_VRN=12.04\n  fi\n  _get_dev_src \"csf-${_CSF_VRN}.tar.gz\"\n  cd csf\n  _mrun \"sh install.sh\"\n  _NFTABLES_TEST=$(iptables -V)\n  if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n    if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n      update-alternatives --set iptables /usr/sbin/iptables-legacy &> /dev/null\n    fi\n    if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n      update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n    fi\n    if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n      update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n    fi\n    if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n      update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n    fi\n  fi\n  cd /var/opt\n  _if_hosted_sys\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    _SSH_PORT=22\n  fi\n  _CSF_COMPATIBILITY_TEST=$(perl /etc/csf/csftest.pl 2>&1)\n  if [[ \"${_CSF_COMPATIBILITY_TEST}\" =~ \"RESULT: csf should function\" ]]; then\n    _CSF_COMPATIBILITY=YES\n  elif [[ \"${_CSF_COMPATIBILITY_TEST}\" =~ \"some features will not work\" ]]; then\n    _CSF_COMPATIBILITY=PARTIAL\n    sed -i \"s/^PORTFLOOD .*/PORTFLOOD = \\\"\\\"/g\" /etc/csf/csf.conf &> /dev/null\n    wait\n    sed -i \"s/^CONNLIMIT .*/CONNLIMIT = \\\"\\\"/g\" /etc/csf/csf.conf &> /dev/null\n    wait\n    sed -i \"s/^USE_CONNTRACK .*/USE_CONNTRACK = \\\"0\\\"/g\" /etc/csf/csf.conf &> /dev/null\n    wait\n  elif [[ \"${_CSF_COMPATIBILITY_TEST}\" =~ \"FATAL\" ]]; then\n    _CSF_COMPATIBILITY=NO\n  else\n    _CSF_COMPATIBILITY=NO\n  fi\n  if [ \"${_CSF_COMPATIBILITY}\" = \"YES\" ] \\\n    || [ \"${_CSF_COMPATIBILITY}\" = 
\"PARTIAL\" ]; then\n    if [ \"${_CSF_COMPATIBILITY}\" = \"PARTIAL\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"TEST: csf/lfd firewall should mostly work on this system\"\n      fi\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"TEST: csf/lfd firewall should work fine on this system\"\n      fi\n    fi\n    if [ -n \"${_CUSTOM_CONFIG_CSF}\" ] && [ \"${_CUSTOM_CONFIG_CSF}\" != \"YES\" ]; then\n      mv -f /etc/csf/csf.conf /etc/csf/csf.conf-pre-${_xSrl}-${_X_VERSION}-${_NOW}\n      cp -af ${_locCnf}/var/csf.conf /etc/csf/csf.conf\n      sed -i \"s/notify@omega8.cc/${_MY_EMAIL}/g\" /etc/csf/csf.conf\n      wait\n      sed -i \"s/TCP_IN = \\\"20,21,22,/TCP_IN = \\\"20,21,${_SSH_PORT},/g\" /etc/csf/csf.conf\n      wait\n      sed -i \"s/^CC_SRC .*/CC_SRC = \\\"2\\\"/g\" /etc/csf/csf.conf\n      wait\n      chmod 600 /etc/csf/csf.conf\n    fi\n    if [ -e \"/etc/ssh/sshd_config\" ]; then\n      sed -i \"s/^Port.*/Port ${_SSH_PORT}/g\"  /etc/ssh/sshd_config\n      wait\n      sed -i \"s/^#Port.*/Port ${_SSH_PORT}/g\" /etc/ssh/sshd_config\n      wait\n      sed -i \"s/^UsePrivilegeSeparation.*//g\" /etc/ssh/sshd_config\n      wait\n      sed -i \"s/^ClientAliveCountMax.*/ClientAliveCountMax 10000/g\" /etc/ssh/sshd_config\n      wait\n      sed -i \"s/^#TCPKeepAlive.*/TCPKeepAlive yes/g\" /etc/ssh/sshd_config\n      wait\n    fi\n    pkill -9 -f /usr/sbin/sshd || true\n    _mrun \"service ssh start\"\n    if [ \"${_CSF_MODE}\" = \"install\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: csf/lfd firewall installed\"\n      fi\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: csf/lfd firewall upgrade completed\"\n      fi\n    fi\n    touch ${_pthLog}/csf_${_X_VERSION}.log\n  else\n    _msg \"TEST: csf/lfd firewall can not be installed on this system\"\n  fi\n}\n\n# Function to add an IP to a CSF file if not already present\n_add_ip_if_missing() {\n  if [ 
\"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _add_ip_if_missing\"\n  fi\n  local _IP=\"$1\"\n  local _FILE=\"$2\"\n  if grep -q \"^${_IP}$\" \"${_FILE}\"; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Gateway IP (${_IP}) is already listed in ${_FILE}.\"\n    fi\n    _alreadyListed=TRUE\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Adding Gateway IP (${_IP}) to ${_FILE}...\"\n    fi\n    echo \"${_IP}\" >> \"${_FILE}\"\n  fi\n}\n\n# Function to add a Gateway IP to the CSF/LFD allow and ignore lists\n_csf_lfd_gateway_allow() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _csf_lfd_gateway_allow\"\n  fi\n  # Detect the gateway IP from the default route\n  _GATEWAY_IP=$(ip route show default 2>/dev/null | awk '/default/ {print $3}')\n  # Ensure we got a valid gateway IP\n  if [[ -z \"${_GATEWAY_IP}\" ]]; then\n    _msg \"WARN: Could not detect a default Gateway IP.\"\n  else\n    _CSF_ALLOW_FILE=\"/etc/csf/csf.allow\"\n    _CSF_IGNORE_FILE=\"/etc/csf/csf.ignore\"\n    # Add the gateway IP to csf.allow\n    _alreadyListed=FALSE\n    _add_ip_if_missing \"${_GATEWAY_IP}\" \"${_CSF_ALLOW_FILE}\"\n    [ \"${_alreadyListed}\" = \"FALSE\" ] && _msg \"INFO: The Gateway IP (${_GATEWAY_IP}) is now allowed by CSF/LFD.\"\n    # Add the gateway IP to csf.ignore\n    _alreadyListed=FALSE\n    _add_ip_if_missing \"${_GATEWAY_IP}\" \"${_CSF_IGNORE_FILE}\"\n    [ \"${_alreadyListed}\" = \"FALSE\" ] && _msg \"INFO: The Gateway IP (${_GATEWAY_IP}) is now ignored by CSF/LFD.\"\n    # Reload CSF to apply changes\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Reloading CSF...\"\n    fi\n    if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n      csf -ra &> /dev/null\n      synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    else\n      csf -r &> /dev/null\n    fi\n  fi\n}\n\n_csf_lfd_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: 
_csf_lfd_install_upgrade\"\n  fi\n  _IPSET_TEST=\"$(which ipset)\"\n  if [ ! -x \"${_IPSET_TEST}\" ]; then\n    _apt_clean_update\n    if [ -L \"/sbin/ipset\" ]; then\n      rm -f /sbin/ipset\n    fi\n    if [ -L \"/usr/sbin/ipset\" ]; then\n      rm -f /usr/sbin/ipset\n    fi\n    _mrun \"apt-get install ipset ${_aptYesUnth}\"\n  fi\n\n  _ensure_ipset_symlinks\n\n  if [[ \"${_XTRAS_LIST}\" =~ \"ALL\" ]] \\\n    || [[ \"${_XTRAS_LIST}\" =~ \"CSF\" ]]; then\n    _check_system_manufacturer\n    if [ ! -e \"/var/log/boa/cloud_vhost.pid\" ] \\\n      && [ ! -e \"/var/log/boa/${_SYS_MANUFACTURER_LC}_vm.pid\" ]; then\n      if [ ! -e \"/usr/sbin/csf\" ]; then\n        echo \" \"\n        if _prompt_yes_no \"Do you want to install csf/lfd firewall?\" ; then\n          true\n          _CSF_MODE=install\n          _csf_install\n          _csf_lfd_gateway_allow\n        else\n          _msg \"INFO: csf/lfd firewall installation skipped\"\n        fi\n      fi\n    fi\n  fi\n  if [ -x \"/usr/sbin/csf\" ] \\\n    || [ -e \"/usr/sbin/lfd\" ] \\\n    || [ -e \"/etc/cron.d/lfd\" ]; then\n    if [ \"${_CSF_COMPATIBILITY}\" = \"NO\" ]; then\n      _REMOVE_CSF=YES\n    elif [ \"${_VMFAMILY}\" = \"VS\" ] \\\n      && [ ! -e \"/boot/grub/grub.cfg\" ] \\\n      && [ ! 
-e \"/boot/grub/menu.lst\" ]; then\n      _REMOVE_CSF=YES\n    fi\n    if [ \"${_REMOVE_CSF}\" = \"YES\" ]; then\n      _mrun \"service lfd stop\"\n      pkill -9 -f ConfigServer\n      killall sleep &> /dev/null\n      rm -f /etc/csf/csf.error\n      csf -x &> /dev/null\n      _mrun \"update-rc.d -f csf remove\"\n      _mrun \"update-rc.d -f lfd remove\"\n      rm -f /etc/cron.d/{csf,lfd}*\n      rm -f /usr/sbin/{csf,lfd}\n      rm -f /etc/init.d/{csf,lfd}\n      rm -rf /etc/csf\n    else\n      _CSF_ITD=$(/usr/sbin/csf --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | tr -d \"v\" \\\n        | cut -d\"-\" -f1 \\\n        | awk '{ print $2}' 2>&1)\n      if [ \"${_CSF_ITD}\" = \"${_CSF_VRN}\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed CSF/LFD version ${_CSF_ITD}, OK\"\n        fi\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed CSF/LFD version ${_CSF_ITD}, upgrade required\"\n        fi\n        _CSF_MODE=upgrade\n        _csf_install\n        _csf_lfd_gateway_allow\n      fi\n      sed -i \"s/^AUTO_UPDATES .*/AUTO_UPDATES = \\\"0\\\"/g\" /etc/csf/csf.conf &> /dev/null\n      wait\n      if [ \"${_VMFAMILY}\" = \"VZ\" ]; then\n        sed -i \"s/^PORTFLOOD .*/PORTFLOOD = \\\"\\\"/g\" /etc/csf/csf.conf &> /dev/null\n        wait\n        sed -i \"s/^CONNLIMIT .*/CONNLIMIT = \\\"\\\"/g\" /etc/csf/csf.conf &> /dev/null\n        wait\n        sed -i \"s/^USE_CONNTRACK .*/USE_CONNTRACK = \\\"0\\\"/g\" /etc/csf/csf.conf &> /dev/null\n        wait\n      fi\n      if [ -e \"${_pthLog}/lastFire\" ]; then\n        rm -f ${_pthLog}/lastFire\n        _mrun \"service lfd stop\"\n        pkill -9 -f ConfigServer\n        killall sleep &> /dev/null\n        rm -f /etc/csf/csf.error\n        if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n          csf -ra &> /dev/null\n          synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n        else\n          csf -r &> 
/dev/null\n        fi\n        ### Linux kernel TCP SACK CVEs mitigation\n        ### CVE-2019-11477 SACK Panic\n        ### CVE-2019-11478 SACK Slowness\n        ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n        if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n          _SACK_TEST=$(ip6tables --list | grep tcpmss)\n          if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n            sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n            iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n            ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n            [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n          fi\n        fi\n      fi\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/functions/helper.sh.inc",
    "content": "\nexport _tRee=dev\n\n_if_hosted_sys() {\n  if [ -e \"/root/.host8.cnf\" ] \\\n    || [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _hostedSys=YES\n  else\n    _hostedSys=NO\n  fi\n}\n\n###\n### Noticeable messages\n###\n_msg() {\n  echo \"BOA [$(date +%T)] ==> $*\"\n}\n\n###\n### Simple prompt\n###\n_prompt_yes_no() {\nif [ \"${_AUTOPILOT}\" = \"YES\" ]; then\n  return 0\nelse\n  while true; do\n    printf \"$* [Y/n] \"\n    read _answer\n    if [ -z \"${_answer}\" ]; then\n      return 0\n    fi\n    case ${_answer} in\n      [Yy]|[Yy][Ee][Ss])\n        return 0\n        ;;\n      [Nn]|[Nn][Oo])\n        return 1\n        ;;\n      *)\n        echo \"Please answer yes or no\"\n        ;;\n    esac\n  done\nfi\n}\n\n#\n# Prompt to confirm choice.\n_prompt_confirm_choice() {\n  read -p \"$1 [$2]:\" _CONFIRMED_ANSWER\n  if [ -z \"${_CONFIRMED_ANSWER}\" ]; then\n    _CONFIRMED_ANSWER=$2\n  fi\n}\n\n#\n# Unsupported virtualization system.\n_not_supported_virt() {\n  echo\n  echo \"=== OOPS! ===\"\n  echo\n  echo \"You are running an unsupported virtualization system:\"\n  echo \"  $1\"\n  echo\n  echo \"If you wish to try BOA on this system anyway,\"\n  echo \"please create an empty control file:\"\n  echo \"  /root/.allow.any.virt.cnf\"\n  echo\n  echo \"Please be aware that it may not work at all,\"\n  echo \"or you may experience errors that break BOA.\"\n  echo\n  echo \"WARNING! BOA IS NOT DESIGNED TO RUN DIRECTLY ON BARE METAL.\"\n  echo \"WARNING! IT IS VERY DANGEROUS AND THUS AN EXTREMELY BAD IDEA!\"\n  echo \"WARNING! 
You are free to experiment but don't expect *ANY* support.\"\n  echo\n  echo \"BOA is known to work well on:\"\n  echo\n  echo \" * Linux Containers (LXC)\"\n  echo \" * Linux KVM guest\"\n  echo \" * Microsoft Hyper-V\"\n  echo \" * OpenVZ Containers\"\n  echo \" * Parallels guest\"\n  echo \" * Red Hat KVM guest\"\n  echo \" * VirtualBox guest\"\n  echo \" * VMware ESXi guest (but excluding vCloud Air)\"\n  echo \" * VServer guest\"\n  echo \" * Xen guest fully virtualized (HVM)\"\n  echo \" * Xen guest\"\n  echo \" * Xen paravirtualized guest domain\"\n  echo\n  echo \"Bye\"\n  echo\n  _clean_pid_exit _not_supported_virt_a\n}\n\n#\n# Unsupported OS.\n_not_supported_os() {\n  echo\n  echo \"=== OOPS! ===\"\n  echo\n  echo \"This is not a supported Devuan or Debian version.\"\n  echo\n  echo \"You need Devuan Daedalus (recommended) or Debian Bookworm first.\"\n  echo\n  echo \"Bye\"\n  echo\n  _clean_pid_exit _not_supported_os_a\n}\n\n#\n# Show only clean _msg and write details and errors to separate logs.\n_mrun() {\n  # Accept everything as one shell command string:\n  #      _mrun \"${_INSTAPP} xfonts-75dpi xfonts-base\"\n  #   or _mrun 'apt-get update -qq && dpkg --configure -a || true'\n  local _cmd=\"$*\"\n  if [ -z \"${_cmd}\" ]; then\n    return 0\n  fi\n  # stdout -> info log (silent on console)\n  # stderr -> error log (silent on console)\n  if [ -n \"${_LOG_INFO}\" ] && [ -n \"${_LOG_ERRR}\" ]; then\n    bash -c -- \"${_cmd}\" \\\n      > >(tee -a \"${_LOG_INFO}\" >/dev/null) \\\n      2> >(tee -a \"${_LOG_ERRR}\" >/dev/null)\n  else\n    # silent output + no logging\n    bash -c -- \"${_cmd}\" &> /dev/null\n  fi\n  return 0\n}\n\n#\n# Remove dangerous stuff from the string.\n_sanitize_string() {\n  echo \"$1\" | sed 's/[\\\\\\/\\^\\?\\>\\`\\#\\\"\\{\\(\\&\\|\\*]//g; s/\\(['\"'\"'\\]\\)//g'\n}\n\n#\n# Extract archive.\n_extract_archive() {\n  if [ ! 
-z \"$1\" ]; then\n    case $1 in\n      *.tar.bz2)   tar xjf $1    ;;\n      *.tar.gz)    tar xzf $1    ;;\n      *.tar.xz)    tar xJf $1    ;;\n      *.bz2)       bunzip2 $1    ;;\n      *.rar)       unrar x $1    ;;\n      *.gz)        gunzip -q $1  ;;\n      *.tar)       tar xf $1     ;;\n      *.tbz2)      tar xjf $1    ;;\n      *.tgz)       tar xzf $1    ;;\n      *.zip)       unzip -qq $1  ;;\n      *.Z)         uncompress $1 ;;\n      *.7z)        7z x $1       ;;\n      *)           echo \"'$1' cannot be extracted via >extract<\" ;;\n    esac\n    rm -f $1\n  fi\n}\n\n#\n# Download and extract archive from dev mirror.\n_get_dev_arch() {\n  if [ ! -z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download ${_urlDev}/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Download and extract from dev/version mirror.\n_get_dev_ext() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/${_tRee}/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download ${_urlDev}/${_tRee}/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Download and extract from dev/static.\n_get_dev_stc() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/${_tRee}/static/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download ${_urlDev}/${_tRee}/static/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Download and extract from dev/contrib mirror.\n_get_dev_contrib() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/${_tRee}/contrib/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download ${_urlDev}/${_tRee}/contrib/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Download and extract archive from dev/src mirror.\n_get_dev_src() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/src/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download ${_urlDev}/src/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Download with wget and extract archive from dev/src mirror.\n_get_dev_src_wget() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if wget ${_wgetGet} \"${_urlDev}/src/$1\" -O \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download ${_urlDev}/src/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n_normalize_ip_name_variables() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _normalize_ip_name_variables\"\n  fi\n  if [ ! -z \"${_LOCAL_NETWORK_IP}\" ]; then\n    _LOCAL_NETWORK_IP=${_LOCAL_NETWORK_IP//[^0-9.]/}\n  fi\n  if [ ! -z \"${_LOCAL_NETWORK_HN}\" ]; then\n    _LOCAL_NETWORK_HN=${_LOCAL_NETWORK_HN//[^a-zA-Z0-9-.]/}\n    _LOCAL_NETWORK_HN=$(echo -n ${_LOCAL_NETWORK_HN} | tr A-Z a-z 2>&1)\n  fi\n  if [ ! -z \"${_MY_OWNIP}\" ]; then\n    _MY_OWNIP=${_MY_OWNIP//[^0-9.]/}\n  fi\n  if [ ! -z \"${_MY_HOSTN}\" ]; then\n    _MY_HOSTN=${_MY_HOSTN//[^a-zA-Z0-9-.]/}\n    _MY_HOSTN=$(echo -n ${_MY_HOSTN} | tr A-Z a-z 2>&1)\n  fi\n  if [ ! -z \"${_MY_FRONT}\" ]; then\n    _MY_FRONT=${_MY_FRONT//[^a-zA-Z0-9-.]/}\n    _MY_FRONT=$(echo -n ${_MY_FRONT} | tr A-Z a-z 2>&1)\n  fi\n  if [ ! 
-z \"${_SMTP_RELAY_HOST}\" ]; then\n    _SMTP_RELAY_HOST=${_SMTP_RELAY_HOST//[^a-zA-Z0-9-.]/}\n    _SMTP_RELAY_HOST=$(echo -n ${_SMTP_RELAY_HOST} | tr A-Z a-z 2>&1)\n  fi\n}\n\n_dedup_xtras_list() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _dedup_xtras_list\"\n  fi\n  if [ -n \"${_XTRAS_LIST}\" ]; then\n    local _w _u\n    _w=\"$(printf '%s\\n' \"${_XTRAS_LIST}\" | xargs -n1 | wc -l)\"\n    _u=\"$(printf '%s\\n' \"${_XTRAS_LIST}\" | xargs -n1 | sort -u | wc -l)\"\n    if [ \"${_w}\" -ne \"${_u}\" ]; then\n      _XTRAS_LIST=\"$(printf '%s\\n' \"${_XTRAS_LIST}\" | xargs -n1 | sort -u | xargs)\"\n      if [ -n \"${_XTRAS_LIST}\" ] && [ -e \"${_barCnf}\" ]; then\n        sed -i \"s/^_XTRAS_LIST=.*/_XTRAS_LIST=\\\"${_XTRAS_LIST}\\\"/g\" ${_barCnf}\n      fi\n    fi\n  fi\n}\n\n_mode_detection() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _mode_detection\"\n  fi\n  [ -e \"/root/.use.curl.from.packages.cnf\" ] && chattr -i /root/.use.curl.from.packages.cnf\n  [ -e \"/root/.use.curl.from.packages.cnf\" ] && rm -f /root/.use.curl.from.packages.cnf\n  if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"/lib/systemd/systemd\" ]; then\n    _STATUS=UPGRADE\n    _msg \"MODE: UPGRADE\"\n    _dedup_xtras_list\n    _barracuda_cnf\n    if [[ ! 
\" ${_XTRAS_LIST} \" =~ (^|[[:space:]])CSF([[:space:]]|$) ]]; then\n      if [ -n \"${_XTRAS_LIST}\" ]; then\n        _XTRAS_LIST=\"${_XTRAS_LIST} CSF\"\n      else\n        _XTRAS_LIST=\"CSF\"\n      fi\n    fi\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"XTRA: ${_XTRAS_LIST}\"\n    fi\n  elif [ -e \"/lib/systemd/systemd\" ]; then\n    _msg \"FATAL ERROR: Something went wrong because /lib/systemd/systemd still exists, exit\"\n    _clean_pid_exit _mode_detection_a\n  else\n    _STATUS=INIT\n    if [ -d \"/var/aegir\" ]; then\n      if [ -e \"/root/.force.reinstall.cnf\" ]; then\n        ###--------------------###\n        _msg \"MODE: FORCED INIT\"\n        _FULL_FORCE_REINSTALL=YES\n        _ZOMBIE_HOME=\"${_vBs}/zombie/${_X_VERSION}-${_NOW}\"\n        mkdir -p ${_ZOMBIE_HOME}\n        mv -f /etc/nginx/conf.d/* ${_ZOMBIE_HOME}/ &> /dev/null\n        mv -f /var/aegir ${_ZOMBIE_HOME}/ &> /dev/null\n        mv -f /var/xdrago ${_ZOMBIE_HOME}/ &> /dev/null\n        ### mv -f /root/.my.cnf ${_ZOMBIE_HOME}/ &> /dev/null\n        ### mv -f /root/.my.pass.txt ${_ZOMBIE_HOME}/ &> /dev/null\n        cp -af /etc/sudoers ${_ZOMBIE_HOME}/ &> /dev/null\n        sed -i \"s/^aegir.*//g\" /etc/sudoers &> /dev/null\n        pkill -9 -f gpg-agent\n        deluser aegir &> /dev/null\n        rm -f /usr/bin/drush\n      else\n        ###--------------------###\n        _msg \"FATAL ERROR: Something went wrong, Ægir Master exists but looks broken!\"\n        _msg \"HINT: Please check /var/log/boa/aegir_install.log for details, and then\"\n        _msg \"HINT: run the same install command again to complete installation.\"\n        _msg \"HINT: You can also enforce installation with empty control file:\"\n        _msg \"      touch /root/.force.reinstall.cnf -- and then try again.\"\n        _clean_pid_exit _mode_detection_b\n      fi\n    else\n      _msg \"MODE: NORMAL INIT\"\n    fi\n    if [ ! -z \"${_EASY_SETUP}\" ] && [[ ! 
\"${_EASY_SETUP}\" =~ \"NO\" ]]; then\n      if [ \"${_EASY_SETUP}\" != \"LOCAL\" ]; then\n        if [ -z \"${_EASY_HOSTNAME}\" ] \\\n          || [ \"${_EASY_HOSTNAME}\" = \"wildcard-enabled-hostname\" ]; then\n          _msg \"FATAL ERROR: You must also define _EASY_HOSTNAME\"\n          _clean_pid_exit _mode_detection_c\n        fi\n      fi\n    fi\n    if [ \"${_EASY_SETUP}\" = \"LOCAL\" ]; then\n      _msg \"INFO: Localhost Setup Mode Active\"\n      if [[ ! \" ${_XTRAS_LIST} \" =~ (^|[[:space:]])ADM([[:space:]]|$) ]]; then\n        if [ -n \"${_XTRAS_LIST}\" ]; then\n          _XTRAS_LIST=\"${_XTRAS_LIST} ADM\"\n        else\n          _XTRAS_LIST=\"ADM\"\n        fi\n      fi\n      _AUTOPILOT=YES\n      _SSH_PORT=22\n      _DNS_SETUP_TEST=NO\n      _SMTP_RELAY_TEST=NO\n      _LOCAL_NETWORK_IP=\"127.0.1.1\"\n      _LOCAL_NETWORK_HN=\"aegir.local\"\n    elif [ \"${_EASY_SETUP}\" = \"PUBLIC\" ]; then\n      _msg \"INFO: Public Setup Mode Active\"\n      if [[ ! \" ${_XTRAS_LIST} \" =~ (^|[[:space:]])CSF([[:space:]]|$) ]]; then\n        if [ -n \"${_XTRAS_LIST}\" ]; then\n          _XTRAS_LIST=\"${_XTRAS_LIST} CSF\"\n        else\n          _XTRAS_LIST=\"CSF\"\n        fi\n      fi\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" = \"YES\" ]; then\n        if [[ ! 
\" ${_XTRAS_LIST} \" =~ (^|[[:space:]])FOO([[:space:]]|$) ]]; then\n          if [ -n \"${_XTRAS_LIST}\" ]; then\n            _XTRAS_LIST=\"${_XTRAS_LIST} FOO\"\n          else\n            _XTRAS_LIST=\"FOO\"\n          fi\n        fi\n      fi\n      _AUTOPILOT=YES\n      _SSH_PORT=22\n      _MY_HOSTN=\"${_EASY_HOSTNAME}\"\n      _MY_FRONT=\"master.${_EASY_HOSTNAME}\"\n      _validate_public_ip &> /dev/null\n      _MY_OWNIP=\"${_THISHTIP}\"\n    fi\n    _barracuda_cnf\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"XTRA: ${_XTRAS_LIST}\"\n    fi\n  fi\n  [ -e \"/root/.use.curl.from.packages.cnf\" ] && chattr -i /root/.use.curl.from.packages.cnf\n  [ -e \"/root/.use.curl.from.packages.cnf\" ] && rm -f /root/.use.curl.from.packages.cnf\n  pkill -9 -f systemd-udevd\n\n  if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    _DPKG_CNF=\"confold\"\n  else\n    _DPKG_CNF=\"confnew\"\n  fi\n\n  if [ -x \"/opt/local/bin/aptfast\" ]; then\n    _INSTAPP=\"/opt/local/bin/aptfast -f -y -q \\\n      --allow-untrusted \\\n      -o Dpkg::Options::=--force-confmiss \\\n      -o Dpkg::Options::=--force-confdef \\\n      -o Dpkg::Options::=--force-${_DPKG_CNF} install\"\n  else\n    _INSTAPP=\"/usr/bin/aptitude -f -y -q \\\n      --allow-untrusted \\\n      -o Dpkg::Options::=--force-confmiss \\\n      -o Dpkg::Options::=--force-confdef \\\n      -o Dpkg::Options::=--force-${_DPKG_CNF} install\"\n  fi\n\n  _RMAPP=\"/usr/bin/aptitude -f -y -q \\\n    --allow-untrusted \\\n    -o Dpkg::Options::=--force-confmiss \\\n    -o Dpkg::Options::=--force-confdef \\\n    -o Dpkg::Options::=--force-${_DPKG_CNF} remove\"\n\n  if [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    _INSTAPP=\"${_INSTALL_DIST}\"\n    _RMAPP=\"apt-get -y -qq remove\"\n  fi\n\n  _SRCDIR=\"/opt/tmp/files\"\n  rm -rf /var/opt/*\n  mkdir -p ${_SRCDIR}\n  chmod -R 777 /opt/tmp &> /dev/null\n  find /opt/tmp/boa -type d -exec chmod 0755 {} \\; &> /dev/null\n  find /opt/tmp/boa -type f -exec chmod 0644 {} \\; &> /dev/null\n  if [ \"${_STATUS}\" != \"UPGRADE\" ]; then\n    _STRICT_BIN_PERMISSIONS=NO\n  fi\n  if [ \"${_STRICT_BIN_PERMISSIONS}\" = \"YES\" ]; then\n    if [ -x \"/bin/dash\" ] || [ -x \"/usr/bin/dash\" ]; then\n      _symlink_to_dash\n      _switch_to_dash\n    elif [ -x \"/bin/bash\" ] || [ -x \"/usr/bin/bash\" ]; then\n      _symlink_to_bash\n      _switch_to_bash\n    fi\n  fi\n  _PHP_SV=${_PHP_FPM_VERSION//[^0-9]/}\n  if [ -z \"${_PHP_SV}\" ] \\\n    || [ \"${_PHP_SV}\" = \"55\" ] \\\n    || [ \"${_PHP_SV}\" = \"54\" ] \\\n    || [ \"${_PHP_SV}\" = \"53\" ] \\\n    || [ \"${_PHP_SV}\" = \"52\" ]; then\n    _PHP_SV=84\n  fi\n  _PHP_CN=\"www${_PHP_SV}\"\n}\n\n_check_exception_mycnf() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_exception_mycnf\"\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    if [ ! -e \"/root/.my.cnf\" ]; then\n    _msg \"EXIT on error: file with your ${_DB_SERVER} root password not found\"\n    cat <<EOF\n\n    It appears you don't have the required file with your root SQL password.\n    Create this file first and run this script again:\n\n    echo \"[client]\" > /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=your_SQL_ROOT_password\" >> /root/.my.cnf\n    chmod 0600 /root/.my.cnf\n\nEOF\n    _msg \"EXIT on error: file with your ${_DB_SERVER} root password not found\"\n    _clean_pid_exit _check_exception_mycnf_a\n    fi\n  fi\n}\n\n# --- internal: run virt-what under strace and parse the helper's exec path\n_discover_with_strace() {\n  local _path_found=\"\"\n  if ! 
command -v strace >/dev/null 2>&1; then\n    _msg \"strace not available, skipping strace-based discovery\" >&2\n    echo \"\"\n    return 0\n  fi\n  # Temporarily extend PATH so virt-what can exec the helper for strace to see.\n  PATH=\"${PATH}:${_CANDIDATE_PATHS}\" strace -f -qq -e trace=execve -o \"${_TRACE}\" virt-what >/dev/null 2>&1\n\n  # mawk-safe parsing: pull the first quoted arg from execve(\"…\") and check suffix\n  if [ -s \"${_TRACE}\" ]; then\n    _path_found=$(\n      awk -v n=\"${_HELPER_NAME}\" '\n        /execve\\(\"/ {\n          # Find start of execve(\" then extract up to next quote\n          i = index($0, \"execve(\\\"\")\n          if (i) {\n            s = substr($0, i + 8)         # after execve(\"\n            j = index(s, \"\\\"\")\n            if (j) {\n              p = substr(s, 1, j - 1)     # the path inside quotes\n              if (p ~ (\"/\" n \"$\")) { print p; exit }\n            }\n          }\n        }\n      ' \"${_TRACE}\"\n    )\n  fi\n  rm -f \"${_TRACE}\"\n\n  if [ -n \"${_path_found}\" ] && [ -x \"${_path_found}\" ]; then\n    # _msg output goes to stderr so stdout carries only the discovered path\n    # for callers using command substitution.\n    _msg \"strace discovered helper at: ${_path_found}\" >&2\n    echo \"${_path_found}\"\n    return 0\n  fi\n  _msg \"strace discovery failed\" >&2\n  echo \"\"\n  return 0\n}\n\n# --- internal: dpkg-based discovery (Debian/Devuan)\n_discover_with_dpkg() {\n  local _p=\"\"\n  if command -v dpkg >/dev/null 2>&1; then\n    _p=$(dpkg -L virt-what 2>/dev/null | grep -E \"/${_HELPER_NAME}$\" | head -n1)\n    if [ -n \"${_p}\" ] && [ -x \"${_p}\" ]; then\n      _msg \"dpkg discovered helper at: ${_p}\" >&2\n      echo \"${_p}\"\n      return 0\n    fi\n  fi\n  echo \"\"\n  return 0\n}\n\n# --- internal: filesystem search fallback (bounded)\n_discover_with_find() {\n  local _p=\"\"\n  # Keep it bounded to /usr to stay fast and noise-free.\n  _p=$(find /usr -maxdepth 4 -type f -name \"${_HELPER_NAME}\" 2>/dev/null | head -n1)\n  if [ -n \"${_p}\" ] && [ -x \"${_p}\" ]; then\n    _msg \"find discovered helper at: ${_p}\" >&2\n    
echo \"${_p}\"\n    return 0\n  fi\n  echo \"\"\n  return 0\n}\n\n# --- main: ensure symlink\n_ensure_virt_what_helper_symlink() {\n  # If the symlink already exists and is working, nothing to do.\n  if [ -L \"${_SYMLINK}\" ] && [ -x \"${_SYMLINK}\" ] && [ -e \"$(readlink -f \"${_SYMLINK}\")\" ]; then\n    _msg \"Symlink already present and valid: ${_SYMLINK} -> $(readlink -f \"${_SYMLINK}\")\"\n    return 0\n  fi\n\n  local _helper_path=\"\"\n  _helper_path=\"$(_discover_with_strace)\"\n  if [ -z \"${_helper_path}\" ]; then\n    _helper_path=\"$(_discover_with_dpkg)\"\n  fi\n  if [ -z \"${_helper_path}\" ]; then\n    _helper_path=\"$(_discover_with_find)\"\n  fi\n\n  if [ -z \"${_helper_path}\" ]; then\n    echo \"ERROR: Could not locate ${_HELPER_NAME} anywhere under /usr.\" 1>&2\n    return 1\n  fi\n\n  # Safety: if a non-symlink file already exists at the target, back it up once.\n  if [ -e \"${_SYMLINK}\" ] && [ ! -L \"${_SYMLINK}\" ]; then\n    _msg \"Backing up existing non-symlink at ${_SYMLINK} to ${_SYMLINK}.orig\"\n    mv -f \"${_SYMLINK}\" \"${_SYMLINK}.orig\"\n  fi\n\n  ln -sfn \"${_helper_path}\" \"${_SYMLINK}\"\n  if [ -x \"${_SYMLINK}\" ]; then\n    _msg \"Symlink created: ${_SYMLINK} -> ${_helper_path}\"\n    return 0\n  else\n    echo \"ERROR: Failed to create working symlink ${_SYMLINK} -> ${_helper_path}\" 1>&2\n    return 2\n  fi\n}\n\n###\n### Fix VM system detection\n###\n_fix_virt_what() {\n  _VIRT_TEST=\"$(which virt-what)\"\n  if [ -n \"${_VIRT_TEST}\" ] && [ -x \"${_VIRT_TEST}\" ]; then\n    _SHELL_TEST_A=$(grep -I -o \"\\#\\!.*/usr/bin/sh\" ${_VIRT_TEST} 2>&1)\n    _SHELL_TEST_B=$(grep -I -o \"\\#\\!.*/bin/sh\" ${_VIRT_TEST} 2>&1)\n    if [[ \"${_SHELL_TEST_A}\" =~ \"/usr/bin/sh\" ]]; then\n      sed -i \"s/\\/usr\\/bin\\/sh/\\/bin\\/dash/g\" ${_VIRT_TEST}\n    fi\n    if [[ \"${_SHELL_TEST_B}\" =~ \"/bin/sh\" ]]; then\n      sed -i \"s/\\/bin\\/sh/\\/bin\\/dash/g\" ${_VIRT_TEST}\n    fi\n    _HELPER_NAME=\"virt-what-cpuid-helper\"\n    
_SYMLINK=\"/usr/sbin/${_HELPER_NAME}\"\n    _TRACE=\"/tmp/virtwhat.$$.strace\"\n    # Extra dirs we temporarily expose to PATH so virt-what can exec the helper for strace discovery\n    _CANDIDATE_PATHS=\"/usr/libexec:/usr/lib/x86_64-linux-gnu:/usr/lib64/virt-what:/usr/lib/virt-what\"\n    if [ ! -e \"${_SYMLINK}\" ]; then\n      echo \"INFO: virt-what tool requires a small update, fixing...\"\n      if ! command -v strace &> /dev/null; then\n        _apt_clean_update\n        apt-get install strace ${_aptYesUnth}\n      fi\n      _ensure_virt_what_helper_symlink\n    fi\n  fi\n}\n\n###\n### Fix or install VM system detection\n###\n_fix_or_install_virt_what() {\n  _VIRT_TEST=\"$(which virt-what)\"\n  if [ -n \"${_VIRT_TEST}\" ] && [ -x \"${_VIRT_TEST}\" ]; then\n    _fix_virt_what\n  else\n    echo \"INFO: installing the required virt-what tool...\"\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install virt-what ${_aptYesUnth}\n    wait\n    _fix_virt_what\n  fi\n}\n\n_virt_detection() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _virt_detection\"\n  fi\n  _fix_or_install_virt_what\n  _VIRT_TOOL=\"$(which virt-what)\"\n  if [ -x \"${_VIRT_TOOL}\" ]; then\n    _VIRT_TEST=$(virt-what)\n    _VIRT_TEST=$(echo -n ${_VIRT_TEST} | fmt -su -w 2500 2>&1)\n    if [[ \"${_VIRT_TEST}\" =~ \"program not found\" ]]; then\n      _msg \"ERROR: virt-what says: ${_VIRT_TEST}\"\n      _msg \"ERROR: virt-what detection failed for an unknown reason, exit\"\n      _clean_pid_exit _virt_detection\n    fi\n    if [ ! 
-e \"/root/.allow.any.virt.cnf\" ]; then\n      if [ -e \"/proc/self/status\" ]; then\n        _VS_GUEST_TEST=$(grep -E \"VxID:[[:space:]]*[0-9]{2,}$\" /proc/self/status 2> /dev/null)\n        _VS_HOST_TEST=$(grep -E \"VxID:[[:space:]]*0$\" /proc/self/status 2> /dev/null)\n      fi\n      if [ ! -z \"${_VS_HOST_TEST}\" ] || [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n        if [ -z \"${_VS_HOST_TEST}\" ] && [ ! -z \"${_VS_GUEST_TEST}\" ]; then\n          _VIRT_IS=\"Linux VServer guest\"\n        else\n          if [ ! -z \"${_VS_HOST_TEST}\" ]; then\n            _not_supported_virt \"Linux VServer host\"\n          else\n            _not_supported_virt \"unknown / not a virtual machine\"\n          fi\n        fi\n      else\n        if [ -z \"${_VIRT_TEST}\" ] || [ \"${_VIRT_TEST}\" = \"0\" ]; then\n          _not_supported_virt \"unknown / not a virtual machine\"\n        elif [[ \"${_VIRT_TEST}\" =~ \"xen-dom0\" ]]; then\n          _not_supported_virt \"Xen privileged domain\"\n        elif [[ \"${_VIRT_TEST}\" =~ \"linux_vserver-host\" ]]; then\n          _not_supported_virt \"Linux VServer host\"\n        else\n          if [[ \"${_VIRT_TEST}\" =~ \"xen xen-hvm\" ]]; then\n            _VIRT_TEST=\"xen-hvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"xen xen-domU\" ]]; then\n            _VIRT_TEST=\"xen-domU\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"virtualbox kvm\" ]]; then\n            _VIRT_TEST=\"virtualbox\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"hyperv qemu\" ]]; then\n            _VIRT_TEST=\"hyperv\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"kvm aws\" ]]; then\n            _VIRT_TEST=\"kvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"redhat kvm\" ]]; then\n            _VIRT_TEST=\"redhat-kvm\"\n          elif [[ \"${_VIRT_TEST}\" =~ \"openvz lxc\" ]]; then\n            _VIRT_TEST=\"openvz\"\n          fi\n          case \"${_VIRT_TEST}\" in\n            hyperv)      _VIRT_IS=\"Microsoft Hyper-V\" ;;\n            kvm)         _VIRT_IS=\"Linux KVM guest\" 
;;\n            lxc)         _VIRT_IS=\"Linux Containers (LXC)\" ;;\n            openvz)      _VIRT_IS=\"OpenVZ Containers\" ;;\n            parallels)   _VIRT_IS=\"Parallels guest\" ;;\n            redhat-kvm)  _VIRT_IS=\"Red Hat KVM guest\" ;;\n            virtualbox)  _VIRT_IS=\"VirtualBox guest\" ;;\n            vmware)      _VIRT_IS=\"VMware ESXi guest\" ;;\n            xen-domU)    _VIRT_IS=\"Xen paravirtualized guest domain\" ;;\n            xen-hvm)     _VIRT_IS=\"Xen guest fully virtualized (HVM)\" ;;\n            xen)         _VIRT_IS=\"Xen guest\" ;;\n            *)  _not_supported_virt \"${_VIRT_TEST}\"\n            ;;\n          esac\n        fi\n      fi\n      if [ \"${_AUTOPILOT}\" = \"NO\" ]; then\n        echo\n      fi\n      _msg \"VIRT: This system is supported: ${_VIRT_IS}\"\n      echo\n      if [ -n \"${_LOG_INFO}\" ] && [ -n \"${_LOG_ERRR}\" ]; then\n        _msg \"HINT: Commands to run in another terminal window to watch details\"\n        _msg \"INFO: tail -f ${_LOG_INFO}\"\n        _msg \"ERRR: tail -f ${_LOG_ERRR}\"\n      fi\n    else\n      if [ -z \"${_VIRT_TEST}\" ] || [ \"${_VIRT_TEST}\" = \"0\" ]; then\n        _VIRT_TEST=\"unknown / not a virtual machine\"\n      fi\n      if [ \"${_AUTOPILOT}\" = \"NO\" ]; then\n        echo\n      fi\n      _msg \"WARN: This system is not supported: ${_VIRT_TEST}\"\n    fi\n  fi\n}\n\n_os_detection_minimal() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _os_detection_minimal\"\n  fi\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n\n_apt_clean_update() {\n  _mrun \"${_APT_UPDATE} -qq\"\n}\n\n_apt_clean_update_no_releaseinfo_change() {\n  _mrun \"apt-get update 
-qq\"\n}\n\n_turn_on_password_update() {\n  if [ -e \"/root/.mysql.no.new.password.cnf\" ]; then\n    chattr -i /root/.mysql.no.new.password.cnf\n    rm -f /root/.mysql.no.new.password.cnf\n  fi\n  if [ ! -e \"/root/.mysql.yes.new.password.cnf\" ]; then\n    touch /root/.mysql.yes.new.password.cnf\n    chattr +i /root/.mysql.yes.new.password.cnf\n  fi\n}\n\n_os_detection() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _os_detection\"\n  fi\n\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n\n  if [ \"${_OS_DIST}\" = \"Debian\" ]; then\n    if [ \"${_OS_CODE}\" = \"trixie\" ] \\\n      || [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n      || [ \"${_OS_CODE}\" = \"bullseye\" ] \\\n      || [ \"${_OS_CODE}\" = \"buster\" ] \\\n      || [ \"${_OS_CODE}\" = \"stretch\" ] \\\n      || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n      _isSupportedOS=YES\n    else\n      _not_supported_os\n    fi\n  elif [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n    if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n      || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n      || [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n      || [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n      _isSupportedOS=YES\n    else\n      _not_supported_os\n    fi\n  else\n    _not_supported_os\n  fi\n\n  if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _DB_SERVER=Percona\n    _DB_SERIES=8.4\n    _DBS_VRN=8.4\n  else\n    _DB_SERVER=Percona\n  fi\n\n  _NGINX_FORWARD_SECRECY=YES\n  _NGINX_SPDY=YES\n  if [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n    _DBS_VRN=\"${_PERCONA_8_4_VRN}\"\n    _turn_on_password_update\n  elif [ \"${_DB_SERIES}\" = \"8.0\" ]; then\n    _DBS_VRN=\"${_PERCONA_8_0_VRN}\"\n  elif [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n    _DBS_VRN=\"${_PERCONA_5_7_VRN}\"\n  else\n    _DB_SERIES=5.7\n    _DBS_VRN=\"${_PERCONA_5_7_VRN}\"\n  fi\n  _SPINNER=NO\n  _SKIP_LEGACY_PHP=YES\n\n  echo \" \"\n  
_thiSys=\"${_OS_DIST}/${_OS_CODE}\"\n  _thiSys=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null) ${_thiSys}\"\n  _msg \"Ægir on ${_thiSys}\"\n  echo \" \"\n}\n\n_check_boa_php_compatibility() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_boa_php_compatibility\"\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.5\" ]] \\\n    || [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.4\" ]] \\\n    || [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.3\" ]]; then\n    _DO_NOTHING=YES\n  else\n    _msg \"ERROR: This BOA version depends on PHP 8.3 or newer\"\n    _msg \"Please add at least 8.3 to _PHP_MULTI_INSTALL\"\n    _msg \"in /root/.barracuda.cnf before trying again\"\n    _msg \"NOTE: You can still also install legacy PHP 7.x, 8.x and 5.6 versions\"\n    _msg \"NOTE: but you must also include at least 8.3 to support Drupal 10\"\n    _msg \"Bye\"\n    _clean_pid_exit _check_boa_php_compatibility_a\n  fi\n\n  if [ \"${_PHP_FPM_VERSION}\" = \"8.3\" ] \\\n    || [ \"${_PHP_FPM_VERSION}\" = \"8.4\" ] \\\n    || [ \"${_PHP_FPM_VERSION}\" = \"8.5\" ]; then\n    _SUPPORTED_FPM=YES\n  else\n    _SUPPORTED_FPM=NO\n  fi\n\n  if [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n    || [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n    || [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ]; then\n    _SUPPORTED_CLI=YES\n  else\n    _SUPPORTED_CLI=NO\n  fi\n\n  if [ \"${_SUPPORTED_FPM}\" = \"NO\" ] \\\n    || [ \"${_SUPPORTED_CLI}\" = \"NO\" ]; then\n    _msg \"ERROR: This BOA version depends on PHP 8.3 or newer\"\n    _msg \"Please change _PHP_FPM_VERSION and _PHP_CLI_VERSION to 8.3 or newer\"\n    _msg \"in /root/.barracuda.cnf before trying again\"\n    _msg \"NOTE: You can still also install legacy PHP 7.x, 8.x and 5.6 versions\"\n    _msg \"NOTE: but you must use 8.3 or newer as the default version to support Drupal 10\"\n    _msg \"Bye\"\n    _clean_pid_exit _check_boa_php_compatibility_b\n  fi\n}\n\n_check_boa_version() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: 
_check_boa_version\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Checking BARRACUDA version...\"\n  fi\n  if [ -e \"/var/log/barracuda_log.txt\" ]; then\n    _SERIES_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n    if [ \"${_tRee}\" = \"lts\" ] \\\n      && [[ \"${_SERIES_TEST}\" =~ \"Barracuda ${_rLsn}-pro\" ]]; then\n      _msg \"ERROR: Your system has already been upgraded to ${_rLsn}-pro\"\n      _msg \"You cannot downgrade back to a previous/older/lts BOA version\"\n      _msg \"Please use 'barracuda up-pro system' to upgrade this server\"\n      _msg \"Bye\"\n      _clean_pid_exit _check_boa_version_a\n    fi\n    if [[ \"${_SERIES_TEST}\" =~ \"BOA-5.\" ]] \\\n      || [[ \"${_SERIES_TEST}\" =~ \"BOA-4.\" ]]; then\n      _VERSIONS_TEST_RESULT=OK\n    else\n      _msg \"ERROR: This barracuda installer can only be used when the system\"\n      _msg \"has already been upgraded to a BOA-4.x or BOA-5.x release\"\n      _msg \"Please run 'barracuda up-${_tRee}' full upgrade first!\"\n      _msg \"Bye\"\n      _clean_pid_exit _check_boa_version_b\n    fi\n    if [[ \"${_SERIES_TEST}\" =~ \"BOA-4.\" ]] \\\n      || [[ \"${_SERIES_TEST}\" =~ \"BOA-5.\" ]]; then\n      if [[ \"${_X_VERSION}\" =~ \"BOA-4.\" ]] \\\n        || [[ \"${_X_VERSION}\" =~ \"BOA-5.\" ]]; then\n        _VERSIONS_TEST_RESULT=OK\n      else\n        _msg \"ERROR: Your system has already been upgraded to modern BOA\"\n        _msg \"You cannot downgrade back to a legacy or previous stable version\"\n        _msg \"Please use 'barracuda up-${_tRee}' to upgrade this system\"\n        _msg \"Bye\"\n        _clean_pid_exit _check_boa_version_c\n      fi\n    fi\n  fi\n}\n\n_check_prepare_dirs_permissions() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_prepare_dirs_permissions\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Cleaning up temp files in /var/opt/\"\n  fi\n  if [ ! 
-e \"/root/.run_post_major_os_upgrade.info\" ]; then\n    rm -f ${_pthLog}/re-installed-php*-on-post_major_os_upgrade.info\n  fi\n  rm -rf /var/opt/*\n  mkdir -p /var/log/php\n  chmod 777 /var/log/php* &> /dev/null\n  mkdir -p ${_vBs}/dragon/{x,z,t}\n  if [ -e \"/etc/init.d/buagent\" ]; then\n    mv -f /etc/init.d/buagent \\\n      ${_vBs}/buagent-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n  fi\n}\n\n_avatars_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _avatars_check_fix\"\n  fi\n  if [ ! -e \"/var/www/nginx-default/profiles/commons/images/avatars\" ]; then\n    if [ -e \"${_bldPth}/aegir/var/commons/images\" ]; then\n      mkdir -p /var/www/nginx-default/profiles/commons\n      cp -af ${_bldPth}/aegir/var/commons/images \\\n        /var/www/nginx-default/profiles/commons/\n      chown -R www-data:www-data /var/www/nginx-default/profiles &> /dev/null\n      find /var/www/nginx-default/profiles -type d -exec chmod 0755 {} \\; &> /dev/null\n      find /var/www/nginx-default/profiles -type f -exec chmod 0644 {} \\; &> /dev/null\n    fi\n  fi\n}\n\n_aegir_bin_extra_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _aegir_bin_extra_check_fix\"\n  fi\n  SDIR=\"${_bldPth}/aegir/tools/bin\"\n  _SCRIPTS=(fix-drupal-platform-permissions fix-drupal-site-permissions fix-drupal-platform-ownership fix-drupal-site-ownership lock-local-drush-permissions)\n  if [ ! -x \"/usr/local/bin/fix-drupal-site-permissions.sh\" ] \\\n    || [ ! 
-e \"/var/log/boa/fix-drupal-site-permissions-${_xSrl}-${_X_VERSION}.log\" ]; then\n    if [ -e \"${SDIR}/fix-drupal-site-permissions.sh\" ]; then\n      for _SCRIPT in ${_SCRIPTS[@]}; do\n        cp -af ${SDIR}/${_SCRIPT}.sh /usr/local/bin/\n        chown root:root /usr/local/bin/${_SCRIPT}.sh\n        chmod 700 /usr/local/bin/${_SCRIPT}.sh\n      done\n      touch /var/log/boa/fix-drupal-site-permissions-${_xSrl}-${_X_VERSION}.log\n    fi\n  fi\n}\n\n_barracuda_log_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _barracuda_log_update\"\n  fi\n  if [ -L \"${_mtrInc}/barracuda_log.txt\" ]; then\n    rm -f ${_mtrInc}/barracuda_log.txt\n  fi\n  if [ \"${_THIS_DB_HOST}\" = \"localhost\" ]; then\n    _LOG_DB_HOST=localhost\n  elif [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n    _LOG_DB_HOST=PROXYSQL\n  elif [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n    _LOG_DB_HOST=FQDN\n  else\n    _LOG_DB_HOST=REMOTE\n  fi\n  if [ ! 
-z \"${_FORCE_GIT_MIRROR}\" ]; then\n    _LOG_GIT_MIRROR=\"-${_FORCE_GIT_MIRROR}\"\n  fi\n  _LOG_DB_V=$(mysql -V 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f6 \\\n    | awk '{ print $1}' \\\n    | cut -d\"-\" -f1 \\\n    | awk '{ print $1}' \\\n    | sed \"s/[\\,']//g\" 2>&1)\n  if [ \"${_LOG_DB_V}\" = \"Linux\" ]; then\n    _LOG_DB_V=$(mysql -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f4 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n  fi\n  _BARRACUDA_VERSION_INFO=\"$(date) / \\\n    ${_OS_DIST}.${_OS_CODE} / \\\n    ${_VIRT_IS} / \\\n    Barracuda ${_X_VERSION}-${_xSrl} / \\\n    Nginx ${_NGINX_VRN} / \\\n    PHP-MI ${_PHP_MULTI_INSTALL} / \\\n    FPM ${_PHP_FPM_VERSION} / \\\n    CLI ${_PHP_CLI_VERSION} / \\\n    ${_DB_SERVER}-${_LOG_DB_V}\"\n\n  echo \"${_BARRACUDA_VERSION_INFO}\" | fmt -su -w 2500 >> /var/log/barracuda_log.txt\n  echo \"${_BARRACUDA_VERSION_INFO}\" | fmt -su -w 2500 >> ${_vBs}/barracuda_log.txt\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: New entry added to /var/log/barracuda_log.txt\"\n  fi\n}\n\n#\n# Add allow-snail group if not exists.\n_add_allow_snail_if_not_exists() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _add_allow_snail_if_not_exists\"\n  fi\n  _SNAIL_EXISTS=$(getent group allow-snail 2>&1)\n  if [[ ! 
\"${_SNAIL_EXISTS}\" =~ \"allow-snail\" ]]; then\n    addgroup --system allow-snail &> /dev/null\n  fi\n}\n\n#\n# Unlock sendmail for allow-snail group\n_unlock_sendmail_for_snail() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _unlock_sendmail_for_snail\"\n  fi\n  _add_allow_snail_if_not_exists\n  chown root /usr/sbin/sendmail &> /dev/null\n  chgrp allow-snail /usr/sbin/sendmail &> /dev/null\n  chmod 755 /usr/sbin/sendmail &> /dev/null\n}\n\n#\n# Turn Off AppArmor In Octopus.\n_turn_off_apparmor_in_octopus() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _turn_off_apparmor_in_octopus\"\n  fi\n  _isAppArmOn=N\n  if [ -e \"/sys/module/apparmor/parameters/enabled\" ]; then\n    _isAppArmOn=\"$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null | tr -d '\\n')\"\n  fi\n  if [ \"${_isAppArmOn}\" = \"Y\" ] && [ ! -e \"/root/.turn_off_apparmor_in_octopus.cnf\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"ARMR: Turning off AppArmor temporarily...\"\n    fi\n    rm -rf /var/cache/apparmor/* &> /dev/null\n    apparmor_parser -r /etc/apparmor.d/* &> /dev/null\n    aa-complain /etc/apparmor.d/* &> /dev/null\n    service apparmor stop &> /dev/null\n    aa-teardown &> /dev/null\n    service auditd stop &> /dev/null\n    touch /root/.turn_off_apparmor_in_octopus.cnf\n  fi\n}\n\n#\n# Switch to dash while running octopus.\n_switch_to_dash_in_octopus() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _switch_to_dash_in_octopus\"\n  fi\n  if [ -L \"/bin/sh\" ]; then\n    _WEB_SH=\"$(readlink -n /bin/sh)\"\n    if [ -x \"/bin/dash\" ] || [ -x \"/usr/bin/dash\" ]; then\n      if [ \"${_WEB_SH}\" != \"/bin/dash\" ]; then\n        if [ -x \"/usr/bin/dash\" ] && [ ! 
-L \"/usr/bin/dash\" ]; then\n          if [ -L \"/usr/bin/sh\" ]; then\n            ln -sfn /usr/bin/dash /usr/bin/sh\n          fi\n          if [ -L \"/bin/sh\" ]; then\n            ln -sfn /usr/bin/dash /bin/sh\n          fi\n        fi\n        if [ -x \"/bin/dash\" ] && [ ! -L \"/bin/dash\" ]; then\n          if [ -L \"/usr/bin/sh\" ]; then\n            ln -sfn /bin/dash /usr/bin/sh\n          fi\n          if [ -L \"/bin/sh\" ]; then\n            ln -sfn /bin/dash /bin/sh\n          fi\n        fi\n      fi\n    elif [ -x \"/bin/bash\" ] || [ -x \"/usr/bin/bash\" ]; then\n      if [ \"${_WEB_SH}\" != \"/bin/bash\" ]; then\n        if [ -x \"/usr/bin/bash\" ] && [ ! -L \"/usr/bin/bash\" ]; then\n          if [ -L \"/usr/bin/sh\" ]; then\n            ln -sfn /usr/bin/bash /usr/bin/sh\n          fi\n          if [ -L \"/bin/sh\" ]; then\n            ln -sfn /usr/bin/bash /bin/sh\n          fi\n        fi\n        if [ -x \"/bin/bash\" ] && [ ! -L \"/bin/bash\" ]; then\n          if [ -L \"/usr/bin/sh\" ]; then\n            ln -sfn /bin/bash /usr/bin/sh\n          fi\n          if [ -L \"/bin/sh\" ]; then\n            ln -sfn /bin/bash /bin/sh\n          fi\n        fi\n      fi\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/functions/hotfix.sh.inc",
    "content": "#\n# Fix for SA-CORE-2014-005\n_fix_core_dgd() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_core_dgd\"\n  fi\n  # https://www.drupal.org/SA-CORE-2014-005\n  # sed -i \"s/^_PERMISSIONS_FIX=.*/_PERMISSIONS_FIX=YES/g\" /root/.barracuda.cnf\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ ! -e \"${_saPatch}\" ]; then\n    mkdir -p /var/xdrago/conf\n    cp -a ${_bldPth}/aegir/patches/7-core/${_saCoreS}.patch -o ${_saPatch}\n  fi\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saPatch}\" ] \\\n    && [ ! -e \"${_pthLog}/${_saCoreN}-fixed-d7.log\" ]; then\n    if [ -d \"/data/all/000/core\" ]; then\n      for _Core in `find /data/all/000/core/drupal-7* \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        cd ${_Core}\n        patch -p1 < ${_saPatch} &> /dev/null\n      done\n    elif [ -d \"/data/disk/all/000/core\" ]; then\n      for _Core in `find /data/disk/all/000/core/drupal-7* \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        cd ${_Core}\n        patch -p1 < ${_saPatch} &> /dev/null\n      done\n    fi\n    touch ${_pthLog}/${_saCoreN}-fixed-d7.log\n    cd\n  fi\n  # https://www.drupal.org/SA-CORE-2014-005 for ancient platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saPatch}\" ]; then\n    if [ -d \"/data/all\" ] \\\n      && [ ! -e \"${_pthLog}/legacy-${_saCoreN}-fixed-d7.log\" ]; then\n      for _File in `find /data/all/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! 
-e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n        fi\n      done\n      touch ${_pthLog}/legacy-${_saCoreN}-fixed-d7.log\n    elif [ -d \"/data/disk/all\" ] \\\n      && [ ! -e \"${_pthLog}/legacy-${_saCoreN}-fixed-d7eee.log\" ]; then\n      for _File in `find /data/disk/all/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] && [ ! -e \"${_Core}/core\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n        fi\n      done\n      touch ${_pthLog}/legacy-${_saCoreN}-fixed-d7eee.log\n    fi\n    cd\n  fi\n  # https://www.drupal.org/SA-CORE-2014-005 for custom platforms\n  if [ -e \"/var/xdrago\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n    && [ -e \"${_saPatch}\" ]; then\n    if [ -d \"/data/u\" ] \\\n      && [ ! -e \"${_pthLog}/batch-custom-${_saCoreN}-fixed-d7.log\" ]; then\n      for _File in `find /data/disk/*/static/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! 
-e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! -e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n      for _File in `find /data/disk/*/static/*/*/*/*/*/${_saIncDb} \\\n        -maxdepth 0 -mindepth 0 | sort`; do\n        _Core=$(echo ${_File} \\\n          | sed 's/\\/includes.*//g' \\\n          | awk '{print $1}' 2> /dev/null)\n        if [ -d \"${_Core}\" ] \\\n          && [ ! -e \"${_Core}/core\" ] \\\n          && [ ! 
-e \"${_Core}/profiles/${_saCoreS}-fix.info\" ]; then\n          cd ${_Core}\n          patch -p1 < ${_saPatch} &> /dev/null\n          echo fixed > ${_Core}/profiles/${_saCoreS}-fix.info\n        fi\n      done\n    fi\n    cd\n    touch ${_pthLog}/batch-custom-${_saCoreN}-fixed-d7.log\n  fi\n}\n# Fix for Postfix configuration.\n_fix_cnf_postfix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_cnf_postfix\"\n  fi\n  _RELOAD_POSTFIX=NO\n  _INET_TEST=$(grep 'inet_protocols' /etc/postfix/main.cf 2>&1)\n  if [[ \"${_INET_TEST}\" =~ \"inet_protocols\" ]]; then\n    _INET_TEST=$(grep 'inet_protocols = ipv4' /etc/postfix/main.cf 2>&1)\n    if [[ \"${_INET_TEST}\" =~ \"ipv4\" ]]; then\n      _DO_NOTHING=YES\n    else\n      sed -i \"s/^inet_protocols.*/inet_protocols = ipv4/g\" \\\n        /etc/postfix/main.cf &> /dev/null\n      _RELOAD_POSTFIX=YES\n    fi\n  else\n    echo \"inet_protocols = ipv4\" >> /etc/postfix/main.cf\n    _RELOAD_POSTFIX=YES\n  fi\n  if [ \"${_RELOAD_POSTFIX}\" = \"YES\" ]; then\n    postfix reload &> /dev/null\n  fi\n}\n"
  },
  {
    "path": "lib/functions/master.sh.inc",
    "content": "\nexport _tRee=dev\nexport _rLsn=\"BOA-5.9.1\"\nexport _rlsE=\"${_rLsn}-${_tRee}\"\nexport _bRnh=\"5.x-${_tRee}\"\n_hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n\n\n#\n# Update local INI for PHP CLI on the Ægir Master Instance.\n_php_cli_local_ini_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_cli_local_ini_update\"\n  fi\n  if [ ! -z \"${1}\" ]; then\n    _DRUSH_FILE=\"/var/aegir/drush/${1}\"\n  else\n    _DRUSH_FILE=\"/var/aegir/drush/drush.php\"\n  fi\n  _U_HD=\"/var/aegir/.drush\"\n  _U_TP=\"/var/aegir/.tmp\"\n  _U_II=\"${_U_HD}/php.ini\"\n  _PHP_CLI_UPDATE=NO\n  if [ ! -e \"${_DRUSH_FILE}\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _CHECK_USE_PHP_CLI=$(grep \"/opt/php\" ${_DRUSH_FILE} 2>&1)\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php${e}\" ]] \\\n      && [ ! -e \"${_U_HD}/.ctrl.php${e}.${_xSrl}.pid\" ]; then\n      _PHP_CLI_UPDATE=YES\n    fi\n  done\n  if [ \"${_PHP_CLI_UPDATE}\" = \"YES\" ] \\\n    || [ ! -e \"${_U_II}\" ] \\\n    || [ ! -d \"${_U_TP}\" ] \\\n    || [ ! 
-e \"${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mkdir -p ${_U_TP}\n    touch ${_U_TP}\n    find ${_U_TP}/ -mtime +0 -exec rm -rf {} \\; &> /dev/null\n    chmod 02755 ${_U_TP}\n    mkdir -p ${_U_HD}\n    rm -f ${_U_HD}/.ctrl.php*\n    rm -f ${_U_II}\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php${e}\" ]]; then\n        cp -af /opt/php${e}/lib/php.ini ${_U_II}\n        _U_INI=${e}\n      fi\n    done\n    if [ -e \"${_U_II}\" ]; then\n      _INI=\"open_basedir = \\\".: \\\n        /data/all:           \\\n        /data/conf:          \\\n        /data/disk/all:      \\\n        /opt/php56:          \\\n        /opt/php70:          \\\n        /opt/php71:          \\\n        /opt/php72:          \\\n        /opt/php73:          \\\n        /opt/php74:          \\\n        /opt/php80:          \\\n        /opt/php81:          \\\n        /opt/php82:          \\\n        /opt/php83:          \\\n        /opt/php84:          \\\n        /opt/php85:          \\\n        /opt/tika:           \\\n        /opt/tika7:          \\\n        /opt/tika8:          \\\n        /opt/tika9:          \\\n        /dev/urandom:        \\\n        /opt/tmp/make_local: \\\n        /opt/tools/drush:    \\\n        /tmp:                \\\n        /usr/bin:            \\\n        /usr/local/bin:      \\\n        /var/aegir\\\"\"\n      _INI=$(echo \"${_INI}\" | sed \"s/ //g\" 2>&1)\n      _INI=$(echo \"${_INI}\" | sed \"s/open_basedir=/open_basedir = /g\" 2>&1)\n      _INI=${_INI//\\//\\\\\\/}\n      _QTP=${_U_TP//\\//\\\\\\/}\n      sed -i \"s/.*open_basedir =.*/${_INI}/g\"                              ${_U_II}\n      wait\n      sed -i \"s/.*error_reporting =.*/error_reporting = 1/g\"               ${_U_II}\n      wait\n      sed -i \"s/.*session.save_path =.*/session.save_path = ${_QTP}/g\"     ${_U_II}\n      wait\n      sed -i \"s/.*soap.wsdl_cache_dir =.*/soap.wsdl_cache_dir = ${_QTP}/g\" 
${_U_II}\n      wait\n      sed -i \"s/.*sys_temp_dir =.*/sys_temp_dir = ${_QTP}/g\"               ${_U_II}\n      wait\n      sed -i \"s/.*upload_tmp_dir =.*/upload_tmp_dir = ${_QTP}/g\"           ${_U_II}\n      wait\n      echo > ${_U_HD}/.ctrl.php${_U_INI}.${_xSrl}.pid\n      echo > ${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n  chown -R aegir:aegir ${_U_HD}\n  chown -R aegir:aegir ${_U_TP}\n}\n\n\n#\n# Download and extract from core archive.\n_get_core_ext() {\n  if [ ! -z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"http://${_USE_MIR}/core/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download http://${_USE_MIR}/core/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Generate provision backend db_passwd.\n_provision_backend_dbpass_generate() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _provision_backend_dbpass_generate\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  _ESC_PASS=\"\"\n  _LEN_PASS=0\n  if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n    _PWD_CHARS=64\n  elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n    _PWD_CHARS=32\n  else\n    _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n    if [ ! 
-z \"${_STRONG_PASSWORDS}\" ] \\\n      && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n      _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n    else\n      _PWD_CHARS=32\n    fi\n    if [ ! -z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n      _PWD_CHARS=128\n    fi\n  fi\n\n  if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] \\\n    || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n        _ESC_PASS=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n      else\n        _RANDPASS_TEST=$(randpass -V 2>&1)\n        if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n          _ESC_PASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n        else\n          _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n          _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n          _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n        fi\n      fi\n    else\n      if [ -e \"/root/.my.pass.txt\" ]; then\n        _ESC_PASS=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n      else\n        _ESC_PASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n      fi\n    fi\n    _isPythonTwo=\"$(which python2)\"\n    _isPythonThree=\"$(which python3)\"\n    _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n    if [ -x \"${_isPythonThree}\" ]; then\n      _ENC_PASS=$(python3 -c \"import urllib.parse; print(urllib.parse.quote('''${_ESC_PASS}'''))\")\n    elif [ -x \"${_isPythonTwo}\" ]; then\n      _ENC_PASS=$(python2 -c \"import urllib; print urllib.quote('''${_ESC_PASS}''')\")\n    fi\n    _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n  fi\n\n  if [ -z \"${_ESC_PASS}\" ] || [ \"${_LEN_PASS}\" -lt 9 ]; then\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      
|| [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n    else\n      if [ -e \"/root/.my.pass.txt\" ]; then\n        _ESC_PASS=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n      else\n        _ESC_PASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n      fi\n    fi\n    _ENC_PASS=\"${_ESC_PASS}\"\n  fi\n\n  echo \"${_ESC_PASS}\" > ${_L_SYS}\n  chown aegir:aegir ${_L_SYS} &> /dev/null\n  chmod 0600 ${_L_SYS}\n\n  if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n    if [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _THIS_DB_HOST=\"${_hName}\"\n      _SQL_CONNECT=localhost\n    elif [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n      _SQL_CONNECT=127.0.0.1\n    else\n      _THIS_DB_HOST=localhost\n      _SQL_CONNECT=localhost\n    fi\n    _AEGIR_HOST=\"${_hName}\"\n  else\n    _AEGIR_HOST=\"${_hName}\"\n    ### _SQL_CONNECT=\"${_THIS_DB_HOST}\"\n    ### Master Instance will use local DB server\n    _SQL_CONNECT=localhost\n  fi\n\n  if [ \"${_THIS_DB_HOST}\" = \"${_MY_OWNIP}\" ]; then\n    _AEGIR_HOST=\"${_hName}\"\n    _SQL_CONNECT=localhost\n  fi\n\n  _ESC=\"*.*\"\n  if [ -e \"/var/aegir/use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  else\n    _THIS_DB_PORT=3306\n  fi\n  mysqladmin -u root flush-privileges &> /dev/null\n\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL7 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n    if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n      _PROXYSQL_PASSWORD=$(cat 
/root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n      mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_ADBU}';\nDELETE FROM mysql_query_rules WHERE username='${_ADBU}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_ADBU}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_ADBU}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_ADBU}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n    fi\n    mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_ADBU}'@'localhost';\nCREATE USER IF NOT EXISTS '${_ADBU}'@'%';\nGRANT ALL ON ${_ESC} TO '${_ADBU}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_ADBU}'@'%' WITH GRANT OPTION;\nALTER USER '${_ADBU}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_ADBU}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n  else\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL8 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n      if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n        _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n        mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_ADBU}';\nDELETE FROM mysql_query_rules WHERE username='${_ADBU}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_ADBU}','${_ESC_PASS}','10');\nLOAD MYSQL 
USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_ADBU}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_ADBU}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n      fi\n      _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n      ${_C_SQL} \"DROP USER '${_ADBU}'@'${_AEGIR_HOST}';\" &> /dev/null\n      ${_C_SQL} \"DROP USER '${_ADBU}'@'${_RESOLVEIP}';\" &> /dev/null\n      ${_C_SQL} \"DROP USER '${_ADBU}'@'localhost';\" &> /dev/null\n      ${_C_SQL} \"DROP USER '${_ADBU}'@'127.0.0.1';\" &> /dev/null\n      ${_C_SQL} \"DROP USER '${_ADBU}'@'%';\" &> /dev/null\n      mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_ADBU}'@'localhost';\nCREATE USER IF NOT EXISTS '${_ADBU}'@'%';\nGRANT ALL ON ${_ESC} TO '${_ADBU}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_ADBU}'@'%' WITH GRANT OPTION;\nALTER USER '${_ADBU}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_ADBU}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n    fi\n  fi\n\n  if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n    _EXTRA_GRANTS=NO\n  else\n    _LOCAL_HOST=\"${_hName}\"\n    _find_correct_ip\n    _LOCAL_IP=\"${_LOC_IP}\"\n    [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL9 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n    if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n      _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n      mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE 
username='${_ADBU}';\nDELETE FROM mysql_query_rules WHERE username='${_ADBU}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_ADBU}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_ADBU}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_ADBU}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n    fi\n    mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_ADBU}'@'localhost';\nCREATE USER IF NOT EXISTS '${_ADBU}'@'%';\nGRANT ALL ON ${_ESC} TO '${_ADBU}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_ADBU}'@'%' WITH GRANT OPTION;\nALTER USER '${_ADBU}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_ADBU}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n  fi\n  mysqladmin -u root flush-privileges &> /dev/null\n}\n\n#\n# Sync provision backend db_passwd.\n_provision_backend_dbpass_sync() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _provision_backend_dbpass_sync\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  _find_correct_ip\n  _USE_RESOLVEIP=\"${_LOC_IP}\"\n  _RESOLVEIP=\"${_LOC_IP}\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Syncing provision backend db_passwd...\"\n  fi\n  if [ -e \"/var/aegir/use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  else\n    _THIS_DB_PORT=3306\n  fi\n  _ADBU=aegir_root\n  _L_SYS=\"/var/aegir/backups/system/.${_ADBU}.pass.txt\"\n  mv -f ${_L_SYS} ${_L_SYS}-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n  _provision_backend_dbpass_generate\n  if [ ! -z \"${_ESC_PASS}\" ] && [ ! 
-z \"${_ENC_PASS}\" ]; then\n    su -s /bin/bash - aegir -c \"drush8 @hostmaster \\\n      sqlq \\\"UPDATE hosting_db_server SET db_passwd='${_ESC_PASS}' \\\n      WHERE db_user='${_ADBU}'\\\"\" &> /dev/null\n    wait\n    _SQL_CONNECT=localhost\n    if [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n      _SQL_CONNECT=127.0.0.1\n    fi\n    _ESC=\"*.*\"\n    _USE_DB_USER=\"${_ADBU}\"\n    _USE_AEGIR_HOST=\"${_hName}\"\n    [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL10 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n    if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n      _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n      mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_USE_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n    fi\n    _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_AEGIR_HOST}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_RESOLVEIP}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'localhost';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'127.0.0.1';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'%';\" &> /dev/null\n    mysql --silent -u root -h${_SQL_CONNECT} 
-P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n    sed -i \"s/mysql:\\/\\/${_ADBU}:.*/mysql:\\/\\/${_ADBU}:${_ENC_PASS}@${_SQL_CONNECT}',/g\" \\\n      /var/aegir/.drush/server_*.alias.drushrc.php &> /dev/null\n    wait\n  fi\n  mysqladmin -u root flush-privileges &> /dev/null\n  su -s /bin/bash - aegir -c \"drush8 cc drush\" &> /dev/null\n  wait\n  rm -rf /var/aegir/.tmp/cache\n  if [ -e \"/var/aegir/.drush/server_localhost.alias.drushrc.php\" ]; then\n    su -s /bin/bash aegir -c \"drush8 @hostmaster hosting-task @server_localhost \\\n      verify --force\" &> /dev/null\n  else\n    su -s /bin/bash aegir -c \"drush8 @hostmaster hosting-task @server_master \\\n      verify --force\" &> /dev/null\n  fi\n  wait\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Running hosting-dispatch/hosting-tasks...\"\n  fi\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-dispatch\" &> /dev/null\n  wait\n  sleep 5\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-tasks --force\" &> /dev/null\n  wait\n  sleep 5\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-tasks --force\" &> /dev/null\n  wait\n  sleep 5\n  su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-tasks --force\" &> /dev/null\n  wait\n}\n\n#\n# Sync hostmaster frontend db_passwd.\n_hostmaster_frontend_dbpass_sync() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _hostmaster_frontend_dbpass_sync\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Syncing hostmaster frontend db_passwd...\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 
2>/dev/null | tr -d '\\n')\n  if [ -e \"/var/aegir/use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  else\n    _THIS_DB_PORT=3306\n  fi\n  _THIS_HM_SPTH=$(cat /var/aegir/.drush/hm.alias.drushrc.php \\\n    | grep \"site_path'\" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,']//g\" 2>&1)\n  _THIS_HM_DBUR=$(cat ${_THIS_HM_SPTH}/drushrc.php \\\n    | grep \"options\\['db_user'\\] = \" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,';]//g\" 2>&1)\n  _THIS_HM_DBPD=$(cat ${_THIS_HM_SPTH}/drushrc.php \\\n    | grep \"options\\['db_passwd'\\] = \" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,';]//g\" 2>&1)\n  if [ -e \"${_THIS_HM_SPTH}\" ] \\\n    && [ ! -z \"${_THIS_HM_DBUR}\" ] \\\n    && [ ! -z \"${_THIS_HM_DBPD}\" ]; then\n    _SQL_CONNECT=localhost\n    if [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n      _SQL_CONNECT=127.0.0.1\n    fi\n    _ESC=\"*.*\"\n    _USE_DB_USER=\"${_THIS_HM_DBUR}\"\n    _USE_AEGIR_HOST=\"${_hName}\"\n    [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL11 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n    if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n      _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n      mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_USE_DB_USER}','${_THIS_HM_DBPD}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES 
('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n    fi\n    _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_AEGIR_HOST}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_RESOLVEIP}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'localhost';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'127.0.0.1';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'%';\" &> /dev/null\n    mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_THIS_HM_DBPD}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_THIS_HM_DBPD}';\nEOFMYSQL\n  fi\n  mysqladmin -u root flush-privileges &> /dev/null\n}\n\n#\n# Download for Drush Make Local build.\n_master_download_for_local_build() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _master_download_for_local_build\"\n  fi\n  _mL=\"/opt/tmp/make_local\"\n  mkdir -p ${_mL}\n  if [ ! -e \"${_mL}/hostmaster/hostmaster.make\" ] \\\n    || [ ! -e \"${_mL}/hosting/hosting.module\" ] \\\n    || [ ! 
-e \"${_mL}/drupal/modules/system/system.module\" ]; then\n    rm -rf ${_mL}\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading hostmaster modules...\"\n    fi\n    if [ \"${_DL_MODE}\" = \"BATCH\" ]; then\n      rm -rf /opt/tmp/make_local\n      cd /opt/tmp\n      _get_dev_ext \"make_local.tar.gz\"\n      if [ -e \"${_mL}/hostmaster/hostmaster.make\" ] \\\n        && [ -e \"${_mL}/hosting/hosting.module\" ] \\\n        && [ -e \"${_mL}/drupal/modules/system/system.module\" ]; then\n        [ -e \"make_local.tar.gz\" ] && rm -f make_local.tar.gz\n      fi\n    else\n      mkdir -p ${_mL}\n      cd ${_mL}\n      ### Drupal Core\n      _get_core_ext \"${_DRUPAL7}.tar.gz\"\n      mv -f ${_mL}/${_DRUPAL7} ${_mL}/drupal\n      ### Ægir Core\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hostmaster.git    &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting.git       &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/eldir.git         &> /dev/null\n      ### Ægir Golden + BOA Settings\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/aegir_objects.git                  &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_civicrm.git                &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_custom_settings.git        &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_deploy.git                 &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_git.git                    &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_le.git                     &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_remote_import.git          &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_site_backup_manager.git    &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_tasks_extra.git            &> /dev/null\n        rm -rf */.git\n      ### Ægir Drupal Contrib\n      _get_dev_stc \"admin_menu-7.x-3.0-rc7.tar.gz\"\n      _get_dev_stc \"betterlogin-7.x-1.5.tar.gz\"\n      
_get_dev_stc \"ctools-7.x-1.21.tar.gz\"\n      _get_dev_stc \"entity-7.x-1.12.tar.gz\"\n      _get_dev_stc \"libraries-7.x-2.5.tar.gz\"\n      _get_dev_stc \"module_filter-7.x-2.3.tar.gz\"\n      _get_dev_stc \"openidadmin-7.x-1.0.tar.gz\"\n      _get_dev_stc \"overlay_paths-7.x-1.3.tar.gz\"\n      _get_dev_stc \"r4032login-7.x-1.8.tar.gz\"\n      _get_dev_stc \"tfa_basic-7.x-1.1.tar.gz\"\n      _get_dev_stc \"tfa-7.x-2.1.tar.gz\"\n      _get_dev_stc \"timeago-7.x-2.3.tar.gz\"\n      _get_dev_stc \"views_bulk_operations-7.x-3.7.tar.gz\"\n      _get_dev_stc \"views-7.x-3.30.tar.gz\"\n      ### Ægir Third Party Libraries\n      _get_dev_stc \"qrcodejs.tar.gz\"\n      _get_dev_stc \"timeagojs.tar.gz\"\n      _get_dev_stc \"vuejs.tar.gz\"\n      ### BOA Drupal Contrib\n      _get_dev_stc \"d7security_client-7.x-1.3.tar.gz\"\n      _get_dev_stc \"features_extra-7.x-1.2.tar.gz\"\n      _get_dev_stc \"features-7.x-2.15.tar.gz\"\n      _get_dev_stc \"idna_convert-7.x-1.0.tar.gz\"\n      _get_dev_stc \"revision_deletion-7.x-1.3.tar.gz\"\n      _get_dev_stc \"strongarm-7.x-2.0.tar.gz\"\n      _get_dev_stc \"userprotect-7.x-1.3.tar.gz\"\n      _get_dev_stc \"environment_indicator-7.x-2.9.tar.gz\"\n    fi\n    find ${_mL} -type d -exec chmod 0755 {} \\; &> /dev/null\n    find ${_mL} -type f -exec chmod 0644 {} \\; &> /dev/null\n    chown -R root:root ${_mL}\n  fi\n}\n\n#\n_php_cli_drush_update() {\n  if [ ! -z \"${1}\" ]; then\n    _DRUSH_FILE=\"${_ROOT}/drush/${1}\"\n  else\n    _DRUSH_FILE=\"${_ROOT}/drush/drush.php\"\n  fi\n  if [ ! 
-e \"${_DRUSH_FILE}\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  if [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n    && [ -x \"/opt/php85/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php85\\/bin\\/php/g\" \\\n      ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n    && [ -x \"/opt/php84/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php84\\/bin\\/php/g\" \\\n      ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n    && [ -x \"/opt/php83/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php83\\/bin\\/php/g\" \\\n      ${_DRUSH_FILE} &> /dev/null\n  else\n    _msg \"FATAL ERROR: _PHP_CLI_VERSION must be set to one of the supported values: 8.3, 8.4 or 8.5\"\n    _msg \"FATAL ERROR: Aborting AegirM installer NOW!\"\n    touch /opt/tmp/status-AegirM-FAIL\n    exit 1\n  fi\n}\n\n#\n# Download Drush and Provision extensions.\n_provision_backend_up() {\n  if [ \"${_DL_MODE}\" = \"BATCH\" ]; then\n    mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf ${_ROOT}/.drush/drush_make\n    rm -rf ${_ROOT}/.drush/sys/drush_make\n    cd ${_ROOT}/.drush\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    rm -rf ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf ${_ROOT}/.drush/{provision,drush_make}\n    _get_dev_ext \"backend.tar.gz\"\n    mv -f ${_ROOT}/.drush/backend/sys ${_ROOT}/.drush/\n    mv -f ${_ROOT}/.drush/backend/xts ${_ROOT}/.drush/\n    mv -f ${_ROOT}/.drush/backend/usr ${_ROOT}/.drush/\n    if [ -e \"${_ROOT}/.drush/sys/provision/provision.inc\" ] \\\n      && [ -d \"${_ROOT}/.drush/xts/security_review\" ] \\\n      && [ -d \"${_ROOT}/.drush/usr/registry_rebuild\" ]; then\n      [ -e \"${_ROOT}/.drush/backend\" ] && rm -rf ${_ROOT}/.drush/backend*\n    fi\n  elif [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf ${_ROOT}/.drush/drush_make\n    rm -rf 
${_ROOT}/.drush/sys/drush_make\n    cd ${_ROOT}/.drush\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    rm -rf ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf ${_ROOT}/.drush/{provision,drush_make}\n    mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n    _rD=\"${_ROOT}/.drush\"\n    ${_gCb} ${_BRANCH_PRN} ${_gitHub}/provision.git      ${_rD}/sys/provision &> /dev/null\n    ${_gCb} 7.x-1.x-dev ${_gitHub}/drupalgeddon.git      ${_rD}/usr/drupalgeddon &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/drush_ecl.git             ${_rD}/usr/drush_ecl &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/security_review.git       ${_rD}/xts/security_review &> /dev/null\n    ${_gCb} 7.x-2.x ${_gitHub}/provision_boost.git       ${_rD}/xts/provision_boost &> /dev/null\n    ${_gCb} 7.x-2.x ${_gitHub}/registry_rebuild.git      ${_rD}/usr/registry_rebuild &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/safe_cache_form_clear.git ${_rD}/usr/safe_cache_form_clear &> /dev/null\n    rm -rf ${_rD}/*/.git\n    rm -rf ${_rD}/*/*/.git\n    cd ${_rD}/usr\n    _get_dev_ext \"clean_missing_modules.tar.gz\"\n    _get_dev_ext \"utf8mb4_convert-7.x-1.3.tar.gz\"\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    rm -rf ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf ${_ROOT}/.drush/{provision,drush_make}\n    mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n    cd ${_ROOT}/.drush/sys\n    _get_dev_ext \"provision.tar.gz\"\n    cd ${_ROOT}/.drush/usr\n    _get_dev_ext \"clean_missing_modules.tar.gz\"\n    _get_dev_ext \"drupalgeddon.tar.gz\"\n    _get_dev_ext \"drush_ecl.tar.gz\"\n    _get_dev_ext \"registry_rebuild.tar.gz\"\n    _get_dev_ext \"safe_cache_form_clear.tar.gz\"\n    _get_dev_ext \"utf8mb4_convert-7.x-1.3.tar.gz\"\n    cd ${_ROOT}/.drush/xts\n    _get_dev_ext \"provision_boost.tar.gz\"\n    
_get_dev_ext \"security_review.tar.gz\"\n  fi\n}\n\n#\n_hostmaster_dr_up() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Downloading drush ${_DRUSH_VERSION}...\"\n  fi\n  mkdir -p ${_ROOT}/backups/system\n  chmod 700 ${_ROOT}/backups/system\n  cd ${_ROOT}\n  if [ -f \"/opt/tools/drush/8/drush/drush.php\" ]; then\n    mv -f drush ${_ROOT}/backups/system/drush-pre-${_DISTRO}-${_NOW} &> /dev/null\n    cp -af /opt/tools/drush/8/drush ${_ROOT}/\n  else\n    mv -f drush ${_ROOT}/backups/system/drush-pre-${_DISTRO}-${_NOW} &> /dev/null\n    _get_dev_ext \"drush-${_DRUSH_VERSION}.tar.gz\"\n    cd ${_ROOT}/drush/\n    find ${_ROOT}/drush -type d -exec chmod 0755 {} \\; &> /dev/null\n    find ${_ROOT}/drush -type f -exec chmod 0644 {} \\; &> /dev/null\n    chmod 755 ${_ROOT}/drush/drush\n    chmod 755 ${_ROOT}/drush/drush.complete.sh\n    chmod 755 ${_ROOT}/drush/drush.launcher\n    chmod 755 ${_ROOT}/drush/drush.php\n    chmod 755 ${_ROOT}/drush/unish.sh\n    chmod 755 ${_ROOT}/drush/examples/drush.wrapper\n    chmod 755 ${_ROOT}/drush/examples/git-bisect.example.sh\n    chmod 755 ${_ROOT}/drush/examples/helloworld.script\n  fi\n}\n\n#\n# Upgrade Ægir Master Instance.\n_aegir_master_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _aegir_master_upgrade\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  if _prompt_yes_no \"Do you want to upgrade Ægir Master Instance?\" ; then\n    true\n    _msg \"INFO: Running Ægir Master Instance upgrade\"\n    if [ -x \"/usr/bin/mysql_upgrade\" ]; then\n      _check_mysql_version\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        _mrun \"mysql_upgrade -u root --force\"\n      fi\n    fi\n    if [ -e \"/var/aegir/use_proxysql.txt\" ]; then\n      _SQL_CONNECT=127.0.0.1\n      _THIS_DB_PORT=6033\n    else\n      _THIS_DB_PORT=3306\n    fi\n    rm -f /opt/tmp/testecho*\n    usermod -aG users aegir\n    _VAR_IF_PRESENT=$(grep \"aegir ALL=NOPASSWD\" /etc/sudoers 2>&1)\n    
if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"aegir ALL=NOPASSWD\" ]]; then\n      echo \"aegir ALL=NOPASSWD: /etc/init.d/nginx\" >> /etc/sudoers\n    fi\n    _SCRIPTS=(fix-drupal-platform-permissions fix-drupal-site-permissions fix-drupal-platform-ownership fix-drupal-site-ownership lock-local-drush-permissions)\n    for SCRIPT in ${_SCRIPTS[@]}; do\n      _VAR_IF_PRESENT=$(grep \"aegir ALL=NOPASSWD: /usr/local/bin/${SCRIPT}.sh\" /etc/sudoers.d/${SCRIPT} 2>&1)\n      if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"aegir ALL=NOPASSWD\" ]]; then\n        echo \"aegir ALL=NOPASSWD: /usr/local/bin/${SCRIPT}.sh\" >> /etc/sudoers.d/${SCRIPT}\n        chmod 0440 /etc/sudoers.d/${SCRIPT}\n      fi\n    done\n\n    _hostmaster_dr_up\n\n    if [ ! -d \"/var/aegir/.drush/sys/provision/http\" ] \\\n      || [ ! -d \"/var/aegir/drush/includes\" ]; then\n      rm -rf /var/aegir/.drush/{sys,xts,usr}\n      rm -rf /var/aegir/.drush/{provision,drush_make}\n      mkdir -p /var/aegir/.drush/{sys,xts,usr}\n      ${_gCb} ${_BRANCH_PRN} ${_gitHub}/provision.git /var/aegir/.drush/sys/provision &> /dev/null\n    fi\n    _THIS_HM_ROOT=$(cat /var/aegir/.drush/hm.alias.drushrc.php \\\n      | grep \"root'\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $3}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    _THIS_HM_SITE=$(cat /var/aegir/.drush/hm.alias.drushrc.php \\\n      | grep \"site_path'\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $3}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    mkdir -p ${_THIS_HM_ROOT}/sites/all/{modules,themes,libraries}\n    chown -R aegir:aegir ${_THIS_HM_ROOT}/sites/all\n    _U_HD=\"/var/aegir/.drush\"\n    if [ -e \"${_U_HD}/php.ini\" ]; then\n      chattr -i ${_U_HD}/php.ini\n    fi\n    mkdir -p /var/aegir/backups/system\n    chmod 700 /var/aegir/backups/system\n    chown -R aegir:aegir /var/aegir/.drush\n    chown -R aegir:aegir /var/aegir/backups\n    chown -R aegir:aegir /var/aegir/clients\n    chown -R aegir:aegir /var/aegir/config\n    chown -R aegir:aegir 
/var/aegir/drush\n    chown -R aegir ${_THIS_HM_ROOT}\n    chown -R aegir:www-data ${_THIS_HM_SITE}/files\n    chmod -R 02775 ${_THIS_HM_SITE}/files\n\n    _DRUSH_FILES=\"drush.php drush\"\n    for _df in ${_DRUSH_FILES}; do\n      _php_cli_drush_update \"${_df}\"\n    done\n\n    export _xSrl=591devT01\n    export _X_VERSION=\"${_X_VERSION}\"\n\n    echo \"_AEGIR_VERSION=\\\"${_AEGIR_VERSION}\\\"\"         >> ${_vBs}/${_filIncB}\n    echo \"_AEGIR_XTS_VRN=\\\"${_tRee}\\\"\"                  >> ${_vBs}/${_filIncB}\n    echo \"_BOA_REPO_GIT_URL=\\\"${_BOA_REPO_GIT_URL}\\\"\"   >> ${_vBs}/${_filIncB}\n    echo \"_BOA_REPO_NAME=\\\"${_BOA_REPO_NAME}\\\"\"         >> ${_vBs}/${_filIncB}\n    echo \"_BRANCH_BOA=\\\"${_BRANCH_BOA}\\\"\"               >> ${_vBs}/${_filIncB}\n    echo \"_BRANCH_PRN=\\\"${_BRANCH_PRN}\\\"\"               >> ${_vBs}/${_filIncB}\n    echo \"_DEBUG_MODE=\\\"${_DEBUG_MODE}\\\"\"               >> ${_vBs}/${_filIncB}\n    echo \"_DL_MODE=\\\"${_DL_MODE}\\\"\"                     >> ${_vBs}/${_filIncB}\n    echo \"_DOMAIN=\\\"${_THIS_FRONT}\\\"\"                   >> ${_vBs}/${_filIncB}\n    echo \"_DRUPAL7=\\\"${_DRUPAL7}\\\"\"                     >> ${_vBs}/${_filIncB}\n    echo \"_DRUSH_VERSION=\\\"${_DRUSH_VERSION}\\\"\"         >> ${_vBs}/${_filIncB}\n    echo \"_NOW=\\\"${_NOW}\\\"\"                             >> ${_vBs}/${_filIncB}\n    echo \"_PHP_CLI_VERSION=\\\"${_PHP_CLI_VERSION}\\\"\"     >> ${_vBs}/${_filIncB}\n    echo \"_PHP_FPM_VERSION=\\\"${_PHP_FPM_VERSION}\\\"\"     >> ${_vBs}/${_filIncB}\n    echo \"_SMALLCORE7_V=\\\"${_SMALLCORE7_V}\\\"\"           >> ${_vBs}/${_filIncB}\n    echo \"_STRONG_PASSWORDS=\\\"${_STRONG_PASSWORDS}\\\"\"   >> ${_vBs}/${_filIncB}\n    echo \"_THIS_DB_HOST=\\\"${_THIS_DB_HOST}\\\"\"           >> ${_vBs}/${_filIncB}\n    echo \"_USE_MIR=\\\"${_USE_MIR}\\\"\"                     >> ${_vBs}/${_filIncB}\n    echo \"_xSrl=\\\"${_xSrl}\\\"\"                           >> ${_vBs}/${_filIncB}\n    echo 
\"_X_VERSION=\\\"${_X_VERSION}\\\"\"                 >> ${_vBs}/${_filIncB}\n\n    mysqladmin -u root flush-hosts &> /dev/null\n    _RST=$(syncpass fix aegir 2>&1)\n    _find_correct_ip\n    _USE_RESOLVEIP=\"${_LOC_IP}\"\n    _RESOLVEIP=\"${_LOC_IP}\"\n    _provision_backend_dbpass_sync\n    _hostmaster_frontend_dbpass_sync\n    _master_download_for_local_build\n\n    ### Make sure that required web services are up and running\n    _mrun \"webserver up\"\n\n    ###\n    AegirUpgrade=\"${_bldPth}/aegir/scripts/AegirUpgrade.sh.txt\"\n    su -s /bin/bash - aegir -c \"/bin/bash ${AegirUpgrade} aegir\"\n    wait\n    ###\n\n    if [ -e \"/opt/tmp/status-AegirUpgrade-FAIL\" ]; then\n      _msg \"FATAL ERROR: AegirUpgrade installer failed\"\n      _msg \"FATAL ERROR: Aborting Barracuda installer NOW!\"\n      touch /opt/tmp/status-Barracuda-FAIL\n      _clean_pid_exit _aegir_master_upgrade_a\n    else\n      if [ -e \"${_U_HD}/php.ini\" ]; then\n        chattr +i ${_U_HD}/php.ini\n      fi\n      _hostmaster_frontend_dbpass_sync\n    fi\n    if [ ! -L \"${_mtrInc}/global.inc\" ] && [ -e \"${_mtrInc}/global.inc\" ]; then\n      mv -f ${_mtrInc}/global.inc \\\n        ${_mtrInc}/global.inc-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    fi\n    mkdir -p /data/conf\n    cp -af ${_locCnf}/global/global.inc /data/conf/global.inc\n    sed -i \"s/3600/${_SPEED_VALID_MAX}/g\" /data/conf/global.inc &> /dev/null\n    if [ -e \"${_mtrInc}\" ] && [ ! -L \"${_mtrInc}/global.inc\" ] \\\n      && [ -e \"/data/conf/global.inc\" ]; then\n      ln -sfn /data/conf/global.inc ${_mtrInc}/global.inc\n    fi\n    if [ -e \"/etc/init.d/valkey-server\" ]; then\n      _valkey_password_update\n    elif [ -e \"/etc/init.d/redis-server\" ]; then\n      _redis_password_update\n    fi\n    _force_advanced_nginx_config\n    cd /var/aegir\n    if [ -d \"${_mtrNgx}/conf.d\" ]; then\n      if [ ! 
-d \"${_mtrNgx}/pre.d\" ]; then\n        cd ${_mtrNgx}\n        cp -a conf.d pre.d\n      else\n        rm -rf ${_mtrNgx}/conf.d\n      fi\n      if [ -e \"${_mtrNgx}/pre.d/custom_nginx.conf\" ]; then\n        rm -f ${_mtrNgx}/pre.d/custom_nginx.conf\n      fi\n    fi\n    find /var/aegir/host_master/*/profiles/* -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /var/aegir/host_master/*/profiles/* -type f -exec chmod 0644 {} \\; &> /dev/null\n    find /var/aegir/*/profiles/* -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /var/aegir/*/profiles/* -type f -exec chmod 0644 {} \\; &> /dev/null\n    chown -R aegir:aegir /var/aegir/.drush &> /dev/null\n    chown -R aegir:aegir /var/aegir/.tmp &> /dev/null\n    find /var/aegir/.drush -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /var/aegir/.drush -type f -exec chmod 0644 {} \\; &> /dev/null\n    chmod 0440 /var/aegir/.drush/*.php &> /dev/null\n    chmod 0711 /var/aegir/.drush &> /dev/null\n    _msg \"INFO: Ægir Master Instance upgrade completed\"\n  else\n    _msg \"INFO: Ægir Master Instance not upgraded this time\"\n  fi\n  rm -f /var/aegir/*install.sh.txt\n}\n\n#\n# Update php-cli in the cron entry.\n_php_cli_cron_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_cli_cron_update\"\n  fi\n  rm -f /var/spool/cron/crontabs/aegir\n  if [ ! 
-e \"/var/spool/cron/crontabs/aegir\" ]; then\n    #_DRUSH_HOSTING_TASKS_CMD=\"/usr/bin/drush @hostmaster hosting-tasks --force;\"\n    _DRUSH_HOSTING_DISPATCH_CMD=\"/usr/bin/env php /var/aegir/drush/drush.php @hostmaster hosting-dispatch\"\n    echo -e \\\n    \"\\nSHELL=/bin/sh\\nPATH=/opt/php84/bin:/usr/bin\\n\\n*/1 * * * * \\\n    ${_DRUSH_HOSTING_DISPATCH_CMD}\" \\\n    | fmt -su -w 2500 \\\n    | tee -a /var/spool/cron/crontabs/aegir >/dev/null 2>&1\n  fi\n  if [ -e \"/var/spool/cron/crontabs/aegir\" ]; then\n    if [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] && [ -x \"/opt/php85/bin/php\" ]; then\n      sed -i \"s/^PATH=.*/PATH=\\/opt\\/php85\\/bin:\\/sbin:\\/bin:\\/usr\\/sbin:\\/usr\\/bin/g\" \\\n        /var/spool/cron/crontabs/aegir &> /dev/null\n    elif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] && [ -x \"/opt/php84/bin/php\" ]; then\n      sed -i \"s/^PATH=.*/PATH=\\/opt\\/php84\\/bin:\\/sbin:\\/bin:\\/usr\\/sbin:\\/usr\\/bin/g\" \\\n        /var/spool/cron/crontabs/aegir &> /dev/null\n    elif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] && [ -x \"/opt/php83/bin/php\" ]; then\n      sed -i \"s/^PATH=.*/PATH=\\/opt\\/php83\\/bin:\\/sbin:\\/bin:\\/usr\\/sbin:\\/usr\\/bin/g\" \\\n        /var/spool/cron/crontabs/aegir &> /dev/null\n    fi\n    chown aegir:crontab /var/spool/cron/crontabs/aegir &> /dev/null\n    chmod 600 /var/spool/cron/crontabs/aegir\n  fi\n}\n\n#\n# Add allow-snail group if not exists.\n_add_allow_snail_if_not_exists() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _add_allow_snail_if_not_exists\"\n  fi\n  _SNAIL_EXISTS=$(getent group allow-snail 2>&1)\n  if [[ ! 
\"${_SNAIL_EXISTS}\" =~ \"allow-snail\" ]]; then\n    addgroup --system allow-snail &> /dev/null\n  fi\n}\n\n#\n# Unlock sendmail for allow-snail group.\n_unlock_sendmail_for_snail() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _unlock_sendmail_for_snail\"\n  fi\n  _add_allow_snail_if_not_exists\n  chown root /usr/sbin/sendmail &> /dev/null\n  chgrp allow-snail /usr/sbin/sendmail &> /dev/null\n}\n\n#\n# Install or upgrade Ægir Master Instance.\n_aegir_master_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _aegir_master_install_upgrade\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n\n  ###--------------------###\n    if [ ! -e \"/run/mysqld/mysqld.pid\" ] \\\n      || [ ! -e \"/run/mysqld/mysqld.sock\" ]; then\n      _msg \"ALRT! ${_DB_SERVER} server not running properly!\"\n      _msg \"EXIT: We can't proceed and will exit now\"\n      _msg \"HINT: Please check these two log files for more information:\"\n      _msg \"INFO:   ${_LOG_INFO}\"\n      _msg \"ERRR:   ${_LOG_ERRR}\"\n      _msg \"HINT: (re)start ${_DB_SERVER} server, then run the installer again\"\n      _msg \"Bye\"\n      [ -e \"/root/.my.pass.txt\" ] && rm -f /root/.my.pass.txt\n      mkdir -p /var/aegir\n      _clean_pid_exit _aegir_master_install_upgrade_a\n    fi\n\n  ###--------------------###\n    _msg \"INFO: Installing Ægir Master Instance...\"\n    adduser --system --group --home /var/aegir aegir &> /dev/null\n    usermod -aG www-data aegir\n    usermod -aG users aegir\n    echo \"aegir ALL=NOPASSWD: /etc/init.d/nginx\" >> /etc/sudoers\n\n    _unlock_sendmail_for_snail\n    if getent group allow-snail >/dev/null 2>&1 && \\\n      ! 
id -nG \"aegir\" 2>/dev/null | tr ' ' '\\n' | grep -qxF \"allow-snail\"; then\n      usermod -aG allow-snail aegir\n    fi\n    chmod 755 /usr/sbin/sendmail &> /dev/null\n\n    ln -sfn /var/aegir/config/nginx.conf /etc/nginx/conf.d/aegir.conf &> /dev/null\n    _nginx_conf_update\n\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      if [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n        _THIS_DB_HOST=\"${_hName}\"\n        _SQL_CONNECT=localhost\n      elif [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n        _SQL_CONNECT=127.0.0.1\n      else\n        _THIS_DB_HOST=localhost\n        _SQL_CONNECT=localhost\n      fi\n      _AEGIR_HOST=\"${_hName}\"\n    else\n      _AEGIR_HOST=\"${_hName}\"\n      ### _SQL_CONNECT=\"${_THIS_DB_HOST}\"\n      ### Master Instance will use local DB server\n      _SQL_CONNECT=localhost\n    fi\n    if [ \"${_THIS_DB_HOST}\" = \"${_MY_OWNIP}\" ]; then\n      _AEGIR_HOST=\"${_hName}\"\n      _SQL_CONNECT=localhost\n    fi\n\n    _find_correct_ip\n    _RESOLVEIP=\"${_LOC_IP}\"\n    if [ -z \"${_RESOLVEIP}\" ]; then\n      _msg \"FATAL ERROR: DNS looks broken for server ${_AEGIR_HOST}\"\n      _clean_pid_exit _aegir_master_install_upgrade_b\n    else\n      _AEGIR_HOST_IP=\"${_RESOLVEIP}\"\n    fi\n\n    if [ \"${_VMFAMILY}\" != \"AWS\" ]; then\n      _MYSQLTEST=$(mysql --silent -h${_AEGIR_HOST_IP} -uINVALIDLOGIN -pINVALIDPASS 2>&1 >/dev/null | cat)\n      ### Errors 2003/1130 mean the server is not reachable on this IP;\n      ### any other error (like 1045 access denied) proves it is listening\n      if ! echo \"${_MYSQLTEST}\" | grep -q \"ERROR \\(2003\\|1130\\)\"; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: ${_DB_SERVER} is listening on ${_AEGIR_HOST_IP}.\"\n        fi\n      else\n        _msg \"FATAL ERROR: ${_DB_SERVER} is not configured to listen on ${_AEGIR_HOST_IP}\"\n        _clean_pid_exit 
_aegir_master_install_upgrade_c\n      fi\n    fi\n\n    _AEGIR_DB_USER=aegir_root\n    _ESC_PASS=\"\"\n    _LEN_PASS=0\n\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n      _PWD_CHARS=64\n    elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n      _PWD_CHARS=32\n    else\n      _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n      if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n        && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n        _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n      else\n        _PWD_CHARS=32\n      fi\n      if [ ! -z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n        _PWD_CHARS=128\n      fi\n    fi\n\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n      if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n        if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n          _ESC_PASS=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n        else\n          _RANDPASS_TEST=$(randpass -V 2>&1)\n          if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n            _ESC_PASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n          else\n            _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n            _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n            _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n          fi\n        fi\n      else\n        if [ -e \"/root/.my.pass.txt\" ]; then\n          _ESC_PASS=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n        else\n          _ESC_PASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n        fi\n      fi\n      _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n      _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n    fi\n\n    if [ -z \"${_ESC_PASS}\" ] || [ \"${_LEN_PASS}\" -lt 9 ]; then\n      if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n        
|| [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n        _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n      else\n        if [ -e \"/root/.my.pass.txt\" ]; then\n          _ESC_PASS=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n        else\n          _ESC_PASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n        fi\n      fi\n    fi\n\n    _ESC=\"*.*\"\n    [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL12 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n    if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n      _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n      mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_AEGIR_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_AEGIR_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_AEGIR_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_AEGIR_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_AEGIR_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n    fi\n    mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_AEGIR_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_AEGIR_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_AEGIR_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_AEGIR_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER 
'${_AEGIR_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_AEGIR_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _EXTRA_GRANTS=NO\n    else\n      _LOCAL_HOST=\"${_hName}\"\n      _find_correct_ip\n      _LOCAL_IP=\"${_LOC_IP}\"\n      [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL13 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n      if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n        _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n        mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_AEGIR_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_AEGIR_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_AEGIR_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_AEGIR_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_AEGIR_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n      fi\n      mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_AEGIR_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_AEGIR_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_AEGIR_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_AEGIR_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_AEGIR_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_AEGIR_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n    fi\n\n    
export _xSrl=591devT01\n    export _X_VERSION=\"${_X_VERSION}\"\n\n    echo \"_AEGIR_DB_USER=\\\"${_AEGIR_DB_USER}\\\"\"         >> ${_vBs}/${_filIncB}\n    echo \"_AEGIR_HOST=\\\"${_AEGIR_HOST}\\\"\"               >> ${_vBs}/${_filIncB}\n    echo \"_AEGIR_VERSION=\\\"${_AEGIR_VERSION}\\\"\"         >> ${_vBs}/${_filIncB}\n    echo \"_AEGIR_XTS_VRN=\\\"${_tRee}\\\"\"                  >> ${_vBs}/${_filIncB}\n    echo \"_BOA_REPO_GIT_URL=\\\"${_BOA_REPO_GIT_URL}\\\"\"   >> ${_vBs}/${_filIncB}\n    echo \"_BOA_REPO_NAME=\\\"${_BOA_REPO_NAME}\\\"\"         >> ${_vBs}/${_filIncB}\n    echo \"_BRANCH_BOA=\\\"${_BRANCH_BOA}\\\"\"               >> ${_vBs}/${_filIncB}\n    echo \"_BRANCH_PRN=\\\"${_BRANCH_PRN}\\\"\"               >> ${_vBs}/${_filIncB}\n    echo \"_DEBUG_MODE=\\\"${_DEBUG_MODE}\\\"\"               >> ${_vBs}/${_filIncB}\n    echo \"_DL_MODE=\\\"${_DL_MODE}\\\"\"                     >> ${_vBs}/${_filIncB}\n    echo \"_DOMAIN=\\\"${_THIS_FRONT}\\\"\"                   >> ${_vBs}/${_filIncB}\n    echo \"_DRUSH_VERSION=\\\"${_DRUSH_VERSION}\\\"\"         >> ${_vBs}/${_filIncB}\n    echo \"_ESC_PASS=\\\"${_ESC_PASS}\\\"\"                   >> ${_vBs}/${_filIncB}\n    echo \"_LOCAL_NETWORK_IP=\\\"${_LOCAL_NETWORK_IP}\\\"\"   >> ${_vBs}/${_filIncB}\n    echo \"_MY_OWNIP=\\\"${_MY_OWNIP}\\\"\"                   >> ${_vBs}/${_filIncB}\n    echo \"_NOW=\\\"${_NOW}\\\"\"                             >> ${_vBs}/${_filIncB}\n    echo \"_PHP_CLI_VERSION=\\\"${_PHP_CLI_VERSION}\\\"\"     >> ${_vBs}/${_filIncB}\n    echo \"_PHP_FPM_VERSION=\\\"${_PHP_FPM_VERSION}\\\"\"     >> ${_vBs}/${_filIncB}\n    echo \"_STRONG_PASSWORDS=\\\"${_STRONG_PASSWORDS}\\\"\"   >> ${_vBs}/${_filIncB}\n    echo \"_THIS_DB_HOST=\\\"${_THIS_DB_HOST}\\\"\"           >> ${_vBs}/${_filIncB}\n    echo \"_USE_MIR=\\\"${_USE_MIR}\\\"\"                     >> ${_vBs}/${_filIncB}\n    echo \"_xSrl=\\\"${_xSrl}\\\"\"                           >> ${_vBs}/${_filIncB}\n    echo 
\"_X_VERSION=\\\"${_X_VERSION}\\\"\"                 >> ${_vBs}/${_filIncB}\n\n    ###  Download Drupal core and modules\n    _master_download_for_local_build\n\n    ### Make sure that required web services are up and running\n    _mrun \"webserver up\"\n\n    ###\n    AegirSetupM=\"${_bldPth}/aegir/scripts/AegirSetupM.sh.txt\"\n    ###\n\n    touch /var/log/boa/aegir_install.log\n    chown aegir:aegir /var/log/boa/aegir_install.log\n\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      su -s /bin/bash - aegir -c \"/bin/bash ${AegirSetupM} ${_THIS_FRONT} \\\n        --http_service_type='nginx' \\\n        --aegir_db_host='${_THIS_DB_HOST}' \\\n        --client_email='${_MY_EMAIL}' -y -d\" \\\n      2>&1 | tee /var/log/boa/aegir_install.log\n    else\n      su -s /bin/bash - aegir -c \"/bin/bash ${AegirSetupM} ${_THIS_FRONT} \\\n        --http_service_type='nginx' \\\n        --aegir_db_host='${_THIS_DB_HOST}' \\\n        --client_email='${_MY_EMAIL}' -y\" \\\n      >/var/log/boa/aegir_install.log 2>&1\n    fi\n    wait\n\n    _php_cli_local_ini_update\n\n    if [ -e \"/opt/tmp/status-AegirSetupM-FAIL\" ]; then\n      _msg \"FATAL ERROR: AegirSetupM installer failed\"\n      _msg \"FATAL ERROR: Aborting Barracuda installer NOW!\"\n      _msg \"HINT: Please check /var/log/boa/aegir_install.log\"\n      _msg \"HINT: for more information on errors that occurred\"\n      touch /opt/tmp/status-Barracuda-FAIL\n      _clean_pid_exit _aegir_master_install_upgrade_d\n    fi\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      _THIS_HM_ROOT=$(cat /var/aegir/.drush/hm.alias.drushrc.php \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      if [ -e \"${_THIS_HM_ROOT}/sites/all\" ] \\\n        && [ ! 
-e \"${_THIS_HM_ROOT}/sites/all/libraries\" ]; then\n        mkdir -p \\\n          ${_THIS_HM_ROOT}/sites/all/{modules,themes,libraries} &> /dev/null\n      fi\n    fi\n    _U_HD=\"/var/aegir/.drush\"\n    if [ -e \"${_U_HD}/php.ini\" ]; then\n      chattr +i ${_U_HD}/php.ini\n    fi\n    su -s /bin/bash - aegir -c \"drush8 cc drush\" &> /dev/null\n    wait\n    rm -rf /var/aegir/.tmp/cache\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Running hosting-dispatch/hosting-tasks...\"\n    fi\n    su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-dispatch\" &> /dev/null\n    wait\n    sleep 5\n    su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-tasks --force\" &> /dev/null\n    wait\n    sleep 5\n    su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-tasks --force\" &> /dev/null\n    wait\n    sleep 5\n    su -s /bin/bash - aegir -c \"drush8 @hostmaster hosting-tasks --force\" &> /dev/null\n    wait\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      _THIS_HM_ROOT=$(cat /var/aegir/.drush/hm.alias.drushrc.php \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      if [ -e \"${_THIS_HM_ROOT}/sites/all\" ] \\\n        && [ ! 
-e \"${_THIS_HM_ROOT}/sites/all/libraries\" ]; then\n        mkdir -p \\\n          ${_THIS_HM_ROOT}/sites/all/{modules,themes,libraries} &> /dev/null\n      fi\n    fi\n    chown -R aegir:aegir ${_THIS_HM_ROOT}/sites/all &> /dev/null\n\n  ###--------------------###\n    if [ -e \"${_mtrInc}/nginx_vhost_common.conf\" ]; then\n      _DO_NOTHING=YES\n      [ -e \"/root/.force.reinstall.cnf\" ] && rm -f /root/.force.reinstall.cnf\n    else\n      _msg \"FATAL ERROR: Something went wrong, Ægir Master Instance not installed!\"\n      _msg \"HINT: Please check /var/log/boa/aegir_install.log for details, and then\"\n      _msg \"HINT: run the same install command again to complete installation.\"\n      _msg \"HINT: You can also force installation with an empty control file:\"\n      _msg \"      touch /root/.force.reinstall.cnf -- and then try again.\"\n      _clean_pid_exit _aegir_master_install_upgrade_e\n    fi\n\n  ###--------------------###\n    if [ ! -L \"${_mtrInc}/global.inc\" ] && [ -e \"${_mtrInc}/global.inc\" ]; then\n      mv -f ${_mtrInc}/global.inc \\\n        ${_mtrInc}/global.inc-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    fi\n    mkdir -p /data/conf\n    cp -af ${_locCnf}/global/global.inc /data/conf/global.inc\n    sed -i \"s/3600/${_SPEED_VALID_MAX}/g\" /data/conf/global.inc &> /dev/null\n    if [ -e \"${_mtrInc}\" ] \\\n      && [ ! 
-L \"${_mtrInc}/global.inc\" ] \\\n      && [ -e \"/data/conf/global.inc\" ]; then\n      ln -sfn /data/conf/global.inc ${_mtrInc}/global.inc\n    fi\n    if [ -e \"/etc/init.d/valkey-server\" ]; then\n      _valkey_password_update\n    elif [ -e \"/etc/init.d/redis-server\" ]; then\n      _redis_password_update\n    fi\n    _force_advanced_nginx_config\n    chmod 0711 ${_mtrInc} &> /dev/null\n    chmod 0711 /var/aegir/config &> /dev/null\n    find /var/aegir/host_master/*/profiles/* -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /var/aegir/host_master/*/profiles/* -type f -exec chmod 0644 {} \\; &> /dev/null\n    find /var/aegir/*/profiles/* -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /var/aegir/*/profiles/* -type f -exec chmod 0644 {} \\; &> /dev/null\n    chown -R aegir:aegir /var/aegir/.drush &> /dev/null\n    find /var/aegir/.drush -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /var/aegir/.drush -type f -exec chmod 0644 {} \\; &> /dev/null\n    chmod 0440 /var/aegir/.drush/*.php &> /dev/null\n    chmod 0711 /var/aegir/.drush &> /dev/null\n    cd /var/aegir\n    rm -f /etc/nginx/sites-available/default\n    rm -f /etc/nginx/sites-enabled/default\n    rm -f /etc/nginx/modules-enabled/*\n    if [ -e \"${_locCnf}/nginx/nginx.conf\" ]; then\n      mv -f /etc/nginx/nginx.conf /etc/nginx/nginx.conf-old &> /dev/null\n      cp -af ${_locCnf}/nginx/nginx.conf /etc/nginx/nginx.conf\n    fi\n    _mrun \"service nginx reload\"\n    _msg \"INFO: Ægir Master Instance installed\"\n  else\n    if [ \"${_tRee}\" = \"lts\" ]; then\n      _SERIES_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n      if [[ \"${_SERIES_TEST}\" =~ \"Barracuda ${_rLsn}-pro\" ]]; then\n        _msg \"ERROR: Your system has already been upgraded to ${_rLsn}-pro\"\n        _msg \"You cannot downgrade to a previous/older/lts BOA version\"\n        _msg \"Please use 'barracuda up-pro system' to upgrade this server\"\n        _msg \"Display all supported commands with: barracuda 
help\"\n        _msg \"Bye\"\n        _clean_pid_exit _aegir_master_install_upgrade_f\n      fi\n    fi\n    echo \" \"\n    if [ -e \"/root/.debug-barracuda-installer.cnf\" ] \\\n      || [ -e \"/root/.skip-aegir-master-upgrade.cnf\" ]; then\n      _SYSTEM_UP_ONLY=YES\n    fi\n    _if_to_do_fix\n    if [ \"${_DO_FIX}\" = \"YES\" ]; then\n      _msg \"INFO: Ægir Master Instance upgrade skipped!\"\n      echo \" \"\n      _msg \"NOTE! You must reboot the server and run barracuda upgrade\"\n      _msg \"NOTE! again to complete all system upgrades and to also\"\n      _msg \"NOTE! upgrade the Ægir Master Instance.\"\n      echo \" \"\n    elif [ \"${_SYSTEM_UP_ONLY}\" = \"YES\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Ægir Master Instance upgrade skipped\"\n      fi\n    else\n      if [ ! -e \"/run/mysqld/mysqld.pid\" ] \\\n        || [ ! -e \"/run/mysqld/mysqld.sock\" ]; then\n        _msg \"ALRT! ${_DB_SERVER} server not running properly!\"\n        _msg \"EXIT: We can't proceed and will exit now\"\n        _msg \"HINT: Please check these two log files for more information:\"\n        _msg \"INFO:   ${_LOG_INFO}\"\n        _msg \"ERRR:   ${_LOG_ERRR}\"\n        _msg \"HINT: (re)start ${_DB_SERVER} server, then run the installer again\"\n        _msg \"Bye\"\n        _clean_pid_exit _aegir_master_install_upgrade_g\n      fi\n      _php_cli_cron_update\n      _aegir_master_upgrade\n      _php_cli_cron_update\n    fi\n  fi\n}\n\n_if_upgrade_only_aegir_master() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_upgrade_only_aegir_master\"\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    if [ \"${_AEGIR_UPGRADE_ONLY}\" = \"YES\" ] \\\n      && [ \"${_SYSTEM_UP_ONLY}\" = \"NO\" ]; then\n      _php_cli_cron_update\n      _aegir_master_upgrade\n      _php_cli_cron_update\n      sleep 8\n      _mrun \"service nginx reload\"\n      _finale\n      exit 0\n    fi\n  fi\n}\n\n_aegir_master_display_login_link() {\n  if [ 
\"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _aegir_master_display_login_link\"\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    if [ \"${_EASY_SETUP}\" != \"LOCAL\" ]; then\n      _mrun \"bash /usr/sbin/apticron\"\n    fi\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _DO_NOTHING=YES\n    else\n      _AEGIR_LOGIN_URL=$(tail --lines=11 /var/log/boa/aegir_install.log | grep --text \"^http:\" 2>&1)\n      if [ ! -z \"${_AEGIR_LOGIN_URL}\" ]; then\n        echo \" \"\n        _msg \"INFO: Congratulations, Ægir Master has been installed successfully!\"\n#         _msg \"NOTE! Please wait 3 min before visiting Ægir at:\"\n#         echo \" \"\n#         _msg \"LINK: ${_AEGIR_LOGIN_URL}\"\n#         echo \" \"\n#         _msg \"NOTE! The initial one-time login link will no longer work.\"\n#         sleep 3\n#         _msg \"To access your Barracuda Ægir control panel after the procedure is finished...\"\n#         _msg \"...please generate a new link by running the following command:\"\n#         sleep 3\n#         echo \" \"\n#         echo \"  su -s /bin/bash aegir -c \\\"drush @hm uli\\\"\"\n#         echo \" \"\n        sleep 3\n      else\n        _msg \"ALRT! Something went wrong\"\n        _msg \"ALRT! Please check the install log for details:\"\n        _msg \"ALRT! /var/log/boa/aegir_install.log\"\n      fi\n    fi\n  fi\n  if [ ! -e \"${_pthLog}/cron_aegir_off.pid\" ]; then\n    touch ${_pthLog}/cron_aegir_off.pid\n  fi\n}\n\n"
  },
  {
    "path": "lib/functions/nginx.sh.inc",
    "content": "#\n# Update Nginx Config.\n_nginx_conf_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _nginx_conf_update\"\n  fi\n  if [ ! -e \"${_pthLog}/nginx-config-params-fixed-${_xSrl}-${_X_VERSION}.log\" ] \\\n    && [ -d \"/var/aegir\" ]; then\n    if [ -e \"${_locCnf}/nginx/nginx.conf\" ] \\\n      && [ -e \"/etc/nginx/nginx.conf\" ]; then\n      mv -f /etc/nginx/nginx.conf-* ${_vBs}/dragon/t/ &> /dev/null\n      mv -f /etc/nginx/mime.types-pre-* ${_vBs}/dragon/t/ &> /dev/null\n      mv -f /etc/nginx/fastcgi_params-pre-* ${_vBs}/dragon/t/ &> /dev/null\n      mv -f /etc/nginx/nginx.conf ${_vBs}/dragon/t/nginx.conf-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n      mv -f /etc/nginx/fastcgi_params ${_vBs}/dragon/t/fastcgi_params-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n      cp -af ${_locCnf}/nginx/nginx.conf /etc/nginx/nginx.conf\n      cp -af ${_locCnf}/nginx/fastcgi_params.txt /etc/nginx/fastcgi_params\n      touch ${_pthLog}/nginx-config-params-fixed-${_xSrl}-${_X_VERSION}.log\n    fi\n  fi\n  if [ -e \"${_mtrNgx}/pre.d/nginx_speed_purge.conf\" ]; then\n    rm -f ${_mtrNgx}/pre.d/nginx_speed_purge.conf\n  fi\n}\n\n#\n# Sub Force advanced Nginx configuration.\n_sub_force_advanced_nginx_config() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sub_force_advanced_nginx_config\"\n  fi\n  if [ -e \"/opt/php${_PHP_SV}/etc/php${_PHP_SV}-fpm.conf\" ]; then\n    sed -i \"s/127.0.0.1:.*;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\"             ${_mtrInc}/nginx_compact_include.conf &> /dev/null\n\n    sed -i \"s/127.0.0.1:.*;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\"             ${_mtrInc}/nginx_vhost_common.conf &> /dev/null\n    wait\n    sed -i \"s/data.*post.d/var\\/aegir\\/config\\/includes/g\"                         ${_mtrInc}/nginx_vhost_common.conf &> /dev/null\n    wait\n    sed -i \"s/unix:.*fpm.socket;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\"        ${_mtrInc}/nginx_vhost_common.conf &> 
/dev/null\n    wait\n    sed -i \"s/set.*user_socket.*/set \\$user_socket \\\"${_PHP_CN}\\\";/g\"              ${_mtrInc}/nginx_vhost_common.conf &> /dev/null\n\n    sed -i \"s/127.0.0.1:.*;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\"             ${_mtrTpl}/subdir.tpl.php &> /dev/null\n    wait\n    sed -i \"s/data.*post.d/var\\/aegir\\/config\\/includes/g\"                         ${_mtrTpl}/subdir.tpl.php &> /dev/null\n    wait\n    sed -i \"s/unix:.*fpm.socket;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\"        ${_mtrTpl}/subdir.tpl.php &> /dev/null\n    wait\n    sed -i \"s/set.*user_socket.*/set \\$user_socket \\\"${_PHP_CN}\\\";/g\"              ${_mtrTpl}/subdir.tpl.php &> /dev/null\n\n    sed -i \"s/127.0.0.1:.*;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\"             ${_mtrTpl}/Inc/vhost_include.tpl.php &> /dev/null\n    wait\n    sed -i \"s/data.*post.d/var\\/aegir\\/config\\/includes/g\"                         ${_mtrTpl}/Inc/vhost_include.tpl.php &> /dev/null\n    wait\n    sed -i \"s/unix:.*fpm.socket;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\"        ${_mtrTpl}/Inc/vhost_include.tpl.php &> /dev/null\n    wait\n    sed -i \"s/set.*user_socket.*/set \\$user_socket \\\"${_PHP_CN}\\\";/g\"              ${_mtrTpl}/Inc/vhost_include.tpl.php &> /dev/null\n  fi\n}\n\n#\n# Force advanced Nginx configuration.\n_force_advanced_nginx_config() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _force_advanced_nginx_config\"\n  fi\n  if [ ! 
-e \"/etc/ssl/private/nginx-wild-ssl.dhp\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Generating DH parameters, 2048 bit...\"\n    fi\n    _mrun \"openssl dhparam -out /etc/ssl/private/nginx-wild-ssl.dhp 2048\"\n    wait\n  fi\n  cp -af ${_locCnf}/nginx/nginx_compact_include.conf ${_mtrInc}/nginx_compact_include.conf\n  _validate_local_ip &> /dev/null\n  _sub_force_advanced_nginx_config\n  sed -i \"s/ 90;/ 180;/g\" ${_mtrNgx}/pre.d/*.conf &> /dev/null\n  wait\n  if [ \"${_NGINX_SPDY}\" = \"YES\" ]; then\n    sed -i \"s/:443;/:443 ssl;/g\" ${_mtrNgx}/pre.d/*.conf &> /dev/null\n    wait\n    sed -i \"s/:443;/:443 ssl;/g\" ${_mtrNgx}/vhost.d/* &> /dev/null\n    wait\n    sed -i \"s/:443 ssl spdy;/:443 ssl;/g\" ${_mtrNgx}/pre.d/*.conf &> /dev/null\n    wait\n    sed -i \"s/:443 ssl spdy;/:443 ssl;/g\" ${_mtrNgx}/vhost.d/* &> /dev/null\n    wait\n    sed -i \"s/:443 ssl http2;/:443 ssl;/g\" ${_mtrNgx}/pre.d/*.conf &> /dev/null\n    wait\n    sed -i \"s/:443 ssl http2;/:443 ssl;/g\" ${_mtrNgx}/vhost.d/* &> /dev/null\n    wait\n  fi\n  if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n    && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n    _SSL_BINARY=/usr/local/ssl3/bin/openssl\n  else\n    _SSL_BINARY=/usr/local/ssl/bin/openssl\n  fi\n  _SSL_ITD=$(${_SSL_BINARY} version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  if [ \"${_SSL_ITD}\" = \"${_OPENSSL_MODERN_VRN}\" ] \\\n    || [ \"${_SSL_ITD}\" = \"${_OPENSSL_EOL_VRN}\" ] \\\n    || [ \"${_SSL_ITD}\" = \"${_OPENSSL_LEGACY_VRN}\" ] \\\n    || [[ \"${_SSL_ITD}\" =~ \"1.1.0\" ]] \\\n    || [[ \"${_SSL_ITD}\" =~ \"1.0.2\" ]] \\\n    || [[ \"${_SSL_ITD}\" =~ \"1.0.1\" ]]; then\n    _PFS_READY=YES\n  else\n    _PFS_READY=NO\n  fi\n  if [ \"${_PFS_READY}\" = \"YES\" ] \\\n    && [ \"${_NGINX_FORWARD_SECRECY}\" = \"YES\" ]; then\n    _ALLOW_NGINX_FORWARD_SECRECY=YES\n    _SSL_PROTOCOLS=\"TLSv1.2 TLSv1.3;\"\n    
_SSL_CIPHERS=\"TLS_AES_128_GCM_SHA256:  \\\n      TLS_AES_256_GCM_SHA384:  \\\n      TLS_CHACHA20_POLY1305_SHA256:  \\\n      ECDHE-RSA-AES256-GCM-SHA384:  \\\n      ECDHE-ECDSA-AES256-GCM-SHA384:  \\\n      ECDHE-RSA-AES256-SHA384:  \\\n      ECDHE-ECDSA-AES256-SHA384:  \\\n      AES256-GCM-SHA384:  \\\n      ECDHE-ECDSA-CHACHA20-POLY1305:  \\\n      ECDHE-RSA-CHACHA20-POLY1305:  \\\n      DHE-RSA-AES256-CCM:  \\\n      DHE-RSA-AES256-CCM8:  \\\n      DHE-RSA-AES128-CCM:  \\\n      DHE-RSA-AES128-CCM8:  \\\n      ECDHE-RSA-AES128-GCM-SHA256:  \\\n      ECDHE-ECDSA-AES128-GCM-SHA256:  \\\n      DHE-RSA-AES128-GCM-SHA256:  \\\n      DHE-DSS-AES128-GCM-SHA256:  \\\n      kEDH+AESGCM:  \\\n      ECDHE-RSA-AES128-SHA256:  \\\n      ECDHE-ECDSA-AES128-SHA256:  \\\n      ECDHE-RSA-AES128-SHA:  \\\n      ECDHE-ECDSA-AES128-SHA:  \\\n      ECDHE-RSA-AES256-SHA:  \\\n      ECDHE-ECDSA-AES256-SHA:  \\\n      DHE-RSA-AES128-SHA256:  \\\n      DHE-RSA-AES128-SHA:  \\\n      DHE-DSS-AES128-SHA256:  \\\n      DHE-RSA-AES256-SHA256:  \\\n      DHE-DSS-AES256-SHA:  \\\n      DHE-RSA-AES256-SHA:  \\\n      AES128-GCM-SHA256:  \\\n      AES128-SHA256:  \\\n      AES256-SHA256:  \\\n      AES128-SHA:  \\\n      AES256-SHA:  \\\n      AES:  \\\n      \\!aNULL:  \\\n      \\!eNULL:  \\\n      \\!EXPORT:  \\\n      \\!DES:  \\\n      \\!RC4:  \\\n      \\!MD5:  \\\n      \\!PSK:  \\\n      \\!aECDH:  \\\n      \\!EDH-DSS-DES-CBC3-SHA:  \\\n      \\!EDH-RSA-DES-CBC3-SHA:  \\\n      \\!KRB5-DES-CBC3-SHA:  \\\n      \\!ECDHE-ECDSA-AES128-SHA256:  \\\n      \\!ECDHE-ECDSA-AES256-SHA384;\"\n    _SSL_CIPHERS=$(echo \"${_SSL_CIPHERS}\" | sed \"s/ //g\" 2>&1)\n  else\n    _ALLOW_NGINX_FORWARD_SECRECY=NO\n  fi\n  if [ \"${_ALLOW_NGINX_FORWARD_SECRECY}\" = \"YES\" ]; then\n    sed -i \"s/ssl_protocols .*/ssl_protocols                ${_SSL_PROTOCOLS}/g\" \\\n      ${_mtrNgx}/pre.d/*.conf &> /dev/null\n    wait\n    sed -i \"s/ssl_protocols .*/ssl_protocols                ${_SSL_PROTOCOLS}/g\" 
\\\n      ${_mtrNgx}/vhost.d/* &> /dev/null\n    wait\n    sed -i \"s/ssl_ciphers .*/ssl_ciphers                  ${_SSL_CIPHERS}/g\" \\\n      ${_mtrNgx}/pre.d/*.conf &> /dev/null\n    wait\n    sed -i \"s/ssl_ciphers .*/ssl_ciphers                  ${_SSL_CIPHERS}/g\" \\\n      ${_mtrNgx}/vhost.d/* &> /dev/null\n    wait\n  fi\n  if [ -e \"${_mtrInc}/nginx_vhost_common.conf\" ]; then\n    rm -f ${_mtrInc}/nginx_advanced_include.conf\n    rm -f ${_mtrInc}/nginx_legacy_include.conf\n    rm -f ${_mtrInc}/nginx_modern_include.conf\n    rm -f ${_mtrInc}/nginx_octopus_include.conf\n    rm -f ${_mtrInc}/nginx_simple_include.conf\n  fi\n  chown aegir:aegir ${_mtrInc}/*\n  chown aegir:aegir /var/aegir/.drush/sys/provision/http/Provision/Config/Nginx/*\n  if [ ! -e \"/data/conf/nginx_high_load_off.conf\" ]; then\n    mkdir -p /data/conf\n    cp -af ${_locCnf}/nginx/nginx_high_load_off.conf /data/conf/nginx_high_load_off.conf\n    chmod 644 /data/conf/nginx_high_load_off.conf &> /dev/null\n  fi\n  if [ -e \"/root/.giant_traffic.cnf\" ]; then\n    sed -i \"s|access_log .*|access_log             /var/log/nginx/access.log main buffer=32k;|g\" ${_mtrNgx}.conf &> /dev/null\n    wait\n  fi\n}\n\n#\n# Check for Linux/Cdorked.A malware and delete if discovered.\n_detect_cdorked_malware() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _detect_cdorked_malware\"\n  fi\n  if [ -x \"/usr/sbin/nginx\" ]; then\n    _C_DORKED=NO\n    _C_FILE=${_bldPth}/aegir/helpers/dump_cdorked_config.c\n    if [ -e \"${_C_FILE}\" ]; then\n      ### _msg \"INFO: Checking for Linux/Cdorked.A malware...\"\n      chattr -ai /usr/sbin/nginx\n      cd ${_vBs}\n      rm -rf /var/opt/foo_bar*\n      gcc -o /var/opt/foo_bar ${_bldPth}/aegir/helpers/dump_cdorked_config.c &> /dev/null\n      _C_DORKED_TEST=$(/var/opt/foo_bar 2>&1)\n      if [[ \"${_C_DORKED_TEST}\" =~ \"No shared memory matching Cdorked signature\" ]]; then\n        _DO_NOTHING=YES\n        ### _msg \"INFO: No Linux/Cdorked.A 
malware traces found - system clean\"\n      else\n        _msg \"ALRT! Your system is probably infected by Linux/Cdorked.A malware!\"\n        _msg \"ALRT! Please send the ${_vBs}/httpd_cdorked_config.bin file\"\n        _msg \"ALRT! to leveille@eset.com for investigation\"\n        rm -f $(which nginx)\n        _NGX_FORCE_REINSTALL=YES\n        _C_DORKED=YES\n      fi\n    fi\n  fi\n}\n\n#\n# Purge legacy Nginx config.\n_nginx_clean_legacy_config() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _nginx_clean_legacy_config\"\n  fi\n  if [ -e \"/var/aegir/config/server_master/nginx/pre.d/nginx_speed_purge.conf\" ]; then\n    rm -f /var/aegir/config/server_master/nginx/pre.d/nginx_speed_purge.conf\n  fi\n  _R_TEST=$(grep \"upload_progress\" /var/aegir/config/nginx.conf 2>&1)\n  if [[ \"${_R_TEST}\" =~ \"upload_progress\" ]]; then\n    sed -i \"s/.*upload_progress.*//g\" /var/aegir/config/nginx.conf\n  fi\n  if [ -d \"/data/u\" ]; then\n    _R_TEST=$(grep \"upload_progress_json_output\" /data/disk/*/config/includes/nginx_vhost_common.conf 2>&1)\n    if [[ \"${_R_TEST}\" =~ \"upload_progress_json_output\" ]]; then\n      sed -i \"s/.*upload_progress_json_output.*//g\" /data/disk/*/config/includes/nginx_vhost_common.conf\n      sed -i \"s/.*upload_progress_json_output.*//g\" /data/disk/*/config/server_master/nginx/subdir.d/*/*.conf\n      sed -i \"s/.*upload_progress_json_output.*//g\" /var/aegir/config/includes/nginx_vhost_common.conf\n      wait\n      sed -i \"s/.*report_uploads.*//g\" /data/disk/*/config/includes/nginx_vhost_common.conf\n      sed -i \"s/.*report_uploads.*//g\" /data/disk/*/config/server_master/nginx/subdir.d/*/*.conf\n      sed -i \"s/.*report_uploads.*//g\" /var/aegir/config/includes/nginx_vhost_common.conf\n      wait\n      sed -i \"s/.*track_uploads.*//g\" /data/disk/*/config/includes/nginx_vhost_common.conf\n      sed -i \"s/.*track_uploads.*//g\" /data/disk/*/config/server_master/nginx/subdir.d/*/*.conf\n      sed -i 
\"s/.*track_uploads.*//g\" /var/aegir/config/includes/nginx_vhost_common.conf\n      wait\n    fi\n  fi\n  _mrun \"service nginx reload\"\n}\n\n#\n# Install or upgrade Nginx.\n_nginx_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _nginx_install_upgrade\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Testing Nginx version...\"\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _NGINX_INSTALL_REQUIRED=NO\n  fi\n  if [ -x \"/usr/sbin/nginx\" ]; then\n    _NGINX_F_ITD=$(/usr/sbin/nginx -v 2>&1 | tr -d \"\\n\" \\\n      | cut -d\"/\" -f2 | awk '{ print $1}' 2>&1)\n    _NGINX_V_ITD=$(/usr/sbin/nginx -V 2>&1)\n    _NGINX_V_ITD=\"$(printf '%s' \"${_NGINX_V_ITD}\" | tr '\\n\\t' '  ')\"\n    if [ \"${_NGINX_F_ITD}\" = \"${_NGINX_VRN}\" ] \\\n      && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n      _NGINX_INSTALL_REQUIRED=NO\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}, OK\"\n      fi\n    elif [ \"${_NGINX_F_ITD}\" = \"${_NGINX_VRN}\" ] \\\n      && [ \"${_STATUS}\" = \"INIT\" ]; then\n      _NGINX_INSTALL_REQUIRED=NO\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}, OK\"\n      fi\n    elif [ \"${_NGINX_F_ITD}\" != \"${_NGINX_VRN}\" ]; then\n      _NGINX_INSTALL_REQUIRED=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}, upgrade required\"\n      fi\n    fi\n    if [ \"${_NGINX_F_ITD}\" = \"${_NGINX_VRN}\" ]; then\n      if ! 
printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- 'geoip'; then\n        _NGINX_INSTALL_REQUIRED=YES\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n          _msg \"INFO: Nginx forced rebuild to include geoip module\"\n        fi\n      fi\n      if printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- 'nginx-development-kit'; then\n        _NGINX_INSTALL_REQUIRED=YES\n        _msg \"INFO: Nginx rebuild required to avoid apt-get overwrite\"\n      fi\n      if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- ' --with-http_flv_module'; then\n        _NGINX_INSTALL_REQUIRED=YES\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n          _msg \"INFO: Nginx forced rebuild to include pseudo-streaming support\"\n        fi\n      fi\n      if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- ' --with-http_mp4_module'; then\n        _NGINX_INSTALL_REQUIRED=YES\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n          _msg \"INFO: Nginx forced rebuild to include pseudo-streaming support\"\n        fi\n      fi\n      if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n        || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n        || [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n        || [ \"${_OS_CODE}\" = \"beowulf\" ] \\\n        || [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n        || [ \"${_OS_CODE}\" = \"bullseye\" ] \\\n        || [ \"${_OS_CODE}\" = \"buster\" ]; then\n        _OPENSSL_NEW_VRN=\"${_OPENSSL_EOL_VRN}\"\n        if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n          _OPENSSL_NEW_VRN=\"${_OPENSSL_MODERN_VRN}\"\n        fi\n      elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n        if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n          chattr -i /root/.install.modern.openssl.cnf\n          rm -f /root/.install.modern.openssl.cnf\n        fi\n        
_OPENSSL_NEW_VRN=\"${_OPENSSL_EOL_VRN}\"\n      else\n        _OPENSSL_NEW_VRN=\"${_OPENSSL_LEGACY_VRN}\"\n      fi\n      if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- \"OpenSSL ${_OPENSSL_NEW_VRN}\"; then\n        _NGINX_INSTALL_REQUIRED=YES\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n          _msg \"INFO: Nginx forced rebuild to include latest OpenSSL version\"\n        fi\n      fi\n      if [ \"${_NGINX_HEADERS}\" = \"YES\" ]; then\n        if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- 'nginx-headers-more'; then\n          _NGINX_INSTALL_REQUIRED=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n            _msg \"INFO: Nginx forced rebuild to include Headers More support\"\n          fi\n        fi\n      fi\n      if [ \"${_NGINX_LDAP}\" = \"YES\" ]; then\n        if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- 'nginx-auth-ldap'; then\n          _NGINX_INSTALL_REQUIRED=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n            _msg \"INFO: Nginx forced rebuild to include LDAP support\"\n          fi\n        fi\n      fi\n      if [ \"${_PURGE_MODE}\" = \"ON\" ]; then\n        if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- 'purge'; then\n          _NGINX_INSTALL_REQUIRED=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n            _msg \"INFO: Nginx forced rebuild to include purge module\"\n          fi\n        fi\n      fi\n      if [ \"${_NGINX_NAXSI}\" = \"YES\" ]; then\n        if ! 
printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- 'naxsi'; then\n          _NGINX_INSTALL_REQUIRED=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n            _msg \"INFO: Nginx forced rebuild to include NAXSI module\"\n          fi\n        fi\n      fi\n      if [ \"${_NGINX_SPDY}\" = \"YES\" ]; then\n        if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- ' --with-http_v2_module'; then\n          _NGINX_INSTALL_REQUIRED=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n            _msg \"INFO: Nginx forced rebuild to include HTTP/2 support\"\n          fi\n        fi\n        if [ \"${_OPENSSL_NEW_VRN}\" != \"${_OPENSSL_LEGACY_VRN}\" ]; then\n          if ! printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- ' --with-http_v3_module'; then\n            _NGINX_INSTALL_REQUIRED=YES\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n              _msg \"INFO: Nginx forced rebuild to include HTTP/3 support\"\n            fi\n          fi\n          if ! 
printf '%s' \"${_NGINX_V_ITD}\" | grep -Fq -- 'enable-ktls'; then\n            _NGINX_INSTALL_REQUIRED=YES\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              _msg \"INFO: Installed Nginx ${_NGINX_F_ITD}\"\n              _msg \"INFO: Nginx forced rebuild to include kTLS support\"\n            fi\n          fi\n        fi\n      fi\n    fi\n  else\n    _NGINX_INSTALL_REQUIRED=YES\n  fi\n  _detect_cdorked_malware\n  if [ \"${_C_DORKED}\" = \"YES\" ]; then\n    _NGINX_INSTALL_REQUIRED=YES\n    _msg \"INFO: Nginx rebuild required to remove possible Linux/Cdorked.A malware\"\n  fi\n  if [ \"${_NGINX_INSTALL_REQUIRED}\" = \"YES\" ] \\\n    || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ] \\\n    || [ \"${_NGX_FORCE_REINSTALL}\" = \"YES\" ]; then\n    if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n      _msg \"INFO: Upgrading Nginx to ${_NGINX_VRN}...\"\n    else\n      _msg \"INFO: Installing Nginx ${_NGINX_VRN}...\"\n    fi\n    cd /var/opt; rm -rf nginx*\n    _get_dev_src \"nginx-${_NGINX_VRN}.tar.gz\"\n    sed -i \"s/nginx/${_CUSTOM_NAME}/g\" \\\n      /var/opt/nginx-${_NGINX_VRN}/src/core/nginx.h &> /dev/null\n    wait\n    if [ ! 
-z \"${_NGINX_EXTRA_CONF}\" ]; then\n      _NGINX_EXTRA=\"${_NGINX_EXTRA_CONF}\"\n    else\n      _NGINX_EXTRA=\"\"\n    fi\n    if [ \"${_NGINX_HEADERS}\" = \"YES\" ]; then\n      cd /var/opt\n      rm -rf /var/opt/nginx-headers-more*\n      _get_dev_src \"nginx-headers-more.tar.gz\"\n      if [ -e \"/var/opt/nginx-headers-more\" ]; then\n        _NGINX_EXTRA=\"--add-module=/var/opt/nginx-headers-more/ ${_NGINX_EXTRA}\"\n      fi\n    fi\n    if [ \"${_NGINX_LDAP}\" = \"YES\" ]; then\n      cd /var/opt\n      rm -rf /var/opt/nginx-auth-ldap*\n      _get_dev_src \"nginx-auth-ldap.tar.gz\"\n      if [ -e \"/var/opt/nginx-auth-ldap\" ]; then\n        _NGINX_EXTRA=\"--add-module=/var/opt/nginx-auth-ldap/ ${_NGINX_EXTRA}\"\n      fi\n    fi\n    if [ \"${_NGINX_NAXSI}\" = \"YES\" ]; then\n      cd /var/opt\n      rm -rf /var/opt/nginx-naxsi*\n      _get_dev_src \"nginx-naxsi.tar.gz\"\n      if [ -e \"/var/opt/nginx-naxsi\" ]; then\n        _NGINX_EXTRA=\"--add-module=/var/opt/nginx-naxsi/naxsi_src/ ${_NGINX_EXTRA}\"\n      fi\n    fi\n    if [ \"${_NGINX_SPDY}\" = \"YES\" ]; then\n      if [ \"${_OS_CODE}\" = \"none\" ]; then\n        _NGINX_EXTRA=\"--with-http_spdy_module ${_NGINX_EXTRA}\"\n      else\n        _NGINX_EXTRA=\"--with-http_v2_module ${_NGINX_EXTRA}\"\n      fi\n    fi\n    if [ \"${_OPENSSL_NEW_VRN}\" != \"${_OPENSSL_LEGACY_VRN}\" ]; then\n      _NGINX_EXTRA=\"--with-http_v3_module ${_NGINX_EXTRA}\"\n    fi\n    if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n      || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n      || [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n      || [ \"${_OS_CODE}\" = \"beowulf\" ] \\\n      || [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n      || [ \"${_OS_CODE}\" = \"bullseye\" ] \\\n      || [ \"${_OS_CODE}\" = \"buster\" ]; then\n      _OPENSSL_NEW_VRN=\"${_OPENSSL_EOL_VRN}\"\n      if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n        _OPENSSL_NEW_VRN=\"${_OPENSSL_MODERN_VRN}\"\n      fi\n    elif [ \"${_OS_CODE}\" = \"stretch\" ]; 
then\n      if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n        chattr -i /root/.install.modern.openssl.cnf\n        rm -f /root/.install.modern.openssl.cnf\n      fi\n      _OPENSSL_NEW_VRN=\"${_OPENSSL_EOL_VRN}\"\n    else\n      _OPENSSL_NEW_VRN=\"${_OPENSSL_LEGACY_VRN}\"\n    fi\n    if [ ! -e \"/var/opt/openssl-${_OPENSSL_NEW_VRN}\" ]; then\n      cd /var/opt\n      rm -rf \"/var/opt/openssl-${_OPENSSL_NEW_VRN}\" \"/var/opt/openssl-${_OPENSSL_NEW_VRN}.tar.gz\"\n      _get_dev_src \"openssl-${_OPENSSL_NEW_VRN}.tar.gz\"\n    fi\n    cd /var/opt/nginx-${_NGINX_VRN}\n    _mrun \"bash ./configure \\\n      --conf-path=/etc/nginx/nginx.conf \\\n      --error-log-path=/var/log/nginx/error.log \\\n      --group=www-data \\\n      --http-client-body-temp-path=/var/lib/nginx/body \\\n      --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \\\n      --http-log-path=/var/log/nginx/access.log \\\n      --http-proxy-temp-path=/var/lib/nginx/proxy \\\n      --http-scgi-temp-path=/var/lib/nginx/scgi \\\n      --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \\\n      --lock-path=/var/lock/nginx.lock \\\n      --pid-path=/run/nginx.pid \\\n      --prefix=/usr \\\n      --sbin-path=/usr/sbin/nginx \\\n      --user=www-data \\\n      --with-compat \\\n      --with-file-aio \\\n      --with-http_auth_request_module \\\n      --with-http_dav_module \\\n      --with-http_flv_module \\\n      --with-http_geoip_module \\\n      --with-http_gunzip_module \\\n      --with-http_gzip_static_module \\\n      --with-http_mp4_module \\\n      --with-http_realip_module \\\n      --with-http_secure_link_module \\\n      --with-http_slice_module \\\n      --with-http_ssl_module \\\n      --with-http_stub_status_module \\\n      --with-http_sub_module \\\n      --with-stream \\\n      --with-stream_realip_module \\\n      --with-stream_ssl_module \\\n      --with-stream_ssl_preread_module \\\n      --with-threads \\\n      --without-http_scgi_module \\\n      --without-http_uwsgi_module 
\\\n      --without-mail_imap_module \\\n      --without-mail_pop3_module \\\n      --without-mail_smtp_module \\\n      --with-openssl=/var/opt/openssl-${_OPENSSL_NEW_VRN} \\\n      --with-openssl-opt='no-tests no-ssl3 enable-ktls zlib-dynamic enable-ec_nistp_64_gcc_128' \\\n      --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' \\\n      --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' \\\n      --with-debug ${_NGINX_EXTRA}\"\n    _mrun \"make -j $(nproc)\"\n    _mrun \"make install\"\n    ldconfig 2> /dev/null\n    _nginx_clean_legacy_config\n    _mrun \"service nginx restart\"\n    _NGINX_INSTALL_REQUIRED=NO\n  fi\n  if [ ! -L \"/usr/bin/nginx\" ]; then\n    ln -sfn /usr/sbin/nginx /usr/bin/nginx\n  fi\n}\n\n#\n# Fix multi-IP cron access.\n_master_fix_multi_ip_cron_access() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _master_fix_multi_ip_cron_access\"\n  fi\n  [ -e \"/root/.local.IP.list.allow\" ] && rm -f /root/.local.IP.list.allow\n  for _IP in `cat /root/.local.IP.list \\\n    | cut -d '#' -f1 \\\n    | sort \\\n    | uniq \\\n    | tr -d '[:space:]'`;do echo \"  allow        ${_IP};\" >> \\\n      /root/.local.IP.list.allow;done\n  echo \"  allow 127.0.0.1;\" >> /root/.local.IP.list.allow\n  echo \"  deny all;\" >> /root/.local.IP.list.allow\n\n  sed -i \"s/allow 127.0.0.1;//g; s/ *$//g; /^$/d\" \\\n    ${_mtrTpl}/Inc/vhost_include.tpl.php\n  wait\n  sed -i '/deny all;/ {r /root/.local.IP.list.allow\nd;};' ${_mtrTpl}/Inc/vhost_include.tpl.php\n  wait\n\n  sed -i \"s/allow 127.0.0.1;//g; s/ *$//g; /^$/d\" \\\n    ${_mtrTpl}/subdir.tpl.php\n  wait\n  sed -i '/deny all;/ {r /root/.local.IP.list.allow\nd;};' ${_mtrTpl}/subdir.tpl.php\n  wait\n\n  sed -i \"s/allow 127.0.0.1;//g; s/ *$//g; /^$/d\" \\\n    ${_mtrInc}/nginx_vhost_common.conf\n  wait\n  sed -i '/deny all;/ {r /root/.local.IP.list.allow\nd;};' ${_mtrInc}/nginx_vhost_common.conf\n  
wait\n}\n\n_nginx_initd_check() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _nginx_initd_check\"\n  fi\n  _X_INIT_TEST_A=\n  _X_INIT_TEST_B=\n  if [ -e \"/etc/init.d/nginx\" ]; then\n    _X_INIT_TEST_A=$(cat /etc/init.d/nginx | grep ULIMIT 2>&1)\n    _X_INIT_TEST_B=$(cat /etc/init.d/nginx | grep Giedymin 2>&1)\n  fi\n  if [[ \"${_X_INIT_TEST_A}\" =~ ULIMIT ]] && [[ \"${_X_INIT_TEST_B}\" =~ Giedymin ]]; then\n    chmod 755 /etc/init.d/nginx &> /dev/null\n    _mrun \"update-rc.d nginx defaults\"\n  else\n    mv -f /etc/init.d/nginx \\\n      ${_vBs}/nginx-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    cp -af ${_locCnf}/nginx/nginx /etc/init.d/nginx\n    chmod 755 /etc/init.d/nginx &> /dev/null\n    _mrun \"update-rc.d nginx defaults\"\n    _mrun \"service nginx start\"\n  fi\n}\n\n_nginx_mime_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _nginx_mime_check_fix\"\n  fi\n  if [ -e \"${_locCnf}/nginx/mime.types\" ]; then\n    mv -f /etc/nginx/mime.types \\\n      /etc/nginx/mime.types-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    cp -af ${_locCnf}/nginx/mime.types /etc/nginx/mime.types\n    if [ ! -L \"/var/www/nginx-default/index.html\" ] \\\n      && [ ! -L \"/var/www/nginx-default/under_construction.jpg\" ]; then\n      mkdir -p /var/www/nginx-default\n      mv -f /var/www/nginx-default/index.html \\\n        /var/www/nginx-default/index.html-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n      cp -af ${_locCnf}/tpl/uc.html /var/www/nginx-default/index.html\n      cp -af ${_locCnf}/tpl/under_construction.jpg \\\n        /var/www/nginx-default/under_construction.jpg\n    fi\n    rm -f /etc/nginx/sites-available/default\n    rm -f /etc/nginx/sites-enabled/default\n    rm -f /etc/nginx/modules-enabled/*\n    if [ ! 
-e \"/root/.proxy.cnf\" ]; then\n      _mrun \"service nginx reload\"\n    fi\n  fi\n}\n\n_nginx_wildcard_ssl_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _nginx_wildcard_ssl_install\"\n  fi\n  _WILD_SSL_VHOST=\"${_mtrNgx}/pre.d/nginx_wild_ssl.conf\"\n  if [ ! -e \"${_WILD_SSL_VHOST}\" ]; then\n    _msg \"INFO: Installing default SSL Wildcard Nginx Proxy...\"\n    _validate_public_ip &> /dev/null\n    _validate_xtras_ip &> /dev/null\n    openssl req -x509 -nodes -sha256 -days 7300 \\\n      -subj \"/C=US/ST=New York/O=Aegir/OU=Cloud/L=New York/CN=*.${_THISHOST}\" \\\n      -newkey rsa:2048 \\\n      -keyout /etc/ssl/private/nginx-wild-ssl.key \\\n      -out /etc/ssl/private/nginx-wild-ssl.crt -batch 2> /dev/null\n    cp -af ${_locCnf}/nginx/nginx_wild_ssl.conf ${_WILD_SSL_VHOST}\n    sed -i \"s/127.0.0.1:80/localhost:80/g\" ${_WILD_SSL_VHOST} &> /dev/null\n    sed -i \"s/127.0.0.1:443/${_XTRAS_THISHTIP}:443/g\" ${_WILD_SSL_VHOST} &> /dev/null\n    mkdir -p /data/conf\n    if [ -e \"${_locCnf}/global/global.inc\" ]; then\n      cp -af ${_locCnf}/global/global.inc /data/conf/global.inc\n    fi\n    if [ -e \"${_mtrInc}\" ] \\\n      && [ ! -L \"${_mtrInc}/global.inc\" ] \\\n      && [ -e \"/data/conf/global.inc\" ]; then\n      ln -sfn /data/conf/global.inc ${_mtrInc}/global.inc\n    fi\n    sed -i \"s/3600/${_SPEED_VALID_MAX}/g\" /data/conf/global.inc &> /dev/null\n    if [ -e \"/etc/init.d/valkey-server\" ]; then\n      _valkey_password_update\n    elif [ -e \"/etc/init.d/redis-server\" ]; then\n      _redis_password_update\n    fi\n    _mrun \"service nginx restart\"\n  else\n    _WILD_SSL_PROXY_UP=NO\n    _RSPRT_TEST=$(grep \"quic reuseport\" ${_WILD_SSL_VHOST} 2>&1)\n    if [[ ! 
\"${_RSPRT_TEST}\" =~ \"quic reuseport\" ]]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: nginx_wild_ssl.conf proxy needs quic reuseport update...\"\n      fi\n      _WILD_SSL_PROXY_UP=YES\n    fi\n    _HTTP3_TEST=$(grep http3 ${_WILD_SSL_VHOST} 2>&1)\n    if [[ ! \"${_HTTP3_TEST}\" =~ \"http3\" ]]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: nginx_wild_ssl.conf proxy needs http3 update...\"\n      fi\n      _WILD_SSL_PROXY_UP=YES\n    fi\n    _WILDCARD_SSL_TEST=$(grep \"localhost:80\" ${_WILD_SSL_VHOST} 2>&1)\n    if [[ ! \"${_WILDCARD_SSL_TEST}\" =~ \"localhost:80\" ]]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: nginx_wild_ssl.conf proxy needs port update...\"\n      fi\n      _WILD_SSL_PROXY_UP=YES\n    fi\n    if [ \"${_WILD_SSL_PROXY_UP}\" = \"YES\" ]; then\n      _validate_public_ip &> /dev/null\n      _validate_xtras_ip &> /dev/null\n      cp -af ${_locCnf}/nginx/nginx_wild_ssl.conf ${_WILD_SSL_VHOST}\n      sed -i \"s/127.0.0.1:80/localhost:80/g\" ${_WILD_SSL_VHOST}\n      wait\n      sed -i \"s/127.0.0.1:443/${_XTRAS_THISHTIP}:443/g\" ${_WILD_SSL_VHOST}\n    fi\n  fi\n}\n\n_nginx_config_update_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _nginx_config_update_fix\"\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _mrun \"service cron start\"\n    _force_advanced_nginx_config\n    sleep 8\n    _mrun \"service nginx restart\"\n  else\n    if [ -e \"/var/aegir/config\" ]; then\n      sed -i \"s/.*ssl .*on;//g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*ssl .*on;//g\" /var/aegir/config/server_*/nginx/vhost.d/adminer.* &> /dev/null\n      wait\n      sed -i \"s/.*ssl .*on;//g\" /var/aegir/config/server_*/nginx/vhost.d/cgp.* &> /dev/null\n      wait\n      sed -i \"s/.*ssl .*on;//g\" /var/aegir/config/server_*/nginx/vhost.d/chive.* &> /dev/null\n      wait\n      sed -i \"s/.*ssl .*on;//g\" 
/var/aegir/config/server_*/nginx/vhost.d/sqlbuddy.* &> /dev/null\n      wait\n      sed -i \"s/.*listen .*\\*:80;/  listen  \\*:80;/g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*listen .*\\*:443/  listen  \\*:443/g\" /var/aegir/config/server_*/nginx/vhost.d/adminer.* &> /dev/null\n      wait\n      sed -i \"s/.*listen .*\\*:443/  listen  \\*:443/g\" /var/aegir/config/server_*/nginx/vhost.d/cgp.* &> /dev/null\n      wait\n      sed -i \"s/.*listen .*\\*:443/  listen  \\*:443/g\" /var/aegir/config/server_*/nginx/vhost.d/chive.* &> /dev/null\n      wait\n      sed -i \"s/.*listen .*\\*:443/  listen  \\*:443/g\" /var/aegir/config/server_*/nginx/vhost.d/sqlbuddy.* &> /dev/null\n      wait\n      sed -i \"s/SSLv3 TLSv1;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/SSLv3 TLSv1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/TLSv1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      if [ -d \"/data/u\" ]; then\n        sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /data/disk/*/config/server_*/nginx/vhost.d/*\n      fi\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx.conf\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/vhost.d/*\n      wait\n      sed -i \"s/.*ssl .*on;//g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/.*gzip_vary .*//g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/.*gzip_vary .*//g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*proxy_buffer_size .*//g\" 
/var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/.*proxy_buffer_size .*//g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*proxy_buffers .*//g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/.*proxy_buffers .*//g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*proxy_busy_buffers_size .*//g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/.*proxy_busy_buffers_size .*//g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*proxy_temp_file_write_size .*//g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/.*proxy_temp_file_write_size .*//g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*proxy_buffering .*//g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/.*proxy_buffering .*//g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/proxy_redirect .*/proxy_redirect             off;\\n    gzip_vary                  off;\\n    proxy_buffering            off;/g\" \\\n        /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/proxy_redirect .*/proxy_redirect             off;\\n    gzip_vary                  off;\\n    proxy_buffering            off;/g\" \\\n        /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*ssl_stapling .*//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*ssl_stapling_verify .*//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*resolver .*//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*resolver_timeout .*//g\" 
/var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/.*http2.*on;//g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/ssl_prefer_server_ciphers .*/ssl_prefer_server_ciphers on;\\n  http2 on;/g\" /var/aegir/config/server_*/nginx/pre.d/*ssl_proxy.conf &> /dev/null\n      wait\n      sed -i \"s/ *$//g; /^$/d\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/ *$//g; /^$/d\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/add_header Vary .*//g\" /var/aegir/config/server_*/nginx.conf &> /dev/null\n      wait\n    fi\n    if [ \"${_NGINX_SPDY}\" = \"YES\" ]; then\n      sed -i \"s/:443;/:443 ssl;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/:443;/:443 ssl;/g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/:443 ssl spdy;/:443 ssl;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/:443 ssl spdy;/:443 ssl;/g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/:443 ssl http2;/:443 ssl;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/:443 ssl http2;/:443 ssl;/g\" /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n    fi\n    if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n      && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n      _SSL_BINARY=/usr/local/ssl3/bin/openssl\n    else\n      _SSL_BINARY=/usr/local/ssl/bin/openssl\n    fi\n    _SSL_ITD=$(${_SSL_BINARY} version 2>&1 | tr -d \"\\n\" | cut -d\" \" -f2 | awk '{ print $1}' 2>&1)\n    if [ \"${_SSL_ITD}\" = \"${_OPENSSL_MODERN_VRN}\" ] \\\n      || [ \"${_SSL_ITD}\" = \"${_OPENSSL_EOL_VRN}\" ] \\\n      || [ \"${_SSL_ITD}\" = \"${_OPENSSL_LEGACY_VRN}\" ] \\\n      || [[ \"${_SSL_ITD}\" =~ \"1.1.0\" ]] \\\n      || [[ 
\"${_SSL_ITD}\" =~ \"1.0.2\" ]] \\\n      || [[ \"${_SSL_ITD}\" =~ \"1.0.1\" ]]; then\n      _PFS_READY=YES\n    else\n      _PFS_READY=NO\n    fi\n    if [ \"${_PFS_READY}\" = \"YES\" ] \\\n      && [ \"${_NGINX_FORWARD_SECRECY}\" = \"YES\" ]; then\n      _ALLOW_NGINX_FORWARD_SECRECY=YES\n      _SSL_PROTOCOLS=\"TLSv1.2 TLSv1.3;\"\n      _SSL_CIPHERS=\"TLS_AES_128_GCM_SHA256:  \\\n        TLS_AES_256_GCM_SHA384:  \\\n        TLS_CHACHA20_POLY1305_SHA256:  \\\n        ECDHE-RSA-AES256-GCM-SHA384:  \\\n        ECDHE-ECDSA-AES256-GCM-SHA384:  \\\n        ECDHE-RSA-AES256-SHA384:  \\\n        ECDHE-ECDSA-AES256-SHA384:  \\\n        AES256-GCM-SHA384:  \\\n        ECDHE-ECDSA-CHACHA20-POLY1305:  \\\n        ECDHE-RSA-CHACHA20-POLY1305:  \\\n        DHE-RSA-AES256-CCM:  \\\n        DHE-RSA-AES256-CCM8:  \\\n        DHE-RSA-AES128-CCM:  \\\n        DHE-RSA-AES128-CCM8:  \\\n        ECDHE-RSA-AES128-GCM-SHA256:  \\\n        ECDHE-ECDSA-AES128-GCM-SHA256:  \\\n        DHE-RSA-AES128-GCM-SHA256:  \\\n        DHE-DSS-AES128-GCM-SHA256:  \\\n        kEDH+AESGCM:  \\\n        ECDHE-RSA-AES128-SHA256:  \\\n        ECDHE-ECDSA-AES128-SHA256:  \\\n        ECDHE-RSA-AES128-SHA:  \\\n        ECDHE-ECDSA-AES128-SHA:  \\\n        ECDHE-RSA-AES256-SHA:  \\\n        ECDHE-ECDSA-AES256-SHA:  \\\n        DHE-RSA-AES128-SHA256:  \\\n        DHE-RSA-AES128-SHA:  \\\n        DHE-DSS-AES128-SHA256:  \\\n        DHE-RSA-AES256-SHA256:  \\\n        DHE-DSS-AES256-SHA:  \\\n        DHE-RSA-AES256-SHA:  \\\n        AES128-GCM-SHA256:  \\\n        AES128-SHA256:  \\\n        AES256-SHA256:  \\\n        AES128-SHA:  \\\n        AES256-SHA:  \\\n        AES:  \\\n        \\!aNULL:  \\\n        \\!eNULL:  \\\n        \\!EXPORT:  \\\n        \\!DES:  \\\n        \\!RC4:  \\\n        \\!MD5:  \\\n        \\!PSK:  \\\n        \\!aECDH:  \\\n        \\!EDH-DSS-DES-CBC3-SHA:  \\\n        \\!EDH-RSA-DES-CBC3-SHA:  \\\n        \\!KRB5-DES-CBC3-SHA:  \\\n        \\!ECDHE-ECDSA-AES128-SHA256:  \\\n        
\\!ECDHE-ECDSA-AES256-SHA384;\"\n      _SSL_CIPHERS=$(echo \"${_SSL_CIPHERS}\" | sed \"s/ //g\" 2>&1)\n    else\n      _ALLOW_NGINX_FORWARD_SECRECY=NO\n    fi\n    if [ \"${_ALLOW_NGINX_FORWARD_SECRECY}\" = \"YES\" ]; then\n      sed -i \"s/ssl_protocols .*/ssl_protocols                ${_SSL_PROTOCOLS}/g\" \\\n        /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/ssl_protocols .*/ssl_protocols                ${_SSL_PROTOCOLS}/g\" \\\n        /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/ssl_ciphers .*/ssl_ciphers                  ${_SSL_CIPHERS}/g\" \\\n        /var/aegir/config/server_*/nginx/pre.d/*.conf &> /dev/null\n      wait\n      sed -i \"s/ssl_ciphers .*/ssl_ciphers                  ${_SSL_CIPHERS}/g\" \\\n        /var/aegir/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n    fi\n\n    ###\n    ### Nginx: Convert all vhosts to wildcard mode on Barracuda upgrade\n    ### to avoid extended downtime until all Octopus instances will receive\n    ### full upgrade, if IP based listen directive was used before.\n    ###\n    if [ -e \"/var/aegir\" ]; then\n      sed -i \"s/.*listen.*127.0.0.1:80;.*//g\" \\\n        /var/aegir/config/server_*/nginx.conf &> /dev/null\n      wait\n      sed -i \"s/listen .*\\*:80;/listen  \\*:80;/g\" \\\n        /var/aegir/config/server_*/nginx.conf &> /dev/null\n      wait\n    fi\n    if [ -d \"/data/u\" ]; then\n      sed -i \"s/.*listen.*127.0.0.1:80;.*//g\" \\\n        /data/disk/*/config/server_*/nginx.conf &> /dev/null\n      wait\n      sed -i \"s/listen .*\\*:80;/listen  \\*:80;/g\" \\\n        /data/disk/*/config/server_*/nginx.conf &> /dev/null\n      wait\n      sed -i \"s/.*listen .*\\*:80;/  listen  \\*:80;/g\" \\\n        /data/disk/*/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/.*listen .*\\*:443/  listen  \\*:443/g\" \\\n        /data/disk/*/config/server_*/nginx/vhost.d/* &> /dev/null\n      
wait\n      sed -i \"s/.*ssl .*on;//g\" \\\n        /data/disk/*/config/server_*/nginx/vhost.d/* &> /dev/null\n      wait\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /data/disk/*/config/server_*/nginx/vhost.d/*\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx.conf\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/pre.d/*.conf\n      sed -i \"s/TLSv1.1 TLSv1.2 TLSv1.3;/TLSv1.2 TLSv1.3;/g\" /var/aegir/config/server_*/nginx/vhost.d/*\n      wait\n    fi\n\n    if [ -d \"/data/u\" ]; then\n      for _OCT in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n        _pthPrN=\"provision/http/Provision/Config/Nginx\"\n        if [ -e \"${_OCT}/.drush/sys/drush_make\" ]; then\n          sed -i \"s/\\!empty.*';/'*';/g\" \\\n            ${_OCT}/.drush/sys/${_pNx}/server.tpl.php &> /dev/null\n          wait\n          sed -i \"s/\\!empty.*';/'*';/g\" \\\n            ${_OCT}/.drush/sys/${_pNx}/vhost.tpl.php &> /dev/null\n          wait\n          sed -i \"s/\\!empty.*';/'*';/g\" \\\n            ${_OCT}/.drush/sys/${_pNx}/vhost_disabled.tpl.php &> /dev/null\n          wait\n        elif [ -e \"${_OCT}/.drush/drush_make\" ]; then\n          sed -i \"s/\\!empty.*';/'*';/g\" \\\n            ${_OCT}/.drush/${_pNx}/server.tpl.php &> /dev/null\n          wait\n          sed -i \"s/\\!empty.*';/'*';/g\" \\\n            ${_OCT}/.drush/${_pNx}/vhost.tpl.php &> /dev/null\n          wait\n          sed -i \"s/\\!empty.*';/'*';/g\" \\\n            ${_OCT}/.drush/${_pNx}/vhost_disabled.tpl.php &> /dev/null\n          wait\n        fi\n      done\n    fi\n\n    ###\n    ### Delete any ghost, outdated or broken config includes and vhosts\n    ### in all Octopus instances which could break Nginx restart\n    ###\n    if [ -d \"/data/u\" ]; then\n      for _File in $(find /data/disk/*/config/includes/ -type f -exec grep -l will_expire_in {} +); do\n        rm -f \"${_File}\"\n  
    done\n      for Vght in `ls /data/disk/*/log/CANCELLED 2> /dev/null \\\n        | cut -d\"/\" -f4 \\\n        | awk '{ print $1}'`; do\n        rm -f /data/disk/$Vght/config/server_*/nginx/vhost.d/*\n      done\n      _wildPth=\"/data/disk/*/.drush/sys/provision/http/Provision/Service/http/*.conf\"\n      sed -i \"s/OctopusMicroNoCacheID/NoCacheID/g\" ${_wildPth} &> /dev/null\n      wait\n      sed -i \"s/OctopusNCookie/AegirCookie/g\"      ${_wildPth} &> /dev/null\n      wait\n      sed -i \"s/OctopusNoCacheID/NoCacheID/g\"      ${_wildPth} &> /dev/null\n      wait\n    fi\n    if [ -e \"/var/aegir\" ]; then\n      _wildPth=\"/var/aegir/.drush/sys/provision/http/Provision/Service/http/*.conf\"\n      sed -i \"s/OctopusMicroNoCacheID/NoCacheID/g\" ${_wildPth} &> /dev/null\n      wait\n      sed -i \"s/OctopusNCookie/AegirCookie/g\"      ${_wildPth} &> /dev/null\n      wait\n      sed -i \"s/OctopusNoCacheID/NoCacheID/g\"      ${_wildPth} &> /dev/null\n      wait\n      sed -i \"s/60/180/g\" /var/aegir/config/server_*/nginx.conf  &> /dev/null\n      wait\n      sed -i \"s/300/180/g\" /var/aegir/config/server_*/nginx.conf &> /dev/null\n      wait\n    fi\n    _validate_public_ip &> /dev/null\n    _CRON_IP=${_THISHTIP//[^0-9.]/}\n    if [ ! -e \"/root/.local.IP.list\" ]; then\n      rm -f /root/.tmp.IP.list*\n      rm -f /root/.local.IP.list*\n      for _IP in `hostname -I`;do echo ${_IP} >> /root/.tmp.IP.list;done\n      for _IP in `cat /root/.tmp.IP.list \\\n        | sort \\\n        | uniq`; do\n        echo \"${_IP} # local IP address\" >> /root/.local.IP.list\n      done\n      rm -f /root/.tmp.IP.list*\n    fi\n    _IP_IF_PRESENT=$(grep \"${_CRON_IP}\" /root/.local.IP.list 2>&1)\n    if [[ \"${_IP_IF_PRESENT}\" =~ \"${_CRON_IP}\" ]]; then\n      _IP_PRESENT=YES\n    else\n      _IP_PRESENT=NO\n    fi\n    if [ ! 
-z \"${_CRON_IP}\" ] \\\n      && [ \"${_IP_PRESENT}\" = \"YES\" ] \\\n      && [ -e \"/root/.local.IP.list\" ]; then\n      _master_fix_multi_ip_cron_access\n    fi\n    _mrun \"service nginx reload\"\n  fi\n}\n"
  },
  {
    "path": "lib/functions/php.sh.inc",
    "content": "#\n# Fix php.ini files to remove ionCube\n_fix_php_ini_ioncube() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_ioncube $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    if [ \"${_PHP_IONCUBE}\" = \"NO\" ]; then\n      _IONCUBE_INI_TEST=$(grep \"ioncube_loader\" ${_THIS_FILE} 2>&1)\n      if [[ \"${_IONCUBE_INI_TEST}\" =~ \"ioncube_loader\" ]]; then\n        sed -i \"s/.*ioncube_loader.*//g\" ${_THIS_FILE} &> /dev/null\n        wait\n      fi\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to remove jsmin.so\n_remove_php_ini_jsmin() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _remove_php_ini_jsmin $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _JSMIN_INI_TEST=$(grep \"^extension=jsmin.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_JSMIN_INI_TEST}\" =~ \"extension=jsmin.so\" ]]; then\n      sed -i \"s/.*jsmin.*//g\" ${_THIS_FILE} &> /dev/null\n      wait\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to remove suhosin.so\n_remove_php_ini_suhosin() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _remove_php_ini_suhosin $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _SUHOSIN_INI_TEST=$(grep \"^extension=suhosin.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_SUHOSIN_INI_TEST}\" =~ \"extension=suhosin.so\" ]]; then\n      sed -i \"s/.*suhosin.*//g\" ${_THIS_FILE} &> /dev/null\n      wait\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add mailparse.so\n_fix_php_ini_mailparse() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_mailparse $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MAILPARSE_INI_TEST=$(grep \"^extension=mailparse.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MAILPARSE_INI_TEST}\" =~ \"extension=mailparse.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=mailparse.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add yaml.so\n_fix_php_ini_yaml() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: 
_fix_php_ini_yaml $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _YAML_INI_TEST=$(grep \"^extension=yaml.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_YAML_INI_TEST}\" =~ \"extension=yaml.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=yaml.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add jsmin.so\n_add_php_ini_jsmin() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _add_php_ini_jsmin $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _JSMIN_INI_TEST=$(grep \"^extension=jsmin.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_JSMIN_INI_TEST}\" =~ \"extension=jsmin.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=jsmin.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add twig.so\n_fix_php_ini_twig() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_twig $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _TWIG_INI_TEST=$(grep \"^extension=twig.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_TWIG_INI_TEST}\" =~ \"extension=twig.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=twig.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add redis.so\n_fix_php_ini_redis() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_redis $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _REDIS_INI_TEST=$(grep \"^extension=redis.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_REDIS_INI_TEST}\" =~ \"extension=redis.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=redis.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add mcrypt.so\n_fix_php_ini_mcrypt() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_mcrypt $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MCRYPT_INI_TEST=$(grep \"^extension=mcrypt.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MCRYPT_INI_TEST}\" =~ \"extension=mcrypt.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      
echo \"extension=mcrypt.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add imagick.so\n_fix_php_ini_imagick() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_imagick $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _IMAGICK_INI_TEST=$(grep \"^extension=imagick.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_IMAGICK_INI_TEST}\" =~ \"extension=imagick.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=imagick.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to remove mcrypt.so\n_remove_php_ini_mcrypt() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _remove_php_ini_mcrypt $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MCRYPT_INI_TEST=$(grep \"^extension=mcrypt.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MCRYPT_INI_TEST}\" =~ \"extension=mcrypt.so\" ]]; then\n      sed -i \"s/.*mcrypt.*//g\" ${_THIS_FILE} &> /dev/null\n      wait\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add apcu.so\n_fix_php_ini_apcu() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_apcu $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _APCU_INI_TEST=$(grep \"^apc.shm_size\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_APCU_INI_TEST}\" =~ \"apc.shm_size\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \" \"                  >> ${_THIS_FILE}\n      echo \"; APCu\"             >> ${_THIS_FILE}\n      echo \"extension=apcu.so\"  >> ${_THIS_FILE}\n      echo \"apc.enable_cli=1\"   >> ${_THIS_FILE}\n      echo \"apc.gc_ttl=300\"     >> ${_THIS_FILE}\n      echo \"apc.shm_segments=1\" >> ${_THIS_FILE}\n      echo \"apc.shm_size=256M\"  >> ${_THIS_FILE}\n      echo \"apc.slam_defense=0\" >> ${_THIS_FILE}\n      echo \"apc.ttl=0\"          >> ${_THIS_FILE}\n      echo \";\"                  >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini files to add igbinary.so\n_fix_php_ini_igbinary() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: 
_fix_php_ini_igbinary $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _IGBINARY_INI_TEST=$(grep \"^extension=igbinary.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_IGBINARY_INI_TEST}\" =~ \"extension=igbinary.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=igbinary.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini file to add newrelic.ini\n_fix_php_ini_newrelic() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_newrelic $1\"\n  fi\n  _NR_TPL=\"${_locCnf}/php/newrelic.ini\"\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _NEWRELIC_INI_TEST_A=$(grep \"^extension=newrelic.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_NEWRELIC_INI_TEST_A}\" =~ \"extension=newrelic.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      cat ${_NR_TPL} >> ${_THIS_FILE}\n    fi\n    _NEWRELIC_INI_TEST_B=$(grep \"newrelic.framework.drupal.modules\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_NEWRELIC_INI_TEST_B}\" =~ \"newrelic.framework.drupal.modules\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"newrelic.framework.drupal.modules = 1\" >> ${_THIS_FILE}\n    fi\n    sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" ${_THIS_FILE} &> /dev/null\n    wait\n    sed -i \"s/license_key=//g\" ${_THIS_FILE} &> /dev/null\n    wait\n  fi\n}\n\n#\n# Fix all php.ini files to add newrelic.ini\n_fix_php_ini_newrelic_all() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_newrelic_all $1\"\n  fi\n  if [ -e \"/etc/newrelic/newrelic.cfg\" ]; then\n    if [ -z \"${_NEWRELIC_KEY}\" ]; then\n      _NEWRELIC_KEY=$(grep license_key /etc/newrelic/newrelic.cfg 2>/dev/null | tr -d '\\n')\n    fi\n    _PHP_V=\"85 84 83 82 81 80 74 73 72\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_newrelic ${e}\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_newrelic ${e}\n    done\n  fi\n}\n\n#\n# Fix FPM php.ini file to add opcache configuration\n_fix_php_ini_opcache() {\n  if [ \"${_DEBUG_MODE}\" = 
\"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_opcache ${1}\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    if grep -q \"Zend OPcache\" -- \"${_THIS_FILE}\"; then\n      _DO_NOTHING=YES\n    else\n      {\n        echo\n        echo \"; Zend OPcache\"\n        if [ \"${1}\" -lt 85 ]; then\n          echo \"zend_extension=\\\"${_OPCACHE_SO}\\\"\"\n        fi\n        echo \"opcache.enable=1\"\n        echo \"opcache.memory_consumption=181\"\n        echo \"opcache.revalidate_freq=60\"\n        echo \"opcache.dups_fix=1\"\n        echo \"opcache.file_update_protection=8\"\n        echo \"opcache.huge_code_pages=0\"\n        case \"${1}\" in\n          80|74|73|72|71|70|56)\n            echo \"opcache.interned_strings_buffer=32\"\n            ;;\n          81|82|83|84|85)\n            echo \"opcache.interned_strings_buffer=128\"\n            ;;\n          *)\n            echo \"opcache.interned_strings_buffer=128\"\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              _msg \"WARN: Unknown PHP version '${1}', using default buffer=128\"\n            fi\n            ;;\n        esac\n        echo \"opcache.jit=off\"\n        echo \"opcache.lockfile_path=/var/tmp/fpm\"\n        echo \"opcache.max_accelerated_files=200000\"\n        echo \"opcache.restrict_api=/var/www\"\n        echo \"opcache.revalidate_path=1\"\n        echo \"opcache.save_comments=1\"\n        echo \"opcache.use_cwd=1\"\n        echo \"opcache.validate_permission=1\"\n        echo \"opcache.validate_root=1\"\n        echo \"opcache.validate_timestamps=1\"\n        echo \";\"\n      } >> \"${_THIS_FILE}\"\n    fi\n  fi\n}\n\n#\n# Fix all FPM php.ini files to add Zend OPcache\n_fix_php_ini_opcache_all() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_opcache_all\"\n  fi\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  fi\n  for e in ${_PHP_V}; do\n    
_P_API=\n    case \"${e}\" in\n      85) _P_API=\"${_PHP85_API}\" ;;\n      84) _P_API=\"${_PHP84_API}\" ;;\n      83) _P_API=\"${_PHP83_API}\" ;;\n      82) _P_API=\"${_PHP82_API}\" ;;\n      81) _P_API=\"${_PHP81_API}\" ;;\n      80) _P_API=\"${_PHP80_API}\" ;;\n      74) _P_API=\"${_PHP74_API}\" ;;\n      73) _P_API=\"${_PHP73_API}\" ;;\n      72) _P_API=\"${_PHP72_API}\" ;;\n      71) _P_API=\"${_PHP71_API}\" ;;\n      70) _P_API=\"${_PHP70_API}\" ;;\n      56) _P_API=\"${_PHP56_API}\" ;;\n      *)  _msg \"WARN: Unknown PHP API version for PHP ${e}\"\n      ;;\n    esac\n    _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n    _OPCACHE_LP=\"/opt/php${e}/lib/php/extensions/no-debug-non-zts\"\n    _OPCACHE_SO=\"${_OPCACHE_LP}-${_P_API}/opcache.so\"\n    _fix_php_ini_opcache ${e}\n  done\n}\n\n#\n# Fix php.ini file to add php_tet.so\n_fix_php_ini_tet() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_tet $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _TET_INI_TEST=$(grep \"^extension=php_tet.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_TET_INI_TEST}\" =~ \"extension=php_tet.so\" ]]; then\n      _DO_NOTHING=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"PROC: php_tet.so already present in ${_THIS_FILE}\"\n      fi\n    else\n      echo \"extension=php_tet.so\" >> ${_THIS_FILE}\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"PROC: Just added php_tet.so to ${_THIS_FILE}\"\n      fi\n    fi\n  fi\n}\n\n#\n# Fix all php.ini files to add php_tet.so\n_fix_php_ini_tet_all() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_tet_all\"\n  fi\n  if [ \"${_PHP_TET}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"TET\" ]]; then\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      _P_API=\n      case \"${e}\" in\n        85) _P_API=\"${_PHP85_API}\" ;;\n        84) _P_API=\"${_PHP84_API}\" ;;\n        83) _P_API=\"${_PHP83_API}\" ;;\n        82) 
_P_API=\"${_PHP82_API}\" ;;\n        81) _P_API=\"${_PHP81_API}\" ;;\n        80) _P_API=\"${_PHP80_API}\" ;;\n        74) _P_API=\"${_PHP74_API}\" ;;\n        73) _P_API=\"${_PHP73_API}\" ;;\n        72) _P_API=\"${_PHP72_API}\" ;;\n        71) _P_API=\"${_PHP71_API}\" ;;\n        70) _P_API=\"${_PHP70_API}\" ;;\n        56) _P_API=\"${_PHP56_API}\" ;;\n        *)  _msg \"WARN: Unknown PHP API version for PHP ${e}\"\n        ;;\n      esac\n      _TET_BASE=\"/opt/php${e}/lib/php/extensions/no-debug-non-zts\"\n      _TET_SO=\"${_TET_BASE}-${_P_API}/php_tet.so\"\n      if [ ! -e \"${_TET_SO}\" ]; then\n        if [[ \"${e}\" =~ \"85\" ]] \\\n          || [[ \"${e}\" =~ \"84\" ]] \\\n          || [[ \"${e}\" =~ \"83\" ]] \\\n          || [[ \"${e}\" =~ \"82\" ]] \\\n          || [[ \"${e}\" =~ \"81\" ]] \\\n          || [[ \"${e}\" =~ \"80\" ]] \\\n          || [[ \"${e}\" =~ \"74\" ]] \\\n          || [[ \"${e}\" =~ \"73\" ]]; then\n          _TET_VRN=\"5.3-Linux-x64-Perl-PHP-Python-Ruby\"\n        else\n          _TET_VRN=\"5.2-Linux-x86_64-Perl-PHP-Python-Ruby\"\n        fi\n        if [ ! 
-e \"/var/opt/TET-${_TET_VRN}/bind/php\" ]; then\n          mkdir -p  /var/opt\n          cd /var/opt\n          _get_dev_src \"TET-${_TET_VRN}.tar.gz\"\n        fi\n        if [ -e \"${_TET_BASE}-${_P_API}\" ] \\\n          && [ -e \"/var/opt/TET-${_TET_VRN}/bind/php/php-${e}0-nts\" ]; then\n          cd /var/opt/TET-${_TET_VRN}/bind/php/php-${e}0-nts/\n          cp -a php_tet.so ${_TET_SO}\n        fi\n      fi\n      if [ -e \"${_TET_SO}\" ]; then\n        _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n        _fix_php_ini_tet ${e}\n        _THIS_FILE=/opt/php${e}/lib/php.ini\n        _fix_php_ini_tet ${e}\n      fi\n    done\n  fi\n}\n\n#\n# Fix php.ini file to add geos.so\n_fix_php_ini_geos() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_geos $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _GEOS_INI_TEST=$(grep \"^extension=geos.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_GEOS_INI_TEST}\" =~ \"extension=geos.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=geos.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix all php.ini files to add geos.so\n_fix_php_ini_geos_all() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_geos_all\"\n  fi\n  if [ \"${_PHP_GEOS}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"GEO\" ]]; then\n    _PHP_V=\"56\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_geos ${e}\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_geos ${e}\n    done\n  fi\n}\n\n#\n# Fix php.ini file to add mongo.so\n_fix_php_ini_mongo() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_mongo $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MONGO_INI_TEST=$(grep \"^extension=mongo.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MONGO_INI_TEST}\" =~ \"extension=mongo.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=mongo.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix php.ini file to 
add mongodb.so\n_fix_php_ini_mongodb() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_mongodb $1\"\n  fi\n  if [ -e \"${_THIS_FILE}\" ]; then\n    _MONGODB_INI_TEST=$(grep \"^extension=mongodb.so\" ${_THIS_FILE} 2>&1)\n    if [[ \"${_MONGODB_INI_TEST}\" =~ \"extension=mongodb.so\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"extension=mongodb.so\" >> ${_THIS_FILE}\n    fi\n  fi\n}\n\n#\n# Fix all php.ini files to add mongo.so or mongodb.so\n_fix_php_ini_mongo_all() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_ini_mongo_all\"\n  fi\n  if [ \"${_PHP_MONGODB}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"MNG\" ]]; then\n    _PHP_V=\"56\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_mongo ${e}\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_mongo ${e}\n    done\n    _PHP_V=\"72 71 70\"\n    for e in ${_PHP_V}; do\n      _THIS_FILE=/opt/php${e}/etc/php${e}.ini\n      _fix_php_ini_mongodb ${e}\n      _THIS_FILE=/opt/php${e}/lib/php.ini\n      _fix_php_ini_mongodb ${e}\n    done\n  fi\n}\n\n#\n# Apply PHP INI fixes.\n_apply_php_ini_fixes() {\n  local e=\"${1}\"\n\n  if [ \"${e}\" -gt 56 ]; then\n    _fix_php_ini_apcu \"${e}\"\n  fi\n\n  if [ \"${e}\" -gt 71 ]; then\n    _fix_php_ini_mcrypt \"${e}\"\n  fi\n\n  if [ \"${e}\" = 56 ]; then\n    _fix_php_ini_mailparse \"${e}\"\n    _fix_php_ini_twig \"${e}\"\n  fi\n\n  case \"${e}\" in\n    80|81|82|83|84|85)\n      _remove_php_ini_jsmin \"${e}\"\n      ;;\n    *)\n      _add_php_ini_jsmin \"${e}\"\n      ;;\n  esac\n\n#   case \"${e}\" in\n#     84|85)\n#       _remove_php_ini_mcrypt \"${e}\"\n#       ;;\n#   esac\n\n  _fix_php_ini_igbinary \"${e}\"\n  _fix_php_ini_redis \"${e}\"\n  _fix_php_ini_ioncube \"${e}\"\n  _remove_php_ini_suhosin \"${e}\"\n  _fix_php_ini_yaml \"${e}\"\n  _fix_php_ini_imagick \"${e}\"\n}\n\n#\n# Update PHP Config.\n_php_conf_update() {\n  if [ \"${_DEBUG_MODE}\" = 
\"YES\" ]; then\n    _msg \"PROC: _php_conf_update\"\n  fi\n  if [ -z \"${_THISHTIP}\" ]; then\n    _LOC_DOM=\"${_THISHOST}\"\n    _find_correct_ip\n    _THISHTIP=\"${_LOC_IP}\"\n  fi\n  if [ ! -e \"/opt/etc/fpm\" ] \\\n    || [ ! -e \"/opt/etc/fpm/fpm-pool-common.conf\" ]; then\n    mkdir -p /opt/etc/fpm\n  fi\n  cp -af ${_locCnf}/php/fpm-pool-common.conf /opt/etc/fpm/fpm-pool-common.conf\n  cp -af ${_locCnf}/php/fpm-pool-common-legacy.conf /opt/etc/fpm/fpm-pool-common-legacy.conf\n  cp -af ${_locCnf}/php/fpm-pool-common-modern.conf /opt/etc/fpm/fpm-pool-common-modern.conf\n  sed -i \"s/127.0.0.1/127.0.0.1,${_THISHTIP}/g\" /opt/etc/fpm/fpm-pool-commo*.conf\n  wait\n  sed -i \"s/mode =.*/mode = 0660/g\" /opt/etc/fpm/fpm-pool-commo*.conf\n  wait\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  fi\n  for e in ${_PHP_V}; do\n    if [ ! -e \"/var/www/www${e}\" ]; then\n      adduser --system --group --home /var/www/www${e} www${e} &> /dev/null\n      usermod -aG www-data www${e}\n    fi\n    if [ ! -e \"/opt/php${e}/etc/php${e}.ini\" ] \\\n      || [ ! -e \"/opt/php${e}/etc/pool.d/www${e}.conf\" ]; then\n      mkdir -p /opt/php${e}/etc/pool.d\n      cp -af ${_locCnf}/php/php${e}.ini /opt/php${e}/etc/php${e}.ini\n    fi\n    cp -af ${_locCnf}/php/fpm${e}-pool-www.conf /opt/php${e}/etc/pool.d/www${e}.conf\n    if [ ! 
-e \"/opt/php${e}/lib/php.ini\" ]; then\n      mkdir -p /opt/php${e}/lib\n      cp -af ${_locCnf}/php/php${e}-cli.ini /opt/php${e}/lib/php.ini\n    fi\n    cp -af ${_locCnf}/php/php${e}.ini /opt/php${e}/etc/php${e}.ini\n    cp -af ${_locCnf}/php/php${e}-cli.ini /opt/php${e}/lib/php.ini\n    cp -af ${_locCnf}/php/php${e}-fpm.conf /opt/php${e}/etc/php${e}-fpm.conf\n\n    _THIS_FILE=\"/opt/php${e}/etc/php${e}.ini\"\n    _apply_php_ini_fixes \"${e}\"\n\n    _THIS_FILE=\"/opt/php${e}/lib/php.ini\"\n    _apply_php_ini_fixes \"${e}\"\n\n    if [ -e \"/opt/php${e}/etc/php${e}.ini\" ]; then\n      sed -i \"s/^zlib.output_compression.*/zlib.output_compression = Off/g\"       /opt/php${e}/etc/php${e}.ini\n      wait\n      sed -i \"s/.*zlib.output_compression_level/;zlib.output_compression_level/g\" /opt/php${e}/etc/php${e}.ini\n      wait\n    fi\n    if [ -e \"/opt/php${e}/lib/php.ini\" ]; then\n      sed -i \"s/^zlib.output_compression.*/zlib.output_compression = Off/g\"       /opt/php${e}/lib/php.ini\n      wait\n      sed -i \"s/.*zlib.output_compression_level/;zlib.output_compression_level/g\" /opt/php${e}/lib/php.ini\n      wait\n    fi\n  done\n  rm -f /etc/php5/conf.d/{opcache.ini,apc.ini,imagick.ini,memcached.ini}\n  rm -f /etc/php5/conf.d/{redis.ini,suhosin.ini,newrelic.ini}\n  _fix_php_ini_newrelic_all\n  _fix_php_ini_geos_all\n  _fix_php_ini_mongo_all\n  _fix_php_ini_tet_all\n  _fix_php_ini_opcache_all\n}\n\n#\n# Check PHP Config.\n_php_config_check_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_config_check_update\"\n  fi\n  _php_conf_update\n  _boa_ini_tpl_update\n}\n\n#\n# Tune Web Server configuration.\n_tune_web_server_config() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _tune_web_server_config\"\n  fi\n  # Set _PHP_FPM_WORKERS to AUTO if it is empty\n  [ -z \"${_PHP_FPM_WORKERS}\" ] && _PHP_FPM_WORKERS=AUTO\n  # If _PHP_FPM_WORKERS is not AUTO and not empty, then check if it is less than 1\n  if [ 
\"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && [ -n \"${_PHP_FPM_WORKERS}\" ]; then\n    if [ \"${_PHP_FPM_WORKERS}\" -lt 1 ] 2>/dev/null; then\n      _PHP_FPM_WORKERS=AUTO\n    fi\n  fi\n  # If _PHP_FPM_WORKERS is not AUTO, remove non-numeric characters\n  [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && _PHP_FPM_WORKERS=${_PHP_FPM_WORKERS//[^0-9]/}\n  [ ! -z \"${_L_PHP_FPM_WORKERS}\" ] && _L_PHP_FPM_WORKERS=${_L_PHP_FPM_WORKERS//[^0-9]/}\n  [ ! -z \"${_L_PHP_FPM_WORKERS}\" ] && _LIM_FPM=\"${_L_PHP_FPM_WORKERS}\"\n  if [ \"${_LIM_FPM}\" -lt 48 ] 2>/dev/null; then\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _LIM_FPM=48\n    fi\n  fi\n  if [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ]; then\n    _PHP_FPM_WORKERS=${_PHP_FPM_WORKERS//[^0-9]/}\n    if [ ! -z \"${_PHP_FPM_WORKERS}\" ] && [ \"${_PHP_FPM_WORKERS}\" -ge 1 ]; then\n      _LIM_FPM=\"${_PHP_FPM_WORKERS}\"\n    fi\n  fi\n  if [ \"${_LIM_FPM}\" -gt 100 ] 2>/dev/null; then\n    _LIM_FPM=100\n  fi\n  _CHILD_MAX_FPM=$(( _LIM_FPM * 2 ))\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _PHP_FPM_WORKERS is ${_PHP_FPM_WORKERS}\"\n    _msg \"TUNE: _L_PHP_FPM_WORKERS is ${_L_PHP_FPM_WORKERS}\"\n    _msg \"TUNE: _LIM_FPM is ${_LIM_FPM}\"\n    _msg \"TUNE: _CHILD_MAX_FPM is ${_CHILD_MAX_FPM}\"\n  fi\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  fi\n  for e in ${_PHP_V}; do\n    if [ ! -z \"${_CHILD_MAX_FPM}\" ] && [ \"${_CHILD_MAX_FPM}\" -ge 8 ]; then\n      sed -i \"s/pm.max_children =.*/pm.max_children = ${_CHILD_MAX_FPM}/g\" \\\n         /opt/php${e}/etc/pool.d/www${e}.conf &> /dev/null\n      wait\n    else\n      sed -i \"s/pm.max_children =.*/pm.max_children = 8/g\" \\\n         /opt/php${e}/etc/pool.d/www${e}.conf &> /dev/null\n      wait\n    fi\n    if [ ! 
-z \"${_PHP_FPM_DENY}\" ]; then\n      sed -i \"s/passthru,/${_PHP_FPM_DENY},/g\" \\\n        /opt/php${e}/etc/pool.d/www${e}.conf &> /dev/null\n      wait\n    fi\n  done\n\n  # PHP-FPM INI\n  sed -i \"s/^default_socket_timeout =.*/default_socket_timeout = 180/g\" /opt/php*/etc/php*.ini &> /dev/null\n  wait\n  sed -i \"s/^max_execution_time =.*/max_execution_time = 180/g\" /opt/php*/etc/php*.ini         &> /dev/null\n  wait\n  sed -i \"s/^max_input_time =.*/max_input_time = 180/g\" /opt/php*/etc/php*.ini                 &> /dev/null\n  wait\n\n  # PHP-CLI INI\n  sed -i \"s/^default_socket_timeout =.*/default_socket_timeout = 3600/g\" /opt/php*/lib/php.ini &> /dev/null\n  wait\n  sed -i \"s/^max_execution_time =.*/max_execution_time = 3600/g\" /opt/php*/lib/php.ini         &> /dev/null\n  wait\n  sed -i \"s/^max_input_time =.*/max_input_time = 3600/g\" /opt/php*/lib/php.ini                 &> /dev/null\n  wait\n\n  # Valkey config should sync with PHP-CLI\n  sed -i \"s/^timeout .*/timeout 3600/g\" /etc/valkey/valkey.conf                                &> /dev/null\n  wait\n\n  # Redis config should sync with PHP-CLI\n  sed -i \"s/^timeout .*/timeout 3600/g\" /etc/redis/redis.conf                                  &> /dev/null\n  wait\n}\n\n#\n# Install IonCube.\n_if_install_php_ioncube() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_php_ioncube\"\n  fi\n  ###--------------------###\n  _X86_64_TEST=$(uname -m)\n  if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n    _SYSTEM_ARCH=\"x64\"\n  else\n    _SYSTEM_ARCH=\"x32\"\n  fi\n  if [ \"${_PHP_IONCUBE}\" = \"YES\" ] && [ \"${_SYSTEM_ARCH}\" = \"x64\" ]; then\n    if [ ! -e \"${_pthLog}/ioncube-update-${_IONCUBE_VRN}.log\" ] \\\n      || [ ! 
-e \"/usr/local/ioncube\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      mkdir -p /usr/local/ioncube\n      _msg \"INFO: Installing IonCube ${_IONCUBE_VRN} for PHP...\"\n      cd /var/opt\n      rm -rf ioncube_loaders*\n      _get_dev_arch \"ioncube_loaders_lin_x86-64_${_IONCUBE_VRN}.tar.gz\"\n      rm -f /usr/local/ioncube/*\n      cp -af /var/opt/ioncube/* /usr/local/ioncube/ &> /dev/null\n      touch ${_pthLog}/ioncube-update-${_IONCUBE_VRN}.log\n    fi\n  fi\n}\n\n#\n# Install PHP extensions.\n_install_php_extensions() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_php_extensions $1\"\n  fi\n  ###--------------------###\n  if [ \"$1\" != 56 ]; then\n    _msg \"INFO: Installing APCu ${_PHP_APCU} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf apcu*\n    _get_dev_src \"apcu-${_PHP_APCU}.tgz\"\n    cd /var/opt/apcu-${_PHP_APCU}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/apcu.so\" ]; then\n      _msg \"WARN: Installing APCu for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/apcu-${_PHP_APCU}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    _USE_IGBINARY=\"${_PHP_IGBINARY_TWO}\"\n  elif [ \"$1\" = \"85\" ]; then\n    _USE_IGBINARY=\"${_PHP_IGBINARY_EIGHT_FIVE}\"\n  else\n    _USE_IGBINARY=\"${_PHP_IGBINARY_THREE}\"\n  fi\n  _msg \"INFO: Installing igbinary ${_USE_IGBINARY} for PHP ${_T_PHP_VRN}...\"\n  cd /var/opt\n  rm -rf igbinary*\n  if [ \"$1\" = \"85\" ]; then\n    _get_dev_src \"igbinary-${_USE_IGBINARY}.tar.gz\"\n  else\n    _get_dev_src \"igbinary-${_USE_IGBINARY}.tgz\"\n  fi\n  cd /var/opt/igbinary-${_USE_IGBINARY}\n  _mrun \"${_T_PHP_PTH}/phpize\"\n  _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n  _mrun \"make -j $(nproc) --quiet\"\n  _mrun \"make --quiet install\"\n  ldconfig 2> /dev/null\n  if [ ! -e \"${_THIS_PHP_EXT_DIR}/igbinary.so\" ]; then\n    _msg \"WARN: Installing igbinary for PHP ${_T_PHP_VRN} failed!\"\n  else\n    touch ${_pthLog}/igbinary-${_USE_IGBINARY}-${_T_PHP_VRN}.log\n  fi\n  ###--------------------###\n  _USE_PHPREDIS=\n  _PHPREDIS_BUILD=\n  if [ \"$1\" = \"56\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_FOUR_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  elif [ \"$1\" = \"70\" ] || [ \"$1\" = \"71\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_FIVE_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  elif [ \"$1\" = \"72\" ] || [ \"$1\" = \"73\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_SIX_LEGACY_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  elif [ \"$1\" = \"85\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_SIX_LATEST_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  else\n    _USE_PHPREDIS=\"${_PHPREDIS_SIX_MODERN_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  
fi\n  _msg \"INFO: Installing PhpRedis ${_USE_PHPREDIS} for PHP ${_T_PHP_VRN}...\"\n  ldconfig 2> /dev/null\n  cd /var/opt\n  rm -rf phpredis*\n  _get_dev_src \"phpredis-${_USE_PHPREDIS}.tar.gz\"\n  cd /var/opt/phpredis\n  _mrun \"${_T_PHP_PTH}/phpize\"\n  if [ -e \"/var/opt/phpredis/liblzf/lzf.h\" ] && [ -e \"/lib/x86_64-linux-gnu/liblzf.so\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: _PHPREDIS_BUILD is ${_PHPREDIS_BUILD}\"\n    fi\n  else\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: _PHPREDIS_BUILD is ${_PHPREDIS_BUILD}\"\n    fi\n  fi\n  _mrun \"bash ./configure ${_PHPREDIS_BUILD} --with-php-config=${_T_PHP_CFG}\"\n  _mrun \"make -j $(nproc) --quiet\"\n  _mrun \"make --quiet install\"\n  ldconfig 2> /dev/null\n  if [ ! -e \"${_THIS_PHP_EXT_DIR}/redis.so\" ]; then\n    _msg \"WARN: Installing PhpRedis for PHP ${_T_PHP_VRN} failed!\"\n  else\n    touch ${_pthLog}/phpredis-update-${_USE_PHPREDIS}-${_T_PHP_VRN}.log\n  fi\n  ###--------------------###\n  if [ \"$1\" -gt 71 ]; then\n    _msg \"INFO: Installing MCRYPT ${_PHP_MCRYPT} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf mcrypt*\n    _get_dev_src \"mcrypt-${_PHP_MCRYPT}.tgz\"\n    cd /var/opt/mcrypt-${_PHP_MCRYPT}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/mcrypt.so\" ]; then\n      _msg \"WARN: Installing MCRYPT for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/mcrypt-${_PHP_MCRYPT}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    _msg \"INFO: Installing UploadProgress ${_UPROGRESS_LEGACY_VRN} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf uploadprogress*\n    _get_dev_src \"uploadprogress-${_UPROGRESS_LEGACY_VRN}.tgz\"\n    cd /var/opt/uploadprogress-${_UPROGRESS_LEGACY_VRN}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ]; then\n      _msg \"WARN: Installing UploadProgress for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/f_uploadprogress-${_UPROGRESS_LEGACY_VRN}-${_T_PHP_VRN}.log\n    fi\n  elif [ \"$1\" -gt 71 ]; then\n    _msg \"INFO: Installing UploadProgress ${_UPROGRESS_EIGHT_VRN} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf uploadprogress*\n    _get_dev_src \"uploadprogress-${_UPROGRESS_EIGHT_VRN}.tgz\"\n    cd /var/opt/uploadprogress-${_UPROGRESS_EIGHT_VRN}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ]; then\n      _msg \"WARN: Installing UploadProgress for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/f_uploadprogress-${_UPROGRESS_EIGHT_VRN}-${_T_PHP_VRN}.log\n    fi\n  else\n    _msg \"INFO: Installing UploadProgress ${_UPROGRESS_SEVEN_VRN} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf uploadprogress*\n    _get_dev_src \"uploadprogress-${_UPROGRESS_SEVEN_VRN}.tar.gz\"\n    cd /var/opt/uploadprogress-${_UPROGRESS_SEVEN_VRN}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ]; then\n      _msg \"WARN: Installing UploadProgress for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/f_uploadprogress-${_UPROGRESS_SEVEN_VRN}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" -lt 80 ]; then\n    cd /var/opt\n    rm -rf pecl-jsmin*\n    if [ \"$1\" = \"56\" ]; then\n      _msg \"INFO: Installing JSMin ${_JSMIN_PHP_LEGACY_VRN} for PHP ${_T_PHP_VRN}...\"\n      _get_dev_src \"pecl-jsmin-${_JSMIN_PHP_LEGACY_VRN}.tar.gz\"\n      cd /var/opt/pecl-jsmin-${_JSMIN_PHP_LEGACY_VRN}\n    else\n      _msg \"INFO: Installing JSMin ${_JSMIN_PHP_MODERN_VRN} for PHP ${_T_PHP_VRN}...\"\n      _get_dev_src \"pecl-jsmin-${_JSMIN_PHP_MODERN_VRN}.tar.gz\"\n      cd /var/opt/pecl-jsmin-${_JSMIN_PHP_MODERN_VRN}\n    fi\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/jsmin.so\" ]; then\n      _msg \"WARN: Installing JSMin for PHP ${_T_PHP_VRN} failed!\"\n    else\n      if [ \"$1\" = \"56\" ]; then\n        touch ${_pthLog}/php-pecl-jsmin-${_JSMIN_PHP_LEGACY_VRN}-${_T_PHP_VRN}.log\n      else\n        touch ${_pthLog}/php-pecl-jsmin-${_JSMIN_PHP_MODERN_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    _msg \"INFO: Installing Twig C ${_TWIGC_VRN} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf twig*\n    _get_dev_src \"twig-${_TWIGC_VRN}.tar.gz\"\n    cd /var/opt/twig-${_TWIGC_VRN}/ext/twig\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/twig.so\" ]; then\n      _msg \"WARN: Installing Twig C for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/php-twig-${_TWIGC_VRN}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"${_PHP_GEOS}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"GEO\" ]]; then\n    if [ \"$1\" = \"56\" ]; then\n      _msg \"INFO: Building GEOS extension ${_GEOS_VRN} for PHP ${_T_PHP_VRN} from sources...\"\n      if [ ! -e \"${_pthLog}/geos-${_xSrl}-${_X_VERSION}.log\" ]; then\n        _apt_clean_update\n        for _PKG in libgeos-dev libgeos-c1; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n        touch ${_pthLog}/geos-${_xSrl}-${_X_VERSION}.log\n      fi\n      cd /var/opt\n      rm -rf geos*\n      _get_dev_src \"geos-${_GEOS_VRN}.tar.bz2\"\n      cd geos-${_GEOS_VRN}\n      _PHP_V=\"56\"\n      for e in ${_PHP_V}; do\n        if [ \"$1\" = \"${e}\" ]; then\n          find . 
-type f -print0 \\\n            | xargs -0 sed -i \"s|/usr/local|/opt/php${e}|g\" &> /dev/null\n          wait\n        fi\n      done\n      _mrun \"bash ./configure --enable-php\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      touch ${_pthLog}/php-geos-${_GEOS_VRN}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"${_PHP_MONGODB}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"MNG\" ]]; then\n    if [ \"$1\" = \"56\" ]; then\n      _msg \"INFO: Installing MongoDB driver ${_MONGO_VRN} for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf mongo*\n      _get_dev_src \"mongo-${_MONGO_VRN}.tgz\"\n      cd /var/opt/mongo-${_MONGO_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/mongo.so\" ]; then\n        _msg \"WARN: Installing MongoDB driver for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/php-mongo-${_MONGO_VRN}-${_T_PHP_VRN}.log\n      fi\n    else\n      _msg \"INFO: Installing MongoDB driver ${_MONGODB_VRN} for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf mongodb*\n      _get_dev_src \"mongodb-${_MONGODB_VRN}.tgz\"\n      cd /var/opt/mongodb-${_MONGODB_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/mongodb.so\" ]; then\n        _msg \"WARN: Installing MongoDB driver for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/php-mongodb-${_MONGODB_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" != 86 ]; then\n    _msg \"INFO: Installing Imagick ${_IMAGICK_VRN} for PHP ${_T_PHP_VRN}...\"\n    if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n      || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n      || [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n      if [ -e \"/usr/bin/MagickWand-config\" ]; then\n        mv -f /usr/bin/MagickWand-config /var/backups/usr-bin-MagickWand-config\n      fi\n      if [ -e \"/usr/local/bin/MagickWand-config\" ]; then\n        mv -f /usr/local/bin/MagickWand-config /var/backups/usr-local-bin-MagickWand-config\n      fi\n    fi\n    if [ ! -e \"${_pthLog}/libmagickwand-dev-${_IMAGICK_VRN}-rebuild.log\" ]; then\n      _apt_clean_update\n      for _PKG in libmagickwand-dev; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n        fi\n      done\n      touch ${_pthLog}/libmagickwand-dev-${_IMAGICK_VRN}-rebuild.log\n    fi\n    cd /var/opt\n    rm -rf imagick*\n    _get_dev_src \"imagick-${_IMAGICK_VRN}.tgz\"\n    cd /var/opt/imagick-${_IMAGICK_VRN}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/imagick.so\" ]; then\n      _msg \"WARN: Installing Imagick for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/imagick-${_IMAGICK_VRN}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    _msg \"INFO: Installing MailParse ${_MAILPARSE_VRN} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf mailparse*\n    _get_dev_src \"mailparse-${_MAILPARSE_VRN}.tgz\"\n    cd /var/opt/mailparse-${_MAILPARSE_VRN}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/mailparse.so\" ]; then\n      _msg \"WARN: Installing MailParse for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/mailparse-${_MAILPARSE_VRN}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ ! -e \"${_pthLog}/f_libyaml-${_LIB_YAML_VRN}.log\" ] \\\n    || [ -e \"/usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.4\" ]; then\n    _msg \"INFO: Installing LibYAML ${_LIB_YAML_VRN} for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf yaml*\n    _get_dev_src \"yaml-${_LIB_YAML_VRN}.tar.gz\"\n    cd /var/opt/yaml-${_LIB_YAML_VRN}\n    _mrun \"sh ./bootstrap\"\n    _mrun \"bash ./configure --prefix=/usr\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    touch ${_pthLog}/f_libyaml-${_LIB_YAML_VRN}.log\n    rm -f ${_pthLog}/yaml*.log\n    rm -f ${_pthLog}/f_yaml*.log\n    _X86_64_TEST=$(uname -m)\n    if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n      if [ ! 
-e \"/usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.9\" ] \\\n        || [ -e \"/usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.4\" ]; then\n        [ -d \"/var/backups/libyaml\" ] || mkdir -p /var/backups/libyaml\n        mkdir -p /usr/lib/x86_64-linux-gnu\n        cp -af /usr/lib/x86_64-linux-gnu/libyaml* /var/backups/libyaml/ &> /dev/null\n        if [ -e \"/usr/lib/libyaml-0.so.2.0.9\" ]; then\n          rm -f /usr/lib/x86_64-linux-gnu/libyaml*\n          rm -f /usr/lib/libyaml-0.so.2.0.4\n        fi\n        cp -af /usr/lib/libyaml* /usr/lib/x86_64-linux-gnu/ &> /dev/null\n      fi\n    fi\n  fi\n  cd /var/opt\n  rm -rf yaml*\n  if [ \"$1\" = \"56\" ]; then\n    _msg \"INFO: Installing YAML ${_YAML_PHP_LEGACY_VRN} for PHP ${_T_PHP_VRN}...\"\n    _get_dev_src \"yaml-${_YAML_PHP_LEGACY_VRN}.tgz\"\n    cd /var/opt/yaml-${_YAML_PHP_LEGACY_VRN}\n  elif [ \"$1\" = \"70\" ]; then\n    _msg \"INFO: Installing YAML ${_YAML_PHP_SEVENO_VRN} for PHP ${_T_PHP_VRN}...\"\n    _get_dev_src \"yaml-${_YAML_PHP_SEVENO_VRN}.tgz\"\n    cd /var/opt/yaml-${_YAML_PHP_SEVENO_VRN}\n  else\n    _msg \"INFO: Installing YAML ${_YAML_PHP_MODERN_VRN} for PHP ${_T_PHP_VRN}...\"\n    _get_dev_src \"yaml-${_YAML_PHP_MODERN_VRN}.tgz\"\n    cd /var/opt/yaml-${_YAML_PHP_MODERN_VRN}\n  fi\n  _mrun \"${_T_PHP_PTH}/phpize\"\n  _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n  _mrun \"make -j $(nproc) --quiet\"\n  _mrun \"make --quiet install\"\n  ldconfig 2> /dev/null\n  if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/yaml.so\" ]; then\n    _msg \"WARN: Installing YAML for PHP ${_T_PHP_VRN} failed!\"\n  else\n    if [ \"$1\" = \"56\" ]; then\n      touch ${_pthLog}/f_yaml-${_YAML_PHP_LEGACY_VRN}-${_T_PHP_VRN}.log\n    elif [ \"$1\" = \"70\" ]; then\n      touch ${_pthLog}/f_yaml-${_YAML_PHP_SEVENO_VRN}-${_T_PHP_VRN}.log\n    else\n      touch ${_pthLog}/f_yaml-${_YAML_PHP_MODERN_VRN}-${_T_PHP_VRN}.log\n    fi\n  fi\n}\n\n#\n# Update extensions for PHP built from sources.\n_php_extensions_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_extensions_update $1\"\n  fi\n  ###--------------------###\n  if [ \"$1\" != 56 ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/apcu.so\" ] \\\n      || [ ! -e \"${_pthLog}/apcu-${_PHP_APCU}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing APCu ${_PHP_APCU} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf apcu*\n      _get_dev_src \"apcu-${_PHP_APCU}.tgz\"\n      cd /var/opt/apcu-${_PHP_APCU}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/apcu.so\" ]; then\n        _msg \"WARN: Installing APCu for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/apcu-${_PHP_APCU}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    _USE_IGBINARY=\"${_PHP_IGBINARY_TWO}\"\n  elif [ \"$1\" = \"85\" ]; then\n    _USE_IGBINARY=\"${_PHP_IGBINARY_EIGHT_FIVE}\"\n  else\n    _USE_IGBINARY=\"${_PHP_IGBINARY_THREE}\"\n  fi\n  if [ ! -e \"${_THIS_PHP_EXT_DIR}/igbinary.so\" ] \\\n    || [ ! 
-e \"${_pthLog}/igbinary-${_USE_IGBINARY}-${_T_PHP_VRN}.log\" ]; then\n    _msg \"INFO: Installing Igbinary ${_USE_IGBINARY} upgrade for PHP ${_T_PHP_VRN}...\"\n    cd /var/opt\n    rm -rf igbinary*\n    if [ \"$1\" = \"85\" ]; then\n      _get_dev_src \"igbinary-${_USE_IGBINARY}.tar.gz\"\n    else\n      _get_dev_src \"igbinary-${_USE_IGBINARY}.tgz\"\n    fi\n    cd /var/opt/igbinary-${_USE_IGBINARY}\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/igbinary.so\" ]; then\n      _msg \"WARN: Installing Igbinary for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/igbinary-${_USE_IGBINARY}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  _USE_PHPREDIS=\n  _PHPREDIS_BUILD=\n  if [ \"$1\" = \"56\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_FOUR_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  elif [ \"$1\" = \"70\" ] || [ \"$1\" = \"71\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_FIVE_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  elif [ \"$1\" = \"72\" ] || [ \"$1\" = \"73\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_SIX_LEGACY_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  elif [ \"$1\" = \"85\" ]; then\n    _USE_PHPREDIS=\"${_PHPREDIS_SIX_LATEST_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  else\n    _USE_PHPREDIS=\"${_PHPREDIS_SIX_MODERN_VRN}\"\n    _PHPREDIS_BUILD=\"--enable-redis-igbinary --enable-redis-lzf\"\n  fi\n  if [ ! -e \"${_THIS_PHP_EXT_DIR}/redis.so\" ] \\\n    || [ ! 
-e \"${_pthLog}/phpredis-update-${_USE_PHPREDIS}-${_T_PHP_VRN}.log\" ]; then\n    _msg \"INFO: Installing PhpRedis ${_USE_PHPREDIS} upgrade for PHP ${_T_PHP_VRN}...\"\n    ldconfig 2> /dev/null\n    cd /var/opt\n    rm -rf phpredis*\n    _get_dev_src \"phpredis-${_USE_PHPREDIS}.tar.gz\"\n    cd /var/opt/phpredis\n    _mrun \"${_T_PHP_PTH}/phpize\"\n    if [ -e \"/var/opt/phpredis/liblzf/lzf.h\" ] && [ -e \"/lib/x86_64-linux-gnu/liblzf.so\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: _PHPREDIS_BUILD is ${_PHPREDIS_BUILD}\"\n      fi\n    else\n      _PHPREDIS_BUILD=\"--enable-redis-igbinary\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: _PHPREDIS_BUILD is ${_PHPREDIS_BUILD}\"\n      fi\n    fi\n    _mrun \"bash ./configure ${_PHPREDIS_BUILD} --with-php-config=${_T_PHP_CFG}\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/redis.so\" ]; then\n      _msg \"WARN: Installing PhpRedis for PHP ${_T_PHP_VRN} failed!\"\n    else\n      touch ${_pthLog}/phpredis-update-${_USE_PHPREDIS}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" -gt 71 ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/mcrypt.so\" ] \\\n      || [ ! -e \"${_pthLog}/mcrypt-${_PHP_MCRYPT}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing MCRYPT ${_PHP_MCRYPT} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf mcrypt*\n      _get_dev_src \"mcrypt-${_PHP_MCRYPT}.tgz\"\n      cd /var/opt/mcrypt-${_PHP_MCRYPT}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/mcrypt.so\" ]; then\n        _msg \"WARN: Installing MCRYPT for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/mcrypt-${_PHP_MCRYPT}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ] \\\n      || [ ! -e \"${_pthLog}/f_uploadprogress-${_UPROGRESS_LEGACY_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing UploadProgress ${_UPROGRESS_LEGACY_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf uploadprogress*\n      _get_dev_src \"uploadprogress-${_UPROGRESS_LEGACY_VRN}.tgz\"\n      cd /var/opt/uploadprogress-${_UPROGRESS_LEGACY_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ]; then\n        _msg \"WARN: Installing UploadProgress for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/f_uploadprogress-${_UPROGRESS_LEGACY_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  elif [ \"$1\" -gt 71 ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ] \\\n      || [ ! -e \"${_pthLog}/f_uploadprogress-${_UPROGRESS_EIGHT_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing UploadProgress ${_UPROGRESS_EIGHT_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf uploadprogress*\n      _get_dev_src \"uploadprogress-${_UPROGRESS_EIGHT_VRN}.tgz\"\n      cd /var/opt/uploadprogress-${_UPROGRESS_EIGHT_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ]; then\n        _msg \"WARN: Installing UploadProgress for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/f_uploadprogress-${_UPROGRESS_EIGHT_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  else\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ] \\\n      || [ ! -e \"${_pthLog}/f_uploadprogress-${_UPROGRESS_SEVEN_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing UploadProgress ${_UPROGRESS_SEVEN_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf uploadprogress*\n      _get_dev_src \"uploadprogress-${_UPROGRESS_SEVEN_VRN}.tar.gz\"\n      cd /var/opt/uploadprogress-${_UPROGRESS_SEVEN_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/uploadprogress.so\" ]; then\n        _msg \"WARN: Installing UploadProgress for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/f_uploadprogress-${_UPROGRESS_SEVEN_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" -lt 80 ]; then\n    if [ \"$1\" = \"56\" ]; then\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/jsmin.so\" ] \\\n        || [ ! -e \"${_pthLog}/php-pecl-jsmin-${_JSMIN_PHP_LEGACY_VRN}-${_T_PHP_VRN}.log\" ]; then\n        _msg \"INFO: Installing JSMin ${_JSMIN_PHP_LEGACY_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n        cd /var/opt\n        rm -rf pecl-jsmin*\n        _get_dev_src \"pecl-jsmin-${_JSMIN_PHP_LEGACY_VRN}.tar.gz\"\n        cd /var/opt/pecl-jsmin-${_JSMIN_PHP_LEGACY_VRN}\n        _mrun \"${_T_PHP_PTH}/phpize\"\n        _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n        _mrun \"make -j $(nproc) --quiet\"\n        _mrun \"make --quiet install\"\n        ldconfig 2> /dev/null\n        if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/jsmin.so\" ]; then\n          _msg \"WARN: Installing JSMin for PHP ${_T_PHP_VRN} failed!\"\n        else\n          touch ${_pthLog}/php-pecl-jsmin-${_JSMIN_PHP_LEGACY_VRN}-${_T_PHP_VRN}.log\n        fi\n      fi\n    else\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/jsmin.so\" ] \\\n        || [ ! -e \"${_pthLog}/php-pecl-jsmin-${_JSMIN_PHP_MODERN_VRN}-${_T_PHP_VRN}.log\" ]; then\n        _msg \"INFO: Installing JSMin ${_JSMIN_PHP_MODERN_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n        cd /var/opt\n        rm -rf pecl-jsmin*\n        _get_dev_src \"pecl-jsmin-${_JSMIN_PHP_MODERN_VRN}.tar.gz\"\n        cd /var/opt/pecl-jsmin-${_JSMIN_PHP_MODERN_VRN}\n        _mrun \"${_T_PHP_PTH}/phpize\"\n        _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n        _mrun \"make -j $(nproc) --quiet\"\n        _mrun \"make --quiet install\"\n        ldconfig 2> /dev/null\n        if [ ! -e \"${_THIS_PHP_EXT_DIR}/jsmin.so\" ]; then\n          _msg \"WARN: Installing JSMin for PHP ${_T_PHP_VRN} failed!\"\n        else\n          touch ${_pthLog}/php-pecl-jsmin-${_JSMIN_PHP_MODERN_VRN}-${_T_PHP_VRN}.log\n        fi\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/twig.so\" ] \\\n      || [ ! -e \"${_pthLog}/php-twig-${_TWIGC_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing Twig C ${_TWIGC_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf twig*\n      _get_dev_src \"twig-${_TWIGC_VRN}.tar.gz\"\n      cd /var/opt/twig-${_TWIGC_VRN}/ext/twig\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/twig.so\" ]; then\n        _msg \"WARN: Installing Twig C for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/php-twig-${_TWIGC_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ] \\\n    && [ ! -e \"${_pthLog}/php-geos-${_GEOS_VRN}-${_T_PHP_VRN}.log\" ]; then\n    if [ \"${_PHP_GEOS}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"GEO\" ]]; then\n      _msg \"INFO: Building GEOS ${_GEOS_VRN} upgrade for PHP ${_T_PHP_VRN} from sources...\"\n      if [ ! -e \"${_pthLog}/geos-${_xSrl}-${_X_VERSION}.log\" ]; then\n        _apt_clean_update\n        for _PKG in libgeos-dev libgeos-c1; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n        touch ${_pthLog}/geos-${_xSrl}-${_X_VERSION}.log\n      fi\n      cd /var/opt\n      rm -rf geos*\n      _get_dev_src \"geos-${_GEOS_VRN}.tar.bz2\"\n      cd geos-${_GEOS_VRN}\n      _PHP_V=\"56\"\n      for e in ${_PHP_V}; do\n        if [ \"$1\" = \"${e}\" ]; then\n          find . -type f -print0 \\\n            | xargs -0 sed -i 's/\\/usr\\/local/\\/opt\\/php${e}/g' &> /dev/null\n          wait\n        fi\n      done\n      _mrun \"bash ./configure --enable-php\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      touch ${_pthLog}/php-geos-${_GEOS_VRN}-${_T_PHP_VRN}.log\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ] \\\n    && [ ! 
-e \"${_pthLog}/php-mongo-${_MONGO_VRN}-${_T_PHP_VRN}.log\" ]; then\n    if [ \"${_PHP_MONGODB}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"MNG\" ]]; then\n      _msg \"INFO: Installing MongoDB ${_MONGO_VRN} PHP driver upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf mongo*\n      _get_dev_src \"mongo-${_MONGO_VRN}.tgz\"\n      cd /var/opt/mongo-${_MONGO_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/mongo.so\" ]; then\n        _msg \"WARN: Installing MongoDB driver for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/php-mongo-${_MONGO_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  if [ \"$1\" != 56 ] \\\n    && [ ! -e \"${_pthLog}/php-mongodb-${_MONGODB_VRN}-${_T_PHP_VRN}.log\" ]; then\n    if [ \"${_PHP_MONGODB}\" = \"YES\" ] || [[ \"${_XTRAS_LIST}\" =~ \"MNG\" ]]; then\n      _msg \"INFO: Installing MongoDB ${_MONGO_VRN} PHP driver upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf mongodb*\n      _get_dev_src \"mongodb-${_MONGODB_VRN}.tgz\"\n      cd /var/opt/mongodb-${_MONGODB_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/mongo.so\" ]; then\n        _msg \"WARN: Installing MongoDB driver for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/php-mongodb-${_MONGODB_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" != 86 ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/imagick.so\" ] \\\n      || [ ! 
-e \"${_pthLog}/imagick-${_IMAGICK_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing Imagick ${_IMAGICK_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n        || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n        || [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n        if [ -e \"/usr/bin/MagickWand-config\" ]; then\n          mv -f /usr/bin/MagickWand-config /var/backups/usr-bin-MagickWand-config\n        fi\n        if [ -e \"/usr/local/bin/MagickWand-config\" ]; then\n          mv -f /usr/local/bin/MagickWand-config /var/backups/usr-local-bin-MagickWand-config\n        fi\n      fi\n      if [ ! -e \"${_pthLog}/libmagickwand-dev-${_IMAGICK_VRN}-rebuild.log\" ]; then\n        _apt_clean_update\n        for _PKG in libmagickwand-dev; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n        touch ${_pthLog}/libmagickwand-dev-${_IMAGICK_VRN}-rebuild.log\n      fi\n      cd /var/opt\n      rm -rf imagick*\n      _get_dev_src \"imagick-${_IMAGICK_VRN}.tgz\"\n      cd /var/opt/imagick-${_IMAGICK_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/imagick.so\" ]; then\n        _msg \"WARN: Installing Imagick for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/imagick-${_IMAGICK_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ \"$1\" = \"56\" ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/mailparse.so\" ] \\\n      || [ ! 
-e \"${_pthLog}/mailparse-${_MAILPARSE_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing MailParse ${_MAILPARSE_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf mailparse*\n      _get_dev_src \"mailparse-${_MAILPARSE_VRN}.tgz\"\n      cd /var/opt/mailparse-${_MAILPARSE_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/mailparse.so\" ]; then\n        _msg \"WARN: Installing MailParse for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/mailparse-${_MAILPARSE_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n  ###--------------------###\n  if [ ! -e \"${_pthLog}/f_libyaml-${_LIB_YAML_VRN}.log\" ] \\\n    || [ -e \"/usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.4\" ]; then\n    _msg \"INFO: Installing LibYAML upgrade for PHP...\"\n    cd /var/opt\n    rm -rf yaml*\n    _get_dev_src \"yaml-${_LIB_YAML_VRN}.tar.gz\"\n    cd /var/opt/yaml-${_LIB_YAML_VRN}\n    _mrun \"sh ./bootstrap\"\n    _mrun \"bash ./configure --prefix=/usr\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    touch ${_pthLog}/f_libyaml-${_LIB_YAML_VRN}.log\n    rm -f ${_pthLog}/yaml*.log\n    rm -f ${_pthLog}/f_yaml*.log\n    _X86_64_TEST=$(uname -m)\n    if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n      if [ ! 
-e \"/usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.9\" ] \\\n        || [ -e \"/usr/lib/x86_64-linux-gnu/libyaml-0.so.2.0.4\" ]; then\n        [ -d \"/var/backups/libyaml\" ] || mkdir -p /var/backups/libyaml\n        cp -af /usr/lib/x86_64-linux-gnu/libyaml* /var/backups/libyaml/ &> /dev/null\n        if [ -e \"/usr/lib/libyaml-0.so.2.0.9\" ]; then\n          rm -f /usr/lib/x86_64-linux-gnu/libyaml*\n          rm -f /usr/lib/libyaml-0.so.2.0.4\n        fi\n        cp -af /usr/lib/libyaml* /usr/lib/x86_64-linux-gnu/\n      fi\n    fi\n  fi\n  if [ \"$1\" = \"56\" ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/yaml.so\" ] \\\n      || [ ! -e \"${_pthLog}/f_yaml-${_YAML_PHP_LEGACY_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing YAML ${_YAML_PHP_LEGACY_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf yaml*\n      _get_dev_src \"yaml-${_YAML_PHP_LEGACY_VRN}.tgz\"\n      cd /var/opt/yaml-${_YAML_PHP_LEGACY_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/yaml.so\" ]; then\n        _msg \"WARN: Installing YAML for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/f_yaml-${_YAML_PHP_LEGACY_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  elif [ \"$1\" = \"70\" ]; then\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/yaml.so\" ] \\\n      || [ ! 
-e \"${_pthLog}/f_yaml-${_YAML_PHP_SEVENO_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing YAML ${_YAML_PHP_SEVENO_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf yaml*\n      _get_dev_src \"yaml-${_YAML_PHP_SEVENO_VRN}.tgz\"\n      cd /var/opt/yaml-${_YAML_PHP_SEVENO_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! -e \"${_THIS_PHP_EXT_DIR}/yaml.so\" ]; then\n        _msg \"WARN: Installing YAML for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/f_yaml-${_YAML_PHP_SEVENO_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  else\n    if [ ! -e \"${_THIS_PHP_EXT_DIR}/yaml.so\" ] \\\n      || [ ! -e \"${_pthLog}/f_yaml-${_YAML_PHP_MODERN_VRN}-${_T_PHP_VRN}.log\" ]; then\n      _msg \"INFO: Installing YAML ${_YAML_PHP_MODERN_VRN} upgrade for PHP ${_T_PHP_VRN}...\"\n      cd /var/opt\n      rm -rf yaml*\n      _get_dev_src \"yaml-${_YAML_PHP_MODERN_VRN}.tgz\"\n      cd /var/opt/yaml-${_YAML_PHP_MODERN_VRN}\n      _mrun \"${_T_PHP_PTH}/phpize\"\n      _mrun \"bash ./configure --with-php-config=${_T_PHP_CFG}\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      if [ ! 
-e \"${_THIS_PHP_EXT_DIR}/yaml.so\" ]; then\n        _msg \"WARN: Installing YAML for PHP ${_T_PHP_VRN} failed!\"\n      else\n        touch ${_pthLog}/f_yaml-${_YAML_PHP_MODERN_VRN}-${_T_PHP_VRN}.log\n      fi\n    fi\n  fi\n}\n\n#\n# Install modern PHP version\n_install_php_multi() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_php_multi $1\"\n  fi\n  _if_to_do_fix\n  _get_php_conf_extra\n  _PHP_EXTRA=$(echo \"${_PHP_EXTRA}\" | sed \"s/--with-curlwrappers//g\" 2>&1)\n  ###--------------------###\n  _msg \"INFO: Building PHP ${_PHP_VERSION} from sources...\"\n  _apt_clean_update\n  for _PKG in libonig-dev; do\n    if ! _pkg_installed \"${_PKG}\"; then\n      _mrun \"${_INSTAPP} ${_PKG}\"\n    fi\n  done\n  if [[ \"${_PHP_EXTRA_CONF}\" =~ \"--with-tidy\" ]] \\\n    && [ ! -e \"${_pthLog}/libtidy-${_LIB_TIDY_VRN}.log\" ]; then\n    if [ -e \"/usr/lib/libtidy.so\" ]; then\n      ###\n      for _PKG in libtidy-dev libtidy-0.99-0 tidy; do\n        if _pkg_installed \"${_PKG}\"; then\n          _mrun \"apt-get remove ${_PKG} -y --purge --auto-remove -qq\"\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"PCKG: ${_PKG} removed as requested.\"\n          fi\n        fi\n      done\n      ###\n    fi\n    cd /var/opt\n    rm -rf tidy*\n    _apt_clean_update\n    for _PKG in cmake; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    _get_dev_src \"tidy-html5-${_LIB_TIDY_VRN}.tar.gz\"\n    cd tidy-html5-${_LIB_TIDY_VRN}/build/cmake\n    _mrun \"cmake ../.. 
-DCMAKE_INSTALL_PREFIX=/usr/\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ -e \"/usr/lib/libtidy.so\" ]; then\n      cd /usr/lib\n      ln -sfn libtidy.so.${_LIB_TIDY_VRN} libtidy-0.99.so.0\n      touch ${_pthLog}/libtidy-${_LIB_TIDY_VRN}.log\n    fi\n  fi\n  cd /var/opt\n  rm -rf php*\n  _get_dev_src \"php-${_PHP_VERSION}.tar.bz2\"\n  _msg \"INFO: Building PHP ${_PHP_VERSION} part 1/3\"\n  cd /var/opt/php-${_PHP_VERSION}\n  if [ \"${_OS_CODE}\" = \"stretch\" ] \\\n    || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _mrun \"sh ./buildconf --force\"\n    #_patchFile=\"disable_SSLv2_for_openssl_1_0_0.patch\"\n    #patch -p1 < ${_bldPth}/aegir/patches/${_patchFile}\n  fi\n  ### cd sapi/fpm/fpm\n  ### patch -p1 < ${_bldPth}/aegir/patches/fpm_main.c.patch &> /dev/null\n  ### cd /var/opt/php-${_PHP_VERSION}\n  _msg \"INFO: Building PHP ${_PHP_VERSION} part 2/3\"\n  if [ \"$1\" -lt 74 ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA}\"\n  else\n    _PHP_EXTRA=\"${_PHP_EXTRA} --enable-intl\"\n  fi\n  if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n    && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n    _SSL_BINARY=/usr/local/ssl3/bin/openssl\n  else\n    _SSL_BINARY=/usr/local/ssl/bin/openssl\n  fi\n  _SSL_ITD=$(${_SSL_BINARY} version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}')\n  if [[ \"${_SSL_ITD}\" =~ \"${_OPENSSL_MODERN_VRN}\" ]] \\\n    || [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n    _SSL_PATH=\"/usr/local/ssl3\"\n    _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n  else\n    _SSL_PATH=\"/usr/local/ssl\"\n    _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n  fi\n  if [ \"$1\" -lt 74 ]; then\n    if [ -e \"/usr/local/ssl/lib/libssl.so.1.1\" ] \\\n      || [ -e \"/usr/local/ssl/lib/libssl.so.1.0.0\" ] \\\n      || [ -e \"/usr/local/ssl/lib/libssl.so.1.0.1\" ] \\\n      || [ -e \"/usr/local/ssl/lib/libssl.so.1.0.2\" ]; then\n      _SSL_PATH=\"/usr/local/ssl\"\n      
_SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n    fi\n  fi\n  _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n  if [ -e \"${_SSL_LIB_PATH}\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} --with-openssl=${_SSL_PATH}\"\n  else\n    _PHP_EXTRA=\"${_PHP_EXTRA} --with-openssl\"\n  fi\n  if [ -d \"/usr/local/include/curl\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} --with-curl=/usr/local\"\n  else\n    _PHP_EXTRA=\"${_PHP_EXTRA} --with-curl\"\n  fi\n  _PRE_SNFR=\"--enable-zip \\\n             --with-gd \\\n             --with-jpeg-dir=/usr \\\n             --with-png-dir=/usr \\\n             --with-xpm-dir=/usr \\\n             --with-webp-dir=/usr \\\n             --with-ldap \\\n             --with-gmp \\\n             --with-xmlrpc\"\n  _NEW_SNFR=\"--with-zip \\\n             --enable-gd \\\n             --with-jpeg \\\n             --with-xpm \\\n             --with-webp \\\n             --with-freetype \\\n             --with-ldap \\\n             --with-gmp\"\n  if [ \"${_OS_CODE}\" != \"jessie\" ]; then\n    _NEW_SNFR=\"${_NEW_SNFR} --with-sodium\"\n  fi\n  if [ \"$1\" -gt 73 ]; then\n    _PHP_EXTRA=$(echo \"${_PHP_EXTRA}\" | sed \"s/--with-freetype-dir=\\/usr//g\" 2>&1)\n    _PHP_EXTRA=\"${_PHP_EXTRA} ${_NEW_SNFR}\"\n  elif [ \"$1\" = \"73\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} ${_PRE_SNFR}\"\n  elif [ \"$1\" = \"72\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} ${_PRE_SNFR}\"\n  elif [ \"$1\" = \"71\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} ${_PRE_SNFR} --enable-gd-native-ttf --with-mcrypt\"\n  elif [ \"$1\" = \"70\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} ${_PRE_SNFR} --enable-gd-native-ttf --with-mcrypt\"\n  elif [ \"$1\" = \"56\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} ${_PRE_SNFR} --enable-gd-native-ttf --with-mcrypt --with-mysql=mysqlnd\"\n    if [ \"${_OS_CODE}\" != \"jessie\" ]; then\n      _patchFile=\"PHP-5.6.31-OpenSSL-1.1.0-compatibility-20170801.patch\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        patch -p1 < ${_bldPth}/aegir/patches/${_patchFile}\n      
else\n        patch -p1 < ${_bldPth}/aegir/patches/${_patchFile} &> /dev/null\n      fi\n    fi\n  fi\n  if [ \"$1\" -gt 80 ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} --with-avif\"\n  fi\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n    || [ \"${_OS_CODE}\" = \"beowulf\" ] \\\n    || [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n    || [ \"${_OS_CODE}\" = \"bullseye\" ] \\\n    || [ \"${_OS_CODE}\" = \"buster\" ]; then\n    if [ \"$1\" -lt 74 ]; then\n      _patchFile=\"freetype.patch\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        patch -p1 < ${_bldPth}/aegir/patches/${_patchFile}\n      else\n        patch -p1 < ${_bldPth}/aegir/patches/${_patchFile} &> /dev/null\n      fi\n    fi\n    if [ \"$1\" -lt 81 ]; then\n      _patchFile=\"php-8.1-openssl3.patch\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        patch -p1 < ${_bldPth}/aegir/patches/${_patchFile}\n      else\n        patch -p1 < ${_bldPth}/aegir/patches/${_patchFile} &> /dev/null\n      fi\n    fi\n  fi\n  if [ \"${_OS_CODE}\" != \"jessie\" ] \\\n    && [ \"${_OS_CODE}\" != \"excalibur\" ] \\\n    && [ \"$1\" -lt 85 ] \\\n    && [ ! -e \"/root/.rebuild_src_on_auto_before_reboot.info\" ] \\\n    && [ ! -e \"/root/.skip-aegir-master-upgrade.cnf\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} \\\n                --with-imap \\\n                --with-imap-ssl \\\n                --with-kerberos\"\n  fi\n  if [ \"$1\" -lt 85 ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} --with-pear --enable-opcache\"\n  fi\n  if [ ! 
-z \"${_PHP_EXTRA}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \"${_PHP_EXTRA}\" | fmt -su -w 2500 > /var/backups/php_extra_${_PHP_VERSION}.txt\n      _DISPLAY_PHP_EXTRA=$(cat /var/backups/php_extra_${_PHP_VERSION}.txt 2>&1)\n      _DISPLAY_PHP_EXTRA=$(echo -n ${_DISPLAY_PHP_EXTRA} | tr -d '\\n')\n      _msg \"INFO: This PHP ${_PHP_VERSION} is built with:\"\n      _msg \"INFO: ${_DISPLAY_PHP_EXTRA}\"\n    fi\n  fi\n  _mrun \"LIBS=\\\"-ldl -lpthread\\\" PKG_CONFIG_PATH=\\\"${_PKG_CONFIG_PATH}\\\" ./configure \\\n                --prefix=/opt/php$1 \\\n                --enable-fpm \\\n                --enable-bcmath \\\n                --enable-calendar \\\n                --enable-exif \\\n                --enable-ftp \\\n                --enable-mbstring \\\n                --enable-pcntl \\\n                --enable-soap \\\n                --with-fpm-group=www-data \\\n                --with-fpm-user=www-data \\\n                --with-mysql-sock=/run/mysqld/mysqld.sock \\\n                --with-mysqli=mysqlnd \\\n                --with-pdo-mysql=mysqlnd \\\n                --with-xsl \\\n                --with-zlib \\\n                --with-bz2 \\\n                ${_PHP_EXTRA}\"\n  if [ -e \"/var/opt/php-${_PHP_VERSION}/Makefile\" ]; then\n    _msg \"INFO: Building PHP ${_PHP_VERSION} part 3/3\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    sed -i \"s/^EXTRA_LIBS = \\-lcrypt/EXTRA_LIBS = \\-llber \\-lcrypt/g\" /var/opt/php-${_PHP_VERSION}/Makefile\n    _mrun \"make -j $(nproc)\"\n    _mrun \"make install\"\n    ldconfig 2> /dev/null\n  else\n    _msg \"WARN: No Makefile, configure for PHP ${_PHP_VERSION} failed\"\n    _msg \"INFO: Waiting 60 seconds before trying again...\"\n    sleep 60\n    _msg \"INFO: Building PHP ${_PHP_VERSION} part 2/3 (again)\"\n    _mrun \"make clean\"\n    _mrun \"LIBS=\\\"-ldl -lpthread\\\" PKG_CONFIG_PATH=\\\"${_PKG_CONFIG_PATH}\\\" ./configure \\\n                --prefix=/opt/php$1 \\\n  
              --enable-fpm \\\n                --enable-bcmath \\\n                --enable-calendar \\\n                --enable-exif \\\n                --enable-ftp \\\n                --enable-mbstring \\\n                --enable-pcntl \\\n                --enable-soap \\\n                --with-fpm-group=www-data \\\n                --with-fpm-user=www-data \\\n                --with-mysql-sock=/run/mysqld/mysqld.sock \\\n                --with-mysqli=mysqlnd \\\n                --with-pdo-mysql=mysqlnd \\\n                --with-xsl \\\n                --with-zlib \\\n                ${_PHP_EXTRA}\"\n    if [ -f \"/var/opt/php-${_PHP_VERSION}/Makefile\" ]; then\n      sed -i \"s/^EXTRA_LIBS = \\-lcrypt/EXTRA_LIBS = \\-llber \\-lcrypt/g\" /var/opt/php-${_PHP_VERSION}/Makefile\n      _msg \"INFO: Building PHP ${_PHP_VERSION} part 3/3 (again)\"\n      _msg \"WAIT: This may take a while, please wait...\"\n      _mrun \"make -j $(nproc)\"\n      _mrun \"make install\"\n      ldconfig 2> /dev/null\n    else\n      _msg \"ALRT: No Makefile, building PHP ${_PHP_VERSION} failed again!\"\n      _msg \"INFO: Waiting 3 minutes for your input or ctrl-c...\"\n      sleep 180\n      _msg \"INFO: Moving on...\"\n    fi\n  fi\n  if [ -x \"/opt/php$1/bin/php\" ]; then\n    rm -f /usr/bin/php\n    rm -f /usr/bin/php-cli\n    ln -sfn /opt/php$1/bin/php /usr/bin/php\n    ln -sfn /opt/php$1/bin/php /usr/bin/php-cli\n    if [ -x \"/opt/php$1/bin/phpize\" ]; then\n      rm -f /usr/bin/phpize\n      ln -sfn /opt/php$1/bin/phpize /usr/bin/phpize\n    fi\n    if [ -x \"/opt/php$1/bin/php-config\" ]; then\n      rm -f /usr/bin/php-config\n      ln -sfn /opt/php$1/bin/php-config /usr/bin/php-config\n    fi\n    _T_PHP_VRN=\"${_PHP_VERSION}\"\n    _T_PHP_PTH=\"/opt/php$1/bin\"\n    _T_PHP_CFG=\"/opt/php$1/bin/php-config\"\n    _THIS_PHP_EXT_DIR=\n    if [ \"$1\" = \"56\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php56/lib/php/extensions/no-debug-non-zts-${_PHP56_API}\"\n    elif [ \"$1\" = 
\"70\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php70/lib/php/extensions/no-debug-non-zts-${_PHP70_API}\"\n    elif [ \"$1\" = \"71\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php71/lib/php/extensions/no-debug-non-zts-${_PHP71_API}\"\n    elif [ \"$1\" = \"72\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php72/lib/php/extensions/no-debug-non-zts-${_PHP72_API}\"\n    elif [ \"$1\" = \"73\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php73/lib/php/extensions/no-debug-non-zts-${_PHP73_API}\"\n    elif [ \"$1\" = \"74\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php74/lib/php/extensions/no-debug-non-zts-${_PHP74_API}\"\n    elif [ \"$1\" = \"80\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php80/lib/php/extensions/no-debug-non-zts-${_PHP80_API}\"\n    elif [ \"$1\" = \"81\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php81/lib/php/extensions/no-debug-non-zts-${_PHP81_API}\"\n    elif [ \"$1\" = \"82\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php82/lib/php/extensions/no-debug-non-zts-${_PHP82_API}\"\n    elif [ \"$1\" = \"83\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php83/lib/php/extensions/no-debug-non-zts-${_PHP83_API}\"\n    elif [ \"$1\" = \"84\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php84/lib/php/extensions/no-debug-non-zts-${_PHP84_API}\"\n    elif [ \"$1\" = \"85\" ]; then\n      _THIS_PHP_EXT_DIR=\"/opt/php85/lib/php/extensions/no-debug-non-zts-${_PHP85_API}\"\n    fi\n    if [ ! 
-z \"${_THIS_PHP_EXT_DIR}\" ]; then\n      _install_php_extensions \"$1\"\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"WARN: _THIS_PHP_EXT_DIR for PHP $1 empty in _install_php_multi\"\n      fi\n    fi\n    rm -f /etc/init.d/php$1-fpm*\n    cp -af ${_locCnf}/php/php$1-fpm /etc/init.d/php$1-fpm\n    chmod 755 /etc/init.d/php$1-fpm\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      touch ${_pthLog}/re-installed-php${1}-on-post_major_os_upgrade.info\n    fi\n    _mrun \"update-rc.d php$1-fpm defaults\"\n  else\n    _msg \"WARN: Building PHP ${_PHP_VERSION} failed!\"\n  fi\n}\n\n#\n# Update PHP extensions\n_php_multi_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_multi_update $1\"\n  fi\n  _T_PHP_VRN=\"${_PHP_VERSION}\"\n  _T_PHP_PTH=\"/opt/php$1/bin\"\n  _T_PHP_CFG=\"/opt/php$1/bin/php-config\"\n  _THIS_PHP_EXT_DIR=\n  if [ \"$1\" = \"56\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php56/lib/php/extensions/no-debug-non-zts-${_PHP56_API}\"\n  elif [ \"$1\" = \"70\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php70/lib/php/extensions/no-debug-non-zts-${_PHP70_API}\"\n  elif [ \"$1\" = \"71\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php71/lib/php/extensions/no-debug-non-zts-${_PHP71_API}\"\n  elif [ \"$1\" = \"72\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php72/lib/php/extensions/no-debug-non-zts-${_PHP72_API}\"\n  elif [ \"$1\" = \"73\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php73/lib/php/extensions/no-debug-non-zts-${_PHP73_API}\"\n  elif [ \"$1\" = \"74\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php74/lib/php/extensions/no-debug-non-zts-${_PHP74_API}\"\n  elif [ \"$1\" = \"80\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php80/lib/php/extensions/no-debug-non-zts-${_PHP80_API}\"\n  elif [ \"$1\" = \"81\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php81/lib/php/extensions/no-debug-non-zts-${_PHP81_API}\"\n  elif [ \"$1\" = \"82\" ]; then\n    
_THIS_PHP_EXT_DIR=\"/opt/php82/lib/php/extensions/no-debug-non-zts-${_PHP82_API}\"\n  elif [ \"$1\" = \"83\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php83/lib/php/extensions/no-debug-non-zts-${_PHP83_API}\"\n  elif [ \"$1\" = \"84\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php84/lib/php/extensions/no-debug-non-zts-${_PHP84_API}\"\n  elif [ \"$1\" = \"85\" ]; then\n    _THIS_PHP_EXT_DIR=\"/opt/php85/lib/php/extensions/no-debug-non-zts-${_PHP85_API}\"\n  fi\n  if [ ! -z \"${_THIS_PHP_EXT_DIR}\" ]; then\n    _php_extensions_update \"$1\"\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"WARN: _THIS_PHP_EXT_DIR for PHP $1 empty in _php_multi_update\"\n    fi\n  fi\n}\n\n#\n# Update New Relic.\n_newrelic_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _newrelic_update\"\n  fi\n  ###--------------------###\n  _X86_64_TEST=$(uname -m)\n  if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n    _SYSTEM_ARCH=\"x64\"\n  else\n    _SYSTEM_ARCH=\"x32\"\n  fi\n  if [ ! -z \"${_NEWRELIC_KEY}\" ] && [ \"${_SYSTEM_ARCH}\" = \"x64\" ]; then\n    if [ -x \"/usr/bin/gpg2\" ]; then\n      _GPG=gpg2\n    else\n      _GPG=gpg\n    fi\n    _NEWRELIC_KEYS_SIG=\"548C16BF\"\n    _nrList=\"/etc/apt/sources.list.d/newrelic.list\"\n    if [ -e \"/etc/newrelic/newrelic.cfg\" ] \\\n      || [ -e \"/etc/apt/sources.list.d/newrelic.list\" ]; then\n      _msg \"INFO: Uninstalling previous version of New Relic Apps Monitor...\"\n      cd /var/opt\n      if [ ! -e \"/etc/apt/keyrings/newrelic.gpg\" ] \\\n        || [ -e \"/etc/apt/trusted.gpg.d/newrelic.gpg\" ] \\\n        || [ -e \"/etc/apt/keyrings/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg\" ] \\\n        || [ -e /root/.force.newrelic.update.cnf ]; then\n        if [ ! 
-e \"/etc/apt/keyrings\" ]; then\n          mkdir -m 0755 -p /etc/apt/keyrings\n        fi\n        if [ -e \"/etc/apt/trusted.gpg.d/newrelic.gpg\" ]; then\n          rm -f /etc/apt/trusted.gpg.d/newrelic.gpg*\n        fi\n        if [ -e \"/etc/apt/keyrings/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg\" ]; then\n          rm -f /etc/apt/keyrings/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg\n        fi\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Retrieving ${_NEWRELIC_KEYS_SIG} key...\"\n        fi\n        apt-key del ${_NEWRELIC_KEYS_SIG} &> /dev/null\n        if [ ! -e \"/etc/apt/keyrings/newrelic.gpg\" ]; then\n          curl -fsSL ${_urlDev}/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg | ${_GPG} --dearmor -o /etc/apt/keyrings/newrelic.gpg\n        fi\n        chmod 644 /etc/apt/keyrings/newrelic.gpg\n      fi\n      _apt_clean_update\n      _mrun \"${_RMAPP} newrelic-php5 \\\n        newrelic-php5-common \\\n        newrelic-daemon \\\n        newrelic-sysmond\"\n      mkdir -p ${_vBs}/nr\n      mv -f /etc/newrelic \\\n        ${_vBs}/nr/etc-newrelic-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n      _pthNrx=\"lib/php/extensions/no-debug-non-zts\"\n      _PHP_EXT_DIR_85=\"/opt/php85/${_pthNrx}-${_PHP85_API}\"\n      _PHP_EXT_DIR_84=\"/opt/php84/${_pthNrx}-${_PHP84_API}\"\n      _PHP_EXT_DIR_83=\"/opt/php83/${_pthNrx}-${_PHP83_API}\"\n      _PHP_EXT_DIR_82=\"/opt/php82/${_pthNrx}-${_PHP82_API}\"\n      _PHP_EXT_DIR_81=\"/opt/php81/${_pthNrx}-${_PHP81_API}\"\n      _PHP_EXT_DIR_80=\"/opt/php80/${_pthNrx}-${_PHP80_API}\"\n      _PHP_EXT_DIR_74=\"/opt/php74/${_pthNrx}-${_PHP74_API}\"\n      _PHP_EXT_DIR_73=\"/opt/php73/${_pthNrx}-${_PHP73_API}\"\n      _PHP_EXT_DIR_72=\"/opt/php72/${_pthNrx}-${_PHP72_API}\"\n      _msg \"INFO: Installing latest version of New Relic Apps Monitor...\"\n      echo \"## New Relic APT Repository\" > ${_nrList}\n      if [ -e \"/etc/apt/keyrings/newrelic.gpg\" ]; then\n        echo \"deb 
[signed-by=/etc/apt/keyrings/newrelic.gpg] http://apt.newrelic.com/debian/ newrelic non-free\" >> ${_nrList}\n      else\n        echo \"deb http://apt.newrelic.com/debian/ newrelic non-free\" >> ${_nrList}\n      fi\n      _apt_clean_update\n      _mrun \"apt-get install newrelic-php5-common ${_nrmUpArg}\"\n      _mrun \"apt-get install newrelic-daemon ${_nrmUpArg}\"\n      _mrun \"apt-get install newrelic-php5 ${_nrmUpArg}\"\n      # cd /var/opt\n      # rm -rf /opt/newrelic*\n      # wget ${_wgetGet} ${_urlDev}/newrelic-php5-common_${_NEW_RELIC_VRN}_all.deb\n      # wget ${_wgetGet} ${_urlDev}/newrelic-daemon_${_NEW_RELIC_VRN}_all.deb\n      # wget ${_wgetGet} ${_urlDev}/newrelic-php5_${_NEW_RELIC_VRN}_all.deb\n      # dpkg -i /var/opt/newrelic-php5-common_${_NEW_RELIC_VRN}_all.deb &> /dev/null\n      # dpkg -i /var/opt/newrelic-daemon_${_NEW_RELIC_VRN}_amd64.deb &> /dev/null\n      # dpkg -i /var/opt/newrelic-php5_${_NEW_RELIC_VRN}_amd64.deb &> /dev/null\n      NR_PHPLIST=\"/opt/php72/bin:/opt/php73/bin:/opt/php74/bin:/opt/php80/bin:/opt/php81/bin:/opt/php82/bin:/opt/php83/bin:/opt/php84/bin:/opt/php85/bin\"\n      NR_SILENT=\"silent\"\n      export NR_INSTALL_PHPLIST=\"${NR_PHPLIST}\"\n      export NR_INSTALL_SILENT=\"${NR_SILENT}\"\n      newrelic-install install &> /dev/null\n      _X86_64_TEST=$(uname -m)\n      if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n        _SYSTEM_ARCH=\"x64\"\n      else\n        _SYSTEM_ARCH=\"x32\"\n      fi\n      _pthNra=\"/usr/lib/newrelic-php5/agent\"\n      if [ -e \"${_PHP_EXT_DIR_85}\" ] && [ ! -e \"${_PHP_EXT_DIR_85}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP85_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP85_API}.so \\\n          ${_PHP_EXT_DIR_85}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_84}\" ] && [ ! 
-e \"${_PHP_EXT_DIR_84}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP84_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP84_API}.so \\\n          ${_PHP_EXT_DIR_84}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_83}\" ] && [ ! -e \"${_PHP_EXT_DIR_83}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP83_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP83_API}.so \\\n          ${_PHP_EXT_DIR_83}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_82}\" ] && [ ! -e \"${_PHP_EXT_DIR_82}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP82_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP82_API}.so \\\n          ${_PHP_EXT_DIR_82}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_81}\" ] && [ ! -e \"${_PHP_EXT_DIR_81}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP81_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP81_API}.so \\\n          ${_PHP_EXT_DIR_81}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_80}\" ] && [ ! -e \"${_PHP_EXT_DIR_80}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP80_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP80_API}.so \\\n          ${_PHP_EXT_DIR_80}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_74}\" ] && [ ! -e \"${_PHP_EXT_DIR_74}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP74_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP74_API}.so \\\n          ${_PHP_EXT_DIR_74}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_73}\" ] && [ ! 
-e \"${_PHP_EXT_DIR_73}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP73_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP73_API}.so \\\n          ${_PHP_EXT_DIR_73}/newrelic.so\n      fi\n      if [ -e \"${_PHP_EXT_DIR_72}\" ] && [ ! -e \"${_PHP_EXT_DIR_72}/newrelic.so\" ] \\\n        && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP72_API}.so\" ]; then\n        ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP72_API}.so \\\n          ${_PHP_EXT_DIR_72}/newrelic.so\n      fi\n      if [ ! -e \"/etc/newrelic/newrelic.cfg\" ]; then\n        echo \"## New Relic Configuration\" > \\\n          /etc/newrelic/newrelic.cfg\n        echo \"license_key=${_NEWRELIC_KEY}\" >> \\\n          /etc/newrelic/newrelic.cfg\n        echo \"pidfile=/run/newrelic-daemon.pid\" >> \\\n          /etc/newrelic/newrelic.cfg\n        echo \"logfile=/var/log/newrelic/newrelic-daemon.log\" >> \\\n          /etc/newrelic/newrelic.cfg\n        echo \"loglevel=error\" >> \\\n          /etc/newrelic/newrelic.cfg\n      else\n        sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" \\\n          /etc/newrelic/newrelic.cfg &> /dev/null\n        wait\n      fi\n      sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" \\\n        /etc/newrelic/nrsysmond.cfg &> /dev/null\n      wait\n    fi\n  fi\n}\n\n#\n# Install New Relic.\n_if_install_php_newrelic() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_php_newrelic\"\n  fi\n  ###--------------------###\n  _X86_64_TEST=$(uname -m)\n  if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n    _SYSTEM_ARCH=\"x64\"\n  else\n    _SYSTEM_ARCH=\"x32\"\n  fi\n  if [ ! 
-z \"${_NEWRELIC_KEY}\" ] && [ \"${_SYSTEM_ARCH}\" = \"x64\" ]; then\n    if [ -x \"/usr/bin/gpg2\" ]; then\n      _GPG=gpg2\n    else\n      _GPG=gpg\n    fi\n    _MULTI_NR=NO\n    _NEWRELIC_KEYS_SIG=\"548C16BF\"\n    _nrList=\"/etc/apt/sources.list.d/newrelic.list\"\n    _PHP_EXT_DIR_85=\"/opt/php85/lib/php/extensions/no-debug-non-zts-${_PHP85_API}\"\n    _PHP_EXT_DIR_84=\"/opt/php84/lib/php/extensions/no-debug-non-zts-${_PHP84_API}\"\n    _PHP_EXT_DIR_83=\"/opt/php83/lib/php/extensions/no-debug-non-zts-${_PHP83_API}\"\n    _PHP_EXT_DIR_82=\"/opt/php82/lib/php/extensions/no-debug-non-zts-${_PHP82_API}\"\n    _PHP_EXT_DIR_81=\"/opt/php81/lib/php/extensions/no-debug-non-zts-${_PHP81_API}\"\n    _PHP_EXT_DIR_80=\"/opt/php80/lib/php/extensions/no-debug-non-zts-${_PHP80_API}\"\n    _PHP_EXT_DIR_74=\"/opt/php74/lib/php/extensions/no-debug-non-zts-${_PHP74_API}\"\n    _PHP_EXT_DIR_73=\"/opt/php73/lib/php/extensions/no-debug-non-zts-${_PHP73_API}\"\n    _PHP_EXT_DIR_72=\"/opt/php72/lib/php/extensions/no-debug-non-zts-${_PHP72_API}\"\n    if [ -e \"${_PHP_EXT_DIR_85}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_85}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_84}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_84}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_83}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_83}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_82}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_82}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_81}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_81}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_80}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_80}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_74}\" ] \\\n      && [ ! 
-e \"${_PHP_EXT_DIR_74}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_73}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_73}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ -e \"${_PHP_EXT_DIR_72}\" ] \\\n      && [ ! -e \"${_PHP_EXT_DIR_72}/newrelic.so\" ]; then\n      _MULTI_NR=YES\n    fi\n    if [ \"${_MULTI_NR}\" = \"YES\" ] \\\n      || [ ! -e \"${_pthLog}/newrelic-${_xSrl}-${_X_VERSION}.log\" ] \\\n      || [ ! -e \"/etc/newrelic/newrelic.cfg\" ] \\\n      || [ ! -e \"/etc/newrelic/nrsysmond.cfg\" ] \\\n      || [ ! -e \"/etc/apt/trusted.gpg.d/newrelic.gpg\" ] \\\n      || [ ! -e \"/etc/apt/keyrings/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg\" ] \\\n      || [ ! -e \"/etc/apt/sources.list.d/newrelic.list\" ]; then\n      _msg \"INFO: Installing New Relic Apps Monitor...\"\n      cd /var/opt\n      if [ ! -e \"/etc/apt/keyrings/newrelic.gpg\" ] \\\n        || [ -e \"/etc/apt/trusted.gpg.d/newrelic.gpg\" ] \\\n        || [ -e \"/etc/apt/keyrings/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg\" ] \\\n        || [ -e /root/.force.newrelic.update.cnf ]; then\n        if [ ! -e \"/etc/apt/keyrings\" ]; then\n          mkdir -m 0755 -p /etc/apt/keyrings\n        fi\n        if [ -e \"/etc/apt/trusted.gpg.d/newrelic.gpg\" ]; then\n          rm -f /etc/apt/trusted.gpg.d/newrelic.gpg*\n        fi\n        if [ -e \"/etc/apt/keyrings/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg\" ]; then\n          rm -f /etc/apt/keyrings/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg\n        fi\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Retrieving ${_NEWRELIC_KEYS_SIG} key...\"\n        fi\n        apt-key del ${_NEWRELIC_KEYS_SIG} &> /dev/null\n        if [ ! 
-e \"/etc/apt/keyrings/newrelic.gpg\" ]; then\n          curl -fsSL ${_urlDev}/newrelic-key-${_NEWRELIC_KEYS_SIG}.gpg | ${_GPG} --dearmor -o /etc/apt/keyrings/newrelic.gpg\n        fi\n        chmod 644 /etc/apt/keyrings/newrelic.gpg\n      fi\n      echo \"## New Relic APT Repository\" > ${_nrList}\n      if [ -e \"/etc/apt/keyrings/newrelic.gpg\" ]; then\n        echo \"deb [signed-by=/etc/apt/keyrings/newrelic.gpg] http://apt.newrelic.com/debian/ newrelic non-free\" >> ${_nrList}\n      else\n        echo \"deb http://apt.newrelic.com/debian/ newrelic non-free\" >> ${_nrList}\n      fi\n      _apt_clean_update\n      _mrun \"apt-get install newrelic-php5-common ${_nrmUpArg}\"\n      _mrun \"apt-get install newrelic-daemon ${_nrmUpArg}\"\n      _mrun \"apt-get install newrelic-php5 ${_nrmUpArg}\"\n      # cd /var/opt\n      # rm -rf /opt/newrelic*\n      # wget ${_wgetGet} ${_urlDev}/newrelic-php5-common_${_NEW_RELIC_VRN}_all.deb\n      # wget ${_wgetGet} ${_urlDev}/newrelic-daemon_${_NEW_RELIC_VRN}_all.deb\n      # wget ${_wgetGet} ${_urlDev}/newrelic-php5_${_NEW_RELIC_VRN}_all.deb\n      # dpkg -i /var/opt/newrelic-php5-common_${_NEW_RELIC_VRN}_all.deb &> /dev/null\n      # dpkg -i /var/opt/newrelic-daemon_${_NEW_RELIC_VRN}_amd64.deb &> /dev/null\n      # dpkg -i /var/opt/newrelic-php5_${_NEW_RELIC_VRN}_amd64.deb &> /dev/null\n      if [ \"${_MULTI_NR}\" = \"YES\" ]; then\n        _msg \"INFO: Installing latest version of New Relic Apps Monitor...\"\n        NR_PHPLIST=\"/opt/php72/bin:/opt/php73/bin:/opt/php74/bin:/opt/php80/bin:/opt/php81/bin:/opt/php82/bin:/opt/php83/bin:/opt/php84/bin:/opt/php85/bin\"\n        NR_SILENT=\"silent\"\n        export NR_INSTALL_PHPLIST=\"${NR_PHPLIST}\"\n        export NR_INSTALL_SILENT=\"${NR_SILENT}\"\n        newrelic-install install &> /dev/null\n        _X86_64_TEST=$(uname -m)\n        if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n          _SYSTEM_ARCH=\"x64\"\n        else\n          _SYSTEM_ARCH=\"x32\"\n        fi\n  
      _pthNra=\"/usr/lib/newrelic-php5/agent\"\n        if [ -e \"${_PHP_EXT_DIR_85}\" ] && [ ! -e \"${_PHP_EXT_DIR_85}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP85_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP85_API}.so \\\n            ${_PHP_EXT_DIR_85}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_84}\" ] && [ ! -e \"${_PHP_EXT_DIR_84}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP84_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP84_API}.so \\\n            ${_PHP_EXT_DIR_84}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_83}\" ] && [ ! -e \"${_PHP_EXT_DIR_83}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP83_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP83_API}.so \\\n            ${_PHP_EXT_DIR_83}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_82}\" ] && [ ! -e \"${_PHP_EXT_DIR_82}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP82_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP82_API}.so \\\n            ${_PHP_EXT_DIR_82}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_81}\" ] && [ ! -e \"${_PHP_EXT_DIR_81}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP81_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP81_API}.so \\\n            ${_PHP_EXT_DIR_81}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_80}\" ] && [ ! -e \"${_PHP_EXT_DIR_80}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP80_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP80_API}.so \\\n            ${_PHP_EXT_DIR_80}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_74}\" ] && [ ! 
-e \"${_PHP_EXT_DIR_74}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP74_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP74_API}.so \\\n            ${_PHP_EXT_DIR_74}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_73}\" ] && [ ! -e \"${_PHP_EXT_DIR_73}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP73_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP73_API}.so \\\n            ${_PHP_EXT_DIR_73}/newrelic.so\n        fi\n        if [ -e \"${_PHP_EXT_DIR_72}\" ] && [ ! -e \"${_PHP_EXT_DIR_72}/newrelic.so\" ] \\\n          && [ -e \"${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP72_API}.so\" ]; then\n          ln -sfn ${_pthNra}/${_SYSTEM_ARCH}/newrelic-${_PHP72_API}.so \\\n            ${_PHP_EXT_DIR_72}/newrelic.so\n        fi\n        if [ ! -e \"/etc/newrelic/newrelic.cfg\" ]; then\n          echo \"## New Relic Configuration\" > \\\n            /etc/newrelic/newrelic.cfg\n          echo \"license_key=${_NEWRELIC_KEY}\" >> \\\n            /etc/newrelic/newrelic.cfg\n          echo \"pidfile=/run/newrelic-daemon.pid\" >> \\\n            /etc/newrelic/newrelic.cfg\n          echo \"logfile=/var/log/newrelic/newrelic-daemon.log\" >> \\\n            /etc/newrelic/newrelic.cfg\n          echo \"loglevel=error\" >> \\\n            /etc/newrelic/newrelic.cfg\n        else\n          sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" \\\n            /etc/newrelic/newrelic.cfg &> /dev/null\n          wait\n        fi\n        sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" \\\n          /etc/newrelic/nrsysmond.cfg &> /dev/null\n        wait\n      fi\n    fi\n    touch ${_pthLog}/newrelic-${_xSrl}-${_X_VERSION}.log\n  fi\n}\n\n#\n# Check if the PHP has correct SSL headers version.\n_check_php_ssl_version() {\n  # Debug mode: log the start of the function\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: 
_check_php_ssl_version $1\"\n  fi\n\n  # Set OpenSSL version based on the OS version\n  case \"${_OS_CODE}\" in\n    excalibur|daedalus|chimaera|beowulf|trixie|bookworm|bullseye|buster)\n      _OPENSSL_USE_VRN=\"${_OPENSSL_EOL_VRN}\"  # 1.1.1w\n      if [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n        _OPENSSL_USE_VRN=\"${_OPENSSL_MODERN_VRN}\"  # 3.5.6\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Using modern OpenSSL ${_OPENSSL_USE_VRN} for PHP in ${_OS_CODE} by default\"\n        fi\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Using EOL OpenSSL ${_OPENSSL_USE_VRN} for PHP in ${_OS_CODE} by default\"\n        fi\n      fi\n      ;;\n    stretch)\n      _OPENSSL_USE_VRN=\"${_OPENSSL_EOL_VRN}\"  # 1.1.1w\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Using EOL OpenSSL ${_OPENSSL_USE_VRN} for PHP in ${_OS_CODE} by default\"\n      fi\n      ;;\n    *)\n      _OPENSSL_USE_VRN=\"${_OPENSSL_LEGACY_VRN}\"  # 1.0.2u\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Using Legacy OpenSSL ${_OPENSSL_USE_VRN} for PHP in ${_OS_CODE} by default\"\n      fi\n      ;;\n  esac\n\n  # Set OpenSSL version based on PHP version and detected OpenSSL version\n  case \"${1}\" in\n    56|70|71|72|73)\n      if [ -e \"/usr/local/ssl/lib/libssl.so.1.1\" ]; then\n        _OPENSSL_USE_VRN=\"${_OPENSSL_EOL_VRN}\"  # 1.1.1w\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Using EOL OpenSSL ${_OPENSSL_USE_VRN} for PHP ${1}\"\n        fi\n      elif [ -e \"/usr/local/ssl/lib/libssl.so.1.0.0\" ] \\\n        || [ -e \"/usr/local/ssl/lib/libssl.so.1.0.1\" ] \\\n        || [ -e \"/usr/local/ssl/lib/libssl.so.1.0.2\" ]; then\n        _OPENSSL_USE_VRN=\"${_OPENSSL_LEGACY_VRN}\"  # 1.0.2u\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Using Legacy OpenSSL ${_OPENSSL_LEGACY_VRN} for PHP ${1}\"\n        fi\n      fi\n      
;;\n  esac\n\n  # Check the installed PHP OpenSSL version\n  _PHP_SSL_TEST=$(/opt/php$1/bin/php -i | grep \"OpenSSL Header Version\" 2>&1)\n  _PHP_SSL_LOOK_FOR=\"${_OPENSSL_USE_VRN}\"\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Expected PHP OpenSSL Header Version: ${_PHP_SSL_LOOK_FOR}\"\n    _msg \"INFO: Detected PHP OpenSSL Header Version: ${_PHP_SSL_TEST}\"\n  fi\n\n  # Return 1 if a mismatch is found (rebuild needed), 0 otherwise\n  if [[ ! \"${_PHP_SSL_TEST}\" =~ \"${_PHP_SSL_LOOK_FOR}\" ]]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: PHP OpenSSL version mismatch detected, rebuild required\"\n    fi\n    return 1  # SSL version mismatch, rebuild needed\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: PHP OpenSSL version is correct, no rebuild needed\"\n  fi\n  return 0  # SSL version matches, no rebuild needed\n}\n\n#\n# Check if the PHP rebuild is required.\n_check_php_rebuild() {\n  # Debug mode: log the function call\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_php_rebuild $1\"\n  fi\n\n  _if_to_do_fix\n\n  # Get the installed PHP version (matches the PHP 5, 7, or 8 banner line)\n  _PHP_ITD=$(/opt/php$1/bin/php -v | grep -E 'PHP [758]' | awk '{print $2}' 2>&1)\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Detected PHP version: ${_PHP_ITD}\"\n  fi\n\n  # Compare installed PHP version with expected version\n  if [[ \"${_PHP_ITD}\" != \"${_PHP_VERSION}\" ]]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Upgrade needed from PHP ${_PHP_ITD} to ${_PHP_VERSION}\"\n    fi\n    _PHP_FORCE_REINSTALL=YES\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: PHP version ${_PHP_VERSION} already installed\"\n    fi\n  fi\n\n  # Run the SSL version check\n  if ! 
_check_php_ssl_version \"$1\"; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: PHP ${_PHP_ITD} uses incorrect OpenSSL, forcing rebuild\"\n    fi\n    _PHP_FORCE_REINSTALL=YES\n  fi\n\n  # Check for presence of PHP drivers\n  if [ -x \"/opt/php$1/bin/php\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Checking for mysql drivers in PHP $1\"\n    fi\n\n    if [ \"$1\" = \"56\" ]; then\n      # Check for 'with-mysql=mysqlnd' in PHP 5.6\n      _PHP_DRIVERS=$(/opt/php$1/bin/php -i | grep \"with-mysql=mysqlnd\" 2>&1)\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Checking for 'with-mysql=mysqlnd' in PHP 5.6\"\n      fi\n    else\n      # Check for 'with-mysqli=mysqlnd' in PHP 7 or 8\n      _PHP_DRIVERS=$(/opt/php$1/bin/php -i | grep \"with-mysqli=mysqlnd\" 2>&1)\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Checking for 'with-mysqli=mysqlnd' in PHP $1\"\n      fi\n    fi\n\n    # If drivers are not found, mark build as NO\n    if [ -z \"${_PHP_DRIVERS}\" ]; then\n      _PHP_DRIVERS_BUILD=NO\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: PHP $1 has missing mysql drivers, forcing rebuild\"\n      fi\n      _PHP_FORCE_REINSTALL=YES\n    else\n      _PHP_DRIVERS_BUILD=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: PHP $1 with correct mysql drivers detected\"\n      fi\n    fi\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: PHP $1 binary not found, skipping mysql driver check\"\n    fi\n  fi\n\n  # Additional conditions for force rebuild or upgrade\n  if [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ] || [ \"${_PHP_FORCE_REINSTALL}\" = \"YES\" ]; then\n    if [ -e \"/var/log/boa\" ]; then\n      rm -f /var/log/boa/._php_libs_fix_*.pid\n      rm -f /var/log/boa/re-installed-php*\n    fi\n    if [ \"${_DO_FIX}\" = \"YES\" ] || [ \"${_STATUS}\" != \"UPGRADE\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" 
]; then\n        _msg \"INFO: PHP ${_PHP_ITD} rebuild on next barracuda update\"\n      fi\n      _PHP_FORCE_REINSTALL=NO\n    else\n      if [ ! -e \"${_pthLog}/re-installed-php${1}-on-post_major_os_upgrade.info\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: PHP ${_PHP_ITD} rebuild happening now...\"\n        fi\n        _install_php_multi \"$1\"\n        _PHP_ALREADY_REBUILT=$1\n      fi\n    fi\n  fi\n}\n\n#\n# Check if the PHP build has broken freetype support.\n_check_php_broken_freetype() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_php_broken_freetype $1\"\n  fi\n  _isFreeType=\n  _isFreeType=$(/opt/php$1/bin/php -i | grep with-freetype 2>&1)\n  if [[ ! \"${_isFreeType}\" =~ \"with-freetype\" ]] \\\n    || [ -z \"${_isFreeType}\" ]; then\n    if [ -x \"/usr/bin/freetype-config\" ]; then\n      if [ ! -e \"${_pthLog}/re-installed-php${1}-on-post_major_os_upgrade.info\" ]; then\n        _msg \"INFO: PHP ${_PHP_VERSION} rebuild required to fix freetype support...\"\n        _install_php_multi \"$1\"\n        _PHP_ALREADY_REBUILT=$1\n        _PHP_FORCE_REINSTALL=NO\n      fi\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"NOTE: PHP ${_PHP_VERSION} was built without freetype support\"\n      fi\n    fi\n  else\n    _PHP_ALREADY_REBUILT=\n  fi\n}\n\n#\n# Check if the PHP build has broken GD support.\n_check_php_broken_gd() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_php_broken_gd $1\"\n  fi\n  _isGd=\n  _isGd=$(/opt/php$1/bin/php -i | grep GD 2>&1)\n  if [[ ! \"${_isGd}\" =~ \"GD Support\" ]] \\\n    || [ -z \"${_isGd}\" ]; then\n    if [ ! 
-e \"${_pthLog}/re-installed-php${1}-on-post_major_os_upgrade.info\" ]; then\n      _msg \"INFO: PHP ${_PHP_VERSION} rebuild required to fix GD support...\"\n      _install_php_multi \"$1\"\n      _PHP_ALREADY_REBUILT=$1\n      _PHP_FORCE_REINSTALL=NO\n    fi\n  else\n    _PHP_ALREADY_REBUILT=\n  fi\n}\n\n#\n# Check if the PHP build is totally broken.\n_check_php_broken_segfault() {\n  _isOnig=\n  _isOnig=$(/opt/php$1/bin/php -v 2>&1)\n  _isSegft=\n  _isSegft=$(/opt/php$1/bin/php -v | grep cli 2>&1)\n  _isSSLibs=\n  _isSSLibs=$(/opt/php$1/bin/php -v 2>&1)\n  _isOlog=\n  if [ -e \"/var/log/php/php$1-fpm-error.log\" ]; then\n    _isOlog=$(grep OnigEncoding /var/log/php/php$1-fpm-error.log 2>&1)\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_php_broken_segfault $1\"\n    _msg \"CTRL: _isOnig for $1 is ${_isOnig}\"\n    _msg \"CTRL: _isSegft for $1 is ${_isSegft}\"\n    _msg \"CTRL: _isOlog for $1 is ${_isOlog}\"\n    _msg \"CTRL: _isSSLibs for $1 is ${_isSSLibs}\"\n  fi\n  if [[ ! \"${_isSegft}\" =~ \"cli\" ]] \\\n    || [ -z \"${_isSegft}\" ] \\\n    || [[ \"${_isOnig}\" =~ \"OnigEncoding\" ]] \\\n    || [[ \"${_isSegft}\" =~ \"OnigEncoding\" ]] \\\n    || [[ \"${_isSSLibs}\" =~ \"no version information\" ]] \\\n    || [[ \"${_isOlog}\" =~ \"OnigEncoding\" ]]; then\n    if [ ! 
-e \"${_pthLog}/re-installed-php${1}-on-post_major_os_upgrade.info\" ] \\\n      || [[ \"${_isOlog}\" =~ \"OnigEncoding\" ]] \\\n      || [[ \"${_isSSLibs}\" =~ \"no version information\" ]] \\\n      || [[ \"${_isOnig}\" =~ \"OnigEncoding\" ]]; then\n      _msg \"INFO: PHP ${_PHP_VERSION} rebuild required to fix broken build...\"\n      _install_php_multi \"$1\"\n      _PHP_ALREADY_REBUILT=$1\n      _PHP_FORCE_REINSTALL=NO\n    fi\n  else\n    _PHP_ALREADY_REBUILT=\n  fi\n}\n\n#\n# Check if the PHP build is broken.\n_check_php_broken() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_php_broken $1\"\n  fi\n  _check_php_broken_segfault \"$1\"\n  _BROKEN_LIBCURL_TEST=$(/opt/php$1/bin/php -v 2>&1)\n  if [[ \"${_BROKEN_LIBCURL_TEST} \" =~ \"libcurl.so.4\" ]]; then\n    if [ ! -e \"${_pthLog}/re-installed-php${1}-on-post_major_os_upgrade.info\" ]; then\n      _PHP_BIN_BROKEN=YES\n      _msg \"INFO: PHP ${_PHP_VERSION} rebuild required to fix broken libcurl...\"\n      _curl_install_src\n      _install_php_multi \"$1\"\n      _PHP_ALREADY_REBUILT=$1\n      _PHP_FORCE_REINSTALL=NO\n    fi\n  else\n    _PHP_ALREADY_REBUILT=\n  fi\n  _check_php_broken_gd \"$1\"\n  _check_php_broken_freetype \"$1\"\n}\n\n#\n# Enable previously disabled PHP-FPM versions\n_php_not_used_enable_again() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_not_used_enable_again\"\n  fi\n  _PHP_V=\"5.6 7.0 7.1 7.2 7.3 7.4 8.0 8.1 8.2 8.3 8.4 8.5\"\n  for e in ${_PHP_V}; do\n    _re=\n    if [ ! -e \"/opt/php${_re}/bin/php\" ]; then\n      _re=\"${e}\"\n      _re=${_re//[^0-9]/}\n      if [ -e \"${_vBs}/off-php${_re}-arch/bin/php\" ] \\\n        && [ ! 
-e \"/opt/php${_re}/bin/php\" ]; then\n        rm -f -r /opt/php${_re}\n        mv -f ${_vBs}/off-php${_re}-arch /opt/php${_re}\n        _msg \"INFO: ${_vBs}/off-php${_re}-arch moved to /opt/php${_re}\"\n      fi\n      if [ -e \"${_vBs}/initd-php${_re}-fpm\" ]; then\n        mv -f ${_vBs}/initd-php${_re}-fpm /etc/init.d/php${_re}-fpm\n        chmod 755 /etc/init.d/php${_re}-fpm\n        _mrun \"update-rc.d php${_re}-fpm defaults\"\n        _mrun \"service php${_re}-fpm start\"\n        _msg \"INFO: ${_vBs}/initd-php${_re}-fpm moved to /etc/init.d/php${_re}-fpm\"\n      fi\n    fi\n  done\n  [ -e \"/root/.allow-php-multi-install-cleanup.cnf\" ] && rm -f /root/.allow-php-multi-install-cleanup.cnf\n}\n\n#\n# Disable not used PHP-FPM versions\n_php_not_used_disable() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_not_used_disable\"\n  fi\n  _PHP_V=\"5.6 7.0 7.1 7.2 7.3 7.4 8.0 8.1 8.2 8.3 8.4 8.5\"\n  _PHP_A=\"${_PHP_MULTI_OPTIM}\"\n  for e in ${_PHP_V}; do\n    _re=\n    if [[ ${_PHP_A} == *\"${e}\"* ]]; then\n      _IS_IDLE=NO\n    else\n      _IS_IDLE=YES\n    fi\n    if [ \"${_IS_IDLE}\" = \"YES\" ]; then\n      _re=\"${e}\"\n      _re=${_re//[^0-9]/}\n      if [ -e \"/etc/init.d/php${_re}-fpm\" ]; then\n        _mrun \"service php${_re}-fpm force-quit\"\n        _mrun \"update-rc.d -f php${_re}-fpm remove\"\n        rm -f ${_vBs}/initd-php${_re}-fpm\n        mv -f /etc/init.d/php${_re}-fpm ${_vBs}/initd-php${_re}-fpm\n        _msg \"INFO: /etc/init.d/php${_re}-fpm moved to ${_vBs}/initd-php${_re}-fpm\"\n      fi\n      if [ -e \"/opt/php${_re}/bin/php\" ]; then\n        rm -f -r ${_vBs}/off-php${_re}-arch\n        mv -f /opt/php${_re} ${_vBs}/off-php${_re}-arch\n        _msg \"INFO: /opt/php${_re} moved to ${_vBs}/off-php${_re}-arch\"\n      else\n        rm -f -r /opt/php${_re}\n      fi\n    fi\n  done\n}\n\n#\n# Fix init.d to disable deprecated PHP-FPM versions\n_php_deprecated_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n  
  _msg \"PROC: _php_deprecated_cleanup\"\n  fi\n  _PHP_V=\"55 54 53\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n      _mrun \"service php${e}-fpm force-quit\"\n      _mrun \"update-rc.d -f php${e}-fpm remove\"\n      mv -f /etc/init.d/php${e}-fpm ${_vBs}/initd-php${e}-fpm\n      _msg \"INFO: /etc/init.d/php${e}-fpm moved to ${_vBs}/initd-php${e}-fpm\"\n    fi\n    if [ -e \"/opt/php${e}/bin/php\" ]; then\n      mv -f /opt/php${e} ${_vBs}/off-php${e}-arch\n      _msg \"INFO: /opt/php${e} moved to ${_vBs}/off-php${e}-arch\"\n    fi\n  done\n}\n\n#\n# Fix init.d to disable not used PHP-FPM versions\n_php_single_initd_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_single_initd_cleanup\"\n  fi\n  if [ ! -z \"${_PHP_SINGLE_INSTALL}\" ]; then\n    _FPM_INITD_CLEANUP=\n    if [ \"${_PHP_SINGLE_INSTALL}\" = \"8.5\" ] && [ -x \"/opt/php85/bin/php\" ]; then\n      _PHP_V=\"84 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n          _mrun \"service php${e}-fpm force-quit\"\n          _mrun \"update-rc.d -f php${e}-fpm remove\"\n          mv -f /etc/init.d/php${e}-fpm ${_vBs}/initd-php${e}-fpm\n          _FPM_INITD_CLEANUP=YES\n        fi\n        if [ -e \"/opt/php${e}/bin/php\" ]; then\n          mv -f /opt/php${e}/bin/php ${_vBs}/bin-php${e}-cli\n        fi\n      done\n    elif [ \"${_PHP_SINGLE_INSTALL}\" = \"8.4\" ] && [ -x \"/opt/php84/bin/php\" ]; then\n      _PHP_V=\"85 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n          _mrun \"service php${e}-fpm force-quit\"\n          _mrun \"update-rc.d -f php${e}-fpm remove\"\n          mv -f /etc/init.d/php${e}-fpm ${_vBs}/initd-php${e}-fpm\n          _FPM_INITD_CLEANUP=YES\n        fi\n        if [ -e 
\"/opt/php${e}/bin/php\" ]; then\n          mv -f /opt/php${e}/bin/php ${_vBs}/bin-php${e}-cli\n        fi\n      done\n    elif [ \"${_PHP_SINGLE_INSTALL}\" = \"8.3\" ] && [ -x \"/opt/php83/bin/php\" ]; then\n      _PHP_V=\"85 84 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n          _mrun \"service php${e}-fpm force-quit\"\n          _mrun \"update-rc.d -f php${e}-fpm remove\"\n          mv -f /etc/init.d/php${e}-fpm ${_vBs}/initd-php${e}-fpm\n          _FPM_INITD_CLEANUP=YES\n        fi\n        if [ -e \"/opt/php${e}/bin/php\" ]; then\n          mv -f /opt/php${e}/bin/php ${_vBs}/bin-php${e}-cli\n        fi\n      done\n    fi\n    if [ \"${_FPM_INITD_CLEANUP}\" = \"YES\" ]; then\n      killall php-fpm &> /dev/null\n    fi\n    rm -f /opt/local/bin/php\n  fi\n  [ -d \"/var/backups/php-logs/${_NOW}\" ] || mkdir -p /var/backups/php-logs/${_NOW}/\n  mv -f /var/log/php/* /var/backups/php-logs/${_NOW}/ &> /dev/null\n}\n\n#\n# Re-set default PHP-CLI.\n_re_set_default_php_cli() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _re_set_default_php_cli ${_PHP_CLI_VERSION}\"\n  fi\n  if [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ]; then\n    _switch_php_cli \"85\"\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ]; then\n    _switch_php_cli \"84\"\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ]; then\n    _switch_php_cli \"83\"\n  fi\n}\n\n#\n# Fix path to PHP-CLI if needed.\n_check_php_cli() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_php_cli ${_PHP_CLI_VERSION}\"\n  fi\n  if [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ]; then\n    _PHP_CLI_PATH=\"/opt/php84/bin/php\"\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ]; then\n    _PHP_CLI_PATH=\"/opt/php85/bin/php\"\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ]; then\n    _PHP_CLI_PATH=\"/opt/php83/bin/php\"\n  else\n    _PHP_CLI_PATH=\"\"\n  fi\n  if [ -x \"${_PHP_CLI_PATH}\" ]; then\n    
_USE_PHP_CLI_PATH=\"${_PHP_CLI_PATH}\"\n  else\n    if  [ -x \"/opt/php84/bin/php\" ]; then\n      _USE_PHP_CLI_PATH=\"/opt/php84/bin/php\"\n    elif  [ -x \"/opt/php85/bin/php\" ]; then\n      _USE_PHP_CLI_PATH=\"/opt/php85/bin/php\"\n    elif  [ -x \"/opt/php83/bin/php\" ]; then\n      _USE_PHP_CLI_PATH=\"/opt/php83/bin/php\"\n    fi\n  fi\n  if [ -x \"${_USE_PHP_CLI_PATH}\" ]; then\n    rm -f /usr/bin/php\n    rm -f /usr/bin/php-cli\n    ln -sfn ${_USE_PHP_CLI_PATH} /usr/bin/php\n    ln -sfn ${_USE_PHP_CLI_PATH} /usr/bin/php-cli\n  else\n    _msg \"WAIT: I can not find PHP-CLI anywhere!\"\n    _msg \"NOTE: BOA requires PHP 8.3 or newer\"\n  fi\n}\n\n_switch_php_cli() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _switch_php_cli $1\"\n  fi\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^0-9]/}\n  if [ ! -z \"${_isTest}\" ]; then\n    if [ -x \"/opt/php$1/bin/php\" ]; then\n      rm -f /usr/bin/php\n      rm -f /usr/bin/php-cli\n      ln -sfn /opt/php$1/bin/php /usr/bin/php\n      ln -sfn /opt/php$1/bin/php /usr/bin/php-cli\n    fi\n    if [ -x \"/opt/php$1/bin/phpize\" ]; then\n      rm -f /usr/bin/phpize\n      ln -sfn /opt/php$1/bin/phpize /usr/bin/phpize\n    fi\n    if [ -x \"/opt/php$1/bin/php-config\" ]; then\n      rm -f /usr/bin/php-config\n      ln -sfn /opt/php$1/bin/php-config /usr/bin/php-config\n    fi\n  fi\n}\n\n_php_upgrade_all() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_upgrade_all $1\"\n  fi\n  if [ ! 
-z \"${1}\" ] && [ \"${1}\" != \"force\" ]; then\n    _PHP_V=\"${1}\"\n  else\n    if [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n      _PHP_V=\"56 70 71 72 73 74 80 81\"\n    else\n      _PHP_V=\"56 70 71 72 73 74 80 81 82 83 84 85\"\n    fi\n  fi\n  for e in ${_PHP_V}; do\n    if [ -x \"/opt/php${e}/bin/php\" ]; then\n      if [ \"${e}\" = 56 ]; then\n        _PHP_VERSION=\"${_PHP56_VRN}\"\n      elif [ \"${e}\" = 70 ]; then\n        _PHP_VERSION=\"${_PHP70_VRN}\"\n      elif [ \"${e}\" = 71 ]; then\n        _PHP_VERSION=\"${_PHP71_VRN}\"\n      elif [ \"${e}\" = 72 ]; then\n        _PHP_VERSION=\"${_PHP72_VRN}\"\n      elif [ \"${e}\" = 73 ]; then\n        _PHP_VERSION=\"${_PHP73_VRN}\"\n      elif [ \"${e}\" = 74 ]; then\n        _PHP_VERSION=\"${_PHP74_VRN}\"\n      elif [ \"${e}\" = 80 ]; then\n        _PHP_VERSION=\"${_PHP80_VRN}\"\n      elif [ \"${e}\" = 81 ]; then\n        _PHP_VERSION=\"${_PHP81_VRN}\"\n      elif [ \"${e}\" = 82 ]; then\n        _PHP_VERSION=\"${_PHP82_VRN}\"\n      elif [ \"${e}\" = 83 ]; then\n        _PHP_VERSION=\"${_PHP83_VRN}\"\n      elif [ \"${e}\" = 84 ]; then\n        _PHP_VERSION=\"${_PHP84_VRN}\"\n      elif [ \"${e}\" = 85 ]; then\n        _PHP_VERSION=\"${_PHP85_VRN}\"\n      fi\n      _PHP_BIN_BROKEN=NO\n      _BROKEN_LIBCURL_TEST=\"\"\n      _switch_php_cli \"${e}\"\n      if [ \"${1}\" = \"force\" ]; then\n        _install_php_multi \"${e}\"\n      else\n        _check_php_broken \"${e}\"\n        _check_php_rebuild \"${e}\"\n      fi\n      _php_multi_update \"${e}\"\n      _mrun \"service php${e}-fpm reload\"\n      _PHP_VERSION=\"\"\n      _T_PHP_VRN=\"\"\n      _T_PHP_PTH=\"\"\n    fi\n  done\n}\n\n_get_php_conf_extra() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _get_php_conf_extra\"\n  fi\n  _PHP_EXTRA=\"\"\n  if [ -x \"/usr/bin/freetype-config\" ]; then\n    _PHP_EXTRA=\"--with-freetype-dir=/usr\"\n  fi\n  if [ ! 
-z \"${_PHP_EXTRA_CONF}\" ]; then\n    _PHP_EXTRA=\"${_PHP_EXTRA} ${_PHP_EXTRA_CONF}\"\n  fi\n}\n\n_php_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_install_upgrade\"\n  fi\n  _if_to_do_fix\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Checking if PHP upgrade is available\"\n    fi\n    if dpkg-query -W -f='${Status}' php5-sasl 2>/dev/null | grep -q \"install ok installed\"; then\n      _mrun \"apt-get remove php5-sasl -y --purge --auto-remove -qq\"\n    fi\n    if dpkg-query -W -f='${Status}' php5-suhosin 2>/dev/null | grep -q \"install ok installed\"; then\n      _mrun \"apt-get remove php5-suhosin -y --purge --auto-remove -qq\"\n    fi\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"5.6\" ]] \\\n    && [ ! -x \"/opt/php56/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP56_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"56\"\n    _switch_php_cli \"56\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP56_BUILD=56\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"7.0\" ]] \\\n    && [ ! -x \"/opt/php70/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP70_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"70\"\n    _switch_php_cli \"70\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP70_BUILD=70\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"7.1\" ]] \\\n    && [ ! -x \"/opt/php71/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP71_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"71\"\n    _switch_php_cli \"71\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP71_BUILD=71\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"7.2\" ]] \\\n    && [ ! 
-x \"/opt/php72/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP72_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"72\"\n    _switch_php_cli \"72\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP72_BUILD=72\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"7.3\" ]] \\\n    && [ ! -x \"/opt/php73/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP73_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"73\"\n    _switch_php_cli \"73\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP73_BUILD=73\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"7.4\" ]] \\\n    && [ ! -x \"/opt/php74/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP74_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"74\"\n    _switch_php_cli \"74\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP74_BUILD=74\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.0\" ]] \\\n    && [ ! -x \"/opt/php80/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP80_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"80\"\n    _switch_php_cli \"80\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP80_BUILD=80\n  fi\n  if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.1\" ]] \\\n    && [ ! -x \"/opt/php81/bin/php\" ]; then\n    _PHP_VERSION=\"${_PHP81_VRN}\"\n    _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n    _install_php_multi \"81\"\n    _switch_php_cli \"81\"\n    _PHP_VERSION=\"\"\n    _T_PHP_VRN=\"\"\n    _T_PHP_PTH=\"\"\n    _FRESH_PHP81_BUILD=81\n  fi\n  if [ \"${_OS_CODE}\" != \"stretch\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n    if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.2\" ]] \\\n      && [ ! 
-x \"/opt/php82/bin/php\" ]; then\n      _PHP_VERSION=\"${_PHP82_VRN}\"\n      _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n      _install_php_multi \"82\"\n      _switch_php_cli \"82\"\n      _PHP_VERSION=\"\"\n      _T_PHP_VRN=\"\"\n      _T_PHP_PTH=\"\"\n      _FRESH_PHP82_BUILD=82\n    fi\n    if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.3\" ]] \\\n      && [ ! -x \"/opt/php83/bin/php\" ]; then\n      _PHP_VERSION=\"${_PHP83_VRN}\"\n      _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n      _install_php_multi \"83\"\n      _switch_php_cli \"83\"\n      _PHP_VERSION=\"\"\n      _T_PHP_VRN=\"\"\n      _T_PHP_PTH=\"\"\n      _FRESH_PHP83_BUILD=83\n    fi\n    if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.4\" ]] \\\n      && [ ! -x \"/opt/php84/bin/php\" ]; then\n      _PHP_VERSION=\"${_PHP84_VRN}\"\n      _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n      _install_php_multi \"84\"\n      _switch_php_cli \"84\"\n      _PHP_VERSION=\"\"\n      _T_PHP_VRN=\"\"\n      _T_PHP_PTH=\"\"\n      _FRESH_PHP84_BUILD=84\n    fi\n    if [[ \"${_PHP_MULTI_INSTALL}\" =~ \"8.5\" ]] \\\n      && [ ! -x \"/opt/php85/bin/php\" ]; then\n      _PHP_VERSION=\"${_PHP85_VRN}\"\n      _msg \"INFO: PHP ${_PHP_VERSION} will be installed now\"\n      _install_php_multi \"85\"\n      _switch_php_cli \"85\"\n      _PHP_VERSION=\"\"\n      _T_PHP_VRN=\"\"\n      _T_PHP_PTH=\"\"\n      _FRESH_PHP85_BUILD=85\n    fi\n  fi\n}\n\n_php_check_if_rebuild() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_check_if_rebuild\"\n  fi\n  _check_php_cli\n  _isLber=\n  _isLber=$(php -i | grep liblber 2>&1)\n  if [[ \"${_isLber}\" =~ \"liblber\" ]]; then\n    _msg \"INFO: PHP rebuild required to fix liblber...\"\n    _PHP_FORCE_REINSTALL=YES\n    _php_if_versions_cleanup_cnf\n    _php_upgrade_all force\n    _PHP_FORCE_REINSTALL=NO\n  fi\n  _isSeg=\n  _isSeg=$(php -v | grep cli 2>&1)\n  if [[ ! 
\"${_isSeg}\" =~ \"cli\" ]] || [ -z \"${_isSeg}\" ]; then\n    _msg \"INFO: PHP rebuild required to fix broken libs...\"\n    _PHP_FORCE_REINSTALL=YES\n    _php_if_versions_cleanup_cnf\n    _php_upgrade_all force\n    _PHP_FORCE_REINSTALL=NO\n  fi\n  _isOnig=\n  _isOnig=$(php -i | grep libonig 2>&1)\n  if [[ \"${_isOnig}\" =~ \"libonig.so\" ]]; then\n    _msg \"INFO: PHP rebuild required to fix libonig...\"\n    _PHP_FORCE_REINSTALL=YES\n    _php_if_versions_cleanup_cnf\n    _php_upgrade_all force\n    _PHP_FORCE_REINSTALL=NO\n  fi\n  _isIcu=\n  _isIcu=$(php -i | grep libicuio 2>&1)\n  if [[ \"${_isIcu}\" =~ \"libicuio\" ]] \\\n    || [ ! -e \"/usr/local/lib/icu/current\" ]; then\n    _msg \"INFO: PHP rebuild required to fix ICU support...\"\n    _PHP_FORCE_REINSTALL=YES\n    _php_install_deps\n    _php_libs_fix\n    _php_if_versions_cleanup_cnf\n    _php_upgrade_all force\n    _PHP_FORCE_REINSTALL=NO\n  fi\n  if [ -x \"/opt/php74/bin/php\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n    _isSodium=\n    _isSodium=$(/opt/php74/bin/php -i | grep with-sodium 2>&1)\n    if [[ ! \"${_isSodium}\" =~ \"with-sodium\" ]] \\\n      || [ -z \"${_isSodium}\" ]; then\n      _msg \"INFO: PHP rebuild required to add Sodium support...\"\n      _PHP_FORCE_REINSTALL=YES\n      _php_if_versions_cleanup_cnf\n      if [ \"${_OS_CODE}\" = \"stretch\" ]; then\n        _php_upgrade_all \"74 80 81\"\n      else\n        _php_upgrade_all \"74 80 81 82 83 84\"\n      fi\n      _PHP_FORCE_REINSTALL=NO\n    fi\n  fi\n  if [ -x \"/opt/php56/bin/php\" ]; then\n    _isIgbinary=\n    _isIgbinary=$(/opt/php56/bin/php -i | grep igbinary 2>&1)\n    if [[ ! 
\"${_isIgbinary}\" =~ \"igbinary\" ]] \\\n      || [ -z \"${_isIgbinary}\" ]; then\n      _msg \"INFO: PHP 5.6 rebuild required to add igbinary support...\"\n      _PHP_FORCE_REINSTALL=YES\n      _php_if_versions_cleanup_cnf\n      _php_upgrade_all \"56\"\n      _PHP_FORCE_REINSTALL=NO\n    fi\n  fi\n  _isWebP=\n  _isWebP=$(php -i | grep with-webp 2>&1)\n  if [[ ! \"${_isWebP}\" =~ \"with-webp\" ]] \\\n    || [ -z \"${_isWebP}\" ]; then\n    _msg \"INFO: PHP rebuild required to add WebP support...\"\n    _PHP_FORCE_REINSTALL=YES\n    _php_if_versions_cleanup_cnf\n    _php_upgrade_all force\n    _PHP_FORCE_REINSTALL=NO\n  fi\n  if [[ \"${_PHP_EXTRA_CONF}\" =~ \"--with-tidy\" ]] \\\n    && [ ! -e \"${_pthLog}/libtidy-${_LIB_TIDY_VRN}.log\" ]; then\n    _apt_clean_update\n    for _PKG in libonig-dev; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    if [ -e \"/usr/lib/libtidy.so\" ]; then\n      ###\n      for _PKG in libtidy-dev libtidy-0.99-0 tidy; do\n        if _pkg_installed \"${_PKG}\"; then\n          _mrun \"apt-get remove ${_PKG} -y --purge --auto-remove -qq\"\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"PCKG: ${_PKG} removed as requested.\"\n          fi\n        fi\n      done\n      ###\n    fi\n    cd /var/opt\n    rm -rf tidy*\n    _apt_clean_update\n    for _PKG in cmake; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    _get_dev_src \"tidy-html5-${_LIB_TIDY_VRN}.tar.gz\"\n    cd tidy-html5-${_LIB_TIDY_VRN}/build/cmake\n    _mrun \"cmake ../.. 
-DCMAKE_INSTALL_PREFIX=/usr/\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ -e \"/usr/lib/libtidy.so\" ]; then\n      cd /usr/lib\n      ln -sfn libtidy.so.${_LIB_TIDY_VRN} libtidy-0.99.so.0\n      touch ${_pthLog}/libtidy-${_LIB_TIDY_VRN}.log\n    fi\n    _msg \"INFO: PHP rebuild required to add --with-tidy option...\"\n    _PHP_FORCE_REINSTALL=YES\n    _php_if_versions_cleanup_cnf\n    _php_upgrade_all force\n    _PHP_FORCE_REINSTALL=NO\n  fi\n}\n\n_php_ioncube_check_if_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_ioncube_check_if_update\"\n  fi\n  if [ \"${_PHP_IONCUBE}\" = \"YES\" ]; then\n    if [ ! -e \"${_pthLog}/ioncube-update-${_IONCUBE_VRN}.log\" ] \\\n      || [ \"${_PHP_FORCE_REINSTALL}\" = \"YES\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      if [ -e \"/var/log/boa\" ]; then\n        rm -f /var/log/boa/._php_libs_fix_*.pid\n      fi\n      _if_install_php_ioncube\n    fi\n  fi\n}\n\n_composer_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _composer_install_upgrade\"\n  fi\n  _COMPOSER_FROM_STATIC=NO\n  if [ \"${_COMPOSER_FROM_STATIC}\" = \"YES\" ]; then\n    _COMPOSER_IS=$(composer --no-interaction --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f3 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_COMPOSER_IS}\" != \"${_COMPOSER_VRN}\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Composer ${_COMPOSER_IS}, update required\"\n      fi\n      cd /var/opt\n      rm -rf composer*\n      _get_dev_src \"composer-${_COMPOSER_VRN}.phar.gz\"\n      if [ -e \"/var/opt/composer-${_COMPOSER_VRN}.phar\" ]; then\n        mv -f composer-${_COMPOSER_VRN}.phar /usr/local/bin/composer\n        chmod 755 /usr/local/bin/composer\n        ln -sfn /usr/local/bin/composer /usr/bin/composer\n        _msg \"INFO: Composer updated to version 
${_COMPOSER_VRN}\"\n      fi\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Composer ${_COMPOSER_VRN}, no action needed\"\n      fi\n    fi\n  else\n    if [ -x \"/usr/local/bin/composer\" ]; then\n      /usr/local/bin/composer self-update --2 &> /dev/null\n    fi\n    if [ ! -x \"/usr/local/bin/composer\" ] || [ ! -L \"/usr/bin/composer\" ]; then\n      rm -f /usr/local/bin/composer\n      rm -f /usr/bin/composer\n      rm -rf /root/.composer\n      mkdir -p /var/opt\n      cd /var/opt\n      rm -f /var/opt/composer.phar\n      curl -sS https://getcomposer.org/installer | php &> /dev/null\n      mv -f composer.phar /usr/local/bin/composer\n      ln -sfn /usr/local/bin/composer /usr/bin/composer\n    fi\n    _COMPOSER_IS=$(composer --no-interaction --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f3 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_COMPOSER_IS}\" != \"${_COMPOSER_VRN}\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Composer ${_COMPOSER_IS}, update required\"\n      fi\n      composer self-update ${_COMPOSER_VRN} &> /dev/null\n    fi\n    _COMPOSER_IS=$(composer --no-interaction --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f3 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installed Composer ${_COMPOSER_IS}\"\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/functions/redis.sh.inc",
    "content": "#\n# Forced Redis password update.\n_forced_redis_password_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _forced_redis_password_update\"\n  fi\n  if [ \"${_REDIS_LISTEN_MODE}\" = \"SOCKET\" ] \\\n    || [ \"${_REDIS_LISTEN_MODE}\" = \"PORT\" ] \\\n    || [ \"${_REDIS_LISTEN_MODE}\" = \"127.0.0.1\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Generating random password for local Redis server\"\n    fi\n    _ESC_RPASS=\"\"\n    _LEN_RPASS=0\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n      _PWD_CHARS=64\n    elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n      _PWD_CHARS=32\n    else\n      _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n      if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n        && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n        _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n      else\n        _PWD_CHARS=32\n      fi\n      if [ ! -z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n        _PWD_CHARS=128\n      fi\n    fi\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n      _RANDPASS_TEST=$(randpass -V 2>&1)\n      if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n        _ESC_RPASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n      else\n        _ESC_RPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_RPASS=$(echo -n \"${_ESC_RPASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_RPASS=$(_sanitize_string \"${_ESC_RPASS}\" 2>&1)\n      fi\n      _ESC_RPASS=$(echo -n \"${_ESC_RPASS}\" | tr -d \"\\n\" 2>&1)\n      _LEN_RPASS=$(echo ${#_ESC_RPASS} 2>&1)\n    fi\n    if [ -z \"${_ESC_RPASS}\" ] || [ \"${_LEN_RPASS}\" -lt 9 ]; then\n      _ESC_RPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_RPASS=$(echo -n \"${_ESC_RPASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_RPASS=$(_sanitize_string \"${_ESC_RPASS}\" 2>&1)\n    fi\n  else\n    _msg \"INFO: Managing password for remote Redis 
server\"\n    if [ -e \"/root/.redis.pass.txt\" ] \\\n      && [ -e \"${_pthLog}/remote-redis-passwd.log\" ]; then\n      _ESC_RPASS=$(cat /root/.redis.pass.txt 2>/dev/null | tr -d '\\n')\n      _ESC_RPASS=$(_sanitize_string \"${_ESC_RPASS}\" 2>&1)\n    else\n      _ESC_RPASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n      touch ${_pthLog}/remote-redis-passwd.log\n    fi\n  fi\n  echo \"${_ESC_RPASS}\" > /root/.redis.pass.txt\n  chmod 0600 /root/.redis.pass.txt &> /dev/null\n  touch ${_pthLog}/sec-redis-pass-${_xSrl}-${_X_VERSION}-${_NOW}.log\n  if [ -e \"/etc/redis/redis.conf\" ]; then\n    _FORCE_REDIS_RESTART=YES\n    sed -i \"s/^# requirepass /requirepass /g\" \\\n      /etc/redis/redis.conf &> /dev/null\n    wait\n    sed -i \"s/^requirepass.*/requirepass ${_ESC_RPASS}/g\" \\\n      /etc/redis/redis.conf &> /dev/null\n    wait\n    chown redis:redis /etc/redis/redis.conf\n    chmod 0600 /etc/redis/redis.conf\n  fi\n  if [ \"${_FORCE_REDIS_RESTART}\" = \"YES\" ]; then\n    _mrun \"service redis-server reload\"\n  fi\n}\n#\n# Fix Redis mode.\n_fix_redis_mode() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_redis_mode\"\n  fi\n  mkdir -p /run/redis\n  chown redis:redis /run/redis\n  if [ \"${_CUSTOM_CONFIG_REDIS}\" = \"NO\" ] || [ -z \"${_CUSTOM_CONFIG_REDIS}\" ]; then\n    _REDIS_LISTEN_MODE=SOCKET\n    if [ \"${_REDIS_LISTEN_MODE}\" = \"SOCKET\" ]; then\n      if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n        sed -i \"s/redis_client_host/redis_client_socket/g\" /data/conf/global/global-redis.inc &> /dev/null\n        wait\n        sed -i \"s/'host'/'socket'/g\" /data/conf/global/global-redis.inc &> /dev/null\n        wait\n        sed -i \"s/  = '127.0.0.1';/= '\\/var\\/run\\/redis\\/redis.sock';/g\" /data/conf/global/global-redis.inc &> /dev/null\n        wait\n      fi\n      if [ -e \"/data/conf/global.inc\" ]; then\n        sed -i \"s/redis_client_host/redis_client_socket/g\" /data/conf/global.inc &> /dev/null\n        wait\n    
    sed -i \"s/'host'/'socket'/g\" /data/conf/global.inc &> /dev/null\n        wait\n        sed -i \"s/  = '127.0.0.1';/= '\\/var\\/run\\/redis\\/redis.sock';/g\" /data/conf/global.inc &> /dev/null\n        wait\n      fi\n      sed -i \"s/^port 0/port 6379/g\" /etc/redis/redis.conf &> /dev/null\n      wait\n      sed -i \"s/^# bind 127.0.0.1/bind 127.0.0.1/g\" /etc/redis/redis.conf &> /dev/null\n      wait\n      sed -i \"s/^# unixsocket/unixsocket/g\" /etc/redis/redis.conf &> /dev/null\n      wait\n    elif [ \"${_REDIS_LISTEN_MODE}\" = \"PORT\" ] \\\n      || [ \"${_REDIS_LISTEN_MODE}\" = \"127.0.0.1\" ]; then\n      _DO_NOTHING=YES\n    else\n      _REDIS_LISTEN_MODE=${_REDIS_LISTEN_MODE//[^0-9.]/}\n      if [ ! -z \"${_REDIS_LISTEN_MODE}\" ]; then\n        _find_correct_ip\n        _LOCAL_REDIS_PORT_TEST=\"${_LOC_IP}\"\n        if [ \"${_LOCAL_REDIS_PORT_TEST}\" = \"${_REDIS_LISTEN_MODE}\" ]; then\n          _REDIS_HOST=LOCAL\n        else\n          _REDIS_HOST=REMOTE\n        fi\n        if [[ \"${_REDIS_LISTEN_MODE}\" =~ (^)\"10.\" ]] \\\n          || [[ \"${_REDIS_LISTEN_MODE}\" =~ (^)\"192.168.\" ]] \\\n          || [[ \"${_REDIS_LISTEN_MODE}\" =~ (^)\"172.16.\" ]] \\\n          || [[ \"${_REDIS_LISTEN_MODE}\" =~ (^)\"127.0.\" ]]; then\n          if [ \"${_REDIS_HOST}\" = \"LOCAL\" ]; then\n            sed -i \"s/^bind 127.0.0.1/bind ${_REDIS_LISTEN_MODE}/g\" /etc/redis/redis.conf &> /dev/null\n            wait\n            if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n              sed -i \"s/'127.0.0.1'/'${_REDIS_LISTEN_MODE}'/g\" /data/conf/global/global-redis.inc &> /dev/null\n              wait\n            fi\n            if [ -e \"/data/conf/global.inc\" ]; then\n              sed -i \"s/'127.0.0.1'/'${_REDIS_LISTEN_MODE}'/g\" /data/conf/global.inc &> /dev/null\n              wait\n            fi\n          else\n            if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n              sed -i 
\"s/'127.0.0.1'/'${_REDIS_LISTEN_MODE}'/g\" /data/conf/global/global-redis.inc &> /dev/null\n              wait\n            fi\n            if [ -e \"/data/conf/global.inc\" ]; then\n              sed -i \"s/'127.0.0.1'/'${_REDIS_LISTEN_MODE}'/g\" /data/conf/global.inc &> /dev/null\n              wait\n            fi\n            _mrun \"service redis-server stop\"\n            killall -9 redis-server &> /dev/null\n            rm -f /var/lib/redis/*\n            _mrun \"update-rc.d -f redis-server remove\"\n            _mrun \"service redis stop\"\n            killall -9 redis &> /dev/null\n            _mrun \"update-rc.d -f redis remove\"\n            mv -f /etc/init.d/redis /etc/init.d/redis-off &> /dev/null\n            mv -f /etc/init.d/redis-server /etc/init.d/redis-server-off &> /dev/null\n            killall -9 redis-server &> /dev/null\n            rm -f /run/redis/redis.pid\n            rm -f /var/xdrago/memcache.sh* &> /dev/null\n            killall -9 memcached &> /dev/null\n            _msg \"INFO: Remote Redis IP set to ${_REDIS_LISTEN_MODE}\"\n            _msg \"INFO: Local Redis instance has been disabled\"\n          fi\n        else\n          if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n            sed -i \"s/'127.0.0.1'/'${_REDIS_LISTEN_MODE}'/g\" /data/conf/global/global-redis.inc &> /dev/null\n            wait\n          fi\n          if [ -e \"/data/conf/global.inc\" ]; then\n            sed -i \"s/'127.0.0.1'/'${_REDIS_LISTEN_MODE}'/g\" /data/conf/global.inc &> /dev/null\n            wait\n          fi\n          _mrun \"service redis-server stop\"\n          killall -9 redis-server &> /dev/null\n          rm -f /var/lib/redis/*\n          _mrun \"update-rc.d -f redis-server remove\"\n          _mrun \"service redis stop\"\n          killall -9 redis &> /dev/null\n          _mrun \"update-rc.d -f redis remove\"\n          mv -f /etc/init.d/redis /etc/init.d/redis-off &> /dev/null\n          mv -f /etc/init.d/redis-server 
/etc/init.d/redis-server-off &> /dev/null\n          killall -9 redis-server &> /dev/null\n          rm -f /run/redis/redis.pid\n          rm -f /var/xdrago/memcache.sh* &> /dev/null\n          killall -9 memcached &> /dev/null\n          _msg \"INFO: Remote Redis IP set to ${_REDIS_LISTEN_MODE}\"\n          _msg \"INFO: Local Redis instance has been disabled\"\n        fi\n      fi\n    fi\n  fi\n}\n#\n# Set or update Redis password.\n_redis_password_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _redis_password_update\"\n  fi\n  if [ -e \"/etc/redis/redis.conf\" ]; then\n    if [ ! -e \"${_pthLog}/sec-redis-pass-${_xSrl}-${_X_VERSION}-${_NOW}.log\" ]; then\n      if [ ! -e \"/root/.redis.no.new.password.cnf\" ] \\\n        || [ ! -e \"/root/.redis.pass.txt\" ]; then\n         _forced_redis_password_update\n      fi\n    fi\n  fi\n  if [ -e \"/root/.redis.pass.txt\" ] && [ -e \"/etc/redis/redis.conf\" ]; then\n    if [ -z \"${_ESC_RPASS}\" ]; then\n      _RPASS=$(cat /root/.redis.pass.txt 2>/dev/null | tr -d '\\n')\n    else\n      _RPASS=\"${_ESC_RPASS}\"\n    fi\n    if [ -e \"/data/conf/global/global-if-redis.inc\" ]; then\n      _REDIS_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global/global-if-redis.inc 2>&1)\n      if [[ \"${_REDIS_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! -z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global-if-redis.inc /data/conf/global/global-if-redis.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global/global-if-redis.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n    if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n      _REDIS_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global/global-redis.inc 2>&1)\n      if [[ \"${_REDIS_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! 
-z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global-redis.inc /data/conf/global/global-redis.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global/global-redis.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n    if [ -e \"/data/conf/global.inc\" ]; then\n      _REDIS_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global.inc 2>&1)\n      if [[ \"${_REDIS_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! -z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global.inc /data/conf/global.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n    if [ -e \"${_mtrInc}\" ] \\\n      && [ ! -L \"${_mtrInc}/global.inc\" ] \\\n      && [ -e \"/data/conf/global.inc\" ]; then\n      ln -sfn /data/conf/global.inc ${_mtrInc}/global.inc\n    fi\n    _fix_redis_mode\n  fi\n}\n#\n# Install Redis from sources.\n_install_redis_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_redis_src\"\n  fi\n  _msg \"INFO: Installing Redis ${_REDIS_VRN}...\"\n  if [ ! 
-e \"/var/lib/redis\" ]; then\n    _mrun \"adduser --system --group redis --home /home/redis\"\n  fi\n  cd /var/opt\n  rm -rf redis*\n  _get_dev_src \"redis-${_REDIS_VRN}.tar.gz\"\n  rm -f /usr/local/bin/redis*\n  rm -f /usr/bin/redis*\n  cd redis-${_REDIS_VRN}\n  _mrun \"make -j $(nproc) --quiet\"\n  _mrun \"make --quiet PREFIX=/usr install\"\n  cp -af ${_locCnf}/redis/redis-server /etc/init.d/redis-server\n  chmod 755 /etc/init.d/redis-server &> /dev/null\n  _mrun \"update-rc.d redis-server defaults\"\n  mkdir -p /run/redis\n  chown -R redis:redis /run/redis\n  mkdir -p /var/log/redis\n  chown -R redis:redis /var/log/redis\n  mkdir -p /var/lib/redis\n  chown -R redis:redis /var/lib/redis\n  rm -f /var/lib/redis/*\n  mkdir -p /etc/redis\n  if [ -e \"/etc/redis/redis.conf\" ] && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _if_hosted_sys\n    if [ \"${_CUSTOM_CONFIG_REDIS}\" = \"NO\" ] \\\n      || [ -z \"${_CUSTOM_CONFIG_REDIS}\" ] \\\n      || [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ \"${_CUSTOM_CONFIG_REDIS}\" = \"YES\" ]; then\n        _DO_NOTHING=YES\n      else\n        if [ \"${_REDIS_INSTALL_MISMATCH}\" = \"YES\" ] \\\n          || [ ! -e \"${_pthLog}/redis-${_REDIS_VRN}-${_xSrl}-${_X_VERSION}.log\" ]; then\n          cp -af ${_locCnf}/redis/${_redisCnfTpl} /etc/redis/redis.conf\n        fi\n      fi\n    fi\n  else\n    if [ ! -e \"/etc/redis/redis.conf\" ] \\\n      || [ \"${_REDIS_INSTALL_MISMATCH}\" = \"YES\" ] \\\n      || [ ! -e \"${_pthLog}/redis-${_REDIS_VRN}-${_xSrl}-${_X_VERSION}.log\" ]; then\n      cp -af ${_locCnf}/redis/${_redisCnfTpl} /etc/redis/redis.conf\n    fi\n  fi\n  _redis_password_update\n  touch ${_pthLog}/redis-${_REDIS_VRN}-${_xSrl}-${_X_VERSION}.log\n  _mrun \"service redis-server reload\"\n}\n\n_redis_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _redis_install_upgrade\"\n  fi\n  if [ \"${_REDIS_MAJOR_RELEASE}\" = \"7\" ] \\\n    && [ ! 
-z \"${_REDIS_SEVEN_VRN}\" ]; then\n    _REDIS_VRN=${_REDIS_SEVEN_VRN}\n    _redisCnfTpl=\"redis7.conf\"\n  elif [ \"${_REDIS_MAJOR_RELEASE}\" = \"6\" ] \\\n    && [ ! -z \"${_REDIS_SIX_VRN}\" ]; then\n    _REDIS_VRN=${_REDIS_SIX_VRN}\n    _redisCnfTpl=\"redis6.conf\"\n  elif [ \"${_REDIS_MAJOR_RELEASE}\" = \"5\" ] \\\n    && [ ! -z \"${_REDIS_FIVE_VRN}\" ]; then\n    _REDIS_VRN=${_REDIS_FIVE_VRN}\n    _redisCnfTpl=\"redis5.conf\"\n  else\n    _REDIS_VRN=${_REDIS_FOUR_VRN}\n    _redisCnfTpl=\"redis4.conf\"\n  fi\n  if [ ! -e \"/var/lib/redis\" ]; then\n    _mrun \"adduser --system --group redis --home /home/redis\"\n  fi\n  mkdir -p /run/redis\n  chown -R redis:redis /run/redis\n  mkdir -p /var/log/redis\n  chown -R redis:redis /var/log/redis\n  mkdir -p /var/lib/redis\n  chown -R redis:redis /var/lib/redis\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _REDIS_V_ITD=$(redis-server -v 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f4 \\\n      | awk '{ print $1}' 2>&1)\n    if [[ \"${_REDIS_V_ITD}\" =~ \"sha\" ]]; then\n      _REDIS_V_ITD=$(redis-server -v 2>&1 \\\n        | tr -d \"\\n\" \\\n        | tr -d \"v=\" \\\n        | cut -d\" \" -f3 \\\n        | awk '{ print $1}' 2>&1)\n    fi\n    if [ \"${_REDIS_V_ITD}\" = \"${_REDIS_VRN}\" ]; then\n      _REDIS_INSTALL_MISMATCH=NO\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Redis version ${_REDIS_V_ITD}, OK\"\n      fi\n    else\n      _REDIS_INSTALL_MISMATCH=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Redis version ${_REDIS_V_ITD}, upgrade required\"\n      fi\n    fi\n  else\n    if [ -x \"/usr/bin/redis-server\" ]; then\n      _REDIS_V_ITD=$(redis-server -v 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | awk '{ print $1}' 2>&1)\n      if [[ \"${_REDIS_V_ITD}\" =~ \"sha\" ]]; then\n        _REDIS_V_ITD=$(redis-server -v 2>&1 \\\n          | tr -d \"\\n\" \\\n          | tr -d \"v=\" \\\n          | 
cut -d\" \" -f3 \\\n          | awk '{ print $1}' 2>&1)\n      fi\n      if [ \"${_REDIS_V_ITD}\" = \"${_REDIS_VRN}\" ]; then\n        _REDIS_INSTALL_MISMATCH=NO\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Redis version ${_REDIS_V_ITD}, OK\"\n        fi\n      else\n        _REDIS_INSTALL_MISMATCH=YES\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Redis version ${_REDIS_V_ITD}, rebuild required\"\n        fi\n      fi\n    fi\n  fi\n  if [ \"${_REDIS_INSTALL_MISMATCH}\" = \"YES\" ] \\\n    || [ ! -d \"/run/redis\" ] \\\n    || [ ! -x \"/usr/bin/redis-server\" ] \\\n    || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    if [ \"${_REDIS_HOST}\" = \"LOCAL\" ] || [ -z \"${_REDIS_HOST}\" ]; then\n      _install_redis_src\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/functions/satellite.sh.inc",
    "content": "\nexport _tRee=dev\nexport _rLsn=\"BOA-5.9.1\"\nexport _rlsE=\"${_rLsn}-${_tRee}\"\nexport _bRnh=\"5.x-${_tRee}\"\n_hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n\n#\n# Download and extract from core archive.\n_get_core_ext() {\n  if [ ! -z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"http://${_USE_MIR}/core/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && _msg \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      _msg \"OOPS: Failed to download http://${_USE_MIR}/core/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n#\n# Check key things in every proc.\n_debug_proc() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    _msg \"DBLS: `ls -la /usr/bin/sh`\"\n    _msg \"DBLS: `ls -la /bin/sh`\"\n    _msg \"DBLS: `ls -la /run/octopus_install_run.pid`\"\n    [ -e \"/data/disk/${_USER}/.tmp/cache\" ] && _msg \"DBLS: `ls -la /data/disk/${_USER}/.tmp/cache`\"\n  fi\n}\n\n#\n# Count system CPUs.\n_count_cpu() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _count_cpu\"\n    _debug_proc\n  fi\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  
_CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n#\n# Find correct IP.\n_find_correct_ip() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _find_correct_ip\"\n    _debug_proc\n  fi\n  if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n    _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n  else\n    _LOC_IP=$(curl ${_crlGet} https://api.ipify.org | sed 's/[^0-9\\.]//g')\n    if [ -z \"${_LOC_IP}\" ]; then\n      _LOC_IP=$(curl ${_crlGet} http://ipv4.icanhazip.com | sed 's/[^0-9\\.]//g')\n    fi\n    if [ ! -z \"${_LOC_IP}\" ]; then\n      echo ${_LOC_IP} > /root/.found_correct_ipv4.cnf\n    fi\n  fi\n  if [ -n \"${_LOC_IP}\" ] && grep -qE \"${_LOC_IP}\\s\" /etc/hosts; then\n    cp -af /etc/hosts /etc/.was.hosts\n    sed -i \"s/^${_LOC_IP}.*//g\" /etc/hosts\n    [ -x \"/etc/init.d/unbound\" ] && [ ! 
-e \"/usr/etc/unbound/unbound.conf.d\" ] && mkdir -p /usr/etc/unbound/unbound.conf.d\n    [ -x \"/etc/init.d/unbound\" ] && service unbound restart &> /dev/null\n  fi\n}\n\n#\n# Tune FPM workers.\n_satellite_tune_fpm_workers() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_tune_fpm_workers\"\n    _debug_proc\n  fi\n  _count_cpu\n  # Set _PHP_FPM_WORKERS to AUTO if it is empty\n  [ -z \"${_PHP_FPM_WORKERS}\" ] && _PHP_FPM_WORKERS=AUTO\n  # If _PHP_FPM_WORKERS is not AUTO and not empty, then check if it is less than 1\n  if [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && [ -n \"${_PHP_FPM_WORKERS}\" ]; then\n    if [ \"${_PHP_FPM_WORKERS}\" -lt 1 ] 2>/dev/null; then\n      _PHP_FPM_WORKERS=AUTO\n    fi\n  fi\n  # If _PHP_FPM_WORKERS is not AUTO, remove non-numeric characters\n  [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && _PHP_FPM_WORKERS=${_PHP_FPM_WORKERS//[^0-9]/}\n  if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n    _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n  else\n    _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n  fi\n  # Set _PHP_FPM_TIMEOUT to AUTO if it is empty\n  [ -z \"${_PHP_FPM_TIMEOUT}\" ] && _PHP_FPM_TIMEOUT=AUTO\n  # If _PHP_FPM_TIMEOUT is not AUTO and not empty, then check if it is between 60 and 180\n  if [ \"${_PHP_FPM_TIMEOUT}\" != \"AUTO\" ] && [ -n \"${_PHP_FPM_TIMEOUT}\" ]; then\n    # If _PHP_FPM_TIMEOUT is not AUTO and not empty, remove non-numeric characters\n    [ \"${_PHP_FPM_TIMEOUT}\" != \"AUTO\" ] && _PHP_FPM_TIMEOUT=${_PHP_FPM_TIMEOUT//[^0-9]/}\n    # If _PHP_FPM_TIMEOUT is outside of the allowed range, use either min or max allowed\n    if [ \"${_PHP_FPM_TIMEOUT}\" -lt 60 ]; then\n      _PHP_FPM_TIMEOUT=60\n    elif [ \"${_PHP_FPM_TIMEOUT}\" -gt 180 ]; then\n      _PHP_FPM_TIMEOUT=180\n    fi\n  else\n    _PHP_FPM_TIMEOUT=180\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _L_PHP_FPM_WORKERS is ${_L_PHP_FPM_WORKERS}\"\n    _msg \"DEBUG: _PHP_FPM_TIMEOUT is ${_PHP_FPM_TIMEOUT}\"\n  fi\n}\n\n#\n# 
Update web user.\n_satellite_web_user_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_web_user_update\"\n    _debug_proc\n  fi\n  _isTest=\"${_WEB}\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ]; then\n    _T_HD=\"/home/${_WEB}/.drush\"\n    _T_TP=\"/home/${_WEB}/.tmp\"\n    _T_TS=\"/home/${_WEB}/.aws\"\n    _T_II=\"${_T_HD}/php.ini\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"DEBUG: ${_WEB} in _satellite_web_user_update A\"\n    fi\n    if [ -d \"/home/${_WEB}/\" ] && [ ! -e \"/home/${_WEB}/.lock\" ]; then\n      if [ -d \"/home/${_WEB}/\" ]; then\n        chattr -i -R /home/${_WEB}/ &> /dev/null\n      fi\n      if [ -d \"/home/${_WEB}/.drush/\" ]; then\n        chattr -i /home/${_WEB}/.drush/\n      fi\n      if [ -e \"${_T_II}\" ]; then\n        chattr -i ${_T_II}\n      fi\n      mkdir -p /home/${_WEB}/.{tmp,drush,aws}\n      touch /home/${_WEB}/.lock\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"DEBUG: ${_WEB} in _satellite_web_user_update B\"\n        _msg \"DEBUG: ARG1 is $1 in _satellite_web_user_update B\"\n      fi\n      _isTest=\"$1\"\n      _isTest=${_isTest//[^a-z0-9]/}\n      if [ ! -z \"${_isTest}\" ]; then\n        _T_PV=$1\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DEBUG: _T_PV is ${_T_PV} in _satellite_web_user_update C\"\n        fi\n      fi\n      if [ ! 
-z \"${_T_PV}\" ] && [ -e \"/opt/php${_T_PV}/etc/php${_T_PV}.ini\" ]; then\n        cp -af /opt/php${_T_PV}/etc/php${_T_PV}.ini ${_T_II}\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DEBUG: _T_PV is ${_T_PV} in _satellite_web_user_update D\"\n        fi\n      else\n        _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n        for e in ${_PHP_V}; do\n          if [ -e \"/opt/php${e}/etc/php${e}.ini\" ]; then\n            cp -af /opt/php${e}/etc/php${e}.ini ${_T_II}\n            _T_PV=${e}\n          fi\n        done\n      fi\n      if [ -e \"${_T_II}\" ]; then\n        _INI=\"open_basedir = \\\".: \\\n          /data/all:      \\\n          /data/conf:     \\\n          /data/disk/all: \\\n          /hdd:           \\\n          /mnt:           \\\n          /opt/php56:     \\\n          /opt/php70:     \\\n          /opt/php71:     \\\n          /opt/php72:     \\\n          /opt/php73:     \\\n          /opt/php74:     \\\n          /opt/php80:     \\\n          /opt/php81:     \\\n          /opt/php82:     \\\n          /opt/php83:     \\\n          /opt/php84:     \\\n          /opt/php85:     \\\n          /opt/tika:      \\\n          /opt/tika7:     \\\n          /opt/tika8:     \\\n          /opt/tika9:     \\\n          /dev/urandom:   \\\n          /srv:           \\\n          /usr/bin:       \\\n          /usr/local/bin: \\\n          /var/second/${_USER}:    \\\n          ${_ROOT}/aegir:          \\\n          ${_ROOT}/backup-exports: \\\n          ${_ROOT}/distro:         \\\n          ${_ROOT}/platforms:      \\\n          ${_ROOT}/static:         \\\n          ${_T_HD}:                \\\n          ${_T_TP}:                \\\n          ${_T_TS}\\\"\"\n        _INI=$(echo \"${_INI}\" | sed \"s/ //g\" 2>&1)\n        _INI=$(echo \"${_INI}\" | sed \"s/open_basedir=/open_basedir = /g\" 2>&1)\n        _INI=${_INI//\\//\\\\\\/}\n        _QTP=${_T_TP//\\//\\\\\\/}\n        sed -i \"s/.*open_basedir =.*/${_INI}/g\"                  
            ${_T_II}\n        wait\n        sed -i \"s/.*session.save_path =.*/session.save_path = ${_QTP}/g\"     ${_T_II}\n        wait\n        sed -i \"s/.*soap.wsdl_cache_dir =.*/soap.wsdl_cache_dir = ${_QTP}/g\" ${_T_II}\n        wait\n        sed -i \"s/.*sys_temp_dir =.*/sys_temp_dir = ${_QTP}/g\"               ${_T_II}\n        wait\n        sed -i \"s/.*upload_tmp_dir =.*/upload_tmp_dir = ${_QTP}/g\"           ${_T_II}\n        wait\n        rm -f ${_T_HD}/.ctrl.php*\n        echo > ${_T_HD}/.ctrl.php${_T_PV}.${_xSrl}.pid\n      fi\n      chmod 700 /home/${_WEB}/\n      chown -R ${_WEB}:www-data /home/${_WEB}/\n      chmod 550 /home/${_WEB}/.drush/\n      chmod 440 /home/${_WEB}/.drush/php.ini\n      rm -f /home/${_WEB}/.lock\n      if [ -d \"/home/${_WEB}/\" ]; then\n        chattr +i /home/${_WEB}/\n      fi\n      if [ -d \"/home/${_WEB}/.drush/\" ]; then\n        chattr +i /home/${_WEB}/.drush/\n      fi\n      if [ -e \"${_T_II}\" ]; then\n        chattr +i ${_T_II}\n      fi\n    fi\n  fi\n}\n\n#\n# Remove web user.\n_satellite_remove_web_user() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_remove_web_user\"\n    _debug_proc\n  fi\n  if [ -d \"/home/${_WEB}/\" ] || [ \"$1\" = \"clean\" ]; then\n    chattr -i -R /home/${_WEB}/ &> /dev/null\n    if [ -d \"/home/${_WEB}/.drush/\" ]; then\n      chattr -i /home/${_WEB}/.drush/\n    fi\n    mkdir -p ${_vBs}/zombie/deleted\n    pkill -9 -f gpg-agent\n    deluser --remove-home --backup-to ${_vBs}/zombie/deleted ${_WEB} &> /dev/null\n    if [ -d \"/home/${_WEB}/\" ]; then\n      rm -rf /home/${_WEB}/ &> /dev/null\n    fi\n  fi\n}\n\n#\n# Add web user.\n_satellite_create_web_user() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_create_web_user\"\n    _debug_proc\n  fi\n  _T_HD=\"/home/${_WEB}/.drush\"\n  _T_II=\"${_T_HD}/php.ini\"\n  _T_ID_EXISTS=$(getent passwd ${_WEB} 2>&1)\n  if [ ! 
-z \"${_T_ID_EXISTS}\" ] && [ -e \"${_T_II}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"DEBUG: ARG1 is $1 in _satellite_create_web_user B\"\n    fi\n    _satellite_web_user_update \"$1\"\n  elif [ -z \"${_T_ID_EXISTS}\" ] || [ ! -e \"${_T_II}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"DEBUG: ARG1 is $1 in _satellite_create_web_user C\"\n    fi\n    _satellite_remove_web_user \"clean\"\n    adduser --force-badname --system --ingroup www-data --home /home/${_WEB} ${_WEB} &> /dev/null\n    _satellite_web_user_update \"$1\"\n  fi\n}\n\n#\n# Tune FPM configuration.\n_satellite_tune_fpm_config() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_tune_fpm_config\"\n    _debug_proc\n  fi\n  _satellite_tune_fpm_workers\n\n  _LIM_FPM=\"${_L_PHP_FPM_WORKERS}\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _LIM_FPM is ${_LIM_FPM}\"\n  fi\n\n  if [ ! -z \"${_CLIENT_OPTION}\" ]; then\n    if [ \"${_CLIENT_OPTION}\" = \"MONSTER\" ] || [ \"${_CLIENT_OPTION}\" = \"CLUSTER\" ]; then\n      _CLIENT_OPTION=MONSTER\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=96\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"ULTRA\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=32\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"PHANTOM\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=16\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"POWER\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"BUS\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=8\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"EDGE\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"AGAIN\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"SSD\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"CLASSIC\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=2\n      fi\n    elif [ \"${_CLIENT_OPTION}\" = \"MINI\" 
] \\\n      || [ \"${_CLIENT_OPTION}\" = \"MICRO\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"QUIET\" ] \\\n      || [ \"${_CLIENT_OPTION}\" = \"HEADSPACE\" ]; then\n      if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n        _LIM_FPM=1\n      fi\n    else\n      _LIM_FPM=2\n    fi\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _LIM_FPM is ${_LIM_FPM}\"\n    _msg \"DEBUG: _CLIENT_OPTION is ${_CLIENT_OPTION}\"\n  fi\n\n  if [ ! -z \"${_CLIENT_CORES}\" ] && [ \"${_CLIENT_CORES}\" -ge 1 ]; then\n    if [ -e \"${_ROOT}/log/cores.txt\" ]; then\n      _CLIENT_CORES=$(cat ${_ROOT}/log/cores.txt 2>/dev/null | tr -d '\\n')\n    fi\n    _CLIENT_CORES=${_CLIENT_CORES//[^0-9]/}\n    if [ ! -z \"${_CLIENT_CORES}\" ] && [ \"${_CLIENT_CORES}\" -ge 1 ]; then\n      _LIM_FPM=$(( _LIM_FPM *= _CLIENT_CORES ))\n    fi\n  fi\n\n  if [ \"${_LIM_FPM}\" -gt 100 ]; then\n    _LIM_FPM=100\n  fi\n\n  if [ \"${_CLIENT_OPTION}\" != \"QUIET\" ]; then\n    _CHILD_MAX_FPM=$(( _LIM_FPM * 2 ))\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DEBUG: _LIM_FPM is ${_LIM_FPM}\"\n    _msg \"DEBUG: _CHILD_MAX_FPM is ${_CHILD_MAX_FPM}\"\n  fi\n\n  _PHP_SV=${_PHP_FPM_VERSION//[^0-9]/}\n  if [ -z \"${_PHP_SV}\" ]; then\n    _PHP_SV=84\n  fi\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  _PHP_OLD_SV=\n  for e in ${_PHP_V}; do\n    if [ -e \"/opt/php${e}/etc/pool.d/${_USER}.conf\" ]; then\n      _PHP_OLD_SV=${e}\n    fi\n  done\n  if [ ! -e \"/var/xdrago/conf/fpm-pool-foo-multi.conf\" ]; then\n    mkdir -p /var/xdrago/conf\n  fi\n  cp -af ${_bldPth}/aegir/conf/php/fpm-pool-foo-multi.conf /var/xdrago/conf/\n  cp -af ${_bldPth}/aegir/conf/php/fpm-pool-foo.conf /var/xdrago/conf/\n  if [ ! -z \"${_PHP_FPM_TIMEOUT}\" ] && [ ! 
-z \"${_PHP_SV}\" ]; then\n    if [ -e \"/var/xdrago/conf/fpm-pool-foo.conf\" ]; then\n      rm -f /opt/php*/etc/pool.d/${_USER}.conf\n      cp -af /var/xdrago/conf/fpm-pool-foo.conf /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf\n    fi\n    if [ -e \"/opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf\" ]; then\n      ### create or update special system user if needed\n      if [ -e \"/home/${_WEB}/.drush/php.ini\" ]; then\n        _OLD_PHP_IN_USE=$(grep \"/lib/php\" /home/${_WEB}/.drush/php.ini 2>&1)\n        _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n        for e in ${_PHP_V}; do\n          if [[ \"${_OLD_PHP_IN_USE}\" =~ \"php${e}\" ]]; then\n            if [ \"${e}\" != \"${_PHP_SV}\" ] \\\n              || [ ! -e \"/home/${_WEB}/.drush/.ctrl.php${_PHP_SV}.${_xSrl}.pid\" ]; then\n              _satellite_web_user_update \"${_PHP_SV}\"\n            fi\n          fi\n        done\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DEBUG: _PHP_SV is ${_PHP_SV} in _satellite_create_web_user A\"\n        fi\n        _satellite_create_web_user \"${_PHP_SV}\"\n      fi\n      ### create or update special system user if needed\n      sed -i \"s/.ftp/.web/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n      wait\n      sed -i \"s/\\/data\\/disk\\/foo\\/.tmp/\\/home\\/foo.web\\/.tmp/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n      wait\n      sed -i \"s/foo/${_USER}/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n      wait\n      sed -i \"s/THISPOOL/${_USER}/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n      wait\n      if [[ \"${_PHP_SV}\" == 8* ]] && [ -e \"/opt/etc/fpm/fpm-pool-common-modern.conf\" ]; then\n        sed -i \"s/fpm-pool-common.conf/fpm-pool-common-modern.conf/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n        wait\n      elif [[ \"${_PHP_SV}\" == 7* ]] && [ -e \"/opt/etc/fpm/fpm-pool-common-legacy.conf\" ]; then\n        sed -i 
\"s/fpm-pool-common.conf/fpm-pool-common-legacy.conf/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n        wait\n      fi\n      if [ ! -z \"${_PHP_FPM_DENY}\" ]; then\n        sed -i \"s/passthru,/${_PHP_FPM_DENY},/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n        wait\n      fi\n    fi\n    _PHP_TO=\"${_PHP_FPM_TIMEOUT}s\"\n    sed -i \"s/180s/${_PHP_TO}/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n    wait\n    if [ ! -z \"${_CHILD_MAX_FPM}\" ] && [ \"${_CHILD_MAX_FPM}\" -ge 8 ]; then\n      sed -i \"s/pm.max_children =.*/pm.max_children = ${_CHILD_MAX_FPM}/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n      wait\n    else\n      sed -i \"s/pm.max_children =.*/pm.max_children = 8/g\" /opt/php${_PHP_SV}/etc/pool.d/${_USER}.conf &> /dev/null\n      wait\n    fi\n    mkdir -p /var/www/phpcache/${_USER}/${_USER}.{85,84,83,82,81,80,74,73,72,71,70,56}\n    chgrp www-data /var/www/phpcache/${_USER}/${_USER}.{85,84,83,82,81,80,74,73,72,71,70,56}\n    chmod 770 /var/www/phpcache/${_USER}/${_USER}.{85,84,83,82,81,80,74,73,72,71,70,56}\n    if [ ! -d \"${_ROOT}/tmp\" ]; then\n      mkdir -p ${_ROOT}/tmp\n      chown ${_USER}:users ${_ROOT}/tmp &> /dev/null\n    fi\n    if [ ! 
-z \"${_PHP_OLD_SV}\" ] \\\n      && [ -e \"/etc/init.d/php${_PHP_OLD_SV}-fpm\" ]; then\n      _mrun \"service php${_PHP_OLD_SV}-fpm reload\"\n    fi\n    if [ -e \"/etc/init.d/php${_PHP_SV}-fpm\" ]; then\n      _mrun \"service php${_PHP_SV}-fpm reload\"\n    fi\n  fi\n}\n\n#\n# Make sure that username is unique and not restricted.\n_satellite_check_id() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_check_id\"\n    _debug_proc\n  fi\n  _ID_EXISTS=$(getent passwd ${_USER} 2>&1)\n  if [ -z \"${_ID_EXISTS}\" ]; then\n    _DO_NOTHING=YES\n  elif [[ \"${_ID_EXISTS}\" =~ \"${_USER}\" ]]; then\n    _msg \"ERROR: ${_USER} username is already taken\"\n    _msg \"Please choose different _USER\"\n    _clean_pid_exit\n  else\n    _msg \"ERROR: ${_USER} username check failed\"\n    _msg \"Please try different _USER\"\n    _clean_pid_exit\n  fi\n  if [ \"${_USER}\" = \"admin\" ] \\\n    || [ \"${_USER}\" = \"hostmaster\" ] \\\n    || [ \"${_USER}\" = \"barracuda\" ] \\\n    || [ \"${_USER}\" = \"octopus\" ] \\\n    || [ \"${_USER}\" = \"boa\" ] \\\n    || [ \"${_USER}\" = \"all\" ]; then\n    _msg \"ERROR: ${_USER} is a restricted username\"\n    _msg \"ERROR: Please choose a different _USER\"\n    _clean_pid_exit\n  elif [[ \"${_USER}\" =~ \"drupal\" ]] \\\n    || [[ \"${_USER}\" =~ \"drush\" ]] \\\n    || [[ \"${_USER}\" =~ \"sites\" ]] \\\n    || [[ \"${_USER}\" =~ \"default\" ]]; then\n    _msg \"ERROR: ${_USER} includes restricted keyword\"\n    _msg \"ERROR: Please choose a different _USER\"\n    _clean_pid_exit\n  fi\n  _REGEX=\"^[[:digit:]]\"\n  if [[ \"${_USER}\" =~ \"${_REGEX}\" ]]; then\n    _msg \"ERROR: ${_USER} is a wrong username\"\n    _msg \"ERROR: The correct username should start with a letter, not digit\"\n    _clean_pid_exit\n  fi\n}\n\n#\n# Enable chattr.\n_enable_chattr() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _enable_chattr\"\n    _debug_proc\n  fi\n  _isTest=\"$1\"\n  
_isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ -d \"/home/$1/platforms/\" ]; then\n      chattr +i /home/$1/platforms/\n      chattr +i /home/$1/platforms/* &> /dev/null\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr +i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr +i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr +i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr +i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n#\n# Disable chattr.\n_disable_chattr() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _disable_chattr\"\n    _debug_proc\n  fi\n  _isTest=\"$1\"\n  _isTest=${_isTest//[^a-z0-9]/}\n  if [ ! -z \"${_isTest}\" ] && [ -d \"/home/$1/\" ]; then\n    if [ -d \"/home/$1/platforms/\" ]; then\n      chattr -i /home/$1/platforms/\n      chattr -i /home/$1/platforms/* &> /dev/null\n    fi\n    if [ -d \"/home/$1/.drush/\" ]; then\n      chattr -i /home/$1/.drush/\n    fi\n    if [ -d \"/home/$1/.drush/usr/\" ]; then\n      chattr -i /home/$1/.drush/usr/\n    fi\n    if [ -f \"/home/$1/.drush/php.ini\" ]; then\n      chattr -i /home/$1/.drush/*.ini\n    fi\n    if [ -d \"/home/$1/.bazaar/\" ]; then\n      chattr -i /home/$1/.bazaar/\n    fi\n  fi\n}\n\n#\n# Read or create Octopus cnf file.\n_satellite_cnf() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_cnf\"\n    _debug_proc\n  fi\n  if [ ! 
-e \"${_octCnf}\" ]; then\n    if [[ \"${_MY_OCTO_EMAIL}\" =~ \"omega8.cc\" ]] \\\n      || [[ \"${_CLIENT_EMAIL}\" =~ \"omega8.cc\" ]]; then\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" != \"YES\" ]; then\n        _msg \"EXIT: You must enter **your** valid email address\"\n        _msg \"EXIT: in the _MY_OCTO_EMAIL **and** _CLIENT_EMAIL variables\"\n        _clean_pid_exit\n      fi\n    fi\n    _msg \"INFO: Creating your ${_octCnf} config file\"\n    sleep 1\n    echo \"###\"                                                   > ${_octCnf}\n    echo \"### Configuration created on ${_NOW} with\"            >> ${_octCnf}\n    echo \"### Octopus version ${_X_VERSION}\"                    >> ${_octCnf}\n    echo \"###\"                                                  >> ${_octCnf}\n    echo \"_USER=\\\"${_USER}\\\"\"                                   >> ${_octCnf}\n    echo \"_MY_OCTO_EMAIL=\\\"${_MY_OCTO_EMAIL}\\\"\"                 >> ${_octCnf}\n    echo \"_PLATFORMS_LIST=\\\"${_PLATFORMS_LIST}\\\"\"               >> ${_octCnf}\n    echo \"_AUTOPILOT=${_AUTOPILOT}\"                             >> ${_octCnf}\n    echo \"_HM_ONLY=${_HM_ONLY}\"                                 >> ${_octCnf}\n    echo \"_DEBUG_MODE=${_DEBUG_MODE}\"                           >> ${_octCnf}\n    echo \"_DL_MODE=\\\"${_DL_MODE}\\\"\"                             >> ${_octCnf}\n    echo \"_MY_OWNIP=${_MY_OWNIP}\"                               >> ${_octCnf}\n    echo \"_FORCE_GIT_MIRROR=\\\"${_FORCE_GIT_MIRROR}\\\"\"           >> ${_octCnf}\n    echo \"_THIS_DB_HOST=${_THIS_DB_HOST}\"                       >> ${_octCnf}\n    echo \"_THIS_DB_PORT=${_THIS_DB_PORT}\"                       >> ${_octCnf}\n    echo \"_DNS_SETUP_TEST=${_DNS_SETUP_TEST}\"                   >> ${_octCnf}\n    echo \"_HOT_SAUCE=${_HOT_SAUCE}\"                             >> ${_octCnf}\n    echo \"_USE_CURRENT=${_USE_CURRENT}\"                         >> ${_octCnf}\n    echo 
\"_DEL_OLD_EMPTY_PLATFORMS=${_DEL_OLD_EMPTY_PLATFORMS}\" >> ${_octCnf}\n    echo \"_DEL_OLD_BACKUPS=${_DEL_OLD_BACKUPS}\"                 >> ${_octCnf}\n    echo \"_DEL_OLD_TMP=${_DEL_OLD_TMP}\"                         >> ${_octCnf}\n    echo \"_LOCAL_NETWORK_IP=${_LOCAL_NETWORK_IP}\"               >> ${_octCnf}\n    echo \"_PHP_FPM_VERSION=${_PHP_FPM_VERSION}\"                 >> ${_octCnf}\n    echo \"_PHP_CLI_VERSION=${_PHP_CLI_VERSION}\"                 >> ${_octCnf}\n    echo \"_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\"                 >> ${_octCnf}\n    echo \"_PHP_FPM_TIMEOUT=${_PHP_FPM_TIMEOUT}\"                 >> ${_octCnf}\n    echo \"_PHP_FPM_DENY=\\\"${_PHP_FPM_DENY}\\\"\"                   >> ${_octCnf}\n    echo \"_STRONG_PASSWORDS=${_STRONG_PASSWORDS}\"               >> ${_octCnf}\n    echo \"_SQL_CONVERT=${_SQL_CONVERT}\"                         >> ${_octCnf}\n    echo \"_RESERVED_RAM=${_RESERVED_RAM}\"                       >> ${_octCnf}\n    echo \"###\"                                                  >> ${_octCnf}\n    echo \"_DOMAIN=\\\"${_DOMAIN}\\\"\"                               >> ${_octCnf}\n    echo \"_CLIENT_EMAIL=\\\"${_CLIENT_EMAIL}\\\"\"                   >> ${_octCnf}\n    echo \"_CLIENT_OPTION=\\\"${_CLIENT_OPTION}\\\"\"                 >> ${_octCnf}\n    echo \"_CLIENT_SUBSCR=\\\"${_CLIENT_SUBSCR}\\\"\"                 >> ${_octCnf}\n    echo \"_CLIENT_CORES=\\\"${_CLIENT_CORES}\\\"\"                   >> ${_octCnf}\n    echo \"###\"                                                  >> ${_octCnf}\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Reading your ${_octCnf} config file\"\n    fi\n    sleep 1\n    _PHP_FPM_WORKERS_TEST=$(grep _PHP_FPM_WORKERS ${_octCnf} 2>&1)\n    if [[ \"${_PHP_FPM_WORKERS_TEST}\" =~ \"_PHP_FPM_WORKERS\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\" >> ${_octCnf}\n    fi\n    _PHP_FPM_TIMEOUT_TEST=$(grep _PHP_FPM_TIMEOUT ${_octCnf} 
2>&1)\n    if [[ \"${_PHP_FPM_TIMEOUT_TEST}\" =~ \"_PHP_FPM_TIMEOUT\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_PHP_FPM_TIMEOUT=${_PHP_FPM_TIMEOUT}\" >> ${_octCnf}\n    fi\n    _PHP_FPM_DENY_TEST=$(grep _PHP_FPM_DENY ${_octCnf} 2>&1)\n    if [[ \"${_PHP_FPM_DENY_TEST}\" =~ \"_PHP_FPM_DENY\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_PHP_FPM_DENY=\\\"${_PHP_FPM_DENY}\\\"\" >> ${_octCnf}\n    fi\n     _RESERVED_RAM_TEST=$(grep _RESERVED_RAM ${_octCnf} 2>&1)\n    if [[ \"${_RESERVED_RAM_TEST}\" =~ \"_RESERVED_RAM\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_RESERVED_RAM=${_RESERVED_RAM}\" >> ${_octCnf}\n    fi\n    _PHP_FPM_VERSION_TEST=$(grep _PHP_FPM_VERSION ${_octCnf} 2>&1)\n    if [[ \"${_PHP_FPM_VERSION_TEST}\" =~ \"_PHP_FPM_VERSION\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_PHP_FPM_VERSION=${_PHP_FPM_VERSION}\" >> ${_octCnf}\n    fi\n    _PHP_CLI_VERSION_TEST=$(grep _PHP_CLI_VERSION ${_octCnf} 2>&1)\n    if [[ \"${_PHP_CLI_VERSION_TEST}\" =~ \"_PHP_CLI_VERSION\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_PHP_CLI_VERSION=${_PHP_CLI_VERSION}\" >> ${_octCnf}\n    fi\n\n    if [ ! 
-e \"${_ROOT}/static/control/fpm.info\" ]; then\n      if [ -e \"${_ROOT}/log/fpm.txt\" ]; then\n        _PHP_FPM_VERSION=$(cat ${_ROOT}/log/fpm.txt 2>&1)\n        _PHP_FPM_VERSION=$(echo -n ${_PHP_FPM_VERSION} | tr -d \"\\n\" 2>&1)\n      else\n        _PHP_FPM_VERSION=8.4\n      fi\n      echo ${_PHP_FPM_VERSION} > ${_ROOT}/static/control/fpm.info\n    fi\n\n    if [ -e \"${_ROOT}/static/control/fpm.info\" ]; then\n      _PHP_FPM_VERSION=$(cat ${_ROOT}/static/control/fpm.info 2>&1)\n      _PHP_FPM_VERSION=$(echo -n ${_PHP_FPM_VERSION} | tr -d \"\\n\" 2>&1)\n      if [ \"${_PHP_FPM_VERSION}\" = \"8.5\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"8.4\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"8.3\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"8.2\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"8.1\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"8.0\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"7.4\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"7.3\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"7.2\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"7.1\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"7.0\" ] \\\n        || [ \"${_PHP_FPM_VERSION}\" = \"5.6\" ]; then\n        if [ \"${_PHP_FPM_VERSION}\" = \"8.5\" ] \\\n          && [ ! -x \"/opt/php85/bin/php\" ]; then\n          if [ -x \"/opt/php84/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.4\n          elif [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.3\n          elif [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.2\n          elif [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.1\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"8.4\" ] \\\n          && [ ! 
-x \"/opt/php84/bin/php\" ]; then\n          if [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.3\n          elif [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.2\n          elif [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.1\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"8.3\" ] \\\n          && [ ! -x \"/opt/php83/bin/php\" ]; then\n          if [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.2\n          elif [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.1\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"8.2\" ] \\\n          && [ ! -x \"/opt/php82/bin/php\" ]; then\n          if [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.1\n          elif [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.3\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"8.1\" ] \\\n          && [ ! -x \"/opt/php81/bin/php\" ]; then\n          if [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.2\n          elif [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.3\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"8.0\" ] \\\n          && [ ! -x \"/opt/php80/bin/php\" ]; then\n          if [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.1\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"7.4\" ] \\\n          && [ ! -x \"/opt/php74/bin/php\" ]; then\n          if [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_FPM_VERSION=8.1\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"7.3\" ] \\\n          && [ ! -x \"/opt/php73/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_FPM_VERSION=7.4\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"7.2\" ] \\\n          && [ ! 
-x \"/opt/php72/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_FPM_VERSION=7.4\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"7.1\" ] \\\n          && [ ! -x \"/opt/php71/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_FPM_VERSION=7.4\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"7.0\" ] \\\n          && [ ! -x \"/opt/php70/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_FPM_VERSION=7.4\n          fi\n        elif [ \"${_PHP_FPM_VERSION}\" = \"5.6\" ] \\\n          && [ ! -x \"/opt/php56/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_FPM_VERSION=7.4\n          fi\n        fi\n        if [ -n \"${_PHP_FPM_VERSION}\" ]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"VARS: Set _PHP_FPM_VERSION to ${_PHP_FPM_VERSION}\"\n          fi\n          sed -i \"s/^_PHP_FPM_VERSION=.*/_PHP_FPM_VERSION=${_PHP_FPM_VERSION}/g\" ${_octCnf}\n          echo ${_PHP_FPM_VERSION} > ${_ROOT}/log/fpm.txt\n          echo ${_PHP_FPM_VERSION} > ${_ROOT}/static/control/fpm.info\n          chown ${_USER}.ftp:users ${_ROOT}/static/control/fpm.info\n        fi\n      fi\n    fi\n\n    if [ ! 
-e \"${_ROOT}/static/control/cli.info\" ]; then\n      if [ -e \"${_ROOT}/log/cli.txt\" ]; then\n        _PHP_CLI_VERSION=$(cat ${_ROOT}/log/cli.txt 2>&1)\n        _PHP_CLI_VERSION=$(echo -n ${_PHP_CLI_VERSION} | tr -d \"\\n\" 2>&1)\n      else\n        _PHP_CLI_VERSION=8.4\n      fi\n      echo ${_PHP_CLI_VERSION} > ${_ROOT}/static/control/cli.info\n    fi\n\n    if [ -e \"${_ROOT}/static/control/cli.info\" ]; then\n      _PHP_CLI_VERSION=$(cat ${_ROOT}/static/control/cli.info 2>&1)\n      _PHP_CLI_VERSION=$(echo -n ${_PHP_CLI_VERSION} | tr -d \"\\n\" 2>&1)\n      if [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"8.2\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"8.1\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"8.0\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"7.4\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"7.3\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"7.2\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"7.1\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"7.0\" ] \\\n        || [ \"${_PHP_CLI_VERSION}\" = \"5.6\" ]; then\n        if [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] \\\n          && [ ! -x \"/opt/php85/bin/php\" ]; then\n          if [ -x \"/opt/php84/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.4\n          elif [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.3\n          elif [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.2\n          elif [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.1\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] \\\n          && [ ! 
-x \"/opt/php84/bin/php\" ]; then\n          if [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.3\n          elif [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.2\n          elif [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.1\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] \\\n          && [ ! -x \"/opt/php83/bin/php\" ]; then\n          if [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.2\n          elif [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.1\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"8.2\" ] \\\n          && [ ! -x \"/opt/php82/bin/php\" ]; then\n          if [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.1\n          elif [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.3\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"8.1\" ] \\\n          && [ ! -x \"/opt/php81/bin/php\" ]; then\n          if [ -x \"/opt/php82/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.2\n          elif [ -x \"/opt/php83/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.3\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"8.0\" ] \\\n          && [ ! -x \"/opt/php80/bin/php\" ]; then\n          if [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.1\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"7.4\" ] \\\n          && [ ! -x \"/opt/php74/bin/php\" ]; then\n          if [ -x \"/opt/php81/bin/php\" ]; then\n            _PHP_CLI_VERSION=8.1\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"7.3\" ] \\\n          && [ ! -x \"/opt/php73/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_CLI_VERSION=7.4\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"7.2\" ] \\\n          && [ ! 
-x \"/opt/php72/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_CLI_VERSION=7.4\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"7.1\" ] \\\n          && [ ! -x \"/opt/php71/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_CLI_VERSION=7.4\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"7.0\" ] \\\n          && [ ! -x \"/opt/php70/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_CLI_VERSION=7.4\n          fi\n        elif [ \"${_PHP_CLI_VERSION}\" = \"5.6\" ] \\\n          && [ ! -x \"/opt/php56/bin/php\" ]; then\n          if [ -x \"/opt/php74/bin/php\" ]; then\n            _PHP_CLI_VERSION=7.4\n          fi\n        fi\n        if [ -n \"${_PHP_CLI_VERSION}\" ]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"VARS: Set _PHP_CLI_VERSION to ${_PHP_CLI_VERSION}\"\n          fi\n          sed -i \"s/^_PHP_CLI_VERSION=.*/_PHP_CLI_VERSION=${_PHP_CLI_VERSION}/g\" ${_octCnf}\n          echo ${_PHP_CLI_VERSION} > ${_ROOT}/log/cli.txt\n          echo ${_PHP_CLI_VERSION} > ${_ROOT}/static/control/cli.info\n          chown ${_USER}.ftp:users ${_ROOT}/static/control/cli.info\n        fi\n      fi\n    fi\n\n    _O_CONTRIB_UP_TEST=$(grep _O_CONTRIB_UP ${_octCnf} 2>&1)\n    if [[ \"${_O_CONTRIB_UP_TEST}\" =~ \"_O_CONTRIB_UP\" ]]; then\n      sed -i \"s/^_O_CONTRIB_UP.*//g\" ${_octCnf} &> /dev/null\n      wait\n      sed -i \"/^$/d\" ${_octCnf} &> /dev/null\n      wait\n    fi\n\n    _ALLOW_UNSUPPORTED_TEST=$(grep _ALLOW_UNSUPPORTED ${_octCnf} 2>&1)\n    if [[ \"${_ALLOW_UNSUPPORTED_TEST}\" =~ \"_ALLOW_UNSUPPORTED\" ]]; then\n      sed -i \"s/^_ALLOW_UNSUPPORTED.*//g\" ${_octCnf} &> /dev/null\n      wait\n      sed -i \"/^$/d\" ${_octCnf} &> /dev/null\n      wait\n    fi\n\n    _USE_STOCK_TEST=$(grep _USE_STOCK ${_octCnf} 2>&1)\n    if [[ \"${_USE_STOCK_TEST}\" =~ \"_USE_STOCK\" ]]; then\n      sed -i 
\"s/^_USE_STOCK.*//g\" ${_octCnf} &> /dev/null\n      wait\n      sed -i \"/^$/d\" ${_octCnf} &> /dev/null\n      wait\n    fi\n\n    _HTTP_WILDCARD_TEST=$(grep _HTTP_WILDCARD ${_octCnf} 2>&1)\n    if [[ \"${_HTTP_WILDCARD_TEST}\" =~ \"_HTTP_WILDCARD\" ]]; then\n      sed -i \"s/^_HTTP_WILDCARD.*//g\" ${_octCnf} &> /dev/null\n      wait\n      sed -i \"/^$/d\" ${_octCnf} &> /dev/null\n      wait\n    fi\n\n    _DEL_OLD_EMPTY_PLATFORMS_TEST=$(grep _DEL_OLD_EMPTY_PLATFORMS ${_octCnf} 2>&1)\n    if [[ \"${_DEL_OLD_EMPTY_PLATFORMS_TEST}\" =~ \"_DEL_OLD_EMPTY_PLATFORMS\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_DEL_OLD_EMPTY_PLATFORMS=${_DEL_OLD_EMPTY_PLATFORMS}\" >> ${_octCnf}\n    fi\n\n    _DEL_OLD_BACKUPS_TEST=$(grep _DEL_OLD_BACKUPS ${_octCnf} 2>&1)\n    if [[ \"${_DEL_OLD_BACKUPS_TEST}\" =~ \"_DEL_OLD_BACKUPS\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_DEL_OLD_BACKUPS=${_DEL_OLD_BACKUPS}\" >> ${_octCnf}\n    fi\n\n    _DEL_OLD_TMP_TEST=$(grep _DEL_OLD_TMP ${_octCnf} 2>&1)\n    if [[ \"${_DEL_OLD_TMP_TEST}\" =~ \"_DEL_OLD_TMP\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_DEL_OLD_TMP=${_DEL_OLD_TMP}\" >> ${_octCnf}\n    fi\n\n    _STRONG_PASSWORDS_TEST=$(grep _STRONG_PASSWORDS ${_octCnf} 2>&1)\n    if [[ \"${_STRONG_PASSWORDS_TEST}\" =~ \"_STRONG_PASSWORDS\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_STRONG_PASSWORDS=${_STRONG_PASSWORDS}\" >> ${_octCnf}\n    fi\n\n    _SQL_CONVERT_TEST=$(grep _SQL_CONVERT ${_octCnf} 2>&1)\n    if [[ \"${_SQL_CONVERT_TEST}\" =~ \"_SQL_CONVERT\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_SQL_CONVERT=${_SQL_CONVERT}\" >> ${_octCnf}\n    fi\n\n    _DL_MODE_TEST=$(grep _DL_MODE ${_octCnf} 2>&1)\n    if [[ ! 
\"${_DL_MODE_TEST}\" =~ \"_DL_MODE\" ]]; then\n      if [ -n \"${_DL_MODE}\" ]; then\n        echo \"_DL_MODE=${_DL_MODE}\" >> ${_octCnf}\n        export _DL_MODE=${_DL_MODE}\n      else\n        echo \"_DL_MODE=BATCH\" >> ${_octCnf}\n        export _DL_MODE=BATCH\n      fi\n    fi\n\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"NOTE! Please review all config options displayed below\"\n      sleep 3\n      echo \" \"\n      while read line; do\n        echo \"$line\"\n      done < ${_octCnf}\n      echo \" \"\n    fi\n\n    if [ -e \"${_octCnf}\" ]; then\n      source ${_octCnf}\n    fi\n    _PRE_PLATFORMS_LIST=\"${_PLATFORMS_LIST}\"\n    if [ -e \"${_ROOT}/static/control/platforms.info\" ]; then\n      _PLATFORMS_LIST=$(cat ${_ROOT}/static/control/platforms.info 2>&1)\n      _PLATFORMS_LIST=$(echo -n ${_PLATFORMS_LIST} | tr -d \"\\n\" 2>&1)\n      _PLATFORMS_LIST=${_PLATFORMS_LIST//[^ 0-9A-Z]/}\n      _msg \"NOTE! Custom Platforms List: ${_PLATFORMS_LIST}\"\n      if [ -z \"${_PLATFORMS_LIST}\" ]; then\n        _PLATFORMS_LIST=\"${_PRE_PLATFORMS_LIST}\"\n        _msg \"NOTE! Default Platforms List: ${_PLATFORMS_LIST}\"\n      fi\n    fi\n    if [[ \"${_MY_OCTO_EMAIL}\" =~ \"omega8.cc\" ]]; then\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" != \"YES\" ]; then\n        _msg \"EXIT: You must enter **your** valid email address in the\"\n        _msg \"EXIT: _MY_OCTO_EMAIL variable in the ${_octCnf} file\"\n        _clean_pid_exit\n      fi\n    fi\n    if [ \"${_PHP_FPM_VERSION}\" = \"8.5\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ]; then\n      if [ ! -x \"/opt/php85/bin/php\" ]; then\n        if [ -x \"/opt/php84/bin/php\" ]; then\n          _PHP_FPM_VERSION=8.4\n          _PHP_CLI_VERSION=8.4\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"8.4\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ]; then\n      if [ ! 
-x \"/opt/php84/bin/php\" ]; then\n        if [ -x \"/opt/php83/bin/php\" ]; then\n          _PHP_FPM_VERSION=8.3\n          _PHP_CLI_VERSION=8.3\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"8.3\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ]; then\n      if [ ! -x \"/opt/php83/bin/php\" ]; then\n        if [ -x \"/opt/php84/bin/php\" ]; then\n          _PHP_FPM_VERSION=8.4\n          _PHP_CLI_VERSION=8.4\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"8.2\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"8.2\" ]; then\n      if [ ! -x \"/opt/php82/bin/php\" ]; then\n        if [ -x \"/opt/php83/bin/php\" ]; then\n          _PHP_FPM_VERSION=8.3\n          _PHP_CLI_VERSION=8.3\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"8.1\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"8.1\" ]; then\n      if [ ! -x \"/opt/php81/bin/php\" ]; then\n        if [ -x \"/opt/php83/bin/php\" ]; then\n          _PHP_FPM_VERSION=8.3\n          _PHP_CLI_VERSION=8.3\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"8.0\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"8.0\" ]; then\n      if [ ! -x \"/opt/php80/bin/php\" ]; then\n        if [ -x \"/opt/php83/bin/php\" ]; then\n          _PHP_FPM_VERSION=8.3\n          _PHP_CLI_VERSION=8.3\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"7.4\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"7.4\" ]; then\n      if [ ! 
-x \"/opt/php74/bin/php\" ]; then\n        if [ -x \"/opt/php83/bin/php\" ]; then\n          _PHP_FPM_VERSION=8.3\n          _PHP_CLI_VERSION=8.3\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"7.3\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"7.3\" ]; then\n      if [ ! -x \"/opt/php73/bin/php\" ]; then\n        if [ -x \"/opt/php74/bin/php\" ]; then\n          _PHP_FPM_VERSION=7.4\n          _PHP_CLI_VERSION=7.4\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"7.2\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"7.2\" ]; then\n      if [ ! -x \"/opt/php72/bin/php\" ]; then\n        if [ -x \"/opt/php74/bin/php\" ]; then\n          _PHP_FPM_VERSION=7.4\n          _PHP_CLI_VERSION=7.4\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"7.1\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"7.1\" ]; then\n      if [ ! -x \"/opt/php71/bin/php\" ]; then\n        if [ -x \"/opt/php74/bin/php\" ]; then\n          _PHP_FPM_VERSION=7.4\n          _PHP_CLI_VERSION=7.4\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"7.0\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"7.0\" ]; then\n      if [ ! -x \"/opt/php70/bin/php\" ]; then\n        if [ -x \"/opt/php74/bin/php\" ]; then\n          _PHP_FPM_VERSION=7.4\n          _PHP_CLI_VERSION=7.4\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    elif [ \"${_PHP_FPM_VERSION}\" = \"5.6\" ] \\\n      || [ \"${_PHP_CLI_VERSION}\" = \"5.6\" ]; then\n      if [ ! 
-x \"/opt/php56/bin/php\" ]; then\n        if [ -x \"/opt/php74/bin/php\" ]; then\n          _PHP_FPM_VERSION=7.4\n          _PHP_CLI_VERSION=7.4\n        else\n          _PHP_FPM_VERSION=\n          _PHP_CLI_VERSION=\n        fi\n      fi\n    else\n      _PHP_FPM_VERSION=8.4\n      _PHP_CLI_VERSION=8.4\n    fi\n    if [ -z \"${_PHP_FPM_VERSION}\" ] || [ -z \"${_PHP_CLI_VERSION}\" ]; then\n      _msg \"EXIT: You must specify already installed PHP version\"\n      _msg \"EXIT: in both _PHP_FPM_VERSION and _PHP_CLI_VERSION\"\n      _clean_pid_exit\n    fi\n  fi\n\n  if [ ! -z \"${_LOCAL_NETWORK_IP}\" ] \\\n    && [ \"${_THIS_HOST}\" != \"aegir.local\" ]; then\n    _DNS_SETUP_TEST=NO\n    _MY_OWNIP=\"${_LOCAL_NETWORK_IP}\"\n  fi\n}\n\n#\n# Fix advanced cron IP for cURL requests.\n_satellite_fix_cron_curl() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_fix_cron_curl\"\n    _debug_proc\n  fi\n  if [ -e \"${_ROOT}/.drush/hostmaster.alias.drushrc.php\" ]; then\n    _THIS_HM_ROOT=$(cat ${_ROOT}/.drush/hostmaster.alias.drushrc.php \\\n      | grep \"root'\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $3}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    _pthA=\"profiles/hostmaster/modules/aegir/hosting/cron\"\n    _pthB=\"hosting_cron.module\"\n    if [ -e \"${_THIS_HM_ROOT}/${_pthA}/${_pthB}\" ]; then\n      sed -i \"s/127.0.0.1/${_CRON_IP}/g\" ${_THIS_HM_ROOT}/${_pthA}/${_pthB}\n    fi\n  fi\n}\n\n#\n# Fix multi-IP cron access.\n_satellite_fix_multi_ip_cron_access() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_fix_multi_ip_cron_access\"\n    _debug_proc\n  fi\n  [ -e \"/root/.local.IP.list.allow\" ] && rm -f /root/.local.IP.list.allow\n  for _IP in `cat /root/.local.IP.list \\\n    | cut -d '#' -f1 \\\n    | sort \\\n    | uniq \\\n    | tr -d \"\\s\"`;do echo \"  allow        ${_IP};\" >> \\\n      /root/.local.IP.list.allow;done\n  echo \"  allow 127.0.0.1;\" >> /root/.local.IP.list.allow\n  echo \"  deny 
all;\" >> /root/.local.IP.list.allow\n\n  sed -i \"s/allow 127.0.0.1;//g; s/ *$//g; /^$/d\" \\\n    ${_octTpl}/Inc/vhost_include.tpl.php\n  wait\n  sed -i '/deny all;/ {r /root/.local.IP.list.allow\nd;};' ${_octTpl}/Inc/vhost_include.tpl.php\n  wait\n\n  sed -i \"s/allow 127.0.0.1;//g; s/ *$//g; /^$/d\" \\\n    ${_octTpl}/subdir.tpl.php\n  wait\n  sed -i '/deny all;/ {r /root/.local.IP.list.allow\nd;};' ${_octTpl}/subdir.tpl.php\n  wait\n\n  sed -i \"s/allow 127.0.0.1;//g; s/ *$//g; /^$/d\" \\\n    ${_octInc}/nginx_vhost_common.conf\n  wait\n  sed -i '/deny all;/ {r /root/.local.IP.list.allow\nd;};' ${_octInc}/nginx_vhost_common.conf\n  wait\n}\n\n#\n# Sub Force advanced Nginx configuration.\n_satellite_sub_satellite_force_advanced_nginx_config() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_sub_satellite_force_advanced_nginx_config\"\n    _debug_proc\n  fi\n  if [ -d \"${_octInc}\" ]; then\n    _PHP_SV=${_PHP_FPM_VERSION//[^0-9]/}\n    if [ -z \"${_PHP_SV}\" ]; then\n      _PHP_SV=84\n    fi\n    if [ -e \"/opt/php${_PHP_SV}/etc/php${_PHP_SV}-fpm.conf\" ]; then\n      sed -i \"s/set.*user_socket.*/set \\$user_socket \\\"${_USER}\\\";/g\" ${_octTpl}/Inc/vhost_include.tpl.php &> /dev/null\n      sed -i \"s/set.*user_socket.*/set \\$user_socket \\\"${_USER}\\\";/g\" ${_octTpl}/subdir.tpl.php            &> /dev/null\n      sed -i \"s/set.*user_socket.*/set \\$user_socket \\\"${_USER}\\\";/g\" ${_octInc}/nginx_vhost_common.conf   &> /dev/null\n    fi\n    if [ ! -z \"${_CUSTOM_COLLATION_SQL}\" ]; then\n      _SITES_COLLATION_SQL=${_CUSTOM_COLLATION_SQL}\n    fi\n    if [ ! 
-z \"${_SITES_COLLATION_SQL}\" ]; then\n      sed -i \"s/utf8mb4_unicode_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_10.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_unicode_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_9.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_unicode_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_8.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_unicode_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_7.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_unicode_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_6.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_general_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_10.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_general_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_9.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_general_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_8.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_general_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_7.tpl.php &> /dev/null\n      sed -i \"s/utf8mb4_general_ci/${_SITES_COLLATION_SQL}/g\" ${_octSetTpl}/provision_drupal_settings_6.tpl.php &> /dev/null\n    fi\n    _CRON_IP=${_THISHTIP//[^0-9.]/}\n    if [ ! -e \"/root/.local.IP.list\" ]; then\n      rm -f /root/.tmp.IP.list*\n      rm -f /root/.local.IP.list*\n      for _IP in `hostname -I`;do echo ${_IP} >> /root/.tmp.IP.list;done\n      for _IP in `cat /root/.tmp.IP.list \\\n        | sort \\\n        | uniq`;do echo \"${_IP} # local IP address\" >> /root/.local.IP.list;done\n      rm -f /root/.tmp.IP.list*\n    fi\n    _IP_IF_PRESENT=$(grep \"${_CRON_IP}\" /root/.local.IP.list 2>&1)\n    if [[ \"${_IP_IF_PRESENT}\" =~ \"${_CRON_IP}\" ]]; then\n      _IP_PRESENT=YES\n    else\n      _IP_PRESENT=NO\n    fi\n    if [ ! 
-z \"${_CRON_IP}\" ] \\\n      && [ \"${_IP_PRESENT}\" = \"YES\" ] \\\n      && [ -e \"/root/.local.IP.list\" ]; then\n      _satellite_fix_multi_ip_cron_access\n      _satellite_fix_cron_curl\n    fi\n  fi\n}\n\n#\n# Force advanced Nginx configuration.\n_satellite_force_advanced_nginx_config() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_force_advanced_nginx_config\"\n    _debug_proc\n  fi\n  _satellite_sub_satellite_force_advanced_nginx_config\n  if [ -e \"${_ROOT}/config/includes/nginx_vhost_common.conf\" ]; then\n    rm -f ${_ROOT}/config/includes/nginx_advanced_include.conf\n    rm -f ${_ROOT}/config/includes/nginx_legacy_include.conf\n    rm -f ${_ROOT}/config/includes/nginx_modern_include.conf\n    rm -f ${_ROOT}/config/includes/nginx_octopus_include.conf\n    rm -f ${_ROOT}/config/includes/nginx_simple_include.conf\n  fi\n}\n\n#\n# Update local INI for PHP CLI on the Ægir Satellite Instance.\n_satellite_update_ini_php_cli() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_update_ini_php_cli\"\n    _debug_proc\n  fi\n  _U_HD=\"${_ROOT}/.drush\"\n  _U_TP=\"${_ROOT}/.tmp\"\n  _U_II=\"${_U_HD}/php.ini\"\n  _PHP_CLI_UPDATE=NO\n  if [ ! -e \"${_DRUSH_FILE}\" ]; then\n    return 1  # Exit the function but continue the script\n  fi\n  _CHECK_USE_PHP_CLI=$(grep \"/opt/php\" ${_DRUSH_FILE} 2>&1)\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php${e}\" ]] \\\n      && [ ! -e \"${_U_HD}/.ctrl.php${e}.${_xSrl}.pid\" ]; then\n      _PHP_CLI_UPDATE=YES\n    fi\n  done\n  if [ \"${_PHP_CLI_UPDATE}\" = \"YES\" ] \\\n    || [ ! -e \"${_U_II}\" ] \\\n    || [ ! -d \"${_U_TP}\" ] \\\n    || [ ! 
-e \"${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n    mkdir -p ${_U_TP}\n    touch ${_U_TP}\n    find ${_U_TP}/ -mtime +0 -exec rm -rf {} \\; &> /dev/null\n    chmod 02755 ${_U_TP}\n    mkdir -p ${_U_HD}/{sys,xts,usr}\n    rm -f ${_U_HD}/.ctrl.php*\n    rm -f ${_U_II}\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      if [[ \"${_CHECK_USE_PHP_CLI}\" =~ \"php${e}\" ]]; then\n        if [ -e \"/opt/php${e}/lib/php.ini\" ]; then\n          cp -af /opt/php${e}/lib/php.ini ${_U_II}\n          _U_INI=${e}\n        fi\n      fi\n    done\n    _OPCD=\"/var/www/phpcache\"\n    if [ -e \"${_U_II}\" ]; then\n      _INI=\"open_basedir = \\\".: \\\n        /data/all:           \\\n        /data/conf:          \\\n        /data/disk/all:      \\\n        /opt/php56:          \\\n        /opt/php70:          \\\n        /opt/php71:          \\\n        /opt/php72:          \\\n        /opt/php73:          \\\n        /opt/php74:          \\\n        /opt/php80:          \\\n        /opt/php81:          \\\n        /opt/php82:          \\\n        /opt/php83:          \\\n        /opt/php84:          \\\n        /opt/php85:          \\\n        /opt/tika:           \\\n        /opt/tika7:          \\\n        /opt/tika8:          \\\n        /opt/tika9:          \\\n        /dev/urandom:        \\\n        /opt/tmp/make_local: \\\n        /opt/tools/drush:    \\\n        ${_OPCD}/${_USER}:   \\\n        ${_ROOT}:            \\\n        /usr/local/bin:      \\\n        /usr/bin\\\"\"\n      _INI=$(echo \"${_INI}\" | sed \"s/ //g\" 2>&1)\n      _INI=$(echo \"${_INI}\" | sed \"s/open_basedir=/open_basedir = /g\" 2>&1)\n      _INI=${_INI//\\//\\\\\\/}\n      _QTP=${_U_TP//\\//\\\\\\/}\n      sed -i \"s/.*open_basedir =.*/${_INI}/g\"                              ${_U_II}\n      wait\n      sed -i \"s/.*error_reporting =.*/error_reporting = 1/g\"               ${_U_II}\n      wait\n      sed -i \"s/.*session.save_path =.*/session.save_path = 
${_QTP}/g\"     ${_U_II}\n      wait\n      sed -i \"s/.*soap.wsdl_cache_dir =.*/soap.wsdl_cache_dir = ${_QTP}/g\" ${_U_II}\n      wait\n      sed -i \"s/.*sys_temp_dir =.*/sys_temp_dir = ${_QTP}/g\"               ${_U_II}\n      wait\n      sed -i \"s/.*upload_tmp_dir =.*/upload_tmp_dir = ${_QTP}/g\"           ${_U_II}\n      wait\n      echo > ${_U_HD}/.ctrl.php${_U_INI}.${_xSrl}.pid\n      echo > ${_U_HD}/.ctrl.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n}\n\n#\n# Update php-cli for Drush.\n_satellite_update_drush_php_cli() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_update_drush_php_cli\"\n    _debug_proc\n  fi\n  _DRUSH_FILE=\"${_ROOT}/tools/drush/drush.php\"\n  if [ \"${_PHP_CLI_VERSION}\" = \"8.5\" ] && [ -x \"/opt/php85/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php85\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.4\" ] && [ -x \"/opt/php84/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php84\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.3\" ] && [ -x \"/opt/php83/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php83\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.2\" ] && [ -x \"/opt/php82/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php82\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.1\" ] && [ -x \"/opt/php81/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php81\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"8.0\" ] && [ -x \"/opt/php80/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php80\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"7.4\" ] && [ -x \"/opt/php74/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php74\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"7.3\" ] && [ -x 
\"/opt/php73/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php73\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"7.2\" ] && [ -x \"/opt/php72/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php72\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"7.1\" ] && [ -x \"/opt/php71/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php71\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"7.0\" ] && [ -x \"/opt/php70/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php70\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  elif [ \"${_PHP_CLI_VERSION}\" = \"5.6\" ] && [ -x \"/opt/php56/bin/php\" ]; then\n    sed -i \"s/^#\\!\\/.*/#\\!\\/opt\\/php56\\/bin\\/php/g\"  ${_DRUSH_FILE} &> /dev/null\n  else\n    _msg \"${_STATUS} B: FATAL ERROR: _PHP_CLI_VERSION is not set correctly\"\n    _msg \"${_STATUS} B: FATAL ERROR: Aborting AegirSetupB installer NOW!\"\n    touch /opt/tmp/status-AegirSetupB-FAIL\n    exit 1\n  fi\n}\n\n#\n# Create shared dirs.\n_satellite_create_shared_dirs() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_create_shared_dirs\"\n    _debug_proc\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Creating shared directories...\"\n  fi\n  mkdir -p ${_D}/000/{core,modules}\n  rm -rf /data/src\n  if [ ! 
-d \"${_CORE}\" ]; then\n    mkdir -p ${_CORE}\n  fi\n  chown -R ${_USER}:${_USRG} /data/conf      &> /dev/null\n  chown -R ${_USER}:${_USRG} /opt/tmp        &> /dev/null\n  chown ${_USER}:${_USRG} ${_D}              &> /dev/null\n  chown ${_USER}:${_USRG} ${_D}/000          &> /dev/null\n  chown ${_USER}:${_USRG} ${_D}/000/core     &> /dev/null\n  chown ${_USER}:${_USRG} ${_CORE}           &> /dev/null\n  chmod 777 ${_CORE} ${_D} /data /data/disk /data/conf &> /dev/null\n}\n\n#\n# Update o_contrib.\n_satellite_o_contrib_update_global() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_o_contrib_update_global\"\n    _debug_proc\n  fi\n\n  _RMMODULES=\"drupal-nginx-fast-x-accel-redirect varnish bakery session443 \\\n    cookie_cache_bypass_adv module_supports imageinfo_cache redis\"\n\n  for i in `dir -d ${_D}/*`; do\n    if [ -e \"${i}/o_contrib\" ] \\\n      && [ ! -e \"${i}/o_contrib/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n      _FORCE_UP_SIX=YES\n    elif [ -e \"${i}/o_contrib\" ] \\\n      && [ ! -e \"${_D}/000/modules/redis_edge\" ]; then\n      _FORCE_UP_SIX=YES\n    elif [ -e \"${i}/o_contrib\" ] \\\n      && [ ! -L \"${i}/o_contrib/redis_edge\" ]; then\n      _FORCE_UP_SIX=YES\n    else\n      _FORCE_UP_SIX=NO\n    fi\n    if [ \"${_FORCE_UP_SIX}\" = \"YES\" ] && [ -e \"${i}/o_contrib\" ]; then\n      for m in ${_RMMODULES}; do\n        if [ -d \"${i}/o_contrib/$m\" ]; then\n          rm -rf ${i}/o_contrib/$m\n        fi\n      done\n      cd ${i}/o_contrib\n      if [ ! -d \"${i}/o_contrib/advagg\" ]; then\n        _get_dev_contrib \"advagg-6.x-1.11.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/boost\" ]; then\n        _get_dev_contrib \"boost-6.x-1.22.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/cdn\" ]; then\n        _get_dev_contrib \"cdn-6.x-2.7.tar.gz\"\n      fi\n      if [ ! 
-d \"${i}/o_contrib/force_password_change\" ]; then\n        _get_dev_contrib \"force_password_change-6.x-3.4.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/fpa\" ]; then\n        _get_dev_contrib \"fpa-6.x-2.5.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/httprl\" ]; then\n        _get_dev_contrib \"httprl-6.x-1.14.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/js\" ]; then\n        _get_dev_contrib \"js-6.x-1.3.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/panels_content_cache\" ]; then\n        _get_dev_contrib \"panels_content_cache-6.x-1.0.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/phpass\" ]; then\n        _get_dev_contrib \"phpass-6.x-2.1.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/session_expire\" ]; then\n        _get_dev_contrib \"session_expire-6.x-1.x-dev.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/taxonomy_edge\" ]; then\n        _get_dev_contrib \"taxonomy_edge-6.x-1.7.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/views_cache_bully\" ]; then\n        _get_dev_contrib \"views_cache_bully-6.x-3.1.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib/views_content_cache\" ]; then\n        _get_dev_contrib \"views_content_cache-6.x-2.x-dev.tar.gz\"\n      fi\n      if [ -d \"${i}/o_contrib/expire\" ]; then\n        rm -rf ${i}/o_contrib/expire\n      fi\n      if [ -d \"${i}/o_contrib/purge\" ]; then\n        rm -rf ${i}/o_contrib/purge\n      fi\n      if [ -d \"${i}/o_contrib/cache_backport\" ]; then\n        rm -rf ${i}/o_contrib/cache_backport\n      fi\n      if [ -d \"${i}/o_contrib/redis\" ]; then\n        rm -rf ${i}/o_contrib/redis\n      fi\n      if [ -d \"${i}/o_contrib/redis_edge\" ]; then\n        rm -rf ${i}/o_contrib/redis_edge\n      fi\n      if [ -e \"${_D}/000/modules/cache_backport\" ] \\\n        && [ ! 
-L \"${i}/o_contrib/cache_backport\" ]; then\n        ln -sfn ${_D}/000/modules/cache_backport ${i}/o_contrib/cache_backport\n      fi\n      if [ -L \"${i}/o_contrib/redis\" ]; then\n        rm -f ${i}/o_contrib/redis\n      fi\n      if [ -e \"${_D}/000/modules/redis_edge\" ] \\\n        && [ ! -L \"${i}/o_contrib/redis_edge\" ]; then\n        ln -sfn ${_D}/000/modules/redis_edge ${i}/o_contrib/redis_edge\n      fi\n      touch ${i}/o_contrib/update-${_xSrl}-${_X_VERSION}.info\n      find ${i}/o_contrib -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${i}/o_contrib -type f -exec chmod 0644 {} \\; &> /dev/null\n    fi\n    if [ -d \"$i\" ]; then\n      for p in `find ${i}/ -maxdepth 1 -mindepth 1 -type d | sort`; do\n        if [ -d \"$p/modules/cookie_cache_bypass\" ]; then\n          rm -rf $p/modules/cookie_cache_bypass\n        fi\n      done\n    fi\n  done\n  cd\n}\n\n#\n# Update o_contrib_seven.\n_satellite_o_contrib_seven_update_global() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_o_contrib_seven_update_global\"\n    _debug_proc\n  fi\n\n  _RMMODULES=\"session443 cookie_cache_bypass_adv agrcache speedy redis\"\n\n  for i in `dir -d ${_D}/*`; do\n    if [ -e \"${i}/o_contrib_seven\" ] \\\n      && [ ! -e \"${i}/o_contrib_seven/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n      _FORCE_UP_SEVEN=YES\n    elif [ -e \"${i}/o_contrib_seven\" ] \\\n      && [ ! -e \"${i}/o_contrib_seven/views_accelerator\" ]; then\n      _FORCE_UP_SEVEN=YES\n    elif [ -e \"${i}/o_contrib_seven\" ] \\\n      && [ ! -e \"${_D}/000/modules/redis_edge\" ]; then\n      _FORCE_UP_SEVEN=YES\n    elif [ -e \"${i}/o_contrib_seven\" ] \\\n      && [ ! 
-L \"${i}/o_contrib_seven/redis_edge\" ]; then\n      _FORCE_UP_SEVEN=YES\n    else\n      _FORCE_UP_SEVEN=NO\n    fi\n    if [ \"${_FORCE_UP_SEVEN}\" = \"YES\" ] \\\n      && [ -e \"${i}/o_contrib_seven\" ]; then\n      for m in ${_RMMODULES}; do\n        if [ -d \"${i}/o_contrib_seven/$m\" ]; then\n          rm -rf ${i}/o_contrib_seven/$m\n        fi\n      done\n      _ADMINER_VRN=4.8.1\n      cd ${i}/o_contrib_seven\n      if [ ! -e \"${i}/o_contrib_seven/adminer/adminer/adminer-${_ADMINER_VRN}-mysql.php\" ]; then\n        rm -rf ${i}/o_contrib_seven/adminer\n        _get_dev_contrib \"adminer_bundle-7.x-1.2.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/advagg\" ]; then\n        _get_dev_contrib \"advagg-7.x-2.36.tar.gz\"\n      fi\n      if [ ! -e \"${i}/o_contrib_seven/autoslave/patches/update-pdo-7.22.patch\" ]; then\n        rm -rf ${i}/o_contrib_seven/autoslave\n        _get_dev_contrib \"autoslave-7.x-1.x-dev.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/boost\" ]; then\n        _get_dev_contrib \"boost-7.x-1.2.tar.gz\"\n      fi\n      if [ ! -e \"${i}/o_contrib_seven/cache_consistent/3010585-cache-consistent-php72-2.patch\" ]; then\n        rm -rf ${i}/o_contrib_seven/cache_consistent\n        _get_dev_contrib \"cache_consistent-7.x-2.x-dev.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/cdn\" ]; then\n        _get_dev_contrib \"cdn-7.x-2.10.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/display_cache\" ]; then\n        _get_dev_contrib \"display_cache-7.x-1.3.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/entity_print\" ]; then\n        _get_dev_contrib \"entity_print-7.x-1.5.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/entitycache\" ]; then\n        _get_dev_contrib \"entitycache-7.x-1.7.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/file_resup\" ]; then\n        _get_dev_contrib \"file_resup-7.x-1.5.tar.gz\"\n      fi\n      if [ ! 
-d \"${i}/o_contrib_seven/force_password_change\" ]; then\n        _get_dev_contrib \"force_password_change-7.x-2.2.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/fpa\" ]; then\n        _get_dev_contrib \"fpa-7.x-2.6.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/httprl\" ]; then\n        _get_dev_contrib \"httprl-7.x-1.14.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/js\" ]; then\n        _get_dev_contrib \"js-7.x-2.5.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/panels_content_cache\" ]; then\n        _get_dev_contrib \"panels_content_cache-7.x-1.4.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/session_expire\" ]; then\n        _get_dev_contrib \"session_expire-7.x-1.x-dev.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/speedy\" ]; then\n        _get_dev_contrib \"speedy-7.x-1.34.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/taxonomy_edge\" ]; then\n        _get_dev_contrib \"taxonomy_edge-7.x-1.9.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/views_accelerator\" ]; then\n        _get_dev_contrib \"views_accelerator-7.x-1.0-beta1.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/views_cache_bully\" ]; then\n        _get_dev_contrib \"views_cache_bully-7.x-3.1.tar.gz\"\n      fi\n      if [ ! -d \"${i}/o_contrib_seven/views_content_cache\" ]; then\n        _get_dev_contrib \"views_content_cache-7.x-3.0-alpha3.tar.gz\"\n      fi\n      if [ ! 
-e \"${i}/o_contrib_seven/entitycache/includes/entitycache.node.inc\" ]; then\n        rm -rf ${i}/o_contrib_seven/entitycache\n        _get_dev_contrib \"entitycache-7.x-1.7.tar.gz\"\n      fi\n      if [ -d \"${i}/o_contrib_seven/expire\" ] \\\n        || [ -d \"${i}/o_contrib_seven/purge\" ]; then\n        rm -rf ${i}/o_contrib_seven/expire\n        rm -rf ${i}/o_contrib_seven/purge\n      fi\n      if [ -d \"${i}/o_contrib_seven/redis\" ]; then\n        rm -rf ${i}/o_contrib_seven/redis\n      fi\n      if [ -d \"${i}/o_contrib_seven/redis_edge\" ]; then\n        rm -rf ${i}/o_contrib_seven/redis_edge\n      fi\n      if [ -L \"${i}/o_contrib_seven/redis\" ]; then\n        rm -f ${i}/o_contrib_seven/redis\n      fi\n      if [ -e \"${_D}/000/modules/redis_edge\" ] \\\n        && [ ! -L \"${i}/o_contrib_seven/redis_edge\" ]; then\n        ln -sfn ${_D}/000/modules/redis_edge ${i}/o_contrib_seven/redis_edge\n      fi\n      find ${i}/o_contrib_seven -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${i}/o_contrib_seven -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${i}/o_contrib_seven/update-${_xSrl}-${_X_VERSION}.info\n    fi\n  done\n  cd\n}\n\n#\n# Update o_contrib_eight.\n_satellite_o_contrib_eight_update_global() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_o_contrib_eight_update_global\"\n    _debug_proc\n  fi\n\n  _RMMODULES=\"some_foo_bar\"\n\n  for i in `dir -d ${_D}/*`; do\n    if [ -e \"${i}/o_contrib_eight\" ] \\\n      && [ ! -e \"${i}/o_contrib_eight/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n      _FORCE_UP_EIGHT=YES\n    elif [ -e \"${i}/o_contrib_eight\" ] \\\n      && [ ! -e \"${_D}/000/modules/redis_eight\" ]; then\n      _FORCE_UP_EIGHT=YES\n    elif [ -e \"${i}/o_contrib_eight\" ] \\\n      && [ ! -L \"${i}/o_contrib_eight/redis_eight\" ]; then\n      _FORCE_UP_EIGHT=YES\n    elif [ -e \"${i}/o_contrib_eight\" ] \\\n      && [ ! 
-L \"${i}/o_contrib_eight/redis_compr\" ]; then\n      _FORCE_UP_EIGHT=YES\n    else\n      _FORCE_UP_EIGHT=NO\n    fi\n    if [ \"${_FORCE_UP_EIGHT}\" = \"YES\" ] \\\n      && [ -e \"${i}/o_contrib_eight\" ]; then\n      for m in ${_RMMODULES}; do\n        if [ -d \"${i}/o_contrib_eight/$m\" ]; then\n          rm -rf ${i}/o_contrib_eight/$m\n        fi\n      done\n      cd ${i}/o_contrib_eight\n      if [ -d \"${i}/o_contrib_eight/redis_eight\" ]; then\n        rm -rf ${i}/o_contrib_eight/redis_eight\n      fi\n      if [ -e \"${_D}/000/modules/redis_eight\" ] \\\n        && [ ! -L \"${i}/o_contrib_eight/redis_eight\" ]; then\n        ln -sfn ${_D}/000/modules/redis_eight ${i}/o_contrib_eight/redis_eight\n      fi\n      if [ -e \"${_D}/000/modules/redis_compr\" ] \\\n        && [ ! -L \"${i}/o_contrib_eight/redis_compr\" ]; then\n        ln -sfn ${_D}/000/modules/redis_compr ${i}/o_contrib_eight/redis_compr\n      fi\n      find ${i}/o_contrib_eight -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${i}/o_contrib_eight -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${i}/o_contrib_eight/update-${_xSrl}-${_X_VERSION}.info\n    fi\n  done\n  cd\n}\n\n\n#\n# Update o_contrib_nine.\n_satellite_o_contrib_nine_update_global() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_o_contrib_nine_update_global\"\n    _debug_proc\n  fi\n\n  _RMMODULES=\"some_foo_bar\"\n\n  for i in `dir -d ${_D}/*`; do\n    if [ -e \"${i}/o_contrib_nine\" ] \\\n      && [ ! -e \"${i}/o_contrib_nine/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n      _FORCE_UP_NINE=YES\n    elif [ -e \"${i}/o_contrib_nine\" ] \\\n      && [ ! 
-L \"${i}/o_contrib_nine/redis_nine_ten\" ]; then\n      _FORCE_UP_NINE=YES\n    else\n      _FORCE_UP_NINE=NO\n    fi\n    if [ \"${_FORCE_UP_NINE}\" = \"YES\" ] \\\n      && [ -e \"${i}/o_contrib_nine\" ]; then\n      for m in ${_RMMODULES}; do\n        if [ -d \"${i}/o_contrib_nine/$m\" ]; then\n          rm -rf ${i}/o_contrib_nine/$m\n        fi\n      done\n      cd ${i}/o_contrib_nine\n      if [ -d \"${i}/o_contrib_nine/redis_eight\" ]; then\n        rm -rf ${i}/o_contrib_nine/redis_eight\n      fi\n      if [ -e \"${_D}/000/modules/redis_nine_ten\" ] \\\n        && [ ! -L \"${i}/o_contrib_nine/redis_nine_ten\" ]; then\n        ln -sfn ${_D}/000/modules/redis_nine_ten ${i}/o_contrib_nine/redis_nine_ten\n      fi\n      if [ -e \"${_D}/000/modules/redis_compr\" ] \\\n        && [ ! -L \"${i}/o_contrib_nine/redis_compr\" ]; then\n        ln -sfn ${_D}/000/modules/redis_compr ${i}/o_contrib_nine/redis_compr\n      fi\n      find ${i}/o_contrib_nine -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${i}/o_contrib_nine -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${i}/o_contrib_nine/update-${_xSrl}-${_X_VERSION}.info\n    fi\n  done\n  cd\n}\n\n\n#\n# Update o_contrib_ten.\n_satellite_o_contrib_ten_update_global() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_o_contrib_ten_update_global\"\n    _debug_proc\n  fi\n\n  _RMMODULES=\"some_foo_bar\"\n\n  for i in `dir -d ${_D}/*`; do\n    if [ -e \"${i}/o_contrib_ten\" ] \\\n      && [ ! -e \"${i}/o_contrib_ten/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n      _FORCE_UP_TEN=YES\n    elif [ -e \"${i}/o_contrib_ten\" ] \\\n      && [ ! 
-L \"${i}/o_contrib_ten/redis_ten_eleven\" ]; then\n      _FORCE_UP_TEN=YES\n    else\n      _FORCE_UP_TEN=NO\n    fi\n    if [ \"${_FORCE_UP_TEN}\" = \"YES\" ] \\\n      && [ -e \"${i}/o_contrib_ten\" ]; then\n      for m in ${_RMMODULES}; do\n        if [ -d \"${i}/o_contrib_ten/$m\" ]; then\n          rm -rf ${i}/o_contrib_ten/$m\n        fi\n      done\n      cd ${i}/o_contrib_ten\n      if [ -d \"${i}/o_contrib_ten/redis_compr\" ]; then\n        rm -rf ${i}/o_contrib_ten/redis_compr\n      fi\n      if [ -d \"${i}/o_contrib_ten/redis_eight\" ]; then\n        rm -rf ${i}/o_contrib_ten/redis_eight\n      fi\n      if [ -d \"${i}/o_contrib_ten/redis_nine_ten\" ]; then\n        rm -rf ${i}/o_contrib_ten/redis_nine_ten\n      fi\n      if [ -e \"${_D}/000/modules/redis_ten_eleven\" ] \\\n        && [ ! -L \"${i}/o_contrib_ten/redis_ten_eleven\" ]; then\n        ln -sfn ${_D}/000/modules/redis_ten_eleven ${i}/o_contrib_ten/redis_ten_eleven\n      fi\n      find ${i}/o_contrib_ten -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${i}/o_contrib_ten -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${i}/o_contrib_ten/update-${_xSrl}-${_X_VERSION}.info\n    fi\n  done\n  cd\n}\n\n\n#\n# Update o_contrib_eleven.\n_satellite_o_contrib_eleven_update_global() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_o_contrib_eleven_update_global\"\n    _debug_proc\n  fi\n\n  _RMMODULES=\"some_foo_bar\"\n\n  for i in `dir -d ${_D}/*`; do\n    if [ -e \"${i}/o_contrib_eleven\" ] \\\n      && [ ! -e \"${i}/o_contrib_eleven/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n      _FORCE_UP_ELEVEN=YES\n    elif [ -e \"${i}/o_contrib_eleven\" ] \\\n      && [ ! 
-L \"${i}/o_contrib_eleven/redis_ten_eleven\" ]; then\n      _FORCE_UP_ELEVEN=YES\n    else\n      _FORCE_UP_ELEVEN=NO\n    fi\n    if [ \"${_FORCE_UP_ELEVEN}\" = \"YES\" ] \\\n      && [ -e \"${i}/o_contrib_eleven\" ]; then\n      for m in ${_RMMODULES}; do\n        if [ -d \"${i}/o_contrib_eleven/$m\" ]; then\n          rm -rf ${i}/o_contrib_eleven/$m\n        fi\n      done\n      cd ${i}/o_contrib_eleven\n      if [ -d \"${i}/o_contrib_eleven/redis_compr\" ]; then\n        rm -rf ${i}/o_contrib_eleven/redis_compr\n      fi\n      if [ -d \"${i}/o_contrib_eleven/redis_eight\" ]; then\n        rm -rf ${i}/o_contrib_eleven/redis_eight\n      fi\n      if [ -d \"${i}/o_contrib_eleven/redis_nine_ten\" ]; then\n        rm -rf ${i}/o_contrib_eleven/redis_nine_ten\n      fi\n      if [ -e \"${_D}/000/modules/redis_ten_eleven\" ] \\\n        && [ ! -L \"${i}/o_contrib_eleven/redis_ten_eleven\" ]; then\n        ln -sfn ${_D}/000/modules/redis_ten_eleven ${i}/o_contrib_eleven/redis_ten_eleven\n      fi\n      find ${i}/o_contrib_eleven -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${i}/o_contrib_eleven -type f -exec chmod 0644 {} \\; &> /dev/null\n      touch ${i}/o_contrib_eleven/update-${_xSrl}-${_X_VERSION}.info\n    fi\n  done\n  cd\n}\n\n\n#\n# Download o_contrib_eleven.\n_satellite_download_o_contrib_eleven() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_download_o_contrib_eleven\"\n    _debug_proc\n  fi\n  touch update-${_xSrl}-${_X_VERSION}.info\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Downloading o_contrib_eleven modules...\"\n  fi\n  _get_dev_contrib \"robotstxt-8.x-1.6.tar.gz\"\n  _get_dev_contrib \"readonlymode-8.x-1.4.tar.gz\"\n  if [ ! 
-e \"${_D}/000/modules/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_ten_eleven\n    _get_dev_contrib \"redis_ten_eleven-${_REDIS_E_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\n  fi\n  find ${_D}/000/modules -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_D}/000/modules -type f -exec chmod 0644 {} \\; &> /dev/null\n}\n\n\n#\n# Download o_contrib_ten.\n_satellite_download_o_contrib_ten() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_download_o_contrib_ten\"\n    _debug_proc\n  fi\n  touch update-${_xSrl}-${_X_VERSION}.info\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Downloading o_contrib_ten modules...\"\n  fi\n  _get_dev_contrib \"robotstxt-8.x-1.6.tar.gz\"\n  _get_dev_contrib \"readonlymode-8.x-1.4.tar.gz\"\n  if [ ! -e \"${_D}/000/modules/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_ten_eleven\n    _get_dev_contrib \"redis_ten_eleven-${_REDIS_E_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\n  fi\n  find ${_D}/000/modules -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_D}/000/modules -type f -exec chmod 0644 {} \\; &> /dev/null\n}\n\n\n#\n# Download o_contrib_nine.\n_satellite_download_o_contrib_nine() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_download_o_contrib_nine\"\n    _debug_proc\n  fi\n  touch update-${_xSrl}-${_X_VERSION}.info\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Downloading o_contrib_nine modules...\"\n  fi\n  _get_dev_contrib \"robotstxt-8.x-1.5.tar.gz\"\n  _get_dev_contrib \"readonlymode-8.x-1.2.tar.gz\"\n  if [ ! 
-e \"${_D}/000/modules/redis_nine_ten/ver-${_REDIS_T_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_nine_ten\n    _get_dev_contrib \"redis_nine_ten-${_REDIS_T_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_nine_ten/ver-${_REDIS_T_VERSION}.${_xSrl}.info\n  fi\n  if [ ! -e \"${_D}/000/modules/redis_compr/ver-${_REDIS_C_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_compr\n    _get_dev_contrib \"redis_compr-${_REDIS_C_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_compr/ver-${_REDIS_C_VERSION}.${_xSrl}.info\n  fi\n  find ${_D}/000/modules -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_D}/000/modules -type f -exec chmod 0644 {} \\; &> /dev/null\n}\n\n\n#\n# Download o_contrib_eight.\n_satellite_download_o_contrib_eight() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_download_o_contrib_eight\"\n    _debug_proc\n  fi\n  touch update-${_xSrl}-${_X_VERSION}.info\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Downloading o_contrib_eight modules...\"\n  fi\n  _get_dev_contrib \"robotstxt-8.x-1.4.tar.gz\"\n  _get_dev_contrib \"readonlymode-8.x-1.2.tar.gz\"\n  if [ ! -e \"${_D}/000/modules/redis_eight/ver-${_REDIS_N_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_eight\n    _get_dev_contrib \"redis_eight-${_REDIS_N_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_eight/ver-${_REDIS_N_VERSION}.${_xSrl}.info\n  fi\n  if [ ! 
-e \"${_D}/000/modules/redis_compr/ver-${_REDIS_C_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_compr\n    _get_dev_contrib \"redis_compr-${_REDIS_C_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_compr/ver-${_REDIS_C_VERSION}.${_xSrl}.info\n  fi\n  find ${_D}/000/modules -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_D}/000/modules -type f -exec chmod 0644 {} \\; &> /dev/null\n}\n\n\n#\n# Download o_contrib_seven.\n_satellite_download_o_contrib_seven() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_download_o_contrib_seven\"\n    _debug_proc\n  fi\n  touch update-${_xSrl}-${_X_VERSION}.info\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Downloading o_contrib_seven modules...\"\n  fi\n  _get_dev_contrib \"admin-7.x-2.0-beta3.tar.gz\"\n  _get_dev_contrib \"adminer_bundle-7.x-1.2.tar.gz\"\n  # _get_dev_contrib \"adminer-7.x-1.2.tar.gz\"\n  # curl ${_crlGet} \"https://github.com/vrana/adminer/releases/download/v${_ADMINER_VRN}/adminer-${_ADMINER_VRN}-mysql.php\" -o \"${_CORE}/${i}/o_contrib_seven/adminer/adminer/adminer-${_ADMINER_VRN}-mysql.php\"\n  # curl ${_crlGet} \"https://raw.githubusercontent.com/vrana/adminer/v${_ADMINER_VRN}/designs/nette/adminer.css\" -o \"${_CORE}/${i}/o_contrib_seven/adminer/adminer/styles/adminer.css\"\n  # curl ${_crlGet} \"https://raw.githubusercontent.com/vrana/adminer/v${_ADMINER_VRN}/plugins/plugin.php\" -o \"${_CORE}/${i}/o_contrib_seven/adminer/adminer/plugins/plugin.php\"\n  # curl ${_crlGet} \"https://raw.githubusercontent.com/vrana/adminer/v${_ADMINER_VRN}/plugins/enum-option.php\" -o \"${_CORE}/${i}/o_contrib_seven/adminer/adminer/plugins/enum-option.php\"\n  # curl ${_crlGet} \"https://raw.githubusercontent.com/vrana/adminer/v${_ADMINER_VRN}/plugins/edit-textarea.php\" -o \"${_CORE}/${i}/o_contrib_seven/adminer/adminer/plugins/edit-textarea.php\"\n  # curl ${_crlGet} 
\"https://raw.githubusercontent.com/vrana/adminer/v${_ADMINER_VRN}/plugins/version-noverify.php\" -o \"${_CORE}/${i}/o_contrib_seven/adminer/adminer/plugins/version-noverify.php\"\n  # curl ${_crlGet} \"https://raw.githubusercontent.com/vrana/adminer/v${_ADMINER_VRN}/plugins/frames.php\" -o \"${_CORE}/${i}/o_contrib_seven/adminer/adminer/plugins/frames.php\"\n  _get_dev_contrib \"advagg-7.x-2.36.tar.gz\"\n  _get_dev_contrib \"autoslave-7.x-1.x-dev.tar.gz\"\n  _get_dev_contrib \"blockcache_alter-7.x-1.1.tar.gz\"\n  _get_dev_contrib \"boost-7.x-1.2.tar.gz\"\n  _get_dev_contrib \"cache_consistent-7.x-2.x-dev.tar.gz\"\n  _get_dev_contrib \"cdn-7.x-2.10.tar.gz\"\n  _get_dev_contrib \"config_perms-7.x-2.2.tar.gz\"\n  _get_dev_contrib \"css_emimage-7.x-1.3.tar.gz\"\n  _get_dev_contrib \"d7security_client-7.x-1.3.tar.gz\"\n  _get_dev_contrib \"display_cache-7.x-1.3.tar.gz\"\n  _get_dev_contrib \"entity_print-7.x-1.5.tar.gz\"\n  _get_dev_contrib \"entitycache-7.x-1.7.tar.gz\"\n  _get_dev_contrib \"esi-7.x-3.x-dev.tar.gz\"\n  _get_dev_contrib \"file_resup-7.x-1.5.tar.gz\"\n  _get_dev_contrib \"flood_control-7.x-1.1.tar.gz\"\n  _get_dev_contrib \"force_password_change-7.x-2.2.tar.gz\"\n  _get_dev_contrib \"fpa-7.x-2.6.tar.gz\"\n  _get_dev_contrib \"httprl-7.x-1.14.tar.gz\"\n  _get_dev_contrib \"js-7.x-2.5.tar.gz\"\n  _get_dev_contrib \"login_security-7.x-1.9.tar.gz\"\n  _get_dev_contrib \"nocurrent_pass-7.x-1.1.tar.gz\"\n  _get_dev_contrib \"panels_content_cache-7.x-1.4.tar.gz\"\n  _get_dev_contrib \"readonlymode-7.x-1.2.tar.gz\"\n  _get_dev_contrib \"reroute_email-7.x-1.4.tar.gz\"\n  _get_dev_contrib \"robotstxt-7.x-1.4.tar.gz\"\n  _get_dev_contrib \"securesite-7.x-2.0-beta3.tar.gz\"\n  _get_dev_contrib \"session_expire-7.x-1.x-dev.tar.gz\"\n  _get_dev_contrib \"site_verify-7.x-1.2.tar.gz\"\n  _get_dev_contrib \"speedy-7.x-1.34.tar.gz\"\n  _get_dev_contrib \"taxonomy_edge-7.x-1.9.tar.gz\"\n  _get_dev_contrib \"variable_clean-7.x-1.x-dev.tar.gz\"\n  _get_dev_contrib 
\"views_accelerator-7.x-1.0-beta1.tar.gz\"\n  _get_dev_contrib \"views_cache_bully-7.x-3.1.tar.gz\"\n  _get_dev_contrib \"views_content_cache-7.x-3.0-alpha3.tar.gz\"\n  _get_dev_contrib \"views404-7.x-1.0-beta1.tar.gz\"\n  rm -rf nginx_accel_redirect*\n  rm -rf purge*\n  rm -rf expire*\n  find ./ -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ./ -type f -exec chmod 0644 {} \\; &> /dev/null\n  touch ctrl-${_xSrl}-${_X_VERSION}\n  if [ -L \"./redis\" ]; then\n    rm -f redis\n  fi\n  if [ ! -L \"./redis_edge\" ]; then\n    ln -sfn ${_D}/000/modules/redis_edge redis_edge\n  fi\n  if [ -e \"${_D}/000/modules/redis\" ]; then\n    rm -rf ${_D}/000/modules/redis\n  fi\n  if [ ! -e \"${_D}/000/modules/redis_edge/ver-${_REDIS_L_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_edge\n    _get_dev_contrib \"redis_edge-${_REDIS_L_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_edge/ver-${_REDIS_L_VERSION}.${_xSrl}.info\n  fi\n  find ${_D}/000/modules -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_D}/000/modules -type f -exec chmod 0644 {} \\; &> /dev/null\n}\n\n#\n# Download o_contrib_six.\n_satellite_download_o_contrib_six() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_download_o_contrib_six\"\n    _debug_proc\n  fi\n  touch update-${_xSrl}-${_X_VERSION}.info\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Downloading o_contrib modules...\"\n  fi\n  _get_dev_contrib \"admin-6.x-2.0.tar.gz\"\n  _get_dev_contrib \"advagg-6.x-1.11.tar.gz\"\n  _get_dev_contrib \"blockcache_alter-6.x-1.6.tar.gz\"\n  _get_dev_contrib \"boost-6.x-1.22.tar.gz\"\n  _get_dev_contrib \"cdn-6.x-2.7.tar.gz\"\n  _get_dev_contrib \"config_perms-6.x-2.x-dev.tar.gz\"\n  _get_dev_contrib \"css_emimage-6.x-2.x-dev.tar.gz\"\n  _get_dev_contrib \"dbtuner-6.x-1.x-dev.tar.gz\"\n  _get_dev_contrib \"esi-6.x-2.x-dev.tar.gz\"\n  _get_dev_contrib 
\"force_password_change-6.x-3.4.tar.gz\"\n  _get_dev_contrib \"fpa-6.x-2.5.tar.gz\"\n  _get_dev_contrib \"httprl-6.x-1.14.tar.gz\"\n  _get_dev_contrib \"image-6.x-1.2.tar.gz\"\n  _get_dev_contrib \"js-6.x-1.3.tar.gz\"\n  _get_dev_contrib \"login_security-6.x-1.4.tar.gz\"\n  _get_dev_contrib \"panels_content_cache-6.x-1.0.tar.gz\"\n  _get_dev_contrib \"phpass-6.x-2.1.tar.gz\"\n  _get_dev_contrib \"private_upload-6.x-1.x-dev.tar.gz\"\n  _get_dev_contrib \"readonlymode-6.x-1.2.tar.gz\"\n  _get_dev_contrib \"reroute_email-6.x-1.3.tar.gz\"\n  _get_dev_contrib \"robotstxt-6.x-1.4.tar.gz\"\n  _get_dev_contrib \"securesite-6.x-2.4.tar.gz\"\n  _get_dev_contrib \"session_expire-6.x-1.x-dev.tar.gz\"\n  _get_dev_contrib \"site_verify-6.x-1.0.tar.gz\"\n  _get_dev_contrib \"taxonomy_edge-6.x-1.7.tar.gz\"\n  _get_dev_contrib \"variable_clean-6.x-1.x-dev.tar.gz\"\n  _get_dev_contrib \"views_cache_bully-6.x-3.1.tar.gz\"\n  _get_dev_contrib \"views_content_cache-6.x-2.x-dev.tar.gz\"\n  _get_dev_contrib \"views404-6.x-1.x-dev.tar.gz\"\n  rm -rf nginx_accel_redirect*\n  rm -rf purge*\n  rm -rf expire*\n  find ./ -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ./ -type f -exec chmod 0644 {} \\; &> /dev/null\n  touch ctrl-${_xSrl}-${_X_VERSION}\n  if [ -L \"./redis\" ]; then\n    rm -f redis\n  fi\n  if [ ! -L \"./redis_edge\" ]; then\n    ln -sfn ${_D}/000/modules/redis_edge redis_edge\n  fi\n  if [ ! -L \"./cache_backport\" ]; then\n    ln -sfn ${_D}/000/modules/cache_backport cache_backport\n  fi\n  if [ ! -e \"${_D}/000/modules/cache_backport/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/cache_backport\n    _get_dev_contrib \"cache_backport-6.x-1.0-rc4.tar.gz\"\n    echo update > cache_backport/update-${_xSrl}-${_X_VERSION}.info\n  fi\n  if [ -e \"${_D}/000/modules/redis\" ]; then\n    rm -rf ${_D}/000/modules/redis\n  fi\n  if [ ! 
-e \"${_D}/000/modules/redis_edge/ver-${_REDIS_L_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_edge\n    _get_dev_contrib \"redis_edge-${_REDIS_L_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_edge/ver-${_REDIS_L_VERSION}.${_xSrl}.info\n  fi\n  find ${_D}/000/modules -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_D}/000/modules -type f -exec chmod 0644 {} \\; &> /dev/null\n}\n\n#\n# Check and fix o_contrib.\n_satellite_check_fix_o_contrib() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_check_fix_o_contrib\"\n    _debug_proc\n  fi\n  if [ -e \"${_CORE}/o_contrib_eleven\" ]; then\n    if [ ! -d \"${_CORE}/o_contrib_eleven/redis_ten_eleven\" ] \\\n      || [ -e \"${_CORE}/o_contrib_eleven/advagg\" ] \\\n      || [ \"${_O_CONTRIB_FORCED_UP}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Running o_contrib_eleven forced upgrade...\"\n      cd ${_CORE}/o_contrib_eleven\n      rm -rf ${_CORE}/o_contrib_eleven/*\n      _satellite_download_o_contrib_eleven\n      cd ${_CORE}/o_contrib_eleven\n    fi\n  fi\n  if [ -e \"${_CORE}/o_contrib_ten\" ]; then\n    if [ ! -d \"${_CORE}/o_contrib_ten/redis_ten_eleven\" ] \\\n      || [ -e \"${_CORE}/o_contrib_ten/advagg\" ] \\\n      || [ \"${_O_CONTRIB_FORCED_UP}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Running o_contrib_ten forced upgrade...\"\n      cd ${_CORE}/o_contrib_ten\n      rm -rf ${_CORE}/o_contrib_ten/*\n      _satellite_download_o_contrib_ten\n      cd ${_CORE}/o_contrib_ten\n    fi\n  fi\n  if [ -e \"${_CORE}/o_contrib_nine\" ]; then\n    if [ ! 
-d \"${_CORE}/o_contrib_nine/redis_nine_ten\" ] \\\n      || [ -e \"${_CORE}/o_contrib_nine/advagg\" ] \\\n      || [ \"${_O_CONTRIB_FORCED_UP}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Running o_contrib_nine forced upgrade...\"\n      cd ${_CORE}/o_contrib_nine\n      rm -rf ${_CORE}/o_contrib_nine/*\n      _satellite_download_o_contrib_nine\n      cd ${_CORE}/o_contrib_nine\n    fi\n  fi\n  if [ -e \"${_CORE}/o_contrib_eight\" ]; then\n    if [ ! -d \"${_CORE}/o_contrib_eight/redis_compr\" ] \\\n      || [ \"${_O_CONTRIB_FORCED_UP}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Running o_contrib_eight forced upgrade...\"\n      cd ${_CORE}/o_contrib_eight\n      rm -rf ${_CORE}/o_contrib_eight/*\n      _satellite_download_o_contrib_eight\n      cd ${_CORE}/o_contrib_eight\n    fi\n  fi\n  if [ -e \"${_CORE}/o_contrib_seven\" ]; then\n    if [ ! -d \"${_CORE}/o_contrib_seven/d7security_client\" ] \\\n      || [ \"${_O_CONTRIB_FORCED_UP}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Running o_contrib_seven forced upgrade...\"\n      cd ${_CORE}/o_contrib_seven\n      rm -rf ${_CORE}/o_contrib_seven/*\n      _satellite_download_o_contrib_seven\n      cd ${_CORE}/o_contrib_seven\n    fi\n  fi\n  if [ -e \"${_CORE}/o_contrib\" ]; then\n    if [ ! -d \"${_CORE}/o_contrib/robotstxt\" ] \\\n      || [ \"${_O_CONTRIB_FORCED_UP}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Running o_contrib forced upgrade...\"\n      cd ${_CORE}/o_contrib\n      rm -rf ${_CORE}/o_contrib/*\n      _satellite_download_o_contrib_six\n      cd ${_CORE}/o_contrib\n    fi\n  fi\n  if [ ! 
-e \"${_D}/000/modules/cache_backport/update-${_xSrl}-${_X_VERSION}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/cache_backport\n    _get_dev_contrib \"cache_backport-6.x-1.0-rc4.tar.gz\"\n    echo update > cache_backport/update-${_xSrl}-${_X_VERSION}.info\n  fi\n  if [ -e \"${_D}/000/modules/redis\" ]; then\n    rm -rf ${_D}/000/modules/redis\n  fi\n  if [ ! -e \"${_D}/000/modules/redis_edge/ver-${_REDIS_L_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_edge\n    _get_dev_contrib \"redis_edge-${_REDIS_L_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_edge/ver-${_REDIS_L_VERSION}.${_xSrl}.info\n  fi\n  if [ ! -e \"${_D}/000/modules/redis_eight/ver-${_REDIS_N_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_eight\n    _get_dev_contrib \"redis_eight-${_REDIS_N_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_eight/ver-${_REDIS_N_VERSION}.${_xSrl}.info\n  fi\n  if [ ! -e \"${_D}/000/modules/redis_compr/ver-${_REDIS_C_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_compr\n    _get_dev_contrib \"redis_compr-${_REDIS_C_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_compr/ver-${_REDIS_C_VERSION}.${_xSrl}.info\n  fi\n  if [ ! -e \"${_D}/000/modules/redis_nine_ten/ver-${_REDIS_T_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_nine_ten\n    _get_dev_contrib \"redis_nine_ten-${_REDIS_T_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_nine_ten/ver-${_REDIS_T_VERSION}.${_xSrl}.info\n  fi\n  if [ ! 
-e \"${_D}/000/modules/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\" ]; then\n    mkdir -p ${_D}/000/modules\n    cd ${_D}/000/modules\n    rm -rf ${_D}/000/modules/redis_ten_eleven\n    _get_dev_contrib \"redis_ten_eleven-${_REDIS_E_VERSION}.tar.gz\"\n    echo update > ${_D}/000/modules/redis_ten_eleven/ver-${_REDIS_E_VERSION}.${_xSrl}.info\n  fi\n  find ${_D}/000/modules -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_D}/000/modules -type f -exec chmod 0644 {} \\; &> /dev/null\n}\n\n#\n# Manage o_contrib.\n_satellite_manage_o_contrib() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_manage_o_contrib\"\n    _debug_proc\n  fi\n  if [ ! -e \"${_CORE}/o_contrib_eleven\" ]; then\n    mkdir -p ${_CORE}/o_contrib_eleven\n    cd ${_CORE}/o_contrib_eleven\n    _satellite_download_o_contrib_eleven\n    cd ${_CORE}/o_contrib_eleven\n  fi\n  if [ ! -e \"${_CORE}/o_contrib_ten\" ]; then\n    mkdir -p ${_CORE}/o_contrib_ten\n    cd ${_CORE}/o_contrib_ten\n    _satellite_download_o_contrib_ten\n    cd ${_CORE}/o_contrib_ten\n  fi\n  if [ ! -e \"${_CORE}/o_contrib_nine\" ]; then\n    mkdir -p ${_CORE}/o_contrib_nine\n    cd ${_CORE}/o_contrib_nine\n    _satellite_download_o_contrib_nine\n    cd ${_CORE}/o_contrib_nine\n  fi\n  if [ ! -e \"${_CORE}/o_contrib_eight\" ]; then\n    mkdir -p ${_CORE}/o_contrib_eight\n    cd ${_CORE}/o_contrib_eight\n    _satellite_download_o_contrib_eight\n    cd ${_CORE}/o_contrib_eight\n  fi\n  if [ ! -e \"${_CORE}/o_contrib_seven\" ]; then\n    mkdir -p ${_CORE}/o_contrib_seven\n    cd ${_CORE}/o_contrib_seven\n    _satellite_download_o_contrib_seven\n    cd ${_CORE}/o_contrib_seven\n  fi\n  if [ ! 
-e \"${_CORE}/o_contrib\" ]; then\n    mkdir -p ${_CORE}/o_contrib\n    cd ${_CORE}/o_contrib\n    _satellite_download_o_contrib_six\n    cd ${_CORE}/o_contrib\n  fi\n  mkdir -p ${_D}/000/modules\n  rm -f ${_D}/000/modules/o_contrib_eleven\n  ln -sfn ${_CORE}/o_contrib_eleven ${_D}/000/modules/o_contrib_eleven\n  rm -f ${_D}/000/modules/o_contrib_ten\n  ln -sfn ${_CORE}/o_contrib_ten ${_D}/000/modules/o_contrib_ten\n  rm -f ${_D}/000/modules/o_contrib_nine\n  ln -sfn ${_CORE}/o_contrib_nine ${_D}/000/modules/o_contrib_nine\n  rm -f ${_D}/000/modules/o_contrib_eight\n  ln -sfn ${_CORE}/o_contrib_eight ${_D}/000/modules/o_contrib_eight\n  rm -f ${_D}/000/modules/o_contrib_seven\n  ln -sfn ${_CORE}/o_contrib_seven ${_D}/000/modules/o_contrib_seven\n  rm -f ${_D}/000/modules/o_contrib\n  ln -sfn ${_CORE}/o_contrib ${_D}/000/modules/o_contrib\n  if [ \"${_STATUS}\" != \"INIT\" ]; then\n    _satellite_check_fix_o_contrib\n  fi\n}\n\n_satellite_hot_sauce_check() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_hot_sauce_check\"\n    _debug_proc\n  fi\n  _mNewC=\"Shared platforms code v.${_ALL_DISTRO} (new) will be created\"\n  _mLstC=\"Shared platforms code v.${_LAST_ALL} will be used\"\n  if [ \"${_HOT_SAUCE}\" = \"NO\" ]; then\n    _CORE=\"${_D}/${_LAST_ALL}\"\n    _THIS_CORE=\"${_LAST_ALL}\"\n    if [ \"${_USE_CURRENT}\" = \"YES\" ] \\\n      && [ -e \"${_D}/000/core-v-${_SMALLCORE6_V}.txt\" ] \\\n      && [ -e \"${_D}/000/core-v-${_SMALLCORE7_V}.txt\" ]; then\n      _msg \"${_STATUS} A: ${_mLstC}\"\n    elif [ \"${_USE_CURRENT}\" = \"NO\" ] \\\n      || [ ! -e \"${_D}/000/core-v-${_SMALLCORE6_V}.txt\" ] \\\n      || [ ! 
-e \"${_D}/000/core-v-${_SMALLCORE7_V}.txt\" ]; then\n      _CORE=\"${_D}/${_ALL_DISTRO}\"\n      _THIS_CORE=\"${_ALL_DISTRO}\"\n      _msg \"${_STATUS} A: ${_mNewC}\"\n      sed -i \"s/^_USE_CURRENT=.*/_USE_CURRENT=NO/g\" ${_vBs}/${_filIncO}\n      wait\n    else\n      _msg \"${_STATUS} A: ${_mLstC}\"\n    fi\n  else\n    _CORE=\"${_D}/${_ALL_DISTRO}\"\n    _THIS_CORE=\"${_ALL_DISTRO}\"\n    _msg \"${_STATUS} A: ${_mNewC}\"\n  fi\n}\n\n_satellite_add_user_dirs() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_add_user_dirs\"\n    _debug_proc\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Creating directories with correct permissions\"\n  fi\n  mkdir -p /data/u\n  mkdir -p /data/disk\n  mkdir -p /data/conf\n  chown root:root /data &> /dev/null\n  chown root:root /data/disk &> /dev/null\n  if [ ! -d \"${_ROOT}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding user...\"\n    fi\n    find /etc/[a-z]*\\.lock -maxdepth 1 -type f -exec rm -rf {} \\; &> /dev/null\n    adduser --system --home ${_ROOT} --ingroup ${_USRG} ${_USER} &> /dev/null\n    adduser ${_USER} ${_WEBG} &> /dev/null\n  fi\n  chown -R ${_USER}:${_USRG} /opt/tmp &> /dev/null\n  chown -R ${_USER}:${_USRG} /data/conf &> /dev/null\n}\n\n_satellite_prepare_child_scripts() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_prepare_child_scripts\"\n    _debug_proc\n  fi\n  chmod 0711 ${_ROOT}\n  cd ${_ROOT}\n  _AegirSetupB=\"${_bldPth}/aegir/scripts/AegirSetupB.sh.txt\"\n  _AegirSetupC=\"${_bldPth}/aegir/scripts/AegirSetupC.sh.txt\"\n  chown ${_USER}:${_USRG} ${_AegirSetupB} &> /dev/null\n  chown ${_USER}:${_USRG} ${_AegirSetupC} &> /dev/null\n}\n\n#\n# Generate provision backend db_passwd.\n_provision_backend_dbpass_generate() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _provision_backend_dbpass_generate\"\n    _debug_proc\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 
2>/dev/null | tr -d '\\n')\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  else\n    _THIS_DB_PORT=3306\n  fi\n  _ESC_PASS=\"\"\n  _LEN_PASS=0\n  if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n    _PWD_CHARS=64\n  elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n    _PWD_CHARS=32\n  else\n    _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n    if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n      && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n      _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n    else\n      _PWD_CHARS=32\n    fi\n    if [ ! -z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n      _PWD_CHARS=128\n    fi\n  fi\n  if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] \\\n    || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n        _ESC_PASS=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n      else\n        _RANDPASS_TEST=$(randpass -V 2>&1)\n        if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n          _ESC_PASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n        else\n          _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n          _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n          _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n        fi\n      fi\n    else\n      if [ -e \"/root/.my.pass.txt\" ]; then\n        _ESC_PASS=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n      else\n        _ESC_PASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n      fi\n    fi\n    _isPythonTwo=\"$(which python2)\"\n    _isPythonThree=\"$(which python3)\"\n    _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n    if [ -x \"${_isPythonThree}\" ]; then\n      _ENC_PASS=$(python3 -c \"import 
urllib.parse; print(urllib.parse.quote('''${_ESC_PASS}'''))\")\n    elif [ -x \"${_isPythonTwo}\" ]; then\n      _ENC_PASS=$(python2 -c \"import urllib; print urllib.quote('''${_ESC_PASS}''')\")\n    fi\n    _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n  fi\n  if [ -z \"${_ESC_PASS}\" ] || [ \"${_LEN_PASS}\" -lt 9 ]; then\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n    else\n      if [ -e \"/root/.my.pass.txt\" ]; then\n        _ESC_PASS=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n      else\n        _ESC_PASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n      fi\n    fi\n    _ENC_PASS=\"${_ESC_PASS}\"\n  fi\n\n  _L_SYS=\"${_ROOT}/.${_USER}.pass.txt\"\n  echo \"${_ESC_PASS}\" > ${_L_SYS}\n  chown ${_USER}:${_USRG} ${_L_SYS}\n  chmod 0600 ${_L_SYS}\n\n  _L_SYS_PHP=\"${_ROOT}/.${_USER}.pass.php\"\n  echo \"<?php\" > ${_L_SYS_PHP}\n  echo \"\\$oct_db_user = \\\"${_USER}\\\";\" >> ${_L_SYS_PHP}\n  echo \"\\$oct_db_pass = \\\"${_ESC_PASS}\\\";\" >> ${_L_SYS_PHP}\n  echo \"\\$oct_db_host = \\\"${_THIS_DB_HOST}\\\";\" >> ${_L_SYS_PHP}\n  echo \"\\$oct_db_port = \\\"${_THIS_DB_PORT}\\\";\" >> ${_L_SYS_PHP}\n  echo \"\\$oct_db_dirs = \\\"/data/disk/${_USER}/backups\\\";\" >> ${_L_SYS_PHP}\n  chown ${_USER}:${_USRG} ${_L_SYS_PHP}\n  chmod 0600 ${_L_SYS_PHP}\n\n  _ESC=\"*.*\"\n\n  if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n    _USE_AEGIR_HOST=\"${_hName}\"\n    _SQL_CONNECT=localhost\n    if [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ 
\"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n      _SQL_CONNECT=127.0.0.1\n    fi\n    _USE_DB_USER=\"${_USER}\"\n  else\n    _USE_AEGIR_HOST=\"${_hName}\"\n    _SQL_CONNECT=\"${_THIS_DB_HOST}\"\n    _USE_DB_USER=aegir_root\n  fi\n  if [ \"${_THIS_DB_HOST}\" = \"${_MY_OWNIP}\" ]; then\n    _USE_AEGIR_HOST=\"${_hName}\"\n    _SQL_CONNECT=localhost\n  fi\n  _find_correct_ip\n  _USE_RESOLVEIP=\"${_LOC_IP}\"\n\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n    mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges &> /dev/null\n  else\n    mysqladmin -u root flush-privileges &> /dev/null\n  fi\n\n  if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n    || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n    if [ \"${_STATUS}\" = \"INIT\" ]; then\n      [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL14 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n      if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n        _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n        mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_USE_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n      fi\n      ###\n      ###\n      # if [ -e 
\"/root/.my.cluster_root_pwd.txt\" ]; then\n      #   _ROOT_SQL_PASWD=$(cat /root/.my.cluster_root_pwd.txt 2>&1)\n      #   _ROOT_SQL_PASWD=$(echo -n ${_ROOT_SQL_PASWD} | tr -d \"\\n\" 2>&1)\n      # elif [ -e \"/root/.my.pass.txt\" ]; then\n      #   _ROOT_SQL_PASWD=$(cat /root/.my.pass.tx 2>&1)\n      #   _ROOT_SQL_PASWD=$(echo -n ${_ROOT_SQL_PASWD} | tr -d \"\\n\" 2>&1)\n      # fi\n      # if [ -e \"/root/.my.cluster_write_node.txt\" ]; then\n      #   _CL_WR_NODE=$(cat /root/.my.cluster_write_node.txt 2>&1)\n      #   _CL_WR_NODE=$(echo -n ${_CL_WR_NODE} | tr -d \"\\n\" 2>&1)\n      #   _SQL_CONNECT=\"${_CL_WR_NODE}\"\n      #   _THIS_DB_PORT=\"3306\"\n      # fi\n      ### -p${_ROOT_SQL_PASWD}\n      ### [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL14CR -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n      ###\n      ###\n      mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n    else\n      if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n        [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL15 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n        if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n          _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n          mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE 
username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_USE_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n        fi\n        _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_AEGIR_HOST}';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_RESOLVEIP}';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'localhost';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'127.0.0.1';\" &> /dev/null\n        ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'%';\" &> /dev/null\n        mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n      fi\n    fi\n  fi\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n    mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges &> /dev/null\n  else\n    mysqladmin -u root flush-privileges &> /dev/null\n  fi\n}\n\n#\n# Sync provision backend db_passwd.\n_provision_backend_dbpass_sync() {\n  if 
[ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _provision_backend_dbpass_sync\"\n    _debug_proc\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  _find_correct_ip\n  _USE_RESOLVEIP=\"${_LOC_IP}\"\n  _RESOLVEIP=\"${_LOC_IP}\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Syncing provision backend db_passwd...\"\n  fi\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  else\n    _THIS_DB_PORT=3306\n  fi\n  _L_SYS=\"${_ROOT}/.${_USER}.pass.txt\"\n  mv -f ${_L_SYS} ${_L_SYS}-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n  _L_SYS_PHP=\"${_ROOT}/.${_USER}.pass.php\"\n  mv -f ${_L_SYS_PHP} ${_L_SYS_PHP}-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n  _provision_backend_dbpass_generate\n  if [ ! -z \"${_ESC_PASS}\" ] || [ ! -z \"${_ENC_PASS}\" ]; then\n    su -s /bin/bash - ${_USER} -c \"${_DRUSHCMD} @hostmaster \\\n      sqlq \\\"UPDATE hosting_db_server SET db_passwd='${_ESC_PASS}' \\\n      WHERE db_user='${_USER}'\\\"\" &> /dev/null\n    wait\n    _ESC=\"*.*\"\n    _USE_DB_USER=\"${_USER}\"\n    _USE_AEGIR_HOST=\"${_hName}\"\n    _find_correct_ip\n    _USE_RESOLVEIP=\"${_LOC_IP}\"\n    [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL16 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n    if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n      _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n      mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_USE_DB_USER}','${_ESC_PASS}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES 
('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n    fi\n    _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_AEGIR_HOST}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_RESOLVEIP}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'localhost';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'127.0.0.1';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'%';\" &> /dev/null\n    mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_ESC_PASS}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_ESC_PASS}';\nEOFMYSQL\n    sed -i \\\n      \"s/mysql:\\/\\/${_USER}:.*/mysql:\\/\\/${_USER}:${_ENC_PASS}@${_SQL_CONNECT}',/g\" \\\n      ${_ROOT}/.drush/server_*.alias.drushrc.php &> /dev/null\n    wait\n  else\n    _msg \"CRIT: _ESC_PASS empty or not generated\"\n  fi\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n    mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges &> /dev/null\n  else\n    mysqladmin -u root flush-privileges &> /dev/null\n  fi\n  su -s /bin/bash ${_USER} -c \"${_DRUSHCMD} cc drush\" &> /dev/null\n  wait\n  rm -rf ${_ROOT}/.tmp/cache\n  if [ -e \"${_ROOT}/.drush/server_localhost.alias.drushrc.php\" ]; then\n    su -s /bin/bash ${_USER} -c \"${_DRUSHCMD} @hostmaster \\\n      hosting-task 
@server_localhost verify --force\" &> /dev/null\n  else\n    su -s /bin/bash ${_USER} -c \"${_DRUSHCMD} @hostmaster \\\n      hosting-task @server_master verify --force\" &> /dev/null\n  fi\n  wait\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Running hosting-dispatch/hosting-tasks...\"\n  fi\n  su -s /bin/bash ${_USER} -c \"${_DRUSHCMD} @hostmaster hosting-dispatch\" &> /dev/null\n  wait\n  sleep 5\n  su -s /bin/bash ${_USER} -c \"${_DRUSHCMD} @hostmaster hosting-tasks --force\" &> /dev/null\n  wait\n  sleep 5\n  su -s /bin/bash ${_USER} -c \"${_DRUSHCMD} @hostmaster hosting-tasks --force\" &> /dev/null\n  wait\n  sleep 5\n  su -s /bin/bash ${_USER} -c \"${_DRUSHCMD} @hostmaster hosting-tasks --force\" &> /dev/null\n  wait\n}\n\n#\n# Sync hostmaster frontend db_passwd.\n_hostmaster_frontend_dbpass_sync() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _hostmaster_frontend_dbpass_sync\"\n    _debug_proc\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Syncing hostmaster frontend db_passwd...\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  else\n    _THIS_DB_PORT=3306\n  fi\n  _THIS_HM_SPTH=$(cat ${_ROOT}/.drush/hostmaster.alias.drushrc.php \\\n    | grep \"site_path'\" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,']//g\" 2>&1)\n  _THIS_HM_DBUR=$(cat ${_THIS_HM_SPTH}/drushrc.php \\\n    | grep \"options\\['db_user'\\] = \" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,';]//g\" 2>&1)\n  _THIS_HM_DBPD=$(cat ${_THIS_HM_SPTH}/drushrc.php \\\n    | grep \"options\\['db_passwd'\\] = \" \\\n    | cut -d: -f2 \\\n    | awk '{ print $3}' \\\n    | sed \"s/[\\,';]//g\" 2>&1)\n  if [ -e \"${_THIS_HM_SPTH}\" ] \\\n    && [ ! -z \"${_THIS_HM_DBUR}\" ] \\\n    && [ ! 
-z \"${_THIS_HM_DBPD}\" ]; then\n    _ESC=\"*.*\"\n    _USE_DB_USER=\"${_THIS_HM_DBUR}\"\n    _USE_AEGIR_HOST=\"${_hName}\"\n    _find_correct_ip\n    _USE_RESOLVEIP=\"${_LOC_IP}\"\n    [ -e \"/root/.my.cluster_root_pwd.txt\" ] && echo \"SQL17 -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -uroot\"\n    if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n      _PROXYSQL_PASSWORD=$(cat /root/.my.proxysql_adm_pwd.txt 2>/dev/null | tr -d '\\n')\n      mysql -uadmin -p${_PROXYSQL_PASSWORD} -h127.0.0.1 -P6032 --protocol=tcp<<PROXYSQL\nDELETE FROM mysql_users WHERE username='${_USE_DB_USER}';\nDELETE FROM mysql_query_rules WHERE username='${_USE_DB_USER}';\nINSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('${_USE_DB_USER}','${_THIS_HM_DBPD}','10');\nLOAD MYSQL USERS TO RUNTIME;\nSAVE MYSQL USERS FROM RUNTIME;\nSAVE MYSQL USERS TO DISK;\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',10,1);\nINSERT INTO mysql_query_rules (username,destination_hostgroup,active) VALUES ('${_USE_DB_USER}',11,1);\nLOAD MYSQL QUERY RULES TO RUNTIME;\nSAVE MYSQL QUERY RULES TO DISK;\nPROXYSQL\n    fi\n    _C_SQL=\"mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp --database=mysql -e\"\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_AEGIR_HOST}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'${_USE_RESOLVEIP}';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'localhost';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'127.0.0.1';\" &> /dev/null\n    ${_C_SQL} \"DROP USER '${_USE_DB_USER}'@'%';\" &> /dev/null\n    mysql --silent -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp -u root mysql<<EOFMYSQL\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'localhost';\nCREATE USER IF NOT EXISTS '${_USE_DB_USER}'@'%';\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'localhost' WITH GRANT OPTION;\nGRANT ALL ON ${_ESC} TO '${_USE_DB_USER}'@'%' WITH GRANT 
OPTION;\nALTER USER '${_USE_DB_USER}'@'localhost' IDENTIFIED BY '${_THIS_HM_DBPD}';\nALTER USER '${_USE_DB_USER}'@'%' IDENTIFIED BY '${_THIS_HM_DBPD}';\nEOFMYSQL\n  fi\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n    mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-privileges &> /dev/null\n  else\n    mysqladmin -u root flush-privileges &> /dev/null\n  fi\n}\n\n_satellite_run_pre_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_run_pre_install\"\n    _debug_proc\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  _find_correct_ip\n  _USE_RESOLVEIP=\"${_LOC_IP}\"\n  _RESOLVEIP=\"${_LOC_IP}\"\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    _SQL_CONNECT=127.0.0.1\n    _THIS_DB_PORT=6033\n  else\n    _THIS_DB_PORT=3306\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n      _SQL_CONNECT=127.0.0.1\n      _THIS_DB_PORT=6033\n      mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-hosts &> /dev/null\n    else\n      mysqladmin -u root flush-hosts &> /dev/null\n    fi\n    _provision_backend_dbpass_generate\n    echo \"${_USER} ALL=NOPASSWD: /etc/init.d/nginx\" >> /etc/sudoers\n    _SCRIPTS=(fix-drupal-platform-permissions fix-drupal-site-permissions fix-drupal-platform-ownership fix-drupal-site-ownership lock-local-drush-permissions)\n    for _SCRIPT in ${_SCRIPTS[@]}; do\n      echo \"${_USER} ALL=NOPASSWD: /usr/local/bin/${_SCRIPT}.sh\" >> /etc/sudoers.d/${_SCRIPT}\n      chmod 0440 /etc/sudoers.d/${_SCRIPT}\n    done\n  else\n    _VAR_IF_PRESENT=$(grep \"${_USER} ALL=NOPASSWD\" /etc/sudoers 2>&1)\n    if [[ ! 
\"${_VAR_IF_PRESENT}\" =~ \"${_USER} ALL=NOPASSWD\" ]]; then\n      echo \"${_USER} ALL=NOPASSWD: /etc/init.d/nginx\" >> /etc/sudoers\n    fi\n    _SCRIPTS=(fix-drupal-platform-permissions fix-drupal-site-permissions fix-drupal-platform-ownership fix-drupal-site-ownership lock-local-drush-permissions)\n    for _SCRIPT in ${_SCRIPTS[@]}; do\n      _VAR_IF_PRESENT=$(grep \"${_USER} ALL=NOPASSWD: /usr/local/bin/${_SCRIPT}.sh\" /etc/sudoers.d/${_SCRIPT} 2>&1)\n      if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"${_USER} ALL=NOPASSWD\" ]]; then\n        echo \"${_USER} ALL=NOPASSWD: /usr/local/bin/${_SCRIPT}.sh\" >> /etc/sudoers.d/${_SCRIPT}\n        chmod 0440 /etc/sudoers.d/${_SCRIPT}\n      fi\n    done\n    if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n      _SQL_CONNECT=127.0.0.1\n      _THIS_DB_PORT=6033\n      mysqladmin -u root -h${_SQL_CONNECT} -P${_THIS_DB_PORT} --protocol=tcp flush-hosts &> /dev/null\n    else\n      mysqladmin -u root flush-hosts &> /dev/null\n    fi\n    _RST=$(syncpass fix ${_USER} 2>&1)\n    _provision_backend_dbpass_sync\n  fi\n  cd ${_ROOT}\n}\n\n#\n# Download for Drush Make Local build.\n_satellite_download_for_local_build() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_download_for_local_build\"\n    _debug_proc\n  fi\n  _mL=\"/opt/tmp/make_local\"\n  mkdir -p ${_mL}\n  if [ ! -e \"${_mL}/hostmaster/hostmaster.make\" ] \\\n    || [ ! -e \"${_mL}/hosting/hosting.module\" ] \\\n    || [ ! 
-e \"${_mL}/drupal/modules/system/system.module\" ]; then\n    rm -rf ${_mL}\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Downloading hostmaster modules...\"\n    fi\n    if [ \"${_DL_MODE}\" = \"BATCH\" ]; then\n      rm -rf /opt/tmp/make_local\n      cd /opt/tmp\n      _get_dev_ext \"make_local.tar.gz\"\n      if [ -e \"${_mL}/hostmaster/hostmaster.make\" ] \\\n        && [ -e \"${_mL}/hosting/hosting.module\" ] \\\n        && [ -e \"${_mL}/drupal/modules/system/system.module\" ]; then\n        [ -e \"make_local.tar.gz\" ] && rm -f make_local.tar.gz\n      fi\n    else\n      mkdir -p ${_mL}\n      cd ${_mL}\n      ### Drupal Core\n      _get_core_ext \"${_DRUPAL7}.tar.gz\"\n      mv -f ${_mL}/${_DRUPAL7} ${_mL}/drupal\n      ### Ægir Core\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hostmaster.git    &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting.git       &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/eldir.git         &> /dev/null\n      ### Ægir Golden + BOA Settings\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/aegir_objects.git                  &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_civicrm.git                &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_custom_settings.git        &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_deploy.git                 &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_git.git                    &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_le.git                     &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_remote_import.git          &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_site_backup_manager.git    &> /dev/null\n        ${_gCb} 5.x-${_tRee} ${_gitHub}/hosting_tasks_extra.git            &> /dev/null\n        rm -rf */.git\n      ### Ægir Drupal Contrib\n      _get_dev_stc \"admin_menu-7.x-3.0-rc7.tar.gz\"\n      _get_dev_stc \"betterlogin-7.x-1.5.tar.gz\"\n      
_get_dev_stc \"ctools-7.x-1.21.tar.gz\"\n      _get_dev_stc \"entity-7.x-1.12.tar.gz\"\n      _get_dev_stc \"libraries-7.x-2.5.tar.gz\"\n      _get_dev_stc \"module_filter-7.x-2.3.tar.gz\"\n      _get_dev_stc \"openidadmin-7.x-1.0.tar.gz\"\n      _get_dev_stc \"overlay_paths-7.x-1.3.tar.gz\"\n      _get_dev_stc \"r4032login-7.x-1.8.tar.gz\"\n      _get_dev_stc \"tfa_basic-7.x-1.1.tar.gz\"\n      _get_dev_stc \"tfa-7.x-2.1.tar.gz\"\n      _get_dev_stc \"timeago-7.x-2.3.tar.gz\"\n      _get_dev_stc \"views_bulk_operations-7.x-3.7.tar.gz\"\n      _get_dev_stc \"views-7.x-3.30.tar.gz\"\n      ### Ægir Third Party Libraries\n      _get_dev_stc \"qrcodejs.tar.gz\"\n      _get_dev_stc \"timeagojs.tar.gz\"\n      _get_dev_stc \"vuejs.tar.gz\"\n      ### BOA Drupal Contrib\n      _get_dev_stc \"d7security_client-7.x-1.3.tar.gz\"\n      _get_dev_stc \"features_extra-7.x-1.2.tar.gz\"\n      _get_dev_stc \"features-7.x-2.15.tar.gz\"\n      _get_dev_stc \"idna_convert-7.x-1.0.tar.gz\"\n      _get_dev_stc \"revision_deletion-7.x-1.3.tar.gz\"\n      _get_dev_stc \"strongarm-7.x-2.0.tar.gz\"\n      _get_dev_stc \"userprotect-7.x-1.3.tar.gz\"\n      _get_dev_stc \"environment_indicator-7.x-2.9.tar.gz\"\n    fi\n    find ${_mL} -type d -exec chmod 0755 {} \\; &> /dev/null\n    find ${_mL} -type f -exec chmod 0644 {} \\; &> /dev/null\n    chown -R root:root ${_mL}\n  fi\n}\n\n#\n# Run child B.\n_satellite_run_child_b() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_run_child_b\"\n    _debug_proc\n  fi\n  _find_correct_ip\n  _USE_RESOLVEIP=\"${_LOC_IP}\"\n  if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Switching user and running AegirSetupB...\"\n    fi\n    rm -f /opt/tmp/testecho*\n    chown root:${_USRG} /data/u &> /dev/null\n    chmod 0771 /data/u &> /dev/null\n    su -s /bin/bash - ${_USER} -c \"/bin/bash ${_AegirSetupB} ${_USER}\"\n    wait\n    if [ -e 
\"/opt/tmp/status-AegirSetupB-FAIL\" ]; then\n      _msg \"${_STATUS} A: FATAL ERROR: AegirSetupB installer failed\"\n      _msg \"${_STATUS} A: FATAL ERROR: Aborting AegirSetupA installer NOW!\"\n      touch /opt/tmp/status-AegirSetupA-FAIL\n      exit 1\n    fi\n    _U_HD=\"${_ROOT}/.drush\"\n    if [ -e \"${_U_HD}/php.ini\" ]; then\n      chattr +i ${_U_HD}/php.ini\n    fi\n    chmod 0700 /data/u &> /dev/null\n    chown root:root /data/u &> /dev/null\n    _msg \"${_STATUS} A: Ægir Satellite Instance installed\"\n  else\n    if [ \"${_PLATFORMS_ONLY}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Satellite Instance Hostmaster upgrade skipped\"\n    else\n      if [ ! -d \"${_ROOT}/.drush/sys/provision/http\" ]; then\n        mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n        rm -rf ${_ROOT}/.drush/provision\n        rm -rf ${_ROOT}/.drush/sys/provision\n        ${_gCb} ${_BRANCH_PRN} ${_gitHub}/provision.git \\\n          ${_ROOT}/.drush/sys/provision &> /dev/null\n      fi\n      rm -rf ${_ROOT}/.drush/drush_make\n      rm -rf ${_ROOT}/.drush/sys/drush_make\n      _hostmaster_frontend_dbpass_sync\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"${_STATUS} A: Switching user and running AegirSetupB...\"\n      fi\n      rm -f /opt/tmp/testecho*\n      _THIS_HM_ROOT=$(cat ${_ROOT}/.drush/hostmaster.alias.drushrc.php \\\n        | grep \"root'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      _THIS_HM_SITE=$(cat ${_ROOT}/.drush/hostmaster.alias.drushrc.php \\\n        | grep \"site_path'\" \\\n        | cut -d: -f2 \\\n        | awk '{ print $3}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n      _U_HD=\"${_ROOT}/.drush\"\n      if [ -e \"${_U_HD}/php.ini\" ]; then\n        chattr -i ${_U_HD}/php.ini\n      fi\n      chown -R ${_USER}:${_USRG} ${_ROOT}/.drush\n      chown -R ${_USER}:${_USRG} ${_ROOT}/backups\n      chown -R ${_USER}:${_USRG} ${_ROOT}/clients\n      chown -R ${_USER}:${_USRG} 
${_ROOT}/config\n      chown -R ${_USER}:${_USRG} ${_ROOT}/tools\n      chown -R ${_USER} ${_THIS_HM_ROOT}\n      chown -R ${_USER}:${_WEBG} ${_THIS_HM_SITE}/files\n      chmod -R 02775 ${_THIS_HM_SITE}/files\n      chown root:${_USRG} /data/u &> /dev/null\n      chmod 0771 /data/u &> /dev/null\n      su -s /bin/bash - ${_USER} -c \"/bin/bash ${_AegirSetupB} ${_USER}\"\n      wait\n      chown -R ${_USER}:users ${_ROOT}/tools/drush\n      _satellite_update_drush_php_cli\n      _satellite_update_ini_php_cli\n      if [ -e \"/opt/tmp/status-AegirSetupB-FAIL\" ]; then\n        _msg \"${_STATUS} A: FATAL ERROR: AegirSetupB installer failed\"\n        _msg \"${_STATUS} A: FATAL ERROR: Aborting AegirSetupA installer NOW!\"\n        touch /opt/tmp/status-AegirSetupA-FAIL\n        exit 1\n      else\n        if [ -e \"${_U_HD}/php.ini\" ]; then\n          chattr +i ${_U_HD}/php.ini\n        fi\n        mkdir -p ${_ROOT}/backups/system/old_hostmaster\n        chmod 700 ${_ROOT}/backups/system/old_hostmaster\n        chmod 700 ${_ROOT}/backups/system\n        mv -f ${_ROOT}/backups/*host8* \\\n          ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n        mv -f ${_ROOT}/backups/*o8.io* \\\n          ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n        mv -f ${_ROOT}/backups/*boa.io* \\\n          ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n        mv -f ${_ROOT}/backups/*aegir.cc* \\\n          ${_ROOT}/backups/system/old_hostmaster/ &> /dev/null\n        chmod 600 ${_ROOT}/backups/system/old_hostmaster/* &> /dev/null\n        _hostmaster_frontend_dbpass_sync\n      fi\n      su -s /bin/bash - ${_USER} -c \"drush8 @hm rr\" &> /dev/null\n      wait\n      chmod 0700 /data/u &> /dev/null\n      chown root:root /data/u &> /dev/null\n      _msg \"${_STATUS} A: Ægir Satellite Instance upgrade completed\"\n    fi\n  fi\n}\n\n_satellite_if_create_local_bin() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: 
_satellite_if_create_local_bin\"\n    _debug_proc\n  fi\n  _L_BIN=\"${_ROOT}/bin\"\n  if [ ! -d \"${_L_BIN}\" ]; then\n    mkdir -p ${_L_BIN}\n    chown ${_USER}:${_USRG} ${_L_BIN}\n    chmod 700 ${_L_BIN}\n  fi\n  if [ ! -L \"${_L_BIN}/drush\" ]; then\n    ln -sfn ${_ROOT}/tools/drush/drush ${_L_BIN}/drush\n  fi\n}\n\n#\n# Run standard post-install.\n_satellite_run_post_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_run_post_install\"\n    _debug_proc\n  fi\n  _LOCAL_STATUS=\"${_STATUS}\"\n  if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n    if [ ! -e \"${_ROOT}/config/${_USER}.nginx.conf\" ]; then\n      rm -f /var/aegir/config/server_master/nginx/platform.d/${_USER}.conf\n      echo \"include ${_ROOT}/config/server_master/nginx/vhost.d/*;\" > \\\n        ${_ROOT}/config/${_USER}.nginx.conf\n      ln -sfn ${_ROOT}/config/${_USER}.nginx.conf \\\n        /var/aegir/config/server_master/nginx/platform.d/${_USER}.conf\n    fi\n    chgrp -R ${_WEBG} ${_HM_ROOT}/sites/${_DOMAIN}/files\n    chgrp ${_WEBG} ${_HM_ROOT}/sites/${_DOMAIN}/settings.php\n    rm -rf ${_HM_ROOT}/profiles/default\n    rm -rf ${_HM_ROOT}/themes/bluemarine\n    rm -rf ${_HM_ROOT}/themes/chameleon\n    rm -rf ${_HM_ROOT}/themes/pushbutton\n    rm -rf ${_HM_ROOT}/scripts\n    rm -f ${_HM_ROOT}/themes/README.txt\n    rm -f ${_HM_ROOT}/*.txt\n    _mrun \"service nginx reload\"\n    cd ${_HM_ROOT}\n    cp -af /opt/tmp/boa/aegir/conf/tpl/robots.txt ./\n    cd ${_ROOT}\n  fi\n}\n\n#\n# Set permissions for all.\n_satellite_set_permissions_for_all() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_set_permissions_for_all\"\n    _debug_proc\n  fi\n  chmod 0755 ${_HM_ROOT} &> /dev/null\n  find ${_ROOT}/config/server_master -type d -exec chmod 0700 {} \\; &> /dev/null\n  find ${_ROOT}/config/server_master -type f -exec chmod 0600 {} \\; &> /dev/null\n  chmod 0711 ${_ROOT}/config &> /dev/null\n  chmod 0711 ${_ROOT}/config/includes &> /dev/null\n  
chmod 0750 ${_ROOT}/backups &> /dev/null\n  chmod 0750 ${_ROOT}/clients &> /dev/null\n  find ${_ROOT}/aegir/distro/*/profiles/* -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_ROOT}/aegir/distro/*/profiles/* -type f -exec chmod 0644 {} \\; &> /dev/null\n  find ${_ROOT}/aegir/distro/*/sites/all/* -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_ROOT}/aegir/distro/*/sites/all/* -type f -exec chmod 0644 {} \\; &> /dev/null\n  chown -R ${_USER}:${_USRG} ${_ROOT}/.drush &> /dev/null\n  find ${_ROOT}/.drush -type d -exec chmod 0710 {} \\; &> /dev/null\n  find ${_ROOT}/.drush/usr -type d -exec chmod 0750 {} \\; &> /dev/null\n  find ${_ROOT}/.drush -type f -exec chmod 0640 {} \\; &> /dev/null\n  chmod 0440 ${_ROOT}/.drush/*.php &> /dev/null\n  if [ ! -e \"${_ROOT}/.drush/hm.alias.drushrc.php\" ]; then\n    cp -a ${_ROOT}/.drush/hostmaster.alias.drushrc.php ${_ROOT}/.drush/hm.alias.drushrc.php\n    sed -i \"s/\\['hostmaster'\\]/['hm']/g\" ${_ROOT}/.drush/hm.alias.drushrc.php\n  fi\n  chmod 0400 ${_ROOT}/.drush/server_*.php &> /dev/null\n  chmod 0400 ${_ROOT}/.drush/platform_*.php &> /dev/null\n  chmod 0400 ${_ROOT}/.drush/hostmaster*.php &> /dev/null\n  chmod 0400 ${_ROOT}/.drush/hm.alias.drushrc.php &> /dev/null\n  chmod 0710 ${_ROOT}/.drush &> /dev/null\n}\n\n#\n# Run child C.\n_satellite_run_child_c() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_run_child_c\"\n    _debug_proc\n  fi\n  _LOCAL_STATUS=\"${_STATUS}\"\n  if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n    _DIST_INSTALL=YES\n    rm -f /opt/tmp/testecho*\n    _satellite_create_shared_dirs\n    _satellite_manage_o_contrib\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Switching user and running Platforms build\"\n    fi\n    cd ${_ROOT}\n    _AegirSetupC=\"${_bldPth}/aegir/scripts/AegirSetupC.sh.txt\"\n    su -s /bin/bash - ${_USER} -c \"/bin/bash ${_AegirSetupC} ${_USER}\"\n    wait\n    if [ -e \"/opt/tmp/status-AegirSetupC-FAIL\" ]; 
then\n      _msg \"${_STATUS} A: FATAL ERROR: AegirSetupC installer failed\"\n      _msg \"${_STATUS} A: FATAL ERROR: Aborting AegirSetupA installer NOW!\"\n      touch /opt/tmp/status-AegirSetupA-FAIL\n      exit 1\n    fi\n  else\n    if [ \"${_HM_ONLY}\" = \"YES\" ]; then\n      _DIST_INSTALL=NO\n    else\n      echo \" \"\n      if _prompt_yes_no \"Do you want to install some ready to use platforms?\"; then\n        true\n        _DIST_INSTALL=YES\n        rm -f /opt/tmp/testecho*\n        _satellite_create_shared_dirs\n        _satellite_manage_o_contrib\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"${_STATUS} A: Switching user and running Platforms build\"\n        fi\n        cd ${_ROOT}\n        _AegirSetupC=\"${_bldPth}/aegir/scripts/AegirSetupC.sh.txt\"\n        su -s /bin/bash - ${_USER} -c \"/bin/bash ${_AegirSetupC} ${_USER}\"\n        wait\n        if [ -e \"/opt/tmp/status-AegirSetupC-FAIL\" ]; then\n          _msg \"${_STATUS} A: FATAL ERROR: AegirSetupC installer failed\"\n          _msg \"${_STATUS} A: FATAL ERROR: Aborting AegirSetupA installer NOW!\"\n          touch /opt/tmp/status-AegirSetupA-FAIL\n          exit 1\n        fi\n      else\n        _msg \"${_STATUS} A: No new Platforms added this time\"\n      fi\n    fi\n  fi\n  if [ ! -e \"${_CORE}/dot-files-ctrl-${_xSrl}-${_X_VERSION}\" ] \\\n    && [ -e \"${_CORE}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Cleaning up various dot files, please wait...\"\n    fi\n    cd ${_CORE}\n    find . -name .svn -exec rm -rf {} \\; &> /dev/null\n    find . -name .bzr -exec rm -rf {} \\; &> /dev/null\n    find . -name .git -exec rm -rf {} \\; &> /dev/null\n    find . -name .DS_Store -exec rm -rf {} \\; &> /dev/null\n    find . 
-name \"._*\" -type f | xargs rm -rf &> /dev/null\n    touch ${_CORE}/dot-files-ctrl-${_xSrl}-${_X_VERSION}\n  fi\n}\n\n_satellite_child_scripts_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_scripts_cleanup\"\n    _debug_proc\n  fi\n  rm -f ${_ROOT}/AegirSetupC.sh.txt*\n  rm -f ${_ROOT}/AegirSetupB.sh.txt*\n  rm -f ${_ROOT}/*.sh.txt\n  rm -f /var/spool/cron/crontabs/${_USER}\n}\n\n#\n# Add allow-snail group if not exists.\n_add_allow_snail_if_not_exists() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _add_allow_snail_if_not_exists\"\n  fi\n  _SNAIL_EXISTS=$(getent group allow-snail 2>&1)\n  if [[ ! \"${_SNAIL_EXISTS}\" =~ \"allow-snail\" ]]; then\n    addgroup --system allow-snail &> /dev/null\n  fi\n}\n\n#\n# Unlock sendmail for allow-snail group\n_unlock_sendmail_for_snail() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _unlock_sendmail_for_snail\"\n  fi\n  _add_allow_snail_if_not_exists\n  chown root /usr/sbin/sendmail &> /dev/null\n  chgrp allow-snail /usr/sbin/sendmail &> /dev/null\n  chmod 755 /usr/sbin/sendmail &> /dev/null\n}\n\n_satellite_if_add_snail_access() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_add_snail_access\"\n  fi\n  _unlock_sendmail_for_snail\n  if getent group allow-snail >/dev/null 2>&1 && \\\n    ! id -nG \"${_USER}\" 2>/dev/null | tr ' ' '\\n' | grep -qxF \"allow-snail\"; then\n    usermod -aG allow-snail \"${_USER}\"\n  fi\n}\n\n_satellite_if_add_ftps_lshell_access() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_add_ftps_lshell_access\"\n    _debug_proc\n  fi\n  _USERFTP=\"${_USER}.ftp\"\n  _USERFTP_ROOT=\"/home/${_USERFTP}\"\n  if [ -e \"/usr/bin/mysecureshell\" ] && [ -e \"/etc/ssh/sftp_config\" ]; then\n    _PATH_LSHELL=\"/usr/bin/mysecureshell\"\n  else\n    _PATH_LSHELL=\"/usr/bin/lshell\"\n  fi\n  _ID_SHELLS=$(id -nG ${_USERFTP} 2>&1)\n  if [[ ! 
\"${_ID_SHELLS}\" =~ \"users\" ]] \\\n    || [ ! -d \"${_USERFTP_ROOT}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding ftps/lshell user\"\n    fi\n    # add user\n    find /etc/[a-z]*\\.lock -maxdepth 1 -type f -exec rm -rf {} \\; &> /dev/null\n    useradd -d /home/${_USERFTP} -s ${_PATH_LSHELL} -m -N -r ${_USERFTP} &> /dev/null\n    adduser ${_USERFTP} ${_WEBG} &> /dev/null\n    # Make sure new file which contains password is private\n    cd ${_ROOT}/log\n    touch ${_ROOT}/log/pass.txt\n    chmod 0600 ${_ROOT}/log/pass.txt\n    # generate a nice secure password and put it in a file\n    _ESC_LUPASS=\"\"\n    _LEN_LUPASS=0\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n      _PWD_CHARS=64\n    elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n      _PWD_CHARS=32\n    else\n      _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n      if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n        && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n        _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n      else\n        _PWD_CHARS=32\n      fi\n      if [ ! 
-z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n        _PWD_CHARS=128\n      fi\n    fi\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n      _RANDPASS_TEST=$(randpass -V 2>&1)\n      if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n        _ESC_LUPASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n      else\n        _ESC_LUPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_LUPASS=$(_sanitize_string \"${_ESC_LUPASS}\" 2>&1)\n      fi\n      _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n      _LEN_LUPASS=$(echo ${#_ESC_LUPASS} 2>&1)\n    fi\n    if [ -z \"${_ESC_LUPASS}\" ] || [ \"${_LEN_LUPASS}\" -lt 9 ]; then\n      _ESC_LUPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_LUPASS=$(_sanitize_string \"${_ESC_LUPASS}\" 2>&1)\n    fi\n    echo \"${_ESC_LUPASS}\" > ${_ROOT}/log/pass.txt\n    # get the password hash\n    ph=$(mkpasswd -m sha-512 \"${_ESC_LUPASS}\" $(openssl rand -base64 16 \\\n      | tr -d '+=' | head -c 16))\n    # Set the password\n    usermod -p $ph ${_USERFTP} &> /dev/null\n    passwd -w 7 -x 90 ${_USERFTP} &> /dev/null\n  fi\n  usermod -aG lshellg ${_USERFTP}\n  chsh -s ${_PATH_LSHELL} ${_USERFTP} &> /dev/null\n  _mntPoint=$(find /mnt -mindepth 1 -maxdepth 1 -type d | grep \"\\.\" | head -n1) &&\n  _MNT_STATIC_FILES=\"${_mntPoint}/files/${_USER}/static/files\"\n  echo >> /etc/lshell.conf\n  echo \"[${_USERFTP}]\" >> /etc/lshell.conf\n  echo \"path : ['/opt/user/gems/${_USERFTP}', \\\n                '/opt/user/npm/${_USERFTP}', \\\n                '${_MNT_STATIC_FILES}', \\\n                '${_ROOT}/distro', \\\n                '${_ROOT}/static', \\\n                '${_ROOT}/backups', \\\n                '${_ROOT}/clients']\" \\\n                | fmt -su -w 2500 >> 
/etc/lshell.conf\n}\n\n_satellite_if_add_update_user_symlinks() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_add_update_user_symlinks\"\n    _debug_proc\n  fi\n  ###---### Add symlink to the sites backups.\n  #\n  _USERFTP=\"${_USER}.ftp\"\n  _USER_HD=\"/home/${_USERFTP}\"\n  _USER_DS=\"${_USER_HD}/.drush\"\n  _disable_chattr ${_USERFTP}\n  if [ ! -L \"${_USER_HD}/backups\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the sites backups\"\n    fi\n    ln -sfn ${_ROOT}/backups ${_USER_HD}/backups\n  fi\n\n  ###---### Remove legacy symlinks.\n  #\n  if [ -e \"${_USER_DS}/drush_make\" ]; then\n    rm -f ${_USER_DS}/drush_make\n  fi\n  if [ -e \"${_USER_DS}/registry_rebuild\" ]; then\n    rm -f ${_USER_DS}/registry_rebuild\n  fi\n  if [ -e \"${_USER_DS}/clean_missing_modules\" ]; then\n    rm -f ${_USER_DS}/clean_missing_modules\n  fi\n  if [ -e \"${_USER_DS}/drush_ecl\" ]; then\n    rm -f ${_USER_DS}/drush_ecl\n  fi\n  if [ -e \"${_USER_DS}/make_local\" ]; then\n    rm -f ${_USER_DS}/make_local\n  fi\n  if [ -e \"${_USER_DS}/safe_cache_form_clear\" ]; then\n    rm -f ${_USER_DS}/safe_cache_form_clear\n  fi\n  if [ -e \"${_USER_DS}/mydropwizard\" ]; then\n    rm -f ${_USER_DS}/mydropwizard\n  fi\n  if [ -e \"${_USER_DS}/utf8mb4_convert\" ]; then\n    rm -f ${_USER_DS}/utf8mb4_convert\n  fi\n\n  ###---### Add symlink to the system registry_rebuild.\n  #\n  if [ ! 
-L \"${_USER_DS}/usr/registry_rebuild\" ] \\\n    && [ -e \"${_ROOT}/.drush/usr/registry_rebuild\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the system registry_rebuild\"\n    fi\n    mkdir -p ${_USER_DS}/usr\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}/usr\n    chmod 700 ${_USER_DS}\n    ln -sfn ${_ROOT}/.drush/usr/registry_rebuild \\\n      ${_USER_DS}/usr/registry_rebuild\n  fi\n\n  ###---### Add symlink to the system clean_missing_modules.\n  #\n  if [ ! -L \"${_USER_DS}/usr/clean_missing_modules\" ] \\\n    && [ -e \"${_ROOT}/.drush/usr/clean_missing_modules\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the system clean_missing_modules\"\n    fi\n    mkdir -p ${_USER_DS}/usr\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}/usr\n    chmod 700 ${_USER_DS}\n    ln -sfn ${_ROOT}/.drush/usr/clean_missing_modules \\\n      ${_USER_DS}/usr/clean_missing_modules\n  fi\n\n  ###---### Add symlink to the system drupalgeddon.\n  #\n  if [ ! -L \"${_USER_DS}/usr/drupalgeddon\" ] \\\n    && [ -e \"${_ROOT}/.drush/usr/drupalgeddon\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the system drupalgeddon\"\n    fi\n    mkdir -p ${_USER_DS}/usr\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}/usr\n    chmod 700 ${_USER_DS}\n    ln -sfn ${_ROOT}/.drush/usr/drupalgeddon \\\n      ${_USER_DS}/usr/drupalgeddon\n  fi\n\n  ###---### Add symlink to the system drush_ecl.\n  #\n  if [ ! 
-L \"${_USER_DS}/usr/drush_ecl\" ] \\\n    && [ -e \"${_ROOT}/.drush/usr/drush_ecl\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the system drush_ecl\"\n    fi\n    mkdir -p ${_USER_DS}/usr\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}/usr\n    chmod 700 ${_USER_DS}\n    ln -sfn ${_ROOT}/.drush/usr/drush_ecl \\\n      ${_USER_DS}/usr/drush_ecl\n  fi\n\n  ###---### Add symlink to the system safe_cache_form_clear.\n  #\n  if [ ! -L \"${_USER_DS}/usr/safe_cache_form_clear\" ] \\\n    && [ -e \"${_ROOT}/.drush/usr/safe_cache_form_clear\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the system safe_cache_form_clear\"\n    fi\n    mkdir -p ${_USER_DS}/usr\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}/usr\n    chmod 700 ${_USER_DS}\n    ln -sfn ${_ROOT}/.drush/usr/safe_cache_form_clear \\\n      ${_USER_DS}/usr/safe_cache_form_clear\n  fi\n\n  ###---### Add symlink to the system utf8mb4_convert.\n  #\n  if [ ! -L \"${_USER_DS}/usr/utf8mb4_convert\" ] \\\n    && [ -e \"${_ROOT}/.drush/usr/utf8mb4_convert\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the system utf8mb4_convert\"\n    fi\n    mkdir -p ${_USER_DS}/usr\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}\n    chown ${_USERFTP}:${_USRG} ${_USER_DS}/usr\n    chmod 700 ${_USER_DS}\n    ln -sfn ${_ROOT}/.drush/usr/utf8mb4_convert \\\n      ${_USER_DS}/usr/utf8mb4_convert\n  fi\n\n  ###---### Add symlink to the clients directory.\n  #\n  if [ ! 
-L \"${_USER_HD}/clients\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Adding symlink to the clients directory\"\n    fi\n    ln -sfn ${_ROOT}/clients ${_USER_HD}/clients\n  fi\n  rm -rf ${_ROOT}/clients/admin &> /dev/null\n  rm -rf ${_ROOT}/clients/omega8ccgmailcom &> /dev/null\n  rm -rf ${_ROOT}/clients/nocomega8cc &> /dev/null\n  rm -rf ${_ROOT}/clients/*/backups &> /dev/null\n  symlinks -dr ${_ROOT}/clients &> /dev/null\n}\n\n_satellite_if_add_update_user_dot_dirs() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_add_update_user_dot_dirs\"\n    _debug_proc\n  fi\n  ###---### Create .tmp dir if not exists.\n  #\n  _USER_TMP=\"${_USER_HD}/.tmp\"\n  if [ ! -d \"${_USER_TMP}\" ]; then\n    mkdir -p ${_USER_TMP}\n    touch ${_USER_TMP}\n    find ${_USER_TMP}/ -mtime +0 -exec rm -rf {} \\; &> /dev/null\n    chown -R ${_USERFTP}:${_USRG} ${_USER_TMP}\n    chmod 755 ${_USER_TMP}\n  fi\n\n  ###---### Create .ssh dir if not exists.\n  #\n  _USER_SSH=\"${_USER_HD}/.ssh\"\n  if [ ! -d \"${_USER_SSH}\" ]; then\n    mkdir -p ${_USER_SSH}\n    chown -R ${_USERFTP}:${_USRG} ${_USER_SSH}\n    chmod 700 ${_USER_SSH}\n  fi\n  chmod 600 ${_USER_SSH}/id_{r,d}sa &> /dev/null\n\n  ###---### Create .bazaar dir and conf file if not exist.\n  #\n  _USER_BZR=\"${_USER_HD}/.bazaar\"\n  if [ -x \"/usr/local/bin/bzr\" ]; then\n    if [ ! 
-e \"${_USER_BZR}/bazaar.conf\" ]; then\n      mkdir -p ${_USER_BZR}\n      echo ignore_missing_extensions=True > ${_USER_BZR}/bazaar.conf\n      chown -R ${_USERFTP}:${_USRG} ${_USER_BZR}\n      chmod 700 ${_USER_BZR}\n    fi\n  else\n    if [ -d \"${_USER_BZR}\" ]; then\n      rm -rf ${_USER_BZR}\n    fi\n  fi\n\n  ###---### Remove not used dot files.\n  #\n  rm -f ${_USER_HD}/{.profile,.bash_logout,.bash_profile,.bashrc}\n}\n\n###---### Reading or creating pass.txt.\n#\n_satellite_if_read_create_pass_txt() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_read_create_pass_txt\"\n    _debug_proc\n  fi\n  if [ \"${_HM_ONLY}\" = \"YES\" ]; then\n    _DO_NOTHING=YES\n  else\n    if [ -e \"${_ROOT}/pass.txt\" ]; then\n      _PASWD=$(cat ${_ROOT}/pass.txt 2>&1)\n      _PASWD=$(echo -n ${_PASWD} | tr -d \"\\n\" 2>&1)\n      mv -f ${_ROOT}/pass.txt ${_ROOT}/log/pass.txt &> /dev/null\n    elif [ -e \"${_ROOT}/log/pass.txt\" ]; then\n      _PASWD=$(cat ${_ROOT}/log/pass.txt 2>&1)\n      _PASWD=$(echo -n ${_PASWD} | tr -d \"\\n\" 2>&1)\n      rm -f ${_ROOT}/pass.txt\n    else\n      cd ${_ROOT}/log\n      touch ${_ROOT}/log/pass.txt\n      chmod 0600 ${_ROOT}/log/pass.txt\n      _ESC_LUPASS=\"\"\n      _LEN_LUPASS=0\n      if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n        _PWD_CHARS=64\n      elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n        _PWD_CHARS=32\n      else\n        _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n        if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n          && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n          _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n        else\n          _PWD_CHARS=32\n        fi\n        if [ ! 
-z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n          _PWD_CHARS=128\n        fi\n      fi\n      if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n        _RANDPASS_TEST=$(randpass -V 2>&1)\n        if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n          _ESC_LUPASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n        else\n          _ESC_LUPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n          _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n          _ESC_LUPASS=$(_sanitize_string \"${_ESC_LUPASS}\" 2>&1)\n        fi\n        _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n        _LEN_LUPASS=$(echo ${#_ESC_LUPASS} 2>&1)\n      fi\n      if [ -z \"${_ESC_LUPASS}\" ] || [ \"${_LEN_LUPASS}\" -lt 9 ]; then\n        _ESC_LUPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_LUPASS=$(echo -n \"${_ESC_LUPASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_LUPASS=$(_sanitize_string \"${_ESC_LUPASS}\" 2>&1)\n      fi\n      echo \"${_ESC_LUPASS}\" > ${_ROOT}/log/pass.txt\n      ph=$(mkpasswd -m sha-512 \"${_ESC_LUPASS}\" $(openssl rand -base64 16 \\\n        | tr -d '+=' | head -c 16))\n      usermod -p $ph ${_USERFTP} &> /dev/null\n      _PASWD=$(cat ${_ROOT}/log/pass.txt 2>&1)\n      _PASWD=$(echo -n ${_PASWD} | tr -d \"\\n\" 2>&1)\n    fi\n  fi\n}\n\n###---### Add/Update platforms ftp symlinks.\n#\n_satellite_if_add_update_user_platforms_symlinks() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_add_update_user_platforms_symlinks\"\n    _debug_proc\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Add/Update platforms ftp symlinks\"\n  fi\n\n  _HM_U=\"${_USER}\"\n  _LOW_NR=\"2\"\n  _usEr=\"${_ROOT}\"\n\n  for i in $(dir -d /home/${_HM_U}.ftp/platforms/* 2>/dev/null); do\n    if [ -e \"${i}\" ]; then\n      _RevisionTest=$(ls ${i} \\\n        | wc -l \\\n        | tr -d \"\\n\" 2>&1)\n 
     if [ \"${_RevisionTest}\" -lt \"${_LOW_NR}\" ] \\\n        && [ ! -z \"${_RevisionTest}\" ]; then\n        if [ -d \"/home/${_HM_U}.ftp/platforms\" ]; then\n          chattr -i /home/${_HM_U}.ftp/platforms &> /dev/null\n          chattr -i /home/${_HM_U}.ftp/platforms/* &> /dev/null\n        fi\n        _NOW=$(date +%y%m%d-%H%M%S)\n        [ -d \"/var/backups/ghost/${_HM_U}/${_NOW}\" ] || mkdir -p /var/backups/ghost/${_HM_U}/${_NOW}\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"Moving ${i} to /var/backups/ghost/${_HM_U}/${_NOW}\"\n        mv -f ${i} /var/backups/ghost/${_HM_U}/${_NOW}/\n      fi\n    fi\n  done\n\n  for i in $(dir -d ${_usEr}/distro/* 2>/dev/null); do\n    if [ -d \"${i}\" ]; then\n      if [ ! -d \"${i}/keys\" ]; then\n        mkdir -p ${i}/keys\n      fi\n      _RevisionTest=$(ls ${i} | wc -l 2>&1)\n      if [ \"${_RevisionTest}\" -lt 2 ] && [ ! -z \"${_RevisionTest}\" ]; then\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"_RevisionTest is ${_RevisionTest}\"\n        _NOW=$(date +%y%m%d-%H%M%S)\n        mkdir -p ${_usEr}/undo/dist/${_NOW}\n        mv -f ${i} ${_usEr}/undo/dist/${_NOW}/ &> /dev/null\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"GHOST revision ${i} detected and moved to ${_usEr}/undo/dist/${_NOW}/\"\n      fi\n    fi\n  done\n\n  for i in $(dir -d ${_usEr}/distro/* 2>/dev/null); do\n    if [ -e \"${i}\" ]; then\n      _distTrNr=$(echo ${i} \\\n        | cut -d'/' -f6 \\\n        | awk '{ print $1}' 2> /dev/null)\n      if [ -d \"/home/${_HM_U}.ftp/platforms\" ]; then\n        chattr -i /home/${_HM_U}.ftp/platforms &> /dev/null\n        chattr -i /home/${_HM_U}.ftp/platforms/* &> /dev/null\n      fi\n      if [ ! -e \"${i}/keys\" ]; then\n        mkdir -p ${i}/keys\n        chown ${_HM_U}.ftp:${_WEBG} ${i}/keys &> /dev/null\n        chmod 02775 ${i}/keys &> /dev/null\n      fi\n      if [ ! 
-e \"/home/${_HM_U}.ftp/platforms/${_distTrNr}\" ]; then\n        mkdir -p /home/${_HM_U}.ftp/platforms/${_distTrNr}\n      fi\n      if [ -e \"${i}/keys\" ] && [ ! -e \"/home/${_HM_U}.ftp/platforms/${_distTrNr}/keys\" ]; then\n        ln -sfn ${i}/keys /home/${_HM_U}.ftp/platforms/${_distTrNr}/keys\n      fi\n      if [ -e \"/home/${_HM_U}.ftp/platforms/data\" ]; then\n        _NOW=$(date +%y%m%d-%H%M%S)\n        [ -d \"/var/backups/ghost/${_HM_U}/${_NOW}\" ] || mkdir -p /var/backups/ghost/${_HM_U}/${_NOW}\n        mv -f /home/${_HM_U}.ftp/platforms/data /var/backups/ghost/${_HM_U}/${_NOW}/platforms_data\n      fi\n      for _Codebase in `find ${i}/* \\\n        -maxdepth 1 \\\n        -mindepth 1 \\\n        -type d \\\n        | grep \"/sites$\" 2>&1`; do\n        _CodebaseName=$(echo ${_Codebase} \\\n          | cut -d'/' -f7 \\\n          | awk '{ print $1}' 2> /dev/null)\n        ln -sfn ${_Codebase} /home/${_HM_U}.ftp/platforms/${_distTrNr}/${_CodebaseName}\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"Fixed ${_CodebaseName} in ${_distTrNr} symlink to ${_Codebase} for ${_HM_U}.ftp\"\n      done\n    fi\n  done\n}\n\n_satellite_if_add_update_backend_user_dirs_files_clean() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_add_update_backend_user_dirs_files_clean\"\n    _debug_proc\n  fi\n  ###---### Create ~/static dir if not exists.\n  #\n  if [ ! -d \"${_ROOT}/static\" ]; then\n    mkdir -p ${_ROOT}/static\n    ln -sfn ${_ROOT}/static /home/${_USERFTP}/static\n  fi\n  chown ${_USER}:${_USRG} ${_ROOT}/static &> /dev/null\n  chmod 02775 ${_ROOT}/static &> /dev/null\n  echo empty > ${_ROOT}/static/EMPTY.txt\n\n  ###---### Create ~/.tmp dir if not exists.\n  #\n  if [ ! 
-d \"${_ROOT}/.tmp\" ]; then\n    rm -rf ${_ROOT}/.tmp\n    mkdir -p ${_ROOT}/.tmp\n    chown ${_USER}:users ${_ROOT}/.tmp\n    chmod 02775 ${_ROOT}/.tmp\n  fi\n\n  ###---### Create .ssh dir and keys if not exist, plus some known_hosts for system user.\n  #\n  if [ ! -e \"${_ROOT}/.ssh/id_ed25519.pub\" ]; then\n    su -s /bin/bash - ${_USER} -c \"ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H ${_USER}.beanstalkapp.com >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H ${_USER}.unfuddle.com >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H beanstalkapp.com >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H bitbucket.org >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H codebasehq.com >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H drupal.org >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H git.drupal.org >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H github.com >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H gitlab.org >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H gitlab.com >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    su -s /bin/bash - ${_USER} -c \"ssh-keyscan -t rsa -H unfuddle.com >> ~/.ssh/known_hosts\" &> /dev/null\n    wait\n    cp -af ${_ROOT}/.ssh/id_ed25519.pub ${_ROOT}/static/${_USER}.id_ed25519.pub\n    chmod 644 ${_ROOT}/static/${_USER}.id_ed25519.pub\n  fi\n\n  ###---### Create .bazaar dir and conf file if not exist for system user.\n  #\n  
_SYSTEM_USER_BZR=\"${_ROOT}/.bazaar\"\n  if [ -x \"/usr/local/bin/bzr\" ]; then\n    if [ ! -e \"${_SYSTEM_USER_BZR}/bazaar.conf\" ]; then\n      mkdir -p ${_SYSTEM_USER_BZR}\n      echo ignore_missing_extensions=True > ${_SYSTEM_USER_BZR}/bazaar.conf\n      chown -R ${_USER}:${_USRG} ${_SYSTEM_USER_BZR}\n      chmod 700 ${_SYSTEM_USER_BZR}\n    fi\n  else\n    if [ -d \"${_SYSTEM_USER_BZR}\" ]; then\n      rm -rf ${_SYSTEM_USER_BZR}\n    fi\n  fi\n\n  ###---### Create other dirs and symlinks if not exist.\n  #\n  if [ \"${_HM_ONLY}\" = \"YES\" ]; then\n    _DO_NOTHING=YES\n  else\n    if [ ! -d \"${_QR}/keys\" ]; then\n      mkdir -p ${_QR}/keys\n      chown ${_USERFTP}:${_WEBG} ${_QR}/keys &> /dev/null\n      chmod 02775 ${_QR}/keys\n    fi\n    if [ -d \"${_QR}/keys\" ]; then\n      if [ ! -L \"${_QH}/keys\" ]; then\n        ln -sfn ${_QR}/keys ${_QH}/keys\n      fi\n    fi\n    rm -f ${_QR}/*/robots.txt &> /dev/null\n    if [ ! -e \"${_CORE}/javascript_aggregator.out.txt\" ] \\\n      && [ -e \"${_CORE}\" ]; then\n      sed -i \"s/, 'javascript_aggregator'//g\" \\\n        ${_CORE}/*/profiles/*/*.profile &> /dev/null\n      wait\n      touch ${_CORE}/javascript_aggregator.out.txt\n    fi\n  fi\n\n  ###---### Remove not used cache module.\n  #\n  if [ ! 
-e \"${_D}/000/old_cache.out2.txt\" ] && [ -e \"${_D}/000\" ]; then\n    sed -i \"s/, 'cache'//g\" ${_D}/*/*/profiles/*/*.profile &> /dev/null\n    wait\n    sed -i \"s/'cache', //g\" ${_D}/*/*/profiles/*/*.profile &> /dev/null\n    wait\n    rm -rf ${_D}/*/o_contrib/cache\n    touch ${_D}/000/old_cache.out2.txt\n  fi\n}\n\n###---### Preparing setupmail.txt.\n#\n_satellite_prepare_setup_email_tpl() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_prepare_setup_email_tpl\"\n    _debug_proc\n  fi\n  if [ \"${_HM_ONLY}\" = \"YES\" ]; then\n    _DO_NOTHING=YES\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Preparing setupmail.txt\"\n    fi\n    [ -n \"${_MY_OCTO_EMAIL}\" ] && _MY_OCTO_EMAIL=${_MY_OCTO_EMAIL//\\\\\\@/\\@}\n    [ -n \"${_CLIENT_EMAIL}\" ] && _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n    if [ -e \"${_ROOT}/log/setupmail.txt\" ] \\\n      || [ -e \"${_ROOT}/log/legacy_setupmail.txt\" ] \\\n      || [ -e \"${_ROOT}/log/latest_setupmail.txt\" ]; then\n      if [ \"${_DIST_INSTALL}\" = \"YES\" ]; then\n        cd ${_ROOT}/log\n        if [ -e \"${_ROOT}/log/upgrademail.txt\" ]; then\n          mv -f ${_ROOT}/log/upgrademail.txt \\\n            ${_ROOT}/log/upgrademail-pre-${_THIS_CORE}.txt\n        fi\n        cp -af /opt/tmp/boa/aegir/conf/tpl/upgrademail.txt ./\n        sed -i \"s/aegir.url.name/${_DOMAIN}/g\" ${_ROOT}/log/upgrademail.txt\n        wait\n        sed -i \"s/dragon/${_USER}/g\" ${_ROOT}/log/upgrademail.txt\n        wait\n        sed -i \"s/FN8rXcQn/${_PASWD}/g\" ${_ROOT}/log/upgrademail.txt\n        wait\n        sed -i \"s/166.84.6.231/${_THISHTIP}/g\" ${_ROOT}/log/upgrademail.txt\n        wait\n        sed -i \"s/boa.version/${_X_VERSION}/g\" ${_ROOT}/log/upgrademail.txt\n        wait\n      else\n        _SEND_UPGRADE_EMAIL=NO\n      fi\n    elif [ \"${_STATUS}\" = \"INIT\" ]; then\n     cd ${_ROOT}/log\n     cp -af /opt/tmp/boa/aegir/conf/tpl/setupmail.txt ./\n     sed -i 
\"s/aegir.url.name/${_DOMAIN}/g\" ${_ROOT}/log/setupmail.txt\n     wait\n     sed -i \"s/dragon/${_USER}/g\" ${_ROOT}/log/setupmail.txt\n     wait\n     sed -i \"s/FN8rXcQn/${_PASWD}/g\" ${_ROOT}/log/setupmail.txt\n     wait\n     sed -i \"s/166.84.6.231/${_THISHTIP}/g\" ${_ROOT}/log/setupmail.txt\n     wait\n     sed -i \"s/boa.version/${_X_VERSION}/g\" ${_ROOT}/log/setupmail.txt\n     wait\n    fi\n  fi\n}\n\n###---### Sending setup email.\n#\n_satellite_send_welcome_email() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_send_welcome_email\"\n    _debug_proc\n  fi\n  _MAILX_TEST=$(s-nail -V 2>&1)\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Sending setup email on init...\"\n    fi\n    _TIME=$(date)\n    _Q=\"Your Ægir Install on ${_TIME} [${_USER}]\"\n    echo ${_TIME} > ${_ROOT}/log/date-init.txt\n    if [ -e \"${_ROOT}/log/setupmail.txt\" ] \\\n      || [ -e \"${_ROOT}/log/legacy_setupmail.txt\" ] \\\n      || [ -e \"${_ROOT}/log/latest_setupmail.txt\" ]; then\n      if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n        cat ${_ROOT}/log/setupmail.txt \\\n          | sed \"s/[\\~]//g\" \\\n          | s-nail -b ${_MY_OCTO_EMAIL} -s \"${_Q}\" ${_CLIENT_EMAIL}\n      fi\n    fi\n  else\n    if [ \"${_DIST_INSTALL}\" = \"YES\" ] && [ \"${_PLATFORMS_ONLY}\" = \"NO\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"${_STATUS} A: Resending setup email on upgrade...\"\n      fi\n      _TIME=$(date)\n      _Q=\"[${_X_VERSION}] Ægir Upgrade [${_USER}]\"\n      echo ${_TIME} > ${_ROOT}/log/date-upgrade-${_THIS_CORE}.txt\n      if [ -e \"${_ROOT}/log/upgrademail.txt\" ]; then\n        if [[ \"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n          cat ${_ROOT}/log/upgrademail.txt \\\n            | sed \"s/[\\~]//g\" \\\n            | s-nail -b ${_MY_OCTO_EMAIL} -s \"${_Q}\" ${_CLIENT_EMAIL}\n        fi\n      else\n        if [[ 
\"${_MAILX_TEST}\" =~ \"built for Linux\" ]]; then\n          cat ${_ROOT}/log/setupmail.txt \\\n            | sed \"s/[\\~]//g\" \\\n            | s-nail -b ${_MY_OCTO_EMAIL} -s \"${_Q}\" ${_CLIENT_EMAIL}\n        fi\n      fi\n    else\n      _SEND_UPGRADE_EMAIL=NO\n    fi\n  fi\n}\n\n#\n# Satellite start.\n_satellite_make() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_make\"\n    _debug_proc\n  fi\n  export _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  export _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ -d \"/data/all/000\" ]; then\n    if [ ! -e \"/data/all/000/core-v-${_SMALLCORE6_V}.txt\" ] \\\n      || [ ! -e \"/data/all/000/core-v-${_SMALLCORE7_V}.txt\" ]; then\n      export _USE_CURRENT=NO\n      export _HOT_SAUCE=YES\n      export _HM_ONLY=NO\n    fi\n  fi\n\n  _tocIncO=\"${_filIncO}.${_USER}\"\n\n  if [ -e \"${_vBs}/${_tocIncO}\" ]; then\n    _writeTo=\"${_vBs}/${_tocIncO}\"\n  elif [ -e \"${_vBs}/${_filIncO}\" ]; then\n    _writeTo=\"${_vBs}/${_filIncO}\"\n  fi\n\n  export _xSrl=591devT01\n  export _X_VERSION=\"${_X_VERSION}\"\n\n  echo \"export _AEGIR_VERSION=\\\"${_AEGIR_VERSION}\\\"\"         >> ${_writeTo}\n  echo \"export _AEGIR_XTS_VRN=\\\"${_tRee}\\\"\"                  >> ${_writeTo}\n  echo \"export _ALL_DISTRO=\\\"${_ALL_DISTRO}\\\"\"               >> ${_writeTo}\n  echo \"export _AUTOPILOT=\\\"${_AUTOPILOT}\\\"\"                 >> ${_writeTo}\n  echo \"export _BOA_REPO_GIT_URL=\\\"${_BOA_REPO_GIT_URL}\\\"\"   >> ${_writeTo}\n  echo \"export _BOA_REPO_NAME=\\\"${_BOA_REPO_NAME}\\\"\"         >> ${_writeTo}\n  echo \"export _BRANCH_BOA=\\\"${_BRANCH_BOA}\\\"\"               >> ${_writeTo}\n  echo \"export _BRANCH_PRN=\\\"${_BRANCH_PRN}\\\"\"               >> ${_writeTo}\n  echo \"export _CLIENT_CORES=\\\"${_CLIENT_CORES}\\\"\"           >> ${_writeTo}\n  echo \"export _CLIENT_EMAIL=\\\"${_CLIENT_EMAIL}\\\"\"           >> ${_writeTo}\n  echo 
\"export _CLIENT_OPTION=\\\"${_CLIENT_OPTION}\\\"\"         >> ${_writeTo}\n  echo \"export _DEBUG_MODE=\\\"${_DEBUG_MODE}\\\"\"               >> ${_writeTo}\n  echo \"export _DL_MODE=\\\"${_DL_MODE}\\\"\"                     >> ${_writeTo}\n  echo \"export _DISTRO=\\\"${_DISTRO}\\\"\"                       >> ${_writeTo}\n  echo \"export _DOMAIN=\\\"${_DOMAIN}\\\"\"                       >> ${_writeTo}\n\n  echo \"export _DRUPAL6=\\\"${_DRUPAL6}\\\"\"                     >> ${_writeTo}\n  echo \"export _DRUPAL6_D=\\\"${_DRUPAL6}-dev\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL6_P=\\\"${_DRUPAL6}-prod\\\"\"              >> ${_writeTo}\n  echo \"export _DRUPAL6_S=\\\"${_DRUPAL6}-stage\\\"\"             >> ${_writeTo}\n\n  echo \"export _DRUPAL7=\\\"${_DRUPAL7}\\\"\"                     >> ${_writeTo}\n  echo \"export _DRUPAL7_D=\\\"${_DRUPAL7}-dev\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL7_P=\\\"${_DRUPAL7}-prod\\\"\"              >> ${_writeTo}\n  echo \"export _DRUPAL7_S=\\\"${_DRUPAL7}-stage\\\"\"             >> ${_writeTo}\n\n  echo \"export _DRUPAL9=\\\"${_DRUPAL9}\\\"\"                     >> ${_writeTo}\n  echo \"export _DRUPAL9_D=\\\"${_DRUPAL9}-dev\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL9_P=\\\"${_DRUPAL9}-prod\\\"\"              >> ${_writeTo}\n  echo \"export _DRUPAL9_S=\\\"${_DRUPAL9}-stage\\\"\"             >> ${_writeTo}\n\n  echo \"export _DRUPAL10_0=\\\"${_DRUPAL10_0}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL10_0_D=\\\"${_DRUPAL10_0}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL10_0_P=\\\"${_DRUPAL10_0}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL10_0_S=\\\"${_DRUPAL10_0}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL10_1=\\\"${_DRUPAL10_1}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL10_1_D=\\\"${_DRUPAL10_1}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL10_1_P=\\\"${_DRUPAL10_1}-prod\\\"\"        >> ${_writeTo}\n  
echo \"export _DRUPAL10_1_S=\\\"${_DRUPAL10_1}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL10_2=\\\"${_DRUPAL10_2}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL10_2_D=\\\"${_DRUPAL10_2}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL10_2_P=\\\"${_DRUPAL10_2}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL10_2_S=\\\"${_DRUPAL10_2}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL10_3=\\\"${_DRUPAL10_3}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL10_3_D=\\\"${_DRUPAL10_3}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL10_3_P=\\\"${_DRUPAL10_3}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL10_3_S=\\\"${_DRUPAL10_3}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL10_4=\\\"${_DRUPAL10_4}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL10_4_D=\\\"${_DRUPAL10_4}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL10_4_P=\\\"${_DRUPAL10_4}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL10_4_S=\\\"${_DRUPAL10_4}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL10_5=\\\"${_DRUPAL10_5}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL10_5_D=\\\"${_DRUPAL10_5}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL10_5_P=\\\"${_DRUPAL10_5}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL10_5_S=\\\"${_DRUPAL10_5}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL10_6=\\\"${_DRUPAL10_6}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL10_6_D=\\\"${_DRUPAL10_6}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL10_6_P=\\\"${_DRUPAL10_6}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL10_6_S=\\\"${_DRUPAL10_6}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL11_1=\\\"${_DRUPAL11_1}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL11_1_D=\\\"${_DRUPAL11_1}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL11_1_P=\\\"${_DRUPAL11_1}-prod\\\"\"        >> 
${_writeTo}\n  echo \"export _DRUPAL11_1_S=\\\"${_DRUPAL11_1}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL11_2=\\\"${_DRUPAL11_2}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL11_2_D=\\\"${_DRUPAL11_2}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL11_2_P=\\\"${_DRUPAL11_2}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL11_2_S=\\\"${_DRUPAL11_2}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUPAL11_3=\\\"${_DRUPAL11_3}\\\"\"               >> ${_writeTo}\n  echo \"export _DRUPAL11_3_D=\\\"${_DRUPAL11_3}-dev\\\"\"         >> ${_writeTo}\n  echo \"export _DRUPAL11_3_P=\\\"${_DRUPAL11_3}-prod\\\"\"        >> ${_writeTo}\n  echo \"export _DRUPAL11_3_S=\\\"${_DRUPAL11_3}-stage\\\"\"       >> ${_writeTo}\n\n  echo \"export _DRUSH_VERSION=\\\"${_DRUSH_VERSION}\\\"\"         >> ${_writeTo}\n  echo \"export _F_TIME=\\\"${_F_TIME}\\\"\"                       >> ${_writeTo}\n  echo \"export _HM_DISTRO=\\\"${_HM_DISTRO}\\\"\"                 >> ${_writeTo}\n  echo \"export _HM_ONLY=\\\"${_HM_ONLY}\\\"\"                     >> ${_writeTo}\n  echo \"export _HOT_SAUCE=\\\"${_HOT_SAUCE}\\\"\"                 >> ${_writeTo}\n  echo \"export _LAST_ALL=\\\"${_LAST_ALL}\\\"\"                   >> ${_writeTo}\n  echo \"export _LAST_HMR=\\\"${_LAST_HMR}\\\"\"                   >> ${_writeTo}\n  echo \"export _LASTNUM=\\\"${_LASTNUM}\\\"\"                     >> ${_writeTo}\n  echo \"export _MY_OCTO_EMAIL=\\\"${_MY_OCTO_EMAIL}\\\"\"         >> ${_writeTo}\n  echo \"export _MY_OWNIP=\\\"${_MY_OWNIP}\\\"\"                   >> ${_writeTo}\n  echo \"export _NOW=\\\"${_NOW}\\\"\"                             >> ${_writeTo}\n  echo \"export _OS_DIST=\\\"${_OS_DIST}\\\"\"                     >> ${_writeTo}\n  echo \"export _OS_CODE=\\\"${_OS_CODE}\\\"\"                     >> ${_writeTo}\n  echo \"export _PHP_CLI_VERSION=\\\"${_PHP_CLI_VERSION}\\\"\"     >> ${_writeTo}\n  echo \"export _PHP_FPM_VERSION=\\\"${_PHP_FPM_VERSION}\\\"\"     >> 
${_writeTo}\n  echo \"export _PLATFORMS_LIST=\\\"${_PLATFORMS_LIST}\\\"\"       >> ${_writeTo}\n  echo \"export _PLATFORMS_ONLY=\\\"${_PLATFORMS_ONLY}\\\"\"       >> ${_writeTo}\n  echo \"export _PURGE_MODE=\\\"${_PURGE_MODE}\\\"\"               >> ${_writeTo}\n  echo \"export _REDIS_C_VERSION=\\\"${_REDIS_C_VERSION}\\\"\"     >> ${_writeTo}\n  echo \"export _REDIS_L_VERSION=\\\"${_REDIS_L_VERSION}\\\"\"     >> ${_writeTo}\n  echo \"export _REDIS_N_VERSION=\\\"${_REDIS_N_VERSION}\\\"\"     >> ${_writeTo}\n  echo \"export _REDIS_T_VERSION=\\\"${_REDIS_T_VERSION}\\\"\"     >> ${_writeTo}\n  echo \"export _REDIS_E_VERSION=\\\"${_REDIS_E_VERSION}\\\"\"     >> ${_writeTo}\n  echo \"export _SERIES_RESULT=\\\"${_SERIES_RESULT}\\\"\"         >> ${_writeTo}\n\n  echo \"export _SMALLCORE10_0_V=\\\"${_SMALLCORE10_0_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE10_1_V=\\\"${_SMALLCORE10_1_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE10_2_V=\\\"${_SMALLCORE10_2_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE10_3_V=\\\"${_SMALLCORE10_3_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE10_4_V=\\\"${_SMALLCORE10_4_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE10_5_V=\\\"${_SMALLCORE10_5_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE10_6_V=\\\"${_SMALLCORE10_6_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE11_1_V=\\\"${_SMALLCORE11_1_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE11_2_V=\\\"${_SMALLCORE11_2_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE11_3_V=\\\"${_SMALLCORE11_3_V}\\\"\"     >> ${_writeTo}\n  echo \"export _SMALLCORE6_V=\\\"${_SMALLCORE6_V}\\\"\"           >> ${_writeTo}\n  echo \"export _SMALLCORE7_V=\\\"${_SMALLCORE7_V}\\\"\"           >> ${_writeTo}\n  echo \"export _SMALLCORE9_V=\\\"${_SMALLCORE9_V}\\\"\"           >> ${_writeTo}\n\n  echo \"export _SPINNER=\\\"${_SPINNER}\\\"\"                     >> ${_writeTo}\n  echo \"export _STRONG_PASSWORDS=\\\"${_STRONG_PASSWORDS}\\\"\"   >> 
${_writeTo}\n  echo \"export _T_BUILD=\\\"${_T_BUILD}\\\"\"                     >> ${_writeTo}\n  echo \"export _THIS_DB_HOST=\\\"${_THIS_DB_HOST}\\\"\"           >> ${_writeTo}\n  echo \"export _THIS_DB_PORT=\\\"${_THIS_DB_PORT}\\\"\"           >> ${_writeTo}\n  echo \"export _OS_DIST=\\\"${_OS_DIST}\\\"\"                     >> ${_writeTo}\n  echo \"export _OS_CODE=\\\"${_OS_CODE}\\\"\"                     >> ${_writeTo}\n  echo \"export _THISHTIP=\\\"${_THISHTIP}\\\"\"                   >> ${_writeTo}\n  echo \"export _TODAY=\\\"${_TODAY}\\\"\"                         >> ${_writeTo}\n  echo \"export _USE_CURRENT=\\\"${_USE_CURRENT}\\\"\"             >> ${_writeTo}\n  echo \"export _USE_MIR=\\\"${_USE_MIR}\\\"\"                     >> ${_writeTo}\n  echo \"export _USER=\\\"${_USER}\\\"\"                           >> ${_writeTo}\n  echo \"export _USRG=\\\"${_USRG}\\\"\"                           >> ${_writeTo}\n  echo \"export _WEBG=\\\"${_WEBG}\\\"\"                           >> ${_writeTo}\n  echo \"export _xSrl=\\\"${_xSrl}\\\"\"                           >> ${_writeTo}\n  echo \"export _X_VERSION=\\\"${_X_VERSION}\\\"\"                 >> ${_writeTo}\n\n  _THISHOST=\"${_hName}\"\n  if [ -e \"/usr/bin/sipcalc\" ]; then\n    if [ -z \"${_THISHTIP}\" ]; then\n      _LOC_DOM=\"${_THISHOST}\"\n      _find_correct_ip\n      _THISHTIP=\"${_LOC_IP}\"\n    fi\n    _IP_TEST=$(sipcalc ${_THISHTIP} 2>&1)\n    if [[ \"${_IP_TEST}\" =~ \"ERR\" ]]; then\n      _IP_TEST_RESULT=FAIL\n      _LOCAL_THISHTIP=all\n    else\n      _IP_TEST_RESULT=OK\n      _LOCAL_THISHTIP=\"${_THISHTIP}\"\n    fi\n  else\n    _LOCAL_THISHTIP=\"${_THISHTIP}\"\n  fi\n  if [ -z \"${_LOCAL_THISHTIP}\" ]; then\n    _LOC_DOM=\"${_THISHOST}\"\n    _find_correct_ip\n    _LOCAL_THISHTIP=\"${_LOC_IP}\"\n  fi\n  if [ -z \"${_LOCAL_THISHTIP}\" ]; then\n    _LOCAL_THISHTIP=all\n  fi\n\n  cp -af ${_bldPth}/aegir/scripts/run-xdrago /var/xdrago/run-${_USER}\n  sed -i \"s/EDIT_USER/${_USER}/g\" 
/var/xdrago/run-${_USER}\n  wait\n  chmod 700 /var/xdrago/run-${_USER} &> /dev/null\n  chmod 700 ${_bldPth}/aegir/scripts/* &> /dev/null\n\n  ### Make sure that required web services are up and running\n  _mrun \"webserver up\"\n\n  ###\n  _AegirSetupA=\"${_bldPth}/aegir/scripts/AegirSetupA.sh.txt\"\n  bash ${_AegirSetupA} ${_USER}\n  wait\n  ###\n\n  if [ -e \"/opt/tmp/status-AegirSetupA-FAIL\" ]; then\n    _msg \"FATAL ERROR: AegirSetupA installer failed\"\n    _msg \"FATAL ERROR: Aborting Octopus installer NOW!\"\n    touch /opt/tmp/status-Octopus-FAIL\n    _clean_pid_exit _satellite_make_a\n  fi\n\n  if [ ! -e \"${_ROOT}/log/email.txt\" ]; then\n    echo ${_CLIENT_EMAIL} > ${_ROOT}/log/email.txt\n  fi\n  if [ ! -e \"${_ROOT}/log/option.txt\" ]; then\n    echo ${_CLIENT_OPTION} > ${_ROOT}/log/option.txt\n  fi\n  if [ ! -e \"${_ROOT}/log/cores.txt\" ]; then\n    echo ${_CLIENT_CORES} > ${_ROOT}/log/cores.txt\n  fi\n  if [ ! -e \"${_ROOT}/log/subscr.txt\" ]; then\n    echo ${_CLIENT_SUBSCR} > ${_ROOT}/log/subscr.txt\n  fi\n  if [ ! -e \"${_ROOT}/log/fpm.txt\" ]; then\n    echo ${_PHP_FPM_VERSION} > ${_ROOT}/log/fpm.txt\n  fi\n  if [ ! -e \"${_ROOT}/log/cli.txt\" ]; then\n    echo ${_PHP_CLI_VERSION} > ${_ROOT}/log/cli.txt\n  fi\n  if [ ! -e \"${_ROOT}/static/control\" ]; then\n    mkdir -p ${_ROOT}/static/control\n    chmod 755 ${_ROOT}/static/control\n  fi\n  if [ ! -e \"${_ROOT}/static/control/fpm.info\" ]; then\n    echo ${_PHP_FPM_VERSION} > ${_ROOT}/static/control/fpm.info\n  fi\n  if [ ! -e \"${_ROOT}/static/control/cli.info\" ]; then\n    if [ -e \"${_ROOT}/static/control/fpm.info\" ]; then\n      cp -af ${_ROOT}/static/control/fpm.info ${_ROOT}/static/control/cli.info\n    else\n      echo ${_PHP_CLI_VERSION} > ${_ROOT}/static/control/cli.info\n    fi\n  fi\n\n  if [ ! 
-e \"${_ROOT}/static/control/.ctrl.${_tRee}.${_xSrl}.pid\" ] \\\n    && [ -e \"/home/${_USER}.ftp/clients\" ]; then\n    if [ -e \"/var/xdrago/conf/control-readme.txt\" ]; then\n      cp -af /var/xdrago/conf/control-readme.txt \\\n        ${_ROOT}/static/control/README.txt &> /dev/null\n      chmod 0644 ${_ROOT}/static/control/README.txt\n    fi\n    rm -f ${_ROOT}/static/control/.ctrl.*\n    echo OK > ${_ROOT}/static/control/.ctrl.${_tRee}.${_xSrl}.pid\n  fi\n  chown -R ${_USER}.ftp:${_USRG} ${_ROOT}/static/control\n\n  if [ -d \"/data/all/000\" ]; then\n    if [ ! -e \"/data/all/000/core-v-${_SMALLCORE6_V}.txt\" ] \\\n      || [ ! -e \"/data/all/000/core-v-${_SMALLCORE7_V}.txt\" ]; then\n      echo \"${_SMALLCORE6_V}\" > /data/all/000/core-v-${_SMALLCORE6_V}.txt\n      echo \"${_SMALLCORE7_V}\" > /data/all/000/core-v-${_SMALLCORE7_V}.txt\n    fi\n  fi\n\n  if [ -e \"${_ROOT}/log/email.txt\" ]; then\n    _PHP_FPM_MULTI=NO\n    if [ -f \"${_ROOT}/static/control/multi-fpm.info\" ] \\\n      && [ -d \"${_ROOT}/tools/le\" ]; then\n      _PHP_FPM_MULTI=YES\n    fi\n    if [ \"${_PHP_FPM_MULTI}\" = \"NO\" ]; then\n      _satellite_tune_fpm_config\n    fi\n    _satellite_force_advanced_nginx_config\n    _mrun \"service nginx reload\"\n  fi\n}\n\n_satellite_if_head_github_connection_test() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_head_github_connection_test\"\n    _debug_proc\n  fi\n  ###--------------------###\n  if [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    rm -rf /opt/tmp/test-*\n    _check_connection\n    _GITHUB_TEST=$(git clone ${_gitHub}/provision.git /opt/tmp/test-provision 2>&1)\n    if [[ \"${_GITHUB_TEST}\" =~ \"fatal\" ]]; then\n      echo \" \"\n      _msg \"EXIT on error (provision) due to GitHub downtime\"\n      _msg \"Please try to run this script again in a few minutes\"\n      _msg \"You may want to check https://www.githubstatus.com\"\n      _msg \"Bye\"\n      rm -rf /opt/tmp/test-*\n      _clean_pid_exit 
_satellite_if_head_github_connection_test_a\n    fi\n  fi\n}\n\n_satellite_if_sql_exception_test() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_sql_exception_test\"\n    _debug_proc\n  fi\n  ###--------------------###\n  if [ ! -e \"/run/mysqld/mysqld.pid\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; then\n    _msg \"ALRT! ${_DB_SERVER} server not running properly!\"\n    _msg \"EXIT: We can't proceed and will exit now\"\n    _msg \"HINT: (re)start ${_DB_SERVER} server, then run the installer again\"\n    _msg \"Bye\"\n    _clean_pid_exit _satellite_if_sql_exception_test_a\n  fi\n}\n\n_check_system_manufacturer() {\n  # Extract manufacturer string\n  _SYS_MANUFACTURER_RAW=$(dmidecode -s system-manufacturer 2>/dev/null)\n\n  # Trim leading/trailing whitespace, convert spaces to underscores\n  _SYS_MANUFACTURER_CLEAN=$(printf \"%s\" \"${_SYS_MANUFACTURER_RAW}\" \\\n    | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' -e 's/[[:space:]]\\+/_/g')\n\n  # Normalise: keep only A-Za-z0-9_\n  _SYS_MANUFACTURER=$(printf \"%s\" \"${_SYS_MANUFACTURER_CLEAN}\" | tr -cd 'A-Za-z0-9_')\n\n  # Enforce max length with safe underscore-aware truncation\n  _MAX_LEN=32\n  if [ \"${#_SYS_MANUFACTURER}\" -gt \"${_MAX_LEN}\" ]; then\n    _CUT=$(printf \"%s\" \"${_SYS_MANUFACTURER}\" | cut -c1-\"${_MAX_LEN}\")\n    _SAFE_CUT=$(printf \"%s\" \"${_CUT}\" | sed 's/_[^_]*$//')\n    if [ -n \"${_SAFE_CUT}\" ]; then\n      _SYS_MANUFACTURER=\"${_SAFE_CUT}\"\n    else\n      _SYS_MANUFACTURER=\"${_CUT}\"\n    fi\n  fi\n\n  # Secondary variable: lowercase\n  _SYS_MANUFACTURER_LC=$(printf \"%s\" \"${_SYS_MANUFACTURER}\" | tr 'A-Z' 'a-z')\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"_SYS_MANUFACTURER_RAW: ${_SYS_MANUFACTURER_RAW}\"\n    _msg \"_SYS_MANUFACTURER_CLEAN: ${_SYS_MANUFACTURER_CLEAN}\"\n    _msg \"_SYS_MANUFACTURER: ${_SYS_MANUFACTURER}\"\n    _msg \"_SYS_MANUFACTURER_LC: ${_SYS_MANUFACTURER_LC}\"\n  
fi\n}\n\n_satellite_if_running_as_root_octopus() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_running_as_root_octopus\"\n    _debug_proc\n  fi\n  if [ \"$(id -u)\" -eq 0 ]; then\n    chmod a+w /dev/null\n    _check_system_manufacturer\n    if [ -n \"${_SYS_MANUFACTURER_LC}\" ]; then\n      if  [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n        || [ ! -e \"/data/u\" ] \\\n        || [ ! -e \"/var/xdrago\" ]; then\n        if [ ! -e \"/var/log/boa/${_SYS_MANUFACTURER_LC}_vm_postinstall.pid\" ]; then\n          touch /var/log/boa/${_SYS_MANUFACTURER_LC}_vm.pid\n        fi\n      fi\n    fi\n  else\n    _msg \"ERROR: This script should be run as a root user\"\n    _msg \"Bye\"\n    _clean_pid_exit\n  fi\n}\n\n_satellite_check_sanitize_user_name() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_check_sanitize_user_name\"\n    _debug_proc\n  fi\n  _USER=${_USER//[^a-zA-Z0-9-.]/}\n  _USER=$(echo -n ${_USER} | tr A-Z a-z 2>&1)\n  _WEB=\"${_USER}.web\"\n  _ROOT=\"/data/disk/${_USER}\"\n  if [ -d \"${_ROOT}\" ]; then\n    _STATUS=UPGRADE\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _satellite_check_id\n  fi\n}\n\n_satellite_if_localhost_mode_magic() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_localhost_mode_magic\"\n    _debug_proc\n  fi\n  _THIS_HOST=\"${_hName}\"\n  if [ \"${_THIS_HOST}\" = \"aegir.local\" ] && [ ! -d \"${_ROOT}\" ]; then\n    _DEBUG_MODE=NO\n    _DNS_SETUP_TEST=NO\n    _DOMAIN=\"${_USER}.sub.aegir.local\"\n    _LOCAL_NETWORK_IP=\"127.0.1.1\"\n    _MY_OWNIP=\"${_LOCAL_NETWORK_IP}\"\n    _msg \"_LOCAL_NETWORK_IP is ${_LOCAL_NETWORK_IP}\"\n  fi\n}\n\n_satellite_check_sanitize_domain_name() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_check_sanitize_domain_name\"\n    _debug_proc\n  fi\n  _DOMAIN=${_DOMAIN//[^a-zA-Z0-9-.]/}\n  _DOMAIN=$(echo -n ${_DOMAIN} | tr A-Z a-z 2>&1)\n  if [ ! 
-f \"/var/aegir/config/server_master/nginx/vhost.d/${_DOMAIN}\" ]; then\n    _DO_NOTHING=YES\n  else\n    _msg \"ERROR: ${_DOMAIN} is already used on the Ægir Master Instance\"\n    _msg \"Please change the value for _DOMAIN to make it unique\"\n    _msg \"Bye\"\n    _clean_pid_exit\n  fi\n}\n\n_satellite_detect_vm_family() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_detect_vm_family\"\n    _debug_proc\n  fi\n  _VM_TEST=\"$(uname -a)\"\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _VMFAMILY=\"VS\"\n  else\n    _VMFAMILY=\"XEN\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n  fi\n  if [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n    if [ \"${_STATUS}\" = \"INIT\" ]; then\n      _THIS_DB_HOST=localhost\n    fi\n    _LOC_DOM=\"${_DOMAIN}\"\n    if [ -z \"${_MY_OWNIP}\" ]; then\n      _find_correct_ip\n      _MY_OWNIP=\"${_LOC_IP}\"\n    else\n      _LOC_IP=\"${_MY_OWNIP}\"\n    fi\n  fi\n}\n\n_satellite_check_php_compatibility() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_check_php_compatibility\"\n    _debug_proc\n  fi\n  if [ -e \"${_ROOT}/static/control/fpm.info\" ]; then\n    _T_FPM_VRN=$(cat ${_ROOT}/static/control/fpm.info 2>&1)\n    _T_FPM_VRN=${_T_FPM_VRN//[^0-9.]/}\n    _T_FPM_VRN=$(echo -n ${_T_FPM_VRN} | tr -d \"\\n\" 2>&1)\n    if [ \"${_T_FPM_VRN}\" = \"8.5\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"8.4\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"8.3\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"8.2\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"8.1\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"8.0\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"7.4\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"7.3\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"7.2\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"7.1\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"7.0\" ] \\\n      || [ \"${_T_FPM_VRN}\" = \"5.6\" ]; then\n      
_PHP_FPM_LEGACY_FREE=YES\n    else\n      _PHP_FPM_LEGACY_FREE=NO\n    fi\n  else\n    _PHP_FPM_LEGACY_FREE=YES\n  fi\n  if [ -e \"${_ROOT}/static/control/cli.info\" ]; then\n    _T_CLI_VRN=$(cat ${_ROOT}/static/control/cli.info 2>&1)\n    _T_CLI_VRN=${_T_CLI_VRN//[^0-9.]/}\n    _T_CLI_VRN=$(echo -n ${_T_CLI_VRN} | tr -d \"\\n\" 2>&1)\n    if [ \"${_T_CLI_VRN}\" = \"8.5\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"8.4\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"8.3\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"8.2\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"8.1\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"8.0\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"7.4\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"7.3\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"7.2\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"7.1\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"7.0\" ] \\\n      || [ \"${_T_CLI_VRN}\" = \"5.6\" ]; then\n      _PHP_CLI_LEGACY_FREE=YES\n    else\n      _PHP_CLI_LEGACY_FREE=NO\n    fi\n  else\n    _PHP_CLI_LEGACY_FREE=YES\n  fi\n  if [ \"${_PHP_CLI_LEGACY_FREE}\" = \"YES\" ] \\\n    && [ \"${_PHP_FPM_LEGACY_FREE}\" = \"YES\" ]; then\n    _PHP_LEGACY_FREE=YES\n  else\n    _PHP_LEGACY_FREE=NO\n  fi\n  if [ \"${_PHP_LEGACY_FREE}\" = \"NO\" ]; then\n    _msg \"ERROR: This instance ${_USER} still depends on the old PHP version\"\n    _msg \"FPM.${_T_FPM_VRN} CLI.${_T_CLI_VRN}\"\n    _msg \"It is not possible to upgrade it to ${_X_VERSION}\"\n    _msg \"Please switch FPM and CLI on ${_USER} to PHP 5.6 or newer\"\n    _msg \"Then please run the octopus upgrade again\"\n    _msg \"Bye\"\n    _clean_pid_exit\n  fi\n}\n\n_satellite_check_octopus_vs_barracuda_ver() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_check_octopus_vs_barracuda_ver\"\n    _debug_proc\n  fi\n  if [ ! -f \"/var/log/barracuda_log.txt\" ]; then\n    _msg \"ERROR: This octopus installer can be used only when the same version\"\n    _msg \"of boa or barracuda installer was used before. 
Your system must be\"\n    _msg \"upgraded with barracuda installer version ${_rLsn} first!\"\n    _msg \"Bye\"\n    _clean_pid_exit _satellite_check_octopus_vs_barracuda_ver_a\n  else\n    _VERSIONS_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n    if [[ \"${_VERSIONS_TEST}\" =~ \"Barracuda ${_rLsn}-\" ]]; then\n      _VERSIONS_TEST_RESULT=OK\n    else\n      _msg \"ERROR: This octopus installer can be used only when the same version\"\n      _msg \"of boa or barracuda installer was used before. Your system must be\"\n      _msg \"upgraded with barracuda installer version ${_rLsn} first!\"\n      _msg \"Bye\"\n      _clean_pid_exit _satellite_check_octopus_vs_barracuda_ver_b\n    fi\n  fi\n  if [ -e \"${_ROOT}/log/octopus_log.txt\" ]; then\n    if [ -e \"/var/aegir/key/barracuda_key.txt\" ] \\\n      && [ ! -e \"${_ROOT}/tools/key/octopus_key.txt\" ]; then\n      mkdir -p ${_ROOT}/tools/key\n      cat /var/aegir/key/barracuda_key.txt > ${_ROOT}/tools/key/octopus_key.txt\n    fi\n    _SERIES_TEST=$(cat ${_ROOT}/log/octopus_log.txt 2>&1)\n    if [[ \"${_SERIES_TEST}\" =~ \"BOA-5.\" ]] \\\n      || [[ \"${_SERIES_TEST}\" =~ \"BOA-4.\" ]]; then\n      _VERSIONS_TEST_RESULT=OK\n    else\n      _msg \"ERROR: This octopus installer can be used only when the instance\"\n      _msg \"has already been upgraded to a BOA-4.x or BOA-5.x release\"\n      _msg \"Please run 'octopus up-lts/pro/dev ${_USER} force' upgrade first!\"\n      _msg \"Display all supported commands with: octopus help\"\n      _msg \"Bye\"\n      _clean_pid_exit _satellite_check_octopus_vs_barracuda_ver_c\n    fi\n    if [[ \"${_SERIES_TEST}\" =~ \"BOA-5.\" ]] \\\n      || [[ \"${_SERIES_TEST}\" =~ \"BOA-4.\" ]]; then\n      _SERIES_RESULT=OK\n    fi\n  fi\n  if [ -e \"${_ROOT}/log/octopus_log.txt\" ] \\\n    && [ \"${_tRee}\" = \"lts\" ]; then\n    _SERIES_TEST=$(cat ${_ROOT}/log/octopus_log.txt 2>&1)\n    if [[ \"${_SERIES_TEST}\" =~ \"Octopus ${_rLsn}-pro\" ]]; then\n      _msg \"ERROR: Your Octopus has already been upgraded to ${_rLsn}-pro\"\n      _msg \"You cannot downgrade back to a previous/older/lts BOA version\"\n      _msg \"Please use 'octopus up-pro ${_USER} force' to upgrade\"\n      _msg \"Display all supported commands with: octopus help\"\n      _msg \"Bye\"\n      _clean_pid_exit _satellite_check_octopus_vs_barracuda_ver_d\n    fi\n  fi\n  rm -f /opt/tmp/testecho*\n}\n\n_satellite_if_init_or_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_init_or_upgrade\"\n    _debug_proc\n  fi\n  if [ -d \"${_ROOT}\" ]; then\n    _msg \"Octopus Satellite Instance Upgrade in progress...\"\n    if [ -d \"${_ROOT}/distro\" ]; then\n      if [ -e \"${_ROOT}/log/domain.txt\" ]; then\n        _DOMAIN=$(cat ${_ROOT}/log/domain.txt 2>/dev/null | tr -d '\\n')\n      fi\n      if [ -z \"${_DOMAIN}\" ]; then\n        _msg \"ALERT! _DOMAIN is e-m-p-t-y, exit now\"\n        _clean_pid_exit _satellite_if_init_or_upgrade_a\n      fi\n      if [ -z \"${_USER}\" ]; then\n        _msg \"ALERT! _USER is e-m-p-t-y, exit now\"\n        _clean_pid_exit _satellite_if_init_or_upgrade_b\n      fi\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" = \"YES\" ]; then\n        if [ -e \"${_ROOT}/log/amazing_upgrade.txt\" ] \\\n          && [ ! 
-e \"${_ROOT}/log/amazing_upgrade_complete.txt\" ]; then\n          if [ -e \"${_ROOT}/log/original_option.txt\" ]; then\n            cp -af ${_ROOT}/log/option.txt \\\n              ${_ROOT}/log/prev_option.txt\n            cp -af ${_ROOT}/log/original_option.txt \\\n              ${_ROOT}/log/option.txt\n          fi\n          if [ -e \"${_ROOT}/log/original_cores.txt\" ]; then\n            cp -af ${_ROOT}/log/cores.txt \\\n              ${_ROOT}/log/prev_cores.txt\n            cp -af ${_ROOT}/log/original_cores.txt \\\n              ${_ROOT}/log/cores.txt\n          fi\n          echo completed > ${_ROOT}/log/amazing_upgrade_complete.txt\n        fi\n      fi\n      if [ -e \"${_ROOT}/log/option.txt\" ]; then\n        _CLIENT_OPTION=$(cat ${_ROOT}/log/option.txt 2>/dev/null | tr -d '\\n')\n      fi\n      if [ -e \"${_ROOT}/log/cores.txt\" ]; then\n        _CLIENT_CORES=$(cat ${_ROOT}/log/cores.txt 2>/dev/null | tr -d '\\n')\n      fi\n      if [ -e \"${_ROOT}/log/subscr.txt\" ]; then\n        _CLIENT_SUBSCR=$(cat ${_ROOT}/log/subscr.txt 2>/dev/null | tr -d '\\n')\n      fi\n      if [ -e \"${_ROOT}/log/email.txt\" ]; then\n        _CLIENT_EMAIL=$(cat ${_ROOT}/log/email.txt 2>/dev/null | tr -d '\\n')\n        if [[ \"${_CLIENT_EMAIL}\" =~ \"@\" ]]; then\n          _DO_NOTHING=YES\n        else\n          _msg \"EXIT: You must enter your valid email address in the\"\n          _msg \"EXIT: _CLIENT_EMAIL variable written both in the\"\n          _msg \"EXIT: ${_octCnf} file and in the\"\n          _msg \"EXIT: ${_ROOT}/log/email.txt file\"\n          _msg \"EXIT: Bye (1)\"\n          _clean_pid_exit _satellite_if_init_or_upgrade_c\n        fi\n        _CLIENT_EMAIL=${_CLIENT_EMAIL//\\\\\\@/\\@}\n      fi\n      if [[ \"${_CLIENT_EMAIL}\" =~ \"omega8.cc\" ]]; then\n        _if_hosted_sys\n        if [ \"${_hostedSys}\" != \"YES\" ]; then\n          _msg \"EXIT: You must enter your valid email address in the\"\n          _msg \"EXIT: _CLIENT_EMAIL variable written 
both in the\"\n          _msg \"EXIT: ${_octCnf} file and in the\"\n          _msg \"EXIT: ${_ROOT}/log/email.txt file\"\n          _msg \"EXIT: Bye (2)\"\n          _clean_pid_exit _satellite_if_init_or_upgrade_d\n        fi\n      fi\n      #\n      # Check for _last distro nr\n      if [ -d \"${_ROOT}/distro\" ]; then\n        cd ${_ROOT}/distro\n        _list=([0-9]*)\n        _last=${_list[@]: -1}\n        _LASTNUM=$_last\n        _BASH_TEST=$(bash --version 2>&1)\n        if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n          _nextnum=00$((10#0${_last%%[^0-9]*} + 1))\n        else\n          _nextnum=00$((10#${_last%%[^0-9]*} + 1))\n        fi\n        _nextnum=${_nextnum: -3}\n        _DISTRO=${_nextnum}\n      fi\n      #\n      # Check for _last hm nr\n      if [ -d \"${_ROOT}/aegir/distro\" ]; then\n        cd ${_ROOT}/aegir/distro\n        _listx=([0-9]*)\n        _lastx=${_listx[@]: -1}\n        _LAST_HMR=$_lastx\n        _BASH_TEST=$(bash --version 2>&1)\n        if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n          _nextnumx=00$((10#0${_lastx%%[^0-9]*} + 1))\n        else\n          _nextnumx=00$((10#${_lastx%%[^0-9]*} + 1))\n        fi\n        _nextnumx=${_nextnumx: -3}\n        _HM_DISTRO=${_nextnumx}\n      fi\n      #\n      # Check for _last all nr\n      if [ -d \"/data/all\" ]; then\n        cd /data/all\n        _listl=([0-9]*)\n        _lastl=${_listl[@]: -1}\n        export _LAST_ALL=${_lastl}\n        _BASH_TEST=$(bash --version 2>&1)\n        if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n          _nextnuml=00$((10#0${_lastl%%[^0-9]*} + 1))\n        else\n          _nextnuml=00$((10#${_lastl%%[^0-9]*} + 1))\n        fi\n        _nextnuml=${_nextnuml: -3}\n        export _ALL_DISTRO=${_nextnuml}\n      fi\n    #\n    #\n    elif [ ! 
-d \"${_ROOT}/distro\" ]; then\n      if [ -e \"${_ROOT}/log/domain.txt\" ]; then\n        _DOMAIN=$(cat ${_ROOT}/log/domain.txt 2>&1)\n        _DOMAIN=$(echo -n ${_DOMAIN} | tr -d \"\\n\" 2>&1)\n      fi\n    fi\n  else\n    _msg \"New Octopus Setup on ${_hName} in progress...\"\n    #\n    # Check for last all nr\n    if [ -d \"/data/all\" ]; then\n      cd /data/all\n      _listl=([0-9]*)\n      _lastl=${_listl[@]: -1}\n      export _LAST_ALL=${_lastl}\n      _BASH_TEST=$(bash --version 2>&1)\n      if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n        _nextnuml=00$((10#0${_lastl%%[^0-9]*} + 1))\n      else\n        _nextnuml=00$((10#${_lastl%%[^0-9]*} + 1))\n      fi\n      _nextnuml=${_nextnuml: -3}\n      export _ALL_DISTRO=${_nextnuml}\n    fi\n  fi\n}\n\n_satellite_if_major_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_major_upgrade\"\n    _debug_proc\n  fi\n  if [ \"${_CLIENT_OPTION}\" = \"POWER\" ] \\\n    && [ -e \"${_ROOT}/log/octopus_log.txt\" ]; then\n    _SERIES_TEST=$(cat ${_ROOT}/log/octopus_log.txt 2>&1)\n    if [[ \"${_SERIES_TEST}\" =~ \"BOA-5.\" ]] \\\n      || [[ \"${_SERIES_TEST}\" =~ \"BOA-4.\" ]]; then\n      _IT_IS_OK=YES\n    else\n      if [ ! -e \"${_ROOT}/log/_satellite_major_upgrade_ok.txt\" ]; then\n        _msg \"ERROR: Major version upgrade requires control file:\"\n        _msg \"${_ROOT}/log/_satellite_major_upgrade_ok.txt\"\n        _msg \"Bye\"\n        _clean_pid_exit _satellite_if_major_upgrade_a\n      fi\n    fi\n  fi\n  if [ \"${_CLIENT_OPTION}\" = \"CLUSTER\" ] \\\n    && [ -e \"${_ROOT}/log/octopus_log.txt\" ]; then\n    _SERIES_TEST=$(cat ${_ROOT}/log/octopus_log.txt 2>&1)\n    if [[ \"${_SERIES_TEST}\" =~ \"BOA-5.\" ]] \\\n      || [[ \"${_SERIES_TEST}\" =~ \"BOA-4.\" ]]; then\n      _IT_IS_OK=YES\n    else\n      if [ ! 
-e \"${_ROOT}/log/_satellite_major_upgrade_ok.txt\" ]; then\n        _msg \"ERROR: Major version upgrade requires control file:\"\n        _msg \"${_ROOT}/log/_satellite_major_upgrade_ok.txt\"\n        _msg \"Bye\"\n        _clean_pid_exit _satellite_if_major_upgrade_b\n      fi\n    fi\n  fi\n}\n\n_satellite_if_check_dns() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_if_check_dns\"\n    _debug_proc\n  fi\n  if [ \"${_DNS_SETUP_TEST}\" = \"YES\" ]; then\n    if [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n      _LOC_DOM=\"${_DOMAIN}\"\n      if [ -z \"${_MY_OWNIP}\" ]; then\n        _find_correct_ip\n        _MY_OWNIP=\"${_LOC_IP}\"\n      else\n        _LOC_IP=\"${_MY_OWNIP}\"\n      fi\n    fi\n    if [ -z \"${_MY_OWNIP}\" ]; then\n      _find_correct_ip\n      _THISHTIP=\"${_LOC_IP}\"\n    else\n      _THISHTIP=\"${_MY_OWNIP}\"\n    fi\n    _LOC_DOM=\"${_DOMAIN}\"\n    _find_correct_ip\n    _THISRDIP=\"${_LOC_IP}\"\n    if [ \"${_THISRDIP}\" = \"${_THISHTIP}\" ]; then\n      _DO_NOTHING=YES\n    else\n      _msg \"ERROR: ${_DOMAIN} doesn't point to your IP: ${_THISHTIP}\"\n      _msg \"Please make sure you have a valid A record in your DNS\"\n      _msg \"It is also possible that the DNS change hasn't propagated yet\"\n      _msg \"Bye\"\n      _clean_pid_exit _satellite_if_check_dns_a\n    fi\n  else\n    if [ -z \"${_MY_OWNIP}\" ]; then\n      _LOC_DOM=\"${_DOMAIN}\"\n      _find_correct_ip\n      _THISHTIP=\"${_LOC_IP}\"\n      _THISRDIP=\"${_LOC_IP}\"\n    else\n      _THISHTIP=\"${_MY_OWNIP}\"\n      _THISRDIP=\"${_MY_OWNIP}\"\n    fi\n  fi\n}\n\n_satellite_checkpoint() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_checkpoint\"\n    _debug_proc\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    echo \" \"\n    _msg \"START -> checkpoint: \"\n  fi\n\n  export _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  export _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut 
-s -f2)\n\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _modeDetect=\"installation\"\n    _optInf=\"${_CLIENT_OPTION} / ${_CLIENT_SUBSCR} / ${_CLIENT_CORES} C\"\n    cat <<EOF\n\n    * Your email address is ${_MY_OCTO_EMAIL}\n    * Your client email address is ${_CLIENT_EMAIL}\n    * Your Ægir control panel for this instance will be available at:\n        https://${_DOMAIN}\n    * Your Ægir system user for this instance will be ${_USER}\n    * This Octopus will use PHP-CLI ${_PHP_CLI_VERSION} for all sites\n    * This Octopus will use PHP-FPM ${_PHP_FPM_VERSION} for all sites\n    * This Octopus includes platforms: ${_PLATFORMS_LIST}\n    * This Octopus's options are listed as ${_optInf}\n\nEOF\n  else\n    _modeDetect=\"upgrade\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _optInf=\"${_CLIENT_OPTION} / ${_CLIENT_SUBSCR} / ${_CLIENT_CORES} C\"\n      cat <<EOF\n\n    * Your Ægir control panel for this instance is available at:\n        https://${_DOMAIN}\n    * Your Ægir system user for this instance is ${_USER}\n    * This Octopus will use PHP-CLI ${_PHP_CLI_VERSION} for all sites\n    * This Octopus will use PHP-FPM ${_PHP_FPM_VERSION} for all sites\n    * This Octopus includes platforms: ${_PLATFORMS_LIST}\n    * This Octopus's options are listed as ${_optInf}\n\nEOF\n    else\n      echo \" \"\n      _thiSys=\"${_OS_DIST}/${_OS_CODE}\"\n      _msg \"This Octopus System is ${_thiSys}\"\n      _msg \"This Octopus PHP FPM/CLI version is ${_PHP_FPM_VERSION}/${_PHP_CLI_VERSION}\"\n      _msg \"This Octopus URL is ${_DOMAIN}\"\n      echo \" \"\n    fi\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"Pausing 8s before we continue...\"\n    export _DEBUG_MODE=\"${_DEBUG_MODE}\"\n    if [ ! 
-z \"${_USE_MIR}\" ]; then\n      export _USE_MIR=\"${_USE_MIR}\"\n    fi\n  fi\n  sleep 8\n}\n\n_satellite_pre_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_pre_cleanup\"\n    _debug_proc\n  fi\n  rm -f /tmp/cache.inc*\n  rm -f /opt/tmp/status-*\n  rm -rf /tmp/drush_make_tmp*\n  rm -rf /tmp/make_tmp*\n  rm -f /tmp/pm-updatecode*\n  if [ -d \"/home/${_USER}.ftp/\" ]; then\n    _disable_chattr ${_USER}.ftp\n  fi\n}\n\n_satellite_post_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_post_cleanup\"\n    _debug_proc\n  fi\n  if [ -d \"/home/${_USER}.ftp/\" ]; then\n    _enable_chattr ${_USER}.ftp\n  fi\n  rm -f /tmp/cache.inc*\n  rm -rf /var/opt/*\n  rm -rf /tmp/drush_make_tmp*\n  rm -rf /tmp/make_tmp*\n  rm -f /tmp/pm-updatecode*\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  rm -rf ${_ROOT}/.tmp/cache\n  [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n  [ -e \"/run/manage_ruby_users.pid\" ] && rm -f /run/manage_ruby_users.pid\n}\n\n_satellite_letsencrypt_crt_key_copy() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_letsencrypt_crt_key_copy\"\n    _debug_proc\n  fi\n  if [ -e \"${_leCrtPath}/fullchain.pem\" ]; then\n    _crtPath=\"${_leCrtPath}/fullchain.pem\"\n  elif [ -e \"${_leCrtPath}/cert.pem\" ]; then\n    _crtPath=\"${_leCrtPath}/cert.pem\"\n  fi\n  if [ -e \"${_crtPath}\" ]; then\n    if [ -L \"${_crtPath}\" ]; then\n      _crtPathR=$(readlink -n ${_crtPath} 2>&1)\n      _crtPathR=$(echo -n ${_crtPathR} | tr -d \"\\n\" 2>&1)\n      if [ -f \"${_leCrtPath}/${_crtPathR}\" ]; then\n        rm -f /etc/ssl/private/${_DOMAIN}.crt\n        cp -a ${_leCrtPath}/${_crtPathR} /etc/ssl/private/${_DOMAIN}.crt\n      fi\n    else\n      rm -f /etc/ssl/private/${_DOMAIN}.crt\n      cp -a ${_crtPath} /etc/ssl/private/${_DOMAIN}.crt\n    fi\n  fi\n  _keyPath=\"${_leCrtPath}/privkey.pem\"\n  if [ -e \"${_keyPath}\" ]; then\n    if [ -L 
\"${_keyPath}\" ]; then\n      _keyPathR=$(readlink -n ${_keyPath} 2>&1)\n      _keyPathR=$(echo -n ${_keyPathR} | tr -d \"\\n\" 2>&1)\n      if [ -f \"${_leCrtPath}/${_keyPathR}\" ]; then\n        rm -f /etc/ssl/private/${_DOMAIN}.key\n        cp -a ${_leCrtPath}/${_keyPathR} /etc/ssl/private/${_DOMAIN}.key\n      fi\n    else\n      rm -f /etc/ssl/private/${_DOMAIN}.key\n      cp -a ${_keyPath} /etc/ssl/private/${_DOMAIN}.key\n    fi\n  fi\n}\n\n_satellite_letsencrypt_vhost_sync() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_letsencrypt_vhost_sync\"\n    _debug_proc\n  fi\n  [ -e \"/root/.ssl.proxy.cnf\" ] && rm -f /root/.ssl.proxy.cnf\n  _PHP_SV=${_PHP_FPM_VERSION//[^0-9]/}\n  _PHP_CN=\"www${_PHP_SV}\"\n  sed -i \"s/127.0.0.1:9000;/unix:\\/var\\/run\\/${_PHP_CN}.fpm.socket;/g\" ${_Ssl}\n  wait\n  if [ -e \"${_crtPath}\" ]; then\n    _crtPath=${_crtPath//\\//\\\\\\/}\n    sed -i \"s/ssl_certificate .*/ssl_certificate              ${_crtPath};/g\" ${_Ssl}\n    wait\n  fi\n  if [ -e \"${_keyPath}\" ]; then\n    _keyPath=${_keyPath//\\//\\\\\\/}\n    sed -i \"s/ssl_certificate_key .*/ssl_certificate_key          ${_keyPath};/g\" ${_Ssl}\n    wait\n  fi\n  _dhpWildPath=\"/etc/ssl/private/nginx-wild-ssl.dhp\"\n  if [ -e \"/etc/ssl/private/4096.dhp\" ]; then\n    _dhpPath=\"/etc/ssl/private/4096.dhp\"\n    _DIFF_T=$(diff -w -B ${_dhpPath} ${_dhpWildPath} 2>&1)\n    if [ ! 
-z \"${_DIFF_T}\" ]; then\n      cp -af ${_dhpPath} ${_dhpWildPath}\n    fi\n  elif [ -e \"/etc/ssl/private/${_DOMAIN}.dhp\" ]; then\n    _dhpPath=\"/etc/ssl/private/${_DOMAIN}.dhp\"\n  elif [ -e \"${_dhpWildPath}\" ]; then\n    _dhpPath=\"${_dhpWildPath}\"\n  fi\n  if [ -e \"${_dhpPath}\" ]; then\n    _dhpPath=${_dhpPath//\\//\\\\\\/}\n    sed -i \"s/ssl_dhparam .*/ssl_dhparam                  ${_dhpPath};/g\" ${_Ssl}\n  else\n    sed -i \"s/.*ssl_dhparam .*//g\" ${_Ssl}\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: Reloading Nginx...\"\n    _mrun \"service nginx reload\"\n  else\n    _mrun \"service nginx reload\"\n  fi\n}\n\n_satellite_letsencrypt_vhost_setup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_letsencrypt_vhost_setup\"\n    _debug_proc\n  fi\n  _ifvHst=\"YES\"\n  _leRoot=\"${_ROOT}/tools/le\"\n  _leKeyJ=\"${_leRoot}/tools/le/private_key.json\"\n  _leKeyP=\"${_leRoot}/tools/le/private_key.pem\"\n  _leCrtPath=\"${_leRoot}/certs/${_DOMAIN}\"\n  _exeLe=\"${_leRoot}/dehydrated\"\n  _Ssl=\"/var/aegir/config/server_master/nginx/pre.d/z_${_DOMAIN}_ssl_proxy.conf\"\n  if [ -e \"${_leRoot}/.ctrl/ssl-demo-mode.pid\" ] \\\n    && [ -e \"${_leRoot}/config.sh\" ]; then\n    if [ -e \"${_Ssl}\" ]; then\n      rm -f ${_Ssl}\n      _ifvHst=\"NO\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"${_STATUS} A: Reloading Nginx...\"\n        _mrun \"service nginx reload\"\n      else\n        _mrun \"service nginx reload\"\n      fi\n    fi\n  fi\n  if [ \"${_ifvHst}\" = \"YES\" ]; then\n    if [ -e \"${_leCrtPath}/fullchain.pem\" ] \\\n      || [ -e \"${_leCrtPath}/cert.pem\" ]; then\n      if [ -e \"${_Ssl}\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"${_STATUS} A: pre.d/z_${_DOMAIN}_ssl_proxy.conf exists, OK!\"\n        fi\n        _satellite_letsencrypt_crt_key_copy\n        _HTTP3_TEST=$(grep http3 ${_Ssl} 2>&1)\n        _KTLS_TEST=$(grep KTLS ${_Ssl} 
2>&1)\n        _RPRT_TEST=$(grep reuseport ${_Ssl} 2>&1)\n        _SQLADMIN_TEST=$(grep sqladmin ${_Ssl} 2>&1)\n        _PHP_SV=${_PHP_FPM_VERSION//[^0-9]/}\n        _PHP_CN=\"www${_PHP_SV}\"\n        _FPM_SOCKET_VAR=$(grep \"unix:/var/run/${_PHP_CN}.fpm.socket\" ${_Ssl} 2>&1)\n        _FPM_SOCKET_RUN=$(grep \"unix:/run/${_PHP_CN}.fpm.socket\" ${_Ssl} 2>&1)\n        if [[ \"${_FPM_SOCKET_VAR}\" =~ \"fpm.socket\" ]] \\\n          || [[ \"${_FPM_SOCKET_RUN}\" =~ \"fpm.socket\" ]]; then\n          _FPM_SOCKET_TEST=\"fpm.socket\"\n        fi\n        if [[ ! \"${_FPM_SOCKET_TEST}\" =~ \"fpm.socket\" ]] \\\n          || [[ ! \"${_HTTP3_TEST}\" =~ \"http3\" ]] \\\n          || [[ ! \"${_KTLS_TEST}\" =~ \"KTLS\" ]] \\\n          || [[ \"${_RPRT_TEST}\" =~ \"reuseport\" ]] \\\n          || [[ ! \"${_SQLADMIN_TEST}\" =~ \"sqladmin\" ]]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"${_STATUS} A: pre.d/z_${_DOMAIN}_ssl_proxy.conf needs update...\"\n          fi\n          echo \"${_DOMAIN} ${_THISHTIP} ${_USER} ${_CLIENT_EMAIL} ${_THISHTIP}\" > /root/.ssl.proxy.cnf\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            bash /opt/local/bin/xboa ssl-gen\n            wait\n          else\n            _mrun \"bash /opt/local/bin/xboa ssl-gen\"\n            wait\n          fi\n          if [ -e \"${_Ssl}\" ]; then\n            _satellite_letsencrypt_vhost_sync\n          else\n            _msg \"${_STATUS} A: pre.d/z_${_DOMAIN}_ssl_proxy.conf doesn't exist!\"\n          fi\n        fi\n      else\n        _msg \"${_STATUS} A: Creating LE vhost for Hostmaster, please wait...\"\n        _satellite_letsencrypt_crt_key_copy\n        echo \"${_DOMAIN} ${_THISHTIP} ${_USER} ${_CLIENT_EMAIL} ${_THISHTIP}\" > /root/.ssl.proxy.cnf\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          bash /opt/local/bin/xboa ssl-gen\n          wait\n        else\n          _mrun \"bash /opt/local/bin/xboa ssl-gen\"\n          wait\n        fi\n        if 
[ -e \"${_Ssl}\" ]; then\n          _satellite_letsencrypt_vhost_sync\n        else\n          _msg \"${_STATUS} A: pre.d/z_${_DOMAIN}_ssl_proxy.conf doesn't exist!\"\n        fi\n      fi\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"${_STATUS} A: le/certs/${_DOMAIN}/fullchain.pem doesn't exist!\"\n      fi\n    fi\n  fi\n}\n\n_satellite_log_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_log_update\"\n    _debug_proc\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    echo ${_F_TIME} > ${_ROOT}/log/date-init.txt\n  else\n    echo ${_F_TIME} > ${_ROOT}/log/date-upgrade-${_THIS_CORE}.txt\n  fi\n  _OCTOPUS_VERSION_INFO=\"${_F_TIME} / \\\n    ${_OS_DIST}.${_OS_CODE} / \\\n    Octopus ${_X_VERSION}-${_xSrl} / \\\n    FPM ${_PHP_FPM_VERSION} / \\\n    CLI ${_PHP_CLI_VERSION}\"\n  echo \"${_OCTOPUS_VERSION_INFO}\" | fmt -su -w 2500 >> \\\n    ${_ROOT}/log/octopus_log.txt\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: New entry added to ${_ROOT}/log/octopus_log.txt\"\n  fi\n}\n\n_satellite_batch_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_batch_cleanup\"\n    _debug_proc\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _L_ST=\"install\"\n  else\n    _L_ST=\"upgrade\"\n  fi\n  _msg \"Final post-${_L_ST} cleaning, one moment...\"\n  cd /\n  chmod 711 bin boot data dev emul etc home lib lib64 lib32 media mnt &> /dev/null\n  chmod 711 opt sbin selinux srv sys usr var share run &> /dev/null\n  chmod 700 root &> /dev/null\n  if [ -e \"${_D}\" ]; then\n    if [ ! 
-f \"${_D}/permissions-fix-${_xSrl}-${_X_VERSION}-${_TODAY}.info\" ]; then\n      find ${_D}/000 -type d -exec chmod 0755 {} \\; &> /dev/null\n      find ${_D}/000 -type f -exec chmod 0644 {} \\; &> /dev/null\n      _mrun \"chmod 755 ${_D}/*/*/profiles\"\n      _mrun \"chmod 02775 ${_D}/*/*/sites/all/{modules,libraries,themes}\"\n      _mrun \"chmod 02775 ${_D}/000/core/*/sites/all/{modules,libraries,themes}\"\n      _mrun \"chown -R root:root ${_D}\"\n      _mrun \"chown -R root:users ${_D}/*/*/sites\"\n      echo fixed > ${_D}/permissions-fix-${_xSrl}-${_X_VERSION}-${_TODAY}.info\n    fi\n    chown root:root ${_D} &> /dev/null\n    chown root:root ${_CORE} &> /dev/null\n    _mrun \"chown -R root:root /data/conf\"\n    _mrun \"chown -R root:root ${_CORE}/o_contrib\"\n    _mrun \"chown -R root:root ${_CORE}/o_contrib_seven\"\n    _mrun \"chown -R root:root ${_D}/000\"\n    find /data/conf -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /data/conf -type f -exec chmod 0644 {} \\; &> /dev/null\n    chown root:root /opt/tmp &> /dev/null\n    chmod 0711 /data ${_D}/* /data/disk /data/conf &> /dev/null\n    chmod 644 /data/all/cpuinfo &> /dev/null\n    chmod 0755 ${_D} ${_D}/000 &> /dev/null\n  elif [ -e \"/data/disk/all\" ]; then\n    if [ ! 
-f \"/data/disk/all/permissions-fix-${_xSrl}-${_X_VERSION}-${_TODAY}.info\" ]; then\n      find /data/disk/all/000 -type d -exec chmod 0755 {} \\; &> /dev/null\n      find /data/disk/all/000 -type f -exec chmod 0644 {} \\; &> /dev/null\n      _mrun \"chmod 755 /data/disk/all/*/*/profiles\"\n      _mrun \"chmod 02775 /data/disk/all/*/*/sites/all/{modules,libraries,themes}\"\n      _mrun \"chmod 02775 /data/disk/all/000/core/*/sites/all/{modules,libraries,themes}\"\n      _mrun \"chown -R root:root /data/disk/all\"\n      _mrun \"chown -R root:users /data/disk/all/*/*/sites\"\n      echo fixed > /data/disk/all/permissions-fix-${_xSrl}-${_X_VERSION}-${_TODAY}.info\n    fi\n    chown root:root /data/disk/all &> /dev/null\n    chown root:root ${_CORE} &> /dev/null\n    _mrun \"chown -R root:root /data/conf\"\n    _mrun \"chown -R root:root ${_CORE}/o_contrib\"\n    _mrun \"chown -R root:root ${_CORE}/o_contrib_seven\"\n    _mrun \"chown -R root:root /data/disk/all/000\"\n    find /data/conf -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /data/conf -type f -exec chmod 0644 {} \\; &> /dev/null\n    chown root:root /opt/tmp &> /dev/null\n    chmod 0711 /data /data/disk/all/* /data/disk /data/conf &> /dev/null\n    chmod 644 /data/disk/all/cpuinfo &> /dev/null\n    chmod 0755 /data/disk/all /data/disk/all/000 &> /dev/null\n  fi\n  chmod 0700 /data/u &> /dev/null\n  chown root:root /data/u &> /dev/null\n  rm -f /data/u/*host8* &> /dev/null\n  rm -f /data/u/*o8.io* &> /dev/null\n  rm -f /data/u/*boa.io* &> /dev/null\n  rm -f /data/u/*aegir.cc* &> /dev/null\n  mv -f ${_ROOT}/backups/drupalgeddon-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/drush_make-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/drush-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/make_local-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/provision_boost-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f 
${_ROOT}/backups/provision_cdn-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/provision_civicrm-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/provision_platform_git-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/provision_site_backup-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/provision_tasks_extra-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/provision-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/registry_rebuild-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/safe_cache_form_clear-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mv -f ${_ROOT}/backups/mydropwizard-pre* ${_ROOT}/backups/system/ &> /dev/null\n  mkdir -p /data/conf/arch/log\n  chmod 0777 /data/conf/arch/log\n  mv -f /data/conf/global.inc-pre* /data/conf/arch/ &> /dev/null\n  mv -f /data/conf/global/*inc-pre* /data/conf/arch/ &> /dev/null\n  mv -f /data/conf/global.inc-before* /data/conf/arch/ &> /dev/null\n  mv -f /data/conf/global.inc-missing* /data/conf/arch/ &> /dev/null\n  find ${_ROOT}/static/*/module* -maxdepth 0 -mindepth 0 -type d -exec chmod 775 {} \\; &> /dev/null\n  find ${_ROOT}/static/*/*/module* -maxdepth 0 -mindepth 0 -type d -exec chmod 775 {} \\; &> /dev/null\n  find ${_ROOT}/static/*/*/*/module* -maxdepth 0 -mindepth 0 -type d -exec chmod 775 {} \\; &> /dev/null\n  find ${_ROOT}/static/*/*/*/*/module* -maxdepth 0 -mindepth 0 -type d -exec chmod 775 {} \\; &> /dev/null\n  if [ -d \"${_CORE}/${_DRUPAL7}\" ]; then\n    [ -d \"/var/backups/trash\" ] || mkdir -p /var/backups/trash\n    rm -rf /var/backups/trash/*\n    mv -f ${_CORE}/${_DRUPAL7} /var/backups/trash/ &> /dev/null\n  fi\n}\n\n_satellite_display_url_finalize() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_display_url_finalize\"\n    _debug_proc\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n   
   _DO_NOTHING=YES\n    else\n      _AEGIR_LOGIN_URL=$(tail --lines=11 ${_ROOT}/log/install.log | grep --text \"^http:\" 2>&1)\n      if [ ! -z \"${_AEGIR_LOGIN_URL}\" ]; then\n        echo \" \"\n        _msg \"INFO: Congratulations, Ægir Satellite has been installed successfully!\"\n#         _msg \"NOTE! Please wait 2 min before visiting Ægir at:\"\n#         echo \" \"\n#         _msg \"LINK: ${_AEGIR_LOGIN_URL}\"\n#         echo \" \"\n#         _msg \"NOTE! The initial one-time login link will no longer work.\"\n#         sleep 3\n#         _msg \"To access your Octopus Ægir control panel after the procedure is finished...\"\n#         _msg \"...please generate a new link by running the following command:\"\n#         sleep 3\n#         echo \" \"\n#         echo \"  su -s /bin/bash ${_USER} -c \\\"drush @hm uli\\\"\"\n#         echo \" \"\n#         sleep 3\n      else\n        _msg \"ALRT! Something went wrong\"\n        _msg \"ALRT! Please check the install log for details:\"\n        _msg \"ALRT! ${_ROOT}/log/install.log\"\n      fi\n    fi\n  else\n    _satellite_o_contrib_update_global\n    _satellite_o_contrib_seven_update_global\n    _satellite_o_contrib_eight_update_global\n    _satellite_o_contrib_nine_update_global\n    _satellite_o_contrib_ten_update_global\n    _satellite_o_contrib_eleven_update_global\n  fi\n  if [ ! 
-e \"/root/.upstart.cnf\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} A: Starting the cron now\"\n    fi\n    _mrun \"service cron start\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} A: All done!\"\n  fi\n  _msg \"BYE!\"\n\n  touch /opt/tmp/status-AegirSetupA-OK\n}\n\n\n_satellite_child_b_prepare_dirs_permissions() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_prepare_dirs_permissions\"\n    _debug_proc\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} B: Creating directories with correct permissions\"\n  fi\n  if [ -e \"${_ROOT}/aegir.sh\" ]; then\n    rm -f ${_ROOT}/aegir.sh\n  fi\n  #_DRUSH_HOSTING_TASKS_CMD=\"/usr/bin/drush @hostmaster hosting-tasks --force\"\n  _DRUSH_HOSTING_DISPATCH_CMD=\"${_T_CLI}/php ${_ROOT}/tools/drush/drush.php @hostmaster hosting-dispatch\"\n  touch ${_ROOT}/aegir.sh\n  chmod 0700 ${_ROOT}/aegir.sh &> /dev/null\n  echo -e \\\n    \"#!/bin/bash\\n\\nPATH=.:${_T_CLI}:/usr/sbin:/usr/bin:/sbin:/bin\\n \\\n     \\n${_DRUSH_HOSTING_DISPATCH_CMD} \\\n     \\ntouch ${_ROOT}/${_USER}-task.done\" \\\n     | fmt -su -w 2500 | tee -a ${_ROOT}/aegir.sh >/dev/null 2>&1\n\n  mkdir -p ${_ROOT}/aegir/distro\n  mkdir -p ${_ROOT}/distro/${_DISTRO}\n  mkdir -p ${_ROOT}/src/${_DISTRO}\n  mkdir -p ${_ROOT}/src/{modules,themes}\n  mkdir -p ${_ROOT}/{tools,log,u,backups,platforms,clients}\n  chmod 0700 ${_ROOT}/{log,src,u} &> /dev/null\n  chmod 0700 ${_ROOT}/src/${_DISTRO} &> /dev/null\n  chmod 0700 ${_ROOT}/src/{modules,themes} &> /dev/null\n  chmod 0711 ${_ROOT}/{aegir,aegir/distro,distro,platforms,tools} &> /dev/null\n  chmod 0711 ${_ROOT}/distro/${_DISTRO} &> /dev/null\n  chmod 0750 ${_ROOT}/{backups,clients} &> /dev/null\n\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    #_msg \"${_STATUS} B: UPGRADE in progress...\"\n    if [ -d \"${_ROOT}/distro\" ]; then\n     #_msg \"${_STATUS} B: UPGRADE v.2 in progress...\"\n     if [ -e 
\"${_ROOT}/log/domain.txt\" ]; then\n      _DOMAIN=$(cat ${_ROOT}/log/domain.txt 2>&1)\n      _DOMAIN=$(echo -n ${_DOMAIN} | tr -d \"\\n\" 2>&1)\n     fi\n     #_msg \"${_STATUS} B: _DOMAIN is ${_DOMAIN}\"\n    elif [ ! -d \"${_ROOT}/distro\" ]; then\n     #_msg \"${_STATUS} B: UPGRADE v.1 in progress...\"\n     #_msg \"${_STATUS} B: _DISTRO is ${_DISTRO}\"\n     if [ -e \"${_ROOT}/log/domain.txt\" ]; then\n      _DOMAIN=$(cat ${_ROOT}/log/domain.txt 2>&1)\n      _DOMAIN=$(echo -n ${_DOMAIN} | tr -d \"\\n\" 2>&1)\n     fi\n     #_msg \"${_STATUS} B: _DOMAIN is ${_DOMAIN}\"\n    fi\n  else\n    true\n    #_msg \"${_STATUS} B: NEW AEGIR setup in progress...\"\n    #_msg \"${_STATUS} B: _DISTRO is ${_DISTRO}\"\n    #_msg \"${_STATUS} B: _DOMAIN is ${_DOMAIN}\"\n  fi\n  echo ${_DOMAIN} > ${_ROOT}/log/domain.txt\n}\n\n_satellite_child_b_install_drush() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_install_drush\"\n    _debug_proc\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} B: Running standard installer\"\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    mkdir -p ${_ROOT}/backups/system\n    chmod 700 ${_ROOT}/backups/system\n    if [ -d \"${_ROOT}/aegir/config\" ]; then\n      if [ ! -d \"${_ROOT}/config\" ]; then\n        cd ${_ROOT}/aegir\n        mv -f config ${_ROOT}/config &> /dev/null\n        ln -sfn ${_ROOT}/config ${_ROOT}/aegir/config\n      fi\n    fi\n  fi\n  if [ -f \"/opt/tools/drush/8/drush/drush.php\" ]; then\n    cd ${_ROOT}/tools\n    mv -f drush ${_ROOT}/backups/system/drush-pre-${_DISTRO}-${_NOW} &> /dev/null\n    cp -af /opt/tools/drush/8/drush ${_ROOT}/tools/\n  else\n    cd ${_ROOT}/tools\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Downloading drush ${_DRUSH_VERSION}...\"\n    fi\n    _get_dev_ext \"drush-${_DRUSH_VERSION}.tar.gz\"\n  fi\n  if [ ! 
-f \"${_ROOT}/tools/drush/drush.php\" ]; then\n    mv -f ${_ROOT}/backups/system/drush-pre-${_DISTRO}-${_NOW} ${_ROOT}/tools/\n  fi\n  cd ${_ROOT}/tools/drush/\n  find ${_ROOT}/tools/drush -type d -exec chmod 0755 {} \\; &> /dev/null\n  find ${_ROOT}/tools/drush -type f -exec chmod 0644 {} \\; &> /dev/null\n  chmod 755 ${_ROOT}/tools/drush/drush\n  chmod 755 ${_ROOT}/tools/drush/drush.complete.sh\n  chmod 755 ${_ROOT}/tools/drush/drush.launcher\n  chmod 755 ${_ROOT}/tools/drush/drush.php\n  chmod 755 ${_ROOT}/tools/drush/unish.sh\n  chmod 755 ${_ROOT}/tools/drush/examples/drush.wrapper\n  chmod 755 ${_ROOT}/tools/drush/examples/git-bisect.example.sh\n  chmod 755 ${_ROOT}/tools/drush/examples/helloworld.script\n}\n\n_satellite_child_b_drush_xts_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_drush_xts_cleanup\"\n    _debug_proc\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    mkdir -p ${_ROOT}/backups/system\n    chmod 700 ${_ROOT}/backups/system\n    mv -f ${_ROOT}/backups/drush-pre* ${_ROOT}/backups/system/ &> /dev/null\n    _B_EXT=\"provision clean_missing_modules drupalgeddon drush_ecl make_local \\\n      provision_boost provision_cdn provision_civicrm provision_platform_git \\\n      provision_site_backup provision_tasks_extra remote_import mydropwizard \\\n      registry_rebuild safe_cache_form_clear security_check security_review \\\n      utf8mb4_convert\"\n    for e in ${_B_EXT}; do\n      if [ -e \"${_ROOT}/.drush/$e\" ]; then\n        mv -f ${_ROOT}/.drush/$e \\\n          ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n        mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n      fi\n      if [ -e \"${_ROOT}/.drush/xts/$e\" ]; then\n        mv -f ${_ROOT}/.drush/xts/$e \\\n          ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n        mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n      fi\n      if [ -e 
\"${_ROOT}/.drush/usr/$e\" ]; then\n        mv -f ${_ROOT}/.drush/usr/$e \\\n          ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n        mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n      fi\n      if [ -e \"${_ROOT}/.drush/sys/$e\" ]; then\n        mv -f ${_ROOT}/.drush/sys/$e \\\n          ${_ROOT}/backups/system/$e-pre-${_DISTRO}-${_NOW} &> /dev/null\n        mv -f ${_ROOT}/backups/$e-pre* ${_ROOT}/backups/system/ &> /dev/null\n      fi\n    done\n  fi\n}\n\n_satellite_child_b_drush_xts_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_drush_xts_install\"\n    _debug_proc\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} B: Installing Ægir Provision backend...\"\n  fi\n  mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n  rm -rf ${_ROOT}/.drush/drush_make\n  rm -rf ${_ROOT}/.drush/sys/drush_make\n  cd ${_ROOT}/.drush\n  if [ \"${_DL_MODE}\" = \"BATCH\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    rm -rf ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf ${_ROOT}/.drush/{provision,drush_make}\n    _get_dev_ext \"backend.tar.gz\"\n    mv -f ${_ROOT}/.drush/backend/sys ${_ROOT}/.drush/\n    mv -f ${_ROOT}/.drush/backend/xts ${_ROOT}/.drush/\n    mv -f ${_ROOT}/.drush/backend/usr ${_ROOT}/.drush/\n    if [ -e \"${_ROOT}/.drush/sys/provision/provision.inc\" ] \\\n      && [ -d \"${_ROOT}/.drush/xts/security_review\" ] \\\n      && [ -d \"${_ROOT}/.drush/usr/registry_rebuild\" ]; then\n      [ -e \"${_ROOT}/.drush/backend\" ] && rm -rf ${_ROOT}/.drush/backend*\n    fi\n  elif [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    rm -rf ${_ROOT}/.drush/{sys,xts,usr}\n    rm -rf 
${_ROOT}/.drush/{provision,drush_make}\n    mkdir -p ${_ROOT}/.drush/{sys,xts,usr}\n    _rD=\"${_ROOT}/.drush\"\n    ${_gCb} ${_BRANCH_PRN} ${_gitHub}/provision.git      ${_rD}/sys/provision &> /dev/null\n    ${_gCb} 7.x-1.x-dev ${_gitHub}/drupalgeddon.git      ${_rD}/usr/drupalgeddon &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/drush_ecl.git             ${_rD}/usr/drush_ecl &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/security_review.git       ${_rD}/xts/security_review &> /dev/null\n    ${_gCb} 7.x-2.x ${_gitHub}/provision_boost.git       ${_rD}/xts/provision_boost &> /dev/null\n    ${_gCb} 7.x-2.x ${_gitHub}/registry_rebuild.git      ${_rD}/usr/registry_rebuild &> /dev/null\n    ${_gCb} 7.x-1.x ${_gitHub}/safe_cache_form_clear.git ${_rD}/usr/safe_cache_form_clear &> /dev/null\n    rm -rf ${_rD}/*/.git\n    rm -rf ${_rD}/*/*/.git\n    cd ${_rD}/usr\n    _get_dev_ext \"clean_missing_modules.tar.gz\"\n    _get_dev_ext \"utf8mb4_convert-7.x-1.3.tar.gz\"\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Downloading Drush and Provision extensions from ${_DL_MODE}-${_AEGIR_VERSION}...\"\n    fi\n    cd ${_ROOT}/.drush/sys\n    _get_dev_ext \"provision.tar.gz\"\n    cd ${_ROOT}/.drush/usr\n    _get_dev_ext \"clean_missing_modules.tar.gz\"\n    _get_dev_ext \"drupalgeddon.tar.gz\"\n    _get_dev_ext \"drush_ecl.tar.gz\"\n    _get_dev_ext \"registry_rebuild.tar.gz\"\n    _get_dev_ext \"safe_cache_form_clear.tar.gz\"\n    _get_dev_ext \"utf8mb4_convert-7.x-1.3.tar.gz\"\n    cd ${_ROOT}/.drush/xts\n    _get_dev_ext \"provision_boost.tar.gz\"\n    _get_dev_ext \"security_review.tar.gz\"\n    cd ${_ROOT}/.drush\n  fi\n  sed -i \"s/files.aegir.cc/${_USE_MIR}/g\" ${_ROOT}/.drush/sys/provision/aegir.make &> /dev/null\n  wait\n}\n\n_satellite_child_b_drush_test() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_drush_test\"\n    _debug_proc\n  fi\n  drush8 cc drush &> /dev/null\n  rm -rf ${_ROOT}/.tmp/cache\n  if 
${_DRUSHCMD} help | grep \"^ provision-install\" > /dev/null ; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Drush test result OK\"\n    fi\n  else\n    _msg \"${_STATUS} B: FATAL ERROR: Drush is broken (${_DRUSHCMD} help failed)\"\n    _msg \"${_STATUS} B: FATAL ERROR: Aborting AegirSetupB installer NOW!\"\n    touch /opt/tmp/status-AegirSetupB-FAIL\n    exit 1\n  fi\n}\n\n_satellite_child_b_aegir_build() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_aegir_build\"\n    _debug_proc\n  fi\n  _LOCAL_STATUS=\"${_STATUS}\"\n  if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n    cd ${_ROOT}\n    _AGR_PXSWD=$(cat ${_ROOT}/.${_USER}.pass.txt 2>&1)\n    _AGRPASWD=$(echo -n ${_AGR_PXSWD} | tr -d \"\\n\" 2>&1)\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      if [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n        _THIS_DB_HOST=\"${_hName}\"\n      elif [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n        _SQL_CONNECT=127.0.0.1\n        _THIS_DB_HOST=127.0.0.1\n      else\n        _THIS_DB_HOST=localhost\n      fi\n      _USE_AEGIR_HOST=\"${_hName}\"\n      _USE_DB_USER=\"${_USER}\"\n    else\n      _USE_AEGIR_HOST=\"${_hName}\"\n      _USE_DB_USER=aegir_root\n    fi\n    if [ \"${_THIS_DB_HOST}\" = \"${_MY_OWNIP}\" ]; then\n      _USE_AEGIR_HOST=\"${_hName}\"\n      _THIS_DB_HOST=\"${_hName}\"\n    fi\n    _L_SYS_PHP=\"${_ROOT}/.${_USER}.pass.php\"\n    echo \"<?php\" > ${_L_SYS_PHP}\n    echo \"\\$oct_db_user = \\\"${_USER}\\\";\" >> ${_L_SYS_PHP}\n    echo \"\\$oct_db_pass = \\\"${_AGRPASWD}\\\";\" >> ${_L_SYS_PHP}\n    echo \"\\$oct_db_port = \\\"${_THIS_DB_PORT}\\\";\" >> ${_L_SYS_PHP}\n    echo \"\\$oct_db_host = \\\"${_THIS_DB_HOST}\\\";\" >> ${_L_SYS_PHP}\n    echo 
\"\\$oct_db_dirs = \\\"/data/disk/${_USER}/backups\\\";\" >> ${_L_SYS_PHP}\n    chown ${_USER}:${_USRG} ${_L_SYS_PHP}\n    chmod 0600 ${_L_SYS_PHP}\n\n    _msg \"${_STATUS} B: Running hostmaster-install, please wait...\"\n    ${_DRUSHCMD} cc drush >${_ROOT}/log/install.log 2>&1\n    rm -rf ${_ROOT}/.tmp/cache\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      ${_DRUSHCMD} hostmaster-install ${_DOMAIN} \\\n        --aegir_db_host=${_THIS_DB_HOST} \\\n        --aegir_db_port=${_THIS_DB_PORT} \\\n        --aegir_db_pass=${_AGRPASWD} \\\n        --aegir_db_user=${_USE_DB_USER} \\\n        --aegir_host=${_USE_AEGIR_HOST} \\\n        --aegir_root=${_ROOT} \\\n        --client_email=${_MY_OCTO_EMAIL} \\\n        --http_service_type=nginx \\\n        --root=${_HM_ROOT} \\\n        --script_user=${_USER} \\\n        --web_group=${_WEBG} \\\n        --version=${_AEGIR_VERSION} -y -d \\\n      2>&1 | tee ${_ROOT}/log/install.log\n    else\n      ${_DRUSHCMD} hostmaster-install ${_DOMAIN} \\\n        --aegir_db_host=${_THIS_DB_HOST} \\\n        --aegir_db_port=${_THIS_DB_PORT} \\\n        --aegir_db_pass=${_AGRPASWD} \\\n        --aegir_db_user=${_USE_DB_USER} \\\n        --aegir_host=${_USE_AEGIR_HOST} \\\n        --aegir_root=${_ROOT} \\\n        --client_email=${_MY_OCTO_EMAIL} \\\n        --http_service_type=nginx \\\n        --root=${_HM_ROOT} \\\n        --script_user=${_USER} \\\n        --web_group=${_WEBG} \\\n        --version=${_AEGIR_VERSION} -y \\\n      >${_ROOT}/log/install.log 2>&1\n    fi\n    rm -rf ${_HM_ROOT}/profiles/{default,standard,minimal,testing}\n    cd ${_HM_ROOT}\n    mkdir -p sites/all/{modules,themes,libraries}\n    mkdir -p sites/${_DOMAIN}/files/{tmp,js,css}\n    chmod 02775 -R sites/${_DOMAIN}/files &> /dev/null\n    chgrp -R ${_WEBG} sites/${_DOMAIN}/files &> /dev/null\n    rm -f ${_ROOT}/u/${_DOMAIN}\n    ln -sfn ${_HM_ROOT} ${_ROOT}/u/${_DOMAIN}\n    rm -f /data/u/${_DOMAIN} &> /dev/null\n    ln -sfn ${_HM_ROOT} /data/u/${_DOMAIN}\n    
${_DRUSHCMD} cc drush &> /dev/null\n    rm -rf ${_ROOT}/.tmp/cache\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Running hosting-dispatch/hosting-tasks...\"\n    fi\n    ${_DRUSHCMD} @hostmaster hosting-dispatch &> /dev/null\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force &> /dev/null\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force &> /dev/null\n    wait\n    sleep 5\n    ${_DRUSHCMD} @hostmaster hosting-tasks --force &> /dev/null\n    wait\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Testing previous install...\"\n    fi\n    ${_DRUSHCMD} cc drush &> /dev/null\n    rm -rf ${_ROOT}/.tmp/cache\n\n    ### Pre-Fix for migrated/merged instances\n    if [ -e \"${_ROOT}/log/imported.pid\" ] || [ -e \"${_ROOT}/log/exported.pid\" ]; then\n      if [ -e \"${_ROOT}/aegir/distro/001/sites/${_DOMAIN}/drushrc.php\" ]; then\n        sed -i \"s/platform_0.*'/platform_hostmaster'/g\"    ${_ROOT}/.drush/hostmaster.alias.drushrc.php\n        wait\n        sed -i \"s/distro\\/0.*\\/sites/distro\\/001\\/sites/g\" ${_ROOT}/.drush/hostmaster.alias.drushrc.php\n        wait\n        sed -i \"s/distro\\/01.*',/distro\\/001',/g\"          ${_ROOT}/.drush/hostmaster.alias.drushrc.php\n        wait\n        sed -i \"s/distro\\/02.*',/distro\\/001',/g\"          ${_ROOT}/.drush/hostmaster.alias.drushrc.php\n        wait\n        sed -i \"s/distro\\/03.*',/distro\\/001',/g\"          ${_ROOT}/.drush/hostmaster.alias.drushrc.php\n        wait\n        sed -i \"s/distro\\/04.*',/distro\\/001',/g\"          ${_ROOT}/.drush/hostmaster.alias.drushrc.php\n        wait\n        sed -i \"s/distro\\/05.*',/distro\\/001',/g\"          ${_ROOT}/.drush/hostmaster.alias.drushrc.php\n        wait\n      fi\n    fi\n\n    ${_DRUSHCMD} cc drush &> /dev/null\n    rm -rf ${_ROOT}/.tmp/cache\n    if [ -d \"${_HM_ROOT}/modules/o_contrib\" ] \\\n      && [ ! 
-L \"${_HM_ROOT}/modules/o_contrib\" ]; then\n      rm -f ${_HM_ROOT}/modules/o_contrib/{cache_backport,redis_edge,redis}\n    fi\n    if [ -d \"${_PREV_HM_ROOT}/modules/o_contrib\" ] \\\n      && [ ! -L \"${_PREV_HM_ROOT}/modules/o_contrib\" ]; then\n      rm -f ${_PREV_HM_ROOT}/modules/o_contrib/{cache_backport,redis_edge,redis}\n    fi\n    if [ -d \"${_HM_ROOT}/modules/o_contrib_seven\" ] \\\n      && [ ! -L \"${_HM_ROOT}/modules/o_contrib_seven\" ]; then\n      rm -f ${_HM_ROOT}/modules/o_contrib_seven/{cache_backport,redis_edge,redis}\n    fi\n    if [ -d \"${_PREV_HM_ROOT}/modules/o_contrib_seven\" ] \\\n      && [ ! -L \"${_PREV_HM_ROOT}/modules/o_contrib_seven\" ]; then\n      rm -f ${_PREV_HM_ROOT}/modules/o_contrib_seven/{cache_backport,redis_edge,redis}\n    fi\n    if [ -e \"${_PREV_HM_ROOT}/modules/path_alias_cache\" ]; then\n      _DEBUG_MODE=YES\n    fi\n    if [ ! -e \"${_PREV_HM_ROOT}/sites/${_DOMAIN}/settings.php\" ]; then\n      _DEBUG_MODE=YES\n      _msg \"${_STATUS} B: Testing previous install...\"\n      _msg \"${_STATUS} B: Oops, zombie found, moving it to backups...\"\n      mv -f ${_PREV_HM_ROOT} \\\n        ${_ROOT}/backups/system/empty-host-master-${_LAST_HMR}-${_NOW}\n      cd ${_ROOT}/aegir/distro\n      _list=([0-9]*)\n      _last=${_list[@]: -1}\n      _L_LAST_HMR=$_last\n      _BASH_TEST=$(bash --version 2>&1)\n      if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n        _nextnum=00$((10#0${_last%%[^0-9]*} + 1))\n      else\n        _nextnum=00$((10#${_last%%[^0-9]*} + 1))\n      fi\n      _nextnum=${_nextnum: -3}\n      _L_HM_DISTRO=${_nextnum}\n      _HM_ROOT=\"${_ROOT}/aegir/distro/${_L_HM_DISTRO}\"\n      _PREV_HM_ROOT=\"${_ROOT}/aegir/distro/${_L_LAST_HMR}\"\n      _msg \"${_STATUS} B: Testing previous install again after removing zombie...\"\n      if [ ! 
-e \"${_PREV_HM_ROOT}/sites/${_DOMAIN}/settings.php\" ]; then\n        _DEBUG_MODE=YES\n        _msg \"${_STATUS} B: Testing previous install again...\"\n        _msg \"${_STATUS} B: Oops, another zombie found, moving it to backups...\"\n        mv -f ${_PREV_HM_ROOT} \\\n          ${_ROOT}/backups/system/empty-host-master-${_L_HM_DISTRO}-${_NOW}-sec\n        cd ${_ROOT}/aegir/distro\n        _list=([0-9]*)\n        _last=${_list[@]: -1}\n        _L_LAST_HMR=$_last\n        _BASH_TEST=$(bash --version 2>&1)\n        if [[ \"${_BASH_TEST}\" =~ \"version 5.1\" ]] || [[ \"${_BASH_TEST}\" =~ \"version 5.2\" ]]; then\n          _nextnum=00$((10#0${_last%%[^0-9]*} + 1))\n        else\n          _nextnum=00$((10#${_last%%[^0-9]*} + 1))\n        fi\n        _nextnum=${_nextnum: -3}\n        _L_HM_DISTRO=${_nextnum}\n        _HM_ROOT=\"${_ROOT}/aegir/distro/${_L_HM_DISTRO}\"\n        _PREV_HM_ROOT=\"${_ROOT}/aegir/distro/${_L_LAST_HMR}\"\n        _msg \"${_STATUS} B: Let's hope there are no more zombies left...\"\n      fi\n    fi\n    if [ -d \"${_HM_ROOT}\" ]; then\n      _msg \"${_STATUS} B: FATAL ERROR: ${_HM_ROOT} already exists\"\n      _msg \"${_STATUS} B: FATAL ERROR: Too many zombies to delete! 
Try again...\"\n      _msg \"${_STATUS} B: FATAL ERROR: Aborting AegirSetupB installer NOW!\"\n      touch /opt/tmp/status-AegirSetupB-FAIL\n      exit 1\n    fi\n    _msg \"${_STATUS} B: Hostmaster STATUS: Upgrade in progress...\"\n    ### security_review breaks the upgrade if active\n    mv -f ${_ROOT}/.drush/xts/security_review/security_review.drush.inc \\\n      ${_ROOT}/.drush/xts/security_review/foo.txt  &> /dev/null\n    export DEBIAN_FRONTEND=noninteractive\n    export APT_LISTCHANGES_FRONTEND=none\n    if [ -z \"${TERM+x}\" ]; then\n      export TERM=vt100\n    fi\n\n    #\n    # Fix broken Entity module if needed.\n    #\n    _pthA=\"profiles/hostmaster/modules/contrib/entity\"\n    _pthB=\"module_filter.module\"\n    #\n    if [ -e \"${_PREV_HM_ROOT}/${_pthA}/${_pthB}\" ]; then\n      _msg \"${_STATUS} B: Fixing broken Entity module...\"\n      rm -rf ${_PREV_HM_ROOT}/${_pthA}\n      cd ${_PREV_HM_ROOT}/profiles/hostmaster/modules/contrib\n      _get_dev_stc \"entity-7.x-1.12.tar.gz\"\n      ${_DRUSHCMD} @hostmaster en entity -y\n      ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_service WHERE service LIKE 'http'\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"INSERT INTO hosting_service (nid, vid, service, type, restart_cmd, port, available) VALUES ('2', '2', 'http', 'nginx_ssl', 'sudo /etc/init.d/nginx reload', '80', '1')\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster hosting-task @server_master verify --force\n      ${_DRUSHCMD} @hostmaster hosting-dispatch\n      wait\n      sleep 5\n      ${_DRUSHCMD} @hostmaster hosting-tasks --force\n      wait\n      sleep 5\n      ${_DRUSHCMD} @hostmaster hosting-tasks --force\n      wait\n      sleep 5\n      ${_DRUSHCMD} @hostmaster hosting-tasks --force\n      wait\n      _msg \"${_STATUS} B: Waiting 15 seconds...\"\n      sleep 15\n    fi\n\n    if [ -e \"${_PREV_HM_ROOT}/modules/path_alias_cache\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        ${_DRUSHCMD} @hostmaster dis 
aegir_custom_settings -y\n        ${_DRUSHCMD} @hostmaster pm-uninstall aegir_custom_settings -y\n        ${_DRUSHCMD} @hostmaster dis hosting_advanced_cron -y\n        ${_DRUSHCMD} @hostmaster en ctools -y\n        ${_DRUSHCMD} @hostmaster registry-rebuild\n      else\n        ${_DRUSHCMD} @hostmaster dis aegir_custom_settings -y &> /dev/null\n        ${_DRUSHCMD} @hostmaster pm-uninstall aegir_custom_settings -y &> /dev/null\n        ${_DRUSHCMD} @hostmaster dis hosting_advanced_cron -y &> /dev/null\n        ${_DRUSHCMD} @hostmaster en ctools -y &> /dev/null\n        ${_DRUSHCMD} @hostmaster registry-rebuild &> /dev/null\n      fi\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        ${_DRUSHCMD} @hostmaster dis hosting_custom_settings -y\n        ${_DRUSHCMD} @hostmaster pm-uninstall hosting_custom_settings -y\n        ${_DRUSHCMD} @hostmaster registry-rebuild\n      else\n        ${_DRUSHCMD} @hostmaster dis hosting_custom_settings -y &> /dev/null\n        ${_DRUSHCMD} @hostmaster pm-uninstall hosting_custom_settings -y &> /dev/null\n        ${_DRUSHCMD} @hostmaster registry-rebuild &> /dev/null\n      fi\n    fi\n\n    cd ${_PREV_HM_ROOT}\n    ${_DRUSHCMD} cc drush &> /dev/null\n    rm -rf ${_ROOT}/.tmp/cache\n\n    ${_DRUSHCMD} @hostmaster sqlc < ${_bldPth}/aegir/helpers/hosting_cron.sql &> /dev/null\n    ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_task_log \\\n      WHERE timestamp < UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 3 MONTH))\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster sqlq \"OPTIMIZE TABLE hosting_task_log\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_task \\\n      WHERE task_type='delete' AND task_status='-1'\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_task \\\n      WHERE task_type='delete' AND task_status='0' AND executed='0'\" &> /dev/null\n\n    ### Fix for migrated/merged instances\n    if [ -e \"${_ROOT}/log/imported.pid\" ] \\\n      || [ -e 
\"${_ROOT}/log/exported.pid\" ]; then\n      if [ ! -e \"${_ROOT}/log/post-merge-fix.pid\" ]; then\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix for migrated/merged instance 1/2 start\"\n        _USE_AEGIR_HOST=\"${_hName}\"\n        ${_DRUSHCMD} @hostmaster sqlq \"REPLACE INTO hosting_context (nid, name) \\\n          VALUES ('4', 'server_localhost'), ('2', 'server_master')\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"REPLACE INTO hosting_package (vid, nid, \\\n          package_type, short_name, old_short_name, description) VALUES ('6', \\\n          '6', 'platform', 'drupal', '', '')\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"REPLACE INTO node_revisions (nid, vid, \\\n          uid, title, body, teaser, log, timestamp, format) VALUES ('6', '6', \\\n          '1', 'drupal', '', '', '', '1412168340', '0')\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"REPLACE INTO node (nid, vid, type, \\\n          language, title, uid, status, created, changed, comment, promote, \\\n          moderate, sticky, tnid, translate) VALUES ('6', '6', 'package', '', \\\n          'drupal', '1', '1', '1412168321', '1412168340', '0', '0', '0', '0', \\\n          '0', '0')\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_package \\\n          WHERE nid=2 AND short_name='drupal'\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM hosting_package \\\n          WHERE nid=4 AND short_name='drupal'\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM node \\\n          WHERE nid=8 AND type='site'\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM node_revisions \\\n          WHERE nid=8\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node \\\n          SET type='server' WHERE nid=2\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node \\\n          SET type='server' WHERE nid=4\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node \\\n    
      SET title='${_USE_AEGIR_HOST}' WHERE nid=2\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node \\\n          SET title='localhost' WHERE nid=4\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node_revision \\\n          SET title='${_USE_AEGIR_HOST}' WHERE nid=2\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node_revision \\\n          SET title='localhost' WHERE nid=4\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site \\\n          SET db_server=4 WHERE db_server=2\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n          SET web_server=2 WHERE web_server=0\" &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE users_roles \\\n          SET rid=7 WHERE rid=5\" &> /dev/null\n        ${_DRUSHCMD} cc drush &> /dev/null\n        rm -rf ${_ROOT}/.tmp/cache\n        ${_DRUSHCMD} @hostmaster hosting-task @server_localhost \\\n          verify --force &> /dev/null\n        ${_DRUSHCMD} @hostmaster hosting-dispatch &> /dev/null\n        wait\n        sleep 5\n        ${_DRUSHCMD} @hostmaster hosting-tasks --force &> /dev/null\n        wait\n        sleep 5\n        ${_DRUSHCMD} @hostmaster hosting-tasks --force &> /dev/null\n        wait\n        sleep 5\n        ${_DRUSHCMD} @hostmaster hosting-tasks --force &> /dev/null\n        wait\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix for migrated/merged instance 1/2 complete\"\n      fi\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site \\\n        SET client=1 WHERE profile=7\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site \\\n        SET client=1 WHERE profile=9\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site \\\n        SET client=1 WHERE client=0\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n        SET web_server=2 WHERE web_server=0\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node 
\\\n        SET uid=1 WHERE uid=0\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node_revision \\\n        SET uid=1 WHERE uid=0\" &> /dev/null\n      _HM_NID=$(${_DRUSHCMD} @hostmaster sqlq \"SELECT MIN(site.nid) AS lowest_nid \\\n        FROM hosting_site site JOIN hosting_package_instance pkgi \\\n        ON pkgi.rid=site.nid JOIN hosting_package pkg \\\n        ON pkg.nid=pkgi.package_id WHERE pkg.short_name='hostmaster'\" 2>&1)\n      _HM_NID=${_HM_NID//[^0-9]/}\n      if [ ! -z \"${_HM_NID}\" ]; then\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix 1/2 hosting_context ${_HM_NID}\"\n        if [ -e \"${_ROOT}/aegir/distro/001/sites/${_DOMAIN}/drushrc.php\" ]; then\n          _HM_PLF=$(${_DRUSHCMD} @hostmaster sqlq \"SELECT platform FROM hosting_site WHERE nid=${_HM_NID}\" 2>&1)\n          _HM_PLF=${_HM_PLF//[^0-9]/}\n          ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_context SET name='platform_hostmaster' WHERE nid=${_HM_PLF}\" &> /dev/null\n        fi\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_context SET name='hostmaster' WHERE nid=${_HM_NID}\"         &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\"                   &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node_revision SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\"          &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site_alias SET alias='www.${_DOMAIN}' WHERE nid=${_HM_NID}\" &> /dev/null\n      else\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix 1/2 hosting_context ${_HM_NID} empty!\"\n      fi\n      if [ -e \"${_ROOT}/aegir/distro/001/sites/${_DOMAIN}/drushrc.php\" ] \\\n        && [ ! 
-e \"${_ROOT}/log/hmpathfix.pid\" ]; then\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n          SET publish_path='${_ROOT}/aegir/distro/001' \\\n          WHERE publish_path LIKE '%/aegir/distro/%'\" &> /dev/null\n        touch ${_ROOT}/log/hmpathfix.pid\n      fi\n    fi\n    ### Fix for old migrated/merged instances\n\n    ${_DRUSHCMD} cc drush &> /dev/null\n    rm -rf ${_ROOT}/.tmp/cache\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      ${_DRUSHCMD} hostmaster-migrate ${_DOMAIN} ${_HM_ROOT} -y -d\n      ${_DRUSHCMD} @hostmaster registry-rebuild\n    else\n      ${_DRUSHCMD} hostmaster-migrate ${_DOMAIN} ${_HM_ROOT} -y &> /dev/null\n      ${_DRUSHCMD} @hostmaster registry-rebuild &> /dev/null\n    fi\n    cd ${_HM_ROOT}\n    mkdir -p sites/all/{modules,themes,libraries}\n    export DEBIAN_FRONTEND=text\n    mv -f ${_ROOT}/.drush/xts/security_review/foo.txt \\\n      ${_ROOT}/.drush/xts/security_review/security_review.drush.inc  &> /dev/null\n    rm -f ${_ROOT}/u/${_DOMAIN}\n    ln -sfn ${_HM_ROOT} ${_ROOT}/u/${_DOMAIN}\n    rm -f /data/u/${_DOMAIN} &> /dev/null\n    ln -sfn ${_HM_ROOT} /data/u/${_DOMAIN}\n    rm -rf ${_HM_ROOT}/profiles/{default,standard,minimal,testing}\n\n    ### Fix for migrated/merged instances\n    if [ -e \"${_ROOT}/log/imported.pid\" ] \\\n      || [ -e \"${_ROOT}/log/exported.pid\" ]; then\n      if [ ! 
-e \"${_ROOT}/log/post-merge-fix.pid\" ]; then\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix for migrated/merged instance 2/2 start\"\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE users_roles \\\n          SET rid=7 WHERE rid=5\" &> /dev/null\n        echo FIXED > ${_ROOT}/log/post-merge-fix.pid\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix for migrated/merged instance 2/2 complete\"\n      fi\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site \\\n        SET client=1 WHERE profile=7\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site \\\n        SET client=1 WHERE profile=9\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site \\\n        SET client=1 WHERE client=0\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n        SET web_server=2 WHERE web_server=0\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node \\\n        SET uid=1 WHERE uid=0\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node_revision \\\n        SET uid=1 WHERE uid=0\" &> /dev/null\n      _HM_NID=$(${_DRUSHCMD} @hostmaster sqlq \"SELECT MIN(site.nid) AS lowest_nid \\\n        FROM hosting_site site JOIN hosting_package_instance pkgi \\\n        ON pkgi.rid=site.nid JOIN hosting_package pkg \\\n        ON pkg.nid=pkgi.package_id WHERE pkg.short_name='hostmaster'\" 2>&1)\n      _HM_NID=${_HM_NID//[^0-9]/}\n      if [ ! 
-z \"${_HM_NID}\" ]; then\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix 2/2 hosting_context ${_HM_NID}\"\n        if [ -e \"${_ROOT}/aegir/distro/001/sites/${_DOMAIN}/drushrc.php\" ]; then\n          _HM_PLF=$(${_DRUSHCMD} @hostmaster sqlq \"SELECT platform FROM hosting_site WHERE nid=${_HM_NID}\" 2>&1)\n          _HM_PLF=${_HM_PLF//[^0-9]/}\n          ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_context SET name='platform_hostmaster' WHERE nid=${_HM_PLF}\" &> /dev/null\n        fi\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_context SET name='hostmaster' WHERE nid=${_HM_NID}\"         &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\"                   &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE node_revision SET title='${_DOMAIN}' WHERE nid=${_HM_NID}\"          &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_site_alias SET alias='www.${_DOMAIN}' WHERE nid=${_HM_NID}\" &> /dev/null\n      else\n        _msg \"${_STATUS} B: Hostmaster STATUS: Fix 2/2 hosting_context ${_HM_NID} empty!\"\n      fi\n      if [ -e \"${_ROOT}/aegir/distro/001/sites/${_DOMAIN}/drushrc.php\" ] \\\n        && [ ! 
-e \"${_ROOT}/log/hmpathfix.pid\" ]; then\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n          SET publish_path='${_ROOT}/aegir/distro/001' \\\n          WHERE publish_path LIKE '%/aegir/distro/%'\" &> /dev/null\n        touch ${_ROOT}/log/hmpathfix.pid\n      fi\n    fi\n    ### Fix for migrated/merged instances\n    _msg \"${_STATUS} B: Hostmaster STATUS: Upgrade completed\"\n  fi\n}\n\n_satellite_child_b_aegir_health_check() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_aegir_health_check\"\n    _debug_proc\n  fi\n  ###--------------------###\n  if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n    _MSG_STATUS=\"install\"\n  else\n    _MSG_STATUS=\"upgrade\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"${_STATUS} B: Simple check if Ægir ${_MSG_STATUS} is successful\"\n  fi\n  if [ -e \"${_HM_ROOT}/sites/${_DOMAIN}/settings.php\" ]; then\n    _msg \"${_STATUS} B: Ægir ${_MSG_STATUS} test result: OK\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"${_STATUS} B: Reloading Nginx...\"\n      sudo /etc/init.d/nginx reload\n    else\n      sudo /etc/init.d/nginx reload &> /dev/null\n    fi\n  else\n    _msg \"${_STATUS} B: FATAL ERROR: Required file does not exist:\"\n    _msg \"${_STATUS} B: FATAL ERROR: ${_HM_ROOT}/sites/${_DOMAIN}/settings.php\"\n    _msg \"${_STATUS} B: FATAL ERROR: Aborting AegirSetupB installer NOW!\"\n    touch /opt/tmp/status-AegirSetupB-FAIL\n    exit 1\n  fi\n}\n\n_satellite_child_b_letsencrypt() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_letsencrypt\"\n    _debug_proc\n  fi\n  _leParams=\"--cron --ipv4\"\n  _leRoot=\"${_ROOT}/tools/le\"\n  ### _leRoot already points at ${_ROOT}/tools/le, so the key paths must not repeat the tools/le segment\n  _leKeyJ=\"${_leRoot}/private_key.json\"\n  _leKeyP=\"${_leRoot}/private_key.pem\"\n  _leCrtPath=\"${_leRoot}/certs/${_DOMAIN}\"\n  _exeLe=\"${_leRoot}/dehydrated\"\n  _pthLe=\"${_ROOT}/backups/system/dehydrated\"\n  mkdir -p ${_ROOT}/backups/system\n  chmod 
700 ${_ROOT}/backups/system\n  rm -f ${_ROOT}/backups/system/letsencrypt*\n  rm -f ${_ROOT}/backups/system/dehydrated*\n  curl ${_crlGet} \"${_urlHmr}/helpers/dehydrated\" -o ${_pthLe}\n  if [ ! -e \"${_leRoot}\" ]; then\n    _leSetup=\"YES\"\n  fi\n  if [ -e \"${_pthLe}\" ] && grep -q \"OSTYPE\" \"${_pthLe}\"; then\n    mkdir -p ${_leRoot}/.ctrl\n    chmod 0711 ${_leRoot}/.ctrl\n    mkdir -p ${_leRoot}/.acme-challenges\n    chmod 0711 ${_leRoot}/.acme-challenges\n    mkdir -p ${_leRoot}/certs\n    chmod 0700 ${_leRoot}/certs\n    chmod 0711 ${_leRoot}\n    cp -af ${_pthLe} ${_exeLe}\n    if [ -e \"${_exeLe}\" ]; then\n      chmod 0700 ${_exeLe}\n      if [ \"${_STATUS}\" = \"INIT\" ] \\\n        || [ \"${_leSetup}\" = \"YES\" ] \\\n        || [ -e \"${_leRoot}/.ctrl/ssl-demo-mode.pid\" ]; then\n        touch ${_leRoot}/.ctrl/ssl-demo-mode.pid\n        echo -e 'CA=\"https://acme-staging-v02.api.letsencrypt.org/directory\"\\n' > ${_leRoot}/config.sh\n        cp -af ${_leRoot}/config.sh ${_leRoot}/config\n        if [ -e \"${_leKeyJ}\" ]; then\n          mv -f ${_leKeyJ} \"${_leKeyJ}-prev\"\n        fi\n        if [ -e \"${_leKeyP}\" ]; then\n          mv -f ${_leKeyP} \"${_leKeyP}-prev\"\n        fi\n        echo \"\"\n        _msg \"${_STATUS} B: Letsencrypt SSL mode: LIVE after auto-upgrade\"\n        echo \"\"\n        _msg \"${_STATUS} B: LE -- !!! ATTENTION\"\n        _msg \"${_STATUS} B: LE -- Never enable SSL for the Hostmaster site in Aegir\"\n        _msg \"${_STATUS} B: LE -- The Hostmaster site SSL is auto-managed separately\"\n        _msg \"${_STATUS} B: LE -- !!! ATTENTION\"\n        echo \"\"\n      fi\n    else\n      _msg \"${_STATUS} B: LE -- Missing ${_exeLe} file?\"\n    fi\n  fi\n  if [ ! 
-e \"${_leRoot}/.ctrl/ssl-demo-mode.pid\" ] \\\n    && [ -e \"${_leRoot}/config.sh\" ]; then\n    rm -f ${_leRoot}/config.sh\n    rm -f ${_leRoot}/config\n    if [ -e \"${_leKeyJ}\" ]; then\n      mv -f ${_leKeyJ} \"${_leKeyJ}-prev\"\n    fi\n    if [ -e \"${_leKeyP}\" ]; then\n      mv -f ${_leKeyP} \"${_leKeyP}-prev\"\n    fi\n  fi\n  if [ -x \"${_exeLe}\" ] \\\n    && [ ! -e \"${_leRoot}/.ctrl/ssl-demo-mode.pid\" ] \\\n    && [ ! -e \"${_leRoot}/config.sh\" ]; then\n    if [ -e \"${_leCrtPath}/fullchain.pem\" ]; then\n      _msg \"${_STATUS} B: Updating Letsencrypt cert for Hostmaster...\"\n    else\n      _msg \"${_STATUS} B: Creating Letsencrypt cert for Hostmaster...\"\n    fi\n    _onDemandRegPid=\"${_leRoot}/.ctrl/onDemand-register.pid\"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      if [ -e \"${_onDemandRegPid}\" ]; then\n        rm -rf ${_leRoot}/accounts*\n        bash ${_exeLe} --register --accept-terms\n        wait\n        bash ${_exeLe} --register --accept-terms\n        wait\n        bash ${_exeLe} ${_leParams} --domain ${_DOMAIN} --force\n        wait\n        touch ${_leRoot}/.ctrl/forced-${_DOMAIN}-mode.pid\n      else\n        bash ${_exeLe} --register --accept-terms\n        wait\n        bash ${_exeLe} ${_leParams} --domain ${_DOMAIN}\n        wait\n      fi\n    else\n      mkdir -p ${_ROOT}/log\n      if [ -e \"${_onDemandRegPid}\" ]; then\n        rm -rf ${_leRoot}/accounts*\n        bash ${_exeLe} --register --accept-terms >${_ROOT}/log/letsencrypt-${_NOW}.log 2>&1\n        wait\n        bash ${_exeLe} --register --accept-terms >>${_ROOT}/log/letsencrypt-${_NOW}.log 2>&1\n        wait\n        bash ${_exeLe} ${_leParams} --domain ${_DOMAIN} --force >>${_ROOT}/log/letsencrypt-${_NOW}.log 2>&1\n        wait\n        touch ${_leRoot}/.ctrl/forced-${_DOMAIN}-mode.pid\n      else\n        bash ${_exeLe} --register --accept-terms >>${_ROOT}/log/letsencrypt-${_NOW}.log 2>&1\n        wait\n        bash ${_exeLe} ${_leParams} --domain 
${_DOMAIN} >>${_ROOT}/log/letsencrypt-${_NOW}.log 2>&1\n        wait\n      fi\n    fi\n  else\n    if [ -e \"${_leCrtPath}/fullchain.pem\" ]; then\n      rm -rf ${_leCrtPath}\n    fi\n  fi\n}\n\n_satellite_child_b_aegir_ui_enhance() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_aegir_ui_enhance\"\n    _debug_proc\n  fi\n  _msg \"${_STATUS} B: Enhancing Ægir UI, please wait...\"\n  mkdir -p ${_HM_ROOT}/sites/all/{modules,themes,libraries}\n  mkdir -p ${_HM_ROOT}/profiles/hostmaster/modules/{aegir,contrib}\n  cd ${_HM_ROOT}/sites/${_DOMAIN}\n  if [ -e \"${_HM_ROOT}/sites/${_DOMAIN}/settings.php\" ]; then\n    _VM_TEST=\"$(uname -a)\"\n    if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n      _VMFAMILY=\"VS\"\n    else\n      _VMFAMILY=\"XEN\"\n    fi\n    cd ${_HM_ROOT}/sites/${_DOMAIN}\n    ${_DRUSHCMD} cc drush &> /dev/null\n    rm -rf ${_ROOT}/.tmp/cache\n    ${_DRUSHCMD} @hostmaster en aegir_objects -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en fix_ownership -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en fix_permissions -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_civicrm -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_civicrm_cron -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_client -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_cron -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_deploy -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_platform_composer -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_platform_composer_git -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_platform_git -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_tasks_extra -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_site_backup_manager -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en hosting_http_basic_auth -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en libraries -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en overlay -y &> /dev/null\n    ${_DRUSHCMD} 
@hostmaster en overlay_paths -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en revision_deletion -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en timeago -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en userprotect -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster en environment_indicator -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} client 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} clone 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} hosting_admin_client 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} hosting_client_register_user 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} hosting_client_send_welcome 0 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} hosting_feature_client 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_cron_default_interval 3600 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_queue_cron_frequency 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_civicrm_cron_queue_frequency 60 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_queue_task_gc_frequency 300 &> /dev/null\n    if [ -e \"${_ROOT}/log/hosting_cron_use_backend.txt\" ]; then\n      ${_DRUSHCMD} @hostmaster ${_vSet} \\\n        hosting_cron_use_backend 1 &> /dev/null\n    else\n      ${_DRUSHCMD} @hostmaster ${_vSet} \\\n        hosting_cron_use_backend 0 &> /dev/null\n    fi\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_ignore_default_profiles 0 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_queue_tasks_frequency 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_queue_tasks_items 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_delete_force 0 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_alias_automatic_no_www 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_alias_automatic_www 1 &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n     
 hosting_upload_platform_path \"${_ROOT}/static\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_upload_upload_path \"sites/${_DOMAIN}/files/deployment\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_platform_base_path \"${_ROOT}/static/\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      aegir_backup_export_path \"${_ROOT}/backup-exports\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster ${_vSet} \\\n      hosting_default_profile \"standard\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM menu_links \\\n      WHERE link_path='hosting/platforms'\" &> /dev/null\n    ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM menu_links \\\n      WHERE link_path='hosting/sites'\" &> /dev/null\n\n    ${_DRUSHCMD} @hostmaster vdel -y --exact hosting_task_logs_types_display &> /dev/null\n\n    ${_DRUSHCMD} @hostmaster ev 'variable_set(\"hosting_task_logs_types_display\", array(\n      \"error\" => \"error\",\n      \"info\" => \"info\",\n      \"message\" => \"message\",\n      \"notice\" => \"notice\",\n      \"ok\" => \"ok\",\n      \"status\" => \"status\",\n      \"success\" => \"success\",\n      \"warning\" => \"warning\",\n      \"backup\" => \"backup\",\n      \"bootstrap\" => \"bootstrap\",\n      \"command\" => \"command\",\n      \"debug\" => \"none\",\n      \"debugnotify\" => \"none\",\n      \"queue\" => \"queue\"))' &> /dev/null\n\n    if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n      pBy=\"Octopus System powered by Barracuda\"\n      ${_DRUSHCMD} @hostmaster ${_vSet} site_name \"${pBy}\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster ${_vSet} site_mail \"${_MY_OCTO_EMAIL}\" &> /dev/null\n      cp -af /opt/tmp/boa/aegir/helpers/make_home.php.txt ./\n      mv -f make_home.php.txt make_home.php &> /dev/null\n      ${_DRUSHCMD} php-script make_home &> /dev/null\n      rm -f make_home.php\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE field_data_body \\\n        SET body_format='full_html' WHERE bundle='book' 
AND entity_type='node'\" &> /dev/null\n      cp -af /opt/tmp/boa/aegir/helpers/make_client.php.txt ./\n      mv -f make_client.php.txt make_client.php &> /dev/null\n      if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n        || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n          SET status=-1 WHERE nid=7\" &> /dev/null\n      else\n        ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n          SET status=-1 WHERE nid=5\" &> /dev/null\n      fi\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        ${_DRUSHCMD} php-script make_client ${_CLIENT_EMAIL}\n      else\n        ${_DRUSHCMD} php-script make_client ${_CLIENT_EMAIL} &> /dev/null\n      fi\n      rm -f make_client.php\n    else\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n        SET status=-2 WHERE publish_path LIKE '%/aegir/distro/%'\" &> /dev/null\n      _THIS_HM_PLR=$(cat ${_ROOT}/.drush/hostmaster.alias.drushrc.php \\\n          | grep \"root'\" \\\n          | cut -d: -f2 \\\n          | awk '{ print $3}' \\\n          | sed \"s/[\\,']//g\" 2>&1)\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_platform \\\n      SET status=1 WHERE publish_path LIKE '${_THIS_HM_PLR}'\" &> /dev/null\n      pBy=\"Octopus System powered by Barracuda\"\n      ${_DRUSHCMD} @hostmaster ${_vSet} site_name \"${pBy}\" &> /dev/null\n      cp -af /opt/tmp/boa/aegir/helpers/make_home.php.txt ./\n      mv -f make_home.php.txt make_home.php &> /dev/null\n      ${_DRUSHCMD} php-script make_home &> /dev/null\n      rm -f make_home.php\n      ${_DRUSHCMD} @hostmaster sqlq \"DELETE FROM field_data_body \\\n        WHERE body_format IS NULL\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE field_data_body \\\n        SET body_format='full_html' WHERE bundle='book' AND entity_type='node'\" &> /dev/null\n    fi\n\n    ${_DRUSHCMD} @hostmaster sqlq \"UPDATE 
menu_links \\\n      SET hidden=1 WHERE plid=0 AND menu_name LIKE 'user-menu'\" &> /dev/null\n\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      ${_DRUSHCMD} @hostmaster sqlq \"TRUNCATE filter_format\"  &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"TRUNCATE filter_formats\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"TRUNCATE filter\"         &> /dev/null\n      ${_DRUSHCMD} @hostmaster en hosting_le -y\n      ${_DRUSHCMD} @hostmaster en hosting_le_vhost -y\n      ${_DRUSHCMD} @hostmaster en hosting_custom_settings -y\n      ${_DRUSHCMD} @hostmaster fr hosting_custom_settings -y\n      ${_DRUSHCMD} @hostmaster cache-clear all\n      ${_DRUSHCMD} @hostmaster updb -y\n    else\n      ${_DRUSHCMD} @hostmaster sqlq \"TRUNCATE filter_format\"  &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"TRUNCATE filter_formats\" &> /dev/null\n      ${_DRUSHCMD} @hostmaster sqlq \"TRUNCATE filter\"         &> /dev/null\n      ${_DRUSHCMD} @hostmaster en hosting_le -y               &> /dev/null\n      ${_DRUSHCMD} @hostmaster en hosting_le_vhost -y         &> /dev/null\n      ${_DRUSHCMD} @hostmaster en hosting_custom_settings -y  &> /dev/null\n      ${_DRUSHCMD} @hostmaster fr hosting_custom_settings -y  &> /dev/null\n      ${_DRUSHCMD} @hostmaster cache-clear all                &> /dev/null\n      ${_DRUSHCMD} @hostmaster updb -y                        &> /dev/null\n    fi\n\n    if [ \"${_LOCAL_STATUS}\" = \"INIT\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        ${_DRUSHCMD} @hostmaster \\\n          urol \"admin\" --mail=${_CLIENT_EMAIL}\n        ${_DRUSHCMD} @hostmaster \\\n          urol \"aegir account manager\" --mail=${_CLIENT_EMAIL}\n        ${_DRUSHCMD} @hostmaster \\\n          urol \"aegir client\" --mail=${_CLIENT_EMAIL}\n        ${_DRUSHCMD} @hostmaster sqlq \"REPLACE INTO userprotect \\\n          (uid, up_name, up_mail, up_pass, up_status, up_roles, up_cancel, up_edit, up_type, up_openid) VALUES \\\n          ('0', '0', '0', 
'0', '0', '0', '1', '1', 'user', '1'),\\\n          ('1', '0', '0', '0', '0', '0', '0', '0', 'admin', '0'),\\\n          ('1', '1', '1', '1', '1', '1', '1', '1', 'user', '1'),\\\n          ('2', '0', '0', '0', '1', '1', '1', '0', 'user', '1');\"\n      else\n        ${_DRUSHCMD} @hostmaster \\\n          urol \"admin\" --mail=${_CLIENT_EMAIL} &> /dev/null\n        ${_DRUSHCMD} @hostmaster \\\n          urol \"aegir account manager\" --mail=${_CLIENT_EMAIL} &> /dev/null\n        ${_DRUSHCMD} @hostmaster \\\n          urol \"aegir client\" --mail=${_CLIENT_EMAIL} &> /dev/null\n        ${_DRUSHCMD} @hostmaster sqlq \"REPLACE INTO userprotect \\\n          (uid, up_name, up_mail, up_pass, up_status, up_roles, up_cancel, up_edit, up_type, up_openid) VALUES \\\n          ('0', '0', '0', '0', '0', '0', '1', '1', 'user', '1'),\\\n          ('1', '0', '0', '0', '0', '0', '0', '0', 'admin', '0'),\\\n          ('1', '1', '1', '1', '1', '1', '1', '1', 'user', '1'),\\\n          ('2', '0', '0', '0', '1', '1', '1', '0', 'user', '1');\" &> /dev/null\n      fi\n    fi\n\n    if [ ! 
-e \"${_ROOT}/log/hosting_le_enable.txt\" ]; then\n      ${_DRUSHCMD} @hostmaster sqlq \"UPDATE hosting_service SET type='nginx_ssl' WHERE service='http'\" &> /dev/null\n      wait\n      ${_DRUSHCMD} @hostmaster hosting-task @server_master verify --force &> /dev/null\n      wait\n      mkdir -p ${_ROOT}/log\n      touch ${_ROOT}/log/hosting_le_enable.txt\n    fi\n  fi\n}\n\n_satellite_child_b_vhosts_hotfix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_vhosts_hotfix\"\n    _debug_proc\n  fi\n  ###---### Make sure there are no ghost or disabled vhosts still listening on the IP.\n  #\n  rPth=\"${_ROOT}/config/server\"\n  sed -i \"s/.*listen .*127.0.0.1:80;.*//g\"             ${rPth}_*/nginx.conf\n  wait\n  sed -i \"s/listen .*\\*:80;/listen  \\*:80;/g\"          ${rPth}_*/nginx.conf\n  wait\n  if [ -e \"/data/conf/${_USER}_use_proxysql.txt\" ]; then\n    sed -i \"s/param db_port.*/param db_port   6033;/g\" ${rPth}_*/nginx/vhost.d/*\n  else\n    sed -i \"s/param db_port.*/param db_port   3306;/g\" ${rPth}_*/nginx/vhost.d/*\n  fi\n  wait\n  sed -i \"s/listen .*\\*:80;/listen  \\*:80;/g\"          ${rPth}_*/nginx/vhost.d/*\n  wait\n}\n\n_satellite_child_b_symlink_global_inc() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_symlink_global_inc\"\n    _debug_proc\n  fi\n  cd ${_ROOT}\n  if [ -e \"/data/conf/global.inc\" ]; then\n    ln -sfn /data/conf/global.inc ${_ROOT}/config/includes/global.inc\n  fi\n}\n\n_satellite_child_b_redis_enable_finalize() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _satellite_child_b_redis_enable_finalize\"\n    _debug_proc\n  fi\n  if [ -e \"${_HM_ROOT}/modules/o_contrib\" ]; then\n    rm -rf ${_HM_ROOT}/modules/o_contrib\n  fi\n  if [ ! 
-e \"${_HM_ROOT}/modules/o_contrib_seven/redis_edge\" ]; then\n    if [ -e \"/data/all/000/modules/redis_edge\" ]; then\n      mkdir -p ${_HM_ROOT}/modules/o_contrib_seven\n      rm -f ${_HM_ROOT}/modules/o_contrib_seven/{redis_edge,redis}\n      ln -sfn /data/all/000/modules/redis_edge \\\n        ${_HM_ROOT}/modules/o_contrib_seven/redis_edge\n    fi\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    ${_DRUSHCMD} @hostmaster en hosting_custom_settings -y\n    ${_DRUSHCMD} @hostmaster fr hosting_custom_settings -y\n    ${_DRUSHCMD} @hostmaster cache-clear all\n  else\n    ${_DRUSHCMD} @hostmaster en hosting_custom_settings -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster fr hosting_custom_settings -y &> /dev/null\n    ${_DRUSHCMD} @hostmaster cache-clear all &> /dev/null\n  fi\n  touch /opt/tmp/status-AegirSetupB-OK\n}\n"
  },
  {
    "path": "lib/functions/solr.sh.inc",
    "content": "\n_if_fix_solr9_mod() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_fix_solr9_mod\"\n  fi\n  [ ! -e \"/opt/solr9/server/modules/ltr.mod\" ] && cp -af ${_bldPth}/aegir/conf/solr9/*  /opt/solr9/server/modules/\n  chown root:root /opt/solr9/server/modules/*\n  chmod 644 /opt/solr9/server/modules/*\n}\n\n_if_fix_solr9_permissions() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_fix_solr9_permissions\"\n  fi\n  [ ! -e \"/var/solr9\" ] && mkdir -p /var/solr9\n  [ -L \"/var/solr9\" ] && chown -R solr9:solr9 /mnt/*/var/solr9\n  [ -L \"/var/solr9\" ] && chmod 750 /mnt/*/var/solr9\n  [ -e \"/var/solr9\" ] && chown -R solr9:solr9 /var/solr9/*\n  [ -e \"/var/solr9\" ] && chmod 750 /var/solr9\n}\n\n_if_fix_solr7_permissions() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_fix_solr7_permissions\"\n  fi\n  [ ! -e \"/var/solr7\" ] && mkdir -p /var/solr7\n  [ -L \"/var/solr7\" ] && chown -R solr7:solr7 /mnt/*/var/solr7\n  [ -L \"/var/solr7\" ] && chmod 750 /mnt/*/var/solr7\n  [ -e \"/var/solr7\" ] && chown -R solr7:solr7 /var/solr7/*\n  [ -e \"/var/solr7\" ] && chmod 750 /var/solr7\n}\n\n_if_solr_nine() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_solr_nine\"\n  fi\n\n  if [ -e \"/var/solr9/data\" ] && [ -L \"/var/solr9\" ]; then\n      _if_fix_solr9_permissions\n      _if_fix_solr9_mod\n  fi\n\n  local _R_SOLR=9\n  local _N_SOLR=\"solr${_R_SOLR}\"\n  local _SOLR_VPATH=\"/var/${_N_SOLR}/data\"\n  local _SOLR_CTRL=\"/var/${_N_SOLR}/solr-${_SOLR_9_VRN}-version.txt\"\n\n  if [ ! 
-d \"${_SOLR_VPATH}\" ]; then\n    echo \" \"\n    _tPrmt=\"Do you want to install MultiCore Apache Solr ${_R_SOLR}\"\n    _tPrmt=$(echo -n ${_tPrmt} | fmt -su -w 2500 2>&1)\n    if _prompt_yes_no \"${_tPrmt}?\" ; then\n      true\n      _msg \"INFO: Installing MultiCore Apache Solr ${_R_SOLR}...\"\n      cd /var/opt\n      curl ${_crlGet} \"${_urlDev}/solr-${_SOLR_9_VRN}.tgz\" -o \"solr-${_SOLR_9_VRN}.tgz\"\n      rm -rf solr-${_SOLR_9_VRN}\n      adduser --system --group --shell /bin/bash --home /var/${_N_SOLR} ${_N_SOLR} &> /dev/null\n      usermod -aG users ${_N_SOLR}\n      tar xzf solr-${_SOLR_9_VRN}.tgz solr-${_SOLR_9_VRN}/bin/install_solr_service.sh --strip-components=2\n      bash ./install_solr_service.sh solr-${_SOLR_9_VRN}.tgz -f -i /opt -d /var/${_N_SOLR} -u ${_N_SOLR} -s ${_N_SOLR} -p 9099 &> /dev/null\n      cp -af ${_bldPth}/docs/SOLR.txt ${_SOLR_VPATH}/README.txt &> /dev/null\n      echo ${_SOLR_9_VRN} > ${_SOLR_CTRL}\n      cd /var/opt\n      _if_fix_solr9_permissions\n      _if_fix_solr9_mod\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} installed\"\n    else\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} not installed\"\n    fi\n  fi\n\n  if [ -e \"/etc/default/${_N_SOLR}.in.sh\" ]; then\n    if [ \"${_OS_CODE}\" = \"excalibur\" ] || [ -x \"/usr/lib/jvm/java-21-openjdk-amd64/bin/java\" ]; then\n      _useJava=21\n    else\n      _useJava=17\n    fi\n    _SOLR9_JAVA_TEST=$(grep \"BOA ${_xSrl} Path to Java ${_useJava} on ${_OS_CODE}\" /etc/default/${_N_SOLR}.in.sh 2>&1)\n    _SOLR9_MODULES_TEST=$(grep \"analysis-extras,clustering,extraction,langid,ltr\" /etc/default/${_N_SOLR}.in.sh 2>&1)\n    _SOLR9_STOP_TEST=$(grep \"SOLR_STOP_PORT=19099\" /etc/default/${_N_SOLR}.in.sh 2>&1)\n    if [[ \"${_SOLR9_JAVA_TEST}\" =~ \"BOA ${_xSrl} Path to Java ${_useJava} on ${_OS_CODE}\" ]] \\\n      && [[ \"${_SOLR9_MODULES_TEST}\" =~ \"analysis-extras,clustering,extraction,langid,ltr\" ]] \\\n      && [[ \"${_SOLR9_STOP_TEST}\" =~ 
\"SOLR_STOP_PORT=19099\" ]]; then\n      _DO_NOTHING=YES\n    else\n      sed -i \"s/^SOLR_STOP_.*//g\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      sed -i \"s/^SOLR_JAVA_HOME=.*//g\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      sed -i \"s/^SOLR_MODULES=.*//g\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      sed -i \"s/^LOG4J_FORMAT_MSG_NO_LOOKUPS=.*//g\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      sed -i \"/^$/d\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      echo \"LOG4J_FORMAT_MSG_NO_LOOKUPS=true\" >> /etc/default/${_N_SOLR}.in.sh\n      echo \"SOLR_STOP_PORT=19099\" >> /etc/default/${_N_SOLR}.in.sh\n      echo \"SOLR_STOP_KEY=mycustomkey9\" >> /etc/default/${_N_SOLR}.in.sh\n      if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n        || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n        || [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n        if [ \"${_OS_CODE}\" = \"excalibur\" ] || [ -x \"/usr/lib/jvm/java-21-openjdk-amd64/bin/java\" ]; then\n          echo \"SOLR_JAVA_HOME=\\\"/usr/lib/jvm/java-21-openjdk\\\" # BOA ${_xSrl} Path to Java 21 on ${_OS_CODE}\" >> /etc/default/${_N_SOLR}.in.sh\n        else\n          echo \"SOLR_JAVA_HOME=\\\"/usr/lib/jvm/java-17-openjdk\\\" # BOA ${_xSrl} Path to Java 17 on ${_OS_CODE}\" >> /etc/default/${_N_SOLR}.in.sh\n        fi\n        echo \"SOLR_MODULES=\\\"analysis-extras,clustering,extraction,langid,ltr\\\" # Auto-Enable Modules\" >> /etc/default/${_N_SOLR}.in.sh\n      fi\n    fi\n    if [ ! 
-e \"${_SOLR_CTRL}\" ]; then\n      _msg \"INFO: Upgrading MultiCore Apache Solr ${_R_SOLR}...\"\n      cd /var/opt\n      curl ${_crlGet} \"${_urlDev}/solr-${_SOLR_9_VRN}.tgz\" -o \"solr-${_SOLR_9_VRN}.tgz\"\n      rm -rf solr-${_SOLR_9_VRN}\n      tar xzf solr-${_SOLR_9_VRN}.tgz solr-${_SOLR_9_VRN}/bin/install_solr_service.sh --strip-components=2\n      bash ./install_solr_service.sh solr-${_SOLR_9_VRN}.tgz -f -i /opt -d /var/${_N_SOLR} -u ${_N_SOLR} -s ${_N_SOLR} -p 9099 &> /dev/null\n      echo ${_SOLR_9_VRN} > ${_SOLR_CTRL}\n      _if_fix_solr9_permissions\n      _if_fix_solr9_mod\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} upgrade completed\"\n    fi\n  fi\n  if [ \"${_UP_JDK}\" = \"YES\" ] && [ -e \"/etc/init.d/${_N_SOLR}\" ]; then\n    _msg \"INFO: Solr 9 restart in progress - required after java upgrade\"\n    _mrun \"service ${_N_SOLR} restart\"\n    _msg \"INFO: Solr 9 restart completed\"\n  fi\n}\n\n_if_solr_seven() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_solr_seven\"\n  fi\n\n  if [ -e \"/var/solr7/data\" ] && [ -L \"/var/solr7\" ]; then\n    _if_fix_solr7_permissions\n  fi\n\n  local _R_SOLR=7\n  local _N_SOLR=\"solr${_R_SOLR}\"\n  local _SOLR_VPATH=\"/var/${_N_SOLR}/data\"\n  local _SOLR_CTRL=\"/var/${_N_SOLR}/solr-${_SOLR_7_VRN}-version.txt\"\n\n  if [ ! 
-d \"${_SOLR_VPATH}\" ]; then\n    echo \" \"\n    _tPrmt=\"Do you want to install MultiCore Apache Solr ${_R_SOLR}\"\n    _tPrmt=$(echo -n ${_tPrmt} | fmt -su -w 2500 2>&1)\n    if _prompt_yes_no \"${_tPrmt}?\" ; then\n      true\n      _msg \"INFO: Installing MultiCore Apache Solr ${_R_SOLR}...\"\n      cd /var/opt\n      curl ${_crlGet} \"${_urlDev}/solr-${_SOLR_7_VRN}.tgz\" -o \"solr-${_SOLR_7_VRN}.tgz\"\n      rm -rf solr-${_SOLR_7_VRN}\n      adduser --system --group --shell /bin/bash --home /var/${_N_SOLR} ${_N_SOLR} &> /dev/null\n      usermod -aG users ${_N_SOLR}\n      tar xzf solr-${_SOLR_7_VRN}.tgz solr-${_SOLR_7_VRN}/bin/install_solr_service.sh --strip-components=2\n      bash ./install_solr_service.sh solr-${_SOLR_7_VRN}.tgz -f -i /opt -d /var/${_N_SOLR} -u ${_N_SOLR} -s ${_N_SOLR} -p 9077 &> /dev/null\n      cp -af ${_bldPth}/docs/SOLR.txt ${_SOLR_VPATH}/README.txt &> /dev/null\n      echo ${_SOLR_7_VRN} > ${_SOLR_CTRL}\n      cd /var/opt\n      _if_fix_solr7_permissions\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} installed\"\n    else\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} not installed\"\n    fi\n  fi\n\n  if [ -e \"/etc/default/${_N_SOLR}.in.sh\" ]; then\n    _SOLR7_JAVA_TEST=$(grep \"BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" /etc/default/${_N_SOLR}.in.sh 2>&1)\n    _SOLR7_STOP_TEST=$(grep \"SOLR_STOP_PORT=17077\" /etc/default/${_N_SOLR}.in.sh 2>&1)\n    if [[ \"${_SOLR7_JAVA_TEST}\" =~ \"BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" ]] \\\n      && [[ \"${_SOLR7_STOP_TEST}\" =~ \"SOLR_STOP_PORT=17077\" ]]; then\n      _DO_NOTHING=YES\n    else\n      sed -i \"s/^SOLR_STOP_.*//g\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      sed -i \"s/^SOLR_JAVA_HOME=.*//g\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      sed -i \"s/^LOG4J_FORMAT_MSG_NO_LOOKUPS=.*//g\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      sed -i \"/^$/d\" /etc/default/${_N_SOLR}.in.sh\n      wait\n      echo \"LOG4J_FORMAT_MSG_NO_LOOKUPS=true\" >> 
/etc/default/${_N_SOLR}.in.sh\n      echo \"SOLR_STOP_PORT=17077\" >> /etc/default/${_N_SOLR}.in.sh\n      echo \"SOLR_STOP_KEY=mycustomkey7\" >> /etc/default/${_N_SOLR}.in.sh\n      echo \"SOLR_JAVA_HOME=\\\"/usr/lib/jvm/java-11-openjdk\\\" # BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" >> /etc/default/${_N_SOLR}.in.sh\n    fi\n    if [ ! -e \"${_SOLR_CTRL}\" ]; then\n      _msg \"INFO: Upgrading MultiCore Apache Solr ${_R_SOLR}...\"\n      cd /var/opt\n      curl ${_crlGet} \"${_urlDev}/solr-${_SOLR_7_VRN}.tgz\" -o \"solr-${_SOLR_7_VRN}.tgz\"\n      rm -rf solr-${_SOLR_7_VRN}\n      tar xzf solr-${_SOLR_7_VRN}.tgz solr-${_SOLR_7_VRN}/bin/install_solr_service.sh --strip-components=2\n      bash ./install_solr_service.sh solr-${_SOLR_7_VRN}.tgz -f -i /opt -d /var/${_N_SOLR} -u ${_N_SOLR} -s ${_N_SOLR} -p 9077 &> /dev/null\n      echo ${_SOLR_7_VRN} > ${_SOLR_CTRL}\n      _if_fix_solr7_permissions\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} upgrade completed\"\n    fi\n  fi\n  if [ \"${_UP_JDK}\" = \"YES\" ] && [ -e \"/etc/init.d/${_N_SOLR}\" ]; then\n    _msg \"INFO: Solr 7 restart in progress - required after java upgrade\"\n    _mrun \"service ${_N_SOLR} restart\"\n    _msg \"INFO: Solr 7 restart completed\"\n  fi\n}\n\n_if_solr_four() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_solr_four\"\n  fi\n\n  local _R_SOLR=4\n  local _R_JETTY=9\n  local _JETTY_CTRL=\"/opt/jetty9/jetty-ctrl-${_JETTY_9_VRN}-${_SOLR_4_VRN}-version.txt\"\n  local _SOLR_CTRL=\"/opt/jetty9/solr-${_SOLR_4_VRN}-version.txt\"\n\n  if [ -e \"/opt/jetty9/start.d/http.ini\" ]; then\n    local _PORT_CTRL=\"/opt/jetty9/start.d/.fixed.http.ini.txt\"\n  else\n    local _PORT_CTRL=\"${_JETTY_CTRL}\"\n  fi\n\n  if [ ! 
-d \"/opt/solr4\" ]; then\n    echo \" \"\n    _tPrmt=\"Do you want to install MultiCore Apache Solr ${_R_SOLR}\"\n    _tPrmt=\"${_tPrmt} with Jetty ${_R_JETTY}\"\n    _tPrmt=$(echo -n ${_tPrmt} | fmt -su -w 2500 2>&1)\n    if _prompt_yes_no \"${_tPrmt}?\" ; then\n      true\n      _msg \"INFO: Installing MultiCore Apache Solr ${_R_SOLR} with Jetty ${_R_JETTY}...\"\n      cd /var/opt\n      rm -rf jetty-distribution-*\n      rm -rf /opt/solr4\n      rm -rf /opt/jetty9\n      rm -f /etc/jetty.conf\n      _get_dev_arch \"jetty-distribution-${_JETTY_9_VRN}.tar.gz\"\n      mv /var/opt/jetty-distribution-${_JETTY_9_VRN} /opt/jetty9\n      echo ${_JETTY_9_VRN} > ${_JETTY_CTRL}\n      _get_dev_arch \"solr-${_SOLR_4_VRN}.tgz\"\n      cp -af /var/opt/solr-${_SOLR_4_VRN}/dist/solr-${_SOLR_4_VRN}.war \\\n        /opt/jetty9/webapps/solr.war\n      mv -f /var/opt/solr-${_SOLR_4_VRN}/example/multicore \\\n        /opt/solr4 &> /dev/null\n      mkdir -p /opt/solr4/core{0,1,2,3,4,5,6,7,8,9}/conf\n      mkdir -p /opt/solr4/core{0,1,2,3,4,5,6,7,8,9}/data\n      mkdir -p /var/log/jetty9\n      if [ ! 
-e \"/opt/tika9\" ]; then\n        cd /var/opt\n        rm -rf apachesolr_attachments*\n        _get_dev_contrib \"apachesolr_attachments-7.x-1.x-dev.tar.gz\"\n        cd /var/opt/solr-${_SOLR_4_VRN}/example/solr/collection1/conf/\n        patch -p0 < \\\n          /var/opt/apachesolr_attachments/solrconfig.tika.patch &> /dev/null\n        ln -sfn /opt/jetty9/lib /opt/tika9\n        ln -sfn /opt/jetty9/lib /opt/tika\n        cd /opt/tika9\n        rm -f tika-app*\n        _TIKA_V=\"1.8 1.9 1.10 1.11 1.12 1.13 1.20\"\n        for e in ${_TIKA_V}; do\n          wget ${_wgetGet} ${_urlDev}/tika-app-${e}.jar\n        done\n      fi\n      for _Dir in `find /opt/solr4/core{0,1,2,3,4,5,6,7,8,9}/ \\\n        -maxdepth 1 -mindepth 1 -type d | grep conf`; do\n        if [[ \"${_Dir}\" =~ \"/opt/solr4/core\" ]]; then\n          rm -rf ${_Dir}/*\n        fi\n        cp -af /var/opt/solr-${_SOLR_4_VRN}/example/solr/collection1/conf/* \\\n          ${_Dir}/ &> /dev/null\n      done\n      adduser --system --group --home /opt/solr4 jetty9 &> /dev/null\n      if [ ! 
-e \"/opt/solr4/search_api_solr-7.x-1.17.log\" ]; then\n        cd /var/opt\n        rm -rf search_api_solr*\n        _get_dev_contrib \"search_api_solr-7.x-1.17.tar.gz\"\n        for _Dir in `find /opt/solr4/core{0,1,2,3,4,5,6,7,8,9}/ \\\n          -maxdepth 1 -mindepth 1 -type d | grep conf`; do\n          cp -af /var/opt/search_api_solr/solr-conf/4.x/* ${_Dir}/ &> /dev/null\n        done\n        sed -i \"s/8983/8099/g\" \\\n          /opt/solr4/core{0,1,2,3,4,5,6,7,8,9}/conf/solrcore.properties &> /dev/null\n        touch /opt/solr4/search_api_solr-7.x-1.17.log\n      fi\n      cp -af ${_bldPth}/docs/SOLR.txt /opt/solr4/README.txt &> /dev/null\n      cd /var/opt\n      _get_dev_arch \"slf4j-${_SLF4J_VRN}.tar.gz\"\n      _slf4jPth=\"/var/opt/slf4j-${_SLF4J_VRN}\"\n      rm -rf /opt/jetty9/lib/ext/*\n      cp -af ${_slf4jPth}/jcl-over-slf4j*.jar /opt/jetty9/lib/ext/\n      cp -af ${_slf4jPth}/jul-to-slf4j*.jar   /opt/jetty9/lib/ext/\n      cp -af ${_slf4jPth}/slf4j-api*.jar      /opt/jetty9/lib/ext/\n      cp -af ${_slf4jPth}/slf4j-log4j12*.jar  /opt/jetty9/lib/ext/\n      _get_dev_arch \"log4j-${_LOGJ4_VRN}.tar.gz\"\n      cp -af /var/opt/log4j-${_LOGJ4_VRN}/*.jar /opt/jetty9/lib/ext/\n      rm -f /opt/jetty9/lib/ext/*sources.jar\n      chown -R jetty9:jetty9 /opt/solr4\n      chown -R jetty9:jetty9 /opt/jetty9\n      chown -R jetty9:jetty9 /var/log/jetty9\n      echo \"JAVA=/usr/bin/java11 # BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" > /etc/default/jetty9\n      echo \"JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 # BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" >> /etc/default/jetty9\n      echo \"NO_START=0 # Start on boot\" >> /etc/default/jetty9\n      echo \"JETTY_HOST=127.0.0.1 # Listen on localhost\" >> /etc/default/jetty9\n      echo \"JETTY_PORT=8099 # Run on this port\" >> /etc/default/jetty9\n      echo \"JETTY_USER=jetty9 # Run as this user\" >> /etc/default/jetty9\n      echo \"JETTY_HOME=/opt/jetty9 # Home directory\" >> /etc/default/jetty9\n     
 echo \"JETTY_LOGS=/var/log/jetty9 # Logs directory\" >> /etc/default/jetty9\n      echo \"JETTY_RUN=/run # Run directory\" >> /etc/default/jetty9\n      echo \"JETTY_PID=\\$JETTY_RUN/jetty9.pid # Pid file\" >> /etc/default/jetty9\n      echo \"JAVA_OPTIONS=\\\"-Xms64m -Xmx128m -Djava.awt.headless=true \\\n        -Dsolr.solr.home=/opt/solr4 \\$JAVA_OPTIONS\\\" \\\n        # Options\" | fmt -su -w 2500 >> /etc/default/jetty9\n      if [ -e \"/opt/jetty9/start.d/http.ini\" ]; then\n        sed -i \"s/8080/8099/g\" /opt/jetty9/start.d/http.ini &> /dev/null\n        touch /opt/jetty9/start.d/.fixed.http.ini.txt &> /dev/null\n      fi\n      if [ -e \"/opt/jetty9/start.ini\" ]; then\n        sed -i \"s/8080/8099/g\" /opt/jetty9/start.ini &> /dev/null\n        touch /opt/jetty9/.fixed.start.ini.txt &> /dev/null\n      fi\n      sed -i \"s/8080/8099/g\" /opt/jetty9/bin/jetty.sh &> /dev/null\n      ln -sfn /opt/jetty9/bin/jetty.sh /etc/init.d/jetty9 &> /dev/null\n      chmod 755 /etc/init.d/jetty9\n      _mrun \"update-rc.d jetty9 defaults\"\n      _mrun \"service jetty9 start\"\n      echo ${_SOLR_4_VRN} > ${_SOLR_CTRL}\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} with Jetty ${_R_JETTY} installed\"\n    else\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} with Jetty ${_R_JETTY} not installed\"\n    fi\n  fi\n\n  if [ \"${_UP_JDK}\" = \"YES\" ] && [ -e \"/etc/init.d/jetty9\" ]; then\n    _msg \"INFO: Jetty 9 restart in progress - required after java upgrade\"\n    pkill -9 -f jetty9\n    _mrun \"service jetty9 start\"\n    _msg \"INFO: Jetty 9 restart completed\"\n  fi\n\n  if [ -e \"/opt/jetty9/VERSION.txt\" ]; then\n    _JETTY9_JAVA_TEST=$(grep \"BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" /etc/default/jetty9 2>&1)\n    if [[ ! 
\"${_JETTY9_JAVA_TEST}\" =~ \"BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" ]]; then\n      echo \"JAVA=/usr/bin/java11 # BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" > /etc/default/jetty9\n      echo \"JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 # BOA ${_xSrl} Path to Java 11 on ${_OS_CODE}\" >> /etc/default/jetty9\n      echo \"NO_START=0 # Start on boot\" >> /etc/default/jetty9\n      echo \"JETTY_HOST=127.0.0.1 # Listen on localhost\" >> /etc/default/jetty9\n      echo \"JETTY_PORT=8099 # Run on this port\" >> /etc/default/jetty9\n      echo \"JETTY_USER=jetty9 # Run as this user\" >> /etc/default/jetty9\n      echo \"JETTY_HOME=/opt/jetty9 # Home directory\" >> /etc/default/jetty9\n      echo \"JETTY_LOGS=/var/log/jetty9 # Logs directory\" >> /etc/default/jetty9\n      echo \"JETTY_RUN=/run # Run directory\" >> /etc/default/jetty9\n      echo \"JETTY_PID=\\$JETTY_RUN/jetty9.pid # Pid file\" >> /etc/default/jetty9\n      echo \"JAVA_OPTIONS=\\\"-Xms64m -Xmx128m \\\n        -Djava.awt.headless=true \\\n        -Dsolr.solr.home=/opt/solr4 \\$JAVA_OPTIONS\\\" \\\n        # Options\" | fmt -su -w 2500 >> /etc/default/jetty9\n    fi\n    if [ ! -e \"${_PORT_CTRL}\" ] \\\n      || [ ! -e \"/opt/jetty9/lib/ext/slf4j-api-${_SLF4J_VRN}.jar\" ] \\\n      || [ ! -e \"/opt/jetty9/.fixed.start.ini.txt\" ] \\\n      || [ ! -e \"/opt/jetty9/lib/tika-app-1.8.jar\" ] \\\n      || [ ! -e \"/opt/jetty9/lib/ext/log4j-1.2.17.jar\" ] \\\n      || [ ! -e \"${_JETTY_CTRL}\" ] \\\n      || [ ! 
-e \"${_SOLR_CTRL}\" ]; then\n      _msg \"INFO: Upgrading MultiCore Apache Solr ${_R_SOLR} with Jetty ${_R_JETTY}...\"\n      cd /var/opt\n      rm -rf jetty-distribution-*\n      pkill -9 -f jetty9\n      mv -f /opt/jetty9 ${_vBs}/jetty9-${_xSrl}-${_X_VERSION}-${_NOW}\n      _get_dev_arch \"jetty-distribution-${_JETTY_9_VRN}.tar.gz\"\n      mv /var/opt/jetty-distribution-${_JETTY_9_VRN} /opt/jetty9\n      echo ${_JETTY_9_VRN} > ${_JETTY_CTRL}\n      if [ -e \"/opt/jetty9/start.d/http.ini\" ]; then\n        sed -i \"s/8080/8099/g\" /opt/jetty9/start.d/http.ini &> /dev/null\n        touch /opt/jetty9/start.d/.fixed.http.ini.txt &> /dev/null\n      fi\n      if [ -e \"/opt/jetty9/start.ini\" ]; then\n        sed -i \"s/8080/8099/g\" /opt/jetty9/start.ini &> /dev/null\n        touch /opt/jetty9/.fixed.start.ini.txt &> /dev/null\n      fi\n      sed -i \"s/8080/8099/g\" /opt/jetty9/bin/jetty.sh &> /dev/null\n      _get_dev_arch \"solr-${_SOLR_4_VRN}.tgz\"\n      cp -af /var/opt/solr-${_SOLR_4_VRN}/dist/solr-${_SOLR_4_VRN}.war \\\n        /opt/jetty9/webapps/solr.war\n      rm -rf /opt/jetty9/solr\n      cd /opt/jetty9/lib/\n      rm -f tika-app*\n      _TIKA_V=\"1.8 1.9 1.10 1.11 1.12 1.13 1.20\"\n      for e in ${_TIKA_V}; do\n        wget ${_wgetGet} ${_urlDev}/tika-app-${e}.jar\n      done\n      cd /var/opt\n      _get_dev_arch \"slf4j-${_SLF4J_VRN}.tar.gz\"\n      _slf4jPth=\"/var/opt/slf4j-${_SLF4J_VRN}\"\n      rm -rf /opt/jetty9/lib/ext/*\n      cp -af ${_slf4jPth}/jcl-over-slf4j*.jar /opt/jetty9/lib/ext/\n      cp -af ${_slf4jPth}/jul-to-slf4j*.jar   /opt/jetty9/lib/ext/\n      cp -af ${_slf4jPth}/slf4j-api*.jar      /opt/jetty9/lib/ext/\n      cp -af ${_slf4jPth}/slf4j-log4j12*.jar  /opt/jetty9/lib/ext/\n      _get_dev_arch \"log4j-${_LOGJ4_VRN}.tar.gz\"\n      cp -af /var/opt/log4j-${_LOGJ4_VRN}/*.jar /opt/jetty9/lib/ext/\n      rm -f /opt/jetty9/lib/ext/*sources.jar\n      chown -R jetty9:jetty9 /opt/jetty9\n      _mrun \"service jetty9 start\"\n      echo 
${_SOLR_4_VRN} > ${_SOLR_CTRL}\n      _msg \"INFO: MultiCore Apache Solr ${_R_SOLR} with Jetty ${_R_JETTY} upgrade completed\"\n    fi\n  fi\n}\n\n_if_install_upgrade_solr() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_upgrade_solr\"\n  fi\n  _if_to_do_fix\n  _if_hosted_sys\n\n  if [ \"${_OS_CODE}\" != \"excalibur\" ]; then\n    if [[ \"${_XTRAS_LIST}\" =~ \"SR4\" ]] \\\n      || [ -e \"/opt/solr4/solr.xml\" ] \\\n      || [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ -e \"/usr/bin/java11\" ]; then\n        _if_solr_four\n      fi\n    fi\n    if [[ \"${_XTRAS_LIST}\" =~ \"SR7\" ]] \\\n      || [ -e \"/var/solr7/logs/solr.log\" ] \\\n      || [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ -e \"/usr/bin/java11\" ]; then\n        _if_solr_seven\n      fi\n    fi\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"SR9\" ]] \\\n    || [ ! -e \"/var/solr9/logs/solr.log\" ] \\\n    || [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ -e \"/usr/bin/java17\" ] \\\n      || [ -e \"/usr/bin/java21\" ]; then\n      _if_solr_nine\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/functions/sql.sh.inc",
    "content": "\n_check_mysql_version() {\n  _DB_V=\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! -z \"${_DBS_TEST}\" ]; then\n    _DB_SERVER_TEST=$(mysql -V 2>&1)\n  fi\n  if [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.4.\" ]]; then\n    _DB_V=8.4\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.3.\" ]]; then\n    _DB_V=8.3\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Ver 8.0.\" ]]; then\n    _DB_V=8.0\n  elif [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib 5.7.\" ]]; then\n    _DB_V=5.7\n  fi\n  if [ -x \"/usr/sbin/aa-teardown\" ]; then\n    aa-teardown &> /dev/null\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DBST: via _check_mysql_version ${_DB_V}\"\n  fi\n}\n\n_install_local_ordered() {\n  cd \"${_WORK}\"\n  _mrun \"dpkg -i ./mysql-common_*_amd64.deb\"\n  _mrun \"dpkg -i ./mysql-community-client-plugins_*_amd64.deb\"\n  _mrun \"dpkg -i ./libmysqlclient24_*_amd64.deb\"\n  _mrun \"dpkg -i ./libmysqlclient-dev_*_amd64.deb\"\n  _mrun \"dpkg -i ./mysql-community-client-core_*_amd64.deb\"\n  _mrun \"dpkg -i ./mysql-community-client_*_amd64.deb\"\n  _mrun \"dpkg -i ./mysql-client_*_amd64.deb\"\n  _mrun \"dpkg -i ./mysql-community-server-core_*_amd64.deb\"\n  _mrun \"dpkg -i ./mysql-community-server_*_amd64.deb\"\n  _mrun \"dpkg -i ./mysql-server_*_amd64.deb\"\n  _mrun \"apt-get -f -y install\"\n}\n\n_download_pkg_by_name() {\n  # $1 = package name\n  local _name=\"$1\" _file\n  _file=\"$(awk -v P=\"${_name}\" '\n    $1==\"Package:\" && $2==P {f=1}\n    f && $1==\"Filename:\" {print $2; f=0}\n  ' Packages | tail -n1)\"\n  if [ -z \"${_file}\" ]; then\n    _msg \"MSQL: Could not locate package '${_name}' in Packages index\"\n    return 1\n  fi\n  local _url=\"${_BASE_URL}/${_file}\"\n  local _base=\"$(basename \"${_file}\")\"\n  if [ -s \"${_base}\" ]; then\n    return 0\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"MSQL: Downloading ${_url}\"\n  fi\n  _mrun \"wget -q ${_url} -O ${_base}\"\n}\n\n_fetch_packages_index() {\n  mkdir -p \"${_WORK}\"\n  cd \"${_WORK}\"\n  curl -fsS 
\"${_BASE_URL}/dists/${_SUITE}/${_COMP}/binary-${_ARCH}/Packages\" -o Packages\n}\n\n_download_all_mysql_debs() {\n  _fetch_packages_index\n  local _p\n  for _p in ${_PKGS}; do\n    _download_pkg_by_name \"${_p}\"\n  done\n}\n\n_ensure_base_runtime_deps() {\n  _apt_clean_update\n  _mrun \"${_INSTAPP} libmecab2 psmisc libnuma1\"\n}\n\n_dpkg_install_mysql() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _dpkg_install_mysql\"\n  fi\n  _WORK=\"/var/backups/mysql84-local\"\n  _BASE_URL=\"http://repo.mysql.com/apt/debian\"\n  _SUITE=\"trixie\"\n  _COMP=\"mysql-8.4-lts\"\n  _ARCH=\"amd64\"\n  _PKGS=\"\nmysql-common\nmysql-community-client-plugins\nlibmysqlclient24\nlibmysqlclient-dev\nmysql-community-client-core\nmysql-community-client\nmysql-client\nmysql-community-server-core\nmysql-community-server\nmysql-server\n\"\n  _ensure_base_runtime_deps\n  _download_all_mysql_debs\n  _install_local_ordered\n}\n\n#\n# Update for keyrings/percona.gpg\n_if_sql_keyring_apt_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_sql_keyring_apt_update\"\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n    _SQL_KEYRING_UPDATE=NO\n    if [ -e \"/etc/apt/trusted.gpg.d/percona.gpg\" ] \\\n      || [ ! -e \"/etc/apt/keyrings/percona.gpg\" ] \\\n      || [ -e \"/etc/apt/trusted.gpg.d/percona-keyring.gpg~\" ]; then\n      _SQL_KEYRING_UPDATE=YES\n    fi\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"Percona\" ] && [ \"${_SQL_KEYRING_UPDATE}\" = \"YES\" ]; then\n    if [ ! -e \"/etc/apt/keyrings/percona.gpg\" ]; then\n      if [ ! 
-e \"/etc/apt/keyrings\" ]; then\n        mkdir -m 0755 -p /etc/apt/keyrings\n      fi\n      if [ -e \"/etc/apt/trusted.gpg.d/percona.gpg\" ] \\\n        || [ -e \"/etc/apt/trusted.gpg.d/percona-keyring.gpg~\" ]; then\n        rm -f /etc/apt/trusted.gpg.d/percona*\n      fi\n      _PERCONA_KEYS_SIG=\"8507EFA5\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Retrieving ${_PERCONA_KEYS_SIG} key...\"\n      fi\n      apt-key del ${_PERCONA_KEYS_SIG} &> /dev/null\n      if [ ! -e \"/etc/apt/keyrings/percona.gpg\" ]; then\n        curl -fsSL ${_urlDev}/percona-key.gpg | ${_GPG} --dearmor -o /etc/apt/keyrings/percona.gpg\n      fi\n      chmod 644 /etc/apt/keyrings/percona.gpg\n      _CNT=$(pgrep -fc dirmngr)\n      if (( _CNT > 5 )); then\n        pkill -9 -f dirmngr\n        echo \"$(date) Too many dirmngr processes killed (count=${_CNT})\" >> \\\n          /var/log/boa/dirmngr-count.kill.log\n      fi\n      _CNT=$(pgrep -fc gpg-agent)\n      if (( _CNT > 5 )); then\n        pkill -9 -f gpg-agent\n        echo \"$(date) Too many gpg-agent processes killed (count=${_CNT})\" >> \\\n          /var/log/boa/gpg-agent-count.kill.log\n      fi\n    fi\n    _apt_clean_update\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"MySQL\" ]; then\n    _SQL_KEYRING_UPDATE=NO\n    if [ -e \"/etc/apt/trusted.gpg.d/mysql.gpg\" ] \\\n      || [ ! -e \"/etc/apt/keyrings/mysql.gpg\" ] \\\n      || [ -e \"/etc/apt/trusted.gpg.d/mysql-keyring.gpg~\" ]; then\n      _SQL_KEYRING_UPDATE=YES\n    fi\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"MySQL\" ] && [ \"${_SQL_KEYRING_UPDATE}\" = \"YES\" ]; then\n    if [ ! -e \"/etc/apt/keyrings/mysql.gpg\" ]; then\n      if [ ! 
-e \"/etc/apt/keyrings\" ]; then\n        mkdir -m 0755 -p /etc/apt/keyrings\n      fi\n      if [ -e \"/etc/apt/trusted.gpg.d/mysql.gpg\" ] \\\n        || [ -e \"/etc/apt/trusted.gpg.d/mysql-keyring.gpg~\" ]; then\n        rm -f /etc/apt/trusted.gpg.d/mysql*\n      fi\n      _MYSQL_KEYS_SIG=\"B7B3B788A8D3785C\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Retrieving ${_MYSQL_KEYS_SIG} key...\"\n      fi\n      apt-key del ${_MYSQL_KEYS_SIG} &> /dev/null\n      if [ ! -e \"/etc/apt/keyrings/mysql.gpg\" ]; then\n        curl -fsSL ${_urlDev}/mysql-key.gpg | ${_GPG} --dearmor -o /etc/apt/keyrings/mysql.gpg\n      fi\n      chmod 644 /etc/apt/keyrings/mysql.gpg\n      _CNT=$(pgrep -fc dirmngr)\n      if (( _CNT > 5 )); then\n        pkill -9 -f dirmngr\n        echo \"$(date) Too many dirmngr processes killed (count=${_CNT})\" >> \\\n          /var/log/boa/dirmngr-count.kill.log\n      fi\n      _CNT=$(pgrep -fc gpg-agent)\n      if (( _CNT > 5 )); then\n        pkill -9 -f gpg-agent\n        echo \"$(date) Too many gpg-agent processes killed (count=${_CNT})\" >> \\\n          /var/log/boa/gpg-agent-count.kill.log\n      fi\n    fi\n    _apt_clean_update\n  fi\n\n  [ ! 
-z \"${_SQL_NEW_OS_CODE}\" ] && _SQL_OS_CODE=${_SQL_NEW_OS_CODE}\n\n  if [ \"${_DB_V}\" = \"8.4\" ] || [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n    if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n      _DB_SRC=\"repo.percona.com\"\n      _percNodot=\"${_DB_SERIES//./}\"\n      _sqlDebVersion=\"ps-${_percNodot}-lts\"\n      _SQL_DEB_SRC_TEST_A=$(grep ${_DB_SRC} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_B=$(grep ${_SQL_OS_CODE} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_C=$(grep ${_sqlDebVersion} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_D=$(grep ps-8x-innovation /etc/apt/sources.list.d/percona-release.list 2>&1)\n    elif [ \"${_DB_SERVER}\" = \"MySQL\" ]; then\n      _DB_SRC=\"repo.mysql.com\"\n      _sqlDebVersion=\"mysql-${_DB_SERIES}-lts\"\n      _SQL_DEB_SRC_TEST_A=$(grep ${_DB_SRC} /etc/apt/sources.list.d/mysql.list 2>&1)\n      _SQL_DEB_SRC_TEST_B=$(grep ${_SQL_OS_CODE} /etc/apt/sources.list.d/mysql.list 2>&1)\n      _SQL_DEB_SRC_TEST_C=$(grep ${_sqlDebVersion} /etc/apt/sources.list.d/mysql.list 2>&1)\n    fi\n    if [[ \"${_SQL_DEB_SRC_TEST_A}\" =~ \"${_DB_SRC}\" ]] \\\n      && [[ \"${_SQL_DEB_SRC_TEST_B}\" =~ \"${_SQL_OS_CODE}\" ]] \\\n      && [[ \"${_SQL_DEB_SRC_TEST_C}\" =~ \"${_sqlDebVersion}\" ]]; then\n      if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n        if [ -e \"/etc/apt/keyrings/percona.gpg\" ] \\\n          && [ -e \"/etc/apt/preferences.d/00percona.pref\" ]; then\n          _SQL_DEB_SRC_UPDATE=NO\n        fi\n      else\n        if [ -e \"/etc/apt/keyrings/mysql.gpg\" ]; then\n          _SQL_DEB_SRC_UPDATE=NO\n        fi\n      fi\n    else\n      _SQL_DEB_SRC_UPDATE=YES\n    fi\n    if [[ \"${_SQL_DEB_SRC_TEST_D}\" =~ \"ps-8x-innovation\" ]]; then\n      _SQL_DEB_SRC_UPDATE=YES\n    fi\n  elif [ \"${_DB_V}\" = \"8.3\" ] || [ \"${_DB_SERIES}\" = \"8.3\" ]; then\n    if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n      
_DB_SRC=\"repo.percona.com\"\n      _percNodot=\"${_DB_SERIES//./}\"\n      _SQL_DEB_SRC_TEST_A=$(grep ${_DB_SRC} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_B=$(grep ${_SQL_OS_CODE} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_C=$(grep ps-${_percNodot} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_D=$(grep ps-8x-innovation /etc/apt/sources.list.d/percona-release.list 2>&1)\n    fi\n    if [[ \"${_SQL_DEB_SRC_TEST_A}\" =~ \"${_DB_SRC}\" ]] \\\n      && [[ \"${_SQL_DEB_SRC_TEST_B}\" =~ \"${_SQL_OS_CODE}\" ]] \\\n      && [[ \"${_SQL_DEB_SRC_TEST_D}\" =~ \"ps-8x-innovation\" ]] \\\n      && [ -e \"/etc/apt/keyrings/percona.gpg\" ] \\\n      && [ -e \"/etc/apt/preferences.d/00percona.pref\" ]; then\n      _SQL_DEB_SRC_UPDATE=NO\n    else\n      _SQL_DEB_SRC_UPDATE=YES\n    fi\n    if [[ \"${_SQL_DEB_SRC_TEST_C}\" =~ \"ps-${_percNodot}\" ]]; then\n      _SQL_DEB_SRC_UPDATE=YES\n    fi\n  else\n    if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n      _DB_SRC=\"repo.percona.com\"\n      _percNodot=\"${_DB_SERIES//./}\"\n      _sqlDebVersion=\"ps-${_percNodot}\"\n      _SQL_DEB_SRC_TEST_A=$(grep ${_DB_SRC} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_B=$(grep ${_SQL_OS_CODE} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_C=$(grep ${_sqlDebVersion} /etc/apt/sources.list.d/percona-release.list 2>&1)\n      _SQL_DEB_SRC_TEST_D=$(grep ps-8x-innovation /etc/apt/sources.list.d/percona-release.list 2>&1)\n    elif [ \"${_DB_SERVER}\" = \"MySQL\" ]; then\n      _DB_SRC=\"repo.mysql.com\"\n      _sqlDebVersion=\"mysql-${_DB_SERIES}-lts\"\n      _SQL_DEB_SRC_TEST_A=$(grep ${_DB_SRC} /etc/apt/sources.list.d/mysql.list 2>&1)\n      _SQL_DEB_SRC_TEST_B=$(grep ${_SQL_OS_CODE} /etc/apt/sources.list.d/mysql.list 2>&1)\n      _SQL_DEB_SRC_TEST_C=$(grep ${_sqlDebVersion} /etc/apt/sources.list.d/mysql.list 2>&1)\n    fi\n    if [[ 
\"${_SQL_DEB_SRC_TEST_A}\" =~ \"${_DB_SRC}\" ]] \\\n      && [[ \"${_SQL_DEB_SRC_TEST_B}\" =~ \"${_SQL_OS_CODE}\" ]] \\\n      && [[ \"${_SQL_DEB_SRC_TEST_C}\" =~ \"${_sqlDebVersion}\" ]]; then\n      if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n        if [ -e \"/etc/apt/keyrings/percona.gpg\" ] \\\n          && [ -e \"/etc/apt/preferences.d/00percona.pref\" ]; then\n          _SQL_DEB_SRC_UPDATE=NO\n        fi\n      else\n        if [ -e \"/etc/apt/keyrings/mysql.gpg\" ]; then\n          _SQL_DEB_SRC_UPDATE=NO\n        fi\n      fi\n    else\n      _SQL_DEB_SRC_UPDATE=YES\n    fi\n    if [[ \"${_SQL_DEB_SRC_TEST_D}\" =~ \"ps-8x-innovation\" ]]; then\n      _SQL_DEB_SRC_UPDATE=YES\n    fi\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"Percona\" ] && [ \"${_SQL_DEB_SRC_UPDATE}\" = \"YES\" ]; then\n    rm -f /etc/apt/sources.list.d/mariadb*\n    rm -f /etc/apt/sources.list.d/ourdelta*\n    rm -f /etc/apt/sources.list.d/percona*\n    rm -f /etc/apt/sources.list.d/xtrabackup*\n    rm -f /etc/apt/sources.list.d/mysql*\n    _percList=\"/etc/apt/sources.list.d/percona-release.list\"\n    if [ \"${_DB_V}\" = \"8.4\" ] || [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n      _percRepo=\"${_DB_SRC}/ps-${_percNodot}-lts/apt\"\n    elif [ \"${_DB_V}\" = \"8.3\" ] || [ \"${_DB_SERIES}\" = \"8.3\" ]; then\n      _percRepo=\"${_DB_SRC}/ps-8x-innovation/apt\"\n    else\n      _percRepo=\"${_DB_SRC}/ps-${_percNodot}/apt\"\n    fi\n    _percTools=\"${_DB_SRC}/tools/apt\"\n    _percTelem=\"${_DB_SRC}/telemetry/apt\"\n    _percPrelr=\"${_DB_SRC}/prel/apt\"\n    echo \"## Percona Server APT Repository\" > ${_percList}\n    if [ -e \"/etc/apt/keyrings/percona.gpg\" ]; then\n      echo \"deb [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percRepo} ${_SQL_OS_CODE} main\" >> ${_percList}\n      echo \"deb-src [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percRepo} ${_SQL_OS_CODE} main\" >> ${_percList}\n    else\n      echo \"deb http://${_percRepo} ${_SQL_OS_CODE} main\" >> ${_percList}\n     
 echo \"deb-src http://${_percRepo} ${_SQL_OS_CODE} main\" >> ${_percList}\n    fi\n    if [ \"${_DB_V}\" = \"8.4\" ] || [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n      echo \" \" >> ${_percList}\n      echo \"## Percona Telemetry APT Repository\" >> ${_percList}\n      if [ -e \"/etc/apt/keyrings/percona.gpg\" ]; then\n        echo \"deb [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n      else\n        echo \"deb http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n      fi\n    elif [ \"${_SQL_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \" \" >> ${_percList}\n      echo \"## Percona Tools APT Repository\" >> ${_percList}\n      if [ -e \"/etc/apt/keyrings/percona.gpg\" ]; then\n        echo \"deb [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percTools} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percTools} ${_SQL_OS_CODE} main\" >> ${_percList}\n      else\n        echo \"deb http://${_percTools} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src http://${_percTools} ${_SQL_OS_CODE} main\" >> ${_percList}\n      fi\n      echo \" \" >> ${_percList}\n      echo \"## Percona Telemetry APT Repository\" >> ${_percList}\n      if [ -e \"/etc/apt/keyrings/percona.gpg\" ]; then\n        echo \"deb [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n      else\n        echo \"deb http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src http://${_percTelem} ${_SQL_OS_CODE} main\" >> ${_percList}\n      fi\n      echo 
\" \" >> ${_percList}\n      echo \"## Percona Release APT Repository\" >> ${_percList}\n      if [ -e \"/etc/apt/keyrings/percona.gpg\" ]; then\n        echo \"deb [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percPrelr} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src [signed-by=/etc/apt/keyrings/percona.gpg] http://${_percPrelr} ${_SQL_OS_CODE} main\" >> ${_percList}\n      else\n        echo \"deb http://${_percPrelr} ${_SQL_OS_CODE} main\" >> ${_percList}\n        echo \"deb-src http://${_percPrelr} ${_SQL_OS_CODE} main\" >> ${_percList}\n      fi\n    fi\n    chmod 644 ${_percList}\n    echo -e 'Package: *\\nPin: release o=Percona Development Team\\nPin-Priority: 1001' > /etc/apt/preferences.d/00percona.pref\n    _apt_clean_update\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"MySQL\" ] && [ \"${_SQL_DEB_SRC_UPDATE}\" = \"YES\" ]; then\n    rm -f /etc/apt/sources.list.d/mariadb*\n    rm -f /etc/apt/sources.list.d/ourdelta*\n    rm -f /etc/apt/sources.list.d/percona*\n    rm -f /etc/apt/sources.list.d/xtrabackup*\n    rm -f /etc/apt/sources.list.d/mysql*\n    _percList=\"/etc/apt/sources.list.d/mysql-release.list\"\n    if [ \"${_DB_V}\" = \"8.4\" ] || [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n      _percRepo=\"repo.mysql.com/apt/debian/\"\n    fi\n    echo \"## MySQL Server APT Repository\" > ${_percList}\n    if [ -e \"/etc/apt/keyrings/mysql.gpg\" ]; then\n      echo \"deb [signed-by=/etc/apt/keyrings/mysql.gpg] http://${_percRepo} ${_SQL_OS_CODE} mysql-8.4-lts mysql-tools\" >> ${_percList}\n      echo \"deb-src [signed-by=/etc/apt/keyrings/mysql.gpg] http://${_percRepo} ${_SQL_OS_CODE} mysql-8.4-lts mysql-tools\" >> ${_percList}\n    else\n      echo \"deb http://${_percRepo} ${_SQL_OS_CODE} mysql-8.4-lts mysql-tools\" >> ${_percList}\n      echo \"deb-src http://${_percRepo} ${_SQL_OS_CODE} mysql-8.4-lts mysql-tools\" >> ${_percList}\n    fi\n    chmod 644 ${_percList}\n    echo -e 'Package: *\\nPin: release o=MySQL Development 
Team\\nPin-Priority: 1001' > /etc/apt/preferences.d/00mysql.pref\n    ### Temporary cleanup until MySQL updates gpg keys\n    mv -f ${_percList} /var/backups/\n    mv -f /etc/apt/preferences.d/00mysql.pref /var/backups/\n    mv -f /etc/apt/keyrings/mysql.gpg /var/backups/\n    _apt_clean_update\n  fi\n}\n\n#\n# Only supported upgrade path allowed.\n_sql_strict_upgrade_path() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sql_strict_upgrade_path\"\n  fi\n  _SQL_UPGRADE=NO\n  _DB_SERVER_TEST=$(mysql -V 2>&1)\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! -z \"${_DBS_TEST}\" ]; then\n    _NOW_DB_V=$(mysql -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f6 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    if [ \"${_NOW_DB_V}\" = \"Linux\" ]; then\n      _NOW_DB_V=$(mysql -V 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | awk '{ print $1}' \\\n        | cut -d\"-\" -f1 \\\n        | awk '{ print $1}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n    fi\n  fi\n  if [ ! -z \"${_NOW_DB_V}\" ]; then\n    if [[ ! \"${_NOW_DB_V}\" =~ (^)\"${_DB_SERIES}\" ]]; then\n      _SQL_FORCE_REINSTALL=YES\n    fi\n    if [[ \"${_NOW_DB_V}\" =~ \"available\" ]]; then\n      _SQL_FORCE_REINSTALL=NO\n    fi\n  else\n    _SQL_FORCE_REINSTALL=YES\n  fi\n  if [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    _ALL_FORCE_REINSTALL=YES\n  else\n    _ALL_FORCE_REINSTALL=NO\n  fi\n  if [[ \"${_NOW_DB_V}\" =~ (^)\"5.7\" ]] \\\n    && [ ! 
-z \"${_DB_SERIES}\" ] \\\n    && [ \"${_DB_SERIES}\" != \"5.7\" ] \\\n    && [ \"${_DB_SERIES}\" != \"8.0\" ]; then\n    _DB_SERIES=8.0\n    export _DB_SERIES=8.0\n    sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.0/g\" ${_barCnf}\n    _SQL_UPGRADE=YES\n  fi\n  if [ \"${_SQL_FORCE_REINSTALL}\" = \"YES\" ] \\\n    || [ \"${_ALL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    _SQL_UPGRADE=YES\n  fi\n  if [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n    mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 0;\" &> /dev/null\n  fi\n}\n\n#\n# Update innodb_log_file_size.\n_innodb_log_file_size_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _innodb_log_file_size_update\"\n  fi\n  _check_mysql_version\n  if [ \"${_DB_V}\" != \"5.7\" ] || [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n    _msg \"INFO: InnoDB redo log capacity will be set to ${_INNODB_REDO_LOG_CAPACITY_MB}...\"\n  else\n    _msg \"INFO: InnoDB log file will be set to ${_INNODB_LOG_FILE_SIZE_MB}...\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n  mysql -u root -e \"SET GLOBAL innodb_fast_shutdown = 0;\" &> /dev/null\n  _mrun \"bash /var/xdrago/move_sql.sh stop\"\n  wait\n  _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n  if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n    mkdir -p ${_vBs}/old-sql-ib-log-${_NOW}\n    sleep 1\n    mv -f /var/lib/mysql/ib_logfile0 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n    mv -f /var/lib/mysql/ib_logfile1 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n    if [ \"${_DB_V}\" = \"5.7\" ] || [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n      if [ \"${_SQL_UPGRADE}\" = \"NO\" ]; then\n        if [ -e \"/root/.dev.server.cnf\" ]; then\n          echo \"DEBUG1: _DB_SERIES is ${_DB_SERIES}\"\n          echo \"DEBUG1: _DB_V is ${_DB_V}\"\n          echo \"DEBUG1: _SQL_UPGRADE is ${_SQL_UPGRADE}\"\n          echo \"DEBUG1: _INNODB_LOG_FILE_SIZE_MB is ${_INNODB_LOG_FILE_SIZE_MB}\"\n          echo \"DEBUG1: _INNODB_REDO_LOG_CAPACITY_MB is 
${_INNODB_REDO_LOG_CAPACITY_MB}\"\n        fi\n        sed -i \\\n          -e \"s/.*innodb_redo_log_capacity.*/#innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}/g\" \\\n          -e \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" \\\n          /etc/mysql/my.cnf &> /dev/null\n        echo \"innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.innodb_log_file_size.txt\n      fi\n    fi\n    if [ \"${_DB_V}\" != \"5.7\" ] || [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n      if [ ! -z \"${_DB_SERIES}\" ] && [ \"${_DB_SERIES}\" != \"5.7\" ]; then\n        if [ -e \"/root/.dev.server.cnf\" ]; then\n          echo \"DEBUG2: _DB_SERIES is ${_DB_SERIES}\"\n          echo \"DEBUG2: _DB_V is ${_DB_V}\"\n          echo \"DEBUG2: _SQL_UPGRADE is ${_SQL_UPGRADE}\"\n          echo \"DEBUG2: _INNODB_LOG_FILE_SIZE_MB is ${_INNODB_LOG_FILE_SIZE_MB}\"\n          echo \"DEBUG2: _INNODB_REDO_LOG_CAPACITY_MB is ${_INNODB_REDO_LOG_CAPACITY_MB}\"\n        fi\n        sed -i \\\n          -e \"s/.*innodb_redo_log_capacity.*/innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}/g\" \\\n          -e \"s/.*innodb_log_file_size.*/#innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" \\\n          /etc/mysql/my.cnf &> /dev/null\n        echo \"innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}\" > /root/.my.innodb_redo_log_capacity.txt\n      fi\n    fi\n    _mrun \"bash /var/xdrago/move_sql.sh start\"\n    wait\n  else\n    _msg \"INFO: Waiting 180s for ${_DB_SERVER} clean shutdown...\"\n    _mrun \"bash /var/xdrago/move_sql.sh stop\"\n    wait\n    sleep 180\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ -z \"${_IS_MYSQLD_RUNNING}\" ]; then\n      mkdir -p ${_vBs}/old-sql-ib-log-${_NOW}\n      sleep 1\n      mv -f /var/lib/mysql/ib_logfile0 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      mv -f /var/lib/mysql/ib_logfile1 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      if [ 
\"${_DB_V}\" = \"5.7\" ] || [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n        if [ \"${_SQL_UPGRADE}\" = \"NO\" ]; then\n          if [ -e \"/root/.dev.server.cnf\" ]; then\n            echo \"DEBUG3: _DB_SERIES is ${_DB_SERIES}\"\n            echo \"DEBUG3: _DB_V is ${_DB_V}\"\n            echo \"DEBUG3: _SQL_UPGRADE is ${_SQL_UPGRADE}\"\n            echo \"DEBUG3: _INNODB_REDO_LOG_CAPACITY_MB is ${_INNODB_REDO_LOG_CAPACITY_MB}\"\n            echo \"DEBUG3: _INNODB_LOG_FILE_SIZE_MB is ${_INNODB_LOG_FILE_SIZE_MB}\"\n          fi\n          sed -i \\\n            -e \"s/.*innodb_redo_log_capacity.*/#innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}/g\" \\\n            -e \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" \\\n            /etc/mysql/my.cnf &> /dev/null\n          echo \"innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.innodb_log_file_size.txt\n        fi\n      fi\n      if [ \"${_DB_V}\" != \"5.7\" ] || [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n        if [ ! 
-z \"${_DB_SERIES}\" ] && [ \"${_DB_SERIES}\" != \"5.7\" ]; then\n          if [ -e \"/root/.dev.server.cnf\" ]; then\n            echo \"DEBUG4: _DB_SERIES is ${_DB_SERIES}\"\n            echo \"DEBUG4: _DB_V is ${_DB_V}\"\n            echo \"DEBUG4: _SQL_UPGRADE is ${_SQL_UPGRADE}\"\n            echo \"DEBUG4: _INNODB_REDO_LOG_CAPACITY_MB is ${_INNODB_REDO_LOG_CAPACITY_MB}\"\n            echo \"DEBUG4: _INNODB_LOG_FILE_SIZE_MB is ${_INNODB_LOG_FILE_SIZE_MB}\"\n          fi\n          sed -i \\\n            -e \"s/.*innodb_redo_log_capacity.*/innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}/g\" \\\n            -e \"s/.*innodb_log_file_size.*/#innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" \\\n            /etc/mysql/my.cnf &> /dev/null\n          echo \"innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}\" > /root/.my.innodb_redo_log_capacity.txt\n        fi\n      fi\n      _mrun \"bash /var/xdrago/move_sql.sh start\"\n      wait\n    else\n      if [ \"${_DB_V}\" != \"5.7\" ] || [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n        _msg \"WARN: ${_DB_SERVER} refused to stop, InnoDB redo log capacity size not updated\"\n      else\n        _msg \"WARN: ${_DB_SERVER} refused to stop, InnoDB log file size not updated\"\n      fi\n      sleep 5\n    fi\n  fi\n}\n\n#\n# Update SQL Config.\n_sql_conf_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sql_conf_update\"\n  fi\n  sed -i -e \"s/.*innodb_force_recovery/#innodb_force_recovery/g\" \\\n    -e \"s/.*innodb_corrupt_table_action/#innodb_corrupt_table_action/g\" \\\n    -e \"s/^thread_concurrency.*//g\" /etc/mysql/my.cnf &> /dev/null\n  _if_hosted_sys\n  _check_mysql_version\n  if [ \"${_CUSTOM_CONFIG_SQL}\" = \"NO\" ] \\\n    || [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_CUSTOM_CONFIG_SQL}\" = \"YES\" ]; then\n      _DO_NOTHING=YES\n    else\n      # Determine the Percona version (assuming _DB_V is set to the version number, e.g., \"5.7\" or \"8.0\" 
or \"8.4\")\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DBST: Enable Logging for Percona ${_DB_V}\"\n        fi\n        sed -i -E \\\n          -e \"s/^#?\\s*log_syslog/log_syslog/\" \\\n          /etc/mysql/my.cnf &> /dev/null\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DBST: Enable Logging for Percona ${_DB_V}\"\n        fi\n        sed -i -E \\\n          -e \"s/^#?\\s*log_error/log_error/\" \\\n          -e \"s/^#?\\s*syseventlog/syseventlog/\" \\\n          -e \"s/^#?\\s*mysqlx/mysqlx/\" \\\n          /etc/mysql/my.cnf &> /dev/null\n      fi\n      if [ \"${_DB_V}\" = \"8.4\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DBST: Enable legacy mysql_native_password for Percona ${_DB_V}\"\n        fi\n        sed -i -E \\\n          -e \"s/^#?\\s*mysql_native_password\\s*=.*/mysql_native_password=ON/\" \\\n          -e \"s/^(\\s*)#\\s*(mysql_native_password.*=.*)/\\1\\2/\" \\\n          /etc/mysql/my.cnf &> /dev/null\n      fi\n      if [ \"${_DB_V}\" = \"5.7\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DBST: Disable performance_schema for Percona ${_DB_V}\"\n        fi\n        sed -i -E \\\n          -e \"s/^#?\\s*performance_schema\\s*=.*/performance_schema=OFF/\" \\\n          -e \"s/^(\\s*)performance_schema_.*=.*/\\1#&/\" \\\n          /etc/mysql/my.cnf &> /dev/null\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DBST: Enable performance_schema for Percona ${_DB_V}\"\n        fi\n        sed -i -E \\\n          -e \"s/^#?\\s*performance_schema\\s*=.*/performance_schema=ON/\" \\\n          -e \"s/^(\\s*)#\\s*(performance_schema_.*=.*)/\\1\\2/\" \\\n          /etc/mysql/my.cnf &> /dev/null\n      fi\n      _INNODB_LOG_FILE_SIZE=${_INNODB_LOG_FILE_SIZE//[^0-9]/}\n      if [ ! 
-z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n        if [ \"${_INNODB_LOG_FILE_SIZE}\" -ge 50 ]; then\n          _INNODB_LOG_FILE_SIZE_MB=\"${_INNODB_LOG_FILE_SIZE}M\"\n          _INNODB_REDO_LOG_CAPACITY=$(( _INNODB_LOG_FILE_SIZE * 2 ))\n          _INNODB_REDO_LOG_CAPACITY_MB=\"${_INNODB_REDO_LOG_CAPACITY}M\"\n          _INNODB_LOG_FILE_SIZE_TEST=$(grep \"innodb_log_file_size\" \\\n            ${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW} 2>&1)\n          _INNODB_REDO_LOG_CAPACITY_TEST=$(grep \"innodb_redo_log_capacity\" \\\n            ${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW} 2>&1)\n          if [[ \"${_INNODB_LOG_FILE_SIZE_TEST}\" =~ \"= ${_INNODB_LOG_FILE_SIZE_MB}\" ]] \\\n            && [[ \"${_INNODB_REDO_LOG_CAPACITY_TEST}\" =~ \"= ${_INNODB_REDO_LOG_CAPACITY_MB}\" ]]; then\n            _INNODB_LOG_FILE_SIZE_SAME=YES\n          else\n            _INNODB_LOG_FILE_SIZE_SAME=NO\n          fi\n          if [ -e \"/root/.dev.server.cnf\" ]; then\n            echo \"DEBUG5: _INNODB_LOG_FILE_SIZE_TEST is ${_INNODB_LOG_FILE_SIZE_TEST}\"\n            echo \"DEBUG5: _INNODB_REDO_LOG_CAPACITY_TEST is ${_INNODB_REDO_LOG_CAPACITY_TEST}\"\n            echo \"DEBUG5: _INNODB_LOG_FILE_SIZE_MB is ${_INNODB_LOG_FILE_SIZE_MB}\"\n            echo \"DEBUG5: _INNODB_REDO_LOG_CAPACITY_MB is ${_INNODB_REDO_LOG_CAPACITY_MB}\"\n            echo \"DEBUG5: _INNODB_LOG_FILE_SIZE_SAME is ${_INNODB_LOG_FILE_SIZE_SAME}\"\n          fi\n        fi\n      fi\n      sed -i \\\n        -e \"s/.*slow_query_log/#slow_query_log/g\" \\\n        -e \"s/.*long_query_time/#long_query_time/g\" \\\n        -e \"s/.*slow_query_log_file/#slow_query_log_file/g\" \\\n        /etc/mysql/my.cnf &> /dev/null\n      if [ ! 
-e \"/etc/mysql/skip-name-resolve.txt\" ]; then\n        sed -i \"s/.*skip-name-resolve/#skip-name-resolve/g\" /etc/mysql/my.cnf &> /dev/null\n      fi\n    fi\n  fi\n  mv -f /etc/mysql/my.cnf-pre* ${_vBs}/dragon/t/ &> /dev/null\n  sed -i \\\n    -e \"s/.*default-table-type/#default-table-type/g\" \\\n    -e \"s/.*language/#language/g\" \\\n    -e \"s/.*innodb_lazy_drop_table.*//g\" \\\n    /etc/mysql/my.cnf &> /dev/null\n  if [ \"${_CUSTOM_CONFIG_SQL}\" = \"NO\" ]; then\n    if [ \"${_DB_BINARY_LOG}\" = \"NO\" ]; then\n      # Disable binary logging\n      sed -i \\\n        -e \"s/^\\s*\\(log_bin\\s*=.*\\)/#\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(skip-log-bin\\)/\\1/\" \\\n        -e \"s/^\\s*\\(max_binlog_size\\s*=.*\\)/#\\1/\" \\\n        -e \"s/^\\s*\\(binlog_row_image\\s*=.*\\)/#\\1/\" \\\n        -e \"s/^\\s*\\(binlog_format\\s*=.*\\)/#\\1/\" \\\n        /etc/mysql/my.cnf &> /dev/null\n    elif [ \"${_DB_BINARY_LOG}\" = \"YES\" ]; then\n      # Enable binary logging\n      sed -i \\\n        -e \"s/^\\s*#\\s*\\(log_bin\\s*=.*\\)/\\1/\" \\\n        -e \"s/^\\s*\\(skip-log-bin\\)/#\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(max_binlog_size\\s*=.*\\)/\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(binlog_row_image\\s*=.*\\)/\\1/\" \\\n        -e \"s/^\\s*#\\s*\\(binlog_format\\s*=.*\\)/\\1/\" \\\n        /etc/mysql/my.cnf &> /dev/null\n    fi\n    if [ \"${_DB_V}\" = \"5.7\" ] || [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n      if [ -e \"/root/.dev.server.cnf\" ]; then\n        echo \"DEBUG6: _DB_SERIES is ${_DB_SERIES}\"\n        echo \"DEBUG6: _DB_V is ${_DB_V}\"\n        echo \"DEBUG6: _SQL_UPGRADE is ${_SQL_UPGRADE}\"\n        echo \"DEBUG6: _INNODB_LOG_FILE_SIZE is ${_INNODB_LOG_FILE_SIZE}\"\n      fi\n      if [ \"${_SQL_UPGRADE}\" = \"NO\" ]; then\n        if [ ! 
-z \"${_INNODB_LOG_FILE_SIZE}\" ] && [ \"${_INNODB_LOG_FILE_SIZE}\" -ge 50 ]; then\n          _INNODB_LOG_FILE_SIZE_MB=\"${_INNODB_LOG_FILE_SIZE}M\"\n          _INNODB_LOG_FILE_SIZE_TEST=$(grep \"innodb_log_file_size\" /etc/mysql/my.cnf 2>&1)\n          _INNODB_REDO_LOG_CAPACITY=$(( _INNODB_LOG_FILE_SIZE * 2 ))\n          _INNODB_REDO_LOG_CAPACITY_MB=\"${_INNODB_REDO_LOG_CAPACITY}M\"\n          if [[ ! \"${_INNODB_LOG_FILE_SIZE_TEST}\" =~ \"= ${_INNODB_LOG_FILE_SIZE_MB}\" ]]; then\n            if [ \"${_INNODB_LOG_FILE_SIZE_SAME}\" = \"YES\" ]; then\n              sed -i \\\n                -e \"s/.*innodb_redo_log_capacity.*/#innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}/g\" \\\n                -e \"s/.*innodb_log_file_size.*/innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" \\\n                /etc/mysql/my.cnf &> /dev/null\n              echo \"innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}\" > /root/.my.innodb_log_file_size.txt\n            else\n              _innodb_log_file_size_update\n            fi\n          fi\n        fi\n      fi\n    fi\n    if [ \"${_DB_V}\" != \"5.7\" ] || [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n      if [ -e \"/root/.dev.server.cnf\" ]; then\n        echo \"DEBUG7: _DB_SERIES is ${_DB_SERIES}\"\n        echo \"DEBUG7: _DB_V is ${_DB_V}\"\n        echo \"DEBUG7: _SQL_UPGRADE is ${_SQL_UPGRADE}\"\n        echo \"DEBUG7: _INNODB_LOG_FILE_SIZE is ${_INNODB_LOG_FILE_SIZE}\"\n      fi\n      if [ ! -z \"${_DB_SERIES}\" ] && [ \"${_DB_SERIES}\" != \"5.7\" ]; then\n        if [ ! -z \"${_INNODB_LOG_FILE_SIZE}\" ] && [ \"${_INNODB_LOG_FILE_SIZE}\" -ge 50 ]; then\n          _INNODB_LOG_FILE_SIZE_MB=\"${_INNODB_LOG_FILE_SIZE}M\"\n          _INNODB_REDO_LOG_CAPACITY=$(( _INNODB_LOG_FILE_SIZE * 2 ))\n          _INNODB_REDO_LOG_CAPACITY_MB=\"${_INNODB_REDO_LOG_CAPACITY}M\"\n          _INNODB_REDO_LOG_CAPACITY_TEST=$(grep \"innodb_redo_log_capacity\" /etc/mysql/my.cnf 2>&1)\n          if [[ ! 
\"${_INNODB_REDO_LOG_CAPACITY_TEST}\" =~ \"= ${_INNODB_REDO_LOG_CAPACITY_MB}\" ]]; then\n            if [ \"${_INNODB_LOG_FILE_SIZE_SAME}\" = \"YES\" ]; then\n              sed -i \\\n                -e \"s/.*innodb_redo_log_capacity.*/innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}/g\" \\\n                -e \"s/.*innodb_log_file_size.*/#innodb_log_file_size    = ${_INNODB_LOG_FILE_SIZE_MB}/g\" \\\n                /etc/mysql/my.cnf &> /dev/null\n              echo \"innodb_redo_log_capacity    = ${_INNODB_REDO_LOG_CAPACITY_MB}\" > /root/.my.innodb_redo_log_capacity.txt\n            else\n              _innodb_log_file_size_update\n            fi\n          fi\n        fi\n      fi\n    fi\n  fi\n}\n\n#\n#\n_check_mysqld_running() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_mysqld_running\"\n  fi\n  if [ -x \"/usr/sbin/aa-teardown\" ]; then\n    aa-teardown &> /dev/null\n  fi\n  while [ -z \"${_IS_MYSQLD_RUNNING}\" ] \\\n    || [ ! -e \"/run/mysqld/mysqld.sock\" ]; do\n    _IS_MYSQLD_RUNNING=$(pgrep -f /usr/sbin/mysqld)\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Waiting for MySQLD availability before _tune_sql_memory_limits...\"\n    fi\n    sleep 5\n  done\n}\n\n#\n# Tune memory limits for SQL server.\n_tune_sql_memory_limits() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _tune_sql_memory_limits\"\n  fi\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n    _DO_NOTHING=YES\n  else\n    _DO_NOTHING=YES\n    _check_mysqld_running\n  fi\n  if [ ! 
-e \"${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW}\" ]; then\n    mkdir -p ${_vBs}/dragon/t/\n    if [ -e \"/etc/mysql/my.cnf\" ]; then\n      cp -af /etc/mysql/my.cnf ${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW}\n    fi\n    if [ -e \"/root/.dev.server.cnf\" ]; then\n      echo \"DEBUGX: ${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW} created\"\n    fi\n  else\n    if [ -e \"/root/.dev.server.cnf\" ]; then\n      echo \"DEBUGX: ${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW} exists\"\n    fi\n  fi\n  if [ \"${_CUSTOM_CONFIG_SQL}\" = \"YES\" ]; then\n    _DO_NOTHING=YES\n  else\n    cp -af ${_locCnf}/var/my.cnf.txt /etc/mysql/my.cnf\n  fi\n  # https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl\n  _pthTun=\"/var/opt/mysqltuner.pl\"\n  _outTun=\"/var/opt/mysqltuner-${_xSrl}-${_X_VERSION}-${_NOW}.txt\"\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n    _USE_MYSQLTUNER=NO\n  fi\n  if [ ! -e \"${_outTun}\" ] \\\n    && [ \"${_USE_MYSQLTUNER}\" != \"NO\" ] \\\n    && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _msg \"INFO: Running MySQLTuner check on all databases\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    _MYSQLTUNER_TEST_RESULT=OK\n    rm -f /var/opt/mysqltuner*\n    curl ${_crlGet} \"${_urlDev}/mysqltuner.pl.${_MYSQLTUNER_VRN}\" -o ${_pthTun}\n    if [ ! 
-e \"${_pthTun}\" ]; then\n      curl ${_crlGet} \"${_urlDev}/mysqltuner.pl\" -o ${_pthTun}\n    fi\n    if [ -e \"${_pthTun}\" ]; then\n      perl ${_pthTun} > ${_outTun} 2>&1\n    fi\n  fi\n  if [ -e \"${_pthTun}\" ] \\\n    && [ -e \"${_outTun}\" ] \\\n    && [ \"${_USE_MYSQLTUNER}\" != \"NO\" ] \\\n    && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _REC_MYISAM_MEM=$(cat ${_outTun} \\\n      | grep \"Data in MyISAM tables\" \\\n      | cut -d: -f2 \\\n      | awk '{ print $1}' 2>&1)\n    _REC_INNODB_MEM=$(cat ${_outTun} \\\n      | grep \"data size:\" \\\n      | cut -d/ -f3 \\\n      | awk '{ print $1}' 2>&1)\n    _MYSQLTUNER_TEST=$(cat ${_outTun} 2>&1)\n    cp -a ${_outTun} ${_pthLog}/\n    if [ -z \"${_REC_INNODB_MEM}\" ] \\\n      || [[ \"${_MYSQLTUNER_TEST}\" =~ \"Cannot calculate MyISAM index\" ]] \\\n      || [[ \"${_MYSQLTUNER_TEST}\" =~ \"InnoDB is enabled but isn\" ]]; then\n      _MYSQLTUNER_TEST_RESULT=FAIL\n      _msg \"NOTE: The MySQLTuner test failed!\"\n      _msg \"NOTE: Please review ${_outTun}\"\n      _msg \"NOTE: We will use some sane SQL defaults instead, do not worry!\"\n    fi\n    ###--------------------###\n    if [ ! 
-z \"${_REC_MYISAM_MEM}\" ] \\\n      && [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"OK\" ]; then\n      _RAW_MYISAM_MEM=$(echo ${_REC_MYISAM_MEM} | sed \"s/[A-Z]//g\" 2>&1)\n      if [[ \"${_REC_MYISAM_MEM}\" =~ \"G\" ]]; then\n        _RAW_MYISAM_MEM=$(echo ${_RAW_MYISAM_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_MYISAM_MEM=$(echo \"${_RAW_MYISAM_MEM} * 1024\" | bc -l 2>&1)\n      elif [[ \"${_REC_MYISAM_MEM}\" =~ \"M\" ]]; then\n        _RAW_MYISAM_MEM=$(echo ${_RAW_MYISAM_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_MYISAM_MEM=$(echo \"${_RAW_MYISAM_MEM} * 1\" | bc -l 2>&1)\n      fi\n      _RAW_MYISAM_MEM=$(echo \"(${_RAW_MYISAM_MEM}+0.5)/1\" | bc 2>&1)\n      if [ \"${_RAW_MYISAM_MEM}\" -gt \"${_USE_SQL}\" ]; then\n        _USE_MYISAM_MEM=\"${_USE_SQL}\"\n      else\n        _RAW_MYISAM_MEM=$(echo \"scale=2; (${_RAW_MYISAM_MEM} * 1.1)\" | bc 2>&1)\n        _USE_MYISAM_MEM=$(echo \"(${_RAW_MYISAM_MEM}+0.5)/1\" | bc 2>&1)\n      fi\n      if [ \"${_USE_MYISAM_MEM}\" -lt 256 ] \\\n        || [ -z \"${_USE_MYISAM_MEM}\" ]; then\n        _USE_MYISAM_MEM=\"${_USE_SQL}\"\n      fi\n      _USE_MYISAM_MEM=\"${_USE_MYISAM_MEM}M\"\n      sed -i \"s/^key_buffer_size.*/key_buffer_size         = ${_USE_MYISAM_MEM}/g\"  /etc/mysql/my.cnf &> /dev/null\n    else\n      _USE_MYISAM_MEM=\"${_USE_SQL}M\"\n      if [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"FAIL\" ]; then\n        _msg \"NOTE: _USE_MYISAM_MEM is ${_USE_MYISAM_MEM} because _REC_MYISAM_MEM was empty!\"\n      fi\n      sed -i \"s/^key_buffer_size.*/key_buffer_size         = ${_USE_MYISAM_MEM}/g\"  /etc/mysql/my.cnf &> /dev/null\n    fi\n    ###--------------------###\n    if [ ! 
-z \"${_REC_INNODB_MEM}\" ] \\\n      && [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"OK\" ]; then\n      _RAW_INNODB_MEM=$(echo ${_REC_INNODB_MEM} | sed \"s/[A-Z]//g\" 2>&1)\n      if [[ \"${_REC_INNODB_MEM}\" =~ \"G\" ]]; then\n        _RAW_INNODB_MEM=$(echo ${_RAW_INNODB_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_INNODB_MEM=$(echo \"${_RAW_INNODB_MEM} * 1024\" | bc -l 2>&1)\n      elif [[ \"${_REC_INNODB_MEM}\" =~ \"M\" ]]; then\n        _RAW_INNODB_MEM=$(echo ${_RAW_INNODB_MEM} | awk '{print int($1+0.6)}' 2>&1)\n        _RAW_INNODB_MEM=$(echo \"${_RAW_INNODB_MEM} * 1\" | bc -l 2>&1)\n      fi\n      _RAW_INNODB_MEM=$(echo \"(${_RAW_INNODB_MEM}+0.5)/1\" | bc 2>&1)\n      if [ \"${_RAW_INNODB_MEM}\" -gt \"${_USE_SQL}\" ] \\\n        || [ -z \"${_USE_INNODB_MEM}\" ] \\\n        || [ \"${_RAW_INNODB_MEM}\" -lt 512 ]; then\n        _USE_INNODB_MEM=\"${_USE_SQL}\"\n      else\n        _RAW_INNODB_MEM=$(echo \"scale=2; (${_RAW_INNODB_MEM} * 1.1)\" | bc 2>&1)\n        _USE_INNODB_MEM=$(echo \"(${_RAW_INNODB_MEM}+0.5)/1\" | bc 2>&1)\n      fi\n      _INNODB_BPI=$(echo \"scale=0; ${_USE_INNODB_MEM}/1024/2\" | bc 2>&1)\n      if [ \"${_INNODB_BPI}\" -lt 1 ] || [ -z \"${_INNODB_BPI}\" ]; then\n        _INNODB_BPI=\"1\"\n      fi\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/^innodb_buffer_pool_instances.*/innodb_buffer_pool_instances = ${_INNODB_BPI}/g\" /etc/mysql/my.cnf &> /dev/null\n        sed -i \"s/^innodb_page_cleaners.*/innodb_page_cleaners = ${_INNODB_BPI}/g\" /etc/mysql/my.cnf &> /dev/null\n      fi\n      _INNODB_LOG_FILE_SIZE=$(echo \"scale=0; ${_USE_INNODB_MEM}/4/40*40\" | bc 2>&1)\n      _DB_COUNT=$(ls /var/lib/mysql/ | wc -l 2>&1)\n      if [ \"${_DB_COUNT}\" -gt 3 ]; then\n        if [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 64 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 256 ]; then\n          _INNODB_LOG_FILE_SIZE=256\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 256 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 512 ]; 
then\n          _INNODB_LOG_FILE_SIZE=512\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 512 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 1024 ]; then\n          _INNODB_LOG_FILE_SIZE=1024\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 1024 ]; then\n          _INNODB_LOG_FILE_SIZE=2048\n        fi\n      fi\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -le 64 ] \\\n        || [ -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n        _INNODB_LOG_FILE_SIZE=64\n      fi\n      _USE_INNODB_MEM=\"${_USE_INNODB_MEM}M\"\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/^innodb_buffer_pool_size.*/innodb_buffer_pool_size = ${_USE_INNODB_MEM}/g\"  /etc/mysql/my.cnf &> /dev/null\n      fi\n    else\n      _USE_INNODB_MEM=\"${_USE_SQL}M\"\n      _INNODB_LOG_FILE_SIZE=$(echo \"scale=0; ${_USE_SQL}/4/40*40\" | bc 2>&1)\n      _DB_COUNT=$(ls /var/lib/mysql/ | wc -l 2>&1)\n      if [ \"${_DB_COUNT}\" -gt 3 ]; then\n        if [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 64 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 256 ]; then\n          _INNODB_LOG_FILE_SIZE=256\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 256 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 512 ]; then\n          _INNODB_LOG_FILE_SIZE=512\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 512 ] \\\n          && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 1024 ]; then\n          _INNODB_LOG_FILE_SIZE=1024\n        elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 1024 ]; then\n          _INNODB_LOG_FILE_SIZE=2048\n        fi\n      fi\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -le 64 ] \\\n        || [ -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n        _INNODB_LOG_FILE_SIZE=64\n      fi\n      _msg 
\"NOTE: _USE_INNODB_MEM is ${_USE_INNODB_MEM} because _REC_INNODB_MEM was empty!\"\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/^innodb_buffer_pool_size.*/innodb_buffer_pool_size = ${_USE_INNODB_MEM}/g\"  /etc/mysql/my.cnf &> /dev/null\n      fi\n    fi\n  else\n    _THIS_USE_MEM=\"${_USE_SQL}M\"\n    if [ \"${_MYSQLTUNER_TEST_RESULT}\" = \"FAIL\" ] \\\n      && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n      _msg \"NOTE: _USE_MYISAM_MEM is ${_THIS_USE_MEM} because _REC_MYISAM_MEM was empty!\"\n      _msg \"NOTE: _USE_INNODB_MEM is ${_THIS_USE_MEM} because _REC_INNODB_MEM was empty!\"\n    fi\n    _INNODB_LOG_FILE_SIZE=$(echo \"scale=0; ${_USE_SQL}/4/40*40\" | bc 2>&1)\n    _DB_COUNT=$(ls /var/lib/mysql/ | wc -l 2>&1)\n    if [ \"${_DB_COUNT}\" -gt 3 ]; then\n      if [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 64 ] \\\n        && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 256 ]; then\n        _INNODB_LOG_FILE_SIZE=256\n      elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 256 ] \\\n        && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 512 ]; then\n        _INNODB_LOG_FILE_SIZE=512\n      elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 512 ] \\\n        && [ \"${_INNODB_LOG_FILE_SIZE}\" -le 1024 ]; then\n        _INNODB_LOG_FILE_SIZE=1024\n      elif [ \"${_INNODB_LOG_FILE_SIZE}\" -gt 1024 ]; then\n        _INNODB_LOG_FILE_SIZE=2048\n      fi\n    fi\n    if [ \"${_INNODB_LOG_FILE_SIZE}\" -le 64 ] \\\n      || [ -z \"${_INNODB_LOG_FILE_SIZE}\" ]; then\n      _INNODB_LOG_FILE_SIZE=64\n    fi\n    if [ -e \"/etc/mysql/my.cnf\" ]; then\n      sed -i \"s/= 181/= ${_USE_SQL}/g\"  /etc/mysql/my.cnf &> /dev/null\n    fi\n  fi\n}\n\n_prep_for_install_db_sql() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _prep_for_install_db_sql\"\n  fi\n  if [ -x \"/usr/sbin/aa-teardown\" ]; then\n    aa-teardown &> /dev/null\n  fi\n\n  # 
Create the mysql group if it doesn't exist\n  if ! getent group mysql > /dev/null; then\n    groupadd mysql\n  fi\n\n  # Create the mysql user if it doesn't exist\n  if ! id -u mysql > /dev/null 2>&1; then\n    useradd -r -g mysql -d /var/lib/mysql -s /bin/false mysql\n  fi\n\n  # Create the directory if it doesn't exist\n  [ ! -e \"/var/lib/mysql\" ] && mkdir -p /var/lib/mysql\n\n  # Set ownership to mysql:mysql\n  [ -e \"/var/lib/mysql\" ] && chown mysql:mysql /var/lib/mysql\n\n  _SQL_SCR=APT\n  if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _SQL_OS_CODE=trixie\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n    _SQL_OS_CODE=bookworm\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _SQL_OS_CODE=bullseye\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _SQL_OS_CODE=buster\n  else\n    _SQL_OS_CODE=\"${_OS_CODE}\"\n  fi\n  if [ \"${_SQL_OS_CODE}\" = \"trixie\" ]; then\n    _SQL_SCR=APT\n  fi\n  _DBS_TEST=\"$(which mysql)\"\n  if [ ! -z \"${_DBS_TEST}\" ]; then\n    _NOW_DB_V=$(mysql -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f6 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    if [ \"${_NOW_DB_V}\" = \"Linux\" ]; then\n      _NOW_DB_V=$(mysql -V 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | awk '{ print $1}' \\\n        | cut -d\"-\" -f1 \\\n        | awk '{ print $1}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n    fi\n  fi\n  if [ -e \"/usr/bin/php\" ]; then\n    _PHP_MYSQLND_TEST=$(/usr/bin/php -i | grep \"with-mysqli=mysqlnd\" 2>&1)\n    if [ -z \"${_PHP_MYSQLND_TEST}\" ]; then\n      _SQL_MAJOR_UP_ALLOW=NO\n    else\n      _SQL_MAJOR_UP_ALLOW=YES\n    fi\n  else\n    _SQL_MAJOR_UP_ALLOW=YES\n  fi\n  if [ -x \"/usr/bin/gpg2\" ]; then\n    _GPG=gpg2\n  else\n    _GPG=gpg\n  fi\n  cd /var/opt\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Installing dirmngr...\"\n  fi\n  for _PKG in dirmngr; do\n    if ! 
_pkg_installed \"${_PKG}\"; then\n      _mrun \"${_INSTAPP} ${_PKG}\"\n    fi\n  done\n\n  _msg \"INFO: Installing ${_DB_SERVER} ${_DBS_VRN} in ${_OS_DIST}/${_OS_CODE}\"\n\n  ### Update keyring and apt if needed\n  _if_sql_keyring_apt_update\n\n  ### Only supported upgrade path allowed\n  _sql_strict_upgrade_path\n\n  _check_mysql_version\n\n  if [ \"${_DB_SERIES}\" = \"${_DB_V}\" ]; then\n    if [ \"${_SQL_FORCE_REINSTALL}\" = \"YES\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      sed -i \"s/.*innodb_force_recovery.*/innodb_force_recovery = 3/g\" /etc/mysql/my.cnf &> /dev/null\n    fi\n  fi\n\n  if [ \"${_DB_SERIES}\" = \"${_DB_V}\" ]; then\n    if [ \"${_SQL_FORCE_REINSTALL}\" = \"YES\" ] \\\n      || [ \"${_ALL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      mkdir -p ${_vBs}/old-sql-ib-log-${_NOW}\n      sleep 1\n      mv -f /var/lib/mysql/ib_logfile0 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      mv -f /var/lib/mysql/ib_logfile1 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      mv -f /var/lib/mysql/aria_log.00000001 ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      mv -f /var/lib/mysql/aria_log_control ${_vBs}/old-sql-ib-log-${_NOW}/ &> /dev/null\n      sed -i \"s/.*innodb-defragment.*/innodb_force_recovery = 3/g\" /etc/mysql/my.cnf &> /dev/null\n      sed -i \"s/^thread_concurrency.*//g\" /etc/mysql/my.cnf &> /dev/null\n    fi\n  fi\n\n  if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n    _SQLDEB=\"percona-server-server-${_DB_SERIES}\"\n  else\n    _SQLDEB=\"percona-server-server\"\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"MySQL\" ]; then\n    _SQLDEB=\"mysql-server\"\n  fi\n\n  _SQLXTR=\"libdbi-perl\"\n\n  if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n    _SQLADD=\"libperconaserverclient20 libperconaserverclient20-dev\"\n  elif [ \"${_DB_SERIES}\" = \"8.0\" ]; then\n    _SQLADD=\"libperconaserverclient21 libperconaserverclient21-dev\"\n  else\n    _SQLADD=\"libperconaserverclient24 libperconaserverclient24-dev\"\n  fi\n\n  if [ \"${_OS_CODE}\" 
= \"excalibur\" ] || [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n    _SQLADD=\"${_SQLADD} percona-telemetry-agent\"\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"MySQL\" ]; then\n    _SQLADD=\"libmysqlclient24\"\n  fi\n\n}\n\n_install_with_aptitude_sql() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_with_aptitude_sql\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Installing ${_DB_SERVER}...\"\n  fi\n  _prep_for_install_db_sql\n  if [ \"${_SQL_SCR}\" = \"APT\" ]; then\n    _mrun \"${_INSTAPP} ${_SQLDEB}\"\n    _mrun \"${_INSTAPP} ${_SQLADD}\"\n    _mrun \"${_INSTAPP} ${_SQLXTR}\"\n    _mrun \"${_INSTAPP} ${_SQLDEB}\"\n  elif [ \"${_SQL_SCR}\" = \"DPKG\" ]; then\n    _dpkg_install_mysql\n    _mrun \"${_INSTAPP} ${_SQLXTR}\"\n  fi\n  if [ -x \"/usr/sbin/aa-teardown\" ]; then\n    aa-teardown &> /dev/null\n  fi\n  if [ \"${_DB_SERVER}\" = \"Percona\" ] \\\n    && [ \"${_SQL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Running ${_DB_SERVER} tables fix, check and upgrade...\"\n    fi\n    rm -f /var/lib/mysql/mysql_upgrade_info &> /dev/null\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    _mrun \"mysqlcheck -u root -A --auto-repair --silent\"\n    _check_mysql_version\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      _mrun \"mysql_upgrade -u root --force\"\n      mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN default_role;\" &> /dev/null\n      mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN is_role;\" &> /dev/null\n      mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN max_statement_time;\" &> /dev/null\n      _mrun \"mysql_upgrade -u root --force\"\n    fi\n  fi\n  usermod -aG users mysql\n}\n\n#\n# Forced MySQL root password update.\n_forced_mysql_root_password_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _forced_mysql_root_password_update\"\n  fi\n  _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d 
'\\n')\n  mv -f /root/.my.cnf-pre-* ${_vBs}/ &> /dev/null\n  mv -f /root/.my.pass.txt-pre-* ${_vBs}/ &> /dev/null\n  touch /root/.my.pass.txt\n  chmod 0600 /root/.my.pass.txt &> /dev/null\n  _ESC_PASS=\"\"\n  _LEN_PASS=0\n  _ESC=\"*.*\"\n  if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n    _PWD_CHARS=64\n  elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n    _PWD_CHARS=32\n  else\n    _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n    if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n      && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n      _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n    else\n      _PWD_CHARS=32\n    fi\n    if [ ! -z \"${_PWD_CHARS}\" ] \\\n      && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n      _PWD_CHARS=128\n    fi\n  fi\n  if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] \\\n    || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n    if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n      _ESC_PASS=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n    else\n      _RANDPASS_TEST=$(randpass -V 2>&1)\n      if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n        _ESC_PASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n      else\n        _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n      fi\n    fi\n    _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n    _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n  fi\n  if [ -z \"${_ESC_PASS}\" ] || [ \"${_LEN_PASS}\" -lt 9 ]; then\n    _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n    _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n    _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n  fi\n  if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n    _ROOT_SQL_PASWD=$(cat /root/.my.cluster_root_pwd.txt 2>&1)\n    _ROOT_SQL_PASWD=$(echo -n ${_ROOT_SQL_PASWD} | tr -d \"\\n\" 2>&1)\n    _ESC_PASS=\"${_ROOT_SQL_PASWD}\"\n  fi\n  _check_mysql_version\n  
if [ ! -z \"${_DB_V}\" ] && [ ! -z \"${_ESC_PASS}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"DBEF: ${_DB_SERVER} _ESC_PASS ${_ESC_PASS}\"\n      mysql -u root -e \"SHOW GRANTS FOR 'root'@'localhost';\"\n      if [ \"${_SQL_DEBUG_MODE}\" = \"YES\" ]; then\n        mysql -u root -e \"SHOW GRANTS FOR 'root'@'127.0.0.1';\"\n        mysql -u root -e \"SHOW GRANTS FOR 'root'@'::1';\"\n        if [ \"${_DB_V}\" = \"5.7\" ]; then\n          mysql -u root -e \"SELECT host,user,authentication_string FROM mysql.user;\"\n        fi\n      fi\n    fi\n    cp -af /root/.my.cnf /root/.my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW}\n    cp -af /root/.my.pass.txt /root/.my.pass.txt-pre-${_xSrl}-${_X_VERSION}-${_NOW}\n    mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'localhost';\"\n    mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'localhost' WITH GRANT OPTION;\"\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"DBST: WITH mysql_native_password BY for Percona ${_DB_V}\"\n      fi\n      mysql -u root -e \"ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${_ESC_PASS}';\"\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"DBST: WITH caching_sha2_password BY for Percona ${_DB_V}\"\n      fi\n      mysql -u root -e \"ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY '${_ESC_PASS}';\"\n    fi\n    echo \"[client]\" > /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[mysql]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[mysqldump]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[mydumper]\" >> 
/root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[myloader]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[boa]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[barracuda]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[octopus]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[autobeowulf]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[autochimaera]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[autodaedalus]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[autoexcalibur]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[autoupboa]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[mycnfup]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[syncpass]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> 
/root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    echo \"[xboa]\" >> /root/.my.cnf\n    echo \"user=root\" >> /root/.my.cnf\n    echo \"password=${_ESC_PASS}\" >> /root/.my.cnf\n    echo \" \" >> /root/.my.cnf\n    chmod 0600 /root/.my.cnf\n    echo \"db=mysql\" > /root/.mytop\n    chmod 0600 /root/.mytop\n    echo \"${_ESC_PASS}\" > /root/.my.pass.txt\n    _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n    mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'127.0.0.1';\"\n    mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'::1';\"\n    mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'${_MY_OWNIP}';\"\n    mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'127.0.0.1' WITH GRANT OPTION;\"\n    mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'::1' WITH GRANT OPTION;\"\n    mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'${_MY_OWNIP}' WITH GRANT OPTION;\"\n    if [ \"${_DB_V}\" = \"5.7\" ]; then\n      mysql -u root -e \"ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH mysql_native_password BY '${_ESC_PASS}';\"\n      mysql -u root -e \"ALTER USER 'root'@'::1' IDENTIFIED WITH mysql_native_password BY '${_ESC_PASS}';\"\n      mysql -u root -e \"ALTER USER 'root'@'${_MY_OWNIP}' IDENTIFIED WITH mysql_native_password BY '${_ESC_PASS}';\"\n    else\n      mysql -u root -e \"ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH caching_sha2_password BY '${_ESC_PASS}';\"\n      mysql -u root -e \"ALTER USER 'root'@'::1' IDENTIFIED WITH caching_sha2_password BY '${_ESC_PASS}';\"\n      mysql -u root -e \"ALTER USER 'root'@'${_MY_OWNIP}' IDENTIFIED WITH caching_sha2_password BY '${_ESC_PASS}';\"\n    fi\n    mysqladmin -u root flush-privileges &> /dev/null\n    mysqladmin -u root flush-hosts &> /dev/null\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"DAFT: ${_DB_SERVER} _ESC_PASS ${_ESC_PASS}\"\n      mysql -u root -e \"SHOW GRANTS FOR 'root'@'localhost';\"\n      if [ \"${_SQL_DEBUG_MODE}\" = \"YES\" ]; then\n        mysql -u root -e 
\"SHOW GRANTS FOR 'root'@'127.0.0.1';\"\n        mysql -u root -e \"SHOW GRANTS FOR 'root'@'::1';\"\n        if [ \"${_DB_V}\" = \"5.7\" ]; then\n          mysql -u root -e \"SELECT host,user,authentication_string FROM mysql.user;\"\n        fi\n      fi\n    fi\n    echo \" \"\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n        _msg \"INFO: New secure random password for ${_DB_SERVER} generated\"\n      else\n        _msg \"INFO: New random password for ${_DB_SERVER} generated\"\n      fi\n    fi\n  else\n    _msg \"CRIT: _ESC_PASS empty or not generated, ignored, no changes\"\n  fi\n}\n\n_db_server_apt_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _db_server_apt_cleanup\"\n  fi\n  [ -e \"/etc/apt/sources.list.d/percona-original-release.list.bak\" ] && rm -f /etc/apt/sources.list.d/percona-original-release.list.bak\n  [ -e \"/etc/apt/sources.list.d/percona-original-release.list\" ] && rm -f /etc/apt/sources.list.d/percona-original-release.list\n  [ -e \"/etc/apt/sources.list.d/percona-prel-release.list.bak\" ] && rm -f /etc/apt/sources.list.d/percona-prel-release.list.bak\n  [ -e \"/etc/apt/sources.list.d/percona-prel-release.list\" ] && rm -f /etc/apt/sources.list.d/percona-prel-release.list\n  [ -e \"/etc/apt/sources.list.d/percona-release.list.bak\" ] && rm -f /etc/apt/sources.list.d/percona-release.list.bak\n  _apt_clean_update\n}\n\n_db_server_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _db_server_install\"\n  fi\n  _db_server_apt_cleanup\n  _SQL_SCR=APT\n  if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _SQL_OS_CODE=trixie\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n    _SQL_OS_CODE=bookworm\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _SQL_OS_CODE=bullseye\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _SQL_OS_CODE=buster\n  else\n    _SQL_OS_CODE=\"${_OS_CODE}\"\n  fi\n  if [ \"${_SQL_OS_CODE}\" = \"trixie\" ]; 
then\n    _SQL_SCR=APT\n  fi\n  _THIS_DB_PORT=3306\n  if [ -x \"/usr/bin/gpg2\" ]; then\n    _GPG=gpg2\n  else\n    _GPG=gpg\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _db_server_apt_cleanup\n    _install_with_aptitude_sql\n    _db_server_apt_cleanup\n    [ -e \"/etc/mysql/my.cnf\" ] && mv -f /etc/mysql/my.cnf /var/backups/package.my.cnf\n    cp -af ${_locCnf}/var/my.cnf.txt /etc/mysql/my.cnf\n    if [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n      # Enable legacy mysql_native_password for Percona 8.4\n      sed -i -E \\\n        -e \"s/^#?\\s*mysql_native_password\\s*=.*/mysql_native_password=ON/\" \\\n        -e \"s/^(\\s*)#\\s*(mysql_native_password.*=.*)/\\1\\2/\" \\\n        /etc/mysql/my.cnf &> /dev/null\n    fi\n    if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n      # Enable Logging for Percona 5.7\n      sed -i -E \\\n        -e \"s/^#?\\s*log_syslog/log_syslog/\" \\\n        /etc/mysql/my.cnf &> /dev/null\n    else\n      # Enable Logging for Percona 8.x\n      sed -i -E \\\n        -e \"s/^#?\\s*log_error/log_error/\" \\\n        -e \"s/^#?\\s*syseventlog/syseventlog/\" \\\n        -e \"s/^#?\\s*mysqlx/mysqlx/\" \\\n        /etc/mysql/my.cnf &> /dev/null\n    fi\n    cp -af ${_locCnf}/var/mysql /etc/init.d/mysql\n    chmod 755 /etc/init.d/mysql\n    _mrun \"update-rc.d mysql defaults\"\n    _mrun \"service mysql restart\"\n  else\n    ###\n    ### Update keyring and apt if needed\n    _if_sql_keyring_apt_update\n    ###\n    ### Only supported upgrade path allowed\n    _sql_strict_upgrade_path\n    ###\n    if [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n      _msg \"INFO: Running ${_DB_SERVER} upgrade...\"\n      _msg \"WAIT: This may take a while, please wait...\"\n      if [ \"${_DB_SERIES}\" != \"5.7\" ]; then\n        bash -c 'echo -e \"#!/bin/bash\\ntrue\" > /bin/systemctl'\n        chmod +x /bin/systemctl\n        [ -e \"${_locCnf}/var/mysql\" ] && cp -af ${_locCnf}/var/mysql /etc/init.d/mysql\n        chmod 755 /etc/init.d/mysql\n        _mrun 
\"update-rc.d mysql defaults\"\n      fi\n      _sql_conf_update\n      if [ ! -e \"/root/.proxy.cnf\" ]; then\n        _check_mysql_version\n        if [ \"${_DB_V}\" = \"5.7\" ]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Running ${_DB_SERVER} system tables check...\"\n          fi\n          rm -f /var/lib/mysql/mysql_upgrade_info &> /dev/null\n          if [ -x \"/usr/bin/mysql_upgrade\" ]; then\n            _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n            _mrun \"mysql_upgrade -u root --force\"\n          fi\n        fi\n      fi\n      _mrun \"apt-get autoclean -y\"\n      _apt_clean_update\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/.*default-table-type/#default-table-type/g\" /etc/mysql/my.cnf &> /dev/null\n        sed -i \"s/.*language/#language/g\" /etc/mysql/my.cnf &> /dev/null\n      fi\n      rm -f /var/lib/mysql/debian-*.flag &> /dev/null\n      rm -f /var/lib/mysql/mysql_upgrade_info &> /dev/null\n      _db_server_apt_cleanup\n      _install_with_aptitude_sql\n      _db_server_apt_cleanup\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/.*innodb_lazy_drop_table.*//g\" /etc/mysql/my.cnf &> /dev/null\n      fi\n      if [ -e \"/etc/mysql/my.cnf\" ]; then\n        sed -i \"s/.*default-table-type/#default-table-type/g\" /etc/mysql/my.cnf &> /dev/null\n        sed -i \"s/.*language/#language/g\" /etc/mysql/my.cnf &> /dev/null\n      fi\n      _CUSTOM_CONFIG_SQL=NO\n      sleep 8\n      _DB_SERVER_TEST=$(mysql -V 2>&1)\n      if [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib ${_DB_SERIES}.\" ]]; then\n        _check_mysql_version\n        if [ \"${_DB_V}\" = \"5.7\" ]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Running ${_DB_SERVER} system tables (1) upgrade...\"\n          fi\n          rm -f /var/lib/mysql/mysql_upgrade_info &> /dev/null\n          if [ -x \"/usr/bin/mysql_upgrade\" ]; then\n            _SQL_PSWD=$(cat 
/root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n            _mrun \"mysql_upgrade -u root --force\"\n            _mrun \"mysql_upgrade -u root --force\"\n          fi\n        fi\n      fi\n      _tune_memory_limits\n      _myCnf=\"/etc/mysql/my.cnf\"\n      _preCnf=\"${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW}\"\n      if [ -f \"${_myCnf}\" ]; then\n        _myCnfUpdate=NO\n        _myTopReinstall=NO\n        _myRstrd=NO\n        if [ ! -f \"${_preCnf}\" ]; then\n          mkdir -p ${_vBs}/dragon/t/\n          cp -af ${_myCnf} ${_preCnf}\n        fi\n        _diffMyTest=$(diff -w -B \\\n          -I userstat \\\n          -I innodb_buffer_pool_size \\\n          -I innodb_buffer_pool_instances \\\n          -I innodb_page_cleaners \\\n          -I tmp_table_size \\\n          -I max_heap_table_size \\\n          -I myisam_sort_buffer_size \\\n          -I key_buffer_size ${_myCnf} ${_preCnf} 2>&1)\n        if [ -z \"${_diffMyTest}\" ]; then\n          _myCnfUpdate=NO\n          _myTopReinstall=NO\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff0 empty\"\n          fi\n        else\n          _myCnfUpdate=YES\n          _myTopReinstall=YES\n          # _diffMyTest=$(echo -n ${_diffMyTest} | fmt -su -w 2500)\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff1 ${_diffMyTest}\"\n          fi\n        fi\n        if [[ \"${_diffMyTest}\" =~ \"innodb_buffer_pool_size\" ]]; then\n          _myCnfUpdate=NO\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff2 ${_diffMyTest}\"\n          fi\n        fi\n        if [[ \"${_diffMyTest}\" =~ \"No such file or directory\" ]]; then\n          _myCnfUpdate=NO\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff3 ${_diffMyTest}\"\n          fi\n        fi\n      fi\n      if [ ! -e \"/root/.run-to-excalibur.cnf\" ] \\\n        && [ ! 
-e \"/root/.run-to-daedalus.cnf\" ] \\\n        && [ ! -e \"/root/.run-to-chimaera.cnf\" ] \\\n        && [ ! -e \"/root/.run-to-beowulf.cnf\" ]; then\n        _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n        if [ \"${_myCnfUpdate}\" = \"NO\" ]; then\n          _myUptime=$(mysqladmin -u root version | grep -i uptime 2>&1)\n          _myUptime=$(echo -n ${_myUptime} | fmt -su -w 2500 2>&1)\n          _msg \"INFO: ${_DB_SERVER} ${_myUptime}\"\n        fi\n        if [ \"${_myCnfUpdate}\" = \"YES\" ]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Restarting ${_DB_SERVER} server...\"\n          fi\n          _mrun \"bash /var/xdrago/move_sql.sh\"\n          wait\n          _msg \"INFO: ${_DB_SERVER} server restart completed\"\n          _myRstrd=YES\n        fi\n        _check_mysql_version\n        if [ \"${_DB_V}\" = \"5.7\" ] || [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n          _CHECK_EXISTS=$(mysql -u root -e \"SELECT EXISTS(SELECT 1 FROM mysql.user WHERE user = 'drandom_2test')\" | grep \"0\" 2>&1)\n          if [[ \"${_CHECK_EXISTS}\" =~ \"0\" ]]; then\n            _CHECK_REPAIR=$(mysql -u root -e \"CREATE USER IF NOT EXISTS 'drandom_2test'@'localhost';\" 2>&1)\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              echo _CHECK_REPAIR 1 ${_CHECK_REPAIR}\n            fi\n            if [[ \"${_CHECK_REPAIR}\" =~ \"corrupted\" ]]; then\n              mysqlcheck -u root -A --auto-repair --silent\n              _check_mysql_version\n              mysql_upgrade -u root --force\n              mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN default_role;\"\n              mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN is_role;\"\n              mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN max_statement_time;\"\n              mysql_upgrade -u root --force\n            fi\n            _CHECK_REPAIR=$(mysql -u root -e \"CREATE USER IF NOT EXISTS 'drandom_2test'@'localhost';\" 
2>&1)\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              echo _CHECK_REPAIR 2 ${_CHECK_REPAIR}\n            fi\n          fi\n          mysql -u root -e \"SET GLOBAL innodb_flush_log_at_trx_commit=2;\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_flush_log_at_timeout=1;\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_stats_on_metadata=0;\" &> /dev/null\n          rm -f /etc/mysql/conf.d/mysqldump.cnf\n        fi\n      fi\n      [ -e \"/bin/systemctl\" ] && rm /bin/systemctl\n      if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n        _mrun \"csf -e\"\n        _mrun \"service lfd start\"\n        ### Linux kernel TCP SACK CVEs mitigation\n        ### CVE-2019-11477 SACK Panic\n        ### CVE-2019-11478 SACK Slowness\n        ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n        if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n          _SACK_TEST=$(ip6tables --list | grep tcpmss)\n          if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n            sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n            iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n            ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n            [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n          fi\n        fi\n      fi\n    fi\n  fi\n}\n\n_init_sql_root_credentials() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _init_sql_root_credentials\"\n  fi\n  if [ ! 
-e \"/root/.my.pass.txt\" ]; then\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Generating random password for ${_DB_SERVER}\"\n      fi\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Using default dummy password for ${_DB_SERVER}\"\n      fi\n    fi\n    touch /root/.my.pass.txt\n    chmod 0600 /root/.my.pass.txt &> /dev/null\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      _ESC_PASS=\"\"\n      _LEN_PASS=0\n      if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n        _PWD_CHARS=64\n      elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n        _PWD_CHARS=32\n      else\n        _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n        if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n          && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n          _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n        else\n          _PWD_CHARS=32\n        fi\n        if [ ! 
-z \"${_PWD_CHARS}\" ] \\\n          && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n          _PWD_CHARS=128\n        fi\n      fi\n      if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] \\\n        || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n        if [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n          _ESC_PASS=\"$(openssl rand -base64 64 | tr -d '\\n')\"\n        else\n          _RANDPASS_TEST=$(randpass -V 2>&1)\n          if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n            _ESC_PASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n          else\n            _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n            _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n            _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n          fi\n        fi\n        _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n        _LEN_PASS=$(echo ${#_ESC_PASS} 2>&1)\n      fi\n      if [ -z \"${_ESC_PASS}\" ] || [ \"${_LEN_PASS}\" -lt 9 ]; then\n        _ESC_PASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_PASS=$(echo -n \"${_ESC_PASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_PASS=$(_sanitize_string \"${_ESC_PASS}\" 2>&1)\n      fi\n    else\n      _ESC_PASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n    fi\n    echo \"${_ESC_PASS}\" > /root/.my.pass.txt\n  fi\n  if [ -e \"/root/.my.pass.txt\" ]; then\n    if [ \"${_STATUS}\" = \"INIT\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: New ${_DB_SERVER} root password in /root/.my.pass.txt\"\n      fi\n    fi\n  else\n    _msg \"EXIT on error: missing file with your ${_DB_SERVER} root password\"\n    cat <<EOF\n\n    It appears that you don't have the required file with your root SQL password.\n    Create this file first and run this script again:\n\n    echo \"your_working_SQL_ROOT_password\" > /root/.my.pass.txt\n    chmod 0600 /root/.my.pass.txt\n\nEOF\n    _msg \"EXIT on error: missing file with your ${_DB_SERVER} root 
password\"\n    _clean_pid_exit _init_sql_root_credentials_a\n  fi\n}\n\n_sql_root_credentials_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sql_root_credentials_update\"\n  fi\n  if [ ! -e \"/root/.my.cnf\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: ${_DB_SERVER} final setup\"\n    fi\n    #\n    # Let's just do what mysql_secure_installation does,\n    # so we can do it non-interactively:\n    # - remove anonymous users\n    # - remove remote root\n    # - remove test database\n    # - remove privileges on test database\n    # - set auto-generated root password\n    # - reload privileges table\n    #\n    if [ ! -e \"/root/.my.pass.txt\" ]; then\n      _init_sql_root_credentials\n    fi\n    if [ -e \"/root/.my.pass.txt\" ]; then\n      _check_mysql_version\n      myUsrTbl=\"mysql.user\"\n      if [ -z \"${_ESC_PASS}\" ]; then\n        _PXSWD=$(cat /root/.my.pass.txt 2>&1)\n      else\n        _PXSWD=\"${_ESC_PASS}\"\n      fi\n      _PASWD=$(echo -n ${_PXSWD} | tr -d \"\\n\" 2>&1)\n      _ESC=\"*.*\"\n      if [ -z \"${_PASWD}\" ]; then\n        _msg \"CRIT: PASWD for ${_DB_SERVER} is empty!\"\n      elif [ -z \"${_DB_V}\" ]; then\n        _msg \"CRIT: _DB_V for ${_DB_SERVER} is empty!\"\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DBEF: ${_DB_SERVER} PASWD ${_PASWD}\"\n          mysql -u root -e \"SHOW GRANTS FOR 'root'@'localhost';\"\n          if [ \"${_SQL_DEBUG_MODE}\" = \"YES\" ]; then\n            mysql -u root -e \"SHOW GRANTS FOR 'root'@'127.0.0.1';\"\n            mysql -u root -e \"SHOW GRANTS FOR 'root'@'::1';\"\n            if [ \"${_DB_V}\" = \"5.7\" ]; then\n              mysql -u root -e \"SELECT host,user,authentication_string FROM mysql.user;\"\n            fi\n          fi\n        fi\n        mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'localhost';\"\n        mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'localhost' WITH GRANT 
OPTION;\"\n        if [ \"${_DB_V}\" = \"5.7\" ]; then\n          mysql -u root -e \"ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${_PASWD}';\"\n        else\n          mysql -u root -e \"ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY '${_PASWD}';\"\n        fi\n        echo \"[client]\" > /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[mysql]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[mysqldump]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[mydumper]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[myloader]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[boa]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[barracuda]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[octopus]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[mycnfup]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[syncpass]\" >> 
/root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        echo \"[xboa]\" >> /root/.my.cnf\n        echo \"user=root\" >> /root/.my.cnf\n        echo \"password=${_PASWD}\" >> /root/.my.cnf\n        echo \" \" >> /root/.my.cnf\n        chmod 0600 /root/.my.cnf\n        echo \"db=mysql\" > /root/.mytop\n        chmod 0600 /root/.mytop\n        mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'127.0.0.1';\"\n        mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'::1';\"\n        mysql -u root -e \"CREATE USER IF NOT EXISTS 'root'@'${_MY_OWNIP}';\"\n        mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'127.0.0.1' WITH GRANT OPTION;\"\n        mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'::1' WITH GRANT OPTION;\"\n        mysql -u root -e \"GRANT ALL ON ${_ESC} TO 'root'@'${_MY_OWNIP}' WITH GRANT OPTION;\"\n        if [ \"${_DB_V}\" = \"5.7\" ]; then\n          mysql -u root -e \"ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH mysql_native_password BY '${_PASWD}';\"\n          mysql -u root -e \"ALTER USER 'root'@'::1' IDENTIFIED WITH mysql_native_password BY '${_PASWD}';\"\n          mysql -u root -e \"ALTER USER 'root'@'${_MY_OWNIP}' IDENTIFIED WITH mysql_native_password BY '${_PASWD}';\"\n        else\n          mysql -u root -e \"ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH caching_sha2_password BY '${_PASWD}';\"\n          mysql -u root -e \"ALTER USER 'root'@'::1' IDENTIFIED WITH caching_sha2_password BY '${_PASWD}';\"\n          mysql -u root -e \"ALTER USER 'root'@'${_MY_OWNIP}' IDENTIFIED WITH caching_sha2_password BY '${_PASWD}';\"\n        fi\n        mysqladmin -u root flush-privileges &> /dev/null\n        mysqladmin -u root flush-hosts &> /dev/null\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DAFT: ${_DB_SERVER} PASWD ${_PASWD}\"\n          mysql -u root -e \"SHOW GRANTS FOR 'root'@'localhost';\"\n          if 
[ \"${_SQL_DEBUG_MODE}\" = \"YES\" ]; then\n            mysql -u root -e \"SHOW GRANTS FOR 'root'@'127.0.0.1';\"\n            mysql -u root -e \"SHOW GRANTS FOR 'root'@'::1';\"\n            if [ \"${_DB_V}\" = \"5.7\" ]; then\n              mysql -u root -e \"SELECT host,user,authentication_string FROM mysql.user;\"\n            fi\n          fi\n        fi\n        if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n          || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n          || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ]; then\n          echo \"skip-name-resolve\" > /etc/mysql/skip-name-resolve.txt\n        else\n          sed -i \"s/.*skip-name-resolve/#skip-name-resolve/g\" /etc/mysql/my.cnf &> /dev/null\n        fi\n        mysqladmin -u root flush-privileges &> /dev/null\n        mysqladmin -u root flush-hosts &> /dev/null\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Restarting ${_DB_SERVER} server...\"\n        fi\n        _mrun \"bash /var/xdrago/move_sql.sh\"\n        wait\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: ${_DB_SERVER} setup completed\"\n          _msg \"INFO: You can now log in as root by typing just 'mysql'\"\n        fi\n      fi\n    fi\n  else\n    if [ \"${_THIS_DB_HOST}\" = \"localhost\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"127.0.0.1\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"PROXYSQL\" ] \\\n      || [ \"${_THIS_DB_HOST}\" = \"FQDN\" ]; then\n      if [ ! -e \"/root/.mysql.no.new.password.cnf\" ]; then\n        if [ -e \"/root/.mysql.yes.new.password.cnf\" ] \\\n          || [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n          _forced_mysql_root_password_update\n        fi\n      fi\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/functions/system.sh.inc",
    "content": "\nexport _tRee=dev\nexport _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n')\"\n# Fall back to FQDN if /etc/hostname is missing or empty.\n[ -z \"${_hName}\" ] && export _hName=\"$(hostname -f 2>/dev/null)\"\n\n_pkg_installed() {\n  # Return 0 if package is installed, 1 otherwise.\n  dpkg-query -W -f='${Status}' \"$1\" 2>/dev/null | grep -qx 'install ok installed'\n}\n\n###\n### Find the best Devuan APT sources mirror\n###\n_find_fast_devuan_mirror() {\n  _ffDevuan=\"$(which ffdevuan)\"\n  if [ -x \"${_ffDevuan}\" ]; then\n    bash ${_ffDevuan} &> /dev/null\n    wait\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/log/boa/devuan-fast-mirrors-list.txt\"\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFLIST=$(grep \"merged\" ${_ffList} 2>&1)\n    fi\n    if [ ! -e \"${_ffList}\" ] || [[ ! \"${_BROKEN_FFLIST}\" =~ \"merged\" ]]; then\n      echo \"https://mirrors.dotsrc.org/devuan/merged\"  > ${_ffList}\n      echo \"https://mirror.akardam.net/devuan/merged\" >> ${_ffList}\n      echo \"https://mirror.hootsoftware.com/devuan/merged\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]] \\\n        || [[ ! \"${_BROKEN_FFLIST}\" =~ \"merged\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        export _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && export _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n      else\n        export _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n      fi\n    else\n      export _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n    fi\n  else\n    export _USE_MIR=\"https://mirrors.dotsrc.org/devuan/merged\"\n  fi\n  echo \"${_USE_MIR}\"\n}\n\n_if_fix_iptables_symlinks() {\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n  if [ -x \"/sbin/iptables\" ] && [ ! 
-e \"/usr/sbin/iptables\" ]; then\n    ln -sfn /sbin/iptables /usr/sbin/iptables\n  fi\n  if [ -x \"/usr/sbin/iptables\" ] && [ ! -e \"/sbin/iptables\" ]; then\n    ln -sfn /usr/sbin/iptables /sbin/iptables\n  fi\n  if [ -x \"/sbin/iptables-save\" ] && [ ! -e \"/usr/sbin/iptables-save\" ]; then\n    ln -sfn /sbin/iptables-save /usr/sbin/iptables-save\n  fi\n  if [ -x \"/usr/sbin/iptables-save\" ] && [ ! -e \"/sbin/iptables-save\" ]; then\n    ln -sfn /usr/sbin/iptables-save /sbin/iptables-save\n  fi\n  if [ -x \"/sbin/iptables-restore\" ] && [ ! -e \"/usr/sbin/iptables-restore\" ]; then\n    ln -sfn /sbin/iptables-restore /usr/sbin/iptables-restore\n  fi\n  if [ -x \"/usr/sbin/iptables-restore\" ] && [ ! -e \"/sbin/iptables-restore\" ]; then\n    ln -sfn /usr/sbin/iptables-restore /sbin/iptables-restore\n  fi\n  if [ -x \"/sbin/ip6tables\" ] && [ ! -e \"/usr/sbin/ip6tables\" ]; then\n    ln -sfn /sbin/ip6tables /usr/sbin/ip6tables\n  fi\n  if [ -x \"/usr/sbin/ip6tables\" ] && [ ! -e \"/sbin/ip6tables\" ]; then\n    ln -sfn /usr/sbin/ip6tables /sbin/ip6tables\n  fi\n  if [ -x \"/sbin/ip6tables-save\" ] && [ ! -e \"/usr/sbin/ip6tables-save\" ]; then\n    ln -sfn /sbin/ip6tables-save /usr/sbin/ip6tables-save\n  fi\n  if [ -x \"/usr/sbin/ip6tables-save\" ] && [ ! -e \"/sbin/ip6tables-save\" ]; then\n    ln -sfn /usr/sbin/ip6tables-save /sbin/ip6tables-save\n  fi\n  if [ -x \"/sbin/ip6tables-restore\" ] && [ ! -e \"/usr/sbin/ip6tables-restore\" ]; then\n    ln -sfn /sbin/ip6tables-restore /usr/sbin/ip6tables-restore\n  fi\n  if [ -x \"/usr/sbin/ip6tables-restore\" ] && [ ! -e \"/sbin/ip6tables-restore\" ]; then\n    ln -sfn /usr/sbin/ip6tables-restore /sbin/ip6tables-restore\n  fi\n  ###\n  ### Fix for iptables paths backward compatibility\n  ###\n}\n\n#\n# Re-install curl if broken.\n_if_reinstall_curl_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_reinstall_curl_src\"\n  fi\n  _CURL_VRN=8.20.0\n  if ! 
command -v lsb_release &> /dev/null; then\n    _mrun \"apt-get update -qq\"\n    _mrun \"apt-get install lsb-release ${_aptYesUnth} -qq\"\n  fi\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  [ \"${_OS_CODE}\" = \"wheezy\" ] && _CURL_VRN=7.50.1\n  [ \"${_OS_CODE}\" = \"jessie\" ] && _CURL_VRN=7.71.1\n  [ \"${_OS_CODE}\" = \"stretch\" ] && _CURL_VRN=8.2.1\n  _isCurl=$(curl --version 2>&1)\n  if [[ ! \"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n    _msg \"OOPS: cURL is broken! Re-installing..\"\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    echo \"curl install\" | dpkg --set-selections 2> /dev/null\n    _apt_clean_update\n    # Check for libssl1.0-dev and remove conditionally\n    if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n      _mrun \"apt-get remove libssl1.0-dev -y --purge --auto-remove -qq\"\n    fi\n    _mrun \"apt-get autoremove -y\"\n    _mrun \"apt-get install libssl-dev ${_aptYesUnth} -qq\"\n    _mrun \"apt-get build-dep curl ${_aptYesUnth}\"\n    if [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      _mrun \"apt-get install curl --reinstall ${_aptYesUnth} -qq\"\n    fi\n    if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      _msg \"INFO: Installing curl from sources...\"\n      mkdir -p /var/opt\n      rm -rf /var/opt/curl*\n      cd /var/opt\n      _get_dev_src_wget \"curl-${_CURL_VRN}.tar.gz\"\n      if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n        && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n        _SSL_BINARY=/usr/local/ssl3/bin/openssl\n      else\n        _SSL_BINARY=/usr/local/ssl/bin/openssl\n      fi\n      if [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n        _SSL_PATH=\"/usr/local/ssl3\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n      else\n        _SSL_PATH=\"/usr/local/ssl\"\n        _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n      fi\n      _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n\n      if [ -e \"${_PKG_CONFIG_PATH}\" ] \\\n        && [ -e \"/var/opt/curl-${_CURL_VRN}\" ]; then\n        cd /var/opt/curl-${_CURL_VRN}\n        _mrun \"LIBS=\\\"-ldl -lpthread\\\" PKG_CONFIG_PATH=\\\"${_PKG_CONFIG_PATH}\\\" ./configure \\\n          --with-openssl \\\n          --with-zlib=/usr \\\n          --prefix=/usr/local\"\n        _mrun \"make -j $(nproc) --quiet\"\n        _mrun \"make --quiet install\"\n        ldconfig 2> /dev/null\n      fi\n    fi\n    if [ -f \"/usr/local/bin/curl\" ]; then\n      _isCurl=$(/usr/local/bin/curl --version 2>&1)\n      if [[ ! 
\"${_isCurl}\" =~ \"OpenSSL\" ]] || [ -z \"${_isCurl}\" ]; then\n        _msg \"ERRR: /usr/local/bin/curl is broken\"\n      else\n        _msg \"GOOD: /usr/local/bin/curl works\"\n      fi\n    fi\n  fi\n}\n\n#\n# Install sysvinit with apt.\n_sysvinit_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sysvinit_install\"\n  fi\n  _apt_clean_update\n  echo \"sysvinit-core install\" | dpkg --set-selections &> /dev/null\n  echo \"sysvinit-utils install\" | dpkg --set-selections &> /dev/null\n  _mrun \"${_INITINS} sysvinit-core\"\n  _mrun \"${_INITINS} sysvinit-utils\"\n  echo \"sysvinit-core hold\" | dpkg --set-selections &> /dev/null\n  echo \"sysvinit-utils hold\" | dpkg --set-selections &> /dev/null\n}\n\n#\n# Remove systemd with apt.\n_systemd_remove_apt_cmd() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _systemd_remove_apt_cmd\"\n  fi\n  _apt_clean_update\n  _mrun \"apt-get purge systemd libnss-systemd -y -qq\"\n  _mrun \"apt-get autoremove --purge -y --purge --auto-remove -qq\"\n  _mrun \"apt-get autoclean -y --purge --auto-remove -qq\"\n  _mrun \"apt-get autoremove -y --purge --auto-remove -qq\"\n  _apt_clean_update\n}\n\n#\n# Detect and prepare for major OS upgrade.\n_define_loc_osr() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _define_loc_osr\"\n  fi\n  _apt_clean_update_no_releaseinfo_change\n  _mrun \"apt-get upgrade ${_nrmUpArg}\"\n  _mrun \"apt-get install lsb-release ${_nrmUpArg}\"\n  _OS_DIST=$(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _LOC_OS_CODE=\n  _NEW_OS_CODE=\n  _TGT_OSN=\n  _MSG_LOC=\n\n  ###\n  ### Debian to Devuan\n  ###\n  if [ \"${_JESSIE_TO_BEOWULF}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _LOC_OS_CODE=jessie\n    _NEW_OS_CODE=beowulf\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Debian Jessie to Devuan Beowulf\"\n  fi\n  if [ \"${_STRETCH_TO_BEOWULF}\" = 
\"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    _LOC_OS_CODE=stretch\n    _NEW_OS_CODE=beowulf\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Debian Stretch to Devuan Beowulf\"\n  fi\n  if [ \"${_BUSTER_TO_BEOWULF}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _LOC_OS_CODE=buster\n    _NEW_OS_CODE=beowulf\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Debian Buster to Devuan Beowulf\"\n  fi\n  if [ \"${_BULLSEYE_TO_CHIMAERA}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _LOC_OS_CODE=bullseye\n    _NEW_OS_CODE=chimaera\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Debian Bullseye to Devuan Chimaera\"\n  fi\n  if [ \"${_BOOKWORM_TO_DAEDALUS}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _LOC_OS_CODE=bookworm\n    _NEW_OS_CODE=daedalus\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Debian Bookworm to Devuan Daedalus\"\n    [ ! -e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  fi\n  if [ \"${_TRIXIE_TO_EXCALIBUR}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"trixie\" ]; then\n    _LOC_OS_CODE=trixie\n    _NEW_OS_CODE=excalibur\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Debian Trixie to Devuan Excalibur\"\n    [ ! -e \"/root/.top-excalibur.cnf\" ] && touch /root/.top-excalibur.cnf\n  fi\n\n  ###\n  ### Devuan to Devuan\n  ###\n  if [ \"${_BEOWULF_TO_CHIMAERA}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _LOC_OS_CODE=beowulf\n    _NEW_OS_CODE=chimaera\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Devuan Beowulf to Devuan Chimaera\"\n  fi\n  if [ \"${_CHIMAERA_TO_DAEDALUS}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _LOC_OS_CODE=chimaera\n    _NEW_OS_CODE=daedalus\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Devuan Chimaera to Devuan Daedalus\"\n    [ ! 
-e \"/root/.top-daedalus.cnf\" ] && touch /root/.top-daedalus.cnf\n  fi\n  if [ \"${_DAEDALUS_TO_EXCALIBUR}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n    _LOC_OS_CODE=daedalus\n    _NEW_OS_CODE=excalibur\n    _TGT_OSN=Devuan\n    _MSG_LOC=\"Devuan Daedalus to Devuan Excalibur\"\n    [ ! -e \"/root/.top-excalibur.cnf\" ] && touch /root/.top-excalibur.cnf\n  fi\n\n  ###\n  ### Debian to Debian\n  ###\n  if [ \"${_BOOKWORM_TO_TRIXIE}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _LOC_OS_CODE=bookworm\n    _NEW_OS_CODE=trixie\n    _TGT_OSN=Debian\n    _MSG_LOC=\"Debian Bookworm to Debian Trixie\"\n  fi\n  if [ \"${_BULLSEYE_TO_BOOKWORM}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _LOC_OS_CODE=bullseye\n    _NEW_OS_CODE=bookworm\n    _TGT_OSN=Debian\n    _MSG_LOC=\"Debian Bullseye to Debian Bookworm\"\n  fi\n  if [ \"${_BUSTER_TO_BULLSEYE}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _LOC_OS_CODE=buster\n    _NEW_OS_CODE=bullseye\n    _TGT_OSN=Debian\n    _MSG_LOC=\"Debian Buster to Debian Bullseye\"\n  fi\n  if [ \"${_STRETCH_TO_BUSTER}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    _LOC_OS_CODE=stretch\n    _NEW_OS_CODE=buster\n    _TGT_OSN=Debian\n    _MSG_LOC=\"Debian Stretch to Debian Buster\"\n  fi\n  if [ \"${_JESSIE_TO_STRETCH}\" = \"YES\" ] \\\n    && [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _LOC_OS_CODE=jessie\n    _NEW_OS_CODE=stretch\n    _TGT_OSN=Debian\n    _MSG_LOC=\"Debian Jessie to Debian Stretch\"\n  fi\n}\n\n#\n# Detect and prepare for major OS upgrade.\n_if_to_do_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_to_do_fix\"\n  fi\n  _DO_FIX=\n\n  ###\n  ### Debian to Devuan\n  ###\n  if [ \"${_JESSIE_TO_BEOWULF}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"jessie\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_STRETCH_TO_BEOWULF}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"stretch\" ]; 
then\n    _DO_FIX=YES\n  fi\n  if [ \"${_BUSTER_TO_BEOWULF}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"buster\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_BULLSEYE_TO_CHIMAERA}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"bullseye\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_BOOKWORM_TO_DAEDALUS}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"bookworm\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_TRIXIE_TO_EXCALIBUR}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"trixie\" ]; then\n    _DO_FIX=YES\n  fi\n\n  ###\n  ### Devuan to Devuan\n  ###\n  if [ \"${_BEOWULF_TO_CHIMAERA}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"beowulf\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_CHIMAERA_TO_DAEDALUS}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"chimaera\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_DAEDALUS_TO_EXCALIBUR}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"daedalus\" ]; then\n    _DO_FIX=YES\n  fi\n\n  ###\n  ### Debian to Debian\n  ###\n  if [ \"${_BOOKWORM_TO_TRIXIE}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"bookworm\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_BULLSEYE_TO_BOOKWORM}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"bullseye\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_BUSTER_TO_BULLSEYE}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"buster\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_STRETCH_TO_BUSTER}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"stretch\" ]; then\n    _DO_FIX=YES\n  fi\n  if [ \"${_JESSIE_TO_STRETCH}\" = \"YES\" ] \\\n    && [ \"${_LOC_OS_CODE}\" = \"jessie\" ]; then\n    _DO_FIX=YES\n  fi\n}\n\n#\n# Check system CPU power.\n_count_cpu() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _count_cpu\"\n  fi\n  _CPU_INFO=\"$(grep -c processor /proc/cpuinfo)\"\n  _CPU_INFO=${_CPU_INFO//[^0-9]/}\n  _NPROC_TEST=\"$(which nproc)\"\n  if [ -z \"${_NPROC_TEST}\" ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  else\n    _CPU_NR=$(nproc 2>&1)\n  fi\n  
_CPU_NR=${_CPU_NR//[^0-9]/}\n  if [ ! -z \"${_CPU_NR}\" ] \\\n    && [ ! -z \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_NR}\" -gt \"${_CPU_INFO}\" ] \\\n    && [ \"${_CPU_INFO}\" -gt 0 ]; then\n    _CPU_NR=\"${_CPU_INFO}\"\n  fi\n  if [ -z \"${_CPU_NR}\" ] || [ \"${_CPU_NR}\" -lt 1 ]; then\n    _CPU_NR=1\n  fi\n}\n\n#\n# Discover system-manufacturer.\n_check_system_manufacturer() {\n  # Extract manufacturer string\n  _SYS_MANUFACTURER_RAW=$(dmidecode -s system-manufacturer 2>/dev/null)\n\n  # Trim leading/trailing whitespace, convert spaces to underscores\n  _SYS_MANUFACTURER_CLEAN=$(printf \"%s\" \"${_SYS_MANUFACTURER_RAW}\" \\\n    | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' -e 's/[[:space:]]\\+/_/g')\n\n  # Normalise: keep only A-Za-z0-9_\n  _SYS_MANUFACTURER=$(printf \"%s\" \"${_SYS_MANUFACTURER_CLEAN}\" | tr -cd 'A-Za-z0-9_')\n\n  # Enforce max length with safe underscore-aware truncation\n  _MAX_LEN=32\n  if [ \"${#_SYS_MANUFACTURER}\" -gt \"${_MAX_LEN}\" ]; then\n    _CUT=$(printf \"%s\" \"${_SYS_MANUFACTURER}\" | cut -c1-\"${_MAX_LEN}\")\n    _SAFE_CUT=$(printf \"%s\" \"${_CUT}\" | sed 's/_[^_]*$//')\n    if [ -n \"${_SAFE_CUT}\" ]; then\n      _SYS_MANUFACTURER=\"${_SAFE_CUT}\"\n    else\n      _SYS_MANUFACTURER=\"${_CUT}\"\n    fi\n  fi\n\n  # Secondary variable: lowercase\n  _SYS_MANUFACTURER_LC=$(printf \"%s\" \"${_SYS_MANUFACTURER}\" | tr 'A-Z' 'a-z')\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"_SYS_MANUFACTURER_RAW: ${_SYS_MANUFACTURER_RAW}\"\n    _msg \"_SYS_MANUFACTURER_CLEAN: ${_SYS_MANUFACTURER_CLEAN}\"\n    _msg \"_SYS_MANUFACTURER: ${_SYS_MANUFACTURER}\"\n    _msg \"_SYS_MANUFACTURER_LC: ${_SYS_MANUFACTURER_LC}\"\n  fi\n}\n\n#\n# Make sure that we are root.\n_if_running_as_root_barracuda() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_running_as_root_barracuda\"\n  fi\n  if [ \"$(id -u)\" -eq 0 ]; then\n    chmod a+w /dev/null\n    rm -rf /tmp/drush_make_tmp*\n    rm -rf /tmp/make_tmp*\n    rm -f 
/tmp/pm-updatecode*\n    rm -f /tmp/cache.inc*\n    mkdir -p ${_pthLog}\n    find /etc/[a-z]*\\.lock -maxdepth 1 -type f -exec rm -rf {} \\; &> /dev/null\n    # Check if dmidecode is available\n    if ! command -v dmidecode &> /dev/null; then\n      _apt_clean_update\n      _mrun \"${_INITINS} dmidecode\"\n    fi\n    _check_system_manufacturer\n    if [ -n \"${_SYS_MANUFACTURER_LC}\" ]; then\n      if  [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n        || [ ! -e \"/data/u\" ] \\\n        || [ ! -e \"/var/xdrago\" ]; then\n        if [ ! -e \"/var/log/boa/${_SYS_MANUFACTURER_LC}_vm_postinstall.pid\" ]; then\n          touch /var/log/boa/${_SYS_MANUFACTURER_LC}_vm.pid\n        fi\n      fi\n    fi\n    # Check for Amazon EC2 in the system manufacturer field\n    if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n      _VMFAMILY=\"AWS\"\n    fi\n    _VM_TEST=\"$(uname -a)\"\n    if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n      _VMFAMILY=\"VS\"\n      touch /var/log/boa/cloud_vhost.pid\n      if [ ! -e \"/etc/apt/preferences.d/fuse\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: fuse\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/fuse\n        _apt_clean_update\n      fi\n      if [ ! -e \"/etc/apt/preferences.d/udev\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: udev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/udev\n        _apt_clean_update\n      fi\n      if [ ! 
-e \"/etc/apt/preferences.d/makedev\" ]; then\n        mkdir -p /etc/apt/preferences.d/\n        echo -e 'Package: makedev\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/makedev\n        _apt_clean_update\n      fi\n      _mrun \"apt-get remove fuse -y --purge --auto-remove -qq\"\n      _mrun \"apt-get remove udev -y --purge --auto-remove -qq\"\n      _mrun \"apt-get remove makedev -y --purge --auto-remove -qq\"\n      if [ -e \"/sbin/hdparm\" ]; then\n        _mrun \"apt-get remove hdparm -y --purge --auto-remove -qq\"\n      fi\n      if [ -d \"/etc/webmin\" ]; then\n        _mrun \"dpkg --configure --force-all -a\"\n        _apt_clean_update\n        _mrun \"apt-get remove webmin -y --purge --auto-remove -qq\"\n        rm -rf /usr/share/webmin\n      fi\n      rm -f /etc/apt/sources.list.d/ksplice.list\n      rm -f /etc/apt/sources.list.d/longview.list\n      rm -f /etc/apt/sources.list.d/webmin.list\n    fi\n    sleep 1\n  else\n    _msg \"ERROR: This script should be run as a root user\"\n    _clean_pid_exit\n  fi\n}\n\n#\n# Set xterm.\n_set_xterm() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _set_xterm\"\n  fi\n  if [ -e \"/root/.bashrc\" ]; then\n    _XTERM_TEST=$(grep \"export TERM\" /root/.bashrc 2>&1)\n    if [[ \"${_XTERM_TEST}\" =~ \"export TERM\" ]]; then\n      sed -i \"s/.*export TERM=.*//g\" /root/.bashrc\n      wait\n    fi\n  fi\n}\n\n#\n# Kill nash-hotplug.\n_kill_nash() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _kill_nash\"\n  fi\n  _VM_TEST=\"$(uname -a)\"\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _DO_NOTHING=YES\n  else\n    _NASH_TEST=$(grep nash-hotplug /etc/rc.local 2>&1)\n    if [[ ! 
\"${_NASH_TEST}\" =~ \"nash-hotplug\" ]]; then\n      cp -af /etc/rc.local /etc/rc.local.bak.${_NOW}\n      sed -i \"s/exit 0//g\" /etc/rc.local &> /dev/null\n      wait\n      echo \"killall -9 nash-hotplug\" >> /etc/rc.local\n      echo \"exit 0\" >> /etc/rc.local\n      killall -9 nash-hotplug &> /dev/null\n    fi\n  fi\n}\n\n#\n# Cleanup for Postfix.\n_fix_postfix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_postfix\"\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _mrun \"apt-get remove exim4 -y --purge --auto-remove -qq\"\n    _mrun \"apt-get remove exim4-base -y --purge --auto-remove -qq\"\n    _mrun \"apt-get remove exim4-config -y --purge --auto-remove -qq\"\n    _mrun \"apt-get remove sendmail -y --purge --auto-remove -qq\"\n    _mrun \"apt-get remove sendmail-base -y --purge --auto-remove -qq\"\n    _mrun \"apt-get remove sendmail-bin -y --purge --auto-remove -qq\"\n    _mrun \"apt-get remove sendmail-cf -y --purge --auto-remove -qq\"\n    rm -f /etc/aliases\n    rm -rf /etc/mail\n    killall -9 sendmail &> /dev/null\n  else\n    _POSTFIX_TEST=$(grep \"fatal: open lock file\" /var/log/mail.log 2>&1)\n    if [[ \"${_POSTFIX_TEST}\" =~ \"fatal: open lock file\" ]]; then\n      _mrun \"dpkg --configure --force-all -a\"\n      _apt_clean_update\n      _mrun \"apt-get remove postfix -y -qq\"\n      echo > /var/log/mail.log\n    fi\n  fi\n  if [ ! -e \"/etc/aliases\" ]; then\n    echo \"postmaster:    root\" > /etc/aliases\n    newaliases &> /dev/null\n  fi\n}\n\n#\n# Fix FTPS PAM where required.\n_fix_ftps_pam() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_ftps_pam\"\n  fi\n  if [ ! 
-e \"/etc/ftpusers\" ]; then\n    cp -af ${_locCnf}/ftpd/ftpusers /etc/ftpusers\n  fi\n  sed -i \"s/pam_stack.so/pam_unix.so/g\" /etc/pam.d/pure-ftpd &> /dev/null\n  wait\n  sed -i \"s/ service=system-auth//g\"    /etc/pam.d/pure-ftpd &> /dev/null\n  wait\n}\n\n#\n# Fix FTPS and SFTP access on modern systems.\n_sftp_ftps_modern_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sftp_ftps_modern_fix\"\n  fi\n  _LSHELL_PATH_TEST=$(grep \"/usr/bin/lshell\" /etc/shells 2>&1)\n  if [[ ! \"${_LSHELL_PATH_TEST}\" =~ \"/usr/bin/lshell\" ]]; then\n    echo \"/usr/bin/lshell\" >> /etc/shells\n  fi\n  if [ ! -e \"${_pthLog}/mss-build-${_MSS_VRN}-${_xSrl}-${_X_VERSION}.log\" ] \\\n    || [ ! -e \"/etc/ssh/sftp_config\" ] \\\n    || [ ! -e \"/usr/bin/mysecureshell\" ] \\\n    || [ \"${_SSL_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    if [ \"${_MSS_BUILD}\" != \"YES\" ]; then\n      _msg \"INFO: Installing MySecureShell ${_MSS_VRN}...\"\n      cd /var/opt\n      rm -rf mysecureshell*\n      _get_dev_src \"mysecureshell-${_MSS_VRN}.tar.gz\"\n      cd /var/opt/mysecureshell\n      _mrun \"bash ./configure\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"sh ./install.sh yesall\"\n      touch ${_pthLog}/mss-build-${_MSS_VRN}-${_xSrl}-${_X_VERSION}.log\n      cp -af ${_locCnf}/var/sftp_config /etc/ssh/sftp_config\n      _mrun \"service ssh restart\"\n      _MSS_BUILD=YES\n    fi\n  fi\n  if [ -e \"/usr/bin/mysecureshell\" ] && [ -e \"/etc/ssh/sftp_config\" ]; then\n    _MSS_TEST=$(grep \"lshell\" /etc/passwd 2>&1)\n    if [[ \"${_MSS_TEST}\" =~ \"lshell\" ]]; then\n      sed -i \"s/usr\\/.*\\/lshell/usr\\/bin\\/mysecureshell/g\" /etc/passwd &> /dev/null\n      wait\n    fi\n    _MSS_TEST=$(grep \"MySecureShell\" /etc/passwd 2>&1)\n    if [[ \"${_MSS_TEST}\" =~ \"MySecureShell\" ]]; then\n      sed -i \"s/usr\\/.*\\/MySecureShell/usr\\/bin\\/mysecureshell/g\" /etc/passwd &> /dev/null\n      wait\n    fi\n  fi\n  _MSS_PATH_TEST=$(grep 
\"/usr/bin/mysecureshell\" /etc/shells 2>&1)\n  if [[ \"${_MSS_PATH_TEST}\" =~ \"/usr/bin/mysecureshell\" ]]; then\n    _DO_NOTHING=YES\n  else\n    echo \"/usr/bin/mysecureshell\" >> /etc/shells\n  fi\n  if [ ! -e \"${_pthLog}/fixed-sftp-idle.log\" ]; then\n    sed -i \"s/IdleTimeOut.*/IdleTimeOut            15m/g\" /etc/ssh/sftp_config &> /dev/null\n    _mrun \"service ssh restart\"\n    touch ${_pthLog}/fixed-sftp-idle.log\n  fi\n}\n\n#\n# Disable Old Purge Cruft Machine.\n_disable_old_purge_cruft_machine() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _disable_old_purge_cruft_machine\"\n  fi\n  _if_hosted_sys\n  if [ \"${_hostedSys}\" = \"YES\" ]; then\n    sed -i \"s/.*purge_cruft.*//g\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"/^$/d\" /etc/crontab &> /dev/null\n    wait\n  fi\n}\n\n#\n# Update BOA INI templates.\n_boa_ini_tpl_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _boa_ini_tpl_update\"\n  fi\n  mkdir -p /data/conf\n  if [ -e \"${_locCnf}/ini/default.boa_platform_control.ini\" ]; then\n    cp -af ${_locCnf}/ini/default.boa_platform_control.ini /data/conf/default.boa_platform_control.ini\n    rm -f /var/xdrago/conf/default.boa_platform_control.ini\n  fi\n  if [ -e \"${_locCnf}/ini/default.boa_site_control.ini\" ]; then\n    cp -af ${_locCnf}/ini/default.boa_site_control.ini /data/conf/default.boa_site_control.ini\n    rm -f /var/xdrago/conf/default.boa_site_control.ini\n  fi\n}\n\n#\n# Update MC Panels INI.\n_mc_panels_ini_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _mc_panels_ini_update\"\n  fi\n  if [ -d \"/root/.config/mc\" ]; then\n    _SORT_ORDER_TEST=EMPTY\n    if [ -f \"/root/.config/mc/panels.ini\" ]; then\n      _SORT_ORDER_TEST=\"$(grep sort_order /root/.config/mc/panels.ini)\"\n    fi\n    if [ ! -f \"/root/.config/mc/panels.ini\" ] \\\n      || [[ ! 
\"${_SORT_ORDER_TEST}\" =~ \"sort_order\" ]]; then\n      if [ -e \"${_locCnf}/ini/panels.ini\" ]; then\n        cat ${_locCnf}/ini/panels.ini >> /root/.config/mc/panels.ini\n      fi\n    fi\n  fi\n}\n\n#\n# Update global.inc Config.\n_global_inc_conf_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _global_inc_conf_update\"\n  fi\n  _GL_D=/data/conf/global\n  [ ! -d \"${_GL_D}\" ] && mkdir -p ${_GL_D}\n  _DR_V=\"11 10 9 8 7 6\"\n  for e in ${_DR_V}; do\n    if [ -e \"${_GL_D}/global-${e}.inc\" ]; then\n      cp -af ${_GL_D}/global-${e}.inc ${_GL_D}/global-${e}.inc-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    fi\n    cp -af ${_locCnf}/global/global-${e}.inc ${_GL_D}/global-${e}.inc\n  done\n  cp -af /data/conf/global.inc /data/conf/global.inc-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n  cp -af ${_locCnf}/global/global.inc /data/conf/global.inc\n  if [ ! -z \"${_SPEED_VALID_MAX}\" ]; then\n    sed -i \"s/3600/${_SPEED_VALID_MAX}/g\" /data/conf/global.inc &> /dev/null\n    wait\n  fi\n  cp -af ${_locCnf}/global/global-main.inc ${_GL_D}/global-main.inc\n  cp -af ${_locCnf}/global/global-ini.inc ${_GL_D}/global-ini.inc\n  cp -af ${_locCnf}/global/global-mode.inc ${_GL_D}/global-mode.inc\n  cp -af ${_locCnf}/global/global-settings.inc ${_GL_D}/global-settings.inc\n  cp -af ${_locCnf}/global/global-front-end.inc ${_GL_D}/global-front-end.inc\n  if [ ! 
-z \"${_SPEED_VALID_MAX}\" ]; then\n    sed -i \"s/3600/${_SPEED_VALID_MAX}/g\" ${_GL_D}/global-front-end.inc &> /dev/null\n  fi\n  cp -af ${_locCnf}/global/global-if-valkey.inc ${_GL_D}/global-if-valkey.inc\n  cp -af ${_locCnf}/global/global-valkey.inc ${_GL_D}/global-valkey.inc\n  cp -af ${_locCnf}/global/global-if-redis.inc ${_GL_D}/global-if-redis.inc\n  cp -af ${_locCnf}/global/global-redis.inc ${_GL_D}/global-redis.inc\n  cp -af ${_locCnf}/global/global-newrelic.inc ${_GL_D}/global-newrelic.inc\n  cp -af ${_locCnf}/global/global-extra.inc ${_GL_D}/global-extra.inc\n}\n\n#\n# Fix this on upgrade.\n_fix_on_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_on_upgrade\"\n  fi\n  sed -i \"s/loglevel.*/loglevel warning/g\" /etc/redis/redis.conf &> /dev/null\n  sed -i \"s/^TLS.*/TLS 2/g\" /usr/local/etc/pure-ftpd.conf &> /dev/null\n  cp -af ${_locCnf}/var/clean-boa-env /etc/init.d/clean-boa-env\n  chmod 755 /etc/init.d/clean-boa-env &> /dev/null\n  _mrun \"update-rc.d clean-boa-env defaults\"\n  _kill_nash\n  _sftp_ftps_modern_fix\n  _fix_ftps_pam\n  _disable_old_purge_cruft_machine\n  _php_config_check_update\n  _nginx_conf_update\n  _global_inc_conf_update\n  if [ -e \"/etc/init.d/valkey-server\" ]; then\n    _valkey_password_update\n  elif [ -e \"/etc/init.d/redis-server\" ]; then\n    _redis_password_update\n  fi\n}\n\n#\n# Tune memory limits for PHP, Nginx and Percona.\n_tune_memory_limits() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _tune_memory_limits\"\n  fi\n  _count_cpu\n\n  # Set _PHP_FPM_WORKERS to AUTO if it is empty\n  [ -z \"${_PHP_FPM_WORKERS}\" ] && _PHP_FPM_WORKERS=AUTO\n  # If _PHP_FPM_WORKERS is not AUTO and not empty, then check if it is less than 1\n  if [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && [ -n \"${_PHP_FPM_WORKERS}\" ]; then\n    if [ \"${_PHP_FPM_WORKERS}\" -lt 1 ] 2>/dev/null; then\n      _PHP_FPM_WORKERS=AUTO\n    fi\n  fi\n  # If _PHP_FPM_WORKERS is not AUTO, remove non-numeric 
characters\n  [ \"${_PHP_FPM_WORKERS}\" != \"AUTO\" ] && _PHP_FPM_WORKERS=${_PHP_FPM_WORKERS//[^0-9]/}\n  if [ \"${_PHP_FPM_WORKERS}\" = \"AUTO\" ]; then\n    _L_PHP_FPM_WORKERS=$(( _CPU_NR * 4 ))\n  else\n    _L_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\n  fi\n  # Set _PHP_FPM_TIMEOUT to AUTO if it is empty\n  [ -z \"${_PHP_FPM_TIMEOUT}\" ] && _PHP_FPM_TIMEOUT=AUTO\n  # If _PHP_FPM_TIMEOUT is not AUTO and not empty, clamp it to the allowed 60-180 range\n  if [ \"${_PHP_FPM_TIMEOUT}\" != \"AUTO\" ] && [ -n \"${_PHP_FPM_TIMEOUT}\" ]; then\n    # Remove non-numeric characters\n    _PHP_FPM_TIMEOUT=${_PHP_FPM_TIMEOUT//[^0-9]/}\n    # Fall back to the default if nothing numeric remains\n    [ -z \"${_PHP_FPM_TIMEOUT}\" ] && _PHP_FPM_TIMEOUT=180\n    # If _PHP_FPM_TIMEOUT is outside of the allowed range, use either min or max allowed\n    if [ \"${_PHP_FPM_TIMEOUT}\" -lt 60 ]; then\n      _PHP_FPM_TIMEOUT=60\n    elif [ \"${_PHP_FPM_TIMEOUT}\" -gt 180 ]; then\n      _PHP_FPM_TIMEOUT=180\n    fi\n  else\n    _PHP_FPM_TIMEOUT=180\n  fi\n  _MXC_SQL=$(( _L_PHP_FPM_WORKERS * 4 ))\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _L_PHP_FPM_WORKERS is ${_L_PHP_FPM_WORKERS}\"\n    _msg \"TUNE: _PHP_FPM_TIMEOUT is ${_PHP_FPM_TIMEOUT}\"\n    _msg \"TUNE: _MXC_SQL is ${_MXC_SQL}\"\n  fi\n\n  _VM_TEST=\"$(uname -a)\"\n  if [ -e \"/proc/bean_counters\" ]; then\n    _VMFAMILY=\"VZ\"\n  elif [ -e \"/root/.tg.cnf\" ]; then\n    _VMFAMILY=\"TG\"\n  else\n    _VMFAMILY=\"XEN\"\n  fi\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _VMFAMILY=\"VS\"\n  fi\n  # Check if dmidecode is available\n  if ! 
command -v dmidecode &> /dev/null; then\n    _apt_clean_update\n    _mrun \"${_INITINS} dmidecode\"\n  fi\n  # Check for Amazon EC2 in the system manufacturer field\n  if dmidecode -s system-manufacturer | grep -i 'Amazon EC2' &> /dev/null; then\n    _VMFAMILY=\"AWS\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _VMFAMILY is ${_VMFAMILY}\"\n  fi\n  _CPU_MX=$(( _CPU_NR * 2 ))\n  if [ \"${_CPU_MX}\" -lt 4 ]; then\n    _CPU_MX=4\n  fi\n  _CPU_TG=$(( _CPU_NR / 2 ))\n  if [ \"${_CPU_TG}\" -lt 4 ]; then\n    _CPU_TG=4\n  fi\n  _CPU_VS=$(( _CPU_NR / 12 ))\n  if [ \"${_CPU_VS}\" -lt 2 ]; then\n    _CPU_VS=2\n  fi\n  _PrTestPower=$(grep \"POWER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n  _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n  _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n  _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n    || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n    || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n    || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n    || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]]; then\n    if [ \"${_CPU_VS}\" -lt 8 ]; then\n      _CPU_VS=8\n    fi\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _PrTestPower is ${_PrTestPower}\"\n    _msg \"TUNE: _PrTestPhantom is ${_PrTestPhantom}\"\n    _msg \"TUNE: _PrTestCluster is ${_PrTestCluster}\"\n    _msg \"TUNE: _PrTestUltra is ${_PrTestUltra}\"\n    _msg \"TUNE: _PrTestMonster is ${_PrTestMonster}\"\n  fi\n  _RAM=$(free -mt | grep Mem: | awk '{ print $2 }' 2>&1)\n  # Default to zero to avoid a non-integer test error when _RESERVED_RAM is unset\n  [ -z \"${_RESERVED_RAM}\" ] && _RESERVED_RAM=0\n  if [ \"${_RESERVED_RAM}\" -gt 0 ]; then\n    _RAM=$(( _RAM - _RESERVED_RAM ))\n  else\n    _RESERVED_RAM=$(( _RAM / 4 ))\n    _RAM=$(( _RAM - _RESERVED_RAM ))\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _CPU_NR is ${_CPU_NR}\"\n    _msg \"TUNE: _CPU_MX is ${_CPU_MX}\"\n    _msg \"TUNE: _CPU_TG is 
${_CPU_TG}\"\n    _msg \"TUNE: _CPU_VS is ${_CPU_VS}\"\n    _msg \"TUNE: _RAM is ${_RAM}\"\n    _msg \"TUNE: _RESERVED_RAM is ${_RESERVED_RAM}\"\n  fi\n  _USE=$(( _RAM / 4 ))\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _USE is ${_USE}\"\n  fi\n  _if_hosted_sys\n  if [ \"${_VMFAMILY}\" = \"VS\" ] \\\n    || [ \"${_hostedSys}\" = \"YES\" ]; then\n    if [ \"${_VMFAMILY}\" = \"VS\" ]; then\n      if [ -e \"/root/.tg.cnf\" ]; then\n        _USE_SQL=$(( _RAM / 12 ))\n      else\n        _USE_SQL=$(( _RAM / 24 ))\n      fi\n    else\n      _USE_SQL=$(( _RAM / 8 ))\n    fi\n  else\n    _USE_SQL=$(( _RAM / 8 ))\n  fi\n  if [ \"${_USE_SQL}\" -lt 64 ]; then\n    _USE_SQL=64\n  fi\n  _TMP_SQL=\"${_USE_SQL}M\"\n  _SRT_SQL=$(( _USE_SQL * 2 ))\n  _SRT_SQL=\"${_SRT_SQL}K\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _USE_SQL is ${_USE_SQL}\"\n    _msg \"TUNE: _TMP_SQL is ${_TMP_SQL}\"\n    _msg \"TUNE: _SRT_SQL is ${_SRT_SQL}\"\n  fi\n  if [ \"${_USE}\" -ge 512 ] && [ \"${_USE}\" -lt 2048 ]; then\n    _USE_PHP=1024\n    _USE_OPC=1024\n    _USE_CLI=2048\n    _QCE_SQL=64M\n    _RND_SQL=8M\n    _JBF_SQL=4M\n    if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n      _L_NGX_WRKS=${_CPU_MX}\n    else\n      _L_NGX_WRKS=${_NGINX_WORKERS}\n    fi\n  elif [ \"${_USE}\" -ge 2048 ]; then\n    if [ \"${_VMFAMILY}\" = \"XEN\" ] || [ \"${_VMFAMILY}\" = \"AWS\" ]; then\n      _USE_PHP=2048\n      _USE_OPC=2048\n      _USE_CLI=4096\n      _QCE_SQL=64M\n      _RND_SQL=8M\n      _JBF_SQL=4M\n      if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n        _L_NGX_WRKS=${_CPU_MX}\n      else\n        _L_NGX_WRKS=${_NGINX_WORKERS}\n      fi\n    elif [ \"${_VMFAMILY}\" = \"VS\" ] || [ \"${_VMFAMILY}\" = \"TG\" ]; then\n      if [ -e \"/boot/grub/grub.cfg\" ] \\\n        || [ -e \"/boot/grub/menu.lst\" ] \\\n        || [ -e \"/root/.tg.cnf\" ]; then\n        _USE_PHP=2048\n        _USE_OPC=2048\n        _USE_CLI=4096\n        _QCE_SQL=64M\n        _RND_SQL=8M\n        
_JBF_SQL=4M\n        if [ \"${_MXC_SQL}\" -lt 10 ]; then\n          _MXC_SQL=10\n        fi\n        if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n          _L_NGX_WRKS=${_CPU_TG}\n        else\n          _L_NGX_WRKS=${_NGINX_WORKERS}\n        fi\n        for e in 85 84 83 82 81 80 74 73 72 71 70 56; do\n          sed -i \"s/64000/128000/g\" /opt/php${e}/etc/php${e}.ini &> /dev/null\n        done\n      else\n        _USE_PHP=2048\n        _USE_OPC=2048\n        _USE_CLI=2048\n        _QCE_SQL=64M\n        _RND_SQL=2M\n        _JBF_SQL=2M\n        if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n          _L_NGX_WRKS=${_CPU_VS}\n        else\n          _L_NGX_WRKS=${_NGINX_WORKERS}\n        fi\n      fi\n    else\n      _USE_PHP=512\n      _USE_OPC=512\n      _USE_CLI=512\n      _QCE_SQL=32M\n      _RND_SQL=2M\n      _JBF_SQL=2M\n      if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n        _L_NGX_WRKS=${_CPU_MX}\n      else\n        _L_NGX_WRKS=${_NGINX_WORKERS}\n      fi\n    fi\n  else\n    _USE_PHP=\"${_USE}\"\n    _USE_OPC=\"${_USE}\"\n    _USE_CLI=\"${_USE}\"\n    _QCE_SQL=32M\n    _RND_SQL=1M\n    _JBF_SQL=1M\n    if [ \"${_NGINX_WORKERS}\" = \"AUTO\" ]; then\n      _L_NGX_WRKS=${_CPU_MX}\n    else\n      
_L_NGX_WRKS=${_NGINX_WORKERS}\n    fi\n  fi\n  if [ \"${_VMFAMILY}\" = \"VZ\" ]; then\n    _USE_OPC=64\n  fi\n  if [ \"${_USE_PHP}\" -lt 1024 ]; then\n    _USE_PHP=1024\n  fi\n  _USE_FPM=$(( _USE_PHP / 2 ))\n  if [ \"${_USE_FPM}\" -lt 1024 ]; then\n    _USE_FPM=1024\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _CPU_NR is ${_CPU_NR}\"\n    _msg \"TUNE: _L_NGX_WRKS is ${_L_NGX_WRKS}\"\n    _msg \"TUNE: _PHP_FPM_WORKERS is ${_PHP_FPM_WORKERS}\"\n    _msg \"TUNE: _L_PHP_FPM_WORKERS is ${_L_PHP_FPM_WORKERS}\"\n    _msg \"TUNE: _USE_PHP is ${_USE_PHP}\"\n    _msg \"TUNE: _USE_OPC is ${_USE_OPC}\"\n    _msg \"TUNE: _USE_CLI is ${_USE_CLI}\"\n    _msg \"TUNE: _QCE_SQL is ${_QCE_SQL}\"\n    _msg \"TUNE: _RND_SQL is ${_RND_SQL}\"\n    _msg \"TUNE: _JBF_SQL is ${_JBF_SQL}\"\n    _msg \"TUNE: _MXC_SQL is ${_MXC_SQL}\"\n  fi\n  if [ ! -e \"/var/xdrago/conf/fpm-pool-foo-multi.conf\" ]; then\n    mkdir -p /var/xdrago/conf\n  fi\n  if [ ! -e \"/data/conf\" ]; then\n    mkdir -p /data/conf\n  fi\n  cp -af ${_locCnf}/php/fpm-pool-foo-multi.conf     /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-foo.conf           /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-common.conf        /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-common-legacy.conf /var/xdrago/conf/\n  cp -af ${_locCnf}/php/fpm-pool-common-modern.conf /var/xdrago/conf/\n  sed -i \"s/127.0.0.1/127.0.0.1,${_THISHTIP}/g\" /var/xdrago/conf/fpm-pool-commo*.conf\n  if [ -e \"/opt/etc/fpm/fpm-pool-common.conf\" ]; then\n    sed -i \"s/395/${_USE_FPM}/g\" /opt/etc/fpm/fpm-pool-commo*.conf &> /dev/null\n  fi\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/opt/php${e}/etc/php${e}.ini\" ]; then\n      sed -i \"s/395/${_USE_FPM}/g\" /opt/php${e}/etc/php${e}.ini &> /dev/null\n      wait\n      sed -i \"s/181/${_USE_OPC}/g\" /opt/php${e}/etc/php${e}.ini &> /dev/null\n      sed -i \"s/395/${_USE_CLI}/g\" /opt/php${e}/lib/php.ini     &> /dev/null\n    fi\n 
 done\n  if [ \"${_CUSTOM_CONFIG_SQL}\" = \"NO\" ]; then\n    _tune_sql_memory_limits\n    _sql_conf_update\n    if [[ \"${_PrTestPower}\" =~ \"POWER\" ]] \\\n      || [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [ -e \"/root/.my.cluster_root_pwd.txt\" ]; then\n      _UXC_SQL=\"${_MXC_SQL}\"\n    else\n      _UXC_SQL=$(echo \"scale=0; ${_MXC_SQL}/2\" | bc 2>&1)\n    fi\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"TUNE: _UXC_SQL is ${_UXC_SQL}\"\n    fi\n    sed -i \"s/= 191/= ${_UXC_SQL}/g\"                                              /etc/mysql/my.cnf &> /dev/null\n    wait\n    sed -i \"s/= 292/= ${_MXC_SQL}/g\"                                              /etc/mysql/my.cnf &> /dev/null\n    wait\n    sed -i \"s/^tmp_table_size.*/tmp_table_size          = ${_TMP_SQL}/g\"          /etc/mysql/my.cnf &> /dev/null\n    wait\n    sed -i \"s/^max_heap_table_size.*/max_heap_table_size     = ${_TMP_SQL}/g\"     /etc/mysql/my.cnf &> /dev/null\n    wait\n    sed -i \"s/^myisam_sort_buffer_size.*/myisam_sort_buffer_size = ${_SRT_SQL}/g\" /etc/mysql/my.cnf &> /dev/null\n    wait\n    sed -i \"s/^read_rnd_buffer_size.*/read_rnd_buffer_size    = ${_RND_SQL}/g\"    /etc/mysql/my.cnf &> /dev/null\n    wait\n    sed -i \"s/^join_buffer_size.*/join_buffer_size        = ${_JBF_SQL}/g\"        /etc/mysql/my.cnf &> /dev/null\n    wait\n    if [ ! -z \"${_CUSTOM_COLLATION_SQL}\" ]; then\n      _SYS_COLLATION_SQL=${_CUSTOM_COLLATION_SQL}\n    fi\n    if [ ! 
-z \"${_SYS_COLLATION_SQL}\" ]; then\n      sed -i \"s/utf8mb4_unicode_ci/${_SYS_COLLATION_SQL}/g\"                       /etc/mysql/my.cnf &> /dev/null\n      wait\n      sed -i \"s/utf8mb4_general_ci/${_SYS_COLLATION_SQL}/g\"                       /etc/mysql/my.cnf &> /dev/null\n    fi\n  fi\n\n  _MAX_MEM_VALKEY=$(( _RAM / 6 ))\n  _MAX_VALKEY=\"${_MAX_MEM_VALKEY}MB\"\n\n  if [ -e \"/etc/valkey/valkey.conf\" ]; then\n    sed -i \"s/^maxmemory .*/maxmemory ${_MAX_VALKEY}/g\" /etc/valkey/valkey.conf &> /dev/null\n    _NEEDS_VALKEY_RESTART=\n    if ! grep -qE '^maxmemory-policy[[:space:]]*allkeys-lru' /etc/valkey/valkey.conf; then\n      sed -i \"s/^maxmemory-policy .*/maxmemory-policy allkeys-lru/g\" /etc/valkey/valkey.conf &> /dev/null\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if ! grep -qE '^[[:space:]]*save\\b' /etc/valkey/valkey.conf; then\n      sed -i \"s/^# save \\\"\\\"/save \\\"\\\"/g\" /etc/valkey/valkey.conf &> /dev/null\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if ! grep -qE '^[[:space:]]*save\\b' /etc/valkey/valkey.conf; then\n      printf '\\n# Cache-only instance: disable persistence (no RDB snapshots)\\nsave \"\"\\n' >> /etc/valkey/valkey.conf\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if [ -n \"${_NEEDS_VALKEY_RESTART}\" ]; then\n      service valkey-server restart &> /dev/null\n    fi\n  fi\n\n  if [ -e \"/etc/redis/redis.conf\" ]; then\n    sed -i \"s/^maxmemory .*/maxmemory ${_MAX_VALKEY}/g\" /etc/redis/redis.conf &> /dev/null\n    _NEEDS_VALKEY_RESTART=\n    if ! grep -qE '^maxmemory-policy[[:space:]]*allkeys-lru' /etc/redis/redis.conf; then\n      sed -i \"s/^maxmemory-policy .*/maxmemory-policy allkeys-lru/g\" /etc/redis/redis.conf &> /dev/null\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if ! grep -qE '^[[:space:]]*save\\b' /etc/redis/redis.conf; then\n      sed -i \"s/^# save \\\"\\\"/save \\\"\\\"/g\" /etc/redis/redis.conf &> /dev/null\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if ! 
grep -qE '^[[:space:]]*save\\b' /etc/redis/redis.conf; then\n      printf '\\n# Cache-only instance: disable persistence (no RDB snapshots)\\nsave \"\"\\n' >> /etc/redis/redis.conf\n      _NEEDS_VALKEY_RESTART=YES\n    fi\n    if [ -n \"${_NEEDS_VALKEY_RESTART}\" ]; then\n      service redis-server restart &> /dev/null\n    fi\n  fi\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _MAX_MEM_VALKEY is ${_MAX_MEM_VALKEY}\"\n    _msg \"TUNE: _MAX_VALKEY is ${_MAX_VALKEY}\"\n  fi\n\n  _USE_JETTY_MEM=$(( _RAM / 8 ))\n  _USE_JETTY=\"-Xmx${_USE_JETTY_MEM}m\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _USE_JETTY_MEM is ${_USE_JETTY_MEM}\"\n    _msg \"TUNE: _USE_JETTY is ${_USE_JETTY}\"\n  fi\n  if [ -e \"/etc/default/jetty9\" ] && [ -e \"/opt/solr4\" ]; then\n    sed -i \"s/^JAVA_OPTIONS.*/JAVA_OPTIONS=\\\"-Xms64m ${_USE_JETTY} -Djava.awt.headless=true -Dsolr.solr.home=\\/opt\\/solr4 \\$JAVA_OPTIONS\\\" # Options/g\" /etc/default/jetty9\n  fi\n\n  _USE_SOLR_MEM=$(( _RAM / 6 ))\n  _USE_SOLR=\"-Xmx${_USE_SOLR_MEM}m\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"TUNE: _USE_SOLR_MEM is ${_USE_SOLR_MEM}\"\n    _msg \"TUNE: _USE_SOLR is ${_USE_SOLR}\"\n  fi\n\n  if [ -x \"/etc/init.d/solr9\" ] && [ -e \"/etc/default/solr9.in.sh\" ]; then\n    _SOLR9_STOP_TEST=$(grep \"STOP\\.PORT=19099\" /etc/default/solr9.in.sh 2>&1)\n    if [[ ! 
\"${_SOLR9_STOP_TEST}\" =~ \"19099\" ]]; then\n      sed -i \"s/^SOLR_STOP_PORT.*//g\" /etc/default/solr9.in.sh\n      wait\n      sed -i \"s/^SOLR_STOP_KEY.*//g\" /etc/default/solr9.in.sh\n      wait\n      echo \"SOLR_OPTS=\\\"\\$SOLR_OPTS -DSTOP.PORT=19099 -DSTOP.KEY=mycustomkey9\\\"\" >> /etc/default/solr9.in.sh\n    fi\n    sed -i \"s/^SOLR_HEAP/#SOLR_HEAP/g\" /etc/default/solr9.in.sh &> /dev/null\n    wait\n    sed -i \"s/^#SOLR_JAVA_MEM/SOLR_JAVA_MEM/g\" /etc/default/solr9.in.sh &> /dev/null\n    wait\n    sed -i \"s/^SOLR_JAVA_MEM=.*/SOLR_JAVA_MEM=\\\"-Xms64m ${_USE_SOLR}\\\"/g\" /etc/default/solr9.in.sh\n    wait\n    sed -i \"s/.*_WAIT.*//g\" /etc/default/solr9.in.sh &> /dev/null\n    wait\n    echo \"SOLR_START_WAIT=\\\"10\\\"\" >> /etc/default/solr9.in.sh\n    echo \"SOLR_STOP_WAIT=\\\"10\\\"\" >> /etc/default/solr9.in.sh\n    echo \"SOLR_WAIT_FOR_ZK=\\\"10\\\"\" >> /etc/default/solr9.in.sh\n    sed -i \"/^$/d\" /etc/default/solr9.in.sh\n  fi\n\n  if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/etc/default/solr7.in.sh\" ]; then\n    _SOLR7_STOP_TEST=$(grep \"STOP\\.PORT=17077\" /etc/default/solr7.in.sh 2>&1)\n    if [[ ! 
\"${_SOLR7_STOP_TEST}\" =~ \"17077\" ]]; then\n      sed -i \"s/^SOLR_STOP_PORT.*//g\" /etc/default/solr7.in.sh\n      wait\n      sed -i \"s/^SOLR_STOP_KEY.*//g\" /etc/default/solr7.in.sh\n      wait\n      echo \"SOLR_OPTS=\\\"\\$SOLR_OPTS -DSTOP.PORT=17077 -DSTOP.KEY=mycustomkey7\\\"\" >> /etc/default/solr7.in.sh\n    fi\n    sed -i \"s/^SOLR_HEAP/#SOLR_HEAP/g\" /etc/default/solr7.in.sh &> /dev/null\n    wait\n    sed -i \"s/^#SOLR_JAVA_MEM/SOLR_JAVA_MEM/g\" /etc/default/solr7.in.sh &> /dev/null\n    wait\n    sed -i \"s/^SOLR_JAVA_MEM=.*/SOLR_JAVA_MEM=\\\"-Xms64m ${_USE_SOLR}\\\"/g\" /etc/default/solr7.in.sh\n    wait\n    sed -i \"s/.*_WAIT.*//g\" /etc/default/solr7.in.sh &> /dev/null\n    wait\n    echo \"SOLR_START_WAIT=\\\"10\\\"\" >> /etc/default/solr7.in.sh\n    echo \"SOLR_STOP_WAIT=\\\"10\\\"\" >> /etc/default/solr7.in.sh\n    echo \"SOLR_WAIT_FOR_ZK=\\\"10\\\"\" >> /etc/default/solr7.in.sh\n    sed -i \"/^$/d\" /etc/default/solr7.in.sh\n  fi\n\n  _tune_web_server_config\n}\n\n#\n# Find server city.\n_find_server_city() {\n  if [ -e \"/root/.found_correct_city.cnf\" ]; then\n    _LOC_CITY=$(cat /root/.found_correct_city.cnf 2>/dev/null | tr -d '\\n')\n  else\n    if [ -e \"/root/.found_correct_ipv4.cnf\" ]; then\n      _LOC_IP=$(cat /root/.found_correct_ipv4.cnf 2>/dev/null | tr -d '\\n')\n      _LOC_CITY=$(curl ${_crlGet} ipinfo.io/${_LOC_IP}/city 2>&1)\n      _LOC_CITY=$(echo -n ${_LOC_CITY} | tr -d \"\\n\" 2>&1)\n    fi\n    if [ ! -z \"${_LOC_CITY}\" ]; then\n      _LOC_CITY=$(echo \"${_LOC_CITY}\" | tr ' ' '+' 2>&1)\n      echo ${_LOC_CITY} > /root/.found_correct_city.cnf\n    fi\n  fi\n}\n\n#\n# Fix locales.\n_locales_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _locales_check_fix\"\n  fi\n  for _PKG in locales; do\n    if ! 
_pkg_installed \"${_PKG}\"; then\n      _mrun \"${_INSTAPP} ${_PKG}\"\n    fi\n  done\n  if [ -e \"/etc/ssh/sshd_config\" ]; then\n    _SSH_LC_TEST=$(grep \"^AcceptEnv LANG LC_\" /etc/ssh/sshd_config 2>&1)\n    if [[ \"${_SSH_LC_TEST}\" =~ \"AcceptEnv LANG LC_\" ]]; then\n      _DO_NOTHING=YES\n    else\n      sed -i \"s/.*AcceptEnv.*//g\" /etc/ssh/sshd_config\n      wait\n      echo \"AcceptEnv LANG LC_*\" >> /etc/ssh/sshd_config\n    fi\n  fi\n  _LOC_TEST=$(locale 2>&1)\n  if [[ \"${_LOC_TEST}\" =~ LANG=.*UTF-8 ]]; then\n    _LOCALE_TEST=OK\n  fi\n  if [ -n \"${STY+x}\" ]; then\n    _LOCALE_TEST=OK\n  fi\n  if [[ \"${_LOC_TEST}\" =~ \"Cannot\" ]]; then\n    _LOCALE_TEST=BROKEN\n  fi\n  if [ \"${_LOCALE_TEST}\" = \"BROKEN\" ]; then\n    _msg \"NOTE!\"\n    cat <<EOF\n\n  Locales on this system are broken, missing, or not yet\n  configured correctly. This is a known issue on some\n  systems/hosts which either don't configure locales at all\n  or don't use UTF-8 compatible locales during initial OS setup.\n\n  We will fix this problem for you now by enforcing en_US.UTF-8\n  locale settings on the fly during install, and as system\n  defaults in /etc/default/locale for future sessions. This\n  overrides any locale settings passed by your SSH client.\n\n  Once this installer finishes all its tasks and displays its\n  last line with \"BYE!\", you should log out and then log in\n  again to see the result.\n\n  We will continue in 5 seconds...\n\nEOF\n    sleep 5\n    if [ \"${_OS_DIST}\" = \"Debian\" ] || [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n      if [[ ! 
\"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n        echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n      fi\n      sed -i \"/^$/d\" /etc/locale.gen\n      locale-gen &> /dev/null\n    fi\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce all locale settings\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_TIME=en_US.UTF-8 \\\n      LC_MONETARY=en_US.UTF-8 \\\n      LC_MESSAGES=en_US.UTF-8 \\\n      LC_PAPER=en_US.UTF-8 \\\n      LC_NAME=en_US.UTF-8 \\\n      LC_ADDRESS=en_US.UTF-8 \\\n      LC_TELEPHONE=en_US.UTF-8 \\\n      LC_MEASUREMENT=en_US.UTF-8 \\\n      LC_IDENTIFICATION=en_US.UTF-8 \\\n      LC_ALL= &> /dev/null\n    if [ -e \"${_locCnf}/var/boa.bashrc.txt\" ]; then\n      cp -af /root/.bashrc /root/.bashrc.bak.${_NOW}\n      cp -af ${_locCnf}/var/boa.bashrc.txt /root/.bashrc\n      _set_xterm\n    fi\n    # Define all locale settings on the fly to prevent unnecessary\n    # warnings during installation of packages.\n    export LANG=en_US.UTF-8 &> /dev/null\n    export LC_CTYPE=en_US.UTF-8 &> /dev/null\n    export LC_COLLATE=POSIX &> /dev/null\n    export LC_NUMERIC=POSIX &> /dev/null\n    export LC_TIME=en_US.UTF-8 &> /dev/null\n    export LC_MONETARY=en_US.UTF-8 &> /dev/null\n    export LC_MESSAGES=en_US.UTF-8 &> /dev/null\n    export LC_PAPER=en_US.UTF-8 &> /dev/null\n    export LC_NAME=en_US.UTF-8 &> /dev/null\n    export LC_ADDRESS=en_US.UTF-8 &> /dev/null\n    export LC_TELEPHONE=en_US.UTF-8 &> /dev/null\n    export LC_MEASUREMENT=en_US.UTF-8 &> /dev/null\n    export LC_IDENTIFICATION=en_US.UTF-8 &> /dev/null\n    export LC_ALL= &> /dev/null\n    _PIPX_TEST=$(grep \"PIPX\" /root/.bashrc 2>&1)\n    if [[ ! 
\"${_PIPX_TEST}\" =~ \"PIPX\" ]]; then\n      echo \"export PIPX_BIN_DIR=/usr/local/bin\" >> /root/.bashrc\n      echo \"export PIPX_HOME=/opt/pipx/venvs\" >> /root/.bashrc\n      printf \"\\n\" >> /root/.bashrc\n    fi\n  else\n    if [ -e \"${_locCnf}/var/boa.bashrc.txt\" ]; then\n      cp -af /root/.bashrc /root/.bashrc.bak.${_NOW}\n      cp -af ${_locCnf}/var/boa.bashrc.txt /root/.bashrc\n      _set_xterm\n    fi\n    if [ \"${_OS_DIST}\" = \"Debian\" ] || [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      _LOCALE_GEN_TEST=$(grep -v \"^#\" /etc/locale.gen 2>&1)\n      if [[ ! \"${_LOCALE_GEN_TEST}\" =~ \"en_US.UTF-8 UTF-8\" ]]; then\n        echo \"en_US.UTF-8 UTF-8\" >> /etc/locale.gen\n      fi\n      sed -i \"/^$/d\" /etc/locale.gen\n      locale-gen &> /dev/null\n    fi\n    locale-gen en_US.UTF-8 &> /dev/null\n    # Explicitly enforce locale settings required for consistency\n    update-locale \\\n      LANG=en_US.UTF-8 \\\n      LC_CTYPE=en_US.UTF-8 \\\n      LC_COLLATE=POSIX \\\n      LC_NUMERIC=POSIX \\\n      LC_ALL= &> /dev/null\n    # Define locale settings required for consistency also on the fly\n    if [ \"${_STATUS}\" != \"INIT\" ]; then\n      # On initial install it usually causes a warning:\n      # setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8):\n      # No such file or directory\n      export LC_CTYPE=en_US.UTF-8 &> /dev/null\n    fi\n    export LC_COLLATE=POSIX &> /dev/null\n    export LC_NUMERIC=POSIX &> /dev/null\n    export LC_ALL= &> /dev/null\n  fi\n  _LOCALES_BASHRC_TEST=$(grep LC_COLLATE /root/.bashrc 2>&1)\n  if [[ ! 
\"${_LOCALES_BASHRC_TEST}\" =~ \"LC_COLLATE\" ]]; then\n    printf \"\\n\" >> /root/.bashrc\n    echo \"export LANG=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_CTYPE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_COLLATE=POSIX\" >> /root/.bashrc\n    echo \"export LC_NUMERIC=POSIX\" >> /root/.bashrc\n    echo \"export LC_TIME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MONETARY=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MESSAGES=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_PAPER=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_NAME=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ADDRESS=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_TELEPHONE=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_MEASUREMENT=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_IDENTIFICATION=en_US.UTF-8\" >> /root/.bashrc\n    echo \"export LC_ALL=\" >> /root/.bashrc\n    printf \"\\n\" >> /root/.bashrc\n  fi\n  _PIPX_TEST=$(grep \"PIPX\" /root/.bashrc 2>&1)\n  if [[ ! \"${_PIPX_TEST}\" =~ \"PIPX\" ]]; then\n    echo \"export PIPX_BIN_DIR=/usr/local/bin\" >> /root/.bashrc\n    echo \"export PIPX_HOME=/opt/pipx/venvs\" >> /root/.bashrc\n    printf \"\\n\" >> /root/.bashrc\n  fi\n  if [ -e \"/root/.dont.use.fancy.bash.login.cnf\" ]; then\n    _FANCY_TEST=$(grep \"fancynow\" /root/.bashrc 2>&1)\n    if [[ \"${_FANCY_TEST}\" =~ \"fancynow\" ]]; then\n      sed -i \"s/.*fancynow.*/  \\/bin\\/true/g\" /root/.bashrc &> /dev/null\n      wait\n      sed -i \"s/.*screenfetch.*/  \\/bin\\/true/g\" /root/.bashrc &> /dev/null\n      wait\n    fi\n  else\n    if [ -f \"/usr/bin/screenfetch\" ]; then\n      _mrun \"apt-get remove screenfetch -y --purge --auto-remove -qq\"\n    fi\n    if [ -e \"/opt/local/bin/screenfetch\" ]; then\n      if [ ! -L \"/usr/bin/screenfetch\" ]; then\n        ln -sf /opt/local/bin/screenfetch /usr/bin/screenfetch\n      fi\n      _FANCY_TEST=$(grep \"fancynow\" /root/.bashrc 2>&1)\n      if [[ ! 
\"${_FANCY_TEST}\" =~ \"fancynow\" ]]; then\n        echo \"if [ ! -z \"\\$PS1\" ]; then\" >> /root/.bashrc\n        echo \"  /opt/local/bin/fancynow\" >> /root/.bashrc\n        echo \"  /opt/local/bin/screenfetch\" >> /root/.bashrc\n        echo \"fi\" >> /root/.bashrc\n        printf \"\\n\" >> /root/.bashrc\n      fi\n    fi\n  fi\n}\n\n#\n# Cleanup for Barracuda cnf file.\n_barracuda_cnf_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _barracuda_cnf_cleanup\"\n  fi\n  ###\n  ### legacy config cleanup start\n  ###\n  _LOCAL_UBUNTU_MIRROR_TEST=$(grep _LOCAL_UBUNTU_MIRROR ${_barCnf} 2>&1)\n  if [[ \"${_LOCAL_UBUNTU_MIRROR_TEST}\" =~ \"_LOCAL_UBUNTU_MIRROR\" ]]; then\n    sed -i \"s/^_LOCAL_UBUNTU_MIRROR.*//g\" ${_barCnf}\n    wait\n  fi\n  _DB_ENGINE_TEST=$(grep _DB_ENGINE ${_barCnf} 2>&1)\n  if [[ \"${_DB_ENGINE_TEST}\" =~ \"_DB_ENGINE\" ]]; then\n    sed -i \"s/^_DB_ENGINE.*//g\" ${_barCnf}\n    wait\n  fi\n  _INNODB_LOG_FILE_SIZE_TEST=$(grep _INNODB_LOG_FILE_SIZE ${_barCnf} 2>&1)\n  if [[ \"${_INNODB_LOG_FILE_SIZE_TEST}\" =~ \"_INNODB_LOG_FILE_SIZE\" ]]; then\n    sed -i \"s/^_INNODB_LOG_FILE_SIZE.*//g\" ${_barCnf}\n    wait\n  fi\n  _USE_STOCK_TEST=$(grep _USE_STOCK ${_barCnf} 2>&1)\n  if [[ \"${_USE_STOCK_TEST}\" =~ \"_USE_STOCK\" ]]; then\n    sed -i \"s/^_USE_STOCK.*//g\" ${_barCnf}\n    wait\n  fi\n  _HTTP_WILDCARD_TEST=$(grep _HTTP_WILDCARD ${_barCnf} 2>&1)\n  if [[ \"${_HTTP_WILDCARD_TEST}\" =~ \"_HTTP_WILDCARD\" ]]; then\n    sed -i \"s/^_HTTP_WILDCARD.*//g\" ${_barCnf}\n    wait\n  fi\n  _PHP_MODERN_ONLY_TEST=$(grep _PHP_MODERN_ONLY ${_barCnf} 2>&1)\n  if [[ \"${_PHP_MODERN_ONLY_TEST}\" =~ \"_PHP_MODERN_ONLY\" ]]; then\n    sed -i \"s/^_PHP_MODERN_ONLY.*//g\" ${_barCnf}\n    wait\n  fi\n  _USE_SPEED_BOOSTER_TEST=$(grep _USE_SPEED_BOOSTER ${_barCnf} 2>&1)\n  if [[ \"${_USE_SPEED_BOOSTER_TEST}\" =~ \"_USE_SPEED_BOOSTER\" ]]; then\n    sed -i \"s/^_USE_SPEED_BOOSTER.*//g\" ${_barCnf}\n    wait\n  fi\n  
_PHP_INSTALL_NEW_TEST=$(grep _PHP_INSTALL_NEW ${_barCnf} 2>&1)\n  if [[ \"${_PHP_INSTALL_NEW_TEST}\" =~ \"_PHP_INSTALL_NEW\" ]]; then\n    sed -i \"s/^_PHP_INSTALL_NEW.*//g\" ${_barCnf}\n    wait\n  fi\n  _CUSTOM_CONFIG_PHP_TEST=$(grep _CUSTOM_CONFIG_PHP ${_barCnf} 2>&1)\n  if [[ \"${_CUSTOM_CONFIG_PHP_TEST}\" =~ \"_CUSTOM_CONFIG_PHP\" ]]; then\n    sed -i \"s/^_CUSTOM_CONFIG_PHP.*//g\" ${_barCnf}\n    wait\n  fi\n  _LOAD_LIMIT_ONE_TEST=$(grep _LOAD_LIMIT_ONE ${_barCnf} 2>&1)\n  if [[ \"${_LOAD_LIMIT_ONE_TEST}\" =~ \"_LOAD_LIMIT_ONE\" ]]; then\n    sed -i \"s/^_LOAD_LIMIT_ONE.*//g\" ${_barCnf}\n    wait\n  fi\n  _LOAD_LIMIT_TWO_TEST=$(grep _LOAD_LIMIT_TWO ${_barCnf} 2>&1)\n  if [[ \"${_LOAD_LIMIT_TWO_TEST}\" =~ \"_LOAD_LIMIT_TWO\" ]]; then\n    sed -i \"s/^_LOAD_LIMIT_TWO.*//g\" ${_barCnf}\n    wait\n  fi\n  _USE_MEMCACHED_TEST=$(grep _USE_MEMCACHED ${_barCnf} 2>&1)\n  if [[ \"${_USE_MEMCACHED_TEST}\" =~ \"_USE_MEMCACHED\" ]]; then\n    sed -i \"s/^_USE_MEMCACHED.*//g\" ${_barCnf}\n    wait\n  fi\n  _PHP_ZEND_OPCACHE_TEST=$(grep _PHP_ZEND_OPCACHE ${_barCnf} 2>&1)\n  if [[ \"${_PHP_ZEND_OPCACHE_TEST}\" =~ \"_PHP_ZEND_OPCACHE\" ]]; then\n    sed -i \"s/^_PHP_ZEND_OPCACHE.*//g\" ${_barCnf}\n    wait\n  fi\n  _BUILD_FROM_SRC_TEST=$(grep _BUILD_FROM_SRC ${_barCnf} 2>&1)\n  if [[ \"${_BUILD_FROM_SRC_TEST}\" =~ \"_BUILD_FROM_SRC\" ]]; then\n    sed -i \"s/^_BUILD_FROM_SRC.*//g\" ${_barCnf}\n    wait\n  fi\n  _SSL_FROM_SOURCES_TEST=$(grep _SSL_FROM_SOURCES ${_barCnf} 2>&1)\n  if [[ \"${_SSL_FROM_SOURCES_TEST}\" =~ \"_SSL_FROM_SOURCES\" ]]; then\n    sed -i \"s/^_SSL_FROM_SOURCES.*//g\" ${_barCnf}\n    wait\n  fi\n  ###\n  ### legacy config cleanup end\n  ###\n\n  ###\n  ### config cleanup start\n  ###\n  _NGX_FORCE_REINSTALL_TEST=$(grep _NGX_FORCE_REINSTALL ${_barCnf} 2>&1)\n  if [[ \"${_NGX_FORCE_REINSTALL_TEST}\" =~ \"_NGX_FORCE_REINSTALL\" ]]; then\n    sed -i \"s/^_NGX_FORCE_REINSTALL.*//g\" ${_barCnf}\n    wait\n  fi\n  _PHP_FORCE_REINSTALL_TEST=$(grep 
_PHP_FORCE_REINSTALL ${_barCnf} 2>&1)\n  if [[ \"${_PHP_FORCE_REINSTALL_TEST}\" =~ \"_PHP_FORCE_REINSTALL\" ]]; then\n    sed -i \"s/^_PHP_FORCE_REINSTALL.*//g\" ${_barCnf}\n    wait\n  fi\n  _SQL_FORCE_REINSTALL_TEST=$(grep _SQL_FORCE_REINSTALL ${_barCnf} 2>&1)\n  if [[ \"${_SQL_FORCE_REINSTALL_TEST}\" =~ \"_SQL_FORCE_REINSTALL\" ]]; then\n    sed -i \"s/^_SQL_FORCE_REINSTALL.*//g\" ${_barCnf}\n    wait\n  fi\n  _SSL_FORCE_REINSTALL_TEST=$(grep _SSL_FORCE_REINSTALL ${_barCnf} 2>&1)\n  if [[ \"${_SSL_FORCE_REINSTALL_TEST}\" =~ \"_SSL_FORCE_REINSTALL\" ]]; then\n    sed -i \"s/^_SSL_FORCE_REINSTALL.*//g\" ${_barCnf}\n    wait\n  fi\n  _SSH_FORCE_REINSTALL_TEST=$(grep _SSH_FORCE_REINSTALL ${_barCnf} 2>&1)\n  if [[ \"${_SSH_FORCE_REINSTALL_TEST}\" =~ \"_SSH_FORCE_REINSTALL\" ]]; then\n    sed -i \"s/^_SSH_FORCE_REINSTALL.*//g\" ${_barCnf}\n    wait\n  fi\n  _GIT_FORCE_REINSTALL_TEST=$(grep _GIT_FORCE_REINSTALL ${_barCnf} 2>&1)\n  if [[ \"${_GIT_FORCE_REINSTALL_TEST}\" =~ \"_GIT_FORCE_REINSTALL\" ]]; then\n    sed -i \"s/^_GIT_FORCE_REINSTALL.*//g\" ${_barCnf}\n    wait\n  fi\n  _FULL_FORCE_REINSTALL_TEST=$(grep _FULL_FORCE_REINSTALL ${_barCnf} 2>&1)\n  if [[ \"${_FULL_FORCE_REINSTALL_TEST}\" =~ \"_FULL_FORCE_REINSTALL\" ]]; then\n    sed -i \"s/^_FULL_FORCE_REINSTALL.*//g\" ${_barCnf}\n    wait\n  fi\n  _DAEDALUS_TO_EXCALIBUR_TEST=$(grep _DAEDALUS_TO_EXCALIBUR ${_barCnf} 2>&1)\n  if [[ \"${_DAEDALUS_TO_EXCALIBUR_TEST}\" =~ \"_DAEDALUS_TO_EXCALIBUR\" ]]; then\n    sed -i \"s/^_DAEDALUS_TO_EXCALIBUR.*//g\" ${_barCnf}\n    wait\n  fi\n  _TRIXIE_TO_EXCALIBUR_TEST=$(grep _TRIXIE_TO_EXCALIBUR ${_barCnf} 2>&1)\n  if [[ \"${_TRIXIE_TO_EXCALIBUR_TEST}\" =~ \"_TRIXIE_TO_EXCALIBUR\" ]]; then\n    sed -i \"s/^_TRIXIE_TO_EXCALIBUR.*//g\" ${_barCnf}\n    wait\n  fi\n  _BOOKWORM_TO_DAEDALUS_TEST=$(grep _BOOKWORM_TO_DAEDALUS ${_barCnf} 2>&1)\n  if [[ \"${_BOOKWORM_TO_DAEDALUS_TEST}\" =~ \"_BOOKWORM_TO_DAEDALUS\" ]]; then\n    sed -i \"s/^_BOOKWORM_TO_DAEDALUS.*//g\" ${_barCnf}\n    
wait\n  fi\n  _CHIMAERA_TO_DAEDALUS_TEST=$(grep _CHIMAERA_TO_DAEDALUS ${_barCnf} 2>&1)\n  if [[ \"${_CHIMAERA_TO_DAEDALUS_TEST}\" =~ \"_CHIMAERA_TO_DAEDALUS\" ]]; then\n    sed -i \"s/^_CHIMAERA_TO_DAEDALUS.*//g\" ${_barCnf}\n    wait\n  fi\n  _BULLSEYE_TO_CHIMAERA_TEST=$(grep _BULLSEYE_TO_CHIMAERA ${_barCnf} 2>&1)\n  if [[ \"${_BULLSEYE_TO_CHIMAERA_TEST}\" =~ \"_BULLSEYE_TO_CHIMAERA\" ]]; then\n    sed -i \"s/^_BULLSEYE_TO_CHIMAERA.*//g\" ${_barCnf}\n    wait\n  fi\n  _BULLSEYE_TO_BOOKWORM_TEST=$(grep _BULLSEYE_TO_BOOKWORM ${_barCnf} 2>&1)\n  if [[ \"${_BULLSEYE_TO_BOOKWORM_TEST}\" =~ \"_BULLSEYE_TO_BOOKWORM\" ]]; then\n    sed -i \"s/^_BULLSEYE_TO_BOOKWORM.*//g\" ${_barCnf}\n    wait\n  fi\n  _BEOWULF_TO_CHIMAERA_TEST=$(grep _BEOWULF_TO_CHIMAERA ${_barCnf} 2>&1)\n  if [[ \"${_BEOWULF_TO_CHIMAERA_TEST}\" =~ \"_BEOWULF_TO_CHIMAERA\" ]]; then\n    sed -i \"s/^_BEOWULF_TO_CHIMAERA.*//g\" ${_barCnf}\n    wait\n  fi\n  _BUSTER_TO_BEOWULF_TEST=$(grep _BUSTER_TO_BEOWULF ${_barCnf} 2>&1)\n  if [[ \"${_BUSTER_TO_BEOWULF_TEST}\" =~ \"_BUSTER_TO_BEOWULF\" ]]; then\n    sed -i \"s/^_BUSTER_TO_BEOWULF.*//g\" ${_barCnf}\n    wait\n  fi\n  _STRETCH_TO_BEOWULF_TEST=$(grep _STRETCH_TO_BEOWULF ${_barCnf} 2>&1)\n  if [[ \"${_STRETCH_TO_BEOWULF_TEST}\" =~ \"_STRETCH_TO_BEOWULF\" ]]; then\n    sed -i \"s/^_STRETCH_TO_BEOWULF.*//g\" ${_barCnf}\n    wait\n  fi\n  _JESSIE_TO_BEOWULF_TEST=$(grep _JESSIE_TO_BEOWULF ${_barCnf} 2>&1)\n  if [[ \"${_JESSIE_TO_BEOWULF_TEST}\" =~ \"_JESSIE_TO_BEOWULF\" ]]; then\n    sed -i \"s/^_JESSIE_TO_BEOWULF.*//g\" ${_barCnf}\n    wait\n  fi\n  _BUSTER_TO_BULLSEYE_TEST=$(grep _BUSTER_TO_BULLSEYE ${_barCnf} 2>&1)\n  if [[ \"${_BUSTER_TO_BULLSEYE_TEST}\" =~ \"_BUSTER_TO_BULLSEYE\" ]]; then\n    sed -i \"s/^_BUSTER_TO_BULLSEYE.*//g\" ${_barCnf}\n    wait\n  fi\n  _STRETCH_TO_BUSTER_TEST=$(grep _STRETCH_TO_BUSTER ${_barCnf} 2>&1)\n  if [[ \"${_STRETCH_TO_BUSTER_TEST}\" =~ \"_STRETCH_TO_BUSTER\" ]]; then\n    sed -i \"s/^_STRETCH_TO_BUSTER.*//g\" ${_barCnf}\n    
wait\n  fi\n  _JESSIE_TO_STRETCH_TEST=$(grep _JESSIE_TO_STRETCH ${_barCnf} 2>&1)\n  if [[ \"${_JESSIE_TO_STRETCH_TEST}\" =~ \"_JESSIE_TO_STRETCH\" ]]; then\n    sed -i \"s/^_JESSIE_TO_STRETCH.*//g\" ${_barCnf}\n    wait\n  fi\n  _WHEEZY_TO_JESSIE_TEST=$(grep _WHEEZY_TO_JESSIE ${_barCnf} 2>&1)\n  if [[ \"${_WHEEZY_TO_JESSIE_TEST}\" =~ \"_WHEEZY_TO_JESSIE\" ]]; then\n    sed -i \"s/^_WHEEZY_TO_JESSIE.*//g\" ${_barCnf}\n    wait\n  fi\n  _SQUEEZE_TO_WHEEZY_TEST=$(grep _SQUEEZE_TO_WHEEZY ${_barCnf} 2>&1)\n  if [[ \"${_SQUEEZE_TO_WHEEZY_TEST}\" =~ \"_SQUEEZE_TO_WHEEZY\" ]]; then\n    sed -i \"s/^_SQUEEZE_TO_WHEEZY.*//g\" ${_barCnf}\n    wait\n  fi\n  _LENNY_TO_SQUEEZE_TEST=$(grep _LENNY_TO_SQUEEZE ${_barCnf} 2>&1)\n  if [[ \"${_LENNY_TO_SQUEEZE_TEST}\" =~ \"_LENNY_TO_SQUEEZE\" ]]; then\n    sed -i \"s/^_LENNY_TO_SQUEEZE.*//g\" ${_barCnf}\n    wait\n  fi\n  sed -i \"/^$/d\" ${_barCnf}\n  wait\n  ###\n  ### config cleanup end\n  ###\n}\n\n#\n# Sort and de-duplicate versions in _PHP_MULTI_INSTALL.\n_php_multi_uniq() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_multi_uniq\"\n  fi\n  _uniqPHPv=\n  rm -f /var/backups/_vPHP.txt\n  ### Write the versions one per line, then rebuild the list sorted and unique\n  for _vPHP in $(echo ${_PHP_MULTI_INSTALL}); do\n    echo \"${_vPHP}\" >> /var/backups/_vPHP.txt\n  done\n  for _vPHP in $(sort /var/backups/_vPHP.txt \\\n    | uniq); do\n    if [ -z \"${_uniqPHPv}\" ]; then\n      _uniqPHPv=\"${_vPHP}\"\n    else\n      _uniqPHPv=\"${_uniqPHPv} ${_vPHP}\"\n    fi\n  done\n  _PHP_MULTI_INSTALL=\"${_uniqPHPv}\"\n}\n\n#\n# Cleanup for legacy PHP versions.\n_php_legacy_versions_cleanup_cnf() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_legacy_versions_cleanup_cnf\"\n  fi\n  _PHP_CLI_LEGACY_IF_USED_A=$(grep \"5\\.2\" /data/disk/*/log/cli.txt 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_B=$(grep \"5\\.2\" /data/disk/*/static/control/cli.info 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_C=$(grep \"CLI.*5\\.2\" /root/.*.octopus.cnf 2>&1)\n  if [[ 
\"${_PHP_CLI_LEGACY_IF_USED_A}\" =~ \"5.2\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_B}\" =~ \"5.2\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_C}\" =~ \"5.2\" ]]; then\n    if [ -e \"/opt/php52/bin/php\" ] || [ -e \"/etc/init.d/php52-fpm\" ]; then\n      echo \" \"\n      _msg \"Legacy PHP-CLI 5.2 is used on this system but will be removed\"\n    fi\n  fi\n\n  _PHP_FPM_LEGACY_IF_USED_A=$(grep \"5\\.2\" /data/disk/*/log/fpm.txt 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_B=$(grep \"5\\.2\" /data/disk/*/static/control/fpm.info 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_C=$(grep \"FPM.*5\\.2\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PHP_FPM_LEGACY_IF_USED_A}\" =~ \"5.2\" ]] \\\n    || [[ \"${_PHP_FPM_LEGACY_IF_USED_B}\" =~ \"5.2\" ]] \\\n    || [[ \"${_PHP_FPM_LEGACY_IF_USED_C}\" =~ \"5.2\" ]]; then\n    if [ -e \"/opt/php52/bin/php\" ] || [ -e \"/etc/init.d/php52-fpm\" ]; then\n      _msg \"Legacy PHP-FPM 5.2 is used on this system but will be removed\"\n      echo \" \"\n    fi\n  fi\n\n  _PHP_CLI_LEGACY_IF_USED_A=$(grep \"5\\.3\" /data/disk/*/log/cli.txt 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_B=$(grep \"5\\.3\" /data/disk/*/static/control/cli.info 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_C=$(grep \"CLI.*5\\.3\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PHP_CLI_LEGACY_IF_USED_A}\" =~ \"5.3\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_B}\" =~ \"5.3\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_C}\" =~ \"5.3\" ]]; then\n    if [ -e \"/opt/php53/bin/php\" ] || [ -e \"/etc/init.d/php53-fpm\" ]; then\n      echo \" \"\n      _msg \"Legacy PHP-CLI 5.3 is used on this system but will be removed\"\n    fi\n  fi\n\n  _PHP_FPM_LEGACY_IF_USED_A=$(grep \"5\\.3\" /data/disk/*/log/fpm.txt 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_B=$(grep \"5\\.3\" /data/disk/*/static/control/fpm.info 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_C=$(grep \"FPM.*5\\.3\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PHP_FPM_LEGACY_IF_USED_A}\" =~ \"5.3\" ]] \\\n    || [[ \"${_PHP_FPM_LEGACY_IF_USED_B}\" =~ \"5.3\" ]] \\\n    || [[ 
\"${_PHP_FPM_LEGACY_IF_USED_C}\" =~ \"5.3\" ]]; then\n    if [ -e \"/opt/php53/bin/php\" ] || [ -e \"/etc/init.d/php53-fpm\" ]; then\n      _msg \"Legacy PHP-FPM 5.3 is used on this system but will be removed\"\n      echo \" \"\n    fi\n  fi\n\n  _PHP_CLI_LEGACY_IF_USED_A=$(grep \"5\\.4\" /data/disk/*/log/cli.txt 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_B=$(grep \"5\\.4\" /data/disk/*/static/control/cli.info 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_C=$(grep \"CLI.*5\\.4\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PHP_CLI_LEGACY_IF_USED_A}\" =~ \"5.4\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_B}\" =~ \"5.4\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_C}\" =~ \"5.4\" ]]; then\n    if [ -e \"/opt/php54/bin/php\" ] || [ -e \"/etc/init.d/php54-fpm\" ]; then\n      echo \" \"\n      _msg \"Legacy PHP-CLI 5.4 is used on this system but will be removed\"\n    fi\n  fi\n\n  _PHP_FPM_LEGACY_IF_USED_A=$(grep \"5\\.4\" /data/disk/*/log/fpm.txt 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_B=$(grep \"5\\.4\" /data/disk/*/static/control/fpm.info 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_C=$(grep \"FPM.*5\\.4\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PHP_FPM_LEGACY_IF_USED_A}\" =~ \"5.4\" ]] \\\n    || [[ \"${_PHP_FPM_LEGACY_IF_USED_B}\" =~ \"5.4\" ]] \\\n    || [[ \"${_PHP_FPM_LEGACY_IF_USED_C}\" =~ \"5.4\" ]]; then\n    if [ -e \"/opt/php54/bin/php\" ] || [ -e \"/etc/init.d/php54-fpm\" ]; then\n      _msg \"Legacy PHP-FPM 5.4 is used on this system but will be removed\"\n      echo \" \"\n    fi\n  fi\n\n  _PHP_CLI_LEGACY_IF_USED_A=$(grep \"5\\.5\" /data/disk/*/log/cli.txt 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_B=$(grep \"5\\.5\" /data/disk/*/static/control/cli.info 2>&1)\n  _PHP_CLI_LEGACY_IF_USED_C=$(grep \"CLI.*5\\.5\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PHP_CLI_LEGACY_IF_USED_A}\" =~ \"5.5\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_B}\" =~ \"5.5\" ]] \\\n    || [[ \"${_PHP_CLI_LEGACY_IF_USED_C}\" =~ \"5.5\" ]]; then\n    if [ -e \"/opt/php55/bin/php\" ] || [ -e \"/etc/init.d/php55-fpm\" ]; then\n      
echo \" \"\n      _msg \"Legacy PHP-CLI 5.5 is used on this system but will be removed\"\n    fi\n  fi\n\n  _PHP_FPM_LEGACY_IF_USED_A=$(grep \"5\\.5\" /data/disk/*/log/fpm.txt 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_B=$(grep \"5\\.5\" /data/disk/*/static/control/fpm.info 2>&1)\n  _PHP_FPM_LEGACY_IF_USED_C=$(grep \"FPM.*5\\.5\" /root/.*.octopus.cnf 2>&1)\n  if [[ \"${_PHP_FPM_LEGACY_IF_USED_A}\" =~ \"5.5\" ]] \\\n    || [[ \"${_PHP_FPM_LEGACY_IF_USED_B}\" =~ \"5.5\" ]] \\\n    || [[ \"${_PHP_FPM_LEGACY_IF_USED_C}\" =~ \"5.5\" ]]; then\n    if [ -e \"/opt/php55/bin/php\" ] || [ -e \"/etc/init.d/php55-fpm\" ]; then\n      _msg \"Legacy PHP-FPM 5.5 is used on this system but will be removed\"\n      echo \" \"\n    fi\n  fi\n\n  _PHP_MULTI_INSTALL_TEST=$(grep _PHP_MULTI_INSTALL ${_barCnf} 2>&1)\n\n  if [[ \"${_PHP_MULTI_INSTALL_TEST}\" =~ \"5.2\" ]]; then\n    _R_M=5.2\n    _PHP_MULTI_INSTALL=${_PHP_MULTI_INSTALL%%${_R_M}}\n    sed -i \"s/^_PHP_MULTI_INSTALL.*/_PHP_MULTI_INSTALL=\\\"${_PHP_MULTI_INSTALL}\\\"/g\" ${_barCnf}\n    wait\n    sed -i \"/^$/d\" ${_barCnf}\n    wait\n  fi\n\n  if [[ \"${_PHP_MULTI_INSTALL_TEST}\" =~ \"5.3\" ]]; then\n    _R_M=5.3\n    _PHP_MULTI_INSTALL=${_PHP_MULTI_INSTALL%%${_R_M}}\n    sed -i \"s/^_PHP_MULTI_INSTALL.*/_PHP_MULTI_INSTALL=\\\"${_PHP_MULTI_INSTALL}\\\"/g\" ${_barCnf}\n    wait\n    sed -i \"/^$/d\" ${_barCnf}\n    wait\n  fi\n\n  if [[ \"${_PHP_MULTI_INSTALL_TEST}\" =~ \"5.4\" ]]; then\n    _R_M=5.4\n    _PHP_MULTI_INSTALL=${_PHP_MULTI_INSTALL%%${_R_M}}\n    sed -i \"s/^_PHP_MULTI_INSTALL.*/_PHP_MULTI_INSTALL=\\\"${_PHP_MULTI_INSTALL}\\\"/g\" ${_barCnf}\n    wait\n    sed -i \"/^$/d\" ${_barCnf}\n    wait\n  fi\n\n  if [[ \"${_PHP_MULTI_INSTALL_TEST}\" =~ \"5.5\" ]]; then\n    _R_M=5.5\n    _PHP_MULTI_INSTALL=${_PHP_MULTI_INSTALL%%${_R_M}}\n    sed -i \"s/^_PHP_MULTI_INSTALL.*/_PHP_MULTI_INSTALL=\\\"${_PHP_MULTI_INSTALL}\\\"/g\" ${_barCnf}\n    wait\n    sed -i \"/^$/d\" ${_barCnf}\n    wait\n  fi\n\n  if [ -e \"/etc/init.d/php-fpm\" 
]; then\n    _mrun \"service php-fpm stop\"\n    _mrun \"update-rc.d -f php-fpm remove\"\n    rm -f /etc/init.d/php-fpm\n  fi\n  if [ -e \"/etc/init.d/php52-fpm\" ]; then\n    _mrun \"service php52-fpm stop\"\n    _mrun \"update-rc.d -f php52-fpm remove\"\n    rm -f /etc/init.d/php52-fpm\n  fi\n  if [ -e \"/etc/init.d/php53-fpm\" ]; then\n    _mrun \"service php53-fpm stop\"\n    _mrun \"update-rc.d -f php53-fpm remove\"\n    rm -f /etc/init.d/php53-fpm\n  fi\n  if [ -e \"/etc/init.d/php54-fpm\" ]; then\n    _mrun \"service php54-fpm stop\"\n    _mrun \"update-rc.d -f php54-fpm remove\"\n    rm -f /etc/init.d/php54-fpm\n  fi\n  if [ -e \"/etc/init.d/php55-fpm\" ]; then\n    _mrun \"service php55-fpm stop\"\n    _mrun \"update-rc.d -f php55-fpm remove\"\n    rm -f /etc/init.d/php55-fpm\n  fi\n\n  _VMFAMILY=XEN\n  _VM_TEST=\"$(uname -a)\"\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _VMFAMILY=\"VS\"\n  fi\n}\n\n#\n# Cleanup for PHP versions list.\n_php_if_versions_cleanup_cnf() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_if_versions_cleanup_cnf\"\n  fi\n  if [ -e \"/root/.allow-php-multi-install-cleanup.cnf\" ] \\\n    || [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n    _AUTO_PHP_CLEANUP=YES\n  else\n    _AUTO_PHP_CLEANUP=NO\n  fi\n  if [ \"${_STATUS}\" != \"UPGRADE\" ]; then\n    _AUTO_PHP_CLEANUP=NO\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: _AUTO_PHP_CLEANUP is disabled during ${_STATUS}\"\n    fi\n  else\n    if [ ! -e \"/data/u\" ]; then\n      _AUTO_PHP_CLEANUP=NO\n      _msg \"INFO: Since no /data/u thus _AUTO_PHP_CLEANUP=NO during ${_STATUS}\"\n      sleep 3\n      _msg \"HINT: the cleanup will not work until you install Octopus instance\"\n    fi\n    if [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n      _AUTO_PHP_CLEANUP=NO\n      _msg \"ALRT: Your Ægir Master Instance looks broken!\"\n      sleep 8\n      _msg \"HINT: Please run full upgrade with 'barracuda up-lts' command\"\n      sleep 3\n      _msg \"ALSO: Since no /var/aegir/drush/vendor thus _AUTO_PHP_CLEANUP=NO\"\n    fi\n  fi\n  if [ \"${_AUTO_PHP_CLEANUP}\" = \"YES\" ]; then\n    ### Make sure that _PHP_SINGLE_INSTALL takes precedence\n    if [ ! -z \"${_PHP_SINGLE_INSTALL}\" ]; then\n      if [ \"${_PHP_SINGLE_INSTALL}\" = \"8.5\" ] \\\n        || [ \"${_PHP_SINGLE_INSTALL}\" = \"8.4\" ] \\\n        || [ \"${_PHP_SINGLE_INSTALL}\" = \"8.3\" ]; then\n        _PHP_MULTI_INSTALL=${_PHP_SINGLE_INSTALL}\n        _PHP_CLI_VERSION=${_PHP_SINGLE_INSTALL}\n        _PHP_FPM_VERSION=${_PHP_SINGLE_INSTALL}\n      fi\n    else\n      ### Make sure that _PHP_MULTI_INSTALL includes _PHP_FPM_VERSION\n      if [ \"${_OPENSSL_NEW_VRN}\" = \"${_OPENSSL_LEGACY_VRN}\" ]; then\n        _PHP_CLI_VERSION=7.4\n        _PHP_FPM_VERSION=7.4\n      fi\n      _local_PHP_MULTI_INSTALL=\"${_PHP_FPM_VERSION}\"\n\n      ### Include only used PHP versions on major automated OS upgrades\n      _STRICT_LIMIT_PHP=NO\n      if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n        || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n        || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n        || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n        _STRICT_LIMIT_PHP=YES\n      fi\n\n      ### Include PHP 8.3, 8.4 and 8.5 if conditions are met\n      _if_hosted_sys\n      if [ \"${_STRICT_LIMIT_PHP}\" = \"NO\" ]; then\n        if [ -e \"/root/.include-php-latest.cnf\" ] \\\n          || [ \"${_hostedSys}\" = \"YES\" ]; then\n          if [ \"${_OPENSSL_NEW_VRN}\" != \"${_OPENSSL_LEGACY_VRN}\" ] \\\n            && [ \"${_OPENSSL_NEW_VRN}\" != \"${_OPENSSL_EOL_VRN}\" ]; then\n            _local_PHP_MULTI_INSTALL=\"8.3 8.4 8.5 ${_local_PHP_MULTI_INSTALL}\"\n          fi\n        fi\n      fi\n      
if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: _OPENSSL_NEW_VRN is ${_OPENSSL_NEW_VRN}\"\n        _msg \"INFO: _local_PHP_MULTI_INSTALL is ${_local_PHP_MULTI_INSTALL}\"\n      fi\n\n      ### Add PHP 5.6 if used\n      _is_PHP_multi_five_six=$(grep \"5\\.6\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_five_six=$(grep \"5\\.6\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_five_six=$(grep \"5\\.6\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_five_six=\n      if [[ \"${_is_PHP_multi_five_six}\" =~ \"5.6\" ]] \\\n        || [[ \"${_is_PHP_FPM_five_six}\" =~ \"5.6\" ]] \\\n        || [[ \"${_is_PHP_CLI_five_six}\" =~ \"5.6\" ]]; then\n        _is_PHP_five_six=5.6\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_five_six}\"\n      fi\n\n      ### Add PHP 7.0 if used\n      _is_PHP_multi_seven_zero=$(grep \"7\\.0\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_seven_zero=$(grep \"7\\.0\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_seven_zero=$(grep \"7\\.0\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_seven_zero=\n      if [[ \"${_is_PHP_multi_seven_zero}\" =~ \"7.0\" ]] \\\n        || [[ \"${_is_PHP_FPM_seven_zero}\" =~ \"7.0\" ]] \\\n        || [[ \"${_is_PHP_CLI_seven_zero}\" =~ \"7.0\" ]]; then\n        _is_PHP_seven_zero=7.0\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_seven_zero}\"\n      fi\n\n      ### Add PHP 7.1 if used\n      _is_PHP_multi_seven_one=$(grep \"7\\.1\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_seven_one=$(grep \"7\\.1\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_seven_one=$(grep \"7\\.1\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_seven_one=\n      if [[ \"${_is_PHP_multi_seven_one}\" =~ \"7.1\" ]] \\\n        || [[ \"${_is_PHP_FPM_seven_one}\" =~ \"7.1\" ]] \\\n        || [[ \"${_is_PHP_CLI_seven_one}\" =~ 
\"7.1\" ]]; then\n        _is_PHP_seven_one=7.1\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_seven_one}\"\n      fi\n\n      ### Add PHP 7.2 if used\n      _is_PHP_multi_seven_two=$(grep \"7\\.2\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_seven_two=$(grep \"7\\.2\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_seven_two=$(grep \"7\\.2\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_seven_two=\n      if [[ \"${_is_PHP_multi_seven_two}\" =~ \"7.2\" ]] \\\n        || [[ \"${_is_PHP_FPM_seven_two}\" =~ \"7.2\" ]] \\\n        || [[ \"${_is_PHP_CLI_seven_two}\" =~ \"7.2\" ]]; then\n        _is_PHP_seven_two=7.2\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_seven_two}\"\n      fi\n\n      ### Add PHP 7.3 if used\n      _is_PHP_multi_seven_three=$(grep \"7\\.3\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_seven_three=$(grep \"7\\.3\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_seven_three=$(grep \"7\\.3\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_seven_three=\n      if [[ \"${_is_PHP_multi_seven_three}\" =~ \"7.3\" ]] \\\n        || [[ \"${_is_PHP_FPM_seven_three}\" =~ \"7.3\" ]] \\\n        || [[ \"${_is_PHP_CLI_seven_three}\" =~ \"7.3\" ]]; then\n        _is_PHP_seven_three=7.3\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_seven_three}\"\n      fi\n\n      ### Add PHP 7.4 if used\n      _is_PHP_multi_seven_four=$(grep \"7\\.4\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_seven_four=$(grep \"7\\.4\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_seven_four=$(grep \"7\\.4\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_seven_four=\n      if [[ \"${_is_PHP_multi_seven_four}\" =~ \"7.4\" ]] \\\n        || [[ \"${_is_PHP_FPM_seven_four}\" =~ \"7.4\" ]] \\\n        || [[ \"${_is_PHP_CLI_seven_four}\" =~ \"7.4\" ]]; then\n        
_is_PHP_seven_four=7.4\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_seven_four}\"\n      fi\n\n      ### Add PHP 8.0 if used\n      _is_PHP_multi_eight_zero=$(grep \"8\\.0\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_eight_zero=$(grep \"8\\.0\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_eight_zero=$(grep \"8\\.0\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_eight_zero=\n      if [[ \"${_is_PHP_multi_eight_zero}\" =~ \"8.0\" ]] \\\n        || [[ \"${_is_PHP_FPM_eight_zero}\" =~ \"8.0\" ]] \\\n        || [[ \"${_is_PHP_CLI_eight_zero}\" =~ \"8.0\" ]]; then\n        _is_PHP_eight_zero=8.0\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_eight_zero}\"\n      fi\n\n      ### Add PHP 8.1 if used\n      _is_PHP_multi_eight_one=$(grep \"8\\.1\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_eight_one=$(grep \"8\\.1\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_eight_one=$(grep \"8\\.1\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_eight_one=\n      if [[ \"${_is_PHP_multi_eight_one}\" =~ \"8.1\" ]] \\\n        || [[ \"${_is_PHP_FPM_eight_one}\" =~ \"8.1\" ]] \\\n        || [[ \"${_is_PHP_CLI_eight_one}\" =~ \"8.1\" ]]; then\n        _is_PHP_eight_one=8.1\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_eight_one}\"\n      fi\n\n      ### Add PHP 8.2 if used\n      _is_PHP_multi_eight_two=$(grep \"8\\.2\" /data/disk/*/static/control/multi-fpm.info 2>&1)\n      _is_PHP_FPM_eight_two=$(grep \"8\\.2\" /data/disk/*/static/control/fpm.info 2>&1)\n      _is_PHP_CLI_eight_two=$(grep \"8\\.2\" /data/disk/*/static/control/cli.info 2>&1)\n      _is_PHP_eight_two=\n      if [[ \"${_is_PHP_multi_eight_two}\" =~ \"8.2\" ]] \\\n        || [[ \"${_is_PHP_FPM_eight_two}\" =~ \"8.2\" ]] \\\n        || [[ \"${_is_PHP_CLI_eight_two}\" =~ \"8.2\" ]]; then\n        _is_PHP_eight_two=8.2\n        
_local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_eight_two}\"\n      fi\n\n      ### Add PHP 8.3 if used\n      _is_PHP_multi_eight_three=$(grep \"8\\.3\" /data/disk/*/static/control/multi-fpm.info 3>&1)\n      _is_PHP_FPM_eight_three=$(grep \"8\\.3\" /data/disk/*/static/control/fpm.info 3>&1)\n      _is_PHP_CLI_eight_three=$(grep \"8\\.3\" /data/disk/*/static/control/cli.info 3>&1)\n      _is_PHP_eight_three=\n      if [[ \"${_is_PHP_multi_eight_three}\" =~ \"8.3\" ]] \\\n        || [[ \"${_is_PHP_FPM_eight_three}\" =~ \"8.3\" ]] \\\n        || [[ \"${_is_PHP_CLI_eight_three}\" =~ \"8.3\" ]]; then\n        _is_PHP_eight_three=8.3\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_eight_three}\"\n      fi\n\n      ### Add PHP 8.4 if used\n      _is_PHP_multi_eight_four=$(grep \"8\\.4\" /data/disk/*/static/control/multi-fpm.info 4>&1)\n      _is_PHP_FPM_eight_four=$(grep \"8\\.4\" /data/disk/*/static/control/fpm.info 4>&1)\n      _is_PHP_CLI_eight_four=$(grep \"8\\.4\" /data/disk/*/static/control/cli.info 4>&1)\n      _is_PHP_eight_four=\n      if [[ \"${_is_PHP_multi_eight_four}\" =~ \"8.4\" ]] \\\n        || [[ \"${_is_PHP_FPM_eight_four}\" =~ \"8.4\" ]] \\\n        || [[ \"${_is_PHP_CLI_eight_four}\" =~ \"8.4\" ]]; then\n        _is_PHP_eight_four=8.4\n        _local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_eight_four}\"\n      fi\n\n      ### Add PHP 8.5 if used\n      _is_PHP_multi_eight_five=$(grep \"8\\.5\" /data/disk/*/static/control/multi-fpm.info 5>&1)\n      _is_PHP_FPM_eight_five=$(grep \"8\\.5\" /data/disk/*/static/control/fpm.info 5>&1)\n      _is_PHP_CLI_eight_five=$(grep \"8\\.5\" /data/disk/*/static/control/cli.info 5>&1)\n      _is_PHP_eight_five=\n      if [[ \"${_is_PHP_multi_eight_five}\" =~ \"8.5\" ]] \\\n        || [[ \"${_is_PHP_FPM_eight_five}\" =~ \"8.5\" ]] \\\n        || [[ \"${_is_PHP_CLI_eight_five}\" =~ \"8.5\" ]]; then\n        _is_PHP_eight_five=8.5\n        
_local_PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL} ${_is_PHP_eight_five}\"\n      fi\n\n      ### Overwrite _PHP_MULTI_INSTALL with the list of used PHP versions\n      _PHP_MULTI_INSTALL=\"${_local_PHP_MULTI_INSTALL}\"\n\n      if [ -z \"${_PHP_MULTI_INSTALL}\" ]; then\n        ### Use default mini-legacy list if empty\n        _PHP_MULTI_INSTALL=\"8.3 7.4 5.6\"\n      else\n        ### Sort and remove duplicates from _PHP_MULTI_INSTALL\n        _php_multi_uniq\n      fi\n      export _PHP_MULTI_INSTALL=\"${_PHP_MULTI_INSTALL}\"\n\n      ### Remove previous _PHP_MULTI_INSTALL in /root/.barracuda.cnf\n      _PHP_MULTI_TEST=$(grep _PHP_MULTI_INSTALL ${_barCnf} 2>&1)\n      if [[ \"${_PHP_MULTI_TEST}\" =~ \"_PHP_MULTI_INSTALL\" ]]; then\n        sed -i \"s/^_PHP_MULTI_INSTALL.*//g\" ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_ORIG_MULTI.*//g\" ${_barCnf}\n        wait\n        sed -i \"/^$/d\" ${_barCnf}\n        wait\n      fi\n\n      ### Add optimized _PHP_MULTI_INSTALL in /root/.barracuda.cnf\n      echo \"_PHP_MULTI_INSTALL=\\\"${_PHP_MULTI_INSTALL}\\\"\" >> ${_barCnf}\n\n      ### Export optimized list as a new var\n      _PHP_MULTI_OPTIM=\"${_PHP_MULTI_INSTALL}\"\n      export _PHP_MULTI_OPTIM=\"${_PHP_MULTI_OPTIM}\"\n\n      ### Disable not used PHP versions\n      _php_not_used_disable\n\n      ### Ctrl files cleanup\n      touch /root/.sorted.multi.php.cnf\n      [ -e \"/root/.updated.multi.php.cnf\" ] && rm -f /root/.updated.multi.php.cnf\n      [ -e \"/root/.fixed.multi.php.cnf\" ] && rm -f /root/.fixed.multi.php.cnf\n    fi\n  fi\n}\n\n#\n# Cleanup for PHP versions list.\n_if_php_idle_on_off() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_php_idle_on_off\"\n  fi\n  if [ \"${_PHP_IDLE}\" = \"ON\" ]; then\n    [ -e \"/root/.allow-php-multi-install-cleanup.cnf\" ] && rm -f /root/.allow-php-multi-install-cleanup.cnf\n    rm -f /var/log/boa/._php_libs_fix_*.pid\n    touch /root/.proxy.cnf\n    pkill -9 -f second.sh\n    sleep 
5\n    _php_not_used_enable_again\n    sed -i \"s/^_PHP_IDLE.*//g\" ${_barCnf}\n    wait\n    [ -e \"/root/.proxy.cnf\" ] && rm -f /root/.proxy.cnf\n    _php_install_deps\n    _php_libs_fix\n    if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n      _php_ioncube_check_if_update\n      _php_check_if_rebuild\n    fi\n    _php_install_upgrade\n    _php_config_check_update\n    _php_upgrade_all\n    _if_install_php_newrelic\n    _newrelic_check_fix\n    _complete\n    exit 0\n  elif [ \"${_PHP_IDLE}\" = \"OFF\" ]; then\n    touch /root/.proxy.cnf\n    pkill -9 -f second.sh\n    sleep 5\n    touch /root/.allow-php-multi-install-cleanup.cnf\n    _php_if_versions_cleanup_cnf\n    sed -i \"s/^_PHP_IDLE.*//g\" ${_barCnf}\n    wait\n    [ -e \"/root/.proxy.cnf\" ] && rm -f /root/.proxy.cnf\n    if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n      || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n      || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n      || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n      echo \" \"\n      _msg \"Bye\"\n      _clean_pid_exit\n    else\n      _complete\n      exit 0\n    fi\n  fi\n}\n\n#\n# Read or create Barracuda cnf file.\n_barracuda_cnf() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _barracuda_cnf\"\n  fi\n  _VMFAMILY=XEN\n  _VM_TEST=\"$(uname -a)\"\n  if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n    _VMFAMILY=\"VS\"\n  fi\n  if [ ! -e \"${_barCnf}\" ]; then\n    if [[ \"${_MY_EMAIL}\" =~ \"omega8.cc\" ]]; then\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" != \"YES\" ]; then\n        _msg \"EXIT: Invalid email address defined in the _MY_EMAIL variable\"\n        _msg \"EXIT: Bye (1)\"\n        _clean_pid_exit _barracuda_cnf_a\n      fi\n    fi\n\n    _php_multi_uniq\n\n    _ffList=\"/var/log/boa/devaun-fast-mirrors-list.txt\"\n    _BROKEN_FF=$(grep \"devuan/merged\" ${_ffList} 2>&1)\n    if [ ! -e \"${_ffList}\" ] || [[ ! 
\"${_BROKEN_FF}\" =~ \"devuan/merged\" ]]; then\n      _ffDevuan=\"$(which ffdevuan)\"\n      if [ -x \"${_ffDevuan}\" ]; then\n        bash ${_ffDevuan} &> /dev/null\n        wait\n      fi\n    fi\n\n    if [ -z \"${_LOCAL_DEVUAN_MIRROR}\" ] \\\n      || [ \"${_LOCAL_DEVUAN_MIRROR}\" = \"deb.devuan.org\" ] \\\n      || [ \"${_LOCAL_DEVUAN_MIRROR}\" = \"https://mirrors.dotsrc.org/devuan\" ] \\\n      || [[ ! \"${_LOCAL_DEVUAN_MIRROR}\" =~ \"http\" ]]; then\n      _DVN_MRR=\"$(_find_fast_devuan_mirror)\"\n\t  if [ -n \"${_DVN_MRR}\" ]; then\n\t    _devM=\"${_DVN_MRR}\"\n\t  else\n\t    _devM=\"http://deb.devuan.org/merged\"\n\t  fi\n\t  _LOCAL_DEVUAN_MIRROR=\"${_devM}\"\n    fi\n\n    if [ -z \"${_LOCAL_DEBIAN_MIRROR}\" ]; then\n      _debM=\"deb.debian.org\"\n      _LOCAL_DEBIAN_MIRROR=\"${_debM}\"\n    fi\n\n    _msg \"INFO: Creating your ${_barCnf} config file\"\n    sleep 1\n    echo \"###\"                                                 > ${_barCnf}\n    echo \"### Configuration created on ${_NOW}\"               >> ${_barCnf}\n    echo \"###\"                                                >> ${_barCnf}\n\n    echo \"### NOTE: the group of settings displayed below will *not* be overridden\" >> ${_barCnf}\n    echo \"### on upgrade by the Barracuda script nor by this configuration file.\" >> ${_barCnf}\n    echo \"### They can be defined only on initial Barracuda install.\" >> ${_barCnf}\n    echo \"###\" >> ${_barCnf}\n\n    echo \"_LOCAL_NETWORK_HN=\\\"${_LOCAL_NETWORK_HN}\\\"\"         >> ${_barCnf}\n    echo \"_LOCAL_NETWORK_IP=\\\"${_LOCAL_NETWORK_IP}\\\"\"         >> ${_barCnf}\n    echo \"_MY_FRONT=\\\"${_MY_FRONT}\\\"\"                         >> ${_barCnf}\n    echo \"_MY_HOSTN=\\\"${_MY_HOSTN}\\\"\"                         >> ${_barCnf}\n    echo \"_MY_OWNIP=\\\"${_MY_OWNIP}\\\"\"                         >> ${_barCnf}\n    echo \"_SMTP_RELAY_HOST=\\\"${_SMTP_RELAY_HOST}\\\"\"           >> ${_barCnf}\n    echo \"_SMTP_RELAY_TEST=${_SMTP_RELAY_TEST}\" 
              >> ${_barCnf}\n    echo \"_THIS_DB_HOST=${_THIS_DB_HOST}\"                     >> ${_barCnf}\n\n    echo \"###\" >> ${_barCnf}\n    echo \"### NOTE: the group of settings displayed below\" >> ${_barCnf}\n    echo \"### will *override* all listed settings in the Barracuda script,\" >> ${_barCnf}\n    echo \"### both on initial install and upgrade.\" >> ${_barCnf}\n    echo \"###\" >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_XTRAS_LIST=\\\"${_XTRAS_LIST}\\\"\"                     >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_AUTOPILOT=${_AUTOPILOT}\"                           >> ${_barCnf}\n    echo \"_DEBUG_MODE=${_DEBUG_MODE}\"                         >> ${_barCnf}\n    echo \"_DL_MODE=${_DL_MODE}\"                               >> ${_barCnf}\n    echo \"_INCIDENT_REPORT=\\\"${_INCIDENT_REPORT}\\\"\"           >> ${_barCnf}\n    echo \"_MY_EMAIL=\\\"${_MY_EMAIL}\\\"\"                         >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_CPU_CRIT_RATIO=${_CPU_CRIT_RATIO}\"                 >> ${_barCnf}\n    echo \"_CPU_MAX_RATIO=${_CPU_MAX_RATIO}\"                   >> ${_barCnf}\n    echo \"_CPU_TASK_RATIO=${_CPU_TASK_RATIO}\"                 >> ${_barCnf}\n    echo \"_CPU_SPIDER_RATIO=${_CPU_SPIDER_RATIO}\"             >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_CUSTOM_COLLATION_SQL=${_CUSTOM_COLLATION_SQL}\"     >> ${_barCnf}\n    echo \"_DB_BINARY_LOG=${_DB_BINARY_LOG}\"                   >> ${_barCnf}\n    echo \"_DB_SERIES=${_DB_SERIES}\"                           >> ${_barCnf}\n    echo \"_DB_SERVER=${_DB_SERVER}\"                           >> ${_barCnf}\n    echo \"_SQL_LOW_MAX_TTL=${_SQL_LOW_MAX_TTL}\"               >> ${_barCnf}\n    echo \"_SQL_MAX_TTL=${_SQL_MAX_TTL}\"                       >> 
${_barCnf}\n    echo \"_USE_MYSQLTUNER=${_USE_MYSQLTUNER}\"                 >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_DNS_SETUP_TEST=${_DNS_SETUP_TEST}\"                 >> ${_barCnf}\n    echo \"_EXTRA_PACKAGES=${_EXTRA_PACKAGES}\"                 >> ${_barCnf}\n    echo \"_FORCE_GIT_MIRROR=\\\"${_FORCE_GIT_MIRROR}\\\"\"         >> ${_barCnf}\n    echo \"_LOCAL_DEVUAN_MIRROR=${_LOCAL_DEVUAN_MIRROR}\"       >> ${_barCnf}\n    echo \"_LOCAL_DEBIAN_MIRROR=${_LOCAL_DEBIAN_MIRROR}\"       >> ${_barCnf}\n    echo \"_NEWRELIC_KEY=${_NEWRELIC_KEY}\"                     >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_ENABLE_GOACCESS=${_ENABLE_GOACCESS}\"               >> ${_barCnf}\n    echo \"_MAGICK_FROM_SOURCES=${_MAGICK_FROM_SOURCES}\"       >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_NGINX_DOS_DIV_INC_NR=\\\"${_NGINX_DOS_DIV_INC_NR}\\\"\" >> ${_barCnf}\n    echo \"_NGINX_DOS_IGNORE=\\\"${_NGINX_DOS_IGNORE}\\\"\"         >> ${_barCnf}\n    echo \"_NGINX_DOS_INC_MIN=\\\"${_NGINX_DOS_INC_MIN}\\\"\"       >> ${_barCnf}\n    echo \"_NGINX_DOS_LIMIT=${_NGINX_DOS_LIMIT}\"               >> ${_barCnf}\n    echo \"_NGINX_DOS_LINES=${_NGINX_DOS_LINES}\"               >> ${_barCnf}\n    echo \"_NGINX_DOS_LOG=${_NGINX_DOS_LOG}\"                   >> ${_barCnf}\n    echo \"_NGINX_DOS_MODE=${_NGINX_DOS_MODE}\"                 >> ${_barCnf}\n    echo \"_NGINX_DOS_STOP=\\\"${_NGINX_DOS_STOP}\\\"\"             >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_NGINX_EXTRA_CONF=\\\"${_NGINX_EXTRA_CONF}\\\"\"         >> ${_barCnf}\n    echo \"_NGINX_FORWARD_SECRECY=${_NGINX_FORWARD_SECRECY}\"   >> ${_barCnf}\n    echo \"_NGINX_HEADERS=${_NGINX_HEADERS}\"                   >> ${_barCnf}\n    echo \"_NGINX_LDAP=${_NGINX_LDAP}\"                     
    >> ${_barCnf}\n    echo \"_NGINX_NAXSI=${_NGINX_NAXSI}\"                       >> ${_barCnf}\n    echo \"_NGINX_SPDY=${_NGINX_SPDY}\"                         >> ${_barCnf}\n    echo \"_NGINX_WORKERS=${_NGINX_WORKERS}\"                   >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_PHP_CLI_VERSION=${_PHP_CLI_VERSION}\"               >> ${_barCnf}\n    echo \"_PHP_EXTRA_CONF=\\\"${_PHP_EXTRA_CONF}\\\"\"             >> ${_barCnf}\n    echo \"_PHP_FPM_DENY=\\\"${_PHP_FPM_DENY}\\\"\"                 >> ${_barCnf}\n    echo \"_PHP_FPM_VERSION=${_PHP_FPM_VERSION}\"               >> ${_barCnf}\n    echo \"_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\"               >> ${_barCnf}\n    echo \"_PHP_GEOS=${_PHP_GEOS}\"                             >> ${_barCnf}\n    echo \"_PHP_IONCUBE=${_PHP_IONCUBE}\"                       >> ${_barCnf}\n    echo \"_PHP_MONGODB=${_PHP_MONGODB}\"                       >> ${_barCnf}\n    echo \"_PHP_MULTI_INSTALL=\\\"${_PHP_MULTI_INSTALL}\\\"\"       >> ${_barCnf}\n    echo \"_PHP_SINGLE_INSTALL=${_PHP_SINGLE_INSTALL}\"         >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_VALKEY_LISTEN_MODE=${_VALKEY_LISTEN_MODE}\"         >> ${_barCnf}\n    echo \"_VALKEY_MAJOR_RELEASE=${_VALKEY_MAJOR_RELEASE}\"     >> ${_barCnf}\n    echo \"_REDIS_LISTEN_MODE=${_REDIS_LISTEN_MODE}\"           >> ${_barCnf}\n    echo \"_REDIS_MAJOR_RELEASE=${_REDIS_MAJOR_RELEASE}\"       >> ${_barCnf}\n    echo \"_RESERVED_RAM=${_RESERVED_RAM}\"                     >> ${_barCnf}\n    echo \"_SPEED_VALID_MAX=${_SPEED_VALID_MAX}\"               >> ${_barCnf}\n    echo \"_SSH_ARMOUR=${_SSH_ARMOUR}\"                         >> ${_barCnf}\n    echo \"_SSH_FROM_SOURCES=${_SSH_FROM_SOURCES}\"             >> ${_barCnf}\n    echo \"_SSH_PORT=${_SSH_PORT}\"                             >> ${_barCnf}\n    echo \"_STRICT_BIN_PERMISSIONS=${_STRICT_BIN_PERMISSIONS}\" >> 
${_barCnf}\n    echo \"_STRONG_PASSWORDS=${_STRONG_PASSWORDS}\"             >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_CUSTOM_CONFIG_CSF=${_CUSTOM_CONFIG_CSF}\"           >> ${_barCnf}\n    echo \"_CUSTOM_CONFIG_LSHELL=${_CUSTOM_CONFIG_LSHELL}\"     >> ${_barCnf}\n    echo \"_CUSTOM_CONFIG_VALKEY=${_CUSTOM_CONFIG_VALKEY}\"     >> ${_barCnf}\n    echo \"_CUSTOM_CONFIG_REDIS=${_CUSTOM_CONFIG_REDIS}\"       >> ${_barCnf}\n    echo \"_CUSTOM_CONFIG_SQL=${_CUSTOM_CONFIG_SQL}\"           >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_AEGIR_UPGRADE_ONLY=${_AEGIR_UPGRADE_ONLY}\"         >> ${_barCnf}\n    echo \"_SYSTEM_UP_ONLY=${_SYSTEM_UP_ONLY}\"                 >> ${_barCnf}\n    echo \"###\"                                                >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"_MODULES_FIX=${_MODULES_FIX}\"                       >> ${_barCnf}\n    echo \"_MODULES_SKIP=\\\"${_MODULES_SKIP}\\\"\"                 >> ${_barCnf}\n    echo \"_PERMISSIONS_FIX=${_PERMISSIONS_FIX}\"               >> ${_barCnf}\n\n    echo \"###\"                                                >> ${_barCnf}\n    echo \"### Barracuda\"                                      >> ${_barCnf}\n    echo \"###\"                                                >> ${_barCnf}\n\n    ### Force HTTP/2 or SPDY plus PFS on supported systems\n    sed -i \"s/^_NGINX_SPDY=.*/_NGINX_SPDY=YES/g\"                 ${_barCnf}\n    wait\n    sed -i \"s/^_NGINX_FORWARD.*/_NGINX_FORWARD_SECRECY=YES/g\"    ${_barCnf}\n    wait\n    ### Force ImageMagick from packages\n    sed -i \"s/^_MAGICK_FROM_S.*/_MAGICK_FROM_SOURCES=NO/g\"       ${_barCnf}\n    wait\n    sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"                   ${_barCnf}\n    wait\n    sed -i \"s/^_FORCE_GIT_.*/_FORCE_GIT_MIRROR=\\\"\\\"/g\"           ${_barCnf}\n    wait\n  
  sed -i \"s/^_SSH_FROM_SOURCES=.*/_SSH_FROM_SOURCES=YES/g\"     ${_barCnf}\n    wait\n    sed -i \"s/^_STRICT_BIN_.*/_STRICT_BIN_PERMISSIONS=YES/g\"     ${_barCnf}\n    wait\n    sed -i \"s/^_STRONG_PASS.*/_STRONG_PASSWORDS=YES/g\"           ${_barCnf}\n    wait\n    sed -i \"s/^_PERMISSIONS_FIX=.*/_PERMISSIONS_FIX=YES/g\"       ${_barCnf}\n    wait\n    sed -i \"s/^_FORCE_GIT_.*/_FORCE_GIT_MIRROR=\\\"\\\"/g\" /root/.*.octopus.cnf &> /dev/null\n    wait\n    sed -i \"s/^_STRONG_PASS.*/_STRONG_PASSWORDS=YES/g\" /root/.*.octopus.cnf &> /dev/null\n    wait\n    sed -i \"s/^_VALKEY_LISTEN.*/_VALKEY_LISTEN_MODE=SOCKET/g\"    ${_barCnf}\n    wait\n    sed -i \"s/^_REDIS_LISTEN.*/_REDIS_LISTEN_MODE=SOCKET/g\"      ${_barCnf}\n    wait\n    if [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n      if [ -x \"/opt/php74/bin/php\" ]; then\n        _fCli=7.4\n      elif [ -x \"/opt/php73/bin/php\" ]; then\n        _fCli=7.3\n      elif [ -x \"/opt/php72/bin/php\" ]; then\n        _fCli=7.2\n      elif [ -x \"/opt/php71/bin/php\" ]; then\n        _fCli=7.1\n      elif [ -x \"/opt/php70/bin/php\" ]; then\n        _fCli=7.0\n      elif [ -x \"/opt/php56/bin/php\" ]; then\n        _fCli=5.6\n      fi\n    else\n      _fCli=8.4\n    fi\n    if [ -z \"${_PHP_SINGLE_INSTALL}\" ]; then\n      sed -i \"s/^_PHP_CLI_VERSION=5.6/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=5.6/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.0/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.0/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.1/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.1/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.2/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed 
-i \"s/^_PHP_FPM_VERSION=7.2/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.3/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.3/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.4/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.4/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=8.0/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=8.0/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n    fi\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      sed -i 's~^_MY_EMAIL=.*~_MY_EMAIL=\"notify@omega8.cc\"~'          ${_barCnf}\n      wait\n    fi\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Reading your ${_barCnf} config file\"\n      sleep 1\n      _msg \"NOTE! Please review all config options displayed below\"\n      _msg \"NOTE! They will *override* all settings in the Barracuda script\"\n    fi\n    sed -i \"s/_SPEED_VALID_MAX=300/_SPEED_VALID_MAX=3600/g\" ${_barCnf}\n    wait\n\n    _CUSTOM_COLLATION_SQL_TEST=$(grep _CUSTOM_COLLATION_SQL ${_barCnf} 2>&1)\n    if [[ \"${_CUSTOM_COLLATION_SQL_TEST}\" =~ \"_CUSTOM_COLLATION_SQL\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_CUSTOM_COLLATION_SQL=${_CUSTOM_COLLATION_SQL}\" >> ${_barCnf}\n    fi\n\n    _SQL_LOW_MAX_TTL_TEST=$(grep _SQL_LOW_MAX_TTL ${_barCnf} 2>&1)\n    if [[ \"${_SQL_LOW_MAX_TTL_TEST}\" =~ \"_SQL_LOW_MAX_TTL\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_SQL_LOW_MAX_TTL=${_SQL_LOW_MAX_TTL}\" >> ${_barCnf}\n    fi\n\n    _SQL_MAX_TTL_TEST=$(grep _SQL_MAX_TTL ${_barCnf} 2>&1)\n    if [[ \"${_SQL_MAX_TTL_TEST}\" =~ \"_SQL_MAX_TTL\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_SQL_MAX_TTL=${_SQL_MAX_TTL}\" >> ${_barCnf}\n    fi\n\n    _CPU_TASK_RATIO_TEST=$(grep _CPU_TASK_RATIO ${_barCnf} 2>&1)\n    if [[ \"${_CPU_TASK_RATIO_TEST}\" =~ \"_CPU_TASK_RATIO\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_CPU_TASK_RATIO=${_CPU_TASK_RATIO}\" >> 
${_barCnf}\n    fi\n\n    _INCIDENT_REPORT_TEST=$(grep _INCIDENT_REPORT ${_barCnf} 2>&1)\n    if [[ \"${_INCIDENT_REPORT_TEST}\" =~ \"_INCIDENT_REPORT\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_INCIDENT_REPORT=${_INCIDENT_REPORT}\" >> ${_barCnf}\n    fi\n\n    _ENABLE_GOACCESS_TEST=$(grep _ENABLE_GOACCESS ${_barCnf} 2>&1)\n    if [[ \"${_ENABLE_GOACCESS_TEST}\" =~ \"_ENABLE_GOACCESS\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_ENABLE_GOACCESS=${_ENABLE_GOACCESS}\" >> ${_barCnf}\n    fi\n\n    _NGINX_DOS_DIV_INC_NR_TEST=$(grep _NGINX_DOS_DIV_INC_NR ${_barCnf} 2>&1)\n    if [[ \"${_NGINX_DOS_DIV_INC_NR_TEST}\" =~ \"_NGINX_DOS_DIV_INC_NR\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_NGINX_DOS_DIV_INC_NR=${_NGINX_DOS_DIV_INC_NR}\" >> ${_barCnf}\n    fi\n\n    _NGINX_DOS_IGNORE_TEST=$(grep _NGINX_DOS_IGNORE ${_barCnf} 2>&1)\n    if [[ \"${_NGINX_DOS_IGNORE_TEST}\" =~ \"_NGINX_DOS_IGNORE\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_NGINX_DOS_IGNORE=${_NGINX_DOS_IGNORE}\" >> ${_barCnf}\n    fi\n\n    _NGINX_DOS_INC_MIN_TEST=$(grep _NGINX_DOS_INC_MIN ${_barCnf} 2>&1)\n    if [[ \"${_NGINX_DOS_INC_MIN_TEST}\" =~ \"_NGINX_DOS_INC_MIN\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_NGINX_DOS_INC_MIN=${_NGINX_DOS_INC_MIN}\" >> ${_barCnf}\n    fi\n\n    _NGINX_WORKERS_TEST=$(grep _NGINX_WORKERS ${_barCnf} 2>&1)\n    if [[ \"${_NGINX_WORKERS_TEST}\" =~ \"_NGINX_WORKERS\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_NGINX_WORKERS=${_NGINX_WORKERS}\" >> ${_barCnf}\n    fi\n\n    _PHP_FPM_WORKERS_TEST=$(grep _PHP_FPM_WORKERS ${_barCnf} 2>&1)\n    if [[ \"${_PHP_FPM_WORKERS_TEST}\" =~ \"_PHP_FPM_WORKERS\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"_PHP_FPM_WORKERS=${_PHP_FPM_WORKERS}\" >> ${_barCnf}\n    fi\n\n    if [ -d \"/data/u\" ]; then\n      _php_legacy_versions_cleanup_cnf\n    fi\n\n    _PHP_FPM_VERSION_TEST=$(grep _PHP_FPM_VERSION ${_barCnf} 2>&1)\n    if [[ ! 
\"${_PHP_FPM_VERSION_TEST}\" =~ \"_PHP_FPM_VERSION\" ]]; then\n      echo \"_PHP_FPM_VERSION=${_PHP_FPM_VERSION}\" >> ${_barCnf}\n    fi\n\n    _PHP_CLI_VERSION_TEST=$(grep _PHP_CLI_VERSION ${_barCnf} 2>&1)\n    if [[ ! \"${_PHP_CLI_VERSION_TEST}\" =~ \"_PHP_CLI_VERSION\" ]]; then\n      echo \"_PHP_CLI_VERSION=${_PHP_CLI_VERSION}\" >> ${_barCnf}\n    fi\n\n    _PHP_SINGLE_INSTALL_TEST=$(grep _PHP_SINGLE_INSTALL ${_barCnf} 2>&1)\n    if [[ ! \"${_PHP_SINGLE_INSTALL_TEST}\" =~ \"_PHP_SINGLE_INSTALL\" ]]; then\n      echo \"_PHP_SINGLE_INSTALL=${_PHP_SINGLE_INSTALL}\" >> ${_barCnf}\n    fi\n\n    _CUSTOM_CONFIG_LSHELL_TEST=$(grep _CUSTOM_CONFIG_LSHELL ${_barCnf} 2>&1)\n    if [[ ! \"${_CUSTOM_CONFIG_LSHELL_TEST}\" =~ \"_CUSTOM_CONFIG_LSHELL\" ]]; then\n      echo \"_CUSTOM_CONFIG_LSHELL=${_CUSTOM_CONFIG_LSHELL}\" >> ${_barCnf}\n    fi\n\n    _CUSTOM_CONFIG_CSF_TEST=$(grep _CUSTOM_CONFIG_CSF ${_barCnf} 2>&1)\n    if [[ ! \"${_CUSTOM_CONFIG_CSF_TEST}\" =~ \"_CUSTOM_CONFIG_CSF\" ]]; then\n      echo \"_CUSTOM_CONFIG_CSF=${_CUSTOM_CONFIG_CSF}\" >> ${_barCnf}\n    fi\n\n    _CUSTOM_CONFIG_SQL_TEST=$(grep _CUSTOM_CONFIG_SQL ${_barCnf} 2>&1)\n    if [[ ! \"${_CUSTOM_CONFIG_SQL_TEST}\" =~ \"_CUSTOM_CONFIG_SQL\" ]]; then\n      echo \"_CUSTOM_CONFIG_SQL=${_CUSTOM_CONFIG_SQL}\" >> ${_barCnf}\n    fi\n\n    _SPEED_VALID_MAX_TEST=$(grep _SPEED_VALID_MAX ${_barCnf} 2>&1)\n    if [[ ! \"${_SPEED_VALID_MAX_TEST}\" =~ \"_SPEED_VALID_MAX\" ]]; then\n      echo \"_SPEED_VALID_MAX=${_SPEED_VALID_MAX}\" >> ${_barCnf}\n    fi\n\n    _NGINX_DOS_LIMIT_TEST=$(grep _NGINX_DOS_LIMIT ${_barCnf} 2>&1)\n    if [[ ! \"${_NGINX_DOS_LIMIT_TEST}\" =~ \"_NGINX_DOS_LIMIT\" ]]; then\n      echo \"_NGINX_DOS_LIMIT=${_NGINX_DOS_LIMIT}\" >> ${_barCnf}\n    fi\n\n    _CPU_SPIDER_RATIO_TEST=$(grep _CPU_SPIDER_RATIO ${_barCnf} 2>&1)\n    if [[ ! 
\"${_CPU_SPIDER_RATIO_TEST}\" =~ \"_CPU_SPIDER_RATIO\" ]]; then\n      echo \"_CPU_SPIDER_RATIO=${_CPU_SPIDER_RATIO}\" >> ${_barCnf}\n    fi\n\n    _CPU_MAX_RATIO_TEST=$(grep _CPU_MAX_RATIO ${_barCnf} 2>&1)\n    if [[ ! \"${_CPU_MAX_RATIO_TEST}\" =~ \"_CPU_MAX_RATIO\" ]]; then\n      echo \"_CPU_MAX_RATIO=${_CPU_MAX_RATIO}\" >> ${_barCnf}\n    fi\n\n    _CPU_CRIT_RATIO_TEST=$(grep _CPU_CRIT_RATIO ${_barCnf} 2>&1)\n    if [[ ! \"${_CPU_CRIT_RATIO_TEST}\" =~ \"_CPU_CRIT_RATIO\" ]]; then\n      echo \"_CPU_CRIT_RATIO=${_CPU_CRIT_RATIO}\" >> ${_barCnf}\n    fi\n\n    _SYSTEM_UP_ONLY_TEST=$(grep _SYSTEM_UP_ONLY ${_barCnf} 2>&1)\n    if [[ ! \"${_SYSTEM_UP_ONLY_TEST}\" =~ \"_SYSTEM_UP_ONLY\" ]]; then\n      echo \"_SYSTEM_UP_ONLY=${_SYSTEM_UP_ONLY}\" >> ${_barCnf}\n    fi\n\n    _AEGIR_UPGRADE_ONLY_TEST=$(grep _AEGIR_UPGRADE_ONLY ${_barCnf} 2>&1)\n    if [[ ! \"${_AEGIR_UPGRADE_ONLY_TEST}\" =~ \"_AEGIR_UPGRADE_ONLY\" ]]; then\n      echo \"_AEGIR_UPGRADE_ONLY=${_AEGIR_UPGRADE_ONLY}\" >> ${_barCnf}\n    fi\n\n    _CUSTOM_CONFIG_VALKEY_TEST=$(grep _CUSTOM_CONFIG_VALKEY ${_barCnf} 2>&1)\n    if [[ ! \"${_CUSTOM_CONFIG_VALKEY_TEST}\" =~ \"_CUSTOM_CONFIG_VALKEY\" ]]; then\n      echo \"_CUSTOM_CONFIG_VALKEY=${_CUSTOM_CONFIG_VALKEY}\" >> ${_barCnf}\n    fi\n\n    _CUSTOM_CONFIG_REDIS_TEST=$(grep _CUSTOM_CONFIG_REDIS ${_barCnf} 2>&1)\n    if [[ ! \"${_CUSTOM_CONFIG_REDIS_TEST}\" =~ \"_CUSTOM_CONFIG_REDIS\" ]]; then\n      echo \"_CUSTOM_CONFIG_REDIS=${_CUSTOM_CONFIG_REDIS}\" >> ${_barCnf}\n    fi\n\n    _NEWRELIC_KEY_TEST=$(grep _NEWRELIC_KEY ${_barCnf} 2>&1)\n    if [[ ! \"${_NEWRELIC_KEY_TEST}\" =~ \"_NEWRELIC_KEY\" ]]; then\n      if [ ! 
-z \"${_NEWRELIC_KEY}\" ]; then\n        echo \"_NEWRELIC_KEY=${_NEWRELIC_KEY}\" >> ${_barCnf}\n      else\n        if [ -e \"/etc/newrelic/newrelic.cfg\" ]; then\n          _NEWRELIC_KEY=$(grep license_key /etc/newrelic/newrelic.cfg 2>/dev/null | tr -d '\\n')\n          echo \"_NEWRELIC_KEY=${_NEWRELIC_KEY}\" >> ${_barCnf}\n          sed -i \"s/license_key=//g\" ${_barCnf}\n          wait\n        fi\n      fi\n    fi\n\n    _EXTRA_PACKAGES_TEST=$(grep _EXTRA_PACKAGES ${_barCnf} 2>&1)\n    if [[ ! \"${_EXTRA_PACKAGES_TEST}\" =~ \"_EXTRA_PACKAGES\" ]]; then\n      echo \"_EXTRA_PACKAGES=${_EXTRA_PACKAGES}\" >> ${_barCnf}\n    fi\n\n    _PHP_EXTRA_CONF_TEST=$(grep _PHP_EXTRA_CONF ${_barCnf} 2>&1)\n    if [[ ! \"${_PHP_EXTRA_CONF_TEST}\" =~ \"_PHP_EXTRA_CONF\" ]]; then\n      echo \"_PHP_EXTRA_CONF=\\\"${_PHP_EXTRA_CONF}\\\"\" >> ${_barCnf}\n    fi\n\n    _PHP_FPM_DENY_TEST=$(grep _PHP_FPM_DENY ${_barCnf} 2>&1)\n    if [[ ! \"${_PHP_FPM_DENY_TEST}\" =~ \"_PHP_FPM_DENY\" ]]; then\n      echo \"_PHP_FPM_DENY=\\\"${_PHP_FPM_DENY}\\\"\" >> ${_barCnf}\n    fi\n\n    _STRONG_PASSWORDS_TEST=$(grep _STRONG_PASSWORDS ${_barCnf} 2>&1)\n    if [[ ! \"${_STRONG_PASSWORDS_TEST}\" =~ \"_STRONG_PASSWORDS\" ]]; then\n      echo \"_STRONG_PASSWORDS=${_STRONG_PASSWORDS}\" >> ${_barCnf}\n    fi\n\n    _DB_BINARY_LOG_TEST=$(grep _DB_BINARY_LOG ${_barCnf} 2>&1)\n    if [[ ! \"${_DB_BINARY_LOG_TEST}\" =~ \"_DB_BINARY_LOG\" ]]; then\n      echo \"_DB_BINARY_LOG=${_DB_BINARY_LOG}\" >> ${_barCnf}\n    fi\n\n    _USE_MYSQLTUNER_TEST=$(grep _USE_MYSQLTUNER ${_barCnf} 2>&1)\n    if [[ ! \"${_USE_MYSQLTUNER_TEST}\" =~ \"_USE_MYSQLTUNER\" ]]; then\n      echo \"_USE_MYSQLTUNER=${_USE_MYSQLTUNER}\" >> ${_barCnf}\n    fi\n\n    _VALKEY_LISTEN_MODE_TEST=$(grep _VALKEY_LISTEN_MODE ${_barCnf} 2>&1)\n    if [[ ! 
\"${_VALKEY_LISTEN_MODE_TEST}\" =~ \"_VALKEY_LISTEN_MODE\" ]]; then\n      echo \"_VALKEY_LISTEN_MODE=${_VALKEY_LISTEN_MODE}\" >> ${_barCnf}\n    fi\n\n    _VALKEY_MAJOR_RELEASE_TEST=$(grep _VALKEY_MAJOR_RELEASE ${_barCnf} 2>&1)\n    if [[ ! \"${_VALKEY_MAJOR_RELEASE_TEST}\" =~ \"_VALKEY_MAJOR_RELEASE\" ]]; then\n      echo \"_VALKEY_MAJOR_RELEASE=${_VALKEY_MAJOR_RELEASE}\" >> ${_barCnf}\n    fi\n\n    _REDIS_LISTEN_MODE_TEST=$(grep _REDIS_LISTEN_MODE ${_barCnf} 2>&1)\n    if [[ ! \"${_REDIS_LISTEN_MODE_TEST}\" =~ \"_REDIS_LISTEN_MODE\" ]]; then\n      echo \"_REDIS_LISTEN_MODE=${_REDIS_LISTEN_MODE}\" >> ${_barCnf}\n    fi\n\n    _REDIS_MAJOR_RELEASE_TEST=$(grep _REDIS_MAJOR_RELEASE ${_barCnf} 2>&1)\n    if [[ ! \"${_REDIS_MAJOR_RELEASE_TEST}\" =~ \"_REDIS_MAJOR_RELEASE\" ]]; then\n      echo \"_REDIS_MAJOR_RELEASE=${_REDIS_MAJOR_RELEASE}\" >> ${_barCnf}\n    fi\n\n    _NGINX_HEADERS_TEST=$(grep _NGINX_HEADERS ${_barCnf} 2>&1)\n    if [[ ! \"${_NGINX_HEADERS_TEST}\" =~ \"_NGINX_HEADERS\" ]]; then\n      echo \"_NGINX_HEADERS=${_NGINX_HEADERS}\" >> ${_barCnf}\n    fi\n\n    _NGINX_LDAP_TEST=$(grep _NGINX_LDAP ${_barCnf} 2>&1)\n    if [[ ! \"${_NGINX_LDAP_TEST}\" =~ \"_NGINX_LDAP\" ]]; then\n      echo \"_NGINX_LDAP=${_NGINX_LDAP}\" >> ${_barCnf}\n    fi\n\n    _NGINX_NAXSI_TEST=$(grep _NGINX_NAXSI ${_barCnf} 2>&1)\n    if [[ ! \"${_NGINX_NAXSI_TEST}\" =~ \"_NGINX_NAXSI\" ]]; then\n      echo \"_NGINX_NAXSI=${_NGINX_NAXSI}\" >> ${_barCnf}\n    fi\n\n    _NGINX_SPDY_TEST=$(grep _NGINX_SPDY ${_barCnf} 2>&1)\n    if [[ ! \"${_NGINX_SPDY_TEST}\" =~ \"_NGINX_SPDY\" ]]; then\n      echo \"_NGINX_SPDY=${_NGINX_SPDY}\" >> ${_barCnf}\n    fi\n\n    _NGINX_FORWARD_SECRECY_TEST=$(grep _NGINX_FORWARD_SECRECY ${_barCnf} 2>&1)\n    if [[ ! \"${_NGINX_FORWARD_SECRECY_TEST}\" =~ \"_NGINX_FORWARD_SECRECY\" ]]; then\n      echo \"_NGINX_FORWARD_SECRECY=${_NGINX_FORWARD_SECRECY}\" >> ${_barCnf}\n    fi\n\n    _PHP_IONCUBE_TEST=$(grep _PHP_IONCUBE ${_barCnf} 2>&1)\n    if [[ ! 
\"${_PHP_IONCUBE_TEST}\" =~ \"_PHP_IONCUBE\" ]]; then\n      echo \"_PHP_IONCUBE=${_PHP_IONCUBE}\" >> ${_barCnf}\n    fi\n\n    _PHP_GEOS_TEST=$(grep _PHP_GEOS ${_barCnf} 2>&1)\n    if [[ ! \"${_PHP_GEOS_TEST}\" =~ \"_PHP_GEOS\" ]]; then\n      echo \"_PHP_GEOS=${_PHP_GEOS}\" >> ${_barCnf}\n    fi\n\n    _PHP_MONGODB_TEST=$(grep _PHP_MONGODB ${_barCnf} 2>&1)\n    if [[ ! \"${_PHP_MONGODB_TEST}\" =~ \"_PHP_MONGODB\" ]]; then\n      echo \"_PHP_MONGODB=${_PHP_MONGODB}\" >> ${_barCnf}\n    fi\n\n    _PERMISSIONS_FIX_TEST=$(grep _PERMISSIONS_FIX ${_barCnf} 2>&1)\n    if [[ ! \"${_PERMISSIONS_FIX_TEST}\" =~ \"_PERMISSIONS_FIX\" ]]; then\n      echo \"_PERMISSIONS_FIX=${_PERMISSIONS_FIX}\" >> ${_barCnf}\n    fi\n\n    _MODULES_FIX_TEST=$(grep _MODULES_FIX ${_barCnf} 2>&1)\n    if [[ ! \"${_MODULES_FIX_TEST}\" =~ \"_MODULES_FIX\" ]]; then\n      echo \"_MODULES_FIX=${_MODULES_FIX}\" >> ${_barCnf}\n    fi\n\n    _MODULES_SKIP_TEST=$(grep _MODULES_SKIP ${_barCnf} 2>&1)\n    if [[ ! \"${_MODULES_SKIP_TEST}\" =~ \"_MODULES_SKIP\" ]]; then\n      echo \"_MODULES_SKIP=\\\"${_MODULES_SKIP}\\\"\" >> ${_barCnf}\n    fi\n\n    _SSH_FROM_SOURCES_TEST=$(grep _SSH_FROM_SOURCES ${_barCnf} 2>&1)\n    if [[ ! \"${_SSH_FROM_SOURCES_TEST}\" =~ \"_SSH_FROM_SOURCES\" ]]; then\n      echo \"_SSH_FROM_SOURCES=${_SSH_FROM_SOURCES}\" >> ${_barCnf}\n    fi\n\n    _SSH_ARMOUR_TEST=$(grep _SSH_ARMOUR ${_barCnf} 2>&1)\n    if [[ ! \"${_SSH_ARMOUR_TEST}\" =~ \"_SSH_ARMOUR\" ]]; then\n      echo \"_SSH_ARMOUR=${_SSH_ARMOUR}\" >> ${_barCnf}\n    fi\n\n    _MAGICK_FROM_SOURCES_TEST=$(grep _MAGICK_FROM_SOURCES ${_barCnf} 2>&1)\n    if [[ ! \"${_MAGICK_FROM_SOURCES_TEST}\" =~ \"_MAGICK_FROM_SOURCES\" ]]; then\n      echo \"_MAGICK_FROM_SOURCES=${_MAGICK_FROM_SOURCES}\" >> ${_barCnf}\n    fi\n\n    _RESERVED_RAM_TEST=$(grep _RESERVED_RAM ${_barCnf} 2>&1)\n    if [[ ! 
\"${_RESERVED_RAM_TEST}\" =~ \"_RESERVED_RAM\" ]]; then\n      echo \"_RESERVED_RAM=${_RESERVED_RAM}\" >> ${_barCnf}\n    fi\n\n    _STRICT_BIN_PERMISSIONS_TEST=$(grep _STRICT_BIN_PERMISSIONS ${_barCnf} 2>&1)\n    if [[ ! \"${_STRICT_BIN_PERMISSIONS_TEST}\" =~ \"_STRICT_BIN_PERMISSIONS\" ]]; then\n      echo \"_STRICT_BIN_PERMISSIONS=${_STRICT_BIN_PERMISSIONS}\" >> ${_barCnf}\n    fi\n\n    _DB_SERIES_TEST=$(grep _DB_SERIES ${_barCnf} 2>&1)\n    if [[ ! \"${_DB_SERIES_TEST}\" =~ \"_DB_SERIES\" ]]; then\n      echo \"_DB_SERIES=${_DB_SERIES}\" >> ${_barCnf}\n    fi\n\n    _LOCAL_DEVUAN_MIRROR_TEST=$(grep _LOCAL_DEVUAN_MIRROR ${_barCnf} 2>&1)\n    if [[ ! \"${_LOCAL_DEVUAN_MIRROR_TEST}\" =~ \"_LOCAL_DEVUAN_MIRROR\" ]]; then\n      echo \"_LOCAL_DEVUAN_MIRROR=${_LOCAL_DEVUAN_MIRROR}\" >> ${_barCnf}\n    fi\n\n    _DL_MODE_TEST=$(grep _DL_MODE ${_barCnf} 2>&1)\n    if [[ ! \"${_DL_MODE_TEST}\" =~ \"_DL_MODE\" ]]; then\n      if [ -n \"${_DL_MODE}\" ]; then\n        echo \"_DL_MODE=${_DL_MODE}\" >> ${_barCnf}\n        export _DL_MODE=${_DL_MODE}\n      else\n        echo \"_DL_MODE=BATCH\" >> ${_barCnf}\n        export _DL_MODE=BATCH\n      fi\n    fi\n\n    sleep 1\n    ### Force HTTP/2 or SPDY plus PFS on supported systems\n    sed -i \"s/^_NGINX_SPDY=.*/_NGINX_SPDY=YES/g\"                 ${_barCnf}\n    wait\n    sed -i \"s/^_NGINX_FORWARD.*/_NGINX_FORWARD_SECRECY=YES/g\"    ${_barCnf}\n    wait\n    ### Force ImageMagick from packages\n    sed -i \"s/^_MAGICK_FROM_S.*/_MAGICK_FROM_SOURCES=NO/g\"       ${_barCnf}\n    wait\n    ### Force latest OpenSSH from sources on supported systems\n    sed -i \"s/^_SSH_FROM_SOURCES=.*/_SSH_FROM_SOURCES=YES/g\"     ${_barCnf}\n    wait\n    sed -i \"s/^_AUTOPILOT=.*/_AUTOPILOT=YES/g\"                   ${_barCnf}\n    wait\n    sed -i \"s/^_FORCE_GIT_.*/_FORCE_GIT_MIRROR=\\\"\\\"/g\"           ${_barCnf}\n    wait\n    sed -i \"s/^_STRICT_BIN_.*/_STRICT_BIN_PERMISSIONS=YES/g\"     ${_barCnf}\n    wait\n    sed -i 
\"s/^_STRONG_PASS.*/_STRONG_PASSWORDS=YES/g\"           ${_barCnf}\n    wait\n    sed -i \"s/^_PERMISSIONS_.*/_PERMISSIONS_FIX=YES/g\"           ${_barCnf}\n    wait\n    sed -i \"s/^_FORCE_GIT_.*/_FORCE_GIT_MIRROR=\\\"\\\"/g\" /root/.*.octopus.cnf &> /dev/null\n    wait\n    sed -i \"s/^_STRONG_PASS.*/_STRONG_PASSWORDS=YES/g\" /root/.*.octopus.cnf &> /dev/null\n    wait\n    sed -i \"s/^_VALKEY_LISTEN_.*/_VALKEY_LISTEN_MODE=SOCKET/g\"   ${_barCnf}\n    wait\n    sed -i \"s/^_REDIS_LISTEN_.*/_REDIS_LISTEN_MODE=SOCKET/g\"     ${_barCnf}\n    wait\n    if [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n      if [ -x \"/opt/php74/bin/php\" ]; then\n        _fCli=7.4\n      elif [ -x \"/opt/php73/bin/php\" ]; then\n        _fCli=7.3\n      elif [ -x \"/opt/php72/bin/php\" ]; then\n        _fCli=7.2\n      elif [ -x \"/opt/php71/bin/php\" ]; then\n        _fCli=7.1\n      elif [ -x \"/opt/php70/bin/php\" ]; then\n        _fCli=7.0\n      elif [ -x \"/opt/php56/bin/php\" ]; then\n        _fCli=5.6\n      fi\n    else\n      _fCli=8.4\n    fi\n    if [ -z \"${_PHP_SINGLE_INSTALL}\" ]; then\n      sed -i \"s/^_PHP_CLI_VERSION=5.6/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=5.6/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.0/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.0/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.1/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.1/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.2/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.2/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.3/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      
sed -i \"s/^_PHP_FPM_VERSION=7.3/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=7.4/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=7.4/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_CLI_VERSION=8.0/_PHP_CLI_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n      sed -i \"s/^_PHP_FPM_VERSION=8.0/_PHP_FPM_VERSION=${_fCli}/g\"    ${_barCnf}\n      wait\n    fi\n\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      sed -i 's~^_MY_EMAIL=.*~_MY_EMAIL=\"notify@omega8.cc\"~'          ${_barCnf}\n      wait\n    fi\n\n    sed -i \"/^$/d\" ${_barCnf}\n    wait\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      echo \" \"\n      while read line; do\n        echo \"$line\"\n      done < ${_barCnf}\n      echo \" \"\n    fi\n    if [ -e \"${_barCnf}\" ]; then\n      source ${_barCnf}\n    fi\n\n    _ffList=\"/var/log/boa/devaun-fast-mirrors-list.txt\"\n    _BROKEN_FF=$(grep \"devuan/merged\" ${_ffList} 2>&1)\n    if [ ! -e \"${_ffList}\" ] || [[ ! \"${_BROKEN_FF}\" =~ \"devuan/merged\" ]]; then\n      _ffDevuan=\"$(which ffdevuan)\"\n      if [ -x \"${_ffDevuan}\" ]; then\n        bash ${_ffDevuan} &> /dev/null\n        wait\n      fi\n    fi\n\n    if [ -z \"${_LOCAL_DEVUAN_MIRROR}\" ] \\\n      || [ \"${_LOCAL_DEVUAN_MIRROR}\" = \"deb.devuan.org\" ] \\\n      || [ \"${_LOCAL_DEVUAN_MIRROR}\" = \"https://mirrors.dotsrc.org/devuan\" ] \\\n      || [[ ! 
\"${_LOCAL_DEVUAN_MIRROR}\" =~ \"http\" ]]; then\n      _DVN_MRR=\"$(_find_fast_devuan_mirror)\"\n      if [ -n \"${_DVN_MRR}\" ]; then\n        _devM=\"${_DVN_MRR}\"\n      else\n        _devM=\"http://deb.devuan.org/merged\"\n      fi\n      _LOCAL_DEVUAN_MIRROR=\"${_devM}\"\n      sed -i \"s|^_LOCAL_DEV.*|_LOCAL_DEVUAN_MIRROR=${_devM}|\"    ${_barCnf}\n      wait\n    fi\n\n    if [ -z \"${_LOCAL_DEBIAN_MIRROR}\" ]; then\n      _debM=\"deb.debian.org\"\n      _LOCAL_DEBIAN_MIRROR=\"${_debM}\"\n      sed -i \"s|^_LOCAL_DEB.*|_LOCAL_DEBIAN_MIRROR=${_debM}|\"    ${_barCnf}\n      wait\n    fi\n\n    if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n      if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n        _DBS_VRN=\"${_PERCONA_5_7_VRN}\"\n      elif [ \"${_DB_SERIES}\" = \"8.0\" ]; then\n        _DBS_VRN=\"${_PERCONA_8_0_VRN}\"\n      elif [ \"${_DB_SERIES}\" = \"8.4\" ]; then\n        _DBS_VRN=\"${_PERCONA_8_4_VRN}\"\n      else\n        _DB_SERIES=5.7\n        _DBS_VRN=\"${_PERCONA_5_7_VRN}\"\n      fi\n    else\n      _DB_SERVER=Percona\n      _DB_SERIES=5.7\n      _DBS_VRN=\"${_PERCONA_5_7_VRN}\"\n      if [ -e \"${_barCnf}\" ]; then\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=5.7/g\" ${_barCnf}\n        wait\n        sed -i \"s/^_DB_SERVER=.*/_DB_SERVER=Percona/g\" ${_barCnf}\n        wait\n      fi\n    fi\n\n    if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      _DB_SERVER=Percona\n      _DB_SERIES=8.4\n      _DBS_VRN=\"${_PERCONA_8_4_VRN}\"\n      if [ -e \"${_barCnf}\" ]; then\n        sed -i \"s/^_DB_SERIES=.*/_DB_SERIES=8.4/g\" ${_barCnf}\n        wait\n        sed -i \"s/^_DB_SERVER=.*/_DB_SERVER=Percona/g\" ${_barCnf}\n        wait\n      fi\n    fi\n\n    if [[ \"${_MY_EMAIL}\" =~ \"omega8.cc\" ]]; then\n      _if_hosted_sys\n      if [ \"${_hostedSys}\" != \"YES\" ]; then\n        _msg \"EXIT: Invalid email address defined in the _MY_EMAIL variable\"\n        _msg \"EXIT: Bye (2)\"\n        _clean_pid_exit _barracuda_cnf_b\n      fi\n    fi\n\n    ### Make sure 
that _PHP_SINGLE_INSTALL takes precedence\n    if [ ! -z \"${_PHP_SINGLE_INSTALL}\" ]; then\n      if [ \"${_PHP_SINGLE_INSTALL}\" = \"8.5\" ] \\\n        || [ \"${_PHP_SINGLE_INSTALL}\" = \"8.4\" ] \\\n        || [ \"${_PHP_SINGLE_INSTALL}\" = \"8.3\" ]; then\n        _PHP_MULTI_INSTALL=${_PHP_SINGLE_INSTALL}\n        _PHP_CLI_VERSION=${_PHP_SINGLE_INSTALL}\n        _PHP_FPM_VERSION=${_PHP_SINGLE_INSTALL}\n        sed -i \"s/^_PHP_MULTI_INSTALL=.*/_PHP_MULTI_INSTALL=${_PHP_SINGLE_INSTALL}/g\" ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_CLI_VERSION=.*/_PHP_CLI_VERSION=${_PHP_SINGLE_INSTALL}/g\"     ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_FPM_VERSION=.*/_PHP_FPM_VERSION=${_PHP_SINGLE_INSTALL}/g\"     ${_barCnf}\n        wait\n        sed -i \"s/^_PHP_CLI_VERSION=.*/_PHP_CLI_VERSION=${_PHP_SINGLE_INSTALL}/g\" /root/.*.octopus.cnf &> /dev/null\n        wait\n        sed -i \"s/^_PHP_FPM_VERSION=.*/_PHP_FPM_VERSION=${_PHP_SINGLE_INSTALL}/g\" /root/.*.octopus.cnf &> /dev/null\n        wait\n        if [ -d \"/data/u\" ] && [ -e \"/data/conf/global.inc\" ]; then\n          for _Ctrl in `find /data/disk/*/log -maxdepth 0 -mindepth 0 | sort`; do\n            echo ${_PHP_SINGLE_INSTALL} > ${_Ctrl}/fpm.txt\n            echo ${_PHP_SINGLE_INSTALL} > ${_Ctrl}/cli.txt\n            ### _msg \"INFO: Forced PHP ${_PHP_SINGLE_INSTALL} in ${_Ctrl}\"\n          done\n          for _Ctrl in `find /data/disk/*/static/control \\\n            -maxdepth 0 -mindepth 0 | sort`; do\n            echo ${_PHP_SINGLE_INSTALL} > ${_Ctrl}/fpm.info\n            echo ${_PHP_SINGLE_INSTALL} > ${_Ctrl}/cli.info\n            ### _msg \"INFO: Forced PHP ${_PHP_SINGLE_INSTALL} in ${_Ctrl}\"\n          done\n          for _Ctrl in `find /data/disk/*/.drush \\\n            -maxdepth 0 -mindepth 0 | sort`; do\n            rm -f ${_Ctrl}/.ctrl.php*\n            ### _msg \"INFO: Forced PHP ${_PHP_SINGLE_INSTALL} in ${_Ctrl}\"\n          done\n        fi\n      fi\n    fi\n\n    if [ \"${_STATUS}\" = \"INIT\" ]; 
then\n      if _prompt_yes_no \"Do you want to proceed with the install?\" ; then\n        true\n      else\n        echo \"Installation aborted by you\"\n        _clean_pid_exit _barracuda_cnf_c\n      fi\n    else\n      echo \" \"\n      if _prompt_yes_no \"Do you want to proceed with the upgrade?\" ; then\n        true\n      else\n        echo \"Upgrade aborted by you\"\n        _clean_pid_exit _barracuda_cnf_d\n      fi\n    fi\n  fi\n}\n\n#\n# Install MyTop.\n_mytop_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _mytop_install\"\n  fi\n\n  if [ -e \"/usr/bin/mytop\" ]; then\n    if dpkg-query -W -f='${Status}' mytop 2>/dev/null | grep -q \"install ok installed\"; then\n      _mrun \"apt-get remove mytop -y --purge --auto-remove -qq\"\n    fi\n    rm -f /usr/bin/mytop\n  fi\n\n  if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n    if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n      for _PKG in libperconaserverclient20 libperconaserverclient20-dev; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n          _myTopReinstall=YES\n        fi\n      done\n    elif [ \"${_DB_SERIES}\" = \"8.0\" ]; then\n      for _PKG in libperconaserverclient21 libperconaserverclient21-dev; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n          _myTopReinstall=YES\n        fi\n      done\n    else\n      for _PKG in libperconaserverclient24 libperconaserverclient24-dev percona-telemetry-agent; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n          _myTopReinstall=YES\n        fi\n      done\n    fi\n  fi\n  ldconfig 2> /dev/null\n\n  _myTopRebuild=NO\n  if [ \"${_myTopReinstall}\" = \"YES\" ]; then\n    _myTopRebuild=YES\n  fi\n\n  if [ -e \"/usr/bin/mysql\" ]; then\n    if [ ! 
-e \"/usr/local/bin/mytop\" ] || [ \"${_myTopRebuild}\" = \"YES\" ]; then\n      _msg \"INFO: Building MyTop from sources...\"\n      _check_mysql_version\n      _apt_clean_update\n      for _PKG in libperl-dev cpanminus; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n        fi\n      done\n      if [ \"${_DB_V}\" != \"5.7\" ] || [ \"${_SQL_UPGRADE}\" = \"YES\" ]; then\n        _mrun \"cpanm DBD::mysql --force\"\n        _mrun \"cpanm DBI --force\"\n        _mrun \"cpanm Term::ReadKey --force\"\n      else\n        _mrun \"cpanm DBD::mysql@4.050 --force\"\n        _mrun \"cpanm Term::ReadKey --force\"\n      fi\n      cd /var/opt\n      rm -rf git*\n      _get_dev_src \"mytop-1.6.tar.gz\"\n      cd /var/opt/mytop-1.6\n      _mrun \"perl Makefile.PL\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n    fi\n  fi\n}\n\n#\n# Fire-and-forget launcher, cron-safe and interactive-safe\n_spawn_detached() {\n  _cmd=\"$1\"\n  if command -v nohup >/dev/null 2>&1; then\n    nohup bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  elif command -v setsid >/dev/null 2>&1; then\n    setsid bash -c \"${_cmd}\" >/dev/null 2>&1 &\n  else\n    ( bash -c \"${_cmd}\" >/dev/null 2>&1 ) &\n  fi\n  # If interactive shell, drop it from the job table to mimic cron behavior\n  if [[ \"$-\" == *i* ]]; then disown; fi\n}\n\n#\n# Run apt-get upgrade and dist-upgrade.\n_run_apt_get_dist_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _run_apt_get_dist_upgrade\"\n  fi\n  _msg \"INFO: Running apt-get upgrade...\"\n  _apt_clean_update_no_releaseinfo_change\n  _mrun \"apt-get upgrade ${_dstUpArg}\"\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Running apt-get dist-upgrade...\"\n  fi\n  _mrun \"apt-get dist-upgrade ${_dstUpArg}\"\n  _mrun \"apt-get dist-upgrade ${_dstUpArg}\"\n  [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  _mrun 
\"apt-get dist-upgrade ${_aptYesUnth}\"\n  _mrun \"apt-get dist-upgrade ${_aptYesUnth}\"\n  [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n  _mrun \"apt-get install lsb-release ${_dstUpArg}\"\n  ### Update rsyslog configuration early\n  _rsyslog_config_update\n  ### Reload key services if needed early\n  if [ -e \"/etc/init.d/valkey-server\" ]; then\n    _mrun \"service valkey-server reload\"\n  elif [ -e \"/etc/init.d/redis-server\" ]; then\n    _mrun \"service redis-server reload\"\n  fi\n  _mrun \"service nginx reload\"\n  _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n  for e in ${_PHP_V}; do\n    if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n      _mrun \"service php${e}-fpm reload\"\n    fi\n  done\n  nohup /var/xdrago/minute.sh > /dev/null 2>&1 &\n  if [ -e \"/var/xdrago/proc_num_ctrl.pl\" ]; then\n    _spawn_detached 'perl /var/xdrago/proc_num_ctrl.pl'\n  fi\n}\n\n#\n# Run aptitude full-upgrade.\n_run_aptitude_full_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _run_aptitude_full_upgrade\"\n  fi\n  _msg \"INFO: Running aptitude full-upgrade...\"\n  for _PKG in libperconaserverclient22 libperconaserverclient22-dev; do\n    if _pkg_installed \"${_PKG}\"; then\n      _mrun \"apt-get remove ${_PKG} -y --purge -qq\"\n      [ -e \"/usr/local/bin/mytop\" ] && rm -f /usr/local/bin/mytop\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"PCKG: ${_PKG} removed as requested.\"\n      fi\n    fi\n  done\n  ###\n  ### Make sure that percona-release package is locked in apt early.\n  if [ ! -e \"/etc/apt/preferences.d/percona-release\" ]; then\n    echo -e 'Package: percona-release\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/percona-release\n    _apt_clean_update\n  fi\n  ###\n  _PERC_GET_DPKG=$(dpkg --get-selections | grep percona-release | grep 'hold$' 2>&1)\n  if [[ ! 
\"${_PERC_GET_DPKG}\" =~ \"hold\" ]]; then\n    aptitude hold percona-release &> /dev/null\n    echo \"percona-release hold\" | dpkg --set-selections &> /dev/null\n    _apt_clean_update\n  fi\n  # Check for percona-release and remove conditionally\n  if dpkg-query -W -f='${Status}' percona-release 2>/dev/null | grep -q \"install ok installed\"; then\n    _mrun \"apt-get remove percona-release -y -qq\"\n  fi\n  _apt_clean_update\n  if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    _DPKG_CNF=\"confold\"\n  else\n    _DPKG_CNF=\"confnew\"\n  fi\n  _UPGRADE_HELD=\"${_INSTALL_NRML} --only-upgrade --allow-change-held-packages\"\n  _IF_KEY_HOLD=\"$(apt-mark showhold | xargs -r apt list --upgradable -a 2>&1)\"\n  if [[ \"${_IF_KEY_HOLD}\" =~ \"openssl/\" ]] \\\n    || [[ \"${_IF_KEY_HOLD}\" =~ \"curl/\" ]]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Upgrade normally held openssl and curl packages\"\n    fi\n    _mrun \"${_UPGRADE_HELD} openssl libcurl4 curl\"\n  fi\n  if [[ \"${_IF_KEY_HOLD}\" =~ \"openssh\" ]]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Upgrade normally held openssh packages\"\n    fi\n    _mrun \"${_UPGRADE_HELD} ssh openssh-client openssh-server openssh-sftp-server\"\n  fi\n  _mrun \"apt-get --fix-broken install -y\"\n  _mrun \"dpkg --configure --force-all -a\"\n  _mrun \"aptitude full-upgrade -f -y -q \\\n    --allow-untrusted \\\n    -o Dpkg::Options::=--force-confmiss \\\n    -o Dpkg::Options::=--force-confdef \\\n    -o Dpkg::Options::=--force-${_DPKG_CNF}\"\n}\n\n#\n# Install latest Git.\n_do_install_latest_git() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _do_install_latest_git\"\n  fi\n  if [ \"${_GIT_INSTALL}\" = \"YES\" ]; then\n    _msg \"INFO: Building Git ${_GIT_VRN} from sources...\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    if [ ! 
-x \"/usr/local/bin/git\" ]; then\n      _apt_clean_update\n      _mrun \"apt-get install libcurl4 -y ${_aptAllow}\"\n      _mrun \"apt-get install libcurl4-openssl-dev -y ${_aptAllow}\"\n      _mrun \"apt-get install libcurl4-gnutls-dev -y ${_aptAllow}\"\n      if dpkg-query -W -f='${Status}' git-core 2>/dev/null | grep -q \"install ok installed\"; then\n        _mrun \"apt-get remove git-core -y --purge --auto-remove\"\n      fi\n      if dpkg-query -W -f='${Status}' git 2>/dev/null | grep -q \"install ok installed\"; then\n        _mrun \"apt-get remove git -y --purge --auto-remove\"\n      fi\n    fi\n    mkdir -p /var/opt\n    rm -rf /var/opt/git*\n    cd /var/opt\n    _get_dev_src \"git-${_GIT_VRN}.tar.gz\"\n    cd /var/opt/git-${_GIT_VRN}\n    _mrun \"make clean\"\n    _mrun \"make configure\"\n    _mrun \"bash ./configure --without-tcltk\"\n    if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      # Neutralize any injected value, then set a valid compat that matches 3.4.x\n      _mrun \"make CFLAGS+=' -UOPENSSL_API_COMPAT -DOPENSSL_API_COMPAT=0x30400000L' daemon.o V=1\"\n      _mrun \"make CFLAGS+=' -UOPENSSL_API_COMPAT -DOPENSSL_API_COMPAT=0x30400000L' all -j $(nproc)\"\n      _mrun \"make CFLAGS+=' -UOPENSSL_API_COMPAT -DOPENSSL_API_COMPAT=0x30400000L' install\"\n    else\n      _mrun \"make all -j $(nproc)\"\n      _mrun \"make install\"\n    fi\n    ldconfig 2> /dev/null\n    if [ -x \"/usr/local/bin/git\" ]; then\n      if [ -e \"/usr/bin/git\" ] && [ ! 
-L \"/usr/bin/git\" ]; then\n        mv -f /usr/bin/git /usr/bin/git-old\n      fi\n      ln -sfn /usr/local/bin/git /usr/bin/git\n    fi\n    cd /var/opt\n    touch ${_pthLog}/git-${_GIT_VRN}-${_xSrl}-${_X_VERSION}-${_NOW}.log\n    echo \"git hold\" | dpkg --set-selections &> /dev/null\n    echo \"git-core hold\" | dpkg --set-selections &> /dev/null\n    echo \"git-man hold\" | dpkg --set-selections &> /dev/null\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Git ${_GIT_V} already installed from sources ${_GIT_VRN}, OK\"\n    fi\n  fi\n}\n\n#\n# Check if latest Git should be installed.\n_if_install_git_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_git_src\"\n  fi\n  _GIT_INSTALL=NO\n  _GIT_V=$(git --version 2>&1 \\\n    | cut -d\" \" -f3 \\\n    | awk '{ print $1}' 2>&1)\n  if [ ! -z \"${_GIT_V}\" ]; then\n    if [ \"${_GIT_V}\" != \"${_GIT_VRN}\" ]; then\n      _GIT_INSTALL=YES\n    fi\n  fi\n  _DB_SERVER=Percona\n  if [ \"$(boa info | grep -c ${_DB_SERVER})\" -lt 3 ] || [ ! 
-e \"/usr/sbin/csf\" ]; then\n    _GIT_INSTALL=NO\n  fi\n  if [ \"${_GIT_INSTALL}\" = \"YES\" ] \\\n    || [ \"${_GIT_FORCE_REINSTALL}\" = \"YES\" ]; then\n    _do_install_latest_git\n  fi\n}\n\n#\n# Check apt-get updates.\n_check_apt_updates() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _check_apt_updates\"\n  fi\n  for _Update in `/usr/bin/apt-get -q -y \\\n    -s dist-upgrade | grep ^Inst | cut -d\\  -f2 | sort`; do\n    case \"${_Update}\" in\n      *libcurl*)    _UP_PHP=YES ;;\n      *libmysql*)   _UP_SQL=YES ;;\n      *libssl*)     _UP_PHP=YES ;;\n      *linux-*)     _UP_LNX=YES ;;\n      *newrelic*)   _UP_NRC=YES ;;\n      *openjdk*)    _UP_JDK=YES ;;\n      *openssl*)    _UP_PHP=YES ;;\n      *proxysql*)   _UP_PXC=YES ;;\n      *)  ;;\n    esac\n  done\n}\n\n#\n# Install modern ICU from sources.\n_install_icu_modern() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_icu_modern\"\n  fi\n  ###--------------------###\n  if [ -e \"/usr/lib/x86_64-linux-gnu/.old_icu\" ]; then\n    rm -rf /var/backups/.old_icu\n    mv -f /usr/lib/x86_64-linux-gnu/.old_icu /var/backups/.old_icu\n  fi\n  if [ -e \"/usr/lib/x86_64-linux-gnu/.legacy_icu\" ]; then\n    rm -rf /var/backups/.legacy_icu\n    mv -f /usr/lib/x86_64-linux-gnu/.legacy_icu /var/backups/.legacy_icu\n  fi\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ -e \"${_pthLog}\" ]; then\n    _pthIcuLog=\"${_pthLog}/icu-update-${_ICU_MODERN_VRN}-for-${_OS_CODE}.log\"\n    _pthIcuLcy=\"${_pthLog}/icu-update-${_ICU_MODERN_VRN}.log\"\n  else\n    _pthIcuLog=\"/root/.icu-update-${_ICU_MODERN_VRN}-for-${_OS_CODE}.log\"\n    _pthIcuLcy=\"/root/.icu-update-${_ICU_MODERN_VRN}.log\"\n  fi\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    && [ -e \"${_pthIcuLcy}\" ] \\\n    && [ ! -e \"${_pthIcuLog}\" ]; then\n    touch ${_pthIcuLog}\n  fi\n  if [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    && [ -e \"${_pthIcuLcy}\" ] \\\n    && [ ! 
-e \"${_pthIcuLog}\" ]; then\n    touch ${_pthIcuLog}\n  fi\n  if [ -e \"${_pthIcuLog}\" ]; then\n    chattr -i ${_pthIcuLog}\n  fi\n  if [ -e \"${_pthIcuLcy}\" ]; then\n    chattr -i ${_pthIcuLcy}\n  fi\n  if [ ! -e \"/usr/local/lib/icu/current\" ] \\\n    || [ ! -e \"${_pthIcuLog}\" ]; then\n    _msg \"INFO: Installing ICU libs version ${_ICU_MODERN_VRN}...\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    if [ ! -e \"/var/opt/icu-release-${_ICU_MODERN_VRN}/icu4c/source\" ]; then\n      cd /var/opt\n      rm -rf icu*\n      _get_dev_src \"icu-release-${_ICU_MODERN_VRN}.tar.gz\"\n    fi\n    cd /var/opt/icu-release-${_ICU_MODERN_VRN}/icu4c/source\n    _mrun \"bash ./configure --prefix=/usr/local\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ -e \"/usr/local/lib/icu\" ]; then\n      touch ${_pthIcuLog}\n    fi\n    _X86_64_TEST=$(uname -m)\n    if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n      if [ -e \"/usr/local/lib/icu\" ]; then\n        if [ -d \"/usr/lib/x86_64-linux-gnu/icu\" ]; then\n          rm -rf /var/backups/pre_icu\n          mv -f /usr/lib/x86_64-linux-gnu/icu /var/backups/pre_icu\n        fi\n        if [ ! 
-L \"/usr/lib/x86_64-linux-gnu/icu\" ]; then\n          ln -sfn /usr/local/lib/icu /usr/lib/x86_64-linux-gnu/icu\n        fi\n      fi\n    fi\n  fi\n}\n\n#\n# Install ICU from sources for Jessie.\n_install_icu_jessie() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_icu_jessie\"\n  fi\n  ###--------------------###\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  if [ -e \"${_pthLog}\" ]; then\n    _pthIcuLog=\"${_pthLog}/icu-update-${_ICU_LEGACY_VRN}-for-${_OS_CODE}.log\"\n    _pthIcuLcy=\"${_pthLog}/icu-update-${_ICU_LEGACY_VRN}.log\"\n  else\n    _pthIcuLog=\"/root/.icu-update-${_ICU_LEGACY_VRN}-for-${_OS_CODE}.log\"\n    _pthIcuLcy=\"/root/.icu-update-${_ICU_LEGACY_VRN}.log\"\n  fi\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    && [ -e \"${_pthIcuLcy}\" ] \\\n    && [ ! -e \"${_pthIcuLog}\" ]; then\n    touch ${_pthIcuLog}\n  fi\n  if [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    && [ -e \"${_pthIcuLcy}\" ] \\\n    && [ ! -e \"${_pthIcuLog}\" ]; then\n    touch ${_pthIcuLog}\n  fi\n  if [ -e \"${_pthIcuLog}\" ]; then\n    chattr -i ${_pthIcuLog}\n  fi\n  if [ -e \"${_pthIcuLcy}\" ]; then\n    chattr -i ${_pthIcuLcy}\n  fi\n  if [ ! -e \"/usr/local/lib/icu/current\" ] \\\n    || [ ! -e \"${_pthIcuLog}\" ]; then\n    _msg \"INFO: Installing ICU libs version ${_ICU_LEGACY_VRN}...\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    if [ ! 
-e \"/var/opt/icu/source\" ]; then\n      cd /var/opt\n      rm -rf icu*\n      _get_dev_src \"icu4c-${_ICU_LEGACY_VRN}-src.tgz\"\n    fi\n    cd /var/opt/icu/source/\n    _mrun \"bash ./configure --prefix=/usr/local\"\n    _mrun \"make -j $(nproc) --quiet\"\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ -e \"/usr/local/lib/icu\" ]; then\n      touch ${_pthIcuLog}\n    fi\n    _X86_64_TEST=$(uname -m)\n    if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n      if [ -e \"/usr/local/lib/icu\" ]; then\n        if [ -d \"/usr/lib/x86_64-linux-gnu/icu\" ]; then\n          rm -rf /var/backups/pre_icu\n          mv -f /usr/lib/x86_64-linux-gnu/icu /var/backups/pre_icu\n        fi\n        if [ ! -L \"/usr/lib/x86_64-linux-gnu/icu\" ]; then\n          ln -sfn /usr/local/lib/icu /usr/lib/x86_64-linux-gnu/icu\n        fi\n      fi\n    fi\n  fi\n}\n\n#\n# Install PHP deps.\n_php_install_deps() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_install_deps\"\n  fi\n  ###--------------------###\n  _apt_clean_update\n\n  for _PKG in cmake lbzip2 zstd libavif-dev libgd-dev libgd3 libjpeg-dev libjpeg62 libkrb5-dev; do\n    if ! _pkg_installed \"${_PKG}\"; then\n      _mrun \"${_INSTAPP} ${_PKG}\"\n    fi\n  done\n\n  if [ ! -e \"/usr/local/include/curl/curl.h\" ] \\\n    && [ ! -e \"/usr/local/include/curl/easy.h\" ]; then\n    if ! _pkg_installed \"libcurl4-openssl-dev\"; then\n      _mrun \"${_INSTAPP} libcurl4-openssl-dev\"\n    fi\n  fi\n\n  for _PKG in liblzf-dev libmagickwand-dev libonig-dev libsodium-dev libwebp-dev libxpm-dev libzip-dev; do\n    if ! 
_pkg_installed \"${_PKG}\"; then\n      _mrun \"${_INSTAPP} ${_PKG}\"\n    fi\n  done\n\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"trixie\" ] \\\n    || [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    if [ ! -e \"/root/.rebuild_src_on_auto_before_reboot.info\" ]; then\n      for _PKG in libldap-common libldap-dev libldap2-dev; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n        fi\n      done\n    fi\n  else\n    for _PKG in libldap-common libldap2-dev; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n  fi\n  if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _install_icu_jessie\n  else\n    _install_icu_modern\n    # _mrun \"${_INSTAPP} libicu-dev icu-devtools\"\n  fi\n}\n\n#\n# Fix wkhtmltopdf and wkhtmltoimage symlinks.\n_fix_wkhtml() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_wkhtml\"\n  fi\n  if [ -x \"/usr/local/bin/wkhtmltopdf\" ] \\\n    && [ -L \"/usr/bin/wkhtmltopdf\" ]; then\n    rm -f /usr/bin/wkhtmltopdf\n    cp -af /usr/local/bin/wkhtmltopdf /usr/bin/wkhtmltopdf\n    chgrp root /usr/bin/wkhtmltopdf &> /dev/null\n    chmod 755 /usr/bin/wkhtmltopdf &> /dev/null\n  fi\n  if [ -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n    && [ -L \"/usr/bin/wkhtmltoimage\" ]; then\n    rm -f /usr/bin/wkhtmltoimage\n    cp -af /usr/local/bin/wkhtmltoimage /usr/bin/wkhtmltoimage\n    chgrp root /usr/bin/wkhtmltoimage &> /dev/null\n    chmod 755 /usr/bin/wkhtmltoimage &> /dev/null\n  fi\n  if [ -x \"/usr/local/bin/wkhtmltopdf\" ] \\\n    && [ ! -e \"/usr/bin/wkhtmltopdf\" ]; then\n    cp -af /usr/local/bin/wkhtmltopdf /usr/bin/wkhtmltopdf\n    chgrp root /usr/bin/wkhtmltopdf &> /dev/null\n    chmod 755 /usr/bin/wkhtmltopdf &> /dev/null\n  fi\n  if [ -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n    && [ ! 
-e \"/usr/bin/wkhtmltoimage\" ]; then\n    cp -af /usr/local/bin/wkhtmltoimage /usr/bin/wkhtmltoimage\n    chgrp root /usr/bin/wkhtmltoimage &> /dev/null\n    chmod 755 /usr/bin/wkhtmltoimage &> /dev/null\n  fi\n  if [ ! -x \"/usr/local/bin/wkhtmltopdf\" ] \\\n    && [ -x \"/usr/bin/wkhtmltopdf\" ]; then\n    rm -f /usr/local/bin/wkhtmltopdf\n    cp -af /usr/bin/wkhtmltopdf /usr/local/bin/wkhtmltopdf\n    chgrp root /usr/local/bin/wkhtmltopdf &> /dev/null\n    chmod 755 /usr/local/bin/wkhtmltopdf &> /dev/null\n  fi\n  if [ ! -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n    && [ -x \"/usr/bin/wkhtmltoimage\" ]; then\n    rm -f /usr/local/bin/wkhtmltoimage\n    cp -af /usr/bin/wkhtmltoimage /usr/local/bin/wkhtmltoimage\n    chgrp root /usr/local/bin/wkhtmltoimage &> /dev/null\n    chmod 755 /usr/local/bin/wkhtmltoimage &> /dev/null\n  fi\n}\n\n#\n# Install wkhtmltopdf and wkhtmltoimage.\n_if_install_wkhtmltox() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_wkhtmltox\"\n  fi\n  ###--------------------###\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _isWkhtmltox=\"$(which wkhtmltopdf)\"\n  if [ -z \"${_isWkhtmltox}\" ] \\\n    || [ ! -e \"${_isWkhtmltox}\" ] \\\n    || [ ! -x \"/usr/local/bin/wkhtmltopdf\" ] \\\n    || [ ! -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n    || [ ! -f \"/usr/bin/wkhtmltopdf\" ] \\\n    || [ ! -f \"/usr/bin/wkhtmltoimage\" ] \\\n    || [ -L \"/usr/bin/wkhtmltopdf\" ] \\\n    || [ -L \"/usr/bin/wkhtmltoimage\" ] \\\n    || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installing wkhtmltopdf from ${_OS_CODE} packages...\"\n    fi\n    _apt_clean_update\n    for _PKG in gdebi-core xfonts-75dpi xfonts-base fonts-thai-tlwg wkhtmltopdf; do\n      if ! 
_pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n  fi\n  _WOX_ITD=$(wkhtmltopdf --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}')\n  if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _WOX_ITD=$(wkhtmltopdf --version 2>&1)\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Installed wkhtmltopdf version ${_WOX_ITD}\"\n  fi\n\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n    || [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n    || [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _WKHTMLTOX_VRN=\"0.12.6.1-3\"\n    _WKHTMLTOPDF_VRN=\"0.12.6.1\"\n  elif [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _WKHTMLTOX_VRN=\"0.12.5-1\"\n    _WKHTMLTOPDF_VRN=\"0.12.5\"\n  else\n    _WKHTMLTOX_VRN=\"0.12.6-1\"\n    _WKHTMLTOPDF_VRN=\"0.12.6\"\n  fi\n\n  if [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _WKHTML_SYS_RV=\"buster\"\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _WKHTML_SYS_RV=\"bullseye\"\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n    _WKHTML_SYS_RV=\"bookworm\"\n  elif [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _WKHTML_SYS_RV=\"bookworm\"\n  else\n    _WKHTML_SYS_RV=\"${_OS_CODE}\"\n  fi\n\n  if [ \"${_WOX_ITD}\" != \"${_WKHTMLTOPDF_VRN}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installing wkhtmltox tools v.${_WKHTMLTOX_VRN}.${_WKHTML_SYS_RV}...\"\n    fi\n    mkdir -p /var/opt/\n    cd /var/opt/\n    rm -rf /var/opt/wkhtmltox*\n    for _PKG in xfonts-75dpi xfonts-base; do\n      if ! 
_pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    _get_dev_src \"wkhtmltox_${_WKHTMLTOX_VRN}.${_WKHTML_SYS_RV}_amd64.deb.gz\" 2> /dev/null\n    _mrun \"gdebi -n /var/opt/wkhtmltox_${_WKHTMLTOX_VRN}.${_WKHTML_SYS_RV}_amd64.deb\"\n    _mrun \"apt-get install -f -y\"\n    if [ -x \"/usr/local/bin/wkhtmltoimage\" ] \\\n      && [ -x \"/usr/local/bin/wkhtmltopdf\" ]; then\n      touch ${_pthLog}/wkhtmltox-${_WKHTMLTOX_VRN}-fix.log\n      _msg \"INFO: The wkhtmltox tools v.${_WKHTMLTOX_VRN} installation complete\"\n    fi\n    cd /var/opt\n    _fix_wkhtml\n  fi\n}\n\n#\n# Install chromium.\n_if_install_chromium() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_chromium\"\n  fi\n  ###--------------------###\n  _isChromium=\"$(which chromium)\"\n  if [ -z \"${_isChromium}\" ] \\\n    || [ ! -e \"${_isChromium}\" ] \\\n    || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installing chromium from ${_OS_CODE} packages...\"\n    fi\n    _apt_clean_update\n    for _PKG in chromium fontconfig fonts-dejavu fonts-liberation fonts-noto-core fonts-noto-extra xfonts-75dpi xfonts-base; do\n      if ! 
_pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n  fi\n  _CHR_ITD=$(chromium --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}')\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Installed chromium ${_CHR_ITD}\"\n  fi\n}\n\n#\n# Clean Drush 11.\n_clean_drush_eleven() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _clean_drush_eleven\"\n  fi\n  rm -f /usr/bin/drush11*\n  ln -sfn /opt/tools/drush/11/drush/vendor/bin/drush /usr/bin/drush11\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Drush ${_DRUSH_ELEVEN_VRN} setup complete\"\n  fi\n}\n\n#\n# Clean Drush 10.\n_clean_drush_ten() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _clean_drush_ten\"\n  fi\n  rm -f /usr/bin/drush10*\n  rm -f /usr/local/bin/dcg\n  rm -f /usr/local/bin/drush\n  rm -f /usr/local/bin/php-parse\n  rm -f /usr/local/bin/psysh\n  rm -f /usr/local/bin/release\n  rm -f /usr/local/bin/robo\n  rm -f /usr/local/bin/var-dump-server\n  ln -sfn /opt/tools/drush/10/drush/vendor/bin/drush /usr/bin/drush10\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Drush ${_DRUSH_TEN_VRN} setup complete\"\n  fi\n}\n\n#\n# Set Drush permissions.\n_set_drush_perm() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _set_drush_perm\"\n  fi\n  find /opt/tools -type d -exec chmod 0755 {} \\; &> /dev/null\n  find /opt/tools -type f -exec chmod 0644 {} \\; &> /dev/null\n  chmod 755 /opt/tools/drush/*/drush/drush\n  chmod 755 /opt/tools/drush/*/drush/drush.complete.sh\n  chmod 755 /opt/tools/drush/*/drush/drush.launcher\n  chmod 755 /opt/tools/drush/*/drush/drush.php\n  chmod 755 /opt/tools/drush/*/drush/unish.sh\n  chmod 755 /opt/tools/drush/*/drush/examples/drush.wrapper\n  chmod 755 /opt/tools/drush/*/drush/examples/git-bisect.example.sh\n  chmod 755 /opt/tools/drush/*/drush/examples/helloworld.script\n  if [ -e 
\"/opt/tools/drush/10/drush/vendor/drush/drush/drush\" ]; then\n    chmod 755 /opt/tools/drush/10/drush/vendor/drush/drush/drush\n    chmod 755 /opt/tools/drush/10/drush/vendor/drush/drush/drush.php\n  fi\n  if [ -e \"/opt/tools/drush/10/drush/vendor/bin/drush\" ]; then\n    chmod 755 /opt/tools/drush/10/drush/vendor/bin/*\n  fi\n  if [ -e \"/opt/tools/drush/11/drush/vendor/drush/drush/drush\" ]; then\n    chmod 755 /opt/tools/drush/11/drush/vendor/drush/drush/drush\n    chmod 755 /opt/tools/drush/11/drush/vendor/drush/drush/drush.php\n  fi\n  if [ -e \"/opt/tools/drush/11/drush/vendor/bin/drush\" ]; then\n    chmod 755 /opt/tools/drush/11/drush/vendor/bin/*\n  fi\n}\n\n#\n# Install or update Drush 11.\n_get_drush_eleven() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _get_drush_eleven\"\n  fi\n  rm -rf /opt/tools/drush/11/*\n  cd /opt/tools/drush/11/\n  _get_dev_ext \"drush-${_DRUSH_ELEVEN_VRN}.tar.gz\"\n  touch /opt/tools/drush/11/.ctrl.${_tRee}.${_xSrl}.pid\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Drush ${_DRUSH_ELEVEN_VRN} installation complete\"\n  fi\n}\n\n#\n# Install or update Drush 10.\n_get_drush_ten() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _get_drush_ten\"\n  fi\n  rm -rf /opt/tools/drush/10/*\n  cd /opt/tools/drush/10/\n  _get_dev_ext \"drush-${_DRUSH_TEN_VRN}.tar.gz\"\n  touch /opt/tools/drush/10/.ctrl.${_tRee}.${_xSrl}.pid\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Drush ${_DRUSH_TEN_VRN} installation complete\"\n  fi\n}\n\n#\n# Install or update Drush 8.\n_get_drush_eight() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _get_drush_eight\"\n  fi\n  rm -rf /opt/tools/drush/8/*\n  cd /opt/tools/drush/8/\n  _get_dev_ext \"drush-${_DRUSH_EIGHT_VRN}.tar.gz\"\n  touch /opt/tools/drush/8/.ctrl.${_tRee}.${_xSrl}.pid\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Drush ${_DRUSH_EIGHT_VRN} installation complete\"\n  fi\n}\n\n#\n# Install or 
update CiviCRM CLI Tool phar.\n_get_civicrm_cli() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _get_civicrm_cli\"\n  fi\n  rm -rf /opt/tmp/cv.phar*\n  cd /opt/tmp/\n  _get_dev_ext \"cv.phar.gz\"\n  if [ -e \"/opt/tmp/cv.phar\" ]; then\n    cp -af cv.phar /usr/local/bin/cv.phar\n    chmod 755 /usr/local/bin/cv.phar\n    ln -sf /usr/local/bin/cv.phar /usr/bin/cv\n    touch /opt/tools/drush/.ci.ctrl.${_tRee}.${_xSrl}.pid\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: CiviCRM CLI Tool phar installation complete\"\n    fi\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: CiviCRM CLI Tool phar installation failed!\"\n    fi\n  fi\n}\n\n#\n# Install or update Drush versions.\n_drush_system_install_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _drush_system_install_update\"\n  fi\n  if [ ! -e \"/root/.debug-barracuda-installer.cnf\" ] \\\n    && [ ! -e \"/root/.skip-aegir-master-upgrade.cnf\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installing supported Drush versions...\"\n    fi\n    ###--------------------###\n    if [ -e \"/opt/tools/drush/launcher\" ]; then\n      rm -rf /opt/tools/drush/launcher\n      rm -f /usr/bin/drushlr\n    fi\n    if [ -e \"/opt/tools/drush/12\" ]; then\n      rm -rf /opt/tools/drush/12\n      rm -f /usr/bin/drush12\n    fi\n    if [ -e \"/opt/tools/drush/9\" ]; then\n      rm -rf /opt/tools/drush/9\n      rm -f /usr/bin/drush9\n    fi\n    if [ -e \"/opt/tools/drush/7\" ]; then\n      rm -rf /opt/tools/drush/7\n      rm -f /usr/bin/drush7\n    fi\n    if [ -e \"/opt/tools/drush/6\" ]; then\n      rm -rf /opt/tools/drush/6\n      rm -f /usr/bin/drush6\n    fi\n    if [ -e \"/opt/tools/drush/5\" ]; then\n      rm -rf /opt/tools/drush/5\n      rm -f /usr/bin/drush5\n    fi\n    if [ -e \"/opt/tools/drush/4\" ]; then\n      rm -rf /opt/tools/drush/4\n      rm -f /usr/bin/drush4\n    fi\n    mkdir -p 
/opt/tools/drush/{8,10,11}\n    chown -R root:root /opt/tools\n    if [ -e \"/opt/tools/drush/8\" ]; then\n      _DRUSH_ITD=$(drush8 --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f8 \\\n        | awk '{ print $1}' 2>&1)\n      if [ \"${_DRUSH_ITD}\" != \"${_DRUSH_EIGHT_TEST_VRN}\" ] \\\n        || [ ! -x \"/opt/tools/drush/8/drush/drush.php\" ] \\\n        || [ ! -e \"/opt/tools/drush/8/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Drush ${_DRUSH_EIGHT_VRN} re-installation required\"\n        fi\n        _get_drush_eight\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Drush ${_DRUSH_EIGHT_VRN} already installed\"\n        fi\n      fi\n    fi\n    if [ -e \"/opt/tools/drush/10\" ]; then\n      _DRUSH_ITD=$(drush10 --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | awk '{ print $1}' 2>&1)\n      if [ \"${_DRUSH_ITD}\" != \"${_DRUSH_TEN_VRN}\" ] \\\n        || [ ! -x \"/opt/tools/drush/10/drush/vendor/drush/drush/drush\" ] \\\n        || [ ! -e \"/opt/tools/drush/10/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Drush ${_DRUSH_TEN_VRN} re-installation required\"\n        fi\n        _get_drush_ten\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Drush ${_DRUSH_TEN_VRN} already installed\"\n        fi\n      fi\n    fi\n    if [ -e \"/opt/tools/drush/11\" ]; then\n      _DRUSH_ITD=$(drush11 --version 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | awk '{ print $1}' 2>&1)\n      if [ \"${_DRUSH_ITD}\" != \"${_DRUSH_ELEVEN_VRN}\" ] \\\n        || [ ! -x \"/opt/tools/drush/11/drush/vendor/drush/drush/drush\" ] \\\n        || [ ! 
-e \"/opt/tools/drush/11/.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Drush ${_DRUSH_ELEVEN_VRN} re-installation required\"\n        fi\n        _get_drush_eleven\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Drush ${_DRUSH_ELEVEN_VRN} already installed\"\n        fi\n      fi\n    fi\n    _set_drush_perm\n    if [ -x \"/opt/tools/drush/8/drush/drush.php\" ]; then\n      rm -f /usr/bin/drush8\n      rm -f /usr/bin/drush\n      ln -sfn /opt/tools/drush/8/drush/drush.php /usr/bin/drush8\n      ln -sfn /opt/tools/drush/8/drush/drush.php /usr/bin/drush\n    else\n      _msg \"FAIL: Drush ${_DRUSH_EIGHT_VRN} installation failed!\"\n    fi\n    if [ -x \"/opt/tools/drush/10/drush/vendor/drush/drush/drush\" ]; then\n      _clean_drush_ten\n    else\n      _msg \"FAIL: Drush ${_DRUSH_TEN_VRN} installation failed!\"\n    fi\n    if [ -x \"/opt/tools/drush/11/drush/vendor/drush/drush/drush\" ]; then\n      _clean_drush_eleven\n    else\n      _msg \"FAIL: Drush ${_DRUSH_ELEVEN_VRN} installation failed!\"\n    fi\n    chown -R root:root /opt/tools/drush\n    if [ ! -x \"/usr/local/bin/cv.phar\" ] \\\n      || [ ! 
-e \"/opt/tools/drush/.ci.ctrl.${_tRee}.${_xSrl}.pid\" ]; then\n      _get_civicrm_cli\n    fi\n    cd /opt/tmp\n  fi\n}\n\n#\n# Update packages sources list.\n_sources_list_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sources_list_update\"\n  fi\n  ###--------------------###\n  apt-get clean -qq 2> /dev/null\n  #rm -rf /var/lib/apt/lists/*\n  ###--------------------###\n  if [ -e \"${_aptLiSys}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Updating packages sources list...\"\n    fi\n    mv -f ${_aptLiSys} \\\n      ${_vBs}/sources.list-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    if [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      _TEST_MIRROR=\"$(_find_fast_devuan_mirror)\"\n      if [ -n \"${_LOCAL_DEVUAN_MIRROR}\" ] && [ -n \"${_TEST_MIRROR}\" ]; then\n        if [ \"${_LOCAL_DEVUAN_MIRROR}\" = \"${_TEST_MIRROR}\" ]; then\n          export _MIRROR=\"${_LOCAL_DEVUAN_MIRROR}\"\n          export _LOCAL_DEVUAN_MIRROR=\"${_LOCAL_DEVUAN_MIRROR}\"\n        else\n          export _MIRROR=\"${_TEST_MIRROR}\"\n          export _LOCAL_DEVUAN_MIRROR=\"${_TEST_MIRROR}\"\n          sed -i \"s|^_LOCAL_DEV.*|_LOCAL_DEVUAN_MIRROR=${_TEST_MIRROR}|\" ${_barCnf}\n        fi\n      else\n        _DVN_MRR=\"$(_find_fast_devuan_mirror)\"\n        if [ -n \"${_DVN_MRR}\" ]; then\n          export _MIRROR=\"${_DVN_MRR}\"\n          export _LOCAL_DEVUAN_MIRROR=\"${_DVN_MRR}\"\n          sed -i \"s|^_LOCAL_DEV.*|_LOCAL_DEVUAN_MIRROR=${_MIRROR}|\" ${_barCnf}\n        else\n          export _MIRROR=\"http://deb.devuan.org/merged\"\n        fi\n      fi\n      if [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n        export _MIRROR=\"http://archive.devuan.org/merged\"\n      fi\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: We will use ${_OS_DIST} mirror ${_MIRROR}\"\n      fi\n      cd /var/opt\n      echo \"## DEVUAN MAIN REPOSITORIES\" > ${_aptLiSys}\n      echo \"deb ${_MIRROR} ${_OS_CODE} main contrib 
non-free\" >> ${_aptLiSys}\n      echo \"deb-src ${_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      if [ \"${_OS_CODE}\" != \"beowulf\" ]; then\n        echo \"## DEVUAN MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n        echo \"deb ${_MIRROR} ${_OS_CODE}-updates main contrib non-free\" >> ${_aptLiSys}\n        echo \"deb-src ${_MIRROR} ${_OS_CODE}-updates main contrib non-free\" >> ${_aptLiSys}\n        echo \"\" >> ${_aptLiSys}\n        if [ \"${_USE_BACKPORTS}\" = \"YES\" ]; then\n          echo \"## DEVUAN BACKPORTS REPOSITORY\" >> ${_aptLiSys}\n          echo \"deb ${_MIRROR} ${_OS_CODE}-backports main contrib non-free\" >> ${_aptLiSys}\n          echo \"deb-src ${_MIRROR} ${_OS_CODE}-backports main contrib non-free\" >> ${_aptLiSys}\n          echo \"\" >> ${_aptLiSys}\n        fi\n      fi\n      echo \"## DEVUAN SECURITY UPDATES\" >> ${_aptLiSys}\n      echo \"deb ${_MIRROR} ${_OS_CODE}-security main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src ${_MIRROR} ${_OS_CODE}-security main contrib non-free\" >> ${_aptLiSys}\n    elif [ \"${_OS_DIST}\" = \"Debian\" ]; then\n      _MIRROR_CHECK=NO\n      if [ \"${_OS_CODE}\" = \"deprecated\" ]; then\n        _MIRROR=archive.debian.org\n      else\n        if [ \"${_AUTOPILOT}\" = \"YES\" ]; then\n          if [ -z \"${_LOCAL_DEBIAN_MIRROR}\" ]; then\n            _MIRROR=deb.debian.org\n          else\n            _MIRROR=${_LOCAL_DEBIAN_MIRROR}\n          fi\n        else\n          _MIRROR_CHECK=YES\n        fi\n      fi\n      if [ \"${_MIRROR_CHECK}\" = \"YES\" ]; then\n        if [ -z \"${_LOCAL_DEBIAN_MIRROR}\" ]; then\n          _msg \"INFO: Now looking for the best/fastest ${_OS_DIST} mirror\"\n          _msg \"INFO: This may take a while, please wait...\"\n          _hlpPth=\"/opt/tmp/boa/aegir/helpers\"\n          _ffMirr=/opt/local/bin/ffmirror\n          _ffList=\"${_hlpPth}/apt-list-debian.txt\"\n          if [ -e 
\"${_ffMirr}\" ] && [ -e \"${_ffList}\" ]; then\n            _MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n            _MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n            echo \" \"\n            _askThis=\"Enter your own mirror to use or press enter\"\n            _askThis=\"${_askThis} to use the fastest found mirror\"\n            _prompt_confirm_choice \"${_askThis}\" ${_MIRROR}\n            echo \" \"\n            _MIRROR=${_CONFIRMED_ANSWER}\n          else\n            _MIRROR=${_LOCAL_DEBIAN_MIRROR}\n          fi\n        else\n          _MIRROR=${_LOCAL_DEBIAN_MIRROR}\n        fi\n        if ! command nc -w 10 -z ${_MIRROR} 80 >/dev/null 2>&1 ; then\n          _msg \"INFO: The mirror ${_MIRROR} doesn't respond, let's try default\"\n          _MIRROR=deb.debian.org\n        fi\n      fi\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: We will use ${_OS_DIST} mirror ${_MIRROR}\"\n      fi\n      cd /var/opt\n      if [ \"${_OS_CODE}\" = \"buster\" ] || [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n        _APT_MIRROR=\"archive.debian.org/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-backports\"\n        _SEC_MIRROR=\"archive.debian.org/debian-security\"\n        _SEC_REPSRC=\"${_OS_CODE}/updates\"\n      elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n        _APT_MIRROR=\"${_MIRROR}/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-updates\"\n        _SEC_MIRROR=\"security.debian.org/debian-security\"\n        _SEC_REPSRC=\"${_OS_CODE}-security\"\n      elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n        _APT_MIRROR=\"${_MIRROR}/debian\"\n        _APT_REPSRC=\"${_OS_CODE}-updates\"\n        _SEC_MIRROR=\"security.debian.org/debian-security\"\n        
_SEC_REPSRC=\"${_OS_CODE}-security\"\n      fi\n      echo \"## DEBIAN MAIN REPOSITORIES\" > ${_aptLiSys}\n      echo \"deb http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_APT_MIRROR} ${_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      echo \"## DEBIAN MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n      echo \"deb http://${_APT_MIRROR} ${_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_APT_MIRROR} ${_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      echo \"## DEBIAN SECURITY UPDATES\" >> ${_aptLiSys}\n      echo \"deb http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      echo \"deb-src http://${_SEC_MIRROR} ${_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      if [ \"${_USE_BACKPORTS}\" = \"YES\" ]; then\n        echo \"\" >> ${_aptLiSys}\n        echo \"## DEBIAN BACKPORTS REPOSITORY\" >> ${_aptLiSys}\n        echo \"deb http://${_APT_MIRROR} ${_OS_CODE}-backports main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n        echo \"deb-src http://${_APT_MIRROR} ${_OS_CODE}-backports main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      fi\n    fi\n    mkdir -p ${_pthLog}\n    touch ${_pthLog}/apt-deb-src-updates-${_xSrl}.txt\n    cd /var/opt\n  fi\n  ###--------------------###\n  _apt_clean_update\n  ###--------------------###\n}\n\n#\n# Install OpenSSH from sources.\n_sshd_install_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sshd_install_src\"\n  fi\n  if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _OPENSSH_VRN=8.3p1\n  fi\n  if [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    _OPENSSH_VRN=9.3p1\n  fi\n  if [ -x \"/usr/local/sbin/sshd\" ] \\\n    && [ -x \"/usr/local/bin/ssh\" ]; then\n    if [ ! 
-L \"/usr/sbin/sshd\" ] && [ -x \"/usr/sbin/sshd\" ]; then\n      mv -f /usr/sbin/sshd /usr/sbin/.sshd_bak\n      ln -sfn /usr/local/sbin/sshd /usr/sbin/sshd\n    fi\n    if [ ! -L \"/usr/sbin/ssh\" ] && [ -x \"/usr/sbin/ssh\" ]; then\n      mv -f /usr/sbin/ssh /usr/sbin/.ssh_bak\n      ln -sfn /usr/local/bin/ssh /usr/sbin/ssh\n    fi\n  fi\n  _SSH_GET_DPKG=$(dpkg --get-selections | grep openssh-server | grep 'hold$' 2>&1)\n  _SSH_INSTALL_REQUIRED=NO\n  _SSH_ITD=$(ssh -V 2>&1 \\\n    | tr -d \"\\n\" \\\n    | tr -d \",\" \\\n    | cut -d\"_\" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n    && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n    _SSL_BINARY=/usr/local/ssl3/bin/openssl\n  else\n    _SSL_BINARY=/usr/local/ssl/bin/openssl\n  fi\n  _SSL_ITD=$(${_SSL_BINARY} version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  _SSH_SSL_ITD=$(ssh -V 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f3 \\\n    | awk '{ print $1}' 2>&1)\n  if [ -e \"/usr/local/sbin/sshd\" ]; then\n    _SSH_IF_SELINUX=$(ldd /usr/local/sbin/sshd | grep selinux 2>&1)\n  fi\n  if [ -e \"/usr/etc/sshd_config\" ]; then\n    _SSH_FORCE_REINSTALL=YES\n    _OLD_SYSCONFDIR=\"/usr/etc\"\n    _SYSCONFDIR=\"/etc/ssh\"\n    _PREFIX_SSH=\"/usr/local\"\n    if [ -e \"${_SYSCONFDIR}\" ]; then\n      cp -af ${_OLD_SYSCONFDIR}/* ${_SYSCONFDIR}/\n    fi\n  else\n    _SYSCONFDIR=\"/etc/ssh\"\n    _PREFIX_SSH=\"/usr/local\"\n  fi\n  if [[ \"${_SSL_ITD}\" =~ \"${_OPENSSL_MODERN_VRN}\" ]] \\\n    || [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n    _SSL_PATH=\"/usr/local/ssl3\"\n    _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n  else\n    _SSL_PATH=\"/usr/local/ssl\"\n    _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n  fi\n  if [ \"${_SSH_SSL_ITD}\" != \"${_SSL_ITD}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: OpenSSL in SSH is ${_SSH_SSL_ITD}, rebuild to ${_SSL_ITD} required\"\n    fi\n    
_SSH_INSTALL_REQUIRED=YES\n  fi\n  if [[ \"${_SSH_IF_SELINUX}\" =~ \"libselinux\" ]]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: SSH was built --with-selinux, rebuild required\"\n    fi\n    _SSH_INSTALL_REQUIRED=YES\n  fi\n  if [ \"${_SSH_FORCE_REINSTALL}\" = \"YES\" ]; then\n    _SSH_INSTALL_REQUIRED=YES\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: OpenSSH ${_SSH_ITD}, rebuild forced\"\n    fi\n  elif [ \"${_SSH_ITD}\" != \"${_OPENSSH_VRN}\" ]; then\n    _SSH_INSTALL_REQUIRED=YES\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installed OpenSSH version ${_SSH_ITD}, upgrade required\"\n    fi\n  fi\n  if [ \"${_SSH_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    _msg \"INFO: Building OpenSSH ${_OPENSSH_VRN} from sources...\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    _mrun \"${_INITINS} libpam0g-dev\"\n    if [ ! -e \"/var/lib/sshd\" ]; then\n      mkdir -p /var/lib/sshd\n      chmod -R 700 /var/lib/sshd/\n      chown -R root:sys /var/lib/sshd/\n      useradd -r -U -d /var/lib/sshd/ -c \"sshd privsep\" -s /bin/false sshd &> /dev/null\n    fi\n    _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n    cd /var/opt\n    rm -rf openssh*\n    _get_dev_src \"openssh-${_OPENSSH_VRN}.tar.gz\"\n    cd /var/opt/openssh-${_OPENSSH_VRN}\n    _mrun \"LIBS=\\\"-ldl -lpthread\\\" PKG_CONFIG_PATH=\\\"${_PKG_CONFIG_PATH}\\\" ./configure \\\n      --with-ssl-dir=${_SSL_PATH} \\\n      --with-privsep-path=/var/lib/sshd/ \\\n      --sysconfdir=${_SYSCONFDIR} \\\n      --prefix=${_PREFIX_SSH} \\\n      --without-openssl-header-check \\\n      --with-md5-passwords \\\n      --with-pam\"\n    _mrun \"make -j $(nproc)\"\n    _mrun \"make install\"\n    ldconfig 2> /dev/null\n    _mrun \"service ssh restart\"\n    if [[ ! 
\"${_SSH_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold openssh-server &> /dev/null\n      aptitude hold openssh-client &> /dev/null\n      aptitude hold ssh &> /dev/null\n      aptitude hold openssh-sftp-server &> /dev/null\n      echo \"openssh-client hold\" | dpkg --set-selections &> /dev/null\n      echo \"openssh-server hold\" | dpkg --set-selections &> /dev/null\n      echo \"ssh hold\" | dpkg --set-selections &> /dev/null\n      echo \"openssh-sftp-server hold\" | dpkg --set-selections &> /dev/null\n    fi\n  else\n    if [[ ! \"${_SSH_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold openssh-server &> /dev/null\n      aptitude hold openssh-client &> /dev/null\n      aptitude hold ssh &> /dev/null\n      aptitude hold openssh-sftp-server &> /dev/null\n      echo \"openssh-client hold\" | dpkg --set-selections &> /dev/null\n      echo \"openssh-server hold\" | dpkg --set-selections &> /dev/null\n      echo \"ssh hold\" | dpkg --set-selections &> /dev/null\n      echo \"openssh-sftp-server hold\" | dpkg --set-selections &> /dev/null\n    fi\n  fi\n  if [ -x \"/usr/local/sbin/sshd\" ] \\\n    && [ -x \"/usr/local/bin/ssh\" ]; then\n    if [ ! -L \"/usr/sbin/sshd\" ] && [ -x \"/usr/sbin/sshd\" ]; then\n      mv -f /usr/sbin/sshd /usr/sbin/.sshd_bak\n      ln -sfn /usr/local/sbin/sshd /usr/sbin/sshd\n    fi\n    if [ ! 
-L \"/usr/sbin/ssh\" ] && [ -x \"/usr/sbin/ssh\" ]; then\n      mv -f /usr/sbin/ssh /usr/sbin/.ssh_bak\n      ln -sfn /usr/local/bin/ssh /usr/sbin/ssh\n    fi\n    if [ -e \"${_OLD_SYSCONFDIR}\" ]; then\n      cp -a ${_OLD_SYSCONFDIR} /var/backups/old_ssh_usr_etc\n      rm -f ${_OLD_SYSCONFDIR}/ssh*\n      rm -f ${_OLD_SYSCONFDIR}/moduli*\n    fi\n    pkill -9 -f /usr/sbin/sshd || true\n    _mrun \"service ssh start\"\n  fi\n  _SSH_FORCE_REINSTALL=NO\n}\n\n#\n# Install ImageMagick from sources.\n_install_magick_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_magick_src\"\n  fi\n  _MAGICK_SRC_BUILD_CTRL=\"${_pthLog}/ImageMagick-src-build-${_IMAGE_MAGICK_VRN}.log\"\n  _MAGICK_INSTALL_REQUIRED=NO\n  _MAGICK_ITD=$(convert --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f3 \\\n    | awk '{ print $1}' 2>&1)\n  if [ \"${_MAGICK_ITD}\" != \"${_IMAGE_MAGICK_VRN}\" ] \\\n    || [ ! -e \"${_MAGICK_SRC_BUILD_CTRL}\" ]; then\n    if [ \"${_OS_CODE}\" = \"none\" ]; then\n      _MAGICK_INSTALL_REQUIRED=NO\n    else\n      _MAGICK_INSTALL_REQUIRED=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed ImageMagick version ${_MAGICK_ITD}, upgrade required\"\n      fi\n    fi\n  fi\n  if [ \"${_MAGICK_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    _msg \"INFO: Installing ImageMagick ${_IMAGE_MAGICK_VRN}...\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    cd /var/opt\n    rm -rf ImageMagick*\n    _get_dev_src \"ImageMagick-${_IMAGE_MAGICK_VRN}.tar.gz\"\n    cd /var/opt/ImageMagick-${_IMAGE_MAGICK_VRN}\n    _mrun \"bash ./configure --prefix=/usr\"\n    _mrun \"make -j $(nproc) --quiet\"\n    if [ ! -e \"/etc/.ImageMagick-6\" ] \\\n      && [ -e \"/etc/ImageMagick-6\" ]; then\n        cp -a /etc/ImageMagick-6 /etc/.ImageMagick-6\n    fi\n    if [ ! 
-e \"/etc/.ImageMagick\" ] \\\n      && [ -e \"/etc/ImageMagick\" ]; then\n        cp -a /etc/ImageMagick /etc/.ImageMagick\n    fi\n    ###\n    for _PKG in imagemagick libmagickwand-dev graphviz libgraphviz-dev; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove ${_PKG} -y --purge --auto-remove -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    ###\n    _mrun \"make --quiet install\"\n    ldconfig 2> /dev/null\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"HINT: Please modify /usr/etc/ImageMagick-7/policy.xml file, if needed\"\n    fi\n    touch ${_MAGICK_SRC_BUILD_CTRL}\n  fi\n}\n\n#\n# Fix symlinks for libs.\n_ssl_crypto_lib_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _ssl_crypto_lib_fix\"\n  fi\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Fix for SSL libssl/libcrypto in ${_OS_DIST}/${_OS_CODE}\"\n  fi\n  if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n    && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n    _SSL_BINARY=/usr/local/ssl3/bin/openssl\n  else\n    _SSL_BINARY=/usr/local/ssl/bin/openssl\n  fi\n  [ -x \"${_SSL_BINARY}\" ] && _SSL_ITD=$(${_SSL_BINARY} version 2>&1 | awk '{print $2}')\n  if [[ \"${_SSL_ITD}\" =~ \"${_OPENSSL_MODERN_VRN}\" ]] \\\n    || [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n    _SSL_INSTALLED=MODERN\n  elif [[ \"${_SSL_ITD}\" =~ \"1.1.1\" ]]; then\n    _SSL_INSTALLED=EOL\n  elif [[ \"${_SSL_ITD}\" =~ \"1.0.2\" ]]; then\n    _SSL_INSTALLED=LEGACY\n  fi\n  if [ \"${_SSL_INSTALLED}\" = \"LEGACY\" ]; then\n    ###\n    _LIB_NOW=$(date +%y%m%d-%H%M)\n    _LIB_NOW=${_LIB_NOW//[^0-9-]/}\n    ###\n    if [ -f \"/usr/local/ssl/lib/libssl.so.1.0.0\" ] \\\n      && [ -f \"/usr/local/ssl/lib/libcrypto.so.1.0.0\" ] \\\n      && [ -f \"/usr/local/ssl/lib/libssl.a\" ] \\\n      && [ -f \"/usr/local/ssl/lib/libcrypto.a\" ]; then\n      ###\n      
_SSL_LIB_PATH=\"/usr/local/ssl/lib\"\n      _SSL_CNF_PATH=\"/usr/local/ssl/include/openssl\"\n      _GNU_LIB_PATH=\"/usr/lib/x86_64-linux-gnu\"\n      _GNU_CNF_PATH=\"/usr/include/x86_64-linux-gnu/openssl\"\n      ###\n      rm -f /usr/lib/libcrypto.a\n      rm -f /usr/lib/libcrypto.so\n      rm -f /usr/lib/libcrypto.so.1.0.0\n      rm -f /usr/lib/libssl.a\n      rm -f /usr/lib/libssl.so\n      rm -f /usr/lib/libssl.so.1.0.0\n      rm -f ${_GNU_LIB_PATH}/libcrypto.a\n      rm -f ${_GNU_LIB_PATH}/libcrypto.so\n      rm -f ${_GNU_LIB_PATH}/libcrypto.so.1.0.0\n      rm -f ${_GNU_LIB_PATH}/libssl.a\n      rm -f ${_GNU_LIB_PATH}/libssl.so\n      rm -f ${_GNU_LIB_PATH}/libssl.so.1.0.0\n      ###\n      #rm -f ${_GNU_LIB_PATH}/libssl3.so\n      rm -f ${_SSL_LIB_PATH}/libcrypto.so.1.1*\n      rm -f ${_SSL_LIB_PATH}/libssl.so.1.1*\n      ###\n      ln -sfn ${_SSL_LIB_PATH}/libssl.a /usr/lib/libssl.a\n      ln -sfn ${_SSL_LIB_PATH}/libssl.a ${_GNU_LIB_PATH}/libssl.a\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.1.0.0 /usr/lib/libssl.so\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.1.0.0 /usr/lib/libssl.so.1.0.0\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.1.0.0 ${_GNU_LIB_PATH}/libssl.so\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.1.0.0 ${_GNU_LIB_PATH}/libssl.so.1.0.0\n      ###\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.a /usr/lib/libcrypto.a\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.a ${_GNU_LIB_PATH}/libcrypto.a\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.0.0 /usr/lib/libcrypto.so\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.0.0 /usr/lib/libcrypto.so.1.0.0\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.0.0 ${_GNU_LIB_PATH}/libcrypto.so\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.0.0 ${_GNU_LIB_PATH}/libcrypto.so.1.0.0\n      ###\n      if [ -e \"${_SSL_CNF_PATH}/opensslconf.h\" ]; then\n        if [ -e \"${_GNU_CNF_PATH}/opensslconf.h\" ]; then\n          rm -f /var/backups/old-opensslconf-h*\n          mv -f ${_GNU_CNF_PATH}/opensslconf.h 
/var/backups/old-opensslconf-h-${_LIB_NOW}\n        fi\n        ln -sfn ${_SSL_CNF_PATH}/opensslconf.h ${_GNU_CNF_PATH}/opensslconf.h\n      fi\n      ###\n      if [ -e \"/usr/include/openssl\" ] \\\n        && [ -d \"/usr/local/ssl/include/openssl\" ]; then\n        rm -rf /var/backups/old-usr-include-openssl*\n        mv -f /usr/include/openssl /var/backups/old-usr-include-openssl-${_LIB_NOW}\n        ln -sfn /usr/local/ssl/include/openssl /usr/include/openssl\n      fi\n    fi\n  elif [ \"${_SSL_INSTALLED}\" = \"EOL\" ]; then\n    ###\n    _LIB_NOW=$(date +%y%m%d-%H%M)\n    _LIB_NOW=${_LIB_NOW//[^0-9-]/}\n    ###\n    if [ -f \"/usr/local/ssl/lib/libssl.so.1.1\" ] \\\n      && [ -f \"/usr/local/ssl/lib/libcrypto.so.1.1\" ] \\\n      && [ -f \"/usr/local/ssl/lib/libssl.a\" ] \\\n      && [ -f \"/usr/local/ssl/lib/libcrypto.a\" ]; then\n      ###\n      _SSL_LIB_PATH=\"/usr/local/ssl/lib\"\n      _SSL_CNF_PATH=\"/usr/local/ssl/include/openssl\"\n      _GNU_LIB_PATH=\"/usr/lib/x86_64-linux-gnu\"\n      _GNU_CNF_PATH=\"/usr/include/x86_64-linux-gnu/openssl\"\n      ###\n      rm -f /usr/lib/libcrypto.a\n      rm -f /usr/lib/libcrypto.so\n      rm -f /usr/lib/libcrypto.so.1.1\n      rm -f /usr/lib/libssl.a\n      rm -f /usr/lib/libssl.so\n      rm -f /usr/lib/libssl.so.1.1\n      rm -f ${_GNU_LIB_PATH}/libcrypto.a\n      rm -f ${_GNU_LIB_PATH}/libcrypto.so\n      rm -f ${_GNU_LIB_PATH}/libcrypto.so.1.1\n      rm -f ${_GNU_LIB_PATH}/libssl.a\n      rm -f ${_GNU_LIB_PATH}/libssl.so\n      rm -f ${_GNU_LIB_PATH}/libssl.so.1.1\n      ###\n      #rm -f ${_GNU_LIB_PATH}/libssl3.so\n      rm -f ${_SSL_LIB_PATH}/libcrypto.so.1.0.0*\n      rm -f ${_SSL_LIB_PATH}/libssl.so.1.0.0*\n      ###\n      ln -sfn ${_SSL_LIB_PATH}/libssl.a /usr/lib/libssl.a\n      ln -sfn ${_SSL_LIB_PATH}/libssl.a ${_GNU_LIB_PATH}/libssl.a\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.1.1 /usr/lib/libssl.so\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.1.1 /usr/lib/libssl.so.1.1\n      ln -sfn 
${_SSL_LIB_PATH}/libssl.so.1.1 ${_GNU_LIB_PATH}/libssl.so\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.1.1 ${_GNU_LIB_PATH}/libssl.so.1.1\n      ###\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.a /usr/lib/libcrypto.a\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.a ${_GNU_LIB_PATH}/libcrypto.a\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.1 /usr/lib/libcrypto.so\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.1 /usr/lib/libcrypto.so.1.1\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.1 ${_GNU_LIB_PATH}/libcrypto.so\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.1.1 ${_GNU_LIB_PATH}/libcrypto.so.1.1\n      ###\n      if [ -e \"${_SSL_CNF_PATH}/opensslconf.h\" ]; then\n        if [ -e \"${_GNU_CNF_PATH}/opensslconf.h\" ]; then\n          rm -f /var/backups/old-opensslconf-h*\n          mv -f ${_GNU_CNF_PATH}/opensslconf.h /var/backups/old-opensslconf-h-${_LIB_NOW}\n        fi\n        ln -sfn ${_SSL_CNF_PATH}/opensslconf.h ${_GNU_CNF_PATH}/opensslconf.h\n      fi\n      ###\n      if [ -e \"/usr/include/openssl\" ] \\\n        && [ -d \"/usr/local/ssl/include/openssl\" ]; then\n        rm -rf /var/backups/old-usr-include-openssl*\n        mv -f /usr/include/openssl /var/backups/old-usr-include-openssl-${_LIB_NOW}\n        ln -sfn /usr/local/ssl/include/openssl /usr/include/openssl\n      fi\n    fi\n  elif [ \"${_SSL_INSTALLED}\" = \"MODERN\" ]; then\n    ###\n    _LIB_NOW=$(date +%y%m%d-%H%M)\n    _LIB_NOW=${_LIB_NOW//[^0-9-]/}\n    ###\n    if [ -f \"/usr/local/ssl3/lib64/libssl.so.3\" ] \\\n      && [ -f \"/usr/local/ssl3/lib64/libcrypto.so.3\" ] \\\n      && [ -f \"/usr/local/ssl3/lib64/libssl.a\" ] \\\n      && [ -f \"/usr/local/ssl3/lib64/libcrypto.a\" ]; then\n      ###\n      _SSL_LIB_PATH=\"/usr/local/ssl3/lib64\"\n      _SSL_CNF_PATH=\"/usr/local/ssl3/include/openssl\"\n      _GNU_LIB_PATH=\"/usr/lib/x86_64-linux-gnu\"\n      _GNU_CNF_PATH=\"/usr/include/x86_64-linux-gnu/openssl\"\n      ###\n      rm -f /usr/lib/libcrypto.a\n      rm -f /usr/lib/libcrypto.so\n      rm -f 
/usr/lib/libcrypto.so.3\n      rm -f /usr/lib/libssl.a\n      rm -f /usr/lib/libssl.so\n      rm -f /usr/lib/libssl.so.3\n      rm -f ${_GNU_LIB_PATH}/libcrypto.a\n      rm -f ${_GNU_LIB_PATH}/libcrypto.so\n      rm -f ${_GNU_LIB_PATH}/libcrypto.so.3\n      rm -f ${_GNU_LIB_PATH}/libssl.a\n      rm -f ${_GNU_LIB_PATH}/libssl.so\n      rm -f ${_GNU_LIB_PATH}/libssl.so.3\n      ###\n      ln -sfn ${_SSL_LIB_PATH}/libssl.a /usr/lib/libssl.a\n      ln -sfn ${_SSL_LIB_PATH}/libssl.a ${_GNU_LIB_PATH}/libssl.a\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.3 /usr/lib/libssl.so\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.3 /usr/lib/libssl.so.3\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.3 ${_GNU_LIB_PATH}/libssl.so\n      ln -sfn ${_SSL_LIB_PATH}/libssl.so.3 ${_GNU_LIB_PATH}/libssl.so.3\n      ###\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.a /usr/lib/libcrypto.a\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.a ${_GNU_LIB_PATH}/libcrypto.a\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.3 /usr/lib/libcrypto.so\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.3 /usr/lib/libcrypto.so.3\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.3 ${_GNU_LIB_PATH}/libcrypto.so\n      ln -sfn ${_SSL_LIB_PATH}/libcrypto.so.3 ${_GNU_LIB_PATH}/libcrypto.so.3\n      ###\n      if [ -e \"${_SSL_CNF_PATH}/opensslconf.h\" ]; then\n        if [ -e \"${_GNU_CNF_PATH}/opensslconf.h\" ]; then\n          rm -f /var/backups/old-opensslconf-h*\n          mv -f ${_GNU_CNF_PATH}/opensslconf.h /var/backups/old-opensslconf-h-${_LIB_NOW}\n        fi\n        ln -sfn ${_SSL_CNF_PATH}/opensslconf.h ${_GNU_CNF_PATH}/opensslconf.h\n      fi\n      ###\n      if [ -e \"/usr/include/openssl\" ] \\\n        && [ -d \"/usr/local/ssl3/include/openssl\" ]; then\n        rm -rf /var/backups/old-usr-include-openssl*\n        mv -f /usr/include/openssl /var/backups/old-usr-include-openssl-${_LIB_NOW}\n        ln -sfn /usr/local/ssl3/include/openssl /usr/include/openssl\n      fi\n    fi\n  fi\n  ldconfig 2> /dev/null\n}\n\n#\n# Update OpenSSL 
apt locks.\n_ssl_apt_locks_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _ssl_apt_locks_update\"\n  fi\n\n  _SSL_GET_DPKG=$(dpkg --get-selections \\\n    | grep openssl \\\n    | grep 'hold$' 2>&1)\n\n  _ZLB_GET_DPKG=$(dpkg --get-selections \\\n    | grep zlib \\\n    | grep 'hold$' 2>&1)\n\n  if [[ ! \"${_SSL_GET_DPKG}\" =~ \"hold\" ]]; then\n    echo \"openssl hold\" | dpkg --set-selections &> /dev/null\n  fi\n\n  if [[ ! \"${_ZLB_GET_DPKG}\" =~ \"hold\" ]]; then\n    echo \"zlib1g hold\" | dpkg --set-selections &> /dev/null\n    echo \"zlib1g-dev hold\" | dpkg --set-selections &> /dev/null\n  fi\n}\n\n#\n# Sync OpenSSL paths.\n_ssl_paths_sync() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _ssl_paths_sync\"\n  fi\n\n  _SSL_MODERN_BINARY=\"/usr/local/ssl3/bin/openssl\"\n  _SSL_MODERN_PATH=\"/usr/local/ssl3\"\n  _SSL_MODERN_LIB_PATH=\"${_SSL_MODERN_PATH}/lib64\"\n\n  _SSL_LEGACY_BINARY=\"/usr/local/ssl/bin/openssl\"\n  _SSL_LEGACY_PATH=\"/usr/local/ssl\"\n  _SSL_LEGACY_LIB_PATH=\"${_SSL_LEGACY_PATH}/lib\"\n\n  sed -i \"s/.*ssl.*//g\" /etc/ld.so.conf.d/libc.conf\n\n  if [ -e \"${_SSL_MODERN_LIB_PATH}\" ]; then\n    echo \"${_SSL_MODERN_LIB_PATH}\" >> /etc/ld.so.conf.d/libc.conf\n  fi\n\n  if [ -e \"${_SSL_LEGACY_LIB_PATH}\" ]; then\n    echo \"${_SSL_LEGACY_LIB_PATH}\" >> /etc/ld.so.conf.d/libc.conf\n  fi\n\n  sed -i \"/^$/d\" /etc/ld.so.conf.d/libc.conf &> /dev/null\n  ldconfig 2> /dev/null\n}\n\n#\n# Update OpenSSL paths.\n_ssl_paths_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _ssl_paths_update\"\n  fi\n\n  _LINK_TO=\"${1}\"\n\n  if [ -x \"${_LINK_TO}\" ]; then\n    if [ -f \"/usr/bin/openssl\" ]; then\n      cp -af /usr/bin/openssl /usr/bin/old-openssl-$(date +%y%m%d-%H%M%S)\n    fi\n    ln -sfn ${_LINK_TO} /usr/bin/openssl\n    if [ -e \"/bin/openssl\" ] && [ ! 
-L \"/bin\" ]; then\n      cp -af /bin/openssl /bin/old-openssl-$(date +%y%m%d-%H%M%S)\n      ln -sfn ${_LINK_TO} /bin/openssl\n    fi\n  fi\n\n  [ -x \"${1}\" ] && _THIS_SSL_ITD=$(${1} version 2>&1 | awk '{print $2}')\n\n  if [[ \"${_THIS_SSL_ITD}\" =~ \"${_OPENSSL_NEW_VRN}\" ]]; then\n    _SSL_PATH=\"/usr/local/ssl3\"\n    _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n  else\n    _SSL_PATH=\"/usr/local/ssl\"\n    _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n  fi\n\n  sed -i \"s/.*ssl.*//g\" /etc/ld.so.conf.d/libc.conf\n  echo \"${_SSL_LIB_PATH}\" >> /etc/ld.so.conf.d/libc.conf\n  _LEGACY_SSL_LIB_PATH=\"/usr/local/ssl/lib\"\n  if [ -e \"${_LEGACY_SSL_LIB_PATH}\" ] \\\n    && [ \"${_LEGACY_SSL_LIB_PATH}\" != \"${_SSL_LIB_PATH}\" ]; then\n    echo \"${_LEGACY_SSL_LIB_PATH}\" >> /etc/ld.so.conf.d/libc.conf\n  fi\n  sed -i \"/^$/d\" /etc/ld.so.conf.d/libc.conf &> /dev/null\n  ldconfig 2> /dev/null\n}\n\n#\n# Fix OpenSSL symlinks.\n_ssl_fix_symlinks() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _ssl_fix_symlinks\"\n  fi\n  # Determine which OpenSSL version to use based on _OS_CODE\n  if [ \"${_OS_CODE}\" != \"stretch\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n    _sslSymlink=\"/usr/bin/openssl\"\n    _sslBinary=\"/usr/local/ssl3/bin/openssl\"\n    [ ! 
-L \"${_sslSymlink}\" ] && [ -x \"${_sslBinary}\" ] && _ssl_paths_update \"${_sslBinary}\"\n  fi\n}\n\n#\n# Install OpenSSL from sources.\n_ssl_install_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _ssl_install_src\"\n  fi\n\n  if [ \"${1}\" = \"Modern\" ]; then\n    _OPENSSL_BUILD_VRN=\"${_OPENSSL_NEW_VRN}\"\n    _THIS_SSL_BINARY=\"${_SSL_MODERN_BINARY}\"\n  elif [ \"${1}\" = \"Legacy\" ]; then\n    _OPENSSL_BUILD_VRN=\"${_OPENSSL_OLD_VRN}\"\n    _THIS_SSL_BINARY=\"${_SSL_LEGACY_BINARY}\"\n  fi\n\n  ###--------------------###\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"INFO: Building zlib ${_ZLIB_VRN} from sources first...\"\n  fi\n  cd /var/opt\n  rm -rf zlib*\n  _get_dev_src \"zlib-${_ZLIB_VRN}.tar.gz\"\n  cd /var/opt/zlib-${_ZLIB_VRN}\n  _mrun \"bash ./configure --prefix=/usr\"\n  _mrun \"make -j $(nproc) --quiet\"\n  _mrun \"make --quiet install\"\n  ldconfig 2> /dev/null\n\n  ###--------------------###\n  _msg \"INFO: Building ${1} OpenSSL from sources...\"\n  _msg \"WAIT: This may take a while, please wait...\"\n\n  if [ ! 
-e \"/var/opt/openssl-${_OPENSSL_BUILD_VRN}\" ]; then\n    cd /var/opt\n    rm -rf openssl*\n    _get_dev_src_wget \"openssl-${_OPENSSL_BUILD_VRN}.tar.gz\"\n  fi\n  if [ -e \"/var/opt/openssl-${_OPENSSL_BUILD_VRN}\" ]; then\n    cd /var/opt/openssl-${_OPENSSL_BUILD_VRN}\n    if [ \"${1}\" = \"Modern\" ]; then\n      _SSL_BUILD_ARGS=\"--prefix=/usr/local/ssl3 \\\n        --openssldir=/usr/local/ssl3 enable-ktls\"\n      _patchMakefile=YES\n    else\n      _SSL_BUILD_ARGS=\"--prefix=/usr/local/ssl \\\n        --openssldir=/usr/local/ssl\"\n      _patchMakefile=YES\n    fi\n    _mrun \"sh ./config \\\n      shared \\\n      no-tests \\\n      no-ssl3 \\\n      zlib-dynamic \\\n      enable-ec_nistp_64_gcc_128 \\\n      ${_SSL_BUILD_ARGS}\"\n    if [ \"${_patchMakefile}\" = \"YES\" ]; then\n      sed -i \"s/install_docs:.*/install_docs:/g\" /var/opt/openssl-${_OPENSSL_BUILD_VRN}/Makefile\n    fi\n    if [ \"${1}\" = \"Modern\" ]; then\n      _mrun \"make dclean\" || true\n    fi\n    _mrun \"make depend\" || true\n    _mrun \"make -j $(nproc)\"\n    _mrun \"make install\"\n  fi\n\n  if [ -x \"${_THIS_SSL_BINARY}\" ]; then\n    _ssl_paths_update \"${_THIS_SSL_BINARY}\"\n    _ssl_apt_locks_update\n  else\n    _msg \"OOPS: OpenSSL version ${_OPENSSL_BUILD_VRN} was not installed!\"\n    _SSL_INSTALL_REQUIRED=\n    _SSH_FORCE_REINSTALL=\n    _NGX_FORCE_REINSTALL=\n    _PHP_FORCE_REINSTALL=\n    _GIT_FORCE_REINSTALL=\n  fi\n\n  if [ ! -e \"/opt/php73/bin/php\" ] \\\n    && [ ! -e \"/opt/php72/bin/php\" ] \\\n    && [ ! -e \"/opt/php71/bin/php\" ] \\\n    && [ ! -e \"/opt/php70/bin/php\" ] \\\n    && [ ! 
-e \"/opt/php56/bin/php\" ] \\\n    && [ -x \"/usr/local/ssl3/bin/openssl\" ] \\\n    && [ -x \"/usr/local/ssl/bin/openssl\" ]; then\n    ldconfig 2> /dev/null\n  fi\n}\n\n#\n# If Install OpenSSL from sources.\n_if_ssl_install_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_ssl_install_src\"\n  fi\n\n  _SSL_LEGACY_INSTALL_REQUIRED=NO\n  _SSL_MODERN_INSTALL_REQUIRED=NO\n  _SSL_LEGACY_BINARY=\"/usr/local/ssl/bin/openssl\"\n  _SSL_MODERN_BINARY=\"/usr/local/ssl3/bin/openssl\"\n\n  [ ! -x \"${_SSL_LEGACY_BINARY}\" ] && _SSL_LEGACY_INSTALL_REQUIRED=YES\n\n  # Determine which OpenSSL version to use based on _OS_CODE\n  if [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n      chattr -i /root/.install.modern.openssl.cnf\n      rm -f /root/.install.modern.openssl.cnf\n    fi\n    if [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n      mv -f /usr/local/ssl3 /usr/local/ssl3-off\n    fi\n  else\n    [ ! -x \"${_SSL_MODERN_BINARY}\" ] && _SSL_MODERN_INSTALL_REQUIRED=YES\n    _OPENSSL_NEW_VRN=\"${_OPENSSL_MODERN_VRN}\"\n  fi\n\n  if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    _OPENSSL_OLD_VRN=\"${_OPENSSL_LEGACY_VRN}\"\n  else\n    _OPENSSL_OLD_VRN=\"${_OPENSSL_EOL_VRN}\"\n  fi\n\n  # Get installed versions\n  [ -x \"${_SSL_MODERN_BINARY}\" ] && _SSL_MODERN_ITD=$(${_SSL_MODERN_BINARY} version 2>&1 | awk '{print $2}')\n  [ -x \"${_SSL_LEGACY_BINARY}\" ] && _SSL_LEGACY_ITD=$(${_SSL_LEGACY_BINARY} version 2>&1 | awk '{print $2}')\n\n  # Determine _SSL_LIB_ITD\n  [ -x \"${_SSL_MODERN_BINARY}\" ] && _SSL_LIB_ITD=$(${_SSL_MODERN_BINARY} version 2>&1 | awk '{print $8}')\n\n  # Check if legacy version needs upgrading\n  if [ ! 
-x \"${_SSL_LEGACY_BINARY}\" ] || [ \"${_SSL_LEGACY_ITD}\" != \"${_OPENSSL_OLD_VRN}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Legacy OpenSSL (${_SSL_LEGACY_ITD}), upgrade to (${_OPENSSL_OLD_VRN}) required\"\n    fi\n    _SSL_LEGACY_INSTALL_REQUIRED=YES\n  fi\n\n  # Check if modern version needs upgrading\n  if [ \"${_OS_CODE}\" != \"stretch\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n    _sslSymlink=\"/usr/bin/openssl\"\n    _sslMacros=\"/usr/local/ssl3/include/openssl/macros.h\"\n    _sslIncludesA=NO\n    _sslIncludesB=NO\n    _sslIncludes=YES\n    if [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n      _sslComp=$(nm -D /usr/local/ssl3/lib64/libssl.so.3 | grep -q COMP_get_type && echo \"YES\" || echo \"NO\")\n    else\n      _sslComp=NO\n    fi\n    if grep -i 'OPENSSL_NO_DEPRECATED_3_4' ${_sslMacros} &> /dev/null; then\n      _sslIncludesA=YES\n    fi\n    if grep -i '2019-2026' ${_sslMacros} &> /dev/null; then\n      _sslIncludesB=YES\n    fi\n    if [ \"${_sslIncludesA}\" = \"NO\" ] || [ \"${_sslIncludesB}\" = \"NO\" ]; then\n      _sslIncludes=NO\n    fi\n    if [ ! -x \"${_SSL_MODERN_BINARY}\" ] \\\n      || [ \"${_sslComp}\" != \"YES\" ] \\\n      || [ \"${_sslIncludes}\" != \"YES\" ] \\\n      || [ \"${_SSL_MODERN_ITD}\" != \"${_OPENSSL_NEW_VRN}\" ] \\\n      || [ ! 
-L \"${_sslSymlink}\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Modern OpenSSL (${_SSL_MODERN_ITD}), upgrade to (${_OPENSSL_NEW_VRN}) required\"\n        _msg \"INFO: _SSL_MODERN_BINARY is ${_SSL_MODERN_BINARY}\"\n        _msg \"INFO: _sslIncludesA is ${_sslIncludesA}\"\n        _msg \"INFO: _sslIncludesB is ${_sslIncludesB}\"\n        _msg \"INFO: _sslIncludes is ${_sslIncludes}\"\n        _msg \"INFO: _SSL_MODERN_ITD is ${_SSL_MODERN_ITD}\"\n        _msg \"INFO: _OPENSSL_NEW_VRN is ${_OPENSSL_NEW_VRN}\"\n        _msg \"INFO: _sslSymlink is ${_sslSymlink}\"\n        _msg \"INFO: _sslComp is ${_sslComp}\"\n      fi\n      _SSL_MODERN_INSTALL_REQUIRED=YES\n    fi\n  fi\n\n  # Check if OpenSSL 3.x libraries/headers match the binary version\n  if [ -x \"${_SSL_MODERN_BINARY}\" ] && [ \"${_SSL_LIB_ITD}\" != \"${_SSL_MODERN_ITD}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: OpenSSL Lib is (${_SSL_LIB_ITD}) sync with (${_SSL_MODERN_ITD}) required\"\n    fi\n    _SSL_MODERN_INSTALL_REQUIRED=YES\n  fi\n\n  if [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ] \\\n    || [ \"${_SSL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    _SSL_LEGACY_INSTALL_REQUIRED=YES\n    _SSL_MODERN_INSTALL_REQUIRED=YES\n    _SSH_FORCE_REINSTALL=YES\n    _NGX_FORCE_REINSTALL=YES\n    _PHP_FORCE_REINSTALL=YES\n    rm -f ${_pthLog}/pure-ftpd-build-${_PURE_FTPD_VRN}-${_xSrl}-${_X_VERSION}.log\n    rm -f ${_pthLog}/mss-build-${_MSS_VRN}-${_xSrl}-${_X_VERSION}.log\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: OpenSSL rebuild forced\"\n    fi\n  fi\n\n  # Perform the legacy OpenSSL installation or upgrade if required\n  if [ \"${_SSL_LEGACY_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Upgrading Legacy OpenSSL to ${_OPENSSL_OLD_VRN}\"\n    fi\n    _ssl_install_src \"Legacy\"\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Legacy OpenSSL 
${_OPENSSL_OLD_VRN} is up-to-date.\"\n    fi\n  fi\n\n  # Perform the modern OpenSSL installation or upgrade if required\n  if [ \"${_OS_CODE}\" != \"stretch\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n    if [ \"${_SSL_MODERN_INSTALL_REQUIRED}\" = \"YES\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Upgrading Modern OpenSSL to ${_OPENSSL_NEW_VRN}\"\n      fi\n      _ssl_install_src \"Modern\"\n    else\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Modern OpenSSL ${_OPENSSL_NEW_VRN} is up-to-date.\"\n      fi\n    fi\n  fi\n}\n\n#\n# Detect, remove, and report broken symlinks\n_check_and_remove_broken_symlinks() {\n  local _dir=$1\n\n  # Find broken symlinks in the directory\n  _broken_symlinks=$(find \"${_dir}\" -maxdepth 1 -type l ! -exec test -e {} \\; -print)\n\n  if [ -n \"${_broken_symlinks}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"CLNP: Removing the following broken symlinks from ${_dir}:\"\n      _msg \"CLNP: ${_broken_symlinks}\"\n    fi\n\n    for _symlink in ${_broken_symlinks}; do\n      rm \"${_symlink}\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"CLNP: Removed broken symlink: ${_symlink}\"\n      fi\n    done\n\n    # Set the _ifAnySymlinksCleaned variable to true since we removed broken symlinks\n    _ifAnySymlinksCleaned=YES\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"CLNP: No broken symlinks found in ${_dir}\"\n    fi\n  fi\n}\n\n#\n# Check and move disallowed versions\n_check_and_move() {\n  local _dir=$1\n\n  # Determine the name of the backup subdirectory based on the source directory\n  local _backup_dir=\"${_backLegBase}$(echo \"${_dir}\" | tr '/' '_')\"\n\n  # Find any libcurl.so files in the directory, excluding the allowed version and those without a complete version number\n  _found_versions=$(find \"${_dir}\" -maxdepth 1 -type f -name \"libcurl.so.*\" ! 
-name \"${_allowedFile}\" | grep -E \"libcurl\\.so\\.[0-9]+\\.[0-9]+\\.[0-9]+$\")\n\n  if [ -n \"${_found_versions}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"CLNP: Moving the following disallowed versions from ${_dir} to ${_backup_dir}:\"\n      _msg \"CLNP: ${_found_versions}\"\n    fi\n\n    # Create the backup directory if it doesn't exist\n    mkdir -p \"${_backup_dir}\"\n\n    # Move each found version to the backup directory\n    for _file in ${_found_versions}; do\n      mv -f \"${_file}\" \"${_backup_dir}/\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"CLNP: Moved ${_file} to ${_backup_dir}/\"\n      fi\n    done\n\n    # Set the _ifAnyFilesCleaned variable to true since we moved files\n    _ifAnyFilesCleaned=YES\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"CLNP: Only the allowed version (${_allowedFile}) is present in ${_dir}\"\n    fi\n  fi\n}\n\n#\n# Make local OpenSSL new/legacy ssl/certs symlinked to system ssl/certs\n_sync_system_ssl_certs() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sync_system_ssl_certs\"\n  fi\n  if [ -e \"/etc/ssl/certs/ca-certificates.crt\" ] \\\n    && [ ! -e \"/usr/local/ssl3/.old-certs\" ] \\\n    && [ -d \"/usr/local/ssl3/certs\" ] \\\n    && [ ! -L \"/usr/local/ssl3/certs\" ]; then\n    mv -f /usr/local/ssl3/certs /usr/local/ssl3/.old-certs\n    ln -sfn /etc/ssl/certs /usr/local/ssl3/certs\n  fi\n  if [ -e \"/etc/ssl/certs/ca-certificates.crt\" ] \\\n    && [ ! -e \"/usr/local/ssl/.old-certs\" ] \\\n    && [ -d \"/usr/local/ssl/certs\" ] \\\n    && [ ! -L \"/usr/local/ssl/certs\" ]; then\n    mv -f /usr/local/ssl/certs /usr/local/ssl/.old-certs\n    ln -sfn /etc/ssl/certs /usr/local/ssl/certs\n  fi\n}\n\n#\n# Install cURL from sources.\n_curl_install_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _curl_install_src\"\n  fi\n  if ! 
command -v lsb_release &> /dev/null; then\n    apt-get update -qq &> /dev/null\n    apt-get install lsb-release ${_aptYesUnth} -qq &> /dev/null\n  fi\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  [ \"${_OS_CODE}\" = \"wheezy\" ] && _CURL_VRN=7.50.1\n  [ \"${_OS_CODE}\" = \"jessie\" ] && _CURL_VRN=7.71.1\n  [ \"${_OS_CODE}\" = \"stretch\" ] && _CURL_VRN=8.2.1\n\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"chimaera\" ] \\\n    || [ \"${_OS_CODE}\" = \"beowulf\" ] \\\n    || [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n    || [ \"${_OS_CODE}\" = \"bullseye\" ] \\\n    || [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _OPENSSL_NEW_VRN=\"${_OPENSSL_EOL_VRN}\"\n    if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n      _OPENSSL_NEW_VRN=\"${_OPENSSL_MODERN_VRN}\"\n    fi\n  elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n      chattr -i /root/.install.modern.openssl.cnf\n      rm -f /root/.install.modern.openssl.cnf\n    fi\n    _OPENSSL_NEW_VRN=\"${_OPENSSL_EOL_VRN}\"\n  else\n    if [ -e \"/root/.install.modern.openssl.cnf\" ]; then\n      chattr -i /root/.install.modern.openssl.cnf\n      rm -f /root/.install.modern.openssl.cnf\n    fi\n    _OPENSSL_NEW_VRN=\"${_OPENSSL_LEGACY_VRN}\"\n  fi\n\n  _CURL_INSTALL_REQUIRED=NO\n\n  _CURL_GET_DPKG=$(dpkg --get-selections | grep curl | grep 'hold$' 2>&1)\n  _CURL_ITD=$(curl --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  _CURL_SSL_ITD=$(curl --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f5 \\\n    | awk '{ print $1}' \\\n    | cut -d\"/\" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  _CURL_LIB_ITD=$(curl --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\" \" -f4 \\\n    | awk '{ print $1}' \\\n    | cut -d\"/\" -f2 \\\n    | awk '{ print $1}' 2>&1)\n\n  if [ \"${_CURL_ITD}\" != \"${_CURL_VRN}\" ]; 
then\n    _CURL_INSTALL_REQUIRED=YES\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installed cURL/lib ${_CURL_ITD}/${_CURL_LIB_ITD}, upgrade required\"\n    fi\n  fi\n\n  if [ \"${_CURL_LIB_ITD}\" != \"${_CURL_VRN}\" ]; then\n    _CURL_INSTALL_REQUIRED=YES\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installed cURL/lib ${_CURL_ITD}/${_CURL_LIB_ITD}, rebuild required\"\n    fi\n  fi\n\n  if [ \"${_CURL_SSL_ITD}\" != \"${_OPENSSL_NEW_VRN}\" ]; then\n    _CURL_INSTALL_REQUIRED=YES\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: cURL ${_CURL_ITD} with OpenSSL ${_CURL_SSL_ITD}, rebuild forced\"\n    fi\n  fi\n\n  _BROKEN_CURL_TEST=$(curl --version 2>&1)\n  if [[ \"${_BROKEN_CURL_TEST}\" =~ \"libcurl.so.4\" ]]; then\n    _CURL_INSTALL_REQUIRED=YES\n  fi\n\n  if [ \"${_PHP_BIN_BROKEN}\" = \"YES\" ] && [ -z \"${_CURL_ALREADY_REBUILT}\" ]; then\n    _CURL_INSTALL_REQUIRED=YES\n  fi\n\n  if [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    _CURL_INSTALL_REQUIRED=YES\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installed cURL version ${_CURL_ITD} but re-install forced\"\n    fi\n  fi\n\n  if [ \"${_OS_CODE}\" != \"jessie\" ] && [ \"${_OS_CODE}\" != \"stretch\" ]; then\n\n    # Target version\n    _allowedFile=\"libcurl.so.4.8.0\"\n\n    # Directories to check\n    _dirsToClean=(\"/usr/lib\" \"/usr/local/lib\" \"/usr/lib/x86_64-linux-gnu\")\n\n    # Backup base directory\n    _backLegBase=\"/var/backups/legacy-libcurl-${_X_VERSION}-${_NOW}\"\n\n    # Variable to track if any files were moved\n    _ifAnyFilesCleaned=NO\n\n    # Variable to track if any broken symlinks were found and removed\n    _ifAnySymlinksCleaned=NO\n\n    # Iterate over the directories and apply the _check_and_move function\n    for _dir in \"${_dirsToClean[@]}\"; do\n      _check_and_move \"${_dir}\"\n    done\n\n    # Iterate over the directories and apply the _check_and_remove_broken_symlinks function\n    for _dir 
in \"${_dirsToClean[@]}\"; do\n      _check_and_remove_broken_symlinks \"${_dir}\"\n    done\n\n    # Export the _ifAnyFilesCleaned variable for later use\n    export _ifAnyFilesCleaned\n\n    # Export the _ifAnySymlinksCleaned variable for later use\n    export _ifAnySymlinksCleaned\n\n  fi\n\n  if [ \"${_ifAnySymlinksCleaned}\" = \"YES\" ] \\\n    || [ \"${_ifAnyFilesCleaned}\" = \"YES\" ] \\\n    || [ \"${_CURL_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    _CURL_INSTALL_REQUIRED=YES\n    _bkLibcurlPre=\"/var/backups/legacy-libcurl-pre-${_CURL_VRN}-${_NOW}\"\n    mkdir -p ${_bkLibcurlPre}\n    mv -f /usr/lib/x86_64-linux-gnu/libcurl.so* ${_bkLibcurlPre}/ &> /dev/null\n    mv -f /usr/lib/x86_64-linux-gnu/libcurl.la ${_bkLibcurlPre}/ &> /dev/null\n    mv -f /usr/lib/x86_64-linux-gnu/libcurl.a ${_bkLibcurlPre}/ &> /dev/null\n  fi\n\n  if [ \"${_CURL_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    ###--------------------###\n    if [ -e \"/root/.use.curl.from.packages.cnf\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: cURL from packages is forced with /root/.use.curl.from.packages.cnf\"\n      fi\n      [ -e \"/root/.sorted.multi.php.cnf\" ] && rm -f /root/.sorted.multi.php.cnf\n      if [ -e \"/usr/local/bin/curl\" ]; then\n        mv -f /usr/local/bin/curl /usr/local/bin/curl-src-off-$(date +%y%m%d-%H%M%S)\n      fi\n      if [[ \"${_CURL_GET_DPKG}\" =~ \"hold\" ]]; then\n        echo \"curl install\" | dpkg --set-selections &> /dev/null\n        if [ -z \"${_CURL_ALREADY_REBUILT}\" ]; then\n          _apt_clean_update\n          # Check for libssl1.0-dev and remove conditionally\n          if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n            _mrun \"apt-get remove libssl1.0-dev -y --purge --auto-remove -qq\"\n          fi\n          _mrun \"apt-get autoremove -y\"\n          for _PKG in libssl-dev; do\n            if ! 
_pkg_installed \"${_PKG}\"; then\n              _mrun \"${_INSTAPP} ${_PKG}\"\n            fi\n          done\n          _mrun \"apt-get build-dep curl -y\"\n          _mrun \"${_INSTAPP} curl\"\n        fi\n      fi\n    else\n      _apt_clean_update\n      # Check for libssl1.0-dev and remove conditionally\n      if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n        _mrun \"apt-get remove libssl1.0-dev -y --purge --auto-remove -qq\"\n      fi\n      _mrun \"apt-get autoremove -y\"\n      for _PKG in libssl-dev; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n        fi\n      done\n      _mrun \"apt-get build-dep curl -y\"\n      cd /var/opt\n      rm -rf curl*\n      _get_dev_src_wget \"curl-${_CURL_VRN}.tar.gz\"\n      if [ -e \"/var/opt/curl-${_CURL_VRN}\" ]; then\n        _msg \"INFO: Building cURL ${_CURL_VRN} from sources...\"\n        _msg \"WAIT: This may take a while, please wait...\"\n        if [ -e \"/root/.install.modern.openssl.cnf\" ] \\\n          && [ -x \"/usr/local/ssl3/bin/openssl\" ]; then\n          _SSL_BINARY=/usr/local/ssl3/bin/openssl\n        else\n          _SSL_BINARY=/usr/local/ssl/bin/openssl\n        fi\n        _SSL_ITD=$(${_SSL_BINARY} version 2>&1 \\\n          | tr -d \"\\n\" \\\n          | cut -d\" \" -f2 \\\n          | awk '{ print $1}')\n        if [[ \"${_SSL_ITD}\" =~ \"${_OPENSSL_MODERN_VRN}\" ]] \\\n          || [ -e \"/usr/local/ssl3/lib64/libssl.so.3\" ]; then\n          _SSL_PATH=\"/usr/local/ssl3\"\n          _SSL_LIB_PATH=\"${_SSL_PATH}/lib64\"\n        else\n          _SSL_PATH=\"/usr/local/ssl\"\n          _SSL_LIB_PATH=\"${_SSL_PATH}/lib\"\n        fi\n        _PKG_CONFIG_PATH=\"${_SSL_LIB_PATH}/pkgconfig\"\n        if [ -e \"${_PKG_CONFIG_PATH}\" ]; then\n          cd /var/opt/curl-${_CURL_VRN}\n          _mrun \"LIBS=\\\"-ldl -lpthread\\\" PKG_CONFIG_PATH=\\\"${_PKG_CONFIG_PATH}\\\" ./configure \\\n            
--with-openssl \\\n            --with-zlib=/usr \\\n            --prefix=/usr/local\"\n          _mrun \"make -j $(nproc)\"\n          _mrun \"make install\"\n          ldconfig 2> /dev/null\n          if [ -x \"/usr/local/bin/curl\" ]; then\n            mv -f /usr/bin/curl /usr/bin/old-curl-$(date +%y%m%d-%H%M%S)\n            ln -sfn /usr/local/bin/curl /usr/bin/curl\n          fi\n          if [ ! -e \"${_SSL_PATH}/certs/ca-certificates.crt\" ]; then\n            cp -af /etc/ssl/certs/* ${_SSL_PATH}/certs/ &> /dev/null\n          fi\n          echo \"curl hold\" | dpkg --set-selections &> /dev/null\n        else\n          _msg \"OOPS: Building OpenSSL ${_OPENSSL_NEW_VRN} from sources failed!\"\n        fi\n      else\n        _msg \"OOPS: cURL ${_CURL_VRN} could not be extracted...\"\n      fi\n      if [ -x \"/usr/local/bin/curl\" ] \\\n        && [ ! -e \"/var/log/boa/.curl_libs_fix_${_CURL_VRN}-${_X_VERSION}-${_NOW}.pid\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Updating cURL ${_CURL_VRN} libs symlinks...\"\n        fi\n        if [ -e \"/usr/local/lib/libcurl.so.4.8.0\" ]; then\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/libcurl.so\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/libcurl.so.4\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/libcurl.so.4.8.0\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/x86_64-linux-gnu/libcurl.so\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/x86_64-linux-gnu/libcurl.so.4\n          ln -sfn /usr/local/lib/libcurl.so.4.8.0 /usr/lib/x86_64-linux-gnu/libcurl.so.4.8.0\n        fi\n        if [ -e \"/usr/local/lib/libcurl.a\" ]; then\n          ln -sfn /usr/local/lib/libcurl.a /usr/lib/x86_64-linux-gnu/libcurl.a\n          ln -sfn /usr/local/lib/libcurl.a /usr/lib/libcurl.a\n        fi\n        if [ -e \"/usr/local/lib/libcurl.la\" ]; then\n          ln -sfn /usr/local/lib/libcurl.la /usr/lib/x86_64-linux-gnu/libcurl.la\n    
      ln -sfn /usr/local/lib/libcurl.la /usr/lib/libcurl.la\n        fi\n      fi\n      ldconfig 2> /dev/null\n      touch /var/log/boa/.curl_libs_fix_${_CURL_VRN}-${_X_VERSION}-${_NOW}.pid &> /dev/null\n      _CURL_ALREADY_REBUILT=YES\n      if [ -e \"/usr/local/include/curl/curl.h\" ] \\\n        && [ -e \"/usr/local/include/curl/easy.h\" ] \\\n        && [ -d \"/usr/include/x86_64-linux-gnu/curl\" ] \\\n        && [ ! -L \"/usr/include/x86_64-linux-gnu/curl\" ]; then\n        if dpkg-query -W -f='${Status}' libcurl4-openssl-dev 2>/dev/null | grep -q \"install ok installed\"; then\n          _mrun \"apt-get remove libcurl4-openssl-dev -y --purge --auto-remove -qq\"\n        fi\n        ln -sfn /usr/local/include/curl /usr/include/x86_64-linux-gnu/curl\n        ldconfig 2> /dev/null\n      fi\n    fi\n  else\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: cURL version detected ${_CURL_ITD}, OK\"\n    fi\n  fi\n}\n\n#\n# Symlink to dash.\n_symlink_to_dash() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _symlink_to_dash\"\n  fi\n  if [ -x \"/usr/bin/dash\" ] && [ ! -L \"/usr/bin/dash\" ]; then\n    if [ -L \"/usr/bin/sh\" ]; then\n      ln -sfn /usr/bin/dash /usr/bin/sh\n    fi\n    if [ -L \"/bin/sh\" ]; then\n      ln -sfn /usr/bin/dash /bin/sh\n    fi\n  fi\n  if [ -x \"/bin/dash\" ] && [ ! -L \"/bin/dash\" ]; then\n    if [ -L \"/usr/bin/sh\" ]; then\n      ln -sfn /bin/dash /usr/bin/sh\n    fi\n    if [ -L \"/bin/sh\" ]; then\n      ln -sfn /bin/dash /bin/sh\n    fi\n  fi\n}\n\n#\n# Symlink to bash.\n_symlink_to_bash() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _symlink_to_bash\"\n  fi\n  if [ -x \"/usr/bin/bash\" ] && [ ! -L \"/usr/bin/bash\" ]; then\n    if [ -L \"/usr/bin/sh\" ]; then\n      ln -sfn /usr/bin/bash /usr/bin/sh\n    fi\n    if [ -L \"/bin/sh\" ]; then\n      ln -sfn /usr/bin/bash /bin/sh\n    fi\n  fi\n  if [ -x \"/bin/bash\" ] && [ ! 
-L \"/bin/bash\" ]; then\n    if [ -L \"/usr/bin/sh\" ]; then\n      ln -sfn /bin/bash /usr/bin/sh\n    fi\n    if [ -L \"/bin/sh\" ]; then\n      ln -sfn /bin/bash /bin/sh\n    fi\n  fi\n}\n\n#\n# Switch to dash.\n_switch_to_dash() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _switch_to_dash\"\n  fi\n  if [ -x \"/usr/bin/dash\" ] && [ ! -e \"/bin/dash\" ]; then\n    cp -a /usr/bin/dash /bin/\n  fi\n  if [ -x \"/bin/dash\" ]; then\n    sed -i \"s/:\\/bin\\/sh/:\\/bin\\/dash/g\"  /etc/passwd &> /dev/null\n    sed -i \"s/\\/bin\\/sh/\\/bin\\/dash/g\" /etc/crontab &> /dev/null\n    _X_BIN_PATHS=\"/bin \\\n                  /etc/alternatives \\\n                  /etc/cron.d \\\n                  /etc/cron.daily \\\n                  /etc/cron.monthly \\\n                  /etc/cron.weekly \\\n                  /etc/init.d \\\n                  /etc/network/if-down.d \\\n                  /etc/network/if-up.d \\\n                  /etc/resolvconf/update.d \\\n                  /etc/webmin \\\n                  /lib/apparmor \\\n                  /opt/php56/bin \\\n                  /opt/php70/bin \\\n                  /opt/php71/bin \\\n                  /opt/php72/bin \\\n                  /opt/php73/bin \\\n                  /opt/php74/bin \\\n                  /opt/php80/bin \\\n                  /opt/php81/bin \\\n                  /opt/php82/bin \\\n                  /opt/php83/bin \\\n                  /opt/php84/bin \\\n                  /opt/php85/bin \\\n                  /sbin \\\n                  /usr/bin \\\n                  /usr/lib/ConsoleKit/run-session.d \\\n                  /usr/lib/git-core \\\n                  /usr/lib/postfix \\\n                  /usr/lib/sysstat \\\n                  /usr/local/bin \\\n                  /usr/local/libexec/git-core \\\n                  /usr/local/sbin \\\n                  /usr/sbin\"\n    for p in ${_X_BIN_PATHS}; do\n      if [ -e \"$p\" ]; then\n        for f in `find $p ! -perm -4000 ! 
-perm -2000 -type f`; do\n          if [[ \"$f\" =~ \"drush\"($) ]] \\\n            || [[ \"$f\" =~ \"clean-boa-env\"($) ]] \\\n            || [[ \"$f\" =~ \"dash\"($) ]] \\\n            || [[ \"$f\" =~ \"bash\"($) ]] \\\n            || [[ \"$f\" =~ \"ssh\"($) ]] \\\n            || [[ \"$f\" =~ \"sshd\"($) ]] \\\n            || [[ \"$f\" =~ \"websh\"($) ]]; then\n            _SKIP_THIS=YES\n          else\n            _SHELL_TEST=\n            _SHELL_TEST=$(grep -I -o \"\\#\\!.*/bin/sh\" $f 2>&1)\n            if [[ \"${_SHELL_TEST}\" =~ \"/bin/sh\" ]] \\\n              && [ \"$f\" != \"/etc/init.d/clean-boa-env\" ] \\\n              && [ \"$f\" != \"/bin/websh\" ] \\\n              && [ \"$f\" != \"/usr/bin/websh\" ] \\\n              && [ \"$f\" != \"/opt/local/bin/websh\" ]; then\n              sed -i \"s/\\/bin\\/sh/\\/bin\\/dash/g\" $f &> /dev/null\n              wait\n            fi\n            _SHELL_TEST=\n            _SHELL_TEST=$(grep -I -o \"\\#\\!.*/usr/bin/sh\" $f 2>&1)\n            if [[ \"${_SHELL_TEST}\" =~ \"/bin/sh\" ]] \\\n              && [ \"$f\" != \"/etc/init.d/clean-boa-env\" ] \\\n              && [ \"$f\" != \"/bin/websh\" ] \\\n              && [ \"$f\" != \"/usr/bin/websh\" ] \\\n              && [ \"$f\" != \"/opt/local/bin/websh\" ]; then\n              sed -i \"s/\\/usr\\/bin\\/sh/\\/bin\\/dash/g\" $f &> /dev/null\n              wait\n            fi\n          fi\n        done\n      fi\n    done\n  fi\n}\n\n#\n# Switch to bash.\n_switch_to_bash() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _switch_to_bash\"\n  fi\n  if [ -x \"/usr/bin/bash\" ] && [ ! 
-e \"/bin/bash\" ]; then\n    cp -a /usr/bin/bash /bin/\n  fi\n  if [ -x \"/bin/bash\" ]; then\n    sed -i \"s/:\\/bin\\/sh/:\\/bin\\/bash/g\"  /etc/passwd &> /dev/null\n    sed -i \"s/\\/bin\\/sh/\\/bin\\/bash/g\" /etc/crontab &> /dev/null\n    _X_BIN_PATHS=\"/bin \\\n                  /etc/alternatives \\\n                  /etc/cron.d \\\n                  /etc/cron.daily \\\n                  /etc/cron.monthly \\\n                  /etc/cron.weekly \\\n                  /etc/init.d \\\n                  /etc/network/if-down.d \\\n                  /etc/network/if-up.d \\\n                  /etc/resolvconf/update.d \\\n                  /etc/webmin \\\n                  /lib/apparmor \\\n                  /opt/php56/bin \\\n                  /opt/php70/bin \\\n                  /opt/php71/bin \\\n                  /opt/php72/bin \\\n                  /opt/php73/bin \\\n                  /opt/php74/bin \\\n                  /opt/php80/bin \\\n                  /opt/php81/bin \\\n                  /opt/php82/bin \\\n                  /opt/php83/bin \\\n                  /opt/php84/bin \\\n                  /opt/php85/bin \\\n                  /sbin \\\n                  /usr/bin \\\n                  /usr/lib/ConsoleKit/run-session.d \\\n                  /usr/lib/git-core \\\n                  /usr/lib/postfix \\\n                  /usr/lib/sysstat \\\n                  /usr/local/bin \\\n                  /usr/local/libexec/git-core \\\n                  /usr/local/sbin \\\n                  /usr/sbin\"\n    for p in ${_X_BIN_PATHS}; do\n      if [ -e \"$p\" ]; then\n        for f in `find $p ! -perm -4000 ! 
-perm -2000 -type f`; do\n          if [[ \"$f\" =~ \"drush\"($) ]] \\\n            || [[ \"$f\" =~ \"clean-boa-env\"($) ]] \\\n            || [[ \"$f\" =~ \"dash\"($) ]] \\\n            || [[ \"$f\" =~ \"bash\"($) ]] \\\n            || [[ \"$f\" =~ \"ssh\"($) ]] \\\n            || [[ \"$f\" =~ \"sshd\"($) ]] \\\n            || [[ \"$f\" =~ \"websh\"($) ]]; then\n            _SKIP_THIS=YES\n          else\n            _SHELL_TEST=\n            _SHELL_TEST=$(grep -I -o \"\\#\\!.*/bin/sh\" $f 2>&1)\n            if [[ \"${_SHELL_TEST}\" =~ \"/bin/sh\" ]] \\\n              && [ \"$f\" != \"/etc/init.d/clean-boa-env\" ] \\\n              && [ \"$f\" != \"/bin/websh\" ] \\\n              && [ \"$f\" != \"/usr/bin/websh\" ] \\\n              && [ \"$f\" != \"/opt/local/bin/websh\" ]; then\n              sed -i \"s/\\/bin\\/sh/\\/bin\\/bash/g\" $f &> /dev/null\n              wait\n            fi\n            _SHELL_TEST=\n            _SHELL_TEST=$(grep -I -o \"\\#\\!.*/usr/bin/sh\" $f 2>&1)\n            if [[ \"${_SHELL_TEST}\" =~ \"/bin/sh\" ]] \\\n              && [ \"$f\" != \"/etc/init.d/clean-boa-env\" ] \\\n              && [ \"$f\" != \"/bin/websh\" ] \\\n              && [ \"$f\" != \"/usr/bin/websh\" ] \\\n              && [ \"$f\" != \"/opt/local/bin/websh\" ]; then\n              sed -i \"s/\\/usr\\/bin\\/sh/\\/bin\\/bash/g\" $f &> /dev/null\n              wait\n            fi\n          fi\n        done\n      fi\n    done\n  fi\n}\n\n#\n# Strict Permissions on All Binaries.\n_strict_bin_permissions() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _strict_bin_permissions\"\n  fi\n\n  _BIN_PATHS=\"/bin \\\n              /etc/alternatives \\\n              /opt/php56/bin \\\n              /opt/php70/bin \\\n              /opt/php71/bin \\\n              /opt/php72/bin \\\n              /opt/php73/bin \\\n              /opt/php74/bin \\\n              /opt/php80/bin \\\n              /opt/php81/bin \\\n              /opt/php82/bin \\\n            
  /opt/php83/bin \\\n              /opt/php84/bin \\\n              /opt/php85/bin \\\n              /sbin \\\n              /usr/bin \\\n              /usr/local/bin \\\n              /usr/local/sbin \\\n              /usr/sbin\"\n\n  for p in ${_BIN_PATHS}; do\n    if [ -e \"$p\" ]; then\n      chown root:root $p &> /dev/null\n      chmod 711 $p &> /dev/null\n    fi\n  done\n  for p in ${_BIN_PATHS}; do\n    if [ -e \"$p\" ]; then\n      for f in `find $p -group users ! -perm -4000 ! -perm -2000 -type f`; do\n        chgrp root $f &> /dev/null\n        chmod 750 $f &> /dev/null\n      done\n    fi\n  done\n  for p in ${_BIN_PATHS}; do\n    if [ -e \"$p\" ]; then\n      for f in `find $p -group lshellg ! -perm -4000 ! -perm -2000 -type f`; do\n        chgrp root $f &> /dev/null\n        chmod 750 $f &> /dev/null\n      done\n    fi\n  done\n  for p in ${_BIN_PATHS}; do\n    if [ -e \"$p\" ]; then\n      for f in `find $p -group www-data ! -perm -4000 ! -perm -2000 -type f`; do\n        chgrp root $f &> /dev/null\n        chmod 750 $f &> /dev/null\n      done\n    fi\n  done\n  for p in ${_BIN_PATHS}; do\n    if [ -e \"$p\" ]; then\n      for f in `find $p -group root ! -perm -4000 ! -perm -2000 -type f`; do\n        chgrp users $f &> /dev/null\n        chmod 750 $f &> /dev/null\n      done\n    fi\n  done\n  for p in ${_BIN_PATHS}; do\n    if [ -e \"$p\" ]; then\n      for f in `find $p -group staff ! -perm -4000 ! -perm -2000 -type f`; do\n        chgrp users $f &> /dev/null\n        chmod 750 $f &> /dev/null\n      done\n    fi\n  done\n\n  _WEBSERVER_BIN_PATHS=\"/bin \\\n                        /etc/alternatives \\\n                        /sbin \\\n                        /usr/bin \\\n                        /usr/local/bin \\\n                        /usr/local/sbin \\\n                        /usr/sbin\"\n\n  for p in ${_WEBSERVER_BIN_PATHS}; do\n    for f in `find $p ! -perm -4000 ! 
-perm -2000 -type f | grep pdf`; do\n      if [ -e \"$f\" ]; then\n        chgrp root $f &> /dev/null\n        chmod 755 $f &> /dev/null\n      fi\n    done\n  done\n\n  _BACKEND_ITEMS=\"advdef \\\n                  advpng \\\n                  apt \\\n                  apt-cache \\\n                  apt-config \\\n                  apt-get \\\n                  apt-key \\\n                  apt-listchanges \\\n                  apt-mark \\\n                  aptitude \\\n                  avconv \\\n                  bash \\\n                  chromium \\\n                  clambc \\\n                  clamconf \\\n                  clamd \\\n                  clamdscan \\\n                  clamdtop \\\n                  clamscan \\\n                  clamsubmit \\\n                  compass \\\n                  composer \\\n                  convert \\\n                  curl \\\n                  dash \\\n                  env \\\n                  ffmpeg \\\n                  ffprobe \\\n                  flvtool2 \\\n                  freshclam \\\n                  gem \\\n                  git \\\n                  gpg \\\n                  gpgv \\\n                  gpgv1 \\\n                  gpgv2 \\\n                  gs \\\n                  id \\\n                  java \\\n                  java11 \\\n                  java17 \\\n                  java21 \\\n                  java6 \\\n                  java7 \\\n                  java8 \\\n                  jpegoptim \\\n                  jpegtran \\\n                  logger \\\n                  magick \\\n                  man-db \\\n                  mongo \\\n                  mongod \\\n                  mongodump \\\n                  mongoexport \\\n                  mongofiles \\\n                  mongoimport \\\n                  mongooplog \\\n                  mongoperf \\\n                  mongorestore \\\n                  mongos \\\n                  mongosniff \\\n                
  mongostat \\\n                  mongotop \\\n                  newrelic-daemon \\\n                  node \\\n                  npm \\\n                  npx \\\n                  nrsysmond \\\n                  optipng \\\n                  pngcrush \\\n                  pngquant \\\n                  redis-server \\\n                  rrdtool \\\n                  ruby \\\n                  sass \\\n                  sass-convert \\\n                  scss \\\n                  valkey-server \\\n                  wget \\\n                  which \\\n                  wkhtmltoimage \\\n                  wkhtmltoimage-0.12.4 \\\n                  wkhtmltopdf-0.12.4 \\\n                  wkhtmltopdf\"\n\n  if [ ! -z \"${_BACKEND_ITEMS_LIST}\" ]; then\n    _BACKEND_ITEMS=\"${_BACKEND_ITEMS} ${_BACKEND_ITEMS_LIST}\"\n  fi\n  for i in ${_BACKEND_ITEMS}; do\n    # type -ap prints *all* matching paths (one per line); resolve symlinks; de-dup\n    type -ap -- \"${i}\" 2>/dev/null \\\n      | while IFS= read -r _BIN_ITEM; do\n          readlink -f -- \"${_BIN_ITEM}\" 2>/dev/null || echo \"${_BIN_ITEM}\"\n        done \\\n      | sort -u \\\n      | while IFS= read -r _BIN_ITEM; do\n          if [ -e \"${_BIN_ITEM}\" ]; then\n            chgrp root ${_BIN_ITEM} &> /dev/null\n            chmod 755 ${_BIN_ITEM} &> /dev/null\n          fi\n        done\n  done\n\n  _PROTECTED_ITEMS=\"backboa \\\n                    barracuda \\\n                    boa \\\n                    clamconf \\\n                    fix-drupal-platform-ownership.sh \\\n                    fix-drupal-platform-permissions.sh \\\n                    fix-drupal-site-ownership.sh \\\n                    fix-drupal-site-permissions.sh \\\n                    named \\\n                    octopus \\\n                    redis-benchmark \\\n                    redis-check-aof \\\n                    redis-check-dump \\\n                    redis-cli \\\n                    rkhunter \\\n                    
sftp-admin \\\n                    sftp-kill \\\n                    sftp-state \\\n                    sftp-user \\\n                    sftp-verif \\\n                    sftp-who \\\n                    syncpass \\\n                    valkey-benchmark \\\n                    valkey-check-aof \\\n                    valkey-check-dump \\\n                    valkey-cli\"\n\n  for i in ${_PROTECTED_ITEMS}; do\n    # type -ap prints *all* matching paths (one per line); resolve symlinks; de-dup\n    type -ap -- \"${i}\" 2>/dev/null \\\n      | while IFS= read -r _BIN_ITEM; do\n          readlink -f -- \"${_BIN_ITEM}\" 2>/dev/null || echo \"${_BIN_ITEM}\"\n        done \\\n      | sort -u \\\n      | while IFS= read -r _BIN_ITEM; do\n          if [ -e \"${_BIN_ITEM}\" ]; then\n            chown root:root \"${_BIN_ITEM}\" &> /dev/null\n            chmod 700 \"${_BIN_ITEM}\" &> /dev/null\n          fi\n        done\n  done\n\n  if [ -e \"/usr/bin/mysecureshell\" ]; then\n    chown root:root /usr/bin/mysecureshell\n    chmod 4755 /usr/bin/mysecureshell\n  fi\n\n  if [ -e \"/usr/bin/redis-server\" ]; then\n    chown root:root /usr/bin/redis-server\n    chmod 755 /usr/bin/redis-server\n  fi\n\n  if [ -e \"/bin/ping\" ]; then\n    _PING_TEST=$(ls -la /bin/ping | grep rwsr-xr-x 2>&1)\n    if [ -z \"${_PING_TEST}\" ]; then\n      chown root:root /bin/ping\n      chmod 4755 /bin/ping\n    fi\n  fi\n\n  cp -af ${_bldPth}/aegir/helpers/websh.sh.txt /opt/local/bin/websh\n  chmod 755 /opt/local/bin/websh\n  chown root:root /opt/local/bin/websh\n  chown root:root /etc/passwd*\n  chmod 644 /etc/passwd*\n  chown root:shadow /etc/shadow*\n  chmod 640 /etc/shadow*\n}\n\n#\n# Turn Off AppArmor temporarily.\n_turn_off_apparmor_temporarily() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _turn_off_apparmor_temporarily\"\n  fi\n  _isAppArmOn=N\n  if [ -e \"/sys/module/apparmor/parameters/enabled\" ]; then\n    _isAppArmOn=\"$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null | tr -d '\\n')\"\n  fi\n  if [ \"${_isAppArmOn}\" = \"Y\" ]; then\n    if [ \"${_DEBUG_MODE}\" = 
\"YES\" ]; then\n      _msg \"ARMR: Turning off AppArmor temporarily...\"\n    fi\n    rm -rf /var/cache/apparmor/* &> /dev/null\n    apparmor_parser -r /etc/apparmor.d/* &> /dev/null\n    aa-complain /etc/apparmor.d/* &> /dev/null\n    service apparmor stop &> /dev/null\n    aa-teardown &> /dev/null\n    service auditd stop &> /dev/null\n  fi\n}\n\n#\n# Enforce AppArmor profiles.\n_if_enforce_apparmor_profiles() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_enforce_apparmor_profiles\"\n  fi\n  if [ \"${_isAppArmOn}\" = \"Y\" ] || [ -d \"/etc/apparmor.d\" ]; then\n    if [ -e \"/root/.enforce.apparmor.cnf\" ]; then\n      _AA_TEST_A=$(aa-status | grep \"profiles are loaded\" 2>&1)\n      _AA_TEST_B=$(aa-status | grep \"0 processes are in complain mode\" 2>&1)\n      _AA_TEST_C=$(aa-status | grep \"0 profiles are in unconfined mode\" 2>&1)\n      _AA_TEST_D=$(aa-status | grep \"0 processes are unconfined\" 2>&1)\n      _AA_TEST_E=$(aa-status | grep \"processes are unconfined but have a profile defined\" 2>&1)\n      if [ -z \"${_AA_TEST_A}\" ] \\\n        || [ -z \"${_AA_TEST_B}\" ] \\\n        || [ -z \"${_AA_TEST_C}\" ] \\\n        || [ -z \"${_AA_TEST_D}\" ] \\\n        || [ ! 
-z \"${_AA_TEST_E}\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"ARMR: Restarting services to switch them to the enforced mode\"\n        fi\n        _mrun \"bash /var/xdrago/move_sql.sh\"\n        wait\n        if [ -e \"/etc/init.d/valkey-server\" ]; then\n          _mrun \"service valkey-server reload\"\n        elif [ -e \"/etc/init.d/redis-server\" ]; then\n          _mrun \"service redis-server reload\"\n        fi\n        _mrun \"pkill -u unbound -x unbound\"\n        _mrun \"killall pure-ftpd\"\n        _mrun \"killall rsyslogd\"\n        _mrun \"service rsyslog restart\"\n      fi\n    fi\n  fi\n}\n\n#\n# Sync AppArmor profiles.\n_sync_apparmor_profiles() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sync_apparmor_profiles\"\n  fi\n  if [ \"${_isAppArmOn}\" = \"Y\" ] || [ -d \"/etc/apparmor.d\" ]; then\n    _msg \"ARMR: All profiles will be updated now...\"\n    if [ -d \"/etc/apparmor.d\" ]; then\n      _backArmr=\"/var/backups/apparmor/\"\n      [ ! 
-e \"${_backArmr}\" ] && mkdir -p ${_backArmr}\n      _profArmList=\"avahi \\\n                    cacher \\\n                    dnsmasq \\\n                    dovecot \\\n                    dpkg \\\n                    haveged \\\n                    identd \\\n                    irssi \\\n                    klogd \\\n                    lsb_release \\\n                    mdnsd \\\n                    modprobe \\\n                    nmbd \\\n                    nscd \\\n                    php-fpm \\\n                    pidgin \\\n                    samba \\\n                    scanner \\\n                    smb \\\n                    syslog \\\n                    tcpdump \\\n                    totem \\\n                    traceroute \\\n                    usr.sbin.sshd\"\n      for _profArm in ${_profArmList}; do\n        mv -f /etc/apparmor.d/*${_profArm}* ${_backArmr} &> /dev/null\n      done\n      cp -af ${_locCnf}/apparmor/* /etc/apparmor.d/\n      if [ -d \"/etc/apparmor.d/disable\" ]; then\n        cp -af /etc/apparmor.d/*ssh* /etc/apparmor.d/disable/ &> /dev/null\n      fi\n    fi\n    if [ ! -e \"/root/.run-to-excalibur.cnf\" ] \\\n      && [ ! -e \"/root/.run-to-daedalus.cnf\" ] \\\n      && [ ! -e \"/root/.run-to-chimaera.cnf\" ] \\\n      && [ ! -e \"/root/.run-to-beowulf.cnf\" ]; then\n      _msg \"WAIT: This may take a while, please wait...\"\n    fi\n    for _proFile in `find /etc/apparmor.d/ -maxdepth 1 -mindepth 1 -type f | sort`; do\n      _profName=$(echo ${_proFile} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n      if [ ! -z \"${_isAppArmorComplain}\" ] \\\n        && [ -x \"${_isAppArmorComplain}\" ]; then\n        if [ ! -e \"/root/.run-to-excalibur.cnf\" ] \\\n          && [ ! -e \"/root/.run-to-daedalus.cnf\" ] \\\n          && [ ! -e \"/root/.run-to-chimaera.cnf\" ] \\\n          && [ ! 
-e \"/root/.run-to-beowulf.cnf\" ]; then\n          if [ \"${_isAppArmOn}\" = \"Y\" ]; then\n            _mrun \"apparmor_parser -Q /etc/apparmor.d/${_profName}\"\n          fi\n        fi\n      fi\n      chmod 644 ${_proFile}\n    done\n    if [ -e \"/etc/audit/auditd.conf\" ]; then\n      if [ -e \"/root/.disable.auditd.logs.cnf\" ]; then\n        sed -i \"s/^write_logs =.*/write_logs = no/g\" /etc/audit/auditd.conf\n      else\n        sed -i \"s/^write_logs =.*/write_logs = yes/g\" /etc/audit/auditd.conf\n      fi\n      _AUDITD_CNF_LOC_TEST=$(grep \"local_events\" /etc/audit/auditd.conf 2>&1)\n      if [[ ! \"${_AUDITD_CNF_LOC_TEST}\" =~ \"local_events\" ]]; then\n        echo \"local_events = yes\" >> /etc/audit/auditd.conf\n      fi\n      _AUDITD_CNF_WRT_TEST=$(grep \"write_logs\" /etc/audit/auditd.conf 2>&1)\n      if [[ ! \"${_AUDITD_CNF_WRT_TEST}\" =~ \"write_logs\" ]]; then\n        echo \"write_logs = yes\" >> /etc/audit/auditd.conf\n      fi\n    fi\n    if [ \"${_isAppArmOn}\" = \"Y\" ]; then\n      if [ -e \"/root/.activate.apparmor.cnf\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"ARMR: Activating AppArmor in the complain mode\"\n        fi\n        rm -rf /var/cache/apparmor/* &> /dev/null\n        if [ -x \"/etc/init.d/apparmor\" ]; then\n          service apparmor restart &> /dev/null\n        fi\n        if [ -x \"/etc/init.d/auditd\" ]; then\n          service auditd restart &> /dev/null\n        fi\n        apparmor_parser -r /etc/apparmor.d/* &> /dev/null\n        aa-complain /etc/apparmor.d/* &> /dev/null\n      elif [ -e \"/root/.enforce.apparmor.cnf\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"ARMR: Activating AppArmor in the enforced mode\"\n        fi\n        rm -rf /var/cache/apparmor/* &> /dev/null\n        if [ -x \"/etc/init.d/apparmor\" ]; then\n          service apparmor restart &> /dev/null\n        fi\n        if [ -x \"/etc/init.d/auditd\" ]; then\n          
service auditd restart &> /dev/null\n        fi\n        apparmor_parser -r /etc/apparmor.d/* &> /dev/null\n        aa-enforce /etc/apparmor.d/* &> /dev/null\n        if [ \"${_OS_CODE}\" != \"stretch\" ] && [ \"${_OS_CODE}\" != \"jessie\" ]; then\n          _if_enforce_apparmor_profiles\n        fi\n      elif [ -e \"/root/.disable.apparmor.cnf\" ] \\\n        || [ ! -e \"/root/.activate.apparmor.cnf\" ] \\\n        || [ ! -e \"/root/.enforce.apparmor.cnf\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"ARMR: Deactivating AppArmor using aa-teardown\"\n        fi\n        rm -rf /var/cache/apparmor/* &> /dev/null\n        apparmor_parser -r /etc/apparmor.d/* &> /dev/null\n        aa-complain /etc/apparmor.d/* &> /dev/null\n        if [ -x \"/etc/init.d/apparmor\" ]; then\n          service apparmor stop &> /dev/null\n          update-rc.d -f apparmor remove &> /dev/null\n        fi\n        if [ -x \"/etc/init.d/auditd\" ]; then\n          service auditd stop &> /dev/null\n          update-rc.d -f auditd remove &> /dev/null\n        fi\n        if [ -x \"/usr/sbin/aa-teardown\" ]; then\n          aa-teardown &> /dev/null\n        fi\n      fi\n    fi\n  fi\n}\n\n#\n# Manage AppArmor.\n_if_install_apparmor() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_apparmor\"\n  fi\n  ###--------------------###\n  _isAppArmorRevFile=\"/sys/kernel/security/apparmor/revision\"\n  _isAppArmorGrubFile=\"/etc/default/grub.d/apparmor.cfg\"\n  _isAppArmorComplain=\"$(which aa-complain)\"\n  _isAppArmorStatus=\"$(which aa-status)\"\n  _isAppArmorGrub=NO\n  if [ -e \"${_isAppArmorGrubFile}\" ]; then\n    _GRUB_APPARMOR_TEST=$(grep \"apparmor=1 security=apparmor\" ${_isAppArmorGrubFile})\n    if [[ \"${_GRUB_APPARMOR_TEST}\" =~ \"apparmor=1 security=apparmor\" ]]; then\n      _isAppArmorGrub=YES\n    fi\n  fi\n  if [ ! -e \"${_isAppArmorRevFile}\" ] \\\n    || [ ! 
-e \"${_isAppArmorGrubFile}\" ] \\\n    || [ -z \"${_isAppArmorComplain}\" ] \\\n    || [ -z \"${_isAppArmorStatus}\" ] \\\n    || [ \"${_isAppArmorGrub}\" = \"NO\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"ARMR: The system AppArmor and auditd will be installed now, however...\"\n      _msg \"ARMR: ...they will become active only after you reboot this server\"\n      _msg \"ARMR: Use 'boa reboot' command for optimized reboot\"\n      _msg \"WAIT: This may take a while, please wait...\"\n    fi\n    for _PKG in auditd apparmor apparmor-utils apparmor-notify apparmor-profiles; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    mkdir -p /etc/default/grub.d\n    echo 'GRUB_CMDLINE_LINUX_DEFAULT=\"$GRUB_CMDLINE_LINUX_DEFAULT apparmor=1 security=apparmor\"' \\\n      | tee ${_isAppArmorGrubFile} &> /dev/null\n    _mrun \"update-grub\"\n  fi\n  _isAppArmOn=N\n  if [ -e \"/sys/module/apparmor/parameters/enabled\" ]; then\n    _isAppArmOn=\"$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null | tr -d '\\n')\"\n  fi\n  if [ \"${_isAppArmOn}\" = \"Y\" ]; then\n    _sync_apparmor_profiles\n    [ -e \"/root/.turn_off_apparmor_in_octopus.cnf\" ] && rm -f /root/.turn_off_apparmor_in_octopus.cnf\n  fi\n}\n\n#\n# Remove AppArmor.\n_if_remove_apparmor() {\n  _isAppArmorGrubFile=\"/etc/default/grub.d/apparmor.cfg\"\n  _isApparmor=\"$(which aa-status)\"\n  if [ ! 
-z \"${_isApparmor}\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"ARMR: The system AppArmor and auditd will be uninstalled now, however...\"\n      _msg \"ARMR: ...they will become fully inactive only after you reboot this server\"\n      _msg \"ARMR: Use 'boa reboot' command for optimized reboot\"\n      _msg \"WAIT: This may take a while, please wait...\"\n    fi\n    _turn_off_apparmor_temporarily\n    for _PKG in auditd apparmor apparmor-utils apparmor-notify apparmor-profiles apparmor-profiles-extra; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove ${_PKG} -y -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    mkdir -p /etc/default/grub.d\n    echo 'GRUB_CMDLINE_LINUX_DEFAULT=\"$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0\"' \\\n      | tee ${_isAppArmorGrubFile} &> /dev/null\n    _mrun \"update-grub\"\n  fi\n}\n\n#\n# Final cleanup.\n_finale() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _finale\"\n  fi\n  ###--------------------###\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _L_ST=\"install\"\n    touch ${_pthLog}/SA-CORE-2014-005-fixed-d7.log\n  else\n    _L_ST=\"upgrade\"\n  fi\n  _msg \"CARD: Now charging your credit card for this magic show...\"\n  sleep 2\n  _msg \"CARD: It will take a moment to process your payment...\"\n  sleep 5\n  _msg \"JOKE: Just kidding !!! 
Enjoy your Ægir Hosting System :)\"\n  sleep 3\n  echo \" \"\n  _KERNEL_UP=NO\n  if [ -e \"/run/reboot-required.pkgs\" ]; then\n    _T_KERNEL_TEST=$(grep linux /run/reboot-required.pkgs 2>&1)\n    if [[ \"${_T_KERNEL_TEST}\" =~ \"linux\" ]]; then\n      _KERNEL_UP=YES\n    fi\n  fi\n  if [ \"${_UP_LNX}\" = \"YES\" ] || [ \"${_KERNEL_UP}\" = \"YES\" ]; then\n    _msg \"NOTE: Your OS kernel has been upgraded\"\n    _msg \"NOTE: Please reboot this server to activate the new kernel\"\n    _msg \"NOTE: Use 'boa reboot' command for optimized reboot\"\n    echo \" \"\n    sleep 8\n  fi\n  _msg \"Final post-${_L_ST} cleaning, one moment...\"\n  _barracuda_cnf_cleanup\n  if [ -e \"/var/log/barracuda_log.txt\" ]; then\n    _T_SERIES_TEST=$(cat /var/log/barracuda_log.txt 2>&1)\n    if [[ \"${_T_SERIES_TEST}\" =~ \"BOA-4.\" ]] \\\n      || [[ \"${_T_SERIES_TEST}\" =~ \"BOA-3.\" ]] \\\n      || [[ \"${_T_SERIES_TEST}\" =~ \"BOA-2.4.\" ]] \\\n      || [[ \"${_T_SERIES_TEST}\" =~ \"BOA-2.3.8\" ]]; then\n      _DO_NOTHING=YES\n    else\n      _DO_NOTHING=YES\n      # _fix_core_dgd\n    fi\n  fi\n  if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]]; then\n    _MTD_VR=\"BOA-5.9.1-omm\"\n  elif [ -e \"/root/.host8.cnf\" ]; then\n    _MTD_VR=\"BOA-5.9.1-aln\"\n  else\n    _MTD_VR=\"${_X_VERSION}\"\n  fi\n  mv -f /etc/motd ${_vBs}/dragon/t/motd-pre-${_xSrl}-${_MTD_VR}-${_NOW} &> /dev/null\n  mv -f /etc/motd-pre-* ${_vBs}/dragon/t/ &> /dev/null\n  echo > /etc/motd\n  echo \" Skynet Agent v.${_MTD_VR} on $(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)/$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2) \\\n    welcomes you aboard\" | fmt -su -w 2500 >> /etc/motd\n  echo >> /etc/motd\n  echo > /etc/motd.tail\n  echo \" Skynet Agent v.${_MTD_VR} on $(lsb_release -ar 2>/dev/null | grep -i distributor | cut -s -f2)/$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2) \\\n    welcomes you aboard\" | fmt -su -w 2500 >> /etc/motd.tail\n  echo >> /etc/motd.tail\n  
apt-get autoclean -y &> /dev/null\n  apt-get clean -qq 2> /dev/null\n  #rm -rf /var/lib/apt/lists/*\n  mkdir -p /data/conf/arch/log\n  chmod 0777 /data/conf/arch/log\n  mv -f /data/conf/global.inc-pre* /data/conf/arch/     &> /dev/null\n  mv -f /data/conf/global/*inc-pre* /data/conf/arch/    &> /dev/null\n  mv -f /data/conf/global.inc-before* /data/conf/arch/  &> /dev/null\n  mv -f /data/conf/global.inc-missing* /data/conf/arch/ &> /dev/null\n  rm -f /tmp/cache.inc*\n  rm -f /var/opt/._zendopcache*\n  rm -rf /var/opt/*\n  rm -f /var/xdrago/monitor/acrashsql.sh\n  rm -f /var/xdrago/acrashsql.sh\n  rm -f /var/xdrago/memcache.sh*\n  rm -f /var/xdrago/purge_cruft.sh\n  rm -f /var/xdrago/*.old\n  rm -rf /tmp/drush_make_tmp*\n  rm -rf /tmp/make_tmp*\n  rm -f /tmp/pm-updatecode*\n  rm -rf /var/aegir/.tmp/cache\n  [ -e \"/run/boa_run.pid\" ] && rm -f /run/boa_run.pid\n  [ -e \"/run/boa_wait.pid\" ] && rm -f /run/boa_wait.pid\n  [ -e \"/run/manage_ltd_users.pid\" ] && rm -f /run/manage_ltd_users.pid\n  [ -e \"/run/manage_ruby_users.pid\" ] && rm -f /run/manage_ruby_users.pid\n  [ -e \"/var/aegir/.drush/.alias.drushrc.php\" ] && rm -f /var/aegir/.drush/.alias.drushrc.php\n  [ -d \"/data/u\" ] && rm -f /data/disk/*/.drush/.alias.drushrc.php\n  rm -f ${_pthLog}/protected-vhosts-clean.log\n  rm -f ${_vBs}/.auth.IP.list*\n  sed -i \"s/### access .*//g; s/allow .*;//g; s/deny .*;//g; s/ *$//g; /^$/d\" \\\n    ${_mtrNgx}/vhost.d/adminer.* &> /dev/null\n  wait\n  sed -i \"s/### access .*//g; s/allow .*;//g; s/deny .*;//g; s/ *$//g; /^$/d\" \\\n    ${_mtrNgx}/vhost.d/cgp.* &> /dev/null\n  wait\n  sed -i \"s/### access .*//g; s/allow .*;//g; s/deny .*;//g; s/ *$//g; /^$/d\" \\\n    ${_mtrNgx}/vhost.d/chive.* &> /dev/null\n  wait\n  sed -i \"s/### access .*//g; s/allow .*;//g; s/deny .*;//g; s/ *$//g; /^$/d\" \\\n    ${_mtrNgx}/vhost.d/sqlbuddy.* &> /dev/null\n  wait\n  find /etc/[a-z]*\\.lock -maxdepth 1 -type f -exec rm -rf {} \\; &> /dev/null\n  chmod 700 /root\n  if [ ! 
-e \"/etc/init.d/buagent\" ] \\\n    && [ -e \"${_vBs}/buagent-pre-${_xSrl}-${_X_VERSION}-${_NOW}\" ]; then\n    mv -f ${_vBs}/buagent-pre-${_xSrl}-${_X_VERSION}-${_NOW} \\\n      /etc/init.d/buagent &> /dev/null\n  fi\n  _CSF_CRON_TEST=$(grep water /etc/crontab 2>&1)\n  if [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ -x \"/usr/sbin/csf\" ] \\\n    && [[ \"${_CSF_CRON_TEST}\" =~ \"water\" ]]; then\n    sed -i \"s/.*fire.*//g\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"s/.*water.*//g\" /etc/crontab &> /dev/null\n    wait\n    sed -i \"/^$/d\" /etc/crontab &> /dev/null\n    wait\n  fi\n  killall -9 memcached &> /dev/null\n  _php_deprecated_cleanup\n  _php_single_initd_cleanup\n  if [ -e \"/root/.skip-aegir-master-upgrade.cnf\" ] \\\n    && [ -e \"/root/.completed_post_major_os_upgrade.info\" ] \\\n    && [ ! -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n    rm -f /root/.skip-aegir-master-upgrade.cnf\n  fi\n  echo \"FINALE\" > /root/.latest-barracuda-upgrade-finale.info\n  if [ ! -e \"/root/.upstart.cnf\" ]; then\n    _mrun \"service cron start\"\n  fi\n  if [ -x \"/usr/sbin/csf\" ] \\\n    && [ -e \"/etc/csf/csf.deny\" ] \\\n    && [ ! 
-x \"/etc/csf/csfpost.sh\" ]; then\n    echo \"\" > /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    echo \"iptables -t raw -A OUTPUT -p tcp --dport 21 -j CT --helper ftp\" >> /etc/csf/csfpost.sh\n    chmod 700 /etc/csf/csfpost.sh\n    _mrun \"service lfd stop\"\n    pkill -9 -f ConfigServer\n    killall sleep &> /dev/null\n    rm -f /etc/csf/csf.error\n    csf -x  &> /dev/null\n    _mrun \"service clean-boa-env start\"\n    _if_fix_iptables_symlinks\n    ### csf -uf &> /dev/null\n    ### wait\n    _NFTABLES_TEST=$(iptables -V)\n    if [[ \"${_NFTABLES_TEST}\" =~ \"nf_tables\" ]]; then\n      if [ -e \"/usr/sbin/iptables-legacy\" ]; then\n        update-alternatives --set iptables /usr/sbin/iptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ip6tables-legacy\" ]; then\n        update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/arptables-legacy\" ]; then\n        update-alternatives --set arptables /usr/sbin/arptables-legacy &> /dev/null\n      fi\n      if [ -e \"/usr/sbin/ebtables-legacy\" ]; then\n        update-alternatives --set ebtables /usr/sbin/ebtables-legacy &> /dev/null\n      fi\n    fi\n    csf -e  &> /dev/null\n    sed -i \"s/.*DHCP.*//g\" /etc/csf/csf.allow\n    sed -i \"/^$/d\" /etc/csf/csf.allow\n    if [ -e \"/var/log/daemon.log\" ]; then\n      _DHCP_LOG=\"/var/log/daemon.log\"\n    else\n      _DHCP_LOG=\"/var/log/syslog\"\n    fi\n    grep DHCPREQUEST \"${_DHCP_LOG}\" | awk '{print $12}' | sort -u | while read -r _IP; do\n      if [[ ${_IP} =~ ^([0-9]{1,3}\\.){3}[0-9]{1,3}$ ]]; then\n        IFS='.' 
read -r oct1 oct2 oct3 oct4 <<< \"${_IP}\"\n        if (( oct1 <= 255 && oct2 <= 255 && oct3 <= 255 && oct4 <= 255 )); then\n          echo \"udp|out|d=67|d=${_IP} # Local DHCP out\" >> /etc/csf/csf.allow\n        fi\n      fi\n    done\n    if [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ]; then\n      csf -ra &> /dev/null\n      synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n    else\n      csf -r &> /dev/null\n    fi\n    ### Linux kernel TCP SACK CVEs mitigation\n    ### CVE-2019-11477 SACK Panic\n    ### CVE-2019-11478 SACK Slowness\n    ### CVE-2019-11479 Excess Resource Consumption Due to Low MSS Values\n    if [ -x \"/usr/sbin/csf\" ] && [ -e \"/etc/csf/csf.deny\" ]; then\n      _SACK_TEST=$(ip6tables --list | grep tcpmss)\n      if [[ ! \"${_SACK_TEST}\" =~ \"tcpmss\" ]]; then\n        sysctl net.ipv4.tcp_mtu_probing=0 &> /dev/null\n        iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        ip6tables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP &> /dev/null\n        [ -e \"/etc/csf/csfpost.d/synproxy.sh\" ] && synproxy_reassert -p \"443 80\" --no-quic -q &> /dev/null\n      fi\n    fi\n  fi\n  _CNT=$(pgrep -fc 'tee -a /var/backups/barracuda-')\n  if (( _CNT > 1 )); then\n    pkill -f 'tee -a /var/backups/barracuda-'\n  fi\n  cd /\n  chmod 711 bin boot data dev emul etc home lib lib64 lib32 media mnt opt \\\n    sbin selinux srv sys usr var share run &> /dev/null\n  chmod 700 root &> /dev/null\n  _msg \"BYE!\"\n}\n\n_run_xtrabackup_cleanup() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _run_xtrabackup_cleanup\"\n  fi\n  # Check for installed Percona XtraBackup packages and remove them\n  for _PKG in percona-xtrabackup-24 percona-xtrabackup-80 percona-xtrabackup-84; do\n    if _pkg_installed \"${_PKG}\"; then\n      _mrun \"apt-get remove ${_PKG} -y --purge --auto-remove -qq\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"PCKG: ${_PKG} removed as requested.\"\n      fi\n    fi\n  
done\n}\n\n_run_aptitude_deps_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _run_aptitude_deps_install\"\n  fi\n  : \"${_UPGRADE_MODE:=FAST}\"\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _UPGRADE_MODE=FULL\n  fi\n  _if_to_do_fix\n  if [ \"${_UPGRADE_MODE}\" = \"FULL\" ]; then\n    if [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      _mrun \"${_INITINS} devuan-keyring\"\n      _mrun \"${_INITINS} devuan-keyring\"\n    elif [ \"${_OS_DIST}\" = \"Debian\" ]; then\n      _mrun \"${_INITINS} debian-keyring debian-archive-keyring\"\n      _mrun \"${_INITINS} debian-keyring debian-archive-keyring\"\n    fi\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _msg \"INFO: Installing a large list of libraries and tools...\"\n  elif [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    if [ \"${_UPGRADE_MODE}\" = \"FULL\" ]; then\n      _msg \"INFO: Upgrading a large list of libraries and tools...\"\n    fi\n  fi\n  if [ \"${_UPGRADE_MODE}\" = \"FULL\" ]; then\n    _msg \"WAIT: This may take a while, please wait...\"\n    _apt_clean_update\n    _run_xtrabackup_cleanup\n\n    _mrun \"apt-get autoremove -y\"\n\n    for _PKG in packagekit polkitd software-properties-common python3-software-properties; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove ${_PKG} -y -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n\n    for _PKG in python3 python3-full python3-dev python3-debian python3-magic python3-pip python3-venv python3-setuptools; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in build-essential libsasl2-modules time vim cvs makepasswd; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in man-db dnsutils sudo symlinks iptables autoconf automake autotools-dev m4 bc; do\n      if ! 
_pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    if [ ! -x \"/usr/local/bin/curl\" ]; then\n      _mrun \"apt-get build-dep curl -y\"\n      _mrun \"${_INSTAPP} curl\"\n    fi\n\n    # Check for libssl1.0-dev and remove conditionally\n    if dpkg-query -W -f='${Status}' libssl1.0-dev 2>/dev/null | grep -q \"install ok installed\"; then\n      _mrun \"apt-get remove libssl1.0-dev -y --purge --auto-remove -qq\"\n    fi\n\n    for _PKG in libssl-dev libssl3 mcrypt; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    if [ \"${_OS_CODE}\" != \"excalibur\" ]; then\n      for _PKG in libc-client2007e libc-client2007e-dev; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n        fi\n      done\n    fi\n\n    if [ \"${_OS_DIST}\" = \"Devuan\" ]; then\n      for _PKG in apticron aptitude apt-listchanges; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n        fi\n      done\n    else\n      for _PKG in apticron aptitude; do\n        if ! _pkg_installed \"${_PKG}\"; then\n          _mrun \"${_INSTAPP} ${_PKG}\"\n        fi\n      done\n      _mrun \"dpkg --remove --force-remove-reinstreq apt-listchanges\"\n    fi\n\n    for _PKG in sysstat telnet cron gnupg2 gnupg zip unzip unrtf wdiff pwgen flex re2c lsof; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    _mrun \"apt-get remove stunnel4 -y --purge --auto-remove -qq\"\n\n    for _PKG in rsync strace librsync-dev mc whois zlib1g zlib1g-dev sqlite3 libsqlite3-dev; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in s4cmd ssh ssl-cert libpam-umask shtool xml-core xml2 xpdf libtool rrdtool; do\n      if ! 
_pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in ncurses-dev ncurses-term sipcalc; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    _OS_LIST=\"excalibur daedalus chimaera bullseye bookworm\"\n    for e in ${_OS_LIST}; do\n      if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n        for _PKG in ncal; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n      fi\n    done\n\n    for _PKG in libzip-dev libwww-perl libfontconfig1 libfreetype6 libfreetype6-dev; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in libfribidi0 libgeoip-dev libgeoip1 libgmp3-dev libonig-dev libpq5 libxml2-dev libxpm4; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in libxslt-dev libxslt1-dev libxslt1.1 subversion libaprutil1 libapr1 ldap-utils ipset; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in pdftk patchutils p7zip-full fontconfig-config bison catdoc cython3 geoip-database gettext; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n\n    for _PKG in clamav clamav-base clamav-daemon clamdscan ghostscript htop lemon lftp nano; do\n      if ! 
_pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n  fi\n  # Per-package _pkg_installed checks below already guard the removal loop,\n  # so the opt-out flag alone decides; otherwise, once screenfetch is gone,\n  # later runs would reinstall toilet/figlet despite the flag.\n  if [ -e \"/root/.dont.use.fancy.bash.login.cnf\" ]; then\n    ###\n    for _PKG in toilet figlet screenfetch pciutils; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove ${_PKG} -y --purge --auto-remove -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    ###\n  else\n    for _PKG in toilet figlet; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n  fi\n  if [ \"${_UPGRADE_MODE}\" = \"FULL\" ]; then\n    _fix_postfix\n    _mrun \"${_INSTAPP} ${_MAILSERV}\"\n    _mrun \"${_INSTAPP} ${_APT_XTRA}\"\n    _mrun \"${_INSTAPP} ${_EXTRA_LIB_APT}\"\n    _mrun \"${_INSTAPP} ${_EXTRA_PACKAGES}\"\n    _mrun \"${_INSTAPP} ${_SYSLOGD}\"\n    _EXTRA_APT=\"tree\"\n    _mrun \"apt-get install ${_EXTRA_APT} ${_nrmUpArg}\"\n  fi\n}\n\n_basic_packages_install_on_init() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _basic_packages_install_on_init\"\n  fi\n  _apt_clean_update\n  if [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] && [ ! -e \"/var/xdrago/manage_solr_config.sh\" ]; then\n#     _mrun \"apt-get remove unscd -y --purge --auto-remove -qq\"\n#     _mrun \"apt-get remove dbus -y --purge --auto-remove -qq\"\n#     if [ -e \"/usr/share/dbus-1\" ]; then\n#       rm -f /usr/share/dbus-1/*/*freedesktop*\n#     fi\n    userdel -r debian &> /dev/null\n  fi\n  usermod -aG users _apt &> /dev/null\n  usermod -aG users aegir &> /dev/null\n  usermod -aG users bin &> /dev/null\n  usermod -aG users daemon &> /dev/null\n  usermod -aG users man &> /dev/null\n  usermod -aG users mysql &> /dev/null\n  usermod -aG users nobody &> /dev/null\n  usermod -aG users root &> /dev/null\n  usermod -aG users sync &> /dev/null\n  if [ ! 
-e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n    && [ -e \"/etc/apt/apt.conf.d\" ]; then\n    echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n  fi\n\n  _APT_CONFIG_FILE=\"/etc/apt/apt.conf.d/99ignorestrict\"\n\n  # Desired configuration content\n  _DESIRED_APT_CONFIG='Acquire::AllowInsecureRepositories \"true\";\n    APT::Get::AllowUnauthenticated \"true\";\n    Aptitude::CmdLine::Fix-Broken \"true\";'\n\n  # Remove leading whitespace from each line\n  _CLEANED_DESIRED_APT_CONFIG=$(echo \"${_DESIRED_APT_CONFIG}\" | sed 's/^[[:space:]]\\+//')\n\n  # Normalize the existing file content\n  if [[ -f \"${_APT_CONFIG_FILE}\" ]]; then\n    _CURRENT_APT_CONFIG=$(tr -d '[:space:]' < \"${_APT_CONFIG_FILE}\")\n  else\n    _CURRENT_APT_CONFIG=\"\"\n  fi\n\n  # Normalize the cleaned desired configuration content\n  _NORMALIZED_DESIRED_APT_CONFIG=$(echo \"${_CLEANED_DESIRED_APT_CONFIG}\" | tr -d '[:space:]')\n\n  # Compare normalized contents and update if necessary\n  if [[ \"${_CURRENT_APT_CONFIG}\" != \"${_NORMALIZED_DESIRED_APT_CONFIG}\" ]]; then\n    echo \"${_CLEANED_DESIRED_APT_CONFIG}\" | tee \"${_APT_CONFIG_FILE}\" > /dev/null\n  fi\n\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    if [ ! 
-e \"/root/.step.init.basic.cnf\" ]; then\n      _msg \"INFO: Installing some basic tools...\"\n      _apt_clean_update\n      _mrun \"${_INITINS} locales\"\n      _locales_check_fix\n      _mrun \"${_INITINS} lsb-release\"\n      _mrun \"${_INITINS} sudo\"\n      _mrun \"${_INITINS} dnsutils\"\n      _mrun \"${_INITINS} netcat-traditional\"\n      _mrun \"${_INITINS} aptitude\"\n      _mrun \"${_INITINS} curl\"\n      _mrun \"${_INITINS} wget\"\n      _mrun \"${_INITINS} hostname\"\n      _mrun \"${_INITINS} net-tools\"\n      _mrun \"${_INITINS} ntpsec-ntpdate\"\n      touch /root/.step.init.basic.cnf\n    fi\n  fi\n}\n\n_more_packages_install_on_init() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _more_packages_install_on_init\"\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _msg \"INFO: Installing more basic tools now...\"\n    if [ -e \"/etc/debian_version\" ]; then\n      _L_DEB_TEST=$(grep \"^5.\" /etc/debian_version 2>&1)\n      if [ ! -z \"${_L_DEB_TEST}\" ]; then\n        sed -i \"s/^deb.*security.debian.org.*/## security updates no longer available/g\" ${_aptLiSys} &> /dev/null\n        wait\n        sed -i \"s/ftp.*debian.org/archive.debian.org/g\" \\\n          ${_aptLiSys} &> /dev/null\n        wait\n        sed -i \"s/volatile.debian.org/archive.debian.org/g\" \\\n          ${_aptLiSys} &> /dev/null\n        wait\n      fi\n    fi\n    _apt_clean_update\n    _mrun \"${_INITINS} locales\"\n    _locales_check_fix\n    _mrun \"${_INITINS} git\"\n    _mrun \"${_INITINS} git-core\"\n    _mrun \"${_INITINS} git-man\"\n    _mrun \"${_INITINS} axel\"\n    _mrun \"${_INITINS} build-essential\"\n  fi\n}\n\n_if_proxysql_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_proxysql_update\"\n  fi\n  if [ -e \"/root/.my.proxysql_adm_pwd.txt\" ]; then\n    _isPxy=\"$(which proxysql)\"\n    _REINSTALL_PXC=NO\n    if [ -x \"${_isPxy}\" ]; then\n      _PXY=$(${_isPxy} --version 2>&1 \\\n        | tr -d \"\\n\" \\\n   
     | cut -d\"-\" -f1 \\\n        | awk '{ print $3}' 2>&1)\n      _msg \"INFO: ProxySQL version detected ${_PXY}\"\n      _pGc=\"/usr/bin/proxysql_galera_checker\"\n      if [ ! -e \"${_pGc}\" ]; then\n        _UP_PXC=YES\n        _REINSTALL_PXC=YES\n      else\n        _PAV_TEST=$(grep Maestro ${_pGc} 2>&1)\n        if [[ ! \"${_PAV_TEST}\" =~ \"Maestro\" ]]; then\n          _UP_PXC=YES\n        fi\n      fi\n      _pNm=\"/usr/bin/proxysql_node_monitor\"\n      if [ ! -e \"${_pNm}\" ]; then\n        _UP_PXC=YES\n        _REINSTALL_PXC=YES\n      else\n        _PAV_TEST=$(grep Vivaldi ${_pNm} 2>&1)\n        if [[ ! \"${_PAV_TEST}\" =~ \"Vivaldi\" ]]; then\n          _UP_PXC=YES\n        fi\n      fi\n    fi\n    _isPxyVer=$(proxysql --version 2>&1)\n    if [[ ! \"${_isPxyVer}\" =~ \"${_PXC_VRN}\" ]]; then\n      _UP_PXC=YES\n    fi\n    if [ \"${_UP_PXC}\" = \"YES\" ] \\\n      && [ \"${_OS_DIST}\" = \"Debian\" ]; then\n      if [ \"${_REINSTALL_PXC}\" = \"YES\" ]; then\n        _msg \"INFO: Running ProxySQL upgrade to ${_PXC_VRN}...\"\n      else\n        _msg \"INFO: Running ProxySQL re-install to ${_PXC_VRN}...\"\n        _apt_clean_update\n        for _PKG in sysbench debconf-utils; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n        if dpkg-query -W -f='${Status}' proxysql 2>/dev/null | grep -q \"install ok installed\"; then\n          _mrun \"apt-get remove proxysql -y --purge --auto-remove -qq\"\n        fi\n        _apt_clean_update\n        _mrun \"${_INSTAPP} proxysql\"\n      fi\n      _isPxy=\"$(which proxysql)\"\n      if [ ! -x \"${_isPxy}\" ]; then\n        _apt_clean_update\n        for _PKG in sysbench debconf-utils proxysql; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n      else\n        _mrun \"apt-get install --only-upgrade ${_nrmUpArg} proxysql\"\n      fi\n      if [ ! 
-e \"/etc/proxysql.cnf\" ]; then\n        _msg \"OOPS: Unable to open config file /etc/proxysql.cnf\"\n      else\n        chmod 640 /etc/proxysql.cnf\n      fi\n      if [ ! -e \"/etc/proxysql-admin.cnf\" ]; then\n        _msg \"OOPS: Unable to open config file /etc/proxysql-admin.cnf\"\n      else\n        chmod 640 /etc/proxysql-admin.cnf\n      fi\n      if [ -x \"${_isPxy}\" ] \\\n        && [ -e \"/etc/proxysql.cnf\" ] \\\n        && [ -e \"/etc/proxysql-admin.cnf\" ]; then\n        _PXY=$(${_isPxy} --version 2>&1 \\\n          | tr -d \"\\n\" \\\n          | cut -d\"-\" -f1 \\\n          | awk '{ print $3}' 2>&1)\n        _msg \"INFO: ProxySQL version detected ${_PXY}\"\n        _msg \"INFO: Updating ProxySQL Node Monitor...\"\n        if [ -e \"${_pNm}\" ]; then\n          rm -f ${_pNm}\n        fi\n        _tBn=\"tools/bin\"\n        _tURL=\"${_urlHmr}/${_tBn}/proxysql_node_monitor\"\n        _msg \"PNM download URL is ${_tURL}\"\n        curl -I ${_tURL}\n        curl ${_crlGet} \"${_tURL}\" -o ${_pNm}\n        if [ ! -e \"${_pNm}\" ]; then\n          curl ${_crlGet} \"${_tURL}\" -o ${_pNm}\n        else\n          _PAV_TEST=$(grep Vivaldi ${_pNm} 2>&1)\n          if [[ ! \"${_PAV_TEST}\" =~ \"Vivaldi\" ]]; then\n            rm -f ${_pNm}\n            curl ${_crlGet} \"${_tURL}\" -o ${_pNm}\n          fi\n        fi\n        curl ${_crlGet} \"${_tURL}\" -o ${_pNm}\n        ls -la ${_pNm}\n        if [ -e \"${_pNm}\" ]; then\n          chmod 755 ${_pNm}\n          _msg \"INFO: `${_pNm} --version`\"\n        fi\n\n        _msg \"INFO: Updating ProxySQL Galera Checker...\"\n        if [ -e \"${_pGc}\" ]; then\n          rm -f ${_pGc}\n        fi\n        _tBn=\"tools/bin\"\n        _tURL=\"${_urlHmr}/${_tBn}/proxysql_galera_checker\"\n        _msg \"PGC download URL is ${_tURL}\"\n        curl -I ${_tURL}\n        curl ${_crlGet} \"${_tURL}\" -o ${_pGc}\n        if [ ! 
-e \"${_pGc}\" ]; then\n          curl ${_crlGet} \"${_tURL}\" -o ${_pGc}\n        else\n          _PAV_TEST=$(grep Maestro ${_pGc} 2>&1)\n          if [[ ! \"${_PAV_TEST}\" =~ \"Maestro\" ]]; then\n            rm -f ${_pGc}\n            curl ${_crlGet} \"${_tURL}\" -o ${_pGc}\n          fi\n        fi\n        curl ${_crlGet} \"${_tURL}\" -o ${_pGc}\n        ls -la ${_pGc}\n\n        if [ -e \"${_pGc}\" ]; then\n          chmod 755 ${_pGc}\n          _msg \"INFO: `${_pGc} --version`\"\n          echo loadbal > /var/lib/proxysql/mode\n          echo loadbal > /var/lib/proxysql/c1r_galera_mode\n          echo loadbal > /var/lib/proxysql/--mode=singlewrite_mode\n          echo loadbal > /var/lib/proxysql/--mode=loadbal_mode\n          chown proxysql:proxysql /var/lib/proxysql/*mode*\n          echo 0 > /var/lib/proxysql/reload\n          echo 0 > /var/lib/proxysql/c1r_galera_reload\n          chown proxysql:proxysql /var/lib/proxysql/*reload\n          rm -f /var/lib/proxysql/pxc_test_proxysql_galera_check.log\n          proxysql_galera_checker --log=/var/lib/proxysql/pxc_test_proxysql_galera_check.log --debug\n          cat /var/lib/proxysql/pxc_test_proxysql_galera_check.log\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Restarting ProxySQL server...\"\n          fi\n          _mrun \"service proxysql restart\"\n          mysql -uadmin -p`cat /root/.my.proxysql_adm_pwd.txt` -h127.0.0.1 -P6032 -e \"SELECT * FROM scheduler\\G\"\n          mysql -uadmin -p`cat /root/.my.proxysql_adm_pwd.txt` -h127.0.0.1 -P6032 -e \"SELECT * FROM mysql_servers;\"\n        else\n          _msg \"OOPS: ProxySQL Galera Checker will not work!\"\n        fi\n      else\n        _msg \"OOPS: ProxySQL will not work!\"\n      fi\n    fi\n  fi\n}\n\n_if_rebuild_src_on_major_os_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_rebuild_src_on_major_os_upgrade\"\n  fi\n  _CHECKS_REMOTE_REPOS=YES\n  if [ -e \"/root/.run-to-excalibur.cnf\" 
] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n    _CHECKS_REMOTE_REPOS=NO\n    if [ -e \"/root/.force.rebuild.src.on.auto.now.cnf\" ]; then\n      _ALLOW_HEAVY_REBUILDS=YES\n    else\n      _ALLOW_HEAVY_REBUILDS=NO\n    fi\n  else\n    _ALLOW_HEAVY_REBUILDS=YES\n  fi\n}\n\n_if_long_generate_on_major_os_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_long_generate_on_major_os_upgrade\"\n  fi\n  if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n    || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n    || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n    || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n    _ALLOW_LONG_GENERATE=NO\n  else\n    _ALLOW_LONG_GENERATE=YES\n  fi\n}\n\n_ensure_devuan_base_files() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _ensure_devuan_base_files\"\n  fi\n  # Ensure Devuan base-files is applied so /etc/os-release reflects Devuan/daedalus\n  _msg \"Ensuring Devuan base-files is applied (force-confnew)\"\n  _BF_CANDIDATE=$(apt-cache policy base-files 2>/dev/null | awk '/Candidate: /{print $2; exit}')\n  _APTOPTS=\"-o Dpkg::Options::=--force-confnew \\\n            -o Dpkg::Options::=--force-overwrite \\\n            -o Dpkg::Options::=--force-confdef\"\n  if [ ! 
-z \"${_BF_CANDIDATE}\" ]; then\n    _mrun \"apt-get ${_aptAllow} -y ${_APTOPTS} install base-files=\"${_BF_CANDIDATE}\" --allow-downgrades\"\n  else\n    _mrun \"apt-get ${_aptAllow} -y ${_APTOPTS} install base-files --allow-downgrades\"\n  fi\n  # If /etc/os-release still says Debian, try to refresh from the installed base-files\n  if grep -qi 'ID=debian' /etc/os-release 2>/dev/null; then\n    if [ -f /usr/lib/os-release ]; then\n      cp -f /usr/lib/os-release /etc/os-release || true\n    elif [ -f /usr/share/base-files/os-release ]; then\n      cp -f /usr/share/base-files/os-release /etc/os-release || true\n    fi\n  fi\n}\n\n_if_post_major_os_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_post_major_os_upgrade\"\n  fi\n  if [ -e \"/root/.run_post_major_os_upgrade.info\" ] \\\n    && [ ! -e \"/root/.completed_post_major_os_upgrade.info\" ]; then\n\n    _apt_clean_update_no_releaseinfo_change\n    _mrun \"${_APT_UPDATE} ${_aptAllow} -qq\"\n\n    _msg \"INFO: Cleaning up any systemd remnants...\"\n    _sysvinit_install\n    _systemd_remove_apt_cmd\n    _sysvinit_install\n\n    _msg \"INFO: Removing any packages orphaned by the migration process...\"\n    _mrun \"apt-get autoremove --purge -y --purge --auto-remove -qq\"\n    _mrun \"apt-get autoclean -y --purge --auto-remove -qq\"\n    _mrun \"apt-get autoremove -y --purge --auto-remove -qq\"\n\n    _msg \"INFO: Running dist-upgrade to complete the system upgrade...\"\n    _msg \"WAIT: This may take a while, please wait...\"\n    _mrun \"apt-get dist-upgrade ${_dstUpArg}\"\n    _mrun \"apt-get dist-upgrade ${_dstUpArg}\"\n    [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n    if [ -e \"/root/.allow.downgrades.on.same.os.dist.upgrade.cnf\" ]; then\n      _msg \"INFO: dist-upgrade with --allow-downgrades option is used\"\n      _mrun \"apt-get dist-upgrade ${_aptYesUnth} --allow-downgrades\"\n      _mrun \"apt-get dist-upgrade ${_aptYesUnth} 
--allow-downgrades\"\n      [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n    else\n      _mrun \"apt-get dist-upgrade ${_aptYesUnth}\"\n      _mrun \"apt-get dist-upgrade ${_aptYesUnth}\"\n      [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n    fi\n    _mrun \"apt-get install lsb-release ${_dstUpArg}\"\n    _mrun \"apt-get ${_dstUpArg} install\"\n\n    ### Update base-files on daedalus if needed\n    if [ -e \"/root/.top-daedalus.cnf\" ]; then\n      # Initialize the update flag to NO\n      _needsBaseFilesUpdate=NO\n      # Check if /etc/os-release mentions 'daedalus'\n      if grep -i 'daedalus' /etc/os-release &> /dev/null; then\n        _msg \"The /etc/os-release already mentions daedalus\"\n      else\n        _msg \"The /etc/os-release doesn't mention daedalus yet\"\n        _needsBaseFilesUpdate=YES\n      fi\n      # Check if apt policy for base-files mentions 'daedalus'\n      if apt policy base-files | grep -i 'daedalus' &> /dev/null; then\n        _msg \"The apt policy base-files already mentions daedalus\"\n      else\n        _msg \"The apt policy base-files doesn't mention daedalus yet\"\n        _needsBaseFilesUpdate=YES\n      fi\n      # If any of the above checks require an update, perform the upgrade\n      if [ \"${_needsBaseFilesUpdate}\" = \"YES\" ]; then\n        _msg \"Upgrading base-files from Devuan repository (dynamic candidate)\"\n        _ensure_devuan_base_files\n      fi\n    fi\n\n    ### Update base-files on excalibur if needed\n    if [ -e \"/root/.top-excalibur.cnf\" ]; then\n      # Initialize the update flag to NO\n      _needsBaseFilesUpdate=NO\n      # Check if /etc/os-release mentions 'excalibur'\n      if grep -i 'excalibur' /etc/os-release &> /dev/null; then\n        _msg \"The /etc/os-release already mentions excalibur\"\n      else\n        _msg \"The /etc/os-release doesn't mention excalibur yet\"\n        _needsBaseFilesUpdate=YES\n      fi\n      # Check if 
apt policy for base-files mentions 'excalibur'\n      if apt policy base-files | grep -i 'excalibur' &> /dev/null; then\n        _msg \"The apt policy base-files already mentions excalibur\"\n      else\n        _msg \"The apt policy base-files doesn't mention excalibur yet\"\n        _needsBaseFilesUpdate=YES\n      fi\n      # If any of the above checks require an update, perform the upgrade\n      if [ \"${_needsBaseFilesUpdate}\" = \"YES\" ]; then\n        _msg \"Upgrading base-files from Devuan repository (dynamic candidate)\"\n        _ensure_devuan_base_files\n      fi\n    fi\n\n    ### Update rsyslog configuration early\n    _rsyslog_config_update\n    ### Reload key services if needed early\n    if [ -e \"/etc/init.d/valkey-server\" ]; then\n      _mrun \"service valkey-server reload\"\n    elif [ -e \"/etc/init.d/redis-server\" ]; then\n      _mrun \"service redis-server reload\"\n    fi\n    _mrun \"service nginx reload\"\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n        _mrun \"service php${e}-fpm reload\"\n      fi\n    done\n\n    _REBUILD_SRC_ON_AUTO_NOW=YES\n    if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n      || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n      || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n      || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n      _REBUILD_SRC_ON_AUTO_NOW=NO\n      _php_if_versions_cleanup_cnf\n    fi\n\n    if [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ]; then\n      _REBUILD_SRC_ON_AUTO_NOW=YES\n    fi\n\n    if [ -e \"/root/.big_hop_on_major_os_upgrade.info\" ] \\\n      && [ ! 
-e \"/root/.rebuild_src_on_auto_before_reboot.info\" ] \\\n      && [ \"${_REBUILD_SRC_ON_AUTO_NOW}\" = \"YES\" ]; then\n      _msg \"INFO: Time for re-installing services built from sources...\"\n      rm -f /var/log/boa/._php_libs_fix_*.pid\n      _PURGE_MODE=OFF\n      _NGX_FORCE_REINSTALL=YES\n      _PHP_FORCE_REINSTALL=YES\n      _SSH_FORCE_REINSTALL=YES\n      _SSL_FORCE_REINSTALL=YES\n      _if_reinstall_curl_src\n      _if_ssl_install_src\n      _sync_system_ssl_certs\n      _ssl_paths_sync\n      _ssl_crypto_lib_fix\n      _curl_install_src\n      _sshd_install_src\n      _nginx_install_upgrade\n      _magick_install_upgrade\n      _php_install_deps\n      _php_libs_fix\n      _php_if_versions_cleanup_cnf\n      ### Rebuild every installed PHP version, using per-version _PHP*_VRN vars.\n      for e in 56 70 71 72 73 74 80 81 82 83 84 85; do\n        if [ -x \"/opt/php${e}/bin/php\" ]; then\n          _vrnVar=\"_PHP${e}_VRN\"\n          _PHP_VERSION=\"${!_vrnVar}\"\n          _install_php_multi \"${e}\"\n        fi\n      done\n      _PHP_VERSION=\"\"\n      _T_PHP_VRN=\"\"\n      _T_PHP_PTH=\"\"\n      _php_libs_fix\n      if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n        _php_ioncube_check_if_update\n        _php_check_if_rebuild\n      fi\n      _php_install_upgrade\n      _php_config_check_update\n      _php_upgrade_all\n      _if_install_php_newrelic\n      _newrelic_check_fix\n      _NGX_FORCE_REINSTALL=\n      _PHP_FORCE_REINSTALL=\n      _SSH_FORCE_REINSTALL=\n      _SSL_FORCE_REINSTALL=\n    fi\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      rm -f /root/.run_post_major_os_upgrade.info\n    fi\n    if [ -e \"/root/.force.newrelic.update.cnf\" ]; then\n      rm -f /root/.force.newrelic.update.cnf\n    fi\n    if [ -e \"/root/.allow.downgrades.on.same.os.dist.upgrade.cnf\" ]; then\n      rm -f /root/.allow.downgrades.on.same.os.dist.upgrade.cnf\n    fi\n    rm -f /var/log/boa/.*_crontab_*\n    touch /root/.completed_post_major_os_upgrade.info\n    _msg \"INFO: The post_major_os_upgrade procedure is complete\"\n  fi\n}\n\n_if_major_os_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_major_os_upgrade\"\n  fi\n  _RUNNING_MAJOR_OS_UPGRADE=NO\n  _define_loc_osr\n  if [ ! -z \"${_MSG_LOC}\" ]; then\n    if [ ! 
-e \"/usr/local/lib/icu/73.1\" ] \\\n      && [ \"${_LOC_OS_CODE}\" != \"jessie\" ]; then\n      echo\n      _msg \"OOPS: Please run normal 'barracuda up-${_tRee}' upgrade\"\n      _msg \"OOPS: before trying to run major OS upgrade\"\n      echo\n      sleep 3\n      echo \"Bye\"\n      _barracuda_cnf_cleanup\n      _clean_pid_exit _if_major_os_upgrade_a\n    fi\n    if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n      || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n      || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n      || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n      _AUTOPILOT=YES\n    fi\n    _RUNNING_MAJOR_OS_UPGRADE=YES\n    _msg \"MODE: ${_MSG_LOC} upgrade\"\n    echo\n    _msg \"RLLY: Have you created a Fresh Backup Snapshot of this VM?\"\n    echo\n    sleep 5\n    _msg \"ATTN: You have to be prepared in case the upgrade fails\"\n    _msg \"ATTN: You have to be prepared for a crash of this system\"\n    _msg \"ATTN: This procedure is well tested, but things happen!\"\n    echo\n    sleep 5\n    _tPrmt=\"Are you sure you want to proceed?\"\n    _tPrmt=$(echo -n ${_tPrmt} | fmt -su -w 2500 2>&1)\n    if _prompt_yes_no \"${_tPrmt}\" ; then\n      true\n      echo\n      _msg \"FINE: But you can still hit ctrl-c to stop if you wish\"\n      _msg \"WAIT: We need a minute to stop all running cron tasks...\"\n      _mrun \"service cron stop\"\n    else\n      echo\n      _msg \"FINE: Please try again later once you are ready\"\n      sleep 3\n      _clean_pid_exit _if_major_os_upgrade_b\n    fi\n    echo\n    sleep 5\n    _msg \"WAIT: This major system upgrade will start in 60s...\"\n    sleep 15\n    _msg \"WAIT: ...it will start in 45s...\"\n    sleep 15\n    _msg \"WAIT: ...it will start in 30s...\"\n    sleep 15\n    _msg \"WAIT: ...it will start in 15s...\"\n    sleep 5\n    _msg \"ATTN: Ten seconds left to hit ctrl-c to stop...\"\n    sleep 10\n    _msg \"INFO: ${_MSG_LOC} upgrade in progress...\"\n    echo\n    _msg \"HINT: Commands to run in another 
terminal window to watch details\"\n    _msg \"INFO: tail -f ${_LOG_INFO}\"\n    _msg \"ERRR: tail -f ${_LOG_ERRR}\"\n    echo\n    touch /root/.skip-aegir-master-upgrade.cnf\n    if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n      rm -f /root/.run_post_major_os_upgrade.info\n    fi\n    if [ -e \"/root/.completed_post_major_os_upgrade.info\" ]; then\n      rm -f /root/.completed_post_major_os_upgrade.info\n    fi\n    if [ -e \"/root/.big_hop_on_major_os_upgrade.info\" ]; then\n      rm -f /root/.big_hop_on_major_os_upgrade.info\n    fi\n    _check_dns_settings\n    rm -f ${_pthLog}/ruby-sys-clean-reload.log\n    mv -f /var/xdrago /var/xdrago_wait &> /dev/null\n    rm -f ${_mtrNgx}/pre.d/nginx_speed_purge.conf\n    if [ -e \"/etc/init.d/bind\" ]; then\n      rm -f /etc/init.d/bind\n    fi\n    _apt_clean_update_no_releaseinfo_change\n    ###\n    for _PKG in collectd; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove ${_PKG} -y --purge --auto-remove -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    ###\n    for _PKG in libmariadb3 mariadb-common mailutils; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove --purge ${_PKG} -y -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    ###\n#     _mrun \"apt-get remove dbus -y --purge --auto-remove -qq\"\n#     if [ -e \"/usr/share/dbus-1\" ]; then\n#       rm -f /usr/share/dbus-1/*/*freedesktop*\n#     fi\n    ###\n    ### Make sure that mariadb related packages are locked in apt.\n    if [ ! 
-e \"/etc/apt/preferences.d/mariadb-common\" ]; then\n      echo -e 'Package: libmariadb3\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/mariadb-common\n      echo -e '\\n\\nPackage: mariadb-common\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/mariadb-common\n      echo -e '\\n\\nPackage: mailutils\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/mariadb-common\n      _apt_clean_update_no_releaseinfo_change\n    fi\n    ###\n    _MARIADB_GET_DPKG=$(dpkg --get-selections | grep mariadb-common | grep 'hold$' 2>&1)\n    if [[ ! \"${_MARIADB_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold libmariadb3 &> /dev/null\n      aptitude hold mariadb-common &> /dev/null\n      aptitude hold mailutils &> /dev/null\n      echo \"libmariadb3 hold\" | dpkg --set-selections &> /dev/null\n      echo \"mariadb-common hold\" | dpkg --set-selections &> /dev/null\n      echo \"mailutils hold\" | dpkg --set-selections &> /dev/null\n      _apt_clean_update_no_releaseinfo_change\n    fi\n    ###\n    for _PKG in nginx-extras nginx nginx-common nginx-full percona-release; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove ${_PKG} -y -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    ###\n    ### Make sure that nginx packages are locked in apt.\n    if [ ! -e \"/etc/apt/preferences.d/nginx-common\" ]; then\n      rm -f /etc/apt/preferences.d/nginx\n      echo -e 'Package: nginx\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/nginx-common\n      echo -e '\\n\\nPackage: nginx-common\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/nginx-common\n      _apt_clean_update_no_releaseinfo_change\n    fi\n    ###\n    _NGINX_GET_DPKG=$(dpkg --get-selections | grep nginx-common | grep 'hold$' 2>&1)\n    if [[ ! 
\"${_NGINX_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold nginx &> /dev/null\n      aptitude hold nginx-common &> /dev/null\n      echo \"nginx hold\" | dpkg --set-selections &> /dev/null\n      echo \"nginx-common hold\" | dpkg --set-selections &> /dev/null\n      _apt_clean_update_no_releaseinfo_change\n    fi\n    ###\n    ### Make sure that percona-release package is locked in apt.\n    if [ ! -e \"/etc/apt/preferences.d/percona-release\" ]; then\n      echo -e 'Package: percona-release\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/percona-release\n      _apt_clean_update\n    fi\n    ###\n    _PERC_GET_DPKG=$(dpkg --get-selections | grep percona-release | grep 'hold$' 2>&1)\n    if [[ ! \"${_PERC_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold percona-release &> /dev/null\n      echo \"percona-release hold\" | dpkg --set-selections &> /dev/null\n      _apt_clean_update\n    fi\n    ###\n    if [ \"${_LOC_OS_CODE}\" = \"jessie\" ]; then\n      _mrun \"apt-get remove gnome-tweak-tool -y --purge --auto-remove -qq\"\n    fi\n    if [ \"${_LOC_OS_CODE}\" = \"stretch\" ]; then\n      _mrun \"apt-get remove libpam-systemd -y --purge --auto-remove -qq\"\n    fi\n    echo \"curl install\"                | dpkg --set-selections &> /dev/null\n    echo \"git install\"                 | dpkg --set-selections &> /dev/null\n    echo \"git-core install\"            | dpkg --set-selections &> /dev/null\n    echo \"git-man install\"             | dpkg --set-selections &> /dev/null\n    echo \"libldap-common install\"      | dpkg --set-selections &> /dev/null\n    echo \"libldap-dev install\"         | dpkg --set-selections &> /dev/null\n    echo \"libldap2-dev install\"        | dpkg --set-selections &> /dev/null\n    echo \"libssl-dev install\"          | dpkg --set-selections &> /dev/null\n    echo \"nginx install\"               | dpkg --set-selections &> /dev/null\n    echo \"nginx-common install\"        | dpkg --set-selections &> /dev/null\n   
 echo \"openssh-client install\"      | dpkg --set-selections &> /dev/null\n    echo \"openssh-server install\"      | dpkg --set-selections &> /dev/null\n    echo \"openssh-sftp-server install\" | dpkg --set-selections &> /dev/null\n    echo \"openssl install\"             | dpkg --set-selections &> /dev/null\n    echo \"percona-release install\"     | dpkg --set-selections &> /dev/null\n    echo \"ssh install\"                 | dpkg --set-selections &> /dev/null\n    echo \"sysvinit-core install\"       | dpkg --set-selections &> /dev/null\n    echo \"sysvinit-utils install\"      | dpkg --set-selections &> /dev/null\n    echo \"zlib1g install\"              | dpkg --set-selections &> /dev/null\n    echo \"zlib1g-dev install\"          | dpkg --set-selections &> /dev/null\n    echo \"zlibc install\"               | dpkg --set-selections &> /dev/null\n    ###\n    ### Run pre-migration upgrade in the current OS version.\n    _mrun \"apt-get upgrade ${_nrmUpArg}\"\n    ###\n    ### Check if we can continue.\n    _AUDIT_DPKG=$(dpkg --audit 2>&1)\n    if [ ! -z \"${_AUDIT_DPKG}\" ]; then\n      _msg \"ALRT! I cannot continue until dpkg --audit is clean\"\n      _msg \"ALRT! ${_AUDIT_DPKG}\"\n      _msg \"ALRT! Aborting installer NOW!\"\n      _clean_pid_exit _if_major_os_upgrade_c\n    fi\n    _HOLD_TEST_DPKG=$(dpkg --get-selections | grep 'hold$' 2>&1)\n    if [ ! -z \"${_HOLD_TEST_DPKG}\" ]; then\n      _msg \"ALRT! I cannot continue until these packages are un-held\"\n      _msg \"ALRT! ${_HOLD_TEST_DPKG}\"\n      _msg \"ALRT! Aborting installer NOW!\"\n      _clean_pid_exit _if_major_os_upgrade_d\n    fi\n    _HOLD_TEST_ATE=$(aptitude search \"~ahold\" 2>&1)\n    if [ ! -z \"${_HOLD_TEST_ATE}\" ]; then\n      _msg \"ALRT! I cannot continue until these packages are un-held\"\n      _msg \"ALRT! ${_HOLD_TEST_ATE}\"\n      _msg \"ALRT! 
Aborting installer NOW!\"\n      _clean_pid_exit _if_major_os_upgrade_e\n    fi\n    ###\n    ### Switching db gears on the fly.\n    if [ \"${_TGT_OSN}\" = \"Debian\" ] || [ \"${_TGT_OSN}\" = \"Devuan\" ]; then\n      _NEW_OSN=debian\n    fi\n    _OS_CODE=\"${_NEW_OS_CODE}\"\n    if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      _SQL_NEW_OS_CODE=trixie\n    elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n      _SQL_NEW_OS_CODE=bookworm\n    elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n      _SQL_NEW_OS_CODE=bullseye\n    elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n      _SQL_NEW_OS_CODE=buster\n    else\n      _SQL_NEW_OS_CODE=\"${_OS_CODE}\"\n    fi\n    ###\n    ### Switching other gears on the fly first.\n    _APT_SRC_TEST=$(ls -la /etc/apt/sources.list.d/* 2>&1)\n    if [[ ! \"${_APT_SRC_TEST}\" =~ \"No such file\" ]]; then\n      sed -i \"s/${_LOC_OS_CODE}/${_SQL_NEW_OS_CODE}/g\" /etc/apt/sources.list.d/*\n    fi\n    ###\n    ### Now overwriting db gears apt sources.\n    _if_sql_keyring_apt_update\n    ###\n    ### Switching system gears on the fly.\n    if [ \"${_TGT_OSN}\" = \"Devuan\" ]; then\n      if [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n        _TGT_MRR=\"http://archive.devuan.org/merged\"\n      elif [ -n \"${_LOCAL_DEVUAN_MIRROR}\" ]; then\n        _TGT_MRR=\"${_LOCAL_DEVUAN_MIRROR}\"\n      else\n        _DVN_MRR=\"$(_find_fast_devuan_mirror)\"\n        if [ -n \"${_DVN_MRR}\" ]; then\n          _TGT_MRR=\"${_DVN_MRR}\"\n        else\n          _TGT_MRR=\"http://deb.devuan.org/merged\"\n        fi\n      fi\n      echo \"## DEVUAN MAIN REPOSITORIES\" > ${_aptLiSys}\n      echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      if [ \"${_NEW_OS_CODE}\" != \"beowulf\" ]; then\n        echo \"## DEVUAN MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n        echo \"deb 
${_TGT_MRR} ${_NEW_OS_CODE}-updates main contrib non-free\" >> ${_aptLiSys}\n        echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE}-updates main contrib non-free\" >> ${_aptLiSys}\n        echo \"\" >> ${_aptLiSys}\n        if [ \"${_USE_BACKPORTS}\" = \"YES\" ]; then\n          echo \"## DEVUAN BACKPORTS REPOSITORY\" >> ${_aptLiSys}\n          echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE}-backports main contrib non-free\" >> ${_aptLiSys}\n          echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE}-backports main contrib non-free\" >> ${_aptLiSys}\n          echo \"\" >> ${_aptLiSys}\n        fi\n      fi\n      echo \"## DEVUAN SECURITY UPDATES\" >> ${_aptLiSys}\n      echo \"deb ${_TGT_MRR} ${_NEW_OS_CODE}-security main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src ${_TGT_MRR} ${_NEW_OS_CODE}-security main contrib non-free\" >> ${_aptLiSys}\n      if [ -e \"/etc/apt/apt.conf\" ]; then\n        rm -f /etc/apt/apt.conf\n      fi\n      ###\n      ### Add devuan-keyring first.\n      _apt_clean_update_no_releaseinfo_change\n      _mrun \"${_APT_UPDATE} ${_aptAllow}\"\n      _mrun \"${_INITINS} devuan-keyring\"\n    else\n      if [ \"${_NEW_OS_CODE}\" = \"stretch\" ]; then\n        _NEW_APT_MIRROR=\"archive.debian.org/debian\"\n        _NEW_APT_REPSRC=\"${_NEW_OS_CODE}-backports\"\n        _NEW_SEC_MIRROR=\"archive.debian.org/debian-security\"\n        _NEW_SEC_REPSRC=\"${_NEW_OS_CODE}/updates\"\n      elif [ \"${_NEW_OS_CODE}\" = \"buster\" ]; then\n        _NEW_APT_MIRROR=\"deb.debian.org/debian\"\n        _NEW_APT_REPSRC=\"${_NEW_OS_CODE}-updates\"\n        _NEW_SEC_MIRROR=\"security.debian.org\"\n        _NEW_SEC_REPSRC=\"${_NEW_OS_CODE}/updates\"\n      elif [ \"${_NEW_OS_CODE}\" = \"bullseye\" ]; then\n        _NEW_APT_MIRROR=\"deb.debian.org/debian\"\n        _NEW_APT_REPSRC=\"${_NEW_OS_CODE}-updates\"\n        _NEW_SEC_MIRROR=\"security.debian.org/debian-security\"\n        _NEW_SEC_REPSRC=\"${_NEW_OS_CODE}-security\"\n      elif [ \"${_NEW_OS_CODE}\" = 
\"bookworm\" ]; then\n        _NEW_APT_MIRROR=\"deb.debian.org/debian\"\n        _NEW_APT_REPSRC=\"${_NEW_OS_CODE}-updates\"\n        _NEW_SEC_MIRROR=\"security.debian.org/debian-security\"\n        _NEW_SEC_REPSRC=\"${_NEW_OS_CODE}-security\"\n      fi\n      echo \"## DEBIAN MAIN REPOSITORIES\" > ${_aptLiSys}\n      echo \"deb http://${_NEW_APT_MIRROR} ${_NEW_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_NEW_APT_MIRROR} ${_NEW_OS_CODE} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      echo \"## DEBIAN MAJOR BUG FIX UPDATES produced after the final release\" >> ${_aptLiSys}\n      echo \"deb http://${_NEW_APT_MIRROR} ${_NEW_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"deb-src http://${_NEW_APT_MIRROR} ${_NEW_APT_REPSRC} main contrib non-free\" >> ${_aptLiSys}\n      echo \"\" >> ${_aptLiSys}\n      echo \"## DEBIAN SECURITY UPDATES\" >> ${_aptLiSys}\n      echo \"deb http://${_NEW_SEC_MIRROR} ${_NEW_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      echo \"deb-src http://${_NEW_SEC_MIRROR} ${_NEW_SEC_REPSRC} main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      if [ \"${_USE_BACKPORTS}\" = \"YES\" ]; then\n        echo \"\" >> ${_aptLiSys}\n        echo \"## DEBIAN BACKPORTS REPOSITORY\" >> ${_aptLiSys}\n        echo \"deb http://${_NEW_APT_MIRROR} ${_NEW_APT_REPSRC}-backports main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n        echo \"deb-src http://${_NEW_APT_MIRROR} ${_NEW_APT_REPSRC}-backports main contrib non-free\" | fmt -su -w 2500 >> ${_aptLiSys}\n      fi\n      if [ -e \"/etc/apt/apt.conf\" ]; then\n        rm -f /etc/apt/apt.conf\n      fi\n    fi\n    ###\n    ### Make sure that systemd packages are locked in apt.\n    if [ ! 
-e \"/etc/apt/preferences.d/offsystemd\" ]; then\n      rm -f /etc/apt/preferences.d/systemd\n      echo -e 'Package: systemd\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/offsystemd\n      echo -e '\\n\\nPackage: *systemd*\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/offsystemd\n    fi\n    ###\n    ### Update packages registry.\n    _apt_clean_update_no_releaseinfo_change\n    _mrun \"${_APT_UPDATE} ${_aptAllow} -qq\"\n    ###\n    ### Two step upgrade with apt-get only.\n    _mrun \"apt-get install apt -t ${_NEW_OS_CODE} ${_dstUpArg}\"\n    _apt_clean_update_no_releaseinfo_change\n    _mrun \"${_APT_UPDATE} ${_aptAllow} -qq\"\n    ###\n    ### Requirement for Debian Stretch to Devuan Beowulf migration.\n    ### The libtinfo package needs to be upgraded to prevent breaks.\n    if [ \"${_LOC_OS_CODE}\" = \"stretch\" ]; then\n      _mrun \"apt-get install libtinfo6 ${_dstUpArg}\"\n    fi\n    ###\n    ### Note that this upgrade does not complete the migration yet.\n    _mrun \"apt-get upgrade ${_dstUpArg}\"\n    _mrun \"apt-get install apt dpkg aptitude util-linux ${_dstUpArg}\"\n    ###\n    ### Back to BOA stuff re-install as needed.\n    if [ -e \"/etc/init.d/bind9\" ] && [ ! 
-e \"/etc/init.d/bind\" ]; then\n      ln -sfn /etc/init.d/bind9 /etc/init.d/bind\n    fi\n    if [ -e \"${_mtrInc}/nginx_vhost_common.conf\" ]; then\n      if [ -e \"/root/.debug.cnf\" ]; then\n        mv -f /etc/init.d/networking /etc/init.d/networking.bak\n        cp -af /etc/init.d/networking.dpkg-dist /etc/init.d/networking\n        chmod 755 /etc/init.d/networking\n        ls -la /etc/init.d/networking\n      fi\n    fi\n    if [ -d \"/var/www/cgp\" ]; then\n      _mrun \"apt-get install collectd ${_dstUpArg}\"\n    fi\n    _mrun \"apt-get install lsb-release ${_dstUpArg}\"\n    if [ \"${_LOC_OS_CODE}\" != \"excalibur\" ]; then\n      _mrun \"apt-get install libc-client2007e-dev ${_dstUpArg}\"\n    fi\n    ###\n    ### Make sure that org.freedesktop.systemd1 was not attempted by dbus.\n#     _mrun \"apt-get remove dbus -y --purge --auto-remove -qq\"\n#     if [ -e \"/usr/share/dbus-1\" ]; then\n#       rm -f /usr/share/dbus-1/*/*freedesktop*\n#     fi\n    ###\n    ### Fix any broken packages.\n    _apt_clean_update_no_releaseinfo_change\n    _mrun \"apt-get --fix-broken install -y\"\n    ###\n    ### Make sure that sysvinit-core is still installed and systemd removed.\n    _sysvinit_install\n    _systemd_remove_apt_cmd\n    _sysvinit_install\n    ###\n    ### Prepare for the last step.\n    if [ \"${_TGT_OSN}\" = \"Devuan\" ]; then\n      ###\n      ### The last step before migration to Devuan is to switch to eudev.\n      ### Jessie exception needs testing, though.\n      if [ \"${_LOC_OS_CODE}\" != \"jessie\" ]; then\n        _mrun \"apt-get install eudev ${_dstUpArg}\"\n      fi\n    fi\n    ###\n    ### Upgrade all packages so that you have the latest versions.\n    if [ \"${_LOC_OS_CODE}\" = \"stretch\" ]; then\n      _mrun \"apt-get upgrade ${_dstUpArg}\"\n      #_mrun \"apt-get full-upgrade ${_dstUpArg}\"\n      _mrun \"apt-get install lsb-release ${_dstUpArg}\"\n    else\n      _mrun \"apt-get upgrade ${_dstUpArg}\"\n      _mrun \"apt-get install 
lsb-release ${_dstUpArg}\"\n    fi\n    ###\n    ### Force install/upgrade to give it another chance to complete.\n    _mrun \"apt-get ${_dstUpArg} install\"\n    ###\n    ### Upgrade again all packages so that you have the latest versions.\n    if [ \"${_LOC_OS_CODE}\" = \"stretch\" ]; then\n      _mrun \"apt-get upgrade ${_dstUpArg}\"\n      #_mrun \"apt-get full-upgrade ${_dstUpArg}\"\n      _mrun \"apt-get install lsb-release ${_dstUpArg}\"\n    else\n      _mrun \"apt-get upgrade ${_dstUpArg}\"\n      _mrun \"apt-get install lsb-release ${_dstUpArg}\"\n    fi\n    _mrun \"apt-get upgrade -y\"\n    ###\n    ### Make sure that sysvinit-core is still installed and systemd removed.\n    _sysvinit_install\n    _systemd_remove_apt_cmd\n    _sysvinit_install\n    ###\n    ### Finally, the proper dist-upgrade time.\n    ### Note that Debian to Devuan migration always requires\n    ### the key dist-upgrade step to be run AFTER reboot!\n    _FORCE_REBUILD_SRC_ON_AUTO_NOW=NO\n    case \"${_LOC_OS_CODE}\" in\n      trixie|bookworm|bullseye|buster|stretch|jessie)\n        _debToDevHop=YES\n      ;;\n      *)\n        _debToDevHop=NO\n      ;;\n    esac\n    if [ \"${_debToDevHop}\" = \"YES\" ] && [ \"${_TGT_OSN}\" = \"Devuan\" ]; then\n      echo\n      _msg \"ATTN: NOTE! Debian ${_LOC_OS_CODE^} to Devuan migration\"\n      _msg \"ATTN: requires the key dist-upgrade step AFTER reboot, so\"\n      _msg \"ATTN: you MUST RUN ANOTHER Barracuda upgrade after reboot\"\n    else\n      echo\n      _mrun \"apt-get dist-upgrade ${_dstUpArg} --allow-downgrades\"\n      _mrun \"apt-get dist-upgrade ${_dstUpArg} --allow-downgrades\"\n      [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n      _mrun \"apt-get dist-upgrade ${_aptYesUnth} --allow-downgrades\"\n      _mrun \"apt-get dist-upgrade ${_aptYesUnth} --allow-downgrades\"\n      [ -e \"/var/lib/man-db/auto-update\" ] && rm -f /var/lib/man-db/auto-update\n      _mrun \"apt-get install lsb-release ${_dstUpArg}\"\n      ###\n      if [ \"${_LOC_OS_CODE}\" = \"daedalus\" ] && [ \"${_NEW_OS_CODE}\" = \"excalibur\" ]; then\n        _useFast=daedalus\n        _nextOS=Excalibur\n      elif [ \"${_LOC_OS_CODE}\" = \"chimaera\" ] && [ \"${_NEW_OS_CODE}\" = \"daedalus\" ]; then\n        _useFast=chimaera\n        _nextOS=Daedalus\n      else\n        
_useFast=NO\n      fi\n      if [ \"${_useFast}\" != \"NO\" ]; then\n        ###\n        ### Fix PHP early\n        ###\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DIST: Special procedure for ${_LOC_OS_CODE} to ${_NEW_OS_CODE} upgrade\"\n        fi\n        ###\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"TRCK: Removing libldap packages for a clean start\"\n        fi\n        ###\n        for _PKG in libldap-common libldap2-dev libldap-dev; do\n          if _pkg_installed \"${_PKG}\"; then\n            _mrun \"apt-get remove ${_PKG} -y -qq\"\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              _msg \"PCKG: ${_PKG} removed as requested.\"\n            fi\n          fi\n        done\n        ###\n        _apt_clean_update\n        ###\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"TRCK: Installing now libldap packages from ${_useFast}\"\n        fi\n        if [ -n \"${_LOCAL_DEVUAN_MIRROR}\" ]; then\n          _MIRROR=\"${_LOCAL_DEVUAN_MIRROR}\"\n        else\n          _DVN_MRR=\"$(_find_fast_devuan_mirror)\"\n          if [ -n \"${_DVN_MRR}\" ]; then\n            _MIRROR=\"${_DVN_MRR}\"\n          else\n            _MIRROR=\"http://deb.devuan.org/merged\"\n          fi\n        fi\n        echo \"## DEVUAN MAIN REPOSITORIES\" > /etc/apt/sources.list.d/${_useFast}.list\n        echo \"deb ${_MIRROR} ${_useFast} main contrib non-free\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"deb-src ${_MIRROR} ${_useFast} main contrib non-free\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"## DEVUAN MAJOR BUG FIX UPDATES produced after the final release\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"deb ${_MIRROR} ${_useFast}-updates main contrib non-free\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"deb-src ${_MIRROR} ${_useFast}-updates main contrib non-free\" >> 
/etc/apt/sources.list.d/${_useFast}.list\n        echo \"\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"## DEVUAN SECURITY UPDATES\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"deb ${_MIRROR} ${_useFast}-security main contrib non-free\" >> /etc/apt/sources.list.d/${_useFast}.list\n        echo \"deb-src ${_MIRROR} ${_useFast}-security main contrib non-free\" >> /etc/apt/sources.list.d/${_useFast}.list\n        if [ \"${_USE_BACKPORTS}\" = \"YES\" ]; then\n          echo \"\" >> /etc/apt/sources.list.d/${_useFast}.list\n          echo \"## DEVUAN BACKPORTS REPOSITORY\" >> /etc/apt/sources.list.d/${_useFast}.list\n          echo \"deb ${_MIRROR} ${_useFast}-backports main contrib non-free\" >> /etc/apt/sources.list.d/${_useFast}.list\n          echo \"deb-src ${_MIRROR} ${_useFast}-backports main contrib non-free\" >> /etc/apt/sources.list.d/${_useFast}.list\n        fi\n        echo \"Acquire::Check-Valid-Until \\\"false\\\";\" >> /etc/apt/apt.conf\n        _apt_clean_update\n        for _PKG in libldap-common libldap2-dev; do\n          if ! 
_pkg_installed \"${_PKG}\"; then\n            _mrun \"apt-get install -t ${_useFast} ${_PKG} ${_aptYesUnth}\"\n            _mrun \"apt-get install -t ${_useFast}-security ${_PKG} ${_aptYesUnth}\"\n          fi\n        done\n        ###\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"TRCK: Locking libldap packages to prevent upgrade on ${_nextOS}\"\n        fi\n        ### Make sure that libldap packages are locked in apt.\n        echo -e 'Package: libldap-common\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/libldap\n        echo -e '\\n\\nPackage: libldap2-dev\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/libldap\n        echo -e '\\n\\nPackage: libldap-dev\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/libldap\n        _apt_clean_update\n        ###\n        _LDAP_GET_DPKG=$(dpkg --get-selections | grep libldap-common | grep 'hold$' 2>&1)\n        if [[ ! \"${_LDAP_GET_DPKG}\" =~ \"hold\" ]]; then\n          aptitude hold libldap-common &> /dev/null\n          aptitude hold libldap2-dev &> /dev/null\n          aptitude hold libldap-dev &> /dev/null\n          echo \"libldap-common hold\" | dpkg --set-selections &> /dev/null\n          echo \"libldap2-dev hold\" | dpkg --set-selections &> /dev/null\n          echo \"libldap-dev hold\" | dpkg --set-selections &> /dev/null\n          _apt_clean_update\n        fi\n        rm -f /etc/apt/sources.list.d/${_useFast}.list\n        rm -f /etc/apt/apt.conf\n        _apt_clean_update\n        ldconfig 2> /dev/null\n        ###\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DIST: Signal that --allow-downgrades can be used after reboot\"\n        fi\n        touch /root/.allow.downgrades.on.same.os.dist.upgrade.cnf\n        ###\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DIST: Rebuilding key services from sources before reboot...\"\n        fi\n        _FORCE_REBUILD_SRC_ON_AUTO_NOW=YES\n        touch 
/root/.force.rebuild.src.on.auto.now.cnf\n        _if_rebuild_src_on_major_os_upgrade\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"DIST: No quick rebuild from src on ${_LOC_OS_CODE} to ${_NEW_OS_CODE} upgrade\"\n        fi\n      fi\n      ###\n    fi\n    ### Make sure that sysvinit-core is still installed and systemd removed.\n    _sysvinit_install\n    _systemd_remove_apt_cmd\n    _sysvinit_install\n    ### Update rsyslog configuration early\n    _rsyslog_config_update\n    ### Reload key services if needed early\n    if [ -e \"/etc/init.d/valkey-server\" ]; then\n      _mrun \"service valkey-server reload\"\n    elif [ -e \"/etc/init.d/redis-server\" ]; then\n      _mrun \"service redis-server reload\"\n    fi\n    _mrun \"service nginx reload\"\n    _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n    for e in ${_PHP_V}; do\n      if [ -e \"/etc/init.d/php${e}-fpm\" ] && [ -e \"/opt/php${e}/bin/php\" ]; then\n        _mrun \"service php${e}-fpm reload\"\n      fi\n    done\n    nohup /var/xdrago/minute.sh > /dev/null 2>&1 &\n    if [ -e \"/var/xdrago/proc_num_ctrl.pl\" ]; then\n      _spawn_detached 'perl /var/xdrago/proc_num_ctrl.pl'\n    fi\n    ### Add info file to keep track of major OS upgrades.\n    touch /root/.${_LOC_OS_CODE}_to_${_NEW_OS_CODE}_major_os_upgrade.info\n    ### Force NR upgrade\n    touch /root/.force.newrelic.update.cnf\n    ### Re-install key services early to limit downtime and SSH key confusion.\n    if [ -e \"/root/.run-to-excalibur.cnf\" ] \\\n      || [ -e \"/root/.run-to-daedalus.cnf\" ] \\\n      || [ -e \"/root/.run-to-chimaera.cnf\" ] \\\n      || [ -e \"/root/.run-to-beowulf.cnf\" ]; then\n      _REBUILD_SRC_ON_AUTO_NOW=NO\n    else\n      _REBUILD_SRC_ON_AUTO_NOW=YES\n    fi\n    if [ \"${_FORCE_REBUILD_SRC_ON_AUTO_NOW}\" = \"YES\" ]; then\n      _REBUILD_SRC_ON_AUTO_NOW=YES\n    fi\n    rm -f /root/.rebuild_src_on_auto_before_reboot.info\n    if [ \"${_REBUILD_SRC_ON_AUTO_NOW}\" = \"YES\" ] 
\\\n      && [ \"${_ALLOW_HEAVY_REBUILDS}\" = \"YES\" ]; then\n      touch /root/.rebuild_src_on_auto_before_reboot.info\n      rm -f /var/log/boa/*.log\n      _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n      _PURGE_MODE=OFF\n      _NGX_FORCE_REINSTALL=YES\n      _PHP_FORCE_REINSTALL=YES\n      _SSL_FORCE_REINSTALL=YES\n      _SSH_FORCE_REINSTALL=YES\n      _php_libs_fix\n      _if_reinstall_curl_src\n      _if_ssl_install_src\n      _sync_system_ssl_certs\n      _ssl_paths_sync\n      _ssl_crypto_lib_fix\n      _curl_install_src\n      _sshd_install_src\n      _nginx_install_upgrade\n      _magick_install_upgrade\n      _php_install_deps\n      _php_libs_fix\n      _php_if_versions_cleanup_cnf\n      if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n        _php_ioncube_check_if_update\n        _php_check_if_rebuild\n      fi\n      _php_install_upgrade\n      _php_config_check_update\n      _php_upgrade_all\n      _if_install_php_newrelic\n      _newrelic_check_fix\n    fi\n    if [ \"${_TGT_OSN}\" = \"Devuan\" ]; then\n      if [ \"${_LOC_OS_CODE}\" = \"bookworm\" ] \\\n        || [ \"${_LOC_OS_CODE}\" = \"bullseye\" ] \\\n        || [ \"${_LOC_OS_CODE}\" = \"buster\" ]; then\n        touch /root/.small_hop_on_major_os_upgrade.info\n      else\n        touch /root/.big_hop_on_major_os_upgrade.info\n      fi\n    else\n      touch /root/.big_hop_on_major_os_upgrade.info\n    fi\n    # if [ -e \"/root/.big_hop_on_major_os_upgrade.info\" ]; then\n    #   _PHP_FORCE_REINSTALL=YES\n    #   _SQL_FORCE_REINSTALL=YES\n    #   _db_server_install\n    #   _myquick_install_upgrade\n    #   _php_install_deps\n    #   _php_libs_fix\n    #   _php_install_upgrade\n    #   _php_config_check_update\n    #   _php_upgrade_all\n    #   _if_install_php_newrelic\n    #   _newrelic_check_fix\n    #   _php_ioncube_check_if_update\n    #   _php_check_if_rebuild\n    # fi\n    ###\n    ### Time for init scripts cleanup for VM running beng kernel.\n    
_VM_TEST=\"$(uname -a)\"\n    if [[ \"${_VM_TEST}\" =~ \"-beng\" ]]; then\n      _PTMX=OK\n      _REMOVE_LINKS=\"buagent \\\n                     checkroot.sh \\\n                     fancontrol \\\n                     halt \\\n                     hwclock.sh \\\n                     hwclockfirst.sh \\\n                     ifupdown \\\n                     ifupdown-clean \\\n                     kerneloops \\\n                     klogd \\\n                     mountall-bootclean.sh \\\n                     mountall.sh \\\n                     mountdevsubfs.sh \\\n                     mountkernfs.sh \\\n                     mountnfs-bootclean.sh \\\n                     mountnfs.sh \\\n                     mountoverflowtmp \\\n                     mountvirtfs \\\n                     mtab.sh \\\n                     networking \\\n                     procps \\\n                     reboot \\\n                     sendsigs \\\n                     setserial \\\n                     svscan \\\n                     sysstat \\\n                     umountfs \\\n                     umountnfs.sh \\\n                     umountroot \\\n                     urandom \\\n                     vnstat\"\n      for _link in ${_REMOVE_LINKS}; do\n        if [ -e \"/etc/init.d/${_link}\" ]; then\n          _mrun \"update-rc.d -f ${_link} remove\"\n          mv -f /etc/init.d/${_link} /var/backups/init.d.${_link}\n        fi\n      done\n      for s in cron dbus ssh; do\n        if [ -e \"/etc/init.d/${s}\" ]; then\n          sed -rn -e 's/^(# Default-Stop:).*$/\\1 0 1 6/' -e '/^### BEGIN INIT INFO/,/^### END INIT INFO/p' /etc/init.d/${s} > /etc/insserv/overrides/${s}\n        fi\n      done\n      /sbin/insserv -v -d &> /dev/null\n    else\n      _PTMX=CHECK\n    fi\n    ###\n    ### For extra debugging only.\n    if [ -e \"/root/.debug.cnf\" ]; then\n      _PTS_TEST=$(cat /proc/mounts | grep devpts 2>&1)\n      if [[ ! \"${_PTS_TEST}\" =~ \"devpts\" ]] && [ ! 
-e \"/dev/pts/ptmx\" ]; then\n        _PTS=FIX\n      else\n        _PTS=OK\n      fi\n      if [ \"${_PTMX}\" = \"CHECK\" ] && [ \"${_PTS}\" = \"FIX\" ]; then\n        _msg \"WARN: Required /dev/pts/ptmx does not exist! We will fix this now...\"\n        mkdir -p /dev/pts\n        rm -rf /dev/pts/*\n        _apt_clean_update_no_releaseinfo_change\n        _mrun \"apt-get install udev ${_aptYesUnth}\"\n        echo \"devpts          /dev/pts        devpts  rw,noexec,nosuid,gid=5,mode=620 0  0\" >> /etc/fstab\n        mount -t devpts devpts /dev/pts &> /dev/null\n      fi\n    fi\n    ###\n    ### Make sure that Ægir Master system user is added to sudo.\n    _VAR_IF_PRESENT=$(grep \"aegir ALL=NOPASSWD\" /etc/sudoers 2>&1)\n    if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"aegir ALL=NOPASSWD\" ]]; then\n      echo \"aegir ALL=NOPASSWD: /etc/init.d/nginx\" >> /etc/sudoers\n    fi\n    ###\n    ### Make sure that Ægir special system-wide scripts are added to sudo.\n    _SCRIPTS=(fix-drupal-platform-permissions fix-drupal-site-permissions fix-drupal-platform-ownership fix-drupal-site-ownership lock-local-drush-permissions)\n    for _SCRIPT in ${_SCRIPTS[@]}; do\n      _VAR_IF_PRESENT=$(grep \"aegir ALL=NOPASSWD: /usr/local/bin/${_SCRIPT}.sh\" /etc/sudoers.d/${_SCRIPT} 2>&1)\n      if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"aegir ALL=NOPASSWD\" ]]; then\n        echo \"aegir ALL=NOPASSWD: /usr/local/bin/${_SCRIPT}.sh\" >> /etc/sudoers.d/${_SCRIPT}\n        chmod 0440 /etc/sudoers.d/${_SCRIPT}\n      fi\n    done\n    ###\n    ### Make sure that Ægir Octopus system users are added to sudo.\n    if [ -d \"/data/u\" ]; then\n      for _usEr in `find /data/disk/ -maxdepth 1 -mindepth 1 | sort`; do\n        if [ -e \"${_usEr}/config/server_master/nginx/vhost.d\" ] \\\n          && [ ! -e \"${_usEr}/log/proxied.pid\" ] \\\n          && [ ! 
-e \"${_usEr}/log/CANCELLED\" ]; then\n          _HM_U=$(echo ${_usEr} | cut -d'/' -f4 | awk '{ print $1}' 2>&1)\n          _VAR_IF_PRESENT=$(grep \"${_HM_U} ALL=NOPASSWD\" /etc/sudoers 2>&1)\n          if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"${_HM_U} ALL=NOPASSWD\" ]]; then\n            echo \"${_HM_U} ALL=NOPASSWD: /etc/init.d/nginx\" >> /etc/sudoers\n          fi\n          _SCRIPTS=(fix-drupal-platform-permissions fix-drupal-site-permissions fix-drupal-platform-ownership fix-drupal-site-ownership lock-local-drush-permissions)\n          for _SCRIPT in ${_SCRIPTS[@]}; do\n            _VAR_IF_PRESENT=$(grep \"${_HM_U} ALL=NOPASSWD: /usr/local/bin/${_SCRIPT}.sh\" /etc/sudoers.d/${_SCRIPT} 2>&1)\n            if [[ ! \"${_VAR_IF_PRESENT}\" =~ \"${_HM_U} ALL=NOPASSWD\" ]]; then\n              echo \"${_HM_U} ALL=NOPASSWD: /usr/local/bin/${_SCRIPT}.sh\" >> /etc/sudoers.d/${_SCRIPT}\n              chmod 0440 /etc/sudoers.d/${_SCRIPT}\n            fi\n          done\n        fi\n      done\n    fi\n    ###\n    ### Final cleanup.\n    echo rotate > /var/log/syslog\n    rm -f /var/log/boa/*.log\n    mv -f /var/xdrago_wait /var/xdrago &> /dev/null\n    ###\n    ### Add ctrl file to trigger _if_post_major_os_upgrade() after reboot.\n    touch /root/.run_post_major_os_upgrade.info\n    echo \" \"\n    _msg \"RLLY: No errors? ${_MSG_LOC} migration worked :)\"\n    _msg \"STEP: Please restart this server now with 'boa reboot' command\"\n    _msg \"STEP: Once the system is up, run 'barracuda up-${_tRee}' command again\"\n    echo \" \"\n    _msg \"Bye\"\n    touch /root/.latest-barracuda-upgrade-finale.info\n    _barracuda_cnf_cleanup\n    _clean_pid_exit\n  fi\n}\n\n_webmin_apt_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _webmin_apt_update\"\n  fi\n  if [ -d \"/etc/webmin\" ]; then\n    if [ ! 
-e \"${_pthLog}/webmin_update_apt_src.log\" ]; then\n      cd /var/opt\n      echo \"## Webmin APT Repository\" > /etc/apt/sources.list.d/webmin.list\n      echo \"deb http://download.webmin.com/download/repository \\\n        sarge contrib\" | fmt -su -w 2500 >> /etc/apt/sources.list.d/webmin.list\n      echo \"deb http://webmin.mirror.somersettechsolutions.co.uk/repository \\\n        sarge contrib\" | fmt -su -w 2500 >> /etc/apt/sources.list.d/webmin.list\n      _KEYS_SERVER_TEST=FALSE\n      until [[ \"${_KEYS_SERVER_TEST}\" =~ \"GnuPG\" ]]; do\n        rm -f jcameron-key.gpg*\n        wget ${_wgetGet} \"${_urlDev}/jcameron-key.gpg\"\n        _KEYS_SERVER_TEST=$(grep GnuPG jcameron-key.gpg 2>&1)\n        sleep 2\n      done\n      if [ -x \"/usr/bin/gpg2\" ]; then\n        _GPG=gpg2\n      else\n        _GPG=gpg\n      fi\n      cat jcameron-key.gpg | ${_GPG} --import &> /dev/null\n      rm -f jcameron-key.gpg*\n      touch ${_pthLog}/webmin_update_apt_src.log\n    fi\n  fi\n}\n\n_early_sys_ctrl_mark() {\n  if [ -e \"/root/.run_post_major_os_upgrade.info\" ]; then\n    touch /root/.early-sys-ctrl-mark.cnf\n  else\n    [ -e \"/root/.early-sys-ctrl-mark.cnf\" ] && rm -f /root/.early-sys-ctrl-mark.cnf\n  fi\n}\n\n_normal_sys_ctrl_mark() {\n  if [ -e \"/root/.latest-barracuda-upgrade-finale.info\" ]; then\n    rm -f /root/.latest-barracuda-upgrade-finale.info\n    touch /root/.normal-sys-ctrl-mark.cnf\n  else\n    [ -e \"/root/.normal-sys-ctrl-mark.cnf\" ] && rm -f /root/.normal-sys-ctrl-mark.cnf\n  fi\n}\n\n_sys_packages_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sys_packages_update\"\n  fi\n  _msg \"INFO: Running system packages update...\"\n  if [ -d \"/data/u\" ] && [ -e \"/data/conf/global.inc\" ]; then\n    aptitude hold curl &> /dev/null\n    aptitude hold libldap-common &> /dev/null\n    aptitude hold libldap-dev &> /dev/null\n    aptitude hold libldap2-dev &> /dev/null\n    aptitude hold libmariadb3 &> /dev/null\n    
aptitude hold mailutils &> /dev/null\n    aptitude hold mariadb-common &> /dev/null\n    aptitude hold nginx &> /dev/null\n    aptitude hold nginx-common &> /dev/null\n    aptitude hold openssh-client &> /dev/null\n    aptitude hold openssh-server &> /dev/null\n    aptitude hold openssh-sftp-server &> /dev/null\n    aptitude hold percona-release &> /dev/null\n    aptitude hold ssh &> /dev/null\n    echo \"curl hold\" | dpkg --set-selections &> /dev/null\n    echo \"libldap-common hold\" | dpkg --set-selections &> /dev/null\n    echo \"libldap-dev hold\" | dpkg --set-selections &> /dev/null\n    echo \"libldap2-dev hold\" | dpkg --set-selections &> /dev/null\n    echo \"libmariadb3 hold\" | dpkg --set-selections &> /dev/null\n    echo \"mailutils hold\" | dpkg --set-selections &> /dev/null\n    echo \"mariadb-common hold\" | dpkg --set-selections &> /dev/null\n    echo \"nginx hold\" | dpkg --set-selections &> /dev/null\n    echo \"nginx-common hold\" | dpkg --set-selections &> /dev/null\n    echo \"openssh-client hold\" | dpkg --set-selections &> /dev/null\n    echo \"openssh-server hold\" | dpkg --set-selections &> /dev/null\n    echo \"openssh-sftp-server hold\" | dpkg --set-selections &> /dev/null\n    echo \"percona-release hold\" | dpkg --set-selections &> /dev/null\n    echo \"ssh hold\" | dpkg --set-selections &> /dev/null\n  fi\n  _webmin_apt_update\n  _apt_clean_update\n  if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    _DPKG_CNF=\"confold\"\n  else\n    _DPKG_CNF=\"confnew\"\n  fi\n  _mrun \"aptitude full-upgrade -f -y -q \\\n    --allow-untrusted \\\n    -o Dpkg::Options::=--force-confmiss \\\n    -o Dpkg::Options::=--force-confdef \\\n    -o Dpkg::Options::=--force-${_DPKG_CNF}\"\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _apt_clean_update\n    _mrun \"aptitude full-upgrade -f -y -q \\\n      --allow-untrusted \\\n      -o Dpkg::Options::=--force-confmiss \\\n      -o Dpkg::Options::=--force-confdef \\\n      -o 
Dpkg::Options::=--force-${_DPKG_CNF}\"\n    _mrun \"apt-get autoclean -y\"\n  else\n    echo \"gnupg-curl install\" | dpkg --set-selections &> /dev/null\n    rm -f /var/lib/mysql/debian-*.flag &> /dev/null\n    _UP_JDK=NO\n    _UP_LNX=NO\n    _UP_NRC=NO\n    _UP_PHP=NO\n    _UP_PXC=NO\n    _UP_SQL=NO\n    _check_apt_updates\n  fi\n}\n\n_other_sys_java21_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _other_sys_java21_install\"\n  fi\n\n  _debArch=\"$1\"\n\n  # Map Debian arch -> Corretto arch tokens\n  case \"${_debArch}\" in\n    amd64)  _crtArch=\"x64\" ;;\n    arm64)  _crtArch=\"aarch64\" ;;\n    *)\n      _crtArch=\"x64\"\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"WARN: Unsupported arch '${_debArch}', falling back to x64\"\n      fi\n    ;;\n  esac\n\n  _javaHome=\"/usr/lib/jvm/java-21-openjdk-${_debArch}\"\n  _base=\"amazon-corretto-21-${_crtArch}-linux\"\n  _tmpDir=\"/tmp\"\n  _optDir=\"/opt/jdk\"\n  _linkOpt=\"${_optDir}/corretto-21\"\n  _linkJvm=\"${_javaHome}\"\n\n  _mrun \"mkdir -p ${_optDir}\"\n  _mrun \"mkdir -p /usr/lib/jvm\"\n\n  # Prefer JRE tarball; fall back to JDK tarball if needed.\n  _kind=\"jre\"\n  _tar=\"${_base}-${_kind}.tar.gz\"\n  _url=\"https://corretto.aws/downloads/latest/${_tar}\"\n  _shaUrl=\"https://corretto.aws/downloads/latest_sha256/${_tar}\"\n\n  if command -v curl >/dev/null 2>&1; then\n    _SHA=\"$(curl -fsSL \"${_shaUrl}\" 2>/dev/null | awk '{print $1}')\"\n    if [ -z \"${_SHA}\" ]; then\n      _kind=\"jdk\"\n      _tar=\"${_base}-${_kind}.tar.gz\"\n      _url=\"https://corretto.aws/downloads/latest/${_tar}\"\n      _shaUrl=\"https://corretto.aws/downloads/latest_sha256/${_tar}\"\n      _SHA=\"$(curl -fsSL \"${_shaUrl}\" 2>/dev/null | awk '{print $1}')\"\n    fi\n    if [ -z \"${_SHA}\" ]; then\n      _msg \"ERROR: Unable to fetch SHA256 for Corretto 21 (${_crtArch})\"\n      return 1\n    fi\n    _mrun \"rm -f ${_tmpDir}/${_tar}\"\n    _mrun \"curl -fL --retry 3 --retry-delay 
2 -o ${_tmpDir}/${_tar} ${_url}\"\n  elif command -v wget >/dev/null 2>&1; then\n    _SHA=\"$(wget -qO- \"${_shaUrl}\" 2>/dev/null | awk '{print $1}')\"\n    if [ -z \"${_SHA}\" ]; then\n      _kind=\"jdk\"\n      _tar=\"${_base}-${_kind}.tar.gz\"\n      _url=\"https://corretto.aws/downloads/latest/${_tar}\"\n      _shaUrl=\"https://corretto.aws/downloads/latest_sha256/${_tar}\"\n      _SHA=\"$(wget -qO- \"${_shaUrl}\" 2>/dev/null | awk '{print $1}')\"\n    fi\n    if [ -z \"${_SHA}\" ]; then\n      _msg \"ERROR: Unable to fetch SHA256 for Corretto 21 (${_crtArch})\"\n      return 1\n    fi\n    _mrun \"rm -f ${_tmpDir}/${_tar}\"\n    _mrun \"wget -qO ${_tmpDir}/${_tar} ${_url}\"\n  else\n    # If neither exists, install wget from the current OS repos only (no extra sources).\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"TRCK: curl/wget missing; installing wget + ca-certificates from current OS repos\"\n    fi\n    _mrun \"apt-get update ${_aptYesUnth}\"\n    _mrun \"apt-get install -y wget ca-certificates ${_aptYesUnth}\"\n    _SHA=\"$(wget -qO- \"${_shaUrl}\" 2>/dev/null | awk '{print $1}')\"\n    if [ -z \"${_SHA}\" ]; then\n      _kind=\"jdk\"\n      _tar=\"${_base}-${_kind}.tar.gz\"\n      _url=\"https://corretto.aws/downloads/latest/${_tar}\"\n      _shaUrl=\"https://corretto.aws/downloads/latest_sha256/${_tar}\"\n      _SHA=\"$(wget -qO- \"${_shaUrl}\" 2>/dev/null | awk '{print $1}')\"\n    fi\n    if [ -z \"${_SHA}\" ]; then\n      _msg \"ERROR: Unable to fetch SHA256 for Corretto 21 (${_crtArch})\"\n      return 1\n    fi\n    _mrun \"rm -f ${_tmpDir}/${_tar}\"\n    _mrun \"wget -qO ${_tmpDir}/${_tar} ${_url}\"\n  fi\n\n  # Verify integrity\n  _mrun \"cd ${_tmpDir} && echo ${_SHA}  ${_tar} | sha256sum -c -\"\n\n  # Extract and point stable symlinks\n  _topDir=\"$(tar -tzf \"${_tmpDir}/${_tar}\" 2>/dev/null | head -n1 | cut -d/ -f1)\"\n  if [ -z \"${_topDir}\" ]; then\n    _msg \"ERROR: Unable to read Corretto archive structure: 
${_tmpDir}/${_tar}\"\n    return 1\n  fi\n\n  _mrun \"tar -xzf ${_tmpDir}/${_tar} -C ${_optDir}\"\n\n  _real=\"${_optDir}/${_topDir}\"\n  if [ ! -x \"${_real}/bin/java\" ]; then\n    _msg \"ERROR: Extracted Corretto path missing bin/java: ${_real}\"\n    return 1\n  fi\n\n  _mrun \"ln -sfn ${_real} ${_linkOpt}\"\n  _mrun \"ln -sfn ${_linkOpt} ${_linkJvm}\"\n\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"DONE: Java 21 installed at ${_linkJvm}\"\n  fi\n}\n\n_other_sys_java_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _other_sys_java_install\"\n  fi\n\n  _systVer=\"$1\"\n  _javaVer=\"$2\"\n  _aptList=\"/etc/apt/sources.list.d/${_systVer}.list\"\n\n  # Detect Debian arch token for /usr/lib/jvm paths\n  _debArch=\"$(dpkg --print-architecture 2>/dev/null)\"\n  if [ -z \"${_debArch}\" ]; then\n    case \"$(uname -m 2>/dev/null)\" in\n      x86_64) _debArch=\"amd64\" ;;\n      aarch64|arm64) _debArch=\"arm64\" ;;\n      *) _debArch=\"amd64\" ;;\n    esac\n  fi\n\n  _javaHome=\"/usr/lib/jvm/java-${_javaVer}-openjdk-${_debArch}\"\n\n  _FORCE_JAVA_UPGRADE=\n\n  # Special-case Java 21: do NOT use foreign apt sources (avoids Excalibur upgrade pressure)\n  if [ \"${_javaVer}\" = \"21\" ] && [ \"${_OS_CODE}\" != \"excalibur\" ]; then\n    _JAVA_NEW=21.0.11\n    _JAVA_ITD=$(java21 --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_JAVA_ITD}\" != \"${_JAVA_NEW}\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      _FORCE_JAVA_UPGRADE=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installing Java ${_JAVA_NEW}...\"\n      fi\n    fi\n    if [ ! 
-x \"${_javaHome}/bin/java\" ] || [ -n \"${_FORCE_JAVA_UPGRADE}\" ]; then\n      _other_sys_java21_install \"${_debArch}\"\n    fi\n    return 0\n  fi\n\n  # Special-case Java 11/17 force install if version is too old\n  if [ \"${_javaVer}\" = \"11\" ]; then\n    _JAVA_NEW=11.0.31\n    _JAVA_ITD=$(java11 --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_JAVA_ITD}\" != \"${_JAVA_NEW}\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      _FORCE_JAVA_UPGRADE=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installing Java ${_JAVA_NEW}...\"\n      fi\n    fi\n  elif [ \"${_javaVer}\" = \"17\" ]; then\n    _JAVA_NEW=17.0.19\n    _JAVA_ITD=$(java17 --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_JAVA_ITD}\" != \"${_JAVA_NEW}\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      _FORCE_JAVA_UPGRADE=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installing Java ${_JAVA_NEW}...\"\n      fi\n    fi\n  fi\n\n  if [ ! 
-x \"${_javaHome}/bin/java\" ] || [ -n \"${_FORCE_JAVA_UPGRADE}\" ]; then\n    if [ -n \"${_LOCAL_DEVUAN_MIRROR}\" ]; then\n      _MIRROR=\"${_LOCAL_DEVUAN_MIRROR}\"\n    else\n      _DVN_MRR=\"$(_find_fast_devuan_mirror)\"\n      if [ -n \"${_DVN_MRR}\" ]; then\n        _MIRROR=\"${_DVN_MRR}\"\n      else\n        _MIRROR=\"http://deb.devuan.org/merged\"\n      fi\n    fi\n\n    # Safer than clobbering /etc/apt/apt.conf\n    _aptNoValidUntil=\"/etc/apt/apt.conf.d/99-${_systVer}-no-check-valid-until.conf\"\n\n    cat > \"${_aptList}\" <<EOF\n## DEVUAN MAIN REPOSITORIES\ndeb ${_MIRROR} ${_systVer} main contrib non-free\ndeb-src ${_MIRROR} ${_systVer} main contrib non-free\n\n## DEVUAN MAJOR BUG FIX UPDATES produced after the final release\ndeb ${_MIRROR} ${_systVer}-updates main contrib non-free\ndeb-src ${_MIRROR} ${_systVer}-updates main contrib non-free\n\n## DEVUAN SECURITY UPDATES\ndeb ${_MIRROR} ${_systVer}-security main contrib non-free\ndeb-src ${_MIRROR} ${_systVer}-security main contrib non-free\nEOF\n\n    if [ \"${_USE_BACKPORTS}\" = \"YES\" ]; then\n      cat >> \"${_aptList}\" <<EOF\n\n## DEVUAN BACKPORTS REPOSITORY\ndeb ${_MIRROR} ${_systVer}-backports main contrib non-free\ndeb-src ${_MIRROR} ${_systVer}-backports main contrib non-free\nEOF\n    fi\n\n    ### Note: apt.conf syntax requires the value in double quotes.\n    echo \"Acquire::Check-Valid-Until \\\"false\\\";\" > \"${_aptNoValidUntil}\"\n\n    _apt_clean_update\n\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"TRCK: Installing now openjdk-${_javaVer}-jre-headless from ${_systVer}\"\n    fi\n\n    _mrun \"apt-get install -t ${_systVer} openjdk-${_javaVer}-jre-headless ${_aptYesUnth}\"\n    _mrun \"apt-get install -t ${_systVer}-security openjdk-${_javaVer}-jre-headless ${_aptYesUnth}\"\n\n    rm -f \"${_aptList}\"\n    rm -f \"${_aptNoValidUntil}\"\n\n    _apt_clean_update\n  fi\n}\n\n_sys_packages_install() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sys_packages_install\"\n  fi\n  _if_to_do_fix\n  if [ \"${_STATUS}\" = \"INIT\" 
]; then\n    _msg \"INFO: Installing required libraries and tools...\"\n  else\n    _msg \"INFO: Upgrading required libraries and tools...\"\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _if_hosted_sys\n    if [[ \"${_XTRAS_LIST}\" =~ \"SR\" ]] \\\n      || [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n        _APT_XTRA=\"openjdk-7-jre-headless nginx\"\n      elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n        _APT_XTRA=\"openjdk-8-jre-headless nginx\"\n      elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n        _APT_XTRA=\"openjdk-11-jre-headless nginx\"\n      elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n        _APT_XTRA=\"openjdk-11-jre-headless openjdk-17-jre-headless nginx\"\n      elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n        _APT_XTRA=\"openjdk-17-jre-headless nginx\"\n        echo \"deb http://deb.debian.org/debian bullseye main contrib non-free\" > /etc/apt/sources.list.d/bullseye.list\n        echo \"Acquire::Check-Valid-Until \\\"false\\\";\" >> /etc/apt/apt.conf\n        _apt_clean_update\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"TRCK: Installing now openjdk-11-jre-headless from Bullseye\"\n        fi\n        _mrun \"apt-get install -t bullseye openjdk-11-jre-headless ${_aptYesUnth}\"\n        rm -f /etc/apt/sources.list.d/bullseye.list\n        rm -f /etc/apt/apt.conf\n        _apt_clean_update\n      elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n        _APT_XTRA=\"openjdk-11-jre-headless nginx\"\n      elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n        _APT_XTRA=\"openjdk-11-jre-headless openjdk-17-jre-headless nginx\"\n      elif [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n        _APT_XTRA=\"openjdk-21-jre-headless nginx\"\n        _other_sys_java_install \"daedalus\" \"17\"\n        _other_sys_java_install \"chimaera\" \"11\"\n      elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n        _APT_XTRA=\"openjdk-17-jre-headless nginx\"\n        
_other_sys_java_install \"chimaera\" \"11\"\n        _other_sys_java_install \"excalibur\" \"21\"\n      fi\n    else\n      _APT_XTRA=\"nginx\"\n    fi\n    _APT_ELSE=\"netcat-traditional nginx\"\n  else\n    _APT_ITEM=$(dpkg --get-selections | grep openjdk-6-jdk | grep install 2>&1)\n    if [[ \"${_APT_ITEM}\" =~ \"install\" ]]; then\n      _mrun \"apt-get remove openjdk-6-jdk -y --purge --auto-remove -qq\"\n    fi\n    _APT_ITEM=$(dpkg --get-selections | grep openjdk-7-jdk | grep install 2>&1)\n    if [[ \"${_APT_ITEM}\" =~ \"install\" ]]; then\n      _mrun \"apt-get remove openjdk-7-jdk -y --purge --auto-remove -qq\"\n    fi\n    _if_hosted_sys\n    if [[ \"${_XTRAS_LIST}\" =~ \"SR\" ]] \\\n      || [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n        _APT_XTRA=\"openjdk-7-jre-headless\"\n      elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n        _APT_XTRA=\"openjdk-8-jre-headless\"\n      elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n        _APT_XTRA=\"openjdk-11-jre-headless\"\n      elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n        _APT_XTRA=\"openjdk-11-jre-headless openjdk-17-jre-headless\"\n      elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n        _APT_XTRA=\"openjdk-17-jre-headless\"\n        echo \"deb http://deb.debian.org/debian bullseye main contrib non-free\" > /etc/apt/sources.list.d/bullseye.list\n        echo \"Acquire::Check-Valid-Until \\\"false\\\";\" >> /etc/apt/apt.conf\n        _apt_clean_update\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"TRCK: Installing now openjdk-11-jre-headless from Bullseye\"\n        fi\n        _mrun \"apt-get install -t bullseye openjdk-11-jre-headless ${_aptYesUnth}\"\n        rm -f /etc/apt/sources.list.d/bullseye.list\n        rm -f /etc/apt/apt.conf\n        _apt_clean_update\n      elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n        _APT_XTRA=\"openjdk-11-jre-headless\"\n      elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n        
_APT_XTRA=\"openjdk-11-jre-headless openjdk-17-jre-headless\"\n      elif [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n        _APT_XTRA=\"openjdk-21-jre-headless\"\n        _other_sys_java_install \"daedalus\" \"17\"\n        _other_sys_java_install \"chimaera\" \"11\"\n      elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n        _APT_XTRA=\"openjdk-17-jre-headless\"\n        _other_sys_java_install \"chimaera\" \"11\"\n        _other_sys_java_install \"excalibur\" \"21\"\n      fi\n    else\n      _APT_XTRA=\"\"\n    fi\n    _APT_ELSE=\"netcat-traditional\"\n    _apt_clean_update\n    _mrun \"service nginx start\"\n    ###\n    for _PKG in nginx-extras nginx nginx-common nginx-full redis-server percona-release; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove ${_PKG} -y -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    ###\n    ### Make sure that nginx packages are locked in apt.\n    if [ ! -e \"/etc/apt/preferences.d/nginx-common\" ]; then\n      rm -f /etc/apt/preferences.d/nginx\n      echo -e 'Package: nginx\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/nginx-common\n      echo -e '\\n\\nPackage: nginx-common\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/nginx-common\n      _apt_clean_update\n    fi\n    ###\n    _NGINX_GET_DPKG=$(dpkg --get-selections | grep nginx-common | grep 'hold$' 2>&1)\n    if [[ ! \"${_NGINX_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold nginx &> /dev/null\n      aptitude hold nginx-common &> /dev/null\n      echo \"nginx hold\" | dpkg --set-selections &> /dev/null\n      echo \"nginx-common hold\" | dpkg --set-selections &> /dev/null\n      _apt_clean_update\n    fi\n    ###\n    ### Make sure that percona-release package is locked in apt.\n    if [ ! 
-e \"/etc/apt/preferences.d/percona-release\" ]; then\n      echo -e 'Package: percona-release\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/percona-release\n      _apt_clean_update\n    fi\n    ###\n    _PERC_GET_DPKG=$(dpkg --get-selections | grep percona-release | grep 'hold$' 2>&1)\n    if [[ ! \"${_PERC_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold percona-release &> /dev/null\n      echo \"percona-release hold\" | dpkg --set-selections &> /dev/null\n      _apt_clean_update\n    fi\n    ###\n    for _PKG in libmariadb3 mariadb-common mailutils; do\n      if _pkg_installed \"${_PKG}\"; then\n        _mrun \"apt-get remove --purge ${_PKG} -y -qq\"\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"PCKG: ${_PKG} removed as requested.\"\n        fi\n      fi\n    done\n    ###\n    ### Make sure that mariadb-related packages are locked in apt.\n    if [ ! -e \"/etc/apt/preferences.d/mariadb-common\" ]; then\n      echo -e 'Package: libmariadb3\\nPin: release *\\nPin-Priority: -1' > /etc/apt/preferences.d/mariadb-common\n      echo -e '\\n\\nPackage: mariadb-common\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/mariadb-common\n      echo -e '\\n\\nPackage: mailutils\\nPin: release *\\nPin-Priority: -1' >> /etc/apt/preferences.d/mariadb-common\n      _apt_clean_update_no_releaseinfo_change\n    fi\n    ###\n    _MARIADB_GET_DPKG=$(dpkg --get-selections | grep mariadb-common | grep 'hold$' 2>&1)\n    if [[ ! \"${_MARIADB_GET_DPKG}\" =~ \"hold\" ]]; then\n      aptitude hold libmariadb3 &> /dev/null\n      aptitude hold mariadb-common &> /dev/null\n      aptitude hold mailutils &> /dev/null\n      echo \"libmariadb3 hold\" | dpkg --set-selections &> /dev/null\n      echo \"mariadb-common hold\" | dpkg --set-selections &> /dev/null\n      echo \"mailutils hold\" | dpkg --set-selections &> /dev/null\n      _apt_clean_update_no_releaseinfo_change\n    fi\n  fi\n  _EXTRA_LIB_APT=\"libmcrypt-dev\"\n  if [ ! 
-z \"${_EXTRA_PACKAGES}\" ]; then\n    _EXTRA_PACKAGES=\"screen ${_EXTRA_PACKAGES}\"\n  else\n    _EXTRA_PACKAGES=\"screen\"\n  fi\n  if [ -e \"/proc/bean_counters\" ]; then\n    _IS_VZ=YES\n  else\n    _IS_VZ=NO\n  fi\n  if [ \"${_IS_VZ}\" = \"YES\" ]; then\n    _SYSLOGD=inetutils-syslogd\n    _mrun \"apt-get remove sysklogd -y --purge --auto-remove -qq\"\n    _mrun \"apt-get remove rsyslog -y --purge --auto-remove -qq\"\n    _mrun \"killall -9 sysklogd\"\n    _mrun \"killall -9 rsyslogd\"\n  elif [ -e \"/root/.use.sysklogd.cnf\" ]; then\n    _SYSLOGD=sysklogd\n    _mrun \"apt-get remove rsyslog -y --purge --auto-remove -qq\"\n    _mrun \"killall -9 rsyslogd\"\n  else\n    _SYSLOGD=rsyslog\n  fi\n\n  if [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.69 \\\n                     automake-1.17 \\\n                     gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix3 \\\n                     libpng-dev \\\n                     libpng16-16 \\\n                     libtinfo6 \\\n                     ${_EXTRA_PACKAGES}\"\n  elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.69 \\\n                     automake1.11 \\\n                     automake-1.16 \\\n                     gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix3 \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n                     libpng16-16 \\\n                     libtinfo6 \\\n                     ${_EXTRA_PACKAGES}\"\n  elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.69 \\\n                     automake1.11 \\\n                     automake-1.16 \\\n                     gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix2 \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n     
                libpng16-16 \\\n                     libtinfo6 \\\n                     ${_EXTRA_PACKAGES}\"\n  elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.64 \\\n                     automake1.11 \\\n                     automake-1.15 \\\n                     automake-1.16 \\\n                     gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix0 \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n                     libpng16-16 \\\n                     libtinfo6 \\\n                     ttf-dejavu \\\n                     ttf-dejavu-core \\\n                     ttf-dejavu-extra \\\n                     ${_EXTRA_PACKAGES}\"\n  elif [ \"${_OS_CODE}\" = \"bookworm\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.69 \\\n                     automake1.11 \\\n                     automake-1.16 \\\n                     gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix3 \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n                     libpng16-16 \\\n                     libtinfo6 \\\n                     ${_EXTRA_PACKAGES}\"\n  elif [ \"${_OS_CODE}\" = \"bullseye\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.69 \\\n                     automake1.11 \\\n                     automake-1.16 \\\n                     gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix2 \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n                     libpng16-16 \\\n                     libtinfo6 \\\n                     ${_EXTRA_PACKAGES}\"\n  elif [ \"${_OS_CODE}\" = \"buster\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.64 \\\n                     automake1.11 \\\n                     automake-1.15 \\\n                     automake-1.16 \\\n                   
  gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix0 \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n                     libpng16-16 \\\n                     libtinfo6 \\\n                     ttf-dejavu \\\n                     ttf-dejavu-core \\\n                     ttf-dejavu-extra \\\n                     ${_EXTRA_PACKAGES}\"\n  elif [ \"${_OS_CODE}\" = \"stretch\" ]; then\n    _EXTRA_PACKAGES=\"autoconf2.64 \\\n                     automake1.11 \\\n                     automake-1.15 \\\n                     gnupg1-curl \\\n                     libpcre2-dev \\\n                     libpcre2-posix0 \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n                     libpng16-16 \\\n                     ttf-dejavu \\\n                     ttf-dejavu-core \\\n                     ttf-dejavu-extra \\\n                     ${_EXTRA_PACKAGES}\"\n  else\n    _EXTRA_PACKAGES=\"autoconf2.64 \\\n                     automake1.11 \\\n                     automake-1.14 \\\n                     defoma \\\n                     gnupg-curl \\\n                     libpcre3 \\\n                     libpcre3-dev \\\n                     libpng-dev \\\n                     libpng12-0 \\\n                     libpng12-dev \\\n                     libpng12-0-dev \\\n                     libt1-5 \\\n                     libt1-dev \\\n                     t1lib-bin \\\n                     ttf-dejavu \\\n                     ttf-dejavu-core \\\n                     ttf-dejavu-extra \\\n                     ${_EXTRA_PACKAGES}\"\n  fi\n\n  _EXTRA_PACKAGES=\"smem \\\n                   libgd3 \\\n                   libxpm-dev \\\n                   libwebp-dev \\\n                   ${_EXTRA_PACKAGES}\"\n\n  _if_hosted_sys\n  if [[ \"${_XTRAS_LIST}\" =~ \"IMG\" ]] \\\n    || [[ \"${_XTRAS_LIST}\" =~ \"ALL\" ]] \\\n    
|| [ \"${_hostedSys}\" = \"YES\" ]; then\n    _EXTRA_PACKAGES=\"advancecomp \\\n                     ca-certificates \\\n                     ca-certificates-java \\\n                     jpegoptim \\\n                     libjpeg-progs \\\n                     optipng \\\n                     pngcrush \\\n                     pngquant \\\n                     ${_EXTRA_PACKAGES}\"\n  fi\n\n  if [ \"${_VMFAMILY}\" != \"VS\" ] && [ \"${_OS_DIST}\" != \"Devuan\" ]; then\n    _EXTRA_PACKAGES=\"udev \\\n                     ${_EXTRA_PACKAGES}\"\n  fi\n\n  if [ \"${_MAGICK_FROM_SOURCES}\" = \"NO\" ] \\\n    || [ -z \"${_MAGICK_FROM_SOURCES}\" ]; then\n    _EXTRA_PACKAGES=\"imagemagick libmagickwand-dev graphviz libgraphviz-dev \\\n                     ${_EXTRA_PACKAGES}\"\n  fi\n\n  _MAILSERV=\"postfix postfix-pcre s-nail\"\n}\n\n_php_libs_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _php_libs_fix\"\n  fi\n  _if_to_do_fix\n  _LIB_NOW=$(date +%y%m%d-%H%M)\n  _LIB_NOW=${_LIB_NOW//[^0-9-]/}\n  if [ ! -L \"/usr/lib/librtmp.so\" ] \\\n    || [ ! -e \"/usr/lib/libwebpmux.so\" ] \\\n    || [ ! -e \"/usr/lib/libonig.so\" ] \\\n    || [ ! -e \"/usr/lib/libicuio.so\" ] \\\n    || [ ! -e \"/usr/lib/liblber.so\" ] \\\n    || [ ! -e \"/var/log/boa/._php_libs_fix_${_OS_CODE}_${_LIB_NOW}.pid\" ] \\\n    || [ ! -e \"/usr/include/gmp.h\" ] \\\n    || [ ! -e \"/usr/include/lzf.h\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Fix for PHP libs in ${_OS_DIST}/${_OS_CODE}\"\n    fi\n    _X86_64_TEST=$(uname -m)\n    if [ \"${_X86_64_TEST}\" = \"x86_64\" ]; then\n      if [ -e \"/usr/local/lib/icu\" ]; then\n        if [ -e \"/usr/lib/x86_64-linux-gnu/icu\" ] \\\n          && [ ! 
-L \"/usr/lib/x86_64-linux-gnu/icu\" ]; then\n          rm -rf /var/backups/.prev_icu\n          mv -f /usr/lib/x86_64-linux-gnu/icu /var/backups/.prev_icu\n          ln -sfn /usr/local/lib/icu /usr/lib/x86_64-linux-gnu/icu\n        else\n          ln -sfn /usr/local/lib/icu /usr/lib/x86_64-linux-gnu/icu\n        fi\n        ln -sfn /usr/local/lib/libicudata.so /usr/lib/x86_64-linux-gnu/libicudata.so\n        ln -sfn /usr/local/lib/libicui18n.so /usr/lib/x86_64-linux-gnu/libicui18n.so\n        ln -sfn /usr/local/lib/libicuio.so   /usr/lib/x86_64-linux-gnu/libicuio.so\n        ln -sfn /usr/local/lib/libicutest.so /usr/lib/x86_64-linux-gnu/libicutest.so\n        ln -sfn /usr/local/lib/libicutu.so   /usr/lib/x86_64-linux-gnu/libicutu.so\n        ln -sfn /usr/local/lib/libicuuc.so   /usr/lib/x86_64-linux-gnu/libicuuc.so\n        ln -sfn /usr/local/lib/libicudata.so /usr/lib/libicudata.so\n        ln -sfn /usr/local/lib/libicui18n.so /usr/lib/libicui18n.so\n        ln -sfn /usr/local/lib/libicuio.so   /usr/lib/libicuio.so\n        ln -sfn /usr/local/lib/libicutest.so /usr/lib/libicutest.so\n        ln -sfn /usr/local/lib/libicutu.so   /usr/lib/libicutu.so\n        ln -sfn /usr/local/lib/libicuuc.so   /usr/lib/libicuuc.so\n      fi\n      ln -sfn /usr/lib/x86_64-linux-gnu/libgmp.so  /usr/lib/libgmp.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libgmp.so  /usr/lib/libgmp.so.3\n      ln -sfn /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib/libjpeg.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libkrb5.so /usr/lib/libkrb5.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libldap.so /usr/lib/libldap.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libpng.so  /usr/lib/libpng.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libXpm.so  /usr/lib/libXpm.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/librtmp.so /usr/lib/librtmp.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libwebpmux.so /usr/lib/libwebpmux.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libwebpmux.so /usr/lib/libwebpmux.so.2\n      ln -sfn 
/usr/lib/x86_64-linux-gnu/libonig.so /usr/lib/libonig.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libonig.so /usr/lib/libonig.so.4\n      ln -sfn /usr/lib/x86_64-linux-gnu/libsodium.so /usr/lib/libsodium.so\n      ln -sfn /usr/lib/x86_64-linux-gnu/libsodium.so /usr/lib/libsodium.so.18\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libwebp.so.7\" ] \\\n        && [ ! -e \"/usr/lib/x86_64-linux-gnu/libwebp.so.6\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libwebp.so.7 /usr/lib/x86_64-linux-gnu/libwebp.so.6\n        ln -sfn /usr/lib/x86_64-linux-gnu/libwebp.so.7 /usr/lib/x86_64-linux-gnu/libwebp.so.6.0.2\n        ln -sfn /usr/lib/x86_64-linux-gnu/libwebp.so.7 /usr/lib/libwebp.so.6\n        ln -sfn /usr/lib/x86_64-linux-gnu/libwebp.so.7 /usr/lib/libwebp.so.6.0.2\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/liblber.so\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/liblber.so /usr/lib/liblber.so\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/liblber.a\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/liblber.a /usr/lib/liblber.a\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/liblber-2.5.so.0\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/liblber-2.5.so.0 /usr/lib/liblber-2.5.so.0\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/liblber-2.4.so.2\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 /usr/lib/liblber-2.4.so.2\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libHalf.so.23\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libHalf.so.23 /usr/lib/libHalf.so.12\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libHalf.so\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libHalf.so /usr/lib/libHalf.so.12\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libIlmImf-2_2.so.23\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libIlmImf-2_2.so.23 /usr/lib/libIlmImf-2_2.so.22\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libIlmImf.so\" ]; then\n        ln -sfn 
/usr/lib/x86_64-linux-gnu/libIlmImf.so /usr/lib/libIlmImf-2_2.so.22\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libImath-2_2.so.23\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libImath-2_2.so.23 /usr/lib/libImath-2_2.so.12\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libImath.so\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libImath.so /usr/lib/libImath-2_2.so.12\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libIex-2_2.so.23\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libIex-2_2.so.23 /usr/lib/libIex-2_2.so.12\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libIex.so\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libIex.so /usr/lib/libIex-2_2.so.12\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libIexMath.so\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libIexMath.so /usr/lib/libIexMath-2_2.so.12\n      fi\n      if [ -e \"/usr/lib/x86_64-linux-gnu/libIlmThread.so\" ]; then\n        ln -sfn /usr/lib/x86_64-linux-gnu/libIlmThread.so /usr/lib/libIlmThread-2_2.so.12\n      fi\n      if [ ! -e \"/usr/include/curl/curl.h\" ] \\\n        && [ -e \"/usr/include/x86_64-linux-gnu/curl/curl.h\" ]; then\n        ln -sfn /usr/include/x86_64-linux-gnu/curl /usr/include/curl\n      fi\n      if [ ! -e \"/usr/include/gmp.h\" ] \\\n        && [ -e \"/usr/include/x86_64-linux-gnu/gmp.h\" ]; then\n        ln -sfn /usr/include/x86_64-linux-gnu/gmp.h /usr/include/gmp.h\n      fi\n      if [ ! -e \"/usr/include/lzf.h\" ] \\\n        && [ -e \"/usr/include/liblzf/lzf.h\" ]; then\n        ln -sfn /usr/include/liblzf/lzf.h /usr/include/lzf.h\n      fi\n      if [ ! 
-e \"/usr/lib/x86_64-linux-gnu/librtmp.so.0\" ] \\\n        && [ -e \"/usr/lib/x86_64-linux-gnu/librtmp.so.1\" ]; then\n        cd /usr/lib/x86_64-linux-gnu\n        ln -sfn librtmp.so.1 librtmp.so.0\n      fi\n    fi\n    touch /var/log/boa/._php_libs_fix_${_OS_CODE}_${_LIB_NOW}.pid\n  fi\n  ldconfig 2> /dev/null\n}\n\n_smtp_check() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _smtp_check\"\n  fi\n  if [ -z \"${_SMTP_RELAY_HOST}\" ] && [ \"${_SMTP_RELAY_TEST}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Checking SMTP connections...\"\n    fi\n    if ! command nc -w 10 -z smtp.gmail.com 25 >/dev/null 2>&1 ; then\n      cat <<EOF\n\n      Your outgoing SMTP port 25 doesn't work\n      as expected, so your server can't send out\n      any emails directly.\n\n      Your SMTP relay host, if available, should be added as\n        _SMTP_RELAY_HOST=\"smtp.your.relay.server\"\n      in the /root/.barracuda.cnf file.\n\n      For now we will continue the installation anyway,\n      but you will have to find the welcome email with all\n      initial access credentials in the file located at:\n\n        /data/disk/o1/log/setupmail.txt\n\nEOF\n    fi\n  fi\n  ###\n  ### Required if outgoing smtp port is closed and smtp relay is in use\n  ###\n  if [ ! -z \"${_SMTP_RELAY_HOST}\" ]; then\n    sed -i \"s/${_SMTP_RELAY_HOST}//g\" /etc/postfix/main.cf &> /dev/null\n    wait\n    sed -i \"s/relayhost =/relayhost = ${_SMTP_RELAY_HOST}/g\" \\\n      /etc/postfix/main.cf &> /dev/null\n    wait\n    postfix reload &> /dev/null\n  fi\n}\n\n_if_install_vnstat() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_vnstat\"\n  fi\n  if [ ! 
-e \"/var/log/boa/cloud_vhost.pid\" ]; then\n    _VNSTAT_ITD=$(vnstat --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    if [ ! -e \"${_pthLog}/vnstat-${_VNSTAT_VRN}.log\" ] \\\n      || [ \"${_VNSTAT_ITD}\" != \"${_VNSTAT_VRN}\" ] \\\n      || [ ! -e \"/usr/bin/vnstat\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installing VnStat monitor...\"\n      fi\n      cd /var/opt\n      rm -rf vnstat*\n      _get_dev_src \"vnstat-${_VNSTAT_VRN}.tar.gz\"\n      cd vnstat-${_VNSTAT_VRN}\n      _mrun \"bash ./configure --prefix=/usr --sysconfdir=/etc\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      ldconfig 2> /dev/null\n      for INF in `vnstat --iflist \\\n        | sed \"s/Available interfaces//g; s/(1000 Mbit)//g; s/(100 Mbit)//g; s/ lo//g;\" \\\n        | cut -d: -f2` ;do vnstat -i $INF &> /dev/null;done\n      cp -af /var/opt/vnstat-${_VNSTAT_VRN}/examples/init.d/debian/vnstat \\\n        /etc/init.d/vnstat\n      chmod 755 /etc/init.d/vnstat &> /dev/null\n      _mrun \"update-rc.d vnstat defaults\"\n      if [ -e \"/usr/etc/vnstat.conf\" ]; then\n        sed -i \"s/^MaxBandwidth.*/MaxBandwidth 1000/g\" /usr/etc/vnstat.conf\n      fi\n      if [ -e \"/etc/vnstat.conf\" ]; then\n        sed -i \"s/^MaxBandwidth.*/MaxBandwidth 1000/g\" /etc/vnstat.conf\n      fi\n      _mrun \"service vnstat start\"\n      _mrun \"killall vnstatd\"\n      touch ${_pthLog}/vnstat-${_VNSTAT_VRN}.log\n      _mrun \"service vnstat restart\"\n    fi\n  fi\n  if [ -e \"/etc/init.d/vnstat\" ] \\\n    && [ \"${_VMFAMILY}\" = \"VS\" ] \\\n    && [ ! -e \"/boot/grub/grub.cfg\" ] \\\n    && [ ! 
-e \"/boot/grub/menu.lst\" ]; then\n    _mrun \"service vnstat stop\"\n    _mrun \"update-rc.d -f vnstat remove\"\n    rm -f /etc/init.d/vnstat\n    rm -f /usr/bin/vnstat\n    rm -rf /var/lib/vnstat\n  fi\n  if [ -e \"/usr/etc/vnstat.conf\" ]; then\n    sed -i \"s/^MaxBandwidth.*/MaxBandwidth 1000/g\" /usr/etc/vnstat.conf\n    _mrun \"service vnstat restart\"\n  fi\n}\n\n_install_myquick_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_myquick_src\"\n  fi\n  if [ \"${_MYQUICK_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Building MyQuick ${_MYQUICK_VRN_ONE} from sources...\"\n    fi\n    _apt_clean_update\n    for _PKG in cmake libssl-dev; do\n      if ! _pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    if [ \"${_DB_SERVER}\" = \"Percona\" ]; then\n      if [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n        for _PKG in libperconaserverclient20 libperconaserverclient20-dev; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n      elif [ \"${_DB_SERIES}\" = \"8.0\" ]; then\n        for _PKG in libperconaserverclient21 libperconaserverclient21-dev; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n      else\n        for _PKG in libperconaserverclient24 libperconaserverclient24-dev percona-telemetry-agent; do\n          if ! 
_pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n      fi\n    fi\n    ldconfig 2> /dev/null\n    cd /var/opt\n    rm -rf mydumper*\n    _get_dev_src \"mydumper-${_MYQUICK_VRN_ONE}.tar.gz\"\n    cd /var/opt/mydumper-${_MYQUICK_VRN_ONE}\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH\n      export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH\n      LD_LIBRARY_PATH=/usr/local/lib:/usr/lib/x86_64-linux-gnu \\\n        cmake \\\n        -DWITH_SSL=ON \\\n        -DBUILD_DOCS=ON \\\n        -DCMAKE_C_FLAGS=-Wno-error=unused-function .\n      _mrun \"make\"\n      _mrun \"make install\"\n    else\n      export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH &> /dev/null\n      export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH &> /dev/null\n      LD_LIBRARY_PATH=/usr/local/lib:/usr/lib/x86_64-linux-gnu \\\n        cmake \\\n        -DWITH_SSL=ON \\\n        -DBUILD_DOCS=ON \\\n        -DCMAKE_C_FLAGS=-Wno-error=unused-function . 
&> /dev/null\n      _mrun \"make --quiet\"\n      _mrun \"make --quiet install\"\n    fi\n    ldconfig 2> /dev/null\n  fi\n}\n\n_install_myquick_deb() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_myquick_deb\"\n  fi\n  if [ \"${_MYQUICK_SRC_INSTALL_REQUIRED}\" = \"NO\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Installing MyQuick ${_USE_MYQUICK} from ${_USE_CODE} packages\"\n    fi\n    cd /var/opt\n    rm -rf mydumper_*\n    _get_dev_arch \"mydumper_${_USE_MYQUICK}.${_USE_CODE}_amd64.deb.gz\"\n    if [ -e \"/var/opt/mydumper_${_USE_MYQUICK}.${_USE_CODE}_amd64.deb\" ]; then\n      _mrun \"dpkg -i mydumper_${_USE_MYQUICK}.${_USE_CODE}_amd64.deb\"\n    fi\n    if [ -x \"/usr/bin/mydumper\" ]; then\n      [ -e \"/usr/local/bin/mydumper\" ] && rm -f /usr/local/bin/mydumper\n      [ -e \"/usr/local/bin/myloader\" ] && rm -f /usr/local/bin/myloader\n      ln -sfn /usr/bin/mydumper /usr/local/bin/mydumper\n      ln -sfn /usr/bin/myloader /usr/local/bin/myloader\n    fi\n  else\n    _MYQUICK_INSTALL_REQUIRED=YES\n    _install_myquick_src\n  fi\n}\n\n_myquick_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _myquick_install_upgrade\"\n  fi\n  _MYQUICK_SRC_INSTALL_REQUIRED=NO\n  if [ \"${_SQL_OS_CODE}\" = \"trixie\" ]; then\n    _USE_CODE=trixie\n  elif [ \"${_SQL_OS_CODE}\" = \"bookworm\" ]; then\n    _USE_CODE=bookworm\n  elif [ \"${_SQL_OS_CODE}\" = \"bullseye\" ]; then\n    _USE_CODE=bullseye\n  elif [ \"${_SQL_OS_CODE}\" = \"buster\" ]; then\n    _USE_CODE=buster\n  else\n    _MYQUICK_SRC_INSTALL_REQUIRED=YES\n  fi\n  if [ \"${_USE_CODE}\" = \"buster\" ]; then\n    _USE_MYQUICK=\"${_MYQUICK_VRN_ONE}\"\n  else\n    _USE_MYQUICK=\"${_MYQUICK_VRN_TWO}\"\n  fi\n  if [ -e \"/root/.install.myquick.src.info\" ]; then\n    _MYQUICK_SRC_INSTALL_REQUIRED=YES\n  fi\n  _check_mysql_version\n  _isMyQuick=\"$(which mydumper)\"\n  if [ ! 
-x \"${_isMyQuick}\" ] \\\n    || [ -z \"${_isMyQuick}\" ]; then\n    if [ \"${_MYQUICK_SRC_INSTALL_REQUIRED}\" = \"NO\" ]; then\n      _install_myquick_deb\n    else\n      _install_myquick_src\n    fi\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _MYQUICK_INSTALL_REQUIRED=NO\n    _MYQUICK_ITD=$(mydumper -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | tr -d \",\" \\\n      | tr -d \"v\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    _DB_V=$(mysql -V 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f6 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    if [ \"${_DB_V}\" = \"Linux\" ]; then\n      _DB_V=$(mysql -V 2>&1 \\\n        | tr -d \"\\n\" \\\n        | cut -d\" \" -f4 \\\n        | awk '{ print $1}' \\\n        | cut -d\"-\" -f1 \\\n        | awk '{ print $1}' \\\n        | sed \"s/[\\,']//g\" 2>&1)\n    fi\n    _MD_V=$(mydumper --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f6 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' \\\n      | sed \"s/[\\,']//g\" 2>&1)\n    if [ -x \"${_isMyQuick}\" ]; then\n      if [ \"${_MYQUICK_SRC_INSTALL_REQUIRED}\" = \"NO\" ]; then\n        if [ \"${_MYQUICK_ITD}\" != \"${_USE_MYQUICK}\" ]; then\n          _MYQUICK_INSTALL_REQUIRED=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Installed MyQuick version ${_MYQUICK_ITD}, deb upgrade required\"\n          fi\n          _install_myquick_deb\n        fi\n      else\n        if [ \"${_MYQUICK_ITD}\" != \"${_MYQUICK_VRN_ONE}\" ]; then\n          _MYQUICK_INSTALL_REQUIRED=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Installed MyQuick version ${_MYQUICK_ITD}, src upgrade required\"\n          fi\n          _install_myquick_src\n        fi\n      fi\n    fi\n    _isMyQuick=\"$(which mydumper)\"\n    if [ ! 
-e \"${_isMyQuick}\" ]; then\n      _MYQUICK_INSTALL_REQUIRED=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: MyQuick not installed yet\"\n      fi\n      _install_myquick_deb\n    fi\n  fi\n}\n\n_xdrago_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _xdrago_install_upgrade\"\n  fi\n  cd /var\n  if [ -d \"/var/xdrago/conf\" ] \\\n    && [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Upgrading a few more tools...\"\n    fi\n    mv -f /var/xdrago-pre* ${_vBs}/dragon/x/ &> /dev/null\n    rm -rf ${_pthLog}/init.d-pre*\n    rm -rf ${_vBs}/dragon/z/init.d-pre-*\n    rm -f ${_pthLog}/cron-root-pre*\n    cp -af /var/xdrago \\\n      ${_vBs}/dragon/x/xdrago-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    rm -f ${_pthLog}/VISITOR_ABUSE_ONE.log\n    rm -f ${_pthLog}/blackIP.log\n    rm -f /var/xdrago/{enableStatus,graceful,move_sql,run_all,second,Minute}\n    rm -f /var/xdrago/{firewall.sh,stop-mysql-innodb.sh,firewall_restarter}\n    rm -f /var/xdrago/{FireStart,memcache,redis}\n    cp -af /var/spool/cron/crontabs/root \\\n      ${_vBs}/dragon/z/cron-root-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    if [ \"${_CUSTOM_CONFIG_LSHELL}\" = \"YES\" ] \\\n      && [ -e \"/var/xdrago/conf/lshell.conf\" ]; then\n      cp -af /var/xdrago/conf/lshell.conf ${_vBs}/custom_lshell.conf\n    fi\n    cp -af ${_bldPth}/aegir/tools/system/* /var/xdrago/ &> /dev/null\n    if [ \"${_CUSTOM_CONFIG_LSHELL}\" = \"YES\" ] \\\n      && [ -e \"${_vBs}/custom_lshell.conf\" ]; then\n      cp -af ${_vBs}/custom_lshell.conf /var/xdrago/conf/lshell.conf\n    fi\n    if [ -z \"${_THISHTIP}\" ]; then\n      _LOC_DOM=\"${_THISHOST}\"\n      _find_correct_ip\n      _THISHTIP=\"${_LOC_IP}\"\n    fi\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ ! 
-e \"/etc/init.d/ssh\" ]; then\n        _NGINX_DOS_LIMIT=8888\n      else\n        if [ -z \"${_NGINX_DOS_LIMIT}\" ] \\\n          || [ \"${_NGINX_DOS_LIMIT}\" = \"199\" ] \\\n          || [ \"${_NGINX_DOS_LIMIT}\" = \"299\" ]; then\n          _NGINX_DOS_LIMIT=399\n        fi\n      fi\n    fi\n    sed -i \"s/^_NGINX_DOS_LIMIT=.*/_NGINX_DOS_LIMIT=${_NGINX_DOS_LIMIT}/g\"  ${_barCnf}\n    mv -f /etc/cron.daily/mlocate ${_vBs}/ &> /dev/null\n    cp -af /var/xdrago/cron/crontabs/root /var/spool/cron/crontabs/ &> /dev/null\n    if [ -e \"/var/xdrago/cron/custom.txt\" ]; then\n      cat /var/xdrago/cron/custom.txt >> /var/spool/cron/crontabs/root\n    fi\n    chown root:crontab /var/spool/cron/crontabs/root\n    chmod 600 /var/spool/cron/crontabs/root\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      _ROTATE_TEST=$(grep \"rotate 9999\" /etc/logrotate.d/nginx 2>&1)\n      if [[ ! \"${_ROTATE_TEST}\" =~ \"rotate 9999\" ]]; then\n        sed -i \"s/rotate 52/rotate 365/g\" /etc/logrotate.d/nginx &> /dev/null\n        sed -i \"s/rotate 74/rotate 365/g\" /etc/logrotate.d/nginx &> /dev/null\n        sed -i \"s/rotate 14/rotate 365/g\" /etc/logrotate.d/nginx &> /dev/null\n      fi\n    fi\n    if [ -e \"/root/.debug.cnf\" ] && [ ! -e \"/root/.default.cnf\" ]; then\n      _DO_NOTHING=YES\n    else\n      if [ -e \"/root/.high_load.cnf\" ] \\\n        && [ ! -e \"/root/.big_db.cnf\" ] \\\n        && [ ! 
-e \"/root/.tg.cnf\" ]; then\n        sed -i \"s/3600/300/g\" /var/xdrago/monitor/check/mysql.sh &> /dev/null\n        wait\n      elif [ -e \"/root/.big_db.cnf\" ] || [ -e \"/root/.tg.cnf\" ]; then\n        _DO_NOTHING=YES\n      else\n        sed -i \"s/3600/1800/g\" /var/xdrago/monitor/check/mysql.sh &> /dev/null\n        wait\n      fi\n    fi\n  fi\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    mkdir -p ./xdrago\n    cp -af ${_bldPth}/aegir/tools/system/* ./xdrago/ &> /dev/null\n    cp -af /var/xdrago/cron/crontabs/root /var/spool/cron/crontabs/ &> /dev/null\n    chown root:crontab /var/spool/cron/crontabs/root\n    chmod 600 /var/spool/cron/crontabs/root\n    if [ -z \"${_THISHTIP}\" ]; then\n      _LOC_DOM=\"${_THISHOST}\"\n      _find_correct_ip\n      _THISHTIP=\"${_LOC_IP}\"\n    fi\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ ! -e \"/etc/init.d/ssh\" ]; then\n        _NGINX_DOS_LIMIT=8888\n      else\n        if [ -z \"${_NGINX_DOS_LIMIT}\" ] \\\n          || [ \"${_NGINX_DOS_LIMIT}\" = \"199\" ] \\\n          || [ \"${_NGINX_DOS_LIMIT}\" = \"299\" ]; then\n          _NGINX_DOS_LIMIT=399\n        fi\n      fi\n    fi\n    sed -i \"s/^_NGINX_DOS_LIMIT=.*/_NGINX_DOS_LIMIT=${_NGINX_DOS_LIMIT}/g\"  ${_barCnf}\n    mv -f /etc/cron.daily/mlocate ${_vBs}/ &> /dev/null\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ -e \"/root/.debug.cnf\" ] && [ ! -e \"/root/.default.cnf\" ]; then\n        _DO_NOTHING=YES\n      else\n        if [ -e \"/root/.high_load.cnf\" ] \\\n          && [ ! -e \"/root/.big_db.cnf\" ] \\\n          && [ ! 
-e \"/root/.tg.cnf\" ]; then\n          sed -i \"s/3600/300/g\" /var/xdrago/monitor/check/mysql.sh &> /dev/null\n          wait\n        elif [ -e \"/root/.big_db.cnf\" ] || [ -e \"/root/.tg.cnf\" ]; then\n          _DO_NOTHING=YES\n        else\n          sed -i \"s/3600/1800/g\" /var/xdrago/monitor/check/mysql.sh &> /dev/null\n          wait\n        fi\n        _ROTATE_TEST=$(grep \"rotate 9999\" /etc/logrotate.d/nginx 2>&1)\n        if [[ ! \"${_ROTATE_TEST}\" =~ \"rotate 9999\" ]]; then\n          sed -i \"s/rotate 52/rotate 365/g\" /etc/logrotate.d/nginx &> /dev/null\n          sed -i \"s/rotate 74/rotate 365/g\" /etc/logrotate.d/nginx &> /dev/null\n          sed -i \"s/rotate 14/rotate 365/g\" /etc/logrotate.d/nginx &> /dev/null\n        fi\n      fi\n    fi\n  fi\n  if [ -d \"/var/xdrago-pre-${_xSrl}-${_X_VERSION}-${_NOW}\" ]; then\n    cp -af /var/xdrago-pre-${_xSrl}-${_X_VERSION}-${_NOW}/run-* /var/xdrago/ &> /dev/null\n  fi\n  chmod -R 700 /var/xdrago/monitor/check &> /dev/null\n  chmod 700 /var/xdrago/* &> /dev/null\n  chmod 700 /var/xdrago &> /dev/null\n}\n\n_set_drupal_patches_paths() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _set_drupal_patches_paths\"\n  fi\n  #\n  _tenCorePatchFname=\"drupal-ten-aegir-core-01.patch\"\n  _tenCorePatchPath=\"/data/conf/patches/${_tenCorePatchFname}\"\n  #\n  _tenConsolePatchFname=\"drupal-ten-aegir-console-02.patch\"\n  _tenConsolePatchPath=\"/data/conf/patches/${_tenConsolePatchFname}\"\n  #\n  _elevenCorePatchFname=\"drupal-eleven-aegir-core-01.patch\"\n  _elevenCorePatchPath=\"/data/conf/patches/${_elevenCorePatchFname}\"\n  #\n  _elevenConsolePatchFname=\"drupal-eleven-aegir-console-02.patch\"\n  _elevenConsolePatchPath=\"/data/conf/patches/${_elevenConsolePatchFname}\"\n  #\n  _elevenValidatorPatchFname=\"drupal-eleven-aegir-validator-03.patch\"\n  _elevenValidatorPatchPath=\"/data/conf/patches/${_elevenValidatorPatchFname}\"\n  #\n}\n\n_update_patches_drupal_ten() {\n  if [ \"${_DEBUG_MODE}\" = 
\"YES\" ]; then\n    _msg \"PROC: _update_patches_drupal_ten\"\n  fi\n  cp -af ${_bldPth}/aegir/patches/${_tenCorePatchFname} ${_tenCorePatchPath}\n  cp -af ${_bldPth}/aegir/patches/${_tenConsolePatchFname} ${_tenConsolePatchPath}\n}\n\n_update_patches_drupal_eleven() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _update_patches_drupal_eleven\"\n  fi\n  cp -af ${_bldPth}/aegir/patches/${_elevenCorePatchFname} ${_elevenCorePatchPath}\n  cp -af ${_bldPth}/aegir/patches/${_elevenConsolePatchFname} ${_elevenConsolePatchPath}\n  cp -af ${_bldPth}/aegir/patches/${_elevenValidatorPatchFname} ${_elevenValidatorPatchPath}\n}\n\n_if_drupal_patches_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_drupal_patches_update\"\n  fi\n  if [ -e \"/var/xdrago\" ]; then\n    _BROKEN_UPDATE_TEST_A=$(grep \"Under Construction\" /data/conf/patches/* 2>&1)\n    _BROKEN_UPDATE_TEST_B=$(grep \"404 Not Found\" /data/conf/patches/* 2>&1)\n    if [ ! -z \"${_BROKEN_UPDATE_TEST_A}\" ] \\\n      || [ ! -z \"${_BROKEN_UPDATE_TEST_B}\" ] \\\n      || [ ! -e \"/data/conf/patches/ctrl.f96.${_tRee}.${_xSrl}.pid\" ]; then\n      mkdir -p /data/conf/patches\n      rm -f /data/conf/patches/*\n      _set_drupal_patches_paths\n      _update_patches_drupal_ten\n      _update_patches_drupal_eleven\n      touch /data/conf/patches/ctrl.f96.${_tRee}.${_xSrl}.pid\n    fi\n  fi\n}\n\n_pam_umask_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _pam_umask_check_fix\"\n  fi\n  _PAM_UMASK_TEST=$(grep \"pam_umask.so umask=0002\" /etc/pam.d/login 2>&1)\n  if [[ ! 
\"${_PAM_UMASK_TEST}\" =~ \"pam_umask.so umask=0002\" ]]; then\n    mkdir -p /var/www/nginx-default\n    sed -i \"s/^UMASK.*//g\" /etc/default/login &> /dev/null\n    wait\n    echo \"UMASK=002\" >> /etc/default/login\n    sed -i \"/^$/d\" /etc/default/login &> /dev/null\n    wait\n    sed -i \"s/^UMASK.*/UMASK 002/g\" /etc/login.defs &> /dev/null\n    wait\n    sed -i \"s/^umask.*022/umask 002/g\" /etc/profile &> /dev/null\n    wait\n    sed -i \"s/.*session.*optional.*pam_umask.*//g\" /etc/pam.d/login &> /dev/null\n    wait\n    echo \"session optional pam_umask.so umask=0002\" >> /etc/pam.d/login\n    wait\n    echo \"umask 002\" >> /var/www/.profile\n    chown www-data:www-data /var/www/.profile &> /dev/null\n    chown www-data:www-data /var/www/nginx-default &> /dev/null\n  fi\n  _SFTP_UMASK_TEST=$(grep \"sftp-server -u 0002\" /etc/ssh/sshd_config 2>&1)\n  if [[ ! \"${_SFTP_UMASK_TEST}\" =~ \"sftp-server -u 0002\" ]]; then\n    sed -i \"s/^Subsystem.*//g\" /etc/ssh/sshd_config\n    echo \"Subsystem sftp /usr/lib/openssh/sftp-server -u 0002\" >> /etc/ssh/sshd_config\n    sed -i \"/^$/d\" /etc/ssh/sshd_config\n    _mrun \"service ssh restart\"\n  fi\n}\n\n_pam_many_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _pam_many_check_fix\"\n  fi\n  _PAM_FIX=NO\n  _PAM_MAIL_LOGIN_TEST=$(grep \"pam_mail\" /etc/pam.d/login 2>&1)\n  _PAM_MAIL_SSHD_TEST=$(grep \"pam_mail\" /etc/pam.d/sshd 2>&1)\n  _PAM_MOTD_LOGIN_TEST=$(grep \"pam_motd\" /etc/pam.d/login 2>&1)\n  _PAM_MOTD_SSHD_TEST=$(grep \"pam_motd\" /etc/pam.d/sshd 2>&1)\n  if [[ \"${_PAM_MAIL_LOGIN_TEST}\" =~ \"pam_mail\" ]] \\\n    || [[ \"${_PAM_MAIL_SSHD_TEST}\" =~ \"pam_mail\" ]]; then\n    _PAM_FIX=YES\n  fi\n  if [[ \"${_PAM_MOTD_LOGIN_TEST}\" =~ \"pam_motd\" ]] \\\n    || [[ \"${_PAM_MOTD_SSHD_TEST}\" =~ \"pam_motd\" ]]; then\n    _PAM_FIX=YES\n  fi\n  if [ \"${_PAM_FIX}\" = \"YES\" ]; then\n    sed -i \"s/.*session.*optional.*pam_mail.*//g\" /etc/pam.d/login &> /dev/null\n    
wait\n    sed -i \"s/.*session.*optional.*pam_mail.*//g\" /etc/pam.d/sshd &> /dev/null\n    wait\n    sed -i \"s/.*session.*optional.*pam_motd.*//g\" /etc/pam.d/login &> /dev/null\n    wait\n    sed -i \"s/.*session.*optional.*pam_motd.*//g\" /etc/pam.d/sshd &> /dev/null\n  fi\n}\n\n_fix_node_in_lshell_access() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_node_in_lshell_access\"\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ] && [ -e \"/etc/lshell.conf\" ]; then\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    if [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [ -e \"/root/.allow.node.lshell.cnf\" ]; then\n      _ALLOW_NODE=YES\n    else\n      _ALLOW_NODE=NO\n      sed -i \\\n        -e \"s/, 'node', 'npm', 'npx',/,/gi\" \\\n        -e \"s/, 'scp',/,/gi\" \\\n        /etc/lshell.conf /var/xdrago/conf/lshell.conf\n    fi\n  fi\n}\n\n_fix_php_in_lshell_access() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_php_in_lshell_access\"\n  fi\n  if [ \"${_STATUS}\" = \"UPGRADE\" ] && [ -e \"/etc/lshell.conf\" ]; then\n    _PrTestPhantom=$(grep \"PHANTOM\" /root/.*.octopus.cnf 2>&1)\n    _PrTestCluster=$(grep \"CLUSTER\" /root/.*.octopus.cnf 2>&1)\n    _PrTestUltra=$(grep \"ULTRA\" /root/.*.octopus.cnf 2>&1)\n    _PrTestMonster=$(grep \"MONSTER\" /root/.*.octopus.cnf 2>&1)\n    if [[ \"${_PrTestPhantom}\" =~ \"PHANTOM\" ]] \\\n      || [[ \"${_PrTestCluster}\" =~ \"CLUSTER\" ]] \\\n      || [[ \"${_PrTestUltra}\" =~ \"ULTRA\" ]] \\\n      || [[ \"${_PrTestMonster}\" =~ \"MONSTER\" ]] \\\n      || [ -e \"/root/.allow.php.lshell.cnf\" ]; then\n      
_ALLOW_PHP=YES\n    else\n      _ALLOW_PHP=NO\n      sed -i \\\n        -e \"s/, 'php.*':.*php',/,/gi\" \\\n        -e \"s/, '\\/opt\\/php.*',/,/gi\" \\\n        /etc/lshell.conf /var/xdrago/conf/lshell.conf\n    fi\n  fi\n}\n\n_lshell_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _lshell_install_upgrade\"\n  fi\n  _PATH_LSHELL=\"/usr/local/bin/lshell\"\n  _LSHELL_CHK_VRN=0.10\n  _LSHELL_FORCE_REINSTALL=NO\n  _isLshell=\"$(which lshell)\"\n  _LSHELL_ITD=$(${_isLshell} --version 2>&1 \\\n    | tr -d \"\\n\" \\\n    | cut -d\"-\" -f2 \\\n    | awk '{ print $1}' 2>&1)\n  if [ -z \"${_isLshell}\" ] \\\n    || [ -z \"${_PATH_LSHELL}\" ] \\\n    || [ \"${_LSHELL_ITD}\" != \"${_LSHELL_CHK_VRN}\" ] \\\n    || [[ \"${_LSHELL_ITD}\" =~ \"Traceback\" ]] \\\n    || [[ \"${_LSHELL_ITD}\" =~ \"bad interpreter\" ]] \\\n    || [[ \"${_LSHELL_ITD}\" =~ \"ImportError\" ]]; then\n    _LSHELL_FORCE_REINSTALL=YES\n  fi\n  if [ \"${_LSHELL_FORCE_REINSTALL}\" = \"YES\" ] \\\n    || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ] \\\n    || [ \"${_SSL_INSTALL_REQUIRED}\" = \"YES\" ]; then\n    if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n      _msg \"INFO: Upgrading Limited Shell to version ${_LSHELL_VRN}...\"\n      [ -f \"/etc/lshell.conf\" ] && cp -af /etc/lshell.conf /etc/lshell.conf-bak-${_NOW} &> /dev/null\n    else\n      _msg \"INFO: Installing Limited Shell ${_LSHELL_VRN}...\"\n    fi\n    _apt_clean_update\n    for _PKG in python3-pip python3-setuptools; do\n      if ! 
_pkg_installed \"${_PKG}\"; then\n        _mrun \"${_INSTAPP} ${_PKG}\"\n      fi\n    done\n    if [ -x \"/usr/bin/pip3\" ]; then\n      _usePip=/usr/bin/pip3\n    elif [ -x \"/usr/local/bin/pip3\" ]; then\n      _usePip=/usr/local/bin/pip3\n    fi\n    _PIP_TEST=$(${_usePip} --version 2>&1)\n    if [[ \"${_PIP_TEST}\" =~ \"python 3.11\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.12\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.13\" ]]; then\n      _mrun \"${_usePip} install --upgrade pip --root-user-action ignore\"\n    else\n      _mrun \"${_usePip} install --upgrade pip\"\n    fi\n    cd /var/opt\n    rm -rf lshell*\n    _get_dev_src \"lshell-${_LSHELL_VRN}.tar.gz\"\n    for _Files in `find /var/opt/lshell-${_LSHELL_VRN} -type f`; do\n      sed -i \"s/kicked/logged/g\" ${_Files} &> /dev/null\n      wait\n      sed -i \"s/Kicked/Logged/g\" ${_Files} &> /dev/null\n      wait\n    done\n    rm -rf /usr/local/lib/python*/site-packages/lshell*\n    rm -rf /usr/local/lib/python*/dist-packages/lshell*\n    cd /var/opt/lshell-${_LSHELL_VRN}\n    _PIP_TEST=$(${_usePip} --version 2>&1)\n    if [[ \"${_PIP_TEST}\" =~ \"python 3.11\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.12\" ]] \\\n      || [[ \"${_PIP_TEST}\" =~ \"python 3.13\" ]]; then\n      _mrun \"${_usePip} install . --break-system-packages --root-user-action ignore\"\n    else\n      _mrun \"${_usePip} install . 
\"\n    fi\n    if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n      [ -f \"/etc/lshell.conf-bak-${_NOW}\" ] && cp -af /etc/lshell.conf-bak-${_NOW} /etc/lshell.conf &> /dev/null\n    else\n      cp -af ${_bldPth}/aegir/tools/system/conf/lshell.conf /etc/lshell.conf\n      echo \"${_PATH_LSHELL}\" >> /etc/shells\n    fi\n    rm -f /etc/logrotate.d/lshell\n    addgroup --system lshellg &> /dev/null\n    addgroup --system ltd-shell-more &> /dev/null\n    mkdir -p /var/log/lsh\n    chown :lshellg /var/log/lsh\n    chmod 770 /var/log/lsh &> /dev/null\n    touch ${_pthLog}/lshell-build-${_LSHELL_VRN}.log\n    if [ -f \"/var/xdrago/manage_ltd_users.sh\" ]; then\n      if [ \"${_STATUS}\" = \"UPGRADE\" ] \\\n        && [ \"${_CUSTOM_CONFIG_LSHELL}\" = \"NO\" ]; then\n        cp -af ${_bldPth}/aegir/tools/system/conf/lshell.conf \\\n          /var/xdrago/conf/lshell.conf\n      fi\n      _mrun \"bash /var/xdrago/manage_ltd_users.sh\"\n      wait\n    fi\n    _fix_node_in_lshell_access\n    # _fix_php_in_lshell_access\n  fi\n  if [ -f \"/usr/local/bin/lshell\" ]; then\n    chown root:users /usr/local/bin/lshell\n    chmod 750 /usr/local/bin/lshell\n    if [ ! -L \"/usr/bin/lshell\" ]; then\n      ln -sfn /usr/local/bin/lshell /usr/bin/lshell &> /dev/null\n    fi\n  fi\n}\n\n_if_install_ftpd() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_ftpd\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"ALL\" ]] \\\n    || [[ \"${_XTRAS_LIST}\" =~ \"FTP\" ]]; then\n    if [ ! -e \"/var/ftp\" ]; then\n      _mrun \"useradd -r -d /var/ftp -s /sbin/nologin -m ftp\"\n    fi\n    if [ ! -e \"/etc/ssl/private/pure-ftpd.pem\" ] \\\n      || [ ! -e \"/usr/local/sbin/pure-ftpd\" ] \\\n      || [ ! -e \"${_pthLog}/pure-ftpd-build-${_PURE_FTPD_VRN}-${_xSrl}-${_X_VERSION}.log\" ] \\\n      || [ ! 
-e \"/etc/ssl/private/pure-ftpd-dhparams.pem\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ] \\\n      || [ \"${_SSL_INSTALL_REQUIRED}\" = \"YES\" ]; then\n      _msg \"INFO: Building Pure-FTPd server from sources...\"\n      sed -i \"/^$/d\" /etc/ld.so.conf.d/libc.conf &> /dev/null\n      if [ ! -e \"/etc/ssl/private/pure-ftpd-dhparams.pem\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Generating DH parameters, 2048 bit...\"\n        fi\n        _mrun \"openssl dhparam -out /etc/ssl/private/pure-ftpd-dhparams.pem 2048\"\n        wait\n      fi\n      if [ ! -e \"/usr/local/sbin/pure-config.pl\" ]; then\n        echo \"/bin/false\" >> /etc/shells\n        echo \"/bin/true\" >> /etc/shells\n      fi\n      mkdir -p /etc/ssl/private/\n      rm -f /etc/ssl/private/pure-ftpd.pem\n      rm -f /usr/local/sbin/pure-ftpd\n      openssl req -x509 -nodes -sha256 -days 7300 -newkey rsa:2048 \\\n        -keyout /etc/ssl/private/pure-ftpd.pem \\\n        -out /etc/ssl/private/pure-ftpd.pem -batch &> /dev/null\n      chmod 600 /etc/ssl/private/pure-ftpd.pem &> /dev/null\n      cd /var/opt\n      rm -rf pure-ftpd*\n      mkdir -p /usr/local/etc\n      _get_dev_src \"pure-ftpd-${_PURE_FTPD_VRN}.tar.gz\"\n      cd pure-ftpd-${_PURE_FTPD_VRN}\n      bash autogen.sh &> /dev/null\n      _mrun \"LIBS=\\\"-ldl -lpthread\\\" PKG_CONFIG_PATH=\\\"/usr/local/ssl3/lib64/pkgconfig\\\" ./configure \\\n        --oldincludedir=/usr/local/ssl3/include \\\n        --includedir=/usr/local/ssl3/include \\\n        --with-everything --with-virtualchroot \\\n        --without-humor --with-tls --with-diraliases --with-pam \\\n        --with-certfile=/etc/ssl/private/pure-ftpd.pem\"\n      _mrun \"make -j $(nproc) --quiet\"\n      _mrun \"make --quiet install\"\n      cd /usr/local/sbin/\n      cp -af ${_locCnf}/ftpd/pure-config.pl.txt ./\n      mv -f pure-config.pl.txt pure-config.pl 2> /dev/null\n      chmod 755 /usr/local/sbin/pure-config.pl 2> 
/dev/null\n      cp -af /var/opt/pure-ftpd-${_PURE_FTPD_VRN}/pam/pure-ftpd /etc/pam.d/\n      _fix_ftps_pam\n      cd /usr/local/etc\n      rm -f pure-ftpd.conf\n      cp -af ${_locCnf}/ftpd/pure-ftpd.conf ./\n      ln -sfn /usr/local/etc/pure-ftpd.conf /usr/local/etc/pure-ftpd.conf~\n      killall -9 pure-ftpd &> /dev/null\n      _ftpdInit=\"/usr/local/sbin/pure-config.pl\"\n      _ftpdConf=\"/usr/local/etc/pure-ftpd.conf\"\n      _ftpdBind=\"/usr/local/sbin/pure-ftpd\"\n      if [ -x \"${_ftpdBind}\" ] && [ -f \"${_ftpdConf}\" ]; then\n        if [ -x \"${_ftpdInit}\" ]; then\n          ${_ftpdInit} ${_ftpdConf} &> /dev/null\n        else\n          ${_ftpdBind} ${_ftpdConf} &> /dev/null\n        fi\n      fi\n      cd /var/opt\n      touch ${_pthLog}/pure-ftpd-build-${_PURE_FTPD_VRN}-${_xSrl}-${_X_VERSION}.log\n    fi\n  fi\n}\n\n_newrelic_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _newrelic_check_fix\"\n  fi\n  _NR_FORCE_REINSTALL=NO\n  _NRS_TEST=\"$(which newrelic-daemon)\"\n  if [ ! -z \"${_NRS_TEST}\" ]; then\n    _NOW_NR_V=$(newrelic-daemon --version 2>&1 \\\n      | tr -d \"\\n\" \\\n      | cut -d\" \" -f5 \\\n      | awk '{ print $1}' \\\n      | cut -d\"-\" -f1 \\\n      | awk '{ print $1}' 2>&1)\n  fi\n  if [ ! -z \"${_NOW_NR_V}\" ]; then\n    if [ \"${_NOW_NR_V}\" != \"${_NEW_RELIC_VRN}\" ]; then\n      _NR_FORCE_REINSTALL=YES\n    fi\n  else\n    _NR_FORCE_REINSTALL=YES\n  fi\n  if [ \"${_UP_NRC}\" = \"YES\" ] \\\n    || [ \"${_NR_FORCE_REINSTALL}\" = \"YES\" ] \\\n    || [ -e /root/.force.newrelic.update.cnf ]; then\n    _newrelic_update\n  fi\n  _NEWRELIC_APP_CFG=\"/etc/newrelic/newrelic.cfg\"\n  if [ -e \"${_NEWRELIC_APP_CFG}\" ]; then\n    _NEWRELIC_KEY_TEST=$(grep \"REPLACE_WITH_REAL_KEY\" ${_NEWRELIC_APP_CFG} 2>&1)\n    if [[ \"${_NEWRELIC_KEY_TEST}\" =~ \"REPLACE_WITH_REAL_KEY\" ]] \\\n      && [ ! 
-z \"${_NEWRELIC_KEY}\" ]; then\n      sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" \\\n        ${_NEWRELIC_APP_CFG} &> /dev/null\n      wait\n    fi\n    sed -i \"s/^loglevel=.*/loglevel=error/g\" ${_NEWRELIC_APP_CFG} &> /dev/null\n    wait\n    _mrun \"service newrelic-daemon restart\"\n  else\n    if [ -e \"/etc/init.d/newrelic-daemon\" ] \\\n      || [ -e \"/etc/init.d/newrelic-sysmond\" ]; then\n      _mrun \"service newrelic-daemon stop\"\n      _mrun \"update-rc.d -f newrelic-daemon remove\"\n      _mrun \"service newrelic-sysmond stop\"\n      _mrun \"update-rc.d -f newrelic-sysmond remove\"\n      _mrun \"${_RMAPP} newrelic-php5 \\\n        newrelic-php5-common \\\n        newrelic-daemon \\\n        newrelic-sysmond\"\n      mv -f /etc/newrelic \\\n        ${_vBs}/nr/etc-newrelic-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n      rm -f /etc/init.d/newrelic-sysmond\n      rm -f /etc/init.d/newrelic-daemon\n      killall -9 newrelic-daemon &> /dev/null\n      killall -9 nrsysmond &> /dev/null\n    fi\n  fi\n  _NEWRELIC_SYS_CFG=\"/etc/newrelic/nrsysmond.cfg\"\n  if [ -e \"${_NEWRELIC_SYS_CFG}\" ]; then\n    _NEWRELIC_KEY_TEST=$(grep \"REPLACE_WITH_REAL_KEY\" ${_NEWRELIC_SYS_CFG} 2>&1)\n    if [[ \"${_NEWRELIC_KEY_TEST}\" =~ \"REPLACE_WITH_REAL_KEY\" ]] \\\n      && [ ! 
-z \"${_NEWRELIC_KEY}\" ]; then\n      sed -i \"s/REPLACE_WITH_REAL_KEY/${_NEWRELIC_KEY}/g\" \\\n        ${_NEWRELIC_SYS_CFG} &> /dev/null\n      wait\n    fi\n    sed -i \"s/^loglevel=.*/loglevel=error/g\" \\\n      ${_NEWRELIC_SYS_CFG} &> /dev/null\n    wait\n    sed -i \"s/.*pidfile=.*/pidfile=\\/var\\/run\\/nrsysmond.pid/g\" \\\n      ${_NEWRELIC_SYS_CFG} &> /dev/null\n    wait\n    if [ -e \"/root/.enable.newrelic.sysmond.cnf\" ]; then\n      _mrun \"service newrelic-sysmond restart\"\n    else\n      _mrun \"service newrelic-sysmond stop\"\n    fi\n  fi\n}\n\n_java_check_fix() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _java_check_fix\"\n  fi\n\n  if [ \"${_OS_CODE}\" = \"jessie\" ]; then\n    if [ ! -e \"/usr/lib/jvm/java-7-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-7-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-7-openjdk-amd64 /usr/lib/jvm/java-7-openjdk\n    fi\n    if [ ! -e \"/usr/bin/java7\" ] \\\n      && [ -e \"/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java /usr/bin/java7\n    fi\n    if [ -e \"/usr/lib/jvm/java-1.7.0-openjdk-amd64\" ]; then\n      rm -f /usr/lib/jvm/default-java\n      ln -sfn /usr/lib/jvm/java-1.7.0-openjdk-amd64 /usr/lib/jvm/default-java\n    fi\n    if [ -x \"/usr/lib/jvm/java-7-openjdk/jre/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-7-openjdk/jre/bin/java /etc/alternatives/java\n      ln -sfn /etc/alternatives/java /usr/bin/java\n    fi\n  fi\n\n  if [ ! -e \"/usr/lib/jvm/java-8-openjdk\" ] \\\n    && [ -d \"/usr/lib/jvm/java-8-openjdk-amd64\" ]; then\n    ln -sfn /usr/lib/jvm/java-8-openjdk-amd64 /usr/lib/jvm/java-8-openjdk\n  fi\n  if [ ! 
-e \"/usr/bin/java8\" ] \\\n    && [ -e \"/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java\" ]; then\n    ln -sfn /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java /usr/bin/java8\n  fi\n  if [ -e \"/usr/lib/jvm/java-1.8.0-openjdk-amd64\" ]; then\n    rm -f /usr/lib/jvm/default-java\n    ln -sfn /usr/lib/jvm/java-1.8.0-openjdk-amd64 /usr/lib/jvm/default-java\n  fi\n  if [ -x \"/usr/lib/jvm/java-8-openjdk/jre/bin/java\" ]; then\n    ln -sfn /usr/lib/jvm/java-8-openjdk/jre/bin/java /etc/alternatives/java\n    ln -sfn /etc/alternatives/java /usr/bin/java\n  fi\n\n  if [ \"${_OS_CODE}\" = \"excalibur\" ] \\\n    || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n    || [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n    if [ ! -e \"/usr/lib/jvm/java-21-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-21-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-21-openjdk-amd64 /usr/lib/jvm/java-21-openjdk\n    fi\n    if [ ! -e \"/usr/bin/java21\" ] \\\n      && [ -e \"/usr/lib/jvm/java-21-openjdk-amd64/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-21-openjdk-amd64/bin/java /usr/bin/java21\n    fi\n    if [ ! -e \"/usr/lib/jvm/java-17-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-17-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/java-17-openjdk\n    fi\n    if [ ! -e \"/usr/bin/java17\" ] \\\n      && [ -e \"/usr/lib/jvm/java-17-openjdk-amd64/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-17-openjdk-amd64/bin/java /usr/bin/java17\n    fi\n    if [ ! -e \"/usr/lib/jvm/java-11-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-11-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-11-openjdk-amd64 /usr/lib/jvm/java-11-openjdk\n    fi\n    if [ ! 
-e \"/usr/bin/java11\" ] \\\n      && [ -e \"/usr/lib/jvm/java-11-openjdk-amd64/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-11-openjdk-amd64/bin/java /usr/bin/java11\n    fi\n    if [ -x \"/etc/init.d/jenkins\" ] && [ -e \"/var/lib/jenkins\" ]; then\n      _LOOK_LIKE_JENKINS=TRUE\n    elif [ -e \"/root/.look.like.jenkins.cnf\" ]; then\n      _LOOK_LIKE_JENKINS=TRUE\n    else\n      _LOOK_LIKE_JENKINS=FALSE\n    fi\n    if [ \"${_LOOK_LIKE_JENKINS}\" = \"TRUE\" ] \\\n      || [ \"${_OS_CODE}\" = \"daedalus\" ] \\\n      || [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n      if [ -x \"/usr/lib/jvm/java-17-openjdk/bin/java\" ] \\\n        && [ ! -e \"/var/log/boa/.fixed-java17-symlinks.log\" ]; then\n        if [ -e \"/usr/lib/jvm/java-17-openjdk-amd64\" ]; then\n          rm -f /usr/lib/jvm/default-java\n          ln -sfn /usr/lib/jvm/java-17-openjdk-amd64 /usr/lib/jvm/default-java\n        fi\n        ln -sfn /usr/lib/jvm/java-17-openjdk/bin/java /etc/alternatives/java\n        ln -sfn /etc/alternatives/java /usr/bin/java\n        touch /var/log/boa/.fixed-java17-symlinks.log\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"Fixed Java 17 symlinks for ${_OS_CODE}\"\n        fi\n      fi\n      if [ -x \"/usr/lib/jvm/java-21-openjdk/bin/java\" ] \\\n        && [ ! 
-e \"/var/log/boa/.fixed-java21-symlinks.log\" ]; then\n        if [ -e \"/usr/lib/jvm/java-21-openjdk-amd64\" ]; then\n          rm -f /usr/lib/jvm/default-java\n          ln -sfn /usr/lib/jvm/java-21-openjdk-amd64 /usr/lib/jvm/default-java\n        fi\n        ln -sfn /usr/lib/jvm/java-21-openjdk/bin/java /etc/alternatives/java\n        ln -sfn /etc/alternatives/java /usr/bin/java\n        touch /var/log/boa/.fixed-java21-symlinks.log\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"Fixed Java 21 symlinks for ${_OS_CODE}\"\n        fi\n      fi\n    else\n      if [ -x \"/usr/lib/jvm/java-11-openjdk/bin/java\" ]; then\n        if [ -e \"/usr/lib/jvm/java-11-openjdk-amd64\" ]; then\n          rm -f /usr/lib/jvm/default-java\n          ln -sfn /usr/lib/jvm/java-11-openjdk-amd64 /usr/lib/jvm/default-java\n        fi\n        if [ ! -e \"/usr/bin/java\" ] || [ ! -e \"/etc/alternatives/java\" ]; then\n          ln -sfn /usr/lib/jvm/java-11-openjdk/bin/java /etc/alternatives/java\n          ln -sfn /etc/alternatives/java /usr/bin/java\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"Fixed Java 11 symlinks for ${_OS_CODE}\"\n          fi\n        fi\n      fi\n    fi\n  else\n    if [ ! -e \"/usr/lib/jvm/java-11-openjdk\" ] \\\n      && [ -d \"/usr/lib/jvm/java-11-openjdk-amd64\" ]; then\n      ln -sfn /usr/lib/jvm/java-11-openjdk-amd64 /usr/lib/jvm/java-11-openjdk\n    fi\n    if [ ! 
-e \"/usr/bin/java11\" ] \\\n      && [ -e \"/usr/lib/jvm/java-11-openjdk-amd64/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-11-openjdk-amd64/bin/java /usr/bin/java11\n    fi\n    if [ -e \"/usr/lib/jvm/java-11-openjdk-amd64\" ]; then\n      rm -f /usr/lib/jvm/default-java\n      ln -sfn /usr/lib/jvm/java-11-openjdk-amd64 /usr/lib/jvm/default-java\n    fi\n    if [ -x \"/usr/lib/jvm/java-11-openjdk/bin/java\" ]; then\n      ln -sfn /usr/lib/jvm/java-11-openjdk/bin/java /etc/alternatives/java\n      ln -sfn /etc/alternatives/java /usr/bin/java\n    fi\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"Fixed Java 11 symlinks for ${_OS_CODE}\"\n    fi\n  fi\n}\n\n_sshd_armour() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sshd_armour\"\n  fi\n  ###\n  ### See: https://stribika.github.io/2015/01/04/secure-secure-shell.html\n  ### Also: https://github.com/stribika/stribika.github.io/wiki/Secure-Secure-Shell\n  ###\n  if [ -e \"/etc/ssh/sshd_config\" ]; then\n    if [ -e \"/etc/ssh/ssh_config\" ]; then\n      grep -q '^[^#]*GSSAPIAuthentication' /etc/ssh/ssh_config && sed -i '/^[^#]*GSSAPIAuthentication/s/^/#/' /etc/ssh/ssh_config\n      if ! grep -q -E '^\\s*WarnWeakCrypto no' /etc/ssh/ssh_config; then\n        echo \"WarnWeakCrypto no\" >> /etc/ssh/ssh_config\n      fi\n      _mrun \"service ssh restart\"\n    fi\n    if [ \"${_SSH_ARMOUR}\" = \"NO\" ] \\\n      || [ -z \"${_SSH_ARMOUR}\" ]; then\n      if [ -e \"/etc/ssh/.vanilla.sshd_config\" ]; then\n        mv -f /etc/ssh/.vanilla.sshd_config /etc/ssh/sshd_config\n      fi\n      if [ -e \"/etc/ssh/.vanilla.ssh_config\" ]; then\n        mv -f /etc/ssh/.vanilla.ssh_config /etc/ssh/ssh_config\n      fi\n    else\n      if [ ! -e \"/etc/ssh/.vanilla.sshd_config\" ]; then\n        cp -a /etc/ssh/sshd_config /etc/ssh/.vanilla.sshd_config\n      fi\n      if [ ! 
-e \"/etc/ssh/.vanilla.ssh_config\" ]; then\n        cp -a /etc/ssh/ssh_config /etc/ssh/.vanilla.ssh_config\n      fi\n      _SSH_PROTOCOL_TEST=$(grep \"Protocol\" /etc/ssh/sshd_config 2>&1)\n      if [[ \"${_SSH_PROTOCOL_TEST}\" =~ (^)\"Protocol 2\" ]]; then\n        _DO_NOTHING=YES\n      elif [[ \"${_SSH_PROTOCOL_TEST}\" =~ \"Protocol\" ]]; then\n        sed -i \"s/.*Protocol.*/Protocol 2/g\" /etc/ssh/sshd_config &> /dev/null\n        wait\n      else\n        echo >> /etc/ssh/sshd_config\n        echo \"Protocol 2\" >> /etc/ssh/sshd_config\n      fi\n      _SSH_ALGO_TEST=$(grep \"KexAlgorithms\" /etc/ssh/sshd_config 2>&1)\n      if [[ \"${_SSH_ALGO_TEST}\" =~ (^)\"KexAlgorithms curve25519-sha256\" ]]; then\n        _DO_NOTHING=YES\n      elif [[ \"${_SSH_ALGO_TEST}\" =~ \"KexAlgorithms\" ]]; then\n        sed -i \"s/.*KexAlgorithms.*//g\" /etc/ssh/sshd_config &> /dev/null\n        wait\n        echo \"KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256\" >> /etc/ssh/sshd_config\n      else\n        echo >> /etc/ssh/sshd_config\n        echo \"KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256\" >> /etc/ssh/sshd_config\n      fi\n      _SSH_CIPHERS_TEST=$(grep \"Ciphers\" /etc/ssh/sshd_config 2>&1)\n      if [[ \"${_SSH_CIPHERS_TEST}\" =~ (^)\"Ciphers chacha20-poly1305\" ]]; then\n        _DO_NOTHING=YES\n      elif [[ \"${_SSH_CIPHERS_TEST}\" =~ \"Ciphers\" ]]; then\n        sed -i \"s/.*Ciphers.*//g\" /etc/ssh/sshd_config &> /dev/null\n        wait\n        echo \"Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\" >> /etc/ssh/sshd_config\n      else\n        echo >> /etc/ssh/sshd_config\n        echo \"Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\" >> /etc/ssh/sshd_config\n      fi\n      _SSH_MACS_TEST=$(grep \"MACs\" /etc/ssh/sshd_config 2>&1)\n      if [[ 
\"${_SSH_MACS_TEST}\" =~ (^)\"MACs hmac-sha2-512-etm\" ]]; then\n        _DO_NOTHING=YES\n      elif [[ \"${_SSH_MACS_TEST}\" =~ \"MACs\" ]]; then\n        sed -i \"s/.*MACs.*//g\" /etc/ssh/sshd_config &> /dev/null\n        wait\n        echo \"MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com\" >> /etc/ssh/sshd_config\n      else\n        echo >> /etc/ssh/sshd_config\n        echo \"MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com\" >> /etc/ssh/sshd_config\n      fi\n      _SSH_HKEY_TEST=$(grep \"HostKey\" /etc/ssh/sshd_config 2>&1)\n      if [[ \"${_SSH_HKEY_TEST}\" =~ (^)\"HostKey\" ]] \\\n        && [[ \"${_SSH_HKEY_TEST}\" =~ \"ssh_host_ed25519_key\" ]]; then\n        _DO_NOTHING=YES\n      elif [[ \"${_SSH_HKEY_TEST}\" =~ \"HostKey\" ]]; then\n        sed -i \"s/.*HostKey.*//g\" /etc/ssh/sshd_config &> /dev/null\n        wait\n        echo \"HostKey /etc/ssh/ssh_host_ed25519_key\" >> /etc/ssh/sshd_config\n        echo \"HostKey /etc/ssh/ssh_host_rsa_key\" >> /etc/ssh/sshd_config\n      else\n        echo >> /etc/ssh/sshd_config\n        echo \"HostKey /etc/ssh/ssh_host_ed25519_key\" >> /etc/ssh/sshd_config\n        echo \"HostKey /etc/ssh/ssh_host_rsa_key\" >> /etc/ssh/sshd_config\n      fi\n      if [ ! -e \"/etc/ssh/.ssh_host_ed25519_key.pid\" ]; then\n        [ -d \"/var/backups/sshd\" ] || mkdir -p /var/backups/sshd\n        mv -f /etc/ssh/ssh_host_*key* /var/backups/sshd/\n        _msg \"INFO: Generating new ssh_host_ed25519_key...\"\n        ssh-keygen -t ed25519 -N \"\" -q -f /etc/ssh/ssh_host_ed25519_key < /dev/null\n        touch /etc/ssh/.ssh_host_ed25519_key.pid\n      fi\n      if [ ! 
-e \"/etc/ssh/.ssh_host_rsa_key4096.pid\" ]; then\n        _msg \"INFO: Generating new ssh_host_rsa_key...\"\n        ssh-keygen -t rsa -b 4096 -N \"\" -q -f /etc/ssh/ssh_host_rsa_key < /dev/null\n        touch /etc/ssh/.ssh_host_rsa_key4096.pid\n        _msg \"NOTE: You will be asked to accept the new SSH fingerprint the next time you connect\"\n      fi\n      _isKeyg=\"$(which ssh-keygen)\"\n      if [ -x \"${_isKeyg}\" ] \\\n        && [ -e \"${_SYSCONFDIR}/moduli\" ] \\\n        && [ ! -e \"${_SYSCONFDIR}/.moduli_bak\" ]; then\n        _msg \"WAIT: New SSH moduli generate may take a long time, please wait...\"\n        cd /var/opt\n        rm -rf /var/opt/moduli-*\n        _mrun \"${_isKeyg} -M generate -O bits=2048 moduli-2048.candidates\"\n        wait\n        _mrun \"${_isKeyg} -M screen -f moduli-2048.candidates moduli-2048\"\n        wait\n        if [ -e \"/var/opt/moduli-2048\" ]; then\n          cp -af ${_SYSCONFDIR}/moduli ${_SYSCONFDIR}/.moduli_bak\n          cp -af /var/opt/moduli-2048 ${_SYSCONFDIR}/moduli\n          _mrun \"service ssh restart\"\n        fi\n      fi\n      _SSH_KEXALGO_TEST=$(grep \"KexAlgorithms\" /etc/ssh/ssh_config 2>&1)\n      if [[ \"${_SSH_KEXALGO_TEST}\" =~ \"KexAlgorithms curve25519-sha256\" ]]; then\n        _DO_NOTHING=YES\n      elif [[ \"${_SSH_KEXALGO_TEST}\" =~ \"KexAlgorithms\" ]]; then\n        sed -i \"s/.*PubkeyAuthentication.*//g\" /etc/ssh/ssh_config &> /dev/null\n        wait\n        sed -i \"s/.*UseRoaming.*//g\" /etc/ssh/ssh_config &> /dev/null\n        wait\n        sed -i \"s/.*Ciphers.*//g\" /etc/ssh/ssh_config &> /dev/null\n        wait\n        sed -i \"s/.*HostKeyAlgorithms.*//g\" /etc/ssh/ssh_config &> /dev/null\n        wait\n        sed -i \"s/.*KexAlgorithms.*//g\" /etc/ssh/ssh_config &> /dev/null\n        wait\n        sed -i \"s/.*MACs.*//g\" /etc/ssh/ssh_config &> /dev/null\n        wait\n        echo >> /etc/ssh/ssh_config\n        echo \"Host *\" >> /etc/ssh/ssh_config\n        echo \"  
PubkeyAuthentication yes\" >> /etc/ssh/ssh_config\n        echo \"  UseRoaming no\" >> /etc/ssh/ssh_config\n        echo \"  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\" >> /etc/ssh/ssh_config\n        echo \"  HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,ssh-rsa\" >> /etc/ssh/ssh_config\n        echo \"  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256\" >> /etc/ssh/ssh_config\n        echo \"  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com\" >> /etc/ssh/ssh_config\n      else\n        echo >> /etc/ssh/ssh_config\n        echo \"Host *\" >> /etc/ssh/ssh_config\n        echo \"  PubkeyAuthentication yes\" >> /etc/ssh/ssh_config\n        echo \"  UseRoaming no\" >> /etc/ssh/ssh_config\n        echo \"  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\" >> /etc/ssh/ssh_config\n        echo \"  HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,ssh-rsa\" >> /etc/ssh/ssh_config\n        echo \"  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256\" >> /etc/ssh/ssh_config\n        echo \"  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com\" >> /etc/ssh/ssh_config\n      fi\n    fi\n    _mrun \"service ssh restart\"\n  fi\n}\n\n_initd_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _initd_update\"\n  fi\n  _THIS_DB_PORT=3306\n  if [ \"${_STATUS}\" = \"INIT\" ]; then\n    _msg \"INFO: Updating init scripts\"\n    cp -af ${_locCnf}/var/clean-boa-env /etc/init.d/clean-boa-env\n    chmod 755 /etc/init.d/clean-boa-env &> /dev/null\n    _mrun \"update-rc.d clean-boa-env defaults\"\n    
_SSH_USEDNS_TEST=$(grep \"UseDNS\" /etc/ssh/sshd_config 2>&1)\n    if [[ \"${_SSH_USEDNS_TEST}\" =~ (^)\"UseDNS no\" ]]; then\n      _DO_NOTHING=YES\n    elif [[ \"${_SSH_USEDNS_TEST}\" =~ \"UseDNS\" ]]; then\n      sed -i \"s/.*UseDNS.*/UseDNS no/g\" /etc/ssh/sshd_config &> /dev/null\n      wait\n    else\n      echo >> /etc/ssh/sshd_config\n      echo \"UseDNS no\" >> /etc/ssh/sshd_config\n    fi\n    _SSH_USEPAM_TEST=$(grep \"UsePAM\" /etc/ssh/sshd_config 2>&1)\n    if [[ \"${_SSH_USEPAM_TEST}\" =~ (^)\"UsePAM no\" ]]; then\n      _DO_NOTHING=YES\n    elif [[ \"${_SSH_USEPAM_TEST}\" =~ \"UsePAM\" ]]; then\n      sed -i \"s/.*UsePAM.*/UsePAM no/g\" /etc/ssh/sshd_config &> /dev/null\n      wait\n    else\n      echo >> /etc/ssh/sshd_config\n      echo \"UsePAM no\" >> /etc/ssh/sshd_config\n    fi\n    _mrun \"service ssh restart\"\n    _tune_memory_limits\n    if [ -x \"/etc/init.d/solr9\" ] && [ -e \"/etc/default/solr9.in.sh\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Restarting Solr 9\"\n      fi\n      pkill -9 -f solr9\n      _mrun \"service solr9 start\"\n    fi\n    if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/etc/default/solr7.in.sh\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Restarting Solr 7\"\n      fi\n      pkill -9 -f solr7\n      _mrun \"service solr7 start\"\n    fi\n    if [ -e \"/etc/default/jetty9\" ] && [ -e \"/etc/init.d/jetty9\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Restarting Jetty 9\"\n      fi\n      pkill -9 -f jetty9\n      _mrun \"service jetty9 start\"\n    fi\n    if [ ! 
-e \"/root/.proxy.cnf\" ]; then\n      if [ -e \"/etc/init.d/valkey-server\" ]; then\n        _msg \"INFO: Starting Valkey, PHP-FPM and Nginx\"\n        _mrun \"service valkey-server start\"\n      elif [ -e \"/etc/init.d/redis-server\" ]; then\n        _msg \"INFO: Starting Redis, PHP-FPM and Nginx\"\n        _mrun \"service redis-server start\"\n      fi\n      _mrun \"update-rc.d cron defaults\"\n      _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ]; then\n          _mrun \"update-rc.d php${e}-fpm defaults\"\n          _mrun \"service php${e}-fpm start\"\n        fi\n      done\n      _mrun \"update-rc.d nginx defaults\"\n      _mrun \"service nginx start\"\n    fi\n  else\n    _if_hosted_sys\n    if [ \"${_hostedSys}\" = \"YES\" ] \\\n      || [ -e \"/root/.ssh.root.auth.keys.only.cnf\" ]; then\n      sed -i \"s/^#PermitRootLogin.*/PermitRootLogin prohibit-password/g\" \\\n        /etc/ssh/sshd_config &> /dev/null\n      wait\n      sed -i \"s/^PermitRootLogin.*/PermitRootLogin prohibit-password/g\" \\\n        /etc/ssh/sshd_config &> /dev/null\n      wait\n    fi\n    sed -i \"s/.*UseDNS.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/.*UsePAM.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/.*PrintMotd.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^ClientAliveCountMax.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^ClientAliveInterval.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^IgnoreUserKnownHosts.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^PasswordAuthentication.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^TCPKeepAlive.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^PermitEmptyPasswords.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^KbdInteractiveAuthentication.*//g\" /etc/ssh/sshd_config &> 
/dev/null\n    wait\n    sed -i \"s/^MaxAuthTries.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^LoginGraceTime.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^MaxStartups.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^X11Forwarding.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^AllowTcpForwarding.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^AllowAgentForwarding.*//g\" /etc/ssh/sshd_config &> /dev/null\n    wait\n    sed -i \"s/^Subsystem.*//g\" /etc/ssh/sshd_config\n    wait\n    echo >> /etc/ssh/sshd_config\n    echo \"IgnoreUserKnownHosts no\" >> /etc/ssh/sshd_config\n    if [ -e \"/root/.ssh.auth.keys.only.cnf\" ]; then\n      echo \"PasswordAuthentication no\" >> /etc/ssh/sshd_config\n    else\n      echo \"PasswordAuthentication yes\" >> /etc/ssh/sshd_config\n    fi\n    echo \"UseDNS no\" >> /etc/ssh/sshd_config\n    echo \"UsePAM no\" >> /etc/ssh/sshd_config\n    echo \"PrintMotd yes\" >> /etc/ssh/sshd_config\n    echo \"ClientAliveInterval 300\" >> /etc/ssh/sshd_config\n    echo \"ClientAliveCountMax 10000\" >> /etc/ssh/sshd_config\n    echo \"TCPKeepAlive yes\" >> /etc/ssh/sshd_config\n    echo \"PermitEmptyPasswords no\" >> /etc/ssh/sshd_config\n    echo \"KbdInteractiveAuthentication no\" >> /etc/ssh/sshd_config\n    echo \"MaxAuthTries 3\" >> /etc/ssh/sshd_config\n    echo \"LoginGraceTime 30\" >> /etc/ssh/sshd_config\n    echo \"MaxStartups 5:50:10\" >> /etc/ssh/sshd_config\n    echo \"X11Forwarding no\" >> /etc/ssh/sshd_config\n    echo \"AllowTcpForwarding no\" >> /etc/ssh/sshd_config\n    echo \"AllowAgentForwarding no\" >> /etc/ssh/sshd_config\n    echo \"Subsystem sftp /usr/lib/openssh/sftp-server -u 0002\" >> /etc/ssh/sshd_config\n    sed -i \"/^$/d\" /etc/ssh/sshd_config\n    wait\n    _mrun \"service ssh restart\"\n    _fix_on_upgrade\n    _tune_memory_limits\n    if [ -x \"/etc/init.d/solr9\" ] && [ -e \"/etc/default/solr9.in.sh\" ]; 
then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Restarting Solr 9\"\n      fi\n      pkill -9 -f solr9\n      _mrun \"service solr9 start\"\n    fi\n    if [ -x \"/etc/init.d/solr7\" ] && [ -e \"/etc/default/solr7.in.sh\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Restarting Solr 7\"\n      fi\n      pkill -9 -f solr7\n      _mrun \"service solr7 start\"\n    fi\n    if [ -e \"/etc/default/jetty9\" ] && [ -e \"/etc/init.d/jetty9\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Restarting Jetty 9\"\n      fi\n      pkill -9 -f jetty9\n      _mrun \"service jetty9 start\"\n    fi\n    if [ ! -e \"/root/.proxy.cnf\" ]; then\n      if [ -e \"/etc/init.d/valkey-server\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Reloading Valkey, PHP-FPM and Nginx...\"\n        fi\n        _mrun \"service valkey-server reload\"\n      elif [ -e \"/etc/init.d/redis-server\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Reloading Redis, PHP-FPM and Nginx...\"\n        fi\n        _mrun \"service redis-server reload\"\n      fi\n      _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ]; then\n          _mrun \"update-rc.d php${e}-fpm defaults\"\n        fi\n      done\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ]; then\n          _mrun \"service php${e}-fpm reload\"\n        fi\n      done\n      _mrun \"service nginx restart\"\n      _DB_SERVER_TEST=$(mysql -V 2>&1)\n      if [[ \"${_DB_SERVER_TEST}\" =~ \"Distrib ${_DB_SERIES}.\" ]]; then\n        _check_mysql_version\n        if [ \"${_DB_V}\" = \"5.7\" ]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Running ${_DB_SERVER} system tables (2) upgrade...\"\n          fi\n          rm -f /var/lib/mysql/mysql_upgrade_info &> 
/dev/null\n          if [ -x \"/usr/bin/mysql_upgrade\" ]; then\n            _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n            _mrun \"mysql_upgrade -u root --force\"\n            _mrun \"mysql_upgrade -u root --force\"\n          fi\n        fi\n      fi\n      _myCnf=\"/etc/mysql/my.cnf\"\n      _preCnf=\"${_vBs}/dragon/t/my.cnf-pre-${_xSrl}-${_X_VERSION}-${_NOW}\"\n      if [ -f \"${_myCnf}\" ]; then\n        _myCnfUpdate=NO\n        _myTopReinstall=NO\n        if [ ! -f \"${_preCnf}\" ]; then\n          mkdir -p ${_vBs}/dragon/t/\n          cp -af ${_myCnf} ${_preCnf}\n        fi\n        _diffMyTest=$(diff -w -B \\\n          -I userstat \\\n          -I innodb_buffer_pool_size \\\n          -I innodb_buffer_pool_instances \\\n          -I innodb_page_cleaners \\\n          -I tmp_table_size \\\n          -I max_heap_table_size \\\n          -I myisam_sort_buffer_size \\\n          -I key_buffer_size ${_myCnf} ${_preCnf} 2>&1)\n        if [ -z \"${_diffMyTest}\" ]; then\n          _myCnfUpdate=NO\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff0 empty\"\n          fi\n        else\n          _myCnfUpdate=YES\n          _myTopReinstall=YES\n          # _diffMyTest=$(echo -n ${_diffMyTest} | fmt -su -w 2500)\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff1 ${_diffMyTest}\"\n          fi\n        fi\n        if [[ \"${_diffMyTest}\" =~ \"innodb_buffer_pool_size\" ]]; then\n          _myCnfUpdate=NO\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff2 ${_diffMyTest}\"\n          fi\n        fi\n        if [[ \"${_diffMyTest}\" =~ \"No such file or directory\" ]]; then\n          _myCnfUpdate=NO\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} diff3 ${_diffMyTest}\"\n          fi\n        fi\n      fi\n      if [ \"${_UP_SQL}\" = \"YES\" ]; then\n        
_myVnTest=$(mysql -V 2>&1)\n        if [[ \"${_myVnTest}\" =~ \"${_DBS_VRN}\" ]]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} ${_DBS_VRN} already up to date\"\n          fi\n          _myCnfUpdate=YES\n          _myTopReinstall=YES\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: ${_DB_SERVER} restart after upgrade forced\"\n          fi\n        fi\n      fi\n      if [ ! -e \"/root/.run-to-excalibur.cnf\" ] \\\n        && [ ! -e \"/root/.run-to-daedalus.cnf\" ] \\\n        && [ ! -e \"/root/.run-to-chimaera.cnf\" ] \\\n        && [ ! -e \"/root/.run-to-beowulf.cnf\" ]; then\n        _SQL_PSWD=$(cat /root/.my.pass.txt 2>/dev/null | tr -d '\\n')\n        if [ \"${_myCnfUpdate}\" = \"NO\" ]; then\n          _myUptime=$(mysqladmin -u root version | grep -i uptime 2>&1)\n          _myUptime=$(echo -n ${_myUptime} | fmt -su -w 2500 2>&1)\n          _msg \"INFO: ${_DB_SERVER} ${_myUptime}\"\n        fi\n        if [ \"${_myRstrd}\" = \"YES\" ]; then\n          _myCnfUpdate=NO\n          _msg \"INFO: ${_DB_SERVER} already restarted!\"\n        fi\n        if [ \"${_myCnfUpdate}\" = \"YES\" ]; then\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Restarting ${_DB_SERVER} server...\"\n          fi\n          _mrun \"bash /var/xdrago/move_sql.sh\"\n          wait\n          _msg \"INFO: ${_DB_SERVER} server restart completed\"\n        fi\n        _check_mysql_version\n        if [ \"${_DB_V}\" = \"5.7\" ] || [ \"${_DB_SERIES}\" = \"5.7\" ]; then\n          _CHECK_EXISTS=$(mysql -u root -e \"SELECT EXISTS(SELECT 1 FROM mysql.user WHERE user = 'drandom_2test')\" | grep \"0\" 2>&1)\n          if [[ \"${_CHECK_EXISTS}\" =~ \"0\" ]]; then\n            _CHECK_REPAIR=$(mysql -u root -e \"CREATE USER IF NOT EXISTS 'drandom_2test'@'localhost';\" 2>&1)\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              echo _CHECK_REPAIR 1 ${_CHECK_REPAIR}\n            
fi\n            if [[ \"${_CHECK_REPAIR}\" =~ \"corrupted\" ]]; then\n              mysqlcheck -u root -A --auto-repair --silent\n              mysql_upgrade -u root --force\n              mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN default_role;\"\n              mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN is_role;\"\n              mysql -u root -e \"ALTER TABLE mysql.user DROP COLUMN max_statement_time;\"\n              mysql_upgrade -u root --force\n            fi\n            _CHECK_REPAIR=$(mysql -u root -e \"CREATE USER IF NOT EXISTS 'drandom_2test'@'localhost';\" 2>&1)\n            if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n              echo _CHECK_REPAIR 2 ${_CHECK_REPAIR}\n            fi\n          fi\n          mysql -u root -e \"SET GLOBAL innodb_flush_log_at_trx_commit=2;\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_flush_log_at_timeout=1;\" &> /dev/null\n          mysql -u root -e \"SET GLOBAL innodb_stats_on_metadata=0;\" &> /dev/null\n          rm -f /etc/mysql/conf.d/mysqldump.cnf\n        fi\n      fi\n    fi\n  fi\n}\n\n_sysctl_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _sysctl_update\"\n  fi\n  # Check if conntrack is available\n  if ! command -v conntrack &> /dev/null; then\n    _apt_clean_update\n    _mrun \"${_INITINS} conntrack\"\n  fi\n  # Check if tcpdump is available\n  if ! command -v tcpdump &> /dev/null; then\n    _apt_clean_update\n    _mrun \"${_INITINS} tcpdump\"\n  fi\n  # Check if traceroute is available\n  if ! command -v traceroute &> /dev/null; then\n    _apt_clean_update\n    _mrun \"${_INITINS} traceroute\"\n  fi\n  if [ ! -e \"/root/.no.sysctl.update.cnf\" ] \\\n    && [ ! 
-e \"${_pthLog}/sysctl.conf-${_xSrl}-${_X_VERSION}-${_NOW}.log\" ]; then\n    cp -af /etc/sysctl.conf \\\n      ${_vBs}/dragon/t/sysctl.conf-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    if [ -e \"${_locCnf}/var/sysctl.conf\" ]; then\n      cp -af ${_locCnf}/var/sysctl.conf /etc/sysctl.conf\n    fi\n    if [ -e \"/etc/security/limits.conf\" ]; then\n      _IF_NF=$(grep '2097152' /etc/security/limits.conf 2>&1)\n      if [ ! -z \"${_IF_NF}\" ]; then\n        sed -i \"s/.*2097152.*//g\" /etc/security/limits.conf\n        wait\n      fi\n      _IF_NF=$(grep '524288' /etc/security/limits.conf 2>&1)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"*         soft    nofile      524288\"  >> /etc/security/limits.conf\n        echo \"root      hard    nofile      1048576\" >> /etc/security/limits.conf\n        echo \"root      soft    nofile      1048576\" >> /etc/security/limits.conf\n      fi\n      _IF_NF=$(grep '65556' /etc/security/limits.conf 2>&1)\n      if [ -z \"${_IF_NF}\" ]; then\n        echo \"*         hard    nproc       65556\"   >> /etc/security/limits.conf\n        echo \"*         soft    nproc       65556\"   >> /etc/security/limits.conf\n      fi\n    fi\n    if [ -e \"/boot/grub/grub.cfg\" ] || [ -e \"/boot/grub/menu.lst\" ]; then\n      #echo never > /sys/kernel/mm/transparent_hugepage/enabled\n      if [ -e \"/etc/sysctl.conf\" ]; then\n        sysctl -p /etc/sysctl.conf &> /dev/null\n      fi\n    else\n      if [ -e \"/etc/sysctl.conf\" ]; then\n        sysctl -p /etc/sysctl.conf &> /dev/null\n      fi\n    fi\n    if [ -e \"/etc/default/nginx\" ]; then\n      _IF_ULNX=$(grep '524288' /etc/default/nginx 2>&1)\n      if [ -z \"${_IF_ULNX}\" ]; then\n        sed -i \"s/^ULIMIT=.*//gi\" /etc/default/nginx\n        wait\n        echo ULIMIT=\\\"-n 524288\\\" >> /etc/default/nginx\n        ulimit -n 524288 &> /dev/null\n        _mrun \"service nginx 
restart\"\n      fi\n    fi\n    if [ -e \"/etc/security/limits.d\" ] \\\n      && [ ! -e \"/etc/security/limits.d/valkey.conf\" ]; then\n      echo \"sshd soft nofile 524288\" > /etc/security/limits.d/sshd.conf\n      echo \"sshd hard nofile 999999\" >> /etc/security/limits.d/sshd.conf\n      echo \"valkey soft nofile 65535\" > /etc/security/limits.d/valkey.conf\n      echo \"valkey hard nofile 524288\" >> /etc/security/limits.d/valkey.conf\n      echo \"redis soft nofile 65535\" > /etc/security/limits.d/redis.conf\n      echo \"redis hard nofile 524288\" >> /etc/security/limits.d/redis.conf\n      echo \"nginx soft nofile 524288\" > /etc/security/limits.d/nginx.conf\n      echo \"nginx hard nofile 999999\" >> /etc/security/limits.d/nginx.conf\n      echo \"jetty9 soft nofile 65535\" > /etc/security/limits.d/jetty9.conf\n      echo \"jetty9 hard nofile 524288\" >> /etc/security/limits.d/jetty9.conf\n      echo \"solr7 soft nofile 65535\" > /etc/security/limits.d/solr7.conf\n      echo \"solr7 hard nofile 524288\" >> /etc/security/limits.d/solr7.conf\n      echo \"solr9 soft nofile 65535\" > /etc/security/limits.d/solr9.conf\n      echo \"solr9 hard nofile 524288\" >> /etc/security/limits.d/solr9.conf\n      echo \"@www-data soft nofile 65535\" > /etc/security/limits.d/www.conf\n      echo \"@www-data hard nofile 524288\" >> /etc/security/limits.d/www.conf\n      if [ -e \"/etc/init.d/valkey-server\" ]; then\n        _mrun \"service valkey-server restart\"\n      elif [ -e \"/etc/init.d/redis-server\" ]; then\n        _mrun \"service redis-server restart\"\n      fi\n      _mrun \"service nginx restart\"\n      _mrun \"service ssh restart\"\n      _PHP_V=\"85 84 83 82 81 80 74 73 72 71 70 56\"\n      for e in ${_PHP_V}; do\n        if [ -e \"/etc/init.d/php${e}-fpm\" ]; then\n          _mrun \"service php${e}-fpm reload\"\n        fi\n      done\n    fi\n    touch ${_pthLog}/sysctl.conf-${_xSrl}-${_X_VERSION}-${_NOW}.log\n  fi\n}\n\n_apticron_update() {\n  if [ 
\"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _apticron_update\"\n  fi\n  _aPc=\"/etc/apticron/apticron.conf\"\n  if [ -e \"/usr/sbin/apticron\" ]; then\n    if [ -e \"/usr/lib/apticron/apticron.conf\" ] && [ ! -e \"${_aPc}\" ]; then\n      cp -a /usr/lib/apticron/apticron.conf ${_aPc}\n    fi\n    if [ -e \"${_aPc}\" ]; then\n      _APTICRON_TEST=$(grep \"NOTIFY_HOLDS\" ${_aPc} 2>&1)\n      if [[ \"${_APTICRON_TEST}\" =~ \"NOTIFY_HOLDS\" ]]; then\n        sed -i \"s/^NOTIFY_HOLDS=.*/NOTIFY_HOLDS=\\\"0\\\"/g\" ${_aPc}\n        sed -i \"s/notify@omega8.cc/${_MY_EMAIL}/g\" ${_aPc}\n        sed -i \"s/^EMAIL=.*/EMAIL=\"${_MY_EMAIL}\"/g\" ${_aPc}\n      else\n        _mrun \"apt-get remove apticron -y --purge --auto-remove -qq\"\n        _mrun \"apt-get install apticron ${_aptYesUnth}\"\n        sed -i \"s/^NOTIFY_HOLDS=.*/NOTIFY_HOLDS=\\\"0\\\"/g\" ${_aPc}\n        sed -i \"s/notify@omega8.cc/${_MY_EMAIL}/g\" ${_aPc}\n        sed -i \"s/^EMAIL=.*/EMAIL=\"${_MY_EMAIL}\"/g\" ${_aPc}\n      fi\n    fi\n    _barUpv=\"${_tRee}\"\n    sed -i \"s/aptitude full-upgrade/barracuda up-${_barUpv} system/g\" /usr/sbin/apticron\n      wait\n    sed -i \"s/apt-get dist-upgrade/barracuda up-${_barUpv} system/g\" /usr/sbin/apticron\n      wait\n    sed -i \"s/barracuda up-${_tRee}.*/barracuda up-${_barUpv} system/g\" /usr/sbin/apticron\n  fi\n}\n\n_rsyslog_config_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _rsyslog_config_update\"\n  fi\n  if [ -x \"/etc/init.d/rsyslog\" ] \\\n    && [ -e \"/etc/rsyslog.conf\" ]; then\n    mv -f /etc/rsyslog.conf \\\n      /var/backups/.etc.rsyslog.conf-pre-${_xSrl}-${_X_VERSION}-${_NOW} &> /dev/null\n    cp -af ${_locCnf}/var/rsyslog.conf /etc/rsyslog.conf\n    cp -af ${_locCnf}/var/mysql-notices.conf /etc/rsyslog.d/mysql-notices.conf\n    cp -af ${_locCnf}/var/logrotate.d.rsyslog.conf /etc/logrotate.d/rsyslog\n    touch /var/log/mysql-notices.log\n    chmod 640 /var/log/mysql-notices.log\n    chown root:adm 
/var/log/mysql-notices.log\n    _mrun \"service rsyslog restart\"\n  fi\n}\n\n_complete() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _complete\"\n  fi\n  _fix_lfd_whitelist\n  _fix_lfd_uidignore\n  _fix_cnf_postfix\n  _re_set_default_php_cli\n  if [ -d \"/data/u\" ]; then\n    _composer_install_upgrade\n  fi\n  if [ \"${_STATUS}\" != \"UPGRADE\" ]; then\n    _STRICT_BIN_PERMISSIONS=NO\n  fi\n  if [ \"${_STRICT_BIN_PERMISSIONS}\" = \"YES\" ]; then\n    usermod -aG users _apt &> /dev/null\n    usermod -aG users aegir &> /dev/null\n    usermod -aG users bin &> /dev/null\n    usermod -aG users daemon &> /dev/null\n    usermod -aG users man &> /dev/null\n    usermod -aG users mysql &> /dev/null\n    usermod -aG users nobody &> /dev/null\n    usermod -aG users root &> /dev/null\n    usermod -aG users sync &> /dev/null\n    usermod -aG users sys &> /dev/null\n    if [ -x \"/bin/dash\" ] || [ -x \"/usr/bin/dash\" ]; then\n      _symlink_to_dash\n      _switch_to_dash\n    elif [ -x \"/bin/bash\" ] || [ -x \"/usr/bin/bash\" ]; then\n      _symlink_to_bash\n      _switch_to_bash\n    fi\n    _strict_bin_permissions\n    rm -f /etc/apt/apt.conf.d/00sandboxtmp\n    rm -f /etc/apt/apt.conf.d/00temp\n  fi\n  _finale\n}\n"
  },
  {
    "path": "lib/functions/valkey.sh.inc",
    "content": "\n#\n# Update Valkey Init.\n_update_valkey_init() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _update_valkey_init\"\n  fi\n  if ! grep -q \"mkdir\" /etc/init.d/valkey-server; then\n    cp -af ${_locCnf}/valkey/valkey-server /etc/init.d/valkey-server\n    chmod 755 /etc/init.d/valkey-server &> /dev/null\n    _mrun \"update-rc.d valkey-server defaults\"\n  fi\n}\n#\n# Disable Redis.\n_disable_redis() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _disable_redis\"\n  fi\n  _mrun \"service redis-server stop\"\n  killall -9 redis-server &> /dev/null\n  _mrun \"update-rc.d -f redis-server remove\"\n  mv -f /etc/init.d/redis-server /etc/init.d/redis-server-off &> /dev/null\n  killall -9 redis-server &> /dev/null\n  rm -f /run/redis/redis.pid\n  rm -rf /var/lib/redis\n  _msg \"INFO: Redis server has been disabled\"\n}\n#\n# Forced Valkey password update.\n_forced_valkey_password_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _forced_valkey_password_update\"\n  fi\n  if [ \"${_VALKEY_LISTEN_MODE}\" = \"SOCKET\" ] \\\n    || [ \"${_VALKEY_LISTEN_MODE}\" = \"PORT\" ] \\\n    || [ \"${_VALKEY_LISTEN_MODE}\" = \"127.0.0.1\" ]; then\n    if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n      _msg \"INFO: Generating random password for local Valkey server\"\n    fi\n    _ESC_RPASS=\"\"\n    _LEN_RPASS=0\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ]; then\n      _PWD_CHARS=64\n    elif [ \"${_STRONG_PASSWORDS}\" = \"NO\" ]; then\n      _PWD_CHARS=32\n    else\n      _STRONG_PASSWORDS=${_STRONG_PASSWORDS//[^0-9]/}\n      if [ ! -z \"${_STRONG_PASSWORDS}\" ] \\\n        && [ \"${_STRONG_PASSWORDS}\" -gt 32 ]; then\n        _PWD_CHARS=\"${_STRONG_PASSWORDS}\"\n      else\n        _PWD_CHARS=32\n      fi\n      if [ ! 
-z \"${_PWD_CHARS}\" ] && [ \"${_PWD_CHARS}\" -gt 128 ]; then\n        _PWD_CHARS=128\n      fi\n    fi\n    if [ \"${_STRONG_PASSWORDS}\" = \"YES\" ] || [ \"${_PWD_CHARS}\" -gt 32 ]; then\n      _RANDPASS_TEST=$(randpass -V 2>&1)\n      if [[ \"${_RANDPASS_TEST}\" =~ \"alnum\" ]]; then\n        _ESC_RPASS=$(randpass \"${_PWD_CHARS}\" alnum 2>&1)\n      else\n        _ESC_RPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n        _ESC_RPASS=$(echo -n \"${_ESC_RPASS}\" | tr -d \"\\n\" 2>&1)\n        _ESC_RPASS=$(_sanitize_string \"${_ESC_RPASS}\" 2>&1)\n      fi\n      _ESC_RPASS=$(echo -n \"${_ESC_RPASS}\" | tr -d \"\\n\" 2>&1)\n      _LEN_RPASS=$(echo ${#_ESC_RPASS} 2>&1)\n    fi\n    if [ -z \"${_ESC_RPASS}\" ] || [ \"${_LEN_RPASS}\" -lt 9 ]; then\n      _ESC_RPASS=$(shuf -zer -n64 {A..Z} {a..z} {0..9} % @ | tr -d '\\0' 2>&1)\n      _ESC_RPASS=$(echo -n \"${_ESC_RPASS}\" | tr -d \"\\n\" 2>&1)\n      _ESC_RPASS=$(_sanitize_string \"${_ESC_RPASS}\" 2>&1)\n    fi\n  else\n    _msg \"INFO: Managing password for remote Valkey server\"\n    if [ -e \"/root/.valkey.pass.txt\" ] \\\n      && [ -e \"${_pthLog}/remote-valkey-passwd.log\" ]; then\n      _ESC_RPASS=$(cat /root/.valkey.pass.txt 2>/dev/null | tr -d '\\n')\n      _ESC_RPASS=$(_sanitize_string \"${_ESC_RPASS}\" 2>&1)\n    else\n      _ESC_RPASS=sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n      touch ${_pthLog}/remote-valkey-passwd.log\n    fi\n  fi\n  echo \"${_ESC_RPASS}\" > /root/.valkey.pass.txt\n  chmod 0600 /root/.valkey.pass.txt\n  [ -d \"/data/conf/valkey\" ] || mkdir -p /data/conf/valkey\n  cp -af /root/.valkey.pass.txt /data/conf/valkey/pass.inc\n  chmod 0644 /data/conf/valkey/*\n  touch ${_pthLog}/sec-valkey-pass-${_xSrl}-${_X_VERSION}-${_NOW}.log\n  if [ -e \"/etc/valkey/valkey.conf\" ]; then\n    _FORCE_VALKEY_RESTART=YES\n    sed -i \"s/^requirepass /# requirepass /g\" \\\n      /etc/valkey/valkey.conf &> /dev/null\n    wait\n    sed -i \"s/^user default on.*/user default on >${_ESC_RPASS} 
allcommands allkeys/g\" \\\n      /etc/valkey/valkey.conf &> /dev/null\n    wait\n    chown valkey:valkey /etc/valkey/valkey.conf\n    chmod 0600 /etc/valkey/valkey.conf\n  fi\n  if [ \"${_FORCE_VALKEY_RESTART}\" = \"YES\" ]; then\n    _mrun \"service valkey-server restart\"\n  fi\n}\n#\n# Fix Valkey mode.\n_fix_valkey_mode() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_valkey_mode\"\n  fi\n  mkdir -p /run/valkey\n  chown valkey:valkey /run/valkey\n  if [ \"${_CUSTOM_CONFIG_VALKEY}\" = \"NO\" ] || [ -z \"${_CUSTOM_CONFIG_VALKEY}\" ]; then\n    _VALKEY_LISTEN_MODE=SOCKET\n    if [ \"${_VALKEY_LISTEN_MODE}\" = \"SOCKET\" ]; then\n      if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n        sed -i \"s/redis_client_host/redis_client_socket/g\" /data/conf/global/global-redis.inc &> /dev/null\n        wait\n        sed -i \"s/'host'/'socket'/g\" /data/conf/global/global-redis.inc &> /dev/null\n        wait\n        sed -i \"s/  = '127.0.0.1';/= '\\/var\\/run\\/valkey\\/valkey.sock';/g\" /data/conf/global/global-redis.inc &> /dev/null\n        wait\n      fi\n      if [ -e \"/data/conf/global.inc\" ]; then\n        sed -i \"s/redis_client_host/redis_client_socket/g\" /data/conf/global.inc &> /dev/null\n        wait\n        sed -i \"s/'host'/'socket'/g\" /data/conf/global.inc &> /dev/null\n        wait\n        sed -i \"s/  = '127.0.0.1';/= '\\/var\\/run\\/valkey\\/valkey.sock';/g\" /data/conf/global.inc &> /dev/null\n        wait\n      fi\n      sed -i \"s/^port 0/port 6379/g\" /etc/valkey/valkey.conf &> /dev/null\n      wait\n      sed -i \"s/^# bind 127.0.0.1/bind 127.0.0.1/g\" /etc/valkey/valkey.conf &> /dev/null\n      wait\n      sed -i \"s/^# unixsocket/unixsocket/g\" /etc/valkey/valkey.conf &> /dev/null\n      wait\n    elif [ \"${_VALKEY_LISTEN_MODE}\" = \"PORT\" ] \\\n      || [ \"${_VALKEY_LISTEN_MODE}\" = \"127.0.0.1\" ]; then\n      _DO_NOTHING=YES\n    else\n      _VALKEY_LISTEN_MODE=${_VALKEY_LISTEN_MODE//[^0-9.]/}\n      
if [ ! -z \"${_VALKEY_LISTEN_MODE}\" ]; then\n        _find_correct_ip\n        _LOCAL_VALKEY_PORT_TEST=\"${_LOC_IP}\"\n        if [ \"${_LOCAL_VALKEY_PORT_TEST}\" = \"${_VALKEY_LISTEN_MODE}\" ]; then\n          _VALKEY_HOST=LOCAL\n        else\n          _VALKEY_HOST=REMOTE\n        fi\n        if [[ \"${_VALKEY_LISTEN_MODE}\" =~ (^)\"10.\" ]] \\\n          || [[ \"${_VALKEY_LISTEN_MODE}\" =~ (^)\"192.168.\" ]] \\\n          || [[ \"${_VALKEY_LISTEN_MODE}\" =~ (^)\"172.16.\" ]] \\\n          || [[ \"${_VALKEY_LISTEN_MODE}\" =~ (^)\"127.0.\" ]]; then\n          if [ \"${_VALKEY_HOST}\" = \"LOCAL\" ]; then\n            sed -i \"s/^bind 127.0.0.1/bind ${_VALKEY_LISTEN_MODE}/g\" /etc/valkey/valkey.conf &> /dev/null\n            wait\n            if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n              sed -i \"s/'127.0.0.1'/'${_VALKEY_LISTEN_MODE}'/g\" /data/conf/global/global-redis.inc &> /dev/null\n              wait\n            fi\n            if [ -e \"/data/conf/global.inc\" ]; then\n              sed -i \"s/'127.0.0.1'/'${_VALKEY_LISTEN_MODE}'/g\" /data/conf/global.inc &> /dev/null\n              wait\n            fi\n          else\n            if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n              sed -i \"s/'127.0.0.1'/'${_VALKEY_LISTEN_MODE}'/g\" /data/conf/global/global-redis.inc &> /dev/null\n              wait\n            fi\n            if [ -e \"/data/conf/global.inc\" ]; then\n              sed -i \"s/'127.0.0.1'/'${_VALKEY_LISTEN_MODE}'/g\" /data/conf/global.inc &> /dev/null\n              wait\n            fi\n            _mrun \"service valkey-server stop\"\n            killall -9 valkey-server &> /dev/null\n            rm -rf /var/lib/valkey\n            _mrun \"update-rc.d -f valkey-server remove\"\n            mv -f /etc/init.d/valkey-server /etc/init.d/valkey-server-off &> /dev/null\n            killall -9 valkey-server &> /dev/null\n            rm -f /run/valkey/valkey.pid\n            _msg \"INFO: Remote Valkey IP 
set to ${_VALKEY_LISTEN_MODE}\"\n            _msg \"INFO: Local Valkey instance has been disabled\"\n          fi\n        else\n          if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n            sed -i \"s/'127.0.0.1'/'${_VALKEY_LISTEN_MODE}'/g\" /data/conf/global/global-redis.inc &> /dev/null\n            wait\n          fi\n          if [ -e \"/data/conf/global.inc\" ]; then\n            sed -i \"s/'127.0.0.1'/'${_VALKEY_LISTEN_MODE}'/g\" /data/conf/global.inc &> /dev/null\n            wait\n          fi\n          _mrun \"service valkey-server stop\"\n          killall -9 valkey-server &> /dev/null\n          rm -rf /var/lib/valkey\n          _mrun \"update-rc.d -f valkey-server remove\"\n          mv -f /etc/init.d/valkey-server /etc/init.d/valkey-server-off &> /dev/null\n          killall -9 valkey-server &> /dev/null\n          rm -f /run/valkey/valkey.pid\n          _msg \"INFO: Remote Valkey IP set to ${_VALKEY_LISTEN_MODE}\"\n          _msg \"INFO: Local Valkey instance has been disabled\"\n        fi\n      fi\n    fi\n  fi\n}\n#\n# Set or update Valkey password.\n_valkey_password_update() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _valkey_password_update\"\n  fi\n  if [ -e \"/etc/valkey/valkey.conf\" ]; then\n    _VALKEY_TEST=$(grep \"user default on\" /etc/valkey/valkey.conf 2>&1)\n    if [ -z \"${_VALKEY_TEST}\" ] \\\n      || [ ! -e \"${_pthLog}/sec-valkey-pass-${_xSrl}-${_X_VERSION}-${_NOW}.log\" ]; then\n      if [ ! -e \"/root/.valkey.no.new.password.cnf\" ] \\\n        || [ ! 
-e \"/root/.valkey.pass.txt\" ]; then\n         _forced_valkey_password_update\n      fi\n    fi\n  fi\n  if [ -e \"/root/.valkey.pass.txt\" ] && [ -e \"/etc/valkey/valkey.conf\" ]; then\n    if [ -z \"${_ESC_RPASS}\" ]; then\n      _RPASS=$(cat /root/.valkey.pass.txt 2>/dev/null | tr -d '\\n')\n    else\n      _RPASS=\"${_ESC_RPASS}\"\n    fi\n\n    [ -d \"/data/conf/valkey\" ] || mkdir -p /data/conf/valkey\n    cp -af /root/.valkey.pass.txt /data/conf/valkey/pass.inc\n    chmod 0644 /data/conf/valkey/*\n\n    if [ -e \"/data/conf/global/global-if-valkey.inc\" ]; then\n      _VALKEY_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global/global-if-valkey.inc 2>&1)\n      if [[ \"${_VALKEY_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! -z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global-if-valkey.inc /data/conf/global/global-if-valkey.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global/global-if-valkey.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n    if [ -e \"/data/conf/global/global-valkey.inc\" ]; then\n      _VALKEY_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global/global-valkey.inc 2>&1)\n      if [[ \"${_VALKEY_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! -z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global-valkey.inc /data/conf/global/global-valkey.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global/global-valkey.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n\n    if [ -e \"/data/conf/global/global-if-redis.inc\" ]; then\n      _VALKEY_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global/global-if-redis.inc 2>&1)\n      if [[ \"${_VALKEY_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! 
-z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global-if-redis.inc /data/conf/global/global-if-redis.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global/global-if-redis.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n    if [ -e \"/data/conf/global/global-redis.inc\" ]; then\n      _VALKEY_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global/global-redis.inc 2>&1)\n      if [[ \"${_VALKEY_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! -z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global-redis.inc /data/conf/global/global-redis.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global/global-redis.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n\n    if [ -e \"/data/conf/global.inc\" ]; then\n      _VALKEY_PWD_TEST=$(grep \"'${_RPASS}'\" /data/conf/global.inc 2>&1)\n      if [[ \"${_VALKEY_PWD_TEST}\" =~ \"'${_RPASS}'\" ]]; then\n        _DO_NOTHING=YES\n      else\n        if [ ! -z \"${_RPASS}\" ]; then\n          mkdir -p /data/conf\n          cp -af ${_locCnf}/global/global.inc /data/conf/global.inc\n          sed -i \"s/isfoobared/${_RPASS}/g\" /data/conf/global.inc &> /dev/null\n          wait\n        fi\n      fi\n    fi\n    if [ -e \"${_mtrInc}\" ] \\\n      && [ ! -L \"${_mtrInc}/global.inc\" ] \\\n      && [ -e \"/data/conf/global.inc\" ]; then\n      ln -sfn /data/conf/global.inc ${_mtrInc}/global.inc\n    fi\n    _fix_valkey_mode\n  fi\n}\n#\n# Install Valkey from sources.\n_install_valkey_src() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _install_valkey_src\"\n  fi\n  _msg \"INFO: Installing Valkey ${_VALKEY_VRN}...\"\n  if [ ! 
-e \"/var/lib/valkey\" ]; then\n    _mrun \"adduser --system --group valkey --home /home/valkey\"\n  fi\n  cd /var/opt\n  rm -rf valkey*\n  _get_dev_src \"valkey-${_VALKEY_VRN}.tar.gz\"\n  rm -f /usr/local/bin/valkey*\n  rm -f /usr/bin/valkey*\n  cd valkey-${_VALKEY_VRN}\n  _mrun \"make -j $(nproc) --quiet\"\n  _mrun \"make --quiet PREFIX=/usr install\"\n  cp -af ${_locCnf}/valkey/valkey-server /etc/init.d/valkey-server\n  chmod 755 /etc/init.d/valkey-server &> /dev/null\n  _mrun \"update-rc.d valkey-server defaults\"\n  mkdir -p /run/valkey\n  chown -R valkey:valkey /run/valkey\n  mkdir -p /var/log/valkey\n  chown -R valkey:valkey /var/log/valkey\n  mkdir -p /var/lib/valkey\n  chown -R valkey:valkey /var/lib/valkey\n  rm -f /var/lib/valkey/*\n  mkdir -p /etc/valkey\n  if [ -e \"/etc/valkey/valkey.conf\" ] && [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _if_hosted_sys\n    if [ \"${_CUSTOM_CONFIG_VALKEY}\" = \"NO\" ] \\\n      || [ -z \"${_CUSTOM_CONFIG_VALKEY}\" ] \\\n      || [ \"${_hostedSys}\" = \"YES\" ]; then\n      if [ \"${_CUSTOM_CONFIG_VALKEY}\" = \"YES\" ]; then\n        _DO_NOTHING=YES\n      else\n        if [ \"${_VALKEY_INSTALL_MISMATCH}\" = \"YES\" ] \\\n          || [ ! -e \"${_pthLog}/valkey-${_VALKEY_VRN}-${_xSrl}-${_X_VERSION}.log\" ]; then\n          cp -af ${_locCnf}/valkey/${_valkeyCnfTpl} /etc/valkey/valkey.conf\n        fi\n      fi\n    fi\n  else\n    if [ ! -e \"/etc/valkey/valkey.conf\" ] \\\n      || [ \"${_VALKEY_INSTALL_MISMATCH}\" = \"YES\" ] \\\n      || [ ! 
-e \"${_pthLog}/valkey-${_VALKEY_VRN}-${_xSrl}-${_X_VERSION}.log\" ]; then\n      cp -af ${_locCnf}/valkey/${_valkeyCnfTpl} /etc/valkey/valkey.conf\n    fi\n  fi\n  _valkey_password_update\n  touch ${_pthLog}/valkey-${_VALKEY_VRN}-${_xSrl}-${_X_VERSION}.log\n  _disable_redis\n  _mrun \"service valkey-server restart\"\n}\n\n_valkey_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _valkey_install_upgrade\"\n  fi\n  if [ \"${_CUSTOM_CONFIG_VALKEY}\" = \"NO\" ] \\\n    || [ -z \"${_CUSTOM_CONFIG_VALKEY}\" ]; then\n    if [ \"${_VALKEY_MAJOR_RELEASE}\" != \"9\" ]; then\n      export _VALKEY_MAJOR_RELEASE=9\n      sed -i \"s/^_VALKEY_MAJOR_.*/_VALKEY_MAJOR_RELEASE=9/g\" ${_barCnf}\n    fi\n  fi\n  if [ \"${_VALKEY_MAJOR_RELEASE}\" = \"7\" ]; then\n    _VALKEY_VRN=${_VALKEY_SEVEN_VRN}\n    _valkeyCnfTpl=\"valkey7.conf\"\n  elif [ \"${_VALKEY_MAJOR_RELEASE}\" = \"8\" ]; then\n    _VALKEY_VRN=${_VALKEY_EIGHT_VRN}\n    _valkeyCnfTpl=\"valkey8.conf\"\n  elif [ \"${_VALKEY_MAJOR_RELEASE}\" = \"9\" ]; then\n    _VALKEY_VRN=${_VALKEY_NINE_VRN}\n    _valkeyCnfTpl=\"valkey9.conf\"\n  fi\n  if [ ! 
-e \"/var/lib/valkey\" ]; then\n    _mrun \"adduser --system --group valkey --home /home/valkey\"\n  fi\n  mkdir -p /run/valkey\n  chown -R valkey:valkey /run/valkey\n  mkdir -p /var/log/valkey\n  chown -R valkey:valkey /var/log/valkey\n  mkdir -p /var/lib/valkey\n  chown -R valkey:valkey /var/lib/valkey\n  if [ \"${_STATUS}\" = \"UPGRADE\" ]; then\n    _VALKEY_V_ITD=$(valkey-server -v 2>&1 \\\n      | tr -d \"\\n\" \\\n      | tr -d \"v=\" \\\n      | cut -d\" \" -f2 \\\n      | awk '{ print $1}' 2>&1)\n    if [ \"${_VALKEY_V_ITD}\" = \"serer\" ]; then\n      _VALKEY_V_ITD=$(valkey-server -v 2>&1 \\\n        | tr -d \"\\n\" \\\n        | tr -d \"v=\" \\\n        | cut -d\" \" -f3 \\\n        | awk '{ print $1}' 2>&1)\n    fi\n    if [ \"${_VALKEY_V_ITD}\" = \"${_VALKEY_VRN}\" ]; then\n      _VALKEY_INSTALL_MISMATCH=NO\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Valkey version ${_VALKEY_V_ITD}, OK\"\n      fi\n    else\n      _VALKEY_INSTALL_MISMATCH=YES\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: Installed Valkey version ${_VALKEY_V_ITD}, upgrade required\"\n      fi\n    fi\n  else\n    if [ -x \"/usr/bin/valkey-server\" ]; then\n      _VALKEY_V_ITD=$(valkey-server -v 2>&1 \\\n        | tr -d \"\\n\" \\\n        | tr -d \"v=\" \\\n        | cut -d\" \" -f2 \\\n        | awk '{ print $1}' 2>&1)\n      if [ \"${_VALKEY_V_ITD}\" = \"serer\" ]; then\n        _VALKEY_V_ITD=$(valkey-server -v 2>&1 \\\n          | tr -d \"\\n\" \\\n          | tr -d \"v=\" \\\n          | cut -d\" \" -f3 \\\n          | awk '{ print $1}' 2>&1)\n      fi\n      if [ \"${_VALKEY_V_ITD}\" = \"${_VALKEY_VRN}\" ]; then\n        _VALKEY_INSTALL_MISMATCH=NO\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Valkey version ${_VALKEY_V_ITD}, OK\"\n        fi\n      else\n        _VALKEY_INSTALL_MISMATCH=YES\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Installed Valkey 
version ${_VALKEY_V_ITD}, rebuild required\"\n        fi\n      fi\n    fi\n  fi\n  _VALKEY_TEST=$(grep \"user default on\" /etc/valkey/valkey.conf 2>&1)\n  if [ \"${_VALKEY_INSTALL_MISMATCH}\" = \"YES\" ] \\\n    || [ -z \"${_VALKEY_TEST}\" ] \\\n    || [ ! -d \"/run/valkey\" ] \\\n    || [ ! -x \"/usr/bin/valkey-server\" ] \\\n    || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n    if [ \"${_VALKEY_HOST}\" = \"LOCAL\" ] || [ -z \"${_VALKEY_HOST}\" ]; then\n      _install_valkey_src\n    fi\n  fi\n  _update_valkey_init\n}\n\n"
  },
  {
    "path": "lib/functions/xtra.sh.inc",
    "content": "\n_if_install_bzr() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_bzr\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"BZR\" ]]; then\n    _PATH_BZR=\"/usr/local/bin/bzr\"\n    if [ ! -e \"${_PATH_BZR}\" ] \\\n      || [ ! -e \"${_pthLog}/bzr-${_BZR_VRN}.log\" ] \\\n      || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      _msg \"INFO: Building Bazaar (bzr) ${_BZR_VRN} from sources...\"\n      if [ -e \"/usr/local/lib/python2.6/dist-packages/bzrlib\" ]; then\n        rm -rf /usr/local/lib/python2.6/dist-packages/bzrlib\n      fi\n      if [ -e \"/usr/local/lib/python2.7/dist-packages/bzrlib\" ]; then\n        rm -rf /usr/local/lib/python2.7/dist-packages/bzrlib\n      fi\n      if [ -e \"/usr/local/lib/python3.4/dist-packages/bzrlib\" ]; then\n        rm -rf /usr/local/lib/python3.4/dist-packages/bzrlib\n      fi\n      if [ -e \"/usr/local/lib/python3.5/dist-packages/bzrlib\" ]; then\n        rm -rf /usr/local/lib/python3.5/dist-packages/bzrlib\n      fi\n      if [ -e \"/usr/local/lib/python3.9/dist-packages/bzrlib\" ]; then\n        rm -rf /usr/local/lib/python3.9/dist-packages/bzrlib\n      fi\n      if [ -e \"/usr/local/lib/python3.11/dist-packages/bzrlib\" ]; then\n        rm -rf /usr/local/lib/python3.11/dist-packages/bzrlib\n      fi\n      if [ -e \"/usr/local/lib/python3.12/dist-packages/bzrlib\" ]; then\n        rm -rf /usr/local/lib/python3.12/dist-packages/bzrlib\n      fi\n      cd /var/opt\n      rm -rf bzr*\n      _get_dev_src \"bzr-${_BZR_VRN}.tar.gz\"\n      cd /var/opt/bzr-${_BZR_VRN}\n      _isPythonTwo=\"$(which python2)\"\n      _isPythonThree=\"$(which python3)\"\n      if [ -x \"${_isPythonThree}\" ]; then\n        _usePyth=python3\n      elif [ -x \"${_isPythonTwo}\" ]; then\n        _usePyth=python2\n      fi\n      _mrun \"${_usePyth} setup.py --quiet install build_ext --allow-python-fallback\"\n      _mrun \"make -j $(nproc) --quiet\"\n      touch ${_pthLog}/bzr-${_BZR_VRN}.log\n      mkdir 
-p /root/.bazaar\n      echo ignore_missing_extensions=True > /root/.bazaar/bazaar.conf\n    fi\n  fi\n}\n\n_if_install_adminer() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_adminer\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"ALL\" ]] \\\n    || [[ \"${_XTRAS_LIST}\" =~ \"ADM\" ]] \\\n    || [ -d \"/var/www/chive\" ]; then\n    _ADMINER_VHOST=\"${_mtrNgx}/vhost.d/adminer.${_THIS_FRONT}\"\n    if [ ! -e \"/var/www/adminer/index.php\" ] \\\n      || [ ! -f \"${_ADMINER_VHOST}\" ] \\\n      || [ ! -f \"${_pthLog}/adminer-${_ADMINER_VRN}-sync-new-ip-access.log\" ]; then\n      echo \" \"\n      if _prompt_yes_no \"Do you want to install Adminer Manager?\" ; then\n        true\n        _msg \"INFO: Installing Adminer Manager...\"\n        cd /var/www\n        rm -rf /var/www/adminer &> /dev/null\n        _get_dev_ext \"adminer-${_ADMINER_VRN}.tar.gz\"\n        cd /var/www/adminer\n        mv -f adminer-${_ADMINER_VRN}-mysql.php index.php\n        _validate_public_ip &> /dev/null\n        _validate_xtras_ip &> /dev/null\n        cp -af ${_locCnf}/nginx/nginx_sql_adminer.conf ${_ADMINER_VHOST}\n        sed -i \"s/127.0.0.1:80/${_XTRAS_THISHTIP}:80/g\"               ${_ADMINER_VHOST}\n        wait\n        sed -i \"s/127.0.0.1:443/${_XTRAS_THISHTIP}:443/g\"             ${_ADMINER_VHOST}\n        wait\n        sed -i \"s/adminer_name/adminer.${_THIS_FRONT} ${_THISHTIP}/g\" ${_ADMINER_VHOST}\n        wait\n        touch ${_pthLog}/adminer-${_ADMINER_VRN}-sync-new-ip-access.log\n        _msg \"INFO: Adminer Manager installed\"\n      else\n        _msg \"INFO: Adminer Manager installation skipped\"\n      fi\n    fi\n  fi\n  if [ -d \"/var/www/adminer\" ]; then\n    if [ ! 
-z \"${_PHP_CN}\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: _PHP_CN set to ${_PHP_CN} for Adminer Manager\"\n      fi\n      chown -R ${_PHP_CN}:www-data /var/www/adminer\n    else\n      _msg \"NOTE: _PHP_CN not set for Adminer Manager\"\n      chown -R www-data:www-data /var/www/adminer\n    fi\n    find /var/www/adminer -type d -exec chmod 0755 {} \\; &> /dev/null\n    find /var/www/adminer -type f -exec chmod 0644 {} \\; &> /dev/null\n    chmod 0440 /var/www/adminer/index.php\n  fi\n}\n\n_if_install_chive() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_chive\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"CHV\" ]] || [ -d \"/var/www/chive\" ]; then\n    _CHIVE_VHOST=\"${_mtrNgx}/vhost.d/chive.${_THIS_FRONT}\"\n    if [ ! -d \"/var/www/chive\" ] \\\n      || [ ! -f \"${_CHIVE_VHOST}\" ] \\\n      || [ ! -f \"${_pthLog}/chive-${_CHIVE_VRN}.sync-new-ip-access.log\" ]; then\n      echo \" \"\n      if _prompt_yes_no \"Do you want to install Chive Manager?\" ; then\n        true\n        _msg \"INFO: Installing Chive Manager...\"\n        cd /var/www\n        rm -rf /var/www/chive &> /dev/null\n        _get_dev_arch \"chive_${_CHIVE_VRN}.tar.gz\"\n        _validate_public_ip &> /dev/null\n        _validate_xtras_ip &> /dev/null\n        cp -af ${_locCnf}/nginx/nginx_sql_chive.conf ${_CHIVE_VHOST}\n        sed -i \"s/127.0.0.1:80/${_XTRAS_THISHTIP}:80/g\"    ${_CHIVE_VHOST}\n        wait\n        sed -i \"s/127.0.0.1:443/${_XTRAS_THISHTIP}:443/g\"  ${_CHIVE_VHOST}\n        wait\n        sed -i \"s/chive_name/chive.${_THIS_FRONT}/g\"       ${_CHIVE_VHOST}\n        wait\n        touch ${_pthLog}/chive-${_CHIVE_VRN}.sync-new-ip-access.log\n        _msg \"INFO: Chive Manager installed\"\n      else\n        _msg \"INFO: Chive Manager installation skipped\"\n      fi\n    fi\n  fi\n  if [ -d \"/var/www/chive\" ]; then\n    if [ ! 
-z \"${_PHP_CN}\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: _PHP_CN set to ${_PHP_CN} for Chive Manager\"\n      fi\n      chown -R ${_PHP_CN}:www-data /var/www/chive\n    else\n      _msg \"NOTE: _PHP_CN not set for Chive Manager\"\n      chown -R www-data:www-data /var/www/chive\n    fi\n    find /var/www/chive -type d -exec chmod 0775 {} \\; &> /dev/null\n    find /var/www/chive -type f -exec chmod 0664 {} \\; &> /dev/null\n  fi\n}\n\n_if_install_sqlbuddy() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_sqlbuddy\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"BDD\" ]]; then\n    _SQLBUDDY_VHOST=\"${_mtrNgx}/vhost.d/sqlbuddy.${_THIS_FRONT}\"\n    if [ ! -d \"/var/www/sqlbuddy\" ] \\\n      || [ ! -f \"${_SQLBUDDY_VHOST}\" ] \\\n      || [ ! -f \"${_pthLog}/sqlbuddy.sync-new-ip-access.log\" ]; then\n      echo \" \"\n      if _prompt_yes_no \"Do you want to install SQL Buddy Manager?\" ; then\n        true\n        _msg \"INFO: Installing SQL Buddy Manager...\"\n        rm -rf /var/www/sqlbuddy\n        cd /var/www\n        _get_dev_arch \"sqlbuddy_1_3_3.tar.gz\"\n        _validate_public_ip &> /dev/null\n        _validate_xtras_ip &> /dev/null\n        cp -af ${_locCnf}/nginx/nginx_sql_buddy.conf ${_SQLBUDDY_VHOST}\n        sed -i \"s/127.0.0.1:80/${_XTRAS_THISHTIP}:80/g\"   ${_SQLBUDDY_VHOST}\n        wait\n        sed -i \"s/127.0.0.1:443/${_XTRAS_THISHTIP}:443/g\" ${_SQLBUDDY_VHOST}\n        wait\n        sed -i \"s/buddy_name/sqlbuddy.${_THIS_FRONT}/g\"   ${_SQLBUDDY_VHOST}\n        wait\n        touch ${_pthLog}/sqlbuddy.sync-new-ip-access.log\n        _msg \"INFO: SQL Buddy Manager installed\"\n      else\n        _msg \"INFO: SQL Buddy Manager installation skipped\"\n      fi\n    fi\n  fi\n  if [ -d \"/var/www/sqlbuddy\" ]; then\n    if [ ! 
-z \"${_PHP_CN}\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: _PHP_CN set to ${_PHP_CN} for SQL Buddy Manager\"\n      fi\n      chown -R ${_PHP_CN}:www-data /var/www/sqlbuddy\n    else\n      _msg \"NOTE: _PHP_CN not set for SQL Buddy Manager\"\n      chown -R www-data:www-data /var/www/sqlbuddy\n    fi\n    find /var/www/sqlbuddy -type d -exec chmod 0775 {} \\; &> /dev/null\n    find /var/www/sqlbuddy -type f -exec chmod 0664 {} \\; &> /dev/null\n  fi\n}\n\n_fix_collectd_rrd_syslog_flood() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_collectd_rrd_syslog_flood\"\n  fi\n  _COLLECTD_CNF=\"/etc/collectd/collectd.conf\"\n  if [ -e \"${_COLLECTD_CNF}\" ]; then\n    _COLLECTD_CNF_TEST=$(grep \"rootfs\" ${_COLLECTD_CNF} 2>&1)\n    if [[ \"${_COLLECTD_CNF_TEST}\" =~ \"rootfs\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"<Plugin df>\"                 >> ${_COLLECTD_CNF}\n      echo \"        FSType \\\"rootfs\\\"\"   >> ${_COLLECTD_CNF}\n      echo \"        IgnoreSelected true\" >> ${_COLLECTD_CNF}\n      echo \"</Plugin>\"                   >> ${_COLLECTD_CNF}\n      _mrun \"service collectd restart\"\n    fi\n  fi\n}\n### Credit: http://emacstragic.net/collectd-causing-rrd-illegal-attempt-to-update-using-time-errors/\n\n_fix_collectd_nginx() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _fix_collectd_nginx\"\n  fi\n  _COLLECTD_CNF=\"/etc/collectd/collectd.conf\"\n  if [ -e \"${_COLLECTD_CNF}\" ]; then\n    _COLLECTD_CNF_TEST=$(grep \"^LoadPlugin nginx\" ${_COLLECTD_CNF} 2>&1)\n    if [[ \"${_COLLECTD_CNF_TEST}\" =~ \"LoadPlugin nginx\" ]]; then\n      _DO_NOTHING=YES\n    else\n      echo \"<Plugin nginx>\"                                >> ${_COLLECTD_CNF}\n      echo \"        URL \\\"http://127.0.0.1/nginx_status\\\"\" >> ${_COLLECTD_CNF}\n      echo \"        VerifyPeer false\"                      >> ${_COLLECTD_CNF}\n      echo \"        VerifyHost false\"                    
  >> ${_COLLECTD_CNF}\n      echo \"</Plugin>\"                                     >> ${_COLLECTD_CNF}\n      sed -i \"s/^#LoadPlugin nginx/LoadPlugin nginx/g\"        ${_COLLECTD_CNF}\n      wait\n      _mrun \"service collectd restart\"\n    fi\n  fi\n}\n\n_if_install_collectd() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_collectd\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"CGP\" ]]; then\n    _CGP_VHOST=\"${_mtrNgx}/vhost.d/cgp.${_THIS_FRONT}\"\n    if [ ! -e \"/var/log/boa/cloud_vhost.pid\" ]; then\n      if [ ! -d \"/var/www/cgp\" ] \\\n        || [ ! -f \"${_CGP_VHOST}\" ] \\\n        || [ ! -f \"${_pthLog}/cgp-${_CGP_VRN}.sync-new-ip-access.log\" ]; then\n        echo \" \"\n        if _prompt_yes_no \"Do you want to install Collectd Graph Panel?\" ; then\n          true\n          _msg \"INFO: Installing Collectd Graph Panel...\"\n          for _PKG in collectd; do\n            if ! _pkg_installed \"${_PKG}\"; then\n              _mrun \"${_INSTAPP} ${_PKG}\"\n            fi\n          done\n          rm -rf /var/www/cgp\n          cd /var/www\n          _get_dev_arch \"cgp-${_CGP_VRN}.tar.gz\"\n          if [ -e \"/var/www/cgp-${_CGP_VRN}\" ]; then\n            mv -f cgp-${_CGP_VRN} cgp &> /dev/null\n          fi\n          sed -i \"s/>uncategorized</>Barracuda Server</g\" /var/www/cgp/index.php\n          wait\n          sed -i \"s/'uncategorized'/'Barracuda Server'/g\" /var/www/cgp/index.php\n          wait\n          _validate_public_ip &> /dev/null\n          _validate_xtras_ip &> /dev/null\n          cp -af ${_locCnf}/nginx/nginx_sql_cgp.conf ${_CGP_VHOST}\n          sed -i \"s/127.0.0.1:80/${_XTRAS_THISHTIP}:80/g\"    ${_CGP_VHOST}\n          wait\n          sed -i \"s/127.0.0.1:443/${_XTRAS_THISHTIP}:443/g\"  ${_CGP_VHOST}\n          wait\n          sed -i \"s/cgp_name/cgp.${_THIS_FRONT}/g\"           ${_CGP_VHOST}\n          wait\n          _mrun \"update-rc.d collectd defaults\"\n          touch 
${_pthLog}/cgp-${_CGP_VRN}.sync-new-ip-access.log\n          _msg \"INFO: Collectd Graph Panel installed\"\n        else\n          _msg \"INFO: Collectd Graph Panel installation skipped\"\n        fi\n      fi\n    fi\n  fi\n  if [ -d \"/var/www/cgp\" ] \\\n    && [ \"${_VMFAMILY}\" = \"VS\" ] \\\n    && [ ! -e \"/boot/grub/grub.cfg\" ] \\\n    && [ ! -e \"/boot/grub/menu.lst\" ]; then\n    rm -f ${_mtrNgx}/vhost.d/cgp*\n    _mrun \"apt-get remove collectd -y --purge --auto-remove -qq\"\n    rm -rf /var/www/cgp\n  fi\n  if [ -d \"/var/www/cgp\" ]; then\n    if [ ! -z \"${_PHP_CN}\" ]; then\n      if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n        _msg \"INFO: _PHP_CN set to ${_PHP_CN} for Collectd Graph Panel\"\n      fi\n      chown -R ${_PHP_CN}:www-data /var/www/cgp\n    else\n      _msg \"NOTE: _PHP_CN not set for Collectd Graph Panel\"\n      chown -R www-data:www-data /var/www/cgp\n    fi\n    find /var/www/cgp -type d -exec chmod 0775 {} \\; &> /dev/null\n    find /var/www/cgp -type f -exec chmod 0664 {} \\; &> /dev/null\n    _fix_collectd_rrd_syslog_flood\n    _fix_collectd_nginx\n  fi\n}\n\n_if_install_webmin() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_webmin\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"WMN\" ]]; then\n    if [ ! -d \"/etc/webmin\" ] && [ ! 
-e \"/var/log/boa/cloud_vhost.pid\" ]; then\n      if [ -x \"/usr/bin/gpg2\" ]; then\n        _GPG=gpg2\n      else\n        _GPG=gpg\n      fi\n      echo \" \"\n      if _prompt_yes_no \"Do you want to install Webmin Control Panel?\" ; then\n        true\n        _msg \"INFO: Installing Webmin Control Panel...\"\n        cd /var/opt\n        echo \"## Webmin APT Repository\" > /etc/apt/sources.list.d/webmin.list\n        echo \"deb http://download.webmin.com/download/repository \\\n          sarge contrib\" | fmt -su -w 2500 >> /etc/apt/sources.list.d/webmin.list\n        echo \"deb http://webmin.mirror.somersettechsolutions.co.uk/repository \\\n          sarge contrib\" | fmt -su -w 2500 >> /etc/apt/sources.list.d/webmin.list\n        _KEYS_SERVER_TEST=FALSE\n        until [[ \"${_KEYS_SERVER_TEST}\" =~ \"GnuPG\" ]]; do\n          rm -f jcameron-key.gpg*\n          wget ${_wgetGet} \"${_urlDev}/jcameron-key.gpg\"\n          _KEYS_SERVER_TEST=$(grep GnuPG jcameron-key.gpg 2>&1)\n          sleep 2\n        done\n        cat jcameron-key.gpg | ${_GPG} --import &> /dev/null\n        rm -f jcameron-key.gpg*\n        touch ${_pthLog}/webmin_update_apt_src.log\n        _apt_clean_update\n        for _PKG in webmin libxml-simple-perl libcrypt-ssleay-perl; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n        _mrun \"update-rc.d webmin defaults\"\n        _msg \"INFO: Webmin Control Panel installed\"\n      else\n        _msg \"INFO: Webmin Control Panel installation skipped\"\n      fi\n    fi\n  fi\n}\n\n_if_install_bind() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_bind\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"BND\" ]]; then\n    if [ ! 
-e \"/usr/sbin/named\" ] || [ \"${_FULL_FORCE_REINSTALL}\" = \"YES\" ]; then\n      echo \" \"\n      if _prompt_yes_no \"Do you want to install Bind9 DNS Server?\" ; then\n        true\n        _msg \"INFO: Installing Bind9 DNS Server...\"\n        if [ -z \"${_THISHTIP}\" ]; then\n          _LOC_DOM=\"${_THISHOST}\"\n          _find_correct_ip\n          _THISHTIP=\"${_LOC_IP}\"\n        fi\n        for _PKG in bind9; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n        cp -af /etc/bind/named.conf.options \\\n          ${_vBs}/named.conf.options.pre-${_xSrl}-${_X_VERSION}-${_NOW}\n        cp -af ${_locCnf}/var/named.conf.options /etc/bind/named.conf.options\n        sed -i \"s/127.0.1.1/${_THISHTIP}/g\" /etc/bind/named.conf.options &> /dev/null\n        _mrun \"service bind9 restart\"\n        if [ ! -e \"/etc/init.d/bind\" ]; then\n          ln -sfn /etc/init.d/bind9 /etc/init.d/bind\n        fi\n        sed -i \"s/.*bind.*//g\" /etc/sudoers &> /dev/null\n        wait\n        sed -i \"/^$/d\" /etc/sudoers &> /dev/null\n        wait\n        _msg \"INFO: Bind9 DNS Server installed\"\n      else\n        _msg \"INFO: Bind9 DNS Server installation skipped\"\n      fi\n    fi\n  fi\n}\n\n_if_install_node() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_node\"\n  fi\n  if [ -e \"/root/.allow.node.lshell.cnf\" ]; then\n    if [[ \"${_XTRAS_LIST}\" =~ \"NPM\" ]]; then\n      _NODE_INSTALL=NO\n      _isNode=\"$(which node)\"\n      if [ -x \"${_isNode}\" ]; then\n       _NODE_V=$(${_isNode} --version \\\n         | cut -d\" \" -f2 \\\n          | awk '{ print $1}' 2>&1)\n        if [ \"${_NODE_V}\" != \"${_NODE_VRN}\" ]; then\n          _NODE_INSTALL=YES\n          _L_ST=\"upgrade\"\n        fi\n      else\n        _NODE_INSTALL=YES\n        _L_ST=\"install\"\n      fi\n      [ ! 
-d \"/opt/user/npm\" ] && _NODE_INSTALL=YES\n      [ -d \"/opt/user/npm\" ] && _NODE_INSTALL=NO\n      if [ \"${_NODE_INSTALL}\" = \"YES\" ]; then\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Node ${_NODE_VRN} ${_L_ST}\"\n        fi\n        _nSrc=\"/etc/apt/sources.list.d/nodesource.list\"\n        _nSrg=\"/usr/share/keyrings/nodesource.gpg\"\n        rm -f ${_nSrc}\n        rm -f ${_nSrg}\n        _apt_clean_update\n        if [ \"${_OS_CODE}\" = \"stretch\" ] || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n          _mrun \"curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -\"\n        else\n          _mrun \"curl -fsSL https://deb.nodesource.com/setup_22.x | sudo bash -\"\n        fi\n        if [ -e \"${_nSrc}\" ]; then\n          if grep -q \"nodistro\" \"${_nSrc}\"; then\n            _apt_clean_update\n            for _PKG in nodejs; do\n              if ! _pkg_installed \"${_PKG}\"; then\n                _mrun \"${_INSTAPP} ${_PKG}/nodistro\"\n              fi\n            done\n          fi\n        fi\n      fi\n      [ ! -d \"/opt/user/npm\" ] && mkdir -p /opt/user/npm\n      chown root:root /opt/user/npm\n      chmod 1777 /opt/user/npm\n    fi\n  else\n    _isNode=\"$(which node)\"\n    if [ -x \"${_isNode}\" ]; then\n      _mrun \"apt-get remove nodejs -y --purge --auto-remove -qq\"\n      _mrun \"apt-get remove npm -y --purge --auto-remove -qq\"\n    fi\n  fi\n}\n\n_if_install_ruby_gems() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_ruby_gems\"\n  fi\n  if [ -d \"/opt/user/gems\" ] \\\n    && [ -e \"/usr/local/lib/ruby/gems/3.3.0\" ] \\\n    && [ ! 
-e \"/usr/local/lib/ruby/gems/3.3.0/gems/oily_png-1.1.1\" ] \\\n    && [ -x \"/usr/local/bin/ruby\" ] \\\n    && [ -x \"/usr/local/bin/gem\" ]; then\n    _msg \"INFO: Running Ruby system update...\"\n    _mrun \"gem update --system\"\n    _msg \"INFO: Installing Ruby gems from sources...\"\n    _mrun \"gem install --conservative bundler\"\n    _mrun \"gem install --conservative bluecloth\"\n    _mrun \"gem install --conservative eventmachine\"\n    _mrun \"gem install --conservative ffi\"\n    _mrun \"gem install --version 1.9.3 ffi\"\n    _mrun \"gem install --version 1.9.18 ffi\"\n    _mrun \"gem install --conservative hitimes\"\n    _mrun \"gem install --conservative http_parser.rb\"\n    _mrun \"gem install --conservative oily_png\"\n    _mrun \"gem install --version 1.1.1 oily_png\"\n    _mrun \"gem install --conservative yajl-ruby\"\n  fi\n}\n\n_if_install_ruby() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_ruby\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"CSS\" ]]; then\n    _RUBY_INSTALL_SCR=NO\n    _isRuby=\"$(which ruby)\"\n    if [ -x \"${_isRuby}\" ]; then\n      _RUBY_V=$(${_isRuby} --version \\\n        | cut -d\" \" -f2 \\\n        | awk '{ print $1}' 2>&1)\n      if [ \"${_RUBY_V}\" != \"${_RUBY_VRN}\" ]; then\n        _RUBY_INSTALL_SCR=YES\n        _L_ST=\"upgrade\"\n      fi\n    else\n      _RUBY_INSTALL_SCR=YES\n      _L_ST=\"install\"\n    fi\n    if [ \"${_RUBY_INSTALL_SCR}\" = \"YES\" ] \\\n      || [ ! -d \"/opt/user/gems\" ] \\\n      || [ ! -e \"/usr/local/lib/ruby/gems/3.3.0\" ] \\\n      || [ ! -x \"/usr/local/bin/ruby\" ] \\\n      || [ ! 
-x \"/usr/local/bin/gem\" ]; then\n      if _prompt_yes_no \"Do you want to ${_L_ST} Ruby ${_RUBY_VRN} from sources?\" ; then\n        true\n        _msg \"INFO: Installing Ruby ${_RUBY_VRN} from sources...\"\n        mkdir -p /opt/user/gems\n        chmod 1777 /opt/user/gems\n        touch /run/manage_ruby_users.pid\n        cd /var/opt\n        rm -rf ruby*\n        _get_dev_src \"ruby-${_RUBY_VRN}.tar.gz\"\n        cd /var/opt/ruby-${_RUBY_VRN}\n        _mrun \"bash ./configure\"\n        _mrun \"make -j $(nproc)\"\n        _mrun \"make install\"\n        ldconfig 2> /dev/null\n        _isRuby=\"$(which ruby)\"\n        if [ -x \"${_isRuby}\" ]; then\n          _RUBY_V=$(${_isRuby} --version \\\n            | cut -d\" \" -f2 \\\n            | awk '{ print $1}' 2>&1)\n        fi\n        if [ -x \"/usr/local/bin/gem\" ] && [ \"${_RUBY_V}\" = \"${_RUBY_VRN}\" ]; then\n          _msg \"INFO: Ruby ${_RUBY_VRN} from sources ${_L_ST} completed\"\n          _if_install_ruby_gems\n        else\n          if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n            _msg \"INFO: Ruby ${_RUBY_VRN} from sources ${_L_ST} failed\"\n          fi\n        fi\n      else\n        if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n          _msg \"INFO: Ruby ${_RUBY_VRN} from sources ${_L_ST} skipped\"\n        fi\n      fi\n    fi\n  fi\n  [ -e \"/run/manage_ruby_users.pid\" ] && rm -f /run/manage_ruby_users.pid\n}\n\n_magick_install_upgrade() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _magick_install_upgrade\"\n  fi\n  if [ \"${_MAGICK_FROM_SOURCES}\" = \"YES\" ]; then\n    _install_magick_src\n  fi\n}\n\n_if_install_ffmpeg() {\n  if [ \"${_DEBUG_MODE}\" = \"YES\" ]; then\n    _msg \"PROC: _if_install_ffmpeg\"\n  fi\n  if [[ \"${_XTRAS_LIST}\" =~ \"FMG\" ]]; then\n    if [ ! 
-x \"/usr/bin/ffmpeg\" ]; then\n      echo \" \"\n      if _prompt_yes_no \"Do you want to install FFmpeg?\" ; then\n        true\n        _msg \"INFO: Installing FFmpeg...\"\n        cd /var/opt\n        echo \"## deb-multimedia APT Repository for FFmpeg\" > /etc/apt/sources.list.d/ffmpeg.list\n        if [ \"${_OS_CODE}\" = \"bookworm\" ] \\\n          || [ \"${_OS_CODE}\" = \"bullseye\" ] \\\n          || [ \"${_OS_CODE}\" = \"buster\" ] \\\n          || [ \"${_OS_CODE}\" = \"stretch\" ] \\\n          || [ \"${_OS_CODE}\" = \"jessie\" ]; then\n          echo \"deb http://www.deb-multimedia.org ${_OS_CODE} main non-free\" >> /etc/apt/sources.list.d/ffmpeg.list\n          echo \"deb http://www.deb-multimedia.org ${_OS_CODE}-backports main\" >> /etc/apt/sources.list.d/ffmpeg.list\n        elif [ \"${_OS_CODE}\" = \"excalibur\" ]; then\n          echo \"deb http://www.deb-multimedia.org trixie main non-free\" >> /etc/apt/sources.list.d/ffmpeg.list\n          echo \"deb http://www.deb-multimedia.org trixie-backports main\" >> /etc/apt/sources.list.d/ffmpeg.list\n        elif [ \"${_OS_CODE}\" = \"daedalus\" ]; then\n          echo \"deb http://www.deb-multimedia.org bookworm main non-free\" >> /etc/apt/sources.list.d/ffmpeg.list\n          echo \"deb http://www.deb-multimedia.org bookworm-backports main\" >> /etc/apt/sources.list.d/ffmpeg.list\n        elif [ \"${_OS_CODE}\" = \"chimaera\" ]; then\n          echo \"deb http://www.deb-multimedia.org bullseye main non-free\" >> /etc/apt/sources.list.d/ffmpeg.list\n          echo \"deb http://www.deb-multimedia.org bullseye-backports main\" >> /etc/apt/sources.list.d/ffmpeg.list\n        elif [ \"${_OS_CODE}\" = \"beowulf\" ]; then\n          echo \"deb http://www.deb-multimedia.org buster main non-free\" >> /etc/apt/sources.list.d/ffmpeg.list\n          echo \"deb http://www.deb-multimedia.org buster-backports main\" >> /etc/apt/sources.list.d/ffmpeg.list\n        fi\n        _apt_clean_update\n        _mrun \"apt-get 
install deb-multimedia-keyring ${_aptYesUnth}\"\n        _apt_clean_update\n        for _PKG in ffmpeg; do\n          if ! _pkg_installed \"${_PKG}\"; then\n            _mrun \"${_INSTAPP} ${_PKG}\"\n          fi\n        done\n        _msg \"INFO: FFmpeg installed\"\n      else\n        _msg \"INFO: FFmpeg installation skipped\"\n      fi\n    fi\n  fi\n}\n"
  },
  {
    "path": "lib/settings/barracuda.sh.cnf",
    "content": "\n###----------------------------------------###\n### EDITME                                 ###\n###----------------------------------------###\n###\n### Enter your valid email address below.\n###\n_MY_EMAIL=\"notify@omega8.cc\"\n\n\n###----------------------------------------###\n### EASY SETUP MODE                        ###\n###----------------------------------------###\n###\n### Active only during initial system setup.\n###\n### It will skip all prompts and configure\n### Barracuda with only some options/services\n### enabled, as listed below. Supported\n### options and associated settings:\n###\n###  PUBLIC (default)\n###  LOCAL (experimental for local testing)\n###\n_EASY_SETUP=PUBLIC\n\n###\n### Please enter your FQDN hostname below.\n###\n### It should already point to your server\n### IP address with DNS wildcard configured,\n### so you may need to wait for propagation\n### on the Internet before it will work.\n###\n### See for reference: http://bit.ly/UM2nRb\n###\n### NOTE! You shouldn't use \"mydomain.org\"\n### as your hostname. 
It should be a\n### subdomain, like \"server.mydomain.org\"\n###\n### You *don't* need to configure your server\n### hostname, since Barracuda will do that\n### for you, automatically.\n###\n_EASY_HOSTNAME=\"wildcard-enabled-hostname\"\n\n\n###----------------------------------------###\n### PHP MULTI INSTALL                      ###\n###----------------------------------------###\n###\n### By default BOA installs PHP 8.3, 8.4, 8.5,\n### but this option also allows you to install\n### other, experimental PHP versions and then\n### choose a different version for PHP and\n### PHP-CLI per Ægir Master and per Satellite\n### Instance via the _PHP_FPM_VERSION and\n### _PHP_CLI_VERSION variables.\n###\n### Available options:\n### 8.5, 8.4, 8.3, 8.2, 8.1, 8.0\n### 7.4, 7.3, 7.2, 7.1, 7.0\n### 5.6\n###\n### NOTE: 8.4 is required\n###\n### Example: _PHP_MULTI_INSTALL=\"5.6 7.4 8.4\"\n###\n### Note that removing an already installed\n### version from this list will NOT\n### uninstall anything.\n###\n### Do not confuse this with the other related\n### settings _PHP_FPM_VERSION and\n### _PHP_CLI_VERSION, which define the version\n### to be used by the Master or a Satellite\n### Instance.\n###\n_PHP_MULTI_INSTALL=\"8.3 8.4 8.5\"\n\n\n###----------------------------------------###\n### PHP SINGLE INSTALL                     ###\n###----------------------------------------###\n###\n### Note that this variable, if used, will\n### override all other related variables:\n###\n### _PHP_FPM_VERSION\n### _PHP_CLI_VERSION\n### _PHP_MULTI_INSTALL\n###\n### Available options:\n### 8.5, 8.4, 8.3, 8.2, 8.1, 7.4\n###\n### Example: _PHP_SINGLE_INSTALL=8.5\n###\n_PHP_SINGLE_INSTALL=\n\n\n###----------------------------------------###\n### PHP-FPM VERSION                        ###\n###----------------------------------------###\n###\n### You can choose PHP-FPM version per Ægir\n### Master and Satellite Instance - both on\n### install and upgrade.\n###\n### Available options (if installed):\n### 8.5, 
8.4, 8.3, 8.2, 8.1, 8.0\n### 7.4, 7.3, 7.2, 7.1, 7.0\n### 5.6\n###\n### Note that 8.4 will be set automatically\n### if you specify a version that is not\n### installed.\n###\n_PHP_FPM_VERSION=8.4\n\n\n###----------------------------------------###\n### PHP-CLI VERSION                        ###\n###----------------------------------------###\n###\n### You can choose PHP-CLI version per Ægir\n### Master and Satellite Instance - both on\n### install and upgrade.\n###\n### Available options (if installed):\n### 8.5, 8.4, 8.3, 8.2, 8.1, 8.0\n### 7.4, 7.3, 7.2, 7.1, 7.0\n### 5.6\n###\n### Note that 8.4 will be set automatically\n### if you specify a version that is not\n### installed.\n###\n_PHP_CLI_VERSION=8.4\n\n\n###----------------------------------------###\n### XTRAS INSTALL MODE                     ###\n###----------------------------------------###\n###\n### You can use the \"ALL\" wildcard to install\n### some default xtras, or configure the list\n### as explained below.\n###\n### Note: the \"ALL\" wildcard is not the default!\n###\n### When combined with the _AUTOPILOT=YES\n### option you can speed up the process and\n### still control which xtras will be\n### installed, using the symbols listed below.\n###\n### Xtras included with the \"ALL\" wildcard:\n###\n### ADM --- Adminer DB Manager (installed by default in LOCAL mode)\n### CSF --- Firewall (installed by default in PUBLIC mode)\n### FTP --- Pure-FTPd server with forced FTPS\n### IMG --- Image Optimize binaries\n###\n### Xtras which need to be listed explicitly:\n###\n### BND --- Bind9 DNS Server (deprecated)\n### BZR --- Bazaar\n### CGP --- Collectd Graph Panel\n### CSS --- Ruby Gems for Compass\n### FMG --- FFmpeg support (deprecated)\n### NPM --- NPM for Gulp/Bower (also requires /root/.allow.node.lshell.cnf)\n### SR4 --- Apache Solr 4 with Jetty 9 (not supported on Devuan Excalibur)\n### SR7 --- Apache Solr 7 (not supported on Devuan Excalibur)\n### SR9 --- Apache Solr 9\n### WMN --- Webmin Control Panel 
(deprecated)\n###\n### Examples:\n###\n### _XTRAS_LIST=\"\"\n### _XTRAS_LIST=\"ALL\"\n### _XTRAS_LIST=\"ALL SR9 CSS NPM\"\n###\n_XTRAS_LIST=\"\"\n\n\n###----------------------------------------###\n### NEW RELIC INSTALL                      ###\n###----------------------------------------###\n###\n### Enter your New Relic license key to get\n### it installed and enabled automatically.\n###\n_NEWRELIC_KEY=\"\"\n\n\n###----------------------------------------###\n### AUTOPILOT MODE                         ###\n###----------------------------------------###\n###\n### To disable all Yes/no prompts and just run\n### everything as-is, change it to YES.\n###\n### _AUTOPILOT=YES\n###\n_AUTOPILOT=NO\n\n\n###----------------------------------------###\n### UPGRADE OPTIONS                        ###\n###----------------------------------------###\n###\n### Use YES to upgrade the system only and\n### skip the Ægir Master Instance upgrade.\n###\n_SYSTEM_UP_ONLY=NO\n\n###\n### Use YES to upgrade the Ægir Master\n### Instance only and skip the system upgrade.\n###\n_AEGIR_UPGRADE_ONLY=NO\n\n###\n### You can force Nginx, PHP and/or DB server\n### reinstall, even if there are no updates\n### available, when set to YES.\n###\n### Note that _SSL_FORCE_REINSTALL, when set\n### to YES, will also automatically force\n### _NGX_FORCE_REINSTALL and\n### _PHP_FORCE_REINSTALL.\n###\n_NGX_FORCE_REINSTALL=NO\n_PHP_FORCE_REINSTALL=NO\n_SQL_FORCE_REINSTALL=NO\n_SSL_FORCE_REINSTALL=NO\n\n###\n### Use YES to force installing everything\n### from sources again, even if there are\n### no updates available.\n###\n_FULL_FORCE_REINSTALL=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Jessie to Debian Stretch.\n###\n_JESSIE_TO_STRETCH=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Stretch to Debian Buster.\n###\n_STRETCH_TO_BUSTER=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Buster to Debian Bullseye.\n###\n_BUSTER_TO_BULLSEYE=NO\n\n###\n### Use YES to run 
major system upgrade\n### from Debian Bullseye to Debian Bookworm.\n###\n_BULLSEYE_TO_BOOKWORM=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Bookworm to Debian Trixie.\n###\n_BOOKWORM_TO_TRIXIE=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Jessie to Devuan Beowulf.\n###\n_JESSIE_TO_BEOWULF=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Stretch to Devuan Beowulf.\n###\n_STRETCH_TO_BEOWULF=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Buster to Devuan Beowulf.\n###\n_BUSTER_TO_BEOWULF=NO\n\n###\n### Use YES to run major system upgrade\n### from Devuan Beowulf to Devuan Chimaera.\n###\n_BEOWULF_TO_CHIMAERA=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Bullseye to Devuan Chimaera.\n###\n_BULLSEYE_TO_CHIMAERA=NO\n\n###\n### Use YES to run major system upgrade\n### from Devuan Chimaera to Devuan Daedalus.\n###\n_CHIMAERA_TO_DAEDALUS=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Bookworm to Devuan Daedalus.\n###\n_BOOKWORM_TO_DAEDALUS=NO\n\n###\n### Use YES to run major system upgrade\n### from Devuan Daedalus to Devuan Excalibur.\n###\n_DAEDALUS_TO_EXCALIBUR=NO\n\n###\n### Use YES to run major system upgrade\n### from Debian Trixie to Devuan Excalibur.\n###\n_TRIXIE_TO_EXCALIBUR=NO\n\n\n###----------------------------------------###\n### DRUSH DEBUG MODE                       ###\n###----------------------------------------###\n###\n### When set to YES it will run Ægir Master\n### Instance install/upgrade with -d option,\n### displaying complete Drush backend report.\n###\n### _DEBUG_MODE=YES\n###\n_DEBUG_MODE=NO\n\n\n###----------------------------------------###\n### AEGIR PACKAGES DOWNLOAD MODE           ###\n###----------------------------------------###\n###\n### _DL_MODE=BATCH (default, multi-download)\n### _DL_MODE=GIT   (src from GIT repositories)\n### _DL_MODE=OLD   (legacy, static 
mode)\n###\n_DL_MODE=BATCH\n\n\n###----------------------------------------###\n### DB SERVER                              ###\n###----------------------------------------###\n###\n### Percona or MySQL (from Trixie).\n###\n_DB_SERVER=Percona\n\n\n###----------------------------------------###\n### DB SERIES                              ###\n###----------------------------------------###\n###\n### Supported values:\n###\n### 5.7 (Percona)\n### 8.0 (Percona)\n### 8.4 (Percona)\n### 8.4 (MySQL from Trixie)\n###\n_DB_SERIES=5.7\n\n\n###----------------------------------------###\n### VALKEY LISTEN MODE                      ###\n###----------------------------------------###\n###\n### By default this option is set to SOCKET\n### to improve caching backend performance.\n###\n### If set to PORT (old default) Valkey will\n### listen on standard port and 127.0.0.1 IP.\n###\n### When set to any other IP address, it will\n### switch ALL your Ægir Satellite Instances\n### along with your Ægir Master Instance from\n### local Valkey server to the remote Valkey\n### server you have installed in your network.\n### It will also permanently disable your\n### local Valkey server. 
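\n###\n### For example, to switch to a remote\n### Valkey server (hypothetical IP) on\n### your network:\n###\n###   _VALKEY_LISTEN_MODE=10.10.80.90\n###\n### 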
Make sure to specify\n### correct IP when using this mode and also\n### modify /etc/csf/csf.conf to allow outgoing\n### TCP connections via port 6379.\n###\n_VALKEY_LISTEN_MODE=SOCKET\n\n\n###----------------------------------------###\n### VALKEY MAJOR RELEASE                    ###\n###----------------------------------------###\n###\n### Supported values: 9, 8, 7\n###\n_VALKEY_MAJOR_RELEASE=9\n\n\n###----------------------------------------###\n### REDIS LISTEN MODE                      ###\n###----------------------------------------###\n###\n### By default this option is set to SOCKET\n### to improve caching backend performance.\n###\n### If set to PORT (old default) Redis will\n### listen on standard port and 127.0.0.1 IP.\n###\n### When set to any other IP address, it will\n### switch ALL your Ægir Satellite Instances\n### along with your Ægir Master Instance from\n### local Redis server to the remote Redis\n### server you have installed in your network.\n### It will also permanently disable your\n### local Redis server. 
Make sure to specify\n### correct IP when using this mode and also\n### modify /etc/csf/csf.conf to allow outgoing\n### TCP connections via port 6379.\n###\n_REDIS_LISTEN_MODE=SOCKET\n\n\n###----------------------------------------###\n### REDIS MAJOR RELEASE                    ###\n###----------------------------------------###\n###\n### Supported values: 5, 6 or 7\n###\n_REDIS_MAJOR_RELEASE=7\n\n\n###----------------------------------------###\n### SSH CUSTOM PORT                        ###\n###----------------------------------------###\n###\n### Change this if you want to use non-default\n### port for SSH and SFTP connections.\n###\n### Changing the port will alter also your\n### server firewall (csf) settings, both on\n### install and upgrade, unless you are using\n### _CUSTOM_CONFIG_CSF=YES option.\n###\n_SSH_PORT=22\n\n\n###----------------------------------------###\n### LOCAL DEVUAN or DEBIAN MIRROR          ###\n###----------------------------------------###\n###\n### Modify this if you prefer to use some\n### mirror you know is the best / the fastest\n### in your server location. 
For example:\n###\n### _LOCAL_DEVUAN_MIRROR=http://devuan.keff.org\n### _LOCAL_DEBIAN_MIRROR=http://ftp.au.debian.org\n###\n### Note that your custom mirror address MUST\n### include the protocol it supports, http/https\n###\n### To search for the fastest mirror around\n### the globe, use empty variables:\n###\n### _LOCAL_DEVUAN_MIRROR=\"\"\n### _LOCAL_DEBIAN_MIRROR=\"\"\n###\n### Note that searching around the globe is\n### no longer enabled by default!\n###\n### Note also that it may hang and later cause\n### broken upgrades if some tested mirror\n### responds with an unexpected delay instead\n### of simply responding or not, so it is\n### better to use reliable mirrors you know,\n### or leave the default values.\n###\n_LOCAL_DEVUAN_MIRROR=\"\"\n_LOCAL_DEBIAN_MIRROR=\"\"\n\n\n###----------------------------------------###\n### FORCE PREFERRED GIT REPOSITORY         ###\n###----------------------------------------###\n###\n### Use this when you are experiencing issues\n### trying to connect to the default github\n### repository. Valid options:\n###\n### _FORCE_GIT_MIRROR=github\n### _FORCE_GIT_MIRROR=gitlab\n###\n### Note: with a forced mirror the script will\n### not try to connect and then switch to the\n### alternate mirror. It will simply fail\n### if the forced mirror doesn't respond.\n###\n### We recommend github - it is much faster.\n###\n_FORCE_GIT_MIRROR=\"\"\n\n\n###----------------------------------------###\n### DNS MANUAL CONFIG                      ###\n###----------------------------------------###\n###\n### Starting with release 0.4-alpha9 Ægir\n### requires proper DNS configuration\n### of your server. 
Your hostname has to be\n### a FQDN and has to match your server IP.\n###\n### This script tries to discover your\n### DNS details and allows or denies the\n### install if something doesn't look right.\n###\n### This script will also use your FQDN\n### hostname as the web address of your Ægir\n### frontend (control panel) by default.\n###\n### You may want to change the automatic\n### defaults by setting up your IP address,\n### your FQDN hostname and your Ægir frontend\n### web address below - it is recommended!\n###\n### It may be useful when you are using a\n### local environment with custom settings in\n### your /etc/hosts, when you have more than\n### one public IP on eth0 and you wish to use\n### a non-default (first) IP address, or when\n### you want to use a non-hostname (sub)domain\n### to access your Ægir frontend.\n###\n### It is also useful when you plan to use\n### the optional installs available in\n### Barracuda, since they will use subdomains\n### of the server hostname and it will not\n### work when your hostname domain DNS is not\n### under your control (like many default\n### hostnames and reverse DNS provided by\n### VPS hosting companies).\n###\n### Please change *all 3 values* if you wish\n### to customize this automatic setup.\n###\n### Example:\n###\n### _MY_OWNIP=192.168.0.108\n### _MY_HOSTN=server.mydomain.com\n### _MY_FRONT=aegir.mydomain.com\n###\n### NOTE:\n###\n### If you use custom DNS settings, they will\n### still be validated, therefore make sure\n### your _MY_HOSTN and _MY_FRONT both match\n### your _MY_OWNIP or the script will fail to\n### install Ægir (unless you disable the DNS\n### test completely below in\n### _DNS_SETUP_TEST).\n###\n_MY_OWNIP=\"\"\n_MY_HOSTN=\"\"\n_MY_FRONT=\"\"\n\n\n###----------------------------------------###\n### DNS SETUP TEST                         ###\n###----------------------------------------###\n###\n### If you don't want to test your DNS\n### because of some custom local setup\n### you 
know is correct (like DynDNS)\n### but the script can't validate it with its\n### standard remote tests, set this to:\n###\n### _DNS_SETUP_TEST=NO\n###\n### There is no guarantee it will work.\n###\n_DNS_SETUP_TEST=YES\n\n\n###----------------------------------------###\n### DATABASE DEFAULT HOST                  ###\n###----------------------------------------###\n###\n### If you prefer, you can set the database\n### to be connected via a FQDN pointing to\n### your public IP instead of the default\n### \"localhost\", but it will make it harder\n### to migrate sites with DB grants tied to\n### the system's unique hostname.\n###\n### Note: \"FQDN\" is a keyword. It will be\n### automatically replaced with your system's\n### real hostname when used.\n###\n### NOTE: This distinction is very important,\n### because if you specify your system's\n### local hostname literally, BOA will use\n### the \"Remote DB Server Mode\" instead, as\n### explained further below.\n###\n### For local or Amazon-based installs with a\n### local/dynamic IP address it is recommended\n### to use the default \"localhost\" option.\n###\n### Supported options:\n###\n### _THIS_DB_HOST=localhost\n### _THIS_DB_HOST=FQDN\n###\n_THIS_DB_HOST=localhost\n\n\n###----------------------------------------###\n### REMOTE DB SERVER MODE                  ###\n###----------------------------------------###\n###\n### WARNING !!!\n###\n### THIS IS A **HIGHLY EXPERIMENTAL** FEATURE.\n### EXPECT YOUR SYSTEM TO *EXPLODE* IF USED.\n###\n### Note: We may also refer to the DB server\n###       using the term 'DB head'.\n###\n### If you specify a remote DB server name\n### (not an IP), or even the local hostname\n### written literally instead of via the\n### 'localhost' or 'FQDN' keyword, it will\n### turn on and use the special\n### REMOTE DB SERVER MODE.\n###\n### This mode will work only if the remote DB\n### server has already been configured with\n### the same default dummy mysql root password\n### as the Ægir system you are about 
to\n### install, and if the mysql port 3306 is\n### already open for incoming and outgoing\n### TCP connections on all servers expected\n### to communicate in this mode.\n###\n### HINT: you may want to add the WEB head\n### (Ægir) IP as allowed on the DB head with\n### the standard command:\n### 'csf -a 12.34.56.789 my web head'\n###\n### Note that if the remote DB server is\n### defined as a hostname with a valid DNS\n### entry, BOA will never change the mysql\n### root password and on initial install will\n### use this dummy password:\n###\n###   sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n###\n### The same password will be used for the\n### special Ægir DB user 'aegir_root'.\n###\n### Furthermore, BOA will not create separate\n### special DB users per Octopus instance\n### and all instances will use the same\n### 'aegir_root' DB user to manage databases\n### for hosted sites.\n###\n### You can later change this password for\n### both the mysql 'root' and 'aegir_root'\n### users on the remote DB server, but you\n### will have to manually update them in the\n### db server node settings of every Ægir\n### instance (Master and Satellites).\n###\n### However, BOA will always check and use\n### the password it finds in the special\n### file /root/.my.pass.txt during Master\n### and Satellite instance upgrades.\n###\n### If you change the mysql root password,\n### you **must** also update it in two files:\n###\n###   /root/.my.pass.txt\n###   /root/.my.cnf\n###\n### When installing BOA on a machine expected\n### to work as a remote DB server, you should\n### create a special file containing your\n### WEB head (Ægir) hostname:\n###\n###  /root/.remote.web.head.txt\n###\n### BOA will use this file to add the\n### required DB GRANT for 'aegir_root' on the\n### DB head, so the WEB head (Ægir) hosted on\n### the machine with the hostname specified\n### in this file will be able to manage\n### databases remotely. 
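\n###\n### For example, on the DB head, using a\n### hypothetical WEB head hostname:\n###\n###   echo \"web1.mydomain.com\" > /root/.remote.web.head.txt\n###\n### 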
Make sure that this file\n### is present only on the DB head, though.\n###\n### WARNING !!!\n###\n### Make sure that there is no remote, public,\n### unrestricted access via port 3306 to any\n### DB server used in this special mode!\n###\n### Available non-default option:\n###\n### _THIS_DB_HOST=remote.db.server.name\n###\n_THIS_DB_HOST=localhost\n\n\n###----------------------------------------###\n### SMTP CONNECTION TEST                   ###\n###----------------------------------------###\n###\n### If you don't want to test outgoing SMTP\n### connections, change _SMTP_RELAY_TEST\n### value to \"NO\".\n###\n_SMTP_RELAY_TEST=YES\n\n\n###----------------------------------------###\n### SMTP RELAY HOST - ONLY WHEN REQUIRED   ###\n###----------------------------------------###\n###\n### _SMTP_RELAY_HOST=\"smtp.your.relay.net\"\n###\n_SMTP_RELAY_HOST=\"\"\n\n\n###----------------------------------------###\n### LOCALHOST ON LOCAL NETWORK             ###\n###----------------------------------------###\n###\n### When enabled, it will force your Nginx web\n### server to listen only on local IP with\n### local, non-FQDN hostname, for example:\n###\n### _LOCAL_NETWORK_IP=10.10.80.80\n### _LOCAL_NETWORK_HN=aegir.local\n###\n### In this example \"aegir.local\" will be used\n### as your hostname and all connections and\n### grants will use \"aegir.local\" instead of\n### FQDN. Also your Ægir Master Instance\n### will be available at http://aegir.local\n###\n### This option is useful when your server is\n### available only in your local network\n### without any public IP address assigned/\n### available on any eth interface. 
This also\n### means it should work as a handy local\n### setup on your wirelessly connected laptop\n### or netbook, with improved DNS cache\n### support - unbound will save results in a\n### file.\n###\n### In this case you could use 127.0.1.1 as\n### _LOCAL_NETWORK_IP to avoid issues with\n### DHCP changing your assigned IP on the fly\n### and to avoid conflicts with some other\n### services listening on 127.0.0.1\n###\n### This option works only with initial\n### install and is disabled on upgrade.\n###\n### When used, it will override all of the\n### _MY_OWNIP, _MY_HOSTN and _MY_FRONT\n### values defined above.\n###\n### It will also disable all DNS and SMTP\n### relay tests automatically.\n###\n_LOCAL_NETWORK_IP=\"\"\n_LOCAL_NETWORK_HN=\"\"\n\n\n###----------------------------------------###\n### ADVANCED CONFIGURATION OPTIONS         ###\n###----------------------------------------###\n\n###\n### Change to NO if you prefer the daily\n### script to not fix permissions on all\n### files belonging to hosted sites and their\n### platforms (and any shared code).\n###\n### Note that this feature is enabled again\n### by default, because with its current low\n### CPU and I/O priority it should not cause\n### any issues, even on weak systems.\n###\n_PERMISSIONS_FIX=YES\n\n###\n### Change to NO if you prefer to skip the\n### weekly script which enables/disables\n### modules in all hosted sites without\n### '.testing.' or '.temporary.' keywords in\n### their main name, as listed in\n### docs/MODULES.md\n###\n### This option is now smart enough to check\n### if a module is defined as required by any\n### other module or feature and will skip\n### such a module automatically, to avoid\n### disabling innocent modules via a feature\n### or any other dependency.\n###\n_MODULES_FIX=YES\n\n###\n### List modules to never disable via the\n### daily script which enables/disables\n### modules in all hosted sites without 'dev.' 
in their\n### main domain name, as listed in the\n### docs/MODULES.md\n###\n### Requires: _MODULES_FIX=YES\n###\n### Example: _MODULES_SKIP=\"dblog update\"\n###\n### While any module detected as required\n### will not be disabled anyway, this option\n### allows you to whitelist modules which\n### otherwise would get disabled.\n###\n_MODULES_SKIP=\"\"\n\n###\n### Control if boost caches should be deleted\n### by BOA. By default they are cleared daily.\n### Change to NO if you prefer to never clear\n### the boost cache files.\n###\n### If you choose NO make sure that boost cron\n### and/or expire module are removing the boost\n### caches when needed.\n###\n_CLEAR_BOOST=\"YES\"\n\n\n###\n### Use YES only if both \"randpass 64 esc\"\n### and \"randpass 64 alnum\" commands produce\n### well looking, strong passwords and not\n### some binary garbage, which is typically\n### a sign of unreliable /dev/urandom.\n### See: https://drupal.org/node/1952042\n###\n### Configurable length: 32-128 or automatic\n### with keywords: YES (64), NO (32).\n###\n_STRONG_PASSWORDS=YES\n\n###\n### Extra packages to install. Useful to\n### specify packages not included by default.\n###\n_EXTRA_PACKAGES=\"\"\n\n###\n### Use YES to avoid overwriting configuration\n### for listed services on upgrade.\n###\n_CUSTOM_CONFIG_CSF=NO\n_CUSTOM_CONFIG_LSHELL=NO\n_CUSTOM_CONFIG_VALKEY=NO\n_CUSTOM_CONFIG_REDIS=NO\n_CUSTOM_CONFIG_SQL=NO\n\n###\n### You can define custom list of functions\n### to disable besides those already denied\n### in the system level 'disable_functions'.\n###\n### Note: If this option is left empty, BOA\n### will deny access also to function:\n###\n###   passthru\n###\n### If _PHP_FPM_DENY is *not* empty, its value\n### will *replace* default 'passthru', so any\n### denied function must be listed explicitly.\n###\n### WARNING! Do not add here 'shell_exec'\n### or you will break cron for all sites\n### including all hosted on all Satellite\n### Instances. 
The 'shell_exec' function is\n### also required by Collectd Graph Panel,\n### if installed.\n###\n### This option affects only the Ægir Master\n### Instance plus all scripts running outside\n### of Octopus Satellite Instances.\n###\n### Example:\n###\n### _PHP_FPM_DENY=\"passthru,popen,system\"\n###\n### Note that while it will improve security,\n### it will also break modules which rely\n### on any of the disabled functions.\n###\n_PHP_FPM_DENY=\"\"\n\n###\n### We highly recommend enabling this option\n### to improve system security when certain\n### PHP functions, especially exec, passthru,\n### shell_exec, system, proc_open and popen,\n### are not disabled via the _PHP_FPM_DENY\n### option above.\n###\n### WARNING! This option is very aggressive\n### and can break any extra service or binary\n### you have installed which BOA doesn't\n### manage, if the binary has its system\n### group set to 'root'. BOA will not touch\n### any binary which has a non-root group or\n### has setgid or setuid permissions.\n###\n_STRICT_BIN_PERMISSIONS=YES\n\n###\n### Define the amount of RAM you want to keep\n### reserved for other installed services\n### which are not controlled by BOA, so it\n### will assume that available RAM is\n### ${_RAM} - ${_RESERVED_RAM} (in MB).\n###\n### Example for 256MB: _RESERVED_RAM=256\n###\n### If not specified, BOA will auto-reserve\n### 1/4 of RAM to make sure that the system\n### doesn't experience frequent OOM incidents\n### when running heavy tasks like backups.\n###\n_RESERVED_RAM=0\n\n###\n### CiviCRM 4.2 and newer requires more SQL\n### privileges (SUPER, which can't be granted\n### for obvious security reasons) or binary\n### logging disabled. Otherwise almost all\n### Ægir tasks against any site with the\n### CiviCRM system active will fail, so we\n### disable binary logging by default. 
It will also\n### improve system performance on servers\n### with slower/lower disks I/O.\n###\n### You can still enable it, if you prefer,\n### by changing it to _DB_BINARY_LOG=YES\n### below or in the /root/.barracuda.cnf file,\n### but you must first stop mysql service\n### and delete or move away all existing files\n### from the /var/log/mysql/ directory, so it\n### will start fresh logs after it has been\n### disabled during previous system upgrade.\n###\n### Note: this option is ignored if the option\n### _CUSTOM_CONFIG_SQL is set to YES.\n###\n_DB_BINARY_LOG=NO\n\n###\n### Use MySQLTuner to configure SQL limits.\n### Can be enabled when set to YES, but may cause\n### very high load spikes or forced SQL restarts\n### on systems with not enough RAM and CPU power\n### and hundreds of sites hosted.\n###\n_USE_MYSQLTUNER=NO\n\n###\n### Set max 1 min load per CPU core before killing\n### all running PHP, Drush, Wget and Curl processes\n### until the load stabilizes.\n###\n### This shouldn't affect any innocent Ægir tasks,\n### including cron for sites, because the system\n### never starts them anyway, if the average load\n### in the last minute is higher than 3.1\n###\n_CPU_CRIT_RATIO=6.1\n\n###\n### Set max 1 min load per CPU core before disabling\n### Nginx temporarily, until the load stabilizes.\n###\n_CPU_MAX_RATIO=4.1\n\n###\n### Define max 1 min load per CPU core before\n### launching task queue for Octopus instances.\n###\n_CPU_TASK_RATIO=3.1\n\n###\n### Set max 1 min load per CPU core before blocking\n### spiders temporarily, until the load stabilizes.\n###\n_CPU_SPIDER_RATIO=2.1\n\n###\n### Default number of access.log lines to check.\n###\n_NGINX_DOS_LINES=1999\n\n###\n### Set max allowed page views from one IP\n### out of last 1999. 
Note that it will lock\n### access completely for 1 hour at the\n### firewall level in /etc/csf/csf.deny\n###\n_NGINX_DOS_LIMIT=399\n\n###\n### Default Nginx DoS-guard mode, which\n### automatically increases the counter by +1\n### for every POST request.\n### When changed to 1, it will increase the\n### counter:\n###\n###  +5 for POST request to /user and /node/add\n###  +3 for GET request to /node/add\n###\n_NGINX_DOS_MODE=2\n\n###\n### Logging mode, can be SILENT, NORMAL or VERBOSE\n###\n_NGINX_DOS_LOG=SILENT\n\n###\n### Makes the request ignored when the keyword\n### is detected in the access.log -- can use regex\n### like _NGINX_DOS_IGNORE=\"foo|bar|etc\"\n###\n_NGINX_DOS_IGNORE=\"doccomment\"\n\n###\n### Increases the counter by +5 when the keyword\n### is detected in the access.log -- can use regex\n### like _NGINX_DOS_STOP=\"foo|bar|etc\"\n###\n_NGINX_DOS_STOP=\"WAITFOR.DELAY|DECLARE.*@x|/\\*\\*/|%27.*%29.*%3B|0x[0-9a-f]{6}\"\n\n###\n### Nginx Headers More support is available\n### via a third-party Nginx module. To enable\n### it, change this option to _NGINX_HEADERS=YES\n### below or in the /root/.barracuda.cnf file.\n###\n_NGINX_HEADERS=NO\n\n###\n### Experimental LDAP support is available\n### via a third-party Nginx module. To enable\n### it, change this option to _NGINX_LDAP=YES\n### below or in the /root/.barracuda.cnf file.\n###\n_NGINX_LDAP=NO\n\n###\n### NAXSI means Nginx Anti XSS & SQL Injection\n### and is a third-party Nginx module not used\n### by default. 
If you want to test / use it,\n### change this option to _NGINX_NAXSI=YES\n### below or in the /root/.barracuda.cnf file.\n###\n_NGINX_NAXSI=NO\n\n###\n### When set to YES, it will also force\n### OpenSSL (packages) and cURL (sources)\n### upgrade / re-install.\n###\n_NGINX_SPDY=YES\n\n###\n### When set to YES, it will also force\n### OpenSSL (packages) and cURL (sources)\n### upgrade / re-install.\n###\n_NGINX_FORWARD_SECRECY=YES\n\n###\n### Use this only when you need to always\n### compile in some extra/custom module(s).\n###\n_NGINX_EXTRA_CONF=\"\"\n\n###\n### Use this only when you need to always\n### compile in some extra PHP extension.\n###\n_PHP_EXTRA_CONF=\"\"\n\n###\n### Change to YES to enable ionCube.\n###\n_PHP_IONCUBE=NO\n\n###\n### Change to YES to always compile in the\n### MongoDB driver.\n### mongo.so for PHP < 7.0\n### mongodb.so for PHP 7.0\n###\n_PHP_MONGODB=NO\n\n###\n### Change to YES to compile the GEOS extension.\n### See issue: https://drupal.org/node/1913488\n###\n_PHP_GEOS=NO\n\n###\n### When set to YES it will force OpenSSH\n### re-install from sources (Debian only)\n###\n_SSH_FROM_SOURCES=YES\n\n###\n### When set to YES it will force the armoured\n### OpenSSH configuration, if the option\n### _SSH_FROM_SOURCES is set to YES\n###\n_SSH_ARMOUR=NO\n\n###\n### Generate statistics with GoAccess when YES\n###\n_ENABLE_GOACCESS=NO\n\n###\n### When set to YES it will force ImageMagick\n### re-install from sources.\n### Required for webp support.\n###\n_MAGICK_FROM_SOURCES=NO\n\n###\n### AUTO will default to values calculated\n### on the fly and based on available RAM.\n### You can force the number of workers for\n### Nginx and PHP-FPM here.\n###\n_NGINX_WORKERS=AUTO\n_PHP_FPM_WORKERS=AUTO\n\n###\n### Max default TTL for Speed Booster Cache.\n### It will affect all Ægir Instances, but\n### it is used only for spiders and with the\n### per-site or per-platform control file\n### modules/cache_hour/YES.txt, which is not\n### enabled by default.\n###\n_SPEED_VALID_MAX=3600\n\n###\n### 
Can be completely silenced by setting to OFF or NO\n###\n### When set to ALL it will start sending\n### many e-mail reports on various incidents\n### detected and handled by auto-healing.\n###\n### Minimalized globally with MINI by default.\n###\n### Legacy values:\n###   NO  becomes OFF (see below)\n###   YES becomes MINI (see below)\n###\n### Current values:\n###   OFF  == Total silence, no email alerts\n###   ALL  == Very noisy, good for debugging\n###   MINI == Only the most important alerts (default)\n###   CRIT == Only critical if _lvl=ALERT\n###\n_INCIDENT_REPORT=MINI\n\n###\n### Max TTL for mysql process per user (seconds)\n###\n### It will be automatically lowered to 300 if\n### /root/.high_load.cnf file exists while empty\n### /root/.big_db.cnf file is not present.\n###\n### If both empty /root/.high_load.cnf and empty\n### /root/.big_db.cnf are not present, it will be\n### automatically lowered to 1800.\n###\n### It will remain set to 3600 if empty file\n### /root/.big_db.cnf is present.\n###\n_SQL_MAX_TTL=3600\n\n###\n### Max TTL for mysql process per problematic user\n### listed in the /root/.sql.problematic.users.cnf\n### file -- one per line.\n###\n_SQL_LOW_MAX_TTL=60\n\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\n\n_AEGIR_VERSION=AUTO\n_AEGIR_XTS_VRN=AUTO\n_BRANCH_BOA=AUTO\n_BRANCH_PRN=AUTO\n_X_VERSION=AUTO\n_BOA_REPO_NAME=\"boa\"\n_BOA_REPO_GIT_URL=\"${_gitHub}\"\n\nexport _tRee=dev\nexport _xSrl=591devT01\n\n\n###\n### Commands shortcuts\n###\n_aptAllow=\"--allow-unauthenticated\"\n_dstUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.dist\"\n_nrmUpArg=\"-fuy -q ${_aptAllow} --config-file /opt/apt/apt.conf.noi.nrml\"\n_INITINS=\"/usr/bin/apt-get ${_aptAllow} -y install\"\n_INSTALL_DIST=\"/usr/bin/apt-get ${_dstUpArg} install\"\n_INSTALL_NRML=\"/usr/bin/apt-get ${_nrmUpArg} install\"\n_aptYesUnth=\"-y ${_aptAllow}\"\nif [ ! 
-e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n  _INSTAPP=\"${_INSTALL_DIST}\"\nfi\n\n\n###\n### Determine correct _APT_UPDATE\n###\n_os_detection_minimal() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _os_detection_minimal in barracuda\"\n  fi\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n\n###\n### Apt cleanup\n###\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n\n###\n### Find the fastest mirror to use for downloads\n###\n_find_fast_mirror_cnf() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _find_fast_mirror_cnf in barracuda\"\n  fi\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! 
-e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        export _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && export _USE_MIR=\"files.aegir.cc\"\n      else\n        export _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      export _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    export _USE_MIR=\"files.aegir.cc\"\n  fi\n  export _urlDev=\"http://${_USE_MIR}/dev\"\n  export _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n\n###\n### Function to verify BOA keys\n###\n_verify_boa_keys() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _verify_boa_keys in barracuda\"\n  fi\n  if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n    _allw=NO\n    _urlEnc=\"http://${_USE_MIR}/enc/2024\"\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    _encName=$(echo ${_hName} \\\n      | openssl md5 \\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".o8.io\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".boa.io\"($) ]]; then\n      _allw=YES\n    fi\n    mkdir -p /var/opt\n    rm -f /var/opt/_encN*\n    curl ${_crlGet} \"${_urlEnc}/${_encName}\" -o /var/opt/_encN.${_encName}.tmp\n    wait\n    echo \"${_hName}.${_encName}\" > /var/opt/_encN_local.${_encName}.tmp\n    wait\n    if [ -e \"/var/opt/_encN.${_encName}.tmp\" ] && [ -e \"/var/opt/_encN_local.${_encName}.tmp\" ]; then\n      _diffTestIf=$(diff -w -B /var/opt/_encN.${_encName}.tmp 
/var/opt/_encN_local.${_encName}.tmp 2>&1)\n      if [ ! -z \"${_diffTestIf}\" ] && [ \"${_allw}\" = \"NO\" ]; then\n        echo\n        echo \"Your system requires a valid license for upgrade to ${_rLsn}-${_tRee}\"\n        echo \"Please visit https://omega8.cc/licenses to purchase your own\"\n        echo\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n        rm -f /var/opt/_encN*\n        _clean_pid_exit _verify_boa_keys_a\n      else\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n      fi\n    else\n      echo\n      echo \"Your system requires a valid license to use this BOA version (${_tRee})\"\n      echo \"Unfortunately it was not possible to verify your system status\"\n      echo \"Please contact our support, but visit https://omega8.cc/licenses first\"\n      echo\n      exit 0\n    fi\n  fi\n}\n\n\n###\n### Extract archive\n###\n_extract_archive_pre() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _extract_archive_pre in barracuda\"\n  fi\n  if [ ! 
-z \"$1\" ]; then\n    case $1 in\n      *.tar.bz2)   tar xjf $1    ;;\n      *.tar.gz)    tar xzf $1    ;;\n      *.tar.xz)    tar xJf $1    ;;\n      *.bz2)       bunzip2 $1    ;;\n      *.rar)       unrar x $1    ;;\n      *.gz)        gunzip -q $1  ;;\n      *.tar)       tar xf $1     ;;\n      *.tbz2)      tar xjf $1    ;;\n      *.tgz)       tar xzf $1    ;;\n      *.zip)       unzip -qq $1  ;;\n      *.Z)         uncompress $1 ;;\n      *.7z)        7z x $1       ;;\n      *)           echo \"'$1' cannot be extracted via >extract<\" ;;\n    esac\n    rm -f $1\n  fi\n}\n\n\n###\n### Download and extract archive from dev/src mirror\n###\n_get_dev_src_pre() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _get_dev_src_pre in barracuda\"\n  fi\n  if [ ! -z \"$1\" ]; then\n    curl ${_crlGet} \"${_urlDev}/src/$1\" -o \"$1\"\n    if [ -e \"$1\" ]; then\n      _extract_archive_pre \"$1\"\n    else\n      echo \"OOPS: $1 failed download from ${_urlDev}/src/$1\"\n    fi\n  fi\n}\n\n\n#\n# Extract archive.\n_extract_archive() {\n  if [ ! -z \"$1\" ]; then\n    case $1 in\n      *.tar.bz2)   tar xjf $1    ;;\n      *.tar.gz)    tar xzf $1    ;;\n      *.tar.xz)    tar xJf $1    ;;\n      *.bz2)       bunzip2 $1    ;;\n      *.rar)       unrar x $1    ;;\n      *.gz)        gunzip -q $1  ;;\n      *.tar)       tar xf $1     ;;\n      *.tbz2)      tar xjf $1    ;;\n      *.tgz)       tar xzf $1    ;;\n      *.zip)       unzip -qq $1  ;;\n      *.Z)         uncompress $1 ;;\n      *.7z)        7z x $1       ;;\n      *)           echo \"'$1' cannot be extracted via >extract<\" ;;\n    esac\n    rm -f $1\n  fi\n}\n\n\n#\n# Download and extract from dev/version mirror.\n_get_dev_ext() {\n  if [ ! 
-z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/${_tRee}/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      echo \"OOPS: Failed to download ${_urlDev}/${_tRee}/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n\n###\n### Cleanup for boa code\n###\n_cleanup_boa_code() {\n  mkdir -p /opt/tmp\n  cd /opt/tmp\n  [ -e \"/opt/tmp/boa.tar.gz\" ] && rm -f /opt/tmp/boa.tar.gz\n  if [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    rm -rf /opt/tmp/boa\n  fi\n}\n\n\n###\n### Download boa code\n###\n_download_boa_code() {\n  if [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! 
-e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    rm -rf /opt/tmp/boa\n    mkdir -p /opt/tmp\n    cd /opt/tmp\n    _get_dev_ext \"boa.tar.gz\"\n  fi\n  if [ -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    && [ -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    [ -e \"/opt/tmp/boa.tar.gz\" ] && rm -f /opt/tmp/boa.tar.gz\n  fi\n}\n\n\n###\n### Download helpers and libs\n###\n_download_helpers_libs() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _download_helpers_libs in barracuda\"\n  fi\n  _find_fast_mirror_cnf\n  _verify_boa_keys\n  if [ \"${_DL_MODE}\" = \"BATCH\" ]; then\n    _download_boa_code\n  elif [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    _isGit=\"$(which git)\"\n    if [ -z \"${_isGit}\" ] || [ ! -x \"${_isGit}\" ]; then\n      _apt_clean_update\n      echo \"git install\"      | dpkg --set-selections &> /dev/null\n      echo \"git-core install\" | dpkg --set-selections &> /dev/null\n      echo \"git-man install\"  | dpkg --set-selections &> /dev/null\n      apt-get install git-core ${_aptYesUnth} -qq &> /dev/null\n      apt-get install git ${_aptYesUnth} -qq &> /dev/null\n      apt-get install git-man ${_aptYesUnth} -qq &> /dev/null\n      _isGit=\"$(which git)\"\n    fi\n    _gCb=\"${_isGit} clone --branch\"\n    if [ -x \"${_isGit}\" ]; then\n      _cleanup_boa_code\n      ${_gCb} ${_BRANCH_BOA} ${_BOA_REPO_GIT_URL}/${_BOA_REPO_NAME}.git ${_bldPth} &> /dev/null\n    else\n      echo \"ERROR: ${_isGit} is missing or broken, exit now\"\n      echo \"Bye\"\n      _clean_pid_exit _download_helpers_libs_a\n    fi\n  else\n    _download_boa_code\n  fi\n  #\n  if [ ! -e \"${_bldPth}/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! 
-e \"${_bldPth}/aegir/helpers/apt.conf.noi.dist\" ]; then\n    echo \" \"\n    echo \"EXIT on error due to missing helpers\"\n    echo \"Please try to run this script again in a few minutes\"\n    echo \"Also, make sure that the outgoing connections via port 443 work\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _download_helpers_libs_b\n  fi\n  # Get apt helpers\n  rm -f apt.conf.noi*\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.nrml ./\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.dist ./\n  mkdir -p /opt/apt/\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.nrml /opt/apt/\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.dist /opt/apt/\n  #\n  # Create tmp stuff\n  if [ ! -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ]; then\n    export _LOG_INFO=${_vBs}/barracuda-install-${_NOW}.log\n    export _LOG_ERRR=${_vBs}/barracuda-install-errors-${_NOW}.log\n  else\n    export _LOG_INFO=${_vBs}/barracuda-upgrade-${_NOW}.log\n    export _LOG_ERRR=${_vBs}/barracuda-upgrade-errors-${_NOW}.log\n  fi\n  touch ${_LOG_INFO} ${_LOG_ERRR}\n  chmod 600 ${_LOG_INFO} ${_LOG_ERRR}\n  mkdir -p /var/opt\n  rm -rf /var/opt/*\n  _SRCDIR=\"/opt/tmp/files\"\n  mkdir -p ${_SRCDIR}\n  cd ${_SRCDIR}\n}\n"
  },
  {
    "path": "lib/settings/octopus.sh.cnf",
    "content": "\n###----------------------------------------###\n### PHP-FPM VERSION                        ###\n###----------------------------------------###\n###\n### You can choose PHP-FPM version per Ægir\n### Satellite Instance - both on install and\n### upgrade.\n###\n### Available options (if installed):\n### 8.5, 8.4, 8.3, 8.2, 8.1, 8.0\n### 7.4, 7.3, 7.2, 7.1, 7.0\n### 5.6\n###\n### Note that 8.4 will be set automatically\n### if you specify any other, not installed\n### version.\n###\n_PHP_FPM_VERSION=8.4\n\n\n###----------------------------------------###\n### PHP-CLI VERSION                        ###\n###----------------------------------------###\n###\n### You can choose PHP-CLI version per Ægir\n### Satellite Instance - both on install and\n### upgrade.\n###\n### Available options (if installed):\n### 8.5, 8.4, 8.3, 8.2, 8.1, 8.0\n### 7.4, 7.3, 7.2, 7.1, 7.0\n### 5.6\n###\n### Note that 8.4 will be set automatically\n### if you specify any other, not installed\n### version.\n###\n_PHP_CLI_VERSION=8.4\n\n\n###----------------------------------------###\n### PLATFORMS INSTALL MODE                 ###\n###----------------------------------------###\n###\n### You can use wildcard \"ALL\" to install all\n### available platforms or configure the list\n### of platforms to be installed as explained\n### below.\n###\n### Note: the \"ALL\" wildcard is not default!\n###\n### When combined with _AUTOPILOT=YES option\n### you can speed up the process and still\n### control which platforms will be installed,\n### using the symbols listed below.\n###\n\n### Drupal 11.3\n#\n# DE3 — Drupal 11.3 prod/stage/dev\n# CK3 — Commerce v.3\n# CMS — Drupal CMS\n# SCR — Sector\n# THR — Thunder\n# VBX — Varbase 10\n\n### Drupal 11.2\n#\n# DE2 — Drupal 11.2 prod/stage/dev\n\n### Drupal 11.1\n#\n# DE1 — Drupal 11.1 prod/stage/dev\n\n### Drupal 10.6\n#\n# DX6 — Drupal 10.6 prod/stage/dev\n# FOS — farmOS\n# LGV — LocalGov\n# VB9 — Varbase 9\n\n### Drupal 10.5\n#\n# DX5 — Drupal 
10.5 prod/stage/dev\n# OCS — OpenCulturas\n\n### Drupal 10.4\n#\n# DX4 — Drupal 10.4 prod/stage/dev\n\n### Drupal 10.3\n#\n# DX3 — Drupal 10.3 prod/stage/dev\n# DXP — DXPR Marketing\n# EZC — EzContent\n\n### Drupal 10.2\n#\n# DX2 — Drupal 10.2 prod/stage/dev\n# OFD — OpenFed\n# SOC — Social\n\n### Drupal 10.1\n#\n# DX1 — Drupal 10.1 prod/stage/dev\n# CK2 — Commerce v.2\n\n### Drupal 10.0\n#\n# DX0 — Drupal 10.0 prod/stage/dev\n\n### Drupal 9\n#\n# DL9 — Drupal 9 prod/stage/dev\n# OLS — OpenLucius\n# OPG — Opigno LMS\n\n### Drupal 7\n#\n# DL7 — Drupal 7 prod/stage/dev\n# CK1 — Commerce v.1\n# UC7 — Ubercart\n\n### Drupal 6\n#\n# DL6 — Pressflow (LTS) prod/stage/dev\n# UC6 — Ubercart\n\n### You can also use special keyword 'ALL' instead of any other symbols to have\n### all available platforms installed, including newly added in all future BOA\n### system releases.\n###\n### Examples:\n#\n# DX2 DX3 SOC UC7\n# (or)\n# ALL\n\n###\n### IMPORTANT: Supported Drupal core versions and distributions have different\n### PHP versions requirements, while not all PHP versions out of currently\n### supported ten versions are installed by default.\n###\n### Ensure that you have corresponding PHP versions installed with barracuda\n### before attempting to install older Drupal versions and distributions.\n###\n### On hosted BOA contact your host if you need any legacy PHP installed again.\n###\n_PLATFORMS_LIST=\n\n\n###----------------------------------------###\n### AUTOPILOT MODE                         ###\n###----------------------------------------###\n###\n### To disable all Yes/no prompts and just run\n### everything as-is, change this to YES.\n###\n### _AUTOPILOT=YES\n###\n_AUTOPILOT=NO\n\n\n###----------------------------------------###\n### UPGRADE MODE                           ###\n###----------------------------------------###\n###\n### To upgrade Hostmaster without installing\n### new platforms, change this to YES.\n###\n### Don't use this option for initial 
install.\n###\n### _HM_ONLY=YES\n###\n_HM_ONLY=NO\n\n\n###----------------------------------------###\n### PLATFORMS MODE                         ###\n###----------------------------------------###\n###\n### To install ONLY new Ægir platforms,\n### change this to YES.\n###\n### _PLATFORMS_ONLY=YES\n###\n_PLATFORMS_ONLY=NO\n\n\n###----------------------------------------###\n### DRUSH DEBUG MODE                       ###\n###----------------------------------------###\n###\n### When set to YES it will run this Satellite\n### Instance install/upgrade with -d option,\n### displaying complete Drush backend report.\n###\n### _DEBUG_MODE=YES\n###\n_DEBUG_MODE=NO\n\n\n###----------------------------------------###\n### AEGIR PACKAGES DOWNLOAD MODE           ###\n###----------------------------------------###\n###\n### _DL_MODE=BATCH (default, multi-download)\n### _DL_MODE=GIT   (src from GIT repositories)\n### _DL_MODE=OLD   (legacy, static mode)\n###\n_DL_MODE=BATCH\n\n\n###----------------------------------------###\n### FORCED IP MODE                         ###\n###----------------------------------------###\n###\n### To install or upgrade Ægir Satellite\n### Instance on any non-default IP address\n### available on your server/machine, please\n### define it below. For default, main IP\n### based install, leave this empty.\n###\n### _MY_OWNIP=123.45.67.89\n###\n_MY_OWNIP=\"\"\n\n\n###----------------------------------------###\n### FORCE PREFERRED GIT REPOSITORY         ###\n###----------------------------------------###\n###\n### Use this when you are experiencing issues\n### trying to connect to the default github\n### repository. Valid options:\n###\n### _FORCE_GIT_MIRROR=github\n### _FORCE_GIT_MIRROR=gitlab\n###\n### Note: with forced mirror the script will\n### not try to connect and then switch to\n### alternate mirror. 
It will simply fail\n### if the forced mirror doesn't respond.\n###\n### We recommend github - it is much faster.\n###\n_FORCE_GIT_MIRROR=\"\"\n\n\n###----------------------------------------###\n### DNS SETUP TEST                         ###\n###----------------------------------------###\n###\n### If you don't want to test your DNS\n### because of some custom local setup\n### you know is correct (like DynDNS)\n### but the script can't validate it with its\n### standard remote tests, set this to:\n###\n### _DNS_SETUP_TEST=NO\n###\n### There is no guarantee it will work.\n###\n_DNS_SETUP_TEST=YES\n\n\n###----------------------------------------###\n### DATABASE DEFAULT HOST                  ###\n###----------------------------------------###\n###\n### If you prefer, you can set the database\n### to be connected via FQDN pointing to your\n### public IP instead of default \"localhost\",\n### but it will make it harder to migrate\n### sites with DB grants tied to the system\n### unique hostname.\n###\n### Note: the \"FQDN\" is a keyword. 
It will be\n### automatically replaced with your system\n### real hostname when used.\n###\n### NOTE: This distinction is very important,\n### because if you will specify your system\n### local hostname literally, BOA will use\n### the \"Remote DB Server Mode\" instead, as\n### explained further below.\n###\n### For local or Amazon based installs with\n### local/dynamic IP address it is recommended\n### to use default \"localhost\" option.\n###\n### Supported options:\n###\n### _THIS_DB_HOST=localhost\n### _THIS_DB_HOST=FQDN\n###\n_THIS_DB_HOST=localhost\n\n\n###----------------------------------------###\n### REMOTE DB SERVER MODE                  ###\n###----------------------------------------###\n###\n### WARNING !!!\n###\n### THIS IS **HIGHLY EXPERIMENTAL** FEATURE.\n### EXPECT YOUR SYSTEM TO *EXPLODE* IF USED.\n###\n### Note: We may refer to the DB server\n###       also by using term 'DB head'\n###\n### If you will specify some remote DB server\n### name (not IP) or even local hostname, but\n### literally instead of via 'localhost' or\n### 'FQDN' keyword, it will turn on and use\n### the special REMOTE DB SERVER MODE.\n###\n### This mode will work only if the remote DB\n### server has been already configured with\n### the same default dummy mysql root password\n### as your Ægir system you are about to\n### install, and if the mysql port 3306 is\n### already open for incoming and outgoing\n### TCP connections on all servers expected\n### to communicate in this mode.\n###\n### HINT: you may want to add WEB head (Ægir)\n### IP as allowed on the DB head with standard\n### command: 'csf -a 12.34.56.789 my web head'\n###\n### Note that if the remote DB server defined\n### as hostname with a valid DNS entry will be\n### used, BOA will never change mysql root\n### password and also on initial install will\n### use this dummy password:\n###\n###   sCWL4tgEpyS5cLZITshxSTWRjhsUOeR6\n###\n### The same password will be used for the\n### Ægir special DB user 
'aegir_root'.\n###\n### Furthermore, BOA will not create separate\n### special DB users per Octopus instance\n### and all instances will use the same\n### 'aegir_root' DB user to manage databases\n### for hosted sites.\n###\n### You can later change this password for\n### both mysql 'root' and 'aegir_root' users\n### on the remote DB server, but you will\n### have to manually update them in every\n### Ægir instance (Master and Satellites)\n### db server node settings.\n###\n### However, BOA will always check and use\n### the password it will find in the special\n### file /root/.my.pass.txt during Master\n### and Satellite instances upgrade.\n###\n### If you will change the mysql root password\n### you **must** update it also in two files:\n###\n###   /root/.my.pass.txt\n###   /root/.my.cnf\n###\n### When installing BOA on a machine expected\n### to work as a remote DB server, you should\n### create a special file containing your\n### WEB head (Ægir) hostname:\n###\n###  /root/.remote.web.head.txt\n###\n### BOA will use this file to add required\n### DB GRANT for 'aegir_root' on the DB head\n### so the WEB head (Ægir) hosted on the\n### machine with hostname specified in this\n### file will be able to manage databases\n### remotely. 
Make sure that this file\n### is present only on the DB head, though.\n###\n### WARNING !!!\n###\n### Make sure that there is no remote, public,\n### unrestricted access via port 3306 to any\n### DB server used in this special mode!\n###\n### Available non-default option:\n###\n### _THIS_DB_HOST=remote.db.server.name\n###\n_THIS_DB_HOST=localhost\n\n\n###----------------------------------------###\n### DATABASE DEFAULT PORT                  ###\n###----------------------------------------###\n###\n### This variable is managed automatically,\n### to use ProxySQL default port, if needed.\n###\n### Supported values:\n###\n### _THIS_DB_PORT=3306\n### _THIS_DB_PORT=6033\n###\n_THIS_DB_PORT=3306\n\n\n###----------------------------------------###\n### DISTRO INITIAL VERSION NR              ###\n###----------------------------------------###\n###\n### By default every new Ægir Satellite\n### Instance will use shared code for its\n### platforms, created during previous Ægir\n### Satellite Instance install or upgrade,\n### resulting with new 00x number in the\n### /data/all directory.\n###\n### It is not always good, since you want\n### to keep the code shared between instances,\n### but you also don't want to create a new\n### instance with outdated code if your last\n### install/upgrade was performed a few months\n### ago.\n###\n### If you don't want to build a new Ægir\n### Satellite Instance with latest code, then\n### leave it at default value. 
Otherwise\n### change it to:\n###\n### _HOT_SAUCE=YES\n###\n_HOT_SAUCE=NO\n\n\n###----------------------------------------###\n### DISTRO USING EXISTING VERSION NR       ###\n###----------------------------------------###\n###\n### We changed the default to YES to avoid\n### creating many duplicated platforms on\n### every Ægir Satellite Instance upgrade,\n### when there is no new core, thus no reason\n### to create newer platforms for the same\n### distributions versions.\n###\n### It will also allow you to add some newer\n### platforms to the existing shared code,\n### which helps to keep your opcache memory\n### as low as possible without fragmentation.\n###\n### When set to _USE_CURRENT=NO it will force\n### creating new set (with increased serial\n### number) of *all* platforms on upgrade,\n### so it is useful *only* when there is\n### a newer Drupal core version released, or\n### when newer Pressflow head includes some\n### important fixes.\n###\n### Note: it will not work at all if you are\n### using _HOT_SAUCE=YES above, because\n### _HOT_SAUCE=YES forces new serial number\n### both on install and upgrade.\n###\n_USE_CURRENT=YES\n\n\n###----------------------------------------###\n### DELETE OLD EMPTY PLATFORMS             ###\n###----------------------------------------###\n###\n### Change to any integer greater than \"0\" to\n### automatically delete empty platforms with\n### no sites hosted, during daily cleanup,\n### if verified more than X days ago, where X\n### is a number of days defined below.\n### If \"0\" then this option is disabled.\n###\n_DEL_OLD_EMPTY_PLATFORMS=\"0\"\n\n\n###----------------------------------------###\n### DELETE OLD BACKUPS                     ###\n###----------------------------------------###\n###\n### Change to any integer greater than \"0\" to\n### automatically delete backups stored in the\n### /data/disk/U/backups/ directory and in all\n### hosted sites backup_migrate directories,\n### during daily cleanup, if created 
more\n### than X days ago, where X is a number\n### of days defined below. If \"0\" then\n### this option is disabled.\n###\n_DEL_OLD_BACKUPS=\"0\"\n\n\n###----------------------------------------###\n### DELETE OLD TMP FILES                   ###\n###----------------------------------------###\n###\n### Change to any integer greater than \"0\" to\n### automatically delete temporary files\n### in all hosted sites files/tmp/ and also\n### private/temp/ directories, during daily\n### cleanup, if created more than X days ago,\n### where X is a number of days defined below.\n### If \"0\" then this option is disabled.\n###\n_DEL_OLD_TMP=\"0\"\n\n\n###----------------------------------------###\n### LOCALHOST ON LOCAL NETWORK             ###\n###----------------------------------------###\n###\n### When enabled, it will force your Nginx web\n### server to listen only on local IP:\n###\n### _LOCAL_NETWORK_IP=10.10.80.80\n###\n### This option is useful when your server is\n### available only in your local network\n### without any public IP address assigned/\n### available on any eth interface. This also\n### means it should work as a handy local\n### setup on your wirelessly connected laptop\n### or netbook, with improved DNS cache\n### support - unbound will save results in file.\n###\n### In this case you could use 127.0.1.1 as\n### _LOCAL_NETWORK_IP to avoid issues with\n### DHCP changing your assigned IP on the fly\n### and to avoid conflicts with some other\n### services listening on 127.0.0.1\n###\n### This option works only with initial\n### install and is disabled on upgrade.\n###\n### You should use this option only when you\n### already used it with initial Barracuda\n### install. 
It will override any defined\n### above _MY_OWNIP value and disable DNS test\n### automatically.\n###\n_LOCAL_NETWORK_IP=\"\"\n\n\n###----------------------------------------###\n### STRONG PASSWORDS                       ###\n###----------------------------------------###\n###\n### Use YES only if both \"randpass 64 esc\"\n### and \"randpass 64 alnum\" commands produce\n### well looking, strong passwords and not\n### some binary garbage, which is typically\n### a sign of unreliable /dev/urandom.\n### See: https://drupal.org/node/1952042\n###\n### Configurable length: 32-128 or automatic\n### with keywords: YES (64), NO (32).\n###\n_STRONG_PASSWORDS=YES\n\n\n###----------------------------------------###\n### DB ENGINE AUTO-CONVERSION              ###\n###----------------------------------------###\n###\n### Automatic, running weekly DB conversion to\n### InnoDB or MyISAM for all sites hosted on\n### the instance via the 'sqlmagic' tool.\n###\n### If _SQL_CONVERT=NO is set, the conversion\n### mode can be individually enabled and more\n### precisely configured with variable:\n###\n###   sql_conversion_mode\n###\n### if set in the site and/or platform level,\n### active INI files:\n###\n###   boa_platform_control.ini\n###   boa_site_control.ini\n###\n### More info: https://omega8.cc/node/293\n###\n### Please note that if you will change it\n### to _SQL_CONVERT=YES, the system will\n### ignore sql_conversion_mode variables set\n### in the active INI files, and instead will\n### force conversion to InnoDB format in all\n### sites hosted on this instance.\n###\n### Note that it will run weekly on all sites\n### even if all tables have been already\n### converted to the desired format/engine.\n###\n### This behaviour has two purposes. 
It will\n### effectively run OPTIMIZE foo; on all,\n### even already converted tables via the\n### ALTER TABLE foo ENGINE=bar; command.\n### It will also make sure that any newly\n### created table will receive expected\n### format/engine, despite system or Drupal\n### core defaults.\n###\n### Accepted values:\n###\n###  innodb (forced globally)\n###  myisam (forced globally)\n###  YES    (same as innodb, forced globally)\n###  NO     (nothing forced globally)\n###\n_SQL_CONVERT=NO\n\n\n###----------------------------------------###\n### ADVANCED CONFIGURATION OPTIONS         ###\n###----------------------------------------###\n\n###\n### AUTO will default to value calculated\n### on the fly and based on available RAM.\n### You can force workers number for PHP-FPM.\n###\n_PHP_FPM_WORKERS=AUTO\n\n###\n### AUTO will not modify default TTL (180).\n### You can lower it to any number which is\n### < 180 and > 60.\n###\n_PHP_FPM_TIMEOUT=AUTO\n\n###\n### You can define custom list of functions\n### to disable besides those already denied\n### in the system level 'disable_functions'.\n###\n### Note: If this option is left empty, BOA\n### will deny access also to function:\n###\n###   passthru\n###\n### If _PHP_FPM_DENY is *not* empty, its value\n### will *replace* 'passthru', so any denied\n### function must be listed explicitly.\n###\n### Note that while it will improve security\n### it will also break modules which rely\n### on any of disabled functions.\n###\n### This option affects only this Satellite\n### Instance. 
It is not affected by the same\n### option set in the Barracuda Master.\n###\n### Example:\n###\n### _PHP_FPM_DENY=\"system,exec,shell_exec\"\n###\n_PHP_FPM_DENY=\"\"\n\n###\n### Define the amount of RAM you want to keep\n### reserved for other services installed which\n### are not controlled by BOA, so it will\n### assume that available RAM is the value of\n### ${_RAM} - ${_RESERVED_RAM} (in MB).\n###\n### Example for 256MB: _RESERVED_RAM=256\n###\n### If not specified, BOA will auto-reserve\n### 1/4 of RAM to make sure that the system\n### doesn't experience frequent OOM incidents\n### when running heavy tasks like backups.\n###\n_RESERVED_RAM=0\n\n###----------------------------------------###\n### DON'T EDIT ANYTHING BELOW THIS LINE    ###\n###----------------------------------------###\n\n\n_AEGIR_VERSION=AUTO\n_AEGIR_XTS_VRN=AUTO\n_BRANCH_BOA=AUTO\n_BRANCH_PRN=AUTO\n_X_VERSION=AUTO\n_BOA_REPO_NAME=\"boa\"\n_BOA_REPO_GIT_URL=\"${_gitHub}\"\n_aptAllow=\"--allow-unauthenticated\"\n_aptYesUnth=\"-y ${_aptAllow}\"\n\nexport _tRee=dev\nexport _xSrl=591devT01\n\n\n###\n### Determine correct _APT_UPDATE\n###\n_os_detection_minimal() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _os_detection_minimal in octopus\"\n  fi\n  _APT_UPDATE=\"apt-get update\"\n  _OS_CODE=$(lsb_release -ar 2>/dev/null | grep -i codename | cut -s -f2)\n  _OS_LIST=\"excalibur daedalus chimaera beowulf buster bullseye bookworm trixie\"\n  for e in ${_OS_LIST}; do\n    if [ \"${e}\" = \"${_OS_CODE}\" ]; then\n      _APT_UPDATE=\"apt-get update --allow-releaseinfo-change\"\n    fi\n  done\n}\n_os_detection_minimal\n\n\n###\n### Apt cleanup\n###\n_apt_clean_update() {\n  ${_APT_UPDATE} -qq 2>/dev/null\n  _CALLER_SCRIPT=\"$(basename \"${BASH_SOURCE[-1]}\")\"\n  _CALLER_SCRIPT=\"${_CALLER_SCRIPT//[^a-zA-Z0-9._-]/_}\"\n  date +%s > \"/run/_latest_apt_clean_update.${_CALLER_SCRIPT}.pid\"\n}\n\n\n###\n### Find the fastest mirror to use for downloads\n###\n_find_fast_mirror_cnf() {\n  
if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _find_fast_mirror_cnf in octopus\"\n  fi\n  _isNetc=\"$(which netcat)\"\n  if [ ! -x \"${_isNetc}\" ] || [ -z \"${_isNetc}\" ]; then\n    if [ ! -e \"/etc/apt/apt.conf.d/00sandboxoff\" ] \\\n      && [ -e \"/etc/apt/apt.conf.d\" ]; then\n      echo \"APT::Sandbox::User \\\"root\\\";\" > /etc/apt/apt.conf.d/00sandboxoff\n    fi\n    _apt_clean_update\n    apt-get install netcat-traditional ${_aptYesUnth} 2> /dev/null\n  fi\n  _ffMirr=/opt/local/bin/ffmirror\n  if [ -x \"${_ffMirr}\" ]; then\n    _ffList=\"/var/backups/boa-mirrors-2025-01.txt\"\n    [ -d \"/var/backups\" ] || mkdir -p /var/backups\n    if [ ! -e \"${_ffList}\" ]; then\n      echo \"eu.files.aegir.cc\"  > ${_ffList}\n      echo \"us.files.aegir.cc\" >> ${_ffList}\n      echo \"ao.files.aegir.cc\" >> ${_ffList}\n    fi\n    if [ -e \"${_ffList}\" ]; then\n      _BROKEN_FFMIRR_TEST=$(grep \"stuff\" ${_ffMirr} 2>&1)\n      if [[ \"${_BROKEN_FFMIRR_TEST}\" =~ \"stuff\" ]]; then\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        _CHECK_MIRROR=$(bash ${_ffMirr} < ${_ffList} 2>&1)\n        export _USE_MIR=\"${_CHECK_MIRROR}\"\n        [[ \"${_USE_MIR}\" =~ \"printf\" ]] && export _USE_MIR=\"files.aegir.cc\"\n      else\n        export _USE_MIR=\"files.aegir.cc\"\n      fi\n    else\n      export _USE_MIR=\"files.aegir.cc\"\n    fi\n  else\n    export _USE_MIR=\"files.aegir.cc\"\n  fi\n  export _urlDev=\"http://${_USE_MIR}/dev\"\n  export _urlHmr=\"http://${_USE_MIR}/versions/${_tRee}/boa/aegir\"\n}\n\n\n###\n### Function to verify BOA keys\n###\n_verify_boa_keys() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _verify_boa_keys in octopus\"\n  fi\n  if [ \"${_tRee}\" = \"pro\" ] || [ \"${_tRee}\" = \"dev\" ]; then\n    _allw=NO\n    _urlEnc=\"http://${_USE_MIR}/enc/2024\"\n    _hName=\"$(cat /etc/hostname 2>/dev/null | tr -d '\\n' || hostname -f 2>/dev/null)\"\n    _encName=$(echo ${_hName} \\\n      | openssl md5 
\\\n      | awk '{ print $2}' \\\n      | tr -d \"\\n\" 2>&1)\n    if [[ \"${_hName}\" =~ \".aegir.cc\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".o8.io\"($) ]] \\\n      || [[ \"${_hName}\" =~ \".boa.io\"($) ]]; then\n      _allw=YES\n    fi\n    mkdir -p /var/opt\n    rm -f /var/opt/_encN*\n    curl ${_crlGet} \"${_urlEnc}/${_encName}\" -o /var/opt/_encN.${_encName}.tmp\n    wait\n    echo \"${_hName}.${_encName}\" > /var/opt/_encN_local.${_encName}.tmp\n    wait\n    if [ -e \"/var/opt/_encN.${_encName}.tmp\" ] && [ -e \"/var/opt/_encN_local.${_encName}.tmp\" ]; then\n      _diffTestIf=$(diff -w -B /var/opt/_encN.${_encName}.tmp /var/opt/_encN_local.${_encName}.tmp 2>&1)\n      if [ ! -z \"${_diffTestIf}\" ] && [ \"${_allw}\" = \"NO\" ]; then\n        echo\n        echo \"Your system requires a valid license for upgrade to ${_rLsn}-${_tRee}\"\n        echo \"Please visit https://omega8.cc/licenses to purchase your own\"\n        echo\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! -e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n        rm -f /var/opt/_encN*\n        _clean_pid_exit _verify_boa_keys_a\n      else\n        if [ -e \"/var/aegir/.drush/hm.alias.drushrc.php\" ] \\\n          && [ ! 
-e \"/var/aegir/key/barracuda_key.txt\" ]; then\n          mkdir -p /var/aegir/key\n          cat /var/opt/_encN_local.${_encName}.tmp > /var/aegir/key/barracuda_key.txt\n        fi\n      fi\n    else\n      echo\n      echo \"Your system requires a valid license to use this BOA version (${_tRee})\"\n      echo \"Unfortunately, it was not possible to verify your system status\"\n      echo \"Please contact our support, but visit https://omega8.cc/licenses first\"\n      echo\n      exit 0\n    fi\n  fi\n}\n\n\n###\n### Extract archive\n###\n_extract_archive_pre() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _extract_archive_pre in octopus\"\n  fi\n  if [ ! -z \"$1\" ]; then\n    case $1 in\n      *.tar.bz2)   tar xjf $1    ;;\n      *.tar.gz)    tar xzf $1    ;;\n      *.tar.xz)    tar xJf $1    ;;\n      *.bz2)       bunzip2 $1    ;;\n      *.rar)       unrar x $1    ;;\n      *.gz)        gunzip -q $1  ;;\n      *.tar)       tar xf $1     ;;\n      *.tbz2)      tar xjf $1    ;;\n      *.tgz)       tar xzf $1    ;;\n      *.zip)       unzip -qq $1  ;;\n      *.Z)         uncompress $1 ;;\n      *.7z)        7z x $1       ;;\n      *)           echo \"'$1' cannot be extracted via >extract<\" ;;\n    esac\n    rm -f $1\n  fi\n}\n\n\n###\n### Download and extract archive from dev/src mirror\n###\n_get_dev_src_pre() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _get_dev_src_pre in octopus\"\n  fi\n  if [ ! -z \"$1\" ]; then\n    curl ${_crlGet} \"${_urlDev}/src/$1\" -o \"$1\"\n    if [ -e \"$1\" ]; then\n      _extract_archive_pre \"$1\"\n    else\n      echo \"OOPS: $1 failed download from ${_urlDev}/src/$1\"\n    fi\n  fi\n}\n\n\n#\n# Extract archive.\n_extract_archive() {\n  if [ ! 
-z \"$1\" ]; then\n    case $1 in\n      *.tar.bz2)   tar xjf $1    ;;\n      *.tar.gz)    tar xzf $1    ;;\n      *.tar.xz)    tar xJf $1    ;;\n      *.bz2)       bunzip2 $1    ;;\n      *.rar)       unrar x $1    ;;\n      *.gz)        gunzip -q $1  ;;\n      *.tar)       tar xf $1     ;;\n      *.tbz2)      tar xjf $1    ;;\n      *.tgz)       tar xzf $1    ;;\n      *.zip)       unzip -qq $1  ;;\n      *.Z)         uncompress $1 ;;\n      *.7z)        7z x $1       ;;\n      *)           echo \"'$1' cannot be extracted via >extract<\" ;;\n    esac\n    rm -f $1\n  fi\n}\n\n\n#\n# Download and extract from dev/version mirror.\n_get_dev_ext() {\n  if [ ! -z \"$1\" ]; then\n    _max_attempts=10\n    _attempt_num=1\n    _success=0\n    while [ ${_attempt_num} -le ${_max_attempts} ]; do\n      [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} of ${_max_attempts}: Downloading $1...\"\n      if curl ${_crlGet} \"${_urlDev}/${_tRee}/$1\" -o \"$1\"; then\n        _success=1\n        break\n      else\n        [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Attempt ${_attempt_num} failed.\"\n        _attempt_num=$((_attempt_num+1))\n        if [ \"${_attempt_num}\" -le \"${_max_attempts}\" ]; then\n          [ \"${_DEBUG_MODE}\" = \"YES\" ] && echo \"DNLD: Retrying in 9 seconds...\"\n          sleep 9\n        fi\n      fi\n    done\n    if [ \"${_success}\" -eq 1 ]; then\n      _extract_archive \"$1\"\n    else\n      echo \"OOPS: Failed to download ${_urlDev}/${_tRee}/$1 after ${_max_attempts} attempts\"\n      return 1  # Exit the function but continue the script\n    fi\n  fi\n}\n\n\n###\n### Cleanup for boa code\n###\n_cleanup_boa_code() {\n  mkdir -p /opt/tmp\n  cd /opt/tmp\n  [ -e \"/opt/tmp/boa.tar.gz\" ] && rm -f /opt/tmp/boa.tar.gz\n  if [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! 
-e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    rm -rf /opt/tmp/boa\n  fi\n}\n\n\n###\n### Download boa code\n###\n_download_boa_code() {\n  if [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    rm -rf /opt/tmp/boa\n    mkdir -p /opt/tmp\n    cd /opt/tmp\n    _get_dev_ext \"boa.tar.gz\"\n  fi\n  if [ -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    && [ -e \"/opt/tmp/boa/aegir/helpers/apt.conf.noi.dist\" ]; then\n    [ -e \"/opt/tmp/boa.tar.gz\" ] && rm -f /opt/tmp/boa.tar.gz\n  fi\n}\n\n\n###\n### Download helpers and libs\n###\n_download_helpers_libs() {\n  if [ -e \"/root/.dev.server.cnf\" ]; then\n    echo \"PROC: _download_helpers_libs in octopus\"\n  fi\n  _find_fast_mirror_cnf\n  _verify_boa_keys\n  if [ \"${_DL_MODE}\" = \"BATCH\" ]; then\n    _download_boa_code\n  elif [ \"${_DL_MODE}\" = \"GIT\" ]; then\n    _isGit=\"$(which git)\"\n    if [ -z \"${_isGit}\" ] || [ ! -x \"${_isGit}\" ]; then\n      _apt_clean_update\n      echo \"git install\"      | dpkg --set-selections &> /dev/null\n      echo \"git-core install\" | dpkg --set-selections &> /dev/null\n      echo \"git-man install\"  | dpkg --set-selections &> /dev/null\n      apt-get install git-core ${_aptYesUnth} -qq &> /dev/null\n      apt-get install git ${_aptYesUnth} -qq &> /dev/null\n      apt-get install git-man ${_aptYesUnth} -qq &> /dev/null\n      _isGit=\"$(which git)\"\n    fi\n    _gCb=\"${_isGit} clone --branch\"\n    if [ -x \"${_isGit}\" ]; then\n      _cleanup_boa_code\n      ${_gCb} ${_BRANCH_BOA} ${_BOA_REPO_GIT_URL}/${_BOA_REPO_NAME}.git ${_bldPth} &> /dev/null\n    else\n      echo \"ERROR: ${_isGit} is missing or broken, exit now\"\n      echo \"Bye\"\n      _clean_pid_exit _download_helpers_libs_a\n    fi\n  else\n    _download_boa_code\n  fi\n  #\n  if [ ! -e \"${_bldPth}/aegir/helpers/apt.conf.noi.nrml\" ] \\\n    || [ ! 
-e \"${_bldPth}/aegir/helpers/apt.conf.noi.dist\" ]; then\n    echo \" \"\n    echo \"EXIT on error due to missing helpers\"\n    echo \"Please try to run this script again in a few minutes\"\n    echo \"Also, make sure that the outgoing connections via port 443 work\"\n    echo \"Bye\"\n    echo \" \"\n    _clean_pid_exit _download_helpers_libs_b\n  fi\n  # Get apt helpers\n  rm -f apt.conf.noi*\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.nrml ./\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.dist ./\n  mkdir -p /opt/apt/\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.nrml /opt/apt/\n  cp -af ${_bldPth}/aegir/helpers/apt.conf.noi.dist /opt/apt/\n  #\n  # Create tmp stuff\n  export _LOG_INFO=\"\"\n  export _LOG_ERRR=\"\"\n  mkdir -p /var/opt\n  rm -rf /var/opt/*\n  _SRCDIR=\"/opt/tmp/files\"\n  mkdir -p ${_SRCDIR}\n  cd ${_SRCDIR}\n}\n"
  },
  {
    "path": "releases/BOA-5.6.0-PRO.md",
    "content": "# BOA-5.6.0 PRO has arrived!\n\n**BOA-5.6.0 PRO** has arrived just in time for the New Year—bringing with it a flurry of exciting enhancements, fresh tools, and a brand-new backup system exclusively for our Pro users! This release marks the fourth in our new branch structure and merges over **750** updates made since our previous version, **BOA-5.5.0**, delivering more power, flexibility, and reliability than ever before.\n\n<img width=\"769\" alt=\"screenshot 2024-12-30 at 12 47 50\" src=\"https://github.com/user-attachments/assets/080624c5-5dad-48dd-a214-8abcc26f6999\" />\n\n### An All-New Backup Experience\nUndoubtedly the star of this release is our totally revamped backup system. Designed specifically for Pro users, it offers powerful data protection features and allows you to fine-tune how your backups are created, stored, and retained. Plus, it’s ready to work with multiple remote storage options—no fuss, just peace of mind.\n\n### Next-Level Integrations\nWe’ve expanded our integration arsenal with **Backblaze B2**, **Wasabi Hot Cloud Storage**, and seven more storage backends, and introduced new ways to hook into **AWS S3** regions. You’ll also find smooth alignment of newer Drupal versions with popular tools like **New Relic**, letting you monitor and optimize your Drupal-based projects with ease.\n\n### Enhanced Performance\nSay hello to **PHP 8.4**—now fully supported by default—along with cutting-edge upgrades to keep your sites running faster and safer. We’ve improved reliability checks, streamlined behind-the-scenes processes (like automatic updates), and polished the system so it’s simpler and more robust for everyone.\n\n### Tons of Under-the-Hood Improvements\nFrom advanced performance tweaks to behind-the-scenes improvements in user management, security, and resource monitoring, we’ve poured energy into every nook and cranny. 
That means smoother installs, more efficient updates, and friendlier tools for your day-to-day tasks.\n\n### On the Horizon for 2025\n- **Drupal 11** support\n- **Backdrop CMS** and **Grav CMS** integrations\n- Direct **import to BOA** from Classic Ægir for legacy systems\n- Even more extensive **documentation**, including updates akin to our new backup docs\n\nFor the full scoop, check out the [changelog](https://github.com/omega8cc/boa/blob/5.x-pro/CHANGELOG.txt) and our [backup documentation](https://github.com/omega8cc/boa/blob/5.x-pro/docs/BACKUP_USER.md).\n\n### Thank You & Happy New Year!\nNone of this would be possible without our amazing community. To all of you who support us by purchasing a **BOA Pro license** or our hosted options, thank you from the bottom of our hearts. Ready to kick off 2025 with a bang? Dive into **BOA-5.6.0 PRO**—you won’t be disappointed!\n\n\n"
  },
  {
    "path": "releases/BOA-5.7.11-PRO.md",
    "content": "# Drupal 11 with Ægir 3: They Said It Couldn’t Be Done — We Did It Anyway\n\nThe future of Drupal hosting is here! With BOA-5.7.11 PRO, we proudly deliver what many thought impossible—full Drupal 11 support integrated with Ægir 3.\n\nThis groundbreaking feature not only pushes the boundaries of what BOA can achieve but also reaffirms our commitment to staying ahead of the curve for modern Drupal deployments.\n\nThis release marks a major milestone: for the first time, BOA users can seamlessly install, manage, and scale Drupal 11 sites with all the automation, performance, and reliability you’ve come to expect.\n\nPowered by Percona 8 and fine-tuned to leverage the latest innovations across the stack, this update sets a new standard for hosting next-generation Drupal applications while continuing to fully support legacy Drupal versions, ensuring smooth operations for every site in your ecosystem, old or new.\n\nWe are thrilled to introduce BOA-5.7.11 PRO, our 5th release under the new branch structure and dual licensing model. It merges seven months of intense development from the DEV branch, delivering over 340 commits packed with powerful features, critical fixes, and enhancements.\n\nFor the full scoop, check out the [changelog](https://github.com/omega8cc/boa/blob/5.x-pro/CHANGELOG.txt).\n\nThank You!\n\n"
  },
  {
    "path": "releases/BOA-5.8.5-PRO.md",
    "content": "# 1092 commits: Welcome Devuan Excalibur and PHP 8.5\n\n## 30 Years of Heritage -- Why We’re Different\n\n  We are unique within the hosting industry for many important reasons.\n\n  Our 15 years of Ægir-based hosting, plus earlier experience with Adgrafix\n  (the first company to offer a control panel for website management in 1995),\n  have helped shape what makes us different today.\n\n  We take Open Source seriously; it's not a buzzword for us. It's about freedom\n  from corporate control. Here's a short look back at our 15-year Ægir journey\n  and 19 years with Drupal.\n\n  Read the full story: https://bit.ly/different30y\n\n## The Future of Ægir 3 is Bryght!\n\n  Omega8.cc is now the lead developer team for Ægir 3 running on BOA\n  (Barracuda-Octopus-Ægir stack). We want to thank all past contributors\n  who brought Ægir to life – your work makes today’s progress possible.\n\n  Because of you, there is still a Bryght Future for Ægir. What to Expect?\n\n  Read the full story: https://bit.ly/aegirbryghtfuture\n\n## New BOA-5.8.5 PRO/LTS Release\n\n  The future of Drupal hosting is here! With BOA-5.8.5 PRO/LTS, we proudly\n  deliver both full Drupal 11 support integrated with Ægir and the latest\n  PHP 8.5, now available on the latest Devuan Excalibur / Debian Trixie system.\n\n  This groundbreaking feature not only pushes the boundaries of what BOA\n  can achieve but also reaffirms our commitment to staying ahead of the curve\n  for modern Drupal deployments.\n\n  Powered by Percona 8.4 and fine-tuned to leverage the latest innovations\n  across the stack, this update sets a new standard for hosting next-generation\n  Drupal applications while continuing to fully support legacy Drupal versions,\n  ensuring smooth operations for every site in your ecosystem, old or new.\n\n  We are thrilled to introduce BOA-5.8.5 PRO/LTS, our 7th release under the\n  new branch structure and dual licensing model. 
It merges four months of\n  intense development from the DEV branch, delivering 1092 commits packed\n  with powerful features, critical fixes, and enhancements.\n\n  Thank you to everyone who supports our work by purchasing a BOA PRO license:\n  https://omega8.cc/boapro.\n\n  As always, this announcement highlights only the most impactful changes.\n  For a full breakdown, explore the complete commit history.\n\n## Key Improvements Explained\n\n * Any expected downtime during barracuda system upgrades has been reduced\n   from 2-3 minutes to 10-14 seconds on average, thanks to our improvements\n   across the board in the BOA system logic.\n\n * BOA now consistently pauses the Ægir task queue if any system-backend tasks\n   are running -- this includes any barracuda/octopus upgrades, the heavy\n   daily.sh script and nightly DB backups, so no Ægir tasks should ever\n   collide with those important system tasks.\n\n * The auto-healing system has been rewritten from scratch and greatly\n   improved for precision, stability and protection from race conditions,\n   with an added smart cooldown pause to avoid unnecessary interventions.\n\n * Setting _SKYNET_MODE=OFF now strictly blocks any updates otherwise applied\n   via the autoupboa tool running every 6 minutes, and also blocks any attempt\n   to run barracuda or octopus upgrades, even if invoked manually.\n\n * Many vendor-specific issues affecting BOA installation on VPS platforms\n   have been addressed for both older and newer Devuan/Debian releases,\n   especially for the autoinit procedure recommended as the first step.\n\n * We no longer hardcode Devuan's own APT sources lottery alias deb.devuan.org;\n   instead, we test reputable mirrors and pick the fastest one for the\n   given server's location.\n\n * We limit the messaging noise generated by various parts of the new\n   auto-healing system by switching _INCIDENT_REPORT to NO by default,\n   so only really critical incidents, like service restarts caused by OOM\n  
 (out of memory) incidents, are still reported.\n\n * The legacy _XTRAS_LIST logic has been improved and the changes documented.\n   _XTRAS_LIST now defaults to EMPTY and is extended only minimally depending\n   on the mode, so, unlike before, almost no BOA xtras are installed by default.\n\n * The _CUSTOM_CONFIG_CSF option now protects only /etc/csf/csf.conf.\n   Previously it blocked CSF/LFD upgrades completely, while it should have\n   protected only the main config file. If the protected config file becomes\n   incompatible as a result, it’s the system admin's responsibility to update\n   it manually.\n\n * A new control file, /root/.dont.touch.permissions.cnf, allows blocking any\n   otherwise defined/run actions globally by taking precedence over any other\n   settings in .barracuda.cnf and site/platform-level INI files.\n\nFor the full scoop, check out the [changelog](https://github.com/omega8cc/boa/blob/5.x-dev/CHANGELOG.txt).\n\nThank You!\n\n"
  },
  {
    "path": "releases/BOA-5.9.1-PRO.md",
    "content": "# Welcome to the Fast Lane of HTTPS: HTTP/3 and KTLS\n\n## New BOA-5.9.1 PRO/LTS Release\n\n  Yes, we said that BOA-LTS would enter complete code freeze for 2026, but\n  we think that the major new features and many security updates introduced in\n  the last two months must be shared with the entire community before we enter\n  a less rapid feature development cycle for the next few months.\n\n  The future of 100% Open Source Drupal hosting is brighter than ever!\n\n  With BOA-5.9.1 PRO/LTS, we proudly deliver full HTTP/3 and KTLS support —\n  a fundamental change in the way modern browsers communicate with modern\n  HTTPS web servers — along with the latest OpenSSL 3.5 LTS, which made it\n  possible, a clever and very professional tool to diagnose your server hardware\n  performance in the context of BOA-specific requirements and capabilities,\n  and many critical security and bug fixes related to system components.\n\n  This groundbreaking feature not only pushes the boundaries of what BOA\n  can achieve but also reaffirms our commitment to staying ahead of the curve\n  for modern Drupal deployments.\n\n  We are thrilled to introduce BOA-5.9.1 PRO/LTS, our 8th release under the\n  new branch structure and dual licensing model. It merges 2 months of\n  intense development from the DEV branch, delivering 333 commits packed\n  with powerful features, critical fixes, and enhancements.\n\n  Thank you to everyone who supports our work by purchasing a BOA PRO license:\n  https://omega8.cc/boapro.\n\n  As always, this announcement highlights only the most impactful changes.\n  For a full breakdown, explore the complete commit history.\n\n## Key New Features Explained\n\n * HTTP/3 and KTLS support. If you run Drupal sites that should feel fast and\n   responsive (and stay that way during spikes), this is genuinely good news.\n   Why is this a big deal? What should visitors notice? 
[**Read the full story!**](https://github.com/omega8cc/boa/tree/5.x-dev/HTTP3.md)\n\n * Percona 8.4 comes to Excalibur. We no longer need vanilla MySQL 8.4 now that\n   Percona has released its own build for Debian Trixie, which can be used on\n   Devuan Excalibur. There is no MySQL-to-Percona upgrade option yet, though.\n   Please note that we still recommend Devuan Daedalus as the most versatile\n   system, which can also support Percona 8.0 and 5.7.\n\n * Curious if your VM is good enough to fully benefit from BOA optimisations\n   and deliver a first-class Drupal hosting environment? There’s a deep\n   hardware and network analysis tool available: simply type `perftest` as root.\n\n * From now on, all BOA installers will download their components as packaged\n   batches instead of dozens of separate little modules. They will also no longer\n   rely on fetching complete repositories from GitHub, instead downloading only\n   the latest packaged code from our mirrors. You can revert to the old method\n   by changing _DL_MODE=BATCH to _DL_MODE=GIT in /root/.barracuda.cnf\n\n## 4 NEW, 12 UPDATED, 32 TOTAL Drupal distros/platforms available\n\n  While most of you typically build your own codebases/platforms with Composer\n  these days, we still deliver a list of 32 platforms ready to use in your Ægir.\n\n  Since these platforms are updated only with BOA releases, they are not really\n  intended for production use per se, because you typically need a faster\n  lifecycle to keep your sites secure.\n\n  However, they provide a wide range of testing playgrounds, because you can\n  install only those you wish to test or use, and reinstall if needed, with\n  the help of our BOA-only feature that allows you to upgrade your Ægir\n  on demand with two simple control files, as described in the built-in docs\n  you can always find in ~/static/control/README.txt.\n\n## Going Local with Infrastructure\n\n  We’ve expanded our network considerably to meet the growing expectations 
of\n  the **Data Sovereignty** movement. This isn’t just about adding more cities\n  to our hosting map — it’s also about going local with infrastructure\n  wherever we can.\n\n  We no longer rely solely on big-name vendors and hyperscalers. Instead, we’re\n  gradually migrating to local providers and data centers in every country where\n  we offer hosted BOA for Drupal.\n\n  For example, in Canada you can now choose not only **Toronto**, but also\n  **Montreal**, **Calgary**, and **Vancouver**. In Australia, it’s no longer\n  just **Sydney** — we also offer **Adelaide**, **Brisbane**, and **Perth**.\n\n  We’ve also added an excellent facility in **New Zealand**.\n\n  Of course, we continue to support our original **Singapore** location and\n  still offer **EU**, **UK**, and **US** options.\n\n## Usage disk/sql limits x2 + Aero and Archive plans\n\n  It's worth mentioning that our hosted BOA plans have received a huge upgrade:\n  several new locations have been added around the world, our vendors are now\n  local (instead of the previous US-only hyperscalers), and an entirely new\n  Archive Tier has been added for those looking to host collections of\n  low-traffic sites at low cost.\n\n  [**Take a look if you are interested**](https://omega8.cc/hosted)\n\nFor the full scoop, check out the [changelog](https://github.com/omega8cc/boa/blob/5.x-dev/CHANGELOG.txt).\n\nThank you!\n\n"
  }
]